Multiple task transfer learning with small sample sizes

Budhaditya Saha, Sunil Gupta, Dinh Phung, Svetha Venkatesh

Research output: Contribution to journal › Article › Research › peer-review

22 Citations (Scopus)


Prognosis, such as predicting mortality, is common in medicine. When confronted with small numbers of samples, as in rare medical conditions, the task is challenging. We propose a framework for classification when only a small number of samples is available. Conceptually, our solution is a hybrid of multi-task and transfer learning: it employs data samples from source tasks as in transfer learning, but considers all tasks together as in multi-task learning. Each task is modelled jointly with other related tasks by directly augmenting its data with data from those tasks. The degree of augmentation depends on the task relatedness, which is estimated directly from the data. We apply the model to three diverse real-world data sets (healthcare data, handwritten digit data and face data) and show that our method outperforms several state-of-the-art multi-task learning baselines. We extend the model to online multi-task learning, where the model parameters are incrementally updated as new data or new tasks arrive. The novelty of our method lies in offering a hybrid multi-task/transfer learning model that exploits sharing across tasks at the data level together with joint parameter learning.
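The paper's actual estimator is not reproduced on this page, but the core idea described in the abstract (estimate task relatedness from the data, then augment the target task with source-task samples weighted by that relatedness) can be sketched roughly as follows. This is a minimal illustration, not the authors' method: the use of ridge regression, cosine similarity as a relatedness score, and sample-weighting by row scaling are all simplifying assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ridge(X, y, lam=1.0):
    # Closed-form ridge solution: w = (X^T X + lam*I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy setup: the target task has few samples; source tasks have many.
w_true = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
X_t = rng.normal(size=(10, 5))
y_t = np.sign(X_t @ w_true + 0.1 * rng.normal(size=10))

sources = []
for shift in (0.0, 2.0):  # first source closely related, second less so
    Xs = rng.normal(size=(100, 5))
    ws = w_true + shift * rng.normal(size=5)
    ys = np.sign(Xs @ ws + 0.1 * rng.normal(size=100))
    sources.append((Xs, ys))

# Relatedness proxy (an assumption of this sketch): cosine similarity
# between per-task ridge solutions, clamped to be non-negative.
w_t = fit_ridge(X_t, y_t)
rel = [max(cosine(w_t, fit_ridge(Xs, ys)), 0.0) for Xs, ys in sources]

# Augment the target task's data with source samples, each source
# weighted by its estimated relatedness to the target.
Xa = np.vstack([X_t] + [Xs for Xs, _ in sources])
ya = np.concatenate([y_t] + [ys for _, ys in sources])
sw = np.concatenate([np.ones(len(y_t))]
                    + [r * np.ones(len(ys)) for r, (_, ys) in zip(rel, sources)])

# Weighted ridge fit: scale each row (and label) by sqrt(sample weight).
Xw = Xa * np.sqrt(sw)[:, None]
yw = ya * np.sqrt(sw)
w_aug = fit_ridge(Xw, yw)
```

A fully unrelated source ends up with weight near zero, so the fit degrades gracefully toward target-only learning, which is the behaviour the abstract's data-level sharing aims for.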

Original language: English
Pages (from-to): 315-342
Number of pages: 28
Journal: Knowledge and Information Systems
Issue number: 2
Publication status: Published - Feb 2016
Externally published: Yes


  • Data mining
  • Healthcare
  • Multi-task
  • Optimization
  • Statistical analysis
  • Transfer learning
