Zero-Shot Task Transfer

Arghya Pal, Vineeth N. Balasubramanian

Research output: Chapter in Book/Report/Conference proceeding › Chapter (Book) › Research › peer-review

Abstract

Cognitive science has shown how human subjects (human children) and other mammals adapt to an entirely novel task (depth measurement) by understanding its association with already learned tasks (self-motion, shoulder movements) without receiving explicit supervision. Motivated by such prior work, in this chapter, we present a meta-regression algorithm that regresses the model parameters of zero-shot tasks from the model parameters of known tasks and the correlation between known and zero-shot tasks. The proposed method is evaluated on the Taskonomy dataset [54] considering surface normal estimation, depth estimation, room layout estimation, and camera pose estimation as zero-shot tasks. Our proposed methodology outperforms state-of-the-art models (which use ground truth) on each of our zero-shot tasks, showing the promise of zero-shot task transfer. We also conduct extensive experiments to study the various design choices of our methodology and show how the proposed method can be used in transfer learning. To the best of our knowledge, this is the first such effort on zero-shot learning in the task space.
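To make the idea of the meta-regressor concrete, the following is a minimal sketch of the core mapping described in the abstract: a network that takes the (flattened) model parameters of known tasks together with their correlations to an unseen task and regresses the parameters of the zero-shot task. The class name, architecture, and dimensions are illustrative assumptions, not the authors' exact design from the chapter.

```python
import torch
import torch.nn as nn

class TaskTransferNet(nn.Module):
    """Illustrative meta-regressor (hypothetical architecture).

    Maps the flattened model parameters of known tasks, together with a
    task-correlation vector, to predicted model parameters for an unseen
    (zero-shot) task.
    """

    def __init__(self, param_dim: int, num_known_tasks: int, hidden_dim: int = 512):
        super().__init__()
        # Input: parameters of all known tasks concatenated with the
        # correlation of each known task to the zero-shot task.
        in_dim = num_known_tasks * param_dim + num_known_tasks
        self.regressor = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, param_dim),  # predicted zero-shot parameters
        )

    def forward(self, known_params: torch.Tensor, task_corr: torch.Tensor) -> torch.Tensor:
        # known_params: (batch, num_known_tasks, param_dim)
        # task_corr:    (batch, num_known_tasks) correlations with the zero-shot task
        x = torch.cat([known_params.flatten(1), task_corr], dim=1)
        return self.regressor(x)


# Toy usage: 4 known tasks, 1,000-dimensional flattened parameter vectors.
net = TaskTransferNet(param_dim=1000, num_known_tasks=4)
known = torch.randn(2, 4, 1000)        # parameters of the known-task models
corr = torch.rand(2, 4)                # known-to-zero-shot task correlations
zero_shot_params = net(known, corr)    # shape: (2, 1000)
print(zero_shot_params.shape)
```

In a setting like the one the abstract describes, such a regressor would be trained on pairs of known-task parameters (holding out one task at a time), and then applied to predict parameters for a task for which no ground-truth labels are available.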

Original language: English
Title of host publication: Domain Adaptation in Computer Vision with Deep Learning
Editors: Hemanth Venkateswara, Sethuraman Panchanathan
Place of Publication: Cham, Switzerland
Publisher: Springer
Chapter: 13
Pages: 235-256
Number of pages: 22
ISBN (Electronic): 9783030455293
ISBN (Print): 9783030455286
DOIs
Publication status: Published - 2020
Externally published: Yes

Keywords

  • Crowd sourcing
  • Dawid-Skene
  • Transfer learning
  • Zero-shot task learning