Traversing latent space using decision ferns

Yan Zuo, Gil Avraham, Tom Drummond

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

1 Citation (Scopus)

Abstract

The practice of transforming raw data into a feature space so that inference can be performed in that space has been popular for many years. Recently, rapid progress in deep neural networks has given both researchers and practitioners enhanced methods that increase the richness of feature representations, be it from images, text or speech. In this work, we show how a constructed latent space can be explored in a controlled manner and argue that this complements well-founded inference methods. A Variational Autoencoder is used to construct the latent space. We present a novel controller module that allows for smooth traversal in the latent space and construct an end-to-end trainable framework. We explore the applicability of our method for performing spatial transformations as well as kinematics for predicting future latent vectors of a video sequence.
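The abstract outlines an end-to-end trainable pipeline in which a Variational Autoencoder builds the latent space and a controller module (built from decision ferns in the paper) moves latent vectors to realise spatial transformations or kinematic predictions. Below is a minimal sketch of that pattern, not the authors' implementation: a small PyTorch VAE plus an MLP controller standing in for the decision-fern module; all dimensions, layer sizes, and names are illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's code): a VAE provides the
# latent space and a controller predicts the next latent vector from the current
# one plus a transformation/kinematics code; the decoder renders the result.
import torch
import torch.nn as nn

LATENT_DIM = 32     # assumed latent size
ACTION_DIM = 4      # assumed size of the transformation/kinematics code
IMG_DIM = 28 * 28   # assumed flattened image size

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.ReLU())
        self.mu = nn.Linear(256, LATENT_DIM)
        self.logvar = nn.Linear(256, LATENT_DIM)
        self.dec = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Sigmoid())

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

class LatentController(nn.Module):
    """Maps (z, action) to the next latent vector; an MLP stands in here for
    the decision-fern controller described in the paper."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + ACTION_DIM, 128), nn.ReLU(),
            nn.Linear(128, LATENT_DIM))

    def forward(self, z, action):
        # Predict a smooth step (offset) in latent space rather than a new point.
        return z + self.net(torch.cat([z, action], dim=-1))

# Usage: encode a frame, step the latent with the controller, decode the
# predicted transformed / future frame.
vae, ctrl = VAE(), LatentController()
x = torch.rand(8, IMG_DIM)            # batch of flattened frames (toy data)
action = torch.rand(8, ACTION_DIM)    # e.g. a spatial-transform parameter
mu, logvar = vae.encode(x)
z_next = ctrl(vae.reparameterize(mu, logvar), action)
x_next_pred = vae.dec(z_next)
```

Because the controller operates directly on latent vectors, gradients from the decoded prediction flow back through both the controller and the encoder, which is what makes a framework of this shape trainable end-to-end, as the abstract states.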

Original language: English
Title of host publication: Computer Vision – ACCV 2018
Subtitle of host publication: 14th Asian Conference on Computer Vision, Perth, Australia, December 2–6, 2018, Revised Selected Papers, Part I
Editors: C.V. Jawahar, Hongdong Li, Greg Mori, Konrad Schindler
Place of Publication: Cham, Switzerland
Publisher: Springer
Pages: 593-608
Number of pages: 16
ISBN (Electronic): 9783030208875
ISBN (Print): 9783030208868
DOIs
Publication status: Published - 2019
Event: Asian Conference on Computer Vision 2018 - Perth, Australia
Duration: 2 Dec 2018 – 6 Dec 2018
Conference number: 14th
http://accv2018.net/
https://link.springer.com/book/10.1007/978-3-030-20887-5 (Proceedings)

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 11361
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: Asian Conference on Computer Vision 2018
Abbreviated title: ACCV 2018
Country: Australia
City: Perth
Period: 2/12/18 – 6/12/18
Internet address