
HVQ-VAE: Variational auto-encoder with hyperbolic vector quantization

Shangyu Chen, Pengfei Fang, Mehrtash Harandi, Trung Le, Jianfei Cai, Dinh Phung

Research output: Contribution to journal › Article › Research › peer-review

Abstract

The vector quantized variational autoencoder (VQ-VAE) and its variants have made significant progress in creating discrete latent spaces by learning a codebook. Previous works on VQ-VAE have focused on discrete latent spaces in Euclidean or spherical spaces. This paper studies the geometric prior of hyperbolic spaces as a way to improve the learning capacity of VQ-VAE. Working with the VQ-VAE in hyperbolic space, however, is not without difficulties, and the benefits of using hyperbolic space as the geometric prior for the latent space have never been studied in VQ-VAE. We bridge this gap by developing a VQ-VAE with hyperbolic vector quantization. To this end, we propose the hyperbolic VQ-VAE (HVQ-VAE), which learns the latent embedding of data and the codebook in hyperbolic space. Specifically, we embed the discrete latent space in the Poincaré ball, such that the clustering algorithm can be formulated and optimized in the Poincaré ball. Thorough experiments against various baselines are conducted to empirically evaluate the proposed HVQ-VAE. We show that HVQ-VAE achieves better image reconstruction, more effective codebook usage, and faster convergence than the baselines. We also present evidence that HVQ-VAE outperforms VQ-VAE in low-dimensional latent spaces.
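To illustrate the core operation the abstract describes, the sketch below quantizes latent vectors to their nearest codewords under the geodesic distance of the Poincaré ball, rather than the Euclidean distance of standard VQ-VAE. This is a minimal NumPy illustration of the general idea; the function names and the simple nearest-neighbor loop are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-7):
    """Geodesic distance on the Poincaré ball (points with norm < 1):
    d(u, v) = arccosh(1 + 2 ||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))."""
    sq_dist = np.sum((u - v) ** 2, axis=-1)
    denom_u = 1.0 - np.sum(u ** 2, axis=-1)
    denom_v = 1.0 - np.sum(v ** 2, axis=-1)
    arg = 1.0 + 2.0 * sq_dist / np.maximum(denom_u * denom_v, eps)
    # Clamp to the valid arccosh domain [1, inf) to guard against rounding.
    return np.arccosh(np.maximum(arg, 1.0))

def hyperbolic_quantize(z, codebook):
    """Assign each latent vector in z (N, D) to its nearest codeword
    in codebook (K, D) under the Poincaré distance; returns the
    selected codewords and their indices (illustrative, not the paper's code)."""
    d = poincare_distance(z[:, None, :], codebook[None, :, :])  # (N, K)
    idx = np.argmin(d, axis=1)
    return codebook[idx], idx

# Toy usage: two codewords inside the unit ball, two latent vectors.
codebook = np.array([[0.1, 0.0], [-0.5, 0.0]])
latents = np.array([[0.12, 0.0], [-0.45, 0.1]])
quantized, indices = hyperbolic_quantize(latents, codebook)
```

Because all points must lie strictly inside the unit ball, practical implementations also project encoder outputs back into the ball after each update; that step is omitted here for brevity.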

Original language: English
Article number: 104392
Number of pages: 10
Journal: Computer Vision and Image Understanding
Volume: 258
Publication status: Published - Jul 2025

Keywords

  • Hyperbolic neural networks
  • Unsupervised learning
  • VQ-VAE
