Batch Normalized Deep Boltzmann Machines

Hung Vu, Tu Dinh Nguyen, Trung Le, Wei Luo, Dinh Phung

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

Abstract

Training Deep Boltzmann Machines (DBMs) is a challenging task in deep generative model studies. Careless training usually leads to divergence or a useless model. We discover that this phenomenon is due to the change of DBM layers' input signals during model parameter updates, similar to other deterministic deep networks such as Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs). The change in layers' input distributions not only complicates the learning process but also causes redundant neurons that simply imitate the behaviour of others. Although this phenomenon can be addressed using batch normalization in deep learning, integrating this technique into the probabilistic network of DBMs is a challenging problem, since it has to satisfy two conditions on the energy function and the conditional probabilities. In this paper, we introduce Batch Normalized Deep Boltzmann Machines (BNDBMs) that meet both aforementioned conditions and successfully combine batch normalization and DBMs into the same framework. However, unlike in CNNs, the probabilistic nature of DBMs means that training with batch normalization differs in several respects: i) the shift parameters β are fixed while the scale parameters γ are learned; ii) the first hidden layer is not normalized; and iii) multiple pairs of population means and variances are maintained per neuron rather than the single pair used in CNNs. We observe that our proposed BNDBMs can stabilize the input signals of network layers, facilitate the training process and improve model quality. More interestingly, BNDBMs can be trained successfully without pretraining, which is usually a mandatory step in most existing DBMs. Experimental results on the MNIST, Fashion-MNIST and Caltech 101 Silhouette datasets show that our BNDBMs outperform DBMs and centered DBMs in terms of feature representation and classification accuracy (3.98% and 5.84% average improvement with and without pretraining, respectively).
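To make the normalization described above concrete, the following is a minimal NumPy sketch (not the authors' code) of batch-normalising the pre-activation input signal of a Boltzmann-machine hidden layer, following the choices stated in the abstract: the shift parameter β is kept fixed (here zero, with the ordinary hidden bias retained) while the scale parameter γ is learned. Function and variable names (bn_hidden_activation, gamma, eps) are illustrative assumptions, and population statistics tracking is omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bn_hidden_activation(v, W, b, gamma, eps=1e-5):
    """Conditional activation P(h = 1 | v) with a batch-normalised input signal.

    v     : (batch, n_visible) mini-batch of visible states
    W     : (n_visible, n_hidden) weight matrix
    b     : (n_hidden,) hidden biases
    gamma : (n_hidden,) learned scale; the shift beta is fixed to zero
    """
    pre = v @ W                                  # raw input signal to the hidden layer
    mu = pre.mean(axis=0)                        # per-neuron mini-batch mean
    var = pre.var(axis=0)                        # per-neuron mini-batch variance
    pre_hat = (pre - mu) / np.sqrt(var + eps)    # normalised input signal
    return sigmoid(gamma * pre_hat + b)          # scale learned, shift fixed

# Toy usage
rng = np.random.default_rng(0)
v = rng.integers(0, 2, size=(8, 6)).astype(float)
W = rng.normal(scale=0.1, size=(6, 4))
print(bn_hidden_activation(v, W, np.zeros(4), np.ones(4)).shape)  # (8, 4)
```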
Original language: English
Title of host publication: Proceedings of Asian Conference on Machine Learning 2018
Editors: Jun Zhu, Ichiro Takeuchi
Place of Publication: USA
Publisher: Proceedings of Machine Learning Research (PMLR)
Pages: 359-374
Number of pages: 16
Publication status: Published - 2018
Event: Asian Conference on Machine Learning 2018 - Beijing, China
Duration: 14 Nov 2018 – 16 Nov 2018
Conference number: 10th
http://www.acml-conf.org/2018/
http://proceedings.mlr.press/v95/ (Proceedings)

Publication series

Name: Proceedings of Machine Learning Research
Publisher: Proceedings of Machine Learning Research (PMLR)
Volume: 95
ISSN (Print): 1938-7228

Conference

Conference: Asian Conference on Machine Learning 2018
Abbreviated title: ACML 2018
Country: China
City: Beijing
Period: 14/11/18 – 16/11/18
