Research of stacked denoising sparse autoencoder

Lingheng Meng, Shifei Ding, Nan Zhang, Jian Zhang

Research output: Contribution to journal › Article › Research › peer-review

19 Citations (Scopus)

Abstract

Learning results depend on the representation of data, so how to represent data efficiently has been a research hot spot in machine learning and artificial intelligence. As deep learning research advances, how to train deep networks to represent high-dimensional data efficiently has also become a research frontier. In order to represent data more efficiently and to study how deep networks express data, we propose a novel stacked denoising sparse autoencoder in this paper. First, we construct a denoising sparse autoencoder by introducing both a corrupting operation and a sparsity constraint into the traditional autoencoder. Then, we build stacked denoising sparse autoencoders with multiple hidden layers by stacking denoising sparse autoencoders layer-wise. Experiments are designed to explore the influence of the corrupting operation and the sparsity constraint on different datasets, using networks of various depths and numbers of hidden units. The comparative experiments reveal that the test accuracy of the stacked denoising sparse autoencoder is much higher than that of other stacked models, regardless of the dataset used and the number of layers in the model. We also find that the deeper the network is, the fewer activated neurons each layer has. More importantly, we find that strengthening the sparsity constraint is, to some extent, equivalent to increasing the corruption level.
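The record includes no code, so the following is a minimal sketch of one denoising sparse autoencoder layer as the abstract describes it: masking-noise corruption of the input plus a KL-divergence sparsity penalty on the mean hidden activation (the usual formulation of such a constraint). The class name, hyperparameter names, and values below are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingSparseAutoencoder(nn.Module):
    """One layer: corrupt the input, encode, decode, and penalize
    reconstruction error plus a KL-divergence sparsity term.
    (Hypothetical sketch; hyperparameters are illustrative.)"""
    def __init__(self, n_visible, n_hidden, corruption=0.3,
                 sparsity_target=0.05, sparsity_weight=1.0):
        super().__init__()
        self.encoder = nn.Linear(n_visible, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_visible)
        self.corruption = corruption   # fraction of inputs zeroed (masking noise)
        self.rho = sparsity_target     # desired mean hidden activation
        self.beta = sparsity_weight    # weight of the sparsity penalty

    def forward(self, x):
        # Corrupting operation: randomly zero a fraction of input components.
        mask = (torch.rand_like(x) > self.corruption).float()
        h = torch.sigmoid(self.encoder(x * mask))
        x_rec = torch.sigmoid(self.decoder(h))
        return x_rec, h

    def loss(self, x):
        x_rec, h = self.forward(x)
        recon = F.mse_loss(x_rec, x)   # reconstruct the *clean* input
        rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)
        # Sparsity constraint: KL(rho || rho_hat) summed over hidden units.
        kl = (self.rho * torch.log(self.rho / rho_hat)
              + (1 - self.rho) * torch.log((1 - self.rho) / (1 - rho_hat))).sum()
        return recon + self.beta * kl

# Greedy layer-wise stacking (sizes and schedule are illustrative):
# train layer 1 on the data, then train layer 2 on layer 1's hidden codes.
data = torch.rand(256, 784)
layer1 = DenoisingSparseAutoencoder(784, 200)
opt = torch.optim.Adam(layer1.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    layer1.loss(data).backward()
    opt.step()
with torch.no_grad():
    codes = torch.sigmoid(layer1.encoder(data))  # input for the next layer

Note that the reconstruction target is the clean input even though the encoder sees the corrupted one; this is what makes the layer denoising rather than a plain sparse autoencoder.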

Original language: English
Pages (from-to): 2083-2100
Number of pages: 18
Journal: Neural Computing and Applications
Volume: 30
Issue number: 7
Publication status: Published - Oct 2018
Externally published: Yes

Keywords

  • Autoencoder
  • Deep learning
  • Feature extraction
  • Sparse coding
  • Stacked autoencoders
  • Unsupervised learning
