Effective training of convolutional neural networks with low-bitwidth weights and activations

Bohan Zhuang, Mingkui Tan, Jing Liu, Lingqiao Liu, Ian Reid, Chunhua Shen

Research output: Contribution to journal › Article › Research › peer-review

Abstract

This paper tackles the problem of training a deep convolutional neural network with both low-bitwidth weights and activations. Optimizing a low-precision network is challenging because the quantizer is non-differentiable, which can result in substantial accuracy loss. To address this, we propose three practical approaches to improve network training: (i) progressive quantization; (ii) stochastic precision; and (iii) joint knowledge distillation. First, for progressive quantization, we propose two schemes to progressively find good local minima. In the first scheme, we optimize a network with quantized weights and only subsequently quantize its activations, in contrast to traditional methods that optimize both simultaneously. In the second scheme, we gradually decrease the bitwidth from high precision to low precision during training. Second, to alleviate the training burden imposed by these multiple training stages, we propose a one-stage stochastic precision strategy that randomly samples and quantizes sub-networks while keeping the remaining parts in full precision. Finally, we propose to jointly train a full-precision model alongside the low-precision one, so that the full-precision model provides hints that guide the low-precision training and significantly improve the performance of the low-precision network. Extensive experiments demonstrate the effectiveness of the proposed methods.
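The abstract describes the techniques only at a high level; the sketch below illustrates, in PyTorch, how a convolution layer supporting independent weight/activation quantization and per-layer stochastic precision might be structured. All names (`uniform_quantize`, `QuantConv2d`, `sample_stochastic_precision`), the DoReFa-style weight mapping, and the straight-through gradient estimator are illustrative assumptions based on the abstract, not the authors' released code.

```python
# Minimal sketch, assuming a uniform quantizer with a straight-through
# estimator (STE). Names and the weight-mapping scheme are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


def uniform_quantize(x: torch.Tensor, bits: int) -> torch.Tensor:
    """Uniformly quantize x on [0, 1] to 2**bits - 1 levels; the STE
    lets gradients pass through the non-differentiable rounding step."""
    levels = 2 ** bits - 1
    q = torch.round(x.clamp(0, 1) * levels) / levels
    # Forward pass uses q; backward pass behaves like the identity.
    return x + (q - x).detach()


class QuantConv2d(nn.Conv2d):
    """Conv layer with independently configurable weight/activation
    bitwidths, so weights can be quantized first and activations later
    (the two-stage schedule), and a layer can be sampled back to full
    precision (stochastic precision)."""

    def __init__(self, *args, w_bits=32, a_bits=32, **kwargs):
        super().__init__(*args, **kwargs)
        self.w_bits, self.a_bits = w_bits, a_bits
        self.full_precision = False  # resampled each training step

    def forward(self, x):
        if self.full_precision:
            return super().forward(x)
        w = self.weight
        if self.w_bits < 32:
            # DoReFa-style mapping of weights into [0, 1] and back to
            # [-1, 1] around the quantizer (one common choice).
            w = torch.tanh(w)
            w = w / (2 * w.abs().max()) + 0.5
            w = 2 * uniform_quantize(w, self.w_bits) - 1
        if self.a_bits < 32:
            x = uniform_quantize(x, self.a_bits)
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)


def sample_stochastic_precision(model: nn.Module, p_full: float = 0.5):
    """One-stage stochastic precision: at each step, randomly keep a
    subset of layers in full precision while the rest are quantized."""
    for m in model.modules():
        if isinstance(m, QuantConv2d):
            m.full_precision = torch.rand(1).item() < p_full
```

Under this sketch, the first progressive scheme corresponds to training with `a_bits=32` until convergence and then lowering `a_bits`; the second corresponds to decreasing `w_bits`/`a_bits` over epochs. The joint knowledge distillation component would add a distillation loss between the logits of a co-trained full-precision model and the low-precision one; its exact form is not specified in the abstract.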

Original language: English
Pages (from-to): 6140-6152
Number of pages: 13
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 44
Issue number: 10
DOIs
Publication status: Published - 1 Oct 2022

Keywords

  • image classification
  • knowledge distillation
  • Knowledge engineering
  • Neural networks
  • Numerical models
  • progressive quantization
  • Quantization (signal)
  • Quantized neural network
  • stochastic precision
  • Stochastic processes
  • Task analysis
  • Training