Faster training of very deep networks via p-norm gates

Trang Pham, Truyen Tran, Dinh Phung, Svetha Venkatesh

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

10 Citations (Scopus)


A major contributing factor to the recent advances in deep neural networks is structural units that let sensory information and gradients propagate easily. Gating is one such structure, acting as a flow control. Gates are employed in many recent state-of-the-art recurrent models, such as LSTM and GRU, and feedforward models, such as Residual Nets and Highway Networks. Gating enables learning in very deep networks with hundreds of layers and helps achieve record-breaking results in vision (e.g., ImageNet with Residual Nets) and NLP (e.g., machine translation with GRU). However, there is limited work analysing the role of gating in the learning process. In this paper, we propose a flexible p-norm gating scheme that allows user-controllable flow and, as a consequence, improves the learning speed. This scheme subsumes other existing gating schemes, including those in GRU, Highway Networks and Residual Nets, as special cases. Experiments on large sequence and vector datasets demonstrate that the proposed gating scheme improves learning speed significantly without extra overhead.
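To make the idea concrete, here is a minimal NumPy sketch of one plausible reading of such a gate. The abstract does not give the formula, so the specific constraint used here (transform gate alpha and carry gate tied by alpha**p + carry**p = 1) and all names are assumptions, not the paper's definitive formulation; the intent is only to show how p = 1 recovers a standard convex Highway/GRU-style gate while large p pushes the carry path toward a Residual-style identity connection.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def p_norm_gate_layer(x, W_h, b_h, W_g, b_g, p=2.0):
    """One feedforward layer with a p-norm gate (illustrative sketch).

    Assumed constraint: alpha**p + carry**p = 1, i.e.
    carry = (1 - alpha**p)**(1/p).
    p = 1  -> standard convex gate: alpha * h + (1 - alpha) * x
    p -> inf -> carry -> 1, approximating a Residual connection h + x.
    """
    h = np.tanh(x @ W_h + b_h)        # candidate (transformed) activation
    alpha = sigmoid(x @ W_g + b_g)    # transform gate in (0, 1)
    carry = (1.0 - alpha ** p) ** (1.0 / p)
    return alpha * h + carry * x
```

With p = 1 the carry gate is exactly 1 - alpha, so the layer reduces to the familiar Highway form; raising p lets more of the input pass through unchanged without shutting off the transform path.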

Original language: English
Title of host publication: 2016 23rd International Conference on Pattern Recognition (ICPR 2016)
Editors: Larry Davis, Alberto Del Bimbo, Brian C. Lovell
Place of publication: Piscataway NJ USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Number of pages: 6
ISBN (Electronic): 9781509048472
ISBN (Print): 9781509048489
Publication status: Published - 2016
Externally published: Yes
Event: International Conference on Pattern Recognition 2016 - Cancun, Mexico
Duration: 4 Dec 2016 - 8 Dec 2016
Conference number: 23rd (Proceedings)


Conference: International Conference on Pattern Recognition 2016
Abbreviated title: ICPR 2016
