Approximate Fisher information matrix to characterise the training of deep neural networks

Zhibin Liao, Tom Drummond, Ian Reid, Gustavo Carneiro

Research output: Contribution to journal › Article › Research › peer-review

Abstract

In this paper, we introduce a novel methodology for characterising the performance of deep learning networks (ResNets and DenseNet) with respect to training convergence and generalisation as a function of mini-batch size and learning rate for image classification. This methodology is based on novel measurements derived from the eigenvalues of the approximate Fisher information matrix, which can be efficiently computed even for high-capacity deep models. Our proposed measurements can help practitioners to monitor and control the training process (by actively tuning the mini-batch size and learning rate) to allow for good training convergence and generalisation. Furthermore, the proposed measurements also allow us to show that it is possible to optimise the training process with a new dynamic sampling training approach that continuously and automatically changes the mini-batch size and learning rate during the training process. Finally, we show that the proposed dynamic sampling training approach achieves a faster training time and competitive classification accuracy compared to the current state of the art.
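
The eigenvalue computation the abstract refers to can be illustrated with a toy example. The sketch below is a hypothetical, PyTorch-based illustration, not the authors' implementation: it estimates an empirical Fisher information matrix from per-example gradients of a small classifier and reads off its eigenvalue spectrum. The model, the synthetic data, and the Gram-matrix shortcut are illustrative assumptions; the paper's actual measurements and their efficient computation for large ResNet/DenseNet models differ in detail.

```python
# Hypothetical sketch: empirical Fisher information spectrum from per-example
# gradients of a toy classifier (not the paper's implementation).
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 3))
loss_fn = nn.CrossEntropyLoss()

# Synthetic mini-batch standing in for real image features and labels.
x = torch.randn(32, 10)
y = torch.randint(0, 3, (32,))

grads = []
for xi, yi in zip(x, y):
    model.zero_grad()
    loss = loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0))
    loss.backward()
    grads.append(torch.cat([p.grad.flatten() for p in model.parameters()]))

G = torch.stack(grads)  # shape: (batch_size, num_params)

# The non-zero eigenvalues of the parameter-space matrix (G^T G) / N coincide
# with those of the much smaller batch-sized Gram matrix (G G^T) / N, so the
# spectrum can be obtained without forming a num_params x num_params matrix.
gram = G @ G.t() / G.shape[0]
eigvals = torch.linalg.eigvalsh(gram)  # ascending order

print("largest eigenvalue:", eigvals[-1].item())
print("sum of eigenvalues:", eigvals.sum().item())
```

The Gram-matrix step is what keeps such a spectrum cheap when the number of parameters far exceeds the mini-batch size; whether the paper uses exactly this construction is an assumption here, but it conveys why eigenvalue-based measurements remain tractable for high-capacity models.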

Original language: English
Number of pages: 12
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
DOI: 10.1109/TPAMI.2018.2876413
Publication status: Accepted/In press - 16 Oct 2018

Keywords

  • Computational modeling
  • Convergence
  • Linear programming
  • Machine learning
  • Neural networks
  • Testing
  • Training

Cite this

@article{69658fed254b477784a2c3b362fad2b1,
title = "Approximate fisher information matrix to characterise the training of deep neural networks",
abstract = "In this paper, we introduce a novel methodology for characterising the performance of deep learning networks (ResNets and DenseNet) with respect to training convergence and generalisation as a function of mini-batch size and learning rate for image classification. This methodology is based on novel measurements derived from the eigenvalues of the approximate Fisher information matrix, which can be efficiently computed even for high capacity deep models. Our proposed measurements can help practitioners to monitor and control the training process (by actively tuning the mini-batch size and learning rate) to allow for good training convergence and generalisation. Furthermore, the proposed measurements also allow us to show that it is possible to optimise the training process with a new dynamic sampling training approach that continuously and automatically change the mini-batch size and learning rate during the training process. Finally, we show that the proposed dynamic sampling training approach has a faster training time and a competitive classification accuracy compared to the current state of the art.",
keywords = "Computational modeling, Convergence, Linear programming, Machine learning, Neural networks, Testing, Training",
author = "Zhibin Liao and Tom Drummond and Ian Reid and Gustavo Carneiro",
year = "2018",
month = "10",
day = "16",
doi = "10.1109/TPAMI.2018.2876413",
language = "English",
journal = "IEEE Transactions on Pattern Analysis and Machine Intelligence",
issn = "0162-8828",
publisher = "IEEE, Institute of Electrical and Electronics Engineers",

}

Approximate Fisher information matrix to characterise the training of deep neural networks. / Liao, Zhibin; Drummond, Tom; Reid, Ian; Carneiro, Gustavo.

In: IEEE Transactions on Pattern Analysis and Machine Intelligence, 16.10.2018.

TY - JOUR

T1 - Approximate Fisher information matrix to characterise the training of deep neural networks

AU - Liao, Zhibin

AU - Drummond, Tom

AU - Reid, Ian

AU - Carneiro, Gustavo

PY - 2018/10/16

Y1 - 2018/10/16

N2 - In this paper, we introduce a novel methodology for characterising the performance of deep learning networks (ResNets and DenseNet) with respect to training convergence and generalisation as a function of mini-batch size and learning rate for image classification. This methodology is based on novel measurements derived from the eigenvalues of the approximate Fisher information matrix, which can be efficiently computed even for high-capacity deep models. Our proposed measurements can help practitioners to monitor and control the training process (by actively tuning the mini-batch size and learning rate) to allow for good training convergence and generalisation. Furthermore, the proposed measurements also allow us to show that it is possible to optimise the training process with a new dynamic sampling training approach that continuously and automatically changes the mini-batch size and learning rate during the training process. Finally, we show that the proposed dynamic sampling training approach achieves a faster training time and competitive classification accuracy compared to the current state of the art.

AB - In this paper, we introduce a novel methodology for characterising the performance of deep learning networks (ResNets and DenseNet) with respect to training convergence and generalisation as a function of mini-batch size and learning rate for image classification. This methodology is based on novel measurements derived from the eigenvalues of the approximate Fisher information matrix, which can be efficiently computed even for high-capacity deep models. Our proposed measurements can help practitioners to monitor and control the training process (by actively tuning the mini-batch size and learning rate) to allow for good training convergence and generalisation. Furthermore, the proposed measurements also allow us to show that it is possible to optimise the training process with a new dynamic sampling training approach that continuously and automatically changes the mini-batch size and learning rate during the training process. Finally, we show that the proposed dynamic sampling training approach achieves a faster training time and competitive classification accuracy compared to the current state of the art.

KW - Computational modeling

KW - Convergence

KW - Linear programming

KW - Machine learning

KW - Neural networks

KW - Testing

KW - Training

UR - http://www.scopus.com/inward/record.url?scp=85055042439&partnerID=8YFLogxK

U2 - 10.1109/TPAMI.2018.2876413

DO - 10.1109/TPAMI.2018.2876413

M3 - Article

JO - IEEE Transactions on Pattern Analysis and Machine Intelligence

JF - IEEE Transactions on Pattern Analysis and Machine Intelligence

SN - 0162-8828

ER -