Medical image classification: a comparison of deep pre-trained neural networks

David Olayemi Alebiosu, Fermi Pasha Muhammad

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research

9 Citations (Scopus)

Abstract

Medical image classification is an important step in the effective and accurate retrieval of medical images from the large digital databases in which they are stored. This paper examines the effectiveness of domain-transferred deep convolutional neural networks (DCNNs) for the classification of medical X-ray images. We employed two convolutional neural network (CNN) architectures, VGGNet-16 and AlexNet, pre-trained on ImageNet, a non-medical database of over 1.2 million natural images. The pre-trained networks served both as feature extractors and as fine-tuned networks. The extracted feature vectors were used to train a linear support vector machine (SVM) to generate a model for the classification task. Fine-tuning was performed by replacing and retraining the last fully connected layers through backward propagation. Our method was evaluated on the ImageCLEF2007 medical database, which consists of 11,000 medical X-ray training images and 1,000 test images classified into 116 categories. We compared the performance of the two networks, both as feature generators and as fine-tuned networks, on this dataset. Across all 116 image classes, VGGNet-16 + SVM achieved an overall classification accuracy of 79.6%, while the fine-tuned VGGNet-16 achieved 85.77%. AlexNet + SVM achieved 84.27%, and the fine-tuned AlexNet achieved 86.47%, the highest of the four techniques across all 116 image classes. This study shows that a shallower pre-trained neural network such as AlexNet learns features that are more generalizable than those of a deeper network such as VGGNet-16, and has a greater capability of increasing the classification accuracy of a medical image database. Although the pre-trained AlexNet outperformed VGGNet-16 in both configurations, some image classes from the same sub-body region remain difficult to classify accurately, a result of the inter-class similarity that exists among the images.
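
The abstract describes two transfer-learning configurations: a pre-trained CNN used as a fixed feature extractor feeding a linear SVM, and the same network fine-tuned by replacing and retraining its last fully connected layer. The sketch below is not the authors' code; it is a minimal illustration of both configurations using PyTorch/torchvision and scikit-learn, with AlexNet as the example backbone. The dataset directory layout, batch size, and optimiser settings are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the two configurations compared in the paper (assumptions noted inline).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms, datasets
from sklearn.svm import LinearSVC

NUM_CLASSES = 116  # ImageCLEF2007 medical X-ray categories

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),      # X-rays are single-channel
    transforms.Resize((224, 224)),                     # input size expected by AlexNet/VGGNet-16
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical ImageFolder layout: one sub-directory per class.
train_set = datasets.ImageFolder("imageclef2007/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=False)

# --- Configuration 1: pre-trained CNN as a fixed feature extractor + linear SVM ---
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.eval()
# Keep everything up to (but not including) the final 1000-way FC layer,
# so each image is mapped to a 4096-dimensional feature vector.
feature_net = nn.Sequential(alexnet.features, alexnet.avgpool, nn.Flatten(),
                            *list(alexnet.classifier.children())[:-1])

features, labels = [], []
with torch.no_grad():
    for images, targets in train_loader:
        features.append(feature_net(images))
        labels.append(targets)
features = torch.cat(features).numpy()
labels = torch.cat(labels).numpy()

svm = LinearSVC(C=1.0)   # linear SVM trained on the extracted feature vectors
svm.fit(features, labels)

# --- Configuration 2: fine-tuning by replacing the last fully connected layer ---
finetune_net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
finetune_net.classifier[6] = nn.Linear(4096, NUM_CLASSES)  # new 116-way output layer

optimizer = torch.optim.SGD(finetune_net.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
finetune_net.train()
for images, targets in train_loader:   # a single pass shown for brevity
    optimizer.zero_grad()
    loss = criterion(finetune_net(images), targets)
    loss.backward()                     # backward propagation through the network
    optimizer.step()
```

The same pattern applies to VGGNet-16 (`models.vgg16(...)`, whose final layer is also `classifier[6]`); the feature-extractor route keeps all pre-trained weights frozen, whereas the fine-tuning route updates the network end to end through back-propagation.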

Original language: English
Title of host publication: 2019 17th IEEE Student Conference on Research and Development (SCOReD)
Editors: Syed Saad Azhar Ali, Kishore Bingi, Lakshmi Manasa Vedantham
Place of publication: Piscataway NJ USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 306-310
Number of pages: 5
ISBN (Electronic): 9781728126135, 9781728126128
ISBN (Print): 9781728126142
Publication status: Published - 2019
Event: IEEE Student Conference on Research and Development (SCOReD) 2019 - Seri Iskandar, Perak, Malaysia
Duration: 15 Oct 2019 - 17 Oct 2019
Conference number: 17th
https://ieeexplore.ieee.org/xpl/conhome/8890748/proceeding (Proceedings)
https://ieeemy.org/scored/ (Website)

Conference

Conference: IEEE Student Conference on Research and Development (SCOReD) 2019
Abbreviated title: SCOReD 2019
Country/Territory: Malaysia
City: Seri Iskandar, Perak
Period: 15/10/19 - 17/10/19

Keywords

  • AlexNet
  • Classification
  • Feature Extraction
  • Fine-tuning
  • SVM
  • VGGNet-16
