A Fast Indoor Positioning Using a Knowledge-Distilled Convolutional Neural Network (KD-CNN)

Aqilah Binti Mazlan, Yin Hoe Ng, Chee Keong Tan

Research output: Contribution to journal › Article › Research › peer-review

2 Citations (Scopus)


Fingerprint-based indoor positioning systems (F-IPS) can provide inexpensive positioning in GPS-denied environments. Most F-IPSs adopt traditional machine learning for position prediction, resulting in low accuracy. Deep neural networks (DNNs) have recently been employed in F-IPSs to reduce positioning errors. Nevertheless, a DNN-IPS fails to guarantee high accuracy in dynamic environments, as it is sensitive to changes in the input data. A convolutional neural network (CNN) is recommended to replace the DNN owing to its ability to learn the overall topology of fingerprint images and to capture highly abstract features. However, because of the convolution operations and image representation, a CNN-IPS incurs prohibitive storage and computational requirements for implementation on resource-limited devices. This paper incorporates knowledge distillation (KD) into CNN-IPS to distil knowledge from a large, deep CNN into a small CNN. The pre-trained teacher network provides soft probability outputs: the score vector from the trained network is converted into a probability distribution, which is softened by a temperature hyperparameter, leading to a more compact model. Numerical results show that KD-CNN-IPS delivers better localization performance, with 79.84% of positioning errors within 2 m, while its testing time is only 79.68% of that of the teacher model. Compared with a CNN-IPS of precisely the same architecture and size, KD-CNN-IPS achieves a 13.65% improvement in average positioning error.
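The temperature-softened soft targets described in the abstract can be sketched as a standard distillation loss. The snippet below is a minimal illustration in NumPy, not the authors' implementation: the function names, the temperature `T = 4.0`, and the blending weight `alpha` are assumptions chosen for demonstration.

```python
import numpy as np

def softened_softmax(logits, T=1.0):
    # Convert a score (logit) vector into a probability distribution.
    # A temperature T > 1 flattens the distribution, exposing the
    # teacher's relative confidence across non-target classes.
    z = logits / T
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, hard_label,
                      T=4.0, alpha=0.5):
    # Blend of (a) cross-entropy between the teacher's and student's
    # temperature-softened distributions and (b) the usual hard-label
    # cross-entropy. The T**2 factor keeps the soft-loss gradient
    # magnitude comparable across temperatures.
    p_teacher = softened_softmax(teacher_logits, T)
    p_student = softened_softmax(student_logits, T)
    soft_loss = -np.sum(p_teacher * np.log(p_student + 1e-12)) * (T ** 2)
    hard_probs = softened_softmax(student_logits, T=1.0)
    hard_loss = -np.log(hard_probs[hard_label] + 1e-12)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```

In training, the small student CNN would minimize this combined loss against the frozen teacher's logits for each fingerprint image; at deployment only the compact student runs, which is what yields the reduced testing time reported above.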

Original language: English
Pages (from-to): 65326-65338
Number of pages: 13
Journal: IEEE Access
Publication status: Published - 2022


  • convolutional neural networks
  • fingerprint
  • indoor positioning
  • knowledge distillation
  • received signal strength indicators
