Abstract
In this paper, we propose a classifier for data vectors with mutually independent but not identically distributed elements. We prove that the error probability of the proposed classifier goes to zero as the length of the data vectors goes to infinity, even when only one training data vector per label is available. Finally, we present numerical examples showing that the proposed classifier outperforms conventional classification algorithms when the number of training data vectors is small.
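The paper's classifier itself is not described in this record, but the problem setting can be illustrated with a minimal sketch: each label is associated with a distinct mean vector whose entries vary per coordinate (independent but not identically distributed), only one noisy training vector per label is observed, and a simple nearest-neighbor rule is used as a stand-in classifier. All distributions, the Gaussian noise model, and the nearest-neighbor rule are assumptions for illustration only, not the authors' method.

```python
import numpy as np

def nearest_neighbor_error(n, num_labels=2, trials=500, seed=0):
    """Estimate the error rate of a one-shot nearest-neighbor classifier
    on length-n vectors with independent, non-identically distributed entries.
    (Illustrative stand-in, not the classifier proposed in the paper.)"""
    rng = np.random.default_rng(seed)
    errors = 0
    for _ in range(trials):
        # Each label has its own mean vector; coordinate distributions differ.
        means = rng.uniform(-1.0, 1.0, size=(num_labels, n))
        # Exactly one noisy training vector per label.
        train = means + rng.normal(scale=1.0, size=(num_labels, n))
        # Draw a test vector from a random true label.
        y = rng.integers(num_labels)
        test = means[y] + rng.normal(scale=1.0, size=n)
        # Classify by nearest training vector in Euclidean distance.
        pred = int(np.argmin(np.linalg.norm(train - test, axis=1)))
        errors += (pred != y)
    return errors / trials
```

Even this naive rule shows the qualitative behavior claimed in the abstract: because the coordinates are independent, distances concentrate, and the empirical error shrinks as the vector length n grows despite there being a single training vector per label.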
Original language | English |
---|---|
Title of host publication | 2021 IEEE International Symposium on Information Theory, Proceedings |
Editors | Bikash Dey |
Place of Publication | Piscataway NJ USA |
Publisher | IEEE, Institute of Electrical and Electronics Engineers |
Pages | 2637-2642 |
Number of pages | 6 |
ISBN (Electronic) | 9781538682098 |
ISBN (Print) | 9781538682104 |
DOIs | |
Publication status | Published - 2021 |
Event | IEEE International Symposium on Information Theory 2021 - Online, Melbourne, Australia. Duration: 12 Jul 2021 → 20 Jul 2021. https://ieeexplore.ieee.org/xpl/conhome/9517708/proceeding (Proceedings); https://2021.ieee-isit.org/ |
Publication series
Name | IEEE International Symposium on Information Theory - Proceedings |
---|---|
Publisher | IEEE, Institute of Electrical and Electronics Engineers |
Volume | 2021-July |
ISSN (Print) | 2157-8095 |
Conference
Conference | IEEE International Symposium on Information Theory 2021 |
---|---|
Abbreviated title | ISIT 2021 |
Country/Territory | Australia |
City | Melbourne |
Period | 12/07/21 → 20/07/21 |
Internet address |
Keywords
- Classification
- Error probability
- Independent but not identically distributed