TY - JOUR
T1 - Performance of an optimal subset of Zernike features for pattern classification
AU - Raveendran, P.
AU - Omatu, Sigeru
PY - 1994/5
Y1 - 1994/5
N2 - This paper presents a technique for selecting an optimal number of features from the original set of features. Because of the large number of features considered, it is computationally more efficient to select a subset of features that can discriminate as well as the original set. The subset of features is determined using stepwise discriminant analysis. The results of using such a scheme to classify scaled, rotated, and translated binary images, as well as images perturbed with random noise, are reported. The features used in this study are Zernike moments, which are the mapping of the image onto a set of complex orthogonal polynomials. The performance of using a subset is examined through comparison to the original set. The classifiers used in this study are a neural network and a statistical nearest-neighbor classifier. The back-propagation learning algorithm is used to train the neural network. The classifiers are trained with some noiseless images and tested with the remaining data set. When an optimal subset of features is used, the classifiers perform almost as well as when trained with the original set of features.
AB - This paper presents a technique for selecting an optimal number of features from the original set of features. Because of the large number of features considered, it is computationally more efficient to select a subset of features that can discriminate as well as the original set. The subset of features is determined using stepwise discriminant analysis. The results of using such a scheme to classify scaled, rotated, and translated binary images, as well as images perturbed with random noise, are reported. The features used in this study are Zernike moments, which are the mapping of the image onto a set of complex orthogonal polynomials. The performance of using a subset is examined through comparison to the original set. The classifiers used in this study are a neural network and a statistical nearest-neighbor classifier. The back-propagation learning algorithm is used to train the neural network. The classifiers are trained with some noiseless images and tested with the remaining data set. When an optimal subset of features is used, the classifiers perform almost as well as when trained with the original set of features.
UR - http://www.scopus.com/inward/record.url?scp=43949147848&partnerID=8YFLogxK
U2 - 10.1016/1069-0115(94)90006-X
DO - 10.1016/1069-0115(94)90006-X
M3 - Article
AN - SCOPUS:43949147848
SN - 1069-0115
VL - 1
SP - 133
EP - 147
JO - Information Sciences - Applications
JF - Information Sciences - Applications
IS - 3
ER -