http://dx.doi.org/10.5370/KIEE.2015.64.1.113

Design of Fuzzy k-Nearest Neighbors Classifiers based on Feature Extraction by using Stacked Autoencoder  

Rho, Suck-Bum (WI-A Corporation R&D Center)
Oh, Sung-Kwun (Dept. of Electrical Engineering, University of Suwon)
Publication Information
The Transactions of The Korean Institute of Electrical Engineers, vol. 64, no. 1, pp. 113-120, 2015
Abstract
In this paper, we propose a feature extraction method using stacked autoencoders built from restricted Boltzmann machines. The stacked autoencoder is a kind of deep network. Restricted Boltzmann machines (RBMs) are probabilistic graphical models that can be interpreted as stochastic neural networks. In pattern classification problems, feature extraction is a key issue. We use the stacked autoencoder network to extract new features that improve classification performance. After feature extraction, the fuzzy k-nearest neighbors algorithm is used as the classifier for the transformed data set. To evaluate the classification ability of the proposed pattern classifier, we carry out experiments on several machine learning data sets.
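The abstract describes a two-stage pipeline: greedy layer-wise pretraining of RBMs to build the stacked autoencoder, then classification in the learned feature space. The paper's layer sizes and training settings are not reproduced here; the following is a minimal sketch of the feature-extraction stage only, assuming Bernoulli RBMs trained with one-step contrastive divergence (CD-1) and inputs scaled to [0, 1]. All names and hyperparameter values below are illustrative assumptions, not taken from the paper.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli restricted Boltzmann machine trained with CD-1."""
    def __init__(self, n_visible, n_hidden, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible bias
        self.c = np.zeros(n_hidden)    # hidden bias

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b)

    def fit(self, X, lr=0.1, epochs=10, batch_size=32):
        for _ in range(epochs):
            for i in range(0, len(X), batch_size):
                v0 = X[i:i + batch_size]
                ph0 = self.hidden_probs(v0)
                # One Gibbs step: sample hidden states, then reconstruct.
                h0 = (self.rng.random(ph0.shape) < ph0).astype(float)
                v1 = self.visible_probs(h0)
                ph1 = self.hidden_probs(v1)
                # CD-1 approximation of the log-likelihood gradient.
                n = len(v0)
                self.W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
                self.b += lr * (v0 - v1).mean(axis=0)
                self.c += lr * (ph0 - ph1).mean(axis=0)
        return self

def pretrain_stack(X, layer_sizes=(64, 32)):
    """Greedy layer-wise pretraining: each RBM is trained on the
    hidden activations of the one below it."""
    rbms, features = [], X
    for n_hidden in layer_sizes:
        rbm = RBM(features.shape[1], n_hidden).fit(features)
        rbms.append(rbm)
        features = rbm.hidden_probs(features)
    return rbms

def extract_features(rbms, X):
    """Propagate data through the trained stack; the top layer's
    activations serve as the new feature set."""
    for rbm in rbms:
        X = rbm.hidden_probs(X)
    return X

Training the stack once on the training set and reusing it for the test set keeps the extracted feature spaces of the two consistent.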
Keywords
Stacked Autoencoders; Boltzmann Machine; Restricted Boltzmann Machine; Deep Networks; Fuzzy C-Means Clustering; Fuzzy k-Nearest Neighbors
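The classifier applied to the extracted features is the standard fuzzy k-nearest neighbors rule (Keller, Gray, and Givens, 1985), which the keywords pair with fuzzy C-means clustering; presumably FCM supplies the fuzzy memberships of the training samples, though a one-hot matrix works as the crisp special case. A minimal sketch, with the fuzziness exponent m = 2 and neighborhood size k chosen arbitrarily rather than taken from the paper:

import numpy as np

def fuzzy_knn_predict(X_train, U_train, X_test, k=5, m=2.0, eps=1e-9):
    """Fuzzy k-NN: a test point's class memberships are the
    inverse-distance-weighted average of its neighbors' memberships.

    X_train : (n, d) training features, e.g. stacked-autoencoder outputs
    U_train : (n, c) membership of each training sample in each class;
              one-hot rows (np.eye(c)[y_train]) give crisp labels
    Returns predicted class indices and the (n_test, c) membership matrix.
    """
    memberships = []
    for x in X_test:
        dist = np.linalg.norm(X_train - x, axis=1)
        nn = np.argsort(dist)[:k]                       # k nearest neighbors
        w = 1.0 / (dist[nn] ** (2.0 / (m - 1.0)) + eps)  # inverse-distance weights
        memberships.append((w[:, None] * U_train[nn]).sum(axis=0) / w.sum())
    U = np.asarray(memberships)
    return U.argmax(axis=1), U

With one-hot U_train the rule reduces to distance-weighted k-NN; the returned membership matrix additionally shows how confidently each test sample is assigned, which is the practical appeal of the fuzzy variant.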