http://dx.doi.org/10.15207/JKCS.2018.9.2.013

Classification algorithm using characteristics of EBP and OVSSA  

Lee, Jong Chan (Dept. of Internet, Chungwoon University)
Publication Information
Journal of the Korea Convergence Society, v.9, no.2, 2018, pp. 13-18
Abstract
This paper starts from the simple view that the most efficient learning of a multi-layered network is the process of finding an optimal set of weight vectors. To overcome the shortcomings of conventional learning methods, the proposed model combines features of EBP and OVSSA. That is, the two algorithms are merged into a single model that exploits the strengths of each: the probabilistic search of OVSSA compensates for EBP's tendency to become trapped in local minima. In the proposed algorithm, the error measure that EBP seeks to reduce is used as the energy function, and this energy is minimized by OVSSA. A simple experiment confirms that the two algorithms, despite their different characteristics, can be combined.
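Since the abstract describes the combination only at a high level, the following is a minimal sketch of the idea under stated assumptions: a small two-layer sigmoid network, EBP's sum-of-squared-errors objective serving as the energy function, and a generic simulated-annealing-style weight perturbation standing in for OVSSA, whose exact update rule is not given here. The architecture, cooling schedule, and step sizes are illustrative choices, not the paper's.

# Minimal sketch (not the paper's implementation): the EBP squared-error
# objective is treated as an "energy" and minimized by a generic
# simulated-annealing-style weight search standing in for OVSSA.
import numpy as np

rng = np.random.default_rng(0)

def forward(W1, W2, X):
    # Assumed architecture: two-layer network with sigmoid units.
    H = 1.0 / (1.0 + np.exp(-X @ W1))
    return 1.0 / (1.0 + np.exp(-H @ W2))

def energy(W1, W2, X, T):
    # EBP's sum-of-squared-errors objective used as the energy function.
    return np.sum((forward(W1, W2, X) - T) ** 2)

def anneal(X, T, n_hidden=4, steps=5000, temp=1.0, cooling=0.999):
    n_in, n_out = X.shape[1], T.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, n_hidden))
    W2 = rng.normal(0, 0.5, (n_hidden, n_out))
    e = energy(W1, W2, X, T)
    for _ in range(steps):
        # Propose a random perturbation of the weight vectors.
        dW1 = rng.normal(0, 0.1, W1.shape)
        dW2 = rng.normal(0, 0.1, W2.shape)
        e_new = energy(W1 + dW1, W2 + dW2, X, T)
        # Accept downhill moves always, uphill moves with Boltzmann
        # probability; the stochastic step lets the search leave local minima.
        if e_new < e or rng.random() < np.exp((e - e_new) / temp):
            W1, W2, e = W1 + dW1, W2 + dW2, e_new
        temp *= cooling  # cooling schedule (illustrative)
    return W1, W2, e

# Toy usage: XOR-like classification problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2, final_e = anneal(X, T)
print("final energy:", final_e)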
Keywords
EBP; OVSSA; Classification; Energy function; Optimization problem;