Stepwise Constructive Method for Neural Networks Using a Flexible Incremental Algorithm


  • 박진일 (Dept. of Control and Robot Engineering, College of Electrical and Computer Engineering, Chungbuk National University) ;
  • 정지석 (Dept. of Control and Robot Engineering, College of Electrical and Computer Engineering, Chungbuk National University) ;
  • 조영임 (Dept. of Computer Science, College of IT, The University of Suwon) ;
  • 전명근 (Dept. of Control and Robot Engineering, College of Electrical and Computer Engineering, Chungbuk National University)
  • Received : 2009.01.19
  • Accepted : 2009.07.31
  • Published : 2009.08.25

Abstract

Constructing an optimized neural network for complex nonlinear regression problems involves many difficulties, such as selecting the network structure and avoiding the overtraining caused by noise. In this paper, we propose a stepwise constructive method for neural networks using a flexible incremental algorithm. As hidden nodes are added, the flexible incremental algorithm adaptively controls their number using a validation dataset so as to minimize the prediction residual error, and the ELM (Extreme Learning Machine) is used for fast training. The proposed network is a universal approximator that requires no user intervention during training, while also training faster and needing fewer hidden nodes. Experimental results on various benchmark datasets show that the proposed method outperforms previous methods on real-world regression problems.
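The growth loop the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the batch size, the patience-based stopping heuristic, and all function and parameter names are assumptions. Nodes are added in small batches with random input weights, the output weights are re-solved in closed form (the ELM step), and the validation RMSE decides when to stop growing.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def flexible_incremental_elm(X_tr, y_tr, X_val, y_val,
                             max_nodes=100, batch=5, patience=3, seed=0):
    """Illustrative incremental ELM for regression (names/heuristics assumed).

    Hidden nodes are appended in batches of `batch`; after each addition the
    output weights are refit by least squares and the validation RMSE is
    checked.  Growth stops at `max_nodes` or after `patience` consecutive
    additions without validation improvement.
    """
    rng = np.random.default_rng(seed)
    d = X_tr.shape[1]
    W = np.empty((0, d))            # hidden-node input weights (one row per node)
    b = np.empty(0)                 # hidden-node biases
    best = None                     # (val RMSE, node count, output weights)
    stall = 0
    while W.shape[0] < max_nodes and stall < patience:
        # add a batch of hidden nodes with random parameters (ELM style)
        W = np.vstack([W, rng.standard_normal((batch, d))])
        b = np.concatenate([b, rng.standard_normal(batch)])
        # closed-form refit of the output weights on the training set
        H = sigmoid(X_tr @ W.T + b)
        beta, *_ = np.linalg.lstsq(H, y_tr, rcond=None)
        # validation error controls further growth
        H_val = sigmoid(X_val @ W.T + b)
        rmse = np.sqrt(np.mean((H_val @ beta - y_val) ** 2))
        if best is None or rmse < best[0]:
            best, stall = (rmse, W.shape[0], beta), 0
        else:
            stall += 1              # no improvement on validation data
    rmse, n, beta = best
    # keep only the node count that achieved the best validation error
    return W[:n], b[:n], beta, rmse
```

Because nodes are only ever appended, trimming back to the best-performing node count recovers exactly the network that achieved the lowest validation error, which is the "flexible" control of the hidden-layer size the abstract refers to.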


