
A self-organizing algorithm for multi-layer neural networks  

Jong-Seok Lee (Dept. of Electrical Engineering and Computer Science, KAIST)
Jae-Young Kim (Dept. of Electrical Engineering and Computer Science, KAIST)
Seung-Beom Jeong (Software Center, Samsung Electronics Co., Ltd.)
Cheol-Hoon Park (Dept. of Electrical Engineering and Computer Science, KAIST)
Abstract
When a neural network is used to solve a given problem, it is necessary to match the complexity of the network to that of the problem, because the network's complexity significantly affects its learning capability and generalization performance. It is therefore desirable to have an algorithm that can find an appropriate network structure in a self-organizing way. This paper proposes algorithms that automatically organize feedforward multi-layer neural networks with sigmoid hidden neurons for given problems. By combining constructive procedures with pruning procedures, the proposed algorithms search for a near-optimal network that is compact and generalizes well. The performance of the proposed algorithms is tested on four function regression problems. The results demonstrate that our algorithms successfully generate near-optimal networks in comparison with a previous method and with neural networks of fixed topology.
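The abstract describes a two-phase scheme: a constructive phase that grows the hidden layer and a pruning phase that removes redundant neurons. The following is a minimal sketch of such a grow-then-prune loop, not the authors' actual algorithm; the single-hidden-layer architecture, the validation-MSE stopping and pruning tests, and all tolerances are illustrative assumptions.

```python
# Sketch of a grow-then-prune loop for a sigmoid MLP (illustrative only;
# NOT the paper's algorithm). Assumed: 1-D input, linear output, MSE loss.
import copy
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MLP:
    """One-hidden-layer network: sigmoid hidden units, linear output."""
    def __init__(self, n_hidden):
        self.W1 = rng.normal(0.0, 1.0, (n_hidden, 1))  # input dim = 1
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 1.0, n_hidden)
        self.b2 = 0.0

    def forward(self, x):
        h = sigmoid(x @ self.W1.T + self.b1)           # (N, H)
        return h @ self.W2 + self.b2, h

    def train(self, x, y, lr=0.5, epochs=2000):
        n = len(y)
        for _ in range(epochs):                        # plain batch gradient descent
            yhat, h = self.forward(x)
            err = yhat - y                             # d(MSE/2)/d(yhat)
            dh = np.outer(err, self.W2) * h * (1.0 - h)
            self.W2 -= lr * (h.T @ err) / n
            self.b2 -= lr * err.mean()
            self.W1 -= lr * (dh.T @ x) / n
            self.b1 -= lr * dh.mean(axis=0)

    def mse(self, x, y):
        yhat, _ = self.forward(x)
        return float(np.mean((yhat - y) ** 2))

    def prune(self, i):
        """Delete hidden neuron i."""
        self.W1 = np.delete(self.W1, i, axis=0)
        self.b1 = np.delete(self.b1, i)
        self.W2 = np.delete(self.W2, i)

def grow_and_prune(x_tr, y_tr, x_val, y_val, max_hidden=8, tol=1e-3):
    # Constructive phase: enlarge the hidden layer while validation MSE improves.
    best, best_err = None, np.inf
    for n in range(1, max_hidden + 1):
        net = MLP(n)
        net.train(x_tr, y_tr)
        err = net.mse(x_val, y_val)
        if err < best_err - tol:
            best, best_err = net, err
        else:
            break                                      # adding neurons stopped helping
    # Pruning phase: remove neurons whose deletion barely hurts validation MSE.
    i = 0
    while best.W2.size > 1 and i < best.W2.size:
        trial = copy.deepcopy(best)
        trial.prune(i)
        if trial.mse(x_val, y_val) <= best_err + tol:
            best, best_err = trial, trial.mse(x_val, y_val)
        else:
            i += 1
    return best

# Usage on a toy 1-D regression task (even samples train, odd samples validate)
x = np.linspace(-1.0, 1.0, 80)[:, None]
y = np.sin(np.pi * x).ravel()
net = grow_and_prune(x[::2], y[::2], x[1::2], y[1::2])
```

Validation error is used as the accept/reject criterion in both phases so that the final size reflects generalization rather than training fit; the paper's own criteria (e.g. its neuron "impact factor") would replace the simple MSE test here.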
Keywords
self-organizing algorithm; multi-layer neural networks; construction; pruning; impact factor;
References
1 R. Setiono and L. C. K. Hui, 'Use of a quasi-Newton method in a feedforward neural network construction algorithm,' IEEE Trans. Neural Networks, vol. 6, no. 1, pp. 273-277, Jan. 1995.
2 S. Haykin, Neural Networks: A Comprehensive Foundation, NJ: Prentice-Hall, 1999.
3 J.-S. Lee and C. H. Park, 'Self-organizing neural network using adaptive neurons,' in Proc. Int. Conf. Neural Information Processing, Singapore, pp. 935-939, Nov. 2002.
4 T. Ash, 'Dynamic node creation in backpropagation networks,' Connection Sci., vol. 1, no. 4, pp. 365-375, 1989.
5 S. Tamura and M. Tateishi, 'Capabilities of a four-layered feedforward neural network: four layers versus three,' IEEE Trans. Neural Networks, vol. 8, pp. 251-255, Mar. 1997.
6 M. Maechler, D. Martin, J. Schimert, M. Csoppensky, and J. N. Hwang, 'Projection pursuit learning networks for regression,' in Proc. Int. Conf. Tools for AI, Washington, D.C., pp. 350-358, Nov. 1990.
7 M. T. Hagan and M. B. Menhaj, 'Training feedforward networks with the Marquardt algorithm,' IEEE Trans. Neural Networks, vol. 5, no. 6, pp. 989-993, Nov. 1994.
8 K. Hornik, M. Stinchcombe, and H. White, 'Multilayer feedforward networks are universal approximators,' Neural Networks, vol. 2, pp. 359-366, 1989.
9 J. H. Friedman, 'Classification and multiple regression through projection pursuit,' Dept. of Statistics, Stanford Univ., Technical Report no. 12, Jan. 1985.
10 L. Breiman, 'The pi method for estimating multivariate functions from noisy data,' Technometrics, vol. 33, no. 2, pp. 125-160, 1991.
11 V. Cherkassky and H. Lari-Najafi, 'Constrained topological mapping for nonparametric regression analysis,' Neural Networks, vol. 4, pp. 27-40, 1991.
12 T.-Y. Kwok and D.-Y. Yeung, 'Constructive algorithms for structure learning in feedforward neural networks for regression problems,' IEEE Trans. Neural Networks, vol. 8, no. 3, pp. 630-645, May 1997.
13 M. T. Hagan, H. B. Demuth, and M. Beale, Neural Network Design, Boston, MA: PWS Publishing, 1996.