신경망 학습앙상블에 관한 연구 - 주가예측을 중심으로 -

A Study on Training Ensembles of Neural Networks - A Case of Stock Price Prediction

  • Published: 1999.06.01

Abstract

This paper presents a comparison of different methods for combining predictions from neural networks: bagging, bumping, and balancing. These methods are based on decomposing the ensemble generalization error into an ambiguity term and a term reflecting the generalization performance of the individual networks. Neural networks, like other machine learning models, are prone to overfitting. One strategy to prevent a neural network from overfitting is to stop training at an early stage of the learning process: the complete data set is split into a training set and a validation set, and training is stopped when the error on the validation set starts to increase. The resulting network, however, depends strongly on how the data are divided into training and validation sets, as well as on the random initial weights and the chosen minimization procedure. This makes early-stopped networks rather unstable: a small change in the data or in the initial conditions can produce large changes in the prediction. It is therefore advisable to apply the same procedure several times, starting from different initial weights; this technique is often referred to as training ensembles of neural networks. In this paper, we present a comparison of three statistical methods for preventing overfitting of neural networks.
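To make the procedure concrete, the sketch below is a minimal illustration (not the paper's code) of training an ensemble of early-stopped networks: each member starts from different random initial weights, uses a different division into training and validation data (with a bootstrap resample of the training part, as in bagging), stops training once the validation error keeps rising, and the members' predictions are combined by simple averaging. It assumes Python with NumPy; the network size, learning rate, patience, and the synthetic data are all illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_early_stopped_net(X_tr, y_tr, X_val, y_val, hidden=8,
                            lr=0.05, max_epochs=2000, patience=20):
    """Train a one-hidden-layer regression net; keep the weights with the
    lowest validation error and stop once that error keeps increasing."""
    n_in = X_tr.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1));    b2 = np.zeros(1)
    best = (np.inf, W1, b1, W2, b2)
    wait = 0

    def forward(X, W1, b1, W2, b2):
        H = np.tanh(X @ W1 + b1)           # hidden-layer activations
        return H, H @ W2 + b2              # network output

    for _ in range(max_epochs):
        H, out = forward(X_tr, W1, b1, W2, b2)
        err = out - y_tr                   # gradient of 0.5 * squared error
        dW2 = H.T @ err / len(X_tr); db2 = err.mean(axis=0)
        dH = (err @ W2.T) * (1.0 - H ** 2)
        dW1 = X_tr.T @ dH / len(X_tr); db1 = dH.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

        val_mse = np.mean((forward(X_val, W1, b1, W2, b2)[1] - y_val) ** 2)
        if val_mse < best[0]:
            best = (val_mse, W1.copy(), b1.copy(), W2.copy(), b2.copy())
            wait = 0
        else:
            wait += 1
            if wait >= patience:           # validation error no longer improves
                break
    _, W1, b1, W2, b2 = best               # restore the early-stopping weights
    return lambda X: np.tanh(X @ W1 + b1) @ W2 + b2

# Illustrative data: a noisy nonlinear target standing in for next-day prices.
X = rng.normal(size=(300, 5))
y = np.sin(X[:, :1]) + 0.3 * X[:, 1:2] + 0.1 * rng.normal(size=(300, 1))

members = []
for _ in range(10):                        # ten members, different initial weights
    idx = rng.permutation(len(X))          # different training/validation division
    tr, val = idx[:240], idx[240:]
    boot = rng.choice(tr, size=len(tr))    # bootstrap resample, as in bagging
    members.append(train_early_stopped_net(X[boot], y[boot], X[val], y[val]))

ensemble_pred = np.mean([net(X) for net in members], axis=0)
print("ensemble MSE:", float(np.mean((ensemble_pred - y) ** 2)))
```

Roughly speaking, the sketch only shows the bagging-style average: bumping would instead keep the single member with the lowest error on the original data, and balancing would choose non-uniform combination weights rather than a plain mean.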

Keywords

References

  1. Bartlett, P. L., "For valid generalization, the size of the weights is more important than the size of the network," in Mozer, M. C., Jordan, M. I., and Petsche, T. (eds.), Advances in Neural Information Processing Systems 9.
  2. Hertz, J., Krogh, A., and Palmer, R. G., Introduction to the Theory of Neural Computation.
  3. Heskes, T., "Balancing between bagging and bumping," in Mozer, M. C., Jordan, M. I., and Petsche, T. (eds.), Advances in Neural Information Processing Systems 9.
  4. Geman, S., Bienenstock, E., and Doursat, R., "Neural networks and the bias/variance dilemma," Neural Computation, vol. 4.
  5. Lawrence, S., Giles, C. L., and Tsoi, A. C., "Lessons in neural network training: Overfitting may be harder than expected," Proceedings of the Fourteenth National Conference on Artificial Intelligence (AAAI-97).
  6. Moody, J. E., "The effective number of parameters: An analysis of generalization and regularization in nonlinear learning systems," Advances in Neural Information Processing Systems 4 (NIPS 4).
  7. Sarle, W. S., "Stopped training and other remedies for overfitting," Proceedings of the 27th Symposium on the Interface of Computing Science and Statistics.
  8. Smith, M., Neural Networks for Statistical Modeling.
  9. Weigend, A., "On overfitting and the effective number of hidden units," Proceedings of the 1993 Connectionist Models Summer School.