• Title/Summary/Keyword: Neural Network Pruning

Genetic Algorithm for Node Pruning of Neural Networks (신경망의 노드 가지치기를 위한 유전 알고리즘)

  • Heo, Gi-Su; Oh, Il-Seok
    • Journal of the Institute of Electronics Engineers of Korea CI, v.46 no.2, pp.65-74, 2009
  • In optimizing a neural network structure, there are two approaches: the pruning scheme and the constructive scheme. In this paper we use the pruning scheme to optimize the network structure, and a genetic algorithm to find the optimal node pruning. In conventional research, the input and hidden layers were optimized separately. In contrast, we attempt to optimize the two layers simultaneously by encoding both layers in a single chromosome, as sketched below. The offspring networks inherit their weights from the parents. For learning, we used the standard error back-propagation algorithm. In experiments with various databases from the UCI Machine Learning Repository, we obtained the best performance when the network size was reduced by about 8~25%. A t-test showed that, under cross-validation, the proposed method performs better than other pruning and constructive methods.
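The chromosome encoding described above can be sketched concretely. The following is a minimal illustration, not the paper's implementation: a single binary chromosome masks input and hidden nodes jointly, and a hypothetical `evaluate(mask)` callback is assumed to rebuild the masked network, let it inherit the parent weights, fine-tune it with back-propagation, and return validation accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_node_pruning(n_input, n_hidden, evaluate, pop_size=20,
                    generations=50, p_mutate=0.02):
    """Evolve one binary chromosome masking input and hidden nodes jointly."""
    n_genes = n_input + n_hidden              # both layers in one chromosome
    pop = rng.integers(0, 2, size=(pop_size, n_genes))
    for _ in range(generations):
        # evaluate(mask) is assumed to retrain the masked network (weights
        # inherited from the parent) and return validation accuracy
        scores = np.array([evaluate(ind) for ind in pop])
        # binary tournament selection
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        winners = np.where(scores[idx[:, 0]] >= scores[idx[:, 1]],
                           idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # one-point crossover on consecutive pairs
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n_genes)
            children[i, cut:] = parents[i + 1, cut:]
            children[i + 1, cut:] = parents[i, cut:]
        # bit-flip mutation
        flips = rng.random(children.shape) < p_mutate
        pop = np.where(flips, 1 - children, children)
    scores = np.array([evaluate(ind) for ind in pop])
    best = pop[int(scores.argmax())]
    return best[:n_input], best[n_input:]     # input mask, hidden mask
```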

A Pruning Algorithm of Neural Networks Using Impact Factors (임팩트 팩터를 이용한 신경 회로망의 연결 소거 알고리즘)

  • 이하준; 정승범; 박철훈
    • Journal of the Institute of Electronics Engineers of Korea CI, v.41 no.2, pp.77-86, 2004
  • In general, small neural networks, even though they show good generalization performance, tend to fail to learn the training data within a given error bound, whereas large ones learn the training data easily but generalize poorly. A way of achieving good generalization is therefore to find the smallest network that can learn the data, called the optimal-sized neural network. This paper proposes a new scheme for network pruning using an 'impact factor', defined as the product of the variance of a neuron's output and the square of its outgoing weight, as sketched below. Simulation results on function approximation problems show that the proposed method is effective for regression.
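The impact factor itself is concrete: for hidden neuron i with output o_i and outgoing weight w_i, IF_i = Var(o_i) · w_i². A minimal sketch of scoring and ranking follows; the `keep_ratio` cutoff and one-shot pruning are assumptions for illustration, whereas the paper evaluates the scheme within a full training procedure.

```python
import numpy as np

def impact_factors(hidden_outputs, outgoing_weights):
    """IF_i = Var(o_i) * w_i**2 for each hidden neuron.

    hidden_outputs: (n_samples, n_hidden) activations over the training set.
    outgoing_weights: (n_hidden,) weight from each hidden neuron to the output.
    """
    return hidden_outputs.var(axis=0) * outgoing_weights ** 2

def prune_by_impact(hidden_outputs, outgoing_weights, keep_ratio=0.5):
    """Keep the hidden neurons with the largest impact factors."""
    scores = impact_factors(hidden_outputs, outgoing_weights)
    n_keep = max(1, int(keep_ratio * scores.size))
    return np.argsort(scores)[::-1][:n_keep]  # indices of surviving neurons
```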

A Pruning Algorithm for Network Structure Optimization in the Forecasting Climate System Using Neural Network (신경망을 이용한 기상예측시스템에서 망구조 최적화를 위한 Pruning 알고리즘)

  • Lee, Kee-Jun; Kang, Myung-A; Jung, Chai-Yeoung
    • The Transactions of the Korea Information Processing Society, v.7 no.2, pp.385-391, 2000
  • Recently, neural network research has progressed toward forecasting future behavior from series data, an approach different from traditional statistical analysis methods. In this paper, we suggest a pruning algorithm for fast and accurate weather forecasting that removes hidden-layer nodes from an initially over-sized neural network; a generic sketch of such pruning follows below. To demonstrate the efficiency of the suggested algorithm, we performed weather forecasting experiments using 22,080 weather records gathered from 1987 to 1996. Through the experiments, the initial 26×50×1 network was reduced by the suggested pruning algorithm to the most suitable 26×2×1 structure. With the optimized 26×2×1 network, the average forecast accuracy was 33.55% within an error bound of ±0.5°C and 61.57% within ±1°C, superior to the averages of 29.31% and 54.47% obtained with the initially designed structure. The number of calculations was also reduced by up to a factor of 25 compared with the initial network.
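The abstract does not spell out the pruning criterion, so the following is only a generic greedy sketch of hidden-node elimination with a validation check; `predict_fn` is a hypothetical callback that runs the network using only the listed hidden nodes.

```python
import numpy as np

def prune_hidden_nodes(predict_fn, X_val, y_val, n_hidden, tol=0.01):
    """Greedily drop the hidden node whose removal hurts validation error
    the least, stopping once the degradation exceeds tol."""
    active = list(range(n_hidden))
    base_err = np.mean(np.abs(predict_fn(X_val, active) - y_val))
    while len(active) > 1:
        errs = [np.mean(np.abs(predict_fn(X_val,
                                          [n for n in active if n != node])
                               - y_val))
                for node in active]
        best = int(np.argmin(errs))
        if errs[best] - base_err > tol:   # pruning further starts to hurt
            break
        base_err = errs[best]
        active.pop(best)                  # remove the least useful node
    return active
```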

Adaptive Structure of Modular Wavelet Neural Network (모듈화된 웨이블렛 신경망의 적응 구조 설계)

  • 서재용; 김성주; 조현찬; 전홍태
    • Journal of the Korean Institute of Intelligent Systems, v.11 no.9, pp.782-787, 2001
  • In this paper, we propose a growing and pruning algorithm to design the adaptive structure of a modular wavelet neural network (MWNN) with F-projection and a geometric growing criterion. The geometric growing criterion consists of an estimated-error criterion, which considers the local error, and an angle criterion, which attempts to assign a wavelet function that is nearly orthogonal to all existing wavelet functions; the angle criterion is sketched below. These criteria provide a methodology by which a network designer can construct a wavelet neural network according to his or her intention. The proposed growing algorithm increases the number of modules or the size of each module. The pruning algorithm eliminates unnecessary nodes within a module, or entire modules, from the constructed MWNN to overcome the problems caused by the localized characteristics of the wavelet neural networks used as modules of the MWNN. We apply the proposed algorithm for constructing the adaptive structure of an MWNN to 1-D and 2-D function approximation problems, and evaluate its effectiveness.
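The angle criterion admits a compact sketch: a candidate wavelet node is accepted only if, over a common sample grid, it is nearly orthogonal to every wavelet already in the network. The `min_angle_deg` threshold below is an assumed parameter, not a value from the paper.

```python
import numpy as np

def angle_criterion(candidate, basis, min_angle_deg=80.0):
    """Accept a candidate wavelet only if it is nearly orthogonal to all
    existing wavelet nodes.

    candidate: (n_samples,) candidate wavelet outputs on the sample grid.
    basis: (n_nodes, n_samples) outputs of the existing wavelet nodes.
    """
    c = candidate / np.linalg.norm(candidate)
    for b in basis:
        cos = abs(np.dot(c, b / np.linalg.norm(b)))
        angle = np.degrees(np.arccos(np.clip(cos, 0.0, 1.0)))
        if angle < min_angle_deg:     # too parallel to an existing node
            return False
    return True
```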

Structure Optimization of Neural Networks using Rough Set Theory (러프셋 이론을 이용한 신경망의 구조 최적화)

  • 정영준; 이동욱; 심귀보
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 1998.03a, pp.49-52, 1998
  • Neural networks perform well in pattern classification, control, and many other fields thanks to their learning ability. However, there is no effective rule or systematic approach for determining the optimal structure. In this paper, we propose a new method, a kind of pruning, that finds the optimal structure of a feed-forward multi-layer neural network by eliminating its redundant elements. To find the redundant elements, we analyze the error and the weight changes with Rough Set Theory while executing the back-propagation learning algorithm; a loose sketch of the idea follows below.
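The abstract does not detail the rough-set procedure, so the following is only a loose sketch of the underlying indiscernibility idea: discretize each node's recorded weight changes into attribute values, and flag nodes whose discretized trajectories are indiscernible from another node's as redundant.

```python
import numpy as np

def redundant_nodes(weight_changes, n_bins=5):
    """Flag nodes whose discretized weight-change trajectories duplicate
    another node's (a rough-set-style indiscernibility check).

    weight_changes: (n_nodes, n_epochs) weight changes recorded while
    running back-propagation.
    """
    edges = np.linspace(weight_changes.min(), weight_changes.max(),
                        n_bins + 1)[1:-1]
    coded = np.digitize(weight_changes, edges)   # discrete attribute table
    seen, redundant = {}, []
    for i, row in enumerate(coded):
        key = row.tobytes()
        if key in seen:
            redundant.append(i)   # indiscernible from node seen[key]
        else:
            seen[key] = i
    return redundant
```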

Performance Analysis of Optimal Neural Network structural BPN based on character value of Hidden node (은닉노드의 특징 값을 기반으로 한 최적신경망 구조의 BPN성능분석)

  • 강경아; 이기준; 정채영
    • Journal of the Korea Society of Computer and Information, v.5 no.2, pp.30-36, 2000
  • Hidden nodes act as functional units that classify the features of the input patterns for a given problem. The choice of a suitable number of hidden nodes has therefore emerged as a factor with an important effect on the result; however, the back-propagation learning algorithm gives no principled way to decide that number. If too few hidden nodes are used, learning cannot be completed because the given input patterns cannot be classified sufficiently; if too many are used, overfitting occurs along with unnecessary computation and wasted memory, so the recognition rate drops and generalization suffers. This paper therefore suggests a method that decides the number of hidden nodes using feature information derived from the parameters of the learning algorithm. The hidden node with the maximum feature value is excluded from the pruning targets; the feature value of each remaining hidden node is compared with the average over the other nodes, and nodes whose feature value is smaller than that average are pruned, as sketched below. Deciding the optimal structure of the multi-layer neural network in this way improves its learning speed.
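The pruning rule in the abstract is explicit enough to sketch: protect the hidden node with the maximum feature value, then prune every other node whose feature value is below the average of the remaining nodes. How the feature values are computed from the learning-algorithm parameters follows the paper and is not reproduced here.

```python
import numpy as np

def select_pruned_nodes(feature_values):
    """Return the indices of hidden nodes to prune under the rule above.

    feature_values: (n_hidden,) one feature value per hidden node.
    """
    f = np.asarray(feature_values, dtype=float)
    protected = int(f.argmax())        # the max node is never pruned
    pruned = []
    for i in range(f.size):
        if i == protected:
            continue
        others = np.delete(f, i)       # average over the other nodes
        if f[i] < others.mean():
            pruned.append(i)
    return pruned
```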

Improving Generalization Performance of Neural Networks using Natural Pruning and Bayesian Selection (자연 프루닝과 베이시안 선택에 의한 신경회로망 일반화 성능 향상)

  • 이현진; 박혜영; 이일병
    • Journal of KIISE: Software and Applications, v.30 no.3-4, pp.326-338, 2003
  • The objective of neural network design and model selection is to construct an optimal network with good generalization performance. However, training data include noise and are limited in number, which creates a gap between the true probability distribution and the empirical one. This gap makes the learning parameters over-fit the training data and deviate from the true distribution, which is called the overfitting phenomenon. An overfitted neural network approximates the training data well but gives bad predictions on untrained new data, and the phenomenon becomes more severe as the complexity of the network increases. In this paper, taking a statistical viewpoint, we propose an integrated process of neural network design and model selection to improve generalization performance. First, using natural gradient learning with adaptive regularization, we obtain, with fast convergence, optimal parameters that are not over-fitted to the training data. By applying natural pruning to these parameters, we generate several candidate network models of different sizes. Finally, we select an optimal model among the candidates based on the Bayesian Information Criterion, as sketched below. Computer simulations on benchmark problems confirm the generalization and structure-optimization performance of the proposed integrated process of learning and model selection.
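The final selection step uses a standard criterion, so it can be shown directly: for each pruned candidate network, compute BIC from its log-likelihood on the training data and its remaining parameter count, and keep the candidate with the smallest value.

```python
import numpy as np

def bic(log_likelihood, n_params, n_samples):
    """Bayesian Information Criterion; lower is better."""
    return -2.0 * log_likelihood + n_params * np.log(n_samples)

def select_model(candidates, n_samples):
    """Pick the candidate with the smallest BIC.

    candidates: list of (log_likelihood, n_params) pairs, one per network
    produced by natural pruning at a different size.
    """
    scores = [bic(ll, k, n_samples) for ll, k in candidates]
    return int(np.argmin(scores))
```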

A Study on the Self-Evolving Expert System using Neural Network and Fuzzy Rule Extraction (인공신경망과 퍼지규칙 추출을 이용한 상황적응적 전문가시스템 구축에 관한 연구)

  • 이건창; 김진성
    • Journal of the Korean Institute of Intelligent Systems, v.11 no.3, pp.231-240, 2001
  • Conventional expert systems have been criticized for their inability to adapt to changing decision-making environments. In the literature, many methods have been proposed to make expert systems more environment-adaptive by incorporating fuzzy logic and neural networks. The objective of this paper is to propose a new approach to building a self-evolving expert system inference mechanism by integrating a fuzzy neural network with a fuzzy rule extraction technique. The main recipe of our approach is to fuzzify the training data (illustrated below), train a fuzzy neural network on them, extract a set of fuzzy rules from the trained network, organize a knowledge base, and refine the fuzzy rules with a pruning algorithm whenever the decision-making environment is detected to have changed significantly. To prove its validity, we tested the proposed self-evolving inference mechanism on bankruptcy data and compared its results with a conventional neural network. Non-parametric statistical analysis of the experimental results showed that the proposed approach is significantly valid.
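The first step of the recipe, fuzzifying the training data, can be illustrated with triangular membership functions; the breakpoints below are assumed placeholders, not values from the paper's bankruptcy data.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def fuzzify(x, low, mid, high):
    """Map a crisp input (e.g. a financial ratio) to Low/Medium/High degrees."""
    return {
        "low": triangular(x, low - (mid - low), low, mid),
        "medium": triangular(x, low, mid, high),
        "high": triangular(x, mid, high, high + (high - mid)),
    }
```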

Adaptive Structure of Modular Wavelet Neural Network Using Growing and Pruning Algorithm (성장과 소거 알고리즘을 이용한 모듈화된 웨이블렛 신경망의 적응구조 설계)

  • Seo, Jae-Yong; Kim, Yong-Taek; Jo, Hyeon-Chan; Jeon, Hong-Tae
    • Journal of the Institute of Electronics Engineers of Korea SC, v.39 no.1, pp.16-23, 2002
  • In this paper, we propose a growing and pruning algorithm to design the optimal structure of a modular wavelet neural network (MWNN) with F-projection and a geometric growing criterion. The geometric growing criterion consists of an estimated-error criterion, which considers the local error (sketched below), and an angle criterion, which attempts to assign a wavelet function that is nearly orthogonal to all existing wavelet functions. These criteria provide a methodology by which a network designer can construct an MWNN according to his or her intention. The proposed growing algorithm increases the number of modules or the size of each module. The pruning algorithm eliminates unnecessary nodes within a module, or entire modules, from the constructed MWNN to overcome the problems caused by the localized characteristics of the wavelet neural networks used as modules of the MWNN. We apply the proposed algorithm for constructing the optimal structure of an MWNN to 1-D and 2-D function approximation problems, and evaluate its effectiveness.
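The estimated-error half of the geometric growing criterion (the angle half was sketched earlier) can be illustrated as follows; representing locality by per-module region masks is an assumption of this sketch, not the paper's formulation.

```python
import numpy as np

def grow_flags(residuals, region_masks, err_threshold=0.05):
    """Flag the regions whose local RMS error still exceeds the threshold,
    i.e. where a module should grow a new wavelet node.

    residuals: (n_samples,) current approximation errors.
    region_masks: (n_regions, n_samples) boolean masks giving each module's
    local support.
    """
    flags = []
    for mask in region_masks:
        local = residuals[mask]
        rms = np.sqrt(np.mean(local ** 2)) if local.size else 0.0
        flags.append(rms > err_threshold)
    return flags
```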

Review on Genetic Algorithms for Pattern Recognition (패턴 인식을 위한 유전 알고리즘의 개관)

  • Oh, Il-Seok
    • The Journal of the Korea Contents Association, v.7 no.1, pp.58-64, 2007
  • In the pattern recognition field, there are many optimization problems with exponential search spaces. To solve them, sequential search algorithms seeking sub-optimal solutions have been used; such algorithms have the limitation of stopping at local optima. Recently, many studies have attempted to solve these problems using genetic algorithms. This paper explains the huge search spaces of typical problems such as feature selection, classifier ensemble selection, neural network pruning, and clustering, and reviews the genetic algorithms for solving them; the size of these spaces is illustrated below. Additionally, we present several subjects worth noting as future research.
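The exponential search spaces the review refers to are easy to make concrete: selecting a subset of d features, ensemble members, or prunable nodes means searching over 2^d binary strings, which is why exhaustive search is hopeless and GAs with bitstring chromosomes fit naturally.

```python
# Size of the binary search spaces the review discusses.
for d in (10, 30, 100):
    print(f"d={d}: {2**d:.3e} candidate subsets")
```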