• Title/Summary/Keyword: Momentum Learning


Performance and Root Mean Squared Error of Kernel Relaxation by the Dynamic Change of the Moment (모멘트의 동적 변환에 의한 Kernel Relaxation의 성능과 RMSE)

  • Kim, Eun-Mi;Lee, Bae-Ho
    • Journal of Korea Multimedia Society / v.6 no.5 / pp.788-796 / 2003
  • This paper proposes a dynamic momentum for sequential learning methods. The dynamic momentum improves convergence speed and performance by varying the momentum term, and the improvement can be verified with the RMSE (root mean squared error). The proposed method adjusts the momentum according to the current state of training: while a static momentum exerts the same influence throughout, the dynamic momentum algorithm can control the convergence rate and performance as the momentum changes over the course of training. Unlike earlier work limited to classification and regression accuracy, this paper confirms both the performance and the regression quality of the dynamic momentum, using the RMSE as the regression measure. The proposed dynamic momentum is applied to the kernel adatron and kernel relaxation, recently presented sequential learning methods for the support vector machine. To show the efficiency of the proposed algorithm, the SONAR data, a standard benchmark for neural network classifiers, is used. The simulation results show that the dynamic momentum yields a better convergence rate, performance, and RMSE than the static momentum.
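The static-versus-dynamic contrast described in this abstract can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation: the rule for varying the momentum (raising it toward a maximum when successive gradients agree in direction, dropping it when they oscillate) is an assumption, since the abstract does not give the exact update.

```python
import numpy as np

def momentum_step(w, v, grad, lr=0.01, mu=0.9):
    """Classical static momentum: v <- mu*v - lr*grad, w <- w + v."""
    v = mu * v - lr * grad
    return w + v, v

def dynamic_momentum_step(w, v, grad, prev_grad, lr=0.01, mu_min=0.5, mu_max=0.95):
    """Hypothetical dynamic momentum: raise mu toward mu_max when successive
    gradients point the same way, drop it toward mu_min when they oscillate."""
    denom = np.linalg.norm(grad) * np.linalg.norm(prev_grad) + 1e-12
    agreement = float(np.dot(grad, prev_grad)) / denom   # cosine in [-1, 1]
    mu = mu_min + (mu_max - mu_min) * max(0.0, agreement)
    v = mu * v - lr * grad
    return w + v, v
```

On a well-behaved objective the dynamic variant keeps a high momentum on long straight descents and damps itself when the iterates start to overshoot, which is one plausible mechanism behind the faster convergence the abstract reports.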


Batch-mode Learning in Neural Networks (신경회로망에서 일괄 학습)

  • 김명찬;최종호
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.3 / pp.503-511 / 1995
  • A batch-mode algorithm with a variable learning rate and variable momentum parameters is proposed to increase the learning speed of the error backpropagation algorithm in classification problems. The objective function is normalized with respect to the number of patterns and output nodes, and the gradient of the objective function is normalized when updating the connection weights, to increase the effect of the backpropagated error. The learning rate and momentum parameters are determined as functions of the gradient norm and the number of weights: the learning rate depends on the square root of the gradient norm, while the momentum parameters depend on the gradient norm itself. In two typical classification problems, simulation results demonstrate the effectiveness of the proposed algorithm.
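The dependence of the step on the gradient norm and the number of weights can be sketched as follows. The proportionalities (learning rate ~ square root of the gradient norm, momentum ~ the norm itself, applied to a normalized gradient) follow the abstract's description, but the constants, the cap, and the exact combination are assumptions.

```python
import numpy as np

def batch_update(w, v, grad, n_weights):
    """One batch-mode step with learning rate and momentum derived from
    the gradient norm (sketch; exact formulas are not in the abstract)."""
    g_norm = np.linalg.norm(grad)
    lr = np.sqrt(g_norm) / n_weights       # lr ~ sqrt of gradient norm
    mu = min(0.9, g_norm)                  # momentum ~ gradient norm, capped
    direction = grad / (g_norm + 1e-12)    # normalized gradient
    v = mu * v - lr * direction
    return w + v, v
```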


Improving Learning Performance of Support Vector Machine using the Kernel Relaxation and the Dynamic Momentum (Kernel Relaxation과 동적 모멘트를 조합한 Support Vector Machine의 학습 성능 향상)

  • Kim, Eun-Mi;Lee, Bae-Ho
    • The KIPS Transactions:PartB / v.9B no.6 / pp.735-744 / 2002
  • This paper proposes improving the learning performance of the support vector machine by combining kernel relaxation with a dynamic momentum. The dynamic momentum applies a different momentum value according to the current state of training: while a static momentum exerts the same influence throughout, the proposed dynamic momentum algorithm can control the convergence rate and performance as the momentum changes over the course of training. The proposed algorithm is applied to kernel relaxation, a recently presented sequential learning method for the support vector machine, and is evaluated on the SONAR data, a standard classification benchmark for neural networks. The simulation results show that the proposed algorithm achieves a better convergence rate and performance than kernel relaxation with a static momentum.

Variation of activation functions for accelerating the learning speed of the multilayer neural network (다층 구조 신경회로망의 학습 속도 향상을 위한 활성화 함수의 변화)

  • Lee, Byung-Do;Lee, Min-Ho
    • Journal of Sensor Science and Technology / v.8 no.1 / pp.45-52 / 1999
  • In this paper, an enhanced learning method is proposed for improving the learning speed of the error backpropagation algorithm. To cope with the premature saturation phenomenon at the initial learning stage, a variation scheme for activation functions using higher-order functions is introduced, which requires little additional computation. It naturally increases the effective learning rate of the interconnection weights when the derivative of the sigmoid function decreases to an abnormally small value during a learning epoch. A hybrid learning method that combines the proposed scheme with the momentum training algorithm is also suggested. Computer simulation results show that the proposed learning algorithm outperforms conventional methods such as the momentum and delta-bar-delta algorithms.
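The saturation countermeasure, keeping the backpropagated delta from vanishing when the sigmoid derivative becomes very small, might be sketched like this. The fractional power applied to the derivative term is a hypothetical stand-in for the paper's higher-order activation-function variation, which the abstract does not specify exactly.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def output_delta(net, error, order=0):
    """Backpropagated delta at an output unit. order=0 is plain backprop;
    order>0 flattens the derivative factor (a hypothetical form of the
    activation-function variation), so saturated units still learn."""
    y = sigmoid(net)
    deriv = y * (1.0 - y)                 # vanishes when the unit saturates
    return error * deriv ** (1.0 / (1.0 + order))
```

At a saturated unit (large `net`), the plain delta is nearly zero while the varied one remains orders of magnitude larger, which is the effect the abstract attributes to the scheme.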


Edge detection method using unbalanced mutation operator in noise image (잡음 영상에서 불균등 돌연변이 연산자를 이용한 효율적 에지 검출)

  • Kim, Su-Jung;Lim, Hee-Kyoung;Seo, Yo-Han;Jung, Chai-Yeoung
    • The KIPS Transactions:PartB / v.9B no.5 / pp.673-680 / 2002
  • This paper proposes a method for detecting edges using evolutionary programming and a momentum backpropagation algorithm. The evolutionary programming omits the crossover operation, in consideration of algorithm capability and computation cost, and uses only selection and mutation operators. The momentum backpropagation algorithm adds a momentum term when the weights are updated at each learning step. Because the learning rate must be set small in the standard backpropagation algorithm, learning is slow owing to the correspondingly small weight changes at each step; the momentum backpropagation algorithm avoids this problem. In consequence, the EP-MBP method is better than the GA-BP method in both learning time and detection rate, showing reduced learning time and effective edge detection.

An efficient learning algorithm of nonlinear PCA neural networks using momentum (모멘트를 이용한 비선형 주요성분분석 신경망의 효율적인 학습알고리즘)

  • Cho, Yong-Hyun
    • Journal of the Korean Society of Industry Convergence / v.3 no.4 / pp.361-367 / 2000
  • This paper proposes efficient feature extraction from image data using nonlinear principal component analysis neural networks with a new learning algorithm. The proposed method is a learning algorithm with a momentum term for reflecting past trends, which achieves better performance by restraining oscillation while converging toward the global optimum. The proposed algorithm has been applied to a cancer image of $256{\times}256$ pixels and a coin image of $128{\times}128$ pixels, respectively. The simulation results show that the proposed algorithm achieves better convergence and nonlinear feature extraction than the backpropagation and conventional nonlinear PCA neural networks.


Enhanced RBF Network by Using Auto-Tuning Method of Learning Rate, Momentum and ART2

  • Kim, Kwang-baek;Moon, Jung-wook
    • Proceedings of the KAIS Fall Conference / 2003.11a / pp.84-87 / 2003
  • This paper proposes an enhanced RBF network that dynamically arbitrates the learning rate and momentum using a fuzzy system, in order to effectively adjust the connection weights between the middle layer and the output layer of the RBF network. ART2 is applied as the learning structure between the input layer and the middle layer, and the proposed auto-tuning method of the learning rate is applied to adjust the connection weights between the middle layer and the output layer. The improvement of the proposed method in learning speed and convergence is verified by comparing it with the conventional delta-bar-delta algorithm and with the ART2-based RBF network.
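The abstract does not spell out the fuzzy rules, but the kind of arbitration it describes can be sketched with a simple rule base: increase the learning rate and momentum while the error keeps falling, and cut them back when it rises. Everything below, including the thresholds and factors, is an assumed illustration rather than the authors' fuzzy system.

```python
def arbitrate(err_now, err_prev, lr, mu):
    """Crude stand-in for fuzzy arbitration of learning rate and momentum."""
    if err_now < err_prev:           # error falling: speed learning up
        lr = min(lr * 1.1, 1.0)
        mu = min(mu + 0.05, 0.95)
    else:                            # error rising: damp the updates
        lr = max(lr * 0.5, 1e-4)
        mu = max(mu - 0.1, 0.0)
    return lr, mu
```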


An Analysis of Patterns and Characteristics of Momentum Effect on Learning Science Concepts (과학개념 학습지속 효과의 유형과 그 특성 분석)

  • Kwon, Jae-Sool;Kim, Jun-Tae
    • Journal of The Korean Association For Science Education / v.12 no.1 / pp.11-21 / 1992
  • This study investigated the effect of test-item types on the momentum effect in learning science concepts. Previous studies showed that the momentum effect is influenced by students' cognitive levels and by the abstractness of the test items. This study focused on the types of test items, dividing them into four types: quantitative and qualitative, verbal and image. The results showed that qualitative items produced a longer momentum effect than quantitative ones, while image items and verbal items did not differ significantly in the duration of the momentum effect. Interpreting this finding would require careful psychological analysis. In any case, the results reconfirmed the existence of the momentum effect and showed that its study could be a significant research paradigm.


Optimal Algorithm and Number of Neurons in Deep Learning (딥러닝 학습에서 최적의 알고리즘과 뉴론수 탐색)

  • Jang, Ha-Young;You, Eun-Kyung;Kim, Hyeock-Jin
    • Journal of Digital Convergence / v.20 no.4 / pp.389-396 / 2022
  • Deep learning is based on the perceptron and is currently used in various fields such as image recognition, voice recognition, object detection, and drug development. Accordingly, a variety of learning algorithms have been proposed, and the number of neurons constituting a neural network varies greatly among researchers. This study analyzed the learning characteristics, according to the number of neurons, of the currently used SGD, momentum, AdaGrad, RMSProp, and Adam methods. To this end, a neural network was constructed with one input layer, three hidden layers, and one output layer. ReLU was applied as the activation function, cross-entropy error (CEE) as the loss function, and MNIST was used as the experimental dataset. The results indicate that 100-300 neurons, the Adam algorithm, and 200 iterations are the most efficient settings for this deep learning task. This study provides implications for algorithm selection and a reference value for the number of neurons given new training data in the future.
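The optimizer comparison the study reports can be illustrated in miniature on a two-variable quadratic. The learning rates and step counts below are chosen for this toy problem and are not the study's settings; only three of the five compared methods are shown.

```python
import numpy as np

def run(opt, steps=200):
    """Minimize f(w) = 0.5 * (w1^2 + 10*w2^2) with one of three of the
    compared optimizers; returns the final loss."""
    w = np.array([5.0, 5.0]); v = np.zeros(2)
    m = np.zeros(2); s = np.zeros(2)
    scale = np.array([1.0, 10.0])
    for t in range(1, steps + 1):
        g = scale * w                                  # gradient
        if opt == "sgd":
            w = w - 0.01 * g
        elif opt == "momentum":
            v = 0.9 * v - 0.01 * g
            w = w + v
        elif opt == "adam":
            m = 0.9 * m + 0.1 * g                      # first moment
            s = 0.999 * s + 0.001 * g * g              # second moment
            m_hat = m / (1 - 0.9 ** t)                 # bias correction
            s_hat = s / (1 - 0.999 ** t)
            w = w - 0.05 * m_hat / (np.sqrt(s_hat) + 1e-8)
    return 0.5 * float(np.dot(scale * w, w))
```

On this toy problem both momentum and Adam reach a lower loss than plain SGD within the same number of steps, mirroring the ranking the study reports on MNIST.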

An Enhanced Counterpropagation Algorithm for Effective Pattern Recognition (효과적인 패턴 인식을 위한 개선된 Counterpropagation 알고리즘)

  • Kim, Kwang-Baek
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.9 / pp.1682-1688 / 2008
  • The counterpropagation algorithm (CP) combines a Kohonen competitive network as a hidden layer with the Grossberg outstar structure as an output layer. CP has been used in many real applications for pattern matching, classification, data compression, and statistical analysis, since its learning speed is faster than that of other network models. However, owing to the Kohonen layer's winner-takes-all strategy, it often causes unstable learning and/or incorrect pattern classification when the patterns are relatively diverse, and its performance is often criticized as sensitive to the learning rate. In this paper, we propose an enhanced CP that has multiple Kohonen layers, dynamically controls the learning rate using the frequency of winner neurons and the difference between the input vector and the representative of the winner neurons for stable learning, and applies momentum learning to control the weights of the output links. A real-world experiment, pattern recognition from passport information, is designed for the performance evaluation of this enhanced CP and shows that the proposed algorithm improves upon the conventional CP in learning and recognition performance.
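One way the dynamic learning-rate control described above might look in code; the combination rule (rate scaled by the winner's distance from the input and damped by its win frequency) is an assumption, since the abstract names the two ingredients but not the formula.

```python
import numpy as np

def kohonen_update(weights, x, win_count, base_lr=0.5):
    """Winner-takes-all update whose rate shrinks with the winner's win
    frequency and scales with its distance from the input, so overused or
    already well-fitted neurons move less (sketch of the enhanced CP idea)."""
    dists = np.linalg.norm(weights - x, axis=1)
    winner = int(np.argmin(dists))                     # competition
    lr = base_lr * dists[winner] / (1.0 + win_count[winner])
    weights[winner] += lr * (x - weights[winner])      # move winner toward x
    win_count[winner] += 1
    return winner
```

Damping by win frequency counteracts the instability the abstract attributes to plain winner-takes-all learning: a neuron that wins constantly stops being dragged around by every new input.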