• Title/Summary/Keyword: Constant Learning

Search results: 250 (processing time: 0.026 seconds)

Self-Organizing Feature Map with Constant Learning Rate and Binary Reinforcement (일정 학습계수와 이진 강화함수를 가진 자기 조직화 형상지도 신경회로망)

  • 조성원;석진욱
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.32B no.1
    • /
    • pp.180-188
    • /
    • 1995
  • A modified Kohonen self-organizing feature map (SOFM) algorithm with a binary reinforcement function and a constant learning rate is proposed. In contrast to the time-varying adaptation gain of the original Kohonen SOFM algorithm, the proposed algorithm uses a constant adaptation gain and adds a binary reinforcement function to compensate for the lowered learning ability of the SOFM caused by the constant learning rate. Since the proposed algorithm does not require the complicated multiplications, its digital hardware implementation is much easier than that of the original SOFM. A minimal illustrative sketch of such an update rule is given after this entry.

  • PDF
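
A minimal NumPy sketch of what a constant-gain SOFM update with a binary reinforcement term could look like, as referenced in the abstract above; the neighborhood test, the ±1 reinforcement convention, and the gain value are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def sofm_step(weights, x, alpha=0.1):
    """One SOFM update with a constant learning rate and a binary
    reinforcement term (illustrative sketch, not the paper's exact rule).

    weights: (rows, cols, dim) grid of codebook vectors
    x:       (dim,) input vector
    alpha:   constant adaptation gain (no time-varying decay)
    """
    rows, cols, _ = weights.shape
    # Find the best-matching unit (winner) on the map.
    dists = np.linalg.norm(weights - x, axis=2)
    win = np.unravel_index(np.argmin(dists), dists.shape)

    for i in range(rows):
        for j in range(cols):
            # Binary reinforcement: +1 inside the winner's neighborhood,
            # -1 outside (assumed sign convention).
            inside = abs(i - win[0]) <= 1 and abs(j - win[1]) <= 1
            r = 1.0 if inside else -1.0
            weights[i, j] += alpha * r * (x - weights[i, j])
    return weights
```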

Competitive Learning Neural Network with Binary Reinforcement and Constant Adaptation Gain (일정적응 이득과 이진 강화함수를 갖는 경쟁 학습 신경회로망)

  • Seok, Jin-Wuk;Cho, Seong-Won;Choi, Gyung-Sam
    • Proceedings of the KIEE Conference
    • /
    • 1994.11a
    • /
    • pp.326-328
    • /
    • 1994
  • A modified Kohonen simple competitive learning (SCL) algorithm with a binary reinforcement function and a constant adaptation gain is proposed. In contrast to the time-varying adaptation gain of the original Kohonen SCL algorithm, the proposed algorithm uses a constant adaptation gain and adds a binary reinforcement function to compensate for the lowered learning ability of SCL caused by the constant adaptation gain. Since the proposed algorithm does not require the complicated multiplications, its digital hardware implementation is much easier than that of the original SCL. A compact form of this winner-take-all update is given after this entry.

  • PDF
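
For reference, the winner-take-all form of such a constant-gain update can be written as below; the notation and the assumed binary reinforcement r(x) ∈ {+1, −1} are ours, not the paper's.

```latex
% Winner-take-all update with a constant adaptation gain alpha and an assumed
% binary reinforcement r(x) in {+1, -1}; only the winning neuron c is updated.
\[
c = \arg\min_{i}\,\lVert \mathbf{x}_k - \mathbf{w}_i(k) \rVert,
\qquad
\mathbf{w}_c(k+1) = \mathbf{w}_c(k) + \alpha\, r(\mathbf{x}_k)\,\bigl(\mathbf{x}_k - \mathbf{w}_c(k)\bigr).
\]
```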

Maximization of Zero-Error Probability for Adaptive Channel Equalization

  • Kim, Nam-Yong;Jeong, Kyu-Hwa;Yang, Liuqing
    • Journal of Communications and Networks
    • /
    • v.12 no.5
    • /
    • pp.459-465
    • /
    • 2010
  • A new blind equalization algorithm based on maximizing the probability that the constant modulus errors concentrate near zero is proposed. The cost function of the proposed algorithm is the probability that the equalizer output power equals the constant modulus of the transmitted symbols, and this probability is maximized. Two blind information-theoretic learning (ITL) algorithms based on constant modulus error signals are also introduced: one for minimizing the Euclidean probability density function distance and the other for minimizing the constant modulus error entropy. The relations between the algorithms and their characteristics are investigated, and their performance is compared and analyzed through simulations in multi-path channel environments. The proposed algorithm has lower computational complexity and faster convergence than the other ITL algorithms based on a constant modulus error. The error samples of the proposed blind algorithm exhibit more concentrated density functions and superior error-rate performance in severe multi-path channel environments compared with the other algorithms.
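
As a hedged illustration of the general idea, the constant modulus error and a Parzen-window estimate of its density at zero can be written as follows; the Gaussian kernel and notation are assumptions, and the paper's exact cost may differ.

```latex
% Constant modulus error for equalizer output y_k and modulus R_2, and a
% Parzen-window (Gaussian kernel) estimate of the error density at zero;
% adapting the equalizer to maximize this estimate concentrates the errors
% near zero (illustrative formulation, not necessarily the paper's exact cost).
\[
e_k = \lvert y_k \rvert^{2} - R_2,
\qquad
\hat{f}_E(0) = \frac{1}{N}\sum_{k=1}^{N} G_{\sigma}(e_k)
= \frac{1}{N}\sum_{k=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma}
\exp\!\Bigl(-\frac{e_k^{2}}{2\sigma^{2}}\Bigr).
\]
```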

Production Volume Forecasting of Each Manufactured Goods by Neural Networks (신경회로망에 의한 제품별 생산량 예측에 관한 연구)

  • Lee, Oh-Keol;Lee, Joon-Tark
    • Proceedings of the KIPE Conference
    • /
    • 2001.07a
    • /
    • pp.298-300
    • /
    • 2001
  • This paper presents a forecasting method for the production volume of each model of manufactured goods using the back-propagation technique of neural networks. When the learning constant and the momentum constant are 0.65 and 0.94, respectively, the number of learning iterations is the smallest and the forecasting accuracy is the highest. When the learning process is repeated more than 1,000 times, accurate forecasting is possible regardless of the kind of product. The quoted update rule is sketched after this entry.

  • PDF
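
A minimal sketch of the generic back-propagation weight update with the learning constant and momentum constant quoted in the abstract; the network itself and the gradient computation are not shown and are not taken from the paper.

```python
import numpy as np

LEARNING_RATE = 0.65   # "learning constant" quoted in the abstract
MOMENTUM = 0.94        # "momentum constant" quoted in the abstract

def update_weights(w, grad, velocity):
    """Gradient-descent step with momentum (generic rule, not the paper's
    specific network). `grad` is the error gradient w.r.t. the weights."""
    velocity = MOMENTUM * velocity - LEARNING_RATE * grad
    return w + velocity, velocity
```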

Improvement of Track Tracking Performance Using Deep Learning-based LSTM Model (딥러닝 기반 LSTM 모형을 이용한 항적 추적성능 향상에 관한 연구)

  • Hwang, Jin-Ha;Lee, Jong-Min
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.05a
    • /
    • pp.189-192
    • /
    • 2021
  • This study applies a deep learning-based long short-term memory (LSTM) model to track tracking technology. In the existing track tracking technology, the weights for constant-velocity, constant-acceleration, stiff-turn, and circular (3D) flight are changed automatically while tracking in real time, using a Kalman-filter-based LMIPDA, according to the flight characteristics of the aircraft. In this process, the weight-switching performance needs to be improved, because a change of flight characteristics, such as a stiff turn during constant-velocity flight, can cause loss of track and degrade tracking performance. This study improves track tracking performance by predicting changes of flight characteristics in advance and switching the flight-characteristic weights rapidly. To this end, a deep learning-based LSTM model is trained on the plots and targets of a simulator that applies a radar error model, and the tracking results obtained with the Kalman filter are compared with those of the LSTM model. A minimal sketch of such an LSTM model follows this entry.

  • PDF
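
A minimal PyTorch sketch of the kind of LSTM regressor such a study might use to predict the next track state (or flight-characteristic weights) from a window of past plots; the feature count, hidden size, and output dimension are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class TrackLSTM(nn.Module):
    """Maps a window of past track measurements (e.g., position/velocity
    plots) to a predicted next state or to flight-characteristic weights.
    All dimensions are illustrative."""
    def __init__(self, n_features=6, hidden=64, n_outputs=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):             # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)         # out: (batch, seq_len, hidden)
        return self.head(out[:, -1])  # prediction from the last time step

# Example: predict from a window of 10 past plots with 6 features each.
model = TrackLSTM()
window = torch.randn(32, 10, 6)
pred = model(window)                  # shape: (32, 4)
```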

Hybrid Neural Networks for Pattern Recognition

  • Kim, Kwang-Baek
    • Journal of information and communication convergence engineering
    • /
    • v.9 no.6
    • /
    • pp.637-640
    • /
    • 2011
  • Hybrid neural networks have characteristics such as fast learning times, generality, and simplicity, and are mainly used to classify learning data and to model non-linear systems. The middle layer of a hybrid neural network clusters the learning vectors by grouping homogeneous vectors in the same cluster. In the clustering procedure, the homogeneity between learning vectors is represented as the distance between the vectors. Therefore, if the distances between a learning vector and all vectors in a cluster are smaller than a given constant radius, the learning vector is added to the cluster. However, the use of a constant radius in clustering is the primary source of errors and therefore decreases the recognition success rate. To improve the recognition success rate, we propose an enhanced hybrid network that organizes the middle layer effectively by using an enhanced ART1 network that adjusts the vigilance parameter dynamically according to the similarity between patterns. The results of experiments on a large number of calling card images show that the proposed algorithm greatly improves character extraction and recognition compared with conventional recognition algorithms.
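
A small sketch of the constant-radius clustering decision described above; the radius value and the example data are illustrative, and the enhanced ART1-style vigilance adjustment is only noted in a comment rather than reproduced, since the paper defines its own rule.

```python
import numpy as np

def assign_to_cluster(x, centers, radius=1.0):
    """Constant-radius rule from the abstract: x joins the first cluster whose
    center lies within the fixed radius; otherwise it starts a new cluster.
    The enhanced network described in the paper replaces this fixed radius
    with an ART1-style vigilance adjusted by the similarity between patterns."""
    for k, center in enumerate(centers):
        if np.linalg.norm(x - center) < radius:
            return k          # homogeneous enough: join the existing cluster
    centers.append(x.copy())  # no cluster is close enough: open a new one
    return len(centers) - 1

# Example usage with two-dimensional learning vectors.
centers = []
for vec in [np.array([0.0, 0.0]), np.array([0.2, 0.1]), np.array([3.0, 3.0])]:
    assign_to_cluster(vec, centers, radius=1.0)
print(len(centers))  # 2 clusters
```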

A Study on the Cost-Volume-Profit Analysis Adjusted for Learning Curve (C.V.P. 분석에 있어서 학습곡선의 적용에 관한 연구)

  • 연경화
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.5 no.6
    • /
    • pp.69-78
    • /
    • 1982
  • Traditional CVP (cost-volume-profit) analysis employs linear cost and revenue functions within some specified time period and range of operations; CVP analysis therefore assumes constant labor productivity. The use of linear cost functions implicitly assumes, among other things, that the firm's labor force is either a homogeneous group or a collection of homogeneous subgroups in a constant mix, and that total production changes in a linear fashion through appropriate increases or decreases of seemingly interchangeable labor units. But productivity rates in many firms are known to change with additional manufacturing experience and employee skill. The learning curve is intended to subsume the effects of all these sources of productivity. This learning phenomenon is quantifiable in the form of a learning curve, or manufacturing progress function. The purpose of this study is to show how alternative assumptions regarding a firm's labor force may be utilized by integrating conventional CVP analysis with learning curve theory. Explicit consideration of the effect of learning should substantially enrich CVP analysis and improve its use as a tool for planning and control in industry. An illustrative learning-curve CVP formulation is shown after this entry.

  • PDF
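
For context, a textbook way to fold a Wright-type learning curve into the CVP profit function is shown below; the symbols are ours and the paper's exact model may differ.

```latex
% Wright learning curve: the cumulative average labor hours per unit after X
% units is a X^{b}, with b = log(learning rate)/log(2) < 0, so cumulative
% labor hours are a X^{1+b}. Folding this into the profit function:
\[
\pi(X) \;=\; pX \;-\; F \;-\; v_m X \;-\; w\,a X^{1+b},
\]
% p: unit price, F: fixed cost, v_m: non-labor variable cost per unit,
% w: wage rate. The term w a X^{1+b} replaces the constant-productivity
% labor cost of conventional linear CVP analysis.
```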

The dynamics of self-organizing feature map with constant learning rate and binary reinforcement function (시불변 학습계수와 이진 강화 함수를 가진 자기 조직화 형상지도 신경회로망의 동적특성)

  • Seok, Jin-Uk;Jo, Seong-Won
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.2 no.2
    • /
    • pp.108-114
    • /
    • 1996
  • We present proofs of the stability and convergence of the self-organizing feature map (SOFM) neural network with a time-invariant learning rate and a binary reinforcement function. One of the major problems in the self-organizing feature map neural network concerns the learning rate, analogous to the Kalman filter gain in the stochastic control field, which is a monotonically decreasing function converging to 0 so that the minimum variance property is satisfied. In this paper, we show the stability and convergence of the self-organizing feature map neural network with a time-invariant learning rate. The analysis of the proposed algorithm shows that stability and convergence are guaranteed, with exponential stability and weak convergence properties as well. The classical gain conditions this refers to are recalled after this entry.

  • PDF
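
For context, the classical stochastic-approximation (decreasing-gain) conditions that the abstract alludes to, contrasted with the constant gain analyzed in the paper; these are standard conditions, not results from the paper.

```latex
% Classical decreasing-gain (stochastic approximation) conditions versus the
% constant gain analyzed in the paper.
\[
\sum_{k=1}^{\infty} \alpha(k) = \infty,
\qquad
\sum_{k=1}^{\infty} \alpha^{2}(k) < \infty
\quad \text{(time-varying gain, e.g. } \alpha(k) = 1/k\text{)};
\qquad
\alpha(k) \equiv \alpha_{0} \quad \text{(constant gain).}
\]
```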

A Study on the Parameter Estimation of an Induction Motor using Neural Networks (신경회로망을 이용한 유도전동기의 피라미터 추정)

  • 류한민;김성환;박태식;유지윤
    • Proceedings of the KIPE Conference
    • /
    • 1998.07a
    • /
    • pp.225-229
    • /
    • 1998
  • If there is a mismatch between the rotor time constant programmed in the controller and the actual time constant of the motor, the decoupling between flux and torque is lost in indirect rotor field oriented control. This paper presents a new estimation scheme for the rotor time constant using artificial neural networks. The parameters of the induction motor model are organized as the weights between the neurons of a two-layer neural network, which is newly proposed in this paper. This method makes the network simple, which brings not only an improvement in speed but also a simplification of the calculation. Furthermore, it is possible to estimate the rotor time constant in real time through on-line learning, without off-line learning. Digital simulation and experimental results that verify the effectiveness of the new method are described in this paper. The standard slip relation involved is shown after this entry.

  • PDF
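
For background on why the rotor time constant matters here, the standard indirect field-oriented control slip relation is shown below (in our notation, assuming constant rotor flux); a mismatch in the programmed rotor time constant in this relation is what breaks the flux/torque decoupling.

```latex
% Indirect rotor-field-oriented control: the slip frequency command depends on
% the rotor time constant tau_r = L_r / R_r, so a mismatch between the
% programmed estimate and the actual tau_r breaks the flux/torque decoupling.
\[
\omega_{sl}^{*} \;=\; \frac{1}{\hat{\tau}_r}\,\frac{i_{qs}^{*}}{i_{ds}^{*}},
\qquad \tau_r = \frac{L_r}{R_r}.
\]
```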

Production Volume Forecast using Neural Networks (신경회로망을 이용한 생산량 예측에 관한 연구)

  • Lee, Oh-Keol;Song, Ho-Shin
    • Proceedings of the KIEE Conference
    • /
    • 2001.07e
    • /
    • pp.62-64
    • /
    • 2001
  • This paper presents a forecasting method for the production volume of each model of manufactured goods using the back-propagation technique of neural networks. When the learning constant and the momentum constant are 0.65 and 0.94, respectively, the number of learning iterations is the smallest and the forecasting accuracy is the highest. When the learning process is repeated more than 1,000 times, accurate forecasting is possible regardless of the kind of product.

  • PDF