• Title/Abstract/Keyword: Layer-by-layer learning

Search results: 661 items (processing time: 0.028 s)

Hangul Recognition Using a Hierarchical Neural Network

  • 최동혁;류성원;강현철;박규태
    • 전자공학회논문지B / Vol. 28B, No. 11 / pp.852-858 / 1991
  • An adaptive hierarchical classifier (AHCL) for Korean character recognition using a neural net is designed. The classifier consists of two neural nets: a USACL (Unsupervised Adaptive Classifier) and a SACL (Supervised Adaptive Classifier). The USACL has an input layer and an output layer, which are fully connected; the nodes of the output layer are generated during learning by an unsupervised nearest-neighbor learning rule. The SACL has an input layer, a hidden layer, and an output layer; the input and hidden layers are fully connected, while the hidden and output layers are partially connected. The nodes of the SACL are generated during learning by a supervised nearest-neighbor learning rule. The USACL has a pre-attentive effect: it performs a partial search instead of a full search during SACL classification, which enhances processing speed. The input to both the USACL and the SACL is a directional edge feature with a directional receptive field. To test the performance of the AHCL, various multi-font printed Hangul characters were used for learning and testing, and its processing speed and classification rate were compared with a conventional LVQ (Learning Vector Quantizer) using the nearest-neighbor learning rule. (A minimal sketch of the two-stage search idea follows.)

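To make the pre-attentive two-stage search concrete, here is a minimal Python sketch: an unsupervised stage spawns coarse prototypes by a nearest-neighbor rule, and classification then searches only the labeled prototypes stored under the winning coarse node instead of the full set. This is an illustrative reconstruction, not the authors' code; the class name, the Euclidean distance, and the `radius` threshold are assumptions.

```python
import numpy as np

class TwoStageNN:
    """Illustrative two-stage nearest-neighbor classifier (not the AHCL
    itself): stage 1 is unsupervised and coarse, stage 2 is supervised."""

    def __init__(self, radius=1.0):
        self.radius = radius   # spawn a new coarse node beyond this distance
        self.coarse = []       # stage-1 prototype vectors (unsupervised)
        self.buckets = []      # per-node labeled prototypes (supervised)

    def _nearest(self, x):
        d = [np.linalg.norm(x - c) for c in self.coarse]
        i = int(np.argmin(d))
        return i, d[i]

    def fit(self, X, y):
        for x, label in zip(X, y):
            if not self.coarse:
                self.coarse.append(x.copy())
                self.buckets.append([])
            i, dist = self._nearest(x)
            if dist > self.radius:              # unsupervised node generation
                self.coarse.append(x.copy())
                self.buckets.append([])
                i = len(self.coarse) - 1
            self.buckets[i].append((x.copy(), label))
        return self

    def predict(self, x):
        i, _ = self._nearest(x)                 # pre-attentive partial search:
        vecs, labels = zip(*self.buckets[i])    # only one bucket is examined
        d = [np.linalg.norm(x - v) for v in vecs]
        return labels[int(np.argmin(d))]

clf = TwoStageNN(radius=2.0).fit(
    np.array([[0.0, 0.0], [0.5, 0.0], [5.0, 5.0], [5.5, 5.0]]),
    ["a", "a", "b", "b"])
print(clf.predict(np.array([0.2, 0.1])))        # -> "a"
```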

Enhanced RBF Network by Using Auto-Tuning Method of Learning Rate, Momentum and ART2

  • Kim, Kwang-baek;Moon, Jung-wook
    • 한국산학기술학회 Conference Proceedings 2003 / pp.84-87 / 2003
  • This paper proposes an enhanced RBF network that dynamically arbitrates the learning rate and momentum with a fuzzy system, in order to effectively adjust the connection weights between the middle layer and the output layer of the RBF network. ART2 is applied as the learning structure between the input layer and the middle layer, and the proposed auto-tuning method of arbitrating the learning rate is applied to adjust the connection weights between the middle layer and the output layer. The improvement of the proposed method in learning speed and convergence is verified by comparing it with the conventional delta-bar-delta algorithm and with an RBF network based on ART2. (A sketch of the rate/momentum arbitration idea follows.)

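The arbitration mechanism can be sketched in a few lines. The paper's fuzzy rules are not reproduced here; this minimal sketch only illustrates the general mechanism of steering the learning rate and momentum from the recent error trend, in the spirit of delta-bar-delta. The multipliers and bounds are arbitrary assumptions.

```python
import numpy as np

def autotune(prev_err, err, lr, mom):
    """Nudge the learning rate and momentum from the error trend.
    A crude stand-in for the paper's fuzzy arbitration rules."""
    if err < prev_err:                # error falling: accelerate gently
        lr, mom = lr * 1.05, min(mom + 0.01, 0.95)
    else:                             # error rising: back off, damp momentum
        lr, mom = max(lr * 0.7, 1e-4), mom * 0.5
    return lr, mom

# toy usage: minimize f(w) = ||w||^2 by momentum gradient descent
w, v = np.array([3.0, -2.0]), np.zeros(2)
lr, mom, prev = 0.1, 0.5, np.inf
for _ in range(50):
    v = mom * v - lr * (2 * w)        # gradient of ||w||^2 is 2w
    w = w + v
    err = float(w @ w)
    lr, mom = autotune(prev, err, lr, mom)
    prev = err
print(w)   # w approaches the origin; rises in error trigger automatic back-off
```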

Multi-layer Neural Network with Hybrid Learning Rules for Improved Robust Capability

  • 정동규;이수영
    • 전자공학회논문지B / Vol. 31B, No. 8 / pp.211-218 / 1994
  • In this paper we develop a hybrid learning rule to improve the robustness of multi-layer perceptrons. In most neural networks, the activation of a neuron is determined by a nonlinear transformation of the weighted sum of its inputs. By investigating the behaviour of hidden-layer activations, a new learning algorithm is developed for improved robustness of multi-layer perceptrons. Unlike other methods, which reduce network complexity by putting restrictions on synaptic weights, our method, based on error back-propagation, increases the complexity of the underlying problem by imposing a saturation requirement on hidden-layer neurons. We also found that the additional gradient-descent term for this requirement corresponds to the Hebbian rule, and our algorithm thus incorporates Hebbian learning into error back-propagation. Computer simulation demonstrates fast learning convergence as well as improved robustness for classification and hetero-association of patterns. (A sketch of the hybrid update follows.)

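The hybrid update, the usual back-propagation term plus a Hebbian-style term that drives tanh hidden units toward saturation, can be sketched for a small network. This is one plausible reading of the abstract, not the paper's exact rule; the saturation term below is gradient ascent on the summed squared hidden activations, and `lam`, the layer sizes, and the XOR demo are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_hybrid(X, T, n_hidden=8, lr=0.1, lam=0.01, epochs=2000):
    """Backprop plus a Hebbian-style term that pushes tanh hidden units
    toward saturation (|h| -> 1); `lam` weights that term."""
    Xb = np.hstack([X, np.ones((len(X), 1))])         # append bias input
    W1 = rng.normal(0, 0.5, (n_hidden, Xb.shape[1]))
    W2 = rng.normal(0, 0.5, (T.shape[1], n_hidden + 1))
    for _ in range(epochs):
        for x, t in zip(Xb, T):
            h = np.append(np.tanh(W1 @ x), 1.0)       # hidden + bias unit
            y = np.tanh(W2 @ h)
            d2 = (y - t) * (1 - y ** 2)               # standard EBP deltas
            d1 = (W2[:, :-1].T @ d2) * (1 - h[:-1] ** 2)
            W2 -= lr * np.outer(d2, h)
            # Hebbian saturation drive: gradient ascent on sum(h**2)/2
            heb = np.outer(h[:-1] * (1 - h[:-1] ** 2), x)
            W1 -= lr * np.outer(d1, x) - lam * heb
    return W1, W2

def forward(W1, W2, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    Hb = np.hstack([np.tanh(Xb @ W1.T), np.ones((len(X), 1))])
    return np.tanh(Hb @ W2.T)

# toy usage: XOR with tanh targets in {-1, +1}
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[-1], [1], [1], [-1]], dtype=float)
W1, W2 = train_hybrid(X, T)
print(forward(W1, W2, X).ravel())   # should approach -1, 1, 1, -1
```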

Comparative Analysis on Error Back Propagation Learning and Layer-by-Layer Learning in Multilayer Perceptrons

  • 곽영태
    • 한국정보통신학회논문지 / Vol. 7, No. 5 / pp.1044-1051 / 2003
  • This paper introduces EBP learning, the cross-entropy cost function, and layer-by-layer learning, all used to train MLPs, and compares their strengths and weaknesses on a handwritten digit recognition task. The experiments show that EBP learning is slower early in training than the other methods, but has the best generalization performance. The cross-entropy function, which remedies this drawback of EBP, learns faster than EBP; however, because the output-layer error signal is linear with respect to the target vector, its generalization performance is lower than EBP's. Layer-by-layer learning is the fastest early in training, but after a certain point learning stops progressing, so it shows the lowest generalization performance. This paper thereby offers criteria for choosing a learning method when applying MLPs. (A worked example of the two output-layer error signals follows.)
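
The claim about the output-layer error signal is easy to check with a worked example. For a sigmoid output unit, the squared-error delta carries the derivative factor y(1-y), while the cross-entropy delta reduces to the plain error y - t. The snippet below is standard textbook algebra, not code from the paper; it shows why a saturated-but-wrong unit learns slowly under squared error and quickly under cross-entropy.

```python
import numpy as np

def output_deltas(y, t):
    """Error signals (dE/dnet) for one sigmoid output unit."""
    mse_delta = (y - t) * y * (1 - y)   # squared error: shrinks as y saturates
    ce_delta = y - t                    # cross-entropy: linear in the error
    return mse_delta, ce_delta

y = np.array([0.95, 0.02])              # unit 0 is saturated and badly wrong
t = np.array([0.00, 1.00])
print(output_deltas(y, t))
# mse_delta[0] is only ~0.045 despite an error of 0.95, while ce_delta[0]
# is 0.95: cross-entropy keeps learning fast where squared error stalls
```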

Learning Model and Application of New Preceding Layer Driven MLP Neural Network

  • 한효진;김동훈;정호선
    • 전자공학회논문지B / Vol. 28B, No. 12 / pp.27-37 / 1991
  • In this paper, a novel PLD (Preceding Layer Driven) MLP (Multi-Layer Perceptron) neural network model and its learning algorithm are described. The learning algorithm differs from conventional ones: integer values are used for the synaptic weights, and a hard-limit function is used as the activation function. The entire learning process is performed layer by layer, and the number of layers can be varied with the difficulty of the training data. Since the synaptic weights are integers, the synapse circuit can easily be implemented in CMOS. The PLD MLP neural network was applied to English character recognition, arbitrary waveform generation, and the spiral problem. (A sketch of the integer-weight, hard-limit building block follows.)

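The building block the abstract describes, integer weights with a hard-limit activation, is easy to illustrate directly. The layer function and the hand-built XOR net below are a minimal sketch, not the paper's PLD training procedure; the weights and thresholds were chosen by hand to show that integer weights plus a hard limit suffice.

```python
import numpy as np

def hard_limit_layer(x, W, theta):
    """One layer: integer weights W, integer thresholds theta,
    hard-limit activation (1 if the weighted sum reaches the threshold)."""
    return (W @ x >= theta).astype(int)

# a hand-built two-layer hard-limit net computing XOR with integer weights
W1, t1 = np.array([[1, 1], [1, 1]]), np.array([1, 2])   # OR unit, AND unit
W2, t2 = np.array([[1, -1]]), np.array([1])             # OR AND NOT-AND
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    h = hard_limit_layer(np.array(x), W1, t1)
    print(x, int(hard_limit_layer(h, W2, t2)[0]))
# prints 0, 1, 1, 0: XOR realized entirely with integer weights
```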

New Approach to Optimize the Size of Convolution Mask in Convolutional Neural Networks

  • Kwak, Young-Tae
    • 한국컴퓨터정보학회논문지 / Vol. 21, No. 1 / pp.1-8 / 2016
  • A convolutional neural network (CNN) consists of a few pairs of convolution and subsampling layers, and thus has more hidden layers than a multi-layer perceptron. With the increased number of layers, the size of the convolution mask ultimately determines the total number of weights in the CNN, because the mask is shared across input images. It is also an important learning factor that makes or breaks the CNN's learning. This paper therefore proposes a method for choosing the convolution mask size and the number of layers needed to train a CNN successfully. Through face recognition experiments with a large number of training examples, we found that the best convolution mask sizes are 5x5 and 7x7, regardless of the number of layers. In addition, a CNN with two pairs of convolution and subsampling layers gives the best performance, just as a multi-layer perceptron with two hidden layers does. (A sketch of the two-pair layout follows.)
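
The reported best configuration, two convolution/subsampling pairs with 5x5 masks, corresponds to a familiar LeNet-style layout. Below is a minimal PyTorch sketch of that shape; the channel counts, the tanh and average-pooling choices, and the 32x32 single-channel input are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class TwoPairCNN(nn.Module):
    """Two convolution/subsampling pairs with 5x5 masks, the layout the
    paper reports as best; channel counts and input size are assumed."""

    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),    # pair 1: 5x5 convolution mask
            nn.Tanh(),
            nn.AvgPool2d(2),                   # ...and its subsampling layer
            nn.Conv2d(6, 16, kernel_size=5),   # pair 2: 5x5 convolution mask
            nn.Tanh(),
            nn.AvgPool2d(2),
        )
        self.classifier = nn.Linear(16 * 5 * 5, n_classes)  # 32x32 in -> 5x5 maps

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = TwoPairCNN()(torch.randn(1, 1, 32, 32))   # shape (1, 10)
```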

3 Steps LVQ Learning Algorithm Using Forward C.P. Net.

  • 이용구;최우승
    • 한국컴퓨터정보학회논문지 / Vol. 9, No. 4 / pp.33-39 / 2004
  • In this paper, an LVQ learning algorithm is designed using an F.C.P. Net. (forward counter-propagation network) to improve the classification performance of the LVQ network. The connection weights between the input layer and the subclass layer of the F.C.P. Net. are initialized as reference vectors and trained using the SOM and LVQ algorithms. Pattern vectors are then classified into subclasses by the neurons of the subclass layer, and the connection weights between the subclass layer and the output layer are trained to assign the classified subclasses to classes. Once the number of classes is determined, the numbers of neurons in the input, subclass, and output layers can also be determined. To verify the performance of the proposed learning algorithm, simulations were run using Fisher's Iris data as training and test vectors; the proposed method achieved a higher classification success rate than the conventional LVQ. (A sketch of the three-step scheme follows.)

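The three-step scheme can be sketched end to end in plain numpy: an unsupervised SOM-style pass places the subclass reference vectors, each subclass neuron is mapped to the class it most often wins, and an LVQ1 pass fine-tunes the vectors. The step granularity, the majority-vote mapping, and LVQ1 as the update rule are assumptions for this sketch; the paper's F.C.P. Net. details are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def three_step_lvq(X, y, n_sub=6, lr=0.1, epochs=30):
    """(1) SOM-style placement, (2) subclass-to-class mapping,
    (3) LVQ1 fine-tuning. A simplified sketch, not the paper's net."""
    protos = X[rng.choice(len(X), n_sub, replace=False)].astype(float)

    def winner(x):
        return int(np.argmin(((protos - x) ** 2).sum(axis=1)))

    for _ in range(epochs):                    # (1) unsupervised placement
        for x in X:
            w = winner(x)
            protos[w] += lr * (x - protos[w])

    labels = np.empty(n_sub, dtype=y.dtype)    # (2) majority-vote mapping
    for j in range(n_sub):
        won = [yi for xi, yi in zip(X, y) if winner(xi) == j]
        labels[j] = max(set(won), key=won.count) if won else y[0]

    for _ in range(epochs):                    # (3) LVQ1 fine-tuning
        for x, yi in zip(X, y):
            w = winner(x)
            sign = 1.0 if labels[w] == yi else -1.0
            protos[w] += sign * lr * (x - protos[w])
    return protos, labels

# toy usage with two Gaussian blobs (the paper uses Fisher's Iris data)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
protos, labels = three_step_lvq(X, y)
test = np.array([2.9, 3.1])
print(labels[np.argmin(((protos - test) ** 2).sum(axis=1))])   # -> 1
```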

A Neural Network with Local Weight Learning and Its Application to the Inverse Kinematic Robot Solution

  • 이인숙;오세영
    • 제어로봇시스템학회 Conference Proceedings / 1990 Korean Automatic Control Conference (domestic session); KOEX, Seoul; 26-27 Oct. 1990 / pp.36-40 / 1990
  • Conventional back-propagation learning is generally characterized by slow and rather inaccurate learning, which makes it difficult to use in control applications. A new multilayer perceptron architecture and its learning algorithm are proposed, consisting of a Kohonen front layer followed by a back-propagation network. The Kohonen layer selects a subset of the hidden-layer neurons for local tuning. This architecture has been tested on the inverse kinematic solution of a robot manipulator, demonstrating fast and accurate learning. (A sketch of the local-tuning step follows.)

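The local-learning step can be sketched as follows: a Kohonen front layer picks the hidden units nearest the input, and only those units take part in the forward pass and the weight update. The top-k selection, the linear output layer, and the learning rates are illustrative assumptions; the paper's exact architecture may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

def local_bp_step(x, t, centers, W1, W2, lr=0.05, k=2):
    """One training step: the Kohonen layer selects k hidden units,
    and backprop updates only their weights (local tuning)."""
    d = ((centers - x) ** 2).sum(axis=1)
    active = np.argsort(d)[:k]              # subset chosen by the front layer
    h = np.zeros(len(centers))
    h[active] = np.tanh(W1[active] @ x)     # only the selected units fire
    y = W2 @ h                              # linear output layer
    e = y - t
    d1 = (W2[:, active].T @ e) * (1 - h[active] ** 2)
    W2[:, active] -= lr * np.outer(e, h[active])       # local output update
    W1[active] -= lr * np.outer(d1, x)                 # local hidden update
    centers[active] += lr * (x - centers[active])      # Kohonen update
    return float(e @ e)

# toy usage: squared error should trend downward over repeated steps
n_in, n_hid, n_out = 3, 10, 2
centers = rng.normal(size=(n_hid, n_in))
W1 = rng.normal(size=(n_hid, n_in))
W2 = rng.normal(size=(n_out, n_hid))
x, t = rng.normal(size=n_in), rng.normal(size=n_out)
print([round(local_bp_step(x, t, centers, W1, W2), 4) for _ in range(5)])
```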

Multilayer Neural Network Using Delta Rule: Recognitron III

  • 김춘석;박충규;이기한;황희영
    • 대한전기학회논문지 / Vol. 40, No. 2 / pp.224-233 / 1991
  • The multilayer expansion of single-layer neural networks (NNs) was needed to solve the linear separability problem, as shown by the classic XOR example. The EBP (Error Back-Propagation) learning rule is often used in multilayer neural networks, but it is not without faults: 1) D. Rumelhart expanded the delta rule, but there is a problem in obtaining the output Ca from the linear combination of H with the weight matrix N between the hidden and output layers, where H is itself the result of a linear combination of the input pattern with the weight matrix M between the input and hidden layers. 2) Even if using the difference between Ca and Da to adjust the weight matrix N between the hidden and output layers is valid, using the same value to adjust the weight matrix M between the input and hidden layers is wrong. Recognitron III was proposed to solve these faults. According to simulation results, since Recognitron III does not train the three-layer NN as a whole, but divides it into several single-layer NNs and trains these with learning patterns, its learning time is 32.5 to 72.2 times faster than that of an EBP NN. The number of patterns learned by an EBP NN with n input and output cells and n+1 hidden cells is 2**n, but it is n in a Recognitron III of the same size [5]. In pattern generalization, however, the EBP NN falls short of Recognitron III. (A sketch of the layer-wise decomposition follows.)

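The layer-wise decomposition Recognitron III relies on, training each layer as its own single-layer net with the delta rule rather than back-propagating through the whole stack, can be sketched on XOR. The hidden target code below (OR and AND of the inputs, which makes XOR linearly separable at the output) is a hand-chosen assumption; the paper's own decomposition is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def delta_rule_layer(X, T, lr=0.5, epochs=2000):
    """Train a single-layer sigmoid net with the plain delta rule;
    a bias input is appended internally."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    W = rng.normal(0, 0.1, (T.shape[1], Xb.shape[1]))
    for _ in range(epochs):
        for x, t in zip(Xb, T):
            y = sigmoid(W @ x)
            W += lr * np.outer((t - y) * y * (1 - y), x)
    return W

def layer_out(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return sigmoid(Xb @ W.T)

# train the three-layer net as two independent single-layer nets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
H_target = np.array([[0, 0], [1, 0], [1, 0], [1, 1]], dtype=float)  # OR, AND
Y_target = np.array([[0], [1], [1], [0]], dtype=float)              # XOR
W1 = delta_rule_layer(X, H_target)                 # layer 1 on its own targets
W2 = delta_rule_layer(layer_out(W1, X), Y_target)  # layer 2 on layer 1's output
print(np.round(layer_out(W2, layer_out(W1, X)).ravel(), 2))
# the four outputs approach the XOR targets 0, 1, 1, 0
```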

A Layer-by-Layer Learning Algorithm Using Correlation Coefficient for Multilayer Perceptrons

  • 곽영태
    • 한국컴퓨터정보학회논문지 / Vol. 16, No. 8 / pp.39-47 / 2011
  • Ergezinger's method, one of the layer-by-layer learning methods for multilayer perceptrons, uses a single output node and trains the output-layer weights by the least-squares method, so premature saturation can occur in the output-layer weights. This premature saturation hampers learning time and convergence speed. This paper therefore extends Ergezinger's method to an algorithm that can train the output layer in vector form, and introduces a learning constant to improve learning time and convergence speed. The learning constant is variable: when the hidden-layer weights are updated, the correlation between the newly computed weights and the existing weights is computed and reflected in the learning constant. To compare the proposed method with the existing one, experiments were run on the iris problem and a nonlinear approximation problem. The proposed method gave better results than Ergezinger's method in learning time and convergence speed, and in CPU-time measurements that include the correlation computation it saved about 35% of the time required by the existing method. (A sketch of the correlation-driven learning constant follows.)
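
The variable learning constant can be illustrated in a few lines: the step from the old hidden-layer weights toward the newly computed ones is scaled by a factor derived from their correlation coefficient. The specific mapping below (the step shrinks as the correlation grows) is an assumption for illustration; the paper defines its own rule.

```python
import numpy as np

def correlation_scaled_update(W_old, W_new, eta0=1.0):
    """Blend old and newly computed weights using a learning constant
    driven by their correlation coefficient."""
    r = np.corrcoef(W_old.ravel(), W_new.ravel())[0, 1]
    eta = eta0 * (1.0 - abs(r))     # highly correlated update -> small step
    return W_old + eta * (W_new - W_old), r

# toy usage: the new weights barely differ, so the step stays small
W_old = np.array([[0.20, -0.40], [0.70, 0.10]])
W_new = np.array([[0.25, -0.38], [0.72, 0.05]])
W_next, r = correlation_scaled_update(W_old, W_new)
print(round(r, 3))   # r is close to 1, hence eta is close to 0
```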