• Title/Abstract/Keyword: Backpropagation(BP)

Search results: 55 items (processing time 0.029 s)

Pattern Recognition Using the BP Learning Algorithm of a Multiple-Valued Logic Neural Network

  • 김두완;정환묵
    • 한국지능시스템학회:학술대회논문집
    • /
    • 한국퍼지및지능시스템학회 2002 Fall Conference and General Assembly
    • /
    • pp.502-505
    • /
    • 2002
  • This paper proposes a method for pattern recognition using the BP (backpropagation) learning algorithm of a multiple-valued logic (MVL) neural network. By applying the MVL neural network to pattern recognition, the time and memory space the network requires can be minimized, and the possibility of adapting to environmental changes is demonstrated. The MVL neural network is constructed from multiple-valued logic functions: inputs are transformed by literal functions, outputs are obtained with MIN and MAX operations, and learning uses the partial derivatives of the multiple-valued logic expressions.
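
The MIN/MAX evaluation described above can be sketched as follows. The window-literal form and the ternary radix are illustrative assumptions, not the paper's exact construction.

```python
# Sketch of an MVL-style forward pass (assumed form): inputs pass through
# window "literal" functions, each term combines its literals with MIN
# (AND-like), and the output is the MAX (OR-like) over all terms.
def literal(x, a, b, k=3):
    # Window literal for radix k: returns k-1 when a <= x <= b, else 0.
    return k - 1 if a <= x <= b else 0

def mvl_forward(inputs, terms):
    # Each term is a list of (a, b) windows, one per input variable.
    term_values = [min(literal(x, a, b) for x, (a, b) in zip(inputs, term))
                   for term in terms]
    return max(term_values)
```

Training then differentiates such expressions piecewise, since MIN/MAX pass the gradient only through the selected argument.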

BACKPROPAGATION BASED ON THE CONJUGATE GRADIENT METHOD WITH THE LINEAR SEARCH BY ORDER STATISTICS AND GOLDEN SECTION

  • Choe, Sang-Woong;Lee, Jin-Choon
    • 한국지능시스템학회:학술대회논문집
    • /
    • 한국퍼지및지능시스템학회 1998년도 The Third Asian Fuzzy Systems Symposium
    • /
    • pp.107-112
    • /
    • 1998
  • In this paper, we propose a new paradigm (NEW_BP) capable of overcoming the limitations of traditional backpropagation (OLD_BP). NEW_BP is based on the method of conjugate gradients with normalized direction vectors and computes the step size through a line search characterized by order statistics and the golden section. Simulation results showed that NEW_BP was clearly superior to both the stochastic and the deterministic OLD_BP in terms of accuracy and rate of convergence, and suggested that it may surmount the problem of local minima. Furthermore, they confirmed that the stagnation of training in OLD_BP stems from limitations of the algorithm itself and that superficial remedies will never cure it.
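
The golden-section part of the step-size computation can be illustrated with a standard golden-section line search; this is a generic sketch, not the authors' exact procedure (their order-statistics component is omitted).

```python
# Golden-section search: minimize a unimodal function f on [lo, hi] by
# shrinking the bracket with the inverse golden ratio at each step.
def golden_section_search(f, lo, hi, tol=1e-6):
    phi = (5 ** 0.5 - 1) / 2  # ~0.618, the inverse golden ratio
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c            # minimum lies in [a, d]; reuse c as new d
            c = b - phi * (b - a)
        else:
            a, c = c, d            # minimum lies in [c, b]; reuse d as new c
            d = a + phi * (b - a)
    return (a + b) / 2
```

In a BP context, `f` would be the training loss along the current search direction as a function of the step size.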

Adaptive Error Constrained Backpropagation Algorithm

  • 최수용;고균병;홍대식
    • 한국통신학회논문지
    • /
    • Vol. 28, No. 10C
    • /
    • pp.1007-1012
    • /
    • 2003
  • To improve the learning speed of the standard BP algorithm for multilayer perceptrons (MLPs), we propose constrained optimization techniques and apply them to the backpropagation (BP) algorithm. First, the noise-constrained least mean square (NCLMS) algorithm and the zero-noise-constrained LMS (ZNCLMS) algorithm are applied to BP. These algorithms require assumptions that strongly limit their use: the NCBP algorithm, based on NCLMS, assumes that the exact noise power is known, while the ZNCBP algorithm, based on ZNCLMS, assumes the noise power is zero, i.e., ignores noise during training. In this paper, we modify the cost function using an augmented Lagrangian multiplier. This removes the noise assumptions and extends and generalizes the ZNCBP and NCBP algorithms, yielding the adaptive error constrained BP (AECBP) algorithm. The proposed algorithms converged about 30 times faster than the standard BP algorithm, with a convergence speed nearly equal to that of a general linear filter.
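
The constrained-LMS flavor of this idea can be sketched with a single-neuron update in which the squared error is pushed toward a target noise power via an adaptively updated multiplier. The function name, the update rule, and all constants here are illustrative assumptions, not the paper's derivation.

```python
import numpy as np

# Error-constrained LMS-style step (illustrative): the Lagrange multiplier
# lam scales the gradient and is itself adapted toward satisfying the
# constraint "squared error == sigma2" (the assumed noise power).
def aec_step(w, x, d, lam, mu=0.05, gamma=0.01, sigma2=0.0):
    e = d - w @ x                       # instantaneous output error
    w = w + mu * (1.0 + lam) * e * x    # multiplier-scaled LMS weight update
    lam = max(0.0, lam + gamma * (e * e - sigma2))  # adapt the multiplier
    return w, lam, e
```

With `sigma2 = 0` this behaves like a zero-noise-constrained (ZNCBP-style) update; supplying a known noise power corresponds to the NCBP-style setting.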

Enhanced Backpropagation: Algorithm and Numeric Examples

  • 한홍수;최상웅;정현식;노정구
    • 경영과정보연구
    • /
    • Vol. 2
    • /
    • pp.75-93
    • /
    • 1998
  • In this paper, we propose a new algorithm (N_BP) capable of overcoming the limitations of traditional backpropagation (O_BP). The N_BP is based on the method of conjugate gradients and calculates the learning parameters through a line search characterized by order statistics and the golden section. Experimental results showed that the N_BP was clearly superior to the O_BP, with and without a stochastic term, in terms of accuracy and rate of convergence, and suggested that it may surmount the problem of local minima. Furthermore, they confirmed that the stagnation of learning in the O_BP stems from limitations of the algorithm itself and that superficial remedies will never cure it.
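
The conjugate-gradient core of this family of methods can be sketched as follows (Fletcher-Reeves variant shown as one common choice; the order-statistics/golden-section step-size rule is abstracted behind a `line_search` callable, and none of this is the paper's exact N_BP).

```python
import numpy as np

# Conjugate-gradient minimization skeleton (Fletcher-Reeves beta).
def cg_minimize(grad_f, w, line_search, n_steps=10):
    g = grad_f(w)
    d = -g                                        # first step: steepest descent
    for _ in range(n_steps):
        alpha = line_search(w, d)                 # step size along direction d
        w = w + alpha * d
        g_new = grad_f(w)
        beta = (g_new @ g_new) / (g @ g + 1e-12)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d                     # next conjugate direction
        g = g_new
    return w
```

On an n-dimensional quadratic with exact line search, this converges in at most n steps, which is the speed advantage over plain gradient-descent BP that the abstract points to.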

A Study on a Methodology of Soil Resistivity Estimation Using the BP Algorithm

  • 류보혁;위원석;김정훈
    • 대한전기학회논문지:전력기술부문A
    • /
    • Vol. 51, No. 2
    • /
    • pp.76-82
    • /
    • 2002
  • This paper presents a method of soil-resistivity estimation using a backpropagation (BP) neural network. Existing estimation programs are expensive, their estimation methods require complex techniques and much time, and they are not yet widely used in Korea. A soil-resistivity estimation method using the BP algorithm was studied for these reasons. Unlike expensive programs or graphical techniques that require many input stages, complicated calculations, and professional knowledge, the proposed method presents the equivalent earth resistivity immediately after the apparent resistivity is entered on a personal computer running a simple program, without many processing stages. The program offers reasonable accuracy, rapid processing time, and ease of use.

Genetic Algorithm with a Local Fine-Tuning Mechanism

  • 임영희
    • 인지과학
    • /
    • Vol. 4, No. 2
    • /
    • pp.181-200
    • /
    • 1994
  • In training multilayer neural networks, the backpropagation algorithm has the drawbacks that the system can fall into local minima and that the performance of the neural network depends heavily on the parameters of the search space. To compensate for these drawbacks, genetic algorithms have been introduced for neural network training. However, a genetic algorithm lacks a mechanism for fine-tuned local search such as that of backpropagation, so the system needs much time to converge to the global optimum. This paper therefore proposes a new GA-BP method that provides the genetic algorithm with a local fine-tuning mechanism by treating the gradient descent method of backpropagation as a genetic operator, alongside crossover and mutation. To show the usefulness of the proposed method, experiments were performed on the 3-parity bit problem.
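
The "gradient descent as a genetic operator" idea can be sketched on a toy 1-D objective: a plain GA whose offspring each take one gradient step before entering the population. Population size, rates, and the objective are illustrative assumptions, not the paper's experimental setup.

```python
import random

# GA with a local fine-tuning operator: crossover and mutation produce a
# child, then a single gradient-descent step refines it locally.
def ga_with_gradient_operator(f, grad_f, pop_size=20, gens=100, lr=0.05):
    pop = [random.uniform(-5, 5) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)                        # lower f is fitter
        parents = pop[: pop_size // 2]         # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                # crossover: midpoint
            child += random.gauss(0, 0.1)      # mutation: Gaussian noise
            child -= lr * grad_f(child)        # gradient-descent operator
            children.append(child)
        pop = parents + children
    return min(pop, key=f)
```

The gradient operator pulls each child toward a nearby minimum, while selection, crossover, and mutation keep the search global.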

Learning an Artificial Neural Network Using Dynamic Particle Swarm Optimization-Backpropagation: Empirical Evaluation and Comparison

  • Devi, Swagatika;Jagadev, Alok Kumar;Patnaik, Srikanta
    • Journal of information and communication convergence engineering
    • /
    • Vol. 13, No. 2
    • /
    • pp.123-131
    • /
    • 2015
  • Training neural networks is a complex task of great importance in the field of supervised learning. In the training process, a set of input-output patterns is repeatedly presented to an artificial neural network (ANN), and the weights of all the interconnections between neurons are adjusted until the specified inputs yield the desired outputs. In this paper, a new hybrid algorithm is proposed for the global optimization of connection weights in an ANN. Dynamic swarms are shown to converge rapidly during the initial stages of a global search, but around the global optimum the search process becomes very slow. In contrast, the gradient descent method achieves faster convergence around the global optimum, and at the same time its convergence accuracy can be relatively high. The proposed hybrid algorithm therefore combines the dynamic particle swarm optimization (DPSO) algorithm with the backpropagation (BP) algorithm, also referred to as the DPSO-BP algorithm, to train the weights of an ANN. We show the superiority, in time performance and solution quality, of the proposed hybrid algorithm (DPSO-BP) over more standard neural network training algorithms. The algorithms are compared on two different datasets, and simulation results are reported.
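
The two-phase structure described above, swarm search for a good region followed by gradient refinement, can be sketched on a generic objective. The coefficients, iteration counts, and the plain (non-dynamic) swarm here are illustrative assumptions, not the paper's DPSO-BP algorithm.

```python
import numpy as np

# Phase 1: particle swarm optimization explores globally.
# Phase 2: gradient descent (BP-style) refines the best particle.
def pso_then_gd(f, grad_f, dim, n_particles=15, pso_iters=60,
                gd_iters=300, lr=0.01):
    rng = np.random.default_rng(0)
    x = rng.uniform(-2, 2, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()                             # per-particle best positions
    gbest = min(x, key=f).copy()                 # global best position
    for _ in range(pso_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        for i in range(n_particles):
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i]
            if f(x[i]) < f(gbest):
                gbest = x[i].copy()
    w = gbest                                    # hand off to gradient descent
    for _ in range(gd_iters):
        w = w - lr * grad_f(w)
    return w
```

In the ANN setting, `f` would be the training loss over the weight vector and `grad_f` the backpropagated gradient.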

Input-Output Linearization of Nonlinear Systems via Dynamic Feedback

  • 조현섭
    • 한국정보전자통신기술학회논문지
    • /
    • Vol. 6, No. 4
    • /
    • pp.238-242
    • /
    • 2013
  • We consider the problem of constructing observers for nonlinear systems with unknown inputs. Connectionist networks, also called neural networks, have been broadly applied to many different problems since McCulloch and Pitts mathematically demonstrated their information-processing ability in 1943. In this paper, we present a genetic neuro-control scheme for nonlinear systems. Our method differs from those using supervised learning algorithms, such as the backpropagation (BP) algorithm, which need training information at each step. The contributions of this paper are a new approach to constructing the neural network architecture and its training.

A Study on Performance Improvement of Neural Networks Using Genetic Algorithms

  • 임정은;김해진;장병찬;서보혁
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 2006 Proceedings of the 37th Summer Conference, Part D
    • /
    • pp.2075-2076
    • /
    • 2006
  • In this paper, we propose a new architecture for genetic-algorithm (GA)-based backpropagation (BP). Conventional BP does not guarantee that the network produced by learning has an optimal architecture, but the proposed GA-based BP yields a structurally more optimized, more flexible, and preferable neural network than conventional BP. Experimental results on BP neural network optimization show that the algorithm can effectively keep the BP network from converging to a local optimum. Comparison shows that the improved genetic algorithm almost always avoids the trap of local optima and effectively improves convergence speed.

Construction of coordinate transformation map using neural network

  • Lee, Wonchang;Nam, Kwanghee
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 1991 Korean Automatic Control Conference Proceedings (International Session); KOEX, Seoul; 22-24 Oct. 1991
    • /
    • pp.1845-1847
    • /
    • 1991
  • In general, it is not easy to find the linearizing coordinate transformation map for the class of systems that are state-equivalent to linear systems, because doing so requires solving a set of partial differential equations. A backpropagation (BP) network, however, can approximate an arbitrary nonlinear function. Utilizing this property of the BP neural net, we construct the desired linearizing coordinate transformation map; that is, we implement the unknown coordinate transformation map by training the neural weights. We present an example that supports this idea.
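
The underlying idea, a BP network trained to reproduce an unknown nonlinear map, can be sketched with a tiny one-hidden-layer network fitted to a toy map z = x + x^3. The target map, network size, and learning rate are illustrative assumptions, not the paper's system.

```python
import numpy as np

# One-hidden-layer BP network approximating a nonlinear scalar map.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
Z = X + X ** 3                                 # toy "transformation map"

W1, b1 = rng.normal(0, 0.5, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)
lr = 0.02
for _ in range(3000):                          # plain batch gradient descent
    H = np.tanh(X @ W1 + b1)                   # hidden layer activations
    Y = H @ W2 + b2                            # network output
    G = 2 * (Y - Z) / len(X)                   # d(MSE)/dY
    gW2, gb2 = H.T @ G, G.sum(0)
    GH = (G @ W2.T) * (1 - H ** 2)             # backprop through tanh
    gW1, gb1 = X.T @ GH, GH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Z) ** 2))
```

After training, the network weights encode the map, which is exactly the sense in which the abstract's coordinate transformation is "implemented through the training of neural weights."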
