• Title/Abstract/Keyword: Single-Hidden-Layer Neural Network

Search results: 43

Neural Network Active Control of Structures with Earthquake Excitation

  • Cho Hyun Cheol;Fadali M. Sami;Saiidi M. Saiid;Lee Kwon Soon
    • International Journal of Control, Automation, and Systems / Vol. 3, No. 2 / pp.202-210 / 2005
  • This paper presents a new neural network control for nonlinear bridge systems with earthquake excitation. We design multi-layer neural network controllers with a single hidden layer. The selection of an optimal number of neurons in the hidden layer is an important design step for control performance. To select an optimal number of hidden neurons, we progressively add one hidden neuron and observe the change in a performance measure given by the weighted sum of the system error and the control force. The number of hidden neurons which minimizes the performance measure is selected for implementation. A neural network was trained to mitigate vibrations of bridge systems caused by the El Centro earthquake. We applied the proposed control approach to a single-degree-of-freedom (SDOF) and a two-degree-of-freedom (TDOF) bridge system. We assessed the robustness of the control system using randomly generated earthquake excitations that were not used in training the neural network. Our results show that the neural network controller drastically mitigates the effect of the disturbance.
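
A minimal sketch of this neuron-selection procedure, assuming a generic regression stand-in for the bridge controller (the data, plant, and the weights on system error and control force below are all placeholders, not the authors' setup):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder data: plant states X and target control forces y (invented).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = np.tanh(X @ rng.normal(size=4))

w_err, w_force = 1.0, 0.1          # assumed weights on error and control effort

best_n, best_J = None, np.inf
for n_hidden in range(1, 21):      # progressively add one hidden neuron
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=2000,
                       random_state=0).fit(X, y)
    u = net.predict(X)             # control force produced by the network
    J = w_err * np.mean((y - u) ** 2) + w_force * np.mean(u ** 2)
    if J < best_J:                 # keep the size minimizing the measure
        best_n, best_J = n_hidden, J

print(f"selected hidden neurons: {best_n} (J = {best_J:.4f})")
```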

The Structure of Boundary Decision Using the Back Propagation Algorithms

  • 이지영
    • 정보학연구 / Vol. 8, No. 1 / pp.51-56 / 2005
  • The back-propagation algorithm is a very effective supervised training method for multi-layer feed-forward neural networks. This paper studies decision boundary formation based on the back-propagation algorithm. The discriminating power of several neural network topologies is also investigated on five manually created data sets. It is found that neural networks with multiple hidden layers perform better than those with a single hidden layer.
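
The comparison the abstract reports can be reproduced in spirit with a few lines; the data set below is an invented two-class problem with a curved boundary, not one of the paper's five hand-made sets:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic two-class data with a curved decision boundary (a stand-in for
# the paper's manually created data sets, which are not reproduced here).
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 1] > np.sin(3 * X[:, 0])).astype(int)

for layers in [(8,), (8, 8)]:      # single vs. two hidden layers
    clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=3000,
                        random_state=0).fit(X, y)
    print(layers, "training accuracy:", clf.score(X, y))
```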


Multilayer Neural Network Using Delta Rule: Recognitron III

  • 김춘석;박충규;이기한;황희영
    • 대한전기학회논문지 / Vol. 40, No. 2 / pp.224-233 / 1991
  • The multilayer expansion of single-layer NNs (neural networks) was needed to solve the linear separability problem, as shown by the classic example using the XOR function. The EBP (error back-propagation) learning rule is often used in multilayer neural networks, but it is not without its faults: 1) D. Rumelhart extended the delta rule, but there is a problem in obtaining the output Ca from the linear combination of the weight matrix N between the hidden layer and the output layer and H, which is itself the result of a linear combination of the input pattern and the weight matrix M between the input layer and the hidden layer. 2) Even if using the difference between Ca and Da to adjust the weight matrix N between the hidden layer and the output layer is valid, using the same value to adjust the weight matrix M between the input layer and the hidden layer is wrong. Recognitron III was proposed to remedy these faults. According to simulation results, since Recognitron III does not train the three-layer NN as a whole, but divides it into several single-layer NNs and trains these with the learning patterns, its learning time is 32.5 to 72.2 times faster than that of an EBP NN. The number of patterns learned by an EBP NN with n input and output cells and n+1 hidden cells is 2**n, but only n in a Recognitron III of the same size [5]. In terms of pattern generalization, however, the EBP NN falls short of Recognitron III.
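
Recognitron III itself is specified only at the level of this abstract, so the sketch below shows just the classical delta rule for one of the single-layer networks it trains separately; the XOR patterns illustrate the linear-separability limit mentioned above (learning rate and epochs are arbitrary):

```python
import numpy as np

def train_delta_rule(X, d, lr=0.5, epochs=1000):
    """Classical delta rule for a single-layer network of sigmoid units."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(X.shape[1], d.shape[1]))
    for _ in range(epochs):
        y = 1.0 / (1.0 + np.exp(-X @ W))           # sigmoid outputs
        W += lr * X.T @ ((d - y) * y * (1.0 - y))  # error x sigmoid derivative
    return W

# XOR patterns with a bias input: a single layer cannot separate XOR, which
# is exactly the linear-separability limit discussed above.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
d = np.array([[0], [1], [1], [0]], dtype=float)
W = train_delta_rule(X, d)
print(np.round(1.0 / (1.0 + np.exp(-X @ W)), 2))   # outputs stay near 0.5
```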


Fuzzy Supervised Learning Algorithm by Using Self-generation

  • 김광백
    • 한국멀티미디어학회논문지 / Vol. 6, No. 7 / pp.1312-1320 / 2003
  • In this paper, a multilayer neural network with a single hidden layer is considered. The error back-propagation learning method widely used for multilayer networks can fall into local minima because of the initial weights and an insufficient number of hidden-layer nodes. This paper therefore proposes a fuzzy supervised learning algorithm that self-generates hidden-layer nodes by combining ART1 with a fuzzy single-layer perceptron. A modified ART1 is used to generate nodes from the input layer to the hidden layer, and the weights are adjusted by a winner-take-all scheme that updates only the stored pattern corresponding to a given input pattern. In experiments on student ID card images carried out to evaluate the proposed method, the connection weights were less likely to settle in local minima, and the learning speed and stagnation behavior were improved compared with the conventional error back-propagation algorithm.
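
A rough sketch of the self-generation idea under stated assumptions: an ART1-style vigilance test either assigns a binary input to the best-matching existing hidden node (winner-take-all, updating only that node's stored pattern) or grows a new node. The vigilance value and patterns are illustrative, and the fuzzy perceptron stage is omitted:

```python
import numpy as np

def self_generate(patterns, vigilance=0.6):
    """ART1-style growth: each hidden node stores one binary prototype."""
    prototypes = []
    for x in patterns:
        if prototypes:
            match = [np.sum(p & x) / max(np.sum(x), 1) for p in prototypes]
            winner = int(np.argmax(match))       # winner-take-all
            if match[winner] >= vigilance:
                prototypes[winner] &= x          # update the stored pattern only
                continue
        prototypes.append(x.copy())              # otherwise grow a new hidden node
    return prototypes

patterns = np.array([[1, 1, 0, 0],
                     [1, 1, 1, 0],
                     [0, 0, 1, 1],
                     [0, 1, 1, 1]], dtype=np.int64)
print("hidden nodes generated:", len(self_generate(patterns)))
```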


Bayesian Analysis for Neural Network Models

  • Chung, Younshik;Jung, Jinhyouk;Kim, Chansoo
    • Communications for Statistical Applications and Methods / Vol. 9, No. 1 / pp.155-166 / 2002
  • Neural networks have been studied as a popular and very flexible tool for classification, and they are used in many applications of pattern classification and pattern recognition. This paper focuses on a Bayesian approach to feed-forward neural networks with a single hidden layer of units with logistic activation. In this model, we are interested in deciding the number of nodes of a neural network model with p input units, one hidden layer with m hidden nodes, and one output unit in a Bayesian setup for fixed m. Here, we use a latent variable in the prior of the regression coefficients, and we introduce a 'sequential step' based on the idea of data augmentation by Tanner and Wong (1987). MCMC methods (the Gibbs sampler and the Metropolis algorithm) can be used to overcome the complicated Bayesian computation. Finally, the proposed method is applied to simulated data.
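
A toy illustration of the sampling step, assuming random-walk Metropolis over the weights of a p-input, m-hidden-node logistic network with a Gaussian prior (the data, prior scale, and noise level are invented; the paper's latent-variable prior and sequential step are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data for a network with p inputs, m logistic hidden nodes, one output.
p, m, n = 2, 3, 100
X = rng.normal(size=(n, p))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

def predict(theta, X):
    W1 = theta[:p * m].reshape(p, m)             # input-to-hidden weights
    W2 = theta[p * m:]                           # hidden-to-output weights
    h = 1.0 / (1.0 + np.exp(-X @ W1))            # logistic hidden units
    return h @ W2

def log_post(theta):      # Gaussian prior on weights + Gaussian likelihood
    resid = y - predict(theta, X)
    return -0.5 * np.sum(theta ** 2) - 0.5 * np.sum(resid ** 2) / 0.01

theta = rng.normal(size=p * m + m)
logp = log_post(theta)
keep = []
for _ in range(5000):                            # random-walk Metropolis
    prop = theta + 0.05 * rng.normal(size=theta.size)
    logp_prop = log_post(prop)
    if np.log(rng.uniform()) < logp_prop - logp:
        theta, logp = prop, logp_prop
    keep.append(theta.copy())
print("posterior mean of first weight:", np.mean([t[0] for t in keep[1000:]]))
```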

A Study on Handwritten Digit Recognition by Layer Combination of Multiple Neural Network

  • 김두식;임길택;남윤석
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 1999년도 추계종합학술대회 논문집 / pp.468-471 / 1999
  • In this paper, we present a solution for combining multiple neural networks. Each neural network is trained with different features, and the networks are combined by four methods. The recognition rates of the four combination methods are compared. The experimental results for handwritten digit recognition show that combination at the hidden layers by a single-layer neural network is superior to the other methods. The reasons for these results are explained.
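
A sketch of the hidden-layer combination the abstract favors, under these assumptions: two feature-specific one-hidden-layer networks are trained, their hidden activations are concatenated, and a single-layer (logistic regression) combiner is trained on top. Features and labels are synthetic stand-ins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 600
X1 = rng.normal(size=(n, 8))       # stand-in for the first feature set
X2 = rng.normal(size=(n, 8))       # stand-in for the second feature set
y = ((X1[:, 0] + X2[:, 0]) > 0).astype(int)

def hidden_activations(net, X):
    """Hidden-layer outputs of a fitted one-hidden-layer MLPClassifier."""
    return np.maximum(0, X @ net.coefs_[0] + net.intercepts_[0])  # relu units

nets = [MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                      random_state=0).fit(X, y) for X in (X1, X2)]
H = np.hstack([hidden_activations(net, X) for net, X in zip(nets, (X1, X2))])

combiner = LogisticRegression(max_iter=1000).fit(H, y)  # single-layer combiner
print("combined training accuracy:", combiner.score(H, y))
```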


Classification Performance Improvement of Steam Generator Tube Defects in Nuclear Power Plant Using Bagging Method

  • 이준표;조남훈
    • 전기학회논문지 / Vol. 58, No. 12 / pp.2532-2537 / 2009
  • For defect characterization in steam generator tubes in nuclear power plants, artificial neural networks have been extensively used to classify defect types. In this paper, we study the effectiveness of bagging for improving the performance of a neural network for the classification of tube defects. Bagging is a method that combines the outputs of many neural networks trained separately on different training data sets. By varying the number of neurons in the hidden layer, we carry out computer simulations to compare the classification performance of a bagging neural network and a single neural network. From the experiments, we found that the performance of the bagging neural network is superior to the average performance of a single neural network in most cases.
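
A minimal sketch of the bagging scheme described here, with invented data in place of the eddy-current defect signals:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def bagging_predict(X_tr, y_tr, X_te, n_nets=10, n_hidden=10):
    """Train networks on bootstrap resamples; average their class probabilities."""
    rng = np.random.default_rng(0)
    votes = np.zeros((len(X_te), len(np.unique(y_tr))))
    for i in range(n_nets):
        idx = rng.integers(0, len(X_tr), size=len(X_tr))   # bootstrap resample
        net = MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=2000,
                            random_state=i).fit(X_tr[idx], y_tr[idx])
        votes += net.predict_proba(X_te)
    return votes.argmax(axis=1)

# Invented two-class data standing in for the defect signals.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
pred = bagging_predict(X[:200], y[:200], X[200:])
print("bagged test accuracy:", np.mean(pred == y[200:]))
```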

A Study on an Artificial Neural Network Design Using Evolutionary Programming

  • 강신준;고택범;우천희;이덕규;우광방
    • 제어로봇시스템학회논문지 / Vol. 5, No. 3 / pp.281-287 / 1999
  • In this paper, a design method based on evolutionary programming for feedforward neural networks with a single hidden layer is presented. By using evolutionary programming, network parameters such as the network structure, the weights, the slopes of the sigmoid functions, and the node biases can be acquired simultaneously. To check the effectiveness of the suggested method, two numerical examples are examined, and the performance of the identified network is demonstrated.
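
A compact sketch of this kind of evolutionary programming loop, assuming a (mu + mu) strategy over single-hidden-layer networks; the initial population varies the hidden-layer size, but structural mutation is omitted for brevity, and the target function is a toy stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(np.pi * X[:, 0])                    # toy identification target

def forward(ind, X):
    h = 1.0 / (1.0 + np.exp(-ind["slope"] * (X @ ind["W1"] + ind["b1"])))
    return h @ ind["W2"]                       # sigmoid slope is evolved too

def fitness(ind):
    return -np.mean((forward(ind, X) - y) ** 2)

def new_individual(m):                         # m hidden nodes
    return {"W1": rng.normal(size=(1, m)), "b1": rng.normal(size=m),
            "W2": rng.normal(size=m), "slope": 1.0}

def mutate(ind):                               # Gaussian perturbation of all parts
    return {k: (abs(v + 0.1 * rng.normal()) if k == "slope"
                else v + 0.1 * rng.normal(size=np.shape(v)))
            for k, v in ind.items()}

pop = [new_individual(int(m)) for m in rng.integers(2, 10, size=20)]
for _ in range(200):                           # (mu + mu) selection
    pop += [mutate(ind) for ind in pop]
    pop = sorted(pop, key=fitness, reverse=True)[:20]
print("best MSE:", -fitness(pop[0]))
```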


Neural Network Image Reconstruction for Magnetic Particle Imaging

  • Chae, Byung Gyu
    • ETRI Journal / Vol. 39, No. 6 / pp.841-850 / 2017
  • We investigate neural network image reconstruction for magnetic particle imaging. The network performance strongly depends on the convolution effects of the spectrum input data. The larger convolution effect, which appears at relatively smaller nanoparticle sizes, obstructs network training. The trained single-layer network reveals a weighting matrix consisting of basis vectors in the form of Chebyshev polynomials of the second kind. The weighting matrix corresponds to an inverse system matrix, in which the incoherency of basis vectors due to low convolution effects, as well as the nonlinear activation function, plays a key role in retrieving the matrix elements. Test images are well reconstructed through trained networks having an inverse kernel matrix. We also confirm that a multi-layer network with one hidden layer improves the performance. Based on these results, a neural network architecture that overcomes the low incoherence of the inverse kernel through its classification property is expected to become a better tool for image reconstruction.
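
A linear toy version of this observation (all sizes, data, and the convolution matrix are invented, and the nonlinear activation the paper highlights is omitted): a single linear layer trained by least squares on simulated spectrum-image pairs recovers an approximate inverse system matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = np.exp(-((i - j) ** 2) / 4.0)             # stand-in convolution system matrix

images = rng.uniform(size=(2000, n))          # training phantoms (1-D "images")
spectra = images @ A.T                        # simulated measurements

# A linear single-layer network trained with MSE converges to the least-squares
# map from measurements back to images -- an approximate inverse system matrix.
W, *_ = np.linalg.lstsq(spectra, images, rcond=None)

x = rng.uniform(size=n)                       # unseen test image
x_hat = (A @ x) @ W
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

A wider kernel in A (a larger convolution effect) makes the system matrix worse conditioned and the learned inverse less reliable, which mirrors the dependence on nanoparticle size noted in the abstract.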

Prediction of Arc Welding Quality through Artificial Neural Network

  • 조정호
    • Journal of Welding and Joining / Vol. 31, No. 3 / pp.44-48 / 2013
  • An artificial neural network (ANN) model is applied to predict the arc welding process window for automotive steel plate. The target weldments were various automotive steel plate combinations with lap fillet joints. The accuracy of prediction was evaluated by comparing experimental results to ANN simulations. The effects of ANN variables on accuracy, such as the number of hidden layers, the number of perceptrons, and the transfer function type, are investigated. A static back-propagation model is established and tested. The results show comparatively accurate predictability of the suggested ANN model; better accuracy, however, requires a nonlinear transfer function rather than a linear one, and a single hidden layer rather than multiple ones. In addition, the obvious fact is confirmed again that more perceptrons yield better accuracy, provided there is a sufficient experimental database to train the neural network.
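
The activation and width comparison described here can be sketched as below, with an invented response in place of the welding database; 'identity' stands in for the linear transfer function and 'tanh' for the nonlinear one:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Invented stand-in data: process parameters (current, voltage, speed) mapped
# to a nonlinear "quality" response; the real welding database is not public.
rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 3))
y = np.sin(2 * X[:, 0]) + X[:, 1] * X[:, 2]

for activation in ("identity", "tanh"):        # linear vs. nonlinear transfer
    for n_hidden in (4, 16, 64):               # number of perceptrons
        net = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation=activation,
                           max_iter=5000, random_state=0).fit(X, y)
        print(activation, n_hidden, "R^2:", round(net.score(X, y), 3))
```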