• Title/Abstract/Keyword: hidden layer

Search results: 511 entries

인공신경망을 이용한 플라이애시 및 실리카 흄 복합 콘크리트의 압축강도 예측 (Prediction of strength development of fly ash and silica fume ternary composite concrete using artificial neural network)

  • 번위결;최영지;왕소용
    • 산업기술연구
    • /
    • Vol. 41, No. 1
    • /
    • pp.1-6
    • /
    • 2021
  • Fly ash and silica fume are industrial by-products that can be used to produce concrete. This study presents a neural network model for evaluating the strength development of blended concrete containing fly ash and silica fume. The model has four input parameters: fly ash replacement content, silica fume replacement content, water/binder ratio, and age. Strength is the output variable of the network. The weights of the hidden layer are determined with the backpropagation algorithm, and the number of neurons in the hidden layer is determined by trial calculations. We find that (1) the neural network gives a reasonable evaluation of the strength development of composite concrete: it reflects the strength improvement due to silica fume additions and captures the strength reduction as the water/binder ratio increases; (2) with five neurons in the hidden layer, the predictions are more accurate than with four, and five hidden neurons also reproduce the strength crossover between fly ash concrete and plain concrete. In summary, the neural network-based model is valuable for designing sustainable composite concrete containing silica fume and fly ash.
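A minimal sketch of the network shape described above: four inputs (fly ash content, silica fume content, water/binder ratio, age), one hidden layer with five neurons, and compressive strength as the single output. The scikit-learn model and the mix/strength numbers are placeholders, not the study's data.

# Hedged sketch, not the authors' model: 4 inputs -> 5 hidden neurons -> 1 output.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical rows: [fly ash %, silica fume %, water/binder ratio, age (days)]
X = np.array([[20, 5, 0.40, 28],
              [30, 0, 0.45, 28],
              [20, 10, 0.35, 90],
              [0,  0, 0.50,  7]], dtype=float)
y = np.array([48.0, 40.0, 62.0, 25.0])   # compressive strength in MPa (made-up values)

scaler = StandardScaler()
model = MLPRegressor(hidden_layer_sizes=(5,),   # five hidden neurons, as the study recommends
                     activation="logistic",
                     solver="lbfgs",
                     max_iter=5000,
                     random_state=0)
model.fit(scaler.fit_transform(X), y)

# Predict the strength of a new mix at 56 days.
print(model.predict(scaler.transform([[25, 8, 0.38, 56]])))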

인공 신경망의 학습에 있어 가중치 변화방법과 은닉층의 노드수가 예측정확성에 미치는 영향 (The Influence of Weight Adjusting Method and the Number of Hidden Layer's Nodes on Neural Network's Performance)

  • 김진백;김유일
    • 한국정보시스템학회지:정보시스템연구
    • /
    • Vol. 9, No. 1
    • /
    • pp.27-44
    • /
    • 2000
  • The structure of a neural network is represented by a weighted directed graph, with nodes representing units and links representing connections; each link is assigned a numerical value representing the weight of the connection. In the learning process, the weight values are adjusted according to the errors. According to the experimental results, the interval of adjusting weights, that is, the epoch size, influenced the networks' performance: once the epoch size exceeded a certain value, performance decreased drastically. The number of hidden-layer nodes also influenced performance: performance first decreased as the hidden layer gained more nodes and then increased beyond a certain number of nodes. Therefore, when implementing neural networks, the epoch size and the number of hidden-layer nodes should be decided by systematic methods rather than empirical or heuristic ones.
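The recommendation to pick the epoch size (weight-update interval) and hidden-node count systematically can be illustrated by a small grid search; the synthetic data below and the use of mini-batch size as a stand-in for the paper's epoch size are assumptions, not the paper's setup.

# Hedged sketch: compare validation scores over update interval and hidden-layer size.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]          # synthetic target
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

results = {}
for batch in (8, 32, 128):                # update interval ("epoch size" analogue, an assumption)
    for hidden in (2, 4, 8, 16, 32):      # number of hidden-layer nodes
        net = MLPRegressor(hidden_layer_sizes=(hidden,), batch_size=batch,
                           max_iter=2000, random_state=0)
        net.fit(X_tr, y_tr)
        results[(batch, hidden)] = net.score(X_va, y_va)   # R^2 on validation data

best = max(results, key=results.get)
print("best (batch, hidden):", best, "R^2:", round(results[best], 3))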


다층퍼셉트론의 은닉노드 근사화를 이용한 개선된 오류역전파 학습 (Modified Error Back Propagation Algorithm using the Approximating of the Hidden Nodes in Multi-Layer Perceptron)

  • 곽영태;이영직;권오석
    • 한국정보과학회논문지:소프트웨어및응용
    • /
    • Vol. 28, No. 9
    • /
    • pp.603-611
    • /
    • 2001
  • This paper proposes a learning method that is as fast as layer-by-layer learning and has excellent generalization performance. The proposed method adjusts the hidden-layer weights using hidden-layer target values obtained by the least-squares method, which prevents learning from being delayed when the magnitude of the hidden-layer gradient vector is small. In experiments on a handwritten digit recognition problem, the learning speed of the proposed method was faster than error backpropagation and learning with a modified error function, and comparable to Ooyen's method and layer-by-layer learning. In addition, it achieved the best generalization performance regardless of the number of hidden nodes. In conclusion, the proposed method combines the learning speed of layer-by-layer learning with the generalization performance of error backpropagation and the modified error function.
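A rough numpy illustration of the core idea, reconstructed from the abstract rather than taken from the paper: derive hidden-layer target values by least squares from the output-layer weights and the desired outputs, then move the hidden-layer weights toward those targets so learning is not stalled by a small hidden-layer gradient.

# Hedged sketch on toy data; the target derivation and update rule are my reading of the abstract.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 4))                       # toy inputs
T = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy binary targets

n_hidden = 6
W1 = rng.normal(0, 0.5, (4, n_hidden))
W2 = rng.normal(0, 0.5, (n_hidden, 1))
lr = 0.1

for epoch in range(200):
    H = sigmoid(X @ W1)                 # hidden activations
    Y = sigmoid(H @ W2)                 # network outputs

    # Ordinary delta rule for the output layer.
    W2 += lr * H.T @ ((T - Y) * Y * (1 - Y)) / len(X)

    # Hidden-layer targets from least squares: find H* with H* @ W2 ~= logit(T).
    Tc = np.clip(T, 1e-3, 1 - 1e-3)
    A2_target = np.log(Tc / (1 - Tc))
    H_target, *_ = np.linalg.lstsq(W2.T, A2_target.T, rcond=None)
    H_target = np.clip(H_target.T, 1e-3, 1 - 1e-3)

    # Train hidden weights toward the least-squares targets (delta rule on the hidden layer).
    W1 += lr * X.T @ ((H_target - H) * H * (1 - H)) / len(X)

print("training accuracy:", ((sigmoid(sigmoid(X @ W1) @ W2) > 0.5) == (T > 0.5)).mean())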


계층구조 신경망을 이용한 한글 인식 (Hangul Recognition Using a Hierarchical Neural Network)

  • 최동혁;류성원;강현철;박규태
    • 전자공학회논문지B
    • /
    • Vol. 28B, No. 11
    • /
    • pp.852-858
    • /
    • 1991
  • An adaptive hierarchical classifier (AHCL) for Korean character recognition using a neural network is designed. The classifier consists of two neural nets: a USACL (Unsupervised Adaptive Classifier) and a SACL (Supervised Adaptive Classifier). The USACL has an input layer and an output layer, which are fully connected; the output-layer nodes are generated during learning by an unsupervised nearest-neighbor learning rule. The SACL has an input layer, a hidden layer, and an output layer; the input layer and the hidden layer are fully connected, while the hidden layer and the output layer are partially connected. The nodes of the SACL are generated during learning by a supervised nearest-neighbor learning rule. The USACL provides a pre-attentive effect, performing a partial search instead of a full search during SACL classification to enhance processing speed. The input of both the USACL and the SACL is a directional edge feature with a directional receptive field. To test the performance of the AHCL, various multi-font printed Hangul characters were used in learning and testing, and its processing speed and classification rate were compared with a conventional LVQ (Learning Vector Quantizer) that uses the nearest-neighbor learning rule.
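A loose analogy of the two-stage structure, not the AHCL implementation: an unsupervised first stage narrows classification to one cluster (the pre-attentive partial search), and a supervised nearest-neighbor second stage decides only within that cluster. The feature vectors and class labels below are random placeholders.

# Hedged sketch: coarse unsupervised stage + fine supervised stage per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))            # stand-in for directional edge features
y = rng.integers(0, 10, size=300)         # stand-in for character classes

coarse = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)     # unsupervised stage
fine = {}                                                            # supervised stage per cluster
for c in range(5):
    mask = coarse.labels_ == c
    fine[c] = KNeighborsClassifier(n_neighbors=1).fit(X[mask], y[mask])

def classify(x):
    c = int(coarse.predict(x.reshape(1, -1))[0])       # pre-attentive: pick one cluster
    return int(fine[c].predict(x.reshape(1, -1))[0])   # full search only inside that cluster

print(classify(X[0]), y[0])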


Self-generation을 이용한 퍼지 지도 학습 알고리즘 (Fuzzy Supervised Learning Algorithm by using Self-generation)

  • 김광백
    • 한국멀티미디어학회논문지
    • /
    • Vol. 6, No. 7
    • /
    • pp.1312-1320
    • /
    • 2003
  • This paper considers a multi-layer neural network with a single hidden layer. The error backpropagation method widely used for such networks can fall into a local minimum because of the initial weights and an insufficient number of hidden-layer nodes. We therefore propose a fuzzy supervised learning algorithm that self-generates hidden-layer nodes by combining a fuzzy single-layer perceptron with ART1. Nodes are generated from the input layer to the hidden layer using a modified ART1, and weights are adjusted by a winner-take-all scheme that updates only the stored pattern corresponding to a given input pattern. Experiments on student ID card images show that, compared with the conventional error backpropagation algorithm, the proposed method reduces the chance of the connection weights settling in a local minimum and improves both the learning speed and the stagnation behavior.
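A compact sketch of the self-generation idea described above, under assumed details: a hidden node is created whenever no existing node matches the input closely enough (an ART1-style vigilance test, approximated here with cosine similarity), and otherwise only the winning node's weights are updated (winner-take-all).

# Hedged sketch, not the paper's algorithm: hidden nodes are grown on demand.
import numpy as np

def train_self_generating(X, vigilance=0.7, lr=0.5):
    prototypes = []                                   # hidden-node weight vectors
    for x in X:
        if prototypes:
            sims = [float(x @ p) / (np.linalg.norm(x) * np.linalg.norm(p) + 1e-9)
                    for p in prototypes]
            winner = int(np.argmax(sims))
            if sims[winner] >= vigilance:             # resonance: refine only the winner
                prototypes[winner] += lr * (x - prototypes[winner])
                continue
        prototypes.append(x.astype(float).copy())     # no match: generate a new hidden node
    return prototypes

rng = np.random.default_rng(0)
X = rng.random((100, 8))
print("hidden nodes generated:", len(train_self_generating(X)))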


빠른 학습 속도를 갖는 로보트 매니퓰레이터의 병렬 모듈 신경제어기 설계 (A Design of Parallel Module Neural Network for Robot Manipulators having a fast Learning Speed)

  • 김정도;이택종
    • 전자공학회논문지B
    • /
    • Vol. 32B, No. 9
    • /
    • pp.1137-1153
    • /
    • 1995
  • It is not yet possible to determine analytically the optimal number of neurons in the hidden layer of a neural network. However, it has been proposed and confirmed by experiments that there is a limit to increasing the number of hidden-layer neurons, because too large an increase causes instability, local minima, and large errors. This paper proposes a module neural controller with pattern-recognition ability to address this trade-off and to obtain fast learning convergence. The proposed neural controller is composed of several modules, each a multi-layer perceptron (MLP). Each module has fewer neurons in its hidden layer because it learns only input patterns with similar learning directions. Experiments with a six-joint robot manipulator show the effectiveness and feasibility of the proposed parallel module neural controller with a pattern-recognition perceptron.
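A rough sketch, on assumed data and grouping, of the parallel-module idea: training patterns are split into groups of similar patterns and each group is handled by its own small MLP module, so each module needs only a few hidden neurons.

# Hedged sketch, not the paper's controller: cluster-and-route modular MLPs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (600, 6))                  # stand-in for manipulator states
y = np.sin(X).sum(axis=1)                         # stand-in for a scalar control target

groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
modules = [MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
           .fit(X[groups.labels_ == k], y[groups.labels_ == k]) for k in range(4)]

def predict(x):
    k = int(groups.predict(x.reshape(1, -1))[0])      # route the pattern to its module
    return float(modules[k].predict(x.reshape(1, -1))[0])

print(predict(X[0]), y[0])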


2단 회귀신경망의 숫자음 인식에관한 연구 (A study on the spoken digit recognition performance of the Two-Stage recurrent neural network)

  • 안점영
    • 한국통신학회논문지
    • /
    • Vol. 25, No. 3B
    • /
    • pp.565-569
    • /
    • 2000
  • We compose a two-stage recurrent neural network that feeds the signals of both the hidden layer and the output layer back to the hidden layer. It is tested on a syllable basis for the Korean spoken digits from /gong/ to /gu/. In the experiments, we adjust the number of neurons in the hidden layer, the prediction order of the input data, and the self-recurrent coefficient of the decision state layer. According to the experimental results, the recognition rate of this neural network is between 91 % and 97.5 % in the speaker-dependent case and between 80.75 % and 92 % in the speaker-independent case. In the speaker-dependent case, this network shows recognition performance equivalent to the Jordan and Elman networks, while in the speaker-independent case it shows improved performance.
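A minimal numpy sketch of the recurrence suggested by the abstract (the exact wiring is an assumption): both the previous hidden state and the previous output are fed back into the hidden layer at every input frame.

# Hedged sketch: forward pass only, with random weights and dummy frame features.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 12, 20, 10           # assumed sizes: frame features in, digit classes out
Wx = rng.normal(0, 0.3, (n_hid, n_in))
Wh = rng.normal(0, 0.3, (n_hid, n_hid))   # hidden-to-hidden feedback
Wy = rng.normal(0, 0.3, (n_hid, n_out))   # output-to-hidden feedback (the second recurrence)
Wo = rng.normal(0, 0.3, (n_out, n_hid))

def run(frames):
    h = np.zeros(n_hid)
    y = np.zeros(n_out)
    for x in frames:                       # one feature vector per speech frame
        h = sigmoid(Wx @ x + Wh @ h + Wy @ y)
        y = sigmoid(Wo @ h)
    return y                               # class scores after the last frame

print(run(rng.normal(size=(30, n_in))).round(2))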


Neural Network Active Control of Structures with Earthquake Excitation

  • Cho Hyun Cheol;Fadali M. Sami;Saiidi M. Saiid;Lee Kwon Soon
    • International Journal of Control, Automation, and Systems
    • /
    • Vol. 3, No. 2
    • /
    • pp.202-210
    • /
    • 2005
  • This paper presents a new neural network control scheme for nonlinear bridge systems under earthquake excitation. We design multi-layer neural network controllers with a single hidden layer. The selection of an optimal number of neurons in the hidden layer is an important design step for control performance. To select an optimal number of hidden neurons, we progressively add one hidden neuron and observe the change in a performance measure given by the weighted sum of the system error and the control force. The number of hidden neurons that minimizes the performance measure is selected for implementation. A neural network was trained to mitigate vibrations of bridge systems caused by the El Centro earthquake. We applied the proposed control approach to a single-degree-of-freedom (SDOF) and a two-degree-of-freedom (TDOF) bridge system. We assessed the robustness of the control system using randomly generated earthquake excitations that were not used in training the neural network. Our results show that the neural network controller drastically mitigates the effect of the disturbance.
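A hedged sketch of the hidden-neuron selection procedure: add one hidden neuron at a time and keep the size that minimizes a weighted sum of error and control effort. The toy regression below stands in for the bridge plant and controller, which are not reproduced here.

# Hedged sketch: incremental hidden-neuron search against a weighted performance measure.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                           # stand-in for measured states
u_ref = np.tanh(X @ np.array([0.8, -0.5, 0.3, 0.1]))    # stand-in for desired control force

w_err, w_force = 1.0, 0.1                               # weights of the performance measure
scores = {}
for n_hidden in range(1, 11):                           # progressively add one hidden neuron
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=3000,
                       random_state=0).fit(X, u_ref)
    u = net.predict(X)
    J = w_err * np.mean((u - u_ref) ** 2) + w_force * np.mean(np.abs(u))
    scores[n_hidden] = J

best = min(scores, key=scores.get)
print("selected hidden-layer size:", best)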

오류 역전파 학습에서 확률적 가중치 교란에 의한 전역적 최적해의 탐색 (Searching a global optimum by stochastic perturbation in error back-propagation algorithm)

  • 김삼근;민창우;김명원
    • 전자공학회논문지C
    • /
    • Vol. 35C, No. 3
    • /
    • pp.79-89
    • /
    • 1998
  • The error back-propagation (EBP) algorithm is widely applied to train multi-layer perceptrons, a neural network model frequently used to solve complex problems such as pattern recognition, adaptive control, and global optimization. However, EBP is basically a gradient descent method and may get stuck in a local minimum, failing to find the globally optimal solution. Moreover, a multi-layer perceptron lacks a systematic way to determine a network structure appropriate for a given problem; the number of hidden nodes is usually determined by trial and error. In this paper, we propose a new algorithm to train a multi-layer perceptron efficiently. Our algorithm uses stochastic perturbation in the weight space to escape from local minima: if EBP learning gets stuck in a local minimum, the weights associated with hidden nodes are probabilistically re-initialized. The addition of new hidden nodes can also be viewed as a special case of stochastic perturbation. Using stochastic perturbation, we can address the local-minima problem and the network structure design in a unified way. Experiments on several benchmark problems, including the parity problem, the two-spirals problem, and the credit-screening data, show that our algorithm is very efficient.
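A simplified numpy sketch of stochastic perturbation as described above: when back-propagation stops improving, the weights feeding some hidden nodes are probabilistically re-initialized to kick the search out of a local minimum. The 4-bit parity task and the plateau heuristic are assumptions for illustration, not the paper's setup.

# Hedged sketch: plain EBP with probabilistic re-initialization of hidden-node weights.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
X = rng.integers(0, 2, (200, 4)).astype(float)
T = (X.sum(axis=1) % 2).reshape(-1, 1)                  # 4-bit parity, a classic hard case

W1 = rng.normal(0, 0.5, (4, 8))
W2 = rng.normal(0, 0.5, (8, 1))
lr, prev_loss, stall = 0.5, np.inf, 0

for epoch in range(5000):
    H = sigmoid(X @ W1)
    Y = sigmoid(H @ W2)
    loss = np.mean((T - Y) ** 2)

    stall = stall + 1 if loss > prev_loss - 1e-6 else 0  # crude plateau detector
    prev_loss = loss
    if stall > 200:                                      # looks stuck in a local minimum
        mask = rng.random(W1.shape[1]) < 0.5             # perturb each hidden node w.p. 0.5
        W1[:, mask] = rng.normal(0, 0.5, (4, int(mask.sum())))
        stall = 0

    dY = (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dY / len(X)
    W1 -= lr * X.T @ dH / len(X)

print("final MSE:", round(float(loss), 4))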


지진 이벤트 분류를 위한 정규화 기법 분석 (Analysis of normalization effect for earthquake events classification)

  • 장수;구본화;고한석
    • 한국음향학회지
    • /
    • Vol. 40, No. 2
    • /
    • pp.130-138
    • /
    • 2021
  • This paper analyzes various normalization techniques for earthquake event classification and proposes an effective convolutional neural network (CNN) based architecture. Normalization not only improves the learning speed of a neural network but also makes it robust to noise. We analyze how input normalization and hidden-layer normalization affect a deep learning model for earthquake event classification, and derive an effective model through experiments on different hidden-layer configurations. Simulation results show that the model applying input data normalization together with weight normalization on the first hidden layer yields the most stable performance improvement.
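A hedged PyTorch sketch of the two normalization choices reported as most stable: standardize the input traces and apply weight normalization to the first hidden (convolutional) layer. The layer sizes, channel counts, and dummy waveforms are placeholders, not the paper's configuration.

# Hedged sketch, not the paper's network: input standardization + weight norm on conv1.
import torch
import torch.nn as nn

class EventCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # Weight normalization applied to the first hidden layer only.
        self.conv1 = nn.utils.weight_norm(nn.Conv1d(3, 16, kernel_size=7, padding=3))
        self.conv2 = nn.Conv1d(16, 32, kernel_size=7, padding=3)
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        # Input normalization: zero mean, unit variance per waveform channel.
        x = (x - x.mean(dim=-1, keepdim=True)) / (x.std(dim=-1, keepdim=True) + 1e-8)
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        return self.fc(self.pool(x).squeeze(-1))

waveforms = torch.randn(8, 3, 1000)      # batch of 3-component seismic traces (dummy data)
print(EventCNN()(waveforms).shape)       # -> torch.Size([8, 2])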