• Title/Abstract/Keyword: Layer-By-Layer Training


다층 퍼셉트론의 층별 학습을 위한 중간층 오차 함수 (A New Hidden Error Function for Layer-By-Layer Training of Multi layer Perceptrons)

  • 오상훈
    • 한국콘텐츠학회:학술대회논문집
    • /
    • 한국콘텐츠학회 2005년도 추계 종합학술대회 논문집
    • /
    • pp.364-370
    • /
    • 2005
  • Layer-by-layer training was proposed as a method to speed up the training of multilayer perceptrons. In this method, training proceeds by reducing an error function assigned to each layer using an optimization method. In this case, the hidden-layer error function has a large influence on the training performance, and this paper proposes a hidden-layer error function that improves the performance of layer-by-layer training. The proposed hidden-layer error function is derived from the components of the output-layer error function that are related to the training of the hidden-layer weights. The effectiveness of the proposed method is verified through simulations of handwritten digit recognition and isolated-word recognition problems.
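The layer-by-layer idea above, where each layer reduces its own error function with the other layer held fixed, can be sketched as follows. This is a minimal NumPy illustration on toy XOR data with made-up sizes and a propagated-error hidden objective, not the paper's hidden error function or its recognition tasks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR data; the paper's tasks (digit and isolated-word
# recognition) are not reproduced here.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)  # input -> hidden
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)  # hidden -> output
lr = 1.0

for _ in range(10000):
    H = sigmoid(X @ W1 + b1)          # hidden activations
    Y = sigmoid(H @ W2 + b2)          # network output
    # Output-layer step: reduce the output error w.r.t. W2, b2 only.
    dY = (Y - T) * Y * (1 - Y)
    W2 -= lr * H.T @ dY
    b2 -= lr * dY.sum(axis=0)
    # Hidden-layer step: a separate hidden error whose gradient is the
    # component of the output error propagated through the (fixed) W2.
    dH = (dY @ W2.T) * H * (1 - H)
    W1 -= lr * X.T @ dH
    b1 -= lr * dH.sum(axis=0)

mse = float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - T) ** 2))
```

The two update steps optimize different per-layer objectives in turn, which is what distinguishes layer-by-layer training from monolithic backpropagation.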


다층 퍼셉트론의 층별 학습 가속을 위한 중간층 오차 함수 (A New Hidden Error Function for Training of Multilayer Perceptrons)

  • 오상훈
    • 한국콘텐츠학회논문지
    • /
    • Vol. 5, No. 6
    • /
    • pp.57-64
    • /
    • 2005
  • Layer-by-layer training was proposed as a method to speed up the training of multilayer perceptrons. In this method, an error function is assigned to each layer, and training proceeds by reducing these layer-wise error functions using an optimization method. In this case, the hidden-layer error function has a large influence on the training performance, and this paper proposes a hidden-layer error function that improves the performance of layer-by-layer training. The proposed hidden-layer error function is derived from the components of the output-layer error function that are related to the training of the hidden-layer weights. The effectiveness of the proposed method is verified through simulations of handwritten digit recognition and isolated-word recognition problems.


신경회로망과 실험계획법을 이용한 타이어의 장력 추정 (Tension Estimation of Tire using Neural Networks and DOE)

  • 이동우;조석수
    • 한국정밀공학회지
    • /
    • Vol. 28, No. 7
    • /
    • pp.814-820
    • /
    • 2011
  • Numerical simulation for tire structural design takes a long time because it involves nonlinear material properties. Neural networks have been widely studied in engineering design as a way to reduce numerical computation time. The number of hidden layers, the number of hidden-layer neurons, and the amount of training data have been considered as the structural design variables of a neural network, but in applying neural networks to optimal design there are few studies on how to arrange the input-layer neurons. To investigate the effect of input-layer neuron arrangement on neural network learning, the tire contour design variables and the tension in the bead area were assigned as the network inputs and output, respectively, and the arrangement of the design variables in the input layer was determined by main effect analysis.
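The main effect analysis used above to rank design variables can be sketched as follows. The two-level DOE table and response values are hypothetical placeholders, not the paper's tire data:

```python
import numpy as np

# Hypothetical two-level fractional-factorial table for three design
# variables (A, B, C) and a measured response; illustrative values only.
levels = np.array([
    [-1, -1, -1],
    [-1,  1,  1],
    [ 1, -1,  1],
    [ 1,  1, -1],
])
response = np.array([10.0, 14.0, 22.0, 26.0])

# Main effect of each variable: mean response at the high level
# minus mean response at the low level.
effects = {}
for j, name in enumerate(["A", "B", "C"]):
    hi = response[levels[:, j] == 1].mean()
    lo = response[levels[:, j] == -1].mean()
    effects[name] = hi - lo

# Rank variables by absolute effect size; this ordering is what would
# drive the input-layer neuron arrangement.
order = sorted(effects, key=lambda k: abs(effects[k]), reverse=True)
```

With these numbers, variable A dominates the response, so it would be placed first in the input layer.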

Cross-Validation Probabilistic Neural Network Based Face Identification

  • Lotfi, Abdelhadi;Benyettou, Abdelkader
    • Journal of Information Processing Systems
    • /
    • Vol. 14, No. 5
    • /
    • pp.1075-1086
    • /
    • 2018
  • In this paper, a cross-validation algorithm for training probabilistic neural networks (PNNs) is presented and applied to automatic face identification. Standard PNNs perform well on small and medium-sized databases, but they suffer from serious problems with large databases such as those encountered in biometrics applications. To address this issue, we propose a new training algorithm for PNNs that reduces the size of the hidden layer and avoids over-fitting at the same time. The proposed algorithm generates networks whose smaller hidden layer contains only representative examples from the training data set. Moreover, adding new classes or samples after training does not require retraining, which is one of the main characteristics of this solution. The results presented in this work show a great improvement in both the processing speed and the generalization of the proposed classifier, caused mainly by the significant reduction in the size of the hidden layer.
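The PNN structure described above, with one Gaussian pattern unit per hidden example and class-wise summation, can be sketched as follows. The data, the class-mean reduction, and the kernel width are illustrative assumptions standing in for the paper's cross-validation selection step:

```python
import numpy as np

def pnn_predict(x, centers, labels, classes, sigma=0.5):
    """Probabilistic neural network: one Gaussian kernel per hidden
    (pattern) unit, summed per class; the largest class score wins."""
    d2 = ((centers - x) ** 2).sum(axis=1)
    k = np.exp(-d2 / (2 * sigma ** 2))
    scores = [k[labels == c].sum() for c in classes]
    return classes[int(np.argmax(scores))]

# Hypothetical training set with redundant samples (not the paper's data).
X = np.array([[0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]])
y = np.array([0, 0, 0, 1, 1, 1])

# A reduced hidden layer keeps only one representative per class (the
# class mean here), standing in for the paper's selection procedure;
# a full PNN would keep every training sample as a hidden unit.
classes = np.unique(y)
centers = np.array([X[y == c].mean(axis=0) for c in classes])
labels = classes.copy()

pred = pnn_predict(np.array([0.2, 0.1]), centers, labels, classes)
```

Because the hidden layer holds representatives rather than every sample, adding a new class only appends new centers, with no retraining of existing units.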

Damage detection in structures using modal curvatures gapped smoothing method and deep learning

  • Nguyen, Duong Huong;Bui-Tien, T.;Roeck, Guido De;Wahab, Magd Abdel
    • Structural Engineering and Mechanics
    • /
    • Vol. 77, No. 1
    • /
    • pp.47-56
    • /
    • 2021
  • This paper deals with damage detection using a Gapped Smoothing Method (GSM) combined with deep learning. A Convolutional Neural Network (CNN) is a deep learning model with an input layer, an output layer, and a number of hidden layers, including convolutional layers. The input layer is a tensor with shape (number of images) × (image width) × (image height) × (image depth). An activation function is applied each time this tensor passes through a hidden layer, and the last hidden layer is fully connected; after it, the output layer, the final layer, produces the CNN's prediction. In this paper, a complete machine learning system is introduced. The training data were taken from a Finite Element (FE) model, and the input images are contour plots of the curvature gapped-smoothing damage index. A free-free beam is used as a case study. In the first step, the FE model of the beam was used to generate data, and the collected data were divided into two parts: 70% for training and 30% for validation. In the second step, the proposed CNN was trained on the training data and then validated on the remaining data. Furthermore, a vibration experiment on a damaged steel beam in a free-free support condition was carried out in the laboratory to test the method. A total of 15 accelerometers were set up to measure the mode shapes and calculate the gapped-smoothing curvature of the damaged beam. Two scenarios with different damage severities were introduced. The results showed that the trained CNN successfully detected both the location and the severity of the damage in the experimental beam.
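The tensor flow through a convolutional hidden layer described above can be sketched as follows. The image sizes, filter count, and ReLU choice are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Input tensor shaped (images, width, height, depth), as in the text;
# the sizes here are illustrative, not the paper's.
x = rng.normal(size=(2, 8, 8, 1))

# One 3x3 convolutional layer with 4 filters, no padding.
w = rng.normal(scale=0.1, size=(3, 3, 1, 4))

def conv2d(x, w):
    n, W, H, C = x.shape
    kw, kh, _, f = w.shape
    out = np.zeros((n, W - kw + 1, H - kh + 1, f))
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            patch = x[:, i:i + kw, j:j + kh, :]  # (n, kw, kh, C)
            out[:, i, j, :] = np.tensordot(
                patch, w, axes=([1, 2, 3], [0, 1, 2]))
    return np.maximum(out, 0)  # activation applied after the layer

h = conv2d(x, w)
```

Each pass through such a hidden layer shrinks the spatial dimensions (8×8 to 6×6 here) while the filter count sets the new depth.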

한글 인식을 위한 신경망 분류기의 응용 (A Neural Net Classifier for Hangeul Recognition)

  • 최원호;최동혁;이병래;박규태
    • 대한전자공학회논문지
    • /
    • Vol. 27, No. 8
    • /
    • pp.1239-1249
    • /
    • 1990
  • In this paper, an adaptive Mahalanobis distance classifier (AMDC) is designed using neural network design techniques. This classifier has three layers: an input layer, an internal layer, and an output layer. The connection from the input layer to the internal layer is fully connected, while the connection from the internal to the output layer is partial and can be thought of as an ORing. If two or more clusters of patterns of one class lie apart in the feature space, the network adaptively generates internal nodes corresponding to the subclusters of that class. The number of output nodes is the same as the number of classes to classify; the number of internal nodes, on the other hand, is determined by the number of subclusters and is optimized by the network itself. Using this method of forming subclasses, different patterns belonging to the same class can easily be distinguished from other classes. If additional training is needed after training is complete, the AMDC does not have to repeat the training it has already done. To test the performance of the AMDC, experiments classifying 500 Hangeul characters were carried out. In the experiments, 20 print font sets of Hangeul characters (10,000 characters) were used for training, and with 3 sets (1,500 characters) the AMDC was tested for various initial variance and threshold values and compared with other statistical and neural classifiers.
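The adaptive node-generation idea above can be sketched as follows. For simplicity this assumes a unit diagonal covariance, so the distance reduces to Euclidean; the paper's classifier uses a full Mahalanobis distance, and the data and threshold here are made up:

```python
import numpy as np

def train(X, y, threshold=2.0):
    """Grow internal nodes adaptively: a sample farther than
    `threshold` from every existing node of its own class spawns a
    new node, i.e. a new subcluster of that class."""
    nodes = []  # list of (center, class) pairs
    for x, c in zip(X, y):
        dists = [np.linalg.norm(x - m) for m, cc in nodes if cc == c]
        if not dists or min(dists) > threshold:
            nodes.append((x.copy(), c))  # new internal node
    return nodes

# Class 0 has two well-separated clusters, so it should end up with
# two internal nodes; class 1 is a single cluster.
X = np.array([[0.0, 0], [0.1, 0], [10, 10], [10.1, 10], [5, 0], [5.1, 0]])
y = np.array([0, 0, 0, 0, 1, 1])

nodes = train(X, y)
n_class0 = sum(1 for _, c in nodes if c == 0)
```

The output node for class 0 then ORs its two internal nodes, which is how one class occupying separate regions of feature space stays separable from the others.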


중심외주시 훈련 후 망막 외망상층에서의 신경 재조직화 (Neural Reorganization in Retinal Outer Plexiform Layer Induced by Eccentric Viewing Training)

  • 서재명
    • 한국안광학회지
    • /
    • Vol. 19, No. 2
    • /
    • pp.247-252
    • /
    • 2014
  • Purpose: To investigate the characteristics and preferred location of the neural reorganization that occurs after short-term eccentric viewing training. Methods: Fourteen adults with normal vision undertook eccentric viewing training for 21 days; light perception thresholds and multifocal electroretinograms were measured before and after the training and analyzed. Results: Significant improvement was observed not only in the light perception test comparing values before and after the training (p<0.047) but also in the multifocal electroretinogram test (p<0.028). Conclusions: Although the visual peripheral nervous system cannot regenerate, short-term eccentric viewing training induces neural reorganization in the peripheral nervous system.

비전공자 학부생의 훈련데이터와 기초 인공신경망 개발 결과 분석 및 Orange 활용 (Analysis and Orange Utilization of Training Data and Basic Artificial Neural Network Development Results of Non-majors)

  • 허경
    • 실천공학교육논문지
    • /
    • Vol. 15, No. 2
    • /
    • pp.381-388
    • /
    • 2023
  • Through artificial neural network education using spreadsheets, non-major undergraduate students can understand the operating principles of artificial neural networks and develop their own neural network software. Here, education on the operating principles starts with creating training data and assigning correct-answer labels. Students then learn about the firing and activation functions of artificial neurons and the output values computed from the parameters of the input, hidden, and output layers. Finally, they learn how to compute the error between the label initially defined for each training datum and the output value computed by the network, and how the input-, hidden-, and output-layer parameters that minimize the total sum of squared errors are computed. This spreadsheet-based education was conducted with non-major undergraduates, and their image training data and basic neural network development results were collected. This paper analyzes the collected results for two kinds of training data made up of small 12-pixel images and the corresponding neural network software, and presents a method of using the collected training data with the Orange machine learning model training and analysis tool, together with the execution results.
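The spreadsheet-style computation described above, from a 12-pixel image through hidden and output layers to the sum of squared errors, can be sketched as follows. The pixel pattern, label, layer sizes, and weights are made up for illustration:

```python
import numpy as np

# A 12-pixel binary image and a one-hot label, standing in for the
# students' training data; values are illustrative only.
x = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0], dtype=float)
t = np.array([1.0, 0.0])

rng = np.random.default_rng(2)
W_h = rng.normal(scale=0.1, size=(12, 4))  # input -> hidden parameters
W_o = rng.normal(scale=0.1, size=(4, 2))   # hidden -> output parameters

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

h = sigmoid(x @ W_h)               # hidden-layer activations
y = sigmoid(h @ W_o)               # output-layer values
sse = float(((t - y) ** 2).sum())  # sum of squared errors for this datum
```

Training then means adjusting `W_h` and `W_o` to shrink the total of these per-datum error sums, which is exactly the quantity the spreadsheet exercise has students minimize.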

새로운 Preceding Layer Driven MLP 신경회로망의 학습 모델과 그 응용 (Learning Model and Application of New Preceding Layer Driven MLP Neural Network)

  • 한효진;김동훈;정호선
    • 전자공학회논문지B
    • /
    • Vol. 28B, No. 12
    • /
    • pp.27-37
    • /
    • 1991
  • In this paper, a novel PLD (Preceding Layer Driven) MLP (Multi-Layer Perceptron) neural network model and its learning algorithm are described. This learning algorithm differs from conventional ones: integer weights and a hard-limit function are used as the synaptic weight values and the activation function, respectively. The entire learning process is performed layer by layer, and the number of layers can be varied with the difficulty of the training data. Since the synaptic weight values are integers, the synapse circuit can easily be implemented in CMOS. The PLD MLP neural network was applied to English character recognition, arbitrary waveform generation, and the spiral problem.
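A network built from the ingredients named above, integer weights and hard-limit activations evaluated layer by layer, can be sketched as follows. The hand-chosen XOR construction is an illustration of the representation, not the paper's learned network:

```python
import numpy as np

def hard_limit(z):
    """Hard-limit activation: fires 1 when the weighted sum is
    non-negative, else 0."""
    return (z >= 0).astype(int)

# All synaptic weights and thresholds are integers, as in the PLD MLP,
# so each neuron maps directly onto a simple CMOS synapse circuit.
W1 = np.array([[1, 1], [1, 1]])   # first-layer integer weights
b1 = np.array([-1, -2])           # integer thresholds (as biases)
W2 = np.array([[1], [-2]])        # second-layer integer weights
b2 = np.array([-1])

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
H = hard_limit(X @ W1 + b1)           # preceding layer drives the next
Y = hard_limit(H @ W2 + b2).ravel()   # network output: XOR of the inputs
```

Because every quantity is an integer and the activation is a threshold, each layer's outputs are fixed once that layer is trained, which is what makes a layer-by-layer procedure natural here.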


인공신경망 이론을 이용한 충주호의 수질예측 (Water Quality Forecasting of Chungju Lake Using Artificial Neural Network Algorithm)

  • 정효준;이소진;이홍근
    • 한국환경과학회지
    • /
    • Vol. 11, No. 3
    • /
    • pp.201-207
    • /
    • 2002
  • This study was carried out to evaluate an artificial neural network algorithm for water quality forecasting in Chungju Lake, North Chungcheong Province. A multi-layer perceptron (MLP) composed of one input layer, two hidden layers, and one output layer was used, with sigmoid and linear transfer functions. The number of nodes in the hidden layers was decided by trial and error; the appropriate number proved to be 10 for pH training and 15 for DO and BOD, respectively. A reliability index was used to verify the forecasting power. Apart from some outlying data, the values computed by the artificial neural network fitted the actual water quality data well.
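The trial-and-error selection of the hidden node count described above can be sketched as follows. A synthetic sine target stands in for the water quality series, tanh stands in for the sigmoid-type hidden transfer function, and a single hidden layer is used for brevity; none of this reproduces the paper's data or exact network:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the measured series: fit y = sin(2*pi*x).
x = np.linspace(0.0, 1.0, 40).reshape(-1, 1)
y = np.sin(2 * np.pi * x)

def fit_mse(n_hidden, epochs=5000, lr=0.5):
    """Train a 1-hidden-layer MLP (tanh hidden, linear output) by
    batch gradient descent and return its final training MSE."""
    W1 = rng.normal(scale=0.5, size=(1, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(x @ W1 + b1)      # hidden layer
        Y = H @ W2 + b2               # linear output layer
        dY = (Y - y) / len(x)
        dH = (dY @ W2.T) * (1 - H ** 2)
        W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
        W1 -= lr * x.T @ dH; b1 -= lr * dH.sum(axis=0)
    return float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))

# Trial and error over candidate hidden node counts: train each and
# keep the count with the lowest error, as the paper does per variable.
errors = {n: fit_mse(n) for n in (2, 5, 10)}
best = min(errors, key=errors.get)
```

In the paper this comparison led to 10 nodes for pH and 15 for DO and BOD; here `best` simply names whichever candidate fit the synthetic series most closely.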