• Title/Summary/Keyword: Layer-by-layer learning


Fuzzy Supervised Learning Algorithm by using Self-generation (Self-generation을 이용한 퍼지 지도 학습 알고리즘)

  • 김광백
    • Journal of Korea Multimedia Society
    • /
    • v.6 no.7
    • /
    • pp.1312-1320
    • /
    • 2003
  • In this paper, we consider a multilayer neural network with a single hidden layer. The error backpropagation learning method widely used in multilayer neural networks can fall into local minima because of inadequate initial weights and an insufficient number of hidden nodes. We therefore propose a fuzzy supervised learning algorithm with self-generation that creates hidden nodes by combining a fuzzy single-layer perceptron with a modified ART1. From the input layer to the hidden layer, the modified ART1 is used to produce nodes, and a winner-take-all method is adopted for connection weight adaptation, so that only the stored pattern of the winning node is updated for a given input pattern. The proposed method was applied to student identification card images. In simulation results, the proposed method reduces the possibility of local minima and improves learning speed and network paralysis compared with the conventional error backpropagation learning algorithm.

  • PDF
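As an illustration of the self-generation idea described in the abstract above, here is a minimal numpy sketch of ART1-style hidden node generation with winner-take-all updating. The similarity measure and the vigilance threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def self_generate_hidden_nodes(patterns, vigilance=0.7):
    """ART1-style clustering: each prototype corresponds to one hidden node.

    A new hidden node is self-generated whenever no existing prototype
    matches the input closely enough; otherwise only the winning node's
    stored pattern is updated (winner-take-all).
    """
    prototypes = []  # one binary prototype per hidden node
    for x in patterns:
        best, best_sim = None, -1.0
        for i, p in enumerate(prototypes):
            # match = |x AND p| / |x|, as in ART1's vigilance test
            sim = np.sum(np.logical_and(x, p)) / max(np.sum(x), 1)
            if sim > best_sim:
                best, best_sim = i, sim
        if best is not None and best_sim >= vigilance:
            # winner-take-all: update only the winner's stored pattern
            prototypes[best] = np.logical_and(prototypes[best], x).astype(int)
        else:
            prototypes.append(x.copy())  # self-generate a new hidden node
    return prototypes

patterns = np.array([[1, 1, 0, 0],
                     [1, 1, 1, 0],
                     [0, 0, 1, 1],
                     [0, 0, 1, 1]])
protos = self_generate_hidden_nodes(patterns, vigilance=0.7)
```

With this vigilance, the fourth pattern matches the third and updates its prototype instead of creating a fourth node.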

Design of new CNN structure with internal FC layer (내부 FC층을 갖는 새로운 CNN 구조의 설계)

  • Park, Hee-mun;Park, Sung-chan;Hwang, Kwang-bok;Choi, Young-kiu;Park, Jin-hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.05a
    • /
    • pp.466-467
    • /
    • 2018
  • Recently, artificial intelligence has been applied to various fields such as image recognition, speech recognition, and natural language processing, and interest in deep learning technology is increasing. The Convolutional Neural Network (CNN), one of the most representative deep learning algorithms, has strong advantages in image recognition and classification and is widely used in various fields. In this paper, we propose a new network structure that transforms the general CNN structure. A typical CNN structure consists of convolution layers, ReLU layers, and pooling layers. We therefore construct a new network by adding a fully connected layer inside the general CNN structure. This modification is intended to improve the learning on, and the accuracy for, the convolved image by including the generalization capability that is an advantage of fully connected neural networks.

  • PDF
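As a shape-level sketch of the structure proposed above, the following numpy code places a fully connected layer between two convolution/pooling stages. The layer sizes, kernels, and the choice to reshape the FC output back into a feature map are hypothetical assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(x, k):
    """Naive valid-mode 2-D convolution (single channel)."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool2(x):
    """2x2 max pooling (drops odd trailing rows/columns)."""
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = rng.standard_normal((12, 12))            # input image
k1 = rng.standard_normal((3, 3))
k2 = rng.standard_normal((3, 3))

f = max_pool2(relu(conv2d_valid(x, k1)))     # conv/ReLU/pool: 12 -> 10 -> 5
n = f.size                                   # 25 features
W_fc = rng.standard_normal((n, n))           # internal fully connected layer
g = relu(W_fc @ f.ravel()).reshape(f.shape)  # reshape back into a 5x5 map
out = max_pool2(relu(conv2d_valid(g, k2)))   # second conv stage: 5 -> 3 -> 1
```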

Damage detection in structures using modal curvatures gapped smoothing method and deep learning

  • Nguyen, Duong Huong;Bui-Tien, T.;Roeck, Guido De;Wahab, Magd Abdel
    • Structural Engineering and Mechanics
    • /
    • v.77 no.1
    • /
    • pp.47-56
    • /
    • 2021
  • This paper deals with damage detection using the Gapped Smoothing Method (GSM) combined with deep learning. The Convolutional Neural Network (CNN) is a deep learning model with an input layer, an output layer, and a number of hidden layers that consist of convolutional layers. The input layer is a tensor with shape (number of images) × (image width) × (image height) × (image depth). An activation function is applied each time the tensor passes through a hidden layer, and the last hidden layer is followed by a fully connected layer, after which the output layer, the final layer, produces the CNN's prediction. In this paper, a complete machine learning system is introduced. The training data were taken from a Finite Element (FE) model, and the input images are contour plots of the curvature gapped smoothing damage index. A free-free beam is used as a case study. In the first step, the FE model of the beam was used to generate data; the collected data were then divided into two parts, 70% for training and 30% for validation. In the second step, the proposed CNN was trained on the training data and validated on the remaining data. Furthermore, a vibration experiment on a damaged steel beam in free-free support conditions was carried out in the laboratory to test the method. A total of 15 accelerometers were set up to measure the mode shapes and calculate the gapped-smoothed curvature of the damaged beam. Two scenarios with different damage severities were introduced. The results showed that the trained CNN successfully detected both the location and the severity of the damage in the experimental beam.
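The data pipeline described above can be sketched in a few lines of numpy; the image count and dimensions here are hypothetical placeholders, not the paper's actual dataset sizes.

```python
import numpy as np

rng = np.random.default_rng(42)

# Input tensor shape as described in the abstract:
# (number of images) x (image width) x (image height) x (image depth)
images = rng.random((100, 64, 64, 3))        # stand-in for FE contour plots
labels = rng.integers(0, 2, size=100)        # stand-in damage labels

# Random 70% / 30% split of the collected data
idx = rng.permutation(len(images))
n_train = int(0.7 * len(images))
train_idx, val_idx = idx[:n_train], idx[n_train:]
X_train, y_train = images[train_idx], labels[train_idx]
X_val, y_val = images[val_idx], labels[val_idx]
```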

A Study on the Control of Recognition Performance and the Rehabilitation of Damaged Neurons in Multi-layer Perceptron (다층 퍼셉트론의 인식력 제어와 복원에 관한 연구)

  • 박인정;장호성
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.16 no.2
    • /
    • pp.128-136
    • /
    • 1991
  • A neural network of the multilayer perceptron type, trained by the error backpropagation learning rule, is generally used for the verification or clustering of similar patterns. When learning is completed, the network produces a constant output for each pattern. This paper shows that the intensity of a neuron's output can be controlled by a function that intensifies the excitatory or inhibitory interconnection coefficients between the neurons in the output layer and those in the hidden layer. The value of the control factor in this function is derived from the known values of the neural network after learning is completed, and the amount by which an output neuron's activity increases for an arbitrary value of the factor is also derived. As applications, increased recognition performance for distorted patterns is introduced, the outputs of partially damaged neurons are then treated, and this paper shows that the reduced recognition performance can be recovered.

  • PDF
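A minimal numpy sketch of the control idea above: scaling the excitatory (positive) hidden-to-output interconnections by a factor to boost output intensity. The function form and the factor are illustrative assumptions, not the paper's derived control function.

```python
import numpy as np

def control_output(hidden_act, W, alpha):
    """Scale excitatory (positive) hidden->output weights by alpha to
    intensify the output neurons' activity; inhibitory weights are kept."""
    W_controlled = np.where(W > 0, alpha * W, W)
    return W_controlled @ hidden_act

rng = np.random.default_rng(1)
h = rng.random(4)                    # non-negative hidden activations
W = rng.standard_normal((3, 4))      # hidden -> output weights after learning
base = control_output(h, W, 1.0)     # unmodified network output
boosted = control_output(h, W, 2.0)  # output with excitatory weights doubled
```

Since only positive weights are amplified and the hidden activations are non-negative, every output can only increase or stay the same.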

Segmentation of Objects with Multi Layer Perceptron by Using Informations of Window

  • Kwak, Young-Tae
    • Journal of the Korean Data and Information Science Society
    • /
    • v.18 no.4
    • /
    • pp.1033-1043
    • /
    • 2007
  • A multilayer perceptron for segmenting objects in images usually uses only input windows of a fixed size taken from an image. These windows are treated as independent training samples, which degrades the performance of the multilayer perceptron because the position and influence of each input window within the input image are not considered. We therefore propose a new approach that adds the position and influence information of the input windows to the multilayer perceptron's input layer. Our approach improves both the performance and the learning time of the multilayer perceptron, and our experiments confirm that the new algorithm performs well.

  • PDF
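The idea of augmenting each window with its position can be sketched as follows; the window size, stride, and normalized (row, column) encoding are assumptions for illustration, not the paper's exact input representation.

```python
import numpy as np

def windows_with_position(image, win=4):
    """Slide a fixed-size window over the image and append the window's
    normalized (row, col) position, so the MLP input layer also sees
    where each window came from."""
    h, w = image.shape
    samples = []
    for r in range(0, h - win + 1, win):
        for c in range(0, w - win + 1, win):
            pixels = image[r:r + win, c:c + win].ravel()
            pos = [r / h, c / w]  # position information added to the input
            samples.append(np.concatenate([pixels, pos]))
    return np.array(samples)

img = np.arange(64, dtype=float).reshape(8, 8)
X = windows_with_position(img)   # 4 windows of 16 pixels + 2 position values
```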

Development of Deep Learning Structure to Improve Quality of Polygonal Containers (다각형 용기의 품질 향상을 위한 딥러닝 구조 개발)

  • Yoon, Suk-Moon;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.25 no.3
    • /
    • pp.493-500
    • /
    • 2021
  • In this paper, we propose a deep learning structure for improving the quality inspection of polygonal containers. The structure consists of convolution layers, a bottleneck layer, fully connected layers, and a softmax layer. Each convolution layer obtains a feature image by performing 3x3 convolutions, with several feature filters, on the input image or on the feature image of the previous layer. The bottleneck layer selects only the optimal features from the feature image extracted through the convolution layers, reducing the channels with a 1x1 convolution followed by ReLU and then performing a 3x3 convolution with ReLU. A global average pooling operation performed after the bottleneck layer reduces the feature image to a single value per channel. The output data are then produced through six fully connected layers, and the softmax layer converts the resulting node values into values between 0 and 1 through its activation function. After training, the recognition process classifies non-circular glass bottles by performing image acquisition with a camera, position detection, and classification with the trained network, as in the learning process. To evaluate the performance of the proposed structure, an experiment at an authorized testing institute yielded a good/defective discrimination accuracy of 99%, on a par with the world's highest level. The average inspection time was 1.7 seconds, within the operating time standards of production processes using non-circular machine vision systems. The effectiveness of the proposed deep learning structure for improving the quality of polygonal containers was therefore proven.
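The bottleneck, global average pooling, and softmax stages described above can be sketched in numpy as follows; the channel counts and class count are hypothetical, and the 3x3 convolution inside the bottleneck is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1_relu(x, w):
    """1x1 convolution = per-pixel linear map across channels, then ReLU.
    x: (H, W, C_in), w: (C_in, C_out)."""
    return np.maximum(x @ w, 0)

def global_average_pooling(x):
    """Reduce each channel's feature map to a single average value."""
    return x.mean(axis=(0, 1))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

feat = rng.random((8, 8, 32))            # feature image from the conv layers
w_reduce = rng.standard_normal((32, 8))  # bottleneck: reduce channels 32 -> 8
bottleneck = conv1x1_relu(feat, w_reduce)
vec = global_average_pooling(bottleneck) # one value per channel
w_fc = rng.standard_normal((8, 5))       # stand-in for the FC stage
probs = softmax(vec @ w_fc)              # softmax output in [0, 1], sums to 1
```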

Modified Error Back Propagation Algorithm using the Approximating of the Hidden Nodes in Multi-Layer Perceptron (다층퍼셉트론의 은닉노드 근사화를 이용한 개선된 오류역전파 학습)

  • Kwak, Young-Tae;Lee, young-Gik;Kwon, Oh-Seok
    • Journal of KIISE:Software and Applications
    • /
    • v.28 no.9
    • /
    • pp.603-611
    • /
    • 2001
  • This paper proposes a novel fast layer-by-layer algorithm with better generalization capability. In the proposed algorithm, the weights of the hidden layer are updated using the target vector of the hidden layer obtained by the least squares method. This avoids the slow learning that can occur due to the small magnitude of the gradient vector in the hidden layer. The algorithm was tested on a handwritten digit recognition problem. Its learning speed was faster than that of the error backpropagation algorithm and the modified error function algorithm, and similar to that of Ooyen's method and the layer-by-layer algorithm. Moreover, the simulation results showed that the proposed algorithm had the best generalization capability among them regardless of the number of hidden nodes. The proposed algorithm thus combines the learning speed of the layer-by-layer algorithm with the generalization capability of the error backpropagation and modified error function algorithms.

  • PDF
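The core of a layer-by-layer update, solving for a layer's weights directly from its target vectors instead of following the gradient, can be sketched with a least-squares solve; the data sizes and random targets here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def least_squares_weights(X, T):
    """Solve X @ W ~= T for the hidden-layer weights W in the least-squares
    sense, rather than following the (often tiny) gradient vector."""
    W, *_ = np.linalg.lstsq(X, T, rcond=None)
    return W

X = rng.standard_normal((50, 10))  # 50 samples, 10 input features
T = rng.standard_normal((50, 6))   # hypothetical target vectors for 6 hidden nodes
W = least_squares_weights(X, T)
residual = np.linalg.norm(X @ W - T)
```

By construction the residual can never exceed what the zero-weight solution gives, so the fit is at least as good as doing nothing.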

A Study on the Implementation of Modified Hybrid Learning Rule (변형하이브리드 학습규칙의 구현에 관한 연구)

  • 송도선;김석동;이행세
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.12
    • /
    • pp.116-123
    • /
    • 1994
  • A modified hybrid learning rule (MHLR) is proposed, derived by combining the backpropagation (BP) algorithm, known as an excellent classifier, with a modified Hebbian rule obtained by changing the original Hebbian rule, which is a good feature extractor. The network architecture of the MHLR is a multilayer neural network. The weights of the MHLR are calculated as the sum of the BP weight and the modified Hebbian weight between the input layer and the hidden layer, and as the BP weight alone between the hidden layer and the output layer. To evaluate the performance, BP, the MHLR, and the previously proposed hybrid learning rule (HLR) were simulated by the Monte Carlo method. As a result, the MHLR is the best in recognition rate and the HLR is second. In learning speed, the HLR and MHLR are much the same, while BP is relatively slow.

  • PDF
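A minimal numpy sketch of the weight combination described above: the input-to-hidden weight is the sum of a BP-learned weight and a Hebbian (outer-product) weight. The activity matrices and sizes are hypothetical, and the "modified" Hebbian rule is approximated here by a plain correlation term.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_weight(inputs, hidden):
    """Hebbian weight: average outer product (correlation) of input and
    hidden layer activity. Stands in for the paper's modified Hebbian rule."""
    return inputs.T @ hidden / len(inputs)

# hypothetical activities collected during training
X = rng.standard_normal((20, 5))    # input layer activity, 20 samples
H = rng.standard_normal((20, 3))    # hidden layer activity
W_bp = rng.standard_normal((5, 3))  # weight learned by backpropagation

# MHLR: input->hidden weight = BP weight + Hebbian weight
W_ih = W_bp + hebbian_weight(X, H)
```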

Tension Estimation of Tire using Neural Networks and DOE (신경회로망과 실험계획법을 이용한 타이어의 장력 추정)

  • Lee, Dong-Woo;Cho, Seok-Swoo
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.28 no.7
    • /
    • pp.814-820
    • /
    • 2011
  • Numerical simulation takes a long time because the structural design of a tire requires nonlinear material properties. Neural networks have therefore been widely studied in engineering design to reduce numerical computation time. The number of hidden layers, the number of hidden layer neurons, and the amount of training data have been considered as the structural design variables of neural networks, but in applications of neural networks to design optimization there are few studies on how to arrange the input layer neurons. To investigate the effect of input layer neuron arrangement on neural networks, the tire contour design variables and the tension in the bead area were assigned as the inputs and the output of the neural network, respectively, and the arrangement of the design variables in the input layer was determined by main effect analysis.
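The main effect analysis used above to order the input variables can be sketched as follows; the two-level design matrix and tension values are invented for illustration, not the paper's DOE data.

```python
import numpy as np

def main_effects(levels, response):
    """Main effect of each design variable: range of the mean response
    across that variable's levels (a larger range means a stronger effect)."""
    effects = []
    for j in range(levels.shape[1]):
        means = [response[levels[:, j] == lv].mean()
                 for lv in np.unique(levels[:, j])]
        effects.append(max(means) - min(means))
    return np.array(effects)

# hypothetical 2-level design matrix for 3 contour design variables
levels = np.array([[0, 0, 0],
                   [0, 1, 1],
                   [1, 0, 1],
                   [1, 1, 0]])
tension = np.array([10.0, 12.0, 20.0, 22.0])  # stand-in bead-area tensions
effects = main_effects(levels, tension)
order = np.argsort(effects)[::-1]  # arrange input neurons by effect size
```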

A Multi-layer Bidirectional Associative Neural Network with Improved Robust Capability for Hardware Implementation (성능개선과 하드웨어구현을 위한 다층구조 양방향연상기억 신경회로망 모델)

  • 정동규;이수영
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.9
    • /
    • pp.159-165
    • /
    • 1994
  • In this paper, we propose a multilayer associative neural network structure suitable for hardware implementation, with performance refinement and improved robustness. Unlike other methods that reduce network complexity by putting restrictions on the synaptic weights, we impose the requirement on the hidden layer neurons. The proposed network has synaptic weights obtained by the Hebbian rule between the memory patterns of adjacent layers, as in Kosko's BAM. The network can be extended to an arbitrary multilayer network trainable with a genetic algorithm, which obtains the hidden layer memory patterns starting from initial random binary patterns. Learning minimizes a newly defined network error composed of the errors at the input, hidden, and output layers. After learning, a bidirectional recall process improves performance over one-shot recall. Experimental results on pattern recognition problems demonstrate the network's performance as a function of the parameter representing the relative significance of the hidden layer error over the sum of the input and output layer errors, show that the proposed model performs much better than Kosko's bidirectional associative memory (BAM), and show the performance gain due to the bidirectionality of the recall process.

  • PDF
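The Kosko BAM baseline referenced above, Hebbian outer-product weights with bidirectional recall, can be sketched in a few lines of numpy; the bipolar pattern pair is a toy example.

```python
import numpy as np

def bam_weights(pairs):
    """Kosko's BAM: sum of Hebbian outer products over the stored
    bipolar (+1/-1) pattern pairs."""
    return sum(np.outer(a, b) for a, b in pairs)

def recall(W, a, steps=5):
    """Bidirectional recall: bounce activity between the two layers
    until it stabilizes (a fixed number of passes here)."""
    for _ in range(steps):
        b = np.sign(a @ W)  # forward pass to layer B
        a = np.sign(W @ b)  # backward pass to layer A
    return a, b

A = np.array([1, -1, 1, -1])
B = np.array([1, 1, -1])
W = bam_weights([(A, B)])
a, b = recall(W, A.copy())  # recall the stored pair from layer A's pattern
```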