• Title/Summary/Keyword: multi layer perceptron


A Machine Vision Algorithm for Inspecting a Crimpled Terminal (압착단자의 자동검사를 위한 시각인식 알고리즘)

  • Lee, Moon-Kyu;Lee, Jung-Hwa
    • IE interfaces / v.11 no.1 / pp.191-197 / 1998
  • This paper describes a machine vision algorithm for inspecting a crimpled terminal. The crimpled terminal is one of the wire-harness assemblies that transmit current or signals between a pair of electrical or electronic assemblies. The major defect considered is wire exposure on the wire barrels. To detect wire exposure, we develop a multi-layer perceptron in which three features extracted from the image of the crimpled terminal are used as input data: edginess, variance, and the total number of valley points (TVP). The multi-layer neural network has been successfully tested on a number of real specimens collected from a wire-harness factory.

  • PDF
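
The entry above reduces the inspection to a three-feature classification problem, so a very small network suffices. Below is a minimal, hypothetical sketch (not the authors' code) of an MLP classifier fed with the three features named in the abstract; all feature values are invented for illustration.

```python
# Hypothetical sketch: small MLP on the three handcrafted features
# [edginess, variance, total number of valley points (TVP)].
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Invented feature rows; labels: 1 = wire exposure (defect), 0 = good crimp.
X = np.array([
    [0.12,  4.1,  3],
    [0.10,  3.8,  2],
    [0.55,  9.7, 11],
    [0.61, 10.2, 13],
    [0.15,  4.5,  4],
    [0.58,  9.1, 12],
])
y = np.array([0, 0, 1, 1, 0, 1])

# Standardize the three features, then fit a one-hidden-layer perceptron.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(8,), activation="logistic",
                  max_iter=2000, random_state=0),
)
model.fit(X, y)

# Classify a new terminal summarized by its three image features.
print(model.predict([[0.50, 8.9, 10]]))  # expected to flag wire exposure on this toy data
```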

Voiced-Unvoiced-Silence Detection Algorithm using Perceptron Neural Network (퍼셉트론 신경회로망을 사용한 유성음, 무성음, 묵음 구간의 검출 알고리즘)

  • Choi, Jae-Seung
    • The Journal of the Korea institute of electronic communication sciences / v.6 no.2 / pp.237-242 / 2011
  • This paper proposes an algorithm that detects voiced, unvoiced, and silence sections at each frame using a multi-layer perceptron neural network. First, the power spectrum and FFT (fast Fourier transform) coefficients of each frame are used as input to the neural network, which is then trained on these features. The performance of the proposed algorithm was evaluated in terms of detection rates for the voiced, unvoiced, and silence sections using various speech signals degraded by white noise as the network input. The detection rates were 92% or higher for such noisy speech, even when the training and evaluation data were different.
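
As a rough illustration of the frame-wise set-up described above, the sketch below frames a signal, takes log power-spectrum features per frame, and trains an MLP to assign voiced/unvoiced/silence labels. The signal, frame length, and labels are synthetic stand-ins, not the paper's data.

```python
# Synthetic stand-ins: voiced ~ harmonic tone, unvoiced ~ broadband noise, silence ~ near zero.
import numpy as np
from sklearn.neural_network import MLPClassifier

SR, FRAME = 8000, 256                     # assumed sample rate and frame length
rng = np.random.default_rng(0)

def frame_features(x):
    """Split x into non-overlapping frames; return log power spectra per frame."""
    n = len(x) // FRAME
    frames = x[: n * FRAME].reshape(n, FRAME)
    power = np.abs(np.fft.rfft(frames * np.hanning(FRAME), axis=1)) ** 2
    return np.log(power + 1e-10)

t = np.arange(SR) / SR
voiced   = 0.8 * np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 240 * t)
unvoiced = 0.3 * rng.standard_normal(SR)
silence  = 0.01 * rng.standard_normal(SR)

feats = [frame_features(s) for s in (voiced, unvoiced, silence)]
X = np.vstack(feats)                      # one row of spectral features per frame
y = np.concatenate([np.full(len(f), k) for k, f in enumerate(feats)])  # 0=V, 1=U, 2=S

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X, y)
print("training accuracy on toy frames:", clf.score(X, y))
```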

Genetically Optimized Self-Organizing Fuzzy Polynomial Neural Networks Based on Fuzzy Polynomial Neurons (퍼지다항식 뉴론 기반의 유전론적 최적 자기구성 퍼지 다항식 뉴럴네트워크)

  • 박호성;이동윤;오성권
    • The Transactions of the Korean Institute of Electrical Engineers D / v.53 no.8 / pp.551-560 / 2004
  • In this paper, we propose a new architecture of Self-Organizing Fuzzy Polynomial Neural Networks (SOFPNN) that is based on a genetically optimized multilayer perceptron with fuzzy polynomial neurons (FPNs) and discuss its comprehensive design methodology, which involves mechanisms of genetic optimization, especially genetic algorithms (GAs). The proposed SOFPNN gives rise to a structurally optimized network and comes with a substantial level of flexibility compared to conventional SOFPNNs. The design procedure applied in the construction of each layer deals with structural optimization, namely the selection of preferred nodes (FPNs) with specific local characteristics (the number of input variables, the order of the polynomial in the consequent part of the fuzzy rules, and the specific subset of input variables), and also addresses parametric optimization. Through this consecutive structural and parametric optimization, an optimized and flexible fuzzy neural network is generated in a dynamic fashion. To evaluate the performance of the genetically optimized SOFPNN, the model is tested on two time series data sets (gas furnace and chaotic time series). A comparative analysis reveals that the proposed SOFPNN exhibits higher accuracy and superb predictive capability in comparison to previous models available in the literature.
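
The structural optimization described above is elaborate; the sketch below illustrates only its core ingredient in a heavily simplified, hypothetical form: a small genetic algorithm selects an input-variable subset and a polynomial order for a single polynomial node by minimizing least-squares error on toy data. The real SOFPNN evolves many fuzzy polynomial neurons layer by layer.

```python
# Toy GA: choose which inputs a polynomial node uses and its order (1-3)
# by minimizing least-squares fitting error.  Gene = [use_x1..use_x4, order].
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 4))                       # toy inputs x1..x4
y = 1 + 2 * X[:, 0] - 3 * X[:, 1] * X[:, 2] + X[:, 0] ** 2  # toy target

def poly_design(Xs, order):
    """Polynomial design matrix (with bias) over the selected columns."""
    cols = [np.ones(len(Xs))]
    for d in range(1, order + 1):
        for idx in combinations_with_replacement(range(Xs.shape[1]), d):
            cols.append(np.prod(Xs[:, idx], axis=1))
    return np.column_stack(cols)

def fitness(gene):
    mask, order = gene[:4].astype(bool), int(gene[4])
    if mask.sum() == 0:
        return np.inf                                   # a node must use at least one input
    A = poly_design(X[:, mask], order)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.mean((A @ coef - y) ** 2)

pop = np.column_stack([rng.integers(0, 2, (30, 4)), rng.integers(1, 4, 30)])
for _ in range(40):                                     # keep the best genes, mutate copies
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[:10]]
    children = parents[rng.integers(0, 10, 20)].copy()
    children[:, :4] ^= rng.random((20, 4)) < 0.1        # bit-flip mutation on the input mask
    children[:, 4] = np.clip(children[:, 4] + rng.integers(-1, 2, 20), 1, 3)
    pop = np.vstack([parents, children])

best = min(pop, key=fitness)
print("selected inputs:", np.nonzero(best[:4])[0] + 1, "order:", int(best[4]))
```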

The Recognition of Korean Character Using Preceding Layer Driven MLP (Preceding Layer Driven 다층 퍼셉트론을 이용한 한글문자 인식)

  • 백승엽;김동훈;정호선
    • Journal of the Korean Institute of Telematics and Electronics B / v.28B no.5 / pp.382-393 / 1991
  • In this paper, we propose a method for recognizing printed Korean characters using a Preceding Layer Driven multi-layer perceptron. A new learning algorithm, which restricts the weight values to integers and uses a step function as the transfer function, is presented with hardware implementation in mind. We obtained 522 Korean character images for the experiment using a scanner with 600 DPI resolution. The preprocessing steps for feature extraction are separation of individual characters, noise elimination, smoothing, thinning, edge-point extraction, branch-point extraction, and stroke segmentation. The features used are the number and shapes of edge points, the number of branch points, and the number of strokes in eight directions.

  • PDF
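
To make the hardware-oriented idea concrete, here is a tiny hypothetical forward pass with integer weights and a hard-limiting (step) transfer function, so evaluation needs only integer multiply-accumulate and a threshold; the weights and input are invented.

```python
# Hypothetical integer-weight MLP with a step transfer function.
import numpy as np

def step(v):
    return (v >= 0).astype(int)          # hard-limiting transfer function

def int_layer(x, W, b):
    return step(W @ x + b)               # integer multiply-accumulate + threshold

# Toy 4-input, 3-hidden, 2-output network; all parameters are integers.
W1 = np.array([[ 2, -1,  0,  1],
               [-1,  3,  1, -2],
               [ 0,  1, -1,  2]])
b1 = np.array([-1, 0, -2])
W2 = np.array([[ 1, -2,  1],
               [-1,  1,  2]])
b2 = np.array([0, -1])

x = np.array([1, 0, 1, 1])               # toy binary feature vector
h = int_layer(x, W1, b1)
print("hidden:", h, "output:", int_layer(h, W2, b2))
```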

A Design of Parallel Module Neural Network for Robot Manipulators having a fast Learning Speed (빠른 학습 속도를 갖는 로보트 매니퓰레이터의 병렬 모듈 신경제어기 설계)

  • 김정도;이택종
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.9 / pp.1137-1153 / 1995
  • It is not yet possible to determine analytically the optimal number of hidden-layer neurons in a neural network. However, it has been shown experimentally that there is a limit to increasing the number of hidden neurons, because too large an increase causes instability, local minima, and large errors. This paper proposes a modular neural controller with pattern-recognition ability to resolve this trade-off and to obtain fast learning convergence. The proposed controller is composed of several modules, each a multi-layer perceptron (MLP). Each module has fewer hidden neurons because it learns only the input patterns that have similar learning directions. Experiments with a six-joint robot manipulator have shown the effectiveness and feasibility of the proposed parallel modular neural controller with a pattern-recognition perceptron.

  • PDF
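
A conceptual sketch of the modular idea follows: input patterns are grouped (here by k-means clustering, as a rough stand-in for "similar learning directions"), one small MLP is trained per group, and new inputs are dispatched to the nearest module. The toy function below replaces real manipulator data.

```python
# Conceptual module routing: cluster inputs, train one MLP per cluster, dispatch by centroid.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(600, 2))           # toy joint-angle inputs
y = np.sin(X[:, 0]) + 0.5 * np.cos(2 * X[:, 1])         # toy target (stand-in for dynamics)

K = 4
km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(X)
modules = []
for k in range(K):                                      # each module: a small MLP
    idx = km.labels_ == k
    m = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
    modules.append(m.fit(X[idx], y[idx]))

def predict(x):
    k = km.predict(x.reshape(1, -1))[0]                 # dispatch to the nearest module
    return modules[k].predict(x.reshape(1, -1))[0]

x_new = np.array([0.3, -1.2])
print("module prediction:", predict(x_new),
      "true value:", np.sin(0.3) + 0.5 * np.cos(-2.4))
```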

Design of CNN with MLP Layer (MLP 층을 갖는 CNN의 설계)

  • Park, Jin-Hyun;Hwang, Kwang-Bok;Choi, Young-Kiu
    • Journal of the Korean Society of Mechanical Technology / v.20 no.6 / pp.776-782 / 2018
  • Since the basic CNN structure was introduced by LeCun in 1989, there had been no major structural change other than deeper networks until recently. A deep network enhances expressive power by improving the abstraction ability of the network and can learn complex problems through increased non-linearity. However, training a deep network suffers from vanishing gradients and longer learning times. In this study, we propose a CNN structure with an MLP layer. The proposed CNNs are superior to a general CNN in classification performance. Experiments confirm that classification accuracy is higher because the added MLP layer increases non-linearity, and that performance can thus be improved without making the network deeper.
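
A rough PyTorch sketch of the general architecture follows; the paper's exact layer sizes are not given in the abstract, so the dimensions below are assumptions: convolutional features are followed by an added MLP block before the classifier, increasing non-linearity without adding convolutional depth.

```python
# Sketch: small CNN whose conv features feed an extra MLP block before the classifier.
import torch
import torch.nn as nn

class CNNWithMLP(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(                # two conv blocks: 28x28 -> 7x7
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.mlp = nn.Sequential(                     # added MLP block for extra non-linearity
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.mlp(self.features(x)))

x = torch.randn(8, 1, 28, 28)                         # dummy batch of images
print(CNNWithMLP()(x).shape)                          # torch.Size([8, 10])
```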

Slime mold and four other nature-inspired optimization algorithms in analyzing the concrete compressive strength

  • Yinghao Zhao;Hossein Moayedi;Loke Kok Foong;Quynh T. Thi
    • Smart Structures and Systems / v.33 no.1 / pp.65-91 / 2024
  • The use of five optimization techniques for predicting the best-fit model of a strength-based concrete mixture is examined in this work. Five optimization techniques are utilized for this purpose: Slime Mold Algorithm (SMA), Black Hole Algorithm (BHA), Multi-Verse Optimizer (MVO), Vortex Search (VS), and Whale Optimization Algorithm (WOA). In MATLAB, a hybrid learning strategy that combines least-squares estimation with backpropagation is employed to train an artificial neural network. Of the 103 samples, 72 are utilized as training data and 31 as testing data. The multi-layer perceptron (MLP) is used to analyze all data, and results are verified by comparison. For the best-fit models of SMA-MLP, BHA-MLP, MVO-MLP, VS-MLP, and WOA-MLP, the coefficients of determination (R2) in the training phase are 0.9603, 0.9679, 0.9827, 0.9841, and 0.9770, and in the testing phase 0.9567, 0.9552, 0.9594, 0.9888, and 0.9695, respectively. In addition, the best-fit training structures for SMA, BHA, MVO, VS, and WOA (all combined with the multi-layer perceptron, MLP) are achieved when the population size is set to 450, 500, 250, 150, and 500, respectively. Among all the options considered, VS offers the strongest prediction network for training the MLP.
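
The sketch below shows the general hybrid pattern in a generic form, not any of the five algorithms above: a plain population-based search optimizes the weight vector of a tiny MLP by minimizing MSE, standing in for training the MLP with a nature-inspired optimizer. The data is random and purely illustrative.

```python
# Generic population-based training of a tiny MLP (stand-in for SMA/BHA/MVO/VS/WOA).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(103, 6))                    # 103 mixes, 6 toy features
y = X @ np.array([0.5, -0.2, 0.8, 0.1, 0.3, -0.4]) + 0.1 * rng.standard_normal(103)

H = 8                                                   # hidden units
n_w = 6 * H + H + H + 1                                 # length of the flat weight vector

def mlp_predict(w, X):
    W1 = w[:6 * H].reshape(6, H); b1 = w[6 * H:6 * H + H]
    W2 = w[6 * H + H:6 * H + 2 * H]; b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def mse(w):
    return np.mean((mlp_predict(w, X) - y) ** 2)

pop = rng.normal(0, 1, size=(40, n_w))                  # "population size" = 40
for _ in range(200):
    fit = np.array([mse(w) for w in pop])
    best = pop[np.argmin(fit)]
    # move every candidate a random step toward the current best (crude update rule)
    pop = best + 0.7 * (pop - best) + 0.05 * rng.normal(size=pop.shape)
    pop[0] = best                                       # keep the elite unchanged
print("final training MSE:", min(mse(w) for w in pop))
```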

A Study on an Artificial Neural Network That Converts Korean Words into Phonetic Symbols (한글 단어를 발음 기호로 변환 시키는 인공신경망에 관한 연구)

  • Yang, Jae-U;Kim, Doo-Hyeon
    • ETRI Journal / v.10 no.3 / pp.113-124 / 1988
  • This paper discusses the design of an artificial neural network that converts Korean words into phonetic symbols and the results of simulating it. The network has a multi-layer perceptron structure and was trained with the error back-propagation learning algorithm. When part of a Korean pronunciation dictionary was repeatedly presented to the network for training, it performed the conversion with up to 97% accuracy on trained words and 91% accuracy on untrained words. This indicates that the designed network learned by itself the pronunciation rules implicitly embedded in the pronunciation dictionary. The relationship between the network's learning performance and the input coding was also studied; experiments showed that a local code yields higher learning performance than a compact code when converting Korean words into phonetic symbols.

  • PDF
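
To illustrate the local-versus-compact coding comparison mentioned above, the sketch below trains the same MLP on a toy symbol-to-class mapping using a one-hot ("local") input code and a dense binary ("compact") code; the symbols and mapping are invented, not Hangul data.

```python
# Same toy mapping, two input codings: one-hot (local) vs. dense binary (compact).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_symbols = 16
targets = rng.integers(0, 8, n_symbols)                 # toy "pronunciation" classes

local = np.eye(n_symbols)                               # local code: 16 bits per symbol
compact = np.array([[int(b) for b in format(i, "04b")]  # compact code: 4 bits per symbol
                    for i in range(n_symbols)])

for name, X in [("local", local), ("compact", compact)]:
    clf = MLPClassifier(hidden_layer_sizes=(12,), max_iter=5000, random_state=0)
    clf.fit(X, targets)
    print(name, "training accuracy:", clf.score(X, targets))
```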

Prediction of Monthly Transition of the Composition Stock Price Index Using Error Back-propagation Method (신경회로망을 이용한 종합주가지수의 변화율 예측)

  • Roh, Jong-Lae;Lee, Jong-Ho
    • Proceedings of the KIEE Conference / 1991.07a / pp.896-899 / 1991
  • This paper presents a neural network method to predict the Korea composite stock price index. The error back-propagation method is used to train a multi-layer perceptron network. Ten economic indices from the past seven years are used as training data, and the monthly transition of the composite stock price index is represented by five output neurons. Test results of this method using the data of the last 18 months are very encouraging.

  • PDF
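
A minimal sketch in the spirit of this set-up follows: ten monthly indicators feed a backpropagation-trained MLP whose output is one of five classes for the next month's index transition. The data is random, not the ten real economic indices used in the paper.

```python
# Toy version: 10 indicators in, 5 transition classes out, trained by backpropagation (SGD).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
months, n_indices = 84, 10                       # ~7 years of monthly data (toy)
X = rng.standard_normal((months, n_indices))
# Toy rule: bucket a weighted sum of the indices into 5 transition classes.
score = X @ rng.standard_normal(n_indices)
y = np.digitize(score, np.quantile(score, [0.2, 0.4, 0.6, 0.8]))   # classes 0..4

clf = MLPClassifier(hidden_layer_sizes=(12,), solver="sgd",
                    learning_rate_init=0.05, max_iter=5000, random_state=0)
clf.fit(X[:-18], y[:-18])                        # hold out the last 18 months for testing
print("held-out accuracy:", clf.score(X[-18:], y[-18:]))
```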

Classification performance comparison of inductive learning methods (귀납적 학습방법들의 분류성능 비교)

  • 이상호;지원철
    • Proceedings of the Korean Operations and Management Science Society Conference / 1997.10a / pp.173-176 / 1997
  • In this paper, the classification performances of inductive learning methods are investigated using credit rating data. The adopted classifiers are Multiple Discriminant Analysis (MDA), Quinlan's C4.5, the Multi-Layer Perceptron (MLP), and the Cascade Correlation Network (CCN). The data used in this analysis were obtained from the publicly announced rating reports of the three Korean rating agencies. The performances of the four classifiers are analyzed in terms of prediction accuracy. The results show that no classifier is dominated by the others.

  • PDF
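
As an illustration of such a comparison, the sketch below scores three stand-in classifiers on a synthetic credit-rating-like dataset: linear discriminant analysis in place of MDA, a decision tree in place of C4.5, and an MLP; the cascade correlation network has no scikit-learn counterpart and is omitted here.

```python
# Stand-in comparison on synthetic data: LDA (for MDA), decision tree (for C4.5), MLP.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)   # toy 3-grade credit ratings

models = {
    "MDA (LDA stand-in)": LinearDiscriminantAnalysis(),
    "C4.5 (decision-tree stand-in)": DecisionTreeClassifier(random_state=0),
    "MLP": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()        # mean 5-fold prediction accuracy
    print(f"{name}: {acc:.3f}")
```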