• Title/Summary/Keyword: Hidden Layer

511 search results

The Structure of Boundary Decision Using the Back Propagation Algorithms (역전파 알고리즘을 이용한 경계결정의 구성에 관한 연구)

  • Lee, Ji-Young
    • The Journal of Information Technology / v.8 no.1 / pp.51-56 / 2005
  • The back propagation algorithm is a very effective supervised training method for multi-layer feed-forward neural networks. This paper studies decision boundary formation based on the back propagation algorithm. The discriminating powers of several neural network topologies are also investigated against five manually created data sets. It is found that neural networks with multiple hidden layers perform better than those with a single hidden layer.

  • PDF
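The decision-boundary idea above can be made concrete with a minimal numpy sketch (not the paper's code): a 2-H-1 feed-forward network trained by plain back propagation on XOR, a boundary no single linear unit can form. The layer size, learning rate, and epoch count are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_xor(hidden=8, lr=1.0, epochs=5000, seed=0):
    """Train a 2-hidden-1 sigmoid network on XOR with back propagation."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.normal(0, 1, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    losses = []
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)        # hidden layer activations
        out = sigmoid(h @ W2 + b2)      # output layer
        losses.append(float(np.mean((out - y) ** 2)))
        # back-propagate the squared error through both layers
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    return losses, out

losses, out = train_xor()
```

Stacking a second hidden layer (as the paper's multi-layer topologies do) would only change the `sizes` of the weight list; the training loop is the same.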

Improvement of Electroforming Process System Based on Double Hidden Layer Network (이중 비밀 다층구조 네트워크에 기반한 전기주조 공정 시스템의 개선)

  • Byung-Won Min
    • Journal of Internet of Things and Convergence / v.9 no.3 / pp.61-67 / 2023
  • In order to optimize the pulse copper electroforming process, a double hidden layer BP (Back Propagation) neural network is constructed. Through sample training, the mapping relationship between electroforming process conditions and target properties is accurately established, and the prediction of the microhardness and tensile strength of the electroformed layer in the pulse copper electroforming process is realized. The predicted results are verified by copper electrodeposition tests in a copper pyrophosphate solution system with a pulse power supply. The results show that the microhardness and tensile strength of the copper layer predicted by the "3-4-3-2" structure double hidden layer neural network are very close to the experimental values, with a relative error of less than 2.32%. Within the parameter range, the microhardness of the copper layer is between 100.3~205.6 MPa and the tensile strength is between 112~485 MPa. When the microhardness and tensile strength are optimal, the corresponding process conditions are: current density 2 A·dm⁻², pulse frequency 2 kHz, and pulse duty cycle 10%.
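The "3-4-3-2" topology reads as: 3 inputs (the process conditions: current density, pulse frequency, duty cycle), two hidden layers of 4 and 3 units, and 2 regression outputs (microhardness, tensile strength). A minimal untrained forward-pass sketch, with illustrative random weights and an assumed sigmoid/linear activation split:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_3432(seed=0):
    # "3-4-3-2": 3 inputs -> 4 hidden -> 3 hidden -> 2 outputs
    rng = np.random.default_rng(seed)
    sizes = [3, 4, 3, 2]
    return [(rng.normal(0, 0.5, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def predict(net, x):
    h = np.asarray(x, dtype=float)
    for i, (W, b) in enumerate(net):
        z = h @ W + b
        # sigmoid on the two hidden layers, linear output for regression
        h = z if i == len(net) - 1 else sigmoid(z)
    return h  # [microhardness, tensile strength] (untrained, illustrative)

net = make_3432()
y = predict(net, [2.0, 2.0, 0.10])  # current density, pulse frequency, duty cycle
```

The whole network carries only 3·4+4 + 4·3+3 + 3·2+2 = 39 parameters, which is why a small training sample can fit it.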

Merging of Two Artificial Neural Networks

  • Kim, Mun-Hyuk; Park, Jin-Young
    • Proceedings of the IEEK Conference / 2002.07a / pp.258-261 / 2002
  • This paper addresses the problem of merging two feedforward neural networks into one network. Merging is accomplished at the level of the hidden layer: a new network selects its hidden layer's units from the two networks to be merged. We use an information-theoretic criterion (quadratic mutual information) in the selection process. The hidden units' outputs and the target patterns are treated as random variables, and the mutual information between them is calculated. The mutual information between hidden units is also considered, to prevent statistically dependent units from being selected. Because mutual information is invariant under linear transformations of the variables, it has the property of robust estimation.

  • PDF
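The selection step can be sketched as a greedy search: score each candidate hidden unit by its mutual information with the target, penalised by its mutual information with units already chosen. Note the swap: this sketch uses a plain histogram estimate of Shannon mutual information, not the paper's quadratic mutual information, and the penalty weight is an assumption.

```python
import numpy as np

def mutual_info(a, b, bins=4):
    """Histogram estimate of I(A;B) in nats for two 1-D samples."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    px = p.sum(1, keepdims=True); py = p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def select_units(H, y, k, penalty=1.0):
    # Greedily pick k columns of H (hidden-unit outputs): high MI with the
    # target y, penalised by MI with already-selected units (redundancy).
    chosen = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for j in range(H.shape[1]):
            if j in chosen:
                continue
            score = mutual_info(H[:, j], y)
            score -= penalty * sum(mutual_info(H[:, j], H[:, c]) for c in chosen)
            if score > best_score:
                best, best_score = j, score
        chosen.append(best)
    return chosen

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200).astype(float)
H = rng.normal(0, 1, (200, 6))
H[:, 2] = y + 0.1 * rng.normal(0, 1, 200)         # unit 2 carries the target
H[:, 5] = H[:, 2] + 0.05 * rng.normal(0, 1, 200)  # unit 5 duplicates unit 2
picked = select_units(H, y, k=2)
```

The redundancy penalty is what keeps the near-duplicate unit from being selected alongside its twin, mirroring the paper's "statistically dependent units" check.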

A Basic Study on the Effect of Number of Hidden Layers on Performance of Estimation Model of Compressive Strength of Concrete Using Deep Learning Algorithms (Hidden Layer의 개수가 Deep Learning Algorithm을 이용한 콘크리트 압축강도 추정 모델의 성능에 미치는 영향에 관한 기초적 연구)

  • Lee, Seung-Jun; Lee, Han-Seung
    • Proceedings of the Korean Institute of Building Construction Conference / 2018.05a / pp.130-131 / 2018
  • The compressive strength of concrete is determined by various influencing factors. However, conventional methods for estimating the compressive strength of concrete consider only one to three specific influential factors as variables. In this study, nine influential factors (W/B ratio, water, cement, coarse and fine aggregate, fly ash, blast furnace slag, curing temperature, and humidity) were collected from papers published over ten years at four conferences, in order to capture the various correlations among the data and their tendencies. The selected mixture and compressive strength data were learned using a deep learning algorithm to derive an estimation function model. The purpose of this study is to investigate the effect of the number of hidden layers on prediction performance in the process of estimating the compressive strength for an arbitrary combination.

  • PDF
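The experimental variable here is depth only: same nine inputs, same single strength output, different numbers of hidden layers. A minimal sketch of how such a family of models can be generated for comparison, with an assumed uniform layer width of 16 (the paper does not specify widths in this abstract):

```python
import numpy as np

def build_mlp(n_hidden_layers, width=16, n_inputs=9, seed=0):
    # 9 inputs mirror the nine mix factors (W/B ratio, water, cement,
    # coarse/fine aggregate, fly ash, slag, curing temperature, humidity);
    # one linear output for compressive strength.  Width is an assumption.
    rng = np.random.default_rng(seed)
    sizes = [n_inputs] + [width] * n_hidden_layers + [1]
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def predict(net, X):
    h = np.asarray(X, dtype=float)
    for i, (W, b) in enumerate(net):
        h = h @ W + b
        if i < len(net) - 1:
            h = np.maximum(h, 0.0)  # ReLU on hidden layers
    return h

def n_params(net):
    return sum(W.size + b.size for W, b in net)

nets = {d: build_mlp(d) for d in (1, 2, 3)}
X = np.ones((5, 9))
preds = {d: predict(net, X) for d, net in nets.items()}
```

Each extra hidden layer adds width² + width parameters, so the comparison across depths is also a comparison across model capacities.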

Network Analysis and Neural Network Approach for the Cellular Manufacturing System Design (Network 분석과 신경망을 이용한 Cellular 생산시스템 설계)

  • Lee, Hong-Chul
    • Journal of Korean Institute of Industrial Engineers / v.24 no.1 / pp.23-35 / 1998
  • This article presents a network flow analysis to form flexible machine cells with minimum intercellular part moves, and a neural network model to form part families. The operational sequences and production quantity of each part, and the number and size of cells, are taken into consideration in a 0-1 quadratic programming formulation, and a network-flow-based solution procedure is developed. After designing the machine cells, a neural network approach for the integration of part families and the automatic assignment of new parts to existing cells is proposed. A multi-layer backpropagation network with one hidden layer is used. Experimental results with a varying number of neurons in the hidden layer, evaluating the role of hidden neurons in network learning performance, are also presented. The comprehensive methodology developed in this article is appropriate for large-scale industrial applications without building knowledge-based expert rules for the cellular manufacturing environment.

  • PDF

The Study of Neural Networks Using Orthogonal function System in Hidden-Layer (직교함수를 은닉층에 지닌 신경회로망에 대한 연구)

  • 권성훈; 최용준; 이정훈; 유석용; 엄기환; 손동설
    • Proceedings of the IEEK Conference / 1999.06a / pp.482-485 / 1999
  • In this paper we propose a heterogeneous hidden layer consisting of both sigmoid functions and RBFs (Radial Basis Functions) in multi-layered neural networks. Focusing on the orthogonal relationship between the sigmoid function and its derivative, a derived RBF that is the derivative of the sigmoid function is used as the RBF in the neural network, so the proposed neural network is called an ONN (Orthogonal Neural Network). Identification results on a nonlinear function confirm the ONN's feasibility and characteristics by comparison with those obtained using a conventional neural network with sigmoid functions or RBFs in the hidden layer.

  • PDF
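The key observation is that the sigmoid's derivative, σ'(x) = σ(x)(1 − σ(x)), is a bell-shaped function and so can play the RBF role. A minimal sketch of such a mixed hidden layer (the even split between the two unit types is an illustrative assumption):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(x):
    # Bell-shaped derivative of the sigmoid: peaks at 0.25 at x = 0
    # and decays symmetrically, much like a radial basis function.
    s = sigmoid(x)
    return s * (1.0 - s)

def hetero_hidden(z):
    # Heterogeneous hidden layer: the first half of the units apply the
    # sigmoid, the second half its derivative (the derived RBF).
    half = z.shape[-1] // 2
    return np.concatenate([sigmoid(z[..., :half]),
                           sigmoid_deriv(z[..., half:])], axis=-1)

z = np.array([[-2.0, -1.0, 0.0, 1.0]])
h = hetero_hidden(z)
```

Because the derivative is obtained from the sigmoid itself, the two unit types share one primitive, which is the source of the orthogonality relationship the paper exploits.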

Cross-Validation Probabilistic Neural Network Based Face Identification

  • Lotfi, Abdelhadi; Benyettou, Abdelkader
    • Journal of Information Processing Systems / v.14 no.5 / pp.1075-1086 / 2018
  • In this paper a cross-validation algorithm for training probabilistic neural networks (PNNs) is presented for application to automatic face identification. Standard PNNs perform well for small and medium-sized databases, but they suffer from serious problems when used with large databases like those encountered in biometrics applications. To address this issue, we propose a new training algorithm for PNNs that reduces the hidden layer's size and avoids over-fitting at the same time. The proposed training algorithm generates networks with a smaller hidden layer containing only representative examples from the training data set. Moreover, adding new classes or samples after training does not require retraining, which is one of the main characteristics of this solution. Results presented in this work show a great improvement in both the processing speed and the generalization of the proposed classifier, caused mainly by the significant reduction in the size of the hidden layer.
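In a standard PNN the hidden layer holds one Gaussian unit per training example, which is exactly what blows up on large databases. A minimal sketch of the idea of keeping only representative examples; note the swap: a simple condensation rule (keep an example only if the examples kept so far misclassify it) stands in for the paper's cross-validation-based reduction, and the kernel width is an assumption.

```python
import numpy as np

def pnn_predict(x, prototypes, labels, sigma=0.5):
    """Parzen-window class scores: average Gaussian kernel per class."""
    d2 = ((prototypes - x) ** 2).sum(1)
    k = np.exp(-d2 / (2 * sigma ** 2))
    classes = np.unique(labels)
    scores = [k[labels == c].mean() for c in classes]
    return classes[int(np.argmax(scores))]

def condense(X, y, sigma=0.5):
    # Keep a training example as a hidden unit only if the units kept so
    # far misclassify it -- a stand-in for the paper's reduction step.
    keep = [0]
    for i in range(1, len(X)):
        if pnn_predict(X[i], X[keep], y[keep], sigma) != y[i]:
            keep.append(i)
    return np.array(keep)

rng = np.random.default_rng(0)
X0 = rng.normal(0, 0.3, (30, 2))
X1 = rng.normal(3, 0.3, (30, 2))
X = np.vstack([X0, X1]); y = np.array([0] * 30 + [1] * 30)
kept = condense(X, y)
```

On well-separated classes the reduced hidden layer shrinks to a handful of prototypes, which is where the processing-speed improvement comes from; adding a new class just appends its prototypes without retraining.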

Adaptive Control of the Nonlinear Systems Using Diagonal Recurrent Neural Networks (대각귀환 신경망을 이용한 비선형 적응 제어)

  • Ryoo, Dong-Wan; Lee, Young-Seog; Seo, Bo-Hyeok
    • Proceedings of the KIEE Conference / 1996.07b / pp.939-942 / 1996
  • This paper presents a stable learning algorithm for the diagonal recurrent neural network (DRNN), applied to the problem of controlling nonlinear dynamical systems. The DRNN architecture is a modified Recurrent Neural Network (RNN) with one hidden layer, in which the hidden layer is comprised of self-recurrent neurons. A DRNN has considerably fewer weights than an RNN, since there are no interlinks among neurons in the hidden layer. The DRNN is a dynamic mapping and is better suited to dynamical systems than a static feedforward neural network. To guarantee convergence and achieve faster learning, an adaptive learning rate is developed using a Lyapunov function. The ability and effectiveness of identifying and controlling a nonlinear dynamic system using the proposed algorithm are demonstrated by computer simulation.

  • PDF
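"Self-recurrent" means each hidden neuron feeds back only to itself, so the recurrent weight matrix collapses to a vector: n weights instead of the n² interlinks of a full RNN. A minimal single-step sketch (sizes and tanh activation are illustrative assumptions):

```python
import numpy as np

def drnn_step(x, h_prev, Wi, wd, Wo):
    # Diagonal recurrence: the recurrent term is an elementwise product
    # wd * h_prev, i.e. each neuron sees only its own previous output.
    z = x @ Wi + wd * h_prev
    h = np.tanh(z)
    return h, h @ Wo

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 2, 5, 1
Wi = rng.normal(0, 0.5, (n_in, n_hidden))
wd = rng.normal(0, 0.5, n_hidden)      # diagonal recurrent weights (vector)
Wo = rng.normal(0, 0.5, (n_hidden, n_out))

h = np.zeros(n_hidden)
for t in range(4):
    h, y = drnn_step(np.ones(n_in), h, Wi, wd, Wo)
```

A full RNN would need an n_hidden × n_hidden recurrent matrix here; the diagonal form keeps the dynamic memory while cutting those 25 weights down to 5.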

Reducing the Number of Hidden Nodes in MLP using the Vertex of Hidden Layer's Hypercube (은닉층 다차원공간의 Vertex를 이용한 MLP의 은닉 노드 축소방법)

  • 곽영태; 이영직; 권오석
    • The Journal of Korean Institute of Communications and Information Sciences / v.24 no.9B / pp.1775-1784 / 1999
  • This paper proposes a method of removing unnecessary hidden nodes using a new cost function that evaluates the variance and the mean of hidden node outputs during training. The proposed cost function keeps necessary hidden nodes active while driving unnecessary hidden nodes toward constant outputs, so the constant hidden nodes can be removed without performance degradation. Using the CEDAR handwritten digit data, we show that the proposed method can remove up to 37.2% of the hidden nodes, with a higher recognition rate and shorter learning time.

  • PDF
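Why constant nodes are removable for free: a node whose output never varies contributes only a fixed offset to the next layer, which can be folded into that layer's bias. A minimal sketch of the removal step (the training-time cost function that produces the constant nodes is not reproduced; the tolerance is an assumption):

```python
import numpy as np

def prune_constant_nodes(H, W2, b2, tol=1e-3):
    # H: hidden outputs over the training set (samples x nodes).
    # Nodes with output variance below tol are effectively constant:
    # fold their mean output into the next layer's bias, then drop them.
    var = H.var(axis=0)
    const = var < tol
    b2_new = b2 + H.mean(axis=0)[const] @ W2[const]
    return ~const, W2[~const], b2_new

rng = np.random.default_rng(0)
H = rng.normal(0, 1, (100, 6))
H[:, 1] = 0.7          # a saturated, constant hidden node
H[:, 4] = 0.2          # another constant node
W2 = rng.normal(0, 1, (6, 3)); b2 = np.zeros(3)
keep, W2p, b2p = prune_constant_nodes(H, W2, b2)
```

Because the folded bias reproduces the constant nodes' contribution exactly, the pruned network computes the same next-layer pre-activations as the original, which is the sense in which removal costs no performance.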

A Study on the Speech Recognition Performance of the Multilayered Recurrent Prediction Neural Network (다층회귀예측신경망의 음성인식성능에 관한 연구)

  • 안점영
    • Journal of the Korea Institute of Information and Communication Engineering / v.3 no.2 / pp.313-319 / 1999
  • We devise three models of the Multilayered Recurrent Prediction Neural Network (MLRPNN), obtained by modifying a Multilayered Perceptron (MLP) with four layers. We experimentally compare the speech recognition performance of the three models while varying the prediction order, the number of neurons in the two hidden layers, the initial values of the connection weights, and the transfer function. By experiment, the recognition performance of each MLRPNN is better than that of the MLP. The model that returns the output of the upper hidden layer to the lower hidden layer shows the best recognition performance. All MLRPNNs that have 10 or 15 neurons in the upper and lower hidden layers and use a prediction order of 3 or 4 show an improved speech recognition rate. In learning, these MLRPNNs achieve a better recognition rate when the initial weights are set between -0.5 and 0.5 and the unipolar sigmoid transfer function is used in the lower hidden layer.

  • PDF