• Title/Summary/Keyword: Input Layer

Search Results: 1,135

Tension Estimation of Tire using Neural Networks and DOE (신경회로망과 실험계획법을 이용한 타이어의 장력 추정)

  • Lee, Dong-Woo;Cho, Seok-Swoo
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.28 no.7
    • /
    • pp.814-820
    • /
    • 2011
  • Numerical simulation for tire structural design takes a long time because it requires nonlinear material properties. Neural networks have been widely applied in engineering design to reduce numerical computation time, and the number of hidden layers, the number of hidden-layer neurons, and the amount of training data are commonly treated as the structural design variables of a neural network. However, few studies have addressed how to arrange the neurons of the input layer when neural networks are applied to design optimization. To investigate the effect of input-layer neuron arrangement on the network, the tire contour design variables and the tension in the bead area were assigned as the network inputs and output, respectively, and the arrangement of the design variables in the input layer was determined by main effect analysis.
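
The arrangement step described in the abstract above, ranking design variables by their main effects so the most influential ones can be placed in the input layer, can be illustrated with a short sketch. This is an editorial example only: the two-level design matrix, the factor count, and the response values are hypothetical and are not taken from the paper.

```python
import numpy as np

# Hypothetical two-level fractional factorial design (-1/+1 coding) for
# four tire contour design variables; the responses stand in for the
# bead-area tensions obtained from simulation.
design = np.array([
    [-1, -1, -1, -1],
    [-1, -1, +1, +1],
    [-1, +1, -1, +1],
    [-1, +1, +1, -1],
    [+1, -1, -1, +1],
    [+1, -1, +1, -1],
    [+1, +1, -1, -1],
    [+1, +1, +1, +1],
])
tension = np.array([5.1, 4.8, 6.0, 5.5, 7.2, 6.9, 8.1, 7.6])  # placeholder responses

# Main effect of a factor = mean response at its high level
# minus mean response at its low level.
main_effects = np.array([
    tension[design[:, j] == +1].mean() - tension[design[:, j] == -1].mean()
    for j in range(design.shape[1])
])

# Arrange input-layer neurons in descending order of absolute main effect.
order = np.argsort(-np.abs(main_effects))
print("main effects:", main_effects)
print("suggested input-layer order (variable indices):", order)
```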

Hangul Recognition Using a Hierarchical Neural Network (계층구조 신경망을 이용한 한글 인식)

  • 최동혁;류성원;강현철;박규태
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.28B no.11
    • /
    • pp.852-858
    • /
    • 1991
  • An adaptive hierarchical classifier (AHCL) for Korean character recognition using a neural net is designed. The classifier has two neural nets: a USACL (Unsupervised Adaptive Classifier) and a SACL (Supervised Adaptive Classifier). The USACL has an input layer and an output layer, which are fully connected; the nodes in its output layer are generated during learning by an unsupervised, nearest-neighbor learning rule. The SACL has an input layer, a hidden layer, and an output layer. The input layer and the hidden layer are fully connected, and the hidden layer and the output layer are partially connected. The nodes in the SACL are generated during learning by a supervised, nearest-neighbor learning rule. The USACL has a pre-attentive effect: it performs a partial search instead of a full search during SACL classification, which enhances processing speed. The input to the USACL and the SACL is a directional edge feature with a directional receptive field. To test the performance of the AHCL, various multi-font printed Hangul characters are used in learning and testing, and its processing speed and classification rate are compared with those of the conventional LVQ (Learning Vector Quantizer), which uses the nearest-neighbor learning rule.
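
The unsupervised, nearest-neighbor node-generation rule that the abstract attributes to the USACL can be sketched roughly as follows. The distance threshold, the update rate, and the toy feature vectors are assumptions made for illustration; they are not the paper's settings.

```python
import numpy as np

def usacl_train(features, threshold=0.5, lr=0.1):
    """Incrementally generate output nodes with an unsupervised
    nearest-neighbor rule: a new node is created when no existing
    prototype is close enough, otherwise the nearest prototype is
    pulled toward the input. (Threshold and learning rate are
    illustrative assumptions.)"""
    prototypes = []
    for x in features:
        if not prototypes:
            prototypes.append(x.copy())
            continue
        dists = [np.linalg.norm(x - p) for p in prototypes]
        k = int(np.argmin(dists))
        if dists[k] > threshold:
            prototypes.append(x.copy())                 # generate a new output node
        else:
            prototypes[k] += lr * (x - prototypes[k])   # nearest-neighbor update
    return np.array(prototypes)

# Toy stand-ins for directional-edge feature vectors (hypothetical).
rng = np.random.default_rng(0)
feats = rng.random((50, 8))
print(usacl_train(feats).shape)
```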

Segmentation of Objects with Multi Layer Perceptron by Using Informations of Window

  • Kwak, Young-Tae
    • Journal of the Korean Data and Information Science Society
    • /
    • v.18 no.4
    • /
    • pp.1033-1043
    • /
    • 2007
  • A multilayer perceptron for segmenting objects in images typically uses only input windows cut from the image at a fixed size. These windows are treated as independent learning samples, which degrades the performance of the multilayer perceptron because the position and influence of each window within the input image are ignored. We therefore propose a new approach that adds the position information and influence of each input window to the multilayer perceptron's input layer. The new approach improves both the segmentation performance and the learning time of the multilayer perceptron, and our experiments confirm the effectiveness of the new algorithm.
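
A minimal sketch of the idea in the abstract above, appending the window's position within the image to each fixed-size window before it reaches the perceptron's input layer, is given below. The window size, the normalization, and the non-overlapping stride are illustrative assumptions.

```python
import numpy as np

def windows_with_position(image, win=8):
    """Slide a fixed-size window over the image and append the window's
    normalized (row, col) position to each flattened window, so the MLP
    input layer also sees where the window came from. Window size and
    normalization are illustrative choices."""
    h, w = image.shape
    samples = []
    for r in range(0, h - win + 1, win):
        for c in range(0, w - win + 1, win):
            patch = image[r:r + win, c:c + win].ravel()
            pos = np.array([r / (h - win or 1), c / (w - win or 1)])
            samples.append(np.concatenate([patch, pos]))
    return np.array(samples)

img = np.random.default_rng(1).random((64, 64))
X = windows_with_position(img)
print(X.shape)   # (64, 66): 64 window pixels + 2 position inputs per sample
```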

Multilayer Neural Network Using Delta Rule: Recognitron III (델타규칙을 이용한 다단계 신경회로망 컴퓨터: Recognitron III)

  • 김춘석;박충규;이기한;황희영
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.40 no.2
    • /
    • pp.224-233
    • /
    • 1991
  • The multilayer expansion of the single-layer NN (neural network) was needed to solve the linear separability problem, as shown by the classic example of the XOR function. The EBP (error back-propagation) learning rule is often used in multilayer neural networks, but it is not without its faults: 1) D. Rumelhart extended the delta rule, but there is a problem in obtaining Ca from the linear combination of the weight matrix N between the hidden layer and the output layer and H, which is itself the result of a linear combination between the input pattern and the weight matrix M between the input layer and the hidden layer. 2) Even if using the difference between Ca and Da to adjust the weight matrix N between the hidden layer and the output layer is valid, using the same value to adjust the weight matrix M between the input layer and the hidden layer is wrong. Recognitron III was proposed to solve these faults. According to the simulation results, since Recognitron III does not train the three-layer NN as a whole but divides it into several single-layer NNs and trains these with the learning patterns, its learning time is 32.5 to 72.2 times faster than that of the EBP NN. The number of patterns that can be learned by an EBP NN with n input and output cells and n+1 hidden cells is 2^n, but only n in a Recognitron III of the same size [5]. In the case of pattern generalization, however, the EBP NN falls short of Recognitron III.
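
Since Recognitron III, as summarized above, trains each single-layer stage with the delta rule instead of back-propagating through the full stack, a minimal single-layer delta-rule update is sketched below. The sigmoid activation, learning rate, and the linearly separable OR toy task are assumptions chosen for illustration (a single layer cannot represent XOR, which is the limitation the abstract refers to).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def delta_rule_train(X, D, lr=0.5, epochs=2000, seed=0):
    """Train one single-layer stage with the delta rule for sigmoid units:
    W += lr * (d - y) * y * (1 - y) * x.
    Learning rate, epoch count, and activation are illustrative choices."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(X.shape[1], D.shape[1]))
    b = np.zeros(D.shape[1])
    for _ in range(epochs):
        Y = sigmoid(X @ W + b)
        err = D - Y
        grad = err * Y * (1 - Y)          # delta term for sigmoid outputs
        W += lr * X.T @ grad
        b += lr * grad.sum(axis=0)
    return W, b

# Linearly separable toy patterns (logical OR), since a single layer
# cannot represent XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([[0], [1], [1], [1]], dtype=float)
W, b = delta_rule_train(X, D)
print(np.round(sigmoid(X @ W + b), 2))
```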

Properties of Chemical Vapor Deposited ZrC coating layer for TRISO Coated Fuel Particle (화학증착법에 의하여 제조된 탄화지르코늄 코팅층의 물성)

  • Kim, Jun-Gyu;Kum, E-Sul;Choi, Doo-Jin;Lee, Young-Woo;Park, Ji-Yeon
    • Journal of the Korean Ceramic Society
    • /
    • v.44 no.10
    • /
    • pp.580-584
    • /
    • 2007
  • The ZrC layer, used instead of a SiC layer, is a critical and essential layer in TRISO coated fuel particles, since it acts as a protective layer against the diffusion of fission products and provides mechanical strength for the fuel particle. In this study, we carried out computational simulation before the actual experiments. Guided by the simulation results, zirconium carbide (ZrC) films were chemically vapor deposited on ZrO2 substrates using zirconium tetrachloride (ZrCl4) and CH4 as source gases and H2 as the dilution gas. The input gas ratio was correlated with the growth rate and morphology of the deposited ZrC films. The growth rate of the ZrC films increased as the input gas ratio decreased. The microstructure of the ZrC films also changed with the input gas ratio: a small, granular grain structure was obtained at low input gas ratios, while an angular structure with increased grain size was observed at high input gas ratios.

Smoothed RSSI-Based Distance Estimation Using Deep Neural Network (심층 인공신경망을 활용한 Smoothed RSSI 기반 거리 추정)

  • Hyeok-Don Kwon;Sol-Bee Lee;Jung-Hyok Kwon;Eui-Jik Kim
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.2
    • /
    • pp.71-76
    • /
    • 2023
  • In this paper, we propose a smoothed received signal strength indicator (RSSI)-based distance estimation scheme using a deep neural network (DNN) for accurate distance estimation in an environment where a single receiver is used. The proposed scheme performs data preprocessing consisting of data splitting, missing-value imputation, and smoothing steps to improve distance estimation accuracy, thereby deriving smoothed RSSI values. The smoothed RSSI values are used as the input data of a Multi-Input Single-Output (MISO) DNN model, which passes them through the input and hidden layers and finally returns an estimated distance at the output layer. To verify the superiority of the proposed scheme, we compared its performance with that of a linear regression-based distance estimation scheme. As a result, the proposed scheme showed 29.09% higher distance estimation accuracy than the linear regression-based scheme.
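
The preprocessing chain described above (missing-value imputation followed by smoothing of the RSSI sequence) might look roughly like the sketch below before the values are fed to the MISO DNN. The forward-fill imputation, the moving-average window, and the sample values are assumptions, not the paper's exact choices.

```python
import numpy as np

def preprocess_rssi(rssi, window=5):
    """Hypothetical version of the preprocessing described in the abstract:
    impute missing samples (NaN) with the previous valid value, then smooth
    with a moving average. The window length and forward-fill imputation
    are illustrative assumptions."""
    rssi = np.asarray(rssi, dtype=float).copy()
    for i in range(len(rssi)):                 # missing-value imputation
        if np.isnan(rssi[i]):
            rssi[i] = rssi[i - 1] if i > 0 else np.nanmean(rssi)
    kernel = np.ones(window) / window          # smoothing step
    return np.convolve(rssi, kernel, mode="valid")

raw = [-60.0, -61.5, np.nan, -63.0, -62.5, -64.0, np.nan, -65.5]
print(np.round(preprocess_rssi(raw), 2))       # smoothed RSSI values for the DNN input
```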

Deep LS-SVM for regression

  • Hwang, Changha;Shim, Jooyong
    • Journal of the Korean Data and Information Science Society
    • /
    • v.27 no.3
    • /
    • pp.827-833
    • /
    • 2016
  • In this paper, we propose a deep least squares support vector machine (LS-SVM) for regression problems, which consists of an input layer and a hidden layer. In the hidden layer, LS-SVMs are trained with the original input variables and the perturbed responses. For the final output, the main LS-SVM is trained with the outputs from the LS-SVMs of the hidden layer as input variables and the original responses. In contrast to the multilayer neural network (MNN), the LS-SVMs in the deep LS-SVM are trained to minimize a penalized objective function. Thus, the learning dynamics of the deep LS-SVM are entirely different from those of the MNN, in which all weights and biases are trained to minimize one final error function. Compared to MNN approaches, the deep LS-SVM does not use any combination weights, but trains all LS-SVMs in the architecture. Experimental results from real datasets illustrate that the deep LS-SVM significantly outperforms state-of-the-art machine learning methods on regression problems.
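
Each LS-SVM in the architecture described above is fitted by solving a penalized least-squares system in closed form rather than by gradient descent. A minimal single LS-SVM regression fit with an RBF kernel is sketched below; the kernel width, regularization constant, and toy data are assumptions, and the deep (stacked) arrangement is not reproduced here.

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM regression dual system in closed form:
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y],
    with an RBF kernel. gamma and sigma are illustrative values."""
    n = len(y)
    sq = np.sum(X**2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma**2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0], X, sigma          # alpha, bias, training inputs, width

def lssvm_predict(model, Xnew):
    alpha, b, Xtr, sigma = model
    d2 = ((Xnew[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2)) @ alpha + b

# Toy regression data (hypothetical).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
model = lssvm_fit(X, y)
print(np.round(lssvm_predict(model, np.array([[0.0], [1.5]])), 3))
```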

Pattern Analysis of Organizational Leader Using Fuzzy TAM Network (퍼지TAM 네트워크를 이용한 조직리더의 패턴분석)

  • Park, Soo-Jeom;Hwang, Seung-Gook
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.2
    • /
    • pp.238-243
    • /
    • 2007
  • The TAM (Topographic Attentive Mapping) network is a neural network model that is especially effective for pattern analysis. It is composed of an input layer, a category layer, and an output layer, and fuzzy rules for input and output data can be acquired from it. A TAM network with three pruning rules for reducing the links and nodes at each layer is called a fuzzy TAM network. In this paper, we apply the fuzzy TAM network to pattern analysis of leadership types for organizational leaders and show its usefulness. Here, the criteria of the input layer and the target values of the output layer are the values of the Egogram and the leadership-related personality-type variables of the Enneagram, respectively.

Optical Implementation of Single-Layer Adaptive Neural Network for Multicategory Classification. (다영상 분류를 위한 단층 적응 신경회로망의 광학적 구현)

  • 이상훈
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 1991.06a
    • /
    • pp.23-28
    • /
    • 1991
  • A single-layer neural network with 4x4 input neurons and 4 output neurons is optically implemented. Holographic lenslet arrays are used for the optical interconnection topology, and a liquid crystal light valve (LCLV) is used for controlling the optical interconnection weights. Using a perceptron learning rule, the network classifies input patterns into 4 different categories. It is shown that the performance of the adaptive neural network depends on the learning rate, the correlation of the input patterns, and the nonlinear characteristics of the liquid crystal light valve.
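
The network described above, 4x4 input neurons mapped to 4 output neurons and trained with a perceptron learning rule, can be emulated numerically as in the sketch below. The prototype patterns, learning rate, and epoch count are hypothetical, and the optical realization of the weights via the LCLV is of course not modeled.

```python
import numpy as np

def perceptron_train(X, labels, n_classes=4, lr=0.1, epochs=50, seed=0):
    """Single-layer perceptron with 16 inputs (a 4x4 pattern) and 4 output
    neurons, trained with the perceptron rule: weights are updated only
    when the winning output neuron is wrong. Learning rate and epoch
    count are illustrative."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_classes))
    for _ in range(epochs):
        for x, t in zip(X, labels):
            y = np.argmax(x @ W)
            if y != t:
                W[:, t] += lr * x      # reinforce the correct category
                W[:, y] -= lr * x      # weaken the wrongly winning category
    return W

# Four hypothetical 4x4 binary prototype patterns, one per category.
protos = np.array([
    np.eye(4),                                        # diagonal
    np.fliplr(np.eye(4)),                             # anti-diagonal
    np.vstack([np.ones((2, 4)), np.zeros((2, 4))]),   # top half on
    np.vstack([np.zeros((2, 4)), np.ones((2, 4))]),   # bottom half on
]).reshape(4, 16)
labels = np.arange(4)
W = perceptron_train(protos, labels)
print(np.argmax(protos @ W, axis=1))   # should print [0 1 2 3] after training
```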

A Clustering Algorithm Using the Ordered Weight of Self-Organizing Feature Maps (자기조직화 신경망의 정렬된 연결강도를 이용한 클러스터링 알고리즘)

  • Lee Jong-Sup;Kang Maing-Kyu
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.31 no.3
    • /
    • pp.41-51
    • /
    • 2006
  • Clustering groups similar objects into clusters. Many approaches based on Self-Organizing Feature Maps (SOFMs) have been proposed, but they suffer from problems with a small number of output-layer nodes and with the initial weights. For example, one such approach uses a one-dimensional map of c output-layer nodes when c clusters are wanted, which makes it hard to classify data finely. This paper suggests using one-dimensional output-layer nodes in SOFMs, where the number of output-layer nodes is larger than the number of clusters to be found and the output-layer nodes are arranged in ascending order of the sum of each node's weights. We can locate the SOFM output node of each input datum and classify the input data among the output nodes using Euclidean distance. The proposed algorithm was tested on the well-known IRIS data and on TSPLIB instances. The results of this computational study demonstrate the superiority of the proposed algorithm.
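
The overall procedure described above, training a one-dimensional SOFM with more output nodes than expected clusters and then assigning inputs to nodes by Euclidean distance, is sketched below. The node count, neighborhood schedule, and toy data are assumptions, and the paper's ordering of nodes by the sum of their weights is not reproduced here.

```python
import numpy as np

def train_sofm_1d(X, n_nodes=10, epochs=100, lr0=0.5, radius0=3.0, seed=0):
    """One-dimensional SOFM: the winning node and its 1-D neighbors are
    pulled toward each input; learning rate and neighborhood radius decay
    over epochs. Node count and schedules are illustrative choices."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_nodes, replace=False)].astype(float)
    idx = np.arange(n_nodes)
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        radius = max(radius0 * (1 - e / epochs), 0.5)
        for x in X[rng.permutation(len(X))]:
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            h = np.exp(-((idx - winner) ** 2) / (2 * radius**2))
            W += lr * h[:, None] * (x - W)
    return W

def assign_clusters(X, W):
    """Assign each input to its nearest output node by Euclidean distance."""
    d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
    return np.argmin(d, axis=1)

# Two well-separated toy clusters (hypothetical data).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
W = train_sofm_1d(X)
print(assign_clusters(X, W)[:10])
```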