• Title/Abstract/Keyword: Input Layer

1,138 results found (search time: 0.04 seconds)

신경회로망과 실험계획법을 이용한 타이어의 장력 추정 (Tension Estimation of Tire using Neural Networks and DOE)

  • 이동우;조석수
    • 한국정밀공학회지
    • /
    • Vol. 28, No. 7
    • /
    • pp.814-820
    • /
    • 2011
  • Structural design of tires requires nonlinear material properties, so numerical simulation takes a long time. Neural networks have been widely applied to engineering design to reduce numerical computation time. The number of hidden layers, the number of hidden-layer neurons, and the amount of training data have been considered as the structural design variables of a neural network, but in applications of neural networks to design optimization there are few studies on how to arrange the input-layer neurons. To investigate the effect of input-layer neuron arrangement on neural network learning, the tire contour design variables and the tension in the bead area were assigned as the network inputs and output, respectively, and the arrangement of the design variables in the input layer was determined by main effect analysis.
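The main effect analysis used above to rank design variables can be sketched as follows, assuming a hypothetical two-level full-factorial design with three contour variables and a synthetic bead-tension response (the paper's real tire model and variables are not reproduced here):

```python
import numpy as np

def main_effects(design, response):
    """Main effect of each two-level factor: mean response at the
    high level minus mean response at the low level."""
    effects = []
    for j in range(design.shape[1]):
        high = response[design[:, j] == 1].mean()
        low = response[design[:, j] == -1].mean()
        effects.append(high - low)
    return np.array(effects)

# Hypothetical 2^3 full-factorial design for three contour variables.
design = np.array([[x1, x2, x3]
                   for x1 in (-1, 1) for x2 in (-1, 1) for x3 in (-1, 1)])
# Synthetic bead-tension response: variable 2 dominates, then 0, then 1.
response = 5.0 * design[:, 2] + 2.0 * design[:, 0] + 0.5 * design[:, 1] + 10.0

effects = main_effects(design, response)
# Arrange input-layer neurons in descending order of |main effect|.
arrangement = np.argsort(-np.abs(effects))
print(arrangement)  # -> [2 0 1]
```

Ordering the input-layer neurons by the magnitude of these effects is one plausible reading of the arrangement method described in the abstract.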

계층구조 신경망을 이용한 한글 인식 (Hangul Recognition Using a Hierarchical Neural Network)

  • 최동혁;류성원;강현철;박규태
    • 전자공학회논문지B
    • /
    • Vol. 28B, No. 11
    • /
    • pp.852-858
    • /
    • 1991
  • An adaptive hierarchical classifier (AHCL) for Korean character recognition using a neural net is designed. The classifier consists of two neural nets: a USACL (Unsupervised Adaptive Classifier) and an SACL (Supervised Adaptive Classifier). The USACL has an input layer and an output layer, which are fully connected; the nodes in the output layer are generated by an unsupervised nearest-neighbor learning rule during learning. The SACL has an input layer, a hidden layer and an output layer; the input layer and the hidden layer are fully connected, and the hidden layer and the output layer are partially connected. The nodes in the SACL are generated by a supervised nearest-neighbor learning rule during learning. The USACL has a pre-attentive effect: it performs a partial search instead of a full search during SACL classification to enhance processing speed. The input to the USACL and SACL is a directional edge feature with a directional receptive field. To test the performance of the AHCL, various multi-font printed Hangul characters were used in learning and testing, and its processing speed and classification rate were compared with a conventional LVQ (Learning Vector Quantizer), which uses the nearest-neighbor learning rule.
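The unsupervised nearest-neighbor node generation described for the USACL can be illustrated roughly as below; the vigilance radius, learning rate, and toy data are assumptions, not the paper's values:

```python
import numpy as np

class UnsupervisedAdaptiveLayer:
    """Toy sketch of USACL-style node generation: an output node is
    created whenever no existing node lies within `radius` of the
    input (nearest-neighbor rule); otherwise the nearest node is
    moved toward the input."""

    def __init__(self, radius, lr=0.5):
        self.radius = radius
        self.lr = lr
        self.nodes = []          # output-layer node weight vectors

    def learn(self, x):
        x = np.asarray(x, dtype=float)
        if not self.nodes:                     # first input seeds node 0
            self.nodes.append(x.copy())
            return 0
        dists = [np.linalg.norm(x - w) for w in self.nodes]
        k = int(np.argmin(dists))
        if dists[k] > self.radius:             # no node close enough: grow
            self.nodes.append(x.copy())
            return len(self.nodes) - 1
        self.nodes[k] += self.lr * (x - self.nodes[k])  # adapt winner
        return k

layer = UnsupervisedAdaptiveLayer(radius=1.0)
for p in ([0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]):
    layer.learn(p)
print(len(layer.nodes))  # -> 2
```

Because nodes are only generated where inputs actually fall, a later supervised stage can restrict its search to the winning region, which is the pre-attentive speedup the abstract mentions.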


Segmentation of Objects with Multi Layer Perceptron by Using Informations of Window

  • Kwak, Young-Tae
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 18, No. 4
    • /
    • pp.1033-1043
    • /
    • 2007
  • A multilayer perceptron for segmenting objects in images typically uses only fixed-size input windows extracted from the image. Because these windows are treated as independent learning data, the position and effect of each window within the input image are ignored, which degrades the multilayer perceptron's performance. We therefore propose a new approach that adds the position information and effect of each input window to the multilayer perceptron's input layer. The new approach improves both the segmentation performance and the learning time of the multilayer perceptron, and our experiments confirm that the new algorithm performs well.
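A minimal sketch of the proposed idea: augment each fixed-size window with the normalized position of its center before feeding the MLP input layer (the window size and image below are hypothetical, not the paper's data):

```python
import numpy as np

def window_with_position(image, top, left, size):
    """Flatten a fixed-size window and append its normalized center
    position, so the MLP input layer also sees where in the image
    the window came from."""
    h, w = image.shape
    window = image[top:top + size, left:left + size].ravel()
    cy = (top + size / 2.0) / h        # normalized row of window center
    cx = (left + size / 2.0) / w       # normalized column of window center
    return np.concatenate([window, [cy, cx]])

image = np.arange(36, dtype=float).reshape(6, 6)
x = window_with_position(image, top=2, left=0, size=3)
print(x.shape)  # -> (11,): nine pixels plus two position inputs
```

Two identical-looking windows from different image regions now produce different input vectors, which is exactly the distinction the fixed-window approach loses.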


델타규칙을 이용한 다단계 신경회로망 컴퓨터: Recognitron III (Multilayer Neural Network Using Delta Rule: Recognitron III)

  • 김춘석;박충규;이기한;황희영
    • 대한전기학회논문지
    • /
    • Vol. 40, No. 2
    • /
    • pp.224-233
    • /
    • 1991
  • Multilayer expansion of the single-layer neural network (NN) was needed to solve the linear separability problem, as shown by the classic XOR example. The EBP (Error Back-Propagation) learning rule is often used in multilayer neural networks, but it is not without its faults: 1) D. Rumelhart expanded the delta rule, but there is a problem in obtaining Ca from the linear combination of the weight matrix N between the hidden layer and the output layer and H, which is itself the result of a linear combination of the input pattern with the weight matrix M between the input layer and the hidden layer. 2) Even if using the difference between Ca and Da to adjust the weight matrix N between the hidden layer and the output layer is valid, using the same value to adjust the weight matrix M between the input layer and the hidden layer is wrong. Recognitron III was proposed to solve these faults. According to simulation results, since Recognitron III does not train the three-layer NN as a whole, but divides it into several single-layer NNs and trains these with learning patterns, its learning time is 32.5 to 72.2 times faster than that of an EBP NN. The number of patterns learned by an EBP NN with n input and output cells and n+1 hidden cells is 2**n, but n in a Recognitron III of the same size. [5] In pattern generalization, however, the EBP NN falls short of Recognitron III.
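The single-layer delta-rule update that such a decomposition applies to each sub-network can be sketched roughly as follows; the threshold units, learning rate, and the linearly separable OR task are illustrative assumptions, not the paper's actual training patterns:

```python
import numpy as np

def delta_rule_train(X, D, lr=0.5, epochs=50):
    """Delta rule for one single-layer net: adjust the weights W by
    the output error (D - C) times the input, the update that EBP
    generalizes to hidden layers."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(X.shape[1] + 1, D.shape[1]))
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias input
    for _ in range(epochs):
        C = (Xb @ W > 0).astype(float)              # thresholded output
        W += lr * Xb.T @ (D - C)                    # batch delta update
    return W

# OR is linearly separable, so a single-layer net suffices.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([[0], [1], [1], [1]], dtype=float)
W = delta_rule_train(X, D)
Xb = np.hstack([X, np.ones((4, 1))])
print(((Xb @ W > 0).astype(float) == D).all())
```

Training several such single-layer nets independently, as the abstract describes, avoids propagating one output error back through all layers.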


화학증착법에 의하여 제조된 탄화지르코늄 코팅층의 물성 (Properties of Chemical Vapor Deposited ZrC coating layer for TRISO Coated Fuel Particle)

  • 김준규;금이슬;최두진;이영우;박지연
    • 한국세라믹학회지
    • /
    • Vol. 44, No. 10
    • /
    • pp.580-584
    • /
    • 2007
  • The ZrC layer, used instead of a SiC layer, is a critical and essential layer in TRISO coated fuel particles, since it acts as a protective layer against diffusion of fission products and provides mechanical strength for the fuel particle. In this study, we carried out computational simulation before the actual experiment. Guided by the simulation results, zirconium carbide (ZrC) films were chemically vapor deposited on a $ZrO_2$ substrate using zirconium tetrachloride $(ZrCl_4)$ and $CH_4$ as sources and $H_2$ as the dilution gas. The change of input gas ratio was correlated with the growth rate and morphology of the deposited ZrC films. The growth rate of the ZrC films increased as the input gas ratio decreased. The microstructure of the ZrC films also changed with input gas ratio: a small granular grain structure was exhibited at low input gas ratio, while an angular structure with increased grain size was observed at high input gas ratio.

심층 인공신경망을 활용한 Smoothed RSSI 기반 거리 추정 (Smoothed RSSI-Based Distance Estimation Using Deep Neural Network)

  • 권혁돈;이솔비;권정혁;김의직
    • 사물인터넷융복합논문지
    • /
    • Vol. 9, No. 2
    • /
    • pp.71-76
    • /
    • 2023
  • In this paper, we propose a smoothed Received Signal Strength Indicator (RSSI)-based distance estimation scheme using a deep neural network (DNN) for accurate distance estimation in environments where a single receiver is used. To improve distance estimation accuracy, the proposed scheme derives smoothed RSSI values through a preprocessing pipeline consisting of data splitting, missing-value imputation, and smoothing. The resulting smoothed RSSI values are used as input data for a Multi-Input Single-Output (MISO) DNN model, passing through the input layer and hidden layers and finally being returned as an estimated distance at the output layer. To demonstrate the superiority of the proposed scheme, its performance was compared with that of a linear-regression-based distance estimation scheme. Experimental results show that the proposed scheme achieves 29.09% higher distance estimation accuracy than the linear-regression-based scheme.
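The imputation and smoothing steps of such a preprocessing pipeline might be sketched as below; the moving-average window, linear interpolation, and sample values are assumptions (the paper's data-splitting step is omitted here):

```python
import numpy as np

def smooth_rssi(rssi, window=3):
    """Preprocess raw RSSI samples: impute missing values (NaN) by
    linear interpolation, then smooth with a moving average."""
    rssi = np.asarray(rssi, dtype=float)
    idx = np.arange(len(rssi))
    missing = np.isnan(rssi)
    rssi[missing] = np.interp(idx[missing], idx[~missing], rssi[~missing])
    kernel = np.ones(window) / window
    # 'valid' keeps only positions where the window is fully covered
    return np.convolve(rssi, kernel, mode="valid")

raw = [-60.0, -62.0, np.nan, -66.0, -64.0]   # one dropped sample
smoothed = smooth_rssi(raw)
print(np.round(smoothed, 2))  # three smoothed samples
```

Vectors of such smoothed values would then form the multi-input feature for the MISO DNN described in the abstract.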

Deep LS-SVM for regression

  • Hwang, Changha;Shim, Jooyong
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 27, No. 3
    • /
    • pp.827-833
    • /
    • 2016
  • In this paper, we propose a deep least squares support vector machine (LS-SVM) for regression problems, which consists of an input layer and a hidden layer. In the hidden layer, LS-SVMs are trained with the original input variables and perturbed responses. For the final output, the main LS-SVM is trained with the outputs of the hidden-layer LS-SVMs as input variables and the original responses. In contrast to the multilayer neural network (MNN), the LS-SVMs in the deep LS-SVM are each trained to minimize a penalized objective function. Thus, the learning dynamics of the deep LS-SVM are entirely different from those of the MNN, in which all weights and biases are trained to minimize one final error function. Compared to MNN approaches, the deep LS-SVM does not make use of any combination weights, but trains all LS-SVMs in the architecture. Experimental results on real datasets illustrate that the deep LS-SVM significantly outperforms state-of-the-art machine learning methods on regression problems.
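A compact sketch of this architecture, using a linear kernel and synthetic linear data; the kernel choice, regularization constant, number of hidden LS-SVMs, and noise scale are all assumptions, not the paper's settings:

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0):
    """LS-SVM regression with a linear kernel: solve the KKT linear
    system for the bias b and dual coefficients alpha."""
    n = len(y)
    K = X @ X.T
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]
    return lambda Z: Z @ X.T @ alpha + b

def deep_lssvm_fit(X, y, n_hidden=3, noise=0.1, seed=0):
    """Sketch of the deep LS-SVM: hidden-layer LS-SVMs are trained on
    the original inputs with perturbed responses; the main LS-SVM is
    trained on their outputs against the original responses."""
    rng = np.random.default_rng(seed)
    hidden = [lssvm_fit(X, y + rng.normal(scale=noise, size=len(y)))
              for _ in range(n_hidden)]
    H = np.column_stack([h(X) for h in hidden])
    main = lssvm_fit(H, y)
    return lambda Z: main(np.column_stack([h(Z) for h in hidden]))

X = np.linspace(0, 1, 20).reshape(-1, 1)
y = 2.0 * X[:, 0] + 1.0
model = deep_lssvm_fit(X, y)
print(round(float(model(np.array([[0.5]]))[0]), 2))  # close to 2.0
```

Each LS-SVM here is fit by one linear solve of its own penalized objective, which is the contrast with end-to-end MNN training that the abstract emphasizes.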

퍼지TAM 네트워크를 이용한 조직리더의 패턴분석 (Pattern Analysis of Organizational Leader Using Fuzzy TAM Network)

  • 박수점;황승국
    • 한국지능시스템학회논문지
    • /
    • Vol. 17, No. 2
    • /
    • pp.238-243
    • /
    • 2007
  • The TAM network, which is based on a neural network model, is particularly effective for pattern analysis. A TAM network consists of an input layer, a category layer, and an output layer, and fuzzy rules for the input and output data are obtained from it. A TAM network that uses three pruning rules to reduce the links and nodes in each layer is called a fuzzy TAM network. In this paper, we apply the fuzzy TAM network to pattern analysis of the leadership styles of organizational leaders and show its usefulness. Here, the evaluation criteria of the input layer are the values of personality-type variables from an egogram, and the target values of the output layer are the leadership types associated with the Enneagram personality types.

다영상 분류를 위한 단층 적응 신경회로망의 광학적 구현 (Optical Implementation of Single-Layer Adaptive Neural Network for Multicategory Classification.)

  • 이상훈
    • 한국광학회:학술대회논문집
    • /
    • Proceedings of the 6th Conference on Waves and Lasers, Optical Society of Korea, 1991
    • /
    • pp.23-28
    • /
    • 1991
  • A single-layer neural network with 4$\times$4 input neurons and 4 output neurons is optically implemented. Holographic lenslet arrays are used for the optical interconnection topology, and a liquid crystal light valve (LCLV) is used for controlling the optical interconnection weights. Using a perceptron learning rule, the network classifies input patterns into 4 different categories. It is shown that the performance of the adaptive neural network depends on the learning rate, the correlation of the input patterns, and the nonlinear characteristics of the liquid crystal light valve.
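Since the optics realize an ordinary perceptron update, a software sketch of the 16-input, 4-output perceptron learning rule might look like this (the training patterns and learning rate below are hypothetical):

```python
import numpy as np

def perceptron_train(X, T, lr=0.1, epochs=100):
    """Perceptron learning rule for a single-layer net: 16 inputs
    (a flattened 4x4 pattern) fully connected to 4 output neurons."""
    W = np.zeros((X.shape[1], T.shape[1]))
    b = np.zeros(T.shape[1])
    for _ in range(epochs):
        out = (X @ W + b > 0).astype(float)   # thresholded outputs
        err = T - out
        W += lr * X.T @ err                   # weight update
        b += lr * err.sum(axis=0)             # bias update
    return W, b

# Four hypothetical 4x4 training patterns, one per category.
patterns = np.zeros((4, 4, 4))
for i in range(4):
    patterns[i, i, :] = 1.0          # category i lights up row i
X = patterns.reshape(4, 16)
T = np.eye(4)                        # one-hot targets
W, b = perceptron_train(X, T)
out = (X @ W + b > 0).astype(float)
print((out == T).all())  # -> True
```

In the optical version, the roles of `W` and the threshold nonlinearity are played by the LCLV-controlled interconnection weights and the device's nonlinear response.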


자기조직화 신경망의 정렬된 연결강도를 이용한 클러스터링 알고리즘 (A Clustering Algorithm Using the Ordered Weight of Self-Organizing Feature Maps)

  • 이종섭;강맹규
    • 한국경영과학회지
    • /
    • Vol. 31, No. 3
    • /
    • pp.41-51
    • /
    • 2006
  • Clustering groups similar objects into clusters. Many existing approaches use Self-Organizing Feature Maps (SOFMs), but they have problems with a small number of output-layer nodes and with the initial weights. For example, to find c clusters, one such approach uses a one-dimensional map of c output-layer nodes, which makes fine-grained classification difficult. This paper suggests using more one-dimensional output-layer nodes than the number of clusters to be found, with the output-layer nodes arranged in ascending order of the sum of each node's weights. Input data are then located on the SOFM output nodes and classified to output nodes using the Euclidean distance. The proposed algorithm was tested on the well-known IRIS data and on TSPLIB, and the results of this computational study demonstrate its superiority.
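The ordered-weight idea can be sketched as a one-dimensional SOFM with more nodes than clusters, whose nodes are then sorted by the ascending sum of their weights; the initialization, neighborhood schedule, and toy two-blob data below are assumptions, not the paper's settings:

```python
import numpy as np

def train_1d_sofm(X, n_nodes=6, epochs=30, lr=0.5):
    """Train a one-dimensional SOFM with more output nodes than the
    clusters sought, then return the nodes sorted by the ascending
    sum of their weights (the ordered-weight arrangement)."""
    W = np.linspace(X.min(axis=0), X.max(axis=0), n_nodes)  # deterministic init
    for t in range(epochs):
        decay = 1.0 - t / epochs
        radius = max(decay, 0.5)              # shrinking 1-D neighborhood
        for x in X:
            k = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # winner
            for j in range(n_nodes):
                if abs(j - k) <= radius:
                    W[j] += lr * decay * (x - W[j])
    return W[np.argsort(W.sum(axis=1))]

# Two well-separated blobs; the ordered nodes should split between them.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
W = train_1d_sofm(X)
labels = np.argmin(np.linalg.norm(X[:, None] - W[None], axis=2), axis=1)
print(labels[:20].max() < labels[20:].min())
```

After sorting by weight sum, nodes serving the same cluster occupy contiguous positions, so adjacent node indices can be merged down to the intended number of clusters.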