• Title/Summary/Keyword: Hidden Layer

511 search results, processing time 0.028 seconds

Generalization of Recurrent Cascade Correlation Algorithm and Morse Signal Experiments using new Activation Functions (순환 케스케이드 코릴레이션 알고리즘의 일반화와 새로운 활성화함수를 사용한 모스 신호 실험)

  • Song Hae-Sang;Lee Sang-Wha
    • Journal of Intelligence and Information Systems
    • /
    • v.10 no.2
    • /
    • pp.53-63
    • /
    • 2004
  • Recurrent Cascade-Correlation (RCC) is a supervised learning algorithm that automatically determines the size and topology of the network. RCC adds new hidden neurons one by one, creating a multi-layer structure in which each hidden layer has exactly one neuron. In second-order RCC, new hidden neurons are instead added to a single hidden layer; these neurons are not connected to each other. We present a generalization of the RCC architecture that combines the standard RCC architecture with the second-order RCC architecture. Whenever a hidden neuron has to be added, the new RCC learning algorithm automatically determines whether the network topology grows vertically or horizontally. The new algorithm, using sigmoid, tanh, and new activation functions, was tested on the Morse benchmark problem. The experiments with the generalized RCC network and these activation functions showed that the number of hidden neurons was reduced.
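The vertical-vs-horizontal growth decision described above can be sketched at the topology level. This is only a structural sketch: `grow_topology` and its score arguments are hypothetical names, and the candidate scoring itself (the correlation maximization of cascade correlation) is omitted.

```python
def grow_topology(layers, score_vertical, score_horizontal):
    """One growth step of a generalized RCC network, topology only.

    layers: list of hidden-layer widths, e.g. [1, 1] for two
    one-neuron layers as produced by standard RCC.
    Compares the (externally computed) candidate scores and either
    starts a new one-neuron hidden layer (vertical growth, as in
    standard RCC) or widens the last hidden layer (horizontal
    growth, as in second-order RCC).
    """
    if score_vertical >= score_horizontal:
        layers.append(1)      # new hidden layer with a single neuron
    else:
        layers[-1] += 1       # add the neuron to the last hidden layer
    return layers
```

Repeated calls trace out the mixed vertical/horizontal topologies the generalized algorithm can reach, which neither pure RCC variant can.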


New Approach to Optimize the Size of Convolution Mask in Convolutional Neural Networks

  • Kwak, Young-Tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.1
    • /
    • pp.1-8
    • /
    • 2016
  • A convolutional neural network (CNN) consists of a few pairs of convolution and subsampling layers, so it has more hidden layers than a multi-layer perceptron. With the increased number of layers, the size of the convolution mask ultimately determines the total number of weights in the CNN, because the mask is shared across the input images. The mask size is also an important learning factor that makes or breaks the CNN's learning. This paper therefore proposes a method for choosing the convolution mask size and the number of layers so that the CNN learns successfully. Through face-recognition experiments with a large number of training examples, we found that the best convolution mask sizes are 5 by 5 and 7 by 7, regardless of the number of layers. In addition, a CNN with two pairs of convolution and subsampling layers was found to give the best performance, just as a multi-layer perceptron with two hidden layers does.
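The abstract's point that the shared mask, not the image size, fixes the weight count can be illustrated with a small counting helper. This is a sketch: `conv_weight_count` is a hypothetical name, and the feature-map counts in the example are assumptions, not values from the paper.

```python
def conv_weight_count(mask_size, n_in_maps, n_out_maps, bias=True):
    """Number of trainable weights in one convolution layer.

    Because the mask (kernel) is shared across all spatial positions,
    the count depends only on the mask size and the number of feature
    maps, never on the input image size.
    """
    weights = mask_size * mask_size * n_in_maps * n_out_maps
    if bias:
        weights += n_out_maps     # one bias per output feature map
    return weights
```

For instance, a 5 by 5 mask mapping 1 input map to 6 feature maps needs 5\*5\*1\*6 + 6 = 156 weights whether the input image is 32x32 or 1024x1024.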

New criteria to fix number of hidden neurons in multilayer perceptron networks for wind speed prediction

  • Sheela, K. Gnana;Deepa, S.N.
    • Wind and Structures
    • /
    • v.18 no.6
    • /
    • pp.619-631
    • /
    • 2014
  • This paper proposes new criteria to fix the number of hidden neurons in multilayer perceptron networks for wind speed prediction in renewable energy systems. To fix the number of hidden neurons, 101 criteria are examined based on the estimated mean squared error (MSE). The results show that the proposed approach performs better in terms of testing mean squared error. A convergence analysis is performed for the various proposed criteria, with MSE used as the indicator for fixing the number of neurons in the hidden layer. The proposed criteria yield a solution for fixing the number of hidden neurons, and the approach is effective and accurate, with smaller error than other approaches. The significance of increasing the number of hidden neurons in a multilayer perceptron network is also analyzed using these criteria. To verify the effectiveness of the proposed method, simulations were conducted on real wind data; they indicate that, with minimal mean squared error, the proposed approach can be used for wind speed prediction in renewable energy systems.
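The underlying selection scheme — evaluate candidate hidden-neuron counts and keep the one with the lowest test MSE — can be sketched as follows. This is not the paper's 101 criteria: the trainer below uses random hidden weights with a least-squares output layer purely as a stand-in model, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_predict(n_hidden, X_tr, y_tr, X_te):
    """Stand-in trainer: random tanh hidden layer, least-squares output."""
    W = rng.standard_normal((X_tr.shape[1], n_hidden))
    beta = np.linalg.pinv(np.tanh(X_tr @ W)) @ y_tr
    return np.tanh(X_te @ W) @ beta

def select_hidden_count(candidates, X_tr, y_tr, X_te, y_te):
    """Return (best_count, errors): the candidate hidden-neuron count
    minimizing the test mean squared error, plus all measured errors."""
    errors = {n: float(np.mean((fit_predict(n, X_tr, y_tr, X_te) - y_te) ** 2))
              for n in candidates}
    return min(errors, key=errors.get), errors
```

In the paper's setting the candidates would come from the proposed analytic criteria rather than an arbitrary grid.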

A New Type of the Elman Neural Network (새로운 형태의 Elman 신경회로망)

  • 최우승;김주동
    • Journal of the Korea Society of Computer and Information
    • /
    • v.4 no.1
    • /
    • pp.62-67
    • /
    • 1999
  • A neural network is a static network that consists of a number of layers: an input layer, an output layer, and one or more hidden layers connected in a feed-forward way. The popularity of neural networks stems from their learning ability and approximation capability. The Elman Neural Network, proposed by J. Elman, is a type of recurrent network with feedback links from the hidden layer to a context layer, so it performs better than a plain feed-forward neural network. In this paper, we propose the Modified Elman Neural Network (MENN). The structure of the MENN is based on the basic ENN, but the recurrency of the network is due to feedback links from both the output layer and the hidden layer to the context layer. To demonstrate the usefulness of the proposed method, the MENN is applied to an X-Y cartesian tracking system. Simulation shows that the proposed MENN performs better than the multi-layer neural network and the ENN.
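The feedback structure described above — a context layer fed from both the hidden layer and the output layer — can be sketched as a single recurrent step. The class name, weight scales, and activations are assumptions, and training is omitted entirely.

```python
import numpy as np

class ModifiedElman:
    """One recurrent step of a modified Elman network in which the
    context layer receives feedback from BOTH the hidden layer and the
    output layer (a standard Elman context copies only the hidden layer)."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        g = np.random.default_rng(seed)
        n_ctx = n_hidden + n_out                      # room for both feedbacks
        self.W_in = g.standard_normal((n_in, n_hidden)) * 0.1
        self.W_ctx = g.standard_normal((n_ctx, n_hidden)) * 0.1
        self.W_out = g.standard_normal((n_hidden, n_out)) * 0.1
        self.ctx = np.zeros(n_ctx)                    # context starts empty

    def step(self, x):
        h = np.tanh(x @ self.W_in + self.ctx @ self.W_ctx)
        y = h @ self.W_out
        self.ctx = np.concatenate([h, y])             # feed h AND y back
        return y
```

Because the context carries the previous output as well as the previous hidden state, the same input can produce different outputs on successive steps.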


A Control Method Using the Modified Elman Neural Network (변형된 Elman 신경회로망을 이용한 제어방식)

  • 최우승;김주동
    • Journal of the Korea Society of Computer and Information
    • /
    • v.4 no.3
    • /
    • pp.67-72
    • /
    • 1999
  • A neural network is a static network that consists of a number of layers: an input layer, an output layer, and one or more hidden layers connected in a feed-forward way. The popularity of neural networks stems from their learning ability and approximation capability. The Elman Neural Network, proposed by J. Elman, is a type of recurrent network with feedback links from the hidden layer to a context layer, so it performs better than a plain feed-forward neural network. In this paper, we propose the Modified Elman Neural Network (MENN). The structure of the MENN is based on the basic ENN, but the recurrency of the network is due to feedback links from both the output layer and the hidden layer to the context layer. To demonstrate the usefulness of the proposed method, the MENN is applied to a multi-target system. Simulation shows that the proposed MENN performs better than the multi-layer neural network and the ENN.

An Efficient Rule Extraction Method Using Hidden Unit Clarification in Trained Neural Networks (인공 신경망에서 은닉 유닛 명확화를 이용한 효율적인 규칙추출 방법)

  • Lee, Hurn-joo;Kim, Hyeoncheol
    • The Journal of Korean Association of Computer Education
    • /
    • v.21 no.1
    • /
    • pp.51-58
    • /
    • 2018
  • Recently, artificial neural networks have shown excellent performance in various fields. However, it is difficult for a person to understand what knowledge a trained artificial neural network has acquired. One way to address this problem is an algorithm that extracts rules from a trained neural network. In this paper, we extracted rules from artificial neural networks using the ordered-attribute search (OAS) algorithm, one such rule-extraction method, and analyzed the results to improve the extracted rules. We found that the distribution of the output values of the hidden-layer units affects the accuracy of the rules extracted with the OAS algorithm, and we show that efficient rules can be extracted by binarizing the hidden-layer output values using hidden unit clarification.
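The binarization step the abstract describes amounts to thresholding hidden activations into on/off values so each hidden unit reads as a crisp proposition during rule extraction. A minimal sketch, assuming a sigmoid-style activation range; the function name and the 0.5 threshold are assumptions.

```python
import numpy as np

def clarify_hidden(activations, threshold=0.5):
    """Binarize hidden-unit outputs so that each unit becomes an
    on/off condition a rule-extraction algorithm can reason over.

    activations: array-like of hidden-layer outputs, assumed in [0, 1].
    Returns an int array of 0s and 1s of the same shape.
    """
    return (np.asarray(activations) >= threshold).astype(int)
```

After clarification, a rule such as "if unit 1 is on and unit 3 is off then class A" can be stated over discrete unit states instead of continuous activations.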

Parameter Optimization of Extreme Learning Machine Using Bacterial Foraging Algorithm (Bacterial Foraging Algorithm을 이용한 Extreme Learning Machine의 파라미터 최적화)

  • Cho, Jae-Hoon;Lee, Dae-Jong;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.6
    • /
    • pp.807-812
    • /
    • 2007
  • Recently, the extreme learning machine (ELM), a novel learning algorithm that is much faster than conventional gradient-based learning algorithms, was proposed for single-hidden-layer feedforward neural networks. The initial input weights and hidden biases of an ELM are usually chosen randomly, and the output weights are determined analytically using the Moore-Penrose (MP) generalized inverse. However, choosing good initial input weights and hidden biases is difficult. In this paper, an advanced method that uses the bacterial foraging algorithm to adjust the input weights and hidden biases is proposed. Experimental results show that this method achieves better performance than other approaches on problems of higher dimension.
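The baseline ELM training step the abstract describes — random input weights and hidden biases, output weights solved analytically with the Moore-Penrose pseudoinverse — can be sketched directly. The paper's bacterial-foraging adjustment of those random parameters is not included; function names and the tanh activation are assumptions.

```python
import numpy as np

def elm_train(X, y, n_hidden, seed=0):
    """Train a single-hidden-layer ELM.

    Input weights W and hidden biases b are drawn at random; the
    output weights beta are the least-squares solution obtained via
    the Moore-Penrose pseudoinverse of the hidden-layer output matrix.
    """
    g = np.random.default_rng(seed)
    W = g.standard_normal((X.shape[1], n_hidden))
    b = g.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y      # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

The single pseudoinverse solve is what makes ELM training so much faster than gradient descent; the paper's contribution is replacing the purely random choice of `W` and `b` with a bacterial-foraging search.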

A Study on the Syllable Recognition Using Neural Network Predictive HMM

  • Kim, Soo-Hoon;Kim, Sang-Berm;Koh, Si-Young;Hur, Kang-In
    • The Journal of the Acoustical Society of Korea
    • /
    • v.17 no.2E
    • /
    • pp.26-30
    • /
    • 1998
  • In this paper, we compose a neural network predictive HMM (NNPHMM) to provide the dynamic features of the speech pattern to the HMM. The NNPHMM is a hybrid of a neural network and an HMM. Trained to predict the future vector, it varies at each time step and is used in place of the mean vector in the HMM. In the experiment, we compared recognition performance on one hundred Korean syllables while varying the hidden-layer size, the number of states, and the prediction order of the NNPHMM: the hidden layer was increased from 10 to 30 dimensions, the number of states from 4 to 6, and the prediction order from second to fourth. The NNPHMM in the experiment is composed of a multi-layer perceptron with one hidden layer and a CMHMM. As a result, with second-order prediction the average recognition rate increased by 3.5% when the number of states was changed from 4 to 5; with third-order prediction the recognition rate increased by 4.0%, and with fourth-order prediction by 3.2%. However, the recognition rate decreased when the number of states was changed from 5 to 6.


Segment unit shuffling layer in deep neural networks for text-independent speaker verification (문장 독립 화자 인증을 위한 세그멘트 단위 혼합 계층 심층신경망)

  • Heo, Jungwoo;Shim, Hye-jin;Kim, Ju-ho;Yu, Ha-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.2
    • /
    • pp.148-154
    • /
    • 2021
  • Text-independent speaker verification needs to extract text-independent speaker embeddings to improve generalization performance. However, deep neural networks that depend on training data can overfit to text information, instead of learning speaker information, when repeatedly trained on identical time series. In this paper, to prevent such overfitting, we propose a segment unit shuffling layer that divides the input layer or a hidden layer into segments along the time axis and rearranges them, thus mixing the time-series information. Since the segment unit shuffling layer can be applied not only to the input layer but also to hidden layers, it can serve as a generalization technique in the hidden layers, which is known to be more effective than generalization at the input layer, and it can be applied together with data augmentation. In addition, the degree of distortion can be adjusted via the segment size. We observe that the performance of text-independent speaker verification improves over the baseline when the proposed segment unit shuffling layer is applied.
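The layer's core operation — divide along the time axis into segments and rearrange them — can be sketched for a (time, feature) array. Names are hypothetical, and the handling of leftover frames that do not fill a whole segment (left in place at the end) is an assumption, not a detail from the paper.

```python
import numpy as np

def segment_shuffle(x, segment_len, rng=None):
    """Shuffle a (time, feature) sequence in segment units.

    The sequence is cut into consecutive segments of `segment_len`
    frames along the time axis and the segments are permuted at
    random. Smaller segments distort the time series more strongly,
    which is how the degree of distortion is adjusted.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_seg = x.shape[0] // segment_len
    segs = [x[i * segment_len:(i + 1) * segment_len] for i in range(n_seg)]
    tail = x[n_seg * segment_len:]        # leftover frames keep their position
    order = rng.permutation(n_seg)
    return np.concatenate([segs[i] for i in order] + ([tail] if len(tail) else []))
```

Applied to a hidden-layer activation map instead of the input features, the same operation acts as the hidden-layer regularizer the abstract describes.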

Development of Artificial Neural Network Model for the Prediction of Descending Time of Room Air Temperature (실온하강시간 예측을 위한 신경망 모델의 개발)

  • 양인호;김광우
    • Korean Journal of Air-Conditioning and Refrigeration Engineering
    • /
    • v.12 no.11
    • /
    • pp.1038-1047
    • /
    • 2000
  • The objective of this study is to develop an optimized artificial neural network (ANN) model to predict the descending time of room air temperature. To this end, a program for predicting room air temperature and an ANN program using the generalized delta rule were prepared, and training data were collected through room air temperature prediction simulations. The ANN was trained, and an ANN model with optimized values of the learning rate, momentum, bias, number of hidden layers, and number of hidden-layer neurons was presented.
