• Title/Summary/Keyword: Hidden Neurons

Improvement of Neural Network Performance for Estimating Defect Size of Steam Generator Tube using Multifold Cross-Validation (다중겹 교차검증 기법을 이용한 증기세관 결함크기 예측을 위한 신경회로망 성능 향상)

  • Kim, Nam-Jin; Jee, Su-Jung; Jo, Nam-Hoon
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.26 no.9 / pp.73-79 / 2012
  • In this paper, we study how to determine the number of hidden layer neurons in a neural network for predicting the defect size of steam generator tubes. It has been reported in the literature that the number of hidden layer neurons can be determined efficiently with the help of cross-validation. Although cross-validation provides decent estimation performance in most cases, its performance depends on the selection of the validation set, and in some cases it can lead to rather poor results. To avoid this problem, we propose to use multifold cross-validation. A simulation study shows that the estimation performance for defect width (defect depth, respectively) attains 94% (99.4%, respectively) of the best performance achievable among the considered neuron numbers.
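
As a rough illustration of the selection procedure described above (not the authors' code), the following sketch uses k-fold (multifold) cross-validation to pick the hidden layer size of a small regression network; the candidate sizes, the synthetic data, and the use of scikit-learn are illustrative assumptions.

```python
# Hypothetical sketch: choose the number of hidden neurons by k-fold
# (multifold) cross-validation, averaging the validation error over folds
# instead of relying on a single train/validation split.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))                                          # placeholder eddy-current features
y = X @ np.array([0.5, -1.0, 2.0, 0.3]) + 0.05 * rng.normal(size=200)  # placeholder defect size

candidate_sizes = [2, 4, 8, 16, 32]          # assumed candidate neuron counts
kfold = KFold(n_splits=5, shuffle=True, random_state=0)

def cv_mse(n_hidden):
    """Average validation MSE of an MLP with n_hidden hidden neurons over all folds."""
    errors = []
    for train_idx, val_idx in kfold.split(X):
        model = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=2000, random_state=0)
        model.fit(X[train_idx], y[train_idx])
        errors.append(np.mean((model.predict(X[val_idx]) - y[val_idx]) ** 2))
    return np.mean(errors)

best = min(candidate_sizes, key=cv_mse)
print("selected hidden neurons:", best)
```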

Estimating chlorophyll-A concentration in the Caspian Sea from MODIS images using artificial neural networks

  • Boudaghpour, Siamak; Moghadam, Hajar Sadat Alizadeh; Hajbabaie, Mohammadreza; Toliati, Seyed Hamidreza
    • Environmental Engineering Research / v.25 no.4 / pp.515-521 / 2020
  • Nowadays, due to various pollution sources, it is essential for environmental scientists to monitor water quality. Phytoplankton form the base of the food chain in water bodies and are among the most important biological indicators in water pollution studies. Chlorophyll-A, a green pigment, is found in all phytoplankton, and its concentration is a direct indicator of phytoplankton biomass. Chlorophyll-A is therefore an indirect indicator of pollutants, including phosphorus and nitrogen, whose refinement and control are important. In the present study, Moderate Resolution Imaging Spectroradiometer (MODIS) satellite images were used to estimate the chlorophyll-A concentration in the southern coastal waters of the Caspian Sea. For this purpose, multi-layer perceptron neural networks (NNs) with three and four feed-forward layers were applied. The best three-layer NN has 15 neurons in its hidden layer, and the best four-layer NN has 5 neurons in each of its two hidden layers. The three- and four-layer networks both resulted in similar root mean square errors (RMSE) of about 0.1 μg/l; however, the four-layer NN proved superior in terms of R² and also required less training data. Accordingly, a four-layer feed-forward NN with 5 neurons in each hidden layer is the best network structure for estimating chlorophyll-A concentration in the southern coastal waters of the Caspian Sea.
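
A minimal sketch of the architecture comparison reported above, assuming scikit-learn and placeholder inputs standing in for MODIS band values (the actual bands, preprocessing, and data are not specified here):

```python
# Hypothetical sketch: compare a one-hidden-layer MLP (15 neurons) with a
# two-hidden-layer MLP (5 neurons each) by RMSE and R^2, as in the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 3))                                    # placeholder MODIS band values
y = 2.0 * X[:, 0] - X[:, 1] ** 2 + 0.1 * rng.normal(size=500)     # placeholder chlorophyll-A

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

for name, layers in [("three-layer", (15,)), ("four-layer", (5, 5))]:
    model = MLPRegressor(hidden_layer_sizes=layers, max_iter=5000, random_state=1)
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(name, "RMSE:", round(rmse, 3), "R2:", round(r2_score(y_te, pred), 3))
```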

Optimal Synthesis Method for Binary Neural Network using NETLA (NETLA를 이용한 이진 신경회로망의 최적 합성방법)

  • Sung, Sang-Kyu; Kim, Tae-Woo; Park, Doo-Hwan; Jo, Hyun-Woo; Ha, Hong-Gon; Lee, Joon-Tark
    • Proceedings of the KIEE Conference / 2001.07d / pp.2726-2728 / 2001
  • This paper describes an optimal synthesis method for binary neural networks (BNN) for the approximation problem of a circular region, using a newly proposed learning algorithm [7]. Our objective is to minimize the number of connections and hidden layer neurons by using the Newly Expanded and Truncated Learning Algorithm (NETLA) for the multilayer BNN. The synthesis method in NETLA is based on the extension principle of Expanded and Truncated Learning (ETL) and on the Expanded Sum of Product (ESP), one of the Boolean expression techniques. It can optimize the given BNN in the binary space without the iterative training required by the conventional Error Back Propagation (EBP) algorithm [6]. If only the true and false patterns are given, the connection weights and threshold values can be determined immediately by the optimal synthesis method of NETLA without any tedious learning. Furthermore, the number of required hidden layer neurons can be reduced and fast learning of the BNN can be realized. The superiority of NETLA over other algorithms is demonstrated on the approximation problem of one circular region.
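
The following is not NETLA itself, only a hedged sketch of the object such a synthesis method produces: a hard-threshold binary neuron whose connection weights and threshold are checked against given true/false patterns without any iterative training. The weights, threshold, and patterns below are made-up examples.

```python
# Hypothetical sketch: verify that fixed integer weights and a threshold
# realize a Boolean function specified only by its true and false patterns
# (no iterative training), the setting a synthesis method like NETLA
# operates in.  All values below are illustrative.
import numpy as np

weights = np.array([1, 1, -2])   # assumed connection weights
threshold = 1                    # assumed threshold value

true_patterns = [(1, 1, 0), (1, 0, 0), (0, 1, 0)]    # should fire
false_patterns = [(0, 0, 0), (0, 0, 1), (1, 1, 1)]   # should stay off

def fires(pattern):
    """Hard-threshold (binary) neuron: output 1 iff the weighted sum reaches the threshold."""
    return int(np.dot(weights, pattern) >= threshold)

assert all(fires(p) == 1 for p in true_patterns)
assert all(fires(p) == 0 for p in false_patterns)
print("weights and threshold separate the given true/false patterns")
```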

Bond strength prediction of steel bars in low strength concrete by using ANN

  • Ahmad, Sohaib; Pilakoutas, Kypros; Rafi, Muhammad M.; Zaman, Qaiser U.
    • Computers and Concrete / v.22 no.2 / pp.249-259 / 2018
  • This paper presents Artificial Neural Network (ANN) models for evaluating the bond strength of deformed, plain and cold-formed bars in low strength concrete. The ANN models were implemented using an experimental database developed by conducting experiments at three different universities on a total of 138 pullout and 108 splitting specimens under monotonic loading. The key parameters examined in the experiments are low strength concrete, bar development length, concrete cover, rebar type (deformed, cold-formed, plain) and bar diameter. These deficient parameters are typically found in non-engineered reinforced concrete structures of developing countries. To develop the ANN bond model for each bar type, four inputs (low strength concrete, development length, concrete cover and bar diameter) were used for training the neurons in the network. A Multi-Layer Perceptron was trained with a back-propagation algorithm. The ANN bond model for deformed bars consists of a single hidden layer with 9 neurons; for Tor bars and plain bars, the ANN models consist of a single hidden layer with 5 and 6 neurons, respectively. The developed ANN models are capable of predicting bond strength for both pullout and splitting bond failure modes, and they show a high coefficient of determination in training, validation and testing, with good prediction and generalization capacity. A comparison of experimental bond strength values with the outcomes of the ANN models showed good agreement. Moreover, ANN model predictions obtained by varying different parameters are also presented for all bar types.
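
A hedged sketch of the deformed-bar configuration described above (4 inputs, one hidden layer with 9 neurons, back-propagation training) using scikit-learn; the data below are placeholders, not the experimental database from the paper.

```python
# Hypothetical sketch: 4 inputs -> one hidden layer (9 neurons) -> bond strength,
# trained by back-propagation.  Data are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
# columns: concrete strength (MPa), development length (mm), cover (mm), bar diameter (mm)
X = np.column_stack([
    rng.uniform(8, 20, 300),
    rng.uniform(100, 500, 300),
    rng.uniform(20, 60, 300),
    rng.uniform(10, 25, 300),
])
bond_strength = 0.3 * np.sqrt(X[:, 0]) + 0.002 * X[:, 2] + rng.normal(0, 0.05, 300)  # placeholder target

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(9,), max_iter=5000, random_state=2),
)
model.fit(X, bond_strength)
print("R^2 on training data:", round(model.score(X, bond_strength), 3))
```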

A Multi-layer Bidirectional Associative Neural Network with Improved Robust Capability for Hardware Implementation (성능개선과 하드웨어구현을 위한 다층구조 양방향연상기억 신경회로망 모델)

  • 정동규; 이수영
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.9 / pp.159-165 / 1994
  • In this paper, we propose a multi-layer associative neural network structure suitable for hardware implementation, with performance refinement and improved robustness. Unlike other methods, which reduce network complexity by putting restrictions on the synaptic weights, we impose the requirement on the hidden layer neurons. The proposed network has synaptic weights obtained by the Hebbian rule between adjacent layers' memory patterns, as in Kosko's BAM. The network can be extended to an arbitrary multi-layer network trained with a genetic algorithm that finds hidden layer memory patterns starting from initial random binary patterns. Learning minimizes a newly defined network error composed of the errors at the input, hidden, and output layers. After learning, a bidirectional recall process improves the performance of the network over one-shot recall. Experimental results on pattern recognition problems demonstrate the performance as a function of the parameter representing the relative significance of the hidden layer error over the sum of the input and output layer errors, show that the proposed model performs much better than Kosko's bidirectional associative memory (BAM), and show the performance gain due to the bidirectionality of the recall process.
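
For reference, a minimal numpy sketch of the two-layer building block the paper extends: a Kosko-style BAM with Hebbian (outer-product) weights between bipolar pattern pairs and bidirectional recall. The stored pattern pairs are arbitrary examples; the multi-layer extension and genetic-algorithm training are not shown.

```python
# Hypothetical sketch: two-layer bidirectional associative memory (BAM) with
# Hebbian outer-product weights and bidirectional (ping-pong) recall.
import numpy as np

def sign(v):
    """Bipolar hard threshold; map zeros to +1 to keep states in {-1, +1}."""
    return np.where(v >= 0, 1, -1)

# Arbitrary bipolar pattern pairs (x in layer A, y in layer B).
pairs = [
    (np.array([ 1, -1,  1, -1]), np.array([ 1,  1, -1])),
    (np.array([-1, -1,  1,  1]), np.array([-1,  1,  1])),
]

# Hebbian rule: W is the sum of outer products of the stored pairs.
W = sum(np.outer(x, y) for x, y in pairs)

# Bidirectional recall starting from a noisy version of the first x pattern.
x = np.array([1, 1, 1, -1])             # one bit flipped
for _ in range(5):                      # iterate until (in practice) a fixed point
    y = sign(W.T @ x)
    x = sign(W @ y)
print("recalled x:", x, "recalled y:", y)
```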

Chip design and application of gas classification function using MLP classification method (MLP분류법을 적용한 가스분류기능의 칩 설계 및 응용)

  • 장으뜸; 서용수; 정완영
    • Proceedings of the IEEK Conference / 2001.06b / pp.309-312 / 2001
  • A primitive gas classification system which can classify a limited number of gas species was designed and simulated. The 'electronic nose' consists of an array of 4 metal oxide gas sensors with different selectivity patterns, a signal collecting unit, and a signal pattern recognition and decision part in a PLD (programmable logic device) chip. The sensor array consists of four commercial tin-oxide-based semiconductor gas sensors. A BP (back propagation) neural network with an MLP (Multilayer Perceptron) structure was designed and implemented on a fifty-thousand-gate CPLD chip in VHDL, to process the input signals from the 4 gas sensors and classify gases in air. The network contained four input units, one hidden layer with 4 neurons, and an output layer with 4 neurons. In computer simulation, the 'electronic nose' system successfully classified 4 kinds of industrial gases.
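
As a rough illustration only (the paper's design is in VHDL on a CPLD, not Python), the sketch below computes the forward pass of a 4-input, 4-hidden-neuron, 4-output MLP of the kind described, i.e. the inference step a chip would implement after off-line training. Weights and sensor readings are made up.

```python
# Hypothetical sketch: forward pass of a 4-4-4 MLP gas classifier, the
# computation a trained network's hardware implementation would perform.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(4, 4)), rng.normal(size=4)    # assumed trained weights, input -> hidden
W2, b2 = rng.normal(size=(4, 4)), rng.normal(size=4)    # assumed trained weights, hidden -> output

sensor_readings = np.array([0.8, 0.1, 0.3, 0.6])        # placeholder normalized sensor outputs

hidden = sigmoid(W1 @ sensor_readings + b1)
output = sigmoid(W2 @ hidden + b2)
print("predicted gas class:", int(np.argmax(output)))   # one output neuron per gas species
```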

Optimal Algorithm and Number of Neurons in Deep Learning (딥러닝 학습에서 최적의 알고리즘과 뉴론수 탐색)

  • Jang, Ha-Young; You, Eun-Kyung; Kim, Hyeock-Jin
    • Journal of Digital Convergence / v.20 no.4 / pp.389-396 / 2022
  • Deep learning is based on the perceptron and is currently used in various fields such as image recognition, voice recognition, object detection, and drug development. Accordingly, a variety of learning algorithms have been proposed, and the number of neurons constituting a neural network varies greatly among researchers. This study analyzed the learning characteristics, as a function of the number of neurons, of the commonly used SGD, momentum, AdaGrad, RMSProp, and Adam methods. To this end, a neural network was constructed with one input layer, three hidden layers, and one output layer. ReLU was used as the activation function, cross entropy error (CEE) as the loss function, and MNIST as the experimental dataset. As a result, it was concluded that 100-300 neurons, the Adam algorithm, and 200 training iterations are the most efficient choices for deep learning training. This study provides guidance on the choice of algorithm and a reference value for the number of neurons when new training data are given in the future.
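
Since the abstract singles out Adam as the most efficient optimizer, a hedged numpy sketch of the standard Adam update rule is given below; the hyperparameters are the usual defaults, not values taken from the paper.

```python
# Hypothetical sketch: one step of the Adam update rule applied to a parameter
# array, written out explicitly with the usual default hyperparameters.
import numpy as np

def adam_step(param, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Return updated (param, m, v) after one Adam step at iteration t (t >= 1)."""
    m = beta1 * m + (1 - beta1) * grad            # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2       # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)                  # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0, 3.0])
m = np.zeros_like(w)
v = np.zeros_like(w)
for t in range(1, 2001):
    w, m, v = adam_step(w, 2 * w, m, v, t)
print("w after 2000 Adam steps:", np.round(w, 4))
```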

Implementation of Exchange Rate Forecasting Neural Network Using Heterogeneous Computing (이기종 컴퓨팅을 활용한 환율 예측 뉴럴 네트워크 구현)

  • Han, Seong Hyeon; Lee, Kwang Yeob
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.11 / pp.71-79 / 2017
  • In this paper, we implemented an exchange rate forecasting neural network using heterogeneous computing. Exchange rate forecasting requires a large amount of data, and we used a neural network that could leverage such data. The work is divided into two processes: learning and verification. Learning was performed on the CPU; for verification, RTL written in Verilog HDL was run on an FPGA. The neural network has four input neurons, four hidden neurons, and one output neuron. The input neurons take the values of 1 US dollar, 100 Japanese yen, 1 euro, and 1 British pound, and the network predicts the value of 1 Canadian dollar. The exchange rate prediction proceeds in the order: input, normalization, fixed-point conversion, neural network forward pass, floating-point conversion, denormalization, and output. When forecasting the exchange rate for November 2016, the error ranged between 0.9 won and 9.13 won. If the number of neurons is increased by adding data other than exchange rates, more precise exchange rate prediction is expected to be possible.
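
A small sketch of the processing order described above (normalize, convert to fixed point, forward pass, convert back, denormalize). The Q8.8 fixed-point format, the weights, the normalization range, and the input rates are illustrative assumptions, not the paper's RTL, and a single linear stage stands in for the 4-4-1 network.

```python
# Hypothetical sketch of the forecasting pipeline order described above,
# using an assumed Q8.8 fixed-point representation.  Weights and rates are made up.
import numpy as np

FRAC_BITS = 8                        # Q8.8: 8 fractional bits
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    return np.round(x * SCALE).astype(np.int32)

def to_float(x):
    return x.astype(np.float64) / SCALE

rates = np.array([1150.0, 1020.0, 1350.0, 1500.0])    # placeholder KRW per USD, JPY(100), EUR, GBP
lo, hi = 900.0, 1600.0                                 # assumed normalization range

W = to_fixed(np.array([0.3, 0.2, 0.25, 0.25]))         # assumed trained 4 -> 1 weights

x_norm = (rates - lo) / (hi - lo)                      # 1) normalize
x_fx = to_fixed(x_norm)                                # 2) fixed-point conversion
y_fx = (W @ x_fx) >> FRAC_BITS                         # 3) forward pass in fixed point
y_norm = to_float(np.array(y_fx))                      # 4) back to floating point
cad_krw = y_norm * (hi - lo) + lo                      # 5) denormalize
print("predicted KRW per CAD:", float(cad_krw))
```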

A Comparative Study of Image Recognition by Neural Network Classifier and Linear Tree Classifier (신경망 분류기와 선형트리 분류기에 의한 영상인식의 비교연구)

  • Young Tae Park
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.5 / pp.141-148 / 1994
  • Both the neural network classifier utilizing a multi-layer perceptron and the linear tree classifier composed of hierarchically structured linear discriminating functions can form arbitrarily complex decision boundaries in the feature space, and they have very similar decision making processes. In this paper, a new method for automatically choosing the number of neurons in the hidden layers and for initializing the connection weights between the layers, together with its supporting theory, is presented by mapping the sequential structure of the linear tree classifier onto the parallel structure of a neural network with one or two hidden layers. Experimental results on real data obtained from military ship images show that this method is effective, and that there is no significant difference in the classification accuracy of the two classifiers.
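
A hedged sketch of the general idea (not the paper's exact mapping): each linear discriminating function w·x + b of the tree becomes one hidden neuron, so the node's hyperplane directly initializes that neuron's incoming weights and bias. The two example hyperplanes are made up.

```python
# Hypothetical sketch: initialize the first hidden layer of an MLP from the
# linear discriminating functions of a linear tree classifier, one hidden
# neuron per internal node.  The hyperplanes below are arbitrary examples.
import numpy as np

# Linear discriminating functions of the tree's internal nodes: w . x + b
tree_hyperplanes = [
    (np.array([1.0, -2.0]), 0.5),
    (np.array([0.5, 1.5]), -1.0),
]

# Stack them into the hidden layer's weight matrix and bias vector.
W1 = np.stack([w for w, _ in tree_hyperplanes])         # shape (n_hidden, n_features)
b1 = np.array([b for _, b in tree_hyperplanes])         # shape (n_hidden,)

def hidden_activations(x):
    """Sigmoid hidden units approximating the tree's hard node decisions."""
    return 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))

x = np.array([1.0, 0.2])
print("tree node decisions:", (W1 @ x + b1 > 0).astype(int))
print("hidden activations :", np.round(hidden_activations(x), 3))
```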

Comparison of the BOD Forecasting Ability of the ARIMA model and the Artificial Neural Network Model (ARIMA 모형과 인공신경망모형의 BOD예측력 비교)

  • 정효준; 이홍근
    • Journal of Environmental Health Sciences / v.28 no.3 / pp.19-25 / 2002
  • In this paper, water quality forecasting was performed on the BOD of the Chungju Dam using the ARIMA model, a statistical time series model, and an artificial neural network model. Monthly water quality data were collected from 1991 to 2000. The most appropriate ARIMA model for the Chungju Dam was found to be the multiplicative seasonal ARIMA(1,0,1)(1,0,1)₁₂ model. The artificial neural network model, which has been used relatively often in recent years, forecasts new data through the strength of a learned weight matrix, analogous to human neurons. In this paper, BOD values were forecasted using the back-propagation algorithm for multi-layer perceptrons. The artificial neural network model was composed of two hidden layers, each designed with fifteen nodes. It was demonstrated that the ARIMA model was more appropriate in terms of capturing changes around the overall average, whereas the artificial neural network model was more appropriate in terms of reflecting the minimum and maximum values.
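
For concreteness, a sketch of how the seasonal ARIMA model named above could be fitted with statsmodels on a monthly series; the synthetic series below stands in for the Chungju Dam BOD data, and the neural network comparison is omitted.

```python
# Hypothetical sketch: fit a multiplicative seasonal ARIMA(1,0,1)(1,0,1)12
# to a monthly series and forecast the next 12 months.  The series is synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(4)
months = pd.date_range("1991-01", periods=120, freq="MS")            # 1991-2000, monthly
bod = 2.0 + 0.5 * np.sin(2 * np.pi * np.arange(120) / 12) + 0.2 * rng.normal(size=120)
series = pd.Series(bod, index=months)                                 # placeholder BOD series

model = SARIMAX(series, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12))
result = model.fit(disp=False)
print(result.summary().tables[0])
print(result.forecast(steps=12).round(2))                             # 12-month-ahead BOD forecast
```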