• Title/Summary/Keyword: neural network training


The Comparison of Neural Network Learning Paradigms: Backpropagation, Simulated Annealing, Genetic Algorithm, and Tabu Search

  • Chen Ming-Kuen
    • Proceedings of the Korean Society for Quality Management Conference
    • /
    • 1998.11a
    • /
    • pp.696-704
    • /
    • 1998
  • Artificial neural networks (ANNs) have been successfully applied in various areas, but how to construct an effective network remains one of the critical problems. This study focuses on that problem in depth. First, ANNs were constructed with four different learning algorithms: backpropagation, simulated annealing, genetic algorithm, and tabu search. The experimental results of the four learning algorithms were then compared by statistical analysis, using training RMS, training time, and testing RMS as the comparison criteria.

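As a rough illustration of the comparison criteria named above (training RMS and training time), the sketch below trains a tiny one-hidden-layer network with simulated annealing on synthetic data; the data set, network size, and annealing schedule are invented for the example and are not taken from the study.

```python
import time
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (placeholder for the study's data sets)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

def rms(w, X, y):
    # 2-3-1 network: w packs the two weight matrices
    W1 = w[:6].reshape(2, 3)
    W2 = w[6:9]
    pred = np.tanh(X @ W1) @ W2
    return np.sqrt(np.mean((pred - y) ** 2))

def simulated_annealing(X, y, iters=2000, temp=1.0, cooling=0.999):
    cur = rng.normal(size=9)
    cur_rms = rms(cur, X, y)
    best_rms = cur_rms
    for _ in range(iters):
        cand = cur + rng.normal(scale=temp, size=9)
        c_rms = rms(cand, X, y)
        # Always accept improvements; accept worse moves with Boltzmann probability
        if c_rms < cur_rms or rng.random() < np.exp((cur_rms - c_rms) / temp):
            cur, cur_rms = cand, c_rms
            best_rms = min(best_rms, cur_rms)
        temp *= cooling
    return best_rms

t0 = time.perf_counter()
final = simulated_annealing(X, y)
elapsed = time.perf_counter() - t0
print(f"training RMS = {final:.3f}, training time = {elapsed:.2f} s")
```

Backpropagation, genetic algorithm, and tabu search variants would plug into the same `rms` criterion, which is what makes the statistical comparison across the four paradigms possible.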

Comparison of EKF and UKF on Training the Artificial Neural Network

  • Kim, Dae-Hak
    • Journal of the Korean Data and Information Science Society
    • /
    • v.15 no.2
    • /
    • pp.499-506
    • /
    • 2004
  • The unscented Kalman filter (UKF) is known to outperform the extended Kalman filter (EKF) for nonlinear state estimation, with the significant advantage that it does not require computation of the Jacobian; the EKF, in turn, holds a competitive advantage over the UKF in running time. We compare both algorithms on training an artificial neural network. A validation data set is used to estimate the parameters that are expected to give a better fit on the test data set. Experimental results indicating the performance of both algorithms are presented.

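The unscented transform at the heart of the UKF can be sketched in a few lines. The code below propagates sigma points through a tanh nonlinearity standing in for a network layer; it is an illustrative sketch (with made-up mean, covariance, and kappa), not the paper's implementation.

```python
import numpy as np

def sigma_points(mean, cov, kappa):
    # Symmetric sigma-point set: the mean plus/minus scaled Cholesky columns
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    w0 = kappa / (n + kappa)
    weights = np.array([w0] + [1.0 / (2 * (n + kappa))] * (2 * n))
    return np.array(pts), weights

def unscented_transform(f, mean, cov, kappa=1.0):
    # Push each sigma point through f, then recombine mean and covariance
    pts, w = sigma_points(mean, cov, kappa)
    fy = np.array([f(p) for p in pts])
    my = w @ fy
    Py = sum(wi * np.outer(yi - my, yi - my) for wi, yi in zip(w, fy))
    return my, Py

# Nonlinear activation playing the role of the network forward pass
f = lambda x: np.tanh(x)
m, P = unscented_transform(f, np.array([0.5, -0.2]), 0.1 * np.eye(2))
```

No Jacobian of `f` is ever computed, which is exactly the UKF advantage the abstract mentions; the EKF would instead linearize `f` around the mean.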

A Study on Characteristics of Neural Network Model for Reservoir Inflow Forecasting (저수지 유입량 예측을 위한 신경망 모형의 특성 연구)

  • Kim, Jae-Hyung;Yoon, Yong-Nam
    • Journal of the Korean Society of Hazard Mitigation
    • /
    • v.2 no.4 s.7
    • /
    • pp.123-129
    • /
    • 2002
  • In this study, the results of Chungju reservoir inflow forecasting using a three-layer neural network model were analyzed in order to investigate the characteristics of neural network models for reservoir inflow forecasting. Suitable numbers of input- and hidden-layer neurons were proposed after examining how the forecasts varied with neuron count and training epochs, and the likelihood of underestimation was judged by examining how the forecasts varied with the difference between the peak inflows of the training and forecasting periods. In addition, a minimum training data size necessary for precise forecasting was proposed. As a result, we confirmed that an excessive number of neurons or training epochs can cause over-fitting, and judged that 8-10 neurons and 1,500-3,000 training epochs are suitable for Chungju reservoir inflow forecasting. When the peak inflow of the training data set was larger than the forecasted one, the forecasts could be underestimated. Moreover, when comparatively short training records were applied, the forecasts were relatively inaccurate; using more than 600 training data points is recommended for more precise forecasting at Chungju reservoir.
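The epoch-selection idea in the abstract (tracking validation error so that excessive training epochs do not cause over-fitting) can be sketched as follows; the data here are synthetic placeholders, not Chungju reservoir records, and the network sizes merely echo the ranges the paper recommends.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "inflow" features and target (hypothetical stand-in data)
X = rng.uniform(0, 1, (120, 3))
y = X @ np.array([0.6, 0.3, 0.1]) + 0.05 * rng.normal(size=120)
X_tr, y_tr, X_va, y_va = X[:90], y[:90], X[90:], y[90:]

n_hidden = 8  # within the 8-10 range suggested in the abstract
W1 = rng.normal(scale=0.5, size=(3, n_hidden))
W2 = rng.normal(scale=0.5, size=n_hidden)
lr = 0.05
best_epoch, best_val = 0, np.inf

for epoch in range(1, 3001):  # upper end of the suggested 1,500-3,000 epochs
    h = np.tanh(X_tr @ W1)
    err = h @ W2 - y_tr
    # Backpropagation through the three-layer network
    gW2 = h.T @ err / len(y_tr)
    gW1 = X_tr.T @ (np.outer(err, W2) * (1 - h**2)) / len(y_tr)
    W2 -= lr * gW2
    W1 -= lr * gW1
    # Validation RMS flags the point past which more epochs over-fit
    val_rms = np.sqrt(np.mean((np.tanh(X_va @ W1) @ W2 - y_va) ** 2))
    if val_rms < best_val:
        best_val, best_epoch = val_rms, epoch

print(f"lowest validation RMS {best_val:.3f} at epoch {best_epoch}")
```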

Experience Sensitive Cumulative Neural Network Using RAM (RAM을 이용한 경험유관축적 신경망 모델)

  • 김성진;권영철;이수동
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.2
    • /
    • pp.95-102
    • /
    • 2004
  • In this paper, the Experience Sensitive Cumulative Neural Network (ESCNN), which can accumulate the same or similar experiences, is introduced. As the same or similar training patterns accumulate in the network, the system recognizes the more important information in the training patterns. The functions of forgetting less important information and attending to more important information in the training patterns are surveyed and implemented in simulations. Owing to these forgetting and attending properties, the system behaves well under noisy circumstances, even in 50 percent noise environments. This paper also describes the creation of generalized patterns from the input training patterns.

GENIE : A learning intelligent system engine based on neural adaptation and genetic search (GENIE : 신경망 적응과 유전자 탐색 기반의 학습형 지능 시스템 엔진)

  • 장병탁
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1996.10a
    • /
    • pp.27-34
    • /
    • 1996
  • GENIE is a learning-based engine for building intelligent systems. Learning in GENIE proceeds by incrementally modeling its human or technical environment using a neural network and a genetic algorithm. The neural network represents the knowledge for solving a given task and has the ability to grow its structure. The genetic algorithm provides the neural network with training examples by actively exploring the example space of the problem. Integrated into the GENIE system architecture, the genetic algorithm and the neural network form a virtually self-teaching autonomous learning system. This paper describes the structure of GENIE and its learning components. Performance is demonstrated on a robot learning problem. We also discuss the lessons learned from experiments with GENIE and point out further possibilities for effectively hybridizing genetic algorithms with neural networks and other soft-computing techniques.

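The idea of a genetic algorithm actively generating training examples for a network can be sketched as below. Everything here is invented for illustration (the target function, the deliberately weak model, and the GA parameters); it is not GENIE's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

target = lambda x: np.sin(3 * x)       # the unknown environment to be modeled
model = lambda x, w: w[0] * x + w[1]   # a deliberately weak current learner

def fitness(x, w):
    # Active exploration: examples the current model gets wrong are "fit"
    return abs(target(x) - model(x, w))

def ga_generate_example(w, pop=20, gens=30):
    xs = rng.uniform(-2, 2, pop)
    for _ in range(gens):
        fit = np.array([fitness(x, w) for x in xs])
        # Truncation selection of the top half, then Gaussian mutation
        parents = xs[np.argsort(fit)[-pop // 2:]]
        children = parents + rng.normal(scale=0.1, size=parents.size)
        xs = np.concatenate([parents, children])
    return xs[np.argmax([fitness(x, w) for x in xs])]

w = np.array([0.5, 0.0])
hard_x = ga_generate_example(w)  # an example the model would learn most from
```

Feeding such maximally informative examples back into training is what makes the GA-plus-network pair a self-teaching loop.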

The speed control of induction motor using neural networks (신경회로망을 이용한 유도전동기 속도제어)

  • 김세찬;원충연
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.45 no.1
    • /
    • pp.42-53
    • /
    • 1996
  • The paper presents a speed control system for a vector-controlled induction motor using neural networks. The main features of the proposed speed control system are a Neural Network Controller (NNC), which supplies the torque current to the induction motor, and a Neural Network Emulator (NNE), which captures the forward dynamics of the induction motor. A backpropagation training algorithm is employed to train the NNE and NNC. To determine the NNC output error, the plant (induction motor) output error is back-propagated through the NNE. The NNC and NNE for speed control of the vector-controlled induction motor are implemented on a TMS320C30 DSP and an IGBT current-regulated PWM inverter. Computer simulation and experimental results verify that the proposed speed control system is robust to load variation.


Modeling of Strength of High Performance Concrete with Artificial Neural Network and Mahalanobis Distance Outlier Detection Method (신경망 이론과 Mahalanobis Distance 이상치 탐색방법을 이용한 고강도 콘크리트 강도 예측 모델 개발에 관한 연구)

  • Hong, Jung-Eui
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.33 no.4
    • /
    • pp.122-129
    • /
    • 2010
  • High-performance concrete (HPC) is a relatively new term in the concrete construction industry. Several studies have shown that concrete strength development is determined not only by the water-to-cement ratio but is also influenced by the content of other concrete ingredients. HPC is a highly complex material, which makes modeling its behavior a very difficult task. This paper demonstrates the possibility of adapting an artificial neural network (ANN) to predict the compressive strength of HPC. The Mahalanobis distance (MD) outlier detection method is used to increase the prediction ability of the ANN, and the detailed procedure for calculating the MD is described. The effects of outliers were compared before and after ANN training: the MD method successfully removed the outliers and improved the neural network's training and prediction performance.
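A minimal sketch of Mahalanobis-distance outlier screening before network training, assuming synthetic feature data (the paper uses actual HPC mix data). The cutoff of 11.14 is the 97.5% chi-square quantile for 4 degrees of freedom; the choice of quantile is an assumption for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical concrete-mix features; the paper uses HPC ingredient data
X = rng.normal(size=(100, 4))
X[0] = [8.0, 8.0, 8.0, 8.0]  # planted outlier row

mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
d = X - mu
# Squared Mahalanobis distance per row: d_i^T Sigma^{-1} d_i
md = np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

# Keep rows whose MD falls below the chi-square-based cutoff
cutoff = np.sqrt(11.14)  # chi2(0.975, df=4)
X_clean = X[md < cutoff]
```

Training the ANN on `X_clean` rather than `X` is the pre-processing step the abstract credits with the improved prediction performance.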

Fault Location Technique of 154 kV Substation using Neural Network (신경회로망을 이용한 154kV 변전소의 고장 위치 판별 기법)

  • Ahn, Jong-Bok;Kang, Tae-Won;Park, Chul-Won
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.67 no.9
    • /
    • pp.1146-1151
    • /
    • 2018
  • Recently, as computing platforms have improved, research on intelligent electric power facilities has been attempting to apply artificial intelligence techniques. In particular, when faults occur in a substation, the possible fault locations should be identified quickly to minimize power recovery time. This paper presents a fault location technique for a 154 kV substation using a neural network. We constructed a training matrix based on the operating states of the circuit breakers and IEDs to identify the fault location for each component of the target 154 kV substation, such as lines, buses, and transformers. After training the neural network to identify fault locations using the Weka software, the fault location discrimination performance of the designed network was confirmed.

Robust architecture search using network adaptation

  • Rana, Amrita;Kim, Kyung Ki
    • Journal of Sensor Science and Technology
    • /
    • v.30 no.5
    • /
    • pp.290-294
    • /
    • 2021
  • Experts have designed popular and successful model architectures, which, however, are not optimal for every scenario. Despite the remarkable performance achieved by deep neural networks, manually designed classification networks remain the backbone of object detectors. One major challenge is the ImageNet pre-training of the search-space representation; moreover, the searched network incurs huge computational cost. Therefore, to avoid the pre-training step, we introduce a network adaptation technique that starts from a backbone model pre-trained on ImageNet. The adaptation method can efficiently adapt a manually designed ImageNet network to a new object-detection task. Neural architecture search (NAS) is adopted to adapt the architecture of the network. The adaptation is conducted on the MobileNetV2 network, and the proposed NAS is tested with the SSDLite detector. The results demonstrate improved performance compared with the existing network architecture in terms of search cost, total number of multiply-add operations (MAdds), and mean average precision (mAP). The total computational cost of the proposed NAS is much lower than that of state-of-the-art (SOTA) NAS methods.

Training-Free Hardware-Aware Neural Architecture Search with Reinforcement Learning

  • Tran, Linh Tam;Bae, Sung-Ho
    • Journal of Broadcast Engineering
    • /
    • v.26 no.7
    • /
    • pp.855-861
    • /
    • 2021
  • Neural architecture search (NAS) is a cutting-edge technology in the machine learning community. NAS Without Training (NASWOT) was recently proposed to tackle the high computational demand of NAS by leveraging indicators that predict the performance of architectures before training. The advantage of these indicators is that they require no training, so NASWOT reduces search time and computational cost significantly. However, NASWOT only considers high-performing networks, which does not guarantee fast inference on hardware devices. In this paper, we propose a multi-objective reward function that considers both the network's latency and its predicted performance, and we incorporate it into a reinforcement learning approach to search for the best networks with low latency. Unlike other methods, which use FLOPs as a latency proxy that does not reflect actual latency, we obtain the network's latency from the hardware NAS benchmark. We conduct extensive experiments on NAS-Bench-201 using the CIFAR-10, CIFAR-100, and ImageNet-16-120 datasets, and show that the proposed method can generate the best network under a latency constraint without training subnetworks.
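A hedged sketch of a multi-objective reward in the spirit described above, combining a training-free performance score with a measured latency. The functional form (a soft power-law penalty, similar in flavor to MnasNet-style rewards) and all parameter values are illustrative assumptions, not the paper's exact reward.

```python
def reward(score, latency_ms, target_ms=20.0, beta=-0.07):
    """Combine a training-free performance indicator with measured latency.

    Below the latency budget there is no penalty; above it, the score is
    scaled down by a soft power-law factor controlled by beta (assumed value).
    """
    penalty = (latency_ms / target_ms) ** beta if latency_ms > target_ms else 1.0
    return score * penalty

fast = reward(score=0.8, latency_ms=15.0)   # within budget: score unchanged
slow = reward(score=0.9, latency_ms=60.0)   # over budget: score penalized
```

An RL controller maximizing this reward is steered toward architectures that are both predicted to perform well and measured to be fast on the target hardware.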