• Title/Abstract/Keyword: neural network training

Search results: 1,742

The Comparison of Neural Network Learning Paradigms: Backpropagation, Simulated Annealing, Genetic Algorithm, and Tabu Search

  • Chen Ming-Kuen
    • 한국품질경영학회:학술대회논문집
    • /
한국품질경영학회 1998년도 The 12th Asia Quality Management Symposium: Total Quality Management for Restoring Competitiveness
    • /
    • pp.696-704
    • /
    • 1998
  • Artificial neural networks (ANN) have been successfully applied in various areas, but how to construct a network effectively remains a critical problem. This study focuses on that problem and examines it extensively. First, ANNs were constructed with four different learning algorithms: backpropagation, simulated annealing, genetic algorithm, and tabu search. The experimental results of the four learning algorithms were then compared by statistical analysis, using training RMS, training time, and testing RMS as the comparison criteria.
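
The backpropagation baseline and the RMS criterion used in the comparison can be sketched as follows; this is a minimal illustrative example (a single sigmoid neuron on a toy dataset), not the study's code.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rms(weights, bias, data):
    # Root-mean-square error: the comparison criterion used in the study.
    se = sum((t - sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)) ** 2
             for x, t in data)
    return math.sqrt(se / len(data))

def train_backprop(data, epochs=2000, lr=0.5, seed=0):
    # Gradient-descent (backpropagation) training of one sigmoid neuron.
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(len(data[0][0]))]
    b = 0.0
    for _ in range(epochs):
        for x, t in data:
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            delta = (y - t) * y * (1 - y)          # squared-error gradient
            w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
            b -= lr * delta
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_backprop(AND)
print(rms(w, b, AND))   # training RMS after learning
```

The same `rms` criterion would then be evaluated for the other three search-based learners on identical data to make the comparison fair.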


Comparison of EKF and UKF on Training the Artificial Neural Network

  • Kim, Dae-Hak
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 15, No. 2
    • /
    • pp.499-506
    • /
    • 2004
  • The Unscented Kalman Filter (UKF) is known to outperform the Extended Kalman Filter (EKF) for nonlinear state estimation, with the significant advantage that it does not require computation of the Jacobian, while the EKF holds a competitive advantage over the UKF in execution time. We compare both algorithms on training an artificial neural network. The validation data set is used to estimate the parameters that should yield a better fit on the test data set. Experimental results indicating the performance of both algorithms are presented.
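
The key difference mentioned above, that the UKF needs no Jacobian, can be illustrated with a one-dimensional unscented transform; this is a generic sketch, not the paper's implementation.

```python
import math

def unscented_transform_1d(mean, var, f, kappa=2.0):
    # Propagate deterministically chosen sigma points through a nonlinearity
    # instead of linearizing with a Jacobian as the EKF does.
    n = 1
    spread = math.sqrt((n + kappa) * var)
    sigma_points = [mean, mean + spread, mean - spread]
    weights = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    ys = [f(x) for x in sigma_points]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var

# For f(x) = x^2 with x ~ N(0, 1), the true output mean is 1 and variance 2;
# a first-order EKF linearization at the mean would predict a mean of 0.
m, v = unscented_transform_1d(0.0, 1.0, lambda x: x * x)
print(m, v)
```

In UKF training of a neural network, `f` would be the network's output as a function of its weights, which is exactly where avoiding the Jacobian pays off.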


저수지 유입량 예측을 위한 신경망 모형의 특성 연구 (A Study on Characteristics of Neural Network Model for Reservoir Inflow Forecasting)

  • 김재형;윤용남
    • 한국방재학회 논문집
    • /
    • Vol. 2, No. 4
    • /
    • pp.123-129
    • /
    • 2002
  • In this study, the characteristics of neural network models for reservoir inflow forecasting were analyzed using the results of forecasting the inflow of Chungju Reservoir with a three-layer neural network model. Appropriate numbers of input- and hidden-layer neurons and training iterations were suggested, and it was found that predictions are underestimated when the peak flows in the training data are smaller than the peak flow to be predicted. Overfitting that can occur when the number of neurons or training iterations is excessive was also confirmed, and the minimum length of training data required for accurate forecasting was presented. For Chungju Reservoir, a neural network model with 8 to 10 neurons and 1,500 to 3,000 training iterations was found suitable, and at least 600 data points were required in the training set for accurate forecasting.
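
The configuration the study found suitable (a three-layer network with 8 to 10 hidden neurons, 1,500 to 3,000 training iterations) can be sketched with a toy network; the regression task and learning rate below are assumptions for illustration, not the authors' setup.

```python
import math, random

def make_mlp(n_in, n_hidden, seed=0):
    # Three-layer network: input, one tanh hidden layer, linear output.
    rng = random.Random(seed)
    return {
        "w1": [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)],
        "b1": [0.0] * n_hidden,
        "w2": [rng.uniform(-1, 1) for _ in range(n_hidden)],
        "b2": 0.0,
    }

def forward(net, x):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(net["w1"], net["b1"])]
    y = sum(w * hi for w, hi in zip(net["w2"], h)) + net["b2"]
    return h, y

def train(net, data, epochs=1500, lr=0.05):
    # Plain stochastic gradient descent on squared error.
    for _ in range(epochs):
        for x, t in data:
            h, y = forward(net, x)
            err = y - t
            for j, hj in enumerate(h):
                grad_h = err * net["w2"][j] * (1 - hj * hj)
                net["w2"][j] -= lr * err * hj
                for i, xi in enumerate(x):
                    net["w1"][j][i] -= lr * grad_h * xi
                net["b1"][j] -= lr * grad_h
            net["b2"] -= lr * err
    return net

# Toy 1-D regression stand-in for the inflow series.
data = [((x / 10.0,), math.sin(x / 10.0)) for x in range(-30, 31, 2)]
net = train(make_mlp(1, 8), data)     # 8 hidden neurons, 1500 iterations
mse = sum((forward(net, x)[1] - t) ** 2 for x, t in data) / len(data)
print(mse)
```

Raising the neuron count or iteration count well beyond these values is where the study observed overfitting.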

RAM을 이용한 경험유관축적 신경망 모델 (Experience Sensitive Cumulative Neural Network Using RAM)

  • 김성진;권영철;이수동
    • 전자공학회논문지CI
    • /
    • Vol. 41, No. 2
    • /
    • pp.95-102
    • /
    • 2004
  • The proposed experience-sensitive cumulative neural network has a structure that accumulates the number of times each input pattern has been trained, giving it an attention function that responds strongly to common experience built up through repeated training. Even when noisy patterns are trained directly without preprocessing, relatively useful information accumulates so that a generalized pattern can be extracted. This paper proposes an experience-sensitive cumulative neural network model that supports repeated as well as incremental training, describes its basic characteristics of forgetting and attention, and presents the extraction of generalized patterns from the trained information together with their generation and retraining.
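
The idea of accumulating training counts so that common experience draws a stronger response can be sketched with a WiSARD-style RAM discriminator; this is a generic stand-in for illustration, not the proposed model.

```python
from collections import Counter

class CumulativeRAMDiscriminator:
    # Each RAM node stores, per address tuple, how many times that pattern
    # fragment was seen in training; repeated ("common") experience therefore
    # yields a stronger summed response.
    def __init__(self, n_bits, tuple_size):
        self.tuples = [range(i, i + tuple_size)
                       for i in range(0, n_bits, tuple_size)]
        self.rams = [Counter() for _ in self.tuples]

    def _addresses(self, bits):
        return [tuple(bits[i] for i in idx) for idx in self.tuples]

    def train(self, bits):
        for ram, addr in zip(self.rams, self._addresses(bits)):
            ram[addr] += 1          # cumulative training count

    def response(self, bits):
        return sum(ram[addr]
                   for ram, addr in zip(self.rams, self._addresses(bits)))

d = CumulativeRAMDiscriminator(n_bits=8, tuple_size=2)
common = [1, 0, 1, 0, 1, 0, 1, 0]
rare = [0, 1, 1, 0, 1, 0, 1, 0]
for _ in range(5):
    d.train(common)                 # repeated training on a common pattern
d.train(rare)                       # a single noisy/rare pattern
print(d.response(common), d.response(rare))
```

The repeatedly trained pattern gets the larger response, which is the attention effect the abstract describes.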

GENIE : 신경망 적응과 유전자 탐색 기반의 학습형 지능 시스템 엔진 (GENIE : A learning intelligent system engine based on neural adaptation and genetic search)

  • 장병탁
    • 한국지능시스템학회:학술대회논문집
    • /
    • 한국퍼지및지능시스템학회 1996년도 추계학술대회 학술발표 논문집
    • /
    • pp.27-34
    • /
    • 1996
  • GENIE is a learning-based engine for building intelligent systems. Learning in GENIE proceeds by incrementally modeling its human or technical environment using a neural network and a genetic algorithm. The neural network is used to represent the knowledge for solving a given task and has the ability to grow its structure. The genetic algorithm provides the neural network with training examples by actively exploring the example space of the problem. Integrated into the GENIE system architecture, the genetic algorithm and the neural network build a virtually self-teaching autonomous learning system. This paper describes the structure of GENIE and its learning components. The performance is demonstrated on a robot learning problem. We also discuss the lessons learned from experiments with GENIE and point out further possibilities of effectively hybridizing genetic algorithms with neural networks and other soft-computing techniques.
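
The active example generation described above, in which the genetic algorithm searches the example space for inputs where the current model errs most, can be sketched as follows; the toy target function and GA settings are assumptions, not from the paper.

```python
import random

def target(x):
    # Environment to be modeled (unknown to the learner).
    return x * x

def evolve_hard_examples(model, pop_size=20, generations=30, seed=1):
    # GA over candidate inputs: fitness is the current model's error, so the
    # population drifts toward regions the model has not yet learned.
    rng = random.Random(seed)
    pop = [rng.uniform(-2, 2) for _ in range(pop_size)]
    fitness = lambda x: abs(target(x) - model(x))
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # elitist selection
        children = [0.5 * (rng.choice(parents) + rng.choice(parents))
                    + rng.gauss(0, 0.1)           # crossover + mutation
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# A model that is exact near 0 but wrong far from it:
linear_model = lambda x: 0.0
hard_x = evolve_hard_examples(linear_model)
print(hard_x)   # drifts toward large |x|, where the model's error is largest
```

In GENIE the selected inputs would then be labeled by the environment and fed back to the growing neural network as fresh training examples.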


신경회로망을 이용한 유도전동기 속도제어 (The speed control of induction motor using neural networks)

  • 김세찬;원충연
    • 대한전기학회논문지
    • /
    • Vol. 45, No. 1
    • /
    • pp.42-53
    • /
    • 1996
  • The paper presents a speed control system for a vector-controlled induction motor using neural networks. The main features of the proposed speed control system are a Neural Network Controller (NNC), which supplies the torque current to the induction motor, and a Neural Network Emulator (NNE), which captures the forward dynamics of the induction motor. A backpropagation training algorithm is employed to train the NNE and NNC. To determine the NNC output error, the plant (induction motor) output error is backpropagated through the NNE. The NNC and NNE for speed control of the vector-controlled induction motor are implemented on a TMS320C30 DSP and an IGBT current-regulated PWM inverter. Computer simulation and experimental results verify that the proposed speed control system is robust to load variation.
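
The scheme of backpropagating the plant output error through the emulator to train the controller can be reduced to a scalar sketch; the gains and plant model below are invented for illustration, not the authors' DSP implementation.

```python
# Plant: y = 2u (its gradient is unknown to the controller).
def plant(u):
    return 2.0 * u

w_e = 2.0       # emulator (NNE) weight, assumed already trained to match the plant
w_c = 0.1       # controller (NNC) weight, to be trained
lr = 0.05

for _ in range(200):
    r = 1.0                      # speed reference
    u = w_c * r                  # NNC output (torque command)
    y = plant(u)                 # actual motor response
    err = y - r                  # plant output error
    # Backpropagate err through the emulator, not the plant, to get the
    # gradient at the controller's output:
    grad_u = err * w_e
    w_c -= lr * grad_u * r
print(w_c)   # approaches 0.5, so that y = 2 * 0.5 * r = r
```

Because the emulator is differentiable while the physical motor is not, the NNE is what makes this gradient path available.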


신경망 이론과 Mahalanobis Distance 이상치 탐색방법을 이용한 고강도 콘크리트 강도 예측 모델 개발에 관한 연구 (Modeling of Strength of High Performance Concrete with Artificial Neural Network and Mahalanobis Distance Outlier Detection Method)

  • 홍정의
    • 산업경영시스템학회지
    • /
    • Vol. 33, No. 4
    • /
    • pp.122-129
    • /
    • 2010
  • High-performance concrete (HPC) is a relatively new term in the concrete construction industry. Several studies have shown that concrete strength development is determined not only by the water-to-cement ratio but also by the content of other concrete ingredients. HPC is a highly complex material, which makes modeling its behavior very difficult. This paper demonstrates the possibility of adapting an artificial neural network (ANN) to predict the compressive strength of HPC. The Mahalanobis Distance (MD) outlier detection method is used to increase the prediction ability of the ANN, and the detailed procedure for calculating MD is described. The effects of outliers were compared before and after neural network training. The MD outlier detection method successfully removed outliers and improved neural network training and prediction performance.
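
The Mahalanobis Distance screening step can be sketched for two-dimensional data; the sample points and the decision rule (drop the sample with the largest distance) are assumptions for illustration, not the paper's procedure.

```python
import math

def mahalanobis_2d(data):
    # Distance of each point from the sample mean, scaled by the inverse
    # sample covariance, so correlated spread is accounted for.
    xs = [p[0] for p in data]; ys = [p[1] for p in data]
    n = len(data)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    det = sxx * syy - sxy * sxy
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det   # 2x2 inverse
    dists = []
    for x, y in data:
        dx, dy = x - mx, y - my
        d2 = dx * dx * ixx + 2 * dx * dy * ixy + dy * dy * iyy
        dists.append(math.sqrt(d2))
    return dists

data = [(1, 2), (2, 3), (3, 4), (2, 2), (3, 3), (10, -5)]  # last point breaks the trend
dists = mahalanobis_2d(data)
worst = dists.index(max(dists))
cleaned = [p for i, p in enumerate(data) if i != worst]
print(data[worst])   # the flagged outlier
```

The ANN would then be trained on `cleaned`, which is the before/after comparison the paper reports.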

신경회로망을 이용한 154kV 변전소의 고장 위치 판별 기법 (Fault Location Technique of 154 kV Substation using Neural Network)

  • 안종복;강태원;박철원
    • 전기학회논문지
    • /
    • Vol. 67, No. 9
    • /
    • pp.1146-1151
    • /
    • 2018
  • Recently, as computing platforms have improved, research on intelligent electric power facilities has sought to apply artificial intelligence techniques. In particular, for faults occurring in a substation, it should be possible to quickly identify the faulted equipment and minimize power restoration time. This paper presents a fault location technique for a 154 kV substation using a neural network. We constructed a training matrix based on the operating states of the circuit breakers and IEDs to identify the fault location for each component of the target 154 kV substation, such as a line, bus, or transformer. After training the neural network to identify the fault location using the Weka software, the fault location discrimination performance of the designed network was confirmed.
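
The training-matrix idea can be sketched as follows; the breaker/IED status patterns below are invented, and a nearest-neighbour match stands in for the trained neural network.

```python
# Each row pairs a hypothetical vector of circuit breaker and IED operating
# states with the faulted component it corresponds to.
TRAINING_MATRIX = [
    # (CB1, CB2, CB3, IED_line, IED_bus, IED_tr) -> fault location
    ((1, 0, 0, 1, 0, 0), "line"),
    ((1, 1, 0, 0, 1, 0), "bus"),
    ((0, 1, 1, 0, 0, 1), "transformer"),
]

def classify(status):
    # Nearest training row by Hamming distance (a stand-in for the
    # trained neural network's decision).
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(TRAINING_MATRIX, key=lambda row: hamming(row[0], status))[1]

# An observed pattern with one spurious IED flag still resolves correctly:
print(classify((1, 0, 0, 1, 1, 0)))
```

Tolerance to such partially corrupted status vectors is the practical reason for using a learned classifier rather than a fixed lookup.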

Robust architecture search using network adaptation

  • Rana, Amrita;Kim, Kyung Ki
    • 센서학회지
    • /
    • Vol. 30, No. 5
    • /
    • pp.290-294
    • /
    • 2021
  • Experts have designed popular and successful model architectures, which, however, are not optimal for every scenario. Despite the remarkable performance achieved by deep neural networks, manually designed classification networks remain the backbone of object detection. One major challenge is the ImageNet pre-training of the search-space representation; moreover, the searched network incurs a huge computational cost. Therefore, to avoid the pre-training process, we introduce a network adaptation technique that starts from a backbone model pre-trained on ImageNet. The adaptation method can efficiently adapt a manually designed ImageNet network to a new object-detection task. Neural architecture search (NAS) is adopted to adapt the architecture of the network; the adaptation is conducted on MobileNetV2, and the proposed NAS is tested with an SSDLite detector. The results demonstrate improved performance over the existing network architecture in terms of search cost, total number of multiply-add operations (Madds), and mean Average Precision (mAP). The total computational cost of the proposed NAS is much lower than that of state-of-the-art (SOTA) NAS methods.

Training-Free Hardware-Aware Neural Architecture Search with Reinforcement Learning

  • Tran, Linh Tam;Bae, Sung-Ho
    • 방송공학회논문지
    • /
    • Vol. 26, No. 7
    • /
    • pp.855-861
    • /
    • 2021
  • Neural Architecture Search (NAS) is a cutting-edge technology in the machine learning community. NAS Without Training (NASWOT) has recently been proposed to tackle the high computational demands of NAS by leveraging indicators that predict the performance of architectures before training. The advantage of these indicators is that they require no training, so NASWOT reduces the search time and computational cost significantly. However, NASWOT considers only predicted performance, which does not guarantee fast inference on hardware devices. In this paper, we propose a multi-objective reward function that considers both the network's latency and its predicted performance, and we incorporate it into a reinforcement learning approach to search for the best networks with low latency. Unlike other methods that use FLOPs as a latency proxy, which does not reflect actual latency, we obtain the network's latency from a hardware NAS benchmark. We conduct extensive experiments on NAS-Bench-201 with the CIFAR-10, CIFAR-100, and ImageNet-16-120 datasets and show that the proposed method can generate the best network under a latency constraint without training subnetworks.
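
A multi-objective reward of the kind described, trading predicted performance against measured latency, might look like this; the weighting scheme and candidate numbers are assumptions for illustration, not the paper's.

```python
def reward(score, latency_ms, target_ms=5.0, beta=0.5):
    # Combine a training-free performance indicator with measured hardware
    # latency: architectures slower than the latency target are penalized.
    penalty = (latency_ms / target_ms) ** beta if latency_ms > target_ms else 1.0
    return score / penalty

candidates = {
    "net_a": (92.0, 4.0),    # (predicted score, measured latency in ms)
    "net_b": (95.0, 20.0),   # slightly better score, but far too slow
}
best = max(candidates, key=lambda k: reward(*candidates[k]))
print(best)
```

A reinforcement learning controller maximizing this reward would be steered toward `net_a`-like architectures: nearly as accurate, but within the latency budget.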