• Title/Summary/Keyword: neural network training

Search results: 1,750

Indirect Adaptive Control of Nonlinear Systems Using a Genetic-Algorithm-Based Dynamic Neural Network (GA 학습 방법 기반 동적 신경 회로망을 이용한 비선형 시스템의 간접 적응 제어)

  • Cho, Hyun-Seob;Oh, Myoung-Kwan
    • Proceedings of the KAIS Fall Conference
    • /
    • 2007.11a
    • /
    • pp.81-84
    • /
    • 2007
  • In this thesis, we design an indirect adaptive controller using Dynamic Neural Units (DNUs) for unknown nonlinear systems. The proposed controller is based on the topology of a reverberating circuit in a neuronal pool of the central nervous system. We present a genetic DNU-control scheme for unknown nonlinear systems; unlike methods based on supervised learning algorithms such as backpropagation (BP), it needs no training information at each step. The contributions of this thesis are a new approach to constructing the neural network architecture and to its training.
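
The gradient-free, GA-based training that the abstract contrasts with BP can be sketched as a simple evolutionary loop over weight vectors. This is illustrative only; the fitness function, population size, and mutation scale here are assumptions, not the paper's settings:

```python
import random

def evolve_weights(fitness, dim, pop_size=20, gens=60, sigma=0.3, seed=0):
    """Minimal GA loop for tuning network weights without supervised
    gradients: keep the best half each generation, refill with mutated
    copies of survivors (elitism keeps the best-so-far)."""
    rnd = random.Random(seed)
    pop = [[rnd.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                    # lower fitness = better
        survivors = pop[: pop_size // 2]
        pop = survivors + [
            [w + rnd.gauss(0, sigma) for w in rnd.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    return min(pop, key=fitness)

# Toy fitness: squared distance of the weight vector from a known target.
target = [0.5, -0.2]
best = evolve_weights(lambda w: sum((a - b) ** 2 for a, b in zip(w, target)),
                      dim=2)
```

The point of the sketch is the contrast with BP: no per-step training signal is needed, only a scalar fitness evaluated per candidate.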

Training Sample and Feature Selection Methods for Pseudo Sample Neural Networks (의사 샘플 신경망에서 학습 샘플 및 특징 선택 기법)

  • Heo, Gyeongyong;Park, Choong-Shik;Lee, Chang-Woo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.4
    • /
    • pp.19-26
    • /
    • 2013
  • The pseudo sample neural network (PSNN) is a variant of the traditional neural network that uses pseudo samples to mitigate convergence to local optima when the number of training samples is small, taking advantage of the smoothed solution space the pseudo samples provide. PSNN addresses the quantity of training samples, whereas this paper presents methods that stress their quality to further improve PSNN's performance. Since typical samples and highly correlated features evidently help in training, kernel density estimation is used to select typical samples and a correlation factor is introduced to select features. A debris-flow data set is used to demonstrate the usefulness of the proposed methods.
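
The typical-sample selection step can be sketched with a plain Gaussian KDE in one dimension. The bandwidth and keep ratio below are assumed for illustration, not taken from the paper:

```python
import math

def gaussian_kde_density(samples, x, bandwidth=1.0):
    """Average Gaussian kernel density of point x over the sample set."""
    n = len(samples)
    return sum(
        math.exp(-((x - s) ** 2) / (2 * bandwidth ** 2))
        / (bandwidth * math.sqrt(2 * math.pi))
        for s in samples
    ) / n

def select_typical(samples, keep_ratio=0.5, bandwidth=1.0):
    """Keep the highest-density (most 'typical') samples for training."""
    scored = sorted(samples,
                    key=lambda x: gaussian_kde_density(samples, x, bandwidth),
                    reverse=True)
    k = max(1, int(len(samples) * keep_ratio))
    return scored[:k]

data = [0.9, 1.0, 1.1, 1.05, 5.0, 0.95]   # 5.0 is an atypical point
typical = select_typical(data, keep_ratio=0.5)
```

The low-density point (5.0) is dropped, which is the sense in which KDE picks "typical" training samples.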

An Empirical Closed-Loop Modeling of a Suspension System Using a Neural Network (신경회로망을 이용한 폐회로 현가장치의 시스템 모델링)

  • 김일영;정길도;노태수;홍동표
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1996.11a
    • /
    • pp.384-388
    • /
    • 1996
  • The closed-loop modeling of an active/semi-active suspension system has been accomplished with an artificial neural network. A 7-DOF full model was derived as the system equation of motion, and an output-feedback linear quadratic regulator (LQR) was designed for control. The training data for the neural network were obtained through computer simulation: the 7-DOF full model with the LQR controller was simulated under several road conditions, such as sinusoidal and rectangular bumps. A general multilayer perceptron is used for the dynamic modeling, with the target outputs fed back to the input layer, and backpropagation is used as the training algorithm. The system modeling and model validation are shown through computer simulations.
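
Feeding the target outputs back to the input layer corresponds to building series-parallel (NARX-style) training pairs, where the plant's own past outputs become network inputs. A minimal sketch, with a tiny synthetic plant standing in for the 7-DOF simulation:

```python
def narx_training_pairs(u, y, nu=2, ny=2):
    """Build input/target pairs for series-parallel identification:
    each input vector is [u[t-nu..t-1], y[t-ny..t-1]], i.e. the plant's
    past (target) outputs are fed back into the network input."""
    pairs = []
    for t in range(max(nu, ny), len(y)):
        x = u[t - nu:t] + y[t - ny:t]   # past inputs + past target outputs
        pairs.append((x, y[t]))
    return pairs

# Synthetic first-order plant: y[t] = 0.5*u[t-1] + 0.25*y[t-1], y[0] = 0.
u = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
y = [0.0, 0.0, 0.5, 0.125, 0.53125, 0.1328125]
pairs = narx_training_pairs(u, y)
```

Each pair is then a supervised example for the perceptron; at validation time the network's own predictions can replace the fed-back targets.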

Improvement of PM Forecasting Performance by Removing Outlier Data (Outlier 데이터 제거를 통한 미세먼지 예보성능의 향상)

  • Jeon, Young Tae;Yu, Suk Hyun;Kwon, Hee Yong
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.6
    • /
    • pp.747-755
    • /
    • 2020
  • In this paper, we deal with the outlier-data problems that occur when constructing a PM2.5 fine dust forecasting system using a neural network. When training a neural network, some of the data do not help learning but rather disturb it; these are called outlier data, and including them in the training data causes problems such as overfitting. Having found several outliers in the training data of our PM2.5 forecasting system, we remove them and train in three ways. The Over_outlier model removes outliers whose target concentration is low but whose forecast is high; the Under_outlier model removes outliers whose target concentration is high but whose forecast is low; the All_outlier model removes both. We compare the three models with a conventional outlier-removal model and a non-removal model, and our outlier-removal models show better performance than the others.
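
The three removal schemes can be sketched as index filters over (target, forecast) pairs. The thresholds below are illustrative, since the abstract does not give the paper's exact criteria:

```python
def split_outliers(targets, forecasts, low_thr, high_thr):
    """Index sets for the two outlier types (thresholds are assumed)."""
    over = [i for i, (t, f) in enumerate(zip(targets, forecasts))
            if t <= low_thr and f >= high_thr]   # target low, forecast high
    under = [i for i, (t, f) in enumerate(zip(targets, forecasts))
             if t >= high_thr and f <= low_thr]  # target high, forecast low
    return over, under

def remove(data, idx):
    """Drop the listed indices from a sequence."""
    drop = set(idx)
    return [d for i, d in enumerate(data) if i not in drop]

targets   = [10, 80, 15, 70, 40]
forecasts = [75, 12, 20, 65, 42]
over, under = split_outliers(targets, forecasts, low_thr=20, high_thr=60)
# All_outlier model: drop both kinds before retraining.
clean_targets = remove(targets, over + under)
```

The Over_outlier and Under_outlier models would each pass only one of the two index lists to `remove` before retraining.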

Minimisation Technique for Seismic Noise Using a Neural Network (인공신경망을 이용한 탄성파 잡음제거)

  • Hwang Hak Soo;Lee Sang Kyu;Lee Tai Sup;Sung Nak Hoon
    • Geophysics and Geophysical Exploration
    • /
    • v.3 no.3
    • /
    • pp.83-87
    • /
    • 2000
  • A noise prediction filter using a local/remote reference was developed to obtain high-quality data from seismic surveys over areas where seismic transmission power is limited. The filter is a three-layer neural network trained with backpropagation. An NRF (Noise Reduction Factor) value of about 3.0 was obtained by applying training and test data to the trained noise prediction filter. The scaling technique generally used to minimize EM noise in electric and electromagnetic data cannot reduce seismic noise, however, since it accounts only for an amplitude difference between the two time series measured at the primary and reference sites.
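
One common way to express a noise reduction factor of this kind is as an RMS ratio of the noise before and after filtering. This sketch assumes that definition; the paper's exact formula may differ:

```python
import math

def rms(x):
    """Root-mean-square amplitude of a time series."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def noise_reduction_factor(raw_noise, residual_noise):
    """NRF as the RMS ratio of noise before filtering to the residual
    left after subtracting the filter's noise prediction."""
    return rms(raw_noise) / rms(residual_noise)

raw      = [3.0, -3.0, 3.0, -3.0]   # noise amplitude before filtering
residual = [1.0, -1.0, 1.0, -1.0]   # residual after prediction is removed
nrf = noise_reduction_factor(raw, residual)
```

Under this definition, an NRF of about 3.0 means the filter removed roughly two-thirds of the noise amplitude.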

Aircraft Identification and Orientation Estimation Using a Multi-Layer Neural Network (다층 신경망을 사용한 항공기 인식 및 3차원 방향 추정)

  • Kim, Dae-Young;Chien, Sung-Il;Son, Hyon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.16 no.1
    • /
    • pp.35-45
    • /
    • 1991
  • A multilayer neural network trained with the backpropagation learning algorithm is used to identify different classes of aircraft and estimate their orientation over a variety of 3-D orientations. An in-plane distortion-invariant $(L, \Phi)$ feature was extracted from each aircraft image to train the neural network classifier. For aircraft identification, the optimal structure of the classifier is studied to obtain high classification performance. An effective reduction of learning time was achieved by using a modified backpropagation learning algorithm and varying the learning parameters.

A Method for Optimizing the Structure of Neural Networks Based on Information Entropy

  • Yuan Hongchun;Xiong Fanlnu;Kei, Bai-Shi
    • Proceedings of the Korea Intelligent Information Systems Society Conference
    • /
    • 2001.01a
    • /
    • pp.30-33
    • /
    • 2001
  • The number of hidden neurons of a feed-forward neural network is generally decided on the basis of experience. This usually results in a lack or redundancy of hidden neurons, causing either insufficient capacity for storing information or over-learning. This research proposes a new method for optimizing the number of hidden neurons based on information entropy. First, an initial neural network with enough hidden neurons is trained on a set of training samples. Second, the activation values of the hidden neurons are calculated by inputting the training samples that the trained network identifies correctly. Third, candidate partitions of these activations are tried and their information gain is calculated, from which a decision tree correctly dividing the whole sample space is constructed. Finally, the important, relevant hidden neurons included in the tree are found by searching it, the remaining redundant hidden neurons are deleted, and thus the number of hidden neurons is decided. The proposed method is applied to build a neural network with the best number of hidden units for tea-quality evaluation, and the result shows that the method is effective.
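
The information-gain computation behind the decision-tree step can be sketched for a single hidden neuron's activations, using a binary split at an assumed threshold:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    if not labels:
        return 0.0
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(activations, labels, threshold):
    """Gain from splitting the samples on one hidden neuron's activation,
    as in the decision-tree step of the entropy-based pruning method."""
    left  = [l for a, l in zip(activations, labels) if a <  threshold]
    right = [l for a, l in zip(activations, labels) if a >= threshold]
    n = len(labels)
    cond = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - cond

acts   = [0.1, 0.2, 0.8, 0.9]   # activations of one hidden neuron
labels = ['A', 'A', 'B', 'B']   # classes of the correctly identified samples
gain = information_gain(acts, labels, threshold=0.5)
```

A neuron whose activations yield high-gain splits ends up in the tree and is kept; neurons contributing no gain are the pruning candidates.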

Dust Prediction System based on Incremental Deep Learning (증강형 딥러닝 기반 미세먼지 예측 시스템)

  • Sung-Bong Jang
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.6
    • /
    • pp.301-307
    • /
    • 2023
  • Deep learning requires building a deep neural network, collecting a large amount of training data, and then training the network for a long time; if training does not proceed properly or overfitting occurs, it fails. With the deep learning tools developed so far, collecting training data and learning take a long time, but the rapid advent of the mobile environment and the increase in sensor data are driving demand for real-time deep learning that dramatically reduces the time required for neural network learning. In this study, a real-time deep learning system was implemented on an Arduino equipped with a fine dust sensor. The system measures fine dust data every 30 seconds, and once 120 readings have accumulated, it learns using the previously accumulated data and the newly accumulated data as a dataset. The network consists of one input layer, one hidden layer, and one output layer. To evaluate the system, learning time and root mean square error (RMSE) were measured: the average learning error was 0.04053796, and the average learning time of one epoch was about 3,447 seconds.
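
The 30-second/120-reading accumulation scheme can be sketched as a sliding-window buffer that hands each full window to a training routine. The training routine is stubbed here; the Arduino sensor loop and the actual 1-hidden-layer network are omitted:

```python
from collections import deque

WINDOW = 120   # readings kept for each retraining round (paper's setting)

class IncrementalTrainer:
    """Accumulate sensor readings; once the buffer is full, hand the
    window (old + newly accumulated data) to a training routine."""
    def __init__(self, train_fn):
        self.buffer = deque(maxlen=WINDOW)   # oldest reading drops out
        self.train_fn = train_fn
        self.rounds = 0

    def add_reading(self, value):
        self.buffer.append(value)
        if len(self.buffer) == WINDOW:
            self.train_fn(list(self.buffer))   # retrain on current window
            self.rounds += 1

trained_on = []
trainer = IncrementalTrainer(trained_on.append)
for i in range(121):                # e.g. one reading every 30 seconds
    trainer.add_reading(float(i))
```

With a `deque(maxlen=WINDOW)`, each new reading after the first full window displaces the oldest one, so retraining always sees the freshest 120 samples.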

Sensorless Speed Control of Induction Motor by Neural Network (신경회로망을 이용한 유도전동기의 센서리스 속도제어)

  • 김종수;김덕기;오세진;이성근;유희한;김성환
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.26 no.6
    • /
    • pp.695-704
    • /
    • 2002
  • Generally, an induction motor controller requires a rotor speed sensor for commutation and current control, which increases the cost and size of the motor. Various approaches, including speed-sensorless vector control, have therefore been reported, and some have been put to practical use. In this paper, a new speed-estimation method using neural networks is proposed. The optimal network structure was found by trial and error: an 8-16-1 network gave correct results for the instantaneous rotor speed. Supervised learning, in which the network is trained on presented input/output patterns, is used, with the backpropagation technique adjusting the network weights during training. The rotor speed is then calculated from the weights and the eight inputs to the network. The proposed method is independent of machine parameters, insensitive to the load condition, and stable in low-speed operation.

Stable Path Tracking Control of a Mobile Robot Using a Wavelet Based Fuzzy Neural Network

  • Oh, Joon-Seop;Park, Jin-Bae;Choi, Yoon-Ho
    • International Journal of Control, Automation, and Systems
    • /
    • v.3 no.4
    • /
    • pp.552-563
    • /
    • 2005
  • In this paper, we propose a wavelet-based fuzzy neural network (WFNN) direct adaptive control scheme for the tracking problem of mobile robots. To design the controller, we present a WFNN structure that merges the advantages of neural networks, fuzzy models and the wavelet transform. The basic idea of the WFNN structure is to realize the fuzzy reasoning of a wavelet fuzzy system with the structure of a neural network, so that the parameters of fuzzy reasoning are expressed by the connection weights of the network. In our control system, the control signals are obtained directly by minimizing the difference between the reference track and the pose of the mobile robot via the gradient descent (GD) method. In addition, adaptive learning rates for training the WFNN controller are derived via a Lyapunov stability analysis to guarantee fast convergence; that is, the learning rates are adaptively determined to rapidly minimize the state errors of the mobile robot. Finally, to evaluate the performance of the proposed direct adaptive control system, we compare the control results of the WFNN controller with those of the FNN, WNN and WFM controllers.
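
The adaptive-learning-rate idea can be illustrated with a scalar gradient-descent loop that grows the rate while the error falls and shrinks it otherwise. This is a generic heuristic stand-in, not the paper's Lyapunov-derived rule:

```python
def adaptive_gd(f, grad, x0, lr=0.5, up=1.1, down=0.5, steps=100):
    """Gradient descent with a simple adaptive learning rate:
    accept a step and grow lr when the error decreases; otherwise
    reject the step and shrink lr."""
    x, fx = x0, f(x0)
    for _ in range(steps):
        x_new = x - lr * grad(x)
        f_new = f(x_new)
        if f_new < fx:
            x, fx = x_new, f_new   # accept improving step
            lr *= up
        else:
            lr *= down             # backtrack: keep x, reduce the rate
    return x

# Track a scalar reference r by minimizing the squared error (x - r)^2,
# a one-dimensional stand-in for the robot's pose error.
r = 3.0
x_star = adaptive_gd(lambda x: (x - r) ** 2, lambda x: 2 * (x - r), x0=0.0)
```

The Lyapunov analysis in the paper plays the role of this accept/shrink heuristic in principled form: it bounds the learning rates so each update is guaranteed to decrease the error.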