• Title/Summary/Keyword: neural network training

Search Results: 1,742

An On-Line Adaptive Control of Underwater Vehicles Using Neural Network

  • Kim, Myung-Hyun; Kang, Sung-Won; Lee, Jae-Myung
    • Journal of Ocean Engineering and Technology, v.18 no.2, pp.33-38, 2004
  • An adaptive neural network controller has been developed for a model of an underwater vehicle. The controller combines a radial basis function neural network with sliding mode control techniques, so it exploits the advantages of both approaches and requires no prior off-line training phase. A stable on-line adaptive law is derived using Lyapunov theory. The number of neurons and the width of the Gaussian functions must be chosen carefully. The performance of the controller is demonstrated through computer simulation.
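
A minimal sketch of the kind of controller described above, assuming a 1-DOF toy plant, a Gaussian RBF network whose output weights start at zero and are adapted on-line from the sliding surface, and illustrative centers, widths and gains (none of these values come from the paper):

```python
import numpy as np

# Illustrative only: an RBF network combined with a sliding-mode style control
# term for a 1-DOF toy plant.  The network weights start at zero and are
# adapted on-line, so no off-line training phase is needed.

class RBFAdaptiveController:
    def __init__(self, centers, width=1.0, gamma=10.0, lam=2.0, k=2.0):
        self.c = np.asarray(centers)        # Gaussian centers over (e, e_dot)
        self.width = width                  # common Gaussian width
        self.w = np.zeros(len(self.c))      # output weights, adapted on-line
        self.gamma, self.lam, self.k = gamma, lam, k

    def _phi(self, z):
        # Gaussian basis functions evaluated at the error state z = (e, e_dot)
        d2 = np.sum((z - self.c) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def control(self, e, e_dot, dt):
        s = e_dot + self.lam * e            # sliding surface
        phi = self._phi(np.array([e, e_dot]))
        u = -(self.w @ phi) - self.k * np.tanh(s / 0.1)  # NN term + smoothed switching term
        self.w += self.gamma * s * phi * dt  # Lyapunov-style on-line adaptive law
        return u

# toy simulation: regulate a plant with an unknown nonlinearity to the origin
grid = np.linspace(-2, 2, 5)
centers = np.array([[a, b] for a in grid for b in grid])
ctrl = RBFAdaptiveController(centers)
x, x_dot, dt = 0.5, 0.0, 0.01
for _ in range(2000):
    u = ctrl.control(x, x_dot, dt)          # e = x, e_dot = x_dot (reference is 0)
    x_ddot = -np.sin(x) + u                 # "unknown" vehicle-like dynamics
    x_dot += x_ddot * dt
    x += x_dot * dt
print("final tracking error:", x)
```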

Neural Network Compensation for Impedance Force Controlled Robot Manipulators

  • Jung, Seul
    • International Journal of Fuzzy Logic and Intelligent Systems, v.14 no.1, pp.17-25, 2014
  • This paper presents the formulation of an impedance controller for regulating the contact force with the environment. To achieve accurate force tracking control, uncertainties in both the robot dynamics and the environment need to be addressed. Within the proposed force tracking framework, a neural network is introduced at the desired trajectory to compensate for all uncertainties in an on-line manner. Compensating at the input trajectory offers a notable structural advantage: no modifications of the internal force controllers are required. Minimizing the objective function formed from the training signal yields the desired force tracking performance. Simulation results confirm the position and force tracking abilities of a robot manipulator.
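
A hedged sketch of the trajectory-level compensation idea, assuming a 1-DOF contact task against a linear-spring environment and a small two-layer network whose weights are updated on-line by a gradient step on the squared force error; the plant, gains and learning rule are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

# A small network adds a correction dx to the desired trajectory so that the
# unmodified impedance control law reaches the desired contact force.

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (8, 2))   # hidden-layer weights
W2 = rng.normal(0.0, 0.1, (1, 8))   # output-layer weights
m, b, k = 1.0, 20.0, 100.0          # impedance parameters (assumed)
k_env, x_wall = 500.0, 0.0          # environment stiffness and wall location
lr, dt = 1e-4, 0.001
x, x_dot, f_d = -0.01, 0.0, 5.0     # state and desired contact force

for step in range(5000):
    f = k_env * max(x - x_wall, 0.0)              # measured contact force
    e_f = f_d - f                                 # force tracking error
    z = np.array([e_f, x])                        # network input
    h = np.tanh(W1 @ z)                           # hidden layer
    dx = (W2 @ h).item()                          # on-line trajectory compensation
    x_ref = x_wall + dx                           # compensated desired trajectory
    # unmodified impedance control law driven by the compensated reference
    x_ddot = (-b * x_dot - k * (x - x_ref) + e_f) / m
    x_dot += x_ddot * dt
    x += x_dot * dt
    # on-line update: gradient step that reduces the squared force error
    g = W2.flatten() * (1.0 - h ** 2)             # backprop through tanh
    W2 += lr * e_f * h[None, :]
    W1 += lr * e_f * np.outer(g, z)

print("final contact force:", k_env * max(x - x_wall, 0.0))
```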

A Study on the Nonlinear Modeling of Base Isolator Systems by a Neural Network Theory : Application to Lead Rubber Bearings (신경망 이론을 이용한 지진격리 장치의 비선형 모델링 기법 연구 : 납삽입 적층 고무베어링에 적용한 예)

  • 허영철; 김영중; 김병현
    • Proceedings of the Earthquake Engineering Society of Korea Conference, 2003.03a, pp.433-441, 2003
  • In this paper, a study on the nonlinear modeling of lead rubber bearings (LRBs) by a neural network theory was carried out. Random excitation tests on the LRBs were used to train the neural network model. Numerical simulations using the neural network model were performed on a scaled structural model with the LRBs excited by three types of seismic loads and compared with shaking table tests. The results show that the neural network model is useful for the numerical modeling of LRBs.
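
A small sketch of the identification idea, assuming synthetic "random test" data generated from a toy hysteretic-like model (the real LRB records are not available here) and using scikit-learn's MLPRegressor as the neural network:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Learn the LRB restoring force from random-excitation records, then reuse the
# trained network as the nonlinear bearing model in later simulations.

rng = np.random.default_rng(1)
disp = np.cumsum(rng.normal(0, 0.5, 5000))            # random displacement history (mm)
vel = np.gradient(disp)                                # velocity history
force = 0.8 * disp + 5.0 * np.tanh(2.0 * vel)          # toy hysteretic-like force (kN)

X = np.column_stack([disp, vel])                       # network inputs
net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
net.fit(X, force)                                      # "training" on the random test

# the trained network now serves as the nonlinear LRB model
f_pred = net.predict(np.array([[10.0, 0.5]]))          # force at a given (disp, vel)
print("predicted restoring force:", f_pred[0])
```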


In-Process Monitoring of Chatter Vibration using Multiple Neural Network(II) (복합 신경회로망을 이용한 채터진동의 인프로세스 감시(II))

  • Kim, Jeong-Suk; Kang, Myeong-Chang; Park, Cheol
    • Journal of the Korean Society for Precision Engineering, v.12 no.12, pp.100-108, 1995
  • In-process monitoring of chatter vibration is essential for an automatic manufacturing system. In this study, we constructed a multi-sensing system using a tool dynamometer, an accelerometer and an AE (Acoustic Emission) sensor for more reliable detection of chatter vibration, and we propose a new approach that uses a multiple neural network to extract the features of the multi-sensor signals for chatter recognition. Through back-propagation training, the neural network memorizes and classifies the features of the multi-sensor signals. The results show that the multiple neural network can monitor chatter vibration accurately and can be widely applied to practical unmanned systems.
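
A rough sketch of the multi-sensor recognition idea, assuming synthetic dynamometer, accelerometer and AE signals, a simple hand-picked feature set (RMS, peak, dominant spectral magnitude) and scikit-learn's back-propagation-trained MLPClassifier; all of these are illustrative choices, not the paper's:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def features(sig):
    # simple per-sensor features: RMS, peak and dominant-frequency magnitude
    spec = np.abs(np.fft.rfft(sig))
    return [np.sqrt(np.mean(sig ** 2)), np.max(np.abs(sig)), np.max(spec)]

rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):                                   # 0 = stable cutting, 1 = chatter
    for _ in range(100):
        t = np.linspace(0, 1, 512)
        tone = (3.0 if label else 0.3) * np.sin(2 * np.pi * 180 * t)  # chatter tone
        dyn, acc, ae = (tone + rng.normal(0, 1, 512) for _ in range(3))
        X.append(features(dyn) + features(acc) + features(ae))        # concatenate sensors
        y.append(label)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(np.array(X), np.array(y))                      # back-propagation training
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```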


The Analysis of Liquefaction Evaluation in Ground Using Artificial Neural Network (인공신경망을 이용한 지반의 액상화 가능성 판별)

  • Lee, Song; Park, Hyung-Kyu
    • Journal of the Korean Geotechnical Society, v.18 no.5, pp.37-42, 2002
  • Artificial neural networks are efficient computing techniques that are widely used to solve complex problems in many fields. In this paper, liquefaction potential was estimated using a back-propagation neural network model applied to cyclic triaxial test data, soil parameters and site investigation data. Training and testing of the network were based on a database of 43 cyclic triaxial test results from 00 sites. The network was trained by modifying the weights of the neurons in response to the errors between the actual output values and the target output values. Training was performed iteratively until the average sum-squared error over all the training patterns was minimized, which generally occurred after about 15,000 training cycles. Accuracies from 72% to 98% were obtained for the model with two hidden layers and ten input variables. The most influential input variables were identified as the NOC, $D_{10}$ and $(N_1)_{60}$. The study showed that the neural network model predicts the CSR (cyclic shear stress ratio) of silty sand reasonably well, and the results indicate that the neural network model is more reliable than the simplified method based on the SPT N-value.
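
A minimal sketch of the training loop described above, assuming a two-hidden-layer back-propagation network with ten placeholder input variables and synthetic labels; the real cyclic triaxial database is not reproduced here:

```python
import numpy as np

# Weights are adjusted from the output error, iteratively, until the average
# sum-squared error over all training patterns stops decreasing.

rng = np.random.default_rng(0)
X = rng.normal(size=(43, 10))                  # 43 cases, 10 input variables (placeholders)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)  # toy liquefaction label

sizes = [10, 8, 8, 1]                          # ten inputs, two hidden layers, one output
W = [rng.normal(0, 0.5, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-np.clip(z, -50, 50)))
lr = 0.5

for epoch in range(15000):                     # roughly the 15,000 cycles cited above
    # forward pass
    a = [X]
    for Wi in W:
        a.append(sigmoid(a[-1] @ Wi))
    err = y - a[-1]                            # error between target and actual output
    mse = np.mean(np.sum(err ** 2, axis=1))
    # backward pass: propagate the error and update each weight matrix
    delta = err * a[-1] * (1 - a[-1])
    for i in range(len(W) - 1, -1, -1):
        W[i] += lr * a[i].T @ delta / len(X)
        delta = (delta @ W[i].T) * a[i] * (1 - a[i])

print("final average sum-squared error:", mse)
print("training accuracy:", np.mean((a[-1] > 0.5) == y))
```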

Sparse Feature Convolutional Neural Network with Cluster Max Extraction for Fast Object Classification

  • Kim, Sung Hee; Pae, Dong Sung; Kang, Tae-Koo; Kim, Dong W.; Lim, Myo Taeg
    • Journal of Electrical Engineering and Technology, v.13 no.6, pp.2468-2478, 2018
  • We propose the Sparse Feature Convolutional Neural Network (SFCNN) to reduce the volume of convolutional neural networks (CNNs). Despite the superior classification performance of CNNs, their enormous network volume requires high computational cost and long processing time, making real-time applications such as on-line training difficult. We propose an advanced network that reduces the volume of conventional CNNs by producing a region-based sparse feature map. To produce the sparse feature map, two complementary region-based value extraction methods, cluster max extraction and local value extraction, are proposed; cluster max is selected as the main function based on experimental results. To evaluate SFCNN, we conduct experiments against two conventional CNNs. The network trains 59 times faster and tests 81 times faster than the VGG network, with a 1.2% loss of accuracy in multi-class classification on the Caltech101 dataset. In vehicle classification using the GTI Vehicle Image Database, the network trains 88 times faster and tests 94 times faster than the conventional CNNs, with a 0.1% loss of accuracy.
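
One possible reading of cluster max extraction, sketched under the assumption that it clusters the spatial positions of strong responses in a feature map and keeps a single maximum per cluster; the paper's exact procedure may differ:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy feature map: cluster the non-zero positions, keep one maximum per
# cluster, and obtain a much sparser region-based feature vector.

rng = np.random.default_rng(0)
fmap = rng.random((32, 32)) * (rng.random((32, 32)) > 0.9)   # toy sparse-ish feature map

ys, xs = np.nonzero(fmap)                      # positions of non-zero responses
coords = np.column_stack([ys, xs])
k = 5                                          # number of regions (assumed)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(coords)

sparse_feature = np.zeros(k)
for c in range(k):
    vals = fmap[ys[labels == c], xs[labels == c]]
    sparse_feature[c] = vals.max()             # cluster max: one value per region

print("dense map size:", fmap.size, "-> sparse feature size:", sparse_feature.size)
```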

Robust control of Nonlinear System Using Multilayer Neural Network (다층 신경회로망을 이용한 비선형 시스템의 견실한 제어)

  • Cho, Hyun-Seob
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.6 no.4, pp.243-248, 2013
  • In this paper, we design an indirect adaptive controller using Dynamic Neural Units (DNUs) for unknown nonlinear systems. The proposed controller is based upon the topology of a reverberating circuit in a neuronal pool of the central nervous system. We present a genetic DNU-control scheme for unknown nonlinear systems; unlike methods that use supervised learning algorithms such as the back-propagation (BP) algorithm, it does not need training information at each step. The contributions of this paper are a new approach to constructing the neural network architecture and its training.

Optimal Heating Load Identification using a DRNN (DRNN을 이용한 최적 난방부하 식별)

  • Chung, Kee-Chull; Yang, Hai-Won
    • The Transactions of the Korean Institute of Electrical Engineers A, v.48 no.10, pp.1231-1238, 1999
  • This paper presents an approach to optimal heating load identification using Diagonal Recurrent Neural Networks (DRNNs). The DRNN captures the dynamic nature of a system and, since it is not fully connected, trains much faster than a fully connected recurrent neural network. The DRNN architecture is a modified form of the fully connected recurrent neural network with one hidden layer, the hidden layer being composed of self-recurrent neurons, each feeding its output only back into itself. In this study, dynamic backpropagation (DBP) with the delta-bar-delta learning method is used to train the optimal heating load identifier. The delta-bar-delta learning method is an empirical method that gradually adapts the learning rate during training in order to improve accuracy in a short time. Simulation results based on experimental data show that the proposed model is superior to the other methods in most cases, in terms of both learning speed and identification accuracy.
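
The delta-bar-delta rule itself is standard, so a small sketch is given below on a toy identification problem; the gains kappa, phi and theta and the quadratic example are assumptions, not values from the paper:

```python
import numpy as np

# Delta-bar-delta: each weight keeps its own learning rate, which grows by a
# constant when the current gradient agrees in sign with the averaged past
# gradient and shrinks multiplicatively when it does not.

def delta_bar_delta_step(w, grad, lr, delta_bar, kappa=0.01, phi=0.5, theta=0.7):
    lr = np.where(grad * delta_bar > 0, lr + kappa, lr)    # additive increase
    lr = np.where(grad * delta_bar < 0, lr * phi, lr)      # multiplicative decrease
    w = w - lr * grad                                      # gradient step
    delta_bar = (1 - theta) * grad + theta * delta_bar     # running average of gradients
    return w, lr, delta_bar

# toy identification problem: fit w to minimize ||A w - b||^2
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 3)), rng.normal(size=50)
w = np.zeros(3)
lr = np.full(3, 0.001)
delta_bar = np.zeros(3)
for _ in range(500):
    grad = 2 * A.T @ (A @ w - b) / len(b)
    w, lr, delta_bar = delta_bar_delta_step(w, grad, lr, delta_bar)
print("per-weight learning rates:", lr, "loss:", np.mean((A @ w - b) ** 2))
```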


An Input Feature Selection Method Applied to Fuzzy Neural Networks for Signal Estimation

  • Na, Man-Gyun; Sim, Young-Rok
    • Nuclear Engineering and Technology, v.33 no.5, pp.457-467, 2001
  • It is well known that the performance of a fuzzy neural network strongly depends on the input features selected for its training. In applications to sensor signal estimation, there are a large number of input variables related to an output. As the number of input variables increases, the required training time of a fuzzy neural network increases exponentially. Thus, it is essential to reduce the number of inputs to a fuzzy neural network and to select the optimum number of mutually independent inputs that clearly define the input-output mapping. In this work, principal component analysis (PCA), genetic algorithms (GA) and probability theory are combined to select the important input features. The proposed feature selection method is applied to the signal estimation of the steam generator water level, hot-leg flowrate, pressurizer water level and pressurizer pressure sensors in pressurized water reactors, and is compared with other input feature selection methods.
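
A sketch of the PCA portion only of the PCA + GA + probability scheme, assuming synthetic candidate inputs and an illustrative 95% variance threshold for ranking; the GA and probabilistic steps are not reproduced:

```python
import numpy as np
from sklearn.decomposition import PCA

# Rank candidate sensor inputs by how strongly they load on the principal
# components that explain most of the variance, then keep a small subset of
# nearly independent inputs as the fuzzy neural network inputs.

rng = np.random.default_rng(0)
n_candidates = 12
X = rng.normal(size=(500, n_candidates))          # candidate sensor signals (synthetic)
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=500)   # a redundant (correlated) input

pca = PCA().fit(X)
n_keep = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.95) + 1)
loadings = np.abs(pca.components_[:n_keep])       # |loading| of each input on kept PCs
scores = (loadings * pca.explained_variance_ratio_[:n_keep, None]).sum(axis=0)

ranked = np.argsort(scores)[::-1]
print("inputs ranked by importance:", ranked)
print("suggested subset:", ranked[:6])            # candidate inputs for the network
```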


Parallel, self-organizing, hierarchical neural networks for handwritten digit recognition (필기체 숫자인식을 위한 병렬 자구성 계층 신경회로망)

  • 방극준; 조남신; 강창언; 홍대식
    • Journal of the Korean Institute of Telematics and Electronics B, v.33B no.7, pp.173-182, 1996
  • In this paper, we propose parallel, self-organizing, hierarchical neural networks (PSHNN) as a handwritten digit recognition system. The system can absorb the various shape variations of handwritten digits by using a different feature extraction method in each stage neural network (SNN) of the PSHNN, can reduce training time by using a single-layer neural network as each SNN, and can achieve a high correct recognition rate by using a certainty area on each of the output nodes individually. Experiments have been performed with the NIST database, using 21,315 digits (10,625 digits for training and 10,663 digits for testing). The results show a correct recognition rate of 97.48%, an error rate of 1.72% and a rejection rate of 0.78%.
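
A hedged sketch of the staged, certainty-based decision idea, assuming each stage neural network is approximated by a single-layer (logistic regression) classifier on its own feature set and the "certainty area" is approximated by a probability threshold; data, features and thresholds are all illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each stage classifies with its own feature extraction; a sample is accepted
# only if the winning output is confident enough, otherwise it is passed on to
# the next stage, and finally rejected if no stage is confident.

rng = np.random.default_rng(0)
X1 = rng.normal(size=(1000, 20))                   # stage-1 features (toy)
X2 = rng.normal(size=(1000, 20)) + 0.5 * X1        # stage-2 features (different extraction)
y = rng.integers(0, 10, size=1000)                 # 10 digit classes (toy labels)

stage1 = LogisticRegression(max_iter=1000).fit(X1, y)   # single-layer stand-ins for SNNs
stage2 = LogisticRegression(max_iter=1000).fit(X2, y)

def classify(x1, x2, threshold=0.9):
    p1 = stage1.predict_proba(x1.reshape(1, -1))[0]
    if p1.max() >= threshold:                      # inside the certainty area: accept
        return int(np.argmax(p1)), 1
    p2 = stage2.predict_proba(x2.reshape(1, -1))[0]
    if p2.max() >= threshold:
        return int(np.argmax(p2)), 2
    return None, 0                                 # rejected by all stages

label, stage = classify(X1[0], X2[0])
print("decision:", label, "from stage", stage)
```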
