• Title/Summary/Keyword: 미분 신경망 (differential neural networks)


Optimal Neural Network Controller Design using Jacobian (자코비안을 이용한 최적의 신경망 제어기 설계)

  • 임윤규;정병묵;조지승
    • Journal of the Korean Society for Precision Engineering / v.20 no.2 / pp.85-93 / 2003
  • Generally, it is very difficult to obtain a modeling equation for a multi-variable system because of the coupling relations between its inputs and outputs. To design an optimal controller without a modeling equation, this paper proposes a neural-network (NN) controller that is learned using the Jacobian matrix. Another major characteristic is that the controller consists of two separate NN controllers, namely a proportional control part and a derivative control part. Simulation results for a catamaran system show that the proposed NN controller is superior to LQR in regulation and tracking problems.
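The abstract's idea of adapting separate proportional and derivative NN parts through a plant Jacobian can be sketched minimally as follows. This is an illustrative toy, not the paper's catamaran controller: the plant, gains, and learning rate are assumptions, and each "network" is reduced to a single weight.

```python
import numpy as np

def plant(y, u):
    # illustrative first-order nonlinear plant (not the paper's catamaran model)
    return 0.9 * y + 0.5 * np.tanh(u)

def plant_jacobian(u):
    # Jacobian dy_next/du of the plant, assumed known or identified online
    return 0.5 * (1.0 - np.tanh(u) ** 2)

wp, wd = 0.1, 0.1        # weights of the proportional and derivative parts
ref, y, e_prev = 1.0, 0.0, 0.0
lr = 0.05
for _ in range(300):
    e = ref - y
    de = e - e_prev
    u = wp * e + wd * de           # control = P-part output + D-part output
    y = plant(y, u)
    J = plant_jacobian(u)
    # gradient of 0.5*e^2 w.r.t. each weight flows through the plant Jacobian
    wp += lr * e * J * e
    wd += lr * e * J * de
    e_prev = e

tracking_error = abs(ref - y)
```

The Jacobian replaces a full plant model: only the input-output sensitivity is needed to propagate the tracking error back to the controller weights.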

Performance Improvement of Independent Component Analysis by Adaptive Learning Parameters (적응적 학습파라미터를 이용한 독립성분분석의 성능개선)

  • 조용현;민성재
    • Proceedings of the Korea Multimedia Society Conference / 2003.05b / pp.210-213 / 2003
  • This study proposes a neural-network-based independent component analysis method that applies adaptively adjustable learning parameters to the Newton-method fixed-point algorithm. The learning rate and momentum are adaptively adjusted according to the update state of the demixing matrix in the Newton method, which uses the first derivative in the fixed-point algorithm, in order to improve separation speed and separation performance. Applying the proposed method to the separation of images generated from ten 512×512-pixel images under arbitrary mixing matrices confirmed superior separation performance and faster separation speed compared with the conventional fixed-point algorithm.
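A minimal sketch of the fixed-point ICA idea with an adaptively damped step, assuming a simple oscillation test rather than the authors' exact adaptation rule for learning rate and momentum. Sources, mixing matrix, and damping factor are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
s1 = np.sign(np.sin(np.linspace(0, 40 * np.pi, n)))   # sub-Gaussian source
s2 = rng.laplace(size=n)                              # super-Gaussian source
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])                # arbitrary mixing matrix
X = A @ S

# whitening step required by the fixed-point algorithm
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

w = rng.normal(size=2)
w /= np.linalg.norm(w)
lr, prev_delta = 1.0, np.zeros(2)
for _ in range(100):
    g = np.tanh(Z.T @ w)
    gp = 1.0 - g ** 2
    w_new = (Z @ g) / n - gp.mean() * w      # Newton-type fixed-point update
    if w_new @ w < 0:                        # resolve the sign ambiguity
        w_new = -w_new
    delta = w_new - w
    if delta @ prev_delta < 0:               # oscillation -> damp the step
        lr *= 0.5
    w = w + lr * delta
    w /= np.linalg.norm(w)
    prev_delta = delta

y = w @ Z                                    # one recovered component
corr = max(abs(np.corrcoef(y, s1)[0, 1]), abs(np.corrcoef(y, s2)[0, 1]))
```

The extracted component should correlate strongly with one of the original sources (up to sign and scale), which is the usual success criterion for one-unit fixed-point ICA.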


Performance Improvement Method of Convolutional Neural Network Using Agile Activation Function (민첩한 활성함수를 이용한 합성곱 신경망의 성능 향상)

  • Kong, Na Young;Ko, Young Min;Ko, Sun Woo
    • KIPS Transactions on Software and Data Engineering / v.9 no.7 / pp.213-220 / 2020
  • A convolutional neural network is composed of convolutional layers and fully connected layers, and a nonlinear activation function is used in each layer. The activation function simulates the way a neuron transmits information: it passes a signal between neurons only when the input signal exceeds a certain threshold. Because the conventional activation function has no relationship with the loss function, the process of finding the optimal solution is slow. To improve this, an agile activation function that generalizes the activation function is proposed. The agile activation function can improve the performance of a deep neural network by selecting the optimal agile parameter during learning, using the first derivative of the loss function with respect to the agile parameter in the backpropagation process. Through the MNIST classification problem, we show that agile activation functions outperform conventional activation functions.
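Learning an activation-function parameter via the first derivative of the loss can be shown on a deliberately tiny example, a sketch under assumptions (a single tanh unit, one scale parameter, synthetic data), not the paper's CNN setup:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
target = np.tanh(2.0 * x)       # data generated with "true" scale parameter 2

a = 0.5                         # agile-style parameter, trained like a weight
lr = 0.1
for _ in range(500):
    y = np.tanh(a * x)
    err = y - target
    # dL/da: first derivative of the MSE loss w.r.t. the activation parameter
    grad_a = np.mean(2 * err * (1 - y ** 2) * x)
    a -= lr * grad_a
```

Because the parameter enters the loss directly, gradient descent recovers the scale that generated the data; in a full network the same update runs alongside the weight updates in backpropagation.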

Development of Thermal Power Boiler System Simulator Using Neural Network Algorithm (신경망 알고리즘을 이용한 화력발전 보일러 시스템 시뮬레이터 개발)

  • Lee, Jung Hoon
    • Journal of the Korea Society for Simulation / v.29 no.3 / pp.9-18 / 2020
  • The development of a large-scale thermal power plant control simulator covers water/steam systems, air/combustion systems, pulverizer systems, and turbine/generator systems. Modeling is possible for all systems except the mechanical turbine/generator. There have been attempts to develop neural network simulators for some boiler subsystems, but a simulator for the whole system has never been completed. In particular, auto-tuning, one of the key technology goals of power generation companies, can be achieved only when all systems are modeled with high accuracy. The simulation results show an accuracy of 95 to 99% or more relative to the actual boiler system, so if the field PID controller is fitted to this simulator, it can be used for fault diagnosis or auto-tuning.

State-Feedback Backstepping Controller for Uncertain Pure-Feedback Nonlinear Systems Using Switching Differentiator (불확실한 순궤환 비선형 계통에 대한 스위칭 미분기를 이용한 상태궤환 백스테핑 제어기)

  • Park, Jang-Hyun
    • Journal of IKEEE / v.23 no.2 / pp.716-721 / 2019
  • A novel switching-differentiator-based backstepping controller for uncertain pure-feedback nonlinear systems is proposed. Using an asymptotically convergent switching differentiator, the time derivatives of the virtual controls are estimated directly in every backstepping design step. As a result, the control law has an extremely simple form, and asymptotic stability of the tracking error is guaranteed regardless of parametric or unstructured uncertainties and unmatched disturbances in the considered system. No universal approximators, such as neural networks or fuzzy logic systems adaptively tuned online to cope with system uncertainties, are required. Simulation results show the simplicity and performance of the proposed controller.
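The core ingredient, a switching differentiator, can be sketched in discrete time. This is a generic first-order sliding-mode differentiator with a low-pass filter on the switching output, an assumed stand-in for the paper's asymptotically convergent design; the gain, time constant, and test signal are illustrative.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 2 * np.pi, dt)
x = np.sin(t)                      # signal whose derivative we estimate

M = 2.0                            # switching gain, must exceed |dx/dt|
tau = 0.02                         # low-pass time constant for the estimate
xhat, dhat = 0.0, 0.0
est = np.empty_like(t)
for k, xk in enumerate(x):
    v = M * np.sign(xk - xhat)     # switching term drives xhat onto x
    xhat += v * dt
    dhat += (v - dhat) * dt / tau  # filtered switching output ~ dx/dt
    est[k] = dhat

mask = t > 1.0                     # skip the initial transient
err = np.mean(np.abs(est[mask] - np.cos(t[mask])))
```

Once the sliding mode is reached, the average of the switching term equals the true derivative, so the filtered output tracks cos(t) with only filter lag and chattering ripple.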

Parameter Estimation of Recurrent Neural Networks Using A Unscented Kalman Filter Training Algorithm and Its Applications to Nonlinear Channel Equalization (언센티드 칼만필터 훈련 알고리즘에 의한 순환신경망의 파라미터 추정 및 비선형 채널 등화에의 응용)

  • Kwon Oh-Shin
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.5 / pp.552-559 / 2005
  • Recurrent neural networks (RNNs) trained with gradient-based algorithms such as real-time recurrent learning (RTRL) suffer from a slow convergence rate. These algorithms also need derivative calculations that are not trivial in the error backpropagation process. In this paper, a derivative-free Kalman filter, the so-called unscented Kalman filter (UKF), is presented for training a fully connected RNN in a state-space formulation of the system. The derivative-free Kalman filter learning algorithm gives the RNN fast convergence and good tracking performance without derivative computation. The performance of RNNs with the derivative-free Kalman filter learning algorithm is evaluated through experiments on nonlinear channel equalization.
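The derivative-free flavor of UKF parameter estimation can be shown on a toy scalar model rather than a full RNN: the parameter is treated as the state, and sigma points are pushed through the measurement model, so no gradients of the network are ever computed. The model, noise levels, and κ below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
true_w = 1.5
xs = rng.uniform(-2, 2, 300)
ys = np.tanh(true_w * xs) + 0.05 * rng.normal(size=300)

w, P = 0.0, 1.0          # parameter estimate and its variance
Q, R = 1e-5, 0.05 ** 2   # process / measurement noise
kappa = 2.0
for xk, yk in zip(xs, ys):
    P += Q
    # sigma points for the 1-d parameter state
    s = np.sqrt((1 + kappa) * P)
    pts = np.array([w, w + s, w - s])
    wts = np.array([kappa, 0.5, 0.5]) / (1 + kappa)
    Y = np.tanh(pts * xk)              # propagate through measurement model
    yhat = wts @ Y
    Pyy = wts @ (Y - yhat) ** 2 + R
    Pwy = wts @ ((pts - w) * (Y - yhat))
    K = Pwy / Pyy                      # Kalman gain, no derivatives needed
    w += K * (yk - yhat)
    P -= K * Pyy * K
```

In the paper's setting the state vector holds all RNN weights and the measurement model is the RNN's forward pass; the structure of the update is the same.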

Control Law Design for a Tilt-Duct Unmanned Aerial Vehicle using Sigma-Pi Neural Networks (Sigma-Pi 신경망을 이용한 틸트덕트 무인기의 제어기 설계연구)

  • Kang, Youngshin;Park, Bumjin;Cho, Am;Yoo, Changsun
    • Journal of Aerospace System Engineering / v.11 no.1 / pp.14-21 / 2017
  • A linearly parameterized Sigma-Pi neural network (SPNN) is applied to a tilt-duct unmanned aerial vehicle (UAV) which has very large longitudinal stability ($C_{L{\alpha}}$), making it uncontrollable by a proportional-integral-derivative (PID) controller. It is shown that combined inner-loop and outer-loop SPNN controllers can overcome the sluggish longitudinal dynamics using dynamic inversion and pseudo-control to compensate for reference model error. Simulation results for waypoint guidance are presented to evaluate the performance of the SPNN controller in comparison to a PID controller.
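A Sigma-Pi unit computes a weighted sum ("sigma") of products ("pi") of its inputs, so it is linear in its weights over a product-term basis. A minimal sketch, assuming the classic XOR task rather than the paper's flight-control application:

```python
import numpy as np

# Inputs and XOR targets; a Sigma-Pi unit is linear in its weights over
# product terms, so plain gradient descent on MSE suffices.
X01 = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# product-term basis: bias, x1, x2, x1*x2  (the "Pi" part)
Phi = np.column_stack([np.ones(4), X01[:, 0], X01[:, 1], X01[:, 0] * X01[:, 1]])

w = np.zeros(4)
lr = 0.2
for _ in range(3000):
    pred = Phi @ w                          # the "Sigma" part
    w -= lr * (2 / 4) * Phi.T @ (pred - y)  # MSE gradient step

pred = Phi @ w
```

The linear parameterization is what makes SPNNs attractive for adaptive control: weight adaptation laws stay linear even though the network itself is nonlinear in its inputs.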

Alleviation of Vanishing Gradient Problem Using Parametric Activation Functions (파라메트릭 활성함수를 이용한 기울기 소실 문제의 완화)

  • Ko, Young Min;Ko, Sun Woo
    • KIPS Transactions on Software and Data Engineering / v.10 no.10 / pp.407-420 / 2021
  • Deep neural networks are widely used to solve various problems. However, a deep neural network with many hidden layers frequently suffers from vanishing or exploding gradients, which are major obstacles to its training. In this paper, we propose a parametric activation function to alleviate the vanishing gradient problem that can be caused by a nonlinear activation function. The proposed parametric activation function is obtained by applying parameters that convert the scale and location of the activation function according to the characteristics of the input data, and the loss function can be minimized through backpropagation without limiting the derivative of the activation function. Through the XOR problem with 10 hidden layers and the MNIST classification problem with 8 hidden layers, the performance of the original nonlinear and parametric activation functions was compared, confirming that the proposed parametric activation function is superior in alleviating the vanishing gradient.
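Why a scale parameter helps can be seen numerically: the gradient through a chain of layers shrinks by each layer's activation derivative, and a scale factor multiplies that derivative. A sketch under assumptions (a 10-layer chain evaluated at zero, scale fixed at 4 rather than learned):

```python
import numpy as np

def dsigmoid(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

# Gradient through a 10-layer sigmoid chain shrinks by each layer's
# activation derivative (at most 0.25), so it vanishes quickly.
plain = dsigmoid(0.0) ** 10

# A scale parameter 'a' gives activation sigmoid(a*x); its derivative
# a*sigmoid'(a*x) can reach a/4, keeping the product from vanishing.
a = 4.0                                      # illustrative scale value
parametric = (a * dsigmoid(a * 0.0)) ** 10
```

With the plain sigmoid the ten-layer gradient factor is below 1e-6, while the scaled version keeps it at 1; the paper's point is that learning such parameters lets each layer find a scale that preserves gradient flow.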

Improvement of multi layer perceptron performance using combination of gradient descent and harmony search for prediction of groundwater level (지하수위 예측을 위한 경사하강법과 화음탐색법의 결합을 이용한 다층퍼셉트론 성능향상)

  • Lee, Won Jin;Lee, Eui Hoon
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.186-186 / 2022
  • Predicting fluctuations in the groundwater level caused by rainfall and infiltration is essential for the use and management of groundwater resources. Accurate prediction is needed because groundwater-level fluctuations directly affect not only the use and management of groundwater resources but also flood occurrence and the stress state of the ground. This study improves the optimizer of a multilayer perceptron (MLP), a type of artificial neural network, to enhance groundwater-level prediction. An MLP learns by repeatedly alternating between an optimizer, which finds the optimal relations (weights and biases) between inputs and outputs, and an activation function, which determines the output values. In particular, the optimizer searches for the relations that minimize the error between the network output and the observations, so it directly affects the MLP's learning and prediction performance. Conventional optimizers are based on gradient descent (GD). However, because they use derivatives to search for relations, they mainly perform local search and, lacking a structure for storing previously generated relations, may converge to a local optimum. To remedy these shortcomings, this study employs a metaheuristic optimization algorithm that can consider local and global search simultaneously and that stores previous solutions. A combined model (HS-GD) of harmony search (HS), a structurally simple metaheuristic, and GD was used as the MLP optimizer. To examine the performance of the MLP with HS-GD, groundwater levels in Icheon were predicted, and the results were compared with those of MLPs using the conventional optimizer and HS alone.
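The HS-GD combination can be sketched on a toy objective standing in for the MLP loss: harmony search supplies global exploration with a solution memory, and a few gradient-descent steps polish each candidate locally. The objective, HS parameters, and polish schedule are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(3)
opt = np.array([1.0, -2.0])

def f(x):                       # toy objective standing in for the MLP loss
    return np.sum((x - opt) ** 2)

def grad(x):
    return 2 * (x - opt)

hm_size, hmcr, par, bw = 10, 0.9, 0.3, 0.5
HM = rng.uniform(-5, 5, size=(hm_size, 2))        # harmony memory
scores = np.array([f(h) for h in HM])

for _ in range(200):
    new = np.empty(2)
    for d in range(2):
        if rng.random() < hmcr:                   # pick from memory (global)
            new[d] = HM[rng.integers(hm_size), d]
            if rng.random() < par:                # small pitch adjustment
                new[d] += bw * rng.uniform(-1, 1)
        else:                                     # random exploration
            new[d] = rng.uniform(-5, 5)
    for _ in range(5):                            # GD polish (the local part)
        new -= 0.1 * grad(new)
    s = f(new)
    worst = np.argmax(scores)
    if s < scores[worst]:                         # replace worst harmony
        HM[worst], scores[worst] = new, s

best = HM[np.argmin(scores)]
```

The memory addresses the abstract's criticism of plain GD: previously found solutions are stored and recombined, so the search is not purely local.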


Trajectory Control for a Robot Manipulator Using a Multilayer Neural Network (다층 신경회로망을 사용한 로봇 매니퓰레이터의 궤적제어)

  • 안덕환;이상효
    • The Journal of Korean Institute of Communications and Information Sciences / v.16 no.11 / pp.1186-1193 / 1991
  • This paper proposes a trajectory control method for a robot manipulator using neural networks. The total torque for the manipulator is the sum of the linear feedback controller torque and the neural network feedforward controller torque. The proposed neural network is a multilayer network with time-delay elements, and it learns the inverse dynamics of the manipulator by means of the PD (proportional-derivative) controller error torque. The error backpropagation (BP) learning neural network controller does not directly require manipulator dynamics information; instead, it acquires this information through training and stores it in the connection weights. The control effects of the proposed system are verified by computer simulation.
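The scheme described, feedback torque plus a feedforward term trained on the PD error torque, is feedback-error learning, and can be sketched on a one-mass plant. The "network" here is reduced to a single weight on the desired acceleration, and the plant, gains, and learning rate are assumptions:

```python
import numpy as np

m = 2.0                        # unknown plant inertia: m * qdd = u
dt, kp, kd = 1e-3, 100.0, 20.0
w = 0.0                        # feedforward weight; ideally learns m
q, qd = 0.0, 0.0
lr = 1.0
for tk in np.arange(0, 10, dt):
    q_des = np.sin(tk)         # desired trajectory
    qd_des = np.cos(tk)
    qdd_des = -np.sin(tk)
    e, ed = q_des - q, qd_des - qd
    tau_fb = kp * e + kd * ed            # PD feedback torque
    tau_ff = w * qdd_des                 # feedforward (inverse-dynamics) torque
    u = tau_fb + tau_ff
    qd += (u / m) * dt                   # semi-implicit Euler integration
    q += qd * dt
    # feedback-error learning: the PD torque itself is the teaching signal
    w += lr * tau_fb * qdd_des * dt
```

As the feedforward term absorbs the inverse dynamics, the PD torque (and with it the tracking error) shrinks toward zero, which is exactly the behavior the abstract describes for the trained controller.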
