• Title/Abstract/Keyword: Recurrent neural networks

Search results: 285 (processing time: 0.029 s)

Identification of Finite Automata Using Recurrent Neural Networks

  • Won, Sung-Hwan;Park, Cheol-Hoon
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2008년도 하계종합학술대회
    • /
    • pp.667-668
    • /
    • 2008
  • This paper demonstrates that recurrent neural networks can be used successfully for the identification of finite automata (FAs). A new type of recurrent neural network (RNN) is proposed, and an offline training algorithm for the network, a regulated Levenberg-Marquardt (LM) algorithm, is developed. Simulation results show that the identification and extraction of FAs are practically achievable.

  • PDF
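
The identification task above can be made concrete by generating training data from a known automaton. A minimal sketch, assuming a simple two-state parity acceptor (the paper does not specify its benchmark FAs); the RNN described in the paper would be trained on input/output pairs like these:

```python
# A two-state parity automaton used as an illustrative identification target.
def parity_fa(bits):
    """Run a 2-state FA whose output is 1 when the number of 1s so far is odd."""
    state = 0
    outputs = []
    for b in bits:
        state ^= b             # transition: flip state on input 1
        outputs.append(state)  # output = current state (odd parity)
    return outputs

# input/output pairs an RNN identifier would be fit to
pairs = [(seq, parity_fa(seq)) for seq in ([0, 1, 1, 0], [1, 1, 1])]
```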

딥러닝의 모형과 응용사례 (Deep Learning Architectures and Applications)

  • 안성만
    • 지능정보연구
    • /
    • Vol. 22 No. 2
    • /
    • pp.127-142
    • /
    • 2016
  • Deep learning is an advanced form of the artificial neural network, a model from the field of artificial intelligence, whose layered structure contains multiple stages of hidden layers. The three main deep learning models are the convolutional neural network (CNN), the recurrent neural network (RNN), and the deep belief network (DBN). Among these, the two supervised learning models, the CNN and the RNN, are currently drawing the most attention, with many interesting studies being published. This paper therefore reviews the error backpropagation algorithm, the basic method for optimizing the weights of supervised learning models, and then examines the structures and applications of CNNs and RNNs. The deep belief network, which is not covered in the body of this paper, has so far received relatively less attention than the CNN or the RNN. However, unlike the CNN and the RNN, the DBN is an unsupervised learning model, and since humans and animals learn on their own through observation, unsupervised models will ultimately become a topic that deserves more research.
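
The error backpropagation algorithm the survey reviews can be sketched in a few lines. A minimal one-hidden-layer version, assuming sigmoid activations and a squared-error loss (the survey itself does not fix these choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, y, W1, W2, lr=0.1):
    # forward pass through one hidden layer
    h = sigmoid(W1 @ x)
    o = sigmoid(W2 @ h)
    # backward pass: delta terms for the loss 0.5 * (o - y)^2
    delta_o = (o - y) * o * (1 - o)
    delta_h = (W2.T @ delta_o) * h * (1 - h)
    # gradient descent weight updates
    W2 = W2 - lr * np.outer(delta_o, h)
    W1 = W1 - lr * np.outer(delta_h, x)
    return W1, W2, float(0.5 * np.sum((o - y) ** 2))

# a few gradient steps on a single example drive the loss down
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 2)) * 0.5, rng.normal(size=(1, 3)) * 0.5
x, y = np.array([1.0, 0.5]), np.array([1.0])
losses = []
for _ in range(50):
    W1, W2, loss = backprop_step(x, y, W1, W2)
    losses.append(loss)
```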

그래프 합성곱-신경망 구조 탐색 : 그래프 합성곱 신경망을 이용한 신경망 구조 탐색 (Graph Convolutional - Network Architecture Search : Network architecture search Using Graph Convolution Neural Networks)

  • 최수연;박종열
    • 문화기술의 융합
    • /
    • Vol. 9 No. 1
    • /
    • pp.649-654
    • /
    • 2023
  • This paper proposes a neural architecture search model built on graph convolutional neural networks. Because deep learning trains as a black box, it is difficult to verify whether a designed model has an optimally performing structure. A neural architecture search (NAS) model consists of a recurrent network that generates models and the generated network, a convolutional neural network. Conventional NAS models use a recurrent network as the generator, but here we propose GC-NAS, which generates convolutional network models using a graph convolutional network instead. The proposed GC-NAS searches over depth with a Layer Extraction Block and, with a Hyper Parameter Prediction Block, searches spatial and temporal information (hyperparameters) in parallel based on the depth information. Because it reflects depth information, its search space is wider, and because the depth-guided parallel search gives the search a clear objective, we judge the proposed model to be theoretically superior in structure to conventional NAS models. Through its graph convolution blocks and graph generation algorithm, GC-NAS is expected to resolve the high-dimensional time-axis problem and the limited spatial search range that recurrent networks impose on existing NAS models. We also hope that the proposed GC-NAS will spur active research on applying graph convolutional networks to neural architecture search.
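
The graph convolution operation that GC-NAS builds on can be sketched directly. A minimal version, assuming the standard propagation rule H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W); the paper's Layer Extraction and Hyper Parameter Prediction Blocks would stack layers like this one:

```python
import numpy as np

def graph_conv(A, H, W):
    """One graph convolution layer over adjacency A, features H, weights W."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric degree normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
```

With a two-node graph, identity features, and identity weights, every entry of the output is 0.5, since each row of the normalized adjacency averages the two nodes.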

카오틱 신경망을 이용한 다입력 다출력 시스템의 단일 예측 (The Single Step Prediction of Multi-Input Multi-Output System using Chaotic Neural Networks)

  • 장창화;김상희
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 1999년도 하계종합학술대회 논문집
    • /
    • pp.1041-1044
    • /
    • 1999
  • In this paper, we investigate single-step prediction of the output responses of a multi-input multi-output chaotic system using chaotic neural networks. Since the internal parameters of systems with chaotic characteristics are coupled, chaotic neural networks are well suited to predicting the output responses of chaotic systems. To evaluate the performance of the proposed neural network predictor, we adopt the Lorenz attractor, which exhibits chaotic responses, and compare the results with those of recurrent neural networks. The results demonstrate better convergence and shorter computation time than a predictor using recurrent neural networks, and they also show the good predictive capability of the chaotic neural network predictor.

  • PDF
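
The single-step prediction target above can be made concrete by generating state pairs from the Lorenz system. A minimal sketch, assuming the standard parameters (sigma=10, rho=28, beta=8/3) and a simple Euler integrator; the predictor in the paper maps each state to the next:

```python
import numpy as np

def lorenz_trajectory(n_steps, dt=0.01, state=(1.0, 1.0, 1.0)):
    """Integrate the Lorenz system with forward Euler, returning the states."""
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    x, y, z = state
    traj = []
    for _ in range(n_steps):
        traj.append((x, y, z))
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    return np.array(traj)

traj = lorenz_trajectory(1000)
inputs, targets = traj[:-1], traj[1:]   # single-step prediction pairs
```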

Understanding recurrent neural network for texts using English-Korean corpora

  • Lee, Hagyeong;Song, Jongwoo
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 27 No. 3
    • /
    • pp.313-326
    • /
    • 2020
  • Deep learning is the most important key to the development of artificial intelligence (AI). There are several distinguishable architectures of neural networks, such as the MLP, CNN, and RNN. Among them, we try to understand one of the main architectures, the recurrent neural network (RNN), which differs from other networks in handling sequential data, including time series and texts. As one of the main recent tasks in natural language processing (NLP), we consider neural machine translation (NMT) using RNNs. We summarize the fundamental structures of recurrent networks and some approaches to representing natural words as reasonable numeric vectors. We organize the topics needed to understand the estimation procedure, from representing input source sequences to predicting target translated sequences. In addition, we apply multiple translation models with gated recurrent units (GRUs) in Keras to English-Korean sentences comprising about 26,000 pairwise sequences in total from two different corpora, colloquialism and news. We verified some crucial factors that influence the quality of training. We found that the loss decreases with more recurrent dimensions and with a bidirectional RNN in the encoder when dealing with short sequences. We also computed BLEU scores, the main measure of translation performance, and compared them with the scores from Google Translate on the same test sentences. We sum up some difficulties in training a proper translation model as well as in dealing with the Korean language. The use of Keras in Python for the overall tasks, from processing raw texts to evaluating the translation model, also allows us to include some useful functions and vocabulary libraries.
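
The GRU encoders discussed above rest on a simple per-step state update. A minimal NumPy sketch of one GRU step, assuming the standard update/reset gate formulation; the weight shapes and names here are illustrative, not taken from the paper:

```python
import numpy as np

def gru_step(x, h, params):
    """One GRU update: gates interpolate between the old and candidate state."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = 1 / (1 + np.exp(-(Wz @ x + Uz @ h)))   # update gate
    r = 1 / (1 + np.exp(-(Wr @ x + Ur @ h)))   # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde           # interpolated new state

# with all-zero weights the gates are 0.5 and the state simply halves
zeros = [np.zeros((2, 3)), np.zeros((2, 2))] * 3
params = (zeros[0], zeros[1], zeros[2], zeros[3], zeros[4], zeros[5])
h_new = gru_step(np.ones(3), np.array([1.0, -1.0]), params)
```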

DRNN을 이용한 최적 난방부하 식별 (Optimal Heating Load Identification using a DRNN)

  • 정기철;양해원
    • 대한전기학회논문지:전력기술부문A
    • /
    • Vol. 48 No. 10
    • /
    • pp.1231-1238
    • /
    • 1999
  • This paper presents an approach to optimal heating load identification using diagonal recurrent neural networks (DRNNs). The DRNN captures the dynamic nature of a system, and since it is not fully connected, training is much faster than for a fully connected recurrent neural network. The architecture of the DRNN is a modified fully connected recurrent neural network with one hidden layer, in which the hidden layer is comprised of self-recurrent neurons, each feeding its output only into itself. In this study, dynamic backpropagation (DBP) with the delta-bar-delta learning method is used to train an optimal heating load identifier. The delta-bar-delta learning method is an empirical method that gradually adapts the learning rate during training in order to improve accuracy in a short time. Simulation results based on experimental data show that the proposed model is superior to the other methods in most cases, in regard to not only learning speed but also identification accuracy.

  • PDF
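
The delta-bar-delta rule mentioned above adapts each weight's learning rate from the agreement between the current gradient and a running average of past gradients. A minimal sketch; the constants kappa, phi, and theta are illustrative, not the paper's values:

```python
def delta_bar_delta(lr, grad, grad_bar, kappa=0.01, phi=0.1, theta=0.7):
    """Adapt one weight's learning rate from gradient sign agreement."""
    if grad * grad_bar > 0:
        lr = lr + kappa          # consistent sign: grow additively
    elif grad * grad_bar < 0:
        lr = lr * (1 - phi)      # sign flip: shrink multiplicatively
    grad_bar = (1 - theta) * grad + theta * grad_bar  # exponential average
    return lr, grad_bar
```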

유연성 로봇 링크의 위치제어를 위한 신경망 제어기의 설계 (The Design of Neural Networks Controller for Position Control of Flexible Robot Link)

  • 탁한호;이주원;이상배
    • 한국지능시스템학회:학술대회논문집
    • /
    • 한국퍼지및지능시스템학회 1997년도 추계학술대회 학술발표 논문집
    • /
    • pp.121-124
    • /
    • 1997
  • In this paper, the application of a self-recurrent neural network-based adaptive controller to the position control of a flexible robot link is considered. The self-recurrent neural network can approximate any continuous function to any desired degree of accuracy, and its weights are updated by a feedback-error learning algorithm. A comparative analysis was made with a linear controller through simulation. The results illustrate the advantages and improved performance of the proposed position tracking controller over the conventional linear controller.

  • PDF
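
The feedback-error learning scheme mentioned above uses a conventional feedback controller's output both as part of the command and as the training signal for the network. A minimal sketch, assuming a network that is linear in its weights (the paper's network and plant are not specified here):

```python
import numpy as np

def feedback_error_step(w, features, u_fb, lr=0.1):
    """One feedback-error learning update for a linear-in-weights network."""
    u_nn = w @ features            # network's contribution to the command
    u = u_fb + u_nn                # total control input sent to the plant
    w = w + lr * u_fb * features   # feedback output acts as the error signal
    return w, u
```

As the network learns the inverse dynamics, the feedback term shrinks toward zero and the network takes over the control.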

Recurrent Neural Network Adaptive Equalizers Based on Data Communication

  • Jiang, Hongrui;Kwak, Kyung-Sup
    • Journal of Communications and Networks
    • /
    • Vol. 5 No. 1
    • /
    • pp.7-18
    • /
    • 2003
  • In this paper, a decision feedback recurrent neural network equalizer and a modified real-time recurrent learning algorithm are proposed, and an adaptive adjustment of the learning step is also put forward. Then, the complex case is considered: a decision feedback complex recurrent neural network equalizer and a modified complex real-time recurrent learning algorithm are proposed. Moreover, the weights of the decision feedback recurrent neural network equalizer under burst-interference conditions are analyzed, and two anti-burst-interference algorithms that prevent the equalizer from failing are presented, applicable to both the real and complex cases. The performance of the recurrent neural network equalizer is analyzed based on numerical results.
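
The decision feedback structure underlying the equalizer can be sketched in its simplest linear form: past symbol decisions are filtered and subtracted from the feedforward output before a new hard decision is made. This sketch assumes linear filters and BPSK symbols; the paper replaces the linear filters with a recurrent network:

```python
import numpy as np

def dfe_step(received_window, past_decisions, ff_taps, fb_taps):
    """One decision-feedback equalizer step with a hard BPSK decision."""
    y = ff_taps @ received_window - fb_taps @ past_decisions
    return 1.0 if y >= 0 else -1.0   # slice to the nearest BPSK symbol
```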

복잡한 도로 상태의 동적 비선형 제어를 위한 학습 신경망 (A Dynamic Neural Networks for Nonlinear Control at Complicated Road Situations)

  • 김종만;신동용;김원섭;김성중
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 2000년도 하계학술대회 논문집 D
    • /
    • pp.2949-2952
    • /
    • 2000
  • A new neural network and learning algorithm are proposed in order to measure the nonlinear heights of complex road environments in real time without prior information. The new network is the Error Self-Recurrent Neural Network (ESRN), whose structure is similar to a recurrent neural network: a delayed output serves as an input, and the delayed error between the plant output and the network output serves as a bias input. In addition, the desired values of the hidden layer are computed by an optimal method instead of being transferred by backpropagation, and the weights are updated by recursive least squares (RLS). Consequently, the network is not sensitive to the initial weights or the learning rate, and it converges faster than conventional neural networks. With the ESRN and its learning algorithm, nonlinear models can be estimated and controlled in real time. To show its performance, we control a seven-degree-of-freedom full-car model with several control methods. The simulations show that the proposed estimator and controller are effective for measuring nonlinear road environment systems.

  • PDF
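
The RLS weight update at the core of the ESRN training can be sketched for a single output. A minimal version, assuming the standard RLS recursion with a forgetting factor lam (the value here is illustrative):

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One recursive least squares step for weights w and target d."""
    y = w @ x                               # current prediction
    k = (P @ x) / (lam + x @ P @ x)         # gain vector
    w = w + k * (d - y)                     # weight correction
    P = (P - np.outer(k, x @ P)) / lam      # inverse-correlation update
    return w, P
```

A few updates on a constant regressor already drive the weight close to the least-squares solution, illustrating the fast convergence the abstract claims for RLS over gradient learning.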

오차 자기 순환 신경회로망을 이용한 현가시스템 인식과 슬라이딩 모드 제어기 개발 (Identification of suspension systems using error self recurrent neural network and development of sliding mode controller)

  • 송광현;이창구;김성중
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 1997년도 한국자동제어학술회의논문집; 한국전력공사 서울연수원; 17-18 Oct. 1997
    • /
    • pp.625-628
    • /
    • 1997
  • In this paper, a new neural network and a sliding mode suspension controller are proposed. The neural network is an error self-recurrent neural network. For fast on-line learning, the recursive least squares method is used. The new neural network converges considerably faster than the backpropagation algorithm and has the advantage of being less affected by poor initial weights and learning rates. The suspension controller is designed using the sliding mode technique based on the newly proposed neural network.

  • PDF
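
The sliding mode technique mentioned above drives the tracking error onto a sliding surface with a switching control term. A minimal scalar sketch, assuming a surface s = c*e + e_dot and a saturation function to limit chattering; the gains c, k, and the boundary width are illustrative, not the paper's design values:

```python
import numpy as np

def sliding_mode_control(e, e_dot, c=2.0, k=5.0, boundary=0.1):
    """Switching control for error e and error rate e_dot."""
    s = c * e + e_dot                       # sliding surface
    sat = np.clip(s / boundary, -1.0, 1.0)  # saturated sign function
    return -k * sat                         # control pushes s toward zero
```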