• Title/Summary/Keyword: Recurrent Neural Networks

A Survey on Neural Networks Using Memory Component (메모리 요소를 활용한 신경망 연구 동향)

  • Lee, Jihwan; Park, Jinuk; Kim, Jaehyung; Kim, Jaein; Roh, Hongchan; Park, Sanghyun
    • KIPS Transactions on Software and Data Engineering, v.7 no.8, pp.307-324, 2018
  • Recently, recurrent neural networks have attracted attention for solving prediction problems on sequential data through a structure that takes time dependency into account. However, as the number of time steps in the sequential data increases, the vanishing-gradient problem arises. Long short-term memory models have been proposed to solve this problem, but they are limited in how much data they can store and how long they can preserve it. Therefore, research on memory-augmented neural networks (MANN), learning models that combine recurrent neural networks with memory elements, has been conducted actively. In this paper, we describe the structure and characteristics of MANN models, which have emerged as a hot topic in the deep learning field, and present the latest techniques and future research directions that utilize MANN.
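
As a reference point for the memory elements surveyed above, here is a minimal, illustrative numpy sketch of the content-based read operation that most MANN variants share: a controller emits a key, the key is compared to every memory slot, and a softmax over the similarities weights the read. The memory size, key width, and key strength are placeholders, not values from the survey.

    import numpy as np

    def content_based_read(memory, key, beta=1.0):
        """Read from external memory by content-based addressing.

        memory : (N, M) array, N slots of width M
        key    : (M,) query vector emitted by the controller
        beta   : key strength (sharpens the attention)
        """
        # Cosine similarity between the key and every memory slot.
        sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
        # Softmax over slots gives the read weighting.
        w = np.exp(beta * sims)
        w /= w.sum()
        # The read vector is a weighted sum of memory rows.
        return w @ memory, w

    memory = np.random.randn(128, 20)   # 128 slots, 20-dimensional (toy sizes)
    key = np.random.randn(20)
    read_vec, weights = content_based_read(memory, key, beta=2.0)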

Training Method and Speaker Verification Measures for Recurrent Neural Network based Speaker Verification System

  • Kim, Tae-Hyung
    • The Journal of Korean Institute of Communications and Information Sciences, v.34 no.3C, pp.257-267, 2009
  • This paper presents a training method for neural networks and the use of MSE (mean squared error) values as the basis for deciding on the identity claim of a speaker in a recurrent neural network based speaker verification system. Recurrent neural networks (RNNs) are employed to capture the temporally dynamic characteristics of the speech signal. In the supervised learning process for the RNNs, target outputs are generated automatically and made to represent the temporal variation of the input speech sounds. To increase the capability of discriminating between the true speaker and an impostor, a discriminative training method for RNNs is presented. The paper shows the use and effectiveness of the MSE value, obtained from the Euclidean distance between the target outputs and the network outputs for a speaker's test speech, as the basis of speaker verification. In terms of equal error rate, experiments performed on a Korean speech database show that the proposed speaker verification system outperforms a conventional hidden Markov model based speaker verification system.
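
A minimal sketch of the decision rule described above: the MSE between the network outputs for a test utterance and the claimed speaker's target outputs is compared against a threshold. The function names and the idea of tuning the threshold at the equal error rate are illustrative assumptions, not code from the paper.

    import numpy as np

    def verification_score(network_outputs, target_outputs):
        """MSE between the RNN outputs for a test utterance and the
        automatically generated target outputs of the claimed speaker."""
        return np.mean((network_outputs - target_outputs) ** 2)

    def accept_claim(network_outputs, target_outputs, threshold):
        # A small MSE means the outputs track the claimed speaker's targets,
        # so the identity claim is accepted; the threshold would be tuned
        # on development data (e.g. at the equal error rate).
        return verification_score(network_outputs, target_outputs) < threshold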

Load Prediction using Finite Element Analysis and Recurrent Neural Network (유한요소해석과 순환신경망을 활용한 하중 예측)

  • Jung-Ho Kang
    • Journal of the Korean Society of Industry Convergence, v.27 no.1, pp.151-160, 2024
  • Artificial neural networks, which enabled artificial intelligence, are being used in many fields. However, their application to mechanical structures faces several problems and the research is still incomplete. One of the problems is that it is difficult to secure the large amount of data needed to train artificial neural networks. In particular, detecting and recognizing external forces and loads is important for the safe operation of mechanical structures and for accident prevention. This study examined the feasibility of applying recurrent neural networks to detect and recognize the loads on a machine. Tens of thousands of samples are generally required to train recurrent neural networks, so to secure a large amount of data this paper derives load data from ANSYS structural analysis results and applies a stacked auto-encoder technique to obtain enough data for learning. The usefulness of the stacked auto-encoder data was examined by comparing it with the ANSYS data. In addition, to improve the accuracy of load detection and recognition with a recurrent neural network, optimal conditions are proposed by investigating the effects of the related functions.
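
One plausible reading of the data-augmentation step above, sketched in PyTorch: a stacked auto-encoder is trained to reconstruct the FEA-derived load vectors, and extra samples are produced by perturbing the latent codes and decoding them back. The layer sizes, noise level, and the perturbation-based augmentation itself are assumptions for illustration, not the paper's exact procedure.

    import torch
    import torch.nn as nn

    class StackedAutoEncoder(nn.Module):
        """Two-stage (stacked) auto-encoder over fixed-length load vectors.
        Dimensions are placeholders, not taken from the paper."""
        def __init__(self, n_in=100, h1=64, h2=16):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_in, h1), nn.ReLU(),
                nn.Linear(h1, h2), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.Linear(h2, h1), nn.ReLU(),
                nn.Linear(h1, n_in),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def augment(model, loads, noise=0.05, copies=10):
        """Generate extra load samples by perturbing the latent codes of
        the FEA-derived loads and decoding them back."""
        with torch.no_grad():
            z = model.encoder(loads)
            samples = [model.decoder(z + noise * torch.randn_like(z))
                       for _ in range(copies)]
        return torch.cat(samples)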

Inference of Context-Free Grammars using Binary Third-order Recurrent Neural Networks with Genetic Algorithm (이진 삼차 재귀 신경망과 유전자 알고리즘을 이용한 문맥-자유 문법의 추론)

  • Jung, Soon-Ho
    • Journal of the Korea Society of Computer and Information, v.17 no.3, pp.11-25, 2012
  • We present a method to infer context-free grammars by applying a genetic algorithm to Binary Third-order Recurrent Neural Networks (BTRNN). A BTRNN is a multi-layered architecture of recurrent neural networks, each of which corresponds to an input symbol, combined with an external stack. All parameters of the BTRNN are represented as binary numbers, and each state transition is performed simultaneously with a stack operation. We apply the genetic algorithm to BTRNN chromosomes and obtain the optimal BTRNN that infers the context-free grammar from positive and negative input patterns. The proposed method infers a BTRNN whose number of states is equal to or smaller than that of existing discrete recurrent neural network methods, using fewer examples and fewer learning trials. The BTRNN is also superior in recognition time complexity to a recent method whose chromosomes represent grammars directly, because it performs deterministic state transitions and stack operations during parsing. If the number of non-terminals is p, the number of terminals is q, the length of an input string is k, and the maximum number of BTRNN states is m, the parallel processing time is O(k) and the sequential processing time is O(km).
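
A generic sketch of the binary genetic-algorithm layer described above. How a chromosome decodes into BTRNN transition and stack parameters is hidden behind the fitness function, and the population size, rates, and selection scheme are illustrative choices rather than the paper's settings.

    import random

    def evolve(fitness, chrom_len, pop_size=50, generations=200,
               crossover_rate=0.8, mutation_rate=0.01):
        """Binary GA; fitness(chromosome) would decode the bits into BTRNN
        parameters and score the network on positive and negative strings."""
        pop = [[random.randint(0, 1) for _ in range(chrom_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(pop, key=fitness, reverse=True)
            next_pop = scored[:2]                                # elitism
            while len(next_pop) < pop_size:
                a, b = random.sample(scored[:pop_size // 2], 2)  # truncation selection
                if random.random() < crossover_rate:
                    cut = random.randrange(1, chrom_len)
                    child = a[:cut] + b[cut:]                    # one-point crossover
                else:
                    child = a[:]
                # Bit-flip mutation with small probability per bit.
                child = [bit ^ (random.random() < mutation_rate) for bit in child]
                next_pop.append(child)
            pop = next_pop
        return max(pop, key=fitness)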

Time-Series Prediction of Baltic Dry Index (BDI) Using an Application of Recurrent Neural Networks (Recurrent Neural Networks를 활용한 Baltic Dry Index (BDI) 예측)

  • Han, Min-Soo; Yu, Song-Jin
    • Proceedings of the Korean Institute of Navigation and Port Research Conference, 2017.11a, pp.50-53, 2017
  • With the prolonged maritime recession, it has become increasingly important not only to understand economic trends but also to make predictions that reduce uncertainty. This paper discusses the prediction of the BDI with artificial neural networks (ANN), an emerging approach to problems that are difficult to solve by conventional means. We propose predictions implemented with recurrent architectures, a plain Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM), and for comparison train a Multi-Layer Perceptron (MLP) on data from 2009.04.01 to 2017.07.31; a conventional statistical prediction tool, ARIMA, is also compared. As a result, the recurrent networks, especially the RNN, outperformed the alternatives, and the applicability of LSTM to this specific time series (BDI) was also demonstrated.
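
A minimal PyTorch sketch of the kind of recurrent forecaster compared above: a window of past BDI values is fed to an LSTM and the last hidden state predicts the next value. The window length, hidden size, and batch size are placeholders, not the paper's settings.

    import torch
    import torch.nn as nn

    class BDIPredictor(nn.Module):
        """One-step-ahead forecaster: a window of past BDI values goes in,
        the next value comes out."""
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                    # x: (batch, window, 1)
            out, _ = self.lstm(x)
            return self.head(out[:, -1, :])      # predict the next BDI value

    model = BDIPredictor()
    window = torch.randn(8, 30, 1)   # 8 toy sequences of 30 past BDI values
    next_bdi = model(window)         # (8, 1)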

Study on Image Compression Algorithm with Deep Learning (딥 러닝 기반의 이미지 압축 알고리즘에 관한 연구)

  • Lee, Yong-Hwan
    • Journal of the Semiconductor & Display Technology, v.21 no.4, pp.156-162, 2022
  • Image compression plays an important role in encoding and improving various forms of images in the digital era. Recent research has focused on deep learning, one of the most exciting machine learning methods, and has shown that it is a good scheme for analyzing, classifying, and compressing images. Various neural networks can be adapted for image compression, such as deep neural networks, artificial neural networks, recurrent neural networks, and convolutional neural networks. In this review paper, we discuss how to apply deep learning to obtain better image compression with high accuracy, low loss, and high visual quality. Achieving this performance requires deep learning methods to be applied in a justified manner with careful analysis.

Input-Output Linearization and Control of Nonlinear System Using Recurrent Neural Networks (리커런트 신경 회로망을 이용한 비선형 시스템의 입출력 선형화 및 제어)

  • 이준섭; 이홍기; 심귀보
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 1997.11a, pp.185-188, 1997
  • In this paper, we carry out identification, linearization, and control of a nonlinear system using recurrent neural networks. In general, nonlinear control systems become complex because of nonlinearity and uncertainty, and even when a nonlinear controller is composed from a model it is difficult to obtain good control performance. We therefore identify the nonlinear system using recurrent neural networks and perform feedback linearization of the identified model. In this process we choose a reference linear system, and the system to be feedback-linearized is trained to follow the input-output linearity of the chosen system. We then control the feedback-linearized system by applying a standard linear control strategy in simulation, and we evaluate the effectiveness by comparing the result with the theoretically linearized system.
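
A sketch of the identification step only, the part the abstract describes first: a recurrent network is trained so that, given the plant input sequence, its output tracks the measured plant output; feedback linearization and the linear controller would then operate on this identified model. The toy plant, network size, and training loop below are illustrative assumptions, not the paper's setup.

    import torch
    import torch.nn as nn

    class PlantModel(nn.Module):
        """Recurrent identification model: maps the plant input sequence
        to a prediction of the plant output sequence."""
        def __init__(self, hidden=16):
            super().__init__()
            self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
            self.out = nn.Linear(hidden, 1)

        def forward(self, u):              # u: (batch, T, 1) plant inputs
            h, _ = self.rnn(u)
            return self.out(h)             # (batch, T, 1) predicted outputs

    model = PlantModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    u = torch.randn(4, 200, 1)             # toy excitation signals
    y = torch.tanh(u).cumsum(dim=1) * 0.1  # stand-in nonlinear plant response
    for _ in range(100):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(u), y)
        loss.backward()
        opt.step()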

Polyphonic sound event detection using multi-channel audio features and gated recurrent neural networks (다채널 오디오 특징값 및 게이트형 순환 신경망을 사용한 다성 사운드 이벤트 검출)

  • Ko, Sang-Sun; Cho, Hye-Seung; Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea, v.36 no.4, pp.267-272, 2017
  • In this paper, we propose an effective method of applying multi-channel audio features to GRNNs (Gated Recurrent Neural Networks) for polyphonic sound event detection. Real-life sounds often overlap, which makes them difficult to distinguish using mono-channel audio features, so the proposed method improves polyphonic sound event detection by using multi-channel audio features. In addition, we improve performance by applying a gated recurrent neural network, which is simpler than the LSTM (Long Short-Term Memory) that shows the highest performance among current recurrent neural networks. The experimental results show that the proposed method achieves better sound event detection performance than existing methods.
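
A minimal sketch of a frame-wise, multi-label detector in the spirit of the method above: multi-channel features pass through a GRU and an independent sigmoid per event class allows overlapping events to be active in the same frame. The feature dimension, number of classes, bidirectionality, and decision threshold are assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn

    class PolyphonicSED(nn.Module):
        """Frame-wise multi-label sound event detector."""
        def __init__(self, n_features=80, n_events=6, hidden=64):
            super().__init__()
            self.gru = nn.GRU(n_features, hidden, batch_first=True,
                              bidirectional=True)
            self.head = nn.Linear(2 * hidden, n_events)

        def forward(self, x):                        # x: (batch, frames, n_features)
            h, _ = self.gru(x)
            return torch.sigmoid(self.head(h))       # per-frame event probabilities

    model = PolyphonicSED()
    features = torch.randn(2, 500, 80)   # e.g. stacked multi-channel spectral frames
    probs = model(features)              # (2, 500, 6); threshold (e.g. 0.5) per class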

Evaluation Method of Structural Safety using Gated Recurrent Unit (Gated Recurrent Unit 기법을 활용한 구조 안전성 평가 방법)

  • Jung-Ho Kang
    • Journal of the Korean Society of Industry Convergence, v.27 no.1, pp.183-193, 2024
  • Recurrent neural network technology, which builds on techniques for recognizing and classifying objects to learn past patterns and predict future ones, is being applied across industry, economics, and language processing, and research toward practical use is progressing rapidly. However, research on applying recurrent neural networks to evaluating and predicting the safety of mechanical structures is still insufficient. Evaluating the safety of a mechanical structure requires accurate detection of the externally applied loads, and training recurrent neural networks for this requires a large amount of load data. This study applied the Gated Recurrent Unit technique to examine the feasibility of learning loads and investigated the use of a stacked auto-encoder as a way to secure load data. In addition, the usefulness of learning mechanical loads with the Gated Recurrent Unit technique was analyzed, and basic settings for the related functions and parameters are proposed to secure accuracy in load recognition and prediction.
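
For reference, the standard GRU update that the technique above relies on, written as a single numpy step (biases omitted for brevity; this is the common textbook formulation, not code from the paper):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gru_cell(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
        """One GRU step: z decides how much of the previous state to keep,
        r decides how much of it feeds the candidate state."""
        z = sigmoid(Wz @ x + Uz @ h_prev)                # update gate
        r = sigmoid(Wr @ x + Ur @ h_prev)                # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))    # candidate state
        # z near 1 keeps the previous state, z near 0 adopts the candidate.
        return z * h_prev + (1.0 - z) * h_tilde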

Evolutionary Algorithm for Recurrent Neural Networks Storing Periodic Pattern Pairs (주기적 패턴 쌍을 저장하는 Recurrent Neural Network를 찾는 진화 알고리즘)

  • Kim, Kwon-Il; Zhang, Byoung-Tak
    • Proceedings of the Korean Information Science Society Conference, 2007.06c, pp.399-402, 2007
  • The network of neurons in the brain fundamentally takes the form of recurrent neural networks (RNNs). In this paper, under the assumption that biological memory is formed by storing the relationships between recurring neuron response patterns in a network, we propose an RNN model capable of representing such relationships, and we show through an evolutionary algorithm that networks storing several such pairs of memories can exist.
