• Title/Summary/Keyword: Recurrent Neural Network Models (순환신경망 모델)


Learning and Transferring Deep Neural Network Models for Image Caption Generation (이미지 캡션 생성을 위한 심층 신경망 모델 학습과 전이)

  • Kim, Dong-Ha; Kim, Incheol
    • Proceedings of the Korea Information Processing Society Conference / 2016.10a / pp.617-620 / 2016
  • In this paper, we present a deep neural network model that is effective for image caption generation and model transfer. The model is a multimodal recurrent neural network consisting of five layers in total, including a convolutional neural network layer that extracts visual information from the image, an embedding layer that converts each word into a low-dimensional feature, a recurrent neural network layer that learns caption sentence structure, and a multimodal layer that combines visual and linguistic information. In particular, the recurrent neural network layer is built from LSTM units, which are well suited to sequence pattern learning and model transfer, and the output of the convolutional neural network layer is connected not only to the embedding layer but also to the multimodal layer, so that the visual information of the image is available at every step of caption generation. Comparative experiments on public datasets such as Flickr8k, Flickr30k, and MSCOCO demonstrate the superiority of the proposed multimodal recurrent neural network model in terms of caption accuracy and model transfer effect.
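
A minimal PyTorch sketch of the layer arrangement described in this abstract, assuming precomputed CNN image features; the class name `MultimodalCaptioner` and all dimensions (vocabulary size, feature sizes) are illustrative assumptions, not values from the paper:

```python
import torch
import torch.nn as nn

class MultimodalCaptioner(nn.Module):
    """Sketch of a multimodal RNN captioner: CNN feature -> (embedding + LSTM) -> multimodal fusion."""
    def __init__(self, vocab_size=10000, cnn_dim=4096, embed_dim=256, hidden_dim=512, mm_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)               # word -> low-dimensional feature
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)   # learns caption structure
        self.img_proj = nn.Linear(cnn_dim, mm_dim)                     # CNN output fed to the multimodal layer
        self.txt_proj = nn.Linear(hidden_dim, mm_dim)
        self.out = nn.Linear(mm_dim, vocab_size)                       # next-word scores

    def forward(self, cnn_feat, captions):
        # cnn_feat: (B, cnn_dim) visual feature from a pretrained CNN; captions: (B, T) word ids
        h, _ = self.lstm(self.embed(captions))                         # (B, T, hidden_dim)
        img = self.img_proj(cnn_feat).unsqueeze(1)                     # (B, 1, mm_dim), broadcast over time
        mm = torch.tanh(self.txt_proj(h) + img)                        # visual info used at every step
        return self.out(mm)                                            # (B, T, vocab_size)

feat = torch.randn(2, 4096)
caps = torch.randint(0, 10000, (2, 12))
print(MultimodalCaptioner()(feat, caps).shape)                         # torch.Size([2, 12, 10000])
```

The point mirrored from the abstract is that the projected CNN feature is added into the multimodal layer at every time step, so visual information is available for each generated word.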

Graph Convolutional-Network Architecture Search: Network Architecture Search Using Graph Convolutional Neural Networks (그래프 합성곱-신경망 구조 탐색: 그래프 합성곱 신경망을 이용한 신경망 구조 탐색)

  • Su-Youn Choi; Jong-Youel Park
    • The Journal of the Convergence on Culture Technology / v.9 no.1 / pp.649-654 / 2023
  • This paper proposes the design of a neural network architecture search model using graph convolutional neural networks. Because deep learning models learn as black boxes, it is difficult to verify whether a designed model has a structure with optimized performance. A neural network architecture search model consists of a controller network that generates a model and the generated convolutional neural network itself. Conventional architecture search models use recurrent neural networks as the controller, but this paper proposes GC-NAS, which uses graph convolutional neural networks instead of recurrent neural networks to generate convolutional neural network models. The proposed GC-NAS uses a Layer Extraction Block to explore depth and a Hyper Parameter Prediction Block to explore spatial and temporal information (hyperparameters) in parallel, based on the depth information. Because the depth information is reflected, the search space is wider, and because the search is conducted in parallel with the depth information, the purpose of each part of the search space is clear, so the model is judged to be superior in theoretical structure to existing RNN-based architecture search models. Through its graph convolutional neural network block and graph generation algorithm, GC-NAS is expected to address the high-dimensional time axis and the limited spatial search range of the recurrent neural networks used in existing architecture search models. We also hope that the GC-NAS proposed in this paper will encourage active research on applying graph convolutional neural networks to neural architecture search.
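
The abstract does not spell out the internals of the Layer Extraction Block or the Hyper Parameter Prediction Block, so the following is only a generic sketch of the underlying idea: a graph convolutional network reads an architecture graph (one node per layer) and scores hyperparameter choices per node. The classes `GraphConv` and `HyperParamPredictor` and all dimensions are hypothetical:

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One graph convolution layer: H' = ReLU(A_hat @ H @ W), with A_hat a normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return torch.relu(a_hat @ self.lin(h))

class HyperParamPredictor(nn.Module):
    """Toy stand-in for a GCN controller that reads an architecture graph (nodes = layers)
    and predicts one hyperparameter choice per node, e.g. a kernel-size index."""
    def __init__(self, node_dim=8, hidden=32, num_choices=3):
        super().__init__()
        self.gc1 = GraphConv(node_dim, hidden)
        self.gc2 = GraphConv(hidden, hidden)
        self.head = nn.Linear(hidden, num_choices)

    def forward(self, a_hat, node_feats):
        h = self.gc2(a_hat, self.gc1(a_hat, node_feats))
        return self.head(h)                                            # (num_nodes, num_choices) logits

# A 4-node chain graph (layer_1 -> layer_2 -> layer_3 -> layer_4), with self-loops.
adj = torch.eye(4)
for i in range(3):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
deg = adj.sum(1)
a_hat = adj / torch.sqrt(deg.unsqueeze(0) * deg.unsqueeze(1))          # symmetric normalization
node_feats = torch.randn(4, 8)                                         # per-layer descriptors (depth, type, ...)
print(HyperParamPredictor()(a_hat, node_feats).shape)                  # torch.Size([4, 3])
```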

Residual Convolutional Recurrent Neural Network-Based Sound Event Classification Applicable to Broadcast Captioning Services (자막방송을 위한 잔차 합성곱 순환 신경망 기반 음향 사건 분류)

  • Kim, Nam Kyun; Kim, Hong Kook; Ahn, Chung Hyun
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.26-27 / 2021
  • In this paper, we propose a sound event classification method based on a residual convolutional recurrent neural network as a way of understanding broadcast content for captioning services. The proposed method has a structure in which a residual convolutional neural network is connected to a recurrent neural network. Mel-filterbank features are used as the network input, and the residual convolutional network consists of one stem block and five residual convolutional blocks. Each residual convolutional block combines a residually trained convolutional network with a convolutional block attention module to improve the representational power of the feature maps compared with a conventional convolutional network. The extracted feature maps are fed to a recurrent neural network and finally to a fully connected layer that outputs the sound event class and its time information. The model is trained with a mean teacher model, which can exploit unlabeled data. Performance was evaluated on the DCASE 2020 Challenge Task 4 dataset, where the model achieved an event-based F1-score of 46.8%.
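
A rough PyTorch sketch of the convolutional recurrent structure described above (mel-filterbank input, a stem block, five residual blocks, a recurrent layer, and a per-frame output layer); the convolutional block attention module and the mean-teacher training are omitted, and all sizes are illustrative:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Plain 2-D residual block (the paper additionally adds a convolutional block attention module)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))

    def forward(self, x):
        return torch.relu(x + self.body(x))

class CRNN(nn.Module):
    """Sketch: mel-filterbank -> stem + residual CNN -> GRU -> per-frame event probabilities."""
    def __init__(self, n_mels=64, n_events=10, ch=32):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU())
        self.blocks = nn.Sequential(*[ResBlock(ch) for _ in range(5)])
        self.pool = nn.AvgPool2d((1, 4))                               # pool frequency, keep time resolution
        self.rnn = nn.GRU(ch * (n_mels // 4), 128, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(256, n_events)

    def forward(self, mel):                                            # mel: (B, T, n_mels)
        x = self.blocks(self.stem(mel.unsqueeze(1)))                   # (B, ch, T, n_mels)
        x = self.pool(x)                                               # (B, ch, T, n_mels//4)
        x = x.permute(0, 2, 1, 3).flatten(2)                           # (B, T, ch * n_mels//4)
        h, _ = self.rnn(x)
        return torch.sigmoid(self.fc(h))                               # per-frame event activity in [0, 1]

print(CRNN()(torch.randn(2, 100, 64)).shape)                           # torch.Size([2, 100, 10])
```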


Stock Prediction Model based on Bidirectional LSTM Recurrent Neural Network (양방향 LSTM 순환신경망 기반 주가예측모델)

  • Joo, Il-Taeck; Choi, Seung-Ho
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.2 / pp.204-208 / 2018
  • In this paper, we propose and evaluate a time-series deep learning model for learning the fluctuation patterns of stock prices. Recurrent neural networks, which can store previous information in their hidden layer, are suitable for stock price prediction, a time-series problem. To solve the vanishing gradient problem and maintain long-term dependencies, we use LSTM units, which contain internal memory cells, inside the recurrent neural network. Furthermore, to overcome the recurrent neural network's tendency to learn only from the immediately preceding patterns, we propose a stock price prediction model based on a bidirectional LSTM recurrent neural network, in which a hidden layer processing the data in the reverse direction is added. In our experiments, the proposed model was trained with TensorFlow using stock price and trading volume as inputs. To evaluate prediction performance, the root mean square error between the real and predicted stock prices was measured. As a result, the model using the bidirectional LSTM recurrent neural network showed improved prediction accuracy compared with a unidirectional LSTM recurrent neural network.
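
The paper trains its model with TensorFlow; purely to keep the examples in this listing in one framework, the following sketch uses PyTorch. It shows the core idea only: a bidirectional LSTM over (price, volume) windows evaluated with RMSE, with all dimensions chosen for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMPredictor(nn.Module):
    """Sketch of a bidirectional LSTM regressor over (price, volume) sequences."""
    def __init__(self, n_features=2, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, 1)                 # next price from the last time step

    def forward(self, x):                                  # x: (B, T, 2) = normalized price, volume
        h, _ = self.lstm(x)
        return self.fc(h[:, -1])                           # (B, 1)

model = BiLSTMPredictor()
x = torch.randn(8, 30, 2)                                  # 8 windows of 30 trading days (synthetic)
y = torch.randn(8, 1)
rmse = torch.sqrt(F.mse_loss(model(x), y))                 # RMSE, the paper's evaluation metric
print(rmse.item())
```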

Design of a Deep Neural Network Model for Image Caption Generation (이미지 캡션 생성을 위한 심층 신경망 모델의 설계)

  • Kim, Dongha; Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.6 no.4 / pp.203-210 / 2017
  • In this paper, we propose an effective neural network model for image caption generation and model transfer. The model is a multimodal recurrent neural network. It consists of five layers, including a convolutional neural network layer for extracting visual information from images, an embedding layer for converting each word into a low-dimensional feature, a recurrent neural network layer for learning caption sentence structure, and a multimodal layer for combining visual and language information. The recurrent neural network layer is constructed from LSTM units, which are well known to be effective for learning and transferring sequence patterns. Moreover, the model has a distinctive structure in which the output of the convolutional neural network layer is linked not only to the input of the initial state of the recurrent neural network layer but also to the input of the multimodal layer, so that the visual information extracted from the image can be used at each recurrent step when generating the corresponding textual caption. Through comparative experiments on open datasets such as Flickr8k, Flickr30k, and MSCOCO, we demonstrate that the proposed multimodal recurrent neural network model performs well in terms of caption accuracy and model transfer effect.
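
For the model-transfer side of this work, a common pattern is to keep the visual encoder fixed and fine-tune the language-side layers on the target caption dataset; the sketch below shows only that freezing pattern (the full layer arrangement is sketched under the earlier conference version of this paper). The module names and sizes are hypothetical:

```python
import torch
import torch.nn as nn

# Hypothetical composite captioner: a visual encoder plus trainable language-side layers.
# Only the freezing / fine-tuning pattern is the point here.
captioner = nn.ModuleDict({
    "cnn":   nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.AdaptiveAvgPool2d(1), nn.Flatten()),
    "embed": nn.Embedding(10000, 256),
    "lstm":  nn.LSTM(256, 512, batch_first=True),
    "mm":    nn.Linear(512 + 16, 10000),
})

# Model transfer: keep the visual encoder fixed, adapt the language side to the new caption data.
for p in captioner["cnn"].parameters():
    p.requires_grad = False

trainable = [p for p in captioner.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
print(sum(p.numel() for p in trainable), "parameters will be fine-tuned")
```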

The Improving Method of Characters Recognition Using New Recurrent Neural Network (새로운 순환신경망을 사용한 문자인식성능의 향상 방안)

  • 정낙우; 김병기
    • Journal of the Korea Society of Computer and Information / v.1 no.1 / pp.129-138 / 1996
  • As a result of industrial development and the growth and sophistication of technology, an ever larger amount of information is handled every year. To achieve informatization, the information that has long been recorded on paper must be stored in computers and made available at the right time and place. The recurrent neural network is a model that reuses output values when training a neural network for character recognition, but most existing methods of this kind are not applied to the problem very effectively. This study proposes a new type of recurrent neural network for effectively classifying static patterns such as off-line handwritten characters. Using the new J-E (Jordan-Elman) neural network model, which extends and combines the Jordan and Elman models, the proposed network recognizes static patterns such as digits and handwritten characters better than previous models.
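
A minimal sketch of a combined Jordan-Elman step as described in the abstract: the hidden layer receives the input together with the previous hidden state (Elman context) and the previous output (Jordan context). The class `JordanElmanCell`, the repeat count, and all dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn

class JordanElmanCell(nn.Module):
    """Sketch of a combined Jordan-Elman step: the hidden layer sees the input, the previous
    hidden state (Elman context), and the previous output (Jordan context)."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.hidden = nn.Linear(in_dim + hidden_dim + out_dim, hidden_dim)
        self.output = nn.Linear(hidden_dim, out_dim)

    def forward(self, x, h_prev, y_prev):
        h = torch.tanh(self.hidden(torch.cat([x, h_prev, y_prev], dim=-1)))
        y = self.output(h)
        return h, y

# Feed a static pattern (e.g. a flattened character image) repeatedly so the two context
# paths accumulate evidence before the final classification; dimensions are illustrative.
cell = JordanElmanCell(in_dim=64, hidden_dim=32, out_dim=10)
x = torch.randn(1, 64)
h, y = torch.zeros(1, 32), torch.zeros(1, 10)
for _ in range(5):
    h, y = cell(x, h, y)
print(y.softmax(dim=-1).shape)        # torch.Size([1, 10]) class scores
```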


Learning Recurrent Neural Networks for Activity Detection from Untrimmed Videos (비분할 비디오로부터 행동 탐지를 위한 순환 신경망 학습)

  • Song, YeongTaek; Suh, Junbae; Kim, Incheol
    • Proceedings of the Korea Information Processing Society Conference / 2017.04a / pp.892-895 / 2017
  • In this paper, we propose a deep neural network model for effectively detecting human activities in untrimmed videos. In general, detecting human activities in video involves two stages: extracting features that are effective for activity detection, and detecting the activities contained in the video based on those features. We present deep neural network models for both the feature extraction stage and the activity detection stage. In particular, C3D and I-ResNet convolutional neural network models are used to extract features that capture the temporal and spatial patterns of each activity, and a bidirectional LSTM (BI-LSTM) recurrent neural network model is used to automatically classify activities from the sequence of feature vectors. Experiments on ActivityNet, a large-scale public benchmark video dataset, confirm the performance and effectiveness of the proposed deep neural network model.
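
A small sketch of the detection stage described above: a bidirectional LSTM labels each segment of an untrimmed video from precomputed spatio-temporal feature vectors (such as C3D or I-ResNet outputs). The feature dimension, hidden size, and number of classes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ActivityDetector(nn.Module):
    """Sketch: a bidirectional LSTM labels each video segment from precomputed
    spatio-temporal features; class 0 can stand for 'background' (no activity)."""
    def __init__(self, feat_dim=4096, hidden=256, n_classes=201):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, feats):                      # feats: (B, T, feat_dim), one vector per segment
        h, _ = self.bilstm(feats)
        return self.fc(h)                          # (B, T, n_classes) per-segment logits

feats = torch.randn(1, 40, 4096)                   # 40 segments from an untrimmed video (synthetic)
print(ActivityDetector()(feats).argmax(-1).shape)  # per-segment predicted activity labels
```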

Hardware Implementation of Recurrent Neural Network (순환 신경망의 하드웨어 구현)

  • 김정욱; 오종훈
    • Proceedings of the Korean Information Science Society Conference / 2001.04b / pp.586-588 / 2001
  • Recently, generative models based on recurrent neural networks have been actively studied in connection with unsupervised learning. While this type of network can be used effectively for feature extraction and recognition, it requires a very large amount of computation because of its iterative loops. In this paper, the up-propagation network, a recurrent neural network proposed by Oh and Seung, is implemented on an FPGA. The single-layer network consists of 9 upper-layer neurons and 256 lower-layer neurons and can be implemented efficiently on a single 40,000-gate FPGA. Pipelined multipliers improve the computation speed, and the sigmoid transfer function is approximated by a finite-precision second-order polynomial. The implemented hardware was used to reconstruct the USPS handwritten digit images and produced good results.
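
The abstract mentions approximating the sigmoid transfer function with a finite-precision second-order polynomial. The paper's coefficients are not given, so the sketch below uses one commonly cited piecewise quadratic approximation together with a simple fixed-point quantization as a software stand-in for the FPGA arithmetic:

```python
import numpy as np

def sigmoid_quadratic(x):
    """One common second-order piecewise approximation of the sigmoid, suitable for
    fixed-point hardware (the paper's exact coefficients are not given in the abstract)."""
    ax = np.clip(np.abs(x), 0.0, 4.0)
    half = 0.5 * (1.0 - ax / 4.0) ** 2             # quadratic segment on [0, 4]
    pos = 1.0 - half                               # value for x >= 0, saturating at 1 beyond |x| = 4
    return np.where(x >= 0, pos, 1.0 - pos)        # symmetry around (0, 0.5)

def to_fixed(x, frac_bits=8):
    """Quantize to a finite-precision (Q-format) value, as a stand-in for FPGA arithmetic."""
    scale = 1 << frac_bits
    return np.round(x * scale) / scale

x = np.linspace(-8, 8, 1001)
exact = 1.0 / (1.0 + np.exp(-x))
approx = to_fixed(sigmoid_quadratic(to_fixed(x)))
print("max abs error vs. exact sigmoid:", np.abs(exact - approx).max())
```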


A New Type of Recurrent Neural Network for the Improvement of Pattern Recognition Ability (패턴 인식 성능을 향상시키는 새로운 형태의 순환신경망)

  • Jeong, Nak-U; Kim, Byeong-Gi
    • The Transactions of the Korea Information Processing Society / v.4 no.2 / pp.401-408 / 1997
  • Humans acquire almost all of their knowledge from the recognition and accumulation of input patterns, such as the images and sounds received through the eyes and ears. Among these abilities, character recognition, which allows a person to recognize characters and understand their meaning through visual information, is now being applied to pattern recognition systems using neural networks in computers. The recurrent neural network is one of the models that reuses output values in neural network learning. Recently, many studies have tried to apply recurrent neural networks to the classification of static patterns such as off-line handwritten characters, but most of these efforts have not been very effective so far. This study proposes a new type of recurrent neural network for effectively classifying static patterns such as off-line handwritten characters. Using the new J-E (Jordan-Elman) neural network model, which extends and combines the Jordan and Elman models, the proposed network recognizes static patterns such as digits and handwritten characters better than previous models.


Recurrent Neural Network Based Distance Estimation for Indoor Localization in UWB Systems (UWB 시스템에서 실내 측위를 위한 순환 신경망 기반 거리 추정)

  • Jung, Tae-Yun; Jeong, Eui-Rim
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.4 / pp.494-500 / 2020
  • This paper proposes a new distance estimation technique for indoor localization in ultra-wideband (UWB) systems. The proposed technique is based on the recurrent neural network (RNN), one of the deep learning methods. The RNN is known to be useful for time series data, and since a UWB signal can be viewed as a time series, an RNN is employed in this paper. Specifically, the transmitted UWB signal passes through the IEEE 802.15.4a indoor channel model, and from the received signal an RNN regressor is trained to estimate the distance from the transmitter to the receiver. To verify the performance of the trained RNN regressor, new received UWB signals are used, and a conventional threshold-based technique is also compared. Root mean square error (RMSE) is used as the performance measure. According to the computer simulation results, the proposed distance estimator outperforms the conventional technique at all signal-to-noise ratios and at all distances between the transmitter and the receiver.
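
A minimal sketch of an RNN regressor of the kind described above: the received UWB waveform is treated as a sequence of samples and mapped to a scalar distance, evaluated with RMSE. A GRU is used here simply as one recurrent variant (the abstract does not specify the cell type), and all sizes are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UWBDistanceRegressor(nn.Module):
    """Sketch of an RNN regressor mapping a received UWB waveform (as a sample sequence)
    to a scalar transmitter-receiver distance; sizes are illustrative."""
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, rx):                         # rx: (B, N, 1) received signal samples
        _, h_last = self.rnn(rx)                   # final hidden state summarizes the waveform
        return self.fc(h_last.squeeze(0))          # (B, 1) estimated distance

model = UWBDistanceRegressor()
rx = torch.randn(4, 512, 1)                        # 4 received bursts of 512 samples each (synthetic)
dist_true = torch.rand(4, 1) * 10.0
rmse = torch.sqrt(F.mse_loss(model(rx), dist_true))   # the paper's RMSE performance measure
print(rmse.item())
```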