• Title/Summary/Keyword: Long short-term memory recurrent neural networks (장단기 기억 순환 신경망)


Background subtraction using LSTM and spatial recurrent neural network (장단기 기억 신경망과 공간적 순환 신경망을 이용한 배경차분)

  • Choo, Sungkwon; Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.11a / pp.13-16 / 2016
  • In this paper, we propose an algorithm that separates the background and foreground in video using recurrent neural networks. A recurrent neural network is a network built with an internal loop so that information from previous inputs can be retained over a sequence of inputs. Among the various recurrent architectures, we use long short-term memory (LSTM) networks so that the model can also respond to long-range dependencies. Since spatial relationships, as well as temporal ones, influence the decision between background and foreground in video, we additionally apply a spatial recurrent neural network so that the information in the hidden layers can be propagated spatially. The proposed algorithm shows results comparable to existing algorithms on standard background subtraction videos.

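For readers who want a concrete starting point, the following is a minimal tf.keras sketch in the spirit of this entry: temporal recurrence over video frames with convolutional (spatially aware) LSTM cells, ending in a per-pixel foreground mask. The ConvLSTM2D layers stand in for the paper's combination of LSTM and a spatial recurrent network and do not reproduce the authors' exact architecture; the frame count, resolution, and filter sizes are illustrative assumptions.

```python
import tensorflow as tf

def build_bg_subtraction_model(frames=10, height=64, width=64, channels=3):
    """Per-pixel foreground probability for the last frame of a short clip."""
    inputs = tf.keras.Input(shape=(frames, height, width, channels))
    # Convolutional LSTM: recurrence over time, convolution over space
    x = tf.keras.layers.ConvLSTM2D(16, kernel_size=3, padding="same",
                                   return_sequences=True)(inputs)
    x = tf.keras.layers.ConvLSTM2D(16, kernel_size=3, padding="same",
                                   return_sequences=False)(x)
    mask = tf.keras.layers.Conv2D(1, kernel_size=1, activation="sigmoid")(x)
    return tf.keras.Model(inputs, mask)

model = build_bg_subtraction_model()
model.compile(optimizer="adam", loss="binary_crossentropy")
```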

Performance comparison of various deep neural network architectures using Merlin toolkit for a Korean TTS system (Merlin 툴킷을 이용한 한국어 TTS 시스템의 심층 신경망 구조 성능 비교)

  • Hong, Junyoung; Kwon, Chulhong
    • Phonetics and Speech Sciences / v.11 no.2 / pp.57-64 / 2019
  • In this paper, we construct a Korean text-to-speech system using the Merlin toolkit, an open-source system for speech synthesis. In text-to-speech systems, the HMM-based statistical parametric speech synthesis method is widely used, but it is known that the quality of synthesized speech is degraded due to limitations of the acoustic modeling scheme that includes context factors. In this paper, we propose an acoustic modeling architecture that uses deep neural network techniques, which show excellent performance in various fields. A fully connected deep feedforward neural network (DNN), a recurrent neural network (RNN), a gated recurrent unit (GRU), long short-term memory (LSTM), and bidirectional LSTM (BLSTM) are included in the architectures compared. Experimental results show that performance is improved by including sequence modeling in the architecture, and that the architectures with LSTM or BLSTM perform best. It was also found that including delta and delta-delta components in the acoustic feature parameters is advantageous for performance improvement.
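As a rough illustration of the kind of acoustic model compared in this entry (not Merlin's actual implementation), the sketch below maps frame-level linguistic feature vectors to acoustic feature vectors with a stacked BLSTM; the feature dimensions and layer sizes are placeholders, not the authors' settings.

```python
import tensorflow as tf

LINGUISTIC_DIM = 425   # assumed size of the frame-level linguistic feature vector
ACOUSTIC_DIM = 187     # assumed size of the acoustic output (e.g., MGC + BAP + F0 with deltas)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, LINGUISTIC_DIM)),   # variable-length frame sequences
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(256, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(256, return_sequences=True)),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(ACOUSTIC_DIM)),  # frame-wise regression
])
model.compile(optimizer="adam", loss="mse")
```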

Analysis and Prediction Methods of Marine Accident Patterns related to Vessel Traffic using Long Short-Term Memory Networks (장단기 기억 신경망을 활용한 선박교통 해양사고 패턴 분석 및 예측)

  • Jang, Da-Un; Kim, Joo-Sung
    • Journal of the Korean Society of Marine Environment & Safety / v.28 no.5 / pp.780-790 / 2022
  • Quantitative risk levels must be presented by analyzing the causes and consequences of accidents and predicting their occurrence patterns. For marine accidents related to vessel traffic, research has mainly focused on the traffic itself, such as collision risk analysis and navigational path finding, while the occurrence patterns of marine accidents have been analyzed with traditional statistical methods. This study presents a marine accident prediction model using statistics on marine accidents related to vessel traffic. Statistics on Korean domestic marine accidents from 1998 to 2021, which can be accumulated monthly and hourly, were converted into structured time-series data. The predictive model was built using a long short-term memory network, a representative artificial intelligence model. When the performance of the proposed model was verified on the validation data, the RMSEs of the initial neural network model were 52.5471 and 126.5893, and after the model was updated with observed datasets, they improved to 31.3680 and 36.3967, respectively. Based on the proposed model, the occurrence patterns of marine accidents could be predicted by learning the features of various marine accidents. Further research should provide a quantitative presentation of marine accident risk and develop region-based hazard maps.
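A minimal sketch of the kind of univariate LSTM forecaster described here, trained on sliding windows of a monthly accident-count series and evaluated with RMSE; the window length, layer size, and the random stand-in series are assumptions for illustration only.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, window=12):
    """Turn a 1-D series into (samples, window, 1) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., np.newaxis], np.array(y)

series = np.random.rand(288)          # stand-in for a 1998-2021 monthly accident-count series
X, y = make_windows(series, window=12)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(12, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])  # RMSE, as reported in the paper
model.fit(X, y, epochs=5, verbose=0)
```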

An Empirical Study on Prediction of the Art Price using Multivariate Long Short Term Memory Recurrent Neural Network Deep Learning Model (다변수 LSTM 순환신경망 딥러닝 모형을 이용한 미술품 가격 예측에 관한 실증연구)

  • Lee, Jiin; Song, Jeongseok
    • The Journal of the Korea Contents Association / v.21 no.6 / pp.552-560 / 2021
  • With the recent development of the art distribution system, art is increasingly viewed as an investment rather than merely an object of aesthetic utility. Unlike stocks and bonds, artwork prices are heterogeneous, determined by both objective and subjective factors, so the uncertainty in price prediction is high. In this study, we used an LSTM recurrent neural network deep learning model to predict auction winning prices, with the artist, physical, and sales characteristics of Korean artists' works as inputs. As a result, the RMSE value, which captures the difference between the prices predicted by the model and the actual prices, was 0.064. The model had the highest predictive power for painter Lee Dae Won and the lowest for Lee Joong Seop. The results suggest that the art market is becoming more active as an investment vehicle and that demand for predicting auction winning prices is increasing.
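The sketch below illustrates one plausible way to feed such multivariate inputs to an LSTM regressor, combining an artist embedding with a short sequence of physical/sales features; the feature names, dimensions, and the use of an embedding are assumptions, not the study's actual design.

```python
import tensorflow as tf

NUM_ARTISTS = 100   # assumed number of artists in the dataset
SEQ_LEN = 8         # assumed number of past sale records per sample
NUM_FEATURES = 6    # assumed physical/sales features per record (size, medium, estimate, ...)

seq_in = tf.keras.Input(shape=(SEQ_LEN, NUM_FEATURES), name="sale_history")
artist_in = tf.keras.Input(shape=(1,), dtype="int32", name="artist_id")

h = tf.keras.layers.LSTM(64)(seq_in)                      # temporal pattern of past sales
a = tf.keras.layers.Flatten()(tf.keras.layers.Embedding(NUM_ARTISTS, 8)(artist_in))

price = tf.keras.layers.Dense(1)(tf.keras.layers.Concatenate()([h, a]))  # normalized hammer price
model = tf.keras.Model([seq_in, artist_in], price)
model.compile(optimizer="adam", loss="mse")
```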

A Survey on Neural Networks Using Memory Component (메모리 요소를 활용한 신경망 연구 동향)

  • Lee, Jihwan; Park, Jinuk; Kim, Jaehyung; Kim, Jaein; Roh, Hongchan; Park, Sanghyun
    • KIPS Transactions on Software and Data Engineering / v.7 no.8 / pp.307-324 / 2018
  • Recently, recurrent neural networks have been attracting attention for solving prediction problems on sequential data through structures that account for time dependency. However, as the number of time steps in the sequential data increases, the vanishing gradient problem occurs. Long short-term memory models have been proposed to solve this problem, but they are limited in how much data they can store and how long they can preserve it. Therefore, research on memory-augmented neural networks (MANN), learning models that combine recurrent neural networks with memory elements, has been actively conducted. In this paper, we describe the structure and characteristics of MANN models that have emerged as a hot topic in the deep learning field, and present the latest techniques and future research directions that utilize MANN.
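To make the surveyed idea concrete, here is a minimal NumPy sketch of content-based addressing, the read mechanism shared by most MANN variants (NTM, DNC): a controller key is matched against external memory by cosine similarity, and the read vector is an attention-weighted sum of memory rows. Real models add write heads, gating, and temporal links on top of this.

```python
import numpy as np

def content_based_read(memory, key, beta=1.0):
    """memory: (N, M) matrix of N slots; key: (M,) query emitted by the controller."""
    sim = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    weights = np.exp(beta * sim)
    weights /= weights.sum()              # soft addressing over memory slots
    return weights @ memory, weights      # read vector and attention weights

memory = np.random.randn(128, 20)         # 128 slots of 20-dimensional content
key = np.random.randn(20)                 # e.g., produced by an LSTM controller
read_vector, attention = content_based_read(memory, key)
```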

Polyphonic sound event detection using multi-channel audio features and gated recurrent neural networks (다채널 오디오 특징값 및 게이트형 순환 신경망을 사용한 다성 사운드 이벤트 검출)

  • Ko, Sang-Sun; Cho, Hye-Seung; Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.36 no.4 / pp.267-272 / 2017
  • In this paper, we propose an effective method of applying multi-channel audio features to GRNNs (Gated Recurrent Neural Networks) for polyphonic sound event detection. Real-life sounds often overlap with each other, which makes them difficult to distinguish using mono-channel audio features. The proposed method therefore tries to improve polyphonic sound event detection by using multi-channel audio features, and additionally by applying a gated recurrent neural network, which is simpler than LSTM (Long Short-Term Memory), currently the best-performing recurrent architecture. The experimental results show that the proposed method achieves better sound event detection performance than other existing methods.
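A hedged sketch of a GRU-based polyphonic detector in the spirit of this entry: stacked GRU layers over frame-level multi-channel features, with independent per-class sigmoid outputs so that overlapping events can be active in the same frame. Feature and class dimensions are illustrative assumptions.

```python
import tensorflow as tf

FEATURE_DIM = 80   # assumed, e.g., 40 mel bands x 2 channels stacked per frame
NUM_EVENTS = 10    # assumed number of target sound event classes

model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, FEATURE_DIM)),      # variable-length sequences of feature frames
    tf.keras.layers.GRU(64, return_sequences=True),
    tf.keras.layers.GRU(64, return_sequences=True),
    # Sigmoid (not softmax) so several events can be active in the same frame
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(NUM_EVENTS, activation="sigmoid")),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```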

CNN-LSTM based Autonomous Driving Technology (CNN-LSTM 기반의 자율주행 기술)

  • Ga-Eun Park; Chi Un Hwang; Lim Se Ryung; Han Seung Jang
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.6 / pp.1259-1268 / 2023
  • This study proposes a throttle and steering control technology that uses visual sensors with deep learning's convolutional and recurrent neural networks. Camera images and control values are collected while driving a training track in clockwise and counterclockwise directions, and a model that predicts throttle and steering is built after data sampling and preprocessing for efficient learning. The model was then validated on a test track in a different environment that was not used for training, in order to find the optimal model and to compare it with a CNN (Convolutional Neural Network). The results show that the proposed deep learning model delivers excellent performance.
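The following is an illustrative CNN-LSTM sketch of the general approach (not the authors' network): a TimeDistributed CNN encodes each camera frame, an LSTM aggregates the frame sequence, and a linear head regresses throttle and steering. Input resolution and layer widths are placeholder assumptions.

```python
import tensorflow as tf

def build_cnn_lstm(frames=5, height=66, width=200, channels=3):
    inputs = tf.keras.Input(shape=(frames, height, width, channels))
    frame_encoder = tf.keras.Sequential([
        tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(48, 3, strides=2, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
    ])
    x = tf.keras.layers.TimeDistributed(frame_encoder)(inputs)  # per-frame visual features
    x = tf.keras.layers.LSTM(64)(x)                             # temporal context across frames
    outputs = tf.keras.layers.Dense(2, name="throttle_steering")(x)
    return tf.keras.Model(inputs, outputs)

model = build_cnn_lstm()
model.compile(optimizer="adam", loss="mse")
```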

Prediction of Music Generation on Time Series Using Bi-LSTM Model (Bi-LSTM 모델을 이용한 음악 생성 시계열 예측)

  • Kwangjin, Kim; Chilwoo, Lee
    • Smart Media Journal / v.11 no.10 / pp.65-75 / 2022
  • Deep learning is used as a creative tool that can overcome the limitations of existing analysis models and generate various types of output such as text, images, and music. In this paper, we propose the preprocessing required for audio data, using the Niko's MIDI Pack sound source files as a dataset, and a method for generating music with a Bi-LSTM. Based on the generated root note, multiple hidden layers are stacked to create new notes that suit the musical composition, and an attention mechanism is applied to the output gate of the decoder to weight the factors of the encoder input that affect the output. Settings such as the loss function and the optimization method are applied as parameters for improving the LSTM model. The proposed model is a multi-channel Bi-LSTM with attention that uses note pitches obtained by separating the treble and bass clefs, note lengths, rests, rest lengths, and chords to improve the efficiency and predictions of the MIDI deep learning process. The trained model generates sound that follows the development of musical scales and is distinct from noise, and we aim to contribute to generating harmonically stable music.
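As a simplified sketch of the general approach (the multi-channel treble/bass separation and the authors' exact attention placement are not reproduced), the model below runs a Bi-LSTM over a window of previous note tokens, attends over the encoder states, and predicts the next token with a softmax; the vocabulary size and window length are assumptions.

```python
import tensorflow as tf

VOCAB = 128    # assumed size of the note/chord/rest token vocabulary
WINDOW = 32    # assumed number of previous tokens used to predict the next one

tokens = tf.keras.Input(shape=(WINDOW,), dtype="int32")
x = tf.keras.layers.Embedding(VOCAB, 64)(tokens)
h = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True))(x)

# Attend over all encoder steps, using the last step as the query
query = tf.keras.layers.Lambda(lambda t: t[:, -1:, :])(h)
context = tf.keras.layers.Attention()([query, h])       # shape (batch, 1, 256)
context = tf.keras.layers.Flatten()(context)

next_token = tf.keras.layers.Dense(VOCAB, activation="softmax")(context)
model = tf.keras.Model(tokens, next_token)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```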

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho; Choi, Sangwoo; Chae, Moon-jung; Park, Heewoong; Lee, Jaehong; Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users with multimodal data have been actively studied recently. The research area is expanding from the recognition of simple body movements of an individual user to the recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors, including accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status based on a deep learning model using only multimodal physical sensor data, such as the accelerometer, magnetic field, and gyroscope, was proposed. The accompanying status was defined as a redefinition of part of the user's interaction behavior, covering whether the user is accompanying an acquaintance at a close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation was proposed. First, a data preprocessing method consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation was introduced. Nearest-neighbor interpolation was applied to synchronize the timestamps of data collected from different sensors. Normalization was performed for each x, y, z axis value of the sensor data, and the sequence data were generated with the sliding-window method. The sequence data then became the input to the CNN, where feature maps representing local dependencies of the original sequence were extracted. The CNN consisted of three convolutional layers and had no pooling layer, in order to maintain the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained using the adaptive moment estimation (ADAM) optimization algorithm with a mini-batch size of 128. Dropout was applied to the inputs of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and was decreased exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using the data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that allow models tailored to the training data to be transferred to evaluation data that follows a different distribution. A model with robust recognition performance against changes in the data that are not considered at the training stage is expected to be obtained.
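The abstract above specifies the architecture in enough detail for a rough reproduction. The sketch below follows it (three convolutional layers without pooling, two 128-cell LSTM layers, dropout on the LSTM input, a softmax classifier, normal(0, 0.1) weight initialization, Adam with an initial learning rate of 0.001 decayed by 0.99 per epoch, mini-batch size 128), but the window length, channel count, kernel sizes, and filter counts are not given in the abstract and are assumed here.

```python
import tensorflow as tf

WINDOW = 128      # assumed sliding-window length (time steps); not given in the abstract
CHANNELS = 9      # accelerometer + magnetic field + gyroscope, x/y/z each
NUM_CLASSES = 2   # e.g., accompanying vs. not accompanying

init = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.1)  # normal(0, 0.1) initialization
model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(64, 5, activation="relu", kernel_initializer=init),
    tf.keras.layers.Conv1D(64, 5, activation="relu", kernel_initializer=init),
    tf.keras.layers.Conv1D(64, 5, activation="relu", kernel_initializer=init),  # no pooling layers
    tf.keras.layers.Dropout(0.5),                      # dropout on the LSTM input
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax", kernel_initializer=init),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Decay the learning rate by 0.99 at the end of every epoch, as described in the abstract
lr_schedule = tf.keras.callbacks.LearningRateScheduler(lambda epoch: 0.001 * 0.99 ** epoch)
# model.fit(X_train, y_train, batch_size=128, callbacks=[lr_schedule])
```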

Sentence generation model with neural attention (Neural Attention을 반영한 문장 생성 모델)

  • Lee, Seihee; Lee, Jee-Hyung
    • Proceedings of the Korean Society of Computer Information Conference / 2017.01a / pp.17-18 / 2017
  • Research on sentence generation, such as dialogue generation and question answering, has been continuously conducted in the natural language processing field. In this paper, we propose a model that adds Neural Attention to a conventional recurrent neural network, deciding how much topic information to include before generating the next sentence. Because it uses not only the probability information of the current and next sentences but also adds topic information to capture contextual meaning, the model helps generate more relevant sentences. In addition to generating an appropriate next sentence, it also shows which words respond more sensitively to the topic sentence when the next sentence is generated.

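A hedged sketch of the idea described: a recurrent language model whose next-word prediction is conditioned on a topic vector through a learned gate, so the model decides how much topic information to mix in. The gating form, vocabulary size, and dimensions are assumptions, not the authors' formulation.

```python
import tensorflow as tf

VOCAB = 5000   # assumed vocabulary size
EMB = 128      # assumed embedding / topic-vector dimension

words = tf.keras.Input(shape=(None,), dtype="int32")   # the preceding sentence as word ids
topic = tf.keras.Input(shape=(EMB,))                   # encoded topic-sentence vector

x = tf.keras.layers.Embedding(VOCAB, EMB)(words)
h = tf.keras.layers.LSTM(256)(x)

# Element-wise gate deciding how much of the topic vector to mix in
gate = tf.keras.layers.Dense(EMB, activation="sigmoid")(tf.keras.layers.Concatenate()([h, topic]))
mixed = tf.keras.layers.Concatenate()([h, tf.keras.layers.Multiply()([gate, topic])])

next_word = tf.keras.layers.Dense(VOCAB, activation="softmax")(mixed)
model = tf.keras.Model([words, topic], next_word)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```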