• Title/Summary/Keyword: recurrent neural network

Search results: 567

Flow based Network Traffic Classification Using Recurrent Neural Network (Recurrent Neural Network을 이용한 플로우 기반 네트워크 트래픽 분류)

  • Lim, Hyun-Kyo;Kim, Ju-Bong;Heo, Joo-Seong;Kwon, Do-Hyung;Han, Youn-Hee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2017.11a
    • /
    • pp.835-838
    • /
    • 2017
  • As various network services and applications have recently emerged, diverse network traffic is being generated on networks. As a result, a large amount of unnecessary traffic is also generated, degrading network performance. Therefore, per-traffic classification is needed so that network services requiring fast delivery can be transmitted quickly. In this paper, we propose flow-based network traffic classification using a Recurrent Neural Network, a deep learning technique. Deep learning can classify network traffic without intervention by a network administrator; to this end, network traffic is converted into a data format suitable for a Recurrent Neural Network. The network traffic is then classified by training on the converted data set. Based on the training results, this paper performs a comparative analysis and evaluation.
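A minimal sketch of the kind of pipeline the abstract describes, where each flow is turned into a fixed-length sequence of per-packet features and classified with a recurrent network; the feature layout, class set, and hyperparameters are illustrative assumptions, not taken from the paper:

```python
# Sketch: flow records -> fixed-length per-packet feature sequences -> RNN classifier.
import numpy as np
import tensorflow as tf

NUM_FLOWS = 1000        # number of labelled flows (synthetic stand-in here)
SEQ_LEN = 20            # packets kept per flow (assumed)
NUM_FEATURES = 3        # e.g. packet size, inter-arrival time, direction (assumed)
NUM_CLASSES = 5         # e.g. web, mail, streaming, P2P, other (assumed)

# Synthetic stand-in for preprocessed flow sequences and their labels.
x = np.random.rand(NUM_FLOWS, SEQ_LEN, NUM_FEATURES).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=(NUM_FLOWS,))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, NUM_FEATURES)),
    tf.keras.layers.SimpleRNN(64),                              # recurrent layer over the packet sequence
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),   # traffic class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32, validation_split=0.2)
```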

Design of a Deep Neural Network Model for Image Caption Generation (이미지 캡션 생성을 위한 심층 신경망 모델의 설계)

  • Kim, Dongha;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.4
    • /
    • pp.203-210
    • /
    • 2017
  • In this paper, we propose an effective neural network model for image caption generation and model transfer. The model is a multi-modal recurrent neural network. It consists of distinct layers: a convolutional neural network layer for extracting visual information from images, an embedding layer for converting each word into a low-dimensional feature, a recurrent neural network layer for learning the structure of caption sentences, and a multi-modal layer for combining visual and language information. The recurrent neural network layer is built from LSTM units, which are well known to be effective for learning and transferring sequence patterns. Moreover, the model has a distinctive structure in which the output of the convolutional neural network layer is linked not only to the input of the initial state of the recurrent neural network layer but also to the input of the multi-modal layer, so that the visual information extracted from the image can be used at each recurrent step to generate the corresponding textual caption. Through comparative experiments on open data sets such as Flickr8k, Flickr30k, and MSCOCO, we demonstrate that the proposed multi-modal recurrent neural network model achieves high performance in terms of caption accuracy and model transfer effect.
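A rough functional-API sketch of the layer arrangement described above, in which CNN image features initialize the LSTM and also feed a multi-modal layer at every step; the feature dimension, vocabulary size, and caption length are assumptions, not the authors' settings:

```python
# Sketch of a multi-modal caption model: CNN features -> LSTM initial state and multimodal layer.
import tensorflow as tf

VOCAB_SIZE = 10000      # assumed vocabulary size
EMBED_DIM = 256
HIDDEN_DIM = 512
MAX_LEN = 20            # assumed maximum caption length

image_feat = tf.keras.Input(shape=(2048,), name="cnn_features")    # e.g. pooled CNN output (assumed size)
caption_in = tf.keras.Input(shape=(MAX_LEN,), name="caption_tokens")

# Project image features to the LSTM state size and use them as the initial state.
init_state = tf.keras.layers.Dense(HIDDEN_DIM, activation="tanh")(image_feat)

# Embedding layer: each word id -> low-dimensional feature vector.
word_emb = tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM)(caption_in)

# LSTM layer over the caption, initialised from the image representation.
lstm_out = tf.keras.layers.LSTM(HIDDEN_DIM, return_sequences=True)(
    word_emb, initial_state=[init_state, init_state])

# Multi-modal layer: combine LSTM output with the image features at every step.
img_seq = tf.keras.layers.RepeatVector(MAX_LEN)(image_feat)
merged = tf.keras.layers.Concatenate()([lstm_out, img_seq])
multimodal = tf.keras.layers.Dense(HIDDEN_DIM, activation="relu")(merged)

# Word prediction at each step of the caption.
word_probs = tf.keras.layers.Dense(VOCAB_SIZE, activation="softmax")(multimodal)

model = tf.keras.Model(inputs=[image_feat, caption_in], outputs=word_probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```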

Graph Convolutional-Network Architecture Search: Network Architecture Search Using Graph Convolutional Neural Networks (그래프 합성곱-신경망 구조 탐색 : 그래프 합성곱 신경망을 이용한 신경망 구조 탐색)

  • Su-Youn Choi;Jong-Youel Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.1
    • /
    • pp.649-654
    • /
    • 2023
  • This paper proposes a neural network structure search model that uses graph convolutional neural networks. Because deep learning models learn as black boxes, it is difficult to verify whether a designed model has a structure with optimized performance. A neural network structure search model consists of a recurrent neural network that generates candidate models and the convolutional neural network that is generated. Conventional neural network structure search models use recurrent neural networks; in this paper, we propose GC-NAS, which instead uses graph convolutional neural networks to generate convolutional neural network models. The proposed GC-NAS uses a Layer Extraction Block to explore depth and, in parallel, a Hyper Parameter Prediction Block to explore spatial and temporal information (hyperparameters) based on the depth information. Because the depth information is reflected, the search space is wider, and the parallel search guided by depth information makes the purpose of each part of the search space clear, so GC-NAS is judged to be theoretically superior in structure to conventional recurrent-network-based search models. GC-NAS is expected to solve the problems of the high-dimensional time axis and the limited spatial search range of recurrent neural networks in existing neural network structure search models through its graph convolutional neural network block and graph generation algorithm. We also hope that the GC-NAS proposed in this paper will encourage active research on applying graph convolutional neural networks to neural network structure search.
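To make the building block concrete, here is a minimal standard graph-convolution step (normalized adjacency times features times weights) of the kind GC-NAS applies to an architecture graph; this is generic GCN propagation, not the paper's implementation:

```python
# One standard GCN propagation step on a toy architecture graph: H' = relu(A_hat @ H @ W).
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph-convolution step: symmetric normalisation, then linear map and ReLU."""
    a_hat = adj + np.eye(adj.shape[0])                  # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt            # normalised adjacency
    return np.maximum(a_norm @ features @ weight, 0.0)  # ReLU activation

# Toy architecture graph: 4 candidate layers (nodes), edges = layer connections (assumed).
adj = np.array([[0, 1, 0, 0],
                [0, 0, 1, 1],
                [0, 0, 0, 1],
                [0, 0, 0, 0]], dtype=float)
features = np.random.rand(4, 8)     # per-node descriptors, e.g. layer type and size (assumed)
weight = np.random.rand(8, 16)      # learnable projection

print(gcn_layer(adj, features, weight).shape)   # (4, 16)
```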

Stock Prediction Model based on Bidirectional LSTM Recurrent Neural Network (양방향 LSTM 순환신경망 기반 주가예측모델)

  • Joo, Il-Taeck;Choi, Seung-Ho
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.11 no.2
    • /
    • pp.204-208
    • /
    • 2018
  • In this paper, we propose and evaluate a time series deep learning prediction model for learning the fluctuation patterns of stock prices. Recurrent neural networks, which can store previous information in the hidden layer, are suitable for a stock price prediction model based on time series data. To maintain long-term dependencies by mitigating the vanishing gradient problem of recurrent neural networks, we use LSTM, which places small memory cells inside the recurrent neural network. Furthermore, we propose a stock price prediction model using a bidirectional LSTM recurrent neural network, in which a hidden layer processing the data in the reverse direction is added, to overcome the tendency of a recurrent neural network to learn only from the immediately preceding pattern. In our experiments, we used TensorFlow to train the proposed stock price prediction model with stock price and trading volume as inputs. To evaluate prediction performance, the root mean square error between the real and predicted stock prices was computed. As a result, the stock price prediction model using the bidirectional LSTM recurrent neural network showed improved prediction accuracy compared with a unidirectional LSTM recurrent neural network.
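A hedged TensorFlow sketch of the described setup, feeding windows of (price, volume) pairs to a bidirectional LSTM and reporting the root mean square error; the window length, layer sizes, and synthetic data are assumptions:

```python
# Sketch: windows of (price, volume) -> bidirectional LSTM -> next price, evaluated by RMSE.
import numpy as np
import tensorflow as tf

WINDOW = 30                     # days of history per sample (assumed)
SAMPLES = 500

# Synthetic stand-in for normalised (price, volume) sequences and next-day prices.
x = np.random.rand(SAMPLES, WINDOW, 2).astype("float32")
y = np.random.rand(SAMPLES, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 2)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),   # forward and backward pass over the window
    tf.keras.layers.Dense(1),                                   # predicted price
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=3, batch_size=32, verbose=0)

pred = model.predict(x, verbose=0)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))   # root mean square error
print(f"RMSE: {rmse:.4f}")
```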

Research on Performance Improvement of the Adaptive Active Noise Control System Using the Recurrent Neural Network (순환형 신경망을 이용한 적응형 능동소음제어시스템의 성능 향상에 대한 연구)

  • Han, Song-Ik;Lee, Tae-Oh;Yeo, Dae-Yeon;Lee, Kwon-Soon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.8
    • /
    • pp.1759-1766
    • /
    • 2010
  • The noise attenuation performance of an adaptive active noise control algorithm is improved using a recurrent neural network. The FXLMS algorithm, which has been widely used in active noise control, is simple and has a low computational load, but it handles nonlinearity in the primary or secondary path poorly because it is based on a linear FIR filter. In this paper, a recurrent neural network filter is developed and applied, and simulation results show improved active noise attenuation.
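For reference, a minimal simulation of the linear FXLMS baseline the paper improves on; the path models, filter length, and step size are illustrative assumptions:

```python
# Minimal FXLMS loop: adapt an FIR controller so the anti-noise cancels the primary noise.
import numpy as np

np.random.seed(0)
N = 5000
x = np.random.randn(N)                 # reference noise picked up by the reference microphone

p = np.array([0.8, 0.4, 0.2])          # primary path (noise source -> error mic), assumed
s = np.array([0.5, 0.25])              # secondary path (speaker -> error mic), assumed
s_hat = s.copy()                       # secondary-path estimate used to filter the reference

L = 16                                 # adaptive FIR filter length (assumed)
w = np.zeros(L)                        # controller weights
mu = 0.01                              # LMS step size (assumed)

x_buf = np.zeros(L)                    # recent reference samples
fx_buf = np.zeros(L)                   # recent filtered-x samples
y_buf = np.zeros(len(s))               # recent controller outputs
errors = []

for n in range(N):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]     # x[n], x[n-1], ...
    y = w @ x_buf                                  # anti-noise output
    y_buf = np.roll(y_buf, 1); y_buf[0] = y

    d = p @ x_buf[:len(p)]                         # primary noise at the error mic
    e = d - s @ y_buf                              # residual after cancellation

    fx = s_hat @ x_buf[:len(s_hat)]                # reference filtered by the secondary-path estimate
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx

    w += mu * e * fx_buf                           # LMS weight update
    errors.append(e)

print("mean |e|, first 500 steps:", np.mean(np.abs(errors[:500])))
print("mean |e|, last 500 steps: ", np.mean(np.abs(errors[-500:])))
```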

Robust Sliding Mode Friction Control with Adaptive Friction Observer and Recurrent Fuzzy Neural Network

  • Shin, Kyoo-Jae;Han, Seong-I.
    • Journal of information and communication convergence engineering
    • /
    • v.7 no.2
    • /
    • pp.125-130
    • /
    • 2009
  • A robust friction compensation scheme is proposed in this paper. A recurrent fuzzy neural network and a friction parameter observer are developed together with a sliding-mode-based controller in order to obtain precise position tracking performance. For a servo system with incompletely identified friction parameters, the proposed control scheme provides satisfactory results, as verified by experiments.

Stable Predictive Control of Chaotic Systems Using Self-Recurrent Wavelet Neural Network

  • Yoo Sung Jin;Park Jin Bae;Choi Yoon Ho
    • International Journal of Control, Automation, and Systems
    • /
    • v.3 no.1
    • /
    • pp.43-55
    • /
    • 2005
  • In this paper, a predictive control method using a self-recurrent wavelet neural network (SRWNN) is proposed for chaotic systems. Since the SRWNN has a self-recurrent mother wavelet layer, it can capture complex nonlinear systems well even though it has fewer mother wavelet nodes than a wavelet neural network (WNN). Thus, the SRWNN is used as a model predictor for predicting the dynamics of chaotic systems. The gradient descent method with adaptive learning rates is applied to train the parameters of the SRWNN-based predictor and controller. The adaptive learning rates are derived from the discrete Lyapunov stability theorem and are used to guarantee the convergence of the predictive controller. Finally, chaotic systems are used to demonstrate the effectiveness of the proposed control strategy.
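An illustrative reading of the self-recurrent wavelet idea, in which each mother-wavelet node feeds its previous activation back into its own input; the wavelet choice and parameterization are assumptions, not the authors' formulation:

```python
# Sketch of a self-recurrent wavelet network forward pass: each wavelet node has self-feedback.
import numpy as np

def mother_wavelet(z):
    """First-derivative-of-Gaussian mother wavelet (assumed choice)."""
    return z * np.exp(-0.5 * z ** 2)

class SRWNN:
    def __init__(self, n_inputs, n_wavelets, seed=0):
        rng = np.random.default_rng(seed)
        self.m = rng.normal(size=(n_wavelets, n_inputs))    # translations
        self.d = np.ones((n_wavelets, n_inputs))             # dilations
        self.theta = 0.1 * rng.normal(size=n_wavelets)       # self-feedback weights
        self.w = rng.normal(size=n_wavelets)                  # output weights
        self.phi_prev = np.zeros(n_wavelets)                  # stored previous activations (memory)

    def step(self, x):
        # Each node sees the external input plus its own previous activation.
        u = x[None, :] + self.theta[:, None] * self.phi_prev[:, None]
        z = (u - self.m) / self.d
        phi = np.prod(mother_wavelet(z), axis=1)              # product over input dimensions
        self.phi_prev = phi
        return self.w @ phi                                    # network output

net = SRWNN(n_inputs=2, n_wavelets=5)
for t in range(3):
    x_t = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])
    print(net.step(x_t))
```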

Load Prediction using Finite Element Analysis and Recurrent Neural Network (유한요소해석과 순환신경망을 활용한 하중 예측)

  • Jung-Ho Kang
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.27 no.1
    • /
    • pp.151-160
    • /
    • 2024
  • The artificial neural networks that enabled artificial intelligence are being used in many fields. However, their application to mechanical structures has several problems, and research remains incomplete. One of the problems is that it is difficult to secure the large amount of data necessary for training artificial neural networks. In particular, detecting and recognizing external forces and loads is important for the safe operation of mechanical structures and for accident prevention. This study examines this possibility by applying a recurrent neural network to detect and recognize loads on a machine. Tens of thousands of samples are generally required to train a recurrent neural network; to secure a large amount of data, this paper derives load data from ANSYS structural analysis results and applies a stacked auto-encoder technique to obtain enough data for training. The usefulness of the stacked auto-encoder data was examined by comparing it with the ANSYS data. In addition, to improve the accuracy of load detection and recognition with a recurrent neural network, optimal conditions are proposed by investigating the effects of the related functions.
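A hedged sketch of one way a stacked auto-encoder could be used to expand a small set of FEA-derived load histories; the layer sizes, noise level, and synthetic stand-in data are assumptions based on the abstract, not the paper's procedure:

```python
# Sketch: train a stacked auto-encoder on FEA-derived load histories, then use it to augment the set.
import numpy as np
import tensorflow as tf

SEQ_LEN = 50                                                  # samples per load time history (assumed)
fea_data = np.random.rand(200, SEQ_LEN).astype("float32")     # stand-in for ANSYS-derived load data

# Stacked auto-encoder: two encoding layers and a mirrored decoder.
autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),   # bottleneck
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(SEQ_LEN),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(fea_data, fea_data, epochs=20, batch_size=16, verbose=0)

# Generate additional training samples by reconstructing lightly perturbed inputs.
noisy = fea_data + np.random.normal(0, 0.05, fea_data.shape).astype("float32")
augmented = autoencoder.predict(noisy, verbose=0)
train_set = np.concatenate([fea_data, augmented], axis=0)
print(train_set.shape)    # twice as many load histories available for RNN training
```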

Modular Neural Network Using Recurrent Neural Network (궤환 신경회로망을 사용한 모듈라 네트워크)

  • 최우경;김성주;서재용;전흥태
    • Proceedings of the IEEK Conference
    • /
    • 2003.07d
    • /
    • pp.1565-1568
    • /
    • 2003
  • In this paper, we propose a modular network to solve difficult and complex problems that are seldom solved with a single multi-layer neural network. The modular neural network structure studied by Jacobs and Jordan is adopted in this paper. A modular network consists of several expert networks and a gating network, each of which is a single-layer or multi-layer neural network. We propose a modular network structure using recurrent neural networks, since the state of the whole network at a particular time then depends on an aggregate of previous states as well as on the current input. Finally, we show that the proposed network outperforms the conventional modular network.
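A minimal sketch of the modular (mixture-of-experts) arrangement with recurrent experts and a gating network; the layer sizes and combination scheme are illustrative assumptions, not the authors' implementation:

```python
# Sketch: several recurrent expert networks, weighted by a gating network.
import tensorflow as tf

SEQ_LEN, N_FEATURES, N_CLASSES, N_EXPERTS = 10, 4, 3, 3      # assumed sizes

inputs = tf.keras.Input(shape=(SEQ_LEN, N_FEATURES))

# Expert networks: each is a small recurrent network producing class scores.
expert_outputs = []
for _ in range(N_EXPERTS):
    h = tf.keras.layers.SimpleRNN(16)(inputs)
    expert_outputs.append(tf.keras.layers.Dense(N_CLASSES, activation="softmax")(h))
experts = tf.keras.layers.Lambda(lambda t: tf.stack(t, axis=1))(expert_outputs)   # (batch, experts, classes)

# Gating network: decides how much each expert contributes for a given input.
g = tf.keras.layers.Flatten()(inputs)
gate = tf.keras.layers.Dense(N_EXPERTS, activation="softmax")(g)                  # (batch, experts)

# Weighted combination of expert outputs.
output = tf.keras.layers.Lambda(
    lambda t: tf.reduce_sum(t[0][:, :, None] * t[1], axis=1)
)([gate, experts])                                                                # (batch, classes)

model = tf.keras.Model(inputs, output)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```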

A New Type of Recurrent Neural Network for the Improvement of Pattern Recognition Ability (패턴 인식 성능을 향상시키는 새로운 형태의 순환신경망)

  • Jeong, Nak-U;Kim, Byeong-Gi
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.2
    • /
    • pp.401-408
    • /
    • 1997
  • Humans acquire almost all of their knowledge from the recognition and accumulation of input patterns, images or sounds, received through the eyes and ears. Among these abilities, character recognition, which allows a person to recognize characters and understand their meanings through visual information, is now applied in computer pattern recognition systems based on neural networks. A recurrent neural network is a model that reuses output values during neural network learning. Recently, many studies have tried to apply recurrent neural networks to the classification of static patterns such as off-line handwritten characters, but most of these efforts have not been very effective so far. This study suggests a new type of recurrent neural network for the effective classification of static patterns such as off-line handwritten characters. Using a new J-E (Jordan-Elman) neural network model that extends and combines the Jordan model and the Elman model, the proposed network outperforms previous models in recognizing static patterns such as figures and handwritten characters.
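A rough numpy sketch of the combined Jordan-Elman idea, where the hidden layer receives the current input together with an Elman context (previous hidden state) and a Jordan context (previous output); all dimensions and weights are illustrative:

```python
# Sketch of a combined Jordan-Elman forward pass with both hidden-state and output feedback.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 8, 6, 4                 # e.g. pixel features in, character classes out (assumed)

W_in = rng.normal(size=(N_HID, N_IN))        # input -> hidden
W_elman = rng.normal(size=(N_HID, N_HID))    # previous hidden state (Elman context) -> hidden
W_jordan = rng.normal(size=(N_HID, N_OUT))   # previous output (Jordan context) -> hidden
W_out = rng.normal(size=(N_OUT, N_HID))      # hidden -> output

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

hidden_ctx = np.zeros(N_HID)   # Elman context units
output_ctx = np.zeros(N_OUT)   # Jordan context units

# Feed a short sequence of feature vectors (stand-ins for scanned character features).
for x in rng.normal(size=(3, N_IN)):
    hidden = np.tanh(W_in @ x + W_elman @ hidden_ctx + W_jordan @ output_ctx)
    output = softmax(W_out @ hidden)
    hidden_ctx, output_ctx = hidden, output    # update both context memories
    print(output.round(3))
```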
