• Title/Summary/Keyword: Recurrent Convolutional Neural Network (순환 컨볼루션 신경망)

Learning and Transferring Deep Neural Network Models for Image Caption Generation (이미지 캡션 생성을 위한 심층 신경망 모델 학습과 전이)

  • Kim, Dong-Ha;Kim, Incheol
    • Proceedings of the Korea Information Processing Society Conference / 2016.10a / pp.617-620 / 2016
  • In this paper, we present a deep neural network model that is effective for image caption generation and model transfer. The model is a multimodal recurrent neural network composed of five layers in total, including a convolutional neural network layer that extracts visual information from images, an embedding layer that converts each word into a low-dimensional feature, a recurrent neural network layer that learns the structure of caption sentences, and a multimodal layer that combines the visual and language information. In particular, the recurrent neural network layer is built from LSTM units, which are well suited to sequence pattern learning and model transfer, and the output of the convolutional neural network layer is connected not only to the embedding layer but also to the multimodal layer, so that the visual information of the image can be used at every step of caption generation. Through various comparative experiments on public datasets such as Flickr8k, Flickr30k, and MSCOCO, we demonstrate the superiority of the proposed multimodal recurrent neural network model in terms of caption accuracy and model transfer effect.

Large-Scale Text Classification with Deep Neural Networks (깊은 신경망 기반 대용량 텍스트 데이터 분류 기술)

  • Jo, Hwiyeol;Kim, Jin-Hwa;Kim, Kyung-Min;Chang, Jeong-Ho;Eom, Jae-Hong;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices / v.23 no.5 / pp.322-327 / 2017
  • The classification problem in the field of natural language processing has been studied for a long time. Continuing our previous research, which classified large-scale text using Convolutional Neural Networks (CNN), we implemented Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU). The experimental results showed that the classification algorithms ranked, in increasing order of performance, as Multinomial Naïve Bayesian classifier < Support Vector Machine (SVM) < LSTM < CNN < GRU. The results can be interpreted as follows. First, CNN outperformed LSTM, so the text classification problem may be related more to feature extraction than to natural language understanding. Second, judging from the results, GRU performed better feature extraction than LSTM. Finally, the fact that GRU outperformed CNN implies that text classification algorithms should consider both feature extraction and sequential information. We also present fine-tuning results for the deep neural networks to provide some intuition about natural language processing for future researchers.
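
As a concrete illustration of the recurrent models compared above, the following is a minimal sketch of a GRU-based text classifier, assuming PyTorch; the vocabulary size, dimensions, and number of classes are placeholders, not the hyperparameters used in the paper.

```python
# Minimal GRU text classifier sketch (PyTorch assumed; sizes are illustrative,
# not the hyperparameters used in the paper).
import torch
import torch.nn as nn

class GRUTextClassifier(nn.Module):
    def __init__(self, vocab_size=50000, embed_dim=128, hidden_dim=256, num_classes=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):               # token_ids: (batch, seq_len)
        embedded = self.embed(token_ids)         # (batch, seq_len, embed_dim)
        _, last_hidden = self.gru(embedded)      # last_hidden: (1, batch, hidden_dim)
        return self.fc(last_hidden.squeeze(0))   # class logits: (batch, num_classes)

# Toy usage: a batch of 4 documents, each truncated/padded to 100 tokens.
logits = GRUTextClassifier()(torch.randint(1, 50000, (4, 100)))
```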

A Fusion of Convolutional and Gated Recurrent Neural Networks for Level-wise Feature Modeling of Phishing Website URLs (피싱 웹사이트 URL의 수준별 특징 모델링을 위한 컨볼루션 신경망과 게이트 순환신경망의 퓨전 신경망)

  • Bu, Seok-Jun;Kim, Hae-Jung
    • Review of KIISC / v.29 no.3 / pp.29-36 / 2019
  • In an environment where connections between individuals are strengthened by explosively growing social media services, the risk of phishing attacks propagated via URLs is greatly amplified. Deep learning algorithms, whose performance has recently been demonstrated in text classification and modeling, are well suited to modeling the syntactic and semantic features of phishing URLs, but the conventional rule-based ensemble methods are limited in their ability to fuse the nonlinear relationships between features extracted from characters and words. In this paper, we propose a convolutional neural network based fusion network that systematically fuses the syntactic and semantic features of phishing URLs, and it achieves the highest classification accuracy (0.9804) among the compared machine learning methods. For the training and test datasets, 45,000 benign URLs and 15,000 phishing URLs were collected; 10-fold cross validation and ROC curves were used for quantitative validation, and misclassified cases and internal deep learning parameters were visualized and analyzed for qualitative validation.
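
The abstract above describes fusing character-level (syntactic) and word-level (semantic) URL features. The sketch below shows one way such a fusion could look, assuming PyTorch; the branch sizes, the GRU word branch, and the simple concatenation-based fusion are illustrative assumptions rather than the paper's exact architecture.

```python
# Rough CNN + GRU fusion sketch for URL classification (PyTorch assumed; all
# sizes and the concatenation-based fusion are illustrative choices).
import torch
import torch.nn as nn

class UrlFusionNet(nn.Module):
    def __init__(self, char_vocab=128, word_vocab=10000, embed_dim=64, hidden=128):
        super().__init__()
        # Character branch: 1D CNN over the raw URL string (syntactic features).
        self.char_embed = nn.Embedding(char_vocab, embed_dim)
        self.char_cnn = nn.Sequential(
            nn.Conv1d(embed_dim, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),                 # pool over the URL length
        )
        # Word branch: GRU over URL tokens (semantic features).
        self.word_embed = nn.Embedding(word_vocab, embed_dim)
        self.word_gru = nn.GRU(embed_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden * 2, 2)   # phishing vs. benign

    def forward(self, char_ids, word_ids):
        # char_ids: (batch, url_len); word_ids: (batch, num_tokens)
        c = self.char_cnn(self.char_embed(char_ids).transpose(1, 2)).squeeze(-1)
        _, h = self.word_gru(self.word_embed(word_ids))
        # Fuse the two feature vectors by concatenation before classification.
        return self.classifier(torch.cat([c, h.squeeze(0)], dim=1))
```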

Design of a Deep Neural Network Model for Image Caption Generation (이미지 캡션 생성을 위한 심층 신경망 모델의 설계)

  • Kim, Dongha;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.6 no.4 / pp.203-210 / 2017
  • In this paper, we propose an effective neural network model for image caption generation and model transfer. The model is a kind of multimodal recurrent neural network. It consists of five distinct layers: a convolutional neural network layer for extracting visual information from images, an embedding layer for converting each word into a low-dimensional feature, a recurrent neural network layer for learning caption sentence structure, and a multimodal layer for combining visual and language information. In this model, the recurrent neural network layer is constructed from LSTM units, which are well known to be effective for learning and transferring sequence patterns. Moreover, the model has a unique structure in which the output of the convolutional neural network layer is linked not only to the input of the initial state of the recurrent neural network layer but also to the input of the multimodal layer, so that the visual information extracted from the image can be used at each recurrent step when generating the corresponding textual caption. Through various comparative experiments on open datasets such as Flickr8k, Flickr30k, and MSCOCO, we demonstrate that the proposed multimodal recurrent neural network model performs well in terms of caption accuracy and model transfer effect.
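
A rough sketch of the layer arrangement this abstract describes is given below, assuming PyTorch and a precomputed CNN image feature vector; all dimensions are placeholders, and for brevity the image feature is re-injected only at the multimodal layer, whereas the paper also links it to the initial recurrent state.

```python
# Sketch of a multimodal recurrent caption model (PyTorch assumed; the image
# feature is taken as a precomputed CNN vector, and all sizes are illustrative).
import torch
import torch.nn as nn

class MultimodalCaptioner(nn.Module):
    def __init__(self, vocab_size=10000, img_dim=2048, embed_dim=256, hidden=512, mm_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)           # word embedding layer
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)   # caption-structure layer
        self.img_proj = nn.Linear(img_dim, mm_dim)                 # CNN features -> multimodal space
        self.txt_proj = nn.Linear(hidden, mm_dim)                  # LSTM states -> multimodal space
        self.out = nn.Linear(mm_dim, vocab_size)                   # next-word scores

    def forward(self, img_feat, captions):
        # img_feat: (batch, img_dim) from a convolutional network; captions: (batch, seq_len)
        states, _ = self.lstm(self.embed(captions))                # (batch, seq_len, hidden)
        # Multimodal layer: visual information is re-injected at every decoding step.
        fused = torch.tanh(self.txt_proj(states) + self.img_proj(img_feat).unsqueeze(1))
        return self.out(fused)                                     # (batch, seq_len, vocab_size)
```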

Customized AI Exercise Recommendation Service for the Balanced Physical Activity (균형적인 신체활동을 위한 맞춤형 AI 운동 추천 서비스)

  • Chang-Min Kim;Woo-Beom Lee
    • Journal of the Institute of Convergence Signal Processing / v.23 no.4 / pp.234-240 / 2022
  • This paper proposes a customized AI exercise recommendation service that balances the relative amount of exercise according to the working environment of each occupation. The WISDM database, collected using acceleration and gyro sensors, is a dataset that classifies physical activities into 18 categories. Our system groups the 18 physical activities into three types (whole body, upper body, and lower body) and recommends an adaptive exercise based on the analyzed activity type. A one-dimensional convolutional neural network is used to classify physical activity. The proposed model is composed of convolution blocks in which 1D convolution layers with kernels of various sizes are connected in parallel. By applying multiple 1D convolution layers to the input pattern, the convolution blocks can effectively extract the kind of detailed local features that deep neural network models capture. In an evaluation comparing the proposed model with a previous recurrent neural network, our method achieved a remarkable 98.4% accuracy.
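
The convolution block described above, with parallel 1D convolutions of different kernel sizes, could be sketched as follows, assuming PyTorch; the channel counts and kernel sizes are illustrative, not the paper's settings.

```python
# Sketch of a convolution block with parallel 1D convolutions of different kernel
# sizes (PyTorch assumed; channel counts and kernel sizes are illustrative).
import torch
import torch.nn as nn

class MultiKernelConvBlock(nn.Module):
    def __init__(self, in_ch=3, out_ch=32, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # One branch per kernel size; 'same' padding keeps the time length aligned.
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )
        self.act = nn.ReLU()

    def forward(self, x):                        # x: (batch, channels, time)
        # Concatenate the branch outputs along the channel axis.
        return self.act(torch.cat([branch(x) for branch in self.branches], dim=1))

# Toy usage: a window of 3-axis sensor samples, 128 time steps per window.
features = MultiKernelConvBlock()(torch.randn(8, 3, 128))   # -> (8, 96, 128)
```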

A Stock Price Prediction Based on Recurrent Convolution Neural Network with Weighted Loss Function (가중치 손실 함수를 가지는 순환 컨볼루션 신경망 기반 주가 예측)

  • Kim, HyunJin;Jung, Yeon Sung
    • KIPS Transactions on Software and Data Engineering / v.8 no.3 / pp.123-128 / 2019
  • This paper proposes an artificial-intelligence-based stock price prediction in which a model with recurrent convolutional neural network (RCNN) layers is adopted. The motivation is that a long short-term memory (LSTM)-based neural network can produce time-series predictions, while a convolutional neural network provides data filtering, averaging, and augmentation. By combining these advantages, the proposed technique predicts the estimated stock price of the next day. In addition, a custom weighted loss function is adopted to emphasize the recent part of the time series, and stock data related to the stock price index are included to account for market trends. In the experiments, the proposed stock price prediction reduces the test error by 3.19%, an improvement of about 19% over other techniques.
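
As an illustration of a loss that emphasizes the recent part of a time series, a minimal sketch is shown below, assuming PyTorch; the linear weighting over prediction steps is one possible interpretation, not the paper's exact weighted loss function.

```python
# Sketch of a recency-weighted loss (PyTorch assumed; the linear weighting
# scheme is an illustrative choice, not the paper's exact formula).
import torch

def recency_weighted_mse(pred, target):
    # pred, target: (batch, horizon), ordered from oldest to most recent step.
    horizon = pred.shape[1]
    weights = torch.linspace(0.5, 1.5, horizon)    # more recent steps weigh more
    weights = weights / weights.sum()              # normalize so weights sum to 1
    return (weights * (pred - target) ** 2).sum(dim=1).mean()

# Toy usage on random predictions and targets.
loss = recency_weighted_mse(torch.randn(4, 10), torch.randn(4, 10))
```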

Hybrid Word-Character Neural Network Model for the Improvement of Document Classification (문서 분류의 개선을 위한 단어-문자 혼합 신경망 모델)

  • Hong, Daeyoung;Shim, Kyuseok
    • Journal of KIISE / v.44 no.12 / pp.1290-1295 / 2017
  • Document classification, the task of assigning a category to each document based on its text, is one of the fundamental problems in natural language processing. It is used in various applications such as topic classification and sentiment classification. Neural network models for document classification can be divided into two categories: word-level models and character-level models, which treat words and characters as their basic units, respectively. In this study, we propose a neural network model that combines character-level and word-level models to improve document classification performance. The proposed model extracts the feature vector of each word by combining information obtained from a word embedding matrix with information encoded by a character-level neural network. Based on these word feature vectors, the model classifies documents with a hierarchical structure in which recurrent neural networks with attention mechanisms are used at both the word and sentence levels. Experiments on real-life datasets demonstrate the effectiveness of the proposed model.
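
The word-character feature combination described above could look roughly like the sketch below, assuming PyTorch; it covers only the hybrid word encoder, not the hierarchical attention classifier, and a bidirectional GRU is used as the character-level network purely for illustration.

```python
# Sketch of a hybrid word-character feature extractor (PyTorch assumed; only the
# per-word feature combination is shown, with illustrative sizes).
import torch
import torch.nn as nn

class HybridWordEncoder(nn.Module):
    def __init__(self, word_vocab=30000, char_vocab=100, word_dim=200, char_dim=25, char_hidden=50):
        super().__init__()
        self.word_embed = nn.Embedding(word_vocab, word_dim)
        self.char_embed = nn.Embedding(char_vocab, char_dim)
        # Character-level encoder: a bidirectional GRU over the characters of each word.
        self.char_rnn = nn.GRU(char_dim, char_hidden, batch_first=True, bidirectional=True)

    def forward(self, word_ids, char_ids):
        # word_ids: (num_words,); char_ids: (num_words, max_word_len)
        _, h = self.char_rnn(self.char_embed(char_ids))    # h: (2, num_words, char_hidden)
        char_feat = torch.cat([h[0], h[1]], dim=1)         # (num_words, 2 * char_hidden)
        # Hybrid word feature: embedding-matrix information + character-level information.
        return torch.cat([self.word_embed(word_ids), char_feat], dim=1)
```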

Research for Radar Signal Classification Model Using Deep Learning Technique (딥 러닝 기법을 이용한 레이더 신호 분류 모델 연구)

  • Kim, Yongjun;Yu, Kihun;Han, Jinwoo
    • Journal of the Korea Institute of Military Science and Technology / v.22 no.2 / pp.170-178 / 2019
  • In electronic warfare, radar signal classification is the problem of discriminating threat types by analyzing radar signals received through electronic warfare equipment from enemy threats such as aircraft, radars, and missiles. Recent radar systems have adopted a variety of modulation schemes that differ from those used in conventional systems and are often difficult to analyze with existing algorithms. In addition, a robust algorithm is needed for signals received in real environments, because of environmental influences and measurement errors arising from the characteristics of the hardware. In this paper, we propose a radar signal classification method based on deep learning techniques that is not affected by the radar signal modulation scheme or by noise.

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing the simple body movements of an individual user to recognizing low-level and high-level behaviors. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors such as the accelerometer, magnetic field, and gyroscope sensors are less vulnerable to privacy issues and can collect a large amount of data in a short time. In this paper, a method for detecting accompanying status based on a deep learning model that uses only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) was proposed. The accompanying status was defined as a subset of user interaction behavior, namely whether the user is accompanying an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks was proposed for classifying accompanying and conversation. First, a data preprocessing method was introduced that consists of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. Nearest-neighbor interpolation was applied to synchronize the timestamps of the data collected from different sensors, normalization was performed on each x, y, and z axis value of the sensor data, and the sequence data were generated with a sliding window. The sequence data then became the input of the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consisted of 3 convolutional layers and had no pooling layer, in order to maintain the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were classified by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model was trained with the adaptive moment estimation (ADAM) optimization algorithm and a mini-batch size of 128. Dropout was applied to the input of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decreased exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. We will also study transfer learning methods that allow models trained on the training data to be transferred to evaluation data that follows a different distribution. We expect to obtain a model that exhibits robust recognition performance against changes in the data that were not considered at the model training stage.
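
Based on the layer counts stated in the abstract (three convolutional layers without pooling, a two-layer LSTM with 128 cells, a softmax classifier, dropout on the LSTM input, and ADAM with a 0.001 learning rate decayed by 0.99 per epoch), a minimal sketch might look as follows; PyTorch, the kernel sizes, channel counts, dropout rate, and the nine input channels (three axes for each of the three sensors) are assumptions.

```python
# Sketch of the CNN + LSTM accompanying-status classifier described above
# (PyTorch assumed; kernel sizes, channel counts, and dropout rate are guesses).
import torch
import torch.nn as nn

class AccompanyingStatusNet(nn.Module):
    def __init__(self, in_ch=9, conv_ch=64, lstm_hidden=128, num_classes=2, dropout=0.5):
        super().__init__()
        # Three 1D convolution layers with no pooling, so the temporal length is preserved.
        self.cnn = nn.Sequential(
            nn.Conv1d(in_ch, conv_ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(conv_ch, conv_ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(conv_ch, conv_ch, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.dropout = nn.Dropout(dropout)               # applied to the LSTM input
        self.lstm = nn.LSTM(conv_ch, lstm_hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(lstm_hidden, num_classes)    # softmax is applied inside the loss

    def forward(self, x):                                # x: (batch, sensor_axes, time)
        feat = self.cnn(x).transpose(1, 2)               # (batch, time, conv_ch)
        _, (h, _) = self.lstm(self.dropout(feat))
        return self.fc(h[-1])                            # logits: accompanying / not

model = AccompanyingStatusNet()
criterion = nn.CrossEntropyLoss()                        # cross-entropy loss, as stated
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)   # ADAM with the stated learning rate
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)  # x0.99 per epoch
```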