• Title/Summary/Keyword: Deep recurrent neural networks


The Prediction of Cryptocurrency on Using Text Mining and Deep Learning Techniques : Comparison of Korean and USA Market (텍스트 마이닝과 딥러닝을 활용한 암호화폐 가격 예측 : 한국과 미국시장 비교)

  • Won, Jonggwan;Hong, Taeho
    • Knowledge Management Research
    • /
    • v.22 no.2
    • /
    • pp.1-17
    • /
    • 2021
  • In this study, we predicted Bitcoin prices on Bithumb and Coinbase, leading exchanges in Korea and the USA, using ARIMA and Recurrent Neural Networks (RNNs), and we used news articles from each country to propose a separated RNN model. The suggested model partitions the training data according to changes in the price trend and applies a time series prediction technique (RNNs) to each partition to create multiple models. Daily news data are then used to build a term-based dictionary for each trend change point. We detected trend change points in the test data by matching the daily news keywords of the test set against the term-based dictionaries and applied the matching model to produce prediction results. With this approach we obtained higher accuracy than a model that predicted prices using the time series prediction technique alone. This study shows that the limitations of time series prediction techniques can be overcome by detecting trend change points from news data, and that combining various time series prediction techniques with text mining could further improve model performance in future research.
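
A minimal sketch of the approach this abstract describes, assuming daily closing prices, a 14-day input window, and a simple keyword-overlap rule for picking the per-trend model; all names and shapes are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: one LSTM per price-trend segment, selected at test time by
# matching the day's news keywords against a per-trend term dictionary.
import numpy as np
import tensorflow as tf

WINDOW = 14  # days of past prices fed to the model (assumed)

def build_lstm():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, 1)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),  # next-day price
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def to_windows(prices):
    # turn one trend segment's price series into (window, next-price) pairs
    X = np.array([prices[i:i + WINDOW] for i in range(len(prices) - WINDOW)])
    y = np.array(prices[WINDOW:])
    return X[..., None], y

# One model and one keyword dictionary per trend segment (illustrative only).
trend_models = {t: build_lstm() for t in ("up", "down")}
term_dict = {"up": {"rally", "adoption"}, "down": {"ban", "hack"}}

def pick_trend(news_keywords):
    # choose the trend whose term dictionary overlaps most with today's news keywords
    return max(term_dict, key=lambda t: len(term_dict[t] & set(news_keywords)))

def predict_next(prices, news_keywords):
    model = trend_models[pick_trend(news_keywords)]
    window = np.asarray(prices[-WINDOW:], dtype="float32")[None, :, None]
    return float(model.predict(window, verbose=0)[0, 0])
```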

Efficient LSTM Configuration in IoT Environment (IoT 환경에서의 효율적인 LSTM 구성)

  • Lee, Jongwon;Hwang, Chulhyun;Lee, Sungock;Song, Hyunok;Jung, Hoekyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.10a
    • /
    • pp.345-346
    • /
    • 2018
  • Internet of Things (IoT) data is collected in real time and is usually treated as highly reliable because of its high precision. However, IoT data is not always reliable: values are often incomplete for reasons such as sensor aging and failure, poor operating environments, and communication problems. We therefore propose a methodology to solve this problem. Our methodology implements multiple LSTM networks that individually process the data collected from each sensor, as well as a single LSTM network that takes the input data batched into an array, and we propose an efficient method for constructing LSTM models in an IoT environment.
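
A rough sketch of the two configurations this abstract compares, one LSTM per sensor versus a single LSTM over the batched sensor array; the window length and sensor count are assumed.

```python
# Sketch of the two LSTM configurations (assumed shapes):
# (a) one small LSTM per sensor, (b) one LSTM over all sensors stacked as an array.
import tensorflow as tf

TIMESTEPS, N_SENSORS = 60, 4  # hypothetical window length and sensor count

def per_sensor_models():
    # (a) an independent LSTM for each sensor's univariate series
    return [
        tf.keras.Sequential([
            tf.keras.layers.Input(shape=(TIMESTEPS, 1)),
            tf.keras.layers.LSTM(16),
            tf.keras.layers.Dense(1),
        ])
        for _ in range(N_SENSORS)
    ]

def batched_model():
    # (b) a single LSTM fed the sensors as one multivariate array
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(TIMESTEPS, N_SENSORS)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(N_SENSORS),  # one output per sensor
    ])
```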


Prediction of Dorim River Water Level Using Tensorflow (Tensorflow를 이용한 도림천 수위 예측)

  • Yuk, Gi-moon;Lee, Jung-hwan;Jeong, Min-su;Moon, Hyeon-Tae;Moon, Yong-il
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2019.05a
    • /
    • pp.188-188
    • /
    • 2019
  • In this study, water level prediction based on observed data was carried out using TensorFlow. The Dorim stream basin was selected as the target basin, and the water level at the downstream Dorim bridge station was predicted using observed rainfall and upstream water level data, with other variables excluded. The models used were RNN (Recurrent Neural Network) and LSTM (Long Short-Term Memory networks), which show excellent performance on time series data prediction; 10-minute rainfall and water level observations from 2005 to 2016 were used for training, and the 2017 water level data were predicted. The results suggest that real-time water level prediction during flood periods is feasible and can be used to secure golden time in urban areas.
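
A minimal sketch of the described setup in TensorFlow/Keras, assuming a CSV of 10-minute rainfall and water level observations with hypothetical column names and an assumed 6-hour input window.

```python
# Hypothetical sketch: 10-minute rainfall and upstream water level as inputs,
# downstream (Dorim bridge) water level as the target; 2005-2016 for training,
# 2017 for testing, as in the abstract. Column names and window size are assumed.
import numpy as np
import pandas as pd
import tensorflow as tf

WINDOW = 36  # 36 x 10 min = 6 hours of history (assumed)

def make_dataset(df, features=("rainfall", "upstream_level"), target="dorim_level"):
    values = df[list(features)].to_numpy(dtype="float32")
    y = df[target].to_numpy(dtype="float32")
    X = np.stack([values[i:i + WINDOW] for i in range(len(df) - WINDOW)])
    return X, y[WINDOW:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 2)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),  # water level at the next 10-minute step
])
model.compile(optimizer="adam", loss="mse")

# df = pd.read_csv("dorim_10min.csv", parse_dates=["time"])  # hypothetical file
# train, test = df[df.time.dt.year <= 2016], df[df.time.dt.year == 2017]
# model.fit(*make_dataset(train), epochs=10, batch_size=256)
```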


Language-based Classification of Words using Deep Learning (딥러닝을 이용한 언어별 단어 분류 기법)

  • Zacharia, Nyambegera Duke;Dahouda, Mwamba Kasongo;Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2021.05a
    • /
    • pp.411-414
    • /
    • 2021
  • Deep learning has become a critical element of technology in education today. It has been used especially in natural language processing, where word-representation vectors play a central role. However, some low-resource languages, such as Swahili, which is spoken in East and Central Africa, do not benefit from this progress. Natural Language Processing (NLP) is a field of artificial intelligence in which systems and computational algorithms are built that can automatically understand, analyze, manipulate, and potentially generate human language. Having found that some African languages lack a proper representation in language processing, and are described as low-resource languages because of inadequate data for NLP, we decided to study the Swahili language. Language modeling using neural networks requires adequate data to guarantee quality word representations, which are important for NLP tasks, yet most African languages have no data for such processing. The main aim of this project is the classification of words in English, Swahili, and Korean, with particular emphasis on the low-resource Swahili language. Finally, we create our own dataset, preprocess the data using a Python script, formulate the syllabic alphabet, and develop an English, Swahili, and Korean word analogy dataset.
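
A hypothetical character-level sketch of the word-by-language classification task; the paper's own dataset, syllabic preprocessing, and model are not reproduced here.

```python
# Hypothetical sketch: classify a word as English, Swahili, or Korean from its characters.
import tensorflow as tf

LANGS = ["english", "swahili", "korean"]
MAX_LEN = 20  # assumed maximum word length in characters

vectorizer = tf.keras.layers.TextVectorization(
    split="character", output_sequence_length=MAX_LEN)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(MAX_LEN,)),
    tf.keras.layers.Embedding(input_dim=20000, output_dim=32),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(len(LANGS), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# words = ["school", "shule", "학교"]; labels = np.array([0, 1, 2])
# vectorizer.adapt(words); model.fit(vectorizer(words), labels, epochs=5)
```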

LSTM Language Model Based Korean Sentence Generation (LSTM 언어모델 기반 한국어 문장 생성)

  • Kim, Yang-hoon;Hwang, Yong-keun;Kang, Tae-gwan;Jung, Kyo-min
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.41 no.5
    • /
    • pp.592-601
    • /
    • 2016
  • The recurrent neural network (RNN) is a deep learning model suitable for sequential or variable-length data. Long Short-Term Memory (LSTM) mitigates the vanishing gradient problem of RNNs, so an LSTM can maintain long-term dependencies among the constituents of a given input sequence. In this paper, we propose an LSTM-based language model that predicts the words following a given incomplete sentence in order to generate a complete sentence. To evaluate our method, we trained the model on multiple Korean corpora and then generated the missing parts of incomplete Korean sentences. The results show that our language model was able to generate fluent Korean sentences, and that the word-based model generated better sentences than the other settings.
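
A minimal word-level sketch of the idea described here: an LSTM language model that completes a sentence by repeatedly predicting the most probable next word; vocabulary size and greedy decoding are assumptions, not taken from the paper.

```python
# Hypothetical sketch of a word-level LSTM language model for sentence completion.
import numpy as np
import tensorflow as tf

VOCAB_SIZE = 10000  # assumed vocabulary size

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None,)),           # variable-length word-id prefix
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    tf.keras.layers.LSTM(256),
    tf.keras.layers.Dense(VOCAB_SIZE, activation="softmax"),  # next-word distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

def complete(prefix_ids, n_words, id_to_word):
    # greedily append the most probable next word n_words times
    ids = list(prefix_ids)
    for _ in range(n_words):
        probs = model.predict(np.array([ids]), verbose=0)[0]
        ids.append(int(probs.argmax()))
    return " ".join(id_to_word[i] for i in ids)
```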

Recurrent Neural Network Model for Predicting Tight Oil Productivity Using Type Curve Parameters for Each Cluster (군집 별 표준곡선 매개변수를 이용한 치밀오일 생산성 예측 순환신경망 모델)

  • Han, Dong-kwon;Kim, Min-soo;Kwon, Sun-il
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.297-299
    • /
    • 2021
  • Predicting the future productivity of tight oil is an important task for analyzing residual oil recovery and reservoir behavior. In general, productivity prediction is performed using decline curve analysis (DCA). In this study, we propose an effective model for predicting future production using deep learning-based recurrent neural network (RNN), LSTM, and GRU algorithms. The input variables are the oil, gas, and water rates recorded during tight oil production and the type curve parameters calculated through several cluster analyses; the output variable is the monthly oil production. The existing empirical DCA model and the RNN models were compared, and an optimal model was derived through hyperparameter tuning to improve the predictive performance of the model.
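
A rough sketch of the input/output layout this abstract describes, assuming monthly oil, gas, and water rates plus the cluster's type curve parameters as inputs and next-month oil production as the output; an LSTM is shown, with GRU or SimpleRNN as drop-in replacements.

```python
# Hypothetical sketch: production-rate history plus per-cluster type curve parameters
# feed a recurrent model that predicts next-month oil production.
import tensorflow as tf

WINDOW, N_RATE_FEATURES, N_TYPE_CURVE_PARAMS = 12, 3, 3  # assumed sizes

rates = tf.keras.layers.Input(shape=(WINDOW, N_RATE_FEATURES), name="monthly_rates")
type_curve = tf.keras.layers.Input(shape=(N_TYPE_CURVE_PARAMS,), name="type_curve")

h = tf.keras.layers.LSTM(64)(rates)            # or tf.keras.layers.GRU / SimpleRNN
h = tf.keras.layers.Concatenate()([h, type_curve])
out = tf.keras.layers.Dense(1, name="next_month_oil")(h)

model = tf.keras.Model([rates, type_curve], out)
model.compile(optimizer="adam", loss="mse")
```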


Comparison of Fault Diagnosis Accuracy Between XGBoost and Conv1D Using Long-Term Operation Data of Ship Fuel Supply Instruments (선박 연료 공급 기기류의 장시간 운전 데이터의 고장 진단에 있어서 XGBoost 및 Conv1D의 예측 정확성 비교)

  • Hyung-Jin Kim;Kwang-Sik Kim;Se-Yun Hwang;Jang-Hyun Lee
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2022.06a
    • /
    • pp.110-110
    • /
    • 2022
  • This study was conducted as part of the development of a remote fault diagnosis technique for autonomous ships. In particular, we present the implementation of algorithms for condition diagnosis from time series data measured on engine fuel system equipment. Vibration time series data were measured on an onshore test rig equipped with an engine fuel pump and a purifier, and deep learning and machine learning algorithms capable of anomaly detection, fault classification, and fault prediction were implemented. Faults were induced artificially on the onshore test equipment by fault type to measure the characteristic vibration signals used for training. Because the measured signal data have the property that preceding events affect subsequent events, we present learning algorithms that can reflect the temporal dependency of the fault states contained in the time series. To capture the temporal dependency of fault events, recurrent models, RNN (Recurrent Neural Networks) and LSTM (Long Short-Term Memory models), and a Conv1D model based on convolution operations (Convolutional Neural Network) were applied and their prediction accuracies compared. In particular, noting that RNN and LSTM models show strengths in high-dimensional sequential natural language processing, a convolution-based Conv1D algorithm that can reflect the temporal dependency of the signals during training was used for fault prediction. In addition, considering the efficiency of machine learning models, XGBoost was additionally applied for fault prediction. Finally, the fault prediction performance of the Conv1D and XGBoost models was compared using the vibration signals from the fuel pump and the purifier.
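
A rough sketch of the two model families named in this abstract applied to windowed vibration signals; the window length, class count, and the hand-crafted features for the tree-based baseline are assumptions.

```python
# Hypothetical sketch: Conv1D fault classifier on raw vibration windows versus an
# XGBoost classifier on simple per-window statistics.
import numpy as np
import tensorflow as tf
from xgboost import XGBClassifier

WINDOW, N_CLASSES = 1024, 4  # assumed samples per window / fault classes

conv1d = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=16, strides=4, activation="relu"),
    tf.keras.layers.Conv1D(64, kernel_size=8, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
conv1d.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
               metrics=["accuracy"])

def simple_features(windows):
    # hand-crafted statistics per window for the tree-based baseline (assumed choice)
    return np.stack([windows.mean(axis=1), windows.std(axis=1),
                     np.abs(windows).max(axis=1)], axis=1)

xgb = XGBClassifier(n_estimators=200, max_depth=6)
# conv1d.fit(X[..., None], y);  xgb.fit(simple_features(X), y)   # X: (n, WINDOW)
```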


Bi-directional LSTM-CNN-CRF for Korean Named Entity Recognition System with Feature Augmentation (자질 보강과 양방향 LSTM-CNN-CRF 기반의 한국어 개체명 인식 모델)

  • Lee, DongYub;Yu, Wonhee;Lim, HeuiSeok
    • Journal of the Korea Convergence Society
    • /
    • v.8 no.12
    • /
    • pp.55-62
    • /
    • 2017
  • A Named Entity Recognition system recognizes words or phrases in a document that denote entities such as person names (PS), location names (LC), and organization names (OG), and labels them with the corresponding entity types. Traditional approaches to named entity recognition include statistics-based models learned from hand-crafted features. Recently, it has been proposed to build features representing the sentence using deep learning models such as Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) to solve the sequence labeling problem. In this research, to improve the performance of the Korean named entity recognition system, we used hand-crafted features, part-of-speech tagging information, and pre-built lexicon information to augment the features used to represent sentences. Experimental results show that the proposed method improves the performance of the Korean named entity recognition system. The results of this study are made available through GitHub for future collaborative research with researchers studying Korean Natural Language Processing (NLP) and named entity recognition systems.
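
A rough sketch of the feature-augmented input this abstract describes (word embeddings plus a character-level CNN, POS-tag embeddings, and a lexicon flag feeding a bidirectional LSTM); for brevity the CRF output layer is replaced with a per-token softmax, and all sizes are assumed.

```python
# Hypothetical sketch of a feature-augmented BiLSTM tagger for Korean NER.
import tensorflow as tf

SENT_LEN, WORD_LEN = 50, 12                      # assumed sentence / word lengths
N_WORDS, N_CHARS, N_POS, N_TAGS = 20000, 200, 45, 11

words = tf.keras.layers.Input(shape=(SENT_LEN,), name="word_ids")
chars = tf.keras.layers.Input(shape=(SENT_LEN, WORD_LEN), name="char_ids")
pos = tf.keras.layers.Input(shape=(SENT_LEN,), name="pos_ids")
lexicon = tf.keras.layers.Input(shape=(SENT_LEN, 1), name="lexicon_flag")

w = tf.keras.layers.Embedding(N_WORDS, 100)(words)
c = tf.keras.layers.TimeDistributed(tf.keras.Sequential([   # char-level CNN per word
    tf.keras.layers.Embedding(N_CHARS, 30),
    tf.keras.layers.Conv1D(30, 3, padding="same", activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
]))(chars)
p = tf.keras.layers.Embedding(N_POS, 20)(pos)

x = tf.keras.layers.Concatenate()([w, c, p, lexicon])
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(100, return_sequences=True))(x)
out = tf.keras.layers.Dense(N_TAGS, activation="softmax")(x)  # per-token tag scores

model = tf.keras.Model([words, chars, pos, lexicon], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```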

Deep Learning based Time Offset Estimation in GPS Time Transfer Measurement Data (GPS 시각전송 측정데이터에 대한 딥러닝 모델 기반 시각오프셋 예측)

  • Yu, Dong-Hui;Kim, Min-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.3
    • /
    • pp.456-462
    • /
    • 2022
  • In this paper, we introduce a method of predicting the time offset by applying LSTM, a deep learning model, to a precise time comparison technique based on measurement data extracted from the code signals transmitted by GPS satellites, which are used to determine Coordinated Universal Time (UTC). First, we describe the process of extracting time information from the code signals received from a GPS satellite on a daily basis and assembling the daily time offsets into a single time series. To apply a deep learning model to the constructed time offset series, LSTM, one of the recurrent neural networks, was used to predict the time offset of a GPS satellite. Through this study, the feasibility of time offset prediction by applying deep learning in the field of GNSS precise time transfer was confirmed.
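
A minimal sketch, assuming the daily time offsets form one univariate series cut into sliding windows from which an LSTM predicts the next day's offset; the window length is hypothetical.

```python
# Hypothetical sketch: sliding-window LSTM prediction of the daily GPS time offset.
import numpy as np
import tensorflow as tf

WINDOW = 30  # days of past offsets used as input (assumed)

def windows(offsets):
    X = np.stack([offsets[i:i + WINDOW] for i in range(len(offsets) - WINDOW)])
    return X[..., None].astype("float32"), np.asarray(offsets[WINDOW:], "float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),  # predicted next-day time offset
])
model.compile(optimizer="adam", loss="mse")
```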

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.1-25
    • /
    • 2020
  • In this paper, we suggest an application system architecture which provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount character information by selective optical character recognition based on deep learning technology. In general, there are many types of characters in an image, and optical character recognition technology extracts all character information in the image, but some applications need to ignore character types that are not of interest and focus only on specific types of characters. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount character information from gasometer images to send bills to users. Character strings that are not of interest, such as device type, manufacturer, manufacturing date, specification, etc., are not valuable information to the application. Thus, the application has to analyze the point-of-interest region and specific types of characters to extract valuable information only. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the point-of-interest region for selective character information extraction. We built three neural networks for the application system. The first is a convolutional neural network which detects the point-of-interest regions of the gas usage amount and device ID character strings, the second is another convolutional neural network which transforms the spatial information of a point-of-interest region into spatial-sequential feature vectors, and the third is a bi-directional long short-term memory network which converts the spatial-sequential information into character strings using time-series analysis mapping from feature vectors to character strings. In this research, the point-of-interest character strings are the device ID and the gas usage amount. The device ID consists of 12 Arabic numeral characters and the gas usage amount consists of 4-5 Arabic numeral characters. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and an NVIDIA TESLA V100 GPU. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. The mobile device captures the gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes reading requests from mobile devices to an input queue with a FIFO (First In First Out) structure. The slave process consists of the three types of deep neural networks which conduct the character recognition process and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests. If there are requests from the master process in the input queue, the slave process converts the image in the input queue into the device ID character string, the gas usage amount character string, and the position information of the strings, returns the information to an output queue, and switches to idle mode to poll the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three types of deep neural networks: 22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images with an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into five types (normal, noise, reflex, scale, and slant). Normal data are clean images, noise means images with noise signals, reflex means images with light reflection in the gasometer region, scale means images with a small object size due to long-distance capturing, and slant means images that are not horizontally flat. The final character string recognition accuracies for the device ID and gas usage amount of the normal data are 0.960 and 0.864, respectively.
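
A rough sketch of the CRNN stage this abstract describes: a small CNN turns the cropped point-of-interest region into a horizontal feature sequence and a bidirectional LSTM produces per-step character probabilities; the crop size, network depth, and training loss (typically CTC in CRNN setups) are assumptions, not taken from the paper.

```python
# Hypothetical CRNN sketch: CNN feature extractor -> column-wise sequence -> BiLSTM.
import tensorflow as tf

H, W, N_CLASSES = 32, 128, 11  # assumed crop size; 10 digits + blank symbol

inp = tf.keras.layers.Input(shape=(H, W, 1))
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)          # feature map 16 x 64
x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.MaxPooling2D((2, 1))(x)          # feature map 8 x 64
# reorder to (width, height, channels) so each image column becomes one timestep
x = tf.keras.layers.Permute((2, 1, 3))(x)
x = tf.keras.layers.Reshape((64, 8 * 64))(x)
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True))(x)
out = tf.keras.layers.Dense(N_CLASSES, activation="softmax")(x)  # per-step digit probs

crnn = tf.keras.Model(inp, out)  # usually trained with a CTC loss in CRNN setups
```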