• Title/Summary/Keyword: Long Short-Term Memory Network


Radar rainfall prediction based on deep learning considering temporal consistency (시간 연속성을 고려한 딥러닝 기반 레이더 강우예측)

  • Shin, Hongjoon; Yoon, Seongsim; Choi, Jaemin
    • Journal of Korea Water Resources Association / v.54 no.5 / pp.301-309 / 2021
  • In this study, we tried to improve the performance of an existing U-Net-based deep learning rainfall prediction model, which can weaken the significance of the temporal order in the data. To this end, a ConvLSTM2D U-Net model that accounts for the temporal consistency of the data was applied, and its accuracy was evaluated against a RainNet model and an extrapolation-based advection model. In addition, we tried to reduce the uncertainty of the model training process by training not only a single model but also an ensemble of 10 models. The trained neural network rainfall prediction model was optimized to generate 10-minute-ahead predictions from four consecutive frames covering the past 30 minutes. Visually distinct differences among the deep learning rainfall prediction models are difficult to identify, but the ConvLSTM2D U-Net yields the smallest prediction error and locates rainfall relatively accurately. In particular, the ensemble ConvLSTM2D U-Net showed a high CSI, a low MAE, and a narrow error range, predicting rainfall more accurately and more stably than the other models. However, the prediction performance at specific points was very low compared to that over the entire area, so the deep learning rainfall prediction model still has limitations. This study confirmed that a ConvLSTM2D U-Net structure that accounts for temporal change can increase prediction accuracy, but convolutional deep neural network models remain limited by spatial smoothing in regions of strong rainfall and in detailed rainfall prediction. (A minimal sketch of a ConvLSTM2D encoder-decoder of this kind follows below.)
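The following is a minimal, illustrative sketch (not the authors' implementation) of how a ConvLSTM2D-based encoder-decoder in the spirit of the model above could be assembled in Keras. The grid size, filter counts, and single skip connection are assumptions chosen only to show the shape flow from four past radar frames to one predicted field; the 10-model ensemble would simply repeat this construction with different random seeds.

```python
# Hypothetical sketch: ConvLSTM2D encoder with a small U-Net-style decoder.
# Input: 4 past radar frames (30-minute window); output: one 10-minute-ahead field.
import tensorflow as tf
from tensorflow.keras import layers

def build_convlstm_unet(grid=128):
    inp = layers.Input(shape=(4, grid, grid, 1))          # (time, height, width, channel)
    x = layers.ConvLSTM2D(32, 3, padding="same", return_sequences=True)(inp)
    x = layers.ConvLSTM2D(32, 3, padding="same", return_sequences=False)(x)
    skip = x                                              # feature map kept for the skip connection
    d = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    u = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(d)
    u = layers.Concatenate()([u, skip])                   # U-Net-style merge of encoder/decoder features
    out = layers.Conv2D(1, 1, activation="relu")(u)       # predicted rainfall field
    return tf.keras.Model(inp, out)

model = build_convlstm_unet()
model.compile(optimizer="adam", loss="mse")
```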

A Study on Performance Improvement of Recurrent Neural Networks Algorithm using Word Group Expansion Technique (단어그룹 확장 기법을 활용한 순환신경망 알고리즘 성능개선 연구)

  • Park, Dae Seung; Sung, Yeol Woo; Kim, Cheong Ghil
    • Journal of Industrial Convergence / v.20 no.4 / pp.23-30 / 2022
  • Recently, with the development of artificial intelligence (AI) and deep learning, the importance of conversational AI chatbots has been highlighted, and chatbot research is being conducted in various fields. For ease of development, chatbots are usually built on open-source or commercial platforms, and these platforms mainly use RNNs and related algorithms. The RNN algorithm has the advantages of fast learning, ease of monitoring and verification, and good inference performance. In this paper, a method for improving the inference performance of RNNs and their derived algorithms was studied. The proposed method applies a word group expansion learning technique to the key words of each sentence when training the RNN-family models. As a result, the three recurrent algorithms (RNN, GRU, and LSTM) achieved inference performance improvements of between 0.37% and 1.25%. These results can accelerate the adoption of AI chatbots in related industries and contribute to the use of various RNN-derived algorithms. Future research should study the effect of various activation functions on the performance of artificial neural network algorithms. (A minimal sketch of comparable RNN/GRU/LSTM classifiers follows below.)
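As an illustration only (vocabulary size, layer widths, and class count are assumptions, and the word-group-expansion preprocessing itself is not shown), the sketch below builds the three recurrent classifiers so that the same augmented training data could be fed to each and their inference accuracies compared.

```python
# Hypothetical sketch: comparable SimpleRNN / GRU / LSTM intent classifiers.
import tensorflow as tf
from tensorflow.keras import layers

def build_classifier(cell, vocab=10000, n_classes=5):
    return tf.keras.Sequential([
        layers.Embedding(vocab, 64),          # tokenized sentences (word-group-expanded or not)
        cell(64),                             # the recurrent layer under comparison
        layers.Dense(n_classes, activation="softmax"),
    ])

models = {name: build_classifier(cell)
          for name, cell in [("RNN", layers.SimpleRNN),
                             ("GRU", layers.GRU),
                             ("LSTM", layers.LSTM)]}
for m in models.values():
    m.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```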

A Comparative Study of Machine Learning Algorithms Based on Tensorflow for Data Prediction (데이터 예측을 위한 텐서플로우 기반 기계학습 알고리즘 비교 연구)

  • Abbas, Qalab E.; Jang, Sung-Bong
    • KIPS Transactions on Computer and Communication Systems / v.10 no.3 / pp.71-80 / 2021
  • Selecting an appropriate neural network algorithm is an important step toward accurate data prediction in machine learning. Many algorithms based on basic artificial neural networks have been devised to efficiently predict future data, including deep neural networks (DNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and gated recurrent unit (GRU) networks. Developers face difficulties when choosing among these networks because sufficient information on their performance is unavailable. To alleviate this difficulty, we evaluated each algorithm by comparing prediction errors and processing times. Each neural network model was trained on a tax dataset, and the trained models were used for data prediction to compare accuracies across the algorithms. Furthermore, the effects of activation functions and various optimizers on model performance were analyzed. The experimental results show that the GRU and LSTM algorithms yield the lowest prediction error, with an average RMSE of 0.12 and average R2 scores of 0.78 and 0.75, respectively, while the basic DNN model achieves the lowest processing time but the highest average RMSE of 0.163. Furthermore, the Adam optimizer yields the best performance (with DNN, GRU, and LSTM) in terms of error and the worst performance in terms of processing time. The findings of this study are thus expected to be useful to scientists and developers. (A minimal sketch of such a model comparison follows below.)
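The sketch below is an assumed, simplified version of such a comparison (a toy sine series stands in for the tax dataset, and the window length, layer sizes, and epoch count are illustrative): the same windowed data is fed to DNN, SimpleRNN, LSTM, and GRU regressors while the RMSE and training time of each are recorded.

```python
# Hypothetical sketch: DNN vs. SimpleRNN vs. LSTM vs. GRU on the same windowed series.
import time
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def make_model(kind, window=12):
    inp = layers.Input(shape=(window, 1))
    if kind == "DNN":
        x = layers.Dense(64, activation="relu")(layers.Flatten()(inp))
    else:
        cell = {"RNN": layers.SimpleRNN, "LSTM": layers.LSTM, "GRU": layers.GRU}[kind]
        x = cell(64)(inp)
    model = tf.keras.Model(inp, layers.Dense(1)(x))
    model.compile(optimizer="adam", loss="mse")
    return model

# toy data: predict the next value of a noisy sine wave from the previous 12 values
t = np.arange(3000)
series = np.sin(0.1 * t) + 0.1 * np.random.randn(3000)
X = np.stack([series[i:i + 12] for i in range(len(series) - 12)])[..., None]
y = series[12:]

for kind in ["DNN", "RNN", "LSTM", "GRU"]:
    model = make_model(kind)
    start = time.time()
    model.fit(X, y, epochs=2, batch_size=64, verbose=0)
    rmse = float(np.sqrt(model.evaluate(X, y, verbose=0)))
    print(f"{kind}: RMSE={rmse:.3f}, train time={time.time() - start:.1f}s")
```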

Comparison of Models for Stock Price Prediction Based on Keyword Search Volume According to the Social Acceptance of Artificial Intelligence (인공지능의 사회적 수용도에 따른 키워드 검색량 기반 주가예측모형 비교연구)

  • Cho, Yujung; Sohn, Kwonsang; Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.103-128 / 2021
  • Recently, investors' interest and the dissemination of stock-related information have been regarded as significant factors explaining stock returns and trading volume. Moreover, for companies that develop, distribute, or utilize innovative new technologies such as artificial intelligence, it is difficult to accurately predict future stock returns and volatility because of macro-environmental and market uncertainty. Since market uncertainty is recognized as an obstacle to the activation and spread of artificial intelligence technology, research is needed to mitigate it. Hence, the purpose of this study is to propose a machine learning model that predicts the volatility of a company's stock price by using the internet search volume of artificial intelligence-related technology keywords as a measure of investor interest. To this end, we use VAR (Vector Auto Regression) and an LSTM (Long Short-Term Memory) deep neural network for stock market prediction, and the stock price prediction performance based on keyword search volume is compared across the technology's social acceptance stages. We also analyze the sub-technologies of artificial intelligence to examine how the search volume of detailed technology keywords changes with the technology acceptance stage and how interest in a specific technology affects the stock market forecast. For this purpose, the keywords artificial intelligence, deep learning, and machine learning were selected, and we measured how often each keyword appeared in online documents each week over five years, from January 1, 2015 to December 31, 2019. The stock price and trading volume data of KOSDAQ-listed companies were also collected and used for the analysis. As a result, we found that the keyword search volume for artificial intelligence technology increased as its social acceptance increased. In particular, starting from the AlphaGo shock, the keyword search volume for artificial intelligence itself and for detailed technologies such as machine learning and deep learning increased. The prediction models based on keyword search volume showed high accuracy, and the acceptance stage yielding the best prediction performance differed for each keyword. In the stock price prediction based on keyword search volume for each social acceptance stage of the artificial intelligence technologies classified in this study, the awareness stage showed the highest prediction accuracy, and the accuracy differed according to the keywords used in the model at each stage. Therefore, when constructing a stock price prediction model using technology keywords, the social acceptance of the technology and the sub-technology classification should be considered. The results of this study provide the following implications. First, to predict the return on investment for companies based on innovative technology, it is most important to capture the recognition stage, in which public interest in the technology rapidly increases. Second, the fact that the change in keyword search volume and the accuracy of the prediction model vary with the social acceptance of the technology should be considered when developing investment decision support systems such as the big-data-based robo-advisors recently introduced by the financial sector. (A minimal sketch of the LSTM part of such a setup follows below.)
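A minimal sketch of the LSTM side of such a setup is given below, under the assumption of weekly data, an 8-week lookback, and a handful of keyword-search-volume features; the arrays are random placeholders for the real search-volume and KOSDAQ return series, and the VAR baseline is omitted.

```python
# Hypothetical sketch: LSTM on weekly keyword-search-volume features (random placeholders).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

LOOKBACK = 8       # weeks of history per sample (assumption)
N_FEATURES = 4     # e.g. search volumes for "artificial intelligence", "deep learning",
                   # "machine learning" plus the past return (assumption)

def make_windows(features, target, lookback=LOOKBACK):
    X = np.stack([features[i:i + lookback] for i in range(len(features) - lookback)])
    y = target[lookback:]
    return X, y

model = tf.keras.Sequential([
    layers.Input(shape=(LOOKBACK, N_FEATURES)),
    layers.LSTM(32),
    layers.Dense(1),               # predicted next-week return / volatility proxy
])
model.compile(optimizer="adam", loss="mse")

features = np.random.randn(260, N_FEATURES)   # ~5 years of weekly observations (placeholder)
target = np.random.randn(260)                 # placeholder target series
X, y = make_windows(features, target)
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
```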

A Study on Deep Learning Model for Discrimination of Illegal Financial Advertisements on the Internet

  • Kil-Sang Yoo; Jin-Hee Jang; Seong-Ju Kim; Kwang-Yong Gim
    • Journal of the Korea Society of Computer and Information / v.28 no.8 / pp.21-30 / 2023
  • This study proposes a model that uses Python-based deep learning text classification techniques to detect illegal financial advertising posts on the internet. Such posts promote unlawful financial activities, including the trading of bank accounts, credit card fraud, cashing out through mobile payments, and the sale of personal credit information. Despite the efforts of financial regulatory authorities, illegal financial activities persist. The proposed model is intended to aid in identifying and detecting illicit content in internet-based illegal financial advertising, thereby contributing to ongoing efforts to combat such activities. The study uses convolutional neural networks (CNN) and recurrent neural networks (RNN, LSTM, GRU), which are commonly used text classification techniques. The raw data for the model is based on manually confirmed regulatory judgments. By tuning the hyperparameters of the Korean natural language processing pipeline and the deep learning models, the study obtained an optimized model with the best performance. This research is significant in that it presents a deep learning model for discerning illegal financial advertising on the internet, which has not been previously explored. Moreover, with accuracies ranging from 91.3% to 93.4%, there is hope that the model can be applied in practice to the detection of illicit financial advertisements, ultimately contributing to their eradication. (A minimal sketch of such a text classifier follows below.)
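The following is a minimal, assumed sketch (vocabulary size, tokenization, and layer widths are not the paper's configuration) of a binary LSTM text classifier of the kind compared in the study for flagging illegal financial-advertising posts.

```python
# Hypothetical sketch: binary LSTM classifier for illegal financial-advertising text.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB = 20000      # tokenizer vocabulary size (assumption)

model = tf.keras.Sequential([
    layers.Embedding(VOCAB, 128),             # tokenized (e.g. morpheme-level) Korean text
    layers.LSTM(64),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),    # 1 = illegal advertisement, 0 = legitimate post
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```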

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan; An, Sangjun; Kim, Mintae; Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility for accommodating computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment and cause enormous damage. In particular, failures of IT facilities are irregular because of their interdependence, and it is difficult to identify their cause. Previous studies on failure prediction in data centers predicted failure by treating a single server as a single state, without assuming that the devices interact. Therefore, in this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and we focused on analyzing complex failures occurring within servers. Failures external to the server include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. On the other hand, the cause of failures occurring inside the server is difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures do not occur in isolation: a failure in one server may cause failures in other servers or be triggered by them. In other words, while existing studies analyzed failures under the assumption that a single server does not affect the others, this study assumes that failures have effects between servers. To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring for each device are sorted in chronological order, and when a failure occurs in one piece of equipment, any failure occurring in another piece of equipment within 5 minutes is defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, five devices that frequently failed simultaneously within these sequences were selected, and the cases in which the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, a Hierarchical Attention Network deep learning model structure was used, considering that the degree to which each server contributes to complex failures differs. This algorithm increases prediction accuracy by giving more weight to servers with greater impact on the failure. The study began by defining the types of failure and selecting the analysis targets. In the first experiment, the same collected data was treated both as a single-server state and as a multiple-server state, and the two cases were compared and analyzed. The second experiment improved the prediction accuracy for complex failures by optimizing the threshold of each server. In the first experiment, the single-server case predicted that three of the five servers had no failure even though failures had actually occurred, whereas under the multiple-server assumption all five servers were predicted to have failed. This result supports the hypothesis that there is an effect between servers. The study thus confirmed that prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, under the assumption that the effect of each server differs, improved the analysis, and applying a different threshold for each server further improved the prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring in data center servers. The results are expected to help prevent failures in advance. (A minimal sketch of such a hierarchical attention structure over per-server LSTM encoders follows below.)
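Below is a minimal, assumed simplification of such a hierarchical structure: a shared LSTM encodes each server's resource time series, and a small attention layer weights the servers by their estimated contribution before a final failure probability is produced. The number of servers, time steps, and metrics are illustrative, not the authors' configuration.

```python
# Hypothetical sketch: shared per-server LSTM encoder + attention over servers.
import tensorflow as tf
from tensorflow.keras import layers

N_SERVERS, TIMESTEPS, N_METRICS = 5, 60, 8    # illustrative dimensions

inp = layers.Input(shape=(N_SERVERS, TIMESTEPS, N_METRICS))
server_vec = layers.TimeDistributed(layers.LSTM(32))(inp)   # one summary vector per server
scores = layers.Dense(1)(server_vec)                        # unnormalized attention score per server
weights = layers.Softmax(axis=1)(scores)                    # attention weights over the servers
context = layers.Dot(axes=1)([weights, server_vec])         # weighted sum of server vectors
context = layers.Flatten()(context)
out = layers.Dense(1, activation="sigmoid")(context)        # probability of a complex failure
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
```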

Automated Vehicle Research by Recognizing Maneuvering Modes using LSTM Model (LSTM 모델 기반 주행 모드 인식을 통한 자율 주행에 관한 연구)

  • Kim, Eunhui; Oh, Alice
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.16 no.4 / pp.153-163 / 2017
  • This research builds on previous work showing that personally preferred safe distance, turning angle, and speed differ among drivers. We therefore use machine learning models to recognize maneuvering modes, trained per individual or per group of similar driving patterns, and we evaluate automated driving according to the recognized maneuvering modes. Using driving knowledge, we subdivide driving into 8 longitudinal modes and 4 lateral modes, and by combining them we define 21 maneuvering modes. We train supervised deep learning models (RNN, LSTM, and Bi-LSTM) on data labeled per time stamp, organized by driver trips, and evaluate the recognized maneuvering modes for automated driving on the test set. The evaluation data is drawn from driving trips of 3,000 participants collected by VTTI in the USA over 3 years; we use 1,500 trips from 22 drivers, with a training, validation, and test split of 80%, 10%, and 10%, respectively. For recognizing the 8 longitudinal maneuvering modes, RNN achieves better accuracy than LSTM and Bi-LSTM. However, Bi-LSTM improves the accuracy of recognizing the 21 combined longitudinal and lateral maneuvering modes by 1.54% and 0.47% compared with RNN and LSTM, respectively. (A minimal sketch of such a Bi-LSTM sequence classifier follows below.)
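A minimal sketch of a Bi-LSTM sequence labeller for this kind of task is shown below; the sequence length, feature count, and layer width are assumptions, with one of the 21 combined maneuvering modes predicted at every time stamp.

```python
# Hypothetical sketch: Bi-LSTM labelling each time stamp with one of 21 maneuvering modes.
import tensorflow as tf
from tensorflow.keras import layers

TIMESTEPS, N_FEATURES, N_MODES = 100, 6, 21   # e.g. speed, acceleration, yaw rate, ... (assumptions)

model = tf.keras.Sequential([
    layers.Input(shape=(TIMESTEPS, N_FEATURES)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(N_MODES, activation="softmax")),  # mode per time stamp
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```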

Real-time PM10 Concentration Prediction LSTM Model based on IoT Streaming Sensor data (IoT 스트리밍 센서 데이터에 기반한 실시간 PM10 농도 예측 LSTM 모델)

  • Kim, Sam-Keun; Oh, Tack-Il
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.11 / pp.310-318 / 2018
  • Recently, the importance of big data analysis has been increasing as large amounts of data are generated by the various devices connected through the Internet of Things (IoT). In particular, it is necessary to analyze large-scale IoT streaming sensor data generated in real time and to provide services based on meaningful predictions. This paper proposes an LSTM model for real-time indoor PM10 concentration prediction based on streaming data from an IoT sensor, using AWS, and builds a real-time indoor PM10 concentration prediction service on top of the proposed model. The data used in the paper is streaming data collected from a PM10 IoT sensor over 24 hours. This time series is converted into sequences of 30 consecutive values for use as LSTM input data, and the LSTM model is trained with a sliding window that moves to the immediately adjacent window. To improve the model's performance, an incremental learning method is applied to the streaming data collected every 24 hours. Linear regression and recurrent neural network (RNN) models are compared to evaluate the performance of the LSTM model. Experimental results show that the proposed LSTM prediction model achieves a 700% improvement over linear regression and a 140% improvement over the RNN model in its performance measure. (A minimal sketch of the sliding-window LSTM setup follows below.)
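The sketch below illustrates, under assumed preprocessing and with the sensor and AWS plumbing omitted, the framing described above: windows of 30 consecutive PM10 readings as input, the next reading as target, and an incremental refit on each newly collected 24-hour batch.

```python
# Hypothetical sketch: sliding windows of 30 PM10 readings, plus a periodic incremental refit.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

WINDOW = 30

def to_sequences(series, window=WINDOW):
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y         # LSTM expects (samples, timesteps, features)

model = tf.keras.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.LSTM(32),
    layers.Dense(1),               # predicted next PM10 value
])
model.compile(optimizer="adam", loss="mse")

day1 = np.random.rand(24 * 60) * 80            # placeholder: one day of per-minute PM10 readings
X, y = to_sequences(day1)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

# incremental learning: continue training the same model on each newly streamed 24-hour batch
day2 = np.random.rand(24 * 60) * 80
X_new, y_new = to_sequences(day2)
model.fit(X_new, y_new, epochs=1, batch_size=32, verbose=0)
```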

Prediction of Music Generation on Time Series Using Bi-LSTM Model (Bi-LSTM 모델을 이용한 음악 생성 시계열 예측)

  • Kwangjin, Kim; Chilwoo, Lee
    • Smart Media Journal / v.11 no.10 / pp.65-75 / 2022
  • Deep learning is used as a creative tool that can overcome the limitations of existing analysis models and generate various types of output such as text, images, and music. In this paper, we propose a method for preprocessing audio data, using the Niko's MIDI Pack sound source files as a dataset, and for generating music with a Bi-LSTM. Based on a generated root note, multiple hidden layers create new notes suited to the musical composition, and an attention mechanism is applied to the output gate of the decoder to weight the factors that affect the data received from the encoder. Settings such as the loss function and optimization method are applied as parameters for improving the LSTM model. The proposed model is a multi-channel Bi-LSTM with attention that uses note pitches (generated by separating the treble and bass clefs), note lengths, rests, rest lengths, and chords to improve the efficiency and accuracy of the MIDI deep learning process. The trained model generates sound that follows a musical scale and is distinct from noise, and we aim to contribute to generating harmonically stable music. (A minimal sketch of a stacked Bi-LSTM next-token predictor follows below.)
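As a rough, assumed simplification of the multi-channel model (the attention mechanism and the separate pitch/length/rest/chord channels are omitted, and the vocabulary size is illustrative), the sketch below shows a stacked Bi-LSTM that predicts a distribution over the next note token from a window of previous tokens.

```python
# Hypothetical sketch: stacked Bi-LSTM predicting the next note token from previous tokens.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB = 512        # note / rest / chord token vocabulary (assumption)

model = tf.keras.Sequential([
    layers.Embedding(VOCAB, 128),                           # token window from the MIDI sequence
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(128)),
    layers.Dense(VOCAB, activation="softmax"),              # distribution over the next token
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```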

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk; Kim, Taeyeon; Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we suggest an application system architecture that provides accurate, fast, and efficient automatic gasometer reading. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount through selective optical character recognition based on deep learning. In general, an image contains many types of characters and optical character recognition extracts all of them, but some applications need to ignore character types that are not of interest and focus only on specific ones. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users; character strings such as the device type, manufacturer, manufacturing date, and specification are not valuable to the application. The application therefore has to analyze only the region of interest and specific character types to extract the valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the region of interest for selective character information extraction. We built three neural networks for the application system: the first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings, the second is another convolutional neural network that transforms the spatial information of a region of interest into sequential feature vectors, and the third is a bidirectional long short-term memory network that converts the sequential information into character strings through time-series analysis, mapping feature vectors to characters. In this research, the character strings of interest are the device ID and the gas usage amount; the device ID consists of 12 Arabic numerals and the gas usage amount consists of 4 to 5 Arabic numerals. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and an NVIDIA TESLA V100 GPU. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. The mobile device captures the gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes reading requests from mobile devices onto an input queue with a FIFO (First In First Out) structure. The slave process consists of the three types of deep neural networks that conduct character recognition and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests; when requests from the master process are present, it converts the image into the device ID character string, the gas usage amount character string, and the position information of the strings, returns the information to an output queue, and switches back to idle mode to poll the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks: 22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images with an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into 5 types (normal, noise, reflex, scale, and slant): normal data is clean image data, noise means an image with a noise signal, reflex means an image with light reflection in the gasometer region, scale means an image with a small object size due to long-distance capturing, and slant means an image that is not horizontally flat. The final character string recognition accuracies for the device ID and gas usage amount of normal data are 0.960 and 0.864, respectively. (A minimal sketch of the CRNN recognition stage follows below.)
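The sketch below is an illustrative, assumed version of the CRNN recognition stage only (the detection network and CTC decoding are omitted, and the input size and layer widths are not the paper's values): convolutional features over a cropped region of interest are read left to right by a bidirectional LSTM, producing per-step character probabilities that would be trained with a CTC loss.

```python
# Hypothetical sketch: CRNN recognition head reading a cropped region of interest.
import tensorflow as tf
from tensorflow.keras import layers

H, W = 32, 128                  # cropped region size (assumption)
N_CLASSES = 11                  # 10 digits + 1 CTC blank symbol

inp = layers.Input(shape=(H, W, 1))
x = layers.Conv2D(64, 3, padding="same", activation="relu")(inp)
x = layers.MaxPooling2D((2, 2))(x)                          # -> (16, 64, 64)
x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
x = layers.MaxPooling2D((2, 1))(x)                          # -> (8, 64, 128), width kept for the sequence
x = layers.Permute((2, 1, 3))(x)                            # width becomes the time axis
x = layers.Reshape((64, 8 * 128))(x)                        # one feature vector per horizontal position
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
out = layers.Dense(N_CLASSES, activation="softmax")(x)      # per-step character probabilities
model = tf.keras.Model(inp, out)                            # would be trained with a CTC loss
```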