• Title/Summary/Keyword: Short-term Memory


Combined Study of Individual Board Game Program on Cognitive Function and Depression in Elderly People with Mild Cognitive Impairment (경도인지장애 고령자의 인지기능 및 우울 수준에 대한 가정방문 개별 보드게임 프로그램의 융복합 연구)

  • Kim, Han-na;Song, Bo-Kyoung
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.9
    • /
    • pp.85-90
    • /
    • 2019
  • The purpose of this study was to investigate the effects of an individual board game program (IBGP) on cognitive function and depression level in 7 elderly people with mild cognitive impairment (MCI). We used the Mini-Mental State Examination Korean version (MMSE-K), the Montreal Cognitive Assessment Korean version (MoCA-K), and the Korean form of the Geriatric Depression Scale (KGDS). The results showed significant differences in MMSE-K across the pre-, post-, and follow-up assessments (p<0.05), with differences in orientation for time, place, and object and in attention (p<0.05). MoCA-K showed differences across the pre-, post-, and follow-up assessments (p<0.01), with differences in visual construction skill, orientation, and short-term memory (p<0.05). Finally, there was a difference in the KGDS depression level across the pre-, post-, and follow-up assessments (p<0.01). Therefore, an IBGP for the elderly can help improve cognitive function, and on this basis an advanced IBGP is expected to be applied to improve orientation for time and place in the elderly.

Radar rainfall prediction based on deep learning considering temporal consistency (시간 연속성을 고려한 딥러닝 기반 레이더 강우예측)

  • Shin, Hongjoon;Yoon, Seongsim;Choi, Jaemin
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.5
    • /
    • pp.301-309
    • /
    • 2021
  • In this study, we tried to improve on the existing U-Net-based deep learning rainfall prediction model, which can weaken the meaning of time-series order. To this end, a ConvLSTM2D U-Net model that accounts for the temporal consistency of the data was applied, and its accuracy was evaluated against a RainNet model and an extrapolation-based advection model. In addition, we tried to reduce the uncertainty of the training process by learning not just a single model but an ensemble of 10 models. The trained neural network rainfall prediction models were optimized to generate 10-minute-ahead predictions from four consecutive observations covering the past 30 minutes. Although the outputs of the deep learning rainfall prediction models are visually difficult to tell apart, the ConvLSTM2D U-Net produced the smallest prediction error and located rainfall relatively accurately. In particular, the ensemble ConvLSTM2D U-Net showed high CSI, low MAE, and a narrow error range, predicting rainfall more accurately and more stably than the other models. However, prediction performance at specific points was very low compared with performance over the entire area, a limitation of the deep learning rainfall prediction models. This study confirmed that a ConvLSTM2D U-Net structure accounting for temporal change can increase prediction accuracy, but convolutional deep neural network models remain limited by spatial smoothing in strong-rainfall regions and in detailed rainfall prediction.
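As a rough illustration of the architecture described above, the sketch below combines ConvLSTM2D encoding with U-Net-style skip connections in Keras: four past radar frames in, one 10-minute-ahead field out. The grid size, filter counts, and skip design are assumptions for illustration, not the authors' configuration.

```python
# A minimal sketch (not the paper's code) of a ConvLSTM2D U-Net for radar
# nowcasting: four frames covering the past 30 minutes in, one frame out.
import tensorflow as tf
from tensorflow.keras import layers, models

def convlstm_unet(h=128, w=128):                      # grid size is an assumption
    inp = layers.Input(shape=(4, h, w, 1))            # 4 past radar frames
    # Encoder: ConvLSTM blocks preserve the temporal order of the frames
    e1 = layers.ConvLSTM2D(32, 3, padding="same", return_sequences=True)(inp)
    p1 = layers.TimeDistributed(layers.MaxPooling2D(2))(e1)
    e2 = layers.ConvLSTM2D(64, 3, padding="same", return_sequences=True)(p1)
    p2 = layers.TimeDistributed(layers.MaxPooling2D(2))(e2)
    # Bottleneck: collapse the time axis into a single feature map
    b = layers.ConvLSTM2D(128, 3, padding="same", return_sequences=False)(p2)
    # Decoder with U-Net-style skips (last time step of each encoder stage)
    u2 = layers.UpSampling2D(2)(b)
    u2 = layers.Concatenate()([u2, e2[:, -1]])
    u2 = layers.Conv2D(64, 3, padding="same", activation="relu")(u2)
    u1 = layers.UpSampling2D(2)(u2)
    u1 = layers.Concatenate()([u1, e1[:, -1]])
    u1 = layers.Conv2D(32, 3, padding="same", activation="relu")(u1)
    out = layers.Conv2D(1, 1, activation="relu")(u1)  # predicted rain field
    return models.Model(inp, out)

model = convlstm_unet()
model.compile(optimizer="adam", loss="mse")
```

The 10-member ensemble reported above could then be approximated by training this model with ten different random seeds and averaging the predicted fields.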

A Comparative Study of Machine Learning Algorithms Based on Tensorflow for Data Prediction (데이터 예측을 위한 텐서플로우 기반 기계학습 알고리즘 비교 연구)

  • Abbas, Qalab E.;Jang, Sung-Bong
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.3
    • /
    • pp.71-80
    • /
    • 2021
  • The selection of an appropriate neural network algorithm is an important step for accurate data prediction in machine learning. Many algorithms based on basic artificial neural networks have been devised to efficiently predict future data, including deep neural networks (DNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and gated recurrent unit (GRU) networks. Developers face difficulties when choosing among these networks because sufficient information on their performance is unavailable. To alleviate this difficulty, we evaluated the performance of each algorithm by comparing errors and processing times. Each neural network model was trained using a tax dataset, and the trained model was used for data prediction to compare accuracies across the algorithms. Furthermore, the effects of activation functions and various optimizers on model performance were analyzed. The experimental results show that the GRU and LSTM algorithms yield the lowest prediction error, with an average RMSE of 0.12 and average R2 scores of 0.78 and 0.75 respectively, while the basic DNN model achieves the lowest processing time but the highest average RMSE of 0.163. Furthermore, the Adam optimizer yields the best performance with the DNN, GRU, and LSTM models in terms of error, and the worst in terms of processing time. The findings of this study are thus expected to be useful for scientists and developers.
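The comparison protocol lends itself to a compact sketch: build each network the same way, train on the same series, and record the error and wall-clock time. The sketch below uses placeholder data and shapes; the tax dataset itself is not public here.

```python
# A hedged sketch of the model comparison: identical training loops over
# DNN, SimpleRNN, LSTM, and GRU models on placeholder time-series data.
import time
import numpy as np
from tensorflow.keras import layers, models

def make_model(kind, steps=12, features=1):
    m = models.Sequential([layers.Input(shape=(steps, features))])
    if kind == "DNN":
        m.add(layers.Flatten())
        m.add(layers.Dense(64, activation="relu"))
    else:
        cell = {"RNN": layers.SimpleRNN, "LSTM": layers.LSTM, "GRU": layers.GRU}[kind]
        m.add(cell(64))
    m.add(layers.Dense(1))
    m.compile(optimizer="adam", loss="mse")  # Adam gave the lowest error above
    return m

x = np.random.rand(1000, 12, 1).astype("float32")  # placeholder series
y = x.mean(axis=1)                                 # placeholder target

for kind in ["DNN", "RNN", "LSTM", "GRU"]:
    model = make_model(kind)
    t0 = time.time()
    model.fit(x, y, epochs=5, verbose=0)
    rmse = float(np.sqrt(model.evaluate(x, y, verbose=0)))
    print(f"{kind}: RMSE={rmse:.3f}, time={time.time() - t0:.1f}s")
```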

A Study on Performance Improvement of Recurrent Neural Networks Algorithm using Word Group Expansion Technique (단어그룹 확장 기법을 활용한 순환신경망 알고리즘 성능개선 연구)

  • Park, Dae Seung;Sung, Yeol Woo;Kim, Cheong Ghil
    • Journal of Industrial Convergence
    • /
    • v.20 no.4
    • /
    • pp.23-30
    • /
    • 2022
  • Recently, with the development of artificial intelligence (AI) and deep learning, the importance of conversational AI chatbots is being highlighted, and chatbot research is being conducted in various fields. Chatbots are usually built on an open-source or commercial platform for ease of development, and these platforms mainly use RNNs and related algorithms, which offer fast learning, ease of monitoring and verification, and good inference performance. In this paper, a method for improving the inference performance of RNNs and related algorithms was studied. The proposed method applies a word-group expansion learning technique to the key words of each sentence when training the RNN and related algorithms. As a result, the three recurrent algorithms RNN, GRU, and LSTM achieved inference performance improvements of between 0.37% and 1.25%. These results can accelerate the adoption of AI chatbots in related industries and contribute to the use of various RNN-based algorithms. Future research should examine the effect of various activation functions on the performance of artificial neural network algorithms.
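The abstract describes the word-group expansion only at a high level, so the following is an illustrative guess at the general idea: each key word in a training sentence is expanded into a group of related words, generating extra training sentences before the RNN/GRU/LSTM models are trained as usual. The word groups here are hypothetical.

```python
# Illustrative sketch only: "word group expansion" read as sentence augmentation.
# The WORD_GROUPS mapping is hypothetical, not taken from the paper.
WORD_GROUPS = {
    "refund": ["refund", "reimbursement", "money back"],
    "delivery": ["delivery", "shipping", "dispatch"],
}

def expand(sentence: str) -> list[str]:
    """Generate extra training sentences by swapping key words with group members."""
    variants = [sentence]
    for key, group in WORD_GROUPS.items():
        if key in sentence:
            variants += [sentence.replace(key, w) for w in group if w != key]
    return variants

corpus = ["when will my delivery arrive", "i want a refund"]
expanded = [v for s in corpus for v in expand(s)]
print(expanded)  # the expanded corpus then feeds RNN/GRU/LSTM training as usual
```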

Experimental Comparison of Network Intrusion Detection Models Solving Imbalanced Data Problem (데이터의 불균형성을 제거한 네트워크 침입 탐지 모델 비교 분석)

  • Lee, Jong-Hwa;Bang, Jiwon;Kim, Jong-Wouk;Choi, Mi-Jung
    • KNOM Review
    • /
    • v.23 no.2
    • /
    • pp.18-28
    • /
    • 2020
  • With the development of the virtual community, the benefits that IT technology provides to people in fields such as healthcare, industry, communication, and culture are increasing, and the quality of life is also improving. Accordingly, various malicious attacks target the developed network environment. Firewalls and intrusion detection systems exist to detect these attacks in advance, but they are limited against malicious attacks that evolve day by day. To solve this problem, intrusion detection research using machine learning is being actively conducted, but false positives and false negatives occur due to the imbalance of the training dataset. In this paper, Random Oversampling is used to solve the imbalance problem of the UNSW-NB15 dataset used for network intrusion detection. Through experiments, we compared and analyzed the accuracy, precision, recall, F1-score, training and prediction time, and hardware resource consumption of the models. Building on this study of Random Oversampling, we plan to develop a more efficient network intrusion detection model using other methods and high-performance models that can solve the imbalanced data problem.
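The oversampling step itself is straightforward with imbalanced-learn's RandomOverSampler, which duplicates minority-class rows until the classes balance. The sketch below uses placeholder arrays in place of the preprocessed UNSW-NB15 features.

```python
# A minimal sketch of the Random Oversampling step (pip install imbalanced-learn);
# X and y are placeholders for the preprocessed UNSW-NB15 features and labels.
import numpy as np
from collections import Counter
from imblearn.over_sampling import RandomOverSampler

X = np.random.rand(1000, 42)             # placeholder feature matrix
y = np.array([0] * 950 + [1] * 50)       # heavily imbalanced labels

print("before:", Counter(y))
ros = RandomOverSampler(random_state=0)  # duplicates minority-class rows
X_res, y_res = ros.fit_resample(X, y)
print("after: ", Counter(y_res))         # classes now balanced for training
```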

Prediction of Music Generation on Time Series Using Bi-LSTM Model (Bi-LSTM 모델을 이용한 음악 생성 시계열 예측)

  • Kim, Kwangjin;Lee, Chilwoo
    • Smart Media Journal
    • /
    • v.11 no.10
    • /
    • pp.65-75
    • /
    • 2022
  • Deep learning is used as a creative tool that can overcome the limitations of existing analysis models and generate various types of results such as text, images, and music. In this paper, we propose a method for preprocessing audio data using the Niko's MIDI Pack sound source files as a dataset and for generating music with a Bi-LSTM. Based on the generated root note, the hidden layers are stacked in multiple layers to create new notes suitable for the musical composition, and an attention mechanism is applied to the output gate of the decoder to weight the factors that most affect the data passed from the encoder. Settings such as the loss function and optimization method are applied as parameters for improving the LSTM model. The proposed model is a multi-channel Bi-LSTM with attention that uses note pitches generated by separating the treble and bass clefs, note lengths, rests, rest lengths, and chords to improve the efficiency and prediction of the MIDI deep learning process. The trained model generates sound that follows a musical scale, distinct from noise, and aims to contribute to generating harmonically stable music.
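A minimal sketch of the model family described above might look as follows; the vocabulary size, window length, and the way attention is wired are assumptions, and the paper's multi-channel setup is not reproduced here.

```python
# A hedged sketch of a stacked Bi-LSTM note model with attention;
# VOCAB and SEQ_LEN are assumptions, and only a single channel is shown.
from tensorflow.keras import layers, models

VOCAB = 128    # assumed pitch/event vocabulary
SEQ_LEN = 50   # assumed input window of note events

inp = layers.Input(shape=(SEQ_LEN,))
x = layers.Embedding(VOCAB, 64)(inp)
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)  # multi-layer
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
context = layers.Attention()([x, x])          # weight the steps that matter most
x = layers.GlobalAveragePooling1D()(context)
out = layers.Dense(VOCAB, activation="softmax")(x)  # next-note distribution

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```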

Intrusion Detection Method Using Unsupervised Learning-Based Embedding and Autoencoder (비지도 학습 기반의 임베딩과 오토인코더를 사용한 침입 탐지 방법)

  • Junwoo Lee;Kangseok Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.8
    • /
    • pp.355-364
    • /
    • 2023
  • As advanced cyber threats continue to increase in recent years, it is difficult to detect new types of cyber attacks with existing pattern- or signature-based intrusion detection methods, so research on anomaly detection methods using data-driven artificial intelligence technology is increasing. Supervised anomaly detection methods are difficult to use in real environments because they require sufficient labeled data for learning, and research on unsupervised methods that learn from normal data and detect anomalies by finding patterns in the data itself has therefore been actively conducted. This study aims to extract a latent vector that preserves useful sequence information from sequence log data and to develop an anomaly detection model using the extracted latent vector. Word2Vec was used to create a dense vector representation corresponding to the characteristics of each sequence, and unsupervised autoencoders were developed to extract latent vectors from sequence data expressed as dense vectors. Three designs were compared: a denoising autoencoder based on the recurrent neural network GRU (Gated Recurrent Unit), which suits sequence data; a one-dimensional convolutional autoencoder, which avoids the limited short-term memory a GRU can exhibit; and an autoencoder combining the GRU and one-dimensional convolution. The data used in the experiment are time-series-based NGIDS (Next Generation IDS Dataset) data. The experiments showed that the autoencoder combining the GRU and one-dimensional convolution was more efficient than the GRU-based or convolution-based autoencoders in terms of training time for extracting useful latent patterns, and showed stable anomaly detection performance with smaller fluctuations.
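The combined design described above can be sketched as a Conv1D front end feeding a GRU bottleneck; the embedding dimension and sequence length are assumptions, and the Word2Vec step is taken as already done.

```python
# A minimal sketch of the GRU + 1-D convolution denoising autoencoder idea;
# SEQ_LEN and EMB (the Word2Vec vector size) are assumptions.
from tensorflow.keras import layers, models

SEQ_LEN, EMB = 100, 64

inp = layers.Input(shape=(SEQ_LEN, EMB))
noisy = layers.GaussianNoise(0.1)(inp)              # denoising objective
# Conv1D captures local patterns; the GRU carries longer-range order
x = layers.Conv1D(64, 5, padding="same", activation="relu")(noisy)
x = layers.MaxPooling1D(2)(x)
latent = layers.GRU(32)(x)                          # the latent vector
x = layers.RepeatVector(SEQ_LEN)(latent)
x = layers.GRU(64, return_sequences=True)(x)
out = layers.TimeDistributed(layers.Dense(EMB))(x)  # reconstruct the embeddings

autoencoder = models.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
# At detection time, a high reconstruction error flags an anomalous sequence.
```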

Development of Dolphin Click Signal Classification Algorithm Based on Recurrent Neural Network for Marine Environment Monitoring (해양환경 모니터링을 위한 순환 신경망 기반의 돌고래 클릭 신호 분류 알고리즘 개발)

  • Seoje Jeong;Wookeen Chung;Sungryul Shin;Donghyeon Kim;Jeasoo Kim;Gihoon Byun;Dawoon Lee
    • Geophysics and Geophysical Exploration
    • /
    • v.26 no.3
    • /
    • pp.126-137
    • /
    • 2023
  • In this study, a recurrent neural network (RNN) was employed to classify dolphin click signals derived from ocean monitoring data. To improve classification accuracy, the single time-series data were transformed into fractional domains using the fractional Fourier transform to expand their features. The transformed data were used as input for three RNN models: long short-term memory (LSTM), gated recurrent unit (GRU), and bidirectional LSTM (BiLSTM), which were compared to determine the optimal network for signal classification. Because the fractional Fourier transform displays different characteristics depending on the chosen angle parameter, the optimal angle range for each RNN was first determined. To evaluate network performance, metrics such as accuracy, precision, recall, and F1-score were employed. Numerical experiments demonstrated that all three networks performed well; however, the BiLSTM network outperformed the LSTM and GRU in terms of learning results. Furthermore, the BiLSTM network produced fewer misclassifications than the other networks and was deemed the most practically applicable to field data.
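Only the classifier stage is easy to sketch generically: SciPy has no standard fractional Fourier transform routine, so the sketch below assumes the fractional-domain features have already been computed and stacked as input channels.

```python
# A sketch of the BiLSTM classifier stage only; SEQ_LEN and FEATS (the number
# of fractional-domain channels) are assumptions, as is the binary target.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, FEATS = 256, 8

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, FEATS)),
    layers.Bidirectional(layers.LSTM(64)),  # reads the signal in both directions
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # click vs. non-click
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
```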

A Study on Deep Learning Model for Discrimination of Illegal Financial Advertisements on the Internet

  • Kil-Sang Yoo;Jin-Hee Jang;Seong-Ju Kim;Kwang-Yong Gim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.8
    • /
    • pp.21-30
    • /
    • 2023
  • The study proposes a model that utilizes Python-based deep learning text classification techniques to detect illegal financial advertising posts on the internet. These posts promote unlawful financial activities, including the trading of bank accounts, credit card fraud, cashing out through mobile payments, and the sale of personal credit information. Despite the efforts of financial regulatory authorities, illegal financial activities persist. The proposed model is intended to aid in identifying and detecting illicit content in internet-based illegal financial advertising, contributing to the ongoing efforts to combat such activities. The study utilizes convolutional neural networks (CNN) and recurrent neural networks (RNN, LSTM, GRU), which are commonly used text classification techniques, and the raw data for the model are manually confirmed regulatory judgments. By adjusting the hyperparameters of the Korean natural language processing and deep learning models, the study arrived at an optimized model with the best performance. This research is significant in that it presents a deep learning model for discerning illegal internet financial advertising, which has not been previously explored. With an accuracy of 91.3% to 93.4%, the model holds promise for practical application in detecting illicit financial advertisements, ultimately contributing to the eradication of such unlawful advertising.
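As a rough sketch of the text-classification setup, one of the compared recurrent variants might be configured as below; the tokenizer, vocabulary size, and padded length are assumptions, and the labeled regulatory data are not public here.

```python
# A hedged sketch of one compared variant (LSTM) for binary ad classification;
# VOCAB and MAX_LEN are assumptions about the Korean tokenization step.
from tensorflow.keras import layers, models

VOCAB, MAX_LEN = 20000, 100

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB, 128),
    layers.LSTM(64),                        # CNN/RNN/GRU variants were also compared
    layers.Dense(1, activation="sigmoid"),  # illegal (1) vs. legitimate (0)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```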

Research on the Application of AI Techniques to Advance Dam Operation (댐 운영 고도화를 위한 AI 기법 적용 연구)

  • Choi, Hyun Gu;Jeong, Seok Il;Park, Jin Yong;Kwon, E Jae;Lee, Jun Yeol
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2022.05a
    • /
    • pp.387-387
    • /
    • 2022
  • In existing flood-season dam operation, a dam operation model is run using forecast rainfall and real-time observed rainfall, and decision-making and dam operation follow the prediction results. However, this process requires repeated analysis, and the prediction results vary with the experience of the model operator, so automation of the repetitive work and generalization of the prediction results, independent of the operator, are needed. Accordingly, we applied AI techniques to the dam operation model to implement automatic prediction for various rainfall situations and generalization of the model results. To this end, we analyzed the applicability of the deep learning techniques used in 129 domestic and international studies in the water resources field. Among the various AI applications in water resources, none had been applied to dam operation prediction models, but similar areas included long-term reservoir operation prediction and prediction of water levels and discharge upstream and downstream of dams. For time-series water resources data, the Long Short-Term Memory (LSTM) technique was found to be highly applicable. AI was applied to the dam operation model in two areas: pattern analysis of rainfall observed at existing rain gauges, and parameter optimization when estimating dam inflow from rainfall. For the rainfall pattern analysis, the K-means clustering algorithm, which groups similar samples, was combined with Dynamic Time Warping, a similarity measure for time-series data. The pattern analysis presents, for each station, the rainfall patterns most frequently observed by month and during typhoon and monsoon periods, and was configured so the model can use them directly. For the parameter optimization used when estimating dam inflow from rainfall, a three-layer Multi-Layer LSTM and gradient descent were applied. Eight parameters per mid-sized basin are optimized, and the output of the optimization process is the discharge (inflow) with the smallest error against observations. As a result of applying AI techniques to the dam operation model, the existing repetitive work was automated, and by adding a function to display upstream and downstream constraints on dam operation, the time required for decision-making was greatly reduced. However, the parameter optimization takes longer than the classical parameter estimation techniques in the existing dam operation model, and its results have not been generalized, so further research is needed on this part.
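The rainfall pattern-analysis step, K-means clustering under a Dynamic Time Warping similarity measure, maps directly onto tslearn's DTW-based k-means; the gauge data and cluster count below are placeholders.

```python
# A minimal sketch of DTW-based k-means for rainfall pattern clustering
# (pip install tslearn); the event matrix and cluster count are placeholders.
import numpy as np
from tslearn.clustering import TimeSeriesKMeans

rain = np.random.rand(200, 48, 1)  # placeholder: 200 events x 48 time steps
km = TimeSeriesKMeans(n_clusters=5, metric="dtw", random_state=0)
labels = km.fit_predict(rain)      # each event assigned to a rainfall pattern
print(np.bincount(labels))         # how often each pattern occurs
```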
