• Title/Summary/Keyword: Long Short-term Memory (LSTM)


A Study on the Index Estimation of Missing Real Estate Transaction Cases Using Machine Learning (머신러닝을 활용한 결측 부동산 매매 지수의 추정에 대한 연구)

  • Kim, Kyung-Min; Kim, Kyuseok; Nam, Daisik
    • Journal of the Economic Geographical Society of Korea / v.25 no.1 / pp.171-181 / 2022
  • The real estate price index plays a key role as quantitative data in real estate market analysis. International organizations including the OECD publish real estate price indexes by country, and the Korea Real Estate Board announces metropolitan-level and municipal-level indexes. However, when the index is constructed at a spatial unit smaller than the metropolitan or municipal level, missing values become a problem. As the spatial scope is narrowed, some unit periods have few or no transactions, which makes index calculation difficult or even impossible. This study suggests a supervised machine learning model to compensate for missing values caused by the absence of transactions in a specific area and period. The proposed models are verified for their accuracy in predicting both existing and missing values.
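  • The abstract does not name the specific supervised model, so the following is only a minimal sketch of the general idea: train a regressor on periods where the small-area index is observed, using higher-level indexes and transaction counts as hypothetical features, and predict the missing periods. The file name, feature names, and the choice of RandomForestRegressor are illustrative assumptions, not the paper's specification.

```python
# Minimal sketch: impute missing small-area price-index values with a
# supervised regressor. File, features, and model choice are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("small_area_index.csv")                          # hypothetical file
features = ["municipal_index", "metro_index", "n_transactions"]   # assumed features

train = df[df["small_area_index"].notna()]
missing = df[df["small_area_index"].isna()]

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(train[features], train["small_area_index"])

# Fill the gaps and report fit on the periods where the index exists.
df.loc[missing.index, "small_area_index"] = model.predict(missing[features])
print("train R^2:", model.score(train[features], train["small_area_index"]))
```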

Comparison of solar power prediction model based on statistical and artificial intelligence model and analysis of revenue for forecasting policy (통계적 및 인공지능 모형 기반 태양광 발전량 예측모델 비교 및 재생에너지 발전량 예측제도 정산금 분석)

  • Lee, Jeong-In; Park, Wan-Ki; Lee, Il-Woo; Kim, Sang-Ha
    • Journal of IKEEE / v.26 no.3 / pp.355-363 / 2022
  • Korea is pursuing a plan to switch to and expand renewable energy sources with the goal of becoming carbon neutral by 2050. As the intermittent nature of renewable energy increases the instability of the energy supply, accurate prediction of renewable energy generation is becoming more important. The government has therefore opened a small-scale power brokerage market and is operating a system that pays settlements according to the accuracy of renewable energy forecasts. In this paper, prediction models for solar power generation were implemented using both a statistical model and an artificial intelligence model. The prediction accuracy of the two approaches was compared and analyzed, and the revenue from the settlement payments of the renewable energy generation forecasting system was estimated.
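  • The abstract does not state which statistical and AI models were compared or the exact settlement tariff, so the sketch below illustrates only the revenue-accounting step: given forecasts and actual generation, compute the error per settlement period and pay an incentive when the error falls under assumed thresholds. The tier boundaries and KRW/kWh rates are placeholders, not the official scheme.

```python
# Sketch of settlement-revenue estimation for a generation forecast.
# Error thresholds and incentive rates are ASSUMED placeholders; the
# official Korean forecasting-settlement rules should be substituted.
import numpy as np

capacity_kw = 1_000.0
actual_kwh = np.array([820.0, 640.0, 910.0, 300.0])     # hypothetical hourly energy
forecast_kwh = np.array([860.0, 600.0, 950.0, 380.0])   # hypothetical forecasts

# Normalized error relative to installed capacity (one common convention).
error_pct = np.abs(forecast_kwh - actual_kwh) / capacity_kw * 100.0

def incentive_rate(err_pct: float) -> float:
    """Assumed tiered incentive in KRW/kWh based on forecast error."""
    if err_pct <= 6.0:
        return 4.0
    if err_pct <= 8.0:
        return 3.0
    return 0.0

revenue_krw = sum(incentive_rate(e) * a for e, a in zip(error_pct, actual_kwh))
print(f"mean error {error_pct.mean():.2f}% -> settlement {revenue_krw:,.0f} KRW")
```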

Economic Analysis on the Maintenance Management of Riparian Facilities against Flood Damage (침수피해를 고려한 하천이용시설 유지관리의 경제성 분석)

  • Lee, Seung Yeon; Yoo, Hyung Ju; Lee, Sang Eun; Lee, Seung Oh
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.198-198 / 2021
  • As the importance of river management has recently grown from natural, social, and policy perspectives, the responsibility for managing river facilities through national river improvement has also increased. Looking at changes in the use of waterfront districts along the main stems of the five major national rivers, the number of visitors per unit area increased by 630,813 (persons/km2) in 2019 compared with 2015 (Ministry of Land, Infrastructure and Transport, 2020). This study selected riparian facilities along the Han River, where the growth in user numbers is relatively high, and applied a machine learning-based water level prediction algorithm to the area. Riparian facilities are facilities installed so that river users can use the river conveniently, and the analysis focused on park facilities (Gangseo, Nanji, Yanghwa, Mangwon, Yeouido, Ichon, Banpo, Jamwon, Ttukseom, Jamsil, Gwangnaru, and Guri). To account for flood damage to these facilities, a water level prediction algorithm was developed using LSTM (Long Short-term Memory), a technique specialized for time series data, and the study analyzed the effect of maintenance management that uses the resulting flood forecasts to prepare for disasters and manage the facilities systematically. The input data were water level (EL.m), Paldang Dam discharge (m3/s), and the tide level at Ganghwa Bridge (EL.m); the algorithm produced water level predictions 6 hours ahead, and the flood forecast scheme was expanded from the existing two stages (advisory, warning) to four stages (attention, pedestrian control, vehicle control, alert). Comparing the economics of the riparian facilities by estimating maintenance costs and benefits under the existing and the refined flood forecasts, maintenance costs were reduced by more than about 5%, benefits increased by more than about 1.5 times, and the management grade reached an average of grade C (ordinary) or higher. The application of the water level prediction algorithm is aimed at revitalizing river use and improving the efficiency of investment, and it is expected that developing an economic model based on these results and applying it to management groups within national rivers will provide an efficient maintenance management framework.
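  • A minimal PyTorch sketch of the kind of model described above: an LSTM takes sequences of the three stated inputs (water level, Paldang Dam discharge, Ganghwa Bridge tide level) and outputs the water level 6 hours ahead, which is then mapped to the four alert stages. The layer sizes, window length, and stage thresholds are illustrative assumptions.

```python
# Sketch (assumed hyperparameters): LSTM on [water level, dam discharge,
# tide level] sequences, predicting the water level 6 hours ahead.
import torch
import torch.nn as nn

class WaterLevelLSTM(nn.Module):
    def __init__(self, n_features: int = 3, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # water level at t + 6 h

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # use the last time step

def alert_stage(level_m: float) -> str:
    """Map a predicted level to the four stages; thresholds are placeholders."""
    if level_m < 2.0:
        return "attention"
    if level_m < 3.0:
        return "pedestrian control"
    if level_m < 4.0:
        return "vehicle control"
    return "alert"

model = WaterLevelLSTM()
x = torch.randn(8, 72, 3)                  # e.g., 72 hourly steps of the 3 inputs
pred = model(x)
print([alert_stage(v.item()) for v in pred.squeeze(1)])
```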


Linkage of Numerical Analysis Model and Machine Learning for Real-time Flood Risk Prediction (도시홍수 위험도 실시간 표출을 위한 수치해석 모형과 기계학습의 연계)

  • Kim, Hyun Il; Han, Kun Yeun; Kim, Tae Hyung; Choi, Kyu Hyun; Cho, Hyo Seop
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.332-332 / 2021
  • With substantial urbanization and the uncertain occurrence of sudden heavy rainfall, the risk of inland flooding that can cause property damage and casualties is increasing. To predict inland flooding, studies have long referred to observed rainfall or probabilistic rainfall scenarios and performed one- and two-dimensional hydraulic analyses of the study area; however, it has been noted that numerical models require a variety of hydrological and geomorphological data as well as measurements, and that their computationally intensive procedures make short-term prediction difficult. To address these problems, this study developed a real-time flood risk map prediction model for a single urban drainage subcatchment by combining observed rainfall data, one- and two-dimensional numerical models, and machine learning and deep learning techniques. An LSTM (Long Short-Term Memory) model was applied so that flood discharge can be predicted from rainfall data in real time, trained on a variety of one-dimensional urban runoff analysis results for nationwide rainfall. For the spatial distribution of inundation depth, logistic regression was used to predict exceedance of each reference inundation depth. Flood risk grades were calculated by applying a flood risk grade formula that considers inundation depth, flow velocity, and a debris factor, and these results were used to train a Random Forest so that real-time prediction can be performed. Predictions of inundation extent and flood risk grade were made on a grid basis, and, given the lack of validation data, predictive performance was evaluated by comparison with two-dimensional inundation analysis results that had been validated against inundation trace maps. When a specific observed or forecast rainfall input is given, this approach is expected to allow sections that become inaccessible and must be controlled to be predicted and managed in real time at the urban watershed scale.
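  • A compressed sketch of two of the learned components described above: per-threshold logistic regression for whether each grid cell exceeds a reference inundation depth, and a random forest that reproduces precomputed flood risk grades. The LSTM runoff predictor is omitted for brevity, and the cell features, reference depths, and grade labels are assumptions.

```python
# Sketch: grid-cell classifiers for a surrogate flood-risk pipeline.
# Features and labels are synthetic stand-ins for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_cells = rng.random((5000, 4))                 # per-grid-cell features (hypothetical)
depth_m = 2.0 * X_cells[:, 0] * X_cells[:, 1]   # stand-in for 2-D model depths

# One logistic-regression classifier per reference inundation depth.
depth_models = {}
for threshold in (0.2, 0.5, 1.0):               # metres; assumed reference depths
    depth_models[threshold] = LogisticRegression(max_iter=1000).fit(
        X_cells, (depth_m >= threshold).astype(int)
    )

# Random forest trained on grades derived from depth (velocity and debris
# factor would enter the grade formula in the actual study).
risk_grade = np.clip((depth_m * 3).astype(int), 0, 3)
grade_model = RandomForestClassifier(n_estimators=200, random_state=0)
grade_model.fit(X_cells, risk_grade)

print(grade_model.predict(X_cells[:5]))
```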


Prediction of rainfall abstraction based on deep learning considering watershed and rainfall characteristic factors (유역 및 강우 특성인자를 고려한 딥러닝 기반의 강우손실 예측)

  • Jeong, Minyeob; Kim, Dae-Hong; Kim, Seokgyun
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.37-37 / 2022
  • The model most commonly used in Korea for estimating effective rainfall is the NRCS-CN (Natural Resources Conservation Service curve number) model, whose parameters, such as the runoff curve number (CN) representing a watershed's runoff capacity, are determined for each watershed using observed rainfall-runoff data or soil and land cover maps. However, a watershed's CN value can vary with environmental conditions such as soil moisture; to reflect this, the CN value is adjusted using the antecedent moisture condition (AMC), but abrupt changes in CN according to the AMC can produce extreme changes in runoff. Along with the NRCS-CN model, the Green-Ampt model is widely used for estimating rainfall losses. The Green-Ampt model has the advantage of accounting for the physical process of infiltration in a watershed, but estimating the various physical parameters it uses requires extensive prior investigation of the watershed. In addition, parameters estimated in this way carry considerable uncertainty related to soil and vegetation conditions within the watershed, which makes practical application difficult. This study therefore proposes a method for estimating the parameters of the rainfall loss models currently in use. The proposed method is based on deep learning, and a Long Short-Term Memory (LSTM) model was used as the deep learning model. The input data of the deep learning model are factors representing rainfall characteristics, soil moisture, evapotranspiration, and vegetation characteristics of the watershed, and the simulated output is the total runoff from the watershed; the parameter values of the rainfall loss models can be derived from these. The estimated parameter values were applied to the rainfall loss models to compute effective rainfall for actual watersheds, and runoff was then predicted using a dynamic wave-based rainfall-runoff model. When the predicted runoff hydrographs were compared with observations, NSE values of 0.5 or higher were obtained, confirming that runoff was predicted adequately.
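  • For reference, the NRCS-CN relation whose parameter (CN) the proposed LSTM-based procedure is meant to supply can be written in a few lines; the initial abstraction ratio of 0.2 is the conventional default, and the example values are illustrative.

```python
# NRCS-CN effective rainfall (runoff depth) in millimetres.
def nrcs_cn_runoff(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Q = (P - Ia)^2 / (P - Ia + S), with S = 25400/CN - 254 and Ia = ia_ratio * S."""
    s = 25400.0 / cn - 254.0           # potential maximum retention (mm)
    ia = ia_ratio * s                  # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: 80 mm of rainfall on a watershed with CN = 75 (illustrative values).
print(f"effective rainfall: {nrcs_cn_runoff(80.0, 75.0):.1f} mm")
```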


Vision-Based Activity Recognition Monitoring Based on Human-Object Interaction at Construction Sites

  • Chae, Yeon; Lee, Hoonyong; Ahn, Changbum R.; Jung, Minhyuk; Park, Moonseo
    • International conference on construction engineering and project management / 2022.06a / pp.877-885 / 2022
  • Vision-based activity recognition has been widely attempted at construction sites to estimate productivity and enhance workers' health and safety. Previous studies have focused on extracting an individual worker's postural information from sequential image frames for activity recognition. However, workers of various trades perform different tasks with similar postural patterns, which degrades the performance of activity recognition based on postural information alone. To this end, this research exploited the concept of human-object interaction, the interaction between a worker and the surrounding objects, considering the fact that trade workers interact with specific objects (e.g., working tools or construction materials) relevant to their trades. This research developed an approach to understand the context from sequential image frames based on four features: posture, object, spatial, and temporal features. The posture and object features were used to analyze the interaction between the worker and the target object, and the other two features were used to detect movements across the entire region of the image frames in both the temporal and spatial domains. The developed approach used convolutional neural networks (CNNs) as feature extractors and activity classifiers, and a long short-term memory (LSTM) network was also used as an activity classifier. The developed approach achieved an average accuracy of 85.96% for classifying 12 target construction tasks performed by two trades of workers, which was higher than two benchmark models. This experimental result indicates that integrating the concept of human-object interaction offers great benefits in activity recognition when workers of various trades coexist in a scene.
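  • A minimal sketch of the CNN-plus-LSTM pattern the paper describes: a small CNN encodes each frame and an LSTM classifies the resulting sequence into one of the 12 task classes. The backbone, feature sizes, and sequence length are illustrative assumptions, not the authors' architecture.

```python
# Sketch (assumed sizes): per-frame CNN encoder + LSTM sequence classifier.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim),
        )

    def forward(self, x):              # x: (batch, 3, H, W)
        return self.net(x)

class ActivityClassifier(nn.Module):
    def __init__(self, n_classes: int = 12, feat_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.encoder = FrameEncoder(feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips):          # clips: (batch, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.encoder(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])   # classify from the last time step

logits = ActivityClassifier()(torch.randn(2, 16, 3, 112, 112))
print(logits.shape)                    # torch.Size([2, 12])
```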


Development of a Framework for Improvement of Sensor Data Quality from Weather Buoys (해양기상부표의 센서 데이터 품질 향상을 위한 프레임워크 개발)

  • Ju-Yong Lee; Jae-Young Lee; Jiwoo Lee; Sangmun Shin; Jun-hyuk Jang; Jun-Hee Han
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.3 / pp.186-197 / 2023
  • In this study, we focus on improving the quality of data transmitted from a weather buoy that guides ships along a route. The buoy carries an Internet of Things (IoT) device with sensors that collect meteorological data and the buoy's status, together with a wireless communication device that sends them to the central database at a ground control center and to nearby ships. The time interval of the data collected by the sensors is irregular, and faulty data is often detected. Therefore, this study provides a framework for improving data quality using machine learning models. The normal data pattern is learned by the machine learning models, and the trained models detect faulty data in the sensor data set and adjust it. To determine faulty data, the interquartile range (IQR) rule removes values outside the outlier fences, and an NGBoost algorithm removes data above the upper bound and below the lower bound. The removed data is then interpolated using an NGBoost or long short-term memory (LSTM) algorithm. The performance of the suggested process is evaluated on actual weather buoy data from Korea by improving the quality of the 'AIR_TEMPERATURE' data using other data from the same buoy. The proposed framework has been validated through computational experiments based on real-world data, confirming its suitability for practical applications.
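  • A minimal sketch of the rule-based part of the framework: flag AIR_TEMPERATURE values outside the IQR fences, drop them, and fill the gaps. In the paper the gap filling is done with NGBoost or an LSTM trained on the buoy's other sensors; plain time interpolation stands in here only to keep the example self-contained, and the file name is hypothetical.

```python
# Sketch: IQR-based fault detection and gap filling for one buoy sensor.
import pandas as pd

df = pd.read_csv("buoy.csv", parse_dates=["timestamp"]).set_index("timestamp")  # hypothetical file

q1, q3 = df["AIR_TEMPERATURE"].quantile([0.25, 0.75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Mark values outside the IQR fences as faults and remove them.
fault = (df["AIR_TEMPERATURE"] < lower) | (df["AIR_TEMPERATURE"] > upper)
df.loc[fault, "AIR_TEMPERATURE"] = None

# The paper fills these gaps with NGBoost or an LSTM trained on other
# sensors; time-based interpolation stands in for that learned model here.
df["AIR_TEMPERATURE"] = df["AIR_TEMPERATURE"].interpolate(method="time")
print(f"{fault.sum()} faulty readings replaced")
```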

Developing Cryptocurrency Trading Strategies with Time Series Forecasting Model (시계열 예측 모델을 활용한 암호화폐 투자 전략 개발)

  • Hyun-Sun Kim; Jae Joon Ahn
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.4 / pp.152-159 / 2023
  • This study endeavors to enrich investment prospects in cryptocurrency by establishing a rationale for investment decisions. The primary objective is to evaluate the predictability of four prominent cryptocurrencies (Bitcoin, Ethereum, Litecoin, and EOS) and to scrutinize the efficacy of trading strategies developed from the prediction models. To identify the most effective prediction model for each cryptocurrency in each year, we employed three methodologies representing traditional statistics and artificial intelligence: AutoRegressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM), and Prophet. These methods were applied across diverse periods and time intervals. The results suggest that Prophet trained on the previous 28 days' price history at 15-minute intervals generally yielded the highest performance. The results were validated through a random selection of 100 days (20 target dates per year) spanning from January 1st, 2018, to December 31st, 2022. The trading strategies were formulated from the best-performing prediction model, grounded in the simple principle of assigning greater weight to more predictable assets. When the forecasting model indicates an upward trend, the cryptocurrency is purchased, with the investment amount determined by the model's performance. Experimental results consistently demonstrated that the proposed trading strategy yields higher returns than an equally weighted buy-and-hold portfolio. The cryptocurrency trading model introduced in this paper carries two significant implications. First, it facilitates the evolution of cryptocurrencies from speculative assets to investment instruments. Second, it advances deep learning-based investment strategies by providing sound evidence for portfolio allocation, addressing the black-box issue, a notable weakness of deep learning, and offering increased transparency.
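  • A minimal sketch of the best-performing setup reported above: fit Prophet on the previous 28 days of 15-minute prices and go long only if the forecast for the next interval exceeds the last observed price. The 28-day window and 15-minute interval come from the abstract; the data file, column names, and position rule are assumptions.

```python
# Sketch: Prophet fit on 28 days of 15-minute prices, simple long/flat signal.
import pandas as pd
from prophet import Prophet

prices = pd.read_csv("btc_15min.csv", parse_dates=["timestamp"])   # hypothetical file
df = prices.rename(columns={"timestamp": "ds", "close": "y"})[["ds", "y"]]
df = df.tail(28 * 24 * 4)              # previous 28 days of 15-minute bars

model = Prophet()                      # default settings; tuning is omitted
model.fit(df)

future = model.make_future_dataframe(periods=1, freq="15min")
forecast = model.predict(future)

next_pred = forecast["yhat"].iloc[-1]
last_price = df["y"].iloc[-1]
signal = "buy" if next_pred > last_price else "hold"
print(f"predicted {next_pred:.2f} vs last {last_price:.2f} -> {signal}")
```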

A Method for Generating Malware Countermeasure Samples Based on Pixel Attention Mechanism

  • Xiangyu Ma; Yuntao Zhao; Yongxin Feng; Yutao Hu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.2 / pp.456-477 / 2024
  • With the rapid development of information technology, the Internet faces serious security problems. Studies have shown that malware has become a primary means of attacking the Internet. Adversarial samples have therefore become a vital breakthrough point for studying malware. By studying adversarial samples, we can gain insights into the behavior and characteristics of malware, evaluate the performance of existing detectors in the face of deceptive samples, and help discover vulnerabilities and improve detection methods. However, existing adversarial sample generation methods still fall short in escape effectiveness and mobility. For instance, researchers have attempted to incorporate perturbation methods such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) into adversarial samples to obfuscate detectors, but these methods are only effective in specific environments and yield limited evasion effectiveness. To solve these problems, this paper proposes a malware adversarial sample generation method (PixGAN) based on a pixel attention mechanism, which aims to improve the escape effect and mobility of adversarial samples. The method transforms malware into grey-scale images and introduces the pixel attention mechanism into the Deep Convolutional Generative Adversarial Network (DCGAN) model to weight the critical pixels in the grey-scale image, which improves the modeling ability of the generator and discriminator and thus enhances the escape effect and mobility of the adversarial samples. The escape rate (ASR) is used as an evaluation index of the quality of the adversarial samples. The experimental results show that the adversarial samples generated by PixGAN achieve escape rates of 97%, 94%, 35%, 39%, and 43% against detectors based on Random Forest (RF), Support Vector Machine (SVM), Convolutional Neural Network (CNN), Convolutional Neural Network with Recurrent Neural Network (CNN_RNN), and Convolutional Neural Network with Long Short Term Memory (CNN_LSTM), respectively.
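  • The paper does not spell out the pixel attention block, so the following is only a guess at the simplest form such a block could take inside a DCGAN-style generator or discriminator: a 1x1 convolution produces a per-pixel weight map that rescales the feature map. This is an assumed design, not the authors' PixGAN layer.

```python
# Sketch (assumed design): per-pixel attention that reweights a feature map,
# intended to be dropped between DCGAN conv blocks.
import torch
import torch.nn as nn

class PixelAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)   # one weight per pixel

    def forward(self, x):                        # x: (batch, C, H, W)
        weights = torch.sigmoid(self.score(x))   # (batch, 1, H, W) in [0, 1]
        return x * weights                       # emphasize critical pixels

feat = torch.randn(4, 64, 16, 16)                # e.g., features of a grey-scale patch
print(PixelAttention(64)(feat).shape)            # torch.Size([4, 64, 16, 16])
```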

Speech Emotion Recognition in People at High Risk of Dementia

  • Dongseon Kim; Bongwon Yi; Yugwon Won
    • Dementia and Neurocognitive Disorders / v.23 no.3 / pp.146-160 / 2024
  • Background and Purpose: The emotions of people at various stages of dementia need to be effectively utilized for prevention, early intervention, and care planning. With technology now available for understanding and addressing people's emotional needs, this study aims to develop speech emotion recognition (SER) technology to classify emotions of people at high risk of dementia. Methods: Speech samples from people at high risk of dementia were categorized into distinct emotions via human auditory assessment, and the outcomes were annotated to guide the deep-learning method. The architecture incorporated a convolutional neural network, long short-term memory, attention layers, and Wav2Vec2, a novel feature extractor, to develop automated speech emotion recognition. Results: Twenty-seven kinds of emotions were found in the participants' speech. These emotions were grouped into 6 detailed emotions (happiness, interest, sadness, frustration, anger, and neutrality) and further into 3 basic emotions (positive, negative, and neutral). To improve algorithmic performance, multiple learning approaches were applied using different data sources (voice and text) and varying numbers of emotions. Ultimately, a 2-stage algorithm, with initial text-based classification followed by voice-based analysis, achieved the highest accuracy, reaching 70%. Conclusions: The diverse emotions identified in this study were attributed to the characteristics of the participants and the method of data collection. The fact that the speech of people at high risk of dementia was directed to companion robots also explains the relatively low performance of the SER algorithm. Accordingly, this study suggests the systematic and comprehensive construction of a dataset from people with dementia.
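  • A minimal sketch of the architecture components listed above: the model consumes precomputed Wav2Vec2 frame features (feature extraction itself is omitted), applies a 1-D convolution, a bidirectional LSTM, and attention pooling, and classifies into the three basic emotions. All sizes are illustrative assumptions.

```python
# Sketch (assumed sizes): CNN + LSTM + attention pooling over Wav2Vec2
# frame features, classifying 3 basic emotions (positive/negative/neutral).
import torch
import torch.nn as nn

class SpeechEmotionNet(nn.Module):
    def __init__(self, feat_dim: int = 768, n_classes: int = 3, hidden: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)        # scalar score per frame
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, feats):                       # feats: (batch, T, feat_dim)
        h = self.conv(feats.transpose(1, 2)).transpose(1, 2)   # (batch, T, hidden)
        h, _ = self.lstm(h)                                    # (batch, T, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)                 # attention over frames
        pooled = (w * h).sum(dim=1)                            # weighted average
        return self.head(pooled)

# e.g., 300 Wav2Vec2 frames (roughly 6 s of audio) per utterance.
logits = SpeechEmotionNet()(torch.randn(2, 300, 768))
print(logits.shape)                                            # torch.Size([2, 3])
```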