• Title/Summary/Keyword: Support vector regression

Wildfire Severity Mapping Using Sentinel Satellite Data Based on Machine Learning Approaches (Sentinel 위성영상과 기계학습을 이용한 국내산불 피해강도 탐지)

  • Sim, Seongmun;Kim, Woohyeok;Lee, Jaese;Kang, Yoojin;Im, Jungho;Kwon, Chunguen;Kim, Sungyong
    • Korean Journal of Remote Sensing / v.36 no.5_3 / pp.1109-1123 / 2020
  • In South Korea, where forest is the major land cover class (over 60% of the country), many wildfires occur every year. Wildfires weaken the shear strength of the soil, forming a soil layer that is vulnerable to landslides. It is therefore important to identify the severity of a wildfire, as well as the burned area, to manage forests sustainably. Although satellite remote sensing has been widely used to map wildfire severity, it is often difficult to determine severity using only the temporal change of satellite-derived indices such as the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR). In this study, we proposed an approach for determining wildfire severity based on machine learning through the synergistic use of Sentinel-1A C-band Synthetic Aperture Radar (SAR) data and Sentinel-2A MultiSpectral Instrument data. Three wildfire cases (Samcheok in May 2017, Gangreung·Donghae in April 2019, and Gosung·Sokcho in April 2019) were used to develop wildfire severity mapping models with three machine learning algorithms: random forest, logistic regression, and support vector machine. The results showed that the random forest model yielded the best performance, with an overall accuracy of 82.3%. Cross-site validation examining the spatiotemporal transferability of the models showed that they were highly sensitive to temporal differences between the training and validation sites, especially in the early growing season. This implies that a more robust model with high spatiotemporal transferability can be developed as more wildfire cases spanning different seasons and areas are added in the future.
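
As a rough illustration of the paper's classification step, the sketch below fits a scikit-learn random forest on stacked optical and SAR features; the feature names and the placeholder data are assumptions, not the paper's actual predictors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical per-pixel features: dNBR and dNDVI from Sentinel-2,
# pre/post VV backscatter change from Sentinel-1 (placeholder data).
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(0.3, 0.2, n),   # dNBR  = NBR_pre - NBR_post
    rng.normal(0.2, 0.15, n),  # dNDVI = NDVI_pre - NDVI_post
    rng.normal(-1.5, 1.0, n),  # VV backscatter change (dB)
])
y = rng.integers(0, 4, n)      # severity classes (e.g., unburned..high)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, rf.predict(X_te)))
```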

A Study on the Effect of the Document Summarization Technique on the Fake News Detection Model (문서 요약 기법이 가짜 뉴스 탐지 모형에 미치는 영향에 관한 연구)

  • Shim, Jae-Seung;Won, Ha-Ram;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.201-220 / 2019
  • Fake news has emerged as a significant issue over the last few years, igniting discussion and research on how to solve the problem. In particular, studies on automated fact-checking and fake news detection using artificial intelligence and text analysis techniques have drawn attention. Fake news detection research entails a form of document classification; thus, document classification techniques have been widely used in this type of research. However, document summarization techniques have been inconspicuous in this field. At the same time, automatic news summarization services have become popular, and a recent study found that using news summarized through abstractive summarization strengthened the predictive performance of fake news detection models. Therefore, the need to study the integration of document summarization technology in the domestic news data environment has become evident. To examine the effect of extractive summarization on the fake news detection model, we first summarized news articles through extractive summarization, then built a detection model based on the summarized news, and finally compared it with a full-text-based detection model. The study found that BPN (Back Propagation Neural Network) and SVM (Support Vector Machine) did not exhibit a large performance difference between the two settings; for DT (Decision Tree), the full-text-based model performed somewhat better, while for LR (Logistic Regression), the summary-based model performed better. Nonetheless, the results did not show a statistically significant difference between the two models. This suggests that summarization preserves at least the core information of fake news, and that the LR-based model shows potential for performance improvement. This study is an experimental application of extractive summarization to fake news detection research employing various machine learning algorithms. Its limitations are essentially the relatively small amount of data and the lack of comparison between summarization technologies; an in-depth analysis applying various techniques to a larger data volume would be helpful in the future.
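
A minimal sketch of the pipeline's shape, assuming a simple TF-IDF sentence-scoring heuristic as the extractive summarizer (the paper's actual summarization method may differ) and a linear SVM as the detector:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

def extractive_summary(text, k=3):
    """Keep the k sentences with the highest mean TF-IDF weight
    (a generic extractive heuristic, not the paper's exact method)."""
    sents = [s.strip() for s in text.split(".") if s.strip()]
    if len(sents) <= k:
        return text
    tfidf = TfidfVectorizer().fit_transform(sents)
    scores = np.asarray(tfidf.mean(axis=1)).ravel()
    top = sorted(np.argsort(scores)[-k:])  # keep original sentence order
    return ". ".join(sents[i] for i in top)

# Placeholder corpus: (article text, label) with 1 = fake, 0 = real.
articles = ["... full news text ...", "... another article ..."]
labels = [1, 0]
summaries = [extractive_summary(a) for a in articles]

# Summary-based detection model; the full-text model would be the
# same pipeline fit on `articles` instead of `summaries`.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(summaries, labels)
```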

A Recidivism Prediction Model Based on XGBoost Considering Asymmetric Error Costs (비대칭 오류 비용을 고려한 XGBoost 기반 재범 예측 모델)

  • Won, Ha-Ram;Shim, Jae-Seung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.127-137 / 2019
  • Recidivism prediction has been a subject of constant research by experts since the early 1970s, but it has become more important as crimes committed by recidivists steadily increase. In particular, after the US and Canada adopted the 'Recidivism Risk Assessment Report' as a decisive criterion in trials and parole screening in the 1990s, research on recidivism prediction became more active, and empirical studies on recidivism factors began in Korea in the same period. Although most recidivism prediction studies have so far focused on the factors of recidivism or the accuracy of prediction, it is important to minimize the misclassification cost, because recidivism prediction has an asymmetric error cost structure. In general, the cost of misclassifying a person who will not reoffend as a recidivist is lower than the cost of misclassifying a person who will reoffend as a non-recidivist: the former incurs only additional monitoring costs, while the latter incurs broad social and economic costs. Therefore, in this paper, we propose an XGBoost (eXtreme Gradient Boosting; XGB) based recidivism prediction model that considers asymmetric error costs. In the first step of the model, XGB, recognized as a high-performance ensemble method in the field of data mining, was applied, and its results were compared with various prediction models such as LOGIT (logistic regression), DT (decision trees), ANN (artificial neural networks), and SVM (support vector machines). In the next step, the classification threshold is optimized to minimize the total misclassification cost, a weighted average of the false negative error (FNE) and false positive error (FPE). To verify the usefulness of the model, it was applied to a real recidivism prediction dataset. As a result, the XGB model not only showed better prediction accuracy than the other models but also reduced the misclassification cost most effectively.
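
The cost-sensitive thresholding step can be sketched as follows; the cost weights `C_FN` and `C_FP` and the synthetic data are illustrative assumptions, not the paper's values:

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

# Placeholder data; in the paper the features come from a real
# recidivism dataset, and the cost weights below are illustrative.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + rng.normal(size=2000) > 1).astype(int)  # 1 = recidivist
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]

C_FN, C_FP = 5.0, 1.0  # assumed: missing a recidivist costs 5x a false alarm
best_t, best_cost = 0.5, np.inf
for t in np.linspace(0.05, 0.95, 91):
    pred = (proba >= t).astype(int)
    fn = np.sum((pred == 0) & (y_te == 1))
    fp = np.sum((pred == 1) & (y_te == 0))
    cost = C_FN * fn + C_FP * fp
    if cost < best_cost:
        best_t, best_cost = t, cost
print(f"cost-minimizing threshold: {best_t:.2f}")
```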

The big data method for flash flood warning (돌발홍수 예보를 위한 빅데이터 분석방법)

  • Park, Dain;Yoon, Sanghoo
    • Journal of Digital Convergence / v.15 no.11 / pp.245-250 / 2017
  • A flash flood is defined as intense rainfall over a relatively small area that flows rapidly through rivers and valleys in a short time with no advance warning, so it can cause property damage and casualties. This study establishes a flash flood warning system using 38 accident records reported to the National Disaster Information Center and a land surface model (TOPLATS) between 2009 and 2012. Three variables were taken from the land surface model: precipitation, soil moisture, and surface runoff. Their values over the six hours preceding each flash flood were reduced to three factors through factor analysis. Decision tree, random forest, naive Bayes, support vector machine, and logistic regression models were considered as big data methods. Prediction performance was evaluated by comparing accuracy, kappa, TP rate, FP rate, and F-measure. The best method was suggested based on a reproducibility evaluation at each point of flash flood occurrence and on predicted versus actual counts over the four years of data.
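
A minimal sketch of the factor-analysis-plus-classifier pipeline, assuming the six preceding hours of the three variables are flattened into 18 inputs per event (an interpretation of the abstract, with placeholder data):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

# Placeholder: 6 hourly values x 3 variables (precipitation, soil
# moisture, surface runoff) per event; labels 1 = flash flood occurred.
rng = np.random.default_rng(2)
X_raw = rng.normal(size=(200, 18))
y = rng.integers(0, 2, 200)

# Reduce the 18 inputs to 3 factors, as in the paper's factor analysis.
X = FactorAnalysis(n_components=3, random_state=2).fit_transform(X_raw)

# One of the five candidate methods; the others would be swapped in here.
clf = LogisticRegression().fit(X, y)
pred = clf.predict(X)
print("accuracy:", accuracy_score(y, pred),
      "kappa:", cohen_kappa_score(y, pred),
      "F-measure:", f1_score(y, pred))
```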

A study on entertainment TV show ratings and the number of episodes prediction (국내 예능 시청률과 회차 예측 및 영향요인 분석)

  • Kim, Milim;Lim, Soyeon;Jang, Chohee;Song, Jongwoo
    • The Korean Journal of Applied Statistics / v.30 no.6 / pp.809-825 / 2017
  • The number of TV entertainment shows is increasing, and competition among programs in the entertainment market is intensifying as cable channels air many entertainment shows. There is now a need for research on program ratings and the number of episodes. This study presents predictive models for entertainment TV show ratings and the number of episodes, using various data mining techniques such as linear regression, logistic regression, LASSO, random forests, gradient boosting, and support vector machines. The analysis shows that the average program rating before the first broadcast is affected by the broadcasting company, the average rating of the previous season, the starting year, and the number of articles, while the average rating after the first broadcast is influenced by the rating of the first broadcast, the broadcasting company, and the program type. We also found that the predicted average rating, starting year, type, and broadcasting company are important variables in predicting the number of episodes.
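
For illustration, a LASSO fit of the kind used in the paper might look like the sketch below; the feature names and synthetic values are placeholders for the paper's actual predictors:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV

# Placeholder pre-broadcast features; the paper's predictors include the
# broadcasting company, previous-season ratings, starting year, and the
# number of articles.
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "prev_season_rating": rng.uniform(1, 15, 300),
    "start_year": rng.integers(2010, 2017, 300),
    "n_articles": rng.integers(0, 500, 300),
    "channel": rng.integers(0, 5, 300),  # encoded broadcasting company
})
y = 0.6 * df["prev_season_rating"] + rng.normal(0, 1, 300)  # synthetic rating

X = pd.get_dummies(df, columns=["channel"])  # one-hot encode the channel
lasso = LassoCV(cv=5).fit(X, y)
print(dict(zip(X.columns, lasso.coef_.round(3))))
```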

Predicting Stock Liquidity by Using Ensemble Data Mining Methods

  • Bae, Eun Chan;Lee, Kun Chang
    • Journal of the Korea Society of Computer and Information / v.21 no.6 / pp.9-19 / 2016
  • In the finance literature, stock liquidity, which shows how readily stocks can be cashed out in the market, has received rich attention from both academics and practitioners, for several reasons. First, stock liquidity is known to significantly affect asset pricing. Second, macroeconomic announcements influence liquidity in the stock market. Therefore, stock liquidity affects both investors' and managers' decisions. Although a great deal of finance literature on stock liquidity exists, no studies have attempted to investigate it as a decision-making problem: most have dealt with limited questions, such as how much liquidity influences stock prices or which variables describe it significantly. This paper posits that stock liquidity may be treated as a serious decision-making problem and handled with data mining techniques to estimate its future extent with statistical validity. We collected a financial dataset from manufacturing companies listed on the KRX (Korea Exchange) during the period 2010 to 2013; the dataset starts in 2010 to avoid the aftershocks of the 2008 financial crisis. We used the Fn-GuidPro system to gather a total of 5,700 financial records. The stock liquidity measure was computed by the procedure proposed by Amihud (2002), which is known to be the best metric for capturing the relationship with daily returns. We applied five data mining techniques (classifiers): Bayesian networks, support vector machine (SVM), decision trees, neural networks, and an ensemble method. The Bayesian networks include GBN (General Bayesian Network), NBN (Naive BN), and TAN (Tree Augmented NBN); the decision trees use CART and C4.5. A regression result was used as the benchmark. The ensemble method integrates either two or three classifiers by voting. Among the single classifiers, CART showed the best performance with 48.2%, compared with 37.18% by regression; among the ensemble methods, integrating TAN, CART, and SVM was best with 49.25%. Additional analysis of individual industries showed that relatively stabilized industries, such as electronic appliances, wholesale and retail, wood, and leather-bags-shoes, performed better, at over 50%.
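
The voting-based integration of classifiers can be sketched with scikit-learn's VotingClassifier; here GaussianNB stands in for the paper's Bayesian-network classifiers, and the data are synthetic:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder data standing in for the KRX financial features; GaussianNB
# stands in for the paper's Bayesian-network classifiers (e.g., TAN).
X, y = make_classification(n_samples=1000, n_features=12, random_state=4)

vote = VotingClassifier(
    estimators=[
        ("cart", DecisionTreeClassifier(random_state=4)),  # CART-style tree
        ("svm", SVC()),
        ("nb", GaussianNB()),
    ],
    voting="hard",  # majority vote, as in the paper's ensemble scheme
)
vote.fit(X, y)
print("training accuracy:", vote.score(X, y))
```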

Implementing an Adaptive Neuro-Fuzzy Model for Emotion Prediction Based on Heart Rate Variability(HRV) (심박변이도를 이용한 적응적 뉴로 퍼지 감정예측 모형에 관한 연구)

  • Park, Sung Soo;Lee, Kun Chang
    • Journal of Digital Convergence / v.17 no.1 / pp.239-247 / 2019
  • Accurate prediction of emotion is a very important issue for patient-centered medical device development and emotion-related psychology. Although there have been many studies on emotion prediction, none have applied heart rate variability (HRV) together with a neuro-fuzzy approach. We propose ANFEP (Adaptive Neuro-Fuzzy system for Emotion Prediction) based on HRV. ANFEP bases its core functions on ANFIS (Adaptive Neuro-Fuzzy Inference System), which integrates neural networks with fuzzy systems as a vehicle for training predictive models. To validate the proposed model, 50 participants joined an experiment in which heart rate variability was recorded and used as input to ANFEP. The ANFEP model with STDRR and RMSSD as inputs and two membership functions per input variable showed the best results. Applied to the HRV metrics, ANFEP proved significantly more robust than benchmark methods such as linear regression, support vector regression, neural networks, and random forests. The results show that reliable emotion prediction is possible with fewer inputs, and that developing a more accurate and reliable emotion recognition system is warranted.
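
To make the ANFIS machinery concrete, the sketch below implements a first-order Sugeno forward pass for two inputs with two Gaussian membership functions each (four rules), matching the ANFEP configuration in shape; all parameter values are illustrative assumptions, since in ANFIS they would be learned from the HRV data:

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function."""
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

def anfis_forward(x1, x2, mf1, mf2, conseq):
    """First-order Sugeno inference: two inputs, two membership functions
    per input (4 rules). mf1/mf2: [(center, sigma), (center, sigma)];
    conseq: (4, 3) array of per-rule linear coefficients [p, q, r]."""
    w = []
    for i in range(2):
        for j in range(2):
            w.append(gauss(x1, *mf1[i]) * gauss(x2, *mf2[j]))
    w = np.array(w)
    wn = w / w.sum()                      # normalized firing strengths
    f = conseq @ np.array([x1, x2, 1.0])  # per-rule linear outputs
    return float(wn @ f)                  # weighted aggregate output

# Illustrative parameters (in training, these would be fit to HRV data).
mf_stdrr = [(30.0, 10.0), (60.0, 10.0)]  # assumed STDRR MFs (ms)
mf_rmssd = [(20.0, 8.0), (50.0, 8.0)]    # assumed RMSSD MFs (ms)
conseq = np.random.default_rng(5).normal(size=(4, 3))
print(anfis_forward(45.0, 35.0, mf_stdrr, mf_rmssd, conseq))
```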

Effect of the Learning Image Combinations and Weather Parameters in the PM Estimation from CCTV Images (CCTV 영상으로부터 미세먼지 추정에서 학습영상조합, 기상변수 적용이 결과에 미치는 영향)

  • Won, Taeyeon;Eo, Yang Dam;Sung, Hong ki;Chong, Kyu soo;Youn, Junhee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.6 / pp.573-581 / 2020
  • A method for estimating a PM (Particulate Matter) index from CCTV images and weather parameters was proposed and tested experimentally. For the CCTV images, we estimated the PM index by applying a deep learning technique based on a CNN (Convolutional Neural Network) to an ROI (Region Of Interest) image containing a specific spot and to a full-area image. In addition, after combining the deep learning predictions with two weather parameters, humidity and wind speed, a post-processing experiment was conducted to calculate a corrected PM index using a learned regression model. In the experiment, the PM index estimated from the CCTV images reached an R² (R-squared) of 0.58 to 0.89, and training on the ROI image and the full-area image together with the measuring device gave the best result. Post-processing with weather parameters did not improve accuracy in all cases in the experimental area.
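
A toy PyTorch CNN regressor in the spirit of the paper's approach (the architecture, input size, and layer widths are assumptions, not the paper's network):

```python
import torch
import torch.nn as nn

# Image patch in, scalar PM index out; the ROI and full-area crops would
# both be fed through a network of this shape.
class PMRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # scalar PM index
        )

    def forward(self, x):
        return self.head(self.features(x))

model = PMRegressor()
dummy = torch.randn(4, 3, 64, 64)  # batch of 64x64 image crops
print(model(dummy).shape)          # torch.Size([4, 1])
```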

Estimation of KOSPI200 Index option volatility using Artificial Intelligence (이기종 머신러닝기법을 활용한 KOSPI200 옵션변동성 예측)

  • Shin, Sohee;Oh, Hayoung;Kim, Jang Hyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.10 / pp.1423-1431 / 2022
  • Volatility is one of the variables the Black-Scholes model requires for option pricing. It is unknown at the present time; however, since option prices can be observed in the market, implied volatility can be derived from the price of an option at any given point and represents the market's expectation of future volatility. Although volatility in the Black-Scholes model is constant, when calculating implied volatility it is common to observe a volatility smile, meaning that implied volatility differs across strike prices. We implement supervised learning targeting implied volatility, adding V-KOSPI to mitigate the volatility smile. We examine the estimation performance for KOSPI200 index options' implied volatility using various machine learning algorithms, including linear regression, trees, support vector machines, KNN, and deep neural networks. Training accuracy was highest (99.9%) for the decision tree model, and test accuracy was highest (96.9%) for the random forest model.
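
The implied-volatility derivation mentioned above is the standard Black-Scholes inversion; a minimal sketch with SciPy's Brent root finder, using illustrative numbers rather than KOSPI200 market data:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    """Invert Black-Scholes for sigma with Brent's root finder."""
    return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-6, 5.0)

# Illustrative numbers (not KOSPI200 market data).
print(implied_vol(price=12.0, S=300.0, K=300.0, T=0.25, r=0.03))
```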

EEG Feature Engineering for Machine Learning-Based CPAP Titration Optimization in Obstructive Sleep Apnea

  • Juhyeong Kang;Yeojin Kim;Jiseon Yang;Seungwon Chung;Sungeun Hwang;Uran Oh;Hyang Woon Lee
    • International journal of advanced smart convergence / v.12 no.3 / pp.89-103 / 2023
  • Obstructive sleep apnea (OSA) is one of the most prevalent sleep disorders and can lead to serious consequences, including hypertension and cardiovascular disease, if not treated promptly. Continuous positive airway pressure (CPAP) is widely recognized as the most effective treatment for OSA, but it requires proper titration of the airway pressure to achieve the best results, and the titration process can be time-consuming and cumbersome. Predicting personalized CPAP pressure before treatment is therefore increasingly important. The primary objective of this study was to optimize the CPAP titration process for OSA patients through EEG feature engineering with machine learning. We aimed to identify and use the most critical EEG features to forecast key OSA predictive indicators, ultimately enabling more precise and personalized CPAP treatment strategies. We analyzed the PSG datasets of 126 OSA patients before and after CPAP treatment, extracted 29 EEG features, and applied the SHAP (SHapley Additive exPlanations) method to determine which features have high importance for the OSA prediction indices AHI and SpO2. Using XGBoost, support vector machine regression, and random forest regression, we confirmed six EEG features with high importance in predicting AHI and SpO2. Predicting these key indicators accurately from EEG-derived features gives more immediate insight into a patient's sleep quality and potential disturbances during CPAP treatment, makes the diagnostic process more efficient, and supports a more tailored and effective treatment approach. Consequently, integrating EEG analysis into the sleep study protocol has the potential to improve sleep diagnostics, offering a time-saving and ultimately more effective evaluation for patients with sleep-related disorders.
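
A minimal sketch of the SHAP-based feature ranking with an XGBoost regressor, using synthetic stand-ins for the 29 EEG features and the AHI target:

```python
import numpy as np
import shap
from xgboost import XGBRegressor

# Placeholder EEG features predicting AHI; the 29 real features and the
# PSG data are from the paper, not reproduced here.
rng = np.random.default_rng(6)
X = rng.normal(size=(126, 29))
y = X[:, 0] * 3 + X[:, 1] + rng.normal(size=126)  # synthetic AHI target

model = XGBRegressor(n_estimators=200, max_depth=3).fit(X, y)

# TreeExplainer computes SHAP values for tree ensembles; mean |SHAP|
# per feature serves as the importance ranking.
shap_values = shap.TreeExplainer(model).shap_values(X)
importance = np.abs(shap_values).mean(axis=0)
top6 = np.argsort(importance)[-6:][::-1]
print("six most important feature indices:", top6)
```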