• Title/Summary/Keyword: Automated Machine Learning


Development of Sentiment Analysis Model for the hot topic detection of online stock forums (온라인 주식 포럼의 핫토픽 탐지를 위한 감성분석 모형의 개발)

  • Hong, Taeho;Lee, Taewon;Li, Jingjing
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.187-204
    • /
    • 2016
  • Document classification based on emotional polarity has become a welcome emerging task owing to the great explosion of data on the Web. In the big data age, there are too many information sources to consult when making decisions. For example, when considering travel to a city, a person may search for reviews through a search engine such as Google or on social networking services (SNSs) such as blogs, Twitter, and Facebook. The emotional polarity of positive and negative reviews helps a user decide whether or not to make the trip. Sentiment analysis of customer reviews has become an important research topic as data-mining technology has been widely adopted for text mining of the Web. Sentiment analysis classifies documents through machine learning techniques such as decision trees, neural networks, and support vector machines (SVMs), and is used to determine the attitude, position, and sensibility of people who write articles about various topics published on the Web. Regardless of the polarity of customer reviews, emotional reviews are very helpful material for analyzing customers' opinions. Through automated text mining, sentiment analysis extracts subjective information from text on the Web and helps with understanding, almost instantly, what customers really want and what positions authors take on a particular topic. In this study, we developed a model that selects hot topics from user posts on a Chinese online stock forum by using the k-means algorithm and self-organizing maps (SOM). In addition, we developed a detection model that predicts hot topics by using machine learning techniques such as the logit model, decision trees, and SVM.
We employed sentiment analysis to develop our model for the selection and detection of hot topics from the Chinese online stock forum. The sentiment analysis calculates a sentiment value for each document by classifying its words against a polarity dictionary (positive or negative). The online stock forum is an attractive site because of its information about stock investment: users post numerous texts about stock movements, analyzing the market in light of government policy announcements, market reports, reports from economic research institutes, and even rumors. We divided the forum's topics into 21 categories for the sentiment analysis, and 144 topics were selected among those categories. The posts were crawled to build a database of positive and negative texts. After preprocessing the text from March 2013 to February 2015, we ultimately obtained 21,141 posts on 88 topics. An interest index was defined to select the hot topics, and the k-means algorithm and SOM yielded equivalent results on these data. We developed decision tree models to detect hot topics with three algorithms, CHAID, CART, and C4.5; the results of CHAID were subpar compared to the others. We also employed SVM to detect the hot topics from the negative data; the SVM models were trained with the radial basis function (RBF) kernel, tuned by a grid search. Detecting hot topics with sentiment analysis provides investors with the latest trends in the stock forum so that they no longer need to search through vast amounts of information on the Web. Our proposed model is also helpful for rapidly gauging customers' signals or attitudes towards government policy and firms' products and services.
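The dictionary-based scoring step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the word lists are invented English stand-ins for the paper's Chinese polarity dictionary, and the normalized difference score is an assumed formula.

```python
# Illustrative polarity word lists (assumptions; the paper uses a
# Chinese-language sentiment dictionary for stock-forum posts).
POSITIVE = {"gain", "rise", "bullish", "profit"}
NEGATIVE = {"loss", "fall", "bearish", "risk"}

def sentiment_score(tokens):
    """Return (#positive - #negative) / #tokens for a tokenized post."""
    if not tokens:
        return 0.0
    pos = sum(1 for t in tokens if t in POSITIVE)
    neg = sum(1 for t in tokens if t in NEGATIVE)
    return (pos - neg) / len(tokens)

post = "bullish market strong gain despite risk".split()
score = sentiment_score(post)  # 2 positive, 1 negative out of 6 tokens
```

Per-document scores like this could then be aggregated per topic before clustering with k-means or SOM.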

Prediction of Landslides and Determination of Its Variable Importance Using AutoML (AutoML을 이용한 산사태 예측 및 변수 중요도 산정)

  • Nam, KoungHoon;Kim, Man-Il;Kwon, Oil;Wang, Fawu;Jeong, Gyo-Cheol
    • The Journal of Engineering Geology
    • /
    • v.30 no.3
    • /
    • pp.315-325
    • /
    • 2020
  • This study was performed to develop a model that predicts landslides and determines the variable importance of landslide susceptibility factors, based on the probabilistic prediction of landslides occurring on slopes along roads. Field survey data on 30,615 slopes collected from 2007 to 2020 in Korea were analyzed to develop the prediction model. Of 131 variable factors in total, 17 topographic factors and 114 geological factors (including 89 bedrock types) were used to predict landslides. Automated machine learning (AutoML) was used to classify landslides and non-landslides. Verification revealed that the best model, an extremely randomized tree (XRT) with excellent predictive performance, achieved a prediction rate of 83.977% on the test data. The variable importance analysis identified 10 topographic factors and 9 geological factors as the landslide susceptibility factors, with the importance presented as a percentage for each factor. The model evaluates the likelihood of landslide occurrence probabilistically and quantitatively, deriving the ranking of variable importance from on-site survey data alone. We consider that this model can provide decision-makers with a reliable basis for slope safety assessment through field surveys in the future.
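As a rough illustration of the model family named in the abstract, an extremely randomized trees classifier with per-factor variable importance can be sketched with scikit-learn. The synthetic data below merely stands in for the slope survey factors; the paper's AutoML framework, features, and tuning are not reproduced.

```python
# Sketch of an extremely randomized trees (XRT) classifier with
# variable importance; synthetic data stands in for the survey factors.
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=10,
                           n_informative=5, random_state=0)
model = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)

# Variable importance expressed as a percentage per factor,
# mirroring how the paper reports susceptibility factors.
importance_pct = 100 * model.feature_importances_
ranking = importance_pct.argsort()[::-1]  # factors ordered by importance
```

The percentages sum to 100, so the ranking can be read directly as each factor's relative contribution.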

Study on High-speed Cyber Penetration Attack Analysis Technology based on Static Feature Base Applicable to Endpoints (Endpoint에 적용 가능한 정적 feature 기반 고속의 사이버 침투공격 분석기술 연구)

  • Hwang, Jun-ho;Hwang, Seon-bin;Kim, Su-jeong;Lee, Tae-jin
    • Journal of Internet Computing and Services
    • /
    • v.19 no.5
    • /
    • pp.21-31
    • /
    • 2018
  • Cyber penetration attacks can not only damage cyberspace but also strike entire infrastructures such as electricity, gas, water, and nuclear power, causing enormous damage to people's lives. Cyberspace has already been defined as the fifth battlefield, so strategic responses are very important. Most recent cyber attacks are carried out by malicious code, and since more than 1.6 million samples appear per day, automated analysis technology that can cope with this volume is essential. However, malicious code is difficult to handle because of encryption, obfuscation, and packing, and dynamic analysis is limited not only by its performance requirements but also by virtual-environment evasion techniques. In this paper, we propose a machine learning based malicious code analysis technique that improves on the detection performance of existing analysis technology while maintaining the lightweight, high-speed analysis applicable to commercial endpoints. On 71,000 normal files and malicious code samples in a commercial environment, the proposed method achieved 99.13% accuracy, 99.26% precision, and 99.09% recall, and in a PC environment it analyzed more than five files per second. It can operate independently in the endpoint environment and is expected to work in a complementary fashion alongside existing antivirus technology and static and dynamic analysis technology. It is also expected to serve as a core element of EDR technology and malware variant analysis.
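The three reported metrics follow directly from a binary confusion matrix. A minimal sketch (the counts below are illustrative, not the paper's actual results):

```python
# Accuracy, precision, and recall for a binary malware/benign
# classifier, computed from confusion-matrix counts.
def metrics(tp, fp, fn, tn):
    """tp/fp/fn/tn: true/false positives and negatives."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)   # of files flagged malicious, how many were
    recall = tp / (tp + fn)      # of actual malware, how many were caught
    return accuracy, precision, recall

# Hypothetical counts for 2,000 analyzed files (not the paper's data).
acc, prec, rec = metrics(tp=990, fp=8, fn=10, tn=992)
```

High precision keeps false alarms low on an endpoint; high recall keeps missed malware low — the abstract reports both above 99%.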

Current status and future plans of KMTNet microlensing experiments

  • Chung, Sun-Ju;Gould, Andrew;Jung, Youn Kil;Hwang, Kyu-Ha;Ryu, Yoon-Hyun;Shin, In-Gu;Yee, Jennifer C.;Zhu, Wei;Han, Cheongho;Cha, Sang-Mok;Kim, Dong-Jin;Kim, Hyun-Woo;Kim, Seung-Lee;Lee, Chung-Uk;Lee, Yongseok
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.43 no.1
    • /
    • pp.41.1-41.1
    • /
    • 2018
  • We introduce the current status and future plans of the Korea Microlensing Telescope Network (KMTNet) microlensing experiments, including the observational strategy, pipeline, event-finder, and collaborations with Spitzer. The KMTNet experiments were initiated in 2015. Since 2016, KMTNet has observed 27 fields comprising 6 main fields and 21 subfields. In 2017, we finished the DIA photometry for all 2016 and 2017 data, which makes real-time DIA photometry possible from 2018. The DIA photometric data are used for finding events with the KMTNet event-finder, which has been improved relative to the previous version that found 857 events in the 4 main fields of 2015. Applying the improved version to all 2016 data, we find 2,597 events, of which 265 lie in the KMTNet-K2C9 overlapping fields. To increase the detection efficiency of the event-finder, we are working on filtering out false events with machine learning methods. In 2018, we plan to measure the event detection efficiency of KMTNet by injecting fake events into the pipeline near the image level. Thanks to high-cadence observations, KMTNet has found many interesting events, including exoplanets and brown dwarfs, that were not found by other groups. The masses of such exoplanets and brown dwarfs are measured through collaborations with Spitzer and other groups. In particular, KMTNet has been cooperating closely with Spitzer since 2015 and observes the Spitzer fields, which has allowed us to measure microlens parallaxes for many events. The automated KMTNet PySIS pipeline, developed before the 2017 Spitzer season, played a very important role in selecting Spitzer targets. For the 2018 Spitzer season, we will improve the PySIS pipeline to obtain better photometric results.


Counter Measures by using Execution Plan Analysis against SQL Injection Attacks (실행계획 분석을 이용한 SQL Injection 공격 대응방안)

  • Ha, Man-Seok;Namgung, Jung-Il;Park, Soo-Hyun
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.2
    • /
    • pp.76-86
    • /
    • 2016
  • SQL Injection attacks are among the most widely used and oldest traditional hacking techniques. They are becoming quite sophisticated and account for a high proportion of web hacking. Big data environments will become widespread, with many devices and sensors connected to the Internet and a greatly increased amount of data flowing among devices, so the scale of damage caused by SQL Injection attacks will be even greater in the future. Moreover, building security solutions against SQL Injection attacks is costly and time-consuming. To prevent these attacks, defenses must operate quickly and accurately on the basis of data analysis. We utilized data analytics and machine learning techniques to defend against SQL Injection attacks, analyzing the execution plan of each input SQL command and checking the web log files for abnormal patterns. Herein, we propose a way to distinguish between normal and abnormal SQL commands. We analyzed values entered by users in real time using automated SQL Injection attack tools, and we show that an effective defense can be ensured through analysis of SQL command execution plans.
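The paper's method classifies queries by their database execution plans; as a much simpler stand-in, the normal/abnormal distinction can be sketched with pattern checks over the raw SQL text. The patterns below are illustrative assumptions, not the paper's detection logic, and a production defense would rely on parameterized queries and plan analysis rather than regexes.

```python
import re

# Toy signatures of common injection payloads (assumed, not exhaustive).
INJECTION_PATTERNS = [
    r"(?i)\bor\b\s+['\"]?\d+['\"]?\s*=\s*['\"]?\d+",  # tautology, e.g. OR 1=1
    r"(?i)union\s+select",                            # UNION-based extraction
    r"--",                                            # comment truncating the query
    r"(?i);\s*drop\b",                                # stacked destructive query
]

def is_suspicious(sql: str) -> bool:
    """Flag input that matches any known injection pattern."""
    return any(re.search(p, sql) for p in INJECTION_PATTERNS)
```

A query flagged here would, in the paper's approach, be the kind whose execution plan diverges from the plans of normal application queries.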

Implementation of an Automated Agricultural Frost Observation System (AAFOS) (농업서리 자동관측 시스템(AAFOS)의 구현)

  • Kyu Rang Kim;Eunsu Jo;Myeong Su Ko;Jung Hyuk Kang;Yunjae Hwang;Yong Hee Lee
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.26 no.1
    • /
    • pp.63-74
    • /
    • 2024
  • In agriculture, frost can be devastating, which is why its observation and forecasting are so important. According to a recent report analyzing frost observation data from the Korea Meteorological Administration, despite global warming due to climate change, the last spring frost date has not advanced and the frequency of frost has not decreased. It is therefore important to automate and continuously operate frost observation in risk areas to prevent agricultural frost damage. In existing frost observation using leaf wetness sensors, the reference voltage fluctuates over long periods owing to contamination of the sensor or changes in the humidity of the surrounding environment. In this study, a datalogger program was implemented to solve these problems automatically. The established frost observation system can stably and automatically accumulate time-resolved observation data over long periods. These data can be used in the future to develop frost diagnosis models based on machine learning methods and to produce frost occurrence predictions for surrounding areas.
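One generic way a datalogger could compensate for the reference-voltage drift described above is to track a slowly updated baseline and judge wetness from the deviation. This is a hedged sketch only: the exponential-moving-average update rule and the threshold are assumptions for illustration, not the paper's implementation.

```python
# Assumed drift-compensation scheme (not the paper's actual logic):
# a slow EMA tracks the dry reference voltage, so sensor contamination
# or humidity drift is absorbed into the baseline over time.
def update_baseline(baseline, reading, alpha=0.01):
    """Exponential moving average tracking the dry reference voltage."""
    return (1 - alpha) * baseline + alpha * reading

def is_wet(baseline, reading, threshold=0.15):
    """Flag wetness when the reading departs from the drifting baseline."""
    return abs(reading - baseline) > threshold
```

With a small `alpha`, slow drift moves the baseline while a sudden wetness-induced voltage change still exceeds the threshold.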

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed, making up for the weaknesses of traditional asset allocation methods and taking over the parts that are difficult for those methods. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it avoids investment risk structurally, offers stability in the management of large funds, and has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only handles billions of examples in limited-memory environments but is also very fast to train compared to traditional boosting methods, and it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk in the covariance estimation process. Because an optimized asset allocation model estimates the investment weights from historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period, thereby reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model.
For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019. The data sets comprise the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of cumulative rate of return and obtained a large sample thanks to the long test period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative yield and reduction of estimation errors: the total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. Many financial and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist into the future in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a state-of-the-art algorithm. Various studies address parametric estimation methods to reduce estimation errors in portfolio optimization; we suggest a new machine-learning-based method for reducing these errors in an optimized asset allocation model. This study is thus meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
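The covariance-adjustment step the abstract describes — applying predicted risk in the covariance estimation — can be sketched as follows. This is a minimal illustration under assumptions: the numbers are invented, the predicted volatilities would in practice come from the trained XGBoost model, and keeping the historical correlation structure fixed is one plausible reading, not necessarily the paper's exact formulation.

```python
import numpy as np

# Historical sample covariance for two assets (illustrative numbers).
hist_cov = np.array([[0.04, 0.006],
                     [0.006, 0.09]])
hist_vol = np.sqrt(np.diag(hist_cov))            # historical volatilities
corr = hist_cov / np.outer(hist_vol, hist_vol)   # correlation matrix

# Hypothetical next-period volatility forecasts (stand-ins for XGBoost
# predictions), substituted back while keeping the correlations fixed.
pred_vol = np.array([0.25, 0.28])
pred_cov = corr * np.outer(pred_vol, pred_vol)   # forecast covariance
```

The risk parity weights would then be computed from `pred_cov` instead of `hist_cov`, so the allocation reflects forecast rather than purely historical risk.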

A Case Study: Improvement of Wind Risk Prediction by Reclassifying the Detection Results (풍해 예측 결과 재분류를 통한 위험 감지확률의 개선 연구)

  • Kim, Soo-ock;Hwang, Kyu-Hong
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.23 no.3
    • /
    • pp.149-155
    • /
    • 2021
  • Early warning systems for weather risk management in the agricultural sector have been developed to predict potential wind damage to crops. These systems compare the daily maximum wind speed against the critical wind speed that causes fruit drop and provide the resulting risk information to farmers. To increase the accuracy of wind risk predictions, an artificial neural network for binary classification was implemented. In the present study, daily wind speed and other weather data measured in 2019 at stations of interest in Jeollabuk-do and Jeollanam-do, as well as Gyeongsangbuk-do and part of Gyeongsangnam-do provinces, were used to train the neural network. These include 210 synoptic and automated weather stations operated by the Korea Meteorological Administration (KMA). Wind speed data collected at the same locations between January 1 and December 12, 2020 were used to validate the model, and data collected from December 13, 2020 to February 18, 2021 were used to evaluate the wind risk prediction performance before and after applying the neural network. The critical wind speed for damage risk was set to 11 m/s, the speed reported to cause fruit drop and damage. Furthermore, the maximum wind speeds were modeled with the Weibull probability density function to issue wind damage warnings. The accuracy of wind damage risk prediction improved from 65.36% to 93.62% after re-classification with the artificial neural network, although the error rate also increased, from 13.46% to 37.64%. The machine learning approach used in the present study is likely to benefit cases where a missed warning from the risk warning system is a relatively serious issue.
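The Weibull modeling step above amounts to evaluating the probability that the daily maximum wind speed exceeds the 11 m/s damage threshold. A minimal sketch, where the shape and scale parameters are illustrative values rather than ones fitted in the study:

```python
import math

def weibull_exceedance(x, shape, scale):
    """P(X > x) for a Weibull(shape, scale) variable (survival function)."""
    return math.exp(-((x / scale) ** shape))

# Probability of exceeding the 11 m/s fruit-drop threshold under
# assumed Weibull parameters (not fitted to the paper's station data).
p_damage = weibull_exceedance(11.0, shape=2.0, scale=6.0)
```

A warning would be issued when this exceedance probability crosses some decision level, which the binary-classification network then re-classifies.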

Modeling of Vegetation Phenology Using MODIS and ASOS Data (MODIS와 ASOS 자료를 이용한 식물계절 모델링)

  • Kim, Geunah;Youn, Youjeong;Kang, Jonggu;Choi, Soyeon;Park, Ganghyun;Chun, Junghwa;Jang, Keunchang;Won, Myoungsoo;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_1
    • /
    • pp.627-646
    • /
    • 2022
  • Recently, the problems caused by global warming have grown increasingly serious as the average temperature rises. This affects the environments in which various temperature-sensitive organisms live, and changes in ecosystems are being detected. Phenological seasons are one of the important factors influencing the types, distribution, and growth characteristics of the organisms living in an area. Among the most popular and easily recognized plant phenological indicators of climate change impact, we modeled the flowering date and the autumn-foliage peak date. The plants used in the modeling were forsythia and cherry trees, representative plants of spring, and maple and ginkgo, representative plants of autumn. The weather data used for the modeling were temperature, precipitation, and solar radiation observed at the ASOS stations of the Korea Meteorological Administration. As satellite data, MODIS NDVI was used; it has a correlation coefficient of about -0.2 with the flowering date and about 0.3 with the autumn-foliage peak date. Models were built with multiple regression, a linear model, and Random Forest, a nonlinear model. In addition, the values predicted by each model were expressed as isopleth maps using spatial interpolation techniques to show the trend of plant phenological change from 2003 to 2020. Using NDVI with higher spatio-temporal resolution in the future is expected to increase the accuracy of plant phenology modeling.
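The two model families named above — a linear multiple regression and a nonlinear Random Forest — can be sketched side by side with scikit-learn. The synthetic predictors and target below are illustrative stand-ins for the weather/NDVI inputs and the phenological date (day of year); nothing here reproduces the paper's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

# Synthetic data: 3 predictors standing in for temperature,
# precipitation, and NDVI; target is a day-of-year phenological date.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 100 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=1.0, size=200)

linear = LinearRegression().fit(X, y)                           # multiple regression
forest = RandomForestRegressor(n_estimators=100,
                               random_state=0).fit(X, y)        # nonlinear model
```

On real data, the two fitted models' predictions would be compared and then spatially interpolated into the isopleth maps the abstract describes.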

Automated Analyses of Ground-Penetrating Radar Images to Determine Spatial Distribution of Buried Cultural Heritage (매장 문화재 공간 분포 결정을 위한 지하투과레이더 영상 분석 자동화 기법 탐색)

  • Kwon, Moonhee;Kim, Seung-Sep
    • Economic and Environmental Geology
    • /
    • v.55 no.5
    • /
    • pp.551-561
    • /
    • 2022
  • Geophysical exploration methods are very useful for generating high-resolution images of underground structures and can be applied to the investigation of buried cultural properties and the determination of their exact locations. In this study, image feature extraction and image segmentation methods were applied to automatically distinguish the structures of buried relics in high-resolution ground-penetrating radar (GPR) images obtained at the center of the Silla Kingdom, Gyeongju, South Korea. The main purpose of the feature extraction analyses is to identify the circular features from building remains and the linear features from ancient roads and fences. Feature extraction was implemented with the Canny edge detection and Hough transform algorithms: we applied the Hough transform to the edge image produced by the Canny algorithm to determine the locations of the target features. However, the Hough transform requires different parameter settings for each survey sector. For image segmentation, we applied a connected-component labeling algorithm and object-based image analysis using the Orfeo Toolbox (OTB) in QGIS. The connected-component labeled image shows that the signals associated with the target buried relics are effectively connected and labeled, although multiple labels are often assigned to a single structure in the GPR data. Object-based image analysis was conducted with Large-Scale Mean-Shift (LSMS) image segmentation: a vector layer containing pixel values for each segmented polygon was estimated first and then used to build a training-validation dataset by assigning the polygons to one class associated with the buried relics and another class for the background. With a Random Forest classifier, the polygons in the LSMS segmentation layer are successfully classified into buried-relic polygons and background polygons. We therefore propose that the automatic classification methods applied to the GPR images of buried cultural heritage in this study can yield consistent analysis results for planning excavation processes.
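The Hough transform step above — voting in (rho, theta) space over an edge image so that linear features such as ancient roads emerge as accumulator peaks — can be illustrated on a tiny synthetic example. A real pipeline would use OpenCV's Canny and Hough routines on the GPR imagery; this self-contained toy version only accumulates votes for one vertical "road", with theta restricted to [0°, 90°) for simplicity.

```python
import numpy as np

# Toy edge image: a vertical linear feature at column x = 10,
# standing in for the Canny edge map of a GPR slice.
edges = np.zeros((20, 20), dtype=bool)
edges[:, 10] = True

thetas = np.deg2rad(np.arange(0, 90))     # line-normal angles
rhos = np.arange(-30, 31)                 # signed distances from origin
acc = np.zeros((len(rhos), len(thetas)), dtype=int)

# Each edge pixel votes for every (rho, theta) line passing through it.
ys, xs = np.nonzero(edges)
for x, y in zip(xs, ys):
    for j, t in enumerate(thetas):
        rho = int(round(x * np.cos(t) + y * np.sin(t)))
        acc[rho + 30, j] += 1

# The accumulator peak recovers the line: all 20 pixels agree on
# rho = 10 at theta = 0 (a vertical line 10 units from the origin).
i, j = np.unravel_index(acc.argmax(), acc.shape)
```

The sector-dependent parameter tuning the abstract mentions corresponds, in this picture, to choosing the vote threshold and resolution of the (rho, theta) grid per survey sector.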