• Title/Summary/Keyword: Learning Machine System


Real-time flood prediction applying random forest regression model in urban areas (랜덤포레스트 회귀모형을 적용한 도시지역에서의 실시간 침수 예측)

  • Kim, Hyun Il;Lee, Yeon Su;Kim, Byunghyun
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.spc1
    • /
    • pp.1119-1130
    • /
    • 2021
  • Urban flooding caused by localized heavy rainfall under an unstable climate occurs constantly, but no system has yet been prepared that can predict spatial flood information from weather forecasts. The worst urban flood situations can occur when structural measures such as river levees, the discharge capacity of urban sewers, stormwater storage basins, and pump facilities prove insufficient. Identifying spatial flood information in advance, however, can have a decisive effect on minimizing flood damage. Therefore, this study presents a methodology that can predict an urban flood map in real time by using rainfall data from the Korea Meteorological Administration (KMA), the results of two-dimensional flood analysis, and a random forest (RF) regression model. The Ujeong district in Ulsan Metropolitan City, where flooding occurs frequently, was selected as the study area. The RF regression model predicted the flood maps corresponding to 50 mm, 80 mm, and 110 mm rainfall events of 6-hour duration, and the predictions showed 63%, 80%, and 67% goodness of fit, respectively, against the results of the two-dimensional flood analysis model. The results of this study can serve as basic data for evacuation and response to sudden urban flooding.
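A minimal sketch of the core idea described above (a random forest mapping a 6-hour rainfall series to per-cell flood depths) might look as follows; the synthetic data, grid size, and variable names are illustrative assumptions, not the paper's actual dataset or code.

```python
# Minimal sketch: train a random forest to map a rainfall time series (6-hour event)
# to per-cell flood depths produced by a 2-D flood model. The data below are synthetic
# placeholders standing in for KMA rainfall and 2-D flood analysis outputs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_events, n_steps, n_cells = 200, 36, 500                 # 36 x 10-min steps = 6 h; 500 grid cells
rainfall = rng.gamma(2.0, 3.0, size=(n_events, n_steps))  # hyetographs (mm per step)
flood_depth = rng.random((n_events, n_cells))             # stand-in for 2-D model output (m)

X_train, X_test, y_train, y_test = train_test_split(
    rainfall, flood_depth, test_size=0.3, random_state=0
)

# Multi-output regression: one forest predicts depths for all grid cells at once.
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)
pred_map = rf.predict(X_test)                              # shape: (n_test_events, n_cells)
print(pred_map.shape)
```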

Systematic literature review on the impact of government financial support on innovation in private firms (정부의 기술혁신 재정지원 정책효과에 대한 체계적 문헌연구)

  • Ahn, Joon Mo
    • Journal of Technology Innovation
    • /
    • v.30 no.1
    • /
    • pp.57-104
    • /
    • 2022
  • The government has supported the innovation of private firms by intervening in the market for various purposes, such as preventing market failure, alleviating information asymmetry, and allocating resources efficiently. Although the government's R&D budget increased rapidly in the 2000s, it is not clear whether government intervention has had the desired impact on the market. To address this, the present study explores the issue through a systematic literature review of foreign and domestic papers in an integrated way. In total, 168 studies are analyzed using a content analysis approach, and various lenses, such as policy additionality, policy tools, firm size, unit of analysis, and data and methods, are adopted for the analysis. Overlapping policy targets, the time lag between government intervention and policy effects, the non-linearity of financial support, interference between different policies, and an outdated R&D tax incentive system are reported as factors hampering the effect of government intervention. Several policy prescriptions, such as program evaluation indices reflecting behavioral additionality, the introduction of policy mixes, and evidence-based policy using machine learning, are suggested to overcome these hurdles.

Artificial Neural Network with Firefly Algorithm-Based Collaborative Spectrum Sensing in Cognitive Radio Networks

  • Velmurugan., S;P. Ezhumalai;E.A. Mary Anita
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.7
    • /
    • pp.1951-1975
    • /
    • 2023
  • Recent advances in Cognitive Radio Networks (CRN) have made them a critical instrument for overcoming spectrum limits and meeting demanding future wireless communication requirements. Because spectrum sensing is an essential part of CRNs, collaborative spectrum sensing is presented for efficient channel selection. This study presents a cooperative spectrum sensing (CSS) model built on the Firefly Algorithm (FA) together with a machine learning artificial neural network (ANN). The system uses user grouping strategies to improve detection performance dramatically while lowering collaboration costs. Cooperative sensing is applied only after cognitive radio users have been correctly identified from energy data samples by the ANN model, producing a user base that is secure, requires less effort, and is less error-prone. The purpose of the proposed method is to choose the best transmission channel. The ANN-FA model uses clustering to reduce spectrum sensing inaccuracy, and the transmission channel with the highest weight is chosen using the proposed channel-weight computation. The model computes channel weight from three input parameters: PU utilization, CR count, and channel capacity. The key parameters of the ANN-FA scheme are optimized with an improved evolutionary algorithm to boost the overall efficiency of the CRN channel selection technique. The work focuses primarily on sensing the optimal secondary-user channel and reducing spectrum handoff delay in wireless networks. Several benchmark functions are used to evaluate the efficacy of the strategy. According to the experimental findings, the performance of ANN-FA is 22.72 percent more robust and effective than that of the other metaheuristic algorithm. The proposed ANN-FA model is simulated in the NS2 simulator, and the results are evaluated in terms of average interference ratio, spectrum opportunity utilization, packet delivery ratio (PDR), end-to-end delay, and average throughput for a variety of CR counts in the network.
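A toy illustration of the channel-weight idea is sketched below: a small neural network maps (PU utilization, CR count, channel capacity) to a per-channel weight, and the highest-weighted channel is selected. The network parameters here are random placeholders standing in for the Firefly-Algorithm-tuned weights; all names and values are assumptions.

```python
# Toy sketch of channel scoring in the spirit of the ANN-FA scheme: a tiny network maps
# (PU utilization, CR count, channel capacity) to a channel weight, and the channel with
# the highest weight is selected. In the paper the parameters are tuned by the Firefly
# Algorithm; here they are random placeholders.
import numpy as np

rng = np.random.default_rng(1)

def channel_weight(features, W1, b1, W2, b2):
    """One-hidden-layer scorer; features has shape (n_channels, 3)."""
    hidden = np.tanh(features @ W1 + b1)
    return (hidden @ W2 + b2).ravel()

n_channels = 8
# Columns: PU utilization (0-1), number of contending CRs, capacity (Mbps) -- illustrative.
features = np.column_stack([
    rng.random(n_channels),
    rng.integers(1, 10, n_channels),
    rng.uniform(1, 20, n_channels),
])

W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)

weights = channel_weight(features, W1, b1, W2, b2)
best = int(np.argmax(weights))
print(f"selected channel: {best}, weight: {weights[best]:.3f}")
```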

Implementation of reliable dynamic honeypot file creation system for ransomware attack detection (랜섬웨어 공격탐지를 위한 신뢰성 있는 동적 허니팟 파일 생성 시스템 구현)

  • Kyoung Wan Kug;Yeon Seung Ryu;Sam Beom Shin
    • Convergence Security Journal
    • /
    • v.23 no.2
    • /
    • pp.27-36
    • /
    • 2023
  • In recent years, ransomware attacks have become more organized and specialized, with sophisticated attacks targeting specific individuals or organizations using tactics such as social engineering, spear phishing, and even machine learning, and some operating as business models. To respond effectively, various studies and solutions are being developed and deployed to detect and prevent attacks before they cause serious damage. In particular, honeypots can be used to minimize the risk of attack on IT systems and networks and to act as an early warning and advanced security monitoring tool; however, when ransomware does not access the decoy file first, or bypasses it completely, an effective ransomware response is limited. In this paper, the honeypot is optimized for the user environment to create reliable, real-time dynamic honeypot files, minimizing the possibility of an attacker bypassing the honeypot and increasing the detection rate by preventing the attacker from recognizing the decoy as a honeypot file. To this end, four models for dynamic honeypot generation were designed (a basic data collection model, a user-defined model, a sample statistical model, and an experience accumulation model), and their validity was verified.
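As a rough, hedged sketch of what dynamic honeypot file generation based on user-environment statistics could look like (the paths, naming rules, and statistics used here are hypothetical and not taken from the paper):

```python
# Illustrative sketch only: generate decoy (honeypot) files whose extensions and sizes
# mimic the statistics of existing user files, so they are harder for ransomware to
# recognize as decoys. Paths and naming rules are hypothetical.
import os
import random
from collections import Counter
from pathlib import Path

def collect_profile(user_dir: Path):
    """Sample-statistics step: gather extension frequencies and typical file sizes."""
    exts, sizes = Counter(), []
    for p in user_dir.rglob("*"):
        if p.is_file():
            exts[p.suffix.lower()] += 1
            sizes.append(p.stat().st_size)
    return exts, sizes

def create_decoys(user_dir: Path, decoy_dir: Path, count: int = 10):
    exts, sizes = collect_profile(user_dir)
    decoy_dir.mkdir(parents=True, exist_ok=True)
    for i in range(count):
        ext = random.choices(list(exts), weights=list(exts.values()))[0] if exts else ".docx"
        size = random.choice(sizes) if sizes else 4096
        decoy = decoy_dir / f"report_{i:03d}{ext}"
        decoy.write_bytes(os.urandom(min(size, 1_000_000)))  # dummy content, capped at 1 MB

create_decoys(Path.home() / "Documents", Path.home() / "Documents" / ".decoys")
```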

The Prediction of the Helpfulness of Online Review Based on Review Content Using an Explainable Graph Neural Network (설명가능한 그래프 신경망을 활용한 리뷰 콘텐츠 기반의 유용성 예측모형)

  • Eunmi Kim;Yao Ziyan;Taeho Hong
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.309-323
    • /
    • 2023
  • As the role of online reviews has become increasingly crucial, numerous studies have been conducted on how to utilize helpful reviews. Various studies have verified that the helpfulness of a review, as perceived by customers, is influenced by factors such as ratings, review length, and review content. The helpfulness of a review is generally determined by the number of 'helpful' votes from consumers, and reviews with more 'helpful' votes are considered to have a greater impact on consumers' purchasing decisions. However, recently written reviews that have not yet been exposed to many customers may have relatively few 'helpful' votes, or none at all, simply due to a lack of participation. Therefore, rather than relying on the number of 'helpful' votes to assess helpfulness, we aim to classify reviews based on their content. Moreover, the review text is the most influential factor in review helpfulness. This study employs text mining techniques, including topic modeling and sentiment analysis, to analyze the diverse effects of the content and emotions embedded in review text. We propose a review helpfulness prediction model based on review content, using movie reviews from IMDb, a global movie information site. We construct the prediction model with an explainable Graph Neural Network (GNN), thereby addressing the interpretability limitations of machine learning models. Because it can identify connections between reviews, the explainable graph neural network is expected to provide more reliable information about helpful and non-helpful reviews.
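A minimal sketch of a GNN classifier over a review graph, assuming PyTorch Geometric, is shown below; the node features (e.g., topic and sentiment scores from text mining), the edges linking related reviews, and the labels are placeholders, and the paper's explainability analysis is not reproduced here.

```python
# Minimal sketch: a two-layer GCN classifies review nodes as helpful / not helpful.
# Features, edges, and labels are random placeholders standing in for the paper's
# content-based review graph.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

num_reviews, num_features = 100, 16
x = torch.randn(num_reviews, num_features)             # per-review content features
edge_index = torch.randint(0, num_reviews, (2, 400))   # placeholder review-review links
y = torch.randint(0, 2, (num_reviews,))                # 1 = helpful, 0 = not helpful

class ReviewGCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(num_features, 32)
        self.conv2 = GCNConv(32, 2)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

data = Data(x=x, edge_index=edge_index, y=y)
model = ReviewGCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for _ in range(50):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out, data.y)
    loss.backward()
    optimizer.step()
```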

A Study on the Drug Classification Using Machine Learning Techniques (머신러닝 기법을 이용한 약물 분류 방법 연구)

  • Anmol Kumar Singh;Ayush Kumar;Adya Singh;Akashika Anshum;Pradeep Kumar Mallick
    • Advanced Industrial SCIence
    • /
    • v.3 no.2
    • /
    • pp.8-16
    • /
    • 2024
  • This paper presents a drug classification system whose goal is to predict the appropriate drug for a patient based on demographic and physiological traits. The dataset consists of attributes such as Age, Sex, BP (Blood Pressure), Cholesterol Level, and Na_to_K (Sodium to Potassium ratio), with the objective of determining the kind of drug to prescribe. The models used in this paper are K-Nearest Neighbors (KNN), Logistic Regression, and Random Forest. GridSearchCV with 5-fold cross-validation was used to fine-tune the hyperparameters, and each model was trained and tested on the dataset. To assess the performance of each model with and without hyperparameter tuning, evaluation metrics such as accuracy, confusion matrices, and classification reports were used; the accuracies of the models without GridSearchCV were 0.7, 0.875, and 0.975, and with GridSearchCV they were 0.75, 1.0, and 0.975. According to the GridSearchCV results, Logistic Regression is the most suitable of the three models for drug classification, followed by K-Nearest Neighbors. Na_to_K also proved to be an essential feature for predicting the outcome.
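A sketch of the comparison described above, assuming scikit-learn and a CSV file with the listed columns (the file name and parameter grids are illustrative, not the paper's code):

```python
# Sketch: tune KNN, logistic regression, and random forest with 5-fold GridSearchCV.
# The CSV path, target column, and grids are assumptions based on the abstract.
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("drug_dataset.csv")                      # hypothetical file name
X, y = df.drop(columns="Drug"), df["Drug"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

pre = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["Sex", "BP", "Cholesterol"])],
    remainder="passthrough",                               # keep Age and Na_to_K numeric
)

candidates = {
    "knn": (KNeighborsClassifier(), {"clf__n_neighbors": [3, 5, 7, 9]}),
    "logreg": (LogisticRegression(max_iter=1000), {"clf__C": [0.1, 1.0, 10.0]}),
    "rf": (RandomForestClassifier(random_state=42), {"clf__n_estimators": [100, 300]}),
}

for name, (clf, grid) in candidates.items():
    pipe = Pipeline([("pre", pre), ("clf", clf)])
    search = GridSearchCV(pipe, grid, cv=5)               # 5-fold cross-validation
    search.fit(X_train, y_train)
    acc = accuracy_score(y_test, search.predict(X_test))
    print(f"{name}: best params {search.best_params_}, test accuracy {acc:.3f}")
```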

Spectral Band Selection for Detecting Fire Blight Disease in Pear Trees by Narrowband Hyperspectral Imagery (초분광 이미지를 이용한 배나무 화상병에 대한 최적 분광 밴드 선정)

  • Kang, Ye-Seong;Park, Jun-Woo;Jang, Si-Hyeong;Song, Hye-Young;Kang, Kyung-Suk;Ryu, Chan-Seok;Kim, Seong-Heon;Jun, Sae-Rom;Kang, Tae-Hwan;Kim, Gul-Hwan
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.23 no.1
    • /
    • pp.15-33
    • /
    • 2021
  • In this study, the possibility of discriminating fire blight (FB) infection was tested using hyperspectral imagery. The reflectance of healthy and infected leaves and branches was acquired at a full width at half maximum (FWHM) of 5 nm and then standardized to FWHMs of 10 nm, 25 nm, 50 nm, and 80 nm. The standardized samples were divided into training and test sets at ratios of 7:3, 5:5, and 3:7 to find the optimal bands for each FWHM by decision tree analysis. Classification accuracy was evaluated using overall accuracy (OA) and the kappa coefficient (KC). The hyperspectral reflectance of infected leaves and branches was significantly lower than that of healthy ones in the green, red-edge (RE), and near-infrared (NIR) regions. The bands selected for the first node were generally 750 and 800 nm; these were used to identify infection in leaves and branches, respectively. The accuracy of the classifier was higher at the 7:3 ratio. Four bands with an FWHM of 50 nm (450, 650, 750, and 950 nm) appear reasonable because the difference in recalculated accuracy between eight bands with a 10 nm FWHM (440, 580, 640, 660, 680, 710, 730, and 740 nm) and the four bands was only 1.8% for OA and 4.1% for KC. Finally, adding two bands (550 nm and 800 nm with a 25 nm FWHM) to the four 50 nm FWHM bands is proposed to improve the usability of multispectral image sensors, which can perform various roles in agriculture as well as detect FB with other combinations of spectral bands.
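A small sketch of the band-selection idea, assuming scikit-learn: fit a decision tree on per-band reflectance and inspect which wavelength the tree splits on first. The reflectance values are synthetic and the band centers are illustrative.

```python
# Sketch: decision tree on per-band reflectance (healthy vs. infected), reporting OA,
# kappa, and the wavelength chosen at the root split. Data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
bands = np.arange(450, 1000, 50)                 # hypothetical 50 nm FWHM band centers
n_samples = 300
X = rng.random((n_samples, bands.size))          # stand-in reflectance spectra
y = rng.integers(0, 2, n_samples)                # 0 = healthy, 1 = fire blight

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)  # 7:3 split
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

pred = tree.predict(X_te)
print("OA:", accuracy_score(y_te, pred), "KC:", cohen_kappa_score(y_te, pred))
print("first-split band (nm):", bands[tree.tree_.feature[0]])
```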

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.21 no.1
    • /
    • pp.103-122
    • /
    • 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine every document in full to determine whether it might be useful to them. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents could also benefit from keywords, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, implementation is the obstacle: manually assigning keywords to all documents is a daunting, even impractical, task, as it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are two main approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, a vocabulary is given, and the aim is to match its terms to the texts; in other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. Here, automatic keyword generation is treated as a classification task, and keywords are commonly extracted using supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document as positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords; as a result, keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords that are not included in it. According to the experimental results of Turney, about 64% to 90% of the keywords assigned by authors can be found in the full text of an article. Conversely, this means that 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we have adopted the keyword assignment approach. In this paper, we propose a new approach to automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model,
which is a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword's weight; (2) preprocess and parse a target document that does not have keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords that have high similarity scores. Two keyword generation systems applying IVSM were implemented: an IVSM system for a Web-based community service and a stand-alone IVSM system. The first is embedded in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. IVSM also shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As the number of electronic documents grows, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
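A minimal sketch of the IVSM assignment steps (term vectors for each candidate keyword set and the target document, ranked by cosine similarity) is given below; the toy vocabulary and weighting are illustrative and do not reproduce the paper's exact scheme.

```python
# Minimal sketch: represent each candidate keyword set and the target document as term
# vectors, rank keyword sets by cosine similarity, and assign the best-matching keywords.
# The toy corpus and keyword profiles are illustrative placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Controlled vocabulary: each keyword maps to text from documents already tagged with it.
keyword_profiles = {
    "machine learning": "supervised learning classification training model features",
    "information retrieval": "query document ranking index search relevance",
    "logistics": "shipping port freight distribution supply chain",
}

target_doc = "we train a classification model and evaluate retrieval relevance on documents"

vectorizer = CountVectorizer()
matrix = vectorizer.fit_transform(list(keyword_profiles.values()) + [target_doc])
keyword_vecs, doc_vec = matrix[:-1], matrix[-1]

scores = cosine_similarity(keyword_vecs, doc_vec).ravel()
ranked = sorted(zip(keyword_profiles, scores), key=lambda kv: kv[1], reverse=True)
print(ranked[:2])   # keywords with the highest similarity are assigned to the document
```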

A Comparison between the Reference Evapotranspiration Products for Croplands in Korea: Case Study of 2016-2019 (우리나라 농지의 기준증발산 격자자료 비교평가: 2016-2019년의 사례연구)

  • Kim, Seoyeon;Jeong, Yemin;Cho, Subin;Youn, Youjeong;Kim, Nari;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.6_1
    • /
    • pp.1465-1483
    • /
    • 2020
  • Evapotranspiration is a concept that includes evaporation from the soil and transpiration from plant leaves. It is an essential factor for monitoring water balance, drought, crop growth, and climate change. Actual evapotranspiration (AET) corresponds to the water consumed by the land surface and the amount of water the land surface requires. Because AET is derived by multiplying the crop coefficient by the reference evapotranspiration (ET0), an accurate calculation of ET0 is required for AET. To date, many efforts have been made to produce gridded ET0, and multiple products are now available. This study compares ET0 products such as FAO56-PM, LDAPS, PKNU-NMSC, and MODIS to determine which is more suitable for local-scale hydrological and agricultural applications in Korea, where land surface heterogeneity is critical. In the experiment covering the period 2016 to 2019, the daily and 8-day products were compared with in-situ observations by KMA. Analyses by station, year, month, and time series showed that the PKNU-NMSC product, successfully optimized for Korea, was superior to the others, yielding stable accuracy regardless of space and time. This paper also describes intrinsic characteristics of the FAO56-PM, LDAPS, and MODIS ET0 products that could be informative for other researchers.
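As a tiny illustration of the stated relationship (AET = crop coefficient × ET0) and of how a gridded ET0 product can be scored against in-situ observations (all numbers below are made up):

```python
# Tiny illustration: derive AET from a crop coefficient and ET0, and compute the RMSE of
# a gridded ET0 product against station observations. Values are illustrative only.
import numpy as np

kc = 1.05                                            # illustrative crop coefficient
et0_product = np.array([3.1, 4.0, 4.8, 5.2, 4.4])    # gridded daily ET0 (mm/day)
et0_insitu  = np.array([3.0, 4.2, 4.6, 5.5, 4.3])    # station-based ET0 (mm/day)

aet = kc * et0_product                               # actual evapotranspiration estimate
rmse = np.sqrt(np.mean((et0_product - et0_insitu) ** 2))
print(f"AET (mm/day): {aet.round(2)}, RMSE vs in-situ: {rmse:.3f}")
```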

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed, compensating for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for those methods. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance, Black-Litterman, and risk parity models. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets. It avoids investment risk structurally, which provides stability in the management of large funds, and it has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It can handle billions of examples in limited-memory environments and learns much faster than traditional boosting methods, and it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model and the XGBoost machine learning model. This model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Estimation errors arise between the estimation period and the actual investment period because the optimized asset allocation model estimates investment proportions from historical data, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data covering a total of 17 years, from 2003 to 2019. The data set comprises the energy, finance, IT, industrial, materials, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of cumulative rate of return, and the long test period provided a large sample of results. Compared with the traditional risk parity model, the experiment showed improvements in both cumulative return and the reduction of estimation errors. The total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results of the experiment thus show that portfolio performance improves when the estimation errors of the optimized asset allocation model are reduced. Many financial models and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist into the future in a changing financial market.
However, this study not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting asset risks with a state-of-the-art algorithm. Various studies have examined parametric estimation methods for reducing estimation errors in portfolio optimization; here we suggest a new machine learning-based method for reducing the estimation errors of an optimized asset allocation model. This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
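A simplified sketch of the idea, assuming the xgboost package: predict each asset's next-period volatility with XGBoost and derive risk-based weights from the predictions. Inverse-volatility weighting is used here as a stand-in for the full risk parity optimization, and the return data are synthetic.

```python
# Simplified sketch: predict each asset's next-period volatility with XGBoost from lagged
# realized volatilities, then set inverse-volatility weights as a simplified risk-based
# allocation. Synthetic returns stand in for the Korean sector data used in the paper.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n_days, n_assets = 1000, 10                       # e.g., 10 sector indices
returns = pd.DataFrame(rng.normal(0, 0.01, (n_days, n_assets)))

window = 20
realized_vol = returns.rolling(window).std().dropna()

weights = []
for asset in returns.columns:
    vol = realized_vol[asset]
    X = np.column_stack([vol.shift(1), vol.shift(2), vol.shift(3)])[3:]   # lagged vols
    y = vol.values[3:]
    model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
    model.fit(X[:-1], y[:-1])                       # train on all but the latest sample
    weights.append(1.0 / model.predict(X[-1:])[0])  # inverse of predicted next-period vol

weights = np.array(weights) / np.sum(weights)       # normalize to a fully invested portfolio
print(weights.round(3))
```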