• Title/Summary/Keyword: Machine Learning


A Study on the Stereotype of ICT SMEs' R&D: Empirical Evidence from Korea (ICT 중소기업 R&D의 스테레오타입에 대한 연구 : 한국의 사례를 중심으로)

  • Jun, Seung-pyo; Choi, San; Jung, JaeOong
    • Journal of Korea Technology Innovation Society / v.20 no.2 / pp.334-367 / 2017
  • The ICT industry has been the main driver of Korea's economy with international competitiveness and is expected to be the growth engine that will revitalize the currently depressed economy. A broad range of perspectives and opinions on the industry exist in Korea and overseas. Some of these are stereotypes, not all of which are based on objective evidence. Stereotypes refer to widely held, fixed opinions about a specific group and do not necessarily have negative connotations. However, they should not be taken lightly, because they can substantially affect the decision-making process. In this regard, this study sought to review the stereotypes of the ICT industry and identify objective and relative stereotypes. In the study, a decision-tree analysis was conducted on survey results from 3,300 small and medium-sized enterprises (SMEs) in order to identify the characteristics that distinguish Korean ICT companies from other technology companies. The decision-tree analysis, a data mining process based on machine learning, took a total of 291 variables into account across 10 subject areas, including corporate business in general, technology development activities, and the organization and people involved in technology development. Having identified the variables that distinguish ICT companies from other technology companies through the decision-tree analysis, the study then compiled a list of objective stereotypes of ICT companies. The findings on the stereotypes of Korean ICT companies are as follows. First, the companies are in need of technology policies that help with R&D planning and market penetration. Second, policies must better support companies working to sell new products or explore new business. Third, the companies need policies that support secure protection of development outcomes and proper management of IP rights. Fourth, the administrative procedures related to governmental support for ICT companies' R&D projects must be simplified. It is hoped that the outcome of this study will provide meaningful guidance for the establishment, implementation, and evaluation of technology policies for ICT SMEs, particularly to policymakers or researchers in relevant government agencies who determine R&D policies for ICT SMEs.
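As a rough illustration of the kind of analysis described above, the sketch below fits a decision tree to tabular survey data and prints the splitting variables, which correspond to candidate "stereotype" characteristics. The file name, column names, and encoding of the 291 survey variables are hypothetical placeholders; this is not the authors' code.

```python
# Minimal sketch, assuming a pre-encoded (numeric) survey table; not the authors' implementation.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("sme_survey.csv")      # hypothetical export: one row per SME, 291 encoded variables
X = df.drop(columns=["is_ict"])         # survey variables
y = df["is_ict"]                        # 1 = ICT SME, 0 = other technology SME

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=50, random_state=0)
tree.fit(X_train, y_train)

print("holdout accuracy:", tree.score(X_test, y_test))
# Each split variable is a candidate characteristic that separates ICT SMEs from the rest.
print(export_text(tree, feature_names=list(X.columns)))
```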

A Comparative Study on the Possibility of Land Cover Classification of the Mosaic Images on the Korean Peninsula (한반도 모자이크 영상의 토지피복분류 활용 가능성 탐색을 위한 비교 연구)

  • Moon, Jiyoon; Lee, Kwang Jae
    • Korean Journal of Remote Sensing / v.35 no.6_4 / pp.1319-1326 / 2019
  • KARI (Korea Aerospace Research Institute) operates the government satellite information application consultation body to cope with the ever-increasing demand for satellite images in the public sector, and carries out various support projects every year, including the generation and provision of mosaic images of the Korean Peninsula, to enhance user convenience and promote the use of satellite images. In particular, the government has wanted to increase the utilization of the Korean Peninsula mosaic images and has sought to classify and update them so that users can easily apply them in their work. However, it is necessary to test and verify whether classification results derived from the mosaic images can be utilized in the field, since the original spectral information is distorted during pan-sharpening and color balancing, and only the R, G, and B bands are provided. Therefore, in this study, the reliability of the classification result of the mosaic image was compared with that of a KOMPSAT-3 image. The study found that the accuracy of the KOMPSAT-3 classification result was between 81 and 86% (overall accuracy about 85%), while the accuracy of the mosaic-image classification result was between 69 and 72% (overall accuracy about 72%). This difference is attributed not only to the distortion of the original spectral information through the pan-sharpening and mosaic processes, but also to the fact that NDVI and NDWI information could be extracted from the KOMPSAT-3 image but not from the mosaic image, which provides only the three color bands (R, G, B). Although it is deemed inadequate to distribute classification results extracted from mosaic images at present, it will be necessary to explore ways to minimize the distortion of spectral information when producing mosaic images, to develop classification techniques suited to mosaic images, and to provide NIR band information. In addition, the utilization of images with limited spectral information could increase in the future if related research continues, such as comparative analysis of classification results by geomorphological characteristics and the development of machine learning methods for classifying objects of interest in images.
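For reference, the two spectral indices mentioned above have simple closed forms; the sketch below computes them from band arrays. It assumes reflectance-valued NumPy arrays and is illustrative only, not the study's processing chain.

```python
# Minimal sketch: NDVI and NDWI, the indices available for the KOMPSAT-3 image
# (which has an NIR band) but not for the R/G/B-only mosaic. Toy arrays stand in
# for real reflectance rasters.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Water Index (McFeeters): (Green - NIR) / (Green + NIR)."""
    return (green - nir) / np.clip(green + nir, 1e-6, None)

red   = np.array([[0.10, 0.12], [0.08, 0.30]])
green = np.array([[0.12, 0.15], [0.10, 0.28]])
nir   = np.array([[0.40, 0.45], [0.05, 0.35]])

print(ndvi(nir, red))    # high over vegetation
print(ndwi(green, nir))  # high over water
```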

Product Recommender Systems using Multi-Model Ensemble Techniques (다중모형조합기법을 이용한 상품추천시스템)

  • Lee, Yeonjeong; Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.39-54 / 2013
  • The recent explosive increase of electronic commerce provides many advantageous purchase opportunities to customers. In this situation, customers who do not have enough knowledge about their purchases may accept product recommendations. Product recommender systems automatically reflect users' preferences and provide recommendation lists to users. Thus, product recommender systems in online shopping stores have become known as one of the most popular tools for one-to-one marketing. However, recommender systems that do not properly reflect users' preferences cause disappointment and wasted time. In this study, we propose a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by precisely reflecting users' preferences. The research data were collected from a real-world online shopping store that deals in products from famous art galleries and museums in Korea. The data initially contained 5,759 transactions, of which 3,167 remained after deletion of null records. In this study, we transform the categorical variables into dummy variables and exclude outlier data. The proposed model consists of two steps. The first step predicts customers who have a high likelihood of purchasing products in the online shopping store. In this step, we first use logistic regression, decision trees, and artificial neural networks to predict customers with a high likelihood of purchasing products in each product group. We perform the above data mining techniques using SAS E-Miner software. We partition the dataset into modeling and validation sets for the logistic regression and decision trees, and into training, test, and validation sets for the artificial neural network model; the validation dataset is the same for all experiments. We then combine the results of each predictor using multi-model ensemble techniques such as bagging and bumping. Bagging, short for "Bootstrap Aggregation," combines outputs from several machine learning techniques to raise the performance and stability of prediction or classification; it is a special form of the averaging method. Bumping, short for "Bootstrap Umbrella of Model Parameters," keeps only the model that has the lowest error value. The results show that bumping outperforms bagging and the other predictors except for the "Poster" product group, for which the artificial neural network model performs best. In the second step, we use market basket analysis to extract association rules for co-purchased products. We extract thirty-one association rules according to the values of the lift, support, and confidence measures, setting the minimum transaction frequency to support associations at 5%, the maximum number of items in an association at 4, and the minimum confidence for rule generation at 10%. This study also excludes extracted association rules with a lift value below 1. After excluding duplicate rules, fifteen association rules remain. Among them, eleven rules describe associations between products within the "Office Supplies" product group, one rule describes an association between the "Office Supplies" and "Fashion" product groups, and the other three rules describe associations between the "Office Supplies" and "Home Decoration" product groups.
Finally, the proposed product recommender system provides a list of recommendations to the appropriate customers. We test the usability of the proposed system using a prototype and real-world transaction and profile data. To this end, we construct the prototype system using ASP, JavaScript, and Microsoft Access. In addition, we survey user satisfaction with the recommended product lists from the proposed system and with randomly selected product lists. The survey participants are 173 users of MSN Messenger, Daum Café, and P2P services. We evaluate user satisfaction using a five-point Likert scale and perform a paired-sample t-test on the survey results. The results show that the proposed model outperforms the random selection model at the 1% statistical significance level, meaning that users were significantly more satisfied with the recommended product lists. The results also show that the proposed system may be useful in real-world online shopping stores.
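Since bagging and bumping are the core combination techniques above, the sketch below shows both in minimal form on simulated purchase data. The synthetic data and the use of decision trees only (rather than the paper's mix of logistic regression, decision trees, and neural networks in SAS E-Miner) are simplifications for illustration, not the paper's workflow.

```python
# Minimal sketch of bagging vs. bumping for a binary purchase-likelihood model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the 3,167 cleaned transactions (hypothetical synthetic features).
X, y = make_classification(n_samples=3167, n_features=20, random_state=0)

# Bagging ("Bootstrap Aggregation"): average many bootstrap-trained models.
bagging = BaggingClassifier(DecisionTreeClassifier(max_depth=5), n_estimators=50, random_state=0)
bagging.fit(X, y)

# Bumping ("Bootstrap Umbrella of Model Parameters"): train on bootstrap samples
# and keep only the single model with the lowest error on the original data.
rng = np.random.default_rng(0)
best_model, best_err = None, np.inf
for _ in range(50):
    idx = rng.integers(0, len(X), size=len(X))                 # bootstrap resample
    model = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X[idx], y[idx])
    err = 1.0 - model.score(X, y)                              # error on the original data
    if err < best_err:
        best_model, best_err = model, err

print("bagging accuracy on training data:", bagging.score(X, y))
print("bumping accuracy on training data:", 1.0 - best_err)
```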

Assessment of climate change impact on aquatic ecology health indices in Han river basin using SWAT and random forest (SWAT 및 random forest를 이용한 기후변화에 따른 한강유역의 수생태계 건강성 지수 영향 평가)

  • Woo, So Young; Jung, Chung Gil; Kim, Jin Uk; Kim, Seong Joon
    • Journal of Korea Water Resources Association / v.51 no.10 / pp.863-874 / 2018
  • The purpose of this study is to evaluate the impact of future climate change on the stream aquatic ecology health of the Han River watershed (34,148 km²) using SWAT (Soil and Water Assessment Tool) and random forest. Eight years (2008-2015) of spring (April to June) Aquatic ecology Health Indices (AHI), namely the Trophic Diatom Index (TDI), Benthic Macroinvertebrate Index (BMI), and Fish Assessment Index (FAI), scored (0-100) and graded (A-E) by NIER (National Institute of Environmental Research), were used. Comparing the 8-year NIER indices with the water quality data (T-N, NH₄, NO₃, T-P, PO₄) showed that the deviation of the AHI score is large when water quality concentrations are low, and that the AHI score is negatively correlated with concentration when concentrations are high. Using random forest, a machine learning technique for classification analysis, the classification results for the grades of the three indices showed that precision, recall, and F1-score were all above 0.81. The future SWAT hydrology and water quality results under the HadGEM3-RA RCP 4.5 and 8.5 scenarios of the Korea Meteorological Administration (KMA) showed that the watershed-average nitrogen-related water quality increased by up to 43.2% due to the baseflow increase effect, while the phosphorus-related water quality decreased by up to 18.9% due to the surface runoff decrease effect. The future FAI and BMI showed slightly better index grades, while the future TDI showed a slightly worse index grade. We can infer that the future TDI is more sensitive to nitrogen-related water quality, while the future FAI and BMI respond more to phosphorus-related water quality.
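The classification step described above maps water-quality and hydrology predictors to index grades; a minimal sketch of that step is given below. The input table, column names, and target grade column are hypothetical placeholders, not the study's data or code.

```python
# Minimal sketch: random-forest grading of an aquatic ecology health index with
# per-grade precision/recall/F1, as reported in the paper.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("han_river_ahi.csv")                    # hypothetical subbasin-season records
features = ["T_N", "NH4", "NO3", "T_P", "PO4", "flow"]   # assumed predictor columns
X, y = df[features], df["TDI_grade"]                     # grade label A-E (one of TDI/BMI/FAI)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, rf.predict(X_test)))  # precision, recall, F1 per grade
```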

Overview and Prospective of Satellite Chlorophyll-a Concentration Retrieval Algorithms Suitable for Coastal Turbid Sea Waters (연안 혼탁 해수에 적합한 위성 클로로필-a 농도 산출 알고리즘 개관과 전망)

  • Park, Ji-Eun; Park, Kyung-Ae; Lee, Ji-Hyun
    • Journal of the Korean earth science society / v.42 no.3 / pp.247-263 / 2021
  • Climate change has been accelerating in coastal waters recently; therefore, the importance of coastal environmental monitoring is also increasing. The chlorophyll-a concentration in the surface layer of the global ocean, an important marine variable, has been retrieved for decades through various ocean color satellites and utilized in many research fields. However, the commonly used chlorophyll-a concentration algorithm is suitable only for clear water and cannot be applied to turbid waters, because significant errors arise from differences in their constituents and optical properties. In addition, designing a standard algorithm for coastal waters is difficult because the optical characteristics vary from one coastal area to another. To overcome this problem, various algorithms have been developed that account for the constituents and the variations in the optical properties of highly turbid coastal waters. Chlorophyll-a concentration retrieval algorithms can be categorized into empirical algorithms, semi-analytic algorithms, and machine learning algorithms. These algorithms mainly use the blue-green band ratio, based on the reflectance spectrum of sea water, as their basic form. In contrast, algorithms developed for turbid water utilize the green-red band ratio, the red-near-infrared band ratio, and the inherent optical properties to compensate for the effects of dissolved organic matter and suspended sediments in coastal areas. Reliable retrieval of satellite chlorophyll-a concentration from turbid waters is essential for monitoring the coastal environment and understanding changes in the marine ecosystem. Therefore, this study summarizes the pre-existing algorithms that have been used for monitoring turbid Case 2 waters and presents the problems associated with the monitoring and study of the seas around the Korean Peninsula. We also summarize the prospects for future ocean color satellites, which can yield more accurate and diverse results regarding the ecological environment with the development of multi-spectral and hyperspectral sensors.
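As a concrete reference for the "blue-green band ratio" basic form mentioned above, the sketch below implements the generic empirical (OCx-style) formulation, log10(chl) = a0 + a1·X + a2·X² + a3·X³ + a4·X⁴ with X = log10(max(Rrs_blue)/Rrs_green). The coefficients are placeholders, not the tuned values of any particular sensor or of the algorithms reviewed in the paper.

```python
# Minimal sketch of an empirical blue-green band-ratio chlorophyll-a algorithm.
# Coefficients are illustrative placeholders only.
import numpy as np

def band_ratio_chl(rrs_blue: np.ndarray, rrs_green: np.ndarray,
                   coeffs=(0.3, -2.9, 1.7, -0.6, -1.0)) -> np.ndarray:
    """OCx-style polynomial of the maximum blue-to-green remote-sensing reflectance ratio."""
    x = np.log10(np.max(np.atleast_2d(rrs_blue), axis=0) / rrs_green)
    log_chl = sum(a * x**i for i, a in enumerate(coeffs))
    return 10.0 ** log_chl  # chlorophyll-a in mg m^-3

# Two pixels, three candidate blue bands (rows) and one green band per pixel.
rrs_blue = np.array([[0.004, 0.006],
                     [0.005, 0.007],
                     [0.005, 0.006]])
rrs_green = np.array([0.003, 0.006])
print(band_ratio_chl(rrs_blue, rrs_green))
```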

Artificial Intelligence In Wheelchair: From Technology for Autonomy to Technology for Interdependence and Care (휠체어 탄 인공지능: 자율적 기술에서 상호의존과 돌봄의 기술로)

  • HA, Dae-Cheong
    • Journal of Science and Technology Studies / v.19 no.2 / pp.169-206 / 2019
  • This article seeks to explore new relationships and ethics of humans and technology by analyzing a cultural imaginary produced around artificial intelligence. Drawing on theoretical reflections from feminist science and technology studies, which understand science and technology as a matter of care (Puig de la Bellacasa, 2011), this paper focuses on the fact that artificial intelligence and robots materialize cultural imaginaries such as autonomy. This autonomy, defined as the capacity to adapt to a new environment through self-learning, is accepted as a way of conceptualizing an authentic human or an ideal subject. However, this article argues that artificial intelligence is mediated by and dependent on invisible human labor and complex material devices, suggesting that such autonomy is closer to fiction. The recent growth of so-called 'assistant technology' shows that the care work of machines and of humans is being visualized differentially. Technology and its cultural imaginary hide the care work of human workers while actively visualizing that of the machine, and they turn autonomy and agency into ideals of humanness, leaving disabled bodies and dependency as unworthy. Artificial intelligence and its cultural imaginary negate the value of disabled bodies while idealizing abled bodies, and end up erasing the real relationship between humans and technology as mutually dependent beings. In conclusion, the author argues that the technology we need is not one that excludes non-typical bodies and the care work of others, but one that includes them as they are: a technology that responsibly empathizes with marginalized beings and encourages solidarity among fragile beings. Inspired by an art performance by the artist Sue Austin, the author finally proposes 'artificial intelligence in a wheelchair' as an alternative figuration to the currently dominant 'autonomous artificial intelligence'.

Predicting stock movements based on financial news with systematic group identification (시스템적인 군집 확인과 뉴스를 이용한 주가 예측)

  • Seong, NohYoon; Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.1-17 / 2019
  • Because stock price forecasting is an important issue both academically and practically, research on stock price prediction has been conducted actively. Such research can be classified into approaches that use structured data and approaches that use unstructured data. With structured data such as historical stock prices and financial statements, past studies usually employed technical and fundamental analysis. In the big data era, the amount of information has increased rapidly, and artificial intelligence methodologies that can find meaning by quantifying textual information, an unstructured data type that accounts for a large share of this information, have developed rapidly. With these developments, many attempts are being made to predict stock prices from online news by applying text mining. The methodology adopted in many papers is to forecast a stock price using news about the target company itself. However, according to previous research, not only does news about a target company affect its stock price; news about related companies can also affect it. Finding highly relevant companies is not easy, however, because of market-wide effects and random signals. Thus, existing studies have identified highly relevant companies based primarily on pre-determined international industry classification standards. Yet, according to recent research, the Global Industry Classification Standard shows varying homogeneity within its sectors, which leads to the limitation that forecasting stock prices by lumping all sector members together, rather than considering only truly relevant companies, can adversely affect predictive performance. To overcome this limitation, we combine random matrix theory with text mining for stock prediction. When the dimension of the data is large, classical limit theorems are no longer suitable because statistical efficiency is reduced; therefore, a simple correlation analysis in the financial market does not reveal the true correlation. To solve this issue, we adopt random matrix theory, which is mainly used in econophysics, to remove market-wide effects and random signals and find the true correlation between companies. With the true correlation, we perform cluster analysis to find relevant companies. Based on the clustering, we then use a multiple kernel learning algorithm, an ensemble of support vector machines, to incorporate the effects of the target firm and its relevant firms simultaneously; each kernel is assigned to predict stock prices from features of the financial news of the target firm or of its relevant firms. The results of this study are as follows. (1) Following the existing research flow, we confirmed that using news from relevant companies is an effective way to forecast stock prices. (2) When looking for relevant companies, searching in the wrong way can lower AI prediction performance. (3) The proposed approach with random matrix theory shows better performance than previous studies when cluster analysis is performed on the true correlation obtained by removing market-wide effects and random signals. The contributions of this study are as follows. First, this study shows that random matrix theory, which is used mainly in econophysics, can be combined with artificial intelligence to produce a sound methodology.
This suggests that it is important not only to develop AI algorithms but also to adopt theory from physics, extending existing research that integrated artificial intelligence with complex systems theory through transfer entropy. Second, this study stresses that finding the right companies in the stock market is an important issue, which suggests that it is important not only to study artificial intelligence algorithms but also to consider how the input values are theoretically selected. Third, we confirmed that firms grouped under the Global Industry Classification Standard (GICS) may have low relevance, and we suggest that it is necessary to define relevance theoretically rather than simply taking it from the GICS.
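The random-matrix step described above has a compact form: compare the eigenvalues of the return correlation matrix against the Marchenko-Pastur noise band and keep only the components above it before clustering. The sketch below illustrates this on simulated returns; it is not the paper's pipeline, which additionally removes the market mode and feeds the clusters into a multiple-kernel SVM over news features.

```python
# Minimal sketch: Marchenko-Pastur filtering of a correlation matrix and
# clustering on the cleaned correlations. Simulated data, illustrative only.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
T, N = 500, 50                                   # T return observations, N firms
returns = rng.standard_normal((T, N))
group = rng.standard_normal((T, 2))              # two latent "relatedness" factors
returns[:, :25] += group[:, [0]]                 # firms 0-24 share factor 0
returns[:, 25:] += group[:, [1]]                 # firms 25-49 share factor 1

corr = np.corrcoef(returns, rowvar=False)
lam_max = (1 + np.sqrt(N / T)) ** 2              # Marchenko-Pastur upper edge for pure noise
eigval, eigvec = np.linalg.eigh(corr)
signal = eigval > lam_max
print("eigenvalues above the noise band:", int(signal.sum()))

# Rebuild a "cleaned" correlation matrix from the informative components only.
cleaned = (eigvec[:, signal] * eigval[signal]) @ eigvec[:, signal].T
np.fill_diagonal(cleaned, 1.0)

# Hierarchical clustering on the correlation distance d_ij = sqrt(2 * (1 - C_ij)).
dist = np.sqrt(np.clip(2 * (1 - cleaned), 0, None))
Z = linkage(dist[np.triu_indices(N, 1)], method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```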

A Study on Improvement of Collaborative Filtering Based on Implicit User Feedback Using RFM Multidimensional Analysis (RFM 다차원 분석 기법을 활용한 암시적 사용자 피드백 기반 협업 필터링 개선 연구)

  • Lee, Jae-Seong; Kim, Jaeyoung; Kang, Byeongwook
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.139-161 / 2019
  • Use of the e-commerce market has become a common part of everyday life. For customers, it has become important to know where and how to make reasonable purchases of good-quality products. This change in purchase psychology tends to make it difficult for customers to make purchasing decisions amid vast amounts of information. In this situation, a recommendation system reduces the cost of information retrieval and improves satisfaction by analyzing customers' purchasing behavior. Amazon and Netflix are well-known examples of sales marketing that uses recommendation systems. In the case of Amazon, 60% of recommendations are reported to result in purchases, and a 35% increase in sales was achieved; Netflix, for its part, found that 75% of the movies watched were selected through its recommendation service. This personalization technique is considered one of the key strategies for one-to-one marketing, which can be useful in online markets where salespeople do not exist. The recommendation techniques mainly used today include collaborative filtering and content-based filtering; hybrid techniques and association rules that combine them are also used in various fields. Of these, collaborative filtering is the most popular. Collaborative filtering recommends products preferred by neighbors who have similar preferences or purchasing behavior, based on the assumption that users who have exhibited similar tendencies in purchasing or evaluating products in the past will show similar tendencies toward other products. However, most existing systems recommend only within the same category of products, such as books or movies, because the recommendation system estimates purchase satisfaction with a new, never-purchased item from the customer's purchase ratings for similar items in the transaction data. In addition, the reliability of the purchase ratings used in recommendation systems is a serious problem. In particular, a 'compensated review' refers to the intentional manipulation of a customer purchase rating through company intervention. In fact, Amazon has cracked down on such compensated reviews since 2016 and has worked hard to reduce false information and increase credibility. A survey showed that the average rating for products with compensated reviews was higher than for products without them; compensated reviews were found to be about 12 times less likely to give the lowest rating and about 4 times less likely to leave a critical opinion. As such, customer purchase ratings are full of noise. This problem is directly related to the performance of recommendation systems, which aim to maximize profits by attracting highly satisfied customers in most e-commerce transactions. In this study, we propose new indicators, derived with the RFM (Recency, Frequency, Monetary) multidimensional analysis technique, that can objectively substitute for existing customer purchase ratings and thereby address this series of problems. RFM multidimensional analysis is one of the most widely used analytical methods in customer relationship management (CRM) marketing and is a data analysis method for selecting customers who are likely to purchase goods.
When the actual purchase history data were verified using the proposed indicator, the accuracy was about 55%. Given that a total of 4,386 different types of products that had never been bought before were recommended, this verification result represents relatively high accuracy and utilization value. This study also suggests the possibility of a general recommendation system that can be applied to various offline product data. If additional data are acquired in the future, the accuracy of the proposed recommendation system can be improved further.
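A minimal sketch of the RFM scoring idea follows: recency, frequency, and monetary values are computed per customer from a transaction log and binned into scores, and their combination serves as an implicit preference indicator in place of explicit ratings. The tiny table and column names are hypothetical, and the combination rule is a simple average rather than the paper's specific indicator.

```python
# Minimal sketch of RFM scoring as an implicit-feedback substitute for ratings.
import pandas as pd

tx = pd.DataFrame({
    "customer": ["A", "A", "B", "B", "B", "C"],
    "date": pd.to_datetime(["2019-01-03", "2019-02-20", "2018-11-02",
                            "2019-02-25", "2019-03-01", "2018-06-10"]),
    "amount": [120, 45, 300, 80, 60, 20],
})
now = tx["date"].max() + pd.Timedelta(days=1)

rfm = tx.groupby("customer").agg(
    recency=("date", lambda d: (now - d.max()).days),   # days since last purchase
    frequency=("date", "count"),                         # number of purchases
    monetary=("amount", "sum"),                          # total spending
)

# Bin each dimension into tertile scores 1-3 (higher = better); recency is inverted.
rfm["R"] = pd.qcut(-rfm["recency"], q=3, labels=False, duplicates="drop") + 1
rfm["F"] = pd.qcut(rfm["frequency"].rank(method="first"), q=3, labels=False, duplicates="drop") + 1
rfm["M"] = pd.qcut(rfm["monetary"], q=3, labels=False, duplicates="drop") + 1
rfm["implicit_score"] = rfm[["R", "F", "M"]].mean(axis=1)  # stand-in for an explicit rating
print(rfm)
```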

A quantitative study on the minimal pair of Korean phonemes: Focused on syllable-initial consonants (한국어 음소 최소대립쌍의 계량언어학적 연구: 초성 자음을 중심으로)

  • Jung, Jieun
    • Phonetics and Speech Sciences / v.11 no.1 / pp.29-40 / 2019
  • This paper quantitatively investigates minimal pairs of Korean phonemes. To achieve this goal, I calculated the number of consonant minimal pairs in the syllable-initial position as both raw counts and relative counts, and analyzed the part-of-speech relations of the two words in each minimal pair. "Urimalsaem" was chosen as the object of this study because it was judged that minimal pair analysis should be based on a dictionary, and it is the largest Korean dictionary. The results of the study are summarized as follows. First, there were 153 types of minimal pairs among 337,135 examples. The ranking of phoneme pairs from highest to lowest was 'ㅅ-ㅈ, ㄱ-ㅅ, ㄱ-ㅈ, ㄱ-ㅂ, ㄱ-ㅎ, …, ㅆ-ㅋ, ㄸ-ㅋ, ㅉ-ㅋ, ㄹ-ㅃ, ㅃ-ㅋ'. The phonemes that played a major role in the formation of minimal pairs were /ㄱ, ㅅ, ㅈ, ㅂ, ㅊ/, in that order, which shows a high proportion of palatals. The correlation between the raw count and the relative count of minimal pairs was found to be quite high (r = 0.937). Second, 87.91% of the minimal pairs shared the same part of speech (syntactic category). The most frequently observed type was the 'noun-noun' pair (70.25%), followed by the 'verb-verb' pair (14.77%). This indicates that minimal pairs tend to be formed between words of similar categories. The results of this study can serve as basic data on Korean phonemes for research in Korean linguistics, speech-language pathology, language education, language acquisition, speech synthesis, and artificial intelligence and machine learning.
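The counting procedure above can be made concrete with a small sketch: decompose each Hangul syllable into onset and rime, wildcard one onset at a time, and count word pairs that share the remaining skeleton. This is an illustrative reconstruction, not the paper's actual procedure, and the tiny word list stands in for the full "Urimalsaem" dictionary.

```python
# Minimal sketch: counting syllable-initial consonant minimal pairs in a Korean word list.
from collections import Counter, defaultdict
from itertools import combinations

ONSETS = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"

def decompose(word):
    """Split each Hangul syllable into (onset index, rest-of-syllable index)."""
    out = []
    for ch in word:
        idx = ord(ch) - 0xAC00
        if not 0 <= idx < 11172:
            return None                      # skip non-Hangul words
        out.append((idx // 588, idx % 588))  # (onset, nucleus+coda)
    return out

def minimal_pairs(words):
    """Count (onset1, onset2) pairs for every word pair differing in exactly one onset."""
    buckets = defaultdict(list)
    for w in words:
        syls = decompose(w)
        if syls is None:
            continue
        for i, (onset, rest) in enumerate(syls):
            # Skeleton with the onset at position i wildcarded.
            key = (i, tuple(s if j != i else ("*", rest) for j, s in enumerate(syls)))
            buckets[key].append(onset)
    pairs = Counter()
    for onsets in buckets.values():
        for a, b in combinations(sorted(set(onsets)), 2):
            pairs[(ONSETS[a], ONSETS[b])] += 1
    return pairs

print(minimal_pairs(["사과", "자과", "가방", "나방", "다방"]).most_common())
```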

A Case Study: Improvement of Wind Risk Prediction by Reclassifying the Detection Results (풍해 예측 결과 재분류를 통한 위험 감지확률의 개선 연구)

  • Kim, Soo-ock; Hwang, Kyu-Hong
    • Korean Journal of Agricultural and Forest Meteorology / v.23 no.3 / pp.149-155 / 2021
  • Early warning systems for weather risk management in the agricultural sector have been developed to predict potential wind damage to crops. These systems take the daily maximum wind speed into account to determine the critical wind speed that causes fruit drop and provide weather risk information to farmers. In an effort to increase the accuracy of wind risk predictions, an artificial neural network for binary classification was implemented. In the present study, the daily wind speed and other weather data measured in 2019 at weather stations at sites of interest in Jeollabuk-do and Jeollanam-do, as well as Gyeongsangbuk-do and part of Gyeongsangnam-do provinces, were used to train the neural network. These weather stations include 210 synoptic and automated weather stations operated by the Korea Meteorological Administration (KMA). The wind speed data collected at the same locations between January 1 and December 12, 2020 were used to validate the neural network model, and the data collected from December 13, 2020 to February 18, 2021 were used to evaluate the wind risk prediction performance before and after the use of the artificial neural network. The critical wind speed for damage risk was set to 11 m/s, the wind speed reported to cause fruit drop and damage. Furthermore, the maximum wind speeds were expressed using the Weibull probability density function for wind damage warning. It was found that the accuracy of wind damage risk prediction improved from 65.36% to 93.62% after reclassification using the artificial neural network, although the error rate also increased from 13.46% to 37.64%. The machine learning approach used in the present study is likely to benefit cases in which a missed warning by a risk warning system is a relatively serious issue.
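The Weibull step above amounts to converting a fitted distribution of daily maximum wind speed into a probability of exceeding the 11 m/s damage threshold. The sketch below shows this conversion with placeholder shape and scale parameters; these are not the values fitted in the study, and the sketch is not the operational warning system.

```python
# Minimal sketch: exceedance probability of the critical wind speed under a
# Weibull model of daily maximum wind speed. Parameters are hypothetical.
import numpy as np
from scipy.stats import weibull_min

shape, scale = 2.0, 6.0           # placeholder Weibull parameters for daily max wind
critical = 11.0                   # m/s, wind speed reported to cause fruit drop

# Probability that the daily maximum wind speed exceeds the critical value.
p_exceed = weibull_min.sf(critical, c=shape, scale=scale)
print(f"P(max wind > {critical} m/s) = {p_exceed:.3f}")

# The same value from the Weibull survival function S(x) = exp(-(x/scale)^shape).
print(np.exp(-(critical / scale) ** shape))
```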