• Title/Summary/Keyword: Prediction #4


Kriging of Daily PM10 Concentration from the Air Korea Stations Nationwide and the Accuracy Assessment (베리오그램 최적화 기반의 정규크리깅을 이용한 전국 에어코리아 PM10 자료의 일평균 격자지도화 및 내삽정확도 검증)

  • Jeong, Yemin;Cho, Subin;Youn, Youjeong;Kim, Seoyeon;Kim, Geunah;Kang, Jonggu;Lee, Dalgeun;Chung, Euk;Lee, Yangwon
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.379-394 / 2021
  • Air pollution data in South Korea has been provided on a real-time basis by Air Korea stations since 2005. Previous studies have shown the feasibility of gridding air pollution data, but they were confined to a few cities. This paper examines the creation of nationwide gridded maps of PM10 concentration from 333 Air Korea stations using variogram optimization and ordinary kriging. The accuracy of the spatial interpolation was evaluated under various sampling schemes to avoid a too dense or too sparse distribution of validation points. Using the 114,745 matchups, a four-round blind test was conducted by extracting random validation points for each of the 365 days in 2019. The overall accuracy was stably high, with an MAE of 5.697 ㎍/m3 and a CC of 0.947. Approximately 1,500 cases of high PM10 concentration also showed a result with an MAE of about 12 ㎍/m3 and a CC over 0.87, which indicates that the proposed method is effective and applicable to various situations. The gridded maps of daily PM10 concentration at a resolution of 0.05° also showed a reasonable spatial distribution and can be used as an input variable for gridded prediction of the next day's PM10 concentration.
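
As an editor's illustration of the interpolation step this abstract describes, here is a minimal sketch of variogram-fitted ordinary kriging onto a 0.05° grid. It assumes the pykrige package and uses synthetic station data in place of the Air Korea measurements; it is not the authors' code.

```python
# Minimal sketch of variogram-based ordinary kriging, assuming pykrige;
# station coordinates and PM10 values below are synthetic placeholders.
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
lon = rng.uniform(126.0, 129.5, 333)   # hypothetical station longitudes
lat = rng.uniform(34.0, 38.5, 333)     # hypothetical station latitudes
pm10 = rng.gamma(4.0, 10.0, 333)       # synthetic daily PM10 (ug/m3)

# Fit a spherical variogram to the daily observations and krige to a 0.05-degree grid.
ok = OrdinaryKriging(lon, lat, pm10, variogram_model="spherical",
                     coordinates_type="geographic")
grid_lon = np.arange(126.0, 129.5, 0.05)
grid_lat = np.arange(34.0, 38.5, 0.05)
z, ss = ok.execute("grid", grid_lon, grid_lat)  # z: kriged map, ss: kriging variance
print(z.shape)
```

A blind test like the paper's would hold out a subset of stations, krige from the rest, and compare predictions against the held-out values to compute MAE and CC.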

A Machine Learning-based Total Production Time Prediction Method for Customized-Manufacturing Companies (주문생산 기업을 위한 기계학습 기반 총생산시간 예측 기법)

  • Park, Do-Myung;Choi, HyungRim;Park, Byung-Kwon
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.177-190 / 2021
  • With the development of fourth industrial revolution technologies, efforts are being made to use artificial intelligence techniques such as machine learning to improve on areas that humans cannot easily handle. Make-to-order companies want to reduce corporate risks such as delivery delays by predicting the total production time of orders, but they have difficulty doing so because total production time differs for every order. The Theory of Constraints (TOC) was developed to find the least efficient areas in order to increase order throughput and reduce total order cost, but it does not provide a forecast of total production time. Because production varies from order to order with customer needs, the total production time of an individual order can be measured after the fact but is difficult to predict in advance. The measured total production times of past orders also vary widely, so they cannot serve as a standard time. As a result, experienced managers rely on intuition rather than the system, while inexperienced managers use simple rules of thumb (e.g., 60 days of total production time for raw materials, 90 days for steel plates). Work instructions issued too early on the basis of such intuition or indicators cause congestion and degrade productivity, while instructions issued too late increase production costs or cause missed delivery dates due to emergency processing. Missing a deadline results in penalty payments for late delivery and adversely affects sales and collections. To address these problems, this study searches for a machine learning model that estimates the total production time of new orders for a company operating a make-to-order production system, using order, production, and process performance data as training material. We compare and analyze the OLS, GLM Gamma, Extra Trees, and Random Forest algorithms for estimating total production time and present the results.
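
To make the model comparison concrete, here is a hedged sketch of benchmarking the four named algorithms on synthetic order data, assuming scikit-learn and statsmodels; the features, target, and split are invented placeholders, not the company's data.

```python
# Sketch of comparing OLS, GLM Gamma, Extra Trees, and Random Forest for a
# positive duration target; X and y are synthetic stand-ins for order data.
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))                               # order/process features
y = np.exp(1.0 + 0.3 * X[:, 0] + rng.normal(0, 0.2, 500))   # positive durations

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "OLS": LinearRegression(),
    "Extra Trees": ExtraTreesRegressor(random_state=0),
    "Random Forest": RandomForestRegressor(random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, mean_absolute_error(y_te, pred))

# A Gamma GLM with a log link suits positive, right-skewed durations.
glm = sm.GLM(y_tr, sm.add_constant(X_tr),
             family=sm.families.Gamma(link=sm.families.links.Log()))
pred = glm.fit().predict(sm.add_constant(X_te))
print("GLM Gamma", mean_absolute_error(y_te, pred))
```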

Damage of Whole Crop Maize in Abnormal Climate Using Machine Learning (이상기상 시 사일리지용 옥수수의 기계학습을 이용한 피해량 산출)

  • Kim, Ji Yung;Choi, Jae Seong;Jo, Hyun Wook;Kim, Moon Ju;Kim, Byong Wan;Sung, Kyung Il
    • Journal of The Korean Society of Grassland and Forage Science / v.42 no.2 / pp.127-136 / 2022
  • This study was conducted to estimate the damage to Whole Crop Maize (WCM) under abnormal climate using machine learning and to present the damage through mapping. The collected WCM dataset comprised 3,232 records. Climate data were collected from the Korea Meteorological Administration's open meteorological data portal. Deep Crossing was used as the machine learning model. The damage was calculated by machine learning using climate data from the Automated Synoptic Observing System (ASOS, 95 sites) and defined as the difference between the normal dry matter yield (DMYnormal) and the abnormal dry matter yield (DMYabnormal). Normal climate was defined from the 40 years of climate data (1978-2017) corresponding to the years of the WCM data. The level of abnormal climate was set as a multiple of the standard deviation, applying the World Meteorological Organization (WMO) standard. DMYnormal ranged from 13,845 to 19,347 kg/ha. The damage to WCM differed by region and by level of abnormal climate, ranging from -305 to 310, -54 to 89, and -610 to 813 kg/ha for abnormal temperature, precipitation, and wind speed, respectively. The maximum damage was 310 kg/ha when the abnormal temperature was at the +2 level (+1.42 ℃), 89 kg/ha when the abnormal precipitation was at the -2 level (-0.12 mm), and 813 kg/ha when the abnormal wind speed was at the -2 level (-1.60 m/s). The damage calculated by the WMO method was presented as a map using QGIS. When calculating the damage to WCM due to abnormal climate, some areas were left blank because no data were available. To calculate the damage in these blank areas, the Automatic Weather System (AWS), which provides data from more sites than the ASOS, could be used.
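
A small sketch of the damage arithmetic described above, under stated assumptions: abnormal-climate levels are expressed as multiples of the 40-year standard deviation (the WMO convention the paper cites), and the trained Deep Crossing model is replaced by a hypothetical placeholder function predict_dmy.

```python
# Damage = DMY(normal climate) - DMY(abnormal climate), where the abnormal
# climate is the normal mean shifted by +/- k standard deviations.
import numpy as np

temps_40yr = np.random.default_rng(2).normal(12.0, 0.71, 40)  # synthetic 40-year normals
mean, sd = temps_40yr.mean(), temps_40yr.std(ddof=1)

def predict_dmy(temp):
    """Placeholder for the trained Deep Crossing yield model (kg/ha)."""
    return 16_000 - 150 * (temp - 12.0) ** 2

for level in (-2, -1, 1, 2):                  # abnormality level in SD units
    abnormal_temp = mean + level * sd
    damage = predict_dmy(mean) - predict_dmy(abnormal_temp)
    print(f"level {level:+d}: shift {abnormal_temp - mean:+.2f} C, "
          f"damage {damage:,.0f} kg/ha")
```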

A Study on Risk Assessment Method for Earthquake-Induced Landslides (지진에 의한 산사태 위험도 평가방안에 관한 연구)

  • Seo, Junpyo;Eu, Song;Lee, Kihwan;Lee, Changwoo;Woo, Choongshik
    • Journal of the Society of Disaster Information / v.17 no.4 / pp.694-709 / 2021
  • Purpose: In this study, an earthquake-induced landslide risk assessment was conducted to provide basic data for efficient and preemptive damage prevention, by selecting sites for erosion control works before an earthquake and setting prediction and restoration priorities for damaged areas after an earthquake. Method: Previous studies from abroad were analyzed to examine evaluation methodologies, derive evaluation factors, and assess the applicability of the landslide hazard map currently used in Korea. In addition, a pilot earthquake-induced landslide hazard map was built for the fault zone and epicenter of Pohang using a seismic attenuation relationship. Result: A review of earthquake-induced landslide risk assessment studies showed that China accounted for 44% of them, Italy 16%, the U.S. 15%, Japan 10%, and Taiwan 8%. As for evaluation methods, statistical models were the most common at 59%, followed by physical models at 23%. The factors frequently used in the statistical models were altitude, distance from the fault, gradient, slope aspect, country rock, and topographic curvature. Since Korea's landslide hazard map reflects topography, geology, and forest floor conditions, it is reasonable to use it to evaluate the risk of earthquake-induced landslides. When the risk of landslides was evaluated based on the fault zone and epicenter in the Pohang area, the risk grades changed to reflect the impact of the earthquake. Conclusion: The landslide hazard map is effective for evaluating earthquake-induced landslide risk at the regional scale. The risk map based on the fault zone is useful for selecting target sites for preventive erosion control works against earthquake-induced landslides. In addition, the risk map based on the epicenter can support efficient follow-up management, such as investigating the status of landslide damage after an earthquake and prioritizing restoration of damaged areas.
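
As an illustration of the statistical-model approach the review found most common (59% of studies), here is a hedged logistic regression sketch over the frequently used factors; the data are simulated and this is not the paper's model or dataset.

```python
# Logistic-regression landslide susceptibility on the commonly used factors:
# altitude, distance from fault, gradient, aspect, lithology, curvature.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 1000
X = np.column_stack([
    rng.uniform(0, 1500, n),    # altitude (m)
    rng.uniform(0, 20000, n),   # distance from fault (m)
    rng.uniform(0, 45, n),      # slope gradient (deg)
    rng.uniform(0, 360, n),     # slope aspect (deg)
    rng.integers(0, 5, n),      # lithology (country rock) class, coded
    rng.normal(0, 1, n),        # topographic curvature
])
# Synthetic label: steeper slopes nearer the fault slide more often.
p = 1 / (1 + np.exp(-(0.08 * X[:, 2] - 0.0002 * X[:, 1] - 1.0)))
y = rng.random(n) < p

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
susceptibility = model.predict_proba(X)[:, 1]  # per-cell landslide probability
```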

A Study of Life Safety Index Model based on AHP and Utilization of Service (AHP 기반의 생활안전지수 모델 및 서비스 활용방안 연구)

  • Oh, Hye-Su;Lee, Dong-Hoon;Jeong, Jong-Woon;Jang, Jae-Min;Yang, Sang-Woon
    • Journal of the Society of Disaster Information / v.17 no.4 / pp.864-881 / 2021
  • Purpose: This study aims to provide a total care solution that prevents disasters based on Big Data and AI technology and to deliver safety services tailored to individual situations and diverse risk characteristics. It proposes a method of customizing comprehensive-index services for preventing and responding to safety accidents by calculating a life safety index that quantitatively represents an individual's safety level in daily life. Method: We combine the AHP (Analytic Hierarchy Process) with a Likert scale derived from a consensus-formation model for the expert group. Evaluation items for life safety prevention services are organized into risk indicators, vulnerability indicators, and prevention indicators. An AHP hierarchy was constructed according to the AHP decision methodology, and we propose a method to calculate relative weights between evaluation criteria through pairwise comparison of the items at each level. In addition, in consideration of the future expansion of life safety prevention services, the Likert scale is used instead of AHP pairwise comparison to calculate the weights between individual services. Result: The weights obtained for the life safety prevention services were applied to the individual risk indices calculated by the services' artificial intelligence prediction models, and a comprehensive index was computed. Conclusion: To apply the implemented model, a test environment consisting of a life safety prevention service app and platform was built, and the efficacy of its functions was evaluated based on user scenarios. Through this, the life safety index presented in this study was confirmed to support the golden time for diagnosing, responding to, and preventing safety risks by comprehensively indicating the user's current safety level.
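
The AHP weighting step lends itself to a short worked example: weights are the principal eigenvector of a pairwise comparison matrix, with a consistency ratio (CR) check on the judgments. The 3x3 matrix below compares the risk, vulnerability, and prevention indicators with hypothetical judgments, not the paper's expert data.

```python
# AHP: normalized principal eigenvector = criterion weights; CR checks
# whether the pairwise judgments are acceptably consistent (CR < 0.1).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],    # risk vs. vulnerability vs. prevention
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                           # normalized weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
cr = ci / 0.58                         # Saaty's random index RI = 0.58 for n = 3
print("weights:", w.round(3), "CR:", round(cr, 3))
```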

Exploring the Role of Preference Heterogeneity and Causal Attribution in Online Ratings Dynamics

  • Chu, Wujin;Roh, Minjung
    • Asia Marketing Journal / v.15 no.4 / pp.61-101 / 2014
  • This study investigates when and how disagreements in online customer ratings prompt more favorable product evaluations. Among the three metrics of volume, valence, and variance that feature in the research on online customer ratings, volume and valence have exhibited consistently positive patterns in their effects on product sales or evaluations (e.g., Dellarocas, Zhang, and Awad 2007; Liu 2006). Ratings variance, or the degree of disagreement among reviewers, however, has shown rather mixed results, with some studies reporting positive effects on product sales (e.g., Clement, Proppe, and Rott 2007) while others find negative effects on product evaluations (e.g., Zhu and Zhang 2010). This study aims to resolve these contradictory findings by introducing preference heterogeneity as a possible moderator and causal attribution as a mediator to account for the moderating effect. The main proposition of this study is that when preference heterogeneity is perceived as high, a disagreement in ratings is attributed more to reviewers' different preferences than to unreliable product quality, which in turn prompts better quality evaluations of a product. Because disagreements mostly result from differences in reviewers' tastes or the low reliability of a product's quality (Mizerski 1982; Sen and Lerman 2007), a greater level of attribution to reviewer tastes can mitigate the negative effect of disagreement on product evaluations. Specifically, if consumers infer that reviewers' heterogeneous preferences result in subjectively different experiences and thereby highly diverse ratings, they will not discount the overall quality of a product. However, if consumers infer that reviewers' preferences are quite homogeneous and thus the low reliability of the product quality contributes to such disagreements, they will discount the overall product quality. Therefore, consumers should respond more favorably to disagreements in ratings when preference heterogeneity is perceived as high rather than low. This study furthermore extends this prediction to various levels of average ratings. The heuristic-systematic processing model indicates that engagement in effortful systematic processing occurs only when sufficient motivation is present (Hann et al. 2007; Maheswaran and Chaiken 1991; Martin and Davies 1998). One of the key factors affecting this motivation is the aspiration level of the decision maker, who tends to engage in systematic processing only under conditions that meet or exceed that aspiration level (Patzelt and Shepherd 2008; Stephanous and Sage 1987). Therefore, systematic causal attribution processing regarding ratings variance is more likely to be activated when the average rating is high enough to meet the aspiration level than when it is too low to meet it. Considering that the interaction between ratings variance and preference heterogeneity occurs through the mediation of causal attribution, this greater activation of causal attribution under high versus low average ratings should lead to a more pronounced interaction between ratings variance and preference heterogeneity when average ratings are high. Overall, this study proposes that the interaction between ratings variance and preference heterogeneity is more pronounced when the average rating is high as compared to when it is low. Two laboratory studies lend support to these predictions.
Study 1 reveals that participants exposed to a high-preference heterogeneity book title (i.e., a novel) attributed disagreement in ratings more to reviewers' tastes, and thereby more favorably evaluated books with such ratings, compared to those exposed to a low-preference heterogeneity title (i.e., an English listening practice book). Study 2 then extended these findings to the various levels of average ratings and found that this greater preference for disagreement options under high preference heterogeneity is more pronounced when the average rating is high compared to when it is low. This study makes an important theoretical contribution to the online customer ratings literature by showing that preference heterogeneity serves as a key moderator of the effect of ratings variance on product evaluations and that causal attribution acts as a mediator of this moderation effect. A more comprehensive picture of the interplay among ratings variance, preference heterogeneity, and average ratings is also provided by revealing that the interaction between ratings variance and preference heterogeneity varies as a function of the average rating. In addition, this work provides some significant managerial implications for marketers in terms of how they manage word of mouth. Because a lack of consensus creates some uncertainty and anxiety over the given information, consumers experience a psychological burden regarding their choice of a product when ratings show disagreement. The results of this study offer a way to address this problem. By explicitly clarifying that there are many more differences in tastes among reviewers than expected, marketers can allow consumers to speculate that differing tastes of reviewers rather than an uncertain or poor product quality contribute to such conflicts in ratings. Thus, when fierce disagreements are observed in the WOM arena, marketers are advised to communicate to consumers that diverse, rather than uniform, tastes govern reviews and evaluations of products.
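
For readers who want the hypothesis in testable form, here is a schematic moderated regression with a variance x heterogeneity x average-rating interaction on simulated data. This is an editor's sketch of the proposed three-way interaction, not the authors' experimental analysis, and all variables are invented.

```python
# OLS with a three-way interaction mirroring the paper's proposition: ratings
# variance hurts evaluations unless preference heterogeneity is high, and the
# buffering is stronger when the average rating is high.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 600
df = pd.DataFrame({
    "variance": rng.uniform(0, 2, n),        # ratings disagreement
    "heterogeneity": rng.integers(0, 2, n),  # 0 = low, 1 = high preference het.
    "avg_high": rng.integers(0, 2, n),       # 0 = low, 1 = high average rating
})
df["evaluation"] = (5 - 0.8 * df.variance
                    + 0.5 * df.variance * df.heterogeneity
                    + 0.4 * df.variance * df.heterogeneity * df.avg_high
                    + rng.normal(0, 0.5, n))

m = smf.ols("evaluation ~ variance * heterogeneity * avg_high", df).fit()
print(m.summary().tables[1])
```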

Utilization of Smart Farms in Open-field Agriculture Based on Digital Twin (디지털 트윈 기반 노지스마트팜 활용방안)

  • Kim, Sukgu
    • Proceedings of the Korean Society of Crop Science Conference / 2023.04a / pp.7-7 / 2023
  • Currently, the main technologies of the various fourth-industry fields are big data, the Internet of Things, artificial intelligence, blockchain, mixed reality (MR), and drones. In particular, the "digital twin," which has recently become a global technological trend, is a virtual model that mirrors a physical object on a computer. By creating and simulating a digital twin of software-virtualized assets instead of real physical assets, accurate information about the characteristics of real farming (current state, agricultural productivity, agricultural work scenarios, etc.) can be obtained. This study aims to streamline agricultural work through automatic water management, remote growth forecasting, drone control, and pest forecasting via an integrated control system, by constructing digital twin data for major open-field production areas and designing and building a smart farm complex. It also aims to disseminate digital environment-controlled agriculture in Korea that can reduce labor and improve crop productivity while minimizing environmental load through the application of appropriate amounts of fertilizers and pesticides based on big data analysis. These open-field agricultural technologies can reduce labor through digital farming and cultivation management, optimize water use and prevent soil pollution in preparation for climate change, and enable quantitative growth management of open-field crops by securing digital data on the national cultivation environment. Improving agricultural productivity in this way is also a direct means of implementing carbon-neutral REDD+ activities. The analysis and prediction of growth status from acquired high-precision, high-definition image-based crop growth data are very effective for digital farm-work management. The Southern Crop Department of the National Institute of Crop Science has conducted research and development on various types of open-field smart farm technology, such as subsurface drip irrigation and underground drainage. In particular, from this year, commercialization is under way in earnest through the establishment of smart farm facilities and technology dissemination for agricultural technology complexes across the country. This study describes a case of building an agricultural complex that combines digital twin technology with open-field smart farm technology, together with future utilization plans.

Safety and Efficacy of Ultrasound-Guided Percutaneous Core Needle Biopsy of Pancreatic and Peripancreatic Lesions Adjacent to Critical Vessels (주요 혈관 근처의 췌장 또는 췌장 주위 병변에 대한 초음파 유도하 경피적 중심 바늘 생검의 안전성과 효율성)

  • Sun Hwa Chung;Hyun Ji Kang;Hyo Jeong Lee;Jin Sil Kim;Jeong Kyong Lee
    • Journal of the Korean Society of Radiology / v.82 no.5 / pp.1207-1217 / 2021
  • Purpose: To evaluate the safety and efficacy of ultrasound-guided percutaneous core needle biopsy (USPCB) of pancreatic and peripancreatic lesions adjacent to critical vessels. Materials and Methods: Data were collected retrospectively from 162 patients who underwent USPCB of the pancreas (n = 98), the peripancreatic area adjacent to the portal vein and the paraaortic area adjacent to the pancreatic uncinate process (n = 34), and lesions in the third duodenal portion (n = 30) during a 10-year period. An automated biopsy gun with an 18-gauge needle was used for biopsies under US guidance. The USPCB results were compared with those of the final postoperative follow-up imaging. The diagnostic accuracy and major complication rate of USPCB were calculated, and multiple factors were evaluated as predictors of successful biopsy using univariate and multivariate analyses. Results: The histopathologic diagnosis from USPCB was correct in 149 (92%) patients. The major complication rate was 3%: four mesenteric hematomas and one intramural hematoma of the duodenum occurred during the study period. The following factors were significantly associated with successful biopsies: a transmesenteric biopsy route rather than a transgastric or transenteric route, good visualization of the target, and evaluation of the entire US pathway. In addition, fewer needle passes were required when the biopsy was successful. Conclusion: USPCB demonstrated high diagnostic accuracy and a low complication rate for the histopathologic diagnosis of pancreatic and peripancreatic lesions adjacent to critical vessels.
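
A schematic of the multivariate step reported above, cast as a logistic regression of biopsy success on the three significant factors; the patient-level data and coefficients are simulated, not the study's records.

```python
# Multivariate analysis sketch: odds ratios for predictors of biopsy success.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 162
df = pd.DataFrame({
    "transmesenteric": rng.integers(0, 2, n),  # biopsy route indicator
    "good_visual": rng.integers(0, 2, n),      # target well visualized
    "pathway_checked": rng.integers(0, 2, n),  # entire US pathway evaluated
})
logit = (0.5 + 1.0 * df.transmesenteric + 1.2 * df.good_visual
         + 0.8 * df.pathway_checked)
df["success"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

m = smf.logit("success ~ transmesenteric + good_visual + pathway_checked",
              df).fit()
print(np.exp(m.params))  # odds ratios for each predictor
```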

Abundance and Occupancy of Forest Mammals at Mijiang Area in the Lower Tumen River (두만강 하류 밀강 지역의 산림성 포유류 풍부도와 점유율)

  • Hai-Long Li;Chang-Yong Choi
    • Korean Journal of Environment and Ecology / v.37 no.6 / pp.429-438 / 2023
  • The forest in the lower Tumen River is an important ecosystem spanning the territories of North Korea, Russia, and China, and it provides habitat and movement corridors for diverse mammals, including the endangered Amur tiger (Panthera tigris) and Amur leopard (Panthera pardus). This study focuses on the Mijiang area, a potential ecological corridor connecting North Korea and China in the lower Tumen River that plays a crucial role in conserving and restoring the biodiversity of the Korean Peninsula. The study aimed to identify mammal species and estimate their relative abundance, occupancy, and distribution based on 48 camera traps installed in the Mijiang area from May 2019 to May 2021. The results confirmed the presence of 18 mammal species, including large carnivores such as the tiger and leopard. Among the dominant mammals, four ungulate species showed high occupancy and detection rates, particularly the roe deer (Capreolus pygargus) and wild boar (Sus scrofa). The roe deer was distributed across all areas with a predicted occupancy of 0.97, influenced by altitude, urban residential areas, and patch density. The wild boar showed a predicted occupancy of 0.73 and was distributed throughout the entire area, with wetland ratio, grazing intensity, and spatial heterogeneity of aspect influencing its occupancy and detection rates. The sika deer (Cervus nippon) exhibited a predicted occupancy of 0.48 and was confined to specific areas, with slope and habitat fragmentation diversity affecting detection rates and the ratio of open forest affecting occupancy. The water deer (Hydropotes inermis) displayed a very low occupancy of 0.06 along the Tumen River basin, with higher occupancy at lower altitudes and higher detection in locations with high spatial heterogeneity of aspect. This study confirmed that the Mijiang area supports diverse mammals in the lower Tumen River and plays a crucial role in facilitating animal movement and habitat connectivity. In addition, the occupancy prediction model developed here is expected to help predict mammal distributions within the Tumen River basin, which is disturbed by human activity, and to identify and protect potential ecological corridors in this transboundary region.
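
The occupancy and detection estimates above come from camera-trap occupancy models; a minimal sketch of fitting a single-season occupancy model of the MacKenzie type to simulated detection histories follows (covariates omitted, data invented), to show how occupancy (psi) and detection probability (p) are separated.

```python
# Single-season occupancy model fitted by maximum likelihood: a site with at
# least one detection contributes psi * p^d * (1-p)^(K-d); an all-zero history
# contributes psi * (1-p)^K + (1 - psi), since absence and non-detection are
# confounded.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n_sites, n_surveys = 48, 20                   # e.g., 48 camera stations
psi_true, p_true = 0.7, 0.25
occupied = rng.random(n_sites) < psi_true
Y = (rng.random((n_sites, n_surveys)) < p_true) & occupied[:, None]

def nll(theta):
    psi, p = 1 / (1 + np.exp(-theta))         # parameters on the logit scale
    d = Y.sum(axis=1)
    det = psi * p**d * (1 - p)**(n_surveys - d)
    nodet = psi * (1 - p)**n_surveys + (1 - psi)
    return -np.log(np.where(d > 0, det, nodet)).sum()

res = minimize(nll, x0=[0.0, 0.0])
psi_hat, p_hat = 1 / (1 + np.exp(-res.x))
print(f"psi = {psi_hat:.2f}, p = {p_hat:.2f}")
```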

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.89-105 / 2014
  • Since the emergence of the Internet, social media built on highly interactive Web 2.0 applications has given consumers and companies very user-friendly means to communicate with each other. Users routinely publish content expressing their opinions and interests on social media such as blogs, forums, chat rooms, and discussion boards, and this content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics, and many studies have reported results on mining business intelligence from it. In particular, opinion mining and sentiment analysis, techniques to extract, classify, understand, and assess the opinions implicit in text content, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we found some weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining on social media content, from the initial data-gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose the target social media; each target medium requires a different means of access, such as open APIs, search tools, DB-to-DB interfaces, or content purchasing. The second phase is pre-processing to generate useful material for meaningful analysis; if garbage data are not removed, the results of social media analysis will not provide meaningful and useful business insights, so natural language processing techniques should be applied to clean the data. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated content but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply status, favorites, and so on. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool: topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized for reputation analysis. There are also various applications, such as stock prediction, product recommendation, and sales forecasting. The last phase is visualization and presentation of the analysis results. The major focus of this phase is to explain the results and help users comprehend their meaning; to the extent possible, the deliverables should be simple, clear, and easy to understand rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company, NS Food, which holds a 66.5% market share and has kept the No. 1 position in the Korean "Ramen" business for several decades.
We collected a total of 11,869 pieces of content, including blogs, forum posts, and news articles. After collecting the social media content, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified the content into more detailed categories such as marketing features, environment, and reputation. In these phases, we used free software such as the tm, KoNLP, ggplot2, and plyr packages of the R project. As a result, we present several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-colored examples built with open-source R packages. Business actors can detect at a glance areas that are weak, strong, positive, negative, quiet, or loud. A heat map can show the movement of sentiment or volume in a category-by-time matrix, where color density indicates the level in each time period. A valence tree map, one of the most comprehensive and holistic visualization models, is especially helpful for analysts and decision makers seeking to quickly understand the "big picture" business situation, since its hierarchical structure can present buzz volume and sentiment in a single visualized result for a given period. This case study offers real-world business insights from market sensing, demonstrating to practically minded business users how they can use these kinds of results for timely decision making in response to ongoing changes in the market. We believe our approach provides a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
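
To show the four-phase pipeline in miniature, a toy end-to-end sketch follows. The paper's own toolchain was R (tm, KoNLP, ggplot2, plyr); this Python version, with an invented lexicon and posts, is only a structural illustration of collect, qualify, analyze, and visualize.

```python
# Toy opinion-mining pipeline: collect -> qualify -> analyze -> visualize.
import re
from collections import Counter

posts = [
    "the noodles are great and the soup is rich",
    "too salty, disappointing broth",
    "great value, great taste",
]  # phase 1: collected contents (placeholder for blogs/forums/news)

def qualify(text):
    """Phase 2: minimal cleaning/tokenization standing in for real NLP."""
    return re.findall(r"[a-z]+", text.lower())

lexicon = {"great": 1, "rich": 1, "disappointing": -1, "salty": -1}  # toy lexicon

volume = Counter()
sentiment = 0
for post in posts:                        # phase 3: buzz volume + net sentiment
    tokens = qualify(post)
    volume.update(tokens)
    sentiment += sum(lexicon.get(t, 0) for t in tokens)

print("top terms:", volume.most_common(3))  # phase 4 would chart these outputs
print("net sentiment:", sentiment)
```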