• Title/Summary/Keyword: Predictability


Predicting Forest Gross Primary Production Using Machine Learning Algorithms (머신러닝 기법의 산림 총일차생산성 예측 모델 비교)

  • Lee, Bora;Jang, Keunchang;Kim, Eunsook;Kang, Minseok;Chun, Jung-Hwa;Lim, Jong-Hwan
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.21 no.1
    • /
    • pp.29-41
    • /
    • 2019
  • Terrestrial Gross Primary Production (GPP) is the largest global carbon flux, and forest ecosystems are important because of their ability to store much larger amounts of carbon than other terrestrial ecosystems. There have been several attempts to estimate GPP using mechanism-based models. However, mechanism-based models, which incorporate biological, chemical, and physical processes, are limited by a lack of flexibility in predicting non-stationary ecological processes caused by local and global change. Instead, mechanism-free methods are strongly recommended for estimating the nonlinear dynamics that occur in nature, such as GPP. Therefore, we used mechanism-free machine learning techniques to estimate daily GPP. In this study, support vector machine (SVM), random forest (RF), and artificial neural network (ANN) models were used and compared with a traditional multiple linear regression model (LM). MODIS products and meteorological parameters from eddy covariance data were employed to train the machine learning and LM models from 2006 to 2013. The GPP prediction models were compared against daily GPP from eddy covariance measurements in a deciduous forest in South Korea in 2014 and 2015. Statistics including the correlation coefficient (R), root mean square error (RMSE), and mean squared error (MSE) were used to evaluate model performance. In general, the machine-learning models (R = 0.85 - 0.93, MSE = 1.00 - 2.05, p < 0.001) showed better performance than the linear regression model (R = 0.82 - 0.92, MSE = 1.24 - 2.45, p < 0.001). These results provide insight into the high predictability, and the potential for wider application, of mechanism-free machine-learning models combined with remote sensing for predicting non-stationary ecological processes such as seasonal GPP.
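The model comparison described in this abstract can be sketched on synthetic data. The snippet below is an illustrative reconstruction, not the authors' code: the features and the synthetic GPP signal are made-up stand-ins for the MODIS/eddy-covariance predictors, and scikit-learn is assumed to be available.

```python
# Illustrative sketch: mechanism-free regressors (RF, SVM, ANN) vs. a linear
# model for daily GPP, evaluated with R and MSE as in the study.
# Synthetic data only -- NOT the study's MODIS/eddy-covariance dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Stand-in predictors: vegetation index, air temperature (C), radiation (W/m2)
X = rng.uniform([0.2, -5.0, 50.0], [0.9, 30.0, 350.0], size=(800, 3))
gpp = 8 * X[:, 0] * np.tanh(X[:, 2] / 200) + 0.1 * X[:, 1] + rng.normal(0, 0.5, 800)

X_train, X_test = X[:600], X[600:]
y_train, y_test = gpp[:600], gpp[600:]

models = {
    "LM": LinearRegression(),
    "RF": RandomForestRegressor(n_estimators=100, random_state=0),
    "SVM": SVR(C=10.0),
    "ANN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    r = np.corrcoef(y_test, pred)[0, 1]     # correlation coefficient R
    mse = mean_squared_error(y_test, pred)  # mean squared error
    scores[name] = (round(float(r), 3), round(float(mse), 3))
print(scores)
```

On this nonlinear synthetic signal the tree-based model typically tracks the target more closely than the linear fit, mirroring the pattern the abstract reports.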

A Study on the Connectivity Modeling Considering the Habitat and Movement Characteristics of Wild Boars (Sus scrofa) (멧돼지(Sus scrofa) 서식지 및 이동 특성을 고려한 연결성 모델링 연구)

  • Lee, Hyun-Jung;Kim, Whee-Moon;Kim, Kyeong-Tae;Jeong, Seung-Gyu;Kim, Yu-Jin;Lee, Kyung Jin;Kim, Ho Gul;Park, Chan;Song, Won-Kyong
    • Journal of the Korean Society of Environmental Restoration Technology
    • /
    • v.25 no.4
    • /
    • pp.33-47
    • /
    • 2022
  • Wild boars (Sus scrofa) are expanding their range of behavior as their habitats change. By appearing in urban centers and near private houses, they have caused various social problems, including damage to crops. To prevent damage and manage wild boars effectively, ecological research considering their habitat and movement characteristics is needed. The purpose of this study is to analyze home ranges and identify land cover types in core areas by tracking wild boars, and to predict wild boar movement connectivity in light of previous studies and the animals' preferred land use characteristics. In this study, from January to June 2021, four wild boars were captured and tracked in Jinju city, Gyeongsangnam-do, and their preferred land cover types were identified based on the MCP 100%, KDE 95%, and KDE 50% results. The analysis of the home range of each individual showed MCP 100% areas of about 0.68 km², 2.77 km², 2.42 km², and 0.16 km²; three individuals had overlapping home ranges, refraining from habitat movement and staying in their preferred areas. The core areas were about 0.55 km², 2.05 km², 0.82 km², and 0.14 km² with KDE 95%, and about 0.011 km², 0.033 km², 0.004 km², and 0.003 km² with KDE 50%. When the preferred land cover types were examined over the combined home ranges and core areas of all individuals, forest had the highest share at 55.49% (MCP 100%), 54.00% (KDE 95%), and 77.69% (KDE 50%), followed by relatively high shares of urbanized area, grassland, and agricultural area. A connectivity scenario was constructed in which the share of each land cover type preferred by the tracked boars was applied as a weight to the resistance values of the connectivity analysis, and this was compared with connectivity results based on previous studies and general wild boar characteristics.
When the current density values along the wild boar movement data were compared, the existing scenario averaged 2.76 (minimum 1.12, maximum 4.36) and the weighted scenario averaged 2.84 (minimum 0.96, maximum 4.65). On average, movement predictability was about 2.90% higher in the weighted scenario, even though its larger resistance values restricted movement. Identifying movement routes through connectivity analysis of wild boars could help prevent damage by predicting points of appearance. In the future, when analyzing the connectivity of species including wild boar, using movement data from the actual species is expected to be effective.
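The MCP 100% home-range estimate used in this study is simply the area of the convex hull around all of an individual's location fixes. The sketch below illustrates the computation with fabricated coordinates, not the tracked boars' data.

```python
# Illustrative MCP 100% home range: the area of the convex hull around one
# individual's GPS fixes. Coordinates below are made-up UTM-like points.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
fixes = rng.normal(loc=[350_000.0, 3_900_000.0], scale=500.0, size=(200, 2))  # metres

hull = ConvexHull(fixes)
area_km2 = hull.volume / 1e6  # in 2-D, ConvexHull.volume is the enclosed area
print(f"MCP 100% home range: {area_km2:.2f} km^2")
```

KDE-based ranges (the 95% and 50% isopleths reported above) require a kernel density surface rather than a hull, but the MCP is the simplest of the three estimators.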

Reliable Radiologic Parameters to Predict Surgical Management for Clubfoot Treated with the Ponseti Method (Ponseti 방법으로 치료를 시작한 선천성 만곡족 환자에서 수술적 치료 여부를 예측할 수 있는 방사선적 지표)

  • Song, Kwang Soon;Yon, Chang Jin;Lee, Si Wook;Lee, Yong Ho;Um, Sang Hyun;Kwon, Hyuk Jun
    • Journal of the Korean Orthopaedic Association
    • /
    • v.54 no.1
    • /
    • pp.59-66
    • /
    • 2019
  • Purpose: Several radiologic reference lines have been used to evaluate individuals with a clubfoot, but there is no consensus on which is most reliable. The aim of this study was to identify which radiologic parameters can predict the need for additional surgery after Ponseti casting of a clubfoot, and to evaluate the effect of clubfoot treatment comprising Ponseti casting and additional surgery. Materials and Methods: A total of 102 clubfeet (65 patients, 37 bilateral) were reviewed from 2005 to 2013. The patients were divided into two groups (Group A, those for whom the Ponseti method was successful and no additional surgery was required; and Group B, those for whom the Ponseti method was unsuccessful and additional surgery was required), and the following parameters were measured on plain radiographs: i) the talo-calcaneal angle on the anteroposterior and lateral views, ii) the talo-1st metatarsal angle on the anteroposterior view, and iii) the tibio-calcaneal angle on the lateral view with the ankle in full dorsiflexion. Each radiograph was reviewed on two separate occasions by one orthopedic doctor to characterize the intra-observer reliability, and the averages were analyzed. Next, 20 cases were chosen using a random number table, and two orthopedic doctors measured the angles separately to characterize the inter-observer reliability. Results: Groups A and B included 73 clubfeet (71.6%) and 29 clubfeet (28.4%), respectively. The initial talo-calcaneal angle and tibio-calcaneal angle on the lateral view were significantly different between the groups. In addition, no inter- or intra-observer bias was detected. The talo-1st metatarsal angle on the anteroposterior view and the tibio-calcaneal angle on the lateral view were significantly different after treatment in both groups. Conclusion: Congenital clubfeet treated with the Ponseti method showed successful results in more than 70% of patients.
The initial talo-calcaneal angle and tibio-calcaneal angle on the lateral view were the radiologic parameters that could predict the need for additional surgical treatment. The talo-1st metatarsal angle on the anteroposterior view and the tibio-calcaneal angle on the lateral view could effectively evaluate changes in the clubfoot after treatment.

Satellite-Based Cabbage and Radish Yield Prediction Using Deep Learning in Kangwon-do (딥러닝을 활용한 위성영상 기반의 강원도 지역의 배추와 무 수확량 예측)

  • Hyebin Park;Yejin Lee;Seonyoung Park
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_3
    • /
    • pp.1031-1042
    • /
    • 2023
  • In this study, a deep learning model was developed to predict the yield of cabbage and radish, two of the five major supply-and-demand-managed vegetables, using Landsat 8 satellite images. To predict cabbage and radish yields in Gangwon-do from 2015 to 2020, satellite images from June to September, the growing period of cabbage and radish, were used. The normalized difference vegetation index, enhanced vegetation index, leaf area index, and land surface temperature were employed as input data for the yield model. Crop yields can be effectively predicted using satellite images because satellites collect continuous spatiotemporal data on the global environment. Based on a model developed in a previous study, a model adapted to these input data was proposed. Using the time series satellite images, a convolutional neural network, a deep learning model, was used to predict crop yield. Landsat 8 provides images every 16 days, but it is difficult to acquire images, especially in summer, due to weather conditions such as clouds. Consequently, yield prediction was conducted over two split periods, June-July and August-September. Yield prediction was also performed using a machine learning approach and reference models, and modeling performance was compared. The model's performance and early predictability were assessed using year-by-year cross-validation and early prediction. The findings of this study could serve as a basis for predicting the yield of field crops in Korea.
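Two of the model inputs named in this abstract, NDVI and EVI, can be computed directly from Landsat 8 band reflectances. The sketch below uses tiny made-up reflectance samples and the standard index coefficients; it is an illustration, not the study's preprocessing code.

```python
# Minimal sketch of two model inputs from the abstract, computed from
# Landsat 8 surface reflectance (B4 = red, B5 = NIR, B2 = blue).
# The reflectance values below are fabricated samples.
import numpy as np

red  = np.array([0.08, 0.10, 0.12])
nir  = np.array([0.40, 0.35, 0.30])
blue = np.array([0.05, 0.06, 0.07])

ndvi = (nir - red) / (nir + red)
# EVI with the standard coefficients (G=2.5, C1=6, C2=7.5, L=1)
evi = 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)
print(ndvi.round(3), evi.round(3))
```

In practice these indices are computed per pixel over the whole scene and stacked with leaf area index and land surface temperature as the CNN input channels.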

The Usefulness of B-type Natriuretic Peptide test in Critically Ill, Noncardiac Patients (심질환 병력이 없었던 중환자에서 B-type Natriuretic Peptide 검사의 유용성)

  • Kim, Kang Ho;Park, Hong-Hoon;Kim, Esther;Cheon, Seok-Cheol;Lee, Ji Hyun;Lee, Stephen YongGu;Lee, Ji-Hyun;Kim, In Jai;Cha, Dong-Hoon;Kim, Sehyun;Choi, Jeongeun;Hong, Sang-Bum
    • Tuberculosis and Respiratory Diseases
    • /
    • v.54 no.3
    • /
    • pp.311-319
    • /
    • 2003
  • Background: Previous studies have suggested that the B-type natriuretic peptide (BNP) test can provide important diagnostic information, as well as predict the severity and prognosis of heart failure. Myocardial dysfunction is often observed in critically ill noncardiac patients admitted to the intensive care unit, and its prognosis needs to be determined. This study evaluated the ability of BNP to predict the prognosis of critically ill noncardiac patients. Methods: 32 ICU patients, who were hospitalized from June to October 2002 and in whom the BNP test was performed, were enrolled in this study. The exclusion criteria included conditions that could increase BNP levels irrespective of severity, such as congestive heart failure, atrial fibrillation, ischemic heart disease, and renal insufficiency. A Triage B-type natriuretic peptide test kit was used for the fluorescence immunoassay of BNP. In addition, the Acute Physiology and Chronic Health Evaluation (APACHE) II score and mortality were recorded. Results: 16 males and 16 females were enrolled, with a mean age of 59 years. The mean BNP levels of the ICU patients and controls were significantly different (186.7 ± 274.1 pg/mL vs. 19.9 ± 21.3 pg/mL, p=0.033). Among the ICU patients, 14 (44%) had BNP levels above 100 pg/mL. The APACHE II score was 16.5 ± 7.6, and 11 deaths were recorded. The correlations between BNP and the APACHE II score, and between BNP and mortality, were significant (r=0.443, p=0.011 and r=0.530, p=0.002, respectively). The mean BNP levels of the deceased and surviving groups were significantly different (384.1 ± 401.7 pg/mL vs. 83.2 ± 55.8 pg/mL, p=0.033). However, PaO₂/FiO₂ did not correlate significantly with the BNP level. Conclusion: This study showed that the BNP level was elevated in critically ill, noncardiac patients.
The BNP level could be a useful, noninvasive tool for predicting the prognosis of critically ill, noncardiac patients.
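The correlation statistics reported in this abstract (Pearson's r with a p-value) are straightforward to reproduce; the sketch below uses fabricated BNP/APACHE II values, not the study's patient data.

```python
# Illustrative Pearson correlation test, as in the abstract's statistics.
# Values below are fabricated -- NOT the study's patient data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
apache2 = rng.integers(5, 35, size=32).astype(float)  # severity scores
bnp = 20 * apache2 + rng.normal(0, 150, size=32)      # pg/mL, correlated by design

r, p = pearsonr(bnp, apache2)
print(f"r = {r:.3f}, p = {p:.4f}")
```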

An Analysis of the Moderating Effects of User Ability on the Acceptance of an Internet Shopping Mall (인터넷 쇼핑몰 수용에 있어 사용자 능력의 조절효과 분석)

  • Suh, Kun-Soo
    • Asia pacific journal of information systems
    • /
    • v.18 no.4
    • /
    • pp.27-55
    • /
    • 2008
  • Due to the increasing and intensifying competition in the Internet shopping market, developing an effective policy and strategy for acquiring loyal customers has been recognized as very important. For this reason, web site designers need to know whether a new Internet shopping mall (ISM) will be accepted. Researchers have been working on identifying factors that explain and predict user acceptance of an ISM. Some studies, however, have revealed inconsistent findings on the antecedents of user acceptance of a website. Lack of consideration for individual differences in user ability is believed to be one of the key reasons for the mixed findings. The elaboration likelihood model (ELM) and several studies have suggested that individual differences in ability play a moderating role in the relationship between the antecedents and user acceptance. Despite the critical role of user ability, little research has examined it in the Internet shopping mall context. The purpose of this study is to develop a user acceptance model that considers the moderating role of user ability in the context of Internet shopping. This study was initiated to see how well the technology acceptance model (TAM) explains the acceptance of a specific ISM. According to TAM, which is one of the most influential models for explaining user acceptance of IT, the intention to use IT is determined by usefulness and ease of use. Given that interaction between the user and the website takes place through the web interface, the decisions to accept and continue using an ISM depend on these beliefs. However, TAM neglects the fact that many users will not stick with an ISM until they trust it, even if they find it useful and easy to use. The importance of trust for user acceptance of an ISM has been raised by the relational view.
The relational view emphasizes the trust-building process between the user and the ISM, and the user's trust in the website is a major determinant of user acceptance. The proposed model extends and integrates the TAM and relational views of user acceptance of an ISM by incorporating usefulness, ease of use, and trust. User acceptance is defined as a user's intention to reuse a specific ISM, and user ability is introduced into the model as a moderating variable. Here, user ability is defined as the degree of experience, knowledge, and skill regarding Internet shopping sites. The research model proposes that the ease of use, usefulness, and trust of an ISM are key determinants of user acceptance. In addition, this paper hypothesizes that the effects of the antecedents (i.e., ease of use, usefulness, and trust) on user acceptance may differ among users. In particular, it proposes a moderating effect of a user's ability on the relationship between the antecedents and the user's intention to reuse. The research model, with eleven hypotheses, was derived and tested through a survey of 470 university students. For each research variable, this paper used measurement items recognized for reliability and widely used in previous research, slightly modified where appropriate for the research context. The reliability and validity of the research variables were tested using Cronbach's alpha and internal consistency reliability (ICR) values, the standard factor loadings of the confirmatory factor analysis, and average variance extracted (AVE) values. The LISREL method was used to test the fit of the research model and its hypotheses. Key findings are summarized as follows. First, TAM's two constructs, ease of use and usefulness, directly affect user acceptance. In addition, ease of use indirectly influences user acceptance by affecting trust.
This implies that users tend to trust a shopping site and visit it repeatedly when they perceive a specific ISM to be easy to use. Accordingly, designing a shopping site that allows users to navigate heuristically, with minimal clicks to find information and products within the site, is important for improving the site's trust and acceptance. Usefulness, however, was not found to influence trust. Second, among the three belief constructs (ease of use, usefulness, and trust), trust was empirically supported as the most important determinant of user acceptance. This implies that users require trustworthiness from an Internet shopping site before they become repeat visitors of an ISM. Providing a sense of safety and eliminating the anxiety of online shoppers in relation to privacy, security, delivery, and product returns are critically important conditions for acquiring repeat visitors. Hence, in addition to usefulness and ease of use as in TAM, trust should be a fundamental determinant of user acceptance in the context of Internet shopping. Third, the user's ability with an Internet shopping site played a moderating role. For users with low ability, ease of use was the more important factor in deciding to reuse the shopping mall, whereas usefulness and trust had stronger effects on users with high ability. Applying ELM theory to these findings suggests that experienced and knowledgeable ISM users tend to elaborate on such usefulness aspects as efficient and effective shopping performance, and on such trust factors as the ability, benevolence, integrity, and predictability of a shopping site, before they become repeat visitors. In contrast, novice users tend to rely on low-elaboration features, such as perceived ease of use. The existence of moderating effects suggests that different individuals evaluate an ISM from different perspectives.
The expert users are more interested in the outcome of the visit (usefulness) and trustworthiness (trust) than novice visitors. The latter evaluate the ISM in a more superficial manner, focusing on the novelty of the site and on other instrumental beliefs (ease of use). This is consistent with the insights of the heuristic-systematic model, according to which users act on the principle of minimum effort. Thus, a user first considers an ISM heuristically, focusing on those aspects that are easy to process and evaluate (ease of use). When the user has sufficient experience and skill, the user shifts to systematic processing, evaluating more complex aspects of the site (its usefulness and trustworthiness). This implies that an ISM has to provide a minimum level of ease of use before a user can evaluate its usefulness and trustworthiness: ease of use is a necessary but not sufficient condition for the acceptance and use of an ISM. Overall, the empirical results generally support the proposed model and identify the moderating effect of user ability. More detailed interpretations and implications of the findings are discussed, as are the limitations of this study, to provide directions for future research.
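As a small illustration of the reliability statistic named in this abstract, Cronbach's alpha can be computed directly from the respondents-by-items matrix. The function below is a standard textbook implementation; the survey responses are fabricated for the example, not the study's data.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/total variance).
# Responses below are fabricated 100-respondent, 4-item scale data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(3, 1, size=(100, 1))                # shared construct
responses = latent + rng.normal(0, 0.5, size=(100, 4))  # four correlated items
print(round(cronbach_alpha(responses), 3))
```

Items that all reflect the same construct (as simulated here) yield alpha close to 1, which is the pattern a reliability check on well-designed measurement items looks for.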

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on the shapes shown in a chart rather than on complex analyses such as corporate intrinsic-value analysis or technical auxiliary indices. However, pattern analysis is difficult and has been computerized far less than users need. In recent years, many studies in artificial intelligence (AI) have examined stock price patterns using various machine learning techniques, including neural networks. In particular, advances in IT have made it easier to analyze huge numbers of charts to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so such models are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but these can be vulnerable in practice, because whether the patterns found are suitable for trading is a separate question. When such studies find a meaningful pattern, they locate a point that matches it and then measure performance after n days, assuming a purchase at that point in time. Because this approach calculates virtual returns, it can diverge substantially from reality. Existing research tries to find patterns with stock-price predictive power; this study instead proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M&W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Although some of these patterns were reported to have price predictability, no performance in the actual market had been reported. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of improving pattern-recognition accuracy.
In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in the system, and only the pattern with the highest success rate in each group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The measurement reflects a real trading situation because both the buy and the sell are assumed to have been executed. We tested three ways to calculate the turning points. The first method, the minimum-change-rate zig-zag, removes price movements below a certain percentage and then calculates the vertices. In the second method, the high-low-line zig-zag, a high price that touches the n-day high line is taken as a peak, and a low price that touches the n-day low line is taken as a valley. In the third method, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak, and a central low price lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in our tests; we interpret this to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still incomplete. Because the number of cases was far too large to search exhaustively in this simulation, genetic algorithms (GA) were the most suitable solution for finding patterns with high success rates. We also ran the simulation using walk-forward analysis (WFA), which tests the training section and the application section separately, so the system could respond appropriately to market changes. In this study, we optimized at the level of the stock portfolio, because optimizing the variables for each individual stock risks over-optimization.
Therefore, we set the number of constituent stocks to 20 to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. The small-cap portfolio was the most successful, and the high-volatility portfolio was the second best. This shows that some price volatility is needed for patterns to take shape, but that more volatility is not always better.
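The swing wave rule described in this abstract can be sketched in a few lines: a bar is a peak if its high exceeds the n highs on both sides, and a valley if its low is below the n lows on both sides. This is a hedged illustration of the stated rule, with made-up prices, not the authors' implementation.

```python
# Sketch of the "swing wave" turning-point rule from the abstract:
# peak  = high[i] greater than the n highs on each side;
# valley = low[i] lower than the n lows on each side.
def swing_points(high, low, n=2):
    peaks, valleys = [], []
    for i in range(n, len(high) - n):
        window = list(range(i - n, i)) + list(range(i + 1, i + n + 1))
        if all(high[i] > high[j] for j in window):
            peaks.append(i)
        if all(low[i] < low[j] for j in window):
            valleys.append(i)
    return peaks, valleys

# Made-up daily high/low series for illustration
high = [10, 11, 13, 12, 11, 12, 15, 14, 13]
low  = [ 9, 10, 12, 11,  9, 11, 14, 13, 12]
print(swing_points(high, low, n=2))  # -> ([2, 6], [4])
```

A sequence of five such alternating turning points is what defines one M&W pattern; note that a point can only be confirmed n bars after it occurs, which matches the finding that trading after pattern completion works better.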