• Title/Summary/Keyword: Long-term forecasting


Application of multiple linear regression and artificial neural network models to forecast long-term precipitation in the Geum River basin (다중회귀모형과 인공신경망모형을 이용한 금강권역 강수량 장기예측)

  • Kim, Chul-Gyum;Lee, Jeongwoo;Lee, Jeong Eun;Kim, Hyeonjun
    • Journal of Korea Water Resources Association / v.55 no.10 / pp.723-736 / 2022
  • In this study, monthly precipitation forecasting models that can predict up to 12 months in advance were constructed for the Geum River basin, and two statistical techniques, multiple linear regression (MLR) and artificial neural network (ANN), were applied to the model construction. As predictor candidates, a total of 47 climate indices were used, including 39 global climate patterns provided by the National Oceanic and Atmospheric Administration (NOAA) and 8 meteorological factors for the basin. Forecast models were constructed using the most highly correlated climate indices, identified by analyzing the teleconnection between monthly precipitation and each climate index over the past 40 years relative to the forecast month. In the goodness-of-fit test for the average forecast of each month from 1991 to 2021, the MLR models showed a percent bias (PBIAS) of -3.3 to -0.1%, a Nash-Sutcliffe efficiency (NSE) of 0.45 to 0.50, and a Pearson correlation coefficient (r) of 0.69 to 0.70, whereas the ANN models showed a PBIAS of -5.0 to +0.5%, an NSE of 0.35 to 0.47, and an r of 0.64 to 0.70. The mean values predicted by the MLR models were closer to the observations than those of the ANN models. The probability of the forecast range including the observation in each month was 57.5 to 83.6% (average 72.9%) for the MLR models and 71.5 to 88.7% (average 81.1%) for the ANN models, so the ANN models performed better by this measure. The tercile hit probability by month was 25.9 to 41.9% (average 34.6%) for the MLR models and 30.3 to 39.1% (average 34.7%) for the ANN models; both models exceeded the 33.3% no-skill level on average, indicating long-term predictability of monthly precipitation. In conclusion, the difference in predictability between the two models was relatively small. However, judging from the hit rate for the prediction range and the tercile probability, the month-to-month variation in predictability was smaller for the ANN models.
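The goodness-of-fit measures used above (PBIAS, NSE, r) are standard and easy to state in code. The paper's own implementation is not given, so the following is only an illustrative pure-Python sketch:

```python
import math

def pbias(obs, sim):
    # Percent bias; with this sign convention, positive values mean the
    # model underestimates on average (conventions vary between papers)
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 is a perfect fit,
    # 0 means no better than predicting the observed mean
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def pearson_r(obs, sim):
    # Pearson correlation coefficient between observed and simulated series
    mo, ms = sum(obs) / len(obs), sum(sim) / len(sim)
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    return cov / math.sqrt(sum((o - mo) ** 2 for o in obs)
                           * sum((s - ms) ** 2 for s in sim))
```

Applied to monthly precipitation, `obs` would be the 1991-2021 observations for a given month and `sim` the corresponding model forecasts.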

Case study on flood water level prediction accuracy of LSTM model according to condition of reference hydrological station combination (참조 수문관측소 구성 조건에 따른 LSTM 모형 홍수위예측 정확도 검토 사례 연구)

  • Lee, Seungho;Kim, Sooyoung;Jung, Jaewon;Yoon, Kwang Seok
    • Journal of Korea Water Resources Association / v.56 no.12 / pp.981-992 / 2023
  • Due to recent global climate change, the scale of flood damage is increasing as rainfall becomes more concentrated and intense. Rain on a scale not observed in the past may fall, and unprecedentedly long rainy seasons may occur. Such damage is concentrated in ASEAN countries, where many people are affected by frequent flooding due to typhoons and torrential rain. In particular, the Bandung region, located in the Upper Citarum River basin in Indonesia, has the topography of a basin, making it very vulnerable to flooding. Accordingly, a flood forecasting and warning system for the Upper Citarum River basin was established in 2017 through Official Development Assistance (ODA) and is currently in operation. Nevertheless, the basin remains exposed to the risk of human and property damage in the event of a flood, so continued efforts to reduce damage through fast and accurate flood forecasting are needed. Therefore, in this study, an artificial intelligence-based river flood water level forecasting model for Dayeu Kolot as the target station was developed using 10-minute hydrological data from 4 rainfall stations and 1 water level station. Using 10-minute hydrological observation data from the 6 stations from January 2017 to January 2021, training, validation, and testing were performed for lead times of 0.5, 1, 2, 3, 4, 5, and 6 hours, with LSTM applied as the artificial intelligence algorithm. The model showed good fit and low error for all lead times. A review of prediction accuracy under different training dataset conditions showed accuracy similar to that obtained using all observation stations even with few reference stations, so the approach is expected to support building efficient artificial intelligence-based models.
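Before any LSTM can be trained for a fixed lead time, the 10-minute series must be reshaped into (input window, target) pairs. The paper does not publish its preprocessing, so this is a minimal illustrative sketch of that step for a single series:

```python
def make_supervised(series, window, lead):
    """Turn a univariate 10-minute series into (input window, target) pairs
    for forecasting `lead` steps ahead, as typically done before training
    an LSTM. With 10-minute data, a 3-hour lead time is lead = 18 steps."""
    X, y = [], []
    for i in range(len(series) - window - lead + 1):
        X.append(series[i:i + window])           # past `window` observations
        y.append(series[i + window + lead - 1])  # level `lead` steps later
    return X, y
```

In practice each input row would also be stacked with the four rainfall stations' values at the same time steps, giving the multivariate tensors an LSTM expects.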

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic value analysis or technical indicator analysis. However, pattern analysis is difficult and has been computerized less than users need. In recent years, there have been many studies of stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge volumes of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so these methods are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier techniques could not recognize, but such work can be weak in practice because whether the patterns found are suitable for trading is a separate question. Those studies find a meaningful pattern, locate the points that match it, and then measure performance after n days on the assumption of a purchase at each matching point. Because this approach computes virtual returns, it can diverge considerably from reality. Whereas existing research tries to discover patterns with price-forecasting power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M&W wave patterns published by Merrill (1980) are simple because each can be distinguished by five turning points. Although some of these patterns have been reported to have price predictability, no performance in the actual market had been reported. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of raising pattern recognition accuracy.
In this study, the 16 upward-reversal patterns and 16 downward-reversal patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so the system trades when such a pattern occurs. The measurement reflects a real trading situation because both the buy and the sell are assumed to have been executed. Three ways of calculating turning points were tested. The first, the minimum change rate zig-zag method, removes price movements below a certain percentage and then identifies the vertices. In the second, the high-low line zig-zag method, a high price that meets the n-day high price line is taken as a peak, and a low price that meets the n-day low price line is taken as a valley. In the third, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak, and a central low price lower than the n low prices on its left and right is taken as a valley. The swing wave method outperformed the other two in the tests, which suggests that trading after a pattern is confirmed complete is more effective than trading while the pattern is still forming. Because the number of cases was too large to search exhaustively for high-success-rate patterns, genetic algorithms (GA) proved the most suitable solution. The simulation also used the walk-forward analysis (WFA) method, which tests the training section and the application section separately, so the system could respond appropriately to market changes. A stock portfolio was optimized as a whole because optimizing variables for each individual stock risks over-optimization.
Therefore, the number of constituent stocks was set to 20 to gain the benefit of diversification while avoiding over-optimization. The KOSPI market was tested in six categories. The small-cap portfolio was the most successful, and the high-volatility portfolio was second best. This indicates that prices need some volatility for patterns to form, but that higher volatility is not always better.
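Of the three turning-point methods described, the swing wave method is the most direct to state in code. The sketch below is illustrative only; the paper's exact parameters and tie-handling are not specified:

```python
def swing_points(highs, lows, n):
    """Swing wave method: index i is a peak if highs[i] exceeds the n high
    prices on each side, and a valley if lows[i] is below the n low prices
    on each side. Returns (peak indices, valley indices)."""
    peaks, valleys = [], []
    for i in range(n, len(highs) - n):
        # Peak test: strictly higher than every high in the n-bar windows
        if all(highs[i] > h for h in highs[i - n:i] + highs[i + 1:i + n + 1]):
            peaks.append(i)
        # Valley test: strictly lower than every low in the n-bar windows
        if all(lows[i] < l for l in lows[i - n:i] + lows[i + 1:i + n + 1]):
            valleys.append(i)
    return peaks, valleys
```

Five consecutive turning points from the merged peak/valley sequence would then be matched against the ten M&W pattern groups.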

Present Status of Soilborne Disease Incidence and Scheme for Its Integrated Management in Korea (국내 토양병해 발생현황과 종합 관리방안)

  • Kim, Choong-Hoe;Kim, Yong-Ki
    • Research in Plant Disease / v.8 no.3 / pp.146-161 / 2002
  • The incidence of soilborne diseases, a major cause of failure in continuous monocropping, has become severe in recent years. For example, recent epidemics of clubroot of Chinese cabbage, white rot of garlic, bacterial wilt of potato, Phytophthora blight of pepper, Fusarium wilt of tomato, and CGMMV of watermelon are diseases that require urgent control measures. The reasons for the severe incidence of soilborne diseases are the simplified cropping system, or continuous monocropping, associated with the concentration of major production areas for certain crops, and the year-round cultivation system, which rapidly degrades the soil environment. Neglect of breeding for disease resistance in favor of high yield and good quality, and cultural methods that rely first on chemical fertilizers, are also thought to be responsible. Countermeasures against soilborne disease epidemics will be most effective when remedies are suited to the individual causes. As long-term strategies, the development of rational cropping systems that fit local cropping and economic conditions, the development and supply of cultivars resistant to multiple diseases, and the improvement of the soil environment by soil conditioning are suggested. As short-term strategies, simple and economical soil-disinfestation technology and quick, accurate forecasting methods for soilborne diseases urgently need development. For these, extensive government-level support is required to train soilborne disease specialists and to activate collaborative research on the soilborne disease problems now being encountered.

Development of Model for Optimal Concession Period in PPPs Considering Traffic Risk (교통량 위험을 고려한 도로 민간투자사업 적정 관리운영기간 산정 모형 개발)

  • KU, Sukmo;LEE, Seungjae
    • Journal of Korean Society of Transportation / v.34 no.5 / pp.421-436 / 2016
  • Public-private partnerships (PPPs) tend to involve high project development costs that are recovered through future revenue during the operation period. In general, a long concession brings more revenue to private investors, while a short concession brings less because of the shorter recovery opportunity. Although the concession period is a crucial factor for PPPs, it is usually determined in advance by the government or by the private sector's proposal. Accurate traffic forecasting is most important in planning and evaluating the operation period, since the forecast traffic, together with user fees, determines project revenue in PPPs. In this regard, governments and private investors need to consider traffic forecast risk when determining the concession period. This study proposes a model for the optimal concession period in PPP transportation projects. Monte Carlo simulation was performed to find the optimal concession period, with traffic forecast uncertainty treated as a project risk under the private sector's expected return. The simulation results showed optimal concession periods of 17 years and 21 years at discount rates of 5.5% and 7%, respectively. The results can help private investors and other decision makers in PPP projects set a more reasonable concession period.
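The simulation idea can be sketched in a few lines: draw noisy revenue paths and return the shortest concession whose mean discounted revenue covers the up-front cost. The capex, base revenue, and volatility figures below are invented for illustration and are not the paper's inputs:

```python
import random

def optimal_concession(discount, capex=1000.0, base_revenue=80.0,
                       vol=0.15, max_years=50, n_sims=2000, seed=42):
    """Monte Carlo sketch: simulate uncertain annual toll revenue and return
    the shortest concession period (in years) whose mean discounted revenue
    recovers the construction cost; None if never recovered."""
    rng = random.Random(seed)
    # One revenue path per simulation: base revenue with lognormal noise
    paths = [[base_revenue * rng.lognormvariate(0.0, vol)
              for _ in range(max_years)] for _ in range(n_sims)]
    for years in range(1, max_years + 1):
        # Mean NPV over `years` of operation, net of the up-front cost
        mean_npv = sum(
            sum(p[t] / (1 + discount) ** (t + 1) for t in range(years))
            for p in paths
        ) / n_sims - capex
        if mean_npv >= 0:
            return years
    return None
```

With the same random draws, a higher discount rate can only lengthen the required period, which is consistent with the study's finding of 17 years at 5.5% versus 21 years at 7%.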

Estimating Travel Demand by Using a Spatial-Temporal Activity Presence-Based Approach (시.공간 활동인구 추정에 의한 통행수요 예측)

  • Eom, Jin-Ki
    • Journal of Korean Society of Transportation / v.26 no.5 / pp.163-174 / 2008
  • The conventional four-step travel demand model is still widely used as the state of the practice in most transportation planning agencies, even though it does not provide reliable estimates of travel demand. To improve the accuracy of travel demand estimation, implementing an alternative approach is as critical as acquiring reliable socioeconomic and travel data. Recently, the role of travel demand models has diversified to satisfy the needs of microscopic analysis for various travel demand management and traffic operations policies. In this context, the activity-based approach to travel demand estimation is introduced, and a case study is carried out that develops a spatial-temporal activity presence-based approach, which estimates travel demand by forecasting the number of people present at a given place and time. The results show that the spatial-temporal activity presence-based approach provides reliable estimates of both the number of people present and the trips people actually made. The proposed approach is expected to provide better estimates and to be usable not only in long-term transport plans but also in short-term transport impact studies for various transport policies. Finally, to introduce the spatial-temporal activity presence-based approach, data such as activity-based travel diaries and GIS-based land use are essential.

Current Status and Future Challenges of the National Population Projection in South Korea Concerning Super-Low Fertility Patterns (국제비교를 통해 바라본 한국의 장래인구추계 현황과 전망)

  • Jun, Kwang-Hee;Choi, Seul-Ki
    • Korea journal of population studies / v.33 no.2 / pp.85-111 / 2010
  • South Korea has experienced a rapid fertility decline and notable mortality improvement. Because the drop in TFR was quick and large in both tempo and magnitude, it posed a new challenge for population projection: how to improve forecasting accuracy in a country with a super-low fertility pattern. This study begins with the current status of the national population projection as implemented by Statistics Korea, comparing the 2009 interim projection with the 2006 official national population projection. Second, it compares population projection systems among super-low-fertility countries, including projection agencies, projection horizons, projection intervals, the number of projection scenarios, and the number of assumptions on fertility, mortality, and international migration. Third, it illustrates a stochastic population projection for Korea by transforming the population rates into one-parameter series. Finally, it describes the future challenges of the national population projection and proposes projection scenarios for the 2011 official population projection. To enhance accuracy, we suggest that Statistics Korea update population projections more frequently or distinguish short-term from long-term projections. Adding more than four projection scenarios, including additional "low-variant" fertility types, could show a greater variety of future changes. We also expect Statistics Korea to pay more attention to the determination of a base population that includes both national and non-national populations. Finally, we hope that Statistics Korea will find a sound way to incorporate the ideas underlying stochastic population projection into the official national population projection.

Analysis of effects of drought on water quality using HSPF and QUAL-MEV (HSPF 및 QUAL-MEV를 이용한 가뭄이 수질에 미치는 영향 분석)

  • Lee, Sangung;Jo, Bugeon;Kim, Young Do;Lee, Joo-Heon
    • Journal of Korea Water Resources Association / v.56 no.6 / pp.393-402 / 2023
  • Drought, which has been increasing in frequency and magnitude due to recent abnormal weather, poses severe challenges across many sectors. To address this issue, it is important to develop technologies for drought monitoring, forecasting, and response in order to implement effective measures and safeguard the ecological health of aquatic systems during drought-induced water scarcity. This study aimed to predict water quality fluctuations during drought periods by integrating the watershed model HSPF with the water quality model QUAL-MEV. The SPI and the RCP 4.5 scenario were examined, and water quality changes were analyzed as a function of flow rate by simulating with the HSPF and QUAL-MEV models. The study found a strong correlation between flow and water quality during low flow, whereas the relationship between precipitation and water quality was insignificant. Moreover, the flow rate and SPI6 exhibited different trends: the relationship with the mid- to long-term drought index was not significant for predicting drought-driven changes in water quality. Therefore, to accurately assess the impact of drought on water quality, it is necessary to employ a short-term drought index and to develop an evaluation method that considers fluctuations in flow.
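The SPI6 mentioned above standardizes 6-month precipitation totals. The operational SPI fits a gamma distribution and maps the totals to a standard normal; the sketch below uses a plain z-score instead, purely to illustrate the idea of an n-month index:

```python
import math

def spi_simplified(precip, scale=6):
    """Simplified SPI-like index: standardize rolling `scale`-month
    precipitation totals with a z-score. Illustrative only; the real SPI
    uses a fitted gamma distribution rather than a z-score."""
    # Rolling `scale`-month totals (first value at month index scale-1)
    sums = [sum(precip[i - scale + 1:i + 1])
            for i in range(scale - 1, len(precip))]
    mean = sum(sums) / len(sums)
    sd = math.sqrt(sum((s - mean) ** 2 for s in sums) / len(sums))
    return [(s - mean) / sd for s in sums]
```

Negative values flag accumulated precipitation deficits (drought), positive values surpluses; SPI6 therefore responds slowly to short dry spells, which is the mismatch with low-flow water quality noted above.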

An Analysis of Drought Using Palmer's Method (Palmer의 방법을 이용한 가뭄의 분석)

  • Yun, Yong-Nam;An, Jae-Hyeon;Lee, Dong-Ryul
    • Journal of Korea Water Resources Association / v.30 no.4 / pp.317-326 / 1997
  • The Palmer Drought Severity Index has been extensively used to quantitatively evaluate drought severity at a location for both agricultural and water resources management purposes. In the present study, a Palmer-type formula for the drought index is derived for the whole country by analyzing monthly rainfall and meteorological data at nine stations with long records. The formula is then used to compute the monthly drought severity index at sixty-eight rainfall stations located throughout the country. For the past five significant drought periods, the spatial variation of each drought is shown as a nationwide drought index map for a specified duration, from which the relative severity of drought across the country can be identified for a specific drought period. A comparative study evaluates the relative severity of the significant droughts that have occurred in Korea since the 1960s; the 1994-95 drought turned out to be among the worst in both areal extent and severity. The Palmer-type formula is found to be a very useful tool for quantitatively evaluating drought severity over an area as well as at a point. When long-term rainfall and meteorological forecasts become feasible, the method could also serve as a tool for drought forecasting.


Keyword Network Analysis for Technology Forecasting (기술예측을 위한 특허 키워드 네트워크 분석)

  • Choi, Jin-Ho;Kim, Hee-Su;Im, Nam-Gyu
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.227-240 / 2011
  • New concepts and ideas often result from extensive recombination of existing ones. Both researchers and developers build on the concepts and ideas in published papers or registered patents to develop new theories and technologies, which in turn serve as a basis for further development. As the importance of patents increases, so does that of patent analysis. Patent analysis is largely divided into network-based and keyword-based analyses. The former lacks the ability to analyze information technology in detail, while the latter cannot identify relationships between technologies. To overcome these limitations, this study blends the two methods and proposes a keyword network-based analysis methodology. We collected significant technology information related to light-emitting diodes (LED) from each patent through text mining, built a keyword network, and performed a community network analysis on the collected data. The results of the analysis are as follows. First, the patent keyword network showed very low density and an exceptionally high clustering coefficient. Technically, density is obtained by dividing the number of ties in a network by the number of all possible ties; the value ranges between 0 and 1, with higher values indicating denser networks. In real-world networks, density varies with network size: increasing the size of a network generally decreases its density. The clustering coefficient is a network-level measure of the tendency of nodes to cluster into densely interconnected modules. It captures the small-world property, in which a network can be highly clustered while still having a small average distance between nodes despite the large number of nodes.
The low density of the patent keyword network therefore means that its nodes are connected only sparsely overall, while the high clustering coefficient shows that connected nodes are closely linked to one another. Second, the cumulative degree distribution of the patent keyword network, like other knowledge networks such as citation or collaboration networks, followed a clear power-law distribution. A well-known mechanism behind this pattern is preferential attachment, whereby a node with more links is more likely to attain new links as the network evolves. Unlike normal distributions, a power-law distribution has no representative scale: one cannot pick a representative or average value, because there is always a considerable probability of finding much larger values. Networks with power-law degree distributions are therefore often called scale-free networks. A heavy-tailed, scale-free distribution is the signature of an emergent collective behavior of the actors who form the network. In our context, the more frequently a patent keyword is used, the more often it is selected by researchers and associated with other keywords or concepts to constitute and convey new patents or technologies. The power-law evidence implies that preferential attachment explains the origin of heavy-tailed degree distributions in the growing patent keyword network. Third, we found that among keywords flowing into a particular field, the vast majority of keywords with new links join existing keywords in the associated community to form the concept of a new patent. This finding held for both the short-term (4-year) and long-term (10-year) analyses.
Furthermore, the keyword combination information derived from the proposed methodology enables one to forecast which concepts will combine to form a new patent dimension and to consult those concepts when developing a new patent.
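The two network measures discussed, density and the average clustering coefficient, follow directly from their definitions. A small illustrative pure-Python sketch for an undirected network:

```python
def density(nodes, edges):
    # Density: ties present divided by all possible undirected ties
    possible = len(nodes) * (len(nodes) - 1) / 2
    return len(edges) / possible

def avg_clustering(nodes, edges):
    # Average clustering coefficient: for each node, the fraction of its
    # neighbor pairs that are themselves connected (0 for degree < 2)
    edge_set = {frozenset(e) for e in edges}
    nbrs = {n: set() for n in nodes}
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    total = 0.0
    for n in nodes:
        ns = sorted(nbrs[n])
        k = len(ns)
        if k < 2:
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if frozenset((ns[i], ns[j])) in edge_set)
        total += 2.0 * links / (k * (k - 1))
    return total / len(nodes)
```

A network can score low on the first measure and high on the second at the same time, which is exactly the low-density, high-clustering combination the study reports for the patent keyword network.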