• Title/Summary/Keyword: forecasting performance


An Empirical Study on Supply Chain Demand Forecasting Using Adaptive Exponential Smoothing (적응적 지수평활법을 이용한 공급망 수요예측의 실증분석)

  • Kim, Jung-Il;Cha, Kyoung-Cheon;Jun, Duk-Bin;Park, Dae-Keun;Park, Sung-Ho;Park, Myoung-Whan
    • IE interfaces / v.18 no.3 / pp.343-349 / 2005
  • This study presents empirical results comparing several demand forecasting methods for Supply Chain Management (SCM). Adaptive exponential smoothing using change detection statistics (Jun's method) is compared with Trigg and Leach's adaptive method and the SAS time series forecasting system on weekly SCM demand data. The results show that Jun's method is superior to the others in terms of both one-step-ahead and eight-step-ahead forecast error. Based on these results, we conclude that the forecasting performance of an SCM solution can be improved by the proposed adaptive forecasting method.
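
As a rough illustration of the kind of adaptive smoothing this entry benchmarks against, below is a minimal numpy sketch of Trigg and Leach's tracking-signal scheme, in which the smoothing constant is reset each period to the absolute ratio of smoothed error to smoothed absolute error. Jun's change-detection method itself is not reproduced; the demand series, the β value, and the initial level are illustrative assumptions.

```python
import numpy as np

def trigg_leach_forecast(demand, beta=0.2):
    """One-step-ahead forecasts with Trigg & Leach adaptive smoothing (sketch)."""
    level = demand[0]          # initial level: first observation (assumption)
    E = 0.0                    # smoothed error
    M = 1e-8                   # smoothed absolute error (small value avoids /0)
    forecasts = [level]
    for y in demand[1:]:
        err = y - level
        E = beta * err + (1 - beta) * E
        M = beta * abs(err) + (1 - beta) * M
        alpha = min(1.0, abs(E / M))      # adaptive smoothing constant
        level = level + alpha * err       # update level with adaptive alpha
        forecasts.append(level)
    return np.array(forecasts)

# toy weekly demand series with a level shift (illustrative only)
weekly_demand = np.array([120, 118, 125, 130, 128, 170, 175, 172, 180, 178], float)
print(trigg_leach_forecast(weekly_demand))
```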

Fast Data Assimilation using Kernel Tridiagonal Sparse Matrix for Performance Improvement of Air Quality Forecasting (대기질 예보의 성능 향상을 위한 커널 삼중대각 희소행렬을 이용한 고속 자료동화)

  • Bae, Hyo Sik;Yu, Suk Hyun;Kwon, Hee Yong
    • Journal of Korea Multimedia Society / v.20 no.2 / pp.363-370 / 2017
  • Data assimilation is an initialization method for air quality forecasting, such as PM10 forecasting, and is very important for enhancing forecasting accuracy. Optimal interpolation is one of the data assimilation techniques; it is very effective and widely used in air quality forecasting. The technique, however, requires a large amount of memory and a long execution time, which makes real-time PM10 forecasting difficult. We propose a fast optimal interpolation data assimilation method for PM10 air quality forecasting using a new kernel tridiagonal sparse matrix and the CUDA massively parallel processing architecture. Experimental results show the proposed method is 5~56 times faster than conventional ones.
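
For readers unfamiliar with optimal interpolation (OI), the sketch below shows the standard analysis update x_a = x_b + B Hᵀ(H B Hᵀ + R)⁻¹(y − H x_b) in dense numpy form. The paper's kernel tridiagonal sparse-matrix treatment and CUDA implementation are not reproduced here; the background covariance, observation operator, and error variances are toy assumptions.

```python
import numpy as np

def optimal_interpolation(x_b, y, H, B, R):
    """Dense optimal-interpolation analysis step (toy sketch, no sparse/CUDA speed-up)."""
    innovation = y - H @ x_b                 # observation-minus-background
    S = H @ B @ H.T + R                      # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)           # gain matrix
    return x_b + K @ innovation              # analysis state

# toy 1-D PM10 field on 5 grid points with 2 observations (illustrative only)
n = 5
x_b = np.array([40., 42., 45., 50., 48.])    # background PM10 concentrations
H = np.zeros((2, n)); H[0, 1] = 1.0; H[1, 3] = 1.0
y = np.array([46., 55.])                     # observed PM10 at grid points 1 and 3
dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
B = 4.0 * np.exp(-dist / 2.0)                # distance-decaying background covariance
R = 1.0 * np.eye(2)                          # observation error covariance
print(optimal_interpolation(x_b, y, H, B, R))
```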

Bivariate long range dependent time series forecasting using deep learning (딥러닝을 이용한 이변량 장기종속시계열 예측)

  • Kim, Jiyoung;Baek, Changryong
    • The Korean Journal of Applied Statistics / v.32 no.1 / pp.69-81 / 2019
  • We consider bivariate long range dependent (LRD) time series forecasting using a deep learning method. A long short-term memory (LSTM) network, well suited to time series data, is applied to forecast the bivariate time series; in addition, we compare its forecasting performance with that of bivariate fractional autoregressive integrated moving average (FARIMA) models. Out-of-sample forecasting errors are compared with various performance measures for functional MRI (fMRI) data and daily realized volatility data. The results show only a subtle difference between the predicted values of the FIVARMA and VARFIMA models. LSTM is computationally demanding due to hyper-parameter selection, but it is more stable and its forecasting performance is competitive with that of the parametric long range dependent time series models.
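
A minimal PyTorch sketch of this kind of LSTM forecaster is shown below: sliding windows of length `lookback` over a two-dimensional series are mapped to the next bivariate observation. The window length, hidden size, and synthetic data are assumptions; the FIVARMA/VARFIMA comparison and the hyper-parameter search from the paper are not reproduced.

```python
import numpy as np
import torch
import torch.nn as nn

class BivariateLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)       # predict next bivariate value

    def forward(self, x):                      # x: (batch, lookback, 2)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])        # use last hidden state

# synthetic bivariate series and sliding windows (illustrative only)
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=(500, 2)), axis=0).astype("float32")
lookback = 20
X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
y = series[lookback:]

model = BivariateLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):                         # a few full-batch epochs for the sketch
    opt.zero_grad()
    loss = loss_fn(model(torch.from_numpy(X)), torch.from_numpy(y))
    loss.backward()
    opt.step()
print(float(loss))
```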

Water level forecasting for extended lead times using preprocessed data with variational mode decomposition: A case study in Bangladesh

  • Shabbir Ahmed Osmani;Roya Narimani;Hoyoung Cha;Changhyun Jun;Md Asaduzzaman Sayef
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.179-179 / 2023
  • This study suggests a new approach to water level forecasting for extended lead times using original-data preprocessing with variational mode decomposition (VMD). Two machine learning algorithms, light gradient boosting machine (LGBM) and random forest (RF), were considered for forecasting water levels at extended lead times (i.e., 5, 10, 15, 20, 25, 30, 40, and 50 days). First, the original data at two water level stations (SW173 and SW269 in Bangladesh) and their decomposed data from VMD were prepared with antecedent lag times to build datasets for the different lead times. Mean absolute error (MAE), root mean squared error (RMSE), and mean squared error (MSE) were used to evaluate the performance of the machine learning models in water level forecasting. The results indicate that errors were minimized when the decomposed datasets were used to predict water levels rather than the original data alone. LGBM also produced lower MAE, RMSE, and MSE values than RF, indicating better performance. For instance, at the SW173 station with a 30-day lead time, LGBM outperformed RF on both decomposed and original data, with MAE values of 0.511 and 1.566 compared to RF's 0.719 and 1.644, respectively. The models' performance decreased with increasing lead time. In summary, preprocessing the original data and using machine learning models with decomposition techniques showed promising results for water level forecasting at longer lead times. The approach of this study is expected to assist water management authorities in taking precautionary measures based on forecasted water levels, which is crucial for sustainable water resource utilization.
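
The sketch below illustrates only the lead-time framing: lagged values of the (optionally decomposed) water-level series are used as features to predict the level L days ahead with a random forest; `lightgbm.LGBMRegressor` can be swapped in the same way. The `decompose` placeholder, lag structure, and synthetic data are assumptions; the actual VMD settings and station data from the paper are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

def make_lagged(features, target, n_lags, lead):
    """Build (X, y) so that lags t-n_lags+1..t predict the target at t+lead."""
    X, y = [], []
    for t in range(n_lags - 1, len(target) - lead):
        X.append(features[t - n_lags + 1:t + 1].ravel())
        y.append(target[t + lead])
    return np.array(X), np.array(y)

# synthetic water-level series; 'decompose' stands in for the VMD modes (assumption)
rng = np.random.default_rng(1)
level = np.sin(np.linspace(0, 20, 1000)) * 3 + rng.normal(0, 0.3, 1000) + 10
decompose = lambda s: np.column_stack([s, np.convolve(s, np.ones(7) / 7, "same")])

X, y = make_lagged(decompose(level), level, n_lags=10, lead=30)   # 30-day lead time
split = int(0.8 * len(X))
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])
print("MAE:", mean_absolute_error(y[split:], model.predict(X[split:])))
```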


Improving SARIMA model for reliable meteorological drought forecasting

  • Jehanzaib, Muhammad;Shah, Sabab Ali;Son, Ho Jun;Kim, Tae-Woong
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.141-141 / 2022
  • Drought is a global phenomenon that affects almost all landscapes and causes major damage. Due to the non-linear nature of its contributing factors, drought occurrence and severity are stochastic in nature. Early warning of impending drought can aid in the development of drought mitigation strategies and measures, so drought forecasting is crucial in the planning and management of water resource systems. The primary objective of this study is to improve existing drought forecasting techniques. We therefore propose an improved version of the Seasonal Autoregressive Integrated Moving Average (SARIMA) model (MD-SARIMA) for reliable drought forecasting with a three-year lead time. In this study, we selected four watersheds of the Han River basin in South Korea to validate the performance of the MD-SARIMA model. Meteorological data from 8 rain gauge stations were collected for the period 1973-2016 and converted to the watershed scale using Thiessen's polygon method. The Standardized Precipitation Index (SPI) was employed to represent meteorological drought at the seasonal (3-month) time scale. The performance of the MD-SARIMA model was compared with existing models such as the Seasonal Naive Bayes (SNB) model, the Exponential Smoothing (ES) model, the Trigonometric seasonality, Box-Cox transformation, ARMA errors, Trend and Seasonal components (TBATS) model, and the SARIMA model. The results showed that all the models were able to forecast drought, but MD-SARIMA was more robust than the other statistical models, with a Willmott Index (WI) of 0.86, Mean Absolute Error (MAE) of 0.66, and Root Mean Square Error (RMSE) of 0.80 for the 36-month lead-time forecast. The outcomes of this study indicate that the MD-SARIMA model can be utilized for drought forecasting.
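
A minimal statsmodels sketch of the baseline SARIMA step is shown below: a seasonal ARIMA model is fitted to a pre-computed 3-month SPI series and extrapolated 36 months ahead. The SPI series here is synthetic and the (p,d,q)(P,D,Q,s) orders are illustrative assumptions; the paper's MD-SARIMA modification is not reproduced.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# synthetic monthly SPI-3 series (standardized, roughly seasonal) -- illustrative only
rng = np.random.default_rng(2)
months = np.arange(12 * 40)
spi3 = 0.6 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.7, len(months))

# seasonal ARIMA with assumed orders; the MD-SARIMA variant itself is not shown
model = SARIMAX(spi3, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12))
result = model.fit(disp=False)
forecast = result.forecast(steps=36)          # 36-month (3-year) lead-time forecast
print(forecast[:12])
```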


Enhancing the Performance of Call Center using Simulation (시뮬레이션을 통한 콜센터의 성능 개선)

  • 김윤배;이창헌;김재범;이계신;이병철
    • Journal of the Korea Society for Simulation / v.12 no.4 / pp.83-94 / 2003
  • Managing a call center is a complex and diverse challenge. The call center has become a very important contact point and data warehouse for successful CRM, so improving call center performance is critical and valuable for providing better service. In this study we applied a forecasting technique to estimate incoming calls and built a ProModel-based simulation model to enhance the performance of a mobile telecommunication company's call center. The simulation study shows a reduction in management cost and improved customer satisfaction.
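
As a rough stand-in for the ProModel study, the sketch below simulates a single-queue, multi-agent call center in plain Python: Poisson arrivals driven by a forecast call rate are served first-come-first-served by the earliest-free agent, and the average wait is reported for several staffing levels. Arrival rate, handling time, and agent counts are illustrative assumptions.

```python
import numpy as np

def simulate_call_center(arrival_rate, mean_handle, n_agents, n_calls, seed=0):
    """FCFS single-queue simulation with n_agents servers (sketch)."""
    rng = np.random.default_rng(seed)
    arrivals = np.cumsum(rng.exponential(1 / arrival_rate, n_calls))  # arrival times
    handle = rng.exponential(mean_handle, n_calls)                    # handling times
    agent_free = np.zeros(n_agents)               # when each agent is next free
    waits = []
    for t, h in zip(arrivals, handle):
        i = np.argmin(agent_free)                 # earliest-available agent
        start = max(t, agent_free[i])             # call starts when both are ready
        waits.append(start - t)
        agent_free[i] = start + h
    return np.mean(waits)

# forecast of 2 calls/minute, 4-minute average handling time, compare staffing levels
for agents in (8, 10, 12):
    w = simulate_call_center(arrival_rate=2.0, mean_handle=4.0,
                             n_agents=agents, n_calls=5000)
    print(f"{agents} agents: average wait {w:.2f} min")
```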


Electricity Demand Forecasting based on Support Vector Regression (Support Vector Regression에 기반한 전력 수요 예측)

  • Lee, Hyoung-Ro;Shin, Hyun-Jung
    • IE interfaces / v.24 no.4 / pp.351-361 / 2011
  • Forecasting electricity demand has difficulty adapting to abrupt weather changes and radical shifts in major regional and global climates. This has led to increasing attention to research on immediate and accurate forecasting models. Technically, this implies that a model should require only a few input variables, all of which are easily obtainable, while its predictive performance remains comparable with that of competing models. To meet these ends, this paper presents an electricity demand forecasting model that uses the variable selection or extraction methods of data mining to select only relevant input variables and employs support vector regression for accurate prediction. It also proposes a novel performance measure for time-series prediction, the shift index, followed by a description of the preprocessing procedure. A comparative evaluation of the proposed method against other representative data mining models, such as an auto-regression model, an artificial neural network model, and an ordinary support vector regression model, was carried out on forecasts of monthly electricity demand from 2000 to 2008 based on data provided by the Korea Energy Economics Institute. Among the models tested, the proposed method showed more promising results than the others.
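
A minimal scikit-learn sketch in the spirit of such a pipeline is shown below: standardized lag features feed a support vector regressor that predicts next-month demand. The lag count, kernel settings, and synthetic demand series are assumptions; the paper's variable-selection step and its proposed shift index are not reproduced.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_percentage_error

# synthetic monthly electricity demand with trend and seasonality (illustrative only)
rng = np.random.default_rng(3)
t = np.arange(12 * 9)                              # nine years of monthly data
demand = 300 + 2 * t + 40 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 10, len(t))

n_lags = 12                                        # one year of lagged demand as inputs
X = np.array([demand[i:i + n_lags] for i in range(len(demand) - n_lags)])
y = demand[n_lags:]
split = int(0.8 * len(X))

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
model.fit(X[:split], y[:split])
print("MAPE:", mean_absolute_percentage_error(y[split:], model.predict(X[split:])))
```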

Machine learning approaches for wind speed forecasting using long-term monitoring data: a comparative study

  • Ye, X.W.;Ding, Y.;Wan, H.P.
    • Smart Structures and Systems / v.24 no.6 / pp.733-744 / 2019
  • Wind speed forecasting is critical for a variety of engineering tasks, such as wind energy harvesting, scheduling of a wind power system, and dynamic control of structures (e.g., wind turbines, bridges, and buildings). Wind speed, which is random, nonlinear, and uncertain, is difficult to forecast. Machine learning approaches, such as the generalized regression neural network (GRNN), back propagation neural network (BPNN), and extreme learning machine (ELM), are now widely used for wind speed forecasting. In this study, two schemes are proposed to improve the forecasting performance of machine learning approaches. One is that optimization algorithms, i.e., cross validation (CV), genetic algorithm (GA), and particle swarm optimization (PSO), are used to automatically find the optimal model parameters. The other is that a combination of different machine learning methods is formed by the finite mixture (FM) method. Specifically, CV-GRNN, GA-BPNN, and PSO-ELM are optimization-algorithm-assisted machine learning approaches, while FM is a hybrid machine learning approach consisting of GRNN, BPNN, and ELM. The effectiveness of these machine learning methods in wind speed forecasting is fully investigated using one year of field monitoring data, and their performance is comprehensively compared.
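
To make the GRNN branch concrete, the sketch below implements a generalized regression neural network as Gaussian-kernel-weighted averaging of training targets, with the smoothing bandwidth chosen by a simple hold-out search standing in for cross-validation. The lag features, bandwidth grid, and synthetic wind-speed data are assumptions; the GA-BPNN, PSO-ELM, and finite-mixture combinations from the paper are not reproduced.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_test, sigma):
    """GRNN = Nadaraya-Watson regression with a Gaussian kernel (sketch)."""
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)   # squared distances
    w = np.exp(-d2 / (2 * sigma ** 2))                               # kernel weights
    return (w @ y_train) / w.sum(axis=1)

# synthetic hourly wind speed; the previous 6 hours predict the next hour (assumption)
rng = np.random.default_rng(4)
speed = 8 + 3 * np.sin(np.arange(2000) / 50) + rng.normal(0, 1, 2000)
lags = 6
X = np.array([speed[i:i + lags] for i in range(len(speed) - lags)])
y = speed[lags:]
tr, va = slice(0, 1500), slice(1500, None)

# hold-out search over the bandwidth, a simple stand-in for CV-GRNN
best = min((np.mean((grnn_predict(X[tr], y[tr], X[va], s) - y[va]) ** 2), s)
           for s in (0.5, 1.0, 2.0, 4.0))
print("best sigma:", best[1], "validation MSE:", round(best[0], 3))
```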

Developing Optimal Demand Forecasting Models for a Very Short Shelf-Life Item: A Case of Perishable Products in Online's Retail Business

  • Wiwat Premrudikul;Songwut Ahmornahnukul;Akkaranan Pongsathornwiwat
    • Journal of Information Technology Applications and Management / v.30 no.3 / pp.1-13 / 2023
  • Demand forecasting is a crucial task for an online retailer that has to manage fresh foods effectively every day. Failure in forecasting results in a loss of profitability because of poor inventory management. This study investigated the performance of different forecasting models for a very short shelf-life product. Demand data for 13 perishable items covering 210 days were used for the analysis. Our comparison of four methods, Trivial Identity, Seasonal Naïve, Feed-Forward, and Autoregressive Recurrent Neural Networks (DeepAR), reveals that DeepAR outperforms the others with the lowest MAPE. This study also suggests managerial implications by employing the coefficient of variation (CV) as an indicator of demand variation. Three classes, Low, Medium, and High variation, are introduced to classify the 13 products into groups. Our analysis found that DeepAR is suitable for the medium- and high-variation groups, while the low-variation group can use any of the methods. With this approach, the case company can benefit from better fill-rate performance.
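
The managerial-implication step is easy to sketch: below, each item's demand history gets a coefficient of variation, items are bucketed into Low/Medium/High variation classes, and MAPE is computed per item so forecasting methods can be compared within each class. The CV thresholds, the naive baseline, and the synthetic demand are assumptions; the DeepAR, Seasonal Naïve, Trivial Identity, and feed-forward models themselves are not reproduced.

```python
import numpy as np

def coefficient_of_variation(demand):
    return np.std(demand) / np.mean(demand)

def variation_class(cv, low=0.3, high=0.7):       # thresholds are assumptions
    return "Low" if cv < low else ("Medium" if cv < high else "High")

def mape(actual, forecast):
    return np.mean(np.abs((actual - forecast) / actual)) * 100

# synthetic 210-day demand for a few perishable items (illustrative only)
rng = np.random.default_rng(5)
items = {f"item_{k}": rng.poisson(lam, 210).astype(float) + 1
         for k, lam in enumerate((50, 20, 5))}

for name, demand in items.items():
    cv = coefficient_of_variation(demand)
    naive_forecast = demand[:-1]                  # naive one-day-ahead baseline
    print(name, variation_class(cv), round(cv, 2),
          "naive MAPE:", round(mape(demand[1:], naive_forecast), 1))
```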

Time-Series Forecasting Based on Multi-Layer Attention Architecture

  • Na Wang;Xianglian Zhao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.1 / pp.1-14 / 2024
  • Time-series forecasting is extensively used in the real world. Recent research has shown that Transformers, with a self-attention mechanism at their core, exhibit better performance when dealing with such problems. However, most existing Transformer models for time series prediction use the traditional encoder-decoder architecture, which is complex and leads to low processing efficiency, limiting the ability to mine deep time dependencies by increasing model depth. In addition, the quadratic computational complexity of the self-attention mechanism increases computational overhead and reduces processing efficiency. To address these issues, this paper designs an efficient multi-layer attention-based time-series forecasting model with the following characteristics: (i) it abandons the traditional encoder-decoder Transformer architecture and builds the prediction model on a multi-layer attention mechanism, improving its ability to mine deep time dependencies; (ii) a cross attention module based on the cross attention mechanism is designed to enhance information exchange between the historical and predictive sequences; and (iii) a recently proposed sparse attention mechanism is applied to the model to reduce computational overhead and improve processing efficiency. Experiments on multiple datasets show that the model significantly improves on the performance of current advanced Transformer methods for time series forecasting, including LogTrans, Reformer, and Informer.
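
The cross-attention idea, in which learned prediction queries attend to the encoded history, can be sketched with `torch.nn.MultiheadAttention` as below. The embedding size, head count, and sequence lengths are assumptions, and the paper's full multi-layer architecture and sparse attention mechanism are not reproduced.

```python
import torch
import torch.nn as nn

class CrossAttentionForecaster(nn.Module):
    """Learned prediction-length queries attend to the encoded history (sketch)."""
    def __init__(self, d_model=64, n_heads=4, pred_len=24, n_features=1):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)                  # embed history values
        self.queries = nn.Parameter(torch.randn(pred_len, d_model))  # learned queries
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, n_features)

    def forward(self, history):                   # history: (batch, hist_len, n_features)
        h = self.embed(history)
        h, _ = self.self_attn(h, h, h)            # encode history with self-attention
        q = self.queries.unsqueeze(0).expand(history.size(0), -1, -1)
        out, _ = self.cross_attn(q, h, h)         # prediction queries attend to history
        return self.head(out)                     # (batch, pred_len, n_features)

# toy usage: forecast 24 steps from 96 steps of history
model = CrossAttentionForecaster()
history = torch.randn(8, 96, 1)
print(model(history).shape)                       # torch.Size([8, 24, 1])
```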