• Title/Summary/Keyword: Exponential average method

Short-term Forecasting of Power Demand based on AREA (AREA 활용 전력수요 단기 예측)

  • Kwon, S.H.; Oh, H.S.
    • Journal of Korean Society of Industrial and Systems Engineering, v.39 no.1, pp.25-30, 2016
  • It is critical to forecast the maximum daily and monthly power demand with as little error as possible for our industry and national economy. In general, long-term forecasting of power demand has been studied both from the consumer's perspective and with econometric models in the form of generalized linear models with predictors. Time series techniques are used for short-term forecasting with no predictors, since predictors must themselves be predicted before forecasting the response variable, and estimation errors are inevitably introduced in that process. In previous research, the seasonal exponential smoothing method, SARMA (Seasonal Auto Regressive Moving Average) models accounting for weekly patterns, Neuro-Fuzzy models, SVR (Support Vector Regression) models with predictors explored through machine learning, and K-means clustering techniques have been applied to short-term power demand forecasting. In this paper, SARMA and intervention models are fitted to forecast the maximum daily, weekly, and monthly power load using empirical data from 2011 through 2013. $ARMA(2,1,2)(1,1,1)_7$ and $ARMA(0,1,1)(1,1,0)_{12}$ models are fitted to the daily and monthly power demand respectively, but the weekly power demand is not fitted by AREA because the series has a unit root. In our fitted intervention model, the factors of long holidays, summer, and winter are significant in the form of indicator functions. The SARMA model, with a MAPE (Mean Absolute Percentage Error) of 2.45%, and the intervention model, with a MAPE of 2.44%, are more efficient than the present seasonal exponential smoothing, whose MAPE is about 4%. A dynamic regression model with humidity, temperature, and seasonal dummies as predictors was also applied to forecast the daily power demand, but it led to a higher MAPE of 3.5%, partly because it incorporates the estimation errors of the predictors.
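
As a concrete illustration of the seasonal-exponential-smoothing baseline and the MAPE criterion the abstract compares against, here is a minimal sketch (simple, non-seasonal smoothing; the demand values are made up, not the paper's 2011-2013 load series):

```python
# One-step-ahead simple exponential smoothing plus MAPE, the error
# metric used above. Demand values are illustrative only.

def exp_smooth_forecasts(series, alpha=0.3):
    """One-step-ahead forecasts: the forecast for time t is the
    smoothed level computed from observations up to t-1."""
    level = series[0]
    forecasts = [series[0]]          # trivial forecast for t = 0
    for y in series[1:]:
        forecasts.append(level)      # forecast made before seeing y
        level = alpha * y + (1 - alpha) * level
    return forecasts

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100 * sum(abs((a - f) / a)
                     for a, f in zip(actual, forecast)) / len(actual)

demand = [70.1, 71.4, 73.0, 72.2, 74.8, 76.1, 75.3]   # hypothetical GW
fc = exp_smooth_forecasts(demand, alpha=0.5)
print(f"MAPE: {mape(demand[1:], fc[1:]):.2f}%")
```

The paper's point is that a well-specified SARMA or intervention model roughly halves this metric relative to such a smoother.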

Estimating Optimal Harvesting Production of Yellow Croaker Caught by Multiple Fisheries Using Hamiltonian Method (해밀토니안기법을 이용한 복수어업의 참조기 최적어획량 추정)

  • Nam, Jong-Oh; Sim, Seong-Hyun; Kwon, Oh-Min
    • The Journal of Fisheries Business Administration, v.46 no.2, pp.59-74, 2015
  • This study aims to estimate the optimal harvesting production, fishing effort, and stock level of yellow croaker caught by the offshore stow net and offshore gill net fisheries, using the current-value Hamiltonian method and a surplus production model. First, this study uses the Gavaris general linear model to estimate the standardized fishing effort of yellow croaker caught by these multiple fisheries. Second, it applies the Clarke-Yoshimoto-Pooley (CY&P) model, among various surplus production models, to estimate the intrinsic growth rate (r), environmental carrying capacity (K), and catchability coefficient (q) of yellow croaker inhabiting the offshore area of Korea. Third, the study determines the optimal harvesting production, fishing effort, and stock level using the current-value Hamiltonian method, which incorporates the average landing price of yellow croaker, the average unit cost of fishing effort, and a social discount rate based on the standard of the Korea Development Institute. Finally, this study performs a sensitivity analysis to understand how changes in economic and biological parameters affect the optimal harvesting production, fishing effort, and stock level. From the current-value Hamiltonian model, the optimal harvesting production, fishing effort, and stock level of yellow croaker caught by the multiple fisheries were estimated as 19,173 tons, 101,644 horsepower, and 146,144 tons, respectively. In the sensitivity analysis, first, if the social discount rate and the average landing price of yellow croaker continuously increase, the optimal harvesting production increases at a decreasing rate and then slightly decreases, owing to decreases in the stock level. Second, if the average unit cost of fishing effort continuously increases, the optimal fishing effort of the multiple fisheries decreases but the optimal stock level increases; the optimal harvest first climbs and then continuously decreases as the average unit cost rises. Third, when the intrinsic growth rate of yellow croaker increases, the optimal harvest, fishing effort, and stock level all continuously increase. In conclusion, the optimal harvesting production and fishing effort were much less than the actual harvesting production (35,279 tons) and the estimated standardized fishing effort (175,512 horsepower) in 2013. This implies that yellow croaker has been overfished due to excessive fishing effort; efficient management and a conservative policy on the yellow croaker stock urgently need to be implemented.
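
For the logistic (Schaefer-type) surplus production model with harvest h = qEx, the steady state of the current-value Hamiltonian has a well-known closed form (Clark's golden-rule stock). A minimal sketch; the parameter values below are purely illustrative, not the paper's estimates:

```python
import math

def optimal_stock(r, K, q, p, c, delta):
    """Clark's closed-form golden-rule stock for logistic growth
    x' = r*x*(1 - x/K), harvest h = q*E*x, landing price p, cost c
    per unit effort, and discount rate delta."""
    b = c / (p * q * K) + 1 - delta / r
    return (K / 4) * (b + math.sqrt(b * b + 8 * c * delta / (p * q * K * r)))

def optimal_policy(r, K, q, p, c, delta):
    """Steady-state optimal stock, sustainable harvest, and effort."""
    x = optimal_stock(r, K, q, p, c, delta)
    h = r * x * (1 - x / K)       # logistic surplus production at x*
    E = h / (q * x)               # effort that harvests h at stock x
    return x, h, E

# Illustrative parameters only -- NOT the paper's estimated values.
x, h, E = optimal_policy(r=0.4, K=300000, q=1e-6, p=5000, c=40, delta=0.045)
```

Sanity checks built into this form: with zero cost and zero discounting it reduces to the MSY stock K/2, and as the discount rate grows it approaches the open-access stock c/(pq), matching the qualitative sensitivity results reported above.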

Modeling of Co(II) adsorption by artificial bee colony and genetic algorithm

  • Ozturk, Nurcan; Senturk, Hasan Basri; Gundogdu, Ali; Duran, Celal
    • Membrane and Water Treatment, v.9 no.5, pp.363-371, 2018
  • In this work, the usability of the artificial bee colony (ABC) and genetic algorithm (GA) methods in modeling the adsorption of Co(II) onto drinking water treatment sludge (DWTS) was investigated. DWTS, obtained as an inevitable byproduct of the drinking water treatment stages, was used as an adsorbent without any physical or chemical pre-treatment in the adsorption experiments. First, DWTS was characterized using various analytical procedures: elemental, FT-IR, SEM-EDS, XRD, XRF, and TGA/DTA analyses. Then, adsorption experiments were carried out in a batch system, and the Co(II) removal potential of DWTS was modeled via the ABC and GA methods, considering the effects of certain experimental parameters (initial pH, contact time, initial Co(II) concentration, and DWTS dosage) as the input parameters. The accuracy of the ABC and GA methods was determined, and the methods were applied to four different functional forms: quadratic, exponential, linear, and power. Several statistical indices (sum of squared errors, root mean square error, mean absolute error, average relative error, and coefficient of determination) were used to evaluate the performance of these models. The ABC and GA methods with the quadratic form gave better predictions. As a result, it was shown that ABC and GA can be used to optimize the regression function coefficients in modeling adsorption experiments.
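
The core idea of the abstract, using a population-based optimizer to fit regression coefficients by minimizing the sum of squared errors, can be sketched generically. This is a toy GA on synthetic data, not the authors' implementation or their experimental measurements:

```python
import random

def sse(coeffs, xs, ys):
    """Sum of squared errors of the quadratic form y = a*x^2 + b*x + c."""
    a, b, c = coeffs
    return sum((a * x * x + b * x + c - y) ** 2 for x, y in zip(xs, ys))

def ga_fit(xs, ys, pop_size=60, gens=300, seed=1):
    """Toy GA: quartile elitism, averaging crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda co: sse(co, xs, ys))
        elite = pop[:pop_size // 4]           # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            children.append([(u + v) / 2 + rng.gauss(0, 0.1)
                             for u, v in zip(p1, p2)])
        pop = elite + children
    return min(pop, key=lambda co: sse(co, xs, ys))

xs = [-2, -1, 0, 1, 2]                        # synthetic inputs
ys = [2 * x * x + 1 for x in xs]              # synthetic response
best = ga_fit(xs, ys)
```

Swapping the quadratic for the exponential, linear, or power form only changes the `sse` model, which is how the four functional forms in the paper would be compared under the same optimizer.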

Online Experts Screening the Worst Slicing Machine to Control Wafer Yield via the Analytic Hierarchy Process

  • Lin, Chin-Tsai; Chang, Che-Wei; Wu, Cheng-Ru; Chen, Huang-Chu
    • International Journal of Quality Innovation, v.7 no.2, pp.141-156, 2006
  • This study describes a novel algorithm for optimizing the quality yield of silicon wafer slicing. Slicing 12-inch wafers is the most difficult step in terms of semiconductor manufacturing yield. As silicon wafer slicing directly impacts production costs, semiconductor manufacturers are especially concerned with increasing and maintaining the yield, as well as identifying why yields decline. The criteria for the proposed algorithm are derived from a literature review and interviews with a group of experts in semiconductor manufacturing, and the modified Delphi method is adopted to analyze those results. The proposed algorithm also incorporates the analytic hierarchy process (AHP) to determine the evaluation weights. Additionally, the algorithm can select among the evaluation outcomes to identify the machine with the worst precision. Finally, results from an exponentially weighted moving average (EWMA) control chart demonstrate the feasibility of the proposed AHP-based algorithm in effectively selecting evaluation outcomes and evaluating the precision of the worst-performing machines. By collecting both qualitative and quantitative data and judging the results with the AHP, engineers can identify manufacturing-process yield problems quickly and effectively.
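
The EWMA control chart used in the final step follows a standard recipe: smooth the measurements and flag points outside time-varying limits. A minimal sketch with made-up slicing measurements (the abstract does not give the chart parameters, so λ and L below are conventional defaults, not the paper's):

```python
import math

def ewma_chart(data, target, sigma, lam=0.2, L=3.0):
    """EWMA control chart: returns the EWMA statistics and, per point,
    a flag marking whether it falls outside the time-varying limits
    target +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)^(2i)))."""
    z = target
    stats, flags = [], []
    for i, x in enumerate(data, start=1):
        z = lam * x + (1 - lam) * z
        half_width = L * sigma * math.sqrt(
            lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        stats.append(z)
        flags.append(abs(z - target) > half_width)
    return stats, flags

# Hypothetical thickness deviations (um) from a drifting slicing machine.
_, flags = ewma_chart([0.1, 0.4, 0.6, 0.9, 1.1, 1.3], target=0.0, sigma=0.3)
```

A machine whose samples trip these flags earliest would be the "worst precision" candidate the AHP-selected evaluation then examines.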

Probability Function of Best Fit to Distribution of Extremal Minimum Flow and Estimation of Probable Drought Flow (극소치유량에 대한 적정분포형의 설정과 확률갈수량의 산정)

  • 김지학;이순탁
    • Water for future, v.8 no.1, pp.80-88, 1975
  • In this paper the authors established the best-fit distribution function by applying the concept of probability to the annual minimum flows at nine areas along the Nakdong river basin, one of the largest Korean rivers, and calculated the probable minimum flows corresponding to that distribution function. Lastly, the authors tried to establish the best method of estimating the probable minimum flow by comparing several frequency analysis methods. The results obtained are as follows. (1) The extremal distribution type III was considered the most suitable distribution type, as a result of comparison with the exponential distribution, the log-normal distribution, and others. (2) The formula of the extremal distribution type III gave the best result in deciding the probable minimum flow of the Nakdong river basin; therefore, it is recommended that the probable minimum flow be estimated using the extremal distribution type III method. (3) For the probable minimum flow, the average non-exceedance probability appeared to be $P_o \fallingdotseq 1-\frac{1}{2T}$ and gave the same values of the probable variable, with no difference among the various plotting techniques.
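
The type III extremal distribution for minima is the Weibull distribution; fitting it from a probability plot and reading off a T-year drought flow can be sketched as follows. This assumes the two-parameter form with a zero lower bound and a standard plotting position, and the data are synthetic, not the Nakdong records:

```python
import math

def weibull_min_quantile(theta, k, T):
    """T-year drought flow from a type III extremal (Weibull) law
    F(x) = 1 - exp(-(x/theta)^k): the flow whose non-exceedance
    probability is 1/T."""
    p = 1.0 / T
    return theta * (-math.log(1.0 - p)) ** (1.0 / k)

def fit_weibull_min(flows):
    """Least-squares fit on the Weibull probability plot
    ln(-ln(1-F)) = k*ln(x) - k*ln(theta), using the plotting
    position p_i = (i - 0.3) / (n + 0.4)."""
    xs = sorted(flows)
    n = len(xs)
    pts = [(math.log(x), math.log(-math.log(1 - (i - 0.3) / (n + 0.4))))
           for i, x in enumerate(xs, 1)]
    mx = sum(u for u, _ in pts) / n
    my = sum(v for _, v in pts) / n
    k = (sum((u - mx) * (v - my) for u, v in pts)
         / sum((u - mx) ** 2 for u, _ in pts))
    theta = math.exp(mx - my / k)
    return theta, k
```

Given annual minimum flows, `fit_weibull_min` returns the scale and shape, and `weibull_min_quantile(theta, k, 10)` gives the 10-year probable drought flow.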

An Adaptive Proximity Route Selection Method in DHT-Based Peer-to-Peer Systems (DHT 기반 피어-투-피어 시스템을 위한 적응적 근접경로 선택기법)

  • Song Ji-Young; Han Sae-Young; Park Sung-Yong
    • The KIPS Transactions: Part A, v.13A no.1 s.98, pp.11-18, 2006
  • In an internet composed of various networks, it is difficult to reduce real routing time merely by minimizing hop count. We propose an adaptive proximity route selection method for DHT-based peer-to-peer systems, in which each node selects, as the next routing node, the routing table entry with the smallest estimated lookup latency. Using the Q-Routing algorithm and an exponential recency-weighted average, each node estimates the total latency and maintains a lookup table. Moreover, without additional overhead, nodes exchange their lookup tables to update their routing tables. Several simulations measuring lookup latency and hop-by-hop latency show that our method outperforms the original Chord method as well as CFS's server selection method.
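
The exponential recency-weighted average at the heart of this scheme is the standard incremental update `estimate += alpha * (observed - estimate)`. A minimal sketch of the latency table and next-hop choice (node names and values are hypothetical, and the table-exchange step is omitted):

```python
def update_latency(table, node, observed, alpha=0.1):
    """Exponential recency-weighted average of observed lookup latency,
    the Q-Routing-style update. Unseen nodes start at their first
    observation."""
    old = table.get(node, observed)
    table[node] = old + alpha * (observed - old)
    return table[node]

def pick_next_hop(table, candidates):
    """Choose the candidate with the smallest estimated latency;
    unmeasured candidates are treated as infinitely slow."""
    return min(candidates, key=lambda n: table.get(n, float("inf")))

latency = {}
for sample in [120.0, 80.0, 95.0]:     # measured lookups via node "a" (ms)
    update_latency(latency, "a", sample)
update_latency(latency, "b", 60.0)
next_hop = pick_next_hop(latency, ["a", "b"])
```

Because recent samples dominate the average, the estimate adapts when a neighbor's network conditions change, which is why the method can beat plain hop-count routing.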

The Study for Software Future Forecasting Failure Time Using Time Series Analysis. (시계열 분석을 이용한 소프트웨어 미래 고장 시간 예측에 관한 연구)

  • Kim, Hee-Cheul; Shin, Hyun-Cheul
    • Convergence Security Journal, v.11 no.3, pp.19-24, 2011
  • Software failure times presented in the literature exhibit either a constant trend, a monotonic increase, or a monotonic decrease. For data analysis of software reliability models, trend analysis tools have been developed; the methods of trend analysis are the arithmetic mean test and the Laplace trend test, but trend analysis offers only outline information. In this paper, we discuss forecasting failure times in the case of censored failure times. In this study, time series analysis with simple moving averages, weighted moving averages, and the exponential smoothing method is used to predict future failure times. The empirical analysis used interval failure times for the prediction, and model selection using the mean squared error was presented for effective comparison.
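
The three forecasters compared in the paper, and selection by mean squared error, can be sketched in a few lines. The interval failure times below are hypothetical, and the window sizes and smoothing constant are illustrative choices:

```python
def sma_forecast(history, window=3):
    """Simple moving average of the last `window` observations."""
    return sum(history[-window:]) / window

def wma_forecast(history, weights=(1, 2, 3)):
    """Weighted moving average; later weights apply to more recent
    observations."""
    recent = history[-len(weights):]
    return sum(w * x for w, x in zip(weights, recent)) / sum(weights)

def ses_forecast(history, alpha=0.4):
    """Simple exponential smoothing over the whole history."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def one_step_mse(series, forecaster, start=3):
    """Mean squared error of rolling one-step-ahead forecasts."""
    errors = [(forecaster(series[:t]) - series[t]) ** 2
              for t in range(start, len(series))]
    return sum(errors) / len(errors)

intervals = [30, 35, 33, 40, 38, 45, 47, 50]   # hypothetical failure intervals
models = {"SMA": sma_forecast, "WMA": wma_forecast, "SES": ses_forecast}
best = min(models, key=lambda name: one_step_mse(intervals, models[name]))
```

Whichever model minimizes the rolling one-step MSE on the observed intervals is the one used to forecast the next failure time.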

A Study on Detection of Underwater Ferromagnetic Target for Harbor Surveillance (항만 감시를 위한 수중 강자성 표적 탐지에 관한 연구)

  • Kim, Minho; Joo, Unggul; Lim, Changsum; Yoon, Sanggi; Moon, Sangtaeck
    • Journal of the Korea Institute of Military Science and Technology, v.18 no.4, pp.350-357, 2015
  • Many countries have been developing and operating underwater surveillance systems in order to protect their maritime environment from infiltrating hostile marine forces, which may intend to lay mines, conduct reconnaissance, or destroy friendly ships anchored in the harbor. One of the most efficient methods of detecting an unidentified submarine approaching a harbor is to sense the variation of the target's magnetism with magnetic sensors. Such a measurement system has the advantage of a high probability of detection and a low probability of false alarm compared to acoustic sensors, although its detection range is relatively short. This paper mainly covers an analysis of the potential effectiveness of magnetic sensors. First, the environmental characteristics of the surveillance area and the magnetic signatures of simulated targets are analyzed. Subsequently, a signal processing method for separating the target from the geomagnetic field and methods of estimating the target's location are proposed.
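
The abstract does not specify how the target is separated from the geomagnetic field; one common, minimal approach is to track the slowly varying background with an exponential moving average and flag large residuals. A hypothetical sketch with made-up field values:

```python
def detect_anomaly(samples, alpha=0.05, threshold=5.0):
    """Track the slow geomagnetic background with an exponential moving
    average and flag samples whose residual magnitude exceeds the
    threshold (in the same units as the samples, e.g. nT).
    This is an illustrative filter, not the paper's method."""
    background = samples[0]
    flags = []
    for s in samples:
        residual = s - background
        flags.append(abs(residual) > threshold)
        background += alpha * residual      # slow background update
    return flags

# Hypothetical magnetometer trace: quiet field with a ferromagnetic
# target passing by around samples 30-34.
passby = [50000.0] * 30 + [50020.0] * 5 + [50000.0] * 30
flags = detect_anomaly(passby)
```

A small `alpha` makes the background estimate follow diurnal geomagnetic drift while remaining insensitive to the short-lived anomaly of a passing target.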

A Study on the Forecasting of Container Volume using Neural Network (신경망을 이용한 컨테이너 물동량 예측에 관한 연구)

  • Park, Sung-Young; Lee, Chul-Young
    • Journal of Navigation and Port Research, v.26 no.2, pp.183-188, 2002
  • Forecasting container traffic is very important for port planning and development. Generally, statistical methods such as the moving average method, exponential smoothing, and regression analysis have been widely used for traffic forecasting. However, considering that various port-related factors affect container volume, a neural network, as a parallel processing system, can be effective for forecasting container volume based on those factors. This study discusses forecasting container volume using a neural network with the back-propagation learning algorithm. Influential factors are selected based on their impact vectors in the neural network, and these selected factors are used to forecast container volume. The proposed neural-network forecasting algorithm was compared with the statistical methods.
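
A generic one-hidden-layer network trained with back-propagation, the family of model the abstract describes, can be sketched from scratch. The architecture, learning rate, and training data below are illustrative assumptions, not the paper's configuration:

```python
import math
import random

class TinyNet:
    """One-hidden-layer tanh network trained with plain back-propagation
    on squared error. A generic sketch, not the paper's architecture."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [rng.uniform(-1, 1) for _ in range(n_hidden)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        return sum(w * h for w, h in zip(self.w2, self.h)) + self.b2

    def train_step(self, x, y, lr=0.05):
        err = self.forward(x) - y                     # d(0.5*err^2)/d(out)
        for j, h in enumerate(self.h):
            grad_h = err * self.w2[j] * (1 - h * h)   # back through tanh
            self.w2[j] -= lr * err * h
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * grad_h * xi
            self.b1[j] -= lr * grad_h
        self.b2 -= lr * err

# Hypothetical scaled factor values -> scaled container volume.
data = [([0.1, 0.2], 0.3), ([0.4, 0.1], 0.5),
        ([0.3, 0.3], 0.6), ([0.2, 0.5], 0.7)]
net = TinyNet(n_in=2, n_hidden=4, seed=1)
for _ in range(3000):
    for x, y in data:
        net.train_step(x, y)
```

Input factors would be scaled port indicators; the paper's "impact vector" screening corresponds to inspecting how strongly each input weighs on the trained network.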

A Study on the Analysis of Container Logistics System by Simulation Method -with reference to BCTOC- (시뮬레이션에 의한 컨테이너 터미널 물류시스템의 분석에 관한 연구 (BCTOC를 중심으로))

  • 임봉택;이재원;성경빈;이철영
    • Journal of Korean Port Research, v.12 no.2, pp.251-260, 1998
  • To build a simulation model of cargo-handling capacity in a container terminal, we composed a container logistics system model with four subsystems: cargo handling, transportation, storage, and the gate complex. Several data sets used in the simulation were obtained through a field study and a basic statistical analysis of raw data on BCTOC from January to June 1998. The results of this study are as follows. First, the average availability ratios of the subsystems were 50% for G/C, 57.5% for Y/T, 56% for the storage system, and 50% for the gate complex, and no subsystem exhibited a specific bottleneck. Second, by comparing the simulation results with the basic statistical analysis, we verified the suitability of this simulation model. Third, by comparing the results of this study with those of a similar study conducted in 1996, we were able to confirm the changes in the container logistics system at BCTOC.
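
The kind of utilization figure reported above can be produced by a discrete-event simulation of each subsystem as a queue. A minimal single-server sketch with hypothetical rates (the paper's four-subsystem model and the BCTOC data are much richer):

```python
import random

def crane_utilization(n_ships=10000, mean_interarrival=2.0,
                      mean_service=1.0, seed=42):
    """Single-server queue sketch of one cargo-handling subsystem:
    exponential interarrival and service times; returns the fraction
    of elapsed time the server (e.g. a gantry crane) is busy."""
    rng = random.Random(seed)
    arrival = 0.0
    server_free_at = 0.0
    busy_time = 0.0
    for _ in range(n_ships):
        arrival += rng.expovariate(1.0 / mean_interarrival)
        start = max(arrival, server_free_at)   # wait if server busy
        service = rng.expovariate(1.0 / mean_service)
        server_free_at = start + service
        busy_time += service
    return busy_time / server_free_at

u = crane_utilization()   # theory: rho = 1.0 / 2.0 for these rates
```

With service twice as fast as arrivals, the simulated utilization settles near 50%, the same order as the subsystem ratios reported for BCTOC.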
