• Title/Summary/Keyword: weighted average method (가중평균법)

Elastic Wave Modeling Including Surface Topography Using a Weighted-Averaging Finite Element Method in Frequency Domain (지형을 고려한 주파수 영역 가중평균 유한요소법 탄성파 모델링)

  • Choi, Ji-Hyang;Nam, Myung-Jin;Min, Dong-Joo;Shin, Chang-Soo;Suh, Jung-Hee
    • Geophysics and Geophysical Exploration, v.11 no.2, pp.93-98, 2008
  • Abstract: Surface topography has a significant influence on seismic wave propagation in reflection seismic exploration. Effects of surface topography on two-dimensional elastic wave propagation are investigated through modeling with a weighted-averaging (WA) finite-element method (FEM), which is computationally more efficient than conventional FEM. Effects of an air layer on wave propagation are also investigated using flat-surface models with and without air. To validate our scheme for models including topography, we compare WA FEM results for irregular topographic models against those derived from conventional FEM using one set of rectangular elements. For the irregular surface topography models, elastic wave propagation is simulated to show that breaks in slope act as new sources of diffracted waves, and that Rayleigh waves are more seriously distorted by surface topography than P-waves.
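To make the weighted-averaging idea concrete, here is a minimal, hypothetical sketch (not the authors' code): stiffness matrices assembled on different element configurations are blended with weights that sum to one, and a frequency-domain system (K - ω²M)u = f is solved for a single frequency. The matrices, weights, and frequency below are illustrative toy values standing in for a real 2-D elastic assembly.

```python
# Hypothetical sketch of a weighted-averaging frequency-domain solve.
# Matrix assembly on real 2-D elastic elements is not shown.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def weighted_average_operator(stiffness_sets, weights):
    """Blend stiffness matrices assembled on different element sets."""
    assert abs(sum(weights) - 1.0) < 1e-12, "averaging weights must sum to 1"
    combined = weights[0] * stiffness_sets[0]
    for w, K in zip(weights[1:], stiffness_sets[1:]):
        combined = combined + w * K
    return combined

def solve_frequency_domain(K_wa, M, omega, source):
    """Solve the discretized equation (K - omega^2 M) u = f at one frequency."""
    A = (K_wa - omega**2 * M).tocsc()
    return spla.spsolve(A, source)

# Toy 1-D stand-in for the assembled 2-D elastic system.
n = 50
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
K1 = sp.diags([off, main, off], [-1, 0, 1])          # one element configuration
K2 = sp.diags([off, 1.05 * main, off], [-1, 0, 1])   # an alternative configuration
M = sp.identity(n) * 0.01                            # toy mass matrix
f = np.zeros(n)
f[n // 2] = 1.0                                      # point source

K_wa = weighted_average_operator([K1, K2], [0.6, 0.4])
u = solve_frequency_domain(K_wa, M, 2.0 * np.pi * 5.0, f)
```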

Object Tracking Using Weighted Average Maximum Likelihood Neural Network (최대우도 가중평균 신경망을 이용한 객체 위치 추적)

  • Sun-Bae Park;Do-Sik Yoo
    • Journal of Advanced Navigation Technology, v.27 no.1, pp.43-49, 2023
  • Object tracking has been studied with various techniques such as the Kalman filter and the Luenberger tracker. Even in situations to which existing signal processing techniques cannot be successfully applied, for example when the system model is not well specified, it is possible to design artificial neural networks to track objects. In this paper, we propose an artificial neural network, which we call a 'maximum-likelihood weighted-average neural network', to continuously track unpredictably moving objects. This neural network does not estimate the location of an object directly; instead, it obtains location estimates by forming a weighted average that combines the results of maximum likelihood tracking over different data lengths. We compare the performance of the proposed system with those of the Kalman filter and maximum likelihood object trackers and show that the proposed scheme exhibits excellent performance, adapting well to changes in the object's motion characteristics.
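The combining step can be pictured with the following minimal sketch, which is a stand-in and not the proposed network: maximum-likelihood (least-squares) position estimates are computed over several data lengths and merged by a weighted average; fixed softmax weights play the role of the weights a trained network would supply, and the track data are simulated.

```python
# Sketch: ML position estimates from several data lengths, combined by a
# weighted average.  Fixed softmax weights stand in for a trained network.
import numpy as np

def ml_position_estimate(observations):
    """Least-squares (Gaussian ML) constant-velocity fit, evaluated at the
    latest time index."""
    t = np.arange(len(observations))
    slope, intercept = np.polyfit(t, observations, 1)
    return slope * t[-1] + intercept

def weighted_average_estimate(observations, window_lengths, logits):
    weights = np.exp(logits) / np.sum(np.exp(logits))            # softmax weights
    estimates = [ml_position_estimate(observations[-L:]) for L in window_lengths]
    return float(np.dot(weights, estimates))

# Noisy 1-D track of an object whose velocity changes midway.
rng = np.random.default_rng(0)
true_pos = np.concatenate([np.linspace(0, 10, 20), np.linspace(10, 5, 20)])
obs = true_pos + rng.normal(scale=0.3, size=true_pos.size)

print(weighted_average_estimate(obs, window_lengths=[4, 8, 16],
                                logits=np.array([0.5, 0.2, -0.3])))
```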

Comparison of Methods for the Analysis Percentile of Seismic Hazards (지진재해도의 백분위수 분석 방법 비교)

  • Rhee, Hyun-Me;Seo, Jung-Moon;Kim, Min-Kyu;Choi, In-Kil
    • Journal of the Earthquake Engineering Society of Korea, v.15 no.2, pp.43-51, 2011
  • Probabilistic seismic hazard analysis (PSHA), which can effectively incorporate the unavoidable uncertainties in seismic data, considers a number of seismotectonic models and attenuation equations. The hazard calculated by PSHA generally depends on peak ground acceleration (PGA) and is expressed as an annual exceedance probability. To represent the uncertainty range of a hazard derived from various seismic data, a hazard curve figure shows both a mean curve and percentile curves (15, 50, and 85). The percentile plays an important role in indicating the uncertainty range of the calculated hazard, and it can be computed by various methods based on the relation between the weights and the hazard. This study calculated the percentiles of the hazard computed by PSHA for the Shinuljin 1, 2 site using the weight accumulation method, the weighted hazard method, the maximum likelihood method, and the moment method. The percentiles calculated using the weight accumulation method, the weighted hazard method, and the maximum likelihood method show similar trends and represent the range of all hazards computed by PSHA. The percentile calculated using the moment method effectively showed the range of hazards for the source zone that includes the site. This study therefore suggests the moment method as an effective percentile calculation method, considering that the mean hazard is almost the same for each seismotectonic model and for the source zone that includes the site.
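The weight accumulation idea can be sketched as a weighted percentile over logic-tree branches, as below; the hazard values and branch weights are illustrative assumptions, not numbers from the study.

```python
# Sketch of the weight accumulation idea: percentiles read off the weighted
# CDF of per-branch annual exceedance probabilities at one PGA level.
import numpy as np

def weighted_percentile(values, weights, q):
    """q in percent; weights are the logic-tree branch weights."""
    values, weights = np.asarray(values, float), np.asarray(weights, float)
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w) / np.sum(w)               # accumulated weight (CDF)
    return float(np.interp(q / 100.0, cum, v))

hazard = [1.2e-4, 2.5e-4, 3.1e-4, 4.8e-4, 7.0e-4]   # per-branch hazards (illustrative)
weight = [0.10, 0.25, 0.30, 0.25, 0.10]             # branch weights (illustrative)

for q in (15, 50, 85):
    print(q, weighted_percentile(hazard, weight, q))
```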

The Study of Prediction Model of Gas Accidents Using Time Series Analysis (시계열 분석을 이용한 가스사고 발생 예측 연구)

  • Lee, Su-Kyung;Hur, Young-Taeg;Shin, Dong-Il;Song, Dong-Woo;Kim, Ki-Sung
    • Journal of the Korean Institute of Gas, v.18 no.1, pp.8-16, 2014
  • In this study, a model for predicting the number of gas accidents was proposed by analyzing the gas accidents that occurred in Korea. To predict the number of gas accidents, simple moving average methods (3, 4, and 5 periods), a weighted moving average method, and an exponential smoothing method were applied. The sums of mean-square error obtained for the 4-period moving average model and the weighted moving average model showed the highest values, 44.4 and 43 respectively. The developed prediction model for the number of gas accidents could be actively utilized in gas accident prevention activities.
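The three forecasting rules compared in the study can be sketched as follows; the accident counts, window lengths, weights, and smoothing constant are illustrative assumptions, not the paper's data.

```python
# Sketch of the three forecasting rules: simple moving average, weighted
# moving average, and exponential smoothing (illustrative inputs).
import numpy as np

def simple_moving_average(series, period):
    """Next-period forecast = mean of the last `period` observations."""
    return float(np.mean(series[-period:]))

def weighted_moving_average(series, weights):
    """More recent observations receive larger weights."""
    window = np.asarray(series[-len(weights):], float)
    return float(np.dot(window, weights) / np.sum(weights))

def exponential_smoothing(series, alpha):
    """Recursive smoothing: forecast = alpha*x + (1-alpha)*previous forecast."""
    forecast = series[0]
    for x in series[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
    return float(forecast)

accidents = [120, 135, 118, 142, 130, 125, 138]     # hypothetical yearly counts
print(simple_moving_average(accidents, period=4))
print(weighted_moving_average(accidents, weights=np.array([0.1, 0.2, 0.3, 0.4])))
print(exponential_smoothing(accidents, alpha=0.3))
```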

A Study on Objective Functions for the Multi-purpose Dam Operation Plan in Korea (국내 다목적댐 운영계획에 적합한 목적함수에 관한 연구)

  • Eum, Hyung-Il;Kim, Young-Oh;Yun, Ji-Hyun;Ko, Ick-Hwan
    • Journal of Korea Water Resources Association, v.38 no.9 s.158, pp.737-746, 2005
  • Optimization is a process that searches for a solution that maximizes or minimizes the value of an objective function. Many researchers have focused on effective search algorithms for the optimum, but few have examined how to establish the objective function itself. This study compares two approaches to the objective function: one allows a tradeoff among the objectives, and the other does not, by assigning weights that impose an absolute priority between the objectives. An optimization model using sampling stochastic dynamic programming was applied to these two objective functions, and the resulting optimal policies were compared. As a result, the objective function with no tradeoff yields a decision-making process that matches practical reservoir operations better than the one that allows a tradeoff. Therefore, it is more reasonable to establish an objective function with no tradeoff among the objectives for multi-purpose dam operation planning in Korea.
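The two objective-function styles can be contrasted with a small sketch (hypothetical penalty terms, not the study's reservoir model): comparable weights allow objectives to be traded off, while a very large priority weight makes the higher-priority objective dominate regardless of the lower-priority one.

```python
# Sketch: tradeoff-allowing weighted sum vs. absolute-priority weighting
# (penalty values below are made up for illustration).
def tradeoff_objective(flood_risk, supply_deficit, w1=0.6, w2=0.4):
    """Comparable weights: a large supply gain can offset extra flood risk."""
    return w1 * flood_risk + w2 * supply_deficit

def priority_objective(flood_risk, supply_deficit, big=1e6):
    """Absolute priority: flood risk dominates whatever the supply deficit is."""
    return big * flood_risk + supply_deficit

# Two candidate release policies (hypothetical penalty scores).
policy_a = {"flood_risk": 0.02, "supply_deficit": 5.0}
policy_b = {"flood_risk": 0.00, "supply_deficit": 8.0}

for obj in (tradeoff_objective, priority_objective):
    best = min((policy_a, policy_b), key=lambda p: obj(**p))
    print(obj.__name__, "prefers", "A" if best is policy_a else "B")
```

With these toy numbers the tradeoff objective prefers policy A (accepting a little flood risk for better supply), while the priority objective prefers policy B, which keeps flood risk at zero.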

A Spatial Interpolation Model for Daily Minimum Temperature over Mountainous Regions (산악지대의 일 최저기온 공간내삽모형)

  • Yun Jin-Il;Choi Jae-Yeon;Yoon Young-Kwan;Chung Uran
    • Korean Journal of Agricultural and Forest Meteorology, v.2 no.4, pp.175-182, 2000
  • Spatial interpolation of daily temperature forecasts and observations issued by public weather services is frequently required to make them applicable to agricultural activities and modeling tasks. In contrast to long-term averages such as monthly normals, terrain effects are not considered in most spatial interpolations of short-term temperatures. This may cause erroneous results in mountainous regions, where the observation network hardly covers the full features of the complicated terrain. We developed a spatial interpolation model for daily minimum temperature that combines inverse distance squared weighting with an elevation difference correction. The model uses a time-dependent function for the 'mountain slope lapse rate', which can be derived from regression analyses of the station observations with respect to the geographical and topographical features of the surroundings, including the station elevation. We applied this model to the interpolation of daily minimum temperature over the mountainous Korean Peninsula using data from 63 standard weather stations. As a first step, a primitive temperature surface was interpolated by inverse distance squared weighting of the 63 point data. Next, a virtual elevation surface was reconstructed by spatially interpolating the 63 station elevations and subtracted from the elevation surface of a digital elevation model with 1 km grid spacing to obtain the elevation difference at each grid cell. Final estimates of daily minimum temperature at all grid cells were obtained by applying the calculated daily lapse rate to the elevation difference and adjusting the inverse-distance-weighted estimates. Independent measured data sets from 267 automated weather station locations were used to calculate the estimation errors on 12 dates, one randomly selected for each month of 1999. Analysis of three error measures (mean error, mean absolute error, and root mean square error) indicates a substantial improvement over inverse distance squared weighting alone.

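The scheme described above can be sketched as follows; the station coordinates, temperatures, elevations, and the lapse rate are illustrative assumptions, and the regression-based, time-dependent lapse rate is replaced here by a fixed value.

```python
# Sketch: inverse distance squared weighting plus a lapse-rate correction for
# the difference between the DEM elevation and the interpolated station
# elevation at a target grid cell (illustrative inputs).
import numpy as np

def idw2(xy_stations, values, xy_target):
    d2 = np.sum((xy_stations - xy_target) ** 2, axis=1)
    if np.any(d2 == 0):
        return float(values[np.argmin(d2)])
    w = 1.0 / d2
    return float(np.sum(w * values) / np.sum(w))

def min_temp_estimate(xy_st, t_st, z_st, xy_cell, z_dem, lapse_rate):
    t_idw = idw2(xy_st, t_st, xy_cell)       # primitive temperature surface
    z_virtual = idw2(xy_st, z_st, xy_cell)   # virtual (station-based) elevation
    dz = z_dem - z_virtual                   # elevation difference (m)
    return t_idw + lapse_rate * dz           # lapse rate in degC per m (negative)

xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # station coords (km)
tmin = np.array([-3.0, -1.5, -4.2])                     # observed Tmin (degC)
elev = np.array([120.0, 80.0, 300.0])                   # station elevations (m)

print(min_temp_estimate(xy, tmin, elev, xy_cell=np.array([4.0, 4.0]),
                        z_dem=450.0, lapse_rate=-0.0065))
```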

A numerical study on portfolio VaR forecasting based on conditional copula (조건부 코퓰라를 이용한 포트폴리오 위험 예측에 대한 실증 분석)

  • Kim, Eun-Young;Lee, Tae-Wook
    • Journal of the Korean Data and Information Science Society, v.22 no.6, pp.1065-1074, 2011
  • Over the past several decades, many researchers in the field of finance have studied Value at Risk (VaR) to measure market risk. VaR indicates the worst loss over a target horizon such that there is a low, pre-specified probability that the actual loss will be larger (Jorion, 2006, p.106). In this paper, we compare the conditional copula method with two conventional VaR forecasting methods, based on the simple moving average and the exponentially weighted moving average, for measuring the risk of a portfolio consisting of two domestic stock indices. Through real data analysis, we conclude that the conditional copula method can improve the accuracy of portfolio VaR forecasting in the presence of high kurtosis and strong correlation in the data.
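The two conventional baselines can be sketched as below; the copula step is not reproduced, the returns are simulated, and the exponential decay factor of 0.94 and the 5% level are assumptions rather than the paper's settings.

```python
# Sketch of the two conventional VaR baselines: simple moving average and
# exponentially weighted moving average volatility with a normal quantile.
import numpy as np
from scipy.stats import norm

def sma_var(returns, window, alpha=0.05):
    """One-day VaR from the sample standard deviation of a rolling window."""
    sigma = np.std(returns[-window:], ddof=1)
    return -norm.ppf(alpha) * sigma

def ewma_var(returns, lam=0.94, alpha=0.05):
    """One-day VaR from an exponentially weighted moving-average variance."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return -norm.ppf(alpha) * np.sqrt(var)

rng = np.random.default_rng(1)
portfolio_returns = rng.normal(0.0, 0.012, size=500)   # hypothetical daily returns
print(sma_var(portfolio_returns, window=250))
print(ewma_var(portfolio_returns))
```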

The Mean Formula of Implicate Quantity (내포량의 평균 공식과 조작적 학습법)

  • Kim, Myung-Woon
    • Journal for History of Mathematics, v.23 no.3, pp.121-140, 2010
  • This study presents one universal mean formula of implicate quantity for speed, temperature, concentration, density, unit cost, and national income per person, in order to avoid the inconvenience of applying a different formula to each of them. This is done by using the principle of the lever, which leads to the mean formula for two implicate quantities, $M=\frac{x_1f_1+x_2f_2}{f_1+f_2}$, and helps in understanding the relationships within this formula. The values of ratios cannot simply be added, but the formula shows that the mean can be calculated by weighting each value according to the size of its ratio. The formula extends to combining many quantities at once: $M=\frac{x_1f_1+x_2f_2+{\cdots}+x_nf_n}{N}$, where $f_1+f_2+{\cdots}+f_n=N$. For this reason, this mean formula should be helpful in physics as well as in many other fields.
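A worked numerical example of the two-quantity formula (added here for illustration, not taken from the paper): mixing 2 kg of water at $80^{\circ}C$ with 3 kg of water at $30^{\circ}C$, assuming equal specific heats, gives a mixture temperature of $M=\frac{80\cdot2+30\cdot3}{2+3}=\frac{250}{5}=50^{\circ}C$, i.e., the mass-weighted average of the two temperatures.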

Comparison and Evaluation of Root Mean Square for Parameter Settings of Spatial Interpolation Method (공간보간법의 매개변수 설정에 따른 평균제곱근 비교 및 평가)

  • Lee, Hyung-Seok
    • Journal of the Korean Association of Geographic Information Studies, v.13 no.3, pp.29-41, 2010
  • In this study, the prediction errors of various spatial interpolation methods used to model values at unmeasured locations were compared, and the accuracy of the predictions was evaluated. The root mean square (RMS) error was calculated for different parameter settings of spatial interpolation techniques such as inverse distance weighting, kriging, local polynomial interpolation, and radial basis functions, applied to known elevation data of the east coastal area under the same conditions. As a result, the circular model of simple kriging produced the smallest RMS value. The prediction map produced with the multiquadric radial basis function coincided with the spatial distribution obtained by constructing a triangulated irregular network of the study area through raster mathematics. In addition, better interpolation results can be obtained by setting the optimal power value under the selected conditions.
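The parameter comparison can be sketched with leave-one-out RMS error over inverse-distance weighting powers; this is a simplification of the study's setup, and the elevation samples below are synthetic.

```python
# Sketch: compare IDW power settings by leave-one-out RMS error
# (synthetic elevation samples, not the study's data).
import numpy as np

def idw_predict(xy_known, z_known, xy_target, power):
    d = np.sqrt(np.sum((xy_known - xy_target) ** 2, axis=1))
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return np.sum(w * z_known) / np.sum(w)

def loo_rms(xy, z, power):
    """Root mean square of leave-one-out prediction errors."""
    errors = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        pred = idw_predict(xy[mask], z[mask], xy[i], power)
        errors.append(pred - z[i])
    return float(np.sqrt(np.mean(np.square(errors))))

rng = np.random.default_rng(2)
xy = rng.uniform(0, 100, size=(40, 2))                  # sample locations
z = 50 + 0.5 * xy[:, 0] + rng.normal(0, 3, size=40)     # synthetic elevations (m)

for power in (1, 2, 3):
    print(power, loo_rms(xy, z, power))
```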

Longevity Bond Pricing by a Cohort-based Stochastic Mortality (코호트 사망률을 이용한 장수채권 가격산출)

  • Jho, Jae Hoon;Lee, Kangsoo
    • The Korean Journal of Applied Statistics, v.28 no.4, pp.703-719, 2015
  • We propose an extension of the Lee and Jho (2015) mean-reverting two-factor mortality model by incorporating a period-specific cohort effect. We found that considering the cohort effect improves the mortality fit for Korean males above age 65. Parameters are estimated by the weighted least squares method and the Metropolis algorithm. We also emphasize that the cohort effect is necessary for choosing the base survival index used to calculate the longevity bond issue price. A key contribution of the article is the proposal and development of a method to calculate the longevity bond price to hedge the longevity risk to which the Korea National Pension Service is exposed.
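The weighted least squares step can be sketched generically as below; the linear "model", the weights, and the synthetic log-mortality values are illustrative stand-ins, not the paper's cohort mortality model.

```python
# Sketch of a weighted least squares fit via the normal equations
# (synthetic log-mortality data standing in for the cohort model).
import numpy as np

def weighted_least_squares(X, y, weights):
    """Solve argmin_beta sum_i w_i (y_i - x_i beta)^2."""
    W = np.diag(weights)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

ages = np.arange(65, 90)
X = np.column_stack([np.ones_like(ages, dtype=float), ages.astype(float)])
log_mortality = (-9.0 + 0.09 * ages
                 + np.random.default_rng(3).normal(0, 0.05, ages.size))
w = 1.0 / np.linspace(1.0, 3.0, ages.size)   # downweight the oldest, noisiest ages

print(weighted_least_squares(X, log_mortality, w))
```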