• Title/Abstract/Keywords: model errors

Search results: 3,127

Validation and selection of GCPs obtained from ERS SAR and the SRTM DEM: Application to SPOT DEM Construction

  • Jung, Hyung-Sup; Hong, Sang-Hoon; Won, Joong-Sun
    • Korean Journal of Remote Sensing / v.24 no.5 / pp.483-496 / 2008
  • Qualified ground control points (GCPs) are required to construct a digital elevation model (DEM) from a pushbroom stereo pair. An inverse geolocation algorithm for extracting GCPs from ERS SAR data and the SRTM DEM was recently developed. However, not all GCPs established by this method are accurate enough for direct application to the geometric correction of pushbroom images such as SPOT, IRS, etc., so a method for selecting and removing inaccurate points from the sets of GCPs is needed. In this study, we propose a method for evaluating GCP accuracy and winnowing sets of GCPs through orientation modeling of a pushbroom image, and we validate its performance using a SPOT stereo pair of Daejon City. We found that the statistical distribution of GCP positional errors is approximately Gaussian without bias, and that the residual errors estimated by orientation modeling have a linear relationship with the positional errors. Inaccurate GCPs have large positional errors and can be iteratively eliminated by thresholding the residual errors. Forty-one GCPs were initially extracted for the test, with mean positional errors of 25.6 m, 2.5 m and -6.1 m in the X-, Y- and Z-directions, respectively, and standard deviations of 62.4 m, 37.6 m and 15.0 m. Twenty-one GCPs were eliminated by the proposed method, reducing the standard deviations of the positional errors of the 20 remaining GCPs to 13.9 m, 8.5 m and 7.5 m in the X-, Y- and Z-directions, respectively. Orientation modeling of the SPOT stereo pair was performed using these 20 GCPs, and the model was checked against 15 map-based points. The root mean square errors (RMSEs) of the model were 10.4 m, 7.1 m and 12.1 m in the X-, Y- and Z-directions, respectively. A SPOT DEM with a 20 m ground resolution was successfully constructed using an automatic matching procedure.
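
The iterative elimination step described above (drop the GCPs whose orientation-model residuals exceed a threshold, refit, and repeat) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the residual function, threshold and data are stand-ins.

    import numpy as np

    def winnow_gcps(points, residual_fn, threshold, max_iter=20):
        """Iteratively drop GCPs whose model residuals exceed `threshold`.

        points:      (N, 3) array of candidate GCP positions
        residual_fn: stand-in for fitting the orientation model to the
                     kept GCPs and returning one residual per point
        """
        keep = np.ones(len(points), dtype=bool)
        for _ in range(max_iter):
            residuals = residual_fn(points[keep])
            bad = np.abs(residuals) > threshold
            if not bad.any():
                break                      # every remaining GCP passes
            kept_idx = np.flatnonzero(keep)
            keep[kept_idx[bad]] = False    # eliminate, then refit
        return keep

    # Toy usage: random residuals stand in for the orientation model.
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(41, 3))
    mask = winnow_gcps(pts, lambda p: rng.normal(0, 15, len(p)), threshold=30)
    print(mask.sum(), "GCPs retained out of", len(pts))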

A study on motion errors due to acceleration and deceleration types of servo motors (서보모터의 가감속형태에 따른 운동오차에 관한 연구)

  • Shin, Dong-Soo; Chung, Sung-Chong
    • Transactions of the Korean Society of Mechanical Engineers A / v.21 no.10 / pp.1718-1729 / 1997
  • This paper describes motion errors due to the acceleration and deceleration types of servo motors in NC machine tools. Motion errors are composed of two components: one is due to the transient response of the servomechanism, and the other comes from gain mismatching of the positioning servo motors. Circular interpolation tests, measured through an interface card, are used to identify the motion errors. In order to minimize these errors, the study presents an effective method for optimizing the parameters connected with them. The proposed method is based upon a second-order polynomial regression model and uses an orthogonal array to design the experiments efficiently. The validity and reliability of the method were verified on a vertical machining center equipped with a FANUC 0MC controller through a series of experiments and analyses.
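
As a sketch of the optimization scheme in the abstract, the code below fits a second-order polynomial regression model to hypothetical orthogonal-array experiments relating two servo parameters to a motion error, then reads the minimizing parameters off the fitted surface. Variable names and values are illustrative assumptions, not the paper's data.

    import numpy as np

    # Hypothetical orthogonal-array experiments: (servo gain, accel time)
    # versus a measured motion error; all values are illustrative.
    X = np.array([[20, 0.1], [20, 0.2], [20, 0.3],
                  [30, 0.1], [30, 0.2], [30, 0.3],
                  [40, 0.1], [40, 0.2], [40, 0.3]], float)
    err = np.array([8.1, 6.5, 7.2, 5.9, 4.1, 5.0, 6.8, 5.3, 6.1])

    # Second-order polynomial regression model:
    # err ~ b0 + b1*g + b2*t + b3*g**2 + b4*t**2 + b5*g*t
    g, t = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(g), g, t, g**2, t**2, g * t])
    beta, *_ = np.linalg.lstsq(A, err, rcond=None)

    # Locate the minimizing parameters on the fitted response surface.
    gg, tt = np.meshgrid(np.linspace(20, 40, 81), np.linspace(0.1, 0.3, 81))
    surf = (beta[0] + beta[1] * gg + beta[2] * tt
            + beta[3] * gg**2 + beta[4] * tt**2 + beta[5] * gg * tt)
    i = np.unravel_index(np.argmin(surf), surf.shape)
    print("minimizing gain and accel time:", gg[i], tt[i])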

The Design and Implementation of Anomaly Traffic Analysis System using Data Mining

  • Lee, Se-Yul; Cho, Sang-Yeop; Kim, Yong-Soo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.8 no.4 / pp.316-321 / 2008
  • Advanced computer network technology enables computers to be connected in an open network environment. Despite the growing number of security threats to networks, most intrusion detection identifies attacks mainly by detecting misuse with a set of rules based on past hacking patterns. This pattern matching has a high rate of false positives and cannot detect new hacking patterns, which makes it vulnerable to previously unidentified attacks and attack variations and increases false negatives. Better intrusion detection and analysis technologies are thus required. This paper investigates the asymmetric costs of false errors to enhance the performance of detection systems. The proposed method utilizes a network model that incorporates the cost ratio of false errors. By weighing false positive errors against false negative errors, the scheme achieves better performance from the viewpoint of both security and system performance objectives. The results of our empirical experiment show that the network model provides high detection accuracy. In addition, the simulation results show that the effectiveness of anomaly traffic detection is enhanced by considering the costs of false errors.
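
A sketch of the asymmetric-cost idea: when false negatives (missed attacks) are assumed costlier than false positives (false alarms), detectors are compared by expected cost rather than raw accuracy. The cost ratio and error counts below are illustrative assumptions, not the paper's figures.

    def expected_cost(fp, fn, c_fp=1.0, c_fn=10.0):
        """Asymmetric cost of false errors: a missed attack (FN) is
        assumed ten times as costly as a false alarm (FP)."""
        return c_fp * fp + c_fn * fn

    # Two hypothetical detectors with different error trade-offs.
    print(expected_cost(fp=120, fn=4))   # alarm-happy detector: cost 160
    print(expected_cost(fp=30, fn=15))   # conservative detector: cost 180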

An Analysis and Modeling of Propagation/Accumulation Errors Incurred by CD in the FD-CD Transcoding (FD-CD 트랜스코딩기법에서 CD에 의한 전파/누적 왜곡의 분석과 모델링)

  • Kim Jin soo; Kim Jae Gon; Kim Hyung Myung; Hong Jin Woo
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.12C / pp.1677-1685 / 2004
  • Recently, FD (Frame Dropping)-CD (Coefficient Dropping) transcoding has attracted attention for its low computational complexity and simple implementation. Conventional FD-CD transcoding schemes, however, have not considered that CD errors tend to propagate and accumulate. In this paper, we derive the characteristics of the errors incurred by coefficient dropping and show, through computer simulations, that CD errors propagate and are not negligible for the decoded quality of subsequent frames within a single GOP. We then propose an exponentially decaying model that describes the propagation/accumulation characteristics well. Finally, it is shown that the proposed model can be used effectively to estimate the overall distortion incurred by CD errors.
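
The abstract does not give the model's closed form; a minimal sketch of an exponentially decaying propagation model consistent with the description could be written as follows, where $D_0$ is the distortion injected by coefficient dropping at the current frame, $\alpha > 0$ an assumed decay constant, and $n$ the frame offset within the GOP:

$$D_n = D_0\, e^{-\alpha n}, \qquad D_{\mathrm{GOP}} = \sum_{n=0}^{N-1} D_0\, e^{-\alpha n} = D_0\,\frac{1 - e^{-\alpha N}}{1 - e^{-\alpha}}$$

Here $D_{\mathrm{GOP}}$ would be the accumulated distortion over the $N$ frames that follow within a single GOP; the notation is illustrative, not the paper's.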

Improvement of WRF forecast meteorological data by Model Output Statistics using linear, polynomial and scaling regression methods

  • Jabbari, Aida; Bae, Deg-Hyo
    • Proceedings of the Korea Water Resources Association Conference / 2019.05a / pp.147-147 / 2019
  • Numerical Weather Prediction (NWP) models determine the future state of the weather by forcing current weather conditions into atmospheric models. NWP models approximate the physical dynamics mathematically with nonlinear differential equations, but these approximations carry uncertainties: the errors of NWP estimations can be traced to the initial and boundary conditions and to model parameterization. Developments in meteorological forecast models have not solved the issue of these inevitable biases. In spite of efforts to incorporate all sources of uncertainty into the forecast, and regardless of the methodologies applied to generate forecast ensembles, forecasts remain subject to errors and systematic biases. Statistical post-processing increases the accuracy of forecast data by decreasing these errors; error prediction for NWP models, i.e., updating NWP model outputs with model output statistics, is one way to improve the model forecast. Regression methods (linear, polynomial and scaling regression) are applied in the present study to improve real-time forecast skill. Such post-processing consists of two main steps: first, a regression is built between forecasts and measurements available during a training period; second, the regression is applied to new forecasts. In this study, the WRF real-time forecast data showed systematic biases in comparison with observed data; the errors of the NWP model forecasts were reflected in the WRF model's underestimation of the meteorological variables. The results are expected to indicate that the post-processing techniques applied in this study improve the meteorological forecast data provided by the WRF model, and a comparison between the bias correction methods will show the strengths and weaknesses of each method.
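
The two-step post-processing described above (fit a regression between forecast and measurement over a training period, then apply it to new forecasts) can be sketched as follows for the three regression families named in the title. The variable names and sample values are illustrative assumptions, not data from the study.

    import numpy as np

    # Hypothetical training pairs of WRF forecasts and station observations.
    forecast = np.array([12.1, 14.3, 9.8, 16.0, 11.2, 13.5])   # model output
    observed = np.array([13.0, 15.1, 10.9, 17.2, 12.4, 14.6])  # measurements

    # 1) Linear MOS: fit observed = a * forecast + b on the training period.
    a, b = np.polyfit(forecast, observed, deg=1)

    # 2) Polynomial MOS: fit a low-order polynomial instead.
    poly = np.polyfit(forecast, observed, deg=2)

    # 3) Scaling MOS: a single multiplicative bias ratio.
    scale = observed.mean() / forecast.mean()

    # Step two: apply each trained regression to a new forecast value.
    new_fc = 15.0
    print(a * new_fc + b)            # linear correction
    print(np.polyval(poly, new_fc))  # polynomial correction
    print(scale * new_fc)            # scaling correction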

Quantitative Analysis of Random Errors of the WRF-FLEXPART Model for Backward-in-time Simulation over the Seoul Metropolitan Area (수도권 영역의 시간 후방 모드 WRF-FLEXPART 모의를 위한 입자 수에 따른 무작위 오차의 정량 분석)

  • Woo, Ju-Wan; Lee, Jae-Hyeong; Lee, Sang-Hyun
    • Atmosphere / v.29 no.5 / pp.551-566 / 2019
  • A quantitative understanding of the random error associated with Lagrangian particle dispersion modeling is a prerequisite for backward-in-time mode simulations. This study aims to quantify the random error of the WRF-FLEXPART model and to suggest an optimum number of Lagrangian particles for backward-in-time simulations over the Seoul metropolitan area. A series of backward-in-time simulations of the WRF-FLEXPART model was conducted at two receptor points while varying the number of Lagrangian particles, and the relative error, as a quantitative indicator of the random error, was analyzed to determine the optimum number of released particles. The results show that in the Seoul metropolitan area, 1-day Lagrangian transport contributes 80~90% of the residence time and ~100% of the atmospheric enhancement of carbon monoxide. The relative errors in both the residence time and the atmospheric concentration enhancement are larger when the particles are released in the daytime than in the nighttime, and in the inland area than in the coastal area. The sensitivity simulations reveal that the relative errors decrease as the number of Lagrangian particles increases; using a small number of particles caused significant random errors, which is attributed to the random number sampling process. For 6,000 particles, the relative error in the atmospheric concentration enhancement is estimated at -6% ± 10%, with the computational time reduced to 21% ± 7% on average. This study emphasizes the importance of quantitative analyses of random errors both in interpreting backward-in-time simulations of the WRF-FLEXPART model and in determining the number of Lagrangian particles.
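
As a toy illustration of why the relative error shrinks as the number of released particles grows, the sketch below repeats a Monte Carlo estimate at several particle counts and measures the run-to-run scatter. The roughly 1/sqrt(N) decay is a generic property of random sampling, assumed here; the distribution and the numbers are not from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    true_mean = 1.0  # hypothetical per-particle contribution

    for n_particles in (100, 1000, 6000, 20000):
        # Repeat the "release" many times to measure run-to-run scatter.
        estimates = [rng.exponential(true_mean, n_particles).mean()
                     for _ in range(200)]
        rel_err = 100 * np.std(estimates) / true_mean
        print(f"N = {n_particles:6d}  relative error ~ {rel_err:.2f}%")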

Studies on Error Propagation by Simulation Model -Main description of experiments of aero-triangulation- (횡응모형에 의한 오차전파에 관한 연구 -공중삼각측량의 실험을 중심으로-)

  • 백은기
    • Magazine of the Korean Society of Agricultural Engineers / v.18 no.1 / pp.4021-4037 / 1976
  • This paper describes actual experiments on error propagation and studies of analytical photogrammetry using a simulation method with which the causes of the errors can be found. These studies and their results give valuable data for systematically controlling the errors in aerial triangulation. The main questions addressed in the paper are as follows: 1. Do the scale errors in the successive models appear in the form of a normal distribution when the observation errors propagate in the form of a normal distribution? 2. In what form does this scale error propagation appear in an actual model? 3. When the causes of the scale error propagation are found, can the evaluation standard be determined normally? 4. What degree of influence do the constant errors have? Several experiments using the simulation technique were carried out to untangle the complicated error propagation of aerial triangulation, an effective means of researching the relations between cause and effect. The studies concentrate on the following points of the simulation experiments: (1) the first part describes how the software for the simulation experiment was produced; (2) the second part covers the method of propagating the errors in the simulation models and the kinds of errors; (3) the final part, the most important, is the analysis and evaluation of control during actual work. These studies represent an important development in controlling and managing aerial photogrammetry; in particular, for error propagation, the techniques make it possible to clearly identify the causes, steps and parts of the errors generated.

Quantile regression with errors in variables

  • Shim, Jooyong
    • Journal of the Korean Data and Information Science Society / v.25 no.2 / pp.439-446 / 2014
  • Quantile regression models with errors in variables have received a great deal of attention in the social and natural sciences. Some efforts have been devoted to developing effective estimation methods for such quantile regression models. In this paper we propose an orthogonal distance quantile regression model that effectively accounts for the errors in both input and response variables. The performance of the proposed method is evaluated through simulation studies.
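
The orthogonal distance formulation itself is not spelled out in the abstract. As background, the sketch below shows the standard quantile (pinball) loss that quantile regression minimizes and checks numerically that the sample tau-quantile minimizes it; the data are synthetic and illustrative.

    import numpy as np

    def pinball_loss(y, y_hat, tau):
        """Quantile (pinball) loss: asymmetric absolute error weighted by tau."""
        r = y - y_hat
        return np.mean(np.maximum(tau * r, (tau - 1) * r))

    # Check: the tau-quantile of a sample minimizes the pinball loss.
    rng = np.random.default_rng(1)
    y = rng.normal(size=10_000)
    tau = 0.9
    candidates = np.linspace(-3, 3, 601)
    losses = [pinball_loss(y, c, tau) for c in candidates]
    print(candidates[int(np.argmin(losses))])  # ~ np.quantile(y, tau)
    print(np.quantile(y, tau))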

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon; Choi, HeungSik; Kim, SunWoong
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.135-149 / 2020
  • Artificial intelligence is changing the world, and financial markets are no exception. Robo-advisors are being actively developed, compensating for the weaknesses of traditional asset allocation methods and taking over the parts that are difficult for them. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it avoids investment risk structurally, offers stability in the management of large funds, and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible, it scales to billions of examples in limited-memory environments and trains much faster than traditional boosting methods, and it is frequently used across many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors degrade the optimized portfolio's performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model, narrowing the gap between theory and practice. For the empirical test of the suggested model, we used Korean stock market price data over the 17 years from 2003 to 2019, composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care and staples sectors. Predictions were accumulated with a moving-window method using 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and portfolio performance was analyzed in terms of cumulative rate of return over this long period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and reduction of estimation errors: the total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. Reducing the estimation errors increases the stability of the model and makes it easier to apply in practical investment. Many financial and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist in a changing financial market. This study, however, not only retains the advantages of traditional asset allocation models but also supplements their limitations and increases stability by predicting asset risks with a state-of-the-art algorithm. There are various studies on parametric estimation methods for reducing estimation errors in portfolio optimization; we suggest a new, machine learning based method for doing so in an optimized asset allocation model. The study is thus meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
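
A minimal sketch of the pipeline the abstract describes (predict next-period volatility with XGBoost, feed it into covariance estimation, then solve for risk parity weights) follows. The feature design, data and hyperparameters are illustrative assumptions, not the paper's specification; the sketch requires the xgboost package.

    import numpy as np
    from scipy.optimize import minimize
    from xgboost import XGBRegressor

    rng = np.random.default_rng(0)
    n_assets = 4

    # Illustrative daily returns for four hypothetical sectors (not the
    # paper's data set).
    returns = rng.normal(0.0, 0.01, size=(1000, n_assets))

    # Step 1 (assumed feature design): predict each asset's next-window
    # volatility from the current window's realized volatilities.
    w = 20
    vols = np.array([[returns[t - w:t, j].std() for j in range(n_assets)]
                     for t in range(w, len(returns))])
    X, y = vols[:-1], vols[1:]
    models = [XGBRegressor(n_estimators=50, max_depth=3).fit(X, y[:, j])
              for j in range(n_assets)]
    pred_vol = np.array([m.predict(X[-1:])[0] for m in models])

    # Step 2: rebuild the covariance estimate from the sample correlation
    # matrix and the predicted volatilities.
    corr = np.corrcoef(returns, rowvar=False)
    cov = corr * np.outer(pred_vol, pred_vol)

    # Step 3: risk parity weights, equalizing the risk contributions
    # RC_i = w_i * (cov @ w)_i across assets.
    def rp_objective(weights):
        rc = weights * (cov @ weights)
        return np.sum((rc[:, None] - rc[None, :]) ** 2)

    res = minimize(rp_objective, np.full(n_assets, 1 / n_assets),
                   bounds=[(0.0, 1.0)] * n_assets,
                   constraints=[{"type": "eq",
                                 "fun": lambda x: x.sum() - 1.0}])
    print(res.x.round(4))  # portfolio weights summing to one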

A Study on EOQ Model Involving Estimate Errors (수요, 주문 및 재고비용이 불확실한 상황에서의 EOQ모형에 관한 연구)

  • Kim, Gyu-Tae; Hwang, Hark-Chin; Kim, Chang-Hyun
    • IE interfaces / v.17 no.1 / pp.78-83 / 2004
  • We consider the sensitivity of the average inventory cost rate when the true values of the parameters in the EOQ model are unknown but lie within known ranges. In particular, when a valid range for the true economic lot size is known, we provide a formula for estimating the lot size under a minimax criterion. To estimate the valid range, we apply the propagation-of-errors technique, and we then present a scheme to find a (valid) lot size based on the range of the true lot size estimated in this way.
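
The abstract does not reproduce the minimax formula. Under the classical EOQ sensitivity result, using lot size $Q$ when the true optimum is $Q^*$ inflates the average cost rate by the factor $f(Q, Q^*) = \tfrac{1}{2}\left(Q/Q^* + Q^*/Q\right)$; if $Q^*$ is only known to lie in $[Q_L, Q_U]$, the worst case sits at an endpoint, and equalizing the two endpoint penalties gives the minimax lot size as the geometric mean (a standard result consistent with, but not quoted from, the paper):

$$\frac{Q}{Q_L} + \frac{Q_L}{Q} = \frac{Q}{Q_U} + \frac{Q_U}{Q} \;\Longrightarrow\; Q_{\mathrm{minimax}} = \sqrt{Q_L\, Q_U}$$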