• Title/Summary/Keyword: magnitude of errors


Adaptive Matching Scan Algorithm Based on Gradient Magnitude and Sub-blocks in Fast Motion Estimation of Full Search (전영역 탐색의 고속 움직임 예측에서 기울기 크기와 부 블록을 이용한 적응 매칭 스캔 알고리즘)

  • 김종남;최태선
    • Proceedings of the IEEK Conference
    • /
    • 1999.11a
    • /
    • pp.1097-1100
    • /
    • 1999
  • Due to the significant computation of full search in motion estimation, extensive research on fast motion estimation algorithms has been carried out. However, most of these algorithms degrade the predicted image compared with the full search algorithm. To reduce the significant amount of computation while keeping the same prediction quality as full search, we propose a fast block-matching algorithm based on the gradient magnitude of the reference block, without any degradation of the predicted image. Using a Taylor series expansion, we show that the block-matching error between the reference block and a candidate block is proportional to the gradient magnitude of the matching block. With this result, we propose a fast full-search algorithm with an adaptively determined scan direction in the block matching. Experimentally, the proposed algorithm is very efficient in terms of computational speedup and has the smallest computation among the conventional full-search algorithms. Therefore, our algorithm is useful for VLSI implementations of video encoders requiring real-time operation.
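The adaptive scan idea in this abstract — accumulating the matching error over high-gradient rows first so that poor candidates are rejected early — can be sketched as follows (illustrative Python with a SAD criterion; the row-wise ordering and all names are assumptions, not the authors' code):

```python
import numpy as np

def row_gradient_magnitude(block):
    # Per-row sum of absolute gradients of the reference block.
    gy, gx = np.gradient(block.astype(float))
    return (np.abs(gx) + np.abs(gy)).sum(axis=1)

def adaptive_scan_full_search(ref, frame):
    """Exhaustive block matching: SAD is accumulated row by row in
    decreasing gradient-magnitude order, so a poor candidate exceeds
    the best SAD found so far and is rejected without a full scan."""
    n = ref.shape[0]
    order = np.argsort(-row_gradient_magnitude(ref))
    best_sad, best_pos = np.inf, (0, 0)
    for y in range(frame.shape[0] - n + 1):
        for x in range(frame.shape[1] - n + 1):
            cand = frame[y:y + n, x:x + n].astype(float)
            sad = 0.0
            for r in order:                  # adaptively determined scan order
                sad += np.abs(ref[r].astype(float) - cand[r]).sum()
                if sad >= best_sad:          # early termination
                    break
            if sad < best_sad:
                best_sad, best_pos = sad, (y, x)
    return best_pos, best_sad
```

Because the early exit only skips candidates that can no longer win, the result is identical to plain full search, which is the point of the "no degradation" claim.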


Robust Control of Nonlinear Systems with Adaptive Fuzzy System (적응 퍼지 시스템을 이용한 비선형 시스템의 강인 제어)

  • 구근모;왕보현
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1996.10a
    • /
    • pp.158-161
    • /
    • 1996
  • A robust adaptive tracking control architecture is proposed for a class of continuous-time nonlinear dynamic systems for which an explicit linear parameterization of the uncertainty in the dynamics is either unknown or impossible. The architecture employs an adaptive fuzzy system to compensate for the uncertainty of the plant. In order to improve robustness under approximation errors and disturbances, the proposed architecture includes a deadzone in the adaptation laws. Unlike previously proposed schemes, the magnitude of the approximation errors and disturbances is not required for determining the deadzone size, since it is estimated using the adaptation law. The proposed algorithm is proven to be globally stable in the Lyapunov sense, with tracking errors converging to a neighborhood of zero.
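The deadzone in the adaptation law can be illustrated with a minimal sketch (a generic gradient update; the gain, regressor, and deadzone size are hypothetical, not taken from the paper):

```python
def deadzone_update(theta, phi, e, gamma=0.5, e0=0.1):
    """One step of a gradient adaptation law with a deadzone: the
    parameter estimate is frozen while the tracking error e is inside
    the deadzone, so approximation errors and disturbances smaller
    than e0 cannot drift the estimate."""
    if abs(e) <= e0:                     # inside the deadzone: stop adapting
        return theta
    return theta + gamma * e * phi       # usual gradient update outside
```

Note that the paper goes further: it estimates the deadzone size online via the adaptation law instead of fixing e0 in advance.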


An Examination of Sediment Discharge Computation Errors Related to Imprecise Factors (부정확한 인자와 관계된 유사량 산정 오류에 대한 검증)

  • 정관수
    • Water for future
    • /
    • v.29 no.3
    • /
    • pp.129-142
    • /
    • 1996
  • This study investigates the magnitude of errors that can be expected when integrating sediment concentration over a vertical, based on a single-point measurement, because of errors in the input data. Potential error sources, including sampler location, water surface elevation, bed elevation, fall velocity, the $\beta$ value, and the $\kappa$ value, were comparatively examined using data from a special study on the Rio Grande Conveyance channel in New Mexico. It is concluded that simple forms of the equations for the vertical distribution of velocity and sediment concentration are adequate for computations based on a single-point field sample of suspended sediment. The most uncertain point in the computation is the Rouse number z in the equation for the vertical concentration distribution of suspended sediment.
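The Rouse equation mentioned at the end of the abstract shows why the exponent z dominates the error budget: it enters the concentration profile as a power. A minimal sketch (symbol names are illustrative):

```python
def rouse_concentration(y, h, a, c_a, z):
    """Rouse vertical distribution of suspended sediment:
    C(y) = C_a * [ ((h - y)/y) * (a/(h - a)) ]**z
    with flow depth h, reference height a, reference concentration
    C_a, and Rouse number z = w_s / (kappa * beta * u_*)."""
    return c_a * (((h - y) / y) * (a / (h - a))) ** z
```

Since z sits in the exponent, small errors in $\kappa$, $\beta$, or the fall velocity reshape the entire profile, which is why the abstract singles z out as the most uncertain input.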


Correction of Accelerogram in Frequency Domain (주파수영역에서의 가속도 기록 보정)

  • Park, Chang Ho;Lee, Dong Guen
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.12 no.4
    • /
    • pp.71-79
    • /
    • 1992
  • In general, the accelerograms of earthquake ground motion, or those obtained from dynamic tests, contain various errors. These include instrumental errors (magnitude and phase distortion) due to the response characteristics of the accelerometer, digitizing errors concentrated in the low- and high-frequency components, and random errors. Such errors may be detrimental to the results of data processing and dynamic analysis. An efficient method for correcting the errors of an accelerogram is proposed in this study. The correction is accomplished in four steps: 1) using an interpolation method, a data form appropriate for the error correction is prepared; 2) low- and high-frequency errors of the accelerogram are removed by a band-pass filter between prescribed frequency limits; 3) instrumental errors are corrected using the dynamic equilibrium equation of the accelerometer; 4) velocity and displacement are obtained by integrating the corrected accelerogram. Infinite impulse response (IIR) and finite impulse response (FIR) filters are generally used as band-pass filters. In the proposed error-correction procedure, the deficiencies of the FIR and IIR filters are reduced and, using the differentiation and integration properties of the Fourier transform, the accuracy of the instrument correction and the integration is improved.
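Steps 2 and 4 of the procedure — frequency-domain band-pass filtering and integration by dividing the spectrum by iω — can be sketched with an FFT (an illustrative NumPy sketch, not the authors' filter; cutoff values are assumptions):

```python
import numpy as np

def bandpass_and_integrate(acc, dt, f_lo, f_hi):
    """Band-pass filter an accelerogram in the frequency domain, then
    integrate it to velocity by dividing the spectrum by i*2*pi*f
    (the integration property of the Fourier transform)."""
    n = len(acc)
    A = np.fft.rfft(acc)
    f = np.fft.rfftfreq(n, dt)
    keep = (f >= f_lo) & (f <= f_hi)
    A_filt = np.where(keep, A, 0.0)          # zero out-of-band components
    acc_filtered = np.fft.irfft(A_filt, n)
    with np.errstate(divide="ignore", invalid="ignore"):
        V = np.where(f > 0, A_filt / (2j * np.pi * f), 0.0)
    velocity = np.fft.irfft(V, n)            # zero-mean antiderivative
    return acc_filtered, velocity
```

Because filtering and integration both act on the spectrum directly, no time-domain filter transient or drifting integration constant is introduced.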


Discontinuity in GNSS Coordinate Time Series due to Equipment Replacement

  • Sohn, Dong-Hyo;Choi, Byung-Kyu;Kim, Hyunho;Yoon, Hasu;Park, Sul Gee;Park, Sang-Hyun
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.11 no.4
    • /
    • pp.287-295
    • /
    • 2022
  • The GNSS coordinate time series is used as important data for geophysical analyses such as terrestrial reference frame establishment, crustal deformation, and Earth orientation parameter estimation. However, various factors may cause discontinuities in the coordinate time series, which may lead to errors in its interpretation. In this paper, we describe discontinuities in the coordinate time series caused by equipment replacement at domestic GNSS stations, and we discuss the change in movement magnitude and the velocity vector difference in each direction before and after discontinuity correction. To do this, we used three years (2017-2019) of data from 40 GNSS stations. The average magnitude of the velocity vector in the north-south, east-west, and vertical directions before correction is -12.9±1.5, 28.0±1.9, and 4.2±7.6 mm/yr, respectively. After correction, the average velocity in each direction was -13.0±1.0, 28.2±0.8, and 0.7±2.1 mm/yr, respectively. The average magnitudes of the horizontal GNSS velocity vectors before and after discontinuity correction were similar, but the station-to-station deviation in movement magnitude decreased after correction. After equipment replacement, the vertical movement changed more than the horizontal movement. Moreover, a change in the magnitude of movement in each direction may also change the velocity vector, which may lead to errors in geophysical analysis.
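A common way to correct such a discontinuity, sketched here under the assumption of a single known replacement epoch, is to fit a linear trend plus a step offset and subtract the step:

```python
import numpy as np

def fit_trend_with_offset(t, y, t_break):
    """Least-squares fit of y = a + v*t + c*H(t - t_break).
    v is the station velocity and c the coordinate offset introduced
    by the equipment replacement at t_break; subtracting
    c*H(t - t_break) from y yields the corrected series."""
    H = (t >= t_break).astype(float)
    A = np.column_stack([np.ones_like(t), t, H])
    (a, v, c), *_ = np.linalg.lstsq(A, y, rcond=None)
    return v, c
```

Fitting v without modeling the step would bias the velocity estimate, which is exactly the interpretation error the abstract warns about.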

Determination of Soil Sample Size Based on Gy's Particulate Sampling Theory (Gy의 입자성 물질 시료채취이론에 근거한 토양 시료 채취량 결정)

  • Bae, Bum-Han
    • Journal of Soil and Groundwater Environment
    • /
    • v.16 no.6
    • /
    • pp.1-9
    • /
    • 2011
  • A bibliographical review of Gy's sampling theory for particulate materials was conducted to provide readers with useful means of reducing errors in soil contamination investigations. According to Gy's theory, the errors caused by the heterogeneous nature of soil include the fundamental error (FE) caused by physical and chemical constitutional heterogeneity, the grouping and segregation error (GE) arising from gravitational force, the long-range heterogeneity fluctuation error ($CE_2$), the periodic heterogeneity fluctuation error ($CE_3$), and the materialization error (ME) generated during the physical processes of sample treatment. However, $CE_2$ and $CE_3$ cannot be estimated easily, and only increasing the number of sampling locations can reduce the magnitude of these errors. In addition, incremental sampling is the only method to reduce GE, while grab sampling should be avoided as it introduces uncertainty and errors into the sampling process. Correct preparation and operation of sampling tools are important factors in reducing the increment delimitation error (DE) and extraction error (EE), which result from the physical processes of sampling. Therefore, Gy's sampling theory can be used efficiently in planning a strategy for soil investigations of non-volatile and non-reactive samples.
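The fundamental error (FE), the one term in Gy's theory that fixes a minimum sample mass, has a closed form (a sketch; the sampling constant C bundles Gy's shape, granulometric, liberation, and mineralogical factors, and the numeric values in the test are illustrative):

```python
def fundamental_error_variance(C, d_cm, m_sample_g, m_lot_g):
    """Gy's fundamental-error relative variance:
    s^2 = C * d^3 * (1/m_sample - 1/m_lot),
    with C the sampling constant (g/cm^3) and d the top particle
    size (cm). Larger samples or finer grinding reduce the FE."""
    return C * d_cm ** 3 * (1.0 / m_sample_g - 1.0 / m_lot_g)

def minimum_sample_mass(C, d_cm, s_target):
    """Sample mass needed for a target relative standard deviation,
    assuming the lot mass is effectively infinite."""
    return C * d_cm ** 3 / s_target ** 2
```

This is why the review stresses sample mass and particle size: the d³ term means halving the top particle size cuts the required mass by a factor of eight.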

Analytical Sensitivity Analysis of Geometric Errors in a Three-Axis Machine Tool (해석적 방법을 통한 3 축 공작기계의 기하학적 오차 민감도 분석)

  • Park, Sung-Ryung;Yang, Seung-Han
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.36 no.2
    • /
    • pp.165-171
    • /
    • 2012
  • In this paper, an analytical method is used to perform a sensitivity analysis of the geometric errors in a three-axis machine tool. First, an error synthesis model is constructed for evaluating the position volumetric error due to the geometric errors, and an output variable is defined as the magnitude of the position volumetric error. Next, a global sensitivity analysis is executed using the analytical method. Finally, the sensitivity indices are calculated using the quantitative values of the geometric errors.
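For a linearized error-synthesis model, the sensitivity of the volumetric error magnitude has a closed-form gradient, which is the kind of analytical index described here (a generic sketch; the Jacobian in the test is illustrative, not the machine-specific error model):

```python
import numpy as np

def volumetric_sensitivity(J, g):
    """For a linearized error synthesis model e = J @ g (e: x, y, z
    components of the position volumetric error; g: geometric error
    parameters), return the magnitude |e| and the analytical
    sensitivities d|e|/dg_i = (J^T e)_i / |e|."""
    e = J @ g
    mag = np.linalg.norm(e)
    return mag, (J.T @ e) / mag
```

The analytical gradient avoids the sampling cost of Monte Carlo sensitivity methods, which is the efficiency argument such papers typically make.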

Errors in Estimated Temporal Tracer Trends Due to Changes in the Historical Observation Network: A Case Study of Oxygen Trends in the Southern Ocean

  • Min, Dong-Ha;Keller, Klaus
    • Ocean and Polar Research
    • /
    • v.27 no.2
    • /
    • pp.189-195
    • /
    • 2005
  • Several models predict large and potentially abrupt ocean circulation changes due to anthropogenic greenhouse-gas emissions. In the models, these circulation changes drive considerable oceanic oxygen trends. A sound estimate of the observed oxygen trends can hence be a powerful tool to constrain predictions of future changes in oceanic deepwater formation and in heat and carbon dioxide uptake. Estimating decadal-scale oxygen trends is, however, a nontrivial task, and previous studies have come to contradicting conclusions. One key potential problem is that changes in the historical observation network might introduce considerable errors. Here we estimate the likely magnitude of these errors for a subset of the available observations in the Southern Ocean. We test three common data analysis methods south of Australia and focus on the decadal-scale trends between the 1970s and the 1990s. Specifically, we estimate the errors due to sparsely sampled observations using a known signal (the time-invariant, temporally averaged World Ocean Atlas 2001) as a negative control. The crossover analysis and objective analysis methods are far less prone to spatial sampling location biases than the area-averaging method. Subject to numerous caveats, we find that the errors due to sparse sampling for the area-averaging method are on the order of several micromoles $kg^{-1}$; for the crossover and objective analysis methods, these errors are much smaller. For the analyzed example, the biases due to changes in the spatial design of the historical observation network are relatively small compared to the trends predicted by many model simulations. This raises the possibility of using historic oxygen trends to constrain model simulations, even in sparsely sampled ocean basins.
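The negative-control idea is simple to state in code: evaluate a field that is known not to change in time at each network's station locations, so any apparent trend is pure sampling error (an illustrative sketch; the field and indices are made up):

```python
import numpy as np

def spurious_trend(climatology, idx_old, idx_new):
    """Area-averaging negative control: difference the means of a
    TIME-INVARIANT climatological field (e.g. World Ocean Atlas
    oxygen) sampled at the old and new station networks. Because the
    field has no real trend, any nonzero result is an artifact of
    the changed observation network."""
    return climatology[idx_new].mean() - climatology[idx_old].mean()
```

If the two networks sample different parts of a spatial gradient, the area average picks up that gradient as a fake temporal trend, which is the bias the study quantifies.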

Multilayer Stereo Image Matching Based upon Phase-Magnitude and Mean Field Approximation

  • Hong Jeong;Kim, Jung-Gu;Chae, Myoung-Sik
    • Journal of Electrical Engineering and Information Science
    • /
    • v.2 no.5
    • /
    • pp.79-88
    • /
    • 1997
  • This paper introduces a new energy function, as a maximum a posteriori (MAP) estimate of binocular disparity, that can deal with both random dot stereograms (RDS) and natural scenes. The energy function uses phase magnitude as the feature to detect the shift between a pair of corrupted conjugate images. We also adopt the Fleet singularity criterion, which effectively detects unstable areas of the image plane and thus eliminates error-prone stereo matching in advance. The multi-scale concept is applied to the multilayer architecture, which searches for solutions systematically from coarse to fine detail and thereby largely avoids local minima. Using a mean field approximation, we obtain a compact representation that is suitable for fast computation. In this manner, the energy function satisfies the major natural constraints and the requirements for implementing parallel relaxation. As an experiment, the proposed algorithm is applied to RDS and natural stereo images. The results show good performance in terms of recognition errors, parallel implementation, and noise characteristics.
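The mean-field relaxation at the core of such a method can be sketched on a 1D toy problem (illustrative Python; a plain intensity difference stands in for the paper's phase-magnitude feature, and the quadratic smoothness term and all parameters are assumptions):

```python
import numpy as np

def mean_field_disparity(left, right, dmax=3, lam=0.5, T=1.0, iters=20):
    """Mean-field relaxation for 1D disparity. q[i, d] is the belief
    that pixel i has disparity d; each update balances a data term
    against a smoothness term tying d to the neighbors' expected
    disparity E_q[d]."""
    n = len(left)
    data = np.zeros((n, dmax + 1))
    for d in range(dmax + 1):
        data[:, d] = np.abs(left - np.roll(right, d))  # cost of disparity d
    q = np.full((n, dmax + 1), 1.0 / (dmax + 1))       # uniform initial belief
    ds = np.arange(dmax + 1, dtype=float)
    for _ in range(iters):
        mean_d = q @ ds                                # E_q[d] per pixel
        for i in range(n):
            nb = [j for j in (i - 1, i + 1) if 0 <= j < n]
            smooth = sum((ds - mean_d[j]) ** 2 for j in nb)
            e = data[i] + lam * smooth                 # local energy per d
            w = np.exp(-(e - e.min()) / T)             # Gibbs update
            q[i] = w / w.sum()
    return q.argmax(axis=1)
```

Replacing the exact expectation of (d - d')² by (d - E_q[d'])² only drops a term constant in d, so the normalized update is unchanged; the deterministic updates are what make the relaxation parallelizable.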


Design of Enhanced Min-Max Control using Feedforward Control

  • Im, Yoon-Tae;Song, Seong-Ho
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.312-315
    • /
    • 2003
  • This paper deals with robust control problems of linear systems with matched nonlinear uncertainties. To handle the uncertainties, a Lyapunov min-max control approach can usually be adopted. However, the min-max control input must be switched, which provokes chattering phenomena that limit practical implementation. The magnitude of the switching control input, which causes the chattering, depends on the size of the uncertainties. In this paper, it is shown that the magnitude of the min-max control input can be made small using a well-known disturbance observer technique, so that only the disturbance observation error needs to be dominated. The chattering can be reduced as much as desired by selecting a high disturbance observer gain. Simulations show that min-max control with a disturbance observer reduces chattering much more, and guarantees much better robust performance, than min-max control without one.
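The effect can be reproduced on a scalar toy plant (an illustrative Euler simulation; the plant, observer form, and gains are assumptions, not the paper's design): with the observer, the switching gain only needs to dominate the small observation error, so the chattering band shrinks.

```python
import numpy as np

def simulate(use_observer, steps=2000, dt=0.005, L=50.0):
    """Scalar plant x' = -x + u + d with matched disturbance d(t).
    Plain min-max control must switch with gain >= sup|d|; with a
    disturbance observer only the observation error has to be
    dominated, so a much smaller switching gain suffices."""
    x, d_hat = 1.0, 0.0
    rho = 0.1 if use_observer else 1.2   # switching (chattering) magnitude
    tail = 0.0                            # max |x| over the second half
    for k in range(steps):
        d = np.sin(3.0 * k * dt)          # unknown bounded disturbance
        u = -rho * np.sign(x) - (d_hat if use_observer else 0.0)
        dx = -x + u + d
        if use_observer:
            # dx + x - u equals d here, so this is a first-order observer
            d_hat += L * ((dx + x - u) - d_hat) * dt
        x += dx * dt
        if k > steps // 2:
            tail = max(tail, abs(x))
    return rho, tail
```

With the observer, rho drops from 1.2 to 0.1 and the residual chattering band around the origin shrinks accordingly.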
