• Title/Summary/Keyword: Error Component


Development of 6-component Force/Moment Calibration Machine (6분력 힘/모멘트 교정기의 개발)

  • 김갑순;강대임
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.15 no.9
    • /
    • pp.127-134
    • /
    • 1998
  • This paper describes the design of a 6-component force/moment calibration machine with maximum capacities of 500 N in force and 50 Nm in moment. Intended for characterizing multi-component load cells, the machine consists of a body, a fixture, a force generating system, a moment generating system and weights. We have also evaluated the accuracy of the calibration machine. Test results show that the expanded relative uncertainties for the force components $\pm F_x$, $\pm F_y$ and the moment components $\pm M_x$, $\pm M_y$ are less than $8.6\times10^{-4}$, while those for the force components $+F_z$, $-F_z$ and the moment components $\pm M_z$ are less than $1.6\times10^{-3}$, $2.0\times10^{-5}$ and $1.7\times10^{-3}$, respectively.
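Expanded relative uncertainties like those quoted above are conventionally obtained by root-sum-squaring the individual uncertainty contributions and applying a coverage factor of k = 2 (per the GUM). A minimal sketch with purely illustrative component values; the paper's actual uncertainty budget is not reproduced here:

```python
import math

# Hypothetical relative standard-uncertainty contributions for one force axis
# (e.g. weight calibration, alignment, repeatability) -- illustrative only.
components = [2.0e-4, 3.0e-4, 1.5e-4]

# Combined standard uncertainty: root sum of squares (uncorrelated inputs)
u_c = math.sqrt(sum(u ** 2 for u in components))

# Expanded uncertainty with coverage factor k = 2 (about 95 % confidence)
k = 2
U = k * u_c
print(f"combined u_c = {u_c:.2e}, expanded U = {U:.2e}")
```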


Estimation of Surface Spectral Reflectance using A Population with Similar Colors (유사색 모집단을 이용한 물체의 분광 반사율 추정)

  • 이철희;서봉우;안석출
    • Journal of Korea Multimedia Society
    • /
    • v.4 no.1
    • /
    • pp.37-45
    • /
    • 2001
  • Studies estimating the surface spectral reflectance of objects with multi-spectral camera systems have received widespread attention. However, a multi-spectral camera system requires an additional color filter for each added channel, and system complexity grows with the multiple captures involved. This paper therefore proposes an algorithm that reduces the estimation error of surface spectral reflectance with a conventional 3-band RGB camera. In the proposed method, adaptive principal components for each pixel are calculated by renewing the population of surface reflectances, and these adaptive principal components reduce the estimation error for the current pixel. To evaluate the proposed method, 3-band principal component analysis, 5-band Wiener estimation and the proposed method were compared in an estimation experiment with the Macbeth Color Checker. The proposed method showed a lower mean square error between the estimated and measured spectra than the conventional 3-band principal component analysis method, and similar or better estimation performance than the 5-band Wiener method.
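The 3-band principal-component baseline the paper improves on can be sketched as follows: reflectances are modeled as a population mean plus three principal components, and the three PCA weights are recovered from the camera's RGB response by solving a 3x3 linear system. The population and camera sensitivities below are synthetic stand-ins, and the paper's adaptive per-pixel renewal of the population is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)
n_wl = 31                                  # wavelengths 400-700 nm, 10 nm steps

# Hypothetical smooth training reflectances standing in for a measured population
base = rng.random((60, n_wl))
kernel = np.ones(5) / 5
train = np.array([np.convolve(r, kernel, mode="same") for r in base])

# Hypothetical 3-band camera sensitivities (Gaussian R, G, B responses)
wl = np.linspace(400, 700, n_wl)
S = np.stack([np.exp(-0.5 * ((wl - mu) / 30) ** 2) for mu in (600, 550, 450)])

# PCA of the population: reflectance ~ mean + B @ w with 3 basis vectors
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
B = Vt[:3].T                               # (n_wl, 3) principal components

# Camera response of a test reflectance, then inversion for the 3 PCA weights
r_true = train[0]
c = S @ r_true                             # observed RGB triplet
w = np.linalg.solve(S @ B, c - S @ mean)   # 3x3 linear system
r_est = mean + B @ w                       # estimated spectral reflectance

rmse = np.sqrt(np.mean((r_est - r_true) ** 2))
print(f"RMSE = {rmse:.4f}")
```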


A New Scaling Method for Characteristics of Gas Turbine Components using Polynomial Equation (Polynomial 다항식을 이용한 가스터빈 구성품 선도의 새로운 Scaling 방법)

  • Kong, Chang-Duk;Ki, Ja-Young;Kang, Myoung-Cheol
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.30 no.7
    • /
    • pp.36-43
    • /
    • 2002
  • A new scaling method is proposed for predicting gas turbine component characteristics from experimental data or from partial data supplied by engine manufacturers. To minimize the analyzed performance error, component maps were first constructed by identifying the performance data given by the engine manufacturer at several operating conditions; the performance simulated with the identified maps was then compared with the result of the currently used traditional scaling method. In this comparison, the traditional scaling method agreed well with the real engine performance at the on-design point, but showed a maximum error of 12% at off-design points within the flight envelope of the study turboprop engine. The performance analyzed with the newly proposed scaling method, in contrast, had a maximum error of only 6% over the entire flight envelope.
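The traditional scaling the paper compares against multiplies every map point by the ratio of the engine design-point value to the map design-point value, while the proposed alternative identifies the characteristic with a fitted polynomial. A sketch with hypothetical compressor data; the map values, design points and polynomial order are all assumptions:

```python
import numpy as np

# Traditional linear map scaling: every map point is multiplied by the ratio
# of the engine design-point value to the map design-point value.
def scale_map(map_vals, design_actual, design_map):
    return np.asarray(map_vals) * (design_actual / design_map)

# Hypothetical compressor pressure-ratio map points and design values
pr_map = np.array([3.8, 4.2, 4.6, 5.0])
pr_scaled = scale_map(pr_map, design_actual=4.5, design_map=4.6)

# The polynomial alternative: identify the characteristic directly with a
# polynomial fitted to the (sparse) available data points.
flow = np.array([0.80, 0.90, 1.00, 1.05])     # corrected mass flow (normalised)
coeffs = np.polyfit(flow, pr_scaled, deg=2)   # 2nd-order polynomial fit
pr_at = np.polyval(coeffs, 0.95)              # evaluate at an off-design point
print(pr_scaled, coeffs, pr_at)
```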

VLBI TRF Combination Using GNSS Software

  • Kwak, Younghee;Cho, Jungho
    • Journal of Astronomy and Space Sciences
    • /
    • v.30 no.4
    • /
    • pp.315-320
    • /
    • 2013
  • Space geodetic techniques can be used to obtain precise shape and rotation information of the Earth. To achieve this, the representative combination solution of each space geodetic technique has to be produced, and then those solutions need to be combined. In this study, the representative combination solution of very long baseline interferometry (VLBI), one of the space geodetic techniques, was produced, and the variations in the position coordinates of each station over 7 years were analyzed. Products from five analysis centers of the International VLBI Service for Geodesy and Astrometry (IVS) were used as the input data, and Bernese 5.0, a global navigation satellite system (GNSS) data processing software package, was used. The analysis of the coordinate time series for the 43 VLBI stations indicated that the latitude component error was about 15.6 mm, the longitude component error about 37.7 mm, and the height component error about 30.9 mm, with respect to the reference frame, International Terrestrial Reference Frame 2008 (ITRF2008). The velocity vectors of the 42 stations excluding the YEBES station showed a magnitude difference of 7.3 mm/yr (30.2%) and a direction difference of $13.8^{\circ}$ (3.8%) with respect to ITRF2008. Among these, the 10 stations in Europe showed a magnitude difference of 7.8 mm/yr (30.3%) and a direction difference of $3.7^{\circ}$ (1.0%), while the 14 stations in North America showed a magnitude difference of 2.7 mm/yr (15.8%) and a direction difference of $10.3^{\circ}$ (2.9%).

Development and Application of the Heteroscedastic Logit Model (이분산 로짓모형의 추정과 적용)

  • 양인석;노정현;김강수
    • Journal of Korean Society of Transportation
    • /
    • v.21 no.4
    • /
    • pp.57-66
    • /
    • 2003
  • Because the Logit model easily calculates choice probabilities for alternatives and estimates parameters for explanatory variables, it is widely used as a traffic mode choice model. However, the model assumes that the error components of the mode choice utility functions are independently and identically distributed. This paper studies the estimation of the heteroscedastic Logit model, which relaxes this assumption. Its purpose is to estimate a Logit model that more accurately reflects the mode choice behavior of passengers by resolving the homoscedasticity of the mode choice utility error component. To this end, we introduced a scale factor that is directly related to the error component distribution of the model. The scale factor was defined to account for heteroscedasticity in the difference in travel time between using public transport and driving a car, and was used to estimate the travel time parameter. The estimation results show that heteroscedastic Logit models can realistically reflect the mode choice behavior of passengers: even when the travel-time difference between public and private transport stays the same as total passenger travel time increases, the models capture the resulting change in the probability of choosing public transportation.
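A minimal sketch of the idea: in a binary logit, the utility difference is multiplied by a scale factor that depends on the travel-time difference, instead of being held constant as under the i.i.d. assumption. The functional form of the scale factor and all parameter values below are illustrative assumptions, not the paper's estimates:

```python
import math

# Hypothetical estimated parameters (time in minutes, cost in KRW)
beta_time, beta_cost = -0.08, -0.002
theta = 0.05                       # assumed heteroscedasticity parameter

def choice_prob_transit(t_transit, t_car, cost_transit, cost_car):
    # systematic utility difference (transit minus car)
    dv = (beta_time * (t_transit - t_car)
          + beta_cost * (cost_transit - cost_car))
    # scale factor varies with the travel-time difference, so the error
    # variance differs across observations (heteroscedastic logit)
    scale = math.exp(-theta * abs(t_transit - t_car))
    return 1.0 / (1.0 + math.exp(-scale * dv))

p = choice_prob_transit(t_transit=50, t_car=40, cost_transit=1200, cost_car=3000)
print(f"P(transit) = {p:.3f}")
```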

Tests for Panel Regression Model with Unbalanced Data

  • Song, Suck-Heun;Jung, Byoung-Cheol
    • Journal of the Korean Statistical Society
    • /
    • v.30 no.3
    • /
    • pp.511-527
    • /
    • 2001
  • This paper considers the problem of testing for a variance component in the unbalanced two-way error component model. We provide a conditional LM test statistic for testing zero individual (time) effects under the assumption that the other, time-specific (individual) effects are present. This test extends those of Baltagi, Chang and Li (1998, 1992). Monte Carlo experiments are conducted to study the performance of the LM test.
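For orientation, the standard marginal Breusch-Pagan-style LM test for a zero individual-effect variance in a balanced one-way panel can be sketched as below. The paper's contribution is the conditional version for the unbalanced two-way model, which this simplified balanced sketch does not implement:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 50, 5                      # balanced panel (a simplification; the
                                  # paper treats the unbalanced two-way model)

# Simulate y = 2*x + u_i + e_it with individual effects actually present
x = rng.normal(size=(N, T))
u = rng.normal(scale=1.0, size=(N, 1))
y = 2.0 * x + u + rng.normal(size=(N, T))

# Pooled OLS residuals (single regressor, no intercept needed here)
b = (x * y).sum() / (x * x).sum()
e = y - b * x

# Breusch-Pagan LM statistic for H0: var(u) = 0; chi-square with 1 df
num = (e.sum(axis=1) ** 2).sum()
den = (e ** 2).sum()
lm = (N * T) / (2 * (T - 1)) * (num / den - 1) ** 2
print(f"LM = {lm:.2f}")          # large values reject H0 (crit. ~3.84 at 5 %)
```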


Effect of Departures from Independence for a System

  • Park, Byung-Gu;Jeong, Cheol-Hyun
    • Journal of Korean Society for Quality Management
    • /
    • v.19 no.1
    • /
    • pp.28-42
    • /
    • 1991
  • For a series or parallel system whose component lifetimes follow the absolutely continuous bivariate exponential distribution (ACBVE) of Block and Basu (1974), the common assumption that the component lifetimes are independent is often still used. The purpose of this paper is to investigate the magnitude of the error caused by this erroneous assumption, using the measure proposed by Klein and Moeschberger (1986). The measure is estimated by maximum likelihood (MLE), and these estimators are compared with the corresponding jackknifed MLEs through a Monte Carlo study.
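The kind of error at stake can be illustrated with the Marshall-Olkin construction (the Block-Basu ACBVE is its absolutely continuous part): for a series system, the independence assumption effectively double-counts the common-shock rate and so understates reliability. The rates below are hypothetical, and this is not the paper's Klein-Moeschberger measure:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
l1, l2, l12 = 1.0, 1.5, 0.8       # hypothetical rates; l12 drives dependence
n, t = 200_000, 0.5

# Marshall-Olkin dependent exponential lifetimes:
# X = min(Z1, Z12), Y = min(Z2, Z12) with a shared shock Z12
z1 = rng.exponential(1 / l1, n)
z2 = rng.exponential(1 / l2, n)
z12 = rng.exponential(1 / l12, n)
x, y = np.minimum(z1, z12), np.minimum(z2, z12)

# Series-system reliability at time t: truth vs. the independence assumption
r_true = np.mean(np.minimum(x, y) > t)                    # Monte Carlo
r_exact = math.exp(-(l1 + l2 + l12) * t)                  # analytic truth
r_indep = math.exp(-(l1 + l12) * t) * math.exp(-(l2 + l12) * t)
print(f"true = {r_true:.3f}, exact = {r_exact:.3f}, indep. = {r_indep:.3f}")
```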


A Channel Equalization Algorithm Using Neural Network Based Data Least Squares (뉴럴네트웍에 기반한 Data Least Squares를 사용한 채널 등화기 알고리즘)

  • Lim, Jun-Seok;Pyeon, Yong-Kuk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.2E
    • /
    • pp.63-68
    • /
    • 2007
  • Using the neural network model for oriented principal component analysis (OPCA), we propose a solution to the data least squares (DLS) problem, in which the error is assumed to lie in the data matrix only. In this paper, we applied this neural network model to channel equalization. Simulations show that the neural network based DLS outperforms ordinary least squares in channel equalization problems.
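The data least squares criterion can be written as minimizing ||Ax - b||^2 / ||x||^2, which places all the error in the data matrix A. The sketch below solves a toy channel-estimation version by direct numerical minimization (via SciPy) rather than the paper's OPCA neural network; the channel dimensions and noise level are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Toy problem: noise is placed in the data matrix A only, which is exactly
# the data-least-squares (DLS) assumption.
x_true = np.array([1.0, -0.5, 0.25])
A_clean = rng.normal(size=(200, 3))
b = A_clean @ x_true                                  # clean desired output
A = A_clean + 0.1 * rng.normal(size=A_clean.shape)    # noisy data matrix

def dls_cost(x):
    # DLS objective: squared residual normalised by the solution norm
    return np.sum((A @ x - b) ** 2) / np.sum(x ** 2)

x_ols = np.linalg.lstsq(A, b, rcond=None)[0]          # ordinary least squares
x_dls = minimize(dls_cost, x_ols).x                   # numerical DLS solution

print("OLS error:", np.linalg.norm(x_ols - x_true))
print("DLS error:", np.linalg.norm(x_dls - x_true))
```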

Learning Generative Models with the Up-Propagation Algorithm (생성모형의 학습을 위한 상향전파알고리듬)

  • ;H. Sebastian Seung
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 1998.10c
    • /
    • pp.327-329
    • /
    • 1998
  • Up-propagation is an algorithm for inverting and learning neural network generative models. Sensory input is processed by inverting a model that generates patterns from hidden variables using top-down connections. The inversion process is iterative, utilizing a negative feedback loop that depends on an error signal propagated by bottom-up connections. The error signal is also used to learn the generative model from examples. The algorithm is benchmarked against principal component analysis in experiments on images of handwritten digits.
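The negative-feedback inversion described above can be sketched for a one-layer generative model: the hidden variables are adjusted by the bottom-up-propagated error until the top-down prediction settles, and the same error then updates the weights. The layer sizes, learning rates and random data below are arbitrary toy choices standing in for digit images:

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# One-layer generative model: top-down prediction x_hat = sigmoid(W h)
n_vis, n_hid = 20, 5
W = 0.1 * rng.normal(size=(n_vis, n_hid))
data = rng.random((100, n_vis))            # stand-in for sensory inputs

eta_h, eta_w = 0.5, 0.01
for x in data:
    h = np.zeros(n_hid)
    for _ in range(30):                    # iterative inversion of the model
        x_hat = sigmoid(W @ h)
        e = x - x_hat                      # top-down prediction error
        # error propagated bottom-up adjusts the hidden variables
        h += eta_h * (W.T @ (e * x_hat * (1 - x_hat)))
    # the same error signal trains the generative weights
    W += eta_w * np.outer(e * x_hat * (1 - x_hat), h)

recon_err = np.mean((data[-1] - sigmoid(W @ h)) ** 2)
print(f"final reconstruction MSE = {recon_err:.4f}")
```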


An Analysis of Human Factor and Error for Human Error of the Semiconductor Industry (반도체 산업에서의 인적오류에 대한 인적요인과 과오에 대한 분석)

  • Yun, Yong-Gu;Park, Beom
    • Proceedings of the Safety Management and Science Conference
    • /
    • 2007.04a
    • /
    • pp.113-123
    • /
    • 2007
  • Starting from incident histories and a questionnaire survey on unsafe behaviour, this study extracts the person-centred unsafe factors behind accidents in the semiconductor industry and identifies the most important among them. The extracted factors are classified by cause into first, second and third factors, which form the basis for a model of human error. Human factors are defined component by component, and a first-stage cause analysis is carried out on the collected data. The correlations between the leading human-error factors in the semiconductor industry are then analysed, and significant relationships between errors are identified. Drawing on traditional and modern accident-causation models and on the human-error models of earlier scholars as theoretical background, the study proposes a new person-centred human-error model for the semiconductor industry, intended as a framework for the advance suppression of accidents.
