• Title/Summary/Keyword: Sampling Errors

A Sampling Inspection Plan with Human Error: Considering the Relationship between Visual Inspection Time and Human Error Rate

  • Lee, Yong-Hwa;Hong, Seung-Kweon
    • Journal of the Ergonomics Society of Korea
    • /
    • v.30 no.5
    • /
    • pp.645-650
    • /
    • 2011
  • Objective: The aim of this study is to design a sampling inspection plan that accounts for human error, which changes with inspection time. Background: Typical sampling inspection plans have been established on the assumption of perfect inspection without human error. However, most inspection tasks involve human error. Therefore, a sampling inspection plan should be designed with imperfect inspection in mind. Method: A model for single sampling inspection plans was proposed for cases in which the visual inspection error rate changes with inspection time. Additionally, a sampling inspection plan for an optimal inspection time was proposed. To illustrate the proposed model, a visual inspection experiment was performed and inspection error rates were measured as a function of inspection time. Results: Inspection error rates changed with inspection time. The inspection error rate could be reflected in single sampling inspection plans for attributes. In particular, the inspection error rate at the optimal inspection time may be used for a reasonable single sampling plan from a practical viewpoint. Conclusion: The human error rate in inspection tasks should be reflected in typical single sampling inspection plans. A sampling inspection plan that considers human error requires a larger sample size than a typical plan that assumes perfect inspection. Application: The results of this research may help practitioners determine a more practical sampling inspection plan than the typical one.
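As a rough illustration of the conclusion above (an imperfect inspector needs a larger sample size), an inspection error rate can be folded into the operating characteristic of a single sampling plan. This is only a sketch, not the paper's model: the constant miss and false-alarm rates and all names below are assumptions.

```python
from math import comb

def accept_prob(n, c, p, miss=0.0, false_alarm=0.0):
    """Probability of accepting a lot under a single sampling plan (n, c)
    when each inspected item may be misclassified."""
    # Defect rate as seen by an imperfect inspector: true defectives can be
    # missed, and good items can be flagged by mistake.
    p_app = p * (1 - miss) + (1 - p) * false_alarm
    return sum(comb(n, d) * p_app**d * (1 - p_app)**(n - d)
               for d in range(c + 1))

def min_n_for_consumer_risk(c, p_bad, beta, miss=0.0, false_alarm=0.0):
    """Smallest sample size n such that a lot of quality p_bad is accepted
    with probability at most beta (the consumer's risk)."""
    for n in range(c + 1, 5000):
        if accept_prob(n, c, p_bad, miss, false_alarm) <= beta:
            return n
    raise ValueError("no n found in search range")

# A 30% miss rate lowers the apparent defect rate, so more samples are
# needed to keep the same consumer's risk.
n_perfect = min_n_for_consumer_risk(c=1, p_bad=0.05, beta=0.10)
n_imperfect = min_n_for_consumer_risk(c=1, p_bad=0.05, beta=0.10, miss=0.3)
```

With a miss rate of 0.3, the same acceptance number requires a noticeably larger sample, which matches the abstract's conclusion qualitatively.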

ERROR ANALYSIS ASSOCIATED WITH UNIFORM HERMITE INTERPOLATIONS OF BANDLIMITED FUNCTIONS

  • Annaby, Mahmoud H.;Asharabi, Rashad M.
    • Journal of the Korean Mathematical Society
    • /
    • v.47 no.6
    • /
    • pp.1299-1316
    • /
    • 2010
  • We derive estimates for the truncation, amplitude, and jitter-type errors associated with Hermite-type interpolations at equidistant nodes of functions in Paley-Wiener spaces. We give pointwise and uniform estimates, together with examples and comparisons indicating that Hermite interpolation improves on methods based on the classical sampling theorem.

Identification of Implementation Strategy by Practical Interpretations of Significance Level, Significance Probability, and Known Parameters in Statistical Inferences (통계적 추론에서 유의수준, 유의확률과 모수기지의 실무적 해석에 의한 적용방안)

  • Choe, Seong-Un
    • Proceedings of the Safety Management and Science Conference
    • /
    • 2012.04a
    • /
    • pp.75-80
    • /
    • 2012
  • The research presents a guideline for quality practitioners to fully comprehend the differences between theoretical and practical interpretations of the assumed sampling error (significance level) and the significance probability (calculated p-value). In addition, the study recommends the use of statistical inference methods with known parameters to identify improvement effects. In practice, quality practitioners obtain the known parameters through systematic quality database (DB) activities.

A Geostatistical Study Using Qualitative Information for Tunnel Rock Binary Classification - II. Application (이분적 터널 암반 분류를 위한 정성적 자료의 지구통계학적 연구 II. 응용)

  • 유광호
    • Geotechnical Engineering
    • /
    • v.10 no.1
    • /
    • pp.19-26
    • /
    • 1994
  • In this paper, an application of the rock classification method based on indicator kriging and the cost of errors, which can incorporate qualitative data, is presented. In particular, binary classification of rock masses was considered; to this end, a simplified RMR system was used. Since most of the subjectivity in this analysis occurs during the estimation of loss functions, a sensitivity analysis of the loss functions was performed. Through this research, it was found that the expected cost of errors can serve as an indicator of how well a sampling plan is designed. Under certain conditions, qualitative data can be more economical than quantitative data in terms of expected cost of errors and sampling cost. Therefore, additional sampling should be determined carefully depending on the surrounding geologic conditions and its cost. The application method shown in this paper can be useful for more systematic rock classification.

Precision servo control of a computer hard disk (컴퓨터 하드 디스크의 정밀 서보 제어)

  • 전도영
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1996.10b
    • /
    • pp.286-289
    • /
    • 1996
  • Two servo control algorithms are suggested to reduce the tracking error of a computer hard disk drive. One is repetitive control, which reduces the repeatable tracking error that is not explicitly taken into account in the design of a conventional controller; this algorithm was successfully applied to a commercial disk using a fixed-point DSP. The other is multi-rate sampling control, which generates control outputs between sampling instants, since the sampling rate of hard disk drives is limited. Both algorithms were shown to reduce tracking errors effectively.

Measurement of Atmospheric PCDD/Fs Concentrations Using Polyurethane Foam Disk Passive Air Samplers (폴리우레탄 폼 수동형 공기시료채취기를 이용한 대기 중 다이옥신/퓨란 농도 측정)

  • Kim, Taewook;Chun, Man-Young
    • Journal of Environmental Health Sciences
    • /
    • v.42 no.2
    • /
    • pp.102-111
    • /
    • 2016
  • Objectives: This study was conducted to evaluate the use of polyurethane foam disk passive air samplers (PUF PAS) for measuring atmospheric polychlorinated dibenzo-p-dioxin/furan (PCDD/F) concentrations, in comparison with PUF PAS combined with high-volume air samplers (HVS). Methods: Air samples were collected by a low-volume air sampler (LVS) and PUF PAS. A total of two pairs were collected continuously for six months, with the PUF replaced every two months. Results: A good correlation ($R^2=0.8595$, p<0.0001) was found between atmospheric PCDD/F concentrations measured by the LVS and the PUF PAS. The average air sampling rate ($1.5m^3/day-sampler$) across all PCDD/F congeners fell in the middle of the means measured with an HVS by other researchers in different cities. In addition, the per-congener air sampling rates derived from the LVS varied less than those derived from the HVS. Conclusion: Measurements using the LVS were found to be less influenced by atmospheric peak PCDD/F concentrations. However, trace POPs such as PCDD/Fs may involve relatively large analytical errors, so the air sampling rate of each PCDD/F isomer is also likely to involve errors. Using a regression line between the concentrations obtained from the LVS and those from the PUF PAS was judged superior to using the air sampling rate, since the former compensates for experimental errors when evaluating atmospheric PCDD/F concentrations with the PUF PAS.
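The regression-line calibration the conclusion favors, mapping PUF PAS results onto co-located LVS concentrations, can be sketched with a plain least-squares fit. The paired values below are invented for illustration and are not the study's data.

```python
def fit_line(x, y):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical paired measurements: PUF PAS results vs. LVS concentrations.
puf_pas = [2.1, 3.8, 5.2, 7.9]
lvs = [1.0, 2.1, 2.9, 4.4]
slope, intercept = fit_line(puf_pas, lvs)

def lvs_equivalent(puf_value):
    """Convert a new PUF PAS result to an LVS-equivalent concentration."""
    return slope * puf_value + intercept
```

Once fitted, `lvs_equivalent` converts any new passive-sampler reading, sidestepping an explicit per-congener air sampling rate.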

Retrieving the Time History of Displacement from Measured Acceleration Signal

  • Han, Sangbo
    • Journal of Mechanical Science and Technology
    • /
    • v.17 no.2
    • /
    • pp.197-206
    • /
    • 2003
  • It is intended to retrieve the time history of displacement from a measured acceleration signal. In this study, the word retrieving means reconstructing the time history of the original displacement signal from an already measured acceleration signal, not just extracting information using relevant signal processing techniques. Unlike extracting information from the signal, there are not many options for retrieving the displacement time history once the acceleration signal has been measured and recorded at a given sampling rate. There are, in general, two methods to convert a measured acceleration signal into a displacement signal. One is directly integrating the acceleration signal in the time domain. The other is dividing the Fourier-transformed acceleration signal by the scale factor $-\omega^2$ and taking its inverse Fourier transform. It turned out that both methods produce a significant amount of error depending on the sampling resolution in the time and frequency domains when digitizing the acceleration signal. A simple and effective way to convert the acceleration time history into the displacement time history without significant error is studied here, with an analysis of the errors involved in the conversion process.
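The second conversion method mentioned above, dividing the transformed acceleration spectrum by $-\omega^2$, can be sketched as follows. This is generic frequency-domain double integration, not the paper's improved procedure; zeroing the unobservable DC term is an assumption.

```python
import numpy as np

def accel_to_disp_freq(accel, dt):
    """Reconstruct displacement from sampled acceleration by dividing the
    spectrum by -omega^2 (frequency-domain double integration)."""
    n = len(accel)
    A = np.fft.fft(accel)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
    X = np.zeros_like(A)
    nonzero = omega != 0
    # X(w) = A(w) / (-w^2), skipping w = 0.
    X[nonzero] = A[nonzero] / (-omega[nonzero] ** 2)
    # The DC term encodes a rigid-body offset the acceleration cannot see.
    return np.fft.ifft(X).real
```

For a pure sinusoid sampled over an integer number of periods the reconstruction is exact; the leakage and resolution errors the abstract discusses appear as soon as the record is not periodic in the window.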

The Volume Measurement of Air Flowing through a Cross-section with PLC Using Trapezoidal Rule Method

  • Calik, Huseyin
    • Journal of Electrical Engineering and Technology
    • /
    • v.8 no.4
    • /
    • pp.872-878
    • /
    • 2013
  • In industrial control systems, flow measurement is a very important issue. It is frequently necessary to calculate how much total fluid or gas flows through a cross-section. Flow volume measurement tools use simple sampling or rectangle methods. In fact, flow volume measurement is an integration process, so measurement systems using instantaneous sampling cause considerably high errors. In order to measure flow more accurately, numerical integration methods should be used. In the literature, the Rectangular, Trapezoidal, Simpson, Romberg, and Gaussian quadrature methods are suggested for numerical integration. Among these, the trapezoidal rule is quite easy to calculate, notably more accurate, and imposes no restrictive conditions; it is therefore especially convenient for portable flow volume measurement systems. In this study, the volume of air flowing through a cross-section is measured using a PLC ladder diagram. The measurements are done using two different approaches, with the trapezoidal rule proposed for processing the flow sensor signal to minimize the measurement errors of the classical sampling method. It is concluded that the trapezoidal rule is more effective than classical sampling.
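The difference between instantaneous (rectangle) sampling and the trapezoidal rule can be sketched in a few lines. The paper performs this accumulation in PLC ladder logic; the Python version below is only an illustration of the arithmetic.

```python
def volume_rectangle(flows, dt):
    """Classical instantaneous sampling: hold each reading for one interval."""
    return sum(f * dt for f in flows[:-1])

def volume_trapezoid(flows, dt):
    """Trapezoidal rule: average each pair of adjacent readings."""
    return sum((a + b) * dt / 2.0 for a, b in zip(flows, flows[1:]))

# Linearly ramping flow f(t) = t on [0, 1]: the true volume is 0.5.
flows = [0.0, 0.25, 0.5, 0.75, 1.0]
vol_rect = volume_rectangle(flows, dt=0.25)  # underestimates the ramp
vol_trap = volume_trapezoid(flows, dt=0.25)  # exact for linear signals
```

For the ramp, the rectangle sum gives 0.375 while the trapezoidal sum recovers the true 0.5, which is the accuracy gap the abstract describes.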

Errors in Estimated Temporal Tracer Trends Due to Changes in the Historical Observation Network: A Case Study of Oxygen Trends in the Southern Ocean

  • Min, Dong-Ha;Keller, Klaus
    • Ocean and Polar Research
    • /
    • v.27 no.2
    • /
    • pp.189-195
    • /
    • 2005
  • Several models predict large and potentially abrupt ocean circulation changes due to anthropogenic greenhouse-gas emissions. In the models, these circulation changes drive considerable oceanic oxygen trends. A sound estimate of the observed oxygen trends can hence be a powerful tool to constrain predictions of future changes in oceanic deepwater formation and in heat and carbon dioxide uptake. Estimating decadal-scale oxygen trends is, however, a nontrivial task, and previous studies have come to contradictory conclusions. One key potential problem is that changes in the historical observation network might introduce considerable errors. Here we estimate the likely magnitude of these errors for a subset of the available observations in the Southern Ocean. We test three common data analysis methods south of Australia and focus on the decadal-scale trends between the 1970s and the 1990s. Specifically, we estimate errors due to sparsely sampled observations using a known signal (the time-invariant, temporally averaged World Ocean Atlas 2001) as a negative control. The crossover analysis and objective analysis methods are far less prone to spatial sampling location biases than the area averaging method. Subject to numerous caveats, we find that errors due to sparse sampling for the area averaging method are on the order of several micromoles $kg^{-1}$; for the crossover and objective analysis methods, these errors are much smaller. For the analyzed example, the biases due to changes in the spatial design of the historical observation network are relatively small compared to the trends predicted by many model simulations. This raises the possibility of using historic oxygen trends to constrain model simulations, even in sparsely sampled ocean basins.

Hybrid Down-Sampling Method of Depth Map Based on Moving Objects (움직임 객체 기반의 하이브리드 깊이 맵 다운샘플링 기법)

  • Kim, Tae-Woo;Kim, Jung Hun;Park, Myung Woo;Shin, Jitae
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37A no.11
    • /
    • pp.918-926
    • /
    • 2012
  • In 3D video transmission, a depth map being used for depth image based rendering (DIBR) is generally compressed by reducing resolution for coding efficiency. Errors in resolution reduction are recovered by an appropriate up-sampling method after decoding. However, most previous works only focus on up-sampling techniques to reduce errors. In this paper, we propose a novel down-sampling technique of depth map that applies different down-sampling rates on moving objects and background in order to enhance human perceptual quality. Experimental results demonstrate that the proposed scheme provides both higher visual quality and peak signal-to-noise ratio (PSNR). Also, our method is compatible with other up-sampling techniques.