• Title/Summary/Keyword: Expected error


A Study on Error Characteristics of Large Size Electromagnetic Flowmeter in the Range of Low Velocity (저유속 영역에서 대구경 전자기유량계의 오차특성 연구)

  • Lee, Dong-Keun;Park, Jong-Ho
    • Transactions of the Korean Society of Mechanical Engineers B / v.32 no.3 / pp.235-240 / 2008
  • A large electromagnetic flowmeter was tested to investigate the variation of its error characteristics in the low-velocity range below 0.6 m/s using a flowmeter calibration system. For two valve opening rates, 100 % and 50 %, the tests were repeated three times for each of twelve velocity conditions from 0.05 m/s to 0.6 m/s in increments of 0.05 m/s. The error of the electromagnetic flowmeter stabilized within ±0.4 % of rate for velocity conditions above 0.25 m/s and the 50 % valve opening position. However, the measurement deviation of the Φ400 mm and Φ600 mm flowmeters fell outside the expected deviation range, so correction through calibration is necessary. In conclusion, the error characteristics of the electromagnetic flowmeter did not change in proportion to its size.
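
The core calculation in this kind of calibration test is the meter error expressed as a percentage of rate against a reference standard. The following is a minimal sketch of that calculation only; the velocity points, noise level, and readings are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
v_ref = np.linspace(0.05, 0.60, 12)               # twelve reference velocities, m/s
v_meter = v_ref * (1 + rng.normal(0, 0.002, 12))  # assumed flowmeter readings

error_pct_of_rate = 100.0 * (v_meter - v_ref) / v_ref
within_spec = np.abs(error_pct_of_rate) <= 0.4    # the ±0.4 % of rate criterion
for v, e, ok in zip(v_ref, error_pct_of_rate, within_spec):
    print(f"{v:.2f} m/s: {e:+.3f} % of rate, within spec: {ok}")
```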

The wavelet based Kalman filter method for the estimation of time-series data (시계열 데이터의 추정을 위한 웨이블릿 칼만 필터 기법)

  • Hong, Chan-Young;Yoon, Tae-Sung;Park, Jin-Bae
    • Proceedings of the KIEE Conference / 2003.11c / pp.449-451 / 2003
  • The estimation of time-series data is a fundamental process in many data analyses. However, unwanted measurement error is usually added to the true data, so accurate estimation depends on an efficient method for eliminating the error components. The wavelet transform is expected to improve the accuracy of estimation because it can decompose and analyze the data at various resolutions. Therefore, this paper proposes a wavelet-based Kalman filter method for the estimation of time-series data. The wavelet transform separates the data according to frequency bandwidth, and the detail wavelet coefficients reflect the stochastic process of the error components. This property makes it possible to obtain the covariance of the measurement error. We then estimate the true data through a recursive Kalman filtering algorithm using the obtained covariance value. The procedure is verified with the fundamental example of a Brownian walk process.
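
A minimal sketch of the idea described above, under stated assumptions rather than as the authors' implementation: the finest-scale detail coefficients are treated as noise-dominated, the robust MAD estimator turns them into a measurement-noise covariance R, and a standard Kalman filter with a random-walk state model uses that R. The wavelet choice (db4) and noise levels are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0, 0.1, 512))        # Brownian-walk true signal
z = truth + rng.normal(0, 0.5, truth.size)        # noisy measurements

# Finest-scale detail coefficients are dominated by measurement noise;
# the median-absolute-deviation estimator gives its standard deviation.
_, cD = pywt.dwt(z, 'db4')
sigma = np.median(np.abs(cD)) / 0.6745
R = sigma ** 2                                    # measurement-noise covariance

# Random-walk state model: x_k = x_{k-1} + w_k,  z_k = x_k + v_k.
Q, x, P = 0.01, z[0], 1.0
estimates = []
for zk in z:
    P += Q                                        # predict
    K = P / (P + R)                               # Kalman gain
    x += K * (zk - x)                             # measurement update
    P *= (1 - K)
    estimates.append(x)
```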


Relationship Between Housing Prices and Expected Housing Prices in the Real Estate Industry (주택유통산업에서의 주택가격과 기대주택가격간의 관계분석)

  • Choi, Cha-Soon
    • Journal of Distribution Science / v.13 no.11 / pp.39-46 / 2015
  • Purpose - In Korea, housing prices have risen rapidly since the International Monetary Fund crisis, and this rapid rise is increasingly recognized as a factor in housing price volatility. In addition, it raises expectations of future housing prices. These expectations are based on the assumption that a relationship exists between current housing prices and expected housing prices in the real estate industry. By performing an empirical analysis on the validity of the claim that an increase in current housing prices can be correlated with expected housing prices, this study examines whether a long-term equilibrium relationship exists between expected housing prices and existing housing prices. If such a relationship exists, the recovery of equilibrium from disequilibrium is analyzed to derive related implications. Research design, data, and methodology - The relationship between current housing prices and expected housing prices was analyzed empirically using the Vector Error Correction Model. This model was applied to the co-integration test, the long-term equilibrium equation among variables, and the causality test. The housing prices used in the analysis were based on the National Housing Price Trend Survey released by Kookmin Bank. Additionally, the Index of Industrial Product and the Consumer Price Index were obtained from the Bank of Korea ECOS. The monthly data analyzed cover January 1987 to May 2015. Results - First, a long-term equilibrium relationship was established as one co-integration between current housing price distribution and expected housing prices. Second, the sign of the long-term equilibrium relationship variable was consistent with the theoretical sign, with the elasticities of housing price distribution to expected housing prices, industrial production, and consumer price volatility revealed as 1.600, 0.104, and 0.092, respectively. This implies that the long-term effect of expected housing price volatility on housing price distribution is more significant than that of industrial production and consumer price volatility. Third, the sign of the coefficient of the error correction term coincided with the theoretical sign. The absolute value of the coefficient of the correction term in the industrial production equation was 0.006, significantly larger than the coefficients for the expected housing price and consumer price equations. In the case of divergence from the long-term equilibrium relationship, the state of equilibrium will be restored through changes in the interest rate. Fourth, housing price volatility was found to be causal to expected housing prices, and bi-directionally causal to industrial production. Conclusions - Based on the findings of this study, it is necessary to weaken the association between current housing price distribution and expected housing prices by using property taxes and the loan-to-value policy to stabilize the housing market. Further, the relationship between housing price distribution and expected housing prices can be examined and tested using a more sophisticated methodology and policy variables.
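
The methodology section describes a Johansen cointegration test followed by a Vector Error Correction Model. Below is a hedged sketch of that workflow with statsmodels, using synthetic stand-in series; the paper's Kookmin Bank and ECOS data, lag order, and deterministic terms are not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

rng = np.random.default_rng(1)
n = 300
common = np.cumsum(rng.normal(size=n))            # shared stochastic trend
df = pd.DataFrame({
    "housing_price":   common + rng.normal(scale=0.5, size=n),
    "expected_price":  common + rng.normal(scale=0.5, size=n),
    "industrial_prod": np.cumsum(rng.normal(size=n)),
})

# Johansen test for the number of cointegrating relations.
rank_test = coint_johansen(df, det_order=0, k_ar_diff=1)
print("trace statistics:", rank_test.lr1)         # compare against rank_test.cvt

# VECM with one cointegrating relation (as the abstract reports).
res = VECM(df, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print(res.beta)                                   # long-run equilibrium vector
print(res.alpha)                                  # error-correction coefficients
```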

The Effects of Chatbot's Error Types and Structures of Error Message on User Experience (챗봇의 오류 유형과 오류 메시지 구조화 여부가 사용자 경험에 미치는 영향)

  • Lee, Mi-Jin;Han, Kwang-Hee
    • The Journal of the Korea Contents Association / v.21 no.6 / pp.19-34 / 2021
  • The aim of this study is to verify the effects of a chatbot's error types and the structure of its error messages on attitude and behavioral intention toward the chatbot, as well as its perceived usability. Following mind perception theory, the chatbot's error types are divided into 'experience' errors and 'agency' errors, which set different expectancy levels. The error messages were either unstructured, consisting of the error specification only, or structured, consisting of an apology, an explanation, and an expression of willingness to improve. Perceived usability scored higher in the experience-error condition than in the agency-error condition. In addition, all three dependent-variable scores were higher in the structured error-message condition than in the unstructured condition. Furthermore, the experience expectation gap did not predict the dependent variables, whereas the agency expectation gap predicted all three. Finally, a trend toward an interaction effect between error type and error-message structure on the agency expectation gap was observed. This study confirms the mitigating effect of structured error messages and the possibility that these effects vary with the type of error. The results are expected to be applicable to the design of error-coping strategies that enhance user experience.

Analysis on Upper and Lower Bounds of Stochastic LP Problems (확률적 선형계획문제의 상한과 하한한계 분석)

  • 이상진
    • Journal of the Korean Operations Research and Management Science Society / v.27 no.3 / pp.145-156 / 2002
  • Business managers are often required to use LP problems to deal with the uncertainty inherent in decision making caused by rapid changes in today's business environments. Uncertain parameters can be easily formulated in two-stage stochastic LP problems. However, since solution methods are complex and time-consuming, a common approach has been to use modified formulations that provide upper and lower bounds on the two-stage stochastic LP problem. One approach is to use an expected value problem, which provides upper and lower bounds. Another approach is to use the "wait-and-see" problem to provide upper and lower bounds. The objective of this paper is to propose a modified wait-and-see approach that provides an upper bound and to compare the relative error of the optimal value across various upper and lower bounds. A computing experiment shows the relative error of the optimal value for the various upper and lower bounds, together with computing times.
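
For orientation, the two classical bounds the abstract refers to can be computed on a toy problem: the wait-and-see (WS) value averages the scenario-wise optima, while the expected-value (EV) problem solves once with averaged parameters. The sketch below uses an invented two-scenario minimization (the paper's modified wait-and-see formulation is not reproduced).

```python
from scipy.optimize import linprog

# min 2*x1 + 3*x2  s.t.  x1 + x2 >= d (demand, scenario-dependent), x1 <= 9.
c = [2.0, 3.0]
A = [[-1.0, -1.0],                 # -x1 - x2 <= -d  encodes x1 + x2 >= d
     [ 1.0,  0.0]]                 # capacity on the cheap source: x1 <= 9
scenarios = [(8.0, 0.5), (12.0, 0.5)]             # (demand d, probability)

# Wait-and-see: solve each scenario separately, then average the optima.
ws = sum(p * linprog(c, A_ub=A, b_ub=[-d, 9.0]).fun for d, p in scenarios)

# Expected-value problem: solve once with the mean demand.
d_mean = sum(p * d for d, p in scenarios)
ev = linprog(c, A_ub=A, b_ub=[-d_mean, 9.0]).fun

print(f"wait-and-see value: {ws:.2f}, expected-value problem: {ev:.2f}")
```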

A Study of Software Quality Evaluation Using Error-Data (오류데이터를 이용한 소프트웨어 품질평가)

  • Moon, Wae-Sik
    • Journal of The Korean Association of Information Education / v.2 no.1 / pp.35-51 / 1998
  • The software reliability growth model is a software quality evaluation method that quantitatively calculates software reliability based on the number of errors detected. For a correct and precise evaluation of the reliability of given software, the reliability model that fits closest to the real data should be selected. In this paper, the optimal model for specific test data was selected from among five software reliability growth models based on the NHPP (Non-Homogeneous Poisson Process), and, as a result, reliability estimation measures (total expected number of errors, error detection rate, expected number of errors remaining in the software, reliability, etc.) could be obtained. The obtained measures support software development, prediction of the optimal release point, and, finally, systematic project management.
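
The abstract does not name the five NHPP models, so as an assumed illustration the sketch below fits one common NHPP model, the Goel-Okumoto model with mean value function m(t) = a(1 - exp(-b t)), to invented cumulative error counts, then reads off the estimation measures listed above.

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    # Expected cumulative number of errors detected by time t.
    return a * (1.0 - np.exp(-b * t))

t = np.arange(1, 21)                              # test weeks (assumed data)
errors = np.array([5, 9, 13, 16, 19, 21, 23, 25, 26, 27,
                   28, 29, 30, 30, 31, 31, 32, 32, 32, 33])

(a, b), _ = curve_fit(goel_okumoto, t, errors, p0=[40.0, 0.1])
remaining = a - errors[-1]                        # expected errors remaining
print(f"total expected errors a = {a:.1f}, detection rate b = {b:.3f}")
print(f"expected errors remaining = {remaining:.1f}")
```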


A Sequential Approach for Estimating the Variance of a Normal Population Using Some Available Prior Information

  • Samawi, Hani M.;Al-Saleh, Mohammad F.
    • Journal of the Korean Statistical Society / v.31 no.4 / pp.433-445 / 2002
  • Using some available information about the unknown variance $\sigma^2$ of a normal distribution with mean $\mu$, a sequential approach is used to estimate $\sigma^2$. Two cases are considered, according to whether the mean $\mu$ is known or unknown. The mean square error (MSE) of the new estimators is compared to that of the usual estimator of $\sigma^2$, namely, the sample variance based on a sample of size equal to the expected sample size. Simulation results indicate that the new estimator is more efficient than the usual estimator of $\sigma^2$ whenever the actual value of $\sigma^2$ is not too far from the prior information.
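
The abstract does not give the estimator's exact form, so the Monte Carlo sketch below uses an assumed fixed shrinkage toward a prior guess purely as a stand-in, to show how such an MSE comparison is run: the prior-informed estimator wins when the prior is close to the truth.

```python
import numpy as np

rng = np.random.default_rng(2)
true_var, prior_var, n, reps = 4.0, 3.5, 30, 20000

x = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
s2 = x.var(axis=1, ddof=1)                        # usual sample variance
shrunk = 0.7 * s2 + 0.3 * prior_var               # assumed shrinkage weights

mse = lambda est: np.mean((est - true_var) ** 2)
print(f"MSE usual: {mse(s2):.3f}, MSE prior-informed: {mse(shrunk):.3f}")
```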

The Effect of Altitude Errors in Altitude-aided Global Navigation Satellite System(GNSS) (고도를 고정한 GNSS 위치 결정 기법에서 고도 오차의 영향)

  • Cho, Sung-Lyong;Han, Young-Hoon;Kim, Sang-Sik;Moon, Jei-Hyeong;Lee, Sang-Jeong;Park, Chan-Sik
    • The Transactions of The Korean Institute of Electrical Engineers / v.61 no.10 / pp.1483-1488 / 2012
  • This paper analyzed the precision and accuracy of altitude-aided GNSS using altitude information from a digital map. The precision of altitude-aided GNSS is analyzed using the theoretically derived DOP, and it is confirmed to be superior to that of the general 3D positioning method. It is also shown that the DOP of altitude-aided GNSS is independent of the altitude bias error, while the accuracy is influenced by it. Furthermore, it is shown that, since the altitude bias error affects each pseudorange measurement differently, its effect is more serious than that of the clock bias error, which does not influence the position error at all. The results are evaluated by simulation using a commercial RF simulator and GPS receiver, confirming that altitude-aided GNSS can improve not only precision but also accuracy if the altitude bias error is small. These results are expected to be readily applicable to performance improvement in land and maritime applications.
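
A simplified sketch of the DOP comparison, with an assumed four-satellite geometry rather than the paper's simulation setup: fixing the altitude is modeled by appending a vertical-constraint row to the geometry matrix H before evaluating trace((H^T H)^{-1}).

```python
import numpy as np

# Approximate unit line-of-sight rows [ux, uy, uz, 1] for four satellites
# (assumed geometry; the trailing 1 is the clock-bias column).
H = np.array([
    [ 0.6,  0.4, 0.70, 1.0],
    [-0.5,  0.5, 0.70, 1.0],
    [ 0.3, -0.7, 0.65, 1.0],
    [-0.4, -0.4, 0.80, 1.0],
])

def gdop(H):
    return np.sqrt(np.trace(np.linalg.inv(H.T @ H)))

# Altitude aiding: an extra pseudo-measurement constraining the vertical axis.
H_alt = np.vstack([H, [0.0, 0.0, 1.0, 0.0]])
print(f"GDOP 3D: {gdop(H):.2f}, altitude-aided: {gdop(H_alt):.2f}")
```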

R&D Status of Quantum Computing Technology (양자컴퓨팅 기술 연구개발 동향)

  • Baek, C.H.;Hwang, Y.S.;Kim, T.W.;Choi, B.S.
    • Electronics and Telecommunications Trends / v.33 no.1 / pp.20-33 / 2018
  • The calculation speed of quantum computing is expected to outperform that of existing supercomputers on certain problems, such as secure computing, optimization, searching, and quantum chemistry. Many companies, such as Google and IBM, have been trying to build 50 superconducting qubits, which is expected to demonstrate quantum supremacy, the point at which quantum computers surpass classical computers in computing power. However, applying quantum computers to real-world problems will require large-scale quantum computing with many more qubits than the 50 currently available. To realize this, quantum error correction codes are first required so that computation can be performed within a sufficient amount of time and with tolerable accuracy. Next, a compiler is required for the qubits encoded by quantum error correction codes to perform quantum operations. A large-scale quantum computer is therefore predicted to be composed of three essential components: a programming environment, layout mapping of qubits, and quantum processors. These components determine how many qubits are needed, how accurate the qubit operations must be, and where the qubits are placed and operated. In this paper, recent progress on large-scale quantum computing and the relations among these components are introduced.

FMO Performance Evaluation and Comparison of H.264 over Packet-Lossy Networks (패킷 손실이 발생하는 네트워크 환경에서의 H.264의 FMO 성능분석과 비교에 관한 연구)

  • Kim Won-Jung;Lim Hye-Sook;Yim Chang-Hoon
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.5C / pp.490-496 / 2006
  • H.264 is the most recent video coding standard and contains improved error-resilience tools compared with previous video compression schemes. This paper analyzes how error concealment (EC) performance depends on the expected number of correctly received neighboring macroblocks (MBs) for a lost MB, applying error concealment schemes to the raster scan mode used in previous video coding standards and to flexible macroblock ordering (FMO), one of the error-resilience tools in H.264. We also present simulation results and performance evaluations at various packet loss rates. The simulations show that the FMO mode provides better EC performance, with PSNR improvements of 1 to 9 dB over the raster scan mode, because of the larger expected number of correctly received neighboring MBs. The PSNR improvement from the FMO mode grows as the intra-frame period lengthens and the packet loss rate increases.
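
A back-of-the-envelope sketch of the quantity the abstract centers on, under simplified packetization assumptions (one slice per packet, independent losses with probability p; not the paper's exact model): the expected number of a lost MB's four neighbors that arrive correctly, for raster scan versus checkerboard FMO.

```python
def expected_neighbors(p: float, mode: str) -> float:
    """Expected correctly received neighbors of a lost MB (simplified model)."""
    if mode == "raster":
        # Left/right neighbors share the lost slice packet, so they are lost;
        # the rows above and below each survive with probability 1 - p.
        return 2 * (1 - p)
    if mode == "fmo_checkerboard":
        # With checkerboard FMO, all four neighbors lie in the other slice
        # group, so each survives with probability 1 - p.
        return 4 * (1 - p)
    raise ValueError(mode)

for p in (0.05, 0.10, 0.20):
    print(f"p={p:.2f}: raster {expected_neighbors(p, 'raster'):.2f}, "
          f"FMO {expected_neighbors(p, 'fmo_checkerboard'):.2f}")
```

This mirrors the abstract's explanation: FMO improves concealment because more neighbors of a lost MB are available for interpolation, and the gap widens as the loss rate rises.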