• Title/Abstract/Keyword: Numerical errors

Search results: 866 items (processing time: 0.029 seconds)

모바일 환경에서 기억법 기반 짧은 거리 유효 알고리즘 (Valid Algorithm for Short Distance based on Mnemonic System in Mobile Environments)

  • 김분희
    • 한국전자통신학회논문지 / Vol. 16, No. 2 / pp.301-306 / 2021
  • In fields that interpret important data on the basis of years, mnemonic systems can be used as a way to enhance educational effectiveness. Research on numerical mnemonics has so far proceeded by raising recall rates through the presentation of simple image-type information. This study aims to complement the method proposed in the earlier paper on a mnemonic-based numerical error correction algorithm in mobile environments, which compensated for errors that occur when users enter numerical information in an app. In the present work, error correction does not stop at simply displaying reminder data: an angle concept is introduced so that recall rates are improved on the basis of two-dimensional information. To this end, a problem-solving process is proposed using the developed app, and a valid algorithm for short distances is implemented and evaluated.

Investigating the Impact of Random and Systematic Errors on GPS Precise Point Positioning Ambiguity Resolution

  • Han, Joong-Hee;Liu, Zhizhao;Kwon, Jay Hyoun
    • 한국측량학회지 / Vol. 32, No. 3 / pp.233-244 / 2014
  • Precise Point Positioning (PPP) is an increasingly recognized precise GPS/GNSS positioning technique. In order to improve the accuracy of PPP, the error sources in PPP measurements should be reduced as much as possible and the ambiguities should be correctly resolved. Correct ambiguity resolution requires careful control of residual errors, which are normally categorized into random and systematic errors. To understand the effects of these two categories of error on PPP ambiguity resolution, two GPS datasets are simulated for locations in South Korea (denoted SUWN) and Hong Kong (PolyU). Two simulation cases are studied for each dataset: in the first case all satellites are affected by systematic and random errors, and in the second case only a few satellites are affected. In the first case with random errors only, when the magnitude of the random errors is increased, the L1 ambiguities have a much higher chance of being incorrectly fixed. However, the size of the ambiguity error is not exactly proportional to the magnitude of the random error; satellite geometry has more impact on L1 ambiguity resolution than the magnitude of the random errors. In the first case, when all satellites have both random and systematic errors, the accuracy of the fixed ambiguities is considerably affected by the systematic error: a pseudorange systematic error of 5 cm is much more detrimental to ambiguity resolution than a carrier-phase systematic error of 2 mm. In the second case, when only a portion of the satellites have systematic and random errors, the L1 ambiguities in PPP can still be resolved correctly; the number of allowable affected satellites varies from station to station, depending on the satellite geometry. Through extensive simulation tests under different schemes, this paper sheds light on how PPP ambiguity resolution (more precisely, L1 ambiguity resolution) is affected by the characteristics of the residual errors in PPP observations. The numerical examples remind PPP data analysts how accurate the error correction models must be in order to have all the ambiguities resolved correctly.
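
As a rough illustration of the kind of effect the abstract describes, the sketch below probes how random noise and a systematic bias on float ambiguity estimates change the rate of incorrect integer fixing. This is a minimal Monte Carlo sketch under simplifying assumptions, not the paper's simulation setup: the error magnitudes in cycles and the round-to-nearest-integer fixing are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def incorrect_fix_rate(noise_std_cycles, bias_cycles, n_sats=8, n_trials=5000):
    # True L1 integer ambiguities for each simulated epoch and satellite.
    true_amb = rng.integers(-50, 50, size=(n_trials, n_sats))
    # Float ambiguity estimates perturbed by random noise and a constant systematic bias.
    float_amb = true_amb + rng.normal(0.0, noise_std_cycles, size=true_amb.shape) + bias_cycles
    # Naive integer fixing by rounding to the nearest integer.
    fixed = np.rint(float_amb)
    # Fraction of epochs in which at least one ambiguity is fixed incorrectly.
    return np.mean(np.any(fixed != true_amb, axis=1))

for noise in (0.05, 0.15, 0.30):      # random error magnitudes in cycles (illustrative)
    for bias in (0.0, 0.25):          # systematic bias in cycles (illustrative)
        rate = incorrect_fix_rate(noise, bias)
        print(f"noise={noise:.2f} cy, bias={bias:.2f} cy -> wrong-fix rate {rate:.3f}")
```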

Saddlepoint approximations for the ratio of two independent sequences of random variables

  • Cho, Dae-Hyeon
    • Journal of the Korean Data and Information Science Society / Vol. 9, No. 2 / pp.255-262 / 1998
  • In this paper, we study saddlepoint approximations for the ratio of independent random variables. In Section 2, we derive the saddlepoint approximation to the probability density function. In Section 3, we present a numerical example which shows that the errors are small even for small sample sizes.
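
For readers unfamiliar with the technique, the classical Daniels saddlepoint approximation to the density of a sample mean has the following standard textbook form (it is not a formula quoted from this paper), where K is the cumulant generating function and the saddlepoint solves K'(s) = x:

```latex
\hat{f}_{\bar{X}}(x) \;=\; \left(\frac{n}{2\pi K''(\hat{s})}\right)^{1/2}
\exp\!\bigl\{\,n\bigl[K(\hat{s}) - \hat{s}\,x\bigr]\bigr\},
\qquad K'(\hat{s}) = x .
```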


입자결합모델을 이용한 동적콘관입시험(DCPT)의 수치해석 모델링에 관한 연구 (A Study on Numerical Modeling of Dynamic CPT using Particle Flow Code)

  • 유광호;이창수;최준성
    • 한국도로학회논문집 / Vol. 16, No. 2 / pp.43-52 / 2014
  • PURPOSES: To solve problems in current compaction control, the DCPT (Dynamic Cone Penetrometer Test), which is highly correlated with various testing methods and is simple and economical, is being applied. However, it is hard to utilize DCPT results because few numerical analyses of DCPT have been performed and because of the lack of accumulated data. Therefore, this study tried to verify the validity of numerical modeling of DCPT by comparing and analyzing the results of numerical analyses with field tests. METHODS: The ground elastic modulus and PR (Penetration Rate) value were estimated by using the PFC (Particle Flow Code) 3D program, which is based on the discrete element method, and those values were compared with the results of field tests. Also, back analysis was conducted to reproduce the ground elastic modulus of the field tests. RESULTS: Relative errors of the PR value between the numerical analyses and the field tests were calculated to be comparatively low, and the relationships between elastic modulus and PR value turned out to be similar. CONCLUSIONS: Numerical modeling of DCPT with the PFC 3D program is considered suitable for describing the field tests.
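
The RESULTS step above amounts to a relative-error comparison between simulated and field-measured PR values; a minimal sketch of that comparison is shown below (the numbers are hypothetical placeholders, not results from the paper):

```python
# Relative error between simulated and field-measured PR (Penetration Rate) values.
pr_field = [22.0, 35.0, 48.0]        # field-test PR values (mm/blow), hypothetical
pr_numerical = [20.5, 33.2, 50.1]    # PFC 3D simulation PR values, hypothetical

for field, sim in zip(pr_field, pr_numerical):
    rel_err = abs(sim - field) / field * 100.0
    print(f"field={field:5.1f}  numerical={sim:5.1f}  relative error={rel_err:4.1f}%")
```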

3차원 인체치수 조사 자료의 품질 개선을 위한 연구 (A Study for Quality Improvement of Three-dimensional Body Measurement Data)

  • 박선미;남윤자;박진우
    • 대한인간공학회지 / Vol. 28, No. 4 / pp.117-124 / 2009
  • To inspect the quality of data collected from a large-scale body measurement and investigation project, it is necessary to establish a proper data editing process. Three-dimensional body measurement may contain measuring errors caused by the measurer's proficiency or by changes in the subject's posture. It may also contain errors introduced by the algorithms that convert the information obtained from the three-dimensional scanner into numerical values, and by the processing of numerous individual records. When such errors occur, the quality of the measured data deteriorates, which in turn reduces the quality of any statistics produced from the data. This study therefore suggests a way to improve the quality of three-dimensional body measurement data by proposing a working procedure for identifying and correcting data errors throughout the whole data processing procedure (collecting, processing, and analyzing) of the 2004 Size Korea Three-dimensional Body Measurement Project. The study was carried out in three stages. First, we detected erroneous data by examining the logical relations among variables under each edit rule. Second, we detected suspicious data through independent examination of individual variable values by sex and age. Finally, we examined scatter-plot matrices of many variables to consider the relationships among them; this simple graphical tool helps to reveal whether suspicious data exist in the data set. As a result, we detected some erroneous data in the raw data and found that the main errors arise not from system errors of the three-dimensional body measurement system itself but from the subjects' original three-dimensional shape data. By correcting the erroneous data, we enhanced the data quality.
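
A minimal sketch of the three screening stages described in the abstract is given below; the column names and the synthetic data are hypothetical and are not taken from the Size Korea data set.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a table of body measurements (hypothetical columns).
rng = np.random.default_rng(4)
n = 300
df = pd.DataFrame({
    "sex": rng.choice(["M", "F"], size=n),
    "age_group": rng.choice(["20s", "30s", "40s"], size=n),
    "stature": rng.normal(1700, 80, size=n),
})
df["shoulder_height"] = df["stature"] * 0.82 + rng.normal(0, 15, size=n)
df["waist_height"] = df["stature"] * 0.60 + rng.normal(0, 15, size=n)

# Stage 1: logical edit rules among variables (a shoulder landmark cannot sit above stature).
rule_violations = df[(df["shoulder_height"] >= df["stature"]) |
                     (df["waist_height"] >= df["shoulder_height"])]

# Stage 2: flag suspicious values of an individual variable within sex/age groups
# (a simple z-score threshold stands in for the study's screening criterion).
grouped = df.groupby(["sex", "age_group"])["stature"]
z = (df["stature"] - grouped.transform("mean")) / grouped.transform("std")
suspicious = df[z.abs() > 3]

# Stage 3: scatter-plot matrix of related variables for visual screening.
pd.plotting.scatter_matrix(df[["stature", "shoulder_height", "waist_height"]],
                           figsize=(7, 7), diagonal="hist")

print(f"edit-rule violations: {len(rule_violations)}, suspicious stature values: {len(suspicious)}")
```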

결측 데이터 보정법에 의한 의사 데이터로 조정된 예측 최적화 방법 (Predictive Optimization Adjusted With Pseudo Data From A Missing Data Imputation Technique)

  • 김정우
    • 한국산학기술학회논문지 / Vol. 20, No. 2 / pp.200-209 / 2019
  • When predicting future values, a model estimated by minimizing the training error can still produce a large test error. This is the overfitting problem arising from model complexity, which occurs when the estimated model concentrates only on the given data set. Some regularization and resampling methods have been introduced to alleviate this problem and reduce the test error, but these methods are also designed to work only within the given data set. This paper proposes a new optimization method that reduces the test error by transforming the test-error minimization problem into a training-error minimization problem. To perform this transformation, new data, called pseudo data, are added to the given data set, and three types of missing data imputation techniques are used to generate appropriate pseudo data. Linear regression, autoregressive, and ridge regression models are used as prediction models, and the pseudo data method is applied to them. In addition, case studies applying the optimization method adjusted with pseudo data to environmental and financial data are presented. The results show that the proposed method reduces the test error compared with the original prediction models.
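
A minimal sketch of the general idea, under simplifying assumptions (synthetic data, mean imputation as the imputation type, and reuse of the original targets for the pseudo inputs); this is not the paper's exact procedure:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import Ridge

# Synthetic regression data.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.7]) + rng.normal(scale=0.3, size=100)

# Create pseudo inputs: copies of the data with some entries removed, then
# filled back in by mean imputation (one simple imputation type).
X_missing = X.copy()
mask = rng.random(X.shape) < 0.3
X_missing[mask] = np.nan
X_pseudo = SimpleImputer(strategy="mean").fit_transform(X_missing)

# Train a ridge regression model on the original data augmented with the pseudo data.
X_aug = np.vstack([X, X_pseudo])
y_aug = np.concatenate([y, y])
model = Ridge(alpha=1.0).fit(X_aug, y_aug)
print("coefficients:", model.coef_)
```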

신경회로망을 사용한 노이즈가 첨가된 포화증기표의 모델링 (Modelling of noise-added saturated steam table using the neural networks)

  • 이태환;박진현
    • 한국정보통신학회:학술대회논문집 / 한국해양정보통신학회 2008년도 춘계종합학술대회 A / pp.205-208 / 2008
  • Numerical analysis requires numerical values such as temperature, pressure, specific volume, enthalpy, and entropy. However, because most of the thermodynamic properties in the steam table are measured values, they inherently contain measurement errors. In this study, for the pressure-based saturation state of water, random numbers were generated, scaled to an appropriate magnitude, and added to the original properties to produce artificially noise-added data. These data were then approximated with a neural network and with spline interpolation. The analysis showed that the neural network gave far smaller percentage errors than second-order spline interpolation, confirming that a neural network is a suitable function approximation method that is less affected by measurement errors.
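
A minimal sketch of the comparison described in the abstract: a synthetic smooth curve stands in for the tabulated steam-table property, scikit-learn's MLPRegressor stands in for the paper's neural network, and a second-order SciPy spline stands in for the spline interpolation.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for a saturated steam-table property on a pressure grid,
# with artificial noise added to mimic measurement error.
rng = np.random.default_rng(2)
p = np.linspace(0.1, 10.0, 80)
true_val = 100.0 + 25.0 * np.log(p)
noisy = true_val + rng.normal(scale=0.5, size=p.size)

# Neural-network approximation of the noisy table.
net = MLPRegressor(hidden_layer_sizes=(20, 20), solver="lbfgs", max_iter=5000, random_state=0)
net.fit(p.reshape(-1, 1), noisy)
nn_pred = net.predict(p.reshape(-1, 1))

# Second-order interpolating spline through the same noisy table (s=0 forces interpolation).
spline = UnivariateSpline(p, noisy, k=2, s=0)
sp_pred = spline(p)

for name, pred in (("neural network", nn_pred), ("spline", sp_pred)):
    pct_err = np.mean(np.abs((pred - true_val) / true_val)) * 100
    print(f"{name}: mean percentage error {pct_err:.3f}%")
```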


The Volume Measurement of Air Flowing through a Cross-section with PLC Using Trapezoidal Rule Method

  • Calik, Huseyin
    • Journal of Electrical Engineering and Technology / Vol. 8, No. 4 / pp.872-878 / 2013
  • In industrial control systems, flow measurement is a very important issue. It is frequently necessary to calculate how much total fluid or gas flows through a cross-section. Flow volume measurement tools typically use simple sampling or rectangle methods. In fact, flow volume measurement is an integration process, so measurement systems using an instantaneous sampling technique cause considerably high errors. In order to make more accurate flow measurements, numerical integration methods should be used. In the literature, the rectangular, trapezoidal, Simpson, Romberg, and Gaussian quadrature methods are suggested for numerical integration. Among these, the trapezoidal rule is quite easy to compute, is notably more accurate, and imposes no restrictive conditions, which makes it especially convenient for portable flow volume measurement systems. In this study, the volume of air flowing through a cross-section is measured with a PLC ladder diagram. The measurements are made using two different approaches: the classical sampling method and, as an alternative, the trapezoidal rule applied to the flow sensor signal to minimize the measurement errors of classical sampling. It is concluded that the trapezoidal rule method is more effective than classical sampling.
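
A minimal sketch of the two approaches compared in the abstract, with Python standing in for the PLC ladder logic and a synthetic sinusoidal flow-rate signal standing in for the sensor readings:

```python
import math

dt = 0.5                                                            # sampling period in seconds
samples = [10.0 + 2.0 * math.sin(0.3 * k) for k in range(200)]      # hypothetical flow rate (m^3/s)

volume_rect = 0.0
volume_trap = 0.0
prev = samples[0]
for q in samples[1:]:
    volume_rect += q * dt                   # rectangle (instantaneous sampling) method
    volume_trap += 0.5 * (prev + q) * dt    # trapezoidal rule: average of consecutive samples
    prev = q

print(f"rectangle method:   {volume_rect:.2f} m^3")
print(f"trapezoidal method: {volume_trap:.2f} m^3")
```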

A Numerical Approach for Lightning Impulse Flashover Voltage Prediction of Typical Air Gaps

  • Qiu, Zhibin;Ruan, Jiangjun;Huang, Congpeng;Xu, Wenjie;Huang, Daochun
    • Journal of Electrical Engineering and Technology / Vol. 13, No. 3 / pp.1326-1336 / 2018
  • This paper proposes a numerical approach to predict the critical flashover voltages of air gaps under lightning impulses. For an air gap, the impulse voltage waveform features and electric field features are defined to characterize its energy storage status before the initiation of breakdown. These features are taken as the input parameters of a predictive model established with a support vector machine (SVM). Given an applied voltage range, the golden section search method is used to compute the prediction results efficiently. This method was applied to predict the critical flashover voltages of rod-rod, rod-plane and sphere-plane gaps over a wide range of gap lengths and impulse voltage waveshapes. The predicted results coincide well with the experimental data, with the same trends and acceptable errors. The mean absolute percentage errors of 6 groups of test samples are within 4.6%, which demonstrates the validity and accuracy of the predictive model. This method provides an effective way to obtain the critical flashover voltage and might be helpful for estimating the safe clearances of air gaps in insulation design.
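
A minimal sketch of the search step only (the trained SVM predictor is not reproduced here; a hypothetical logistic curve stands in for it): golden-section search over the applied-voltage range for the voltage at which the predicted breakdown probability is closest to 50%.

```python
import math

def breakdown_probability(voltage_kv):
    # Placeholder for the trained SVM predictor described in the abstract.
    return 1.0 / (1.0 + math.exp(-(voltage_kv - 850.0) / 40.0))      # hypothetical gap behaviour

def golden_section_min(f, lo, hi, tol=1e-3):
    # Standard golden-section search for the minimum of a unimodal function on [lo, hi].
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return 0.5 * (a + b)

# Critical flashover voltage: where the predicted probability is closest to 50%.
u50 = golden_section_min(lambda u: abs(breakdown_probability(u) - 0.5), 500.0, 1200.0)
print(f"predicted critical flashover voltage ~ {u50:.1f} kV")
```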

A data fusion method for bridge displacement reconstruction based on LSTM networks

  • Duan, Da-You;Wang, Zuo-Cai;Sun, Xiao-Tong;Xin, Yu
    • Smart Structures and Systems / Vol. 29, No. 4 / pp.599-616 / 2022
  • Bridge displacement contains vital information about bridge condition and performance. Due to the limits of direct displacement measurement methods, indirect displacement reconstruction methods based on strain or acceleration data have also been developed for engineering applications, but such methods still have some deficiencies in practice. This paper proposes a novel method based on long short-term memory (LSTM) networks to reconstruct bridge dynamic displacements from strain and acceleration data. LSTM networks with three hidden layers are utilized to map the relationships between the measured responses and the bridge displacement. To achieve the data fusion, the input strain and acceleration data are first preprocessed by normalization, and then the corresponding dynamic displacement responses are reconstructed by the LSTM networks. In the numerical simulation, the errors of the displacement reconstruction are below 9% for different load cases, and the proposed method is robust when the input strain and acceleration data contain additive noise. The hyper-parameter effect is analyzed, and the displacement reconstruction accuracies of different machine learning methods are compared. In the experimental verification, the errors are below 6% for the simply supported beam and continuous beam cases. Both the numerical and experimental results indicate that the proposed data fusion method can accurately reconstruct the displacement.
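
A minimal sketch of the data-fusion idea with Keras (the layer sizes, window length, and synthetic signals are illustrative assumptions, not the paper's configuration): an LSTM network with three hidden layers maps normalized strain and acceleration sequences to a displacement target.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-ins for measured responses: two features per time step (strain, acceleration).
rng = np.random.default_rng(3)
n_samples, window, n_features = 512, 50, 2
X = rng.normal(size=(n_samples, window, n_features)).astype("float32")
# Synthetic displacement target derived from the inputs (illustrative only).
y = X[:, :, 0].mean(axis=1, keepdims=True) + 0.1 * X[:, -1, 1:2]

# Normalize the inputs feature-wise, as the abstract describes for data fusion.
X = (X - X.mean(axis=(0, 1))) / X.std(axis=(0, 1))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, n_features)),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),                     # reconstructed displacement
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
print("training MSE:", float(model.evaluate(X, y, verbose=0)))
```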