• Title/Abstract/Keyword: error data

Search results: 9,417 items (processing time: 0.034 seconds)

Error Check Algorithm in the Wireless Transmission of Digital Data by Water Level Measurement

  • Kim, Hie-Sik; Seol, Dae-Yeon; Kim, Young-Il; Nam, Chul
    • 제어로봇시스템학회 학술대회논문집, ICCAS 2004, pp.1666-1668, 2004
  • Data transmitted wirelessly are prone to distortion and loss caused by noise and obstacles in the radio path. If damaged or lost data cannot be detected at the receiver, it is impossible to judge whether the received data are correct. Wireless data transmission therefore requires an error check algorithm that reduces the effect of data distortion and loss and allows the transmitted data to be monitored in real time. This study consists of an RF station for wireless transmission, a water level meter station for water level measurement, and an error check algorithm for verifying the transmitted data. It also investigates error check algorithms that allow wireless digital data transmission with minimal damage and loss. The transmitter and receiver were designed around a one-chip microprocessor to keep the circuit volume small, and the RF transmitter-receiver stations were implemented simply with an ATMEL one-chip microprocessor. The system uses a maximum RF power of 10 mW in the 448-449 MHz band, which is permitted for use under the Frequency Law of the Korean government.
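The abstract does not state which error check scheme was adopted; as a hedged illustration, the sketch below shows a CRC-16 check of the kind commonly used to validate short wireless telemetry frames. The polynomial, frame layout, and function names are assumptions for illustration, not details from the paper.

```python
# Illustrative CRC-16/CCITT check for a short telemetry frame.
# Polynomial, frame layout, and function names are assumptions, not the paper's design.

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with polynomial 0x1021 (CCITT)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc

def build_frame(payload: bytes) -> bytes:
    """Transmitter side: append the CRC to the payload."""
    return payload + crc16_ccitt(payload).to_bytes(2, "big")

def check_frame(frame: bytes) -> bool:
    """Receiver side: recompute the CRC and compare with the received value."""
    payload, received = frame[:-2], int.from_bytes(frame[-2:], "big")
    return crc16_ccitt(payload) == received

if __name__ == "__main__":
    frame = build_frame(b"\x01\x02\x03\x10")            # e.g. a water-level reading
    assert check_frame(frame)
    corrupted = bytes([frame[0] ^ 0x40]) + frame[1:]    # simulate a single bit error
    assert not check_frame(corrupted)
```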


A Statistical Perspective of Neural Networks for Imbalanced Data Problems

  • Oh, Sang-Hoon
    • International Journal of Contents, Vol. 7 No. 3, pp.1-5, 2011
  • Finding a good classifier for imbalanced data has been an interesting challenge, since the problem is pervasive yet difficult to solve. Classifiers developed under the assumption of well-balanced class distributions show poor classification performance on imbalanced data. Among the many approaches to imbalanced data problems, the algorithmic-level approach is attractive because it can be combined with other approaches such as data-level or ensemble methods. In particular, the error back-propagation algorithm using the target node method, which changes the amount of weight updating according to the target node of each class, attains good performance on imbalanced data problems. In this paper, we analyze the relationship between the two optimal outputs of a neural network classifier trained with the target node method. The optimal relationship is also compared with those of other error function methods such as the mean-squared error and the n-th order extension of the cross-entropy error. The analyses are verified through simulations on a thyroid data set.
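The target node method mentioned above scales weight updates differently depending on the class of each training sample. The fragment below is a minimal sketch of that idea for a single sigmoid output node; the scaling factors and the MSE-style update rule are illustrative assumptions, not the paper's exact error function.

```python
import numpy as np

# Minimal sketch: class-dependent scaling of the output-node delta in
# error back-propagation. Scaling factors and update rule are illustrative assumptions.

rng = np.random.default_rng(0)
w = rng.normal(size=3)                     # weights of one sigmoid output node
eta = 0.1                                  # learning rate
scale = {1: 4.0, 0: 1.0}                   # boost updates for the minority class (label 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def update(w, x, t):
    """One gradient step; the minority class gets a larger effective step."""
    y = sigmoid(w @ x)
    delta = (t - y) * y * (1.0 - y)        # MSE-style delta for a sigmoid node
    return w + eta * scale[t] * delta * x

x_minority = np.array([1.0, 0.2, -0.5])
w = update(w, x_minority, 1)               # minority-class sample, scaled update
```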

The Wavelet-Based Kalman Filter Method for the Estimation of Time-Series Data

  • 홍찬영; 윤태성; 박진배
    • 대한전기학회 학술대회논문집, 2003년도 학술회의 논문집 정보 및 제어부문 B, pp.449-451, 2003
  • The estimation of time-series data is a fundamental step in many data analysis tasks. However, unwanted measurement error is usually added to the true data, so accurate estimation depends on an efficient method for eliminating the error components. The wavelet transform is expected to improve the accuracy of estimation because it can decompose and analyze data at various resolutions. Therefore, a wavelet-based Kalman filter method for the estimation of time-series data is proposed in this paper. The wavelet transform separates the data by frequency band, and the detail wavelet coefficients reflect the stochastic behavior of the error components. This property makes it possible to obtain the covariance of the measurement error. The true data are then estimated through a recursive Kalman filtering algorithm using the obtained covariance value. The procedure is verified with a basic example of a Brownian walk process.
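As a hedged sketch of this idea, the fragment below estimates the measurement-noise variance from the finest-scale (Haar) detail coefficients using the common median-absolute-deviation rule, then feeds it into a scalar random-walk Kalman filter. The Haar/MAD estimator and the random-walk state model are assumptions, not necessarily the authors' exact formulation.

```python
import numpy as np

# Sketch: estimate measurement-noise variance from finest-scale Haar detail
# coefficients, then run a scalar Kalman filter on a random-walk model.
# The Haar/MAD estimator and the random-walk model are illustrative choices.

rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(scale=0.1, size=512))       # Brownian-walk truth
z = truth + rng.normal(scale=0.5, size=truth.size)       # noisy measurements

# Finest-scale Haar detail coefficients: (z[2k] - z[2k+1]) / sqrt(2)
detail = (z[0::2] - z[1::2]) / np.sqrt(2.0)
sigma = np.median(np.abs(detail)) / 0.6745                # robust noise std estimate
R = sigma ** 2                                            # measurement-noise covariance
Q = 0.01                                                  # assumed process-noise covariance

x_hat, P, estimates = z[0], 1.0, []
for zk in z:
    P = P + Q                                             # predict (random walk)
    K = P / (P + R)                                       # Kalman gain
    x_hat = x_hat + K * (zk - x_hat)                      # update with measurement
    P = (1.0 - K) * P
    estimates.append(x_hat)
```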


Error Estimation Based on the Bhattacharyya Distance for Classifying Multimodal Data

  • 최의선; 김재희; 이철희
    • 대한전자공학회논문지 SP, Vol. 39 No. 2, pp.147-154, 2002
  • This paper proposes an error estimation method based on the Bhattacharyya distance for classifying data with multimodal characteristics. In the proposed approach, the classification error and the Bhattacharyya distance are each obtained empirically for multimodal data, the relationship between the two is derived, and the possibility of predicting the error is examined. To compute the classification error and the Bhattacharyya distance, the probability density function of the multimodal data is estimated as a combination of subclasses with Gaussian characteristics. Experiments with remote sensing data confirm a close relationship between the classification error of multimodal data and the Bhattacharyya distance, and demonstrate that the error can be predicted using the Bhattacharyya distance.
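For reference, the sketch below computes the Bhattacharyya distance between two multivariate Gaussian class models, the quantity the paper relates to classification error. Treating each class as a single Gaussian here is a simplification of the paper's approach, which models each class as a combination of Gaussian subclasses; the example data are assumptions.

```python
import numpy as np

# Bhattacharyya distance between two Gaussian class models.
# Single-Gaussian classes are a simplification of the paper's subclass mixtures.

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    cov = 0.5 * (cov1 + cov2)
    diff = mu2 - mu1
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

mu1, cov1 = np.array([0.0, 0.0]), np.eye(2)
mu2, cov2 = np.array([1.5, 0.5]), np.array([[1.0, 0.3], [0.3, 2.0]])
print(bhattacharyya_gaussian(mu1, cov1, mu2, cov2))
```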

Development of an RP·SP Combined Model Using the Error Component Method

  • 김강수; 조혜진
    • 대한교통학회지, Vol. 21 No. 2, pp.119-130, 2003
  • Stated preference (SP) data have been widely used to evaluate transport policies and plans that do not yet exist, but their weak link to revealed preferences has been pointed out as a drawback. Combining SP data with revealed preference (RP) data has been proposed as one way to overcome this, and RP·SP combination methodologies have been developed. The purpose of this paper is to present a new RP·SP combination method using the Error Component approach and to demonstrate its usefulness. The Error Component method partitions the error of each data source in order to obtain the relative variance of the SP or RP data, and estimates the parameters of this partition simultaneously with the utility parameters. For the analysis, artificial RP and SP data were generated through simulation, and the combined model based on the Error Component method was compared with existing combination methods using the estimated parameters and the value of time as criteria. The results show that, regardless of the sample size, the proposed method reproduces the assumed parameter values more closely than models estimated with the existing RP·SP combination methods, demonstrating the usefulness of the Error Component approach. The value of time, expressed as a ratio of parameters, was also closer to the assumed value under the Error Component method than under the existing methods, further supporting the proposed approach. In addition, both existing combination approaches, the simultaneous and the sequential methods, were shown to be useful for combining RP and SP data, although the simultaneous method was found to be more efficient than the sequential one.
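As a hedged illustration of the general idea rather than the paper's exact specification, the sketch below generates artificial RP and SP binary-choice data with different error variances and jointly estimates common utility parameters plus a relative scale for the SP data by maximum likelihood. The utility specification, parameter values, and use of a simple scale parameter (instead of the paper's full error-component formulation) are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative joint RP/SP binary logit with an SP scale parameter.
# This only sketches the idea of letting the two data sources carry
# different error variances; it is not the paper's Error Component model.

rng = np.random.default_rng(2)
beta_true = np.array([-1.0, -0.05])         # cost and time coefficients (assumed)
mu_sp_true = 0.5                            # SP scale: larger error variance (assumed)

def simulate(n, mu):
    X = rng.uniform(0, 10, size=(n, 2))     # cost/time differences between alternatives
    p = 1.0 / (1.0 + np.exp(-mu * (X @ beta_true)))
    return X, (rng.random(n) < p).astype(float)

X_rp, y_rp = simulate(500, 1.0)             # RP scale normalized to 1
X_sp, y_sp = simulate(500, mu_sp_true)

def neg_loglik(theta):
    beta, mu_sp = theta[:2], theta[2]
    v_rp = X_rp @ beta
    v_sp = mu_sp * (X_sp @ beta)
    ll = (y_rp * v_rp - np.logaddexp(0.0, v_rp)).sum()
    ll += (y_sp * v_sp - np.logaddexp(0.0, v_sp)).sum()
    return -ll

res = minimize(neg_loglik, x0=np.array([-0.5, -0.01, 1.0]), method="BFGS")
beta_hat, mu_hat = res.x[:2], res.x[2]
value_of_time = beta_hat[1] / beta_hat[0]   # value of time as a ratio of parameters
```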

Performance Improvement of an Asynchronous Mass Memory Module Using Error Correction Code

  • 안재현; 양오; 연준상
    • 반도체디스플레이기술학회지, Vol. 19 No. 3, pp.112-117, 2020
  • NAND flash memory is a non-volatile memory that retains stored data even without a power supply. It is used as internal storage in data storage devices and solid-state drives (SSDs), and in portable devices such as smartphones and digital cameras. However, NAND flash memory is vulnerable to electrical disturbances that can cause errors during read/write operations, so error correction codes are used to ensure reliability. The proposed module efficiently recovers bad block information, an inherent defect of NAND flash memory, and maintains a Bad Block Table (BBT) to manage data and increase stability. In experiments with the error correction code algorithm, the bit error rate per page of a 4-Mbyte memory averaged 0 ppm, compared with 100 ppm without the error correction code. The error correction code algorithm therefore improves the stability and reliability of the stored data.
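The abstract does not specify which error correction code the module uses; as a hedged sketch, the fragment below implements a small single-error-correcting Hamming code for one data byte. Real NAND flash controllers typically protect larger sectors (for example 256 or 512 bytes per page) with Hamming or BCH codes, so this only illustrates the encode/correct idea, not the paper's implementation.

```python
# Illustrative single-error-correcting Hamming code for one data byte.
# Not the paper's ECC; block size and layout are assumptions for illustration.

def hamming_encode(data_bits):
    """Place data bits into a codeword with parity bits at positions 1, 2, 4, 8, ..."""
    m = len(data_bits)
    r = 0
    while (1 << r) < m + r + 1:
        r += 1
    code = [0] * (m + r + 1)                 # index 0 unused; positions 1..m+r
    it = iter(data_bits)
    for pos in range(1, m + r + 1):
        if pos & (pos - 1):                  # not a power of two -> data position
            code[pos] = next(it)
    for p in range(r):
        parity_pos = 1 << p
        parity = 0
        for pos in range(1, m + r + 1):
            if pos & parity_pos:
                parity ^= code[pos]
        code[parity_pos] = parity
    return code[1:]

def hamming_correct(codeword):
    """Return the syndrome (0 = no error) and the corrected codeword."""
    code = [0] + list(codeword)
    syndrome = 0
    for pos in range(1, len(code)):
        if code[pos]:
            syndrome ^= pos
    if syndrome:
        code[syndrome] ^= 1                  # flip the single erroneous bit
    return syndrome, code[1:]

bits = [1, 0, 1, 1, 0, 0, 1, 0]              # one data byte as bits
cw = hamming_encode(bits)
cw[5] ^= 1                                   # inject a single-bit error
syndrome, fixed = hamming_correct(cw)
assert fixed == hamming_encode(bits)
```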

A Study on Improvement of Accuracy Using Geometry Information in Reverse Engineering of Injection Molding Parts

  • 김연술; 이희관; 황금종; 공영식; 양균의
    • 한국정밀공학회 학술대회논문집, 2002년도 춘계학술대회 논문집, pp.546-550, 2002
  • This paper proposes an error compensation method that improves accuracy using the geometric information of injection molding parts. Geometric information can improve accuracy in reverse engineering because measurement data alone cannot produce an accurate geometric model; they include errors from both the physical part and the measuring machines. These errors fall into two types: molding errors in the product and measurement errors. Measurement errors include the optical error of the laser scanner, deformation caused by the probe forces of the CMM, and machine error. Compensating for these errors is important in reverse engineering. The least squares method (LSM) applies a geometric compensation to the cloud data, improving the accuracy of the geometry. In addition, the functional shape of the part and the design concept can be reconstructed by error compensation using geometry information.
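The abstract does not give the specific least squares formulation; as a minimal sketch, the fragment below fits a plane to noisy scan points by least squares (via SVD) and projects the points onto it, illustrating how a known geometric primitive can compensate measurement error. The simulated data and the choice of a planar primitive are assumptions.

```python
import numpy as np

# Minimal sketch: fit a plane to noisy scan points by least squares (SVD)
# and project the points onto it. Fitting a known primitive is one simple way
# geometric information can compensate measurement error; the paper's actual
# formulation may differ.

rng = np.random.default_rng(3)
n = 200
xy = rng.uniform(-10, 10, size=(n, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 3.0 + rng.normal(scale=0.05, size=n)
points = np.column_stack([xy, z])            # simulated noisy scan of a planar face

centroid = points.mean(axis=0)
_, _, vt = np.linalg.svd(points - centroid)  # smallest singular vector = plane normal
normal = vt[-1]

residuals = (points - centroid) @ normal     # signed distances to the fitted plane
compensated = points - np.outer(residuals, normal)  # project points onto the plane
print("RMS residual before compensation:", np.sqrt((residuals ** 2).mean()))
```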


Improving the Error Back-Propagation Algorithm for Imbalanced Data Sets

  • Oh, Sang-Hoon
    • International Journal of Contents, Vol. 8 No. 2, pp.7-12, 2012
  • Imbalanced data sets are difficult to classify, since most classifiers are developed on the assumption that class distributions are well balanced. To improve the error back-propagation algorithm for the classification of imbalanced data sets, a new error function is proposed. The error function controls weight updating according to the class to which each training sample belongs, so that samples in the minority class are more likely to be classified correctly, at the cost of a lower classification rate for the majority class. The proposed method is compared with the two-phase, threshold-moving, and target node methods through simulations on a mammography data set, and it attains the best results.
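One of the baselines named above, threshold moving, is easy to illustrate: train an ordinary classifier and then move its decision threshold according to the class imbalance instead of using 0.5. The sketch below shows this on synthetic data; the data, the logistic model, and the threshold rule are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Illustrative threshold-moving baseline on imbalanced synthetic data.
# Synthetic data and the threshold rule are assumptions for illustration.

rng = np.random.default_rng(4)
n_major, n_minor = 1000, 50
X = np.vstack([rng.normal(0.0, 1.0, size=(n_major, 2)),
               rng.normal(1.5, 1.0, size=(n_minor, 2))])
y = np.concatenate([np.zeros(n_major), np.ones(n_minor)])

# Plain logistic regression fitted by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * (p - y).mean()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
plain = (p >= 0.5).astype(float)                              # default threshold
moved = (p >= n_minor / (n_major + n_minor)).astype(float)    # moved threshold

for name, pred in [("threshold 0.5", plain), ("moved threshold", moved)]:
    recall_minor = pred[y == 1].mean()
    recall_major = 1.0 - pred[y == 0].mean()
    print(f"{name}: minority recall {recall_minor:.2f}, majority recall {recall_major:.2f}")
```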

Channel-Condition-Adaptive Error Concealment Using Scalable Coding

  • 한승균; 박승호; 서덕영
    • 한국통신학회논문지, Vol. 29 No. 1B, pp.8-17, 2004
  • This paper proposes adaptive error concealment techniques for video data encoded with scalable (layered) coding over loss-prone wireless networks. Because redundancy is removed during compression, video data are all the more sensitive to errors when transmitted over loss-prone networks such as wireless channels. Two error concealment methods are proposed. The first conceals errors from the previous VOP using the motion vectors of the base layer. The second is an adaptive method that splits the damaged region according to whether motion is present: areas with motion are concealed with the co-located region of the base layer, while areas without motion are concealed with the co-located region of the previous VOP. We show that the proposed error concealment methods are very useful when applied to scalably coded video data. Experimental results show that, depending on the error patterns that vary with the state of the wireless network and on the characteristics of the sequence, referring either to the base layer or to the previous VOP yields better concealment. Although MPEG-4 is used for the scalable coding in this paper, the approach can be applied to any DCT-based video codec.
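The adaptive rule described above can be sketched as a per-block decision: for each lost block, use the base-layer motion vector to decide whether the region is moving, and copy the co-located block either from the upsampled base layer (moving) or from the previous VOP (static). Block size, motion threshold, and array layout in the sketch below are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

# Sketch of the adaptive concealment rule: moving blocks come from the
# (upsampled) base layer, static blocks from the co-located previous-VOP block.
# Block size, threshold, and array layout are illustrative assumptions.

BLOCK = 16
MOTION_THRESHOLD = 1.0   # pixels; assumed value

def conceal(frame, lost_blocks, prev_vop, base_layer_up, base_mv):
    """frame, prev_vop, base_layer_up: HxW arrays; base_mv: (H//16, W//16, 2) motion field."""
    out = frame.copy()
    for by, bx in lost_blocks:
        ys = slice(by * BLOCK, (by + 1) * BLOCK)
        xs = slice(bx * BLOCK, (bx + 1) * BLOCK)
        mv = base_mv[by, bx]
        if np.hypot(mv[0], mv[1]) > MOTION_THRESHOLD:
            out[ys, xs] = base_layer_up[ys, xs]   # moving: co-located base-layer block
        else:
            out[ys, xs] = prev_vop[ys, xs]        # static: co-located previous-VOP block
    return out

# Tiny usage example with synthetic frames.
H, W = 64, 64
prev = np.zeros((H, W)); base = np.ones((H, W)); cur = np.full((H, W), 0.5)
mv = np.zeros((H // BLOCK, W // BLOCK, 2)); mv[1, 2] = (3.0, 0.0)
out = conceal(cur, [(1, 2), (0, 0)], prev, base, mv)
```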

A cost model for determining optimal audit timing with related considerations for accounting data quality enhancement

  • Kim, Kisu
    • 경영과학, Vol. 12 No. 2, pp.129-146, 1995
  • As society's reliance on computerized information systems to support a wide range of activities proliferates, the long-recognized need for adequate data quality becomes imperative. Furthermore, current trends in information systems, such as the dispersal of the data resource together with its management, have increased the difficulty of maintaining suitable levels of data integrity. The importance of adequate accounting (transaction) data quality in particular has long been recognized, and many procedures (extensive and often elaborate checks and controls) have been introduced and developed to prevent errors in accounting systems. Nevertheless, over time, even in the best-maintained systems, deficiencies in stored data will develop. To maintain the accuracy and reliability of accounting data at a certain level, periodic internal checks and error corrections (internal audits) are required as part of the internal control system. In this paper we develop a general data quality degradation (error accumulation) and cost model for an account in which both error occurrences and error amounts arise, and provide a closed form for the optimal audit timing in terms of the number of transactions that should occur before an internal audit is initiated. The paper also considers the cost-effectiveness of various audit types and different error prevention efforts, and suggests how to select the most economical audit type and error prevention method.
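The paper's closed-form result is not reproduced in the abstract; as a hedged analogue of the underlying trade-off only, the sketch below balances a fixed audit cost against a carrying cost that grows with accumulated errors and finds the audit interval that minimizes average cost per transaction. The cost structure, parameter values, and the resulting square-root formula are assumptions for illustration, not the paper's model.

```python
import numpy as np

# Illustrative analogue of the audit-timing trade-off (not the paper's model):
# errors occur at rate p per transaction, each outstanding error costs c per
# subsequent transaction, and an audit costs A. Average cost per transaction
# over an audit cycle of n transactions is roughly A/n + c*p*(n-1)/2,
# minimized near n* = sqrt(2*A/(c*p)). All parameter values are assumptions.

A, c, p = 500.0, 0.05, 0.02          # audit cost, per-error carrying cost, error rate

def cost_per_transaction(n):
    return A / n + c * p * (n - 1) / 2.0

n_grid = np.arange(1, 5001)
n_best = n_grid[np.argmin(cost_per_transaction(n_grid))]
print("numeric optimum:", n_best)
print("closed-form approximation:", round(np.sqrt(2 * A / (c * p))))
```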
