• Title/Summary/Keyword: data error

Search Results: 9,397

Error Check Algorithm in the Wireless Transmission of Digital Data by Water Level Measurement

  • Kim, Hie-Sik;Seol, Dae-Yeon;Kim, Young-Il;Nam, Chul
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1666-1668
    • /
    • 2004
  • In wireless transmission, data are easily distorted or lost because of noise and obstacles in the radio path. If damaged or lost data cannot be detected at the receiver, the receiver cannot judge whether the received data are correct. Wireless data transmission therefore needs an error check algorithm that reduces data distortion and loss and allows the transmitted data to be monitored in real time. This study consists of an RF station for wireless transmission, a water level meter station for water level measurement, and an error check algorithm for checking the transmitted data. The study also investigates and compares error check algorithms for wireless digital data transmission under conditions of minimal data damage and loss. The transmitter and receiver were designed around a one-chip microprocessor to keep the circuit volume from growing; the RF transmitter and receiver stations were built simply using an ATMEL one-chip microprocessor. The system uses a maximum RF power of 10 mW in the 448-449 MHz frequency band, for which permission to operate can be obtained under the Frequency Law of the Korean government.
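
The abstract does not specify which check the system runs; as an illustrative sketch only (not the paper's actual algorithm), a framed CRC-8 check of the kind commonly implemented on small ATMEL-class microcontrollers could look like this, with the transmitter appending a check byte and the receiver recomputing it:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Compute CRC-8 (polynomial x^8 + x^2 + x + 1) over a byte frame."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def make_frame(payload: bytes) -> bytes:
    """Transmitter side: append the CRC so the receiver can verify."""
    return payload + bytes([crc8(payload)])

def frame_ok(frame: bytes) -> bool:
    """Receiver side: recompute the CRC and compare with the trailing byte."""
    return len(frame) >= 2 and crc8(frame[:-1]) == frame[-1]
```

A frame carrying, say, a two-byte water level reading passes the check when intact and fails it when any single bit is flipped in transit, which is exactly the detect-and-discard behavior the abstract describes.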


A Statistical Perspective of Neural Networks for Imbalanced Data Problems

  • Oh, Sang-Hoon
    • International Journal of Contents
    • /
    • v.7 no.3
    • /
    • pp.1-5
    • /
    • 2011
  • Finding a good classifier for imbalanced data has been an interesting challenge, since the problem is pervasive yet difficult to solve. Classifiers developed under the assumption of well-balanced class distributions show poor classification performance on imbalanced data. Among the many approaches to imbalanced data problems, the algorithmic-level approach is attractive because it can be combined with other approaches such as data-level or ensemble methods. In particular, the error back-propagation algorithm using the target node method, which can change the amount of weight updating with regard to the target node of each class, attains good performance on imbalanced data problems. In this paper, we analyze the relationship between the two optimal outputs of a neural network classifier trained with the target node method. The optimal relationship is also compared with those of other error function methods, such as mean-squared error and the n-th order extension of the cross-entropy error. The analyses are verified through simulations on a thyroid data set.
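
The contrast the abstract draws between error functions shows up in the output-node error signal of back-propagation; the following is a generic sketch of those standard identities for a sigmoid output unit (textbook back-propagation, not the paper's specific derivation):

```python
def mse_delta(y: float, t: float) -> float:
    """Output-node error signal for mean-squared error with a sigmoid unit:
    the gradient carries the extra derivative factor y*(1 - y), which
    vanishes when the output saturates near 0 or 1."""
    return (y - t) * y * (1.0 - y)

def cross_entropy_delta(y: float, t: float) -> float:
    """Error signal for cross-entropy error: the sigmoid derivative cancels,
    so a saturated wrong output still receives a strong weight update."""
    return y - t
```

For a badly wrong, saturated output (y near 1 with target 0), the MSE signal is tiny while the cross-entropy signal stays near 1, which is why the choice of error function matters for hard-to-learn minority samples.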

The wavelet based Kalman filter method for the estimation of time-series data (시계열 데이터의 추정을 위한 웨이블릿 칼만 필터 기법)

  • Hong, Chan-Young;Yoon, Tae-Sung;Park, Jin-Bae
    • Proceedings of the KIEE Conference
    • /
    • 2003.11c
    • /
    • pp.449-451
    • /
    • 2003
  • The estimation of time-series data is a fundamental process in many data analyses. However, unwanted measurement error is usually added to the true data, so accurate estimation depends on an efficient method for eliminating the error components. The wavelet transform is expected to improve the accuracy of estimation because it can decompose and analyze data at various resolutions. Therefore, a wavelet-based Kalman filter method for the estimation of time-series data is proposed in this paper. The wavelet transform separates the data by frequency band, and the detail wavelet coefficients reflect the stochastic process of the error components. This property makes it possible to obtain the covariance of the measurement error. We then estimate the true data through a recursive Kalman filtering algorithm using the obtained covariance value. The procedure is verified with the fundamental example of a Brownian walk process.
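
A minimal sketch of this idea, under assumptions of my own choosing (level-1 Haar detail coefficients with a robust MAD estimate for the noise variance, and a scalar random-walk Kalman filter), rather than the paper's exact formulation:

```python
import numpy as np

def noise_var_from_haar_details(y: np.ndarray) -> float:
    """Level-1 Haar detail coefficients of a noisy series are dominated by
    the measurement noise; a robust MAD estimate recovers its variance."""
    d = (y[1::2] - y[0:-1:2]) / np.sqrt(2.0)   # Haar detail coefficients
    sigma = np.median(np.abs(d)) / 0.6745       # robust std of the details
    return float(sigma ** 2)

def kalman_filter(y: np.ndarray, r: float, q: float = 1e-3) -> np.ndarray:
    """Scalar Kalman filter for a random-walk state observed in noise with
    measurement variance r and process variance q."""
    x, p = y[0], 1.0
    out = np.empty_like(y)
    for k, z in enumerate(y):
        p = p + q                      # predict step
        gain = p / (p + r)             # Kalman gain
        x = x + gain * (z - x)         # update with measurement z
        p = (1.0 - gain) * p
        out[k] = x
    return out
```

Run on a simulated Brownian walk buried in measurement noise, the wavelet-derived variance feeds the filter directly, and the filtered series tracks the true walk far more closely than the raw measurements do.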


Error Estimation Based on the Bhattacharyya Distance for Classifying Multimodal Data (Multimodal 데이터에 대한 분류 에러 예측 기법)

  • Choe, Ui-Seon;Kim, Jae-Hui;Lee, Cheol-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.2
    • /
    • pp.147-154
    • /
    • 2002
  • In this paper, we propose an error estimation method based on the Bhattacharyya distance for multimodal data. First, we find the empirical relationship between the classification error and the Bhattacharyya distance. Then, we investigate the possibility of deriving an error estimation equation based on the Bhattacharyya distance for multimodal data. We assume that the distribution of multimodal data can be approximated as a mixture of several Gaussian distributions. Experimental results with remotely sensed data showed that there is a strong relationship between the Bhattacharyya distance and the classification error, and that it is possible to predict the classification error from the Bhattacharyya distance for multimodal data.
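
For a pair of Gaussian class densities, the Bhattacharyya distance and the corresponding upper bound on the Bayes error are standard closed forms; the sketch below shows the unimodal two-class case (the paper's multimodal setting approximates each class as a mixture of several such Gaussians):

```python
import numpy as np

def bhattacharyya_gauss(m1, c1, m2, c2) -> float:
    """Bhattacharyya distance between N(m1, c1) and N(m2, c2)."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    c1 = np.atleast_2d(c1).astype(float)
    c2 = np.atleast_2d(c2).astype(float)
    c = (c1 + c2) / 2.0
    dm = m2 - m1
    term1 = 0.125 * dm @ np.linalg.solve(c, dm)          # mean separation
    term2 = 0.5 * np.log(np.linalg.det(c) /
                         np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return float(term1 + term2)

def bhattacharyya_error_bound(b: float, p1: float = 0.5, p2: float = 0.5) -> float:
    """Upper bound on the Bayes error: P_e <= sqrt(p1 * p2) * exp(-B)."""
    return float(np.sqrt(p1 * p2) * np.exp(-b))
```

For two 1-D unit-variance Gaussians two standard deviations apart, B = 0.5 and the bound is about 0.303, above the true Bayes error of about 0.159, consistent with its role as an upper bound that the paper refines empirically.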

Development of the RP and SP Combined using Error Component Method (Error Component 방법을 이용한 RP.SP 결합모형 개발)

  • 김강수;조혜진
    • Journal of Korean Society of Transportation
    • /
    • v.21 no.2
    • /
    • pp.119-130
    • /
    • 2003
  • SP data have been widely used in assessing new transport policies and transport-related plans. However, one criticism of SP is that respondents may react differently in hypothetical experiments than in real life. To overcome this problem, combining SP and RP data has been suggested, and combined estimation methods have been developed. The purpose of this paper is to propose and verify a new SP-RP combined method using the error component method. The error component method decomposes the IID extreme value error into non-IID error components and an IID error component, and estimates both the component parameters and the utility parameters in order to obtain the relative variances of the SP and RP data. Artificial SP and RP data were created by simulation and used for the analysis, and the estimation results of the error component method were compared with those of existing SP-RP combined methods. The results show that, regardless of data size, the parameters estimated by the error component models are much closer to the assumed parameters than those of the existing combined models, indicating the usefulness of the error component method. The values of time obtained with the error component method are also closer to the assumed values than those of the existing combined models. We therefore conclude that the error component method is useful for combining SP and RP data and is more efficient than the existing methods.

Performance Improvement of Asynchronous Mass Memory Module Using Error Correction Code (에러 보정 코드를 이용한 비동기용 대용량 메모리 모듈의 성능 향상)

  • Ahn, Jae Hyun;Yang, Oh;Yeon, Jun Sang
    • Journal of the Semiconductor & Display Technology
    • /
    • v.19 no.3
    • /
    • pp.112-117
    • /
    • 2020
  • NAND flash memory is a non-volatile memory that retains stored data even without a power supply. It is used as internal storage and in solid-state drives (SSDs), in portable devices such as smartphones and digital cameras. However, NAND flash memory is susceptible to electrical disturbances that can cause errors during read/write operations, so error correction codes are used to ensure reliability. The system efficiently recovers bad block information arising from defects in the NAND flash memory, and a Bad Block Table (BBT) is maintained to manage data and increase stability. In experiments with the error correction code algorithm, the per-page bit error rate of a 4-Mbyte memory averaged 0 ppm with error correction, versus 100 ppm without it. The error correction code algorithm thus improves data stability and reliability.
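
The abstract does not name the specific code used; as an illustrative stand-in for flash-style single-bit correction, here is a Hamming(7,4) encoder/decoder (real NAND controllers typically use stronger BCH or LDPC codes, so this is a teaching sketch, not the paper's implementation):

```python
def hamming74_encode(nibble: int) -> int:
    """Encode 4 data bits into a 7-bit Hamming codeword
    (bit positions 1..7, parity bits at positions 1, 2, 4)."""
    d = [(nibble >> i) & 1 for i in range(4)]        # data bits d0..d3
    p1 = d[0] ^ d[1] ^ d[3]                          # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                          # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                          # covers positions 4,5,6,7
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]      # positions 1..7
    return sum(b << i for i, b in enumerate(bits))

def hamming74_decode(word: int) -> int:
    """Correct up to one flipped bit, then return the 4 data bits."""
    bits = [(word >> i) & 1 for i in range(7)]
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)            # error position, 0 = clean
    if syndrome:
        bits[syndrome - 1] ^= 1                       # flip the corrupted bit
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)
```

Every single-bit flip in any codeword is located by the syndrome and corrected, which is the mechanism behind the 100 ppm-to-0 ppm improvement the abstract reports, scaled down to a toy code size.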

A Study on Improvement of Accuracy using Geometry Information in Reverse Engineering of Injection Molding Parts (사출성형품의 역공학에서 Geometry정보를 이용한 정밀도 향상에 관한 연구)

  • 김연술;이희관;황금종;공영식;양균의
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2002.05a
    • /
    • pp.546-550
    • /
    • 2002
  • This paper proposes an error compensation method that improves accuracy using the geometric information of injection-molded parts. Geometric information can improve accuracy in reverse engineering, because measured data alone cannot yield an accurate geometric model: they include errors from both the physical part and the measuring machines. These errors can be classified into two types: molding error in the product and measuring error. Measuring error includes the optical error of the laser scanner, deformation caused by the probe forces of the CMM, and machine error. Compensating for these errors is important in reverse engineering. The least squares method (LSM) applies a geometric compensation to the cloud data, improving the accuracy of the geometry. The functional shape of a part and the design concept can also be reconstructed through error compensation using geometric information.
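
A minimal sketch of LSM-based geometry compensation, under the assumption (mine, for illustration) that the scanned feature is a nominally planar face fitted as z = ax + by + c; the residuals are the point-wise errors that the compensation removes:

```python
import numpy as np

def fit_plane_lsm(points: np.ndarray):
    """Least-squares plane z = a*x + b*y + c through a cloud of points.
    Returns the coefficients (a, b, c) and the per-point residuals,
    i.e. the deviation of each measured point from the fitted geometry."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])     # design matrix
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)   # solve min ||A c - z||
    residuals = z - A @ coeffs
    return coeffs, residuals
```

Projecting the noisy cloud onto the fitted plane (subtracting the residuals) recovers a surface whose parameters are far more accurate than any individual measured point, which is the core of the LSM compensation idea described above.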


Improving the Error Back-Propagation Algorithm for Imbalanced Data Sets

  • Oh, Sang-Hoon
    • International Journal of Contents
    • /
    • v.8 no.2
    • /
    • pp.7-12
    • /
    • 2012
  • Imbalanced data sets are difficult to classify, since most classifiers are developed under the assumption that class distributions are well balanced. In order to improve the error back-propagation algorithm for the classification of imbalanced data sets, a new error function is proposed. The error function controls weight updating with regard to the class of each training sample. This has the effect that samples in the minority class have a greater chance of being classified correctly, while samples in the majority class have a smaller chance. The proposed method is compared with the two-phase, threshold-moving, and target node methods through simulations on a mammography data set, where the proposed method attains the best results.
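
The class-dependent weight updating described above can be sketched with a per-class weighted gradient on a simple logistic unit; this illustrates the idea only (a single neuron with a weighted error signal, not the paper's exact error function or network):

```python
import numpy as np

def train_weighted_logistic(X, y, minority_weight=5.0, lr=0.5, epochs=500):
    """Logistic classifier whose per-sample gradient is scaled by class:
    minority-class samples (y = 1) get `minority_weight` times the update,
    pushing the decision boundary toward the majority class."""
    w = np.zeros(X.shape[1])
    b = 0.0
    sample_w = np.where(y == 1, minority_weight, 1.0)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid outputs
        g = sample_w * (p - y)                    # class-weighted error signal
        w -= lr * (X.T @ g) / len(y)              # gradient step on weights
        b -= lr * g.mean()                        # gradient step on bias
    return w, b
```

On a synthetic 10:1 imbalanced set, the weighted model recalls at least as many minority samples as the unweighted one, at the cost of more majority-class mistakes, exactly the trade-off the abstract describes.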

Channel Condition Adaptive Error Concealment using Scalability Coding (채널상태에 적응적인 계층 부호화를 이용한 오류 은닉 방법 연구)

  • Han Seung-Gyun;Park Seung-Ho;Suh Doug-Young
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.1B
    • /
    • pp.8-17
    • /
    • 2004
  • In this paper, we propose an adaptive error concealment technique for scalable video coding over error-prone wireless networks, and show that the error concealment techniques proposed here are very effective when applied to scalable video data. Two concealment methods are proposed. In the first, an error is concealed using the motion vector of the base layer and the data of the previous VOP. In the second, the error is concealed adaptively depending on whether a motion vector exists at the error position: with the co-located data of the base layer when a motion vector exists, and with the co-located data of the previous VOP when the motion vector is zero. We show that, according to the various error patterns caused by wireless channel conditions and the characteristics of the sequence, the decoder can refer to either base layer data or previous enhancement layer data for effective error concealment. Although MPEG-4 scalable coding is used in this paper, the proposed error concealment techniques are applicable to any DCT-based codec.
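
The second, adaptive rule can be sketched per damaged block; this is a simplified array-level illustration of the selection logic only, not codec-level code:

```python
import numpy as np

def conceal_block(enh_prev: np.ndarray, base_cur: np.ndarray,
                  motion_vector: tuple) -> np.ndarray:
    """Conceal one damaged enhancement-layer block. If the block has a
    nonzero motion vector, copy the co-located base-layer block of the
    current frame (the base layer tracks the motion); if the motion
    vector is zero, reuse the co-located block of the previous
    enhancement-layer frame (a static area changes little)."""
    if motion_vector != (0, 0):
        return base_cur.copy()
    return enh_prev.copy()
```

The decision is purely local, so a decoder can apply it block by block as losses are detected, matching the channel-adaptive behavior described above.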

A cost model for determining optimal audit timing with related considerations for accounting data quality enhancement

  • Kim, Kisu
    • Korean Management Science Review
    • /
    • v.12 no.2
    • /
    • pp.129-146
    • /
    • 1995
  • As society's reliance on computerized information systems to support a wide range of activities proliferates, the long-recognized need for adequate data quality becomes imperative. Furthermore, current trends in information systems, such as dispersal of the data resource together with its management, have increased the difficulty of maintaining suitable levels of data integrity. The importance of adequate accounting (transaction) data quality in particular has long been recognized, and many procedures (extensive and often elaborate checks and controls) to prevent errors in accounting systems have been introduced and developed. Nevertheless, over time, even in the best-maintained systems, deficiencies in stored data will develop. In order to maintain the accuracy and reliability of accounting data at a certain level, periodic internal checks and error corrections (internal audits) are required as part of the internal control system. In this paper we develop a general data quality degradation (error accumulation) and cost model for an account with both error occurrences and error amounts, and provide a closed form for the optimal audit timing in terms of the number of transactions that should occur before an internal audit is initiated. The paper also considers the cost-effectiveness of various audit types and different error prevention efforts, and suggests how to select the most economical audit type and error prevention method.
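
The paper's closed form is derived under its own degradation model; the EOQ-style simplification below (my assumptions: a fixed audit cost A, an error probability p per transaction, and a carrying cost c per uncorrected error per transaction) illustrates the shape such a result takes:

```python
import math

def optimal_audit_interval(audit_cost: float, error_rate: float,
                           error_cost: float) -> float:
    """Minimize the average cost per transaction
        C(n) = A/n + c*p*(n - 1)/2
    (audit cost amortized over n transactions, plus the expected carrying
    cost of errors accumulating between audits). Setting dC/dn = 0 gives
        n* = sqrt(2*A / (c*p)).
    """
    return math.sqrt(2.0 * audit_cost / (error_cost * error_rate))
```

With an audit costing 100, an error rate of 1% per transaction, and a carrying cost of 2 per error per transaction, the sketch says to audit every 100 transactions; the paper's model enriches this with error amounts and audit-type choices.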
