• Title/Summary/Keyword: 0처리 오류 (errors in handling 0)

137 search results, processing time 0.03 seconds

The Origin and Instruction of Computational Errors with Zero (0처리 오류의 기원 및 0의 지도)

  • Kim, Soo-Mi
    • School Mathematics
    • /
    • v.8 no.4
    • /
    • pp.397-415
    • /
    • 2006
  • This paper investigates why students often make mistakes with 0 during computation and draws some instructional implications. For this, the history of 0 is reviewed and mathematics textbooks and workbooks are analyzed. The history of 0 shows that the ancients had almost the same problems with 0 as we do, so we can infer that children's difficulties with 0 are a kind of epistemological obstacle. The textbook analysis reveals several instructional problems with 0: the method and timing of introducing 0, the method of introducing computational algorithms, implicit teaching of the number facts involving 0, and ignoring problems that can give rise to errors with 0. Finally, from an analysis of Japanese and German textbooks, three instructional implications are drawn: (i) emphasizing the role of 0 as a place holder in the decimal numeration system, (ii) explicit and systematic teaching of the process and product of calculation with 0, and (iii) giving practice with problems that can give rise to errors with 0 in order to prevent systematic errors with 0.

  • PDF

An Efficient Algorithm to Minimize Total Error of the Imprecise Real Time Tasks with 0/1 Constraint (0/1 제약조건을 갖는 부정확한 실시간 태스크들의 총오류를 최소화시키는 효율적인 알고리즘)

  • Song, Gi-Hyeon
    • Journal of the Korea Computer Industry Society
    • /
    • v.7 no.4
    • /
    • pp.309-320
    • /
    • 2006
  • The imprecise real-time system provides flexibility in scheduling time-critical tasks. Most scheduling problems that must satisfy both the 0/1 constraint and timing constraints while minimizing the total error are NP-complete when the optional tasks have arbitrary processing times. Liu suggested a reasonable strategy for scheduling tasks with the 0/1 constraint on uniprocessors to minimize the total error, and Song et al. suggested a corresponding strategy for multiprocessors. However, these are all off-line algorithms. For online scheduling, the NORA algorithm can find a schedule with the minimum total error for an imprecise online task system; in NORA, the EDF strategy is adopted for scheduling the optional parts. [abridged] The algorithm proposed in this paper can be applied efficiently to applications such as radar tracking, image processing, and missile control.

  • PDF
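To make the 0/1 constraint concrete, here is a minimal, illustrative sketch of greedy EDF-ordered scheduling of whole optional parts on one processor. It is not the NORA algorithm from the paper; the task fields, the greedy admission rule, and the demo tasks are our own assumptions.

```python
# Hypothetical sketch: each task has a mandatory part (always run) and an optional
# part that, under the 0/1 constraint, must run completely or be discarded; discarding
# it adds its length to the total error. Optional parts are considered in EDF order.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    mandatory: float   # processing time of the mandatory part
    optional: float    # processing time of the optional part (all or nothing)
    deadline: float

def schedule_01_edf(tasks):
    """Greedy sketch: run mandatory parts, admit whole optional parts in EDF order."""
    t = 0.0
    total_error = 0.0
    accepted = []
    for task in sorted(tasks, key=lambda x: x.deadline):
        t += task.mandatory                      # mandatory parts are never skipped
        if t + task.optional <= task.deadline:   # the optional part fits entirely
            t += task.optional
            accepted.append(task.name)
        else:                                    # 0/1 constraint: discard it entirely
            total_error += task.optional
    return accepted, total_error

if __name__ == "__main__":
    demo = [Task("radar", 1.0, 2.0, 4.0), Task("image", 1.0, 3.0, 9.0), Task("track", 0.5, 2.0, 6.0)]
    print(schedule_01_edf(demo))   # (['radar', 'track'], 3.0)
```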

Analysing Textbooks and Devising Activities in relation to Errors of Zero(0) (0처리 오류에 기초한 교과용 도서 분석 및 활동 구성)

  • Chang, Hyewon;Choi, Mina;Lim, Miin
    • Journal of Elementary Mathematics Education in Korea
    • /
    • v.18 no.2
    • /
    • pp.257-278
    • /
    • 2014
  • The concept of zero (0) and calculations involving 0 are among the most difficult topics that students encounter in learning mathematics. This implies that, when teaching numbers and operations, proper guidance should be provided to help students build a sound concept of 0 and master calculation procedures involving 0. This study investigates instructional situations related to errors with 0 and searches for efficient ways to teach them, based on previous research. To do this, we analyzed elementary mathematics textbooks and workbooks. The results showed that items of the form 0 ÷ (a number) and divisions whose quotient contains 0 as a middle digit are lacking in current textbooks and workbooks. We devised learning activities on these two topics for 3rd and 4th graders, respectively. We expect that the activities will help in devising textbook learning activities and suggest some implications for teaching calculations involving 0.

  • PDF
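To make the two item types concrete, here is one illustrative example of each: a "0 divided by a number" item, and a division whose quotient has 0 as its middle digit, where dropping the internal 0 is the typical error. The numbers are ours, not taken from the paper.

```latex
% Illustrative items only; the specific numbers are not from the paper.
0 \div 7 = 0
\qquad\text{and}\qquad
618 \div 3 = 206 \quad \text{(a common error is to drop the internal 0 and write 26)}
```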

Grammatical Error Correction Using Generative Adversarial Network (적대적 생성 신경망을 이용한 문법 오류 교정)

  • Kwon, Soonchoul;Yu, Hwanjo;Lee, Gary Geunbae
    • Annual Conference on Human and Language Technology
    • /
    • 2019.10a
    • /
    • pp.488-491
    • /
    • 2019
  • Grammatical error correction is the task of taking a grammatically erroneous sentence as input and correcting its errors. Beyond removing the grammatical errors, it is important to produce natural-sounding sentences. This study aims to use a generative adversarial network (GAN) to generate corrections that are natural enough to be indistinguishable from the reference sentences. In experiments, grammatical error correction with a GAN achieved a MaxMatch F0.5 score of 0.4942, outperforming the baseline score of 0.4462.

  • PDF

A Study on Analysis of Dynamic Generation of Initial Weights in EBP Learning (EBP 신경망 학습에서의 동적 초기 가중치 선택에 관한 연구)

  • Kim, Tea-Hun;Lee, Yill-Byung
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2006.11a
    • /
    • pp.35-38
    • /
    • 2006
  • The error backpropagation algorithm used to train multilayer perceptrons (MLPs) relies on the delta rule and steepest descent, so training can take a long time. As a consequence, a poor choice of initial weights causes a marked drop in learning performance for networks trained with error backpropagation. This paper proposes a dynamic initial-weight selection algorithm to improve the convergence time of error backpropagation. Before training, the algorithm pre-measures the weight change rate for the conventionally selected weights and for weight sets in which every weight is 1.0 or -1.0, and chooses the set with the largest change rate as the initial weights. In other words, by using the initial weight change rate as an indicator of later performance, the method rules out the worst-case learning efficiency caused by a bad weight choice and, given the learning characteristics of multilayer networks, guarantees above-average learning efficiency.

  • PDF
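A rough sketch of the selection idea in the abstract above: before training, measure how much the weights would change in one backpropagation step for each candidate initial weight set (a random set and sets with all weights at +1.0 or -1.0), then start from the candidate with the largest change. The network shape, toy data, and single-hidden-layer sigmoid MLP are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def one_step_change_rate(W1, W2, X, y, lr=0.1):
    """Total weight change magnitude after one gradient step (sigmoid MLP, squared error)."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = sig(X @ W1)                               # hidden activations
    out = sig(h @ W2)                             # network output
    d_out = (out - y) * out * (1 - out)           # delta rule at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)            # backpropagated delta at the hidden layer
    dW2, dW1 = h.T @ d_out, X.T @ d_h
    return lr * (np.abs(dW1).sum() + np.abs(dW2).sum())

rng = np.random.default_rng(0)
X = rng.random((8, 3)); y = rng.integers(0, 2, (8, 1)).astype(float)
candidates = {
    "random": (rng.normal(0, 0.5, (3, 4)), rng.normal(0, 0.5, (4, 1))),
    "all +1": (np.ones((3, 4)), np.ones((4, 1))),
    "all -1": (-np.ones((3, 4)), -np.ones((4, 1))),
}
best = max(candidates, key=lambda k: one_step_change_rate(*candidates[k], X, y))
print("initial weights chosen:", best)
```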

A Boundary Matching and Post-processing Method for the Temporal Error Concealment in H.264/AVC (H.264/AVC의 시간적 오류 은닉을 위한 경계 정합과 후처리 방법)

  • Lee, Jun-Woo;Na, Sang-Il;Won, In-Su;Lim, Dae-Kyu;Jeong, Dong-Seok
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.11
    • /
    • pp.1563-1571
    • /
    • 2009
  • In this paper, we propose a new boundary matching method for temporal error concealment and a post-processing algorithm to improve the perceptual quality of the concealed frame. Temporal error concealment substitutes erroneous blocks with similar blocks from the reference frame. In the conventional H.264/AVC approach, the outside pixels of the erroneous block are compared with the inside pixels of the reference block to find the most similar block. However, because only a narrow spatial range of pixels is compared, the conventional method can easily substitute the erroneous block with a wrong one. To substitute erroneous blocks with more accurate ones, we propose an enhanced boundary matching method that compares both the inside and outside pixels of the reference block with the outside pixels of the erroneous block, and that sets up additional candidate motion vectors in a fixed search range based on the maximum and minimum values of the candidate motion vectors. Furthermore, we propose a post-processing method that smooths the edges between concealed blocks and correctly decoded blocks using a modified deblocking filter. The proposed method shows a quality improvement of about 0.9 dB over conventional boundary matching methods.

  • PDF
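The following is an illustrative sketch of basic boundary matching for temporal error concealment: for each candidate motion vector, fetch a candidate block from the reference frame and compare its border pixels with the ring of pixels surrounding the lost block, then copy the best match. The enhanced inner/outer comparison and the fixed search range from the paper are simplified away; block size, candidate set, and demo data are our own assumptions.

```python
import numpy as np

def conceal_block(cur, ref, top, left, bs, candidate_mvs):
    """Replace the bs x bs block at (top, left) in `cur` with the best-matching block of `ref`.
    Assumes (top, left) and every candidate block lie strictly inside the frame."""
    # one-pixel ring around the lost block in the current frame (assumed received correctly)
    outer = np.concatenate([cur[top - 1, left:left + bs],        # row above
                            cur[top + bs, left:left + bs],       # row below
                            cur[top:top + bs, left - 1],         # column to the left
                            cur[top:top + bs, left + bs]])       # column to the right
    best_mv, best_cost = (0, 0), np.inf
    for dy, dx in candidate_mvs:
        r, c = top + dy, left + dx
        cand = ref[r:r + bs, c:c + bs]
        border = np.concatenate([cand[0], cand[-1], cand[:, 0], cand[:, -1]])
        cost = np.abs(outer.astype(int) - border.astype(int)).sum()   # boundary SAD
        if cost < best_cost:
            best_mv, best_cost = (dy, dx), cost
    dy, dx = best_mv
    cur[top:top + bs, left:left + bs] = ref[top + dy:top + dy + bs, left + dx:left + dx + bs]
    return best_mv

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = rng.integers(0, 256, (32, 32))
    cur = ref.copy()
    cur[8:16, 8:16] = 0                                       # simulate a lost 8x8 block
    mvs = [(dy, dx) for dy in (-2, -1, 0, 1, 2) for dx in (-2, -1, 0, 1, 2)]
    print("recovered motion vector:", conceal_block(cur, ref, 8, 8, 8, mvs))
```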

A Study on the Characteristics of Errors Type for Wellness of Alzheimer's Dementia Patients in the Naming Task (알츠하이머성 치매환자의 웰니스를 위한 명명하기 과제에서의 오류유형 특성 연구)

  • Kang, Min-Gu
    • Journal of Korea Entertainment Industry Association
    • /
    • v.14 no.8
    • /
    • pp.213-219
    • /
    • 2020
  • The purpose of this study was to investigate the characteristics of error types in a naming task for a questionable dementia group (n=8), a definite dementia group (n=9), and a normal group (n=10). Naming errors were classified into visual perception errors, semantic association errors, semantically unrelated errors, phonemic errors, "don't know" responses, and no response. For the analysis, descriptive statistics, analysis of variance, and multivariate analysis of variance were conducted using SPSS 21.0. As a result, there was a significant difference in error rates between the groups according to error type. The error types that showed significant differences between the normal group and the other two groups were visual perception errors and semantically unrelated errors. For no-response errors, the difference from the definite dementia group was significant, but there was no significant difference from the questionable dementia group. These results show that Alzheimer's patients have a deficit in confrontation naming ability. They also suggest that it is appropriate to provide additional cues when deficits caused by degeneration at a specific stage of the information-processing sequence become severe.

Kalman filter based Motion Vector Recovery for H.264 (H.264 비디오 표준에서의 칼만 필터 기반의 움직임벡터 복원)

  • Ko, Ki-Hong;Kim, Seong-Whan
    • The KIPS Transactions:PartD
    • /
    • v.14D no.7
    • /
    • pp.801-808
    • /
    • 2007
  • Video coding standards such as MPEG-2, MPEG-4, H.263, and H.264 transmit compressed video data over wired/wireless communication channels with limited bandwidth. Because highly compressed bit-streams are fragile to channel noise, the video is easily damaged by transmission errors. There has been much research on error concealment techniques, which recover transmission errors at the decoder side [1, 2]. We designed an error concealment technique for lost motion vectors in H.264 video coding. In this paper, we propose a Kalman filter based motion vector recovery scheme and evaluate it on standard video sequences. The experimental results show that our scheme restores the original motion vectors with an average precision improvement of 0.91 - 1.12 over conventional H.264 decoding with no error recovery.
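As a toy illustration of the general idea, here is a minimal scalar Kalman filter that tracks one motion-vector component across blocks and substitutes its prediction when a vector is lost. The random-walk state model, the noise variances, and the demo sequence are illustrative assumptions; the paper's actual state model may differ.

```python
def kalman_mv_recovery(observed_mvs, q=0.5, r=2.0):
    """observed_mvs: list of MV components, with None where the vector was lost."""
    x, p = 0.0, 1.0                 # state estimate and its variance
    recovered = []
    for z in observed_mvs:
        p = p + q                   # predict (random walk: next MV ~ current MV)
        if z is None:               # vector lost: use the prediction as the recovery
            recovered.append(x)
            continue
        k = p / (p + r)             # Kalman gain
        x = x + k * (z - x)         # update with the received motion vector
        p = (1 - k) * p
        recovered.append(x)
    return recovered

print(kalman_mv_recovery([2.0, 2.5, None, 3.0, None, 3.5]))
```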

Using Naïve Bayes Classifier and Confusion Matrix Spelling Correction in OCR (나이브 베이즈 분류기와 혼동 행렬을 이용한 OCR에서의 철자 교정)

  • Noh, Kyung-Mok;Kim, Chang-Hyun;Cheon, Min-Ah;Kim, Jae-Hoon
    • Annual Conference on Human and Language Technology
    • /
    • 2016.10a
    • /
    • pp.310-312
    • /
    • 2016
  • To reduce errors in OCR (Optical Character Recognition), this paper proposes a spelling correction system that uses a confusion matrix of correction word pairs and a naïve Bayes classifier. The system corrects only Hangul spelling errors. The corpora used in the experiments are a raw Korean corpus, an OCR output corpus, and an OCR gold-standard corpus. From the raw Korean corpus we built a grapheme-level language model and a prefix corpus for retrieving correction candidates; from the OCR output corpus and the OCR gold-standard corpus we extracted correction word pairs, decomposed them into graphemes to build a confusion matrix, and used it to construct an error model. Correction candidates are retrieved with the prefix corpus, and the n candidates with the highest probability under the naïve Bayes classifier are presented. A correction counts as successful if the correct word form is among the n candidates. On an OCR system with a recognition rate of about 97.73%, presenting 3 correction candidates improved the recognition rate by about 0.28 percentage points, to 98.01%. This result covers Hangul errors only; we expect better results if special characters, digits, and similar cases are handled together in future work.

  • PDF
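The scoring step described above follows the usual noisy-channel pattern: each candidate is scored by a language-model probability times a channel probability built from the grapheme confusion matrix, and the top-n candidates are returned. The sketch below shows that pattern only; the toy language model, confusion counts, and candidate list are made-up stand-ins for the paper's corpora.

```python
from math import log

lm_prob = {"오류": 0.004, "오유": 0.00001, "요류": 0.00001}          # toy unigram language model
confusion = {("류", "류"): 0.95, ("류", "유"): 0.03, ("오", "오"): 0.97, ("오", "요"): 0.02}

def channel_logprob(observed, candidate):
    """Sum of log P(observed grapheme | candidate grapheme) from the confusion matrix."""
    if len(observed) != len(candidate):
        return float("-inf")
    return sum(log(confusion.get((c, o), 1e-6)) for o, c in zip(observed, candidate))

def correct(observed, candidates, n=3):
    """Rank candidates by log P(candidate) + log P(observed | candidate)."""
    scored = [(log(lm_prob.get(c, 1e-9)) + channel_logprob(observed, c), c) for c in candidates]
    return [c for _, c in sorted(scored, reverse=True)[:n]]

print(correct("오유", ["오류", "오유", "요류"]))   # expect "오류" ranked first
```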

Effect of Processing Gain on the Iterative Decoding for a Recursive Single Parity Check Product Code (재귀적 SPCPC에 반복적 복호법을 적용할 때 처리 이득이 성능에 미치는 영향)

  • Chon, Su-Won;Kim, Yong-Cheol
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.9C
    • /
    • pp.721-728
    • /
    • 2010
  • CAMC (constant amplitude multi-code) has better error-correction performance under iterative decoding than SPCPC (single parity check product code). CAMC benefits from a processing gain because it is a spread-spectrum signal, and we show that this processing gain enhances its performance: additional bit errors are corrected during de-spreading of the iteratively decoded signal. If the number of errors that survive iterative decoding is less than or equal to $\sqrt{N}/2 - 1$, all of the bit errors are removed after de-spreading. We also propose a stopping criterion for the iterative decoding based on the histogram of the extrinsic information (EI). The EI values are initially randomly distributed and then converge to $-E_{max}$ or $+E_{max}$ over the iterations; the strength of this convergence reflects how successfully the error-correction process is proceeding. Experimental results show that the proposed method achieves a gain of 0.2 dB in Eb/No.
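A small sketch of a histogram-style stopping test of the kind described in the abstract above: iteration stops once most EI values have piled up near $-E_{max}$ or $+E_{max}$, leaving almost nothing in the middle of the histogram. The 90% edge threshold and the 95% mass criterion are illustrative choices, not the paper's exact rule.

```python
import numpy as np

def should_stop(ei, e_max, edge_fraction=0.9, mass=0.95):
    """Return True when at least `mass` of the EI values lie beyond edge_fraction * Emax."""
    converged = np.abs(ei) >= edge_fraction * e_max
    return converged.mean() >= mass

# mostly converged toward +Emax = 8, a few values still undecided near 0
ei = np.concatenate([np.full(950, 7.8), np.full(50, 0.5)])
print(should_stop(ei, e_max=8.0))   # True
```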