• Title/Summary/Keyword: word decoding


Effects of Orthographic Knowledge and Phonological Awareness on Visual Word Decoding and Encoding in Children Aged 5-8 Years (5~8세 아동의 철자지식과 음운인식이 시각적 단어 해독과 부호화에 미치는 영향)

  • Na, Ye-Ju;Ha, Ji-Wan
    • Journal of Digital Convergence
    • /
    • v.14 no.6
    • /
    • pp.535-546
    • /
    • 2016
  • This study examined the relations among orthographic knowledge, phonological awareness, and visual word decoding and encoding abilities. Children aged 5 to 8 years took a letter knowledge test, a phoneme-grapheme correspondence test, an orthographic representation test (regular and irregular word representation), a phonological awareness test (word, syllable, and phoneme awareness), a word decoding test (regular and irregular word reading), and a word encoding test (regular and irregular word dictation). Performance on all tasks differed significantly among groups, and there were positive correlations among the tasks. In the word decoding and encoding tests, the variables with the most predictive power were letter knowledge and orthographic representation ability. The findings indicate that, at these ages, orthographic knowledge influences visual word decoding and encoding skills more than phonological awareness does.
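The "predictive power" analysis the abstract refers to is typically a multiple regression. A minimal sketch of how such coefficients are estimated, using synthetic scores (all numbers below are illustrative, not the study's data):

```python
import numpy as np

# Illustrative only: synthetic scores for 10 children (not the study's data).
rng = np.random.default_rng(0)
letter_knowledge = rng.uniform(0, 20, 10)
orthographic_rep = rng.uniform(0, 20, 10)
phon_awareness = rng.uniform(0, 20, 10)
# Decoding score driven mostly by the first two predictors, mirroring the
# study's reported pattern (weights are made up for the example).
decoding = 2.0 * letter_knowledge + 1.5 * orthographic_rep + 0.2 * phon_awareness

X = np.column_stack([letter_knowledge, orthographic_rep, phon_awareness,
                     np.ones(10)])          # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, decoding, rcond=None)
print(coef[:3])  # recovered weights ~ [2.0, 1.5, 0.2]
```

With noise-free synthetic data the least-squares fit recovers the generating weights exactly; real predictive-power comparisons would look at standardized coefficients or variance explained.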

Phonetic Tied-Mixture Syllable Model for Efficient Decoding in Korean ASR (효율적 한국어 음성 인식을 위한 PTM 음절 모델)

  • Kim Bong-Wan;Lee Yong-Jn
    • MALSORI
    • /
    • no.50
    • /
    • pp.139-150
    • /
    • 2004
  • A Phonetic Tied-Mixture (PTM) model has been proposed for efficient decoding in large vocabulary continuous speech recognition (LVCSR) systems. PTM models have been reported to decode faster than triphones by sharing a set of mixture components among states at the same topological location [5]. In this paper we propose a Phonetic Tied-Mixture Syllable (PTMS) model which extends the PTM technique to syllables. The proposed PTMS model decodes 13% faster than PTM. Despite the difference in context-dependent modeling (PTM: cross-word context-dependent modeling; PTMS: word-internal left-phone-dependent modeling), the proposed model shows less than 1% degradation in word accuracy compared with PTM at the same beam width. With a different beam width, it achieves better word accuracy than PTM at the same or higher speed.
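The tied-mixture idea can be sketched as follows: states share one codebook of Gaussian components and differ only in their mixture weights, so the expensive per-component densities are computed once per frame and reused (shapes and parameters below are illustrative, not the paper's models):

```python
import numpy as np

def log_gauss(x, means, variances):
    """Log density of x under each diagonal-covariance Gaussian component."""
    diff = x - means                                   # (n_comp, dim)
    return -0.5 * np.sum(np.log(2 * np.pi * variances) + diff**2 / variances,
                         axis=1)

rng = np.random.default_rng(1)
n_comp, dim, n_states = 8, 3, 5
means = rng.normal(size=(n_comp, dim))                 # shared component means
variances = np.full((n_comp, dim), 1.0)
weights = rng.dirichlet(np.ones(n_comp), size=n_states)  # one weight row per state

frame = rng.normal(size=dim)
comp_ll = log_gauss(frame, means, variances)   # computed ONCE for the shared set
# Per-state likelihood = log sum_k w_sk * N_k(frame), reusing comp_ll:
state_ll = np.log(weights @ np.exp(comp_ll))
print(state_ll.shape)  # one likelihood per state from a single density pass
```

The saving is that all five states cost only eight density evaluations per frame; untied models would evaluate a separate mixture per state.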


Korean first graders' word decoding skills, phonological awareness, rapid automatized naming, and letter knowledge with/without developmental dyslexia (초등 1학년 발달성 난독 아동의 낱말 해독, 음운인식, 빠른 이름대기, 자소 지식)

  • Yang, Yuna;Pae, Soyeong
    • Phonetics and Speech Sciences
    • /
    • v.10 no.2
    • /
    • pp.51-60
    • /
    • 2018
  • This study compares the word decoding skills, phonological awareness (PA), rapid automatized naming (RAN) skills, and letter knowledge of first graders with developmental dyslexia (DD) and typically developing (TD) first graders. Eighteen children with DD and eighteen TD children, matched on nonverbal intelligence and discourse ability, participated in the study. The word decoding subtest of a Korean language-based reading assessment (Pae et al., 2015) was administered. Phoneme-grapheme correspondent words were analyzed according to whether the word has meaning, whether the syllable has a final consonant, and the position of the grapheme in the syllable. Letter knowledge was assessed by asking the names and sounds of 12 consonants and 6 vowels. The children's PA was tested at the word, syllable, body-coda, and phoneme-blending levels. Object and letter RAN was measured in seconds. Difficulty decoding non-words was more noticeable in the DD group than in the TD group. The TD children read graphemes in syllable-initial and syllable-final position with 99% accuracy, whereas children with DD read them with 80% and 82% accuracy, respectively. The DD group also had more difficulty decoding words with two patchims (syllable-final consonant letters), reading only 57% of them correctly compared with 91% for the TD group. There were significant differences in body-coda PA, phoneme-level PA, letter RAN, object RAN, and letter-sound knowledge between the two groups. This study confirms the existence of developmental dyslexia in Korean and the urgent need to include a Korean-specific phonics approach in the education system.

A Modified Viterbi Algorithm for Word Boundary Detection Error Compensation (단어 경계 검출 오류 보정을 위한 수정된 비터비 알고리즘)

  • Chung, Hoon;Chung, Ik-Joo
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.1E
    • /
    • pp.21-26
    • /
    • 2007
  • In this paper, we propose a modified Viterbi algorithm to compensate for endpoint detection errors during the decoding phase of an isolated word recognition task. Since the conventional Viterbi algorithm explores only the search space whose boundaries are fixed to the endpoints of the utterance segmented by the endpoint detector, recognition performance depends heavily on the accuracy of endpoint detection: inaccurately segmented word boundaries lead directly to recognition errors. To mitigate the loss of recognition accuracy due to endpoint detection errors, we describe an unconstrained search over word boundaries and present an algorithm that explores this search space efficiently. The proposed algorithm was evaluated on an isolated word recognition task under a variety of simulated endpoint detection error conditions. It reduced the word error rate (WER) considerably, from 84.4% to 10.6%, while consuming only slightly more computation.
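The relaxed-boundary search can be sketched as a Viterbi pass with free entry and exit windows around the detected endpoints. This is a simplified illustration under assumed left-to-right HMM conventions, not the paper's exact algorithm:

```python
import numpy as np

def viterbi_relaxed(log_obs, log_trans, start_win, end_win):
    """log_obs: (T, S) per-frame state log-likelihoods;
    log_trans: (S, S) transition log-probs of a left-to-right HMM;
    start_win / end_win: frame ranges where the word may begin / end."""
    T, S = log_obs.shape
    delta = np.full((T, S), -np.inf)
    for t in start_win:                    # free entry: word may start here
        delta[t, 0] = max(delta[t, 0], log_obs[t, 0])
    for t in range(1, T):
        step = delta[t - 1][:, None] + log_trans   # (S, S) predecessor scores
        delta[t] = np.maximum(delta[t], step.max(axis=0) + log_obs[t])
    # free exit: best score of the final state anywhere in the end window
    return max(delta[t, S - 1] for t in end_win)

# Toy 2-state model over 6 frames (numbers are illustrative).
log_obs = np.zeros((6, 2))
log_trans = np.array([[np.log(0.7), np.log(0.3)],
                      [-np.inf,     0.0       ]])
relaxed = viterbi_relaxed(log_obs, log_trans, range(0, 3), range(3, 6))
fixed   = viterbi_relaxed(log_obs, log_trans, range(0, 1), range(5, 6))
print(relaxed >= fixed)  # the windowed search covers a superset of paths: True
```

Because the windowed search explores a superset of the fixed-endpoint paths, its best score can never be worse; the engineering problem the paper addresses is doing this search efficiently.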

New Decoding Scheme for LDPC Codes Based on Simple Product Code Structure

  • Shin, Beomkyu;Hong, Seokbeom;Park, Hosung;No, Jong-Seon;Shin, Dong-Joon
    • Journal of Communications and Networks
    • /
    • v.17 no.4
    • /
    • pp.351-361
    • /
    • 2015
  • In this paper, a new decoding scheme is proposed to improve the error-correcting performance of low-density parity-check (LDPC) codes in the high signal-to-noise ratio (SNR) region by using post-processing. It behaves as follows. First, conventional LDPC decoding is applied to the received LDPC codewords one by one. Then, we count the number of word errors in a predetermined number of decoded codewords. If there is no word error, nothing needs to be done and we can move to the next group of codewords with no delay. Otherwise, we perform post-processing that produces a new soft-valued codeword (fully explained in the main body of this paper) and then apply conventional LDPC decoding to it again to recover the unsuccessfully decoded codewords. For the proposed decoding scheme, we adopt a simple product code structure which contains LDPC codes and simple algebraic codes as its horizontal and vertical codes, respectively. The decoding capability of the proposed scheme is defined and analyzed using the parity-check matrices of the vertical codes; in particular, the combined decodability is derived for the case of single parity-check (SPC) codes and Hamming codes used as vertical codes. It is also shown that the proposed scheme achieves much better error-correcting capability in the high SNR region with little additional decoding complexity, compared with conventional LDPC decoding.
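The product-code recovery idea with SPC vertical codes can be illustrated in miniature: with one vertical parity row that is the XOR of all data rows, a single failed horizontal codeword can be rebuilt from the others. The hard-decision setting and function names below are an illustrative simplification of the paper's soft-valued scheme:

```python
import numpy as np

def recover_failed_row(rows, parity_row, row_ok):
    """rows: hard-decision codeword rows; row_ok[i]: did row i decode?"""
    failed = [i for i, ok in enumerate(row_ok) if not ok]
    if not failed:
        return rows                    # no word errors: move on with no delay
    if len(failed) > 1:
        raise ValueError("an SPC vertical code can rebuild only one failed row")
    i = failed[0]
    others = [r for j, r in enumerate(rows) if j != i]
    rows[i] = np.bitwise_xor.reduce(others + [parity_row])
    return rows

rng = np.random.default_rng(2)
rows = [rng.integers(0, 2, 8) for _ in range(4)]
parity_row = np.bitwise_xor.reduce(rows)      # vertical SPC parity row
original = rows[1].copy()
rows[1] = rng.integers(0, 2, 8)               # pretend row 1 failed to decode
rows = recover_failed_row(rows, parity_row, [True, False, True, True])
print(np.array_equal(rows[1], original))      # True: failed row rebuilt exactly
```

The paper's scheme generalizes this: instead of hard XOR recovery, the vertical code produces a new soft-valued codeword that is fed back through the LDPC decoder.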

Design of A Reed-Solomon Code Decoder for Compact Disc Player using Microprogramming Method (마이크로프로그래밍 방식을 이용한 CDP용 Reed-Solomon 부호의 복호기 설계)

  • 김태용;김재균
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.18 no.10
    • /
    • pp.1495-1507
    • /
    • 1993
  • In this paper, an implementation of a Reed-Solomon (RS) code decoder for a compact disc player (CDP), designed using a microprogramming method, is presented. In this decoding strategy, equations composed of Newton's identities are used to compute the coefficients of the error locator polynomial and to check the number of erasures in C2 (the outer code). In C2 decoding, the values of erasures are computed from the syndromes and the results of C1 (inner code) decoding. Error correctability is improved by correcting up to four erasures. The decoder contains an arithmetic logic unit over GF(2^8) for error correction and a decoding controller with a programming ROM holding microinstructions, which implement the RS decoding algorithm; the decoder can therefore be modified for upgrades or other applications by changing only the programming ROM. The decoder is implemented at logic level in Verilog HDL. Each microinstruction is 14 bits (one word), the programming ROM holds 360 words, and decoding both C1 and C2 takes at most 424 clock cycles.
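The GF(2^8) arithmetic such a decoder's ALU performs can be sketched with log/antilog tables and a syndrome check. The primitive polynomial 0x11D and the toy word below are illustrative, not necessarily the exact C1/C2 configuration:

```python
# Build log/antilog tables for GF(2^8), primitive polynomial x^8+x^4+x^3+x^2+1.
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D                     # reduce modulo the primitive polynomial
for i in range(255, 512):
    EXP[i] = EXP[i - 255]              # doubled table avoids a mod in gf_mul

def gf_mul(a, b):
    """Field multiplication via log/antilog lookup, as a hardware ALU would."""
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def syndromes(codeword, n_syn):
    """S_j = sum_i c_i * alpha^(i*j); all zero iff the word is a codeword."""
    out = []
    for j in range(1, n_syn + 1):
        s = 0
        for i, c in enumerate(codeword):
            s ^= gf_mul(c, EXP[(i * j) % 255])
        out.append(s)
    return out

print(syndromes([0] * 10, 4))  # the all-zero word is trivially a codeword
```

From syndromes like these, Newton's identities relate the power sums S_j to the coefficients of the error locator polynomial, which is the computation the paper microprograms.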


Error Correction of Digital Data in Radio Data System (라디오 데이터 시스템의 디지털 데이터 에러 정정)

  • 김기근
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1991.06a
    • /
    • pp.78-81
    • /
    • 1991
  • Digital radio data is composed of groups, each divided into 4 blocks of 26 bits, and each block is made up of an information word and a check word. The check word, composed of a code word and an offset word, is used for group/block synchronization and error correction. In this paper, we investigate a group/block synchronizer using the offset word, and a shortened cyclic decoder for correcting errors produced during radio data transmission. We also simulate the decoding process of the proposed decoder. The simulation results confirm that the proposed decoder meets the required coding capability.
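The (26, 16) block structure described above can be sketched as a GF(2) polynomial division: the 10-bit check word is the CRC remainder of the 16-bit information word, XORed with the block-type offset word. The generator below is the one specified for RDS; the offset value is left at zero for illustration rather than tied to a specific block type:

```python
GEN = 0b10110111001   # x^10 + x^8 + x^7 + x^5 + x^4 + x^3 + 1 (degree 10)

def poly_mod(word, nbits):
    """Remainder of a GF(2) polynomial (packed into an int) modulo GEN."""
    for shift in range(nbits - 11, -1, -1):    # long division over GF(2)
        if word & (1 << (shift + 10)):
            word ^= GEN << shift
    return word & 0x3FF                        # 10-bit remainder

def make_block(info16, offset10):
    """26-bit block = 16-bit info word, then (10-bit CRC XOR offset word)."""
    check = poly_mod(info16 << 10, 26)
    return (info16 << 10) | (check ^ offset10)

block = make_block(0b1010101010101010, 0)
print(poly_mod(block, 26))  # 0: with a zero offset the syndrome vanishes
```

Because the division is linear over GF(2), a received block's syndrome equals its offset word when there are no transmission errors, which is exactly what lets the receiver use the offset for block synchronization while still detecting errors.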


A Simplified Two-Step Majority-Logic Decoder for Cyclic Product Codes (순환 곱 코드의 간단한 두 단계 다수결 논리 디코더)

  • 정연호;강창언
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.10 no.3
    • /
    • pp.115-122
    • /
    • 1985
  • In this paper, a decoder for the product of the (7, 4) cyclic code and the (3, 1) cyclic code is designed with fewer majority gates than an ordinary two-step majority-logic decoder for the same codes, and it is built in a simple structure by using a ROM as a majority gate. Correcting an entire received word (21 bits) takes 42 clock pulses, so the decoding time is about 0.7 times that of the previous approach, in which two decoders and two-dimensional word arrays were used together.
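Using a ROM as a majority gate amounts to precomputing the gate's truth table into a memory addressed by the input bits, so a decode step costs one memory read instead of gate logic. A minimal sketch (the 3-input width is illustrative, not the paper's exact gate):

```python
N_INPUTS = 3
# "Programming ROM": one entry per input combination, 1 iff a majority of
# the address bits are set.
ROM = [1 if bin(addr).count("1") > N_INPUTS // 2 else 0
       for addr in range(1 << N_INPUTS)]

def majority(bits):
    """Majority vote by ROM lookup: pack the check sums into an address."""
    addr = 0
    for b in bits:
        addr = (addr << 1) | b
    return ROM[addr]

print(majority([1, 0, 1]), majority([0, 0, 1]))  # 1 0
```

In two-step majority-logic decoding the inputs to such a gate are the orthogonal check sums on a received bit, and the ROM output is the estimated error value for that bit.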


A Test Algorithm for Instruction Decoding Function of MC68000 μP (MC68000 μP의 명령어디코오딩 기능에 관한 시험알고리즘)

  • 김종호;안광선
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.22 no.6
    • /
    • pp.124-132
    • /
    • 1985
  • The functional testing of microprocessors has become a time-consuming task with the progress of LSI/VLSI technology. In this paper, we present an efficient method to test the instruction decoding function of the MC68000, which is the main source of complexity in its functional testing. The method is based on analysis of the operation word, the instruction decoding information available to the user in the microprocessor's manual. The instructions are partitioned into representative instructions and party instructions. Then 332 minimum test instruction pairs are chosen from 69 basic instructions for detecting instruction decoding function faults, and the test procedure for these pairs is discussed.


Document Summarization Model Based on General Context in RNN

  • Kim, Heechan;Lee, Soowon
    • Journal of Information Processing Systems
    • /
    • v.15 no.6
    • /
    • pp.1378-1391
    • /
    • 2019
  • In recent years, automatic document summarization has been widely studied in natural language processing, thanks to remarkable developments in deep learning models. To decode a word, existing models for abstractive summarization usually represent the context of the document as a weighted sum of the hidden states of the input words. Because these weights change at each decoding step, they reflect only the local context of the document, which makes it difficult to generate a summary that reflects the document's overall context. To solve this problem, we introduce the notion of a general context and propose a summarization model based on it. The general context reflects the overall context of the document and is independent of each decoding step. Experimental results on the CNN/Daily Mail dataset show that the proposed model outperforms existing models.
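The contrast between a step-dependent local context (standard attention) and a step-independent general context can be sketched as follows; the shapes and the scoring function are illustrative simplifications, not the paper's architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(3)
H = rng.normal(size=(6, 4))          # encoder hidden states: 6 words, dim 4

def local_context(decoder_state):
    """Attention weights depend on the decoder state, so the context
    vector changes at every decoding step."""
    weights = softmax(H @ decoder_state)
    return weights @ H

# General context: computed once from the document alone, identical at
# every decoding step (the mean-state query is one illustrative choice).
general_weights = softmax(H @ H.mean(axis=0))
general_context = general_weights @ H

c1 = local_context(rng.normal(size=4))   # two different decoder states...
c2 = local_context(rng.normal(size=4))
print(np.allclose(c1, c2))   # False: local context shifts with each step
print(general_context.shape) # one fixed vector reused at all steps
```

The model in the paper conditions decoding on both kinds of context, so each generated word sees the step-local attention result and the fixed document-level vector.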