• Title/Summary/Keyword: word decoding

Document Summarization Model Based on General Context in RNN

  • Kim, Heechan;Lee, Soowon
    • Journal of Information Processing Systems / v.15 no.6 / pp.1378-1391 / 2019
  • In recent years, automatic document summarization has been widely studied in the field of natural language processing thanks to the remarkable developments made using deep learning models. To decode each word, existing models for abstractive summarization usually represent the context of the document as a weighted sum of the hidden states of the input words. Because the weights change at each decoding step, they reflect only the local context of the document. Therefore, it is difficult to generate a summary that reflects the overall context of the document. To solve this problem, we introduce the notion of a general context and propose a summarization model based on it. The general context reflects the overall context of the document and is independent of the decoding step. Experimental results using the CNN/Daily Mail dataset show that the proposed model outperforms existing models.
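
A minimal sketch of the contrast the abstract describes: a step-dependent attention context recomputed from the decoder state at every step versus a step-independent summary of the whole document. The variable names and the mean-pooled "general context" are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Encoder hidden states for a document of T words, each of dimension d.
T, d = 6, 4
rng = np.random.default_rng(0)
H = rng.normal(size=(T, d))           # h_1 ... h_T

def local_context(H, s_t):
    """Step-dependent context: attention weights recomputed at every decoding step."""
    scores = H @ s_t                   # dot-product attention scores
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()               # softmax over input positions
    return alpha @ H                   # weighted sum of encoder hidden states

def general_context(H):
    """Step-independent context: one summary of the whole document (mean pooling,
    purely for illustration)."""
    return H.mean(axis=0)

s_t = rng.normal(size=d)               # decoder state at some step t
c_local = local_context(H, s_t)        # changes whenever s_t changes
c_general = general_context(H)         # fixed for the whole summary
```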

1-Pass Semi-Dynamic Network Decoding Using a Subnetwork-Based Representation for Large Vocabulary Continuous Speech Recognition (대어휘 연속음성인식을 위한 서브네트워크 기반의 1-패스 세미다이나믹 네트워크 디코딩)

  • Chung Minhwa;Ahn Dong-Hoon
    • MALSORI / no.50 / pp.51-69 / 2004
  • In this paper, we present a one-pass semi-dynamic network decoding framework that inherits both the fast decoding speed of static network decoders and the memory efficiency of dynamic network decoders. Our method is based on a novel language model network representation that is essentially a finite state machine (FSM). The static network derived from the language model network [1][2] is partitioned into smaller subnetworks that are static by nature or self-structured. The whole network is managed dynamically so that only the subnetworks required for decoding are cached in memory. The network is near-minimized by applying a tail-sharing algorithm. Our decoder is evaluated on a 25k-word Korean broadcast news transcription task. The search network itself is reduced by 73.4% by the tail-sharing algorithm. Compared with the equivalent static network decoder, the semi-dynamic network decoder increases decoding time by at most 6%, while it can be flexibly adapted to various memory configurations, with a minimum memory usage of 37.6% of the complete network size.
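
A minimal sketch of the demand-driven management the abstract mentions, i.e., keeping only the subnetworks needed by the search in memory. The LRU policy and the loader function are assumptions for illustration; the paper's actual subnetwork representation and caching scheme are more involved.

```python
from collections import OrderedDict

class SubnetworkCache:
    """Keep only the subnetworks currently needed for decoding in memory (simple LRU policy)."""

    def __init__(self, loader, max_entries):
        self.loader = loader              # function: subnetwork id -> subnetwork object
        self.max_entries = max_entries
        self.cache = OrderedDict()

    def get(self, net_id):
        if net_id in self.cache:
            self.cache.move_to_end(net_id)       # mark as recently used
        else:
            self.cache[net_id] = self.loader(net_id)
            if len(self.cache) > self.max_entries:
                self.cache.popitem(last=False)   # evict the least recently used subnetwork
        return self.cache[net_id]

# Usage: the decoder requests subnetworks as the search proceeds.
cache = SubnetworkCache(loader=lambda i: f"subnetwork-{i}", max_entries=2)
for i in [0, 1, 0, 2, 3]:
    cache.get(i)
```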

Design of an Encoder and Decoder Using Reed-Muller Code (Reed-Muller 부호의 인코더 및 디코더 설계)

  • 김영곤;강창언
    • Proceedings of the Korean Institute of Communication Sciences Conference / 1984.10a / pp.15-18 / 1984
  • The majority-logic decoding algorithm for geometry codes is more simply implemented than the known decoding algorithms for BCH codes. Thus, for moderate code word lengths, geometry codes provide rather effective error control. The purpose of this paper is to investigate Reed-Muller codes, to design the encoder and decoder circuits, and to evaluate the performance of the (15, 11) Reed-Muller code. Experimental results show that the system not only has single-error-correcting ability but also good overall performance.
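
A minimal sketch of majority-logic (Reed) decoding, shown for the small first-order RM(1,3) code rather than the paper's (15, 11) code; the structure of the parity checks and the majority votes is the same idea. The code below is an illustration, not the paper's circuit.

```python
import itertools

m = 3                                                  # RM(1,3): length 8, 4 message bits, corrects 1 error
points = list(itertools.product([0, 1], repeat=m))     # coordinate labels v = (v1, v2, v3)
index = {v: j for j, v in enumerate(points)}

def encode(msg):
    """msg = (a0, a1, a2, a3); codeword bit at v is a0 + a1*v1 + a2*v2 + a3*v3 (mod 2)."""
    a0, a = msg[0], msg[1:]
    return [(a0 + sum(ai * vi for ai, vi in zip(a, v))) % 2 for v in points]

def majority(bits):
    return 1 if 2 * sum(bits) > len(bits) else 0

def decode(r):
    """Reed (majority-logic) decoding: each a_i is the majority vote of simple parity checks."""
    a = []
    for i in range(m):
        checks = []
        for v in points:
            if v[i] == 0:
                w = list(v); w[i] = 1
                checks.append((r[index[v]] + r[index[tuple(w)]]) % 2)
        a.append(majority(checks))
    # Remove the recovered first-order part, then majority-vote the constant term a0.
    residual = [(r[j] + sum(ai * vi for ai, vi in zip(a, points[j]))) % 2 for j in range(len(points))]
    return [majority(residual)] + a

c = encode([1, 0, 1, 1])
c[5] ^= 1                                              # inject a single bit error
assert decode(c) == [1, 0, 1, 1]
```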

Quantization Performances and Iteration Number Statistics for Decoding Low Density Parity Check Codes (LDPC 부호의 복호를 위한 양자화 성능과 반복 횟수 통계)

  • Seo, Young-Dong;Kong, Min-Han;Song, Moon-Kyou
    • Journal of the Institute of Electronics Engineers of Korea TC / v.45 no.2 / pp.37-43 / 2008
  • The performance and hardware complexity of LDPC decoders depend on the quantization design parameters, namely the clipping threshold $c_{th}$ and the number of quantization bits q, and also on the maximum number of decoding iterations. In this paper, the BER performances of LDPC codes are evaluated with respect to the clipping threshold $c_{th}$ and the number of quantization bits q through simulation studies. By comparing the quantized Min-Sum algorithm with the ideal Min-Sum algorithm, it is shown that the quantized case with $c_{th}=2.5$ and q=6 has the best performance, approaching the ideal case. The decoding complexities are calculated and the word error rates (WER) are estimated using the probability density function obtained through statistical analysis of the iteration numbers. These results can be utilized to trade off decoding performance against complexity in LDPC decoder design.
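
A minimal sketch of where the two design parameters enter: messages are clipped at $c_{th}$ and mapped onto 2^q levels before the Min-Sum check-node update. The uniform quantizer below is an assumption about the form of the quantizer, shown only to illustrate the roles of $c_{th}$ and q.

```python
import numpy as np

def quantize(x, c_th=2.5, q=6):
    """Clip LLR messages to [-c_th, c_th] and map them onto 2^q uniform levels."""
    step = 2 * c_th / (2 ** q - 1)
    x = np.clip(x, -c_th, c_th)
    return np.round(x / step) * step

def check_node_min_sum(msgs):
    """Min-Sum check-node update: for each edge, the sign product and minimum magnitude
    over all *other* incoming messages."""
    msgs = np.asarray(msgs, dtype=float)
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        others = np.delete(msgs, i)
        out[i] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

llrs = np.array([0.8, -3.1, 1.7, -0.2])        # variable-to-check messages
print(check_node_min_sum(quantize(llrs)))       # quantized Min-Sum update
```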

Comparison Research of Non-Target Sentence Rejection on Phoneme-Based Recognition Networks (음소기반 인식 네트워크에서의 비인식 대상 문장 거부 기능의 비교 연구)

  • Kim, Hyung-Tai;Ha, Jin-Young
    • MALSORI / no.59 / pp.27-51 / 2006
  • For speech recognition systems, a rejection function as well as a decoding function is necessary to improve reliability. There have been many research efforts on out-of-vocabulary word rejection; however, little attention has been paid to non-target sentence rejection. Recently, pronunciation approaches using speech recognition have increased the need for non-target sentence rejection to provide more accurate and robust results. In this paper, we propose a filler model method and a word/phoneme detection ratio method to implement a non-target sentence rejection system. We evaluated the filler model method with word-level, phoneme-level, and sentence-level filler models, respectively, and performed similar experiments with the word-level and phoneme-level detection ratio methods. For the performance evaluation, the minimized average of FAR and FRR is used to compare the effectiveness of each method with respect to the number of words in the given sentences. The experimental results show that the word-level methods outperform the others, and that the word-level filler model performs slightly better than the word detection ratio method.
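
A minimal sketch of the detection-ratio idea: accept the utterance only if enough of the target word (or phoneme) sequence is actually detected by the recognizer. The threshold value and the simple membership test are assumptions for illustration; in the paper the operating point is chosen by minimizing the average of FAR and FRR.

```python
def detection_ratio(recognized_units, target_units):
    """Fraction of the target word/phoneme sequence that the recognizer actually detected."""
    detected = sum(1 for u in target_units if u in recognized_units)
    return detected / len(target_units)

def accept_sentence(recognized_units, target_units, threshold=0.7):
    """Accept the utterance as the target sentence only if enough units are detected;
    otherwise reject it as a non-target sentence."""
    return detection_ratio(recognized_units, target_units) >= threshold

target = ["she", "sells", "sea", "shells"]
print(accept_sentence(["she", "sells", "shells"], target))   # True  (3/4 detected)
print(accept_sentence(["he", "fell"], target))               # False (0/4 detected)
```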

Word Recognition, Phonological Awareness and RAN Ability of the Korean Second-graders

  • Yoon, Hyo-Jin;Pae, So-Yeong;Ko, Do-Heung
    • Speech Sciences / v.12 no.1 / pp.7-14 / 2005
  • This study investigated the reading ability of Korean second-graders and the relationship between reading and phonological awareness and RAN (Rapid Automatized Naming) ability. A language-based reading assessment battery was used. Children at the end of the Korean second grade were still at the developmental stage of decoding skill and appeared to be at Chall's Stage 1. Findings indicated significant correlations between reading ability and phonological awareness and between reading ability and RAN ability. Therefore, the importance of phonological processing could be extended to syllable-based alphabetic languages.

A Study on Decoding Method of the R-S Code for Double-Encoding System in the Frequency Domain (주파수 영역에서 2중부호화 R-S부호의 부호방식에 관한 연구)

  • 전경일;김남욱;김용득
    • The Journal of Korean Institute of Communications and Information Sciences / v.14 no.3 / pp.216-226 / 1989
  • In this paper, we outline a decoding method for a double-encoding system that exploits the error-correcting capability of the codes together with a simple decoding procedure. We form two-dimensional code words of the doubly-encoded code using the $C_1$(32, 28, 5) and $C_2$(32, 26, 7) Reed-Solomon codes, and carry out computer simulations of the error-correcting processes in the frequency domain. In these processes, newly developed digital signal processing techniques, such as error correction using the Berlekamp-Massey algorithm in the frequency domain, are verified.
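
In frequency-domain Reed-Solomon decoding, the Berlekamp-Massey algorithm is applied to the syndrome sequence to synthesize the shortest LFSR, whose connection polynomial is the error-locator polynomial. A minimal binary (GF(2)) version of the LFSR-synthesis step is sketched below purely for illustration; the paper works over the Reed-Solomon symbol field, not GF(2).

```python
def berlekamp_massey_gf2(s):
    """Shortest LFSR (connection polynomial c, register length L) generating bit sequence s."""
    n_bits = len(s)
    c = [0] * (n_bits + 1); b = [0] * (n_bits + 1)
    c[0] = b[0] = 1
    L, m = 0, 1
    for n in range(n_bits):
        # Discrepancy between s[n] and the current LFSR's prediction from the previous L bits.
        d = s[n]
        for i in range(1, L + 1):
            d ^= c[i] & s[n - i]
        if d == 0:
            m += 1
        elif 2 * L <= n:
            t = c[:]
            for i in range(n_bits + 1 - m):
                c[i + m] ^= b[i]
            L, b, m = n + 1 - L, t, 1
        else:
            for i in range(n_bits + 1 - m):
                c[i + m] ^= b[i]
            m += 1
    return c[:L + 1], L

# An m-sequence generated by s[n] = s[n-1] ^ s[n-3] yields the length-3 LFSR
# with connection polynomial 1 + x + x^3, i.e. ([1, 1, 0, 1], 3).
print(berlekamp_massey_gf2([0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1]))
```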

Automatic Speech Database Verification Method Based on Confidence Measure

  • Kang Jeomja;Jung Hoyoung;Kim Sanghun
    • MALSORI / no.51 / pp.71-84 / 2004
  • In this paper, we propose an automatic speech database verification method (automatic verification) based on a confidence measure for a large speech database. This method verifies the consistency between a given transcription and the speech using the confidence measure. The automatic verification process consists of two stages: a word-level likelihood computation stage and a multi-level likelihood ratio computation stage. In the word-level likelihood computation stage, we calculate the word-level likelihood using the Viterbi decoding algorithm and produce the segment information. In the multi-level likelihood ratio computation stage, we calculate the word-level and phone-level likelihood ratios based on the confidence measure with an anti-phone model. With automatic verification, we achieved about 61% error reduction and reduced the verification time from one month of manual work to 1-2 days.
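
A minimal sketch of the anti-model likelihood-ratio confidence idea: the forced-alignment likelihood of the given transcription is compared against an anti (competing) model score, normalized per frame. The function names, example scores, and threshold below are assumptions for illustration, not the paper's exact measure.

```python
def confidence(log_lik_target, log_lik_anti, num_frames):
    """Per-frame log-likelihood ratio between the transcription model and the anti model;
    higher means the transcription fits the speech better."""
    return (log_lik_target - log_lik_anti) / num_frames

def verify_word(log_lik_target, log_lik_anti, num_frames, threshold=0.5):
    """Flag the word for manual correction when the confidence falls below the threshold."""
    return confidence(log_lik_target, log_lik_anti, num_frames) >= threshold

print(verify_word(log_lik_target=-420.0, log_lik_anti=-500.0, num_frames=120))  # True
print(verify_word(log_lik_target=-480.0, log_lik_anti=-470.0, num_frames=120))  # False
```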

Symbol Decoding Schemes Combined with Channel Estimations for Coded OFDM Systems in Fading Channels. (페이딩 채널환경에서 CDFDM 시스템에 대한 채널 추정과 결합된 심볼검출 방법)

  • Cho, Jin-Woong;Kang, Cheol-Ho
    • Journal of the Institute of Electronics Engineers of Korea TC / v.37 no.9 / pp.1-10 / 2000
  • This paper proposes symbol decoding schemes combined with channel estimation techniques for coded orthogonal frequency division multiplexing (COFDM) systems in fading channels. The proposed schemes consist of a symbol decoding technique and channel estimation techniques. The symbol decoding, based on the Viterbi algorithm, is achieved by matching the length of the branch word from the encoder trellis to the codeword length of the symbol candidate on the decoder trellis. Three combination schemes are described and their error performances are compared. The first scheme combines the symbol decoding technique with a training-based channel estimation technique. The second scheme adds a decision-directed channel estimation technique to the first: the time-varying channel transfer functions are tracked by the decision-directed estimator, and the channel transfer functions used in the symbol decoder are updated every COFDM symbol. Finally, in order to reduce the effect of additive white Gaussian noise (AWGN) between adjacent subchannels, a deinterleaved average channel estimation technique is combined. The error performances of the three schemes are significantly improved compared with those of zero-forcing equalization schemes.
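
A minimal sketch of decision-directed channel tracking on a single OFDM subcarrier: start from a training-based estimate, then refresh it every symbol using the decoded (hard-decision) symbol. The QPSK decisions, fading model, and smoothing factor are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def nearest_qpsk(x):
    """Hard decision: map an equalized sample to the closest QPSK constellation point."""
    return qpsk[np.argmin(np.abs(qpsk - x))]

# Initial channel estimate from a known training symbol.
h_true = 0.9 * np.exp(1j * 0.3)
training = qpsk[0]
y_train = h_true * training + 0.05 * (rng.normal() + 1j * rng.normal())
h_est = y_train / training                      # training-based estimate

# Decision-directed tracking: use the decided symbol to update the estimate each OFDM symbol.
alpha = 0.5                                      # smoothing factor (assumed)
for _ in range(5):
    h_true *= np.exp(1j * 0.02)                  # slowly time-varying channel
    x = qpsk[rng.integers(4)]
    y = h_true * x + 0.05 * (rng.normal() + 1j * rng.normal())
    x_hat = nearest_qpsk(y / h_est)              # equalize with the current estimate, then decide
    h_est = (1 - alpha) * h_est + alpha * (y / x_hat)   # update the channel estimate
```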

The Design of High Speed Bit and Word Processor (비트 및 워드 연산용 초고속 프로세서 설계)

  • Her, Jae-Dong;Yang, Oh
    • Proceedings of the KIEE Conference / 2002.07d / pp.2534-2536 / 2002
  • This paper presents the design of a high-speed bit and word processor for sequence logic control using an FPGA. The processor can execute a sequence instruction during the program fetch cycle because the program memory is separated from the data memory, allowing high-speed execution at a 40 MHz clock. The processor also has an instruction set of 274 instructions with a 32-bit fixed width, which reduces instruction decoding time and data memory interface time. The design was described in VHDL and synthesized for the Xilinx V600E device in a 240-pin HQFP package using the Xilinx Foundation tool, and the final simulation was successfully performed in the Foundation simulation environment. Finally, a benchmark was performed to show that the designed bit and word processor outperforms the Mitsubishi Q4A for sequence logic control.
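
A minimal sketch of why a fixed 32-bit instruction word keeps decoding fast: every field sits at a constant bit position, so decoding is just constant shifts and masks with no variable-length parsing. The field layout below is hypothetical, not the processor's actual encoding.

```python
def decode_instruction(word):
    """Split a 32-bit fixed-width instruction into fields with constant shifts and masks."""
    opcode  = (word >> 24) & 0xFF      # bits 31..24 (assumed layout)
    operand = (word >> 12) & 0xFFF     # bits 23..12
    address = word & 0xFFF             # bits 11..0
    return opcode, operand, address

print(decode_instruction(0x1A02B3C0))  # (0x1A, 0x02B, 0x3C0)
```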
