• Title/Summary/Keyword: Subword units

7 search results

Subword Neural Language Generation with Unlikelihood Training

  • Iqbal, Salahuddin Muhammad;Kang, Dae-Ki
    • International Journal of Internet, Broadcasting and Communication / v.12 no.2 / pp.45-50 / 2020
  • A neural language model is commonly trained with a likelihood loss, so that it learns the sequences of human text. State-of-the-art results have been achieved in various language generation tasks, e.g., text summarization, dialogue response generation, and text generation, by utilizing the language model's next-token output probabilities. Monotonous and repetitive outputs are a well-known problem of such models, yet only a few solutions have been proposed to address it. Several decoding techniques have been proposed to suppress repetitive tokens. Unlikelihood training approaches this problem by penalizing the probabilities of candidate tokens that have already been seen in previous steps. While the method successfully produces less repetitive generations, it has a large memory consumption because training requires a large vocabulary. We effectively reduce the memory footprint by encoding words as sequences of subword units. Finally, we report results competitive with token-level unlikelihood training in several automatic evaluations, compared to the previous work.
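The unlikelihood objective the abstract refers to can be sketched as follows; the function name, toy vocabulary, and probabilities are illustrative, not the authors' code:

```python
import math

def unlikelihood_loss(probs, target, prev_tokens, alpha=1.0):
    """Token-level unlikelihood loss for one decoding step (a sketch).
    probs maps each vocabulary token to its predicted probability;
    prev_tokens are tokens generated earlier, penalized as negative
    candidates (the current target is excluded from the penalty)."""
    # Likelihood term: negative log-probability of the gold token.
    nll = -math.log(probs[target])
    # Unlikelihood term: push down the probability of repeated tokens.
    penalty = sum(-math.log(1.0 - probs[tok])
                  for tok in set(prev_tokens) - {target})
    return nll + alpha * penalty

# Toy next-token distribution over a 4-token vocabulary.
probs = {"the": 0.5, "cat": 0.3, "sat": 0.15, "mat": 0.05}
loss_plain = unlikelihood_loss(probs, "cat", prev_tokens=[])
loss_repeat = unlikelihood_loss(probs, "cat", prev_tokens=["the"])
# Repeating "the" adds a penalty term, so loss_repeat > loss_plain.
```

Switching from word-level to subword-level tokens shrinks the vocabulary over which these probabilities are computed, which is the source of the memory savings reported above.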

Research on Subword Tokenization of Korean Neural Machine Translation and Proposal for Tokenization Method to Separate Jongsung from Syllables (한국어 인공신경망 기계번역의 서브 워드 분절 연구 및 음절 기반 종성 분리 토큰화 제안)

  • Eo, Sugyeong;Park, Chanjun;Moon, Hyeonseok;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.12 no.3 / pp.1-7 / 2021
  • Since Neural Machine Translation (NMT) uses only a limited number of words, words that are not registered in the vocabulary may appear in the input. The method proposed to alleviate this Out-of-Vocabulary (OOV) problem is subword tokenization, a methodology that constructs words by dividing sentences into subword units smaller than words. In this paper, we deal with general subword tokenization algorithms. Furthermore, in order to create a vocabulary that can handle the infinite conjugation of Korean adjectives and verbs, we propose a new methodology for subword tokenization training that separates the Jongsung (coda) from Korean syllables (each consisting of Chosung-onset, Jungsung-nucleus, and Jongsung-coda). Experimental results show that the methodology proposed in this paper outperforms the existing subword tokenization methodologies.
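The proposed jongsung separation can be illustrated with plain Unicode arithmetic over precomposed Hangul syllables (a sketch of the pre-processing step, not the paper's code):

```python
def separate_jongsung(text):
    """Split the jongsung (coda) off each Hangul syllable while
    keeping chosung+jungsung composed. A precomposed syllable is
    0xAC00 + (cho*21 + jung)*28 + jong, so the coda index is the
    code point modulo 28."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                       # precomposed Hangul syllable
            jong = code % 28
            if jong:
                out.append(chr(0xAC00 + code - jong))  # syllable minus coda
                out.append(chr(0x11A7 + jong))         # coda as trailing jamo
                continue
        out.append(ch)                              # no coda, or not Hangul
    return "".join(out)

# "한" (han) becomes "하" + trailing "ㄴ"; "어" has no coda and is unchanged.
print(separate_jongsung("한국어"))
```

Applying this before subword tokenization training lets conjugated verb and adjective endings share vocabulary entries with their stem syllables.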

Stochastic Pronunciation Lexicon Modeling for Large Vocabulary Continuous Speech Recognition (확률 발음사전을 이용한 대어휘 연속음성인식)

  • Yun, Seong-Jin;Choi, Hwan-Jin;Oh, Yung-Hwan
    • The Journal of the Acoustical Society of Korea / v.16 no.2 / pp.49-57 / 1997
  • In this paper, we propose a stochastic pronunciation lexicon model for large vocabulary continuous speech recognition. The stochastic lexicon can be regarded as an HMM: a stochastic finite-state automaton consisting of a Markov chain of subword states, where each subword state in the baseform has a probability distribution over subword units. In this method, an acoustic representation of a word can be derived automatically from sample sentence utterances and subword unit models. Additionally, the stochastic lexicon is further optimized to the subword models and the recognizer. In experiments on 3000-word continuous speech recognition, the proposed method reduces the word error rate by 23.6% and the sentence error rate by 10% compared to methods based on standard phonetic representations of words.

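The idea of a baseform as a chain of states, each holding a probability distribution over subword units, can be sketched as follows; the word, units, and probabilities are invented for illustration:

```python
import math

# A stochastic lexicon entry (a sketch): each baseform state holds a
# distribution over subword units, as would be learned from sample
# utterances. Units and probabilities here are illustrative only.
lexicon = {
    "seoul": [
        {"s": 0.9, "sh": 0.1},    # state 1: /s/ occasionally decoded as /sh/
        {"eo": 0.7, "o": 0.3},    # state 2: vowel variation
        {"ul": 0.8, "l": 0.2},    # state 3
    ]
}

def log_score(word, units):
    """Log-probability of a decoded subword-unit sequence under the
    word's stochastic baseform (left-to-right, one unit per state)."""
    states = lexicon[word]
    if len(units) != len(states):
        return float("-inf")      # length mismatch cannot align
    return sum(math.log(st.get(u, 1e-9)) for st, u in zip(states, units))

# The canonical pronunciation scores higher than a rarer variant.
print(log_score("seoul", ["s", "eo", "ul"]))
print(log_score("seoul", ["sh", "o", "l"]))
```

Because every state carries a full distribution rather than a single phone, pronunciation variation is scored probabilistically instead of requiring multiple hand-written baseforms per word.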

A Blind Segmentation Algorithm for Speaker Verification System (화자확인 시스템을 위한 분절 알고리즘)

  • 김지운;김유진;민홍기;정재호
    • The Journal of the Acoustical Society of Korea / v.19 no.3 / pp.45-50 / 2000
  • This paper proposes a delta-energy method based on Parameter Filtering (PF), a speech segmentation algorithm for a text-dependent speaker verification system over telephone lines. Our parametric filter bank adopts a variable bandwidth along with a fixed center frequency. Compared with other methods, the proposed method turns out to be very robust to channel noise and background noise. Using this method, we segment an utterance into consecutive subword units and build a model for each subword unit. In terms of EER, the speaker verification system based on a whole-word model yields 6.1%, whereas the system based on the subword model yields 4.0%, an improvement of about 2% in EER.

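A much-simplified sketch of blind segmentation on an energy contour is shown below; the paper's parametric filter bank (variable bandwidth, fixed center frequency) is replaced here by a plain moving average, and all thresholds are illustrative:

```python
def segment_by_energy(energies, window=3, floor=0.2):
    """Blind segmentation sketch: smooth a frame-energy contour and cut
    at local minima below a floor, approximating boundaries between
    subword units. Not the paper's parameter-filtering method."""
    half = window // 2
    # Moving-average smoothing of the energy contour.
    smoothed = [
        sum(energies[max(0, i - half):i + half + 1])
        / len(energies[max(0, i - half):i + half + 1])
        for i in range(len(energies))
    ]
    # A boundary is a local minimum dipping below the floor.
    return [
        i for i in range(1, len(smoothed) - 1)
        if smoothed[i] < smoothed[i - 1]
        and smoothed[i] < smoothed[i + 1]
        and smoothed[i] < floor
    ]

# Two high-energy regions separated by a low-energy dip around frame 4.
contour = [0.8, 0.9, 0.7, 0.2, 0.05, 0.2, 0.8, 0.9, 0.8]
print(segment_by_energy(contour))
```

The resulting boundaries partition the utterance into the consecutive subword units that are then modeled individually for verification.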

BERT with subword units for Korean Morphological Analysis (BERT에 기반한 Subword 단위 한국어 형태소 분석)

  • Min, Jin-Woo;Na, Seung-Hoon;Sin, Jong-Hun;Kim, Young-Kil
    • Annual Conference on Human and Language Technology / 2019.10a / pp.37-40 / 2019
  • Korean morphological analysis is the task of segmenting the eojeols (word phrases) of an input sentence into morphemes, the smallest meaning-bearing units, and attaching part-of-speech tags. Existing Korean morphological analysis has mostly been syllable-based, either treating the task as a sequential tagging problem solved with SVMs, CRFs, or Bi-LSTM-CRFs, or using transition-based models that decide morpheme boundaries at particular syllables. Recently, research exploiting context-aware language models such as BERT, pre-trained on large corpora, has attracted attention in natural language processing. In this paper, we propose a subword-based (rather than syllable-based) morphological analysis method using BERT, combined with a pre-analyzed dictionary, and obtain a morpheme-level F1 of 95.22% and an eojeol accuracy of 93.90% on the Sejong Korean morphological analysis dataset.

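The subword segmentation underlying the proposed method can be illustrated with a greedy longest-match splitter in the style of BERT's WordPiece; the toy vocabulary is invented, and this is a sketch rather than the paper's model:

```python
def wordpiece_split(word, vocab):
    """Greedy longest-match subword segmentation (WordPiece-style).
    Continuation pieces are prefixed with '##'; if no piece matches,
    the whole word maps to '[UNK]'."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece        # mark a continuation piece
            if piece in vocab:
                pieces.append(piece)
                start = end
                break
            end -= 1                        # shrink and retry
        else:
            return ["[UNK]"]                # no matching piece at all
    return pieces

# Toy vocabulary covering a conjugated verb form ("먹었다", "ate").
vocab = {"먹", "##었", "##다", "했"}
print(wordpiece_split("먹었다", vocab))
```

Tagging these subword pieces instead of individual syllables is what lets the BERT-based analyzer share representations across conjugated forms.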

Modified Phonetic Decision Tree For Continuous Speech Recognition

  • Kim, Sung-Ill;Kitazoe, Tetsuro;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea / v.17 no.4E / pp.11-16 / 1998
  • For large vocabulary speech recognition using HMMs, context-dependent subword units have often been employed. However, when context-dependent phone models are used, the resulting system has too many parameters to train. The problem of too many parameters and too little training data is crucial in the design of a statistical speech recognizer. Furthermore, when building large vocabulary speech recognition systems, the unseen-triphone problem is unavoidable. In this paper, we propose a modified phonetic decision tree algorithm for the automatic prediction of unseen triphones, whose advantages in solving these problems are demonstrated through two experiments in Japanese contexts. The baseline experiment shows that the modified tree-based clustering algorithm is effective for clustering and for reducing the number of states without any degradation in performance. The task experiment shows that the proposed algorithm also provides an automatic prediction of unseen triphones.

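The decision-tree mechanism that lets unseen triphones be predicted can be sketched as follows; the question sets and tree shape are invented for illustration:

```python
# Sketch of phonetic decision-tree state tying: yes/no questions on a
# triphone's left/right context route it to a tied state cluster, so
# even a triphone absent from training lands in some leaf.
VOWELS = {"a", "e", "i", "o", "u"}
NASALS = {"m", "n", "ng"}

def tie_state(left, center, right):
    """Route triphone left-center+right to a cluster id by asking
    questions about broad phone classes of the contexts."""
    if right in VOWELS:           # Q1: is the right context a vowel?
        return f"{center}:right-vowel"
    if left in NASALS:            # Q2: is the left context nasal?
        return f"{center}:left-nasal"
    return f"{center}:other"

# A triphone never seen in training ("ng-k+o") is still assigned a
# cluster, because the questions depend only on phone classes, not on
# the specific triphone identity.
print(tie_state("ng", "k", "o"))
```

Because leaves pool the training data of all triphones they contain, the same tree simultaneously reduces the parameter count and supplies models for unseen contexts.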

Automatic Selection of Similar Sentences for Teaching Writing in Elementary School (초등 글쓰기 교육을 위한 유사 문장 자동 선별)

  • Park, Youngki
    • Journal of The Korean Association of Information Education / v.20 no.4 / pp.333-340 / 2016
  • When elementary students write their own sentences, it is often educationally beneficial to compare them with similar sentences written by others. However, this is impractical in most classrooms, because it is burdensome for teachers to look up all of the sentences written by students. To cope with this problem, we propose a novel approach for the automatic selection of similar sentences based on a three-step process: 1) extracting subword units from the word-level sentences, 2) training a model with an encoder-decoder architecture, and 3) using an approximate k-nearest-neighbor search algorithm to find similar sentences. Experimental results show that the proposed approach achieves an accuracy of 75% on our test data.
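The retrieval step of the three-step process can be sketched with an exact nearest-neighbor search (the paper uses an *approximate* k-NN algorithm, and the vectors below are toy stand-ins for encoder outputs):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def k_nearest(query_vec, sentence_vecs, k=2):
    """Return the k sentences whose vectors are closest to the query.
    Exact brute-force search, shown for clarity; an approximate index
    would replace this in practice."""
    ranked = sorted(sentence_vecs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [sent for sent, _ in ranked[:k]]

# Toy sentence embeddings standing in for encoder outputs.
vecs = {
    "I like cats": [0.9, 0.1, 0.0],
    "I love cats": [0.8, 0.2, 0.0],
    "It is raining": [0.0, 0.1, 0.9],
}
print(k_nearest([0.85, 0.15, 0.0], vecs, k=2))
```

The subword extraction in step 1 keeps the encoder's vocabulary small enough to cover novel words in student writing, so the search space in step 3 stays dense and comparable.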