• Title/Abstract/Keyword: word context

350 search results (processing time: 0.027 s)

소음환경에서 표적단어의 예상도가 조절된 한국어의 문장검사목록개발 시안 (Development of a Test of Korean Speech Intelligibility in Noise (KSPIN) Using Sentence Materials with Controlled Word Predictability)

  • 김진숙;배소영;이정학
    • 음성과학 / Vol. 7, No. 2 / pp.37-50 / 2000
  • This paper describes a test of everyday speech understanding ability in which a listener's use of the context-situational information of speech is assessed and compared with the use of acoustic-phonetic information. The test items are sentences presented in a babble-type noise, and the listener's response is the key word in the sentence. The key words are always two-syllable nouns, and question sentences are added to elicit the key-word responses. Two types of sentences are used: high-predictability sentences, for which the key word is somewhat predictable from the context, and low-predictability sentences, for which the key word cannot be predicted from the context. Both types are included in six 40-item forms of the test, which are balanced for intelligibility, key-word familiarity and predictability, phonetic content, and length. The performance of normally hearing listeners shows significantly different functions across signal-to-noise ratios. Potential applications of this test, particularly in assessing speech understanding ability in the hearing impaired, are discussed.
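
Since the test presents sentences in babble noise at several signal-to-noise ratios, a minimal sketch of mixing speech with babble at a target SNR may be useful. This is an illustration assuming NumPy arrays at the same sample rate, not the procedure used in the paper.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, babble: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the babble so the speech-to-noise power ratio equals snr_db, then mix."""
    babble = babble[:len(speech)]            # trim noise to the speech length
    p_speech = np.mean(speech ** 2)          # average speech power
    p_babble = np.mean(babble ** 2)          # average babble power
    # Choose gain g so that 10*log10(p_speech / (g**2 * p_babble)) == snr_db.
    gain = np.sqrt(p_speech / (p_babble * 10 ** (snr_db / 10)))
    return speech + gain * babble
```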


Document Summarization Model Based on General Context in RNN

  • Kim, Heechan;Lee, Soowon
    • Journal of Information Processing Systems / Vol. 15, No. 6 / pp.1378-1391 / 2019
  • In recent years, automatic document summarization has been widely studied in the field of natural language processing, thanks to remarkable advances in deep learning models. To decode a word, existing abstractive summarization models usually represent the context of a document as a weighted sum of the hidden states of the input words. Because the weights change at each decoding step, they reflect only the local context of the document, which makes it difficult to generate a summary that reflects its overall context. To solve this problem, we introduce the notion of a general context and propose a summarization model based on it. The general context reflects the overall context of the document and is independent of each decoding step. Experimental results on the CNN/Daily Mail dataset show that the proposed model outperforms existing models.
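
To make the distinction concrete, the sketch below contrasts a step-dependent attention context with a step-independent "general" context. Mean pooling here is a stand-in assumption, not the authors' exact general-context computation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def local_context(enc_states, dec_state):
    """Attention context: weights depend on the decoder state, so they change every step."""
    alpha = softmax(enc_states @ dec_state)   # (T,) attention weights for this decoding step
    return alpha @ enc_states                 # (H,) weighted sum of encoder hidden states

def general_context(enc_states):
    """Step-independent summary of the whole document (mean pooling as a stand-in)."""
    return enc_states.mean(axis=0)            # (H,) the same vector at every decoding step
```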

대학생들을 위한 효과적인 어휘지도법 연구: 어형성 과정을 이용한 어휘지도법, 의미 있는 어휘를 이용한 어휘지도법, 문맥을 이용한 어휘지도법 (A Study of the Effective Methods of Vocabulary Teaching: The Methods of Teaching Vocabulary Through the Process of Word Formation, Meaningful Words and Context)

  • 편무태
    • 한국영어학회지:영어학 / Vol. 3, No. 4 / pp.611-635 / 2003
  • The main purpose of this study is to determine, through experiments with college students, which vocabulary teaching method is most effective; to that end, it reviews several methods of vocabulary teaching. According to the experimental results, teaching vocabulary through word formation and through meaningful words led to high posttest scores regardless of the individual subjects' pretest scores. With teaching through context, however, posttest improvement generally reflected individual differences in pretest scores; that is, subjects who scored high on the pretest also did very well on the posttest. In conclusion, judging from the mean rate of improvement, teaching vocabulary through word formation appears to be more effective than teaching through meaningful words or through context.


한국어 연속음성 인식을 위한 단어 결합 모델링에 관한 연구 (A Study on Word Juncture Modeling for Continuous Speech Recognition of Korean Language)

  • 최인정;은종관
    • 한국음향학회지 / Vol. 13, No. 5 / pp.24-31 / 1994
  • This paper studies continuous speech recognition of Korean using phonetic models of word-juncture coarticulation. To reduce the performance degradation caused by coarticulation, context-dependent (CD) units are used that model transitions between words as well as transitions within words. In all cases, the first phoneme of each word is specified by the last phonemes of all words that may precede it, and the last phoneme of each word is specified in a similar way. To improve the robustness of the hidden Markov model (HMM) parameters, the covariance matrices are smoothed. Position-dependent units are also used to improve the discriminability between speech units. Experimental results show that using the improved juncture models considerably improves performance over a baseline recognition system that uses only intra-word units.
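
As an illustration of cross-word context-dependent units, the sketch below expands a word sequence into triphone-style labels whose left and right contexts cross word boundaries. The label notation and phone_dict are hypothetical, not the paper's actual unit inventory.

```python
def cd_unit(left, phone, right):
    """Triphone-style context-dependent label: left context, base phone, right context."""
    return f"{left}-{phone}+{right}"

def cross_word_units(words, phone_dict):
    """Expand a word sequence into CD units whose contexts cross word boundaries."""
    phones = [p for w in words for p in phone_dict[w]]   # flatten words into one phone string
    padded = ["sil"] + phones + ["sil"]                  # silence context at utterance edges
    return [cd_unit(padded[i - 1], padded[i], padded[i + 1])
            for i in range(1, len(padded) - 1)]
```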


단어 중의성 해소를 위한 지도학습 방법의 통계적 자질선정에 관한 연구 (A Study on Statistical Feature Selection with Supervised Learning for Word Sense Disambiguation)

  • 이용구
    • 한국비블리아학회지 / Vol. 22, No. 2 / pp.5-25 / 2011
  • This study sought to identify the statistical feature selection methods and the context window sizes that yield optimal performance for word sense disambiguation (WSD) with supervised learning. Information gain, the chi-square statistic, document frequency, and a relevance function were applied as feature selection criteria to a test collection of Korean newspaper articles. The results showed that, as in text categorization, feature selection is a very useful tool for WSD. Among the criteria tested, information gain performed best. The SVM classifier performed better as the feature set and context grew larger, and was thus largely unaffected by feature selection. The naive Bayes classifier performed best with a feature set of about 10% of the features; kNN performed best with 10% or fewer. When applying feature selection for WSD, performance can be maximized by combining a small feature set with a large context, or conversely a large feature set with a small context.
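
A minimal sketch of the best-performing criterion: selecting context features by mutual information, the information-gain analogue available in scikit-learn. The toy contexts and sense labels are hypothetical.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Hypothetical contexts of an ambiguous word, labeled with its intended sense.
contexts = ["river bank flooded badly", "bank loan interest rate",
            "fishing on the muddy bank", "open a bank account today"]
senses = [0, 1, 0, 1]

X = CountVectorizer().fit_transform(contexts)              # bag-of-words context features
selector = SelectKBest(mutual_info_classif, k=5).fit(X, senses)
X_reduced = selector.transform(X)                          # keep the 5 most informative features
```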

영작문 자동 채점 시스템을 위한 문맥 고려 단어 오류 검사기 (Context-sensitive Word Error Detection and Correction for Automatic Scoring System of English Writing)

  • 최용석;이공주
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 4, No. 1 / pp.45-56 / 2015
  • This study proposes methods for detecting, and generating correction candidates for, word errors that can be recognized only by considering context. This problem has already been studied extensively for English; here we propose a method specialized for use in an automatic scoring system for English writing. Context-sensitive word error checking uses sets of frequently confused words (confusion sets). To reflect the writing characteristics of non-native users, we automatically built additional confusion sets beyond those constructed in previous English-language work and experimented with them. We also define errors that existing syntax checkers cannot handle because of part-of-speech ambiguity, and propose methods for detecting them and generating correction candidates. Evaluated on English sentences written by test takers whose native language is Korean and whose writing is at a beginner or intermediate level, the method achieved an F1 score of about 70.48%, comparable to results reported for English-language systems.
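
A minimal sketch of confusion-set based checking under an assumed trigram count model: for a word that belongs to a confusion set, each alternative is scored by the frequency of the trigram it would form in context. The sets and counts shown are illustrative, not those built in the paper.

```python
from collections import Counter

CONFUSION_SETS = [{"their", "there", "they're"}, {"affect", "effect"}]  # illustrative sets

def check_word(words, i, trigram_counts: Counter):
    """Return the confusable alternative that best fits the surrounding context."""
    word = words[i]
    cset = next((s for s in CONFUSION_SETS if word in s), None)
    if cset is None:
        return word                                   # not a confusable word; leave it alone
    left = words[i - 1] if i > 0 else "<s>"
    right = words[i + 1] if i + 1 < len(words) else "</s>"
    best = max(cset, key=lambda c: trigram_counts[(left, c, right)])
    return best                                       # if best != word, flag a candidate error
```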

음성인식에서 문맥의존 음향모델의 성능향상을 위한 유사음소단위에 관한 연구 (A Study on Phoneme Likely Units to Improve the Performance of Context-dependent Acoustic Models in Speech Recognition)

  • 임영춘;오세진;김광동;노덕규;송민규;정현열
    • 한국음향학회지 / Vol. 22, No. 5 / pp.388-402 / 2003
  • In this paper, we carried out word, four-continuous-digit, continuous-speech, and task-independent word recognition experiments to verify the effectiveness of re-defined phoneme-like units (PLUs) for phonetic decision tree based HM-Net (Hidden Markov Network) context-dependent (CD) acoustic modeling of Korean. In the 48-PLU set, the phonemes /ㅂ/, /ㄷ/, /ㄱ/ are separated into initial, medial, and final variants, and the consonants /ㄹ/, /ㅈ/, /ㅎ/ into initial and final variants, according to their position in the syllable, word, and sentence. In this paper, therefore, we re-define 39 PLUs by unifying each phoneme that the 48-PLU set separates into initial, medial, and final variants, in order to construct CD acoustic models effectively. In word recognition experiments with context-independent (CI) acoustic models, the 48 PLUs gave on average 7.06% higher recognition accuracy than the 39 PLUs. In speaker-independent word recognition experiments with CD acoustic models, however, the 39 PLUs gave on average 0.61% better accuracy than the 48 PLUs. In the four-continuous-digit recognition experiments, which involve liaison phenomena, the 39 PLUs also gave on average 6.55% higher accuracy, and in continuous speech recognition experiments 15.08% better accuracy than the 48 PLUs. Finally, in the task-independent word recognition experiments with unknown contextual factors, although both sets showed lower accuracy, the 39 PLUs performed on average 1.17% better than the 48 PLUs. These experiments verify the effectiveness of the re-defined 39 PLUs, compared with the 48 PLUs, for constructing the CD acoustic models in this paper.
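
The re-definition amounts to collapsing position-dependent variants of a phoneme onto a single label. A minimal sketch with a hypothetical romanized label scheme follows; the actual 48/39-PLU inventories are not reproduced here.

```python
# Hypothetical romanized labels: collapse initial/medial/final variants of a
# phoneme in the 48-PLU set onto one unified label in the 39-PLU set.
UNIFY_48_TO_39 = {
    "b_i": "b", "b_m": "b", "b_f": "b",   # /ㅂ/ initial/medial/final -> one unit
    "d_i": "d", "d_m": "d", "d_f": "d",   # /ㄷ/
    "g_i": "g", "g_m": "g", "g_f": "g",   # /ㄱ/
}

def to_39plu(labels):
    """Rewrite a 48-PLU transcription in terms of the unified 39-PLU inventory."""
    return [UNIFY_48_TO_39.get(label, label) for label in labels]
```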

Sentence model based subword embeddings for a dialog system

  • Chung, Euisok;Kim, Hyun Woo;Song, Hwa Jeon
    • ETRI Journal / Vol. 44, No. 4 / pp.599-612 / 2022
  • This study focuses on improving a word embedding model to enhance the performance of downstream tasks, such as those of dialog systems. To improve traditional word embedding models, such as skip-gram, it is critical to refine the word features and expand the context model. In this paper, we approach the word model from the perspective of subword embedding and attempt to extend the context model by integrating various sentence models. Our proposed sentence model is a subword-based skip-thought model that integrates self-attention and relative position encoding techniques. We also propose a clustering-based dialog model for downstream task verification and evaluate its relationship with the sentence-model-based subword embedding technique. The proposed subword embedding method produces better results than previous methods in evaluating word and sentence similarity. In addition, the downstream task verification, a clustering-based dialog system, demonstrates an improvement of up to 4.86% over the results of FastText in previous research.
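
The subword view can be illustrated with character n-grams, as in FastText-style models: a word vector is composed from its subword vectors, so rare or unseen words still receive representations. This sketch assumes a pre-trained subword_vecs lookup table; it is not the paper's full skip-thought sentence model.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=5):
    """Character n-grams with boundary markers, the usual subword features."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1) for i in range(len(w) - n + 1)]

def word_vector(word, subword_vecs, dim=100):
    """Compose a word vector as the mean of its known subword vectors."""
    vecs = [subword_vecs[g] for g in char_ngrams(word) if g in subword_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)
```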

Empirical Comparison of Word Similarity Measures Based on Co-Occurrence, Context, and a Vector Space Model

  • Kadowaki, Natsuki;Kishida, Kazuaki
    • Journal of Information Science Theory and Practice / Vol. 8, No. 2 / pp.6-17 / 2020
  • Word similarity is often measured to enhance system performance in the information retrieval field and other related areas. This paper reports an experimental comparison of word similarity measures computed for 50 intentionally selected words from a Reuters corpus. Three kinds of measures were compared: (1) co-occurrence-based similarity measures (for which co-occurrence frequency is counted as the number of documents or sentences), (2) context-based distributional similarity measures obtained from latent Dirichlet allocation (LDA), nonnegative matrix factorization (NMF), and the Word2Vec algorithm, and (3) similarity measures computed from the tf-idf weights of each word according to a vector space model (VSM). The Pearson correlation coefficient was highest between the VSM-based similarity measures and the co-occurrence-based measures that count co-occurrence by the number of documents. Group-average agglomerative hierarchical clustering was also applied to the similarity matrices computed by the individual measures. An evaluation of the cluster sets against an answer set revealed that the VSM- and LDA-based similarity measures performed best.
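
A minimal sketch of the VSM-based measure: each word is represented by its tf-idf weights across documents (a column of the term-document matrix), and word pairs are compared by cosine similarity. The corpus below is a toy stand-in for the Reuters sample.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["stock market prices rose", "market prices fell sharply", "heavy rain fell on the city"]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)            # documents x terms tf-idf matrix
word_sim = cosine_similarity(X.T)        # terms x terms cosine similarity of word vectors
terms = tfidf.get_feature_names_out()    # row/column labels of word_sim
```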

자연어 처리 기법을 활용한 산업재해 위험요인 구조화 (Structuring Risk Factors of Industrial Incidents Using Natural Language Process)

  • 강성식;장성록;이종빈;서용윤
    • 한국안전학회지 / Vol. 36, No. 1 / pp.56-63 / 2021
  • The narrative texts of industrial accident reports help to identify accident risk factors: they relate the triggers of an accident to the sequence of events and the outcomes. In particular, a set of related keywords in the context of a narrative can represent how the accident proceeded. Previous text analytics studies on structuring accident reports have been limited to extracting individual keywords without context. To remedy this shortcoming, we propose a context-based analysis using a Natural Language Processing (NLP) algorithm. This study applies Word2Vec, a neural network algorithm that performs word embedding, to extract adjacent keywords. During processing, Word2Vec is trained with adjacent keywords from the narrative texts as inputs for its supervised learning; the keyword weights emerge as vectors representing the degree of neighboring among keywords. Similar keyword weights mean that the keywords are closely arranged within sentences in the narrative text; consequently, a set of keywords with similar weights represents similar accidents. We extracted ten accident processes containing related keywords and used them to understand the risk factors that determine how an accident proceeds. This information helps identify how a checklist for an accident report should be structured.
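
A minimal sketch of the Word2Vec step using gensim's skip-gram implementation; the tokenized accident narratives below are hypothetical, not the study's data.

```python
from gensim.models import Word2Vec

# Hypothetical tokenized accident narratives (one keyword sequence per report).
reports = [
    ["ladder", "slip", "fall", "fracture"],
    ["forklift", "collision", "pedestrian", "bruise"],
    ["ladder", "fall", "head", "injury"],
]

# Skip-gram Word2Vec: keywords that appear in similar contexts get similar vectors.
model = Word2Vec(reports, vector_size=50, window=2, min_count=1, sg=1, epochs=200)
print(model.wv.most_similar("fall", topn=3))   # keywords most closely arranged around "fall"
```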