• Title/Summary/Keyword: 동음이의어 (homonym)

Semantic Priming Effect of Korean Lexical Ambiguity: A Comparison of Homonymy and Polysemy (한국어의 어휘적 중의성의 의미점화효과: 동음이의어와 다의어의 비교)

  • Yu, Gi-Soon; Nam, Ki-Chun
    • Phonetics and Speech Sciences / v.1 no.2 / pp.63-73 / 2009
  • The present study explored how the processing of lexical ambiguity differs between homonymy and polysemy, and whether the two types of ambiguity are represented separately in the mental lexicon, using a semantic priming paradigm. Homonymy was used in Experiment 1 (M1 denotes one literal meaning of '사과', i.e. apple, and M2 another literal meaning, i.e. apology), and polysemy was used in Experiment 2 (M1 denotes the literal meaning of '바람', i.e. wind, and M2 its figurative meaning, i.e. wantonness). Both experiments showed a significant semantic priming effect regardless of the type of ambiguity (homonymy or polysemy) and the difference in their semantic processes. However, the semantic priming effect for polysemy was larger than that for homonymy. This result supports the hypothesis that the semantic processing of homonymy differs from that of polysemy.


Lexical Disambiguation for Intonation Synthesis : A CCG Approach (억양 합성을 위한 어휘 중의성 해소 : 결합범주문법을 통한 접근)

  • Lee, Ho-Joon; Park, Jong-Chul
    • Proceedings of the Korean Society for Language and Information Conference / 2005.06a / pp.103-118 / 2005
  • With the rapid development of IT, new forms of information delivery keep appearing, and awareness of correct Korean pronunciation is steadily weakening. In particular, the distinction between long and short vowels has reached the point where even speech professionals cannot make it accurately. This paper examines vowel-length phenomena in Korean nouns on the basis of their relations with surrounding words, and discusses how to distinguish the long and short readings of homonymous nouns that are pronounced differently, focusing on the relations between a noun and its modifiers and between a noun and its predicate. The analyzed results are expressed with Combinatory Categorial Grammar (CCG), and the speech synthesis process with lexical ambiguity resolved is described in standardized SSML (Speech Synthesis Markup Language). (An illustrative sketch of the SSML output step follows this entry.)

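A minimal illustration of the final step described above, not of the paper's CCG analysis itself: once the long or short reading of a homonymous noun has been chosen from its modifier or predicate, the choice can be recorded in SSML with a phoneme tag. The predicate-keyed rule, the example noun '눈' (snow, long vowel vs. eye, short vowel), and the IPA transcriptions below are simplified assumptions rather than material from the paper.

```python
# Toy predicate-based rule for the length-ambiguous homonym '눈' (snow vs. eye),
# followed by SSML output for the chosen reading. Transcriptions are assumed.
SENSE_TABLE = {
    ("눈", "내리다"): "nuːn",  # 눈(snow): long vowel, collocates with '내리다' (to fall)
    ("눈", "감다"): "nun",     # 눈(eye): short vowel, collocates with '감다' (to close)
}

def to_ssml(noun: str, predicate: str) -> str:
    """Emit an SSML fragment whose phoneme tag fixes the vowel length."""
    phoneme = SENSE_TABLE.get((noun, predicate), noun)
    return (f'<speak><phoneme alphabet="ipa" ph="{phoneme}">{noun}</phoneme> '
            f'{predicate}</speak>')

print(to_ssml("눈", "내리다"))
# <speak><phoneme alphabet="ipa" ph="nuːn">눈</phoneme> 내리다</speak>
```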

Procedures and Problems in Compiling a Disambiguated Tagged Corpus (어휘의미분석 말뭉치 구축의 절차와 문제)

  • Shin, Chi-Hyon; Choi, Min-Woo; Kang, Beom-Mo
    • Annual Conference on Human and Language Technology / 2001.10d / pp.479-486 / 2001
  • One way to efficiently discriminate the different meanings of homonyms is to use a sense-tagged corpus. Built on top of a morphologically analyzed corpus, which resolves ambiguity at the part-of-speech level, such a corpus resolves the lexical ambiguity that cannot be handled at that stage, and it is an important language resource for more precise linguistic research and for developing natural language processing technology such as word sense disambiguation. As groundwork for actually constructing a sense-tagged corpus, this study examines its design and various issues in the construction methodology, points out the distributional characteristics of ambiguous words and the problems that can arise at the sense disambiguation stage, and explores ways to solve them. (A sketch of a possible record format is given after this entry.)

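To make the idea of a sense-tagged corpus concrete, the sketch below shows one possible record layout in which a sense number is attached on top of a morphological analysis. The field names and sense numbers are illustrative assumptions (loosely inspired by Sejong-style sense tagging), not the design the authors adopted.

```python
from dataclasses import dataclass

@dataclass
class SenseTaggedToken:
    surface: str   # word form as it appears in the text
    lemma: str     # base form
    pos: str       # part-of-speech tag (resolves POS-level ambiguity)
    sense_id: int  # sense number (resolves word-level ambiguity)

# '사과' as fruit (sense 01) vs. apology (sense 02); the numbers are made up here.
tokens = [
    SenseTaggedToken("사과를", "사과", "NNG", 1),  # "... ate an apple"
    SenseTaggedToken("사과를", "사과", "NNG", 2),  # "... demanded an apology"
]

for t in tokens:
    print(f"{t.surface}\t{t.lemma}__{t.sense_id:02d}/{t.pos}")
# 사과를    사과__01/NNG
# 사과를    사과__02/NNG
```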

A Study on Resolving Word Sense Ambiguity Using Mutual Information (상호 정보를 이용한 어의 모호성 해소에 관한 연구)

  • Jeon, Mee-Sun; Park, Se-Young
    • Annual Conference on Human and Language Technology / 1994.11a / pp.369-373 / 1994
  • The accuracy of an information retrieval system depends on the accuracy of its index terms and of query interpretation. In Korean information retrieval, considering the characteristics of Korean is especially important. For words that carry word sense ambiguity during document indexing and query interpretation, accurate indexing and query interpretation that resolve the ambiguity are prerequisites for retrieving the right documents. This paper deals with methods for resolving the word sense ambiguity caused by homonyms during Korean document indexing, proposes the use of semantically related information, and presents experimental results supporting the approach. (A toy mutual-information scoring sketch follows this entry.)

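The kind of co-occurrence statistic this approach relies on can be pictured as follows: pointwise mutual information between each sense of a homonym and the words in its context, estimated from a handful of sense-labeled examples. The toy data for '배' (ship vs. pear) and the scoring details are assumptions, not the authors' implementation.

```python
import math
from collections import Counter

# Toy sense-labeled contexts for the homonym '배' (ship vs. pear); illustrative only.
labeled = [
    ("pear", ["과일", "먹다", "달다"]),
    ("ship", ["항구", "타다", "바다"]),
    ("pear", ["시장", "사다", "과일"]),
    ("ship", ["바다", "떠나다", "항구"]),
]

pairs = [(s, w) for s, ws in labeled for w in ws]
N = len(pairs)
pair_counts = Counter(pairs)
sense_counts = Counter(s for s, _ in pairs)
word_counts = Counter(w for _, w in pairs)

def pmi(sense: str, word: str) -> float:
    """Pointwise mutual information between a sense and a context word."""
    p_joint = pair_counts[(sense, word)] / N
    if p_joint == 0.0:
        return float("-inf")
    return math.log2(p_joint / ((sense_counts[sense] / N) * (word_counts[word] / N)))

def disambiguate(context: list[str]) -> str:
    """Choose the sense whose context words carry the largest total positive PMI."""
    return max(sense_counts, key=lambda s: sum(max(pmi(s, w), 0.0) for w in context))

print(disambiguate(["바다", "타다"]))  # -> ship
```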

Korean Co-reference Resolution using BERT with Surfaceform (표층형을 이용한 BERT 기반 한국어 상호참조해결)

  • Heo, Cheolhun; Kim, Kuntae; Choi, Key-sun
    • Annual Conference on Human and Language Technology / 2019.10a / pp.67-70 / 2019
  • Co-reference resolution is the task of linking mentions that refer to the same entity within a natural language document. Resolving the co-reference of mentions such as pronouns, demonstrative determiners, abbreviations, and homonyms can improve the performance of various natural language processing tasks. In this paper, we apply a Korean dataset to a BERT-based co-reference resolution model that currently performs well for English, and add rules based on surface forms. We compare the performance of our model with that of existing models. Unlike previous studies, the model achieves a precision of 73.59%, a recall of 71.1%, and a CoNLL F1-score of 72.31% with few features. An analysis of the models' outputs confirms that the BERT-based model captures contextual information better than existing deep learning models that use a variety of features. (The surface-form rule is sketched, in simplified form, after this entry.)

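The surface-form rule mentioned above can be pictured as a post-processing step over the model's predicted clusters: clusters that share an identical mention string are merged. The sketch below is an assumed simplification, not the rule set used in the paper.

```python
def merge_by_surface_form(clusters: list[list[str]]) -> list[list[str]]:
    """Merge predicted coreference clusters that share an identical mention string."""
    merged: list[list[str]] = []
    for cluster in clusters:
        target = None
        for m in merged:
            if set(cluster) & set(m):  # the same surface form appears in both clusters
                target = m
                break
        if target is None:
            merged.append(list(cluster))
        else:
            target.extend(x for x in cluster if x not in target)
    return merged

# Two clusters both containing the surface form "대전" are merged into one.
pred = [["대전", "그 도시"], ["대전", "이곳"], ["김 교수", "그"]]
print(merge_by_surface_form(pred))
# [['대전', '그 도시', '이곳'], ['김 교수', '그']]
```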

Lexico-semantic interactions during the visual and spoken recognition of homonymous Korean Eojeols (한국어 시·청각 동음동철이의 어절 재인에 나타나는 어휘-의미 상호작용)

  • Kim, Joonwoo; Kang, Kathleen Gwi-Young; Yoo, Doyoung; Jeon, Inseo; Kim, Hyun Kyung; Nam, Hyeomin; Shin, Jiyoung; Nam, Kichun
    • Phonetics and Speech Sciences / v.13 no.1 / pp.1-15 / 2021
  • The present study investigated the mental representation and processing of an ambiguous word in the bimodal processing system by manipulating the lexical ambiguity of a visually or auditorily presented word. Homonyms (e.g., '물었다') with two or more meanings and control words (e.g., '고통을') with a single meaning were used in the experiments. The lemma frequency of words was manipulated while the relative frequency of multiple meanings of each homonym was balanced. In both experiments using the lexical decision task, a robust frequency effect and a critical interaction of word type by frequency were found. In Experiment 1, spoken homonyms yielded faster latencies relative to control words (i.e., ambiguity advantage) in the low frequency condition, while ambiguity disadvantage was found in the high frequency condition. A similar interactive pattern was found in visually presented homonyms in the subsequent Experiment 2. Taken together, the first key finding is that interdependent lexico-semantic processing can be found in both the visual and auditory processing systems, which in turn suggests that semantic processing is not modality dependent, but rather takes place on the basis of general lexical knowledge. The second is that multiple semantic candidates provide facilitative feedback only when the lemma frequency of the word is relatively low.

Word Sense Disambiguation based on Concept Learning with a focus on the Lowest Frequency Words (저빈도어를 고려한 개념학습 기반 의미 중의성 해소)

  • Kim, Dong-Sung; Choe, Jae-Woong
    • Language and Information / v.10 no.1 / pp.21-46 / 2006
  • This study proposes a Word Sense Disambiguation (WSD) algorithm based on concept learning, with special emphasis on statistically meaningful lowest-frequency words. Previous work on WSD typically makes use of collocation frequencies and their probabilities. Such probability-based WSD approaches tend to ignore the lowest-frequency words, which can nevertheless be meaningful in context. In this paper, we show an algorithm to extract and make use of these meaningful lowest-frequency words in WSD. The learning method is adopted from the Find-Specific algorithm of Mitchell (1997), in which the search proceeds from specific predefined hypothesis spaces to more general ones. In our model, this algorithm is used to find contexts with the most specific classifiers before moving to more general ones. We build up small seed data and apply them to relatively large test data. Following the algorithm in Yarowsky (1995), the classified test data are exhaustively added to the seed data, thus expanding it. However, this may introduce considerable noise into the seed data, so we introduce the 'maximum a posteriori hypothesis', based on Bayesian assumptions, to validate the noise status of the newly added seed data. We use a Naive Bayes classifier and show that applying the Find-Specific algorithm enhances the correctness of WSD. (A compressed sketch of the bootstrapping loop follows this entry.)

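A compressed sketch of the bootstrapping loop described above: a Naive Bayes sense classifier is trained on a small seed set, its confident predictions on unlabeled data are folded back into the seeds, and the cycle repeats. The Find-Specific generalization step and the MAP noise filter are omitted, and the toy data and confidence threshold are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny seed set for the homonym '말' (horse vs. language); illustrative only.
seed_texts = ["말 을 타고 달리다", "말 이 풀 을 먹다", "말 을 배우다", "외국 말 을 하다"]
seed_labels = ["horse", "horse", "language", "language"]
unlabeled = ["말 이 빠르게 달리다", "외국 말 을 공부하다"]

vec = CountVectorizer()
for _ in range(3):                                   # bootstrapping iterations
    X = vec.fit_transform(seed_texts)
    clf = MultinomialNB().fit(X, seed_labels)
    probs = clf.predict_proba(vec.transform(unlabeled))
    confident = np.max(probs, axis=1) > 0.6          # assumed confidence threshold
    for i, is_conf in enumerate(confident):
        if is_conf and unlabeled[i] not in seed_texts:
            seed_texts.append(unlabeled[i])          # expand the seed data
            seed_labels.append(clf.classes_[np.argmax(probs[i])])

print(list(zip(seed_texts[4:], seed_labels[4:])))    # newly absorbed examples
```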

Document Clustering using Clustering and Wikipedia (군집과 위키피디아를 이용한 문서군집)

  • Park, Sun; Lee, Seong Ho; Park, Hee Man; Kim, Won Ju; Kim, Dong Jin; Chandra, Abel; Lee, Seong Ro
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2012.10a / pp.392-393 / 2012
  • This paper proposes a new document clustering method using clustering and Wikipedia. The proposed method represents the concepts of cluster topics well by means of NMF (non-negative matrix factorization). It addresses the "bag of words" problem, in which meaningful relationships between documents and clusters are not considered, by expanding the important terms of each cluster with synonyms from Wikipedia. The experimental results demonstrate that the proposed method achieves better performance than other document clustering methods. (A bare-bones NMF clustering sketch follows this entry.)

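A bare-bones sketch of the NMF step described above: documents are factorized over TF-IDF features and each document is assigned to the topic with its largest weight. The Wikipedia synonym expansion is only indicated by a comment; the toy corpus and cluster count are assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "apple banana fruit juice",
    "fruit market apple price",
    "soccer football match goal",
    "goal keeper football league",
]  # toy corpus; real input would first be expanded with Wikipedia synonyms

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)

nmf = NMF(n_components=2, init="nndsvd", random_state=0)  # 2 assumed clusters
W = nmf.fit_transform(X)   # document-topic weights
H = nmf.components_        # topic-term weights

labels = W.argmax(axis=1)  # assign each document to its strongest topic
terms = np.array(tfidf.get_feature_names_out())
for k in range(2):
    top_terms = terms[H[k].argsort()[::-1][:3]]
    print(f"cluster {k}: docs {np.where(labels == k)[0].tolist()}, "
          f"top terms {top_terms.tolist()}")
```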

Effects of Conceptual Context on Implicit Memory (의미적 맥락에 대한 처리가 암묵기억에 미치는 영향)

  • 연은경; 김민식
    • Korean Journal of Cognitive Science / v.13 no.4 / pp.9-21 / 2002
  • Four experiments were conducted to examine whether maintaining the same conceptual context across study and test would affect performance on a perceptual implicit memory task. The sense-specific theory of priming (Lewandowsky et al., 1989) predicts greater priming when the conceptual context matches across study and test than when it is mismatched, whereas the transfer-appropriate-processing view (e.g., Blaxton, 1989) predicts no difference. In Experiments 1 and 2, little or no effect of varying context was observed on an implicit task. In Experiments 3 and 4, a process-dissociation procedure (proposed by Jacoby, 1991) was used to separate automatic influences from consciously controlled influences in implicit memory, which was measured with a Korean word completion task. The results showed that a conceptual context effect was observed in the consciously controlled component of implicit memory. These results suggest that only the consciously controlled component of implicit memory is sensitive to conceptual context. (The standard process-dissociation equations are given after this entry.)

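For reference, the process-dissociation logic mentioned above is conventionally written as two equations from which the controlled (R) and automatic (A) contributions are estimated; this is the standard Jacoby (1991) formulation and may differ in detail from the variant used in the study.

```latex
P(\mathrm{inclusion}) = R + (1 - R)\,A, \qquad
P(\mathrm{exclusion}) = (1 - R)\,A
\;\Longrightarrow\;
R = P(\mathrm{inclusion}) - P(\mathrm{exclusion}), \qquad
A = \frac{P(\mathrm{exclusion})}{1 - R}
```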

Emotion Analysis Using a Bidirectional LSTM for Word Sense Disambiguation (양방향 LSTM을 적용한 단어의미 중의성 해소 감정분석)

  • Ki, Ho-Yeon; Shin, Kyung-shik
    • The Journal of Bigdata / v.5 no.1 / pp.197-208 / 2020
  • Lexical ambiguity means that a word can be interpreted as having two or more meanings, as with homonymy and polysemy, and word sense ambiguity is common among words expressing emotion. Because they project human psychology, these words convey specific and rich contexts, which gives rise to lexical ambiguity. In this study, we propose an emotion classification model that disambiguates word senses using a bidirectional LSTM. It is based on the assumption that if the information of the surrounding context is fully reflected, the problem of lexical ambiguity can be solved and the emotion a sentence expresses can be resolved to a single label. A bidirectional LSTM is an algorithm frequently used in natural language processing research that requires contextual information, and it is used in this study to learn context. GloVe embeddings are used as the embedding layer of the model, and its performance was verified against models using LSTM and RNN algorithms. Such a framework could contribute to various fields, including marketing, by connecting the emotions of SNS users to their desire for consumption. (A minimal model sketch follows this entry.)
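
A minimal sketch, assuming a Keras implementation, of the architecture described above: a frozen embedding layer standing in for GloVe vectors, a bidirectional LSTM to capture context from both directions, and a softmax layer over emotion classes. The layer sizes and number of classes are assumptions, not the authors' configuration.

```python
import numpy as np
from tensorflow.keras import initializers, layers, models

VOCAB_SIZE, EMB_DIM, N_CLASSES = 20000, 100, 5  # assumed sizes, not from the paper

# In practice this matrix would be filled with pretrained GloVe vectors;
# random values stand in here.
glove_matrix = np.random.rand(VOCAB_SIZE, EMB_DIM).astype("float32")

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, EMB_DIM,
                     embeddings_initializer=initializers.Constant(glove_matrix),
                     trainable=False),              # frozen GloVe embeddings
    layers.Bidirectional(layers.LSTM(64)),          # context from both directions
    layers.Dense(N_CLASSES, activation="softmax"),  # emotion classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```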