• Title/Abstract/Keyword: lexical information

Search results: 324 (processing time: 0.023 s)

Lexical and Semantic Incongruities between the Lexicons of English and Korean

  • Lee, Yae-Sheik
    • Language and Information, Vol. 5, No. 2, pp. 21-37, 2001
  • Pustejovsky (1995) rekindled debate on the dual problems of how to represent lexical meaning and what information is to be encoded in a lexicon. These are important issues for natural language processing tasks such as machine translation. When a lexical-conceptual mismatch occurs in translating corresponding words between two languages, the appropriate representation of their meanings is crucial. This paper proposes a new formalism for representing lexical entries, beginning with an analysis of observable mismatches in comparable pairs of nouns, verbs, and adjectives in English and Korean; inherent mis-interpretations and mis-readings in each pair are identified. Concept theories such as those presented by Ganter and Wille (1996) and Priss (1998) are then extended to reflect the cognitivist view that meaning resides in concepts, and to incorporate the propositions of the so-called 'multiple inheritance' system. An alternative to the formalisms of Pustejovsky (1995) and Pollard & Sag (1994) is then proposed. Finally, representative examples of lexical mismatches are analysed using the new model.

  • PDF

A Study of the Interface between Korean Sentence Parsing and Lexical Information

  • 최병진
    • Language and Information, Vol. 4, No. 2, pp. 55-68, 2000
  • The efficiency and stability of an NLP system depend crucially on how its lexicon is organized. The lexicon ought to encode linguistic generalizations and their exceptions. Nowadays many computational linguists construct such lexical information in an inheritance hierarchy, and DATR is well suited for this purpose. In this research I construct a DATR lexicon in order to parse Korean sentences with QPATR, a parser implemented on the basis of a unification-based grammar developed in Düsseldorf. This paper presents the interface between the syntactic parser (QPATR) and the DATR formalism representing lexical information: the QPATR parser can extract lexical information from the hierarchically organized DATR lexicon.

  • PDF
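The organizing principle the abstract describes, a lexicon as a default-inheritance hierarchy, can be illustrated outside DATR itself. The following Python sketch is a loose analogy, not DATR syntax and not from the paper; the node names and features are invented for illustration.

```python
# Minimal illustration of default inheritance in a lexicon, in the
# spirit of DATR: child nodes inherit feature values from a parent
# node and may override them locally (exceptions to generalizations).

class Node:
    def __init__(self, parent=None, **features):
        self.parent = parent
        self.features = features

    def get(self, name):
        # Look up a feature locally, then fall back to the parent chain.
        if name in self.features:
            return self.features[name]
        if self.parent is not None:
            return self.parent.get(name)
        raise KeyError(name)

# Generalization: verbs of this (hypothetical) class take ending "-da".
VERB = Node(pos="verb", ending="-da")

# A regular verb inherits everything it does not state itself.
mekta = Node(parent=VERB, stem="mek-")

# An exceptional verb overrides the inherited default ending.
irregular = Node(parent=VERB, stem="ha-", ending="-yeo")

print(mekta.get("pos"), mekta.get("ending"))          # verb -da
print(irregular.get("pos"), irregular.get("ending"))  # verb -yeo
```

The point of such an organization, as the abstract notes, is that generalizations are stated once at a high node while exceptions are stated locally.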

The Role of Pitch and Length in Spoken Word Recognition: Differences between Seoul and Daegu Dialects

  • 이윤형;박현수
    • Phonetics and Speech Sciences, Vol. 1, No. 2, pp. 85-94, 2009
  • The purpose of this study was to see the effects of pitch and length patterns on spoken word recognition. In Experiment 1, a syllable monitoring task was used to see the effects of pitch and length on the pre-lexical level of spoken word recognition. For both Seoul dialect speakers and Daegu dialect speakers, pitch and length did not affect the syllable detection processes. This result implies that there is little effect of pitch and length in pre-lexical processing. In Experiment 2, a lexical decision task was used to see the effect of pitch and length on the lexical access level of spoken word recognition. In this experiment, word frequency (low and high) as well as pitch and length was manipulated. The results showed that pitch and length information did not play an important role for Seoul dialect speakers, but that it did affect lexical decision processing for Daegu dialect speakers. Pitch and length seem to affect lexical access during the word recognition process of Daegu dialect speakers.

  • PDF

A Study on the Utilization Plan of Lexical Resources for Disaster and Safety Information Management Based on Current Status Analysis

  • 정힘찬;김태영;김용;오효정
    • Journal of the Korean Society for Information Management, Vol. 34, No. 2, pp. 137-158, 2017
  • Disasters directly affect the lives, safety, and property of citizens, so when a disaster occurs, cooperation that efficiently shares and uses related information is essential for a rapid and effective response. At present, the organizations concerned with disaster safety each produce and manage diverse disaster and safety information, but every organization defines and uses its own terms and meanings. This is a major obstacle for practitioners who try to search for and access disaster and safety information, and it is one of the factors that lowers the usability of information across organizations. To address this problem, as a preliminary study toward standardizing lexical resources for the integrated management of disaster and safety information, this study analyzed the current status of the lexical resources managed by disaster- and safety-related organizations. We also analyzed how the collected lexical resources are used from the perspectives of information providers and users, identified the characteristics of each lexical group, and on that basis proposed a utilization plan for disaster and safety information management.

Korean Lexical Disambiguation Based on Statistical Information

  • 박하규;김영택
    • The Journal of the Korean Institute of Communications and Information Sciences, Vol. 19, No. 2, pp. 265-275, 1994
  • Lexical disambiguation is one of the most fundamental problems in natural language processing, underlying speech recognition and synthesis, information retrieval, and corpus tagging. This paper describes a Korean lexical disambiguation technique that uses statistical information extracted from corpora. For more precise disambiguation, the technique uses token tags corresponding to morphological analysis results instead of part-of-speech tags. The lexical selection function proposed here shows high accuracy because it reflects lexical characteristics of Korean such as the agreement between endings and particles. Two disambiguation modes, unique selection and multiple selection, are supported so that the method can be adapted to the application at hand.

  • PDF
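The selection idea the abstract describes, choosing among candidate morphological analyses by corpus statistics, can be sketched as follows. This is a hypothetical illustration, not the paper's lexical selection function; the token tags and counts are invented, and both a unique-selection and a multiple-selection mode are shown.

```python
from collections import Counter

# Invented corpus statistics: counts of (form, token-tag) pairs.
corpus_counts = Counter({
    ("nanun", "N+J"): 3,   # noun + particle reading
    ("nanun", "V+E"): 12,  # verb stem + ending reading
})

def select_analysis(candidates, counts):
    """Unique-selection mode: return the candidate analysis with the
    highest corpus count; ties keep the first candidate."""
    return max(candidates, key=lambda c: counts[c])

def select_top_k(candidates, counts, k=2):
    """Multiple-selection mode: return the k most frequent analyses."""
    return sorted(candidates, key=lambda c: counts[c], reverse=True)[:k]

cands = [("nanun", "N+J"), ("nanun", "V+E")]
best = select_analysis(cands, corpus_counts)
print(best)  # ('nanun', 'V+E')
```

A real system would condition these counts on context (e.g. the agreement between endings and particles that the abstract mentions) rather than on the form alone.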

Spam-mail Filtering based on Lexical Information and Thesaurus

  • Kang, Sin-Jae;Kim, Jong-Wan
    • Journal of the Korea Industrial Information Systems Research, Vol. 11, No. 1, pp. 13-20, 2006
  • This study builds a spam-mail filtering system based on lexical and conceptual information. Two kinds of information are used to identify spam: the definite information group consists of sender information, URLs, and a list of recent spam keywords, while the less definite group consists of the word list and concept codes extracted from the mail body. Mail is first classified using the definite information and then using the less definite information. The lexical information and concept codes contained in the mail body are used after SVM machine learning. The results show that spam precision rose when more lexical information was used as feature vectors, and spam recall rose when concept codes were additionally included in the feature vectors.

  • PDF
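The two-phase pipeline shape described in the abstract can be sketched as follows. This is a hypothetical illustration: the blacklist, keywords, and thresholds are invented, and a simple keyword score stands in for the paper's SVM over lexical features and concept codes, since the point here is the phase structure, not the classifier.

```python
# Phase 1 applies "definite" evidence (sender blacklist, spam
# keywords); phase 2 handles mail that phase 1 leaves undecided.

SPAM_SENDERS = {"offers@bulk.example"}        # invented blacklist
SPAM_KEYWORDS = {"viagra", "lottery", "free money"}

def phase1(sender, body):
    """Rule-based pass using definite information; None = undecided."""
    if sender in SPAM_SENDERS:
        return "spam"
    if any(kw in body.lower() for kw in SPAM_KEYWORDS):
        return "spam"
    return None

def phase2(body, spammy_words=frozenset({"winner", "click", "prize"}),
           threshold=2):
    """Stand-in for the SVM phase: count weak spam indicators."""
    score = sum(1 for w in body.lower().split() if w in spammy_words)
    return "spam" if score >= threshold else "ham"

def classify(sender, body):
    return phase1(sender, body) or phase2(body)

print(classify("offers@bulk.example", "hello"))        # spam (phase 1)
print(classify("friend@example.com", "lunch today?"))  # ham  (phase 2)
```

In the paper's design the second phase would instead featurize the body into lexical features and thesaurus concept codes and feed them to a trained SVM.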

The Extraction of Head Words in Definitions for Construction of a Semi-automatic Lexical-semantic Network of Verbs

  • 김혜경;윤애선
    • Language and Information, Vol. 10, No. 1, pp. 47-69, 2006
  • Recently, there has been a surge of interest in the construction and utilization of a Korean thesaurus. In this paper, a semi-automatic method for generating a lexical-semantic network of Korean '-ha' verbs is presented through an analysis of the lexical definitions of these verbs. Initially, using several tools that can filter out and coordinate lexical data, word-definition pairs were prepared for treatment in a subsequent step. While inspecting the various definitions of each verb, we extracted and coordinated the head words from the sentences that constitute the definition of each word. These head words are taken to be the main conceptual words that represent the sense of the verb in question. Using these head words and related information, this paper shows that a thesaurus can be created without difficulty in a semi-automatic fashion.

  • PDF
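The core step the abstract describes, pulling a head word out of a definition sentence, can be approximated very roughly as follows. This is a hypothetical sketch, not the paper's method: it uses the common observation that the genus term of a definition tends to be the head of its final phrase, and simply takes the last content word after stripping stop words. The stop-word list and example are invented, and English stands in for Korean.

```python
# Crude head-word extraction from a dictionary definition:
# drop stop words and punctuation, return the last content word.

STOP_WORDS = {"to", "a", "an", "the", "of", "in", "or", "with"}

def extract_head(definition):
    """Return the last non-stop-word token of a definition string,
    or None if nothing remains."""
    tokens = [t.strip(".,;") for t in definition.lower().split()]
    content = [t for t in tokens if t and t not in STOP_WORDS]
    return content[-1] if content else None

print(extract_head("to apply oneself to learning"))  # learning
```

A serious implementation for Korean would instead use morphological analysis of the definition sentence, as the paper's pipeline does.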

Analyzing the correlation of Spam Recall and Thesaurus

  • Kang, Sin-Jae;Kim, Jong-Wan
    • Korea Information Technology Applications Society, Proceedings of the 6th 2005 International Conference on Computers, Communications and System, pp. 21-25, 2005
  • In this paper, we constructed a two-phase spam-mail filtering system based on lexical and conceptual information. Two kinds of information can distinguish spam from legitimate mail: the definite information is the sender's information, URLs, and a certain spam list, and the less definite information is the word list and concept codes extracted from the mail body. We first classified spam mail using the definite information, and then used the less definite information. The lexical information and concept codes contained in the email body were used for SVM learning in the second phase. According to our results, spam precision increased when more lexical information was used as features, and spam recall increased when the concept codes were included as features as well.

  • PDF

Corpus-Based Ambiguity-Driven Learning of Context-Dependent Lexical Rules for Part-of-Speech Tagging

  • 이상주;류원호;김진동;임해창
    • Journal of KIISE: Software and Applications, Vol. 26, No. 1, pp. 178-178, 1999
  • Most stochastic taggers cannot resolve some morphological ambiguities that can be resolved only by referring to lexical contexts, because they use only contextual probabilities based on tag n-grams and lexical probabilities. Existing lexical rules are effective for resolving such ambiguities because they can refer to lexical contexts. However, they have two limitations. One is that human experts tend to write erroneous rules, because the rules are deterministic. The other is that acquiring the rules is hard and time-consuming, because they must be written manually. In this paper, we propose context-dependent lexical rules, which are lexical rules based on the statistics of a tagged corpus, and an ambiguity-driven learning method, which automatically acquires the proposed rules from a tagged corpus. Using the proposed rules, the proposed tagger can partially annotate an unseen corpus with high accuracy, because it is a kind of memorizing tagger that can annotate a training corpus with 100% accuracy. The proposed tagger is therefore useful for improving the accuracy of a stochastic tagger, and it is also effective for detecting and correcting tagging errors in a manually tagged corpus. Moreover, the experimental results show that the proposed method is also effective for English part-of-speech tagging.
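The "memorizing tagger" behavior the abstract describes, learning context-dependent lexical rules from a tagged corpus and applying them only where their context matches, can be sketched as follows. This is a hypothetical illustration, not the paper's algorithm: the context window, tag names, and training data are invented, and words whose context was never seen are simply left untagged (partial annotation).

```python
from collections import Counter, defaultdict

def learn_rules(tagged_sentences):
    """Record the tag each word took in each (prev, word, next)
    context in the training corpus; keep the majority tag per context."""
    rules = defaultdict(Counter)
    for sent in tagged_sentences:
        words = [w for w, _ in sent]
        for i, (w, tag) in enumerate(sent):
            prev_w = words[i - 1] if i > 0 else "<s>"
            next_w = words[i + 1] if i + 1 < len(words) else "</s>"
            rules[(prev_w, w, next_w)][tag] += 1
    return {ctx: c.most_common(1)[0][0] for ctx, c in rules.items()}

def tag(words, rules):
    """Apply a rule only when its full context matches; None means
    the word is left for a stochastic tagger to decide."""
    out = []
    for i, w in enumerate(words):
        prev_w = words[i - 1] if i > 0 else "<s>"
        next_w = words[i + 1] if i + 1 < len(words) else "</s>"
        out.append((w, rules.get((prev_w, w, next_w))))
    return out

train = [[("time", "NN"), ("flies", "VBZ"), ("fast", "RB")]]
rules = learn_rules(train)
print(tag(["time", "flies", "fast"], rules))
# [('time', 'NN'), ('flies', 'VBZ'), ('fast', 'RB')]
```

Because the rules memorize training contexts exactly, the training corpus is reproduced with 100% accuracy, while unseen contexts yield `None` and fall through to another tagger, matching the combination strategy the abstract proposes.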

A Constraint on Lexical Transfer: Implications for Computer-Assisted Translation (CAT)

  • Park, Kabyong
    • Journal of the Korea Society of Computer and Information, Vol. 21, No. 11, pp. 9-16, 2016
  • The central goal of the current paper is to investigate lexical transfer between Korean and English, to identify its rule-governed behavior, and to draw implications for the development of computer-assisted translation (CAT) software for the two languages. It will be shown that Sankoff and Poplack's Free Morpheme Constraint cannot account for the full range of data. A constraint is proposed that a set of case-assigners, such as verbs, INFL, prepositions, and the possessive marker, may not undergo lexical transfer. The translation software is also expected to incorporate the proposed claim that English verbs are actually borrowed as nouns or as defective verbs in order to escape the direct attachment of inflectional morphemes.