• Title/Summary/Keyword: lexical information


A Corpus-Based Study of the Use of HEART and HEAD in English

  • Oh, Sang-suk
    • Language and Information / v.18 no.2 / pp.81-102 / 2014
  • The purpose of this paper is to provide corpus-based quantitative analyses of HEART and HEAD in order to examine their actual usage and to consider some cognitive-linguistic aspects associated with their use. Two corpora, COCA and COHA, are used for the analysis. The analysis of the COCA corpus reveals that the total frequency of HEAD is much higher than that of HEART, and that the figurative use of HEART (60%) is roughly twice as frequent as its literal use (32%); by contrast, the figurative use of HEAD (41%) is only slightly more frequent than its literal use (38%). Among all four genres, both lexemes occur most frequently in fiction and then in magazines. Over the past two centuries, the use of HEART has been steadily decreasing, whereas the use of HEAD has been steadily increasing. The decreasing use of HEART appears to be partially related to the decrease in its figurative use, while the increasing use of HEAD is attributable to its diverse meanings, the increase in its lexical use, and a partial increase in its figurative use. The analysis of the verbs and adjectives collocating before HEART and HEAD, as well as the modifying and predicating forms of the two lexemes, also provides relevant information about their usage. This paper shows that quantitative information aids understanding not only of the actual usage of the two lexemes but also of the cognitive forces working behind it.
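
A quick sketch of how such genre and collocation counts can be reproduced on any genre-tagged corpus. The file name, the tab-separated one-sentence-per-line format, and the two-word left window are illustrative assumptions, not the author's actual setup.

```python
# Count genre frequencies and left-collocates of a target lemma.
# Assumes a hypothetical corpus file "corpus.tsv": genre<TAB>sentence.
from collections import Counter

def collocate_counts(path, target, window=2):
    """Count per-genre hits and words appearing within `window`
    tokens before `target`."""
    by_genre, collocates = Counter(), Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            genre, sentence = line.rstrip("\n").split("\t", 1)
            tokens = sentence.lower().split()
            for i, tok in enumerate(tokens):
                if tok == target:
                    by_genre[genre] += 1
                    collocates.update(tokens[max(0, i - window):i])
    return by_genre, collocates

heart_by_genre, heart_colls = collocate_counts("corpus.tsv", "heart")
print(heart_by_genre.most_common())   # e.g., fiction first, then magazine
print(heart_colls.most_common(10))    # frequent preceding verbs/adjectives
```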

The Detection and Correction of Context-Dependent Errors of the Predicate Using Noun Classes of Selectional Restrictions (선택 제약 명사의 의미 범주 정보를 이용한 용언의 문맥 의존 오류 검사 및 교정)

  • So, Gil-Ja;Kwon, Hyuk-Chul
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.1 / pp.25-31 / 2014
  • Korean grammar checkers typically detect context-dependent errors by employing heuristic rules that are formulated by language experts and consist of lexical items. Such grammar checkers unfortunately show low recall, that is, a low ratio of errors detected in a document. To resolve this shortcoming, a new error-decision rule-generalization method is proposed that utilizes the existing KorLex thesaurus, the Korean version of Princeton WordNet. The method extracts noun classes from KorLex and generalizes error-decision rules over them using the Tree Cut Model and the information-theoretic MDL (minimum description length) principle.
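
A deliberately simplified sketch of the generalization idea: replace the lexical items of a rule with a thesaurus class whenever the class-level description is cheaper than listing the words. The toy hierarchy and the one-symbol-per-item cost are invented stand-ins for KorLex and the Tree Cut Model / MDL computation, not the authors' implementation.

```python
# Toy MDL-style generalization of rule nouns to noun classes.
# HYPERNYMS and the cost comparison are illustrative assumptions.
HYPERNYMS = {"apple": "fruit", "pear": "fruit", "grape": "fruit",
             "hammer": "tool"}

def generalize(rule_nouns, class_members):
    """Replace nouns by their class when one class symbol plus its
    exceptions is shorter than listing every noun."""
    by_class = {}
    for n in rule_nouns:
        by_class.setdefault(HYPERNYMS[n], set()).add(n)
    result = set()
    for cls, members in by_class.items():
        exceptions = class_members[cls] - members
        if 1 + len(exceptions) < len(members):
            result.add(cls)        # cheaper to name the class
        else:
            result |= members      # cheaper to keep the lexical items
    return result

members = {"fruit": {"apple", "pear", "grape"}, "tool": {"hammer", "saw"}}
print(generalize({"apple", "pear", "grape"}, members))  # {'fruit'}
```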

A Natural Language Question Answering System: An Application for E-learning

  • Gupta, Akash;Rajaraman, Prof. V.
    • Proceedings of the Korea Intelligent Information System Society Conference / 2001.01a / pp.285-291 / 2001
  • This paper describes a natural language question answering system that students can use to get solutions to their queries. Unlike AI question answering systems that focus on generating new answers, the present system retrieves existing ones from question-answer files. Unlike information retrieval approaches that rely on a purely lexical metric of similarity between query and document, it uses a semantic knowledge base (WordNet) to improve its ability to match questions. The paper describes the design and the current implementation of the system as an intelligent tutoring system. The main drawback of existing tutoring systems is that the computer poses a question to the students and guides them toward the solution; in the present approach, a student asks any question related to the topic and gets a suitable reply. Based on the query, the student receives either a direct answer or a set of questions (at most 3 or 4) that bear the greatest resemblance to the input. We further analyze application fields for such a system and discuss the scope for future research in this area.
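
A minimal sketch of WordNet-backed question matching in this spirit, using NLTK's WordNet interface (requires `nltk.download("wordnet")`). The toy FAQ and the path-similarity scoring scheme are illustrative assumptions, not the system's actual design.

```python
from nltk.corpus import wordnet as wn

def sense_similarity(w1, w2):
    """Best path similarity between any senses of two words (0 if none)."""
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(scores, default=0.0)

def question_score(query_words, stored_words):
    """Average best alignment of each query word against a stored question."""
    return sum(max(sense_similarity(q, s) for s in stored_words)
               for q in query_words) / len(query_words)

faq = [("what", "is", "recursion"), ("how", "does", "sorting", "work")]
query = ("explain", "recursion")
print(max(faq, key=lambda q: question_score(query, q)))  # closest stored question
```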

Determination of an Optimal Sentence Segmentation Position using Statistical Information and Genetic Learning (통계 정보와 유전자 학습에 의한 최적의 문장 분할 위치 결정)

  • 김성동;김영택
    • Journal of the Korean Institute of Telematics and Electronics C / v.35C no.10 / pp.38-47 / 1998
  • Syntactic analysis for practical machine translation should be able to handle long sentences, but long-sentence analysis is a critical problem because of its high complexity. In this paper, a sentence segmentation method is proposed for efficient analysis of long sentences, together with a method of determining optimal segmentation positions using statistical information and genetic learning. It consists of two modules: (1) decomposable position determination, which uses lexical contextual constraints acquired from training data tagged with segmentation positions; and (2) segmentation position selection by a selection function whose parameter weights are determined through genetic learning, which selects safe segmentation positions while enhancing analysis efficiency as much as possible. Experiments show that the proposed method segments sentences safely and enhances analysis efficiency.
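
A toy illustration of the genetic-search component: chromosomes are bitmasks over candidate segmentation positions, and the hand-written fitness (segment-length balance minus an unsafety penalty) merely stands in for the paper's learned selection function; positions and penalties below are invented.

```python
import random

def fitness(mask, candidates, unsafety, sent_len, target=12):
    cuts = [p for p, on in zip(candidates, mask) if on]
    bounds = [0] + cuts + [sent_len]
    segs = [b - a for a, b in zip(bounds, bounds[1:])]
    balance = -sum(abs(s - target) for s in segs)   # prefer even segments
    penalty = sum(unsafety[p] for p in cuts)        # avoid unsafe cut points
    return balance - 10 * penalty

def evolve(candidates, unsafety, sent_len, pop=30, gens=50):
    n = len(candidates)
    popl = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popl.sort(key=lambda m: -fitness(m, candidates, unsafety, sent_len))
        survivors = popl[: pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n) if n > 1 else 0
            child = a[:cut] + b[cut:]               # one-point crossover
            child[random.randrange(n)] ^= 1         # point mutation
            children.append(child)
        popl = survivors + children
    return max(popl, key=lambda m: fitness(m, candidates, unsafety, sent_len))

cands = [7, 13, 21, 30]                             # candidate cut positions
best = evolve(cands, {7: 0.1, 13: 0.9, 21: 0.2, 30: 0.4}, sent_len=40)
print([p for p, on in zip(cands, best) if on])      # selected safe positions
```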

Analysis of the Readability of English Texts on Radiation Therapy for Foreign Cancer Patients in South Korea (외국인 암 환자를 위한 국내 방사선치료 영문 텍스트 가독성 분석)

  • Dae-Gun, Kim;Sungchul, Kim
    • Journal of radiological science and technology / v.45 no.6 / pp.543-552 / 2022
  • This study evaluated the readability of radiation therapy information (English text) provided to foreign cancer patients by medical institutions in South Korea (KOR), in comparison with the United States (USA). A total of 20 hospitals, 10 each from KOR and the USA, that provide information on radiation therapy technology were selected. Readability was comparatively analyzed across three aspects (lexical, syntactic, and cohesion/readability) using the Coh-Metrix online web program. With respect to readability, the mean Flesch Reading Ease (FRE) score was lower in KOR (8.3) than in the USA (23.2), and the Flesch-Kincaid Grade Level (FKGL) was higher in KOR than in the USA (14.2), indicating that the KOR texts were less readable (p<.05). In both KOR and the USA, the reading level of the English texts on radiation therapy was found to be above high-school level (FRE of 50 or lower). Therefore, English text information on radiation therapy for foreign cancer patients should be brought down to an elementary-school reading level to improve the quality of medical services.
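
Both indices have standard published formulas, so the comparison is straightforward to reproduce. A small implementation follows; the vowel-group syllable counter is a rough heuristic (tools such as Coh-Metrix count syllables more carefully).

```python
import re

def syllables(word):
    """Crude syllable estimate: count vowel groups, at least one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fre_fkgl(text):
    s = max(1, len(re.findall(r"[.!?]+", text)))        # sentence count
    words = re.findall(r"[A-Za-z]+", text)
    w, syl = len(words), sum(syllables(x) for x in words)
    fre = 206.835 - 1.015 * (w / s) - 84.6 * (syl / w)  # Flesch Reading Ease
    fkgl = 0.39 * (w / s) + 11.8 * (syl / w) - 15.59    # Flesch-Kincaid Grade Level
    return fre, fkgl

print(fre_fkgl("Radiation therapy uses high energy beams. It targets tumors."))
```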

The Locus of the Word Frequency Effect in Speech Production: Evidence from the Picture-word Interference Task (말소리 산출에서 단어빈도효과의 위치 : 그림-단어간섭과제에서 나온 증거)

  • Koo, Min-Mo;Nam, Ki-Chun
    • MALSORI / no.62 / pp.51-68 / 2007
  • Two experiments were conducted to determine the exact locus of the word frequency effect in speech production. Experiment 1 addressed the question of whether the word frequency effect arises at the stage of lemma selection. A picture-word interference task was performed to test the significance of interactions between the effects of target frequency, distractor frequency, and semantic relatedness. There was a significant interaction between distractor frequency and semantic relatedness, and between target frequency and distractor frequency. Experiment 2 examined whether the word frequency effect is attributable to the lexeme level, which represents the phonological information of words; the methodological logic was the same as in Experiment 1. There was no significant interaction between distractor frequency and phonological relatedness. These results demonstrate that word frequency influences the processes involved in selecting the correct lemma corresponding to an activated lexical concept in speech production.
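
The evidential core of both experiments is an interaction test. Purely as an illustration of that statistical logic (the latencies below are invented, and this is not the authors' analysis script), a two-way test on a picture-word interference design could be run as follows.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# hypothetical naming latencies by distractor frequency and relatedness
df = pd.DataFrame({
    "rt":        [612, 655, 634, 601, 688, 642, 625, 660],
    "dist_freq": ["high", "high", "low", "low"] * 2,
    "related":   ["yes", "no"] * 4,
})
model = smf.ols("rt ~ C(dist_freq) * C(related)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # the interaction row carries the claim
```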

Measurement of Document Similarity using Word and Word-Pair Frequencies (단어 및 단어쌍 별 빈도수를 이용한 문서간 유사도 측정)

  • 김혜숙;박상철;김수형
    • Proceedings of the IEEK Conference / 2003.07d / pp.1311-1314 / 2003
  • In this paper, we propose a method to measure document similarity. First, we examine a single-term method that extracts nouns with a lexical analyzer as a preprocessing step, so that each index term corresponds to one noun. With this method, however, the similarity between unrelated documents tends to be overestimated. For this reason, a term-phrase method has been reported, which uses the co-occurrence of two words as an index term. In this paper, we combine the two methods to compensate for their respective weaknesses: six types of features are extracted from the two input documents and fed into a neural network that calculates the final document similarity value. The reliability of our method is demonstrated by a document retrieval experiment.
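
A compact sketch of the combined idea, scoring documents with both single-word and adjacent word-pair counts. The fixed linear combination stands in for the paper's six-feature neural network, and the whitespace tokenizer stands in for the noun-extracting lexical analyzer.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def features(text):
    words = text.lower().split()            # stand-in for noun extraction
    return Counter(words), Counter(zip(words, words[1:]))  # unigrams, pairs

def similarity(doc1, doc2, alpha=0.5):
    u1, p1 = features(doc1)
    u2, p2 = features(doc2)
    return alpha * cosine(u1, u2) + (1 - alpha) * cosine(p1, p2)

print(similarity("the cat sat on the mat", "the cat lay on the mat"))
```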

Design of a Contextual Lexical Knowledge Graph Extraction Algorithm (맥락적 어휘 지식 그래프 추출 알고리즘의 설계)

  • Nam, Sangha;Choi, Gyuhyeon;Hahm, Younggyun;Choi, Key-Sun
    • 한국어정보학회:학술대회논문집 / 2016.10a / pp.147-151 / 2016
  • This paper presents a Korean open information extraction method for extracting reified triples. In the Semantic Web, knowledge is commonly expressed in the form of RDF triples, but a natural-language sentence consists of relations between multiple predicates and their arguments. For this reason, a new open information extraction system is needed that follows the triple, the representative knowledge representation of the Semantic Web, while also reflecting the dependency structure of the sentence so that the relations between multiple predicates and arguments can be turned into knowledge. This paper proposes a new open information extraction method based on a consistent transformation of sentence structure, together with a reified-triple extraction method that can represent entity-centric and event-centric knowledge together. To demonstrate the merit and effectiveness of the proposed method, experiments measuring the quantity and precision of the knowledge extracted from the body text of Korean Wikipedia featured articles were performed, and a pseudo-SPARQL query generation module applying the proposed approach is also introduced.
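
For readers unfamiliar with reification: a reified triple gets a node of its own, so additional arguments (time, place, manner) can attach to the statement itself, which is exactly what multi-predicate, multi-argument sentences require. A minimal rdflib illustration follows; the URIs and the example fact are invented.

```python
from rdflib import Graph, URIRef, BNode, Literal, RDF

g = Graph()
stmt = BNode()  # node standing for the statement/event itself
s = URIRef("http://example.org/Sejong")
p = URIRef("http://example.org/created")
o = URIRef("http://example.org/Hangul")

# standard RDF reification: the statement node points at s/p/o ...
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, s))
g.add((stmt, RDF.predicate, p))
g.add((stmt, RDF.object, o))
# ... so further arguments can attach to the event node
g.add((stmt, URIRef("http://example.org/inYear"), Literal("1443")))

print(g.serialize(format="turtle"))
```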

Using Lexical Co-occurrence Information in Syntactic Analysis (구문 분석에서의 어휘간 공기 정보의 활용)

  • Yoon, Jun-Tae;Choi, Key-Sun;Kim, Seon-Ho;Song, Man-Suk
    • Annual Conference on Human and Language Technology / 1998.10c / pp.276-280 / 1998
  • In syntactic analysis, lexical information plays a very important role in resolving syntactic ambiguity. This paper shows that co-occurrence information extracted from a large corpus can be used effectively in syntactic analysis. First, we show that a more efficient parser can be built by extracting meaningful collocations from the co-occurrence information and using them in parsing. Second, we show that co-occurrence information extracted from a large corpus can be applied to resolving case ambiguity caused by the omission of case markers or auxiliary particles, and case ambiguity arising from noun-phrase movement in relative adnominal clauses. For this purpose, we extracted <predicate, noun, case relation> co-occurrence information from 30 million eojeols (word units) of the Yonsei Corpus, compiled by the Yonsei University Korean dictionary compilation project, and 10 million eojeols of the KAIST Corpus.
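
Collecting such <predicate, noun, case> statistics reduces to counting dependency triples over an analyzed corpus and querying them during parsing. A minimal sketch follows; the input file name and its one-triple-per-line format are assumptions, not the authors' pipeline.

```python
from collections import Counter

cooc = Counter()
with open("parsed_corpus.tsv", encoding="utf-8") as f:
    for line in f:                     # noun<TAB>case<TAB>predicate per line
        noun, case, predicate = line.rstrip("\n").split("\t")
        cooc[(predicate, noun, case)] += 1

def best_case(predicate, noun):
    """Most likely case relation for a noun-predicate pair, used to
    resolve ambiguity when the case marker is omitted."""
    cands = {c: n for (p, nn, c), n in cooc.items()
             if p == predicate and nn == noun}
    return max(cands, key=cands.get) if cands else None
```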

Component-Based VHDL Analyzer for Reuse and Embedment (재사용 및 내장 가능한 구성요소 기반 VHDL 분석기)

  • 박상헌;손영석
    • Proceedings of the IEEK Conference / 2003.07b / pp.1015-1018 / 2003
  • As the size and complexity of hardware and software systems increase, more efficient design methodologies have been developed. In particular, design-reuse techniques enable fast system development by integrating existing hardware and software. For such techniques, the available hardware/software should be prepared as component-based parts adaptable to various systems. This paper introduces a component-based VHDL analyzer that can be embedded in other applications such as a simulator, a synthesis tool, or a smart editor. The analyzer parses VHDL input, performs lexical, syntactic, and semantic checking, and finally generates intermediate-form data as its result. VHDL has the full features of an object-oriented language, such as data abstraction, inheritance, and polymorphism; supporting these features requires a special analysis algorithm and intermediate form. This paper summarizes practical issues in implementing a high-performance, high-quality VHDL analyzer and provides solutions based on intensive experience in VHDL analyzer development.
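
The lexical, syntactic, and intermediate-form stages described above can be pictured with a toy embeddable analyzer component. The sketch below (Python rather than the paper's implementation language, with an invented micro-grammar covering only a signal declaration) illustrates the staged, reusable interface, not real VHDL analysis.

```python
import re
from dataclasses import dataclass

TOKEN_RE = re.compile(r"\s*(?:(?P<ID>[A-Za-z]\w*)|(?P<SYM>[:;()<=]))")

def lex(src):
    """Lexical stage: turn source text into (kind, text) tokens."""
    pos, tokens = 0, []
    while pos < len(src):
        m = TOKEN_RE.match(src, pos)
        if not m:
            raise SyntaxError(f"bad character at {pos}")
        tokens.append((m.lastgroup, m.group(m.lastgroup)))
        pos = m.end()
    return tokens

@dataclass
class SignalDecl:                      # one fragment of an intermediate form
    name: str
    type_name: str

def parse_signal(tokens):
    """Syntactic stage for one 'signal <name> : <type>;' declaration."""
    texts = [t for _, t in tokens]
    if len(texts) >= 5 and texts[0] == "signal" and texts[2] == ":":
        return SignalDecl(name=texts[1], type_name=texts[3])
    raise SyntaxError("not a signal declaration")

print(parse_signal(lex("signal clk : bit;")))  # SignalDecl(name='clk', type_name='bit')
```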
