• Title/Abstract/Keyword: agglutinative language

Search results: 23 (processing time: 0.017 s)

A Machine Learning Approach to Korean Language Stemming

  • Cho, Se-hyeong
    • 한국지능시스템학회논문지 / Vol.11 No.6 / pp.549-557 / 2001
  • Morphological analysis and POS tagging require a dictionary for the language at hand, yet with such an approach it is impossible to analyze a language without a dictionary, and difficulties also arise when a significant portion of the vocabulary is new or unknown. This paper explores the possibility of learning the morphology of an agglutinative language, in particular Korean, without any prior lexical knowledge of the language. We use unsupervised learning, in that there is no instructor to guide the outcome of the learner, nor any tagged corpus. The main characteristics of the approach are as follows. First, we use only a raw corpus, with no tags attached and no dictionary. Second, unlike many heuristics that are theoretically ungrounded, the method is based on widely accepted statistical methods. The method is currently applied only to Korean, but since it is essentially language-neutral it can easily be adapted to other agglutinative languages.

  • PDF
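The abstract does not spell out which statistics the learner uses; one minimal, purely illustrative sketch in the same spirit scores every split point of a word form by how often its two halves recur as prefixes and suffixes elsewhere in the raw corpus:

```python
from collections import Counter

def train(words):
    """Count how often each prefix and suffix string occurs across the corpus."""
    pre, suf = Counter(), Counter()
    for w in words:
        for i in range(1, len(w)):
            pre[w[:i]] += 1
            suf[w[i:]] += 1
    return pre, suf

def split(word, pre, suf):
    """Pick the stem/ending boundary with the strongest prefix*suffix evidence."""
    best, best_score = (word, ""), 0
    for i in range(1, len(word)):
        score = pre[word[:i]] * suf[word[i:]]
        if score > best_score:
            best, best_score = (word[:i], word[i:]), score
    return best

# Tiny illustrative corpus of verb forms sharing stems and endings.
corpus = ["먹다", "먹고", "먹으니", "잡다", "잡고", "잡으니", "보다", "보고"]
pre, suf = train(corpus)
print(split("먹으니", pre, suf))  # → ('먹', '으니')
```

On a real corpus the counts would be far larger and the scoring function more principled, but the sketch shows why no dictionary is needed: recurring stems and recurring endings reinforce each other statistically.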

Korean Semantic Annotation on the EXCOM Platform

  • Chai, Hyun-Zoo;Djioua, Brahim;Priol, Florence Le;Descles, Jean-Pierre
    • 한국언어정보학회:학술대회논문집 / 한국언어정보학회 2007년도 정기학술대회 / pp.548-556 / 2007
  • We present an automatic semantic annotation system for Korean on the EXCOM (EXploration COntextual for Multilingual) platform. The purpose of natural language processing is to enable computers to understand human language, so that they can perform more sophisticated tasks. Accordingly, current research concentrates more and more on extracting semantic information. The realization of semantic processing requires the widespread annotation of documents. However, compared to inflectional languages, processing technology for agglutinative languages such as Korean still has shortcomings. EXCOM identifies semantic information in Korean text using our new method, the Contextual Exploration Method. Our initial system properly annotates approximately 88% of standard Korean sentences, and this annotation rate holds across text domains.

  • PDF

KOREAN TOPIC MODELING USING MATRIX DECOMPOSITION

  • June-Ho Lee;Hyun-Min Kim
    • East Asian mathematical journal / Vol.40 No.3 / pp.307-318 / 2024
  • This paper explores the application of matrix factorization, specifically CUR decomposition, in the clustering of Korean language documents by topic. It addresses the unique challenges of Natural Language Processing (NLP) in dealing with the Korean language's distinctive features, such as agglutinative words and morphological ambiguity. The study compares the effectiveness of Latent Semantic Analysis (LSA) using CUR decomposition with the classical Singular Value Decomposition (SVD) method in the context of Korean text. Experiments are conducted using Korean Wikipedia documents and newspaper data, providing insight into the accuracy and efficiency of these techniques. The findings demonstrate the potential of CUR decomposition to improve the accuracy of document clustering in Korean, offering a valuable approach to text mining and information retrieval in agglutinative languages.
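The comparison described above can be sketched on a toy term-document matrix: a randomized CUR factorization against the rank-k truncated SVD that classical LSA uses. The sampling scheme and the random matrix are illustrative assumptions; the paper's exact CUR variant may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy term-document matrix (terms x documents); a real pipeline would build
# this from tokenized Korean text, e.g. with TF-IDF weights.
A = rng.random((8, 6))

def cur(A, k, rng):
    """Randomized CUR: sample k columns and k rows with prob. proportional to squared norm."""
    pc = (A ** 2).sum(axis=0)
    pr = (A ** 2).sum(axis=1)
    cols = rng.choice(A.shape[1], size=k, replace=False, p=pc / pc.sum())
    rows = rng.choice(A.shape[0], size=k, replace=False, p=pr / pr.sum())
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # link factor
    return C, U, R

C, U, R = cur(A, 4, rng)
err_cur = np.linalg.norm(A - C @ U @ R)

# Rank-4 truncated SVD (the classical LSA baseline) for comparison.
u, s, vt = np.linalg.svd(A, full_matrices=False)
err_svd = np.linalg.norm(A - (u[:, :4] * s[:4]) @ vt[:4])
```

By the Eckart-Young theorem the SVD error can never exceed the CUR error at the same rank; CUR's appeal, as the abstract suggests, lies elsewhere: C and R are actual columns and rows of A, so the factors stay interpretable in terms of real words and documents.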

언어유형론의 비판적 고찰 한국어는 교착어, 불어는 굴절어라는 것의 의미를 묻다 (A Critical Review of Language Typology: for the subjecthood of Korean linguistics)

  • 목정수
    • 인문언어 / Vol.6 / pp.185-211 / 2004
  • Korean linguistics, or linguistics in Korea, has a congenital limitation: on the one hand it was imported from Europe and Japan, and on the other hand American linguistics now takes the initiative in Korea. That is why Korean linguistics cannot be free of problems of 'dependence/independence', 'central/marginal', etc. Two conditions must be met to study the nature of Korean itself and to establish the independence of Korean linguistics in this situation. The first is that we should reveal some peculiarities of Korean in itself. The second is that we should reveal universals of Korean by comparing it objectively with other languages that are typologically and genealogically different. I think the first is important, but the latter is more important. To meet the second condition, we analyzed the expansion structure of the NP in Korean and French, and suggested a new tree diagram for describing the NP structure of the two languages equivalently. As for VP structure, we suggested some possibilities of comparing the final endings in Korean with personal pronouns in French, and of comparing the prefinal ending 'si' in Korean with the second-person plural pronoun 'vous', etc. As a result of the comparison of Korean and French, we came to the conclusion that Korean is an inflectional agglutinative language while French is an agglutinative inflectional one; in other words, they are the same in 'typus' but different in 'topos'. This may be a surprising, unexpected conclusion, but we think it can lead us much closer to the nature of the two languages, Korean and French.

  • PDF

Language- Independent Sentence Boundary Detection with Automatic Feature Selection

  • Lee, Do-Gil
    • Journal of the Korean Data and Information Science Society / Vol.19 No.4 / pp.1297-1304 / 2008
  • This paper proposes a machine learning approach to language-independent sentence boundary detection. The proposed method requires no heuristic rules and no language-specific features, such as part-of-speech information or lists of abbreviations and proper names. With only the language-independent features, we perform experiments not only on an inflectional language but also on an agglutinative language with fairly different characteristics (in this paper, English and Korean, respectively), and obtain good performance in both. We have also tested the method under a wide range of experimental conditions, especially for the selection of useful features.

  • PDF
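A minimal sketch of the idea: classify '.' candidates using only language-independent features of the surrounding tokens. The toy training data and the simple count-based scorer are assumptions of this sketch, and the paper's automatic feature selection step is omitted.

```python
from collections import defaultdict

def feats(prev_tok, next_tok):
    """Language-independent features of the tokens around a '.' candidate."""
    return (
        ("prev_len", min(len(prev_tok), 3)),     # short tokens suggest abbreviations
        ("next_upper", next_tok[:1].isupper()),
        ("next_empty", next_tok == ""),
    )

# Hypothetical labelled examples: (token before '.', token after, is boundary).
TRAIN = [
    ("arrived", "The", True), ("Dr", "Lee", False),
    ("Mr", "Kim", False), ("done", "We", True),
    ("ends", "", True), ("vs", "them", False),
]

counts = defaultdict(lambda: [0, 0])  # feature -> [non-boundary, boundary] votes
for prev, nxt, y in TRAIN:
    for f in feats(prev, nxt):
        counts[f][int(y)] += 1

def predict(prev_tok, next_tok):
    """Majority vote over the learned per-feature counts."""
    score = sum(counts[f][1] - counts[f][0] for f in feats(prev_tok, next_tok))
    return score > 0
```

Because the features look only at token length, capitalization, and emptiness, the same trained model can in principle be applied to any language with word and sentence delimiters, which is the point of the paper's language-independent setup.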

딥러닝 사전학습 언어모델 기술 동향 (Recent R&D Trends for Pretrained Language Model)

  • 임준호;김현기;김영길
    • 전자통신동향분석 / Vol.35 No.3 / pp.9-19 / 2020
  • Recently, the technique of pretraining a deep learning language model on a large corpus and then fine-tuning it for each application task has been widely used in language processing. Pretrained language models show higher performance and better generalization than existing methods. This paper introduces the major research trends related to deep learning pretrained language models in the field of language processing. We describe in detail the motivations, models, learning methods, and results of the BERT language model, which had significant influence on subsequent studies. We then introduce the results of language model studies after BERT, focusing on SpanBERT, RoBERTa, ALBERT, BART, and ELECTRA. Finally, we introduce the KorBERT pretrained language model, which shows satisfactory performance for Korean. In addition, we introduce techniques for applying pretrained language models to Korean, an agglutinative language in which words are combinations of content and functional morphemes, unlike English, an inflectional language whose word endings change depending on use.
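The last point, feeding morpheme-segmented input to a Korean pretrained model, can be illustrated with a toy pre-tokenizer that separates functional morphemes (josa/eomi) from content morphemes before subword tokenization. The ending list and the "/J" tag below are purely illustrative; KorBERT's actual morpheme analysis is far richer and statistically learned.

```python
# Purely illustrative ending list; a real analyzer covers thousands of forms.
ENDINGS = ["에서", "에게", "은", "는", "이", "가", "을", "를", "다"]

def split_eojeol(eojeol):
    """Split an eojeol into a content morpheme and a tagged functional morpheme."""
    for e in sorted(ENDINGS, key=len, reverse=True):  # try longest endings first
        if eojeol.endswith(e) and len(eojeol) > len(e):
            return [eojeol[:-len(e)], e + "/J"]       # '/J' marks a functional morpheme
    return [eojeol]

def tokenize(sentence):
    out = []
    for w in sentence.split():
        out.extend(split_eojeol(w))
    return out

print(tokenize("학교에서 공부를 한다"))  # → ['학교', '에서/J', '공부', '를/J', '한', '다/J']
```

Keeping functional morphemes as separate units means the subword vocabulary is not fragmented by the many inflected surface forms of each content word, which is the motivation the survey attributes to morpheme-aware Korean models.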

구문요소의 전치에 기반한 문서 워터마킹 (Text Watermarking Based on Syntactic Constituent Movement)

  • 김미영
    • 정보처리학회논문지B / Vol.16B No.1 / pp.79-84 / 2009
  • This paper proposes a text watermarking method for Korean sentences based on the movement of syntactic constituents. Agglutinative languages such as Korean allow relatively free constituent order, which provides a good setting for syntax-tree-based natural language watermarking. The proposed method consists of seven steps. First, the sentence is syntactically parsed. Next, each clause is separated out of the syntax tree so that constituents are moved only within the scope of their own clause. Third, a target constituent is selected for movement. Fourth, the most natural landing position is determined so that moving the target constituent minimally changes the meaning and style of the sentence. Then, a watermark bit is assigned to the target constituent. Sixth, if the watermark bit does not correspond to the movement direction of the target constituent, the target constituent is moved within the syntax tree. Finally, the watermarked text is obtained from the transformed syntax tree. Experimental results show that the applicability of the proposed method is 91.53% and that the proportion of unnatural sentences among the watermarked output is 23.16%, better than existing systems. Moreover, the watermarked sentences preserve the style of the original sentences and convey the same information without semantic distortion.
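The embedding step, moving a constituent according to the watermark bit, can be sketched without a parser by treating a clause as an ordered list of constituents. The example clause and the choice of movable constituent and pivot are hypothetical; the actual system operates on full syntax trees and naturalness constraints.

```python
def embed_bit(constituents, movable, pivot, bit):
    """Place `movable` directly before `pivot` for bit 0, directly after it for bit 1."""
    rest = [c for c in constituents if c != movable]
    i = rest.index(pivot)
    rest.insert(i + bit, movable)
    return rest

def extract_bit(constituents, movable, pivot):
    """Recover the bit from the relative order of the two constituents."""
    return int(constituents.index(movable) > constituents.index(pivot))

# Hypothetical parsed clause: a time adjunct ("어제") that may move freely.
clause = ["어제", "나는", "도서관에서", "책을", "읽었다"]
marked = embed_bit(clause, "어제", "나는", 1)
print(marked, extract_bit(marked, "어제", "나는"))
```

Because both orders are grammatical in Korean, the carrier sentence stays natural either way, which is exactly the property the paper exploits.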

자율 학습에 의한 실질 형태소와 형식 형태소의 분리 (A Korean Language Stemmer based on Unsupervised Learning)

  • 조세형
    • 정보처리학회논문지B / Vol.8B No.6 / pp.675-684 / 2001
  • This paper describes a technique that separates Korean content morphemes from functional morphemes by unsupervised learning over an untagged raw corpus, so that the result can be used, for example, to extract index terms for information retrieval. The technique requires no linguistic resources such as a dictionary; only a raw corpus is needed. Because it uses unsupervised learning, no human intervention is required, so learning takes little time and effort. Since the method rests on well-established statistical methodology rather than theoretically ungrounded heuristics, it has a solid foundation and is easy to extend and improve. The method was first applied to Korean, but since it is not specific to Korean, it should be easily applicable to other agglutinative languages.

  • PDF

A Study of Morphological Errors in Aphasic Language

  • Kim, Heui-Beom
    • 음성과학 / Vol.1 / pp.227-236 / 1997
  • How do aphasics deal with the inflectional marking that occurs in agglutinative languages like Korean? Speech repetition, comprehension, and production were studied in three Broca's aphasic speakers of Korean. As experimental materials, 100 easy sentences were chosen from first-grade Korean elementary-school textbooks on reading, writing, and listening, and two pictures were made from each sentence. The study examines the use of three kinds of inflectional marking: past tense, nominative case, and accusative case. The analysis focuses on whether each inflectional marking was performed well in the repetition, comprehension, and production tasks. In addition, the morphological errors associated with each inflectional marking were analyzed from the viewpoint of markedness. In general, the aphasic subjects showed clear preservation of the morphological aspects of their native language, so the view of Broca's aphasics as agrammatical could not be strongly supported. It is suggested that nominative case and accusative case are marked elements in Korean.

  • PDF

TAKTAG: 통계와 규칙에 기반한 2단계 학습을 통한 품사 중의성 해결 (TAKTAG: Two phase learning method for hybrid statistical/rule-based part-of-speech disambiguation)

  • 신상현;이근배;이종혁
    • 한국정보과학회 언어공학연구회:학술대회논문집(한글 및 한국어 정보처리) / 한국정보과학회언어공학연구회 1995년도 제7회 한글 및 한국어 정보처리 학술대회 / pp.169-174 / 1995
  • Part-of-speech tagging resolves the ambiguity that remains after morphological analysis, and both statistical and rule-based methods are widely used, but each has its limits. The statistical Hidden Markov Model (HMM) offers flexibility, but for an agglutinative language such as Korean its limited window sometimes fails to reach the words or tags that hold the clue to disambiguation. Rule-based methods, on the other hand, depend on the part-of-speech inventory itself, and so provide neither flexibility nor accuracy for a new tagset or a new language. To overcome these complementary limitations, this paper proposes a Korean tagging model that integrates statistics and rules: after a statistical model is trained, rules are learned automatically in a second phase, generating rules that cover the cases the statistical model cannot handle. By combining these two automatic learning phases, statistical and rule-based, we were able to develop a highly accurate Korean tagger that compensates for the weaknesses of each model.

  • PDF
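The two-phase idea, a statistical tagger followed by automatically learned correction rules, can be sketched as follows. All probabilities, tags, and the single hand-written correction rule are illustrative stand-ins, not values from the paper; in the actual system both the HMM parameters and the rules are learned from a corpus.

```python
import math

STATES = ["N", "J", "V"]
# Illustrative toy bigram/emission probabilities; a real tagger estimates these.
TRANS = {("<s>", "N"): 0.8, ("<s>", "V"): 0.2,
         ("N", "J"): 0.6, ("N", "V"): 0.3, ("N", "N"): 0.1,
         ("J", "V"): 0.7, ("J", "N"): 0.3,
         ("V", "N"): 0.5, ("V", "V"): 0.2, ("V", "J"): 0.3}
EMIT = {("나", "N"): 0.9, ("는", "J"): 0.9, ("먹", "V"): 0.8,
        ("먹", "N"): 0.1, ("밥", "N"): 0.9, ("을", "J"): 0.9}
LOW = 1e-6  # smoothing for unseen events

def viterbi(words):
    """Phase 1: best tag sequence under the bigram HMM."""
    paths = {s: (math.log(TRANS.get(("<s>", s), LOW) * EMIT.get((words[0], s), LOW)),
                 [s]) for s in STATES}
    for w in words[1:]:
        new = {}
        for s in STATES:
            p = max(STATES, key=lambda q: paths[q][0] + math.log(TRANS.get((q, s), LOW)))
            score = (paths[p][0] + math.log(TRANS.get((p, s), LOW))
                     + math.log(EMIT.get((w, s), LOW)))
            new[s] = (score, paths[p][1] + [s])
        paths = new
    return max(paths.values(), key=lambda v: v[0])[1]

def apply_rules(words, tags, rules):
    """Phase 2: contextual corrections; each rule is (word, old_tag, prev_tag, new_tag)."""
    for word, old, prev, new in rules:
        for i in range(1, len(words)):
            if words[i] == word and tags[i] == old and tags[i - 1] == prev:
                tags[i] = new
    return tags
```

The second phase retags exactly the contexts where the HMM's limited window goes wrong, which is the hybrid design the abstract describes.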