• Title/Summary/Keyword: agglutinative language


A Machine Learning Approach to Korean Language Stemming

  • Cho, Se-hyeong
    • Journal of the Korean Institute of Intelligent Systems / v.11 no.6 / pp.549-557 / 2001
  • Morphological analysis and POS tagging require a dictionary for the language at hand. Without a dictionary, however, it is impossible to analyze a language in this fashion, and difficulties also arise when a significant portion of the vocabulary is new or unknown. This paper explores the possibility of learning the morphology of an agglutinative language, in particular Korean, without any prior lexical knowledge of the language. We use unsupervised learning in that there is no instructor to guide the outcome of the learner, nor any tagged corpus. The main characteristics of the approach are as follows: First, we use only a raw corpus, without any tags attached and without a dictionary. Second, unlike many heuristics that are theoretically ungrounded, the method is based on widely accepted statistical methods. The method is currently applied only to Korean, but since it is essentially language-neutral it can easily be adapted to other agglutinative languages.


Korean Semantic Annotation on the EXCOM Platform

  • Chai, Hyun-Zoo;Djioua, Brahim;Priol, Florence Le;Descles, Jean-Pierre
    • Proceedings of the Korean Society for Language and Information Conference / 2007.11a / pp.548-556 / 2007
  • We present an automatic semantic annotation system for Korean on the EXCOM (EXploration COntextual for Multilingual) platform. The purpose of natural language processing is to enable computers to understand human language so that they can perform more sophisticated tasks. Accordingly, current research concentrates more and more on extracting semantic information. The realization of semantic processing requires the widespread annotation of documents. However, compared with inflectional languages, the processing technology for agglutinative languages such as Korean still has shortcomings. EXCOM identifies semantic information in Korean text using our new method, the Contextual Exploration Method. Our initial system properly annotates approximately 88% of standard Korean sentences, and this annotation rate holds across text domains.


KOREAN TOPIC MODELING USING MATRIX DECOMPOSITION

  • June-Ho Lee;Hyun-Min Kim
    • East Asian mathematical journal / v.40 no.3 / pp.307-318 / 2024
  • This paper explores the application of matrix factorization, specifically CUR decomposition, in the clustering of Korean language documents by topic. It addresses the unique challenges of Natural Language Processing (NLP) in dealing with the Korean language's distinctive features, such as agglutinative words and morphological ambiguity. The study compares the effectiveness of Latent Semantic Analysis (LSA) using CUR decomposition with the classical Singular Value Decomposition (SVD) method in the context of Korean text. Experiments are conducted using Korean Wikipedia documents and newspaper data, providing insight into the accuracy and efficiency of these techniques. The findings demonstrate the potential of CUR decomposition to improve the accuracy of document clustering in Korean, offering a valuable approach to text mining and information retrieval in agglutinative languages.
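The comparison the abstract draws between CUR decomposition and truncated SVD can be sketched numerically. Below is a minimal, illustrative sketch in Python, not the authors' implementation: the toy matrix, the rank, and the norm-proportional sampling scheme are all assumptions. The point it demonstrates is the trade-off named in the abstract: CUR builds its approximation from actual columns and rows of the matrix (so the factors remain interpretable as real terms and documents), while truncated SVD attains the optimal low-rank error.

```python
import numpy as np

def cur_decomposition(A, k, rng):
    """Simple randomized CUR: pick k columns and k rows with probability
    proportional to their squared Euclidean norms, then choose U so that
    C @ U @ R approximates A."""
    col_p = (A ** 2).sum(axis=0) / (A ** 2).sum()
    row_p = (A ** 2).sum(axis=1) / (A ** 2).sum()
    cols = rng.choice(A.shape[1], size=k, replace=False, p=col_p)
    rows = rng.choice(A.shape[0], size=k, replace=False, p=row_p)
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R

rng = np.random.default_rng(0)
# Toy term-document count matrix: rows = terms, columns = documents.
A = rng.poisson(1.0, size=(12, 8)).astype(float)

# Rank-3 truncated SVD (classical LSA): optimal rank-3 approximation.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_svd = U[:, :3] @ np.diag(s[:3]) @ Vt[:3, :]

# Rank-(at most)-3 CUR approximation from real columns/rows.
C, Um, R = cur_decomposition(A, k=3, rng=rng)
A_cur = C @ Um @ R

err_svd = np.linalg.norm(A - A_svd)
err_cur = np.linalg.norm(A - A_cur)
print(err_svd, err_cur)  # SVD error is provably no larger than CUR's
```

By the Eckart-Young theorem the SVD error is a lower bound for any rank-3 approximation, including CUR's; what CUR buys in exchange is that `C` and `R` are actual documents and terms rather than abstract singular directions.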

A Critical Review of Language Typology: for the subjecthood of Korean linguistics (언어유형론의 비판적 고찰 한국어는 교착어, 불어는 굴절어라는 것의 의미를 묻다)

  • 목정수
    • Lingua Humanitatis / v.6 / pp.185-211 / 2004
  • Korean linguistics, or linguistics in Korea, has an inborn limitation: on the one hand it was imported from Europe and Japan, and on the other hand American linguistics now takes the initiative in Korea. That is why Korean linguistics cannot be free of problems such as 'dependence/independence' and 'central/marginal'. Two conditions must be met in this situation to study the nature of Korean itself and to establish the independence of Korean linguistics. The first is that we should reveal some peculiarities of Korean in itself. The second is that we should reveal the universals of Korean by comparing it objectively with other languages that are typologically and genealogically different. I think the first is important, but the latter is more important. To meet the second condition, we analysed the expansion structure of the NP in Korean and French and suggested a new tree diagram for describing the NP structure of the two languages equivalently. As for VP structure, we suggested some possibilities of comparing the final endings in Korean with personal pronouns in French, and of comparing the prefinal ending 'si' in Korean with the second-person plural pronoun 'vous', etc. As a result of this comparison, we came to the conclusion that Korean is an inflectional agglutinative language while French is an agglutinative inflectional one; in other words, they are the same in 'typus' but different in 'topos'. This may be a surprising and unexpected conclusion, but we think it can lead us much closer to the nature of the two languages, Korean and French.


Language-Independent Sentence Boundary Detection with Automatic Feature Selection

  • Lee, Do-Gil
    • Journal of the Korean Data and Information Science Society / v.19 no.4 / pp.1297-1304 / 2008
  • This paper proposes a machine learning approach to language-independent sentence boundary detection. The proposed method requires neither heuristic rules nor language-specific features such as part-of-speech information, lists of abbreviations, or proper names. Using only language-independent features, we perform experiments on both an inflectional language and an agglutinative language with fairly different characteristics (in this paper, English and Korean, respectively), and obtain good performance in both. We have also tested the method under a wide range of experimental conditions, especially for the selection of useful features.
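What a "language-independent feature" for boundary detection can look like is easy to sketch. The toy below is not the paper's feature set or learner (which uses automatic feature selection); it only illustrates the idea that coarse character classes and token lengths, rather than abbreviation lists or POS tags, can separate sentence-final periods from abbreviation periods. The training sentence, feature shapes, and count-based decision are all assumptions for illustration.

```python
from collections import Counter

def char_class(c):
    """Coarse, language-independent character class."""
    if c.isspace():
        return "SPACE"
    if c.isdigit():
        return "DIGIT"
    if c.isalpha():
        return "UPPER" if c.isupper() else "LOWER"
    return "PUNCT"

def prev_token_len(text, i, cap=4):
    """Length of the token immediately before position i, capped."""
    j = i
    while j > 0 and not text[j - 1].isspace():
        j -= 1
    return min(i - j, cap)

def features(text, i):
    nxt = text[i + 1] if i + 1 < len(text) else " "
    nxt2 = text[i + 2] if i + 2 < len(text) else " "
    return (prev_token_len(text, i), char_class(nxt), char_class(nxt2))

# Tiny supervised step: offsets of the periods that really end a sentence.
train = [("Dr. Smith arrived. He sat down.", {17, 30})]
votes = {True: Counter(), False: Counter()}
for text, ends in train:
    for i, c in enumerate(text):
        if c == ".":
            votes[i in ends][features(text, i)] += 1

def is_boundary(text, i):
    f = features(text, i)
    return votes[True][f] > votes[False][f]

test_text = "Mr. Lee spoke. Everyone listened."
found = [i for i, c in enumerate(test_text) if c == "." and is_boundary(test_text, i)]
print(found)  # [13, 32]: the abbreviation period after "Mr" is correctly skipped
```

Because nothing in the features names a specific abbreviation or alphabet, the same code path would run unchanged over Korean text, which is the paper's point.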


Recent R&D Trends for Pretrained Language Model (딥러닝 사전학습 언어모델 기술 동향)

  • Lim, J.H.;Kim, H.K.;Kim, Y.K.
    • Electronics and Telecommunications Trends / v.35 no.3 / pp.9-19 / 2020
  • Recently, the technique of pretraining a deep learning language model on a large corpus and then fine-tuning it for each application task has been widely used in language processing. Pretrained language models show higher performance and better generalization than existing methods. This paper introduces the major research trends related to deep learning pretrained language models in the field of language processing. We describe in detail the motivations, models, learning methods, and results of the BERT language model, which has had a significant influence on subsequent studies. We then introduce the results of language model studies after BERT, focusing on SpanBERT, RoBERTa, ALBERT, BART, and ELECTRA. Finally, we introduce the KorBERT pretrained language model, which shows satisfactory performance on Korean. In addition, we introduce techniques for applying pretrained language models to Korean, an agglutinative language whose words combine content and functional morphemes, unlike English, an inflectional language whose word endings change through conjugation.
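The closing point, that a Korean word combines a content morpheme with functional morphemes, is why subword tokenization matters so much for pretrained models of Korean. A minimal greedy longest-match, WordPiece-style tokenizer makes this concrete; note that the tiny vocabulary below is invented for illustration and is not KorBERT's actual vocabulary or tokenization pipeline. With morpheme-shaped entries, the eojeol 먹었다 ("ate") splits cleanly into stem plus endings:

```python
def wordpiece(word, vocab):
    """Greedy longest-match subword tokenization: non-initial pieces
    carry the '##' continuation prefix, as in BERT-style tokenizers."""
    pieces, i = [], 0
    while i < len(word):
        j = len(word)
        while j > i:
            piece = word[i:j] if i == 0 else "##" + word[i:j]
            if piece in vocab:
                pieces.append(piece)
                i = j
                break
            j -= 1
        else:  # no piece of any length matched: the word is unknown
            return ["[UNK]"]
    return pieces

# Invented toy vocabulary: a content stem plus functional endings/particles.
vocab = {"먹", "##었", "##다", "학교", "##에", "##서"}
print(wordpiece("먹었다", vocab))    # ['먹', '##었', '##다']
print(wordpiece("학교에서", vocab))  # ['학교', '##에', '##서']
```

A vocabulary aligned with morpheme boundaries keeps the content morpheme (먹, 학교) as one reusable unit across all its inflected forms, instead of fragmenting it differently in every surface word.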

Text Watermarking Based on Syntactic Constituent Movement (구문요소의 전치에 기반한 문서 워터마킹)

  • Kim, Mi-Young
    • The KIPS Transactions:PartB / v.16B no.1 / pp.79-84 / 2009
  • This paper explores a method of text watermarking for agglutinative languages and develops a syntactic-tree-based constituent movement scheme. Agglutinative languages provide good ground for syntactic-tree-based natural language watermarking because constituent order is relatively free. The proposed method consists of seven procedures. First, we construct a syntactic dependency tree of the unmarked text. Next, we perform clausal segmentation on the syntactic tree. Third, we choose target syntactic constituents, which will be moved within their clauses. Fourth, we determine the movement direction of the target constituents. Then, we embed a watermark bit for each target constituent. Sixth, if the watermark bit does not coincide with the direction of the target constituent movement, we displace the target constituent in the syntactic tree. Finally, from the modified syntactic tree, we obtain the marked text. Experimental results show that the coverage of our method is 91.53% and that the rate of unnatural sentences in marked text is 23.16%, which is better than that of previous systems. They also show that the marked text keeps the same style and carries the same information without semantic distortion.
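The core of steps four to six, embedding a bit and moving a constituent only when the bit disagrees with its current direction, can be illustrated with a much simpler stand-in for the paper's dependency-tree scheme. Since preverbal adjuncts in an agglutinative language can scramble fairly freely, their relative order can carry bits. In this toy (the encoding convention is an assumption, not the paper's), bit i is 0 exactly when the i-th canonical adjunct precedes the (i+1)-th in the marked sentence:

```python
def embed_bits(adjuncts, bits):
    """Reorder movable adjuncts so that bit i is 0 iff adjuncts[i]
    precedes adjuncts[i+1] in the output order."""
    assert len(bits) == len(adjuncts) - 1
    order = [adjuncts[0]]
    for i, bit in enumerate(bits):
        pos = order.index(adjuncts[i])
        # bit 0: keep canonical relative order; bit 1: displace leftwards.
        order.insert(pos + 1 if bit == 0 else pos, adjuncts[i + 1])
    return order

def extract_bits(adjuncts, order):
    """Read the watermark back from the relative order of adjunct pairs."""
    return [0 if order.index(adjuncts[i]) < order.index(adjuncts[i + 1]) else 1
            for i in range(len(adjuncts) - 1)]

# Canonical (unmarked) adjunct order in one toy SOV clause.
canonical = ["yesterday", "at-school", "quietly"]
marked = embed_bits(canonical, [1, 0])
print(marked)                           # ['at-school', 'quietly', 'yesterday']
print(extract_bits(canonical, marked))  # [1, 0]
```

Extraction needs only the canonical order and the marked text, mirroring the blind-extraction property that order-based natural language watermarking aims for; the real method additionally constrains movements to a clause within the dependency tree so the result stays grammatical.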

A Korean Language Stemmer based on Unsupervised Learning (자율 학습에 의한 실질 형태소와 형식 형태소의 분리)

  • Jo, Se-Hyeong
    • The KIPS Transactions:PartB / v.8B no.6 / pp.675-684 / 2001
  • This paper describes a method for stemming of Korean language by using unsupervised learning from raw corpus. This technique does not require a lexicon or any language-specific knowledge. Since we use unsupervised learning, the time and effort required for learning is negligible. Unlike heuristic approaches that are theoretically ungrounded, this method is based on widely accepted statistical methods, and therefore can be easily extended. The method is currently applied only to Korean language, but it can easily be adapted to other agglutinative languages, since it is not language-dependent.
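A classic statistic for exactly this kind of unsupervised stem/ending segmentation is successor variety (due to Harris): a position where many different characters can follow a prefix across the corpus is a likely morpheme boundary, because a content stem can combine with many functional endings. The sketch below uses invented romanized toy data and a simple threshold; the paper's actual statistical model is not reproduced here, only the general idea it builds on.

```python
from collections import defaultdict

def successor_variety(words):
    """For every prefix seen in the corpus, count distinct next characters."""
    nexts = defaultdict(set)
    for w in words:
        for i in range(1, len(w)):
            nexts[w[:i]].add(w[i])
    return {p: len(s) for p, s in nexts.items()}

def split_word(word, variety, threshold=2):
    """Split a word at the first internal position whose prefix shows
    high successor variety: a crude stem/ending boundary guess."""
    for i in range(2, len(word)):
        if variety.get(word[:i], 0) >= threshold:
            return word[:i], word[i:]
    return word, ""

# Toy romanized corpus: stems 'mek' ('eat') and 'cap' ('catch')
# each combine with the endings -ta, -ko, -uni.
corpus = ["mekta", "mekko", "mekuni", "capta", "capko", "capuni"]
variety = successor_variety(corpus)
print(split_word("mekta", variety))  # ('mek', 'ta')
print(split_word("capko", variety))  # ('cap', 'ko')
```

After the prefix "mek" the corpus shows three different continuations (t, k, u), so the variety peaks there and the stem/ending boundary is found with no dictionary at all, which is the property the abstract emphasizes.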


A Study of Morphological Errors in Aphasic Language

  • Kim, Heui-Beom
    • Speech Sciences / v.1 / pp.227-236 / 1997
  • How do aphasics deal with the inflectional marking that occurs in agglutinative languages like Korean? Korean speech repetition, comprehension, and production were studied in three Broca's aphasic speakers of Korean. As experimental materials, 100 easy sentences were chosen from first-grade Korean elementary school textbooks on reading, writing, and listening, and two pictures were made from each sentence. This study examines the use of three kinds of inflectional marking: past tense, nominative case, and accusative case. The analysis focuses on whether each inflectional marking was performed well in the repetition, comprehension, and production tasks. In addition, morphological errors involving each inflectional marking were analyzed in terms of markedness. In general, the aphasic subjects showed a clear preservation of the morphological aspects of their native language, so the view of Broca's aphasics as agrammatical could not be strongly supported. It can be suggested that nominative case and accusative case are marked elements in Korean.


TAKTAG: Two phase learning method for hybrid statistical/rule-based part-of-speech disambiguation (TAKTAG: 통계와 규칙에 기반한 2단계 학습을 통한 품사 중의성 해결)

  • Shin, Sang-Hyun;Lee, Geun-Bae;Lee, Jong-Hyeok
    • Annual Conference on Human and Language Technology / 1995.10a / pp.169-174 / 1995
  • Part-of-speech tagging resolves the ambiguities that remain after morphological analysis; statistical and rule-based methods are both widely used, but each has its limitations. The statistical Hidden Markov Model is flexible, but for an agglutinative language such as Korean its limited window means it sometimes fails to consult the words or parts of speech that hold the clue to resolving an ambiguity. Rule-based methods, on the other hand, are tied to a particular tagset and therefore provide neither flexibility nor accuracy for a new tagset or a new language. To overcome the limitations of these two approaches, this paper proposes a Korean tagging model that integrates statistics and rules: after the statistical model is trained, rules are learned automatically in a second phase, generating rules that cover cases the statistical model cannot handle. Through this two-phase automatic learning of statistics and rules, we were able to develop a highly accurate Korean tagger that compensates for the weaknesses of both models.
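The two-phase idea, a statistical model first and then automatically learned rules that patch its residual errors, can be sketched in a few lines. This is a deliberately tiny stand-in: a most-frequent-tag unigram model replaces the HMM, the rules are Brill-style contextual transformations, and the English data and tagset are invented for illustration.

```python
from collections import Counter, defaultdict

# Invented training data: (word, gold tag) pairs per sentence.
train = [
    [("time", "N"), ("flies", "V"), ("fast", "ADV")],
    [("fruit", "N"), ("flies", "N"), ("eat", "V"), ("fruit", "N")],
    [("fruit", "N"), ("flies", "N"), ("fast", "ADV")],
]

# Phase 1 (statistics): each word gets its most frequent training tag.
freq = defaultdict(Counter)
for sent in train:
    for w, t in sent:
        freq[w][t] += 1
unigram = {w: c.most_common(1)[0][0] for w, c in freq.items()}

def tag(words, rules):
    tags = [unigram.get(w, "N") for w in words]
    for prev_word, old, new in rules:  # rule: "old -> new after prev_word"
        for i in range(1, len(words)):
            if words[i - 1] == prev_word and tags[i] == old:
                tags[i] = new
    return tags

def net_gain(rule):
    """Corrections minus new errors the rule would cause on the training set."""
    gain = 0
    for sent in train:
        words, gold = zip(*sent)
        before, after = tag(list(words), []), tag(list(words), [rule])
        gain += sum(a == g for a, g in zip(after, gold))
        gain -= sum(b == g for b, g in zip(before, gold))
    return gain

# Phase 2 (rules): propose a rule for each residual error of phase 1,
# and keep only rules whose net gain on the training data is positive.
candidates = Counter()
for sent in train:
    words, gold = zip(*sent)
    pred = tag(list(words), [])
    for i in range(1, len(words)):
        if pred[i] != gold[i]:
            candidates[(words[i - 1], pred[i], gold[i])] += 1
rules = [r for r in candidates if net_gain(r) > 0]

print(tag(["time", "flies"], rules))   # ['N', 'V']: the learned rule fires
print(tag(["fruit", "flies"], rules))  # ['N', 'N']: statistics alone suffice
```

The net-gain check is what keeps phase 2 from undoing what phase 1 already gets right: a rule is adopted only if it fixes more statistical-model errors than it introduces, mirroring the complementary division of labor the paper describes.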
