• Title/Summary/Keyword: agglutinative language


The Formation and Alternation of Sino-Korean Pronunciation (조선한자음(朝鮮漢字音)의 성립(成立)과 변천(變遷))

  • Chung, Kwang
    • Lingua Humanitatis
    • /
    • v.7
    • /
    • pp.31-56
    • /
    • 2005
  • In most parts of Asia, Chinese writing and Chinese characters served as the sole recording device. Two circumstances explain this. First, the peoples living around Chinese territory spoke languages typologically unlike Chinese (agglutinative rather than isolating) and had established no independent writing systems of their own. Second, through frequent cultural and economic exchange with the Chinese, these peoples adopted the Chinese writing system. The same linguistic situation prevailed on the Korean peninsula. Its inhabitants, having learned to use Chinese writing, applied this second-language knowledge to record matters related to the management of the country. But the grammatical structures of written Chinese and the native language contrasted sharply, so the people of the peninsula had to manage a particular kind of letter system, in other words, a discrepancy between language and writing. This difference strongly influenced how Chinese writing and characters were used in Korea. Some historical linguists of Korean regard the alternation of the Chinese writing system and characters as a "procedure of nativization," in which the characters that flowed into Korean and the same characters in continuous use in China came to show a large gap in their phonological aspects. The resulting way of reading Chinese characters came to be called Sino-Korean pronunciation. Within the categorization of Chinese character pronunciations, it is also classified as the Eastern Pronunciation (東音): the sound of Chinese characters as historically adapted to the phonological system of Korean.
This paper surveys the process by which Chinese writing and characters were received in Korea, and the establishment and alternation of their Korean phonetic features.


Performance Analysis of n-Gram Indexing Methods for Korean text Retrieval (한글 문서 검색에서 n-Gram 색인방법의 성능 분석)

  • 이준규;심수정;박혁로
    • Proceedings of the IEEK Conference
    • /
    • 2003.11b
    • /
    • pp.145-148
    • /
    • 2003
  • The agglutinative nature of Korean makes automatic indexing of Korean text quite different from indexing Indo-European languages. Indexing compound nouns in Korean is especially problematic because of the exponential number of possible analyses and the existence of unknown words. To deal with this compound-noun indexing problem, we propose a new indexing method that combines the merits of morpheme-based and n-gram-based indexing. Through experiments, we also find that the best performance among n-gram indexing methods is achieved with 1.75-grams, a setting not considered in previous research.
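The basic building block behind such methods, extracting overlapping character (syllable) n-grams from each word as index terms, can be sketched as follows. This is plain syllable-bigram extraction for illustration only; it does not reproduce the paper's 1.75-gram scheme or its morpheme/n-gram hybrid.

```python
def char_ngrams(text, n=2):
    """Collect overlapping character n-grams from each word; a word
    shorter than n is kept whole as a single index term."""
    grams = []
    for word in text.split():
        if len(word) < n:
            grams.append(word)
        else:
            grams.extend(word[i:i + n] for i in range(len(word) - n + 1))
    return grams

# Syllable bigrams for a Korean phrase ("information retrieval system").
print(char_ngrams("정보검색 시스템"))  # ['정보', '보검', '검색', '시스', '스템']
```

Because a bigram like '검색' is generated from any compound containing it, n-gram indexing sidesteps compound-noun segmentation and unknown words, at the cost of a larger index.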


A Conceptual Framework for Korean-English Machine Translation using Expression Patterns (표현 패턴에 의한 한국어-영어 기계 번역을 위한 개념 구성)

  • Lee, Ho-Suk
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2008.06c
    • /
    • pp.236-241
    • /
    • 2008
  • This paper discusses a Korean-English machine translation method using expression patterns. The expression patterns are defined to align Korean expressions with appropriate English expressions in both semantic and expressive senses. The paper also proposes a new Korean syntax analysis method that exploits the agglutinative characteristics of Korean, the expression pattern concept, the sentence partition concept, and the incorporation of semantic structures into the parsing process. We define a simple Korean grammar to show the feasibility of the new syntax analysis method.
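The idea of aligning a Korean expression pattern with an English template can be illustrated with a toy sketch. The single pattern and two-entry lexicon below are invented for illustration; they are not the paper's pattern set or grammar.

```python
import re

# Hypothetical bilingual lexicon for filling matched slots.
LEXICON = {"고양이": "the cat", "동물": "an animal"}

PATTERNS = [
    # Korean copular pattern "X는 Y이다" aligned with English "X is Y".
    (re.compile(r"^(?P<x>\S+?)는 (?P<y>\S+?)이다$"), "{x} is {y}"),
]

def translate(sentence):
    """Match an expression pattern, then translate each slot lexically."""
    for pattern, template in PATTERNS:
        m = pattern.match(sentence)
        if m:
            slots = {k: LEXICON.get(v, v) for k, v in m.groupdict().items()}
            return template.format(**slots)
    return None  # no pattern applies

print(translate("고양이는 동물이다"))  # the cat is an animal
```

The pattern fixes the structural alignment between the two languages, while slot filling remains an ordinary lexical lookup, which is the division of labor the expression-pattern approach relies on.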


Word Similarity Calculation by Using the Edit Distance Metrics with Consonant Normalization

  • Kang, Seung-Shik
    • Journal of Information Processing Systems
    • /
    • v.11 no.4
    • /
    • pp.573-582
    • /
    • 2015
  • Edit distance metrics are widely used in applications such as string comparison and spelling error correction. Hamming distance is a metric for two strings of equal length, and Damerau-Levenshtein distance is a well-known metric for making spelling corrections through string-to-string comparison. These distance metrics are appropriate for alphabetic languages such as English and other European languages. However, the conventional edit distance criterion is not the best method for agglutinative languages like Korean, because two or more letter units combine into a single Korean character, called a syllable. This syllable-based word construction makes a naive edit distance calculation inefficient. We therefore explore a new edit distance method using consonant normalization and a normalization factor.
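Why syllable-level edit distance is too coarse for Korean can be shown by decomposing precomposed Hangul syllables (U+AC00 to U+D7A3) into their jamo via standard Unicode arithmetic and comparing distances at both levels. This sketch uses plain Levenshtein distance on jamo strings; it does not implement the paper's consonant normalization or its normalization factor.

```python
def to_jamo(word):
    """Decompose precomposed Hangul syllables into lead/vowel/tail jamo."""
    out = []
    for ch in word:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:  # precomposed syllable block
            lead, vowel, tail = code // 588, (code % 588) // 28, code % 28
            out.append(chr(0x1100 + lead))   # choseong (initial consonant)
            out.append(chr(0x1161 + vowel))  # jungseong (vowel)
            if tail:
                out.append(chr(0x11A7 + tail))  # jongseong (final consonant)
        else:
            out.append(ch)
    return "".join(out)

def edit_distance(a, b):
    """Standard Levenshtein distance with a rolling one-row DP table."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

# At the syllable level both pairs differ by 1 character, but the jamo
# level reveals that 강/감 differ in one consonant and 강/산 in two.
print(edit_distance("강", "감"), edit_distance("강", "산"))            # 1 1
print(edit_distance(to_jamo("강"), to_jamo("감")),
      edit_distance(to_jamo("강"), to_jamo("산")))                     # 1 2
```

This finer granularity is what syllable-aware metrics exploit: a one-jamo substitution is a much smaller change than replacing a whole syllable, yet a conventional character-level metric scores them identically.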

Error Correction in Korean Morpheme Recovery using Deep Learning (딥 러닝을 이용한 한국어 형태소의 원형 복원 오류 수정)

  • Hwang, Hyunsun;Lee, Changki
    • Journal of KIISE
    • /
    • v.42 no.11
    • /
    • pp.1452-1458
    • /
    • 2015
  • Korean morphological analysis is difficult. Because Korean is an agglutinative language, one of its most important subtasks is morpheme recovery. Methods using heuristic rules and pre-analyzed partial words have been examined for this task, but their performance is limited because they do not use contextual information. In this study, we built a Korean morpheme recovery system using deep learning, with word embeddings to exploit contextual information. On recovering the '들/VV' and '듣/VV' morphemes, the system achieved 97.97% accuracy, outperforming an SVM (Support Vector Machine) baseline at 96.22%.

The Review about the Development of Korean Linguistic Inquiry and Word Count (언어적 특성을 이용한 '심리학적 한국어 글분석 프로그램(KLIWC)' 개발 과정에 대한 고찰)

  • Lee Chang H.;Sim Jung-Mi;Yoon Aesun
    • Korean Journal of Cognitive Science
    • /
    • v.16 no.2
    • /
    • pp.93-121
    • /
    • 2005
  • A substantial body of research has accumulated around using linguistic style as the dependent measure in psychological research. This research was conducted to develop a Korean text analysis program (KLIWC) based on the English text analysis program LIWC (Linguistic Inquiry and Word Count); the program reflects Korean linguistic characteristics and language-related culture. Linguistic tagging makes it possible to analyze agglutinative phrases composed of many morphemes, and a base-form dictionary and inflection rules were built. In addition, face-saving words and emotional words were included as analysis variables. The development process and characteristics of Korean text analysis are reviewed, and future directions for improving the program are discussed.


Korean Semantic Role Labeling Based on Suffix Structure Analysis and Machine Learning (접사 구조 분석과 기계 학습에 기반한 한국어 의미 역 결정)

  • Seok, Miran;Kim, Yu-Seop
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.11
    • /
    • pp.555-562
    • /
    • 2016
  • Semantic role labeling (SRL) determines the semantic relations between a predicate and its arguments in a sentence. Korean SRL has faced difficulties because the structure of Korean differs from that of English, which makes it hard to apply the approaches developed so far; the methods proposed to date have not matched the performance achieved for English and Chinese. To address this, we focus on suffix analysis, namely josa (case suffix) and eomi (verbal ending) analysis. Korean, like Japanese, is an agglutinative language with a well-defined suffix structure, which in turn allows relatively free word order. Arguments consisting of a single morpheme are labeled using statistics, and machine learning algorithms such as Support Vector Machines (SVM) and Conditional Random Fields (CRF) are used to model SRL for the arguments left unlabeled by the suffix analysis phase. The proposed method narrows the range of argument instances to which machine learning must be applied, reducing uncertain and inaccurate role labeling. In experiments on 15,224 arguments, we obtain an F1-score of approximately 83.24%, an increase of about 4.85 percentage points over the state-of-the-art Korean SRL research.
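The suffix-driven first pass can be sketched as a lookup from trailing josa to a coarse role label, leaving unmatched words for the machine-learning phase. The suffix-to-role table below is illustrative; the paper's actual label set and per-suffix statistics are not reproduced.

```python
# Hypothetical mapping from common case suffixes (josa) to coarse roles.
JOSA_ROLES = {
    "이": "ARG0", "가": "ARG0",   # nominative: typically the agent
    "을": "ARG1", "를": "ARG1",   # accusative: typically the patient
    "에서": "ARGM-LOC",           # locative adjunct
}

def label_by_josa(eojeol):
    """Label a word by its trailing josa; longest suffix wins.
    Returns None for words the suffix pass cannot decide, which would
    then be handed to a statistical model such as an SVM or CRF."""
    for josa in sorted(JOSA_ROLES, key=len, reverse=True):
        if eojeol.endswith(josa):
            return JOSA_ROLES[josa]
    return None

# "Cheolsu read a book at school": agent, location, patient, predicate.
print([label_by_josa(w) for w in "철수가 학교에서 책을 읽었다".split()])
```

Because the suffix pass confidently resolves the frequent, well-marked cases, the learned model only has to cover the residue, which is exactly the reduction in uncertain instances the paper aims for.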

Building a Korean Sentiment Lexicon Using Collective Intelligence (집단지성을 이용한 한글 감성어 사전 구축)

  • An, Jungkook;Kim, Hee-Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.49-67
    • /
    • 2015
  • The emergence of big data and social media has ushered in a data big bang. Social networking services are used by people around the world and have become a major communication tool for all ages. Over the last decade, as online social networking sites have grown increasingly popular, companies have focused on advanced social media analysis for their marketing strategies. Beyond analysis itself, companies are concerned about the propagation of negative opinions on social networking sites such as Facebook and Twitter, as well as on e-commerce sites. Online word of mouth (WOM), such as product ratings, reviews, and recommendations, is highly influential, and negative opinions have a significant impact on product sales. This trend has drawn researchers' attention to natural language processing tasks such as sentiment analysis. Sentiment analysis, also referred to as opinion mining, is the process of identifying the polarity of subjective information, and it has been applied in many research and practical fields. However, obstacles arise when Korean (Hangul) is processed, because it is an agglutinative language whose rich morphology poses problems. Korean natural language processing resources such as sentiment lexicons are therefore scarce, which has been a significant limitation for researchers and practitioners considering sentiment analysis. Our study builds a Korean sentiment lexicon using collective intelligence and provides an API (Application Programming Interface) service to open and share the lexicon data with the public (www.openhangul.com). For pre-processing, we created a Korean lexicon database of over 517,178 words and classified them into sentiment and non-sentiment words.
To classify them, we first identified stop words, which are likely to play a negative role in sentiment analysis, and excluded them from sentiment scoring. In general, sentiment words are nouns, adjectives, verbs, and adverbs, as these can express positive, neutral, or negative sentiment; non-sentiment words are interjections, determiners, numerals, postpositions, and so on, which generally carry no sentiment. To build a reliable sentiment lexicon, we adopted collective intelligence as a crowdsourcing model, and the concept of folksonomy was implemented in the taxonomy process to support it. To compensate for an inherent weakness of folksonomy, we adopted a majority rule by building a voting system. Participants were offered three voting options (positivity, negativity, and neutrality), and voting was conducted on one of the largest social networking sites for college students in Korea. More than 35,000 votes have been cast by Korean college students, and we keep the voting system open by maintaining the project as a perpetual study. Any change in a word's sentiment score is itself an important observation, because it lets us track temporal change in Korean as a natural language. Lastly, our study offers a RESTful, JSON-based API service through a web platform to support users such as researchers, companies, and developers. Our study makes contributions to both research and practice: the Korean sentiment lexicon serves as a resource for Korean natural language processing, and practitioners such as managers and marketers can implement sentiment analysis effectively by using it.
Moreover, our study sheds new light on the value of folksonomy combined with collective intelligence, and we expect it to give a new direction and a new start to the development of Korean natural language processing.
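The majority-rule aggregation over crowd votes can be sketched as below. The three vote options mirror the paper's choices, but the minimum-vote threshold and tie handling are assumptions made for illustration.

```python
from collections import Counter

def majority_label(votes, min_votes=3):
    """Decide a word's polarity by majority rule over crowd votes.
    Returns None when there are too few votes or no clear majority,
    leaving the word undecided until more votes arrive."""
    if len(votes) < min_votes:
        return None
    (top, top_n), *rest = Counter(votes).most_common()
    if rest and rest[0][1] == top_n:
        return None  # tie between the leading options
    return top

print(majority_label(["positive", "positive", "neutral"]))  # positive
```

Keeping undecided words open, rather than forcing a label, is what lets a perpetually running voting system also record how a word's perceived polarity drifts over time.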

Using Syntactic Unit of Morpheme for Reducing Morphological and Syntactic Ambiguity (형태소 및 구문 모호성 축소를 위한 구문단위 형태소의 이용)

  • Hwang, Yi-Gyu;Lee, Hyun-Young;Lee, Yong-Seok
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.7
    • /
    • pp.784-793
    • /
    • 2000
  • Conventional morphological analysis of Korean produces many morphological ambiguities because of the language's agglutinative nature. These ambiguities cause syntactic ambiguities and make it difficult to select the correct parse tree. The problem mainly involves auxiliary predicates and bound nouns in Korean, which have strong relationships with the surrounding morphemes, mostly functional morphemes that cannot stand alone. The combined morphemes play a syntactic or semantic role in the sentence. We extracted such morphemes from 0.2 million tagged words and classified them into three types. We call them syntactic morphemes and treat them as the input unit of syntactic analysis. This paper shows that the syntactic morpheme is an efficient means of 1) reducing morphological ambiguities, 2) eliminating unnecessary partial parse trees during parsing, and 3) reducing syntactic ambiguity. The experimental results show that the syntactic morpheme is an essential unit for reducing morphological and syntactic ambiguity.
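Grouping a run of functional morphemes into one syntactic unit before parsing can be sketched as pattern-based merging over a tagged morpheme sequence. The single pattern below (the long-form negation "-지 않-") and the merged "NEG" tag are illustrative; they do not reproduce the paper's three-type classification.

```python
# Hypothetical merge patterns over (morpheme, POS-tag) pairs.
MERGE_PATTERNS = [
    [("지", "EC"), ("않", "VX")],  # long-form negation "-ji anh-"
]

def merge_syntactic_morphemes(morphs):
    """Replace each matched morpheme sequence with one merged unit,
    so the parser sees a single token instead of an ambiguous run."""
    out, i = [], 0
    while i < len(morphs):
        for pat in MERGE_PATTERNS:
            if morphs[i:i + len(pat)] == pat:
                out.append(("".join(m for m, _ in pat), "NEG"))
                i += len(pat)
                break
        else:
            out.append(morphs[i])
            i += 1
    return out

# "did not eat": the negation pair collapses into one unit.
morphs = [("먹", "VV"), ("지", "EC"), ("않", "VX"), ("았", "EP"), ("다", "EF")]
print(merge_syntactic_morphemes(morphs))
```

Because the merged unit carries one role instead of several independently ambiguous morphemes, the parser generates fewer partial trees for the same sentence.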


Korean Head-Tail Tokenization and Part-of-Speech Tagging by using Deep Learning (딥러닝을 이용한 한국어 Head-Tail 토큰화 기법과 품사 태깅)

  • Kim, Jungmin;Kang, Seungshik;Kim, Hyeokman
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.17 no.4
    • /
    • pp.199-208
    • /
    • 2022
  • Korean is an agglutinative language in which one or more morphemes combine to form a single word. Part-of-speech tagging separates each morpheme within a word and attaches a part-of-speech tag. In this study, we propose a new Korean part-of-speech tagging method based on Head-Tail tokenization, which divides a word into a lexical-morpheme part and a grammatical-morpheme part without decomposing compound words. The head and tail are split at a syllable boundary, without restoring irregular deformations or abbreviated syllables. We implemented a Korean part-of-speech tagger using Head-Tail tokenization and deep learning. Because the segmented tags generate a large number of complex tags and lower the tagging accuracy, we reduced the tag set to complex tags composed of coarse classification tags, which improved accuracy. We evaluated the Head-Tail tagger with BERT, syllable-bigram, and subword-bigram embeddings; both syllable-bigram and subword-bigram embeddings improved performance over plain BERT. Integrating the Head-Tail tokenization model with the simplified tagging model, part-of-speech tagging achieved 98.99% word-unit accuracy and 99.08% token-unit accuracy. The experiments also showed that tagging performance improved when the maximum token length was limited to twice the number of words.
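The head-tail split itself, dividing a word at a syllable boundary into a lexical head and a grammatical tail, can be sketched with a toy suffix list standing in for the paper's learned model. Note that, in keeping with the head-tail idea, no irregular form is restored: the split only happens on surface syllables.

```python
# Hypothetical grammatical tails (josa/eomi); the real system learns
# the split point rather than using a fixed list.
TAILS = ["에서는", "에서", "은", "는", "이", "가", "을", "를"]

def head_tail(eojeol):
    """Split a word into (lexical head, grammatical tail) at a syllable
    boundary; the longest matching tail wins, and a word that is all
    tail (or has no known tail) is kept whole."""
    for tail in sorted(TAILS, key=len, reverse=True):
        if len(eojeol) > len(tail) and eojeol.endswith(tail):
            return eojeol[: -len(tail)], tail
    return eojeol, ""

# "studies at school": each word splits into at most two tokens.
print([head_tail(w) for w in "학교에서는 공부를 한다".split()])
```

Producing at most two tokens per word keeps the token sequence short and the tag set small compared with full morpheme segmentation, which is what makes the downstream tagging model simpler.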