• Title/Summary/Keyword: 어절 정보 (eojeol information)

378 search results

Using Syntactic Unit of Morpheme for Reducing Morphological and Syntactic Ambiguity (형태소 및 구문 모호성 축소를 위한 구문단위 형태소의 이용)

  • Hwang, Yi-Gyu;Lee, Hyun-Young;Lee, Yong-Seok
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.7
    • /
    • pp.784-793
    • /
    • 2000
  • Conventional morphological analysis of Korean produces many morphological ambiguities because of the language's agglutinative nature. These ambiguities lead to syntactic ambiguities and make it difficult to select the correct parse tree. The problem mainly involves auxiliary predicates and bound nouns in Korean, which have a strong relationship with the surrounding morphemes, mostly functional morphemes that cannot stand alone; the combined morphemes play a syntactic or semantic role in the sentence. We extracted such morphemes from 0.2 million tagged words and classified them into three types. We call these units syntactic morphemes and treat them as the input unit of syntactic analysis. This paper presents the syntactic morpheme as an effective means of addressing the following problems: 1) reduction of morphological ambiguities, 2) elimination of unnecessary partial parse trees during parsing, and 3) reduction of syntactic ambiguity. The experimental results show that the syntactic morpheme is an essential unit for reducing both morphological and syntactic ambiguity.
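A minimal sketch of the underlying idea, under invented rules: sequences of functional morphemes around a bound noun or auxiliary predicate are merged into one unit before parsing. The POS tags, merge patterns, and example sentence below are hypothetical and not the paper's actual classification.

```python
# Hypothetical illustration of merging functional morpheme sequences
# (e.g., around a bound noun) into a single "syntactic morpheme" unit
# before parsing. The rule patterns and tags are invented.

MERGE_PATTERNS = [
    # (sequence of POS tags, merged tag)
    (("ETM", "NNB", "JKS"), "SYN_NNB"),  # adnominal ending + bound noun + particle
    (("EC", "VX", "EF"), "SYN_AUX"),     # connective ending + auxiliary verb + final ending
]

def merge_syntactic_morphemes(tagged):
    """tagged: list of (surface, POS) pairs from a morphological analyzer."""
    out, i = [], 0
    while i < len(tagged):
        merged = False
        for pattern, new_tag in MERGE_PATTERNS:
            window = tagged[i:i + len(pattern)]
            if tuple(tag for _, tag in window) == pattern:
                out.append(("".join(m for m, _ in window), new_tag))
                i += len(pattern)
                merged = True
                break
        if not merged:
            out.append(tagged[i])
            i += 1
    return out

# "먹을 수가 있다" analyzed into morphemes, then merged into one unit.
print(merge_syntactic_morphemes(
    [("먹", "VV"), ("을", "ETM"), ("수", "NNB"), ("가", "JKS"), ("있", "VA"), ("다", "EF")]
))
```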

A Research on Module Arrangement of Korean Spelling Corrector to Optimize Correction Rate (교정률 최적화를 위한 한국어 철자교정기의 모듈 배열)

  • Yun Keun-Soo;Kwon Hyuk-Chul
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.5
    • /
    • pp.366-377
    • /
    • 2005
  • We find a module arrangement that yields the optimal correction rate of a Korean spelling corrector. When a spelling corrector has many modules, finding the optimal correction rate is difficult because the number of permutations of N modules is N!. This Korean spelling corrector consists of 19 modules, so exhaustively arranging all 19 is impractical, and the correction rate also varies with the input data. We found the range of the correction rate using parallel processing of the modules and the optimal correction rate using sequential processing of the modules. The input data used in the experiment is a set of 753,191 erroneous eojeols collected at a newspaper company over several years. For this error set, the theoretical maximum correction rate of the spelling corrector is 97.28% (732,764/753,191). We obtained an optimal correction rate of 96.62% (727,750/753,191), which corresponds to 99.31% (727,750/732,764) of the theoretical maximum.
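The abstract describes a search over module orderings whose full permutation space (19! orderings) is intractable. As a rough illustration of that search problem only, and not the authors' procedure, the following sketch greedily orders modules by how many still-uncorrected errors each one handles; the module names and coverage sets are invented.

```python
# Greedy sketch of approximating a good module ordering without
# enumerating all N! permutations. Illustration only; the paper's
# parallel/sequential evaluation is not reproduced here.

def greedy_module_order(modules, errors):
    """modules: dict name -> set of error ids that module can correct.
    errors: set of all error ids in the test set."""
    remaining = set(errors)
    order = []
    while remaining and len(order) < len(modules):
        best = max(
            (m for m in modules if m not in order),
            key=lambda m: len(modules[m] & remaining),
        )
        order.append(best)
        remaining -= modules[best]
    corrected = len(errors) - len(remaining)
    return order, corrected / len(errors)

# Toy example with invented module coverage sets.
modules = {"spacing": {1, 2, 3}, "typo": {3, 4}, "josa": {5}}
order, rate = greedy_module_order(modules, {1, 2, 3, 4, 5, 6})
print(order, f"{rate:.2%}")
```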

Categorization of Korean News Articles Based on Convolutional Neural Network Using Doc2Vec and Word2Vec (Doc2Vec과 Word2Vec을 활용한 Convolutional Neural Network 기반 한국어 신문 기사 분류)

  • Kim, Dowoo;Koo, Myoung-Wan
    • Journal of KIISE
    • /
    • v.44 no.7
    • /
    • pp.742-747
    • /
    • 2017
  • In this paper, we propose a novel approach that improves the performance of a Convolutional Neural Network (CNN) document classifier by combining word2vec word embeddings with doc2vec document vectors. The Word Piece Model (WPM) is shown empirically to outperform other tokenization methods such as phrase-unit tokenization and a part-of-speech tagger (classification rate: 79.5%). We then classified Korean news articles into ten categories by feeding the word and document vectors generated with WPM tokenization to the baseline and the proposed model. The proposed model achieved a higher classification rate (89.88%) than the baseline (86.89%), a 22.80% relative reduction in classification error. This demonstrates that applying doc2vec to the document classification task is effective, because doc2vec generates similar document vector representations for documents belonging to the same category.
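A minimal sketch, assuming gensim and documents already tokenized (e.g., by a Word Piece Model), of how word2vec word vectors and doc2vec document vectors of the kind fed to such a CNN can be produced. The toy documents and hyperparameters are illustrative and not the paper's configuration.

```python
# Producing word vectors (word2vec) and document vectors (doc2vec)
# with gensim for a downstream classifier. Toy data and settings only.

from gensim.models import Word2Vec
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [
    ["경제", "성장률", "전망"],   # tokens of article 1 (toy data)
    ["축구", "대표팀", "승리"],   # tokens of article 2 (toy data)
]

w2v = Word2Vec(sentences=docs, vector_size=100, window=5, min_count=1)
d2v = Doc2Vec(
    [TaggedDocument(words=d, tags=[i]) for i, d in enumerate(docs)],
    vector_size=100, min_count=1, epochs=20,
)

word_vec = w2v.wv["경제"]             # per-word embedding for the CNN input
doc_vec = d2v.infer_vector(docs[0])   # per-document embedding
print(word_vec.shape, doc_vec.shape)
```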

Haum: Educational Mobile Game for Korean Language life Conversation (하움: 한국어 생활회화 교육용 모바일 게임)

  • Yun, Jihye;Lee, hansol;Hong, Jiyeon;Yoon, Daseul;Park, Su e;Park, Jung Kyu
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.10a
    • /
    • pp.208-211
    • /
    • 2017
  • The biggest challenge immigrants face is language, and we found that Korean-language educational content suited to their needs is scarcer than expected. To help with this problem, we built a mobile game for teaching everyday Korean conversation. The proposed game is based on the online courses of Sejonghakdang and consists of everyday conversations that can be used immediately in real life. We targeted female marriage immigrants from China, a group that accounts for a large share of foreign residents and needs substantial Korean education but has relatively little access to it. These learners could generally communicate, but their sentence construction was often not fluent. Considering these characteristics, we chose a game format in which problems are solved by matching at the level of individual words.

Morphology Representation using STT API in Rasbian OS (Rasbian OS에서 STT API를 활용한 형태소 표현에 대한 연구)

  • Woo, Park-jin;Im, Je-Sun;Lee, Sung-jin;Moon, Sang-ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.373-375
    • /
    • 2021
  • For Korean, the potential for improvement is lower than for English if tagging is done with whitespace word tokenization as in English. Although the corpus can be tokenized into morpheme units with KoNLPy and represented as a graph database, converting between the graph database and the corpus requires fully separating the voice files and verifying practicality. In this paper, morphological representation using an STT API is demonstrated on a Raspberry Pi: a voice file is converted to text, analyzed with KoNLPy, and tagged. The analysis results are represented in a graph database and can be divided into morpheme-level tokens, and we judge that data mining for specific purposes becomes possible once practicality and the degree of separation are determined.
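A minimal sketch of the post-STT step described above: text assumed to have been returned by a speech-to-text API is morphologically analyzed and POS-tagged with KoNLPy. The Okt analyzer and the sample sentence are assumptions; the abstract does not say which KoNLPy tagger was used.

```python
# Morpheme analysis and tagging of STT output with KoNLPy.
# The tagger choice (Okt) and the input sentence are assumptions.

from konlpy.tag import Okt

okt = Okt()
stt_text = "라즈베리 파이에서 음성을 텍스트로 변환했다"  # assumed STT output

morphemes = okt.morphs(stt_text)  # tokens split by morpheme
tagged = okt.pos(stt_text)        # (morpheme, POS tag) pairs

print(morphemes)
print(tagged)  # graph-database nodes/edges could be built from these pairs
```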

On "Dimension" Nouns In Korean (한국어 "크기" 명사 부류에 대하여)

  • Song, Kuen-Young;Hong, Chai-Song
    • Annual Conference on Human and Language Technology
    • /
    • 2001.10d
    • /
    • pp.260-266
    • /
    • 2001
  • Based on the theory of 'object classes' (classes d'objets) developed for the semantic-syntactic classification of French nouns, this paper establishes semantic and formal criteria for the class of Korean "dimension" (크기) nouns and explores how the class can be exploited in natural language processing. Some Korean nouns express the magnitude that an attribute of an object or phenomenon takes along a particular dimension, for example '길이' (length), '깊이' (depth), '넓이' (area), '높이' (height), '키' (stature), '무게' (weight), '온도' (temperature), and '기온' (air temperature). These nouns are closely tied to the notion of measurement and share regular syntactic properties: they are realized in fixed syntactic patterns together with quantity expressions and with measurement verbs such as '측정하다' (to measure) and '재다' (to gauge). We call Korean nouns satisfying these conditions "dimension" nouns, and we call verbs such as '측정하다' and '재다', which characteristically combine with them, the appropriate predicates of the class. Dimension nouns can be further divided into subtypes according to the unit nouns they combine with and the degree adjectives they co-occur with. If the class is delimited externally mainly by its syntactic combination with these predicates, and the detailed syntactic properties of each member noun are encoded in an electronic dictionary, a comprehensive semantic and syntactic classification and description of Korean dimension nouns becomes possible. Research on dimension nouns must proceed in parallel with research on the unit noun classes that characterize them. This study provides more rigorous, formal criteria for delimiting and classifying Korean dimension nouns, together with systematically organized semantic and syntactic information. This information can be exploited directly in the automatic analysis and generation of constructions containing dimension nouns, and it is also being incorporated into the Sejong electronic dictionary currently under construction.
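A hypothetical sketch of how an electronic-dictionary entry for a dimension noun might be encoded, following the description above: each noun is listed with its appropriate predicates, compatible unit nouns, and compatible degree adjectives. The field names and sample entries are illustrative, not the Sejong dictionary's actual format.

```python
# Illustrative encoding of dimension-noun entries; not a real
# electronic-dictionary schema.

from dataclasses import dataclass

@dataclass
class DimensionNounEntry:
    lemma: str                    # the dimension noun, e.g. '길이' (length)
    predicates: list[str]         # appropriate predicates, e.g. '측정하다', '재다'
    unit_nouns: list[str]         # compatible unit nouns
    degree_adjectives: list[str]  # compatible degree adjectives

entries = [
    DimensionNounEntry("길이", ["측정하다", "재다"], ["미터", "센티미터"], ["길다", "짧다"]),
    DimensionNounEntry("무게", ["측정하다", "재다", "달다"], ["킬로그램", "그램"], ["무겁다", "가볍다"]),
]

# A parser could consult such entries to accept patterns like
# "길이를 재다" or "길이가 10미터이다" and reject ill-formed combinations.
for e in entries:
    print(e.lemma, e.predicates, e.unit_nouns)
```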

The Method of the Evaluation of Verbal Lexical-Semantic Network Using the Automatic Word Clustering System (단어클러스터링 시스템을 이용한 어휘의미망의 활용평가 방안)

  • Kim, Hae-Gyung;Song, Mi-Young
    • Korean Journal of Oriental Medicine
    • /
    • v.12 no.3 s.18
    • /
    • pp.1-15
    • /
    • 2006
  • In recent years there has been much interest in lexical semantic networks. However, it is difficult to evaluate their effectiveness and correctness and to devise methods for applying them to various problem domains. To offer basic ideas on how to evaluate and utilize lexical semantic networks, we developed two automatic word clustering systems, called system A and system B. 68,455,856 words were used to train both systems. We compared the clustering results of system A with those of system B, which is extended with the lexical-semantic network by reconstructing its feature vectors using elements of the network for 3,656 '-ha' verbs. The target data is the multilingual WordNet-CoreNet. Comparing the accuracy of the two systems, system B achieved 46.6%, better than system A's 45.3%.
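A sketch of the comparison described above: cluster verbs using base feature vectors (system A) versus vectors extended with lexical-semantic-network features (system B). The vectors, the network features, and the use of k-means are invented for illustration; the abstract does not state which clustering algorithm was used.

```python
# Comparing clustering with and without semantic-network feature
# extension. Toy random data stands in for the real feature vectors.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
base_features = rng.random((20, 50))      # system A: base features (toy)
network_features = rng.random((20, 10))   # lexical-semantic-network features (toy)

labels_a = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(base_features)
labels_b = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    np.hstack([base_features, network_features])  # system B: extended vectors
)
print(labels_a)
print(labels_b)
```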

Automatic Construction of Foreign Word Transliteration Dictionary from English-Korean Parallel Corpus (영-한 병렬 코퍼스로부터 외래어 표기 사전의 자동 구축)

  • Lee, Jae Sung
    • The Journal of Korean Association of Computer Education
    • /
    • v.6 no.2
    • /
    • pp.9-21
    • /
    • 2003
  • This paper proposes a system that automatically constructs a transliteration dictionary from an English-Korean parallel corpus. The system works in three steps: it extracts all nouns from the Korean documents, filters out transliterated foreign-word nouns using a language identification method, and extracts the corresponding English words using a probabilistic alignment method. In particular, the fact that a corresponding English word exists in most cases is used to extract the purely transliterated part from a Korean word phrase, which usually appears combined with Korean endings (Eomi) or particles (Josa). Moreover, words in the two different writing systems are compared phonetically without first converting them to a common alphabet. The experiments showed that performance depends on the first two preprocessing steps: the best model with manual preprocessing achieved 85.4% recall and 91.0% precision, and the best fully automated model achieved 68.3% recall and 89.2% precision.
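A simplified sketch of the three-step pipeline described above: take nouns from the Korean side, keep likely transliterations, and pair each with the most phonetically similar English word from the aligned sentence. The lookup-based language test, the romanization table, and difflib similarity are crude stand-ins for the paper's statistical language identification and probabilistic alignment models.

```python
# Toy transliteration-pair extraction; the filtering and scoring here
# are stand-ins, not the paper's probabilistic methods.

from difflib import SequenceMatcher

def is_transliteration(noun):
    # Stand-in for the language identification step.
    return noun in {"컴퓨터", "인터넷", "시스템"}

def romanize(noun):
    # Stand-in for a phonetic representation of the Korean noun.
    table = {"컴퓨터": "keompyuteo", "인터넷": "inteonet", "시스템": "siseutem"}
    return table.get(noun, noun)

def align(korean_nouns, english_words):
    pairs = []
    for noun in filter(is_transliteration, korean_nouns):
        best = max(
            english_words,
            key=lambda w: SequenceMatcher(None, romanize(noun), w.lower()).ratio(),
        )
        pairs.append((noun, best))
    return pairs

print(align(["컴퓨터", "회사", "인터넷"], ["the", "computer", "company", "internet"]))
```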

A comparative study of prosodic features according to the syntactic diversities between children with reading disability and nondisabled children (읽기장애아동과 일반아동의 통사적 다양성에 따른 운율 특성 비교)

  • Park, Sungsook;Seong, Cheoljae
    • Phonetics and Speech Sciences
    • /
    • v.13 no.4
    • /
    • pp.55-66
    • /
    • 2021
  • Proper prosody in reading allows the reader to convey meaning naturally through changes in pitch, loudness, and speech rate; children with reading disability have difficulty conveying information because of poor prosody. This study identified differences in prosodic features between children with reading disability and nondisabled children using reading tasks. Reading tasks for four sentence types (short sentence, assumption/condition, intention, relative clause) were recorded by 15 children in the 3rd to 6th grade of elementary school. Compared with nondisabled children, children with reading disability showed a statistically significantly wider pitch range, slower speech rate, more frequent pauses, longer total pause duration, and a steeper pitch slope on sentence-final and sentence-medial words, and therefore read less naturally and expressively. The study identifies the prosodic characteristics of children with reading disability and points to the need for effective intervention.
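A sketch, on invented data, of how the reported measures (pitch range, speech rate, pauses, pitch slope) can be computed once a pitch contour has been extracted. The contour, frame length, syllable count, and pause threshold below are assumptions; pitch extraction itself and the study's actual analysis settings are not shown.

```python
# Computing simple prosodic measures from an assumed pitch contour.

import numpy as np

FRAME_SEC = 0.05       # assumed length of one analysis frame
PAUSE_MIN_SEC = 0.25   # assumed minimum silence counted as a pause

f0 = np.concatenate([
    [180.0, 185.0, 190.0],                        # first word (voiced frames)
    np.zeros(30),                                 # silent stretch between words
    [230.0, 220.0, 205.0, 190.0, 170.0, 150.0],   # sentence-final word, falling pitch
])
syllable_count = 5                                # assumed for the toy utterance

voiced = f0[f0 > 0]
pitch_range_hz = voiced.max() - voiced.min()

# Pause statistics: contiguous runs of silent frames above the threshold.
silent_runs, run = [], 0
for is_silent in f0 == 0:
    if is_silent:
        run += 1
    elif run:
        silent_runs.append(run)
        run = 0
if run:
    silent_runs.append(run)
pause_durs = [r * FRAME_SEC for r in silent_runs if r * FRAME_SEC >= PAUSE_MIN_SEC]

speech_rate = syllable_count / (len(f0) * FRAME_SEC)  # syllables per second

# Pitch slope over the sentence-final word (linear fit, Hz per second).
final_word = f0[-6:]
slope = np.polyfit(np.arange(len(final_word)) * FRAME_SEC, final_word, 1)[0]

print(pitch_range_hz, len(pause_durs), sum(pause_durs), round(speech_rate, 2), round(slope, 1))
```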

PPEditor: Semi-Automatic Annotation Tool for Korean Dependency Structure (PPEditor: 한국어 의존구조 부착을 위한 반자동 말뭉치 구축 도구)

  • Kim Jae-Hoon;Park Eun-Jin
    • The KIPS Transactions:PartB
    • /
    • v.13B no.1 s.104
    • /
    • pp.63-70
    • /
    • 2006
  • In general, a corpus contains a great deal of linguistic information and is widely used in natural language processing and computational linguistics. Creating such a corpus, however, is expensive, labor-intensive, and time-consuming. To alleviate this problem, annotation tools for building corpora rich in linguistic information are indispensable. In this paper, we design and implement an annotation tool for building a Korean dependency tree-tagged corpus. Ideally the corpus would be created fully automatically, without annotator intervention, but in practice this is impossible. The proposed tool is therefore semi-automatic, like most other annotation tools, and is designed for editing the errors produced by basic analyzers such as a part-of-speech tagger and a (partial) parser. It is also designed to avoid repetitive work during error correction and to be easy and convenient to use. Using the proposed tool, 10,000 Korean sentences of more than 20 words each were annotated with dependency structures; eight annotators worked four hours a day for two months. We are confident that the tool yields accurate and consistent annotations while reducing labor and time.
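A hypothetical sketch of the kind of data such a tool manipulates: each eojeol carries a head index and a dependency label, the automatic parser's output is loaded first, and the annotator overwrites only the wrong fields. Field names and labels are illustrative, not PPEditor's actual format.

```python
# Minimal dependency-annotation record and a correction step, as an
# illustration of semi-automatic annotation; not PPEditor's data model.

from dataclasses import dataclass

@dataclass
class Eojeol:
    idx: int     # 1-based position in the sentence
    form: str    # surface eojeol
    head: int    # index of the governing eojeol (0 = root)
    label: str   # dependency label

# Output of the automatic analyzers (with one deliberate error).
sentence = [
    Eojeol(1, "나는", 3, "SBJ"),
    Eojeol(2, "사과를", 1, "OBJ"),   # wrong head proposed by the parser
    Eojeol(3, "먹었다", 0, "ROOT"),
]

def correct(sentence, idx, head=None, label=None):
    """Annotator edit: change only the fields that are wrong."""
    e = sentence[idx - 1]
    if head is not None:
        e.head = head
    if label is not None:
        e.label = label

correct(sentence, 2, head=3)  # '사과를' should depend on the verb '먹었다'
for e in sentence:
    print(e.idx, e.form, e.head, e.label)
```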