• Title/Summary/Keyword: 문법적 형태소 (grammatical morphemes)


Characteristics of Narrative Writing in Normal Aging: Story Grammar and Syntactic Structure (노년층의 글쓰기 특성 -이야기문법과 구문구조)

  • Kim, Hyeon Ah;Won, Sae Rom;Lee, Bo Eun;Yoon, Ji Hye
    • 재활복지 / v.21 no.1 / pp.193-212 / 2017
  • The elderly often produce irrelevant speech and go off-topic more easily than the young; they also generate fewer syntactic structures and make more errors involving grammatical morphemes. In particular, the elderly may find writing especially difficult, since it requires more complex cognitive processing than storytelling. The participants in this study were 32 young adults and 32 older adults, who were asked to write a short version of the Korean fairy tale 'Heungbu Nolbu'. The data were analyzed for narrative composition and syntactic structure. The study revealed the following. First, in composition, the elderly group produced significantly fewer story grammar elements and episodes, as well as more off-topic statements. Second, in syntax, although there was no significant difference between the two groups in the number of complex sentences produced, the elderly group generated more inadequate cohesive devices and used fewer relative and adverbial clauses. These findings suggest that the elderly tend to perform such tasks with more off-topic statements and show decreased coherence through reduced use of relative and adverbial clauses. However, this study also found that, with visual feedback, the elderly were able to write longer and more complex sentences.

Sentiment Analysis using Robust Parallel Tri-LSTM Sentence Embedding in Out-of-Vocabulary Word (Out-of-Vocabulary 단어에 강건한 병렬 Tri-LSTM 문장 임베딩을 이용한 감정분석)

  • Lee, Hyun Young;Kang, Seung Shik
    • Smart Media Journal / v.10 no.1 / pp.16-24 / 2021
  • Existing word embedding methods such as word2vec represent only the words that occur in the raw training corpus as fixed-length vectors in a continuous vector space, so in a morphologically rich language the out-of-vocabulary (OOV) problem arises frequently when mapping unseen words to fixed-length vectors. Even for sentence embedding, where the meaning of a sentence is represented as a fixed-length vector by composing the word vectors of its constituents, OOV words make it difficult to represent the sentence meaningfully. In particular, since Korean is an agglutinative language whose words combine lexical and grammatical morphemes, handling OOV words is an important factor in improving performance. In this paper, we propose a parallel Tri-LSTM sentence embedding that is robust to the OOV problem by extending the use of the morphological information of words to the sentence level. In a sentiment analysis task on a Korean corpus, we found empirically that the character is a better embedding unit than the morpheme for Korean sentence embedding. We achieved 86.17% accuracy on the sentiment analysis task with the parallel bidirectional Tri-LSTM sentence encoder.
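
The paper's finding that character units beat morpheme units as the embedding unit for Korean can be illustrated with a toy vocabulary-coverage check. This is only a sketch of why character units reduce OOV; the tiny corpora and token inventories below are invented for illustration, not the paper's data:

```python
# Toy illustration: a character-level inventory covers unseen words that a
# morpheme-level vocabulary treats as OOV (hypothetical example data).

def oov_rate(tokens, vocab):
    """Fraction of tokens missing from the embedding vocabulary."""
    unseen = [t for t in tokens if t not in vocab]
    return len(unseen) / len(tokens)

# Morpheme-unit vocabulary built from a (tiny) training corpus.
morph_vocab = {"영화", "가", "재미있", "다", "보", "았"}
# Character-unit vocabulary: the syllable inventory is small and closed,
# so it saturates quickly even on a modest corpus.
char_vocab = set("영화가재미있다보았슬프게")

# A test sentence containing a morpheme unseen in training ("슬프").
test_morphs = ["영화", "가", "슬프", "다"]
test_chars = list("영화가슬프다")

print(oov_rate(test_morphs, morph_vocab))  # 0.25 — one morpheme is OOV
print(oov_rate(test_chars, char_vocab))    # 0.0 — every character is covered
```

The same closed-inventory property is what lets a character-level encoder assign a meaningful vector to any Korean word, at the cost of longer input sequences.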

A Study on Lexical Ambiguity Resolution of Korean Morphological Analyzer (형태소 분석기의 어휘적 중의성 해결에 관한 연구)

  • Park, Yong-Uk
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.7 no.4 / pp.783-787 / 2012
  • It is not easy for a Korean spelling checker to detect syntactic errors, because a spelling checker generally corrects each phrase in isolation and cannot catch errors between contextually mismatched words: it tests for errors on a word-by-word basis. Resolving lexical ambiguity is important in natural language processing, since its output feeds syntactic analysis. For accurate analysis of a sentence, a syntactic analysis system must resolve the ambiguity of the morphemes within each word. In this paper, we suggest several rules for resolving morpheme ambiguities within a word. Using these rules, we can eliminate many lexical ambiguities in Korean.
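
Rule-based disambiguation of this kind can be sketched as a filter over candidate morpheme analyses, using the part of speech of the following word as context. The rules and POS tag names below are illustrative stand-ins, not the paper's actual rule set:

```python
# Sketch of rule-based lexical disambiguation: an ambiguous word ("eojeol")
# has several candidate morpheme analyses, and hand-written contextual rules
# discard implausible candidates. (Illustrative rules and POS tags only.)

def disambiguate(candidates, next_word_pos):
    """Keep only candidates compatible with the POS of the following word."""
    rules = {
        # Before a noun, prefer the verb-stem + adnominal-ending analysis
        # (e.g. "나는 새" = "a flying bird").
        "NOUN": lambda cand: cand[-1][1] == "EOMI",
        # Before a verb, prefer the pronoun + topic-particle analysis
        # (e.g. "나는 간다" = "I go").
        "VERB": lambda cand: cand[-1][1] == "JOSA",
    }
    rule = rules.get(next_word_pos)
    if rule is None:
        return candidates          # no applicable rule: ambiguity remains
    filtered = [c for c in candidates if rule(c)]
    return filtered or candidates  # never filter down to nothing

# "나는" is ambiguous: pronoun 나 + particle 는, or verb stem 날 + ending 는.
candidates = [
    [("나", "PRON"), ("는", "JOSA")],
    [("날", "VERB"), ("는", "EOMI")],
]
print(disambiguate(candidates, "NOUN"))  # keeps the 날/VERB + 는/EOMI reading
```

A real analyzer would apply many such rules in sequence, each pruning the candidate set while guaranteeing at least one analysis survives.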

Analysis of Korean Spontaneous Speech Characteristics for Spoken Dialogue Recognition (대화체 연속음성 인식을 위한 한국어 대화음성 특성 분석)

  • 박영희;정민화
    • The Journal of the Acoustical Society of Korea / v.21 no.3 / pp.330-338 / 2002
  • Spontaneous speech is ungrammatical and shows serious phonological variation, which makes it far more difficult to recognize than read speech. In this paper, for conversational speech recognition, we analyze transcriptions of real conversational speech and classify the characteristics of conversational speech from a speech recognition perspective; reflecting these features, we build a baseline system for conversational speech recognition. The characteristics fall into long silences, disfluencies, and phonological variations, each grouped by similar features. To handle them, we first update the silence model and add a filled-pause model and a garbage model; second, we add multiple phonetic transcriptions to the lexicon for the most frequent phonological variations. In our experiments, the baseline morpheme error rate (MER) is 31.65%; we obtain MER reductions of 2.08% from the silence and garbage models, 0.73% from the filled-pause model, and 0.73% from the phonological variations. Finally, we reach 27.92% MER for conversational speech recognition, which will serve as a baseline for further study.
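
Adding multiple phonetic transcriptions to the lexicon, as the paper does for frequent phonological variations, amounts to mapping one orthographic entry to several pronunciation strings. A minimal sketch follows; the entries and phone strings are illustrative, not taken from the paper's lexicon:

```python
# Sketch: a recognition lexicon mapping each word to a list of pronunciation
# variants, so phonological variations of spontaneous speech can match any
# of them. (Entries and phone strings are illustrative.)

lexicon = {"그리고": ["k u r i k o"]}

def add_variant(lexicon, word, pron):
    """Append a pronunciation variant, avoiding duplicates."""
    prons = lexicon.setdefault(word, [])
    if pron not in prons:
        prons.append(pron)
    return lexicon

# Register a reduced pronunciation often heard in conversational speech.
add_variant(lexicon, "그리고", "k u r i g u")
# A filled-pause entry maps hesitations to their own dedicated model symbol.
add_variant(lexicon, "<filled_pause>", "e m")
print(lexicon["그리고"])  # ['k u r i k o', 'k u r i g u']
```

During decoding, the recognizer treats each variant as an equally valid path for the word, which is what absorbs the pronunciation variability without changing the acoustic models.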

Concept-based Translation System in the Korean Spoken Language Translation System (한국어 대화체 음성언어 번역시스템에서의 개념기반 번역시스템)

  • Choi, Un-Cheon;Han, Nam-Yong;Kim, Jae-Hoon
    • The Transactions of the Korea Information Processing Society / v.4 no.8 / pp.2025-2037 / 1997
  • The concept-based translation system, a part of the Korean spoken language translation system, translates spoken utterances from a Korean speech recognizer into English, Japanese, or Korean in a travel-planning task. Our system relies on semantic rather than syntactic categories in order to process spontaneous speech, which tends to be ungrammatical and subject to recognition errors. Utterances are parsed into concept structures, and the generation module produces a sentence in the specified target language. For Korean processing, we developed a token separator based on base-words and an automatic grammar corrector. We also developed postprocessors for each target language to improve the readability of the generation results.
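
The pipeline described — parse an utterance into a concept structure, then generate in the chosen target language — can be sketched as a minimal frame-and-template system. The concept names, patterns, and templates below are invented for illustration and are not the paper's actual grammar:

```python
# Minimal sketch of concept-based translation: semantic patterns map an
# utterance to a concept frame, and per-language templates generate output.
# (Concept names, patterns, and templates are invented for illustration.)

def parse_to_concept(utterance):
    """Map a Korean travel-domain utterance to a concept frame."""
    if "예약" in utterance:
        frame = {"concept": "request_reservation"}
        if "호텔" in utterance:
            frame["object"] = "hotel"
        return frame
    return {"concept": "unknown"}

TEMPLATES = {
    ("request_reservation", "en"): "I would like to reserve a {object}.",
    ("request_reservation", "ja"): "{object}を予約したいのですが。",
}
OBJECT_WORDS = {("hotel", "en"): "hotel room", ("hotel", "ja"): "ホテル"}

def generate(frame, lang):
    """Fill the target-language template from the concept frame."""
    template = TEMPLATES[(frame["concept"], lang)]
    return template.format(object=OBJECT_WORDS[(frame["object"], lang)])

frame = parse_to_concept("호텔 예약을 하고 싶은데요")
print(generate(frame, "en"))  # I would like to reserve a hotel room.
```

Because the frame abstracts away from surface syntax, recognition errors and ungrammatical word order matter less, as long as the key content words survive — which is the point of regulating semantic rather than syntactic categories.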


Implementation of Feature-based Dialog System in Restaurant domain (레스토랑 영역에서의 자질기반 대화시스템 구현)

  • Yang, Hyeon-Seok;Kim, Dong-Joo;Seol, Yong-Soo;Jung, Sung-Hun;Kim, Han-Woo
    • Proceedings of the Korea Information Processing Society Conference / 2011.11a / pp.425-428 / 2011
  • The need for robot technologies that interact directly with people, such as service robots and pet robots, is increasing. A dialogue system, by combining natural language processing with speech recognition, provides a more natural HRI (Human-Robot Interface) than the button- and touchscreen-based HRI currently dominant in robots. Building a robot capable of such natural HRI requires research on dialogue systems tailored to the actual domain in which the robot will provide services. This paper presents a feature-based dialogue system for the restaurant domain that uses a feature dictionary, a unification grammar, and a dialogue flow diagram. Feature information represents morphemes, tense, and the semantic structure of vocabulary; it is used to determine the speech act, and sentence features and syntactic features are exploited by the parser. We show that the feature-based dialogue system can successfully understand user speech acts and provide services such as ordering and guidance in the restaurant domain.
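
Using feature information (morphology, tense, lexical semantics) to decide a speech act can be sketched as rule matching over a feature set. The feature names and rules below are hypothetical, not the system's actual feature dictionary:

```python
# Sketch: speech-act determination from sentence features, in the spirit of
# a feature-based dialogue system. (Feature names and rules are hypothetical.)

SPEECH_ACT_RULES = [
    # (required features, resulting speech act); checked in order.
    ({"ending:interrogative", "lexical:menu"}, "ask_menu"),
    ({"ending:imperative", "lexical:order"}, "request_order"),
    ({"ending:declarative"}, "inform"),
]

def classify_speech_act(features):
    """Return the first speech act whose required features are all present."""
    for required, act in SPEECH_ACT_RULES:
        if required <= features:  # subset test: all required features present
            return act
    return "unknown"

# "메뉴 좀 보여 주시겠어요?" might yield these features after analysis:
feats = {"ending:interrogative", "lexical:menu", "tense:present"}
print(classify_speech_act(feats))  # ask_menu
```

The resulting speech act would then index into the dialogue flow diagram to pick the system's next move, e.g. presenting the menu.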

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.59-83 / 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English text. In deep-learning-based sentiment analysis of English text, the natural language sentences in training and test datasets are usually converted into sequences of word vectors before being fed to the model. Here, word vectors generally mean vector representations of the words obtained by splitting a sentence at space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data; these have been widely used in sentiment analysis studies of reviews from fields such as restaurants, movies, laptops, and cameras. Unlike in English, the morpheme plays an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with well-developed postpositions and endings. A morpheme is the smallest meaningful unit of a language, and a word consists of one or more morphemes; for example, the word '예쁘고' consists of the morphemes '예쁘' (adjective stem) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. Several questions arise at this point. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors in order to improve the classification accuracy of a deep learning model?
Is it proper to apply a typical word vector model, which relies primarily on word forms, to Korean with its high ratio of homonyms? Will text preprocessing such as correcting spelling or spacing errors affect classification accuracy, especially when deriving morpheme vectors from Korean product reviews containing many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are likely to be encountered first when applying deep learning models to Korean text. As a starting point, we summarize them in three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with respect to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we reach a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? To address these questions, we generate various types of morpheme vectors reflecting them and compare classification accuracy using a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. For training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used. To derive morpheme vectors, we use data from the same domain as the target as well as data from another domain: about 2 million Naver Shopping cosmetics product reviews and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ on the following three criteria.
First, they come from two data sources: Naver News, with high grammatical correctness, and Naver Shopping cosmetics product reviews, with low grammatical correctness. Second, they differ in the degree of preprocessing: either sentence splitting only, or additional spelling and spacing correction after sentence separation. Third, they vary in the form of input fed to the word vector model: morphemes alone, or morphemes with their POS tags attached. The morpheme vectors further vary in the range of POS tags considered, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model using a context window of 5 and a vector dimension of 300. Utilizing same-domain text even with lower grammatical correctness, performing spelling and spacing correction in addition to sentence splitting, and incorporating morphemes of all POS tags, including an incomprehensible category, lead to better classification accuracy. POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum-frequency threshold for including a morpheme do not appear to have any definite influence on classification accuracy.
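
The study's input format — morphemes fed to a word-vector model either bare or with POS tags attached — can be sketched as a preprocessing step. The tokenization shown is illustrative; a real pipeline would obtain the (morpheme, tag) pairs from a Korean morphological analyzer (e.g. one of those wrapped by KoNLPy) rather than hard-coding them:

```python
# Sketch: preparing morpheme tokens for a word-vector model (e.g. CBOW with
# window 5 and dimension 300, as in the study), either as bare morphemes or
# with POS tags attached to disambiguate homonyms. The (morpheme, POS)
# analysis below is hard-coded for illustration.

def to_tokens(analyzed_sentence, attach_pos=False):
    """Turn (morpheme, POS) pairs into training tokens for a vector model."""
    if attach_pos:
        return [f"{m}/{p}" for m, p in analyzed_sentence]
    return [m for m, p in analyzed_sentence]

# '예쁘고' analyzed as adjective stem '예쁘' + connective ending '고'
# (the example given in the abstract).
analysis = [("예쁘", "VA"), ("고", "EC")]
print(to_tokens(analysis))                   # ['예쁘', '고']
print(to_tokens(analysis, attach_pos=True))  # ['예쁘/VA', '고/EC']
```

The tagged form keeps homonymous morphemes apart at the cost of a larger vocabulary, which is exactly the trade-off the study evaluates empirically.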

Applikative Konstruktion und Partizipantenrelationen (적용구문과 참여자관계)

  • Shin Yong-Min
    • Koreanische Zeitschrift für Deutsche Sprachwissenschaft / v.6 / pp.57-78 / 2002
  • An applicative construction (Applikative Konstruktion) is a type of transitive construction, and applicative verbs (Applikative Verben) fall into two types. When an applicative verb is derived from an intransitive verb, the applicative marker (Applikativmarker) creates a slot for a direct object that the intransitive verb lacked. When an applicative verb is derived from a transitive verb by inserting the applicative morpheme, the resulting verb either takes two direct objects or rearranges the verb's argument structure. The most typical case of such rearrangement of argument structure is a construction in which a peripheral participant (peripherer Partizipant) of the transitive verb is promoted through the applicative verb while a central participant (zentraler Partizipant) is demoted: the peripheral participant of the non-applicative construction acquires the syntactic function of direct object (direktes Objekt) in the applicative construction. This phenomenon is found in many of the world's languages; this paper examines German, Yucatec Maya, Indonesian, and Kambera, asking which participant relations (Partizipantenrelationen) can be expressed as the direct object of an applicative construction in each language. In these languages the order is location (Lokation) > beneficiary (Benefiziär) & recipient (Rezipient) > comitative (Komitativ) > instrument (Instrument). Combining this with Peterson's (1999) results, the order of participants that can appear as the direct object of an applicative construction nearly matches the reverse of the causal chain (kausale Kette) of participants introduced in Luraghi (2000); in order of frequency, the participants are: beneficiary (Benefiziär) & recipient (Rezipient) > location (Lokation) > comitative (Komitativ) & instrument (Instrument) > cause (Ursache). We call this ordering the 'applicativity hierarchy' (Applikativitätshierarchie) and hypothesize that it may hold as a universal across as many languages as possible.


Implementation of Iconic Language for the Language Support System of the Language Disorders (언어 장애인의 언어보조 시스템을 위한 아이콘 언어의 구현)

  • Choo Kyo-Nam;Woo Yo-Seob;Min Hong-Ki
    • The KIPS Transactions: Part B / v.13B no.4 s.107 / pp.479-488 / 2006
  • The iconic language interface is designed to provide a more convenient communication environment for the target system than a keyboard-based interface. For this work, vocabulary tendencies and features are analyzed in conversation corpora constructed from highly used domains, and the semantic and vocabulary system of the iconic language is built by applying natural language processing methodologies such as morphological, syntactic, and semantic analysis. The parts of speech and grammatical rules of the iconic language are defined so that icons correspond to the vocabulary and meaning of Korean and communication can proceed through icon sequences. To resolve linguistic ambiguities that may arise in the iconic language and to support effective semantic processing, situation-focused semantic data for the iconic language are constructed from a general-purpose Korean semantic dictionary and a subcategorization dictionary. Based on these, Korean language generation from the iconic interface in the semantic domain is suggested.
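
Generating Korean from an icon sequence, as the paper proposes, can be sketched by assigning each icon a word and a grammatical role and then marking and inflecting the words with simple rules. The icon inventory, lexicon, and particle rules here are invented for illustration, not the system's actual grammar:

```python
# Sketch: generating a Korean sentence from an icon sequence by attaching
# case particles according to each icon's grammatical role.
# (Icon inventory, lexicon, and particle rules are invented for illustration.)

ICON_LEXICON = {
    "🙂": ("나", "SUBJECT"),
    "🍚": ("밥", "OBJECT"),
    "🍴": ("먹다", "VERB"),
}
PARTICLES = {"SUBJECT": "는", "OBJECT": "을"}

def icons_to_sentence(icons):
    """Render an icon sequence as a particle-marked Korean clause."""
    words = []
    for icon in icons:
        word, role = ICON_LEXICON[icon]
        if role == "VERB":
            # Crude present-tense inflection: drop 다, attach 는다.
            words.append(word[:-1] + "는다")
        else:
            words.append(word + PARTICLES[role])
    return " ".join(words)

print(icons_to_sentence(["🙂", "🍚", "🍴"]))  # 나는 밥을 먹는다
```

A real system would consult the semantic dictionary to resolve which role an icon plays in context; the fixed role per icon here sidesteps exactly the ambiguity the paper's semantic data are built to handle.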

On Doublets (쌍형어에 대하여)

  • Yi, Eun-Gyeong
    • Cross-Cultural Studies / v.50 / pp.425-451 / 2018
  • In this paper, we examined the issues in discussions of doublets. In general, 'doublets' refers to a pair of words that share a common etymon, but also to a pair of words or grammatical morphemes that have the same meaning and similar forms. We argued that the typical doublet is a pair of words with a common etymology. Doublets can be divided into subtypes according to the similarities or differences identified in meaning or form; the type furthest from the typical one is a pair of words that do not share an etymon but have the same meaning and similar forms. The second issue is whether doublets comprise only words. For example, if some josas (postpositions or particles) share a common etymon, they can be accepted as a kind of doublet. In the case of suffixes, it may likewise be possible to recognize suffixes sharing a common etymon as doublets; alternatively, it may be unnecessary, since the derivatives formed with those suffixes can themselves be accepted as doublets. In the case of endings, a pair of endings with the same meaning and a common etymon might be recognized as a doublet, or else the word forms to which the endings attach could be accepted as doublets instead. However, considering that endings in Korean typically have syntactic properties, the endings themselves, rather than the words carrying them, should be considered the doublets. Finally, we conclude that there may be some debate as to whether stem doublets or ending doublets belong to a lexical item in the lexicon; they can be regarded as plural underlying forms and deserve further research.