• Title/Summary/Keyword: 어절 정보 (eojeol information)

Search results: 378

Decision Tree based Disambiguation of Semantic Roles for Korean Adverbial Postpositions in Korean-English Machine Translation (한영 기계번역에서 결정 트리 학습에 의한 한국어 부사격 조사의 의미 중의성 해소)

  • Park, Seong-Bae; Zhang, Byoung-Tak; Kim, Yung-Taek
    • Journal of KIISE: Software and Applications, v.27 no.6, pp.668-677, 2000
  • Korean has the characteristic that case postpositions determine the syntactic roles of phrases and that a postposition may have more than one meaning. In particular, adverbial postpositions make translation from Korean to English difficult because they can have various meanings. In this paper, we describe a method for resolving such semantic ambiguities of Korean adverbial postpositions using decision trees. The training examples for decision tree induction are extracted from a corpus of 0.5 million words, and the semantic roles of adverbial postpositions are classified into 25 classes. The lack of training examples for decision tree induction is overcome by clustering words into classes with a greedy clustering algorithm. Cross-validation results show that the presented method achieves an average precision of 76.2%, a 26.0% improvement over a baseline that assigns each adverbial postposition its most frequently appearing role.

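The decision-tree disambiguation described in the abstract above can be illustrated with a minimal sketch using a generic scikit-learn decision tree. The feature set (noun class, verb class, postposition form), the toy training rows, and the role labels are assumptions for illustration; the paper's 0.5-million-word corpus, greedy word clustering, and 25-role inventory are not reproduced here.

```python
# Sketch only: a scikit-learn decision tree over hypothetical symbolic features.
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Each row: (noun class of the complement, verb class of the head,
# surface form of the adverbial postposition). Classes and labels are toy values.
X_raw = [
    ("place", "motion", "에"),
    ("place", "state",  "에서"),
    ("tool",  "action", "로"),
    ("time",  "state",  "에"),
]
y = ["GOAL", "LOCATION", "INSTRUMENT", "TIME"]

enc = OrdinalEncoder()
X = enc.fit_transform(X_raw)

clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
clf.fit(X, y)

# Disambiguate a new occurrence of "로" attached to a tool-class noun.
print(clf.predict(enc.transform([("tool", "motion", "로")])))
```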

Modification Distance Model using Headible Path Contexts for Korean Dependency Parsing (지배가능 경로 문맥을 이용한 의존 구문 분석의 수식 거리 모델)

  • Woo, Yeon-Moon; Song, Young-In; Park, So-Young; Rim, Hae-Chang
    • Journal of KIISE: Software and Applications, v.34 no.2, pp.140-149, 2007
  • This paper presents a statistical model for Korean dependency parsing. Although Korean is a free word order language, certain word orders are preferred in local contexts, and earlier work exploited this property with parsing models based on modification lengths. Our model conditions the modification-length probabilities on headible path contexts: because the large surface context of a dependent can be abbreviated as its headible path, the model remains effective for long-distance relations. Combined with lexical bigram dependencies, our probabilistic model achieves 86.9% accuracy in eojeol-level analysis on the KAIST corpus, with especially large improvements for long-distance dependencies.
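
A minimal sketch of the kind of factored arc score suggested by the abstract above: a dependency arc is scored by a modification-distance probability conditioned on a context string (standing in for the headible path context) multiplied by a lexical bigram probability. The probability tables and feature encoding below are toy assumptions, not the paper's estimates.

```python
import math

# P(distance | path context): how far (in eojeols) a dependent with this
# context tends to attach. Values are hypothetical.
p_dist = {
    ("NP+JKB", 1): 0.6, ("NP+JKB", 2): 0.3, ("NP+JKB", 3): 0.1,
}
# P(head word | dependent word): lexical bigram dependency. Hypothetical.
p_lex = {
    ("학교에", "갔다"): 0.4, ("학교에", "만났다"): 0.05,
}

def arc_score(dep, head, distance, path_context):
    """Log score of attaching `dep` to `head` at the given distance."""
    pd = p_dist.get((path_context, distance), 1e-6)
    pl = p_lex.get((dep, head), 1e-6)
    return math.log(pd) + math.log(pl)

# The closer, lexically compatible head scores higher.
print(arc_score("학교에", "갔다", 1, "NP+JKB"))
print(arc_score("학교에", "만났다", 3, "NP+JKB"))
```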

Sentence Similarity Measurement Method Using a Set-based POI Data Search (집합 기반 POI 검색을 이용한 문장 유사도 측정 기법)

  • Ko, EunByul; Lee, JongWoo
    • KIISE Transactions on Computing Practices, v.20 no.12, pp.711-716, 2014
  • With the growing interest in plagiarism detection and intelligent file content search, the demand for measuring the similarity between two sentences is increasing. Much research has been done on sentence similarity measurement in various directions, such as n-gram, edit distance, and LSA, and each method has its own advantages and disadvantages. In this paper, we propose a new sentence similarity measurement method that approaches the problem from another direction. The proposed method uses a set-based POI data search, which improves search performance over the existing hard-matching method when the data contains inversion, omission, insertion, or revision of characters. Using this method, we can measure the similarity between two sentences more accurately and more quickly. We modified the data loading and text search algorithms of the set-based POI data search, and we added a word operation algorithm and a similarity measure between two sentences expressed as a percentage. The experimental results show that our sentence similarity measurement method performs better than n-gram matching and the original set-based POI data search.
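
As a rough illustration of the set-based idea in the abstract above, the sketch below reduces each sentence to a set of characters and reports their overlap as a percentage, which tolerates the character inversion and omission that defeat hard matching. This is a generic set-overlap measure; the paper's modified POI data loading, text search, and word operation algorithms are not reproduced.

```python
# Sketch only: set-based similarity between two sentences, as a percentage.
def char_set(sentence: str) -> set:
    """Set of non-space characters; insensitive to order (inversion)."""
    return {ch for ch in sentence if not ch.isspace()}

def set_similarity(s1: str, s2: str) -> float:
    """Overlap of the two character sets, as a percentage (0-100)."""
    a, b = char_set(s1), char_set(s2)
    if not a or not b:
        return 0.0
    return 100.0 * len(a & b) / len(a | b)

# Robust to word-order inversion, unlike hard (exact) matching.
print(set_similarity("서울 시청", "시청 서울"))   # 100.0
print(set_similarity("서울 시청", "서울 구청"))   # partial overlap (60.0)
```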

Improving Recall for Context-Sensitive Spelling Correction Rules Through Integrated Constraint Loosening Method (통합적 제약완화 방식을 통한 한국어 문맥의존 철자오류 교정규칙의 재현율 향상)

  • Choi, Hyunsoo; Yoon, Aesun; Kwon, Hyukchul
    • KIISE Transactions on Computing Practices, v.21 no.6, pp.412-417, 2015
  • Context-sensitive spelling errors (CSSEs) are hard to correct, since each erroneous word is a perfectly valid word when analyzed alone. Because they can be detected only by considering the semantic and syntactic relations of their context, CSSEs largely determine the performance of spelling and grammar checkers. The existing Korean Spelling and Grammar Checker (KSGC 4.5) adopts a rule-based method that uses hand-made correction rules for CSSEs. With this rule-based method, KSGC 4.5 is designed to obtain very high precision, which results in extremely low recall. In this paper, we integrate our previous works that control the CSSE correction rules in order to improve the recall without sacrificing the precision. In addition to this integration, the optional insertion of adverbs and the conjugation suffixes of predicates are also considered as constraint-loosening linguistic features.
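
A minimal sketch of constraint loosening on a single hand-written context-sensitive rule, expressed here as a regular expression: the loosened rule allows an optional intervening adverb, so more error contexts match and recall rises. The word pair and rule below are illustrative assumptions, not rules from KSGC 4.5.

```python
import re

# Strict rule: "감기가" immediately followed by the misspelling "낳-" (should be "낫-").
strict = re.compile(r"감기가 낳")
# Loosened rule: allow one optional intervening word (e.g. an adverb such as "빨리"),
# so sentences like "감기가 빨리 낳았다" are also caught (higher recall).
loosened = re.compile(r"감기가 (?:\S+ )?낳")

for sent in ["감기가 낳았다", "감기가 빨리 낳았다"]:
    print(sent,
          "strict:", bool(strict.search(sent)),
          "loosened:", bool(loosened.search(sent)))
```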

Speech Recognition of the Korean Vowel 'ㅜ' Based on Time Domain Bulk Indicators (시간 영역 벌크 지표에 기반한 한국어 모음 'ㅜ'의 음성 인식)

  • Lee, Jae Won
    • KIISE Transactions on Computing Practices, v.22 no.11, pp.591-600, 2016
  • As computing technologies develop further, they are increasingly applied throughout everyday human environments and networks. In addition, the rapidly increasing interest in IoT has led to the wide acceptance of speech recognition as a means of HCI. In this study, we present a novel method for recognizing the Korean vowel 'ㅜ', as part of a phoneme-based Korean speech recognition system. The proposed method analyzes bulk indicators calculated in the time domain instead of analyzing the frequency domain, with a consequent reduction in computational cost. Four elementary algorithms for detecting typical waveform patterns of 'ㅜ' using bulk indicators are presented and combined to make the final decision. The experimental results show that the proposed method achieves 90.1% recognition accuracy and a recognition speed of 0.68 msec per syllable.
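
The sketch below illustrates what time-domain bulk indicators might look like: simple aggregate statistics computed directly on a frame of samples, with a threshold rule standing in for one of the paper's four elementary detection algorithms. The choice of indicators and thresholds is an assumption for illustration, not the published design.

```python
import numpy as np

def bulk_indicators(frame: np.ndarray) -> dict:
    """Aggregate time-domain statistics of one frame of samples."""
    return {
        "mean_abs": float(np.mean(np.abs(frame))),                      # bulk amplitude
        "zcr": float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2),     # zero-crossing rate
        "peak": float(np.max(np.abs(frame))),
    }

def looks_like_u(frame: np.ndarray) -> bool:
    """Toy rule: low zero-crossing rate and moderate amplitude."""
    ind = bulk_indicators(frame)
    return ind["zcr"] < 0.1 and ind["mean_abs"] > 0.01

# A low-frequency sinusoid as a stand-in for a voiced frame (20 ms at 16 kHz).
t = np.linspace(0, 0.02, 320, endpoint=False)
frame = 0.3 * np.sin(2 * np.pi * 250 * t)
print(bulk_indicators(frame), looks_like_u(frame))
```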

A Comparative Study on Optimal Feature Identification and Combination for Korean Dialogue Act Classification (한국어 화행 분류를 위한 최적의 자질 인식 및 조합의 비교 연구)

  • Kim, Min-Jeong; Park, Jae-Hyun; Kim, Sang-Bum; Rim, Hae-Chang; Lee, Do-Gil
    • Journal of KIISE: Software and Applications, v.35 no.11, pp.681-691, 2008
  • In this paper, we evaluate and compare the individual features and feature combinations needed for statistical Korean dialogue act classification. We implemented a Korean dialogue act classification system using the Support Vector Machine method. The experimental results show that the POS bigram does not work well and that the morpheme-POS pair and the other features can complement each other. In addition, a small number of features, selected by a feature selection technique such as chi-square, is enough to give steady dialogue act classification performance. We also found that the last eojeol plays an important role in classifying an entire sentence, and that Korean characteristics such as free word order and frequent subject ellipsis can affect the performance of dialogue act classification.
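
A minimal sketch of the pipeline flavor discussed above: sparse morpheme-POS pair features, chi-square feature selection, and a linear SVM for dialogue act classification. The toy utterances, their morpheme/POS segmentation, and the act labels are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Utterances pre-segmented into "morpheme/POS" tokens (hypothetical tagging).
utterances = [
    "예약/NNG 하/XSV 고/EC 싶/VX 은데요/EF",
    "몇/MM 시/NNB 에/JKB 출발/NNG 하/XSV ㅂ니까/EF",
    "네/IC 알/VV 겠/EP 습니다/EF",
    "요금/NNG 은/JX 얼마/NP 입니까/VCP+EF",
]
acts = ["request", "wh-question", "accept", "wh-question"]

pipe = make_pipeline(
    CountVectorizer(token_pattern=r"\S+"),   # each morpheme/POS pair is a feature
    SelectKBest(chi2, k=10),                 # keep the most informative features
    LinearSVC(),
)
pipe.fit(utterances, acts)
print(pipe.predict(["출발/NNG 시간/NNG 이/JKS 몇/MM 시/NNB 입니까/VCP+EF"]))
```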

Speech Recognition of the Korean Vowel 'ㅡ' based on Neural Network Learning of Bulk Indicators (벌크 지표의 신경망 학습에 기반한 한국어 모음 'ㅡ'의 음성 인식)

  • Lee, Jae Won
    • KIISE Transactions on Computing Practices, v.23 no.11, pp.617-624, 2017
  • Speech recognition is now one of the most widely used technologies in HCI. Many applications in which speech recognition may be used, such as home automation, automatic speech translation, and car navigation, are under active development, and the demand for speech recognition systems in mobile environments is rapidly increasing. This paper presents a method for instant recognition of the Korean vowel 'ㅡ', as part of a Korean speech recognition system. The proposed method uses bulk indicators, which are calculated in the time domain rather than the frequency domain, so the computational cost of recognition is reduced. Bulk indicators representing the predominant sequence patterns of the vowel 'ㅡ' are learned by neural networks, and the final recognition decisions are made by those trained networks. The experimental results show that the proposed method achieves 88.7% recognition accuracy and a recognition speed of 0.74 msec per syllable.
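
A minimal sketch of the neural-network step described above: a small classifier trained on time-domain bulk indicator vectors to decide whether a frame belongs to the vowel 'ㅡ'. The three-number indicator vectors and labels are fabricated stand-ins; the paper's actual indicators and training data are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row: [mean absolute amplitude, zero-crossing rate, peak amplitude].
X = np.array([
    [0.18, 0.03, 0.30],   # vowel-like frame
    [0.21, 0.04, 0.35],   # vowel-like frame
    [0.02, 0.45, 0.05],   # fricative-like frame
    [0.01, 0.50, 0.04],   # fricative-like frame
])
y = np.array([1, 1, 0, 0])  # 1 = 'ㅡ', 0 = other

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([[0.19, 0.05, 0.32]]))   # likely [1] for this separable toy data
```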

The automatic Lexical Knowledge acquisition using morpheme information and Clustering techniques (어절 내 형태소 출현 정보와 클러스터링 기법을 이용한 어휘지식 자동 획득)

  • Yu, Won-Hee; Suh, Tae-Won; Lim, Heui-Seok
    • The Journal of Korean Association of Computer Education, v.13 no.1, pp.65-73, 2010
  • This study proposes a lexical knowledge acquisition model based on unsupervised learning, in order to overcome the limitations of building lexical knowledge by hand under supervised approaches in natural language processing research. The proposed model obtains lexical knowledge automatically from input lexical entries through vectorization, clustering, and lexical knowledge acquisition steps. We show how the size and the characteristics of the acquired lexical knowledge dictionary change as the model's parameters change. The experimental results suggest that automatic construction of a machine-readable dictionary is feasible, since the number of acquired lexical class clusters was observed to remain stable; in addition, a lexical dictionary that includes left- and right-morphosyntactic information reflects the characteristics of Korean.

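The vectorize-then-cluster idea in the abstract above can be sketched roughly as follows: each eojeol is represented by the morphemes it contains, and similar eojeols are grouped into clusters that stand in for automatically acquired lexical classes. The eojeol list, segmentation, and cluster count are illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

# Eojeols pre-split into morphemes (hypothetical segmentation).
eojeols = ["학교 에", "학교 에서", "집 에", "집 에서",
           "먹 었 다", "먹 는 다", "가 았 다", "가 는 다"]

vec = CountVectorizer(token_pattern=r"\S+")   # one feature per morpheme
X = vec.fit_transform(eojeols)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for eojeol, cluster in zip(eojeols, km.labels_):
    # Intended grouping: noun+postposition eojeols vs. verb+ending eojeols.
    print(cluster, eojeol)
```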

Rule-based Speech Recognition Error Correction for Mobile Environment (모바일 환경을 고려한 규칙기반 음성인식 오류교정)

  • Kim, Jin-Hyung; Park, So-Young
    • Journal of the Korea Society of Computer and Information, v.17 no.10, pp.25-33, 2012
  • In this paper, we propose a rule-based model for correcting errors in speech recognition results in the mobile device environment. The proposed model takes into account the limited resources of mobile devices, such as processing time and memory, as follows. To minimize error correction processing time, the model removes processing steps such as morphological analysis and syllable composition and decomposition. The model also uses a longest-match rule selection method to generate one error correction candidate per point at which an error is assumed to occur. To save memory, the model uses neither an eojeol dictionary nor a morphological analyzer, and it stores a combined rule list without any classification. For ease of modification and maintenance, the error correction rules are extracted automatically from a training corpus. Experimental results show that the proposed model improves precision by 5.27% and recall by 5.60%, measured at the eojeol level, on the speech recognition results.
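
A minimal sketch of longest-match rule selection as described above: at each position in the recognizer output, among all correction rules whose pattern matches, only the one with the longest pattern is applied, yielding a single candidate per assumed error point. The rule list itself is a made-up example, not the automatically extracted one.

```python
# Hypothetical correction rules: recognized string -> corrected string.
rules = {
    "할수있": "할 수 있",   # longer pattern, preferred where it matches
    "할수": "할 수",
    "그럴수": "그럴 수",
}

def correct(text: str) -> str:
    out, i = [], 0
    while i < len(text):
        # Among rules matching at position i, pick the longest pattern.
        best = max((p for p in rules if text.startswith(p, i)),
                   key=len, default=None)
        if best is None:
            out.append(text[i]); i += 1
        else:
            out.append(rules[best]); i += len(best)
    return "".join(out)

print(correct("지금 할수있어요"))   # -> "지금 할 수 있어요"
```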

Shallow Parsing on Grammatical Relations in Korean Sentences (한국어 문법관계에 대한 부분구문 분석)

  • Lee, Song-Wook; Seo, Jung-Yun
    • Journal of KIISE: Software and Applications, v.32 no.10, pp.984-989, 2005
  • This study aims to identify grammatical relations (GRs) in Korean sentences. The key task is to find the GRs in sentences in terms of GR categories such as subject, object, and adverbial. In solving this problem, we are faced with many ambiguities. We propose a statistical model that resolves the grammatical relational ambiguity first and then finds the correct noun phrase (NP) arguments of given verb phrases (VPs) by using the probabilities of the GRs given the NPs and VPs in a sentence. The proposed model uses characteristics of the Korean language such as distance, no-crossing, and case properties. We estimate the probability of a GR given an NP and a VP with Support Vector Machine (SVM) classifiers. In an experiment with a tree- and GR-tagged corpus for training the model, we achieved overall accuracies of 84.8%, 94.1%, and 84.8% in identifying subject, object, and adverbial relations, respectively.
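
A rough sketch of estimating GR probabilities with an SVM, in the spirit of the model above: a classifier scores subject, object, and adverbial relations for an NP-VP pair from a few simple features. The feature choice (postposition on the NP, eojeol distance, crossing flag) and the toy training rows are assumptions for illustration.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OrdinalEncoder
from sklearn.svm import SVC

# Each row: (postposition on the NP, distance to the VP in eojeols, crossing? 0/1).
X_raw = [("이", 1, 0), ("가", 2, 0), ("이", 2, 0), ("가", 1, 0),
         ("을", 1, 0), ("를", 1, 0), ("을", 2, 0),
         ("에서", 2, 0), ("로", 3, 0), ("에", 2, 0)]
y = ["SUBJ", "SUBJ", "SUBJ", "SUBJ",
     "OBJ", "OBJ", "OBJ",
     "ADV", "ADV", "ADV"]

model = make_pipeline(OrdinalEncoder(), SVC(probability=True, random_state=0))
model.fit(X_raw, y)

# Probability of each GR for a new NP-VP pair marked with "가" at distance 1.
probs = model.predict_proba([("가", 1, 0)])[0]
print(dict(zip(model.classes_, probs.round(2))))
```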