• Title/Summary/Keyword: Korean Lexical Feature

34 search results

Improving the Performance of Korean Text Chunking by Machine Learning Approaches Based on Feature Set Selection (자질집합선택 기반의 기계학습을 통한 한국어 기본구 인식의 성능향상)

  • Hwang, Young-Sook; Chung, Hoo-jung; Park, So-Young; Kwak, Young-Jae; Rim, Hae-Chang
    • Journal of KIISE: Software and Applications / v.29 no.9 / pp.654-668 / 2002
  • In this paper, we present an empirical study on improving Korean text chunking through machine learning and feature set selection. We focus on two issues: selecting a feature set for Korean chunking, and alleviating data sparseness. To select a proper feature set, we use a heuristic search through the space of feature sets, using the estimated performance of a machine learning algorithm as a measure of the "incremental usefulness" of a particular feature set. In addition, to smooth the data sparseness, we suggest using a general part-of-speech tag set and selective lexical information that reflects the characteristics of Korean. Experimental results showed that chunk tags and lexical information within a given context window are important features, while spacing-unit information is less important than the others, independently of the machine learning technique. Furthermore, using selective lexical information yields not only a smoothing effect but also a smaller feature space than using all lexical information. Korean text chunking with the selected feature space achieved precision/recall of 90.99%/92.52% with memory-based learning and 93.39%/93.41% with decision tree learning.
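The heuristic search over feature sets described above can be sketched as a greedy forward selection, where the learner's estimated performance serves as the measure of each candidate's "incremental usefulness". The scorer and feature names below are hypothetical stand-ins for the actual chunker evaluation, for illustration only:

```python
def greedy_feature_selection(candidates, evaluate):
    """Greedy forward search over feature sets: repeatedly add the
    candidate whose inclusion most improves the estimated performance,
    and stop as soon as no addition helps."""
    selected = []
    best_score = evaluate(selected)
    remaining = list(candidates)
    while remaining:
        score, feat = max((evaluate(selected + [f]), f) for f in remaining)
        if score <= best_score:
            break
        selected.append(feat)
        remaining.remove(feat)
        best_score = score
    return selected, best_score

# Hypothetical scorer: chunk-tag and lexical features help the chunker,
# spacing-unit information does not (illustrative numbers only).
WEIGHTS = {"chunk_tag": 0.05, "lexical": 0.03, "spacing": 0.0}

def toy_evaluate(features):
    return 0.85 + sum(WEIGHTS[f] for f in features)

feats, score = greedy_feature_selection(WEIGHTS, toy_evaluate)
```

The search stops at the first step where no remaining feature raises the score, which matches the finding above that the spacing-unit feature is left out.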

Focus and Particle Constructions

  • Lee, Wonbin
    • Korean Journal of English Language and Linguistics / v.4 no.2 / pp.195-227 / 2004
  • This paper concerns focus phenomena, with particular reference to the two alternating orders (continuous vs. discontinuous) in English particle constructions. To explain this word-order alternation, I argue that the choice of order is closely related to the focus property of the object DPs. Following Drubig (2003), I assume that the focus feature is a syntactic feature assigned freely to a lexical head in the mapping from the lexicon (LEX) into the Lexical Array (LA). I argue that a focus-marked object DP cannot move out of its focus domain, and thus the continuous order is derived. A non-focus-marked object DP, however, moves out of VP in order to receive an appropriate interpretation, and as a result the discontinuous order is derived.


Recognition of Korean Implicit Citation Sentences Using Machine Learning with Lexical Features (어휘 자질 기반 기계 학습을 사용한 한국어 암묵 인용문 인식)

  • Kang, In-Su
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.8 / pp.5565-5570 / 2015
  • Implicit citation sentence recognition locates citation sentences that lack explicit citation markers in an article's full text. State-of-the-art approaches exploit word n-grams, clue words, researchers' surnames, mentions of previous methods, distance to the nearest explicit citation sentence, etc., reaching over 50% performance. However, most previous work has been conducted on English. For Korean, a rule-based method using positive/negative clue patterns was reported to attain a performance of 42%, leaving room for improvement. This study attempted to learn to recognize implicit citation sentences in the full text of Korean articles using Korean lexical features. Different lexical feature units, such as Eojeol, morpheme, and Eumjeol, were evaluated to determine proper lexical features for Korean implicit citation sentence recognition. In addition, combining the lexical features with position features representing backward/forward proximity to explicit citation sentences improved the performance to over 50%.
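The feature units and position features mentioned above can be sketched as follows. The segmentation is deliberately simplified (whitespace for Eojeol, syllable characters for Eumjeol); real morpheme units would require a Korean morphological analyzer, which is omitted here:

```python
def eojeol_features(sentence):
    """Eojeol: whitespace-delimited Korean word units."""
    return sentence.split()

def eumjeol_features(sentence):
    """Eumjeol: individual Hangul syllable characters (spaces dropped)."""
    return [ch for ch in sentence if not ch.isspace()]

def position_features(index, explicit_indices):
    """Backward/forward distance from sentence `index` to the nearest
    explicit citation sentence on each side (None if there is none)."""
    back = [index - i for i in explicit_indices if i < index]
    fwd = [i - index for i in explicit_indices if i > index]
    return (min(back) if back else None, min(fwd) if fwd else None)
```

For example, a sentence three positions after one explicit citation sentence and four positions before another gets the position feature `(3, 4)`, which a classifier can combine with its lexical features.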

The Method of the Evaluation of Verbal Lexical-Semantic Network Using the Automatic Word Clustering System (단어클러스터링 시스템을 이용한 어휘의미망의 활용평가 방안)

  • Kim, Hae-Gyung; Song, Mi-Young
    • Korean Journal of Oriental Medicine / v.12 no.3 s.18 / pp.1-15 / 2006
  • For the last several years, there has been much interest in lexical semantic networks. However, it is very difficult to evaluate their effectiveness and correctness and to devise methods for applying them to various problem domains. In order to offer fundamental ideas about how to evaluate and utilize lexical semantic networks, we developed two automatic word clustering systems, called system A and system B respectively. 68,455,856 words were used to train both systems. We compared the clustering results of system A to those of system B, which is extended with the lexical-semantic network by reconstructing its feature vectors from the elements of the lexical-semantic network of 3,656 '-ha' verbs. The target data is the 'multilingual WordNet-CoreNet'. When we compared the accuracy of the two systems, we found that system B showed an accuracy of 46.6%, better than system A's 45.3%.


Multi-stage Speech Recognition Using Confidence Vector (신뢰도 벡터 기반의 다단계 음성인식)

  • Jeon, Hyung-Bae; Hwang, Kyu-Woong; Chung, Hoon; Kim, Seung-Hi; Park, Jun; Lee, Yun-Keun
    • MALSORI / no.63 / pp.113-124 / 2007
  • In this paper, we propose using a confidence vector as an intermediate input feature in a multi-stage speech recognition architecture to improve recognition accuracy. The multi-stage structure is introduced to reduce the computational complexity of the decoding procedure and thus achieve faster speech recognition. A conventional multi-stage recognizer is usually composed of three stages: acoustic search, lexical search, and acoustic re-scoring. We focus on improving the accuracy of the lexical decoding by introducing a confidence vector as the input feature instead of the phonemes typically used. Experiments on a 220K-word Korean Point-of-Interest (POI) domain show that the proposed method improves accuracy.


A Semantic-Based Feature Expansion Approach for Improving the Effectiveness of Text Categorization by Using WordNet (문서범주화 성능 향상을 위한 의미기반 자질확장에 관한 연구)

  • Chung, Eun-Kyung
    • Journal of the Korean Society for Information Management / v.26 no.3 / pp.261-278 / 2009
  • Identifying optimal feature sets is crucial for improving the effectiveness of text categorization (TC). In this study, experiments on feature expansion were conducted using author-provided keyword sets and article titles from typical scientific journal articles. The tool used for expanding feature sets was WordNet, a lexical database for English. Given this data set and lexical tool, the study showed that feature expansion via synonymy relations was significantly effective in improving TC results. The experiments indicated that when feature sets were expanded with synonyms of the classifier names, the effectiveness of TC improved considerably regardless of word sense disambiguation.
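The synonym-based feature expansion can be sketched as below. The miniature synonym lexicon stands in for WordNet synsets; its entries are illustrative only and not taken from the study:

```python
# Hypothetical miniature synonym lexicon standing in for WordNet synsets.
SYNONYMS = {
    "retrieval": {"search"},
    "categorization": {"classification"},
}

def expand_features(terms, synonyms=SYNONYMS):
    """Return the feature set expanded with each term's synonyms, so a
    document mentioning only a synonym still matches the feature."""
    expanded = set(terms)
    for term in terms:
        expanded |= synonyms.get(term, set())
    return expanded
```

A document whose keyword set is `{"retrieval", "model"}` thus also carries the feature `"search"` after expansion, which is the smoothing effect the abstract describes.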

Study on the Recognition of Spoken Korean Continuous Digits Using Phone Network (음성망을 이용한 한국어 연속 숫자음 인식에 관한 연구)

  • Lee, G.S.; Lee, H.J.; Byun, Y.G.; Kim, S.H.
    • Proceedings of the KIEE Conference / 1988.07a / pp.624-627 / 1988
  • This paper describes the implementation of speaker-dependent recognition of Korean spoken continuous digits. The recognition system can be divided into two parts: an acoustic-phonetic processor and a lexical decoder. The acoustic-phonetic processor calculates feature vectors from the input speech signal and then performs frame labelling and phone labelling. Frame labelling uses a Bayesian classification method, and phone labelling uses the labelled frames and a posteriori probabilities. The lexical decoder accepts segments (phones) from the acoustic-phonetic processor and decodes their lexical structure through a phone network constructed from the phonetic representations of the ten digits. The experiment was carried out with two sets of 4-digit continuous strings, each set composed of 35 patterns. An evaluation of the system yielded a pattern accuracy of about 80 percent, resulting from a word accuracy of about 95 percent.


The Method of Using the Automatic Word Clustering System for the Evaluation of Verbal Lexical-Semantic Network (동사 어휘의미망 평가를 위한 단어클러스터링 시스템의 활용 방안)

  • Kim, Hae-Gyung; Yoon, Ae-Sun
    • Journal of the Korean Society for Library and Information Science / v.40 no.3 / pp.175-190 / 2006
  • For the last several years, there has been much interest in lexical semantic networks. However, it is very difficult to evaluate their effectiveness and correctness and to devise methods for applying them to various problem domains. In order to offer fundamental ideas about how to evaluate and utilize lexical semantic networks, we developed two automatic word clustering systems, called system A and system B respectively. 68,455,856 words were used to train both systems. We compared the clustering results of system A to those of system B, which is extended with the lexical-semantic network by reconstructing its feature vectors from the elements of the lexical-semantic network of 3,656 '-ha' verbs. The target data is the 'multilingual WordNet-CoreNet'. When we compared the accuracy of the two systems, we found that system B showed an accuracy of 46.6%, better than system A's 45.3%.

Improvement and Evaluation of the Korean Large Vocabulary Continuous Speech Recognition Platform (ECHOS) (한국어 음성인식 플랫폼(ECHOS)의 개선 및 평가)

  • Kwon, Suk-Bong; Yun, Sung-Rack; Jang, Gyu-Cheol; Kim, Yong-Rae; Kim, Bong-Wan; Kim, Hoi-Rin; Yoo, Chang-Dong; Lee, Yong-Ju; Kwon, Oh-Wook
    • MALSORI / no.59 / pp.53-68 / 2006
  • We report evaluation results for the Korean speech recognition platform ECHOS. The platform has an object-oriented, reusable architecture so that researchers can easily evaluate their own algorithms. It provides all the intrinsic modules needed to build a large-vocabulary speech recognizer: noise reduction, end-point detection, feature extraction, hidden Markov model (HMM)-based acoustic modeling, cross-word modeling, n-gram language modeling, n-best search, word graph generation, and Korean-specific language processing. The platform supports both lexical search trees and finite-state networks. It performs word-dependent n-best search with a bigram in the forward search stage and rescores the lattice with a trigram in the backward stage. In an 8000-word continuous speech recognition task, the platform with a lexical tree increases word errors by 40% but decreases recognition time by 50% compared to the HTK platform with a flat lexicon. ECHOS reduces recognition errors by 40% through the incorporation of cross-word modeling. With the number of Gaussian mixtures increased to 16, it yields word accuracy comparable to the previous lexical-tree-based platform, Julius.
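A lexical search tree of the kind the platform supports can be sketched as a prefix tree over phoneme sequences, so that words sharing a pronunciation prefix also share that prefix's acoustic evaluation during decoding. The toy lexicon and phone symbols below are hypothetical, not ECHOS data structures:

```python
def build_lexical_tree(lexicon):
    """Build a prefix tree over phoneme sequences; words with a common
    pronunciation prefix share the corresponding path."""
    root = {}
    for word, phones in lexicon.items():
        node = root
        for p in phones:
            node = node.setdefault(p, {})
        node["#"] = word          # word-end marker at the leaf
    return root

def lookup(tree, phones):
    """Return the word ending exactly at this phone sequence, else None."""
    node = tree
    for p in phones:
        if p not in node:
            return None
        node = node[p]
    return node.get("#")

# Toy pronunciation lexicon (illustrative phone symbols).
LEXICON = {"seoul": ["s", "eo", "u", "l"], "seosan": ["s", "eo", "s", "a", "n"]}
tree = build_lexical_tree(LEXICON)
```

Here "seoul" and "seosan" share the path `s → eo`, which is exactly the sharing that makes tree decoding faster than a flat lexicon, at the cost of delayed word identity (a source of the error increase noted above).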


Vocabulary Recognition Retrieval Optimized System using MLHF Model (MLHF 모델을 적용한 어휘 인식 탐색 최적화 시스템)

  • Ahn, Chan-Shik; Oh, Sang-Yeob
    • Journal of the Korea Society of Computer and Information / v.14 no.10 / pp.217-223 / 2009
  • Vocabulary recognition systems on mobile terminals use statistical methods for vocabulary recognition, typically statistical grammar recognition with N-grams. As the vocabulary grows beyond the terminal's memory and arithmetic processing capacity, the recognition algorithm becomes complicated and requires a large search space and long processing times. This study suggests optimizing vocabulary recognition with an MLHF system. MLHF separates acoustic search from lexical search using FLaVoR: the acoustic search extracts feature vectors from the speech signal using HMMs, and the lexical search performs recognition using the Levenshtein distance algorithm. The system showed a vocabulary-dependent recognition rate of 98.63%, a vocabulary-independent recognition rate of 97.91%, and a recognition speed of 1.61 seconds.
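The Levenshtein distance used in the lexical search stage is the standard dynamic-programming edit distance; a minimal implementation over symbol sequences:

```python
def levenshtein(a, b):
    """Edit distance between sequences a and b, computed row by row:
    prev[j] holds the distance between the processed prefix of a and b[:j]."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[len(b)]
```

In a lexical search, the recognized symbol sequence would be compared against each vocabulary entry's sequence and the nearest entry chosen; the function works on strings or on lists of phone symbols alike.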