• Title/Summary/Keyword: lexical information (어휘정보)


An analysis of Speech Acts for Korean Using Support Vector Machines (지지벡터기계(Support Vector Machines)를 이용한 한국어 화행분석)

  • En Jongmin;Lee Songwook;Seo Jungyun
    • The KIPS Transactions:PartB
    • /
    • v.12B no.3 s.99
    • /
    • pp.365-368
    • /
    • 2005
  • We propose a speech act analysis method for Korean dialogue using Support Vector Machines (SVM). We use the lexical form of a word, its part-of-speech (POS) tag, and bigrams of POS tags as sentence features, and the context of previous utterances as context features. We select informative features by chi-square statistics. After training the SVM with the selected features, the SVM classifiers determine the speech act of each utterance. In experiments with a dialogue corpus for the hotel reservation domain, we achieved an overall accuracy of 90.54%.
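The pipeline described above (word/POS features, chi-square selection, SVM classification) can be sketched as follows; the utterances, labels, and feature count are invented toy stand-ins for the paper's hotel-reservation corpus, and scikit-learn is used in place of the authors' implementation:

```python
# Sketch: chi-square feature selection feeding an SVM speech-act classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Toy utterances already converted to "word/POS" tokens (hypothetical data).
utterances = [
    "i/PRP want/VBP a/DT room/NN",
    "how/WRB much/JJ is/VBZ it/PRP",
    "please/UH book/VB it/PRP",
    "what/WP time/NN is/VBZ checkout/NN",
]
speech_acts = ["request", "ask-ref", "request", "ask-ref"]

pipeline = Pipeline([
    # unigrams and bigrams of the word/POS tokens, echoing the paper's features
    ("features", CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+")),
    # keep only the k features most associated with the speech-act label
    ("select", SelectKBest(chi2, k=8)),
    ("svm", LinearSVC()),
])
pipeline.fit(utterances, speech_acts)
print(pipeline.predict(["can/MD i/PRP book/VB a/DT room/NN"]))
```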

Design and Implementation of Short-Essay Marking System by Using Semantic Kernel and WordNet (의미 커널과 워드넷을 이용한 주관식 문제 채점 시스템의 설계 및 구현)

  • Cho, Woo-Jin;Chu, Seung-Woo;O, Jeong-Seok;Kim, Han-Saem;Kim, Yu-Seop;Lee, Jae-Young
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2005.05a
    • /
    • pp.1027-1030
    • /
    • 2005
  • Existing short-essay grading systems that apply a semantic kernel attempted to address natural-language processing problems by expressing, as vectors, the correlations between answers and index terms extracted from a corpus. In this paper, to resolve the possibility of similarity-computation errors caused by the limited representation of answers and index terms in the existing system, we apply answer expansion by random extraction using a thesaurus. Noting that in descriptive short-essay grading the vocabulary used carries more grading weight than the sentence context, we expand both the examiner's and the examinee's answers into synonym and near-synonym groups to improve grading performance. First, index terms are extracted from both answers with a morphological analyzer and then expanded into synonym and near-synonym groups using WordNet. These are built into vectors for measuring correlations between words via corpus indexing, and answer similarity is computed by applying the semantic kernel. Computing the correlation coefficient between the examiner's grades and each model's scores, the ELSA model showed the highest correlation.
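The core idea, expanding both answers into synonym groups before measuring vector similarity, can be sketched as follows; the synonym table is a hand-built stand-in for WordNet and the terms are hypothetical, not the paper's data:

```python
# Sketch: synonym-group expansion of answers, then cosine similarity
# over binary index-term vectors.
import math

SYNSETS = [  # hand-built stand-in for WordNet synsets
    {"car", "automobile"},
    {"fast", "quick", "rapid"},
]

def expand(terms):
    """Expand index terms into their synonym groups."""
    expanded = set(terms)
    for t in terms:
        for group in SYNSETS:
            if t in group:
                expanded |= group
    return expanded

def cosine(a, b):
    """Cosine similarity of two binary index-term vectors."""
    vocab = sorted(a | b)
    va = [1 if w in a else 0 for w in vocab]
    vb = [1 if w in b else 0 for w in vocab]
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(x * x for x in vb))
    return dot / (na * nb)

model_answer = expand({"car", "fast"})            # examiner's answer terms
student_answer = expand({"automobile", "quick"})  # examinee's answer terms
print(cosine(model_answer, student_answer))  # high despite no shared surface words
```

Without the expansion step the two answers above share no terms and score 0; expansion is what lets the grader credit synonymous vocabulary.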


Corpus-Based Ambiguity-Driven Learning of Context-Dependent Lexical Rules for Part-of-Speech Tagging (품사태깅을 위한 어휘문맥 의존규칙의 말뭉치기반 중의성주도 학습)

  • 이상주;류원호;김진동;임해창
    • Journal of KIISE:Software and Applications
    • /
    • v.26 no.1
    • /
    • pp.178-178
    • /
    • 1999
  • Most stochastic taggers cannot resolve some morphological ambiguities that can be resolved only by referring to lexical contexts, because they use only contextual probabilities based on tag n-grams and lexical probabilities. Existing lexical rules are effective for resolving such ambiguities because they can refer to lexical contexts. However, they have two limitations. One is that human experts tend to make erroneous rules because the rules are deterministic. The other is that acquiring the rules is hard and time-consuming because they must be acquired manually. In this paper, we propose context-dependent lexical rules, which are lexical rules based on the statistics of a tagged corpus, and an ambiguity-driven learning method, which automatically acquires the proposed rules from a tagged corpus. Using the proposed rules, the proposed tagger can partially annotate an unseen corpus with high accuracy because it is a kind of memorizing tagger that can annotate a training corpus with 100% accuracy. Thus, the proposed tagger is useful for improving the accuracy of a stochastic tagger, and it is also effective for detecting and correcting tagging errors in a manually tagged corpus. Moreover, the experimental results show that the proposed method is also effective for English part-of-speech tagging.
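A minimal sketch of a memorizing tagger of this kind, assuming a (previous word, word, next word) lexical context and a toy corpus; the rule format is an illustration, not the paper's exact one:

```python
# Toy "memorizing" rule tagger: learn lexical-context rules
# (previous word, word, next word) -> tag from a tagged corpus, and tag
# an unseen sentence only where a memorized context matches.
from collections import Counter, defaultdict

def learn_rules(tagged_sentences):
    counts = defaultdict(Counter)
    for sent in tagged_sentences:
        words = [w for w, _ in sent]
        for i, (w, tag) in enumerate(sent):
            prev_w = words[i - 1] if i > 0 else "<s>"
            next_w = words[i + 1] if i + 1 < len(words) else "</s>"
            counts[(prev_w, w, next_w)][tag] += 1
    # keep the most frequent tag for each lexical context
    return {ctx: c.most_common(1)[0][0] for ctx, c in counts.items()}

def tag_partially(words, rules):
    out = []
    for i, w in enumerate(words):
        prev_w = words[i - 1] if i > 0 else "<s>"
        next_w = words[i + 1] if i + 1 < len(words) else "</s>"
        out.append(rules.get((prev_w, w, next_w)))  # None = left untagged
    return out

corpus = [[("the", "DT"), ("dog", "NN"), ("barks", "VBZ")]]
rules = learn_rules(corpus)
print(tag_partially(["the", "dog", "barks"], rules))  # fully memorized
print(tag_partially(["a", "dog", "barks"], rules))    # partial: two contexts unseen
```

Positions whose context was never seen stay untagged, which is why such a tagger annotates only partially but with high accuracy where it does fire.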

Semantic Ontology Speech Recognition Performance Improvement using ERB Filter (ERB 필터를 이용한 시맨틱 온톨로지 음성 인식 성능 향상)

  • Lee, Jong-Sub
    • Journal of Digital Convergence
    • /
    • v.12 no.10
    • /
    • pp.265-270
    • /
    • 2014
  • Existing speech recognition algorithms have several problems: they do not distinguish vocabulary order, voice detection is inaccurate under noise as the recognition environment changes, and retrieval systems return results mismatched to the user's request because keywords carry multiple meanings. In this article, we propose an event-based semantic ontology inference model, and the proposed system extracts speech recognition features using an ERB filter. The proposed model was evaluated with train-station and train noise. Noise removal was performed on signals at SNRs of -10 dB and -5 dB in the noise environment. Distortion measurements confirmed performance improvements of 2.17 dB and 1.31 dB, respectively.
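The ERB (Equivalent Rectangular Bandwidth) scale referenced here is commonly defined by the Glasberg & Moore (1990) formulas; assuming that definition, the bandwidth and ERB-rate conversions used to place auditory filter-bank channels look like:

```python
# Glasberg & Moore (1990) ERB formulas, commonly used to design
# ERB-spaced (e.g. gammatone) filter banks for feature extraction.
import math

def erb_bandwidth(f_hz):
    """ERB of the auditory filter centered at f_hz, in Hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def erb_rate(f_hz):
    """Frequency expressed in ERB-rate units (number of ERBs below f_hz)."""
    return 21.4 * math.log10(4.37 * f_hz / 1000.0 + 1.0)

print(erb_bandwidth(1000.0))  # ~132.6 Hz
print(erb_rate(1000.0))       # ~15.6 ERBs
```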

Determination of an Optimal Sentence Segmentation Position using Statistical Information and Genetic Learning (통계 정보와 유전자 학습에 의한 최적의 문장 분할 위치 결정)

  • 김성동;김영택
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.35C no.10
    • /
    • pp.38-47
    • /
    • 1998
  • The syntactic analysis for practical machine translation should be able to analyze long sentences, but long-sentence analysis is a critical problem because of its high complexity. In this paper, a sentence segmentation method is proposed for the efficient analysis of long sentences, and a method of determining optimal segmentation positions using statistical information and genetic learning is introduced. It consists of two modules: (1) decomposable position determination, which uses lexical contextual constraints acquired from training data tagged with segmentation positions; and (2) segmentation position selection by a selection function whose parameter weights are determined through genetic learning, which selects safe segmentation positions while enhancing analysis efficiency as much as possible. Experiments demonstrate that the proposed method segments safely and improves the efficiency of the analysis.
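The second module, a selection function whose weights are tuned by genetic learning, can be sketched as follows; the candidate features, labels, and fitness function are invented placeholders, not the paper's actual parameters:

```python
# Toy genetic search: a linear selection function scores candidate
# segmentation positions, and evolution tunes the feature weights so the
# function agrees with gold "safe" / "unsafe" labels.
import random

random.seed(0)

# candidate positions described by feature vectors, with a gold label
# saying whether segmenting there was safe in the training data
candidates = [
    ([1.0, 0.2], 1), ([0.1, 0.9], 0), ([0.8, 0.3], 1),
    ([0.2, 0.8], 0), ([0.9, 0.1], 1), ([0.3, 0.7], 0),
]

def score(weights, feats):
    return sum(w * f for w, f in zip(weights, feats))

def fitness(weights):
    # number of candidates the selection function classifies correctly
    return sum((score(weights, f) > 0) == bool(y) for f, y in candidates)

def evolve(pop_size=20, generations=30):
    pop = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)    # crossover + Gaussian mutation
            children.append([(x + y) / 2 + random.gauss(0, 0.1)
                             for x, y in zip(a, b)])
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "of", len(candidates), "positions classified correctly")
```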


Political Information Filtering on Online News Comment (정보 중립성 확보를 위한 인터넷 뉴스 댓글의 정치성향 분석)

  • Choi, Hyebong;Kim, Jaehong;Lee, Jihyun;Lee, Mingu
    • The Journal of the Convergence on Culture Technology
    • /
    • v.6 no.4
    • /
    • pp.575-582
    • /
    • 2020
  • We propose a method to estimate the political preference of users who write comments on internet news. We collected and analyzed a massive amount of news comment data to extract features that effectively characterize users' political preferences. We expect this to help users obtain unbiased information from internet news and online discussion by providing the estimated political stance of a news comment's writer. Through comprehensive tests we demonstrate the effectiveness of the two proposed methods, a lexicon-based algorithm and a similarity-based algorithm.
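A lexicon-based scoring of the kind mentioned can be sketched as follows; the word list, weights, and labels are invented placeholders, not the paper's learned lexicon:

```python
# Sketch: score a comment by summing stance weights of its words,
# then threshold the total into a stance label.
LEXICON = {  # word -> stance weight (+ = leaning A, - = leaning B); invented
    "reform": +1.0, "growth": +0.5,
    "welfare": -1.0, "union": -0.5,
}

def stance_score(comment):
    words = comment.lower().split()
    return sum(LEXICON.get(w, 0.0) for w in words)

def classify(comment, threshold=0.0):
    s = stance_score(comment)
    if s > threshold:
        return "A-leaning"
    if s < -threshold:
        return "B-leaning"
    return "neutral"

print(classify("we need reform and growth"))  # A-leaning
print(classify("protect welfare programs"))   # B-leaning
```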

In Out-of Vocabulary Rejection Algorithm by Measure of Normalized improvement using Optimization of Gaussian Model Confidence (미등록어 거절 알고리즘에서 가우시안 모델 최적화를 이용한 신뢰도 정규화 향상)

  • Ahn, Chan-Shik;Oh, Sang-Yeob
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.12
    • /
    • pp.125-132
    • /
    • 2010
  • In vocabulary recognition, unseen tri-phones appear during recognition training. Because initial estimates of the model parameters are not created for them, models for their phoneme data cannot be built, and the accuracy of the Gaussian model cannot be secured. To improve this, we propose a method that optimizes the Gaussian model's parameters using the probability distribution. Optimizing the probability distribution improves the confidence of the Gaussian model, providing accuracy and supporting the search for phoneme data. A system performance comparison shows a 1.7% recognition improvement with the proposed out-of-vocabulary rejection algorithm using normalized confidence.

Implementation of A Morphological Analyzer Based on Pseudo-morpheme for Large Vocabulary Speech Recognizing (대어휘 음성인식을 위한 의사형태소 분석 시스템의 구현)

  • 양승원
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.4 no.2
    • /
    • pp.102-108
    • /
    • 1999
  • It is important to decide the processing unit in a large vocabulary speech recognition system. We propose the pseudo-morpheme as the recognition unit to resolve the problems of recognition systems that use the phrase or the general morpheme. We implement a morphological analysis system and tagger for pseudo-morphemes. A speech processing system using pseudo-morphemes obtains better results than systems using the phrase or the general morpheme, so the quality of the whole spoken-language translation system can be improved. The analysis ratio of our implemented system is similar to that of common morphological analysis systems.


Morpheme Conversion for korean Text-to-Sign Language Translation System (한국어-수화 번역시스템을 위한 형태소 변환)

  • Park, Su-Hyun;Kang, Seok-Hoon;Kwon, Hyuk-Chul
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.3
    • /
    • pp.688-702
    • /
    • 1998
  • In this paper, we propose sign language morpheme generation rules corresponding to the morpheme analysis of each part of speech. Korean natural sign language has an extremely limited vocabulary, and the number of grammatical components currently in use is limited, too. Therefore, we define a natural sign language grammar corresponding to Korean grammar in order to translate natural Korean sentences into the corresponding sign language. Each phrase should define a sign language morpheme generation grammar, which differs from the Korean analysis grammar. This grammar is then applied to morpheme analysis/combination rules and sentence structure analysis rules. Defining this grammar lets us generate the most natural sign language.


XSLT Scripts for Fast XML Document Transformation (XML 문서의 빠른 변환을 위한 XSLT 스크립트)

  • Shin Dong-Hoon;Lee Kyong-Ho
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.11 no.6
    • /
    • pp.538-549
    • /
    • 2005
  • This paper proposes a method of generating XSLT scripts that support the fast transformation of XML documents, given one-to-one matching relationships between the leaf nodes of two XML schemas. The proposed method consists of two steps: computing matchings between cardinality nodes and generating XSLT scripts. Matching relationships between cardinality nodes are computed using the proposed lexical and structural similarities, and an XSLT script is generated based on them. Experimental results show that the proposed method generates XSLT scripts that transform XML documents faster than those of previous work.
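The matching step, combining lexical similarity of node names with structural similarity of their paths, can be sketched roughly like this; the schema paths, similarity measures, and 50/50 weighting are invented for illustration:

```python
# Sketch: pair leaf nodes of two schemas by a combined lexical +
# structural similarity score.
import difflib

def lexical_sim(a, b):
    # string similarity of node names (one simple choice of measure)
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def structural_sim(path_a, path_b):
    # fraction of ancestor/name labels the two paths share
    shared = len(set(path_a) & set(path_b))
    return shared / max(len(path_a), len(path_b))

def combined(a, b, w=0.5):
    return w * lexical_sim(a[-1], b[-1]) + (1 - w) * structural_sim(a, b)

# leaf nodes given as root-to-leaf label paths (hypothetical schemas)
source = [("order", "customer", "name"), ("order", "item", "price")]
target = [("purchase", "buyer", "fullname"), ("purchase", "item", "cost")]

for s in source:
    best = max(target, key=lambda t: combined(s, t))
    print("/".join(s), "->", "/".join(best))
```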