• Title/Summary/Keyword: keyword similarity (핵심어 유사도)


Performance Improvement of Word Spotting Using State Weighting of HMM (HMM의 상태별 가중치를 이용한 핵심어 검출의 성능 향상)

  • 최동진
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.06e
    • /
    • pp.305-308
    • /
    • 1998
  • This paper proposes a new post-processing method for improving the performance of keyword spotting. In general, the likelihoods of the top n candidate words detected by a keyword spotting system are often similar, so the chance of misrecognition among acoustically similar keywords for a given speech segment is high. The post-processing methods used in existing keyword spotting, however, evaluate the likelihood with equal weight over all segments of the speech, and are therefore poorly suited to comparing similar keywords with similar acoustic characteristics. To solve this, we propose an algorithm that improves discrimination by applying weights based on the partial acoustic differences between candidate words when computing the likelihood. Experimental results show that the proposed method increases the discrimination between similar candidate words and, at a recognition rate of 93%, reduces the false alarm rate by 19.6% compared with the likelihood ratio test method.

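The state-weighting idea above can be sketched in a few lines: per-state log-likelihoods are combined with weights that emphasize the states where similar candidates actually differ. The numbers and the weighting scheme below are illustrative only, not the paper's implementation.

```python
# Sketch of state-weighted likelihood rescoring for keyword candidates.
# All values and the weighting scheme are hypothetical.

def weighted_log_likelihood(state_log_likes, weights):
    """Combine per-HMM-state log-likelihoods using discriminative weights.

    state_log_likes: per-state log-likelihoods for one candidate keyword
    weights: per-state weights emphasizing acoustically discriminative states
    """
    assert len(state_log_likes) == len(weights)
    total = sum(w * ll for w, ll in zip(weights, state_log_likes))
    return total / len(weights)  # keep candidates comparable in scale

# Two acoustically similar candidates: uniform weighting nearly ties them.
cand_a = [-2.0, -1.0, -3.0]
cand_b = [-2.0, -2.5, -1.6]
uniform = [1.0, 1.0, 1.0]
# Emphasize state 1, where the two candidates actually differ.
discrim = [0.5, 2.0, 0.5]

print(weighted_log_likelihood(cand_a, uniform))  # -2.0
print(weighted_log_likelihood(cand_b, uniform))  # about -2.03: nearly tied
print(weighted_log_likelihood(cand_a, discrim))  # -1.5: clearly separated
print(weighted_log_likelihood(cand_b, discrim))  # about -2.27
```

With uniform weights the two candidates are almost indistinguishable; weighting the discriminative state separates them, which is the effect the abstract reports.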

A Semi-Automatic Semantic Mark Tagging System for Building Dialogue Corpus (대화 말뭉치 구축을 위한 반자동 의미표지 태깅 시스템)

  • Park, Junhyeok;Lee, Songwook;Lim, Yoonseob;Choi, Jongsuk
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.5
    • /
    • pp.213-222
    • /
    • 2019
  • Determining the meaning of a keyword in a speech dialogue system is an important technology for the future implementation of an intelligent speech dialogue interface. After extracting keywords to grasp the intention of a user's utterance, the intention is determined using the semantic marks of the keywords. One keyword can have several semantic marks, and we regard the task of attaching the semantic mark that matches the user's intention to such a keyword as a word sense disambiguation problem. In this study, about 23% of all keywords in the corpus are manually tagged to build a semantic mark dictionary, a synonym dictionary, and a context vector dictionary, and the remaining 77% are then automatically tagged. The semantic mark of a keyword is determined by calculating context vector similarity from the context vector dictionary. For an unregistered keyword, the semantic mark of the most similar keyword is attached using the synonym dictionary. We compare the performance of the system with a manually constructed training set and a semi-automatically expanded training set by selecting 3 high-frequency keywords and 3 low-frequency keywords from the corpus. In experiments, we obtained an accuracy of 54.4% with the manually constructed training set and 50.0% with the semi-automatically expanded training set.
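The tagging step described above reduces to choosing the stored semantic mark whose context vector is most similar to the current context. A minimal sketch, with a hypothetical context-vector dictionary standing in for the paper's semi-automatically built resources:

```python
# Minimal sketch of semantic-mark selection via context-vector similarity.
# The marks and vectors are hypothetical stand-ins for the paper's dictionaries.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def tag_semantic_mark(context_vec, mark_vectors):
    """Return the semantic mark whose stored context vector is closest."""
    return max(mark_vectors, key=lambda m: cosine(context_vec, mark_vectors[m]))

# Hypothetical context-vector dictionary for one ambiguous keyword.
mark_vectors = {
    "TIME":  [0.9, 0.1, 0.0],
    "PLACE": [0.1, 0.8, 0.3],
}
print(tag_semantic_mark([0.7, 0.2, 0.1], mark_vectors))  # TIME
```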

Key-word Recognition System using Signification Analysis and Morphological Analysis (의미 분석과 형태소 분석을 이용한 핵심어 인식 시스템)

  • Ahn, Chan-Shik;Oh, Sang-Yeob
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.11
    • /
    • pp.1586-1593
    • /
    • 2010
  • Existing error correction methods for vocabulary recognition include probabilistic pattern matching and dynamic pattern matching, which identify keywords in a sentence through semantic analysis. They therefore fail on keywords whose morphological shape has changed, since such forms cannot be analyzed semantically. This paper proposes a method that improves the recognition rate by reducing unrecognized vocabulary. A syllable restoration algorithm finds the meaning of recognized phonemes through a phoneme semantic-analysis process, and sentences are restored using morphological analysis. The error correction rate is determined using phoneme likelihood and confidence during system parsing, and error correction is performed on erroneous vocabulary during recognition. In a comparison of system performance, the proposed method improved recognition by 2.0% over methods based on error pattern learning, error pattern matching, and vocabulary mean patterns.

Key-word Error Correction System using Syllable Restoration Algorithm (음절 복원 알고리즘을 이용한 핵심어 오류 보정 시스템)

  • Ahn, Chan-Shik;Oh, Sang-Yeob
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.10
    • /
    • pp.165-172
    • /
    • 2010
  • There are two error correction methods in vocabulary recognition systems: one based on error pattern matching and the other on vocabulary mean patterns. Both fail to handle the semantics of keywords during error correction. To improve on this, this paper proposes a keyword error correction system using a syllable restoration algorithm. The system corrects keyword errors by semantically parsing the meaning of recognized phonemes, and the syllable restoration algorithm restores words by applying phoneme fluctuation rules, parsing keywords precisely and reducing unrecognized vocabulary. The error correction rate is determined using phoneme likelihood and confidence during system parsing, and error correction is performed on erroneous vocabulary during recognition. In a comparison of system performance, the proposed method improved recognition by 2.3% over methods based on error pattern learning, error pattern matching, and vocabulary mean patterns.

Evaluation Method of Machine Translation System (기계번역 성능평가를 위한 핵심어 전달율 측정방안)

  • Yu, Cho-Rong;Lee, Young-Jik;Park, Jun
    • Annual Conference on Human and Language Technology
    • /
    • 2003.10d
    • /
    • pp.241-245
    • /
    • 2003
  • This paper describes a 'keyword transfer rate measurement' method for evaluating the performance of machine translation systems. MT evaluation can be considered from two perspectives. The first is objective evaluation, such as the BLEU score advocated by IBM or NIST's NIST score. Objective evaluation excludes the evaluator's subjective judgment and linguistic characteristics, automatically measuring fluency and adequacy with a program. The second is subjective evaluation, in which human evaluators judge translation quality; representative examples are NESPOLE and LDC. Subjective evaluation yields reliable results through accurate human judgment, but it is costly, time-consuming, and not reusable. To address these problems, this paper proposes a new evaluation method, 'keyword transfer rate measurement', which extracts keywords from the source sentence and automatically measures the degree to which those keywords are conveyed in the MT system's output. This is expected to save the cost and time of evaluation while serving as a reliable indicator comparable to subjective evaluation.

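The core of a keyword transfer rate is simple to state: the fraction of source-side keywords that survive into the MT output. The sketch below reduces keyword extraction to a given list; the paper extracts keywords automatically, and the sentences here are hypothetical.

```python
# Sketch of a keyword transfer rate: the fraction of source-side keywords
# that appear in the MT output. Keyword extraction is assumed done upstream.

def keyword_transfer_rate(keywords, translation):
    """Fraction of keywords that appear in the translated sentence."""
    if not keywords:
        return 0.0
    tokens = set(translation.lower().split())
    hits = sum(1 for kw in keywords if kw.lower() in tokens)
    return hits / len(keywords)

keywords = ["meeting", "Tuesday", "budget"]
mt_output = "The budget meeting is scheduled for Monday"
print(keyword_transfer_rate(keywords, mt_output))  # 2 of 3, about 0.67
```

A real implementation would need stemming or morphological matching so that inflected forms of a keyword still count as transferred.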

A Document Summarization System Using Dynamic Connection Graph (동적 연결 그래프를 이용한 자동 문서 요약 시스템)

  • Song, Won-Moon;Kim, Young-Jin;Kim, Eun-Ju;Kim, Myung-Won
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.1
    • /
    • pp.62-69
    • /
    • 2009
  • The purpose of document summarization is to provide easy and quick understanding of documents by extracting summarized information from documents produced by various application programs. In this paper, we propose a document summarization method that creates and analyzes a connection graph representing the similarity of the keyword lists of sentences in a document, taking into account the mean length (the number of keywords) of its sentences. We implemented a system that automatically generates a summary from a document using the proposed method. To evaluate the performance of the method, we used a set of 20 documents paired with their correct summaries and measured precision, recall, and F-measure. The experimental results show that the proposed method is more efficient than the existing methods.
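A connection graph of the kind described can be sketched as follows: sentences are keyword sets, edges connect sentences whose keyword overlap exceeds a threshold, and well-connected sentences are extracted. The threshold and the degree-based scoring are illustrative assumptions, not the paper's exact construction.

```python
# Sketch of a keyword-overlap connection graph for extractive summarization.
# Threshold and degree scoring are illustrative, not the paper's method.

def build_graph(keyword_lists, threshold=1):
    """Edge (i, j) iff sentences i and j share more than `threshold` keywords."""
    n = len(keyword_lists)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if len(keyword_lists[i] & keyword_lists[j]) > threshold:
                edges.add((i, j))
    return edges

def top_sentences(keyword_lists, k=1, threshold=1):
    """Indices of the k most-connected sentences (ties broken by position)."""
    edges = build_graph(keyword_lists, threshold)
    degree = [0] * len(keyword_lists)
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1
    return sorted(range(len(keyword_lists)), key=lambda i: -degree[i])[:k]

sentences = [
    {"tax", "policy", "reform"},
    {"tax", "reform", "vote"},
    {"weather", "rain"},
    {"tax", "policy", "vote"},
]
print(top_sentences(sentences, k=1))  # [0]
```

Sentence 2 shares no keywords with the rest and is isolated in the graph, so it is never selected; the off-topic sentence is exactly what this construction filters out.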

Exploring inter-media agenda-setting effects: Network agenda-setting model by using big-data analysis (자살 보도에 대한 미디어 간 의제 설정 분석: 빅데이터를 이용한 네트워크 의제 설정 모델 분석을 중심으로)

  • Kim, Daewook
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.5
    • /
    • pp.121-126
    • /
    • 2021
  • Based on network agenda-setting theory, this study analyzed media reports about suicide from 2000 to 2020 in order to find solutions to the suicide problem in Korean society. Results showed that the top 10 keywords in the media included suicide, death leap, death, attempt, supposition, discovery, men, and pessimism. These keywords appeared similarly and continually across the media. In addition, newspapers and broadcasters showed similar reporting trends, so it is plausible to consider inter-media agenda-setting relations between newspapers and broadcasters.

A Study of Keyword Spotting System Based on the Weight of Non-Keyword Model (비핵심어 모델의 가중치 기반 핵심어 검출 성능 향상에 관한 연구)

  • Kim, Hack-Jin;Kim, Soon-Hyub
    • The KIPS Transactions:PartB
    • /
    • v.10B no.4
    • /
    • pp.381-388
    • /
    • 2003
  • This paper presents a method of weighting garbage-class clustering and the filler model to improve the performance of a keyword spotting system, together with a time-saving method for the dialogue speech processing system that calculates keyword transition probabilities from an analysis of task-domain users' speech. The key idea is to group phonemes by phonetic similarity, which is more effective for detecting similar phoneme groups than individual phonemes; the paper suggests five phoneme groups obtained from an analysis of spoken sentences in Korean morphology and in a stock-trading speech processing system. In addition, task-specific filler model weights are assigned to the phoneme groups, and the keyword transition probability contained in consecutive spoken sentences is calculated and applied to the system to save processing time. To evaluate the suggested system, a corpus of 4,970 sentences was built for the task domains and a test was conducted with five subjects in their twenties and thirties. As a result, the proposed five phoneme groups with weights achieve an FOM of 85%, compared with 88.5% for the seven phoneme groups of Yapanel [1] and 89.8% for LVCSR. In computation time, the proposed method takes 0.70 seconds versus 0.72 seconds for the seven phoneme groups. Finally, a timing test confirmed that 0.04 to 0.07 seconds are saved when the keyword transition probability is applied.

Improving Patent Information Service System using Vector Space Model and Thesaurus (벡터스페이스모델과 시소러스를 이용한 특허검색시스템의 성능향상)

  • 임성신;정홍석;한기덕;권혁철
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.10a
    • /
    • pp.802-804
    • /
    • 2004
  • As intellectual property becomes central to industry, the importance of patents grows daily. Commercial patent retrieval systems currently in service use a Boolean model that provides neither inter-document similarity nor query-based ranking. In this paper, we develop a vector-model-based patent retrieval system capable of similarity-based ranking, build a thesaurus for the watch domain, and apply it to patent retrieval in that domain. To evaluate the performance of query expansion, we experimented with 10 queries and obtained an average precision improvement of 36.2%. In addition, by presenting the thesaurus to the right of the search results, users of the patent retrieval system can easily select additional query terms, improving the interface.

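The query-expansion step described above can be sketched as a thesaurus lookup in front of a vector-space ranker. The tiny thesaurus, documents, and length-normalized overlap score below are hypothetical simplifications; the system in the paper uses a domain thesaurus and a full vector space model.

```python
# Sketch of thesaurus-based query expansion feeding a vector-space-style
# ranker. Thesaurus entries, documents, and scoring are hypothetical.
import math

thesaurus = {"watch": ["timepiece", "chronometer"]}  # hypothetical entries

def expand_query(terms):
    """Append thesaurus synonyms of each query term."""
    expanded = list(terms)
    for t in terms:
        expanded.extend(thesaurus.get(t, []))
    return expanded

def rank(docs, query_terms):
    """Rank documents by term overlap, normalized by document length."""
    scores = []
    for i, doc in enumerate(docs):
        words = doc.lower().split()
        overlap = sum(words.count(t) for t in query_terms)
        scores.append((overlap / math.sqrt(len(words)), i))
    return [i for _, i in sorted(scores, reverse=True)]

docs = [
    "a chronometer with a quartz movement",
    "a leather strap for bags",
]
print(rank(docs, expand_query(["watch"])))  # [0, 1]: doc 0 ranks first
```

Without expansion, the query "watch" matches neither document; the thesaurus synonym "chronometer" is what retrieves the relevant patent, which is the precision gain the abstract reports.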

Improvement of Keyword Spotting Performance Using Normalized Confidence Measure (정규화 신뢰도를 이용한 핵심어 검출 성능향상)

  • Kim, Cheol;Lee, Kyoung-Rok;Kim, Jin-Young;Choi, Seung-Ho
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.4
    • /
    • pp.380-386
    • /
    • 2002
  • Conventional post-processing such as the confidence measure (CM) proposed by Rahim calculates each phone's CM using the likelihood between the phoneme model and an anti-model, and the word-level CM is then obtained by averaging the phone-level CMs [1]. In the conventional method, the CMs of some specific keywords are very low and they are usually rejected. The reason is that the statistics of phone-level CMs are not consistent; in other words, phone-level CMs have a different probability density function (pdf) for each phone, especially each tri-phone. To overcome this problem, we propose a normalized confidence measure (NCM). Our approach is to transform the CM pdf of each tri-phone to the same pdf under the assumption that the CM pdfs are Gaussian. To evaluate our method we use a common keyword spotting system, in which context-dependent HMMs model keyword utterances and context-independent HMMs model non-keyword utterances. The experimental results show that the proposed NCM reduced the FAR (false alarm rate) from 0.44 to 0.33 FA/KW/HR (false alarms/keyword/hour) when the MDR is about 8%, a 25% improvement in FAR.
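Under the Gaussian assumption, normalizing each tri-phone's CM distribution amounts to a z-score transform with that tri-phone's mean and standard deviation estimated from training data. A minimal sketch with made-up statistics:

```python
# Sketch of normalizing phone-level confidence measures so each tri-phone's
# CM distribution becomes comparable. All statistics below are made up.

def normalize_cm(raw_cm, tri_phone, stats):
    """Map a raw CM to a standard-normal score for its tri-phone."""
    mean, std = stats[tri_phone]
    return (raw_cm - mean) / std

def word_ncm(phone_cms, stats):
    """Word-level NCM: average of the normalized phone-level CMs."""
    z = [normalize_cm(cm, tp, stats) for tp, cm in phone_cms]
    return sum(z) / len(z)

# Hypothetical per-tri-phone CM statistics (mean, std) from training data.
stats = {"k-a+m": (-1.0, 0.5), "a-m+i": (-2.0, 1.0)}

# Raw CMs that look very different become comparable after normalization:
# -1.0 is exactly average for its tri-phone, -1.5 is half a std above average.
phone_cms = [("k-a+m", -1.0), ("a-m+i", -1.5)]
print(word_ncm(phone_cms, stats))  # 0.25
```

Averaging the raw CMs here would unfairly penalize the word just because one tri-phone's CM distribution sits lower; normalizing first removes that per-phone bias, which is the inconsistency the abstract identifies.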