• Title/Abstract/Keyword: word vector

Search results: 247 (processing time: 0.023 seconds)

의미특징과 워드넷 기반의 의사 연관 피드백을 사용한 질의기반 문서요약 (Query-based Document Summarization using Pseudo Relevance Feedback based on Semantic Features and WordNet)

  • 김철원;박선
    • 한국정보통신학회논문지 / Vol. 15, No. 7 / pp.1517-1524 / 2011
  • This paper proposes a new document summarization method that uses semantic features and WordNet-based pseudo relevance feedback to extract meaningful sentences relevant to a user's query. The proposed method improves summarization quality because the semantic features derived from non-negative matrix factorization (NMF) capture the latent semantics of the document well. In addition, the semantic features and the WordNet-based pseudo relevance feedback reduce the semantic gap between the user's requirements and the summary produced by the proposed method. Experimental results show that the proposed method performs better than similarity-based and NMF-based methods.
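For readers unfamiliar with the NMF step, the following is a minimal sketch of how semantic features from non-negative matrix factorization can rank sentences against a query. It is an illustration in Python with scikit-learn, not the authors' implementation; the sentences and query are toy placeholders, and the WordNet-based pseudo relevance feedback step is omitted.

```python
# Sketch: query-focused sentence ranking with NMF semantic features.
# Illustrative only; WordNet-based pseudo relevance feedback is not shown.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "Non-negative matrix factorization reveals latent topics in documents.",
    "WordNet provides lexical relations between words.",
    "Query-based summarization selects sentences relevant to a user query.",
]
query = "summarization of documents for a user query"

vec = TfidfVectorizer(stop_words="english")
A = vec.fit_transform(sentences + [query])        # (n_sentences + 1) x n_terms
nmf = NMF(n_components=2, init="nndsvda")
W = nmf.fit_transform(A)                          # rows: semantic-feature weights

q, S = W[-1], W[:-1]                              # query row vs. sentence rows
scores = S @ q / (np.linalg.norm(S, axis=1) * np.linalg.norm(q) + 1e-12)
ranking = np.argsort(-scores)
print([sentences[i] for i in ranking])            # sentences ordered by relevance
```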

Extraction of ObjectProperty-UsageMethod Relation from Web Documents

  • Pechsiri, Chaveevan;Phainoun, Sumran;Piriyakul, Rapeepun
    • Journal of Information Processing Systems / Vol. 13, No. 5 / pp.1103-1125 / 2017
  • This paper aims to extract an ObjectProperty-UsageMethod relation, in particular the HerbalMedicinalProperty-UsageMethod relation of the herb-plant object, as a semantic relation between two related sets, a herbal-medicinal-property concept set and a usage-method concept set, from several web documents. This HerbalMedicinalProperty-UsageMethod relation benefits people by providing alternative treatment/solution knowledge for health problems. The research addresses three main problems: how to determine an EDU (an elementary discourse unit, i.e., a simple sentence/clause) with a medicinal-property/usage-method concept; how to determine the usage-method boundary; and how to determine the HerbalMedicinalProperty-UsageMethod relation between the two related sets. We propose using N-Word-Co on the verb phrase with the medicinal-property/usage-method concept to solve the first and second problems, where the N-Word-Co size is determined by learning with maximum entropy, support vector machine, and naïve Bayes models. We also apply naïve Bayes to solve the third problem of determining the HerbalMedicinalProperty-UsageMethod relation, with N-Word-Co elements as features. The results achieve high precision in HerbalMedicinalProperty-UsageMethod relation extraction.
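As a rough illustration of the general idea, the sketch below classifies a fixed window of words following the verb phrase (a simplified stand-in for N-Word-Co) with naïve Bayes. The window function, toy EDUs, and labels are hypothetical; the maximum entropy/SVM learning of the window size described in the abstract is not shown.

```python
# Sketch: naive Bayes over a fixed post-verb word window (simplified N-Word-Co).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def n_word_co(edu_tokens, verb_index, n=3):
    """Take the n tokens following the verb of the EDU as the co-occurrence window."""
    return " ".join(edu_tokens[verb_index + 1 : verb_index + 1 + n])

# Toy windows labelled 1 if they express a property/usage-method link, else 0.
windows = ["cough and fever", "the leaves in water",
           "in tropical climates", "in many old botany texts"]
labels = [1, 1, 0, 0]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(windows, labels)

edu = ["this", "herb", "relieves", "cough", "and", "sore", "throat"]
print(clf.predict([n_word_co(edu, edu.index("relieves"))]))  # likely 1 on this toy data
```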

Automatic extraction of similar poetry for study of literary texts: An experiment on Hindi poetry

  • Prakash, Amit;Singh, Niraj Kumar;Saha, Sujan Kumar
    • ETRI Journal / Vol. 44, No. 3 / pp.413-425 / 2022
  • The study of literary texts is one of the earliest disciplines practiced around the globe. Poetry is artistic writing in which words are carefully chosen and arranged for their meaning, sound, and rhythm. Poetry usually carries a broad and profound sense that makes it difficult to interpret, even for humans. The essence of poetry is Rasa, which signifies mood or emotion. In this paper, we propose a poetry classification-based approach to automatically extract similar poems from a repository. Specifically, we perform a novel Rasa-based classification of Hindi poetry. For the task, we primarily used lexical features in a bag-of-words model trained using a support vector machine classifier. In the model, we employed Hindi WordNet, Latent Semantic Indexing, and Word2Vec-based neural word embeddings. To extract rich feature vectors, we prepared a repository containing 37,717 poems collected from various sources. We evaluated the performance of the system on a manually constructed dataset containing 945 Hindi poems. Experimental results demonstrate that the proposed model attains satisfactory performance.
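The core bag-of-words + SVM classification step can be sketched as follows. The poems and Rasa labels here are toy placeholders, and the Hindi WordNet, LSI, and Word2Vec enrichments described in the abstract are omitted.

```python
# Sketch: bag-of-words features + linear SVM for Rasa (mood) classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

poems = ["the river sings of an old sorrow",
         "drums of war thunder in the valley",
         "moonlight rests softly on the lovers' hands",
         "the warrior's blade gleams with pride"]
rasas = ["karuna", "veera", "shringara", "veera"]      # toy Rasa labels

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(poems, rasas)
print(model.predict(["the battlefield echoes with brave cries"]))
```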

주제어구 추출과 질의어 기반 요약을 이용한 문서 요약 (Document Summarization using Topic Phrase Extraction and Query-based Summarization)

  • 한광록;오삼권;임기욱
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 31, No. 4 / pp.488-497 / 2004
  • This paper describes a document summarization method that combines extractive summarization with query-based summarization. A learning model for topic phrase extraction is built from training documents, using naive Bayes, decision tree, and support vector machine learning algorithms. Using the trained model, a list of topic phrases is automatically extracted from an input document. The extracted topic phrases are then used as query terms, and summary sentences are selected by computing each sentence's contribution based on local similarity to these phrases. The paper examines how topic phrases affect the summary of the original text and how many extracted topic phrases are appropriate for document summarization. The results were evaluated by comparing the extracted summaries with manually produced summaries and, for an objective performance evaluation, with the document summarization feature included in MS-Word.
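The sentence-selection step can be illustrated by treating the extracted topic phrases as queries and scoring sentences by similarity. This is a sketch on toy data; the topic-phrase learning model (naive Bayes, decision tree, SVM in the paper) is assumed to exist and is not shown, and cosine similarity stands in for the paper's contribution measure.

```python
# Sketch: topic phrases act as queries; the highest-scoring sentences form the summary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = ["Topic phrases are learned from training documents.",
             "The extracted phrases act as queries over the input document.",
             "Sentences with the highest contribution form the summary.",
             "The weather was pleasant that day."]
topic_phrases = ["topic phrases", "query-based summary"]

vec = TfidfVectorizer().fit(sentences + topic_phrases)
sim = cosine_similarity(vec.transform(sentences), vec.transform(topic_phrases))
scores = sim.max(axis=1)                        # best match over all topic phrases
summary = [s for _, s in sorted(zip(scores, sentences), reverse=True)[:2]]
print(summary)
```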

다양한 신뢰도 척도를 이용한 SVM 기반 발화검증 연구 (SVM-based Utterance Verification Using Various Confidence Measures)

  • 권석봉;김회린;강점자;구명완;류창선
    • 대한음성학회지:말소리 / No. 60 / pp.165-180 / 2006
  • In this paper, we present several confidence measures (CMs) for speech recognition systems to evaluate the reliability of recognition results. We propose heuristic CMs such as the mean log-likelihood score, the N-best word log-likelihood ratio, and likelihood sequence fluctuation, as well as likelihood ratio testing (LRT)-based CMs using several types of anti-models. Furthermore, we propose new algorithms that add weighting terms to phone-level log-likelihood ratios when merging them into word-level log-likelihood ratios. These weighting terms are computed from the distance between acoustic models and from knowledge-based phoneme classifications. The LRT-based CMs substantially outperform the heuristic CMs, and LRT-based CMs using phonetic information achieve a relative reduction in equal error rate of 8-13% compared to the baseline LRT-based CMs. We use a support vector machine to fuse several CMs and improve utterance verification performance. Our experiments show that selecting CMs with low mutual correlation is more effective than selecting highly correlated CMs.

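The fusion step described above amounts to training a classifier on a vector of per-word confidence measures. Below is a hedged sketch with made-up CM values; the actual CMs, feature scaling, and thresholds in the paper differ.

```python
# Sketch: fuse several confidence measures with an SVM for accept/reject decisions.
import numpy as np
from sklearn.svm import SVC

# Each row: [mean log-likelihood, N-best LLR, likelihood fluctuation, anti-model LRT]
X = np.array([[-48.2, 1.9, 0.12, 2.4],
              [-61.0, 0.3, 0.55, 0.4],
              [-50.1, 1.5, 0.20, 1.8],
              [-70.3, 0.1, 0.70, 0.2]])
y = np.array([1, 0, 1, 0])                  # 1 = correctly recognized, 0 = reject

verifier = SVC(kernel="rbf").fit(X, y)
print(verifier.predict([[-52.0, 1.2, 0.25, 1.5]]))   # accept/reject for a new word
```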

Weibo Disaster Rumor Recognition Method Based on Adversarial Training and Stacked Structure

  • Diao, Lei;Tang, Zhan;Guo, Xuchao;Bai, Zhao;Lu, Shuhan;Li, Lin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 10 / pp.3211-3229 / 2022
  • To address problems in Weibo disaster rumor recognition, such as the lack of corpora, poor text standardization, the difficulty of learning semantic information, and the relatively simple semantic features of disaster rumor text, this paper takes Sina Weibo as the data source, constructs a dataset for Weibo disaster rumor recognition, and proposes a deep learning model, BERT_AT_Stacked LSTM, for the task. First, an adversarial perturbation is added to the embedding vector of each word to generate adversarial samples that enrich the features of rumor text, and adversarial training is carried out to address the relatively homogeneous text features of disaster rumors. Second, the BERT part obtains word-level semantic information for each Weibo text and generates a hidden vector containing sentence-level feature information. Finally, the hidden, complex semantic information of poorly standardized Weibo texts is learned using a stacked Long Short-Term Memory (Stacked LSTM) structure. Experimental results show that, compared with other models, the proposed model is better at recognizing disaster rumors on Weibo, with an F1_Score of 97.48%; on an open general-domain dataset it achieves an F1_Score of 94.59%, indicating good generalization.
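The adversarial-perturbation step resembles FGM-style adversarial training on word embeddings. The following is a small PyTorch sketch of that general recipe with a toy model; it is not the paper's BERT_AT_Stacked LSTM, and the model sizes, epsilon, and data are placeholders.

```python
# Sketch: FGM-style adversarial training on the word-embedding table.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, vocab=1000, dim=64, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, 32, num_layers=2, batch_first=True)  # stacked LSTM
        self.fc = nn.Linear(32, classes)

    def forward(self, ids):
        h, _ = self.lstm(self.emb(ids))
        return self.fc(h[:, -1])

model, loss_fn = TinyClassifier(), nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ids = torch.randint(0, 1000, (8, 20))            # toy batch of token ids
labels = torch.randint(0, 2, (8,))

# 1) normal forward/backward to obtain gradients on the embedding weights
loss = loss_fn(model(ids), labels)
loss.backward()

# 2) add an adversarial perturbation r = eps * g / ||g|| to the embedding table,
#    compute the adversarial loss, then restore the original weights
eps, emb = 0.5, model.emb.weight
backup, grad = emb.data.clone(), emb.grad
emb.data.add_(eps * grad / (grad.norm() + 1e-12))
loss_adv = loss_fn(model(ids), labels)
loss_adv.backward()
emb.data = backup                                # restore clean embeddings
opt.step(); opt.zero_grad()
```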

New Postprocessing Methods for Rejecting Out-of-Vocabulary Words

  • Song, Myung-Gyu
    • The Journal of the Acoustical Society of Korea / Vol. 16, No. 3E / pp.19-23 / 1997
  • The goal of postprocessing in automatic speech recognition is to improve recognition performance through utterance verification at the output of the recognition stage. It focuses on effectively rejecting out-of-vocabulary words based on the confidence score of the hypothesized candidate word. We present two methods for computing confidence scores. Both methods are based on the distance between each observation vector and the representative code vector, defined as the most likely code vector at each state. While the first method employs simple time normalization, the second uses a normalization technique based on the concept of an on-line garbage model [1]. In a speaker-independent isolated word recognition experiment with discrete-density HMMs, the second method outperforms both the first method and the conventional likelihood ratio scoring method [2].

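The first, time-normalized confidence score can be pictured as an average frame-level distance between observations and the aligned representative code vectors. A small numpy sketch under synthetic data follows; the on-line garbage-model normalization of the second method is not shown.

```python
# Sketch: time-normalized distance-based confidence score (synthetic data).
import numpy as np

def confidence_score(observations, aligned_codevectors):
    """Lower average frame distance -> higher confidence in the hypothesized word."""
    d = np.linalg.norm(observations - aligned_codevectors, axis=1)
    return -d.sum() / len(observations)          # simple time normalization

T, dim = 40, 13                                  # frames x feature dimension (toy)
obs = np.random.randn(T, dim)
codes = obs + 0.1 * np.random.randn(T, dim)      # "representative" code vectors
print(confidence_score(obs, codes))
```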

카이 제곱 통계량과 지지벡터기계를 이용한 자동 스팸 메일 분류기 (An Automatic Spam e-mail Filter System Using χ2 Statistics and Support Vector Machines)

  • 이성욱
    • 한국정보통신학회:학술대회논문집 / 한국해양정보통신학회 2009 Spring Conference / pp.592-595 / 2009
  • We propose a system that automatically classifies spam e-mail using support vector machines (SVMs). Lexical information of words and part-of-speech tag information are used as SVM features. Useful features are selected with the χ2 statistic, and each selected feature is represented by its term frequency (TF) and inverse document frequency (IDF) values. After training the SVM with these features, the SVM classifier decides whether each e-mail is spam. In experiments on e-mail data collected from a web-mail system, the system achieved a precision of about 82.7%.

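A minimal sketch of the described pipeline (TF-IDF representation, χ2 feature selection, SVM decision) using scikit-learn with toy e-mails; the part-of-speech tag features mentioned in the abstract are omitted, and the number of selected features is arbitrary.

```python
# Sketch: TF-IDF features -> chi-square feature selection -> SVM spam/ham decision.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

emails = ["win a free prize now", "meeting moved to 3pm",
          "cheap loans click here", "lunch tomorrow?"]
labels = [1, 0, 1, 0]                            # 1 = spam, 0 = ham

spam_filter = make_pipeline(TfidfVectorizer(),
                            SelectKBest(chi2, k=5),
                            LinearSVC())
spam_filter.fit(emails, labels)
print(spam_filter.predict(["free prize inside"]))
```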

Proper Noun Embedding Model for the Korean Dependency Parsing

  • Nam, Gyu-Hyeon;Lee, Hyun-Young;Kang, Seung-Shik
    • Journal of Multimedia Information System / Vol. 9, No. 2 / pp.93-102 / 2022
  • Dependency parsing is the problem of deciding the syntactic relations between words in a sentence. Recently, deep learning models have been used for dependency parsing based on word representations in a continuous vector space. However, this causes a mislabeling problem for proper nouns that rarely appear in the training corpus, because out-of-vocabulary (OOV) words are difficult to express in a continuous vector space. To solve the OOV problem in dependency parsing, we explore proper noun embedding methods according to the embedding unit. Before representing words in a continuous vector space, we replace proper nouns with a special token and learn their contextual features using a multi-layer bidirectional LSTM. Two models, based on syllable and morpheme units, are proposed for proper noun embedding, and dependency parsing performance improves more with an ensemble of the two than with either the syllable-based or the morpheme-based embedding model alone. The experimental results show that our ensemble model improves UAS by 1.69%p and LAS by 2.17%p over a Malt parser based on the same arc-eager approach.
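The proper-noun masking idea can be sketched as follows: map every proper noun to a shared special token before embedding, then encode context with a multi-layer bidirectional LSTM. The vocabulary, POS tags, and sizes below are toy stand-ins, not the paper's configuration.

```python
# Sketch: replace proper nouns with a special token, then encode with a BiLSTM.
import torch
import torch.nn as nn

PROPER_NOUN = "<PN>"
vocab = {"<pad>": 0, PROPER_NOUN: 1, "가": 2, "회사": 3, "를": 4, "설립": 5, "했다": 6}

def mask_proper_nouns(tokens, pos_tags):
    """Map every token tagged as a proper noun (NNP) to the shared <PN> token."""
    return [PROPER_NOUN if tag == "NNP" else tok for tok, tag in zip(tokens, pos_tags)]

tokens = ["김철수", "가", "회사", "를", "설립", "했다"]
tags   = ["NNP",   "JKS", "NNG", "JKO", "NNG", "VV"]
ids = torch.tensor([[vocab[t] for t in mask_proper_nouns(tokens, tags)]])

emb = nn.Embedding(len(vocab), 32, padding_idx=0)
encoder = nn.LSTM(32, 64, num_layers=2, bidirectional=True, batch_first=True)
contextual, _ = encoder(emb(ids))       # contextual features fed to the parser
print(contextual.shape)                 # torch.Size([1, 6, 128])
```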

지지벡터기계(Support Vector Machines)를 이용한 한국어 화행분석 (An analysis of Speech Acts for Korean Using Support Vector Machines)

  • 은종민;이성욱;서정연
    • 정보처리학회논문지B / Vol. 12B, No. 3 / pp.365-368 / 2005
  • This study proposes a method for analyzing the speech acts of Korean dialogue using support vector machines (SVMs). The words and part-of-speech tags of an utterance and binary POS-tag pairs are used as sentence features, and the previous utterance is used as contextual information. Appropriate features are selected using the χ2 statistic, and an SVM is trained on the selected features. The trained SVM classifier is then used to analyze the speech act of each utterance. Experiments with the proposed system on a hotel reservation domain corpus achieved a precision of about 90.54%.
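A small sketch of the feature design described above: word/POS pairs from the current utterance combined with context from the previous utterance, fed to an SVM. The dialogue examples, tag set, and context encoding are invented, and the χ2 feature selection step is omitted.

```python
# Sketch: SVM speech-act classification with lexical/POS and previous-utterance features.
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def features(word_pos_pairs, prev_act):
    f = {f"{w}/{p}": 1 for w, p in word_pos_pairs}   # lexical + POS features
    f["prev_act=" + prev_act] = 1                    # context of the previous utterance
    return f

train = [
    (features([("예약", "NNG"), ("하", "XSV"), ("싶", "VX")], "greeting"), "request"),
    (features([("몇", "MM"), ("분", "NNB"), ("이", "VCP")], "request"), "ask-ref"),
    (features([("네", "IC"), ("감사", "NNG")], "inform"), "accept"),
]
X, y = [f for f, _ in train], [a for _, a in train]

clf = make_pipeline(DictVectorizer(), LinearSVC())
clf.fit(X, y)
print(clf.predict([features([("예약", "NNG"), ("부탁", "NNG")], "greeting")]))  # likely "request"
```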