• Title/Summary/Keyword: idf-domain

Style-Specific Language Model Adaptation using TF*IDF Similarity for Korean Conversational Speech Recognition

  • Park, Young-Hee;Chung, Min-Hwa
    • The Journal of the Acoustical Society of Korea / v.23 no.2E / pp.51-55 / 2004
  • In this paper, we propose a style-specific language model adaptation scheme using n-gram based tf*idf similarity for Korean spontaneous speech recognition. Korean spontaneous speech exhibits distinctive style-specific characteristics such as filled pauses, word omission, and contraction, which are related to function words and depend on the preceding or following words. To reflect these style-specific characteristics and to overcome the insufficient data for training the language model, we estimate an in-domain n-gram model by relevance weighting of out-of-domain text data according to their n-gram based tf*idf similarity, where the in-domain language model includes a disfluency model. Recognition results show that n-gram based tf*idf similarity weighting effectively reflects the style difference.
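
As a rough illustration of the scheme described above, the following minimal Python sketch scores out-of-domain documents by their n-gram (here unigram and bigram) tf*idf cosine similarity to an in-domain corpus. The vector form, the similarity measure, and the toy data are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (not the paper's implementation): score out-of-domain
# documents by n-gram tf*idf cosine similarity to an in-domain corpus,
# so stylistically similar text receives a higher relevance weight.
import math
from collections import Counter

def ngrams(tokens, n):
    """All n-grams of the token sequence as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def idf_table(docs, max_n=2):
    """Inverse document frequency over all n-grams of the collection."""
    df = Counter()
    for d in docs:
        df.update({g for n in range(1, max_n + 1) for g in ngrams(d, n)})
    return {g: math.log(len(docs) / c) for g, c in df.items()}

def tfidf_vector(doc_tokens, idf, max_n=2):
    """tf*idf vector over unigrams and bigrams of one document."""
    tf = Counter(g for n in range(1, max_n + 1) for g in ngrams(doc_tokens, n))
    return {g: c * idf.get(g, 0.0) for g, c in tf.items()}

def cosine(u, v):
    dot = sum(w * v.get(g, 0.0) for g, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy usage: weight out-of-domain docs by similarity to in-domain speech.
in_domain = [["uh", "i", "mean", "go", "there"], ["well", "uh", "yes"]]
out_domain = [["the", "committee", "approved", "the", "budget"],
              ["uh", "well", "i", "go", "there"]]
idf = idf_table(in_domain + out_domain)
target = tfidf_vector([t for d in in_domain for t in d], idf)
weights = [cosine(tfidf_vector(d, idf), target) for d in out_domain]
print(weights)  # the conversational document scores higher
```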

A NOTE ON ASCEND AND DESCEND OF FACTORIZATION PROPERTIES

  • Shah Tariq
    • Bulletin of the Korean Mathematical Society / v.43 no.2 / pp.419-424 / 2006
  • In this paper we extend the study of ascent and descent of factorization properties (for atomic domains, domains satisfying ACCP, bounded factorization domains, half-factorial domains, pre-Schreier domains, and semirigid domains) to finite factorization domains and idf-domains for a domain extension A ⊆ B.
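
For readers outside factorization theory, the note below recalls the standard definitions behind the abstract's terminology; this is textbook usage, not text from the paper.

```latex
% Reader's note: standard definitions assumed by the abstract (not the paper's text).
\documentclass{article}
\begin{document}
An integral domain $D$ is an \emph{idf-domain} if every nonzero $x \in D$
has at most finitely many non-associate irreducible divisors, and a
\emph{finite factorization domain} (FFD) if every nonzero nonunit of $D$
has only finitely many non-associate divisors. For an extension
$A \subseteq B$, a property \emph{ascends} if $A$ having it implies $B$
has it, and \emph{descends} if $B$ having it implies $A$ has it.
\end{document}
```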

Keyword Extraction from News Corpus using Modified TF-IDF (TF-IDF의 변형을 이용한 전자뉴스에서의 키워드 추출 기법)

  • Lee, Sung-Jick;Kim, Han-Joon
    • The Journal of Society for e-Business Studies / v.14 no.4 / pp.59-73 / 2009
  • Keyword extraction is an important and essential technique for text mining applications such as information retrieval, text categorization, summarization, and topic detection. Keywords extracted from a large-scale collection of electronic documents serve as significant features for text mining algorithms and help improve the performance of document browsing, topic detection, and automated text classification. This paper presents a keyword extraction technique that can be used to detect topics for each news domain in a large document collection from internet news portal sites. Basically, we use six variants of the traditional TF-IDF weighting model. On top of the TF-IDF model, we propose a word filtering technique called 'cross-domain comparison filtering'. To prove the effectiveness of our method, we analyze the usefulness of keywords extracted from Korean news articles and present how the keywords of each news domain change over time.
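
As a hypothetical sketch of the overall idea (the paper's six TF-IDF variants and its exact filtering rule are not reproduced here), the Python code below ranks words by TF-IDF per news domain and then applies an assumed form of cross-domain comparison filtering: keywords that rank highly in more than one domain are dropped. The function name and toy data are illustrative.

```python
# Rank words by TF-IDF per domain, then drop keywords shared across domains.
from sklearn.feature_extraction.text import TfidfVectorizer

def top_keywords_per_domain(domain_docs, k=5, max_shared=1):
    """domain_docs: dict mapping news domain -> list of documents."""
    vec = TfidfVectorizer()
    names = list(domain_docs)
    # One concatenated pseudo-document per domain.
    X = vec.fit_transform(" ".join(docs) for docs in domain_docs.values())
    terms = vec.get_feature_names_out()
    ranked = {name: [terms[j] for j in X[i].toarray()[0].argsort()[::-1][:k]]
              for i, name in enumerate(names)}
    # Assumed form of cross-domain comparison filtering: remove keywords
    # that appear in the top lists of more than `max_shared` domains.
    counts = {}
    for kws in ranked.values():
        for w in kws:
            counts[w] = counts.get(w, 0) + 1
    return {name: [w for w in kws if counts[w] <= max_shared]
            for name, kws in ranked.items()}

docs = {"economy": ["stocks rates inflation market stocks"],
        "sports": ["match goal league player match"],
        "politics": ["election party vote policy market"]}
print(top_keywords_per_domain(docs, k=3))
```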

Open Domain Machine Reading Comprehension using InferSent (InferSent를 활용한 오픈 도메인 기계독해)

  • Kim, Jeong-Hoon;Kim, Jun-Yeong;Park, Jun;Park, Sung-Wook;Jung, Se-Hoon;Sim, Chun-Bo
    • Smart Media Journal / v.11 no.10 / pp.89-96 / 2022
  • Open domain machine reading comprehension adds a paragraph-retrieval step for questions that come without an associated paragraph. Document retrieval suffers from degraded performance as the number of documents grows, despite abundant research on word-frequency-based TF-IDF. Paragraph selection often fails to capture paragraph context, including sentence-level characteristics, despite abundant research on word-based embeddings. Document reading comprehension suffers from slow training due to the growing number of parameters, despite abundant research on BERT. To address these three issues, this study uses BM25, which also considers sentence length, for document retrieval, uses InferSent to obtain sentence contexts for paragraph selection, and proposes an open domain machine reading comprehension model with ALBERT to reduce the number of parameters. Experiments were conducted on the SQuAD 1.1 dataset. BM25 achieved 3.2% higher document retrieval performance than TF-IDF. InferSent showed 0.9% higher performance in paragraph selection than a Transformer. Finally, as the number of paragraphs increased in document comprehension, ALBERT was 0.4% higher in EM and 0.2% higher in F1.
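
The retrieval step can be illustrated with standard Okapi BM25, which adds the document-length normalization that plain TF-IDF lacks. The sketch below is a generic implementation on toy data, not the study's code; the defaults k1=1.5 and b=0.75 are the usual textbook choices.

```python
# Generic Okapi BM25 scorer over tokenized documents.
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each tokenized doc against a tokenized query."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            # Term frequency saturates (k1) and is length-normalized (b).
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [["seoul", "is", "the", "capital", "of", "korea"],
        ["the", "capital", "market", "opened", "higher", "today", "in", "seoul", "again"]]
print(bm25_scores(["capital", "of", "korea"], docs))  # shorter on-topic doc wins
```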

Query Extension of Retrieve System Using Hangul Word Embedding and Apriori (한글 워드임베딩과 아프리오리를 이용한 검색 시스템의 질의어 확장)

  • Shin, Dong-Ha;Kim, Chang-Bok
    • Journal of Advanced Navigation Technology / v.20 no.6 / pp.617-624 / 2016
  • Hangul word embedding requires a noun extraction step; otherwise, unnecessary words are trained and efficient embedding results cannot be obtained. In this paper, we propose a model that retrieves more efficiently by expanding the query using Hangul word embedding, Apriori, and text mining. The word embedding and Apriori step expands the query by extracting associated words according to the meaning and context of the query. The Hangul text mining step extracts similar answers and responds to the user using noun extraction, TF-IDF, and cosine similarity. The proposed model can improve the accuracy of answers by learning the answers of a specific domain and expanding highly correlated query terms. As future research, more correlated query terms need to be extracted by analyzing the user queries stored in the database.
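
A minimal sketch of the embedding-based expansion step follows (the Apriori association-rule part is omitted). The random matrix stands in for trained Hangul word vectors, and `expand_query` is an illustrative name, not the paper's API.

```python
# Expand a query with the nearest-neighbor words of each term by cosine
# similarity in an embedding space.
import numpy as np

def expand_query(query_terms, vocab, vectors, topn=2):
    """Add the topn nearest-neighbor words (cosine) of each query term."""
    index = {w: i for i, w in enumerate(vocab)}
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    expanded = list(query_terms)
    for term in query_terms:
        if term not in index:
            continue
        sims = normed @ normed[index[term]]
        added = 0
        for j in np.argsort(sims)[::-1]:
            if vocab[j] != term and vocab[j] not in expanded:
                expanded.append(vocab[j])
                added += 1
            if added == topn:
                break
    return expanded

vocab = ["loan", "credit", "mortgage", "weather", "rain"]
rng = np.random.default_rng(0)
vectors = rng.normal(size=(len(vocab), 8))  # stand-in for trained embeddings
print(expand_query(["loan"], vocab, vectors))
```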

Case Study on Public Document Classification System That Utilizes Text-Mining Technique in BigData Environment (빅데이터 환경에서 텍스트마이닝 기법을 활용한 공공문서 분류체계의 적용사례 연구)

  • Shim, Jang-sup;Lee, Kang-wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.10a / pp.1085-1089 / 2015
  • In the past, text mining had difficulty realizing analysis algorithms due to the complexity of text and the degrees of freedom of the variables within it. The algorithms demanded considerable effort to obtain meaningful results, and mechanical text analysis took more time than human analysis. However, along with the development of hardware and analysis algorithms, big data technology has appeared. Thanks to big data technology, these problems have been solved, and analysis through text mining has come to be recognized as valuable. Nevertheless, applying text mining to Korean text is still at an early stage due to the linguistic characteristics of the Korean language. If not only data searching but also analysis through text mining becomes possible, saving the human and material resources required for text analysis will lead to efficient resource utilization in numerous public work fields. Thus, in this paper, we compare and evaluate manual public document classification against classification that utilizes text-mining-based word frequency (TF-IDF) and the cosine similarity between documents in a big data environment.
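
The automatic side of the comparison can be sketched with scikit-learn's TfidfVectorizer and cosine similarity. The nearest-labeled-document rule below is an assumed classification scheme for illustration, not the authors' pipeline.

```python
# Assign each new document the class of its most cosine-similar labeled
# document, with documents represented as TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

labeled = ["budget report for fiscal year spending",
           "road construction permit application",
           "school lunch program announcement"]
labels = ["finance", "construction", "education"]

vec = TfidfVectorizer()
X = vec.fit_transform(labeled)

def classify(doc):
    sims = cosine_similarity(vec.transform([doc]), X)[0]
    return labels[sims.argmax()]

print(classify("application for a building construction permit"))
```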

RNN Sentence Embedding and ELM Algorithm Based Domain and Dialogue Acts Classification for Customer Counseling in Finance Domain (RNN 문장 임베딩과 ELM 알고리즘을 이용한 금융 도메인 고객상담 대화 도메인 및 화행분류 방법)

  • Oh, Kyo-Joong;Park, Chanyong;Lee, DongKun;Lim, Chae-Gyun;Choi, Ho-Jin
    • Annual Conference on Human and Language Technology / 2017.10a / pp.220-224 / 2017
  • Recently, fintech-related companies such as banks and insurance firms have been introducing AI dialogue systems such as chatbots into their customer counseling work. In this paper, to implement a customer counseling chatbot for the finance domain, we present a method for domain and dialogue act classification of customer counseling dialogues, one of the natural language understanding technologies. This technique makes it possible to develop systems that understand counseling content expressed in natural language and return appropriate responses. We extract features of dialogue sentences such as TF-IDF, LDA, and sentence embeddings, and train domain and dialogue act classification models from the extracted features with an Extreme Learning Machine (ELM).
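
A generic Extreme Learning Machine can be sketched in a few lines: a fixed random hidden layer followed by a least-squares solve for the output weights. The code below is a minimal illustration with random stand-in features, not the authors' model; in the paper the inputs would be TF-IDF, LDA, and sentence-embedding features.

```python
# Minimal ELM: random hidden projection, closed-form output weights.
import numpy as np

class ELM:
    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = y.max() + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                    # random hidden features
        T = np.eye(n_classes)[y]                            # one-hot targets
        self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)   # least-squares solve
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))      # stand-in sentence feature vectors
y = (X[:, 0] > 0).astype(int)       # toy domain labels
print((ELM().fit(X, y).predict(X) == y).mean())
```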

Spontaneous Speech Language Modeling using N-gram based Similarity (N-gram 기반의 유사도를 이용한 대화체 연속 음성 언어 모델링)

  • Park Young-Hee;Chung Minhwa
    • MALSORI / no.46 / pp.117-126 / 2003
  • This paper presents our language model adaptation for Korean spontaneous speech recognition. Korean spontaneous speech exhibits various characteristics of content and style, such as filled pauses, word omission, and contraction, compared with written text corpora. Our approach focuses on improving the estimation of domain-dependent n-gram models by relevance weighting of out-of-domain text data, where style is represented by n-gram based tf*idf similarity. In addition to relevance weighting, we use disfluencies as predictors of the neighboring words. The best result reduces the word error rate by 9.7% relative and shows that n-gram based relevance weighting reflects style differences well and that disfluencies are good predictors.
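
One plausible form of the relevance weighting (an assumption for illustration, not the paper's published recipe) is to scale each out-of-domain document's n-gram counts by its tf*idf similarity to the in-domain data, as computed in the sketch after the first entry above, before merging the counts:

```python
# Merge in-domain n-gram counts with similarity-weighted out-of-domain counts.
from collections import Counter

def mix_counts(in_counts, out_docs_counts, relevance):
    """in_counts: Counter of in-domain n-grams;
    out_docs_counts: list of Counters, one per out-of-domain document;
    relevance: per-document similarity weights in [0, 1]."""
    mixed = Counter(in_counts)
    for counts, w in zip(out_docs_counts, relevance):
        for gram, c in counts.items():
            mixed[gram] += w * c
    return mixed

in_counts = Counter({("uh", "i"): 4, ("i", "mean"): 3})
out_docs = [Counter({("the", "budget"): 5}), Counter({("uh", "i"): 2})]
weights = [0.1, 0.8]  # e.g., tf*idf cosine similarity per document
print(mix_counts(in_counts, out_docs, weights))
```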

Language Model Adaptation for Conversational Speech Recognition (대화체 연속음성 인식을 위한 언어모델 적응)

  • Park Young-Hee;Chung Minhwa
    • Proceedings of the KSPS conference / 2003.05a / pp.83-86 / 2003
  • This paper presents our style-based language model adaptation for Korean conversational speech recognition. Korean conversational speech exhibits various characteristics of content and style, such as filled pauses, word omission, and contraction, compared with written text corpora. For style-based language model adaptation, we report two approaches. Our approaches focus on improving the estimation of domain-dependent n-gram models by relevance weighting of out-of-domain text data, where style is represented by n-gram based tf*idf similarity. In addition to relevance weighting, we use disfluencies as predictors of the neighboring words. The best result reduces the word error rate by 6.5% absolute and shows that n-gram based relevance weighting reflects style differences well and that disfluencies are good predictors.
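
The disfluency-as-predictor idea can be illustrated by keeping filled-pause tokens in the training data so that they condition the following word, rather than being stripped out. The sketch below is an assumed reading of that idea, with a toy `<fp>` marker standing in for Korean filled pauses.

```python
# Bigram counts where filled pauses remain in the conditioning context.
from collections import Counter, defaultdict

def bigram_model(sentences):
    """Conditional counts for P(w | prev), with filled pauses left in place."""
    counts = defaultdict(Counter)
    for sent in sentences:
        for prev, w in zip(["<s>"] + sent, sent + ["</s>"]):
            counts[prev][w] += 1
    return counts

# "<fp>" marks a filled pause; it stays in the context so that words that
# typically follow hesitations can be predicted from it.
sents = [["i", "<fp>", "think", "so"], ["<fp>", "well", "yes"]]
model = bigram_model(sents)
print(model["<fp>"].most_common())  # words most likely after a filled pause
```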
