Title/Summary/Keyword: Document Frequency

Open Domain Machine Reading Comprehension using InferSent (InferSent를 활용한 오픈 도메인 기계독해)

  • Jeong-Hoon, Kim;Jun-Yeong, Kim;Jun, Park;Sung-Wook, Park;Se-Hoon, Jung;Chun-Bo, Sim
    • Smart Media Journal / v.11 no.10 / pp.89-96 / 2022
  • Open domain machine reading comprehension extends conventional reading comprehension with a retrieval step, since no paragraph relevant to a given question is supplied in advance. Document search suffers from degraded performance as the document collection grows, despite abundant research on word-frequency-based TF-IDF. Paragraph selection likewise fails to capture paragraph context, including sentence-level characteristics, despite extensive work on word-based embeddings. Document reading comprehension suffers from slow training due to the large number of parameters, despite much research on BERT. To address these three issues, this study used BM25, which also accounts for sentence length, for document search; InferSent to capture sentence context for paragraph selection; and ALBERT, which reduces the number of parameters, for reading comprehension. An experiment was conducted on the SQuAD 1.1 dataset. BM25 outperformed TF-IDF in document search by 3.2%. InferSent outperformed a Transformer baseline in paragraph selection by 0.9%. Finally, as the number of paragraphs increased in document comprehension, ALBERT scored 0.4% higher in EM and 0.2% higher in F1.
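
As a rough illustration of the retrieval side discussed above, a minimal Okapi BM25 scorer might look like the sketch below. This is a generic BM25 implementation, not the authors' code; the k1 and b values and the corpus representation are illustrative assumptions.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """Okapi BM25 score of one document for a bag-of-words query.

    Unlike raw TF-IDF, the b term normalizes by document length,
    which is the property the paper exploits over TF-IDF.
    `corpus` is a list of documents, each a list of terms.
    """
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)   # document frequency
        if df == 0:
            continue
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
        num = tf[term] * (k1 + 1)
        den = tf[term] + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * num / den
    return score
```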

A Study on Change in Perception of Community Service and Demand Prediction based on Big Data

  • Chun-Ok, Jang
    • International Journal of Advanced Culture Technology / v.10 no.4 / pp.230-237 / 2022
  • The Community Social Service Investment project started as a state subsidy project in 2007 and has grown very rapidly in quantitative terms in a short period of time. It is a bottom-up project that identifies people's welfare needs and plans and provides services suited to them. The purpose of this study is to use big data analysis to determine the social response to community service investment projects. For this, data was collected by crawling the Google and Naver sites with the specific keyword "community service investment project". The analysis covered monthly search volume, related keywords, search rate by age, and search rate by gender. As a result, 10 related keywords were found on Google and 3 on Naver. The overall results for the Google and Naver sites differed slightly, but they rose and fell at almost the same times. Therefore, it can be seen that the community service investment project continues to attract users' interest.

Evaluation for usefulness of Chukwookee Data in Rainfall Frequency Analysis (강우빈도해석에서의 측우기자료의 유용성 평가)

  • Kim, Kee-Wook;Yoo, Chul-Sang;Park, Min-Kyu;Kim, Dae-Ha;Park, Sangh-Young;Kim, Hyeon-Jun
    • Proceedings of the Korea Water Resources Association Conference / 2007.05a / pp.1526-1530 / 2007
  • In this study, the chukwookee data were evaluated by applying them to historical rainfall frequency analysis. To fit a two-parameter log-normal distribution using historical and modern data together, censored-data MLE and binomial censored-data MLE were applied. As a result, we found that both the mean and the standard deviation were estimated to be smaller with the chukwookee data included than with modern data alone. This indicates that large events occurred more rarely during the chukwookee period than during the modern period. The frequency analysis results using the estimated parameters were also similar to those expected. The point to be noticed is that the rainfall quantiles estimated by both methods were similar, especially at the 99% threshold. This result indicates that historical document records like the annals of the Chosun dynasty can be valuable and effective for frequency analysis, and that they extend the data available for it.
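
To make the censored-likelihood idea concrete, the sketch below fits a two-parameter log-normal by MLE when some historical values are only known to fall below a perception threshold. This is a generic censored-data MLE under that assumption, not the authors' exact formulation.

```python
import numpy as np
from scipy import stats, optimize

def fit_lognormal_censored(observed, n_censored, threshold):
    """MLE of a two-parameter log-normal (mu, sigma of log X).

    `observed`: exactly-known values (e.g., modern records and large
    historical events); `n_censored`: number of historical years whose
    value is only known to lie below `threshold`. Each censored year
    contributes the CDF at the threshold to the likelihood.
    """
    logs = np.log(observed)

    def neg_loglik(params):
        mu, sigma = params
        if sigma <= 0:
            return np.inf
        ll = stats.norm.logpdf(logs, mu, sigma).sum()
        ll += n_censored * stats.norm.logcdf(np.log(threshold), mu, sigma)
        return -ll

    res = optimize.minimize(neg_loglik, x0=[logs.mean(), logs.std()],
                            method="Nelder-Mead")
    return res.x  # estimated (mu, sigma)
```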

Detecting Spelling Errors by Comparison of Words within a Document (문서내 단어간 비교를 통한 철자오류 검출)

  • Kim, Dong-Joo
    • Journal of the Korea Society of Computer and Information / v.16 no.12 / pp.83-92 / 2011
  • Unlike formal publications, documents prepared with word processors frequently contain typographical errors caused by the author's mistyping. In such documents, the most common orthographical errors are spelling errors caused by hitting keys adjacent to the intended ones. Typical spelling checkers detect and correct these errors by using a morphological analyzer: the analysis module checks the well-formedness of input words, and all words rejected by the analyzer are regarded as misspelled. However, if the morphological analyzer accepts even a mistyped word, it treats it as correctly spelled. In this paper, I propose a simple method capable of detecting and correcting errors that previous methods cannot detect. The proposed method is based on the observation that typographical errors are generally not repeated and therefore tend to have very low frequency. If words generated by deletion, exchange, or transposition of each phoneme of a low-frequency word appear in the list of high-frequency words, some of them are taken as its correctly spelled counterparts. Some heuristic rules are also presented to reduce the number of candidates. The proposed method can detect some semantic errors that are not syntactic errors, and it is useful for scoring correction candidates.
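
The core idea, generating single-edit variants of a rare word and checking them against frequent words in the same document, can be sketched as follows. This is a simplified character-level version: the paper operates on Korean phonemes, and the frequency cutoffs here are illustrative assumptions.

```python
from collections import Counter

def edit_variants(word):
    """Single-edit variants: deletion, substitution (exchange),
    and adjacent transposition, over a simple Latin alphabet."""
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {l + r[1:] for l, r in splits if r}
    substitutes = {l + c + r[1:] for l, r in splits if r for c in alphabet}
    transposes = {l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1}
    return deletes | substitutes | transposes

def suspected_typos(tokens, low=1, high=5):
    """Flag low-frequency words for which a single-edit variant is a
    high-frequency word elsewhere in the same document."""
    freq = Counter(tokens)
    frequent = {w for w, n in freq.items() if n >= high}
    results = {}
    for word, n in freq.items():
        if n <= low:
            candidates = edit_variants(word) & frequent
            if candidates:
                results[word] = candidates  # likely intended spellings
    return results
```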

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia Pacific Journal of Information Systems / v.21 no.1 / pp.103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents that could benefit from keywords lack them, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, implementation is the obstacle: manually assigning keywords to all documents is a daunting, even impractical task, extremely tedious and time-consuming and requiring a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are two main approaches to this aim: keyword assignment and keyword extraction. Both use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts; that is, keyword assignment seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. Here, automatic keyword generation is treated as a classification task, and keywords are commonly extracted with supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, so extraction is limited to terms that appear in the document and cannot generate implicit keywords. According to the experimental results of Turney, about 64% to 90% of the keywords assigned by authors can be found in the full text of an article. Conversely, this means that 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experiment likewise shows that 37% of author-assigned keywords are not included in the full text. This is why we adopted the keyword assignment approach. In this paper, we propose a new approach to automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword's weight; (2) preprocess and parse a target document that has no keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords with the highest similarity scores. Two keyword generation systems were implemented using IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The former was deployed in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between IVSM-generated keywords and author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. Both systems perform much better than baseline systems that generate keywords based on simple probability, and IVSM shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
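
One plausible reading of steps (1) through (5) in plain vector-space terms: score each candidate keyword set by cosine similarity against the target document's term-frequency vector. A minimal sketch under that reading follows; the weighting scheme and the `keyword_sets` data structure are illustrative assumptions, not the paper's implementation.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity of two sparse term-weight vectors."""
    dot = sum(w * b[t] for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def assign_keywords(doc_text, keyword_sets, top_k=5):
    """Steps (2)-(5): parse the document, build its TF vector, and
    return the keyword sets most similar to it.

    `keyword_sets` maps each keyword to the terms associated with it
    in the training documents (a hypothetical representation)."""
    doc_vec = Counter(doc_text.lower().split())      # steps (2)-(3)
    scored = [(kw, cosine(Counter(terms), doc_vec))  # step (4)
              for kw, terms in keyword_sets.items()]
    scored.sort(key=lambda x: x[1], reverse=True)
    return scored[:top_k]                            # step (5)
```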

Feature Selection with Non-linear PCA in Text Categorization (대용량 문서분류에서의 비선형 주성분 분석을 이용한 특징 추출)

  • 신형주;장병탁;김영택
    • Proceedings of the Korean Information Science Society Conference / 1999.10b / pp.146-148 / 1999
  • One of the problems in document classification is that the dimensionality of the data is very high. Therefore, reducing the dimensionality of document data by automatically extracting only the necessary words from documents is essential for document classification. Document frequency (DF) is one of the representative statistical methods for dimensionality reduction of documents; in this paper, we compare DF with principal component analysis (PCA) and show experimentally that PCA is well suited to reducing document dimensionality. We also apply two nonlinear PCA methods, locally linear PCA and kernel PCA, and show experimentally that reducing document dimensionality with nonlinear PCA is more suitable for document classification than using linear PCA.
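
For reference, this kind of comparison can be reproduced with off-the-shelf tools. The sketch below uses scikit-learn on a toy corpus; it is my own illustration of DF-style term selection followed by linear versus kernel PCA, not the authors' 1999 setup.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import PCA, KernelPCA

# Toy corpus standing in for the paper's document collection.
docs = fetch_20newsgroups(subset="train",
                          categories=["sci.space", "rec.autos"]).data[:200]

# DF-style selection: keep terms appearing in at least 5 documents.
X = CountVectorizer(min_df=5, max_features=2000).fit_transform(docs).toarray()

# Linear PCA vs. kernel PCA (RBF) down to 100 dimensions.
X_lin = PCA(n_components=100).fit_transform(X)
X_ker = KernelPCA(n_components=100, kernel="rbf").fit_transform(X)

print(X_lin.shape, X_ker.shape)  # both (200, 100); a classifier is then trained on each
```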

Inverse Document Frequency Weighting Revisited (역문헌빈도 가중치의 재검토)

  • 이재윤
    • Proceedings of the Korean Society for Information Management Conference / 2003.08a / pp.253-261 / 2003
  • Inverse document frequency (IDF) weighting rests on the assumption that the less frequently an index term occurs in a document collection, the more important it is. This study questions that assumption and proposes a new document frequency weighting formula that complements it. The proposed formula is based on the assumption that mid-frequency terms, rather than low-frequency terms, are the more important ones; it is likewise a function of document frequency. Experiments were performed separately for applying document-frequency weights to document index terms and to query terms, and the differences between the two cases are discussed.
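
For context, the classical weight being questioned is the standard inverse document frequency, shown below. The paper's own mid-frequency formula is not given in this abstract, so only the baseline is reproduced.

```latex
% Standard IDF: N documents in the collection, df(t) of them contain term t.
\mathrm{idf}(t) = \log \frac{N}{\mathrm{df}(t)}
% The abstract's premise: this weight is maximal for the rarest terms,
% whereas the proposed weight peaks for mid-frequency terms instead.
```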

Study on Implementation of a Digital Frequency Discriminator using 4 channel Delay line (4채널 지연선로를 이용한 디지털 주파수 판별기 구현에 관한 연구)

  • Kook, Chan-Ho;Kwon, Ik-Jin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.05a / pp.512-515 / 2010
  • SIGINT (SIGnal INTelligence) covers several parameters intercepted by measurement and analysis of RF (radio frequency) signals from free space. One of the important parameters is frequency information. Especially, in order to perform instantaneous frequency measurement of radar and missile seekers' RF signals, dedicated RF modules are used as a DFD (Digital Frequency Discriminator), which provides frequency information by measuring the relative phase difference between signals routed through RF delay lines of intended lengths. It must measure and provide real-time frequency information for pulsed RF signals as short as 100 ns or less. This document proposes an ultra-wideband DFD consisting of an RF input section with a wideband 4-channel RF delay line and correlator, a digital processing section that derives frequency information from the I/Q signals, and a frequency calibration section. It also demonstrates the suitability of the design based on test results measured with very short input pulse signals.
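
The relation a delay-line DFD exploits: a line with delay τ shifts a tone of frequency f by Δφ = 2πfτ, so f can be recovered from the measured I/Q phase. Below is a minimal single-channel sketch of that relation only; a real 4-channel DFD, as in this paper, combines several delay lengths to resolve the 1/τ ambiguity and extend the unambiguous bandwidth.

```python
import math

def frequency_from_iq(i: float, q: float, tau_s: float) -> float:
    """Estimate frequency from one delay-line correlator output.

    (i, q) is the correlator's in-phase/quadrature sample, whose angle
    is the phase shift across the delay line; tau_s is the line delay
    in seconds. The result is ambiguous modulo 1/tau_s, which is why
    a practical DFD uses multiple delay lengths.
    """
    delta_phi = math.atan2(q, i)          # phase in (-pi, pi]
    return delta_phi / (2 * math.pi * tau_s)

# Example: a 1 ns line and a measured 90-degree shift -> 250 MHz.
print(frequency_from_iq(0.0, 1.0, 1e-9))  # 250000000.0
```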

Analyzing the Effect of Characteristics of Dictionary on the Accuracy of Document Classifiers (용어 사전의 특성이 문서 분류 정확도에 미치는 영향 연구)

  • Jung, Haegang;Kim, Namgyu
    • Management & Information Systems Review / v.37 no.4 / pp.41-62 / 2018
  • As the volume of unstructured data grows through various social media, Internet news articles, and blogs, the importance of text analysis, and of studies on it, is increasing. Since text analysis is mostly performed on a specific domain or topic, the importance of constructing and applying a domain-specific dictionary has likewise increased. The quality of the dictionary has a direct impact on the results of unstructured data analysis, and it matters all the more because it embodies a perspective on the analysis. In the literature, most studies on text analysis have emphasized the importance of dictionaries for obtaining clean and high-quality results. Unfortunately, however, a rigorous verification of the effects of dictionaries has not been undertaken, even though they are known to be among the most essential factors in text analysis. In this paper, we generate three dictionaries in different ways from 39,800 news articles and, by defining the concept of the Intrinsic Rate, analyze and verify the effect of each dictionary on the accuracy of document classification: 1) a batch construction method that builds a dictionary from term frequencies over the entire document set; 2) a method that extracts terms by category and integrates them; 3) a method that extracts features for each category and integrates them. We compared the accuracy of three artificial neural network-based document classifiers to evaluate the quality of the dictionaries. As a result of the experiment, accuracy tends to increase when the Intrinsic Rate is high, and we found that the accuracy of document classification can be improved by increasing the intrinsic rate of the dictionary.
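
As an illustration of the first two construction methods named above, a minimal sketch follows. The Intrinsic Rate itself is defined in the paper and is not reproduced here; the frequency cutoffs are illustrative assumptions.

```python
from collections import Counter

def batch_dictionary(docs, top_n=10000):
    """Method 1: one dictionary from term frequency over all documents."""
    freq = Counter(t for doc in docs for t in doc.split())
    return {t for t, _ in freq.most_common(top_n)}

def per_category_dictionary(docs_by_category, top_n_each=3000):
    """Method 2: extract frequent terms per category, then integrate."""
    merged = set()
    for docs in docs_by_category.values():
        freq = Counter(t for doc in docs for t in doc.split())
        merged |= {t for t, _ in freq.most_common(top_n_each)}
    return merged
```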

Link Error Analysis and Modeling for Video Streaming Cross-Layer Design in Mobile Communication Networks

  • Karner, Wolfgang;Nemethova, Olivia;Svoboda, Philipp;Rupp, Markus
    • ETRI Journal / v.29 no.5 / pp.569-595 / 2007
  • Particularly in wireless communications, link errors severely affect service quality due to the high error probability and the specific error characteristics (burst errors) in the radio access part of the network. In this work, we show that thorough analysis and appropriate modeling of radio-link error behavior are essential to evaluate and optimize higher-layer protocols and services. They are also the basis for finding network-aware cross-layer processing algorithms that can exploit specific properties of the link error statistics, such as predictability. This document presents an analysis of radio link errors based on measurements in live Universal Mobile Telecommunications System (UMTS) radio access networks, as well as new link error models derived from that analysis. It is shown that knowledge of the specific link error characteristics leads to significant improvements in the quality of streamed video when the proposed novel network- and content-aware cross-layer scheduling algorithms are applied. Although based on live UMTS network experience, many of the conclusions in this work are of general validity and are not limited to UMTS.
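
The burst-error characteristic mentioned above is commonly captured by a two-state Markov (Gilbert-Elliott) model. The sketch below is that generic model with made-up transition probabilities, not the UMTS-specific models derived in the paper.

```python
import random

def gilbert_elliott(n, p_gb=0.01, p_bg=0.2, e_good=0.001, e_bad=0.3, seed=42):
    """Simulate a bursty link: a 'good' and a 'bad' state with different
    error rates; p_gb/p_bg are the good->bad and bad->good transition
    probabilities. Returns a 0/1 error indicator per transmitted unit."""
    rng = random.Random(seed)
    errors, bad = [], False
    for _ in range(n):
        # State transition: leave 'good' with p_gb, leave 'bad' with p_bg.
        bad = (rng.random() < p_gb) if not bad else (rng.random() >= p_bg)
        err_rate = e_bad if bad else e_good
        errors.append(1 if rng.random() < err_rate else 0)
    return errors

trace = gilbert_elliott(10_000)
print(sum(trace) / len(trace))  # overall error rate; errors cluster in bursts
```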
