• Title/Summary/Keyword: Document Frequency

An Experimental Evaluation of Short Opinion Document Classification Using A Word Pattern Frequency (단어패턴 빈도를 이용한 단문 오피니언 문서 분류기법의 실험적 평가)

  • Chang, Jae-Young;Kim, Ilmin
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.12 no.5
    • /
    • pp.243-253
    • /
    • 2012
  • Opinion mining, which grew out of document classification in data mining, has become a common interest of domestic as well as international industries. The core of opinion mining is to decide precisely whether an opinion document is positive or negative. Although many related approaches have been proposed, their classification accuracy has not been high enough for practical applications. Opinion documents written in Korean are especially hard to classify automatically because they often express subjective opinions with varied and ungrammatical words. This paper proposes a new approach to classifying opinion documents that considers only the frequency of word patterns and excludes grammatical factors as much as possible. In the proposed method, a document is expressed as a bag of words, a learning algorithm is applied using the frequency of word patterns, and the polarity of the document is finally decided by a score function. We also present experimental results evaluating the accuracy of the proposed method; a sketch of this style of frequency-based scoring is given below.
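As a rough illustration of this kind of approach, the following sketch scores a bag-of-words document by the per-word frequency ratio between positive and negative training documents. It is a minimal sketch only; the paper's actual word-pattern extraction and score function may differ.

```python
import math
from collections import Counter

# Illustrative sketch: bag-of-words polarity scoring from per-class word
# frequencies. Not the paper's exact word patterns or score function.

def train(docs):
    """docs: list of (tokens, label) pairs with label in {'pos', 'neg'}."""
    freq = {'pos': Counter(), 'neg': Counter()}
    for tokens, label in docs:
        freq[label].update(tokens)                  # per-class word frequency
    return freq

def score(tokens, freq, smoothing=1.0):
    """Sum of log frequency ratios; a positive total => positive polarity."""
    n_pos = sum(freq['pos'].values()) + smoothing
    n_neg = sum(freq['neg'].values()) + smoothing
    s = 0.0
    for w in tokens:
        p = (freq['pos'][w] + smoothing) / n_pos
        q = (freq['neg'][w] + smoothing) / n_neg
        s += math.log(p / q)
    return s

training = [(["재미", "있다"], 'pos'), (["지루", "하다"], 'neg')]
model = train(training)
print('pos' if score(["재미", "있다"], model) > 0 else 'neg')   # pos
```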

Retrieval algorithm for Web Document using XML DOM (XML DOM을 이용한 웹문서 검색 알고리즘)

  • 김노환;정충교
    • Journal of the Korea Computer Industry Society
    • /
    • v.2 no.6
    • /
    • pp.775-782
    • /
    • 2001
  • Until recently, Web retrieval engines have presented documents to users according to the amount and frequency of queried keywords in each document, under the assumption that the more keywords a document contains, the more relevant it is. This method works for ordinary documents such as HTML pages, which carry no structural information. Web data written in XML, however, contains structural information and can be modeled in graph form, so for XML this method causes considerable trouble because it relies only on keyword frequency. We consider that this problem can be resolved by SQL-like queries: such queries let us retrieve exactly the data we want, quickly and clearly, by taking full advantage of the structural quality of XML and overcoming the shortcomings of frequency-based engines. In this paper, we design a model of an information retrieval system for XML data using the XML DOM and present an algorithm for it.
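For flavor, here is a minimal sketch of structure-aware retrieval over an XML DOM using only the Python standard library; the element names and the query below are hypothetical, not taken from the paper.

```python
from xml.dom.minidom import parseString

# Minimal sketch: answer a SQL-like structural query by walking the DOM,
# instead of ranking documents by raw keyword frequency.

xml = """<library>
  <book><title>XML Retrieval</title><author>Kim</author></book>
  <book><title>Web Mining</title><author>Lee</author></book>
</library>"""

dom = parseString(xml)

def titles_by_author(dom, name):
    """Structural query: SELECT title FROM book WHERE author = name."""
    titles = []
    for book in dom.getElementsByTagName("book"):
        author = book.getElementsByTagName("author")[0].firstChild.data
        if author == name:
            titles.append(book.getElementsByTagName("title")[0].firstChild.data)
    return titles

print(titles_by_author(dom, "Kim"))   # ['XML Retrieval']
```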

Step-by-step Approach for Effective Korean Unknown Word Recognition (한국어 미등록어 인식을 위한 단계별 접근방법)

  • Park, So-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2009.05a
    • /
    • pp.369-372
    • /
    • 2009
  • Recently, newspapers as well as web documents include many newly coined words, such as "mid" (meaning "American drama", since "mi" means "America" in Korean and "d" refers to the "d" of drama) and "anseup" (meaning "pathetic", since "an" and "seup" literally mean eyeballs and moist, respectively). These words degrade the performance of Korean text analysis systems. In order to recognize such unknown words automatically, this paper proposes a step-by-step approach consisting of an unknown noun recognition phase based on full-text analysis, an unknown verb recognition phase based on web document frequency, and an unknown noun recognition phase based on web document frequency. The full-text analysis phase accurately recognizes unknown words that occur repeatedly within a document, while the two web-document-frequency phases broadly recognize unknown words that occur only once in the document. The model also separates unknown noun recognition from unknown verb recognition so that it can handle various kinds of unknown words. Experimental results show that the proposed approach improves precision by 1.01% and recall by 8.50% compared with a previous approach.
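A rough sketch of the web-document-frequency test follows. Here `web_doc_freq` is a hypothetical stand-in for search-engine hit counts; in the paper these frequencies would come from actual web queries.

```python
# Illustrative sketch: among candidate analyses of an unknown token, prefer
# the reading that occurs in the most web documents. The counts below are
# made-up placeholders, not real search-engine results.

web_doc_freq = {          # hypothetical hit counts
    "미드": 1_200_000,     # "mid" as a noun (American drama)
    "미드하": 300,          # implausible verb-stem reading
    "안습": 800_000,
}

def recognize_unknown(candidates):
    """Return the candidate reading with the highest web document frequency."""
    freq, best = max((web_doc_freq.get(c, 0), c) for c in candidates)
    return best if freq > 0 else None

print(recognize_unknown(["미드", "미드하"]))   # '미드'
```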

Design of Document Suggestion System based on TF-IDF Algorithm for Efficient Organization of Documentation (효율적인 문서 구성을 위한 TF-IDF 알고리즘 기반 문서 제안 시스템의 설계)

  • Kim, Young-Hoon;Park, Seung-Min;Cho, Dae-Soo
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2022.07a
    • /
    • pp.527-528
    • /
    • 2022
  • As the environment changes rapidly, lifelong education has become common and the amount of learning demanded of individuals keeps growing, so choosing learning methods that shorten study time and make learning efficient is increasingly important. This paper proposes a system that analyzes documents written to summarize one's study, suggests documents related to them, and bundles the suggestions with the original document into a document set for learning. Using TF-IDF, which measures document similarity and term importance, the system analyzes a document to extract keywords, suggests related documents, and lets the user build and browse document bundles. The system allows related documents to be viewed together while organizing study notes and, when needed, bundled as a tool for effective learning.
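A minimal sketch of this kind of TF-IDF-based suggestion, using scikit-learn on a made-up corpus (the paper's own pipeline and data are not reproduced here):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Sketch: vectorize documents with TF-IDF, then suggest the corpus documents
# most similar to a query document by cosine similarity.

corpus = [
    "TF-IDF weights terms by frequency and rarity",
    "document suggestion groups related study notes",
    "web caching replaces documents by request frequency",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(corpus)          # one row per document

def suggest(query_doc, top_k=2):
    """Return the top_k corpus documents most similar to query_doc."""
    q = vectorizer.transform([query_doc])
    sims = cosine_similarity(q, matrix).ravel()
    ranked = sims.argsort()[::-1][:top_k]
    return [(corpus[i], round(float(sims[i]), 3)) for i in ranked]

print(suggest("study notes about TF-IDF document weighting"))
```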

Rank-Size Distribution with Web Document Frequency of City Name : Case study with U.S incorporated places of 100,000 or more population (인터넷 문서빈도를 통해 본 도시순위규모에 관한 연구 -미국 10만 이상의 인구를 갖는 도시들을 사례로-)

  • Hong, Il-Young
    • Journal of the Korean association of regional geographers
    • /
    • v.13 no.3
    • /
    • pp.290-300
    • /
    • 2007
  • In this study, the web document frequency of city place names is analyzed and used as the dataset for a rank-size analysis. Search keywords are compared in terms of their spatial meaning, and corpora from different domains are applied; the retrieved search results feed the subsequent analyses. First, rank-size analysis compares the distributions obtained from population and from document frequency. Second, the correlation analysis reveals significant changes as the spatial criteria for the search keywords are broadened; among the domain corpora, COM, NET, and ORG show the higher coefficients. Lastly, cluster analysis classifies the cities by their similarities and differences. Together these analyses characterize the rank-size distribution of city names as reflected in the web documents of the information society.
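The rank-size step can be sketched as a log-log regression of document frequency on rank; the hit counts below are placeholders, not the study's data.

```python
import numpy as np

# Sketch of a rank-size (Zipf-style) fit: regress log(frequency) on log(rank).
# The counts below are hypothetical stand-ins for web hits of city names.

doc_freq = [9_500_000, 5_100_000, 3_300_000, 2_400_000, 1_900_000]

sizes = np.sort(np.array(doc_freq))[::-1]        # descending by frequency
ranks = np.arange(1, len(sizes) + 1)

slope, intercept = np.polyfit(np.log(ranks), np.log(sizes), 1)
print(f"rank-size exponent q = {-slope:.2f}")    # q near 1 suggests a Zipfian distribution
```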

N-gram Feature Selection for Text Classification Based on Symmetrical Conditional Probability and TF-IDF (대칭 조건부 확률과 TF-IDF 기반 텍스트 분류를 위한 N-gram 특질 선택)

  • Choi, Woo-Sik;Kim, Seoung Bum
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.41 no.4
    • /
    • pp.381-388
    • /
    • 2015
  • The rapid growth of the World Wide Web and online information services has generated, and made accessible, a huge number of text documents. Selecting important keywords is an essential step in analyzing texts. In this paper, we propose a feature selection method that combines the term frequency-inverse document frequency technique with symmetrical conditional probability. The proposed method can identify N-gram features, i.e., sequential multiword expressions. Its effectiveness is demonstrated on real text data from the University of California, Irvine machine learning repository.
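For the bigram case, symmetrical conditional probability (SCP) can be sketched as below; combining it with TF-IDF and extending it to general N-grams, as the paper does, is omitted here.

```python
from collections import Counter

# Sketch of symmetrical conditional probability for bigram candidates:
# SCP(x, y) = P(x, y)^2 / (P(x) * P(y)). High SCP marks word pairs that
# tend to occur together, i.e., good multiword feature candidates.

tokens = ("new york is a city in new york state "
          "machine learning is applied in new york").split()

n = len(tokens)
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))

def scp(x, y):
    p_xy = bigrams[(x, y)] / n
    p_x, p_y = unigrams[x] / n, unigrams[y] / n
    return p_xy ** 2 / (p_x * p_y)

print(f"SCP(new, york) = {scp('new', 'york'):.3f}")   # 1.000: always co-occur
print(f"SCP(is, a)     = {scp('is', 'a'):.3f}")       # 0.500: weaker pair
```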

Analysis of the National Police Agency business trends using text mining (텍스트 마이닝 기법을 이용한 경찰청 업무 트렌드 분석)

  • Sun, Hyunseok;Lim, Changwon
    • The Korean Journal of Applied Statistics
    • /
    • v.32 no.2
    • /
    • pp.301-317
    • /
    • 2019
  • There has been significant research on discovering insights from text data using statistical techniques. In this study we analyzed text data produced by the Korean National Police Agency to identify trends in its work by year and to compare work characteristics among local authorities by identifying distinctive keywords in the documents each authority produces. The data were preprocessed according to their characteristics, and word frequencies were computed for each document in order to draw meaningful conclusions. Because the simple term frequency in a document poorly describes the characteristics of keywords, term weights were recomputed using term frequency-inverse document frequency (TF-IDF), and the L2-norm normalization technique was used to compare word frequencies across documents. The analysis can serve as basic data for future police work improvement policies and as a way to improve the efficiency of police services, for example by identifying demand for improvements in desk work.
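A minimal sketch of the weighting step, assuming plain TF-IDF followed by L2 normalization on a toy corpus (not the agency's data):

```python
import math
from collections import Counter

# Sketch: reweight raw term frequencies by inverse document frequency, then
# L2-normalize each document vector so documents of different lengths compare.

docs = [["police", "patrol", "report"],
        ["police", "traffic", "report", "report"],
        ["cyber", "crime", "police"]]

n_docs = len(docs)
df = Counter(w for d in docs for w in set(d))          # document frequency

def tfidf_l2(doc):
    tf = Counter(doc)
    vec = {w: tf[w] * math.log(n_docs / df[w]) for w in tf}
    norm = math.sqrt(sum(v * v for v in vec.values())) or 1.0
    return {w: v / norm for w, v in vec.items()}       # L2 normalization

# "police" occurs in every document, so its idf (and weight) is zero.
print(tfidf_l2(docs[1]))
```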

Document Replacement Policy by Site Popularity in Web Cache (웹 캐시에서 사이트의 인기도에 의한 도큐먼트 교체정책)

  • Yoo, Hang-Suk;Jang, Tea-Mu
    • Journal of Korea Game Society
    • /
    • v.3 no.1
    • /
    • pp.67-73
    • /
    • 2003
  • Most web caches temporarily store documents on the basis of requests for those documents. When a requested document exists in the cache, the web cache sends it to the user; otherwise, the cache requests the document from the origin server, copies it into the cache, and returns it to the user. When its capacity is exceeded, the web cache uses a replacement policy to evict an existing document in favor of a new one. Typical replacement policies include document-based LRU and LFU, and various other policies are used to replace cached documents effectively. However, these policies consider only the time and frequency of document requests, not the popularity of each web site. Building on policies based on frequently requested documents, this paper presents document replacement policies that take the popularity of each web site into account. They suit current network environments, enhance the cache hit ratio, and manage cache contents efficiently by replacing intermittently requested documents with new ones.
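An illustrative sketch of a popularity-aware eviction score follows; the way recency, frequency, and site popularity are combined here is an assumption for illustration, not the paper's exact policy.

```python
import time

# Sketch of a cache whose eviction score weights a document's hit count by
# the popularity of its site and by recency; the lowest score is evicted.

class PopularityCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}        # url -> (site, last_access, hit_count)
        self.site_hits = {}      # site -> total requests (popularity)

    def _score(self, url):
        site, last, hits = self.entries[url]
        # combine document request frequency with its site's popularity
        return hits * self.site_hits.get(site, 1) / (time.time() - last + 1)

    def access(self, url, site):
        self.site_hits[site] = self.site_hits.get(site, 0) + 1
        if url in self.entries:
            _, _, hits = self.entries[url]
            self.entries[url] = (site, time.time(), hits + 1)
            return "hit"
        if len(self.entries) >= self.capacity:
            victim = min(self.entries, key=self._score)   # evict lowest score
            del self.entries[victim]
        self.entries[url] = (site, time.time(), 1)
        return "miss"

cache = PopularityCache(capacity=2)
print(cache.access("/a", "siteA"), cache.access("/a", "siteA"),
      cache.access("/b", "siteB"))   # miss hit miss
```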

Document Replacement Policy by Web Site Popularity (웹 사이트의 인기도에 의한 도큐먼트 교체정책)

  • Yoo, Hang-Suk;Chang, Tae-Mu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.13 no.1
    • /
    • pp.227-232
    • /
    • 2008
  • General web caches temporarily store documents on the basis of requests for those documents. When a requested document exists in the cache, the web cache sends it to the user; otherwise, the cache requests the document from the origin server, copies it into the cache, and returns it to the user. When its capacity is exceeded, the web cache uses a replacement policy to evict an existing document in favor of a new one. Typical replacement policies include document-based LRU and LFU, and various other policies are used to replace cached documents effectively. However, these policies consider only the time and frequency of document requests, not the popularity of each web site. Building on policies based on frequently requested documents, this paper presents document replacement policies that take the popularity of each web site into account, which suit current network environments, enhance the cache hit ratio, and manage cache contents efficiently by replacing intermittently requested documents with new ones.

Document Thematic words Extraction using Principal Component Analysis (주성분 분석을 이용한 문서 주제어 추출)

  • Lee, Chang-Beom;Kim, Min-Soo;Lee, Ki-Ho;Lee, Guee-Sang;Park, Hyuk-Ro
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.10
    • /
    • pp.747-754
    • /
    • 2002
  • In this paper, we propose document thematic word extraction using principal component analysis (PCA), one of the multivariate statistical methods. The proposed PCA model captures the flow of words in a document through eigenvalues and eigenvectors and extracts thematic words. The model is evaluated by applying it to document summarization. Experimental results on newspaper articles show that the proposed model is superior to models using either word frequency or an information retrieval thesaurus. We expect the proposed model to be applicable to information retrieval, information extraction, and document summarization.
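A compact sketch of the PCA step on a hypothetical word-by-sentence frequency matrix, using an eigen-decomposition to pick the words with the largest loadings on the first principal component (the matrix and word list are made up for illustration):

```python
import numpy as np

# Sketch: rows are words, columns are per-sentence frequencies. The first
# eigenvector of the word covariance matrix gives each word's loading on the
# dominant "flow" of the document; high-loading words are taken as thematic.

words = ["economy", "market", "stock", "weather", "rain"]
X = np.array([[3, 2, 4, 0],        # hypothetical frequency counts
              [2, 3, 3, 1],
              [4, 2, 5, 0],
              [0, 1, 0, 3],
              [0, 0, 1, 4]], dtype=float)

cov = np.cov(X)                                # word-by-word covariance
eigvals, eigvecs = np.linalg.eigh(cov)         # eigh: symmetric matrix
pc1 = eigvecs[:, np.argmax(eigvals)]           # first principal component

loading = np.abs(pc1)
for i in loading.argsort()[::-1][:3]:          # top-3 thematic words
    print(words[i], round(float(loading[i]), 3))
```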