• Title/Summary/Keyword: 단어빈도 (word frequency)

Measuring Improvement of Sentence-Redundancy in Multi-Document Summarization (다중 문서요약에서 문장의 중복도 측정방법 개선)

  • 임정민;강인수;배재학;이종혁
    • Proceedings of the Korean Information Science Society Conference / 2003.10a / pp.493-495 / 2003
  • Unlike single-document summarization, multi-document summarization requires a method for measuring redundancy between sentences. Existing approaches use the frequency of overlapping words or syntactic tree structures, but words that do not help measure redundancy, together with the limits of parser performance, introduce errors into the measurement. This paper proposes a method for measuring sentence redundancy that uses the distinction between main and subordinate clauses, sentence constituents, and the meaning of the main clause's predicate. A system implemented with this method improved precision by 56% over the existing overlapping-word-frequency approach.

Efficient Term Weighting For Term-based Web Document Search (단어기반 웹 문서 검색을 위한 효과적인 단어 가중치의 계산)

  • 권순만;박병준
    • Proceedings of the Korean Information Science Society Conference / 2004.10a / pp.169-171 / 2004
  • As the Web (WWW) has grown, so have the amount of information it carries and the environments in which it is used. Consequently, effectively finding web documents that best express what a user is looking for has become important. In term-based search, documents containing the user's query terms are returned, and a weight is computed for each document from those terms. This paper presents an effective way to compute term weights for such term-based search. Whereas the existing method computes weights based only on raw term frequency, the modified method assigns differentiated weights to terms according to the tags in which they appear. Compared with the existing method, the modified method proposed in this paper achieved higher accuracy.
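The tag-differentiated weighting described in this abstract can be sketched as follows; the tag set and the weight values here are illustrative assumptions, not the values used in the paper.

```python
# Sketch of tag-differentiated term weighting. TAG_WEIGHTS is a
# hypothetical choice: terms in prominent tags count for more.
TAG_WEIGHTS = {"title": 3.0, "h1": 2.0, "b": 1.5, "body": 1.0}

def weighted_term_frequency(term, tagged_text):
    """tagged_text: list of (tag, tokens) pairs extracted from a web page."""
    score = 0.0
    for tag, tokens in tagged_text:
        weight = TAG_WEIGHTS.get(tag, 1.0)  # unknown tags get base weight
        score += weight * tokens.count(term)
    return score

page = [("title", ["web", "search"]), ("body", ["web", "document", "web"])]
print(weighted_term_frequency("web", page))  # 3.0*1 + 1.0*2 = 5.0
```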

Korean Emotion Vocabulary: Extraction and Categorization of Feeling Words (한국어 감정표현단어의 추출과 범주화)

  • Sohn, Sun-Ju;Park, Mi-Sook;Park, Ji-Eun;Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.15 no.1 / pp.105-120 / 2012
  • This study aimed to develop a Korean emotion vocabulary list that functions as an important tool in understanding human feelings. The focus was on careful extraction of the most widely used feeling words and on their categorization into emotion groups according to their meaning in real-life use. A total of 12 professionals (including graduate students majoring in Korean) took part in the study. Using the Korean 'word frequency list' developed by Yonsei University and several sorting processes, the study condensed the original 64,666 words into a final list of 504 feeling words. In the next step, a total of 80 social work students evaluated each word and classified it by meaning into whichever of the following categories seemed most appropriate: 'happiness', 'sadness', 'fear', 'anger', 'disgust', 'surprise', 'interest', 'boredom', 'pain', 'neutral', and 'other'. Findings showed that, of the 504 feeling words, 426 expressed a single emotion, 72 reflected two emotions (i.e., the same word indicating two distinct emotions), and 6 expressed three emotions. Among the 426 words representing a single emotion, 'sadness' was predominant, followed by 'anger' and 'happiness'. The 72 words expressing two emotions were mostly combinations of 'anger' and 'disgust', followed by 'sadness' and 'fear', and 'happiness' and 'interest'. The significance of the study lies in the development of an adaptive list of Korean feeling words that can be carefully combined with other emotion signals, such as facial expressions, to optimize emotion recognition research, particularly in the Human-Computer Interface (HCI) area. The identification of feeling words that connote more than one emotion is also noteworthy.

Implementation of Search Method based on Sequence and Adjacency Relationship of User Query (사용자 검색 질의 단어의 순서 및 단어간의 인접 관계에 기반한 검색 기법의 구현)

  • So, Byung-Chul;Jung, Jin-Woo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.6 / pp.724-729 / 2011
  • Information retrieval is the process by which users search for the data they need. Generally, when a user searches a large-scale data set such as the Internet, ranking-based search is widely used because it is not easy to find exactly the needed data at once. In this paper, we propose a novel ranking-based search method based on the sequence of words in the user query and the adjacency relationships between them, with the help of TF-IDF and n-grams. As a result, the needed data could be found more accurately, with 73% accuracy on a data set of more than 19,000 items.
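A minimal sketch of the kind of ranking this abstract describes: documents are scored by TF-IDF over the query terms, with a bonus when adjacent query words (bigrams) also occur adjacently in the document. The smoothed IDF formula and the bonus value are illustrative assumptions, not the paper's exact scheme.

```python
import math

def tf_idf(term, doc, docs):
    # smoothed IDF so that a term occurring in every document still
    # gets a finite, non-negative weight
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in docs if term in d)
    return tf * math.log((1 + len(docs)) / (1 + df))

def rank(query_tokens, docs, adjacency_bonus=0.1):
    """Rank documents by summed TF-IDF of the query terms, plus a bonus
    for each query bigram that also occurs adjacently in the document."""
    query_bigrams = list(zip(query_tokens, query_tokens[1:]))
    scores = []
    for doc in docs:
        score = sum(tf_idf(t, doc, docs) for t in query_tokens)
        doc_bigrams = set(zip(doc, doc[1:]))
        score += adjacency_bonus * sum(1 for bg in query_bigrams if bg in doc_bigrams)
        scores.append(score)
    # indices of documents, best match first
    return sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)
```

On tokenized documents, a document containing the query words in order and adjacent outranks one containing the same words scattered apart.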

A Design of Meta Search Engine that Uses Link and Click Frequencies (링크 빈도와 클릭 빈도를 이용하는 메타 검색엔진의 설계)

  • 유태명;김준태
    • Proceedings of the Korean Information Science Society Conference / 2000.04b / pp.292-294 / 2000
  • The content-based retrieval method used by most search engines ranks pages using only the frequency of words in a web page, which makes it very difficult to pick out important, reference-worthy pages from the huge set of results that share similar word frequencies. One way to distinguish important pages is to look at how many web pages link to a page and how many users visit it. In this paper, we implement a prototype meta search engine that computes the importance of web pages using link frequency and click frequency. Link frequency is obtained by submitting the URL of a page as a query to search engines, and click frequency is obtained by monitoring users' click behavior with a servlet. The meta search engine computes each page's importance as a weighted sum of these two values, reorders the search results by importance, and presents them to the user.
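The weighted-sum re-ranking step described above can be sketched as follows; the equal weights are an illustrative assumption, since the abstract does not state the values used.

```python
def importance(link_freq, click_freq, w_link=0.5, w_click=0.5):
    # importance as a weighted sum of link and click frequency;
    # the weights here are hypothetical, not the paper's values
    return w_link * link_freq + w_click * click_freq

def rerank(results):
    """results: list of (url, link_freq, click_freq) tuples.
    Returns the results reordered by descending importance."""
    return sorted(results, key=lambda r: importance(r[1], r[2]), reverse=True)

pages = [("a.com", 10, 2), ("b.com", 3, 40), ("c.com", 1, 1)]
print([url for url, *_ in rerank(pages)])  # ['b.com', 'a.com', 'c.com']
```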

Word Frequency-Based Big Data Analysis for the Annals of the Joseon Dynasty (조선왕조실록 분석을 위한 단어 빈도수 기반 빅 데이터 분석)

  • Bong, Young-Il;Lee, Choong-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.707-709 / 2022
  • The Annals of the Joseon Dynasty is a chronicle compiling the history of the Joseon Dynasty over 472 years, from King Taejo to King Cheoljong. The Annals, National Treasure No. 151, are an important documentary heritage, but their vast content makes them difficult to analyze. Therefore, rather than analyzing the entire contents of the Annals, it is necessary to extract and analyze important words. In this paper, we propose a method of extracting words from the body text of the Annals through web crawling and analyzing the translated texts based on data sorted by word frequency. In this study, only the part covering King Sejong was extracted, and word importance was analyzed according to word frequency.
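The frequency-sorting step described above can be sketched as a simple word count over crawled, translated article texts; the tokenization here (whitespace/word characters via a regex) is an illustrative assumption.

```python
import re
from collections import Counter

def word_frequencies(texts):
    """Count word frequencies across a list of (translated) article texts."""
    counter = Counter()
    for text in texts:
        # naive tokenization for illustration; real text would need
        # proper morphological analysis
        counter.update(re.findall(r"\w+", text.lower()))
    return counter

articles = ["the king ordered the survey", "the survey of land"]
print(word_frequencies(articles).most_common(2))  # [('the', 3), ('survey', 2)]
```

`most_common(n)` returns the data already sorted by descending frequency, which is exactly the form needed for importance analysis.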

Analysis of the National Police Agency business trends using text mining (텍스트 마이닝 기법을 이용한 경찰청 업무 트렌드 분석)

  • Sun, Hyunseok;Lim, Changwon
    • The Korean Journal of Applied Statistics / v.32 no.2 / pp.301-317 / 2019
  • There has been significant research on how to discover insights from text data using statistical techniques. In this study, we analyzed text data produced by the Korean National Police Agency to identify trends in its work by year and to compare work characteristics among local authorities by identifying distinctive keywords in the documents each authority produced. Preprocessing appropriate to the characteristics of each data set was conducted, and word frequencies were calculated for each document in order to draw meaningful conclusions. Because simple term frequency cannot adequately describe the characteristics of keywords, term frequencies were recalculated using term frequency-inverse document frequency (TF-IDF) weights, and L2-norm normalization was applied to compare them. The analysis can serve as basic data for future policies to improve police work and as a way to improve the efficiency of police services, and it also helps identify demand for improvements in desk work.
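The TF-IDF weighting with L2 normalization mentioned above can be sketched as follows; the smoothed IDF variant used here is an illustrative assumption (the paper's exact formula is not given in the abstract).

```python
import math

def tfidf_l2(doc_terms, docs):
    """TF-IDF weights for one document, L2-normalized so that documents
    of different lengths can be compared on the same scale."""
    n = len(docs)
    weights = {}
    for term in set(doc_terms):
        tf = doc_terms.count(term)
        df = sum(1 for d in docs if term in d)
        weights[term] = tf * math.log((1 + n) / (1 + df))  # smoothed IDF
    # divide by the Euclidean norm so the weight vector has unit length
    norm = math.sqrt(sum(w * w for w in weights.values())) or 1.0
    return {t: w / norm for t, w in weights.items()}
```

After normalization, every document's weight vector has unit L2 norm, so raw frequency differences caused by document length no longer dominate the comparison.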

Learning-based Automatic Keyphrase Indexing from Korean Scientific LIS Articles (자동색인을 위한 학습기반 주요 단어(핵심어) 추출에 관한 연구)

  • Kim, Hea-Jin;Jeoung, Yoo-Kyung
    • Proceedings of the Korean Society for Information Management Conference / 2017.08a / pp.15-18 / 2017
  • As academic databases have made vast amounts of text data accessible, the need to automatically extract important information from large volumes of data has also grown. In particular, techniques that automatically select and extract important words or phrases from text data are a core technology applicable to many areas, such as effective data management and information retrieval, yet little research has targeted Korean text. Existing studies on keyword or keyphrase extraction from Korean text have gone no further than identifying keyphrases based on word frequency, co-occurrence frequency, or term weights derived from them. This study therefore proposes and evaluates a model that extracts keyphrases by learning various features extracted from the abstracts of Korean academic papers.

Clustering of Web Document Exploiting with the Union of Term frequency and Co-link in Hypertext (단어빈도와 동시링크의 결합을 통한 웹 문서 클러스터링 성능 향상에 관한 연구)

  • Lee, Kyo-Woon;Lee, Won-hee;Park, Heum;Kim, Young-Gi;Kwon, Hyuk-Chul
    • Journal of Korean Library and Information Science Society / v.34 no.3 / pp.211-229 / 2003
  • In this paper, we focus on how the number of words in a web document affects clustering performance. Our experimental results clearly show the relationship between the amount of words and its impact on clustering performance. We also present an algorithm that compensates for the weak cases by using the co-link frequency of web documents. The test bench for this research is 1,449 web documents in the 'Natural science' category of the Naver Directory. We clustered these documents using term-based, link-based, and hybrid clustering methods, and compared the results with the categories originally assigned in the Naver Directory.

Recommendation System using Associative Web Document Classification by Word Frequency and α-Cut (단어 빈도와 α-cut에 의한 연관 웹문서 분류를 이용한 추천 시스템)

  • Jung, Kyung-Yong;Ha, Won-Shik
    • The Journal of the Korea Contents Association / v.8 no.1 / pp.282-289 / 2008
  • Although collaborative filtering has seen some technical improvements, it has yet to fully reflect the actual relations between items. In this paper, we propose a recommendation system using associative web document classification by word frequency and α-cut to address the shortcomings of collaborative filtering. The proposed method extracts words from web documents through morpheme analysis and accumulates term-frequency weights. It builds association rules using the Apriori algorithm, applying the term-frequency weights to their confidence, and computes similarity among words using hypergraph partitioning. Finally, it classifies related web documents using α-cut and computes similarity using adjusted cosine similarity. The results show that the proposed method significantly outperforms existing methods.
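Two of the building blocks named in this abstract, adjusted cosine similarity and an α-cut threshold, can be sketched as follows. This is a minimal illustration of the standard formulas, not the paper's implementation; the rule and rating structures are hypothetical.

```python
import math

def adjusted_cosine(ratings, i, j):
    """Adjusted cosine similarity between items i and j over a
    {user: {item: rating}} dict; each user's mean rating is subtracted
    so that differing rating scales across users are compensated."""
    num = xx = yy = 0.0
    for user_ratings in ratings.values():
        if i in user_ratings and j in user_ratings:
            mean = sum(user_ratings.values()) / len(user_ratings)
            a, b = user_ratings[i] - mean, user_ratings[j] - mean
            num += a * b
            xx += a * a
            yy += b * b
    denom = math.sqrt(xx) * math.sqrt(yy)
    return num / denom if denom else 0.0

def alpha_cut(rules, alpha):
    """Keep only association rules whose confidence reaches the threshold."""
    return [r for r in rules if r["confidence"] >= alpha]
```

Raising α makes the classification stricter: fewer, more confident rules survive the cut.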