• Title/Summary/Keyword: News Document Summarization (뉴스 문서 요약)

Search Results: 20, Processing Time: 0.027 seconds

Automatic Text Categorization using the Importance of Sentences (문장 중요도를 이용한 자동 문서 범주화)

  • Ko, Young-Joong;Park, Jin-Woo;Seo, Jung-Yun
• Journal of KIISE: Software and Applications / v.29 no.6 / pp.417-424 / 2002
• Automatic text categorization is the problem of assigning predefined categories to free-text documents. To classify text documents, we have to extract good features from them. In previous research, a text document was commonly represented by the frequency of each feature. But sentences in a document differ in importance, and this affects the importance of the features in the document. In this paper, we measure the importance of sentences in a text document using text-summarization techniques. A text document is then represented by features with different weights according to the importance of each sentence. To verify the new method, we constructed a Korean newsgroup data set and experimented with our method on it. We found that our new method gave a significant improvement over a baseline system on our data sets.
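The weighting scheme the abstract describes can be sketched as follows; the tokenization and the importance scores below are illustrative stand-ins, not the paper's actual summarization-based measure:

```python
from collections import Counter

def weighted_features(sentences, importance):
    """Weight each term's count by the importance of the sentence it
    appears in. `sentences` is a list of token lists and `importance`
    is a parallel list of sentence-importance scores."""
    features = Counter()
    for tokens, w in zip(sentences, importance):
        for tok in tokens:
            features[tok] += w          # weighted, not raw, term frequency
    return features

sents = [["stocks", "fall", "sharply"],      # lead sentence: high importance
         ["analysts", "comment", "stocks"]]  # later sentence: low importance
feats = weighted_features(sents, [1.0, 0.3])
# "stocks" occurs in both sentences, but its weight reflects where it occurs
```

A term in an important sentence thus contributes more to the document vector than the same term in a peripheral sentence, which is the core idea of the paper.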

Keyword Extraction from News Corpus using Modified TF-IDF (TF-IDF의 변형을 이용한 전자뉴스에서의 키워드 추출 기법)

  • Lee, Sung-Jick;Kim, Han-Joon
• The Journal of Society for e-Business Studies / v.14 no.4 / pp.59-73 / 2009
• Keyword extraction is an important and essential technique for text-mining applications such as information retrieval, text categorization, summarization, and topic detection. A set of keywords extracted from large-scale electronic document data serves as significant features for text-mining algorithms and contributes to improving the performance of document browsing, topic detection, and automated text classification. This paper presents a keyword extraction technique that can be used to detect topics for each news domain from a large document collection of internet news portal sites. Basically, we have used six variants of the traditional TF-IDF weighting model. On top of the TF-IDF model, we propose a word-filtering technique called 'cross-domain comparison filtering'. To prove the effectiveness of our method, we have analyzed the usefulness of keywords extracted from Korean news articles and have presented how the keywords of each news domain change over time.
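As a baseline for the six variants mentioned, plain TF-IDF over a tokenized collection looks roughly like this; the paper's variants and its cross-domain comparison filtering are modifications layered on top, and the toy corpus is invented:

```python
import math
from collections import Counter

def tfidf(docs):
    """Score every term of every document with tf * log(N / df)."""
    n = len(docs)
    df = Counter()                       # document frequency per term
    for doc in docs:
        df.update(set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)                # raw term frequency
        scores.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return scores

docs = [["oil", "price", "rises"],
        ["oil", "supply", "falls"],
        ["election", "results", "announced"]]
s = tfidf(docs)
# "oil" appears in 2 of 3 docs, so its idf is log(3/2);
# "price" appears in only one, so its idf is log(3)
```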


Contextual Advertisement System based on Document Clustering (문서 클러스터링을 이용한 문맥 광고 시스템)

  • Lee, Dong-Kwang;Kang, In-Ho;An, Dong-Un
• The KIPS Transactions: Part B / v.15B no.1 / pp.73-80 / 2008
• In this paper, an advertisement-keyword identification method using document clustering is proposed to solve problems caused by ambiguous words and incorrect identification of main keywords. News articles that have similar contents and the same advertisement keywords are clustered to construct the contextual information of advertisement keywords. In addition to news articles, the web page and summary of a product are also used to construct the contextual information. A given document is classified into one of the news-article clusters, and then cluster-relevant advertisement keywords are used to identify keywords in the document. We achieved a 21% precision improvement with the proposed method.
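The classification step, assigning an incoming document to the most similar news-article cluster so that its ad keywords can be reused, can be sketched with cosine similarity over sparse term-weight vectors. The cluster names and weights below are invented for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    num = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return num / (na * nb) if na and nb else 0.0

def assign_cluster(doc_vec, centroids):
    """Pick the cluster whose centroid is most similar to the document;
    the cluster's associated ad keywords would then drive keyword choice."""
    return max(centroids, key=lambda c: cosine(doc_vec, centroids[c]))

centroids = {
    "finance": {"stock": 1.0, "market": 0.8, "bank": 0.6},
    "travel":  {"hotel": 1.0, "flight": 0.9, "tour": 0.5},
}
doc = {"stock": 2.0, "bank": 1.0}
cluster = assign_cluster(doc, centroids)
```

Using the whole cluster's context rather than the single document is what lets the method disambiguate words that would be ambiguous in isolation.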

Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

  • Ko, Eunjung;Kim, Namgyu
• Journal of Intelligence and Information Systems / v.24 no.2 / pp.125-148 / 2018
• Recently, as the demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of communicating information in almost all fields. In addition, many analysts are interested in text because the amount of data is very large and it is relatively easy to collect compared to other unstructured and structured data. Among the various text-analysis applications, document classification, which classifies documents into predetermined categories; topic modeling, which extracts major topics from a large number of documents; sentiment analysis or opinion mining, which identifies emotions or opinions contained in texts; and text summarization, which summarizes the main contents of one or several documents, have been actively studied. In particular, text summarization is actively applied in business through news summary services, privacy-policy summary services, etc. Much academic research has also been done on both the extractive approach, which selectively provides the main elements of a document, and the abstractive approach, which extracts the elements of a document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have not made much progress compared to automatic text summarization itself. Most existing studies dealing with quality evaluation of summarization carried out manual summarization of documents, used them as reference documents, and measured the similarity between the automatic summary and the reference document. Specifically, automatic summarization is performed on the full text through various techniques, and comparison with the reference document, an ideal summary, is performed to measure the quality of the automatic summary.
Reference documents are provided in two major ways. The most common is manual summarization, in which a person creates an ideal summary by hand. Since this method requires human intervention in preparing the summary, it takes a lot of time and cost, and the evaluation result may differ depending on who writes the summary. To overcome these limitations, attempts have been made to measure the quality of summary documents without human intervention. As a representative attempt, a method has recently been devised that reduces the size of the full text and measures the similarity between the reduced full text and the automatic summary. In this method, the more frequently a term from the full text appears in the summary, the better the quality of the summary is judged to be. However, since summarization essentially means condensing a large amount of content while minimizing omissions, a "good summary" judged only by frequency does not always mean a "good summary" in the essential sense. To overcome the limitations of these previous studies, this study proposes an automatic quality evaluation method for text summarization based on the essential meaning of summarization. Specifically, succinctness is defined as an element indicating how little duplicated content there is among the sentences of the summary, and completeness is defined as an element indicating how little of the original content is missing from the summary. We then propose a method for automatic quality evaluation of text summarization based on these two concepts.
To evaluate the practical applicability of the proposed methodology, 29,671 sentences were extracted from TripAdvisor's hotel reviews, the reviews were summarized by hotel, and the results of experiments on quality evaluation of the summaries according to the proposed methodology are presented. We also provide a way to integrate completeness and succinctness, which are in a trade-off relationship, into an F-score, and propose a method to perform optimal summarization by changing the sentence-similarity threshold.
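One hypothetical realization of the two concepts, using Jaccard token overlap as the sentence-similarity measure; the paper's actual similarity function and thresholds may differ:

```python
def jaccard(a, b):
    """Token-overlap similarity between two tokenized sentences."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def succinctness(summary, threshold=0.5):
    """Fraction of summary sentence pairs that are NOT near-duplicates."""
    pairs = [(i, j) for i in range(len(summary))
             for j in range(i + 1, len(summary))]
    if not pairs:
        return 1.0
    dup = sum(1 for i, j in pairs
              if jaccard(summary[i], summary[j]) >= threshold)
    return 1.0 - dup / len(pairs)

def completeness(source, summary, threshold=0.5):
    """Fraction of source sentences covered by some summary sentence."""
    covered = sum(1 for s in source
                  if any(jaccard(s, t) >= threshold for t in summary))
    return covered / len(source) if source else 1.0

def f_score(c, s):
    """Harmonic mean combining the two trade-off measures."""
    return 2 * c * s / (c + s) if c + s else 0.0

source = [["clean", "room"], ["great", "view"], ["noisy", "street"]]
summary = [["clean", "room"], ["great", "view"]]
c = completeness(source, summary)   # 2 of 3 source sentences are covered
s = succinctness(summary)           # the two summary sentences do not overlap
f = f_score(c, s)
```

Raising the summary length tends to raise completeness and lower succinctness, which is why the paper tunes the sentence-similarity threshold against the combined F-score.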

Media-based Analysis of Gasoline Inventory with Korean Text Summarization (한국어 문서 요약 기법을 활용한 휘발유 재고량에 대한 미디어 분석)

  • Sungyeon Yoon;Minseo Park
• The Journal of the Convergence on Culture Technology / v.9 no.5 / pp.509-515 / 2023
• Despite the continued development of alternative energies, fuel consumption is increasing. In particular, the price of gasoline fluctuates greatly according to fluctuations in international oil prices, and gas stations adjust their gasoline inventory to respond to these price fluctuations. In this study, news datasets are used to analyze gasoline consumption patterns through fluctuations of the gasoline inventory. First, we collect news datasets with web crawling. Second, we summarize the news datasets using KoBART, which summarizes Korean text. Finally, we preprocess the summaries and derive fluctuation factors through an N-gram language model and TF-IDF. Through this study, it is possible to analyze and predict gasoline consumption patterns.
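The factor-derivation step can be illustrated with a simple n-gram count over already-summarized token lists; this is a simplified stand-in for the study's N-gram language model plus TF-IDF, and the toy summaries are invented:

```python
from collections import Counter

def top_ngrams(texts, n=2, k=3):
    """Count word n-grams across summarized articles and return the k
    most frequent as candidate fluctuation factors."""
    counts = Counter()
    for tokens in texts:
        counts.update(tuple(tokens[i:i + n])
                      for i in range(len(tokens) - n + 1))
    return [g for g, _ in counts.most_common(k)]

summaries = [["gasoline", "inventory", "falls"],
             ["gasoline", "inventory", "rises"],
             ["oil", "price", "rises"]]
top = top_ngrams(summaries, n=2, k=1)
# ("gasoline", "inventory") is the only bigram occurring twice
```

Running the count on KoBART summaries rather than full articles is what keeps the extracted factors focused on each article's main point.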

Topic Modeling of News Article about International Construction Market Using Latent Dirichlet Allocation (Latent Dirichlet Allocation 기법을 활용한 해외건설시장 뉴스기사의 토픽 모델링(Topic Modeling))

  • Moon, Seonghyeon;Chung, Sehwan;Chi, Seokho
• KSCE Journal of Civil and Environmental Engineering Research / v.38 no.4 / pp.595-599 / 2018
• A sufficient understanding of the overseas construction market is crucial for profitability in international construction projects. Many researchers have considered news articles a fine data source for figuring out market conditions, since they include market information such as political, economic, and social issues. Because the text data exists in unstructured format and in huge volumes, various text-mining techniques have been studied to reduce the manpower, time, and cost needed to summarize the data. However, there are limitations in extracting the needed information from news articles because various topics coexist in the data. This research aims to overcome these problems and contribute to summarizing market status by performing topic modeling with Latent Dirichlet Allocation. Assuming that 10 topics exist in the corpus, the extracted topics included projects for user convenience (topic-2), private support to solve poverty problems in Africa (topic-4), and so on. By grouping the topics in the news articles, the results could improve the extraction of useful information and the summarization of market status.
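A minimal collapsed Gibbs sampler shows what LDA computes for such a corpus; the study itself would have used a standard implementation, and the toy corpus below is invented:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, k, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Tiny collapsed Gibbs sampler for LDA over tokenized documents.
    Returns per-document topic proportions (each row sums to 1)."""
    rng = random.Random(seed)
    v = len({w for d in docs for w in d})                # vocabulary size
    z = [[rng.randrange(k) for _ in d] for d in docs]    # topic per token
    ndk = [[0] * k for _ in docs]                        # doc-topic counts
    nkw = [defaultdict(int) for _ in range(k)]           # topic-word counts
    nk = [0] * k                                         # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]                              # remove assignment
                ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                weights = [(ndk[d][s] + alpha) * (nkw[s][w] + beta)
                           / (nk[s] + v * beta) for s in range(k)]
                t = rng.choices(range(k), weights=weights)[0]
                z[d][i] = t                              # resample topic
                ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    return [[(c + alpha) / (len(doc) + k * alpha) for c in ndk[d]]
            for d, doc in enumerate(docs)]

docs = [["oil", "price", "oil", "market"],
        ["oil", "market", "price", "oil"],
        ["road", "bridge", "road", "tunnel"],
        ["bridge", "tunnel", "road", "road"]]
theta = lda_gibbs(docs, k=2)
```

With k set to 10, as in the paper, each news article gets a 10-dimensional topic mixture, and the per-topic word counts give the topic keywords the authors interpret.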

A Study on Layout Extraction from Internet Documents Through Xpath (Xpath에 의한 인터넷 문서의 레이아웃 추출 방법에 관한 연구)

  • Han Kwang-Rok;Sun Bok-Keun
• The Journal of the Korea Contents Association / v.5 no.4 / pp.237-244 / 2005
• Currently, most Internet documents including news data are made from predefined templates, but templates are usually defined only for the main data and are not helpful for information retrieval over indexes, advertisements, header data, etc. Templates in such forms are not appropriate when Internet documents are used as data for information retrieval. In order to process Internet documents in various areas of information retrieval, it is necessary to detect additional information such as advertisements and page indexes. Thus, this study proposes a method of detecting the layout of web pages by identifying the characteristics and structure of the block tags that affect the layout and by calculating distances between web pages. In our experiment, we successfully extracted 640 documents from 1,000 samples, a 64% recall rate. This method aims to reduce the cost of automatic web-document processing and improve its efficiency by applying it to document preprocessing for information retrieval tasks such as data extraction and document summarization.
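The idea of comparing pages by the paths of their block-level tags can be sketched with Python's built-in ElementTree (which supports only a subset of XPath); real news pages would need an HTML-tolerant parser, and the block-tag set and distance measure here are simplified assumptions:

```python
import xml.etree.ElementTree as ET

BLOCK_TAGS = {"div", "table", "ul", "td"}   # tags assumed to shape layout

def layout_signature(markup):
    """Collect root-to-node paths of block-level tags as a layout signature."""
    root = ET.fromstring(markup)
    paths = set()
    def walk(node, path):
        for child in node:
            p = path + "/" + child.tag
            if child.tag in BLOCK_TAGS:
                paths.add(p)
            walk(child, p)
    walk(root, "/" + root.tag)
    return paths

def layout_distance(a, b):
    """1 minus the Jaccard overlap of the two pages' signatures."""
    sa, sb = layout_signature(a), layout_signature(b)
    union = sa | sb
    if not union:
        return 0.0
    return 1.0 - len(sa & sb) / len(union)

page_a = "<html><body><div><ul/></div><div/></body></html>"
page_b = "<html><body><div><table/></div></body></html>"
d = layout_distance(page_a, page_b)   # shares only the /html/body/div path
```

Pages generated from the same template then land at small distances from each other, which is what lets advertisement and index blocks be located by position in the shared layout.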


Policy agenda proposals from text mining analysis of patents and news articles (특허 및 뉴스 기사 텍스트 마이닝을 활용한 정책의제 제안)

  • Lee, Sae-Mi;Hong, Soon-Goo
• Journal of Digital Convergence / v.18 no.3 / pp.1-12 / 2020
• The purpose of this study is to explore trends in blockchain technology through analysis of patents and news articles using text mining, and to suggest a blockchain policy agenda by grasping social interests. For this purpose, 327 blockchain-related patent abstracts in Korea and 5,941 full-text online news articles were collected and preprocessed. Twelve patent topics and 19 news topics were extracted with latent Dirichlet allocation topic modeling. Analysis of the patents showed that topics related to authentication and transactions were largely predominant, while analysis of the news articles showed that social interest is mainly concerned with cryptocurrency. Policy agendas were then derived for blockchain development. This study demonstrates the efficient and objective use of an automated technique for the analysis of large text documents, and the specific policy agendas proposed here can inform future policy-making processes.

A Semi-automatic Annotation Tool based on Named Entity Dictionary (개체명 사전 기반의 반자동 말뭉치 구축 도구)

  • Noh, Kyung-Mok;Kim, Chang-Hyun;Cheon, Min-Ah;Park, Ho-Min;Yoon, Ho;Kim, Jae-Kyun;Kim, Jae-Hoon
• Annual Conference on Human and Language Technology / 2017.10a / pp.309-313 / 2017
• Named entities such as person, location, and organization names carry important meaning within documents, so they are widely used in question answering, summarization, and machine translation. Named-entity recognition is the task of finding the words in a document that correspond to named entities and attaching their categories. Named-entity recognition research relies on corpora annotated with entity categories. Since entity categories are defined differently depending on the research field, a corpus suitable for each field is needed, but building such a corpus requires a great deal of time and manpower. This paper therefore proposes a semi-automatic corpus construction tool based on a named-entity dictionary. The proposed tool is divided into preprocessing, user-tagging, and postprocessing stages. The preprocessing stage finds named entities automatically: a trie-structured entity dictionary is built from about 110,000 entities and then used to locate entities in the text. In the user-tagging stage, the user manually corrects or deletes erroneous entities found in preprocessing and additionally tags entities that were missed. The postprocessing stage updates the dictionary from the tagging results. Tagging 752 news articles with the proposed tool added 7,620 entities to the dictionary and reduced the number of manual tagging operations by about 57.6% compared to not using the tool.
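The trie-based dictionary matching of the preprocessing stage can be sketched as follows; the entries and categories are invented examples, and the real tool's 110,000-entry dictionary and tagging interface are not reproduced:

```python
def build_trie(entries):
    """Build a character trie from (entity, category) pairs."""
    root = {}
    for name, cat in entries:
        node = root
        for ch in name:
            node = node.setdefault(ch, {})
        node["$"] = cat                  # end-of-entry marker holds category
    return root

def tag_entities(text, trie):
    """Longest-match scan: return (start, end, surface, category) spans."""
    spans = []
    i = 0
    while i < len(text):
        node, j, best = trie, i, None
        while j < len(text) and text[j] in node:
            node = node[text[j]]
            j += 1
            if "$" in node:              # a dictionary entry ends here
                best = (i, j, text[i:j], node["$"])
        if best:
            spans.append(best)
            i = best[1]                  # resume after the matched entity
        else:
            i += 1
    return spans

trie = build_trie([("서울", "LOC"), ("서울대학교", "ORG")])
spans = tag_entities("서울대학교는 서울에 있다", trie)
# longest match prefers 서울대학교 (ORG) over its prefix 서울 (LOC)
```

Because the scan always keeps the longest dictionary hit, prefix entities like 서울 inside 서울대학교 are not spuriously tagged, and the postprocessing stage can feed newly confirmed entities back into the trie.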

