• Title/Summary/Keyword: Text summarization

Korean Summarization System using Automatic Paragraphing (단락 자동 구분을 이용한 문서 요약 시스템)

  • 김계성;이현주;이상조
    • Journal of KIISE: Software and Applications / v.30 no.7_8 / pp.681-686 / 2003
  • In this paper, we describe a system that extracts important sentences from Korean newspaper articles using automatic paragraphing. First, we detect words repeated across sentences. From these repeated words, the system computes a Closeness Degree between Sentences (CDS) based on the degree of morphological agreement and the change of grammatical role. It then automatically divides the document into meaningful paragraphs, using the number of paragraphs specified by the user. Finally, it selects one representative sentence from each paragraph and generates the summary from these representative sentences. Although the system does not use features such as the title, sentence position, or rhetorical structure, it is able to extract sentences that belong in the summary.
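
As a rough illustration of the pipeline described above, the Python sketch below cuts a sentence list into k paragraphs at the weakest adjacent-sentence links and then picks one representative sentence per paragraph. The word-overlap closeness score is only a stand-in assumption: the paper's CDS is built from morphological agreement and grammatical-role change, which would require a Korean morphological analyzer.

```python
def closeness(s1: str, s2: str) -> float:
    """Word-overlap stand-in for the paper's Closeness Degree between Sentences."""
    w1, w2 = set(s1.split()), set(s2.split())
    return len(w1 & w2) / max(1, min(len(w1), len(w2)))

def summarize(sentences: list[str], k: int) -> list[str]:
    """Cut the document at the k-1 weakest adjacent links, then pick one sentence per paragraph."""
    links = [closeness(a, b) for a, b in zip(sentences, sentences[1:])]
    cuts = sorted(sorted(range(len(links)), key=lambda i: links[i])[:k - 1])
    paragraphs, start = [], 0
    for c in cuts:
        paragraphs.append(sentences[start:c + 1])
        start = c + 1
    paragraphs.append(sentences[start:])
    summary = []
    for para in paragraphs:
        # representative sentence: the one closest, in total, to the rest of its paragraph
        rep = max(para, key=lambda s: sum(closeness(s, t) for t in para if t is not s))
        summary.append(rep)
    return summary
```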

Designing Effective Summary Models for Defense Articles with AI and Evaluating Performance (AI를 이용한 국방 기사의 효과적인 요약 모델 설계 및 성능 평가)

  • Yerin Nam;YunYoung Choi;JongGeun Choi;HyukJin Kwone
    • Journal of the Korean Society of Systems Engineering / v.20 no.1 / pp.64-75 / 2024
  • With the growth of the Internet, information now arrives quickly and from diverse sources. In the defense field in particular, articles and information pour in from many outlets every day, and rapidly changing situations demand fast information selection, understanding, and decision-making. Moving from platform to platform and reading articles one by one to find the needed information is cumbersome. To address this, this research aims to save time and provide quick access to the latest information by letting readers grasp key points from summarized content without reading the entire article. This allows defense professionals to focus on important tasks rather than on extensive information search and analysis.

Keyword Extraction from News Corpus using Modified TF-IDF (TF-IDF의 변형을 이용한 전자뉴스에서의 키워드 추출 기법)

  • Lee, Sung-Jick;Kim, Han-Joon
    • The Journal of Society for e-Business Studies / v.14 no.4 / pp.59-73 / 2009
  • Keyword extraction is an essential technique for text-mining applications such as information retrieval, text categorization, summarization, and topic detection. Keywords extracted from a large-scale electronic document collection serve as significant features for text-mining algorithms and help improve the performance of document browsing, topic detection, and automated text classification. This paper presents a keyword extraction technique that can detect topics for each news domain from a large document collection gathered from internet news portal sites. We use six variants of the traditional TF-IDF weighting model, and on top of them we propose a word-filtering technique called 'cross-domain comparison filtering'. To demonstrate the effectiveness of the method, we analyze the usefulness of keywords extracted from Korean news articles and show how the keywords of each news domain change over time.
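
A minimal sketch of the general approach follows, assuming plain TF-IDF and a simplified filter: each domain's words are scored with TF-IDF, and words that also rank highly in many other domains are dropped as too generic. The six TF-IDF variants and the exact 'cross-domain comparison filtering' rule of the paper are not reproduced; the threshold logic here is an illustrative assumption.

```python
import math
from collections import Counter

def tf_idf(docs: list[list[str]]) -> Counter:
    """Plain TF-IDF scores aggregated over a domain's tokenized documents."""
    df = Counter(w for doc in docs for w in set(doc))
    n = len(docs)
    scores = Counter()
    for doc in docs:
        tf = Counter(doc)
        for w, f in tf.items():
            scores[w] += (f / len(doc)) * math.log(n / df[w])
    return scores

def domain_keywords(domains: dict[str, list[list[str]]], top_k: int = 20,
                    max_domains: int = 2) -> dict[str, list[str]]:
    """Rank words per domain, then drop words that also rank highly in other domains."""
    per_domain = {d: tf_idf(docs) for d, docs in domains.items()}
    top_sets = {d: set(sorted(s, key=s.get, reverse=True)[:top_k * 3])
                for d, s in per_domain.items()}
    keywords = {}
    for d, scores in per_domain.items():
        kept = []
        for w in sorted(scores, key=scores.get, reverse=True):
            # cross-domain comparison filter (illustrative): a word that is a
            # top candidate in more than `max_domains` domains is too generic
            if sum(w in ts for ts in top_sets.values()) <= max_domains:
                kept.append(w)
            if len(kept) == top_k:
                break
        keywords[d] = kept
    return keywords
```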

A Study of Pre-trained Language Models for Korean Language Generation (한국어 자연어생성에 적합한 사전훈련 언어모델 특성 연구)

  • Song, Minchae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.28 no.4 / pp.309-328 / 2022
  • This study empirically analyzed Korean pre-trained language models (PLMs) designed for natural language generation. The performance of two PLMs, BART and GPT, was compared on abstractive text summarization. To investigate how performance depends on the characteristics of the inference data, ten different document types, spanning six types of informational content as well as creative content, were considered. BART, which can both generate and understand natural language, performed better than GPT, which can only generate. A closer look at the effect of inference-data characteristics showed that GPT's performance was proportional to the length of the input text. However, even for the longest documents, where GPT performed best, BART still outperformed GPT, suggesting that the greatest influence on downstream performance is not the size of the training data or the number of model parameters but the structural suitability of the PLM for the downstream task. The models were also compared by analyzing part-of-speech (POS) shares: BART's performance was inversely related to the proportion of prefixes, adjectives, adverbs, and verbs but positively related to the proportion of nouns. This result emphasizes the importance of taking the inference data's characteristics into account when fine-tuning a PLM for its intended downstream task.
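
A minimal sketch of this kind of comparison set-up, assuming the Hugging Face transformers library; the checkpoint names are placeholders, not the models used in the study. It shows the structural difference the abstract points to: the BART-style encoder-decoder has a native summarization head, while the GPT-style decoder treats summarization as prompt continuation.

```python
from transformers import pipeline

# Encoder-decoder model (BART-style): built-in summarization head.
# Model names below are placeholders, not the checkpoints from the paper.
bart_summarizer = pipeline("summarization", model="your-korean-bart-checkpoint")

# Decoder-only model (GPT-style): summarization framed as continuation of a
# prompt, which is one reason input length matters so much for GPT.
gpt_generator = pipeline("text-generation", model="your-korean-gpt-checkpoint")

def summarize_bart(text: str) -> str:
    return bart_summarizer(text, max_length=128, min_length=16)[0]["summary_text"]

def summarize_gpt(text: str) -> str:
    prompt = text + "\n\n한 줄 요약:"  # "one-line summary:" style prompt
    out = gpt_generator(prompt, max_new_tokens=64, do_sample=False)[0]["generated_text"]
    return out[len(prompt):].strip()
```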

An Innovative Approach of Bangla Text Summarization by Introducing Pronoun Replacement and Improved Sentence Ranking

  • Haque, Md. Majharul;Pervin, Suraiya;Begum, Zerina
    • Journal of Information Processing Systems / v.13 no.4 / pp.752-777 / 2017
  • This paper proposes an automatic method to summarize Bangla news documents. In the proposed approach, pronoun replacement is applied, for the first time, to minimize dangling pronouns in the summary. After pronoun replacement, sentences are ranked using term frequency, sentence frequency, numerical figures, and title words. If two sentences have at least 60% cosine similarity, the frequency of the longer sentence is increased and the shorter sentence is removed, eliminating redundancy. Moreover, the first sentence is always included in the summary if it contains any title word. In Bangla text, numerical figures can appear both in words and in digits, in a variety of forms; all of these forms are identified when assessing the importance of sentences. The approach combines a rule-based system with a hidden Markov model and a Markov chain model. To derive the rules, we analyzed 3,000 Bangla news documents and studied several Bangla grammar books. A series of experiments was performed on 200 Bangla news documents and 600 summaries (3 summaries per document). The evaluation results demonstrate the effectiveness of the proposed technique over four recent methods.
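
A minimal sketch of the redundancy-elimination step: if two sentences are at least 60% cosine-similar, the longer one absorbs the shorter one's weight and the shorter one is dropped. Scoring here uses only term frequency and title-word overlap; the paper's sentence frequency, numerical-figure handling, and pronoun replacement are omitted, and the weighting is an assumption.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_sentences(sentences: list[str], title: str, threshold: float = 0.6) -> list[str]:
    vecs = [Counter(s.lower().split()) for s in sentences]
    title_words = set(title.lower().split())
    # illustrative score: term frequency plus a bonus for title-word overlap
    scores = [sum(v.values()) + 2 * len(title_words & set(v)) for v in vecs]
    keep = [True] * len(sentences)
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if keep[i] and keep[j] and cosine(vecs[i], vecs[j]) >= threshold:
                longer, shorter = (i, j) if len(sentences[i]) >= len(sentences[j]) else (j, i)
                scores[longer] += scores[shorter]   # boost the surviving sentence
                keep[shorter] = False               # drop the redundant one
    ranked = sorted((i for i in range(len(sentences)) if keep[i]),
                    key=lambda i: scores[i], reverse=True)
    return [sentences[i] for i in ranked]
```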

KR-WordRank : An Unsupervised Korean Word Extraction Method Based on WordRank (KR-WordRank : WordRank를 개선한 비지도학습 기반 한국어 단어 추출 방법)

  • Kim, Hyun-Joong;Cho, Sungzoon;Kang, Pilsung
    • Journal of Korean Institute of Industrial Engineers / v.40 no.1 / pp.18-33 / 2014
  • A word is the smallest unit of text analysis, and most text-mining algorithms assume that the words in the given documents can be recognized perfectly. However, newly coined words, spelling and spacing errors, and domain-adaptation problems make it difficult to recognize words correctly. To make matters worse, obtaining a sufficient amount of training data that works in every situation is both unrealistic and inefficient, so an automatic word extraction method that requires no training process is needed. WordRank, the most widely used unsupervised word extraction algorithm for Chinese and Japanese, extracts words poorly in Korean because of the different language structure. In this paper, we first discuss why WordRank performs poorly in Korean, and then propose a customized WordRank algorithm for Korean, named KR-WordRank, which takes the linguistic characteristics of Korean into account and improves robustness to noise in text documents. Experimental results show that KR-WordRank performs significantly better than the original WordRank in Korean. In addition, the proposed algorithm not only extracts proper words but also identifies candidate keywords for effective document summarization.
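
A toy, illustrative sketch of the WordRank idea the paper builds on: subword candidates are scored with PageRank-style iterations over a co-occurrence graph, with no training data. The graph construction below (token prefixes linked across adjacent tokens) is a simplification and an assumption; KR-WordRank's actual candidate generation and noise-robustness modifications are more elaborate.

```python
from collections import Counter, defaultdict

def candidate_prefixes(token: str, max_len: int = 6) -> list[str]:
    """Left-side substrings of a token, the usual word candidates in Korean."""
    return [token[:i] for i in range(1, min(len(token), max_len) + 1)]

def wordrank(docs: list[str], min_count: int = 3, damping: float = 0.85,
             iters: int = 20) -> dict[str, float]:
    freq = Counter(p for doc in docs for tok in doc.split()
                   for p in candidate_prefixes(tok))
    nodes = {p for p, c in freq.items() if c >= min_count}
    graph = defaultdict(Counter)
    for doc in docs:
        toks = doc.split()
        for a, b in zip(toks, toks[1:]):           # adjacent-token co-occurrence
            for p in candidate_prefixes(a):
                for q in candidate_prefixes(b):
                    if p in nodes and q in nodes:
                        graph[p][q] += 1
                        graph[q][p] += 1
    rank = {p: 1.0 for p in nodes}
    for _ in range(iters):                         # PageRank-style update
        new = {}
        for p in nodes:
            incoming = sum(rank[q] * w / sum(graph[q].values())
                           for q, w in graph[p].items())
            new[p] = (1 - damping) + damping * incoming
        rank = new
    return rank
```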

An Improved Automatic Text Summarization Based on Lexical Chaining Using Semantical Word Relatedness (단어 간 의미적 연관성을 고려한 어휘 체인 기반의 개선된 자동 문서요약 방법)

  • Cha, Jun Seok;Kim, Jeong In;Kim, Jung Min
    • Smart Media Journal / v.6 no.1 / pp.22-29 / 2017
  • With the rapid advancement and spread of smart devices, document data on the Internet is increasing sharply. The growth of information on the Web, including a massive number of documents, makes it increasingly difficult for users to digest this data, and various studies on automatic summarization programs are under way to summarize documents efficiently. This study uses the TextRank algorithm to summarize documents. TextRank represents sentences or keywords as a graph and estimates the importance of sentences from its vertices and edges, which capture the semantic relations between words and sentences. It extracts high-ranking keywords and, based on those keywords, extracts important sentences. To do so, the algorithm first groups words using a specific weighting scheme, selects the sentences with higher weight scores, and from those selected sentences extracts the important sentences that summarize the document. Our experiments confirmed that this process performs better than the summarization methods reported in previous research and that the algorithm can summarize documents more efficiently.
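
A minimal sketch of the TextRank step described above, with sentences as vertices, word-overlap similarity as edge weights, and a PageRank-style update for sentence importance; the lexical-chain grouping and weighting scheme of the paper are not reproduced here.

```python
import math

def similarity(s1: str, s2: str) -> float:
    """Word-overlap similarity normalized by sentence length (TextRank-style)."""
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    if len(w1) < 2 or len(w2) < 2:
        return 0.0
    return len(w1 & w2) / (math.log(len(w1)) + math.log(len(w2)))

def textrank(sentences: list[str], d: float = 0.85, iters: int = 30) -> list[str]:
    n = len(sentences)
    w = [[similarity(sentences[i], sentences[j]) if i != j else 0.0
          for j in range(n)] for i in range(n)]
    out_sum = [sum(row) or 1.0 for row in w]
    score = [1.0] * n
    for _ in range(iters):                         # PageRank-style iteration
        score = [(1 - d) + d * sum(w[j][i] * score[j] / out_sum[j] for j in range(n))
                 for i in range(n)]
    top = sorted(range(n), key=lambda i: score[i], reverse=True)[:3]
    return [sentences[i] for i in sorted(top)]     # keep original sentence order
```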

Generic Text Summarization Using Non-negative Matrix Factorization (비음수 행렬 인수분해를 이용한 일반적 문서 요약)

  • Park Sun;Lee Ju-Hong;Ahn Chan-Min;Park Tae-Su;Kim Ja-Woo;Kim Deok-Hwan
    • Proceedings of the Korea Information Processing Society Conference / 2006.05a / pp.469-472 / 2006
  • This paper proposes a new method that summarizes a document by extracting sentences using non-negative matrix factorization (NMF). Because the semantic features used for sentence extraction take non-negative values, the proposed method summarizes the content of a document more accurately than latent semantic analysis. In addition, it can extract summary sentences easily and at low computational cost.
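
A minimal sketch of generic NMF-based summarization, assuming scikit-learn: factorize the term-sentence matrix as A ≈ WH and, for each non-negative semantic feature (row of H), select the sentence with the largest weight. The paper's exact sentence-selection rule may differ in detail.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import NMF

def nmf_summarize(sentences: list[str], n_topics: int = 3) -> list[str]:
    # term-sentence matrix: rows are terms, columns are sentences
    A = CountVectorizer().fit_transform(sentences).T
    model = NMF(n_components=n_topics, random_state=0, max_iter=500)
    W = model.fit_transform(A)          # terms x semantic features
    H = model.components_               # semantic features x sentences
    # for each semantic feature, take the sentence with the largest weight
    picked = sorted({int(H[k].argmax()) for k in range(n_topics)})
    return [sentences[i] for i in picked]
```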

Text Understanding System for Summarization (텍스트 이해 모델에 기반한 정보 검색 시스템)

  • Song, In-Seok;Park, Hyuk-Ro
    • Annual Conference on Human and Language Technology / 1997.10a / pp.1-6 / 1997
  • In this paper, we present a cognitive text understanding model and implement an automatic summarization system based on it. A document is not a mere collection of information but a structured form of linguistic expression: it conveys information through the semantic content of words together with the style of expression, the structure of sentences, and the organization of the document. For text understanding and analysis aimed at summarization, we analyzed manual summaries of 1,000 economics news articles and established an understanding model, and from test results on those 1,000 articles we analyzed which information from inter-sentence relations and document structure is used to extract summary information. Compared with statistical models that rely on word frequency, this text understanding model generates information by finding relations between words, extracting topic sentences based on document-structure information, and making effective use of inter-sentence relations. Furthermore, by systematically linking the summarization knowledge used during text understanding with structural analysis information, it compensates for the content-adequacy problems that arise in automatic information extraction.

Sentence Abstraction: A Sentence Revision Methodology for Text Summarization (문장추상화: 문서요약을 위한 문장교열 방법론)

  • Kim, Gon;Bae, Jae-Hak J.
    • Proceedings of the Korean Society for Cognitive Science Conference / 2002.05a / pp.51-56 / 2002
  • In this paper, we consider sentence abstraction as a sentence revision methodology for text summarization. The factors that serve as criteria for sentence abstraction were selected from the information produced by a syntactic parser and from the ontological information of the sentence constituents. For sentence abstraction we used OfN, an ontology based on Roget's thesaurus, the syntactic parser LGPI+, and SABOT, a sentence abstractor that makes use of them. This paper shows that sentence abstraction is viable as a sentence revision method for text summarization.
