• Title/Summary/Keyword: Document Summary


텍스트 구성요소 판별 기법과 자질을 이용한 문서 요약 시스템의 개발 및 평가 (Development and Evaluation of a Document Summarization System using Features and a Text Component Identification Method)

  • 장동현;맹성현
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 27, No. 6 / pp.678-689 / 2000
  • This paper describes an automatic summarization technique that produces a summary by extracting the sentences that convey the main content of a document. The developed system is a model that composes the summary using lexical and statistical information extracted from a document collection. The system consists of two main parts: a training phase and a summarization phase. The training phase extracts various kinds of statistical information from manually written summary sentences, and the summarization phase uses the information obtained during training to compute the probability that each sentence should be included in the summary. This work makes three main contributions. First, the system uses a text component identification model that classifies each sentence as one type of text component, which has the effect of removing in advance sentences that have no chance of being included in the summary. Second, although the system extends an English-based system, each feature is applied to summarization independently, and the Dempster-Shafer rule is used to combine the probability values from the different features into the final probability that a sentence belongs to the summary. Third, new features not used in existing systems are introduced, and experiments measure the effect of each feature on the performance of the summarization system.

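The Dempster-Shafer combination step described in this abstract can be illustrated with a minimal sketch. The paper's exact formulation is not reproduced here; the code below assumes a two-element frame {include, exclude} in which each feature contributes a simple probability for "include", and the feature values in the example are hypothetical.

```python
def combine_two(p1, p2):
    """Dempster's rule on the frame {include, exclude} with simple (Bayesian) masses.

    Each p is the mass one feature assigns to 'include'; 1 - p goes to 'exclude'.
    Conflicting mass is renormalized away.
    """
    agree_include = p1 * p2
    agree_exclude = (1 - p1) * (1 - p2)
    conflict = 1.0 - agree_include - agree_exclude
    if conflict >= 1.0:  # total conflict: combination is undefined
        raise ValueError("total conflict between evidence sources")
    return agree_include / (1.0 - conflict)


def combine_features(probs):
    """Fold Dempster's rule over per-feature 'include in summary' probabilities."""
    combined = probs[0]
    for p in probs[1:]:
        combined = combine_two(combined, p)
    return combined


# Hypothetical per-feature evidence for one sentence
# (e.g. position, cue words, term frequency).
print(combine_features([0.7, 0.6, 0.4]))  # fused probability of inclusion
```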

Text Summarization on Large-scale Vietnamese Datasets

  • Ti-Hon, Nguyen;Thanh-Nghi, Do
    • Journal of information and communication convergence engineering / Vol. 20, No. 4 / pp.309-316 / 2022
  • This investigation is aimed at automatic text summarization on large-scale Vietnamese datasets. Vietnamese articles were collected from newspaper websites and plain text was extracted to build the dataset, which included 1,101,101 documents. Next, a new single-document extractive text summarization model was proposed to evaluate this dataset. In this summary model, the k-means algorithm is used to cluster the sentences of the input document using different text representations, such as BoW (bag-of-words), TF-IDF (term frequency - inverse document frequency), Word2Vec (word-to-vector), GloVe, and FastText. The summary algorithm then uses the trained k-means model to rank the candidate sentences and create a summary with the highest-ranked sentences. The proposed model achieved F1 scores of 51.91% ROUGE-1, 18.77% ROUGE-2, and 29.72% ROUGE-L, compared with 52.33% ROUGE-1, 16.17% ROUGE-2, and 33.09% ROUGE-L for a competitive abstractive model. The advantage of the proposed model is that it performs well with O(n, k, p) = O(n(k + 2/p)) + O(n log₂ n) + O(np) + O(nk²) + O(k) time complexity.
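The clustering-based ranking described above can be approximated with a short sketch. This is not the paper's exact pipeline; it assumes a TF-IDF sentence representation and picks, for each k-means cluster, the sentence closest to the centroid (one common ranking rule), using scikit-learn. The example sentences are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer


def kmeans_summary(sentences, k=3):
    """Cluster sentence vectors and keep the sentence closest to each centroid."""
    X = TfidfVectorizer().fit_transform(sentences).toarray()
    k = min(k, len(sentences))
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    picked = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        picked.append(members[np.argmin(dists)])
    return [sentences[i] for i in sorted(picked)]  # keep original document order


doc = [
    "Vietnamese text summarization needs large datasets.",
    "The dataset was collected from newspaper websites.",
    "K-means clusters sentence vectors built from TF-IDF.",
    "The summary keeps the sentence closest to each centroid.",
]
print(kmeans_summary(doc, k=2))
```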

VAE를 이용한 의미적 연결 관계 기반 다중 문서 요약 기법 (Multi-Document Summarization Method Based on Semantic Relationship using VAE)

  • 백수진
    • 디지털융복합연구 / Vol. 15, No. 12 / pp.341-347 / 2017
  • As the amount of document data grows, users need summarized information in order to understand those documents. However, existing document summarization methods rely on overly simple statistics, and research on multi-document summarization that addresses sentence ambiguity and generates meaningful sentences remains insufficient. This paper proposes a multi-document summarization method that applies a preprocessing step to identify semantic relationships and remove unnecessary information, and then uses a VAE over lexical semantic pattern information to strengthen the semantic connectivity between sentences. Using the word vectors that make up each sentence, the model learns from the compressed information generated as a latent variable and from an attribute discriminator, and then reconstructs the sentences, producing summaries whose semantic connections read naturally. Compared with other document summarization methods, the proposed method showed a small but consistent improvement in performance, demonstrating that it can improve semantic sentence generation and connectivity. In future work, we plan to experiment with various attribute settings to extend the semantic relationships.
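The latent-variable reconstruction step the abstract relies on can be sketched generically. The code below is not the paper's architecture (which also uses an attribute discriminator and lexical semantic patterns); it is a minimal PyTorch VAE over fixed-size sentence vectors showing the reparameterization trick and the reconstruction-plus-KL loss, with hypothetical dimensions.

```python
import torch
import torch.nn as nn


class SentenceVAE(nn.Module):
    """Minimal VAE over fixed-size sentence vectors (a generic sketch,
    not the architecture used in the paper)."""

    def __init__(self, input_dim=300, hidden_dim=128, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.decoder(z), mu, logvar


def vae_loss(recon, x, mu, logvar):
    # reconstruction error plus KL divergence to the standard normal prior
    recon_loss = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl


x = torch.randn(8, 300)  # 8 hypothetical sentence vectors
model = SentenceVAE()
recon, mu, logvar = model(x)
print(vae_loss(recon, x, mu, logvar).item())
```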

문서 입출력 시스템의 구성에 관한 연구 (A Study on the Construction of a Document Input/Output system)

  • 함영국;도상윤;정홍규;김우성;박래홍;이창범;김상중
    • 전자공학회논문지B / Vol. 29B, No. 10 / pp.100-112 / 1992
  • In this paper, an integrated document input/output system is developed which constructs a graphic document from a text file, converts the document into encoded facsimile data, and also recognizes printed/handwritten alphanumerics and Korean characters in a facsimile or graphic document. For the output system, we develop a method that generates bit-map patterns from documents consisting of KSC5601 and ASCII codes. The binary graphic image, if necessary, is encoded with the G3 coding scheme for facsimile transmission. For a user-friendly input system for documents consisting of alphanumerics and Korean characters obtained from a facsimile or scanner, we propose a document recognition algorithm utilizing several special features (partial projection, cross point, and distance features) and the membership function of fuzzy set theory. In summary, we develop an integrated document input/output system and demonstrate its performance via computer simulation.

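The projection-style features mentioned in the abstract can be illustrated roughly. The NumPy sketch below computes row/column projection profiles of a binary glyph image and band-wise (partial) projections; it is only a stand-in for the paper's exact feature definitions, and the sample image is hypothetical.

```python
import numpy as np


def projection_features(glyph, n_bands=4):
    """Row/column projection profiles of a binary glyph image, split into
    bands (a rough stand-in for the paper's partial projection features)."""
    glyph = np.asarray(glyph, dtype=int)
    row_proj = glyph.sum(axis=1)               # black pixels per row
    col_proj = glyph.sum(axis=0)               # black pixels per column
    bands = np.array_split(row_proj, n_bands)  # partial (band-wise) projections
    return row_proj, col_proj, [int(b.sum()) for b in bands]


# Hypothetical 8x8 binary image of a simple stroke pattern
img = np.zeros((8, 8), dtype=int)
img[2:6, 3] = 1   # vertical stroke
img[4, 1:7] = 1   # horizontal stroke
print(projection_features(img))
```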

Document Summarization Model Based on General Context in RNN

  • Kim, Heechan;Lee, Soowon
    • Journal of Information Processing Systems / Vol. 15, No. 6 / pp.1378-1391 / 2019
  • In recent years, automatic document summarization has been widely studied in the field of natural language processing thanks to the remarkable developments made using deep learning models. To decode a word, existing models for abstractive summarization usually represent the context of a document using the weighted hidden states of each input word at that decoding step. Because the weights change at each decoding step, they reflect only the local context of the document. Therefore, it is difficult to generate a summary that reflects the overall context of the document. To solve this problem, we introduce the notion of a general context and propose a summarization model based on it. The general context reflects the overall context of the document and is independent of each decoding step. Experimental results using the CNN/Daily Mail dataset show that the proposed model outperforms existing models.
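The distinction the abstract draws between a step-dependent local context and a step-independent general context can be illustrated with a toy NumPy sketch. The "general context" below is simply an unweighted average of the encoder hidden states, which is one plausible reading and not necessarily the paper's exact definition; all dimensions and values are hypothetical.

```python
import numpy as np


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


# Hypothetical encoder hidden states for a 5-word input, dimension 4.
H = np.random.default_rng(0).normal(size=(5, 4))

# Local context at one decoding step: attention weights depend on the
# current decoder state, so they change at every step.
decoder_state = np.random.default_rng(1).normal(size=4)
alpha = softmax(H @ decoder_state)   # step-dependent attention weights
local_context = alpha @ H

# A step-independent "general context": an unweighted summary of all
# hidden states (an illustrative choice, not the paper's formula).
general_context = H.mean(axis=0)

print(local_context, general_context)
```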

Joint Hierarchical Semantic Clipping and Sentence Extraction for Document Summarization

  • Yan, Wanying;Guo, Junjun
    • Journal of Information Processing Systems / Vol. 16, No. 4 / pp.820-831 / 2020
  • Extractive document summarization aims to select a few sentences from a given document while preserving its main information, but current extractive methods do not consider the sentence-information repetition problem, especially for news document summarization. In view of the importance and redundancy of news text information, in this paper we propose a neural extractive summarization approach with joint sentence semantic clipping and selection, which can effectively solve the problem of repeated sentences in news text summaries. Specifically, a hierarchical selective encoding network is constructed for both sentence-level and document-level document representations, and important information is extracted from the news text; a sentence extractor strategy is then adopted for joint scoring and redundant-information clipping. In this way, our model strikes a balance between important information extraction and redundant information filtering. Experimental results on both the CNN/Daily Mail dataset and a Court Public Opinion News dataset we built show the effectiveness of the proposed approach in terms of ROUGE metrics, especially for redundant information filtering.
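The importance-versus-redundancy trade-off this model targets can be illustrated with a much simpler classical baseline, maximal marginal relevance (MMR) over TF-IDF vectors. This is not the paper's neural method; it only shows the same balancing idea, with a small hypothetical example.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def mmr_select(sentences, n_select=2, lam=0.7):
    """Greedy selection balancing relevance to the document against similarity
    to already-selected sentences (classic MMR; not the paper's model)."""
    X = TfidfVectorizer().fit_transform(sentences)
    doc_vec = np.asarray(X.mean(axis=0))
    relevance = cosine_similarity(X, doc_vec).ravel()
    sim = cosine_similarity(X)
    selected, candidates = [], list(range(len(sentences)))
    while candidates and len(selected) < n_select:
        def score(i):
            redundancy = max(sim[i][j] for j in selected) if selected else 0.0
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return [sentences[i] for i in sorted(selected)]


print(mmr_select([
    "The court released public opinion news.",
    "The court released news about public opinion.",
    "A new hierarchical encoder scores sentences.",
], n_select=2))
```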

Web Service 기반의 휴대용 건강 요약지 보고 시스템 구현 (Implementation of reporting system for continuity of care document based on web service)

  • 김종욱;전소혜;임청묵;박선영;김남현
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2009년도 정보 및 제어 심포지움 논문집 / pp.402-404 / 2009
  • The development of health information technology enables people to access, view, and acquire their personal health records. However, a number of obstacles remain, such as the absence of a standard for realizing an ideal Personal Health Record (PHR) system. In this study, we propose a service model that delivers a periodic health record summary, prepared by a medical specialist, to people with busy lives. Healthcare data from the hospital EMR, together with data people generate themselves at home, are sent to a physician for a medical opinion and then converted into the Health Level 7 Continuity of Care Document (CCD) format for interoperability. After the physician writes an opinion on the patient's health condition, the summary is sent to the patient by email. Recipients can save the summary to a USB device and view their own PHR and the physician's comments on a computer. This helps people manage their own health with the opinion of a medical specialist.


An Innovative Approach of Bangla Text Summarization by Introducing Pronoun Replacement and Improved Sentence Ranking

  • Haque, Md. Majharul;Pervin, Suraiya;Begum, Zerina
    • Journal of Information Processing Systems / Vol. 13, No. 4 / pp.752-777 / 2017
  • This paper proposes an automatic method to summarize Bangla news documents. In the proposed approach, pronoun replacement is performed for the first time to minimize dangling pronouns in the summary. After replacing pronouns, sentences are ranked using term frequency, sentence frequency, numerical figures, and title words. If two sentences have at least 60% cosine similarity, the frequency of the longer sentence is increased and the shorter sentence is removed to eliminate redundancy. Moreover, the first sentence is always included in the summary if it contains any title word. In Bangla text, numerical figures can be presented both in words and in digits in a variety of forms; all of these forms are identified to assess the importance of sentences. We use a rule-based system in this approach together with a hidden Markov model and a Markov chain model. To derive the rules, we analyzed 3,000 Bangla news documents and studied several Bangla grammar books. A series of experiments was performed on 200 Bangla news documents and 600 summaries (3 summaries for each document). The evaluation results demonstrate the effectiveness of the proposed technique over the four latest methods.
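The 60% cosine-similarity redundancy rule described above can be sketched directly; the term-frequency ranking, pronoun replacement, and Bangla-specific processing are omitted, and the example sentences are hypothetical.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def drop_redundant(sentences, threshold=0.6):
    """If two sentences are at least 60% cosine-similar, keep the longer one
    (a simplified sketch of the redundancy rule described in the abstract)."""
    X = CountVectorizer().fit_transform(sentences)
    sim = cosine_similarity(X)
    removed = set()
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if i in removed or j in removed:
                continue
            if sim[i, j] >= threshold:
                shorter = i if len(sentences[i]) < len(sentences[j]) else j
                removed.add(shorter)
    return [s for k, s in enumerate(sentences) if k not in removed]


print(drop_redundant([
    "The prime minister announced the new budget today.",
    "The prime minister announced the budget today.",
    "Floods affected thousands of people in the north.",
]))
```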

Latent Semantic Analysis Approach for Document Summarization Based on Word Embeddings

  • Al-Sabahi, Kamal;Zuping, Zhang;Kang, Yang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 1 / pp.254-276 / 2019
  • Since the amount of information on the internet is growing rapidly, it is not easy for a user to find information relevant to his or her query. To tackle this issue, researchers are paying much attention to document summarization. The key point in any successful document summarizer is a good document representation. Traditional approaches based on word overlap mostly fail to produce that kind of representation. Word embeddings have shown good performance by allowing words to match on a semantic level. Naively concatenating word embeddings makes common words dominant, which in turn diminishes the representation quality. In this paper, we employ word embeddings to improve the weighting schemes for calculating the Latent Semantic Analysis input matrix. Two embedding-based weighting schemes are proposed and then combined to calculate the values of this matrix. They are modified versions of the augment weight and the entropy frequency that combine the strengths of traditional weighting schemes and word embeddings. The proposed approach is evaluated on three English datasets: DUC 2002, DUC 2004, and Multilingual 2015 Single-document Summarization. Experimental results on the three datasets show that the proposed model achieves competitive performance compared to the state of the art, leading to the conclusion that it provides a better document representation and, as a result, a better document summary.
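The LSA pipeline that the proposed weighting schemes feed into can be sketched with a plain TF-IDF baseline (Gong and Liu style: SVD of the term-sentence matrix, then one top-scoring sentence per latent topic). The sketch below does not implement the paper's embedding-based augment-weight or entropy-frequency schemes; it only shows where such weights would enter, and the example sentences are hypothetical.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer


def lsa_summary(sentences, n_topics=2):
    """Plain LSA extractive summary: SVD of a term-sentence matrix, then one
    top-scoring sentence per latent topic (baseline TF-IDF weighting, not the
    embedding-based weights proposed in the paper)."""
    A = TfidfVectorizer().fit_transform(sentences).T.toarray()  # terms x sentences
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    n_topics = min(n_topics, Vt.shape[0])
    picked = {int(np.argmax(np.abs(Vt[t]))) for t in range(n_topics)}
    return [sentences[i] for i in sorted(picked)]


print(lsa_summary([
    "Word embeddings let words match on a semantic level.",
    "Naively concatenating embeddings makes common words dominant.",
    "The weighting scheme feeds the latent semantic analysis input matrix.",
    "The model is evaluated on DUC 2002 and DUC 2004.",
], n_topics=2))
```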

문장 수반 관계를 고려한 문서 요약 (Document Summarization Considering Entailment Relation between Sentences)

  • 권영대;김누리;이지형
    • 정보과학회 논문지 / Vol. 44, No. 2 / pp.179-185 / 2017
  • A document summary should consist of sentences that connect to one another coherently and read as a single well-organized text. To achieve this, this paper considers both similarity and entailment relations between sentences so that sentences with strong relevance and high semantic and conceptual connectivity can be extracted from the document. We propose TextRank-NLI, a new algorithm for single-document extractive summarization that combines a recurrent neural network based sentence-relation inference model with a graph-based ranking algorithm. To evaluate the new algorithm, we compared its performance against the existing summarization algorithm TextRank on the same dataset; the proposed method performed about 2.3% better than the existing algorithm.
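The graph-based ranking half of TextRank-NLI can be sketched as plain TextRank over a cosine-similarity sentence graph; the RNN-based entailment (NLI) component is not modeled here, and the example sentences are hypothetical.

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def textrank_summary(sentences, n_select=2):
    """Plain TextRank: PageRank over a cosine-similarity sentence graph.
    The entailment (NLI) scores used by TextRank-NLI are not modeled here."""
    X = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(X)
    graph = nx.from_numpy_array(sim)
    scores = nx.pagerank(graph, weight="weight")
    top = sorted(scores, key=scores.get, reverse=True)[:n_select]
    return [sentences[i] for i in sorted(top)]


print(textrank_summary([
    "Summary sentences should connect to each other.",
    "Entailment relations link semantically connected sentences.",
    "Graph-based ranking scores each sentence in the document.",
], n_select=2))
```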