• Title/Summary/Keyword: Rouge

Search results: 79

Characteristic of Film Music by Director Baz Luhrmann: Focusing on the Movies <Dancing Hero>, <Romeo and Juliet>, and <Moulin Rouge> (바즈루어만 감독의 영화음악 특징 : 영화 <댄싱히어로>, <로미오와 줄리엣>, <물랑루즈>를 중심으로)

  • Kim, Youn-Sik;Kim, Young-Sam
    • Journal of Korea Entertainment Industry Association
    • /
    • v.13 no.8
    • /
    • pp.223-230
    • /
    • 2019
  • This paper examines the characteristics of the film music of Hollywood director Baz Luhrmann, focusing on his representative works Dancing Hero, Romeo and Juliet, and Moulin Rouge. First, Dancing Hero captures various dance music genres through dynamic shooting techniques and conveys a trendy sensibility with the main theme song 'Time after Time,' sung by the main character, Tina. Second, Romeo and Juliet preserves the lines and story of Shakespeare's original while dressing it in lavish fashion and jukebox-style rock music, harmonized with modern, trendy MTV-style visuals. Third, Moulin Rouge presents its film music through a 'mix and match' method that combines jukebox-style popular songs with the imagery of the classical backstage musical and the Bollywood musical. In conclusion, Baz Luhrmann's film music direction constitutes a distinctive style in which varied jukebox-style music is tied to striking visual effects. This directorial style can suggest new directions for film music in the industry.

Leaching of Trifluralin in the Commerce Clay Loam Soil (토양 중 Trifluralin의 용탈)

  • Kim, Jung-Ho
    • Korean Journal of Environmental Agriculture
    • /
    • v.15 no.4
    • /
    • pp.464-471
    • /
    • 1996
  • Trifluralin was selected to study its leaching potential, and the related pollution risk, in Commerce silty clay loam soil near Baton Rouge, Louisiana, USA. Batch equilibrium experiments with trifluralin yielded a Koc value of 875. When soil columns (5.4 cm i.d. × 26 cm length) were leached with three pore volumes of water, the distributions of trifluralin in soil and leachate were 99.993% and 0.007% of the total recoveries, respectively. When applied at a rate of 1,683 g/ha in the field, the amount of trifluralin within the 0~10 cm soil depth was 96.9% of that within the 0~60 cm soil depth 31 days after application. The concentrations of trifluralin detected in 1-m and 2-m deep wells during the 62 days after application ranged from 0.04 ng/mL to 0.08 ng/mL, below the U.S. EPA drinking water advisory level of 2.0 ng/mL. Trifluralin was strongly adsorbed to soil and hardly reached groundwater. The leaching behavior of trifluralin predicted for the field concurred with that observed in the columns.

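For context on the Koc value cited above: Koc normalizes the soil-water distribution coefficient by the soil's organic-carbon fraction. The abstract reports only the resulting value (875); the relation below is the standard definition, shown here as a reference sketch rather than the study's own calculation.

```latex
% Standard organic-carbon-normalized sorption coefficient (reference only;
% K_d and f_{oc} for this soil are not given in the abstract).
K_{oc} = \frac{K_d}{f_{oc}}, \qquad
K_d = \frac{C_{\text{sorbed}}}{C_{\text{solution}}}\ \left[\mathrm{mL\,g^{-1}}\right]
```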

Building a Korean Text Summarization Dataset Using News Articles of Social Media (신문기사와 소셜 미디어를 활용한 한국어 문서요약 데이터 구축)

  • Lee, Gyoung Ho;Park, Yo-Han;Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.8
    • /
    • pp.251-258
    • /
    • 2020
  • A training dataset for text summarization consists of pairs of a document and its summary. Because conventional approaches to building such datasets are labor-intensive, it is not easy to construct large datasets for text summarization. A collection of news articles is one of the most popular resources for text summarization because it is easily accessible, large-scale, and of high quality. From social media news services, we can collect not only the headlines and subheads of news articles but also the summary descriptions that human editors write about them. Approximately 425,000 pairs of news articles and their summaries were collected from social media. We implemented an automatic extractive summarizer and trained it on this dataset. Its performance was compared with unsupervised models, and the summarizer achieved better results in terms of ROUGE score.
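
Several of the studies listed here report results as ROUGE scores. As a rough illustration of what that metric measures (n-gram overlap between a system summary and a reference), below is a minimal, self-contained Python sketch of ROUGE-1 F1; it is not the evaluation code used in any of these papers, and real evaluations typically rely on an established ROUGE implementation.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Compute a simple ROUGE-1 F1: clipped unigram overlap between candidate and reference."""
    cand_counts = Counter(candidate.lower().split())
    ref_counts = Counter(reference.lower().split())
    # Clipped overlap: each reference token can be matched at most as often as it appears.
    overlap = sum(min(cand_counts[t], ref_counts[t]) for t in cand_counts)
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    ref = "the summarizer achieved better results than unsupervised models"
    hyp = "the summarizer achieved better rouge results"
    print(f"ROUGE-1 F1: {rouge1_f1(hyp, ref):.3f}")
```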

Single Document Extractive Summarization Based on Deep Neural Networks Using Linguistic Analysis Features (언어 분석 자질을 활용한 인공신경망 기반의 단일 문서 추출 요약)

  • Lee, Gyoung Ho;Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.8
    • /
    • pp.343-348
    • /
    • 2019
  • In recent years, extractive summarization systems based on end-to-end deep learning models have become popular. These systems do not require human-crafted features and adopt data-driven approaches. However, previous studies have shown that linguistic analysis features such as parts of speech, named entities, and word frequencies are useful for extracting the important sentences of a document to generate a summary. In this paper, we propose an extractive summarization system based on deep neural networks that uses conventional linguistic analysis features. To demonstrate the usefulness of these features, we compare models with and without them. The experimental results show that the model with the linguistic analysis features improves the ROUGE-2 F1 score by 0.5 points over the model without them.
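
The abstract above does not specify how the linguistic features are encoded, so the following Python sketch only illustrates the general idea of augmenting a sentence representation with hand-crafted features (POS counts, named-entity counts, word frequency). The tagger and NER outputs are assumed to come from some external analyzer and are faked here with placeholder lists.

```python
from collections import Counter
from typing import List

def sentence_features(tokens: List[str],
                      pos_tags: List[str],
                      ner_tags: List[str],
                      doc_term_freq: Counter) -> List[float]:
    """Hand-crafted features for one sentence: length, noun/verb ratios,
    named-entity count, and average document-level term frequency."""
    n = max(len(tokens), 1)
    noun_ratio = sum(t.startswith("N") for t in pos_tags) / n
    verb_ratio = sum(t.startswith("V") for t in pos_tags) / n
    entity_count = sum(t != "O" for t in ner_tags)
    avg_tf = sum(doc_term_freq[w] for w in tokens) / n
    return [float(len(tokens)), noun_ratio, verb_ratio, float(entity_count), avg_tf]

# Toy usage: in a real system these tags would come from a (Korean) POS tagger / NER model.
tokens = ["model", "improves", "ROUGE", "score"]
pos = ["NN", "VB", "NNP", "NN"]
ner = ["O", "O", "B-METRIC", "O"]
doc_tf = Counter(tokens * 2)
print(sentence_features(tokens, pos, ner, doc_tf))
```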

Multi-Document Summarization Method of Reviews Using Word Embedding Clustering (워드 임베딩 클러스터링을 활용한 리뷰 다중문서 요약기법)

  • Lee, Pil Won;Hwang, Yun Young;Choi, Jong Seok;Shin, Young Tae
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.11
    • /
    • pp.535-540
    • /
    • 2021
  • A multi-document is a document that covers various topics rather than a single one; online reviews are a typical example. There have been several attempts to summarize online reviews because of the vast amount of information they contain. However, summarizing reviews collectively with existing summarization models loses the various topics that make up the reviews. Therefore, in this paper we present a method for summarizing reviews with minimal topic loss. The proposed method groups review sentences through preprocessing, importance evaluation, embedding substitution using BERT, and embedding clustering. The grouped sentences are then summarized with a trained Transformer summarization model to produce the final summary. The proposed model was evaluated against an existing summarization model and a seq2seq model using cosine similarity and ROUGE scores, and it produced higher-quality summaries than the existing models.
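
The clustering step described above is not detailed in the abstract; the sketch below shows one plausible way to cluster review sentences by embedding, assuming a sentence-embedding function (a hypothetical embed() stub here) and scikit-learn's KMeans, then picking one representative sentence per cluster as input to a downstream summarizer.

```python
import numpy as np
from sklearn.cluster import KMeans

def embed(sentences):
    """Placeholder: in practice this would be a BERT-style sentence encoder."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(sentences), 768))

def cluster_representatives(sentences, n_clusters=3):
    """Cluster sentence embeddings and return the sentence closest to each centroid."""
    X = embed(sentences)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    reps = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X[idx] - km.cluster_centers_[c], axis=1)
        reps.append(sentences[idx[np.argmin(dists)]])
    return reps  # each representative (or its whole cluster) is then summarized separately

reviews = ["Battery life is great.", "Shipping was slow.", "The screen is sharp.",
           "Arrived two weeks late.", "Display colors look amazing.", "Lasts all day on a charge."]
print(cluster_representatives(reviews))
```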

Improving the effectiveness of document extraction summary based on the amount of sentence information (문장 정보량 기반 문서 추출 요약의 효과성 제고)

  • Kim, Eun Hee;Lim, Myung Jin;Shin, Ju Hyun
    • Smart Media Journal
    • /
    • v.11 no.3
    • /
    • pp.31-38
    • /
    • 2022
  • In extractive document summarization research, various methods have been proposed for selecting important sentences based on the relationships between sentences. In Korean document summarization using the summed similarity of sentences, this summed similarity is treated as the amount of information a sentence carries, and summary sentences are extracted by selecting important sentences on that basis. The problem, however, is that this does not account for how much each sentence contributes to the document as a whole. Therefore, in this study we propose an extractive summarization method that selects important sentences based on both the quantitative and the semantic information in each sentence. As a result, the agreement of the extracted sentences was 58.56% and the ROUGE-L score was 34, which was superior to the method using summed similarity alone. Compared to deep learning-based methods, this extractive method is lighter while achieving similar performance. This confirms that compressing information based on semantic similarity between sentences is an important approach in extractive document summarization. In addition, a quickly extracted summary of this kind can serve as an effective basis for a subsequent abstractive summarization step.
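
As a rough illustration of the "summed similarity as sentence information" idea mentioned above (not the authors' implementation), the Python sketch below scores each sentence by the sum of its cosine similarities to every other sentence, using TF-IDF vectors from scikit-learn, and extracts the top-k sentences in their original order.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extract_summary(sentences, k=2):
    """Score each sentence by its summed cosine similarity to the others, keep the top k."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)
    np.fill_diagonal(sim, 0.0)                      # ignore self-similarity
    scores = sim.sum(axis=1)                        # "amount of information" per sentence
    top = sorted(int(i) for i in np.argsort(scores)[-k:])  # keep original sentence order
    return [sentences[i] for i in top]

doc = ["The river flooded the town last spring.",
       "Flood damage to the town was extensive.",
       "Officials plan new levees along the river.",
       "A local bakery won a national award."]
print(extract_summary(doc, k=2))
```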

Text summarization of dialogue based on BERT

  • Nam, Wongyung;Lee, Jisoo;Jang, Beakcheol
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.8
    • /
    • pp.41-47
    • /
    • 2022
  • In this paper, we show how to implement text summarization for colloquial data that is not clearly organized. The SAMSum dataset, which consists of colloquial data, was used, and the BERTSumExtAbs model proposed in earlier work on automatic summarization was applied. More than 70% of the SAMSum dataset consists of conversations between two people, and the remaining 30% of conversations between three or more people. By applying the automatic text summarization model to this colloquial data, a ROUGE-1 score of 42.43 or higher was obtained. In addition, a higher score of 45.81 was obtained by fine-tuning the BERTSum model, which was previously proposed as a text summarization model. This study demonstrates the performance of abstractive summarization on colloquial text, and we hope it serves as a basis for systems that understand natural, unedited human language across various tasks.
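
The ROUGE-1 figures quoted above come from standard evaluation tooling. As a hedged illustration only, the Python sketch below scores a naive "first two utterances" baseline on a single SAMSum-style dialogue with Google's rouge-score package (pip install rouge-score); it is not the paper's BERTSumExtAbs pipeline.

```python
from rouge_score import rouge_scorer

# A SAMSum-style example: a short chat plus a human-written abstractive summary.
dialogue = ["Amanda: I baked cookies. Do you want some?",
            "Jerry: Sure!",
            "Amanda: I'll bring you tomorrow :-)"]
reference = "Amanda baked cookies and will bring Jerry some tomorrow."

# Naive baseline: take the first two utterances as the "summary".
prediction = " ".join(dialogue[:2])

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, prediction)
for name, s in scores.items():
    print(f"{name}: P={s.precision:.3f} R={s.recall:.3f} F1={s.fmeasure:.3f}")
```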

Semantic Pre-training Methodology for Improving Text Summarization Quality (텍스트 요약 품질 향상을 위한 의미적 사전학습 방법론)

  • Mingyu Jeon;Namgyu Kim
    • Smart Media Journal
    • /
    • v.12 no.5
    • /
    • pp.17-27
    • /
    • 2023
  • Automatic text summarization, which condenses a document to only the information that is meaningful to users, has been studied steadily in recent years, with much of the research built on the Transformer neural network architecture. Among various approaches, the GSG (Gap Sentence Generation) method, which pre-trains a model by masking whole sentences, has received the most attention. However, traditional GSG is limited in that it selects the sentences to mask based on token overlap rather than sentence meaning. Therefore, to improve summarization quality, this study proposes SbGSG (Semantic-based GSG), a methodology that selects the sentences GSG masks by considering their meaning. Experiments using 370,000 news articles and 21,600 summaries and reports confirmed that the proposed SbGSG outperforms traditional GSG in terms of ROUGE and BERTScore.
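
The abstract does not spell out how the "semantic" selection works. As an assumption-laden sketch only, the Python code below contrasts a token-overlap criterion (roughly what PEGASUS-style GSG uses) with a semantic criterion that ranks sentences by cosine similarity between a sentence embedding and the embedding of the rest of the document; the embed() function is a placeholder for a real sentence encoder.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder sentence encoder; a real system would use a pretrained model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=128)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def select_sentences_to_mask(sentences, k=1, semantic=True):
    """Pick k 'gap' sentences: by embedding similarity to the rest of the document
    (semantic=True) or by simple token overlap (semantic=False)."""
    scores = []
    for i, s in enumerate(sentences):
        rest = " ".join(sentences[:i] + sentences[i + 1:])
        if semantic:
            scores.append(cosine(embed(s), embed(rest)))
        else:
            s_tok, r_tok = set(s.lower().split()), set(rest.lower().split())
            scores.append(len(s_tok & r_tok) / max(len(s_tok), 1))
    return sorted(int(i) for i in np.argsort(scores)[-k:])  # indices of sentences to mask

doc = ["The central bank raised interest rates.",
       "Markets reacted sharply to the decision.",
       "A new cafe opened downtown."]
print(select_sentences_to_mask(doc, k=1, semantic=True))
```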

Summarization of Korean Dialogues through Dialogue Restructuring (대화문 재구조화를 통한 한국어 대화문 요약)

  • Eun Hee Kim;Myung Jin Lim;Ju Hyun Shin
    • Smart Media Journal
    • /
    • v.12 no.11
    • /
    • pp.77-85
    • /
    • 2023
  • Since COVID-19, communication through online platforms has increased, leading to an accumulation of massive amounts of conversational text data. As summarizing this data to extract meaningful information becomes more important, deep learning-based abstractive summarization has been actively researched. However, compared to structured texts such as news articles, conversational data often contains missing or transformed information and must be handled from multiple perspectives because of these characteristics. In particular, omitted vocabulary and unrelated expressions in a conversation can hinder effective summarization. In this study, we therefore restructured dialogues in view of the characteristics of Korean conversational data, fine-tuned a pre-trained KoBART-based summarization model, and improved dialogue summarization performance with a refining step that removes redundant elements from the summary. We combined restructuring of the sentences according to utterance order with extraction of a central speaker, reorganizing the conversation around that speaker. As a result, the ROUGE-1 score improved by about 4 points. This study demonstrates that our dialogue restructuring approach, which takes the characteristics of conversation into account, enhances Korean dialogue summarization performance.
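
The exact restructuring procedure is not given in the abstract. The Python sketch below is only a guess at the general idea: keep utterance order, identify the most frequent ("central") speaker, and rewrite each turn into a speaker-attributed sentence before passing the text to a summarizer such as KoBART.

```python
from collections import Counter
from typing import List, Tuple

def restructure_dialogue(turns: List[Tuple[str, str]]) -> str:
    """Turn (speaker, utterance) pairs into a single text, keeping utterance order
    and tagging the most frequent speaker as the central one."""
    central = Counter(s for s, _ in turns).most_common(1)[0][0]
    lines = []
    for speaker, utt in turns:
        tag = "central speaker" if speaker == central else "speaker"
        lines.append(f"{speaker} ({tag}): {utt.strip()}")
    return " ".join(lines)  # this string would be the input to a KoBART-style summarizer

dialogue = [("Mina", "Did you book the meeting room?"),
            ("Jun", "Not yet, I will do it now."),
            ("Mina", "Please make it 3 pm."),
            ("Mina", "And invite the design team.")]
print(restructure_dialogue(dialogue))
```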

Denoising Response Generation for Learning Korean Conversational Model (한국어 대화 모델 학습을 위한 디노이징 응답 생성)

  • Kim, Tae-Hyeong;Noh, Yunseok;Park, Seong-Bae;Park, Se-Yeong
    • Annual Conference on Human and Language Technology
    • /
    • 2017.10a
    • /
    • pp.29-34
    • /
    • 2017
  • A chatbot, or dialogue system, is a system that produces an appropriate response to a given question or utterance, and it is one of the most actively studied topics in natural language processing. Recently, deep learning sequence-to-sequence frameworks have been widely used to train dialogue models. However, models trained in this way often fail to respond well to the diverse forms of queries that do not appear in the training data. To address this problem, this paper proposes a denoising response generation model. The proposed method exposes the model during training to queries into which various forms of noise have been randomly injected, yielding a model capable of robust response generation. To demonstrate the effectiveness of the proposed method, experiments were conducted on Korean dialogue data consisting of 90,000 query-response pairs. The results confirmed that the proposed method outperforms the baseline models in both quantitative evaluation (ROUGE score) and qualitative human evaluation.

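The abstract describes training on noise-injected queries but not the specific noise operations. The Python sketch below illustrates one plausible set of perturbations (random token dropping and swapping of adjacent tokens), offered as an assumption rather than the authors' recipe, applied to a query before it is fed to a sequence-to-sequence trainer.

```python
import random
from typing import List

def add_noise(tokens: List[str], drop_prob: float = 0.1, swap_prob: float = 0.1,
              rng: random.Random = random.Random(0)) -> List[str]:
    """Randomly drop tokens and swap adjacent tokens to simulate noisy queries."""
    kept = [t for t in tokens if rng.random() > drop_prob] or tokens[:1]
    out = kept[:]
    for i in range(len(out) - 1):
        if rng.random() < swap_prob:
            out[i], out[i + 1] = out[i + 1], out[i]
    return out

query = "what time does the next train to busan leave".split()
# During training, the (noisy query, original response) pair is fed to the seq2seq model.
print(" ".join(add_noise(query)))
```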