• Title/Summary/Keyword: speech summarization

Investigating an Automatic Method for Summarizing and Presenting a Video Speech Using Acoustic Features (음향학적 자질을 활용한 비디오 스피치 요약의 자동 추출과 표현에 관한 연구)

  • Kim, Hyun-Hee
    • Journal of the Korean Society for Information Management, v.29 no.4, pp.191-208, 2012
  • Two fundamental aspects of speech summary generation are the extraction of key speech content and the style of presentation of the extracted speech synopses. We first investigated whether acoustic features (speaking rate, pitch pattern, and intensity) are equally important and, if not, which one can be effectively modeled to compute the significance of segments for lecture summarization. We found that intensity (that is, the difference between max dB and min dB) is the most effective factor for speech summarization. We evaluated this intensity-based method against a keyword-based method, in terms of which method produces better speech summaries and how similar the weight values the two methods assign to segments are. We then investigated how to present speech summaries to viewers. In sum, for speech summarization, we suggest how to efficiently extract key segments from a speech video using acoustic features and how to present the extracted segments to viewers.
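
As an illustration of the intensity criterion in this abstract, the sketch below scores fixed-length segments by their dynamic range (max dB minus min dB) and ranks them for extraction. It is a minimal sketch assuming 16 kHz mono PCM audio in a NumPy array; the function names, segment length, and frame size are illustrative choices, not values from the paper.

```python
# A minimal sketch: score fixed-length segments by dynamic range
# (max dB minus min dB) and pick the top-scoring ones as the summary.
# Assumes 16 kHz mono PCM in a NumPy array; names are illustrative.
import numpy as np

def segment_intensity_scores(samples, sr=16000, seg_sec=10.0, frame=1024, eps=1e-10):
    seg_len = int(sr * seg_sec)
    scores = []
    for start in range(0, len(samples) - seg_len + 1, seg_len):
        seg = samples[start:start + seg_len]
        n_frames = len(seg) // frame
        frames = seg[:n_frames * frame].reshape(n_frames, frame)
        rms = np.sqrt((frames ** 2).mean(axis=1))    # frame-level RMS energy
        db = 20.0 * np.log10(rms + eps)              # convert to decibels
        scores.append(db.max() - db.min())           # dynamic range in dB
    return np.asarray(scores)

# Rank segments; the top-k form the extractive speech summary:
# summary_idx = np.argsort(segment_intensity_scores(audio))[::-1][:5]
```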

Investigating an Automatic Method in Summarizing a Video Speech Using User-Assigned Tags (이용자 태그를 활용한 비디오 스피치 요약의 자동 생성 연구)

  • Kim, Hyun-Hee
    • Journal of the Korean Society for Library and Information Science, v.46 no.1, pp.163-181, 2012
  • We investigated how useful video tags were in summarizing video speech and how valuable positional information was for speech summarization. Furthermore, we examined the similarity among sentences selected for a speech summary in order to reduce its redundancy. Based on these analysis results, we designed and evaluated a method for automatically summarizing speech transcripts using a modified Maximum Marginal Relevance model. This model not only reduced redundancy but also enabled the use of social tags, title words, and sentence positional information. Finally, we compared the proposed method to the Extractor system, in which key sentences of a video speech are chosen using the frequency and location information of speech content words. Results showed that the precision and recall rates of the proposed method were higher than those of the Extractor system, although the difference in recall rates was not statistically significant.
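
For reference, the following is a minimal sketch of Maximum Marginal Relevance selection modified along the lines the abstract describes: relevance is computed against a query built from social tags and title words, a positional bonus favors early sentences, and a redundancy penalty suppresses near-duplicates. The TF-IDF representation, weights, and lambda value are assumptions for illustration.

```python
# A minimal sketch of modified MMR sentence selection. The query is
# built from social tags and title words; the positional bonus and
# the lambda trade-off value are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mmr_summary(sentences, tags_and_title, k=5, lam=0.7, pos_weight=0.1):
    vec = TfidfVectorizer().fit(sentences + [tags_and_title])
    S = vec.transform(sentences)                 # sentence vectors
    q = vec.transform([tags_and_title])          # tag/title query vector
    rel = cosine_similarity(S, q).ravel()
    # Positional information: earlier sentences get a small bonus.
    rel += pos_weight * np.linspace(1.0, 0.0, len(sentences))
    selected = []
    while len(selected) < min(k, len(sentences)):
        best, best_score = None, -np.inf
        for i in range(len(sentences)):
            if i in selected:
                continue
            # Redundancy = similarity to the closest already-chosen sentence.
            red = cosine_similarity(S[i], S[selected]).max() if selected else 0.0
            score = lam * rel[i] - (1.0 - lam) * red
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return [sentences[i] for i in sorted(selected)]
```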

Improvement of MP3-Based Music Summarization Using Linear Regression (선형 근사를 통한 MP3 음악 요약의 성능 향상)

  • Koh, Seo-Young;Park, Jeong-Sik;Oh, Yung-hwan
    • Proceedings of the KSPS conference, 2005.11a, pp.55-58, 2005
  • Music summarization extracts the representative section of a song, such as the chorus or motif. In previous work, the length of the music summary was fixed, and the threshold used to determine the chorus section was so sensitive that tuning was required. In addition, rapid changes of rhythm or variations in sound effects caused chorus-extraction errors. We suggest linear regression for extracting summaries of variable length and for minimizing the effects of threshold variation. Experimental results show that the proposed method outperforms the conventional one.
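
The paper's exact formulation is not reproduced in the abstract, but the general idea of replacing a hard threshold with windowed linear fits can be sketched as follows: a candidate chorus is the longest run of windows whose fitted slope is near zero and whose mean repetition score is above average, which yields a span of variable length. The window size and slope tolerance are illustrative assumptions.

```python
# A generic sketch (not the paper's exact method): windowed linear
# fits over a per-frame repetition-score curve; the chorus candidate
# is the longest near-flat, above-average run, so its length is not
# fixed in advance. Window and slope tolerance are assumptions.
import numpy as np

def chorus_span(scores, win=20, flat_slope=0.005):
    scores = np.asarray(scores, dtype=float)
    flat = []
    for i in range(len(scores) - win):
        slope, _ = np.polyfit(np.arange(win), scores[i:i + win], 1)
        flat.append(abs(slope) < flat_slope and scores[i:i + win].mean() > scores.mean())
    best, cur, best_span = 0, 0, (0, 0)
    for i, ok in enumerate(flat):
        cur = cur + 1 if ok else 0
        if cur > best:                      # longest qualifying run so far
            best, best_span = cur, (i - cur + 1, i + win)
    return best_span                        # (start_frame, end_frame)
```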

Moving Average Filter for Automatic Music Segmentation & Summarization (이동 평균 필터를 적용한 음악 세그멘테이션 및 요약)

  • Kim Kil-Youn;Oh Yung-Hwan
    • Proceedings of the KSPS conference, 2006.05a, pp.143-146, 2006
  • Music is now digitally produced and distributed via the Internet, and we encounter a huge amount of music every day. Music summarization technology has been studied to help people concentrate on the most impressive section of a song, so that one can skim a song by listening only to the climax (chorus, refrain). Recent studies try to find the climax section using various methods, such as finding diagonal line segments or kernel-based segmentation. These methods often fail to capture the inherent structure of music due to its polyphonic and noisy nature. In this paper, by applying a moving average filter to the time domain of MFCC/chroma features, we achieve a remarkable improvement in capturing the music structure.
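
A minimal sketch of the preprocessing step named in the title, assuming librosa is available: each MFCC coefficient's trajectory is smoothed along the time axis with a moving average filter before segmentation. The filter length is an illustrative choice.

```python
# A minimal sketch, assuming librosa: smooth each MFCC coefficient's
# trajectory along the time axis with a moving average filter before
# running segmentation. The filter length (win) is illustrative.
import numpy as np
import librosa

def smoothed_mfcc(path, n_mfcc=13, win=9):
    y, sr = librosa.load(path, sr=None, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape (n_mfcc, T)
    kernel = np.ones(win) / win
    # Convolve every coefficient row with the averaging kernel.
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, mfcc)
```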

MAS: Real-time Meeting Scripting and Summarization Service using BART and WebRTC library (MAS: BART 와 WebRTC 라이브러리를 이용한 실시간 회의 스크립트화 및 요약 서비스)

  • Kwon, Ki-Jun;Ko, Geon-Jun;Joo, Yeong-Hwan;Chi, Jeong-hee
    • Proceedings of the Korea Information Processing Society Conference, 2022.11a, pp.619-621, 2022
  • As the prolonged COVID-19 situation increased demand for remote work and online classes, demand for video conferencing services also grew. This paper aims to provide a more efficient video conferencing service through research on transcribing meeting content and generating summarized minutes. The service provides video conferencing based on WebRTC and transcribes the meeting using the WebSpeech API. The meeting transcript is regenerated as a summary by BART, and both the transcript and the summary can be viewed and downloaded at any time. This paper proposes MAS (Meeting Auto Summarization), a video conferencing service with a meeting summarization feature, and presents the design and implementation of MAS.
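
The summarization step can be sketched with the Hugging Face transformers pipeline; the checkpoint name below is an assumption, since the abstract does not say which BART model MAS uses.

```python
# A minimal sketch of the BART summarization step using the Hugging
# Face transformers pipeline; the checkpoint is an assumption, and a
# real transcript would need chunking to fit the model's input limit.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize_transcript(transcript: str) -> str:
    result = summarizer(transcript, max_length=150, min_length=40, do_sample=False)
    return result[0]["summary_text"]
```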

A Lecture Summarization Application Using STT (Speech-To-Text) and ChatGPT (STT(Speech-To-Text)와 ChatGPT 를 활용한 강의 요약 애플리케이션)

  • Jin-Woong Kim;Bo-Sung Geum;Tae-Kook Kim
    • Proceedings of the Korea Information Processing Society Conference, 2023.11a, pp.297-298, 2023
  • With COVID-19 effectively over, university lectures have shifted from non-face-to-face online classes back to in-person classes. Online lectures allowed review through replays, whereas in-person lectures are covered by recordings instead. However, finding a desired part in a replay or recording and summarizing its content is time-consuming and inconvenient. This paper proposes an application that converts lecture content into text using STT (Speech-to-Text) technology and summarizes it with ChatGPT (Chat Generative Pre-trained Transformer).
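
A minimal sketch of such an STT-plus-summarization pipeline using the OpenAI Python client; the model names ("whisper-1", "gpt-4o-mini") and the prompt are illustrative assumptions rather than the paper's stated configuration.

```python
# A minimal sketch of the STT-then-summarize pipeline with the OpenAI
# Python client; model names and the prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_lecture(audio_path: str) -> str:
    # Step 1: transcribe the lecture recording (STT).
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)
    # Step 2: summarize the transcript with a chat model.
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": "Summarize this lecture:\n" + transcript.text}],
    )
    return chat.choices[0].message.content
```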

VOC Summarization and Classification based on Sentence Understanding (구문 의미 이해 기반의 VOC 요약 및 분류)

  • Kim, Moonjong;Lee, Jaean;Han, Kyouyeol;Ahn, Youngmin
    • KIISE Transactions on Computing Practices, v.22 no.1, pp.50-55, 2016
  • To understand customers' opinions of and demands on a company's products or services, it is important to analyze VOC (Voice of Customer) data; however, understanding context in VOC is difficult because of segmented and duplicated sentences and the variety of dialog contexts. In this article, POS (part-of-speech) tags and morphemes were selected as language resources for their semantic importance to the documents. Based on these, we defined LSPs (Lexico-Semantic Patterns) to understand the structure and semantics of sentences and extracted summaries from key sentences; furthermore, LSPs were used to connect segmented sentences and remove contextual repetition. We also defined LSPs by category and classified documents according to the categories whose main sentences the LSPs matched. In the experiment, we summarized and classified VOC documents and compared the results with previous methodologies.
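
As a generic illustration of lexico-semantic pattern matching (not the paper's actual pattern set, which targets Korean VOC text), the sketch below renders each sentence as "surface/POS" tokens and matches category rules that mix lexical items with POS wildcards.

```python
# A generic illustration of lexico-semantic pattern matching over
# "surface/POS" token strings; the categories and patterns below are
# hypothetical, not the paper's (Korean) pattern set.
import re

LSP_RULES = {
    "COMPLAINT": re.compile(r"\S+/NOUN (?:\S+/ADV )?(?:broken|fails|error)/\w+"),
    "REQUEST":   re.compile(r"(?:please|want|need)/\w+ \S+/NOUN"),
}

def classify_sentence(tagged_tokens):
    """tagged_tokens: list of (surface, pos) pairs for one sentence."""
    text = " ".join(f"{w}/{p}" for w, p in tagged_tokens)
    return [cat for cat, rule in LSP_RULES.items() if rule.search(text)]

# classify_sentence([("screen", "NOUN"), ("fails", "VERB")]) -> ["COMPLAINT"]
```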

Comparing the Use of Semantic Relations between Tags Versus Latent Semantic Analysis for Speech Summarization (스피치 요약을 위한 태그의미분석과 잠재의미분석간의 비교 연구)

  • Kim, Hyun-Hee
    • Journal of the Korean Society for Library and Information Science, v.47 no.3, pp.343-361, 2013
  • We proposed and evaluated a tag semantic analysis method in which original tags are expanded and the semantic relations between original or expanded tags are used to extract key sentences from lecture speech transcripts. To do so, we first investigated how useful Flickr tag clusters and WordNet synonyms are for expanding tags and for detecting the semantic relations between tags. To evaluate the proposed method, we then compared it with a latent semantic analysis (LSA) method. We found that Flickr tag clusters are more effective than WordNet synonyms and that the mean F measure of the tag semantic analysis method (0.27) is higher than that of the LSA method (0.22).
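
For reference, the LSA baseline the paper compares against can be sketched as follows: sentences are embedded via truncated SVD of a TF-IDF matrix and scored by the norm of their latent-topic vectors. The number of topics is an illustrative assumption.

```python
# A minimal sketch of the LSA baseline: embed sentences by truncated
# SVD of a TF-IDF matrix and score each by the norm of its topic
# vector. The number of topics is an illustrative assumption.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

def lsa_key_sentences(sentences, k=3, n_topics=5):
    X = TfidfVectorizer().fit_transform(sentences)        # sentences x terms
    svd = TruncatedSVD(n_components=min(n_topics, min(X.shape) - 1))
    topics = svd.fit_transform(X)                         # sentences x topics
    scores = np.linalg.norm(topics, axis=1)               # salience per sentence
    top = np.argsort(scores)[::-1][:k]
    return [sentences[i] for i in sorted(top)]
```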

Multimodal Approach for Summarizing and Indexing News Video

  • Kim, Jae-Gon;Chang, Hyun-Sung;Kim, Young-Tae;Kang, Kyeong-Ok;Kim, Mun-Churl;Kim, Jin-Woong;Kim, Hyung-Myung
    • ETRI Journal, v.24 no.1, pp.1-11, 2002
  • A video summary abstracts the gist of an entire video and also enables efficient access to the desired content. In this paper, we propose a novel method for summarizing news video based on multimodal analysis of the content. The proposed method exploits closed caption data to locate semantically meaningful highlights in a news video, and uses speech signals in the audio stream to align the closed caption data with the video timeline. The detected highlights are then described using the MPEG-7 Summarization Description Scheme, which allows efficient browsing of the content through functionalities such as multi-level abstracts and navigation guidance. Multimodal search and retrieval also fall within the proposed framework: by indexing the synchronized closed caption data, video clips become searchable with a text query. Intensive experiments with prototype systems demonstrate the validity and reliability of the proposed method in real applications.
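
The text-query retrieval enabled by indexing synchronized captions can be sketched simply: each caption segment carries its aligned time span, and a query returns the clips whose caption text contains all query words. The data layout is an illustrative assumption.

```python
# A minimal sketch of text-query retrieval over time-aligned captions;
# the data layout is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class CaptionSegment:
    start: float  # seconds into the video
    end: float
    text: str

def search_clips(captions, query):
    """Return (start, end) spans whose caption contains every query word."""
    words = query.lower().split()
    return [(c.start, c.end) for c in captions
            if all(w in c.text.lower() for w in words)]
```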

ICLAL: In-Context Learning-Based Audio-Language Multi-Modal Deep Learning Models (ICLAL: 인 컨텍스트 러닝 기반 오디오-언어 멀티 모달 딥러닝 모델)

  • Jun Yeong Park;Jinyoung Yeo;Go-Eun Lee;Chang Hwan Choi;Sang-Il Choi
    • Proceedings of the Korea Information Processing Society Conference, 2023.11a, pp.514-517, 2023
  • This study addresses a multimodal deep learning model for applying in-context learning to audio-language tasks. The goal is to develop a multimodal model that, during training, learns a shared representation through which audio and text can communicate, and that can then perform a variety of audio-text tasks. The model consists of an audio encoder connected to a language model; the language model is an autoregressive large language model with 6.7B or 30B parameters, and the audio encoder is an audio feature-extraction model pre-trained with self-supervised learning. Because the language model is relatively large, training follows a frozen approach in which the language model's parameters are fixed and only the audio encoder's parameters are updated. The training tasks are automatic speech recognition and abstractive summarization. After training, the model was tested on question answering. The results indicate that additional training is needed to generate correct answer sentences, but the model pre-trained with speech recognition produced grammatically correct sentences using keywords similar to the answer.
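
The frozen training setup described above can be sketched in PyTorch: the language model's parameters are frozen, and only the audio encoder and a projection into the LM's embedding space receive gradients. Module names, dimensions, and the assumption of an HF-style LM accepting inputs_embeds are illustrative.

```python
# A PyTorch sketch of the frozen setup: the LM is frozen and only the
# audio encoder plus a projection into the LM embedding space train.
# Module names, dimensions, and the HF-style inputs_embeds call are
# illustrative assumptions.
import torch
import torch.nn as nn

class AudioLanguageModel(nn.Module):
    def __init__(self, audio_encoder: nn.Module, lm: nn.Module,
                 audio_dim=768, lm_dim=4096):
        super().__init__()
        self.audio_encoder = audio_encoder
        self.proj = nn.Linear(audio_dim, lm_dim)   # audio -> LM embedding space
        self.lm = lm
        for p in self.lm.parameters():             # freeze the language model
            p.requires_grad = False

    def forward(self, audio, text_embeds):
        prefix = self.proj(self.audio_encoder(audio))     # (B, T_audio, lm_dim)
        inputs = torch.cat([prefix, text_embeds], dim=1)  # audio tokens as prefix
        return self.lm(inputs_embeds=inputs)

# Only trainable (audio-side) parameters go to the optimizer:
# optim = torch.optim.AdamW(
#     (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```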