• Title/Abstract/Keyword: linguistic features

Search results: 181 items (processing time: 0.025 seconds)

다중 비주얼 특징을 이용한 어학 교육 비디오의 자동 요약 방법 (Automatic Summary Method of Linguistic Educational Video Using Multiple Visual Features)

  • 한희준;김천석;추진호;노용만
    • 한국멀티미디어학회논문지 / Vol. 7, No. 10 / pp.1452-1463 / 2004
  • With the shift toward interactive broadcasting services, there is growing demand for automatic video summarization in order to provide content suited to diverse user needs and preferences and to manage and use the increasing volume of broadcast content efficiently. This paper proposes a method for automatically summarizing language-education videos, whose content is well structured. To generate a content-based summary automatically, shot boundaries are first detected in the digital video, and visual features are extracted from the keyframe representing each shot. The extracted multiple visual features are then used to determine fine-grained content information for the language-education video. Finally, a summary describing this content information is generated as an XML document conforming to the Hierarchical Summary structure defined in the MPEG-7 MDS (Multimedia Description Scheme). Experiments on foreign-language conversation videos verified the effectiveness of the proposed automatic summarization method and confirmed that it can be applied efficiently to a video summarization system for providing and managing diverse services for educational broadcast content.
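
As a purely illustrative aside (not taken from the paper), the snippet below sketches the general shape of such a pipeline in Python: shot boundaries are guessed from drops in frame-to-frame HSV-histogram correlation, and the detected shots are wrapped in a toy MPEG-7-style HierarchicalSummary XML element. The threshold, the histogram settings, and the XML layout are assumptions for demonstration only.

```python
# Illustrative sketch only: histogram-based shot-boundary detection and a
# toy MPEG-7-style HierarchicalSummary document. Thresholds, file names,
# and the exact XML layout are assumptions, not the paper's parameters.
import cv2
import xml.etree.ElementTree as ET

def detect_shot_boundaries(video_path, threshold=0.5):
    """Return frame indices where the HSV-histogram correlation drops."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            sim = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if sim < threshold:          # low correlation => likely shot change
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries

def build_hierarchical_summary(boundaries):
    """Wrap detected shots in a toy HierarchicalSummary XML element."""
    root = ET.Element("HierarchicalSummary")
    for i, frame_idx in enumerate(boundaries):
        seg = ET.SubElement(root, "SummarySegment", id=str(i))
        ET.SubElement(seg, "KeyFrame").text = str(frame_idx)
    return ET.tostring(root, encoding="unicode")
```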


멀티미디어 및 언어적 특성을 활용한 크라우드펀딩 캠페인의 성공 여부 예측 (Predicting Success of Crowdfunding Campaigns using Multimedia and Linguistic Features)

  • 이강희;이승훈;김현철
    • 한국멀티미디어학회논문지 / Vol. 21, No. 2 / pp.281-288 / 2018
  • Crowdfunding has seen an enormous rise in recent years, becoming a new alternative funding source for emerging startup companies. Despite this success, it has been reported that only around 40% of crowdfunding campaigns reach their funding goal. The purpose of this study is to investigate the key factors influencing successful fundraising on crowdfunding platforms. To this end, we focus on the content of project campaigns, particularly their linguistic cues, together with multiple features extracted from project information and multimedia content. We identify which of these features are useful for predicting the success of crowdfunding campaigns and then build a predictive model based on the selected features. Our experimental results show that the model predicts the success or failure of a crowdfunding campaign with 86.15% accuracy.
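
For illustration only, the sketch below shows one plausible way to set up such a prediction task in Python with scikit-learn: hand-crafted linguistic and multimedia features are extracted from each campaign and fed to a classifier evaluated by cross-validation. The feature set, the dictionary keys (description, n_images, n_videos, goal), and the classifier are assumptions, not the features or model behind the reported 86.15% accuracy.

```python
# Illustrative sketch: predicting campaign success from simple hand-crafted
# features. The feature set and classifier are assumptions for demonstration,
# not the features or model used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(campaign):
    """campaign: dict with hypothetical keys description, n_images, n_videos, goal."""
    text = campaign["description"]
    words = text.split()
    return [
        len(words),                                            # description length
        sum(w.isupper() for w in words) / max(len(words), 1),  # all-caps word ratio
        text.count("!") / max(len(words), 1),                  # exclamation density
        campaign["n_images"],                                  # multimedia richness
        campaign["n_videos"],
        np.log1p(campaign["goal"]),                            # funding goal (log scale)
    ]

def evaluate(campaigns, labels):
    """Return mean cross-validated accuracy for a list of campaigns and 0/1 labels."""
    X = np.array([extract_features(c) for c in campaigns])
    y = np.array(labels)                                       # 1 = funded, 0 = failed
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
```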

PROSODY IN SPEECH TECHNOLOGY - National project and some of our related works -

  • Hirose Keikichi
    • 한국음향학회:학술대회논문집 / 2002 Summer Conference Proceedings, Vol. 21, No. 1 / pp.15-18 / 2002
  • Prosodic features of speech are known to play an important role in conveying linguistic information in human conversation, and their role in conveying para- and non-linguistic information is even greater. Despite this importance, from an engineering viewpoint research has focused mainly on segmental features rather than on prosodic features. With the aim of promoting research on prosody, a research project, 'Prosody and Speech Processing', is now under way. The paper first gives a rough sketch of the project and then introduces several prosody-related research efforts ongoing in our laboratory: corpus-based fundamental frequency contour generation, speech-rate control for dialogue-like speech synthesis, analysis of the prosodic features of emotional speech, reply speech generation in spoken dialogue systems, and language modeling with prosodic boundaries.


언어학적 단서를 활용한 동화 텍스트 내 발화문의 화자 파악 (Identification of Speakers in Fairytales with Linguistic Clues)

  • 민혜진;정진우;박종철
    • 한국언어정보학회지:언어와정보 / Vol. 17, No. 2 / pp.93-121 / 2013
  • Identifying the speakers of individual utterances in textual stories is an important step toward applications that rely on the unique characteristics of story characters, such as robot storytelling and story-to-scene generation. Despite its usefulness, the task is challenging because, especially in fairytales, not only humans but also animals and even inanimate objects can be speakers, so the number of candidates is much larger than in other types of text. In addition, since the act of speaking is not always mentioned explicitly, the speaker must often be inferred from implicitly described speaking behaviors such as appearances or emotional expressions. In this paper, we investigate a method that exploits linguistic clues to identify the speakers of utterances in Korean fairytale texts, with particular attention to these challenges. Compared with previous work, the present work takes into account additional linguistic features such as vocative roles and pairs of conversation participants, and proposes the use of discourse-level turn-taking behavior between speakers to further reduce the number of candidate speakers. We describe a simple rule-based method that chooses a speaker from the candidates based on these linguistic features and turn-taking behaviors.
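
A minimal, hypothetical sketch of a rule-based speaker chooser in this spirit is given below: explicit speaker mentions take priority, vocatives are used to identify the addressee rather than the speaker, and two-party turn-taking fills the remaining gaps. The data format and the three rules are simplifications assumed for this example, not the paper's actual rules or corpus representation.

```python
# Illustrative sketch of a rule-based speaker chooser using linguistic clues
# and two-party turn-taking. The rule set and the data format are assumptions
# for this example, not the paper's actual rules.
def identify_speakers(utterances, participants):
    """
    utterances  : list of dicts with hypothetical keys
                  {"text", "explicit_speaker", "vocative"}.
    participants: the two characters assumed to take part in the conversation.
    """
    speakers, prev = [], None
    for utt in utterances:
        if utt.get("explicit_speaker"):              # Rule 1: "said the fox", etc.
            speaker = utt["explicit_speaker"]
        elif utt.get("vocative") in participants:    # Rule 2: a vocative names the addressee
            others = [p for p in participants if p != utt["vocative"]]
            speaker = others[0] if others else None
        elif prev in participants:                   # Rule 3: turn-taking alternation
            speaker = next(p for p in participants if p != prev)
        else:
            speaker = None                           # leave unresolved
        speakers.append(speaker)
        prev = speaker
    return speakers
```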


A Computational Model of Language Learning Driven by Training Inputs

  • 이은석;이지훈;장병탁
    • 한국인지과학회:학술대회논문집 / 2010 Spring Conference / pp.60-65 / 2010
  • Language learning involves the linguistic environment around the learner, so variation in the training input to which the learner is exposed has been linked to language learning outcomes. We explore how differences in linguistic experience can lead to differences in the learning of linguistic structural features, investigated with a probabilistic graphical model. We gradually vary the amount of training input, composed of natural linguistic data from animated videos for children, from holistic (one-word expressions) to compositional (two- to six-word expressions). The recognition and generation of sentences are treated as a probabilistic constraint-satisfaction process based on massively parallel DNA chemistry. Random sentence generation succeeds when the networks begin with limited sentence lengths and vocabulary sizes and gradually expand to larger ones, resembling children's cognitive development during learning. This model supports the suggestion that varying the early linguistic environment in developmental steps may facilitate language acquisition.
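
The sketch below illustrates only the training-input scheduling idea (from one-word to progressively longer expressions); the learner here is a plain bigram counter standing in for the paper's probabilistic graphical / DNA-chemistry model, purely to show how such a curriculum can be organized.

```python
# Illustrative sketch of a length-ordered training curriculum: expose a
# learner to one-word expressions first, then progressively longer ones.
# The bigram-counting "learner" is an assumption for demonstration only.
from collections import Counter

def curriculum(corpus, max_len=6):
    """Yield training batches ordered by utterance length (1..max_len words)."""
    for length in range(1, max_len + 1):
        batch = [s for s in corpus if len(s.split()) == length]
        if batch:
            yield length, batch

def train(corpus):
    """Accumulate bigram counts while walking through the curriculum."""
    bigrams = Counter()
    for length, batch in curriculum(corpus):
        for sent in batch:
            tokens = ["<s>"] + sent.split() + ["</s>"]
            bigrams.update(zip(tokens, tokens[1:]))
    return bigrams
```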


Social Media Marketing Strategies for Tourism Destinations: Effects of Linguistic Features and Content Types

  • Song, Seobgyu;Park, Seunghyun Brian;Park, Kwangsoo
    • Journal of Smart Tourism / Vol. 1, No. 3 / pp.21-29 / 2021
  • This study explored the relationship between post types and linguistic characteristics in marketer-generated content and social media engagement in order to identify the content that best enhances engagement. Data on 23,588 marketer-generated posts were collected from the Facebook pages of the destination marketing organizations of the 50 U.S. states. The data were analyzed using social media analytics, linguistic analysis, multivariate analysis of variance, and discriminant analysis. The results showed significant differences in both engagement indicators and linguistic scores among the three post types. Based on these findings, the study not only offers theoretical implications for researchers but also suggests to practitioners the most effective content designs for travel destination marketing on Facebook.
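
As a hedged illustration of this kind of analysis, the sketch below runs a MANOVA over engagement and linguistic scores grouped by post type using pandas and statsmodels. The column names (post_type, likes, shares, analytic_score) are assumptions for the example, not the study's actual variables.

```python
# Illustrative sketch: test whether engagement and linguistic scores differ
# across post types. Column names are assumptions, not the study's variables.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

def compare_post_types(df: pd.DataFrame):
    """df must contain the assumed columns post_type, likes, shares, analytic_score."""
    model = MANOVA.from_formula(
        "likes + shares + analytic_score ~ post_type", data=df
    )
    return model.mv_test()   # Wilks' lambda, Pillai's trace, etc.

# Example usage with a tiny synthetic frame:
# df = pd.DataFrame({
#     "post_type": ["photo", "video", "link", "photo", "video", "link"],
#     "likes": [120, 340, 45, 150, 280, 60],
#     "shares": [10, 55, 3, 12, 40, 5],
#     "analytic_score": [32.1, 18.4, 50.2, 30.7, 20.9, 48.8],
# })
# print(compare_post_types(df))
```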

다양한 언어적 자질을 고려한 발화간 유사도 측정 방법 (A Method for Measuring Inter-Utterance Similarity Considering Various Linguistic Features)

  • 이연수;신중휘;홍금원;송영인;이도길;임해창
    • 한국음향학회지 / Vol. 28, No. 1 / pp.61-69 / 2009
  • This paper discusses an improvement to inter-utterance similarity measurement, one of the core technologies for selecting responses in example-based dialogue systems. Unlike general inter-sentence similarity measurement, inter-utterance similarity in dialogue must take into account not only the similarity of word distributions but also the various linguistic properties of a sentence that determine the naturalness of a dialogue, such as sentence type, tense, polarity, and modality. Previous work, however, has given little consideration to these properties. We therefore propose a new similarity measure that improves accuracy by analyzing various linguistic features in addition to the surface similarity of utterances and reflecting them in the similarity computation. We also propose a method that increases the utility of a limited number of examples by considering feature-wise similarity between utterances. Experimental results show that the proposed method improves accuracy by more than 10 percentage points over the existing approach.
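
A minimal sketch of such a measure is given below: a lexical cosine similarity is blended with agreement scores on discrete linguistic features (sentence type, tense, polarity, modality). The feature inventory, the weights, and the utterance representation are assumptions for illustration, not the weighting used in the paper.

```python
# Illustrative sketch: combine surface (lexical) similarity with agreement on
# discrete linguistic features. Feature set, weights, and data format are
# assumptions for this example, not the paper's actual formulation.
import math
from collections import Counter

def lexical_similarity(u1, u2):
    """Cosine similarity over bag-of-words counts."""
    c1, c2 = Counter(u1["tokens"]), Counter(u2["tokens"])
    dot = sum(c1[w] * c2[w] for w in c1)
    norm = math.sqrt(sum(v * v for v in c1.values())) * \
           math.sqrt(sum(v * v for v in c2.values()))
    return dot / norm if norm else 0.0

def utterance_similarity(u1, u2, weights=None):
    """u1, u2: dicts with keys tokens, sent_type, tense, polarity, modality."""
    weights = weights or {"lex": 0.5, "sent_type": 0.2,
                          "tense": 0.1, "polarity": 0.1, "modality": 0.1}
    score = weights["lex"] * lexical_similarity(u1, u2)
    for feat in ("sent_type", "tense", "polarity", "modality"):
        score += weights[feat] * (u1[feat] == u2[feat])   # 1 if the feature matches
    return score
```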

언어자원 자동 구축을 위한 위키피디아 콘텐츠 활용 방안 연구 (A Study on Utilization of Wikipedia Contents for Automatic Construction of Linguistic Resources)

  • 류철중;김용;윤보현
    • 디지털융복합연구 / Vol. 13, No. 5 / pp.187-194 / 2015
  • To enable machines to understand rapidly changing natural language, the construction of various linguistic knowledge resources is essential. This paper devises a continuously extensible approach that exploits the characteristics of online content to build linguistic knowledge resources automatically. In particular, it focuses on automatically building and extending a named entity (NE) dictionary, one of the most widely used resources in language analysis. To this end, Wikipedia is selected as the source for constructing the NE dictionary, and various statistical analyses are performed to characterize it. Based on these analyses, we propose a method for building and extending an NE dictionary using the syntactic characteristics and structural metadata of Wikipedia content.
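
For illustration, the sketch below seeds a toy named-entity dictionary from Wikipedia metadata by mapping an article's category names to coarse NE types via the public MediaWiki API. The category-keyword mapping and the choice of entity types are assumptions, not the classification rules developed in the paper.

```python
# Illustrative sketch: seed an NE dictionary from Wikipedia category metadata.
# The category-keyword mapping below is an assumption for demonstration only.
import requests

API = "https://ko.wikipedia.org/w/api.php"
CATEGORY_HINTS = {"인물": "PERSON", "기업": "ORGANIZATION", "도시": "LOCATION"}

def categories_of(title):
    """Fetch the category names of a Korean Wikipedia article via the MediaWiki API."""
    params = {"action": "query", "prop": "categories", "titles": title,
              "format": "json", "cllimit": "max"}
    pages = requests.get(API, params=params).json()["query"]["pages"]
    page = next(iter(pages.values()))
    return [c["title"] for c in page.get("categories", [])]

def guess_entity_type(title):
    """Return the first NE type whose hint word appears in any category name."""
    for cat in categories_of(title):
        for hint, ne_type in CATEGORY_HINTS.items():
            if hint in cat:
                return ne_type
    return None

# Example: ne_dict = {t: guess_entity_type(t) for t in ["세종", "삼성전자", "부산"]}
```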

Multimodal Context Embedding for Scene Graph Generation

  • Jung, Gayoung;Kim, Incheol
    • Journal of Information Processing Systems / Vol. 16, No. 6 / pp.1250-1260 / 2020
  • This study proposes a novel deep neural network model that can accurately detect objects and their relationships in an image and represent them as a scene graph. The proposed model utilizes several multimodal features, including linguistic features and visual context features, to detect objects and relationships accurately. In addition, context features are embedded using graph neural networks so that the dependencies between two related objects are reflected in the context feature vector. This study demonstrates the effectiveness of the proposed model through comparative experiments on the Visual Genome benchmark dataset.
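
A toy sketch of the general idea, assuming PyTorch, is shown below: visual and linguistic embeddings of detected objects are fused, a single round of scene-level context passing updates them, and every subject-object pair is scored over a set of predicates. Layer sizes, the message-passing scheme, and the predicate count are assumptions, not the architecture proposed in the paper.

```python
# Illustrative sketch only: a toy relationship scorer over multimodal object
# features. Dimensions and structure are assumptions, not the paper's model.
import torch
import torch.nn as nn

class ToyRelationScorer(nn.Module):
    def __init__(self, vis_dim=512, lang_dim=300, hid=256, n_predicates=50):
        super().__init__()
        self.embed = nn.Linear(vis_dim + lang_dim, hid)
        self.message = nn.Linear(hid, hid)       # one round of context passing
        self.classify = nn.Linear(2 * hid, n_predicates)

    def forward(self, vis_feats, lang_feats):
        # vis_feats: (N, vis_dim), lang_feats: (N, lang_dim) for N detected objects
        h = torch.relu(self.embed(torch.cat([vis_feats, lang_feats], dim=-1)))
        context = torch.relu(self.message(h.mean(dim=0, keepdim=True)))
        h = h + context                           # inject scene-level context
        n = h.size(0)
        subj = h.unsqueeze(1).expand(n, n, -1)    # subject embedding per pair
        obj = h.unsqueeze(0).expand(n, n, -1)     # object embedding per pair
        return self.classify(torch.cat([subj, obj], dim=-1))  # (N, N, n_predicates)
```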

Voice Frequency Synthesis using VAW-GAN based Amplitude Scaling for Emotion Transformation

  • Kwon, Hye-Jeong;Kim, Min-Jeong;Baek, Ji-Won;Chung, Kyungyong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 2 / pp.713-725 / 2022
  • In most cases, artificial intelligence does not exhibit any definite change in emotion, which makes it difficult to convey empathy in communication with humans. If frequency modification is applied to neutral speech, or if a different emotional frequency is added to it, it becomes possible to develop artificial intelligence with emotions. This study proposes emotion conversion using voice frequency synthesis based on a generative adversarial network (GAN). The proposed method extracts frequency features from the speech data of twenty-four actors and actresses; that is, it extracts the voice features of their different emotions, preserves the linguistic features, and converts only the emotions. It then generates a frequency contour with a variational auto-encoding Wasserstein generative adversarial network (VAW-GAN) in order to model prosody while preserving linguistic information, which makes it possible to learn speech features in parallel. Finally, it corrects the frequency by employing amplitude scaling: using spectral conversion on a logarithmic scale, the signal is converted in a way that accounts for human hearing characteristics. Accordingly, the proposed technique provides emotion conversion of speech so that artificially generated voices can express emotions.
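
The snippet below sketches only the final amplitude-scaling idea: a converted magnitude spectrum is nudged toward a target emotional spectrum on a logarithmic (dB) scale, which better matches human loudness perception. The blending factor and the spectrogram framing are assumptions, not the correction actually used in the paper.

```python
# Illustrative sketch of log-scale amplitude scaling between a converted and a
# target magnitude spectrogram. The blending factor alpha is an assumption.
import numpy as np

def amplitude_scale(converted_mag, target_mag, alpha=0.5, eps=1e-8):
    """
    converted_mag, target_mag: magnitude spectrograms of shape (freq, frames).
    alpha: how far to move toward the target on the dB scale (0 = no change).
    """
    conv_db = 20.0 * np.log10(converted_mag + eps)    # to decibels
    targ_db = 20.0 * np.log10(target_mag + eps)
    scaled_db = (1.0 - alpha) * conv_db + alpha * targ_db
    return 10.0 ** (scaled_db / 20.0)                 # back to linear magnitude
```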