• Title/Summary/Keyword: Linguistic Features


Automatic Summary Method of Linguistic Educational Video Using Multiple Visual Features (다중 비주얼 특징을 이용한 어학 교육 비디오의 자동 요약 방법)

  • Han Hee-Jun;Kim Cheon-Seog;Choo Jin-Ho;Ro Yong-Man
    • Journal of Korea Multimedia Society, v.7 no.10, pp.1452-1463, 2004
  • The demand for automatic video summarization is growing as bi-directional broadcast content, along with the variety of user requests and preferences in the bi-directional broadcast environment, continues to increase. Automatic video summaries are also needed by service providers for the efficient management and use of large content collections. In this paper, we propose a method to generate content-based summaries of linguistic educational videos automatically. First, shot boundaries and keyframes are detected in the linguistic educational video, and multiple low-level visual features are extracted. Next, the semantic parts of the video (explanation part, dialog part, text-based part) are identified using the extracted visual features. Finally, an XML document describing the summary information is produced based on the Hierarchical Summary architecture of the MPEG-7 MDS (Multimedia Description Scheme). Experimental results show that our proposed algorithm provides reasonable performance for the automatic summarization of linguistic educational videos. We verified that the proposed method is useful for video summary systems that provide various services, as well as for the management of educational content.

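The first stage of the pipeline described above (shot-boundary detection from low-level visual features) can be sketched as follows. This is a minimal illustration, not the authors' implementation: frames are modeled as flat grayscale intensity lists, and the histogram bin count and threshold are hypothetical choices.

```python
# Minimal sketch of histogram-based shot-boundary detection, one of the
# low-level visual cues used for segmenting a video into shots.

def histogram(frame, bins=8, max_val=256):
    """Coarse intensity histogram of a frame (frame = flat list of pixels)."""
    counts = [0] * bins
    step = max_val // bins
    for px in frame:
        counts[min(px // step, bins - 1)] += 1
    return counts

def shot_boundaries(frames, threshold=0.5):
    """Indices where the normalized histogram difference exceeds the threshold."""
    boundaries = []
    for i in range(1, len(frames)):
        h1, h2 = histogram(frames[i - 1]), histogram(frames[i])
        # Normalized L1 distance between consecutive histograms, in [0, 1].
        diff = sum(abs(a - b) for a, b in zip(h1, h2)) / (2 * len(frames[i]))
        if diff > threshold:
            boundaries.append(i)
    return boundaries

# Two dark frames followed by two bright frames -> one boundary at index 2.
dark = [10] * 100
bright = [240] * 100
print(shot_boundaries([dark, dark, bright, bright]))  # [2]
```

Keyframes would then typically be sampled from within each detected shot before feature extraction.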

Predicting Success of Crowdfunding Campaigns using Multimedia and Linguistic Features (멀티미디어 및 언어적 특성을 활용한 크라우드펀딩 캠페인의 성공 여부 예측)

  • Lee, Kang-hee;Lee, Seung-hun;Kim, Hyun-chul
    • Journal of Korea Multimedia Society, v.21 no.2, pp.281-288, 2018
  • Crowdfunding has seen an enormous rise, becoming a new alternative funding source for emerging startup companies in recent years. Despite the huge success of crowdfunding, it has been reported that only around 40% of crowdfunding campaigns successfully raise the desired goal amount. The purpose of this study is to investigate key factors influencing successful fundraising on crowdfunding platforms. To this end, we mainly focus on contents of project campaigns, particularly their linguistic cues as well as multiple features extracted from project information and multimedia contents. We reveal which of these features are useful for predicting success of crowdfunding campaigns, and then build a predictive model based on those selected features. Our experimental results demonstrate that the built model predicts the success or failure of a crowdfunding campaign with 86.15% accuracy.
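The feature-extraction-plus-classifier design described above can be sketched in miniature. The feature set and weights below are hypothetical placeholders for illustration, not the authors' fitted model.

```python
import math

# Illustrative sketch: extract simple linguistic cues from campaign text,
# then score them with a logistic function. Weights are made up for the
# example, not learned from data.

def linguistic_features(text):
    words = text.split()
    return {
        "word_count": len(words),
        "exclamations": text.count("!"),
        "reward_terms": sum(
            w.lower().strip(".,!") in {"reward", "pledge", "backer"} for w in words
        ),
    }

def success_probability(features, weights, bias=-1.0):
    """Logistic score over the extracted features."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

weights = {"word_count": 0.01, "exclamations": 0.2, "reward_terms": 0.5}
feats = linguistic_features("Back our project! Every backer gets a reward.")
print(round(success_probability(feats, weights), 3))
```

In practice the weights would be learned from labeled campaign outcomes, and the feature set would include the multimedia features the study also examines.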

PROSODY IN SPEECH TECHNOLOGY - National project and some of our related works -

  • Hirose Keikichi
    • Proceedings of the Acoustical Society of Korea Conference, spring, pp.15-18, 2002
  • Prosodic features of speech are known to play an important role in conveying linguistic information in human conversation, and their role in conveying para-linguistic and non-linguistic information is even greater. Despite this importance, engineering research has focused mainly on segmental features and much less on prosody. To promote research on prosody, a research project, 'Prosody and Speech Processing', is now under way. The paper first gives a rough sketch of the project and then introduces several prosody-related research efforts ongoing in our laboratory, including corpus-based fundamental frequency contour generation, speech rate control for dialogue-like speech synthesis, analysis of the prosodic features of emotional speech, reply speech generation in spoken dialogue systems, and language modeling with prosodic boundaries.


Identification of Speakers in Fairytales with Linguistic Clues (언어학적 단서를 활용한 동화 텍스트 내 발화문의 화자 파악)

  • Min, Hye-Jin;Chung, Jin-Woo;Park, Jong C.
    • Language and Information, v.17 no.2, pp.93-121, 2013
  • Identifying the speakers of individual utterances in textual stories is an important step towards applications that rely on the unique characteristics of story characters, such as robot storytelling and story-to-scene generation. Despite its usefulness, the task is challenging because, especially in fairytales, not only humans but also animals and even inanimate objects can be speakers, so the number of candidates is much larger than in other types of text. In addition, since the act of speaking is not always mentioned explicitly, the speaker must often be inferred from implicitly described speaking behaviors such as appearances or emotional expressions. In this paper, we investigate a method that exploits linguistic clues to identify the speakers of utterances in Korean fairytale texts, designed to handle these challenges. Compared with previous work, the present work takes into account additional linguistic features such as vocative roles and pairs of conversation participants, and proposes the use of discourse-level turn-taking behaviors between speakers to further reduce the number of candidate speakers. We describe a simple rule-based method that chooses a speaker from the candidates based on these linguistic features and turn-taking behaviors.

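The combination of clue types described above (a vocative naming the addressee, falling back to turn-taking alternation) can be sketched as a toy rule-based picker. The rules, character names, and story lines below are illustrative inventions, not the paper's actual rule set or data.

```python
# Toy sketch of rule-based speaker identification:
#   Rule 1: a vocative naming a candidate marks the addressee, so with two
#           participants the speaker is the other one.
#   Rule 2: otherwise, assume turn-taking and alternate with the last speaker.

def identify_speaker(utterance, candidates, prev_speakers):
    # Rule 1: vocative clue ("Name," or "Name!").
    for name in candidates:
        if name + "," in utterance or name + "!" in utterance:
            others = [c for c in candidates if c != name]
            if len(others) == 1:
                return others[0]
    # Rule 2: discourse-level turn-taking between the two most recent speakers.
    if len(prev_speakers) >= 2 and prev_speakers[-1] != prev_speakers[-2]:
        return prev_speakers[-2]
    return None  # unresolved: would need further clues

candidates = ["Rabbit", "Turtle"]
print(identify_speaker("Turtle, shall we race?", candidates, []))                # Rabbit
print(identify_speaker("Yes, let us race.", candidates, ["Turtle", "Rabbit"]))  # Turtle
```

A real system would rank many candidates (including animals and objects) and combine these clues with the appearance and emotion cues the abstract mentions.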

A Computational Model of Language Learning Driven by Training Inputs

  • Lee, Eun-Seok;Lee, Ji-Hoon;Zhang, Byoung-Tak
    • Proceedings of the Korean Society for Cognitive Science Conference, 2010.05a, pp.60-65, 2010
  • Language learning involves the linguistic environment around the learner, so variation in the training input to which the learner is exposed has been linked to language learning outcomes. We explore how linguistic experience can cause differences in the learning of linguistic structural features, as investigated with a probabilistic graphical model. We gradually increase the amount of training input, composed of natural linguistic data from animation videos for children, from holistic (one-word expressions) to compositional (two- to six-word expressions). The recognition and generation of sentences are a probabilistic constraint-satisfaction process based on massively parallel DNA chemistry. Random sentence generation tasks succeed when networks begin with limited sentence lengths and vocabulary sizes and gradually expand to larger ones, like children's cognitive development in learning. This model supports the suggestion that variation in early linguistic environments, with developmental steps, may help facilitate language acquisition.


Social Media Marketing Strategies for Tourism Destinations: Effects of Linguistic Features and Content Types

  • Song, Seobgyu;Park, Seunghyun Brian;Park, Kwangsoo
    • Journal of Smart Tourism, v.1 no.3, pp.21-29, 2021
  • This study explored the relationship between post types, linguistic characteristics of marketer-generated content, and social media engagement in order to identify the content designs that best enhance engagement. Data on 23,588 marketer-generated posts were collected from the Facebook pages of the destination marketing organizations of all 50 U.S. states. The collected data were analyzed using social media analytics, linguistic analysis, multivariate analysis of variance, and discriminant analysis. The results showed significant differences in both engagement indicators and linguistic scores among the three post types. Based on these findings, the research not only provides theoretical implications for researchers but also suggests to practitioners the most effective content designs for travel destination marketing on Facebook.

A Method for Measuring Inter-Utterance Similarity Considering Various Linguistic Features (다양한 언어적 자질을 고려한 발화간 유사도 측정 방법)

  • Lee, Yeon-Su;Shin, Joong-Hwi;Hong, Gum-Won;Song, Young-In;Lee, Do-Gil;Rim, Hae-Chang
    • The Journal of the Acoustical Society of Korea, v.28 no.1, pp.61-69, 2009
  • This paper presents an improved method for measuring inter-utterance similarity in an example-based dialogue system, which searches a dialogue database for the utterance most similar to a given user utterance in order to generate a response. Unlike general inter-sentence similarity measures, an inter-utterance similarity measure for an example-based dialogue system should consider not only word distribution but also various linguistic features that affect natural conversation, such as affirmation/negation, tense, modality, and sentence type. Previous approaches, however, do not sufficiently reflect these features. This paper proposes a new utterance similarity measure that analyzes and reflects various linguistic features to improve accuracy. By considering the substitutability of these features, the proposed method can also make effective use of a limited number of examples. Experimental results show that the proposed method achieves a 10 percentage-point improvement in accuracy over the previous method.
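A similarity measure of the kind described above, mixing lexical overlap with agreement on discrete linguistic features, can be sketched as follows. The specific feature keys, the mixing weight, and the example utterances are illustrative assumptions, not the paper's actual formulation.

```python
# Sketch: utterance similarity = weighted mix of word overlap (Jaccard) and
# agreement on discrete linguistic features (negation, tense, sentence type).

def jaccard(a, b):
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def utterance_similarity(u1, u2, alpha=0.6):
    """alpha weights lexical overlap; (1 - alpha) weights feature agreement."""
    feature_keys = ("negation", "tense", "sentence_type")
    agree = sum(u1["features"][k] == u2["features"][k] for k in feature_keys) / len(feature_keys)
    return alpha * jaccard(u1["text"], u2["text"]) + (1 - alpha) * agree

u1 = {"text": "did you eat lunch",
      "features": {"negation": False, "tense": "past", "sentence_type": "question"}}
u2 = {"text": "did you eat dinner",
      "features": {"negation": False, "tense": "past", "sentence_type": "question"}}
u3 = {"text": "i did not eat lunch",
      "features": {"negation": True, "tense": "past", "sentence_type": "statement"}}

# u2 matches u1 on all features; u3 shares words but flips negation and type.
print(utterance_similarity(u1, u2) > utterance_similarity(u1, u3))  # True
```

This captures the abstract's point: two utterances with similar word distributions can still be poor example matches when features such as negation or sentence type disagree.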

A Study on Utilization of Wikipedia Contents for Automatic Construction of Linguistic Resources (언어자원 자동 구축을 위한 위키피디아 콘텐츠 활용 방안 연구)

  • Yoo, Cheol-Jung;Kim, Yong;Yun, Bo-Hyun
    • Journal of Digital Convergence, v.13 no.5, pp.187-194, 2015
  • Various linguistic knowledge resources are required for machines to understand the diverse variation in natural language. This paper devises an automatic method for constructing linguistic resources that reflects the characteristics of online content and supports continuous expansion. In particular, we focus on building a named-entity (NE) dictionary, because NEs are highly applicable in linguistic analysis processes. Based on an investigation of Korean Wikipedia, we suggest an efficient NE dictionary construction method using syntactic patterns and structural features such as metadata.
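Pattern-based harvesting of NE dictionary entries from encyclopedia-style definition sentences, in the spirit of the approach above, might look like the following sketch. The regular expression, category map, and sample sentences are illustrative assumptions (and English stand-ins for the Korean patterns the paper actually studies).

```python
import re

# Sketch: mine (entity, NE-tag) pairs from definitional first sentences of
# the form "X is a <category> ...". Patterns and categories are toy examples.

CATEGORY_MAP = {"city": "LOCATION", "company": "ORGANIZATION", "singer": "PERSON"}

def extract_entry(sentence):
    """Return (entity, NE tag) from a definitional sentence, or None."""
    m = re.match(r"(?P<name>[\w ]+?) is an? (?P<cat>\w+)", sentence)
    if not m:
        return None
    tag = CATEGORY_MAP.get(m.group("cat").lower())
    return (m.group("name"), tag) if tag else None

sentences = [
    "Busan is a city in South Korea.",
    "Naver is a company headquartered in Seongnam.",
]
dictionary = dict(filter(None, map(extract_entry, sentences)))
print(dictionary)  # {'Busan': 'LOCATION', 'Naver': 'ORGANIZATION'}
```

The structural features the abstract mentions (infobox metadata, category links) would supply additional, often more reliable, evidence than the sentence patterns alone.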

Multimodal Context Embedding for Scene Graph Generation

  • Jung, Gayoung;Kim, Incheol
    • Journal of Information Processing Systems, v.16 no.6, pp.1250-1260, 2020
  • This study proposes a novel deep neural network model that can accurately detect objects and their relationships in an image and represent them as a scene graph. The proposed model utilizes several multimodal features, including linguistic features and visual context features, to accurately detect objects and relationships. In addition, in the proposed model, context features are embedded using graph neural networks to depict the dependencies between two related objects in the context feature vector. This study demonstrates the effectiveness of the proposed model through comparative experiments using the Visual Genome benchmark dataset.

Voice Frequency Synthesis using VAW-GAN based Amplitude Scaling for Emotion Transformation

  • Kwon, Hye-Jeong;Kim, Min-Jeong;Baek, Ji-Won;Chung, Kyungyong
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.2, pp.713-725, 2022
  • Artificial intelligence mostly shows no definite change in emotion, which makes it hard to demonstrate empathy in communication with humans. If frequency modification is applied to neutral emotion, or a different emotional frequency is added to it, artificial intelligence with emotions becomes possible. This study proposes emotion conversion using Generative Adversarial Network (GAN)-based voice frequency synthesis. The proposed method extracts frequencies from the speech data of twenty-four actors and actresses; that is, it extracts the voice features of their different emotions, preserves the linguistic features, and converts only the emotion. It then generates frequencies with a variational auto-encoding Wasserstein generative adversarial network (VAW-GAN) to shape prosody while preserving linguistic information, which makes it possible to learn speech features in parallel. Finally, it corrects the frequency by employing amplitude scaling: using a spectral conversion on a logarithmic scale, the signal is converted in consideration of human hearing characteristics. Accordingly, the proposed technique provides emotion conversion of speech so that artificially generated voices and utterances can express emotions.
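The final amplitude-scaling step, applying gain on a logarithmic (dB) scale to match how loudness is perceived, can be illustrated in isolation. This is a generic sketch of log-scale amplitude scaling, not the paper's actual correction procedure; the gain value and spectrum are made up.

```python
import math

# Sketch: scale a magnitude spectrum on a logarithmic (dB) scale, reflecting
# the perceptual motivation for working in log amplitude.

def amplitude_scale_db(magnitudes, gain_db):
    """Apply a uniform gain (in dB) to a magnitude spectrum."""
    scaled = []
    for m in magnitudes:
        db = 20.0 * math.log10(max(m, 1e-12))      # linear -> dB
        scaled.append(10.0 ** ((db + gain_db) / 20.0))  # dB -> linear
    return scaled

spectrum = [1.0, 0.5, 0.25]
boosted = amplitude_scale_db(spectrum, 6.0)  # +6 dB ~ doubles each amplitude
print([round(x, 3) for x in boosted])
```

A uniform dB gain is multiplicative in the linear domain; the paper's method would apply frequency-dependent corrections rather than a single global gain.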