• Title/Summary/Keyword: Video Scripts


Fake News Detection on Social Media using Video Information: Focused on YouTube (영상정보를 활용한 소셜 미디어상에서의 가짜 뉴스 탐지: 유튜브를 중심으로)

  • Chang, Yoon Ho; Choi, Byoung Gu
    • The Journal of Information Systems / v.32 no.2 / pp.87-108 / 2023
  • Purpose: The main purpose of this study is to improve fake news detection performance by using video information, overcoming the limitations of extant text- and image-oriented studies that do not reflect the latest news consumption trends. Design/methodology/approach: This study collected video clips and related information, including news scripts, speakers' facial expressions, and video metadata, from YouTube to develop a fake news detection model. Based on the collected data, seven combinations of this information (each feature alone, each pair of features, and all three together) were used as input for training and evaluation. The input data were analyzed using six models, including support vector machines and deep neural networks, and the area under the curve (AUC) was used to evaluate the performance of the classification models. Findings: The results showed that the AUC and accuracy values of the three-feature combination (scripts, video metadata, and facial expression) were the highest for the logistic regression, naïve Bayes, and deep neural network models. This result implies that fake news detection can be improved by using video information (video metadata and facial expression). The sample size of this study was relatively small; the generalizability of the results would be enhanced with a larger sample.
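
As an illustration of the kind of evaluation setup described in this abstract, the following is a minimal sketch (not the authors' code) that trains a logistic regression classifier on one concatenated feature combination and scores it with AUC using scikit-learn; all feature arrays and labels are randomly generated placeholders.

```python
# Minimal sketch (not the authors' code): evaluating one feature combination
# with logistic regression and AUC, as in the paper's experimental setup.
# All feature arrays and labels below are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
script_feats = rng.normal(size=(n, 50))   # e.g. TF-IDF of news scripts
meta_feats = rng.normal(size=(n, 5))      # e.g. views, likes, channel age
face_feats = rng.normal(size=(n, 7))      # e.g. facial-expression scores
y = rng.integers(0, 2, size=n)            # 1 = fake, 0 = real (dummy labels)

# One of the seven input combinations: scripts + metadata + facial expression
X = np.hstack([script_feats, meta_feats, face_feats])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC for scripts+metadata+expression: {auc:.3f}")
```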

Establishment of a Process for Collecting Video Scripts on a Disaster Site based on Public-private Partnerships: Focus on 2019 Practical Activities during Typhoon in the Korean Peninsula (민관협력 기반 재난현장 영상정보 수집 및 활용체계 구축: 2019년 한반도 태풍 내습 시 실전활동 사례 중심)

  • Lee, Sohee; Lee, Junwoo; Cho, Sibum
    • Korean Journal of Remote Sensing / v.36 no.5_4 / pp.1167-1177 / 2020
  • In this study, we established a process for collecting and utilizing video scripts from disaster sites based on public-private partnerships. Its purpose is to actively utilize private-sector capabilities in disaster management and to quickly share video scripts for identifying field conditions. Based on the experience of actually operating the public-private partnership system during the 2019 typhoons, we also derived implications for continuous operation of the process. The results are meaningful in that the government established a process for collecting and utilizing video scripts through public-private partnerships during the initial disaster response phase, and we confirmed the possibility of spreading a positive perception of disaster management organizations. However, as an experimental pilot operation in the R&D stage, there is a limit to how far the results can be actualized and put to practical use. In addition, for continuous operation of the system, institutional support measures are needed, such as organization, operating infrastructure, education and training programs, and policy making.

A Study on Non-Face-to-Face General English Courses for International Students: Reading Movie Scripts Aloud (유학생 대상의 비대면 교양 영어 수업 방안: 영화 대본 소리 내어 읽기를 중심으로)

  • Lee, Ji-Hyun
    • The Journal of the Convergence on Culture Technology / v.7 no.4 / pp.267-272 / 2021
  • This study's purpose is to investigate the effects of reading movie scripts aloud in non-face-to-face general English courses on international students' English ability in the COVID-19 era. A general English class was delivered once a week for 15 weeks to 47 international students at a Seoul-based university. The animated movie Tangled and its script were used as learning materials. Biweekly, students had to watch video lectures through the university's learning management system (LMS) and read scripts aloud through Zoom. In the video lectures, the teacher went over specific vocabulary and interpreted the movie scripts in easy Korean. In the second activity, through Zoom, international students read the movie script aloud individually and in groups. The post-test revealed significant improvements in both reading and writing compared to the pre-test. In the study's survey, participants exhibited positive attitudes in the affective domains (understanding, satisfaction, interest, and recommendation).
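
The pre-/post-test comparison mentioned above is typically checked with a paired significance test. The following is a minimal sketch, not the author's analysis, using SciPy's paired t-test on invented score values.

```python
# Minimal sketch (assumed analysis, not the author's code): a paired t-test
# comparing pre- and post-test reading scores; the score values are invented.
from scipy import stats

pre = [55, 62, 48, 70, 58, 65, 52, 60]    # hypothetical pre-test scores
post = [63, 70, 55, 78, 66, 72, 60, 69]   # hypothetical post-test scores

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < .05 => significant gain
```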

Contextual In-Video Advertising Using Situation Information (상황 정보를 활용한 동영상 문맥 광고)

  • Yi, Bong-Jun; Woo, Hyun-Wook; Lee, Jung-Tae; Rim, Hae-Chang
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.8 / pp.3036-3044 / 2010
  • With the rapid growth of video data services, demand for providing advertisements or additional information relevant to a particular video scene is increasing. However, the direct use of automated visual analysis or speech recognition on videos has practical limitations at the current level of technology, and video metadata such as the title, category information, or summary does not reflect the content of continuously changing scenes. This work presents a new video contextual advertising system that serves relevant advertisements for a given scene by leveraging the scene's situation information inferred from video scripts. Experimental results show that the use of situation information extracted from scripts leads to better performance and the display of more relevant advertisements to the user.
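
As a rough illustration of the matching step such a system needs (not the paper's method), the sketch below scores a hypothetical ad inventory against a scene's script text using TF-IDF cosine similarity; the scene line and the ad descriptions are invented.

```python
# Minimal sketch of the general idea (not the paper's system): pick the ad whose
# keywords best match the situation described by a scene's script lines.
# The scene script and ad inventory below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

scene_script = "They order coffee and cake while talking about the trip to Jeju."
ads = {
    "cafe_franchise": "coffee cake dessert cafe discount",
    "airline": "flight ticket Jeju travel airline promotion",
    "car_insurance": "car insurance accident coverage quote",
}

vec = TfidfVectorizer().fit(list(ads.values()) + [scene_script])
scene_vec = vec.transform([scene_script])
scores = {
    name: cosine_similarity(scene_vec, vec.transform([text]))[0, 0]
    for name, text in ads.items()
}
best = max(scores, key=scores.get)
print(best, scores)   # the ad most relevant to the scene's situation
```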

On the Relationship between College Students' Attitude toward the Internet and their Self-directed English Learning Ability

  • Park, Kab-Yong; Sung, Tae-Soo; Joo, Chi-Woon
    • Journal of the Korea Society of Computer and Information / v.23 no.2 / pp.117-123 / 2018

Using Mobile Phones in EFL Classes

  • Sung, Tae-Soo; Park, Kab-Yong; Joo, Chi-Woon
    • Journal of the Korea Society of Computer and Information / v.22 no.6 / pp.33-40 / 2017
  • This article investigates the possibility that project-based classes using mobile phones can relieve the monotony of traditional teacher-led classes and encourage students to take a more active part in class. Students in groups choose a genre for their own video projects (e.g., movie, drama, news, documentary, or commercial) and produce video content using a mobile phone for a presentation made at the end of the semester. Because students carry out video-based mobile phone projects, they can work independently outside of class, where time and space are more flexible and students are free from the anxiety of speaking or acting in front of an audience. A mobile phone project consists of around five stages carried out both in and outside of the classroom. All of these stages can be graded independently, including genre selection, drafting of scripts, peer review and revision, rehearsals, and presentation of the video. Feedback is given to students. After the presentation, students filled out a survey questionnaire devised to analyze their responses regarding preferences and the level of difficulty of the project activity. Finally, proposals are made for the introduction of better mobile phone-based project classes.

Performance Enhancement of Scaling Filter and Transcoder using CUDA (CUDA를 활용한 스케일링 필터 및 트랜스코더의 성능향상)

  • Han, Jae-Geun; Ko, Young-Sub; Suh, Sung-Han; Ha, Soon-Hoi
    • Journal of KIISE: Computing Practices and Letters / v.16 no.4 / pp.507-511 / 2010
  • In this paper, we propose to enhance the performance of a software transcoder by using a GPGPU for scaling filters. Video transcoding is a technique that translates a video file into another video file with a different coding algorithm and/or a different frame size. Demand for it increases as more multimedia devices with different specifications coexist in our daily life. Since transcoding is computationally intensive, a software transcoder that runs on a CPU takes a long processing time. In this paper, we achieve significant speed-up by parallelizing the scaling filter on a GPGPU, which can provide significantly larger computation power. Through extensive experiments with various video clips of different sizes and various scaling filter options, it is verified that the enhanced transcoder achieves a 36% performance improvement with the default option, and up to 101% with a certain option.
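
To illustrate the per-pixel parallelism that such a GPGPU scaling filter exploits, here is a minimal sketch (not the paper's transcoder) of a nearest-neighbour scaling kernel written with Numba's CUDA support; it assumes a CUDA-capable GPU and the numba package are available.

```python
# Minimal sketch (assumes a CUDA-capable GPU and numba; not the paper's
# transcoder): a nearest-neighbour scaling filter where each output pixel is
# computed by one GPU thread, the kind of per-pixel parallelism CUDA exploits.
import numpy as np
from numba import cuda

@cuda.jit
def scale_nearest(src, dst, sy, sx):
    y, x = cuda.grid(2)                      # one thread per output pixel
    if y < dst.shape[0] and x < dst.shape[1]:
        dst[y, x] = src[int(y * sy), int(x * sx)]

src = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)   # 1080p frame
dst = np.zeros((720, 1280), dtype=np.uint8)                     # 720p target
sy, sx = src.shape[0] / dst.shape[0], src.shape[1] / dst.shape[1]

threads = (16, 16)
blocks = ((dst.shape[0] + 15) // 16, (dst.shape[1] + 15) // 16)
scale_nearest[blocks, threads](src, dst, sy, sx)   # numba copies arrays to/from GPU
print(dst.shape)
```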

TVML (TV program Making Language) - Automatic TV Program Generation from Text-based Script -

  • Hayashi, Masaki; Ueda, Hirotada; Kurihara, Tsuneya; Yasumura, Michiaki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1999.06a / pp.151-158 / 1999
  • This paper describes TVML (TV program Making Language) for automatically generating television programs from a text-based script. The language describes the contents of a television program using expressions with a high level of abstraction, such as "title #1" and "zoom-in". The software that reads a script written in TVML and automatically generates the program video and audio is called the TVML Player. The paper begins by describing the TVML language specification and the TVML Player. It then describes the "external control mode" of the TVML Player, which can be used for applying TVML to interactive applications. Finally, it describes the TVML Editor, a user interface we developed that enables users with no specialized knowledge of computer languages to write TVML scripts. In addition to its role as a television-program production tool, TVML is expected to have a wide range of applications in the network and multimedia fields.
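
The following is a minimal sketch of the general idea of driving production actions from a text-based script, in the spirit of a script player; the command syntax shown is a simplified stand-in, not actual TVML, and the "player" merely prints the actions it would perform.

```python
# Minimal sketch of the idea behind a text-to-program player (not actual TVML
# syntax and not the TVML Player): each line names an abstract direction, and
# the player dispatches it to a concrete production action.
script = """\
title: Evening News
camera: zoom-in
speech: presenter | Good evening, here are today's headlines.
camera: zoom-out
"""

def play(script_text):
    for line in script_text.strip().splitlines():
        command, _, args = line.partition(":")
        command, args = command.strip(), args.strip()
        if command == "title":
            print(f"[CG]     show title card '{args}'")
        elif command == "camera":
            print(f"[CAMERA] {args}")
        elif command == "speech":
            speaker, _, text = args.partition("|")
            print(f"[AUDIO]  {speaker.strip()} says: {text.strip()}")
        else:
            print(f"[WARN]   unknown direction: {line}")

play(script)
```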

Modelling Grammatical Pattern Acquisition using Video Scripts (비디오 스크립트를 이용한 문법적 패턴 습득 모델링)

  • Seok, Ho-Sik; Zhang, Byoung-Tak
    • Annual Conference on Human and Language Technology / 2010.10a / pp.127-129 / 2010
  • This paper introduces a methodology for acquiring grammatical patterns through unsupervised learning by modeling the process of learning language from various corpora. In the proposed method, only parts of latent patterns are represented using a small number of feature combinations, and the represented rules are then combined to search for meaningful grammatical patterns. The approach refines feature combinations into meaningful grammatical patterns based on Bayesian inference and MCMC (Markov Chain Monte Carlo) sampling: using a random hypergraph model, a large number of hyperedges are generated and their weights are adjusted to discover meaningful grammatical patterns. We demonstrate the methodology by acquiring grammatical patterns from the scripts of various videos for young children.
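
As a heavily simplified illustration of reinforcing recurring feature combinations (not the authors' random hypergraph model), the sketch below repeatedly samples small word combinations ("hyperedges") from a toy script corpus and increases their weights when they recur, so frequent combinations surface as candidate grammatical patterns.

```python
# Heavily simplified sketch of the idea (not the authors' model): random
# "hyperedges" are small word combinations sampled from script sentences, and
# their weights grow when they recur, so frequent combinations surface as
# candidate grammatical patterns. The toy corpus is invented.
import random
from collections import Counter

corpus = [
    "the cat is on the mat",
    "the dog is in the house",
    "a bird is on the roof",
]
sentences = [s.split() for s in corpus]

random.seed(0)
weights = Counter()
for _ in range(500):                       # repeated random sampling
    words = random.choice(sentences)
    k = random.choice([2, 3])              # hyperedge size
    if len(words) >= k:
        start = random.randrange(len(words) - k + 1)
        edge = tuple(words[start:start + k])
        weights[edge] += 1                 # reinforce recurring combinations

for edge, w in weights.most_common(5):
    print(w, " ".join(edge))               # frequent patterns rank highly
```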


A News Video Mining based on Multi-modal Approach and Text Mining (멀티모달 방법론과 텍스트 마이닝 기반의 뉴스 비디오 마이닝)

  • Lee, Han-Sung; Im, Young-Hee; Yu, Jae-Hak; Oh, Seung-Geun; Park, Dai-Hee
    • Journal of KIISE: Databases / v.37 no.3 / pp.127-136 / 2010
  • With the rapid growth of information and computer communication technologies, the number of digital documents, including multimedia data, has recently exploded. In particular, news video databases and news video mining have become the subject of extensive research aimed at developing effective and efficient tools for manipulating and analyzing news videos, because of their information richness. However, most research focuses on browsing, retrieval, and summarization of news videos; discovering and analyzing the plentiful latent semantic knowledge in news videos is still at a relatively early stage. In this paper, we propose a news video mining system based on a multi-modal approach and text mining, which uses the visual-textual information of news video clips and their scripts. The proposed system automatically constructs a taxonomy of news video stories with a hierarchical clustering algorithm, one of the standard text mining methods. It then analyzes the topics of news video stories from multiple angles by means of a time-cluster trend graph, a weighted cluster growth index, and network analysis. To demonstrate the validity of our approach, we analyzed the news videos on "The Second Summit of South and North Korea in 2007".
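
As a rough illustration of the taxonomy-construction step (not the authors' system), the sketch below clusters a few invented script snippets hierarchically on TF-IDF features with scikit-learn; stories with similar topics end up in the same cluster.

```python
# Minimal sketch (not the authors' system): grouping news stories by
# hierarchical (agglomerative) clustering of their scripts.
# The script snippets are invented stand-ins for real news transcripts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

scripts = [
    "The two leaders met at the summit to discuss economic cooperation.",
    "Officials signed an agreement on joint economic projects at the summit.",
    "Heavy rain flooded roads in the southern provinces overnight.",
    "The storm damaged homes and cut power across the coastal region.",
]

X = TfidfVectorizer().fit_transform(scripts).toarray()
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
for label, text in zip(labels, scripts):
    print(label, text[:60])   # stories with similar topics share a cluster
```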