• Title/Summary/Keyword: video expression

Search results: 187

Fake News Detection on Social Media using Video Information: Focused on YouTube (영상정보를 활용한 소셜 미디어상에서의 가짜 뉴스 탐지: 유튜브를 중심으로)

  • Chang, Yoon Ho; Choi, Byoung Gu
    • The Journal of Information Systems / v.32 no.2 / pp.87-108 / 2023
  • Purpose: The main purpose of this study is to improve fake news detection performance by using video information, overcoming the limitations of extant text- and image-oriented studies that do not reflect the latest news consumption trends. Design/methodology/approach: This study collected video clips and related information, including news scripts, speakers' facial expressions, and video metadata, from YouTube to develop a fake news detection model. Based on the collected data, the seven non-empty combinations of this information (scripts; video metadata; facial expression; scripts and video metadata; scripts and facial expression; video metadata and facial expression; and scripts, video metadata, and facial expression) were used as input for training and evaluation. The input data were analyzed using six models, including support vector machine and deep neural network, and the area under the curve (AUC) was used to evaluate classification performance. Findings: The AUC and accuracy values of the three-feature combination (scripts, video metadata, and facial expression) were the highest for the logistic regression, naïve Bayes, and deep neural network models. This result implies that fake news detection can be improved by using video information (video metadata and facial expression). The sample size of this study was relatively small; the generalizability of the results would be enhanced with a larger sample.
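
A minimal sketch (not the authors' code) of the evaluation protocol this abstract describes: train a classifier on each non-empty combination of the three feature blocks and compare combinations by AUC. The feature arrays, dimensions, and labels below are hypothetical placeholders.

```python
from itertools import combinations

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200  # hypothetical number of videos
features = {  # placeholder feature blocks per video
    "script": rng.normal(size=(n, 50)),
    "metadata": rng.normal(size=(n, 10)),
    "face": rng.normal(size=(n, 17)),
}
y = rng.integers(0, 2, size=n)  # synthetic labels: 1 = fake, 0 = real

# Enumerate all seven non-empty combinations of the three feature blocks.
for r in range(1, 4):
    for combo in combinations(features, r):
        X = np.hstack([features[k] for k in combo])
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
        print("+".join(combo), f"AUC={auc:.3f}")
```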

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems / v.17 no.2 / pp.337-351 / 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed, and a double-layer cascade structure is used to detect the face in each video image. Two deep convolutional neural networks are then used to extract the time-domain and spatial-domain facial features from the video: a spatial convolutional neural network extracts spatial information features from each frame of the static expression images, and a temporal convolutional neural network extracts dynamic information features from the optical flow computed over multiple frames of expression images. Multiplicative fusion is applied to the spatiotemporal features learned by the two networks. Finally, the fused features are input to a support vector machine to perform the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method reach 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
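
A minimal sketch, under stated assumptions, of the fusion-and-classify stage described above: spatial and temporal feature vectors are fused by element-wise multiplication and classified with an SVM. The two feature extractors are stand-ins (random arrays), not the paper's CNNs.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n, d = 300, 128                           # hypothetical clip count / feature size
spatial_feats = rng.normal(size=(n, d))   # would come from the spatial CNN
temporal_feats = rng.normal(size=(n, d))  # would come from the optical-flow CNN
labels = rng.integers(0, 6, size=n)       # six basic expression classes

# Multiplicative fusion: element-wise product of the two streams.
fused = spatial_feats * temporal_feats

svm = SVC(kernel="rbf")
scores = cross_val_score(svm, fused, labels, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```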

A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems / v.17 no.3 / pp.556-570 / 2021
  • Existing video expression recognition methods mainly focus on spatial feature extraction from video expression images but tend to ignore the dynamic features of video sequences. To solve this problem, a multi-mode convolutional neural network method is proposed that effectively improves the performance of facial expression recognition in video. First, OpenFace 2.0 is used to detect face images in the video, and two deep convolutional neural networks are used to extract spatiotemporal expression features: a spatial convolutional neural network extracts the spatial information features of each static expression image, and a temporal convolutional neural network extracts dynamic information features from the optical flow of multiple expression images. The spatiotemporal features learned by the two networks are then fused by multiplication. Finally, the fused features are input into a support vector machine to perform the facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, outperforming the comparison methods.
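
Both this entry and the previous one build their temporal stream from optical flow. The papers do not state which flow algorithm they use; as one standard choice, here is a hedged sketch computing dense Farnebäck flow per frame pair with OpenCV (the input file name is hypothetical).

```python
import cv2

cap = cv2.VideoCapture("clip.mp4")  # hypothetical face-cropped input clip
ok, prev = cap.read()
assert ok, "could not read clip.mp4"
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

flows = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Returns an HxWx2 array of (dx, dy) displacements per pixel.
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    flows.append(flow)
    prev_gray = gray
cap.release()
print(f"extracted {len(flows)} flow fields")  # input to the temporal CNN
```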

The Climax Expression Analysis Based on the Shot-list Data of Movies (영화의 쇼트리스트 데이터를 기반한 클라이맥스 표현 분석)

  • Lim, Yangmi
    • Journal of Broadcast Engineering / v.21 no.6 / pp.965-976 / 2016
  • With the development of digital video, studies on audio-visual immersion have recently been carried out, but few studies have quantitatively analyzed the content or climax of video. This paper performs a quantitative analysis using shot size, camera angle, camera direction, camera position, and objective versus subjective viewpoint, which are general components of video expression. Because conventions govern how these components are used, climax effects can be observed where a video deliberately breaks those conventions. The paper analyzes shot lists from existing films in terms of these components of video expression, so that several methods commonly used to build a climax can be quantified. The proposed approach of locating climaxes through shot-list analysis can be used to find a specific part of a long video such as a feature film, to search for or index a film's genre, and it can be effective for information-service companies offering searchable movie climaxes.
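
As a purely hypothetical illustration of what quantitative shot-list analysis can look like (this is not the paper's algorithm), the sketch below scores sliding windows of a shot list on the common heuristic that climaxes coincide with runs of shorter shots and tighter framing, and flags the best-scoring window.

```python
# Hypothetical heuristic: shorter shots and tighter framing raise the score.
SHOT_SIZE = {"ELS": 6, "LS": 5, "MLS": 4, "MS": 3, "MCU": 2, "CU": 1, "ECU": 0}

# (shot size, duration in seconds) -- a made-up shot list.
shots = [("LS", 8.0), ("MS", 6.5), ("MS", 5.0), ("MCU", 3.0),
         ("CU", 1.5), ("CU", 1.0), ("ECU", 0.8), ("MS", 6.0)]

def climax_score(window):
    """Short mean duration and tight mean framing give a high score."""
    mean_dur = sum(d for _, d in window) / len(window)
    mean_size = sum(SHOT_SIZE[s] for s, _ in window) / len(window)
    return -(mean_dur + mean_size)

W = 4  # window length in shots
best = max(range(len(shots) - W + 1),
           key=lambda i: climax_score(shots[i:i + W]))
print(f"climax candidate: shots {best}..{best + W - 1}")
```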

A Study on the Characteristics and Production of Short-form Fashion Video (숏폼 패션영상의 특성과 제작에 관한 연구)

  • Kim, Sejin
    • Journal of the Korean Society of Clothing and Textiles / v.45 no.1 / pp.200-216 / 2021
  • This article considers short-form fashion videos as distinct from fashion films, defines the concept, details the expressive characteristics of short-form fashion video, and presents a method for producing it. For the methodology, a literature review was conducted to derive the concept and expression techniques, a case study was performed to define the expressive characteristics, and five short-form fashion videos were produced based on the results. The final results are as follows. First, short-form fashion video was defined as a fashion medium for fashion communication within 60 seconds and was classified into three digital image formats. Second, the analysis of short-form fashion video expression revealed simplicity and reconstitution, characterization and remediation, borderlessness and expansion, and synesthetic triggering of the fashion image. Third, five short-form fashion videos were produced on the theme of a digital garden, showing that short-form fashion video expresses content intensively, as a medium in which sensory expression is more prominent than story composition because of the short running time that reflects the tastes of the digital mainstream.

CREATING JOYFUL DIGESTS BY EXPLOITING SMILE/LAUGHTER FACIAL EXPRESSIONS PRESENT IN VIDEO

  • Kowalik, Uwe; Hidaka, Kota; Irie, Go; Kojima, Akira
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.267-272 / 2009
  • Video digests provide an effective way of rapidly reviewing video content due to their very compact form. By watching a digest, users can easily check whether a specific piece of content is worth seeing in full, and the impression created by the digest greatly influences the user's choice when selecting video content. We propose a novel method of automatic digest creation that evokes a joyful impression by exploiting smile/laughter facial expressions as emotional cues of joy in the video. We assume that a digest presenting smiling/laughing faces appeals to the user, who is assured that the smile/laughter expression is caused by joyful events inside the video. For detecting smiling/laughing faces we have developed a neural-network-based method for classifying facial expressions. Video segmentation is performed by automatic shot detection, and appropriate shots are automatically selected for the joyful digest by shot ranking based on the smile/laughter detection result. We report the results of user trials conducted to assess the visual impression of 'joyful' digests produced automatically by our system. The results show that users tend to prefer emotional digests containing laughing faces, which suggests that the attractiveness of automatically created video digests can be improved by extracting emotional cues from the contents through automatic facial expression analysis, as proposed in this paper.
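
A minimal sketch, under assumptions, of the shot-ranking step this abstract describes: shots are ranked by a smile/laughter score and the top-scoring shots are assembled into a digest within a time budget. The shot data and scores are hypothetical; the paper's neural-network smile detector is replaced here by precomputed scores.

```python
from dataclasses import dataclass

@dataclass
class Shot:
    start: float        # seconds
    end: float
    smile_score: float  # e.g. fraction of frames classified smile/laughter

shots = [Shot(0, 12, 0.05), Shot(12, 30, 0.62),
         Shot(30, 41, 0.91), Shot(41, 55, 0.20)]

def make_digest(shots, budget=30.0):
    """Greedily pick the highest-scoring shots within a time budget."""
    digest, used = [], 0.0
    for s in sorted(shots, key=lambda s: s.smile_score, reverse=True):
        length = s.end - s.start
        if used + length <= budget:
            digest.append(s)
            used += length
    return sorted(digest, key=lambda s: s.start)  # restore story order

for s in make_digest(shots):
    print(f"{s.start:>5.1f}-{s.end:<5.1f} score={s.smile_score:.2f}")
```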

Recognition of Human Facial Expression in a Video Image using the Active Appearance Model

  • Jo, Gyeong-Sic; Kim, Yong-Guk
    • Journal of Information Processing Systems / v.6 no.2 / pp.261-268 / 2010
  • Tracking human facial expressions within video has many useful applications, such as surveillance and teleconferencing. The Active Appearance Model (AAM) was initially proposed for face recognition; however, it turns out that the AAM has many advantages for continuous facial expression recognition. We have implemented a continuous facial expression recognition system using an independent AAM fitted with the Inverse Compositional Image Alignment method. The system was evaluated on the standard Cohn-Kanade facial expression database, and the results show that it could support numerous potential applications.
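
The AAM fitting pipeline itself is not readily available in mainstream libraries, so the sketch below substitutes dlib's 68-point landmark detector for the AAM shape parameters, as an illustrative stand-in only: normalized per-frame landmark vectors could be classified in the same way the paper classifies AAM parameters. The model file is the standard dlib landmark model and must be downloaded separately.

```python
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def shape_vector(gray_frame):
    """Return a normalized 136-dim landmark vector for the first face, or None."""
    faces = detector(gray_frame)
    if not faces:
        return None
    shape = predictor(gray_frame, faces[0])
    pts = np.array([(p.x, p.y) for p in shape.parts()], dtype=float)
    pts -= pts.mean(axis=0)            # translation invariance
    pts /= np.linalg.norm(pts) + 1e-9  # scale invariance
    return pts.ravel()

# Per-frame vectors could then be fed to any classifier (e.g. an SVM)
# trained on labeled expression frames, analogous to classifying AAM
# shape/appearance parameters.
```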

A Research on Expressional Directing Methods for Film and Video - Focused on the Image Expression Derived from Spatial Characteristics of the Filming Zone - (영상 표현 연출 방법에 관한 연구 - 촬영 공간의 형태적 특성에 기인하는 이미지 표현을 중심으로 -)

  • Yoo, Taek-Sang
    • Archives of Design Research / v.19 no.2 s.64 / pp.217-228 / 2006
  • It is obvious that the characteristics of the medium are related to the visual expression found in film and video shots. It can be assumed that there is unavoidable distortion when space and objects are framed into an image by the camera, and that this distortion can be exploited for creative expression. Therefore, the relationships among the shape of the filming zone, the structure of the image, and the strategies for positioning the camera, actors, and objects were studied in connection with expressional attempts found in film and video images. Classified cases of expression found in film and video were examined from the viewpoint of the dispositions and movements of the physical elements, such as camera, actors, and objects, that made each designated expression possible. From this, a method for arranging the filming elements within the filming zone was organized as a set of expressional directing methods. The usefulness of the method was later tested by applying it in an educational procedure and analyzing the students' results.

A Local Feature-Based Robust Approach for Facial Expression Recognition from Depth Video

  • Uddin, Md. Zia; Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.3 / pp.1390-1403 / 2016
  • Facial expression recognition (FER) plays a very significant role in computer vision, pattern recognition, and image processing applications such as human-computer interaction, as it provides rich information about people's emotions. For video-based facial expression recognition, depth cameras can be better candidates than RGB cameras: because a person's identity cannot easily be recognized from distance-based depth video, depth cameras also resolve some of the privacy issues that arise with RGB faces. A good FER system relies heavily on the extraction of robust features as well as on the recognition engine. In this work, an efficient novel approach is proposed to recognize facial expressions from time-sequential depth videos. First, Local Binary Pattern (LBP) features are extracted from the time-sequential depth faces; these are then transformed by Generalized Discriminant Analysis (GDA) to make the features more robust; finally, the LBP-GDA features are fed into Hidden Markov Models (HMMs) to train on and recognize the different facial expressions. The proposed depth-based facial expression recognition approach is compared with conventional approaches such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA), and it outperforms them with better recognition rates.
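
A hedged sketch of the first stage named in this abstract: per-frame uniform LBP histograms, computed here with scikit-image on synthetic frames. The later GDA and HMM stages are not sketched; scikit-learn's LinearDiscriminantAnalysis would be the closest commonly available substitute for GDA.

```python
import numpy as np
from skimage.feature import local_binary_pattern

P, R = 8, 1  # 8 neighbors at radius 1 (a standard LBP configuration)

def lbp_histogram(depth_face):
    """Uniform LBP histogram of one depth face image."""
    codes = local_binary_pattern(depth_face, P, R, method="uniform")
    n_bins = P + 2  # uniform patterns plus one catch-all bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

frames = np.random.default_rng(2).normal(size=(30, 64, 64))  # synthetic clip
sequence = np.stack([lbp_histogram(f) for f in frames])
print(sequence.shape)  # (30, 10): one histogram per frame, ready for GDA/HMM
```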

Style Synthesis of Speech Videos Through Generative Adversarial Neural Networks (적대적 생성 신경망을 통한 얼굴 비디오 스타일 합성 연구)

  • Choi, Hee Jo; Park, Goo Man
    • KIPS Transactions on Software and Data Engineering / v.11 no.11 / pp.465-472 / 2022
  • In this paper, a style synthesis network is trained via StyleGAN to perform style synthesis, and a video synthesis network is trained to generate the style-synthesized video. To address the problem that gaze or expression does not transfer stably, 3D face reconstruction technology is applied so that important features such as head pose, gaze, and expression can be controlled using 3D face information. In addition, by training the Head2head network's discriminators for dynamics, mouth shape, image, and gaze, it is possible to create a stable style-synthesized video that maintains greater plausibility and consistency. Using the FaceForensics dataset and the MetFaces dataset, it was confirmed that performance increased: one video is converted into another while maintaining consistent movement of the target face, and natural data are generated through video synthesis using 3D face information from the source video's face.
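
As background for the multi-discriminator training this abstract mentions, here is a generic adversarial training step in PyTorch, a hedged illustration of the basic GAN pattern rather than the paper's Head2head/StyleGAN setup: one discriminator update and one generator update with the non-saturating loss, the step such systems repeat for each discriminator.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784))
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(16, 784)  # stand-in for real frames
z = torch.randn(16, 64)      # latent codes

# Discriminator step: push real toward 1, generated toward 0.
fake = G(z).detach()
loss_d = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: fool the discriminator (generated toward 1).
loss_g = bce(D(G(z)), torch.ones(16, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
print(f"loss_d={loss_d.item():.3f} loss_g={loss_g.item():.3f}")
```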