• Title/Summary/Keyword: Video expression method


Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems
    • /
    • Vol. 17, No. 2
    • /
    • pp.337-351
    • /
    • 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed. Then, a double-layer cascade structure is used to detect a face in each video image. In addition, two deep convolutional neural networks are used to extract the temporal and spatial facial features from the video. The spatial convolutional neural network extracts spatial information features from each frame of the static expression images in the video, while the temporal convolutional neural network extracts dynamic information features from the optical flow computed over multiple frames of expression images. A multiplicative fusion is performed on the spatiotemporal features learned by the two deep convolutional neural networks. Finally, the fused features are input to a support vector machine to perform the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method are as high as 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
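The fusion step this abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the random vectors stand in for features produced by the two deep CNN streams, and the feature dimension, sample count, and labels are invented for the example.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-video features: in the paper these would come from two
# deep CNNs (a spatial stream on static frames, a temporal stream on
# optical flow). Random vectors stand in here: 40 videos, 2 classes.
n_videos, feat_dim = 40, 64
spatial_feats = rng.normal(size=(n_videos, feat_dim))
temporal_feats = rng.normal(size=(n_videos, feat_dim))
labels = rng.integers(0, 2, size=n_videos)

# Multiplicative fusion: element-wise product of the two streams' features.
fused = spatial_feats * temporal_feats

# The fused vectors are then fed to an SVM for expression classification.
clf = SVC(kernel="rbf").fit(fused, labels)
pred = clf.predict(fused)
print(fused.shape)  # (40, 64)
print(pred.shape)   # (40,)
```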

A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems
    • /
    • Vol. 17, No. 3
    • /
    • pp.556-570
    • /
    • 2021
  • Existing video expression recognition methods mainly focus on spatial feature extraction from video expression images but tend to ignore the dynamic features of video sequences. To solve this problem, a multi-mode convolution neural network method is proposed to effectively improve the performance of facial expression recognition in video. First, OpenFace 2.0 is used to detect face images in the video, and two deep convolution neural networks are used to extract spatiotemporal expression features. A spatial convolution neural network extracts the spatial information features of each static expression image, while a temporal convolution neural network extracts dynamic information features from the optical flow of multiple expression images. Then, the spatiotemporal features learned by the two deep convolution neural networks are fused by multiplication. Finally, the fused features are input into a support vector machine to perform the facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, outperforming the other comparison methods.

영화의 쇼트리스트 데이터를 기반한 클라이맥스 표현 분석 (The Climax Expression Analysis Based on the Shot-list Data of Movies)

  • 임양미
    • 방송공학회논문지
    • /
    • Vol. 21, No. 6
    • /
    • pp.965-976
    • /
    • 2016
  • With recent advances in digital video, quantitative studies of audiovisual immersion have progressed, but quantitative analysis of a film's content or of its climax passages has rarely been studied. This study performs a quantitative analysis using common elements of cinematic expression: shot size, camera angle, direction of camera movement, camera position, and the confrontational composition of the actors (objective & subjective). These elements follow conventions, and the climax effect is mainly seen in shots that break those conventions. Based on these expressive elements, this study analyzes the shot-lists of existing films and quantitatively identifies several methods commonly used to produce a climax effect. The proposed shot-list-based method of locating climax sections can be used to retrieve specific parts of long videos such as films, and to search or index film genres. It is also useful in various information services, such as providing retrieved climax clips together with related information.
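The core idea above, that climax shots are those that break a film's dominant shot-composition conventions, can be sketched as a rule-deviation scan over a shot list. The shot records and the "rule-break" criterion below are hypothetical illustrations, not the paper's actual coding scheme.

```python
from collections import Counter

# Toy shot list: each shot tagged with its size and camera angle.
shots = [
    {"size": "medium", "angle": "eye"},
    {"size": "medium", "angle": "eye"},
    {"size": "medium", "angle": "eye"},
    {"size": "extreme close-up", "angle": "low"},  # breaks the pattern
    {"size": "medium", "angle": "eye"},
]

def climax_candidates(shots):
    # Treat the most frequent (size, angle) pair as the film's "rule";
    # shots deviating from it are flagged as candidate climax shots.
    rule = Counter((s["size"], s["angle"]) for s in shots).most_common(1)[0][0]
    return [i for i, s in enumerate(shots) if (s["size"], s["angle"]) != rule]

print(climax_candidates(shots))  # [3]
```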

CREATING JOYFUL DIGESTS BY EXPLOITING SMILE/LAUGHTER FACIAL EXPRESSIONS PRESENT IN VIDEO

  • Kowalik, Uwe;Hidaka, Kota;Irie, Go;Kojima, Akira
    • 한국방송∙미디어공학회:학술대회논문집
    • /
    • 한국방송공학회 2009년도 IWAIT
    • /
    • pp.267-272
    • /
    • 2009
  • Video digests provide an effective way of quickly reviewing video content because of their very compact form. By watching a digest, users can easily check whether a specific item is worth seeing in full, so the impression created by the digest greatly influences the user's choice when selecting video content. We propose a novel method of automatic digest creation that evokes a joyful impression by exploiting smile/laughter facial expressions as emotional cues of joy in video. We assume that a digest presenting smiling/laughing faces appeals to the user, since he/she is assured that the smile/laughter expression is caused by joyful events inside the video. For detecting smile/laughter faces, we developed a neural-network-based method for classifying facial expressions. Video segmentation is performed by automatic shot detection. To create joyful digests, appropriate shots are automatically selected by shot ranking based on the smile/laughter detection result. We report the results of user trials conducted to assess the visual impression made by 'joyful' digests created automatically by our system. The results show that users tend to prefer emotional digests containing laughter faces, suggesting that the attractiveness of automatically created video digests can be improved by extracting emotional cues from the content through automatic facial expression analysis, as proposed in this paper.
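The shot-ranking step described above reduces to a simple top-k selection once per-shot smile/laughter scores exist. The scores below are illustrative placeholders; in the paper they would come from the neural-network facial expression classifier.

```python
def select_digest_shots(shot_scores, n_keep):
    # Rank shots by smile/laughter score (descending), keep the top n_keep,
    # then restore original temporal order for a watchable digest.
    ranked = sorted(range(len(shot_scores)),
                    key=lambda i: shot_scores[i], reverse=True)
    return sorted(ranked[:n_keep])

# Hypothetical per-shot smile scores for a 5-shot video.
smile_scores = [0.1, 0.8, 0.3, 0.9, 0.2]
print(select_digest_shots(smile_scores, 2))  # [1, 3]
```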


Recognition of Human Facial Expression in a Video Image using the Active Appearance Model

  • Jo, Gyeong-Sic;Kim, Yong-Guk
    • Journal of Information Processing Systems
    • /
    • Vol. 6, No. 2
    • /
    • pp.261-268
    • /
    • 2010
  • Tracking human facial expression in a video image has many useful applications, such as surveillance and teleconferencing. The Active Appearance Model (AAM) was initially proposed for face recognition; however, it turns out that the AAM has many advantages for continuous facial expression recognition. We have implemented a continuous facial expression recognition system using the AAM. In this study, we adopt an independent AAM using the Inverse Compositional Image Alignment method. The system was evaluated using the standard Cohn-Kanade facial expression database; the results show that it has numerous potential applications.

영상 표현 연출 방법에 관한 연구 - 촬영 공간의 형태적 특성에 기인하는 이미지 표현을 중심으로 - (A Research on Expressional Directing Methods for Film and Video - Focused on the Image Expression Derived from Spatial Characteristics of the Filming Zone -)

  • 유택상
    • 디자인학연구
    • /
    • Vol. 19, No. 2
    • /
    • pp.217-228
    • /
    • 2006
  • It is well known that the characteristics of the medium strongly influence expression in film and video. Noting that some distortion inevitably occurs when an object is turned into an image through the camera, and that this unavoidable distortion can become a tool for drawing out new expression, this study attempts to derive methods of visual expression centered on the camera as the filming medium, based on the spatial characteristics of the filming zone, i.e., the space covered by the camera's view. To that end, the study first identifies the spatial characteristics of the filming zone and the structure of the images it produces, explores the various possible arrangements and movements of physical elements within that zone, and examines the expressive possibilities of direction they enable. Samples of highly expressive video works were then collected, and each expression was linked to the arrangement and movement of physical elements in the filming zone that produced it; these relationships were categorized and systematized into directing methods that can be used to develop creative visual expression. In addition, to verify whether the theory and methods presented here can be used effectively in directing education, they were introduced into a course, and their effectiveness was confirmed by analyzing assignments given and collected before and after the class.


숏폼 패션영상의 특성과 제작에 관한 연구 (A Study on the Characteristics and Production of Short-form Fashion Video)

  • 김세진
    • 한국의류학회지
    • /
    • Vol. 45, No. 1
    • /
    • pp.200-216
    • /
    • 2021
  • This article considers short-form fashion videos as distinct from fashion films, defines the concept, details the expressive characteristics of short-form fashion video, and describes a method of producing it. For the methodology, a literature review was conducted to derive the concept and expression techniques, a case study was performed to define the expressive characteristics, and five short-form fashion videos were produced based on the results. The findings are as follows. First, short-form fashion video was defined as a fashion medium for fashion communication within 60 seconds and classified into three digital image formats. Second, analysis of short-form fashion videos reveals the simplicity and reconstitution, characterization and remediation, borderlessness and expansion, and synesthetic triggering of the fashion image. Third, five short-form fashion videos were produced on the theme of the digital garden. They show that, owing to the short running time that reflects the tastes of the digital mainstream, short-form fashion video expresses content intensively as a medium in which sensory expression is more prominent than narrative composition.

Micro-Expression Recognition Based on Optical Flow Features and Improved MobileNetV2

  • Xu, Wei;Zheng, Hao;Yang, Zhongxue;Yang, Yingjie
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 15, No. 6
    • /
    • pp.1981-1995
    • /
    • 2021
  • When a person tries to conceal emotions, real emotions manifest themselves in the form of micro-expressions. Facial micro-expression recognition remains extremely challenging in the field of pattern recognition, because it is difficult to find a feature extraction method that copes with the small magnitude and short duration of micro-expressions. Most existing methods rely on hand-crafted features to extract subtle facial movements. In this study, we introduce a method that incorporates optical flow and deep learning. First, we take the onset frame and the apex frame from each video sequence. Then, the motion features between these two frames are extracted using the optical flow method. Finally, the features are input into an improved MobileNetV2 model, and an SVM is applied to classify the expressions. To evaluate the effectiveness of the method, we conduct experiments on the public spontaneous micro-expression database CASME II. Under leave-one-subject-out cross-validation, the recognition accuracy reaches 53.01% and the F-score reaches 0.5231. The results show that the proposed method significantly improves micro-expression recognition performance.

Audio and Video Bimodal Emotion Recognition in Social Networks Based on Improved AlexNet Network and Attention Mechanism

  • Liu, Min;Tang, Jun
    • Journal of Information Processing Systems
    • /
    • Vol. 17, No. 4
    • /
    • pp.754-771
    • /
    • 2021
  • In continuous dimensional emotion recognition, the parts that highlight emotional expression differ across modalities, and the influence of each modality on the emotional state also differs. This paper therefore studies the fusion of the two most important modalities in emotion recognition (voice and visual expression) and proposes a bimodal emotion recognition method that combines an improved AlexNet network with an attention mechanism. After simple preprocessing of the audio and video signals, prior knowledge is first used to extract audio features. Then, facial expression features are extracted by the improved AlexNet network. Finally, a multimodal attention mechanism fuses the facial expression and audio features, and an improved loss function mitigates the missing-modality problem, improving the robustness of the model and the performance of emotion recognition. Experimental results show that the concordance correlation coefficients of the proposed model in the arousal and valence dimensions were 0.729 and 0.718, respectively, which are superior to several comparison algorithms.
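The evaluation metric used above, the concordance correlation coefficient (CCC), has a standard closed form that is easy to compute directly: CCC = 2·cov(x, y) / (var(x) + var(y) + (mean(x) − mean(y))²). The arrays below are toy values, not the paper's predictions.

```python
import numpy as np

def ccc(x, y):
    # Concordance correlation coefficient between predictions x and
    # ground truth y; 1.0 means perfect agreement.
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

print(round(ccc([1, 2, 3, 4], [1, 2, 3, 4]), 3))  # 1.0
```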

디지털 미디어 시대의 방송 분장 변화에 관한 연구 (A Study for the Broadcasting Makeup and Image Representation Changes in the Digital Media Era)

  • 방기정;김경희;김주덕
    • 복식문화연구
    • /
    • Vol. 18, No. 6
    • /
    • pp.1194-1210
    • /
    • 2010
  • The influence of digital media according to environmental change of multi-media came to have significance more than what we imagine. In accordance with high resolution of HDTV in digital media era, the cautious awareness is required for skin color by the immediate color such as replica of TV color, lighting and clothing. As for the broadcasting makeup expression technique caused by a change in broadcasting environment in the digital media era, the first, There is necessity for natural makeup technique, and for expressing the whole makeup evenly and very delicately. The makeup work gets much more delicate. For the delicate expression, more time is being required than the existing makeup time. Second, Lots of time and manpower are required for elaborate real-object processing on all the production fields such as background set, stage properties, and makeup. Third, Realistic expression is available on the screen. Importance of basic makeup is highlighted. Thus, even the skin care shop came to be prevalent. Development in only HD cosmetics is needed for foundation with fine particle in new material and with diverse colors hereafter. The video-media field is a method that is ignored a sense of distance through vehicles such as camera, picture tube, and several kinds of broadcasting machinery and equipment and that is delivered vividly to viewers through screen, unlike the stage makeup, thereby being needed the makeup technology proper for HDTV according to the changing broadcasting environment and media. The video machinery and equipment are proceeding with being gradually high-tech and precise. Thus, an expert in makeup needs to know common sense on the video machinery and equipment before makeup, and needs to make an effort according to it. And, a follow-up research can be said to be necessary on the advance in makeup method and on more diverse dedicated cosmetics along with a research on color tone proper for HDTV.