• Title/Summary/Keyword: Video expression method

Search Results: 55

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan / Journal of Information Processing Systems / v.17 no.2 / pp.337-351 / 2021
  • Automatically recognizing facial expressions in video sequences is challenging because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed, and a double-layer cascade structure is used to detect the face in each video image. Two deep convolutional neural networks then extract the temporal-domain and spatial-domain facial features of the video: a spatial convolutional neural network extracts spatial information features from each static expression frame, while a temporal convolutional neural network extracts dynamic information features from the optical flow computed over multiple expression frames. The spatiotemporal features learned by the two networks are combined by multiplicative fusion, and the fused features are input to a support vector machine to perform the facial expression classification. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show recognition rates of 88.67%, 70.32%, and 63.84%, respectively, higher than those of other recently reported methods.
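
The two-stream multiplicative fusion described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: generic ResNet-18 backbones stand in for the paper's unspecified spatial and temporal CNNs, and the shapes and flow-stack depth are invented for the example.

```python
import torch
import torchvision.models as models

# Two generic backbones stand in for the paper's spatial and temporal CNNs.
spatial_cnn = models.resnet18(weights=None)   # fed single RGB frames
temporal_cnn = models.resnet18(weights=None)  # fed stacked optical-flow maps
# Assume a stack of 5 flow fields (2 channels each) feeds the temporal stream.
temporal_cnn.conv1 = torch.nn.Conv2d(10, 64, kernel_size=7, stride=2,
                                     padding=3, bias=False)
# Drop the classification heads to expose 512-d feature vectors.
spatial_cnn.fc = torch.nn.Identity()
temporal_cnn.fc = torch.nn.Identity()

def fused_features(rgb_frames, flow_stacks):
    """Element-wise (multiplicative) fusion of the two streams."""
    with torch.no_grad():
        f_spatial = spatial_cnn(rgb_frames)     # (N, 512)
        f_temporal = temporal_cnn(flow_stacks)  # (N, 512)
    return f_spatial * f_temporal  # Hadamard product, then fed to an SVM

# Example shapes: 4 clips, 224x224 face crops.
feats = fused_features(torch.randn(4, 3, 224, 224),
                       torch.randn(4, 10, 224, 224))
print(feats.shape)  # torch.Size([4, 512])
```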

A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun / Journal of Information Processing Systems / v.17 no.3 / pp.556-570 / 2021
  • Existing video expression recognition methods focus mainly on spatial feature extraction from expression images and tend to ignore the dynamic features of video sequences. To solve this problem, a multi-mode convolutional neural network method is proposed to improve the performance of facial expression recognition in video. First, OpenFace 2.0 is used to detect face images in the video, and two deep convolutional neural networks are used to extract spatiotemporal expression features: a spatial network extracts spatial information features from each static expression image, and a temporal network extracts dynamic information features from the optical flow of multiple expression images. The spatiotemporal features learned by the two networks are then fused by multiplication, and the fused features are input into a support vector machine for facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, outperforming the comparison methods.
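
Both this paper and the previous one feed optical flow to the temporal stream. A minimal sketch of that step is below, using OpenCV's Farneback estimator as a stand-in for whichever flow method the authors actually use (the abstract does not specify):

```python
import cv2
import numpy as np

def flow_stack(gray_frames):
    """Dense optical flow between consecutive grayscale face crops.

    gray_frames: list of HxW uint8 images from one expression clip.
    Returns an array of shape (len(gray_frames)-1, H, W, 2) of (dx, dy)
    fields, which would then be stacked as channels for the temporal stream.
    """
    flows = []
    for prev, nxt in zip(gray_frames, gray_frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(
            prev, nxt, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        flows.append(flow)
    return np.stack(flows)

# Example: 6 random 112x112 "frames" yield 5 flow fields.
frames = [np.random.randint(0, 255, (112, 112), np.uint8) for _ in range(6)]
print(flow_stack(frames).shape)  # (5, 112, 112, 2)
```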

The Climax Expression Analysis Based on the Shot-list Data of Movies (영화의 쇼트리스트 데이터를 기반한 클라이맥스 표현 분석)

  • Lim, Yangmi / Journal of Broadcast Engineering / v.21 no.6 / pp.965-976 / 2016
  • Owing to the development of digital video, studies on audio-visual immersion have recently been carried out, but few studies have quantitatively analyzed the content or climax of video. This paper quantitatively analyzes the general components of video expression: shot size, camera angle, camera direction, camera position, and objective versus subjective viewpoint. Because there are established rules for how these components are used, climax effects can be observed where a video deliberately breaks those rules. The paper analyzes shot lists of existing movies in terms of these components and thereby quantifies several methods commonly used to construct a climax. The proposed shot-list-based approach to locating the climax can be used to find a specific part of a long video such as a feature film, to identify or index a movie's genre, and it can be effective for information services that offer retrieved climax scenes.
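
As an illustration of the kind of quantitative shot-list analysis the paper describes, the toy sketch below scans a shot list for the window with the fastest cutting rate, one common marker of a climax. The data layout, shot-size coding, and window size are invented for the example and are not the paper's actual scheme.

```python
from statistics import mean

# Each shot: (start_seconds, duration_seconds, shot_size), with shot_size
# coded 1 (extreme close-up) .. 7 (extreme long shot). Toy data only.
shots = [(0.0, 6.1, 5), (6.1, 4.8, 4), (10.9, 2.0, 2), (12.9, 1.4, 1),
         (14.3, 1.1, 2), (15.4, 0.9, 1), (16.3, 5.5, 6)]

def climax_window(shots, win=3):
    """Return start time of the window with the shortest mean shot length."""
    best_t, best_len = None, float("inf")
    for i in range(len(shots) - win + 1):
        avg = mean(d for _, d, _ in shots[i:i + win])
        if avg < best_len:
            best_len, best_t = avg, shots[i][0]
    return best_t, best_len

t, avg = climax_window(shots)
print(f"candidate climax around t={t:.1f}s (mean shot length {avg:.2f}s)")
```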

CREATING JOYFUL DIGESTS BY EXPLOITING SMILE/LAUGHTER FACIAL EXPRESSIONS PRESENT IN VIDEO

  • Kowalik, Uwe; Hidaka, Kota; Irie, Go; Kojima, Akira / Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.267-272 / 2009
  • Video digests provide an effective way of reviewing video content rapidly due to their very compact form. By watching a digest, users can easily check whether a specific piece of content is worth seeing in full, so the impression created by the digest greatly influences the user's choice when selecting video content. We propose a novel method of automatic digest creation that evokes a joyful impression by exploiting smile/laughter facial expressions as emotional cues of joy in the video. We assume that a digest presenting smiling/laughing faces appeals to the user, since he/she is assured that the smile/laughter is caused by joyful events inside the video. To detect smiling/laughing faces, we developed a neural-network-based method for classifying facial expressions. Video segmentation is performed by automatic shot detection, and appropriate shots for the joyful digest are selected automatically by ranking shots on the smile/laughter detection result. We report the results of user trials conducted to assess the visual impression made by 'joyful' digests created automatically by our system. The results show that users tend to prefer emotional digests containing laughing faces, suggesting that the attractiveness of automatically created video digests can be improved by extracting emotional cues from the content through automatic facial expression analysis as proposed in this paper.
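
The shot-ranking step can be sketched as follows. This is a hedged illustration of the idea only: the per-shot smile/laughter scores are assumed to come from a facial-expression classifier like the paper's, and the greedy budget rule is an invented simplification.

```python
def build_digest(shots, budget_seconds):
    """shots: list of (start, duration, smile_score). Returns the chosen
    shots in chronological order, greedily favoring high smile/laughter
    scores until the digest length budget is exhausted."""
    ranked = sorted(shots, key=lambda s: s[2], reverse=True)
    chosen, used = [], 0.0
    for shot in ranked:
        if used + shot[1] <= budget_seconds:
            chosen.append(shot)
            used += shot[1]
    return sorted(chosen, key=lambda s: s[0])  # restore playback order

digest = build_digest(
    [(0, 8, 0.1), (8, 5, 0.9), (13, 6, 0.4), (19, 4, 0.8)],
    budget_seconds=10)
print(digest)  # -> [(8, 5, 0.9), (19, 4, 0.8)]
```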

Recognition of Human Facial Expression in a Video Image using the Active Appearance Model

  • Jo, Gyeong-Sic; Kim, Yong-Guk / Journal of Information Processing Systems / v.6 no.2 / pp.261-268 / 2010
  • Tracking human facial expressions within a video image has many useful applications, such as surveillance and teleconferencing. The Active Appearance Model (AAM) was initially proposed for face recognition; however, it turns out that the AAM has many advantages for continuous facial expression recognition. We have implemented a continuous facial expression recognition system using an independent AAM fitted with the Inverse Compositional Image Alignment method. The system was evaluated on the standard Cohn-Kanade facial expression database, and the results show that it could have numerous potential applications.
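
The abstract does not detail the classification stage, so the sketch below only illustrates the overall shape of such a system: AAM fitting (assumed done elsewhere via inverse compositional alignment) yields one parameter vector per frame, and a stand-in SVM classifies those vectors. The data here are random placeholders.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: in a real system, X would hold per-frame AAM shape and
# appearance parameters fitted on Cohn-Kanade sequences, and y the labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))    # (n_frames, n_AAM_parameters)
y = rng.integers(0, 6, size=120)  # six basic expression classes

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```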

A Research on Expressional Directing Methods for Film and Video - Focused on the Image Expression derived from Spatial Characteristics of the Filming Zone - (영상 표현 연출 방법에 관한 연구 - 촬영 공간의 형태적 특성에 기인하는 이미지 표현을 중심으로 -)

  • Yoo, Taek-Sang / Archives of design research / v.19 no.2 s.64 / pp.217-228 / 2006
  • The characteristics of the medium are clearly related to the visual expression found in film and video shots. Unavoidable distortion occurs when space and objects are framed into an image by the camera, and this distortion can be exploited for creative expression. This study therefore examined the relationships among the shape of the filming zone, the structure of the image, and strategies for disposing the camera, actors, and objects, in light of expressional attempts found in film and video images. Classified cases of expression in film and video were studied from the viewpoint of the dispositions and movements of the physical elements, such as camera, actors, and objects, that made each designated expression possible. From this, a method for arranging the elements of filming within the filming zone was derived as an expressional directing method. The usefulness of the method was then tested by applying it in an educational procedure and analyzing the students' results.

A Study on the Characteristics and Production of Short-form Fashion Video (숏폼 패션영상의 특성과 제작에 관한 연구)

  • Kim, Sejin / Journal of the Korean Society of Clothing and Textiles / v.45 no.1 / pp.200-216 / 2021
  • This article considers short-form fashion video as distinct from fashion film, defines the concept, details its expressive characteristics, and describes a method of producing it. For the methodology, a literature review was conducted to derive the concept and expression techniques, a case study was performed to define the expressive characteristics, and five short-form fashion videos were produced based on the results. The findings are as follows. First, short-form fashion video was defined as a fashion medium for fashion communication within 60 seconds and classified into three digital image formats. Second, analysis of short-form fashion videos revealed the following expressive characteristics of the fashion image: simplicity and reconstitution, characterization and remediation, borderlessness and expansion, and synesthesia triggering. Third, five short-form fashion videos were produced on the theme of a digital garden. They show that short-form fashion video expresses content intensively, as a medium in which sensational expression is more prominent than story composition because of the short running time, reflecting the taste of the digital mainstream.

Micro-Expression Recognition Based on Optical Flow Features and Improved MobileNetV2

  • Xu, Wei; Zheng, Hao; Yang, Zhongxue; Yang, Yingjie / KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.6 / pp.1981-1995 / 2021
  • When a person tries to conceal emotions, the real emotions manifest themselves as micro-expressions. Facial micro-expression recognition remains extremely challenging in the field of pattern recognition, because it is difficult to devise feature extraction methods that cope with the small magnitude and short duration of micro-expressions; most existing methods rely on hand-crafted features to capture subtle facial movements. In this study, we introduce a method that combines optical flow and deep learning. First, we take the onset frame and the apex frame from each video sequence. Then, the motion features between these two frames are extracted using the optical flow method. Finally, the features are input into an improved MobileNetV2 model, and an SVM is applied to classify the expressions. To evaluate the effectiveness of the method, we conduct experiments on the public spontaneous micro-expression database CASME II. Under leave-one-subject-out cross-validation, the recognition accuracy reaches 53.01% and the F-score reaches 0.5231, showing that the proposed method can significantly improve micro-expression recognition performance.
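
The onset-to-apex pipeline can be sketched as below. This is an illustration under stated assumptions: Farneback flow stands in for the paper's optical-flow method, a stock torchvision MobileNetV2 stands in for the authors' improved variant, and packing flow plus its magnitude into three channels is an invented convenience.

```python
import cv2
import numpy as np
import torch
import torchvision.models as models

def onset_apex_flow(onset_gray, apex_gray):
    """Dense optical flow between the onset and apex frames of one clip."""
    return cv2.calcOpticalFlowFarneback(
        onset_gray, apex_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

backbone = models.mobilenet_v2(weights=None)
backbone.classifier = torch.nn.Identity()  # expose 1280-d features

def micro_expression_features(onset_gray, apex_gray):
    flow = onset_apex_flow(onset_gray, apex_gray)            # (H, W, 2)
    mag = np.linalg.norm(flow, axis=2, keepdims=True)        # flow magnitude
    x = np.concatenate([flow, mag], axis=2)                  # 3 "channels"
    x = torch.from_numpy(x).permute(2, 0, 1)[None].float()   # (1, 3, H, W)
    with torch.no_grad():
        return backbone(x)  # feature vector, to be classified by an SVM

onset = np.random.randint(0, 255, (112, 112), np.uint8)
apex = np.random.randint(0, 255, (112, 112), np.uint8)
print(micro_expression_features(onset, apex).shape)  # torch.Size([1, 1280])
```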

Audio and Video Bimodal Emotion Recognition in Social Networks Based on Improved AlexNet Network and Attention Mechanism

  • Liu, Min; Tang, Jun / Journal of Information Processing Systems / v.17 no.4 / pp.754-771 / 2021
  • In continuous dimensional emotion recognition, the parts that highlight emotional expression differ across modalities, and different modalities influence the emotional state to different degrees. This paper therefore studies the fusion of the two most important modalities in emotion recognition, voice and visual expression, and proposes a dual-modal emotion recognition method that combines an improved AlexNet network with an attention mechanism. After simple preprocessing of the audio and video signals, prior knowledge is first used to extract audio features. Facial expression features are then extracted by the improved AlexNet network. Finally, a multimodal attention mechanism fuses the facial expression features with the audio features, and an improved loss function is used to mitigate the missing-modality problem, improving the robustness of the model and the performance of emotion recognition. Experimental results show that the concordance correlation coefficients of the proposed model in the arousal and valence dimensions are 0.729 and 0.718, respectively, superior to several comparison algorithms.
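
One common form of the attention-based fusion step is sketched below: learned weights decide how much each modality contributes per sample. This is an illustration of the general idea, not the paper's exact architecture, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Weighted fusion of audio and visual features via learned attention."""
    def __init__(self, audio_dim, visual_dim, fused_dim):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, fused_dim)
        self.visual_proj = nn.Linear(visual_dim, fused_dim)
        self.attn = nn.Linear(fused_dim, 1)  # scores each modality embedding

    def forward(self, audio_feat, visual_feat):
        a = torch.tanh(self.audio_proj(audio_feat))    # (N, fused_dim)
        v = torch.tanh(self.visual_proj(visual_feat))  # (N, fused_dim)
        stacked = torch.stack([a, v], dim=1)           # (N, 2, fused_dim)
        weights = torch.softmax(self.attn(stacked), dim=1)  # (N, 2, 1)
        return (weights * stacked).sum(dim=1)  # weighted sum over modalities

fusion = AttentionFusion(audio_dim=88, visual_dim=256, fused_dim=128)
fused = fusion(torch.randn(4, 88), torch.randn(4, 256))
print(fused.shape)  # torch.Size([4, 128])
```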

A Study for the Broadcasting Makeup and Image Representation Changes in the Digital Media Era (디지털 미디어 시대의 방송 분장 변화에 관한 연구)

  • Barng, Kee-Jung; Kim, Kyung-Hee; Kim, Ju-Duck / The Research Journal of the Costume Culture / v.18 no.6 / pp.1194-1210 / 2010
  • The influence of digital media, following the environmental changes of multimedia, has come to matter more than we might imagine. With the high resolution of HDTV in the digital media era, careful attention must be paid to skin color as rendered by TV color reproduction, lighting, and clothing. Broadcasting makeup expression techniques have changed with the broadcasting environment in several ways. First, natural makeup techniques are needed, with the whole makeup applied evenly and very delicately; the work becomes far more detailed and requires more time than conventional makeup. Second, considerable time and manpower are required for elaborate, realistic treatment across all production areas, including background sets, stage properties, and makeup. Third, because realistic expression is now visible on screen, the importance of basic makeup is heightened, to the point that skin-care shops have become prevalent; the development of HD cosmetics, such as foundations with fine particles in new materials and diverse colors, is needed. Unlike stage makeup, the video-media field delivers images vividly to viewers through cameras, picture tubes, and other broadcasting equipment, eliminating the sense of distance, so makeup technology suited to HDTV is needed as the broadcasting environment and media change. As video equipment becomes increasingly high-tech and precise, makeup experts need working knowledge of that equipment before doing makeup and must adapt accordingly. Follow-up research is needed on advances in makeup methods and on more diverse dedicated cosmetics, along with research on color tones suitable for HDTV.