• Title/Summary/Keyword: facial expression analysis

Harris Corner Detection for Eyes Detection in Facial Images

  • Navastara, Dini Adni;Koo, Kyung-Mo;Park, Hyun-Jun;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2013.05a / pp.373-376 / 2013
  • Nowadays, eye detection is required and considered the most important step in several applications, such as eye tracking, face identification and recognition, facial expression analysis, and iris detection. This paper presents eye detection in facial images using Harris corner detection. First, Haar-like features are used to detect the face region in an image. A projection function is then applied to separate the eye region from the whole face region. In the last step, Harris corner detection is used to locate the eyes. In the experiments, the eye locations were detected accurately and effectively in both grayscale and color facial images.
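
A minimal sketch of the pipeline this abstract describes, using OpenCV: Haar-cascade face detection, a horizontal intensity projection to isolate the dark eye band, then Harris corners inside that band. The bundled cascade file, the projection heuristic, and the thresholds are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def detect_eye_corners(image_path):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Step 1: Haar-like feature face detection (OpenCV's bundled cascade).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return []
    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]

    # Step 2: horizontal intensity projection. Rows with the lowest mean
    # intensity in the upper half of the face are assumed to be the eye band.
    row_means = face[: h // 2].mean(axis=1)
    eye_row = int(np.argmin(row_means))
    top = max(0, eye_row - h // 10)
    band = face[top: eye_row + h // 10]

    # Step 3: Harris corner detection inside the eye band; the strongest
    # responses cluster around the eye corners.
    response = cv2.cornerHarris(np.float32(band), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(response > 0.01 * response.max())
    # Return corner coordinates in the original image frame.
    return [(x + cx, y + top + cy) for cy, cx in zip(ys, xs)]
```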

Skin Color Based Facial Features Extraction

  • Alom, Md. Zahangir;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference / 2011.11a / pp.351-354 / 2011
  • This paper discusses facial feature extraction based on a proposed skin color model. Different parts of the face in the input image are segmented based on the skin color model. The paper also presents a method to detect the eye and mouth positions on the face. A height-to-width ratio (δ = 1.1618) based technique is also proposed for accurate detection of the face region in the segmented image. Finally, the desired part of the face is cropped. This extracted face part is useful for face detection and recognition, facial feature analysis, and expression analysis. Experimental results show that the proposed method is robust and accurate.
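
A rough sketch of skin-color segmentation plus the ratio test the abstract mentions. The paper's actual skin color model is not given in the abstract, so the YCrCb thresholds below are a common rule of thumb; the minimum blob area and the ratio tolerance are also assumptions.

```python
import cv2
import numpy as np

DELTA = 1.1618  # height-to-width ratio reported in the abstract

def skin_mask(bgr):
    """Segment skin-colored pixels in YCrCb space (assumed thresholds)."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

def crop_face_candidates(bgr, tolerance=0.35):
    """Keep skin blobs whose height/width ratio is close to DELTA."""
    mask = skin_mask(bgr)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    faces = []
    for i in range(1, n):  # label 0 is the background component
        x, y, w, h, area = stats[i]
        if area > 500 and abs(h / w - DELTA) < tolerance:
            faces.append(bgr[y:y + h, x:x + w])
    return faces
```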

An Action Unit co-occurrence constraint 3DCNN based Action Unit recognition approach

  • Jia, Xibin;Li, Weiting;Wang, Yuechen;Hong, SungChan;Su, Xing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.3 / pp.924-942 / 2020
  • Facial expressions vary widely across individuals due to psychological factors, whereas facial actions are comparatively stable because of the fixed anatomical structure of the face. Improving the performance of action unit recognition will therefore facilitate facial expression recognition and provide a sound basis for mental state analysis. However, it remains a challenging task with limited recognition accuracy, because the muscle movements around the face are subtle and the corresponding facial actions are not obvious. Taking into account that muscle movements affect one another when a person expresses an emotion, we propose to make full use of the co-occurrence relationships among action units (AUs). Considering the dynamic characteristics of AUs as well, we adopt the 3D Convolutional Neural Network (3DCNN) as the base framework and propose to recognize multiple action units around the brows, nose, and mouth, which contribute most to emotion expression, using their co-occurrence relationships as constraints. Experiments were conducted on the public CASME dataset and its variant, the CASME2 dataset. The results show that the proposed AU co-occurrence constraint 3DCNN-based AU recognition approach outperforms current approaches and demonstrates the effectiveness of exploiting AU relationships in AU recognition.
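
As a hedged illustration of the idea, the sketch below pairs a small 3D CNN multi-label AU classifier with a soft co-occurrence penalty. The layer sizes, the penalty form, and the co-occurrence matrix `co_matrix` are all assumptions; the paper's exact architecture and constraint formulation are not specified in the abstract.

```python
import torch
import torch.nn as nn

class AU3DCNN(nn.Module):
    """Minimal 3D CNN for multi-label AU recognition on clip tensors of
    shape (batch, 1, frames, height, width). Layer sizes are illustrative,
    not the paper's architecture."""
    def __init__(self, num_aus=12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_aus)

    def forward(self, clips):
        return self.classifier(self.features(clips).flatten(1))

def co_occurrence_penalty(logits, co_matrix):
    """Illustrative soft constraint: penalize predicted pairwise AU
    activations that deviate from an empirical co-occurrence matrix
    (co_matrix[i, j] in [0, 1], estimated from training labels)."""
    probs = torch.sigmoid(logits)
    pairwise = probs.unsqueeze(2) * probs.unsqueeze(1)  # (B, A, A)
    return ((pairwise - co_matrix) ** 2).mean()
```

A plausible training objective under these assumptions is the usual multi-label binary cross-entropy plus a weighted `co_occurrence_penalty` term.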

Detection of Facial Direction using Facial Features (얼굴 특징 정보를 이용한 얼굴 방향성 검출)

  • Park, Ji-Sook;Dong, Ji-Youn
    • Journal of Internet Computing and Services / v.4 no.6 / pp.57-67 / 2003
  • The recent rapid development of multimedia and optical technologies has brought great attention to application systems that process facial image features. Previous research efforts in facial image processing have mainly focused on the recognition of human faces and facial expression analysis using frontal face images; not much research has been carried out on image-based detection of face direction. Moreover, existing approaches to detecting face direction, which normally use sequential images captured by a single camera, have the limitation that a frontal image must be given before any other images. In this paper, we propose a method to detect face direction using facial features, specifically the facial trapezoid defined by the two eyes and the lower lip. The proposed method forms a facial direction formula, defined with statistical data about the ratio of the right and left areas in the facial trapezoid, to identify whether the face is directed toward the right or the left. The proposed method can be effectively used for automatic photo arrangement systems, which often need to set a different left or right margin for a photo according to the face direction of the person in the photo.
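
The right/left-area comparison can be sketched geometrically: split the eye-eye-lip triangle at the lip's projection onto the eye line and compare the two halves. The shoelace computation is standard; the split construction, the threshold, and the mapping from ratio to direction are assumptions, since the paper derives its formula from statistical data not shown here.

```python
import numpy as np

def shoelace_area(pts):
    """Polygon area via the shoelace formula."""
    x, y = np.asarray(pts, float).T
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def face_direction(left_eye, right_eye, lower_lip, threshold=0.1):
    """Compare the two halves of the eye-eye-lip triangle, split at the
    lip's projection onto the eye line. The threshold and the ratio-to-
    direction mapping are assumptions, not the paper's fitted formula."""
    l, r, m = (np.asarray(p, float) for p in (left_eye, right_eye, lower_lip))
    eye_dir = (r - l) / np.linalg.norm(r - l)
    foot = l + np.dot(m - l, eye_dir) * eye_dir  # lip projected onto eye line
    ratio = shoelace_area([foot, r, m]) / max(shoelace_area([l, foot, m]), 1e-9)
    if ratio > 1 + threshold:
        return "turned toward the left"   # larger right half (assumed mapping)
    if ratio < 1 - threshold:
        return "turned toward the right"
    return "frontal"
```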

Comparative Analysis of Markerless Facial Recognition Technology for 3D Character's Facial Expression Animation -Focusing on the method of Faceware and Faceshift- (3D 캐릭터의 얼굴 표정 애니메이션 마커리스 표정 인식 기술 비교 분석 -페이스웨어와 페이스쉬프트 방식 중심으로-)

  • Kim, Hae-Yoon;Park, Dong-Joo;Lee, Tae-Gu
    • Cartoon and Animation Studies / s.37 / pp.221-245 / 2014
  • With the success of the world's first 3D computer-animated film, "Toy Story," in 1995, the industrial development of 3D computer animation gained considerable momentum. Consequently, various 3D animations were produced for TV, and high-quality 3D computer animation games became common. To save a large amount of 3D animation production time and cost, technological development has been actively conducted in accordance with the expansion of industrial demand in this field. Compared with the traditional approach of producing animation through hand drawings, the efficiency of producing 3D computer animation is far greater. In this study, an experiment and a comparative analysis of markerless motion capture systems for facial expression animation were conducted, aiming to improve the efficiency of 3D computer animation production. The Faceware system, a product of Image Metrics, provides sophisticated production tools despite the complexity of its motion capture recognition and application process. The Faceshift system, a product of the company of the same name, though relatively less sophisticated, provides applications for rapid real-time motion recognition. It is hoped that the results of the comparative analysis presented in this paper will serve as baseline data for selecting the appropriate motion capture and keyframe animation method for the most efficient production of facial expression animation, in accordance with production time and cost, the degree of sophistication, and the media in use.

Expression and Functional Analysis of cofilin1-like in Craniofacial Development in Zebrafish

  • Jin, Sil;Jeon, Haewon;Choe, Chong Pyo
    • Development and Reproduction / v.26 no.1 / pp.23-36 / 2022
  • Pharyngeal pouches, a series of outgrowths of the pharyngeal endoderm, are a key epithelial structure governing facial skeleton development in vertebrates. Pouch formation is achieved through collective cell migration and rearrangement of pouch-forming cells controlled by actin cytoskeleton dynamics. While essential transcription factors and signaling molecules have been identified in pouch formation, regulators of actin cytoskeleton dynamics have not yet been reported in any vertebrate. Cofilin1-like (Cfl1l) is a fish-specific member of the actin-depolymerizing factor (ADF)/Cofilin family, a critical regulator of actin cytoskeleton dynamics in eukaryotic cells. Here, we report the expression and function of cfl1l in pouch development in zebrafish. We first showed that fish cfl1l may be an ortholog of vertebrate adf, based on phylogenetic analysis of vertebrate adf and cfl genes. During pouch formation, cfl1l was expressed sequentially in the developing pouches but not in the posterior cell mass in which future pouch-forming cells are present. However, pouches, as well as the facial cartilages whose development depends on pouch formation, were unaffected by loss-of-function mutations in cfl1l. Although a genetic redundancy of Cfl1l with other Cfls cannot be completely ruled out, our results suggest that cfl1l expression in the developing pouches might be dispensable for regulating actin cytoskeleton dynamics in pouch-forming cells.

Face Image Analysis using Adaboost Learning and Non-Square Differential LBP (아다부스트 학습과 비정방형 Differential LBP를 이용한 얼굴영상 특징분석)

  • Lim, Kil-Taek;Won, Chulho
    • Journal of Korea Multimedia Society / v.19 no.6 / pp.1014-1023 / 2016
  • In this study, we present a non-square Differential LBP operation that can describe micro-patterns well in the horizontal and vertical components. We propose a way to represent the LBP operation with various directional components as well as the diagonal component. To verify the validity of the proposed operation, Differential LBP was evaluated with respect to accuracy, sensitivity, and specificity for facial expression classification. In the accuracy comparison, the proposed LBP operation obtains better results than the Square LBP and LBP-CS operations. The proposed Differential LBP also achieves better results than the two previous methods on the sensitivity and specificity indicators for 'Neutral', 'Happiness', 'Surprise', and 'Anger', confirming its superiority.
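
The exact Differential LBP coding is not defined in the abstract, so the sketch below shows only the non-square part of the idea: a plain sign-coded LBP over a rectangular 8-neighborhood whose vertical and horizontal radii differ. The `dy`/`dx` values are illustrative.

```python
import numpy as np

def lbp_code(neighbors, center):
    """Threshold neighbors against the center and pack the bits into a code."""
    bits = (neighbors >= center).astype(np.uint8)
    return sum(int(b) << i for i, b in enumerate(bits))

def non_square_lbp(gray, dy=1, dx=2):
    """LBP over a rectangular (non-square) 8-neighborhood: dy rows and dx
    columns away from the center. dy != dx gives the elongated neighborhoods
    the paper motivates; the 'Differential' coding itself is not specified
    in the abstract, so a plain sign coding is used here."""
    h, w = gray.shape
    out = np.zeros((h, w), np.uint8)
    offsets = [(-dy, -dx), (-dy, 0), (-dy, dx), (0, dx),
               (dy, dx), (dy, 0), (dy, -dx), (0, -dx)]
    for y in range(dy, h - dy):
        for x in range(dx, w - dx):
            c = gray[y, x]
            neighbors = np.array([gray[y + oy, x + ox] for oy, ox in offsets])
            out[y, x] = lbp_code(neighbors, c)
    return out
```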

Face-to-face Communication in Cyberspace using Analysis and Synthesis of Facial Expression

  • Shigeo Morishima
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1999.06a / pp.111-118 / 1999
  • Recently, computers have been able to create a cyberspace that users can walk through using interactive virtual reality techniques. An avatar in cyberspace can provide a virtual face-to-face communication environment. In this paper, an avatar with a real face is realized in cyberspace, and a multiuser communication system is constructed using voice transmitted through the network. Voice from a microphone is transmitted and analyzed, and the avatar's mouth shape and facial expression are estimated and synthesized synchronously in real time. An entertainment application of the real-time voice-driven synthetic face, an example of an interactive movie, is also introduced. Finally, a face motion capture system using a physics-based face model is introduced.
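
As a crude stand-in for the voice-to-mouth synchronization described above, this sketch maps short-time microphone energy to a single mouth-opening parameter. The real system estimates full mouth shapes and facial expressions from the analyzed voice; the energy-to-openness mapping and its constants are assumptions.

```python
import numpy as np

def mouth_openness(frame, floor=1e-4):
    """Map a short audio frame (mono float samples in [-1, 1]) to a
    jaw-open value in [0, 1] via log-scaled RMS energy. Constants are
    illustrative; a real system would drive full mouth-shape parameters."""
    rms = float(np.sqrt(np.mean(frame ** 2)))
    return min(1.0, np.log10(max(rms, floor) / floor) / 4.0)
```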

On Parameterizing of Human Expression Using ICA (독립 요소 분석을 이용한 얼굴 표정의 매개변수화)

  • Song, Ji-Hey;Shin, Hyun-Joon
    • Journal of the Korea Computer Graphics Society / v.15 no.1 / pp.7-15 / 2009
  • In this paper, a novel framework that synthesizes and clones facial expressions in parameter spaces is presented. To overcome the difficulties in manipulating face geometry models with high degrees of freedom, many parameterization methods have been introduced. In this paper, a data-driven parameterization method is proposed that represents a variety of expressions with a small set of fundamental independent movements based on the ICA technique. The face deformation due to the parameters is also learned from the data to capture the nonlinearity of facial movements. With this parameterization, one can control the expression of an animated character's face through the parameters. By separating the parameterization and the deformation learning process, we believe this framework can be adopted for a variety of applications, including expression synthesis and cloning. The experimental results demonstrate the efficient production of realistic expressions using the proposed method.
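
A minimal sketch of the data-driven parameterization idea using scikit-learn's FastICA: learn a few independent movement parameters from flattened mesh frames, edit one, and reconstruct. The data shapes, the number of components, and the purely linear reconstruction are assumptions; the paper additionally learns a nonlinear deformation from the data.

```python
import numpy as np
from sklearn.decomposition import FastICA

# X: (num_frames, num_vertices * 3) matrix of flattened face meshes from an
# expression sequence. Random placeholder data stands in for captured meshes.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 300))

# Learn a small set of independent expression parameters; n_components is a
# modeling choice, not a value taken from the paper.
ica = FastICA(n_components=8, random_state=0)
params = ica.fit_transform(X)      # (frames, 8) expression parameters
basis = ica.mixing_                # (features, 8) independent-movement basis

# Editing one parameter and reconstructing approximates controlling a single
# independent facial movement (linearly, unlike the paper's learned mapping).
edited = params[0].copy()
edited[3] += 2.0                   # strengthen one independent movement
face = edited @ basis.T + ica.mean_
```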

The Relationship between Physically Disability Persons Participation in Exercise, Heart Rate Variance, and Facial Expression Recognition (지체장애인의 운동참여와 심박변이도(HRV), 표정정서인식력과의 관계)

  • Kim, Dong hwan;Baek, Jae keun
    • 재활복지 / v.20 no.3 / pp.105-124 / 2016
  • This study aims to verify the causal relationships among physically disabled persons' participation in exercise, heart rate variability, and facial expression recognition. To achieve this research goal, the study targeted 139 physically disabled persons, sampled using purposive sampling. After visiting sporting stadiums and club facilities where sporting events were held and explaining the purpose of the research in detail, heart rate variability and facial emotion recognition were measured only for those who agreed to participate. With the measurement results, mean values, standard deviations, correlation analysis, and a structural equation model were analyzed, with the following results. The quantity of exercise positively affected the sympathetic and parasympathetic activity of the autonomic nervous system. The exercise history of physically disabled persons was found to have a positive influence on LF/HF and a negative influence on parasympathetic activity. Sympathetic activity turned out to have a positive effect on recognition of the emotion of happiness, while the quantity of exercise had a negative influence on recognition of the emotion of sadness. These findings are discussed in relation to mechanisms of the autonomic nervous system and the facial expression recognition of physically disabled persons.