• Title/Summary/Keyword: Face expression

Search results: 454

Photon-counting linear discriminant analysis for face recognition at a distance

  • Yeom, Seok-Won
    • International Journal of Fuzzy Logic and Intelligent Systems / v.12 no.3 / pp.250-255 / 2012
  • Face recognition has wide applications in security and surveillance systems as well as in robot vision and machine interfaces. Conventional challenges in face recognition include pose, illumination, and expression; face recognition at a distance involves additional challenges because long-distance images are often degraded by poor focusing and motion blur. This study investigates the effectiveness of applying photon-counting linear discriminant analysis (Pc-LDA) to face recognition in harsh environments. The related Fisher linear discriminant analysis is known to be optimal, but it often suffers from the singularity problem because the number of available training images is generally much smaller than the number of pixels. Pc-LDA, on the other hand, realizes the Fisher criterion in the high-dimensional space without any dimensionality reduction, and therefore provides solutions that are more invariant to distortion and degradation. Two decision rules are employed: one based on Euclidean distance, the other on normalized correlation. In the experiments, the asymptotic equivalence of the photon-counting method to the Fisher method is verified with simulated data, and degraded facial images are used to demonstrate the robustness of the photon-counting classifier in harsh environments. Four types of blurring point spread functions are applied to the test images to simulate long-distance acquisition. The results are compared with those of the conventional Eigenface and Fisherface methods and indicate that Pc-LDA outperforms these conventional face recognition techniques.
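
The two decision rules mentioned in the abstract are standard nearest-class rules. The sketch below illustrates how they differ, assuming feature vectors have already been projected by Pc-LDA (the projection itself is not reproduced here); the class means and query vector are synthetic stand-ins.

```python
import numpy as np

def classify_euclidean(query, class_means):
    """Assign the query feature vector to the class whose mean is closest in Euclidean distance."""
    dists = np.linalg.norm(class_means - query, axis=1)
    return int(np.argmin(dists))

def classify_normalized_correlation(query, class_means):
    """Assign the query to the class whose mean has the highest normalized correlation (cosine similarity)."""
    num = class_means @ query
    denom = np.linalg.norm(class_means, axis=1) * np.linalg.norm(query)
    return int(np.argmax(num / denom))

# Toy usage with hypothetical projected features (3 classes, 5-dimensional feature space).
rng = np.random.default_rng(0)
class_means = rng.normal(size=(3, 5))
query = class_means[1] + 0.1 * rng.normal(size=5)
# Prints the predicted class index under each decision rule.
print(classify_euclidean(query, class_means), classify_normalized_correlation(query, class_means))
```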

Feature-Oriented Adaptive Motion Analysis For Recognizing Facial Expression (특징점 기반의 적응적 얼굴 움직임 분석을 통한 표정 인식)

  • Noh, Sung-Kyu;Park, Han-Hoon;Shin, Hong-Chang;Jin, Yoon-Jong;Park, Jong-Il
    • 한국HCI학회:학술대회논문집 / 2007.02a / pp.667-674 / 2007
  • Facial expressions provide significant clues about one's emotional state; however, recognizing facial expressions effectively and reliably has always been a great challenge for machines. In this paper, we report a method of feature-based adaptive motion energy analysis for recognizing facial expressions. Our method builds on the information gain heuristic of an ID3 tree and introduces new approaches to (1) facial feature representation, (2) facial feature extraction, and (3) facial feature classification. We use a minimal set of reasonable facial features, suggested by the information gain heuristic of the ID3 tree, to represent the geometric face model. Feature extraction proceeds as follows: features are first detected and then carefully "selected." Feature "selection" distinguishes features with high variability from those with low variability so that each feature's motion pattern can be estimated effectively. Motion analysis is then performed adaptively for each facial feature; that is, each feature's motion pattern (from the neutral face to the expressed face) is estimated according to its variability. After feature extraction, the facial expression is classified using the ID3 tree (built from the 1728 possible facial expressions) and test images from the JAFFE database. The proposed method overcomes problems encountered in previous methods. First, it is simple but effective: it reliably estimates the expressive facial features by differentiating features with high variability from those with low variability. Second, it is fast, avoiding complicated or time-consuming computations and instead exploiting the motion energy values of a few selected expressive features (acquired from an intensity-based threshold). Finally, our method gives reliable results, with an overall recognition rate of 77%. The effectiveness of the proposed method is demonstrated by the experimental results.
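
As a rough illustration of the classification stage only, the sketch below fits an entropy-based (ID3-style) decision tree to hypothetical per-feature motion-energy values; the feature layout and data are invented, and the paper's 1728-state tree is not reproduced.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical motion-energy measurements for a few facial features
# (columns: left brow, right brow, mouth corners, lips).
X = np.array([
    [0.9, 0.8, 0.1, 0.2],   # raised brows -> "surprise"
    [0.1, 0.1, 0.9, 0.8],   # strong mouth motion -> "happiness"
    [0.2, 0.1, 0.1, 0.1],   # little motion -> "neutral"
    [0.8, 0.9, 0.2, 0.1],
    [0.2, 0.2, 0.8, 0.9],
    [0.1, 0.2, 0.2, 0.1],
])
y = ["surprise", "happiness", "neutral", "surprise", "happiness", "neutral"]

# criterion="entropy" selects splits by information gain, as ID3 does.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
print(tree.predict([[0.85, 0.9, 0.15, 0.1]]))  # likely prediction: ['surprise']
```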

A Study on Fusionization of Woman Characters in Fusion Traditional Drama (사극드라마의 여자캐릭터의 분장특성 연구)

  • Kim, Yu-Gyoung;Cho, Ji-Na
    • Journal of Fashion Business / v.13 no.4 / pp.60-76 / 2009
  • The portrayal of female characters in fusion historical dramas helps keep the narrative fresh rather than conventional. Female characters in particular carry great weight as heroines, reflecting their growing prominence in such dramas, and character styling that blends the modern with the traditional also helps reflect contemporary trends. Influenced by postmodernism, the hair styles and face makeup of female characters in fusion historical dramas are undergoing fusionization: modern elements are grafted symbolically onto traditional hair styles, and a neutral image is expressed through faded shaggy-cut styles and neoplasticism-inspired dyeing. The neo-hippie style has shifted toward naturalism and nationalism, and hair braided into many strands, as in Native American styles, has taken on a primitive, ethnic character. New styles are also produced by allowing long permanent-wave hair, and straight styles, long shaggy and voluminous wave styles, and neoplasticism-inspired styles are all distinctive. In face makeup, a luxurious, splendid look is achieved by emphasizing luster, and character images are conveyed through smoky makeup that accentuates the eye lines. The makeup differs little from that of contemporary dramas, using pearl shadow and glossy lip makeup and color, yet it can be expressed more dramatically in fusion historical dramas than in contemporary ones. Because fusion historical dramas permit diversity in makeup, their character makeup provides a foundation for presenting diverse makeup elements by mixing and matching present-day elements with historically researched ones, which makes unique or special makeup possible. In existing productions, diverse makeup expression was limited by lighting; 'fusion' therefore allows freer expression in fusion historical dramas than in contemporary dramas.

Development of Recognition Application of Facial Expression for Laughter Therapy on Smartphone (스마트폰에서 웃음 치료를 위한 표정인식 애플리케이션 개발)

  • Kang, Sun-Kyung;Li, Yu-Jie;Song, Won-Chang;Kim, Young-Un;Jung, Sung-Tae
    • Journal of Korea Multimedia Society / v.14 no.4 / pp.494-503 / 2011
  • In this paper, we propose a facial expression recognition application for laughter therapy on a smartphone. It detects the face region from the front camera image of the smartphone using the AdaBoost face detection algorithm and then detects the lip region within the detected face image. From the next frame onward, it does not re-detect the face but tracks the lip region detected in the previous frame using a three-step block matching algorithm. Because the size of the detected lip image varies with the distance between the camera and the user, the lip image is scaled to a fixed size. The effect of illumination variation is then minimized by applying bilateral-symmetry and histogram-matching illumination normalization. Finally, lip eigenvectors are computed using PCA (Principal Component Analysis), and the laughter expression is recognized using a multilayer perceptron neural network. The experimental results show that the proposed method runs at 16.7 frames/s and that the proposed illumination normalization reduces illumination variation better than existing methods, yielding better recognition performance.
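
A minimal sketch of a pipeline in the same spirit is shown below, assuming OpenCV's Haar (AdaBoost) face cascade, a crude lower-face crop in place of the paper's lip detector, and synthetic lip vectors for the PCA plus multilayer-perceptron stage; it illustrates the flow, not the paper's implementation.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

LIP_SIZE = (32, 16)  # fixed lip-image size (hypothetical choice)

def detect_face(gray):
    """Detect the largest face with OpenCV's Haar (AdaBoost) cascade; returns (x, y, w, h) or None."""
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return max(faces, key=lambda f: f[2] * f[3]) if len(faces) else None

def crop_lip_region(gray, face):
    """Crude lip crop: lower third of the detected face, scaled to a fixed size (an assumption, not the paper's detector)."""
    x, y, w, h = face
    lip = gray[y + 2 * h // 3 : y + h, x + w // 4 : x + 3 * w // 4]
    return cv2.resize(lip, LIP_SIZE).flatten().astype(np.float32) / 255.0

# Recognition stage on (here synthetic) fixed-size lip vectors: PCA features + multilayer perceptron.
rng = np.random.default_rng(0)
X = rng.random((60, LIP_SIZE[0] * LIP_SIZE[1]))   # stand-in lip vectors
y = rng.integers(0, 2, 60)                        # 0 = neutral, 1 = laughing (made-up labels)
feats = PCA(n_components=10).fit(X)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(feats.transform(X), y)
print(clf.predict(feats.transform(X[:3])))
```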

A Review of Facial Expression Recognition Issues, Challenges, and Future Research Direction

  • Yan, Bowen;Azween, Abdullah;Lorita, Angeline;S.H., Kok
    • International Journal of Computer Science & Network Security / v.23 no.1 / pp.125-139 / 2023
  • Facial expression recognition, a topical problem in the fields of computer vision and pattern recognition, is a direct means of recognizing human emotions and behaviors. This paper first summarizes the datasets commonly used for expression recognition and their characteristics, then presents traditional machine learning algorithms, with their benefits and drawbacks, organized around the three key stages of facial expression recognition: image pre-processing, feature extraction, and expression classification. Deep learning-oriented expression recognition methods and the performance of various algorithmic frameworks are also analyzed and compared. Finally, the current barriers to facial expression recognition and potential developments are highlighted.
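
The three stages around which the review is organized map naturally onto a pipeline; the skeleton below is purely illustrative, with placeholder bodies rather than methods from any surveyed paper.

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Stage 1, image pre-processing: e.g. face alignment, cropping, illumination normalization."""
    face = image.astype(np.float32)
    return (face - face.mean()) / (face.std() + 1e-8)       # placeholder: simple intensity normalization

def extract_features(face: np.ndarray) -> np.ndarray:
    """Stage 2, feature extraction: e.g. LBP, landmarks, or CNN embeddings."""
    return face.flatten()[:128]                              # placeholder: truncated raw pixels

def classify(features: np.ndarray) -> str:
    """Stage 3, expression classification: e.g. SVM, random forest, or a softmax layer."""
    labels = ["neutral", "happy", "sad", "surprise"]
    return labels[int(np.abs(features).sum()) % len(labels)] # placeholder decision rule

print(classify(extract_features(preprocess(np.random.rand(64, 64)))))
```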

Using a Multi-Faced Technique SPFACS Video Object Design Analysis of The AAM Algorithm Applies Smile Detection (다면기법 SPFACS 영상객체를 이용한 AAM 알고리즘 적용 미소검출 설계 분석)

  • Choi, Byungkwan
    • Journal of Korea Society of Digital Industry and Information Management / v.11 no.3 / pp.99-112 / 2015
  • Digital imaging technology has advanced beyond the limits of the multimedia industry into IT convergence and complex industries. In the field of object recognition in particular, various smartphone application technologies related to faces are being actively researched. Recently, face recognition has been evolving into intelligent object recognition through image recognition and detection techniques, and face recognition based on 3D image object recognition applied to IP cameras has been actively studied. In this paper, we first examine the essential human factors, technical factors, and trends in human object recognition, and then study smile detection based on SPFACS (Smile Progress Facial Action Coding System) as a form of multi-faceted object recognition. The study proceeds in three steps: 1) a 3D object imaging system is designed to analyze the necessary human cognitive characteristics; 2) parameters for 3D object recognition and face detection are identified, and an optimal measurement method based on the AAM algorithm is proposed; and 3) face recognition technology is applied to detect the teeth area of a person, demonstrating expression recognition through the extraction of feature points.
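
The paper's AAM-based pipeline is not reproduced here; as a loose stand-in for the final smile-detection step, the toy sketch below scores a smile from already-extracted mouth feature points (wider mouth and upturned corners relative to face width), with all landmark coordinates hypothetical.

```python
import numpy as np

def smile_score(left_corner, right_corner, upper_lip, lower_lip, face_width):
    """Toy smile score: mouth width relative to face width, plus upward curl of the corners
    relative to the lip midline. Larger values suggest a stronger smile."""
    left, right = np.asarray(left_corner, float), np.asarray(right_corner, float)
    mid_y = (upper_lip[1] + lower_lip[1]) / 2.0
    width_ratio = np.linalg.norm(right - left) / face_width
    corner_lift = (mid_y - (left[1] + right[1]) / 2.0) / face_width  # y grows downward in images
    return width_ratio + 2.0 * corner_lift

# Hypothetical landmark coordinates (pixels) for a neutral and a smiling mouth.
neutral = smile_score((60, 120), (100, 120), (80, 112), (80, 126), face_width=120)
smiling = smile_score((50, 118), (110, 118), (80, 114), (80, 130), face_width=120)
print(round(neutral, 3), round(smiling, 3))  # the smiling configuration scores higher
```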

Face Recognition using Fisherface Method with Fuzzy Membership Degree (퍼지 소속도를 갖는 Fisherface 방법을 이용한 얼굴인식)

  • 곽근창;고현주;전명근
    • Journal of KIISE:Software and Applications / v.31 no.6 / pp.784-791 / 2004
  • In this study, we deal with face recognition using a fuzzy-based Fisherface method. The well-known Fisherface method is less sensitive to large variations in light direction, face pose, and facial expression than the Principal Component Analysis method. Most face recognition methods, including the Fisherface method, give every training sample equal importance in determining the face to be recognized, regardless of its typicalness. The main point of the proposed method is that a feature vector transformed by PCA is assigned fuzzy membership degrees rather than being assigned to a single class. These fuzzy membership degrees are obtained from an FKNN (Fuzzy K-Nearest Neighbor) initialization. Experimental results show better recognition performance than other methods on the ORL and Yale face databases.
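
The FKNN initialization mentioned above is commonly done in the style of Keller et al.: each training sample receives a membership degree in every class based on the class makeup of its K nearest neighbors. The sketch below follows that scheme on made-up data; it is a generic illustration, not the paper's exact procedure.

```python
import numpy as np

def fknn_memberships(X, labels, n_classes, k=3):
    """Fuzzy K-Nearest-Neighbor initialization (Keller-style): each training sample gets a
    membership degree in every class from the class composition of its k nearest neighbors."""
    n = len(X)
    U = np.zeros((n, n_classes))
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    for i in range(n):
        neighbors = np.argsort(dists[i])[1:k + 1]          # exclude the sample itself
        counts = np.bincount(labels[neighbors], minlength=n_classes)
        U[i] = 0.49 * counts / k                            # share from neighborhood composition
        U[i, labels[i]] += 0.51                             # extra weight for the sample's own class
    return U

# Toy usage: two slightly overlapping 2-D classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (5, 2)), rng.normal(2, 1, (5, 2))])
labels = np.array([0] * 5 + [1] * 5)
print(np.round(fknn_memberships(X, labels, n_classes=2), 2))
```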

Face Detection based on Matched Filtering with Mobile Device (모바일 기기를 이용한 정합필터 기반의 얼굴 검출)

  • Yeom, Seok-Won;Lee, Dong-Su
    • Journal of the Institute of Convergence Signal Processing / v.15 no.3 / pp.76-79 / 2014
  • Face recognition is very challenging because of unexpected changes in pose, expression, and illumination. Face detection in mobile environments is additionally difficult because computational resources are very limited. This paper discusses face detection based on frequency-domain matched filtering in mobile environments. Face detection is performed by a linear or phase-only matched filter followed by sequential verification stages. Candidate window regions are selected from a number of peaks in the matched-filtering output. The sequential stages comprise a skin-color test and an edge-mask filtering test, which aim to remove false alarms among the selected candidate windows. The algorithms are implemented in Java on a mobile device running the Android platform. The simulation and experimental results show that real-time face detection can be performed successfully in mobile environments.
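
As a generic illustration of frequency-domain matched filtering (not the paper's tuned detector), the sketch below correlates an image with a template via the FFT, optionally keeping only the filter's phase for the phase-only variant, and returns the strongest correlation peaks from which candidate windows would be drawn; the image and template are synthetic.

```python
import numpy as np

def matched_filter_peaks(image, template, phase_only=False, n_peaks=3):
    """Correlate image with template in the frequency domain and return the strongest peak locations."""
    F = np.fft.fft2(image)
    H = np.conj(np.fft.fft2(template, s=image.shape))   # matched filter = conjugate of template spectrum
    if phase_only:
        H = H / (np.abs(H) + 1e-12)                      # phase-only matched filter
    corr = np.real(np.fft.ifft2(F * H))
    flat = np.argsort(corr, axis=None)[::-1][:n_peaks]
    return np.column_stack(np.unravel_index(flat, corr.shape)), corr

# Synthetic test: embed the template in a noisy image and locate it.
rng = np.random.default_rng(0)
template = rng.random((8, 8))
image = 0.1 * rng.random((64, 64))
image[20:28, 30:38] += template                          # ground-truth location (20, 30)
peaks, _ = matched_filter_peaks(image, template, phase_only=True)
print(peaks)                                             # strongest peaks cluster near (20, 30)
```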

Sliding Active Camera-based Face Pose Compensation for Enhanced Face Recognition (얼굴 인식률 개선을 위한 선형이동 능동카메라 시스템기반 얼굴포즈 보정 기술)

  • 장승호;김영욱;박창우;박장한;남궁재찬;백준기
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.6 / pp.155-164 / 2004
  • Recently, there have been remarkable developments in intelligent robot systems. A notable feature of an intelligent robot is that it can track the user and perform face recognition, which is vital for many surveillance-based systems. The advantage of face recognition over other biometrics is that the coerciveness and physical contact usually involved in acquiring biometric characteristics are absent. However, the accuracy of face recognition is lower than that of other biometric modalities because of the reduction in dimensionality at the image acquisition step and the various changes associated with face pose and background. Many factors deteriorate face recognition performance, such as the distance from the camera to the face, changes in lighting, pose changes, and changes of facial expression. In this paper, we implement a new sliding active camera system to counteract the pose variations that degrade face recognition performance, acquire frontal face images, and use PCA and HMM methods to improve recognition. The proposed face recognition algorithm can be used in intelligent surveillance systems and mobile robot systems.
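
The abstract names PCA and HMM but gives no algorithmic detail; one classical way HMMs are used for face recognition is to scan each face top to bottom into strip observations, train one HMM per person, and pick the highest-likelihood model at test time. The sketch below (using hmmlearn and synthetic faces) illustrates that general idea only, as an assumption rather than the paper's method.

```python
import numpy as np
from hmmlearn import hmm

def face_to_strips(face, strip_h=4):
    """Scan a face image top to bottom; each horizontal strip's pixel means form one observation."""
    return np.array([face[r:r + strip_h].mean(axis=0) for r in range(0, face.shape[0], strip_h)])

rng = np.random.default_rng(0)
# Two synthetic "people", three 16x8 face images each (stand-ins for real training faces).
people = {name: [rng.normal(loc=mu, scale=0.3, size=(16, 8)) for _ in range(3)]
          for name, mu in [("alice", 0.0), ("bob", 1.0)]}

models = {}
for name, faces in people.items():
    obs = [face_to_strips(f) for f in faces]
    X, lengths = np.vstack(obs), [len(o) for o in obs]
    m = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=50, random_state=0)
    models[name] = m.fit(X, lengths)

# Recognition: score a new face under every person's HMM and pick the most likely one.
probe = face_to_strips(rng.normal(loc=1.0, scale=0.3, size=(16, 8)))  # resembles "bob"
print(max(models, key=lambda n: models[n].score(probe)))
```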

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi;Min Kyong-Pil;Chun Jun-Chul;Choi Yong-Gil
    • Journal of Internet Computing and Services / v.7 no.2 / pp.23-35 / 2006
  • This work presents a novel method that automatically extracts the facial region and facial features from motion pictures and controls the 3D facial expression in real time. To extract the facial region and facial feature points from each color frame, a new nonparametric skin color model is proposed instead of a parametric one. Conventional parametric skin color models, which represent the facial color distribution as a Gaussian, lack robustness under varying lighting conditions and therefore require additional work to extract the exact facial region from face images. To overcome this limitation, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces the error in detecting the facial region. Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted using edge information of the detected facial region along with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of Waters' muscles to vary the facial features of the 3D face. The experiments show that the proposed approach efficiently detects facial feature points and naturally controls the facial expression of the 3D face model.
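
Waters' linear muscle model displaces mesh vertices that lie within a muscle's zone of influence toward the muscle's attachment point, scaled by angular and radial fall-off and a contraction factor. The sketch below is a simplified version with linear fall-off terms and made-up mesh points, not Waters' exact formulation or the paper's extension.

```python
import numpy as np

def apply_linear_muscle(vertices, head, tail, contraction, max_angle_deg=60.0, falloff_radius=None):
    """Simplified Waters-style linear muscle: vertices inside a cone around the head->tail axis
    are pulled toward the muscle head, with angular and radial fall-off (not Waters' exact terms)."""
    head, tail = np.asarray(head, float), np.asarray(tail, float)
    axis = (tail - head) / np.linalg.norm(tail - head)
    if falloff_radius is None:
        falloff_radius = np.linalg.norm(tail - head)
    out = vertices.astype(float).copy()
    for i, p in enumerate(out):
        to_p = p - head
        dist = np.linalg.norm(to_p)
        if dist < 1e-9 or dist > falloff_radius:
            continue
        cos_a = float(np.dot(to_p / dist, axis))
        if cos_a < np.cos(np.radians(max_angle_deg)):
            continue                                  # outside the muscle's angular zone of influence
        radial = 1.0 - dist / falloff_radius          # linear radial fall-off (simplification)
        out[i] = p + contraction * cos_a * radial * (head - p)   # pull toward the attachment point
    return out

# Toy usage: a zygomatic-major-like muscle pulling mouth-corner vertices toward its attachment point.
verts = np.array([[0.0, 0.0, 0.0], [0.4, -0.2, 0.0], [2.0, 2.0, 0.0]])  # hypothetical face mesh points
moved = apply_linear_muscle(verts, head=(1.0, 1.0, 0.0), tail=(0.0, -0.5, 0.0), contraction=0.5)
print(np.round(moved, 3))  # the last vertex lies outside the zone of influence and stays fixed
```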
