• Title/Summary/Keyword: Human-like Face


Face and Hand Activity Detection Based on Haar Wavelet and Background Updating Algorithm

  • Shang, Yiting;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.8
    • /
    • pp.992-999
    • /
    • 2011
  • This paper proposes a human body posture recognition program based on Haar-like features and hand activity detection. Its distinguishing feature is the combination of face detection with motion detection. First, the program uses Haar-like feature face detection to obtain the location of the human face. The Haar-like feature has the advantage of speed: with a small amount of computation it can exclude a large amount of interference, discriminate the human face accurately, and return the face position. The program then uses frame subtraction to obtain the position of human body motion, a method that performs well for motion detection. Finally, the program recognizes the body motion by relating the face position to the contour of the detected motion. In tests, the recognition rate of this algorithm exceeds 92%. The results show that the algorithm produces its result quickly while guaranteeing correctness.
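
The pipeline above combines two standard building blocks, Haar-cascade face detection and frame-differencing motion detection. The following minimal sketch, assuming OpenCV, a webcam source, and placeholder thresholds not taken from the paper, shows one way the two detections can be related spatially.

```python
# Minimal sketch (not the paper's code): Haar-cascade face detection combined
# with frame-differencing motion detection. The cascade file, thresholds and
# the spatial rule relating face and motion are assumptions.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                      # any video source
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # 1) Haar-like feature face detection gives the face location.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # 2) Frame subtraction gives the moving regions of the body.
    diff = cv2.absdiff(gray, prev_gray)
    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(motion, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    # 3) Relate the face position to the motion contours (crude example rule).
    for (x, y, w, h) in faces:
        for c in contours:
            if cv2.contourArea(c) < 500:       # ignore small noise blobs
                continue
            mx, my, mw, mh = cv2.boundingRect(c)
            if my > y + h:                     # motion below the face box
                print("candidate hand/body activity below the face")

    prev_gray = gray
```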

Face Contour Detection by Using B-spline Snake for Creating Human Face Caricature

  • Lee, Jang-Hee;Woo, Jae-Kun;Hoon Kang
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.399-402
    • /
    • 2003
  • This paper deals with creating a caricature-like avatar from a human face image captured by a web camera. Images from a web camera are generally not only of low quality but also subject to varying lighting and backgrounds, so it is difficult to recognize a human face's contour with methods that only locate a few feature points in the image. In this paper, we therefore propose a new method that overcomes the shortcomings of those approaches. First, the face area is found roughly from color information; then the exact contour of the face is extracted with a B-spline snake (see the code sketch below).

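The two stages above (a rough face area from color, then a B-spline contour) can be illustrated with the following minimal sketch. It assumes OpenCV and SciPy, uses a common YCrCb skin-color heuristic rather than values from the paper, and replaces the paper's iteratively deforming B-spline snake with a one-shot B-spline fit to the largest skin contour.

```python
# Minimal sketch (not the paper's B-spline snake): rough face area from a
# common YCrCb skin-colour heuristic, then a one-shot closed B-spline fit to
# the largest skin contour with SciPy. Thresholds and file names are
# assumptions, not values from the paper.
import cv2
import numpy as np
from scipy.interpolate import splprep, splev

img = cv2.imread("webcam_frame.jpg")                    # placeholder input image
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)

# 1) Rough face area from colour information (skin mask in YCrCb space).
skin = cv2.inRange(ycrcb, np.array([0, 133, 77]), np.array([255, 173, 127]))
skin = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))

# 2) The largest skin contour is taken as the coarse face boundary.
contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
face = max(contours, key=cv2.contourArea).squeeze()

# 3) Fit a closed cubic B-spline through the contour points and resample it,
#    giving a smooth caricature-style outline.
tck, _ = splprep([face[:, 0], face[:, 1]], s=len(face), per=True)
xs, ys = splev(np.linspace(0, 1, 200), tck)
outline = np.stack([xs, ys], axis=1).astype(np.int32).reshape(-1, 1, 2)
cv2.polylines(img, [outline], isClosed=True, color=(0, 255, 0), thickness=2)
cv2.imwrite("face_contour.jpg", img)
```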

Functions and Driving Mechanisms for Face Robot Buddy (얼굴로봇 Buddy의 기능 및 구동 메커니즘)

  • Oh, Kyung-Geune;Jang, Myong-Soo;Kim, Seung-Jong;Park, Shin-Suk
    • The Journal of Korea Robotics Society
    • /
    • v.3 no.4
    • /
    • pp.270-277
    • /
    • 2008
  • The development of a face robot generally targets very natural human-robot interaction (HRI), especially emotional interaction, and so does the face robot introduced in this paper, named Buddy. Since Buddy was developed for a mobile service robot, it does not have a living-being-like face such as a human's or an animal's, but a typically robot-like face with hard skin, which may be suitable for mass production. Moreover, its structure and mechanism are kept simple and its production cost low. This paper introduces the mechanisms and functions of the mobile face robot Buddy, which can take on natural and precise facial expressions and make dynamic gestures driven by a single laptop PC. Buddy can also perform lip-sync, eye contact, and face tracking for lifelike interaction. By adopting a customized emotional reaction decision model, Buddy creates its own personality, emotion, and motives from various sensor inputs. Based on this model, Buddy can interact properly with users and perform real-time learning using personality factors. The interaction performance of Buddy is demonstrated by experiments and simulations. (A hypothetical sketch of such a decision model follows below.)

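The abstract mentions a customized emotional reaction decision model driven by sensor input and personality factors but does not specify it. The following is a purely hypothetical sketch of what such a weighted sensor-to-emotion mapping could look like; the emotion labels, sensor cues, and weights are invented for illustration and are not the authors' model.

```python
# Purely hypothetical sketch of a sensor-driven emotional reaction decision
# model. The paper does not publish its model; the emotion labels, sensor cues,
# weights and personality factors below are invented for illustration.
import numpy as np

EMOTIONS = ["joy", "surprise", "sadness", "anger"]

# personality factors scale how strongly each emotion reacts (assumed values)
personality = np.array([1.2, 1.0, 0.8, 0.6])

# rows: emotions; columns: sensor cues (face seen, loud sound, touch, obstacle)
reaction_weights = np.array([
    [0.8, 0.1, 0.6, -0.2],    # joy
    [0.2, 0.9, 0.3,  0.4],    # surprise
    [-0.4, 0.0, -0.1, 0.3],   # sadness
    [-0.2, 0.3, -0.3, 0.8],   # anger
])

def react(sensor_cues, state, decay=0.9):
    """Update the emotion state from normalized sensor cues in [0, 1]."""
    stimulus = personality * (reaction_weights @ sensor_cues)
    state = decay * state + (1 - decay) * stimulus    # smooth, leaky update
    return state, EMOTIONS[int(np.argmax(state))]

state = np.zeros(len(EMOTIONS))
state, expression = react(np.array([1.0, 0.2, 0.0, 0.0]), state)
print(expression)          # the facial expression the robot would take on
```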

Face and Facial Feature Detection under Pose Variation of User Face for Human-Robot Interaction (인간-로봇 상호작용을 위한 자세가 변하는 사용자 얼굴검출 및 얼굴요소 위치추정)

  • Park Sung-Kee;Park Mignon;Lee Taigun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.1
    • /
    • pp.50-57
    • /
    • 2005
  • We present a simple and effective method for detecting the face and facial features under pose variation of the user's face in a complex background, for human-robot interaction. Our approach is flexible in that it can be applied to both color and gray facial images and can detect facial features in quasi real time. Based on the intensity characteristics of the neighborhood of each facial feature, a new directional template for facial features is defined. Applying this template to the input facial image yields a novel edge-like blob map (EBM) with multiple intensity strengths. Using this map and conditions derived from facial characteristics, we show that the locations of the face and its features, i.e., the two eyes and the mouth, can be estimated regardless of the color information of the input image. Without information about the facial area boundary, the final candidate face region is determined from both the obtained facial feature locations and weighted correlation values with standard facial templates. Experimental results on many color images and well-known gray-level face database images confirm the usefulness of the proposed algorithm.
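
The paper's directional templates and edge-like blob map are specific to its method and are not reproduced here. As a loose illustration of the final verification step it mentions (weighted correlation with standard facial templates), the sketch below runs normalized cross-correlation against a face template with OpenCV; the file names and acceptance threshold are placeholders.

```python
# Loose illustration of the final verification step mentioned above (weighted
# correlation with standard facial templates), not of the paper's directional
# templates or edge-like blob map. File names and the acceptance threshold are
# placeholders.
import cv2

gray = cv2.imread("input_scene.png", cv2.IMREAD_GRAYSCALE)         # scene image
template = cv2.imread("standard_face.png", cv2.IMREAD_GRAYSCALE)   # average face

# Normalized cross-correlation tolerates uniform brightness changes, so the
# same check works for colour-derived and pure grey-level inputs.
response = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(response)

if max_val > 0.5:                          # assumed acceptance threshold
    x, y = max_loc
    h, w = template.shape
    print(f"candidate face region at ({x}, {y}), size {w}x{h}, score {max_val:.2f}")
```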

Emotion Recognition based on Multiple Modalities

  • Kim, Dong-Ju;Lee, Hyeon-Gu;Hong, Kwang-Seok
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.12 no.4
    • /
    • pp.228-236
    • /
    • 2011
  • Emotion recognition plays an important role in human-computer interaction research, as it allows more natural, more human-like communication between humans and computers. Most previous work on emotion recognition has focused on extracting emotions from the face, speech, or EEG separately. In this paper, a novel approach is therefore presented that combines face, speech, and EEG to recognize human emotion. The individual matching scores obtained from face, speech, and EEG are combined with a weighted summation, and the fused score is used to classify the emotion. In the experiments, the proposed approach gives an improvement of more than 18.64% over the most successful unimodal approach and also outperforms approaches that integrate only two modalities. These results confirm that the proposed approach achieves a significant performance improvement and is very effective.
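
The weighted-summation fusion described above reduces to a few lines. The sketch below assumes each modality outputs one normalized matching score per emotion class; the score values and weights are placeholders, not those of the paper.

```python
# Minimal sketch of the weighted-summation, score-level fusion described above.
# Each modality is assumed to output one normalized matching score per emotion
# class; the score values and weights are placeholders, not the paper's.
import numpy as np

EMOTIONS = ["neutral", "happy", "sad", "angry"]

def fuse(scores, weights):
    """Weighted sum of per-modality score vectors; returns the winning class."""
    fused = sum(w * s for w, s in zip(weights, scores))
    return EMOTIONS[int(np.argmax(fused))], fused

face_scores   = np.array([0.10, 0.70, 0.15, 0.05])    # from the face classifier
speech_scores = np.array([0.20, 0.50, 0.20, 0.10])    # from the speech classifier
eeg_scores    = np.array([0.25, 0.40, 0.20, 0.15])    # from the EEG classifier

label, fused = fuse([face_scores, speech_scores, eeg_scores],
                    weights=[0.5, 0.3, 0.2])           # assumed modality weights
print(label, fused)
```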

Which Agent is More Captivating for Winning the Users' Hearts?: Focusing on Paralanguage Voice and Human-like Face Agent

  • SeoYoung Lee
    • Asia pacific journal of information systems
    • /
    • v.34 no.2
    • /
    • pp.585-619
    • /
    • 2024
  • This paper presents a comparative analysis of human interactions with AI agents based on the presence or absence of a facial representation, combined with the presence or absence of paralanguage voice elements. The CASA (Computers-Are-Social-Actors) paradigm posits that people perceive computers as social actors rather than tools, unconsciously applying human norms and behaviors to them. Paralanguage refers to voice elements such as pitch, tone, stress, pause, duration, and speed that help convey what a speaker is trying to communicate. The focus is on understanding how these elements collectively contribute to flow, intimacy, trust, and interactional enjoyment in the user experience. The study then uses PLS analysis to explore the connections among all variables in the research framework. The paper has both academic and practical implications.
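
Studies of this kind typically run PLS-SEM in dedicated tools such as SmartPLS, and the paper does not publish its measurement model. As a loosely related illustration only, the sketch below fits a scikit-learn PLS regression on invented survey-style data with hypothetical construct names.

```python
# Loosely related illustration only: the study's PLS analysis would normally be
# PLS-SEM in a dedicated tool (e.g. SmartPLS). Here a scikit-learn PLS
# regression is fitted on invented survey-style data; the construct names,
# sample size and coefficients are hypothetical.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n = 120                                  # hypothetical number of respondents

# predictors: human-like face (0/1), paralanguage voice (0/1), flow, intimacy, trust
X = np.column_stack([
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
    rng.normal(4.0, 1.0, (n, 3)),
])
# outcome: interactional enjoyment, driven by the predictors plus noise
y = 2 + 0.6 * X[:, 0] + 0.4 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.5, n)

pls = PLSRegression(n_components=2)
pls.fit(X, y)
print("R^2:", pls.score(X, y))
print("PLS coefficients:", pls.coef_.ravel())
```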

Face Recognition System for Unattended reception interface (무인 접수 인터페이스를 위한 얼굴인식 시스템)

  • Park, Se-Hyun;Ryu, Jeong-Tak
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.17 no.3
    • /
    • pp.1-7
    • /
    • 2012
  • As personal information is increasingly used for user authentication, trustworthy verification methods are required. Recently, much research has investigated biometric systems that use a part of the human body in the manner of a password. Among the various biometric technologies, face recognition based on the characteristics of an individual's face makes feature extraction easy. In this paper, we implement a face recognition system for an unattended reception interface. Our method proceeds in two steps: first, the face is extracted using the Haar-like feature method; second, a method combining PCA and LDA is used for face recognition. To assess the effectiveness of the proposed system, it was tested, and the experimental results show that the proposed method is applicable to an unattended reception interface.
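
The two-step pipeline above (Haar-like face extraction, then PCA combined with LDA for recognition) can be sketched roughly as follows, assuming OpenCV and scikit-learn; the image size, component counts, and nearest-neighbour matcher are assumptions, not the paper's settings.

```python
# Rough sketch of the two-step pipeline (not the authors' implementation):
# Haar-like face extraction, then PCA followed by LDA and nearest-neighbour
# matching. Image size, component counts and the matcher are assumptions.
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face(bgr, size=(64, 64)):
    """Step 1: crop the largest Haar-detected face and return it as a flat vector."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return cv2.resize(gray[y:y + h, x:x + w], size).flatten()

# Step 2: PCA reduces dimensionality, LDA improves class separation,
# and a 1-nearest-neighbour classifier matches the projected features.
model = make_pipeline(PCA(n_components=50),
                      LinearDiscriminantAnalysis(),
                      KNeighborsClassifier(n_neighbors=1))

# X_train: stacked face vectors, y_train: person IDs (placeholders):
# model.fit(X_train, y_train)
# person = model.predict([extract_face(camera_frame)])
```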

Facial Detection using Haar-like Feature and Bezier Curve (Haar-like와 베지어 곡선을 이용한 얼굴 성분 검출)

  • An, Kyeoung-Jun;Lee, Sang-Yong
    • Journal of Digital Convergence
    • /
    • v.11 no.9
    • /
    • pp.311-318
    • /
    • 2013
  • The accuracy of face detection techniques decreases under varying lighting and backgrounds, which calls for new methods. This study aims to obtain data for inferring human emotional information by analyzing the eye and mouth components that are critical for expressing emotion. To this end, existing problems in face detection are addressed and a detection method is proposed that offers a high detection rate, fast processing speed, and robustness to environmental factors. The method first detects the specific parts (the eyes and mouth) using the Haar-like feature technique with an integral image. The detected elements are then binarized based on color information, separating the face region from the skin region. Finally, the shape of each detected element is generated with a Bezier curve, a curve-generation algorithm. To evaluate the performance of the proposed method, an experiment was conducted using data from the Face Recognition Homepage. The results show that the combination of the Haar-like technique and the Bezier curve method detects face elements more precisely.
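
Two pieces of the pipeline above are easy to illustrate: Haar-based detection of a facial part, and generation of a smooth outline from control points with a Bezier curve. The sketch below, assuming OpenCV, uses a generic De Casteljau evaluation and invented control points around each detected eye; it is not the paper's implementation.

```python
# Sketch of two illustratable pieces (not the paper's implementation):
# Haar-based eye detection, and a smooth outline generated from control points
# with a Bezier curve via De Casteljau's algorithm. The control points around
# each eye box and the input file are invented for the example.
import cv2
import numpy as np

def bezier(ctrl, n=100):
    """Evaluate a Bezier curve of arbitrary degree from its control points."""
    t = np.linspace(0.0, 1.0, n)[:, None, None]                  # (n, 1, 1)
    pts = np.broadcast_to(np.asarray(ctrl, float), (n, len(ctrl), 2))
    while pts.shape[1] > 1:                                      # De Casteljau step
        pts = (1 - t) * pts[:, :-1] + t * pts[:, 1:]
    return pts[:, 0]                                             # (n, 2) samples

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("face.jpg")                                     # placeholder input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
eyes = eye_cascade.detectMultiScale(gray, 1.1, 5)                # Haar-like detection

# An (assumed) ring of control points around each detected eye box is turned
# into a smooth eye-shaped outline with the Bezier routine above.
for (x, y, w, h) in eyes:
    ctrl = [(x, y + h // 2), (x + w // 4, y), (x + 3 * w // 4, y),
            (x + w, y + h // 2), (x + 3 * w // 4, y + h),
            (x + w // 4, y + h), (x, y + h // 2)]
    curve = bezier(ctrl).astype(np.int32).reshape(-1, 1, 2)
    cv2.polylines(img, [curve], isClosed=True, color=(255, 0, 0), thickness=1)
cv2.imwrite("eyes_outlined.jpg", img)
```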

Multimodal Emotion Recognition using Face Image and Speech (얼굴영상과 음성을 이용한 멀티모달 감정인식)

  • Lee, Hyeon Gu;Kim, Dong Ju
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.8 no.1
    • /
    • pp.29-40
    • /
    • 2012
  • A challenging research issue of growing importance in human-computer interaction is to endow a machine with emotional intelligence. Emotion recognition technology therefore plays an important role in this research area, as it allows more natural, more human-like communication between human and computer. In this paper, we propose a multimodal emotion recognition system that uses face and speech to improve recognition performance. For face-based emotion recognition, distances are measured by 2D-PCA of MCS-LBP images with a nearest-neighbor classifier; for speech-based emotion recognition, likelihoods are obtained from a Gaussian mixture model based on pitch and mel-frequency cepstral coefficient features. The individual matching scores obtained from face and speech are combined with a weighted summation, and the fused score is used to classify the emotion. Experimental results show that the proposed method improves recognition accuracy by about 11.25% to 19.75% compared to the most successful unimodal approach. These results confirm that the proposed approach achieves a significant performance improvement and is very effective.
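
The speech branch described above (pitch and MFCC features scored by Gaussian mixture models) can be sketched as follows with librosa and scikit-learn; the training files, GMM size, and feature layout are placeholders, and the MCS-LBP/2D-PCA face branch is not reproduced.

```python
# Sketch of the speech branch only (the MCS-LBP / 2D-PCA face branch is not
# reproduced): pitch and MFCC features scored by one Gaussian mixture model per
# emotion, with the class giving the highest average log-likelihood winning.
# File names, GMM size and feature layout are placeholders.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def speech_features(path):
    """Frame-level features: 13 MFCCs plus a YIN pitch (F0) track."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)          # (13, frames)
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)               # (frames,)
    n = min(mfcc.shape[1], len(f0))
    return np.vstack([mfcc[:, :n], f0[None, :n]]).T             # (frames, 14)

# One GMM per emotion, trained on that emotion's utterances (placeholder files).
training = {"happy": ["happy_01.wav"], "sad": ["sad_01.wav"], "angry": ["angry_01.wav"]}
models = {label: GaussianMixture(n_components=8).fit(
              np.vstack([speech_features(f) for f in files]))
          for label, files in training.items()}

def classify(path):
    """Return the emotion whose GMM gives the highest average log-likelihood."""
    feats = speech_features(path)
    scores = {label: gmm.score(feats) for label, gmm in models.items()}
    return max(scores, key=scores.get), scores

# print(classify("test_utterance.wav"))
```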

Anthropometric Studies on the Analysis of Women's Beautiful Face (20대 여성의 미인형 분석을 위한 계측학적 연구)

  • Park, Oak-Reon;Song, Mi-Young
    • Korean Journal of Human Ecology
    • /
    • v.14 no.5
    • /
    • pp.813-820
    • /
    • 2005
  • Beauty itself may not change over time, but the concept of beauty is influenced by the era and cultural background. The purpose of this study is to analyze beautiful and ugly faces among young women and to provide a useful guideline for the modern concept of beauty. Facial photographs of 180 adult women (aged 20 to 29) in the Pusan and Ulsan areas were sampled, measured, and classified as beautiful, ordinary, or ugly faces. Data were analyzed with frequencies, means, and Duncan's multiple range test. The major results were as follows: the beautiful face is relatively small, with a broad forehead and a small lower face; it also has a wide palpebral fissure, a narrow intercanthal distance, a narrow nose, and a big mouth. The physiognomic face length was 182.38 mm, the upper face length 59.82 mm, the middle face length 60.82 mm, the lower face length 61.76 mm, and the index of face length to face breadth 1.35. Faces were also classified as beautiful, ordinary, or ugly from measurements such as face length/bizygion breadth, intercanthal distance, mouth width, upper vermilion height, and lower vermilion height. (A small worked example of these indices follows below.)

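A small worked example of the facial indices reported above, using only the mean values given in the abstract (all lengths in millimetres); the implied face breadth is derived from the reported length-to-breadth index.

```python
# Tiny worked example of the facial indices above, using only the mean values
# reported in the abstract (all lengths in millimetres).
physiognomic_face_length = 182.38
upper_face_length  = 59.82
middle_face_length = 60.82
lower_face_length  = 61.76
length_to_breadth_index = 1.35

# the reported index implies this (approximate) bizygomatic face breadth
face_breadth = physiognomic_face_length / length_to_breadth_index
print(f"implied face breadth: {face_breadth:.1f} mm")            # about 135.1 mm

# proportion of each facial third relative to their sum
total = upper_face_length + middle_face_length + lower_face_length
for name, value in [("upper", upper_face_length),
                    ("middle", middle_face_length),
                    ("lower", lower_face_length)]:
    print(f"{name} third: {value / total:.1%}")
```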