• Title/Summary/Keyword: Korean face and human image


Implementation of an automatic face recognition system using the object centroid (무게중심을 이용한 자동얼굴인식 시스템의 구현)

  • 풍의섭;김병화;안현식;김도현
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.8
    • /
    • pp.114-123
    • /
    • 1996
  • In this paper, we propose an automatic recognition algorithm that uses the object centroid of a facial image. First, we separate the facial image from the background using the chroma-key technique and find the centroid of the separated facial image. Second, we locate the nose in the facial image based on knowledge of human faces and the coordinates of the object centroid, and we automatically calculate 17 feature parameters. Finally, we recognize the facial image by feeding these feature parameters to neural networks trained with the error backpropagation algorithm. Experiments with the proposed recognition system show that facial images can be recognized despite variations in image size and position.

  • PDF
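The centroid step described above can be sketched with first-order image moments; this is a minimal illustration, assuming the chroma-key step has already produced a binary face mask:

```python
import numpy as np

def face_centroid(mask):
    """Centroid (row, col) of a binary face mask via first-order image moments.

    `mask` is a 2-D array whose nonzero pixels belong to the
    chroma-keyed face region.
    """
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        raise ValueError("empty mask")
    return ys.mean(), xs.mean()

# Toy 4x4 mask: "face" pixels in a 2x2 block at rows 1-2, cols 1-2.
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1
print(face_centroid(mask))  # (1.5, 1.5)
```

Because the centroid is computed on the segmented region rather than the full frame, the nose search anchored to it is insensitive to where the face sits in the image, which is what makes the recognition tolerant of position changes.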

Development of Facial Expression Recognition System based on Bayesian Network using FACS and AAM (FACS와 AAM을 이용한 Bayesian Network 기반 얼굴 표정 인식 시스템 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.4
    • /
    • pp.562-567
    • /
    • 2009
  • As a key mechanism of human emotional interaction, facial expression is a powerful tool in HRI (Human-Robot Interaction) and HCI (Human-Computer Interaction). By using facial expressions, a system can produce reactions that correspond to the emotional state of the user, and service agents such as intelligent robots can infer which services are suitable to provide. In this article, we address the issue of expressive face modeling using an advanced Active Appearance Model for facial emotion recognition. We consider the six universal emotion categories defined by Ekman. In the human face, emotions are most strongly expressed by the eyes and mouth. To recognize emotion from a facial image, we need to extract feature points such as Ekman's Action Units (AUs). The Active Appearance Model (AAM) is one of the most commonly used methods for facial feature extraction and can be applied to construct AUs. Because the traditional AAM depends on the setting of the model's initial parameters, this paper introduces a facial emotion recognition method that combines an advanced AAM with a Bayesian Network. First, we obtain the reconstructive parameters of a new gray-scale image by sample-based learning, use them to reconstruct the shape and texture of the new image, and calculate the initial AAM parameters from the reconstructed facial model. We then reduce the distance error between the model and the target contour by adjusting the model parameters. After several iterations we obtain a model matched to the facial feature outline and use it to recognize the facial emotion with the Bayesian Network.
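The step of reducing the distance error between the model and the target contour reduces, in the purely linear case, to a least-squares parameter update. The sketch below illustrates that idea with a toy shape basis; all names, dimensions, and values are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
mean_shape = rng.normal(size=8)           # mean contour: 4 (x, y) points, flattened
basis = rng.normal(size=(8, 2))           # columns span the allowed shape deformations
true_p = np.array([0.7, -1.2])
target = mean_shape + basis @ true_p      # observed facial feature outline

def fit_shape_params(target, mean_shape, basis):
    """Least-squares update: argmin_p || mean_shape + basis @ p - target ||."""
    p, *_ = np.linalg.lstsq(basis, target - mean_shape, rcond=None)
    return p

p = fit_shape_params(target, mean_shape, basis)
residual = np.linalg.norm(mean_shape + basis @ p - target)
```

A full AAM iterates updates of this kind jointly over shape and appearance parameters; the toy above shows only why a good initialization matters less when the target lies near the model subspace.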

Eye detection on Rotated face using Principal Component Analysis (주성분 분석을 이용한 기울어진 얼굴에서의 눈동자 검출)

  • Choi, Yeon-Seok;Mun, Won-Ho;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2011.05a
    • /
    • pp.61-64
    • /
    • 2011
  • There are many applications that require robust and accurate eye tracking, such as human-computer interfaces (HCI). In this paper, we propose a novel approach for eye detection on rotated faces using principal component analysis. Intensity information is used in the iris detection process: first, the eye region is selected using principal component analysis; then the eyes are detected using the intensity of the eye region. Experimental results show good detection performance on FERET images that include rotated faces.

  • PDF
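The PCA-based region selection can be sketched as subspace projection: fit principal components to example eye patches, then prefer candidate regions whose reconstruction error in that subspace is low. A minimal illustration with synthetic "patches" (all data and dimensions are made up):

```python
import numpy as np

def pca_fit(patches, k):
    """Fit a k-component PCA to flattened training patches (one per row)."""
    mean = patches.mean(axis=0)
    _, _, vt = np.linalg.svd(patches - mean, full_matrices=False)
    return mean, vt[:k]                    # mean patch and top-k principal axes

def recon_error(patch, mean, comps):
    """Distance between a patch and its projection onto the eye subspace."""
    c = patch - mean
    proj = comps.T @ (comps @ c)
    return np.linalg.norm(c - proj)

rng = np.random.default_rng(1)
base = rng.normal(size=16)                         # hypothetical "eye" pattern
train = base + 0.01 * rng.normal(size=(20, 16))    # noisy training eye patches
mean, comps = pca_fit(train, k=3)

eye_like = base + 0.01 * rng.normal(size=16)       # candidate resembling the eyes
non_eye = rng.normal(size=16)                      # unrelated candidate region
# The eye-like candidate reconstructs far better than the random region.
```

Scanning such a scorer over a rotated face works because the subspace captures the eye's appearance rather than its absolute position in the frame.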

The Characteristics of the Post-Modern Self-portrait Photography (포스트모던 사진 자화상)

  • Chang, Sunkang
    • The Journal of Art Theory & Practice
    • /
    • no.15
    • /
    • pp.51-79
    • /
    • 2013
  • This paper examines the characteristics of post-modern self-portrait photography. Characteristics of postmodernism associated with the "loss of centeredness," such as the death of the author, interdisciplinarity, and intertextuality, brought about a number of changes within the self-portrait. The distinction between post-modern and modern self-portraiture can be characterized by the following qualities: appropriation, the use of photography, and the utilization of the human body as art. The characteristics of post-modern self-portrait photography can be represented through the works of Cindy Sherman, Orlan, and Morimura Yasumasa. By presenting prototypical women in her works, Cindy Sherman not only represents images of those women, but also exposes her fictitious role in the work. She creates a distance between herself in the works and herself in reality and discloses a paternalistic gaze. Meanwhile, Orlan transforms her face into a distorted image and presents it as an alternative identity representative of postmodernism. She corrodes the standard concept of identity through plastic surgery and treats the face not as a place where identity resides, but as a simple body part or fragment of skin. Orlan's post-human face is malleable according to the artist's desire, raising the issue of what the human face is, and opposes the structure of modernism. Morimura Yasumasa likewise appropriates images from masterpieces and presents a hybrid identity between Eastern and Western, male and female, original and replica, and subject and object. In order to dissect social prejudice, he puts forth every structural dichotomy that coexists in his self-portrait and suppresses a strong ego. He also studies the relationship between 'seeing' and 'being seen' by trading the painter's role from that of the subject to that of the object.

  • PDF

PERSONAL SPACE-BASED MODELING OF RELATIONSHIPS BETWEEN PEOPLE FOR NEW HUMAN-COMPUTER INTERACTION

  • Amaoka, Toshitaka;Laga, Hamid;Saito, Suguru;Nakajima, Masayuki
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.746-750
    • /
    • 2009
  • In this paper we focus on Personal Space (PS) as a nonverbal communication concept on which to build a new human-computer interaction. Analyzing people's positions with respect to their PS gives an idea of the nature of their relationship. We propose to analyze and model the PS using Computer Vision (CV) and to visualize it using computer graphics. For this purpose, we define the PS based on four parameters: the distance between people, their face orientations, age, and gender. We automatically estimate the first two parameters from image sequences using CV technology, while the other two parameters are set manually. Finally, we calculate the two-dimensional relationship of multiple persons and visualize it as 3D contours in real time. Our method can sense and visualize invisible and unconscious PS distributions and convey the spatial relationship of users through an intuitive visual representation. The results of this paper can be applied to human-computer interaction in public spaces.

  • PDF
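A toy version of a four-parameter personal-space field might look like the following; the Gaussian shape and the scale constants are illustrative assumptions, not the authors' actual model:

```python
import math

def personal_space(dist, angle, age_scale=1.0, gender_scale=1.0):
    """Toy personal-space intensity at a point, given its distance from a
    person and its angle from the facing direction (radians).

    The four inputs mirror the paper's parameters (distance, face
    orientation, age, gender); the Gaussian falloff and the constants
    are illustrative, not the published model.
    """
    # Personal space extends further in the facing direction, so shrink
    # the effective radius behind the person.
    radius = 1.2 * (1.0 + 0.5 * math.cos(angle)) * age_scale * gender_scale
    return math.exp(-(dist / radius) ** 2)

front = personal_space(1.0, 0.0)         # point directly ahead
behind = personal_space(1.0, math.pi)    # same distance, directly behind
```

Evaluating such a field on a grid around each person and drawing its level sets gives exactly the kind of real-time contour visualization the abstract describes.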

Facial Expression Recognition using 1D Transform Features and Hidden Markov Model

  • Jalal, Ahmad;Kamal, Shaharyar;Kim, Daijin
    • Journal of Electrical Engineering and Technology
    • /
    • v.12 no.4
    • /
    • pp.1657-1662
    • /
    • 2017
  • Facial expression recognition systems using video devices have emerged as an important component of natural human-machine interfaces, contributing to practical applications such as security systems, behavioral science, and clinical practice. In this work, we present a new method to analyze, represent, and recognize human facial expressions from a sequence of facial images. Under the proposed framework, the overall procedure includes accurate face detection, which removes background and noise effects from the raw image sequences, and alignment of each image using vertex mask generation; the resulting 1D transform features are then reduced by principal component analysis. Finally, the reduced features are trained and tested using a Hidden Markov Model (HMM). Experimental evaluation of the proposed approach on two public facial expression video datasets, Cohn-Kanade and AT&T, achieved recognition rates of 96.75% and 96.92%, respectively, showing the superiority of the proposed approach over state-of-the-art methods.
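The final HMM classification step can be sketched with the forward algorithm: each expression class gets its own HMM, and a test sequence is assigned to the class whose model yields the highest likelihood. The transition and emission numbers below are illustrative toy values, not trained parameters:

```python
import math

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm to avoid underflow."""
    n = len(start)
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    s = sum(alpha)
    loglik = math.log(s)
    alpha = [a / s for a in alpha]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
        s = sum(alpha)
        loglik += math.log(s)
        alpha = [a / s for a in alpha]
    return loglik

# Two single-state toy models over a binary feature symbol (0 or 1):
# a "smile" model that mostly emits 0 and a "neutral" model that mostly emits 1.
seq = [0, 0, 1, 0]
smile_ll = forward_loglik(seq, [1.0], [[1.0]], [[0.8, 0.2]])
neutral_ll = forward_loglik(seq, [1.0], [[1.0]], [[0.2, 0.8]])
# The sequence is classified as the class with the larger log-likelihood.
```

In the paper's pipeline the observations would be the PCA-reduced 1D transform features (typically vector-quantized for a discrete HMM) rather than raw binary symbols.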

Flesh Tone Balance Algorithm for AWB of Facial Pictures (인물 사진을 위한 자동 톤 균형 알고리즘)

  • Bae, Tae-Wuk;Lee, Sung-Hak;Lee, Jung-Wook;Sohng, Kyu-Ik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.11C
    • /
    • pp.1040-1048
    • /
    • 2009
  • This paper proposes an automatic flesh tone balance algorithm for pictures of people. General white balance algorithms focus on neutral regions, but other objects can serve as the reference if their spectral reflectance is known; in this paper, the reference for white balance is the human face. For the experiment, the transfer characteristic of the image sensor is first analyzed, and the camera output RGB for the average face chromaticity under standard illumination is calculated. Second, the output ratio of the image is adjusted so that the RGB ratio of the face area, captured under unknown illumination, matches the precomputed ratio. The input tristimulus values XYZ can be calculated from the camera output RGB via the camera transfer matrix, and are then transformed to the standard color space (sRGB) using the sRGB transfer matrix. For display, the RGB data are encoded as eight-bit values after gamma correction. The algorithm is applied to the average face color, i.e., the light-skin patch of the Macbeth color chart, and to the average of various face colors that were actually measured.
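The core adjustment, scaling the channel gains so that the face region's RGB ratio matches a precomputed skin reference, can be sketched as follows; the reference and measured values are illustrative, not the paper's measured data:

```python
def flesh_tone_gains(face_rgb_mean, ref_rgb):
    """Per-channel gains that map the measured face-region mean RGB onto a
    reference skin-tone RGB, normalized so the green gain is 1 (a common
    AWB convention; the normalization choice is an assumption here)."""
    gains = [r / m for r, m in zip(ref_rgb, face_rgb_mean)]
    g = gains[1]
    return [x / g for x in gains]

def apply_gains(rgb, gains):
    """Apply channel gains, clipping to the 8-bit range."""
    return [min(255.0, c * k) for c, k in zip(rgb, gains)]

# Face looks too blue under unknown illumination; reference stands in for
# the Macbeth "light skin" patch under standard illumination.
measured = [160.0, 140.0, 150.0]
reference = [200.0, 160.0, 140.0]
gains = flesh_tone_gains(measured, reference)
balanced = apply_gains(measured, gains)
```

After correction the face region's R/G and B/G ratios equal those of the reference skin tone, which is the balance condition the abstract describes.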

Identification System Based on Partial Face Feature Extraction (부분 얼굴 특징 추출에 기반한 신원 확인 시스템)

  • Choi, Sun-Hyung;Cho, Seong-Won;Chung, Sun-Tae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.2
    • /
    • pp.168-173
    • /
    • 2012
  • This paper presents a new human identification algorithm that uses partial features from the uncovered portion of the face when a person wears a mask. After the face area is detected, features are extracted from the eye area above the mask. Identification is performed by comparing the acquired features with the registered ones. Features are extracted using the SIFT (Scale-Invariant Feature Transform) algorithm; the extracted features are independent of brightness and invariant to image scale and rotation. Experimental results show the effectiveness of the suggested algorithm.
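Comparing acquired against registered SIFT features is typically done by nearest-neighbour descriptor matching with Lowe's ratio test; the abstract does not specify the matching rule, so the sketch below is an assumption, using 2-D lists as stand-ins for 128-D SIFT descriptors:

```python
import math

def ratio_match(desc_a, desc_b, ratio=0.8):
    """Match descriptors from set A to set B using Lowe's ratio test:
    accept a match only when the nearest neighbour is clearly closer
    than the second nearest."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted((dist(d, e), j) for j, e in enumerate(desc_b))
        if len(ranked) > 1 and ranked[0][0] < ratio * ranked[1][0]:
            matches.append((i, ranked[0][1]))
    return matches

registered = [[0.0, 1.0], [5.0, 5.0], [9.0, 0.0]]   # enrolled eye-region features
probe = [[0.1, 1.0], [5.0, 4.9]]                     # features from a masked probe
```

An identity decision would then threshold the number (or quality) of accepted matches against each enrolled template.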

Rapid Implementation of 3D Facial Reconstruction from a Single Image on an Android Mobile Device

  • Truong, Phuc Huu;Park, Chang-Woo;Lee, Minsik;Choi, Sang-Il;Ji, Sang-Hoon;Jeong, Gu-Min
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.5
    • /
    • pp.1690-1710
    • /
    • 2014
  • In this paper, we propose a rapid implementation of 3-dimensional (3D) facial reconstruction from a single frontal face image and introduce a design for its application on a mobile device. The proposed system can effectively reconstruct human faces in 3D using an approach robust to lighting conditions and a fast depth-estimation method based on a Canonical Correlation Analysis (CCA) algorithm. The reconstruction system is built by first creating a 3D facial mapping from the personal identity vector of a face image. This mapping is then applied to real-world images captured with the built-in camera of a mobile device to form the corresponding 3D depth information. Finally, the facial texture from the face image is extracted and added to the reconstruction result. Experiments show that the implementation of this system as an Android application performs well. The advantage of the proposed method is the easy 3D reconstruction of almost all facial images captured in the real world with fast computation, as demonstrated in the Android application, which requires only a short time to reconstruct the 3D depth map.
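As a simplified stand-in for the CCA-based depth estimation, one can learn a linear map from identity vectors to depth values by least squares; this toy sketch illustrates the general idea only and is not the paper's actual CCA formulation (all dimensions and data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
W_true = rng.normal(size=(6, 10))     # hidden map: 6-D identity vector -> 10 depths
ids = rng.normal(size=(30, 6))        # training identity vectors
depths = ids @ W_true                 # training depth maps (flattened, noise-free)

# Learn the identity-to-depth mapping by least squares.
W, *_ = np.linalg.lstsq(ids, depths, rcond=None)

new_id = rng.normal(size=6)
pred = new_id @ W                     # estimated depth map for an unseen face
```

Because the mapping is applied as a couple of matrix products at inference time, depth estimation of this kind is cheap enough for a mobile device, which is the property the abstract emphasizes.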

Human Ear Detection for Biometrics (생체인식을 위한 귀 영역 검출)

  • Kim Young-Baek;Rhee Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.7
    • /
    • pp.813-816
    • /
    • 2005
  • Ear detection is an important part of a non-invasive ear recognition system. In this paper we propose a method for detecting the human ear in side-face images. The proposed method imitates the human recognition process by using feature information and color information. First, we search for the face candidate area in an input image using a skin-color model and try to find the ear area based on edge information. Then, to verify whether it really is the ear area, we use an SVM (Support Vector Machine), which is based on statistical learning theory. The method shows a high detection rate in indoor environments with stable illumination.
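The skin-color step can be sketched with a box threshold in the CbCr plane; the bounds below are a commonly cited rule of thumb for skin detection, not the model trained in the paper:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range RGB -> YCbCr (BT.601 conversion coefficients)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    """Classify a pixel as skin using a CbCr box threshold. The bounds
    (77..127, 133..173) are a widely used heuristic range, assumed here."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

skin = is_skin(200, 150, 120)      # a typical skin tone
grass = is_skin(40, 160, 40)       # a green background pixel
```

Thresholding chrominance rather than raw RGB is what gives the face-candidate search some tolerance to overall brightness changes, before the edge-based ear search and SVM verification take over.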