• Title/Summary/Keyword: Facial Information


Implementation of an automatic face recognition system using the object centroid (무게중심을 이용한 자동얼굴인식 시스템의 구현)

  • 풍의섭;김병화;안현식;김도현
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.8
    • /
    • pp.114-123
    • /
    • 1996
  • In this paper, we propose an automatic recognition algorithm that uses the object centroid of a facial image. First, we separate the facial image from the background using the chroma-key technique and find the centroid of the separated facial region. Second, we locate the nose in the facial image based on knowledge of human faces and the coordinates of the object centroid, and we automatically compute 17 feature parameters. Finally, we recognize the facial image by feeding the feature parameters into a neural network trained with the error backpropagation algorithm. Experiments with the proposed recognition system show that facial images can be recognized despite variations in image size and position.

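The entry above describes chroma-key separation of the face from the background followed by computation of the object centroid. Below is a minimal sketch of those two steps in NumPy; the key color, tolerance, and test image are illustrative assumptions, and the 17 feature parameters and backpropagation network from the paper are not reproduced.

```python
# Sketch: chroma-key foreground mask and object centroid (assumed values).
import numpy as np

def face_mask_by_chroma_key(rgb, key_color=(0, 255, 0), tol=60):
    """Return a boolean mask of pixels that differ from the key (background) color."""
    diff = np.linalg.norm(rgb.astype(float) - np.array(key_color, float), axis=-1)
    return diff > tol  # True where the pixel belongs to the face/foreground

def object_centroid(mask):
    """Centroid (row, col) of the foreground pixels in a boolean mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

if __name__ == "__main__":
    image = np.zeros((120, 100, 3), np.uint8)
    image[..., 1] = 255                    # green background
    image[30:90, 20:80] = (200, 160, 140)  # stand-in "face" region
    mask = face_mask_by_chroma_key(image)
    print(object_centroid(mask))           # approx. (59.5, 49.5)
```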

A Study on Effective Facial Expression of 3D Character through Variation of Emotions (Model using Facial Anatomy) (감정변화에 따른 3D캐릭터의 표정연출에 관한 연구 (해부학적 구조 중심으로))

  • Kim, Ji-Ae
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.7
    • /
    • pp.894-903
    • /
    • 2006
  • Rapid growth in hardware technology has driven the development and expansion of various forms of digital moving-picture content, including 3D. 3D digital techniques are used in diverse areas such as animation, virtual reality, film, advertising, and games. 3D characters in digital moving pictures play a central role in communicating emotions and information to users through sound, facial expression, and characteristic motion. Interest in 3D motion and facial expression is growing as 3D character design is used more often and in a wider range of applications. This study examines facial expression as an effective means of conveying implicit emotion, investigates 3D characters' facial expressions and the underlying muscle movements based on human anatomy, and seeks an effective method of staging facial expression. Finally, the differences between 2D and 3D characters are examined in light of the author's preceding research.

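The entry above is a design study rather than an algorithm, but the anatomy-driven muscle movement it discusses is commonly approximated in 3D tools by blend shapes: weighted vertex offsets added to a neutral mesh. The sketch below illustrates only that general idea; the shape names and weights are assumptions, not anything taken from the paper.

```python
# Sketch: blend-shape style expression mixing (illustrative names and weights).
import numpy as np

def blend_expression(neutral, targets, weights):
    """neutral: (N, 3) vertices; targets: dict name -> (N, 3); weights: dict name -> float."""
    result = neutral.copy()
    for name, w in weights.items():
        result += w * (targets[name] - neutral)  # add each weighted vertex offset
    return result

neutral = np.zeros((4, 3))
targets = {"smile": np.array([[0.0, 0.1, 0.0]] * 4),
           "brow_raise": np.array([[0.0, 0.0, 0.2]] * 4)}
print(blend_expression(neutral, targets, {"smile": 0.7, "brow_raise": 0.3}))
```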

Robust Facial Expression Recognition using PCA Representation (PCA 표상을 이용한 강인한 얼굴 표정 인식)

  • Shin Young-Suk
    • Korean Journal of Cognitive Science
    • /
    • v.16 no.4
    • /
    • pp.323-331
    • /
    • 2005
  • This paper proposes an improved system for recognizing facial expressions over various internal states that is illumination-invariant and requires no detectable cue such as a neutral expression. As preprocessing to extract the facial expression information, a whitening step was applied: the mean of the images is set to zero and the variances are equalized to unit variance, which reduces much of the variability due to lighting. After the whitening step, we used facial expression information based on a principal component analysis (PCA) representation that excludes the first principal component. It is therefore possible to extract features from the facial expression images without a detectable cue of a neutral expression. The experimental results also show that various and natural facial expressions can be recognized, because recognition is based on a dimension model of internal states and is performed on images selected randomly from facial expression images corresponding to 83 internal emotional states.

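The whitening and PCA preprocessing described above can be sketched directly in NumPy: zero-mean, unit-variance normalization of vectorized images, followed by a PCA projection that drops the first principal component. The random data and the number of retained components below are illustrative assumptions.

```python
# Sketch: whitening plus a PCA representation that skips the first principal component.
import numpy as np

def whiten(images):
    """images: (n_samples, n_pixels). Zero mean, unit variance per pixel."""
    mean = images.mean(axis=0)
    std = images.std(axis=0) + 1e-8
    return (images - mean) / std

def pca_features_drop_first(images, n_components=30):
    X = whiten(images)                         # already zero-mean per pixel
    _, _, vt = np.linalg.svd(X, full_matrices=False)  # rows of vt are principal axes
    axes = vt[1:n_components + 1]              # skip the first principal component
    return X @ axes.T                          # projected feature vectors

features = pca_features_drop_first(np.random.rand(50, 32 * 32))
print(features.shape)                          # (50, 30)
```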

Web-based 3D Face Modeling System (웹기반 3차원 얼굴 모델링 시스템)

  • 김응곤;송승헌
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.5 no.3
    • /
    • pp.427-433
    • /
    • 2001
  • This paper proposes a web-based 3-dimensional face modeling system that produces a realistic facial model efficiently without the 3D scanner or camera used in traditional methods. Without expensive image-input equipment, we can easily create 3D models using only front and side images. The system makes 3D facial models available by connecting to a facial modeling server on the WWW, independent of specific platforms and software. The system is implemented using the Java 3D API, which provides the functions and conveniences of established graphics libraries. It has a client/server architecture consisting of a user connection module and a 3D facial model creation module. Clients connect to the facial modeling server, input two facial photographs, detect the feature points, and then create a 3D facial model by modifying a generic facial model with those points according to the prescribed procedure, using only a web browser.

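One idea in the entry above, recovering 3D feature coordinates from a front image (x, y) and a side image (z, y) and moving the matching vertices of a generic model, can be sketched as follows. The point names, coordinates, and simple vertex-replacement rule are illustrative assumptions; the paper's actual procedure is built on the Java 3D API.

```python
# Sketch: fuse front/side feature points into 3D and adjust a generic model (assumed data).
import numpy as np

def combine_front_side(front_pts, side_pts):
    """front_pts, side_pts: dict name -> (u, v) image coordinates."""
    points3d = {}
    for name in front_pts:
        x, y = front_pts[name]
        z, _ = side_pts[name]          # take depth from the side view
        points3d[name] = np.array([x, y, z], float)
    return points3d

def deform_generic_model(generic, targets):
    """Move each named generic-model vertex to its measured 3D position."""
    return {name: targets.get(name, pos) for name, pos in generic.items()}

front = {"nose_tip": (0.50, 0.55), "chin": (0.50, 0.90)}
side = {"nose_tip": (0.30, 0.55), "chin": (0.18, 0.90)}
generic = {"nose_tip": np.array([0.5, 0.5, 0.25]), "chin": np.array([0.5, 0.95, 0.1])}
print(deform_generic_model(generic, combine_front_side(front, side)))
```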

Lip Shape Synthesis of the Korean Syllable for Human Interface (휴먼인터페이스를 위한 한글음절의 입모양합성)

  • 이용동;최창석;최갑석
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.19 no.4
    • /
    • pp.614-623
    • /
    • 1994
  • Synthesizing speech and facial images is necessary for a human interface in which people and machines converse as naturally as humans do. The target of this paper is synthesizing the facial images. In synthesizing the facial images, a three-dimensional (3-D) shape model of the face is used to realize variations in facial expression and lip shape. The various facial expressions and lip shapes matched to the syllables are synthesized by deforming the three-dimensional model on the basis of facial muscular actions. Combinations of the consonants and the vowels make 14,364 syllables. The vowels determine most lip shapes, while the consonants affect some of them. To determine the lip shapes, this paper investigates all the syllables and classifies the lip shape patterns according to the vowels and the consonants. As a result, the lip shapes are classified into 8 patterns for the vowels and 2 patterns for the consonants. The paper then determines synthesis rules for the classified lip shape patterns. This method permits us to obtain natural facial images with various facial expressions and lip shape patterns.

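The classification idea above, assigning a lip shape pattern per syllable mainly from its vowel, can be sketched by decomposing a precomposed Hangul syllable into initial, medial, and final jamo and looking up the vowel. The decomposition arithmetic is standard Unicode; the 8-way grouping below is a placeholder, not the paper's actual rule table.

```python
# Sketch: Hangul syllable decomposition and a placeholder vowel -> lip-pattern lookup.
VOWELS = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"       # 21 medial vowels, jungseong order
VOWEL_PATTERN = {v: i % 8 for i, v in enumerate(VOWELS)}  # placeholder 8-class grouping

def decompose(syllable):
    """Split one precomposed Hangul syllable into (initial, medial, final) indices."""
    code = ord(syllable) - 0xAC00
    if not 0 <= code < 11172:
        raise ValueError("not a precomposed Hangul syllable")
    return code // 588, (code % 588) // 28, code % 28

def lip_pattern(syllable):
    _, medial, _ = decompose(syllable)
    return VOWEL_PATTERN[VOWELS[medial]]

print(lip_pattern("말"), lip_pattern("물"))   # vowel-driven pattern indices
```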

The Reduction Method of Facial Blemishes using Morphological Operation (모폴로지 연산을 이용한 얼굴 잡티 제거 기법)

  • Goo, Eun-jin;Heo, Woo-hyung;Kim, Mi-kyung;Cha, Eui-young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2013.05a
    • /
    • pp.364-367
    • /
    • 2013
  • In this paper, we propose a method for reducing facial blemishes using morphological operations. First, we detect the skin region using the pixel data of each RGB channel. We create a histogram of the skin region for each of the R, G, and B channels and save the three most frequent pixel values in each channel. After that, we find facial blemishes using the black-hat operation. Each blemish pixel value is changed to the average of its own value, its 8-neighborhood pixel values, and the high-frequency pixel values, and the blemish pixels are then blurred with a median filter. In tests on facial pictures containing blemishes, we show that correcting the facial skin by removing blemishes in this way is more effective than correcting it simply by brightening.

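The blemish-detection step described above maps naturally onto OpenCV's black-hat transform followed by a median filter; a minimal sketch is shown below. The kernel size, threshold, grayscale conversion, and simple pixel replacement are illustrative assumptions, and the per-channel skin histograms from the paper are omitted.

```python
# Sketch: black-hat blemish mask plus median-filter replacement (assumed parameters).
import cv2
import numpy as np

def reduce_blemishes(bgr, kernel_size=9, thresh=20, blur_size=5):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)   # small dark spots stand out
    _, blemish_mask = cv2.threshold(blackhat, thresh, 255, cv2.THRESH_BINARY)
    blurred = cv2.medianBlur(bgr, blur_size)
    out = bgr.copy()
    out[blemish_mask > 0] = blurred[blemish_mask > 0]   # replace only the blemish pixels
    return out

if __name__ == "__main__":
    face = np.full((64, 64, 3), 180, np.uint8)
    face[30:33, 30:33] = 60                             # a small dark "blemish"
    print(np.abs(reduce_blemishes(face).astype(int) - face.astype(int)).sum() > 0)
```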

Face Detection using Orientation(In-Plane Rotation) Invariant Facial Region Segmentation and Local Binary Patterns(LBP) (방향 회전에 불변한 얼굴 영역 분할과 LBP를 이용한 얼굴 검출)

  • Lee, Hee-Jae;Kim, Ha-Young;Lee, David;Lee, Sang-Goog
    • Journal of KIISE
    • /
    • v.44 no.7
    • /
    • pp.692-702
    • /
    • 2017
  • Face detection using LBP-based feature descriptors has the limitation that it cannot represent spatial information between the facial shape and facial components such as the eyes, nose, and mouth. To address this, previous research divided the facial image into a number of square sub-regions. However, since the sub-regions vary in number and size, the criteria for dividing sub-regions appropriately for the database used in an experiment are ambiguous; the dimension of the LBP histogram grows in proportion to the number of sub-regions; and as the number of sub-regions increases, the sensitivity to facial orientation (in-plane rotation) increases significantly. In this paper, we present a novel facial region segmentation method that addresses the in-plane rotation issues associated with LBP-based feature descriptors and limits the dimensionality of the feature descriptor. The proposed method showed a detection accuracy of 99.0278% on single facial images rotated in-plane.
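For context, below is a minimal sketch of the conventional LBP pipeline the entry above builds on (and whose square sub-region division it criticizes): uniform LBP codes from scikit-image and a fixed grid of per-cell histograms. The paper's proposed rotation-invariant segmentation is not reproduced here.

```python
# Sketch: uniform LBP with a conventional square grid of sub-region histograms.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_descriptor(gray, grid=(4, 4), P=8, R=1):
    codes = local_binary_pattern(gray, P, R, method="uniform")   # values in [0, P + 1]
    n_bins = P + 2
    h, w = gray.shape
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = codes[i * h // grid[0]:(i + 1) * h // grid[0],
                         j * w // grid[1]:(j + 1) * w // grid[1]]
            hists.append(np.histogram(cell, bins=n_bins, range=(0, n_bins))[0])
    hist = np.concatenate(hists).astype(float)
    return hist / hist.sum()                                     # normalized descriptor

print(lbp_descriptor(np.random.randint(0, 256, (96, 96)).astype(np.uint8)).shape)  # (160,)
```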

Reconstruction of High-Resolution Facial Image Based on Recursive Error Back-Projection of Top-Down Machine Learning (하향식 기계학습의 반복적 오차 역투영에 기반한 고해상도 얼굴 영상의 복원)

  • Park, Jeong-Seon;Lee, Seong-Whan
    • Journal of KIISE: Software and Applications
    • /
    • v.34 no.3
    • /
    • pp.266-274
    • /
    • 2007
  • This paper proposes a new method for reconstructing a high-resolution facial image from a low-resolution facial image based on top-down machine learning and recursive error back-projection. A face is represented by a linear combination of prototypes of shape and of texture. With the shape and texture information at each pixel of a given low-resolution facial image, we can estimate optimal coefficients for the linear combinations of shape prototypes and of texture prototypes by solving least-squares minimizations. A high-resolution facial image can then be obtained by applying the optimal coefficients to the linear combination of the high-resolution prototypes. In addition, a recursive error back-projection procedure is applied to improve the reconstruction accuracy of the high-resolution facial image. The encouraging results show that the proposed method can improve face recognition performance by reconstructing high-resolution facial images from low-resolution images captured at a distance.
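The reconstruction described above can be sketched as follows: fit coefficients of low-resolution prototypes to the input by least squares, apply the same coefficients to the high-resolution prototypes, and refine with a simple error back-projection loop. The random prototype matrices are stand-ins; the paper learns separate shape and texture prototypes.

```python
# Sketch: prototype-based reconstruction with a simple error back-projection loop.
import numpy as np

def reconstruct(x_low, protos_low, protos_high, n_iters=5, step=0.5):
    """protos_low: (d_low, k); protos_high: (d_high, k); x_low: (d_low,)."""
    coeff, *_ = np.linalg.lstsq(protos_low, x_low, rcond=None)
    x_high = protos_high @ coeff
    for _ in range(n_iters):                    # recursive error back-projection
        err_low = x_low - protos_low @ coeff    # residual in the low-resolution domain
        corr, *_ = np.linalg.lstsq(protos_low, err_low, rcond=None)
        coeff += step * corr                    # matters mainly when the forward model is approximate
        x_high = protos_high @ coeff
    return x_high

rng = np.random.default_rng(0)
P_low, P_high = rng.normal(size=(64, 10)), rng.normal(size=(256, 10))
print(reconstruct(P_low @ rng.normal(size=10), P_low, P_high).shape)   # (256,)
```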

Head Gesture Recognition using Facial Pose States and Automata Technique (얼굴의 포즈 상태와 오토마타 기법을 이용한 헤드 제스처 인식)

  • Oh, Seung-Taek;Jun, Byung-Hwan
    • Journal of KIISE: Software and Applications
    • /
    • v.28 no.12
    • /
    • pp.947-954
    • /
    • 2001
  • In this paper, we propose a method for recognizing various head gestures by applying an automata technique to the sequence of facial pose states. Facial regions are detected using the optimal facial color of the I component in the YIQ model and an adaptively selected difference image. Eye regions are extracted using the Sobel operator, projection, and the geometric location of the eyes. Hierarchical feature analysis is used to classify facial states, and the automata technique is applied to the sequence of facial pose states to recognize 13 gestures, including gaze upward, downward, leftward, rightward, forward, and backward; left wink, right wink, left double wink, and right double wink; and yes and no. In experiments on a total of 1,488 frames acquired from 8 persons, the method achieves a 99.3% extraction rate for facial regions, a 95.3% extraction rate for eye regions, a 94.1% recognition rate for facial states, and a 99.3% recognition rate for head gestures.

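The automaton idea above can be sketched as a small finite-state machine driven by a sequence of pose states. The state names and the single "Yes" pattern below (forward, then downward, then forward) are illustrative assumptions, not the paper's 13-gesture automaton.

```python
# Sketch: a tiny pose-state automaton for one illustrative "Yes" gesture.
def make_yes_automaton():
    # state machine: 0 --downward--> 1 --forward--> 2 (accepting)
    transitions = {(0, "downward"): 1, (1, "forward"): 2}
    return transitions, 0, {2}

def recognize(pose_sequence, transitions, start, accepting):
    state = start
    for pose in pose_sequence:
        state = transitions.get((state, pose), state)   # stay put on other poses
        if state in accepting:
            return True
    return False

trans, start, accept = make_yes_automaton()
print(recognize(["forward", "downward", "forward", "forward"], trans, start, accept))  # True
print(recognize(["leftward", "forward", "rightward"], trans, start, accept))           # False
```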

Facial Expression Recognition using Face Alignment and AdaBoost (얼굴정렬과 AdaBoost를 이용한 얼굴 표정 인식)

  • Jeong, Kyungjoong;Choi, Jaesik;Jang, Gil-Jin
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.11
    • /
    • pp.193-201
    • /
    • 2014
  • This paper presents a facial expression recognition system using face detection, face alignment, facial unit extraction, and training and testing algorithms based on AdaBoost classifiers. First, the face region is found by a face detector. From that result, a face alignment algorithm extracts feature points. The facial units are a subset of the action units generated by combining the obtained feature points. Facial units are generally more effective for smaller databases, represent facial expressions more efficiently, and reduce computation time, and hence can be applied to real-time scenarios. Experimental results in real scenarios showed that the proposed system achieves recognition rates above 90%.
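The final classification stage described above can be sketched with scikit-learn's AdaBoost classifier applied to facial-unit feature vectors. The random features and the three expression labels are stand-ins; the paper's face detection, alignment, and facial-unit extraction are not reproduced.

```python
# Sketch: AdaBoost classification of (stand-in) facial-unit feature vectors.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 20))       # 20-dim facial-unit features (stand-in)
y_train = rng.integers(0, 3, size=300)     # e.g. 0 = neutral, 1 = happy, 2 = angry
X_test = rng.normal(size=(10, 20))

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(clf.predict(X_test))                 # predicted expression labels
```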