• Title/Summary/Keyword: face feature points


Vision System for NN-based Emotion Recognition (신경회로망 기반 감성 인식 비젼 시스템)

  • Lee, Sang-Yun;Kim, Sung-Nam;Joo, Young-Hoon;Park, Chang-Hyun;Sim, Kwee-Bo
    • Proceedings of the KIEE Conference
    • /
    • 2001.07d
    • /
    • pp.2036-2038
    • /
    • 2001
  • In this paper, we propose a neural-network-based method for intelligently recognizing human emotion using a vision system. In the proposed method, human emotion is divided into four classes (surprise, anger, happiness, sadness). We use R, G, B (red, green, blue) color image data together with gray image data to obtain a highly reliable feature point extraction rate. To this end, we propose an algorithm that extracts four feature points (eyebrow, eye, nose, mouth) from a face image acquired by a color CCD camera and derives feature vectors from them. We then apply the back-propagation algorithm to the secondary feature vector (the positions of, and distances among, the feature points). Finally, we show the practical applicability of the proposed method.

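The back-propagation step described above can be sketched as a small network mapping a "secondary feature vector" (positions of and distances among the facial feature points) to the four emotion classes. This is a minimal illustration with synthetic feature data and made-up labels, not the paper's network or training set.

```python
import numpy as np

# One-hidden-layer network trained with back-propagation on synthetic
# "secondary feature vectors" (8-D placeholders for feature-point
# positions/distances). Labels 0..3 stand in for surprise, anger,
# happiness, sadness; all values here are illustrative assumptions.
rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

X = rng.normal(size=(200, 8))
# synthetic rule so the toy task is learnable: two thresholds -> 4 classes
y = (X[:, 0] > 0).astype(int) + 2 * (X[:, 1] > 0).astype(int)

W1 = rng.normal(scale=0.1, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 4)); b2 = np.zeros(4)

for _ in range(500):                      # plain full-batch gradient descent
    h = np.tanh(X @ W1 + b1)              # hidden layer
    p = softmax(h @ W2 + b2)              # class probabilities
    grad = p.copy(); grad[np.arange(len(y)), y] -= 1.0  # dL/dlogits
    grad /= len(y)
    dW2 = h.T @ grad; db2 = grad.sum(0)
    dh = grad @ W2.T * (1 - h ** 2)       # back-propagate through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    for P, G in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= 0.3 * G

pred = softmax(np.tanh(X @ W1 + b1) @ W2 + b2).argmax(1)
accuracy = (pred == y).mean()
```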

Analysis of Face Direction and Hand Gestures for Recognition of Human Motion (인간의 행동 인식을 위한 얼굴 방향과 손 동작 해석)

  • Kim, Seong-Eun;Jo, Gang-Hyeon;Jeon, Hui-Seong;Choe, Won-Ho;Park, Gyeong-Seop
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.4
    • /
    • pp.309-318
    • /
    • 2001
  • In this paper, we describe methods for analyzing human gestures. A human interface (HI) system for gesture analysis extracts the head and hand regions after capturing an image sequence of an operator's continuous behavior using CCD cameras. Since gestures are performed with the operator's head and hand motions, we extract the head and hand regions for gesture analysis and compute geometrical information about the extracted skin regions. Head motion can be analyzed by obtaining the face direction. We assume the head is an ellipsoid in 3D coordinates and locate the facial features, such as the eyes, nose, and mouth, on its surface. If we know the center of these feature points, the angle of that center within the ellipsoid gives the direction of the face. The hand region obtained from preprocessing may include the arm as well as the hand. To extract only the hand region, we find the wrist line that divides the hand and arm regions. After separating the hand region at the wrist line, we model it as an ellipse for the analysis of hand data; the finger part is represented as a long, narrow shape. We extract hand information such as size, position, and shape.

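The face-direction idea above (head as an ellipsoid, direction given by the angle of the feature-point centroid) can be sketched geometrically. The ellipsoid axes and feature coordinates below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# Face direction from feature points placed on an ellipsoidal head model:
# normalize the points into the ellipsoid's unit-sphere frame, average
# them, and read off the centroid's angles as yaw/pitch.
def face_direction(feature_pts, center, axes):
    """Return (yaw, pitch) in degrees of the feature centroid."""
    a, b, c = axes
    p = (np.asarray(feature_pts) - center) / np.array([a, b, c])
    m = p.mean(axis=0)
    m /= np.linalg.norm(m)
    yaw = np.degrees(np.arctan2(m[0], m[2]))    # left/right rotation
    pitch = np.degrees(np.arcsin(m[1]))         # up/down rotation
    return yaw, pitch

# Frontal face: eyes/nose/mouth straight ahead of the center (+z direction)
frontal = [(0.0, 0.3, 1.0), (0.0, -0.1, 1.0), (0.0, -0.4, 1.0)]
yaw, pitch = face_direction(frontal, center=np.zeros(3), axes=(1.0, 1.3, 1.0))
```

For a frontal arrangement the yaw comes out as zero, with a small negative pitch because the mouth point sits lower than the eyes.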

Fuzzy Model-Based Emotion Recognition Using Color Image (퍼지 모델을 기반으로 한 컬러 영상에서의 감성 인식)

  • Joo, Young-Hoon;Jeong, Keun-Ho
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.3
    • /
    • pp.330-335
    • /
    • 2004
  • In this paper, we propose a technique for recognizing human emotion from color images. To do so, we first extract the skin color region from the color image using the HSI model. Second, we extract the face region using the Eigenface technique. Third, we locate the facial feature points (eyebrows, eyes, nose, mouth) in the face image and build a fuzzy model that recognizes human emotions (surprise, anger, happiness, sadness) from the structural correlation of these feature points. We then infer the human emotion from the fuzzy model. Finally, we demonstrate the effectiveness of the proposed method through experiments.
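The first step above, skin-color extraction with the HSI model, can be sketched with the standard RGB-to-HSI conversion. The hue/saturation thresholds below are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

# Skin-color segmentation in HSI space: convert RGB to hue/saturation/
# intensity, then threshold hue and saturation. Thresholds are assumed
# for illustration only.
def rgb_to_hsi(rgb):
    """rgb: float array in [0,1], shape (..., 3). Returns H (deg), S, I."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.min(rgb, axis=-1) / np.maximum(i, 1e-12)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b <= g, theta, 360.0 - theta)
    return h, s, i

def skin_mask(rgb, h_max=50.0, s_min=0.1, s_max=0.6):
    h, s, _ = rgb_to_hsi(rgb)
    return (h < h_max) & (s > s_min) & (s < s_max)

# A skin-like pixel vs. a blue pixel (synthetic example)
pixels = np.array([[0.8, 0.5, 0.4], [0.1, 0.2, 0.9]])
mask = skin_mask(pixels)
```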

A study on the lip shape recognition algorithm using 3-D Model (3차원 모델을 이용한 입모양 인식 알고리즘에 관한 연구)

  • 김동수;남기환;한준희;배철수;나상동
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 1998.11a
    • /
    • pp.181-185
    • /
    • 1998
  • Recently, communication systems have been developed that use both voice data and face images during speech, providing a higher recognition rate than voice data alone. We therefore present a lipreading method for speech image sequences that uses a 3-D facial shape model. The method uses feature information of the face image such as the opening level of the lips, the movement of the jaw, and the projection height of the lips. First, we fit the 3-D face model to the speaking face image sequence. Then, to obtain feature information, we compute the variance of the fitted 3-D shape model across the image sequence and use this variance as the recognition parameter. We use the intensity gradient values obtained from the variance of the 3-D feature points to segment recognition units from the sequential images. We then apply a discrete HMM algorithm in the recognition process, based on multiple observation sequences that fully account for the variance of the 3-D feature points. In a recognition experiment with 8 Korean vowels and 2 Korean consonants, we obtained a recognition rate of about 80% for the plosives and vowels.


A study on the lip shape recognition algorithm using 3-D Model (3차원 모델을 이용한 입모양 인식 알고리즘에 관한 연구)

  • 남기환;배철수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.6 no.5
    • /
    • pp.783-788
    • /
    • 2002
  • Recently, communication systems have been developed that use both voice data and face images during speech, providing a higher recognition rate than voice data alone. We therefore present a lipreading method for speech image sequences that uses a 3-D facial shape model. The method uses feature information of the face image such as the opening level of the lips, the movement of the jaw, and the projection height of the lips. First, we fit the 3-D face model to the speaking face image sequence. Then, to obtain feature information, we compute the variance of the fitted 3-D shape model across the image sequence and use this variance as the recognition parameter. We use the intensity gradient values obtained from the variance of the 3-D feature points to segment recognition units from the sequential images. We then apply a discrete HMM algorithm in the recognition process, based on multiple observation sequences that fully account for the variance of the 3-D feature points. In a recognition experiment with 8 Korean vowels and 2 Korean consonants, we obtained a recognition rate of about 80% for the plosives and vowels.
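The discrete-HMM recognition step in the two entries above can be sketched with the scaled forward algorithm: each class (lip-shape unit) has its own HMM, an observation sequence of quantized lip-shape symbols is scored under each, and the best-scoring class wins. All model parameters below are illustrative assumptions, not trained values from the paper.

```python
import numpy as np

# Scaled forward algorithm for a discrete HMM, plus a toy two-class
# comparison: one HMM that mostly emits symbol 0 ("closed lips") and one
# that mostly emits symbol 1 ("open lips").
def forward_log_prob(pi, A, B, obs):
    """Log-likelihood log P(obs | HMM) via the scaled forward recursion."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum()); alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_p += np.log(alpha.sum()); alpha = alpha / alpha.sum()
    return log_p

pi = np.array([0.5, 0.5])
A = np.array([[0.8, 0.2], [0.2, 0.8]])          # state transitions
B_closed = np.array([[0.9, 0.1], [0.6, 0.4]])   # emission rows per state
B_open = np.array([[0.1, 0.9], [0.4, 0.6]])

obs = [0, 0, 0, 1]                              # quantized lip-shape symbols
score_closed = forward_log_prob(pi, A, B_closed, obs)
score_open = forward_log_prob(pi, A, B_open, obs)
```

A mostly-0 sequence scores higher under the "closed" model, which is the classification rule.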

Gabor Descriptors Extraction in the SURF Feature Point for Improvement Accuracy in Face Recognition (얼굴 인식의 정확도 향상을 위한 SURF 특징점에서의 Gabor 기술어 추출)

  • Lee, Jae-Yong;Kim, Ji-Eun;Oh, Seoung-Jun
    • Journal of Broadcast Engineering
    • /
    • v.17 no.5
    • /
    • pp.808-816
    • /
    • 2012
  • Face recognition has been actively studied and developed in various fields. In recent years, interest point extraction algorithms originally used for object recognition have been applied to face recognition. In this paper we use SURF (Speeded Up Robust Features), one of the representative interest point extraction algorithms. In general, interest points extracted from human faces are less distinctive than those extracted from objects, because human faces have similar shapes; thus, the accuracy of face recognition using SURF tends to be low. To improve it, we propose a face recognition algorithm that performs interest point extraction with SURF and applies the Gabor wavelet transform to extract descriptors at the interest points. In our results, the proposed method shows about 23% better recognition accuracy than conventional SURF-based methods.
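The descriptor idea above can be sketched by sampling responses of a small Gabor filter bank in a patch around each detected interest point. The kernel parameters, patch size, and number of orientations are illustrative assumptions; the SURF detection itself is not reproduced here.

```python
import numpy as np

# Gabor descriptor at a keypoint: build Gabor kernels at several
# orientations, correlate each with the patch centered at the keypoint,
# and stack the responses into a unit-norm descriptor vector.
def gabor_kernel(ksize, sigma, theta, lam):
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_descriptor(image, kp, ksize=9, n_orient=4):
    """Concatenate Gabor-bank responses of the patch centered at kp=(row, col)."""
    r = ksize // 2
    y, x = kp
    patch = image[y - r:y + r + 1, x - r:x + r + 1]
    feats = []
    for k in range(n_orient):
        kern = gabor_kernel(ksize, sigma=3.0, theta=k * np.pi / n_orient, lam=5.0)
        feats.append((patch * kern).sum())
    v = np.array(feats)
    return v / (np.linalg.norm(v) + 1e-12)   # unit-norm descriptor

# Toy image with a vertical edge; descriptor at a point on the edge
img = np.zeros((32, 32)); img[:, 16:] = 1.0
desc = gabor_descriptor(img, (16, 16))
```

In practice descriptors like this are compared with Euclidean or cosine distance between face images' matched keypoints.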

Face Recognition using Vector Quantizer in Eigenspace (아이겐공간에서 벡터 양자기를 이용한 얼굴인식)

  • 임동철;이행세;최태영
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.5
    • /
    • pp.185-192
    • /
    • 2004
  • This paper presents face recognition using vector quantization in the eigenspace of faces. The existing eigenface method is not sufficient for representing the variations of faces. To make up for this deficiency, the proposed method clusters feature vectors by vector quantization in the eigenspace of the faces. In the training stage, face images are projected to points in the eigenspace by the eigenfaces (eigenvectors), and the set of points for each person is represented by the centroids of a vector quantizer. In the recognition stage, the vector quantizer finds the centroid with the minimum quantization error between the feature vector of the input image and the centroids in the database. Experiments were performed on 600 faces from the Faces94 database. The existing eigenface method yields a minimum of 64 misrecognitions, whereas the proposed method yields a minimum of 20 misrecognitions when 4 codevectors are used. In conclusion, the proposed method effectively improves the recognition rate by overcoming the variation of faces.
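The pipeline above can be sketched end to end: PCA eigenspace projection, per-person codebooks via a k-means-style vector quantizer, and recognition by minimum quantization error. The data is synthetic; the codebook size of 4 matches the paper's best setting, but everything else is an illustrative assumption.

```python
import numpy as np

# Eigenface projection + per-person vector quantization. A probe image is
# assigned to the person whose codebook centroid gives the smallest
# quantization error.
rng = np.random.default_rng(1)

def eigenspace(train, k):
    mean = train.mean(axis=0)
    centered = train - mean
    # eigenvectors of the small T x T Gram matrix (classic eigenface trick)
    vals, vecs = np.linalg.eigh(centered @ centered.T)
    basis = centered.T @ vecs[:, -k:]
    basis /= np.linalg.norm(basis, axis=0)
    return mean, basis

def kmeans(points, n_centroids, iters=20):
    cents = points[rng.choice(len(points), n_centroids, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((points[:, None] - cents[None]) ** 2).sum(-1), axis=1)
        for j in range(n_centroids):
            if np.any(lab == j):
                cents[j] = points[lab == j].mean(axis=0)
    return cents

# Two synthetic "people", 10 noisy 64-pixel images each
base = rng.normal(size=(2, 64))
train = np.vstack([base[i] + 0.1 * rng.normal(size=(10, 64)) for i in range(2)])
mean, basis = eigenspace(train, k=4)
books = [kmeans((train[i*10:(i+1)*10] - mean) @ basis, 4) for i in range(2)]

# Recognize a new image of person 1 by minimum quantization error
probe = (base[1] + 0.1 * rng.normal(size=64) - mean) @ basis
errors = [np.min(((cb - probe) ** 2).sum(-1)) for cb in books]
who = int(np.argmin(errors))
```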

Study on Weight Summation Storage Algorithm of Facial Recognition Landmark (가중치 합산 기반 안면인식 특징점 저장 알고리즘 연구)

  • Jo, Seonguk;You, Youngkyon;Kwak, Kwangjin;Park, Jeong-Min
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.1
    • /
    • pp.163-170
    • /
    • 2022
  • This paper introduces a method of extracting facial features from the unrefined inputs encountered in real life, and addresses the problem that object recognition models cannot guarantee ideal performance and speed, through a storage algorithm based on weight summation. Many facial recognition processes ensure accuracy in ideal situations, but their inability to cope with the numerous biases that can occur in real life is drawing attention, and this may lead to serious problems in face recognition processes closely related to security. This paper presents a method of quickly and accurately recognizing faces in real time by comparing the feature points extracted from the input against a small number of stored feature points that are not overfit to any particular bias, exploiting the fact that variables such as picture composition eventually converge to an average form.
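The weight-summation storage idea above can be sketched as a running weighted average: instead of keeping every enrollment's landmarks, one stored template per person is updated by weighted summation, so it converges toward the "average form" rather than overfitting any single capture condition. The class name, weights, and matching threshold below are all assumptions for illustration.

```python
import numpy as np

# Hypothetical LandmarkStore: keeps one template vector per person and
# updates it as a weighted running sum of incoming landmark vectors.
class LandmarkStore:
    def __init__(self):
        self.template = None
        self.total_weight = 0.0

    def add(self, landmarks, weight=1.0):
        v = np.asarray(landmarks, dtype=float)
        if self.template is None:
            self.template = v.copy(); self.total_weight = weight
        else:
            # weighted summation: new = (W*old + w*v) / (W + w)
            self.total_weight += weight
            self.template += (weight / self.total_weight) * (v - self.template)

    def match(self, landmarks, threshold=0.5):
        d = np.linalg.norm(np.asarray(landmarks) - self.template)
        return d < threshold

rng = np.random.default_rng(2)
true_face = rng.normal(size=10)          # the person's "true" landmarks
store = LandmarkStore()
for _ in range(20):                      # noisy enrollments average out
    store.add(true_face + 0.2 * rng.normal(size=10))
ok = store.match(true_face + 0.05 * rng.normal(size=10))
```

Because only one template per person is compared at query time, matching stays fast regardless of how many enrollments were summed into it.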

Realization of 3D Virtual Face Using two Sheets of 2D photographs (두 장의 2D 사진을 이용한 3D 가상 얼굴의 구현)

  • 임낙현;서경호;김태효
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.2 no.4
    • /
    • pp.16-21
    • /
    • 2001
  • In this paper, a virtual 3-dimensional face is synthesized from two 2-dimensional photographs: a front view and a side view. First, a standard model of a generic face is created. In this model, the feature points that represent the structure of the face are densely defined around the ears, eyes, nose, and lips, while other parts such as the forehead, chin, and hair are defined only roughly, because they are flat regions with fewer individual characteristics. The side photograph is then attached symmetrically to the left and right sides of the front image and gradually synthesized using an affine transformation. To remove differences in color and brightness at the junction, a linear interpolation method is used. As a result, we confirm that the proposed method, which adapts a generic face model, can produce a 3D virtual image of an individual face.

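The linear-interpolation seam removal described above can be sketched as a cross-fade: columns in a band around the junction blend the two images with a linearly ramping weight. The band width is an illustrative assumption.

```python
import numpy as np

# Blend a band of columns around the junction between two grayscale
# images with a linear alpha ramp (0 -> front image, 1 -> side image),
# so color/brightness differences fade out instead of forming a seam.
def blend_junction(front, side, seam, band=8):
    out = front.astype(float).copy()
    h, w = front.shape
    for j in range(max(seam - band, 0), min(seam + band, w)):
        alpha = (j - (seam - band)) / (2.0 * band)
        out[:, j] = (1 - alpha) * front[:, j] + alpha * side[:, j]
    out[:, min(seam + band, w):] = side[:, min(seam + band, w):]
    return out

front = np.full((4, 32), 100.0)   # darker front image
side = np.full((4, 32), 160.0)    # brighter side image
merged = blend_junction(front, side, seam=16)
```

At the seam column itself the result is the midpoint of the two brightness levels, and it ramps smoothly to each side.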

3D Head Pose Estimation Using The Stereo Image (스테레오 영상을 이용한 3차원 포즈 추정)

  • 양욱일;송환종;이용욱;손광훈
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.1887-1890
    • /
    • 2003
  • This paper presents a three-dimensional (3D) head pose estimation algorithm using stereo images. Given a pair of stereo images, we automatically extract several important facial feature points using the disparity map, the Gabor filter, and the Canny edge detector. To detect the facial feature region, we propose a region dividing method using the disparity map. In an indoor head-and-shoulder stereo image, the face region has a larger disparity than the background, so we separate the face region from the background by the divergence of disparity. To estimate the 3D head pose, we propose a 2D-3D Error Compensated SVD (EC-SVD) algorithm. We estimate the 3D coordinates of the facial features using stereo correspondence, and estimate the head pose of an input image using the EC-SVD method. Experimental results show that the proposed method is capable of estimating pose accurately.

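The SVD-based pose step above can be sketched with the standard SVD rigid alignment (Kabsch): given the 3D feature points recovered from stereo correspondence and the same points in a reference frontal pose, the head rotation is recovered in closed form. The paper's error-compensation refinement (EC-SVD) is not reproduced here, and the feature coordinates are illustrative.

```python
import numpy as np

# Closed-form rotation between two 3D point sets (rows are points):
# center both sets, SVD the cross-covariance, and correct for reflection.
def estimate_rotation(ref, obs):
    """Rotation R minimizing ||obs_centered - ref_centered @ R.T||."""
    P = ref - ref.mean(axis=0)
    Q = obs - obs.mean(axis=0)
    U, _, Vt = np.linalg.svd(Q.T @ P)
    d = np.sign(np.linalg.det(U @ Vt))      # guard against reflection
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Reference feature points (eye corners, nose tip, mouth corners)
ref = np.array([[-3.0, 2, 0], [3, 2, 0], [0, 0, 2], [-2, -2, 0], [2, -2, 0]])

# Simulate a head rotated 20 degrees in yaw and translated in space
t = np.radians(20.0)
R_true = np.array([[np.cos(t), 0, np.sin(t)],
                   [0, 1, 0],
                   [-np.sin(t), 0, np.cos(t)]])
obs = ref @ R_true.T + np.array([1.0, 2.0, 30.0])

R_est = estimate_rotation(ref, obs)
yaw = np.degrees(np.arcsin(R_est[0, 2]))    # recovered yaw angle
```

The translation drops out because both point sets are centered first; only the rotation (the head pose) remains.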