• Title/Abstract/Keyword: Facial Feature

Search results: 510 items

Facial Feature Extraction Based on Private Energy Map in DCT Domain

  • Kim, Ki-Hyun;Chung, Yun-Su;Yoo, Jang-Hee;Ro, Yong-Man
    • ETRI Journal / Vol. 29, No. 2 / pp.243-245 / 2007
  • This letter presents a new feature extraction method based on the private energy map (PEM) technique, which exploits the energy characteristics of facial images. Compared with a non-facial image, a facial image shows strong energy concentration in particular regions of its discrete cosine transform (DCT) coefficients. The PEM is generated from the energy probability of the DCT coefficients of facial images. In experiments, face recognition rates of 100% on the ORL database and 98.8% on the ETRI database were achieved. (A sketch of the map construction follows this entry.)

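A minimal sketch of how a PEM-style extractor could be built, assuming the map is an average energy probability over DCT coefficients and that features are the coefficients at the top-energy positions; this is an illustrative reading of the letter, not the authors' exact procedure (the function names and k are made up):

```python
import numpy as np
from scipy.fft import dctn  # 2D DCT (type II), SciPy >= 1.4

def private_energy_map(face_images, k=64):
    """PEM-style map: average probability that each DCT position carries
    energy, computed over a set of equally sized facial images."""
    energy = np.zeros(face_images[0].shape)
    for img in face_images:
        coeffs = dctn(img.astype(float), norm="ortho")
        e = coeffs ** 2
        energy += e / e.sum()        # normalized per-image energy
    energy /= len(face_images)       # average energy probability
    # keep the k positions with the highest average energy
    flat_idx = np.argsort(energy, axis=None)[::-1][:k]
    return np.unravel_index(flat_idx, energy.shape)

def extract_features(img, pem_positions):
    """Project a face image onto the PEM-selected DCT positions."""
    coeffs = dctn(img.astype(float), norm="ortho")
    return coeffs[pem_positions]
```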

Discrimination of Emotional States In Voice and Facial Expression

  • Kim, Sung-Ill;Yasunari Yoshitomi;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea / Vol. 21, No. 2E / pp.98-104 / 2002
  • The present study describes a combined method for recognizing human affective states such as anger, happiness, sadness, and surprise. We extracted emotional features from voice signals and facial expressions and trained recognizers on them using a hidden Markov model (HMM) and a neural network (NN). For voice, we used prosodic parameters such as pitch, energy, and their derivatives, which were trained by the HMM for recognition. For facial expressions, on the other hand, we used feature parameters extracted from thermal and visible images, which were trained by the NN for recognition. The recognition rates for the combined voice and facial-expression parameters were better than those for either set of parameters alone. The simulation results were also compared with human questionnaire results. (A score-fusion sketch follows this entry.)
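The abstract describes score-level fusion of an HMM voice recognizer and an NN facial recognizer. A hedged sketch of such late fusion, assuming each modality already produces per-class log-likelihoods (the weighting scheme is a common choice, not taken from the paper):

```python
import numpy as np

EMOTIONS = ["anger", "happiness", "sadness", "surprise"]

def fuse_scores(voice_loglik, face_loglik, w_voice=0.5):
    """Combine per-class log-likelihoods from the voice HMMs and the
    facial-expression NN by a weighted sum (score-level fusion)."""
    combined = w_voice * np.asarray(voice_loglik) \
             + (1.0 - w_voice) * np.asarray(face_loglik)
    return EMOTIONS[int(np.argmax(combined))]

# e.g. fuse_scores([-12.3, -9.1, -15.0, -11.2], [-8.0, -7.5, -9.9, -10.1])
```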

얼굴 방향에 기반을 둔 컴퓨터 화면 응시점 추적 (A Gaze Tracking based on the Head Pose in Computer Monitor)

  • 오승환;이희영
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2002년도 하계종합학술대회 논문집(3) / pp.227-230 / 2002
  • In this paper, we focus on the overall gaze direction, derived from the head pose, for human-computer interaction. To determine the user's gaze direction in an image, the facial features must be extracted accurately. To do so, we binarize the input image and locate the two eyes and the mouth using the similarity of each block (aspect ratio, size, and average gray value) and the geometric structure of the face in the binarized image. To determine the head orientation, we construct an imaginary plane on the lines formed by the real facial features and the pinhole of the camera; we call this the virtual facial plane. Its position is estimated from the facial features projected onto the image plane, and the gaze direction is obtained from the surface normal vector of the virtual facial plane. Because the study uses an ordinary PC camera, it should contribute to the practical use of gaze-tracking technology. (A sketch of the plane-normal computation follows this entry.)

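The geometric core of the method can be sketched compactly: given 3D positions of the two eyes and the mouth, the gaze direction is taken along the surface normal of the virtual facial plane. The argument names and the sign convention below are assumptions:

```python
import numpy as np

def facial_plane_normal(left_eye, right_eye, mouth):
    """Unit normal of the virtual facial plane through three 3D features."""
    p0, p1, p2 = map(np.asarray, (left_eye, right_eye, mouth))
    n = np.cross(p1 - p0, p2 - p0)   # perpendicular to the facial plane
    n /= np.linalg.norm(n)
    return n if n[2] < 0 else -n     # orient toward the camera (-z assumed)

# the gaze ray starts near the eye midpoint and runs along this normal
```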

The Facial Expression Recognition using the Inclined Face Geometrical information

  • Zhao, Dadong;Deng, Lunman;Song, Jeong-Young
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2012년도 추계학술대회 / pp.881-886 / 2012
  • This paper addresses facial expression recognition based on geometric information from inclined faces. Because the mouth plays a key role in expressing emotion, the features used here are based mainly on the shape of the mouth, followed by the eyes and eyebrows. The feature values are dispersed via a weighting function, and an expression classification method with good classification performance is proposed; the final recognition model is then constructed. (An illustrative weighting sketch follows this entry.)

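A hedged sketch of the kind of weighted geometric classification the abstract describes, assuming normalized shape measurements for mouth, eyes, and eyebrows and a nearest-prototype rule; the paper's actual weighting function is not given in the abstract, so the weights here are illustrative:

```python
import numpy as np

# illustrative weights: mouth dominates, then eyes, then eyebrows
WEIGHTS = np.array([0.6, 0.25, 0.15])

def classify_expression(features, prototypes):
    """features: [mouth_shape, eye_shape, eyebrow_shape], each normalized.
    prototypes: dict mapping expression label -> prototype feature vector."""
    f = WEIGHTS * np.asarray(features, dtype=float)
    return min(prototypes,
               key=lambda lbl: np.linalg.norm(f - WEIGHTS * np.asarray(prototypes[lbl])))
```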

Facial Expression Recognition Method Based on Residual Masking Reconstruction Network

  • Jianing Shen;Hongmei Li
    • Journal of Information Processing Systems / Vol. 19, No. 3 / pp.323-333 / 2023
  • Facial expression recognition can aid the development of fatigue-driving detection, teaching-quality evaluation, and other fields. In this study, a facial expression recognition method with a residual masking reconstruction network as its backbone was proposed to achieve more efficient expression recognition and classification. The residual layers acquire and capture the information features of the input image, and the masking layers supply the weight coefficients corresponding to different information features, enabling accurate and effective analysis of images of different sizes. To further improve expression analysis, the loss function of the model was optimized in both the feature dimension and the data dimension to strengthen the mapping between facial features and emotion labels. Simulation results show that the ROC (AUC) of the proposed method remained above 0.9995, accurately distinguishing different expressions, and the precision was 75.98%, indicating the excellent performance of the facial expression recognition model. (A sketch of a residual masking block follows this entry.)
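A minimal PyTorch sketch of a residual block gated by a learned mask, in the spirit of the described backbone; the layer sizes and the sigmoid gating are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ResidualMaskingBlock(nn.Module):
    """Residual block whose output is modulated by a learned spatial mask."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        # masking branch: per-position weight coefficients in [0, 1]
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.body(x)
        return torch.relu(x + self.mask(h) * h)  # masked residual connection
```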

2D 얼굴 영상을 이용한 로봇의 감정인식 및 표현시스템 (Emotion Recognition and Expression System of Robot Based on 2D Facial Image)

  • 이동훈;심귀보
    • 제어로봇시스템학회논문지 / Vol. 13, No. 4 / pp.371-376 / 2007
  • This paper presents an emotion recognition and expression system for an intelligent robot such as a home or service robot. Emotions are recognized from facial images, using the motion and position of multiple facial features. A tracking algorithm recognizes a moving user from the mobile robot, and a facial-region detection algorithm removes skin-colored hands and the non-facial background from the captured user image. After normalization operations, which enlarge or reduce the image according to the distance of the detected facial region and rotate it according to the angle of the face, the mobile robot obtains a facial image of fixed size. A multi-feature selection algorithm is then implemented so that the robot can recognize the user's emotion. A multilayer perceptron, a type of artificial neural network (ANN), is used as the pattern recognizer, with the back-propagation (BP) algorithm for learning. The recognized emotion is expressed on a graphic LCD: two coordinates are updated according to the emotion output by the ANN, and the parameters of the facial elements (eyes, eyebrows, mouth) change with these coordinates. With this system, complex human emotions are expressed through the avatar on the LCD. (A sketch of the MLP stage follows this entry.)
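A minimal sketch of the pattern-recognition stage, using scikit-learn's MLPClassifier as a stand-in for the paper's back-propagation-trained multilayer perceptron; the label set and network size are assumptions:

```python
from sklearn.neural_network import MLPClassifier

# assumed label set for illustration
EMOTIONS = ["neutral", "happiness", "sadness", "anger", "surprise"]

def train_emotion_mlp(X, y):
    """X: rows of facial-feature vectors (positions/motions of eyes,
    eyebrows, mouth); y: emotion label for each row."""
    clf = MLPClassifier(hidden_layer_sizes=(32,),
                        solver="sgd",           # back-propagation-style SGD
                        learning_rate_init=0.01,
                        max_iter=2000)
    clf.fit(X, y)
    return clf
```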

체질진단에 활용되는 안면 특징 변수들의 반복성에 대한 예비 연구 (A Preliminary Study on the Repeatability of Facial Feature Variables Used in the Sasang Constitutional Diagnosis)

  • 노민영;김종열;도준형
    • 사상체질의학회지 / Vol. 29, No. 1 / pp.29-39 / 2017
  • Objectives: Facial features can be utilized as an indicator in Korean medical diagnosis, and they are often measured with a diagnostic device for objectivity. Accordingly, the reliability of the features obtained from the device must be verified for an accurate diagnosis. In this study, we evaluate the repeatability of the facial feature variables produced by the Sasang Constitutional Analysis Tool (SCAT) for Sasang constitutional face diagnosis. Methods: Facial pictures of two subjects were each taken 24 times over two days according to a standard guideline. To evaluate repeatability, the coefficient of variation was calculated for the facial features extracted from the frontal and profile images. Results: The coefficient of variation was less than 10% for most facial features, except those related to the upper lip, trichion, and chin. Conclusions: The coefficient of variation was small for most features, which enables objective and reliable analysis of the face. However, some features showed low reliability because the locations of their associated facial landmarks are ambiguous; a clear criterion for locating these landmarks is needed to resolve this. (The metric is sketched after this entry.)
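The repeatability metric itself is simple. A sketch of the coefficient of variation over repeated measurements of one feature, with the 10% threshold taken from the Results:

```python
import numpy as np

def coefficient_of_variation(measurements):
    """CV (%) of repeated measurements of one facial feature."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

# e.g. a feature measured 24 times counts as repeatable here if CV < 10%
```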

방향 회전에 불변한 얼굴 영역 분할과 LBP를 이용한 얼굴 검출 (Face Detection using Orientation(In-Plane Rotation) Invariant Facial Region Segmentation and Local Binary Patterns(LBP))

  • 이희재;김하영;이다빛;이상국
    • 정보과학회 논문지 / Vol. 44, No. 7 / pp.692-702 / 2017
  • Face detection using LBP-based feature descriptors cannot represent the shape information of the face or the spatial information among facial components such as the eyes, nose, and mouth. To address this, previous studies divided the face image into multiple rectangular subregions. However, because each study used a different number and size of subregions, the segmentation criterion appropriate to a given database is ambiguous; the LBP histogram dimension grows in proportion to the number of subregions; and sensitivity to in-plane rotation of the face increases sharply as the number of subregions grows. This paper proposes a new subregion segmentation method that addresses both the rotation problem and the feature-dimension problem of LBP-based descriptors. In experiments, the proposed method achieved a detection accuracy of 99.0278% on rotated single-face images. (A sketch of the subregion-histogram baseline follows this entry.)
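For context, the baseline the paper improves on can be sketched as uniform-LBP histograms pooled over a rectangular grid, e.g. with scikit-image; the 4x4 grid below is an arbitrary choice, which is precisely the ambiguity the paper criticizes:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_region_histograms(gray_face, grid=(4, 4), P=8, R=1):
    """Concatenate uniform-LBP histograms over a grid of subregions."""
    lbp = local_binary_pattern(gray_face, P, R, method="uniform")
    n_bins = P + 2                      # uniform patterns + "non-uniform" bin
    h, w = lbp.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                       j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins))
            feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)        # dimension grows with the grid size
```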

간단한 사용자 인터페이스에 의한 벡터 그래픽 캐릭터의 자동 표정 생성 시스템 (Automatic facial expression generation system of vector graphic character by simple user interface)

  • 박태희;김재호
    • 한국멀티미디어학회논문지 / Vol. 12, No. 8 / pp.1155-1163 / 2009
  • This paper proposes an automatic facial expression generation system for vector graphic characters using a Gaussian process model. The proposed method extracts principal feature vectors from 26 expression data sets of a character, redefined on the basis of Russell's dimensional model of internal emotional states. From the extracted high-dimensional feature vectors, a low-dimensional feature vector is found with a Gaussian process model called the SGPLVM, and a probability density function (PDF) is learned. All parameters of the PDF are estimated by maximizing the likelihood of the training expression data, and the result is used to let the user select a desired facial expression in real time in a two-dimensional space. Simulation results confirm that the proposed expression generation program works well even on a small expression dataset and that users can generate a variety of continuous character expressions without prior knowledge of the relationship between expressions and emotions. (A latent-space sketch follows this entry.)

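A hedged sketch of the latent-space step, using GPy's plain GPLVM as a stand-in for the SGPLVM (GPy and the variable names are assumptions; the scaling variant is omitted). Y would hold the 26 expression feature vectors, and the learned 2D latent space is where the user picks expressions:

```python
import numpy as np
import GPy

def fit_expression_latent_space(Y):
    """Y: (26, D) matrix of expression feature vectors, one row per expression.
    Learns a 2D latent space by maximizing the data likelihood."""
    model = GPy.models.GPLVM(Y, input_dim=2)   # 2D latent space for the UI
    model.optimize(messages=False)
    return model

# a point picked in the 2D space maps back to expression parameters:
# mean, var = model.predict(np.array([[x, y]]))
```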

스테레오 비전을 이용한 마커리스 정합 : 특징점 추출 방법과 스테레오 비전의 위치에 따른 정합 정확도 평가 (Markerless Image-to-Patient Registration Using Stereo Vision: Comparison of Registration Accuracy by Feature Selection Method and Location of Stereo Vision System)

  • 주수빈;문정환;신기영
    • 전자공학회논문지 / Vol. 53, No. 1 / pp.118-125 / 2016
  • This paper evaluates the performance of image-to-patient registration algorithms that use stereo vision and CT images for surgical navigation in the facial region. Image-to-patient registration proceeds by extracting feature points from the stereo vision images, computing 3D coordinates from them, and registering those coordinates to the 3D CT image. Among the five combinations that can be formed from three facial feature extraction methods and three registration methods, the combination with the highest registration accuracy is identified, and accuracy is also compared as the head rotates. In the experiments, registration using the Active Appearance Model and Pseudo Inverse Matching was most accurate for head rotations within about 20 degrees, whereas Speeded Up Robust Features with Iterative Closest Point was more accurate beyond 20 degrees. These results suggest using the Active Appearance Model with Pseudo Inverse Matching within a 20-degree rotation range, and Speeded Up Robust Features with Iterative Closest Point beyond it, to reduce registration error. (A rigid-alignment sketch follows this entry.)
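Both registration routines ultimately rest on rigid point-set alignment. A sketch of the SVD (Kabsch) solution for corresponding 3D landmark points, which is the usual inner step of ICP-style registration; the AAM/SURF feature extraction itself is beyond an abstract-level sketch:

```python
import numpy as np

def rigid_align(P, Q):
    """Best-fit rotation R and translation t mapping points P onto Q
    (both (N, 3), rows corresponding), via the SVD/Kabsch solution."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# registration error, e.g. RMS of ||R p + t - q|| over landmark pairs
```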