• Title/Summary/Keyword: facial feature

Search results: 517 items (processing time: 0.023 sec)

이미지 자동배치를 위한 얼굴 방향성 검출 (Detection of Facial Direction for Automatic Image Arrangement)

  • 동지연;박지숙;이환용
    • Journal of Information Technology Applications and Management / Vol. 10, No. 4 / pp.135-147 / 2003
  • With the development of multimedia and optical technologies, application systems based on facial features have recently attracted increasing interest from researchers. Previous research efforts in face processing mainly used frontal images in order to recognize the human face visually and to extract facial expressions. However, applications such as image database systems, which support queries based on facial direction, and image arrangement systems, which automatically place facial images in digital albums, deal with the directional characteristics of a face. In this paper, we propose a method to detect facial direction by using facial features. In the proposed method, the facial trapezoid is defined by detecting points for the eyes and the lower lip. Then the facial direction formula, which calculates the left and right facial direction, is defined using statistical data about the ratio of the right and left areas of the facial trapezoid. The proposed method gives an accurate estimate of the horizontal rotation of a face within an error tolerance of $\pm1.31$ degrees and takes an average execution time of 3.16 sec.
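
To make the area-ratio idea concrete, here is a minimal Python sketch that estimates horizontal rotation from three hypothetical feature points (the two eyes and the lower lip). The abstract does not give the exact trapezoid definition or the statistical mapping to degrees, so the split through the lip and the calibration constant k are assumptions for illustration only.

```python
def triangle_area(a, b, c):
    """Area of triangle abc via the shoelace formula."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def facial_direction(left_eye, right_eye, lower_lip, k=90.0):
    """Estimate horizontal head rotation from the left/right area split of the
    eye-eye-lip region.  The split line runs vertically through the lower lip;
    k is a hypothetical calibration constant standing in for the paper's
    statistically derived mapping from area ratio to degrees."""
    split = (lower_lip[0], left_eye[1])   # point on the eye line above the lip
    left_area = triangle_area(left_eye, split, lower_lip)
    right_area = triangle_area(split, right_eye, lower_lip)
    ratio = (right_area - left_area) / (right_area + left_area)
    return k * ratio                      # sign encodes left/right rotation

# Example with made-up pixel coordinates: a face turned slightly to one side.
print(facial_direction((80, 100), (160, 100), (118, 190)))  # ~4.5 degrees
```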

Hybrid-Feature Extraction for the Facial Emotion Recognition

  • Byun, Kwang-Sub;Park, Chang-Hyun;Sim, Kwee-Bo;Jeong, In-Cheol;Ham, Ho-Sang
    • Institute of Control, Robotics and Systems: Conference Proceedings / ICCAS 2004 / pp.1281-1285 / 2004
  • There are numerous emotions in the human world. Humans express and recognize emotion through various channels, for example the eyes, nose, and mouth. In particular, emotion recognition from facial expressions can be very flexible and robust because it utilizes these various channels. The hybrid-feature extraction algorithm is based on this human process: it uses geometric feature extraction together with a color-distributed histogram, and the input emotion is then classified through independent, parallel learning of neural networks. Also, for a natural classification of emotion, an advancing two-dimensional emotion space is introduced and used in this paper; it performs a flexible and smooth classification of emotion.
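
The parallel-learning fusion described above could be sketched as follows: two networks are trained independently, one on geometric measurements and one on color-histogram bins, and their class probabilities are averaged. All data shapes, layer sizes, and the averaging rule are assumptions, not the paper's actual configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training data: X_geom holds geometric measurements (e.g. eye/mouth
# distances), X_hist holds color-histogram bins, y holds emotion labels.
rng = np.random.default_rng(0)
X_geom = rng.random((120, 8))
X_hist = rng.random((120, 32))
y = rng.integers(0, 4, 120)

# Independent, parallel learning: one network per feature channel.
net_geom = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X_geom, y)
net_hist = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X_hist, y)

# Fuse the two channels by averaging their class probabilities.
proba = (net_geom.predict_proba(X_geom) + net_hist.predict_proba(X_hist)) / 2.0
print("fused training accuracy:", (proba.argmax(axis=1) == y).mean())
```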

Online Face Avatar Motion Control based on Face Tracking

  • Wei, Li;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / Vol. 12, No. 6 / pp.804-814 / 2009
  • In this paper, a novel system for controlling avatar motion by face tracking is presented. The system is composed of three main parts: first, a face feature detection algorithm based on the LCS (Local Cluster Searching) method; second, an HMM-based feature point recognition algorithm; and finally, an avatar control and animation generation algorithm. In the LCS method, the face region is divided into many small regions in the horizontal and vertical directions, and the method judges whether each cross point is an object point, an edge point, or a background point. The HMM method then distinguishes the mouth, eyes, nose, etc. among these feature points. Based on the detected facial feature points, the 3D avatar is controlled in two ways, orientation and animation: the avatar orientation control information is acquired by analyzing facial geometric information, and the avatar animation is generated smoothly from the facial feature points. Finally, to evaluate the performance of the developed system, we implemented it on the Windows XP OS; the results show that the system performs very well.
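
A rough sketch of the grid idea behind LCS is given below: the face region is sampled on a horizontal/vertical grid and each cross point is labeled object, edge, or background from local intensity statistics. The grid step and thresholds are hypothetical; the paper's actual decision rule and the HMM stage are not reproduced here.

```python
import numpy as np

def classify_cross_points(gray, step=8, obj_thresh=90, edge_thresh=30):
    """Label grid cross points of a face region as 'object', 'edge', or
    'background' from the mean and spread of a small surrounding patch."""
    h, w = gray.shape
    labels = {}
    for y in range(step, h - step, step):
        for x in range(step, w - step, step):
            patch = gray[y - step // 2:y + step // 2, x - step // 2:x + step // 2]
            if patch.mean() < obj_thresh:        # dark cluster: eye/brow/mouth pixels
                labels[(x, y)] = "object"
            elif patch.std() > edge_thresh:      # high local contrast: boundary
                labels[(x, y)] = "edge"
            else:
                labels[(x, y)] = "background"
    return labels

face = (np.random.default_rng(1).random((64, 64)) * 255).astype(np.uint8)
print(sum(v == "object" for v in classify_cross_points(face).values()), "object points")
```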

얼굴 특징 실시간 자동 추적 (Real-Time Automatic Tracking of Facial Feature)

  • 박호식;배철수
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 8, No. 6 / pp.1182-1187 / 2004
  • In this paper, we propose a new algorithm that tracks features around the eyes and eyebrows in real time. The proposed algorithm produces a bright-pupil effect with infrared LEDs and an infrared camera to track the pupils; templates are then used to parameterize the facial features, and the pupil coordinates are used to extract eye and eyebrow images from each frame. The template parameters are obtained by PCA analysis of the extracted images, using a PCA basis constructed in a learning stage from sample images. The proposed system operated robustly on 30-frame-per-second video, without initial setup or calibration, even with large head movements or occlusion.
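
The PCA step could look roughly like the sketch below: a basis is learned offline from sample eye/eyebrow crops, and each newly cropped patch is projected onto it to obtain the template parameters. Patch sizes, the number of components, and the random data are assumptions standing in for the paper's training images.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: vectorized eye/eyebrow patches cropped around the pupil
# coordinates (which the paper obtains via the infrared bright-pupil effect).
rng = np.random.default_rng(0)
train_patches = rng.random((200, 24 * 48))      # 200 sample crops, 24x48 pixels each

pca = PCA(n_components=10).fit(train_patches)   # basis learned from sample images

def template_parameters(patch):
    """Project a cropped patch onto the PCA basis; the coefficients serve as
    the template parameters tracked from frame to frame."""
    return pca.transform(patch.reshape(1, -1))[0]

print(template_parameters(rng.random(24 * 48)))
```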

ICA-factorial 표현법을 이용한 얼굴감정인식 (Facial Expression Recognition using ICA-Factorial Representation Method)

  • 한수정;곽근창;고현주;김승석;전명근
    • Journal of Korean Institute of Intelligent Systems / Vol. 13, No. 3 / pp.371-376 / 2003
  • In this paper, facial emotion recognition is performed using the Independent Component Analysis (ICA)-factorial representation method, which represents information effectively. Facial emotion recognition is carried out in two stages: feature extraction and recognition. In the feature extraction stage, Principal Component Analysis (PCA) first transforms the high-dimensional space of the face images into a low-dimensional feature space, and feature vectors are then extracted more effectively through the ICA-factorial representation. In the recognition stage, facial emotions are recognized with a K-Nearest Neighbor algorithm based on the Euclidean distance, a minimum-distance classification method. We built a facial emotion database for the six basic emotions (happiness, sadness, anger, surprise, fear, and disgust), and the experiments showed better recognition performance than existing methods.
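
A minimal sketch of the PCA → ICA → nearest-neighbor pipeline is shown below. FastICA applied to PCA coefficients stands in for the ICA-factorial representation, and the data, dimensionalities, and number of neighbors are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical data: vectorized face images and labels 0-5 for the six basic emotions.
rng = np.random.default_rng(0)
X = rng.random((180, 64 * 64))
y = rng.integers(0, 6, 180)

pca = PCA(n_components=30)                       # high-dimensional image space -> low-dimensional
ica = FastICA(n_components=30, random_state=0, max_iter=1000)
feats = ica.fit_transform(pca.fit_transform(X))  # ICA on the PCA coefficients

knn = KNeighborsClassifier(n_neighbors=1, metric="euclidean").fit(feats, y)
print("training accuracy:", knn.score(feats, y))
```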

Enhanced Independent Component Analysis of Temporal Human Expressions Using Hidden Markov model

  • 이지준;;김태성
    • HCI Society of Korea: Conference Proceedings / HCI Society of Korea 2008 Conference, Part 1 / pp.487-492 / 2008
  • Facial expression recognition is an intensive research area for designing human-computer interfaces. In this work, we present a new facial expression recognition system utilizing Enhanced Independent Component Analysis (EICA) for feature extraction and a discrete Hidden Markov Model (HMM) for recognition. Our proposed approach deals, for the first time, with sequential images of emotion-specific facial data analyzed with EICA and recognized with an HMM. The performance of our proposed system has been compared to conventional approaches in which Principal and Independent Component Analysis are utilized for feature extraction. Our preliminary results show that our proposed algorithm produces improved recognition rates in comparison to previous works.
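
Recognition with a discrete HMM amounts to scoring each observation sequence under per-emotion models and picking the best. The sketch below implements only the scaled forward algorithm on a toy model; the vector quantization of EICA features into symbols and the per-emotion training are omitted, and all probabilities are made up.

```python
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """Scaled forward algorithm: log-likelihood of a discrete observation
    sequence under an HMM with start probs (S,), transitions (S, S), and
    emission probs (S, K)."""
    alpha = start * emit[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        c = alpha.sum()
        log_lik += np.log(c)
        alpha /= c
    return log_lik

# Toy 2-state, 4-symbol model standing in for one emotion's trained HMM; at test
# time the sequence would be assigned to the emotion whose HMM scores it highest.
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.2, 0.8]])
emit = np.array([[0.5, 0.2, 0.2, 0.1], [0.1, 0.1, 0.3, 0.5]])
print(forward_log_likelihood([0, 1, 3, 3, 2], start, trans, emit))
```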

A Review of Facial Expression Recognition Issues, Challenges, and Future Research Direction

  • Yan, Bowen;Azween, Abdullah;Lorita, Angeline;S.H., Kok
    • International Journal of Computer Science & Network Security / Vol. 23, No. 1 / pp.125-139 / 2023
  • Facial expression recognition, a topical problem in the field of computer vision and pattern recognition, is a direct means of recognizing human emotions and behaviors. This paper first summarizes the datasets commonly used for expression recognition and their associated characteristics, and presents traditional machine learning algorithms with their benefits and drawbacks across the three key stages of facial expression recognition: image pre-processing, feature extraction, and expression classification. Deep learning-oriented expression recognition methods and the performance of various algorithmic frameworks are also analyzed and compared. Finally, the current barriers to facial expression recognition and potential developments are highlighted.

Realtime Analysis of Sasang Constitution Types from Facial Features Using Computer Vision and Machine Learning

  • Abdullah;Shah Mahsoom Ali;Hee-Cheol Kim
    • Journal of information and communication convergence engineering / Vol. 22, No. 3 / pp.256-266 / 2024
  • Sasang constitutional medicine (SCM) is one of the best traditional therapeutic approaches used in Korea. SCM prioritizes personalized treatment that considers the unique constitution of an individual, encompassing their physical characteristics, personality traits, and susceptibility to specific diseases. Facial features are essential for diagnosing Sasang constitutional types (SCTs). This study aimed to develop a real-time artificial intelligence-based model for diagnosing SCTs from facial images, building an SCT prediction model based on a machine learning method. Facial features were extracted from all images using feature engineering and machine learning techniques, and the fusion of these features was used to train the AI model. We used four machine learning algorithms, namely random forest (RF), multilayer perceptron (MLP), gradient boosting machine (GBM), and extreme gradient boosting (XGB), to investigate SCTs. The GBM outperformed all the other models. The highest accuracy achieved in the experiment was 81%, indicating the robustness of the proposed model and its suitability for real-time applications.
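
The model comparison could be sketched with scikit-learn as below: the same engineered facial-feature vectors are fed to RF, MLP, and GBM and scored by cross-validation (XGB would come from the separate xgboost package). The feature dimensionality, the number of classes, and the random data are assumptions, not the study's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical data: one row of engineered facial measurements per subject,
# with labels for three constitutional types (purely for illustration).
rng = np.random.default_rng(0)
X = rng.random((150, 20))
y = rng.integers(0, 3, 150)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0),
    "GBM": GradientBoostingClassifier(random_state=0),
    # "XGB": xgboost.XGBClassifier(...) would complete the four-model comparison.
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())
```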

LDP 기반의 얼굴 표정 인식 평가 시스템의 설계 및 구현 (A Study of Evaluation System for Facial Expression Recognition based on LDP)

  • 이태환;조영탁;안용학;채옥삼
    • Convergence Security Journal / Vol. 14, No. 7 / pp.23-28 / 2014
  • In this paper, we present the design and implementation of a facial expression recognition system based on the previously proposed LDP (Local Directional Pattern). LDP represents each pixel of a face image as a local micro pattern that takes into account its relationship with the surrounding pixels. It is necessary to verify whether the codes generated by the newly proposed LDP contain accurate information under various conditions. We therefore build an evaluation system to quickly verify the newly proposed local micro pattern, LDP, in a variety of environments. The proposed facial expression recognition evaluation system is organized so that the facial expression recognition rate is computed through six components, and the recognition rate of LDP is verified in comparison with Gabor and LBP.
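
For reference, the commonly published form of the LDP code computes the eight Kirsch compass responses at each pixel and sets a bit for each of the k strongest directions; the sketch below follows that definition with k = 3. Whether it matches every detail of the paper's modified LDP is an assumption.

```python
import numpy as np
from scipy.ndimage import convolve

# The eight Kirsch compass masks (successive 45-degree rotations).
KIRSCH = [np.array(m) for m in (
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],
)]

def ldp_code(gray, k=3):
    """Local Directional Pattern: keep the k strongest of the eight Kirsch
    responses at each pixel and encode their directions as an 8-bit code."""
    responses = np.stack([np.abs(convolve(gray.astype(float), m)) for m in KIRSCH])
    order = np.argsort(responses, axis=0)        # direction indices, weakest first
    code = np.zeros(gray.shape, dtype=np.uint8)
    for i in range(k):                           # set a bit for each of the k strongest
        code |= (1 << order[-1 - i]).astype(np.uint8)
    return code

img = (np.random.default_rng(0).random((32, 32)) * 255).astype(np.uint8)
print(np.bincount(ldp_code(img).ravel(), minlength=256).nonzero()[0][:8])
```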

Active Discrete Wavelet Transform를 이용한 얼굴 특징 점 추출 (A Study On Face Feature Points Using Active Discrete Wavelet Transform)

  • 전순용;챈즈징;지언호
    • Journal of the Institute of Electronics Engineers of Korea SC / Vol. 47, No. 1 / pp.7-16 / 2010
  • Pattern recognition is widely used as an important part of the face recognition field, and much research is being carried out on it. Facial feature point extraction is an important step in the face recognition process, and accurate facial feature extraction has the greatest influence on the recognizer's recognition rate. In this paper, we propose a facial feature point extraction method based on an active discrete wavelet transform. Face images acquired with a PC camera are transformed by the active discrete wavelet transform, facial features are extracted from the transformed image signal using vertical and horizontal projection, and face recognition is performed from the extraction results. The proposed active discrete wavelet transform improved the face recognition rate, allowed the feature points to be extracted quickly and accurately, and showed improved accuracy and stability compared with feature point extraction based on the conventional discrete wavelet transform.
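
A rough sketch of the projection step, using PyWavelets, is shown below: one level of a 2-D DWT is taken, the detail-subband energy is projected horizontally and vertically, and peaks in the profiles mark candidate feature rows and columns. The wavelet, the single decomposition level, and the peak picking are assumptions; the "active" part of the paper's method is not reproduced.

```python
import numpy as np
import pywt  # PyWavelets

def feature_rows_cols(gray, wavelet="haar"):
    """Project one level of 2-D DWT detail energy horizontally and vertically;
    the strongest rows/columns are candidate feature locations (eyes, mouth, ...)."""
    _, (cH, cV, cD) = pywt.dwt2(gray.astype(float), wavelet)
    energy = cH ** 2 + cV ** 2 + cD ** 2
    rows = np.argsort(energy.sum(axis=1))[-3:]   # horizontal projection peaks
    cols = np.argsort(energy.sum(axis=0))[-3:]   # vertical projection peaks
    return rows * 2, cols * 2                    # map subband indices back to pixels

face = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
rows, cols = feature_rows_cols(face)
print("candidate feature rows:", sorted(rows), "columns:", sorted(cols))
```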