• Title/Summary/Keyword: face expression recognition


Face Expression Recognition using ICA-Factorial Representation (ICA-Factorial 표현을 이용한 얼굴감정인식)

  • 한수정;고현주;곽근창;전명근
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2002.12a / pp.329-332 / 2002
  • In this paper, facial emotion recognition is performed using the ICA (Independent Component Analysis)-Factorial representation, which represents information effectively. Recognition proceeds in two stages: feature extraction and classification. In the feature extraction stage, PCA (Principal Component Analysis) first maps the high-dimensional face image space to a low-dimensional feature space, and the ICA-Factorial representation then extracts more effective feature vectors. In the classification stage, facial emotions are recognized with a minimum-distance classifier using the Euclidean distance. To demonstrate the usefulness of this method, a face database covering the six basic emotions (happiness, sadness, anger, surprise, fear, disgust) is constructed, and the method is shown to achieve better recognition performance than the existing Eigenfaces and Fisherfaces approaches.
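The two-stage pipeline described above can be sketched as follows. This is a minimal illustration on synthetic data: the ICA-Factorial refinement step is omitted, so plain PCA features are fed directly to a minimum-Euclidean-distance classifier.

```python
import numpy as np

def pca_fit(X, n_components):
    """Return the mean and top principal axes of the row-wise samples in X."""
    mean = X.mean(axis=0)
    # SVD of the centered data gives the principal directions in Vt.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(X, mean, axes):
    return (X - mean) @ axes.T

def min_distance_classify(test_feats, class_means):
    """Assign each test feature to the class with the nearest mean (Euclidean)."""
    dists = np.linalg.norm(test_feats[:, None, :] - class_means[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Toy example: two well-separated "emotion" clusters in a 50-D image space.
rng = np.random.default_rng(0)
happy = rng.normal(0.0, 0.1, size=(20, 50))
sad = rng.normal(1.0, 0.1, size=(20, 50))
X = np.vstack([happy, sad])
mean, axes = pca_fit(X, n_components=5)
feats = project(X, mean, axes)
class_means = np.vstack([feats[:20].mean(axis=0), feats[20:].mean(axis=0)])
pred = min_distance_classify(feats, class_means)
```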

Face Region Extraction for the Facial Expression Recognition System (얼굴 표정 인식 시스템을 위한 얼굴 영역 추출)

  • Lim Ju-Hyuk;Song Kun-Woen
    • Proceedings of the Korea Information Processing Society Conference / 2004.11a / pp.903-906 / 2004
  • In this paper, we propose a face region extraction algorithm for a facial expression recognition system. It extracts face candidate regions from the input image and then accurately locates the eyes within the extracted candidate regions. The final face region is extracted using information from the detected eye regions together with an ellipse equation. Face candidate regions are extracted by setting an adaptive skin-color range based on the HSI color coordinate system. To extract the eye regions within a face candidate region, eye candidate pixels are first extracted using brightness information and then grouped into regions through a labeling process. The final eye regions are selected by considering the pixel count, aspect ratio, and position of each candidate region. The centroids of the two detected eye regions are computed and used to set the major and minor axes of an ellipse equation, from which the final face region is extracted. The proposed algorithm accurately extracted face regions even from face images with illumination changes and various backgrounds.

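The skin-color stage described above can be illustrated roughly as follows. The fixed hue/saturation thresholds are placeholder values (the paper derives adaptive HSI ranges per image), and Python's stdlib HLS conversion stands in for the HSI coordinate system.

```python
import colorsys
import numpy as np

def skin_mask(rgb_image):
    """Return a boolean mask of pixels whose hue/saturation fall in a skin-like range."""
    h, w, _ = rgb_image.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            r, g, b = rgb_image[y, x] / 255.0
            hue, _, sat = colorsys.rgb_to_hls(r, g, b)  # hue in [0, 1)
            # Skin tones cluster near low (reddish) hues with moderate saturation.
            mask[y, x] = hue < 0.1 and 0.2 < sat < 0.8
    return mask

# Toy image: left half a skin-like color, right half a blue background.
img = np.zeros((4, 8, 3), dtype=float)
img[:, :4] = (220, 170, 140)   # skin-like RGB
img[:, 4:] = (30, 60, 200)     # background
mask = skin_mask(img)
```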

A Study on the Visual Attention of Sexual Appeal Advertising Image Utilizing Eye Tracking (아이트래킹을 활용한 성적소구광고 이미지의 시각적 주의에 관한 연구)

  • Hwang, Mi-Kyung;Kwon, Mahn-Woo;Lee, Sang-Ho;Kim, Chee-Yong
    • Journal of the Korea Convergence Society / v.11 no.10 / pp.207-212 / 2020
  • This study analyzes Soju (a Korean liquor) advertisement images, which are relatively easy to interpret subjectively, among the sexual appeal advertisements that stimulate consumers' curiosity. The images were verified through three AOIs (areas of interest: face, body, product) and eye tracking, one of the psychophysiological indicators. The analysis reveals that visual attention to the advertising model was higher on the face than on the body. Contrary to the prediction that men would be more interested in body shape than women, both men and women showed higher interest in the face than in the body. In addition, recognition and recollection of the product were found not to be significant. This study is significant in that it examines patterns of visual attention, such as the gaze points and gaze times of male and female consumers, on sexual appeal advertisements. Furthermore, by presenting the expression methods that Soju advertisement images should pursue, as well as an appropriate marketing direction, the study hopes to bring a positive influence to Soju advertisement imagery.

Analysis of Facial Movement According to Opposite Emotions (상반된 감성에 따른 안면 움직임 차이에 대한 분석)

  • Lee, Eui Chul;Kim, Yoon-Kyoung;Bea, Min-Kyoung;Kim, Han-Sol
    • The Journal of the Korea Contents Association / v.15 no.10 / pp.1-9 / 2015
  • In this paper, facial movements in response to opposite emotion stimuli are analyzed by image processing of Kinect facial images. To induce two opposite emotion pairs, "Sad - Excitement" and "Contentment - Angry", which are oppositely positioned on Russell's 2D emotion model, both visual and auditory stimuli were given to subjects. First, 31 main points were chosen from among the 121 facial feature points of the active appearance model obtained from the Kinect Face Tracking SDK. Then, pixel changes around the 31 main points were analyzed. Here, a local minimum shift matching method was used to handle non-linear facial movement. As a result, right- and left-side facial movements occurred in the "Sad" and "Excitement" emotions, respectively. Left-side facial movement occurred comparatively more in the "Contentment" emotion. In contrast, both left- and right-side movements occurred in the "Angry" emotion.
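The per-side movement measurement can be sketched as below, assuming plain frame differencing in a window around each feature point in place of the paper's local minimum shift matching.

```python
import numpy as np

def movement_per_side(prev, curr, points, midline_x, win=2):
    """Mean absolute pixel change around each (y, x) feature point,
    aggregated separately for left- and right-side points."""
    left, right = [], []
    for (y, x) in points:
        a = prev[y - win:y + win + 1, x - win:x + win + 1]
        b = curr[y - win:y + win + 1, x - win:x + win + 1]
        change = np.abs(b.astype(float) - a.astype(float)).mean()
        (left if x < midline_x else right).append(change)
    return np.mean(left), np.mean(right)

# Toy frames: only the right half of the "face" changes between frames.
prev = np.zeros((20, 20))
curr = prev.copy()
curr[:, 10:] = 50  # right-side movement
points = [(10, 5), (12, 5), (10, 15), (12, 15)]
left_change, right_change = movement_per_side(prev, curr, points, midline_x=10)
```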

Sharing Activities in an Online Fashion Community - Focusing on Erving Goffman's Impression Management Theory - (온라인 패션 커뮤니티의 나눔 활동 - 어빙 고프만의 인상관리 이론을 중심으로 -)

  • Hyunjoo Hur;Jaehoon Chun
    • Fashion & Textile Research Journal / v.25 no.4 / pp.449-459 / 2023
  • This study focuses on online communities and the ritual conversations of users when participating in sharing activities. The study aims to understand the social and psychological phenomena that occur between users within the context of Erving Goffman's impression management theory. Case studies and a content analysis were conducted by collecting posts and comments related to fashion products in the sharing activities category on Naver Cafe "Family Sale." On the one hand, the study identified various disposition motives among givers, including a desire for recognition, self-expression, activation of the community, emotional sympathy, goodwill, play, and simple disposition. On the other hand, receivers' purchase motives included the need for a product, reciprocation based on a sense of belonging, play, gift-giving, and simple response. Analyzing the posts of givers and the comments of receivers of fashion products using impression management strategies and dramaturgical analysis, the study interpreted users' impression management and revealed propensities in fashion consumption: fashionability, conspicuousness, value orientation, and economic feasibility. Through ritual conversations, users managed to attain emotional stability on an individual level, while they reinforced collective bonds on a social level. They fulfilled their roles with their own narratives to achieve personal and collective goals in non-face-to-face situations and non-monetary transactions. This study is significant in that it examines normative communication in an online community and user relationships to understand a recent phenomenon in the fashion industry.

Recognition of Facial Expressions of Animation Characters Using Dominant Colors and Feature Points (주색상과 특징점을 이용한 애니메이션 캐릭터의 표정인식)

  • Jang, Seok-Woo;Kim, Gye-Young;Na, Hyun-Suk
    • The KIPS Transactions:PartB / v.18B no.6 / pp.375-384 / 2011
  • This paper suggests a method to recognize the facial expressions of animation characters by means of dominant colors and feature points. The proposed method defines a simplified mesh model adequate for animation characters and detects the face and facial components using dominant colors. It also extracts edge-based feature points for each facial component. It then classifies the feature points into corresponding AUs (action units) through a neural network, and finally recognizes character facial expressions with the suggested AU specification. Experimental results show that the suggested method can reliably recognize the facial expressions of animation characters.
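The dominant-color step can be illustrated with a simple histogram-based sketch: quantize each pixel's RGB value and pick the most frequent bin. The bin size is an arbitrary choice here, and the downstream AU classification network is not shown.

```python
import numpy as np
from collections import Counter

def dominant_color(rgb_image, bin_size=32):
    """Return the center of the most frequent quantized RGB bin."""
    pixels = rgb_image.reshape(-1, 3) // bin_size
    counts = Counter(map(tuple, pixels))
    bin_index, _ = counts.most_common(1)[0]
    return tuple(int(c * bin_size + bin_size // 2) for c in bin_index)

# Toy image: mostly a flat "character face" color with a few outlier pixels.
img = np.full((10, 10, 3), 200, dtype=np.uint8)
img[0, :3] = (10, 10, 10)
color = dominant_color(img)
```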

Facial Feature Extraction for Face Expression Recognition (얼굴 표정인식을 위한 얼굴요소 추출)

  • 이경희;고재필;변혜란;이일병;정찬섭
    • Science of Emotion and Sensibility / v.1 no.1 / pp.33-40 / 1998
  • This paper presents methods for extracting the face and its principal components, the eyes and mouth, which are essential steps in face recognition. Face region extraction uses a form of template matching based on a statistical model, without using motion or color information, even against complex backgrounds. The statistical model consists of eigenfaces generated by the Hotelling transform of the input face images, which allows a complex face image to be represented by a few principal component values. To extract faces regardless of face size, image brightness, and face position, the image is scanned with search windows of graduated sizes, image enhancement is applied, and the face is extracted by projecting the image into the eigenface space and reconstructing it. Facial component extraction locates the boundary regions of the eyes and mouth through edge extraction tailored to the characteristics of each component, and through projection histogram analysis after binarization. In previous work on contour extraction from face images, deformable templates have mainly been used to extract features of the eyes and mouth, which have geometric shapes, while snakes (active contour models) have been used for the more varied shapes of eyebrows and face outlines. Unlike these studies, this paper experiments with extracting the contours of the eyes and mouth using snakes, with appropriate parameter selection and a defined energy function. Extraction of the face region against complex backgrounds, and of the eye and mouth regions and their contours within the extracted face region, shows comparatively good results.

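The eigenface projection-and-reconstruction idea used for detection above can be sketched as follows on synthetic data: a window that lies in the eigenface subspace reconstructs with low error, while an arbitrary pattern does not. The training data and dimensions here are invented for illustration.

```python
import numpy as np

def fit_eigenfaces(faces, k):
    """Compute the mean face and top-k eigenfaces from row-wise face vectors."""
    mean = faces.mean(axis=0)
    _, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, Vt[:k]

def reconstruction_error(window, mean, eigenfaces):
    """Project a window into eigenface space, reconstruct, and measure the residual."""
    coeffs = (window - mean) @ eigenfaces.T
    recon = mean + coeffs @ eigenfaces
    return np.linalg.norm(window - recon)

# Synthetic "faces": variations along one direction in a 100-D image space.
rng = np.random.default_rng(1)
direction = rng.normal(size=100)
faces = np.array([i * direction for i in range(1, 21)])
mean, eigenfaces = fit_eigenfaces(faces, k=3)

face_like = 7.5 * direction            # lies in the face subspace
non_face = rng.normal(size=100) * 10   # an arbitrary pattern
err_face = reconstruction_error(face_like, mean, eigenfaces)
err_other = reconstruction_error(non_face, mean, eigenfaces)
```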

Noisy label based discriminative least squares regression and its kernel extension for object identification

  • Liu, Zhonghua;Liu, Gang;Pu, Jiexin;Liu, Shigang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.5 / pp.2523-2538 / 2017
  • In most of the existing literature, the definition of the class label has the following characteristics. First, the class label of samples from the same object has an absolutely fixed value. Second, the difference between the class labels of samples from different objects should be maximized. However, the appearance of a face varies greatly with illumination, pose, and expression, so the previous definition of the class label is not entirely reasonable. Inspired by the discriminative least squares regression (DLSR) algorithm, a noisy label based discriminative least squares regression (NLDLSR) algorithm is presented in this paper. In our algorithm, the difference between the class labels of samples from different objects is still maximized, while the class labels of different samples from the same object are allowed to differ slightly, which is consistent with the fact that different samples from the same object have some differences. In addition, the proposed NLDLSR is extended to the kernel space, and we further propose a novel kernel noisy label based discriminative least squares regression (KNLDLSR) algorithm. A large number of experiments show that our proposed algorithms achieve very good performance.

Performance Comparison of Skin Color Detection Algorithms by the Changes of Backgrounds (배경의 변화에 따른 피부색상 검출 알고리즘의 성능 비교)

  • Jang, Seok-Woo
    • Journal of the Korea Society of Computer and Information / v.15 no.3 / pp.27-35 / 2010
  • Accurately extracting skin color regions is very important in various areas such as face recognition and tracking, facial expression recognition, adult image identification, health care, and so forth. In this paper, we evaluate the performance of several skin color detection algorithms in indoor environments by changing the distance between the camera and the object, as well as the background color of the object. The distance ranges from 60 cm to 120 cm, and the background colors are white, black, orange, pink, and yellow. The algorithms used for the performance evaluation are the Peer, NNYUV, NNHSV, LutYUV, and Kimset algorithms. The experimental results show that the NNHSV, NNYUV, and LutYUV algorithms are stable, while the other algorithms are somewhat sensitive to changes of background. We therefore expect the comparative experimental results of this paper to be useful when developing new skin color extraction algorithms that are robust to dynamic real environments.

Active Facial Tracking for Fatigue Detection (피로 검출을 위한 능동적 얼굴 추적)

  • Kim, Tae-Woo;Kang, Yong-Seok
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.2 no.3 / pp.53-60 / 2009
  • Vision-based driver fatigue detection is one of the most promising commercial applications of facial expression recognition technology, and facial feature tracking is its primary technical issue. Current facial tracking technology faces three challenges: (1) detection failure of some or all features due to varying lighting conditions and head motions; (2) multiple and non-rigid object tracking; and (3) feature occlusion when the head is at oblique angles. In this paper, we propose a new active approach. First, an active IR sensor is used to robustly detect pupils under variable lighting conditions. The detected pupils are then used to predict the head motion. Furthermore, face movement is assumed to be locally smooth so that a facial feature can be tracked with a Kalman filter. The simultaneous use of the pupil constraint and Kalman filtering greatly increases the prediction accuracy for each feature position. Feature detection is accomplished in the Gabor space in the vicinity of the predicted location. Local graphs consisting of identified features are extracted and used to capture the spatial relationships among detected features. Finally, a graph-based reliability propagation is proposed to tackle the occlusion problem and verify the tracking results. The experimental results show the validity of our active approach to real-life facial tracking under variable lighting conditions, head orientations, and facial expressions.

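The Kalman-filter tracking step can be sketched with a minimal 1-D constant-velocity filter. The noise parameters q and r below are illustrative values, not taken from the paper.

```python
import numpy as np

def kalman_track(measurements, q=1e-3, r=0.25):
    """Track position from noisy measurements; state = [position, velocity]."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])  # constant-velocity transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # Predict the next state from the motion model.
        x = F @ x
        P = F @ P @ F.T + q * np.eye(2)
        # Update with the new measurement.
        S = H @ P @ H.T + r
        K = P @ H.T / S
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)

# A feature point moving at constant speed, observed with measurement noise.
rng = np.random.default_rng(3)
true_pos = np.arange(50, dtype=float)
noisy = true_pos + rng.normal(0, 0.5, size=50)
est = kalman_track(noisy)
```

After a short burn-in, the smoothed estimates track the true trajectory more closely than the raw measurements, since the constant-velocity model matches the motion.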