• Title/Summary/Keyword: 표정 강도 (expression intensity)


Robust Facial Expression-Recognition Against Various Expression Intensity (표정 강도에 강건한 얼굴 표정 인식)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB / v.16B no.5 / pp.395-402 / 2009
  • This paper proposes a novel facial expression recognition approach that handles different expression intensities in order to improve recognition performance. Variation in expressions and intensities across individuals degrades recognition performance, yet the effect of different expression intensities has seldom been studied. This paper introduces a facial expression template and an expression-intensity distribution model to recognize facial expressions at different intensities. These techniques improve recognition performance by describing how the displacement between facial parts, and between multiple interest points in their vicinity, varies across facial expressions and their intensities. A distinct advantage of the proposed method is that recognition at different intensities can be performed with a simple calibration, on video sequences as well as still images. Experimental results show that the method is robust enough to recognize facial expressions even at weak intensities.
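
The paper's central quantity — how interest points around facial parts shift as expression intensity changes — can be sketched as a displacement template against a neutral face. The linear scaling of displacement with intensity below is an illustrative assumption, not the paper's actual expression-intensity distribution model:

```python
import numpy as np

def expression_template(neutral, expressive):
    """Displacement of each interest point from the neutral face."""
    return expressive - neutral

def intensity_of(frame, neutral, template):
    """Project an observed frame onto the template to estimate intensity.

    Assumes displacements scale roughly linearly with intensity,
    which is an illustrative simplification.
    """
    d = (frame - neutral).ravel()
    t = template.ravel()
    return float(d @ t / (t @ t))

neutral = np.zeros((5, 2))                  # 5 interest points at rest
full = np.array([[0, 2], [1, 0], [0, -1], [2, 0], [0, 1]], float)
template = expression_template(neutral, full)

half = neutral + 0.5 * (full - neutral)     # an expression at half intensity
print(intensity_of(half, neutral, template))  # 0.5
```

The estimated intensity could then index into a per-person intensity distribution before classification.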

Effect of Depressive Mood on Identification of Emotional Facial Expression (우울감이 얼굴 표정 정서 인식에 미치는 영향)

  • Ryu, Kyoung-Hi;Oh, Kyung-Ja
    • Science of Emotion and Sensibility / v.11 no.1 / pp.11-21 / 2008
  • This study examined the effect of depressive mood on the identification of emotional facial expressions. Participants were screened from 305 college students on the basis of their BDI-II scores: students with a BDI-II score higher than 14 (upper 20%) were assigned to the Depression Group, and those with a score lower than 5 (lower 20%) to the Control Group. A final sample of 20 students in each group was presented with facial expression stimuli of increasing emotional intensity, slowly changing from a neutral face to a full-intensity happy, sad, angry, or fearful expression. The results showed a significant Group x Emotion interaction (especially for happy and sad expressions), suggesting that depressive mood affects the processing of emotional stimuli such as facial expressions. Implications of this result for mood-congruent information processing are discussed.


Realistic 3D Facial Expression Animation Based on Muscle Model (근육 모델 기반의 자연스러운 3차원 얼굴 표정 애니메이션)

  • Lee, Hye-Jin;Chung, Hyun-Sook;Lee, Yill-Byung
    • Proceedings of the Korea Information Processing Society Conference / 2002.04a / pp.265-268 / 2002
  • Faces vary widely with gender, age, and race, which makes individuals easy to distinguish and makes the face an important means of revealing internal states. This paper introduces muscle-based modeling, grounded in anatomical structure such as skin tissue and facial muscles, as an effective method for facial expression animation. The proposed system consists of three stages: constructing the facial wireframe and subdividing the polygon mesh, attaching the necessary muscles to the face, and generating facial expressions according to muscle movement. In the wireframe and mesh subdivision stage, the face model is based on the face proposed by Waters [1], and each polygon mesh is subdivided into four parts to produce a smooth 3D face model. In the next stage, 30 muscles needed for expression generation are created and attached to the regions used most when making actual expressions. In the expression generation stage, Action Units proposed by FACS are combined, and the strength of the required muscles is adjusted for each facial expression, yielding more natural and realistic facial expression animation.

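
The Waters-style linear muscle the paper builds on pulls mesh vertices toward the muscle's bone attachment, with a falloff toward the edge of the muscle's zone of influence. The cosine falloff and the constants below are a common simplification, not the paper's exact parameters:

```python
import numpy as np

def apply_linear_muscle(vertices, head, zone_radius, contraction):
    """Simplified Waters-style linear muscle.

    Each vertex inside the zone of influence is pulled toward the
    muscle head (bone attachment); displacement is scaled by a cosine
    falloff that is 1 at the head and 0 at the zone boundary.
    """
    out = vertices.copy()
    for i, v in enumerate(vertices):
        r = np.linalg.norm(v - head)
        if 0 < r < zone_radius:
            falloff = np.cos(r / zone_radius * np.pi / 2)
            out[i] = v + contraction * falloff * (head - v) / r
    return out

mesh = np.array([[1.0, 0.0], [3.0, 0.0]])   # two toy skin vertices
moved = apply_linear_muscle(mesh, head=np.array([0.0, 0.0]),
                            zone_radius=2.0, contraction=0.5)
# the vertex inside the zone moves toward the head; the one outside stays put
```

Combining several such muscles, each weighted by the strength its FACS Action Unit requires, produces the expression.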

Facial Expression Control of 3D Avatar using Motion Data (모션 데이터를 이용한 3차원 아바타 얼굴 표정 제어)

  • Kim Sung-Ho;Jung Moon-Ryul
    • The KIPS Transactions:PartA / v.11A no.5 / pp.383-390 / 2004
  • This paper proposes a method for controlling the facial expression of a 3D avatar by having the user select a sequence of facial expressions in an expression space, and describes the system built around it. The expression space is created from about 2,400 frames of motion-captured facial expression data. The state of each expression is represented by a distance matrix holding the distances between pairs of feature points on the face, and the set of distance matrices serves as the expression space. This space, however, is not one in which one state can reach another along a straight trajectory, so trajectories between two states are derived approximately from the captured expressions. Two states are regarded as adjacent if the distance between their distance matrices is below a given threshold, and any two states are considered connected if a sequence of adjacent states exists between them. One state is assumed to move to another along the shortest trajectory between them, found by dynamic programming. Since the expression space, as a set of distance matrices, is multidimensional, it is visualized in 2D using multidimensional scaling (MDS), and the facial expression of the 3D avatar is controlled in real time as the user navigates the space. In a user study, participants judged the system to be very useful for real-time facial expression control of a 3D avatar.
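
The adjacency-and-shortest-trajectory construction above can be sketched on toy data. BFS over the adjacency graph stands in for the paper's dynamic-programming search, and the scalar "distance matrices" are stand-ins for real inter-feature-point matrices:

```python
import numpy as np
from collections import deque

def shortest_trajectory(states, threshold, start, goal):
    """Shortest sequence of adjacent expression states from start to goal.

    `states` holds one flattened distance matrix per captured frame;
    two states are adjacent when their matrices differ (Euclidean norm)
    by less than `threshold`.
    """
    n = len(states)
    adj = [[j for j in range(n) if j != i
            and np.linalg.norm(states[i] - states[j]) < threshold]
           for i in range(n)]
    prev = {start: None}
    q = deque([start])
    while q:
        i = q.popleft()
        if i == goal:                      # walk back along predecessors
            path = []
            while i is not None:
                path.append(i)
                i = prev[i]
            return path[::-1]
        for j in adj[i]:
            if j not in prev:
                prev[j] = i
                q.append(j)
    return None                            # goal unreachable

# four toy states on a line; only neighbors fall within the threshold
states = [np.array([0.0]), np.array([1.0]), np.array([2.0]), np.array([3.0])]
print(shortest_trajectory(states, threshold=1.5, start=0, goal=3))  # [0, 1, 2, 3]
```

Playing back the frames along the returned path yields the avatar's transition between the two chosen expressions.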

Acoustic Emission Source Location of Fiberboard (섬유판에서 음향방출원의 위치표정)

  • 박익근;김용권;윤종학;노승남;서성원
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 2003.10a / pp.170-173 / 2003
  • The feasibility of source location in wood fiberboards using acoustic emission (AE) signals was verified experimentally. To improve location accuracy, a wavelet-transform denoising technique was applied to reconstruct the signals, retaining the low-frequency symmetric mode (flexural wave) and removing the high-frequency asymmetric mode (extensional wave). This was confirmed to minimize the arrival-time-difference error that arises when the threshold-crossing method is used for source location in fiberboards. Based on the denoised source location results and the cumulative AE event count against bending strength, the method is expected to be applicable to integrity evaluation of wooden structures and cultural properties.

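
Once arrival times are cleaned up by denoising, one-dimensional source location between two sensors reduces to the textbook arrival-time-difference formula. This is a simplification for illustration, not the paper's wavelet-based procedure, and the sensor gap and wave speed below are made-up values:

```python
def locate_source_1d(sensor_gap, wave_speed, dt):
    """Source position measured from sensor 1.

    dt = t2 - t1: arrival-time difference between the two sensors.
    x = (L - v*dt) / 2 places the source closer to whichever sensor
    the wave reached first (positive dt means sensor 1 heard it first).
    """
    return (sensor_gap - wave_speed * dt) / 2.0

# sensors 0.4 m apart, wave speed 1000 m/s; the wave reaches sensor 1
# 0.0001 s before sensor 2, so the source lies 0.15 m from sensor 1
print(locate_source_1d(0.4, 1000.0, 1e-4))  # ≈ 0.15
```

Errors in dt translate directly into location error, which is why removing the fast dispersive mode before threshold crossing matters.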

Recognition of Facial Expressions Using Muscle-Based Feature Models (근육기반의 특징모델을 이용한 얼굴표정인식에 관한 연구)

  • 김동수;남기환;한준희;박호식;차영석;최현수;배철수;권오홍;나상동
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 1999.11a / pp.416-419 / 1999
  • We present a technique for recognizing facial expressions from image sequences. The technique uses muscle-based feature models to track facial features. Since the feature models are constructed with a small number of parameters and are deformable only within a limited range and set of directions, the search space for each feature can be restricted. The technique estimates degrees of muscular contraction to classify the six principal facial expressions. The contraction vectors are obtained from the deformations of the facial muscle models; similarities between these vectors and representative vectors of the principal expressions are defined and used to determine the facial expression.

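
The final classification step above — comparing an observed contraction vector with representative vectors — can be sketched as nearest-match by similarity. Cosine similarity, the three-muscle vectors, and the reduction to three expressions are all illustrative assumptions; the paper uses six expressions and its own similarity definition:

```python
import numpy as np

def classify_expression(contraction, representatives):
    """Return the expression whose representative contraction vector
    is most similar (cosine similarity) to the observed one."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(representatives, key=lambda name: cos(contraction, representatives[name]))

# made-up contraction degrees for three muscles
representatives = {
    "happiness": np.array([0.9, 0.1, 0.0]),
    "anger":     np.array([0.1, 0.8, 0.3]),
    "surprise":  np.array([0.0, 0.2, 0.9]),
}
print(classify_expression(np.array([0.8, 0.2, 0.1]), representatives))  # happiness
```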

ASM-Based Lip Line Detection System for Smile Expression Recognition (웃음 표정 인식을 위한 ASM 기반 입술 라인 검출 시스템)

  • Hong, Won-Chang;Park, Jin-Woong;He, Guan-Feng;Kang, Sun-Kyung;Jung, Sung-Tae
    • Proceedings of the Korea Information Processing Society Conference / 2011.04a / pp.444-446 / 2011
  • This paper proposes a system that detects facial feature points from camera images in real time and recognizes smile expressions using the detected points. The proposed system acquires a face image in the real-time detection module, then locates facial features using an ASM (Active Shape Model) trained in the ASM learning module. The lip region is detected from the facial feature image. We found that using the detected lip region together with the facial feature points to detect and recognize the user's smile expression improves the accuracy of smile expression recognition.
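
A very rough smile test on ASM lip landmarks might compare mouth width against its neutral width. The geometry, the 15% threshold, and the widening criterion itself are all illustrative assumptions, not the paper's classifier:

```python
def is_smile(lip_points, neutral_width, threshold=1.15):
    """Crude smile test from lip-line landmarks.

    lip_points: [(x, y), ...] along the outer lip line; a smile is
    assumed to widen the mouth relative to its neutral width.
    """
    xs = [p[0] for p in lip_points]
    width = max(xs) - min(xs)
    return width / neutral_width > threshold

neutral = [(0, 0), (10, 1), (20, 0), (10, -2)]       # mouth at rest, width 20
smiling = [(-3, 2), (10, 1), (23, 2), (10, -1)]      # corners spread and lift
print(is_smile(smiling, neutral_width=20))  # True
```

A real system would combine several such lip-shape cues rather than a single width ratio.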

An Integrated Stress Analysis System using Facial and Voice Sentiment (표정과 음성 감성 분석을 통한 통합 스트레스 분석 시스템)

  • Lee, Aejin;Chun, Jiwon;Yu, Suhwa;Kim, Yoonhee
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.9-12 / 2021
  • As more people in modern society suffer from severe stress, the need for an effective stress measurement system has emerged. This study proposes an integrated stress analysis system based on facial expression and voice sentiment analysis of a person in a video. After the facial expression and voice sentiment are analyzed, a stress index is derived and quantified from each sentiment value. We showed that the higher the integrated stress index derived from the facial and voice stress indices, the higher the stress intensity.
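
The fusion of the two per-modality indices could be as simple as a weighted average. The weighting rule and the equal weights are assumptions for illustration; the abstract does not state the paper's exact combination rule:

```python
def integrated_stress(face_stress, voice_stress, w_face=0.5):
    """Combine facial and vocal stress indices (each in [0, 1])
    into one score via a weighted average."""
    return w_face * face_stress + (1 - w_face) * voice_stress

print(integrated_stress(0.8, 0.6))  # ≈ 0.7
```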

Development of a Recognition System of Smile Facial Expression for Smile Treatment Training (웃음 치료 훈련을 위한 웃음 표정 인식 시스템 개발)

  • Li, Yu-Jie;Kang, Sun-Kyung;Kim, Young-Un;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.15 no.4 / pp.47-55 / 2010
  • In this paper, we propose a smile expression recognition system for smile treatment training. The proposed system detects face candidate regions from camera images using Haar-like features, then verifies whether each detected candidate is a face using SVM (Support Vector Machine) classification. To the detected face image, it applies illumination normalization based on histogram matching in order to minimize the effect of illumination change. In the expression recognition step, it computes a facial feature vector using PCA (Principal Component Analysis) and recognizes smile expressions with a multilayer perceptron network. The system lets the user practice smiling by recognizing the user's smile expression in real time and displaying its amount. Experimental results show that face region verification based on SVM and illumination normalization based on histogram matching improve the correct recognition rate.
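
The recognition stage (PCA features fed to a multilayer perceptron) can be sketched with scikit-learn. The synthetic two-cluster data below stands in for normalized face images, and the component count and layer sizes are illustrative, not the paper's settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# synthetic "face vectors": two clusters standing in for smile / non-smile
smile = rng.normal(1.0, 0.2, size=(50, 30))
neutral = rng.normal(-1.0, 0.2, size=(50, 30))
X = np.vstack([smile, neutral])
y = np.array([1] * 50 + [0] * 50)

# project to 5 principal components, then classify with a small MLP
model = make_pipeline(PCA(n_components=5),
                      MLPClassifier(hidden_layer_sizes=(10,), max_iter=500,
                                    random_state=0))
model.fit(X, y)
print(model.score(X, y))  # clusters this separable should score ~1.0
```

In the real system, the SVM face verification and histogram-matching normalization would run upstream of this pipeline.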

Children's Interpretation of Facial Expression onto Two-Dimension Structure of Emotion (정서의 이차원 구조에서 유아의 얼굴표정 해석)

  • Shin, Young-Suk;Chung, Hyun-Sook
    • Korean Journal of Cognitive Science / v.18 no.1 / pp.57-68 / 2007
  • This study explores children's understanding of emotion categories from facial expressions within the two-dimensional structure of emotion. Eighty-nine children aged 3 to 5 were asked to match facial expressions to fourteen emotion terms. The facial expressions used in the experiment were photographs rated by 54 university students on nine-point scales along the two dimensions (pleasure-displeasure and arousal-sleep). The results showed that children were more stable on the arousal dimension than on the pleasure-displeasure dimension. Emotions such as sadness, sleepiness, anger, and surprise were understood well on both dimensions, whereas fear and boredom showed instability on the pleasure-displeasure dimension. In particular, 3-year-old children perceived the degree of arousal-sleep better than that of pleasure-displeasure.
