• Title/Summary/Keyword: facial areas

Search results: 193 items (processing time: 0.024 sec)

얼굴 특징 변화에 따른 휴먼 감성 인식 (Human Emotion Recognition based on Variance of Facial Features)

  • 이용환;김영섭
    • Journal of the Semiconductor & Display Technology / Vol.16 No.4 / pp.79-85 / 2017
  • Understanding human emotion is highly important in interaction between humans and machine communication systems. The most expressive and valuable way to extract and recognize human emotion is facial expression analysis. This paper presents and implements an automatic scheme for extracting and recognizing facial expression and emotion from still images. The method has three main steps: (1) detection of facial areas with a skin-color method and feature maps, (2) creation of Bezier curves on the eyemap and mouthmap, and (3) classification of the emotion characteristics using the Hausdorff distance. To estimate the performance of the implemented system, we evaluated the success ratio on an emotional face image database commonly used in the field of facial analysis. The experimental results show an average success rate of 76.1% in classifying and distinguishing facial expression and emotion.

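As a concrete illustration of step (3) in the abstract above, here is a minimal sketch of Hausdorff-distance matching between contour point sets; the point sets, template labels, and classification rule are hypothetical, not the paper's data or code.

```python
# Sketch: classify an expression by comparing contour sample points
# (e.g. from a fitted mouth Bezier curve) against emotion templates
# using the symmetric Hausdorff distance.
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    def directed(p, q):
        return max(min(math.dist(u, v) for v in q) for u in p)
    return max(directed(a, b), directed(b, a))

def classify(curve, templates):
    """Return the emotion label whose template contour is closest."""
    return min(templates, key=lambda label: hausdorff(curve, templates[label]))

# Hypothetical template contours; real ones would come from Bezier
# curves fitted on the eyemap/mouthmap of labeled faces.
templates = {
    "neutral": [(0, 0), (1, 0), (2, 0)],
    "smile":   [(0, 1), (1, 0), (2, 1)],
}
print(classify([(0, 0.9), (1, 0.1), (2, 1.1)], templates))  # prints "smile"
```

The symmetric form (maximum of both directed distances) matters here: a curve that lies close to only part of a template would otherwise score deceptively well.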

공감-체계화 유형에 따른 얼굴 표정 읽기의 차이 - 정서읽기와 정서변별을 중심으로 - (Difference in reading facial expressions as the empathy-systemizing type - focusing on emotional recognition and emotional discrimination -)

  • 태은주;조경자;박수진;한광희;김혜리
    • Science of Emotion and Sensibility / Vol.11 No.4 / pp.613-628 / 2008
  • This study investigated the relationship between emotion recognition and emotion discrimination according to empathizing-systemizing type, facial presentation area, and emotion type. Experiment 1 examined how emotion recognition varies with an individual's empathizing-systemizing type, the facial area presented, and the emotion type. Recognition did not differ significantly by empathizing-systemizing type, whereas facial presentation area and emotion type produced significant differences. Experiment 2 changed the task and examined whether emotion discrimination differs by the same three factors. Discrimination differed significantly by facial presentation area and emotion type, and there was a significant interaction between empathizing-systemizing type and emotion type: for basic emotions, discrimination did not differ significantly by empathizing-systemizing type, whereas for complex emotions it did. That is, unlike recognition, discrimination accuracy differed across empathizing-systemizing types depending on the emotion type, showing that recognizing emotions and discriminating between them are affected differently by empathizing-systemizing type. This study thus demonstrates that an individual's empathizing and systemizing traits, the facial area presented, and the emotion type can influence emotion recognition and emotion discrimination in different ways.


간단한 얼굴 방향성 검출방법 (A Simple Way to Find Face Direction)

  • 박지숙;엄성용;조현희;정민교
    • Journal of Korea Multimedia Society / Vol.9 No.2 / pp.234-243 / 2006
  • With the recent rapid advance of HCI (Human-Computer Interaction) and surveillance technology, interest in systems that process facial images has been growing. Research on such systems, however, has concentrated on areas such as face recognition and facial expression analysis, while face direction detection has received comparatively little attention. This paper proposes a simple method for estimating face direction using a feature called the facial triangle, formed by the two eyebrows and the lower lip. In particular, it introduces simple formulas that obtain the horizontal and vertical rotation angles of the face from a single image: the horizontal rotation angle is computed from the area ratio between the left and right facial triangles, and the vertical rotation angle from the ratio of the facial triangle's base to its height. Experiments showed that the proposed method estimates the horizontal rotation angle within an error of ±1.68°, and the error of the vertical rotation angle tends to decrease as the rotation angle increases.

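The area-ratio idea in the abstract above can be sketched as follows; the landmark coordinates and the mapping from area ratio to angle are illustrative assumptions, not the paper's exact formulas.

```python
# Sketch: the facial triangle is formed by the two eyebrows and the
# lower lip; a nose-bridge line splits it into left and right halves
# whose area ratio shrinks as the head yaws away from frontal.
import math

def triangle_area(p1, p2, p3):
    """Unsigned triangle area via the shoelace formula."""
    return abs((p2[0] - p1[0]) * (p3[1] - p1[1])
               - (p3[0] - p1[0]) * (p2[1] - p1[1])) / 2

def yaw_estimate(left_brow, right_brow, lip, nose_bridge):
    """Illustrative yaw from the left/right half-triangle area ratio:
    ratio 1 means frontal (0 deg); the half toward the camera grows."""
    left = triangle_area(left_brow, nose_bridge, lip)
    right = triangle_area(nose_bridge, right_brow, lip)
    # acos of the smaller/larger area ratio is one simple monotone
    # mapping to an angle; the paper's own formula may differ.
    ratio = min(left, right) / max(left, right)
    sign = 1 if left > right else -1
    return sign * math.degrees(math.acos(ratio))
```

A frontal face gives equal half-triangle areas, hence a ratio of 1 and a yaw of 0°; the vertical angle would analogously use the triangle's base-to-height ratio.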

얼굴 표정 인식을 위한 방향성 LBP 특징과 분별 영역 학습 (Learning Directional LBP Features and Discriminative Feature Regions for Facial Expression Recognition)

  • 강현우;임길택;원철호
    • Journal of Korea Multimedia Society / Vol.20 No.5 / pp.748-757 / 2017
  • To recognize facial expressions, good features that can express them are essential, as is finding the characteristic areas where expressions appear most discriminatively. In this study, we propose a directional LBP feature for facial expression recognition and a method of finding the directional LBP operation and feature regions for facial expression classification. The proposed directional LBP features, which characterize fine facial micro-patterns, are defined by LBP operation factors (the direction and size of the operation mask) and by feature regions learned through AdaBoost. The facial expression classifier is implemented as an SVM classifier based on the learned discriminative regions and directional LBP operation factors. To verify the validity of the proposed method, facial expression recognition performance was measured in terms of accuracy, sensitivity, and specificity. The experimental results show that the proposed directional LBP and its learning method are useful for facial expression recognition.
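As background for the directional variant described above, the plain 3×3 LBP operator it builds on can be sketched as follows; the paper's directional mask factors and AdaBoost region learning are omitted here.

```python
# Basic LBP: threshold the 8 neighbors of a pixel against the center
# and pack the results into an 8-bit code describing the local texture
# micro-pattern.
def lbp_code(img, y, x):
    """8-bit LBP code at (y, x) of a grayscale image (list of lists)."""
    center = img[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code

img = [[9, 9, 9],
       [1, 5, 9],
       [1, 1, 1]]
print(lbp_code(img, 1, 1))  # prints 15: the four bright neighbors set bits 0-3
```

Histograms of such codes over image regions form the feature vector; the directional extension additionally chooses the operation mask's direction and size per region.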

주색상 기반의 애니메이션 캐릭터 얼굴과 구성요소 검출 (Face and Its Components Extraction of Animation Characters Based on Dominant Colors)

  • 장석우;신현민;김계영
    • Journal of the Korea Society of Computer and Information / Vol.16 No.10 / pp.93-100 / 2011
  • Because a character's face is the part that best expresses an animation character's emotions and personality, the need for research that effectively analyzes the faces and facial components of animation characters to extract the required information is growing. This paper defines a mesh model adapted to the characteristics of animation character faces and proposes a method that effectively detects the face and facial components of an animation character using dominant color information. The proposed system first generates a mesh model suited to animation character faces, fits it to the face of the first input character, and extracts the dominant colors of the face and its components. Using these dominant colors, it then selects candidate regions for the character's face and facial components in newly input images, and detects the final face and facial components by measuring the similarity between the dominant colors extracted from the model and those of the candidate regions. The experiments in this paper present results evaluating the performance of the proposed detection method.
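The dominant-color extraction and similarity scoring described above can be sketched roughly as follows; the quantization step and similarity function are illustrative choices, not the paper's, and the mesh-model fitting is omitted.

```python
# Sketch: take the most frequent quantized color of a region as its
# dominant color, then score candidate regions by color distance to
# the dominant color extracted from the fitted model.
from collections import Counter

def dominant_color(pixels, step=32):
    """Most frequent color after quantizing each RGB channel into bins of `step`."""
    quantized = [tuple(c // step * step for c in px) for px in pixels]
    return Counter(quantized).most_common(1)[0][0]

def color_similarity(c1, c2):
    """Simple inverse-distance similarity in RGB space (1.0 = identical)."""
    d = sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5
    return 1 / (1 + d)

# Hypothetical regions: a model face region (mostly one flat skin tone,
# as animation cels typically are) and a candidate region to score.
face_model = [(250, 210, 180)] * 8 + [(20, 20, 20)] * 2
candidate  = [(245, 205, 175)] * 5 + [(90, 60, 40)] * 1
print(color_similarity(dominant_color(face_model), dominant_color(candidate)))
```

Quantizing before counting is what makes this robust for cel animation: slightly different shades of the same flat fill collapse into one bin.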

칼라 참조 맵과 움직임 정보를 이용한 얼굴영역 추출 (Facial region Extraction using Skin-color reference map and Motion Information)

  • 이병석;이동규;이두수
    • The Institute of Electronics Engineers of Korea, Proceedings of the 2001 14th Joint Conference on Signal Processing / pp.139-142 / 2001
  • This paper presents a fast and accurate facial region extraction method using a skin-color reference map and motion information. First, we construct a robust skin-color reference map and use it to eliminate the background of the image. In addition, we use motion information for accurate and fast detection of the facial region in image sequences. We then apply region growing to the remaining areas with the aid of the proposed criteria. The simulation results show improved execution time and detection accuracy.

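The combination of the two cues described above can be sketched as follows; the RGB range and motion threshold are placeholder values, not the paper's calibrated skin-color reference map.

```python
# Sketch: keep a pixel as a face candidate only if it falls inside a
# skin-color range AND changed between consecutive frames. Frames are
# lists of rows of (R, G, B) tuples.
def skin_mask(frame, lo=(90, 40, 20), hi=(255, 180, 150)):
    """Crude per-channel range test standing in for the reference map."""
    return [[all(l <= c <= h for c, l, h in zip(px, lo, hi)) for px in row]
            for row in frame]

def motion_mask(prev, curr, thresh=30):
    """Pixels whose summed absolute channel change exceeds `thresh`."""
    return [[sum(abs(a - b) for a, b in zip(p, q)) > thresh
             for p, q in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def face_candidates(prev, curr):
    """AND the two masks; region growing would then refine the result."""
    skin = skin_mask(curr)
    motion = motion_mask(prev, curr)
    return [[s and m for s, m in zip(srow, mrow)]
            for srow, mrow in zip(skin, motion)]

prev = [[(0, 0, 0), (200, 120, 100)]]
curr = [[(0, 0, 0), (220, 135, 90)]]
print(face_candidates(prev, curr))  # [[False, True]]: only the moving skin pixel
```

Intersecting the two masks is what buys the speed the abstract claims: static skin-colored background (wood, walls) is rejected by the motion cue before any region growing runs.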

Center Position Tracking Enhancement of Eyes and Iris on the Facial Image

  • Chai Duck-hyun;Ryu Kwang-ryol
    • Journal of information and communication convergence engineering / Vol.3 No.2 / pp.110-113 / 2005
  • An enhancement in tracking the center positions of the eyes and iris in facial images is presented. A facial image is acquired with a CCD camera and converted into a binary image. The eye region, characterized by a specific brightness and shape, is located with the FRM method using the five neighboring mask areas, and the iris within the eye is tracked with the FPDP method. The experimental results show that the proposed methods track the center position more accurately than the pixel-average coordinate method.
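The FRM and FPDP operators are the paper's own methods and are not detailed in the abstract; as a generic baseline for the same task, the iris center can be approximated by the centroid of dark pixels in the binarized eye region:

```python
# Sketch (baseline, not the paper's FRM/FPDP): the iris center is taken
# as the centroid of pixels darker than a threshold inside a grayscale
# eye patch (list of rows of intensities).
def iris_center(gray, thresh=60):
    """Centroid (row, col) of pixels below `thresh`, or None if none."""
    pts = [(r, c) for r, row in enumerate(gray)
           for c, v in enumerate(row) if v < thresh]
    if not pts:
        return None
    return (sum(r for r, _ in pts) / len(pts),
            sum(c for _, c in pts) / len(pts))

patch = [[200, 200, 200],
         [200,  10, 200],
         [200, 200, 200]]
print(iris_center(patch))  # (1.0, 1.0): the single dark pixel
```

This pixel-averaging approach is essentially the baseline the paper compares against, which its mask-based methods reportedly outperform.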

Soft-tissue thickness of South Korean adults with normal facial profiles

  • Cha, Kyung-Suk
    • Korean Journal of Orthodontics / Vol.43 No.4 / pp.178-185 / 2013
  • Objective: To standardize the facial soft-tissue characteristics of South Korean adults according to gender by measuring the soft-tissue thickness of young men and women with normal facial profiles by using three-dimensional (3D) reconstructed models. Methods: Computed tomographic images of 22 men aged 20-27 years and 18 women aged 20-26 years with normal facial profiles were obtained. The hard and soft tissues were three-dimensionally reconstructed by using Mimics software. The soft-tissue thickness was measured from the underlying bony surface at bilateral (frontal eminence, supraorbital, suborbital, inferior malar, lateral orbit, zygomatic arch, supraglenoid, gonion, supraM2, occlusal line, and subM2) and midline (supraglabella, glabella, nasion, rhinion, mid-philtrum, supradentale, infradentale, supramentale, mental eminence, and menton) landmarks. Results: The men showed significantly thicker soft tissue at the supraglabella, nasion, rhinion, mid-philtrum, supradentale, and supraglenoid points. In the women, the soft tissue was significantly thicker at the lateral orbit, inferior malar, and gonion points. Conclusions: The soft-tissue thickness in different facial areas varies according to gender. Orthodontists should use a different therapeutic approach for each gender.
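The thickness measurement described above can be read as the distance from a skin landmark to the underlying bone surface; a minimal nearest-vertex sketch of that computation (Mimics works with full surface meshes, so this is only an approximation with hypothetical coordinates):

```python
# Sketch: soft-tissue thickness at a landmark as the distance from the
# skin point to the closest vertex of the reconstructed bone mesh.
import math

def thickness(skin_point, bone_vertices):
    """Distance (same units as the CT data) from a 3-D skin landmark
    to the nearest bone-mesh vertex."""
    return min(math.dist(skin_point, v) for v in bone_vertices)

# Hypothetical toy data: a skin point 5 units above a flat bone patch.
bone = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
print(thickness((0, 0, 5), bone))  # prints 5.0
```

A real pipeline would measure along the surface normal or to the true triangulated surface rather than to vertices, but the per-landmark structure of the measurement is the same.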

Transfer Learning for Face Emotions Recognition in Different Crowd Density Situations

  • Amirah Alharbi
    • International Journal of Computer Science & Network Security / Vol.24 No.4 / pp.26-34 / 2024
  • Most human emotions are conveyed through facial expressions, which represent the predominant source of emotional data. This research investigates the impact of crowds on human emotions by analysing facial expressions. It examines how crowd behaviour, face recognition technology, and deep learning algorithms contribute to understanding how emotions change at different crowd levels. The study identifies common emotions expressed during congestion, differences between crowded and less crowded areas, and changes in facial expressions over time. The findings can inform urban planning and crowd event management by providing insights for developing coping mechanisms for affected individuals. The limitations of and challenges in obtaining reliable facial expression analysis are also discussed, including age- and context-related differences.

행간(行間)(LR2) 전침자극(電鍼刺戟)이 적외선(赤外線) 체열진단상(體熱診斷上) 안면부(顔面部) 온도변화(溫度變化)에 미치는 영향(影響) (Effects of electroacupuncture stimulation at Xingjian(LR2) on the facial thermal change by D.I.T.I)

  • 김종욱;최성용;진경선;황우준;민상준;이순호;이상룡
    • Journal of Acupuncture Research / Vol.21 No.1 / pp.226-239 / 2004
  • Objective: The purpose of this study was to examine the effect of electroacupuncture (EA) at Xingjian(LR2), the 'Fire(火)' point of the Leg Absolute Um Liver Meridian(足厥陰肝經: Chok-Kworum-Kan-Kyong), on facial thermal change. Methods: The subjects were 15 patients with fever of the upper part of the body (including the head and face), and two examinations were carried out on separate days. The two examinations were divided into two groups: an experimental group (N=15), which received electroacupuncture stimulation at Xingjian(LR2), and a control group (N=15), which received electroacupuncture stimulation at an arbitrary point (in the space between the 1st and 2nd fingers) outside the acupuncture points of the 12 meridians. We measured the temperature of fixed facial areas by digital infrared thermal imaging (D.I.T.I.) before and after electroacupuncture stimulation. The fixed facial areas measured were the Jingming(BL1), Sibai(ST2), Dicang(ST4), Indang, Shuigou(GV26), and Chengjiang(CV24) areas. For the Jingming(BL1), Sibai(ST2), and Dicang(ST4) areas, the mean of the left and right temperatures was used for statistical analysis. Results: In the group given electroacupuncture stimulation at Xingjian(LR2), the temperature of every fixed facial area fell: Jingming(BL1) ΔT = -0.7007 ± 0.78642, Sibai(ST2) ΔT = -0.6280 ± 0.56439, Dicang(ST4) ΔT = -0.5940 ± 0.60179, Indang ΔT = -0.7200 ± 0.64515, Shuigou(GV26) ΔT = -0.6160 ± 0.80487, Chengjiang(CV24) ΔT = -0.5627 ± 0.72615. In the Xingjian(LR2) electroacupuncture group, the temperatures of the Jingming(BL1), Sibai(ST2), and Indang areas dropped significantly compared with the control group (p<0.05), whereas those of the Dicang(ST4), Shuigou(GV26), and Chengjiang(CV24) areas did not (p>0.05).
Conclusions: These results show that electroacupuncture stimulation at Xingjian(LR2) significantly decreased the facial temperature of patients with fever of the upper part of the body. In the Xingjian(LR2) electroacupuncture group, the temperature of the upper part of the face in particular, including the eye, cheekbone, and forehead regions, dropped significantly compared with the control group.
