• Title/Summary/Keyword: 얼굴정서 (facial emotion)

Search results: 93

3D Facial Modeling and Synthesis System for Realistic Facial Expression (자연스러운 표정 합성을 위한 3차원 얼굴 모델링 및 합성 시스템)

  • 심연숙;김선욱;한재현;변혜란;정창섭
    • Korean Journal of Cognitive Science / v.11 no.2 / pp.1-10 / 2000
  • Research on realistic facial animation, in which humans and computers communicate through the face, has grown rapidly. The human face is the part of the body we use to recognize individuals and an important communication channel for conveying inner states such as emotion. To provide an intelligent interface, computer facial animation should look and behave like a human when talking and expressing itself, so recent facial modeling and animation research has focused on realism. In this article, we propose a facial modeling and animation method for realistic facial synthesis. A 3D facial model of an arbitrary face is built from a generic facial model; for more accurate and natural results we construct a Korean generic facial model. Facial synthesis is then driven by the physical characteristics of real facial muscle and skin. Many applications can build on this, such as teleconferencing, education, and movies.

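The entry above describes synthesis driven by the physical characteristics of facial muscle and skin. As a rough illustration only (not the authors' system), the following minimal Python sketch shows a Waters-style linear muscle pulling nearby skin vertices toward a fixed attachment point, with an assumed distance-based falloff; all names and constants are illustrative.

```python
# A minimal sketch of a linear facial muscle: vertices near the muscle's skin
# attachment are pulled toward its bone attachment, scaled by contraction and
# a simple distance falloff. The 0.3 gain and the falloff shape are assumptions.
import numpy as np

def apply_linear_muscle(vertices, head, tail, contraction, influence_radius):
    """Displace mesh vertices toward the muscle head (bone attachment).

    vertices: (N, 3) array of skin vertex positions
    head, tail: 3-vectors; the muscle runs from head (fixed) to tail (on the skin)
    contraction: scalar in [0, 1]
    influence_radius: vertices farther than this from the tail are unaffected
    """
    displaced = vertices.copy()
    pull_dir = head - vertices                          # pull each vertex toward the head
    dist_to_tail = np.linalg.norm(vertices - tail, axis=1)
    falloff = np.clip(1.0 - dist_to_tail / influence_radius, 0.0, 1.0)
    displaced += contraction * falloff[:, None] * 0.3 * pull_dir
    return displaced

# Toy usage: contract a zygomatic-major-like muscle over a random patch of skin.
skin = np.random.rand(100, 3)
smile = apply_linear_muscle(skin, head=np.array([0.8, 0.5, 0.2]),
                            tail=np.array([0.4, 0.3, 0.25]),
                            contraction=0.6, influence_radius=0.4)
```

In muscle-based facial animation, full expressions are composed by stacking several such muscles with different contraction values.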

The Emotional Boundary Decision in a Linear Affect-Expression Space for Effective Robot Behavior Generation (효과적인 로봇 행동 생성을 위한 선형의 정서-표정 공간 내 감정 경계의 결정 -비선형의 제스처 동기화를 위한 정서, 표정 공간의 영역 결정)

  • Jo, Su-Hun;Lee, Hui-Sung;Park, Jeong-Woo;Kim, Min-Gyu;Chung, Myung-Jin
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.540-546 / 2008
  • In the near future, robots should be able to understand humans' emotional states and exhibit appropriate behaviors accordingly. In human-human interaction, an estimated 93% of communication is carried by the speaker's nonverbal behavior, and bodily movements convey information about the intensity of emotion. Recent personal robots can interact with humans through multiple modalities such as facial expression, gesture, LED, sound, and sensors. However, a posture needs only a position and an orientation, whereas facial expressions and gestures involve movement, and verbal, vocal, musical, and color expressions need timing information. Because synchronization among modalities is a key problem, emotion expression needs a systematic approach. For example, at a low intensity of surprise the face can be expressed but a gesture cannot, because gestures do not scale linearly with intensity. It is therefore necessary to decide emotional boundaries for effective robot behavior generation and for synchronization with other expressive channels. If so, how can we define emotional boundaries, and how can the modalities be synchronized with each other?

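The abstract above asks how to set per-emotion boundaries so that a nonlinear channel such as gesture fires only above a threshold while the face scales linearly with intensity. A minimal sketch of that idea follows, with hypothetical threshold values rather than the paper's boundaries.

```python
# Hypothetical gesture-activation boundaries per emotion; the real values are
# exactly what the paper sets out to determine.
EMOTION_BOUNDARIES = {
    "surprise": 0.5,
    "joy": 0.3,
    "anger": 0.4,
}

def active_modalities(emotion: str, intensity: float) -> list[str]:
    modalities = []
    if intensity > 0.0:
        modalities.append("facial_expression")   # linear channel: any intensity
    if intensity >= EMOTION_BOUNDARIES.get(emotion, 1.0):
        modalities.append("gesture")             # nonlinear channel: only above boundary
    return modalities

print(active_modalities("surprise", 0.2))   # ['facial_expression']
print(active_modalities("surprise", 0.7))   # ['facial_expression', 'gesture']
```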

Alexithymia and the Recognition of Facial Emotion in Schizophrenic Patients (정신분열병 환자에서의 감정표현불능증과 얼굴정서인식결핍)

  • Noh, Jin-Chan;Park, Sung-Hyouk;Kim, Kyung-Hee;Kim, So-Yul;Shin, Sung-Woong;Lee, Koun-Seok
    • Korean Journal of Biological Psychiatry / v.18 no.4 / pp.239-244 / 2011
  • Objectives : Schizophrenic patients have been shown to be impaired in both emotional self-awareness and recognition of others' facial emotions. Alexithymia refers to deficits in emotional self-awareness. The relationship between alexithymia and recognition of others' facial emotions needs to be explored to better understand the characteristics of emotional deficits in schizophrenic patients. Methods : Thirty control subjects and 31 schizophrenic patients completed the Toronto Alexithymia Scale-20-Korean version (TAS-20K) and a facial emotion recognition task. The stimuli in the facial emotion recognition task consisted of six emotions (happiness, sadness, anger, fear, disgust, and neutral). Recognition accuracy was calculated within each emotion category, and correlations between TAS-20K scores and recognition accuracy were analyzed. Results : The schizophrenic patients showed higher TAS-20K scores and lower recognition accuracy than the control subjects. Unlike the control subjects, the schizophrenic patients showed no significant correlations between TAS-20K scores and recognition accuracy. Conclusions : The data suggest that, although schizophrenia may impair both emotional self-awareness and recognition of others' facial emotions, the degree of deficit can differ between the two, indicating that the emotional deficits in schizophrenia may have more complex features.
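The analysis named above is a per-group Pearson correlation between TAS-20K totals and facial-emotion recognition accuracy. The sketch below shows its shape using randomly generated placeholder data, not the study's data.

```python
# Pearson correlation between TAS-20K scores and recognition accuracy for one
# group (here, 31 simulated "patients"); the same call is repeated per group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tas20k = rng.integers(30, 80, size=31)        # placeholder TAS-20K totals
accuracy = rng.uniform(0.4, 1.0, size=31)     # placeholder recognition accuracy

r, p = stats.pearsonr(tas20k, accuracy)
print(f"r = {r:.2f}, p = {p:.3f}")
```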

Multidimensional Affective model-based Multimodal Complex Emotion Recognition System using Image, Voice and Brainwave (다차원 정서모델 기반 영상, 음성, 뇌파를 이용한 멀티모달 복합 감정인식 시스템)

  • Oh, Byung-Hun;Hong, Kwang-Seok
    • Proceedings of the Korea Information Processing Society Conference / 2016.04a / pp.821-823 / 2016
  • This paper proposes a multimodal complex emotion recognition system based on a multidimensional affective model, using facial images, voice, and EEG. Features extracted from the user's face, voice, and EEG are scored as explicit response levels on the multidimensional affective model (Arousal, Valence, Dominance), known in psychology and cognitive science as the affective components of human emotion. The resulting scores are then mapped into the three-dimensional emotion model to recognize not only the emotion itself (single or complex) but also its intensity.
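A minimal sketch of the kind of pipeline the abstract describes, under assumptions of my own: each modality contributes an (Arousal, Valence, Dominance) score, the scores are fused by a weighted average, the nearest labeled prototype in the 3-D space gives the emotion, and the distance from the neutral origin gives its intensity. The prototype coordinates and fusion weights are illustrative placeholders.

```python
import numpy as np

EMOTION_PROTOTYPES = {                 # (arousal, valence, dominance) in [-1, 1]
    "joy":      ( 0.5,  0.8,  0.4),
    "anger":    ( 0.7, -0.6,  0.6),
    "sadness":  (-0.4, -0.6, -0.5),
    "surprise": ( 0.8,  0.2,  0.0),
}

def fuse_and_recognize(face_avd, voice_avd, eeg_avd, weights=(0.4, 0.3, 0.3)):
    scores = np.array([face_avd, voice_avd, eeg_avd], dtype=float)
    fused = np.average(scores, axis=0, weights=weights)   # weighted multimodal fusion
    label = min(EMOTION_PROTOTYPES,
                key=lambda e: np.linalg.norm(fused - EMOTION_PROTOTYPES[e]))
    intensity = float(np.linalg.norm(fused))               # distance from the neutral origin
    return label, intensity, fused

label, intensity, avd = fuse_and_recognize((0.6, 0.7, 0.3), (0.4, 0.9, 0.5), (0.5, 0.6, 0.4))
print(label, round(intensity, 2))
```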

Emotion Training: Image Color Transfer with Facial Expression and Emotion Recognition (감정 트레이닝: 얼굴 표정과 감정 인식 분석을 이용한 이미지 색상 변환)

  • Kim, Jong-Hyun
    • Journal of the Korea Computer Graphics Society / v.24 no.4 / pp.1-9 / 2018
  • We propose an emotional training framework that can detect early symptoms of schizophrenia by analyzing changes in facial expression. We use the Microsoft Emotion API to obtain facial expression and emotion values at the current time, analyze these values, and recognize the subtle facial expressions that change over time. To measure how the emotions appearing in facial expressions evolve, emotional states are classified with a peak-analysis-based variance method. The proposed method detects deficits in emotion recognition and expressive ability using characteristics that deviate from the emotional state changes classified according to Ekman's six basic emotions. Finally, the analyzed values are integrated into an image color transfer framework so that users can easily recognize and train their own emotional changes.
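The abstract's peak-analysis-based variance step can be pictured as follows. This sketch assumes a per-frame emotion score over time and simply locates its peaks and reports the variance of the series; it is not the paper's implementation.

```python
# Locate expressive peaks in a simulated per-frame happiness score and report
# the variance of the series; low variance and missing peaks would suggest
# flat emotional expression. Thresholds here are assumptions.
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0, 30, 300)                        # 30 s of per-frame scores
happiness = 0.3 + 0.4 * np.exp(-((t - 12) ** 2) / 4) + 0.02 * np.random.randn(t.size)

peaks, _ = find_peaks(happiness, height=0.5, distance=20)
print("expressive moments (s):", t[peaks])
print("score variance:", round(float(np.var(happiness)), 4))
```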

The facial expression generation of vector graphic character using the simplified principle component vector (간소화된 주성분 벡터를 이용한 벡터 그래픽 캐릭터의 얼굴표정 생성)

  • Park, Tae-Hee
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.9 / pp.1547-1553 / 2008
  • This paper presents a method that generates various facial expressions for a vector graphic character by using simplified principal component vectors. First, we perform principal component analysis on nine facial expressions (astonished, delighted, etc.) redefined from Russell's model of internal emotional states. From this we identify the principal component vectors that most strongly affect the character's facial features and expression, and use them to generate facial expressions. We also create natural intermediate characters and expressions by interpolating the weights applied to the character's features and expressions. The method saves considerable memory and creates intermediate expressions with little computation, so the performance of a character generation system can be improved substantially for web, mobile services, and games where real-time control is required.
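A minimal sketch of the pipeline described above, on toy data: run PCA over a small set of expression feature vectors, keep a few components, and interpolate component weights to obtain in-between expressions cheaply. The data shapes and component count are assumptions, not the paper's values.

```python
import numpy as np
from sklearn.decomposition import PCA

expressions = np.random.rand(9, 60)               # 9 expressions x 60 control-point coords
pca = PCA(n_components=3)
weights = pca.fit_transform(expressions)          # each expression as 3 component weights

def interpolate(expr_a, expr_b, alpha):
    """Blend two expressions by interpolating their PCA weights (alpha in [0, 1])."""
    w = (1 - alpha) * weights[expr_a] + alpha * weights[expr_b]
    return pca.inverse_transform(w[None, :])[0]   # back to control-point space

halfway = interpolate(0, 1, 0.5)                  # intermediate expression between 0 and 1
print(halfway.shape)                              # (60,)
```

Because only a few weights are stored and blended per frame, intermediate expressions cost little memory and computation, which is the point the abstract makes for real-time use.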

The Effect of Bilateral Eye Movements on Face Recognition in Patients with Schizophrenia (양측성 안구운동이 조현병 환자의 얼굴 재인에 미치는 영향)

  • Lee, Na-Hyun;Kim, Ji-Woong;Im, Woo-Young;Lee, Sang-Min;Lim, Sanghyun;Kwon, Hyukchan;Kim, Min-Young;Kim, Kiwoong;Kim, Seung-Jun
    • Korean Journal of Psychosomatic Medicine / v.24 no.1 / pp.102-108 / 2016
  • Objectives : A deficit in recognition memory is one of the common neurocognitive impairments in patients with schizophrenia, who have also been reported to fail to show enhanced memory for emotional stimuli. Previous studies have shown that bilateral eye movements enhance memory retrieval. This study therefore investigated the memory-enhancing effect of bilaterally alternating eye movements in schizophrenic patients. Methods : Twenty-one patients with schizophrenia participated. Participants learned faces (angry or neutral), then performed a recognition memory task on the faces after bilateral eye movements and after central fixation. Recognition accuracy, response bias, and mean response time for hits were compared using two-way repeated-measures analysis of variance. Results : There was a significant effect of the bilateral eye-movement condition on mean response time (F = 5.812, p < 0.05) and response bias (F = 10.366, p < 0.01). No statistically significant interaction was observed between eye-movement condition and facial emotion type. Conclusions : Irrespective of the emotional content of the facial stimuli, recognition memory was enhanced after bilateral eye movements in patients with schizophrenia. Further study is needed to investigate the underlying neural mechanism of this bilateral eye-movement-induced memory enhancement.
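The statistics reported above come from a 2 x 2 repeated-measures ANOVA with within-subject factors eye-movement condition (bilateral vs. fixation) and face emotion (angry vs. neutral). A sketch of that analysis on simulated data, using statsmodels' AnovaRM; the effect sizes built into the simulation are arbitrary.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = []
for subj in range(1, 22):                          # 21 participants
    for eye in ("bilateral", "fixation"):
        for face in ("angry", "neutral"):
            # Simulated hit response time: faster after bilateral eye movements.
            rt = 900 + (-60 if eye == "bilateral" else 0) + rng.normal(0, 40)
            rows.append({"subject": subj, "eye": eye, "face": face, "rt": rt})
df = pd.DataFrame(rows)

print(AnovaRM(data=df, depvar="rt", subject="subject", within=["eye", "face"]).fit())
```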

Differentiation of Facial EMG Responses Induced by Positive and Negative Emotions in Children (긍정정서와 부정정서에 따른 아동의 안면근육반응 차이)

  • Jang Eun-Hye;Lim Hye-Jin;Lee Young-Chang;Chung Soon-Cheol;Sohn Jin-Hun
    • Science of Emotion and Sensibility / v.8 no.2 / pp.161-167 / 2005
  • This study examines how facial EMG responses change when children experience a positive emotion (happiness) and a negative emotion (fear), and whether the two emotions can be distinguished by EMG responses. Audiovisual film clips were used to evoke happiness and fear. Forty-seven children (11-13 years old; 23 boys and 24 girls) participated. Facial EMG (right corrugator and orbicularis oris) was measured while the children experienced the positive or negative emotion, and an emotional assessment scale measured their psychological responses; the clips showed more than 85% appropriateness and effectiveness ratings of 3.15 and 4.04 (on a 5-point scale) for happiness and fear, respectively. Facial EMG responses differed significantly between the resting state and the emotional state for both happiness and fear (p < .001). The results suggest that the two emotions can be distinguished by corrugator and orbicularis oris responses: the corrugator was more activated during the positive emotion (happiness) than the negative emotion (fear), whereas the orbicularis oris was more activated during the negative emotion (fear) than the positive emotion (happiness).

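The resting-state versus emotional-state comparison reported above can be illustrated with a paired test. The sketch below uses simulated corrugator amplitudes; the study's actual analysis and units may differ.

```python
# Paired comparison of mean rectified EMG amplitude at rest vs. during an
# emotional film clip, across 47 simulated children.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
rest = rng.normal(4.0, 1.0, size=47)               # baseline EMG amplitude (arbitrary units)
emotion = rest + rng.normal(1.2, 0.8, size=47)     # elevated activity during the clip

t, p = stats.ttest_rel(emotion, rest)
print(f"t(46) = {t:.2f}, p = {p:.4f}")
```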

Passports Recognition using ART2 Algorithm and Face Verification (ART2 알고리즘과 얼굴 인증을 이용한 여권 인식)

  • Jang, Do-Won;Kim, Kwang-Baek
    • Proceedings of the Korea Intelligent Information System Society Conference / 2005.05a / pp.190-197 / 2005
  • This paper proposes a passport recognition and face verification method that automatically recognizes passport codes and detects forged passports, for efficient and systematic immigration control. Because a passport image may be scanned at an angle, skew correction is critical for character segmentation, recognition, and face verification. We therefore smear the passport image, select the longest extracted character string, and correct the skew using the angle between the horizontal and the line connecting the thickness centers of the left and right parts of that string. For passport code extraction, the Sobel operator, horizontal smearing, and an 8-directional contour tracing algorithm are applied to extract the code string region, which is then binarized with an iterative binarization method. A CDM mask is applied to the binarized string region to restore the code characters, and the 8-directional contour tracing algorithm is applied again to extract individual codes, which are recognized with the ART2 algorithm. For face verification, a face template database is built and template matching is used to measure the similarity between the face region extracted from the passport and the database, determining whether the passport face region has been forged; the similarities of Hue, YIQ-I, and YCbCr-Cb features are analyzed together for this decision. Experiments on passports with forged face regions and on images with added noise, increased or decreased contrast, increased or decreased brightness, and blurring confirmed that the proposed method performs well in passport code recognition and face verification.

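One concrete step from the pipeline above is the skew estimation: split the longest extracted text run into left and right halves and take the angle between the horizontal and the line joining the halves' vertical thickness centers. A minimal sketch on a synthetic binary line follows; it is an illustration of that idea, not the authors' code.

```python
import numpy as np

def estimate_skew(binary):                          # binary: 2-D array of 0/1 text pixels
    rows, cols = np.nonzero(binary)
    mid = (cols.min() + cols.max()) // 2
    left_center = rows[cols <= mid].mean()          # vertical thickness center, left half
    right_center = rows[cols > mid].mean()          # vertical thickness center, right half
    dx = cols[cols > mid].mean() - cols[cols <= mid].mean()
    return np.degrees(np.arctan2(right_center - left_center, dx))

img = np.zeros((60, 200), dtype=np.uint8)
rr = (20 + 0.05 * np.arange(200)).astype(int)       # a slightly tilted text line
img[rr, np.arange(200)] = 1
print(f"estimated skew: {estimate_skew(img):.1f} degrees")
```

The returned angle would then be used to rotate the passport image back before character segmentation.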

An Analytical Study on Precedents of Emotional Child Abuse at Daycare Centers of Korea : Focusing on Emotional Abuse Type, Issues, and Preventive Measures (국내 어린이집의 아동학대 판례 분석 연구 : 정서적 학대 유형, 쟁점 사안 및 예방대책을 중심으로)

  • Youn, Ki-Hyok
    • Journal of Convergence for Information Technology / v.7 no.5 / pp.157-167 / 2017
  • This study aims to establish measures to prevent emotional abuse by analyzing court precedents related to emotional abuse at Korean daycare centers. Ten first-trial rulings and four appellate rulings related to emotional abuse were analyzed in depth. The analysis found diverse types of emotional abuse at daycare centers, such as assault (hitting the head, face, and buttocks with hands and feet), throwing objects, neglect, force-feeding, gagging mouths with handkerchiefs or wet tissues, withholding meals, and showing frightening images. The main legal issues in the rulings were whether the conduct constituted a justifiable act precluding wrongfulness and whether the joint penalty provision applied. Based on these results, measures to prevent emotional abuse at daycare centers are suggested.