• Title/Summary/Keyword: Facial expression recognition technology (얼굴표정 인식기술)

Search results: 66

A Simple Way to Find Face Direction (간단한 얼굴 방향성 검출방법)

  • Park Ji-Sook;Ohm Seong-Yong;Jo Hyun-Hee;Chung Min-Gyo
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.2
    • /
    • pp.234-243
    • /
    • 2006
  • The recent rapid development of HCI and surveillance technologies has brought great interest in application systems that process faces. Much of the research effort in these systems has focused primarily on areas such as face recognition, facial expression analysis, and facial feature extraction. However, few approaches have been reported for face direction detection. This paper proposes a method to detect the direction of a face using a facial feature called the facial triangle, which is formed by the two eyebrows and the lower lip. Specifically, based on a single monocular view of the face, the proposed method introduces very simple formulas to estimate the horizontal or vertical rotation angle of the face. The horizontal rotation angle can be calculated from the ratio between the areas of the left and right facial triangles, while the vertical angle can be obtained from the ratio between the base and height of the facial triangle. Experimental results showed that our method obtains the horizontal angle within an error tolerance of ±1.68°, and that it performs better as the magnitude of the vertical rotation angle increases.

  • PDF
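The area-ratio idea in the abstract above can be sketched in a few lines. This is only an illustrative reading, not the paper's exact formulas: splitting the eyebrow-eyebrow-lip triangle at the lip's vertical axis, and the particular ratio-to-degrees mapping, are both assumptions.

```python
import math

def tri_area(a, b, c):
    # Shoelace formula for the area of a 2-D triangle
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def yaw_from_triangles(left_brow, right_brow, lip):
    """Estimate horizontal rotation from left/right facial-triangle areas.

    Splits the facial triangle at the lip's vertical axis and compares the
    two sub-triangle areas; equal areas suggest a frontal face. The
    ratio-to-degrees mapping below is a hypothetical placeholder.
    """
    mid = (lip[0], (left_brow[1] + right_brow[1]) / 2.0)
    a_left = tri_area(left_brow, mid, lip)
    a_right = tri_area(right_brow, mid, lip)
    ratio = a_left / a_right
    return math.degrees(math.atan((ratio - 1.0) / (ratio + 1.0)))
```

For a symmetric (frontal) landmark layout the two sub-triangle areas match and the estimated yaw is zero; a brow shifted toward one side yields a nonzero angle of the corresponding sign.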

Enhancing e-Learning Interactivity via Emotion Recognition Computing Technology (감성 인식 컴퓨팅 기술을 적용한 이러닝 상호작용 기술 연구)

  • Park, Jung-Hyun;Kim, InOk;Jung, SangMok;Song, Ki-Sang;Kim, JongBaek
    • The Journal of Korean Association of Computer Education
    • /
    • v.11 no.2
    • /
    • pp.89-98
    • /
    • 2008
  • Providing appropriate interaction between the learner and the e-Learning system is an essential factor in a successful e-Learning system. Although many interaction functions are employed in multimedia Web-based Instruction content, learners experience a lack of the real-time feedback that human educators provide, due to the limitations of Human-Computer Interaction techniques. In this paper, an emotion recognition system based on learner facial expressions has been developed and applied to a tutoring system. As human educators do, the system observes learners' emotions from facial expressions and provides pertinent feedback. Such varied feedback can increase motivation and reduce the sense of isolation of studying alone in an e-Learning environment. The test results showed that this system may provide significant improvements in terms of interest and educational achievement.

  • PDF

Robust AAM-based Face Tracking with Occlusion Using SIFT Features (SIFT 특징을 이용하여 중첩상황에 강인한 AAM 기반 얼굴 추적)

  • Eom, Sung-Eun;Jang, Jun-Su
    • The KIPS Transactions:PartB
    • /
    • v.17B no.5
    • /
    • pp.355-362
    • /
    • 2010
  • Face tracking estimates the motion of a non-rigid face together with a rigid head in 3D, and plays an important role in higher-level tasks such as face, facial expression, and emotion recognition. In this paper, we propose an AAM-based face tracking algorithm. AAMs have been widely used to segment and track deformable objects, but many difficulties remain. In particular, an AAM often tends to diverge or converge into local minima when a target object is self-occluded, or partially or completely occluded. To address this problem, we utilize the scale-invariant feature transform (SIFT). SIFT is effective for self and partial occlusion because it can find correspondences between feature points under partial loss, and thanks to its good global matching performance it enables an AAM to continue tracking without re-initialization through complete occlusions. We also register SIFT features extracted from multi-view face images during tracking and use them to track a face effectively across large pose changes. The proposed algorithm is validated by comparison with other algorithms under the above three kinds of occlusion.
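The robustness to partial loss that the abstract attributes to SIFT matching comes largely from nearest-neighbour matching with Lowe's ratio test: a correspondence is accepted only when its best match is clearly better than the second best. A minimal pure-Python sketch (descriptors as plain tuples, squared Euclidean distance; the 0.8 threshold is a conventional choice, not taken from this paper):

```python
def match_ratio_test(desc_a, desc_b, ratio=0.8):
    """Match descriptors from desc_a to desc_b with Lowe's ratio test.

    A match (i, j) is kept only if the nearest neighbour j is clearly
    closer than the second nearest, which suppresses ambiguous matches
    when part of the face is occluded or features are lost.
    """
    matches = []
    for i, da in enumerate(desc_a):
        # Squared distance to every candidate descriptor, smallest first
        dists = sorted(
            (sum((x - y) ** 2 for x, y in zip(da, db)), j)
            for j, db in enumerate(desc_b)
        )
        if len(dists) > 1 and dists[0][0] < ratio ** 2 * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

With two distinctive descriptors on each side, both survive the test; a descriptor whose two nearest candidates are nearly equidistant would be discarded as ambiguous.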

Development of Context Awareness and Service Reasoning Technique for Handicapped People (멀티 모달 감정인식 시스템 기반 상황인식 서비스 추론 기술 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.1
    • /
    • pp.34-39
    • /
    • 2009
  • Human emotion is subjective and impulsive, and it unconsciously expresses intentions and needs. These expressions carry rich contextual information about users of ubiquitous computing environments and intelligent robot systems. Indicators from which a user's emotion can be inferred include facial images, voice signals, biological signal spectra, and so on. In this paper, we generate separate facial and voice emotion recognition results from facial images and voice, to increase the convenience and efficiency of emotion recognition. We also extract the best-fitting features from image and sound to raise the recognition rate, and implement a multi-modal emotion recognition system based on feature fusion. Finally, using the emotion recognition results, we propose a ubiquitous computing service reasoning method based on a Bayesian network and a ubiquitous context scenario for the ubiquitous computing environment.
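The paper above fuses at the feature level and reasons with a Bayesian network, but the core idea of combining two modality recognisers can be illustrated with a simpler decision-level stand-in: a product rule over per-modality emotion posteriors under a conditional-independence assumption. All names and the uniform-prior default here are illustrative.

```python
def fuse_posteriors(p_face, p_voice, prior=None):
    # Product-rule fusion of two posteriors P(emotion | modality),
    # assuming the modalities are conditionally independent given
    # the emotion; the result is renormalized to sum to one.
    labels = list(p_face)
    if prior is None:
        prior = {k: 1.0 / len(labels) for k in labels}
    joint = {k: p_face[k] * p_voice[k] / prior[k] for k in labels}
    z = sum(joint.values())
    return {k: v / z for k, v in joint.items()}
```

When both modalities lean toward the same emotion, the fused posterior concentrates on it more sharply than either modality alone.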

Development of Emotion Recognition and Expression module with Speech Signal for Entertainment Robot (엔터테인먼트 로봇을 위한 음성으로부터 감정 인식 및 표현 모듈 개발)

  • Mun, Byeong-Hyeon;Yang, Hyeon-Chang;Sim, Gwi-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2007.11a
    • /
    • pp.82-85
    • /
    • 2007
The use of service robots (cleaning robots, pet robots, multimedia robots, etc.) is currently increasing in homes and many other fields. A personal service robot must have human-friendly characteristics for its acceptance to grow, and for this, technology for recognizing and expressing the user's emotion is essential. To recognize human emotion, many researchers use voice, facial expressions, biological signals, and gestures; research on recognizing and applying voice in particular is being actively pursued. This paper proposes an emotion recognition system in two ways: classifying per-word emotion using a widely developed speech recognition module and applying it to an emotion expression system; and detecting features from the voice signal acquired through a microphone, applying Bayesian Learning (BL) to classify the pattern into five emotional states (normal, happy, sad, surprise, anger), and feeding the result into a dynamic emotion expression algorithm so that human emotion can be expressed in a dynamic emotion space. The proposed speech recognition and emotion expression system is based on an ARM platform.

  • PDF

Local Prominent Directional Pattern for Gender Recognition of Facial Photographs and Sketches (Local Prominent Directional Pattern을 이용한 얼굴 사진과 스케치 영상 성별인식 방법)

  • Makhmudkhujaev, Farkhod;Chae, Oksam
    • Convergence Security Journal
    • /
    • v.19 no.2
    • /
    • pp.91-104
    • /
    • 2019
  • In this paper, we present a novel local descriptor, Local Prominent Directional Pattern (LPDP), to represent facial images for gender recognition. To achieve a clearly discriminative representation of local shape, the presented method encodes a target pixel with the prominent directional variations in the local structure, based on an analysis of the statistics in the histogram of those directional variations. The use of statistical information comes from the observation that a local neighboring region with an edge running through it exhibits similar gradient directions; hence the prominent accumulations of such gradient directions provide a solid basis for representing the shape of that local structure. Unlike the sole use of the gradient direction of a target pixel in existing methods, our coding scheme selects prominent edge directions accumulated from more samples (e.g., the surrounding neighboring pixels), which in turn minimizes the effect of noise by suppressing noisy accumulations from single or few samples. In this way, the presented encoding strategy captures the shape of local structures more discriminatively while ensuring robustness to subtle changes such as local noise. We conduct extensive experiments on gender recognition datasets containing a wide range of challenges, such as illumination, expression, age, and pose variations as well as sketch images, and observe better performance of the LPDP descriptor against existing local descriptors.
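A simplified reading of the encoding described above — accumulate quantized gradient directions over a neighbourhood, then keep the two most prominent bins as the pixel's code — can be sketched as follows. The bin count, magnitude weighting, and two-direction packing scheme are illustrative choices, not the paper's exact definition.

```python
import math

def lpdp_code(patch, bins=8):
    """Code the centre pixel of a square intensity patch by its two most
    prominent quantized gradient directions (a simplified LPDP reading)."""
    hist = [0.0] * bins
    n = len(patch)
    for y in range(1, n - 1):
        for x in range(1, n - 1):
            # Central-difference gradients at each interior neighbour
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            # Quantize direction into `bins` equal sectors, weight by magnitude
            b = int((math.atan2(gy, gx) + math.pi) / (2 * math.pi) * bins) % bins
            hist[b] += mag
    # Keep the two most heavily accumulated directions and pack them
    top = sorted(range(bins), key=lambda i: hist[i], reverse=True)[:2]
    d1, d2 = sorted(top)
    return d1 * bins + d2
```

Because the code depends only on which direction bins dominate, uniformly scaling the patch intensities leaves the code unchanged, which mirrors the descriptor's intended robustness to local intensity changes.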

A study on baby face makeup to create a baby face image (동안이미지 연출을 위한 동안 메이크업에 관한 연구)

  • Yong-Shin Kim
    • Journal of the Korean Applied Science and Technology
    • /
    • v.40 no.1
    • /
    • pp.146-159
    • /
    • 2023
  • Two hypotheses about makeup techniques for a baby-face image were supported: 'There will be a difference in the perception of baby-face makeup expression techniques according to general matters' and 'There will be a difference in the perception of baby-face makeup expression techniques according to general characteristics'. Makeup techniques for creating a baby-face image serve an important function for both men and women beyond appearance alone: as a 'physical resource' for social activities, they were confirmed to improve the efficiency of body and mind and to markedly enhance mental ability in daily life. The results on the expression of baby-face image makeup show that awareness of and interest in baby-face images is high, but research on producing baby-face images is still needed. The facial expression elements identified for baby-face makeup are expected to serve as basic data for developing baby-face images, and this study focuses on external face management for baby-face images and baby-face makeup.

The Effect of Bilateral Eye Movements on Face Recognition in Patients with Schizophrenia (양측성 안구운동이 조현병 환자의 얼굴 재인에 미치는 영향)

  • Lee, Na-Hyun;Kim, Ji-Woong;Im, Woo-Young;Lee, Sang-Min;Lim, Sanghyun;Kwon, Hyukchan;Kim, Min-Young;Kim, Kiwoong;Kim, Seung-Jun
    • Korean Journal of Psychosomatic Medicine
    • /
    • v.24 no.1
    • /
    • pp.102-108
    • /
    • 2016
  • Objectives: A deficit of recognition memory is one of the common neurocognitive impairments in patients with schizophrenia, who have also been reported to fail to show enhanced memory for emotional stimuli. Previous studies have shown that bilateral eye movements enhance memory retrieval. This study was therefore conducted to investigate memory enhancement by bilaterally alternating eye movements in schizophrenic patients. Methods: Twenty-one patients with schizophrenia participated in this study. The participants learned faces (angry or neutral), and then performed a recognition memory task on the faces after bilateral eye movements or central fixation. Recognition accuracy, response bias, and mean response time to hits were compared and analysed. Two-way repeated-measures analysis of variance was performed for statistical analysis. Results: There was a significant effect of the bilateral eye movement condition on mean response time (F=5.812, p<0.05) and response bias (F=10.366, p<0.01). No statistically significant interaction effects were observed between eye movement condition and face emotion type. Conclusions: Irrespective of the emotional content of the facial stimuli, recognition memory processing was more enhanced after bilateral eye movements in patients with schizophrenia. Further study is needed to investigate the underlying neural mechanism of this memory enhancement.

Multi-classifier Decision-level Fusion for Face Recognition (다중 분류기의 판정단계 융합에 의한 얼굴인식)

  • Yeom, Seok-Won
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.4
    • /
    • pp.77-84
    • /
    • 2012
  • Face classification has wide applications in intelligent video surveillance, content retrieval, robot vision, and human-machine interfaces. Pose and expression changes and arbitrary illumination are typical problems for face recognition, and when the face is captured at a distance the image quality is often further degraded by blurring and noise corruption. This paper investigates the efficacy of multi-classifier decision-level fusion for face classification based on photon-counting linear discriminant analysis with two different cost functions: Euclidean distance and negative normalized correlation. Decision-level fusion comprises three stages: cost normalization, cost validation, and fusion rules. First, the costs are normalized into a uniform range; then, candidate costs are selected during validation. Three fusion rules are employed: the minimum, average, and majority-voting rules. In the experiments, defocus and motion blur are rendered to simulate the effects of long-distance environments. It is shown that the decision-level fusion scheme provides better results than a single classifier.
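The staged scheme described above — normalize each classifier's costs into a common range, then fuse with a minimum, average, or majority-voting rule — can be sketched as follows. The min-max normalization and the omission of the validation stage are simplifying assumptions for illustration.

```python
def normalize(costs):
    # Min-max normalize one classifier's per-class costs into [0, 1]
    lo, hi = min(costs.values()), max(costs.values())
    span = (hi - lo) or 1.0
    return {k: (v - lo) / span for k, v in costs.items()}

def fuse(cost_sets, rule="average"):
    """Decision-level fusion of several classifiers' cost dictionaries.

    cost_sets: one {class: cost} dict per classifier (lower = better).
    Returns the winning class under the chosen fusion rule.
    """
    norm = [normalize(c) for c in cost_sets]
    classes = list(norm[0])
    if rule == "minimum":
        fused = {c: min(n[c] for n in norm) for c in classes}
    elif rule == "average":
        fused = {c: sum(n[c] for n in norm) / len(norm) for c in classes}
    elif rule == "vote":
        # Each classifier votes for its own lowest-cost class
        votes = {c: 0 for c in classes}
        for n in norm:
            votes[min(n, key=n.get)] += 1
        return max(votes, key=votes.get)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return min(fused, key=fused.get)
```

When the classifiers agree, all three rules return the same class; they differ mainly in how they weigh a dissenting classifier's costs.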

A Study on the Emoticon Extraction based on Facial Expression Recognition using Deep Learning Technique (딥 러닝 기술 이용한 얼굴 표정 인식에 따른 이모티콘 추출 연구)

  • Jeong, Bong-Jae;Zhang, Fan
    • Korean Journal of Artificial Intelligence
    • /
    • v.5 no.2
    • /
    • pp.43-53
    • /
    • 2017
  • In this paper, a method of extracting an emoticon matching the user's expression is proposed, using an Android intelligent device to identify the facial expression. The understanding and expression of emotion are very important in human-computer interaction, and technology for identifying human expressions is widely studied. Instead of searching for the emoticons they often use, users can have their facial expression identified with a camera, a practical technique available today. This thesis uses third-party datasets available on the web to improve facial expression recognition accuracy and to train a convolutional neural network model; the model's recognition of the user's facial expression and similar expressions reached 66%. There is no need to search for emoticons: when the camera recognizes the expression, the matching emoticon appears immediately, which is convenient when sending messages to others. Removing the need to search among countless emoticons is an increasingly common application of deep learning, and more suitable algorithms for expression recognition are needed to further improve accuracy.