• Title/Abstract/Keywords: facial recognition

Search results: 711 (processing time: 0.032 s)

Emotion Recognition Using Eigenspace

  • Lee, Sang-Yun;Oh, Jae-Heung;Chung, Geun-Ho;Joo, Young-Hoon;Sim, Kwee-Bo
    • 제어로봇시스템학회 학술대회논문집 / 제어로봇시스템학회 2002 ICCAS / pp.111.1-111 / 2002
  • System configuration: 1. The first part is image acquisition. 2. The second part creates the vector image and processes the acquired facial image. This part finds the facial area from skin color: we first find the skin-color region carrying the highest weight in the eigenface (which consists of eigenvectors), and then create the vector image of the eigenface from the detected facial area. 3. The third part is the recognition module.
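The eigenface construction and projection the abstract outlines can be sketched minimally as below, using synthetic data; the skin-color-based face detection step is omitted, and all names are illustrative rather than the authors' implementation.

```python
import numpy as np

def eigenface_basis(face_vectors, k=5):
    """Compute the mean face and the top-k eigenfaces from
    flattened face images (one image per row)."""
    mean_face = face_vectors.mean(axis=0)
    centered = face_vectors - mean_face
    # Rows of vt are eigenvectors of the sample covariance matrix,
    # ordered by decreasing singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:k]

def project(face_vector, mean_face, basis):
    """Project one face into the eigenspace, yielding its weight vector."""
    return basis @ (face_vector - mean_face)

# 20 synthetic 64x64 "face" images, flattened to vectors
rng = np.random.default_rng(0)
faces = rng.random((20, 64 * 64))
mean_face, basis = eigenface_basis(faces, k=5)
weights = project(faces[0], mean_face, basis)
print(weights.shape)  # (5,)
```

Recognition would then compare these weight vectors (e.g. by nearest neighbor) rather than raw pixels.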

Difference in Reading Facial Expressions by Empathizing-Systemizing Type: Focusing on Emotional Recognition and Emotional Discrimination

  • 태은주;조경자;박수진;한광희;김혜리
    • 감성과학 / Vol. 11, No. 4 / pp.613-628 / 2008
  • This study examined the relationship between emotion recognition and emotion discrimination according to empathizing-systemizing type, facial presentation area, and emotion type. Experiment 1 investigated how the degree of emotion recognition varies with an individual's empathizing-systemizing type, facial presentation area, and emotion type. Emotion recognition did not differ significantly by empathizing-systemizing type, whereas significant differences were found for facial presentation area and emotion type. Experiment 2 changed the task to examine whether emotion discrimination differs by the same factors. Significant differences in emotion discrimination were found for facial presentation area and emotion type. There was also a significant interaction between empathizing-systemizing type and emotion type: for basic emotions, discrimination did not differ significantly across empathizing-systemizing types, whereas for complex emotions it did. That is, unlike emotion recognition, accuracy in emotion discrimination differed across empathizing-systemizing types depending on emotion type, showing that recognizing emotions and discriminating them operate differently by empathizing-systemizing type. This study demonstrates that an individual's empathizing and systemizing traits, the facial presentation area, and the emotion type can each influence emotion recognition and emotion discrimination differently.

Point Recognition Precision Test of the 3D Automatic Face Recognition Apparatus (3D-AFRA)

  • 석재화;조경래;조용범;유정희;곽창규;황민우;고병희;김종원;김규곤;이의주
    • 사상체질의학회지 / Vol. 19, No. 1 / pp.50-59 / 2007
  • 1. Objectives: The face is an important standard for the classification of Sasang constitutions. We are developing a 3D Automatic Face Recognition Apparatus (3D-AFRA) to analyze facial characteristics; the apparatus produces a 3D image of a person's face and measures the facial figure. Its accuracy of position recognition therefore needs to be examined. 2. Methods: We photographed faces with landmarks attached using the 3D-AFRA, and scanned the same faces with a laser scanner (VIVID 700). We computed the average error of the distances between facial definition points and compared the results from the 3D-AFRA with those from the laser scanner, thereby assessing the position-recognition accuracy of the 3D-AFRA indirectly. 3. Results and Conclusions: In the frontal facial image, the average error of the distances from the right pupil to the other facial definition points was 0.5140 mm, and from the left pupil 0.5949 mm. In the lateral facial image, the average error from the left pupil was 0.5308 mm and from the left tragion 0.6529 mm. In conclusion, the position-recognition accuracy of the 3D-AFRA was assessed to be considerably good.
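The indirect precision check described in the Methods, comparing anchor-to-landmark distances from the apparatus against a laser-scanner reference, can be sketched as follows. The landmark coordinates and the rigid-offset error model are hypothetical, not the study's data.

```python
import numpy as np

def mean_distance_error(ref_points, test_points, anchor_idx):
    """Average absolute error of the distances from one anchor landmark
    (e.g. the right pupil) to every other landmark, comparing a reference
    scan against the apparatus under test."""
    ref_d = np.linalg.norm(ref_points - ref_points[anchor_idx], axis=1)
    test_d = np.linalg.norm(test_points - test_points[anchor_idx], axis=1)
    others = np.arange(len(ref_points)) != anchor_idx  # skip the anchor itself
    return float(np.abs(ref_d - test_d)[others].mean())

# hypothetical landmark coordinates in mm: laser-scanner reference vs. device
ref = np.array([[0.0, 0.0, 0.0], [30.0, 0.0, 0.0],
                [15.0, 20.0, 5.0], [15.0, 40.0, 2.0]])
dev = ref + np.array([0.3, -0.2, 0.1])  # device output shifted by a rigid offset
# a pure translation preserves inter-landmark distances, so the error is ~0
print(round(mean_distance_error(ref, dev, anchor_idx=0), 6))  # 0.0
```

With real device output the residual after such a comparison would reflect the sub-millimetre errors reported in the abstract.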

A Study on an Efficient Facial Expression Recognition System for Customer Satisfaction Feedback

  • 강민식
    • 융합보안논문지 / Vol. 12, No. 4 / pp.41-47 / 2012
  • In the B2C (business-to-customer) industry, effective performance management requires inferring the service elements customers want, providing those services, and evaluating the results so that service quality and performance can be improved continuously. A key element of this is accurate feedback on customer satisfaction, yet Korea currently lacks a quantitative, standardized system for measuring it. Research on mobile phones and related services that recognize human emotion by sensing facial expressions and biometric data has recently been increasing. Among the various emotion recognition approaches under study, emotion recognition from the face is expected to serve as an efficient and natural human interface. This study analyzes efficient facial emotion recognition and proposes a customer feedback system that infers customer satisfaction from facial emotion recognition.

Recognition of Facial Emotion Using Multi-scale LBP

  • 원철호
    • 한국멀티미디어학회논문지 / Vol. 17, No. 12 / pp.1383-1392 / 2014
  • In this paper, we proposed a method that automatically determines the optimal radius in facial emotion recognition by generalizing the radius of the multi-scale LBP operator and applying boosted learning. Examining the distribution of selected feature vectors, $LBP_{8,1}$ was the most common at 31%, and $LBP_{8,1}$ and $LBP_{8,2}$ together accounted for 57.5%; $LBP_{8,3}$, $LBP_{8,4}$, and $LBP_{8,5}$ accounted for 18.5%, 12.0%, and 12.0%, respectively. Patterns with relatively larger radii were found to express facial characteristics well. For neutral and anger, $LBP_{8,1}$ and $LBP_{8,2}$ dominated, whereas for laughter and surprise the proportion of $LBP_{8,3}$ was greater than or equal to that of $LBP_{8,1}$; radii greater than 1 or 2 thus proved useful for recognizing specific emotions. The facial expression recognition rate of the proposed multi-scale LBP method was 97.5%, demonstrating its superiority, which was confirmed through various experiments.
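The $LBP_{8,R}$ operator underlying the multi-scale analysis can be sketched as below. This is a minimal nearest-neighbor version: the paper may well use bilinear interpolation for the circular sampling, and the function name and radius handling here are illustrative only.

```python
import numpy as np

def lbp_8_r(image, radius):
    """Basic LBP_{8,R}: compare 8 neighbors on a circle of the given
    radius against the center pixel and pack the results into an
    8-bit code per pixel (nearest-neighbor sampling, no interpolation)."""
    h, w = image.shape
    angles = 2 * np.pi * np.arange(8) / 8
    codes = np.zeros((h - 2 * radius, w - 2 * radius), dtype=np.uint8)
    center = image[radius:h - radius, radius:w - radius]
    for bit, a in enumerate(angles):
        dy = int(round(-radius * np.sin(a)))  # nearest pixel on the circle
        dx = int(round(radius * np.cos(a)))
        neighbor = image[radius + dy:h - radius + dy,
                         radius + dx:w - radius + dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    return codes

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32))
# larger radii shrink the valid region by 2*radius in each dimension
print(lbp_8_r(img, radius=1).shape, lbp_8_r(img, radius=3).shape)  # (30, 30) (26, 26)
```

A multi-scale feature vector would then concatenate code histograms over several radii and let the boosted learner pick the most discriminative ones.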

A Study on the System of Facial Expression Recognition for Emotional Information and Communication Technology Teaching

  • 송은지
    • 한국실천공학교육학회논문지 / Vol. 4, No. 2 / pp.171-175 / 2012
  • Research on ICT (Information and Communication Technology) that can recognize and communicate human emotion using information technology has recently been increasing. For example, where reading another person's mind once required forming a relationship and spending time with that person, the digitalization of society has digitized that experience, and digital devices capable of reading minds are emerging; that is, digital devices can now take over emotional inference that only humans could previously perform. Among the various emotion recognition approaches under study, emotion recognition from the face is expected to serve as an efficient and natural human interface. This paper reviews emotional ICT and, as a case study, examines the mechanism of a facial emotion recognition system.

Conflict Resolution: Analysis of the Existing Theories and Resolution Strategies in Relation to Face Recognition

  • A. A. Alabi;B. S. Afolabi;B. I. Akhigbe;A. A. Ayoade
    • International Journal of Computer Science & Network Security / Vol. 23, No. 9 / pp.166-176 / 2023
  • A scenario known as conflict in face recognition may arise from disparity-related issues (such as expression, distortion, and occlusion), leading to a compromise of someone's identity or a contradiction of the intended message. Addressing this requires determining and applying appropriate procedures among the various conflict theories, both in terms of concepts and resolution strategies. Theories such as Marxist theory, game theory (the prisoner's dilemma, penny matching, and the chicken problem), Lanchester theory, and information theory were analyzed in relation to conflicts in facial images by answering selected questions concerning the resolution of facial conflict. The scenarios presented in Marxist theory were observed to agree with the form of resolution expected when analyzing conflict and related issues in face recognition. The study found that conflict in facial images can be better analyzed using the concept introduced by Marxist theory in relation to information theory, because its resolution strategy seeks a balanced outcome rather than the win-or-lose scenarios applied in the other concepts. This was further supported by reference to the main mechanisms and outcome scenarios applicable in information theory.

Trends and Future Directions in Facial Expression Recognition Technology: A Text Mining Analysis Approach

  • 전인수;이병천;임수빈;문지훈
    • 한국정보처리학회 학술대회논문집 / 한국정보처리학회 2023 춘계학술발표대회 / pp.748-750 / 2023
  • Facial expression recognition technology's rapid growth and development have garnered significant attention in recent years. This technology holds immense potential for various applications, making it crucial to stay up-to-date with the latest trends and advancements. Simultaneously, it is essential to identify and address the challenges that impede the technology's progress. Motivated by these factors, this study aims to understand the latest trends, future directions, and challenges in facial expression recognition technology by utilizing text mining to analyze papers published between 2020 and 2023. Our research focuses on discerning which aspects of these papers provide valuable insights into the field's recent developments and issues. By doing so, we aim to present the information in an accessible and engaging manner for readers, enabling them to understand the current state and future potential of facial expression recognition technology. Ultimately, our study seeks to contribute to the ongoing dialogue and facilitate further advancements in this rapidly evolving field.
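A first step of the kind of text-mining analysis the abstract mentions, simple keyword-frequency counting over paper abstracts, might look like the sketch below. The corpus, stopword list, and function name are all hypothetical; the actual study likely used a fuller pipeline.

```python
import re
from collections import Counter

# a deliberately tiny stopword list for illustration
STOPWORDS = {"the", "a", "an", "of", "and", "in", "for", "to", "on", "with"}

def keyword_frequencies(abstracts):
    """Tokenize abstracts, drop stopwords, and count term frequencies:
    a basic building block for spotting trends across a paper corpus."""
    counts = Counter()
    for text in abstracts:
        tokens = re.findall(r"[a-z]+", text.lower())
        counts.update(t for t in tokens if t not in STOPWORDS)
    return counts

# hypothetical corpus of two abstract snippets
corpus = [
    "Facial expression recognition with deep learning",
    "A transformer approach to facial expression recognition",
]
print(keyword_frequencies(corpus).most_common(3))
```

Trend analysis would then compare such frequency profiles across publication years, e.g. 2020 through 2023.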

Person-Independent Facial Expression Recognition with Histograms of Prominent Edge Directions

  • Makhmudkhujaev, Farkhod;Iqbal, Md Tauhid Bin;Arefin, Md Rifat;Ryu, Byungyong;Chae, Oksam
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 12 / pp.6000-6017 / 2018
  • This paper presents a new descriptor, named Histograms of Prominent Edge Directions (HPED), for the recognition of facial expressions in a person-independent environment. In this paper, we raise the issue of sampling error in generating the code-histogram from spatial regions of the face image, as observed in the existing descriptors. HPED describes facial appearance changes based on the statistical distribution of the top two prominent edge directions (i.e., primary and secondary direction) captured over small spatial regions of the face. Compared to existing descriptors, HPED uses a smaller number of code-bins to describe the spatial regions, which helps avoid sampling error despite having fewer samples while preserving the valuable spatial information. In contrast to the existing Histogram of Oriented Gradients (HOG) that uses the histogram of the primary edge direction (i.e., gradient orientation) only, we additionally consider the histogram of the secondary edge direction, which provides more meaningful shape information related to the local texture. Experiments on popular facial expression datasets demonstrate the superior performance of the proposed HPED against existing descriptors in a person-independent environment.
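The core idea of HPED, ranking directional edge responses per pixel and histogramming the top two directions over a small region, can be sketched as follows. The simple shift-difference "kernels" and the four-direction quantization are simplified assumptions for illustration, not the paper's exact edge operators.

```python
import numpy as np

def top_two_direction_histogram(patch, n_dirs=4):
    """For each pixel, rank directional edge responses and accumulate
    separate histograms of the primary and secondary directions,
    concatenated into one region descriptor."""
    # crude directional differences at 0, 45, 90, and 135 degrees
    # (np.roll wraps at the borders; acceptable for a sketch)
    shifts = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]
    responses = np.stack([
        np.abs(patch - np.roll(np.roll(patch, dy, axis=0), dx, axis=1))
        for dy, dx in shifts
    ])                                   # shape: (n_dirs, h, w)
    order = np.argsort(-responses, axis=0)
    primary = order[0].ravel()           # strongest direction per pixel
    secondary = order[1].ravel()         # second strongest per pixel
    h1 = np.bincount(primary, minlength=n_dirs)
    h2 = np.bincount(secondary, minlength=n_dirs)
    return np.concatenate([h1, h2])

rng = np.random.default_rng(2)
patch = rng.random((16, 16))
hist = top_two_direction_histogram(patch)
# every pixel votes once in each half of the descriptor
print(int(hist[:4].sum()), int(hist[4:].sum()))  # 256 256
```

A full descriptor would concatenate such histograms over a grid of face regions; using only the top-two directions keeps the number of code bins small, which is the sampling-error argument the abstract makes.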

Pose-normalized 3D Face Modeling for Face Recognition

  • Yu, Sun-Jin;Lee, Sang-Youn
    • 한국통신학회논문지 / Vol. 35, No. 12C / pp.984-994 / 2010
  • Pose variation is a critical problem in face recognition. Three-dimensional (3D) face recognition techniques have been proposed, as 3D data contains depth information that may allow pose variation to be handled more effectively than with 2D face recognition methods. This paper proposes a pose-normalized 3D face modeling method that translates and rotates any pose angle to a frontal pose using a plane-fitting method based on Singular Value Decomposition (SVD). First, we reconstruct 3D face data with a stereo vision method. Second, the nose peak point is estimated from depth information, and the pose angle is then estimated by a facial plane-fitting algorithm using four facial features. Finally, using the estimated pose angle, the 3D face is translated and rotated to a frontal pose. To demonstrate the effectiveness of the proposed method, we designed 2D and 3D face recognition experiments. The experimental results show that the normalized 3D face recognition method outperforms an un-normalized 3D face recognition method in overcoming pose variation.
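The plane-fitting-by-SVD pose normalization the abstract describes can be sketched as follows. The four feature points are hypothetical, and the stereo reconstruction and nose-peak estimation steps are omitted; this only shows fitting the facial plane and rotating its normal to face the camera.

```python
import numpy as np

def fit_plane_normal(points):
    """Fit a plane to 3D points via SVD: the right singular vector with
    the smallest singular value is the plane normal (sign is arbitrary)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

def rotation_to_frontal(normal):
    """Rotation matrix (Rodrigues' formula) aligning the facial-plane
    normal with the z-axis, i.e. turning the face to a frontal pose."""
    z = np.array([0.0, 0.0, 1.0])
    n = normal / np.linalg.norm(normal)
    v = np.cross(n, z)                        # rotation axis (unnormalized)
    s, c = np.linalg.norm(v), float(n @ z)    # sin and cos of the angle
    if s < 1e-12:                             # already frontal or reversed
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    k = v / s
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + s * K + (1 - c) * (K @ K)

# four hypothetical facial feature points lying in the tilted plane z = x
pts = np.array([[0.0, 0, 0], [1, 0, 1], [0, 1, 0], [1, 1, 1]])
n = fit_plane_normal(pts)
if n[2] < 0:                                  # orient normal toward the camera
    n = -n
R = rotation_to_frontal(n)
print(np.allclose(R @ n, [0, 0, 1]))  # True
```

Applying `R` (after centering) to every reconstructed 3D point would then yield the pose-normalized face used for recognition.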