• Title/Abstract/Keyword: facial emotion processing

Search results: 35

Discriminative Effects of Social Skills Training on Facial Emotion Recognition among Children with Attention-Deficit/Hyperactivity Disorder and Autism Spectrum Disorder

  • Lee, Ji-Seon;Kang, Na-Ri;Kim, Hui-Jeong;Kwak, Young-Sook
    • Journal of the Korean Academy of Child and Adolescent Psychiatry
    • /
    • Vol. 29, No. 4
    • /
    • pp.150-160
    • /
    • 2018
  • Objectives: This study investigated the effect of social skills training (SST) on facial emotion recognition and discrimination in children with attention-deficit/hyperactivity disorder (ADHD) and autism spectrum disorder (ASD). Methods: Twenty-three children aged 7 to 10 years participated in our SST. They included 15 children diagnosed with ADHD and 8 with ASD. The participants' parents completed the Korean version of the Child Behavior Checklist (K-CBCL), the ADHD Rating Scale, and the Conners' Scale at baseline and post-treatment. The participants completed the Korean Wechsler Intelligence Scale for Children-IV (K-WISC-IV) and the Advanced Test of Attention at baseline, and the Penn Emotion Recognition and Discrimination Task at baseline and post-treatment. Results: No significant changes in facial emotion recognition and discrimination occurred in either group between baseline and post-treatment. However, when controlling for the processing speed of the K-WISC and the social subscale of the K-CBCL, the ADHD group showed greater improvement than the ASD group in total (p=0.049), female (p=0.039), sad (p=0.002), mild (p=0.015), female extreme (p=0.005), male mild (p=0.038), and Caucasian (p=0.004) facial expressions. Conclusion: SST improved facial expression recognition more effectively in children with ADHD than in children with ASD, who may need additional training to support emotion recognition and discrimination.
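
The group contrast reported above is a covariate-adjusted comparison (controlling for K-WISC processing speed and the K-CBCL social subscale). A minimal sketch of that kind of ANCOVA-style analysis is shown below; the CSV file and all column names are hypothetical placeholders, not the study's data.

```python
# Illustrative ANCOVA-style comparison of post-treatment emotion-recognition
# scores by diagnostic group, adjusting for two covariates.
# "er40_scores.csv" and the column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("er40_scores.csv")  # one row per child: group, covariates, outcome

model = smf.ols(
    "post_total_correct ~ C(group) + wisc_processing_speed + cbcl_social",
    data=df,
).fit()
print(sm.stats.anova_lm(model, typ=2))  # Type II table: group effect adjusted for covariates
```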

Audio and Video Bimodal Emotion Recognition in Social Networks Based on Improved AlexNet Network and Attention Mechanism

  • Liu, Min;Tang, Jun
    • Journal of Information Processing Systems
    • /
    • Vol. 17, No. 4
    • /
    • pp.754-771
    • /
    • 2021
  • In the task of continuous-dimension emotion recognition, the parts that highlight emotional expression differ across modalities, and the influence of each modality on the emotional state also differs. Therefore, this paper studies the fusion of the two most important modalities in emotion recognition (voice and visual expression) and proposes a dual-modal emotion recognition method that combines an improved AlexNet network with an attention mechanism. After simple preprocessing of the audio and video signals, prior knowledge is first used to extract audio features. Then, facial expression features are extracted by the improved AlexNet network. Finally, a multimodal attention mechanism is used to fuse the facial expression and audio features, and an improved loss function is used to mitigate the modality-missing problem, improving the robustness of the model and the performance of emotion recognition. The experimental results show that the concordance correlation coefficients of the proposed model in the arousal and valence dimensions were 0.729 and 0.718, respectively, which are superior to several comparative algorithms.
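
A minimal sketch of the attention-weighted fusion step described above (projecting audio and facial feature vectors into a shared space, weighting them with softmax attention, and regressing arousal and valence) is given below. The feature dimensions and layer sizes are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of attention-based fusion of audio and facial feature vectors.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, audio_dim=128, face_dim=256, fused_dim=128):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, fused_dim)
        self.face_proj = nn.Linear(face_dim, fused_dim)
        self.attn = nn.Linear(fused_dim, 1)   # scores each modality
        self.head = nn.Linear(fused_dim, 2)   # arousal and valence outputs

    def forward(self, audio_feat, face_feat):
        a = torch.tanh(self.audio_proj(audio_feat))
        f = torch.tanh(self.face_proj(face_feat))
        stacked = torch.stack([a, f], dim=1)                 # (batch, 2, fused_dim)
        weights = torch.softmax(self.attn(stacked), dim=1)   # per-modality attention
        fused = (weights * stacked).sum(dim=1)               # weighted sum over modalities
        return self.head(fused)

model = AttentionFusion()
out = model(torch.randn(4, 128), torch.randn(4, 256))  # -> (4, 2) arousal/valence
```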

Video Analysis System for Action and Emotion Detection by Object with Hierarchical Clustering based Re-ID

  • 이상현;양성훈;오승진;강진범
    • Journal of Intelligence and Information Systems
    • /
    • Vol. 28, No. 1
    • /
    • pp.89-106
    • /
    • 2022
  • The recent surge in video data has sharply increased demand for computer vision techniques that can process it effectively, such as object detection and tracking, action recognition, facial expression recognition, and re-identification (Re-ID). However, object detection and tracking face many performance-degrading difficulties, such as objects leaving and re-entering the camera's field of view and occlusion. Consequently, action and expression recognition models built on top of object detection and tracking also struggle to extract per-object data. In addition, deep learning architectures that combine multiple models suffer performance loss from bottlenecks and insufficient optimization. This study proposes a video analysis system for per-object action and expression detection that applies a Re-ID technique based on single-linkage hierarchical clustering and a processing scheme that maximizes GPU memory throughput to a pipeline composed of a YOLOv5-based DeepSORT object tracker, a SlowFast-based action recognition model, a Torchreid-based re-identification model, and the AWS Rekognition facial expression recognition model. The proposed system achieves higher accuracy than Re-ID models that use simple metrics together with near-real-time processing performance, prevents tracking failures caused by objects leaving and re-entering the scene or by occlusion, and analyzes video efficiently by continuously linking per-object action and expression recognition results to the same identity.
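
As a rough illustration of the Re-ID step described above, the sketch below groups track embeddings into identities with single-linkage hierarchical clustering instead of matching them with a single pairwise threshold. The embeddings, feature dimension, and distance cut-off are placeholders, not values from the paper.

```python
# Group appearance embeddings of tracks into identities via single-linkage clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

track_embeddings = np.random.rand(12, 512)         # stand-in for per-track Torchreid features

dists = pdist(track_embeddings, metric="cosine")    # condensed pairwise distance matrix
tree = linkage(dists, method="single")              # single-linkage merge tree
identities = fcluster(tree, t=0.3, criterion="distance")  # cut the tree at a distance threshold
print(identities)  # tracks sharing a label are linked to the same object identity
```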

Emotion Recognition Using Eigenspace

  • Lee, Sang-Yun;Oh, Jae-Heung;Chung, Geun-Ho;Joo, Young-Hoon;Sim, Kwee-Bo
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • ICCAS 2002, Institute of Control, Robotics and Systems
    • /
    • pp.111.1-111
    • /
    • 2002
  • System configuration: 1) the first part handles image acquisition; 2) the second part processes the acquired facial image and creates the vector image. It locates the facial area from skin color by first finding the skin-color region with the highest weight in the eigenface (composed of eigenvectors), and then creates the eigenface vector image from the extracted facial area; 3) the third part is the recognition module.
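
A minimal eigenface-style sketch of parts 2 and 3 (building an eigenspace from face vectors and recognizing by projection onto it) might look like the following; the training data, image size, and emotion labels are stand-ins, not the paper's setup.

```python
# Eigenspace construction via PCA (SVD) and nearest-neighbour recognition in that space.
import numpy as np

def build_eigenspace(face_vectors, n_components=20):
    """face_vectors: (n_samples, n_pixels) flattened grayscale face crops."""
    mean_face = face_vectors.mean(axis=0)
    centered = face_vectors - mean_face
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # rows of vt are eigenfaces
    return mean_face, vt[:n_components]

def project(face_vector, mean_face, eigenfaces):
    return eigenfaces @ (face_vector - mean_face)

# Stand-in data: 50 flattened 64x64 face crops with random emotion labels.
train = np.random.rand(50, 64 * 64)
labels = np.random.choice(["happy", "sad", "neutral"], size=50)
mean_face, eigenfaces = build_eigenspace(train)
coords = np.array([project(v, mean_face, eigenfaces) for v in train])

query = np.random.rand(64 * 64)
q = project(query, mean_face, eigenfaces)
nearest = np.argmin(np.linalg.norm(coords - q, axis=1))  # nearest neighbour in eigenspace
print("predicted emotion:", labels[nearest])
```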


Energy-Efficient DNN Processor on Embedded Systems for Spontaneous Human-Robot Interaction

  • Kim, Changhyeon;Yoo, Hoi-Jun
    • Journal of Semiconductor Engineering
    • /
    • Vol. 2, No. 2
    • /
    • pp.130-135
    • /
    • 2021
  • Recently, deep neural networks (DNNs) are actively used for action control so that an autonomous system, such as a robot, can perform human-like behaviors and operations. Unlike recognition tasks, real-time operation is essential in action control, and remote learning on a server reached over a network is too slow. New learning techniques, such as reinforcement learning (RL), are needed to determine and select the correct robot behavior locally. In this paper, we propose an energy-efficient DNN processor with a LUT-based processing engine and a near-zero skipper. A CNN-based facial emotion recognition model and an RNN-based emotional dialogue generation model are integrated for a natural HRI system and tested with the proposed processor. It supports 1b to 16b variable weight bit precision, with 57.6% and 28.5% lower energy consumption than conventional MAC arithmetic units at 1b and 16b weight precision, respectively. The near-zero skipper also eliminates 36% of MAC operations and lowers energy consumption by 28% in facial emotion recognition tasks. Implemented in a 65 nm CMOS process, the proposed processor occupies a 1784×1784 µm² area and dissipates 0.28 mW and 34.4 mW for facial emotion recognition at 1 fps and 30 fps, respectively.
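
The near-zero skipping idea can be illustrated in software as dropping multiply-accumulate terms whose weights are at or near zero; the sketch below is a conceptual illustration with an assumed threshold, not a model of the actual hardware.

```python
# Conceptual near-zero skipping: skip MAC terms with negligible weights.
import numpy as np

def mac_with_near_zero_skip(weights, activations, threshold=1e-2):
    mask = np.abs(weights) > threshold          # keep only significant weights
    skipped = int(np.count_nonzero(~mask))      # MACs that would be skipped
    return float(np.dot(weights[mask], activations[mask])), skipped

weights = np.random.randn(1024) * (np.random.rand(1024) > 0.4)  # many exact zeros
activations = np.random.randn(1024)
out, skipped = mac_with_near_zero_skip(weights, activations)
print(f"skipped {skipped}/{weights.size} MACs, output {out:.4f}")
```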

Applying MetaHuman Facial Animation with MediaPipe: An Alternative Solution to Live Link iPhone.

  • Balgum Song;Arminas Baronas
    • International Journal of Advanced Smart Convergence
    • /
    • Vol. 13, No. 3
    • /
    • pp.191-198
    • /
    • 2024
  • This paper presents an alternative solution for applying MetaHuman facial animations using MediaPipe, providing a versatile option to the Live Link iPhone system. Our approach involves capturing facial expressions with various camera devices, including webcams, laptop cameras, and Android phones, processing the data for landmark detection, and applying these landmarks in Unreal Engine Blueprint to animate MetaHuman characters in real-time. Techniques such as the Eye Aspect Ratio (EAR) for blink detection and the One Euro Filter for data smoothing ensure accurate and responsive animations. Experimental results demonstrate that our system provides a cost-effective and flexible alternative for iPhone non-users, enhancing the accessibility of advanced facial capture technology for applications in digital media and interactive environments. This research offers a practical and adaptable method for real-time facial animation, with future improvements aimed at integrating more sophisticated emotion detection features.
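
A sketch of the EAR-based blink detection mentioned above is given below. The six landmark indices are a commonly used choice for MediaPipe FaceMesh's right eye and the blink threshold is an assumption; both should be checked against the mesh topology and data actually used.

```python
# Eye Aspect Ratio (EAR) blink detection on normalized face-mesh landmarks.
import numpy as np

RIGHT_EYE = [33, 160, 158, 133, 153, 144]  # assumed p1..p6 (outer, top x2, inner, bottom x2)

def eye_aspect_ratio(pts):
    """pts: (6, 2) array of eye landmark coordinates ordered p1..p6."""
    p1, p2, p3, p4, p5, p6 = pts
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def is_blinking(face_landmarks, image_w, image_h, threshold=0.2):
    # face_landmarks: e.g. results.multi_face_landmarks[0].landmark from FaceMesh,
    # whose .x/.y fields are normalized to [0, 1].
    pts = np.array([(face_landmarks[i].x * image_w, face_landmarks[i].y * image_h)
                    for i in RIGHT_EYE])
    return eye_aspect_ratio(pts) < threshold
```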

Healing Service Tailored to the User's Emotion Using an Ensemble Learning Algorithm and AI Facial Expression Recognition

  • 양성연;홍다혜;문재현
    • Korea Information Processing Society: Conference Proceedings
    • /
    • Korea Information Processing Society 2022 Fall Conference
    • /
    • pp.818-820
    • /
    • 2022
  • The keyword 'healing' is essential in the competitive society and culture of Korea. In addition, as time spent at home has increased due to COVID-19, demand for indoor healing services has grown. Therefore, this paper analyzes the user's facial expression so that people can receive various 'customized' healing services indoors and, based on this analysis, provides lighting, ASMR, video recommendation, and facial expression recording services. The user's expression is analyzed by extracting only the face from the user's photo through object detection and then applying an ensemble algorithm to the expression predictions of several CNN models.
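
A minimal sketch of the ensemble step (soft voting over several CNN expression classifiers on an already-cropped face) is given below; the model list, emotion labels, and input tensor are placeholders.

```python
# Soft-voting ensemble: average each model's softmax output and take the argmax.
import torch

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprise"]  # placeholder label set

def ensemble_predict(models, face_tensor):
    """models: iterable of nn.Modules returning logits; face_tensor: (1, C, H, W) cropped face."""
    with torch.no_grad():
        probs = [torch.softmax(m(face_tensor), dim=1) for m in models]
    avg = torch.stack(probs).mean(dim=0)   # average class probabilities across models
    return EMOTIONS[avg.argmax(dim=1).item()]
```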

The Effects of Induced Emotion on College Students' Processing of Negative Lexical Information

  • 김충명
    • Journal of Convergence for Information Technology
    • /
    • Vol. 10, No. 10
    • /
    • pp.318-324
    • /
    • 2020
  • This study examined, in normal and anxious groups of college students, how emotion type and the number of negative words affect task processing speed during semantic inference on predicates containing one or more negative words. A mixed repeated-measures design was applied, with three emotion types, two stimulus types, and three levels of negative-word count as within-subject variables, and anxiety level, classified by the Beck Anxiety Inventory, as a between-subject variable. Analysis of reaction times confirmed main effects of emotion type, stimulus type, and negative-word count, and an interaction of anxiety level × negative-word count was found. Task processing was more efficient under negative than positive emotion and with verbal rather than non-verbal stimuli, but as the number of negative words increased, responses diverged into faster responses in the normal group and delayed responses in the anxious group, appearing as an overall delay in negative-word processing times. Regardless of the induced emotion type and stimulus type, anxiety level was identified as a factor that delays task processing speed. Implications and limitations for future research are also discussed.
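
An illustrative analysis of a mixed design of this kind (a within-subject factor crossed with a between-subject anxiety factor on reaction time) might look like the sketch below; the file and column names are hypothetical, and the actual study crosses more within-subject factors than shown.

```python
# Mixed (within x between) ANOVA on reaction times, as a simplified illustration.
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per subject x condition.
df = pd.read_csv("rt_long_format.csv")  # columns: subject, emotion, anxiety_group, rt

aov = pg.mixed_anova(data=df, dv="rt", within="emotion",
                     subject="subject", between="anxiety_group")
print(aov)  # main effects and the within x between interaction
```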

Healing and Advertising Service Tailored to the User's Emotion Using AI Facial Expression Recognition

  • 김민식;정현우;문윤지;문재현
    • Korea Information Processing Society: Conference Proceedings
    • /
    • Korea Information Processing Society 2021 Fall Conference
    • /
    • pp.1160-1163
    • /
    • 2021
  • The DOOH (Digital Out-of-Home) advertising market is developing steadily, and its use cases are also increasing. In the advertising market, personalized services are actively being provided as technology develops. On the other hand, personalized services are difficult to provide in DOOH and rely only on personal information, not emotions. This study aims to build personalized DOOH services using AI facial expression recognition and to suggest a solution optimized for the interaction between the user and the service by providing healing content and advertisements.

Improving the Processing Speed and Robustness of Face Detection for a Psychological Robot Application

  • 류정탁;양진모;최영숙;박세현
    • Journal of the Korea Industrial Information Systems Research
    • /
    • Vol. 20, No. 2
    • /
    • pp.57-63
    • /
    • 2015
  • Compared with other emotion recognition technologies, facial expression recognition is contactless, non-coercive, and convenient. To apply vision technology to a psychological robot, the face region must be extracted accurately and quickly before expression recognition. In this paper, to improve face-region detection, the background is first removed from the image using YCbCr skin-color information, and the Haar-like feature method is then applied. By removing the background from the input image, face detection results were obtained with improved processing speed and robustness to the background.
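
A sketch of the two-stage pipeline described above (YCbCr skin-color masking to suppress the background, followed by Haar-cascade face detection) is given below; the skin-color bounds and cascade file are common OpenCV defaults, not the paper's exact parameters.

```python
# Background removal via a YCrCb skin-color mask, then Haar-cascade face detection.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(bgr_image):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Widely used approximate skin range in the Cr/Cb channels.
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    masked = cv2.bitwise_and(bgr_image, bgr_image, mask=skin_mask)
    gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Usage: faces = detect_faces(cv2.imread("frame.jpg"))  # list of (x, y, w, h) boxes
```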