• Title/Summary/Keyword: 분노 검출 (anger detection)

Search Results: 4

Multi-Time Window Feature Extraction Technique for Anger Detection in Gait Data

  • Beom Kwon; Taegeun Oh
    • Journal of the Korea Society of Computer and Information, v.28 no.4, pp.41-51, 2023
  • In this paper, we propose a multi-time window feature extraction technique for anger detection in gait data. Previous gait-based emotion recognition methods compute the pedestrian's stride length, the time taken for one stride, the walking speed, and the forward tilt angles of the neck and thorax, and then use the minimum, mean, and maximum over the entire interval as features. However, each feature does not always change uniformly over the entire interval; it sometimes changes only locally. Therefore, we propose a multi-time window feature extraction technique that captures both global and local behavior, from long-term to short-term windows. In addition, we propose an ensemble model consisting of multiple classifiers, each trained on features extracted from a different set of time windows. To verify the effectiveness of the proposed feature extraction technique and ensemble model, a public three-dimensional gait dataset was used. The simulation results demonstrate that the proposed ensemble model achieves the best performance on four evaluation metrics compared with machine learning models trained with existing feature extraction techniques.
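
    A minimal sketch of the windowing idea described in the abstract, assuming the gait sequence has already been summarized into per-frame signals; the window counts, signal names, and statistics below are illustrative assumptions, not the authors' exact configuration.

    ```python
    # Illustrative multi-time window feature extraction: statistics are collected
    # over the whole interval (global) and over progressively shorter segments (local).
    import numpy as np

    def window_stats(signal: np.ndarray, num_windows: int) -> list[float]:
        """Split a 1-D signal into `num_windows` equal segments and collect
        the min, mean, and max of each segment."""
        feats = []
        for segment in np.array_split(signal, num_windows):
            feats.extend([segment.min(), segment.mean(), segment.max()])
        return feats

    def extract_multi_window_features(signals: dict[str, np.ndarray],
                                      window_counts=(1, 2, 4)) -> np.ndarray:
        """1 window = the entire interval; 2 and 4 windows capture
        increasingly local behavior of each signal."""
        feats = []
        for _, sig in sorted(signals.items()):
            for k in window_counts:
                feats.extend(window_stats(sig, k))
        return np.asarray(feats)

    # Example usage with synthetic per-frame gait signals.
    rng = np.random.default_rng(0)
    signals = {
        "stride_length": rng.normal(0.7, 0.05, size=120),
        "walking_speed": rng.normal(1.3, 0.10, size=120),
        "neck_tilt_deg": rng.normal(5.0, 1.00, size=120),
    }
    x = extract_multi_window_features(signals)
    print(x.shape)  # 3 signals * (1 + 2 + 4) windows * 3 stats = 63 features
    ```

    An ensemble in the spirit of the abstract could then train one classifier per window configuration and combine their predictions, e.g., by majority voting.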

Fuzzy Model-Based Emotion Recognition Using Color Image (퍼지 모델을 기반으로 한 컬러 영상에서의 감성 인식)

  • Joo, Young-Hoon; Jeong, Keun-Ho
    • Journal of the Korean Institute of Intelligent Systems, v.14 no.3, pp.330-335, 2004
  • In this paper, we propose a technique for recognizing human emotion from color images. First, we extract the skin color region from the color image using the HSI color model. Second, we extract the face region using the Eigenface technique. Third, we locate the facial feature points (eyebrows, eyes, nose, mouth) in the face image and build a fuzzy model that recognizes human emotions (surprise, anger, happiness, sadness) from the structural correlations among these feature points. We then infer the emotion from the fuzzy model. Finally, we demonstrate the effectiveness of the proposed method through experiments.
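
    A minimal sketch of the first step the abstract describes, HSI-based skin-region extraction. The RGB-to-HSI conversion follows the standard formulas; the skin thresholds are illustrative placeholders, not values from the paper.

    ```python
    # RGB -> HSI conversion and a simple hue/saturation threshold for skin pixels.
    import numpy as np

    def rgb_to_hsi(img: np.ndarray):
        """Convert an RGB image (floats in [0, 1]) to H (degrees), S, and I channels."""
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        eps = 1e-8
        intensity = (r + g + b) / 3.0
        saturation = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)
        num = 0.5 * ((r - g) + (r - b))
        den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
        theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
        hue = np.where(b > g, 360.0 - theta, theta)
        return hue, saturation, intensity

    def skin_mask(img: np.ndarray) -> np.ndarray:
        """Keep skin-like pixels; the H/S ranges here are assumptions for illustration."""
        h, s, _ = rgb_to_hsi(img)
        return ((h < 50.0) | (h > 340.0)) & (s > 0.1) & (s < 0.6)
    ```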

Maskinator: An Efficient Mask Detection Program (Maskinator: 효율적인 마스크 착용 여부 판단 프로그램)

  • Ye, Andrew Sangwoo; Park, Junho; Kim, Hosook
    • Proceedings of the Korean Society of Computer Information Conference, 2021.07a, pp.195-198, 2021
  • The COVID-19 pandemic has had a rapid and enormous impact on our daily lives. Wearing a mask has become the new normal, and many service providers now require customers to wear one in order to use their services; public buses are among them. According to several news reports, there have been multiple incidents in which a passenger assaulted a bus driver after being asked to put on a mask. We reasoned that if a machine identified people not wearing masks and asked them to put one on, the irrational anger directed at bus drivers would decrease. Therefore, this paper proposes a method that can quickly and accurately determine whether a mask is being worn, using basic machine learning packages such as Keras. The proposed method is a mask-wearing detection program that runs on a CPU alone, without the need for a high-performance computer or graphics card; in addition, we implemented a website that can send notifications and a voice warning system. The method achieved over 99.5% accuracy on the test dataset and runs at about 6 fps on a CPU rather than a GPU, so it can be used in real-world settings.
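
    A rough sketch of the kind of small Keras CNN the abstract alludes to for binary mask/no-mask classification on a CPU. The layer sizes, input shape, and directory layout are assumptions for illustration, not the authors' actual model.

    ```python
    # A compact binary classifier (mask vs. no mask) built with tf.keras.
    import tensorflow as tf

    def build_mask_classifier(input_shape=(96, 96, 3)) -> tf.keras.Model:
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=input_shape),
            tf.keras.layers.Conv2D(16, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(1, activation="sigmoid"),  # P(mask worn)
        ])
        model.compile(optimizer="adam",
                      loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model

    # Hypothetical training call on a folder with mask/ and no_mask/ subdirectories.
    # train_ds = tf.keras.utils.image_dataset_from_directory(
    #     "data/", image_size=(96, 96), batch_size=32, label_mode="binary")
    # build_mask_classifier().fit(train_ds, epochs=10)
    ```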

Recognition of Human Facial Expressions using Optical Flow of Feature Regions (얼굴 특징영역상의 광류를 이용한 표정 인식)

  • Lee Mi-Ae; Park Ki-Soo
    • Journal of KIISE: Software and Applications, v.32 no.6, pp.570-579, 2005
  • Facial expression recognition technology, which has potential applications in various fields, is being applied to man-machine interface development, human identification, and the restoration of facial expressions with virtual models. Using sequential facial images, this study proposes a simpler method for detecting human facial expressions such as happiness, anger, surprise, and sadness. Moreover, the proposed method can detect facial expressions even when the motion in the image sequence is not rigid. We identify the face and the elements of facial expression, and then estimate the feature regions of these elements using information about color, size, and position. Next, the direction patterns of the feature regions of each element are determined from optical flow estimated with gradient methods. Using the direction model proposed in this study, each direction pattern is matched against the model, and the method identifies the facial expression with the minimum combined matching score. The experiments verify the validity of the proposed method.
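
    A hedged sketch of the flow-direction idea: compute dense optical flow inside a facial feature region across two consecutive frames and summarize it as a histogram of motion directions. Farneback flow stands in for the paper's gradient-based estimator, and the region box and bin count are assumptions.

    ```python
    # Summarize motion in a feature region (e.g., an eyebrow or the mouth)
    # as a normalized histogram of optical-flow directions.
    import cv2
    import numpy as np

    def direction_pattern(prev_gray: np.ndarray, next_gray: np.ndarray,
                          box: tuple[int, int, int, int], bins: int = 8) -> np.ndarray:
        """Return a normalized histogram of flow directions inside `box` = (x, y, w, h)."""
        x, y, w, h = box
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray[y:y + h, x:x + w], next_gray[y:y + h, x:x + w],
            None, 0.5, 3, 15, 3, 5, 1.2, 0)
        angles = np.arctan2(flow[..., 1], flow[..., 0])  # radians in (-pi, pi]
        hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
        return hist / max(hist.sum(), 1)
    ```

    A facial expression could then be chosen by matching such patterns (one per feature region) against per-expression direction models and selecting the expression with the smallest total matching score, in the spirit of the abstract.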