• Title/Summary/Keyword: Face Expression Detection


A Study on Non-Contact Care Robot System through Deep Learning

  • Hyun-Sik Ham;Sae Jun Ko
    • Journal of the Korea Society of Computer and Information / v.28 no.12 / pp.33-40 / 2023
  • As South Korea enters a super-aging society, the demand for elderly welfare services has been steadily rising, yet the current shortage of welfare personnel has emerged as a social issue. To address this challenge, there is active research on elderly care robots designed to mitigate the social isolation of the elderly and provide emergency contact capabilities in critical situations. Nonetheless, these functionalities require direct user contact, which is a limitation of conventional elderly care robots. In this paper, we propose a care robot system capable of interacting with users without direct physical contact, built on commercialized elderly care robots and cameras. We equipped the care robot with an edge device running facial expression recognition and action recognition models, which were trained and validated on publicly available data. Experimental results demonstrate high accuracy, with facial expression recognition reaching 96.5% and action recognition reaching 90.9%, at inference times of 50 ms and 350 ms, respectively. These findings confirm that the proposed system offers efficient and accurate facial and action recognition, enabling seamless interaction even in non-contact situations.
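
The paper's pipeline (camera, face detection, expression and action models on an edge device) is not given as code here; the snippet below is a minimal sketch of what such non-contact expression inference could look like, assuming an OpenCV Haar-cascade face detector and a hypothetical pre-trained expression classifier exported to ONNX. The model file `expression.onnx`, its 48x48 input size, and the label set are illustrative assumptions, not the authors' artifacts.

```python
# Minimal sketch: non-contact facial expression inference on an edge device.
# "expression.onnx", the input size, and the label set are placeholders.
import cv2
import numpy as np
import onnxruntime as ort

LABELS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
session = ort.InferenceSession("expression.onnx")   # hypothetical model file
input_name = session.get_inputs()[0].name

def classify_expressions(frame_bgr):
    """Detect faces in a BGR frame and return (box, label) pairs."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))  # assumed input size
        blob = face.astype(np.float32)[None, None, :, :] / 255.0
        scores = session.run(None, {input_name: blob})[0][0]
        results.append(((x, y, w, h), LABELS[int(np.argmax(scores))]))
    return results

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)          # robot- or edge-mounted camera
    ok, frame = cap.read()
    if ok:
        print(classify_expressions(frame))
    cap.release()
```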

Performance Comparison of Skin Color Detection Algorithms by the Changes of Backgrounds (배경의 변화에 따른 피부색상 검출 알고리즘의 성능 비교)

  • Jang, Seok-Woo
    • Journal of the Korea Society of Computer and Information / v.15 no.3 / pp.27-35 / 2010
  • Accurately extracting skin color regions is very important in areas such as face recognition and tracking, facial expression recognition, adult image identification, and health care. In this paper, we evaluate the performance of several skin color detection algorithms in indoor environments while varying the distance between the camera and the object as well as the background color of the object. The distance ranges from 60 cm to 120 cm, and the background colors are white, black, orange, pink, and yellow. The algorithms evaluated are the Peer algorithm, NNYUV, NNHSV, LutYUV, and the Kimset algorithm. The experimental results show that the NNHSV, NNYUV, and LutYUV algorithms are stable, while the other algorithms are somewhat sensitive to changes of background. We expect the comparative results of this paper to be useful when developing new skin color extraction algorithms that are robust to dynamic real environments.
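
The abstract compares several skin color detectors but does not reproduce them. As a point of reference, the sketch below shows two simple approaches of the kind being compared: an explicit RGB rule in the spirit of the Peer-style detectors and a fixed-range HSV threshold. The exact thresholds are illustrative assumptions, not the paper's tuned values, and they will be sensitive to lighting and background, which is precisely what the paper measures.

```python
# Minimal sketch of simple skin color detection, assuming OpenCV and NumPy.
# Thresholds are illustrative assumptions, not values from the paper.
import cv2
import numpy as np

def skin_mask_rgb_rule(img_bgr):
    """Explicit RGB rule in the spirit of Peer-style detectors."""
    b, g, r = [c.astype(np.int32) for c in cv2.split(img_bgr)]
    spread = (np.max(img_bgr, axis=2).astype(np.int32) -
              np.min(img_bgr, axis=2).astype(np.int32))
    mask = ((r > 95) & (g > 40) & (b > 20) & (spread > 15) &
            (np.abs(r - g) > 15) & (r > g) & (r > b))
    return (mask * 255).astype(np.uint8)

def skin_mask_hsv(img_bgr, lower=(0, 40, 60), upper=(25, 180, 255)):
    """Fixed-range HSV threshold; the bounds are assumed, not from the paper."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, np.array(lower, np.uint8), np.array(upper, np.uint8))

if __name__ == "__main__":
    img = cv2.imread("sample.jpg")               # hypothetical test image
    ratio = skin_mask_rgb_rule(img).mean() / 255
    print(f"skin pixel ratio (RGB rule): {ratio:.3f}")
```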

Classification of Three Different Emotion by Physiological Parameters

  • Jang, Eun-Hye;Park, Byoung-Jun;Kim, Sang-Hyeob;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea / v.31 no.2 / pp.271-279 / 2012
  • Objective: This study classified three different emotional states (boredom, pain, and surprise) using physiological signals. Background: Emotion recognition studies have tried to recognize human emotion from physiological signals, which is important for applying emotion detection to human-computer interaction systems. Method: 122 college students participated in this experiment. Three different emotional stimuli were presented to participants, and physiological signals, i.e., EDA (Electrodermal Activity), SKT (Skin Temperature), PPG (Photoplethysmogram), and ECG (Electrocardiogram), were measured for 1 minute as a baseline and for 1~1.5 minutes during the emotional state. The signals were analyzed over 30-second windows of the baseline and of the emotional state, and 27 features were extracted. Statistical analysis for emotion classification was done by DFA (discriminant function analysis) in SPSS 15.0, using the difference values obtained by subtracting the baseline values from the emotional-state values. Results: Physiological responses during the emotional states differed significantly from baseline, and the accuracy of emotion classification was 84.7%. Conclusion: Our study showed that emotions can be classified from various physiological signals. However, future work is needed to obtain additional signals from other modalities such as facial expression, face temperature, or voice to improve the classification rate, and to examine the stability and reliability of this result compared with the accuracy of emotion classification using other algorithms. Application: This work gives emotion recognition studies a better chance of recognizing various human emotions from physiological signals and can be applied to human-computer interaction systems for emotion recognition. It can also be useful for developing emotion theory, profiling emotion-specific physiological responses, and establishing the basis for emotion recognition systems in human-computer interaction.
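
The discriminant analysis in this study was run in SPSS; purely as an illustration of the same idea, the sketch below uses scikit-learn's LinearDiscriminantAnalysis on baseline-corrected feature vectors. The feature matrix, labels, and cross-validation setup are placeholders for illustration, not the authors' data or exact procedure.

```python
# Minimal sketch of discriminant-analysis emotion classification,
# assuming scikit-learn and a (participants x 27) feature matrix.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 122 participants x 27 physiological features,
# measured at baseline and during the emotional state (assumed shapes).
baseline = rng.normal(size=(122, 27))
emotional = rng.normal(size=(122, 27))
labels = rng.choice(["boredom", "pain", "surprise"], size=122)

# As in the study, classify on the difference from baseline.
features = emotional - baseline

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, features, labels, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")   # ~chance on random data
```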

The Effect on the Contents of Self-Disclosure Activities using Ubiquitous Home Robots (자기노출 심리를 이용한 유비쿼터스 로봇 콘텐츠의 효과)

  • Kim, Su-Jung;Han, Jeong-Hye
    • Journal of The Korean Association of Information Education / v.12 no.1 / pp.57-63 / 2008
  • This study uses identification, one of the critical components of psychological mechanisms, which allows a person to substitute another self in response to the need for self-expression (disclosure) and creation. The study aims to improve educational effects by increasing the sense of virtual reality and attention. Computer-based contents were developed and converted for the robot, the contents combined each student's photo with an avatar using automatic loading, and each of the contents was then applied to the students. The results indicated that the identification-based contents had significant effects: the contents affected students' attention, but not their academic achievement. The study thus confirmed the effect of applying identification through an educational robot. We suggest that improving augmented-virtuality techniques such as face detection and sensitive interaction may lead to more specific directions for educational effects in further research.

Toward an integrated model of emotion recognition methods based on reviews of previous work (정서 재인 방법 고찰을 통한 통합적 모델 모색에 관한 연구)

  • Park, Mi-Sook;Park, Ji-Eun;Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.14 no.1 / pp.101-116 / 2011
  • Current research on emotion detection classifies emotions using information from facial, vocal, and bodily expressions, or from physiological responses. This study reviewed three representative emotion recognition methods, each grounded in a psychological theory of emotion. First, a literature review was conducted on emotion recognition methods based on facial expressions, which are supported by Darwin's theory. Second, emotion recognition methods based on physiological changes, which rely on James' theory, were reviewed. Last, emotion recognition based on multimodality (i.e., combinations of signals from the face, dialogue, posture, or the peripheral nervous system), supported by both Darwin's and James' theories, was reviewed. In each part, research findings were examined along with the theoretical background on which each method relies. This review argues for an integrated model of emotion recognition methods to advance the field. The integrated model suggests that emotion recognition methods need to include other physiological signals such as brain responses or face temperature, be based on a multidimensional model, and take cognitive appraisal factors during emotional experience into consideration.
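
To make the proposed integration slightly more concrete, the sketch below shows one simple way multimodal signals could be combined: weighted late fusion of per-modality class probabilities. The modalities, weights, and class set are assumptions for illustration; the review itself does not prescribe a specific fusion rule.

```python
# Minimal sketch of weighted late fusion across modalities (assumed setup).
import numpy as np

CLASSES = ["boredom", "pain", "surprise"]

def fuse(probabilities: dict[str, np.ndarray], weights: dict[str, float]) -> str:
    """Combine per-modality class probabilities into a single decision."""
    total = sum(weights[m] * probabilities[m] for m in probabilities)
    return CLASSES[int(np.argmax(total))]

# Hypothetical outputs of separate face / voice / physiology classifiers.
probs = {
    "face":       np.array([0.2, 0.1, 0.7]),
    "voice":      np.array([0.3, 0.3, 0.4]),
    "physiology": np.array([0.1, 0.6, 0.3]),
}
weights = {"face": 0.4, "voice": 0.3, "physiology": 0.3}
print(fuse(probs, weights))   # -> "surprise"
```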

A Study on Human-Robot Interaction Trends Using BERTopic (BERTopic을 활용한 인간-로봇 상호작용 동향 연구)

  • Jeonghun Kim;Kee-Young Kwahk
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.185-209 / 2023
  • With the advent of the 4th industrial revolution, various technologies have received much attention. Technologies related to the 4th industrial revolution include the Internet of Things (IoT), big data, artificial intelligence, virtual reality (VR), 3D printers, and robotics, and these technologies are often converged. In particular, the robotics field is being combined with technologies such as big data, artificial intelligence, VR, and digital twins. Accordingly, much research using robotics is being conducted and applied to distribution, airports, hotels, restaurants, and transportation. In this situation, research on human-robot interaction is attracting attention, but it has not yet reached a level that satisfies users. However, research on robots capable of perfect communication is steadily being conducted, and such robots are expected to be able to replace human emotional labor. It is therefore necessary to discuss whether current human-robot interaction technology can be applied to business. To this end, this study first examines trends in human-robot interaction technology and then compares LDA (Latent Dirichlet Allocation) topic modeling with BERTopic topic modeling. We found that studies from 1992 to 2002 discussed the concept of human-robot interaction and basic interaction. From 2003 to 2012, many studies on social expression were conducted, along with studies related to judgment such as face detection and recognition. In the studies from 2013 to 2022, service topics such as elderly nursing, education, and autism treatment appeared, and research on social expression continued; however, the technology does not yet seem to have reached a level that can be applied to business. Comparing the two approaches, we confirmed that BERTopic is superior to LDA for this kind of trend analysis.
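
The comparison in this study is at the level of topic quality on an HRI literature corpus. As a minimal sketch of how the two approaches are typically run side by side, the snippet below fits BERTopic and scikit-learn's LDA on the same document list; the 20 Newsgroups sample stands in for the HRI abstracts, and the LDA topic count is an arbitrary assumption. A real comparison would also need coherence or diversity metrics, which are omitted here.

```python
# Minimal sketch: fitting BERTopic and an LDA baseline on the same corpus.
# The 20 Newsgroups sample is a placeholder for the paper's HRI abstracts.
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = fetch_20newsgroups(subset="train",
                          remove=("headers", "footers", "quotes")).data[:1000]

# BERTopic: embeds documents, clusters them, and labels clusters with c-TF-IDF.
topic_model = BERTopic(verbose=False)
topics, probs = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head())

# LDA baseline: bag-of-words probabilistic topic model with a fixed topic count.
vectorizer = CountVectorizer(stop_words="english", max_features=5000)
dtm = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=10, random_state=0)  # assumed k
lda.fit(dtm)
terms = vectorizer.get_feature_names_out()
for k, comp in enumerate(lda.components_[:3]):
    top_words = [terms[i] for i in comp.argsort()[-5:][::-1]]
    print(f"LDA topic {k}: {top_words}")
```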