• Title/Summary/Keyword: 감정형 로봇 (emotional robots)

Design and Implementation of the ChamCham and WordChain Play Robot for Reduction of Symptoms of Depressive Disorder Patient (우울증 진단 환자의 증상 완화를 위한 참참참, 끝말잇기 놀이 로봇 설계 및 구현)

  • Eom, Hyun-Young;Seo, Dong-Yoon;Lee, Gyeong-Min;Lee, Seong-Ung;Choi, Ji-Hwan;Lee, Kang-Hee
    • The Journal of the Convergence on Culture Technology / v.6 no.2 / pp.561-566 / 2020
  • We propose the design and implementation of a ChamCham and word-chain play robot for symptom relief in patients diagnosed with depression. The main symptom of depression is the loss of interest and enjoyment in life. The patient checks the emotion analysis the robot produces and then plays ChamCham or word-chain with it. After the play, the robot analyzes the emotions in the patient's facial expressions and delivers a report confirming the effect. A simple game cannot completely cure a patient diagnosed with depression, but it can contribute to symptom relief through gradual use. The play robot is built on Q.bo One, an interactive open-source robot from Thecorpora. Q.bo One's system captures the user's face, sends the image to an Azure server, and compares the emotion analysis before and after play against the accumulated data. The play functions are implemented in Python on Raspbian, the OS of Q.bo One, and interact with external sensors. The purpose of this paper is to help relieve the symptoms of depressive patients in a relatively short time with a play robot.
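
The before/after comparison this abstract describes can be sketched without the robot hardware. A minimal sketch, assuming the emotion-analysis service returns per-emotion scores as a dict; the `happiness`/`sadness`/`neutral` labels and the weighting are our assumptions, not the paper's:

```python
def positive_affect(scores):
    """Collapse an emotion-score dict into one positive-affect number.

    `scores` maps emotion label -> probability, the shape of result an
    emotion-analysis service (such as the Azure endpoint the paper
    mentions) might return; labels and weights here are assumptions.
    """
    return scores.get("happiness", 0.0) + 0.5 * scores.get("neutral", 0.0)

def play_effect(before, after):
    """Change in positive affect between the pre- and post-play analyses."""
    return positive_affect(after) - positive_affect(before)

before = {"sadness": 0.7, "happiness": 0.1, "neutral": 0.2}
after  = {"sadness": 0.3, "happiness": 0.5, "neutral": 0.2}
print(round(play_effect(before, after), 2))  # 0.4
```

A positive `play_effect` would correspond to the symptom-relief trend the paper looks for in the accumulated data.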

Analyzing the Acoustic Elements and Emotion Recognition from Speech Signal Based on DRNN (음향적 요소분석과 DRNN을 이용한 음성신호의 감성 인식)

  • Sim, Kwee-Bo;Park, Chang-Hyun;Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.1 / pp.45-50 / 2003
  • Recently, robot technology has developed remarkably. Emotion recognition is necessary to make an intimate robot. This paper presents a simulator, and simulation results, that recognize and classify emotions by learning pitch patterns. Because pitch alone is not sufficient for recognizing emotion, we added further acoustic elements and analyzed the relation between emotion and those elements. The simulator is composed of feature extraction and a DRNN (Dynamic Recurrent Neural Network), the learning algorithm for the pitch patterns.
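
The pitch-pattern classifier can be illustrated with a toy forward pass. A minimal sketch of an Elman-style recurrent net over a pitch contour; the layer sizes, untrained random weights, and four-class output are illustrative assumptions, and the paper's actual training step is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Elman-style recurrent net over a pitch (F0) contour. Sizes and
# untrained random weights are illustrative; the paper's DRNN would be
# trained on labelled pitch patterns.
W_in  = 0.5 * rng.normal(size=(8, 1))   # pitch value -> hidden
W_h   = 0.5 * rng.normal(size=(8, 8))   # hidden recurrence
W_out = 0.5 * rng.normal(size=(4, 8))   # hidden -> 4 emotion classes

def classify_pitch(contour):
    """Run the contour through the recurrent layer, return class probs."""
    h = np.zeros(8)
    for f0 in contour:                   # one normalized pitch value per frame
        h = np.tanh(W_in @ np.array([f0]) + W_h @ h)
    logits = W_out @ h
    return np.exp(logits) / np.exp(logits).sum()   # softmax

probs = classify_pitch([0.2, 0.5, 0.9, 0.7])
print(probs.shape)  # (4,)
```

The recurrent state `h` is what lets the network respond to the shape of the contour rather than individual pitch values, which is the point of using a DRNN here.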

An Exploratory study on Student-Intelligent Robot Teacher relationship recognized by Middle School Students (중학생이 인식하는 학습자-지능형로봇 교사의 관계 형성 요인)

  • Lee, Sang-Soog;Kim, Jinhee
    • Journal of Digital Convergence / v.18 no.4 / pp.37-44 / 2020
  • This study explored the Intelligent Robot Teacher (IRT)-student relationship by examining the relationship factors perceived by middle school students. We developed questionnaires based on an existing teacher-student relationship scale and conducted an online survey of 283 first-year middle school students. The collected data were analyzed using exploratory factor analysis in SPSS 23 and confirmatory factor analysis in Amos 21. The findings identified four factors of the IRT-student relationship: "trust", "competence", "emotional exchange", and "tolerance". The study can inform ways to enhance educationally meaningful student-IRT interaction and teaching methods using intelligent robots (IRs), and contributes to the understanding and development of various services using IRs. Building on these findings, future studies should investigate how various education stakeholders (teachers, parents, etc.) perceive IRTs in order to improve human-robot interaction in the education field.

Development of FACS-based Android Head for Emotional Expressions (감정표현을 위한 FACS 기반의 안드로이드 헤드의 개발)

  • Choi, Dongwoon;Lee, Duk-Yeon;Lee, Dong-Wook
    • Journal of Broadcast Engineering / v.25 no.4 / pp.537-544 / 2020
  • This paper proposes an android robot head based on the facial action coding system (FACS) and the generation of emotional expressions by FACS. The term android robot refers to robots with a human-like appearance; these robots have artificial skin and muscles. To express emotions, the location and number of artificial muscles had to be determined, so the motions of the human face were analyzed anatomically using FACS. In FACS, expressions are composed of action units (AUs), which served as the basis for determining the location and number of artificial muscles in the robot. The android head developed in this study has servo motors and wires corresponding to 30 artificial muscles and is covered with artificial skin to produce facial expressions. Spherical joints and springs were used to develop the micro-eyeball structures, and the arrangement of the 30 servo motors was based on an efficient wire-routing design. The head has 30 DOFs and can express 13 basic emotions. The recognition rate of these basic emotional expressions was evaluated by spectators at an exhibition.
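
The AU-to-actuator idea can be sketched as a lookup from FACS action units to servo targets. The servo names and angles below are hypothetical; only the AU numbering follows FACS:

```python
# Hypothetical AU -> servo mapping. The paper's head drives 30 wire
# actuators; which motors realize each AU, and the target angles,
# are our assumptions for illustration.
AU_TO_SERVOS = {
    "AU1":  {"inner_brow_l": 40, "inner_brow_r": 40},   # inner brow raiser
    "AU12": {"lip_corner_l": 55, "lip_corner_r": 55},   # lip corner puller
    "AU26": {"jaw": 30},                                # jaw drop
}

def expression_to_commands(aus, intensity=1.0):
    """Merge the servo targets of the AUs that compose one expression."""
    commands = {}
    for au in aus:
        for servo, angle in AU_TO_SERVOS[au].items():
            commands[servo] = round(angle * intensity, 1)
    return commands

print(expression_to_commands(["AU12", "AU26"], intensity=0.5))
# {'lip_corner_l': 27.5, 'lip_corner_r': 27.5, 'jaw': 15.0}
```

Composing expressions from AU sets is what lets 30 actuators cover 13 basic emotions: each emotion is a different AU combination, not a separate mechanism.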

Implementation of Intelligent and Human-Friendly Home Service Robot (인간 친화적인 가정용 지능형 서비스 로봇 구현)

  • Choi, Woo-Kyung;Kim, Seong-Joo;Kim, Jong-Soo;Jeo, Jae-Yong;Jeon, Hong-Tae
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.6 / pp.720-725 / 2004
  • Robot systems have been applied in manufacturing and industrial fields to reduce the need for human presence in dangerous and/or repetitive tasks. Recently, however, robot applications have been moving from industrial fields into human life. The ultimate goal is an intelligent robot that can understand what humans say, learn by itself, and have internal emotion. Home service robots, for example, can provide functions such as security, housework, entertainment, education, and secretarial work. To provide these various functions, home robots need to recognize the human's requirements and the environment, and artificial intelligence technology is indispensable for their implementation. The robot system implemented in this paper takes data from several sensors and fuses them to recognize environmental information. It can also select a behavior appropriate to the environment using soft computing methods. Each behavior is composed of an intuitive motion and a sound so that humans can readily understand it.
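
The fuse-then-select loop the abstract describes might look like this in outline; the sensor names, weights, and behavior triggers are invented for illustration, and a simple maximum stands in for the paper's soft-computing method:

```python
def fuse(readings, weights):
    """Weighted fusion of normalized per-sensor readings into one score per cue."""
    cues = next(iter(readings.values()))
    return {cue: sum(readings[s][cue] * w for s, w in weights.items())
            for cue in cues}

def select_behavior(cues):
    """Pick the behavior whose trigger cue is strongest (a stand-in for
    the paper's soft-computing selection)."""
    triggers = {"person_near": "greet", "noise": "alert", "dark": "patrol"}
    return triggers[max(cues, key=cues.get)]

# Hypothetical normalized readings from two sensors for three cues.
readings = {
    "camera": {"person_near": 0.9, "noise": 0.0, "dark": 0.2},
    "mic":    {"person_near": 0.4, "noise": 0.3, "dark": 0.0},
}
weights = {"camera": 0.7, "mic": 0.3}
cues = fuse(readings, weights)
print(select_behavior(cues))  # greet
```

The fusion step is what the abstract means by recognizing environmental information from several sensors before a behavior is chosen.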

Development of Context Awareness and Service Reasoning Technique for Handicapped People (멀티 모달 감정인식 시스템 기반 상황인식 서비스 추론 기술 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.1 / pp.34-39 / 2009
  • Human emotion is subjective and impulsive, and it unconsciously expresses intentions and needs. It therefore carries contextual information about users of ubiquitous computing environments or intelligent robot systems. Indicators of the user's emotion include facial images, voice signals, biological signal spectra, and so on. In this paper, to increase the convenience and efficiency of emotion recognition, we generate separate facial and voice emotion-recognition results from facial images and voice. We also extract the best-fitting features from image and sound to improve the recognition rate, and implement a multi-modal emotion recognition system based on feature fusion. Finally, using the recognition results, we propose a service reasoning method for the ubiquitous computing environment based on a Bayesian network and a ubiquitous context scenario.
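
The multi-modal combination step can be sketched as a naive-Bayes-style fusion of the two per-modality posteriors; the emotion labels, scores, and uniform prior are assumptions for illustration:

```python
import numpy as np

def fuse_posteriors(face_probs, voice_probs, prior):
    """Naive-Bayes style decision fusion: multiply the per-modality
    posteriors, divide out the prior (counted twice), renormalize."""
    joint = np.asarray(face_probs) * np.asarray(voice_probs) / np.asarray(prior)
    return joint / joint.sum()

emotions = ["happy", "sad", "angry"]          # assumed label set
face  = [0.6, 0.3, 0.1]                       # facial-image classifier output
voice = [0.5, 0.2, 0.3]                       # voice-signal classifier output
fused = fuse_posteriors(face, voice, [1/3, 1/3, 1/3])
print(emotions[int(np.argmax(fused))])  # happy
```

The fused distribution, rather than either modality alone, is what a downstream Bayesian network would consume as its emotion evidence node.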

Life-like Facial Expression of Mascot-Type Robot Based on Emotional Boundaries (감정 경계를 이용한 로봇의 생동감 있는 얼굴 표정 구현)

  • Park, Jeong-Woo;Kim, Woo-Hyun;Lee, Won-Hyong;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.4 no.4 / pp.281-288 / 2009
  • Nowadays, many robots have evolved to imitate human social skills so that sociable interaction with humans is possible. Socially interactive robots require abilities different from those of conventional robots. For instance, human-robot interaction is accompanied by emotion, much like human-human interaction, so a robot's emotional expression is very important for humans. This is particularly true for facial expressions, which play an important role among non-verbal forms of communication. In this paper, we introduce a method of creating lifelike facial expressions in robots using variations in the affect values that constitute the robot's emotions, based on emotional boundaries. The proposed method was examined in experiments with two facial robot simulators.
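
One way to read "variation of affect values based on emotional boundaries" is easing toward a target expression with small perturbations that are clamped inside the current emotion's boundary. A sketch under that reading, with invented boundary values:

```python
import random

# Assumed affect boundaries per emotion; the paper's actual affect
# space and boundary values are not specified here.
BOUNDARIES = {"happy": (0.4, 1.0), "neutral": (-0.2, 0.4)}

def step_affect(current, target, emotion, rng, rate=0.2, jitter=0.05):
    """One update: ease toward the target, add lifelike micro-motion,
    and clamp so the affect value never leaves the emotion's boundary."""
    lo, hi = BOUNDARIES[emotion]
    nxt = current + rate * (target - current)   # ease toward the target
    nxt += rng.uniform(-jitter, jitter)         # small random variation
    return min(max(nxt, lo), hi)

rng = random.Random(0)
a = 0.0
for _ in range(30):
    a = step_affect(a, 0.9, "happy", rng)
print(0.4 <= a <= 1.0)  # True
```

The jitter is what keeps the face from looking frozen between expression changes, while the boundary clamp keeps the perturbed expression recognizable as the intended emotion.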

The Ethics of Robots and Humans in the Post-Human Age (포스트휴먼 시대의 로봇과 인간의 윤리)

  • You, Eun-Soon;Cho, Mi-Ra
    • The Journal of the Korea Contents Association / v.18 no.3 / pp.592-600 / 2018
  • As robots evolve into intelligent machines that can replace even humans' mental or emotional labor, the 'robot ethics' needed in the relationship between humans and robots has become a crucial issue. The purpose of this study is to consider the ethics of robots and humans that is essential in the post-human age. The main contents are as follows. First, starting from cases of ethics software intended to make robots practice ethics, the authors ask whether robots can really judge right from wrong using only ethics codes that are entered by force. Second, robot ethics should consider the unethical behavior that might arise from learning data that internalizes human bias, and should also reflect ethical differences between countries and cultures, that is, ethical relativism. Third, robot ethics should not be just ethics codes intended for robots but should reflect a new concept of 'human ethics' that allows humans and robots to coevolve.

Analysis of Livestock Vocal Data using Lightweight MobileNet (경량화 MobileNet을 활용한 축산 데이터 음성 분석)

  • Se Yeon Chung;Sang Cheol Kim
    • Smart Media Journal / v.13 no.6 / pp.16-23 / 2024
  • Pigs express their reactions to their environment and health status through a variety of sounds, such as grunting, coughing, and screaming. Given the significance of pig vocalizations, their study has recently become a vital source of data for livestock industry workers. To facilitate this, we propose a lightweight deep learning model based on MobileNet that analyzes pig vocal patterns to distinguish pig voices from farm noise and differentiate between vocal sounds and coughing. This model was able to accurately identify pig vocalizations amidst a variety of background noises and cough sounds within the pigsty. Test results demonstrated that this model achieved a high accuracy of 98.2%. Based on these results, future research is expected to address issues such as analyzing pig emotions and identifying stress levels.
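
MobileNet's lightness comes from the depthwise separable convolution; a numpy sketch of that single building block (the shapes are illustrative, and the paper's full model, input features, and training are not shown):

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """MobileNet's core op: per-channel (depthwise) k x k filtering,
    then a 1x1 (pointwise) convolution that mixes channels."""
    c, h, w = x.shape
    k = dw_kernels.shape[-1]
    out_h, out_w = h - k + 1, w - k + 1
    dw = np.zeros((c, out_h, out_w))
    for ch in range(c):            # depthwise: one filter per input channel
        for i in range(out_h):
            for j in range(out_w):
                dw[ch, i, j] = np.sum(x[ch, i:i+k, j:j+k] * dw_kernels[ch])
    # pointwise: pw_weights has shape (out_channels, in_channels)
    return np.einsum("oc,chw->ohw", pw_weights, dw)

rng = np.random.default_rng(1)
x  = rng.normal(size=(3, 8, 8))   # e.g. a small 3-channel spectrogram patch
dw = rng.normal(size=(3, 3, 3))   # one 3x3 kernel per channel
pw = rng.normal(size=(8, 3))      # 1x1 conv to 8 output channels
print(depthwise_separable_conv(x, dw, pw).shape)  # (8, 6, 6)
```

For these shapes the depthwise + pointwise pair needs 3·9 + 8·3 = 51 weights versus 8·3·9 = 216 for a standard 3×3 convolution, which is the saving that makes the model "lightweight".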

Study on User Characteristics based on Conversation Analysis between Social Robots and Older Adults: With a focus on phenomenological research and cluster analysis (소셜 로봇과 노년층 사용자 간 대화 분석 기반의 사용자 특성 연구: 현상학적 분석 방법론과 군집 분석을 중심으로)

  • Na-Rae Choi;Do-Hyung Park
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.211-227 / 2023
  • Personal service robots, a type of social robot that has emerged with the aging population and technological advancements, are undergoing a transformation centered on technologies that can extend independent living for older adults in their homes. For older adults to accept and use social robots in their daily lives on a long-term basis, it is crucial to have a deeper understanding of user perspectives, contexts, and emotions. This research aims to understand older adults comprehensively through a mixed-method approach that integrates quantitative and qualitative data. Specifically, we employ the Van Kaam phenomenological methodology to group conversations into nine categories, using emotional cues and conversation participants as key variables, based on voice conversation records between older adults and social robots. We then characterize the conversations by frequency and weight, allowing for user segmentation. Additionally, we conduct a profiling analysis using demographic data and health indicators obtained from pre-survey questionnaires. Finally, based on the conversation analysis, we perform a K-means cluster analysis to classify older adults into three groups and examine their respective characteristics. The proposed model is expected to contribute to the growth of businesses that understand users and derive insights by providing a methodology for segmenting older adults, which is essential for the future provision of social robots with caregiving functions in everyday life.
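
The final grouping step can be sketched with a plain k-means; the 2-D toy points stand in for the per-user, frequency-weighted category features described above, and first-k initialization is a simplification (k-means++ would be usual in practice):

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain k-means: assign each point to the nearest centroid, then
    re-average; first-k initialization keeps the sketch deterministic."""
    centroids = X[:k].copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # leave empty clusters in place
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Hypothetical per-user features; the paper would use frequency-weighted
# counts over its nine conversation categories instead of 2-D points.
X = np.array([[0.10, 0.20], [0.15, 0.25], [0.90, 0.80],
              [0.85, 0.90], [0.50, 0.05], [0.55, 0.10]])
labels, _ = kmeans(X, k=3)
print(labels.tolist())  # [0, 0, 2, 2, 1, 1]
```

Each resulting cluster would then be profiled against the demographic and health indicators from the pre-survey, as the abstract describes.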