• Title/Abstract/Keyword: Robot skin

60 search results (processing time: 0.028 s)

2D 얼굴 영상을 이용한 로봇의 감정인식 및 표현시스템 (Emotion Recognition and Expression System of Robot Based on 2D Facial Image)

  • 이동훈;심귀보
    • 제어로봇시스템학회논문지 / Vol. 13, No. 4 / pp.371-376 / 2007
  • This paper presents an emotion recognition and expression system for an intelligent robot such as a home or service robot. The robot recognizes emotion from facial images, using the motion and position of multiple facial features. A tracking algorithm is applied to follow a moving user from the mobile robot, and a face region detection algorithm removes skin-colored hand regions and the background from the user image. After normalization, which enlarges or reduces the image according to the distance of the detected face region and rotates it according to the angle of the face, the mobile robot obtains a facial image of fixed size. A multi-feature selection algorithm is implemented to enable the robot to recognize the user's emotion. A multilayer perceptron, an artificial neural network (ANN), is used as the pattern recognizer, with the backpropagation (BP) algorithm as its learning method. The emotion recognized by the robot is then expressed on a graphic LCD: two coordinates change according to the emotion output by the ANN, and the parameters of the facial elements (eyes, eyebrows, mouth) change with these coordinates. The implemented system expresses complex human emotions through an avatar on the LCD.
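The recognition pipeline described in this abstract (facial-feature vectors fed to a multilayer perceptron trained with backpropagation) can be sketched in a few lines. The network sizes, learning rate, and the toy one-hot "emotion" data below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """Two-layer perceptron trained with plain backpropagation."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)       # hidden activations
        self.y = sigmoid(self.h @ self.W2)  # output activations
        return self.y

    def backprop(self, x, target):
        y = self.forward(x)
        # output-layer delta (squared-error loss, sigmoid units)
        d2 = (y - target) * y * (1 - y)
        # hidden-layer delta via the chain rule
        d1 = (d2 @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= self.lr * np.outer(self.h, d2)
        self.W1 -= self.lr * np.outer(x, d1)

# Toy training set: 4 feature vectors -> 4 one-hot "emotion" classes
X = np.eye(4)
T = np.eye(4)
net = MLP(4, 8, 4)
for _ in range(3000):
    for x, t in zip(X, T):
        net.backprop(x, t)

pred = [int(np.argmax(net.forward(x))) for x in X]
print(pred)  # expected to recover the identity mapping [0, 1, 2, 3]
```

The paper's actual network would take real facial-feature measurements as input; the point here is only the BP update structure.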

얼굴로봇 Buddy의 기능 및 구동 메커니즘 (Functions and Driving Mechanisms for Face Robot Buddy)

  • 오경균;장명수;김승종;박신석
    • 로봇학회논문지 / Vol. 3, No. 4 / pp.270-277 / 2008
  • The development of a face robot basically targets very natural human-robot interaction (HRI), especially emotional interaction, and so does the face robot introduced in this paper, named Buddy. Since Buddy was developed for a mobile service robot, it does not have a living-being-like face such as a human's or an animal's, but a typical robot-like face with hard skin, which may be suitable for mass production. Its structure and mechanism are also kept simple, and its production cost low. This paper introduces the mechanisms and functions of the mobile face robot Buddy, which can take on natural and precise facial expressions and make dynamic gestures, all driven by a single laptop PC. Buddy can also perform lip-sync, eye contact, and face tracking for lifelike interaction. By adopting a customized emotional reaction decision model, Buddy can form its own personality, emotion, and motives from various sensor inputs. Based on this model, Buddy can interact properly with users and perform real-time learning using personality factors. The interaction performance of Buddy is successfully demonstrated by experiments and simulations.


심리로봇적용을 위한 얼굴 영역 처리 속도 향상 및 강인한 얼굴 검출 방법 (Improving the Processing Speed and Robustness of Face Detection for a Psychological Robot Application)

  • 류정탁;양진모;최영숙;박세현
    • 한국산업정보학회논문지 / Vol. 20, No. 2 / pp.57-63 / 2015
  • Facial expression recognition has the advantages of being contactless, non-coercive, and convenient compared with other emotion recognition technologies. To apply vision technology to a psychological robot, the face region must be extracted accurately and quickly before the expression recognition stage. In this paper, for improved face region detection, the background is first removed from the image using YCbCr skin color information, and the Haar-like feature method is then applied. By removing the background from the input image, face detection results were obtained that are robust to the background, with improved processing speed.
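The background-removal step described above (YCbCr skin color classification before the Haar-like feature detector) can be sketched as follows. The BT.601 RGB-to-YCbCr conversion is standard; the Cb/Cr skin thresholds are common literature values and an assumption here, not the paper's exact ranges:

```python
import numpy as np

def skin_mask(rgb):
    """rgb: HxWx3 uint8 array -> HxW boolean skin mask in YCbCr space."""
    r, g, b = [rgb[..., i].astype(np.float64) for i in range(3)]
    # BT.601 full-range chrominance channels
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # commonly cited skin-chrominance box (assumed thresholds)
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 150, 120)   # typical skin tone
img[0, 1] = (0, 0, 255)       # blue background pixel
print(skin_mask(img)[0])      # [ True False]
```

Pixels failing the mask would be blanked before the face detector runs, which is what buys the reported speedup.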

강인한 손가락 끝 추출과 확장된 CAMSHIFT 알고리즘을 이용한 자연스러운 Human-Robot Interaction을 위한 손동작 인식 (A Robust Fingertip Extraction and Extended CAMSHIFT based Hand Gesture Recognition for Natural Human-like Human-Robot Interaction)

  • 이래경;안수용;오세영
    • 제어로봇시스템학회논문지 / Vol. 18, No. 4 / pp.328-336 / 2012
  • In this paper, we propose a robust fingertip extraction and extended Continuously Adaptive Mean Shift (CAMSHIFT) based hand gesture recognition method for natural, human-like HRI (Human-Robot Interaction). First, for efficient and rapid hand detection, hand candidate regions are segmented by combining a robust YCbCr skin color model with Haar-like feature-based AdaBoost. From the extracted hand candidate regions, we estimate the palm region and fingertip position using distance-transform-based voting and the geometric features of the hand. From the hand orientation and palm center position, we find the optimal fingertip position and its orientation. Then, using the extended CAMSHIFT, we reliably track the 2D hand gesture trajectory with the extracted fingertip. Finally, we apply conditional density propagation (CONDENSATION) to recognize pre-defined temporal motion trajectories. Experimental results show that the proposed algorithm not only rapidly extracts the hand region with an accurately extracted fingertip and angle, but also robustly tracks the hand under different illumination, size, and rotation conditions. Using these results, we successfully recognize multiple hand gestures.
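The palm-estimation step (distance-transform-based voting) can be illustrated with a toy mask: the distance transform of a binary hand region peaks at the point deepest inside the region, which serves as the palm center. The brute-force distance computation and the tiny mask below are illustrative; a real implementation would use an optimized distance transform such as OpenCV's `cv2.distanceTransform`:

```python
import numpy as np

def palm_center(mask):
    """Return (row, col) of the mask pixel farthest from the background."""
    ys, xs = np.nonzero(mask)          # foreground pixels
    bys, bxs = np.nonzero(~mask)       # background pixels
    fg = np.stack([ys, xs], axis=1).astype(float)
    bg = np.stack([bys, bxs], axis=1).astype(float)
    # distance of each foreground pixel to its nearest background pixel
    d = np.sqrt(((fg[:, None, :] - bg[None, :, :]) ** 2).sum(-1)).min(1)
    i = int(np.argmax(d))              # deepest interior point
    return int(ys[i]), int(xs[i])

mask = np.zeros((7, 7), dtype=bool)
mask[1:6, 1:6] = True                  # a 5x5 "palm" blob
print(palm_center(mask))               # (3, 3): the middle of the blob
```

The fingertip would then be sought among contour points far from this center, consistent with the geometric-feature voting the abstract mentions.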

퍼지 색상 필터를 이용한 얼굴 영역 추출 (Extraction of Facial Region Using Fuzzy Color Filter)

  • 김문환;박진배;정근호;주영훈;이재연;조영조
    • 대한전기학회:학술대회논문집 / 대한전기학회 2004년도 학술대회 논문집 정보 및 제어부문 / pp.147-149 / 2004
  • Although face region extraction is an important part of pattern recognition with diverse application fields, no definitive solution to the problem exists. Developing a facial region extraction algorithm is not easy because facial images are very sensitive to age, sex, and illumination. In this paper, to address these difficulties, a facial region extraction algorithm based on a fuzzy color filter is proposed. The fuzzy color filter enables robust facial region extraction by modeling the skin color, and is especially robust under various illuminations. In addition, a linear matrix inequality (LMI) optimization method is used to identify the fuzzy color filter. Finally, simulation results are given to confirm the superiority of the proposed algorithm.

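A fuzzy color filter of the kind this abstract describes can be sketched with membership functions over the chrominance channels: each pixel receives a degree of "skin-ness" in [0, 1] rather than a hard yes/no, which degrades gracefully under illumination changes. The trapezoid corner points below are illustrative assumptions; the paper identifies its filter via LMI optimization, which is not reproduced here:

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 below a, ramps to 1 on [b, c], 0 above d."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / (b - a), 0, 1)
    fall = np.clip((d - x) / (d - c), 0, 1)
    return np.minimum(rise, fall)

def skin_degree(cb, cr):
    # combine per-channel memberships with the fuzzy AND (minimum)
    return np.minimum(trapezoid(cb, 77, 90, 115, 127),
                      trapezoid(cr, 133, 145, 165, 173))

print(skin_degree(100, 155))   # 1.0 -> clearly skin
print(skin_degree(80, 136))    # small value: near the boundary
print(skin_degree(60, 120))    # 0.0 -> clearly not skin
```

Thresholding the degree at different levels trades off false positives against false negatives, which a crisp color filter cannot do.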

인간의 손의 능력을 응용한 로봇 핸드의 힘 제어 (Control of Grasp Forces for Robotic Hands Based on Human Capabilities)

  • 김일환
    • 산업기술연구 / Vol. 16 / pp.71-81 / 1996
  • This paper discusses a physiological approach to robot hand force control motivated by the study of human hands. It begins with an analysis of human grasping behavior to see how humans determine grasp forces. A human controls the grasp force by sensing the friction force, that is, the weight of the object felt on the hand; but when slip is detected by sensing skin acceleration, the grasp force is raised well above the minimum force required for grasping by adding a force proportional to that acceleration. Two methods that can predict when and how the fingers will slip on a grasped object are also considered. To emulate these human capabilities, we propose a method for determining the grasp force that uses the change in the friction force. Experimental results show that the proposed method can be applied to control robot hands so that they stably grasp objects of arbitrary weight without skin-like slip sensors.

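The grasp-force rule described in this abstract can be sketched as a simple control law: a baseline grip proportional to the sensed friction (load) force, plus an extra term proportional to the skin acceleration when slip is detected. The friction coefficient, safety margin, and slip gain below are illustrative values, not the paper's identified parameters:

```python
def grasp_force(tangential_force, slip_accel=0.0,
                mu=0.5, margin=1.2, slip_gain=2.0):
    """Return the commanded normal (grip) force in newtons.

    tangential_force: sensed friction/load force [N]
    slip_accel: acceleration sensed at the skin when slipping [m/s^2]
    """
    # minimum normal force that Coulomb friction requires, plus a margin
    base = margin * tangential_force / mu
    # on slip, grip well above the minimum, proportional to acceleration
    return base + slip_gain * slip_accel

print(grasp_force(2.0))        # steady hold of a ~0.2 kg object (~4.8 N)
print(grasp_force(2.0, 1.5))   # slip detected: commanded grip rises (~7.8 N)
```

The key property, matching the human behavior described above, is that the grip scales with the felt weight and jumps only when slip is actually sensed.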

A Face Robot Actuated With Artificial Muscle Based on Dielectric Elastomer

  • Kwak Jong Won;Chi Ho June;Jung Kwang Mok;Koo Ja Choon;Jeon Jae Wook;Lee Youngkwan;Nam Jae-do;Ryew Youngsun;Choi Hyouk Ryeol
    • Journal of Mechanical Science and Technology / Vol. 19, No. 2 / pp.578-588 / 2005
  • Face robots capable of expressing their emotional status can be adopted as an efficient tool for friendly communication between humans and machines. In this paper, we present a face robot actuated with artificial muscle based on dielectric elastomer. By exploiting the properties of the dielectric elastomer, it is possible to actuate the covering skin and eyes and to provide human-like expressivity without employing complicated mechanisms. The robot is driven by seven actuator modules (eye, eyebrow, eyelid, brow, cheek, jaw, and neck) corresponding to the movements of facial muscles. Although these are only part of the whole set of facial motions, our approach is sufficient to generate the six fundamental facial expressions: surprise, fear, anger, disgust, sadness, and happiness. Each module communicates with the others via the CAN communication protocol, and the facial motions for a desired emotional expression are generated by combining the motions of the individual actuator modules. A prototype of the robot has been developed, and several experiments have been conducted to validate its feasibility.
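How the facial motions might be composed from the actuator modules can be sketched as a mapping from each fundamental expression to per-module commands, with one command dispatched to each module (over CAN in the real robot). The module names follow the paper; the activation values and the neutral-return behavior are illustrative assumptions:

```python
# Seven actuator modules named in the paper
MODULES = ["eye", "eyebrow", "eyelid", "brow", "cheek", "jaw", "neck"]

# Per-expression module activations, normalized to [-1, 1] (made-up values)
EXPRESSIONS = {
    "surprise":  {"eyebrow": 1.0, "eyelid": 1.0, "jaw": 0.8},
    "happiness": {"cheek": 0.9, "jaw": 0.3, "eyelid": -0.2},
    "sadness":   {"eyebrow": -0.7, "brow": 0.5, "neck": -0.3},
}

def module_commands(expression):
    """Full command vector: modules not listed return to neutral (0.0)."""
    active = EXPRESSIONS[expression]
    return {m: active.get(m, 0.0) for m in MODULES}

cmd = module_commands("surprise")
print(cmd["eyebrow"], cmd["cheek"])   # 1.0 0.0
```

In the robot, each of these scalar commands would be framed as a CAN message to the corresponding module controller.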

Development of Face Robot Actuated by Artificial Muscle

  • Choi, H.R.;Kwak, J.W.;Chi, H.J.;Jung, K.M.;Hwang, S.H.
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2004년도 ICCAS / pp.1229-1234 / 2004
  • Face robots capable of expressing their emotional status can be adopted as an efficient tool for friendly communication between humans and machines. In this paper, we present a face robot actuated with artificial muscle based on dielectric elastomer. By exploiting the properties of polymers, it is possible to actuate the covering skin and provide human-like expressivity without employing complicated mechanisms. The robot is driven by seven types of actuator modules (eye, eyebrow, eyelid, brow, cheek, jaw, and neck) corresponding to the movements of facial muscles. Although these are only part of the whole set of facial motions, our approach is sufficient to generate the six fundamental facial expressions: surprise, fear, anger, disgust, sadness, and happiness. Each module communicates with the others via the CAN communication protocol, and the facial motions for a desired emotional expression are generated by combining the motions of the individual actuator modules. A prototype of the robot has been developed, and several experiments have been conducted to validate its feasibility.


화자의 긍정·부정 의도를 전달하는 실용적 텔레프레즌스 로봇 시스템의 개발 (Development of a Cost-Effective Tele-Robot System Delivering Speaker's Affirmative and Negative Intentions)

  • 진용규;유수정;조혜경
    • 로봇학회논문지 / Vol. 10, No. 3 / pp.171-177 / 2015
  • A telerobot offers a more engaging and enjoyable interaction with people at a distance by communicating via audio, video, expressive gestures, body pose, and proxemics. To provide these potential benefits at a reasonable cost, this paper presents a telepresence robot system for video communication that can deliver the speaker's head motion through its display stanchion. Head gestures such as nodding and head-shaking convey crucial information during conversation, and the speaker's eye gaze, known as one of the key non-verbal signals in interaction, can be inferred from his or her head pose. To develop an efficient head tracking method, a 3D cylinder-like head model is employed, and the Harris corner detector is combined with Lucas-Kanade optical flow, which is known to be suitable for extracting the 3D motion information of the model. In particular, a skin color-based face detection algorithm is proposed to achieve robust performance over varying head directions while maintaining a reasonable computational cost. The performance of the proposed head tracking algorithm is verified through experiments using BU's standard data sets. The design of the robot platform is also described, along with supporting systems such as the video transmission and robot control interfaces.
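The Lucas-Kanade step at the core of such a head tracker can be sketched for a single window: spatial and temporal image gradients give a 2x2 linear system whose solution is the 2D motion vector. The synthetic test pattern shifted by one pixel is a made-up example; the paper combines this step with Harris corners and a cylindrical head model, which are not reproduced here:

```python
import numpy as np

def lk_flow(frame0, frame1):
    """Single-window Lucas-Kanade: solve A v = b for the motion (vx, vy)."""
    ix = np.gradient(frame0, axis=1)   # spatial gradient in x
    iy = np.gradient(frame0, axis=0)   # spatial gradient in y
    it = frame1 - frame0               # temporal gradient
    A = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    return np.linalg.solve(A, b)

# Textured pattern and the same pattern shifted right by one pixel
y, x = np.mgrid[0:32, 0:32].astype(float)
frame0 = np.sin(0.3 * x) + np.cos(0.2 * y)
frame1 = np.sin(0.3 * (x - 1)) + np.cos(0.2 * y)
vx, vy = lk_flow(frame0, frame1)
print(vx, vy)
```

For this one-pixel rightward shift, the estimate should come out close to vx = 1, vy = 0; in the tracker, many such window estimates at corner points constrain the 3D head-model motion.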