• Title/Abstract/Keyword: robot face

Search results: 186 (processing time: 0.032 s)

A Face Robot Actuated With Artificial Muscle Based on Dielectric Elastomer

  • Kwak Jong Won;Chi Ho June;Jung Kwang Mok;Koo Ja Choon;Jeon Jae Wook;Lee Youngkwan;Nam Jae-do;Ryew Youngsun;Choi Hyouk Ryeol
    • Journal of Mechanical Science and Technology, Vol. 19, No. 2, pp. 578-588, 2005
  • Face robots capable of expressing their emotional status can be adopted as an efficient tool for friendly communication between humans and machines. In this paper, we present a face robot actuated with artificial muscle based on dielectric elastomer. By exploiting the properties of dielectric elastomer, it is possible to actuate the covering skin and eyes, as well as to provide human-like expressivity, without employing complicated mechanisms. The robot is driven by seven actuator modules, namely eye, eyebrow, eyelid, brow, cheek, jaw, and neck modules, corresponding to movements of facial muscles. Although these cover only part of the whole set of facial motions, our approach is sufficient to generate the six fundamental facial expressions: surprise, fear, anger, disgust, sadness, and happiness. In the robot, each module communicates with the others via the CAN communication protocol, and the facial motions for a desired emotional expression are generated by combining the motions of each actuator module. A prototype of the robot has been developed, and several experiments have been conducted to validate its feasibility.
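The module-combination scheme described in this abstract can be sketched as follows. The seven module names follow the paper, but the activation patterns and the `expression_commands` helper are illustrative assumptions, not the robot's actual CAN commands:

```python
# Each of the seven actuator modules receives a normalized command in
# [0, 1]; an expression is generated by combining per-module targets.
MODULES = ("eye", "eyebrow", "eyelid", "brow", "cheek", "jaw", "neck")

# Illustrative activation patterns for the six basic expressions
# (placeholder numbers, not values from the paper).
EXPRESSIONS = {
    "surprise":  {"eyebrow": 1.0, "eyelid": 1.0, "jaw": 0.8},
    "fear":      {"eyebrow": 0.8, "eyelid": 0.9, "neck": 0.3},
    "anger":     {"brow": 1.0, "jaw": 0.5},
    "disgust":   {"brow": 0.7, "cheek": 0.6},
    "sadness":   {"eyebrow": 0.4, "eyelid": 0.6, "neck": 0.5},
    "happiness": {"cheek": 0.9, "jaw": 0.4, "eye": 0.2},
}

def expression_commands(name):
    """Expand an expression into one command per module (default 0.0)."""
    pattern = EXPRESSIONS[name]
    return {m: pattern.get(m, 0.0) for m in MODULES}
```

Each module would then receive its own entry of this command vector over the CAN bus.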

비전 방식을 이용한 감정인식 로봇 개발 (Development of an Emotion Recognition Robot using a Vision Method)

  • 신영근;박상성;김정년;서광규;장동식
    • 산업공학, Vol. 19, No. 3, pp. 174-180, 2006
  • This paper deals with a robot system that recognizes a human's expression from a detected face and then displays the corresponding emotion. The face detection method is as follows. First, the RGB color space is converted to the CIELab color space. Second, skin candidate regions are extracted. Third, a face is detected through the geometrical interrelations of facial features using a face filter. The positions of the eyes, nose, and mouth are then used as preliminary data for expression recognition, which relies on the eyebrows, eyes, and mouth. Changes in these features are sent to the robot through serial communication, and the robot drives its installed motors to reproduce the human's expression. Experimental results on 10 persons show 78.15% accuracy.
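The first two steps (RGB to CIELab conversion, skin-candidate extraction) can be sketched per pixel as below. The Lab conversion follows the standard sRGB/D65 formulas; the a*/b*/L* thresholds in `is_skin_candidate` are illustrative assumptions, not the paper's actual values:

```python
def rgb_to_lab(r, g, b):
    """Convert an sRGB pixel (0-255 per channel) to CIELab (D65)."""
    def lin(c):  # sRGB -> linear RGB
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (D65 white point)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # XYZ -> Lab
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def is_skin_candidate(r, g, b):
    """Toy skin-candidate test: an illustrative a*/b* window."""
    L, a, bb = rgb_to_lab(r, g, b)
    return 20 < L < 95 and 5 < a < 35 and 5 < bb < 45
```

A real system would tune the window on labeled skin samples rather than use fixed constants.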

Color-based Face Detection for Alife Robot

  • Feng, Xiongfeng;Kubik, K.Bogunia
    • 제어로봇시스템학회 (ICROS) Conference Proceedings, ICCAS 2001, pp. 49.2-49, 2001
  • In this paper, a skin-color model in the HSV space is developed. Based on this model, the face region can be separated from the other parts of an image, and the face can be detected by template-matching and eye-pair methods. This has been realized in our robot.
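A toy version of such an HSV skin-color test is shown below, using Python's standard `colorsys` module. The threshold window is an illustrative assumption, not the model developed in the paper:

```python
import colorsys

def is_skin_hsv(r, g, b):
    """Classify one sRGB pixel (0-255 per channel) as skin-like or not."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # Skin hues cluster near red/orange: H below ~50 degrees,
    # moderate saturation, reasonably bright value (assumed bounds).
    return h < 50 / 360.0 and 0.15 < s < 0.75 and v > 0.35
```

In practice the H/S/V bounds would be fitted to training pixels under the robot's lighting conditions.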


Development of Pose-Invariant Face Recognition System for Mobile Robot Applications

  • Lee, Tai-Gun;Park, Sung-Kee;Kim, Mun-Sang;Park, Mig-Non
    • 제어로봇시스템학회 (ICROS) Conference Proceedings, ICCAS 2003, pp. 783-788, 2003
  • In this paper, we present a new approach to detecting and recognizing human faces in images from a vision camera mounted on a mobile robot platform. Because the camera platform is mobile, the obtained facial images are small and vary in pose, so the algorithm must cope with these constraints and still detect and recognize faces in nearly real time. In the detection step, a 'coarse to fine' strategy is used. First, a region boundary including the face is roughly located by dual ellipse templates of facial color, and within this region the locations of the three main facial features, the two eyes and the mouth, are estimated. For this, simplified facial feature maps using characteristic chrominance are constructed, and candidate pixels are segmented into eye or mouth pixel groups. These candidate facial features are verified by checking whether the length and orientation of feature pairs are suitable for face geometry. In the recognition step, a pseudo-convex hull area of the gray face image is defined, which includes the feature triangle connecting the two eyes and the mouth. A random lattice line set is composed and laid on this convex hull area, and the 2D appearance of the area is represented from it. Through these procedures, facial information for the detected face is obtained, and the face DB images are processed in the same way for each person class. Based on the facial information of these areas, a distance measure over matched lattice lines is calculated, and the face image is recognized using this measure as a classifier. The proposed detection and recognition algorithms overcome the constraints of a previous approach [15], make real-time face detection and recognition possible, and guarantee correct recognition regardless of some pose variation of the face. Their usefulness in mobile robot applications is demonstrated.
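The geometric verification of candidate feature pairs can be sketched as below: an eye pair and a mouth candidate are accepted only if the eye baseline is roughly horizontal and the mouth sits below the eyes at a plausible distance. All thresholds are illustrative assumptions, not the paper's:

```python
import math

def plausible_face_triangle(left_eye, right_eye, mouth, max_tilt_deg=30.0):
    """Check whether (eyes, mouth) points form a plausible face triangle.

    Points are (x, y) in image coordinates with y growing downward.
    """
    (x1, y1), (x2, y2), (mx, my) = left_eye, right_eye, mouth
    eye_dist = math.hypot(x2 - x1, y2 - y1)
    if eye_dist == 0:
        return False
    # Eye baseline should be close to horizontal.
    tilt = abs(math.degrees(math.atan2(y2 - y1, x2 - x1)))
    if tilt > max_tilt_deg:
        return False
    # Mouth should lie below the eye midpoint, roughly 0.5-1.5
    # eye-distances away (assumed window).
    drop = my - (y1 + y2) / 2.0
    return 0.5 * eye_dist <= drop <= 1.5 * eye_dist
```

Candidates failing this check would be discarded before the recognition step.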


의료서비스로봇을 위한 얼굴추출 방법 (Face Detection for Medical Service Robot)

  • 박세현;류정탁
    • 한국산업정보학회논문지, Vol. 16, No. 3, pp. 1-10, 2011
  • This paper proposes a face detection method for medical service robots. The proposed method complements the weaknesses of existing face detection methods and is robust to background and illumination. The method first removes the background using the mean-shift algorithm, extracts face candidates in a color space, and then detects the final face with the appearance-based Haar-like feature method. Experiments were conducted to assess the efficiency of the proposed system, and the results show that the proposed method is applicable to medical service robots.
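The final verification stage relies on appearance-based Haar-like features. As a toy illustration of the underlying machinery, here is how a single two-rectangle Haar-like feature is evaluated in O(1) using an integral image (pure Python; a real system would use a trained cascade such as OpenCV's, which this sketch does not reproduce):

```python
def integral_image(img):
    """img: list of rows of ints -> summed-area table with a zero border."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle (x, y, w, h) via four lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """Left-minus-right two-rectangle feature over a (2w x h) window."""
    return rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h)
```

A cascade thresholds thousands of such feature responses, each computed this cheaply regardless of rectangle size.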

Development of Face Robot Actuated by Artificial Muscle

  • Choi, H.R.;Kwak, J.W.;Chi, H.J.;Jung, K.M.;Hwang, S.H.
    • 제어로봇시스템학회 (ICROS) Conference Proceedings, ICCAS 2004, pp. 1229-1234, 2004
  • Face robots capable of expressing their emotional status can be adopted as an efficient tool for friendly communication between humans and machines. In this paper, we present a face robot actuated with artificial muscle based on dielectric elastomer. By exploiting the properties of polymers, it is possible to actuate the covering skin and provide human-like expressivity without employing complicated mechanisms. The robot is driven by seven types of actuator modules, namely eye, eyebrow, eyelid, brow, cheek, jaw, and neck modules, corresponding to movements of facial muscles. Although these cover only part of the whole set of facial motions, our approach is sufficient to generate the six fundamental facial expressions: surprise, fear, anger, disgust, sadness, and happiness. Each module communicates with the others via the CAN communication protocol, and the facial motions for a desired emotional expression are generated by combining the motions of each actuator module. A prototype of the robot has been developed, and several experiments have been conducted to validate its feasibility.


로봇 사진사를 위한 오메가 형상 추적기와 얼굴 검출기 융합을 이용한 강인한 머리 추적 (Robust Head Tracking using a Hybrid of Omega Shape Tracker and Face Detector for Robot Photographer)

  • 김지성;정지훈;안광호;유연걸;이원형;정명진
    • 로봇학회논문지, Vol. 5, No. 2, pp. 152-159, 2010
  • Finding a person's head in a scene is very important for a robot photographer, because a well-composed picture depends on the position of the head. In this paper, we propose a robust head tracking algorithm that combines an omega shape tracker with a local binary pattern (LBP) AdaBoost face detector, so that the robot photographer can take a fine picture automatically. Face detection algorithms perform well on frontal faces, but not on rotated faces, and they have a hard time finding a face occluded by a hat or hands. To solve this problem, an omega shape tracker based on the active shape model (ASM) is presented. The omega shape tracker is robust to occlusion and illumination change; however, its performance is unsatisfactory in dynamic environments, such as when people move fast or the background is complex. Therefore, this paper proposes a method that probabilistically combines the face detection algorithm and the omega shape tracker using histogram of oriented gradients (HOG) descriptors in order to robustly find the human head. A robot photographer was also implemented that abides by the 'rule of thirds' and takes photos when people smile.
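The 'rule of thirds' framing check mentioned above can be sketched as follows: the tracked head center counts as well composed when it lies near one of the four intersections of the frame's third lines. The tolerance value is an illustrative assumption:

```python
def near_thirds_point(cx, cy, frame_w, frame_h, tol=0.08):
    """True if (cx, cy) is within tol * min(frame dims) of a thirds point."""
    targets = [(frame_w * i / 3.0, frame_h * j / 3.0)
               for i in (1, 2) for j in (1, 2)]
    limit = tol * min(frame_w, frame_h)
    return any(abs(cx - tx) <= limit and abs(cy - ty) <= limit
               for tx, ty in targets)
```

The robot would keep adjusting its pan/tilt until the tracked head satisfies this predicate, then wait for a smile to shoot.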

모바일 로봇을 위한 저해상도 영상에서의 원거리 얼굴 검출 (Detection of Faces Located at a Long Range with Low-resolution Input Images for Mobile Robots)

  • 김도형;윤우한;조영조;이재연
    • 로봇학회논문지, Vol. 4, No. 4, pp. 257-264, 2009
  • This paper proposes a novel face detection method that finds tiny faces located at a long range, even with the low-resolution input images captured by a mobile robot. The proposed approach can locate extremely small face regions of 12×12 pixels. We solve the tiny-face detection problem with a system of multiple detectors, including a mean-shift color tracker, short- and long-range face detectors, and an omega shape detector. The proposed method adopts a long-range face detector trained well enough to detect tiny faces at a long range, and limits its operation to a search region automatically determined by the mean-shift color tracker and the omega shape detector. By limiting the face search region as much as possible, the proposed method can accurately detect tiny faces at a long distance even in low-resolution images, and sharply decreases false positives. According to experimental results on realistic databases, the performance of the proposed approach is at a sufficiently practical level for various robot applications, such as face recognition of non-cooperative users, human-following, and gesture recognition for long-range interaction.
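The search-region idea can be sketched as below: the color tracker and the omega detector each propose a box, and the long-range face detector runs only inside their padded union. The box format `(x, y, w, h)`, the padding factor, and the default frame size are illustrative assumptions:

```python
def padded_union(box_a, box_b, pad=0.2, frame_w=320, frame_h=240):
    """Union of up to two (x, y, w, h) boxes, expanded by `pad` and
    clipped to the frame; falls back to the full frame if both are None."""
    boxes = [b for b in (box_a, box_b) if b is not None]
    if not boxes:
        return (0, 0, frame_w, frame_h)
    x0 = min(b[0] for b in boxes)
    y0 = min(b[1] for b in boxes)
    x1 = max(b[0] + b[2] for b in boxes)
    y1 = max(b[1] + b[3] for b in boxes)
    px, py = pad * (x1 - x0), pad * (y1 - y0)
    x0, y0 = max(0, int(x0 - px)), max(0, int(y0 - py))
    x1, y1 = min(frame_w, int(x1 + px)), min(frame_h, int(y1 + py))
    return (x0, y0, x1 - x0, y1 - y0)
```

Scanning only this region is what lets a detector find 12×12-pixel faces without flooding the frame with false positives.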


소비자 시선 분석을 통한 소셜로봇 태도 형성 메커니즘 연구: 로봇의 얼굴을 중심으로 (A Study on the Mechanism of Social Robot Attitude Formation through Consumer Gaze Analysis: Focusing on the Robot's Face)

  • 하상집;이은주;유인진;박도형
    • 지능정보연구, Vol. 28, No. 1, pp. 243-262, 2022
  • This study used eye tracking to uncover the mechanism by which users form attitudes toward a social robot's appearance, one of the main threads of social robot design research, and to derive concrete insights applicable to robot design. A research model was constructed by linking eye-tracking metrics, measured with the robot's whole body, face, eyes, and lips set as areas of interest (AOIs), to user attitudes captured through a design-evaluation survey. Specifically, the eye-tracking metrics used were Fixation, First Visit, Total Viewed, and Revisits, and the AOIs were the social robot's face, eyes, lips, and body. The design-evaluation survey collected consumer beliefs such as Expressive, Human-like, and Face-highlighted, with attitude toward the robot as the dependent variable. Gaze responses were found to shape attitudes toward the social robot through two paths. First, gaze affected attitude directly: attitudes were more positive when the robot's face and eyes were gazed at. Specifically, the earlier the first gaze on the robot, and the longer the dwell time and the lower the revisit frequency on the eyes, the more positively the robot was evaluated; that is, focusing on the eyes rather than the face directly influenced participants' judgments of the robot. Second, on the cognitively perceived side, Face-highlighted, Human-like, and Expressive all had positive effects on attitude. Examining which visual elements these perceptions relate to, longer dwell time on the face and lips and fewer revisits to the lips positively affected Face-highlighted; a later first gaze on the whole body, and an earlier first gaze with shorter fixation time on the lips, positively affected Human-like; and longer dwell time on the face positively affected Expressive.
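The AOI gaze metrics named above (First Visit, Total Viewed, Revisits) can be computed from a timestamped gaze-sample stream as sketched below. The fixed sample interval and rectangular AOI format are illustrative assumptions about the recording setup:

```python
def aoi_metrics(samples, aoi, dt=1 / 60.0):
    """samples: list of (x, y) gaze points at interval dt;
    aoi: rectangle (x, y, w, h). Returns the three AOI metrics."""
    x0, y0, w, h = aoi
    inside_prev = False
    first_visit = None
    total = 0.0
    visits = 0
    for i, (x, y) in enumerate(samples):
        inside = x0 <= x < x0 + w and y0 <= y < y0 + h
        if inside:
            total += dt                      # accumulate Total Viewed
            if first_visit is None:
                first_visit = i * dt         # time of First Visit
            if not inside_prev:
                visits += 1                  # a new entry into the AOI
        inside_prev = inside
    return {"first_visit": first_visit,
            "total_viewed": total,
            "revisits": max(0, visits - 1)}  # entries after the first
```

Each AOI (face, eyes, lips, body) would get its own metric dictionary, which then feeds the attitude model.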

3D 캐릭터를 이용한 감정 기반 헤드 로봇 시스템 (Emotional Head Robot System Using 3D Character)

  • 안호석;최정환;백영민;샤밀;나진희;강우성;최진영
    • 대한전기학회 (KIEE) Conference Proceedings, 2007 Symposium, Information and Control Section, pp. 328-330, 2007
  • Emotion is becoming one of the important elements of intelligent service robots. Emotional communication can create a more comfortable relation between humans and robots. We developed an emotional head robot system using a 3D character, and designed an emotional engine for generating the robot's emotion. The results of face recognition and hand recognition are used as the input data of the emotional engine. The 3D character expresses nine emotions and speaks about its own emotional status. The head robot also keeps a memory of its degree of attraction, which can be changed by the input data. We tested the head robot and confirmed its functions.
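The 'degree of attraction' memory mentioned above can be sketched as a bounded score nudged by recognition events. The event weights and decay rate below are purely hypothetical; the abstract gives no numbers:

```python
# Hypothetical per-event weights and per-step decay (not from the paper).
ATTRACTION_WEIGHTS = {"face": 0.1, "hand": 0.05}
DECAY = 0.02

def update_attraction(score, event):
    """Decay the stored attraction slightly, add the event's weight,
    and clamp the result to [0, 1]."""
    score = score * (1 - DECAY) + ATTRACTION_WEIGHTS.get(event, 0.0)
    return max(0.0, min(1.0, score))
```

An emotional engine could then map the running score, together with the recognition inputs, onto one of the nine expressions.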
