• Title/Summary/Keyword: facial robot

Development of Dental Light Robotic System using Image Processing Technology (영상처리 기술을 이용한 치과용 로봇 조명장치의 개발)

  • Moon, Hyun-Il;Kim, Myoung-Nam;Lee, Kyu-Bok
    • Journal of Dental Rehabilitation and Applied Science, v.26 no.3, pp.285-296, 2010
  • Robot-assisted illuminating equipment based on image-processing technology was developed and its accuracy was measured. The system was designed to detect facial appearance with a camera and to illuminate it using a robot-assisted arm. It was composed of a motion-control component, a light-control component, and an image-processing component. Images were captured with a camera, and those showing motion change were extracted using the AdaBoost algorithm. In a detection experiment on patients' oral cavities based on image-processing technology, a higher degree of facial recognition was obtained from the frontal view, and the light robot arm was controlled stably.
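The abstract names AdaBoost-based detection but gives no implementation detail; below is a minimal sketch of that style of detector, assuming OpenCV's bundled Haar cascade (an AdaBoost-trained classifier) and a generic webcam. All parameter values are illustrative, not the paper's.

```python
# Minimal sketch of AdaBoost-style face detection, assuming an OpenCV
# pipeline; the paper's exact cascade, training data, and camera interface
# are not specified, so the values below are placeholders.
import cv2

# OpenCV's Haar cascades are trained with AdaBoost (Viola-Jones).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # built-in camera
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # A light-positioning controller could aim the lamp at this point.
        cx, cy = x + w // 2, y + h // 2
        print("face center:", cx, cy)
cap.release()
```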

Design and Implementation of the Educational Humanoid Robot D2 for an Emotional Interaction System (감성 상호작용을 갖는 교육용 휴머노이드 로봇 D2 개발)

  • Kim, Do-Woo;Chung, Ki-Chull;Park, Won-Sung
    • Proceedings of the KIEE Conference, 2007.07a, pp.1777-1778, 2007
  • In this paper, we design and implement a humanoid robot for educational purposes that can collaborate and communicate with humans. We present an affective human-robot communication system for the humanoid robot D2, which we designed to communicate with a human through dialogue. D2 communicates with humans by understanding and expressing emotion using facial expressions, voice, gestures, and posture. Interaction between a human and the robot is made possible through our affective communication framework, which enables the robot to perceive the emotional status of the user and respond appropriately; as a result, the robot can engage in natural dialogue with a human. To support interaction through voice, gestures, and posture, the robot consists of an upper body, two arms, a wheeled mobile platform, and control hardware including vision and speech capabilities and various control boards, such as motion-control boards and a signal-processing board handling several types of sensors. Using D2, we present successful demonstrations consisting of two-arm manipulation tasks, object tracking with the vision system, and communication with humans through the emotional interface, synthesized speech, and recognition of speech commands. See the sketch below for the general shape of the perceive-then-respond loop.
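The abstract describes the framework only at the level of "perceive the user's emotional status and respond appropriately"; the following hypothetical sketch shows one way such a dispatch could be organized. Every state name and response below is invented for illustration and is not the paper's API.

```python
# Hypothetical sketch of an affective response loop: sense an emotion label,
# then pick a matching multimodal response. Names and mappings are invented
# for illustration; the D2 framework's internals are not specified.
from dataclasses import dataclass

@dataclass
class Response:
    expression: str   # facial expression to display
    speech: str       # synthesized utterance
    gesture: str      # arm/posture gesture

RESPONSES = {
    "happy":   Response("smile", "I'm glad to hear that!", "open_arms"),
    "sad":     Response("concern", "Is something wrong?", "lean_forward"),
    "neutral": Response("neutral", "How can I help you?", "idle"),
}

def respond(detected_emotion: str) -> Response:
    """Map the user's detected emotional state to a robot response."""
    return RESPONSES.get(detected_emotion, RESPONSES["neutral"])

print(respond("sad"))
```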

Functions and Driving Mechanisms for Face Robot Buddy (얼굴로봇 Buddy의 기능 및 구동 메커니즘)

  • Oh, Kyung-Geune;Jang, Myong-Soo;Kim, Seung-Jong;Park, Shin-Suk
    • The Journal of Korea Robotics Society, v.3 no.4, pp.270-277, 2008
  • The development of a face robot fundamentally targets natural human-robot interaction (HRI), especially emotional interaction, and so does the face robot introduced in this paper, named Buddy. Since Buddy was developed as a mobile service robot, it does not have a lifelike face such as a human's or an animal's, but a typically robot-like face with hard skin, which may be suitable for mass production. In addition, its structure and mechanism should be simple and its production cost low. This paper introduces the mechanisms and functions of Buddy, which can take on natural and precise facial expressions and make dynamic gestures, driven by a single laptop PC. Buddy can also perform lip-sync, eye contact, and face tracking for lifelike interaction. By adopting a customized emotional reaction decision model, Buddy forms its own personality, emotions, and motives from various sensor inputs. Based on this model, Buddy can interact properly with users and perform real-time learning using personality factors. The interaction performance of Buddy is successfully demonstrated through experiments and simulations.
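The emotional reaction decision model is not specified beyond "personality, emotion and motive from sensor data"; the sketch below only illustrates the general idea of personality factors weighting how stimuli move an internal emotion state. Every constant here is an assumption, not the paper's model.

```python
# Illustrative sketch of an emotional-reaction decision model in which
# personality factors weight how sensor stimuli move an internal emotion
# state. Buddy's actual model is not published in the abstract; all
# constants below are assumptions.
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger"]

# Personality: per-emotion sensitivity (e.g., an irritable robot would
# weight anger-inducing stimuli more heavily).
personality = np.array([1.2, 0.8, 0.5])
decay = 0.9                       # emotions fade toward neutral over time
state = np.zeros(3)               # current emotion intensities

def update(stimulus: np.ndarray) -> str:
    """Blend a stimulus vector into the emotion state and return the
    dominant emotion to express on the face."""
    global state
    state = decay * state + personality * stimulus
    return EMOTIONS[int(np.argmax(state))]

print(update(np.array([0.6, 0.1, 0.0])))   # e.g. user smiles -> "happiness"
```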

Color and Blinking Control to Support Facial Expression of Robot for Emotional Intensity (로봇 감정의 강도를 표현하기 위한 LED의 색과 깜빡임 제어)

  • Kim, Min-Gyu;Lee, Hui-Sung;Park, Jeong-Woo;Jo, Su-Hun;Chung, Myung-Jin
    • Proceedings of the Korean HCI Society Conference, 2008.02a, pp.547-552, 2008
  • Humans and robots will have a closer relationship in the future, and we can expect the interaction between them to become more intense. To take advantage of people's innate communication abilities, researchers have so far concentrated on facial expression. But for a robot to express emotional intensity, other modalities such as gesture, movement, sound, and color are also needed. This paper suggests that the intensity of emotion can be expressed with color and blinking, making the results applicable to LEDs. Color and emotion are clearly related; however, previous results are difficult to implement due to a lack of quantitative data. In this paper, we determined the color and blinking period for expressing the six basic emotions (anger, sadness, disgust, surprise, happiness, fear). They were implemented on an avatar, and the perceived intensities of the emotions were evaluated through a survey. We found that color and blinking helped express the intensity of sadness, disgust, and anger. For fear, happiness, and surprise, color and blinking did not play an important role; however, they may be improved by adjusting the color or blinking.
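The paper's surveyed color and period values are not reproduced in the abstract; the sketch below only illustrates the mapping structure it describes (emotion and intensity in, LED color and blink period out). The RGB values and timings are placeholders, not the paper's results.

```python
# Sketch of mapping emotion and intensity to an LED color and blink period.
# All colors and periods below are placeholder assumptions.
BASE = {  # emotion: ((R, G, B), blink period in seconds)
    "anger":     ((255,   0,   0), 0.3),
    "sadness":   ((  0,   0, 255), 1.5),
    "disgust":   ((  0, 128,   0), 1.0),
    "surprise":  ((255, 255,   0), 0.4),
    "happiness": ((255, 128,   0), 0.8),
    "fear":      ((128,   0, 128), 0.5),
}

def led_signal(emotion: str, intensity: float):
    """Scale brightness with intensity and blink faster as intensity rises."""
    (r, g, b), period = BASE[emotion]
    s = max(0.0, min(intensity, 1.0))
    color = (int(r * s), int(g * s), int(b * s))
    return color, period * (1.5 - 0.5 * s)  # higher intensity -> faster blink

print(led_signal("anger", 0.9))   # ((229, 0, 0), 0.315)
```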

Performance Evaluation Method for Detection Algorithms of Face Region and Facial Components (얼굴영역 및 얼굴요소 검출 알고리즘의 성능평가 방법)

  • Park, Kwang-Hyun;Kim, Dae-Jin;Hong, Ji-Man;Jeong, Young-Sook;Choi, Byoung-Wook
    • The Journal of Korea Robotics Society, v.4 no.3, pp.192-200, 2009
  • In this paper, we report progress in the development of a performance evaluation method for detection algorithms of the face region and facial components. This paper aims to provide a standardized evaluation method for general approaches in face recognition applications, as a potential component of future intelligent robot systems. All the necessary steps, from image capture to the retrieval of face-related information, are shown with examples.
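The abstract does not state the matching criterion the method uses; a common choice for evaluating region detectors, shown here purely as an assumption, is intersection-over-union (IoU) against ground-truth boxes, counting a detection as correct above a threshold.

```python
# Assumed IoU-based evaluation sketch; not necessarily the paper's criterion.
def iou(box_a, box_b):
    """Boxes as (x, y, w, h); returns intersection-over-union in [0, 1]."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(ax, bx)
    iy = max(ay, by)
    iw = max(0, min(ax + aw, bx + bw) - ix)
    ih = max(0, min(ay + ah, by + bh) - iy)
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def detection_rate(detections, ground_truths, threshold=0.5):
    """Fraction of ground-truth faces matched by at least one detection."""
    hits = sum(
        any(iou(d, g) >= threshold for d in detections)
        for g in ground_truths)
    return hits / len(ground_truths)

print(detection_rate([(10, 10, 50, 50)], [(12, 8, 48, 52)]))   # 1.0
```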

Characteristics of Formative Factor Influencing Robot Design's Preference Response (로봇디자인에 대한 선호 반응에 영향을 미치는 조형요소의 특성)

  • Heo, Seong-Cheol;Jung, Jung-Pil
    • Science of Emotion and Sensibility, v.11 no.4, pp.511-520, 2008
  • The fundamental goal of this study is to analyze the characteristics of the combined relations of the formative element factors composing a robot's face, based on preference responses to robot designs. From the analysis, the study also explores the possibility of suggesting design guidelines for improving preference. Pictures of 27 different robot faces were selected as experimental stimuli, and preference-response and association-response experiments were performed. The experiments yielded various findings, such as that a robot's eye shape has a greater influence than its facial structure. Based on the results, formative element factors that positively influence preference responses to a robot's face were identified and a basic design guideline was suggested: an eye should be oval, with a length-to-width ratio of 1.67:1; the distance between the eyes should be 35% of the facial width; the eyes should sit above the central latitude of the face for visual stability; the face should generally be rounded; and the eyes should harmonize with the face so that the robot appears cute and charming.
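The numeric guideline lends itself to a simple check; the sketch below encodes the reported values (1.67:1 eye aspect ratio, inter-eye distance of 35% of face width, eyes above the vertical center). The tolerance and the layout encoding are assumed for illustration.

```python
# Checks a candidate robot-face layout against the paper's guideline values.
# Tolerance and parameterization are assumptions, not from the paper.
def follows_guideline(face_w, face_h, eye_w, eye_h, eye_dist, eye_y,
                      tol=0.1):
    """eye_y is measured from the top of the face; smaller means higher."""
    ratio_ok = abs(eye_w / eye_h - 1.67) <= 1.67 * tol     # oval 1.67:1 eyes
    dist_ok = abs(eye_dist / face_w - 0.35) <= 0.35 * tol  # 35% of face width
    height_ok = eye_y < face_h / 2        # eyes above the central latitude
    return ratio_ok and dist_ok and height_ok

print(follows_guideline(face_w=100, face_h=120, eye_w=20, eye_h=12,
                        eye_dist=35, eye_y=45))   # True
```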

Development of Humanoid Robot Platform to Identify Biological Concepts of Children (유아의 생물 개념 발달 연구를 위한 인간형 로봇 플랫폼의 개발)

  • Kim, Minkyung;Shin, Youngkwang;Yi, Soonhyung;Lee, Donghun
    • The Journal of Korea Robotics Society, v.12 no.3, pp.376-384, 2017
  • In this paper, we describe a case of using robot technology in child studies to examine children's judgment and reasoning about life phenomena in boundary objects. To control for the effect of the robot's appearance, which children observe or interact with directly, on their judgment and reasoning, we developed a human-like robot. Unit experimental scenarios representing biological and psychological properties were implemented based on control of the robot's motion, speech, and facial expressions. Experimenters can combine these unit scenarios in a cascade to implement various human-robot interaction scenarios, as sketched below. Considering that the experimenters are researchers in child studies, there was a need for a remote operation console that non-experts in robotics can use easily. Using the developed platform, child-studies researchers could implement various scenarios by manipulating the robot's biological and psychological properties according to their research hypotheses. As a result, we could clearly see the effects of the robot's properties on children's understanding of boundary objects such as robots.
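The cascading of unit scenarios suggests a simple composition pattern; the sketch below is a hypothetical rendering of it, not the platform's actual API. All names (UnitScenario, robot.move, and so on) are invented for illustration.

```python
# Hypothetical sketch of cascading unit scenarios: each unit bundles motion,
# speech, and facial expression, and a session is an ordered cascade.
from dataclasses import dataclass
from typing import List

@dataclass
class UnitScenario:
    motion: str        # e.g. "wave_hand"
    speech: str        # utterance to synthesize
    expression: str    # facial expression to display

def run_cascade(robot, cascade: List[UnitScenario]) -> None:
    """Execute unit scenarios in order to form one experimental session."""
    for unit in cascade:
        robot.move(unit.motion)
        robot.say(unit.speech)
        robot.show(unit.expression)

# A "biological property" probe might cascade a greeting, an eating
# question, and a feeling question:
session = [
    UnitScenario("wave_hand", "Hello, my name is Robo.", "smile"),
    UnitScenario("idle", "Do you think I need food?", "neutral"),
    UnitScenario("tilt_head", "Can I feel sad?", "sad"),
]
```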

Rapid Implementation of 3D Facial Reconstruction from a Single Image on an Android Mobile Device

  • Truong, Phuc Huu;Park, Chang-Woo;Lee, Minsik;Choi, Sang-Il;Ji, Sang-Hoon;Jeong, Gu-Min
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.5, pp.1690-1710, 2014
  • In this paper, we propose a rapid implementation of 3-dimensional (3D) facial reconstruction from a single frontal face image and introduce a design for its application on a mobile device. The proposed system can effectively reconstruct human faces in 3D using an approach robust to lighting conditions and a fast method based on Canonical Correlation Analysis (CCA) to estimate depth. The reconstruction system is built by first creating a 3D facial mapping from a personal identity vector of a face image. This mapping is then applied to real-world images captured with a mobile device's built-in camera to form the corresponding 3D depth information. Finally, the facial texture is extracted from the face image and added to the reconstruction results. Experiments on an Android phone show that the system performs well as an Android application. The advantage of the proposed method is the easy 3D reconstruction of almost all facial images captured in the real world, with fast computation; this is clearly demonstrated in the Android application, which requires only a short time to reconstruct the 3D depth map.
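As a rough illustration of CCA-based depth regression (not the paper's pipeline, whose identity-vector features and training data are not given here), scikit-learn's CCA can be fit on paired feature/depth vectors and used to predict depth for a new face. The data shapes below are arbitrary stand-ins.

```python
# Sketch of CCA-based depth estimation on synthetic stand-in data; the
# paper's actual features and depth maps are not reproduced here.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))   # stand-in identity vectors
Y_train = rng.normal(size=(200, 32))   # stand-in flattened depth maps

cca = CCA(n_components=8)
cca.fit(X_train, Y_train)              # learn maximally correlated projections

x_new = rng.normal(size=(1, 64))       # feature vector of a new face image
depth_estimate = cca.predict(x_new)    # regress depth through the CCA model
print(depth_estimate.shape)            # (1, 32)
```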

Biosign Recognition based on the Soft Computing Techniques with application to a Rehab-type Robot

  • Lee, Ju-Jang
    • Proceedings of the Institute of Control, Robotics and Systems Conference, 2001.10a, pp.29.2-29, 2001
  • For the design of human-centered systems in which a human and a machine such as a robot form a human-in-the-loop system, human-friendly interaction and interfaces are essential. Human-friendly interaction is possible when the system can recognize human biosigns such as EMG signals, hand gestures, and facial expressions, so that human intention and/or emotion can be inferred and used as a proper feedback signal. In this talk, we report our experience applying soft computing techniques, including fuzzy logic, artificial neural networks, genetic algorithms, and rough set theory, to recognize various biosigns efficiently and to perform effective inference. More specifically, we first examine the characteristics of various biosigns and propose a new way of extracting feature sets for such signals. We then show a standardized procedure for inferring intention or emotion from the signals. Finally, we present application examples for our rehabilitation robot.
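As a toy illustration of the fuzzy-logic side of this toolbox (the talk's actual membership functions and rules are not given in the abstract), consider inferring grip intention from EMG amplitude; every function shape and constant below is invented.

```python
# Minimal fuzzy-inference sketch for one biosign channel (EMG amplitude);
# membership functions and the rule are illustrative assumptions.
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def grip_intention(emg_amplitude):
    """Fuzzy rule: the stronger the EMG 'high' membership, the stronger
    the inferred intention to grip (0 = none, 1 = full)."""
    low = triangular(emg_amplitude, 0.0, 0.0, 0.5)
    high = triangular(emg_amplitude, 0.3, 1.0, 1.7)
    # Weighted-average defuzzification over two singleton outputs.
    total = low + high
    return (low * 0.0 + high * 1.0) / total if total else 0.0

print(grip_intention(0.8))   # mostly "high" -> strong grip intention
```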

Development of an Emotion Recognition Robot using a Vision Method (비전 방식을 이용한 감정인식 로봇 개발)

  • Shin, Young-Geun;Park, Sang-Sung;Kim, Jung-Nyun;Seo, Kwang-Kyu;Jang, Dong-Sik
    • IE interfaces, v.19 no.3, pp.174-180, 2006
  • This paper deals with a robot system that recognizes a human's expression from a detected face and then displays the corresponding emotion. The face detection method is as follows. First, the RGB color space is converted to the CIELab color space. Second, candidate skin regions are extracted. Third, a face is detected through the geometric interrelation of facial features using a face filter. The positions of the eyes, nose, and mouth are then located, and the eyebrows, eyes, and mouth are used as the preliminary data for expression recognition. Changes in these features are sent to a robot through serial communication, and the robot operates its installed motors to show the human's expression. Experimental results on 10 persons show 78.15% accuracy.
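The abstract outlines the first steps of the detection pipeline; the sketch below illustrates the CIELab conversion and skin-candidate thresholding with OpenCV, using placeholder threshold values rather than the paper's, and a hypothetical input filename.

```python
# Sketch of the CIELab skin-candidate step in an assumed OpenCV pipeline;
# threshold bounds are illustrative placeholders to tune against real data.
import cv2
import numpy as np

img = cv2.imread("face.jpg")                      # hypothetical input image
assert img is not None, "face.jpg not found"
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)        # RGB (BGR) -> CIELab

# Skin tends to fall in a band of a* (green-red) and b* (blue-yellow).
lower = np.array([20, 130, 130], dtype=np.uint8)  # L, a, b lower bounds
upper = np.array([250, 175, 185], dtype=np.uint8)
skin_mask = cv2.inRange(lab, lower, upper)        # candidate skin regions

# Geometric face filtering (eye/nose/mouth layout) would then run on the
# connected components of skin_mask.
print("skin pixels:", int(np.count_nonzero(skin_mask)))
```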