• Title/Summary/Keyword: Robot skin

Search results: 60

Development of an Emotion Recognition Robot using a Vision Method (비전 방식을 이용한 감정인식 로봇 개발)

  • Shin, Young-Geun;Park, Sang-Sung;Kim, Jung-Nyun;Seo, Kwang-Kyu;Jang, Dong-Sik
    • IE interfaces
    • /
    • v.19 no.3
    • /
    • pp.174-180
    • /
    • 2006
  • This paper describes a robot system that recognizes a human's facial expression from a detected face and then reproduces the corresponding emotion. The face detection method is as follows. First, the image is converted from the RGB color space to the CIELab color space. Second, a skin-color candidate region is extracted. Third, a face is detected from the geometrical interrelation of facial features using a face filter. The positions of the eyebrows, eyes, and mouth are then extracted as the preliminary data for expression recognition, and the changes of these features are sent to a robot through serial communication. The robot drives its installed motors to reproduce the human's expression. Experimental results on 10 persons show 78.15% accuracy.
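The color-space step of the pipeline above can be sketched in Python. The sRGB-to-CIELab conversion below is the standard formula (D65 white point); the threshold ranges on L, a, and b in the skin gate are illustrative assumptions for demonstration, not the paper's actual parameters.

```python
import math

def rgb_to_lab(r, g, b):
    """Convert an 8-bit sRGB pixel to CIELab (D65 reference white)."""
    def inv_gamma(c):
        c = c / 255.0
        return ((c + 0.055) / 1.055) ** 2.4 if c > 0.04045 else c / 12.92
    r, g, b = inv_gamma(r), inv_gamma(g), inv_gamma(b)
    # Linear sRGB -> CIE XYZ (D65).
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # Normalize by the D65 reference white.
    x, y, z = x / 0.95047, y / 1.0, z / 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x), f(y), f(z)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def is_skin_candidate(r, g, b):
    """Illustrative skin gate: moderate lightness, positive a* (redness)
    and positive b* (yellowness); thresholds are assumed, not the paper's."""
    L, a, b2 = rgb_to_lab(r, g, b)
    return 30 < L < 90 and 5 < a < 35 and 5 < b2 < 35
```

A production detector would learn these ranges from labeled skin samples rather than hard-coding them.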

Development of FACS-based Android Head for Emotional Expressions (감정표현을 위한 FACS 기반의 안드로이드 헤드의 개발)

  • Choi, Dongwoon;Lee, Duk-Yeon;Lee, Dong-Wook
    • Journal of Broadcast Engineering
    • /
    • v.25 no.4
    • /
    • pp.537-544
    • /
    • 2020
  • This paper proposes the creation of an android robot head based on the Facial Action Coding System (FACS) and the generation of emotional expressions by FACS. The term android robot refers to robots with a human-like appearance; such robots have artificial skin and muscles. To express emotions, the location and number of artificial muscles had to be determined, so the motions of the human face were analyzed anatomically with FACS. In FACS, expressions are composed of action units (AUs), which serve as the basis for determining the location and number of artificial muscles in the robot. The android head developed in this study has servo motors and wires corresponding to 30 artificial muscles, and is covered with artificial skin in order to make facial expressions. Spherical joints and springs were used to develop micro-eyeball structures, and the arrangement of the 30 servo motors was based on an efficient wire-routing design. The developed android head has 30 DOFs and can express 13 basic emotions. The recognition rate of these basic emotional expressions was evaluated by spectators at an exhibition.
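As a rough sketch of how FACS can drive such a head, the mapping below composes an emotion from AUs and an AU from servo channels. The AU sets are simplified from common FACS descriptions, and the servo names and channel counts are hypothetical, not the paper's actual 30-motor layout.

```python
# Hypothetical AU -> servo-channel mapping (names are illustrative).
AU_TO_SERVOS = {
    "AU1":  ["inner_brow_L", "inner_brow_R"],    # inner brow raiser
    "AU4":  ["brow_lower_L", "brow_lower_R"],    # brow lowerer
    "AU12": ["lip_corner_L", "lip_corner_R"],    # lip corner puller (smile)
    "AU15": ["lip_depress_L", "lip_depress_R"],  # lip corner depressor
}

# Illustrative emotion -> AU compositions, simplified from FACS literature.
EMOTION_TO_AUS = {
    "happiness": ["AU12"],
    "sadness":   ["AU1", "AU4", "AU15"],
}

def servo_targets(emotion, intensity=1.0):
    """Return {servo: position in [0, 1]} for the AUs composing an emotion."""
    targets = {}
    for au in EMOTION_TO_AUS.get(emotion, []):
        for servo in AU_TO_SERVOS.get(au, []):
            targets[servo] = intensity
    return targets
```

In a real head each AU would map to a calibrated wire displacement per servo rather than a single shared intensity.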

Dynamic analysis and control of a robot leg with a shock absorber (완충기를 가진 로봇다리의 동역학 해석 및 동적 보행제어)

  • Oh, Chang-Geun;Kang, Sung-Chul;Lee, Soo-Yong;Kim, Mun-Sang;Yoo, Hong-Hee
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.22 no.4
    • /
    • pp.768-778
    • /
    • 1998
  • Human beings usually absorb shocks from the terrain during walking through the damping effects of their joints, muscles, and skin. By this analogy, a robot leg with a shock absorber is built to absorb the impact forces at its foot during high-speed walking on irregular terrain. To control the hip position while walking, a dynamic controller suited to high-speed walking is designed and implemented based on a dynamic model derived by Kane's equations. The hip-position tracking performance of various controllers (a PID controller, a computed-torque controller, and a feedforward torque controller) is compared through experiments on the real robot leg.
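The PID variant among the compared controllers can be sketched as follows. The plant here is a toy first-order model standing in for the hip dynamics, and the gains and time step are illustrative choices, not the paper's tuned values.

```python
class PID:
    """Discrete PID controller with a fixed time step (illustrative gains)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def simulate_hip_tracking(setpoint=0.1, steps=500, dt=0.001):
    """Track a hip-position step on a toy first-order plant x' = -x + u."""
    pid = PID(kp=80.0, ki=40.0, kd=0.05, dt=dt)
    x = 0.0
    for _ in range(steps):
        u = pid.update(setpoint, x)
        x += (-x + u) * dt  # forward-Euler integration of the plant
    return x
```

The integral term removes the steady-state offset that a purely proportional controller would leave against the plant's restoring dynamics.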

Gesture Extraction for Ubiquitous Robot-Human Interaction (유비쿼터스 로봇과 휴먼 인터액션을 위한 제스쳐 추출)

  • Kim, Moon-Hwan;Joo, Young-Hoon;Park, Jin-Bae
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.12
    • /
    • pp.1062-1067
    • /
    • 2005
  • This paper discusses a skeleton feature extraction method for a ubiquitous robot system. The skeleton features are used to analyze human motion and estimate pose. Unlike conventional feature extraction environments, the ubiquitous robot system requires a more robust feature extraction method because it suffers from internal vibration and low image quality. A new hybrid silhouette extraction method and an adaptive skeleton model are proposed to overcome this constrained environment. Skin color is used to extract more precise feature points. Finally, experimental results show the superiority of the proposed method.

Modeling of Superficial Pain using ANNs

  • Matsunaga, Nobutomo;Kuroki, Asayo;Kawaji, Shigeyasu
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2005.06a
    • /
    • pp.1293-1298
    • /
    • 2005
  • In environments where humans coexist with robots, safety is very important. Unlike factory automation, however, it is difficult to separate the robot from the human in the time domain or the space domain, so a new concept is needed. One approach is to attend to the sensory and emotional feelings of the human; this study focuses on "pain," a typical unpleasant feeling that arises when a robot contacts us. In this paper, to design a controller based on pain, an artificial superficial pain model (ASPM) caused by impact is proposed. The ASPM consists of a mechanical pain model, a skin model, and gate control by artificial neural networks (ANNs). The proposed ASPM is evaluated by experiments.

Hand gesture based a pet robot control (손 제스처 기반의 애완용 로봇 제어)

  • Park, Se-Hyun;Kim, Tae-Ui;Kwon, Kyung-Su
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.13 no.4
    • /
    • pp.145-154
    • /
    • 2008
  • In this paper, we propose a pet robot control system that uses hand gesture recognition on image sequences acquired from a camera affixed to the pet robot. The proposed system consists of four steps: hand detection, feature extraction, gesture recognition, and robot control. The hand region is first detected from the input images using a skin color model in the HSI color space and connected-component analysis. Next, hand shape and motion features are extracted from the image sequences; the hand shape is used to classify meaningful gestures. The hand gesture is then recognized using HMMs (hidden Markov models), whose input is the symbol sequence quantized from the hand motion. Finally, the pet robot is controlled by the command corresponding to the recognized gesture. We define four commands (sit down, stand up, lie flat, and shake hands) for control of the pet robot, and experiments show that a user can control the pet robot through the proposed system.

Application of Multiple Fuzzy-Neuro Controllers of an Exoskeletal Robot for Human Elbow Motion Support

  • Kiguchi, Kazuo;Kariya, Shingo;Watanabe, Keigo;Fukuda, Toshio
    • Transactions on Control, Automation and Systems Engineering
    • /
    • v.4 no.1
    • /
    • pp.49-55
    • /
    • 2002
  • A decrease in the birthrate and aging are progressing in Japan and several other countries. In such a society, it is important that physically weak persons, such as elderly persons, are able to take care of themselves. We have been developing exoskeletal robots for motion support of humans, especially physically weak persons. In this study, the controller regulates the angular position and impedance of the exoskeletal robot system using multiple fuzzy-neuro controllers based on biological signals that reflect the human subject's intention. Skin-surface electromyogram (EMG) signals and the wrist force generated by the human subject during elbow motion are used as input information to the controller. Since the activation level of the working muscles tends to vary with the elbow flexion angle, multiple fuzzy-neuro controllers are applied in the proposed method and are smoothly switched in accordance with the elbow flexion angle. Because of the adaptation ability of the fuzzy-neuro controllers, the exoskeletal robot is flexible enough to deal with biological signals such as EMG. The experimental results show the effectiveness of the proposed controller.
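The angle-dependent switching between multiple controllers can be sketched as membership-weighted blending: each controller's output is weighted by how strongly the current elbow angle belongs to its region. The triangular membership functions and the two stub gain controllers below are illustrative assumptions, not the paper's fuzzy-neuro networks.

```python
def triangular(x, left, center, right):
    """Triangular fuzzy membership function on [left, right], peaking at center."""
    if x <= left or x >= right:
        return 0.0
    if x <= center:
        return (x - left) / (center - left)
    return (right - x) / (right - center)

def blended_torque(elbow_angle_deg, emg_level, controllers, memberships):
    """Blend controller outputs, weighted by membership at the current angle."""
    weights = [m(elbow_angle_deg) for m in memberships]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    outputs = [c(emg_level) for c in controllers]
    return sum(w * o for w, o in zip(weights, outputs)) / total

# Two stub controllers for low/high flexion regions (gains are made up).
controllers = [lambda emg: 2.0 * emg, lambda emg: 1.2 * emg]
memberships = [
    lambda a: triangular(a, -1.0, 0.0, 90.0),   # active near full extension
    lambda a: triangular(a, 0.0, 90.0, 181.0),  # active near full flexion
]
```

Overlapping memberships are what make the hand-off between controllers smooth rather than a hard switch.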

Verification of Effectiveness of Wearing Compression Pants in Wearable Robot Based on Bio-signals (생체신호에 기반한 웨어러블 로봇 내 부분 압박 바지 착용 시 효과 검증)

  • Park, Soyoung;Lee, Yejin
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.45 no.2
    • /
    • pp.305-316
    • /
    • 2021
  • In this study, the effect of wearing functional compression pants inside a lower-limb wearable robot is verified through bio-signal analysis and a subjective fit evaluation. First, the compression area to be applied to the functional compression pants is derived using the quad method for nine men in their 20s. Subsequently, functional compression pants are prepared, and changes in electroencephalogram (EEG) and electrocardiogram (ECG) signals are measured while wearing either the functional compression pants or normal pants inside a wearable robot; the signals are measured with eyes closed and with eyes open. Results indicate that the relative alpha (RA) and relative gamma (RG) waves of the EEG signal differ significantly, indicating increased stability and reduced anxiety and stress when wearing the functional compression pants. Furthermore, the ECG analysis shows statistically significant differences in the low-frequency (LF)/high-frequency (HF) index, which reflects the overall balance of the autonomic nervous system and can be interpreted as feeling comfortable and balanced when wearing the functional compression pants. The subjective evaluation likewise finds the functional compression pants effective in terms of wear fit, ease of movement, skin friction, and wear comfort.

Adaptive Skin Color Segmentation in a Single Image using Image Feedback (영상 피드백을 이용한 단일 영상에서의 적응적 피부색 검출)

  • Do, Jun-Hyeong;Kim, Keun-Ho;Kim, Jong-Yeol
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.3
    • /
    • pp.112-118
    • /
    • 2009
  • Skin color segmentation techniques have been widely utilized for face/hand detection and tracking in many applications, such as diagnosis systems using facial information, human-robot interaction, and image retrieval systems. In the case of video, the skin color model for a target is commonly updated every frame for robust target tracking against illumination changes. For a single image, however, most studies employ a fixed skin color model, which may result in a low detection rate or high false-positive errors. In this paper, we propose a novel method for effective skin color segmentation in a single image, which iteratively modifies the conditions for skin color segmentation via feedback of the segmented skin color region in the given image.
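The feedback idea can be sketched in one dimension: classify with a broad initial range, re-estimate the color statistics from the pixels just classified as skin, and tighten the range from those statistics. The scalar hue-like feature and all numeric parameters below are illustrative assumptions, not the paper's method details.

```python
import statistics

def adaptive_skin_threshold(pixels, lo=0.0, hi=60.0, k=2.0, iters=3):
    """Iteratively refine a 1-D skin-color interval by image feedback:
    pixels inside the current interval are treated as skin, and their
    mean +/- k standard deviations define the interval for the next pass."""
    for _ in range(iters):
        skin = [p for p in pixels if lo <= p <= hi]
        if len(skin) < 2:
            break  # not enough evidence to re-estimate the model
        mu = statistics.fmean(skin)
        sigma = statistics.stdev(skin)
        lo, hi = mu - k * sigma, mu + k * sigma
    return lo, hi
```

Starting broad and tightening lets the model adapt to the image's own illumination instead of relying on a fixed global skin model.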

Multi-legged robot system enabled to decide route and recognize obstacle based on hand posture recognition (손모양 인식기반의 경로교사와 장애물 인식이 가능한 자율보행 다족로봇 시스템)

  • Kim, Min-Sung;Jeong, Woo-Won;Kwan, Bae-Guen;Kang, Dong-Joong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.8
    • /
    • pp.1925-1936
    • /
    • 2010
  • In this paper, a multi-legged robot was designed and built using a stable walking-pattern algorithm. The robot has an embedded camera and wireless communication, and can recognize both hand postures and obstacles. The algorithm decides the moving path, and recognizes and avoids obstacles through the Hough transform applied to edges detected in the images from the image sensor. To decide the destination, the robot is commanded by hand postures, which are recognized using the Mahalanobis distance from previously learned average skin-color pixel values. The developed system showed an obstacle detection rate of 96% and a hand posture recognition rate of 94%.
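The skin-color gate used for hand-posture recognition can be sketched with a Mahalanobis distance under a diagonal-covariance skin model. The mean, variance, and threshold values below are illustrative assumptions, not statistics learned from the paper's training data.

```python
import math

def mahalanobis_diag(x, mean, var):
    """Mahalanobis distance assuming a diagonal covariance matrix."""
    return math.sqrt(sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mean, var)))

# Illustrative skin-color model in a (Cb, Cr)-like chrominance space.
SKIN_MEAN = (110.0, 150.0)
SKIN_VAR = (64.0, 64.0)

def is_skin_pixel(cb_cr, threshold=2.5):
    """Accept a pixel as skin if it lies within `threshold` Mahalanobis
    units of the learned mean (the threshold is an illustrative choice)."""
    return mahalanobis_diag(cb_cr, SKIN_MEAN, SKIN_VAR) < threshold
```

Dividing by per-channel variance is what distinguishes this from a plain Euclidean distance: channels the training data shows to be noisy count for less.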