• Title/Summary/Keyword: Avatar Robot


Design of a Robot-in-the-Loop Simulation Based on OPRoS (OPRoS 기반 로봇시스템의 Robot-in-the-Loop Simulation 구조)

  • Kim, Seong-Hoon;Park, Hong Seong
    • Journal of Institute of Control, Robotics and Systems / v.19 no.3 / pp.248-255 / 2013
  • This paper proposes an architecture for Robot-in-the-Loop Simulation (RILS) consisting of the real robot, a virtual robot, and an avatar robot, a type of virtual robot that operates according to the real robot's status and behavior. A synchronization algorithm for the mobile part of the avatar robot is also proposed, which reduces the difference between the behaviors of the robot and those of the avatar robot. This difference arises from environmental and mechanical mismatches between the two robots. To reduce it, the synchronization algorithm controls the avatar robot based on data observed from the robot's behavior. The proposed architecture and synchronization algorithm are validated through simulation results.
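The correction idea described in the abstract, driving the avatar robot toward the observed state of the real robot so mismatches do not accumulate, can be sketched as a simple feed-forward plus proportional law. This is an illustrative sketch only: the function name, the control law, and all numbers are assumptions, not the authors' actual algorithm.

```python
# Hypothetical sketch: command the avatar (virtual) robot with the real
# robot's velocity plus a proportional correction toward the real robot's
# observed pose, so the pose error shrinks over time.

def sync_avatar(avatar_pose: float, robot_pose: float,
                robot_velocity: float, gain: float = 0.5) -> float:
    """Return a velocity command for the avatar robot."""
    return robot_velocity + gain * (robot_pose - avatar_pose)

# One-dimensional simulation: the avatar starts 0.2 m behind the robot.
dt = 0.1
avatar, robot, v_robot = 0.0, 0.2, 1.0
for _ in range(50):
    avatar += sync_avatar(avatar, robot, v_robot) * dt
    robot += v_robot * dt
print(round(robot - avatar, 3))  # residual error after 50 steps
```

With this law the error contracts geometrically (by a factor of 1 − gain·dt per step), which is the qualitative behavior the abstract claims for its synchronization.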

3-Finger Robotic Hand and Hand Posture Mapping Algorithm for Avatar Robot (아바타 로봇을 위한 3지 로봇 손과 손 자세 맵핑 알고리즘)

  • Kim, Seungyeon;Sung, Eunho;Park, Jaeheung
    • The Journal of Korea Robotics Society / v.17 no.3 / pp.322-333 / 2022
  • The avatar robot, a type of teleoperation robot, aims to let users feel the robot as a part of their own body so they can perform various tasks intuitively and naturally. Given this purpose, an end-effector identical to a human hand would be advantageous, but a robotic hand with human-hand-level performance has not yet been developed. In this paper we propose a new 3-finger robotic hand with a human-avatar hand posture mapping algorithm, integrated with TOCABI-AVATAR, a teleoperation system. Thanks to the flexible rolling-contact joints and tendon-driven mechanism applied to the fingers, the hand can perform adaptive grasping and absorb impact forces caused by unexpected contacts. In addition, a human-avatar hand mapping algorithm using five calibration hand postures is proposed to compensate for physical differences between operators. Using the TOCABI-AVATAR system with the robotic hands and the mapping algorithm, an operator can perform 13 of the 16 hand postures of the grasping taxonomy as well as 4 gestures. With this system, we participated in the ANA Avatar XPRIZE semifinal and successfully performed three scenarios involving various social interactions as well as object manipulation.

EasyLab : An Avatar Robot for Algorithm Education (알고리즘 교육을 위한 아바타 로봇 : EasyLab)

  • Park Young-Mok;Kim Ho-Yong;Seo Yeong-Geon
    • Journal of Digital Contents Society / v.5 no.1 / pp.35-40 / 2004
  • Today's education follows the 7th national curriculum, but few tools are available in the classroom to supplement it. EasyLab is a GUI programming tool for students who are not skilled at using computers, used in the classroom as a tool to foster the ingenuity and creativity that the current curriculum calls for. Learners first think of a programming idea and then implement it in EasyLab's icon-based software. After programming, the learner can directly see the result through the program code delivered to EasyRobot, and can study and discuss the robot's behavior. If the result is incorrect, the robot gives feedback according to its rules. A distinctive feature of EasyLab is its icon-based flow-chart model, and learners can practice with a robot equipped with input and output sensors.


Vision-based Human-Robot Motion Transfer in Tangible Meeting Space (실감만남 공간에서의 비전 센서 기반의 사람-로봇간 운동 정보 전달에 관한 연구)

  • Choi, Yu-Kyung;Ra, Syun-Kwon;Kim, Soo-Whan;Kim, Chang-Hwan;Park, Sung-Kee
    • The Journal of Korea Robotics Society / v.2 no.2 / pp.143-151 / 2007
  • This paper deals with a tangible interface system that introduces a robot as a remote avatar. It focuses on a new method that makes a robot imitate human arm motions captured in a remote space. Our method is functionally divided into two parts: capturing human motion and adapting it to the robot. For the capturing part, we propose a modified potential function of metaballs for real-time performance and high accuracy. For the adapting part, we suggest a geometric scaling method to resolve the structural difference between a human and a robot. With this method, we implemented a tangible interface and demonstrated its speed and accuracy through tests.

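The "geometric scaling" step mentioned in the abstract above can be illustrated with a minimal sketch: a shoulder-relative target position captured from the human is scaled by the ratio of the robot's arm length to the human's, so the robot reaches a structurally equivalent point. The function name and all lengths are assumed values, not the paper's actual method.

```python
# Illustrative sketch: scale a human's shoulder-relative wrist position
# into a shorter robot arm's workspace by the ratio of total arm lengths.

def scale_to_robot(human_wrist, human_arm_len, robot_arm_len):
    """Scale a shoulder-relative wrist position (x, y, z) to the robot."""
    s = robot_arm_len / human_arm_len
    return tuple(s * c for c in human_wrist)

# Human wrist 0.6 m reach; robot arm is half the human arm's length.
print(scale_to_robot((0.3, 0.4, 0.33), 0.73, 0.365))
```

Uniform scaling preserves the direction of the reach, which is why it is a common first approximation when human and robot arms differ only in link lengths.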

Emotion Recognition and Expression System of Robot Based on 2D Facial Image (2D 얼굴 영상을 이용한 로봇의 감정인식 및 표현시스템)

  • Lee, Dong-Hoon;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.4 / pp.371-376 / 2007
  • This paper presents an emotion recognition and expression system for an intelligent robot such as a home or service robot. The robot recognizes emotion from facial images, using the motion and position of multiple facial features. A tracking algorithm is applied so the mobile robot can recognize a moving user, and a face-region detection algorithm removes the skin color of the hands and the background outside the facial region from the captured user image. After normalization, which enlarges or reduces the image according to the distance to the detected face region and rotates it according to the angle of the face, the mobile robot obtains a facial image of fixed size. A multi-feature selection algorithm is implemented to enable the robot to recognize the user's emotion. A multilayer perceptron, a form of artificial neural network (ANN), is used for pattern recognition, with the back-propagation (BP) algorithm for learning. The emotion recognized by the robot is expressed on a graphic LCD: two coordinates change according to the emotion output of the ANN, and the parameters of the facial elements (eyes, eyebrows, mouth) change with these coordinates. Through this system, complex human emotions are expressed by the avatar on the LCD.

A Motion Capture and Mapping System: Kinect Based Human-Robot Interaction Platform (동작포착 및 매핑 시스템: Kinect 기반 인간-로봇상호작용 플랫폼)

  • Yoon, Joongsun
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.12 / pp.8563-8567 / 2015
  • We propose a human-robot interaction (HRI) platform based on motion capture and mapping. The platform consists of capture, processing/mapping, and action parts. A motion capture sensor, a computer, and avatar and/or physical robots serve as the capture, processing/mapping, and action parts, respectively. Case studies, an interactive presentation and a LEGO robot car, are presented to show the design and implementation process of the Kinect-based HRI platform.
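The processing/mapping part of such a platform typically converts captured skeleton joints into joint angles that an avatar or servo can replay. The sketch below computes an elbow angle from three 3-D joint positions; the function name and the sample coordinates are made up for illustration and are not taken from the paper.

```python
import math

# Illustrative sketch of skeleton-to-angle mapping: the angle at the elbow
# is the angle between the upper-arm and forearm vectors.

def elbow_angle(shoulder, elbow, wrist):
    """Angle (degrees) at the elbow formed by the upper arm and forearm."""
    u = [s - e for s, e in zip(shoulder, elbow)]   # elbow -> shoulder
    v = [w - e for w, e in zip(wrist, elbow)]      # elbow -> wrist
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.degrees(math.acos(dot / (nu * nv)))

# Right angle: forearm perpendicular to the upper arm.
print(round(elbow_angle((0, 0, 0), (0.3, 0, 0), (0.3, 0.25, 0)), 1))  # 90.0
```

A stream of such angles, one per tracked joint per frame, is what the action part (avatar or physical robot) would consume.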

The Effect on the Contents of Self-Disclosure Activities using Ubiquitous Home Robots (자기노출 심리를 이용한 유비쿼터스 로봇 콘텐츠의 효과)

  • Kim, Su-Jung;Han, Jeong-Hye
    • Journal of The Korean Association of Information Education / v.12 no.1 / pp.57-63 / 2008
  • This study uses identification, a critical component of psychological mechanisms that enables substituting one's own self, driven by the needs of self-expression (self-disclosure) and creation. It aims to improve educational effects by increasing the sense of virtual reality and attention. Computer-based contents were developed, converted for use on a robot, and combined with each student's photo and avatar through automatic loading; the contents were then applied to the students. The results indicate that the identification-based contents had significant effects: they improved students' attention, but not their academic achievement. The study thus demonstrates the effect of applying identification with an educational robot. We suggest that improving augmented-virtuality techniques such as face detection and sensitive interaction could yield more specific educational effects in further research.


Sign Language Avatar System Based on Hyper Sign Sentence (하이퍼 수화문장을 사용한 수화 생성 시스템)

  • Oh Young-Joon;Park Kwang-Hyun;Jang Hyo-Young;Kim Dae-Jin;Jung Jin-Woo;Bien Zeung-Nam
    • Proceedings of the Korea Information Processing Society Conference / 2006.05a / pp.621-624 / 2006
  • This paper points out the limited processing performance of existing sign language generation systems and their problems with body-element movements, and proposes hyper sign sentences to address them. A hyper sign sentence extends the structure of a conventional sign sentence and is composed of sign words and motion symbols for body elements. Following the proposed generation method, we present a system that synthesizes sign motions by concatenating hyper sign word segments and generates avatar movements for a sign sentence that closely resemble those of an actual signer.


A Brain-Computer Interface Based Human-Robot Interaction Platform (Brain-Computer Interface 기반 인간-로봇상호작용 플랫폼)

  • Yoon, Joongsun
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.11 / pp.7508-7512 / 2015
  • We propose a brain-machine interface (BMI) based human-robot interaction (HRI) platform that operates machines by capturing brain waves and interpreting the user's intentions. The platform consists of capture, processing/mapping, and action parts. A noninvasive brain-wave sensor, a PC, and a robot avatar/LEDs/motors serve as the capture, processing/mapping, and action parts, respectively. Various investigations were carried out to establish the relations between intentions and sensed brain waves. Case studies, an interactive game, on-off control of LEDs, and motor control, are presented to show the design and implementation process of the new BMI-based HRI platform.

Improvement of Sign Word Dictionary for Korean Sign Language Avatar (한국 수화 아바타를 위한 수화 사전의 개선 방법)

  • Oh, Young-Joon;Park, Kwang-Hyun;Bien, Zeung-Nam
    • Proceedings of the Korean Information Science Society Conference / 2007.06c / pp.167-170 / 2007
  • This paper addresses the handling of homonyms so that a sign language avatar can convey accurate meaning while expressing natural sign motions like an actual deaf signer. We propose a method to improve the sign word dictionary by adding part-of-speech information to the existing dictionary and using a Korean morphological analyzer to distinguish homonyms.
