• Title/Abstract/Keyword: human and robot tracking

Search results: 111 items (processing time: 0.028 s)

감시용 로봇의 시각을 위한 인공 신경망 기반 겹친 사람의 구분 (Dividing Occluded Humans Based on an Artificial Neural Network for the Vision of a Surveillance Robot)

  • 도용태
    • Journal of Institute of Control, Robotics and Systems / Vol. 15 No. 5 / pp.505-510 / 2009
  • In recent years, the space where a robot works has been expanding into human space, unlike traditional industrial robots that work only at fixed positions apart from humans. A human in this situation may be the owner of a robot or the target of a robotic application. This paper deals with the latter case: when a robot vision system is employed to monitor humans for a surveillance application, each person in a scene needs to be identified. Humans, however, often move together, and occlusions between them occur frequently. Although this problem has not been seriously tackled in the relevant literature, it brings difficulty into later image-analysis steps such as tracking and scene understanding. In this paper, a probabilistic neural network is employed to learn the patterns of the best dividing position along the top pixels of an image region of partly occluded people. As this method uses only shape information from an image, it is simple and can be implemented in real time.
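The probabilistic neural network mentioned above can be sketched as a Parzen-window classifier in the sense of Specht (1990). This is a minimal numpy illustration under our own assumptions: the toy "valley" and "ridge" height profiles, the kernel width, and the labelling are hypothetical, not taken from the paper.

```python
import numpy as np

class PNN:
    """Parzen-window probabilistic neural network (Specht-style sketch)."""
    def __init__(self, sigma=0.5):
        self.sigma = sigma
        self.patterns = {}          # class label -> (n_samples, n_features)

    def fit(self, X, y):
        for label in np.unique(y):
            self.patterns[label] = X[y == label]

    def predict(self, x):
        # class score = mean Gaussian kernel response over stored patterns
        scores = {}
        for label, P in self.patterns.items():
            d2 = np.sum((P - x) ** 2, axis=1)
            scores[label] = np.mean(np.exp(-d2 / (2 * self.sigma ** 2)))
        return max(scores, key=scores.get)

# Hypothetical training data: top-pixel height profiles around a column,
# labelled 1 if the column is a good dividing position (a valley between
# two heads), else 0 (a ridge, i.e. the middle of one person)
rng = np.random.default_rng(1)
valley = np.array([1.0, 0.4, 0.2, 0.4, 1.0])
ridge = np.array([0.4, 0.8, 1.0, 0.8, 0.4])
X = np.vstack([valley + rng.normal(0, 0.05, (20, 5)),
               ridge + rng.normal(0, 0.05, (20, 5))])
y = np.array([1] * 20 + [0] * 20)
net = PNN(sigma=0.3)
net.fit(X, y)
```

In a full system, each candidate column of the occluded region would yield one such profile, and the column classified as a valley with the highest score would be chosen as the dividing position.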

DIND Data Fusion with Covariance Intersection in Intelligent Space with Networked Sensors

  • Jin, Tae-Seok;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 7 No. 1 / pp.41-48 / 2007
  • Recent advances in networked sensor technology, mobile robotics, and artificial intelligence can be employed to develop autonomous, distributed monitoring systems. This study is a preliminary step toward developing a multi-purpose "Intelligent Space" (ISpace) platform on which advanced technologies can easily be implemented to provide smart services to humans. We explain the ISpace system architecture designed and implemented in this study and give only a short review of existing techniques, since several recent thorough books and review papers already cover them. Instead, we focus on the main results relevant to DIND data fusion with covariance intersection (CI) in Intelligent Space, and conclude by discussing possible future extensions of ISpace. We first deal with the general principles of the navigation and guidance architecture, and then with the detailed functions of tracking multiple objects, human detection, and motion assessment, together with results from the simulation runs.
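The covariance intersection rule referred to above fuses two estimates whose cross-correlation is unknown. A minimal numpy sketch follows; the toy measurements, covariances, and the coarse line search for the weight are our own assumptions for illustration, not values from the paper.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w):
    """CI fusion of two estimates with unknown cross-correlation."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * P1i + (1.0 - w) * P2i)
    x = P @ (w * P1i @ x1 + (1.0 - w) * P2i @ x2)
    return x, P

# Two sensor-node estimates of the same 2-D position (hypothetical numbers)
x1, P1 = np.array([1.0, 2.0]), np.diag([4.0, 1.0])
x2, P2 = np.array([1.2, 1.8]), np.diag([1.0, 4.0])

# Choose w to minimize the trace of the fused covariance (coarse search)
ws = np.linspace(0.01, 0.99, 99)
traces = [np.trace(covariance_intersection(x1, P1, x2, P2, w)[1]) for w in ws]
w_best = ws[int(np.argmin(traces))]
x, P = covariance_intersection(x1, P1, x2, P2, w_best)
```

The key property is that the fused covariance remains consistent for any correlation between the two sources, which is what makes CI attractive when DINDs share information over a network without bookkeeping of common history.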

On Motion Planning for Human-Following of Mobile Robot in a Predictable Intelligent Space

  • Jin, Tae-Seok;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 4 No. 1 / pp.101-110 / 2004
  • The robots that will be needed in the near future are human-friendly robots that are able to coexist with humans and support humans effectively. To realize this, humans and robots need to be in close proximity to each other as much as possible, and their interactions need to occur naturally. It is desirable for a robot to carry out human following, as one of the human-affinitive movements. A human-following robot requires several techniques: recognition of moving objects, feature extraction and visual tracking, and trajectory generation for following a human stably. In this research, a predictable intelligent space is used to achieve these goals. An intelligent space is a 3-D environment in which many sensors and intelligent devices are distributed. Mobile robots exist in this space as physical agents providing humans with services. A mobile robot is controlled to follow a walking human using distributed intelligent sensors as stably and precisely as possible. The moving object is assumed to be a point object and projected onto an image plane to form a geometrical constraint equation that provides position data of the object based on the kinematics of the intelligent space. Uncertainties in the position estimation caused by the point-object assumption are compensated using the Kalman filter. To generate the shortest-time trajectory to follow the walking human, the linear and angular velocities are estimated and utilized. Computer simulation and experimental results of estimating and following a walking human with the mobile robot are presented.
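The Kalman filtering step mentioned above can be sketched for one coordinate of a walking human under a constant-velocity model. This is a minimal illustration under our own assumptions (time step, noise levels, and the synthetic walk are hypothetical, not from the paper):

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
H = np.array([[1.0, 0.0]])              # position-only measurement
Q = np.diag([1e-4, 1e-3])               # process noise (model mismatch)
R = np.array([[0.05 ** 2]])             # measurement noise variance

def kf_step(x, P, z):
    """One predict/update cycle of the Kalman filter."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the noisy position measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Synthetic walk: constant speed 1.0 m/s, position measured with noise
rng = np.random.default_rng(0)
x_est, P = np.zeros(2), np.eye(2)
true_pos, true_vel = 0.0, 1.0
for _ in range(100):
    true_pos += true_vel * dt
    z = np.array([true_pos + rng.normal(0.0, 0.05)])
    x_est, P = kf_step(x_est, P, z)
```

The filtered velocity estimate is exactly what the abstract says is reused for generating the following trajectory: the robot needs the human's linear velocity, not just a noisy position track.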

화자의 긍정·부정 의도를 전달하는 실용적 텔레프레즌스 로봇 시스템의 개발 (Development of a Cost-Effective Tele-Robot System Delivering Speaker's Affirmative and Negative Intentions)

  • 진용규;유수정;조혜경
    • The Journal of Korea Robotics Society / Vol. 10 No. 3 / pp.171-177 / 2015
  • A telerobot offers a more engaging and enjoyable interaction with people at a distance by communicating via audio, video, expressive gestures, body pose, and proxemics. To provide these benefits at a reasonable cost, this paper presents a telepresence robot system for video communication which can deliver the speaker's head motion through its display stanchion. Head gestures such as nodding and head-shaking convey crucial information during conversation, and we can also estimate a speaker's eye gaze, known as one of the key non-verbal signals for interaction, from his or her head pose. To develop an efficient head-tracking method, a 3D cylinder-like head model is employed, and the Harris corner detector is combined with the Lucas-Kanade optical flow, which is known to be suitable for extracting 3D motion information of the model. In particular, a skin color-based face detection algorithm is proposed to achieve robust performance over varying head directions while maintaining reasonable computational cost. The performance of the proposed head-tracking algorithm is verified through experiments using BU's standard data sets. The design of the robot platform is also described, along with supporting systems such as the video transmission and robot control interfaces.
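The Lucas-Kanade step used above can be sketched in its simplest single-window form: solve a least-squares system built from image gradients to recover the displacement of a patch between two frames. This is a minimal numpy illustration on a synthetic blob; the window size and test image are our own assumptions, and a real tracker (as in the paper) would run this on Harris corners over a 3D head model.

```python
import numpy as np

def lucas_kanade(I0, I1, cx, cy, half=7):
    """Estimate (dx, dy) of the patch around (cx, cy) between two frames."""
    Iy, Ix = np.gradient(I0)        # spatial gradients (rows, cols)
    It = I1 - I0                    # temporal difference
    sl = (slice(cy - half, cy + half + 1), slice(cx - half, cx + half + 1))
    # Solve [Ix Iy] @ [dx dy]^T = -It over the window, in least squares
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow                     # (dx, dy)

# Synthetic frames: a smooth blob shifted one pixel to the right
y, x = np.mgrid[0:64, 0:64]
I0 = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 60.0)
I1 = np.exp(-((x - 33) ** 2 + (y - 32) ** 2) / 60.0)
dx, dy = lucas_kanade(I0, I1, 32, 32)
```

The 2x2 normal equations of this least-squares problem are well conditioned exactly where the Harris detector fires (both gradient directions present), which is why the two methods pair naturally.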

로봇 캐릭터와의 상호작용에서 사용자의 시선 배분 분석 (Analysis of User's Eye Gaze Distribution while Interacting with a Robotic Character)

  • 장세윤;조혜경
    • The Journal of Korea Robotics Society / Vol. 14 No. 1 / pp.74-79 / 2019
  • In this paper, we develop a virtual experimental environment to investigate users' eye gaze in human-robot social interaction and verify its potential for further studies. The system consists of a 3D robot character capable of hosting simple interactions with a user, and a gaze processing module recording which body part of the robot character, such as the eyes, mouth, or arms, the user is looking at, regardless of whether the robot is stationary or moving. To verify that the results acquired in this virtual environment are aligned with those of physically existing robots, we performed robot-guided quiz sessions with 120 participants and compared the participants' gaze patterns with those in previous works. The results included the following. First, when interacting with the robot character, the user's gaze pattern showed statistics similar to those of conversations between humans. Second, an animated mouth of the robot character received longer attention than a stationary one. Third, nonverbal interactions such as leakage cues were also effective in the interaction with the robot character, and the correct-answer ratios of the cued groups were higher. Finally, gender differences in the users' gaze were observed, especially in the frequency of mutual gaze.

Wearable Robot Arm의 제작 및 제어 (Design and Control of a Wearable Robot)

  • 정연구;김윤경;김경환;박종오
    • KSME Conference Proceedings / Proceedings of the KSME 2001 Spring Annual Meeting B / pp.277-282 / 2001
  • As human-friendly robot techniques improve, the concept of wearability of robotic arms becomes important. A master arm that detects human arm motion and provides virtual forces to the operator is an embodiment of a wearable robotic arm. In this study, we design a 7-DOF wearable robotic arm with high joint torques. An operator wearing this robotic arm can move around freely, because the arm was designed to have its fixed point at the operator's shoulder. The proposed robotic arm uses parallel mechanisms at the shoulder and wrist, modeled on the human muscular structure of the upper limb. To reduce the computational load in solving the forward kinematics and to prevent singular motions of the parallel mechanism, the yawing motion of the parallel mechanisms was separated using a slip ring mechanism. The total weight of the proposed robotic arm is about 4 kg. Experimental results of a force-tracking test for the pneumatic control system and an application example for a VR robot are described to show the validity of the robot.


유니사이클 로봇의 곡선경로 추종을 위한 퍼지규칙 (Fuzzy Rule for Curve Path Tracking of a Unicycle Robot)

  • 김중완;정희균
    • KSPE Conference Proceedings / Proceedings of the KSPE 1996 Autumn Conference / pp.425-429 / 1996
  • Our unicycle robot has a simple mechanical structure, but its dynamics are a very sensitive, unstable nonlinear system. In this paper, a fuzzy inference control mechanism was established through an inquiry into how a human rides a unicycle, and we developed a direct fuzzy controller to control our unicycle robot. The proposed fuzzy controller consists of fuzzy logic controllers for attitude stability and wheel velocity. Computer simulation results show that our fuzzy controller performs very well on the unstable nonlinear unicycle robot system.
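A direct fuzzy controller of the kind described above can be sketched as a small rule table over linguistic terms, evaluated with min-inference and weighted-average defuzzification. The membership shapes, the rule consequents, and the sign convention below are our own illustrative assumptions, not the paper's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    rise = (x - a) / (b - a) if b != a else 1.0
    fall = (c - x) / (c - b) if c != b else 1.0
    return max(0.0, min(rise, fall))

# Linguistic terms for normalized tilt error and tilt rate: N, Z, P
terms = {'N': (-1.0, -1.0, 0.0), 'Z': (-1.0, 0.0, 1.0), 'P': (0.0, 1.0, 1.0)}

# Rule table: (error term, rate term) -> crisp torque consequent (singleton)
rules = {('N', 'N'): -1.0, ('N', 'Z'): -0.7, ('N', 'P'): 0.0,
         ('Z', 'N'): -0.5, ('Z', 'Z'): 0.0, ('Z', 'P'): 0.5,
         ('P', 'N'): 0.0, ('P', 'Z'): 0.7, ('P', 'P'): 1.0}

def fuzzy_torque(err, rate):
    """Min-inference with weighted-average (singleton) defuzzification."""
    num = den = 0.0
    for (te, tr), out in rules.items():
        w = min(tri(err, *terms[te]), tri(rate, *terms[tr]))
        num += w * out
        den += w
    return num / den if den > 0.0 else 0.0
```

A second controller of the same shape, acting on wheel-velocity error, would complete the two-controller structure the abstract describes.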


Human Tracking using Multiple-Camera-Based Global Color Model in Intelligent Space

  • Jin Tae-Seok;Hashimoto Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 6 No. 1 / pp.39-46 / 2006
  • We propose a global color model based method for tracking the motions of multiple humans using a networked multiple-camera system in intelligent space, a human-robot coexistent system. An intelligent space is a space in which many intelligent devices, such as computers and sensors (color CCD cameras, for example), are distributed. Human beings can be a part of intelligent space as well. One of the main goals of intelligent space is to assist humans and to provide various services for them. To be capable of doing that, intelligent space must be able to perform various human-related tasks, one of which is to identify and track multiple objects seamlessly. In an environment where many camera modules are distributed over a network, it is important to identify an object in order to track it, because different cameras may be needed as the object moves throughout the space, and intelligent space should determine the appropriate one. This paper describes appearance-based unknown-object tracking with the distributed vision system in intelligent space. First, we discuss how object color information is obtained and how the color appearance based model is constructed from this data. Then, we discuss the global color model based on the local color information. The process of learning within the global model and the experimental results are also presented.
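The idea of a global color model built from local per-camera observations can be sketched with normalized color histograms matched by histogram intersection. This is a minimal numpy illustration; the simulated "reddish" and "bluish" pixel clouds, the bin count, and the simple averaging of local histograms are our own assumptions, not the paper's exact model.

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Normalized 3-D RGB histogram, flattened to a vector."""
    h, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=[(0, 256)] * 3)
    h = h.ravel()
    return h / h.sum()

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(2)
# Simulated appearance of one person seen from two cameras (reddish clothes)
view1 = rng.normal([200, 60, 60], 15, (500, 3)).clip(0, 255)
view2 = rng.normal([205, 55, 65], 15, (500, 3)).clip(0, 255)
# A different person (bluish clothes) seen by a third camera
other = rng.normal([60, 60, 200], 15, (500, 3)).clip(0, 255)

# Global model: average of the local (per-camera) histograms
model = 0.5 * (color_histogram(view1) + color_histogram(view2))
sim_same = intersection(model, color_histogram(view2))
sim_other = intersection(model, color_histogram(other))
```

When the person crosses into a new camera's field of view, the new local histogram is matched against the stored global models to decide which existing identity the detection belongs to, which is the hand-over problem the abstract raises.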

화학물질 저장시설의 사고대응 및 훈련을 위한 로봇기반 누출감지 및 추적시스템 (Mobile Robot-based Leak Detection and Tracking System for Advanced Response and Training to Hazardous Materials Incidents)

  • 박명남;김창완;김태옥;신동일
    • Journal of the Korean Institute of Gas / Vol. 23 No. 2 / pp.17-27 / 2019
  • With the growing use of chemicals, leaks of hazardous materials and toxic gases are occurring frequently. Among such incidents, accidents at hazardous-material storage facilities make the initial response upon detection of a leak the most critical step; however, because that response relies heavily on the operator's experience, wrong judgment can easily lead to greater property damage and casualties. Moving away from the current approach, in which a passive response is taken after an alarm from fixed detectors, this study designed an active leak-source tracking system using mobile sensors running on a robot platform that can be easily built with open-source technologies. In addition, through verification of a prototype system, we aimed to lay the groundwork for minimizing the spread and damage of accidents, based on accurate on-site situation assessment and early response in the initial stage of a leak.
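Active leak-source tracking with a mobile sensor is often framed as chemotaxis: repeatedly sample the concentration field near the robot and move up the local gradient. The sketch below is a minimal numpy illustration under our own assumptions (an idealized Gaussian plume, noiseless readings, and a fixed step size); the paper's actual system would work with real sensor readings and obstacle-aware navigation.

```python
import numpy as np

SOURCE = np.array([8.0, 3.0])   # hypothetical leak location

def concentration(p):
    """Idealized steady-state plume: Gaussian falloff from the source."""
    return np.exp(-np.sum((p - SOURCE) ** 2) / 20.0)

def seek_source(start, step=0.5, iters=200, eps=0.1):
    """Move the robot along the locally sensed concentration gradient."""
    p = np.array(start, dtype=float)
    for _ in range(iters):
        # finite-difference gradient from paired sensor readings per axis
        g = np.array([(concentration(p + eps * e) - concentration(p - eps * e))
                      / (2 * eps) for e in np.eye(2)])
        n = np.linalg.norm(g)
        if n < 1e-8:
            break
        p += step * g / n           # unit step uphill
    return p

p_final = seek_source([0.0, 0.0])
```

With a fixed step the robot ends up circling within about half a step of the source; a practical system would shrink the step, or declare the source found, once the sensed concentration stops increasing.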

얼굴로봇 Buddy의 기능 및 구동 메커니즘 (Functions and Driving Mechanisms for Face Robot Buddy)

  • 오경균;장명수;김승종;박신석
    • The Journal of Korea Robotics Society / Vol. 3 No. 4 / pp.270-277 / 2008
  • The development of a face robot basically targets very natural human-robot interaction (HRI), especially emotional interaction, and so does the face robot introduced in this paper, named Buddy. Since Buddy was developed for a mobile service robot, it does not have a lifelike face such as a human's or an animal's, but a typically robot-like face with hard skin, which may be suitable for mass production. Moreover, its structure and mechanism should be simple, and its production cost low enough. This paper introduces the mechanisms and functions of the mobile face robot Buddy, which can take on natural and precise facial expressions and make dynamic gestures, driven by one laptop PC. Buddy can also perform lip-sync, eye contact, and face tracking for lifelike interaction. By adopting a customized emotional reaction decision model, Buddy can form its own personality, emotions, and motives using various sensor inputs. Based on this model, Buddy can interact properly with users and perform real-time learning using personality factors. The interaction performance of Buddy is successfully demonstrated by experiments and simulations.
