• Title/Abstract/Keywords: human and robot tracking

Search results: 111 items (processing time: 0.02 s)

Color Pattern Recognition and Tracking for Multi-Object Tracking in Artificial Intelligence Space (인공지능 공간상의 다중객체 구분을 위한 컬러 패턴 인식과 추적)

  • 진태석
    • 한국산업융합학회 논문집 / Vol. 27, No. 2_2 / pp.319-324 / 2024
  • In this paper, the Artificial Intelligence Space (AI-Space) for human-robot interfacing is presented, which can enable human-computer interfacing, networked camera conferencing, industrial monitoring, service, and training applications. We present a method for representing, tracking, and following objects (human, robot, chair) by fusing distributed multiple vision systems in AI-Space. The article presents the integration of color distributions into particle filtering. Particle filters provide a robust tracking framework under ambiguous conditions. We propose to track the moving objects (human, robot, chair) by generating hypotheses not in the image plane but on the top-view reconstruction of the scene.
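
A minimal sketch of how color distributions can be folded into a particle filter on the top-view plane, in the spirit of the abstract above: each particle is a ground-plane hypothesis, weighted by the Bhattacharyya similarity between a reference color histogram and the histogram observed at that hypothesis. The motion model, histogram binning, and the toy observation function are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative color-histogram particle filter on the top-view plane (not the paper's exact model).
import numpy as np

rng = np.random.default_rng(0)

def color_histogram(patch, bins=8):
    """Normalized per-channel color histogram of an image patch (H x W x 3, uint8)."""
    hist = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / (hist.sum() + 1e-9)

def bhattacharyya(p, q):
    """Similarity between two normalized histograms (1 = identical)."""
    return np.sum(np.sqrt(p * q))

def track_step(particles, reference_hist, observe_patch):
    """One predict-weight-resample cycle on top-view (x, y) particle hypotheses."""
    # Predict: random-walk motion model on the ground plane.
    particles = particles + rng.normal(0.0, 0.05, particles.shape)
    # Weight: color similarity of the patch observed at each hypothesis.
    weights = np.array([bhattacharyya(reference_hist, color_histogram(observe_patch(p)))
                        for p in particles])
    weights += 1e-12                 # guard against an all-zero weight vector
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    return particles[rng.choice(len(particles), size=len(particles), p=weights)]

# Toy usage: the "camera" returns a constant-color patch whose brightness falls off with
# distance from the (assumed) target position on the ground plane.
target_pos = np.array([1.0, 2.0])
def observe_patch(p):
    color = np.clip(255 - 60 * np.linalg.norm(p - target_pos), 0, 255).astype(np.uint8)
    return np.full((8, 8, 3), color, dtype=np.uint8)

particles = rng.uniform(0.0, 3.0, size=(200, 2))
reference_hist = color_histogram(observe_patch(target_pos))
for _ in range(20):
    particles = track_step(particles, reference_hist, observe_patch)
print("estimated top-view position:", particles.mean(axis=0))
```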

Tracking Path Generation of Mobile Robot for Interrupting Human Behavior (행동차단을 위한 이동로봇의 추적경로 생성)

  • 진태석
    • 한국지능시스템학회논문지 / Vol. 23, No. 5 / pp.460-465 / 2013
  • This paper presents a navigation technique in which a security-purpose mobile robot recognizes the positions of moving objects, including humans, in indoor and outdoor spaces, uses its sensors to detect their motion toward restricted areas, and blocks their entry. The proposed method combines the robot's own dead-reckoning (DR) sensor data with environmental information obtained from a laser scanner to estimate the robot's position. The mobile robot computes the human's velocity vector, plans its travel path, and drives along the predicted path so as to cut off the human's direction of movement. Here, the human is treated as a point object; a basic method for estimating the human's position based on the robot's kinematics is presented, and position-estimation and tracking experiments with a robot are reported to verify its validity.
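
A small sketch of the interception idea described above: estimate the human's velocity vector from consecutive tracked positions (constant-velocity assumption) and pick the earliest point on the predicted straight-line path that the robot can reach before the human does. The time-scan search and all numeric values are illustrative assumptions, not the paper's method.

```python
# Illustrative interception-point sketch: the robot picks the earliest point on the
# human's predicted straight-line path that it can reach before the human arrives there.
import numpy as np

def estimate_velocity(p_prev, p_curr, dt):
    """Constant-velocity estimate from two tracked positions."""
    return (np.asarray(p_curr, float) - np.asarray(p_prev, float)) / dt

def interception_point(human_pos, human_vel, robot_pos, robot_speed, horizon=10.0, step=0.05):
    """Scan future times and return the first predicted human position the robot
    can reach no later than the human does."""
    human_pos = np.asarray(human_pos, float)
    robot_pos = np.asarray(robot_pos, float)
    for t in np.arange(step, horizon, step):
        target = human_pos + human_vel * t          # predicted human position at time t
        if np.linalg.norm(target - robot_pos) <= robot_speed * t:
            return target, t
    return None, None                               # not interceptable within the horizon

# Toy usage: a human walks toward a restricted area at 1 m/s; the robot moves at 1.5 m/s.
vel = estimate_velocity([0.0, 0.0], [0.1, 0.0], dt=0.1)       # -> [1, 0] m/s
point, t = interception_point([0.1, 0.0], vel, robot_pos=[2.0, 2.0], robot_speed=1.5)
print("intercept at", point, "after", t, "s")
```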

Predictive Control of an Efficient Human Following Robot Using Kinect Sensor (Kinect 센서를 이용한 효율적인 사람 추종 로봇의 예측 제어)

  • 허신녕;이장명
    • 제어로봇시스템학회논문지 / Vol. 20, No. 9 / pp.957-963 / 2014
  • This paper proposes a predictive control scheme for an efficient human-following robot using a Kinect sensor. In particular, this research focuses on detecting the foot end-point and foot vector instead of the human body, which can easily be occluded by obstacles. Recognition of the foot end-point by the Kinect sensor is reliable since the images of both feet can be utilized, which increases the probability of detecting human motion. Depth-image features and a decision tree are utilized to estimate the foot end-point precisely. A tracking-point-average algorithm is also adopted to estimate the location of the foot accurately. Using the successive foot locations, the human motion trajectory is estimated to guide the mobile robot along a smooth path to the human. It is verified through experiments that detecting the foot end-point is more reliable and efficient than detecting the human body. Finally, the tracking performance of the mobile robot is demonstrated with a human moving along an 'L'-shaped course.
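
A hedged sketch of the tracking-point-average and motion-prediction ideas mentioned above: noisy foot end-point detections are smoothed with a sliding average, and the next location is extrapolated with a constant-velocity rule. The window size, prediction rule, and synthetic detections are assumptions for illustration only.

```python
# Illustrative foot-point smoothing and one-step prediction
# (sliding-average smoother + constant-velocity extrapolation; parameters are assumed).
from collections import deque
import numpy as np

class FootPointTracker:
    def __init__(self, window=5):
        self.points = deque(maxlen=window)   # recent (x, y) foot end-point detections

    def update(self, detection):
        """Add a new detection and return the smoothed location."""
        self.points.append(np.asarray(detection, float))
        return np.mean(self.points, axis=0)

    def predict_next(self, dt=1.0):
        """Constant-velocity extrapolation from the last two stored detections."""
        if len(self.points) < 2:
            return self.points[-1] if self.points else None
        velocity = self.points[-1] - self.points[-2]
        return self.points[-1] + velocity * dt

# Toy usage with noisy detections along a straight walk.
rng = np.random.default_rng(1)
tracker = FootPointTracker(window=5)
for k in range(10):
    noisy = np.array([0.1 * k, 0.0]) + rng.normal(0, 0.01, 2)
    smoothed = tracker.update(noisy)
print("smoothed:", smoothed, "predicted next:", tracker.predict_next())
```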

Human and Robot Tracking Using Histogram of Oriented Gradient Feature

  • Lee, Jeong-eom;Yi, Chong-ho;Kim, Dong-won
    • Journal of Platform Technology / Vol. 6, No. 4 / pp.18-25 / 2018
  • This paper describes a real-time human and robot tracking method in an Intelligent Space with multi-camera networks. The proposed method detects candidates for humans and robots by using the histogram of oriented gradients (HOG) feature in an image. To classify humans and robots from the candidates in real time, we apply a cascaded structure to construct a strong classifier consisting of weak classifiers: a linear support vector machine (SVM) followed by a radial-basis-function (RBF) SVM. Using multiple-view geometry, the method estimates the 3D positions of humans and robots from their 2D coordinates in the image coordinate system and tracks their positions with a stochastic approach. To test the performance of the method, humans and robots were asked to move along given rectangular and circular paths. Experimental results show that the proposed method reduces the localization error and is suitable for practical human-centered services in the Intelligent Space.
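
A rough sketch of the cascaded classification idea described above: a cheap linear SVM acts as a first gate over HOG descriptors, and a more expensive RBF SVM refines the survivors. The scikit-learn setup, the synthetic stand-in descriptors, and the stage-1 margin are assumptions; HOG extraction itself (e.g. skimage.feature.hog) is assumed to have been done beforehand.

```python
# Illustrative two-stage (cascaded) classifier over HOG descriptors:
# a fast linear SVM rejects most non-target windows, an RBF SVM refines the survivors.
import numpy as np
from sklearn.svm import SVC, LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for HOG descriptors of human/robot windows (label 1) and background (label 0).
X_pos = rng.normal(1.0, 0.5, size=(200, 36))
X_neg = rng.normal(0.0, 0.5, size=(800, 36))
X = np.vstack([X_pos, X_neg])
y = np.concatenate([np.ones(200), np.zeros(800)])

# Stage 1: cheap linear SVM, thresholded loosely so true targets are rarely rejected.
stage1 = LinearSVC(C=1.0).fit(X, y)
# Stage 2: more expensive RBF SVM, applied only to windows that pass stage 1.
stage2 = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X, y)

def cascade_predict(descriptors, stage1_margin=-0.5):
    """Return 1 for windows accepted by both stages, 0 otherwise."""
    scores = stage1.decision_function(descriptors)
    survivors = scores > stage1_margin          # loose first gate
    labels = np.zeros(len(descriptors))
    if survivors.any():
        labels[survivors] = stage2.predict(descriptors[survivors])
    return labels

test = rng.normal(0.5, 0.8, size=(10, 36))
print(cascade_predict(test))
```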

Human Robot Interaction via Intelligent Space

  • Hideki Hashimoto;Lee, Joo-Ho;Kazuyuki Morioka
    • 제어로봇시스템학회:학술대회논문집 / 2002 ICCAS / pp.49.1-49 / 2002
  • Intelligent Space: 1. Optimal Camera Arrangement, 2. People Tracking, 3. Physical Robot, 4. Robot Control, 5. People Following Robot. Initial stage for making high-level human-robot interaction. http://dfs.iis.u-tokyo.ac.jp/~leejooho/ispace/


Human Following of Indoor Mobile Service Robots with a Laser Range Finder (단일레이저거리센서를 탑재한 실내용이동서비스로봇의 사람추종)

  • 유윤규;김호연;정우진;박주영
    • 로봇학회논문지 / Vol. 6, No. 1 / pp.86-96 / 2011
  • Human following is one of the significant procedures in human-friendly navigation of mobile robots. There are many approaches to human-following technology, and many of them adopt multiple sensors such as vision systems and laser range finders (LRF). In this paper, we propose detection and tracking approaches for human legs using a single LRF. We extract four simple attributes of human legs. To define the boundary of the extracted attributes mathematically, we use a Support Vector Data Description (SVDD) scheme. We establish an efficient leg-tracking scheme that exploits a human walking model to achieve robust tracking under occlusions. The proposed approaches were successfully verified through various experiments.
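
A minimal sketch of the SVDD idea above. SVDD with an RBF kernel is closely related to the one-class SVM, so scikit-learn's OneClassSVM stands in for it here; the four leg attributes are not spelled out in the abstract, so the features below (width, depth, circularity, point count) are placeholders.

```python
# Illustrative SVDD-style boundary for leg segments from a laser scan:
# OneClassSVM with an RBF kernel plays the role of SVDD, and the attribute
# vectors are placeholder stand-ins for the paper's four leg attributes.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Training set: attribute vectors extracted from scan segments known to be legs.
leg_attributes = np.column_stack([
    rng.normal(0.12, 0.02, 500),   # segment width  [m]
    rng.normal(0.05, 0.01, 500),   # segment depth  [m]
    rng.normal(0.80, 0.05, 500),   # circularity    [0..1]
    rng.normal(25, 5, 500),        # number of scan points
])

# Describe the "leg" region of attribute space; nu bounds the fraction of training outliers.
svdd_like = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(leg_attributes)

# Classify new scan segments: +1 = consistent with a leg, -1 = reject.
candidates = np.array([
    [0.11, 0.05, 0.82, 27],    # plausible leg
    [0.60, 0.30, 0.20, 120],   # wall-like segment
])
print(svdd_like.predict(candidates))
```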

Probabilistic Head Tracking Based on Cascaded Condensation Filtering (순차적 파티클 필터를 이용한 다중증거기반 얼굴추적)

  • 김현우;기석철
    • 로봇학회논문지 / Vol. 5, No. 3 / pp.262-269 / 2010
  • This paper presents a probabilistic head-tracking method, mainly applicable to face recognition and human-robot interaction, which can robustly track the human head against variations such as pose/scale changes, illumination changes, and background clutter. Compared to conventional particle-filter-based approaches, the proposed method can effectively track a human head by regularizing the sample space in the prediction stage and sequentially weighting multiple visual cues in the observation stage. Experimental results show the robustness of the proposed method, and it is worth mentioning that the proposed probabilistic framework can easily be applied to other object-tracking problems.
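
A tiny sketch of the sequential cue weighting mentioned above for the observation stage of a particle filter: each visual cue re-weights (multiplies) the particle weights in turn, with renormalization after each cue. The cue likelihood functions here are placeholders, not the paper's cues.

```python
# Illustrative sequential cue weighting for a particle filter's observation stage
# (cue likelihood models are placeholders).
import numpy as np

def weight_sequentially(particles, cue_likelihoods):
    """Apply cue likelihood functions one after another, renormalizing each time."""
    weights = np.full(len(particles), 1.0 / len(particles))
    for likelihood in cue_likelihoods:
        weights *= np.array([likelihood(p) for p in particles])
        weights /= weights.sum() + 1e-12
    return weights

# Placeholder cues: a "color" cue centered at x = 1 and an "edge" cue centered at y = 2.
color_cue = lambda p: np.exp(-(p[0] - 1.0) ** 2)
edge_cue  = lambda p: np.exp(-(p[1] - 2.0) ** 2)

particles = np.random.default_rng(0).uniform(0, 3, size=(100, 2))
weights = weight_sequentially(particles, [color_cue, edge_cue])
print("MAP particle:", particles[np.argmax(weights)])
```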

Human-Robot Interaction in Real Environments by Audio-Visual Integration

  • Kim, Hyun-Don;Choi, Jong-Suk;Kim, Mun-Sang
    • International Journal of Control, Automation, and Systems / Vol. 5, No. 1 / pp.61-69 / 2007
  • In this paper, we develop not only a reliable sound-localization system, including a VAD (Voice Activity Detection) component, that uses three microphones, but also a face-tracking system that uses a vision camera. Moreover, we propose a way to integrate the three systems for human-robot interaction, so as to compensate for errors in speaker localization and to effectively reject unnecessary speech or noise signals arriving from undesired directions. To verify the system's performance, we installed the proposed audio-visual system on a prototype robot called IROBAA (Intelligent ROBot for Active Audition) and demonstrated how to integrate the audio-visual system.
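
A minimal sketch of one ingredient of such an audio-visual system: estimating a speaker's azimuth from the time difference of arrival (TDOA) between a pair of microphones via cross-correlation. The sampling rate, microphone spacing, and simulated signal are assumptions; the paper's three-microphone localizer and VAD are not reproduced here.

```python
# Illustrative TDOA azimuth estimate for a two-microphone pair using cross-correlation
# (sampling rate, mic spacing, and the simulated signal are assumed values).
import numpy as np

FS = 16000         # sampling rate [Hz]
MIC_DIST = 0.2     # microphone spacing [m]
C = 343.0          # speed of sound [m/s]

def estimate_azimuth(sig_left, sig_right):
    """Azimuth (rad) from the delay that maximizes the cross-correlation."""
    corr = np.correlate(sig_left, sig_right, mode="full")
    lag = np.argmax(corr) - (len(sig_right) - 1)   # samples; > 0 means the left channel is delayed
    tdoa = lag / FS
    sin_theta = np.clip(tdoa * C / MIC_DIST, -1.0, 1.0)
    return np.arcsin(sin_theta)

# Simulate a source 30 degrees off-axis toward the right mic: the left channel arrives later.
true_angle = np.deg2rad(30)
delay = int(round(MIC_DIST * np.sin(true_angle) / C * FS))    # delay in samples
rng = np.random.default_rng(0)
speech = rng.normal(0, 1, 2048)
sig_right = speech
sig_left = np.concatenate([np.zeros(delay), speech[:len(speech) - delay]])
print("estimated azimuth [deg]:", np.degrees(estimate_azimuth(sig_left, sig_right)))
```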

Specified Object Tracking in an Environment of Multiple Moving Objects using Particle Filter (파티클 필터를 이용한 다중 객체의 움직임 환경에서 특정 객체의 움직임 추적)

  • 김형복;고광은;강진식;심귀보
    • 한국지능시스템학회논문지 / Vol. 21, No. 1 / pp.106-111 / 2011
  • Vision-based detection and tracking of moving objects is widely used in real-time surveillance and video-conferencing systems. Because it can also be extended to human-computer interaction (HCI) and human-robot interaction (HRI), moving-object tracking is one of the key core technologies. In particular, a wide range of applications becomes possible if only the motion of a specific object can be tracked in an environment with multiple moving objects. This paper studies tracking the motion of a specific object using a particle filter. The experimental results show good performance both for tracking a single moving object and for tracking a specific object among multiple moving objects.

Robust Position Tracking for Position-Based Visual Servoing and Its Application to Dual-Arm Task (위치기반 비주얼 서보잉을 위한 견실한 위치 추적 및 양팔 로봇의 조작작업에의 응용)

  • 김찬오;최성;정주노;양광웅;김홍석
    • 로봇학회논문지 / Vol. 2, No. 2 / pp.129-136 / 2007
  • This paper introduces a robust position-based visual servoing method developed for the operation of a human-like robot with two arms. The proposed visual servoing method utilizes the SIFT algorithm for object detection and the CAMSHIFT algorithm for object tracking. While the conventional CAMSHIFT has been used mainly for object tracking in a 2D image plane, we extend its use to object tracking in 3D space by combining the CAMSHIFT results for the two image planes of a stereo camera. This approach yields robust and dependable results. Once the robot's task is defined based on the extracted 3D information, the robot is commanded to carry out the task. We conduct several position-based visual servoing tasks and compare performances under different conditions. The results show that the proposed visual tracking algorithm is simple but very effective for position-based visual servoing.
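
A brief sketch of the 2D-to-3D extension described above: if CAMSHIFT tracks the same object in the left and right images of a calibrated, rectified stereo pair, the two window centers can be triangulated into a 3D point. The camera parameters below are assumed values, and the 2D centers stand in for what cv2.CamShift would return on real images.

```python
# Illustrative triangulation of an object center tracked in both images of a
# rectified stereo pair (the 2D centers would come from CAMSHIFT on each image;
# the camera parameters are assumed, not taken from the paper).
import numpy as np

FX = 700.0                # focal length in pixels
CX, CY = 320.0, 240.0     # principal point
BASELINE = 0.12           # stereo baseline [m]

def triangulate(center_left, center_right):
    """3D point (x, y, z) in the left-camera frame from matched pixel centers."""
    (ul, vl), (ur, _) = center_left, center_right
    disparity = ul - ur
    if disparity <= 0:
        raise ValueError("non-positive disparity: centers do not correspond")
    z = FX * BASELINE / disparity
    x = (ul - CX) * z / FX
    y = (vl - CY) * z / FX
    return np.array([x, y, z])

# Toy usage: CAMSHIFT window centers (u, v) from the left and right images.
print(triangulate(center_left=(350.0, 260.0), center_right=(330.0, 260.0)))
```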
