• Title/Abstract/Keywords: multi-vision


Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won;Kwon, Kee-Koo;Lee, Soo-In;Choi, Jeong-Won;Lee, Suk-Gyu
    • ETRI Journal / Vol. 36, No. 6 / pp.913-923 / 2014
  • This paper proposes a global mapping algorithm for multiple robots based on an omnidirectional-vision simultaneous localization and mapping (SLAM) approach, using an object extraction method built on Lucas-Kanade optical flow motion detection applied to images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map from the map data of all the individual robots. Global mapping is time-consuming because map data must be exchanged among robots while all areas are searched. An omnidirectional image sensor offers many advantages for object detection and mapping because it captures all information around a robot simultaneously. The computational cost of the correction algorithm is reduced relative to existing methods by correcting only the feature points of objects. The proposed algorithm has two steps: first, a local map is created by omnidirectional-vision SLAM for each robot; second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified by comparing maps produced by the algorithm with real maps.
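The second (merging) step can be sketched as placing each robot's local occupancy grid at its known offset in a shared frame and keeping the cell-wise maximum; the grid sizes, offsets, and the `merge_maps` helper below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def merge_maps(local_maps, offsets, global_shape):
    """Merge local occupancy grids (values in [0, 1]) into one global map.

    Each local map is placed at its robot's known offset (row, col) in the
    global frame; overlapping cells keep the highest occupancy estimate.
    """
    global_map = np.zeros(global_shape)
    for grid, (r, c) in zip(local_maps, offsets):
        h, w = grid.shape
        region = global_map[r:r + h, c:c + w]
        np.maximum(region, grid, out=region)  # in-place cell-wise max
    return global_map

# Two robots with 3x3 local maps whose frames overlap by one column.
a = np.zeros((3, 3)); a[1, 2] = 1.0   # robot A sees an obstacle
b = np.zeros((3, 3)); b[1, 0] = 1.0   # robot B sees the same obstacle
g = merge_maps([a, b], [(0, 0), (0, 2)], (3, 5))
```

Taking the maximum is a conservative merge rule: a cell any robot believes to be occupied stays occupied in the global map.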

Multi-Object Tracking using the Color-Based Particle Filter in ISpace with Distributed Sensor Network

  • Jin, Tae-Seok;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 5, No. 1 / pp.46-51 / 2005
  • Intelligent Space (ISpace) is a space in which many intelligent devices, such as computers and sensors, are distributed. Because services arise from the cooperation of these devices within the environment, it is essential that the system know the locations of people and objects in order to offer useful services. To this end, we present a method for representing, tracking, and following humans by fusing distributed multiple vision systems in ISpace, with application to pedestrian tracking in a crowd. The article also presents the integration of color distributions into particle filtering, which provides a robust tracking framework under ambiguous conditions. We propose to track moving objects by generating hypotheses not in the image plane but on a top-view reconstruction of the scene. Comparative results on real video sequences show the advantage of our method for multi-object tracking. Simulations are carried out to evaluate the performance of the proposed method, and the method is also applied to the intelligent environment, where its performance is verified by experiments.
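The color-based particle filter described above can be sketched as follows: hypotheses live on the top-view ground plane and are weighted by a Bhattacharyya similarity between the color histogram at each hypothesis and the target's histogram. The synthetic hue image, patch size, and noise parameters are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def color_hist(patch, bins=8):
    """Normalized histogram of a hue patch (values in [0, 1))."""
    h, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return h / max(h.sum(), 1)

def bhattacharyya(p, q):
    return np.sum(np.sqrt(p * q))  # similarity in [0, 1]

# Synthetic top-view "image": red-ish target (hue ~0.05) on a green floor.
img = np.full((60, 60), 0.40)
img[30:38, 40:48] = 0.05
target_hist = color_hist(img[30:38, 40:48])

# Particle filter: particles are (row, col) hypotheses on the ground plane.
n = 200
particles = rng.uniform(4, 56, size=(n, 2))
for _ in range(5):
    particles += rng.normal(0, 2, size=(n, 2))      # diffusion step
    particles = np.clip(particles, 4, 55)
    w = np.empty(n)
    for i, (r, c) in enumerate(particles.astype(int)):
        patch = img[r - 4:r + 4, c - 4:c + 4]
        w[i] = bhattacharyya(color_hist(patch), target_hist)
    w = w / w.sum()
    idx = rng.choice(n, size=n, p=w)                # resampling step
    particles = particles[idx]

estimate = particles.mean(axis=0)  # converges near the target center
```

Generating hypotheses on the ground plane rather than in each camera image is what lets several distributed cameras share one set of particles.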

Survey: Tabletop Display Techniques for Multi-Touch Recognition

  • 김송국;이칠우
    • The Journal of the Korea Contents Association / Vol. 7, No. 2 / pp.84-91 / 2007
  • Recently, vision-based research on recognizing user intentions and actions for human-computer interaction has been active. Among such work, tabletop display systems have grown into diverse applications in step with advances in touch-sensing technology and the pursuit of collaborative work. Earlier tabletop displays supported only a single user, but current systems support multiple users through multi-touch. Thus the ultimate goals of tabletop displays, collaborative work and interaction among four elements (human, computer, projected objects, and physical objects), have become realizable. In general, tabletop display systems are designed around four aspects: bare-hand multi-touch interaction; collaborative work through simultaneous user interaction; information manipulation by touching arbitrary positions; and the use of physical objects as interaction tools. In this paper, we classify state-of-the-art multi-touch sensing techniques for tabletop display systems into vision-based and non-vision-based methods and analyze them from a critical point of view. We also classify tabletop display studies by system configuration and describe their advantages, disadvantages, and practical application areas.

Attitude Estimation for the Biped Robot with Vision and Gyro Sensor Fusion

  • 박진성;박영진;박윤식;홍덕화
    • Journal of Institute of Control, Robotics and Systems / Vol. 17, No. 6 / pp.546-551 / 2011
  • A tilt sensor is required to control the attitude of a biped robot walking on uneven terrain. A vision sensor, normally used for recognizing humans or detecting obstacles, can also serve as a tilt-angle sensor by comparing the current image with a reference image. However, a vision sensor alone has significant technological limitations for controlling a biped robot, such as a low sampling frequency and estimation time delay. To verify these limitations, an experimental setup with an inverted pendulum, which represents the pitch motion of a walking or running robot, is used, and it is shown that the vision sensor alone cannot control the inverted pendulum, mainly because of the time delay. In this paper, to overcome the limitations of the vision sensor, a Kalman filter for multi-rate sensor fusion is applied together with a low-cost gyro sensor. This resolves the limitations of the vision sensor and also eliminates the drift of the gyro sensor. Experiments on inverted pendulum control show that the tilt-estimation performance of the fused sensors is improved enough to control the attitude of the inverted pendulum.
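A minimal sketch of the multi-rate fusion idea, assuming a two-state model (tilt angle and gyro bias) with hypothetical noise values: the gyro drives the Kalman prediction at every fast sample, while the vision measurement corrects the state only when a frame arrives:

```python
import numpy as np

# State: [tilt angle, gyro bias]; gyro at 100 Hz, vision at 10 Hz.
dt, vision_every = 0.01, 10
F = np.array([[1.0, -dt],    # angle grows by (gyro_rate - bias) * dt
              [0.0, 1.0]])   # bias modeled as a slowly drifting constant
B = np.array([dt, 0.0])
H = np.array([[1.0, 0.0]])   # vision measures the angle only
Q = np.diag([1e-6, 1e-8])    # process noise (assumed values)
R = np.array([[1e-3]])       # vision measurement noise (assumed)

x = np.zeros(2)              # initial estimate: angle 0, bias 0
P = np.eye(2)

true_angle, true_bias = 0.3, 0.05   # simulated constant tilt and gyro bias
rng = np.random.default_rng(1)
for k in range(500):                 # 5 seconds of data
    gyro = true_bias + rng.normal(0, 0.01)  # tilt rate is 0; reading is biased
    # Predict at every gyro sample (fast rate).
    x = F @ x + B * gyro
    P = F @ P @ F.T + Q
    # Correct only when a vision frame arrives (slow rate).
    if k % vision_every == 0:
        z = true_angle + rng.normal(0, 0.03)
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H[0] / S[0, 0]        # Kalman gain for a scalar measurement
        x = x + K * y[0]
        P = (np.eye(2) - np.outer(K, H[0])) @ P
# x[0] should approach the true tilt, x[1] the gyro bias.
```

Because the bias is part of the state, the slow absolute measurement both removes gyro drift and compensates the delay-prone vision channel between frames.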

Evolutionary Generation Based Color Detection Technique for Object Identification in Degraded Robot Vision

  • 김경태;서기성
    • The Transactions of the Korean Institute of Electrical Engineers / Vol. 64, No. 7 / pp.1040-1046 / 2015
  • This paper introduces a GP (genetic programming) based color detection model for object detection in humanoid robot vision. Existing color detection methods use linear or nonlinear transformations of the RGB color model. In most cases, however, they have difficulty classifying colors satisfactorily because of interference among color channels and susceptibility to illumination variation, problems that are especially pronounced in degraded images from robot vision. To solve these problems, we propose an illumination-robust and non-parametric multi-color detection model evolved by GP. The proposed method is compared with existing color models in various environments using the vision of a real Nao humanoid robot.
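GP-evolved detectors of this kind can be sketched as expression trees over the R, G, B channels whose output is thresholded into a binary decision; the tuple representation, operator set, and labeled pixels below are illustrative assumptions, not the paper's actual GP configuration:

```python
# A GP individual is a nested tuple such as ('sub', 'r', 'g'). Terminals are
# channel names ('r', 'g', 'b') or constants; the tree maps an RGB pixel to
# a score, and score > 0 means "target color detected".
OPS = {'add': lambda a, b: a + b,
       'sub': lambda a, b: a - b,
       'mul': lambda a, b: a * b}

def evaluate(tree, pixel):
    if isinstance(tree, str):
        return pixel[tree]          # channel terminal
    if isinstance(tree, (int, float)):
        return tree                 # constant terminal
    op, left, right = tree
    return OPS[op](evaluate(left, pixel), evaluate(right, pixel))

def fitness(tree, samples):
    """Fraction of labeled pixels the tree classifies correctly."""
    hits = 0
    for pixel, is_target in samples:
        hits += (evaluate(tree, pixel) > 0) == is_target
    return hits / len(samples)

# Labeled pixels: an orange ball vs. a greenish floor (illustrative values).
samples = [({'r': 0.9, 'g': 0.4, 'b': 0.1}, True),
           ({'r': 0.8, 'g': 0.5, 'b': 0.2}, True),
           ({'r': 0.2, 'g': 0.7, 'b': 0.3}, False),
           ({'r': 0.3, 'g': 0.6, 'b': 0.5}, False)]

# An "evolved" detector: r - g > 0 separates the two classes here.
best = ('sub', 'r', 'g')
```

Evolution would search over such trees by mutation and crossover with `fitness` as the objective; unlike a fixed RGB transform, the tree's form itself adapts to the degraded imagery.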

On a Multi-Agent System for Assisting Human Intention

  • Tawaki, Hajime;Tan, Joo Kooi;Kim, Hyoung-Seop;Ishikawa, Seiji
    • ICROS Conference Proceedings / ICCAS 2003 / pp.1126-1129 / 2003
  • In this paper, we propose a multi-agent system for assisting those who need help in reaching objects around them. One may imagine a person lying in bed who wishes to take an object from a distant table that cannot be reached merely by stretching out a hand. The proposed multi-agent system is composed of three main independent agents: a vision agent, a robot agent, and a pass agent. Once a human expresses his or her intention by pointing to a particular object with a hand and finger, these agents cooperatively bring the object to the person. Natural communication between the human and the multi-agent system is realized in this way. The performance of the proposed system is demonstrated in an experiment in which a human intends to take one of four objects on the floor, and the three agents successfully cooperate to find the object and bring it to the human.
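The pointing-based intention recognition can be sketched as choosing the object whose direction from the hand best aligns with the hand-to-fingertip ray; the 3-D coordinates and the `pointed_object` helper are hypothetical, not the system's actual vision pipeline:

```python
import numpy as np

def pointed_object(hand, fingertip, objects):
    """Return the index of the object closest to the pointing ray.

    The pointing direction is the vector from the hand to the fingertip;
    the chosen object maximizes the cosine of the angle between that ray
    and the hand-to-object direction.
    """
    ray = fingertip - hand
    ray = ray / np.linalg.norm(ray)
    best, best_cos = -1, -2.0
    for i, obj in enumerate(objects):
        d = obj - hand
        c = float(d @ ray / np.linalg.norm(d))  # cosine of the angle
        if c > best_cos:
            best, best_cos = i, c
    return best

# Four objects on the floor; the finger points toward the third one.
hand = np.array([0.0, 0.0, 1.0])
tip = np.array([0.2, 0.1, 0.9])         # pointing forward and down
objects = [np.array([-1.0, 0.5, 0.0]),
           np.array([0.5, -1.0, 0.0]),
           np.array([2.0, 1.0, 0.0]),
           np.array([-0.5, -2.0, 0.0])]
choice = pointed_object(hand, tip, objects)
```

In the full system, a vision agent would supply the hand and fingertip positions, and the robot and pass agents would act on the selected index.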


A Study on the Development of Multi-User Virtual Reality Moving Platform Based on Hybrid Sensing

  • 장용훈;장민혁;정하형
    • Journal of Korea Multimedia Society / Vol. 24, No. 3 / pp.355-372 / 2021
  • Recently, high-performance HMDs (head-mounted displays) have become wireless owing to the growth of virtual reality technology. Accordingly, environmental constraints on hardware use are reduced, enabling multiple users to experience virtual reality simultaneously within a single space. Existing multi-user virtual reality platforms track user location and capture motion with vision sensors and active markers, but immersion suffers when markers overlap or when reflected light causes frequent matching errors. The goal of this study is to develop a multi-user virtual reality moving platform for a single space that resolves these sensing errors and the resulting loss of immersion. To achieve this, a hybrid sensing technology was developed that converges vision-sensor-based position tracking, IMU (inertial measurement unit) based motion capture, and gesture recognition based on smart gloves. In addition, an integrated safety operation system was developed that ensures user safety and supports multimodal feedback without reducing immersion. A 6 m × 6 m × 2.4 m test bed was configured to verify the effectiveness of the multi-user virtual reality moving platform with four users.
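The hybrid sensing idea of correcting fast but drifting inertial estimates with slower absolute vision fixes can be sketched with a 1-D complementary scheme; the rates, noise levels, sensor bias, and blend factor below are assumed for illustration, not taken from the platform:

```python
import numpy as np

# IMU dead reckoning at 200 Hz corrected by marker-based vision fixes
# at 10 Hz; alpha is the weight kept on the IMU estimate at each fix.
dt, vision_every, alpha = 0.005, 20, 0.8

rng = np.random.default_rng(2)
true_pos, true_vel = 0.0, 0.5               # user walks at 0.5 m/s
est_pos, est_vel = 0.0, 0.5                 # fused estimate
dead_pos, dead_vel = 0.0, 0.5               # uncorrected dead reckoning
for k in range(1000):                       # 5 seconds
    accel = rng.normal(0, 0.2) + 0.05       # noisy, biased IMU reading
    true_pos += true_vel * dt
    est_vel += accel * dt                   # integration accumulates drift
    est_pos += est_vel * dt
    dead_vel += accel * dt
    dead_pos += dead_vel * dt
    if k % vision_every == 0:               # vision fix arrives (slow rate)
        vision_pos = true_pos + rng.normal(0, 0.01)
        est_pos = alpha * est_pos + (1 - alpha) * vision_pos

drift_error = abs(est_pos - true_pos)       # fused error stays bounded
```

The vision channel bounds the error that pure inertial integration would accumulate, while the IMU keeps the estimate smooth at the fast rate even when markers are briefly occluded.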