• Title/Summary/Keyword: omni-directional camera


Tele-presence System using Homography-based Camera Tracking Method (호모그래피기반의 카메라 추적기술을 이용한 텔레프레즌스 시스템)

  • Kim, Tae-Hyub;Choi, Yoon-Seok;Nam, Bo-Dam;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea CI / v.49 no.3 / pp.27-33 / 2012
  • Tele-presence and tele-operation techniques are used to build an immersive scene and control environment for a distant user. This paper presents a novel tele-presence system using camera tracking based on planar homography. In the first step, the user wears an HMD (head-mounted display) with a camera, and his/her head motion is estimated. From the panoramic image captured by the omni-directional camera mounted on the mobile robot, the user's viewing image is generated and displayed through the HMD. The homography of a 3D plane with markers is used to obtain the user's head motion. For the performance evaluation, the camera tracking results of ARToolkit and the homography-based method are compared with the actually measured camera positions.
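The homography estimation at the core of such a tracking step can be sketched in pure NumPy. The DLT solver below is a generic illustration, not the paper's implementation, and the marker coordinates and ground-truth matrix are made up for the example:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 planar homography H (dst ~ H @ src) from
    point correspondences via the Direct Linear Transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography vector is the null vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]            # fix scale (and sign) so H[2,2] == 1

def project(H, p):
    """Apply a homography to a 2-D point in homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return (q[0] / q[2], q[1] / q[2])

# Four coplanar marker corners and a hypothetical ground-truth homography.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
H_true = np.array([[1.2, 0.1, 3.0],
                   [0.0, 0.9, -1.0],
                   [0.001, 0.0, 1.0]])
dst = [project(H_true, p) for p in src]
H_est = estimate_homography(src, dst)
```

Tracking then amounts to re-estimating H for each frame from the detected marker positions.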

The navigation method of a mobile robot using an omni-directional position detection system (전방향 위치검출 시스템을 이용한 이동로봇의 주행방법)

  • Ryu, Ji-Hyoung;Kim, Jee-Hong;Lee, Chang-Goo
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.2 / pp.237-242 / 2009
  • Compared with fixed-type robots, mobile robots have the advantage of extended workspaces. This advantage, however, requires sensors to detect the mobile robot's position and find its goal point. This article describes a navigation teaching method for a mobile robot using an omni-directional position detection system. The system offers concise position data to a processor using simple devices: when the user points to a goal point, the system corrects the error by comparing the robot's heading angle and position with the goal. For these processes, the system uses a conic mirror and a single camera. As a result, it reduces the image-processing time needed to search for the navigation target ordered by the user.

A Study on Automatic Detection of Speed Bump by using Mathematical Morphology Image Filters while Driving (수학적 형태학 처리를 통한 주행 중 과속 방지턱 자동 탐지 방안)

  • Joo, Yong Jin;Hahm, Chang Hahk
    • Journal of Korean Society for Geospatial Information Science / v.21 no.3 / pp.55-62 / 2013
  • This paper aims to detect speed bumps using an omni-directional camera and suggests a real-time update scheme for speed bumps through a vision-based approach. In order to detect a speed bump from a sequence of camera images, noise should be removed, and the spots whose shape and pattern match a speed bump should be detected first. Since a speed bump has a regular form of white and yellow areas, we extracted speed bumps on the road by applying erosion and dilation morphological operations and by using the HSV color model. By collecting large numbers of panoramic images from the camera, we are able to detect the target object and to calculate its distance through GPS log data. Finally, we evaluated the accuracy of the obtained results and the detection algorithm by implementing SLAMS (Simultaneous Localization and Mapping System).
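The morphological step the abstract describes (erosion then dilation, i.e. an opening, to suppress speckle noise while keeping bump-sized blobs) can be illustrated with pure NumPy. The HSV thresholding stage is elided here, and the toy binary mask stands in for its output:

```python
import numpy as np

def dilate(mask, k=1):
    """Binary dilation with a (2k+1)x(2k+1) square structuring element.
    np.roll wraps at the borders, which is acceptable for this toy image."""
    out = np.zeros_like(mask)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def erode(mask, k=1):
    """Erosion via duality: erosion is dilation of the complement."""
    return ~dilate(~mask, k)

def opening(mask, k=1):
    """Erosion followed by dilation: removes features smaller than the
    structuring element while restoring the shape of larger blobs."""
    return dilate(erode(mask, k), k)

# Toy thresholded mask: a 4x4 blob (bump candidate) plus one noise pixel.
mask = np.zeros((12, 12), dtype=bool)
mask[4:8, 4:8] = True   # bump-like region survives the opening
mask[1, 10] = True      # isolated noise pixel is removed
cleaned = opening(mask, k=1)
```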

Control of an Omni-directional Mobile Robot Based on Camera Image (카메라 영상기반 전방향 이동 로봇의 제어)

  • Kim, Bong Kyu;Ryoo, Jung Rae
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.1 / pp.84-89 / 2014
  • In this paper, an image-based visual servo control strategy for tracking a target object is applied to a camera-mounted omni-directional mobile robot. In general, to obtain the target angular velocity of each wheel from the image coordinates of the target object, a mathematical image Jacobian matrix is built using a camera model and the mobile robot's kinematics. Unlike the well-known mathematical image Jacobian, the proposed simple rule-based control strategy generates target angular velocities of the wheels in conjunction with the size of the target object captured in the camera image. The camera image is divided into several regions, and a pre-defined rule corresponding to the region in which the target is located is applied to generate the target angular velocities of the wheels. The proposed algorithm is easily implementable in that no mathematical description of the image Jacobian is required and a small number of rules are sufficient for target tracking. Experimental results are presented with a description of the overall experimental system.
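A rule table of this kind can be sketched as follows. This is a two-wheel simplification for illustration only: the paper's robot is omni-directional, and the actual regions, wheel count, and velocity values are not given in the abstract, so everything below is a hypothetical example:

```python
# (left_wheel, right_wheel) target angular velocities [rad/s] per region.
REGION_RULES = {
    ("left",   "far"):  (0.5, 1.0),   # target small and to the left: turn left, fast
    ("center", "far"):  (1.0, 1.0),   # target ahead and small: drive straight
    ("right",  "far"):  (1.0, 0.5),
    ("left",   "near"): (0.2, 0.5),   # target large (close): slow down
    ("center", "near"): (0.0, 0.0),   # close and centered: stop
    ("right",  "near"): (0.5, 0.2),
}

def classify(cx, area, img_w=320, near_area=5000):
    """Map the target's image column and apparent size to a region label."""
    horiz = "left" if cx < img_w / 3 else ("right" if cx > 2 * img_w / 3 else "center")
    return horiz, ("near" if area >= near_area else "far")

def wheel_targets(cx, area):
    """Look up target wheel velocities from the pre-defined rules."""
    return REGION_RULES[classify(cx, area)]
```

The appeal of the approach is visible here: the whole controller is a small lookup, with no camera model or Jacobian needed.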

Autonomous Omni-Directional Cleaning Robot System Design

  • Choi, Jun-Yong;Ock, Seung-Ho;Kim, San;Kim, Dong-Hwan
    • Institute of Control, Robotics and Systems: Conference Proceedings (제어로봇시스템학회:학술대회논문집) / 2005.06a / pp.2019-2023 / 2005
  • In this paper, an autonomous omni-directional cleaning robot which recognizes obstacles and a battery charger is introduced. It utilizes robot vision, ultrasonic sensors, and infrared sensor information along with appropriate algorithms. Three omni-directional wheels allow the robot to move in any direction, enabling faster maneuvering than a simple track-type robot. The robot system transfers commands and image data through Bluetooth wireless modules so that it can be operated from a remote place. The robot vision, combined with the sensor data, enables autonomous behavior. Autonomous battery-charger searching is implemented using map-building from camera and sensor information, which overcomes the error caused by wheel slip.


Global Localization of Mobile Robots Using Omni-directional Images (전방위 영상을 이용한 이동 로봇의 전역 위치 인식)

  • Han, Woo-Sup;Min, Seung-Ki;Roh, Kyung-Shik;Yoon, Suk-June
    • Transactions of the Korean Society of Mechanical Engineers A / v.31 no.4 / pp.517-524 / 2007
  • This paper presents a global localization method using the circular correlation of an omni-directional image. The localization of a mobile robot, especially in indoor conditions, is a key component in the development of useful service robots. Though stereo vision is widely used for localization, its performance is limited by computational complexity and its narrow view angle. To compensate for these shortcomings, we utilize a single omni-directional camera which can capture instantaneous 360° panoramic images around a robot. Nodes around the robot are extracted by the correlation coefficients of the CHL (Circular Horizontal Line) between the landmark and the currently captured image. After finding possible nearby nodes, the robot moves to the nearest node based on the correlation values and the positions of these nodes. To accelerate computation, the correlation values are calculated using Fast Fourier Transforms. Experimental results and performance in a real home environment have shown the feasibility of the method.
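The FFT-accelerated circular correlation can be sketched in NumPy. Treating the CHL as a 1-D ring of 360 intensity samples (one per degree) is an assumption for illustration; the paper's normalization into correlation coefficients is omitted:

```python
import numpy as np

def circular_correlation(a, b):
    """Circular cross-correlation of two 1-D signals via FFT,
    O(n log n) instead of the O(n^2) direct sum."""
    fa, fb = np.fft.fft(a), np.fft.fft(b)
    return np.real(np.fft.ifft(fa * np.conj(fb)))

# Rotating the robot circularly shifts its CHL; the correlation peak
# recovers the shift, i.e. the heading offset relative to the landmark.
rng = np.random.default_rng(0)
landmark = rng.random(360)          # stored node CHL (1 sample/degree)
current = np.roll(landmark, 40)     # robot rotated by 40 degrees
shift = int(np.argmax(circular_correlation(current, landmark)))
```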

Determination of 3D Object Coordinates from Overlapping Omni-directional Images Acquired by a Mobile Mapping System (모바일매핑시스템으로 취득한 중첩 전방위 영상으로부터 3차원 객체좌표의 결정)

  • Oh, Tae-Wan;Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.3 / pp.305-315 / 2010
  • This research aims to develop a method to determine the 3D coordinates of an object point from overlapping omni-directional images acquired by a ground mobile mapping system, and to assess their accuracies. In the proposed method, we first define an individual coordinate system on each sensor and on the object space, and determine the geometric relationships between these systems. Based on these systems and their relationships, we derive a straight line of candidate object points for a point of an omni-directional image, and determine the 3D coordinates of the object point by intersecting the pair of straight lines derived from a pair of matched points. We have compared the object coordinates determined by the proposed method with those measured by GPS and a total station for accuracy assessment and analysis. According to the experimental results, with an appropriate baseline length and mutual positions between cameras and objects, we can determine the relative coordinates of the object point with an accuracy of several centimeters. The accuracy of the absolute coordinates ranges from several centimeters to 1 m due to systematic errors. In the future, we plan to improve the accuracy of the absolute coordinates by determining more precisely the relationship between the camera and GPS/INS coordinate systems and by performing the calibration of the omni-directional camera.
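Intersecting the pair of rays derived from matched image points is a standard midpoint triangulation. The generic sketch below assumes the ray origins and directions have already been expressed in a common frame (the part the paper's sensor-to-object coordinate relationships provide); the example geometry is made up:

```python
import numpy as np

def triangulate_midpoint(p1, d1, p2, d2):
    """Closest point between two 3-D rays (origin p, direction d).
    Rays from matched points rarely intersect exactly, so the midpoint
    of the shortest connecting segment is taken as the object point."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve [d1 -d2] @ [t1, t2]^T ~= (p2 - p1) in least squares.
    A = np.stack([d1, -d2], axis=1)
    (t1, t2), *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Two camera centers one baseline apart, both rays through the same point.
X = np.array([2.0, 1.0, 5.0])                     # true object point
p1 = np.array([0.0, 0.0, 0.0])
p2 = np.array([1.0, 0.0, 0.0])
X_hat = triangulate_midpoint(p1, X - p1, p2, X - p2)
```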

Coordinate Calibration of the ODVS using Delta-bar-Delta Neural Network (Delta-bar-Delta 알고리즘을 이용한 ODVS의 좌표 교정)

  • Kim, Do-Hyeon;Park, Young-Min;Cha, Eui-Young
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.3 / pp.669-675 / 2005
  • This paper proposes a coordinate transformation and calibration algorithm using a 3D parabolic coordinate transformation and the delta-bar-delta neural algorithm for omni-directional images captured by a catadioptric camera. Experimental results show that the proposed algorithm achieves accurate and reliable coordinate transformation, which is sensitive to environmental variables.
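The delta-bar-delta rule (Jacobs, 1988) the abstract refers to gives every weight its own adaptive learning rate. The sketch below shows the update rule itself on a toy quadratic objective, not the paper's ODVS calibration network; all constants are illustrative:

```python
import numpy as np

# Delta-bar-delta: if the current gradient agrees in sign with the
# smoothed past gradient, the per-weight rate grows additively (KAPPA);
# if it disagrees, the rate shrinks multiplicatively (PHI). THETA is
# the smoothing factor for the gradient trace.
KAPPA, PHI, THETA = 0.01, 0.5, 0.7

def delta_bar_delta_step(w, grad, rates, bar_delta):
    agree = np.sign(grad) * np.sign(bar_delta)
    rates = np.where(agree > 0, rates + KAPPA,
             np.where(agree < 0, rates * PHI, rates))
    bar_delta = (1 - THETA) * grad + THETA * bar_delta
    w = w - rates * grad
    return w, rates, bar_delta

# Toy objective f(w) = ||w||^2 / 2, whose gradient is simply w.
w = np.array([2.0, -3.0])
rates = np.full(2, 0.1)
bar_delta = np.zeros(2)
for _ in range(100):
    w, rates, bar_delta = delta_bar_delta_step(w, w, rates, bar_delta)
```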

Real time Omni-directional Object Detection Using Background Subtraction of Fisheye Image (어안 이미지의 배경 제거 기법을 이용한 실시간 전방향 장애물 감지)

  • Choi, Yun-Won;Kwon, Kee-Koo;Kim, Jong-Hyo;Na, Kyung-Jin;Lee, Suk-Gyu
    • Journal of Institute of Control, Robotics and Systems / v.21 no.8 / pp.766-772 / 2015
  • This paper proposes an object detection method based on motion estimation using background subtraction in fisheye images obtained through an omni-directional camera mounted on a vehicle. Recently, most vehicles have been equipped with a rear camera as a standard option, as well as various camera systems for safety. However, unlike conventional object detection using images obtained from a camera on a computer, the embedded system installed in a vehicle has difficulty running complicated algorithms because of its inherently low processing performance, and therefore needs a system-dependent algorithm. In this paper, the location of an object is estimated from the object's motion, obtained by applying a background subtraction method which compares previous frames with the current one. The real-time detection performance of the proposed method is verified experimentally on an embedded board by comparing the proposed algorithm with object detection based on LKOF (Lucas-Kanade optical flow).
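The frame-comparison idea can be sketched as simple differencing between consecutive grey-level frames; fisheye distortion handling and the paper's exact background model are elided, and the synthetic frames and threshold are assumptions:

```python
import numpy as np

def motion_mask(prev, curr, thresh=25):
    """Foreground mask by frame differencing: pixels whose grey-level
    change exceeds a threshold are flagged as moving."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh

def object_centroid(mask):
    """Rough object location: centroid of the moving pixels."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()

# Synthetic consecutive frames: a bright 10x10 blob appears in frame 2.
prev = np.zeros((120, 160), dtype=np.uint8)
curr = np.zeros((120, 160), dtype=np.uint8)
curr[50:60, 80:90] = 200
cx, cy = object_centroid(motion_mask(prev, curr))
```

The cheapness of this pipeline, a subtraction, a threshold, and a centroid, is what makes it suitable for the low-power embedded boards the abstract targets.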

Real-time Human Detection under Omni-directional Camera based on CNN with Unified Detection and AGMM for Visual Surveillance

  • Nguyen, Thanh Binh;Nguyen, Van Tuan;Chung, Sun-Tae;Cho, Seongwon
    • Journal of Korea Multimedia Society / v.19 no.8 / pp.1345-1360 / 2016
  • In this paper, we propose a new real-time human detection method for omni-directional cameras for visual-surveillance purposes, based on a CNN with unified detection and an AGMM. Compared to state-of-the-art CNN-based object detection methods, the YOLO model-based object detection method boasts very fast detection but lower accuracy. The proposed method adapts the unified detecting CNN of the YOLO model so that it is intensified by the additional foreground contextual information obtained from a pre-stage AGMM. The increased computational time incurred by the additional AGMM processing is compensated by the speed-up gained from utilizing 2-D input data, consisting of grey-level image data and foreground context information, instead of 3-D color input data. Through various experiments, it is shown that the proposed method performs better with respect to accuracy and is more robust to environment changes than the YOLO model-based human detection method, with processing speeds similar to those of the YOLO model-based one. Thus, it can be successfully employed for embedded surveillance applications.
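The input substitution the abstract describes, two channels (grey level plus AGMM foreground mask) instead of three color channels, can be sketched as follows. The image size and luma approximation are assumptions; the actual AGMM foreground extraction is elided:

```python
import numpy as np

def make_detector_input(rgb, fg_mask):
    """Build a 2-channel input (grey level + foreground context) in
    place of the usual 3-channel RGB tensor, reducing input volume."""
    grey = rgb.mean(axis=-1)               # simple luma approximation
    fg = fg_mask.astype(grey.dtype)        # AGMM foreground as 0/1 channel
    return np.stack([grey, fg], axis=-1)   # shape: H x W x 2

# Hypothetical 416x416 frame and an (empty) foreground mask.
rgb = np.zeros((416, 416, 3))
fg_mask = np.zeros((416, 416), dtype=bool)
x = make_detector_input(rgb, fg_mask)
```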