• Title/Summary/Keyword: Omnidirectional Images

Search Results: 35

An Efficient Hardware Architecture of Coordinate Transformation for Panorama Unrolling of Catadioptric Omnidirectional Images

  • Lee, Seung-Ho
    • Journal of IKEEE / v.15 no.1 / pp.10-14 / 2011
  • In this paper, we present an efficient hardware architecture for the unrolling image mapper of catadioptric omnidirectional imaging systems. Catadioptric omnidirectional imaging systems generate images with a 360-degree field of view, which must be transformed into panorama images in rectangular coordinates. In most applications, panorama unrolling has to be performed in real time and at low cost, especially for high-resolution images. The proposed hardware architecture adopts a software/hardware cooperative structure and employs several optimization schemes based on a look-up table (LUT) for coordinate conversion. To avoid the on-line division operations required by the coordinate transformation algorithm, the proposed architecture stores pre-computed division factors in the LUT. Furthermore, the memory used by the LUT is reduced to 1/4 of that of the conventional architecture by exploiting the symmetry of the transformation. Experimental results show that the proposed hardware architecture achieves real-time performance at lower implementation cost, and it can be applied to other kinds of catadioptric omnidirectional imaging systems.
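
The precomputed-table idea generalizes beyond this particular hardware; a minimal software sketch of LUT-based panorama unrolling (nearest-neighbor sampling, illustrative function names, and without the paper's 1/4 symmetry folding) might look like:

```python
import numpy as np

def build_unroll_lut(out_w, out_h, r_min, r_max):
    """Precompute panorama -> omni source coordinates once, offline.

    Column u maps to a bearing angle theta(u); row v maps to a radius
    between r_min and r_max on the donut-shaped omni image, so all
    divisions and trigonometry happen here, never per frame.
    """
    theta = 2.0 * np.pi * np.arange(out_w) / out_w
    r = r_min + (r_max - r_min) * np.arange(out_h) / out_h
    x = np.outer(r, np.cos(theta))   # (out_h, out_w) x-offsets from center
    y = np.outer(r, np.sin(theta))   # (out_h, out_w) y-offsets from center
    return x, y

def unroll(omni, cx, cy, lut):
    """Unroll via nearest-neighbor lookup through the precomputed table."""
    x, y = lut
    xi = np.clip(np.rint(cx + x).astype(int), 0, omni.shape[1] - 1)
    yi = np.clip(np.rint(cy + y).astype(int), 0, omni.shape[0] - 1)
    return omni[yi, xi]
```

Per frame, only integer indexing remains, which is what makes a hardware LUT implementation attractive.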

An Omnidirectional Vision-Based Moving Obstacle Detection in Mobile Robot

  • Kim, Jong-Cheol;Suga, Yasuo
    • International Journal of Control, Automation, and Systems / v.5 no.6 / pp.663-673 / 2007
  • This paper presents a new moving-obstacle detection method using optical flow in a mobile robot with an omnidirectional camera. Because an omnidirectional camera consists of a nonlinear mirror and a CCD camera, the optical flow pattern in an omnidirectional image differs from that of a perspective camera. The geometric characteristics of an omnidirectional camera influence the optical flow in the omnidirectional image. When a mobile robot with an omnidirectional camera moves, the optical flow is not only derived theoretically for the omnidirectional image but also investigated experimentally in omnidirectional and panoramic images. In this paper, the panoramic image is generated from the omnidirectional image using the geometry of the omnidirectional camera. In particular, focus of expansion (FOE) and focus of contraction (FOC) vectors are defined from the estimated optical flow in omnidirectional and panoramic images. The FOE and FOC vectors serve as reference vectors for the relative evaluation of the optical flow, through which moving obstacles are detected. The proposed algorithm is tested on four motions of a mobile robot: straight forward, left turn, right turn, and rotation. The effectiveness of the proposed method is shown by experimental results.
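
The relative evaluation against an FOE rests on a simple geometric fact: under pure forward translation, background flow radiates away from the FOE. A sketch of that test (the threshold and function names are illustrative, not the paper's) is:

```python
import numpy as np

def radial_deviation(points, flows, foe):
    """Angle (radians) between each flow vector and the radial direction
    away from the FOE. Small angles indicate static background."""
    radial = points - foe
    rn = radial / (np.linalg.norm(radial, axis=1, keepdims=True) + 1e-9)
    fn = flows / (np.linalg.norm(flows, axis=1, keepdims=True) + 1e-9)
    cosang = np.clip((rn * fn).sum(axis=1), -1.0, 1.0)
    return np.arccos(cosang)

def flag_obstacles(points, flows, foe, thresh=0.5):
    """Flag flow vectors that deviate from the radial pattern as
    candidate moving obstacles."""
    return radial_deviation(points, flows, foe) > thresh
```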

Using Omnidirectional Images for Semi-Automatically Generating IndoorGML Data

  • Claridades, Alexis Richard;Lee, Jiyeong;Blanco, Ariel
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.5 / pp.319-333 / 2018
  • As human beings spend more time indoors, and with the growing complexity of indoor spaces, more focus is given to indoor spatial applications and services. 3D topological networks are used for various spatial applications that involve indoor navigation, such as emergency evacuation, indoor positioning, and visualization. Manually generating indoor network data is impractical and prone to errors, yet current automated methods need expensive sensors or datasets that are difficult and costly to obtain and process. In this research, a methodology for semi-automatically generating a 3D indoor topological model based on IndoorGML (Indoor Geographic Markup Language) is proposed. The concept of a Shooting Point is defined to accommodate the use of omnidirectional images in generating IndoorGML data. Omnidirectional images were captured at selected Shooting Points in the building using a fisheye camera lens and rotator, and indoor spaces were then identified using image processing implemented in Python. Relative positions of spaces obtained from CAD (Computer-Aided Design) drawings were used to generate 3D node-relation graphs representing adjacency, connectivity, and accessibility in the study area. Subspacing is performed to more accurately depict large indoor spaces and actual pedestrian movement. Since the images provide very realistic visualization, the topological relationships were used to link them to produce an indoor virtual tour.

Global Positioning of a Mobile Robot based on Color Omnidirectional Image Understanding (컬러 전방향 영상 이해에 기반한 이동 로봇의 위치 추정)

  • Kim, Tae-Gyun;Lee, Yeong-Jin;Jeong, Myeong-Jin
    • The Transactions of the Korean Institute of Electrical Engineers D / v.49 no.6 / pp.307-315 / 2000
  • For a mobile robot to be autonomous, it first needs to know its position and orientation. Various methods of estimating the position of a robot have been developed, yet it is still difficult to localize a robot without any initial position or orientation. In this paper we present a method for building a colored map and for calculating the position and orientation of a robot from the angle data of an omnidirectional image. The walls in the map are rendered with the corresponding color images, and the color histograms of the images and the coordinates of feature points are stored in the map. A mobile robot then acquires a color omnidirectional image at an arbitrary position and orientation, segments it, and recognizes objects by multiple color indexing. Using the information from the recognized objects, the robot obtains enough feature points to localize itself.
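
Color indexing builds on histogram comparison; a minimal sketch using Swain-Ballard histogram intersection (a standard technique, not necessarily this paper's exact formulation; the object names are invented) is:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Coarse normalized RGB histogram of an HxWx3 image with values in [0, 256)."""
    h, _ = np.histogramdd(img.reshape(-1, 3).astype(float),
                          bins=(bins,) * 3, range=((0, 256),) * 3)
    return h / h.sum()

def intersection(h1, h2):
    """Swain-Ballard histogram intersection: 1.0 for identical histograms."""
    return np.minimum(h1, h2).sum()

def best_match(query_hist, model_hists):
    """Return the stored model object whose histogram best matches the query."""
    return max(model_hists, key=lambda k: intersection(query_hist, model_hists[k]))
```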


Calibration of Omnidirectional Camera by Considering Inlier Distribution (인라이어 분포를 이용한 전방향 카메라의 보정)

  • Hong, Hyun-Ki;Hwang, Yong-Ho
    • Journal of Korea Game Society / v.7 no.4 / pp.63-70 / 2007
  • Since a fisheye lens has a wide field of view, it can capture the scene and its illumination in all directions from far fewer omnidirectional images. Owing to these advantages, the omnidirectional camera is widely used in surveillance and in reconstructing the 3D structure of a scene. In this paper, we present a new self-calibration algorithm for an omnidirectional camera from uncalibrated images that considers the inlier distribution. First, a parametric non-linear projection model of the omnidirectional camera is estimated with known rotation and translation parameters. After deriving the projection model, we can compute an essential matrix of the camera under unknown motion and then determine the camera's extrinsic information: rotation and translation. Standard deviations are used as a quantitative measure to select a proper inlier set. The experimental results show that we can achieve a precise estimation of the omnidirectional camera model and the extrinsic parameters, including rotation and translation.
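
Recovering rotation and translation from an essential matrix is a standard SVD factorization; a sketch (omitting the cheirality test that selects the physical one of the four candidate solutions) is:

```python
import numpy as np

def decompose_essential(E):
    """Factor an essential matrix into two candidate rotations and the
    translation direction via the standard SVD recipe."""
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations (det = +1) on both orthogonal factors
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]          # direction only; absolute scale is unrecoverable
    return R1, R2, t
```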


Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won;Kwon, Kee-Koo;Lee, Soo-In;Choi, Jeong-Won;Lee, Suk-Gyu
    • ETRI Journal / v.36 no.6 / pp.913-923 / 2014
  • This paper proposes a global mapping algorithm for multiple robots using an omnidirectional-vision simultaneous localization and mapping (SLAM) approach, based on an object extraction method that applies Lucas-Kanade optical flow motion detection to images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map from the map data obtained by each individual robot. Global mapping takes a long time to process because map data are exchanged among individual robots while all areas are searched. An omnidirectional image sensor has many advantages for object detection and mapping because it can measure all information around a robot simultaneously. The computational load of the correction algorithm is reduced compared with existing methods by correcting only the object's feature points. The proposed algorithm has two steps: first, a local map is created with an omnidirectional-vision SLAM approach for each individual robot; second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified by comparing maps based on the proposed algorithm with real maps.
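
The second step, merging individual maps into one global map, can be sketched on occupancy grids; the cell encoding and the conservative conflict rule below are assumptions for illustration, not the paper's method:

```python
import numpy as np

def merge_maps(global_map, local_map, offset):
    """Overlay a robot's local occupancy grid onto the global map.

    Cell values: -1 unknown, 0 free, 1 occupied. Known local cells fill
    unknown global cells, and 'occupied' wins on conflict -- a simple
    conservative merge rule.
    """
    oy, ox = offset                      # local map's origin in global cells
    h, w = local_map.shape
    region = global_map[oy:oy + h, ox:ox + w]
    fill = (local_map != -1) & (region == -1)
    region[fill] = local_map[fill]       # known replaces unknown
    region[local_map == 1] = 1           # occupied overrides free
    return global_map
```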

Design and Analysis of Illumination Optics for Image Uniformity in Omnidirectional Vision Inspection System for Screw Threads (나사산 전면검사 비전시스템의 영상 균일도 향상을 위한 조명 광학계 설계 및 해석)

  • Lee, Chang Hun;Lim, Yeong Eun;Park, Keun;Ra, Seung Woo
    • Journal of the Korean Society for Precision Engineering / v.31 no.3 / pp.261-268 / 2014
  • Precision screws have a wide range of industrial applications, such as electrical and automotive products. Producing screw threads with high precision requires not only high-precision manufacturing technology but also reliable measurement technology. Machine vision systems have been used for the automatic inspection of screw threads based on backlight illumination, which cannot detect defects on the thread surface. Recently, an omnidirectional inspection system for screw threads was developed to obtain 360° images of screws based on front-light illumination. In this study, the illumination design of the omnidirectional inspection system was modified by adding a light shield to improve image uniformity. Optical simulation for various shield designs was performed to analyze the uniformity of the obtained images. The simulation results were analyzed statistically using the response surface method, from which the optical performance of the omnidirectional inspection system could be optimized in terms of image quality and uniformity.

Development of Omnidirectional Ranging System Based on Structured Light Image (구조광 영상기반 전방향 거리측정 시스템 개발)

  • Shin, Jin;Yi, Soo-Yeong
    • Journal of Institute of Control, Robotics and Systems / v.18 no.5 / pp.479-486 / 2012
  • In this paper, a ranging system is proposed that can measure 360-degree omnidirectional distances to objects in the environment. The ranging system is based on a structured-light imaging system with a catadioptric omnidirectional mirror. To make the ranging system robust against environmental illumination, efficient structured-light image processing algorithms are developed: sequential integration of difference images under modulated structured light, and radial search based on the Bresenham line-drawing algorithm. A dedicated FPGA image processor is developed to speed up the overall image processing. The distance equation is also derived for the omnidirectional imaging system with a hyperbolic mirror. The omnidirectional ranging system is expected to be useful for the mapping and localization of mobile robots. Experiments are carried out to verify the performance of the proposed ranging system.

Using Contour Matching for Omnidirectional Camera Calibration (투영곡선의 자동정합을 이용한 전방향 카메라 보정)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.6 / pp.125-132 / 2008
  • Omnidirectional camera systems with a wide view angle are widely used in surveillance and robotics. In general, most previous studies on estimating a projection model and the extrinsic parameters from omnidirectional images assume that corresponding points have already been established among the views. This paper presents a novel omnidirectional camera calibration based on automatic contour matching. First, we estimate the initial parameters, including translations and rotations, by applying the epipolar constraint to the matched feature points. After choosing the points of interest adjacent to more than two contours, we establish a precise correspondence among the connected contours by using the initial parameters and active matching windows. The extrinsic parameters of the omnidirectional camera are then estimated by minimizing the angular errors between the epipolar planes of the endpoints and the inversely projected 3D vectors. Experimental results on synthetic and real images demonstrate that the proposed algorithm obtains more precise camera parameters than the previous method.

Feature Matching for Omnidirectional Images Based on Singular Value Decomposition

  • Kim, Do-Yoon;Lee, Young-Jin;Chung, Myung-Jin
    • Proceedings of the ICROS (Institute of Control, Robotics and Systems) Conference / 2002.10a / pp.98.2-98 / 2002
  • Omnidirectional feature matching
  • SVD-based matching algorithm
  • Using SSD instead of the zero-mean correlation
  • Similarity weighted with a Gaussian
  • Low computational cost
  • Describes the similarity of the matched pairs in omnidirectional images
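
A generic SVD-based matcher in the Scott and Longuet-Higgins style can be sketched as below; it uses Gaussian-weighted point distances to build the proximity matrix, where this work substitutes SSD over image patches:

```python
import numpy as np

def svd_match(feat1, feat2, sigma=1.0):
    """Build a Gaussian-weighted proximity matrix G, force its singular
    values to one, and keep pairs that are maximal in both their row and
    their column of the resulting orientation matrix."""
    d = np.linalg.norm(feat1[:, None, :] - feat2[None, :, :], axis=2)
    G = np.exp(-d ** 2 / (2.0 * sigma ** 2))
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    P = U @ Vt                           # G with singular values set to 1
    pairs = []
    for i in range(P.shape[0]):
        j = int(np.argmax(P[i]))
        if int(np.argmax(P[:, j])) == i: # mutual-maximum test
            pairs.append((i, j))
    return pairs
```

The mutual-maximum test enforces one-to-one correspondences, which is the property the SVD step is designed to amplify.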
