• Title/Abstract/Keyword: Omnidirectional Images

Search results: 35

An Efficient Hardware Architecture of Coordinate Transformation for Panorama Unrolling of Catadioptric Omnidirectional Images

  • Lee, Seung-Ho
    • 전기전자학회논문지
    • /
    • Volume 15, Issue 1
    • /
    • pp.10-14
    • /
    • 2011
  • In this paper, we present an efficient hardware architecture for the unrolling image mapper of catadioptric omnidirectional imaging systems. Catadioptric omnidirectional imaging systems generate images with a 360-degree field of view, which need to be transformed into panorama images in rectangular coordinates. In most applications, panorama unrolling has to be performed in real time and at low cost, especially for high-resolution images. The proposed hardware architecture adopts a software/hardware cooperative structure and employs several optimization schemes using a look-up table (LUT) for coordinate conversion. To avoid the on-line division operations required by the coordinate transformation algorithm, the proposed architecture uses an LUT of pre-computed division factors. Furthermore, by exploiting symmetry, the memory used by the LUT is reduced to 1/4 of that of the conventional architecture. Experimental results show that the proposed hardware architecture achieves effective real-time performance at lower implementation cost, and it can be applied to other kinds of catadioptric omnidirectional imaging systems.
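The LUT idea above can be sketched in a few lines of Python: all trigonometry and division is done once offline, and each frame is then unrolled by a pure table lookup. The polar mapping, image sizes, and radii below are illustrative assumptions, not the paper's actual hardware design.

```python
import numpy as np

def build_unroll_lut(pano_w, pano_h, cx, cy, r_in, r_out):
    """Precompute source (x, y) coordinates for every panorama pixel,
    so no trig or division is needed at unrolling time."""
    u = np.arange(pano_w)
    v = np.arange(pano_h)
    theta = 2.0 * np.pi * u / pano_w              # azimuth per column
    r = r_in + (r_out - r_in) * v / (pano_h - 1)  # radius per row
    x = (cx + np.outer(r, np.cos(theta))).round().astype(np.int32)
    y = (cy + np.outer(r, np.sin(theta))).round().astype(np.int32)
    return x, y

def unroll(omni_img, lut):
    """Map a donut-shaped omnidirectional image to a rectangular panorama."""
    x, y = lut
    return omni_img[y, x]

# toy 64x64 omnidirectional image, unrolled to a 90x16 panorama
omni = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
lut = build_unroll_lut(90, 16, 32, 32, 8, 30)
pano = unroll(omni, lut)
```

The quarter-table reduction in the paper would additionally exploit the quadrant symmetry of cos/sin, storing only 90 degrees of the table.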

An Omnidirectional Vision-Based Moving Obstacle Detection in Mobile Robot

  • Kim, Jong-Cheol;Suga, Yasuo
    • International Journal of Control, Automation, and Systems
    • /
    • Volume 5, Issue 6
    • /
    • pp.663-673
    • /
    • 2007
  • This paper presents a new moving obstacle detection method using optical flow for a mobile robot with an omnidirectional camera. Because an omnidirectional camera consists of a nonlinear mirror and a CCD camera, the optical flow pattern in an omnidirectional image differs from the pattern in a perspective image. The geometric characteristics of an omnidirectional camera influence the optical flow in the omnidirectional image. For a moving mobile robot with an omnidirectional camera, the optical flow is both derived theoretically and investigated experimentally in omnidirectional and panoramic images. In this paper, the panoramic image is generated from an omnidirectional image using the geometry of the omnidirectional camera. In particular, focus of expansion (FOE) and focus of contraction (FOC) vectors are defined from the estimated optical flow in omnidirectional and panoramic images, and are used as reference vectors for the relative evaluation of optical flow. Moving obstacles are identified through this relative evaluation of optical flows. The proposed algorithm is tested on four motions of a mobile robot: straight forward, left turn, right turn, and rotation. The effectiveness of the proposed method is shown by experimental results.
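The FOE-relative evaluation can be illustrated with a small sketch: for a forward-moving robot, flow from static scene points is radial away from the FOE, so a large angular deviation flags a moving obstacle. The threshold and hand-made flow vectors below are hypothetical; the real method works on optical flow estimated from image sequences.

```python
import numpy as np

def flow_deviation(points, flows, foe):
    """Angle (degrees) between each observed flow vector and the
    expected radial direction away from the focus of expansion."""
    expected = points - foe          # radial direction from FOE
    dot = (expected * flows).sum(axis=1)
    norm = np.linalg.norm(expected, axis=1) * np.linalg.norm(flows, axis=1)
    cosang = np.clip(dot / np.maximum(norm, 1e-9), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))

def detect_moving(points, flows, foe, thresh_deg=30.0):
    """Flag points whose flow deviates too far from the radial pattern."""
    return flow_deviation(points, flows, foe) > thresh_deg

foe = np.array([160.0, 120.0])
pts = np.array([[200.0, 120.0], [160.0, 200.0], [100.0, 60.0]])
# first two flows are radial (static scene); third moves sideways (obstacle)
flows = np.array([[5.0, 0.0], [0.0, 8.0], [4.0, 0.0]])
moving = detect_moving(pts, flows, foe)
```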

Using Omnidirectional Images for Semi-Automatically Generating IndoorGML Data

  • Claridades, Alexis Richard;Lee, Jiyeong;Blanco, Ariel
    • 한국측량학회지
    • /
    • Volume 36, Issue 5
    • /
    • pp.319-333
    • /
    • 2018
  • As human beings spend more time indoors, and with the growing complexity of indoor spaces, more focus is given to indoor spatial applications and services. 3D topological networks are used for various spatial applications that involve indoor navigation, such as emergency evacuation, indoor positioning, and visualization. Manually generating indoor network data is impractical and prone to errors, yet current methods of automation need expensive sensors or datasets that are difficult and expensive to obtain and process. In this research, a methodology for semi-automatically generating a 3D indoor topological model based on IndoorGML (Indoor Geographic Markup Language) is proposed. The concept of a Shooting Point is defined to accommodate the use of omnidirectional images in generating IndoorGML data. Omnidirectional images were captured at selected Shooting Points in the building using a fisheye camera lens and rotator, and indoor spaces were then identified using image processing implemented in Python. Relative positions of spaces obtained from CAD (Computer-Aided Design) drawings were used to generate 3D node-relation graphs representing adjacency, connectivity, and accessibility in the study area. Subspacing is performed to more accurately depict large indoor spaces and actual pedestrian movement. Since the images provide very realistic visualization, the topological relationships were used to link them to produce an indoor virtual tour.
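A node-relation graph of the kind IndoorGML encodes can be sketched minimally: spaces become nodes and topological relations become labeled edges. The class, space identifiers, and relation labels below are illustrative assumptions, not the paper's actual data model.

```python
class NodeRelationGraph:
    """Minimal IndoorGML-style node-relation graph:
    nodes are indoor spaces, edges are topological relations."""

    def __init__(self):
        self.nodes = {}   # space id -> attributes (e.g. centroid)
        self.edges = {}   # (a, b) sorted pair -> relation type

    def add_space(self, sid, centroid):
        self.nodes[sid] = {"centroid": centroid}

    def relate(self, a, b, relation):
        # store an undirected relation: adjacency / connectivity / accessibility
        self.edges[tuple(sorted((a, b)))] = relation

    def neighbors(self, sid, relation=None):
        return [b if a == sid else a
                for (a, b), r in self.edges.items()
                if sid in (a, b) and (relation is None or r == relation)]

g = NodeRelationGraph()
g.add_space("room101", (3.0, 4.0, 0.0))
g.add_space("corridor1", (6.0, 4.0, 0.0))
g.add_space("room102", (9.0, 4.0, 0.0))
g.relate("room101", "corridor1", "connectivity")  # shared door
g.relate("room102", "corridor1", "connectivity")
g.relate("room101", "room102", "adjacency")       # shared wall, no door
```

Filtering by relation type is what separates, say, an evacuation route query (connectivity only) from a wall-sharing query (adjacency).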

컬러 전방향 영상 이해에 기반한 이동 로봇의 위치 추정 (Global Positioning of a Mobile Robot based on Color Omnidirectional Image Understanding)

  • 김태균;이영진;정명진
    • 대한전기학회논문지:시스템및제어부문D
    • /
    • Volume 49, Issue 6
    • /
    • pp.307-315
    • /
    • 2000
  • For autonomy, a mobile robot first needs to know its own position and orientation. Various methods for estimating the position of a robot have been developed. However, it is still difficult to localize a robot without any initial position or orientation estimate. In this paper, we present a method for building a colored map and for calculating the position and orientation of a robot using the angle data of an omnidirectional image. The walls of the map are rendered with the corresponding color images, and the color histograms of the images and the coordinates of feature points are stored in the map. The mobile robot then captures a color omnidirectional image at an arbitrary position and orientation, segments it, and recognizes objects by multiple color indexing. Using the recognized objects, the robot obtains enough feature points to localize itself.
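Color indexing of this kind rests on histogram comparison; a minimal sketch using Swain-Ballard histogram intersection (an assumed stand-in for the paper's exact indexing scheme), with synthetic pixel data:

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Coarse RGB histogram, normalized so it is comparable across images."""
    idx = (pixels // (256 // bins)).astype(int)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    h = np.bincount(flat, minlength=bins ** 3).astype(float)
    return h / h.sum()

def intersection(h1, h2):
    """Histogram intersection: 1.0 means identical color distributions."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(0)
red_wall = rng.integers(0, 256, (500, 3))
red_wall[:, 0] = 255   # strongly red object
blue_door = rng.integers(0, 256, (500, 3))
blue_door[:, 2] = 255  # strongly blue object
query = red_wall.copy()

scores = {name: intersection(color_histogram(query), color_histogram(img))
          for name, img in {"red_wall": red_wall, "blue_door": blue_door}.items()}
best = max(scores, key=scores.get)
```

In the paper's setting, each recognized object contributes known map feature points, from which position and heading follow via the angle data.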


인라이어 분포를 이용한 전방향 카메라의 보정 (Calibration of Omnidirectional Camera by Considering Inlier Distribution)

  • 홍현기;황용호
    • 한국게임학회 논문지
    • /
    • Volume 7, Issue 4
    • /
    • pp.63-70
    • /
    • 2007
  • Omnidirectional camera systems with a wide field of view can acquire a large amount of information about the surrounding scene from only a few images, and are therefore widely applied in fields such as surveillance and 3D analysis. This paper proposes a new self-calibration algorithm that automatically estimates the translation and rotation parameters of an omnidirectional camera with a fisheye lens from its input images. First, the camera's projection model, expressed with a single parameter, is estimated using images obtained by rotating the camera to arbitrary angles. Essential matrices are then computed for the variously transformed camera positions; in this process, to obtain a suitable inlier set from the target images, the standard deviation reflecting how the feature points are distributed within the region is used as a quantitative criterion. Various experiments confirm that the proposed algorithm accurately estimates the projection model and the rotation and translation parameters of the omnidirectional camera.
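The spread criterion can be sketched as follows: among hypotheses with equally many inliers, prefer the one whose inliers are spatially well distributed. The score and the toy candidate sets below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def spread_score(inlier_pts):
    """Standard deviation of inlier coordinates about their centroid;
    a larger value means the inliers cover the image more evenly."""
    if len(inlier_pts) < 2:
        return 0.0
    return float(np.linalg.norm(np.std(inlier_pts, axis=0)))

def pick_model(candidates):
    """Choose a hypothesis by inlier count, breaking ties with the
    spatial spread of the inliers (the paper's quantitative criterion)."""
    return max(range(len(candidates)),
               key=lambda i: (len(candidates[i]), spread_score(candidates[i])))

clustered = np.array([[10.0, 10.0], [11.0, 12.0], [9.0, 11.0], [10.0, 12.0]])
spread = np.array([[5.0, 5.0], [300.0, 20.0], [40.0, 220.0], [280.0, 230.0]])
best = pick_model([clustered, spread])
```

A tightly clustered inlier set constrains the essential matrix poorly even when its residuals are small, which is why the spread matters.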


Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won;Kwon, Kee-Koo;Lee, Soo-In;Choi, Jeong-Won;Lee, Suk-Gyu
    • ETRI Journal
    • /
    • Volume 36, Issue 6
    • /
    • pp.913-923
    • /
    • 2014
  • This paper proposes a global mapping algorithm for multiple robots from an omnidirectional-vision simultaneous localization and mapping (SLAM) approach based on an object extraction method using Lucas-Kanade optical flow motion detection and images obtained through fisheye lenses mounted on robots. The multi-robot mapping algorithm draws a global map by using map data obtained from all of the individual robots. Global mapping takes a long time to process because it exchanges map data among individual robots while searching all areas. An omnidirectional image sensor has many advantages for object detection and mapping because it can measure all information around a robot simultaneously. The computational cost of the correction algorithm is reduced compared with existing methods by correcting only the objects' feature points. The proposed algorithm has two steps: first, a local map is created based on an omnidirectional-vision SLAM approach for individual robots. Second, a global map is generated by merging the individual maps from multiple robots. The reliability of the proposed mapping algorithm is verified through a comparison of maps based on the proposed algorithm and real maps.
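The second step, merging individual maps into a global map, can be sketched with occupancy grids. The known integer offsets below stand in for inter-robot alignment, which the real system has to estimate.

```python
import numpy as np

def merge_maps(local_maps, offsets, global_shape):
    """Paste each robot's local occupancy grid into the global frame
    at its known (row, col) offset; overlapping cells keep the
    maximum occupancy value."""
    g = np.zeros(global_shape)
    for m, (oy, ox) in zip(local_maps, offsets):
        h, w = m.shape
        region = g[oy:oy + h, ox:ox + w]
        np.maximum(region, m, out=region)  # write through the view into g
    return g

m1 = np.zeros((4, 4)); m1[1, 1] = 1.0   # robot 1 sees an object
m2 = np.zeros((4, 4)); m2[2, 2] = 1.0   # robot 2 sees another
global_map = merge_maps([m1, m2], [(0, 0), (2, 2)], (8, 8))
```

Taking the per-cell maximum is a conservative merge rule: a cell marked occupied by any robot stays occupied in the global map.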

나사산 전면검사 비전시스템의 영상 균일도 향상을 위한 조명 광학계 설계 및 해석 (Design and Analysis of Illumination Optics for Image Uniformity in Omnidirectional Vision Inspection System for Screw Threads)

  • 이창훈;임영은;박근;나승우
    • 한국정밀공학회지
    • /
    • Volume 31, Issue 3
    • /
    • pp.261-268
    • /
    • 2014
  • Precision screws have a wide range of industrial applications, such as electrical and automotive products. To produce screw threads with high precision, not only high-precision manufacturing technology but also reliable measurement technology is required. Machine vision systems have been used in the automatic inspection of screw threads based on backlight illumination, which cannot detect defects on the thread surface. Recently, an omnidirectional inspection system for screw threads was developed to obtain 360° images of screws based on front-light illumination. In this study, the illumination design for the omnidirectional inspection system was modified by adding a light shield to improve image uniformity. Optical simulation for various shield designs was performed to analyze the uniformity of the obtained images. The simulation results were analyzed statistically using the response surface method, from which the optical performance of the omnidirectional inspection system could be optimized in terms of image quality and uniformity.
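The response-surface step amounts to fitting a low-order polynomial to simulated responses and reading off the stationary point. The shield heights and uniformity scores below are hypothetical sample values for illustration only, not the paper's simulation results.

```python
import numpy as np

# hypothetical shield heights (mm) and simulated image-uniformity scores
h = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
u = np.array([0.61, 0.74, 0.82, 0.80, 0.69])

# second-order response surface: u ~ a*h^2 + b*h + c
a, b, c = np.polyfit(h, u, 2)

# stationary point of the fitted quadratic = predicted optimal height
h_opt = -b / (2.0 * a)
```

With more design variables, the same idea generalizes to a multivariate quadratic fit whose stationary point is found by solving a small linear system.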

구조광 영상기반 전방향 거리측정 시스템 개발 (Development of Omnidirectional Ranging System Based on Structured Light Image)

  • 신진;이수영
    • 제어로봇시스템학회논문지
    • /
    • Volume 18, Issue 5
    • /
    • pp.479-486
    • /
    • 2012
  • In this paper, a ranging system is proposed that is able to measure 360-degree omnidirectional distances to objects in the environment. The ranging system is based on a structured light imaging system with a catadioptric omnidirectional mirror. In order to make the ranging system robust against environmental illumination, efficient structured light image processing algorithms are developed: sequential integration of difference images with modulated structured light, and a radial search based on the Bresenham line drawing algorithm. A dedicated FPGA image processor is developed to speed up the overall image processing. The distance equation is also derived for the omnidirectional imaging system with a hyperbolic mirror. The omnidirectional ranging system is expected to be useful for the mapping and localization of mobile robots. Experiments are carried out to verify the performance of the proposed ranging system.
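The radial search step can be sketched as follows. This simplified version steps along the ray with rounding rather than a true Bresenham traversal, and the synthetic ring image stands in for a real difference image of the structured-light stripe.

```python
import numpy as np

def radial_search(img, cx, cy, theta, r_max, thresh=0.5):
    """Walk outward from the image center along direction theta and
    return the radius of the first pixel above threshold (the
    structured-light stripe), or None if nothing is found."""
    for r in range(1, r_max):
        x = int(round(cx + r * np.cos(theta)))
        y = int(round(cy + r * np.sin(theta)))
        if 0 <= y < img.shape[0] and 0 <= x < img.shape[1] and img[y, x] > thresh:
            return r
    return None

# synthetic difference image: bright laser ring at radius 20
img = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img[np.abs(np.hypot(xx - 32, yy - 32) - 20.0) < 0.7] = 1.0

r0 = radial_search(img, 32, 32, 0.0, 30)         # along +x
r90 = radial_search(img, 32, 32, np.pi / 2, 30)  # along +y
```

In the full system, the recovered pixel radius for each azimuth is converted to a metric distance through the hyperbolic-mirror distance equation.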

투영곡선의 자동정합을 이용한 전방향 카메라 보정 (Using Contour Matching for Omnidirectional Camera Calibration)

  • 황용호;홍현기
    • 대한전자공학회논문지SP
    • /
    • Volume 45, Issue 6
    • /
    • pp.125-132
    • /
    • 2008
  • Omnidirectional camera systems, widely used in fields such as surveillance and robotics, provide a wide field of view. Most previous studies on estimating the projection model and extrinsic parameters of an omnidirectional camera assume pre-established correspondences between images. This paper proposes a new algorithm that estimates the extrinsic parameters of the camera by automatically matching projected contours between two omnidirectional images. First, initial camera parameters are computed from the epipolar constraint on corresponding feature points in the two images. Correspondences between the detected feature points and projected contours are then determined with an active matching method. In the final step, the camera parameters are estimated by minimizing the angular error between the epipolar plane of each endpoint of a corresponding contour and its 3D vector. Experiments on synthetic images and real images acquired with a fisheye lens confirm that the proposed algorithm estimates the extrinsic camera parameters more accurately than existing methods.
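The angular error minimized in the final step can be sketched for a single correspondence: the angle between a ray in the second view and the epipolar plane induced by the matching ray in the first view. The essential matrix below is for an assumed pure-translation case, not data from the paper.

```python
import numpy as np

def epipolar_angle_error(E, x1, x2):
    """Angle (degrees) between ray x2 and the epipolar plane of x1,
    whose normal is E @ x1. Zero when the epipolar constraint
    x2^T E x1 = 0 holds exactly."""
    n = E @ x1
    s = abs(np.dot(n, x2)) / (np.linalg.norm(n) * np.linalg.norm(x2))
    return np.degrees(np.arcsin(np.clip(s, 0.0, 1.0)))

# E = [t]_x R for translation t = (1, 0, 0) and R = I
E = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
x1 = np.array([0.0, 0.0, 1.0])        # ray to a point at depth 5
x2_good = np.array([-1.0, 0.0, 5.0])  # same point seen from the second camera
x2_bad = np.array([-1.0, 0.5, 5.0])   # perturbed correspondence
e_good = epipolar_angle_error(E, x1, x2_good)
e_bad = epipolar_angle_error(E, x1, x2_bad)
```

An angular cost of this form is attractive for fisheye imagery because it avoids reprojecting through the nonlinear lens model.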

Feature Matching for Omnidirectional Images Based on Singular Value Decomposition

  • Kim, Do-Yoon;Lee, Young-Jin;Chung, Myung-Jin
    • 제어로봇시스템학회:학술대회논문집
    • /
    • ICCAS 2002
    • /
    • pp.98.2-98
    • /
    • 2002
  • Omnidirectional feature matching
  • SVD-based matching algorithm
  • Using SSD instead of zero-mean correlation
  • Gaussian-weighted similarity
  • Low computational cost
  • Describes the similarity of matched pairs in omnidirectional images
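A Scott and Longuet-Higgins style SVD matching sketch along the lines of the bullet points above: a Gaussian similarity built from SSD distances is orthogonalized via the SVD, and mutually dominant entries give one-to-one matches. The toy descriptors and sigma value are illustrative assumptions.

```python
import numpy as np

def svd_match(desc1, desc2, sigma=0.5):
    """SVD-based matching: build a Gaussian proximity matrix from SSD
    distances, orthogonalize it (G = U S V^T -> P = U V^T), and accept
    pairs that dominate both their row and their column of P."""
    ssd = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(axis=2)
    G = np.exp(-ssd / (2.0 * sigma ** 2))   # Gaussian-weighted similarity
    U, _, Vt = np.linalg.svd(G)
    k = min(G.shape)
    P = U[:, :k] @ Vt[:k, :]
    return [(i, j) for i in range(G.shape[0])
            for j in range(G.shape[1])
            if P[i, j] == P[i, :].max() and P[i, j] == P[:, j].max()]

a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.05, 1.0], [0.0, 0.05], [1.0, 0.0]])  # permuted + noise
m = svd_match(a, b)
```

Using SSD with a Gaussian weighting, as the bullets note, is cheaper than zero-mean correlation while still concentrating similarity mass on close pairs.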
