• Title/Summary/Keyword: fisheye

Search results: 95

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won; Kwon, Kee-Koo; Lee, Soo-In; Choi, Jeong-Won; Lee, Suk-Gyu
    • ETRI Journal, v.36 no.6, pp.913-923, 2014
  • This paper proposes a global mapping algorithm for multiple robots based on an omnidirectional-vision simultaneous localization and mapping (SLAM) approach, with objects extracted via Lucas-Kanade optical-flow motion detection from images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map from the map data gathered by all of the individual robots. Global mapping is time-consuming because map data must be exchanged among the robots while they search the entire area. An omnidirectional image sensor offers many advantages for object detection and mapping because it measures all information around a robot simultaneously. The computational cost of the correction algorithm is reduced relative to existing methods by correcting only the objects' feature points. The proposed algorithm has two steps: first, a local map is created for each robot using the omnidirectional-vision SLAM approach; second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified by comparing maps produced by the algorithm with the real maps.
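A minimal sketch of the optical-flow object-extraction step this abstract describes, using OpenCV's pyramidal Lucas-Kanade tracker on consecutive fisheye frames; the feature detector, window size, and motion threshold are illustrative assumptions, not the authors' implementation:

```python
# Sketch: detect moving object points between consecutive fisheye frames with
# pyramidal Lucas-Kanade optical flow (OpenCV). Parameters are illustrative.
import cv2
import numpy as np

def track_moving_features(prev_gray, curr_gray, motion_thresh=2.0):
    """Return tracked points whose apparent motion exceeds motion_thresh pixels."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2), dtype=np.float32)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None,
                                                 winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    p0 = pts[ok].reshape(-1, 2)
    p1 = nxt[ok].reshape(-1, 2)
    flow = np.linalg.norm(p1 - p0, axis=1)
    return p1[flow > motion_thresh]      # candidate object points for the local map
```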

3D Omni-directional Vision SLAM using a Fisheye Lens Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won; Choi, Jeong Won; Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems, v.21 no.7, pp.634-640, 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, multi-function sensors, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not well suited to mobile-robot applications because an RGB-D system with multiple cameras is bulky and slow at computing depth for omni-directional images. In this paper, we use a fisheye camera installed facing downward and a two-dimensional laser scanner mounted at a fixed distance from the camera. We calculate fusion points from the planar coordinates of obstacles obtained from the two-dimensional laser scanner and the obstacle outlines obtained from the omni-directional image sensor, which acquires a surround view at the same time. The effectiveness of the proposed method is confirmed by comparing maps obtained with the proposed algorithm against real maps.
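A hedged sketch of the sensor-fusion step described above: projecting 2-D laser hits into the downward-facing fisheye image so they can be matched with obstacle outlines. OpenCV's equidistant fisheye model is assumed, and the calibration parameters (K, D, rvec, tvec) are placeholders for a real scanner-to-camera calibration:

```python
# Sketch: project 2-D laser-scanner hits into the downward-facing fisheye image
# so range points and image obstacle outlines can be fused. K, D, rvec, tvec
# (placeholders) would come from calibrating the scanner-to-camera geometry.
import cv2
import numpy as np

def project_scan_to_fisheye(ranges, angles, K, D, rvec, tvec, scan_height=0.0):
    """ranges/angles: polar laser scan (metres, radians) -> Nx2 pixel coordinates."""
    # Laser hits expressed in the scanner frame; z is the height of the scan plane.
    pts = np.stack([ranges * np.cos(angles),
                    ranges * np.sin(angles),
                    np.full_like(ranges, scan_height)], axis=-1)
    pts = pts.astype(np.float64).reshape(-1, 1, 3)
    img_pts, _ = cv2.fisheye.projectPoints(pts, rvec, tvec, K, D)
    return img_pts.reshape(-1, 2)
```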

Comparison the Mapping Accuracy of Construction Sites Using UAVs with Low-Cost Cameras

  • Jeong, Hohyun; Ahn, Hoyong; Shin, Dongyoon; Choi, Chuluong
    • Korean Journal of Remote Sensing, v.35 no.1, pp.1-13, 2019
  • The advent of a fourth industrial revolution, built on advances in digital technology, has coincided with studies using various unmanned aerial vehicles (UAVs) being performed worldwide. However, the accuracy of different sensors and their suitability for particular studies are factors that need to be evaluated carefully. In this study, we evaluated UAV photogrammetry using smart technology. To assess the performance of digital photogrammetry, the accuracy of common procedures for generating orthomosaic images and digital surface models (DSMs) was measured against terrestrial laser scanning (TLS). Two types of non-surveying camera (a smartphone camera and a fisheye camera) were attached to the UAV platform. For the fisheye camera, lens distortion was corrected by taking the characteristics of the lens into account. The accuracy of the generated orthoimages and DSMs was comparatively analyzed using the aerial and TLS data: first, the orthomosaic images were compared against check points over a defined area; then the vertical errors of the camera-derived DSMs were analyzed with the TLS data as reference. We propose and evaluate the feasibility of UAV photogrammetry, which can acquire 3-D spatial information at low cost on a construction site.
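A minimal sketch of the fisheye distortion-correction step mentioned in the abstract, assuming OpenCV's equidistant fisheye model; the intrinsics K and distortion coefficients D would come from calibrating the specific low-cost camera and are not given here:

```python
# Sketch: undistort a fisheye image before photogrammetric processing, using
# OpenCV's equidistant fisheye model. K (3x3 intrinsics) and D (4 distortion
# coefficients) are assumed to come from a prior calibration of the camera.
import cv2
import numpy as np

def undistort_fisheye(img, K, D, balance=0.0):
    """balance=0 crops to valid pixels; balance=1 keeps the full field of view."""
    h, w = img.shape[:2]
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=balance)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    return cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
```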

A Nationwide Study on Optical Analysis for Expecting HEOs to Support Ambulances

  • Nakajima, Isao; Tsuda, Kazuhide; Juzoji, Hiroshi; Ta, Masuhisa; Nakajima, Atsushi
    • Journal of Multimedia Information System, v.6 no.2, pp.107-118, 2019
  • This paper deals with actual optical data captured with fisheye cameras in rural as well as urban areas in a nationwide study. Simultaneously, received-power-density data were collected from the mobile communications satellite N-STAR. The visibility of the satellite is easily determined by checking the pixel value at its position in the binarized fisheye image, and the process of determining visibility is performed automatically. Based on the analyses of the field data measured in Japan, we expect HEOs (highly inclined elliptical orbiters) to reduce blockage both in the extreme northern region of Wakkanai City and in the most crowded urban area, Tokyo's Ginza district. With HEO operation, the elevation angle improves from 37 degrees (with the N-STAR GEO) to 75 degrees. HEOs could replace or supplement 5G/Ka-band in rural areas where no broadband circuit is available. We propose combined operation of HEOs and 5G/Ka-band to solve blockage problems, because HEOs can maintain line-of-sight propagation at a high elevation angle for long durations. In such operations, a communications profile for the vehicle based on actual optical data will be very useful for predicting blockages and for selecting or switching to a suitable circuit.
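A small sketch of the visibility test the abstract describes: mapping the satellite's azimuth and elevation to a pixel in the binarized upward-facing fisheye image and reading its value. An ideal equidistant (f-theta) projection centred on the zenith, with north up, is an assumption; the real camera would need its own calibration:

```python
# Sketch: decide whether a satellite at (azimuth, elevation) is visible in a
# binarized upward-facing fisheye sky image (255 = open sky, 0 = blocked).
# An ideal equidistant (f-theta) projection centred on the zenith is assumed.
import numpy as np

def satellite_visible(binary_sky, azimuth_deg, elevation_deg, cx, cy, radius_px):
    """radius_px: image radius corresponding to the horizon (90 deg zenith angle)."""
    zenith_angle = 90.0 - elevation_deg        # 0 at zenith, 90 at horizon
    r = radius_px * zenith_angle / 90.0        # equidistant mapping: r proportional to angle
    az = np.radians(azimuth_deg)
    x = int(round(cx + r * np.sin(az)))        # assumes north is up, east to the right
    y = int(round(cy - r * np.cos(az)))
    return bool(binary_sky[y, x] == 255)
```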

Vision-based Mobile Robot Localization and Mapping using fisheye Lens (어안렌즈를 이용한 비전 기반의 이동 로봇 위치 추정 및 매핑)

  • Lee Jong-Shill; Min Hong-Ki; Hong Seung-Hong
    • Journal of the Institute of Convergence Signal Processing, v.5 no.4, pp.256-262, 2004
  • A key capability of an autonomous mobile robot is to localize itself and build a map of the environment simultaneously. In this paper, we propose a vision-based localization and mapping algorithm for a mobile robot using a fisheye lens. To acquire high-level features with scale invariance, a camera with a fisheye lens facing the ceiling is attached to the robot; these features are used in map building and localization. As preprocessing, the input fisheye image is calibrated to remove radial distortion, and then labeling and convex-hull techniques are used to segment the ceiling and wall regions in the calibrated image. During initial map building, features are calculated for each segmented region and stored in the map database. Features are then calculated continuously for sequential input images and matched to the map; features that do not match are added to the map. This matching and updating process continues until map building is finished. Localization is performed both during map building and when searching for the robot's location on the map: the features calculated at the robot's position are matched to the existing map to estimate its real position, and the map database is updated at the same time. With the proposed method, the elapsed time for map building is within 2 minutes for a 50 ㎡ region, the positioning accuracy is ±13 cm, and the heading-angle error is ±3 degrees.
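A hedged sketch of the preprocessing described above (labeling plus convex hull on the calibrated ceiling-view image); the Otsu threshold and the largest-bright-component rule are illustrative choices, not the paper's exact procedure:

```python
# Sketch: segment the ceiling region in the calibrated (undistorted) ceiling-view
# image with connected-component labeling and a convex hull. The Otsu threshold
# and the "largest bright component is the ceiling" rule are illustrative.
import cv2
import numpy as np

def ceiling_hull(undistorted_gray):
    _, binary = cv2.threshold(undistorted_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    if n <= 1:
        return None                                   # nothing but background found
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    ys, xs = np.where(labels == largest)
    pts = np.stack([xs, ys], axis=-1).astype(np.int32)
    return cv2.convexHull(pts)                        # polygon bounding the ceiling region
```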


Visualization of Web Information using Fisheye View in Mobile device (Mobile Device에서 Fisheye View를 이용한 웹 정보 시각화)

  • Kim Sun-Hee; Lee Jung-Hun; Yoo Hee-Yong; Cheon Suh-Hyun
    • Proceedings of the Korean Information Science Society Conference, 2005.11a, pp.619-621, 2005
  • The growing adoption of mobile devices has increased users' access to the Internet. However, the small, limited interface of a mobile device makes it difficult to present a large amount of information on screen. Methods such as scrolling are used to work around this, but they make it hard to grasp the relationships within the information as a whole and are unsuitable for representing graph structures that contain cycles. To address these problems, this paper proposes an information-layout method that uses the display space of a mobile device effectively, together with a visualization technique that lets users browse information efficiently. The proposed technique reduces the loss of screen-corner space by using a Rectangle layout, an improvement on the Radial layout that arranges information on a circle. A visualization algorithm is then applied to the information laid out on the rectangle so that the relationships among items can be grasped at a glance and the desired information reached more quickly.
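A small sketch of the Rectangle layout idea from this abstract: nodes are spread along the border of the rectangular mobile screen instead of along a circle, so corner space is not wasted. Even spacing by perimeter length is one illustrative reading of the layout, not the authors' algorithm:

```python
# Sketch: a Rectangle layout that spreads n nodes evenly along the border of the
# screen rectangle instead of along an inscribed circle, so the corners of a
# small display are used rather than wasted.
def rectangle_layout(n_nodes, width, height, margin=4):
    w, h = width - 2 * margin, height - 2 * margin
    perimeter = 2 * (w + h)
    positions = []
    for i in range(n_nodes):
        d = perimeter * i / n_nodes            # arc length walked along the border
        if d < w:                              # top edge, left to right
            x, y = d, 0
        elif d < w + h:                        # right edge, top to bottom
            x, y = w, d - w
        elif d < 2 * w + h:                    # bottom edge, right to left
            x, y = w - (d - w - h), h
        else:                                  # left edge, bottom to top
            x, y = 0, h - (d - 2 * w - h)
        positions.append((margin + x, margin + y))
    return positions
```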


An Interpolation Method for a Barrel Distortion Using Nearest Pixels on a Corrected Image (방사왜곡을 고려한 보정 영상 위최근접 화소 이용 보간법)

  • Choi, Changwon; Yi, Joonhwan
    • Journal of the Institute of Electronics and Information Engineers, v.50 no.7, pp.181-190, 2013
  • We propose an interpolation method for correcting the barrel distortion of a fisheye lens that uses the nearest pixels on the corrected image. Correcting barrel distortion comprises coordinate transformation and interpolation; this paper focuses on the interpolation. Unlike existing techniques, the proposed method uses the four nearest coordinates on the corrected image rather than on the distorted image. Experimental results show that both subjective and objective image quality are improved.
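For contrast, a hedged sketch of the conventional reverse-mapping baseline that this paper refines: each corrected-image pixel is mapped back into the distorted image and interpolated bilinearly from the four nearest distorted-image pixels, whereas the proposed method picks its four neighbours on the corrected image. The single-coefficient radial model is an illustrative assumption:

```python
# Sketch: conventional barrel-distortion correction by reverse mapping with
# bilinear interpolation in the *distorted* image (the baseline, not the
# paper's proposed variant). The radial model x_src = x*(1 + k*r^2) and its
# coefficient k are illustrative assumptions.
import numpy as np

def correct_barrel_bilinear(distorted, k, cx, cy):
    """distorted: 2-D (grayscale) array; returns the corrected image."""
    h, w = distorted.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    xd, yd = xs - cx, ys - cy
    r2 = xd * xd + yd * yd
    src_x = np.clip(cx + xd * (1 + k * r2), 0, w - 1.001)   # reverse-mapped source
    src_y = np.clip(cy + yd * (1 + k * r2), 0, h - 1.001)
    x0, y0 = np.floor(src_x).astype(int), np.floor(src_y).astype(int)
    fx, fy = src_x - x0, src_y - y0
    img = distorted.astype(np.float64)
    top = img[y0, x0] * (1 - fx) + img[y0, x0 + 1] * fx
    bot = img[y0 + 1, x0] * (1 - fx) + img[y0 + 1, x0 + 1] * fx
    return (top * (1 - fy) + bot * fy).astype(distorted.dtype)
```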

Verification Method of Omnidirectional Camera Model by Projected Contours (사영된 컨투어를 이용한 전방향 카메라 모델의 검증 방법)

  • Hwang, Yong-Ho; Lee, Jae-Man; Hong, Hyun-Ki
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집), 2007.02a, pp.994-999, 2007
  • An omnidirectional camera system has the advantage of acquiring a large amount of information about the surrounding scene from relatively few images, so research on self-calibration and 3-D reconstruction using omnidirectional images is being actively pursued. This paper proposes a new method for verifying the accuracy of a projection model estimated with previously proposed calibration methods. The many straight-line elements present in the real world are projected onto an omnidirectional image as contours, and each contour's trajectory can be estimated from the projection model and the coordinates of the contour's two endpoints. The accuracy of the omnidirectional camera's projection model can then be verified from the distance error between the estimated contour trajectory and the contour actually observed in the image. To evaluate the performance of the proposed method, the algorithm was applied to spherically mapped synthetic images and to real images acquired with a fisheye lens, and the accuracy of the projection models was assessed.
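A minimal sketch of the verification idea in the abstract: sample a 3-D line, project the samples through a candidate omnidirectional model, and measure the distance error to the observed contour. The equidistant model below is an assumption; the paper evaluates whatever model a given calibration method estimated:

```python
# Sketch: project samples of a 3-D line through a fisheye projection model and
# measure the distance error to the contour observed in the image. The
# equidistant (r = f * theta) model is an illustrative stand-in.
import numpy as np

def project_equidistant(points_cam, f, cx, cy):
    """points_cam: Nx3 points in the camera frame -> Nx2 pixel coordinates."""
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    theta = np.arctan2(np.hypot(x, y), z)          # angle from the optical axis
    phi = np.arctan2(y, x)
    r = f * theta
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=-1)

def contour_distance_error(line_p0, line_p1, observed_contour, f, cx, cy, n=100):
    """Mean distance from projected line samples to the nearest observed contour pixel."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    samples = (1 - t) * line_p0 + t * line_p1       # 3-D points along the line segment
    proj = project_equidistant(samples, f, cx, cy)  # their projected contour
    d = np.linalg.norm(proj[:, None, :] - observed_contour[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```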


Realization for Image Distortion Correction Processing System with Fisheye Lens Camera

  • Kim, Ja-Hwan; Ryu, Kwang-Ryol; Sclabassi, Robert J.
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2007.10a, pp.281-284, 2007
  • An image distortion correction processing system implemented on a DSP is presented in this paper. The distortion-correction algorithm is realized on the DSP with an emphasis on real-time processing rather than image quality. The lens and camera distortion coefficients are handled through YCbCr lookup tables, and the correction applies a reverse-mapping method for the geometric transform. In experiments, the system processes a 720×480 distorted image with a 150-degree field of view in about 34.6 ms.
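A rough PC-side analogue of the lookup-table approach described above: the reverse geometric mapping is precomputed once into integer tables and each frame is then corrected by plain nearest-neighbour table lookup, trading quality for speed. The radial model, its coefficient, and the omission of the per-channel YCbCr tables are assumptions:

```python
# Sketch of a table-driven reverse mapping: the source coordinate of every
# output pixel is precomputed once into integer lookup tables, and each frame
# is corrected by plain array indexing (nearest-neighbour). Model and
# coefficient are illustrative.
import numpy as np

def build_lut(w, h, k, cx, cy):
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    xd, yd = xs - cx, ys - cy
    r2 = xd * xd + yd * yd
    src_x = np.clip(np.rint(cx + xd * (1 + k * r2)), 0, w - 1).astype(np.intp)
    src_y = np.clip(np.rint(cy + yd * (1 + k * r2)), 0, h - 1).astype(np.intp)
    return src_y, src_x

# Precompute once for 720x480 frames, then per frame:
# lut_y, lut_x = build_lut(720, 480, k=-3e-7, cx=360.0, cy=240.0)
# corrected = frame[lut_y, lut_x]        # works for grayscale or colour frames
```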


Visualization of web pages for information search and analysis based on data adjacency in Internet Environment (인터넷 환경에서 데이터 인접성에 기반한 정보 검색 및 분석을 위한 웹페이지 시각화)

  • Byeon, Hyeon-Su; Kim, Jin-Hwa
    • Proceedings of the Korea Database Society Conference, 2008.05a, pp.211-224, 2008
  • As users are presented with ever more information and media in today's Internet space, they increasingly feel disoriented or "lost in space." A system is therefore needed that reduces information overload and presents information effectively and efficiently. In this study we present a visualization technique that uses fisheye views based on data adjacency to combine global context with local detail when presenting many results in a limited space. Data adjacency from graph theory is used to define the degree of interest, the central quantity in a fisheye view. Graph theory is useful for problems arising from combinatorial optimization and is particularly well suited to analyzing an information space such as the Internet. To test the usability of the proposed visualization, we compared the effectiveness of different visualization techniques; the results show that our method required less time and produced higher satisfaction in task completion.
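A minimal sketch of a degree-of-interest computation driven by data adjacency, in the spirit of the abstract: the distance term of the fisheye view is the hop count between pages in the link graph, found by breadth-first search. The graph representation and the DOI form are illustrative assumptions:

```python
# Sketch: degree of interest driven by data adjacency. Hop counts from the
# focused page are computed by breadth-first search over the link graph and
# subtracted from an a-priori importance score (Furnas-style DOI).
from collections import deque

def hop_distances(adjacency, focus):
    """adjacency: dict page -> iterable of linked pages; returns hops from focus."""
    dist = {focus: 0}
    queue = deque([focus])
    while queue:
        page = queue.popleft()
        for neighbour in adjacency.get(page, ()):
            if neighbour not in dist:
                dist[neighbour] = dist[page] + 1
                queue.append(neighbour)
    return dist

def degree_of_interest(api, hops):
    """DOI(x) = API(x) - D(focus, x); higher DOI means more screen prominence."""
    return api - hops
```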
