• Title/Summary/Keyword: fisheye images

Search results: 43

Multi-robot Formation based on Object Tracking Method using Fisheye Images (어안 영상을 이용한 물체 추적 기반의 한 멀티로봇의 대형 제어)

  • Choi, Yun Won;Kim, Jong Uk;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.6
    • /
    • pp.547-554
    • /
    • 2013
  • This paper proposes a novel formation algorithm for identical robots based on an object tracking method using omni-directional images obtained through fisheye lenses mounted on the robots. Conventional multi-robot formation methods often enlarge the camera's angle of view with a stereo vision system or a vision system with a reflector, instead of a general-purpose camera with its small angle of view. In addition, to compensate for the lack of image information on the environment, the robots share their position information through communication. The proposed system estimates the regions of the robots using SURF in fisheye images, which contain 360° of image information, without merging images. The whole system controls the robot formation based on the moving directions and velocities of the robots, obtained by applying Lucas-Kanade optical flow estimation to the estimated robot regions. We confirmed the reliability of the proposed formation control strategy for multi-robots through both simulation and experiment.
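The formation control above hinges on Lucas-Kanade optical flow applied to the robot regions found by SURF. A minimal sketch of the Lucas-Kanade estimate, assuming a single global motion vector over a synthetic image pair (the paper applies it per robot region, with windows and pyramids omitted here):

```python
import math

def lucas_kanade(I1, I2):
    """Estimate one global (u, v) motion vector between two equally
    sized grayscale images by solving the Lucas-Kanade normal equations
    built from spatial and temporal image gradients."""
    h, w = len(I1), len(I1[0])
    A11 = A12 = A22 = b1 = b2 = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            Ix = (I1[y][x + 1] - I1[y][x - 1]) / 2.0  # spatial gradient (x)
            Iy = (I1[y + 1][x] - I1[y - 1][x]) / 2.0  # spatial gradient (y)
            It = I2[y][x] - I1[y][x]                  # temporal gradient
            A11 += Ix * Ix; A12 += Ix * Iy; A22 += Iy * Iy
            b1 -= Ix * It; b2 -= Iy * It
    det = A11 * A22 - A12 * A12
    return ((A22 * b1 - A12 * b2) / det, (A11 * b2 - A12 * b1) / det)

# Synthetic smooth image and a copy shifted by 0.3 px in x.
f = lambda x, y: math.sin(0.3 * x) + math.cos(0.25 * y)
I1 = [[f(x, y) for x in range(32)] for y in range(32)]
I2 = [[f(x - 0.3, y) for x in range(32)] for y in range(32)]
u, v = lucas_kanade(I1, I2)
```

On smooth synthetic input the recovered vector is close to the true sub-pixel shift, which is what makes the method usable for velocity estimation.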

Tunnel Mosaic Images Using Fisheye Lens Camera (어안렌즈 카메라를 이용한 터널 모자이크 영상 제작)

  • Kim, Gi-Hong;Song, Yeong-Sun;Kim, Baek-Seok
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.17 no.1
    • /
    • pp.105-111
    • /
    • 2009
  • Construction can be made more convenient and safer with adequate information. Consequently, studies on collecting various kinds of information using the newest surveying technologies and applying them to construction have recently been making progress. Digital images are easy to obtain and contain rich information, and with the recent development of image processing technology their field of application is widening. In this study, we propose using a fisheye lens camera in underground construction sites, especially tunnels, to overcome the inconvenience of photographing with general lens cameras. A program for mapping the surface of a tunnel and producing a mosaic image was also developed. The mosaic image can be used to observe and analyze abnormal phenomena on the tunnel surface such as cracks, water leakage, and exfoliation.
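The mosaic step maps tunnel-wall points seen through the fisheye lens onto a cylinder. As a hedged illustration (the paper's own mapping program is not reproduced here), an equidistant fisheye model r = f·θ gives a simple mapping from cylindrical wall coordinates back to image pixels; the tunnel radius, focal length, and principal point below are made-up values:

```python
import math

def mosaic_to_fisheye(phi, depth, R=5.0, f=300.0, cx=512.0, cy=512.0):
    """Map a point on a cylindrical tunnel wall (azimuth phi in radians,
    axial depth along the tunnel) to pixel coordinates in an equidistant
    fisheye image, r = f * theta. R is the tunnel radius, f the focal
    length in pixels, (cx, cy) the principal point; all illustrative."""
    theta = math.atan2(R, depth)  # angle of the wall point from the optical axis
    r = f * theta                 # equidistant projection radius
    return (cx + r * math.cos(phi), cy + r * math.sin(phi))
```

Sampling (phi, depth) on a regular grid and reading pixels at the returned coordinates produces the unrolled rectangular mosaic; wall points farther down the tunnel land closer to the image center.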


De-blurring Algorithm for Performance Improvement of Searching a Moving Vehicle on Fisheye CCTV Image (어안렌즈사용 CCTV이미지에서 차량 정보 수집의 성능개선을 위한 디블러링 알고리즘)

  • Lee, In-Jung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.4C
    • /
    • pp.408-414
    • /
    • 2010
  • When collecting traffic information from CCTV images, a detection zone must be set in the image area while the pan-tilt system is operating. Automating the detection zone together with the pan-tilt system is difficult because of mechanical error, so a camera with a fisheye lens or a convex mirror is needed to capture wide-area images. This, however, introduces problems: reduced system speed and image distortion. The distortion is caused by occlusion of angled rays, similar to a shaken snapshot from a digital camera. In this paper, we propose two de-blurring methods to overcome the distortion: image segmentation by a nonlinear diffusion equation, and deformation of some of the segmented areas. After de-blurring, the PSNR of the image increases by 15 dB, and the detection rate for collecting traffic information is more than 5% higher than with the distorted images.
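Segmentation by a nonlinear diffusion equation is in the family of Perona-Malik diffusion: smoothing is suppressed where the local gradient is large, so region interiors are flattened while edges survive. A small illustrative explicit step, assuming arbitrary values for the conductance parameter k and time step dt (the paper's exact equation is not given in the abstract):

```python
def perona_malik_step(I, dt=0.2, k=10.0):
    """One explicit step of Perona-Malik nonlinear diffusion: each pixel
    moves toward its 4-neighbours, weighted by a conductance g that
    drops toward zero across strong edges, preserving them."""
    h, w = len(I), len(I[0])
    g = lambda d: 1.0 / (1.0 + (d / k) ** 2)  # edge-stopping conductance
    out = [row[:] for row in I]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = I[y][x]
            flux = 0.0
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                d = I[ny][nx] - c
                flux += g(abs(d)) * d   # weak flow across large jumps
            out[y][x] = c + dt * flux
    return out
```

A lone noise spike is pulled strongly toward its neighbours, while a strong step edge barely moves, which is exactly the behaviour needed before segmentation.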

Distortion Correction of Boundary Lines in a Tunnel Image Captured by Fisheye Lens (어안렌즈 터널영상의 경계선 왜곡 보정)

  • Kim, Gi-Hong;Jeong, Soo
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.19 no.4
    • /
    • pp.55-63
    • /
    • 2011
  • Having a wide angle of view, a fisheye lens is useful for obtaining images of the inside wall of a tunnel. A circular fisheye tunnel image can be transformed into a familiar rectangular image by applying the concept of cylindrical projection. This projection transformation causes several types of distortion in the projected image. This paper discusses the distortion of the boundary lines between the smoothly curved wall and the flat ground. We analyzed the cause of this boundary distortion, developed a transformation model, and derived a correction formula. Distortion correction software written in Visual C++ was applied to the projected image, and a boundary-corrected image was obtained. Research into other distortions of the projected image will help in obtaining a tunnel image that resembles the real tunnel from a fisheye tunnel image.
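The paper's derived correction formula is not reproduced in the abstract. As a generic illustration of the end result only, one can shift each image column so that a detected curved wall/ground boundary becomes a straight horizontal line in the projected image; this is a simplification, not the paper's model-based correction:

```python
def straighten_boundary(columns, boundary):
    """Vertically shift each image column (a list of pixel values) so
    the detected boundary row boundary[x] lands on one common target
    row, straightening a curved wall/ground boundary line."""
    target = round(sum(boundary) / len(boundary))  # common boundary row
    out = []
    for col, b in zip(columns, boundary):
        shift = target - b
        n = len(col)
        # cyclic shift keeps the sketch simple; real code would pad
        out.append([col[(y - shift) % n] for y in range(n)])
    return out
```

After the shift, the boundary pixel of every column sits on the same row, i.e. the boundary is straight.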

Calibration of Fisheye Lens Images Using a Spiral Pattern and Compensation for Geometric Distortion (나선형 패턴을 사용한 어안렌즈 영상 교정 및 기하학적 왜곡 보정)

  • Kim, Seon-Yung;Yoon, In-Hye;Kim, Dong-Gyun;Paik, Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.4
    • /
    • pp.16-22
    • /
    • 2012
  • In this paper, we present a spiral pattern suited to an optical simulator for calibrating a fisheye lens and compensating geometric distortion. Using the spiral pattern, calibration is performed without a mathematical model prepared in advance: the proposed pattern is used as the input image of the optical simulator, and the fisheye lens is calibrated by matching the geometrically displaced dots to the corresponding original dots, so that no mathematical modeling is needed. The algorithm calibrates by dot matching, which pairs each spiral-pattern dot with its dot in the distorted image; because it needs no prior modeling, it is efficient. The proposed algorithm enables pattern-recognition processing that requires exact information from a fisheye lens, such as digital zooming, and makes it possible to compensate geometric distortion and calibrate fisheye lens images in various image processing applications.
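Dot matching without a prior lens model amounts to building a correction table from the matched pattern/image dot pairs. A hedged sketch for the purely radial component (the paper matches dots in 2-D; the pairs below are invented):

```python
import bisect

def build_radial_correction(matches):
    """Model-free radial correction built from matched dot pairs
    (r_distorted, r_true), e.g. from a known calibration pattern.
    Returns a function mapping a distorted radius to a corrected one
    by piecewise-linear interpolation between the matched pairs."""
    pairs = sorted(matches)
    rd = [p[0] for p in pairs]   # radii measured in the distorted image
    rt = [p[1] for p in pairs]   # corresponding true pattern radii
    def correct(r):
        i = bisect.bisect_left(rd, r)
        if i == 0:
            return rt[0]
        if i == len(rd):
            return rt[-1]        # clamp outside the calibrated range
        t = (r - rd[i - 1]) / (rd[i] - rd[i - 1])
        return rt[i - 1] + t * (rt[i] - rt[i - 1])
    return correct
```

No polynomial or other lens model is fitted; the matched dots themselves define the correction, which mirrors the abstract's "no mathematical modeling" claim.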

Sky Condition Analysis using the Processing of Digital Images (디지털 이미지 처리를 통한 천공상태 분석)

  • Park, Seong-Ye;Sim, Yeon-Ji;Hong, Seong-Kwan;Choi, An-Seop
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.30 no.1
    • /
    • pp.14-20
    • /
    • 2016
  • Accurate analysis of outside sky conditions is necessary to increase the efficiency of a blind PV system. To conduct such analysis, this paper suggests a method to analyze the sky condition using a specific image processing technique. While a fisheye lens has a wide field of view, it introduces large distortion into sky images; this paper therefore calculates a conversion ratio for the sky images to account for the lens distortion. As a result of the study, there was a difference of 7% in the cloud area ratio between F4 and F11, and the result also differed depending on the position of the clouds.
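A cloud-area ratio from a fisheye sky image has to compensate for the lens's area distortion, since pixels near the horizon cover far more sky than pixels near the zenith. A sketch under an assumed equidistant lens model (the paper's exact conversion ratio is not given in the abstract; the cloud threshold is arbitrary):

```python
import math

def cloud_cover_ratio(img, cx, cy, radius, thresh=0.7):
    """Weighted cloud-cover ratio from a fisheye sky image with values
    in [0, 1]; pixels above `thresh` count as cloud. Each pixel is
    weighted by the solid angle it subtends under an equidistant lens
    (weight ~ sin(theta) / r), compensating area distortion."""
    cloud = total = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            r = math.hypot(x - cx, y - cy)
            if r > radius:
                continue                            # outside the image circle
            theta = (r / radius) * (math.pi / 2)    # zenith angle of the pixel
            w = math.sin(theta) / max(r, 0.5)       # solid-angle weight
            total += w
            if v > thresh:
                cloud += w
    return cloud / total
```

The ratio is 1.0 for a fully overcast image and 0.0 for a clear one, with intermediate skies weighted by true sky area rather than raw pixel count.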

A Study on Fisheye Lens based Features on the Ceiling for Self-Localization (실내 환경에서 자기위치 인식을 위한 어안렌즈 기반의 천장의 특징점 모델 연구)

  • Choi, Chul-Hee;Choi, Byung-Jae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.4
    • /
    • pp.442-448
    • /
    • 2011
  • There have been many research results on self-localization techniques for mobile robots. In this paper, we present a self-localization technique based on features of the ceiling viewed through a fisheye lens. Features obtained by SIFT (Scale Invariant Feature Transform) are matched between the previous image and the current image, and the optimal transformation is then derived. A fisheye lens naturally causes some distortion in its images, so they must be calibrated by an appropriate algorithm. We propose methods for calibrating the distorted images and for designing a geometric fitness model. The proposed method is applied to laboratory and aisle environments, and we show its feasibility in these indoor environments.
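SIFT matching between the previous and current ceiling images is typically done with nearest-neighbour search plus Lowe's ratio test. A minimal sketch on toy descriptors (real SIFT descriptors are 128-dimensional; the data here are illustrative):

```python
def match_features(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test:
    accept a match only when the best distance is clearly smaller than
    the second best, rejecting ambiguous correspondences."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist2(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        # ratio test on squared distances uses ratio**2
        if dist2(d, desc_b[best]) < (ratio ** 2) * dist2(d, desc_b[second]):
            matches.append((i, best))
    return matches
```

The surviving correspondences are then fed to whatever transformation estimate the localization uses (e.g. a least-squares fit between the two feature sets).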

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won;Kwon, Kee-Koo;Lee, Soo-In;Choi, Jeong-Won;Lee, Suk-Gyu
    • ETRI Journal
    • /
    • v.36 no.6
    • /
    • pp.913-923
    • /
    • 2014
  • This paper proposes a global mapping algorithm for multiple robots from an omnidirectional-vision simultaneous localization and mapping (SLAM) approach based on an object extraction method using Lucas-Kanade optical flow motion detection and images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map by using map data obtained from all of the individual robots. Global mapping takes a long time to process because map data are exchanged among the individual robots while all areas are searched. An omnidirectional image sensor has many advantages for object detection and mapping because it measures all information around a robot simultaneously. The computation of the correction algorithm is reduced compared with existing methods by correcting only the objects' feature points. The proposed algorithm has two steps: first, a local map is created for each robot based on an omnidirectional-vision SLAM approach; second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified through a comparison of maps based on the proposed algorithm with real maps.
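Merging the individual maps into a global map requires transforming each robot's local obstacle cells into a common frame. A simplified known-pose sketch (the real system must also handle pose uncertainty and map exchange; the grid size and poses below are invented):

```python
import math

def merge_local_maps(local_maps, poses, size=20):
    """Merge per-robot obstacle cells into one global occupancy grid.
    local_maps: list of [(x, y), ...] obstacle cells in each robot's
    frame; poses: matching list of (tx, ty, heading) global poses."""
    global_map = [[0] * size for _ in range(size)]
    for cells, (tx, ty, th) in zip(local_maps, poses):
        c, s = math.cos(th), math.sin(th)
        for x, y in cells:
            gx = int(round(tx + c * x - s * y))  # rigid-body transform
            gy = int(round(ty + s * x + c * y))
            if 0 <= gx < size and 0 <= gy < size:
                global_map[gy][gx] = 1           # mark cell occupied
    return global_map
```

Each robot's observations land in the shared grid once rotated and translated by that robot's pose, which is the core of the map-merging step.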

Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image (어안 이미지 기반의 움직임 추정 기법을 이용한 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Dai, Yanyan;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.8
    • /
    • pp.868-874
    • /
    • 2014
  • This paper proposes a novel mapping algorithm for omni-directional vision SLAM based on obstacle feature extraction using Lucas-Kanade optical flow (LKOF) motion detection and images obtained through fisheye lenses mounted on robots. Omni-directional image sensors suffer from distortion because they use a fisheye lens or mirror, but they allow real-time image processing for mobile robots because all information around the robot is measured at once. Previous omni-directional vision SLAM research used feature points from fully corrected fisheye images, whereas the proposed algorithm corrects only the feature points of the obstacles, which makes processing faster than in previous systems. The core of the proposed algorithm may be summarized as follows. First, we capture instantaneous 360° panoramic images around a robot through fisheye lenses mounted facing downward. Second, we remove the feature points of the floor surface using a histogram filter and label the extracted obstacle candidates. Third, we estimate the locations of obstacles from motion vectors using LKOF. Finally, the robot position is estimated with an extended Kalman filter based on the obstacle positions obtained by LKOF, and a map is created. We confirm the reliability of the mapping algorithm through a comparison between maps obtained using the proposed algorithm and real maps.
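The second step, removing floor-surface feature points with a histogram filter, can be pictured as dropping the points whose intensity falls in the dominant histogram bin, on the assumption that the floor is the most common intensity in a downward fisheye view (the bin count and data here are illustrative):

```python
def remove_floor_points(points, n_bins=16):
    """Drop candidate feature points whose intensity falls in the
    dominant histogram bin, assuming the floor surface dominates the
    view. points: list of (x, y, intensity) with intensity in [0, 1)."""
    hist = [0] * n_bins
    for _, _, v in points:
        hist[int(v * n_bins)] += 1          # build the intensity histogram
    floor_bin = max(range(n_bins), key=lambda b: hist[b])
    return [p for p in points if int(p[2] * n_bins) != floor_bin]
```

The surviving points are the obstacle candidates that get labeled and tracked with LKOF in the next step.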

3D Omni-directional Vision SLAM using a Fisheye Lens Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.7
    • /
    • pp.634-640
    • /
    • 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because an RGB-D system with multiple cameras is large and slow in calculating depth information for omni-directional images. In this paper, we use a fisheye camera installed facing downward and a two-dimensional laser scanner separated from the camera by a constant distance. We calculate fusion points from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which acquires the surrounding view all at once. The effectiveness of the proposed method is confirmed through comparison between maps obtained using the proposed algorithm and real maps.
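The fusion combines a laser range on the horizontal plane with the angle of the obstacle's outline in the fisheye image to fix the third dimension. A much-simplified sketch, assuming an upward-facing equidistant camera geometry (the paper mounts the camera facing downward; the focal length and mount height are invented):

```python
import math

def obstacle_height(laser_dist, r_top, f=300.0, cam_h=0.3):
    """Fuse one 2-D laser return with the fisheye outline of the same
    obstacle: the laser gives the horizontal distance, the image radius
    of the obstacle's top edge gives its zenith angle under an
    equidistant model (theta = r / f), and together they fix the
    obstacle's height above the camera mount."""
    theta = r_top / f                      # zenith angle of the top edge
    return cam_h + laser_dist / math.tan(theta)
```

Neither sensor alone gives 3-D: the laser sees only one horizontal slice and the image gives only angles, but a range plus an angle determines the fusion point.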