• Title/Abstract/Keyword: Omni-directional lens

Search results: 12 items

Tolerance Analysis and Compensation Method Using Zernike Polynomial Coefficients of Omni-directional and Fisheye Varifocal Lens

  • Kim, Jin Woo;Ryu, Jae Myung;Kim, Young-Joo
    • Journal of the Optical Society of Korea / Vol.18 No.6 / pp.720-731 / 2014
  • There are many kinds of optical systems for widening the field of view. Fisheye lenses with view angles of 180 degrees and omni-directional systems with view angles of 360 degrees are recognized as suitable systems for this purpose. In this study, we proposed a new optical system that overcomes the drawbacks of conventional omni-directional systems, such as a limited field of view in the central area and difficulties in manufacturing. In this way we can eliminate the undesirable reflection components of the omni-directional system and solve the primary drawback of the conventional system. Finally, tolerance analysis using Zernike polynomial coefficients was performed to confirm the manufacturability of the new optical system. Furthermore, we established a method of optical-axis alignment and compensation schemes for the proposed optical system based on the results of the tolerance analysis. In a sensitivity calculation, we investigated the performance degradation due to manufacturing error using the Code V® macro function. Consequently, we suggested compensation schemes using lens-group decentering. This paper provides good guidance for optical design and tolerance analysis, including the compensation method, in extremely wide-angle systems.
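The tolerance workflow this abstract describes (perturb each manufacturing parameter, evaluate the wavefront degradation, combine the contributions, then recover performance with a compensator) can be sketched as follows. All tolerance values, sensitivities, and the compensation ratio below are hypothetical illustrations, not the paper's Code V macro results.

```python
import math

# Hypothetical tolerance items: (name, tolerance limit, sensitivity of the
# RMS wavefront error to that perturbation, in waves per unit tolerance).
# In the paper these sensitivities come from a macro that perturbs each
# parameter and refits the Zernike coefficients of the wavefront.
tolerances = [
    ("element_decenter_mm", 0.010, 2.5),
    ("element_tilt_deg",    0.020, 1.8),
    ("airspace_mm",         0.015, 0.9),
    ("radius_fringes",      2.000, 0.05),
]

# Performance drop contributed by each tolerance at its limit.
drops = [tol * sens for _, tol, sens in tolerances]

# Root-sum-square combination of independent manufacturing errors.
rss = math.sqrt(sum(d * d for d in drops))
print(f"RSS wavefront degradation: {rss:.4f} waves")

# A compensator (e.g. decentering one lens group) removes the portion of
# the error correlated with its own sensitivity vector; here we model it
# as cancelling a fixed fraction of the dominant contribution.
compensation_ratio = 0.7  # hypothetical effectiveness of group decenter
worst = max(drops)
residual = math.sqrt(rss**2 - (compensation_ratio * worst)**2)
print(f"Residual after compensation: {residual:.4f} waves")
```

The root-sum-square combination assumes the manufacturing errors are statistically independent; a Monte Carlo run over the same table would relax that assumption.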

어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM (3D Omni-directional Vision SLAM using a Fisheye Lens Laser Scanner)

  • 최윤원;최정원;이석규
    • Journal of Institute of Control, Robotics and Systems (제어로봇시스템학회논문지) / Vol.21 No.7 / pp.634-640 / 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because RGB-D systems with multiple cameras are large and slow in computing depth information for omni-directional images. In this paper, we used a fisheye camera installed facing downward and a two-dimensional laser scanner mounted at a constant distance from the camera. We calculated fusion points from the plane coordinates of obstacles obtained from the two-dimensional laser scanner and the outlines of obstacles obtained from the omni-directional image sensor, which can acquire a surround view in a single capture. The effectiveness of the proposed method is confirmed through comparison between maps obtained using the proposed algorithm and real maps.
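The fusion step described above pairs a laser return with the obstacle outline seen by the downward-facing fisheye camera, which requires projecting the laser point into the fisheye image. A minimal sketch of that projection is shown below; the equidistant lens model r = f·θ and all calibration values (focal length, principal point, mounting offset) are assumptions for illustration, not the paper's calibration.

```python
import math

def laser_point_to_fisheye(rng, bearing, f_pix=300.0, cx=640.0, cy=480.0,
                           cam_height=0.5):
    """Project a 2D laser return (range and bearing in the scan plane)
    into a downward-facing fisheye image. Assumes an equidistant fisheye
    model r = f * theta and a camera mounted cam_height above the scan
    plane; all parameters are hypothetical calibration values."""
    # Obstacle point in the camera frame (optical axis points down,
    # so the scan plane lies cam_height along the axis).
    x = rng * math.cos(bearing)
    y = rng * math.sin(bearing)
    z = cam_height
    theta = math.atan2(math.hypot(x, y), z)   # angle from the optical axis
    r = f_pix * theta                         # equidistant fisheye mapping
    phi = math.atan2(y, x)                    # azimuth around the axis
    return cx + r * math.cos(phi), cy + r * math.sin(phi)

# A return 2 m away at a bearing of 30 degrees lands at this pixel,
# where it can be matched against the obstacle outline in the image.
u, v = laser_point_to_fisheye(2.0, math.radians(30))
print(f"fused pixel: ({u:.1f}, {v:.1f})")
```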

경관 특성 파악에 있어서의 시퀀스적 시점장 선정과 전방위 화상정보의 유효성 검증에 관한 연구 (A Study of Selecting Sequential Viewpoint and Examining the Effectiveness of Omni-directional Angle Image Information in Grasping the Characteristics of Landscape)

  • 김홍만;이인희
    • KIEAE Journal / Vol.9 No.2 / pp.81-90 / 2009
  • In relation to grasping sequential landscape characteristics in consideration of the behavioral characteristics of the subject experiencing visual perception, this study examined the main walking-line sections for visitors to the Three Jewels Buddhist temples. In particular, as a method of obtaining data for grasping the sequentially perceived landscape, the researchers employed a momentary sequential viewpoint setup at arbitrarily spaced marker intervals, together with fisheye-lens camera photography using the obtained omni-directional visual information. As a result, in terms of viewpoint selection, factors such as the approach road form, changes in the circulation axis, changes in ground surface level, and the appearance of objects were verified to have an effect, and among these, the approach road form and circulation axis changes turned out to be the greatest influences. In addition, a review of effectiveness with test subjects, conducted for the qualitative evaluation of landscape components using the VR images obtained while acquiring the omni-directional visual information, yielded positive results above certain thresholds for panoramic vision, scene reproduction, and three-dimensional perspective. This supports the possibility of activating the qualitative evaluation of omni-directional image information and landscape studies based on it.

어안 렌즈를 이용한 전방향 감시 및 움직임 검출 (Omni-directional Surveillance and Motion Detection using a Fish-Eye Lens)

  • 조석빈;이운근;백광렬
    • Journal of the Institute of Electronics Engineers of Korea - SP (대한전자공학회논문지SP) / Vol.42 No.5 / pp.79-84 / 2005
  • Because the field of view of a typical camera is much narrower than that of the human eye, it is difficult to capture a large object in a single frame or to monitor motion over a wide area. This paper presents a method that acquires wide-field images with a fisheye lens and reconstructs perspective and panoramic images from them for omni-directional surveillance. To compensate for the resolution differences caused by the characteristics of the fisheye lens during the image transformation, several image interpolation methods were applied and their results compared. In addition, motion was detected with the Moving Edge (ME) method so that multiple objects could be tracked.
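The panorama reconstruction described above maps each output pixel (azimuth, angle from the optical axis) back to a source pixel in the fisheye image and interpolates. A minimal sketch under an assumed equidistant model r = f·θ is shown below, using bilinear interpolation, which is one of the interpolation methods such a comparison would include; the image here is a tiny synthetic gradient, not the paper's data.

```python
import math

def fisheye_to_panorama(img, f_pix, cx, cy, out_w, out_h):
    """Unwarp a fisheye image (a 2D list of gray values) into a panorama:
    columns map to azimuth, rows to the angle from the optical axis.
    Bilinear interpolation softens the radial resolution difference.
    The equidistant model r = f * theta is an assumption."""
    pano = [[0.0] * out_w for _ in range(out_h)]
    h, w = len(img), len(img[0])
    for row in range(out_h):
        theta = (row + 0.5) / out_h * (math.pi / 2)   # 0..90 deg off-axis
        for col in range(out_w):
            phi = (col + 0.5) / out_w * 2 * math.pi   # 0..360 deg azimuth
            x = cx + f_pix * theta * math.cos(phi)    # source pixel
            y = cy + f_pix * theta * math.sin(phi)
            x0, y0 = int(math.floor(x)), int(math.floor(y))
            if 0 <= x0 < w - 1 and 0 <= y0 < h - 1:
                dx, dy = x - x0, y - y0               # bilinear weights
                pano[row][col] = (
                    img[y0][x0] * (1 - dx) * (1 - dy)
                    + img[y0][x0 + 1] * dx * (1 - dy)
                    + img[y0 + 1][x0] * (1 - dx) * dy
                    + img[y0 + 1][x0 + 1] * dx * dy)
    return pano

# Tiny synthetic fisheye image: a radial gradient around the center, so
# panorama rows farther from the axis should have larger values.
src = [[math.hypot(x - 8, y - 8) for x in range(17)] for y in range(17)]
pano = fisheye_to_panorama(src, f_pix=5.0, cx=8.0, cy=8.0, out_w=16, out_h=4)
print(len(pano), len(pano[0]))
```

A perspective view would use the same resampling loop with a pinhole mapping in place of the azimuth/elevation grid.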

어안 이미지 기반의 움직임 추정 기법을 이용한 전방향 영상 SLAM (Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image)

  • 최윤원;최정원;대염염;이석규
    • Journal of Institute of Control, Robotics and Systems (제어로봇시스템학회논문지) / Vol.20 No.8 / pp.868-874 / 2014
  • This paper proposes a novel mapping algorithm for omni-directional vision SLAM based on extracting obstacle features using Lucas-Kanade optical flow (LKOF) motion detection and images obtained through fisheye lenses mounted on robots. Omni-directional image sensors have distortion problems because they use a fisheye lens or mirror, but real-time image processing is possible for mobile robots because all the information around the robot is measured at once. Previous omni-directional vision SLAM research used feature points in fully corrected fisheye images, whereas the proposed algorithm corrects only the feature points of the obstacles, which yields faster processing than previous systems. The core of the proposed algorithm may be summarized as follows. First, we capture instantaneous 360° panoramic images around a robot through fisheye lenses mounted facing downward. Second, we remove the feature points of the floor surface using a histogram filter and label the extracted obstacle candidates. Third, we estimate the locations of obstacles from motion vectors using LKOF. Finally, the robot position is estimated using an extended Kalman filter based on the obstacle positions obtained by LKOF, and a map is created. We confirm the reliability of the mapping algorithm using motion estimation based on fisheye images through comparison between maps obtained using the proposed algorithm and real maps.
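The LKOF motion vectors at the heart of the pipeline above come from the classic Lucas-Kanade step: accumulate image gradients over a window and solve a 2×2 system for the flow. A minimal single-window sketch on a synthetic pattern is shown below; the paper's pyramid implementation, floor-removal histogram filter, and EKF are omitted, and the pattern and window size are illustrative choices.

```python
import math

def g(x, y):
    # Smooth synthetic intensity pattern.
    return math.sin(0.3 * x) * math.cos(0.25 * y)

true_u, true_v = 0.4, 0.2   # ground-truth motion between the two frames
N = 21
img1 = [[g(x, y) for x in range(N)] for y in range(N)]
img2 = [[g(x - true_u, y - true_v) for x in range(N)] for y in range(N)]

# Accumulate the Lucas-Kanade normal equations over a 9x9 window.
sxx = sxy = syy = sxt = syt = 0.0
for y in range(6, 15):
    for x in range(6, 15):
        ix = (img1[y][x + 1] - img1[y][x - 1]) / 2.0   # spatial gradients
        iy = (img1[y + 1][x] - img1[y - 1][x]) / 2.0
        it = img2[y][x] - img1[y][x]                   # temporal gradient
        sxx += ix * ix; sxy += ix * iy; syy += iy * iy
        sxt += ix * it; syt += iy * it

# Solve the 2x2 system [[sxx, sxy], [sxy, syy]] [u, v]^T = -[sxt, syt]^T.
det = sxx * syy - sxy * sxy
u = (-sxt * syy + syt * sxy) / det
v = (-syt * sxx + sxt * sxy) / det
print(f"estimated flow: ({u:.2f}, {v:.2f})")
```

The recovered flow approximates the true (0.4, 0.2) shift; on real fisheye frames the same vectors, computed only at obstacle feature points, feed the EKF update.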

모바일매핑시스템으로 취득된 전방위 영상의 광속조정법 (Bundle Block Adjustment of Omni-directional Images by a Mobile Mapping System)

  • 오태완;이임평
    • Korean Journal of Remote Sensing (대한원격탐사학회지) / Vol.26 No.5 / pp.593-603 / 2010
  • Most mobile mapping systems carry frame cameras, which have a narrow coverage and impose constraints on the baseline length. Mounting an omni-directional camera, which can acquire image information in all directions from the exposure station, resolves the frame camera's limitations on coverage and baseline length. Bundle block adjustment is a representative georeferencing method that determines the exterior orientation parameters of many overlapping images. In this study, we propose a mathematical model of bundle block adjustment suitable for omni-directional images and estimate their exterior orientation parameters and ground points. First, observation equations are established using collinearity equations appropriate for omni-directional imagery. Next, stochastic constraint equations are formed using data acquired from the GPS/INS mounted on a Ground Mobile Mapping System (GMMS) and ground control points measured with static GPS and a total station. Finally, various mathematical models are constructed by combining the stochastic constraints and the unknowns to be estimated, and the accuracy of the ground point coordinates estimated by each model is verified. As a result, when ground control points were used as stochastic constraints, the ground points were estimated accurately to within about ±5 cm. These results show that three-dimensional models of target objects can be extracted from omni-directional camera imagery.
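The collinearity condition for an omni-directional image relates a ground point to the direction (azimuth, elevation) in which it is seen from the camera; the reprojection residuals of such observations, linearized about approximate values, form the observation equations of the bundle block adjustment. The sketch below assumes an ideal spherical camera that is level and rotated only by a heading angle, which is a deliberate simplification of the paper's model; the coordinates and observed angles are hypothetical.

```python
import math

def project_omni(ground_pt, cam_pos, heading):
    """Ideal spherical-camera collinearity: a ground point maps to an
    (azimuth, elevation) observation. A full model would carry a 3D
    rotation matrix and calibration terms; this sketch assumes a level
    camera rotated only by `heading` (an illustrative simplification)."""
    dx = ground_pt[0] - cam_pos[0]
    dy = ground_pt[1] - cam_pos[1]
    dz = ground_pt[2] - cam_pos[2]
    azimuth = (math.atan2(dy, dx) - heading) % (2 * math.pi)
    elevation = math.atan2(dz, math.hypot(dx, dy))
    return azimuth, elevation

# Reprojection residual for a hypothetical control point: the adjustment
# minimizes these residuals over all images, poses, and ground points.
obs = (math.radians(45.0), math.radians(10.0))      # measured angles
pred = project_omni((10.0, 10.0, 2.5), (0.0, 0.0, 0.0), 0.0)
residual = (obs[0] - pred[0], obs[1] - pred[1])
print(residual)
```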

회전식 라인 카메라로 획득한 실내 전방위 영상의 지오레퍼런싱 (Georeferencing of Indoor Omni-Directional Images Acquired by a Rotating Line Camera)

  • 오소정;이임평
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography (한국측량학회지) / Vol.30 No.2 / pp.211-221 / 2012
  • To use omni-directional images acquired with a rotating line camera for indoor spatial information services, the images must be precisely referenced to an indoor coordinate system. This study therefore proposes a georeferencing method that accurately estimates the exterior orientation parameters of an indoor omni-directional image, i.e., the position and attitude of the camera at the time of acquisition. To this end, we first model the rotating line camera geometrically and derive the collinearity equations for the omni-directional image. Applying indoor control points to these equations, we estimate the exterior orientation parameters of the image. When applied to real data, the position could be estimated with a precision of 1.4 mm and the attitude with a precision of 0.05°. Residuals of about 3 pixels in the horizontal direction and about 10 pixels in the vertical direction remained. In particular, the vertical residuals were found to include systematic errors due to lens distortion, which should be removed through camera calibration. With omni-directional images precisely georeferenced by the proposed method, high-resolution indoor 3D models can be generated, and sophisticated augmented-reality services based on them are expected to become possible.

어안 이미지 기반의 전방향 영상 SLAM을 이용한 충돌 회피 (Collision Avoidance Using Omni Vision SLAM Based on Fisheye Image)

  • 최윤원;최정원;임성규;이석규
    • Journal of Institute of Control, Robotics and Systems (제어로봇시스템학회논문지) / Vol.22 No.3 / pp.210-216 / 2016
  • This paper presents a novel collision avoidance technique for mobile robots based on omni-directional vision simultaneous localization and mapping (SLAM). The method estimates the avoidance path and speed of a robot from the location of an obstacle, which is detected using Lucas-Kanade optical flow in images obtained through fisheye cameras mounted on the robots. Conventional methods derive avoidance paths by constructing an artificial force field around obstacles found in the complete map obtained through SLAM; robots can also avoid obstacles using speed commands based on robot modeling and curved movement paths, and recent research has optimized these algorithms for actual robots. However, comparatively little work has used omni-directional vision SLAM, which acquires the surrounding information at once. A robot running the proposed algorithm avoids obstacles along the avoidance path estimated from the map obtained through omni-directional vision SLAM on fisheye images, and then returns to its original path. In particular, it avoids obstacles at various speeds and directions using acceleration components based on motion information obtained by analyzing the area around the obstacles. The experimental results confirm the reliability of the avoidance algorithm through comparison between the positions obtained by the proposed algorithm and the real positions collected while avoiding the obstacles.

어안 영상을 이용한 물체 추적 기반의 한 멀티로봇의 대형 제어 (Multi-robot Formation based on Object Tracking Method using Fisheye Images)

  • 최윤원;김종욱;최정원;이석규
    • Journal of Institute of Control, Robotics and Systems (제어로봇시스템학회논문지) / Vol.19 No.6 / pp.547-554 / 2013
  • This paper proposes a novel formation algorithm for identical robots based on an object tracking method using omni-directional images obtained through fisheye lenses mounted on the robots. Conventional multi-robot formation methods often use a stereo vision system or a vision system with a reflector, instead of a general-purpose camera with its small angle of view, to enlarge the camera's view angle; in addition, to make up for the lack of image information on the environment, the robots share their positions through communication. The proposed system estimates the regions of the robots using SURF in fisheye images, which contain 360° of image information, without merging images. The whole system controls the formation of the robots based on their moving directions and velocities, which are obtained by applying Lucas-Kanade optical flow estimation to the estimated robot regions. We confirmed the reliability of the proposed formation control strategy for multiple robots through both simulation and experiment.

TEST OF A LOW COST VEHICLE-BORNE 360 DEGREE PANORAMA IMAGE SYSTEM

  • Kim, Moon-Gie;Sung, Jung-Gon
    • Korean Society of Remote Sensing Conference Proceedings (대한원격탐사학회 학술대회논문집) / 2008 International Symposium on Remote Sensing / pp.137-140 / 2008
  • Recently, many areas require wide-field-of-view images, such as surveillance, virtual reality, navigation, and 3D scene reconstruction. Conventional camera systems have a limited field of view and provide only partial information about the scene; omni-directional vision systems can overcome these disadvantages. Acquiring 360-degree panorama images normally requires an expensive omni-directional camera lens. In this study, a 360-degree panorama image system was tested using a low-cost optical reflector that captures 360-degree panoramic views in a single shot. The system can be used together with detailed positional information from GPS/INS. The results show that 360-degree panorama imagery is a very effective tool for a mobile monitoring system.
