• Title/Abstract/Keywords: Panoramic vision

Search results: 33 (processing time: 0.027 s)

항공기 야간투시경 운용 (Operation of Night Vision Goggle for Aircraft)

  • 권종광;김환우
    • 한국군사과학기술학회지 / Vol. 9, No. 2 / pp.34-41 / 2006
  • This paper describes the general considerations and operational applications of the aircraft night vision goggle (NVG). Operators should understand NVG characteristics, natural night light sources, and the interoperability between them. They must also pay attention to methods for avoiding misperceptions and illusions during flight, which negatively affect the effectiveness and safety of the mission. These considerations, based on the interoperability between the NVG and night operations, suggest evaluation items for NVG field tests and flight tests.

Fixed Homography-Based Real-Time SW/HW Image Stitching Engine for Motor Vehicles

  • Suk, Jung-Hee;Lyuh, Chun-Gi;Yoon, Sanghoon;Roh, Tae Moon
    • ETRI Journal / Vol. 37, No. 6 / pp.1143-1153 / 2015
  • In this paper, we propose an efficient architecture for a real-time image stitching engine for vision SoCs in motor vehicles. To enlarge the obstacle-detection distance and area for safety, we adopt panoramic images from multiple telephoto cameras. We propose a stitching method based on a fixed homography that is derived from the initial frame of a video sequence and is used to warp all input images without regeneration. Because the fixed homography is generated only once, at the initial state, we can calculate it in SW to reduce HW costs. The proposed HW warping engine is based on a linear transform of the pixel positions of the warped images and can reduce the computational complexity by 90% or more compared to a conventional method. A dual-core SW/HW image stitching engine stitches input frames in parallel to improve performance by 70% or more compared to a single-core engine. In addition, the dual-core structure is used to detect failures in state machines using lock-step logic to satisfy the ISO 26262 standard. The dual-core SW/HW image stitching engine is fabricated in an SoC with a gate count of 254,968 using GlobalFoundries' 65 nm CMOS process. The single-core engine can produce panoramic images from three YCbCr 4:2:0 formatted VGA images at 44 frames per second at a frequency of 200 MHz without an LCD display.
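
    The fixed-homography warp the abstract describes can be sketched as follows: a 3×3 homography, estimated once from the initial frame, maps every source pixel position to the panorama plane as a linear transform followed by a perspective divide. The matrix below is an assumed stand-in for one a feature-matching step would estimate; the function name is illustrative, not from the paper.

    ```python
    import numpy as np

    def warp_points(H, pts):
        """Apply a fixed 3x3 homography H to an array of (x, y) pixel positions.

        pts: (N, 2) array of positions. Returns the warped (N, 2) positions.
        """
        pts = np.asarray(pts, dtype=float)
        ones = np.ones((pts.shape[0], 1))
        homog = np.hstack([pts, ones])        # lift to homogeneous coordinates
        warped = homog @ H.T                  # linear transform of positions
        return warped[:, :2] / warped[:, 2:3] # perspective divide

    # Assumed example: a homography that shifts the frame by (100, 0).
    H = np.array([[1.0, 0.0, 100.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

    print(warp_points(H, [[0, 0], [639, 479]]))  # corners of a VGA frame
    ```

    Because the homography is fixed, the warped position of every pixel can be tabulated once and reused for all subsequent frames, which is what makes a low-cost HW warping engine feasible.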

An Omnidirectional Vision-Based Moving Obstacle Detection in Mobile Robot

  • Kim, Jong-Cheol;Suga, Yasuo
    • International Journal of Control, Automation, and Systems / Vol. 5, No. 6 / pp.663-673 / 2007
  • This paper presents a new moving-obstacle detection method using optical flow in a mobile robot with an omnidirectional camera. Because an omnidirectional camera consists of a nonlinear mirror and a CCD camera, the optical-flow pattern in an omnidirectional image differs from the pattern in a perspective camera; the geometry of the omnidirectional camera influences the optical flow in the omnidirectional image. When a mobile robot with an omnidirectional camera moves, the optical flow is not only calculated theoretically in the omnidirectional image but also investigated in omnidirectional and panoramic images. In this paper, the panoramic image is generated from an omnidirectional image using the geometry of the omnidirectional camera. In particular, focus of expansion (FOE) and focus of contraction (FOC) vectors are defined from the optical flow estimated in the omnidirectional and panoramic images. The FOE and FOC vectors are used as reference vectors for the relative evaluation of the optical flow, through which the moving obstacle is detected. The proposed algorithm is tested on four motions of a mobile robot: straight forward, left turn, right turn, and rotation. The effectiveness of the proposed method is shown by the experimental results.
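
    As a reference point for the idea of FOE vectors: for pure forward translation, every flow vector lies on a line through the focus of expansion, so the FOE can be recovered by a least-squares intersection of those lines. The sketch below assumes a standard perspective flow field (the paper's omnidirectional geometry is more involved); the synthetic data and function name are illustrative.

    ```python
    import numpy as np

    def estimate_foe(points, flows):
        """Least-squares focus of expansion from sparse optical flow.

        Each flow vector defines a line through its image point; under pure
        translation all such lines pass through the FOE.
        points, flows: (N, 2) arrays of positions and flow vectors.
        """
        A = np.zeros((2, 2))
        b = np.zeros(2)
        for p, f in zip(np.asarray(points, float), np.asarray(flows, float)):
            d = f / np.linalg.norm(f)       # unit flow direction
            P = np.eye(2) - np.outer(d, d)  # projector orthogonal to d
            A += P
            b += P @ p
        return np.linalg.solve(A, b)

    # Synthetic forward motion: flow radiates out of an assumed FOE at (160, 120).
    foe = np.array([160.0, 120.0])
    pts = np.array([[200.0, 120.0], [160.0, 200.0], [100.0, 60.0]])
    print(estimate_foe(pts, pts - foe))
    ```

    Flow vectors that disagree strongly with the recovered FOE direction are then candidates for independently moving obstacles, which is the spirit of the relative evaluation the abstract describes.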

등거리 스테레오 전방위 렌즈 영상에 대한 위치 측정 알고리즘 (Range finding algorithm of equidistance stereo catadioptric mirror)

  • 최영호
    • 인터넷정보학회논문지 / Vol. 6, No. 6 / pp.149-161 / 2005
  • A drawback of omnidirectional lenses is their non-uniform resolution. The equidistance omnidirectional lens can be seen as a new alternative that resolves this drawback, and the equidistance stereo omnidirectional lens is a very efficient system in that it can acquire stereo images through a single camera. However, compared with a single equidistance omnidirectional lens, the equidistance stereo omnidirectional lens yields relatively smaller acquired images and thus lower resolution. Problems such as the exact position of the mirror and the exact alignment between the camera axis and the mirror center can be solved by increasing precision, but the change in the focal length of the lens, which is unavoidable during image acquisition, cannot. In this paper, we first examine the effect of focal-length change on object range measurement and then present a method for computing the actual focal length under the assumption that the viewing angle of an object seen in the stereo images is nearly constant across the two images.
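
    The equidistance model behind this abstract is that the image radius is proportional to the incidence angle, r = f·θ, so the angle to an object can be read directly from its image radius once f is known. The toy triangulation below assumes a coaxial stereo pair whose two virtual viewpoints are displaced by a baseline along the optical axis; this simplified geometry and the function names are illustrative, not the paper's exact formulation.

    ```python
    import math

    def incidence_angle(r, f):
        """Equidistance projection model: image radius r = f * theta."""
        return r / f

    def triangulate(r1, r2, f, baseline):
        """Range from a coaxial equidistance stereo pair (illustrative geometry).

        theta_i is the angle between the optical axis and the ray to the object
        from viewpoint i. Returns (axial distance from the near viewpoint,
        radial distance from the axis).
        """
        t1 = math.tan(incidence_angle(r1, f))
        t2 = math.tan(incidence_angle(r2, f))
        # tan(th1) = rho / z  and  tan(th2) = rho / (z + baseline)
        z = baseline * t2 / (t1 - t2)
        return z, z * t1
    ```

    With this model, an error in the assumed focal length f biases both angles and hence the range, which is why the paper estimates the actual focal length before ranging.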


어안 이미지 기반의 움직임 추정 기법을 이용한 전방향 영상 SLAM (Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image)

  • 최윤원;최정원;대염염;이석규
    • 제어로봇시스템학회논문지 / Vol. 20, No. 8 / pp.868-874 / 2014
  • This paper proposes a novel mapping algorithm for omni-directional vision SLAM based on obstacle feature extraction using Lucas-Kanade optical flow (LKOF) motion detection and images obtained through fish-eye lenses mounted on robots. Omni-directional image sensors have distortion problems because they use a fish-eye lens or a mirror, but real-time image processing is possible for mobile robots because all the information around the robot is measured at once. Previous omni-directional vision SLAM research used feature points in corrected fish-eye images, whereas the proposed algorithm corrects only the feature points of the obstacle, which yields faster processing than previous systems. The core of the proposed algorithm may be summarized as follows: First, we capture instantaneous 360° panoramic images around a robot through fish-eye lenses mounted facing downward. Second, we remove the feature points of the floor surface using a histogram filter and label the extracted obstacle candidates. Third, we estimate the locations of obstacles from motion vectors using LKOF. Finally, the algorithm estimates the robot position using an extended Kalman filter based on the obstacle positions obtained by LKOF and creates a map. We confirm the reliability of the mapping algorithm by comparing maps obtained with the proposed algorithm against real maps.
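
    The floor-removal step can be illustrated with a minimal histogram filter: build a normalized intensity histogram from a region assumed to be floor, then discard feature points whose pixel intensity falls in a common floor bin. The bin count, threshold, and names here are invented for illustration, not the paper's exact procedure.

    ```python
    import numpy as np

    def floor_histogram_filter(image, features, floor_region, keep_thresh=0.05):
        """Drop feature points that look like the floor (a minimal sketch).

        image: 2-D grayscale array; features: list of (x, y) points;
        floor_region: 1-D array of intensities sampled from known floor.
        """
        hist, _ = np.histogram(floor_region, bins=32, range=(0, 256))
        hist = hist / hist.sum()                       # normalized floor histogram
        kept = []
        for (x, y) in features:
            b = min(int(image[y, x]) * 32 // 256, 31)  # histogram bin of the pixel
            if hist[b] < keep_thresh:                  # rare for floor -> obstacle
                kept.append((x, y))
        return kept
    ```

    The surviving points are the obstacle candidates that are then tracked with LKOF and fed to the EKF.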

3D Vision-Based Local Path Planning System of a Humanoid Robot for Obstacle Avoidance

  • Kang, Tae-Koo;Lim, Myo-Taeg;Park, Gwi-Tae;Kim, Dong W.
    • Journal of Electrical Engineering and Technology / Vol. 8, No. 4 / pp.879-888 / 2013
  • This paper addresses a vision-based local path planning system for obstacle avoidance. To handle obstacles that exist beyond the field of view (FOV), we propose a Panoramic Environment Map (PEM) using the MDGHM-SIFT algorithm. Moreover, we propose a Complexity Measure (CM) and a Fuzzy logic-based Avoidance Motion Selection (FAMS) system that enable a humanoid robot to decide its own direction and walking motion automatically when avoiding an obstacle. The CM automates the decision on the direction of avoidance, whereas the FAMS system chooses the avoidance path and walking motion based on environmental conditions such as the size of the obstacle and the available space around it. The proposed system was applied to a humanoid robot that we designed. The experimental results show that the proposed method can effectively decide the avoidance direction and the walking motion of a humanoid robot.
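
    The flavor of a fuzzy avoidance-motion selector can be shown with a toy Mamdani-style rule base: fuzzify obstacle size and free side space, AND the memberships with min, and pick the motion whose rule fires strongest. The memberships, rules, and motion labels below are invented for illustration and are not the paper's FAMS rule base.

    ```python
    def fuzzy_avoidance(obstacle_size, free_space):
        """Toy fuzzy rule sketch for avoidance motion selection.

        Inputs are normalized to [0, 1]: obstacle size and free side space.
        """
        small, large = 1.0 - obstacle_size, obstacle_size  # size memberships
        narrow, wide = 1.0 - free_space, free_space        # space memberships
        # Rule strengths (min as fuzzy AND):
        scores = {
            "side_step": min(small, wide),   # small obstacle, room to the side
            "turn":      min(large, wide),   # large obstacle but open side space
            "stop":      min(large, narrow), # large obstacle, no room
        }
        return max(scores, key=scores.get)

    print(fuzzy_avoidance(0.8, 0.2))
    ```

    A real FAMS system would defuzzify into a continuous walking command rather than a discrete label, but the rule-firing structure is the same.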

SURF와 RANSAC 알고리즘을 이용한 대응점 필터링 적용 파노라마 이미지 처리 (Matching Points Filtering Applied Panorama Image Processing Using SURF and RANSAC Algorithm)

  • 김정호;김대원
    • 전자공학회논문지 / Vol. 51, No. 4 / pp.144-159 / 2014
  • Techniques that produce a single panoramic image from multiple images are widely studied in fields such as computer vision and computer graphics. A panoramic image is a good way to overcome the limits of what a single camera can capture, such as field of view, image quality, and amount of information, and it can be applied in various fields that require wide-angle imagery, such as virtual reality and robot vision. A panoramic image is significant in that it provides greater immersion than a single image. Various panorama construction techniques exist today, but most share a common approach: detecting feature points and matching points in each constituent image, then computing a homography matrix from the matched points with the RANSAC (RANdom SAmple Consensus) algorithm and using it to transform the images. The SURF (Speeded Up Robust Features) algorithm used in this paper exploits grayscale and local spatial information when detecting feature points; it is robust to changes in image scale and viewpoint and is widely used because it is faster than the SIFT (Scale Invariant Feature Transform) algorithm. However, SURF sometimes produces incorrect matches, which slows down the RANSAC stage and thereby raises CPU usage; matching errors are a key factor degrading the accuracy and sharpness of the panorama. To minimize such matching errors, this paper applies an intermediate filtering step that removes incorrect matches using the RGB values of the 3×3 region around each matched coordinate. Along with this attempted solution, we present performance results for panorama construction speed and CPU usage, together with analysis and evaluation of the reduction rate of extracted matches and the accuracy.
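
    The intermediate filtering step can be sketched as a photometric consistency check: compare the 3×3 RGB neighborhoods around the two points of each match and reject the match if they disagree. The mean-absolute-difference metric and threshold below are illustrative assumptions, not the paper's exact criterion.

    ```python
    import numpy as np

    def patch_filter(img1, img2, matches, max_diff=30.0):
        """Reject matches whose 3x3 RGB neighborhoods disagree (a sketch).

        img1, img2: (H, W, 3) arrays; matches: list of ((x1, y1), (x2, y2))
        pixel pairs. A match survives if the mean absolute RGB difference of
        the 3x3 patches around the two points is below max_diff.
        """
        kept = []
        for (x1, y1), (x2, y2) in matches:
            p1 = img1[y1 - 1:y1 + 2, x1 - 1:x1 + 2].astype(float)
            p2 = img2[y2 - 1:y2 + 2, x2 - 1:x2 + 2].astype(float)
            # Skip border points whose 3x3 patch is clipped by the image edge.
            if p1.shape == p2.shape == (3, 3, 3) and \
               np.abs(p1 - p2).mean() < max_diff:
                kept.append(((x1, y1), (x2, y2)))
        return kept
    ```

    Pre-filtering the match set this way shrinks the candidate pool RANSAC must sample from, which is the mechanism behind the reported speed and CPU-usage gains.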

어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘 (Localization using Ego Motion based on Fisheye Warping Image)

  • 최윤원;최경식;최정원;이석규
    • 제어로봇시스템학회논문지 / Vol. 20, No. 1 / pp.70-77 / 2014
  • This paper proposes a novel localization algorithm based on ego-motion, which uses Lucas-Kanade optical flow and warped images obtained through fish-eye lenses mounted on the robots. The omnidirectional image sensor is desirable for real-time view-based recognition of a robot because all the information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, whether obtained by a camera with a reflecting mirror or by combining multiple camera images, is essential because it is difficult to obtain information from the original image. The core of the proposed algorithm may be summarized as follows: First, we capture instantaneous 360° panoramic images around a robot through fish-eye lenses mounted facing downward. Second, we extract motion vectors using Lucas-Kanade optical flow in the preprocessed image. Third, we estimate the robot position and angle using an ego-motion method that uses the directions of the motion vectors and the vanishing point obtained by RANSAC. We confirmed the reliability of the localization algorithm by comparing the position and angle obtained by the proposed algorithm with measurements from a Global Vision Localization System.
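
    The RANSAC vanishing-point step can be sketched as follows: each motion vector defines an image line, a candidate vanishing point is hypothesized from two random lines, and the candidate supported by the most other lines wins. The iteration count and tolerance are illustrative defaults, not the paper's parameters.

    ```python
    import numpy as np

    def ransac_vanishing_point(points, dirs, iters=200, tol=2.0, seed=0):
        """RANSAC sketch: find the point most flow lines pass near.

        points, dirs: (N, 2) arrays; each pair defines the line through
        points[i] with direction dirs[i].
        """
        rng = np.random.default_rng(seed)
        pts, dirs = np.asarray(points, float), np.asarray(dirs, float)
        n = np.stack([-dirs[:, 1], dirs[:, 0]], axis=1)
        n /= np.linalg.norm(n, axis=1, keepdims=True)  # unit line normals
        c = (n * pts).sum(axis=1)                      # line equation: n . x = c
        best, best_inliers = None, -1
        for _ in range(iters):
            i, j = rng.choice(len(pts), 2, replace=False)
            A = np.stack([n[i], n[j]])
            if abs(np.linalg.det(A)) < 1e-9:
                continue                               # near-parallel pair
            vp = np.linalg.solve(A, np.array([c[i], c[j]]))
            inliers = int((np.abs(n @ vp - c) < tol).sum())
            if inliers > best_inliers:
                best, best_inliers = vp, inliers
        return best
    ```

    The robot's heading change then follows from how the vanishing point drifts between frames, which is the ego-motion cue the abstract refers to.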

광역관찰 카메라 시스템을 위한 카메라의 대응관계 계산 (Correspondence Estimation for Wide Area Watching Camera System)

  • 이동휘;최승현;이칠우
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2001년도 제14회 신호처리 합동 학술대회 논문집 / pp.415-418 / 2001
  • The automatic construction of large, high-resolution image mosaics is an active area of research in the fields of photogrammetry, computer vision, image processing, and computer graphics. In this study, we describe an automatic mosaicing method that makes a panorama from images captured by cameras placed in a grid. In the images captured by the cameras there must be a matched area, and that area lies in a particular region of each image; the initial transformation matrix is therefore calculated from points searched within that partial region. The best transformation matrix can then be found by the Levenberg-Marquardt method. Finally, each image is multiplied by a blending function and stitched with the transformation matrix to complete the panoramic image.
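
    The final blending step can be illustrated with a linear feathering ramp over the overlap region of two aligned strips; this simple weight function is a stand-in for the blending function mentioned in the abstract, and the names are illustrative.

    ```python
    import numpy as np

    def blend_overlap(left, right, overlap):
        """Feathered stitching of two horizontally overlapping strips (a sketch).

        The last `overlap` columns of `left` and the first `overlap` columns
        of `right` are mixed with a linear ramp weight.
        """
        left, right = np.asarray(left, float), np.asarray(right, float)
        w = np.linspace(1.0, 0.0, overlap)  # weight for the left image
        mixed = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
        return np.hstack([left[:, :-overlap], mixed, right[:, overlap:]])
    ```

    Feathering hides exposure differences along the seam; the overlap width would come from the Levenberg-Marquardt-refined transformation between adjacent grid cameras.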


TEST OF A LOW COST VEHICLE-BORNE 360 DEGREE PANORAMA IMAGE SYSTEM

  • Kim, Moon-Gie;Sung, Jung-Gon
    • 대한원격탐사학회:학술대회논문집 / 대한원격탐사학회 2008년도 International Symposium on Remote Sensing / pp.137-140 / 2008
  • Recently, many areas require wide-field-of-view images, such as surveillance, virtual reality, navigation, and 3D scene reconstruction. Conventional camera systems have a limited field of view and provide only partial information about the scene; an omnidirectional vision system can overcome these disadvantages. Acquiring 360-degree panorama images usually requires an expensive omni camera lens. In this study, a 360-degree panorama image was tested using a low-cost optical reflector that captures 360-degree panoramic views in a single shot. This 360-degree panorama image system can be used with detailed positional information from GPS/INS. The results of this study show that the 360-degree panorama image is a very effective tool for a mobile monitoring system.
