• Title/Summary/Keyword: Least squares image matching


Analysis on 3D Positioning Precision Using Mobile Mapping System Images in Photogrammetric Perspective (사진측량 관점에서 차량측량시스템 영상을 이용한 3차원 위치의 정밀도 분석)

  • 조우석;황현덕
    • Korean Journal of Remote Sensing
    • /
    • v.19 no.6
    • /
    • pp.431-445
    • /
    • 2003
  • In this paper, we experimentally investigated the precision of 3D positioning using 4S-Van images in the photogrammetric perspective. A 3D calibration target was built on an outdoor building facade and was captured separately by the two CCD cameras installed in the 4S-Van. We then determined the interior orientation parameters for each CCD camera through a self-calibration technique. With the interior orientation parameters computed, bundle adjustment was performed to obtain the exterior orientation parameters simultaneously for the two CCD cameras, using the calibration target images and object coordinates. The reverse lens distortion coefficients were computed by the least squares method so as to introduce lens distortion into the epipolar line. It was shown that the reverse lens distortion coefficients could transform image coordinates into lens-distorted image coordinates to within about 0.5 pixel. The proposed semi-automatic matching scheme, incorporating the lens-distorted epipolar line, was implemented with scene images captured by the 4S-Van in motion. The experimental results showed that the precision of 3D positioning from 4S-Van images in the photogrammetric perspective is within 2 cm at a range of 20 m from the camera.
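
The least-squares step above, fitting reverse lens distortion coefficients so that ideal image coordinates can be mapped back into lens-distorted coordinates, can be sketched as follows. This is a minimal illustration assuming a two-term radial model with hypothetical coefficients `k1` and `k2`, not the calibration values or the distortion parameterization used in the paper; the reverse model is linear in its unknowns, so an ordinary least-squares solve suffices.

```python
import numpy as np

# Assumed forward (correction) model from self-calibration:
#   r_u = r_d * (1 + k1*r_d**2 + k2*r_d**4)   (distorted radius -> undistorted radius)
k1, k2 = -1.2e-7, 2.0e-14          # hypothetical coefficients, not from the paper

r_d = np.linspace(1.0, 800.0, 400)              # sample distorted radii across the sensor
r_u = r_d * (1 + k1 * r_d**2 + k2 * r_d**4)     # corresponding undistorted radii

# Fit reverse coefficients p1, p2 so that  r_d ~= r_u * (1 + p1*r_u**2 + p2*r_u**4),
# which is linear in (p1, p2):  r_d/r_u - 1 = p1*r_u**2 + p2*r_u**4
A = np.column_stack([r_u**2, r_u**4])
b = r_d / r_u - 1.0
(p1, p2), *_ = np.linalg.lstsq(A, b, rcond=None)

# Residual of the fitted reverse model, in pixels
r_d_hat = r_u * (1 + p1 * r_u**2 + p2 * r_u**4)
print("max radial fitting error [px]:", np.abs(r_d_hat - r_d).max())
```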

The Geometric Modeling for 3D Information of X-ray Inspection (3차원 정보 제공을 위한 X-선 검색장치의 기하학적 모델링)

  • Lee, Heung-Ho;Lee, Seung-Min
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.62 no.8
    • /
    • pp.1151-1156
    • /
    • 2013
  • In this study, to clearly establish the concept of geometric modeling, the pushbroom concept is applied to an X-ray inspection device, which by itself is limited to two-dimensional radiographs, for the purpose of providing three-dimensional information. A geometric modeling method based on pushbroom modeling techniques for the radiation scanner is presented, in which three-dimensional information is extracted from the rotational component of a gamma-ray linear pushbroom stereo system, and the matching relation between two-dimensional image coordinates and three-dimensional spatial information is derived. In addition, the key parameters are calculated from ground control points (GCPs) using the pseudo-inverse matrix obtained by the conventional least-squares method, and their validity is demonstrated. The calculated projection transformation matrix can be used as the primary relation for obtaining three-dimensional information from two-dimensional information, and, combined with radiographic image matching techniques, it makes it possible to extract three-dimensional information from two-dimensional X-ray images.
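
The pseudo-inverse/least-squares computation of a projection transformation matrix from ground control points can be illustrated with a standard DLT-style sketch. Note that the paper models a gamma-ray linear pushbroom scanner, whose geometry differs from the frame-camera projection used here; the sketch only shows the generic GCP least-squares step, and the control points and matrix values below are synthetic and purely illustrative.

```python
import numpy as np

def dlt_projection_matrix(gcp_xyz, gcp_uv):
    """Estimate a 3x4 projection matrix from n >= 6 ground control points
    via the pseudo-inverse, with the element P[2,3] fixed to 1."""
    X, Y, Z = gcp_xyz.T
    u, v = gcp_uv.T
    n = len(u)
    A = np.zeros((2 * n, 11))
    b = np.empty(2 * n)
    A[0::2, 0:4] = np.column_stack([X, Y, Z, np.ones(n)])
    A[0::2, 8:11] = -u[:, None] * np.column_stack([X, Y, Z])
    A[1::2, 4:8] = np.column_stack([X, Y, Z, np.ones(n)])
    A[1::2, 8:11] = -v[:, None] * np.column_stack([X, Y, Z])
    b[0::2], b[1::2] = u, v
    p = np.linalg.pinv(A) @ b              # least-squares solution via pseudo-inverse
    return np.append(p, 1.0).reshape(3, 4)

# Illustrative check with synthetic control points and a known projection matrix
rng = np.random.default_rng(0)
P_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, 20.0],
                   [0.0, 0.0, 1.0, 50.0]])
xyz = rng.uniform(-1.0, 1.0, (8, 3))
h = np.column_stack([xyz, np.ones(8)]) @ P_true.T
uv = h[:, :2] / h[:, 2:3]
P_est = dlt_projection_matrix(xyz, uv)
print(np.allclose(P_est * P_true[2, 3], P_true))   # recovered up to the fixed scale
```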

Image alignment method based on CUDA SURF for multi-spectral machine vision application (다중 스펙트럼 머신비전 응용을 위한 CUDA SURF 기반의 영상 정렬 기법)

  • Maeng, Hyung-Yul;Kim, Jin-Hyung;Ko, Yun-Ho
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.9
    • /
    • pp.1041-1051
    • /
    • 2014
  • In this paper, we propose a new image alignment technique based on CUDA SURF in order to solve the initial image alignment problem that frequently occurs in machine vision applications. Machine vision systems using multi-spectral images have recently become more common for solving various decision problems that cannot be handled by the human vision system. These machine vision systems mostly use markers for the initial image alignment. However, there are applications where markers cannot be used, and the alignment technique has to be changed whenever the markers change. To solve these problems, we propose a new image alignment method for multi-spectral machine vision applications based on SURF, which extracts image features without depending on markers. The proposed method obtains a sufficient number of feature points from the multi-spectral images using SURF and removes outliers iteratively based on a least squares method. We further propose an effective preliminary scheme for removing mismatched feature point pairs that may degrade the overall alignment performance. In addition, we reduce the execution time by implementing the proposed method with CUDA on a GPGPU in order to guarantee real-time operation. Simulation results show that the proposed method aligns images effectively in applications where markers cannot be used.
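
The iterative least-squares outlier removal described above can be sketched, under assumptions, as repeated refitting of an alignment model to matched feature point pairs while discarding pairs whose residual exceeds a threshold. SURF feature extraction and the CUDA implementation are omitted; the affine alignment model and the 3-pixel threshold are assumptions for illustration, not values from the paper.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src points onto dst points."""
    n = len(src)
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src; A[0::2, 2] = 1.0
    A[1::2, 3:5] = src; A[1::2, 5] = 1.0
    p, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    return p.reshape(2, 3)

def align_with_outlier_rejection(src, dst, thresh_px=3.0, max_iter=10):
    """Refit the affine model while reclassifying pairs whose residual exceeds
    thresh_px as outliers, until the inlier set stops changing."""
    keep = np.ones(len(src), dtype=bool)
    for _ in range(max_iter):
        M = fit_affine(src[keep], dst[keep])
        pred = src @ M[:, :2].T + M[:, 2]
        resid = np.linalg.norm(pred - dst, axis=1)
        new_keep = resid < thresh_px
        if new_keep.sum() < 3 or np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return M, keep

# Illustrative use: 200 synthetic matches under a known affine map, 5 gross mismatches injected
rng = np.random.default_rng(2)
src = rng.uniform(0, 640, (200, 2))
dst = src @ np.array([[1.01, 0.02], [-0.02, 0.99]]).T + np.array([5.0, -3.0])
dst[:5] += rng.uniform(20, 40, (5, 2))          # mismatched pairs
M, inliers = align_with_outlier_rejection(src, dst)
print(inliers.sum(), inliers[:5])               # injected mismatches should be flagged as outliers
```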

WAVEFRONT SENSING TECHNOLOGY FOR ADAPTIVE OPTICAL SYSTEMS

  • Uhm Tae-Kyoung;Roh Kyung-Wan;Kim Ji-Yeon;Park Kang-Soo;Lee Jun-Ho;Youn Sung-Kie
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.628-632
    • /
    • 2005
  • Remote sensing through atmospheric turbulence was difficult for a long time, because wavefront distortion due to the Earth's atmospheric turbulence deteriorates image quality. With the advent of adaptive optics, however, it is no longer so difficult. Adaptive optics is the technology for correcting random optical wavefront distortions in real time. Research on adaptive optics has been performed actively for the past three decades, and most newly built telescopes now have adaptive optical systems. An adaptive optical system is typically composed of three parts: wavefront sensing, wavefront correction, and control. In this work, wavefront sensing technology for adaptive optical systems is treated; more specifically, shearing interferometers and Shack-Hartmann wavefront sensors are considered. Both are zonal wavefront sensors and measure the slope of a wavefront. In this study, the shearing interferometer is made up of four right-angle prisms whose relative sliding motions provide the lateral shearing and phase shifts necessary for wavefront measurement. Further, a special phase-measuring least-squares algorithm is adopted to compensate for the phase-shifting error caused by variation in the thickness of the index-matching oil between the prisms. Shack-Hartmann wavefront sensors, which are widely used in adaptive optics for wavefront sensing, employ an array of identical positive lenslets; each lenslet acts as a subaperture and produces a spot image. Distortion of an input wavefront changes the location of the spot image, and the slope of the wavefront is obtained by measuring this relative deviation. The structure and measuring algorithm of each sensor are presented, along with wavefront measurement results. Using these wavefront sensing technologies, an adaptive optical system will be built in the future.
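
Both sensors discussed above reduce wavefront estimation to a least-squares problem that recovers phase values from measured slopes. Below is a minimal zonal reconstruction sketch on a square grid using forward differences; it is not the paper's phase-shift-error-compensating algorithm, only the generic slope-to-phase least-squares step, with an assumed Hudgin-like sampling geometry.

```python
import numpy as np

def reconstruct_wavefront(slope_x, slope_y, spacing=1.0):
    """Zonal least-squares reconstruction: recover wavefront values on an n x n grid
    from measured x/y slopes, using forward-difference equations."""
    n = slope_x.shape[0]
    rows, cols, vals, meas = [], [], [], []
    eq = 0
    for i in range(n):
        for j in range(n):
            k = i * n + j
            if j + 1 < n:                     # x-slope links (i, j) and (i, j+1)
                rows += [eq, eq]; cols += [k, k + 1]; vals += [-1.0, 1.0]
                meas.append(slope_x[i, j] * spacing); eq += 1
            if i + 1 < n:                     # y-slope links (i, j) and (i+1, j)
                rows += [eq, eq]; cols += [k, k + n]; vals += [-1.0, 1.0]
                meas.append(slope_y[i, j] * spacing); eq += 1
    D = np.zeros((eq, n * n))
    D[rows, cols] = vals
    w, *_ = np.linalg.lstsq(D, np.array(meas), rcond=None)   # min-norm LS solution
    return (w - w.mean()).reshape(n, n)       # remove piston (unobservable from slopes)

# Illustrative check: constant slopes (a pure tilt) reconstruct to a planar wavefront
n = 16
w = reconstruct_wavefront(np.full((n, n), 0.2), np.full((n, n), -0.1))
print(w[0, :3], w[:3, 0])   # +0.2 per step along a row, -0.1 per step down a column
```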


Robust Estimation of Camera Motion Using A Local Phase Based Affine Model (국소적 위상기반 어파인 모델을 이용한 강인한 카메라 움직임 추정)

  • Jang, Suk-Yoon;Yoon, Chang-Yong;Park, Mig-Non
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.46 no.1
    • /
    • pp.128-135
    • /
    • 2009
  • Techniques that track the same region of physical space across temporal sequences of images by matching contours of constant phase show robust and stable performance relative to tracking techniques that use or assume constant intensity. Using this property, we describe an algorithm for obtaining robust motion parameters caused by global camera motion. First, we obtain the optical flow based on the phase of spatially filtered sequential images, in the direction orthogonal to the orientation of each component of a Gabor filter bank. We then apply the least squares method to the optical flow to determine the affine motion parameters. We demonstrate that the proposed method can be applied to a vision-based pointing device that estimates its motion from images containing a display device, which introduces varying lighting conditions and noise.
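
The final least-squares step, fitting the six affine motion parameters to the phase-based optical flow, can be sketched as two linear solves. The Gabor filtering and phase-based flow computation are not shown; the flow samples below are synthetic and purely illustrative.

```python
import numpy as np

def affine_from_flow(x, y, u, v):
    """Least-squares fit of the 6-parameter affine motion model
       u = a1 + a2*x + a3*y,  v = a4 + a5*x + a6*y
    to sparse optical-flow samples (x, y, u, v are 1-D arrays)."""
    A = np.column_stack([np.ones_like(x), x, y])
    ax, *_ = np.linalg.lstsq(A, u, rcond=None)   # (a1, a2, a3)
    ay, *_ = np.linalg.lstsq(A, v, rcond=None)   # (a4, a5, a6)
    return ax, ay

# Synthetic flow from a small rotation-plus-zoom about the image centre (illustrative only)
rng = np.random.default_rng(1)
x, y = rng.uniform(-100, 100, (2, 500))
u = 0.01 * x - 0.002 * y + 0.5
v = 0.002 * x + 0.01 * y - 0.3
print(affine_from_flow(x, y, u, v))   # should recover the coefficients used above
```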

Indoor Localization by Matching of the Types of Vertices (모서리 유형의 정합을 이용한 실내 환경에서의 자기위치검출)

  • Ahn, Hyun-Sik
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.46 no.6
    • /
    • pp.65-72
    • /
    • 2009
  • This paper presents a vision-based localization method for indoor mobile robots that uses the types of vertices extracted from a monocular image. In the images captured by the robot's camera, the types of vertices are determined by searching for vertical edges and their branch edges under geometric constraints. To obtain correspondences between the corners of a 2-D map and the vertices in the images, the vertex types and geometric constraints are derived from a geometric analysis. The vertices are matched with the corners by a heuristic method using the type and position of the vertices and corners. From the matched pairs, nonlinear equations derived from the perspective and rigid transformations are produced. The pose of the robot is computed by solving these equations using a least-squares optimization technique. Experimental results show that the proposed method is effective and applicable to localization in indoor environments.
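
The last step, solving the nonlinear projection equations for the robot pose by least-squares optimization, might look like the following sketch for a planar pose (x, y, heading) observed through the image columns of matched vertical corners. The intrinsics `F` and `CX`, the projection convention, and the example corners are assumptions for illustration, not values from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

F, CX = 600.0, 320.0          # assumed camera intrinsics (pixels)

def predicted_columns(pose, corners_xy):
    """Image column of each matched 2-D map corner for robot pose (x, y, heading)."""
    x, y, th = pose
    d = corners_xy - np.array([x, y])
    forward = np.cos(th) * d[:, 0] + np.sin(th) * d[:, 1]   # depth along the optical axis
    lateral = -np.sin(th) * d[:, 0] + np.cos(th) * d[:, 1]  # offset across the axis
    return CX + F * lateral / forward                        # pinhole projection

def estimate_pose(corners_xy, measured_u, pose0=(0.0, 0.0, 0.0)):
    """Solve the nonlinear projection equations for the robot pose by least squares."""
    residual = lambda p: predicted_columns(p, corners_xy) - measured_u
    return least_squares(residual, pose0).x

# Illustrative run: four map corners observed from a known ground-truth pose
corners = np.array([[3.0, -1.0], [3.0, 1.0], [5.0, -2.0], [5.0, 2.0]])
true_pose = np.array([0.4, 0.2, 0.05])
u_meas = predicted_columns(true_pose, corners)
print(estimate_pose(corners, u_meas))   # should recover approximately [0.4, 0.2, 0.05]
```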