• Title/Summary/Keyword: Fusion matching


Multi-Image Stereo Method Using DEM Fusion Technique (DEM 융합 기법을 이용한 다중영상스테레오 방법)

  • Lim Sung-Min;Woo Dong-Min
    • The Transactions of the Korean Institute of Electrical Engineers D / v.52 no.4 / pp.212-222 / 2003
  • The ability to efficiently and robustly recover accurate 3D terrain models from sets of stereoscopic images is important to many civilian and military applications. Stereo matching has been an important tool for reconstructing three-dimensional terrain. However, many factors cause stereo matching errors, such as occlusion, the absence of features or the presence of repetitive patterns in the correlation window, and intensity variation. Among them, occlusion can only be resolved by true multi-image stereo. In this paper, we present a multi-image stereo method using DEM fusion as an efficient and reliable true multi-image approach. Elevations generated by all pairs of images are combined by a fusion process that accepts accurate elevations and rejects outliers. We propose three fusion schemes: THD (Thresholding), BPS (Best Pair Selection), and MS (Median Selection). THD averages elevations after rejecting outliers by thresholding, while BPS selects the most reliable elevation. To determine the reliability of an elevation and to detect outliers, we employ the measure of self-consistency. The last scheme, MS, selects the median of the elevations. We test the effectiveness of the proposed methods with a quantitative analysis using simulated images. Experimental results indicate that all three fusion schemes yield a substantial improvement over conventional binocular stereo on the natural terrain of 29 Palms and the urban site of Avenches.
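
The three fusion rules above are simple enough to sketch per DEM cell. The following is a minimal illustration only, assuming a per-pair self-consistency score (lower means more reliable) and an illustrative outlier threshold; neither the names nor the values are taken from the paper.

```python
# Hypothetical sketch of the three DEM fusion schemes (THD, BPS, MS) for one cell.
import numpy as np

def fuse_thd(elevations, threshold=5.0):
    """THD: reject elevations farther than `threshold` from the median, then average."""
    elevations = np.asarray(elevations, dtype=float)
    median = np.median(elevations)
    inliers = elevations[np.abs(elevations - median) <= threshold]
    return float(inliers.mean()) if inliers.size else float(median)

def fuse_bps(elevations, self_consistency):
    """BPS: keep the elevation from the most self-consistent stereo pair."""
    best = int(np.argmin(self_consistency))
    return elevations[best]

def fuse_ms(elevations):
    """MS: take the median elevation across all stereo pairs."""
    return float(np.median(elevations))

# Example: elevations (m) for one DEM cell from four stereo pairs.
elev = [102.3, 101.8, 130.5, 102.1]     # third pair is occluded / an outlier
score = [0.4, 0.3, 2.7, 0.5]            # hypothetical self-consistency values
print(fuse_thd(elev), fuse_bps(elev, score), fuse_ms(elev))
```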

Boundary Stitching Algorithm for Fusion of Vein Pattern (정맥패턴 융합을 위한 Boundary Stitching Algorithm)

  • Lim, Young-Kyu;Jang, Kyung-Sik
    • Proceedings of the Korea Information Processing Society Conference / 2005.05a / pp.521-524 / 2005
  • This paper proposes a fusion algorithm that merges multiple vein pattern images into a single image larger than any of the input images. As a preprocessing step for template matching in the verification of biometric data such as fingerprint images and hand vein pattern images, the fusion technique is used to make the reference image larger than the candidate images in order to enhance matching performance. A new algorithm called BSA (Boundary Stitching Algorithm) is proposed, in which the rectilinear boundary parts extracted from the candidate images are stitched to the reference image in order to enlarge its matching space. By applying BSA to a practical vein pattern verification system, the verification rate was increased by about 10%.
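
As a rough illustration of the stitching idea, the sketch below pastes the parts of a candidate image that fall outside the reference onto an enlarged canvas, assuming the candidate-to-reference offset has already been found by template matching. The function name and the single-channel handling are assumptions, not the paper's BSA implementation.

```python
# Hypothetical boundary-stitching sketch: the candidate's regions lying outside
# the reference are kept on an enlarged canvas; the overlap stays the reference.
import numpy as np

def stitch_boundary(reference, candidate, offset):
    """Return an enlarged image containing the reference plus the candidate's
    boundary region that falls outside it; offset = (dy, dx) of the candidate."""
    dy, dx = offset
    h_r, w_r = reference.shape
    h_c, w_c = candidate.shape
    top, left = min(0, dy), min(0, dx)
    bottom, right = max(h_r, dy + h_c), max(w_r, dx + w_c)
    canvas = np.zeros((bottom - top, right - left), dtype=reference.dtype)
    # Paste the candidate first, then overwrite the overlap with the reference,
    # so only the candidate's outside-boundary parts survive.
    canvas[dy - top:dy - top + h_c, dx - left:dx - left + w_c] = candidate
    canvas[-top:-top + h_r, -left:-left + w_r] = reference
    return canvas

ref = np.full((4, 4), 100, dtype=np.uint8)
cand = np.full((4, 4), 200, dtype=np.uint8)
print(stitch_boundary(ref, cand, (1, 3)).shape)   # enlarged matching space: (5, 7)
```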


Pose Tracking of Moving Sensor using Monocular Camera and IMU Sensor

  • Jung, Sukwoo;Park, Seho;Lee, KyungTaek
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.8 / pp.3011-3024 / 2021
  • Pose estimation of a sensor is an important issue in many applications such as robotics, navigation, tracking, and Augmented Reality. This paper proposes a visual-inertial integration system suited to dynamically moving sensors. The orientation estimated from an Inertial Measurement Unit (IMU) is used to calculate the essential matrix based on the intrinsic parameters of the camera. Using epipolar geometry, outliers among the feature point matches are eliminated across the image sequence, and the pose of the sensor is obtained from the remaining matches. The IMU helps to eliminate erroneous point matches at an early stage in images of dynamic scenes. After the outliers are removed, the selected feature point correspondences are used to calculate a precise fundamental matrix, from which the pose of the sensor is finally estimated. The proposed procedure was implemented and tested in comparison with existing methods, and the experimental results show the effectiveness of the proposed technique.
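
A hedged sketch of the outlier-rejection step described above, with OpenCV used for the final pose recovery: matches that disagree with the IMU orientation prior after rotation compensation are dropped before the essential matrix is estimated. The rotation-compensation test and the thresholds are crude assumptions for illustration, not the authors' exact criterion.

```python
import cv2
import numpy as np

def filter_with_imu(pts1, pts2, K, R_imu, thresh_px=3.0):
    """Keep matches roughly consistent with the IMU rotation prior.
    pts1, pts2: Nx2 pixel coordinates; R_imu: frame-1-to-frame-2 rotation."""
    pts1 = np.asarray(pts1, float)
    pts2 = np.asarray(pts2, float)
    K_inv = np.linalg.inv(K)
    ones = np.ones((len(pts1), 1))
    rays1 = (K_inv @ np.hstack([pts1, ones]).T).T      # normalized rays in frame 1
    rays1_rot = (R_imu @ rays1.T).T                    # rotated into frame 2
    pred = (K @ rays1_rot.T).T
    pred = pred[:, :2] / pred[:, 2:3]                  # predicted pixels if motion were rotation only
    residual = np.linalg.norm(pred - pts2, axis=1)
    # Crude gross-outlier test: translation adds parallax, so allow a margin
    # around the median residual rather than requiring it to be zero.
    keep = residual < np.median(residual) + thresh_px
    return pts1[keep], pts2[keep]

def estimate_pose(pts1, pts2, K, R_imu):
    """IMU-assisted filtering, then standard OpenCV two-view pose recovery."""
    pts1, pts2 = filter_with_imu(pts1, pts2, K, R_imu)
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```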

Motion Estimation Using Feature Matching and Strongly Coupled Recurrent Module Fusion (특징정합과 순환적 모듈융합에 의한 움직임 추정)

  • 심동규;박래홍
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.12 / pp.59-71 / 1994
  • This paper proposes a motion estimation method for video sequences based on feature-based matching and anisotropic propagation. It measures translation and rotation parameters using a relaxation scheme at feature points and object-oriented anisotropic propagation in continuous and discontinuous regions. An iterative refinement of the motion estimate based on strongly coupled module fusion and adaptive smoothing is also proposed. Computer simulation results show the effectiveness of the proposed algorithm.
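
The sketch below illustrates only the two stages that are easiest to pin down from the abstract: block matching at feature points, followed by edge-stopping ("anisotropic") propagation of the estimated motion into featureless regions. Window sizes, the edge threshold, and the omission of rotation and module fusion are simplifying assumptions.

```python
import numpy as np

def match_feature(prev, curr, y, x, win=4, search=6):
    """Return the (dy, dx) translation minimizing SAD around feature (y, x)."""
    patch = prev[y - win:y + win + 1, x - win:x + win + 1].astype(float)
    best, best_d = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y + dy - win:y + dy + win + 1,
                        x + dx - win:x + dx + win + 1]
            if cand.shape != patch.shape:
                continue
            sad = np.abs(cand.astype(float) - patch).sum()
            if sad < best:
                best, best_d = sad, (dy, dx)
    return best_d

def propagate(motion, valid, image, edge_thresh=20, iters=50):
    """Fill unset motion vectors from 4-neighbours, stopping at intensity edges.
    motion: HxWx2 array, valid: HxW boolean mask of feature-point estimates."""
    h, w = image.shape
    for _ in range(iters):
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if valid[y, x]:
                    continue
                nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
                vals = [motion[ny, nx] for ny, nx in nbrs
                        if valid[ny, nx]
                        and abs(int(image[ny, nx]) - int(image[y, x])) < edge_thresh]
                if vals:
                    motion[y, x] = np.mean(vals, axis=0)
                    valid[y, x] = True
    return motion
```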


Fusion Matching According to Land Cover Property of High Resolution Images (고해상도 위성영상의 토지피복 특성에 따른 혼합정합)

  • Lee, Hyoseong;Park, Byunguk;Ahn, Kiweon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.6_1 / pp.583-590 / 2012
  • This study proposes a fusion image matching method based on land cover properties to generate a detailed DEM from the high-resolution IKONOS-2 stereo pair. A classified image consisting of building, crop-land, forest, road, and shadow-water classes is produced from the four-band color image, while edges and points are extracted from the panchromatic image. Matching is performed by cross-correlation after the five classes are automatically identified in the reference image. Matching used grid and edge features for the building class, grid features only for the crop-land and forest classes, and grid and point features for the road class. The shadow-water class was excluded from matching because it causes excessive DEM error. As a result, edge lines, buildings, and residential areas could be represented in greater detail than in a DEM generated by the conventional method.
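
A minimal sketch of the class-dependent choice of matching primitives, assuming illustrative class labels and a plain normalized cross-correlation score; the study's actual primitive extraction and window configuration are not reproduced.

```python
import numpy as np

# Which primitives to match for each land-cover class (shadow/water is skipped).
PRIMITIVES = {
    "building":     ("grid", "edge"),
    "crop_land":    ("grid",),
    "forest":       ("grid",),
    "road":         ("grid", "point"),
    "shadow_water": (),          # excluded: causes excessive DEM error
}

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def match_cell(land_cover_class, patches_left, patches_right):
    """Correlate only the primitives allowed for this land-cover class.
    patches_left/right map primitive name -> image patch."""
    return {prim: ncc(patches_left[prim], patches_right[prim])
            for prim in PRIMITIVES.get(land_cover_class, ())}

left = {"grid": np.random.rand(11, 11), "edge": np.random.rand(11, 11)}
right = {"grid": np.random.rand(11, 11), "edge": np.random.rand(11, 11)}
print(match_cell("building", left, right))
```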

Multi-Depth Map Fusion Technique from Depth Camera and Multi-View Images (깊이정보 카메라 및 다시점 영상으로부터의 다중깊이맵 융합기법)

  • 엄기문;안충현;이수인;김강연;이관행
    • Journal of Broadcast Engineering / v.9 no.3 / pp.185-195 / 2004
  • This paper presents a multi-depth map fusion method for 3D scene reconstruction. It fuses depth maps obtained from stereo matching and from a depth camera. Traditional stereo matching techniques that estimate disparities between two images often produce inaccurate depth maps because of occlusion and homogeneous areas. The depth map obtained from the depth camera is globally accurate but noisy and covers a limited depth range. In order to obtain better depth estimates than either of these conventional techniques, we propose a fusion method that combines the multiple depth maps from stereo matching and the depth camera. We first obtain two depth maps from stereo matching of three-view images; in addition, a depth map is obtained from the depth camera for the center-view image. After preprocessing each depth map, we select a depth value for each pixel among them. Simulation results showed improvements in some background regions with the proposed fusion technique.
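
One plausible per-pixel selection rule consistent with the description above, as a sketch: keep whichever stereo depth lies closest to the (globally accurate but noisy) depth-camera value, and fall back to the depth camera where both stereo estimates deviate too much. The tolerance and the fallback rule are assumptions, not the paper's method.

```python
import numpy as np

def fuse_depth(stereo_left, stereo_right, depth_cam, tol=0.1):
    """All inputs are HxW depth maps in metres; returns the fused map."""
    stereo = np.stack([stereo_left, stereo_right])      # (2, H, W)
    diff = np.abs(stereo - depth_cam)                   # deviation from depth camera
    pick = np.argmin(diff, axis=0)                      # closest stereo estimate per pixel
    fused = np.take_along_axis(stereo, pick[None], axis=0)[0]
    # Where neither stereo value is close to the depth camera, fall back to it.
    fused = np.where(np.min(diff, axis=0) > tol, depth_cam, fused)
    return fused

h = w = 4
fused = fuse_depth(np.full((h, w), 2.0), np.full((h, w), 2.5), np.full((h, w), 2.1))
print(fused)
```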

Experiment for 3D Coregistration between Scanned Point Clouds of Building using Intensity and Distance Images (강도영상과 거리영상에 의한 건물 스캐닝 점군간 3차원 정합 실험)

  • Jeon, Min-Cheol;Eo, Yang-Dam;Han, Dong-Yeob;Kang, Nam-Gi;Pyeon, Mu-Wook
    • Korean Journal of Remote Sensing / v.26 no.1 / pp.39-45 / 2010
  • For automatic registration of terrestrial LiDAR data, this study used keypoints observed simultaneously in the two-dimensional intensity images obtained along with the two point clouds, and selected matching points with the SIFT algorithm. The RANSAC algorithm was then applied to remove mismatches and improve registration accuracy. The three-dimensional rotation and the vertical/horizontal translations, i.e., the transformation parameters between the two point clouds, were computed and compared with the result obtained manually. In a test on the College of Science building at Konkuk University, the differences between the transformation parameters obtained by automatic matching and those obtained by hand were 0.011 m, 0.008 m, and 0.052 m in the X, Y, and Z directions, indicating that the automatic matching results can serve as data for automatic registration.
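
A sketch of the intensity-image pipeline under stated assumptions: SIFT matching with a ratio test, RANSAC (here on a homography model, purely to flag gross mismatches), and a standard Kabsch/SVD solution for the 3D rotation and translation. The mapping from each surviving 2D keypoint to its 3D scan point is assumed to be available from the scanner's range data and is not shown.

```python
import cv2
import numpy as np

def match_intensity_images(img1, img2, ratio=0.75):
    """SIFT keypoint matching between two intensity images with RANSAC filtering."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    p1 = np.float32([k1[m.queryIdx].pt for m in good])
    p2 = np.float32([k2[m.trainIdx].pt for m in good])
    _, mask = cv2.findHomography(p1, p2, cv2.RANSAC, 3.0)   # flag gross mismatches
    keep = mask.ravel().astype(bool)
    return p1[keep], p2[keep]

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ~ R @ P + t (Kabsch),
    applied to the Nx3 scan points behind the surviving image matches."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # reflection fix
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    return R, t
```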

Multi-information fusion based localization algorithm for Mars rover

  • Jiang, Xiuqiang;Li, Shuang;Tao, Ting;Wang, Bingheng
    • Advances in aircraft and spacecraft science / v.1 no.4 / pp.455-469 / 2014
  • A high-precision autonomous localization technique is essential for future Mars rovers. This paper presents an innovative integrated localization algorithm using a multiple-information fusion approach. First, the output of the IMU is employed to construct the two-dimensional (2-D) dynamics equation of the Mars rover. Second, radio beacon measurements and terrain image matching are treated as external measurements and incorporated into the navigation filter to correct the inertial bias and drift. An extended Kalman filter (EKF) is then designed to estimate the position of the Mars rover and suppress measurement noise. Finally, the localization algorithm proposed in this paper is validated by computer simulation with different parameter sets.
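
A minimal 2-D EKF sketch of the fusion structure described above: IMU acceleration drives the prediction, while a terrain-image-matching position fix and a range to a radio beacon serve as the two external updates. All noise levels, the time step, and the beacon location are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
B = np.array([[0.5 * dt**2, 0], [0, 0.5 * dt**2], [dt, 0], [0, dt]])
Q = np.eye(4) * 1e-3                  # process noise (assumed)
R_pos = np.eye(2) * 4.0               # image-matching position noise, m^2 (assumed)
R_rng = np.array([[1.0]])             # beacon range noise, m^2 (assumed)
beacon = np.array([500.0, -200.0])    # hypothetical beacon location

def predict(x, P, accel_imu):
    """Propagate state [x, y, vx, vy] with IMU acceleration as the control input."""
    x = F @ x + B @ accel_imu
    return x, F @ P @ F.T + Q

def update_position(x, P, z_pos):
    """Correct with a position fix from terrain image matching (linear measurement)."""
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
    y = z_pos - H @ x
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R_pos)
    return x + K @ y, (np.eye(4) - K @ H) @ P

def update_range(x, P, z_range):
    """Correct with a range to the radio beacon (nonlinear, linearized Jacobian)."""
    d = x[:2] - beacon
    rng = np.linalg.norm(d)
    H = np.zeros((1, 4))
    H[0, :2] = d / rng                # Jacobian of the range model
    y = np.array([z_range - rng])
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R_rng)
    return x + K @ y, (np.eye(4) - K @ H) @ P

# One cycle: predict with IMU, then correct with both external measurements.
x, P = np.zeros(4), np.eye(4)
x, P = predict(x, P, np.array([0.1, 0.0]))
x, P = update_position(x, P, np.array([0.06, 0.01]))
x, P = update_range(x, P, float(np.linalg.norm(np.array([0.06, 0.01]) - beacon)))
print(x)
```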