• Title/Summary/Keyword: imagery registration

Search Results: 44

Development and Performance Analysis of a Near Real-Time Sensor Model Correction System for Frame Motion Imagery (프레임동영상의 근실시간 센서모델 보정시스템 개발 및 성능분석)

  • Kwon, Hyuk Tae; Koh, Jin-Woo; Kim, Sanghee; Park, Se Hyoung
    • Journal of the Korea Institute of Military Science and Technology, v.21 no.3, pp.315-322, 2018
  • Due to the increasing demand for more rapid, precise, and accurate geolocation of targets on video frames from UAVs, an efficient and timely method for correcting the sensor models of motion imagery is required. In this paper, we propose a method to adjust or correct the sensor models of motion imagery frames using space resection via image matching with reference data. The proposed method matches the motion imagery frames against reference frames synthesized from reference data. Ground or reference control points are generated or selected through the matching process in near real time and are used for space resection to obtain adjusted sensor models. As a result, more precise and accurate geolocation of targets can be performed on the fly, and our performance analysis shows promising results in terms of geolocation quality.

Building Extraction from Lidar Data and Aerial Imagery using Domain Knowledge about Building Structures

  • Seo, Su-Young
    • Korean Journal of Remote Sensing, v.23 no.3, pp.199-209, 2007
  • Traditionally, aerial images have been used as the main source for compiling topographic maps. In recent years, lidar data has been exploited as another type of mapping data. Regarding their performance, aerial imagery can delineate object boundaries, but many of these boundaries are omitted during feature extraction; lidar provides direct information about the heights of object surfaces but has limitations in boundary localization. Considering these sensor characteristics, this paper proposes an approach to extracting buildings from lidar and aerial imagery that exploits the complementary characteristics of optical and range sensors. To detect building regions, relationships among elevation contours are represented as directional graphs, which are searched for the contours corresponding to the external boundaries of buildings. To generate building models, a wing model is proposed to assemble roof surface patches into a complete building model. The building models are then projected onto and checked against features in the aerial images. Experimental results show that the proposed approach provides an efficient and accurate way to extract building models.

Image Registration Improvement Based-on FFT Techniques with the Affine Transform Estimation

  • Wisetphanichkij, Sompong; Pasomkusolsil, Sanchaiya; Dejhan, Kobchai; Cheevasuvit, Fusak; Mitatha, Somsak; Sra-Ium, Napat; Vorrawat, Vinai; Pienvijarnpong, Chanchai
    • Proceedings of the KSRS Conference, 2003.11a, pp.260-262, 2003
  • New image registration techniques are developed for determining geometric distortions between two images of the same scene. First, the properties of the Fourier transform of a two-dimensional function under an affine transformation are given. From these, techniques for estimating the coefficients of the distortion model from spectral frequency information are developed. Image registration can then be achieved by applying the fast Fourier transform (FFT) for cross-correlation of the misregistered imagery to determine spatial offsets. The correlation peak may be rather broad, making its detection difficult; this can be mitigated with an enhanced cross-correlation technique, which greatly improves the detectability and precision of the estimated misregistration.

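The FFT-based cross-correlation described in this abstract is commonly realized as phase correlation, which whitens the cross-power spectrum to sharpen the otherwise broad correlation peak — exactly the peak-detection difficulty the authors mention. The sketch below is a minimal illustration, not the authors' implementation; `phase_correlation_shift` is a hypothetical helper name and only integer translations are recovered:

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the integer (dy, dx) translation between two equal-size
    grayscale images via FFT-based phase correlation."""
    F_ref = np.fft.fft2(ref)
    F_mov = np.fft.fft2(moved)
    cross_power = np.conj(F_ref) * F_mov
    # Whiten the cross-power spectrum: this sharpens the otherwise
    # broad correlation peak, easing peak detection.
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Offsets beyond half the image size wrap around to negative shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# Toy check: a circular shift of (3, -5) is recovered exactly.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(phase_correlation_shift(img, np.roll(img, (3, -5), axis=(0, 1))))  # (3, -5)
```

Recovering an affine distortion, as in the paper, additionally requires estimating rotation and scale from the spectral magnitudes before this translation step.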

Precise Geometric Registration of Aerial Imagery and LIDAR Data

  • Choi, Kyoung-Ah; Hong, Ju-Seok; Lee, Im-Pyeong
    • ETRI Journal, v.33 no.4, pp.506-516, 2011
  • In this paper, we develop a registration method to eliminate the geometric inconsistency between stereo-images and light detection and ranging (LIDAR) data obtained by an airborne multisensor system. This method consists of three steps: registration primitive extraction, correspondence establishment, and exterior orientation parameter (EOP) adjustment. As the primitives, we employ object points and linked edges from the stereo-images and planar patches and intersection edges from the LIDAR data. After extracting these primitives, we establish the correspondence between them, classifying the pairs into vertical and horizontal groups. These corresponding pairs are simultaneously incorporated as stochastic constraints into aerial triangulation based on the bundle block adjustment. Finally, the EOPs of the images are adjusted to minimize the inconsistency. The results from applying our method to real data demonstrate that the inconsistency between the two data sets is significantly reduced, from a range of 0.5 m to 2 m down to less than 0.05 m. Hence, the proposed method is useful for the data fusion of aerial images and LIDAR data.

Image Fusion for Improving Classification

  • Lee, Dong-Cheon; Kim, Jeong-Woo; Kwon, Jay-Hyoun; Kim, Chung; Park, Ki-Surk
    • Proceedings of the KSRS Conference, 2003.11a, pp.1464-1466, 2003
  • Classification of satellite images provides information about land cover and/or land use. The quality of the classification result depends mainly on the spatial and spectral resolutions of the images. In this study, image fusion, in terms of resolution merging and band integration, was carried out with multi-source satellite images (Landsat ETM+ and Ikonos) to improve classification. Resolution merging and band integration can generate high-resolution imagery with more spectral bands. Precise image co-registration is required to remove geometric distortion between the different image sources. A combination of unsupervised and supervised classification of the fused imagery was implemented to improve the classification. 3D display of the results was made possible by combining a DEM with the classification result, improving interpretability.

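Resolution merging of the kind described above is often done with simple component-substitution schemes. The sketch below uses a Brovey-style ratio merge as one illustrative technique; the abstract does not state which fusion method was actually used, and `brovey_fuse` is a hypothetical helper name. The multispectral bands are assumed already co-registered and resampled to the panchromatic grid:

```python
import numpy as np

def brovey_fuse(ms, pan):
    """Brovey-style resolution merge.

    ms:  (bands, H, W) multispectral stack, resampled to the pan grid
    pan: (H, W) high-resolution panchromatic band

    Each low-resolution band is rescaled by the ratio of the sharp pan
    intensity to the summed multispectral intensity, injecting spatial
    detail while preserving band ratios.
    """
    intensity = ms.sum(axis=0)
    return ms * pan / (intensity + 1e-12)  # epsilon guards divide-by-zero

# Sanity check: if the pan band equals the summed intensity,
# the fused result reproduces the input bands.
rng = np.random.default_rng(0)
ms = rng.random((4, 8, 8))
fused = brovey_fuse(ms, ms.sum(axis=0))
print(np.allclose(fused, ms))  # True
```

Note the precise co-registration requirement stated in the abstract: any residual misalignment between the pan and multispectral grids shows up directly as color fringing in the fused product.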

CO-REGISTRATION OF KOMPSAT IMAGERY AND DIGITAL MAP

  • Han, Dong-Yeob; Lee, Hyo-Seong
    • Proceedings of the KSRS Conference, 2008.10a, pp.11-13, 2008
  • This study proposes a method that uses existing digital maps to eliminate the operator-dependent differences that arise when GCPs are determined manually for the geometric correction of KOMPSAT images, and to automate the generation of ortho-images. It is known that when high-resolution satellite images are corrected geometrically using RPCs, first-order polynomials generally give good results as the correction formula. In this study, we matched corresponding objects between a 1:25,000 digital map and a KOMPSAT image to obtain the coefficients of the zero-order polynomial and showed the differences in the pixel locations obtained through the matching. We performed proximity corrections using a Boolean operation between the point data of the surface linear objects and the point data of the edge objects of the image. The surface linear objects are roads, water bodies, and buildings from the topographic map.

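The polynomial correction mentioned above — fitting a low-order polynomial to matched control points — reduces to a linear least-squares problem. A minimal sketch for the first-order (affine) case, assuming matched map/image point pairs are already available; `fit_affine` and `apply_affine` are hypothetical names, not from the paper:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of a first-order polynomial (affine) mapping
    src -> dst, from N >= 3 matched (x, y) control points."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])      # design matrix [x y 1]
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (3, 2) coefficients
    return coeffs

def apply_affine(coeffs, pts):
    """Map points through the fitted first-order polynomial."""
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ coeffs

# Recover a known scale-plus-offset transform from four control points.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src @ np.array([[2.0, 0.0], [0.0, 3.0]]) + np.array([10.0, 20.0])
C = fit_affine(src, dst)
print(np.allclose(apply_affine(C, src), dst))  # True
```

A zero-order correction, as used for the RPC bias in the study, is the special case where only the constant (offset) term is estimated.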

Image Registration Method for KOMPSAT-2 clouds imagery (구름이 존재하는 아리랑 2호 영상의 영상정합 방법)

  • Kim, Tae-Young; Choi, Myung-Jin
    • Proceedings of the KSRS Conference, 2009.03a, pp.250-253, 2009
  • Satellites carrying multispectral sensors for high-resolution color imaging acquire the same area at slightly different times for each sensor, depending on where the sensors are mounted on the payload. If moving clouds are captured, the relative position of the clouds with respect to the ground therefore differs between the band images. Image registration is used to generate a high-resolution color image, but conventional registration algorithms assume that the feature points in the images do not move. As a result, when matching points are extracted along the boundaries of moving clouds, the registration quality over ground areas degrades. In this study, we therefore propose an algorithm that avoids extracting matching points on cloud boundaries. KOMPSAT-2 images containing clouds were used as test data, and the proposed registration algorithm was shown to improve the registration quality over ground areas.


Photogrammetric Georeferencing Using LIDAR Linear and Areal Features

  • HABIB Ayman; GHANMA Mwafag; MITISHITA Edson
    • Korean Journal of Geomatics, v.5 no.1, pp.7-19, 2005
  • Photogrammetric mapping procedures have gone through major developments due to significant improvements in their underlying technologies. The availability of GPS/INS systems greatly assists direct geo-referencing of the acquired imagery. Still, photogrammetric datasets acquired without the aid of positioning and navigation systems need control information for the purpose of surface reconstruction. Point features were, and still are, the primary source of control for photogrammetric triangulation, although higher-order features are available and can be used. LIDAR systems supply dense geometric surface information in the form of three-dimensional coordinates with respect to a certain reference system. Considering the accuracy improvements of LIDAR systems in recent years, LIDAR data is a viable source of photogrammetric control. To exploit LIDAR data, new challenges are posed concerning the representation and reference system by which both the photogrammetric and LIDAR datasets are described. In this paper, registration methodologies are devised for the purpose of integrating LIDAR data into the photogrammetric triangulation. Such registration methodologies have to deal with three issues: registration primitives, transformation parameters, and similarity measures. Two methodologies are introduced that utilize straight-line and areal features derived from both datasets as the registration primitives. The first methodology directly incorporates the LIDAR lines as control information in the photogrammetric triangulation, while in the second, LIDAR patches are used to produce and align the photogrammetric model. Camera self-calibration experiments were also conducted on simulated and real data to test the feasibility of using LIDAR patches for this purpose.


RNCC-based Fine Co-registration of Multi-temporal RapidEye Satellite Imagery (RNCC 기반 다시기 RapidEye 위성영상의 정밀 상호좌표등록)

  • Han, Youkyung; Oh, Jae Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.36 no.6, pp.581-588, 2018
  • The aim of this study is to propose a fine co-registration approach for multi-temporal satellite images acquired from RapidEye, whose ready availability is an advantage for time-series analysis. To this end, we generate multi-temporal ortho-rectified images using the RPCs (Rational Polynomial Coefficients) provided with RapidEye images and then perform fine co-registration between the ortho-rectified images. A DEM (Digital Elevation Model) extracted from the digital map was used to generate the ortho-rectified images, and RNCC (Registration Noise Cross Correlation) was applied for the fine co-registration. Experiments were carried out using four RapidEye 1B images acquired from May 2015 to November 2016 over the Yeonggwang area. All five bands that RapidEye provides (blue, green, red, red edge, and near-infrared) were used in the fine co-registration to show their applicability for co-registration. Experimental results showed that all bands of the RapidEye images could be co-registered with each other and that the geometric alignment between images was qualitatively and quantitatively improved. In particular, stable registration results were obtained using the red and red edge bands, irrespective of seasonal differences in image acquisition.
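The normalized cross-correlation at the heart of correlation-based co-registration like the above can be sketched as an integer-pixel template search. The code below is a generic NCC matcher, not the RNCC method itself (which correlates registration noise between bands); `ncc` and `best_offset` are hypothetical names:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size patches.
    Invariant to linear (gain/offset) intensity changes."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_offset(ref, target, template_size=16, search=4):
    """Find the integer offset of a central template from `ref` inside
    `target` that maximizes NCC, searching +/- `search` pixels."""
    cy, cx = ref.shape[0] // 2, ref.shape[1] // 2
    h = template_size // 2
    tmpl = ref[cy - h:cy + h, cx - h:cx + h]
    best, best_score = (0, 0), -2.0  # NCC lies in [-1, 1]
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            patch = target[cy - h + dy:cy + h + dy, cx - h + dx:cx + h + dx]
            score = ncc(tmpl, patch)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# Toy check: a circular shift of (2, -3) is recovered.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
print(best_offset(ref, np.roll(ref, (2, -3), axis=(0, 1))))  # (2, -3)
```

In practice such offsets are estimated over a grid of templates and used to refine the geometric alignment band by band, which is what makes the per-band results reported above possible.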