• Title/Summary/Keyword: 상호좌표등록 (image co-registration)

12 search results

RNCC-based Fine Co-registration of Multi-temporal RapidEye Satellite Imagery (RNCC 기반 다시기 RapidEye 위성영상의 정밀 상호좌표등록)

  • Han, Youkyung; Oh, Jae Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.6 / pp.581-588 / 2018
  • The aim of this study is to propose a fine co-registration approach for multi-temporal satellite images acquired by RapidEye, which has the advantage of being readily available for time-series analysis. To this end, we generate multi-temporal ortho-rectified images using the RPCs (Rational Polynomial Coefficients) provided with the RapidEye images and then perform fine co-registration between the ortho-rectified images. A DEM (Digital Elevation Model) extracted from the digital map was used to generate the ortho-rectified images, and RNCC (Registration Noise Cross Correlation) was applied to conduct the fine co-registration. Experiments were carried out using four RapidEye 1B images acquired between May 2015 and November 2016 over the Yeonggwang area. All five bands provided by RapidEye (blue, green, red, red edge, and near-infrared) were used in the fine co-registration to test whether each is applicable to the task. Experimental results showed that all bands of the RapidEye images could be co-registered with one another and that the geometric alignment between images improved both qualitatively and quantitatively. In particular, the red and red edge bands yielded stable registration results irrespective of seasonal differences in image acquisition.
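The abstract does not spell out how RNCC scores candidate offsets, so the following is only a minimal sketch of area-based fine co-registration that exhaustively maximizes the normalized cross-correlation over a small search window between two already ortho-rectified bands; the function names (`ncc`, `estimate_shift`) and the synthetic test data are illustrative, not taken from the paper.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def estimate_shift(reference, target, max_shift=5):
    """Find the integer (row, col) offset of `target` relative to `reference`
    that maximizes NCC, searching within +/- max_shift pixels."""
    m = max_shift
    core = reference[m:-m, m:-m]          # interior of the reference band
    best_score, best_shift = -np.inf, (0, 0)
    for dr in range(-m, m + 1):
        for dc in range(-m, m + 1):
            patch = target[m + dr: target.shape[0] - m + dr,
                           m + dc: target.shape[1] - m + dc]
            score = ncc(core, patch)
            if score > best_score:
                best_score, best_shift = score, (dr, dc)
    return best_shift, best_score

# Synthetic check: a "second acquisition" shifted by (2, -1) pixels plus noise.
rng = np.random.default_rng(0)
ref = rng.random((200, 200))
tgt = np.roll(ref, shift=(2, -1), axis=(0, 1)) + 0.05 * rng.random((200, 200))
print(estimate_shift(ref, tgt))           # recovers the (2, -1) offset
```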

Fine Co-registration Performance of KOMPSAT-3·3A Imagery According to Convergence Angles (수렴각에 따른 KOMPSAT-3·3A호 영상 간 정밀 상호좌표등록 결과 분석)

  • Han, Youkyung; Kim, Taeheon; Kim, Yeji; Lee, Jeongho
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.6 / pp.491-498 / 2019
  • This study analyzed how the accuracy of co-registration varies with the convergence angle between two KOMPSAT-3·3A images. Most very-high-resolution satellite images provide initial coordinate information through metadata. Since this initial coordinate information can be used to reduce the search area for image co-registration, the mutual information method, which shows high matching reliability within a small search area, is used in this study. Initial coarse co-registration was performed using the relatively low-resolution multispectral images, and fine co-registration was then conducted around regions of interest in the panchromatic image for more accurate performance. The experiment was conducted on 120 combinations of 16 KOMPSAT-3·3A Level 1G images taken over the Daejeon area. Experimental results show that the correlation coefficient between the convergence angles and the fine co-registration errors was 0.59; in particular, the larger the convergence angle, the lower the co-registration accuracy.
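As a hedged illustration of why mutual information works well within a small search area, the sketch below scores integer offsets inside a tight window with a histogram-based MI estimate; `mi_search`, the bin count, and the window size are assumptions for the example, not the paper's implementation.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two image patches."""
    hist_2d, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()
    px = pxy.sum(axis=1, keepdims=True)            # marginal of patch a
    py = pxy.sum(axis=0, keepdims=True)            # marginal of patch b
    nz = pxy > 0                                    # skip log(0) terms
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def mi_search(reference, target, max_shift=3):
    """Exhaustive search over a +/- max_shift window for the offset that
    maximizes MI; brute force stays cheap only because the metadata
    coordinates already bring the two images close together."""
    m = max_shift
    core = reference[m:-m, m:-m]
    scores = {}
    for dr in range(-m, m + 1):
        for dc in range(-m, m + 1):
            patch = target[m + dr: target.shape[0] - m + dr,
                           m + dc: target.shape[1] - m + dc]
            scores[(dr, dc)] = mutual_information(core, patch)
    return max(scores, key=scores.get)
```

Because MI depends only on the joint intensity statistics, it tolerates the radiometric differences that arise between acquisitions with different convergence angles and illumination.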

Automated Image Co-registration using Pre-qualified Area Based Matching and Outlier Removal (사전검수 영역기반정합법과 과대오차제거를 이용한 '자동영상좌표 상호등록')

  • Kim Jong-Hong; Heo Joon; Sohn Hong-Gyoo
    • Proceedings of the KSRS Conference / 2006.03a / pp.49-52 / 2006
  • With the growing use of satellite imagery for analysis and monitoring over large regions or the entire globe, efficient image co-registration methods are needed to process such data. In this study, pre-qualified area based matching was used to improve the efficiency of image co-registration, which generally takes a long time. This significantly reduced the computation time of co-registration, and applying outlier removal to the extracted matching points improved the accuracy compared to applying area-based matching alone. A test program implementing the proposed algorithm was written and tested on three Landsat ETM+ images of the Korean Peninsula. The root mean square error between matching points was 0.436 pixels, the average number of matching points was 38,475, and the average computation time was about 8 minutes.

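A minimal sketch of the two ingredients named in the abstract, pre-qualification of candidate windows by Canny edge content and area-based matching with the cross-correlation coefficient; the window size, thresholds, and function names are illustrative assumptions, and images are assumed to be 8-bit grayscale arrays.

```python
import cv2
import numpy as np

def prequalify_windows(image, win=32, edge_ratio=0.05):
    """Keep only windows with enough Canny edge pixels; matching flat,
    textureless windows wastes time and yields unreliable points."""
    edges = cv2.Canny(image, 50, 150)              # image: 8-bit grayscale
    candidates = []
    for r in range(0, image.shape[0] - win, win):
        for c in range(0, image.shape[1] - win, win):
            if edges[r:r + win, c:c + win].mean() / 255.0 >= edge_ratio:
                candidates.append((r, c))
    return candidates

def match_window(reference, target, r, c, win=32, search=8):
    """Area-based matching of one pre-qualified window using the normalized
    cross-correlation coefficient (cv2.TM_CCOEFF_NORMED)."""
    templ = reference[r:r + win, c:c + win]
    r0, c0 = max(r - search, 0), max(c - search, 0)
    region = target[r0:r0 + win + 2 * search, c0:c0 + win + 2 * search]
    score = cv2.matchTemplate(region, templ, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(score)
    return (r0 + max_loc[1], c0 + max_loc[0]), max_val   # (row, col), NCC
```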

Co-registration Between PAN and MS Bands Using Sensor Modeling and Image Matching (센서모델링과 영상매칭을 통한 PAN과 MS 밴드간 상호좌표등록)

  • Lee, Chang No; Oh, Jae Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.1 / pp.13-21 / 2021
  • High-resolution satellites such as Kompsat-3 and CAS-500 carry optical cameras whose MS (Multispectral) and PAN (Panchromatic) CCD (Charge Coupled Device) sensors are installed with certain offsets. These offsets between the CCD sensors produce a geometric discrepancy between the MS and PAN images because a ground target is imaged at slightly different times by the MS and PAN sensors. For a precise pan-sharpening process, we propose a co-registration process consisting of physical sensor modeling and image matching. The physical sensor model enables the initial co-registration, and image matching is then carried out for further refinement. An experiment with Kompsat-3 images showed a geometric discrepancy between the MS and PAN images at the level of 0.2 pixels RMSE (Root Mean Square Error).
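The physical sensor model itself is satellite-specific and is not reproduced here; the sketch below only illustrates the image-matching refinement step, assuming the sensor model has already brought an MS band and the PAN image to near alignment. The function name `refine_ms_to_pan` and the search radius are assumptions for the example.

```python
import cv2
import numpy as np

def refine_ms_to_pan(pan, ms_band, max_shift=4):
    """Resample one MS band to the PAN grid and estimate the residual
    (row, col) offset with normalized cross-correlation. The preceding
    sensor-model-based co-registration is assumed, not implemented."""
    ms_up = cv2.resize(ms_band, (pan.shape[1], pan.shape[0]),
                       interpolation=cv2.INTER_CUBIC)
    m = max_shift
    templ = pan[m:-m, m:-m].astype(np.float32)     # PAN interior as template
    score = cv2.matchTemplate(ms_up.astype(np.float32), templ,
                              cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(score)
    # score is (2m+1) x (2m+1); its centre corresponds to zero residual offset
    return max_loc[1] - m, max_loc[0] - m          # MS offset in PAN pixels
```

Reaching the 0.2-pixel level reported above would additionally require a sub-pixel estimate, for instance by fitting a surface around the correlation peak.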

Analysis of Co-registration Performance According to Geometric Processing Level of KOMPSAT-3/3A Reference Image (KOMPSAT-3/3A 기준영상의 기하품질에 따른 상호좌표등록 결과 분석)

  • Yun, Yerin; Kim, Taeheon; Oh, Jaehong; Han, Youkyung
    • Korean Journal of Remote Sensing / v.37 no.2 / pp.221-232 / 2021
  • This study analyzed co-registration results according to the geometric processing level of the reference image, namely the Level 1R and Level 1G products of KOMPSAT-3 and KOMPSAT-3A. We performed co-registration using each of the Level 1R and Level 1G images as the reference image and a Level 1R image as the sensed image. For the experimental dataset, seven Level 1R and 1G images of KOMPSAT-3 and KOMPSAT-3A acquired over Daejeon, South Korea, were used. To coarsely align the geometric positions of the two images, the SURF (Speeded-Up Robust Features) and PC (Phase Correlation) methods were combined and applied repeatedly to the overlapping region of the images. We then extracted tie-points from the coarsely aligned images using the SURF method and performed fine co-registration through an affine transformation and a piecewise linear transformation, respectively, constructed with the tie-points. As a result, when the Level 1G image was used as the reference image, relatively more tie-points were extracted than with the Level 1R image. Also, with a Level 1G reference image, the root mean square error of co-registration was on average 5 pixels smaller than in the Level 1R case. The experimental results show that co-registration performance can be affected by the geometric processing level, which determines the initial geometric relationship between the two images, and confirm that a reference image of better geometric quality leads to more stable co-registration.
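Two of the steps described above lend themselves to a short sketch: the phase-correlation estimate used in coarse alignment and a least-squares affine transformation built from tie-points. SURF extraction and the piecewise linear warp are omitted, and the function names are illustrative.

```python
import numpy as np

def phase_correlation(reference, target):
    """Translation of `target` relative to `reference` from the peak of the
    inverse FFT of the normalized cross-power spectrum."""
    F1, F2 = np.fft.fft2(reference), np.fft.fft2(target)
    cross_power = np.conj(F1) * F2
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    dr, dc = np.unravel_index(np.argmax(corr), corr.shape)
    # peaks past the midpoint correspond to negative shifts (FFT wrap-around)
    if dr > reference.shape[0] // 2:
        dr -= reference.shape[0]
    if dc > reference.shape[1] // 2:
        dc -= reference.shape[1]
    return dr, dc

def fit_affine(src, dst):
    """Least-squares affine transform mapping tie-points src -> dst,
    both given as (N, 2) arrays of (row, col) coordinates."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])      # N x 3 design matrix
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)  # 3 x 2 coefficients
    return params                                     # dst ~ [src, 1] @ params
```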

Automated Satellite Image Co-Registration using Pre-Qualified Area Matching and Studentized Outlier Detection (사전검수영역기반정합법과 't-분포 과대오차검출법'을 이용한 위성영상의 '자동 영상좌표 상호등록')

  • Kim, Jong Hong; Heo, Joon; Sohn, Hong Gyoo
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.4D / pp.687-693 / 2006
  • Image co-registration is the process of overlaying two images of the same scene, one of which serves as the reference image while the other is geometrically transformed to match it. To improve the efficiency and effectiveness of co-registration, the authors propose a pre-qualified area matching algorithm composed of feature extraction with the Canny operator and area matching based on the cross-correlation coefficient. For refining the matching points, outlier detection using studentized residuals was applied, iteratively removing outliers at the level of three standard deviations. Through the pre-qualification and refinement processes, the computation time was significantly reduced and the registration accuracy was enhanced. A prototype of the proposed algorithm was implemented, and a performance test on 3 Landsat images of Korea showed that (1) the average RMSE of the approach was 0.435 pixels, (2) the average number of matching points was over 25,573, and (3) the average processing time was 4.2 minutes per image on a regular workstation equipped with a 3 GHz Intel Pentium 4 CPU and 1 GB of RAM. The proposed approach achieved robustness, full automation, and time efficiency.
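A minimal sketch of the iterative outlier rejection described above, using internally standardized residuals from an affine fit as a simplified stand-in for the paper's studentized-residual (t-distribution) test; the 3-sigma threshold comes from the abstract, the rest is assumed for the example.

```python
import numpy as np

def iterative_outlier_removal(src, dst, threshold=3.0, max_iter=20):
    """Fit an affine transform to tie-points, drop points whose standardized
    residual exceeds `threshold` standard deviations, and repeat until no
    more points are removed."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    keep = np.ones(len(src), dtype=bool)
    for _ in range(max_iter):
        A = np.hstack([src[keep], np.ones((keep.sum(), 1))])
        P, *_ = np.linalg.lstsq(A, dst[keep], rcond=None)
        res = np.linalg.norm(
            np.hstack([src, np.ones((len(src), 1))]) @ P - dst, axis=1)
        sigma = res[keep].std()
        if sigma == 0:
            break
        new_keep = keep & (res <= threshold * sigma)
        if new_keep.sum() == keep.sum() or new_keep.sum() < 4:
            break
        keep = new_keep
    return keep, P          # inlier mask and final affine coefficients
```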

Automatic Co-registration of Cloud-covered High-resolution Multi-temporal Imagery (구름이 포함된 고해상도 다시기 위성영상의 자동 상호등록)

  • Han, You Kyung; Kim, Yong Il; Lee, Won Hee
    • Journal of Korean Society for Geospatial Information Science / v.21 no.4 / pp.101-107 / 2013
  • Commercial high-resolution images generally come with their own coordinates, but the locations differ locally according to the pose of the sensors at acquisition time and the relief displacement of the terrain. Therefore, image co-registration must be applied before multi-temporal images can be used together. However, co-registration is hindered when images include cloud-covered regions because of the difficulty of extracting matching points and the large number of false matches. This paper proposes an automatic co-registration method for cloud-covered high-resolution images. The scale-invariant feature transform (SIFT), one of the representative feature-based matching methods, is used, and only features of the target (cloud-covered) image that lie within a circular buffer around each feature of the reference image are used as candidates in the matching process. Study sites composed of multi-temporal KOMPSAT-2 images including cloud-covered regions were used to apply the proposed algorithm. The results showed that the proposed method achieved a higher correct-match rate than the original SIFT method and acceptable registration accuracy at all sites.
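A sketch of the circular-buffer idea, assuming the two images already share approximate coordinates so that a keypoint's pixel position in one image is roughly valid in the other; it uses OpenCV's SIFT plus a plain nearest-neighbour test, and the buffer radius, ratio threshold, and function name are illustrative, not the paper's settings.

```python
import cv2
import numpy as np

def buffered_sift_matching(reference, target, radius=50.0, ratio=0.8):
    """Match SIFT features, but for each reference keypoint consider only
    target keypoints inside a circular buffer of `radius` pixels around the
    same image position; distant cloud-induced features never become
    candidates, which cuts down false matches."""
    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(reference, None)   # 8-bit grayscale
    kp_t, des_t = sift.detectAndCompute(target, None)
    if des_r is None or des_t is None:
        return []
    pts_t = np.array([k.pt for k in kp_t])
    matches = []
    for i, k in enumerate(kp_r):
        cand = np.where(np.linalg.norm(pts_t - k.pt, axis=1) <= radius)[0]
        if len(cand) < 2:
            continue
        d = np.linalg.norm(des_t[cand] - des_r[i], axis=1)
        best, second = np.argsort(d)[:2]
        if d[best] < ratio * d[second]:                    # Lowe's ratio test
            matches.append((k.pt, tuple(pts_t[cand[best]])))
    return matches
```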

Topographic Mapping using SAR Interferometry Method (레이다 간섭기법(SAR Interferometry)을 이용한 지형도 제작)

  • Jeong, Do-Chan; Kim, Byung-Guk
    • Proceedings of the Korea Spatial Information System Society Conference (한국공간정보시스템학회 학술대회논문집) / 2000.06a / pp.67-76 / 2000
  • Recently, SAR interferometry has been actively studied as a new technique for topographic mapping with satellite imagery. It extracts height values from two SAR images covering the same area. Unlike SPOT imagery, it is not affected by atmospheric conditions or acquisition time; however, radar images are difficult to process, and the height accuracy is very low where relief displacements are large. In this study, we produced a DEM (Digital Elevation Model) using ERS-1 and ERS-2 tandem data and analyzed the height accuracy over 14 ground control points. The mean height error was 14.06 m. With airborne SAR data, however, a more accurate DEM is expected, one that could be used to update 1/10,000 or 1/25,000 maps.

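For context, the sketch below shows the standard first-order conversion from unwrapped, flattened interferometric phase to relative height, together with the kind of error statistics computed over ground control points; the ERS-like parameter values are purely illustrative assumptions, not those of the study.

```python
import numpy as np

def phase_to_height(unwrapped_phase, wavelength=0.0566, slant_range=850e3,
                    incidence_deg=23.0, baseline_perp=100.0):
    """Relative height from unwrapped, flattened phase using the first-order
    sensitivity h = (lambda * R * sin(theta) / (4 * pi * B_perp)) * phi.
    Parameter values here are ERS-like placeholders."""
    theta = np.deg2rad(incidence_deg)
    return (wavelength * slant_range * np.sin(theta)
            / (4.0 * np.pi * baseline_perp)) * unwrapped_phase

def height_error_stats(dem_heights, gcp_heights):
    """Mean and RMS height error over ground control points, the kind of
    check reported above (14 GCPs, mean error 14.06 m)."""
    diff = np.asarray(dem_heights, float) - np.asarray(gcp_heights, float)
    return diff.mean(), np.sqrt((diff ** 2).mean())
```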

Geocoding of the Free Stereo Mosaic Image Generated from Video Sequences (비디오 프레임 영상으로부터 제작된 자유 입체 모자이크 영상의 실좌표 등록)

  • Noh, Myoung-Jong; Cho, Woo-Sug; Park, Jun-Ku; Kim, Jung-Sub; Koh, Jin-Woo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.29 no.3 / pp.249-255 / 2011
  • A free-stereo mosaic image can be generated without GPS/INS or ground control data by using relative orientation parameters in a 3D model coordinate system whose origin is located in one reference frame image. A 3D coordinate calculated from conjugate points on the free-stereo mosaic images is therefore expressed in the 3D model coordinate system. To determine 3D coordinates in an absolute coordinate system from conjugate points on the free-stereo mosaic images, a method is required for transforming 3D model coordinates into 3D absolute coordinates. Generally, the 3D similarity transformation is used to convert between 3D coordinate systems. However, the error of the 3D model coordinates used in the free-stereo mosaic images increases non-linearly with distance from the origin, so these coordinates are difficult to transform into 3D absolute coordinates with a linear transformation. A methodology for transforming the non-linear 3D model coordinates into 3D absolute coordinates is therefore needed, along with a methodology for resampling the free-stereo mosaic image into a geo-stereo mosaic image so that a digital map in absolute coordinates can be overlaid on the stereo mosaic images. In this paper, we propose a 3D non-linear transformation that converts 3D model coordinates of the free-stereo mosaic image into 3D absolute coordinates, and a 2D non-linear transformation, based on the 3D one, that converts the free-stereo mosaic image into a geo-stereo mosaic image.
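To make the coordinate-transformation discussion concrete, the sketch below contrasts the linear 7-parameter 3D similarity transform with a simple polynomial mapping fitted by least squares; the polynomial is only an illustrative stand-in for the non-linear transformation the paper proposes, whose actual form is not given in the abstract.

```python
import numpy as np

def similarity_3d(points, scale, R, t):
    """Linear 7-parameter 3D similarity (Helmert) transform X' = s*R*X + t.
    `points` is (N, 3); R is a 3x3 rotation matrix, t a 3-vector."""
    P = np.asarray(points, float)
    return scale * P @ np.asarray(R, float).T + np.asarray(t, float)

def fit_polynomial_3d(model_pts, abs_pts, order=2):
    """Least-squares polynomial mapping from model to absolute coordinates,
    one set of coefficients per output axis; an illustrative stand-in for
    the paper's non-linear transformation, not its actual formulation."""
    model_pts = np.asarray(model_pts, float)
    abs_pts = np.asarray(abs_pts, float)
    x, y, z = model_pts[:, 0], model_pts[:, 1], model_pts[:, 2]
    cols = [np.ones_like(x), x, y, z]
    if order >= 2:
        cols += [x * x, y * y, z * z, x * y, y * z, x * z]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, abs_pts, rcond=None)
    return coeffs            # (n_terms, 3): one column per X/Y/Z output
```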

Automated Image Co-registration Using Pre-qualified Area Based Matching Technique (사전검수 영역기반 정합법을 활용한 영상좌표 상호등록)

  • Kim Jong-Hong; Heo Joon; Sohn Hong-Gyoo
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2006.04a / pp.181-185 / 2006
  • Image co-registration is the process of overlaying two images of the same scene, one of which serves as the reference image while the other is geometrically transformed to match it. To improve the efficiency and effectiveness of co-registration, the authors propose a pre-qualified area matching algorithm composed of feature extraction with the Canny operator and area matching based on the cross-correlation coefficient. For refining the matching points, outlier detection using studentized residuals was applied, iteratively removing outliers at the level of three standard deviations. Through the pre-qualification and refinement processes, the computation time was significantly reduced and the registration accuracy was enhanced. A prototype of the proposed algorithm was implemented, and a performance test on 3 Landsat images of Korea showed that (1) the average RMSE of the approach was 0.436 pixels, (2) the average number of matching points was over 38,475, and (3) the average processing time was 489 seconds per image on a regular workstation equipped with a 3 GHz Intel Pentium 4 CPU and 1 GB of RAM. The proposed approach achieved robustness, full automation, and time efficiency.
