• Title/Summary/Keyword: 정합기준 (matching criterion)


Accuracy Analysis According to the Number of GCP Matching (지상기준점 정합수에 따른 정확도 분석)

  • LEE, Seung-Ung;MUN, Du-Yeoul;SEONG, Woo-Kyung;KIM, Jae-Woo
    • Journal of the Korean Association of Geographic Information Studies / v.21 no.3 / pp.127-137 / 2018
  • Recently, UAVs (drones) have been used for a variety of applications. In the field of surveying in particular, studies have examined techniques for monitoring terrain from the high-resolution image data obtained with a UAV-mounted digital camera or other sensors, and for generating high-resolution orthoimages, DSMs, and DEMs. In this study, we analyzed the accuracy obtained according to the number of GCPs (ground control points) used, based on UAV imagery and VRS-GPS. The ground control points were first surveyed with VRS-GPS, and images were then captured at a flight altitude of 150 m with the UAV. A DSM and orthoimages were generated from the 646 images, and the RMSE was analyzed using Pix4D Mapper. As a result, with five or more GCPs the error remains within the tolerance of the national base map (scale 1:5,000) production work regulations, and it is judged that the results can be fully utilized for digital map revision and updating work.
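As an illustration of the kind of accuracy check this abstract describes, the sketch below computes per-axis RMSE of model-derived coordinates against surveyed check points. The coordinate values are hypothetical; the paper's actual figures are not reproduced here.

```python
import numpy as np

def rmse_per_axis(surveyed_xyz, model_xyz):
    """RMSE of model-derived coordinates against surveyed check-point
    coordinates, computed separately for X, Y, Z (arrays are N x 3, metres)."""
    diff = np.asarray(model_xyz, float) - np.asarray(surveyed_xyz, float)
    return np.sqrt(np.mean(diff ** 2, axis=0))

# Hypothetical example: 3 check points surveyed with VRS-GPS vs. values read
# from the orthoimage/DSM produced by the photogrammetric software.
surveyed = [[201100.32, 302455.10, 53.21],
            [201340.77, 302601.58, 48.90],
            [201512.04, 302380.42, 51.05]]
modelled = [[201100.36, 302455.03, 53.30],
            [201340.70, 302601.66, 48.84],
            [201512.10, 302380.35, 51.12]]
rmse_x, rmse_y, rmse_z = rmse_per_axis(surveyed, modelled)
print(f"RMSE X={rmse_x:.3f} m, Y={rmse_y:.3f} m, Z={rmse_z:.3f} m")
```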

Two-Stage Fast Full Search Algorithm for Block Motion Estimation (블록 움직임 추정을 위한 2단계 고속 전역 탐색 알고리듬)

  • 정원식;이법기;이경환;최정현;김경규;김덕규;이건일
    • The Journal of Korean Institute of Communications and Information Sciences / v.24 no.9A / pp.1392-1400 / 1999
  • In this paper, we propose a two-stage fast full search algorithm for block motion estimation that achieves the same performance as the full search algorithm (FSA) with a remarkable reduction in computation. The proposed algorithm uses search-region subsampling and the differences of adjacent pixels in the current block. In the first stage, we subsample the search region by a factor of 9 and calculate the mean absolute error (MAE) at the subsampled search points. In the second stage, we reduce the number of search points that require the block-matching process by using a lower bound on the MAE value at each search point. This lower bound is set for each search point from the MAE values calculated in the first stage and the differences of adjacent pixels in the current block. The experimental results show that the computational complexity can be reduced considerably without any degradation of picture quality.

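A minimal NumPy sketch of the two-stage idea described in this abstract: the first stage evaluates the MAE on a search grid subsampled by a factor of 9, and the second stage visits the remaining positions with a pruned MAE. The paper's exact lower bound is built from adjacent-pixel differences of the current block; here a simpler early termination of the running MAE sum plays the same pruning role.

```python
import numpy as np

def mae(block, cand, best_so_far=np.inf):
    """Mean absolute error with early termination once the running sum can no
    longer beat the current best (a stand-in for the paper's lower-bound test)."""
    n = block.size
    limit = best_so_far * n
    acc = 0.0
    for row_b, row_c in zip(block, cand):
        acc += np.abs(row_b.astype(float) - row_c.astype(float)).sum()
        if acc >= limit:
            return np.inf          # pruned: cannot improve on best_so_far
    return acc / n

def two_stage_search(cur_block, ref, top, left, radius=7):
    """Stage 1: MAE on a search grid subsampled by 3 in each direction
    (a factor-of-9 reduction). Stage 2: remaining positions with pruned MAE."""
    h, w = cur_block.shape
    best_mv, best = (0, 0), np.inf
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
                        for dx in range(-radius, radius + 1)]
    coarse = [o for o in offsets if o[0] % 3 == 0 and o[1] % 3 == 0]
    fine = [o for o in offsets if o not in coarse]
    for stage in (coarse, fine):
        for dy, dx in stage:
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue
            err = mae(cur_block, ref[y:y + h, x:x + w], best)
            if err < best:
                best, best_mv = err, (dy, dx)
    return best_mv, best
```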

Block Matching Motion Estimation Using Fast Search Algorithm (고속 탐색 알고리즘을 이용한 블록정합 움직임 추정)

  • 오태명
    • Journal of the Korean Institute of Telematics and Electronics T / v.36T no.3 / pp.32-40 / 1999
  • In this paper, we present a fast block-matching motion estimation algorithm based on the successive elimination algorithm (SEA). Exploiting the center-biased distribution of motion vectors in the search area, the proposed method improves on the SEA by reducing the number of search positions examined. In addition, to reduce the computational load, the method is combined with the reduced-bit mean absolute difference (RBMAD) matching criterion, which lowers the cost of pixel comparisons in block matching, and with a pixel decimation technique, which reduces the number of pixels used in block matching. Simulation results show that the proposed method provides better performance than existing fast algorithms and performance similar to that of the full-search block motion estimation algorithm.

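The core SEA test this abstract builds on can be sketched as follows; the RBMAD criterion and pixel decimation are omitted, and a center-biased scan order stands in for the paper's reduced set of search positions.

```python
import numpy as np

def sea_block_match(cur_block, ref, top, left, radius=7):
    """Successive elimination: a candidate is skipped whenever
    |sum(current block) - sum(candidate window)| already exceeds the best SAD
    found so far, since that difference is a lower bound on the SAD."""
    h, w = cur_block.shape
    cur = cur_block.astype(np.int64)
    cur_sum = cur.sum()
    best_mv, best_sad = (0, 0), np.inf
    # Center-biased scan order, so the best SAD tightens early and more
    # candidates are eliminated without a full SAD computation.
    offsets = sorted(((dy, dx) for dy in range(-radius, radius + 1)
                              for dx in range(-radius, radius + 1)),
                     key=lambda o: abs(o[0]) + abs(o[1]))
    for dy, dx in offsets:
        y, x = top + dy, left + dx
        if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
            continue
        win = ref[y:y + h, x:x + w].astype(np.int64)
        if abs(cur_sum - win.sum()) >= best_sad:
            continue                       # eliminated by the SEA bound
        sad = np.abs(cur - win).sum()
        if sad < best_sad:
            best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```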

Automated Geometric Correction of Geostationary Weather Satellite Images (정지궤도 기상위성의 자동기하보정)

  • Kim, Hyun-Suk;Hur, Dong-Seok;Rhee, Soo-Ahm;Kim, Tae-Jung
    • Proceedings of the KSRS Conference / 2007.03a / pp.70-75 / 2007
  • Korea's first Communications, Oceanography and Meteorology Satellite (COMS) is scheduled for launch in December 2008. The following work was carried out for geometric correction of COMS image data. A weather satellite in geostationary orbit acquires full-disk images of the Earth, in which coastlines are often obscured by clouds and therefore cannot provide clear reference information; cloud detection is performed to obtain coastline information that is not contaminated by clouds. Because the satellite delivers weather information in real time, matching over the entire image would take too long. To extract candidate points for matching, 211 landmark chips were constructed from the GSHHS (Global Self-consistent Hierarchical High-resolution Shoreline) coastline database; the chips were built to reflect the position of GOES-9, at 155 degrees east longitude, which was used in the experiments. Cloud detection is performed around the locations of the constructed landmark chips in the full image, and matching is carried out only for the candidate points that remain after cloudy ones are removed. Because both true matches and mismatches occur between the landmark chips and the satellite image, a robust estimation technique (RANSAC, Random Sample Consensus) is used to detect mismatches automatically, and final geometric correction is performed with the matching results from which mismatches have been removed. The sensor model for geometric correction was developed considering the sensor characteristics of the GOES-9 satellite. A precise sensor model was established from the control points obtained through matching and RANSAC, and geometric correction was carried out. The whole processing chain was optimized for speed to meet the real-time processing requirements of COMS.

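The abstract above does not spell out the matching criterion used between landmark chips and image sub-windows; the sketch below assumes normalized cross-correlation and a binary cloud mask, both as illustrative stand-ins.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally-sized patches."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def match_landmark_chip(image, chip, center_rc, search=16, cloud_mask=None):
    """Search a window around the chip's predicted position; positions whose
    window overlaps cloud pixels are skipped, since only cloud-free coastline
    should drive the match. Returns (row, col, score) of the best NCC."""
    ch, cw = chip.shape
    r0, c0 = center_rc
    best = (None, None, -1.0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr - ch // 2, c0 + dc - cw // 2
            if r < 0 or c < 0 or r + ch > image.shape[0] or c + cw > image.shape[1]:
                continue
            if cloud_mask is not None and cloud_mask[r:r + ch, c:c + cw].any():
                continue
            score = ncc(image[r:r + ch, c:c + cw], chip)
            if score > best[2]:
                best = (r + ch // 2, c + cw // 2, score)
    return best
```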

A Study on the Stereo Image Matching using MRF model and segmented image (MRF 모델과 분할 영상을 이용한 영상정합에 관한 연구)

  • 변영기;한동엽;김용일
    • Proceedings of the Korean Association of Geographic Information Studies Conference / 2004.03a / pp.511-516 / 2004
  • To construct spatial image products such as digital elevation models and orthoimages, image matching of stereo imagery is an essential step, and reconstructing and restoring the 3D information of objects from single or stereo images is one of the main research topics in photogrammetry and computer vision. In this study, image matching was performed using an MRF model that considers both the similarity of pixel values and their mutual relationships. The MRF model is a branch of probability theory for spatial analysis and for analyzing the contextual dependencies of physical phenomena, and it provides a way to integrate various kinds of spatial information. We propose an image matching technique based on a Markov random field (MRF) model, a type of probabilistic model, in which a disparity is assigned to each pixel of the reference image; because the mutual relationships of pixels in space are taken into account, matching accuracy at object boundaries is improved. The basic MRF assumption in the image matching problem is that the disparity of a given pixel can be determined from partial information, namely the disparities of its neighboring pixels. A posterior probability is derived using the Gibbs distribution, and an energy function is formed via maximum a posteriori (MAP) estimation. To optimize the resulting energy function, the global optimization technique known as the multiway cut method was used to find the disparity labels that minimize the energy, and image matching was performed accordingly.

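A rough sketch of the MAP/Gibbs formulation described in this abstract, with an absolute-difference data term and a Potts smoothness prior; iterated conditional modes (ICM) is used here as a simple stand-in for the multiway-cut optimizer employed in the paper.

```python
import numpy as np

def mrf_stereo_icm(left, right, max_disp=16, lam=2.0, sweeps=5):
    """Disparity by minimising a Gibbs energy
        E(d) = sum_p |L(p) - R(p - d_p)| + lam * sum_{p~q} [d_p != d_q]
    (absolute-difference data term plus a Potts smoothness prior)."""
    h, w = left.shape
    L, R = left.astype(float), right.astype(float)
    # Data cost volume: cost[d, y, x] = |L(y, x) - R(y, x - d)|
    cost = np.full((max_disp + 1, h, w), 255.0)
    for d in range(max_disp + 1):
        cost[d, :, d:] = np.abs(L[:, d:] - R[:, :w - d])
    disp = cost.argmin(axis=0)               # initial winner-take-all estimate
    for _ in range(sweeps):                   # ICM sweeps over all pixels
        for y in range(h):
            for x in range(w):
                nbrs = [disp[yy, xx] for yy, xx in
                        ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= yy < h and 0 <= xx < w]
                energies = [cost[d, y, x] + lam * sum(d != n for n in nbrs)
                            for d in range(max_disp + 1)]
                disp[y, x] = int(np.argmin(energies))
    return disp
```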

A study high speed remote sensing image registration using deep learning-based keypoints filtering (딥러닝 기반 특징점 필터링을 이용한 원격 탐사 영상 정합 고속화 연구)

  • Lee, Wooju;Sim, Donggyu;Oh, Seoung-jun
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.97-99 / 2021
  • In this paper, we propose a method for speeding up image registration of remote sensing images using deep-learning-based keypoint filtering. The complexity of conventional feature-based image registration arises in the feature matching stage. To reduce this complexity, having observed that feature matches tend to occur at keypoints detected on man-made structures, we propose filtering the detected keypoints so that only those on man-made structures are kept. To reduce the number of keypoints without losing those essential for registration, the deep-learning-based filtering preserves keypoints adjacent to the boundaries of man-made structures, uses a downscaled image, and crops overlapping image patches to remove the noise at patch boundaries that arises from the image segmentation results, thereby improving both registration speed and accuracy. To verify the performance of the proposed speed-up method, its speed and accuracy were compared with an existing keypoint extraction method using KOMPSAT-3 (Arirang-3) satellite remote sensing images. Compared against the deep-learning-based image registration baseline, the number of keypoints was reduced by about 82% and the speed improved by about 9.17 times, while the accuracy dropped from 0.985 to 0.855.

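The filtering step described in this abstract can be sketched roughly as follows, assuming the segmentation network's output is already available as a binary mask of man-made structures; the network itself, the downscaling, and the overlapping-patch handling are outside this sketch.

```python
import numpy as np

def filter_keypoints_by_mask(keypoints, structure_mask, dilate=2):
    """Keep only keypoints that fall on (or within `dilate` pixels of) the
    man-made-structure mask produced by a segmentation model. The mask here
    is just a binary NumPy array; how it is produced is not shown."""
    h, w = structure_mask.shape
    kept = []
    for (x, y) in keypoints:                 # keypoints as (col, row) floats
        r, c = int(round(y)), int(round(x))
        r0, r1 = max(0, r - dilate), min(h, r + dilate + 1)
        c0, c1 = max(0, c - dilate), min(w, c + dilate + 1)
        if structure_mask[r0:r1, c0:c1].any():
            kept.append((x, y))
    return kept
```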

An Analysis of Similarity Measures for Area-based Multi-Image Matching (다중영상 영역기반 영상정합을 위한 유사성 측정방법 분석)

  • Noh, Myoung-Jong;Kim, Jung-Sub;Cho, Woo-Sug
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.2 / pp.143-152 / 2012
  • It is well known that image matching is necessary for the automatic generation of 3D data, such as digital surface data, from aerial images. Recently developed aerial digital cameras make it possible to capture multi-strip images with higher overlaps and fewer occluded areas than conventional analogue cameras, and much research on multi-image matching has therefore been performed, particularly on effective methods of measuring similarity among multiple images using point features as well as linear features. This research investigates similarity measures such as SSD and SNCC incorporated into an area-based multi-image matching method based on the vertical line locus. In doing so, different similarity-measuring entities, namely the grey value, the grey-value gradient, and the average of the grey value and its gradient, are implemented and analyzed. Further, both dynamic and pre-fixed adaptive window sizes are tested and their behavior in measuring similarity among multiple images is analyzed. The aerial images used in the experiments were taken by a DMC aerial frame camera in three strips, with an overlap of about 80% and a side-lap of about 60%. The experiments showed that SNCC as the similarity measure, the average of the grey value and its gradient as the similarity-measuring entity, and a dynamic adaptive window size are best suited to area-based similarity measurement in an area-based multi-image matching method based on the vertical line locus.
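The similarity measures compared in this abstract can be illustrated as below; the "SNCC" here is read as the averaged NCC of a reference patch against the corresponding patches of the other overlapping images, which is an assumption about the exact definition used in the paper.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equally-sized patches."""
    d = a.astype(float) - b.astype(float)
    return float((d * d).sum())

def ncc(a, b):
    """Normalized cross-correlation between two equally-sized patches."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    den = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if den == 0 else float((a * b).sum() / den)

def sncc(ref_patch, other_patches):
    """Averaged NCC of a reference patch against the corresponding patches in
    the other overlapping images (one reading of 'SNCC' in a multi-image setting)."""
    scores = [ncc(ref_patch, p) for p in other_patches]
    return sum(scores) / len(scores)

def gradient_entity(patch):
    """Grey-value gradient magnitude, one of the similarity-measuring entities
    compared in the paper (alongside the grey value and their average)."""
    gy, gx = np.gradient(patch.astype(float))
    return np.hypot(gx, gy)
```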

Automated Geometric Correction of Geostationary Weather Satellite Images (정지궤도 기상위성의 자동기하보정)

  • Kim, Hyun-Suk;Lee, Tae-Yoon;Hur, Dong-Seok;Rhee, Soo-Ahm;Kim, Tae-Jung
    • Korean Journal of Remote Sensing / v.23 no.4 / pp.297-309 / 2007
  • The first Korean geostationary weather satellite, the Communications, Oceanography and Meteorology Satellite (COMS), will be launched in 2008. The ground station for COMS needs to perform geometric correction to improve the accuracy of the satellite image data and to broadcast geometrically corrected images to users within 30 minutes after image acquisition. For such a requirement, we developed automated and fast geometric correction techniques. To this end, we generated control points automatically by matching images against coastline data and applying a robust estimation technique called RANSAC. We used the GSHHS (Global Self-consistent Hierarchical High-resolution Shoreline) database to construct 211 landmark chips. We detected clouds within the images and applied matching only to cloud-free sub-images. When matching visible channels, we selected sub-images located in daytime. We tested the algorithm with GOES-9 images. Control points were generated by matching channel 1 and channel 2 images of GOES against the 211 landmark chips. RANSAC correctly prevented outliers from being selected as control points. The accuracy of the sensor models established using the automated control points was in the range of 1~2 pixels. Geometric correction was performed, and the performance was visually inspected by projecting the coastline onto the geometrically corrected images. The total processing time for matching, RANSAC, and geometric correction was around 4 minutes.
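The outlier-rejection step described in this abstract can be sketched as follows; a 2D affine transform is used as an illustrative stand-in for the physical GOES-9 sensor model, and the threshold and iteration count are arbitrary.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst (both N x 2)."""
    A = np.hstack([src, np.ones((len(src), 1))])
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coef                                   # 3 x 2 coefficient matrix

def ransac_control_points(src, dst, thresh=1.5, iters=500, seed=0):
    """Reject mismatched landmark points: repeatedly fit a model to a minimal
    random sample and keep the largest set of points whose residual is below
    `thresh` pixels, then refit on the inliers."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(iters):
        sample = rng.choice(len(src), size=3, replace=False)
        coef = fit_affine(src[sample], dst[sample])
        pred = np.hstack([src, np.ones((len(src), 1))]) @ coef
        inliers = np.linalg.norm(pred - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    final = fit_affine(src[best_inliers], dst[best_inliers])
    return final, best_inliers
```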

Image Mosaicing using Voronoi Distance Matching (보로노이 거리(Voronoi Distance)정합을 이용한 영상 모자익)

  • 이칠우;정민영;배기태;이동휘
    • Journal of Korea Multimedia Society / v.6 no.7 / pp.1178-1188 / 2003
  • In this paper, we describe image mosaicing techniques for constructing a large high-resolution image from images taken with a hand-held video camera. We propose a method that automatically retrieves the exact matching area using color and shape information. The proposed method first extracts candidate areas of similar shape using a Voronoi distance matching method, which rapidly estimates the corresponding points between adjacent images and calculates their initial transformations, and then finds the final matching area using color information. The method creates a Voronoi surface in which the value at each point is the distance to the nearest feature point of the image, and extracts the corresponding points that minimize the Voronoi distance in the matching area between an input image and a base image using a binary search method. Using the Levenberg-Marquardt method, the initial transformation matrix is refined into an optimal transformation matrix, which is then used to combine the base image with the input image.

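A rough sketch of the Voronoi-surface idea described in this abstract, assuming SciPy's Euclidean distance transform is acceptable; the paper's binary search over candidates and the Levenberg-Marquardt refinement are replaced by a plain translation scan for brevity.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def voronoi_surface(shape, feature_points):
    """Distance map in which every pixel holds the distance to its nearest
    feature point: the 'Voronoi surface' the matching cost is read from."""
    mask = np.ones(shape, bool)
    for r, c in feature_points:
        mask[r, c] = False                 # zeros at feature points
    return distance_transform_edt(mask)

def matching_cost(surface, other_points, offset):
    """Average Voronoi distance of the other image's feature points after
    shifting them by `offset`; the best translation minimises this cost."""
    dy, dx = offset
    h, w = surface.shape
    dists = [surface[r + dy, c + dx] for r, c in other_points
             if 0 <= r + dy < h and 0 <= c + dx < w]
    return np.mean(dists) if dists else np.inf

def best_translation(surface, other_points, search=10):
    return min(((dy, dx) for dy in range(-search, search + 1)
                         for dx in range(-search, search + 1)),
               key=lambda off: matching_cost(surface, other_points, off))
```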

Multimodality Image Registration and Fusion using Feature Extraction (특징 추출을 이용한 다중 영상 정합 및 융합 연구)

  • Woo, Sang-Keun;Kim, Jee-Hyun
    • Journal of the Korea Society of Computer and Information / v.12 no.2 s.46 / pp.123-130 / 2007
  • The aim of this study was to propose a registration and fusion method for heterogeneous small-animal acquisition systems in small-animal in-vivo studies. After an intravenous injection of 18F-FDG through the tail vein and a 60-min uptake delay, the mouse was placed on an acrylic plate with fiducial markers designed for fusion between small-animal PET (microPET R4, Concorde Microsystems, Knoxville, TN) and Discovery LS CT images. The acquired emission list-mode data were sorted into temporally framed sinograms and reconstructed using FORE rebinning and the 2D-OSEM algorithm without attenuation or scatter correction. After PET imaging, CT images were acquired by means of a clinical PET/CT scanner in high-resolution mode. The microPET and CT images were fused and co-registered using the fiducial markers and the segmented lung region in both data sets to perform a point-based rigid co-registration. This method improves the quantitative accuracy and interpretation of the tracer.

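The point-based rigid co-registration mentioned in this abstract can be sketched with the standard SVD-based (Kabsch) solution for corresponding fiducial-marker coordinates; the marker coordinates themselves and the lung-region segmentation are not part of this sketch.

```python
import numpy as np

def rigid_registration(pet_points, ct_points):
    """Least-squares rigid transform (rotation R, translation t) that maps the
    fiducial-marker coordinates in the PET frame onto the matching markers in
    the CT frame. Points are N x 3 arrays in corresponding order."""
    P = np.asarray(pet_points, float)
    Q = np.asarray(ct_points, float)
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                      # proper rotation (det = +1)
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

# Residuals at the markers (a basic registration-quality check):
# R, t = rigid_registration(P, Q); np.linalg.norm((R @ P.T).T + t - Q, axis=1)
```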