• Title/Summary/Keyword: 영상 정합 (image registration)

Search Results: 1,426

Three-Dimensional Image Registration using a Locally Weighted-3D Distance Map (지역적 가중치 거리맵을 이용한 3차원 영상 정합)

  • Lee, Ho;Hong, Helen;Shin, Yeong-Gil
    • Journal of KIISE: Software and Applications / v.31 no.7 / pp.939-948 / 2004
  • In this paper, we propose a robust and fast image registration technique for motion correction in brain CT-CT angiography acquired from the same patient at different times. First, the feature points of the two images are extracted by a 3D edge detection technique, and those of the reference image are converted into a locally weighted 3D distance map. Second, we search for the optimal location where the cross-correlation of the two edge sets is maximized while the floating image is rigidly transformed toward the reference image. This optimal location is accepted when the maximum cross-correlation value no longer changes over a fixed number of iterations. Finally, the two images are registered by transforming the floating image to the optimal location. In the experiments, we evaluate accuracy and robustness using artificial images and perform visual inspection using clinical brain CT-CT angiography datasets. The results show that, by using the locally weighted 3D distance map, the two images can be registered robustly and rapidly at the optimal location without converging to a local maximum, even when only a small number of feature points is used.
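
The distance-map idea turns edge alignment into a cheap lookup: once distances to the reference edges are precomputed, every candidate pose is scored without re-searching for correspondences. Below is a minimal 2D, translation-only sketch of that idea in plain NumPy; the paper's 3D rigid search, local weighting, and correlation criterion are simplified here to a brute-force distance map and a mean-distance cost, so treat names and parameters as illustrative assumptions.

```python
import numpy as np

def distance_map(edges, shape):
    """Brute-force distance map: each pixel stores the distance to the
    nearest reference edge point (a stand-in for a weighted distance map)."""
    ys, xs = np.indices(shape)
    grid = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    d = np.min(np.linalg.norm(grid[:, None, :] - edges[None, :, :], axis=2), axis=1)
    return d.reshape(shape)

def register_translation(ref_edges, flt_edges, shape, search=3):
    """Find the integer shift of the floating edge set that minimises the
    summed distance-map cost (equivalent in spirit to maximising edge overlap)."""
    dmap = distance_map(ref_edges, shape)
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            pts = flt_edges + np.array([dy, dx])
            # keep only points that land inside the image
            ok = (pts[:, 0] >= 0) & (pts[:, 0] < shape[0]) & \
                 (pts[:, 1] >= 0) & (pts[:, 1] < shape[1])
            if not ok.any():
                continue
            cost = dmap[pts[ok, 0], pts[ok, 1]].mean()
            if cost < best:
                best, best_shift = cost, (dy, dx)
    return best_shift

ref = np.array([[5, 5], [5, 9], [9, 5], [9, 9]])   # reference edge points
flt = ref + np.array([1, -2])                      # floating image shifted by (1, -2)
print(register_translation(ref, flt, (16, 16)))    # recovered shift: (-1, 2)
```

Because the distance map is computed once, adding more candidate poses costs only array lookups, which is what makes this family of methods fast even with few feature points.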

Analysis of Uncertainties due to Digitally Reconstructed Radiographic (DRR) Image Quality in 2D-2D Matching between DRRs and kV X-ray Images from the On-Board Imager (OBI) (디지털 재구성 방사선영상과 온보드 영상장치를 이용한 2D-2D 정합 시 디지털 재구성 방사선영상의 질이 정합 정확도에 미치는 영향 분석)

  • Cheong Kwang-Ho;Cho Byung-Chul;Kang Sei-Kwon;Kim Kyoung-Joo;Bae Hoon-Sik;Suh Tae-Suk
    • Progress in Medical Physics / v.17 no.2 / pp.67-76 / 2006
  • We evaluated the accuracy of patient setup error correction with respect to reference image quality in a 2D-2D matching process. Digitally reconstructed radiographs (DRRs), generated with Pinnacle3 and Eclipse for various regions of a humanoid phantom and a patient at different CT slice thicknesses, were used as reference images, and kV X-ray images from the On-Board Imager were registered to them. Comparing the DRRs and their profiles, DRR image quality degraded as CT slice thickness increased. However, there were only slight differences in the evaluated setup errors between matches against good and poor reference DRRs. Although DRR image quality did not strongly affect 2D-2D matching accuracy, potential errors remain in the matching procedure; we therefore recommend generating DRR images from CT scans with a slice thickness of less than 3 mm for 2D-2D matching.


Automated Geometric Correction of Geostationary Weather Satellite Images (정지궤도 기상위성의 자동기하보정)

  • Kim, Hyun-Suk;Lee, Tae-Yoon;Hur, Dong-Seok;Rhee, Soo-Ahm;Kim, Tae-Jung
    • Korean Journal of Remote Sensing / v.23 no.4 / pp.297-309 / 2007
  • The first Korean geostationary weather satellite, the Communication, Ocean and Meteorological Satellite (COMS), will be launched in 2008. The COMS ground station needs to perform geometric correction to improve the accuracy of satellite image data and to broadcast geometrically corrected images to users within 30 minutes of image acquisition. To meet this requirement, we developed automated, fast geometric correction techniques. Control points are generated automatically by matching images against coastline data and applying a robust estimator, RANSAC. We used the GSHHS (Global Self-consistent Hierarchical High-resolution Shoreline) database to construct 211 landmark chips. Clouds within the images are detected, and matching is applied only to cloud-free sub-images; when matching visible channels, we select sub-images located in daytime. We tested the algorithm with GOES-9 images: control points were generated by matching GOES channel 1 and channel 2 images against the 211 landmark chips, and RANSAC correctly prevented outliers from being selected as control points. The accuracy of the sensor models established using the automated control points was in the range of 1-2 pixels. Geometric correction was performed, and its performance was visually inspected by projecting coastlines onto the corrected images. The total processing time for matching, RANSAC, and geometric correction was around 4 minutes.
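
The RANSAC step that keeps outliers out of the control-point set can be sketched with the simplest possible motion model. The code below is an illustrative translation-only RANSAC, not the paper's sensor-model estimation; the tolerance, iteration count, and point values are assumptions.

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, tol=2.0, seed=0):
    """RANSAC with a translation model: hypothesise a shift from one random
    correspondence, count inliers, keep the best hypothesis, refit on inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))
        shift = dst[i] - src[i]                       # model from a minimal sample
        resid = np.linalg.norm(dst - (src + shift), axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # least-squares refit of the shift on the consensus set
    shift = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return shift, best_inliers

src = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., 5.]])
dst = src + np.array([3., -1.])
dst[4] = [40., 40.]                                   # one gross mismatch
shift, inliers = ransac_translation(src, dst)
print(shift, inliers)                                 # shift [3. -1.], outlier rejected
```

The same consensus logic extends to the full sensor model by swapping the one-point shift hypothesis for a minimal-sample model fit.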

Feature Matching Algorithm Robust To Noise (잡음에 강인한 특징점 정합 기법)

  • Jung, Hyunjo;Yoo, Jisang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2015.07a / pp.9-12 / 2015
  • In this paper, we propose a new feature matching algorithm that modifies and combines the FAST (Features from Accelerated Segment Test) detector and the SURF descriptor, making it robust to image distortion. A scale space is generated to account for scale variation and to determine feature candidates that are robust to noise. The original FAST algorithm produces many feature points along edges; to suppress these, we apply a principal-curvature test to refine the candidates. We also use the SURF descriptor to obtain robustness against image rotation. Experiments show that the proposed algorithm outperforms conventional feature matching algorithms at a much lower computational load, and that it is especially strong on noisy images.
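
The segment test at the heart of the FAST detector can be shown directly. Below is a simplified FAST-9-style check on a synthetic image: a pixel is a corner if at least n contiguous pixels on a radius-3 Bresenham circle are all brighter or all darker than the centre by a threshold. There is no non-maximum suppression, scale space, or principal-curvature refinement here; the threshold and n are illustrative.

```python
import numpy as np

# The 16 offsets of a Bresenham circle of radius 3 (standard FAST layout).
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def is_fast_corner(img, y, x, t=20, n=9):
    """Segment test (FAST-9 style): corner if n contiguous circle pixels are
    all brighter than I+t or all darker than I-t; wrap-around via doubling."""
    p = int(img[y, x])
    ring = np.array([int(img[y + dy, x + dx]) for dy, dx in CIRCLE])
    for vals in (ring > p + t, ring < p - t):
        v = np.concatenate([vals, vals])              # handle circular wrap-around
        run = 0
        for b in v:
            run = run + 1 if b else 0
            if run >= n:
                return True
    return False

img = np.zeros((16, 16), dtype=np.uint8)
img[:8, :8] = 200                                     # bright square, corner at (7, 7)
print(is_fast_corner(img, 7, 7), is_fast_corner(img, 4, 4))  # True False
```

Because the test is a handful of comparisons per pixel, the detector stays cheap even before the machine-learned decision-tree speedups used in practice.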


Frame-level Matching for Near Duplicate Videos Using Binary Frame Descriptor (이진 프레임 기술자를 이용한 유사중복 동영상 프레임 단위 정합)

  • Kim, Kyung-Rae;Lee, Jun-Tae;Jang, Won-Dong;Kim, Chang-Su
    • Journal of Broadcast Engineering / v.20 no.4 / pp.641-644 / 2015
  • In this paper, we propose a precise frame-level near-duplicate video matching algorithm. First, a binary frame descriptor for near-duplicate video matching is proposed: it divides a frame into patches and represents the relations between patches as bits. Second, we formulate a cost function for the matching, composed of matching costs and compensatory costs. We then roughly determine initial matches and refine them iteratively to minimize the cost function. Experimental results demonstrate that the proposed algorithm performs frame-level near-duplicate video matching efficiently.
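
The patch-relation descriptor can be illustrated with a minimal version: patch mean intensities are compared pairwise and each comparison becomes one bit, so near-duplicate frames stay close in Hamming distance while unrelated frames do not. The grid size and noise level below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def binary_descriptor(frame, grid=4):
    """Divide the frame into grid x grid patches and encode every pairwise
    mean-intensity comparison as one bit (1 if patch i is brighter than j)."""
    h, w = frame.shape
    ph, pw = h // grid, w // grid
    means = np.array([frame[r*ph:(r+1)*ph, c*pw:(c+1)*pw].mean()
                      for r in range(grid) for c in range(grid)])
    bits = [int(means[i] > means[j])
            for i in range(len(means)) for j in range(i + 1, len(means))]
    return np.array(bits, dtype=np.uint8)

def hamming(d1, d2):
    """Number of differing bits between two binary descriptors."""
    return int(np.count_nonzero(d1 != d2))

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, (64, 64)).astype(float)
near_dup = frame + rng.normal(0, 2, frame.shape)      # mild noise: near-duplicate
other = rng.integers(0, 256, (64, 64)).astype(float)  # unrelated frame

d0, d1, d2 = map(binary_descriptor, (frame, near_dup, other))
print(hamming(d0, d1) < hamming(d0, d2))              # near-duplicate is closer
```

Comparing relations between patches rather than raw intensities is what makes such descriptors tolerant to global brightness and mild compression changes.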

A Study on Semi-Automatic Registration for Synthesizing Natural Video and Virtual Objects (합성 컨텐츠 저작을 위한 반자동 정합 기술에 관한 연구)

  • Jeong, Se-Yoon;Kim, Kyu-Heon
    • Proceedings of the Korea Information Processing Society Conference / 2002.11a / pp.661-664 / 2002
  • To composite a virtual object into real video, camera information from the time the video was captured is required. In this paper, we propose a semi-automatic registration technique, based on the calibration-free registration used in the virtual reality field, to obtain this camera information. Whereas virtual reality is a real-time application, the semi-automatic registration proposed here targets offline authoring of composite content, and the compositing result of calibration-free registration depends closely on the user's input. Calibration-free registration requires two kinds of user input: first, feature points that define the basis vectors of an affine space; second, the projected image positions of the virtual object. For the first of these, so that the user can enter accurate feature-point information easily, we let the user specify feature points only approximately and then correct the input by detecting corner points in the surrounding region. Experimental results show that the camera information obtained by the proposed method produces satisfactory composite images.
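
The corner-snapping idea, letting the user click only approximately and then refining to the strongest nearby corner, can be sketched with a Harris corner response. The paper does not specify its corner detector; Harris, the search radius, and the 3x3 box smoothing here are assumptions for illustration.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response from image gradients (minimal version)."""
    gy, gx = np.gradient(img.astype(float))
    def box(a):
        # 3x3 box filter as a crude substitute for Gaussian smoothing
        p = np.pad(a, 1)
        return sum(p[i:i+a.shape[0], j:j+a.shape[1]]
                   for i in range(3) for j in range(3))
    sxx, syy, sxy = box(gx * gx), box(gy * gy), box(gx * gy)
    det = sxx * syy - sxy * sxy
    tr = sxx + syy
    return det - k * tr * tr

def refine_click(img, y, x, radius=4):
    """Snap a rough user click to the strongest corner inside a small window."""
    r = harris_response(img)
    y0, y1 = max(0, y - radius), min(img.shape[0], y + radius + 1)
    x0, x1 = max(0, x - radius), min(img.shape[1], x + radius + 1)
    dy, dx = np.unravel_index(np.argmax(r[y0:y1, x0:x1]), (y1 - y0, x1 - x0))
    return y0 + dy, x0 + dx

img = np.zeros((20, 20))
img[8:, 8:] = 1.0                                     # step corner near (8, 8)
print(refine_click(img, 10, 6))                       # rough click snaps to the corner
```

Snapping the input this way keeps the affine basis points consistent across frames even when the user's clicks wander by a few pixels.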


2D-3D Vessel Registration for Image-guided Surgery based on distance map (영상유도시술을 위한 거리지도기반 2D-3D 혈관영상 정합)

  • 송수민;최유주;김민정;김명희
    • Proceedings of the Korean Information Science Society Conference / 2004.04a / pp.913-915 / 2004
  • 2D images provided during a procedure give real-time information on the state of the patient and the surgical tools, but make it difficult to grasp the three-dimensional anatomy of the lesion. A registered image combining the 3D image acquired before the procedure (which requires a long scan time) with the 2D images acquired during the procedure therefore provides useful information for image-guided surgery. In this paper, we extract a vessel model from the volume image and project it onto a plane. After extracting the vessel skeletons to be registered from the two 2D images, we perform an initial registration that considers the branching characteristics of the vessels. Starting from the scaled, initially positioned skeletons, the vessel is iteratively transformed geometrically so that the distance between the skeletons is minimized, and the finally transformed vessel skeleton is overlaid on the 2D image provided during the procedure. In this way, we aim to provide a procedural road map that can reduce procedure time and improve success rates.
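
The iterative alignment described above, transforming the floating skeleton until the inter-skeleton distance is minimal, is essentially an ICP-style loop. Below is a translation-only sketch in NumPy; the paper's full geometric transform (scaling, rotation) and branch-aware initialization are simplified away, and the skeleton points are toy values.

```python
import numpy as np

def icp_translation(ref, flt, n_iter=20):
    """Translation-only ICP sketch: match each floating skeleton point to its
    nearest reference point, apply the mean residual as a shift, repeat."""
    flt = flt.copy().astype(float)
    total = np.zeros(2)
    for _ in range(n_iter):
        d = np.linalg.norm(flt[:, None, :] - ref[None, :, :], axis=2)
        nearest = ref[np.argmin(d, axis=1)]           # closest reference point
        update = (nearest - flt).mean(axis=0)
        flt += update
        total += update
        if np.linalg.norm(update) < 1e-8:             # converged
            break
    return total

ref = np.array([[0., 0.], [1., 0.], [2., 0.], [2., 1.], [2., 2.]])  # L-shaped skeleton
flt = ref + np.array([0.4, -0.3])                     # skeleton displaced by (0.4, -0.3)
print(np.round(icp_translation(ref, flt), 2))         # recovered shift: [-0.4  0.3]
```

In the full method the branch points supply the coarse initial pose, so the iteration only has to correct small residual displacements like this one.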

Stereo Disparity Estimation by Analyzing the Type of Matched Regions (정합영역의 유형분석에 의한 스테레오 변이 추정)

  • Kim Sung-Hun;Lee Joong-Jae;Kim Gye-Young;Choi Hyung-Il
    • Journal of KIISE: Software and Applications / v.33 no.1 / pp.69-83 / 2006
  • This paper describes a disparity estimation method using segmented-region-based stereo matching. Segmented-region-based disparity estimation yields a disparity map at the level of segmented regions, but it can estimate disparity imprecisely, both because of matching errors and because it applies an identical estimation procedure regardless of the type of matched region. To solve this problem, we propose a disparity estimation method that takes the type of matched region into account: it classifies matched regions into similar-matched, dissimilar-matched, false-matched, and miss-matched regions by analyzing the matches, and then performs disparity estimation appropriate to each type. This minimizes disparity errors caused by inaccurate matching and also improves disparity accuracy in well-matched regions. For performance evaluation, we ran tests on a variety of synthetic, indoor, and outdoor scenes and obtained dense disparity maps with improved accuracy. Notably, disparity accuracy also improved considerably for complex outdoor images that previous methods could barely handle.
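
For contrast with the region-based approach, the local matching it refines can be sketched as classic SSD block matching: for each block in the left image, search horizontally in the right image and record the best-matching offset as disparity. Window size, disparity range, and the toy scene are illustrative assumptions.

```python
import numpy as np

def block_matching(left, right, block=3, max_disp=4):
    """SSD block matching: for each left-image block, find the horizontal
    offset of the best-matching right-image block (the disparity)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y-half:y+half+1, x-half:x+half+1]
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y-half:y+half+1, x-d-half:x-d+half+1]
                ssd = ((ref - cand) ** 2).sum()       # sum of squared differences
                if ssd < best:
                    best, best_d = ssd, d
            disp[y, x] = best_d
    return disp

left = np.zeros((9, 12)); left[3:6, 6:9] = 1.0        # object at columns 6-8, left view
right = np.zeros((9, 12)); right[3:6, 4:7] = 1.0      # same object shifted left by 2
print(block_matching(left, right)[4, 7])              # disparity at the object centre: 2
```

Per-pixel costs like this are exactly where untextured or occluded regions fail, which is what motivates classifying matched regions by type before estimating disparity.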

Analysis of Shadow Effect on High Resolution Satellite Image Matching in Urban Area (도심지역의 고해상도 위성영상 정합에 대한 그림자 영향 분석)

  • Yeom, Jun Ho;Han, You Kyung;Kim, Yong Il
    • Journal of Korean Society for Geospatial Information Science / v.21 no.2 / pp.93-98 / 2013
  • Multi-temporal high-resolution satellite images are essential data for efficient city analysis and monitoring. Even when acquired over the same area, whether by the same sensor or by different sensors, multi-temporal images exhibit geometric inconsistencies, so matching points between images must be extracted to register them. In urban areas, however, accurate matching points are difficult to extract because buildings, trees, bridges, and other tall objects cast shadows over wide areas whose intensity and direction differ between acquisitions. In this study, we analyze the effect of shadows on matching high-resolution satellite images of urban areas, using the representative matching-point extraction method, the Scale-Invariant Feature Transform (SIFT), together with an automatic shadow extraction method. Shadow segments are extracted using spatial and spectral attributes derived from image segmentation, and shadow adjacency is taken into account using a building-edge buffer. SIFT matching points extracted within shadow segments are eliminated from the matching-point pairs before image matching is performed. Finally, we evaluate the quality of the matching points and the image matching results, both visually and quantitatively, to analyze the effect of shadows on high-resolution satellite image matching.
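
The filtering step, eliminating matching points that fall inside extracted shadow segments, reduces to a mask lookup per match once the shadow mask exists. A toy sketch follows; the mask here is hand-made, whereas the paper derives it from segmentation attributes and a building-edge buffer, and the match format `((x1, y1), (x2, y2))` is an assumption.

```python
import numpy as np

def filter_shadow_matches(matches, shadow_mask):
    """Discard matching-point pairs whose reference-image location falls
    inside the shadow mask (True = shadow)."""
    return [m for m in matches if not shadow_mask[m[0][1], m[0][0]]]

# Toy mask: shadow covers the left half of a 10x10 reference image.
mask = np.zeros((10, 10), dtype=bool)
mask[:, :5] = True

# Matches as ((x_ref, y_ref), (x_target, y_target)) pairs.
matches = [((2, 3), (2, 4)), ((7, 7), (7, 8)), ((4, 1), (5, 1))]
print(filter_shadow_matches(matches, mask))           # only the non-shadow pair survives
```

Since shadow appearance changes with sun angle between acquisitions, dropping these points removes exactly the correspondences most likely to be unstable.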

Automatic Co-registration of Cloud-covered High-resolution Multi-temporal Imagery (구름이 포함된 고해상도 다시기 위성영상의 자동 상호등록)

  • Han, You Kyung;Kim, Yong Il;Lee, Won Hee
    • Journal of Korean Society for Geospatial Information Science / v.21 no.4 / pp.101-107 / 2013
  • Commercial high-resolution images generally come with coordinates, but the locations differ locally according to the sensor pose at acquisition time and the relief displacement of the terrain. Image co-registration must therefore be applied before multi-temporal images can be used together. Co-registration is hindered, however, when images include cloud-covered regions, because matching points are hard to extract and many false matches occur. This paper proposes an automatic co-registration method for cloud-covered high-resolution images. The scale-invariant feature transform (SIFT), one of the representative feature-based matching methods, is used, and only features of the target (cloud-covered) image that lie within a circular buffer around each reference-image feature are considered as matching candidates. The proposed algorithm was applied to study sites composed of multi-temporal KOMPSAT-2 images including cloud-covered regions. The results showed that the proposed method achieved a higher correct-match rate than the original SIFT method and acceptable registration accuracies at all sites.
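
The circular-buffer constraint can be sketched directly: each reference feature considers only target features within a fixed radius, so a cloud-induced look-alike far across the image can never be selected, even if its descriptor happens to be the closest. The radius, coordinates, and two-element "descriptors" below are toy values, not the paper's settings.

```python
import numpy as np

def buffered_match(ref_pts, ref_desc, tgt_pts, tgt_desc, radius=10.0):
    """For each reference feature, restrict candidates to target features
    inside a circular buffer, then pick the closest descriptor among them."""
    pairs = []
    for i, (p, d) in enumerate(zip(ref_pts, ref_desc)):
        near = np.where(np.linalg.norm(tgt_pts - p, axis=1) <= radius)[0]
        if near.size == 0:
            continue                                  # no candidate in the buffer
        j = near[np.argmin(np.linalg.norm(tgt_desc[near] - d, axis=1))]
        pairs.append((i, int(j)))
    return pairs

ref_pts = np.array([[10., 10.], [50., 50.]])
ref_desc = np.array([[1., 0.], [0., 1.]])
tgt_pts = np.array([[12., 11.], [48., 52.], [90., 90.]])
tgt_desc = np.array([[0.9, 0.1], [0.1, 0.9], [1., 0.]])  # last descriptor mimics ref 0
print(buffered_match(ref_pts, ref_desc, tgt_pts, tgt_desc))  # [(0, 0), (1, 1)]
```

Note that without the buffer, reference feature 0 would match the distant mimic at (90, 90), whose descriptor is an exact copy; the spatial constraint is what rejects it.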