• Title/Summary/Keyword: image registration

Search Results: 518

Automatic Image Registration Based on Extraction of Corresponding-Points for Multi-Sensor Image Fusion (다중센서 영상융합을 위한 대응점 추출에 기반한 자동 영상정합 기법)

  • Choi, Won-Chul;Jung, Jik-Han;Park, Dong-Jo;Choi, Byung-In;Choi, Sung-Nam
    • Journal of the Korea Institute of Military Science and Technology / v.12 no.4 / pp.524-531 / 2009
  • In this paper, we propose an automatic image registration method for multi-sensor image fusion, such as fusion of visible and infrared images. Registration is achieved by finding corresponding feature points in both input images. In general, global statistical correlation is not guaranteed between multi-sensor images, which makes image registration difficult. To cope with this problem, mutual information is adopted to measure the correspondence of features and to select reliable points. An update algorithm for the projective transform is also proposed. Experimental results show that the proposed method provides robust and accurate registration.
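The mutual-information measure adopted in this abstract can be estimated from a joint intensity histogram. A minimal numpy sketch (function name and bin count are illustrative choices, not from the paper):

```python
import numpy as np

def mutual_information(patch_a, patch_b, bins=32):
    """Estimate mutual information between two equally sized image
    patches from their joint intensity histogram (plug-in estimator)."""
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability
    px = pxy.sum(axis=1, keepdims=True)        # marginal of patch_a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of patch_b
    nonzero = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

# Identical patches share maximal information; independent noise shares little.
rng = np.random.default_rng(0)
a = rng.random((64, 64))
print(mutual_information(a, a) > mutual_information(a, rng.random((64, 64))))  # → True
```

Because the joint distribution, not raw intensity correlation, drives the score, this measure tolerates the different intensity mappings of visible and infrared sensors.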

A New Landsat Image Co-Registration and Outlier Removal Techniques

  • Kim, Jong-Hong;Heo, Joon;Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing / v.22 no.5 / pp.439-443 / 2006
  • Image co-registration is the process of overlaying two images of the same scene: one serves as the reference image, while the other (the sensed image) is geometrically transformed to match it. Numerous methods have been developed for automated image co-registration, which is known to be a time-consuming and/or computation-intensive procedure. To improve the efficiency and effectiveness of co-registration of satellite imagery, this paper proposes pre-qualified area matching, composed of feature extraction with a Laplacian filter and an area-matching algorithm using the correlation coefficient. Moreover, to improve the accuracy of co-registration, outliers among the initial matching points should be removed. For this, two outlier detection techniques, studentized residuals and a modified RANSAC algorithm, are used in this study. Three pairs of Landsat images were used for the performance test, and the results were compared and evaluated in terms of robustness and efficiency.
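The correlation-coefficient area matching described here amounts to sliding a template over a search region and keeping the offset with the highest normalized score. A minimal sketch, assuming a brute-force search (the paper's pre-qualification step is omitted; names are illustrative):

```python
import numpy as np

def ncc(template, window):
    """Pearson correlation coefficient between a template and a window."""
    t = template - template.mean()
    w = window - window.mean()
    denom = np.sqrt((t * t).sum() * (w * w).sum())
    return (t * w).sum() / denom if denom > 0 else 0.0

def match_template(image, template):
    """Slide the template over the image; return the offset of the best NCC."""
    th, tw = template.shape
    best, best_rc = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            score = ncc(template, image[r:r + th, c:c + tw])
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

rng = np.random.default_rng(1)
img = rng.random((40, 40))
tmpl = img[10:18, 22:30].copy()       # known location (10, 22)
print(match_template(img, tmpl)[0])   # → (10, 22)
```

The matches this search produces are exactly the "initial matching points" from which outliers are then removed by studentized residuals or RANSAC.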

Correlation Based Image Registration for Pressure Sensitive Paint (PSP를 이용한 압력측정에서의 상관법에 의한 이미지 등록)

  • Park, Sang-Hyun;Sung, Hyung-Jin
    • Proceedings of the Korean Society of Visualization Conference / 2003.11a / pp.63-66 / 2003
  • A new algorithm, CBIR (Correlation-Based Image Registration), was proposed to improve the resolution of image registration for PSP (Pressure-Sensitive Paint). Local displacement vectors were obtained by finding the displacement that maximizes the cross-correlation between corresponding interrogation windows of the 'wind-off' and 'wind-on' images. Recursive multigrid processing was employed to increase the non-linear spatial resolution. Image variations were measured precisely without identifying control points.
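The core CBIR step, finding the shift that maximizes the cross-correlation between a wind-off and a wind-on interrogation window, can be sketched with FFT-based circular correlation (a common implementation choice; the paper does not specify one, and the function name is illustrative):

```python
import numpy as np

def window_displacement(wind_off, wind_on):
    """Integer-pixel shift of wind_on relative to wind_off for one
    interrogation window, via FFT-based circular cross-correlation."""
    a = wind_off - wind_off.mean()
    b = wind_on - wind_on.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the window into negative displacements
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

rng = np.random.default_rng(2)
off = rng.random((32, 32))
on = np.roll(off, shift=(3, -5), axis=(0, 1))  # synthetic displacement
print(window_displacement(off, on))            # → (3, -5)
```

Repeating this per window, with progressively smaller windows, yields the recursive multigrid refinement the abstract mentions.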


Correlation-Based Image Registration for Pressure Measurements Using Pressure-Sensitive Paint (PSP 압력측정을 위한 상관법에 의한 이미지 등록)

  • Park, Sang-Hyun;Sung, Hyung-Jin
    • Proceedings of the KSME Conference / 2004.04a / pp.1778-1782 / 2004
  • A new algorithm, CBIR (Correlation-Based Image Registration), was proposed to improve the resolution of image registration for PSP (Pressure-Sensitive Paint). Local displacement vectors were obtained by finding the displacement that maximizes the cross-correlation between corresponding interrogation windows of the 'wind-off' and 'wind-on' images. Recursive multigrid processing was employed to increase the non-linear spatial resolution. Image variations were measured precisely without identifying control points.


Accuracy of the Point-Based Image Registration Method in Integrating Radiographic and Optical Scan Images: A Pilot Study

  • Mai, Hai Yen;Lee, Du-Hyeong
    • Journal of Korean Dental Science / v.13 no.1 / pp.28-34 / 2020
  • Purpose: The purpose of this study was to investigate the influence of different implant computer software on the accuracy of image registration between radiographic and optical scan data. Materials and Methods: Cone-beam computed tomography and optical scan data of a partially edentulous jaw were collected and transferred to three different software programs: Blue Sky Plan (Blue Sky Bio), Implant Studio (3Shape), and Geomagic DesignX (3D Systems). In each program, the two image sets were aligned using a point-based automatic image registration algorithm. Image matching error was evaluated by measuring the linear discrepancies between the two images at the anterior and posterior areas along the x-, y-, and z-axes. The Kruskal-Wallis test and a post hoc Mann-Whitney U-test with Bonferroni correction were used for statistical analyses. The significance level was set at 0.05. Results: Overall discrepancy values ranged from 0.08 to 0.30 ㎛. Image registration accuracy differed significantly among the programs in the x- and z-axes (P=0.009 and P<0.001, respectively), but not in the y-axis (P=0.064). Conclusion: The accuracy of point-based automatic image registration can differ depending on the computer software used.
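Point-based rigid registration of the kind these programs perform is typically solved in closed form from paired landmarks. A minimal Kabsch/Procrustes sketch in numpy (an assumption about the general technique, not the vendors' documented internals):

```python
import numpy as np

def rigid_register(source, target):
    """Best-fit rotation R and translation t mapping 3-D source points
    onto paired target points in the least-squares sense (Kabsch)."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ tgt_c)
    d = np.sign(np.linalg.det(u @ vt))        # guard against reflections
    r = (u @ np.diag([1.0, 1.0, d]) @ vt).T
    t = target.mean(axis=0) - r @ source.mean(axis=0)
    return r, t

rng = np.random.default_rng(3)
pts = rng.random((6, 3))
theta = 0.3
rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                [np.sin(theta),  np.cos(theta), 0],
                [0, 0, 1]])
moved = pts @ rot.T + np.array([0.5, -0.2, 1.0])
r_est, t_est = rigid_register(pts, moved)
print(np.allclose(pts @ r_est.T + t_est, moved))  # → True
```

With exact correspondences the fit is exact; the sub-millimeter discrepancies the study reports come from landmark picking and surface noise, not from the solver itself.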

Prostate MR and Pathology Image Fusion through Image Correction and Multi-stage Registration (영상보정 및 다단계 정합을 통한 전립선 MR 영상과 병리 영상간 융합)

  • Jung, Ju-Lip;Jo, Hyun-Hee;Hong, Helen
    • Journal of KIISE: Computing Practices and Letters / v.15 no.9 / pp.700-704 / 2009
  • In this paper, we propose a method for combining MR images with histopathology images of the prostate using image correction and multi-stage registration. Our method consists of four steps. First, the intensity of the prostate bleeding area in the T2-weighted MR image is substituted with that of the T1-weighted MR image, and two or four tissue sections of the prostate in the histopathology image are combined into a single prostate image by manual stitching. Second, affine registration is performed to find the transformation that optimizes mutual information between the MR and histopathology images. Third, the result of affine registration is deformed by TPS warping. Finally, the aligned images are visualized by intensity intermixing. Experimental results show that the prostate tumor lesion can be properly located and clearly visualized within the MR images for tissue characterization comparison, and that the registration error between the T2-weighted MR and histopathology images was 0.0815 mm.
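The affine stage here searches transform parameters by optimizing mutual information; when corresponding control points are available instead, a 2-D affine map can be fitted directly in closed form. A simplified numpy sketch of that fit (not the authors' MI optimizer; names are illustrative):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform A (2x3) with dst ≈ A @ [src; 1]."""
    ones = np.ones((src.shape[0], 1))
    design = np.hstack([src, ones])              # (n, 3) homogeneous points
    a, *_ = np.linalg.lstsq(design, dst, rcond=None)
    return a.T                                   # rows: [a11 a12 tx], [a21 a22 ty]

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_a = np.array([[1.2, 0.1, 3.0],
                   [-0.2, 0.9, -1.0]])
dst = src @ true_a[:, :2].T + true_a[:, 2]
print(np.allclose(fit_affine(src, dst), true_a))  # → True
```

The TPS warp of step three then refines this globally linear map with a smooth non-linear deformation anchored at the same control points.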

Multimodal Medical Image Registration based on Image Sub-division and Bi-linear Transformation Interpolation (영상의 영역 분할과 이중선형 보간행렬을 이용한 멀티모달 의료 영상의 정합)

  • Kim, Yang-Wook;Park, Jun
    • Journal of Biomedical Engineering Research / v.30 no.1 / pp.34-40 / 2009
  • Transforms including translation and rotation are required to register two or more images. In medical applications, different registration methods have been applied depending on the structures: for rigid bodies such as bone structures, affine transformation has been widely used. In most previous research, a single transform was used to register the whole image, which resulted in low registration accuracy, especially when the degree of deformation between the two images was high. In this paper, a novel registration method is introduced, based on image sub-division and bilinear interpolation of transformations. The proposed method enhanced registration accuracy by 40% compared with Trimmed ICP when registering color and MRI images.
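The paper's key idea, sub-dividing the image and bilinearly blending transformation parameters given at block corners, can be sketched as follows (the parameterization as corner vectors is an illustrative assumption):

```python
import numpy as np

def bilinear_params(p00, p10, p01, p11, u, v):
    """Bilinearly blend transformation parameter vectors given at the four
    corners of a block; (u, v) in [0, 1]^2 is the position inside the block."""
    return ((1 - u) * (1 - v) * p00 + u * (1 - v) * p10
            + (1 - u) * v * p01 + u * v * p11)

# Corner parameters, e.g. (dx, dy) translations estimated per sub-region.
p00, p10 = np.array([0.0, 0.0]), np.array([2.0, 0.0])
p01, p11 = np.array([0.0, 4.0]), np.array([2.0, 4.0])
print(bilinear_params(p00, p10, p01, p11, 0.5, 0.5))  # → [1. 2.]
```

Because the blend is continuous across block boundaries, each pixel gets its own locally varying transform instead of one global one, which is what recovers accuracy under large deformation.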

Markerless Image-to-Patient Registration Using Stereo Vision: Comparison of Registration Accuracy by Feature Selection Method and Location of the Stereo Vision System (스테레오 비전을 이용한 마커리스 정합 : 특징점 추출 방법과 스테레오 비전의 위치에 따른 정합 정확도 평가)

  • Joo, Subin;Mun, Joung-Hwan;Shin, Ki-Young
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.1 / pp.118-125 / 2016
  • This study evaluates the performance of an image-to-patient registration algorithm using stereo vision and CT images for surgical navigation in the facial region. For image-to-patient registration, feature extraction and 3D coordinate calculation are conducted, followed by registration of the 3D CT image to the 3D coordinates. Of the five combinations generated from three facial feature extraction methods and three registration methods on stereo vision images, this study identifies the one with the highest registration accuracy. In addition, image-to-patient registration accuracy was compared while varying the facial rotation angle. The experiments show that when the facial rotation angle is within 20 degrees, registration using the Active Appearance Model and Pseudo-Inverse Matching has the highest accuracy, and when the facial rotation angle exceeds 20 degrees, registration using Speeded-Up Robust Features and Iterative Closest Point has the highest accuracy. These results indicate that the Active Appearance Model and Pseudo-Inverse Matching methods should be used to reduce registration error when the facial rotation angle is within 20 degrees, and the Speeded-Up Robust Features and Iterative Closest Point methods should be used when it exceeds 20 degrees.
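The Iterative Closest Point method favored here at larger rotation angles alternates nearest-neighbour pairing with a closed-form rigid fit. A minimal sketch (simplified: brute-force neighbour search, no outlier rejection, synthetic data):

```python
import numpy as np

def best_rigid(src, dst):
    """Closed-form 3-D rotation/translation mapping src onto dst (Kabsch)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - sc).T @ (dst - dc))
    d = np.sign(np.linalg.det(u @ vt))          # keep a proper rotation
    r = (u @ np.diag([1.0, 1.0, d]) @ vt).T
    return r, dc - r @ sc

def icp(source, target, iters=20):
    """Minimal ICP: pair each source point with its nearest target point,
    fit a rigid transform, apply it, and repeat."""
    moved = source.copy()
    for _ in range(iters):
        dists = np.linalg.norm(moved[:, None, :] - target[None, :, :], axis=2)
        pairs = target[np.argmin(dists, axis=1)]  # nearest neighbours
        r, t = best_rigid(moved, pairs)
        moved = moved @ r.T + t
    return moved

rng = np.random.default_rng(4)
cloud = rng.random((50, 3))
theta = 0.05                                      # small misalignment
rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                [np.sin(theta),  np.cos(theta), 0],
                [0, 0, 1]])
target = cloud @ rot.T + 0.02
aligned = icp(cloud, target)
print(np.linalg.norm(aligned - target) < np.linalg.norm(cloud - target))  # → True
```

ICP needs a reasonable initial guess, which is consistent with the study's finding that it works best when combined with a feature detector (SURF) at large rotation angles.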

Self-Supervised Rigid Registration for Small Images

  • Ma, Ruoxin;Zhao, Shengjie;Cheng, Samuel
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.1 / pp.180-194 / 2021
  • For small-image registration, feature-based approaches are likely to fail because feature detectors cannot find enough feature points in low-resolution images. The classic FFT approach's prediction accuracy is high, but registration can be relatively slow, taking several seconds per image pair. To achieve real-time, high-precision rigid registration for small images, we apply deep neural networks for supervised rigid transformation prediction, directly predicting the transformation parameters. We train deep registration models with rigidly transformed CIFAR-10 and STL-10 images, and evaluate their generalization ability on transformed CIFAR-10 images, STL-10 images, and randomly generated images. Experimental results show that the proposed deep registration models achieve accuracy comparable to the classic FFT approach for small CIFAR-10 images (32×32), and our LSTM registration model takes less than 1 ms to register one pair of images. For moderate-size STL-10 images (96×96), FFT significantly outperforms the deep registration models in accuracy but is also considerably slower. Our results suggest that deep registration models have competitive advantages over conventional approaches, at least for small images.
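The "classic FFT approach" used as the baseline is commonly phase correlation, which recovers translation from the peak of the normalized cross-power spectrum. A minimal numpy sketch for the integer-shift case (rotation/scale handling, which the paper's rigid setting also needs, is omitted):

```python
import numpy as np

def phase_correlation(ref, moved):
    """Recover the integer (row, col) shift of `moved` relative to `ref`
    by locating the peak of the normalized cross-power spectrum."""
    f_ref, f_mov = np.fft.fft2(ref), np.fft.fft2(moved)
    cross = f_ref.conj() * f_mov
    cross /= np.abs(cross) + 1e-12               # keep phase only
    peak = np.unravel_index(np.argmax(np.fft.ifft2(cross).real), ref.shape)
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, ref.shape))

rng = np.random.default_rng(5)
img = rng.random((32, 32))
shifted = np.roll(img, shift=(4, -7), axis=(0, 1))
print(phase_correlation(img, shifted))           # → (4, -7)
```

Normalizing away the magnitude makes the correlation peak sharp and illumination-invariant, which is why the FFT baseline is so accurate; the deep models trade a little of that accuracy for sub-millisecond inference.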

Effect of image matching experience on the accuracy and working time for 3D image registration between radiographic and optical scan images (술자의 영상정합의 경험이 컴퓨터 단층촬영과 광학스캔 영상 간의 정합 정확성과 작업시간에 미치는 영향)

  • Mai, Hang-Nga;Lee, Du-Hyeong
    • The Journal of Korean Academy of Prosthodontics / v.59 no.3 / pp.299-304 / 2021
  • Purpose. The purpose of the present study was to investigate the effect of operators' image matching experience on the accuracy and working time of image registration between radiographic and optical scan images. Materials and methods. Computed tomography and an optical scan (IDC S1, Amann Girrbach, Koblach, Austria) of a dentate dental arch were obtained. Image matching between the computed tomography and the optical scan was performed using the point-based automatic registration method in a planning software program (Implant Studio, 3Shape, Copenhagen, Denmark) under two experience conditions: an experienced group and an inexperienced group (n = 15 per group, N = 30). The accuracy of image registration in each group was evaluated by measuring linear discrepancies between the matched images, and the working time was recorded. An independent t-test was used to statistically analyze the resulting data (α = .05). Results. In linear deviation, no statistically significant difference was found between the experienced and inexperienced groups. Meanwhile, the working time for image registration was significantly shorter in the experienced group than in the inexperienced group (P = .007). Conclusion. A difference in image matching experience may not influence the accuracy of registering an optical scan to computed tomography when point-based automatic registration is used, but it may affect the working time.