• Title/Summary/Keyword: Fine-image registration


Automatic Registration of Images for Digital Subtraction Radiography Using Local Correlation (국소적 상관계수를 이용한 자동적 디지털 방사선 영상정합)

  • 이원진;허민석;이삼선;최순철;이재성
    • Journal of Biomedical Engineering Research / v.25 no.2 / pp.111-117 / 2004
  • Most digital subtraction methods in dental radiography are based on registration using manually selected landmarks. We have developed an automatic registration method that does not require manual landmark selection. By restricting the geometric matching to a region of interest (ROI), we compare the cross-correlation coefficient only between the ROIs. The affine or perspective transform parameters that maximize the cross-correlation between the local regions are searched iteratively with a fast search strategy: the parameters are first searched coarsely on a 1/4-scale image, and fine registration is then performed on the original-scale image. The developed method matches images corrupted by Gaussian noise with the same accuracy as noise-free images in transform simulations. The registration accuracy of the perspective method shows a 17% improvement over the manual method. Applied to radiography of dental implants, the method provides automatic, noise-robust registration with high accuracy in near real time.
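The coarse-to-fine search described above can be sketched in a few lines. The following is a minimal illustration only, not the authors' implementation: it restricts itself to integer translations (the paper searches affine/perspective parameters), assumes grayscale numpy arrays, and downsamples by simple striding.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient between two equal-size ROIs."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def search_shift(ref_roi, img, top, left, radius):
    """Exhaustively search integer shifts within `radius` that maximize NCC."""
    h, w = ref_roi.shape
    best = (-2.0, 0, 0)  # (score, dy, dx); NCC is always >= -1
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > img.shape[0] or x + w > img.shape[1]:
                continue
            score = ncc(ref_roi, img[y:y + h, x:x + w])
            if score > best[0]:
                best = (score, dy, dx)
    return best

def coarse_to_fine_shift(ref, img, top, left, size, radius=8):
    """Estimate the ROI shift on 1/4-scale images first, then refine at full scale."""
    ref4, img4 = ref[::4, ::4], img[::4, ::4]
    _, dy4, dx4 = search_shift(ref4[top // 4:(top + size) // 4,
                                    left // 4:(left + size) // 4],
                               img4, top // 4, left // 4, radius // 4 + 1)
    # Refine around the coarse estimate on the original-scale image.
    roi = ref[top:top + size, left:left + size]
    _, dy, dx = search_shift(roi, img, top + dy4 * 4, left + dx4 * 4, 3)
    return dy4 * 4 + dy, dx4 * 4 + dx
```

Because the coarse pass evaluates only (2·radius/4+1)² candidates on a 16× smaller image, the total cost is far below a full-resolution exhaustive search, which is the point of the two-scale strategy.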

Analysis of Co-registration Performance According to Geometric Processing Level of KOMPSAT-3/3A Reference Image (KOMPSAT-3/3A 기준영상의 기하품질에 따른 상호좌표등록 결과 분석)

  • Yun, Yerin;Kim, Taeheon;Oh, Jaehong;Han, Youkyung
    • Korean Journal of Remote Sensing / v.37 no.2 / pp.221-232 / 2021
  • This study analyzed co-registration results according to the geometric processing level of the reference image, Level 1R or Level 1G, as provided for KOMPSAT-3 and KOMPSAT-3A images. We performed co-registration using each of the Level 1R and Level 1G images as the reference image and a Level 1R image as the sensed image. The experimental dataset consisted of seven Level 1R and 1G images of KOMPSAT-3 and KOMPSAT-3A acquired over Daejeon, South Korea. To coarsely align the geometric positions of the two images, the SURF (Speeded-Up Robust Features) and PC (Phase Correlation) methods were combined and applied repeatedly to the overlapping region of the images. We then extracted tie-points from the coarsely aligned images using the SURF method and performed fine co-registration through affine and piecewise linear transformations, respectively, constructed from the tie-points. In the experiments, more tie-points were extracted when a Level 1G image was used as the reference image than with a Level 1R image. Moreover, with a Level 1G reference image, the root mean square error of co-registration was on average 5 pixels lower than with a Level 1R reference image. The results show that co-registration performance can be affected by the geometric processing level, which determines the initial geometric relationship between the two images, and confirm that a reference image of better geometric quality yields more stable co-registration.
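The PC (phase correlation) step used for coarse alignment can be reproduced compactly with numpy's FFT. This is a minimal sketch of the standard technique, assuming same-size grayscale arrays and a pure integer translation; the SURF detection and the affine/piecewise-linear fine step are omitted (they would require a feature library such as OpenCV).

```python
import numpy as np

def phase_correlation(ref, sensed):
    """Estimate the integer translation (dy, dx) such that
    sensed ~= np.roll(ref, (dy, dx)), from the peak of the
    normalized cross-power spectrum."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(sensed)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12      # keep phase only
    corr = np.fft.ifft2(cross).real     # impulse at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around indices to signed shifts.
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

Because the whole spectrum votes for a single peak, phase correlation is robust to noise and radiometric differences, which makes it a good complement to feature matching for the coarse stage.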

Rapid Rigid Registration Method Between Intra-Operative 2D XA and Pre-operative 3D CTA Images (수술 중 촬영된 2D XA 영상과 수술 전 촬영된 3D CTA 영상의 고속 강체 정합 기법)

  • Park, Taeyong;Shin, Yongbin;Lim, Sunhye;Lee, Jeongjin
    • Journal of Korea Multimedia Society / v.16 no.12 / pp.1454-1464 / 2013
  • In this paper, we propose a rapid rigid registration method for the fused visualization of intra-operative 2D XA and pre-operative 3D CTA images. For fast and robust initial registration, we propose a global movement estimation based on trilateration. In addition, the principal axis of each image is computed and aligned, and the bounding boxes of the vascular shapes are compared for a more accurate initial registration. For the fine registration, the two images are aligned so that the distance between the two vascular structures, evaluated by a selective distance measure, is minimized. In the experiments, we evaluated speed, accuracy, and robustness on data from five patients in comparison with a previous registration method. The proposed method registers the two volumes at the optimal location rapidly, and more robustly than the previous method.
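A common way to evaluate a vessel-to-vessel distance measure of this kind is to precompute a distance transform of the 2D vessel mask and look it up at the projected positions of the 3D vessel points. The sketch below illustrates that general idea only, under assumed conventions (a boolean XA vessel mask, projected points given as (row, col) pixel coordinates); it is not the authors' trilateration initializer or their exact selective measure, and it relies on `scipy.ndimage.distance_transform_edt`.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def vessel_distance_map(vessel_mask):
    """Pixel distance from every location to the nearest 2D vessel pixel."""
    return distance_transform_edt(~vessel_mask)

def reprojection_cost(proj_pts, dmap):
    """Mean distance-map value at the (row, col) projections of the 3D vessel
    points under a candidate rigid pose; a pose optimizer minimizes this."""
    ij = np.round(proj_pts).astype(int)
    ij[:, 0] = np.clip(ij[:, 0], 0, dmap.shape[0] - 1)
    ij[:, 1] = np.clip(ij[:, 1], 0, dmap.shape[1] - 1)
    return float(dmap[ij[:, 0], ij[:, 1]].mean())
```

The distance map is computed once per XA frame, so each candidate rigid pose costs only one projection plus N array lookups, which is what makes near-real-time pose search feasible.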

Registration Technique of Partial 3D Point Clouds Acquired from a Multi-view Camera for Indoor Scene Reconstruction (실내환경 복원을 위한 다시점 카메라로 획득된 부분적 3차원 점군의 정합 기법)

  • Kim Sehwan;Woo Woontack
    • Journal of the Institute of Electronics Engineers of Korea CI / v.42 no.3 s.303 / pp.39-52 / 2005
  • In this paper, a registration method is presented for partial 3D point clouds, acquired from a multi-view camera, for 3D reconstruction of an indoor environment. In general, conventional registration methods require high computational complexity and long registration times, and they are not robust for 3D point clouds of comparatively low precision. To overcome these drawbacks, a projection-based registration method is proposed. First, depth images are refined based on a temporal property, by excluding 3D points with large variation, and a spatial property, by filling holes with reference to neighboring 3D points. Second, 3D point clouds acquired from two views are projected onto the same image plane, and a two-step integer mapping is applied so that a modified KLT (Kanade-Lucas-Tomasi) tracker can find correspondences. Fine registration is then carried out by minimizing distance errors over an adaptive search range. Finally, final colors are computed from the colors of corresponding points, and an indoor environment is reconstructed by applying the above procedure to consecutive scenes. The proposed method not only reduces computational complexity by searching for correspondences on a 2D image plane, but also registers effectively even 3D points of low precision. Furthermore, only a few color and depth images are needed to reconstruct an indoor environment.
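Once KLT has produced matched 3D point pairs, the distance-minimizing rigid transform has a well-known closed-form least-squares solution (the Kabsch/SVD method). The sketch below shows that standard solution; the paper does not state which solver it uses, so treat this as one reasonable choice rather than the authors' exact fine-registration step.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) with dst ~= src @ R.T + t,
    computed in closed form via SVD (Kabsch method)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])             # guard against reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

Because the solution is closed-form, it can be re-run cheaply inside an iterative loop as the adaptive search range refines the correspondences.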

On Shape Recovery of 3D Object from Multiple Range Images (시점이 다른 다수의 거리 영상으로부터 3차원 물체의 형상 복원)

  • Kim, Jun-Young;Yun, Il-Dong;Lee, Sang-Uk
    • Journal of the Institute of Electronics Engineers of Korea SP / v.37 no.1 / pp.1-15 / 2000
  • To reconstruct a 3D shape, it is a common strategy to acquire multiple range images from different viewpoints and integrate them into a common coordinate frame. In this paper, we focus on the registration and integration processes for combining all range images into one surface model. For the registration, we propose a two-step algorithm: a rough registration step using all data points, and a fine registration step using only the high-curvature data points. For the integration, we propose a new algorithm, referred to as the 'multi-registration' technique, to alleviate the error accumulation that occurs when pair-wise registration is applied to each range image sequentially in order to transform them into a common reference frame. Intensive experiments were performed on various real range data. In the experiments, all range images were registered within one minute on a 150 MHz Pentium PC. The results show that the proposed algorithms register and integrate multiple range images within a tolerable error bound in a reasonable computation time, and that the total error among all range images is equalized.
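Selecting "high-curvature data points" for the fine step requires a per-point curvature estimate. A common proxy (not necessarily the one this paper used) is Pauly's surface variation: the smallest eigenvalue of the local covariance over the k nearest neighbours, divided by the eigenvalue sum. The sketch below assumes an (N, 3) numpy array of points and uses `scipy.spatial.cKDTree` for the neighbour queries.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k=10):
    """Per-point curvature proxy: lambda_min / (lambda_0+lambda_1+lambda_2)
    of the local covariance over the k nearest neighbours. Near 0 on flat
    regions, larger on curved regions, edges, and corners."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)          # idx includes the point itself
    var = np.empty(len(points))
    for i, nb in enumerate(idx):
        local = points[nb] - points[nb].mean(axis=0)
        evals = np.linalg.eigvalsh(local.T @ local)   # ascending order
        var[i] = evals[0] / evals.sum()
    return var
```

Keeping only points whose variation exceeds a threshold concentrates the fine registration on geometrically distinctive regions, where the alignment is best constrained.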


3D Reconstruction of an Indoor Scene Using Depth and Color Images (깊이 및 컬러 영상을 이용한 실내환경의 3D 복원)

  • Kim, Se-Hwan;Woo, Woon-Tack
    • Journal of the HCI Society of Korea / v.1 no.1 / pp.53-61 / 2006
  • In this paper, we propose a novel method for 3D reconstruction of an indoor scene using a multi-view camera. Numerous disparity estimation algorithms have been developed, each with its own pros and cons, so depth images of varying quality may be given. In this paper, we deal with the generation of a 3D surface from several 3D point clouds acquired from a generic multi-view camera. First, a 3D point cloud is estimated based on the spatio-temporal properties of several 3D point clouds. Second, the estimated 3D point clouds, acquired from two viewpoints, are projected onto the same image plane to find correspondences, and registration is conducted by minimizing errors. Finally, a surface is created by fine-tuning the 3D coordinates of the point clouds acquired from several viewpoints. The proposed method reduces computational complexity by searching for corresponding points on a 2D image plane, and works effectively even when the precision of the 3D point cloud is relatively low, by exploiting correlation with the neighborhood. Furthermore, an indoor environment can be reconstructed from depth and color images taken at several positions with the multi-view camera. The reconstructed model can be adopted for navigation in, and interaction with, virtual environments, and for Mediated Reality (MR) applications.
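The starting point of pipelines like this one is turning each depth image into a 3D point cloud. A minimal pinhole back-projection, with assumed intrinsics fx, fy, cx, cy (the paper does not specify its camera model), looks like this:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image to an (N, 3) point cloud with the pinhole
    model; pixels with zero depth (no measurement) are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # u: column, v: row
    z = depth.ravel()
    valid = z > 0
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.column_stack([x, y, z])[valid]
```

Each view's cloud, expressed in its own camera frame this way, is what the projection-based registration then aligns into the common scene frame.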


Verification of Indicator Rotation Correction Function of a Treatment Planning Program for Stereotactic Radiosurgery (방사선수술치료계획 프로그램의 지시자 회전 오차 교정 기능 점검)

  • Chung, Hyun-Tai;Lee, Re-Na
    • Journal of Radiation Protection and Research / v.33 no.2 / pp.47-51 / 2008
  • Objective: This study analyzed errors due to rotation or tilt of the magnetic resonance (MR) imaging indicator during image acquisition for stereotactic radiosurgery, and verified the error correction procedure of a commercially available stereotactic neurosurgery treatment planning program. Materials and Methods: Software virtual phantoms were built from stereotactic images generated with a commercial programming language, Interactive Data Language (version 5.5). The image slice thickness was 0.5 mm, the pixel size 0.5×0.5 mm, the field of view 256 mm, and the image resolution 512×512. The images were generated under the DICOM 3.0 standard so that they could be used with Leksell GammaPlan®. For the verification of the rotation error correction function of Leksell GammaPlan®, 45 measurement points were arranged in five axial planes. On each axial plane there were nine measurement points along a square of side 100 mm; the center of the square was located on the z-axis, and one measurement point lay on the z-axis as well. The five axial planes were placed at z = -50.0, -30.0, 0.0, 30.0, and 50.0 mm, respectively. The virtual phantom was rotated by 3° around one of the x, y, and z-axes, around two of the three axes, and around all three axes. The positional errors of the rotated measurement points were measured with Leksell GammaPlan® and the correction function was verified. Results: The image registration error of the virtual phantom images was 0.1±0.1 mm, within the requirement for stereotactic images. The maximum theoretical positional errors of the measurement points were 2.6 mm for a rotation around one axis, 3.7 mm for a rotation around two axes, and 4.5 mm for a rotation around three axes. The measured positional errors were 0.1±0.1 mm for a rotation around a single axis and 0.2±0.2 mm for rotations around two and three axes. These small errors verify that the rotation error correction function of Leksell GammaPlan® works correctly. Conclusion: A virtual phantom was built to verify software functions of a stereotactic neurosurgery treatment planning program. The error correction function of a commercial treatment planning program worked within the nominal error range. The virtual phantom of this study can be applied in many other fields to verify various functions of treatment planning programs.
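The order of magnitude of the theoretical maxima can be checked with a chord-length argument: a point at distance r from the rotation axis moves 2·r·sin(θ/2) under a rotation by θ. For r = 50 mm and θ = 3° this gives about 2.6 mm, consistent with the reported single-axis maximum (which measurement point and axis govern each maximum depends on the phantom geometry). A small numerical check:

```python
import numpy as np

def rotation_z(deg):
    """Rotation matrix about the z-axis."""
    th = np.radians(deg)
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# A measurement point 50 mm from the rotation axis.
p = np.array([50.0, 0.0, 0.0])
displacement = np.linalg.norm(rotation_z(3.0) @ p - p)
# The chord-length formula 2*r*sin(theta/2) gives the same value.
chord = 2 * 50.0 * np.sin(np.radians(1.5))
```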