• Title/Summary/Keyword: 3D Image Registration


Realistic 3D model generation of a real product based on 2D-3D registration (2D-3D 정합기반 실제 제품의 사실적 3D 모델 생성)

  • Kim, Gang Yeon;Son, Seong Min
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.14 no.11
    • /
    • pp.5385-5391
    • /
    • 2013
  • As on-line purchasing becomes more common, customer demand for realistic and accurate digital information about product designs is increasing. In this paper, we propose a practical method that can generate a realistic 3D model of a real product using the 3D geometry obtained by a 3D scanner together with photographic images of the product. In order to register the images to the 3D geometry, the camera focal length, the CCD scanning aspect ratio, and the transformation matrix between the camera coordinate system and the 3D object coordinate system must be determined. To perform this 2D-3D registration while keeping the computational cost manageable, a three-step method is applied, consisting of camera calibration, determination of a temporary optimum translation vector (TOTV), and nonlinear optimization over the three rotational angles. A case study on a metallic-coated industrial part, whose colour appearance is difficult to capture with a 3D colour scanner, was performed to demonstrate the effectiveness of the proposed method.
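
The abstract above sketches a three-step 2D-3D registration pipeline; a minimal illustration of its final step, refining only the three rotation angles while the intrinsics and the translation vector stay fixed, might look like the following Python sketch. The projection model, the variable names, and the use of scipy.optimize.least_squares are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: refine three rotation angles for 2D-3D registration,
# with focal length, aspect ratio, and translation (the TOTV) held fixed.
# All names and the least-squares formulation are assumptions for illustration.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_3d, angles, t, f, aspect):
    """Pinhole projection of object-space points after rotating by Euler angles."""
    R = Rotation.from_euler("xyz", angles).as_matrix()
    cam = points_3d @ R.T + t                  # object -> camera coordinates
    u = f * cam[:, 0] / cam[:, 2]              # perspective division
    v = f * aspect * cam[:, 1] / cam[:, 2]
    return np.column_stack([u, v])

def refine_rotation(points_3d, points_2d, t, f, aspect, init_angles=(0.0, 0.0, 0.0)):
    """Minimize 2D reprojection error over the three rotation angles only."""
    residual = lambda a: (project(points_3d, a, t, f, aspect) - points_2d).ravel()
    return least_squares(residual, init_angles).x
```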

Rotational Characteristics of Target Registration Error for Contour-based Registration in Neuronavigation System: A Phantom Study (뉴로내비게이션 시스템 표면정합에 대한 병변 정합 오차의 회전적 특성 분석: 팬텀 연구)

  • Park, Hyun-Joon;Mun, Joung Hwan;Yoo, Hakje;Shin, Ki-Young;Sim, Taeyong
    • Journal of Biomedical Engineering Research
    • /
    • v.37 no.2
    • /
    • pp.68-74
    • /
    • 2016
  • In this study, we investigated the rotational characteristics, comprising the directionality and linearity of the target registration error (TRE), as a preliminary study toward enhancing the accuracy of contour-based registration in neuronavigation. For the experiment, two rigid head phantoms with different faces, each with a specially designed target frame fixed inside, were used. Three-dimensional coordinates of the facial-surface point cloud and the target points of the phantoms were acquired using computed tomography (CT) and a 3D scanner. The iterative closest point (ICP) method was used to register the two point clouds, and the directionality and linearity of the TRE over the whole head were calculated from the 3D positions of the targets after registration. The results showed that the TRE had a consistent direction over the whole head region and increased roughly linearly with distance from the facial surface, although the linearity was not high. These results indicate that the TRE can be decreased by controlling the orientation of the facial-surface point cloud acquired from the scanner, and that the TRE at the lesion can be predicted from the surface registration error. In further studies, a contour-based registration method that accounts for the rotational characteristics of the TRE should be developed to improve accuracy.
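
Since the registration step above is the standard iterative closest point (ICP) algorithm, a generic textbook version, together with a TRE check on a held-out target point, is sketched below. The point cloud names and the SVD-based transform estimate are assumptions; the paper's own implementation details are not reproduced.

```python
# Illustrative sketch only: basic ICP rigid registration between two point
# clouds plus a target registration error (TRE) check on a held-out target.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src onto dst."""
    sc, dc = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

def icp(src, dst, iters=50):
    """Iteratively match closest points and re-estimate the rigid transform."""
    tree = cKDTree(dst)
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)             # closest points in dst
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# TRE: apply the surface-derived transform to a known target and compare.
# target_scanner / target_ct would come from the phantom's target frame.
# tre = np.linalg.norm((R_total @ target_scanner + t_total) - target_ct)
```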

Respiratory Motion Correction on PET Images Based on 3D Convolutional Neural Network

  • Hou, Yibo;He, Jianfeng;She, Bo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.7
    • /
    • pp.2191-2208
    • /
    • 2022
  • Motion blur in PET (positron emission tomography) images induced by respiratory motion reduces imaging quality. Although existing methods perform well for respiratory motion correction in medical practice, there are still many aspects that can be improved. In this paper, an improved 3D unsupervised framework, Res-Voxel, based on the U-Net architecture, is proposed for motion correction. Res-Voxel uses multiple residual structures to improve the prediction of the deformation field, and uses smaller convolution kernels to reduce the number of model parameters and the amount of computation required. The proposed method is tested on simulated PET imaging data and on clinical data. Experimental results demonstrate that the proposed method achieved Dice indices of 93.81%, 81.75% and 75.10% on the simulated geometric phantom data, the voxel phantom data, and the clinical data, respectively. The results demonstrate that the proposed method can improve the registration and correction performance of PET images.
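
The Dice index reported above is a standard overlap measure; a minimal sketch of how it is typically computed for two binary volumes follows (the function name and masks are illustrative only).

```python
# Illustrative sketch only: the Dice similarity index used above to score how
# well a corrected/warped volume overlaps a reference segmentation.
import numpy as np

def dice_index(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice = 2|A∩B| / (|A|+|B|) for two boolean volumes of the same shape."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```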

A Study on 3D Graphics Registration of Image Sequences using Planar Surface (평면을 이용한 이미지 시퀀스에서의 3D 그래픽 정합에 대한 연구)

  • 김주완;장병태
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.04c
    • /
    • pp.190-192
    • /
    • 2003
  • This paper proposes a method that estimates the intrinsic and extrinsic camera parameters from the image information of a planar object in space, using a sequence of images obtained from a camera whose calibration is unknown, and then uses these parameters to register virtual 3D graphics onto the image sequence. Compared with existing methods, the proposed approach makes it easy to register virtual 3D graphic objects onto the images, minimizes the visible registration error, and integrates easily with 3D graphics tools such as DirectX. This work is expected to be useful for developing interactive video content in which 3D graphics are composited onto video footage.
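
A common way to register virtual 3D content to a planar surface in video is to estimate a homography to the plane and decompose it into a camera pose; the generic OpenCV-based sketch below illustrates that idea under the assumption of known intrinsics K, and is not the specific method of the paper.

```python
# Illustrative sketch only: recover a camera pose from a planar surface via a
# homography, a common way to anchor virtual 3D content to a plane in video.
import cv2
import numpy as np

def pose_from_plane(plane_pts, image_pts, K):
    """plane_pts: Nx2 points on the plane (z=0); image_pts: Nx2 pixels; K: 3x3 intrinsics."""
    H, _ = cv2.findHomography(plane_pts, image_pts, cv2.RANSAC)
    # For a z=0 plane, H ~ K [r1 r2 t]; recover the columns from K^-1 H.
    A = np.linalg.inv(K) @ H
    scale = 1.0 / np.linalg.norm(A[:, 0])
    r1, r2, t = scale * A[:, 0], scale * A[:, 1], scale * A[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)              # re-orthonormalize the rotation
    return U @ Vt, t
```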


Automated Satellite Image Co-Registration using Pre-Qualified Area Matching and Studentized Outlier Detection (사전검수영역기반정합법과 't-분포 과대오차검출법'을 이용한 위성영상의 '자동 영상좌표 상호등록')

  • Kim, Jong Hong;Heo, Joon;Sohn, Hong Gyoo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.4D
    • /
    • pp.687-693
    • /
    • 2006
  • Image co-registration is the process of overlaying two images of the same scene, one of which serves as a reference image while the other is geometrically transformed to match it. In order to improve the efficiency and effectiveness of the co-registration approach, the authors propose a pre-qualified area matching algorithm composed of feature extraction with the Canny operator and area matching based on the cross-correlation coefficient. To refine the matching points, outlier detection using studentized residuals was applied, iteratively removing outliers at the three-standard-deviation level. Through the pre-qualification and refinement processes, the computation time was significantly reduced and the registration accuracy was enhanced. A prototype of the proposed algorithm was implemented, and a performance test on 3 Landsat images of Korea showed that: (1) the average RMSE of the approach was 0.435 pixel; (2) the average number of matching points was over 25,573; (3) the average processing time was 4.2 min per image on a regular workstation equipped with a 3 GHz Intel Pentium 4 CPU and 1 GB of RAM. The proposed approach achieved robustness, full automation, and time efficiency.
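
As a rough illustration of pre-qualified area matching, the sketch below extracts Canny edge pixels, correlates small windows with the normalized cross-correlation coefficient, and then iteratively rejects matches beyond three standard deviations. Window sizes, thresholds, and the use of a simple standardized-residual cut in place of the paper's studentized-residual test are assumptions.

```python
# Illustrative sketch only: edge-guided window matching followed by iterative
# 3-sigma rejection of bad matches. Parameters are assumptions for illustration.
import cv2
import numpy as np

def match_points(ref, tgt, step=64, win=31, search=15):
    """Correlate windows around Canny edge pixels of ref inside tgt (same-size uint8 images)."""
    edges = cv2.Canny(ref, 50, 150)
    ys, xs = np.nonzero(edges)
    h, pairs = win // 2, []
    for y, x in zip(ys[::step], xs[::step]):
        if min(y, x) < h + search or y >= ref.shape[0] - h - search or x >= ref.shape[1] - h - search:
            continue                                    # keep windows inside both images
        tpl = ref[y - h:y + h + 1, x - h:x + h + 1]
        roi = tgt[y - h - search:y + h + search + 1, x - h - search:x + h + search + 1]
        res = cv2.matchTemplate(roi, tpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > 0.8:                                 # keep only strong correlations
            pairs.append((x, y, x + loc[0] - search, y + loc[1] - search))
    return np.array(pairs, dtype=float)

def reject_outliers(pairs, k=3.0):
    """Iteratively drop matches whose shift deviates more than k standard deviations."""
    while len(pairs) > 3:
        shift = pairs[:, 2:4] - pairs[:, 0:2]
        r = np.linalg.norm(shift - shift.mean(0), axis=1)
        if r.std() == 0:
            break
        keep = r < k * r.std()
        if keep.all():
            break
        pairs = pairs[keep]
    return pairs
```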

4-Dimensional dose evaluation using deformable image registration in respiratory gated radiotherapy for lung cancer (폐암의 호흡동조방사선치료 시 변형영상정합을 이용한 4차원 선량평가)

  • Um, Ki Cheon;Yoo, Soon Mi;Yoon, In Ha;Back, Geum Mun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.30 no.1_2
    • /
    • pp.83-95
    • /
    • 2018
  • Purpose : After planning respiratory-gated radiotherapy for lung cancer, the movement and volume changes of the normal structures to be spared near the target are often not considered during dose evaluation. This study carried out a 4-D dose evaluation that reflects the movement of normal structures at each phase of respiratory-gated radiotherapy by using deformable image registration, a technique commonly used for adaptive radiotherapy. In addition, the study discusses the need for such analysis and offers recommendations regarding the movement and volume changes of normal structures caused by the patient's breathing pattern during the evaluation of treatment plans. Materials and methods : The subjects were 10 lung cancer patients who received respiratory-gated radiotherapy. Using Eclipse (Ver. 13.6, Varian, USA), the structures seen in the top-phase CT image were propagated to every phase via the Propagation or Segmentation Wizard menu, and each structure's movement and volume were analyzed with the center-to-center method. The image and dose distribution of each phase were then deformed onto the top-phase CT image for the 4-dimensional dose evaluation using the VELOCITY program. A QUASAR phantom (Modus Medical Devices) and GAFCHROMIC EBT3 film (Ashland, USA) were used to verify the 4-D dose distribution in terms of the 4-D gamma pass rate. Result : The movement between the inspiration and expiration phases was largest in the axial direction of the right lung, at 0.989 ± 0.34 cm, and smallest in the lateral direction of the spinal cord, at -0.001 cm. The volume of the right lung showed the greatest rate of change, at 33.5 %. The maximal and minimal differences in the PTV conformity index and homogeneity index between the 3-dimensional and 4-dimensional dose evaluations were 0.076 and 0.021, and 0.011 and 0.0, respectively. Differences of 0.0045~2.76 % were found in the normal structures with the 4-D dose evaluation. The 4-D gamma pass rate of every patient exceeded the 95 % reference pass rate. Conclusion : The difference in the PTV conformity index was significant in all patients with the 4-D dose evaluation, whereas no significant difference was observed between the two dose evaluations for the homogeneity index. The 4-D dose distribution was more homogeneous than the 3-D dose distribution because the motion due to breathing helps fill out the PTV margin area. There was a difference of 0.004~2.76 % in the 4-D evaluation of normal structures, and the difference between the two evaluation methods was significant for all normal structures except the spinal cord. This study shows that doses to normal structures could be underestimated by 3-D dose evaluation. Therefore, 4-D dose evaluation with deformable image registration should be considered when dose changes are expected in normal structures due to the patient's breathing pattern; it is considered a more realistic dose evaluation method because it reflects the movement of normal structures caused by the patient's breathing.
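
The core mechanism of the 4-D dose evaluation described above is warping each phase's dose onto the reference phase with a deformation vector field and accumulating the result; a minimal sketch of that mechanism is given below. The field convention and the simple averaging are assumptions and do not reproduce the VELOCITY workflow.

```python
# Illustrative sketch only: warping per-phase dose grids onto a reference phase
# with a deformation vector field (DVF) and accumulating them.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_dose(dose, dvf):
    """dose: (Z,Y,X) grid; dvf: (3,Z,Y,X) displacement (voxels) from reference to phase."""
    grid = np.indices(dose.shape).astype(float)           # reference-phase voxel coordinates
    sample_at = grid + dvf                                 # where each reference voxel maps to
    return map_coordinates(dose, sample_at, order=1, mode="nearest")

def accumulate_4d_dose(phase_doses, phase_dvfs):
    """Average the warped per-phase doses into a single reference-phase dose."""
    warped = [warp_dose(d, f) for d, f in zip(phase_doses, phase_dvfs)]
    return np.mean(warped, axis=0)
```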


Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View (다른 화각을 가진 라이다와 칼라 영상 정보의 정합 및 깊이맵 생성)

  • Choi, Jaehoon;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.6
    • /
    • pp.28-34
    • /
    • 2020
  • This paper proposes an approach to the fusion of two heterogeneous sensors with different fields of view (FOV): a LIDAR and an RGB camera. Registration between the data captured by the LIDAR and the RGB camera provides the fusion result, and registration is complete once a depthmap corresponding to the 2-dimensional RGB image has been generated. For this fusion, an RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire the depth and image data, respectively. The LIDAR sensor provides distance information between the sensor and objects in the scene near the sensor, while the RGB camera provides a 2-dimensional image with color information. Fusing the 2D image with the depth information enables better performance in applications such as object detection and tracking. For instance, driver assistance systems, robotics, or other systems that require visual information processing might find this work useful. Since the LIDAR provides only depth values, processing is needed to generate a depthmap that corresponds to the RGB image. Experimental results are provided to validate the proposed approach.
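
A typical way to build such a depthmap is to project the LIDAR points into the image plane using calibrated extrinsics and intrinsics; the sketch below illustrates this generic recipe under the assumption that R, t, and K are already known, and is not the paper's specific procedure.

```python
# Illustrative sketch only: projecting LIDAR points into an RGB image to build a
# sparse depthmap, assuming a known LIDAR-to-camera calibration.
import numpy as np

def lidar_to_depthmap(points, R, t, K, image_shape):
    """points: Nx3 LIDAR points; R, t: LIDAR->camera transform; K: 3x3 camera intrinsics."""
    cam = points @ R.T + t                         # LIDAR frame -> camera frame
    cam = cam[cam[:, 2] > 0]                       # keep points in front of the camera
    uvw = cam @ K.T                                # perspective projection
    uv = uvw[:, :2] / uvw[:, 2:3]
    depth = np.zeros(image_shape[:2], dtype=float)
    u, v = np.round(uv[:, 0]).astype(int), np.round(uv[:, 1]).astype(int)
    inside = (u >= 0) & (u < image_shape[1]) & (v >= 0) & (v < image_shape[0])
    depth[v[inside], u[inside]] = cam[inside, 2]   # sparse depth; densify by interpolation if needed
    return depth
```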

Automatic Lung Registration using Local Distance Propagation (지역적 거리전파를 이용한 자동 폐 정합)

  • Lee Jeongjin;Hong Helen;Shin Yeong Gil
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.1
    • /
    • pp.41-49
    • /
    • 2005
  • In this paper, we propose an automatic lung registration technique using local distance propagation to correct the differences caused by patient movement between two temporal abdominal CT images acquired from the same patient at different times. The proposed method is composed of three steps. First, the lung boundaries of the two temporal volumes are extracted, and optimal bounding volumes enclosing the lungs are initially registered. Second, a 3D distance map is generated from the lung boundaries of the initially acquired volume by local distance propagation. Third, the two images are registered at the position where the distance between the two surfaces is minimized according to a selective distance measure. In the experiments, we evaluated speed and robustness on three patients' data by comparison with chamfer-matching registration. Our results show that two volumes can be registered at the optimal location rapidly and robustly using the selective distance measure on the locally propagated 3D distance map.
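
In the spirit of the distance-map-based matching described above (and of the chamfer matching used for comparison), the sketch below precomputes a 3D distance map from one lung boundary and scores candidate translations of the other boundary by the mean sampled distance. The brute-force search and plain mean distance are assumptions; the paper's selective distance measure is not reproduced.

```python
# Illustrative sketch only: distance-map-driven surface registration, scoring
# candidate rigid translations of a surface against a precomputed 3D distance map.
import numpy as np
from scipy.ndimage import distance_transform_edt, map_coordinates

def surface_distance_map(boundary_mask):
    """Distance (in voxels) to the nearest boundary voxel, for a boolean mask."""
    return distance_transform_edt(~boundary_mask)

def score_translation(dist_map, surface_pts, shift):
    """Mean distance of the translated surface points, sampled from the map."""
    coords = (surface_pts + shift).T               # shape (3, N) for map_coordinates
    return map_coordinates(dist_map, coords, order=1, mode="nearest").mean()

def register_by_translation(dist_map, surface_pts, search=range(-10, 11, 2)):
    """Brute-force the translation with the smallest mean surface distance."""
    candidates = [(score_translation(dist_map, surface_pts, np.array([dz, dy, dx])),
                   (dz, dy, dx))
                  for dz in search for dy in search for dx in search]
    return min(candidates)[1]
```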

Evaluation of Magnetic Resonance Imaging using Image Co-registration in Stereotactic Radiosurgery (정위방사선수술시 영상공동등록을 이용한 자기공명영상 유용성 평가)

  • Jin, Seongjin;Cho, Jihwan;Park, Cheolwoo
    • Journal of the Korean Society of Radiology
    • /
    • v.11 no.4
    • /
    • pp.235-240
    • /
    • 2017
  • The purpose of this study is to confirm the safety of the clinical application of image co-registration in stereotactic radiosurgery by evaluating the 3D positioning accuracy of magnetic resonance imaging registered via image co-registration. We performed a retrospective study using three-dimensional coordinate measurements of 32 patients who underwent stereotactic radiosurgery and had follow-up magnetic resonance imaging registered with image co-registration. The 3-dimensional coordinate errors were 1.0443 ± 0.5724 mm (0.10~1.89) at the anterior commissure and 1.0348 ± 0.5473 mm (0.36~2.24) at the posterior commissure. The mean error of MR1 (3.0 T) was lower than that of MR2 (1.5 T). It is necessary to minimize the error of magnetic resonance imaging in treatment planning that uses the image co-registration technique, and to verify it.
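
The coordinate errors reported above are Euclidean distances between landmark positions (e.g. the anterior and posterior commissures) in the reference and co-registered studies; a minimal sketch of that computation follows, with illustrative array names.

```python
# Illustrative sketch only: 3-D landmark error between reference coordinates and
# the same landmarks measured in the co-registered MR study.
import numpy as np

def landmark_errors(ref_points: np.ndarray, reg_points: np.ndarray) -> np.ndarray:
    """ref_points, reg_points: Nx3 coordinates of the same landmarks (mm)."""
    return np.linalg.norm(ref_points - reg_points, axis=1)

# err = landmark_errors(ac_pc_reference, ac_pc_coregistered)
# print(err.mean(), err.std(), err.min(), err.max())
```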

Efficient 3D Model based Face Representation and Recognition Algorithm using Pixel-to-Vertex Map (PVM)

  • Jeong, Kang-Hun;Moon, Hyeon-Joon
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.1
    • /
    • pp.228-246
    • /
    • 2011
  • A 3D model based approach to face representation and recognition has been investigated as a robust solution to pose and illumination variation. Since a generative 3D face model consists of a large number of vertices, a 3D model based face recognition system is generally inefficient in computation time and complexity. In this paper, we propose a novel 3D face representation algorithm based on a pixel-to-vertex map (PVM) to optimize the number of vertices. We obtain the shape and texture coefficient vectors of the 3D model by fitting it to an input face using inverse compositional image alignment (ICIA) and use them to evaluate face recognition performance. Experimental results show that the proposed face representation and recognition algorithm is efficient in computation time while maintaining reasonable accuracy.
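
As a rough guess at the general idea of a pixel-to-vertex map, the sketch below keeps, for each image pixel, one visible vertex of the projected model, which shrinks the vertex set used in later fitting. The projection convention and the nearest-along-depth rule are assumptions, not the authors' PVM construction.

```python
# Illustrative sketch only: a pixel-to-vertex map that keeps, for each pixel, the
# nearest projected model vertex, reducing the vertex set used in fitting.
import numpy as np

def pixel_to_vertex_map(vertices, K, image_shape):
    """vertices: Nx3 in camera coordinates; K: 3x3 intrinsics. Returns {pixel: vertex index}."""
    uvw = vertices @ K.T
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    pvm = {}
    for idx, ((u, v), z) in enumerate(zip(uv, vertices[:, 2])):
        if 0 <= u < image_shape[1] and 0 <= v < image_shape[0]:
            if (v, u) not in pvm or z < vertices[pvm[(v, u)], 2]:   # keep the nearest vertex
                pvm[(v, u)] = idx
    return pvm
```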