• Title/Summary/Keyword: 3D Voxel


Fast and Accurate Rigid Registration of 3D CT Images by Combining Feature and Intensity

  • June, Naw Chit Too; Cui, Xuenan; Li, Shengzhe; Kim, Hak-Il; Kwack, Kyu-Sung
    • Journal of Computing Science and Engineering / v.6 no.1 / pp.1-11 / 2012
  • Computed tomography (CT) images are widely used to analyze the temporal evolution of a disease or to monitor its progression. Follow-up examinations of CT scans of the same patient require a 3D registration technique. In this paper, an automatic and robust method is proposed for the rigid registration of 3D CT images. The proposed method involves two steps. First, the two CT volumes are aligned based on their principal axes; then, this initial alignment is refined by optimizing a voxel-intensity similarity score. Normalized cross-correlation (NCC) is used as the similarity metric, and a downhill simplex method is employed to find the optimal score. The performance of the algorithm is evaluated on phantom images and synthetic knee CT images. Because the initial transformation parameters are extracted from the principal axes of the binary volumes, the search space in the optimization step is reduced; thus, the overall registration time decreases without any deterioration in accuracy. Preliminary experimental results demonstrate that the proposed method can be applied to rigid registration problems involving real patient images.
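
The pipeline above (principal-axis initialization followed by NCC refinement with a downhill simplex search) can be sketched as follows. This is only an illustrative outline, not the authors' implementation; the helper names and the use of SciPy's Nelder-Mead optimizer and `affine_transform` resampler are assumptions.

```python
# Minimal sketch: principal-axis initialization + NCC refinement (illustrative only).
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

def centroid_and_axes(binary_vol):
    """Centroid and principal axes (eigenvectors of the voxel-coordinate covariance)."""
    coords = np.argwhere(binary_vol)
    c = coords.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov((coords - c).T))
    return c, vecs

def ncc(a, b):
    """Normalized cross-correlation between two volumes of equal shape."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def apply_rigid(moving, params, center):
    """Rigid transform: 3 Euler angles + 3 translations, expressed in scipy's
    output-to-input resampling convention."""
    ax, ay, az, tx, ty, tz = params
    Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx
    offset = center - R @ center + np.array([tx, ty, tz])
    return affine_transform(moving, R, offset=offset, order=1)

def register(fixed, moving, fixed_mask, moving_mask):
    # Step 1: coarse alignment from the binary volumes' centroids/principal axes
    # (a fuller version would also derive initial rotation angles from the axes).
    cf, _ = centroid_and_axes(fixed_mask)
    cm, _ = centroid_and_axes(moving_mask)
    init = np.array([0.0, 0.0, 0.0, *(cm - cf)])  # shift moving centroid onto fixed centroid
    # Step 2: refine by maximizing NCC with the downhill simplex (Nelder-Mead) method.
    cost = lambda p: -ncc(fixed, apply_rigid(moving, p, cf))
    return minimize(cost, init, method="Nelder-Mead").x
```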

Fast Multi-GPU based 3D Backprojection Method (다중 GPU 기반의 고속 삼차원 역전사 기법)

  • Lee, Byeong-Hun; Lee, Ho; Kye, Hee-Won; Shin, Yeong-Gil
    • Journal of Korea Multimedia Society / v.12 no.2 / pp.209-218 / 2009
  • 3D backprojection is a reconstruction algorithm that generates volume data consisting of tomographic images, recovering the spatial information of the original 3D object from hundreds of 2D projections. The computation time of backprojection increases in proportion to the volume size and the number of projection images, since the value of every voxel is calculated from the corresponding pixels of hundreds of projections. To reduce this time, fast GPU-based 3D backprojection methods have recently been studied, and their performance has improved significantly. This paper presents two multi-GPU methods that maximize GPU parallelism and compares their efficiency with respect to both the number of projections and the volume size. The first method allocates half of the volume on each GPU and generates that partial volume independently using all projections. The second method allocates the full volume on each GPU, generates an incomplete volume from half of the projections, and then merges the incomplete volumes on the CPU to obtain the final result. In the experiments, the first method performed better when the entire volume could be allocated on the GPU; otherwise, the second method was more efficient.
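
The two partitioning strategies compared in the paper can be illustrated with a simplified, CPU-only sketch. The backprojection model below (parallel-beam toy geometry, detector row equal to the voxel slice) is a deliberately crude stand-in for the real projection geometry, and the two "GPUs" are just two sequential calls.

```python
# Illustrative NumPy simulation of the two multi-GPU partitioning strategies.
import numpy as np

def backproject(volume_shape, projections, angles, z_range=None):
    """Very simplified voxel-driven backprojection: every voxel accumulates the
    detector pixel it maps to in each projection (parallel-beam toy geometry)."""
    nz, ny, nx = volume_shape
    z0, z1 = (0, nz) if z_range is None else z_range
    vol = np.zeros((z1 - z0, ny, nx))
    yy, xx = np.mgrid[0:ny, 0:nx]
    yc, xc = yy - ny / 2.0, xx - nx / 2.0
    for proj, theta in zip(projections, angles):
        # Detector column hit by each (y, x) voxel position at this angle.
        s = xc * np.cos(theta) + yc * np.sin(theta) + proj.shape[1] / 2.0
        s = np.clip(np.round(s).astype(int), 0, proj.shape[1] - 1)
        for z in range(z0, z1):
            vol[z - z0] += proj[z, s]  # detector row == voxel slice (simplification)
    return vol

def strategy_volume_split(volume_shape, projections, angles):
    """Method 1: each GPU holds half of the volume and uses all projections."""
    nz = volume_shape[0]
    top = backproject(volume_shape, projections, angles, (0, nz // 2))
    bottom = backproject(volume_shape, projections, angles, (nz // 2, nz))
    return np.concatenate([top, bottom], axis=0)

def strategy_projection_split(volume_shape, projections, angles):
    """Method 2: each GPU holds the full volume but uses half of the projections;
    the incomplete volumes are merged (summed) on the CPU afterwards."""
    half = len(projections) // 2
    part_a = backproject(volume_shape, projections[:half], angles[:half])
    part_b = backproject(volume_shape, projections[half:], angles[half:])
    return part_a + part_b
```

Because backprojection is additive over projections, both strategies produce the same volume; they differ only in memory footprint and communication cost, which is the trade-off the paper measures.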

Development of a Software Program for the Automatic Calculation of the Pulp/Tooth Volume Ratio on the Cone-Beam Computed Tomography

  • Lee, Hoon-Ki; Lee, Jeong-Yun
    • Journal of Oral Medicine and Pain / v.41 no.3 / pp.85-90 / 2016
  • Purpose: The aim of this study was to develop automated software that extracts the tooth and pulpal areas from sectional cone-beam computed tomography (CBCT) images, providing a more reproducible, objective, and time-saving way to measure the pulp/tooth volume ratio. Methods: The software was developed using MATLAB (MathWorks). To determine the optimal threshold for region-of-interest (ROI) extraction, a user interface for adjusting the threshold of the extraction algorithm was added. The default threshold was determined after several trials so that the outline of the extracted ROI fit the tooth and pulpal outlines. To test the effect of the starting point initially selected in the pulpal area on the final result, the pulp/tooth volume ratio was calculated five times with five different starting points. Results: The navigation interface consists of image loading, zoom-in, zoom-out, and move tools. The ROI extraction process can be displayed by checking an option box. The default threshold is adjusted so that the extracted tooth area covers the whole tooth, including dentin, cementum, and enamel. If necessary, the result can be corrected by the examiner or by changing the hard-tissue density threshold. The extracted tooth and pulp areas are reconstructed in three dimensions (3D), and the pulp/tooth volume ratio is calculated by voxel counting on the reconstructed model. The differences among the pulp/tooth volume ratios obtained from the five extraction starting points were not significant. Conclusions: Further studies based on a large-scale sample will explore the threshold that yields the most significant relationship between age and the pulp/tooth volume ratio, as well as the tooth most strongly correlated with age. If the software can be improved to use the whole CBCT data set rather than only sectional images and to detect the pulp canal in the original 3D images generated by the CBCT software itself, it will be more promising for practical use.
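
The core of the workflow described above (threshold-based extraction grown from a user-selected starting point, followed by voxel counting) can be sketched as below. This is not the authors' MATLAB code; the SciPy connected-component approach and the threshold values are assumptions for illustration.

```python
# Hedged sketch of threshold-based tooth/pulp extraction and voxel-counting ratio.
import numpy as np
from scipy.ndimage import label, binary_fill_holes

def extract_region(volume, seed, low, high):
    """Keep the connected component within [low, high] that contains the seed voxel."""
    mask = (volume >= low) & (volume <= high)
    labels, _ = label(mask)
    if labels[seed] == 0:
        raise ValueError("seed voxel lies outside the thresholded range")
    return labels == labels[seed]

def pulp_tooth_ratio(volume, tooth_seed, pulp_seed,
                     hard_tissue_threshold=500.0, pulp_threshold=300.0):
    # Tooth: hard tissue (dentin, cementum, enamel) above the adjustable hard-tissue
    # threshold; holes filled so the pulp cavity is included in the tooth mask.
    tooth = extract_region(volume, tooth_seed, hard_tissue_threshold, volume.max())
    tooth = binary_fill_holes(tooth)
    # Pulp: low-density region grown from the starting point inside the pulp chamber,
    # restricted to the tooth mask.
    pulp = extract_region(volume, pulp_seed, volume.min(), pulp_threshold) & tooth
    # Ratio by voxel counting on the reconstructed 3D masks.
    return pulp.sum() / tooth.sum()
```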

Arrangement and analysis of multi-isocenter based on 3-D spatial unit in stereotactic radiosurgery (정위적 방사선 수술시 3차원적 공간상의 체적소에 기반한 회전중심점들(Multi-isocenter)의 표적내 자동적 배치 및 분석)

  • Choi, Kyoung-Sik; Oh, Seung-Jong; Lee, Jeong-Woo; Suh, Tae-Suk; Choe, Bo-Young; Kim, Moon-Chan
    • Proceedings of the Korean Society of Medical Physics Conference / 2004.11a / pp.75-77 / 2004
  • Stereotactic radiosurgery (SRS) is a technique that delivers a high dose to a particular target region and a low dose to critical organs using only one or a few irradiations while the patient is fixed with a stereotactic frame. The optimized plan is found by repeatedly combining beam parameters and checking the prescribed dose level in the tumor, a trial-and-error process that requires a great deal of time, effort, and experience. Therefore, we developed an automatic arrangement of multiple isocenters within an irregularly shaped tumor. For arbitrary targets, this method, which is based on voxel units of 3D space, satisfies the dose conformity and dose homogeneity criteria for the targets relative to the RTOG radiosurgery plan guidelines.
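
The abstract does not spell out the placement algorithm, so the sketch below shows only a generic voxel-based heuristic (greedy placement at the deepest uncovered voxel with the largest collimator that fits) that is sometimes used to illustrate multi-isocenter arrangement. It is not the authors' method, and all parameters are placeholders.

```python
# Generic greedy, voxel-based multi-isocenter placement (illustration only).
import numpy as np
from scipy.ndimage import distance_transform_edt

def place_isocenters(target_mask, voxel_size_mm, collimator_radii_mm, coverage=0.95):
    """target_mask: boolean voxel mask of the target volume.
    Greedily place isocenters at the voxel deepest inside the uncovered target."""
    remaining = target_mask.copy()
    total = target_mask.sum()
    isocenters = []
    grid = np.indices(target_mask.shape).reshape(3, -1).T  # all voxel indices (z, y, x)
    while remaining.sum() > (1.0 - coverage) * total:
        # Distance (in mm) from each uncovered target voxel to the uncovered boundary.
        depth = distance_transform_edt(remaining, sampling=voxel_size_mm)
        center = np.unravel_index(np.argmax(depth), depth.shape)
        # Largest collimator sphere that still fits inside the remaining target.
        fitting = [r for r in collimator_radii_mm if r <= depth[center]]
        radius = max(fitting) if fitting else min(collimator_radii_mm)
        isocenters.append((center, radius))
        # Mark the voxels covered by this sphere as done.
        dist = np.linalg.norm((grid - np.array(center)) * np.asarray(voxel_size_mm), axis=1)
        remaining &= ~(dist <= radius).reshape(target_mask.shape)
    return isocenters
```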

Three Dimensional Target Volume Reconstruction from Multiple Projection Images (다중투사영상을 이용한 표적체적의 3차원 재구성)

  • 정광호; 진호상; 이형구; 최보영; 서태석
    • Progress in Medical Physics / v.14 no.3 / pp.167-174 / 2003
  • In the radiation treatment planning (RTP) process, especially for stereotactic radiosurgery (SRS), knowing the exact volume, shape, and precise position of a lesion is very important. Sometimes X-ray projection images, such as angiograms, are the best choice for lesion identification. However, while the exact target position can be obtained from two projection images, 3D target reconstruction from only two projections is considered impossible. The aim of this study was to reconstruct the 3D target volume from multiple projection images. It was assumed that the exact target position was known in advance, and all processes were performed in target coordinates, with the origin at the center of the target. We used six projections: two to define a reconstruction box and four for image acquisition. The reconstruction box was made up of a 3D matrix of voxels. Projection images were transformed into 3D within this virtual box using a geometric back-projection method. The resolution and accuracy of the reconstructed target volume depended on the target size. The algorithm was applied to an ellipsoid model and a horseshoe-shaped model. Projection images were created geometrically using the C programming language, and reconstruction was performed in C and MATLAB ver. 6 (The MathWorks Inc., USA). For the ellipsoid model, the reconstructed volume was slightly overestimated, but the target shape and position proved to be correct. For the horseshoe-shaped model, the reconstructed volume was somewhat different from the original target model, but there was a considerable improvement in determining the target volume.
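
A geometric back-projection of binary target silhouettes into a voxel box can be sketched as below. The parallel-projection model, the 2x3 view matrices, and the box conventions are illustrative simplifications; the paper's projection geometry and number of views differ in detail.

```python
# Sketch: intersecting back-projected silhouettes inside a voxel reconstruction box.
import numpy as np

def reconstruct_from_silhouettes(box_shape, silhouettes, view_matrices):
    """box_shape: voxel grid centred on the (known) target position.
    silhouettes: list of 2D boolean target images.
    view_matrices: list of 2x3 matrices mapping a voxel coordinate to a pixel offset."""
    nz, ny, nx = box_shape
    # Voxel coordinates relative to the box centre (target coordinates).
    zz, yy, xx = np.indices(box_shape)
    coords = np.stack([xx - nx / 2, yy - ny / 2, zz - nz / 2], axis=-1).reshape(-1, 3)
    occupied = np.ones(coords.shape[0], dtype=bool)
    for sil, M in zip(silhouettes, view_matrices):
        h, w = sil.shape
        px = coords @ M.T + np.array([w / 2.0, h / 2.0])  # project voxels to pixels
        u = np.clip(np.round(px[:, 0]).astype(int), 0, w - 1)
        v = np.clip(np.round(px[:, 1]).astype(int), 0, h - 1)
        # A voxel belongs to the target only if every view sees it inside the silhouette.
        occupied &= sil[v, u]
    return occupied.reshape(box_shape)

def target_volume_mm3(occupancy, voxel_size_mm):
    """Target volume is simply the voxel count times the voxel volume."""
    return occupancy.sum() * float(np.prod(voxel_size_mm))
```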

Comparison of limited- and large-volume cone-beam computed tomography using a small voxel size for detecting isthmuses in mandibular molars

  • de Souza Tolentino, Elen; Andres Amoroso-Silva, Pablo; Alcalde, Murilo Priori; Yamashita, Fernanda Chiguti; Iwaki, Lilian Cristina Vessoni; Rubira-Bullen, Izabel Regina Fischer; Duarte, Marco Antonio Hungaro
    • Imaging Science in Dentistry / v.51 no.1 / pp.27-34 / 2021
  • Purpose: This study was performed to compare the ability of limited- and large-volume cone-beam computed tomography (CBCT) to display isthmuses in the apical root canals of mandibular molars. Materials and Methods: Forty human mandibular first molars with isthmuses in the apical 3 mm of the mesial roots were scanned by micro-computed tomography (micro-CT), and the isthmus thickness, area, and length were recorded. The samples were then examined using 2 CBCT systems, with the smallest voxel size and field of view available for each device. The Mann-Whitney, Friedman, and Dunn multiple comparison tests were performed (α=0.05). Results: The 3D Accuitomo 170 and i-Cat devices detected 77.5% and 75.0% of isthmuses, respectively (P>0.05). For length measurements, there were significant differences between micro-CT and both the 3D Accuitomo 170 and the i-Cat (P<0.05). Conclusion: Both CBCT systems performed similarly and failed to detect isthmuses in the apical third in some cases. CBCT still does not equal the performance of micro-CT in isthmus detection, but it is nonetheless a valuable tool in endodontic practice.

DEM generation from an IKONOS stereo pair using EpiMatch and Graph-Cut algorithms

  • Kim, Tae-Jung; Im, Yong-Jo; Kim, Ho-Won; Kweon, In-So
    • Proceedings of the KSRS Conference / 2002.10a / pp.524-529 / 2002
  • In this paper, we report the development of two DEM (digital elevation model) generation algorithms for urban areas from an IKONOS stereo pair. One ("EpiMatch") was originally developed for SPOT images and has been modified for IKONOS images; it uses epipolar geometry for accurate DEM generation. The other is based on a graph-cut algorithm in 3D voxel space and is believed to handle height discontinuities better than EpiMatch. An IKONOS image pair over the Taejon city area was used for testing. Using ground control points obtained from differential GPS, a camera model was set up and stereo matching was applied. As a result, two DEMs over urban areas were produced. In the DEM from EpiMatch, small houses appear as small "cloudy" patches, while large apartment and industrial buildings are visually identifiable. In the DEM from the graph-cut algorithm, better height information was achieved on building boundaries. The results show that both algorithms can generate DEMs from IKONOS images, although more research is required on handling height discontinuities (for EpiMatch) and on faster computation (for graph-cut).
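
The graph-cut method operates in a 3D (x, y, height) voxel space in which each voxel carries a photo-consistency cost between the two images. The sketch below only builds such a cost volume and extracts a naive winner-take-all height map; the paper instead optimizes the surface with a graph cut (not shown), and the height-to-disparity mapping here is an illustrative stand-in for the real sensor geometry.

```python
# Cost volume over (height, row, column) plus a winner-take-all DEM (illustration only).
import numpy as np

def cost_volume(left, right, heights, disparity_of_height):
    """left, right: grayscale images; heights: candidate terrain heights;
    disparity_of_height: function mapping a height to a horizontal pixel shift."""
    h, w = left.shape
    costs = np.full((len(heights), h, w), np.inf)
    for k, z in enumerate(heights):
        d = int(round(disparity_of_height(z)))
        if abs(d) >= w:
            continue
        shifted = np.roll(right, d, axis=1)
        # Absolute intensity difference as a simple photo-consistency cost.
        costs[k] = np.abs(left.astype(float) - shifted.astype(float))
    return costs

def wta_dem(costs, heights):
    """Per-pixel height with the lowest cost (no smoothness term, unlike a graph cut)."""
    return np.asarray(heights)[np.argmin(costs, axis=0)]
```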

Determination of Dose Correction Factor for Energy and Directional Dependence of the MOSFET Dosimeter in an Anthropomorphic Phantom (인형 모의피폭체내 MOSFET 선량계의 에너지 및 방향 의존도를 고려하기 위한 선량보정인자 결정)

  • Cho, Sung-Koo; Choi, Sang-Hyoun; Na, Seong-Ho; Kim, Chan-Hyeong
    • Journal of Radiation Protection and Research / v.31 no.2 / pp.97-104 / 2006
  • In recent years, the MOSFET dosimeter has been widely used in medical applications such as dose verification in radiation therapy and diagnostic procedures. The MOSFET dosimeter is, however, mainly made of silicon and shows some energy dependence for low-energy photons. As a result, it tends to overestimate the dose from low-energy scattered photons in a phantom. This study determines correction factors that compensate for these dependences of the MOSFET dosimeter in the ATOM phantom. We first constructed a computational model of the ATOM phantom based on its 3D CT image data. The voxel phantom was then implemented in a Monte Carlo simulation code and used to calculate the energy spectrum of the photon field at each MOSFET dosimeter location in the phantom. Finally, the correction factors were calculated from the energy spectrum of the photon field at the dosimeter locations and the pre-determined energy and directional dependence of the MOSFET dosimeter. Our results for $^{60}$Co and $^{137}$Cs photon fields show that the correction factors range from 0.89 to 0.97 over all MOSFET dosimeter locations in the phantom.
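
The final step described above amounts to folding the in-phantom photon spectrum with the dosimeter's pre-determined energy response. The sketch below shows that folding using entirely made-up placeholder curves; the dose weighting is deliberately crude and none of the numbers come from the paper.

```python
# Correction factor from a photon spectrum and an energy-response curve (placeholders).
import numpy as np

def correction_factor(energies_mev, fluence_spectrum, relative_response):
    """Correction factor = true dose / dosimeter-indicated dose for this spectrum.
    relative_response(E): reading per unit dose, normalized to 1 at the calibration
    energy; values > 1 mean over-response to low-energy photons."""
    weights = fluence_spectrum * energies_mev            # crude dose weighting ~ fluence * E
    indicated = np.sum(weights * relative_response(energies_mev))
    true = np.sum(weights)
    return true / indicated

# Example with made-up numbers: a spectrum softened by in-phantom scatter combined
# with an over-response at low energies yields a correction factor below 1.
E = np.linspace(0.05, 1.25, 100)                         # MeV
spectrum = np.exp(-((E - 0.3) / 0.3) ** 2)               # placeholder scattered spectrum
response = lambda e: 1.0 + 0.4 * np.exp(-e / 0.1)        # placeholder response curve
print(round(correction_factor(E, spectrum, response), 2))
```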

Visualization and Localization of Fusion Image Using VRML for Three-dimensional Modeling of Epileptic Seizure Focus (VRML을 이용한 융합 영상에서 간질환자 발작 진원지의 3차원적 가시화와 위치 측정 구현)

  • 이상호; 김동현; 유선국; 정해조; 윤미진; 손혜경; 강원석; 이종두; 김희중
    • Progress in Medical Physics / v.14 no.1 / pp.34-42 / 2003
  • In medical imaging, three-dimensional (3D) display using the Virtual Reality Modeling Language (VRML) as a portable file format can convey intuitive information more efficiently on the World Wide Web (WWW). Web-based 3D visualization of functional images combined with anatomical images has not been studied much in a systematic way. The goal of this study was to enable simultaneous observation of 3D anatomic and functional models together with planar images on the WWW, providing their location in 3D space through a VRML measuring tool. MRI and ictal-interictal SPECT images were obtained from one epileptic patient. Subtraction ictal SPECT co-registered to MRI (SISCOM) was performed to improve identification of the seizure focus. SISCOM image volumes were thresholded above one standard deviation (1-SD) and two standard deviations (2-SD). SISCOM foci and the boundaries of gray matter, white matter, and cerebrospinal fluid (CSF) in the MRI volume were segmented and rendered to VRML polygonal surfaces using the marching cubes algorithm. Line profiles along the x- and y-axes representing real lengths on an image were acquired, and their maximum lengths were both 211.67 mm. The ratio of real size to rendered VRML surface size was approximately 1 to 605.9. A VRML measuring tool was created and merged with the VRML surfaces. User interface tools were embedded with JavaScript routines to display MRI planar images as cross-sections of the 3D surface models and to set the transparency of the 3D surface models. When the transparency was properly controlled, a fused display of the brain geometry with the 3D distributions of focal activated regions provided intuitive spatial correlations among the three 3D surface models. The epileptic seizure focus was in the right temporal lobe of the brain. The real position of the seizure focus could be verified with the VRML measuring tool, and the anatomy corresponding to the seizure focus could be confirmed with the MRI planar images crossing the 3D surface models. The VRML application developed in this study has several advantages. First, fused 3D display and control of anatomic and functional images were achieved on the WWW. Second, vector analysis of a 3D surface model was enabled by the VRML measuring tool based on real size. Finally, the anatomy corresponding to the seizure focus was intuitively identified through correlation with the MRI images. Our web-based visualization of the 3D fusion image and its localization will aid online research and education in diagnostic radiology, therapeutic radiology, and surgical applications.
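
Two of the steps described above, thresholding the SISCOM subtraction volume and turning segmented masks into VRML surfaces with marching cubes, can be sketched as follows. The scikit-image call and the bare-bones IndexedFaceSet writer are assumptions; the paper's full web application (JavaScript controls, measuring tool, MRI cross-sections) is not reproduced here.

```python
# SISCOM thresholding and a minimal mask-to-VRML surface export (illustration only).
import numpy as np
from skimage import measure

def siscom_mask(ictal, interictal, n_sd=1.0):
    """Subtraction ictal SPECT: keep voxels of the (co-registered, normalized)
    difference volume that exceed n_sd standard deviations."""
    diff = ictal - interictal
    return diff > (diff.mean() + n_sd * diff.std())

def mask_to_vrml(mask, path):
    """Run marching cubes on a binary mask and write a minimal VRML 2.0 surface."""
    verts, faces, _, _ = measure.marching_cubes(mask.astype(float), level=0.5)
    with open(path, "w") as f:
        f.write("#VRML V2.0 utf8\n")
        f.write("Shape { geometry IndexedFaceSet {\n coord Coordinate { point [\n")
        f.write(",\n".join(f"  {x:.2f} {y:.2f} {z:.2f}" for x, y, z in verts))
        f.write("\n ] }\n coordIndex [\n")
        f.write(",\n".join(f"  {a}, {b}, {c}, -1" for a, b, c in faces))
        f.write("\n ]\n} }\n")
```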

Visual Sensor Design and Environment Modeling for Autonomous Mobile Welding Robots (자율 주행 용접 로봇을 위한 시각 센서 개발과 환경 모델링)

  • Kim, Min-Yeong; Jo, Hyeong-Seok; Kim, Jae-Hun
    • Journal of Institute of Control, Robotics and Systems / v.8 no.9 / pp.776-787 / 2002
  • Automation of the welding process in shipyards is ultimately necessary, since the welding site is spatially enclosed by floors and girders and welding operators are therefore exposed to hostile working conditions. To solve this problem, a mobile welding robot that can navigate autonomously within the enclosure has been developed. To carry out the welding task in this closed space, the robotic welding system needs a sensor system for work environment recognition and weld seam tracking, together with a specially designed environment recognition strategy. In this paper, a three-dimensional laser vision system based on optical triangulation is developed in order to provide the robot with a 3D map of the work environment. Using this sensor system, a neural-network-based spatial filter is designed for extracting the center of the laser stripe and is evaluated in various situations. An environment modeling algorithm is proposed and tested, composed of a laser scanning module for 3D voxel modeling and a plane reconstruction module for mobile robot localization. Finally, an environment recognition strategy for the mobile welding robot is developed to recognize work environments efficiently. The design of the sensor system, the algorithm for sensing the partially structured environment with plane segments, and the strategy and tactics for recognizing the work environment are described and discussed in detail with a series of experiments.
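
Two elements of the pipeline above, stripe-center extraction and conversion of range points into a 3D voxel map, are sketched below. A simple intensity-weighted centroid stands in for the paper's neural-network spatial filter, and the triangulation constants are illustrative, not the sensor's real calibration.

```python
# Laser-stripe centering, toy triangulation, and voxel occupancy mapping (illustration).
import numpy as np

def stripe_center(image):
    """Sub-pixel stripe row per image column via an intensity-weighted centroid
    (stand-in for the neural-network spatial filter described in the paper)."""
    rows = np.arange(image.shape[0])[:, None]
    weights = image.astype(float)
    return (rows * weights).sum(axis=0) / (weights.sum(axis=0) + 1e-12)

def triangulate_depth(stripe_rows, baseline_m, focal_px, reference_row_px):
    """Optical triangulation: depth from the stripe's vertical offset in the image."""
    disparity = np.clip(stripe_rows - reference_row_px, 1e-6, None)
    return baseline_m * focal_px / disparity

def to_voxel_map(points_xyz, voxel_size_m, grid_shape):
    """Mark every voxel of a fixed grid that contains at least one measured 3D point."""
    occ = np.zeros(grid_shape, dtype=bool)
    idx = np.floor(points_xyz / voxel_size_m).astype(int)
    keep = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    occ[tuple(idx[keep].T)] = True
    return occ
```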