• Title/Summary/Keyword: Voxels

Search Results: 112

COMPUTATIONAL ANTHROPOMORPHIC PHANTOMS FOR RADIATION PROTECTION DOSIMETRY: EVOLUTION AND PROSPECTS

  • Lee, Choon-Sik;Lee, Jai-Ki
    • Nuclear Engineering and Technology / v.38 no.3 / pp.239-250 / 2006
  • Computational anthropomorphic phantoms are computer models of the human anatomy used to calculate radiation dose distributions in the body upon exposure to a radiation source. Depending on the manner in which they represent the human anatomy, they fall into two classes: stylized and tomographic phantoms. Stylized phantoms, developed mainly at the Oak Ridge National Laboratory (ORNL), describe the human anatomy using simple mathematical equations of analytical geometry. Several improved stylized phantoms, including male and female adults, a pediatric series, and enhanced organ models, have been developed since the first hermaphrodite adult stylized phantom, the Medical Internal Radiation Dose (MIRD)-5 phantom. Although stylized phantoms have contributed significantly to dosimetry calculations, they provide only approximations of the true anatomical features of the human body and of the resulting organ dose distribution. An alternative class of computational phantom, the tomographic phantom, is based upon three-dimensional imaging techniques such as magnetic resonance (MR) imaging and computed tomography (CT). Tomographic phantoms represent the human anatomy with a large number of voxels, each assigned a tissue type and organ identity. To date, around 30 tomographic phantoms, including male and female adults, pediatric phantoms, and even a pregnant female, have been developed and used for realistic radiation dosimetry calculations. They are based on MR/CT images or sectional color photographs of patients, volunteers, or cadavers. Several investigators have compared tomographic phantoms with stylized phantoms and demonstrated the superiority of tomographic phantoms in terms of anatomical realism and dosimetry calculation. This paper summarizes the history and current status of both stylized and tomographic phantoms, including Korean computational phantoms. Advantages, limitations, and future prospects are also discussed.
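
As the abstract notes, a tomographic phantom is essentially a 3D array of voxels labeled with tissue/organ identities. A minimal sketch of that data structure (the organ IDs, dimensions, and voxel size below are purely illustrative, not from any published phantom):

```python
import numpy as np

# Hypothetical voxel phantom: a 3D array of organ ID labels.
# 0 = air, 1 = soft tissue, 2 = lung, 3 = bone (illustrative IDs only).
phantom = np.zeros((64, 64, 128), dtype=np.uint8)   # x, y, z voxels
phantom[16:48, 16:48, :] = 1                        # soft-tissue block
phantom[20:30, 20:30, 40:90] = 2                    # "lung" region
phantom[31:34, 20:44, :] = 3                        # "bone" column

voxel_volume_cm3 = 0.4 * 0.4 * 0.4                  # 4 mm cubic voxels

# An organ's volume is its voxel count times the voxel volume.
lung_voxels = np.count_nonzero(phantom == 2)
lung_volume = lung_voxels * voxel_volume_cm3
print(f"lung: {lung_voxels} voxels, {lung_volume:.1f} cm^3")
```

A dose calculation code walks particle tracks through this grid, looking up the material of each voxel it crosses and tallying energy deposition per organ ID.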

Lateral pterygoid muscle volume and migraine in patients with temporomandibular disorders

  • Lopes, Sergio Lucio Pereira De Castro;Costa, Andre Luiz Ferreira;Gamba, Thiago De Oliveira;Flores, Isadora Luana;Cruz, Adriana Dibo;Min, Li Li
    • Imaging Science in Dentistry / v.45 no.1 / pp.1-5 / 2015
  • Purpose: The lateral pterygoid muscle (LPM) plays an important role in jaw movement and has been implicated in temporomandibular disorders (TMDs). Migraine has been described as a common symptom in patients with TMDs and may be related to muscle hyperactivity. This study aimed to compare LPM volume in individuals with and without migraine, using segmentation of the LPM in magnetic resonance (MR) imaging of the TMJ. Materials and Methods: Twenty patients with migraine and 20 volunteers without migraine underwent a clinical examination of the TMJ according to the Research Diagnostic Criteria for TMDs. MR imaging was performed and the LPM was segmented using the ITK-SNAP 1.4.1 software, which calculates the volume of each segmented structure in voxels per cubic millimeter. The chi-squared test and Fisher's exact test were used to relate the TMD variables obtained from the MR images and clinical examinations to the presence of migraine. Binary logistic regression was used to determine the importance of each factor for predicting the presence of a migraine headache. Results: Patients with TMDs and migraine tended to have hypertrophy of the LPM (58.7%). In addition, abnormal mandibular movements (61.2%) and disc displacement (70.0%) were the most common signs in patients with TMDs and migraine. Conclusion: In patients with TMDs and simultaneous migraine, the LPM tends to be hypertrophic. LPM segmentation on MR imaging may be an alternative method for studying this muscle in such patients, because the hypertrophic LPM is not always palpable.
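
Converting a segmented voxel count to a physical volume, as ITK-SNAP does internally, is just the count multiplied by the voxel spacing. A generic sketch (the label value and spacing below are hypothetical, not the study's acquisition parameters):

```python
import numpy as np

def segmented_volume_mm3(label_map, target_label, spacing_mm):
    """Volume of one labeled structure: voxel count x single-voxel volume."""
    n_voxels = int(np.count_nonzero(label_map == target_label))
    voxel_volume = float(np.prod(spacing_mm))  # mm^3 per voxel
    return n_voxels, n_voxels * voxel_volume

# Toy segmentation: label 1 marks the structure of interest.
seg = np.zeros((50, 50, 20), dtype=np.uint8)
seg[10:30, 10:30, 5:15] = 1
n, vol = segmented_volume_mm3(seg, 1, spacing_mm=(0.5, 0.5, 1.0))
print(n, vol)  # 4000 voxels of 0.25 mm^3 each -> 1000.0 mm^3
```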

Reduced Gray Matter Density in the Posterior Cerebellum of Patients with Panic Disorder : A Voxel-Based Morphometry Study

  • Lee, Junghyun H.;Jeon, Yujin;Bae, Sujin;Jeong, Jee Hyang;Namgung, Eun;Kim, Bori R.;Ban, Soonhyun;Jeon, Saerom;Kang, Ilhyang;Lim, Soo Mee
    • Korean Journal of Biological Psychiatry / v.22 no.1 / pp.20-27 / 2015
  • Objectives: It is increasingly thought that the human cerebellum plays an important role in emotion and cognition. Although recent evidence suggests that the cerebellum may also be implicated in fear learning, only a limited number of studies have investigated cerebellar abnormalities in panic disorder. The aim of this study was to evaluate cerebellar gray matter deficits and their clinical correlations among patients with panic disorder. Methods: Using a voxel-based morphometry approach with a high-resolution spatially unbiased infratentorial template, regional cerebellar gray matter density was compared between 23 patients with panic disorder and 33 healthy individuals. Results: The gray matter density in the right posterior-superior (lobule Crus I) and left posterior-inferior (lobules Crus II, VIIb, VIIIa) cerebellum was significantly reduced in the panic disorder group compared to healthy individuals (p < 0.05, false discovery rate corrected, extent threshold = 100 voxels). Additionally, the gray matter reduction in the left posterior-inferior cerebellum (lobule VIIIa) was significantly associated with greater panic symptom severity (r = -0.55, p = 0.007). Conclusions: Our findings suggest that gray matter deficits in the posterior cerebellum may be involved in the pathogenesis of panic disorder. Further studies are needed to provide a comprehensive understanding of the cerebro-cerebellar network in panic disorder.
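
The core of a voxel-based morphometry group comparison is a mass-univariate test at every voxel followed by a multiple-comparison correction such as the false discovery rate used here. A simplified sketch on synthetic data (flattened voxels, a planted effect in the first 50, and a hand-rolled Benjamini-Hochberg step; the real study used a full VBM pipeline, not this toy):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels = 1000

# Synthetic "gray matter density": 23 patients vs. 33 controls,
# with a true reduction planted in the first 50 voxels.
patients = rng.normal(0.0, 1.0, size=(23, n_voxels))
controls = rng.normal(0.0, 1.0, size=(33, n_voxels))
patients[:, :50] -= 1.5

# Mass-univariate two-sample t-test at every voxel.
_, p = stats.ttest_ind(patients, controls, axis=0)

# Benjamini-Hochberg FDR correction at q = 0.05.
q = 0.05
order = np.argsort(p)
ranked = p[order]
below = ranked <= q * np.arange(1, n_voxels + 1) / n_voxels
k = (np.max(np.nonzero(below)[0]) + 1) if below.any() else 0
significant = np.zeros(n_voxels, dtype=bool)
significant[order[:k]] = True
print("significant voxels:", int(significant.sum()))
```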

Development and Performance Evaluation of the First Model of 4D CT-Scanner

  • Endo, Masahiro;Mori, Shinichiro;Tsunoo, Takanori;Kandatsu, Susumu;Tanada, Shuji;Aradate, Hiroshi;Saito, Yasuo;Miyazaki, Hiroaki;Satoh, Kazumasa;Matsusita, Satoshi;Kusakabe, Masahiro
    • Proceedings of the Korean Society of Medical Physics Conference / 2002.09a / pp.373-375 / 2002
  • 4D CT is a dynamic volume imaging system for moving organs with an image quality comparable to conventional CT, realized with continuous, high-speed cone-beam CT. In order to realize 4D CT, we have developed a novel 2D detector based on present CT technology and mounted it on the gantry frame of a state-of-the-art CT-scanner. In the present report we describe the design of the first model of the 4D CT-scanner as well as early results of its performance tests. The x-ray detector for the 4D CT-scanner is a discrete pixel detector in which each pixel datum is measured by an independent detector element. The number of elements is 912 (channels) × 256 (segments), and the element size is approximately 1 mm × 1 mm. The data sampling rate is 900 views (frames)/sec, and the dynamic range of the A/D converter is 16 bits. The rotation speed of the gantry is 1.0 sec/rotation. The data transfer system between the rotating and stationary parts of the gantry consists of laser diode and photodiode pairs, and achieves a net transfer speed of 5 Gbps. Volume data of 512 × 512 × 256 voxels are reconstructed with the FDK algorithm by 128 microprocessors operating in parallel. Normal volunteers and several phantoms were scanned with the scanner to demonstrate its high image quality.
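
The quoted 5 Gbps link can be sanity-checked against the detector's raw output rate, a back-of-the-envelope estimate from the figures in the abstract:

```python
channels, segments = 912, 256   # detector elements
views_per_sec = 900             # sampling rate
bits_per_sample = 16            # A/D dynamic range

raw_bps = channels * segments * views_per_sec * bits_per_sample
print(f"raw detector output: {raw_bps / 1e9:.2f} Gbps")  # ~3.36 Gbps
```

So the 5 Gbps optical transfer system leaves headroom above the roughly 3.36 Gbps raw data stream.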


Construction of Static 3D Ultrasonography Image by Radiation Beam Tracking Method from 1D Array Probe (1차원 배열 탐촉자의 방사빔추적기법을 이용한 정적 3차원 초음파진단영상 구성)

  • Kim, Yong Tae;Doh, Il;Ahn, Bongyoung;Kim, Kwang-Youn
    • Journal of the Korean Society for Nondestructive Testing / v.35 no.2 / pp.128-133 / 2015
  • This paper describes the construction of a static 3D ultrasonography image by tracking the radiation beam position during hand-held operation of a 1D array probe, to enable point-of-care use. A theoretical model is given for the transformation from the translational and rotational information of the sensor mounted on the probe to the reference Cartesian coordinate system. A signal amplification and serial communication interface module was built using a commercially available sensor. A test phantom was also made from silicone putty in a donut shape. During movement of the hand-held probe, the B-mode movie and sensor signals were recorded. B-mode images were periodically selected from the movie, and the gray levels of the pixels in each image were converted to the gray levels of 3D voxels. 3D images and 2D images of arbitrary B-mode cross-sections were then constructed from the voxel data, and agreed well with the shape of the test phantom.
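
The transformation described above, from probe-mounted sensor readings to the reference Cartesian frame, amounts to a rotation plus a translation applied to each B-mode pixel. A sketch under assumed conventions (Z-Y-X Euler angles, image plane at z = 0 in the probe frame; the paper's actual axis conventions may differ):

```python
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """Rotation matrix from Z-Y-X Euler angles (radians)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def pixel_to_world(u, v, pixel_mm, translation_mm, euler_rad):
    """Map B-mode pixel (u, v) in the probe's image plane
    (x = lateral, y = depth, z = 0) into the reference frame."""
    p_probe = np.array([u * pixel_mm, v * pixel_mm, 0.0])
    return rot_zyx(*euler_rad) @ p_probe + np.asarray(translation_mm, float)

# Identity pose: world coordinates equal image-plane coordinates.
p = pixel_to_world(10, 20, 0.2, translation_mm=(0, 0, 0), euler_rad=(0, 0, 0))
print(p)  # [2. 4. 0.]
```

Accumulating the transformed pixels of many freehand frames into a common voxel grid is what yields the static 3D volume.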

Optimization of Multi-Atlas Segmentation with Joint Label Fusion Algorithm for Automatic Segmentation in Prostate MR Imaging

  • Choi, Yoon Ho;Kim, Jae-Hun;Kim, Chan Kyo
    • Investigative Magnetic Resonance Imaging / v.24 no.3 / pp.123-131 / 2020
  • Purpose: Joint label fusion (JLF) is a popular multi-atlas-based segmentation algorithm that compensates for correlated errors that may exist between atlases. However, to obtain good segmentation results, it is very important to set the algorithm's several free parameters to optimal values. In this study, we first investigate the feasibility of the JLF algorithm for prostate segmentation in MR images, and then suggest an optimal set of parameters for automatic prostate segmentation by validating the results of each parameter combination. Materials and Methods: We acquired T2-weighted prostate MR images from 20 normal healthy volunteers and performed a series of cross-validations for every parameter set of JLF. In each case, the atlases were rigidly registered to the target image. We then calculated their voting weights for label fusion from each combination of JLF's parameters (rpxy, rpz, rsxy, rsz, β). We evaluated the segmentation performance with the five validation metrics of the Prostate MR Image Segmentation challenge. Results: As the number of voxels participating in the voting-weight calculation and the number of referenced atlases increased, the overall segmentation performance gradually improved. The JLF algorithm showed the best results for dice similarity coefficient, 0.8495 ± 0.0392; relative volume difference, 15.2353 ± 17.2350; absolute relative volume difference, 18.8710 ± 13.1546; 95% Hausdorff distance, 7.2366 ± 1.8502; and average boundary distance, 2.2107 ± 0.4972; with parameters rpxy = 10, rpz = 1, rsxy = 3, rsz = 1, and β = 3. Conclusion: The evaluated results demonstrate the feasibility of the JLF algorithm for automatic segmentation of prostate MRI. This empirical analysis of label-fusion segmentation results allows for appropriate setting of the parameters.
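
The first metric quoted above, the Dice similarity coefficient, is easy to compute from two binary label volumes. A generic sketch (not the challenge's reference implementation):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: automatic vs. manual segmentation, partially overlapping.
auto = np.zeros((10, 10, 10), dtype=np.uint8)
manual = np.zeros_like(auto)
auto[2:8, 2:8, 2:8] = 1      # 6x6x6 = 216 voxels
manual[3:9, 3:9, 3:9] = 1    # 216 voxels, overlap 5x5x5 = 125
print(round(dice(auto, manual), 4))  # 2*125/(216+216) = 0.5787
```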

Fabrication of Three-Dimensional Curved Microstructures by Two-Photon Polymerization Employing Multi-Exposure Voxel Matrix Scanning Method (다중조사 복셀 매트릭스 스캐닝법을 이용한 이광자 중합에 의한 마이크로 3차원 곡면형상 제작)

  • Lim, Tae-Woo;Park, Sang-Hu;Yang, Dong-Yol;Kong, Hong-Jin;Lee, Kwang-Sup
    • Polymer(Korea) / v.29 no.4 / pp.418-421 / 2005
  • A three-dimensional (3D) microfabrication process using two-photon polymerization (TPP) is developed to fabricate curved microstructures within a single layer, with potential applications in optical MEMS, nano/micro-devices, etc. A 3D curved structure can be expressed using height contours defined by 14 symbolic colors. The designed bitmap figure is then transformed into a multi-exposure voxel matrix (MVM). In this work, a multi-exposure voxel matrix scanning method is used to generate various voxel heights according to the laser exposure time assigned to each symbolic color. An objective lens with a numerical aperture of 1.25 is employed to enlarge the variation of the voxel height over the range of 1.2 to 6.4 μm, which can be controlled easily through the exposure time. Through this work, several 3D curved micro-shapes were fabricated directly to demonstrate the usefulness of the process, without the laminating process generally required in micro-stereolithography.

EVALUATION FOR DAMAGED DEGREE OF VEGETATION BY FOREST FIRE USING LIDAR AND DIGITAL AERIAL PHOTOGRAPH

  • Kwak, Doo-Ahn;Chung, Jin-Won;Lee, Woo-Kyun;Lee, Seung-Ho;Cho, Hyun-Kook;We, Gwang-Jae;Kim, Tae-Min
    • Proceedings of the KSRS Conference / 2007.10a / pp.533-536 / 2007
  • The LiDAR data structure has the potential for three-dimensional modeling because LiDAR data can represent voxels with z values under certain defined conditions. It is therefore possible to classify the degree of physical damage to vegetation by forest fire using LiDAR data, because the physical loss of canopy height and width caused by fire is related to the number of points that reach the ground through the canopy of the damaged forest. Biological damage to vegetation, on the other hand, can be described using the NDVI (Normalized Difference Vegetation Index), which indicates vegetation vitality. In this study, we graded the degree of vegetation damage by forest fire in Yangyang-Gun, South Korea, using LiDAR data for physical grading and digital aerial photographs with Red, Green, Blue, and Near Infra-Red bands for biological grading. The LiDAR data were classified into two classes: Serious Physical Damage (SPD) and Light Physical Damage (LPD). The NDVI was likewise classified into two classes: Serious Biological Damage (SBD) and Light Biological Damage (LBD). Combining the two LiDAR classes with the two NDVI classes, the burned area was graded into four damage grades (1, 2, 3, and 4). The grade 1 area was the broadest, followed by grade 3. From this result, we conclude that the burned area in Yangyang-Gun was damaged biologically rather than physically, because the NDVI in grades 1 and 3 showed low values while the LiDAR data in those grades indicated only light physical damage (LPD).
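
NDVI, used here for the biological grading, is computed per pixel from the Red and Near Infra-Red bands as (NIR − Red)/(NIR + Red). A minimal sketch with toy band values and a hypothetical threshold separating SBD from LBD (the study's actual cut-off is not given in the abstract):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon guards against 0/0

# Toy 2x2 bands: healthy vegetation reflects strongly in NIR.
nir = np.array([[200, 180], [60, 50]], dtype=np.uint8)
red = np.array([[40, 50], [55, 45]], dtype=np.uint8)
v = ndvi(nir, red)

threshold = 0.2                       # hypothetical SBD/LBD cut-off
biologically_serious = v < threshold  # True = Serious Biological Damage
print(v.round(2))
print(biologically_serious)
```

Overlaying this binary map with the two-class LiDAR map yields the four combined damage grades described in the abstract.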


Study of Computer Aided Diagnosis for the Improvement of Survival Rate of Lung Cancer based on Adaboost Learning (폐암 생존율 향상을 위한 아다부스트 학습 기반의 컴퓨터보조 진단방법에 관한 연구)

  • Won, Chulho
    • Journal of rehabilitation welfare engineering & assistive technology / v.10 no.1 / pp.87-92 / 2016
  • In this paper, we improved the classification performance for benign and malignant lung nodules by including parenchyma features. For small pulmonary nodules (4-10 mm), there are a limited number of CT data voxels within the solid tumor, making them difficult to process with traditional CAD (computer-aided diagnosis) tools. Extending feature extraction to include the surrounding parenchyma increases the CT voxel set available for analysis in these very small nodule cases and is likely to improve diagnostic performance, while keeping the CAD tool flexible with respect to scanner model and parameters. Using AdaBoost learning with naive Bayes and SVM weak classifiers, a number of significant features were selected from 304 features. The results on the COPDGene test set yielded an accuracy, sensitivity, and specificity of 100%. Therefore, the proposed method can be used effectively for computer-aided diagnosis.
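
The boosting scheme underlying the method can be illustrated with a minimal hand-rolled AdaBoost. This sketch uses one-feature threshold stumps as the weak learners purely for brevity (the paper used naive Bayes and SVM weak classifiers) and synthetic data, not the COPDGene features:

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=20):
    """Minimal AdaBoost with one-feature threshold stumps; y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # per-sample weights
    ensemble = []                    # (feature, threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        for j in range(d):           # exhaustive stump search
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] >= t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = s * np.where(X[:, j] >= t, 1, -1)
        w *= np.exp(-alpha * y * pred)   # up-weight misclassified samples
        w /= w.sum()
        ensemble.append((j, t, s, alpha))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1)
                for j, t, s, a in ensemble)
    return np.sign(score)

# Toy 2-feature data: class +1 when the features sum to a positive value.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.where(X.sum(axis=1) > 0, 1, -1)
model = adaboost_stumps(X, y, n_rounds=20)
acc = (predict(model, X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Feature selection falls out of the same loop: the features that weak learners repeatedly pick up are the "significant" ones.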

3D Visualization of Brain MR Images by Applying Image Interpolation Using Proportional Relationship of MBRs (MBR의 비례 관계를 이용한 영상 보간이 적용된 뇌 MR 영상의 3차원 가시화)

  • Song, Mi-Young;Cho, Hyung-Je
    • The KIPS Transactions:PartB / v.10B no.3 / pp.339-346 / 2003
  • In this paper, we propose a new method in which interpolation images are created from a small number of axial T2-weighted images, instead of many sectional images, for 3D visualization of brain MR images. For image interpolation, an important part of this process, we first segment the region of interest (ROI) to which we wish to apply 3D reconstruction, and extract the boundaries and MBR (minimum bounding rectangle) information of the segmented ROIs. After the image size of the interpolation layer is determined according to the rate of change in MBR size between the top and bottom slices of the segmented ROI, we find the corresponding pixels in the segmented ROI images. We then calculate each pixel's intensity in the interpolation image by assigning intensity weights obtained by cubic interpolation. Finally, 3D reconstruction is accomplished by exploiting feature points and 3D voxels in the created interpolation images.
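
The in-between slice generation described above can be illustrated in a much simplified form as a blend of corresponding pixels from the top and bottom slices. This sketch replaces the paper's MBR-proportional pixel correspondence and cubic weighting with a plain linear weight between two already-aligned slices:

```python
import numpy as np

def interpolate_slice(top, bottom, t):
    """Blend two aligned slices; t in [0, 1], 0 = top, 1 = bottom."""
    return ((1.0 - t) * top.astype(float)
            + t * bottom.astype(float)).astype(np.uint8)

top = np.full((4, 4), 100, dtype=np.uint8)
bottom = np.full((4, 4), 200, dtype=np.uint8)

# Stack the top slice, two interpolated layers, and the bottom slice
# into a small voxel volume ready for 3D reconstruction.
volume = np.stack([interpolate_slice(top, bottom, t)
                   for t in (0.0, 1 / 3, 2 / 3, 1.0)])
print(volume[:, 0, 0])  # gray levels step from 100 toward 200
```

The paper's method additionally scales and shifts each interpolation layer so that its pixels track the changing MBR of the ROI between slices, rather than assuming the slices are aligned.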