• Title/Summary/Keyword: CT Image Normalization

Evaluation of Image for Phantom according to Normalization, Well Counter Correction in PET-CT (PET-CT Normalization, Well Counter Correction에 따른 팬텀을 이용한 영상 평가)

  • Choong-Woon Lee;Yeon-Wook You;Jong-Woon Mun;Yun-Cheol Kim
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.27 no.1
    • /
    • pp.47-54
    • /
    • 2023
  • Purpose: PET-CT imaging requires an appropriate quality assurance system to achieve high efficiency and reliability, and quality control is essential for improving the quality of care and patient safety. Currently, the performance evaluation methods NU 2-1994 and NU 2-2001, proposed by NEMA and the IEC, are available for PET-CT image evaluation. In this study, we compare phantom images from identical experiments performed before and after PET-CT 3D normalization and well counter correction, and evaluate the usefulness of quality control. Materials and methods: A Discovery 690 (General Electric Healthcare, USA) PET-CT system was used to perform 3D normalization and well counter correction as recommended by GE Healthcare. The evaluation was based on the recovery coefficients for the six spheres of the NEMA IEC body phantom, as recommended by EARL. 18F was injected at 20 kBq/mL into the spheres and at 2 kBq/mL into the body of the phantom, and the PET-CT scan was performed with a radioactivity ratio of 10:1. Images were reconstructed by applying TOF+PSF, TOF, OSEM+PSF, and OSEM with Gaussian filters of 4.0, 4.5, 5.0, 5.5, 6.0, and 6.5 mm, a 128×128 matrix, 3.75 mm slice thickness, 2 iterations, and 16 subsets. The PET images were attenuation-corrected using the CT images and analyzed with the AW 4.7 software (General Electric Healthcare, USA). ROIs were set to fit the six spheres in the CT image, and the recovery coefficient (RC) was measured after fusion of PET and CT. Statistical analysis was performed with the Wilcoxon signed-rank test in R. Results: Overall, the measured recovery coefficients of the phantom images increased after the quality control procedures were performed. The recovery coefficient by reconstruction method increased in the order TOF+PSF, TOF, OSEM+PSF; before and after quality control, RCmax increased by 0.13 (OSEM), 0.16 (OSEM+PSF), 0.16 (TOF), and 0.15 (TOF+PSF), and RCmean increased by 0.09 (OSEM), 0.09 (OSEM+PSF), 0.106 (TOF), and 0.10 (TOF+PSF). The two groups showed a statistically significant difference in the Wilcoxon signed-rank test (P < 0.001). Conclusion: PET-CT systems require quality assurance to achieve high efficiency and reliability, and standardized intervals and procedures should be followed for quality control. We hope that this study serves as a good opportunity to reflect on the importance of quality control in PET-CT.

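For reference, the recovery coefficient used above is simply the ratio of measured to true activity concentration in each sphere ROI. A minimal Python sketch (a hypothetical helper, not the AW 4.7 workflow used in the paper):

```python
import numpy as np

def recovery_coefficients(roi_voxels_kbq_ml, true_kbq_ml=20.0):
    """RCmax and RCmean for one hot-sphere ROI.

    RCmax divides the hottest voxel, and RCmean the ROI average, by the
    known true activity concentration (20 kBq/mL in the phantom spheres).
    """
    roi = np.asarray(roi_voxels_kbq_ml, dtype=float)
    return roi.max() / true_kbq_ml, roi.mean() / true_kbq_ml
```

An RC of 1.0 would mean the sphere's activity is fully recovered; the study reports these ratios rising after normalization and well counter correction.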

Adaptive Optimal Thresholding for the Segmentation of Individual Tooth from CT Images (CT영상에서 개별 치아 분리를 위한 적응 최적 임계화 방안)

  • Heo, Hoon;Chae, Ok-Sam
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.3
    • /
    • pp.163-174
    • /
    • 2004
  • A 3D tooth model in which each tooth can be manipulated individually is an essential component of orthodontic and implant simulation in the dental field. For the reconstruction of such a tooth model, we need an image segmentation algorithm capable of separating an individual tooth from the neighboring teeth and alveolar bone. In this paper we propose a CT image normalization method and an adaptive optimal thresholding algorithm for the segmentation of the tooth region in CT image slices. The proposed segmentation algorithm is based on the fact that the shape and intensity of a tooth change gradually across CT image slices. It generates a temporary boundary of a tooth using the threshold value estimated in the previous image slice, and computes histograms for the inner and outer regions separated by that temporary boundary. The optimal threshold value generating the final tooth region is then computed from these two histograms.
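A minimal Python sketch of the slice-propagated thresholding idea (the midpoint criterion below is a simplification; the paper derives its own optimal threshold from the two region histograms):

```python
import numpy as np

def adaptive_optimal_threshold(slice_img, prev_threshold):
    """One slice-to-slice step of adaptive thresholding (illustrative).

    The previous slice's threshold forms a temporary tooth boundary;
    the threshold is then re-estimated from the regions inside and
    outside that boundary.
    """
    temp_mask = slice_img >= prev_threshold       # temporary boundary
    inner, outer = slice_img[temp_mask], slice_img[~temp_mask]
    if inner.size == 0 or outer.size == 0:        # degenerate slice
        return prev_threshold
    return 0.5 * (inner.mean() + outer.mean())    # refined threshold
```

Applied slice by slice, the threshold tracks the gradual change in tooth shape and intensity that the algorithm exploits.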

Comparison of Based on Histogram Equalization Techniques by Using Normalization in Thoracic Computed Tomography (흉부 컴퓨터 단층 촬영에서 정규화를 사용한 다양한 히스토그램 평준화 기법을 비교)

  • Lee, Young-Jun;Min, Jung-Whan
    • Journal of radiological science and technology
    • /
    • v.44 no.5
    • /
    • pp.473-480
    • /
    • 2021
  • The purpose of this study was to present a method for improving image quality in CT and X-ray scans, especially in the lung region. We also examined image parameters such as the mean and median of the histogram before and after applying histogram equalization (HE). These techniques are widely used for all types of medical images, such as chest X-ray and low-dose computed tomography (CT), and are also used to enhance small anatomical structures such as vessels, lung nodules, airways, and pulmonary fissures. The proposed techniques consist of two main steps implemented in MATLAB (R2021a). First, normalization is applied to prepare the base image and rearrange the intensity range of the image contrast. Second, the contrast limited adaptive histogram equalization (CLAHE) method is used to enhance small details, textures, and local contrast. As a result, this paper presents modern, improved HE techniques and their advantages over traditional HE. We conclude that the various HE-related techniques can be helpful for many processes, especially image pre-processing for machine learning (ML) and deep learning (DL).
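The two-step pipeline can be sketched as follows. The paper works in MATLAB; this is an equivalent Python stand-in using scikit-image, and clip_limit is an assumed tuning parameter:

```python
import numpy as np
from skimage import exposure

def normalize_then_clahe(ct_slice, clip_limit=0.01):
    """Min-max normalize a CT slice, then apply CLAHE (illustrative)."""
    img = ct_slice.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)   # step 1: normalization
    return exposure.equalize_adapthist(img, clip_limit=clip_limit)  # step 2: CLAHE
```

Unlike global HE, CLAHE equalizes small tiles and clips the histogram before redistributing counts, which is what preserves local contrast around vessels and fissures.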

Image Calibration Techniques for Removing Cupping and Ring Artifacts in X-ray Micro-CT Images (X-ray micro-CT 이미지 내 패임 및 동심원상 화상결함 제거를 위한 이미지 보정 기법)

  • Jung, Yeon-Jong;Yun, Tae-Sup;Kim, Kwang-Yeom;Choo, Jin-Hyun
    • Journal of the Korean Geotechnical Society
    • /
    • v.27 no.11
    • /
    • pp.93-101
    • /
    • 2011
  • High-quality X-ray computed microtomography (micro-CT) imaging of internal microstructures and pore space in geomaterials is often hampered by inherent noise embedded in the images. In this paper, we introduce image calibration techniques for removing the most common artifacts in X-ray micro-CT: cupping (a brightness difference between the peripheral and central regions) and ring artifacts (consecutive concentric circles emanating from the origin). The artifact removal sequentially applies coordinate transformation, normalization, and low-pass filtering in the 2D Fourier spectrum to raw CT images. The applicability and performance of the techniques are showcased by extracting 3D pore structures from micro-CT images of porous basalt using artifact reduction, binarization, and volume stacking. Comparisons between calibrated and raw images indicate that artifact removal avoids overestimating the porosity of the imaged materials, and that proper calibration of the artifacts plays a crucial role in using X-ray CT for geomaterials.
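One building block of that pipeline, low-pass filtering in the 2D Fourier spectrum, can be sketched in Python as below. In the paper this step follows a coordinate transformation so that the ring artifacts map onto removable frequency components; the circular cutoff here is an assumed choice for illustration:

```python
import numpy as np

def fourier_lowpass(img, keep_fraction=0.1):
    """Keep only frequencies inside a disc of radius keep_fraction * Nyquist."""
    f = np.fft.fftshift(np.fft.fft2(img))          # centered 2D spectrum
    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    r = np.hypot(y - rows / 2, x - cols / 2)       # distance from the DC term
    mask = r <= keep_fraction * min(rows, cols) / 2
    return np.fft.ifft2(np.fft.ifftshift(f * mask)).real
```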

Theoretical Investigation of Metal Artifact Reduction Based on Sinogram Normalization in Computed Tomography (컴퓨터 단층영상에서 사이노그램 정규화를 이용한 금속 영상왜곡 저감 방법의 이론적 고찰)

  • Jeon, Hosang;Youn, Hanbean;Nam, Jiho;Kim, Ho Kyung
    • Progress in Medical Physics
    • /
    • v.24 no.4
    • /
    • pp.303-314
    • /
    • 2013
  • The image quality of computed tomography (CT) is very vulnerable to metal artifacts. Recently, the thickness and background normalization techniques have been introduced. Since they provide flat sinograms, it is easy to determine metal traces, and a simple linear interpolation is enough to fill in the missing data in the sinograms. In this study, we developed a theory describing the two normalization methods and compared them with respect to various sizes and numbers of metal inserts using simple numerical simulations. The theory showed that background normalization provides flatter sinograms than thickness normalization, which was validated by the simulation results. The numerical simulations likewise showed that background normalization was better than thickness normalization for metal artifact correction. Although residual artifacts still existed, background normalization without the segmentation procedure outperformed thickness normalization. Since background normalization without segmentation is simple and requires no user intervention, it can be readily installed in conventional CT systems.
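A Python sketch of the normalize, interpolate, de-normalize pattern that both methods share; which prior sinogram is used (thickness vs. background) is exactly what the paper compares:

```python
import numpy as np

def normalized_mar(sino, prior_sino, metal_trace):
    """Sinogram-normalization metal artifact reduction (illustrative).

    The raw sinogram is divided by a prior sinogram so it becomes nearly
    flat, the metal trace is filled by 1-D linear interpolation along
    each detector row, and the result is de-normalized.
    """
    eps = 1e-9
    norm = sino / (prior_sino + eps)            # flatten the sinogram
    filled = norm.copy()
    x = np.arange(sino.shape[1])
    for i in range(sino.shape[0]):              # one row per projection angle
        bad = metal_trace[i]                    # boolean mask of metal samples
        if bad.any() and not bad.all():
            filled[i, bad] = np.interp(x[bad], x[~bad], norm[i, ~bad])
    return filled * (prior_sino + eps)          # undo the normalization
```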

Low-dose CT Image Denoising Using Classification Densely Connected Residual Network

  • Ming, Jun;Yi, Benshun;Zhang, Yungang;Li, Huixin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.6
    • /
    • pp.2480-2496
    • /
    • 2020
  • Considering that high-dose X-ray radiation during CT scans may pose risks to patients, the medical imaging industry has placed increasing emphasis on low-dose CT. Because of the complex statistical characteristics of the noise found in low-dose CT images, many traditional methods have difficulty preserving structural details while suppressing noise and artifacts. Inspired by deep learning techniques, we propose a densely connected residual network (DCRN) for low-dose CT image noise cancellation, which combines dense connectivity with residual learning. On one hand, dense connections maximize information flow between layers in the network, which helps preserve structural details when denoising images. On the other hand, residual learning paired with batch normalization allows faster training and better noise-reduction performance. The experiments were performed on 100 CT images selected from a public medical dataset, TCIA (The Cancer Imaging Archive). Compared with three competitive denoising algorithms, both the subjective visual effect and objective evaluation indexes, including PSNR, RMSE, MAE, and SSIM, show that the proposed network improves LDCT image quality more effectively while maintaining a low computational cost. On the objective indexes, the proposed method achieves the best PSNR (33.67), RMSE (5.659), MAE (1.965), and SSIM (0.9434). For RMSE in particular, the proposed network improves on the best-performing comparison algorithm by 7 percentage points.
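A toy PyTorch sketch of the two ideas the DCRN combines. The layer count, channel widths, and block structure below are assumptions for illustration, not the paper's configuration:

```python
import torch
import torch.nn as nn

class DenseResidualBlock(nn.Module):
    """Dense connectivity plus residual learning (illustrative).

    Each conv layer sees the concatenation of all previous feature
    maps (dense connection), and the block predicts the noise that is
    subtracted from its input (residual learning with batch norm).
    """
    def __init__(self, channels=64, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList()
        c = channels
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(c, growth, kernel_size=3, padding=1),
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True)))
            c += growth                          # inputs grow by concatenation
        self.fuse = nn.Conv2d(c, channels, kernel_size=1)  # 1x1 fusion conv

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        noise = self.fuse(torch.cat(feats, dim=1))
        return x - noise                         # subtract predicted noise
```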

Efficient Semi-automatic Annotation System based on Deep Learning

  • Hyunseok Lee;Hwa Hui Shin;Soohoon Maeng;Dae Gwan Kim;Hyojeong Moon
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.6
    • /
    • pp.267-275
    • /
    • 2023
  • This paper presents the development of specialized software for annotating volumes of interest on 18F-FDG PET/CT images, with the goal of facilitating the study and diagnosis of head and neck cancer (HNC). To achieve an efficient annotation process, we employed an SE-Norm-Residual Layer-based U-Net model, which exhibited outstanding proficiency in segmenting cancerous regions within 18F-FDG PET/CT scans of HNC cases. A manual annotation function was also integrated, allowing researchers and clinicians to validate and refine annotations based on dataset characteristics. The workspace displays fused PET and CT images, enhancing user convenience through simultaneous visualization. The performance of the deep learning model was validated using the Hecktor 2021 dataset, and the semi-automatic annotation functionality was subsequently developed. We began with image preprocessing, including resampling, normalization, and co-registration, followed by an evaluation of the deep learning model's performance. The model was integrated into the software as an initial automatic segmentation step. Users can manually refine the pre-segmented regions to correct false positives and false negatives. Annotation images are then saved along with their corresponding 18F-FDG PET/CT fusion images, enabling their application across various domains. In this study, we developed semi-automatic annotation software designed to efficiently generate annotated lesion images, with applications in HNC research and diagnosis. The findings indicated that this software surpasses conventional tools, particularly in the context of HNC-specific annotation with 18F-FDG PET/CT data. Consequently, the developed software offers a robust solution for producing annotated datasets, driving advances in the study and diagnosis of HNC.
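The resampling and normalization steps mentioned above can be sketched with SimpleITK as below; the file path, target spacing, and z-score scheme are assumptions for illustration, not details taken from the paper:

```python
import SimpleITK as sitk
import numpy as np

def preprocess(pet_path, new_spacing=(1.0, 1.0, 1.0)):
    """Resample a PET volume to isotropic spacing, then z-score normalize."""
    img = sitk.ReadImage(pet_path)
    old_size, old_spacing = img.GetSize(), img.GetSpacing()
    new_size = [int(round(sz * sp / nsp))                 # preserve extent
                for sz, sp, nsp in zip(old_size, old_spacing, new_spacing)]
    rs = sitk.ResampleImageFilter()
    rs.SetOutputSpacing(new_spacing)
    rs.SetSize(new_size)
    rs.SetOutputOrigin(img.GetOrigin())
    rs.SetOutputDirection(img.GetDirection())
    rs.SetInterpolator(sitk.sitkLinear)
    arr = sitk.GetArrayFromImage(rs.Execute(img)).astype(np.float32)
    return (arr - arr.mean()) / (arr.std() + 1e-8)        # z-score normalization
```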

Truncation Artifact Reduction Using Weighted Normalization Method in Prototype R/F Chest Digital Tomosynthesis (CDT) System (프로토타입 R/F 흉부 디지털 단층영상합성장치 시스템에서 잘림 아티팩트 감소를 위한 가중 정규화 접근법에 대한 연구)

  • Son, Junyoung;Choi, Sunghoon;Lee, Donghoon;Kim, Hee-Joung
    • Journal of the Korean Society of Radiology
    • /
    • v.13 no.1
    • /
    • pp.111-118
    • /
    • 2019
  • Chest digital tomosynthesis has become a practical imaging modality because it can solve the problem of overlapping anatomy in conventional chest radiography. However, because of both the limited scan angle and the finite-size detector, a portion of the chest cannot be represented in some or all of the projections. This causes intensity discontinuities across the field-of-view boundaries in the reconstructed slices, which we refer to as truncation artifacts. The purpose of this study was to reduce truncation artifacts using a weighted normalization approach and to investigate the performance of this approach on our prototype chest digital tomosynthesis system. The source-to-image distance was 1100 mm, and the center of rotation of the X-ray source was located 100 mm above the detector surface. After obtaining 41 projection views over ±20°, tomosynthesis slices were reconstructed with the filtered back projection algorithm. For quantitative evaluation, peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values were evaluated against a reference image reconstructed in simulation, and mean profile values along a specific direction were evaluated using real data. Simulation results showed that both PSNR and SSIM improved, and the experimental results showed that the artifacts' effect on the mean profile of the reconstructed image was reduced. In conclusion, the weighted normalization method reduces truncation artifacts and could thereby improve the image quality of chest digital tomosynthesis.
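The quantitative evaluation above reduces to comparing each reconstructed slice against the simulated reference; a minimal Python sketch (a hypothetical wrapper around scikit-image's metrics):

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_slice(reference, reconstructed):
    """PSNR and SSIM of a reconstructed slice against a reference slice.

    Both inputs are 2-D arrays on the same intensity scale; the data
    range is taken from the reference image.
    """
    rng = reference.max() - reference.min()
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=rng)
    ssim = structural_similarity(reference, reconstructed, data_range=rng)
    return psnr, ssim
```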

Principal component analysis in C[11]-PIB imaging (주성분분석을 이용한 C[11]-PIB imaging 영상분석)

  • Kim, Nambeom;Shin, Gwi Soon;Ahn, Sung Min
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.19 no.1
    • /
    • pp.12-16
    • /
    • 2015
  • Purpose: Principal component analysis (PCA) is a method often used in neuroimage analysis as a multivariate technique for describing the structure of high-dimensional correlated data in a lower-dimensional space. PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of correlated variables into a set of values of linearly uncorrelated variables called principal components. In this study, to investigate the usefulness of PCA in brain PET image analysis, we analyzed C[11]-PIB PET images as a representative case. Materials and Methods: Nineteen subjects were included in this study (normal = 9, AD/MCI = 10). PET scans were acquired for 20 min starting 40 min after intravenous injection of 9.6 MBq/kg of C[11]-PIB. All emission recordings were acquired with the Biograph 6 Hi-Rez (Siemens-CTI, Knoxville, TN) in three-dimensional acquisition mode. The transmission map for attenuation correction was acquired from the CT scans (130 kVp, 240 mA). Standardized uptake values (SUVs) of C[11]-PIB were calculated from PET/CT. In the normal subjects, 3T T1-weighted MRI images were obtained to create a C[11]-PIB template. Spatial normalization and smoothing were conducted as pre-processing for PCA using SPM8, and PCA was conducted using MATLAB 2012b. Results: Through PCA, we obtained linearly uncorrelated principal component images. The principal component images simplify the variation of the whole set of C[11]-PIB images into several principal components, including the variation of the neocortex and white matter and the variation of deep brain structures such as the pons. Conclusion: PCA is useful for analyzing and extracting the main patterns of C[11]-PIB images. As a multivariate analysis method, PCA might also be useful for pattern recognition in neuroimages such as FDG-PET or fMRI as well as C[11]-PIB images.

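The core computation can be sketched in Python with a plain SVD; rows of the input matrix are the spatially normalized, vectorized PET volumes, and n_components is an assumed choice (the paper itself uses SPM8 and MATLAB):

```python
import numpy as np

def pca_images(image_matrix, n_components=5):
    """PCA of a subjects-by-voxels matrix via SVD (illustrative).

    The returned components can be reshaped back into principal
    component images; scores give each subject's loading on them.
    """
    X = image_matrix - image_matrix.mean(axis=0)       # center each voxel
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    components = Vt[:n_components]                     # vectorized PC images
    scores = U[:, :n_components] * S[:n_components]    # per-subject loadings
    explained = S**2 / np.sum(S**2)                    # variance explained
    return components, scores, explained[:n_components]
```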