• Title/Summary/Keyword: Iterative reconstruction

Search results: 207

Radiation Dose Assessment According to the Adaptive Statistical Iterative Reconstruction Technique of Cardiac Computed Tomography (CT) (심장 CT 검사시 ASIR 적용에 따른 선량 평가)

  • Jang, Hyun-Cheol; Kim, Hyun-Ju; Cho, Jae-Hwan
    • The Journal of the Korea Contents Association, v.11 no.5, pp.252-259, 2011
  • To identify the effects of applying the adaptive statistical iterative reconstruction (ASIR) technique, together with body mass index (BMI) and tube potential, on radiation dose in cardiac CT. Patients undergoing cardiac CT examination were divided into four groups according to kVp: group A (n=20), non-ASIR, BMI < 25, 100 kVp; group B (n=20), non-ASIR, BMI > 25, 120 kVp; group C (n=20), 40% ASIR, BMI < 25, 100 kVp; group D (n=20), 40% ASIR, BMI > 25, 120 kVp. Regions of interest were placed in the central part of the main artery, the right coronary artery, and the left anterior descending artery, the CT number was measured, and the mean and standard deviation were analyzed. Image noise differed significantly between groups A and C, with group A higher than group C (group A, 494 ± 32 HU; group C, 482 ± 48 HU; P<0.05). Likewise, noise differed significantly between groups B and D, with group B higher than group D (group B, 510 ± 45 HU; group D, 480 ± 82 HU; P<0.05). In the qualitative image analysis, the clinical evaluation scores by coronary artery segment showed no statistically significant difference among the groups (group A, 4.13 ± 0.2; group B, 4.18 ± 0.1; group C, 4.1 ± 0.2; group D, 4.15 ± 0.1; p>0.05), and no images inappropriate for diagnosis were observed in any group. The radiation doses were 8.6 ± 0.9 mSv for group A, 14.9 ± 0.4 mSv for group B, 5.8 ± 0.5 mSv for group C, and 10.1 ± 0.6 mSv for group D.
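
In practice, the noise comparison described above reduces to measuring the mean CT number and its standard deviation inside each region of interest and testing the group difference. A minimal NumPy/SciPy sketch of that measurement is below; the ROI arrays and the two illustrated groups are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

def roi_stats(hu_values):
    """Mean CT number (HU) and standard deviation (image noise) within one ROI."""
    hu = np.asarray(hu_values, dtype=float)
    return hu.mean(), hu.std(ddof=1)

# Placeholder ROI samples for two groups (e.g. non-ASIR vs. 40% ASIR at 100 kVp).
rng = np.random.default_rng(0)
group_a_rois = [rng.normal(494, 32, 200) for _ in range(20)]
group_c_rois = [rng.normal(482, 48, 200) for _ in range(20)]

noise_a = [roi_stats(r)[1] for r in group_a_rois]
noise_c = [roi_stats(r)[1] for r in group_c_rois]

# Unpaired t-test on per-patient noise, analogous to the P<0.05 comparison reported.
t, p = stats.ttest_ind(noise_a, noise_c)
print(f"mean noise: A {np.mean(noise_a):.1f} HU, C {np.mean(noise_c):.1f} HU, p = {p:.3f}")
```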

A Comparative Study of Subset Construction Methods in OSEM Algorithms using Simulated Projection Data of Compton Camera (모사된 컴프턴 카메라 투사데이터의 재구성을 위한 OSEM 알고리즘의 부분집합 구성법 비교 연구)

  • Kim, Soo-Mee; Lee, Jae-Sung; Lee, Mi-No; Lee, Ju-Hahn; Kim, Joong-Hyun; Kim, Chan-Hyeong; Lee, Chun-Sik; Lee, Dong-Soo; Lee, Soo-Jin
    • Nuclear Medicine and Molecular Imaging, v.41 no.3, pp.234-240, 2007
  • Purpose: In this study we propose a block-iterative method for reconstructing Compton scattered data. This study shows that the well-known expectation maximization (EM) approach, along with its accelerated version based on the ordered subsets principle, can be applied to the problem of image reconstruction for the Compton camera. This study also compares several methods of constructing subsets for optimal performance of our algorithms. Materials and Methods: Three reconstruction algorithms were implemented: simple backprojection (SBP), EM, and ordered subset EM (OSEM). For OSEM, the projection data were grouped into subsets in a predefined order. Three different schemes for choosing nonoverlapping subsets were considered: scatter angle-based subsets, detector position-based subsets, and subsets based on both scatter angle and detector position. EM and OSEM with 16 subsets were performed with 64 and 4 iterations, respectively. The performance of each algorithm was evaluated in terms of computation time and normalized mean-squared error. Results: Both EM and OSEM clearly outperformed SBP in all aspects of accuracy. OSEM with 16 subsets and 4 iterations, which is equivalent to the standard EM with 64 iterations, was approximately 14 times faster in computation time than the standard EM. In OSEM, all three schemes for choosing subsets yielded similar results in computation time as well as normalized mean-squared error. Conclusion: Our results show that the OSEM algorithm, which has proven useful in emission tomography, can also be applied to the problem of image reconstruction for the Compton camera. With properly chosen subset construction methods and a moderate number of subsets, our OSEM algorithm significantly improves computational efficiency while preserving the quality of the standard EM reconstruction. The OSEM algorithm with subsets based on both scatter angle and detector position appears to be the most suitable choice.
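
As a rough illustration of the ordered-subsets idea compared in this paper, the sketch below partitions a generic set of measurements into non-overlapping subsets (the grouping key could be scatter angle, detector position, or both) and applies the standard multiplicative OSEM update once per subset per iteration. The system matrix and data are small random placeholders, not the Compton-camera model from the paper.

```python
import numpy as np

def osem(A, y, n_subsets=16, n_iters=4, subset_index=None):
    """Ordered-subset EM for y ~ Poisson(A @ x), x >= 0.

    subset_index optionally assigns each measurement to a subset (e.g. grouped by
    scatter angle and/or detector position); the default is a simple interleaving.
    With n_subsets=1 this reduces to the standard ML-EM algorithm.
    """
    n_meas, n_vox = A.shape
    if subset_index is None:
        subset_index = np.arange(n_meas) % n_subsets
    x = np.ones(n_vox)
    for _ in range(n_iters):
        for s in range(n_subsets):
            As, ys = A[subset_index == s], y[subset_index == s]
            proj = As @ x                                             # forward projection
            ratio = np.divide(ys, proj, out=np.zeros_like(proj), where=proj > 0)
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)   # backprojection + update
    return x

# Tiny synthetic check: 16 subsets x 4 iterations vs. 1 subset x 64 iterations.
rng = np.random.default_rng(0)
A = rng.random((256, 64))
y = rng.poisson(A @ rng.random(64)).astype(float)
x_osem = osem(A, y, n_subsets=16, n_iters=4)
x_em = osem(A, y, n_subsets=1, n_iters=64)
```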

A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems (방출단층촬영 시스템을 위한 GPU 기반 반복적 기댓값 최대화 재구성 알고리즘 연구)

  • Ha, Woo-Seok; Kim, Soo-Mee; Park, Min-Jae; Lee, Dong-Soo; Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging, v.43 no.5, pp.459-467, 2009
  • Purpose: Maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique for the ML-EM algorithm on a GPU (graphics processing unit). Materials and Methods: Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), NVIDIA's parallel computing technology, the projection and backprojection steps of the ML-EM algorithm were parallelized. The computation times for the projection, the error between measured and estimated data, and the backprojection within one iteration were measured. Total time included the latency of data transfers between RAM and GPU memory. Results: The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case the computing speed was improved about 15-fold on the GPU. When the number of iterations was increased to 1024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively, an improvement of about 135-fold, caused by the growing per-iteration delay of the CPU-based computation after a certain number of iterations. In contrast, the GPU-based computation showed very little variation in time delay per iteration owing to the use of shared memory. Conclusion: GPU-based parallel computation of ML-EM significantly improved computing speed and stability. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries.
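
The per-iteration work that the paper offloads to the GPU is the forward projection, the measured-to-estimated ratio, and the backprojection of the standard ML-EM update. A minimal CPU-side NumPy sketch of that update is shown below; in the paper each of the three commented steps is implemented as a CUDA kernel, and the system matrix here is a generic placeholder rather than the scanner geometry.

```python
import numpy as np

def ml_em(A, y, n_iters=32):
    """Plain ML-EM: x <- x / (A^T 1) * A^T (y / (A x))."""
    sens = np.maximum(A.sum(axis=0), 1e-12)       # sensitivity image, A^T 1
    x = np.ones(A.shape[1])
    for _ in range(n_iters):
        proj = A @ x                               # 1) forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(proj), where=proj > 0)  # 2) error ratio
        x *= (A.T @ ratio) / sens                  # 3) backprojection and update
    return x

rng = np.random.default_rng(1)
A = rng.random((128, 32))
y = rng.poisson(A @ rng.random(32)).astype(float)
x_hat = ml_em(A, y)
```

On a GPU, each commented step is an element-wise or matrix-vector operation that maps naturally onto parallel threads, which is where the reported 15- to 135-fold speedups come from.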

Development of A Recovery Algorithm for Sparse Signals based on Probabilistic Decoding (확률적 희소 신호 복원 알고리즘 개발)

  • Seong, Jin-Taek
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.10 no.5, pp.409-416, 2017
  • In this paper, we consider a framework for compressed sensing over finite fields. One measurement sample is obtained as the inner product of a row of a sensing matrix and a sparse signal vector. The recovery algorithm proposed in this study, based on probabilistic decoding, is used to find a solution of the compressed sensing problem. Until now, compressed sensing theory has dealt with real-valued or complex-valued systems, but when the original real or complex signals are processed, information is lost in the discretization. The motivation of this work lies in solving inverse problems for discrete signals. The framework proposed in this paper uses a parity-check matrix of low-density parity-check (LDPC) codes, developed in coding theory, as the sensing matrix. We develop a stochastic algorithm to reconstruct sparse signals over a finite field. Unlike the LDPC decoding methods published in the coding theory literature, we design an iterative algorithm that uses the probability distribution of the sparse signal. With the proposed recovery algorithm, reconstruction performance improves as the size of the finite field increases. Since the sensing matrix performs well even when it is a low-density matrix such as a parity-check matrix, the approach is expected to be actively used in applications involving discrete signals.
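
To make the measurement model concrete: each sample is the inner product of one row of a sparse, LDPC-style sensing matrix with the sparse signal, evaluated modulo the field size q. The sketch below, assuming a prime field GF(q), only generates such an instance and checks whether a candidate signal is consistent with the measurements; the paper's actual recovery is an iterative probabilistic (message-passing style) algorithm over the signal distribution, which is not reproduced here.

```python
import numpy as np

def measure(H, x, q):
    """y = H x over GF(q): one measurement per row of the parity-check-like matrix H."""
    return (H @ x) % q

def random_instance(n=60, m=30, k=5, q=7, row_weight=4, seed=0):
    """LDPC-like sparse sensing matrix H (fixed row weight) and a k-sparse signal x over GF(q)."""
    rng = np.random.default_rng(seed)
    H = np.zeros((m, n), dtype=int)
    for i in range(m):
        cols = rng.choice(n, size=row_weight, replace=False)
        H[i, cols] = rng.integers(1, q, size=row_weight)   # nonzero field elements only
    x = np.zeros(n, dtype=int)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.integers(1, q, size=k)
    return H, x

def is_consistent(H, x_candidate, y, q):
    """A candidate signal explains the measurements iff H x = y holds in GF(q)."""
    return np.array_equal((H @ x_candidate) % q, y)

q = 7
H, x_true = random_instance(q=q)
y = measure(H, x_true, q)
print(is_consistent(H, x_true, y, q))   # True
```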

Channel estimation scheme of terrestrial DTV transmission employing unique-word based SC-FDE (Unique-word 채용한 SC-FDE 기반 지상파 DTV 전송의 채널 추정 기법)

  • Shin, Dong-Chul; Kim, Jae-Kil; Ahn, Jae-Min
    • Journal of Broadcast Engineering, v.16 no.2, pp.207-215, 2011
  • In SC-FDE (single carrier with frequency domain equalization) transmission, a signal passing through a multipath channel suffers inter-symbol interference (ISI) and severe distortion caused by channel delay spread and noise. Conventional UW (unique word)-based SC-FDE iterative channel estimation improves estimation performance by smoothing out the noise components of the estimated CIR (channel impulse response) outside the channel length in the time domain and by restoring the broken cyclic property through UW reconstruction. In this paper, we propose a channel estimation scheme that also suppresses noise within the channel length. To suppress the noise, we estimate the noise standard deviation from the estimated CIR components outside the channel length and form a threshold from this standard deviation and a gain chosen so as not to affect the true signal samples. Estimated CIR samples within the channel length that fall below this threshold are treated as noise and removed. Simulation results show that the proposed channel estimation scheme achieves good channel MSE (mean square error) and BER (bit error rate) performance.
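
The proposed refinement can be summarized in a few lines: estimate the noise standard deviation from the CIR taps outside the channel length, form a threshold from that estimate and a gain chosen so that true signal taps are unaffected, and zero the in-channel taps that fall below it. A minimal sketch under those assumptions follows; the gain value and tap layout are illustrative, not the paper's tuned parameters.

```python
import numpy as np

def suppress_cir_noise(cir_est, channel_len, gain=3.0):
    """Suppress noise in an estimated CIR using the out-of-channel taps as a noise reference.

    cir_est     : complex CIR estimate (e.g. from UW-based frequency-domain estimation)
    channel_len : number of leading taps assumed to contain the true channel
    gain        : multiple of the noise standard deviation used as the suppression threshold
    """
    cir = np.array(cir_est, dtype=complex)
    noise_taps = cir[channel_len:]                       # taps outside the channel length: noise only
    sigma = np.sqrt(np.mean(np.abs(noise_taps) ** 2))    # estimated noise standard deviation
    cir[channel_len:] = 0                                # conventional step: discard out-of-channel taps
    in_ch = cir[:channel_len]
    in_ch[np.abs(in_ch) < gain * sigma] = 0              # proposed step: remove in-channel noise taps
    return cir
```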

The Comparison of Quantitative Accuracy Between Energy Window-Based and CT-Based Scatter Correction Method in SPECT/CT Images (SPECT/CT 영상에서 에너지창 기반 산란보정과 CT 기반 산란보정 방법의 정량적 정확성 비교)

  • Kim, Ji-Hyeon; Son, Hyeon-Soo; Lee, Juyoung; Park, Hoon-Hee
    • The Korean Journal of Nuclear Medicine Technology, v.19 no.2, pp.93-101, 2015
  • Purpose: In SPECT images, scatter counts cause quantitative count errors and image quality degradation. A wide range of scatter correction (SC) methods has therefore been studied; this study evaluates the accuracy of the CT-based SC (CTSC) used in SPECT/CT in comparison with existing energy window-based SC (EWSC) methods. Materials and Methods: With a hot rod (74.0 MBq) placed in the middle of a Triple Line Insert phantom, SPECT/CT images were first acquired with the phantom filled with air, to obtain a reference image free of scatter counts, and then acquired again under the same conditions with the phantom filled with water, to introduce scatter. In both conditions, the Astonish reconstruction method (4 iterations, 16 subsets) and CT attenuation correction were applied, and three SC approaches were compared on the water-filled images: no scatter correction (NSC), EWSC, and CTSC. For EWSC, nine sub-energy windows were set in addition to the main (peak) energy window (140 keV, 20%) and acquired simultaneously, and five EWSC variants were used: DPW (dual photopeak window) 10%, DEW (dual energy window) 20%, TEW (triple energy window) 10%, TEW 5.0%, and TEW 2.5%. Assuming no fluctuation in the primary count, the total count was measured by drawing a volume of interest (VOI) on the images from both conditions; the fraction of scatter counts in the total count was calculated as the percent scatter fraction (%SF), and the count error of the water-filled image relative to the air-filled reference was evaluated as the percent normalized mean-square error (%NMSE). Results: Relative to the air-filled reference, the %SF of the water-filled images was NSC 37.44, DPW 27.41, DEW 21.84, TEW10% 19.60, TEW5% 17.02, TEW2.5% 14.68, and CTSC 5.57, so CTSC removed the most scatter counts. The %NMSE was NSC 35.80, DPW 14.28, DEW 7.81, TEW10% 5.94, TEW5% 4.21, TEW2.5% 2.96, and CTSC 0.35, with CTSC showing the lowest error. Conclusion: In SPECT/CT images, each of the scatter correction methods examined reduced the quantitative count error caused by scatter. In particular, CTSC showed the lowest %NMSE (0.35) compared with the existing EWSC methods, enabling relatively accurate scatter correction.
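
For reference, the energy window-based corrections compared above all estimate the scatter inside the photopeak window from counts in narrow neighbouring windows, and the two figures of merit are simple ratios. The sketch below writes out the textbook TEW estimate together with %SF and %NMSE as described in the abstract; it is a generic sketch, not the vendor implementation used in the study.

```python
import numpy as np

def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple-energy-window scatter estimate in the photopeak window (trapezoidal approximation)."""
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

def percent_scatter_fraction(total_with_scatter, primary_only):
    """%SF: share of the total count attributable to scatter, using the air-filled scan as the primary."""
    return 100.0 * (total_with_scatter - primary_only) / total_with_scatter

def percent_nmse(corrected, reference):
    """%NMSE of a scatter-corrected image relative to the scatter-free reference image."""
    corrected = np.asarray(corrected, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.sum((corrected - reference) ** 2) / np.sum(reference ** 2)
```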

Evaluation of Image Quality Change by Truncated Region in Brain PET/CT (Brain PET에서 Truncated Region에 의한 영상의 질 평가)

  • Lee, Hong-Jae; Do, Yong-Ho; Kim, Jin-Eui
    • The Korean Journal of Nuclear Medicine Technology, v.19 no.2, pp.68-73, 2015
  • Purpose: The purpose of this study was to evaluate the change in image quality caused by the truncated region in the field of view (FOV) of the attenuation correction CT (AC-CT) in brain PET/CT. Materials and Methods: A Biograph TruePoint 40 with TrueV (Siemens) was used as the scanner. A $^{68}$Ge phantom scan was performed with and without the brain holder using the brain PET/CT protocol. The PET attenuation correction factor (ACF) was evaluated according to whether the pallet was included in the FOV of the AC-CT. FBP, OSEM-3D, and PSF methods were applied for PET reconstruction; for the iterative methods, 4 iterations, 21 subsets, and a 2 mm Gaussian filter were used. Window settings of level 2900, width 6000 and level 4200, width 1000 were used for visual evaluation of the attenuation-corrected PET images. Vertical profiles of 5-slice and 20-slice summation images, smoothed with a 5 mm Gaussian filter, were produced to evaluate integral uniformity. Results: Without the brain holder, the patient pallet was not covered by the FOV of the AC-CT because of the small FOV, which produced a defect in the ACF sinogram due to the truncated region. Without the brain holder, a defect appeared in the lower part of the transverse image at window level 4200, width 1000 in the attenuation-corrected PET images. With and without the brain holder, the integral uniformities of the 5-slice and 20-slice summation images were 7.2%, 6.7% and 11.7%, 6.7%, respectively. Conclusion: A truncated region caused by a small FOV results in a count defect in the occipital lobe of the brain in clinical or research studies. It is necessary to understand the effect of the truncated region and to use the appropriate accessory for brain PET/CT.
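
The integral uniformity values quoted above are consistent with the usual profile-based definition; a minimal sketch, assuming the summed-slice profile values are already extracted into an array:

```python
import numpy as np

def integral_uniformity(profile):
    """Integral uniformity (%) = (max - min) / (max + min) * 100 over a measured profile."""
    p = np.asarray(profile, dtype=float)
    return 100.0 * (p.max() - p.min()) / (p.max() + p.min())
```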

Evaluation of Image Noise and Radiation Dose Analysis in Brain CT Using ASIR (Adaptive Statistical Iterative Reconstruction) (ASIR를 이용한 두부 CT의 영상 잡음 평가 및 피폭선량 분석)

  • Jang, Hyon-Chol; Kim, Kyeong-Keun; Cho, Jae-Hwan; Seo, Jeong-Min; Lee, Haeng-Ki
    • Journal of the Korean Society of Radiology, v.6 no.5, pp.357-363, 2012
  • The purpose of this study was to evaluate image noise and image quality and to assess dose reduction in head CT reconstructed with the adaptive statistical iterative reconstruction (ASIR) algorithm. Head CT examinations were divided into a group without ASIR (group A) and a group with 50% ASIR (group B). In the phantom study, the measured mean CT noise in group B was reduced by 46.9%, 48.2%, 43.2%, and 47.9% relative to group A at the central position (A) and the peripheral positions (B, C, D), respectively. For the quantitative image quality evaluation, the CT number was measured and the noise analyzed; the image noise differed significantly between groups A and B, with group A higher than group B (31.87 HU, 31.78 HU, 26.6 HU, 30.42 HU; P<0.05). In the qualitative evaluation using the head clinical image evaluation chart (maximum 80 points), the observer scores for group A were 73.17 and 74.2 and for group B were 71.77 and 72.47; the difference was not statistically significant (P>0.05), and no images inappropriate for diagnosis were observed. Regarding exposure dose, applying 50% ASIR reduced the radiation dose by 47.6% with no loss of image quality. In conclusion, applying ASIR in clinical practice should make it possible to perform examinations at a considerably lower dose, which is a positive factor for the examiner when setting examination conditions.
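
The phantom noise-reduction percentages above are simple relative reductions of the ROI noise between the non-ASIR and 50% ASIR scans; a one-line sketch of that arithmetic with hypothetical values:

```python
def percent_reduction(noise_without_asir, noise_with_asir):
    """Relative noise reduction (%) of the ASIR scan at one ROI position."""
    return 100.0 * (noise_without_asir - noise_with_asir) / noise_without_asir

# Hypothetical noise values (HU) at one ROI position, giving roughly the ~47% reduction reported.
print(percent_reduction(10.0, 5.3))
```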

Evaluating the Impact of Attenuation Correction Difference According to the Lipiodol in PET/CT after TACE (간동맥 화학 색전술에 사용하는 Lipiodol에 의한 감쇠 오차가 PET/CT검사에서 영상에 미치는 영향 평가)

  • Cha, Eun Sun; Hong, Gun chul; Park, Hoon; Choi, Choon Ki; Seok, Jae Dong
    • The Korean Journal of Nuclear Medicine Technology, v.17 no.1, pp.67-70, 2013
  • Purpose: With the surge in patients with hepatocellular carcinoma, transarterial chemoembolization (TACE) has become one of the effective interventional procedures, and PET/CT plays an important role in determining the presence of residual tumor and metastasis and the prognosis after embolization. However, lipiodol, the embolic material used in TACE, produces artifacts in PET/CT images, and these artifacts affect quantitative evaluation. In this study, the extent of the impact of lipiodol on PET/CT images was evaluated using radioactivity density and percentage difference. Materials and Methods: A 1994 NEMA phantom was prepared with Teflon, water, and lipiodol inserts, the remaining volume was filled with well-mixed radioactivity of 20 ± 10 MBq, and images were acquired for 2 minutes 30 seconds per bed. The phantom data were reconstructed with an iterative reconstruction method using 2 iterations and 20 subsets. Regions of interest were drawn over the Teflon, water, and lipiodol inserts, the region between the inserts where artifacts occur, and the background, and the radioactivity density (kBq/ml) and percent difference were calculated and compared. Results: The radioactivity densities of the Teflon, water, and lipiodol inserts, the artifact region between the inserts, and the background were 0.09 ± 0.04, 0.40 ± 0.17, 1.55 ± 0.75, 2.5 ± 1.09, and 2.65 ± 1.16 kBq/ml, respectively (P<0.05), a statistically significant result. The percentage difference of the lipiodol region was 118% compared with water, 52% compared with the background, and 180% compared with Teflon. Conclusion: We found that attenuation correction errors arise in PET/CT scans performed after lipiodol injection; the radioactivity density of the lipiodol region was higher than those of the other inserts but lower than that of the background. When reviewing PET/CT images of patients who have undergone TACE, non-attenuation-corrected images should also be consulted and the extent of the impact of lipiodol should be taken into consideration.
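
The percent-difference figures above are consistent with the symmetric definition |a - b| / ((a + b)/2) x 100 applied to the mean radioactivity densities, which is an assumption here since the abstract does not spell the formula out. A short check using the reported means:

```python
def percent_difference(a, b):
    """Symmetric percent difference between two radioactivity densities (kBq/ml)."""
    return 100.0 * abs(a - b) / ((a + b) / 2.0)

teflon, water, lipiodol, background = 0.09, 0.40, 1.55, 2.65   # mean values from the abstract
print(round(percent_difference(lipiodol, water)))        # ~118, as reported
print(round(percent_difference(lipiodol, background)))   # ~52, as reported
print(round(percent_difference(lipiodol, teflon)))       # ~178 (reported as 180)
```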

Quantitative Comparisons in $^{18}F$-FDG PET Images: PET/MR VS PET/CT ($^{18}F$-FDG PET 영상의 정량적 비교: PET/MR VS PET/CT)

  • Lee, Moo Seok; Im, Young Hyun; Kim, Jae Hwan; Choe, Gyu O
    • The Korean Journal of Nuclear Medicine Technology, v.16 no.2, pp.68-80, 2012
  • Purpose: Recently, combined PET/MR scanners have been developed in which the MR data can be used both for anatometabolic image formation and for attenuation correction of the PET data. For quantitative PET, correction of tissue photon attenuation is mandatory; in PET/CT the attenuation map is obtained from the CT scan, whereas in PET/MR it must be calculated from the MR image. The purpose of this study was to assess the quantitative differences between MR-based and CT-based attenuation-corrected PET images. Materials and Methods: Using a uniform cylinder phantom of distilled water containing 199.8 MBq of $^{18}$F-FDG, we studied MR-based and CT-based attenuation-corrected PET images, with time-of-flight (TOF) and non-TOF iterative reconstruction on the PET/CT. Images were acquired over 60 minutes at 15-minute intervals. Regions of interest covering 70% of the image from its center were drawn, and the scanners' analysis software calculated both maximum and mean SUV; these data were analyzed by one-way ANOVA and Bland-Altman analysis. The MR images were segmented into three tissue classes (not including bone), and each class was assigned the expected average attenuation value of the corresponding tissue. For the clinical evaluation, PET/MR and PET/CT images were acquired in 23 patients (Ingenuity TF PET/MR, Gemini TF64); the PET/CT scans were performed approximately 33.8 minutes after the beginning of the PET/MR scans. Regions of interest were drawn over 9 regions (lung, liver, spleen, and bone), and maximum and mean SUV were calculated. The SUVs of the 9 regions in the MR-based and the CT-based attenuation-corrected PET images were compared using the paired t test and Bland-Altman analysis. Results: In the phantom study, MR-based attenuation-corrected PET images generally showed slightly lower SUVs (differences of -0.36 to -0.15) than the CT-based attenuation-corrected PET images (p<0.05). In the clinical study, MR-based attenuation-corrected PET images also showed slightly lower SUVs than the CT-based images (except for the left middle lung and the transverse lumbar region) (p<0.05), and the percent differences were about -8.0 to -1.79% for the PET/MR images relative to the PET/CT images (excepting lung). Based on the Bland-Altman method, the agreement between the two methods was considered good. Conclusion: PET/MR generally yields lower SUVs than PET/CT, but there was no difference in the clinical interpretations based on the quantitative comparisons with either type of attenuation map.
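
The agreement analysis mentioned above (Bland-Altman) reduces to the mean difference between the paired SUV measurements and its 95% limits of agreement. A minimal sketch, assuming paired SUV arrays from the two attenuation-correction methods; the numbers are placeholders, not the study's measurements.

```python
import numpy as np

def bland_altman(suv_mr_based, suv_ct_based):
    """Bland-Altman bias and 95% limits of agreement for paired SUV measurements."""
    diff = np.asarray(suv_mr_based, dtype=float) - np.asarray(suv_ct_based, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Placeholder paired SUVs, with the MR-based values slightly lower as in the study.
suv_ct = np.array([2.1, 1.8, 2.5, 0.6, 3.0, 2.2])
suv_mr = suv_ct - np.array([0.20, 0.25, 0.30, 0.15, 0.36, 0.18])
print(bland_altman(suv_mr, suv_ct))
```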
