• Title/Summary/Keyword: CT 영상 (CT images)


Synthesis of contrast CT image using deep learning network (딥러닝 네트워크를 이용한 조영증강 CT 영상 생성)

  • Woo, Sang-Keun
    • Proceedings of the Korean Society of Computer Information Conference / 2019.01a / pp.465-467 / 2019
  • In this paper, we studied the acquisition of contrast-enhanced CT images using a deep learning network capable of image generation. CT is a medical imaging technique that uses high-resolution images to diagnose disease and cancer. In particular, a CT image acquired after administering a contrast agent is called a contrast-enhanced CT image. Contrast-enhanced CT images emphasize the image contrast between tissue components, improving the accuracy of clinicians' diagnoses and treatment-response assessments. However, many patients have adverse reactions to contrast agents, making contrast-enhanced CT acquisition impossible for them. Therefore, in this study, to serve patients who cannot receive contrast agents and to minimize unnecessary radiation exposure for other patients, we generated contrast-enhanced CT images from plain CT images using an image-generating deep learning technique. A generative adversarial network (GAN) was used as the image-generation model. The results showed that images preprocessed with histogram equalization yielded better generated images than raw CT images, and that the generated images have high structural similarity to the corresponding real images. In conclusion, a deep learning generative model was able to synthesize contrast-enhanced CT images, which is expected to minimize unnecessary radiation exposure and to contribute to accurate diagnosis and treatment-response assessment based on the generated contrast-enhanced CT images.
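The preprocessing step the abstract credits for the improvement, histogram equalization before GAN training, can be sketched in a few lines of NumPy. This is a generic illustration, not the authors' code; the function and variable names are assumptions:

```python
import numpy as np

def equalize_histogram(image, n_bins=256):
    """Map pixel intensities through the normalized cumulative
    histogram so the output intensities spread more uniformly."""
    hist, bin_edges = np.histogram(image.ravel(), bins=n_bins)
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                      # normalize CDF to [0, 1]
    # Look each pixel up in the CDF (linear interpolation between bins)
    equalized = np.interp(image.ravel(), bin_edges[:-1], cdf)
    return equalized.reshape(image.shape)

# A skewed synthetic "CT slice": most intensities clustered low
rng = np.random.default_rng(0)
slice_ = rng.exponential(scale=30.0, size=(64, 64))
flat = equalize_histogram(slice_)       # intensities now span [0, 1] evenly
```

The equalized image feeds the generator in place of the raw slice; the output range here is [0, 1], so any network input scaling would follow.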


Image Registration for PET/CT and CT Images with Particle Swarm Optimization (Particle Swarm Optimization을 이용한 PET/CT와 CT영상의 정합)

  • Lee, Hak-Jae;Kim, Yong-Kwon;Lee, Ki-Sung;Moon, Guk-Hyun;Joo, Sung-Kwan;Kim, Kyeong-Min;Cheon, Gi-Jeong;Choi, Jong-Hak;Kim, Chang-Kyun
    • Journal of radiological science and technology / v.32 no.2 / pp.195-203 / 2009
  • Image registration is a fundamental image-processing task used to match two or more images. It gives radiologists new information by matching images from different modalities. The objective of this study is to develop a 2D image registration algorithm for PET/CT and CT images acquired by different systems at different times. We first matched the two CT images (one from standalone CT and the other from PET/CT), which contain rich anatomical information, and then geometrically transformed the PET image according to the transformation parameters calculated in the previous step. An affine transform was used to match the target and reference images, with mutual information as the similarity measure. A particle swarm algorithm optimized the performance by finding the best-matched parameter set within a reasonable amount of time. The results show good agreement between the PET/CT and CT images. We expect the proposed algorithm can be used not only for PET/CT and CT image registration but also for other multi-modality imaging systems such as SPECT/CT and MRI/PET.
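The mutual-information similarity measure that the particle-swarm search maximizes can be estimated from a joint intensity histogram. The sketch below is a generic illustration of that measure, not the authors' implementation; names and bin counts are assumptions:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information estimated from the joint intensity histogram;
    registration seeks the transform parameters that maximize it."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()             # joint probability
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
    nonzero = pxy > 0                     # skip empty cells (avoid log 0)
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

rng = np.random.default_rng(1)
fixed = rng.normal(size=(64, 64))         # reference image
unrelated = rng.normal(size=(64, 64))     # a completely different image
```

A well-aligned pair shares far more information than an unrelated pair, which is what lets the optimizer score candidate affine parameters.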


Current Status and Improvements of Transfered PET/CT Data from Other Hospitals (외부 반출 PET/CT 영상 현황 및 개선점)

  • Kim, Gye-Hwan;Choi, Hyeon-Joon;Lee, Hong-Jae;Kim, Jin-Eui;Kim, Hyun-Joo
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.2 / pp.38-40 / 2010
  • Purpose: This study was performed to identify current problems with PET/CT data transferred from other hospitals. Materials and Methods: The subjects were PET/CT data from 64 hospitals referred to our department for image interpretation. The formats and contents of the PET/CT data were reviewed, and a telephone questionnaire survey about them was performed. Results: PET/CT data from 39 of 64 hospitals (61%) included all transaxial CT and PET images in DICOM (Digital Imaging and Communications in Medicine) standard format, which are required for authentic interpretation. PET/CT data from the other hospitals included only secondary-capture images or fused PET/CT images. Conclusion: The majority of hospitals provided limited PET/CT data, which could be inadequate for accurate interpretation and clinical decision making. It is necessary to standardize the format of transferred PET/CT data to include all transaxial CT and PET images in DICOM standard format.
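The completeness check the study performs by hand can be automated by inspecting SOP Class UIDs. The UID constants below are from the DICOM standard (PS3.6); the helper function and sample lists are hypothetical illustrations:

```python
# SOP Class UIDs defined in the DICOM standard (PS3.6)
CT_IMAGE = "1.2.840.10008.5.1.4.1.1.2"           # CT Image Storage
PET_IMAGE = "1.2.840.10008.5.1.4.1.1.128"        # PET Image Storage
SECONDARY_CAPTURE = "1.2.840.10008.5.1.4.1.1.7"  # Secondary Capture

def is_complete_study(sop_class_uids):
    """A transferred PET/CT study is adequate for re-interpretation
    only if it contains both transaxial CT and PET image objects,
    not just secondary-capture screenshots or fused snapshots."""
    uids = set(sop_class_uids)
    return CT_IMAGE in uids and PET_IMAGE in uids

full_study = [CT_IMAGE] * 120 + [PET_IMAGE] * 120   # complete transfer
capture_only = [SECONDARY_CAPTURE] * 10             # screenshots only
```

In practice the UIDs would be read from each file's `SOPClassUID` attribute with a DICOM library, but the classification logic is the same.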


Dependency of Generator Performance on T1 and T2 weights of the Input MR Images in developing a CycleGan based CT image generator from MR images (CycleGan 딥러닝기반 인공CT영상 생성성능에 대한 입력 MR영상의 T1 및 T2 가중방식의 영향)

  • Samuel Lee;Jonghun Jeong;Jinyoung Kim;Yeon Soo Lee
    • Journal of the Korean Society of Radiology / v.18 no.1 / pp.37-44 / 2024
  • Although MR reveals excellent soft-tissue contrast and functional information, CT is still required for the electron-density information needed for accurate dose calculation in radiotherapy. To fuse MRI and CT images in the radiotherapy treatment-planning workflow, patients are normally scanned on both MRI and CT modalities. Recently, deep-learning-based generation of CT images from MR images has become possible owing to machine learning technology, eliminating the CT scanning step. This study implemented CycleGan-based deep-learning CT image generation from MR images. Three CT generators were trained on T1-weighted, T2-weighted, and combined T1- and T2-weighted MR images, respectively. We found that the generator trained on T1-weighted MR images generates better CT images than the other generators when T1-weighted MR images are the input; conversely, the generator trained on T2-weighted MR images performs best when T2-weighted MR images are the input. These results suggest that MR-based CT generation is approaching practical clinical use, and that a generator trained on a specific MR weighting produces better CT images from that weighting than generators trained on other sequences.
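The cycle-consistency term at the heart of CycleGAN training can be illustrated with NumPy stand-ins for the two generators. The real generators are convolutional networks; everything named below is a toy assumption, not the paper's code:

```python
import numpy as np

def cycle_consistency_loss(g_mr_to_ct, g_ct_to_mr, mr_batch, ct_batch, lam=10.0):
    """L1 cycle loss: translating MR->CT->MR (and CT->MR->CT) should
    reproduce the input; lam weights this term against the adversarial
    losses, which are omitted here for brevity."""
    mr_cycle = g_ct_to_mr(g_mr_to_ct(mr_batch))
    ct_cycle = g_mr_to_ct(g_ct_to_mr(ct_batch))
    return lam * (np.abs(mr_cycle - mr_batch).mean()
                  + np.abs(ct_cycle - ct_batch).mean())

# Toy linear "generators" that happen to be exact inverses of each other
g_fwd = lambda x: 2.0 * x + 1.0
g_bwd = lambda x: (x - 1.0) / 2.0

mr = np.ones((4, 8, 8))    # toy MR batch
ct = np.zeros((4, 8, 8))   # toy CT batch
```

With exact-inverse generators the cycle loss is zero; training pushes the two real networks toward that condition while the discriminators enforce realism.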

Multimodality and Application Software (다중영상기기의 응용 소프트웨어)

  • Im, Ki-Chun
    • Nuclear Medicine and Molecular Imaging / v.42 no.2 / pp.153-163 / 2008
  • Medical imaging modalities that image either anatomical structure or functional processes have developed along somewhat independent paths. Functional imaging with single photon emission computed tomography (SPECT) and positron emission tomography (PET) plays an increasingly important role in the diagnosis and staging of malignant disease, image-guided therapy planning, and treatment monitoring. SPECT and PET complement the more conventional anatomic imaging modalities of computed tomography (CT) and magnetic resonance (MR) imaging. When a functional imaging modality is combined with an anatomic one, the multimodality system can help both identify and localize functional abnormalities. Combining PET with a high-resolution anatomical modality such as CT can resolve the localization issue as long as the images from the two modalities are accurately coregistered. Software-based registration techniques have difficulty accounting for differences in patient positioning and involuntary movement of internal organs, often necessitating labor-intensive nonlinear mapping that may not converge to a satisfactory result. These challenges have recently been addressed by the introduction of combined PET/CT and SPECT/CT scanners, a hardware-oriented approach to image fusion. Combined PET/CT and SPECT/CT devices are playing an increasingly important role in the diagnosis and staging of human disease. This paper reviews the development of multimodality instrumentation for clinical use, from conception to present-day technology, together with the application software.

Usefulness of CT based SPECT Fusion Image in the lung Disease : Preliminary Study (폐질환의 SPECT와 CT 융합영상의 유용성: 초기연구)

  • Park, Hoon-Hee;Kim, Tae-Hyung;Shin, Ji-Yun;Lee, Tae-Soo;Lyu, Kwang-Yeul
    • Journal of radiological science and technology / v.35 no.1 / pp.59-64 / 2012
  • Recently, SPECT/CT systems have been applied to many diseases, but they are not yet widely applied to pulmonary disease. In particular, when a pulmonary embolism is suspected on CT images, SPECT is performed, and SPECT/CT is subsequently carried out for an accurate diagnosis. Without an integrated SPECT/CT scanner, there are limitations to this procedure; even with SPECT/CT, most examinations are performed after CT, and such test procedures expose the patient to unnecessary duplicate irradiation. In this study, we evaluated the amount of unnecessary irradiation and the usefulness of fusion images for pulmonary disease created from independently acquired SPECT and CT images. SPECT and CT scans of a NEMA phantom (NU2-2001) were performed for fusion imaging. From June 2011 to September 2010, 10 patients with no personal history other than lung disease were selected (male: 7, female: 3, mean age: 65.3 ± 12.7). In both the clinical patient and phantom data, the fusion images scored higher than the SPECT and CT images alone. The fusion images, which combine pulmonary vessel information from CT with functional information from SPECT, can increase the likelihood of detecting pulmonary embolism in the lung parenchyma. Performing SPECT and CT on an integrated SPECT/CT system is certainly better; however, we believe this protocol can provide more informative data for a more accurate diagnosis in hospitals without an integrated SPECT/CT system.
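The kind of software fusion the study performs, overlaying a functional map on anatomy, can be sketched as an alpha blend of two slices. This assumes the slices are already co-registered (the hard part in practice); all names here are illustrative:

```python
import numpy as np

def fuse_spect_ct(ct_slice, spect_slice, alpha=0.5):
    """Weighted overlay of two already co-registered slices: each
    modality is normalized to [0, 1], then blended so the CT anatomy
    shows through the SPECT uptake map."""
    def norm(img):
        span = img.max() - img.min()
        return (img - img.min()) / span if span > 0 else np.zeros_like(img)
    return (1.0 - alpha) * norm(ct_slice) + alpha * norm(spect_slice)

ct = np.arange(16, dtype=float).reshape(4, 4)     # toy anatomy ramp
spect = np.zeros((4, 4))
spect[1:3, 1:3] = 5.0                             # toy uptake hot spot
fused = fuse_spect_ct(ct, spect)                  # hot spot stands out on anatomy
```

A display pipeline would typically map the SPECT term through a color lookup table instead of blending in grayscale, but the weighting logic is the same.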

Current Status of Imaging Physics & Instrumentation In Nuclear Medicine (핵의학 영상 물리 및 기기의 최신 동향)

  • Kim, Hee-Joung
    • Nuclear Medicine and Molecular Imaging / v.42 no.2 / pp.83-87 / 2008
  • Anatomical and functional imaging devices have been developed independently. It is now recognized that combining the two can provide better diagnostic outcomes by fusing anatomical and functional images; representative examples of combined devices are PET/CT and SPECT/CT. The development and application of animal imaging instrumentation has also been very active, as new drug development with advanced imaging devices has increased. The development of advanced imaging devices has driven research and development in detector technology and imaging systems. It has also contributed to new software, reconstruction algorithms, correction methods for physical factors, image quantitation, computer simulation, kinetic modeling, dosimetry, and correction for motion artifacts. Recently, the integration of MRI and PET into a combined scanner was reported, and progress toward true integration of MRI and PET continues to be made. This paper reports the recent status of imaging physics and instrumentation in nuclear medicine.

Analysis of the image composition speed of RT and TPSM algorithms (RT과 TPSM 알고리즘의 영상구성 속도 분석)

  • Jin-Seob Shin
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.6 / pp.139-143 / 2023
  • In this paper, the TPSM algorithm available in cone-beam CT (CB-CT) systems was applied to enable faster 3D CT image composition than the existing RT algorithm used to construct CT images, and the image-composition speeds of the two algorithms were compared and analyzed. The TPSM algorithm was applied to enable real-time processing in 3D CT image composition. The experiments showed that the quality of cross-sectional images constructed using TPSM degrades slightly, due to empty pixels, as the distance from the center point increases, but the rotation-based TPSM method's image-composition speed is far superior to that of the RT algorithm.
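A speed comparison of this kind reduces to timing each composition routine over repeated runs. The harness below is a generic sketch; the two workloads are placeholders, since the RT and TPSM kernels themselves are not given in the abstract:

```python
import time

def time_reconstruction(reconstruct, n_runs=5):
    """Average wall-clock time of an image-composition routine over
    several runs, the comparison metric used for RT vs. TPSM."""
    times = []
    for _ in range(n_runs):
        start = time.perf_counter()
        reconstruct()
        times.append(time.perf_counter() - start)
    return sum(times) / n_runs

# Stand-in workloads with a deliberate 10x difference in work
slow = lambda: sum(i * i for i in range(200_000))   # plays the role of RT
fast = lambda: sum(i * i for i in range(20_000))    # plays the role of TPSM
```

Averaging over runs and using `perf_counter` (monotonic, high resolution) keeps the comparison robust against scheduler jitter.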

Current Status and Problems of PET/CT Data on CD for Inter-hospital Transfer (병원간 전송용 PET/CT 영상 CD자료의 현황 및 문제점)

  • Hyun, Seung-Hyup;Choi, Joon-Young;Lee, Su-Jin;Cho, Young-Seok;Lee, Ji-Young;Cheon, Mi-Ju;Cho, Suk-Kyong;Lee, Kyung-Han;Kim, Byung-Tae
    • Nuclear Medicine and Molecular Imaging / v.43 no.2 / pp.137-142 / 2009
  • Purpose: This study was performed to identify current problems with positron emission tomography/computed tomography (PET/CT) data on CD for inter-hospital transfer. Materials and Methods: The subjects were 746 consecutive ¹⁸F-fluorodeoxyglucose PET/CT data CDs from 56 hospitals referred to our department for image interpretation. The formats and contents of the PET/CT data CDs were reviewed, and an email questionnaire survey about them was performed. Results: PET/CT data CDs from 21 of 56 hospitals (37.5%) included all transaxial CT and PET images in DICOM standard format, which are required for authentic interpretation. PET/CT data from the other hospitals included only secondary-capture images or fused PET/CT images. According to the survey, the main reason for the limited PET/CT data on CD was that the PET/CT data volume was too large to upload to the Picture Archiving and Communication System. Conclusion: The majority of hospitals provided limited PET/CT data on CD for inter-hospital transfer, which could be inadequate for accurate interpretation and clinical decision making. It is necessary to standardize the format of PET/CT data on CD for inter-hospital transfer to include all transaxial CT and PET images in DICOM standard format.

Artifact Reduction in Sparse-view Computed Tomography Image using Residual Learning Combined with Wavelet Transformation (Wavelet 변환과 결합한 잔차 학습을 이용한 희박뷰 전산화단층영상의 인공물 감소)

  • Lee, Seungwan
    • Journal of the Korean Society of Radiology / v.16 no.3 / pp.295-302 / 2022
  • The sparse-view computed tomography (CT) imaging technique can reduce radiation dose, ensure uniform image characteristics among projections, and suppress noise. However, images reconstructed with sparse-view CT suffer from severe artifacts, distorting image quality and internal structures. In this study, we proposed a convolutional neural network (CNN) with wavelet transformation and residual learning for reducing artifacts in sparse-view CT images, and the performance of the trained model was quantitatively analyzed. The CNN consisted of wavelet transformation, convolutional, and inverse wavelet transformation layers; the input and output images were configured as sparse-view CT images and residual images, respectively. For training the CNN, the loss function was calculated using mean squared error (MSE), and the Adam optimizer was used. Result images were obtained by subtracting the residual images predicted by the trained model from the sparse-view CT images. The quantitative accuracy of the result images was measured in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The results showed that the trained model improves the spatial resolution of the result images as well as effectively reducing artifacts in sparse-view CT images. The trained model also increased the PSNR and SSIM by 8.18% and 19.71%, respectively, compared with a model trained without wavelet transformation and residual learning. Therefore, the imaging model proposed in this study can restore the quality of sparse-view CT images by reducing artifacts and improving spatial resolution and quantitative accuracy.
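The residual-learning output step (subtracting the predicted residual from the input) and the PSNR metric used for evaluation can be sketched as follows. The synthetic arrays stand in for real sparse-view data, and the network's prediction is faked as a scaled copy of the true artifact; all names are illustrative:

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB, one of the two metrics
    (alongside SSIM) used to score the artifact-reduced images."""
    mse = np.mean((reference - test) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(2)
clean = rng.random((32, 32))                  # artifact-free ground truth
artifact = rng.normal(scale=0.05, size=(32, 32))
sparse_view = clean + artifact                # degraded network input
predicted_residual = artifact * 0.9           # stand-in for the CNN's output
restored = sparse_view - predicted_residual   # residual-learning result image
```

Because the model predicts the residual (artifact) rather than the clean image directly, a partially correct prediction already removes most of the error, which is reflected in a higher PSNR for `restored` than for `sparse_view`.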