• Title/Summary/Keyword: CT영상 (CT images)

Synthesis of contrast CT image using deep learning network (딥러닝 네트워크를 이용한 조영증강 CT 영상 생성)

  • Woo, Sang-Keun
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2019.01a
    • /
    • pp.465-467
    • /
    • 2019
  • In this paper, we studied the acquisition of contrast-enhanced CT images using a generative deep-learning network. CT is a medical imaging modality used to diagnose disease and cancer on the basis of high-resolution images. In particular, a CT image acquired after administration of a contrast agent is called a contrast-enhanced CT image. Contrast-enhanced CT images emphasize the image contrast between tissue components, helping clinicians improve the accuracy of diagnosis and treatment-response assessment. However, many patients have adverse reactions to contrast agents, and for these patients contrast-enhanced CT imaging is not possible. Therefore, to serve patients who cannot receive contrast-enhanced scans and to minimize unnecessary radiation exposure for ordinary patients, this study investigated generating contrast-enhanced CT images from plain CT images with an image-generating deep-learning technique. A generative adversarial network (GAN) model was used as the image-generating network. The results showed that images generated from histogram-equalized CT images were better than those generated from CT images without any preprocessing, and the generated images had high structural similarity to the real images. In conclusion, contrast-enhanced CT images could be generated with a deep-learning image-generation model; this is expected to minimize unnecessary radiation exposure and, based on the generated contrast-enhanced CT images, to contribute to accurate diagnosis and treatment-response assessment.
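The abstract above describes GAN-based synthesis of contrast-enhanced CT from plain CT, with histogram equalization as preprocessing. The following is a minimal sketch of that idea, not the authors' code: the preprocessing step plus one adversarial training step of a pix2pix-style conditional GAN in PyTorch, with placeholder networks, loss weights, and data.

```python
# Minimal sketch, not the authors' code: histogram-equalize a CT slice, then run one
# adversarial training step of a pix2pix-style generator mapping plain CT to
# contrast-enhanced CT. Networks, loss weights, and data are placeholders.
import numpy as np
import torch
import torch.nn as nn

def equalize(slice_hu, bins=256):
    """Histogram equalization of a 2D CT slice, output scaled to [0, 1]."""
    flat = slice_hu.ravel()
    hist, edges = np.histogram(flat, bins=bins)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    return np.interp(flat, edges[:-1], cdf).reshape(slice_hu.shape).astype(np.float32)

G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))        # CT -> synthetic contrast CT
D = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))        # discriminator on (input, target) pairs
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

ct = torch.from_numpy(equalize(np.random.randn(256, 256))).view(1, 1, 256, 256)
ce_ct = torch.rand(1, 1, 256, 256)                       # real contrast-enhanced CT (placeholder)

fake = G(ct)
d_real = D(torch.cat([ct, ce_ct], dim=1))
d_fake = D(torch.cat([ct, fake.detach()], dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

d_fake = D(torch.cat([ct, fake], dim=1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, ce_ct)  # adversarial + L1
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```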

Image Registration for PET/CT and CT Images with Particle Swarm Optimization (Particle Swarm Optimization을 이용한 PET/CT와 CT영상의 정합)

  • Lee, Hak-Jae;Kim, Yong-Kwon;Lee, Ki-Sung;Moon, Guk-Hyun;Joo, Sung-Kwan;Kim, Kyeong-Min;Cheon, Gi-Jeong;Choi, Jong-Hak;Kim, Chang-Kyun
    • Journal of radiological science and technology
    • /
    • v.32 no.2
    • /
    • pp.195-203
    • /
    • 2009
  • Image registration is a fundamental task in image processing used to match two or more images. It provides radiologists with new information by matching images from different modalities. The objective of this study is to develop a 2D image registration algorithm for PET/CT and CT images acquired by different systems at different times. We first matched the two CT images (one from the standalone CT and the other from the PET/CT), which contain abundant anatomical information. Then, we geometrically transformed the PET image according to the transformation parameters calculated in the previous step. An affine transform was used to match the target and reference images, and mutual information was used as the similarity measure. A particle swarm optimization algorithm found the best-matched parameter set within a reasonable amount of time. The results show good agreement between the PET/CT and CT images. We expect that the proposed algorithm can be used not only for PET/CT and CT image registration but also for other multi-modality imaging systems such as SPECT/CT and MRI/PET.
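As a rough illustration of the registration approach described above (affine/similarity transform, mutual information as the similarity measure, particle swarm optimization of the parameters), here is a hedged NumPy/SciPy sketch. The transform parameterization, swarm settings, and search bounds are illustrative assumptions, not the paper's values.

```python
# Hedged sketch: maximize mutual information between a fixed CT slice and a transformed
# moving slice using a basic particle swarm optimizer. Parameterization and swarm
# settings are illustrative assumptions, not the paper's values.
import numpy as np
from scipy.ndimage import affine_transform

def mutual_information(a, b, bins=32):
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return (pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum()

def transform(moving, p):
    """p = (tx, ty, theta, scale): similarity transform about the image centre."""
    tx, ty, theta, s = p
    c, si = np.cos(theta), np.sin(theta)
    mat = np.array([[c, -si], [si, c]]) / s
    centre = np.array(moving.shape) / 2.0
    return affine_transform(moving, mat, offset=centre - mat @ centre - np.array([ty, tx]), order=1)

def pso_register(fixed, moving, n_particles=30, iters=50, seed=0):
    lo, hi = np.array([-20, -20, -0.3, 0.8]), np.array([20, 20, 0.3, 1.2])
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lo, hi, size=(n_particles, 4))
    vel = np.zeros_like(pos)
    score = lambda p: mutual_information(fixed, transform(moving, p))
    pbest, pbest_val = pos.copy(), np.array([score(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([score(p) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest                                  # best (tx, ty, theta, scale) found
```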

Dependency of Generator Performance on T1 and T2 weights of the Input MR Images in developing a CycleGan based CT image generator from MR images (CycleGan 딥러닝기반 인공CT영상 생성성능에 대한 입력 MR영상의 T1 및 T2 가중방식의 영향)

  • Samuel Lee;Jonghun Jeong;Jinyoung Kim;Yeon Soo Lee
    • Journal of the Korean Society of Radiology
    • /
    • v.18 no.1
    • /
    • pp.37-44
    • /
    • 2024
  • Even though MR provides excellent soft-tissue contrast and functional information, CT is still required for the electron-density information needed for accurate dose calculation in radiotherapy. For the fusion of MRI and CT images in the radiotherapy treatment-planning workflow, patients are normally scanned on both MRI and CT. Recently, deep-learning-based generation of CT images from MR images has become possible, which can eliminate the separate CT scan. This study implemented CycleGan-based deep-learning generation of CT images from MR images. Three CT generators were trained on T1-, T2-, or combined T1- and T2-weighted MR images, respectively. We found that the generator trained on T1-weighted MR images produced better CT images than the other generators when T1-weighted MR images were input; likewise, the generator trained on T2-weighted MR images performed best when T2-weighted MR images were input. The results suggest that generating CT from MR images is close to practical clinical use and that a generator trained on a specific MR weighting produces better CT images than generators trained on other sequences.
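For reference, the core of the CycleGAN objective mentioned above can be sketched as follows. This is a hedged illustration with tiny placeholder networks and assumed loss weights, showing only the MR-to-CT direction and the cycle-consistency term; it is not the authors' implementation.

```python
# Hedged sketch of the CycleGAN objective for MR -> CT synthesis: only the MR->CT
# direction and the cycle-consistency term are shown. Networks and loss weights are
# tiny placeholders, not the authors' model.
import torch
import torch.nn as nn

def tiny_net():
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))

G_mr2ct, G_ct2mr, D_ct = tiny_net(), tiny_net(), tiny_net()
mse, l1 = nn.MSELoss(), nn.L1Loss()

mr = torch.rand(1, 1, 128, 128)     # e.g. a T1- or T2-weighted MR slice (placeholder)
fake_ct = G_mr2ct(mr)               # synthetic CT
rec_mr = G_ct2mr(fake_ct)           # MR -> CT -> MR reconstruction

d_out = D_ct(fake_ct)
adv = mse(d_out, torch.ones_like(d_out))   # least-squares adversarial loss on the CT domain
cycle = l1(rec_mr, mr)                     # cycle consistency keeps the anatomy intact
loss_g = adv + 10.0 * cycle
loss_g.backward()
```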

Current Status and Improvements of Transfered PET/CT Data from Other Hospitals (외부 반출 PET/CT 영상 현황 및 개선점)

  • Kim, Gye-Hwan;Choi, Hyeon-Joon;Lee, Hong-Jae;Kim, Jin-Eui;Kim, Hyun-Joo
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.2
    • /
    • pp.38-40
    • /
    • 2010
  • Purpose: This study was performed to identify current problems with PET/CT data transferred from other hospitals. Materials and Methods: PET/CT data referred to our department for image interpretation from 64 hospitals were reviewed. The formats and contents of the PET/CT data were examined, and a telephone questionnaire survey about them was performed. Results: PET/CT data from 39 of the 64 hospitals (61%) included all transaxial CT and PET images in the DICOM (Digital Imaging and Communications in Medicine) standard format, which are required for authentic interpretation. PET/CT data from the remaining hospitals included only secondary-capture images or fused PET/CT images. Conclusion: The majority of hospitals provided limited PET/CT data that may be inadequate for accurate interpretation and clinical decision making. The format of transferred PET/CT data should be standardized so that all transaxial CT and PET images are included in the DICOM standard format.
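A practical way to check whether a transferred disc meets the requirement highlighted above (full transaxial CT and PET series in DICOM format rather than secondary-capture screenshots) is to inventory its files. The sketch below assumes the pydicom library and a hypothetical directory path; it is not part of the study.

```python
# Hedged sketch (assumes the pydicom library and a hypothetical disc path): inventory a
# transferred study to see whether native transaxial CT and PET series are present,
# rather than only secondary-capture screenshots.
from pathlib import Path
from collections import Counter
import pydicom

def inventory(disc_dir):
    counts = Counter()
    for f in Path(disc_dir).rglob("*"):
        if not f.is_file():
            continue
        try:
            ds = pydicom.dcmread(str(f), stop_before_pixels=True)
        except Exception:
            continue                               # not a readable DICOM file
        sop = getattr(ds, "SOPClassUID", None)
        is_sc = sop is not None and "Secondary Capture" in sop.name
        counts[(getattr(ds, "Modality", "?"), "SC" if is_sc else "native")] += 1
    return counts

print(inventory("/path/to/transferred_disc"))      # hypothetical path
```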

Multimodality and Application Software (다중영상기기의 응용 소프트웨어)

  • Im, Ki-Chun
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.42 no.2
    • /
    • pp.153-163
    • /
    • 2008
  • Medical imaging modalities that image either anatomical structure or functional processes have developed along somewhat independent paths. Functional images with single photon emission computed tomography (SPECT) and positron emission tomography (PET) are playing an increasingly important role in the diagnosis and staging of malignant disease, image-guided therapy planning, and treatment monitoring. SPECT and PET complement the more conventional anatomic imaging modalities of computed tomography (CT) and magnetic resonance (MR) imaging. When a functional imaging modality is combined with an anatomic imaging modality, the multimodality system can help both identify and localize functional abnormalities. Combining PET with a high-resolution anatomical imaging modality such as CT can resolve the localization issue as long as the images from the two modalities are accurately coregistered. Software-based registration techniques have difficulty accounting for differences in patient positioning and involuntary movement of internal organs, often necessitating labor-intensive nonlinear mapping that may not converge to a satisfactory result. These challenges have recently been addressed by the introduction of the combined PET/CT scanner and SPECT/CT scanner, a hardware-oriented approach to image fusion. Combined PET/CT and SPECT/CT devices are playing an increasingly important role in the diagnosis and staging of human disease. This paper reviews the development of multimodality instrumentation for clinical use from its conception to present-day technology, together with the application software.

The Extraction of Liver from the CT Images Using Co-occurrence Matrix (Co-occurrence Matrix를 이용한 CT 영상에서의 간 영역 추출)

  • 김규태
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2000.04b
    • /
    • pp.508-510
    • /
    • 2000
  • This paper presents a method for segmenting the liver region from CT images, which are widely used in abdominal radiology. The muscle, spine, and rib regions are first removed from the abdominal CT image, and the liver region is then segmented using a local image thresholding method based on the co-occurrence matrix.
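As a rough illustration of co-occurrence-matrix-driven local thresholding, the sketch below computes a gray-level co-occurrence matrix (GLCM) energy map over local windows of a CT slice and thresholds it to keep a large homogeneous, liver-like region. The window size, the energy feature, and the threshold are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch, not the paper's implementation: compute a gray-level co-occurrence
# matrix (GLCM) energy map over local windows of a CT slice and keep the largest
# homogeneous region as a liver candidate. Window size and threshold are assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # 'greycomatrix' in older scikit-image
from skimage.measure import label, regionprops

def local_glcm_energy(img_u8, win=32):
    """GLCM energy in non-overlapping win x win windows."""
    h, w = img_u8.shape
    emap = np.zeros((h // win, w // win))
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            glcm = graycomatrix(img_u8[i:i + win, j:j + win], distances=[1], angles=[0],
                                levels=256, symmetric=True, normed=True)
            emap[i // win, j // win] = graycoprops(glcm, "energy")[0, 0]
    return emap

ct_slice = (np.random.rand(256, 256) * 255).astype(np.uint8)     # placeholder abdominal CT slice
energy = local_glcm_energy(ct_slice)
mask = np.kron(energy > energy.mean(), np.ones((32, 32), bool))  # homogeneous (liver-like) windows
regions = regionprops(label(mask))
liver_candidate = max(regions, key=lambda r: r.area) if regions else None
```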

Usefulness of CT based SPECT Fusion Image in the lung Disease : Preliminary Study (폐질환의 SPECT와 CT 융합영상의 유용성: 초기연구)

  • Park, Hoon-Hee;Kim, Tae-Hyung;Shin, Ji-Yun;Lee, Tae-Soo;Lyu, Kwang-Yeul
    • Journal of radiological science and technology
    • /
    • v.35 no.1
    • /
    • pp.59-64
    • /
    • 2012
  • Recently, SPECT/CT systems have been applied to many diseases, but they are not yet widely used for pulmonary disease. In particular, when pulmonary embolism is suspected on CT images, SPECT is performed, and SPECT/CT is then acquired for accurate diagnosis. Without an integrated SPECT/CT system, however, this procedure is limited, and even with SPECT/CT most examinations are performed after a separate CT, so the procedure exposes the patient to unnecessary duplicate irradiation. In this study, we evaluated the amount of unnecessary irradiation and the usefulness of fusion images of pulmonary disease created from independently acquired SPECT and CT images. SPECT and CT scans of a NEMA phantom (NU2-2001) were performed for the fusion images. From June 2011 to September 2010, 10 patients with no relevant history other than lung disease were selected (male: 7, female: 3, mean age: 65.3±12.7). For both the clinical patient and phantom data, the fusion images scored higher than the SPECT and CT images. The fusion images, which combine pulmonary vessel information from CT with functional information from SPECT, can increase the probability of detecting pulmonary embolism in the lung parenchyma. Performing SPECT and CT on an integrated SPECT/CT system is certainly better; however, we believe this protocol can provide more informative data for accurate diagnosis in hospitals without an integrated SPECT/CT system.
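Once independently acquired SPECT and CT slices are registered, the fusion display itself reduces to resampling and blending. The lines below are a purely illustrative sketch of that step (placeholder data, grayscale blending only); they do not reproduce the study's fusion software.

```python
# Purely illustrative sketch: resample a SPECT slice to the CT grid and alpha-blend the
# two, the basic display step behind software fusion of separately acquired studies.
# Registration is assumed to have been done beforehand; data are placeholders.
import numpy as np
from scipy.ndimage import zoom

ct = np.random.rand(512, 512)               # placeholder CT slice, windowed to [0, 1]
spect = np.random.rand(128, 128)             # placeholder SPECT slice (coarser matrix)

spect_up = zoom(spect, 512 / 128, order=1)   # resample SPECT to the CT grid
alpha = 0.4                                  # SPECT weight in the overlay
fused = (1 - alpha) * ct + alpha * spect_up  # grayscale fusion; color mapping omitted
```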

Current Status of Imaging Physics & Instrumentation In Nuclear Medicine (핵의학 영상 물리 및 기기의 최신 동향)

  • Kim, Hee-Joung
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.42 no.2
    • /
    • pp.83-87
    • /
    • 2008
  • Anatomical and functional imaging devices have been developed independently, but it is now recognized that combining the two can provide better diagnostic outcomes by fusing anatomical and functional images. Representative examples of combined devices are PET/CT and SPECT/CT. The development and application of animal imaging and instrumentation have been very active, as new drug development with advanced imaging devices has increased. The development of advanced imaging devices has driven research and development in detector technology and imaging systems. It has also contributed to new software, reconstruction algorithms, correction methods for physical factors, image quantitation, computer simulation, kinetic modeling, dosimetry, and correction for motion artifacts. Recently, the development of combined MRI and PET was reported, and progress toward the true integration of MRI and PET has been described. This paper reports the recent status of imaging physics and instrumentation in nuclear medicine.

Automatic Segmentation of the Interest Organ Region in CT Images Using Region Growing (CT 영상에서 Region Growing 기법을 이용한 관심 장기 영역의 자동 추출)

  • Bae, Ho-Young;Lee, Wu-Ju;Lee, Bae-Ho
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.10b
    • /
    • pp.526-530
    • /
    • 2006
  • This paper proposes a method for automatically extracting the brain and liver from CT images using a region-growing technique. It is motivated by the fact that the brain and liver occupy relatively large areas in CT images, and the extraction of a specific organ region is divided into two main stages: determining the initial search region and determining the final organ region. Determining the initial search region removes the parts of the CT image unrelated to the target organ and keeps only the relevant area, raising the detection rate for the organ of interest. In this paper, the initial search region is determined from the positions of the skull and spine, bone regions with relatively high gray levels in CT images. Extraction of the specific organ region then proceeds through binarization using ATID (Automatic Threshold Intensity Decision), noise removal using morphological opening, and region extraction using region growing. After region growing, the largest of the resulting groups is selected as the final organ region. The proposed algorithm was tested on 100 brain images and 100 liver images collected at Chonnam National University Hospital, and it achieved a high extraction rate of about 91% or more for the organ regions of interest.
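A hedged sketch of the pipeline described above (binarization, morphological opening, region extraction, largest-region selection) is shown below. Otsu's method stands in for the paper's ATID threshold and connected-component labeling stands in for seeded region growing; the data and structuring element are placeholders.

```python
# Hedged sketch of the described pipeline: threshold -> morphological opening ->
# region extraction -> keep the largest region. Otsu's method stands in for the
# paper's ATID threshold, and connected-component labeling stands in for seeded
# region growing; the input slice is a placeholder.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

ct_slice = np.random.rand(256, 256)                 # placeholder brain/abdominal CT slice

binary = ct_slice > threshold_otsu(ct_slice)        # binarization (stand-in for ATID)
opened = ndimage.binary_opening(binary, structure=np.ones((3, 3)))   # remove small noise

labels, n = ndimage.label(opened)                   # connected regions
if n:
    sizes = ndimage.sum(opened, labels, index=range(1, n + 1))
    organ_mask = labels == (np.argmax(sizes) + 1)   # keep the largest region as the organ
```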

Analysis of the image composition speed of RT and TPSM algorithms (RT과 TPSM 알고리즘의 영상구성 속도 분석)

  • Jin-Seob Shin
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.6
    • /
    • pp.139-143
    • /
    • 2023
  • In this paper, the TPSM algorithm available in a cone-beam CB-CT system was applied to compose 3D CT images faster than the existing RT algorithm used for CT image composition, and the image-composition speeds of the two algorithms were compared and analyzed. To this end, the TPSM algorithm was applied to enable real-time processing in 3D CT image composition. The experiments showed that cross-sectional images composed with TPSM lose a small amount of quality to empty pixels as the distance from the center point increases, but the rotation-based TPSM method composes images far faster than the RT algorithm.
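The speed comparison described above can be reproduced in outline with a simple timing harness like the one below. `reconstruct_rt` and `reconstruct_tpsm` are placeholder functions standing in for the two algorithms, and the projection data are random; none of this is the paper's code.

```python
# Hedged sketch: a simple timing harness of the kind used to compare the composition
# speed of two reconstruction routines. reconstruct_rt and reconstruct_tpsm are
# placeholders standing in for the RT and TPSM algorithms, and the data are random.
import time
import numpy as np

def reconstruct_rt(projections):      # placeholder for the RT-based composition
    return projections.mean(axis=0)

def reconstruct_tpsm(projections):    # placeholder for the TPSM-based composition
    return projections[::4].mean(axis=0)

def seconds_per_run(fn, data, repeats=10):
    start = time.perf_counter()
    for _ in range(repeats):
        fn(data)
    return (time.perf_counter() - start) / repeats

proj = np.random.rand(360, 256, 256)                # placeholder cone-beam projection stack
print("RT  :", seconds_per_run(reconstruct_rt, proj), "s")
print("TPSM:", seconds_per_run(reconstruct_tpsm, proj), "s")
```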