• Title/Summary/Keyword: 3-D CT image

Search Result 433

Accuracy of simulation surgery of Le Fort I osteotomy using optoelectronic tracking navigation system (광학추적항법장치를 이용한 르포씨 제1형 골절단 가상 수술의 정확성에 대한 연구)

  • Bu, Yeon-Ji;Kim, Soung-Min;Kim, Ji-Youn;Park, Jung-Min;Myoung, Hoon;Lee, Jong-Ho;Kim, Myung-Jin
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons
    • /
    • v.37 no.2
    • /
    • pp.114-121
    • /
    • 2011
  • Introduction: The aim of this study was to demonstrate that simulation surgery on a rapid prototype (RP) model, based on 3-dimensional computed tomography (3D CT) data taken before surgery, has the same accuracy as traditional orthognathic surgery with an intermediate splint, using an optoelectronic tracking navigation system. Materials and Methods: Simulation surgery with the same treatment plan as the Le Fort I osteotomy performed on the patient was done on an RP model based on the 3D CT data of 12 patients who had undergone a Le Fort I osteotomy in the Department of Oral and Maxillofacial Surgery, Seoul National University Dental Hospital. The 12 distances between 4 points on the skull (both infraorbital foramina and both supraorbital foramina) and 3 points on the maxilla (the contact point of both maxillary central incisors and the mesiobuccal cusp tips of both maxillary first molars) were tracked using an optoelectronic tracking navigation system. The distances before surgery were compared to evaluate the accuracy of the RP model, and the distance changes in the 3D CT image after surgery were compared with those of the RP model after simulation surgery. Results: A paired t-test revealed a significant difference between the distances in the 3D CT image and the RP model before surgery (P<0.0001). On the other hand, Pearson's correlation coefficient, 0.995, revealed a significant positive correlation between the distances (P<0.0001). There was a significant difference between the change in distance of the 3D CT image and that of the RP model before and after surgery (P<0.05). The Pearson's correlation coefficient was 0.13844, indicating a positive correlation (P<0.1). Conclusion: These results suggest that simulation surgery of a Le Fort I osteotomy using an optoelectronic tracking navigation system is relatively accurate when comparing the pre- and post-operative 3D CT data. Furthermore, the application of an optoelectronic tracking navigation system may be a predictable and efficient method in Le Fort I orthognathic surgery.
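
As a reading aid, the statistical comparison described in this abstract (a paired t-test on the CT-vs-RP distances plus Pearson's correlation) can be reproduced with SciPy. The sketch below is illustrative only: the twelve distance values are made-up placeholders, not the study's measurements.

```python
# Minimal sketch: comparing pre-operative distances measured on the 3D CT
# image and on the RP model with a paired t-test and Pearson's correlation.
# The numbers below are placeholders, not the study's data.
import numpy as np
from scipy import stats

# Hypothetical inter-landmark distances (mm) for the 12 distance pairs
ct_distances = np.array([62.1, 58.4, 71.0, 65.3, 49.8, 55.2,
                         60.7, 47.9, 68.5, 52.3, 59.1, 63.8])
rp_distances = np.array([61.6, 58.9, 70.3, 64.8, 50.2, 54.7,
                         61.1, 48.4, 67.9, 52.8, 58.6, 63.2])

# Paired t-test: is the mean CT-vs-RP difference zero?
t_stat, p_ttest = stats.ttest_rel(ct_distances, rp_distances)

# Pearson correlation: do the two measurement sets covary linearly?
r, p_corr = stats.pearsonr(ct_distances, rp_distances)

print(f"paired t-test: t = {t_stat:.3f}, p = {p_ttest:.4f}")
print(f"Pearson correlation: r = {r:.3f}, p = {p_corr:.4f}")
```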

Volume measurement of limb edema using three dimensional registration method of depth images based on plane detection (깊이 영상의 평면 검출 기반 3차원 정합 기법을 이용한 상지 부종의 부피 측정 기술)

  • Lee, Wonhee;Kim, Kwang Gi;Chung, Seung Hyun
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.7
    • /
    • pp.818-828
    • /
    • 2014
  • After the emergence of Microsoft Kinect, interest in three-dimensional (3D) depth images increased significantly. Depth image data of an object can be converted to 3D coordinates by simple arithmetic and then reconstructed as a 3D model on a computer. However, because surface coordinates can be acquired only from the front area facing the Kinect, a total solid with a closed surface cannot be reconstructed. In this paper, a 3D registration method for multiple Kinects is suggested, in which surface information from each Kinect is simultaneously collected and registered in real time to build a 3D total solid. To unify the relative coordinate systems used by each Kinect, a 3D perspective transform was adopted. Also, to detect the control points necessary to generate the transformation matrices, a 3D randomized Hough transform was used. Once the transformation matrices were generated, real-time 3D reconstruction of various objects was possible. To verify the usefulness of the suggested method, human arms were reconstructed in 3D and their volumes were measured using four Kinects. This volume measuring system was developed to monitor the level of lymphedema in patients after cancer treatment, and the measurement difference from medical CT was lower than 5%, the expected CT reconstruction error.
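
The core of the registration step is mapping every Kinect's point cloud into one reference coordinate system with a transformation matrix estimated from shared control points. The sketch below illustrates that idea with a least-squares 3D affine fit in NumPy; the control points are synthetic, and the paper's actual pipeline (a 3D perspective transform with control points from a 3D randomized Hough transform) is not reproduced here.

```python
# Minimal sketch of the registration idea: map points from one Kinect's
# coordinate system into a reference Kinect's system with a homogeneous
# transformation matrix estimated from corresponding control points.
import numpy as np

def estimate_affine_3d(src, dst):
    """Least-squares 3D affine transform (4x4) mapping src points to dst points."""
    n = src.shape[0]
    src_h = np.hstack([src, np.ones((n, 1))])             # n x 4 homogeneous source
    coeffs, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # 4 x 3 solution
    T = np.eye(4)
    T[:3, :] = coeffs.T                                   # top 3 rows of the 4x4 matrix
    return T

def apply_transform(T, points):
    """Apply a 4x4 homogeneous transform to an n x 3 point cloud."""
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    return (pts_h @ T.T)[:, :3]

# Hypothetical non-coplanar control points seen by both devices
pts_kinect_b = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 1.2],
                         [0.0, 0.5, 1.4], [0.5, 0.5, 0.9]])
pts_kinect_a = pts_kinect_b + np.array([0.10, -0.05, 0.02])  # pretend offset view

T = estimate_affine_3d(pts_kinect_b, pts_kinect_a)
cloud_b = np.random.rand(1000, 3)            # stand-in for Kinect B's depth points
cloud_b_in_a = apply_transform(T, cloud_b)   # merged into Kinect A's frame
print(T)
```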

Study of machine learning model for predicting non-small cell lung cancer metastasis using image texture feature (Image texture feature를 이용하여 비소세포폐암 전이 예측 머신러닝 모델 연구)

  • Hye Min Ju;Sang-Keun Woo
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2023.07a
    • /
    • pp.313-315
    • /
    • 2023
  • In this paper, a machine learning model for predicting metastasis of non-small cell lung cancer was built using image features extracted from 18F-FDG PET and CT. 18F-FDG is taken up during tumor glucose metabolism, and tracking it is one of the medical imaging techniques used to detect a patient's cancer cells. The image features extracted from the PET and CT images reflect the biological characteristics of the tumor and are quantified values calculated from the corresponding ROI. In this study, to determine whether the image texture features extracted from the patients' medical images are significant factors for predicting lymph node metastasis, the AUC was calculated and univariate analysis was performed. Four image texture features from PET (GLRLM_GLNU, SHAPE_Compacity only for 3D ROI, SHAPE_Volume_vx, SHAPE_Volume_mL) and two from CT (NGLDM_Busyness, TLG_ml) were used to build the models. Accuracy and AUC were calculated to evaluate the performance of each model, and the random forest (RF) model showed the highest prediction accuracy. Prediction performance improved when the extracted PET and CT image texture features were used together to train the model compared with using each set separately. The extracted image features showed potential as biomarkers for lymph node metastasis, and based on these results, it is expected that treatment strategies for non-small cell lung cancer can be established from individual medical images.
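
The modelling step (a random forest on a handful of radiomic features, scored by accuracy and AUC) follows a standard scikit-learn pattern. The sketch below uses random placeholder features and labels rather than the study's PET/CT texture values, so it shows only the workflow, not the reported results.

```python
# Minimal sketch: random forest trained on placeholder "texture features",
# evaluated with accuracy and AUC as in the abstract above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))         # 6 texture features per patient (placeholder)
y = rng.integers(0, 2, size=100)      # lymph node metastasis label (placeholder)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)

pred = rf.predict(X_test)
prob = rf.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, pred))
print("AUC:", roc_auc_score(y_test, prob))
```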


Comparison of 3D Reconstruction Image and Medical Photograph of Neck Tumors (경부 종물에서 3차원 재건 영상과 적출 조직 사진의 비교)

  • Yoo, Young-Sam
    • Korean Journal of Head & Neck Oncology
    • /
    • v.26 no.2
    • /
    • pp.198-203
    • /
    • 2010
  • Objectives: Getting full information from axial CT images requires experience and knowledge. Sagittal and coronal images can give more information, but we still have to construct 3-dimensional images in our minds from this information. With the aid of 3D reconstruction software, CT data can be converted into visible 3D images. We compared medical photographs of 15 surgical specimens from neck tumors with 3D reconstructed images of the same patients. Material and Methods: Fifteen patients with surgically treated neck tumors were recruited. Medical photographs of the surgical specimens were collected for comparison. 3D reconstruction of the neck CT from the same patients with the aid of 3D-doctor software gave 3D images of the neck masses. The width and height of the tumors in the photographs and images from the same cases were calculated and compared statistically. Visual similarity between the photographs and 3D images was also rated. Results: No statistical difference was found in size between the medical photographs and the 3D images. Visual similarity scores between the two groups of images were high. Conclusion: The 3D reconstructed images of the neck masses looked like the real photographs of the excised masses, with similar calculated sizes. They can give us reliable visual information about the mass.
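
The size comparison hinges on measuring tumor width and height consistently in both image sources. A minimal illustration of that measurement step is sketched below, assuming a binary mask of the mass and a known pixel spacing; the paper does not specify its measurement tool, so this is only one plausible way to do it.

```python
# Minimal sketch: width/height of a segmented mass from a binary mask,
# converted from pixels to millimetres. Mask and spacing are placeholders.
import numpy as np

def width_height_mm(mask, pixel_spacing_mm):
    """Return (width, height) in mm of the nonzero region of a 2D binary mask."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return 0.0, 0.0
    width_px = xs.max() - xs.min() + 1
    height_px = ys.max() - ys.min() + 1
    return width_px * pixel_spacing_mm, height_px * pixel_spacing_mm

mask = np.zeros((200, 200), dtype=np.uint8)
mask[60:140, 50:150] = 1                             # hypothetical tumour silhouette
print(width_height_mm(mask, pixel_spacing_mm=0.5))   # -> (50.0, 40.0)
```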

Realtime 3D Reconstruction of the Surface on Cross Sectional Contour in CT Image (단면 윤곽선을 이용한 표면의 실시간 3차원 재구성)

  • Koo, J.Y.;Jung, S.B.;Min, H.G.;Hong, S.H.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1998 no.11
    • /
    • pp.189-190
    • /
    • 1998
  • In this paper, we present a realtime 3D reconstruction algorithm for sliced CT images. The preprocessing consists of thresholding, labeling, contouring, and dominant point extraction. We then reconstruct the 3D image from the dominant points using a dynamic matching technique. The software was implemented in Visual C++ 5.0 as a Windows-based application program.
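
The preprocessing chain named in the abstract (thresholding, labeling, contouring, dominant point extraction) maps directly onto standard image-processing primitives. A minimal OpenCV sketch for one slice is shown below; the input is a synthetic image, and the dynamic matching and surface reconstruction stages are not included.

```python
# Minimal sketch of the preprocessing chain on a single (synthetic) CT slice:
# thresholding, labeling, contouring, dominant point extraction.
import numpy as np
import cv2

slice_img = np.zeros((256, 256), dtype=np.uint8)
cv2.circle(slice_img, (128, 128), 60, 180, -1)        # fake anatomy on the slice

# 1) Thresholding: separate the object from the background
_, binary = cv2.threshold(slice_img, 100, 255, cv2.THRESH_BINARY)

# 2) Labeling: keep the largest connected component
num, labels, comp_stats, _ = cv2.connectedComponentsWithStats(binary)
largest = 1 + np.argmax(comp_stats[1:, cv2.CC_STAT_AREA])   # skip background label 0
component = ((labels == largest) * 255).astype(np.uint8)

# 3) Contouring: extract the boundary of the component
contours, _ = cv2.findContours(component, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contour = max(contours, key=cv2.contourArea)

# 4) Dominant points: polygonal approximation of the contour
dominant = cv2.approxPolyDP(contour, 2.0, True)
print("contour points:", len(contour), "-> dominant points:", len(dominant))
```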


MDCT Angiography of the Subclavian Artery Thrombosis of the 3D Findings (쇄골하동맥 혈전증에서의 MDCT 혈관조영술의 3D 영상)

  • Kweon, Dae Cheol
    • Journal of the Korean Society of Radiology
    • /
    • v.12 no.7
    • /
    • pp.813-819
    • /
    • 2018
  • To demonstrate the usefulness of 3D MDCT, 3D images of maximum intensity projection (MIP), volume rendering, and multiplanar reformation (MPR) were obtained from a 73-year-old male patient with subclavian thrombosis to clearly detect and locate the subclavian artery, and the data were provided for the patient's diagnosis and treatment. The scan data were reconstructed as 3D CT images: MIP, volume rendering, curved MPR, and virtual endoscopy images. In the 3D program, the ascending aorta was measured at 364.28 HU, the left carotid artery at 413.77 HU, and the left subclavian artery at 15.72 HU. The coronal MIP image shows occlusion of the left subclavian artery. Three-dimensional volume images were obtained with 100% permeability and a range of 87-1265 HU. The coronal and sagittal curved MPR images show occlusion of the subclavian artery due to thrombus using 3D image processing. In cases of subclavian arterial occlusion due to thrombosis, the patient can be scanned with MDCT, and 3D image processing can be used to confirm the occlusion of the subclavian artery.
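
A maximum intensity projection simply keeps the highest HU value along the viewing direction, which is why contrast-filled vessels stand out. The sketch below shows the projection step on a random placeholder volume; a real case would load the MDCT DICOM series instead.

```python
# Minimal sketch: maximum intensity projection (MIP) along each axis of a
# CT volume. The volume here is random data standing in for an MDCT series.
import numpy as np

volume = np.random.randint(-1000, 1400, size=(200, 512, 512), dtype=np.int16)  # z, y, x in HU

mip_axial    = volume.max(axis=0)   # project along z -> axial MIP
mip_coronal  = volume.max(axis=1)   # project along y -> coronal MIP (as in the report)
mip_sagittal = volume.max(axis=2)   # project along x -> sagittal MIP

print(mip_coronal.shape)  # one MIP image per projection direction
```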

3D Fusion Imaging based on Spectral Computed Tomography Using K-edge Images (K-각 영상을 이용한 스펙트럼 전산화단층촬영 기반 3차원 융합진단영상화에 관한 연구)

  • Kim, Burnyoung;Lee, Seungwan;Yim, Dobin
    • Journal of the Korean Society of Radiology
    • /
    • v.13 no.4
    • /
    • pp.523-530
    • /
    • 2019
  • The purpose of this study was to obtain K-edge images using a spectral CT system based on a photon-counting detector and to implement 3D fusion imaging using the conventional and spectral CT images. We also evaluated the clinical feasibility of the 3D fusion images through quantitative analysis of image quality. A spectral CT system based on a CdTe photon-counting detector was used to obtain the K-edge images. A pork phantom was manufactured with six tubes containing diluted iodine and gadolinium solutions. The K-edge images were obtained with low-energy thresholds of 35 and 52 keV for iodine and gadolinium imaging, using an X-ray spectrum generated at a tube voltage of 100 kVp with a tube current of 500 μA. We implemented 3D fusion imaging by combining the iodine and gadolinium K-edge images with the conventional CT images. The results showed that the CNRs of the 3D fusion images were 6.76-14.9 times higher than those of the conventional CT images. Also, the 3D fusion images were able to provide maps of the target materials. Therefore, the technique proposed in this study can improve the quality of CT images and the diagnostic efficiency through the additional information on the target materials.
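
The quantitative comparison relies on the contrast-to-noise ratio (CNR) of target regions in the fused versus conventional images. The sketch below shows one common CNR definition and a naive additive fusion of a K-edge material map with a conventional slice; both images are synthetic placeholders and the ROI positions are assumptions, not the phantom geometry used in the paper.

```python
# Minimal sketch: CNR of a target ROI against background, compared between a
# conventional slice and a slice fused with a K-edge material map.
import numpy as np

def cnr(image, roi_mask, bg_mask):
    """CNR = |mean(ROI) - mean(background)| / std(background)."""
    roi = image[roi_mask]
    bg = image[bg_mask]
    return abs(roi.mean() - bg.mean()) / bg.std()

rng = np.random.default_rng(1)
conventional = rng.normal(40, 20, size=(128, 128))   # noisy conventional CT slice
kedge_map = np.zeros((128, 128))
kedge_map[40:60, 40:60] = 120                        # iodine K-edge signal in one tube

# Simple fusion: overlay the K-edge material map on the conventional image
fused = conventional + kedge_map

roi = np.zeros((128, 128), dtype=bool); roi[45:55, 45:55] = True
bg = np.zeros((128, 128), dtype=bool);  bg[90:110, 90:110] = True

print("CNR conventional:", cnr(conventional, roi, bg))
print("CNR fused:       ", cnr(fused, roi, bg))
```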

Study of Appropriate Increment during VRT Rendering before Musculoskeletal Surgery (근골격계 수술전 VRT Rendering시 적절한 increment에 대한 연구)

  • Gang, Heon-Hyo;Kim, Dong-Hyun
    • Journal of the Korean Society of Radiology
    • /
    • v.13 no.5
    • /
    • pp.675-681
    • /
    • 2019
  • The purpose of this study was to investigate the effect of the reconstruction increment on 3D volume imaging of hand, knee, and foot human phantoms in CT. After analyzing the data, three-dimensional volumetric images were implemented using the MMWP program to evaluate reproducibility. First, the data amount for the three human phantoms at each increment was analyzed. Second, the reproducibility was evaluated and the measured lengths were compared. As a result of analyzing the amount of image data for each phantom according to the increment, it was confirmed that the amount of data is reduced to about 1/10 when the increment is set to 1.0 mm compared with an increment of 0.1 mm. In the reproducibility evaluation, gaps appeared from an increment of 0.7 mm for the hand phantom and 0.6 mm for the knee and foot phantoms, and it was confirmed that the measured lengths differed considerably from the actual phantom lengths, lowering the quality of the implementation. The closer the increment is to 1.0 mm, the smaller the number of images and the shorter the 3D implementation time. Therefore, it is best to determine the largest increment before gaps appear in the image and to apply that increment for preoperative diagnosis. We hope that this study will serve as an indicator for setting an accurate increment when implementing 3D images through VRT rendering after a CT scan.
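
The roughly tenfold data reduction from a 0.1 mm to a 1.0 mm increment follows directly from the number of reconstructed slices over a fixed scan range. The sketch below makes that arithmetic explicit under assumed values (a 300 mm range and a 512 x 512, 16-bit image matrix), which are not taken from the study.

```python
# Minimal sketch: how the reconstruction increment changes slice count and
# raw data size for a fixed scan range (assumed values, not the study's data).
scan_range_mm = 300.0
bytes_per_image = 512 * 512 * 2        # 512 x 512 pixels, 2 bytes per pixel

for increment_mm in (0.1, 0.5, 0.7, 1.0):
    n_images = int(scan_range_mm / increment_mm)
    size_mb = n_images * bytes_per_image / 1024**2
    print(f"increment {increment_mm:.1f} mm -> {n_images:5d} images, {size_mb:8.1f} MB")
```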

An Efficient CT Image Denoising using WT-GAN Model

  • Hae Chan Jeong;Dong Hoon Lim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.5
    • /
    • pp.21-29
    • /
    • 2024
  • Reducing the radiation dose during CT scanning can lower the risk of radiation exposure, but image quality deteriorates significantly and diagnostic effectiveness is reduced due to the resulting noise. Therefore, noise removal from CT images is an essential step in image restoration. Until now, there have been limitations in removing only the noise by separating it from the original signal in the image domain. In this paper, we aim to remove noise from CT images effectively using a wavelet transform-based GAN model, that is, a WT-GAN model operating in the frequency domain. The GAN model used here generates denoised images through a generator with a U-Net structure and a discriminator with a PatchGAN structure. To evaluate the performance of the proposed WT-GAN model, experiments were conducted on CT images degraded by various types of noise, namely Gaussian, Poisson, and speckle noise. The results show that the WT-GAN model outperforms the traditional BM3D filter as well as existing deep learning models such as DnCNN, CDAE, and U-Net GAN, both qualitatively and quantitatively in terms of PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index Measure).
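
The evaluation setup (degrade images with Gaussian, Poisson, and speckle noise, then score the restored output with PSNR and SSIM) can be expressed compactly with scikit-image. The sketch below substitutes a plain Gaussian blur for the WT-GAN denoiser, so it demonstrates only the metric pipeline, not the model.

```python
# Minimal sketch: add the three noise types mentioned above to a clean image
# and score a stand-in "denoiser" with PSNR and SSIM.
import numpy as np
from skimage import data, img_as_float, filters
from skimage.util import random_noise
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

clean = img_as_float(data.camera())               # placeholder for a clean CT slice

noisy = {
    "gaussian": random_noise(clean, mode="gaussian", var=0.01),
    "poisson":  random_noise(clean, mode="poisson"),
    "speckle":  random_noise(clean, mode="speckle", var=0.01),
}

for name, img in noisy.items():
    denoised = filters.gaussian(img, sigma=1)     # stand-in denoiser, not WT-GAN
    psnr = peak_signal_noise_ratio(clean, denoised, data_range=1.0)
    ssim = structural_similarity(clean, denoised, data_range=1.0)
    print(f"{name:8s}  PSNR = {psnr:5.2f} dB   SSIM = {ssim:.3f}")
```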

The elimination of the linear artifacts by the metal restorations in the three dimensional computed tomographic images using the personal computer and software (개인용 컴퓨터와 소프트웨어를 이용한 3차원 전산화단층영상에서의 금속 수복물에 의한 선상 오류의 제거)

  • Park Hyok;Lee Hee-Cheol;Kim Kee-Deog;Park Chang-Seo
    • Imaging Science in Dentistry
    • /
    • v.33 no.3
    • /
    • pp.151-159
    • /
    • 2003
  • Purpose: The purpose of this study was to evaluate the effectiveness and usefulness of newly developed personal-computer-based software for eliminating the linear artifacts caused by metal restorations. Materials and Methods: A 3D CT image was conventionally reconstructed using ADVANTAGE WINDOWS 2.0 3D Analysis software (GE Medical System, Milwaukee, USA), and the linear artifacts were eliminated manually. Next, a 3D CT image was reconstructed using V-works 4.0™ (Cybermed Inc., Seoul, Korea), and the linear artifacts were eliminated manually in the axial images by a skillful operator on a personal computer. Then, a 3D CT image was reconstructed using V-works 4.0™ and the linear artifacts were removed by a simplified algorithm program that eliminates them automatically in the axial images on a personal computer, abbreviating the manual editing procedure. Finally, the automatically edited reconstructed 3D images were compared with the manually edited images. Results and Conclusion: This algorithm eliminated the linear artifacts automatically, without the manual editing procedures, to some degree. However, programs based on more complicated and accurate algorithms may lead to a nearly flawless automatic elimination of these linear artifacts.
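
The abstract does not detail the simplified algorithm, but the general idea of suppressing metal-bright voxels in the axial slices before 3D reconstruction can be illustrated with a simple HU clip. The sketch below is an illustration of that general idea only, not the program evaluated in the paper; the threshold and replacement values are assumptions.

```python
# Minimal sketch of one simple automatic idea in the same spirit: clip very
# bright metal/streak voxels in each axial slice to a bone-level HU before
# surface reconstruction. Not the paper's algorithm; values are assumptions.
import numpy as np

def suppress_metal(volume_hu, metal_threshold=3000, replacement_hu=1500):
    """Clip voxels above metal_threshold in every axial slice to replacement_hu."""
    cleaned = volume_hu.copy()
    cleaned[cleaned > metal_threshold] = replacement_hu
    return cleaned

# Hypothetical CT volume (z, y, x) in HU with a few metal-bright voxels
volume = np.random.randint(-1000, 2000, size=(50, 256, 256)).astype(np.int16)
volume[25, 120:130, 120:130] = 3071                  # simulated amalgam restoration

cleaned = suppress_metal(volume)
print("max HU before:", volume.max(), "after:", cleaned.max())
```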
