• Title/Summary/Keyword: Deep Learning Reconstruction

Improvement in Image Quality and Visibility of Coronary Arteries, Stents, and Valve Structures on CT Angiography by Deep Learning Reconstruction

  • Chuluunbaatar Otgonbaatar;Jae-Kyun Ryu;Jaemin Shin;Ji Young Woo;Jung Wook Seo;Hackjoon Shim;Dae Hyun Hwang
    • Korean Journal of Radiology
    • /
    • v.23 no.11
    • /
    • pp.1044-1054
    • /
    • 2022
  • Objective: This study aimed to investigate whether a deep learning reconstruction (DLR) method improves the image quality, stent evaluation, and visibility of the valve apparatus in coronary computed tomography angiography (CCTA) compared with filtered back projection (FBP) and hybrid iterative reconstruction (IR) methods. Materials and Methods: CCTA images of 51 patients (mean age ± standard deviation [SD], 63.9 ± 9.8 years; 36 men) who underwent examination at a single institution were reconstructed using the DLR, FBP, and hybrid IR methods and reviewed. CT attenuation, image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and stent evaluation metrics, including the 10%-90% edge rise slope (ERS) and 10%-90% edge rise distance (ERD), were measured. Quantitative data are summarized as the mean ± SD. Subjective visual scores (1 = worst to 5 = best) were obtained for overall image quality, image noise, and the appearance of the stent, vessels, and aortic and tricuspid valve apparatus (annulus, leaflets, papillary muscles, and chordae tendineae). These parameters were compared between the DLR, FBP, and hybrid IR methods. Results: DLR provided higher Hounsfield unit (HU) values in the aorta and similar attenuation in fat and muscle compared with FBP and hybrid IR. Image noise in HU was significantly lower in DLR (12.6 ± 2.2) than in hybrid IR (24.2 ± 3.0) and FBP (54.2 ± 9.5) (p < 0.001). The SNR and CNR were significantly higher with DLR than with FBP and hybrid IR (p < 0.001). In the coronary stent, the mean ERS was significantly higher in DLR (1260.4 ± 242.5 HU/mm) than in FBP (801.9 ± 170.7 HU/mm) and hybrid IR (641.9 ± 112.0 HU/mm). The mean ERD was 0.8 ± 0.1 mm for DLR versus 1.1 ± 0.2 mm for both FBP and hybrid IR. Subjective visual scores were higher for DLR images than for images reconstructed with FBP and hybrid IR.
Conclusion: DLR provided better image quality than FBP and hybrid IR.
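The quantitative metrics above can be reproduced from region-of-interest statistics and a 1-D attenuation profile drawn across a stent strut. A minimal sketch, not the study's actual measurement code, assuming the profile rises monotonically across the edge:

```python
import numpy as np

def snr_cnr(roi_mean, noise_sd, background_mean):
    """SNR = ROI mean attenuation / noise SD; CNR = (ROI - background) / noise SD."""
    return roi_mean / noise_sd, (roi_mean - background_mean) / noise_sd

def edge_rise_metrics(profile, positions_mm):
    """10%-90% edge rise slope (HU/mm) and distance (mm) from a 1-D profile.

    `profile` (HU) must rise monotonically across the stent edge so the
    10% and 90% crossing positions can be found by interpolation.
    """
    lo, hi = profile.min(), profile.max()
    t10 = lo + 0.1 * (hi - lo)
    t90 = lo + 0.9 * (hi - lo)
    x10 = np.interp(t10, profile, positions_mm)  # position of the 10% crossing
    x90 = np.interp(t90, profile, positions_mm)  # position of the 90% crossing
    erd = x90 - x10                 # edge rise distance (mm)
    return (t90 - t10) / erd, erd   # edge rise slope (HU/mm), distance (mm)
```

A sharper stent edge yields a shorter ERD and a steeper ERS, which is the sense in which DLR's 0.8 mm / 1260.4 HU/mm outperforms FBP and hybrid IR above.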

Nuclear Medicine Physics: Review of Advanced Technology

  • Oh, Jungsu S.
    • Progress in Medical Physics
    • /
    • v.31 no.3
    • /
    • pp.81-98
    • /
    • 2020
  • This review aims to provide a brief, comprehensive overview of advanced technologies of nuclear medicine physics, with a focus on recent developments from both hardware and software perspectives. Developments in image acquisition/reconstruction, especially time-of-flight and point spread function modeling, offer potential advantages in the image signal-to-noise ratio and spatial resolution. Modern detector materials and devices (including lutetium oxyorthosilicate, cadmium zinc telluride, and silicon photomultipliers) as well as modern nuclear medicine imaging systems (including positron emission tomography [PET]/computed tomography [CT], whole-body PET, PET/magnetic resonance [MR], and digital PET) enable not only high-quality digital image acquisition but also subsequent image processing, including image reconstruction and post-reconstruction methods. Moreover, theranostics in nuclear medicine extends the usefulness of nuclear medicine physics far beyond quantitative image-based diagnosis, playing a key role in personalized/precision medicine by raising the importance of internal radiation dosimetry in nuclear medicine. Now that deep-learning-based image processing can be incorporated into nuclear medicine image acquisition/processing, the aforementioned fields of nuclear medicine physics face the new era of Industry 4.0. Ongoing technological developments in nuclear medicine physics are leading to enhanced image quality and decreased radiation exposure as well as quantitative and personalized healthcare.

3D Mesh Reconstruction Technique from Single Image using Deep Learning and Sphere Shape Transformation Method (딥러닝과 구체의 형태 변형 방법을 이용한 단일 이미지에서의 3D Mesh 재구축 기법)

  • Kim, Jeong-Yoon;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.26 no.2
    • /
    • pp.160-168
    • /
    • 2022
  • In this paper, we propose a 3D mesh reconstruction method from a single image using deep learning and a sphere shape transformation method. The proposed method differs from existing methods in two ways. First, instead of building edges or faces by connecting nearby points, a deep learning network moves the vertices of a sphere so that they closely match the 3D point cloud of the object. Because only an addition between each sphere vertex and its predicted offset is required, the method needs less memory and runs faster. Second, the 3D mesh is reconstructed by applying the sphere's surface information to the modified vertices. Even when the distances between the points of the resulting point cloud are not uniform, the face information inherited from the sphere indicates which points are connected, preventing oversimplification or loss of surface detail. To evaluate the objective reliability of the proposed method, experiments were conducted in the same way as in the comparison papers using ShapeNet, an open standard dataset. The proposed method achieved an IoU of 0.581 and a chamfer distance of 0.212; a higher IoU and a lower chamfer distance indicate better results. The proposed 3D mesh reconstruction is therefore more efficient than the methods published in other papers.
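The core reconstruction step (a single element-wise addition of predicted offsets onto a fixed sphere template, evaluated with chamfer distance) can be sketched as follows. This is an illustrative stand-in, not the paper's network: random unit vectors replace a real UV/ico-sphere, and random tensors replace the network's predicted offsets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Template: vertices on a unit sphere. The sphere's face list (not shown)
# is kept unchanged, so the deformed point cloud inherits surface topology.
n_verts = 256
verts = rng.normal(size=(n_verts, 3))
verts /= np.linalg.norm(verts, axis=1, keepdims=True)

# The network's only job is to regress one 3-D offset per vertex;
# a random tensor stands in for the predicted offsets here.
offsets = 0.1 * rng.normal(size=(n_verts, 3))

# Reconstruction step: one addition moves each sphere vertex toward the
# target surface, so no nearest-neighbour edge/face building is needed.
deformed = verts + offsets

def chamfer_distance(a, b):
    """Symmetric chamfer distance between point sets a (N,3) and b (M,3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

The cheap addition is what gives the claimed memory and speed advantage over methods that search for neighbouring points to connect.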

3D Object Generation and Renderer System based on VAE ResNet-GAN

  • Min-Su Yu;Tae-Won Jung;GyoungHyun Kim;Soonchul Kwon;Kye-Dong Jung
    • International journal of advanced smart convergence
    • /
    • v.12 no.4
    • /
    • pp.142-146
    • /
    • 2023
  • We present a method for generating and rendering 3D objects by combining a VAE (Variational Autoencoder) and a GAN (Generative Adversarial Network). The approach improves the quality of generated 3D models by using residual learning in the encoder: the encoder layers are stacked deeply to capture image features accurately, and residual blocks are applied to avoid the gradient vanishing and exploding problems that arise when constructing deep neural networks. The generated model has more detailed voxels for a more accurate representation, is rendered with materials and lighting, and is finally converted into a mesh model. The resulting 3D models have good visual quality and accuracy, making them useful in fields such as virtual reality, game development, and the metaverse.
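The residual blocks that keep a deeply stacked encoder trainable compute out = relu(x + F(x)): the identity shortcut lets gradients bypass the stacked layers. A minimal NumPy sketch of one block's forward pass (the paper's actual encoder is convolutional; plain matrix weights stand in here):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Forward pass of a plain residual block: relu(x + W2 @ relu(W1 @ x)).

    The residual branch W2 @ relu(W1 @ x) learns only a correction to x;
    the identity shortcut carries x (and its gradient) through unchanged.
    """
    return relu(x + w2 @ relu(w1 @ x))
```

With zero weights the block degenerates to relu(x), i.e. the shortcut alone, which is why stacking many such blocks cannot make the network worse than a shallower one.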

Improving the quality of light-field data extracted from a hologram using deep learning

  • Dae-youl Park;Joongki Park
    • ETRI Journal
    • /
    • v.46 no.2
    • /
    • pp.165-174
    • /
    • 2024
  • We propose a method to suppress the speckle noise and blur of the light field extracted from a hologram using a deep-learning technique. The light field can be extracted by bandpass filtering in the hologram's frequency domain. The extracted light field has reduced spatial resolution owing to the limited passband size of the bandpass filter and the blurring that occurs when the object is far from the hologram plane; it also contains speckle noise caused by the random phase distribution of the three-dimensional object surface. These limitations degrade the reconstruction quality of a hologram resynthesized from the extracted light field. In the proposed method, a deep-learning model based on a generative adversarial network is designed to suppress the speckle noise and blurring, improving the quality of the light field extracted from the hologram. The model is trained on pairs of original two-dimensional images and the corresponding light-field data extracted from the complex fields generated from those images. The proposed method is validated on light-field data extracted from holograms of objects at single and multiple depths and from mesh-based computer-generated holograms.
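The extraction step described above, bandpass filtering in the hologram's frequency domain, can be sketched with a 2-D FFT. The passband center and radius below are illustrative values, not the paper's parameters:

```python
import numpy as np

def bandpass_view(hologram, center, radius):
    """Extract one light-field view by bandpass filtering the hologram.

    A circular passband of `radius` bins around `center` is kept in the
    (shifted) spectrum and inverse-transformed. The small passband is what
    limits the extracted view's spatial resolution - the degradation the
    paper's GAN is trained to undo.
    """
    F = np.fft.fftshift(np.fft.fft2(hologram))
    ky, kx = np.indices(F.shape)
    cy, cx = center
    mask = (ky - cy) ** 2 + (kx - cx) ** 2 <= radius ** 2
    return np.fft.ifft2(np.fft.ifftshift(F * mask))
```

Because the mask only removes spectral energy, each extracted view carries at most the energy of the original complex field (Parseval's relation).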

Algorithm for Determining Whether Work Data is Normal using Autoencoder (오토인코더를 이용한 작업 데이터 정상 여부 판단 알고리즘)

  • Kim, Dong-Hyun;Oh, Jeong Seok
    • Journal of the Korean Institute of Gas
    • /
    • v.25 no.5
    • /
    • pp.63-69
    • /
    • 2021
  • In this study, we established an algorithm that uses the reconstruction-error threshold of an autoencoder to determine whether work in a gas facility is normal or abnormal. The algorithm trains the autoencoder only on time-series data from normal work and derives an optimized threshold for the reconstruction error of normal work. Applying the algorithm to the time-series data of a new work yields its reconstruction error, which is then compared with the threshold to classify the work as normal or abnormal. To train and validate the algorithm, we defined work in a virtual gas facility and constructed a training set consisting only of normal work data and a validation set containing both normal and abnormal work data.
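The decision rule follows directly from the abstract: train only on normal windows, set a threshold on their reconstruction errors, and flag new windows that exceed it. In this sketch the trained autoencoder is a stand-in callable, and mean + 3·std is one common threshold choice; the paper derives its own optimized threshold.

```python
import numpy as np

def reconstruction_error(model, window):
    """Mean squared error between a time-series window and its reconstruction."""
    return float(np.mean((window - model(window)) ** 2))

def fit_threshold(model, normal_windows, k=3.0):
    """Threshold from normal-only data: mean + k*std of reconstruction errors."""
    errs = np.array([reconstruction_error(model, w) for w in normal_windows])
    return errs.mean() + k * errs.std()

def is_normal(model, window, threshold):
    """A new work window is classified normal iff its error stays under the threshold."""
    return reconstruction_error(model, window) <= threshold
```

An autoencoder trained only on normal work reconstructs normal windows well (small error) but reconstructs unseen abnormal patterns poorly (large error), which is what makes a single scalar threshold sufficient.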

Validation of Deep-Learning Image Reconstruction for Low-Dose Chest Computed Tomography Scan: Emphasis on Image Quality and Noise

  • Joo Hee Kim;Hyun Jung Yoon;Eunju Lee;Injoong Kim;Yoon Ki Cha;So Hyeon Bak
    • Korean Journal of Radiology
    • /
    • v.22 no.1
    • /
    • pp.131-138
    • /
    • 2021
  • Objective: Iterative reconstruction degrades image quality. Thus, further advances in image reconstruction are necessary to overcome some limitations of this technique in low-dose computed tomography (LDCT) scans of the chest. Deep-learning image reconstruction (DLIR) is a new method used to reduce dose while maintaining image quality. The purpose of this study was to evaluate the image quality and noise of LDCT images reconstructed with DLIR and to compare them with those of images reconstructed with adaptive statistical iterative reconstruction-Veo at a level of 30% (ASiR-V 30%). Materials and Methods: This retrospective study included 58 patients who underwent LDCT for lung cancer screening. Datasets were reconstructed with ASiR-V 30% and with DLIR at medium and high levels (DLIR-M and DLIR-H, respectively). The objective image signal and noise, representing the mean attenuation value and standard deviation in Hounsfield units for the lungs, mediastinum, liver, and background air, as well as the subjective image contrast, image noise, and conspicuity of structures, were evaluated. The differences between images reconstructed with ASiR-V 30%, DLIR-M, and DLIR-H were evaluated. Results: In the objective analysis, the image signals did not significantly differ among ASiR-V 30%, DLIR-M, and DLIR-H (p = 0.949, 0.737, 0.366, and 0.358 in the lungs, mediastinum, liver, and background air, respectively). However, the noise was significantly lower in DLIR-M and DLIR-H than in ASiR-V 30% (all p < 0.001). DLIR had a higher signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) than ASiR-V 30% (p = 0.027, < 0.001, and < 0.001 for the SNR of the lungs, mediastinum, and liver, respectively; all p < 0.001 for the CNR). In the subjective analysis, DLIR had higher image contrast and lower image noise than ASiR-V 30% (all p < 0.001). DLIR was superior to ASiR-V 30% in identifying the pulmonary arteries and veins, trachea and bronchi, lymph nodes, and pleura and pericardium (all p < 0.001). Conclusion: DLIR significantly reduced image noise in chest LDCT images compared with ASiR-V 30% while maintaining superior image quality.

A Study on Pre-processing for the Classification of Rare Classes (희소 클래스 분류 문제 해결을 위한 전처리 연구)

  • Ryu, Kyungjoon;Shin, Dongkyoo;Shin, Dongil
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2020.05a
    • /
    • pp.472-475
    • /
    • 2020
    Datasets from various real-world domains are being applied to machine learning problems. In the information security field, many studies have analyzed cyberspace attack traffic data with machine learning. In this paper, we study how to resolve the degradation in classification performance caused by the data imbalance problem, which commonly arises in real-world data, when classifying attack data accurately by type. We restructured the data from the perspective of rare classes, removed features that adversely affect machine learning, and evaluated classification performance using a DNN (Deep Neural Network) model.
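One common preprocessing step for such class imbalance is random oversampling of the rare classes before training the DNN. A minimal sketch of that rebalancing; the paper's exact restructuring and feature-removal steps are not specified here:

```python
import numpy as np

def oversample_minority(X, y, seed=0):
    """Duplicate minority-class rows (with replacement) until every class
    matches the majority-class count, one simple rebalancing choice."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    Xs, ys = [X], [y]
    for cls, cnt in zip(classes, counts):
        if cnt < target:
            # sample extra indices from the rare class until it reaches `target`
            idx = rng.choice(np.flatnonzero(y == cls), size=target - cnt)
            Xs.append(X[idx])
            ys.append(y[idx])
    return np.concatenate(Xs), np.concatenate(ys)
```

Rebalancing the training set this way keeps the classifier from trivially ignoring rare attack types, though it can also encourage overfitting to the duplicated rows.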

Recent Trends and Prospects of 3D Content Using Artificial Intelligence Technology (인공지능을 이용한 3D 콘텐츠 기술 동향 및 향후 전망)

  • Lee, S.W.;Hwang, B.W.;Lim, S.J.;Yoon, S.U.;Kim, T.J.;Kim, K.N.;Kim, D.H.;Park, C.J.
    • Electronics and Telecommunications Trends
    • /
    • v.34 no.4
    • /
    • pp.15-22
    • /
    • 2019
  • Recent technological advances in three-dimensional (3D) sensing devices and machine learning, such as deep learning, have enabled data-driven 3D applications. Research on artificial intelligence has advanced over the past few years, and 3D deep learning has been introduced. This is the result of the availability of high-quality big data, increases in computing power, and the development of new algorithms; before the introduction of 3D deep learning, the main targets of deep learning were one-dimensional (1D) audio files and two-dimensional (2D) images. The field of deep learning has extended from discriminative models, such as classification/segmentation/reconstruction models, to generative models, such as style transfer and the generation of non-existent data. Unlike 2D learning data, 3D learning data are not easy to acquire. Although low-cost 3D data acquisition sensors have become increasingly popular owing to advances in 3D vision technology, generating and acquiring 3D data is still very difficult, and even when 3D data can be acquired, post-processing remains a significant problem. Moreover, existing network models such as convolutional networks are not easy to apply directly owing to the various ways in which 3D data are represented. In this paper, we summarize technological trends in AI-based 3D content generation.

Deep Learning-based Super Resolution for Phase-only Holograms (위상 홀로그램을 위한 딥러닝 기반의 초고해상도)

  • Kim, Woosuk;Park, Byung-Seo;Kim, Jin-Kyum;Oh, Kwan-Jung;Kim, Jin-Woong;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of Broadcast Engineering
    • /
    • v.25 no.6
    • /
    • pp.935-943
    • /
    • 2020
  • In this paper, we propose a deep-learning method for displaying phase holograms at high resolution. When a general interpolation method is used, the brightness of the reconstruction result is lowered, and noise and afterimages occur. To solve this problem, holograms were trained with a neural network architecture that has shown good performance in single-image super resolution (SISR). As a result, the problems in the reconstruction result were mitigated and the resolution was increased. In addition, by adjusting the number of channels to increase capacity, the result improved by more than 0.3 dB under the same training conditions.
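The "0.3 dB" gain above refers to peak signal-to-noise ratio, the standard fidelity metric in super-resolution work. For reference, a minimal PSNR implementation:

```python
import numpy as np

def psnr_db(reference, reconstructed, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).

    `peak` is the maximum possible pixel value (1.0 for normalized
    images, 255 for 8-bit images).
    """
    mse = np.mean((reference - reconstructed) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Because the scale is logarithmic, a 0.3 dB improvement corresponds to roughly a 7% reduction in mean squared error.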