• Title/Abstract/Keyword: Deep Learning Reconstruction

Search results: 108

짝지어진 데이터셋을 이용한 분할-정복 U-net 기반 고화질 초음파 영상 복원 (A Divide-Conquer U-Net Based High-Quality Ultrasound Image Reconstruction Using Paired Dataset)

  • 유민하;안치영
    • 대한의용생체공학회:의공학회지 / Vol. 45, No. 3 / pp. 118-127 / 2024
  • Deep learning methods for enhancing the quality of medical images commonly use unpaired datasets because it is impractical to acquire paired datasets through commercial imaging systems. In this paper, we propose a supervised learning method to enhance the quality of ultrasound images. The U-net model incorporates a divide-and-conquer approach that splits an image into four parts and processes each part separately, in order to overcome data shortage and shorten the training time. The proposed model is trained on a paired dataset consisting of 828 pairs of low-quality and high-quality images with a resolution of 512x512 pixels, obtained by varying the number of channels for the same subject. Of the 828 image pairs, 684 are used as the training set and the remaining 144 as the test set. In the test results, the average Mean Squared Error (MSE) was reduced from 87.6884 in the low-quality images to 45.5108 in the restored images. Additionally, the average Peak Signal-to-Noise Ratio (PSNR) improved from 28.7550 to 31.8063, and the average Structural Similarity Index (SSIM) increased from 0.4755 to 0.8511, demonstrating a significant enhancement in image quality.
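The core of the entry above is the divide-and-conquer step: the 512x512 input is split into four quadrants, each quadrant is restored separately, and the outputs are stitched back together. A minimal PyTorch sketch of that patch-wise processing follows; `TinyUNetBlock` is a placeholder for the paper's U-net, whose exact configuration is not given here.

```python
# Minimal sketch of the divide-and-conquer idea: split a 512x512 image into
# four 256x256 quadrants, restore each with a placeholder network, and stitch
# the outputs back together. The paper's actual U-net is not reproduced.
import torch
import torch.nn as nn

class TinyUNetBlock(nn.Module):
    """Placeholder for the paper's U-net; any image-to-image network fits here."""
    def __init__(self, ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def restore_divide_conquer(model, image):
    """image: (B, C, 512, 512) low-quality input -> (B, C, 512, 512) restored."""
    b, c, h, w = image.shape
    hh, hw = h // 2, w // 2
    quadrants = [
        image[:, :, :hh, :hw], image[:, :, :hh, hw:],
        image[:, :, hh:, :hw], image[:, :, hh:, hw:],
    ]
    restored = [model(q) for q in quadrants]        # process each part separately
    top = torch.cat(restored[:2], dim=3)            # stitch left/right halves
    bottom = torch.cat(restored[2:], dim=3)
    return torch.cat([top, bottom], dim=2)          # stitch top/bottom halves

if __name__ == "__main__":
    model = TinyUNetBlock()
    x = torch.randn(1, 1, 512, 512)                 # stand-in ultrasound frame
    print(restore_divide_conquer(model, x).shape)   # torch.Size([1, 1, 512, 512])
```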

Cycle-Consistent Generative Adversarial Network: Effect on Radiation Dose Reduction and Image Quality Improvement in Ultralow-Dose CT for Evaluation of Pulmonary Tuberculosis

  • Chenggong Yan;Jie Lin;Haixia Li;Jun Xu;Tianjing Zhang;Hao Chen;Henry C. Woodruff;Guangyao Wu;Siqi Zhang;Yikai Xu;Philippe Lambin
    • Korean Journal of Radiology / Vol. 22, No. 6 / pp. 983-993 / 2021
  • Objective: To investigate the image quality of ultralow-dose CT (ULDCT) of the chest reconstructed using a cycle-consistent generative adversarial network (CycleGAN)-based deep learning method in the evaluation of pulmonary tuberculosis. Materials and Methods: Between June 2019 and November 2019, 103 patients (mean age, 40.8 ± 13.6 years; 61 men and 42 women) with pulmonary tuberculosis were prospectively enrolled to undergo standard-dose CT (120 kVp with automated exposure control), followed immediately by ULDCT (80 kVp and 10 mAs). The images of the two successive scans were used to train the CycleGAN framework for image-to-image translation. The denoising efficacy of the CycleGAN algorithm was compared with that of hybrid and model-based iterative reconstruction. Repeated-measures analysis of variance and the Wilcoxon signed-rank test were performed to compare the objective measurements and the subjective image quality scores, respectively. Results: With the optimized CycleGAN denoising model, using the ULDCT images as input, the peak signal-to-noise ratio and structural similarity index improved by 2.0 dB and 0.21, respectively. The CycleGAN-generated denoised ULDCT images typically provided satisfactory image quality for optimal visibility of anatomic structures and pathological findings, with a lower level of image noise (mean ± standard deviation [SD], 19.5 ± 3.0 Hounsfield units [HU]) than that of hybrid iterative reconstruction (66.3 ± 10.5 HU, p < 0.001) and a noise level similar to that of model-based iterative reconstruction (19.6 ± 2.6 HU, p > 0.908). The CycleGAN-generated images showed the highest contrast-to-noise ratios for the pulmonary lesions, followed by model-based and hybrid iterative reconstruction. The mean effective radiation dose of ULDCT was 0.12 mSv, a mean reduction of 93.9% compared to standard-dose CT. Conclusion: The optimized CycleGAN technique may allow the synthesis of diagnostically acceptable images from ULDCT of the chest for the evaluation of pulmonary tuberculosis.
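As a reference for the unpaired training described above, the following is a minimal PyTorch sketch of a CycleGAN-style generator objective: an adversarial term plus a cycle-consistency term between the ULDCT and standard-dose domains. The networks, loss weights, and discriminator here are toy stand-ins, not the study's actual models.

```python
# Minimal sketch of a CycleGAN objective for ULDCT denoising: two generators
# (G: ULDCT -> standard-dose look, F: the reverse) trained with an adversarial
# loss plus a cycle-consistency term. All networks are placeholders.
import torch
import torch.nn as nn

def conv_net(ch=1):
    return nn.Sequential(nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, ch, 3, padding=1))

G, F = conv_net(), conv_net()                         # ULD->SD and SD->ULD generators
D_sd = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1))   # toy standard-dose discriminator
l1, mse = nn.L1Loss(), nn.MSELoss()

def generator_loss(uld, sd, lambda_cyc=10.0):
    fake_sd = G(uld)                                  # denoised (standard-dose-like) image
    pred = D_sd(fake_sd)
    adv = mse(pred, torch.ones_like(pred))            # least-squares GAN term
    cycle = l1(F(fake_sd), uld) + l1(G(F(sd)), sd)    # cycle consistency in both directions
    return adv + lambda_cyc * cycle

uld = torch.randn(2, 1, 64, 64)                       # stand-in ultralow-dose patches
sd = torch.randn(2, 1, 64, 64)                        # stand-in standard-dose patches
print(generator_loss(uld, sd).item())
```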

Improving Diagnostic Performance of MRI for Temporal Lobe Epilepsy With Deep Learning-Based Image Reconstruction in Patients With Suspected Focal Epilepsy

  • Pae Sun Suh;Ji Eun Park;Yun Hwa Roh;Seonok Kim;Mina Jung;Yong Seo Koo;Sang-Ahm Lee;Yangsean Choi;Ho Sung Kim
    • Korean Journal of Radiology / Vol. 25, No. 4 / pp. 374-383 / 2024
  • Objective: To evaluate the diagnostic performance and image quality of 1.5-mm slice thickness MRI with deep learning-based image reconstruction (1.5-mm MRI + DLR) compared to routine 3-mm slice thickness MRI (routine MRI) and 1.5-mm slice thickness MRI without DLR (1.5-mm MRI without DLR) for evaluating temporal lobe epilepsy (TLE). Materials and Methods: This retrospective study included 117 MR image sets comprising 1.5-mm MRI + DLR, 1.5-mm MRI without DLR, and routine MRI from 117 consecutive patients (mean age, 41 years; 61 female; 34 patients with TLE and 83 without TLE). Two neuroradiologists evaluated the presence of hippocampal or temporal lobe lesions, volume loss, signal abnormalities, loss of internal structure of the hippocampus, and lesion conspicuity in the temporal lobe. Reference standards for TLE were independently constructed by neurologists using clinical and radiological findings. Subjective image quality, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were analyzed. Performance in diagnosing TLE, lesion findings, and image quality were compared among the three protocols. Results: The pooled sensitivity of 1.5-mm MRI + DLR (91.2%) for diagnosing TLE was higher than that of routine MRI (72.1%, P < 0.001). In the subgroup analysis, 1.5-mm MRI + DLR showed higher sensitivity for hippocampal lesions than routine MRI (92.7% vs. 75.0%, P = 0.001), with improved depiction of hippocampal T2 high signal intensity change (P = 0.016) and loss of internal structure (P < 0.001). However, the pooled specificity of 1.5-mm MRI + DLR (76.5%) was lower than that of routine MRI (89.2%, P = 0.004). Compared with 1.5-mm MRI without DLR, 1.5-mm MRI + DLR resulted in significantly improved pooled accuracy (91.2% vs. 73.1%, P = 0.010), image quality, SNR, and CNR (all, P < 0.001). Conclusion: The use of 1.5-mm MRI + DLR enhanced the performance of MRI in diagnosing TLE, particularly in hippocampal evaluation, because of improved depiction of hippocampal abnormalities and enhanced image quality.

Stage-GAN with Semantic Maps for Large-scale Image Super-resolution

  • Wei, Zhensong;Bai, Huihui;Zhao, Yao
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 8 / pp. 3942-3961 / 2019
  • Recently, deep super-resolution networks have successfully learned the non-linear mapping from low-resolution inputs to high-resolution outputs. However, for large scaling factors, this approach has difficulty learning the relation between low-resolution and high-resolution images, which leads to poor restoration. In this paper, we propose Stage Generative Adversarial Networks (Stage-GAN) with semantic maps for image super-resolution (SR) at large scaling factors. We decompose the task of image super-resolution into a novel semantic-map-based reconstruction and refinement process. In the initial stage, semantic maps based on the given low-resolution images are generated by Stage-0 GAN. In the next stage, the generated semantic maps from Stage-0 and the corresponding low-resolution images are used to yield high-resolution images by Stage-1 GAN. To remove reconstruction artifacts and blur from the high-resolution images, a Stage-2 GAN-based post-processing module is proposed in the last stage, which can reconstruct high-resolution images with photo-realistic details. Extensive experiments and comparisons with other SR methods demonstrate that our proposed method can restore photo-realistic images with visual improvements. For scale factor ×8, our method performs favorably against other methods in terms of gradient similarity.
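A minimal sketch of the three-stage pipeline described above is given below, with placeholder networks: Stage-0 predicts a semantic map from the low-resolution input, Stage-1 combines the map and the input to produce a ×8 output, and Stage-2 applies residual refinement. The layer choices (including the PixelShuffle upsampler) are illustrative assumptions, not the paper's architecture.

```python
# Sketch of the three-stage Stage-GAN pipeline with placeholder generators.
import torch
import torch.nn as nn

class Stage0(nn.Module):                       # LR image -> semantic map (same size)
    def __init__(self, classes=8):
        super().__init__()
        self.net = nn.Conv2d(3, classes, 3, padding=1)
    def forward(self, lr):
        return self.net(lr).softmax(dim=1)

class Stage1(nn.Module):                       # LR + semantic map -> x8 upscaled image
    def __init__(self, classes=8, scale=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + classes, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )
    def forward(self, lr, sem):
        return self.net(torch.cat([lr, sem], dim=1))

class Stage2(nn.Module):                       # post-processing / artifact removal
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, 3, padding=1)
    def forward(self, hr):
        return hr + self.net(hr)               # residual refinement

lr = torch.randn(1, 3, 32, 32)                 # stand-in low-resolution input
s0, s1, s2 = Stage0(), Stage1(), Stage2()
sem = s0(lr)                                   # Stage-0: semantic map
hr = s2(s1(lr, sem))                           # Stage-1 then Stage-2
print(hr.shape)                                # torch.Size([1, 3, 256, 256])
```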

MLP-Mixer를 이용한 이미지 이상탐지 (Image Anomaly Detection Using MLP-Mixer)

  • 황주효;진교홍
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2022 Spring Conference / pp. 104-107 / 2022
  • Autoencoder-based deep learning models often reconstruct even anomalous data as if they were normal, which can make them unsuitable for anomaly detection. In addition, the inpainting approach, which masks part of the data and then restores the masked region, suffers from poor reconstruction on noisy images. In this paper, a modified and improved MLP-Mixer model is used: the image is masked at a fixed ratio, and the compressed information of the masked image is passed to the model to reconstruct the image. After building a model trained only on normal data from the MVTec AD dataset, normal and anomalous images were each fed to the model, their reconstruction errors were computed, and anomaly detection was performed based on these errors. The evaluation results show that the proposed method achieves better anomaly detection performance than existing approaches.
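To make the masked-reconstruction idea above concrete, here is a minimal PyTorch sketch: patches are masked at a fixed ratio, a Mixer-style stand-in model reconstructs them, and the per-image reconstruction error is used as the anomaly score. `TinyMixer`, the patch size, and the mask ratio are illustrative assumptions rather than the paper's modified MLP-Mixer.

```python
# Masked-reconstruction anomaly scoring with a Mixer-style stand-in model.
import torch
import torch.nn as nn

class TinyMixer(nn.Module):
    """Stand-in reconstruction model operating on flattened 16x16 patches."""
    def __init__(self, num_patches=16, patch_dim=16 * 16):
        super().__init__()
        self.token_mlp = nn.Sequential(nn.Linear(num_patches, 64), nn.GELU(),
                                       nn.Linear(64, num_patches))
        self.channel_mlp = nn.Sequential(nn.Linear(patch_dim, 256), nn.GELU(),
                                         nn.Linear(256, patch_dim))
    def forward(self, patches):                      # (B, num_patches, patch_dim)
        x = patches + self.token_mlp(patches.transpose(1, 2)).transpose(1, 2)
        return x + self.channel_mlp(x)

def patchify(img, p=16):                             # (B, 1, 64, 64) -> (B, 16, 256)
    b, c, h, w = img.shape
    return img.unfold(2, p, p).unfold(3, p, p).reshape(b, -1, p * p)

def anomaly_score(model, img, mask_ratio=0.5):
    patches = patchify(img)
    mask = (torch.rand(patches.shape[:2]) < mask_ratio).unsqueeze(-1)
    recon = model(patches.masked_fill(mask, 0.0))    # reconstruct from masked input
    return ((recon - patches) ** 2).mean(dim=(1, 2)) # per-image reconstruction error

model = TinyMixer()
images = torch.rand(4, 1, 64, 64)                    # stand-in MVTec-style crops
print(anomaly_score(model, images))                  # higher score => more anomalous
```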


Multimode-fiber Speckle Image Reconstruction Based on Multiscale Convolution and a Multidimensional Attention Mechanism

  • Kai Liu;Leihong Zhang;Runchu Xu;Dawei Zhang;Haima Yang;Quan Sun
    • Current Optics and Photonics / Vol. 8, No. 5 / pp. 463-471 / 2024
  • Multimode fibers (MMFs) possess high information throughput and a small core diameter, making them highly promising for applications such as endoscopy and communication. However, modal dispersion hinders the direct use of MMFs for image transmission. By training neural networks on time-series waveforms collected from MMFs, it is possible to reconstruct images, transforming blurred speckle patterns into recognizable images. This paper proposes a fully convolutional neural-network model, MSMDFNet, for image restoration in MMFs. The network employs an encoder-decoder architecture, integrating multiscale convolutional modules in the decoding layers to enlarge the receptive field for feature extraction. Additionally, attention mechanisms are incorporated along both the spatial and channel dimensions to improve the network's feature-perception capabilities. The algorithm demonstrates excellent performance on MNIST and Fashion-MNIST datasets collected through MMFs, showing significant improvements in various metrics such as SSIM.
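The two ingredients named above, multiscale convolution and channel/spatial attention, can be sketched as follows in PyTorch. This is a generic decoder block under assumed kernel sizes (3/5/7) and a CBAM-like attention layout, not the actual MSMDFNet.

```python
# Generic multiscale-convolution block followed by channel and spatial attention.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel 3/5/7 convolutions widen the receptive field, then fuse by 1x1 conv."""
    def __init__(self, ch):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, k, padding=k // 2) for k in (3, 5, 7))
        self.fuse = nn.Conv2d(3 * ch, ch, 1)
    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

class ChannelSpatialAttention(nn.Module):
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.channel = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                     nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(),
                                     nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(1, 1, 7, padding=3), nn.Sigmoid())
    def forward(self, x):
        x = x * self.channel(x)                                  # reweight channels
        return x * self.spatial(x.mean(dim=1, keepdim=True))     # reweight positions

block = nn.Sequential(MultiScaleBlock(16), ChannelSpatialAttention(16))
print(block(torch.randn(1, 16, 28, 28)).shape)                   # torch.Size([1, 16, 28, 28])
```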

AdaMM-DepthNet: Unsupervised Adaptive Depth Estimation Guided by Min and Max Depth Priors for Monocular Images

  • ;김문철
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송∙미디어공학회 2020 Fall Conference / pp. 252-255 / 2020
  • Unsupervised deep learning methods have shown impressive results for the challenging monocular depth estimation task, a field of study that has gained attention in recent years. A common approach for this task is to train a deep convolutional neural network (DCNN) via an image synthesis sub-task, where additional views are utilized during training to minimize a photometric reconstruction error. Previous unsupervised depth estimation networks are trained within a fixed depth estimation range, irrespective of the possible range for a given image, leading to suboptimal estimates. To overcome this limitation, we first propose an unsupervised adaptive depth estimation method guided by minimum and maximum (min-max) depth priors for a given input image. Incorporating min-max depth priors can drastically reduce the depth estimation complexity and produce depth estimates with higher accuracy. Moreover, we propose a novel network architecture for adaptive depth estimation, called AdaMM-DepthNet, which adopts min-max depth estimation at its front end. Extensive experimental results demonstrate that adaptive depth estimation can significantly boost accuracy with fewer parameters than conventional approaches that use a fixed minimum and maximum depth range.
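The min-max prior idea above amounts to rescaling a normalized depth prediction into a per-image [d_min, d_max] range instead of a fixed global range. A minimal PyTorch sketch follows; the network, the prior values, and how the priors are obtained are all placeholder assumptions.

```python
# Rescale a normalized depth prediction into per-image min/max depth priors.
import torch
import torch.nn as nn

class ToyDepthHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, img, d_min, d_max):
        norm = self.net(img)                                  # (B, 1, H, W) in [0, 1]
        d_min = d_min.view(-1, 1, 1, 1)
        d_max = d_max.view(-1, 1, 1, 1)
        return d_min + (d_max - d_min) * norm                 # adaptive depth range

model = ToyDepthHead()
imgs = torch.randn(2, 3, 64, 64)
d_min = torch.tensor([0.5, 2.0])                              # per-image priors (meters)
d_max = torch.tensor([10.0, 80.0])
depth = model(imgs, d_min, d_max)
print(depth.min().item() >= 0.5, depth.max().item() <= 80.0)  # stays within the priors
```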


이상 전력 탐지를 위한 TCN-USAD (TCN-USAD for Anomaly Power Detection)

  • 진현석;김경백
    • 스마트미디어저널 / Vol. 13, No. 7 / pp. 9-17 / 2024
  • Due to increasing energy consumption and environmental policies, building energy needs to be consumed efficiently, and deep learning-based anomalous power detection is being used for this purpose. Because anomalous data are difficult to collect, anomaly detection is typically performed based on the reconstruction error of a Recurrent Neural Network (RNN)-based autoencoder; however, this approach takes a long time to fully learn temporal features and is sensitive to noise in the training data. To overcome these limitations, this paper proposes TCN-USAD, which combines a Temporal Convolutional Network (TCN) with UnSupervised Anomaly Detection for multivariate time series (USAD). The proposed model uses a TCN-based autoencoder together with the USAD structure, which employs two decoders and adversarial training, so that it can quickly and fully learn temporal features and perform robust anomaly detection. To validate the performance of TCN-USAD, comparative experiments were conducted on two building power consumption datasets. The TCN-based autoencoder was faster and achieved better reconstruction performance than the RNN-based autoencoder, and TCN-USAD achieved an F1-score approximately 20% higher than other anomaly detection models, demonstrating excellent anomaly detection performance.
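For reference, the USAD objective mentioned above pairs one encoder with two decoders and weights the adversarial reconstruction term by training epoch. A minimal sketch with a dilated 1D-convolution (TCN-style) encoder is shown below; the architectures and window size are toy assumptions, not the proposed TCN-USAD.

```python
# USAD-style losses with a dilated 1D-convolution encoder and two decoders.
import torch
import torch.nn as nn

class TCNEncoder(nn.Module):
    """Dilated 1D convolutions over a (B, features, time) window."""
    def __init__(self, feats=1, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(feats, hidden, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 3, padding=2, dilation=2), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

def decoder(feats=1, hidden=16):
    return nn.Sequential(nn.Conv1d(hidden, hidden, 3, padding=1), nn.ReLU(),
                         nn.Conv1d(hidden, feats, 3, padding=1))

enc, dec1, dec2 = TCNEncoder(), decoder(), decoder()
mse = nn.MSELoss()

def usad_losses(window, epoch):
    """AE1 learns to fool AE2; AE2 learns to expose AE1's reconstructions."""
    z = enc(window)
    ae1, ae2 = dec1(z), dec2(z)
    ae2_of_ae1 = dec2(enc(ae1))
    n = float(epoch)
    loss1 = (1 / n) * mse(window, ae1) + (1 - 1 / n) * mse(window, ae2_of_ae1)
    loss2 = (1 / n) * mse(window, ae2) - (1 - 1 / n) * mse(window, ae2_of_ae1)
    return loss1, loss2

w = torch.randn(8, 1, 48)            # stand-in windows of building power readings
print([l.item() for l in usad_losses(w, epoch=3)])
```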

Image Enhanced Machine Vision System for Smart Factory

  • Kim, ByungJoo
    • International Journal of Internet, Broadcasting and Communication / Vol. 13, No. 2 / pp. 7-13 / 2021
  • Machine vision is a technology that enables a computer to recognize and evaluate objects as a person would. In recent years, as advanced technologies such as optical systems, artificial intelligence, and big data have been incorporated, conventional machine vision systems have achieved more accurate quality inspection and increased manufacturing efficiency. In machine vision systems using deep learning, the quality of the input image is very important. However, most images obtained in the industrial field for quality inspection typically contain noise. This noise is a major factor degrading the performance of the machine vision system. Therefore, in order to improve the performance of the machine vision system, it is necessary to eliminate the noise in the image. Much research has been conducted on removing noise from images. In this paper, we propose an autoencoder-based machine vision system to eliminate noise in the image. Experiments showed that the proposed model outperforms a basic autoencoder model in denoising and image reconstruction capability on the MNIST and Fashion-MNIST datasets.
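A minimal sketch of the kind of denoising autoencoder described above is given below: Gaussian noise is added to clean images, and the network is trained to reconstruct the clean version. The architecture, noise level, and 28x28 input size are illustrative assumptions, not the paper's model.

```python
# Generic convolutional denoising autoencoder trained on noisy/clean image pairs.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28 -> 14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14 -> 7
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 7 -> 14
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 14 -> 28
        )
    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(16, 1, 28, 28)                        # stand-in MNIST-sized batch
noisy = (clean + 0.3 * torch.randn_like(clean)).clamp(0, 1)

loss = nn.functional.mse_loss(model(noisy), clean)       # reconstruct clean from noisy
loss.backward()
opt.step()
print(loss.item())
```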

생성적 적대 신경망 기반 3차원 포인트 클라우드 향상 기법 (3D Point Cloud Enhancement based on Generative Adversarial Network)

  • Moon, HyungDo;Kang, Hoonjong;Jo, Dongsik
    • 한국정보통신학회논문지 / Vol. 25, No. 10 / pp. 1452-1455 / 2021
  • Recently, point clouds have been generated by capturing real spaces in 3D and are actively applied in services for performances, exhibitions, education, and training. These point cloud data require post-correction to be usable in virtual environments because of errors introduced by the capture environment, sensors, and cameras. In this paper, we propose an enhancement technique for 3D point cloud data based on a generative adversarial network (GAN). Specifically, we regenerate point clouds by using the captured data as input to the GAN. With the method presented in this paper, point clouds containing a large amount of noise are reconstructed into the same shape as the real object and environment, enabling precise interaction with the reconstructed content.
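As an illustration of applying a GAN to point clouds, here is a minimal PyTorch sketch with a generator that predicts per-point offsets for a noisy cloud and a PointNet-style discriminator. This is a generic set-up under assumed shapes and losses, not the enhancement network proposed in the paper.

```python
# Toy GAN set-up over point clouds: residual refiner + order-invariant discriminator.
import torch
import torch.nn as nn

class PointRefiner(nn.Module):                    # noisy (B, N, 3) -> refined (B, N, 3)
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))
    def forward(self, pts):
        return pts + self.mlp(pts)                # predict residual per-point offsets

class PointDiscriminator(nn.Module):              # (B, N, 3) -> realism score per cloud
    def __init__(self):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128))
        self.head = nn.Linear(128, 1)
    def forward(self, pts):
        feat = self.point_mlp(pts).max(dim=1).values   # order-invariant pooling
        return self.head(feat)

G, D = PointRefiner(), PointDiscriminator()
bce = nn.BCEWithLogitsLoss()

noisy = torch.randn(4, 1024, 3)                   # stand-in captured point clouds
refined = G(noisy)
g_loss = bce(D(refined), torch.ones(4, 1))        # generator: make refined clouds look real
print(refined.shape, g_loss.item())
```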