• Title/Summary/Keyword: DeepU-Net

Search Result 179, Processing Time 0.025 seconds

Incremental Image Noise Reduction in Coronary CT Angiography Using a Deep Learning-Based Technique with Iterative Reconstruction

  • Jung Hee Hong;Eun-Ah Park;Whal Lee;Chulkyun Ahn;Jong-Hyo Kim
    • Korean Journal of Radiology
    • /
    • v.21 no.10
    • /
    • pp.1165-1177
    • /
    • 2020
  • Objective: To assess the feasibility of applying a deep learning-based denoising technique to coronary CT angiography (CCTA) along with iterative reconstruction for additional noise reduction. Materials and Methods: We retrospectively enrolled 82 consecutive patients (male:female = 60:22; mean age, 67.0 ± 10.8 years) who had undergone both CCTA and invasive coronary artery angiography from March 2017 to June 2018. All included patients underwent CCTA with iterative reconstruction (ADMIRE level 3, Siemens Healthineers). We developed a deep learning-based denoising technique (ClariCT.AI, ClariPI) based on a modified U-Net-type convolutional neural network model designed to predict the low-dose noise present in the original images. Denoised images were obtained by subtracting the predicted noise from the originals. Image noise, CT attenuation, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were objectively calculated. The edge rise distance (ERD) was measured as an indicator of image sharpness. Two blinded readers subjectively graded the image quality using a 5-point scale. Diagnostic performance of the CCTA was evaluated based on the presence or absence of significant stenosis (≥ 50% lumen reduction). Results: Objective image quality (original vs. denoised: image noise, 67.22 ± 25.74 vs. 52.64 ± 27.40; SNR [left main], 21.91 ± 6.38 vs. 30.35 ± 10.46; CNR [left main], 23.24 ± 6.52 vs. 31.93 ± 10.72; all p < 0.001) and subjective image quality (2.45 ± 0.62 vs. 3.65 ± 0.60, p < 0.001) improved significantly in the denoised images. The average ERD of the denoised images was significantly smaller than that of the originals (0.98 ± 0.08 vs. 0.09 ± 0.08, p < 0.001). With regard to diagnostic accuracy, no significant differences were observed among paired comparisons. 
Conclusion: Applying the deep learning technique along with iterative reconstruction can enhance noise reduction, with significant improvement in the objective and subjective image quality of CCTA images.
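The core denoising step described above, subtracting a network-predicted noise map from the original image, can be sketched in a few lines (a minimal numpy illustration; the actual ClariCT.AI network is proprietary, and the function names here are assumed):

```python
import numpy as np

def denoise_by_noise_prediction(original, predicted_noise):
    """Subtract the model-predicted low-dose noise map from the original image."""
    return original - predicted_noise

def snr(mean_attenuation, noise_sd):
    """Signal-to-noise ratio: mean ROI attenuation divided by image noise (SD)."""
    return mean_attenuation / noise_sd

# toy example: a uniform image with a uniform predicted noise component
img = np.full((4, 4), 100.0)
noise = np.full((4, 4), 30.0)
denoised = denoise_by_noise_prediction(img, noise)
```

The `snr` definition (mean ROI attenuation over noise SD) matches the objective metrics the study reports.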

Compressed-Sensing Cardiac CINE MRI using Neural Network with Transfer Learning (전이학습을 수행한 신경망을 사용한 압축센싱 심장 자기공명영상)

  • Park, Seong-Jae;Yoon, Jong-Hyun;Ahn, Chang-Beom
    • Journal of IKEEE
    • /
    • v.23 no.4
    • /
    • pp.1408-1414
    • /
    • 2019
  • A deep artificial neural network with transfer learning is applied to compressed-sensing cardiovascular MRI. Transfer learning is a method that reuses the structure, filter kernels, and weights of a network trained on a prior task for the current learning task or application. Transfer learning is useful for accelerating learning and for generalizing the neural network when learning data are limited. In a cardiac MRI experiment with eight healthy volunteers, the neural network with transfer learning reduced learning time by a factor of more than five compared with standalone learning. On the test data set, images reconstructed with transfer learning showed lower normalized mean square error and better image quality than those reconstructed without it.
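The weight-reuse idea behind transfer learning can be sketched as follows (a hypothetical numpy illustration, not the paper's implementation; layer names and the shape-matching rule are assumptions):

```python
import numpy as np

def transfer_weights(pretrained, fresh):
    """Initialize a new network from a prior one: copy filter kernels and
    weights wherever layer names and shapes match; keep the fresh
    initialization everywhere else."""
    out = dict(fresh)
    for name, w in pretrained.items():
        if name in fresh and fresh[name].shape == w.shape:
            out[name] = w.copy()
    return out

# toy example: conv kernel matches and is transferred; fc layer does not
pre = {"conv1": np.ones((3, 3)), "fc": np.ones((2, 5))}
new = {"conv1": np.zeros((3, 3)), "fc": np.zeros((4, 5))}
initialized = transfer_weights(pre, new)
```

Training then continues from the transferred weights instead of from scratch, which is where the reported five-fold reduction in learning time comes from.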

Study on 2D Sprite Generation Using the Impersonator Network

  • Yongjun Choi;Beomjoo Seo;Shinjin Kang;Jongin Choi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.7
    • /
    • pp.1794-1806
    • /
    • 2023
  • This study presents a method for capturing photographs of users as input and converting them into 2D character animation sprites using a generative adversarial network (GAN)-based artificial intelligence network. Traditionally, 2D character animations have been created by manually drawing an entire sequence of sprite images, which incurs high development costs. To address this issue, this study proposes a technique that combines motion videos and sample 2D images. In the proposed 2D sprite generation process, a sequence of images is extracted from real-life footage captured by the user and combined with character images from within the game. Our research leverages cutting-edge deep learning-based image manipulation techniques, such as the GAN-based motion transfer network (impersonator) and background noise removal (U2-Net), to generate a sequence of animation sprites from a single image. The proposed technique thus enables the creation of diverse animations and motions from just one image, offering significant potential for boosting productivity and creativity in the game and animation industry through improved efficiency and streamlined production processes.

Performance Analysis of Anomaly Area Segmentation in Industrial Products Based on Self-Attention Deep Learning Model (Self-Attention 딥러닝 모델 기반 산업 제품의 이상 영역 분할 성능 분석)

  • Changjoon Park;Namjung Kim;Junhwi Park;Jaehyun Lee;Jeonghwan Gwak
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2024.01a
    • /
    • pp.45-46
    • /
    • 2024
  • In this paper, we applied the Dense Prediction Transformer (DPT), a Self-Attention-based deep learning model, to the MVTec Anomaly Detection (MVTec AD) dataset to segment anomalous regions in real industrial product images. Applying the DPT model addresses the limitations of existing Convolutional Neural Network (CNN)-based anomaly detection methods, namely local feature extraction and a fixed receptive field. In anomaly segmentation on real industrial product data, the DPT model improved performance by 1.14% over the best-performing model based on the U-Net architecture, the previous mainstream approach, demonstrating that Self-Attention-based deep learning is effective for anomaly segmentation of industrial products.


Preliminary Application of Synthetic Computed Tomography Image Generation from Magnetic Resonance Image Using Deep-Learning in Breast Cancer Patients

  • Jeon, Wan;An, Hyun Joon;Kim, Jung-in;Park, Jong Min;Kim, Hyoungnyoun;Shin, Kyung Hwan;Chie, Eui Kyu
    • Journal of Radiation Protection and Research
    • /
    • v.44 no.4
    • /
    • pp.149-155
    • /
    • 2019
  • Background: The magnetic resonance (MR) image-guided radiation therapy system enables real-time MR-guided radiotherapy (RT) without additional radiation exposure to patients during treatment. However, MR images lack the electron density information required for dose calculation. An image fusion algorithm with deformable registration between MR and computed tomography (CT) images was developed to solve this issue; however, the delivered dose may differ because of volumetric changes during the registration process. In this respect, a synthetic CT generated from the MR image would provide more accurate information for real-time RT. Materials and Methods: We analyzed 1,209 MR images from 16 patients who underwent MR-guided RT. Structures were divided into five tissue types (air, lung, fat, soft tissue, and bone) according to the Hounsfield unit of the deformed CT. Using a deep learning model (U-Net), synthetic CT images were generated from the MR images acquired during RT. These synthetic CT images were compared with the deformed CT generated by deformable registration, using a pixel-to-pixel match. Results and Discussion: In the two test image sets, the average pixel match rate per section was more than 70% (67.9 to 80.3% and 60.1 to 79%; synthetic CT pixel/deformed planning CT pixel), and the average pixel match rate over the entire patient image set was 69.8%. Conclusion: The synthetic CT generated from the MR images was comparable to the deformed CT, suggesting its possible use for real-time RT. The deep learning model may further improve the match rate of synthetic CT with larger MR imaging data sets.
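The five-tissue division by Hounsfield unit described in Materials and Methods amounts to binning HU values by thresholds; a sketch is below (the HU cut-offs here are illustrative assumptions, not the paper's values):

```python
import numpy as np

# Illustrative HU cut-offs separating the five tissue types named in the
# abstract; the study's exact thresholds are not given here.
TISSUE_BOUNDS = [-900, -200, -20, 150]
TISSUE_NAMES = ["air", "lung", "fat", "soft tissue", "bone"]

def tissue_class(hu):
    """Map Hounsfield-unit values to tissue-type indices (0..4)."""
    return np.digitize(hu, TISSUE_BOUNDS)

# toy examples over the usual CT range
labels = [TISSUE_NAMES[int(tissue_class(v))] for v in (-1000, 40, 700)]
```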

Development of wound segmentation deep learning algorithm (딥러닝을 이용한 창상 분할 알고리즘 )

  • Hyunyoung Kang;Yeon-Woo Heo;Jae Joon Jeon;Seung-Won Jung;Jiye Kim;Sung Bin Park
    • Journal of Biomedical Engineering Research
    • /
    • v.45 no.2
    • /
    • pp.90-94
    • /
    • 2024
  • Diagnosing wounds presents a significant challenge in clinical settings because of their complexity and the subjectivity of clinicians' assessments. Deep learning algorithms assess wounds quantitatively, overcoming these challenges. However, a limitation of existing research is its reliance on specific datasets. To address this limitation, we created a comprehensive dataset by combining an open dataset with a self-produced dataset to enhance clinical applicability. In the annotation process, machine learning based on Gradient Vector Flow (GVF) was used to improve objectivity and efficiency over time. The deep learning model was a U-Net equipped with residual blocks. Significant improvements were observed when the input images were cropped to contain only the wound region of interest (ROI), as opposed to the original-sized dataset: the Dice score increased markedly from 0.80 on the original dataset to 0.89 on the wound-ROI-cropped dataset. This study highlights the need for diverse research using comprehensive datasets. In future work, we aim to further enhance and diversify our dataset to encompass different environments and ethnicities.
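The Dice score used above to compare the original and ROI-cropped datasets has a standard definition, sketched here in numpy (an illustration, not the paper's code):

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return 2.0 * np.logical_and(pred, target).sum() / denom

# toy masks: two predicted pixels, one of which overlaps the ground truth
a = np.array([[1, 1, 0], [0, 0, 0]])
b = np.array([[1, 0, 0], [0, 0, 0]])
score = dice_score(a, b)  # 2*1 / (2+1)
```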

A New Hyper Parameter of Hounsfield Unit Range in Liver Segmentation

  • Kim, Kangjik;Chun, Junchul
    • Journal of Internet Computing and Services
    • /
    • v.21 no.3
    • /
    • pp.103-111
    • /
    • 2020
  • Liver cancer is among the most fatal cancers worldwide. To diagnose it, the patient's condition is examined with a radiation-based CT technique. Diagnosing the liver on an abdominal CT scan requires segmentation, which radiologists traditionally performed manually, costing enormous time and inviting human error. To automate the process, researchers applied image segmentation algorithms from the computer vision field, but these remained time-consuming because they are interactive and depend on manually chosen setting values. To save time and obtain more accurate segmentation, researchers have begun to segment the liver in CT images using convolutional neural networks (CNNs), which show strong performance across computer vision tasks. The pixel value of a CT image is the Hounsfield unit (HU) value, a relative representation of radiation transmittance, and usually ranges from about -2000 to 2000. Deep learning researchers generally reduce or limit this range before training to remove noise and focus on the target organ. We observed that many liver segmentation studies limit the HU range, but to different intervals, and hypothesized that performance could vary depending on the chosen range. In this paper, we propose treating the HU value range as a hyperparameter. U-Net and ResUNet were used to compare different HU-range-limiting preprocessing of the CHAOS dataset under controlled conditions. The results differed depending on the HU range, confirming that the HU limiting range itself can act as a hyperparameter; that is, there are HU ranges that provide optimal performance for a given model.
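Treating the HU range as a hyperparameter amounts to a preprocessing step like the following (a minimal sketch; the function name and the rescaling to [0, 1] are assumptions, not the paper's exact pipeline):

```python
import numpy as np

def window_hu(volume, hu_min, hu_max):
    """Clip a CT volume to the chosen HU window [hu_min, hu_max] and
    rescale it to [0, 1] for training; (hu_min, hu_max) is the
    hyperparameter pair under study."""
    clipped = np.clip(volume, hu_min, hu_max)
    return (clipped - hu_min) / (hu_max - hu_min)

# toy example: a liver-focused window of [-100, 400] HU
v = np.array([-2000.0, 0.0, 100.0, 2000.0])
windowed = window_hu(v, -100, 400)
```

A hyperparameter search would then repeat training with different `(hu_min, hu_max)` pairs and compare segmentation scores.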

Morphological Analysis of Hydraulically Stimulated Fractures by Deep-Learning Segmentation Method (딥러닝 기반 균열 추출 기법을 통한 수압 파쇄 균열 형상 분석)

  • Park, Jimin;Kim, Kwang Yeom ;Yun, Tae Sup
    • Journal of the Korean Geotechnical Society
    • /
    • v.39 no.8
    • /
    • pp.17-28
    • /
    • 2023
  • Laboratory-scale hydraulic fracturing experiments were conducted on granite specimens at various viscosities and injection rates of the fracturing fluid. A series of cross-sectional computed tomography (CT) images of the fractured specimens was obtained via a three-dimensional X-ray CT imaging method. Pixel-level fracture segmentation of the CT images was conducted using a convolutional neural network (CNN)-based Nested U-Net model. Compared with traditional image processing methods, the CNN-based model performed better at extracting thin and complex fractures. The extracted fractures were reconstructed in three dimensions and morphologically analyzed in terms of fracture volume, aperture, tortuosity, and surface roughness. The fracture volume and aperture increased with the viscosity of the fracturing fluid, while the tortuosity and roughness of the fracture surface decreased; the findings also confirmed the anisotropy of the surface tortuosity and roughness. In this study, a CNN-based model was used to perform accurate fracture segmentation, and a quantitative analysis of hydraulically stimulated fractures was conducted successfully.
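Of the morphological measures above, geometric tortuosity has a simple definition that can be sketched directly: arc length along the traced fracture path divided by the end-to-end chord length (an illustrative function, not the paper's code):

```python
import numpy as np

def tortuosity(path):
    """Geometric tortuosity of a traced fracture path: total arc length
    along the path divided by the straight-line (chord) distance
    between its endpoints. A straight path gives exactly 1.0."""
    path = np.asarray(path, dtype=float)
    arc = np.linalg.norm(np.diff(path, axis=0), axis=1).sum()
    chord = np.linalg.norm(path[-1] - path[0])
    return arc / chord

straight = [(0, 0), (1, 0), (2, 0)]   # no deviation -> tortuosity 1
bent = [(0, 0), (1, 0), (1, 1)]       # right-angle detour -> sqrt(2)
```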

Semantic Segmentation of Drone Images Based on Combined Segmentation Network Using Multiple Open Datasets (개방형 다중 데이터셋을 활용한 Combined Segmentation Network 기반 드론 영상의 의미론적 분할)

  • Ahram Song
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_3
    • /
    • pp.967-978
    • /
    • 2023
  • This study proposed and validated a combined segmentation network (CSN) designed to train effectively on multiple drone image datasets and enhance the accuracy of semantic segmentation. CSN shares the entire encoding domain to accommodate the diversity of three drone datasets, while the decoding domains are trained independently. During training, the segmentation accuracy of CSN was lower than that of U-Net and the pyramid scene parsing network (PSPNet) on single datasets because it considers the loss values for all datasets simultaneously. However, when applied to domestic autonomous drone images, CSN demonstrated the ability to classify pixels into appropriate classes without requiring additional training, outperforming PSPNet. This research suggests that CSN can serve as a valuable tool for training effectively on diverse drone image datasets and improving object recognition accuracy in new regions.
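The shared-encoder/independent-decoder layout described above can be sketched structurally (placeholder arithmetic stands in for the real encoder and decoders; the class and dataset names are assumptions, not the paper's code):

```python
import numpy as np

class CombinedSegmentationNet:
    """Structural sketch of CSN: one shared encoder serves every dataset,
    and each dataset gets its own independently trained decoder."""

    def __init__(self, dataset_names):
        # one independent decoder "weight" per dataset (placeholder scalar)
        self.decoders = {name: 1.0 for name in dataset_names}

    def encode(self, x):
        # shared feature extraction across all datasets (placeholder)
        return np.asarray(x, dtype=float) * 0.5

    def forward(self, x, dataset):
        # decoding is routed through the dataset-specific decoder
        return self.encode(x) * self.decoders[dataset]

    @staticmethod
    def combined_loss(per_dataset_losses):
        # CSN optimizes the loss values of all datasets simultaneously,
        # which is why single-dataset accuracy can drop during training
        return float(sum(per_dataset_losses))

net = CombinedSegmentationNet(["dataset_a", "dataset_b", "dataset_c"])
total = net.combined_loss([0.2, 0.3, 0.5])
```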

Respiratory Motion Correction on PET Images Based on 3D Convolutional Neural Network

  • Hou, Yibo;He, Jianfeng;She, Bo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.7
    • /
    • pp.2191-2208
    • /
    • 2022
  • Motion blur in positron emission tomography (PET) images induced by respiratory motion reduces imaging quality. Although existing methods for respiratory motion correction perform well in medical practice, many aspects can still be improved. In this paper, an improved 3D unsupervised framework, Res-Voxel, based on the U-Net architecture, is proposed for motion correction. Res-Voxel uses multiple residual structures to improve prediction of the deformation field, and a smaller convolution kernel to reduce model parameters and the amount of computation required. The proposed method is tested on simulated PET imaging data and on clinical data. Experimental results demonstrate that it achieved Dice indices of 93.81%, 81.75%, and 75.10% on the simulated geometric phantom data, the voxel phantom data, and the clinical data, respectively, showing that the proposed method can improve the registration and correction performance of PET images.
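The parameter saving from a smaller convolution kernel is simple arithmetic: a 3D convolution layer holds in_ch × out_ch × k³ weights, so shrinking k from 5 to 3 cuts the weight count by a factor of (5/3)³ ≈ 4.6 (a sketch ignoring bias terms; the channel counts below are illustrative, not the paper's):

```python
def conv3d_params(in_ch, out_ch, kernel):
    """Weight count of one 3D convolution layer, bias ignored:
    in_ch * out_ch * kernel**3."""
    return in_ch * out_ch * kernel ** 3

# e.g. a hypothetical 16 -> 32 channel layer: 5x5x5 vs 3x3x3 kernels
large = conv3d_params(16, 32, 5)   # 5x5x5 kernel
small = conv3d_params(16, 32, 3)   # 3x3x3 kernel
```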