• Title/Summary/Keyword: DeepU-Net

179 search results

Automatic Extraction of Liver Region from Medical Images by Using an MFUnet

  • Vi, Vo Thi Tuong;Oh, A-Ran;Lee, Guee-Sang;Yang, Hyung-Jeong;Kim, Soo-Hyung
    • Smart Media Journal
    • /
    • v.9 no.3
    • /
    • pp.59-70
    • /
    • 2020
  • This paper presents a fully automatic tool to recognize the liver region in CT images based on a deep learning model, namely the Multiple Filter U-net (MFUnet). The advantages of both U-net and multiple filters are combined in an autoencoder model, MFUnet, for segmenting the liver region from computed tomography scans. The MFUnet architecture comprises an autoencoding model for regenerating the liver region, a backbone model pretrained on ImageNet for feature extraction, and a prediction model for liver segmentation. The LiTS and CHAOS datasets were used for evaluation. The results show that integrating multiple filters into U-net improves liver segmentation performance and opens up many research directions in the medical image processing field.

Perceptual Photo Enhancement with Generative Adversarial Networks (GAN 신경망을 통한 자각적 사진 향상)

  • Que, Yue;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.05a
    • /
    • pp.522-524
    • /
    • 2019
  • Despite the rapid development in the quality of built-in mobile cameras, their physical restrictions prevent them from achieving the results of digital single-lens reflex (DSLR) cameras. In this work we propose an end-to-end deep learning method to translate ordinary images from mobile cameras into DSLR-quality photos. The method is based on the framework of generative adversarial networks (GANs) with several improvements. First, we combine U-Net with DenseNet by connecting dense blocks (DB) within the U-Net structure; this Dense U-Net acts as the generator in our GAN model. Second, we improve the perceptual loss by using VGG features and pixel-wise content, which provides stronger supervision for contrast enhancement and texture recovery.
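
The combined objective described above, a pixel-wise content term plus a feature-space ("perceptual") term, can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the paper uses VGG activations as the feature extractor, for which the 2x2 average pooling below is only a stand-in, and the loss weights here are arbitrary.

```python
import numpy as np

def perceptual_loss(enhanced, target, feature_fn, w_pixel=1.0, w_feat=0.01):
    # Pixel-wise MSE for content fidelity plus an MSE in feature space
    # for perceptual similarity; feature_fn stands in for a fixed network
    # such as VGG (any deterministic mapping works for this sketch).
    pixel = np.mean((enhanced - target) ** 2)
    feat = np.mean((feature_fn(enhanced) - feature_fn(target)) ** 2)
    return w_pixel * pixel + w_feat * feat

# Toy "feature extractor": 2x2 average pooling as a stand-in for VGG layers.
def avg_pool_features(img):
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

a = np.zeros((4, 4))
b = np.ones((4, 4))
print(perceptual_loss(a, b, avg_pool_features))  # 1.0*1 + 0.01*1 = 1.01
```

In the real model this loss supervises the Dense U-Net generator together with the adversarial loss from the discriminator.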

Automatic Building Extraction Using SpaceNet Building Dataset and Context-based ResU-Net (SpaceNet 건물 데이터셋과 Context-based ResU-Net을 이용한 건물 자동 추출)

  • Yoo, Suhong;Kim, Cheol Hwan;Kwon, Youngmok;Choi, Wonjun;Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_2
    • /
    • pp.685-694
    • /
    • 2022
  • Building information is essential for various urban spatial analyses, so continuous building monitoring is required, but it involves many practical difficulties. To this end, research is being conducted on extracting buildings from satellite images, which can be observed continuously over wide areas. Recently, deep learning-based semantic segmentation techniques have been used for this task. In this study, part of the structure of the context-based ResU-Net was modified, and training was conducted to automatically extract buildings from 30 cm Worldview-3 RGB imagery using SpaceNet's building v2 open data. In the classification accuracy evaluation, the f1-score was higher than that of the winners of the 2nd SpaceNet competition. Therefore, if Worldview-3 satellite imagery can be provided continuously, the building extraction results of this study could be used to automatically map buildings around the world.

Corneal Ulcer Region Detection With Semantic Segmentation Using Deep Learning

  • Im, Jinhyuk;Kim, Daewon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.9
    • /
    • pp.1-12
    • /
    • 2022
  • Traditional methods of measuring corneal ulcers rely on the subjective judgment of medical staff viewing photographs taken with special equipment, making it difficult to present an objective basis for diagnosis. In this paper, we propose a method to detect the ulcer area on a pixel basis in corneal ulcer images using a semantic segmentation model. We performed experiments based on the DeepLab model, which shows the highest performance among semantic segmentation models. Training and test data were selected, and DeepLab models with Xception and ResNet backbone networks were evaluated and their performances compared, using the Dice similarity coefficient and IoU as indicators. Experimental results show that when 'crop & resized' images are added to the dataset, the DeepLab model with a ResNet101 backbone segments the ulcer area with an average Dice similarity coefficient of about 93%. This study shows that semantic segmentation models can also produce significant results when classifying objects with irregular shapes such as corneal ulcers. In future studies, we will extend the datasets and experiment with adaptive learning methods so that the approach can be applied in real medical diagnosis environments.
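
The Dice similarity coefficient and IoU used as indicators above are both overlap ratios on binary masks. A minimal NumPy sketch with toy masks (not the paper's data):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice = 2|A intersect B| / (|A| + |B|) on binary masks
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    # IoU (Jaccard) = |A intersect B| / |A union B|
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, gt), 3))  # 0.667, i.e. 2*2/(3+3)
print(round(iou(pred, gt), 3))               # 0.5, i.e. 2/4
```

Dice weights the intersection twice, so it is always at least as large as IoU for the same pair of masks.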

Comparative evaluation of deep learning-based building extraction techniques using aerial images (항공영상을 이용한 딥러닝 기반 건물객체 추출 기법들의 비교평가)

  • Mo, Jun Sang;Seong, Seon Kyeong;Choi, Jae Wan
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.39 no.3
    • /
    • pp.157-165
    • /
    • 2021
  • Recently, as the spatial resolution of satellite and aerial images has improved, various studies using remotely sensed data with high spatial resolution have been conducted. In particular, since building extraction is essential for creating digital thematic maps, high accuracy in the building extraction results is required. In this manuscript, building extraction models were generated using SegNet, U-Net, FC-DenseNet, and HRNetV2, which are representative semantic segmentation models in deep learning, and the extraction results were evaluated. A training dataset for building extraction was generated using aerial orthophotos including various buildings, and evaluation was conducted in three areas. First, model performance was evaluated on a region adjacent to the training dataset. In addition, the applicability of the models was evaluated on a region different from the training dataset. As a result, the f1-score of HRNetV2 showed the best values in terms of both model performance and applicability. Through this study, the possibility of creating and modifying the building layer in digital maps was confirmed.

SKU-Net: Improved U-Net using Selective Kernel Convolution for Retinal Vessel Segmentation

  • Hwang, Dong-Hwan;Moon, Gwi-Seong;Kim, Yoon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.4
    • /
    • pp.29-37
    • /
    • 2021
  • In this paper, we propose a deep learning-based retinal vessel segmentation model that handles the multi-scale information of fundus images by integrating selective kernel convolution into a U-Net-based convolutional neural network. The proposed model extracts and segments retinal blood vessels of various shapes and sizes, which is important information for diagnosing eye-related diseases from fundus images. The model consists of standard convolutions and selective kernel convolutions. While a standard convolutional layer extracts information through a single kernel size, a selective kernel convolution extracts information from branches with various kernel sizes and combines them adaptively through split-attention. To evaluate the proposed model, we used the DRIVE and CHASE DB1 datasets; the model showed F1 scores of 82.91% and 81.71% on the two datasets respectively, confirming that it is effective in segmenting retinal blood vessels.
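
The split-attention selection step of a selective kernel convolution can be sketched as follows: a per-channel softmax across branches weights each branch's features before they are summed. This is a toy NumPy sketch with random stand-in descriptors; in the real layer the logits come from a small learned FC network and the branches are outputs of convolutions with different kernel sizes.

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def selective_kernel_fuse(branches, select_weights):
    # branches:       (B, C) array, one C-channel descriptor per kernel branch
    #                 (spatial dims assumed already global-average-pooled away).
    # select_weights: (B, C) logits from the selection layer (random here,
    #                 learned in the real model).
    attn = softmax(select_weights, axis=0)   # per-channel softmax across branches
    return (attn * branches).sum(axis=0)     # attention-weighted sum of branches

rng = np.random.default_rng(0)
b3 = rng.normal(size=4)            # stand-in for a 3x3-kernel branch, 4 channels
b5 = rng.normal(size=4)            # stand-in for a 5x5-kernel branch
logits = rng.normal(size=(2, 4))
fused = selective_kernel_fuse(np.stack([b3, b5]), logits)
print(fused.shape)  # (4,)
```

Because the attention weights sum to 1 per channel, the output stays in the same range as the branch features while the network chooses, channel by channel, which kernel size to trust.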

Document Image Binarization by GAN with Unpaired Data Training

  • Dang, Quang-Vinh;Lee, Guee-Sang
    • International Journal of Contents
    • /
    • v.16 no.2
    • /
    • pp.8-18
    • /
    • 2020
  • Data is critical in deep learning, but data scarcity often occurs in research, especially in the preparation of paired training data. In this paper, document image binarization with unpaired data is studied by introducing adversarial learning, excluding the need for supervised or labeled datasets. However, a simple extension of previous unpaired training to binarization inevitably leads to poor performance compared to paired-data training. Thus, a new deep learning approach is proposed that introduces a multi-diversity of higher-quality generated images. A two-stage model is proposed that comprises a generative adversarial network (GAN) followed by a U-net network. In the first stage, the GAN uses the unpaired image data to create paired image data. In the second stage, the generated paired image data are passed through the U-net network for binarization, so the trained U-net becomes the binarization model at test time. The proposed model has been evaluated on the publicly available DIBCO dataset and outperforms other techniques on unpaired training data. The paper shows the potential of using unpaired data for binarization, for the first time in the literature, which can be further improved to replace paired-data training for binarization in the future.

A Triple Residual Multiscale Fully Convolutional Network Model for Multimodal Infant Brain MRI Segmentation

  • Chen, Yunjie;Qin, Yuhang;Jin, Zilong;Fan, Zhiyong;Cai, Mao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.3
    • /
    • pp.962-975
    • /
    • 2020
  • The accurate segmentation of infant brain MR images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) is very important for the early study of brain growth patterns and morphological changes in neurodevelopmental disorders. Because of the inherent myelination and maturation process, the WM and GM of babies (between 6 and 9 months of age) exhibit similar intensity levels in both T1-weighted (T1w) and T2-weighted (T2w) MR images in this isointense phase, which makes brain tissue segmentation very difficult. We propose a deep network architecture based on U-Net, called Triple Residual Multiscale Fully Convolutional Network (TRMFCN), whose structure has three input gates and inserts two new blocks: a residual multiscale block and a concatenate block. Our model outperforms U-Net and several cutting-edge U-Net-based deep networks in the evaluation of WM, GM, and CSF segmentation. The dataset used for training and testing comes from the iSeg-2017 challenge (http://iseg2017.web.unc.edu).
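
The residual multiscale idea, parallel branches at different scales whose fused output is added back through an identity shortcut, can be sketched as follows. This is a toy NumPy illustration, not the TRMFCN block itself; the branch functions stand in for convolutions with different kernel sizes.

```python
import numpy as np

def residual_multiscale_block(x, branch_fns):
    # Parallel "multiscale" branches (stand-ins for convolutions with
    # different kernel sizes) are averaged and added back to the input
    # through an identity shortcut, as in a residual block.
    multiscale = np.mean([f(x) for f in branch_fns], axis=0)
    return x + multiscale

x = np.ones(3)
out = residual_multiscale_block(x, [lambda v: 0.5 * v, lambda v: 1.5 * v])
print(out)  # [2. 2. 2.]
```

The identity shortcut is what eases gradient flow in deep stacks: the block only needs to learn a correction on top of its input rather than the full mapping.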

Detecting Boundary of Erythema Using Deep Learning (딥러닝을 활용한 피부 발적의 경계 판별)

  • Kwon, Gwanyoung;Kim, Jong Hoon;Kim, Young Jae;Lee, Sang Min;Kim, Kwang Gi
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.11
    • /
    • pp.1492-1499
    • /
    • 2021
  • The skin prick test is widely used in diagnosing allergic sensitization to common inhalant or food allergens; positivity is determined manually by calculating the areas or mean diameters of the wheals and erythemas provoked by allergens pricked into the patient's skin. In this work, we propose a segmentation algorithm based on U-Net, a fully convolutional network (FCN) model of deep learning, to more objectively delineate erythema boundaries. The performance of the model is analyzed by comparing the results of automatic segmentation of the test data by U-Net with the results of manual segmentation. As a result, the average Dice coefficient was 94.93%, and the average precision and sensitivity were 95.19% and 95.24%, respectively. We find that the proposed algorithm effectively discriminates the skin's erythema boundaries and expect it to play an auxiliary role in skin prick tests in real clinical settings in the future.

Multi-scale U-SegNet architecture with cascaded dilated convolutions for brain MRI Segmentation

  • Dayananda, Chaitra;Lee, Bumshik
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.11a
    • /
    • pp.25-28
    • /
    • 2020
  • Automatic segmentation of brain tissues such as WM, GM, and CSF from brain MRI scans is helpful for the diagnosis of many neurological disorders. Accurate segmentation of these brain structures is very challenging due to low tissue contrast, bias field, and partial volume effects. With the aim of improving brain MRI segmentation accuracy, we propose an end-to-end convolutional U-SegNet architecture designed with multi-scale kernels, which includes cascaded dilated convolutions, for the task of brain MRI segmentation. The multi-scale convolution kernels are designed to extract abundant semantic features and capture context information at different scales, while the cascaded dilated convolution scheme helps to alleviate the vanishing gradient problem in the proposed model. Experimental outcomes indicate that the proposed architecture is superior to traditional deep-learning methods such as SegNet, U-net, and U-SegNet, achieving high performance with an average DSC of 93% and a JI of 86% for brain MRI segmentation.
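
The context-capturing effect of cascaded dilated convolutions comes from how each stride-1 layer grows the receptive field by (kernel_size - 1) * dilation. A small sketch of that arithmetic (the dilation rates here are illustrative, not necessarily the paper's):

```python
def receptive_field(kernel_size, dilations):
    # Effective receptive field of a stack of stride-1 dilated convolutions:
    # each layer adds (kernel_size - 1) * dilation to the field.
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Three plain 3x3 layers vs. three 3x3 layers with cascaded dilations 1, 2, 4
print(receptive_field(3, [1, 1, 1]))  # 7
print(receptive_field(3, [1, 2, 4]))  # 15
```

Cascading the dilation rates more than doubles the context seen per pixel at the same parameter count, which is why such schemes are popular for capturing multi-scale context in segmentation.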
