• Title/Summary/Keyword: Deep learning segmentation

Comparison of Multi-Label U-Net and Mask R-CNN for panoramic radiograph segmentation to detect periodontitis

  • Widyaningrum, Rini;Candradewi, Ika;Aji, Nur Rahman Ahmad Seno;Aulianisa, Rona
    • Imaging Science in Dentistry / v.52 no.4 / pp.383-391 / 2022
  • Purpose: Periodontitis, the most prevalent chronic inflammatory condition affecting the tooth-supporting tissues, is diagnosed and classified through clinical and radiographic examinations. Staging periodontitis on panoramic radiographs provides information for designing computer-assisted diagnostic systems, and image segmentation is a prerequisite for image processing in such diagnostic applications. This study evaluated image segmentation for periodontitis staging based on deep learning approaches. Materials and Methods: Multi-Label U-Net and Mask R-CNN models were compared for image segmentation to detect periodontitis using 100 digital panoramic radiographs. Normal conditions and 4 stages of periodontitis were annotated on these radiographs. A total of 1,100 original and augmented images were then randomly divided into a training dataset (75%) to produce the segmentation models and a testing dataset (25%) to determine their evaluation metrics. Results: The performance of the segmentation models against a dentist's radiographic diagnosis of periodontitis was described by evaluation metrics (i.e., the Dice coefficient and intersection-over-union [IoU] score). Multi-Label U-Net achieved a Dice coefficient of 0.96 and an IoU score of 0.97, whereas Mask R-CNN attained a Dice coefficient of 0.87 and an IoU score of 0.74. U-Net performed semantic segmentation, while Mask R-CNN performed instance segmentation with accuracy, precision, recall, and F1-score values of 95%, 85.6%, 88.2%, and 86.6%, respectively. Conclusion: Multi-Label U-Net produced superior image segmentation to Mask R-CNN. The authors recommend integrating it with other techniques to develop hybrid models for automatic periodontitis detection.
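
For reference, the Dice coefficient and IoU reported above measure the overlap between a predicted mask and a ground-truth mask. The following is a minimal NumPy sketch of the two metrics for binary masks; the array names and the smoothing term are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2*|A intersect B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU = |A intersect B| / |A union B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Toy example: two overlapping 4x4 masks.
pred = np.array([[0, 1, 1, 0]] * 4)
gt   = np.array([[0, 1, 1, 1]] * 4)
print(dice_coefficient(pred, gt), iou_score(pred, gt))
```

For a multi-label setting such as the five periodontitis classes above, these per-class scores would typically be averaged across classes.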

Development of Fender Segmentation System for Port Structures using Vision Sensor and Deep Learning (비전센서 및 딥러닝을 이용한 항만구조물 방충설비 세분화 시스템 개발)

  • Min, Jiyoung;Yu, Byeongjun;Kim, Jonghyeok;Jeon, Haemin
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.26 no.2 / pp.28-36 / 2022
  • As port structures are exposed to various extreme external loads such as wind (typhoons), sea waves, and collisions with ships, it is important to evaluate their structural safety periodically. To monitor port structures, especially rubber fenders, a fender segmentation system using a vision sensor and a deep learning method is proposed in this study. For fender segmentation, a new deep learning network is proposed that improves the encoder-decoder framework by incorporating a receptive field block convolution module, inspired by the eccentricity of receptive fields in the human visual system, into a DenseNet-style architecture. To train the network, images of various fender types such as BP, V, cell, cylindrical, and tire types were collected, and the images were augmented by applying four augmentation methods: elastic distortion, horizontal flip, color jitter, and affine transforms. The proposed algorithm was trained and verified with the collected fender images, and the results showed that the system segments fenders precisely in real time, with a high IoU (84%) and F1 score (90%) compared with a conventional segmentation model (VGG16 with U-Net). The trained network was then applied to real images taken at a port in the Republic of Korea, and the fenders were segmented with high accuracy even with a small dataset.
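
The four augmentation methods named above (elastic distortion, horizontal flip, color jitter, affine transforms) map directly onto standard image augmentation libraries. Below is a minimal sketch using torchvision transforms as a stand-in for the authors' implementation, which the abstract does not describe in code; the probabilities and parameter values are illustrative assumptions.

```python
from torchvision import transforms

# Illustrative pipeline covering the four augmentation methods named in the
# abstract; parameter values are assumptions, not the authors' settings.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.RandomAffine(degrees=10, translate=(0.05, 0.05), scale=(0.9, 1.1)),
    transforms.ElasticTransform(alpha=50.0),   # requires torchvision >= 0.13
    transforms.ToTensor(),
])

# Usage: apply to a PIL image before feeding it to the segmentation network.
# from PIL import Image
# x = augment(Image.open("fender.jpg"))   # tensor of shape (C, H, W)
```

Note that for segmentation training the geometric transforms (flip, affine, elastic) must be applied jointly to the image and its mask, for example via torchvision's v2 transforms or an equivalent paired-transform utility.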

Enhancement of Tongue Segmentation by Using Data Augmentation (데이터 증강을 이용한 혀 영역 분할 성능 개선)

  • Chen, Hong;Jung, Sung-Tae
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.5 / pp.313-322 / 2020
  • A large volume of data improves the robustness of deep learning models and helps avoid overfitting. In automatic tongue segmentation, the availability of annotated tongue images is often limited because collecting and labeling tongue image datasets is difficult in practice. Data augmentation can expand the training dataset and increase the diversity of the training data through label-preserving transformations, without collecting new data. In this paper, augmented tongue image datasets were developed using seven augmentation techniques, including image cropping, rotation, flipping, and color transformations. The performance of these data augmentation techniques was studied using state-of-the-art transfer learning models such as InceptionV3, EfficientNet, ResNet, and DenseNet. The results show that geometric transformations lead to larger performance gains than color transformations, and that segmentation accuracy can be increased by 5% to 20% compared with no augmentation. Furthermore, a dataset augmented with a random linear combination of geometric and color transformations gave superior segmentation performance to all other datasets, achieving the best accuracy of 94.98% with the InceptionV3 model.
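
As a rough illustration of the transfer learning setup described above, the sketch below fine-tunes a segmentation model with an ImageNet-pretrained backbone for binary (tongue vs. background) segmentation. It uses torchvision's DeepLabV3-ResNet50 purely as a readily available stand-in; the paper's own experiments used InceptionV3, EfficientNet, ResNet, and DenseNet backbones, and all hyperparameters shown here are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

# Stand-in model: pretrained backbone, new 2-class head (background, tongue).
model = deeplabv3_resnet50(weights="DEFAULT")
model.classifier[4] = nn.Conv2d(256, 2, kernel_size=1)

# Optionally freeze the pretrained backbone and train only the new head.
for p in model.backbone.parameters():
    p.requires_grad = False

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)

def train_step(images: torch.Tensor, masks: torch.Tensor) -> float:
    """images: (N, 3, H, W) float tensor; masks: (N, H, W) long tensor in {0, 1}."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]          # (N, 2, H, W)
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```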

Improved Performance of Image Semantic Segmentation using NASNet (NASNet을 이용한 이미지 시맨틱 분할 성능 개선)

  • Kim, Hyoung Seok;Yoo, Kee-Youn;Kim, Lae Hyun
    • Korean Chemical Engineering Research / v.57 no.2 / pp.274-282 / 2019
  • In recent years, big data analysis has expanded to include not only prediction through modeling but also automatic control through reinforcement learning. Research on the utilization of image data is actively carried out in various industrial fields such as chemicals, manufacturing, agriculture, and the bio-industry. In this paper, NASNet, an AutoML reinforcement learning algorithm, was applied to DeepU-Net, a neural network that modifies U-Net, to improve image semantic segmentation performance. BRATS2015 MRI data were used for performance verification. Simulation results show that DeepU-Net performs better than the U-Net neural network. To further improve segmentation performance, the dropout layers typically applied to neural networks were removed when the numbers of kernels and filters obtained through reinforcement learning were selected as hyperparameters of DeepU-Net. The resulting model achieved a training accuracy 0.5% higher and a validation accuracy 0.3% higher than DeepU-Net. The results of this study can be applied to various fields such as MRI brain imaging diagnosis, thermal imaging camera abnormality diagnosis, nondestructive inspection, chemical leakage monitoring, and forest fire monitoring through CCTV.
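
The key implementation detail above is that the numbers of kernels/filters found by the reinforcement learning search are treated as hyperparameters, and dropout is removed from the blocks when those searched values are used. A minimal sketch of such a configurable convolutional block is shown below; the block structure, filter counts, and names are illustrative assumptions rather than the actual DeepU-Net definition.

```python
import torch.nn as nn

class ConvBlock(nn.Module):
    """Encoder/decoder block whose filter count is a searchable hyperparameter
    and whose dropout can be switched off, as in the configuration described above."""
    def __init__(self, in_ch: int, out_ch: int, use_dropout: bool = False, p: float = 0.5):
        super().__init__()
        layers = [
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        ]
        if use_dropout:
            layers.append(nn.Dropout2d(p))
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)

# Hypothetical filter counts as selected by an architecture search,
# with dropout removed (use_dropout=False).
searched_filters = [32, 64, 128, 256]
encoder = nn.ModuleList(
    ConvBlock(in_ch, out_ch, use_dropout=False)
    for in_ch, out_ch in zip([3] + searched_filters[:-1], searched_filters)
)
```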

Crack segmentation in high-resolution images using cascaded deep convolutional neural networks and Bayesian data fusion

  • Tang, Wen;Wu, Rih-Teng;Jahanshahi, Mohammad R.
    • Smart Structures and Systems / v.29 no.1 / pp.221-235 / 2022
  • Manual inspection of steel box girders on long-span bridges is time-consuming and labor-intensive, and the quality of inspection relies on the subjective judgment of the inspectors. This study proposes an automated approach to detect and segment cracks in high-resolution images. An end-to-end cascaded framework is proposed that first detects the existence of cracks using a deep convolutional neural network (CNN) and then segments the cracks using a modified U-Net encoder-decoder architecture. A Naïve Bayes data fusion scheme is proposed to effectively reduce false positives and false negatives. To generate the binary crack mask, the original images are first divided into 448 × 448 overlapping image patches, which are classified as crack versus non-crack by a deep CNN. Next, a modified U-Net is trained from scratch using only the crack patches for segmentation. A customized loss function that combines binary cross-entropy loss and Dice loss is introduced to enhance segmentation performance. Additionally, the Naïve Bayes fusion strategy integrates the crack score maps from the overlapping crack patches to decide whether each pixel is a crack or not. Comprehensive experiments demonstrate that the proposed approach achieves an 81.71% mean intersection over union (mIoU) score across 5 different training/test splits, which is 7.29% higher than the baseline implemented with the original U-Net.
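
The customized loss described above combines binary cross-entropy with the Dice loss. A minimal PyTorch sketch of such a combined loss is given below; the weighting between the two terms and the smoothing constant are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

class BCEDiceLoss(nn.Module):
    """Combined binary cross-entropy + Dice loss for binary crack segmentation."""
    def __init__(self, bce_weight: float = 0.5, smooth: float = 1.0):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()
        self.bce_weight = bce_weight
        self.smooth = smooth

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # logits, targets: (N, 1, H, W); targets take values in {0, 1}.
        bce_loss = self.bce(logits, targets)
        probs = torch.sigmoid(logits)
        intersection = (probs * targets).sum(dim=(1, 2, 3))
        dice = (2.0 * intersection + self.smooth) / (
            probs.sum(dim=(1, 2, 3)) + targets.sum(dim=(1, 2, 3)) + self.smooth
        )
        dice_loss = 1.0 - dice.mean()
        return self.bce_weight * bce_loss + (1.0 - self.bce_weight) * dice_loss

# Usage with a U-Net-style model producing one logit per pixel:
# loss = BCEDiceLoss()(model(patches), masks.float())
```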

A Study on Automatic Classification of Characterized Ground Regions on Slopes by a Deep Learning based Image Segmentation (딥러닝 영상처리를 통한 비탈면의 지반 특성화 영역 자동 분류에 관한 연구)

  • Lee, Kyu Beom;Shin, Hyu-Soung;Kim, Seung Hyeon;Ha, Dae Mok;Choi, Isu
    • Tunnel and Underground Space / v.29 no.6 / pp.508-522 / 2019
  • Because slope failure can cause not only property damage but also casualties, slope stability analysis should be conducted to predict failures and reinforce slopes. This paper defines the ground regions in slope images that can be characterized in terms of slope failure, such as rock mass joint sets, rock mass faults, soil, leakage water, and crush zones. The results show that a deep learning instance segmentation network can be used to recognize and automatically segment the precise shapes of ground regions with different characteristics shown in an image. This demonstrates the possibility of supporting slope mapping work and automatically calculating the ground characteristic information of slopes needed for decision making, such as slope reinforcement.
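
The abstract does not name the specific instance segmentation network; Mask R-CNN is one widely used choice for this kind of region-wise segmentation. The sketch below adapts torchvision's pretrained Mask R-CNN to the five ground-characteristic classes listed above plus background, following the usual torchvision fine-tuning pattern. Treat it as an illustrative assumption rather than the authors' actual model.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 6  # background + joint set, fault, soil, leakage water, crush zone

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box classification head for the new class count.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Replace the mask prediction head for the new class count.
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, NUM_CLASSES)

# Training then follows the standard torchvision detection loop:
# images is a list of (3, H, W) tensors, targets a list of dicts with
# "boxes", "labels", and "masks" for each annotated ground region.
# losses = model(images, targets)   # dict of classification/box/mask losses
```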

Deep Learning-based Pixel-level Concrete Wall Crack Detection Method (딥러닝 기반 픽셀 단위 콘크리트 벽체 균열 검출 방법)

  • Kang, Kyung-Su;Ryu, Han-Guk
    • Journal of the Korea Institute of Building Construction / v.23 no.2 / pp.197-207 / 2023
  • Concrete is a widely used material due to its excellent compressive strength and durability. However, depending on the surrounding environment and the characteristics of the materials used in construction, various defects may occur, such as cracks on the surface and subsidence of the structure. Defects on the surface of a concrete structure appear after completion or develop over time, and neglecting these cracks may lead to severe structural damage, necessitating regular safety inspections. Traditional visual inspections of concrete walls are labor-intensive and expensive. This research presents a deep learning-based semantic segmentation model designed to detect cracks in concrete walls. The model addresses surface defects that arise from aging, and an image augmentation technique is employed to enhance feature extraction and generalization performance. A dataset for semantic segmentation was created by combining publicly available and self-generated datasets, and notable semantic segmentation models were evaluated and tested. The model trained specifically for concrete wall crack detection achieved an extraction performance of 81.4%, and a further 3% performance improvement was observed when the developed augmentation technique was applied.

Generation and Validation of Finite Element Models of Computed Tomography for Unidirectional Composites Using Supervised Learning-based Segmentation Techniques (지도학습 기반 분할기법을 이용한 단층 촬영된 단방향 복합재료의 유한요소모델 생성 및 검증)

  • Kim, Taeyi;Jin, Seong-Won;Kim, Yeong-Bae;Lim, Jae Hyuk;Kim, YunHo
    • Composites Research / v.36 no.6 / pp.395-401 / 2023
  • In this study, finite element models of unidirectional composite materials were generated from computed tomography (CT) data using a supervised learning-based segmentation technique. First, micro-CT scanning was performed to obtain the raw volume of the unidirectional composite material, providing microstructure information. From the CT volume images, the actual microstructure of the composite cross-section was extracted by a labeling process. A U-Net deep learning model was then trained with a small number of raw images as inputs and their labeled images as outputs to produce a segmentation model. Subsequently, most of the remaining images were fed into the trained U-Net model to segment the entire raw volume and identify the complex microstructure, which was used to generate the finite element model. Finally, the fiber volume fraction of the finite element model was compared with the experimentally measured value to validate the appropriateness of the proposed method.
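
The final validation step compares the fiber volume fraction of the generated model with the experimentally measured value. Computing that fraction from a segmented CT volume is a simple voxel count, sketched below with NumPy; the label convention (1 = fiber, 0 = matrix) is an assumption for illustration.

```python
import numpy as np

def fiber_volume_fraction(segmented_volume: np.ndarray) -> float:
    """Fraction of voxels labeled as fiber in a binary segmented CT volume
    (assumed convention: 1 = fiber, 0 = matrix)."""
    fiber_voxels = np.count_nonzero(segmented_volume == 1)
    return fiber_voxels / segmented_volume.size

# Example with a random dummy volume standing in for the U-Net output.
dummy = (np.random.rand(64, 64, 64) > 0.4).astype(np.uint8)
print(f"Vf = {fiber_volume_fraction(dummy):.3f}")
```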

Research Trend of the Remote Sensing Image Analysis Using Deep Learning (딥러닝을 이용한 원격탐사 영상분석 연구동향)

  • Kim, Hyungwoo;Kim, Minho;Lee, Yangwon
    • Korean Journal of Remote Sensing / v.38 no.5_3 / pp.819-834 / 2022
  • Artificial intelligence (AI) techniques have been used effectively for image classification, object detection, and image segmentation. Along with recent advances in computing power, deep learning models can build deeper and wider networks and achieve better performance by creating more appropriate feature maps based on effective activation functions and optimizer algorithms. This review paper examines the technical and academic trends of convolutional neural network (CNN) and Transformer models, which are emerging techniques in remote sensing, and suggests utilization strategies and development directions. Future work will require the timely supply of satellite images and real-time deep learning processing to cope with disaster monitoring. In addition, a big data platform dedicated to satellite images should be developed and integrated with drone and closed-circuit television (CCTV) images.

Character Level and Word Level English License Plate Recognition Using Deep-learning Neural Networks (딥러닝 신경망을 이용한 문자 및 단어 단위의 영문 차량 번호판 인식)

  • Kim, Jinho
    • Journal of Korea Society of Digital Industry and Information Management / v.16 no.4 / pp.19-28 / 2020
  • Vehicle license plate recognition systems are not widespread in Malaysia due to the loose character layout rules, the varying number of characters, and the mixture of capital English characters and italic English words. Because the italic English words are hard to segment, a separate method is required to recognize them on Malaysian license plates. In this paper, we propose a mixed character-level and word-level English license plate recognition algorithm using deep learning neural networks. A difference-of-Gaussian method is used to segment characters and words by generating a black-and-white image with emphasized character strokes and separated touching characters. The proposed deep learning neural networks were implemented in the LPR system at the gate of a building in Kuala Lumpur to collect a database and evaluate algorithm performance. The evaluation results show that the proposed Malaysian English LPR can be used in the commercial market with 98.01% accuracy.
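
The difference-of-Gaussian step described above amounts to subtracting two Gaussian-blurred versions of the plate image to emphasize character strokes before binarization and contour-based character/word separation. The OpenCV sketch below illustrates that idea; the sigma values, threshold, and contour filtering are assumptions, not the paper's parameters.

```python
import cv2
import numpy as np

def dog_binarize(gray: np.ndarray, sigma1: float = 1.0, sigma2: float = 3.0) -> np.ndarray:
    """Difference-of-Gaussian binarization that emphasizes character strokes."""
    g1 = cv2.GaussianBlur(gray, (0, 0), sigma1)
    g2 = cv2.GaussianBlur(gray, (0, 0), sigma2)
    dog = cv2.subtract(g1, g2)                      # band-pass: keeps stroke edges
    _, binary = cv2.threshold(dog, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

def extract_character_boxes(binary: np.ndarray, min_area: int = 50):
    """Return bounding boxes of candidate characters/words, left to right."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
    return sorted(boxes, key=lambda b: b[0])

# Usage (hypothetical file name):
# gray = cv2.imread("plate.jpg", cv2.IMREAD_GRAYSCALE)
# boxes = extract_character_boxes(dog_binarize(gray))
```

Each cropped box would then be passed to the character-level or word-level recognition network described in the paper.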