• Title/Abstract/Keyword: Image Semantic Segmentation

Search results: 144 items (processing time 0.456 s)

DP-LinkNet: A convolutional network for historical document image binarization

  • Xiong, Wei;Jia, Xiuhong;Yang, Dichun;Ai, Meihui;Li, Lirong;Wang, Song
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.15 No.5 / pp.1778-1797 / 2021
  • Document image binarization is an important pre-processing step in document analysis and archiving. The state-of-the-art models for document image binarization are variants of encoder-decoder architectures, such as FCN (fully convolutional network) and U-Net. Despite their success, they still suffer from three limitations: (1) reduced feature map resolution due to consecutive strided pooling or convolutions, (2) multiple scales of target objects, and (3) reduced localization accuracy due to the built-in invariance of deep convolutional neural networks (DCNNs). To overcome these three challenges, we propose an improved semantic segmentation model, referred to as DP-LinkNet, which adopts the D-LinkNet architecture as its backbone, with the proposed hybrid dilated convolution (HDC) and spatial pyramid pooling (SPP) modules between the encoder and the decoder. Extensive experiments are conducted on recent document image binarization competition (DIBCO) and handwritten document image binarization competition (H-DIBCO) benchmark datasets. Results show that our proposed DP-LinkNet outperforms other state-of-the-art techniques by a large margin. Our implementation and the pre-trained models are available at https://github.com/beargolden/DP-LinkNet.
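The gridding artifact that the HDC module addresses can be illustrated with a short receptive-field calculation (a generic sketch: the rates 1, 2, 5 are the dilation rates commonly used to motivate hybrid dilated convolution, not necessarily the exact rates in DP-LinkNet):

```python
def receptive_field(kernel_size, dilation_rates):
    """Effective receptive field of stacked stride-1 dilated convolutions."""
    rf = 1
    for d in dilation_rates:
        rf += (kernel_size - 1) * d  # each layer widens the field by (k-1)*d
    return rf

# HDC mixes rates such as 1, 2, 5 so the union of sampled positions has no
# holes, whereas repeating one rate (e.g. 2, 2, 2) leaves a gridding pattern
# of unsampled pixels despite a similar receptive field.
print(receptive_field(3, [1, 2, 5]))  # 17
print(receptive_field(3, [2, 2, 2]))  # 13
```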

Deep Facade Parsing with Occlusions

  • Ma, Wenguang;Ma, Wei;Xu, Shibiao
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.16 No.2 / pp.524-543 / 2022
  • Correct facade image parsing is essential to the semantic understanding of outdoor scenes. Unfortunately, there are often various occlusions in front of buildings, which cause many existing methods to fail. In this paper, we propose an end-to-end deep network for facade parsing with occlusions. The network learns to decompose an input image into visible and invisible parts by occlusion reasoning. Then, a context aggregation module is proposed to collect nonlocal cues for semantic segmentation of the visible part. In addition, considering the regularity of man-made buildings, a repetitive pattern completion branch is designed to infer the contents in the invisible regions by referring to the visible part. Finally, the parsing map of the input facade image is generated by fusing the visible and invisible results. Experiments on both synthetic and real datasets demonstrate that the proposed method outperforms state-of-the-art methods in parsing facades with occlusions. Moreover, we applied our method to image inpainting and 3D semantic modeling.
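The repetitive-pattern completion idea can be sketched as a toy rule over a single facade row (the fixed `period` and the `complete_repetitive_row` helper are illustrative assumptions; the paper's branch learns the repetition from the visible part rather than taking the period as given):

```python
def complete_repetitive_row(row, period):
    """Fill None (occluded) cells of a facade row by copying the label one
    repetition period away, exploiting the regularity of man-made buildings."""
    out = list(row)
    for i, v in enumerate(out):
        if v is None:
            if i >= period and out[i - period] is not None:
                out[i] = out[i - period]       # copy from the left repeat
            elif i + period < len(out) and out[i + period] is not None:
                out[i] = out[i + period]       # fall back to the right repeat
    return out

row = ["win", "wall", "win", None, "win", "wall"]
print(complete_repetitive_row(row, 2))  # ['win', 'wall', 'win', 'wall', 'win', 'wall']
```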

ETLi: Efficiently annotated traffic LiDAR dataset using incremental and suggestive annotation

  • Kang, Jungyu;Han, Seung-Jun;Kim, Nahyeon;Min, Kyoung-Wook
    • ETRI Journal / Vol.43 No.4 / pp.630-639 / 2021
  • Autonomous driving requires a computerized perception of the environment for safety and machine-learning evaluation. Recognizing semantic information is difficult, as the objective is to instantly recognize and distinguish items in the environment. Training a model with real-time semantic capability and high reliability requires extensive and specialized datasets. However, generalized datasets are unavailable and are typically difficult to construct for specific tasks. Hence, a light detection and ranging (LiDAR) semantic dataset suitable for semantic simultaneous localization and mapping and specialized for autonomous driving is proposed. This dataset is provided in a form that can be easily used by users familiar with existing two-dimensional image datasets, and it contains various weather and light conditions collected from a complex and diverse practical setting. An incremental and suggestive annotation routine is proposed to improve annotation efficiency. A model is trained to simultaneously predict segmentation labels and suggest class-representative frames. Experimental results demonstrate that the proposed algorithm yields a more efficient dataset than uniformly sampled datasets.
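Frame suggestion in this spirit is often done by ranking frames by prediction uncertainty; a minimal sketch, assuming a mean-entropy criterion (the paper's exact suggestion criterion is not specified here, so `suggest_frames` and its inputs are hypothetical):

```python
import math

def suggest_frames(frame_probs, k):
    """Rank frames by prediction entropy and suggest the top-k for annotation.

    frame_probs: one class-probability vector per frame (toy stand-in for the
    model's per-frame predictions)."""
    def entropy(p):
        return -sum(q * math.log(q) for q in p if q > 0)
    ranked = sorted(range(len(frame_probs)),
                    key=lambda i: entropy(frame_probs[i]),
                    reverse=True)
    return ranked[:k]

probs = [[0.98, 0.01, 0.01],   # confident -> low entropy
         [0.34, 0.33, 0.33],   # uncertain -> high entropy, worth annotating
         [0.70, 0.20, 0.10]]
print(suggest_frames(probs, 1))  # [1]
```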

Comparison of Multi-Label U-Net and Mask R-CNN for panoramic radiograph segmentation to detect periodontitis

  • Rini, Widyaningrum;Ika, Candradewi;Nur Rahman Ahmad Seno, Aji;Rona, Aulianisa
    • Imaging Science in Dentistry / Vol.52 No.4 / pp.383-391 / 2022
  • Purpose: Periodontitis, the most prevalent chronic inflammatory condition affecting tooth-supporting tissues, is diagnosed and classified through clinical and radiographic examinations. The staging of periodontitis using panoramic radiographs provides information for designing computer-assisted diagnostic systems. Performing image segmentation in periodontitis is required for image processing in diagnostic applications. This study evaluated image segmentation for periodontitis staging based on deep learning approaches. Materials and Methods: Multi-Label U-Net and Mask R-CNN models were compared for image segmentation to detect periodontitis using 100 digital panoramic radiographs. Normal conditions and 4 stages of periodontitis were annotated on these panoramic radiographs. A total of 1100 original and augmented images were then randomly divided into a training (75%) dataset to produce segmentation models and a testing (25%) dataset to determine the evaluation metrics of the segmentation models. Results: The performance of the segmentation models against the radiographic diagnosis of periodontitis conducted by a dentist was described by evaluation metrics (i.e., Dice coefficient and intersection-over-union [IoU] score). Multi-Label U-Net achieved a Dice coefficient of 0.96 and an IoU score of 0.97. Meanwhile, Mask R-CNN attained a Dice coefficient of 0.87 and an IoU score of 0.74. Multi-Label U-Net performed semantic segmentation, while Mask R-CNN performed instance segmentation with accuracy, precision, recall, and F1-score values of 95%, 85.6%, 88.2%, and 86.6%, respectively. Conclusion: Multi-Label U-Net produced superior image segmentation to that of Mask R-CNN. The authors recommend integrating it with other techniques to develop hybrid models for automatic periodontitis detection.
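The two evaluation metrics used above have standard definitions; a minimal sketch on binary masks represented as sets of pixel coordinates (note that by these definitions the Dice coefficient is always at least the IoU for the same pair of masks):

```python
def dice_and_iou(pred, truth):
    """Dice coefficient and IoU for binary masks given as sets of pixel indices."""
    inter = len(pred & truth)
    dice = 2 * inter / (len(pred) + len(truth))  # 2|A∩B| / (|A|+|B|)
    iou = inter / len(pred | truth)              # |A∩B| / |A∪B|
    return dice, iou

pred = {(0, 0), (0, 1), (1, 0)}
truth = {(0, 0), (0, 1), (1, 1)}
dice, iou = dice_and_iou(pred, truth)
print(round(dice, 3), round(iou, 3))  # 0.667 0.5
```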

Semantic Segmentation of Urban Scenes Using Location Prior Information

  • 왕정현;김진환
    • The Journal of Korea Robotics Society / Vol.12 No.3 / pp.249-257 / 2017
  • This paper proposes a method to segment urban scenes semantically based on location prior information. Since major scene elements in urban environments such as roads, buildings, and vehicles are often located at specific locations, using the location prior information of these elements can improve the segmentation performance. The location priors are defined in special 2D coordinates, referred to as road-normal coordinates, which are perpendicular to the orientation of the road. With the help of depth information for each element, all the possible pixels in the image are projected into these coordinates and the learned prior information is applied to those pixels. The proposed location prior can be modeled by defining a unary potential of a conditional random field (CRF) as a sum of two sub-potentials: an appearance feature-based potential and a location potential. The proposed method was validated using the publicly available KITTI dataset, which provides urban images and corresponding 3D depth measurements.
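The two-term unary potential can be sketched with negative log-probabilities (a minimal sketch: `unary_potential`, its inputs, and the weights are illustrative assumptions, not the paper's learned model):

```python
import math

def unary_potential(appearance_score, location_prior, w_app=1.0, w_loc=1.0):
    """CRF unary potential as a weighted sum of two sub-potentials.

    appearance_score: classifier probability for the label at this pixel
    location_prior:   probability of the label at this road-normal coordinate
    Negative log-probabilities are used, so a lower potential is more likely.
    """
    return (w_app * -math.log(appearance_score)
            + w_loc * -math.log(location_prior))

# A "road" hypothesis supported by both the appearance cue and the location
# prior gets a lower (better) potential than one the prior contradicts.
print(unary_potential(0.8, 0.9) < unary_potential(0.8, 0.1))  # True
```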

Application of CCTV Image and Semantic Segmentation Model for Water Level Estimation of Irrigation Channel

  • 김귀훈;김마가;윤푸른;방재홍;명우호;최진용;최규훈
    • Journal of the Korean Society of Agricultural Engineers / Vol.64 No.3 / pp.63-73 / 2022
  • A more accurate understanding of the irrigation water supply is necessary for efficient agricultural water management. Although water levels in an irrigation canal are measured using ultrasonic water level gauges, some errors occur due to malfunctions or the surrounding environment. This study aims to apply CNN (Convolutional Neural Network) deep-learning-based image classification and segmentation models to CCTV (Closed-Circuit Television) images of an irrigation canal. The CCTV images were acquired from the irrigation canal of an agricultural reservoir in Cheorwon-gun, Gangwon-do. We used the ResNet-50 model for image classification and the U-Net model for image segmentation. Using the Natural Breaks algorithm, we divided the water level data into 2, 4, and 8 groups for the image classification models. The classification models with 2, 4, and 8 groups achieved accuracies of 1.000, 0.987, and 0.634, respectively. The image segmentation model showed a Dice score of 0.998, and the predicted water levels showed an R2 of 0.97 and an MAE (Mean Absolute Error) of 0.02 m. The image classification models can be applied to an automatic gate controller operating over four divisions of water levels. The image segmentation model can also serve as an alternative to ultrasonic water gauges. We expect that the results of this study can provide a more scientific and efficient approach for agricultural water management.
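Converting a segmentation mask to a water level can be sketched with a linear calibration along one vertical column of pixels (the `water_level_from_mask` helper and its calibration endpoints are illustrative assumptions; a real deployment would map pixel rows to heights using surveyed reference marks):

```python
def water_level_from_mask(mask_column, level_at_bottom, level_at_top):
    """Estimate the water level from one vertical column of a binary
    segmentation mask (1 = water), by measuring the water fraction and
    linearly interpolating between two calibrated heights."""
    water_pixels = sum(mask_column)          # pixels classified as water
    frac = water_pixels / len(mask_column)   # fraction of the column submerged
    return level_at_bottom + frac * (level_at_top - level_at_bottom)

# Column of 10 pixels, bottom 4 are water; canal bed at 0.0 m, top at 1.0 m
column = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
print(water_level_from_mask(column, 0.0, 1.0))  # 0.4
```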

Comparison of Performance of Medical Image Semantic Segmentation Models on ATLAS V2.0 Data

  • 우소연;구영현;유성준
    • Journal of Broadcast Engineering / Vol.28 No.3 / pp.267-274 / 2023
  • Publicly available medical image datasets are limited in size because data collection is difficult, raising the concern that existing studies may have overfit to these open datasets. This paper re-validates the performance of existing models by experimentally comparing eight medical image segmentation models (U-Net, X-Net, HarDNet, SegNet, PSPNet, SwinUnet, 3D-ResU-Net, and UNETR). The models are compared on Anatomical Tracings of Lesions After Stroke (ATLAS) V1.2 and ATLAS V2.0, publicly available stroke-diagnosis datasets. Most models showed similar performance on V1.2 and V2.0, but X-Net and 3D-ResU-Net recorded higher performance on the V1.2 dataset. This result can be interpreted as these models having overfit to V1.2.

Semantic crack-image identification framework for steel structures using atrous convolution-based Deeplabv3+ Network

  • Ta, Quoc-Bao;Dang, Ngoc-Loi;Kim, Yoon-Chul;Kam, Hyeon-Dong;Kim, Jeong-Tae
    • Smart Structures and Systems / Vol.30 No.1 / pp.17-34 / 2022
  • For steel structures, fatigue cracks are critical damage induced by long-term cyclic loading and distortion effects. Vision-based crack detection can be a solution to ensure structural integrity and performance by continuous monitoring and non-destructive assessment. A critical issue is to distinguish cracks from other features in captured images, which possibly contain complex backgrounds such as handwriting and marks made to record crack patterns and lengths during periodic visual inspections. This study presents a parametric study on image-based crack identification for orthotropic steel bridge decks using captured images with complicated backgrounds. Firstly, a framework for vision-based crack segmentation using the atrous convolution-based Deeplabv3+ network (ACDN) is designed. Secondly, features on crack images are labeled to build three databanks by consideration of objects in the backgrounds. Thirdly, evaluation metrics computed from the trained ACDN models are utilized to evaluate the effects of obstacles on crack detection results. Finally, various training parameters, including image sizes, hyper-parameters, and the number of training images, are optimized for the ACDN crack-detection model. The results demonstrated that fatigue cracks could be identified by the trained ACDN models, and the accuracy of the crack-detection result was improved by optimizing the training parameters. This enables the vision-based technique to be applied to the early detection of tiny fatigue cracks in steel structures.

Deep learning approach to generate 3D civil infrastructure models using drone images

  • Kwon, Ji-Hye;Khudoyarov, Shekhroz;Kim, Namgyu;Heo, Jun-Haeng
    • Smart Structures and Systems / Vol.30 No.5 / pp.501-511 / 2022
  • Three-dimensional (3D) models have become crucial for improving civil infrastructure analysis, and they can be used for various purposes such as damage detection, risk estimation, resolving potential safety issues, alarm detection, and structural health monitoring. 3D point cloud data is used not only to make visual models but also to analyze the states of structures and to monitor them using semantic data. This study proposes automating the generation of high-quality 3D point cloud data and removing noise using deep learning algorithms. In this study, large-format aerial images of civil infrastructure, such as cut slopes and dams, captured by drones, were used to develop a workflow for automatically generating a 3D point cloud model. Through image cropping, downscaling/upscaling, semantic segmentation, generation of segmentation masks, and implementation of region extraction algorithms, the generation of the point cloud was automated. Compared with the method wherein the point cloud model is generated from raw images, our method could effectively improve the quality of the model, remove noise, and reduce the processing time. The results showed that the size of the 3D point cloud model created using the proposed method was significantly reduced; the number of points was reduced by 20-50%, and distant points were recognized as noise. This method can be applied to the automatic generation of high-quality 3D point cloud models of civil infrastructures using aerial imagery.
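Treating distant points as noise can be sketched as a simple range filter over the cloud (a minimal sketch: the `max_range` threshold is a hypothetical tuning parameter, and the study's actual pipeline uses segmentation masks and region extraction rather than a bare distance cutoff):

```python
import math

def filter_distant_points(points, origin, max_range):
    """Drop 3D points farther than max_range from the origin, since distant
    points in the drone-derived cloud tended to behave as noise."""
    kept = []
    for p in points:
        if math.dist(p, origin) <= max_range:
            kept.append(p)
    return kept

cloud = [(0, 0, 0), (1, 1, 0), (50, 50, 50)]
clean = filter_distant_points(cloud, (0, 0, 0), 10.0)
print(len(clean))  # 2
```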

Semantic Information Inference among Objects in an Image Using an Ontology

  • 김지원;김철원
    • The Journal of the Korea Institute of Electronic Communication Sciences / Vol.15 No.3 / pp.579-586 / 2020
  • Web pages contain vast amounts of multimedia material, and methods for extracting semantic information from low-level visual information have been studied to enable accurate retrieval. However, most of these techniques extract only a single piece of information per image, so it is difficult to extract semantic information when several objects are combined within one image. In this paper, to extract the multiple objects and the background in an image, low-level features are first extracted and then classified into predefined backgrounds and objects using an SVM. The separated objects and background are built into an ontology, and semantic information about their positions and relationships is inferred using an inference engine. This makes semantic inference among the multiple objects in an image possible, and we propose a method for inferring more complex and diverse high-level semantic information.
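Rule-based inference of positional relations can be sketched over detected object centers (a toy sketch: the single "above" rule and the `infer_relations` helper are illustrative stand-ins for the paper's ontology and inference engine):

```python
def infer_relations(objects):
    """Infer simple spatial relations among detected objects.

    objects: dict mapping object name -> (x, y) center in image coordinates,
    where y grows downward, so a smaller y means higher in the scene."""
    relations = []
    names = sorted(objects)
    for a in names:
        for b in names:
            if a != b and objects[a][1] < objects[b][1]:
                relations.append((a, "above", b))
    return relations

scene = {"sky": (50, 10), "person": (40, 60), "road": (50, 90)}
print(infer_relations(scene))
```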