• Title/Abstract/Keywords: Semantic segmentation

Search results: 242 items (processing time: 0.032 s)

깊이 슈퍼 픽셀을 이용한 실내 장면의 의미론적 분할 방법 (Semantic Segmentation of Indoor Scenes Using Depth Superpixel)

  • 김선걸;강행봉
    • 한국멀티미디어학회논문지 / Vol. 19, No. 3 / pp. 531-538 / 2016
  • In this paper, we propose a novel post-processing method for semantic segmentation of indoor scenes with RGB-D inputs. For accurate segmentation, various post-processing methods have been used, such as superpixels computed from color edges or the Conditional Random Field (CRF) method, which considers neighborhood connectivity; however, these methods are inefficient due to their high complexity and computational cost. To solve this problem, we maximize the efficiency of post-processing by using depth superpixels extracted from the disparity image to handle object silhouettes. Our experimental results show reasonable performance compared to previous methods for the post-processing of semantic segmentation.
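
As a rough illustration of the depth-superpixel post-processing idea described above, the sketch below majority-votes a network's predicted labels inside superpixels computed on the disparity image. The use of SLIC and the parameter values are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal sketch of depth-superpixel post-processing (not the authors' exact
# pipeline): labels predicted by a segmentation network are smoothed by
# majority vote inside superpixels computed on the disparity image.
import numpy as np
from skimage.segmentation import slic

def refine_with_depth_superpixels(pred_labels, disparity, n_segments=800):
    """pred_labels: (H, W) int class map; disparity: (H, W) float image."""
    # Superpixels follow depth discontinuities, i.e. object silhouettes.
    sp = slic(disparity, n_segments=n_segments, compactness=0.1, channel_axis=None)
    refined = pred_labels.copy()
    for sp_id in np.unique(sp):
        mask = sp == sp_id
        # Assign the most frequent predicted class to the whole superpixel.
        refined[mask] = np.bincount(pred_labels[mask].ravel()).argmax()
    return refined
```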

A Deep Learning-Based Image Semantic Segmentation Algorithm

  • Shen, Chaoqun;Sun, Zhongliang
    • Journal of Information Processing Systems / Vol. 19, No. 1 / pp. 98-108 / 2023
  • This paper designs a segmentation method based on fully convolutional networks (FCN) and an attention mechanism. The first five layers of the Visual Geometry Group (VGG) 16 network serve as the encoding part of the semantic segmentation network, with convolutional layers used to replace pooling to reduce the loss of image feature information. The up-sampling and deconvolution units of the FCN are then used as the decoding part of the network. In the deconvolution process, a skip structure is used to fuse different levels of information, and the attention mechanism is incorporated to reduce accuracy loss. Finally, the segmentation results are obtained through pixel-level classification. The results show that our method outperforms the comparison methods in mean pixel accuracy (MPA) and mean intersection over union (MIOU).
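
The following PyTorch sketch shows the general encoder-decoder pattern the abstract describes: a VGG16 feature extractor, transposed-convolution upsampling, and skip fusion of intermediate feature maps. The layer split points and channel choices are illustrative, and the attention mechanism is omitted; this is not the paper's exact network.

```python
# Rough sketch of an FCN-style network with a VGG16 encoder and skip fusion.
import torch.nn as nn
from torchvision.models import vgg16

class FCNSkip(nn.Module):
    def __init__(self, num_classes=21):
        super().__init__()
        feats = vgg16(weights=None).features
        self.stage1 = feats[:17]    # down to stride 8, 256 channels
        self.stage2 = feats[17:24]  # down to stride 16, 512 channels
        self.stage3 = feats[24:]    # down to stride 32, 512 channels
        self.score8 = nn.Conv2d(256, num_classes, 1)
        self.score16 = nn.Conv2d(512, num_classes, 1)
        self.score32 = nn.Conv2d(512, num_classes, 1)
        self.up2a = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)
        self.up2b = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)
        self.up8 = nn.ConvTranspose2d(num_classes, num_classes, 16, stride=8, padding=4)

    def forward(self, x):
        f8 = self.stage1(x)
        f16 = self.stage2(f8)
        f32 = self.stage3(f16)
        y = self.up2a(self.score32(f32)) + self.score16(f16)  # fuse stride-16 skip
        y = self.up2b(y) + self.score8(f8)                    # fuse stride-8 skip
        return self.up8(y)                                    # back to input resolution
```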

Saliency-Assisted Collaborative Learning Network for Road Scene Semantic Segmentation

  • Haifeng Sima;Yushuang Xu;Minmin Du;Meng Gao;Jing Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 3 / pp. 861-880 / 2023
  • Semantic segmentation of road scenes is a key technology for autonomous driving, and improvements in convolutional neural network architectures drive improvements in model segmentation performance. Existing convolutional neural networks suffer from oversimplified learned knowledge and high model complexity. To address this issue, we propose a road scene semantic segmentation algorithm based on multi-task collaborative learning. First, a depthwise separable convolution atrous spatial pyramid pooling module is proposed to reduce model complexity. Second, a collaborative learning framework involving saliency detection is proposed, and a joint loss function is defined using homoscedastic uncertainty to fit the new learning model. Experiments are conducted on road and natural scene datasets. The proposed method achieves 70.94% and 64.90% mIoU on the Cityscapes and PASCAL VOC 2012 datasets, respectively. Qualitatively, compared to methods with excellent performance, the proposed method has significant advantages in the segmentation of fine targets and boundaries.
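
The joint loss weighted by homoscedastic uncertainty that the abstract mentions is commonly implemented with learned log-variances per task (Kendall et al. style); a minimal sketch of that pattern follows. The exact form used in the paper may differ.

```python
# Hedged sketch of a two-task loss weighted by homoscedastic uncertainty.
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # log(sigma^2) for the segmentation and saliency tasks, learned jointly.
        self.log_var_seg = nn.Parameter(torch.zeros(1))
        self.log_var_sal = nn.Parameter(torch.zeros(1))

    def forward(self, loss_seg, loss_sal):
        # L = sum_i exp(-s_i) * L_i + s_i, with s_i = log(sigma_i^2)
        return (torch.exp(-self.log_var_seg) * loss_seg + self.log_var_seg
                + torch.exp(-self.log_var_sal) * loss_sal + self.log_var_sal)
```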

효율적인 비정형 도로영역 인식을 위한 Semantic segmentation 기반 심층 신경망 구조 (Efficient Deep Neural Network Architecture based on Semantic Segmentation for Paved Road Detection)

  • 박세진;한정훈;문영식
    • 한국정보통신학회논문지 / Vol. 24, No. 11 / pp. 1437-1444 / 2020
  • Advances in computer vision systems have led to great progress in fields such as security, biometrics, medical imaging, and autonomous driving. In autonomous driving in particular, deep learning-based object recognition and detection techniques are widely used, and recognizing the road region, that is, the area a vehicle can drive on, is an especially important problem. Unlike the rectangular regions used in general object detection, road regions have irregular shapes, so ROI-based object recognition architectures cannot be applied. In this paper, we propose a deep neural network architecture suited to recognizing such irregular road regions using semantic segmentation. We also show that performance is improved by using multi-scale semantic segmentation, a network structure specialized for road regions.
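
A minimal sketch of multi-scale semantic segmentation inference, the general technique referenced at the end of the abstract: the model is run at several scales and the resized logits are averaged. The scale set and averaging scheme are assumptions; the paper's specific multi-scale design is not reproduced here.

```python
# Illustrative multi-scale inference for a semantic segmentation model.
import torch
import torch.nn.functional as F

@torch.no_grad()
def multiscale_predict(model, image, scales=(0.5, 1.0, 1.5)):
    """image: (1, 3, H, W) tensor; returns (1, C, H, W) averaged logits."""
    h, w = image.shape[-2:]
    logits_sum = 0
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s, mode="bilinear",
                               align_corners=False)
        logits = model(scaled)
        # Resize each scale's logits back to the original resolution and accumulate.
        logits_sum = logits_sum + F.interpolate(logits, size=(h, w),
                                                mode="bilinear",
                                                align_corners=False)
    return logits_sum / len(scales)
```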

딥 러닝 기반의 팬옵틱 분할 기법 분석 (Survey on Deep Learning-based Panoptic Segmentation Methods)

  • 권정은;조성인
    • 대한임베디드공학회논문지 / Vol. 16, No. 5 / pp. 209-214 / 2021
  • Panoptic segmentation, now widely used in computer vision tasks such as medical image analysis and autonomous driving, helps understand an image with a holistic view. It identifies each pixel by assigning a unique class ID and an instance ID. Specifically, it can distinguish 'things' from 'stuff' and provide pixel-wise results of both semantic prediction and object detection. As a result, it can solve the semantic segmentation and instance segmentation tasks through a unified single model, producing two different contexts for the two segmentation tasks. The semantic segmentation task focuses on how to obtain multi-scale features from a large receptive field without losing low-level features. The instance segmentation task, on the other hand, focuses on how to separate 'things' from 'stuff' and how to produce representations of the detected objects. With the advances of both segmentation techniques, several panoptic segmentation models have been proposed. Many researchers have tried to solve the discrepancy problems between the results of the two segmentation branches that can arise at object boundaries. In this survey paper, we introduce the concept of panoptic segmentation, categorize existing methods into two representative approaches, top-down and bottom-up, and explain how each operates. We then analyze the performance of various methods with experimental results.
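
As a toy illustration of the panoptic output described in this survey (a class ID plus an instance ID per pixel), the sketch below fuses a semantic map with a list of instance masks. Real panoptic models use more careful merging heuristics; this is only a conceptual example.

```python
# Toy fusion of semantic ("stuff") and instance ("thing") outputs.
import numpy as np

def fuse_panoptic(semantic, instance_masks, instance_classes):
    """semantic: (H, W) class map; instance_masks: list of (H, W) bool masks
    sorted by descending confidence; returns per-pixel (class_id, instance_id)."""
    panoptic_cls = semantic.copy()
    panoptic_inst = np.zeros_like(semantic)
    next_id = 1
    for mask, cls in zip(instance_masks, instance_classes):
        free = mask & (panoptic_inst == 0)   # do not overwrite earlier instances
        panoptic_cls[free] = cls
        panoptic_inst[free] = next_id
        next_id += 1
    # "Stuff" pixels keep instance_id == 0; "thing" ids come from the masks.
    return panoptic_cls, panoptic_inst
```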

독점 멀티 분류기의 심층 학습 모델을 사용한 약지도 시맨틱 분할 (Weakly-supervised Semantic Segmentation using Exclusive Multi-Classifier Deep Learning Model)

  • 최현준;강동중
    • 한국인터넷방송통신학회논문지 / Vol. 19, No. 6 / pp. 227-233 / 2019
  • With the recent development of deep learning, neural networks have also achieved success in computer vision. Convolutional neural networks have shown excellent performance not only on simple image classification tasks but also on more difficult tasks such as object segmentation and detection. However, many such deep learning models are based on supervised learning, which requires far more detailed annotation than image-level labels. In particular, semantic segmentation models require pixel-level annotations for training, which is a critical issue. To address this problem, this paper proposes a weakly-supervised semantic segmentation method that requires only image-level labels to train the network. Existing weakly-supervised methods are limited to detecting only certain discriminative regions of the target. In contrast, this paper uses a multi-classifier deep learning architecture so that our model recognizes more diverse parts of the object. The proposed method is evaluated on the PASCAL VOC 2012 validation dataset.
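
Weakly-supervised segmentation from image-level labels typically starts from class activation maps (CAMs); a minimal CAM sketch follows as background for the abstract above. The paper's multi-classifier architecture is not reproduced here, and the backbone choice (ResNet-18) is an assumption for illustration.

```python
# Minimal CAM-style classifier: image-level training yields coarse object regions.
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class CAMNet(nn.Module):
    def __init__(self, num_classes=20):
        super().__init__()
        backbone = resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # (N, 512, h, w)
        self.classifier = nn.Conv2d(512, num_classes, kernel_size=1)    # 1x1 class scoring

    def forward(self, x):
        fmap = self.features(x)
        cam = self.classifier(fmap)                        # per-class activation maps
        logits = F.adaptive_avg_pool2d(cam, 1).flatten(1)  # image-level scores via GAP
        return logits, cam
```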

Semantic Segmentation of Heterogeneous Unmanned Aerial Vehicle Datasets Using Combined Segmentation Network

  • Song, Ahram
    • 대한원격탐사학회지 / Vol. 39, No. 1 / pp. 87-97 / 2023
  • Unmanned aerial vehicles (UAVs) can capture high-resolution imagery from a variety of viewing angles and altitudes, but they are generally limited to collecting images of small scenes within larger regions. To improve the utility of UAV-acquired datasets for deep learning applications, multiple datasets created from various regions under different conditions are needed. To demonstrate a powerful new method for integrating heterogeneous UAV datasets, this paper applies a combined segmentation network (CSN) in which the UAVid and Semantic Drone datasets share encoding blocks to learn their general features, whereas the decoding blocks are trained separately on each dataset. Experimental results show that the CSN improves the accuracy of specific classes (e.g., cars) that currently account for a low proportion of both datasets. From this result, it is expected that the range of UAV dataset utilization will increase.
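
A schematic sketch of the shared-encoder, dataset-specific-decoder structure the abstract describes: the encoding blocks are shared by both datasets, while each dataset has its own decoding blocks. The module contents and dataset keys are placeholders, not the paper's actual CSN blocks.

```python
# Shared encoder with one decoder head per dataset (placeholder modules).
import torch.nn as nn

class SharedEncoderTwoDecoders(nn.Module):
    def __init__(self, encoder, decoder_uavid, decoder_sdd):
        super().__init__()
        self.encoder = encoder              # encoding blocks shared by both datasets
        self.decoders = nn.ModuleDict({
            "uavid": decoder_uavid,         # updated only on UAVid batches
            "sdd": decoder_sdd,             # updated only on Semantic Drone batches
        })

    def forward(self, x, dataset):
        # Route the shared features through the decoder of the given dataset.
        return self.decoders[dataset](self.encoder(x))
```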

딥러닝 기반 거리 영상의 Semantic Segmentation을 위한 Atrous Residual U-Net (Atrous Residual U-Net for Semantic Segmentation in Street Scenes based on Deep Learning)

  • 신석용;이상훈;한현호
    • 융합정보논문지 / Vol. 11, No. 10 / pp. 45-52 / 2021
  • In this paper, we propose an Atrous Residual U-Net (AR-UNet) to improve the accuracy of U-Net-based semantic segmentation. U-Net is mainly used in fields such as medical image analysis, autonomous driving, and remote sensing imagery. The conventional U-Net has few convolutional layers in its encoder, so the extracted features are insufficient. The extracted features are essential for classifying object categories, and when they are insufficient, segmentation accuracy deteriorates. To address this problem, we propose AR-UNet, which applies residual learning and ASPP in the encoder. Residual learning improves feature extraction capability and is effective in preventing the feature loss and vanishing gradient problems caused by successive convolutions. In addition, ASPP enables additional feature extraction without reducing the resolution of the feature maps. Experiments on the Cityscapes dataset verified the effectiveness of AR-UNet. The experimental results showed that AR-UNet yields improved segmentation results compared with the conventional U-Net. AR-UNet can thus contribute to the advancement of various applications where accuracy is important.
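
A condensed sketch of the two encoder ingredients the abstract highlights, residual learning and ASPP. Channel sizes and dilation rates are illustrative, not the exact AR-UNet configuration.

```python
# Illustrative residual block and ASPP module (generic forms, not AR-UNet itself).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))   # identity shortcut eases gradient flow

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates)
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # Parallel atrous convolutions enlarge the receptive field while keeping
        # the feature-map resolution; outputs are concatenated and projected.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
```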

A hierarchical semantic segmentation framework for computer vision-based bridge damage detection

  • Jingxiao Liu;Yujie Wei;Bingqing Chen;Hae Young Noh
    • Smart Structures and Systems / Vol. 31, No. 4 / pp. 325-334 / 2023
  • Computer vision-based damage detection enables non-contact, efficient, and low-cost bridge health monitoring, reducing the need for labor-intensive manual inspection or for large numbers of on-site sensing instruments. By leveraging recent semantic segmentation approaches, we can detect regions of critical structural components and identify damage at the pixel level in images. However, existing methods perform poorly when detecting small and thin damage (e.g., cracks), and the problem is exacerbated by imbalanced samples. To this end, we incorporate domain knowledge and introduce a hierarchical semantic segmentation framework that imposes a hierarchical semantic relationship between component categories and damage types. For instance, certain types of concrete cracks are only present on bridge columns, and therefore the non-column region may be masked out when detecting such damage. In this way, the damage detection model focuses on extracting features from relevant structural components and avoids extracting them from irrelevant regions. We also utilize multi-scale augmentation to preserve the contextual information of each image without losing the ability to handle small and/or thin damage. In addition, our framework employs importance sampling, in which images with rare components are sampled more often, to address sample imbalance. We evaluated our framework on a public synthetic dataset consisting of 2,000 railway bridges. Our framework achieves a mean intersection over union (IoU) of 0.836 for structural component segmentation and 0.483 for damage segmentation, improvements of 5% and 18% for the two tasks, respectively, over the best-performing baseline model.
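
A simple sketch of the hierarchical masking idea: damage predictions are kept only where the predicted structural component admits that damage type. The class-to-component mapping passed in is hypothetical; the paper's multi-scale augmentation and importance sampling are not shown.

```python
# Suppress damage logits outside the components that can carry that damage type.
import torch

def mask_damage_logits(damage_logits, component_pred, allowed_components):
    """damage_logits: (N, D, H, W); component_pred: (N, H, W) component ids;
    allowed_components: dict {damage_class: iterable of admissible component ids}."""
    masked = damage_logits.clone()
    for d, comps in allowed_components.items():
        allowed = torch.zeros_like(component_pred, dtype=torch.bool)
        for c in comps:
            allowed |= component_pred == c
        # Suppress damage class d outside its admissible components.
        masked[:, d][~allowed] = float("-inf")
    return masked
```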

DA-Res2Net: a novel Densely connected residual Attention network for image semantic segmentation

  • Zhao, Xiaopin;Liu, Weibin;Xing, Weiwei;Wei, Xiang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 11 / pp. 4426-4442 / 2020
  • Since scene segmentation is becoming a hot topic in the fields of autonomous driving and medical image analysis, researchers are actively trying new methods to improve segmentation accuracy. At present, the main issues in image semantic segmentation are intra-class inconsistency and inter-class indistinction. From our analysis, the lack of global information and of macroscopic discrimination of the object are the two main reasons. In this paper, we propose a Densely connected residual Attention network (DA-Res2Net), which consists of a dense residual network and a channel attention guidance module, to deal with these problems and improve the accuracy of image segmentation. Specifically, to equip the extracted features with stronger multi-scale characteristics, a densely connected residual network is proposed as the feature extractor. Furthermore, to improve the representativeness of each channel feature, we design a Channel-Attention-Guide module that makes the model focus on high-level semantic features and low-level location features simultaneously. Experimental results show that the method achieves strong performance on various datasets. Compared to other state-of-the-art methods, the proposed method reaches a mean IoU of 83.2% on PASCAL VOC 2012 and 79.7% on the Cityscapes dataset.
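
A generic channel-attention sketch (squeeze-and-excitation style) to illustrate the role a Channel-Attention-Guide module plays in reweighting channel features; the paper's exact module structure is not reproduced here.

```python
# SE-style channel attention: pool for global context, then reweight channels.
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: one value per channel
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.fc(self.pool(x))      # excite: rescale each channel
```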