• Title/Summary/Keyword: superpixels


Salient Object Detection via Adaptive Region Merging

  • Zhou, Jingbo; Zhai, Jiyou; Ren, Yongfeng
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.9, pp.4386-4404, 2016
  • Most existing salient object detection algorithms employ segmentation techniques to eliminate background noise and reduce computation by treating each segment as a processing unit. However, individual small segments provide little information about global content, so such schemes have limited capability in modeling global perceptual phenomena. In this paper, a novel salient object detection algorithm based on region merging is proposed. An adaptive merging scheme is developed to reassemble regions according to their color dissimilarities. The merging strategy can be stated as follows: a region R is merged with its adjacent region Q if R has the lowest dissimilarity with Q among all of Q's adjacent regions. To guide the merging process, superpixels located at the image boundary are treated as seeds. However, a boundary of the input image may be occupied by the foreground object; to avoid this case, the boundary influence is optimized by locating and eliminating erroneous boundaries before region merging. We show that encouraging performance can be obtained even though only three simple region saliency measurements are adopted for each region. Experiments on four benchmark datasets, MSRA-B, SOD, SED and iCoSeg, show that the proposed method produces uniform object enhancement and achieves state-of-the-art performance compared with nine existing methods.
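
The merging rule stated in the abstract can be illustrated with a minimal Python sketch. The function names, the plain Euclidean distance between mean region colors, and the single-pass loop are illustrative assumptions, not the paper's exact adaptive scheme:

```python
import numpy as np

def dissimilarity(color_a, color_b):
    # Stand-in measure: Euclidean distance between mean region colors.
    return float(np.linalg.norm(np.asarray(color_a, float) - np.asarray(color_b, float)))

def merge_candidates(mean_colors, adjacency):
    """One pass of the merging rule: region R is merged with its neighbor Q
    when R has the lowest dissimilarity with Q among all of Q's adjacent regions.

    mean_colors: dict region_id -> mean color
    adjacency:   dict region_id -> set of neighboring region ids
    Returns a list of (R, Q) pairs to merge."""
    pairs = []
    for q, neighbors in adjacency.items():
        if not neighbors:
            continue
        r = min(neighbors, key=lambda n: dissimilarity(mean_colors[q], mean_colors[n]))
        pairs.append((r, q))
    return pairs

# Tiny example: regions 0 and 1 are nearly the same color, region 2 is not.
colors = {0: (200, 30, 30), 1: (198, 32, 28), 2: (20, 20, 220)}
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(merge_candidates(colors, adj))   # e.g. [(1, 0), (0, 1), (0, 2)]
```

In the paper the seeds are the image-boundary superpixels (after erroneous boundaries are removed), so in practice such a pass would be driven from those seed regions rather than from every region as in this sketch.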

Superpixel-based Vehicle Detection using Plane Normal Vector in Disparity Space

  • Seo, Jeonghyun; Sohn, Kwanghoon
    • Journal of Korea Multimedia Society, v.19 no.6, pp.1003-1013, 2016
  • This paper proposes a superpixel-based vehicle detection framework that uses plane normal vectors in disparity space. We utilize two common stages for detecting vehicles: hypothesis generation (HG) and hypothesis verification (HV). At the HG stage, we set regions of interest (ROI) by estimating the lane and track them to reduce the computational cost of the overall process. The image is then divided into compact superpixels, each of which is viewed as a plane characterized by its normal vector in disparity space. The representative normal vector is computed at the superpixel level, which alleviates the well-known problems of conventional color-based and depth-based approaches. Based on the assumption that the central-bottom region of the input image always lies on the navigable road, road and obstacle candidates are simultaneously extracted by clustering the plane normal vectors with the K-means algorithm. At the HV stage, the separated obstacle candidates are verified by employing HOG and an SVM as the feature descriptor and classifier, respectively. To achieve this, we trained the SVM classifier with HOG features from the KITTI training dataset. The experimental results demonstrate that the proposed vehicle detection system outperforms conventional HOG-based methods both qualitatively and quantitatively.
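
A rough sketch of how a per-superpixel plane normal might be computed in disparity space is shown below; the least-squares plane fit and the helper names are assumptions rather than the authors' exact formulation:

```python
import numpy as np

def superpixel_plane_normal(disparity, mask):
    """Fit a plane d = a*u + b*v + c to the disparity values of one superpixel
    and return the unit normal of that plane in (u, v, d) space."""
    v, u = np.nonzero(mask)                       # pixel coordinates inside the superpixel
    d = disparity[v, u]
    A = np.column_stack([u, v, np.ones_like(u)])  # design matrix for the plane fit
    params, _, _, _ = np.linalg.lstsq(A, d, rcond=None)
    a, b = params[0], params[1]
    n = np.array([a, b, -1.0])                    # normal of the plane a*u + b*v - d + c = 0
    return n / np.linalg.norm(n)

# Synthetic check: a fronto-parallel obstacle has near-constant disparity,
# so its normal points almost straight along the disparity axis.
disp = np.full((20, 20), 32.0)
mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 5:15] = True
print(superpixel_plane_normal(disp, mask))        # approximately [0, 0, -1]
```

The per-superpixel normals could then be clustered (for example with K-means, as the abstract describes) to separate road-like from obstacle-like planes before the HOG/SVM verification stage.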

CRF-Based Figure/Ground Segmentation with Pixel-Level Sparse Coding and Neighborhood Interactions

  • Zhang, Lihe; Piao, Yongri
    • Journal of Information and Communication Convergence Engineering, v.13 no.3, pp.205-214, 2015
  • In this paper, we propose a new approach to learning a discriminative model for figure/ground segmentation by incorporating bag-of-features and conditional random field (CRF) techniques. We advocate the use of image patches instead of superpixels as the basic processing unit: the latter have a homogeneous appearance and adhere to object boundaries, while an image patch often contains more discriminative information (e.g., local image structure) with which to distinguish categories. We use pixel-level sparse coding to represent an image patch. With the proposed feature representation, the unary classifier alone achieves considerable binary segmentation performance. Further, we integrate unary and pairwise potentials into the CRF model to refine the segmentation results. The pairwise potentials include color and texture potentials with neighborhood interactions, and an edge potential. High segmentation accuracy is demonstrated on three benchmark datasets: the Weizmann horse dataset, the VOC2006 cow dataset, and the MSRC multiclass dataset. Extensive experiments show that the proposed approach performs favorably against state-of-the-art approaches.
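
As a generic illustration of how unary and pairwise potentials combine in such a CRF, the sketch below scores a binary figure/ground labeling with unary costs plus a contrast-sensitive Potts pairwise term. The specific color, texture, and edge potentials of the paper are not reproduced, and all names are illustrative:

```python
import numpy as np

def crf_energy(labels, unary, edges, features, pairwise_weight=1.0):
    """Energy of a binary figure/ground labeling under a simple pairwise CRF.

    labels:   (N,) array of {0, 1} figure/ground assignments
    unary:    (N, 2) array of per-site label costs (e.g., from the unary classifier)
    edges:    iterable of (i, j) index pairs for neighboring sites
    features: (N, D) per-site feature vectors used by the pairwise term"""
    energy = unary[np.arange(len(labels)), labels].sum()
    for i, j in edges:
        if labels[i] != labels[j]:
            # Contrast-sensitive Potts term: the penalty for a label change is
            # smaller across a strong feature (e.g., color) discontinuity.
            diff = np.sum((features[i] - features[j]) ** 2)
            energy += pairwise_weight * np.exp(-diff)
    return float(energy)

# Minimal example: three sites in a chain, with the label change placed at the
# feature discontinuity, which keeps the total energy low.
unary = np.array([[0.1, 2.0], [0.2, 1.5], [2.0, 0.1]])
features = np.array([[0.0], [0.1], [1.0]])
edges = [(0, 1), (1, 2)]
print(crf_energy(np.array([0, 0, 1]), unary, edges, features))
```

Inference in the paper refines the unary prediction by minimizing such an energy over all labelings; the sketch only evaluates the energy of one candidate labeling.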

2D-to-3D Conversion System using Depth Map Enhancement

  • Chen, Ju-Chin; Huang, Meng-yuan
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.3, pp.1159-1181, 2016
  • This study introduces an image-based 2D-to-3D conversion system that provides significant stereoscopic visual effects for viewers. Linear and atmospheric perspective cues, which complement each other, are employed to estimate depth information. Rather than retrieving a precise depth value for each pixel from the depth cues, the direction angle of the image is estimated, and a depth gradient consistent with that angle is integrated with superpixels to obtain the depth map. However, the stereoscopic effects of views synthesized from this depth map alone are limited and do not satisfy viewers. To obtain more impressive visual effects, the viewer's main focus is considered: salient object detection is performed to find the region that attracts visual attention. The depth map is then refined by locally modifying the depth values within this salient region. The refinement process not only maintains global depth consistency by correcting non-uniform depth values but also enhances the visual stereoscopic effect. Experimental results show that, in subjective evaluation, the degree of satisfaction with the proposed method is approximately 7% greater than that of both existing commercial conversion software and a state-of-the-art approach.
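
A simplified sketch of a superpixel-based depth-gradient assignment is given below. SLIC superpixels stand in for the paper's segmentation, and projecting each superpixel centroid onto a global gradient direction is an assumed simplification of the direction-angle-based depth assignment:

```python
import numpy as np
from skimage.segmentation import slic

def gradient_depth_map(image, direction_deg, n_segments=300):
    """Assign each superpixel a depth value by projecting its centroid onto a
    global depth-gradient direction, then normalize the result to [0, 1]."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    theta = np.deg2rad(direction_deg)
    direction = np.array([np.cos(theta), np.sin(theta)])     # (x, y) gradient direction
    depth = np.zeros(labels.shape, dtype=float)
    for lab in np.unique(labels):
        ys, xs = np.nonzero(labels == lab)
        centroid = np.array([xs.mean(), ys.mean()])
        depth[labels == lab] = centroid @ direction           # one depth per superpixel
    depth -= depth.min()
    return depth / (depth.max() + 1e-8)

# Example: a depth gradient along the vertical image axis.
rng = np.random.default_rng(0)
demo = rng.random((120, 160, 3))
depth_map = gradient_depth_map(demo, direction_deg=90)
print(depth_map.shape, depth_map.min(), depth_map.max())
```

In the paper, this coarse map is further refined by locally adjusting depth values inside the detected salient region; that refinement step is not sketched here.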

Wildfire-induced Change Detection Using Post-fire VHR Satellite Images and GIS Data (산불 발생 후 VHR 위성영상과 GIS 데이터를 이용한 산불 피해 지역 변화 탐지)

  • Chung, Minkyung; Kim, Yongil
    • Korean Journal of Remote Sensing, v.37 no.5_3, pp.1389-1403, 2021
  • Disaster management using VHR (very high resolution) satellite images supports rapid damage assessment and offers detailed information about the damage. However, the acquisition of pre-event VHR satellite images is usually limited because of the long revisit time of VHR satellites. The absence of pre-event data can reduce the accuracy of damage assessment, since it is difficult to distinguish changed regions from unchanged regions with post-event data alone. To address this limitation, in this study we conducted wildfire-induced change detection on national wildfire cases using post-fire VHR satellite images and GIS (Geographic Information System) data. For the GIS data, a national land cover map was selected, and the spatial information of the pre-fire land cover was used to simulate pre-fire NIR (near-infrared) images. The simulated pre-fire NIR images were then used to analyze bi-temporal NDVI (Normalized Difference Vegetation Index) correlation for unsupervised change detection. The whole change detection process was performed on a superpixel basis, taking advantage of the fact that superpixels reduce the complexity of image processing while preserving the details of the VHR images. The proposed method was validated on the 2019 Gangwon wildfire cases and showed an overall accuracy above 98% and an F1-score above 0.97 for both study sites.
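
A minimal sketch of the superpixel-level bi-temporal NDVI comparison follows. The per-superpixel Pearson correlation and the decision threshold are assumptions made for illustration, and the simulation of pre-fire NIR from the land cover map is not reproduced:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-8)

def superpixel_ndvi_correlation(ndvi_pre, ndvi_post, labels):
    """Pearson correlation between pre- and post-event NDVI inside each superpixel.
    A low correlation suggests a changed (e.g., burned) superpixel."""
    scores = {}
    for lab in np.unique(labels):
        mask = labels == lab
        a, b = ndvi_pre[mask], ndvi_post[mask]
        if a.std() < 1e-6 or b.std() < 1e-6:
            scores[lab] = 0.0                     # degenerate superpixel: treat as uncorrelated
        else:
            scores[lab] = float(np.corrcoef(a, b)[0, 1])
    return scores

def flag_changed(scores, threshold=0.5):
    # Unsupervised decision: flag superpixels whose correlation falls below a threshold.
    return {lab: corr < threshold for lab, corr in scores.items()}

# Tiny synthetic example: two superpixels, one of which loses its vegetation signal.
labels = np.repeat([[0, 1]], 4, axis=0).repeat(2, axis=1)   # left half id 0, right half id 1
rng = np.random.default_rng(1)
pre = ndvi(rng.random((4, 4)), rng.random((4, 4)))
post = pre.copy()
post[labels == 1] *= -1                                      # simulated change in superpixel 1
print(flag_changed(superpixel_ndvi_correlation(pre, post, labels)))
```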