• Title/Summary/Keyword: Saliency Object Detection


Multi-scale Diffusion-based Salient Object Detection with Background and Objectness Seeds

  • Yang, Sai; Liu, Fan; Chen, Juan; Xiao, Dibo; Zhu, Hairong
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.10, pp.4976-4994, 2018
  • Diffusion-based salient object detection methods have shown excellent detection results and efficient computation in recent years. However, current diffusion-based methods still struggle to detect objects that touch the image boundaries or appear at different scales. To address these issues, this paper proposes a multi-scale diffusion-based salient object detection algorithm with background and objectness seeds. Specifically, the image is first over-segmented at several scales. Second, the background and objectness saliency of each superpixel is calculated and fused at each scale. Third, the manifold ranking method is used to propagate the Bayesian fusion of background and objectness saliency to the whole image. Finally, the pixel-level saliency map is constructed by a weighted summation of the saliency values at different scales. We evaluate our salient object detection algorithm against 24 state-of-the-art methods on four public benchmark datasets, i.e., ASD, SED1, SED2, and SOD. The results show that the proposed method performs favorably against the 24 state-of-the-art approaches in terms of the popular PR curve and F-measure, and the visual comparisons also show that our method highlights salient objects more effectively.
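The diffusion step these methods rely on can be illustrated with a small manifold-ranking sketch. The snippet below assumes superpixel descriptors, an adjacency matrix, and fused background/objectness seed values are already computed for one scale; the function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def manifold_ranking(features, adjacency, seeds, sigma=0.1, alpha=0.99):
    """Diffuse seed saliency over a superpixel graph via manifold ranking.

    features  : (N, d) array of superpixel descriptors (e.g., mean Lab color)
    adjacency : (N, N) binary array, 1 where superpixels are neighbours
    seeds     : (N,) seed saliency values (fused background/objectness cues)
    Returns an (N,) vector of diffused saliency scores in [0, 1].
    """
    # Affinity between adjacent superpixels from feature similarity
    dist2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-dist2 / (2 * sigma ** 2)) * adjacency
    D = np.diag(W.sum(axis=1))
    # Closed-form ranking: f = (D - alpha * W)^(-1) y
    f = np.linalg.solve(D - alpha * W, seeds)
    return (f - f.min()) / (f.max() - f.min() + 1e-12)

# Toy usage: 4 superpixels in a chain, the first two seeded as likely object
feats = np.array([[0.9, 0.1, 0.1], [0.85, 0.15, 0.1], [0.1, 0.1, 0.9], [0.1, 0.2, 0.8]])
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
print(manifold_ranking(feats, adj, np.array([1.0, 1.0, 0.0, 0.0])))
```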

Saliency Detection based on Global Color Distribution and Active Contour Analysis

  • Hu, Zhengping; Zhang, Zhenbin; Sun, Zhe; Zhao, Shuhuan
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.12, pp.5507-5528, 2016
  • In computer vision, salient objects are important for extracting useful information about the foreground. With active contour analysis as its core, this paper proposes a bottom-up saliency detection algorithm that combines a Bayesian model with the global color distribution. Supported by the active contour model, a more accurate foreground can be obtained as the foundation for both the Bayesian model and the global color distribution. Furthermore, we establish a contour-based selection mechanism to optimize the global color distribution, which also serves as an effective refinement of the Bayesian model. To obtain a good object contour, we first intensify the object region in the source gray-scale image with a seed-based method. Once both components are available, the final saliency map is obtained by weighting the color distribution onto the Bayesian saliency map. The contribution of this paper is that, compared with the Harris-based convex hull algorithm, the active contour extracts a more accurate and non-convex foreground. Moreover, through mutual complementation, the global color distribution resolves the scattered-saliency drawback of the Bayesian model. The detected results show that the final saliency maps generated by considering the global color distribution and the active contour are much improved.
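The Bayesian color-model step described above can be sketched as follows, assuming a rough foreground mask (for example, the interior of the active contour) is already available; the coarse histogram quantization and all names are illustrative rather than the paper's implementation.

```python
import numpy as np

def bayesian_color_saliency(image, fg_mask, bins=8):
    """Per-pixel posterior P(foreground | color) from a rough foreground mask.

    image   : (H, W, 3) float array in [0, 1]
    fg_mask : (H, W) boolean array, e.g., the region inside an active contour
    """
    # Quantize colors into a coarse 3D grid so the histograms stay well populated
    q = np.minimum((image * bins).astype(int), bins - 1)
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]

    n_cells = bins ** 3
    fg_hist = np.bincount(idx[fg_mask], minlength=n_cells).astype(float)
    bg_hist = np.bincount(idx[~fg_mask], minlength=n_cells).astype(float)
    fg_hist /= fg_hist.sum() + 1e-12
    bg_hist /= bg_hist.sum() + 1e-12

    prior = fg_mask.mean()                 # P(foreground)
    like_fg = fg_hist[idx]                 # P(color | foreground)
    like_bg = bg_hist[idx]                 # P(color | background)
    return (like_fg * prior) / (like_fg * prior + like_bg * (1 - prior) + 1e-12)

# Toy usage: reddish square on a dark background, mask roughly covering it
img = np.zeros((64, 64, 3)); img[20:44, 20:44] = [0.9, 0.2, 0.2]
mask = np.zeros((64, 64), bool); mask[18:46, 18:46] = True
sal = bayesian_color_saliency(img, mask)
print(sal[32, 32], sal[5, 5])  # high inside the square, low in the background
```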

A Saliency Map based on Color Boosting and Maximum Symmetric Surround

  • Huynh, Trung Manh; Lee, Gueesang
    • Smart Media Journal, v.2 no.2, pp.8-13, 2013
  • Saliency region detection has become a popular research topic because of its use in many applications such as object recognition and object segmentation. Some recent methods apply color distinctiveness, based on an analysis of the statistics of color image derivatives, to boost color saliency and produce good saliency maps. However, if the salient regions comprise more than half the pixels of the image, or if the background is complex, these methods may give poor results. In this paper, we introduce a method that handles these problems by using the maximum symmetric surround. The results show that our method outperforms the previous algorithms. We also show segmentation results obtained using Otsu's method.

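For reference, a minimal (unoptimized, per-pixel loop) sketch of a maximum-symmetric-surround saliency map with an Otsu segmentation step is given below; it follows the general maximum symmetric surround formulation rather than this paper's exact pipeline, and the file names are placeholders.

```python
import cv2
import numpy as np

def mss_saliency(bgr):
    """Maximum-symmetric-surround style saliency with an Otsu object mask."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float64)
    blurred = cv2.GaussianBlur(lab, (5, 5), 0)
    integral = cv2.integral(lab)            # (H+1, W+1, 3) summed-area table
    h, w = lab.shape[:2]
    sal = np.zeros((h, w), np.float64)
    for y in range(h):
        yo = min(y, h - 1 - y)              # largest symmetric extent vertically
        y1, y2 = y - yo, y + yo + 1
        for x in range(w):
            xo = min(x, w - 1 - x)          # largest symmetric extent horizontally
            x1, x2 = x - xo, x + xo + 1
            area = (y2 - y1) * (x2 - x1)
            mean = (integral[y2, x2] - integral[y1, x2]
                    - integral[y2, x1] + integral[y1, x1]) / area
            sal[y, x] = np.sum((mean - blurred[y, x]) ** 2)
    sal = cv2.normalize(sal, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Binary object mask with Otsu's threshold, as in the segmentation step
    _, mask = cv2.threshold(sal, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return sal, mask

img = cv2.imread('input.jpg')               # placeholder path
if img is not None:
    sal, mask = mss_saliency(img)
    cv2.imwrite('saliency.png', sal); cv2.imwrite('mask.png', mask)
```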

Analysis of the effect of class classification learning on the saliency map of Self-Supervised Transformer (클래스분류 학습이 Self-Supervised Transformer의 saliency map에 미치는 영향 분석)

  • Kim, JaeWook; Kim, Hyeoncheol
    • Proceedings of the Korean Society of Computer Information Conference, 2022.07a, pp.67-70, 2022
  • As the Transformer model, first adopted widely in NLP, has begun to be applied in vision, it has surpassed the stagnating performance of existing CNN-based models in areas such as object detection and segmentation. In addition, a ViT (Vision Transformer) trained by self-supervised learning on images alone, without label data, can produce a saliency map that localizes the regions of important objects in an image, and research on object detection and semantic segmentation via self-supervised ViTs is therefore being actively pursued. In this paper, we attach a classifier to a ViT model and compare, through visualization, the saliency maps of a model trained in the ordinary supervised way and a model transfer-learned from self-supervised pretrained weights. Through this comparison, we confirm the effect that class-classification-based transfer learning has on the Transformer's saliency map.

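A saliency-like map of the kind discussed here can be visualized from the [CLS] attention of a self-supervised ViT. The sketch below assumes a DINO-style model loaded from torch.hub that exposes get_last_selfattention(); the hub entry and helper are assumptions for illustration and may differ from the setup used in the paper.

```python
import torch

# Assumes a DINO-style self-supervised ViT from torch.hub; the hub entry name
# is an assumption for illustration, not necessarily the paper's model.
model = torch.hub.load('facebookresearch/dino:main', 'dino_vits16')
model.eval()

def cls_attention_map(img, patch_size=16):
    """Return the last-layer [CLS] attention, averaged over heads, as a coarse
    saliency grid. img: (1, 3, H, W) tensor with H and W divisible by patch_size."""
    with torch.no_grad():
        attn = model.get_last_selfattention(img)      # (1, heads, N+1, N+1)
    h, w = img.shape[2] // patch_size, img.shape[3] // patch_size
    cls_to_patches = attn[0, :, 0, 1:]                # attention from [CLS] to patches
    return cls_to_patches.mean(dim=0).reshape(h, w)

# Usage sketch with a random image-sized tensor
sal = cls_attention_map(torch.randn(1, 3, 224, 224))
print(sal.shape)  # torch.Size([14, 14])
```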

Saliency Attention Method for Salient Object Detection Based on Deep Learning (딥러닝 기반의 돌출 객체 검출을 위한 Saliency Attention 방법)

  • Kim, Hoi-Jun; Lee, Sang-Hun; Han, Hyun Ho; Kim, Jin-Soo
    • Journal of the Korea Convergence Society, v.11 no.12, pp.39-47, 2020
  • In this paper, we propose a deep learning-based detection method that uses Saliency Attention to detect salient objects in images. Salient object detection separates the object on which the human eye focuses from the background and determines the most relevant part of the image; it is widely used in fields such as object tracking, detection, and recognition. Existing deep learning-based methods mostly use autoencoder structures, and considerable feature loss occurs in the encoder, which compresses and extracts features, and in the decoder, which decompresses and expands them. These losses cause the salient object area to be lost or the background to be detected as an object. The proposed Saliency Attention reduces this feature loss and suppresses the background region within the autoencoder structure. The influence of the feature values is determined using the ELU activation function, and attention is applied separately to the normalized negative and positive feature regions. Through this attention method, the background area is suppressed and the salient object area is emphasized. Experimental results show improved detection compared with existing deep learning methods.
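One plausible reading of the described attention on ELU responses is sketched below in PyTorch: the feature map is split into its negative and positive parts, each part is normalized, and a separately learned attention is applied before recombination. This is an illustrative interpretation, not the authors' exact layer.

```python
import torch
import torch.nn as nn

class SaliencyAttention(nn.Module):
    """Illustrative attention over the negative/positive ELU response regions."""
    def __init__(self, channels):
        super().__init__()
        self.elu = nn.ELU()
        self.attn_pos = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.attn_neg = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x):
        f = self.elu(x)
        pos = torch.clamp(f, min=0.0)            # positive responses
        neg = torch.clamp(f, max=0.0)            # negative responses, in (-1, 0]
        pos_n = pos / (pos.amax(dim=(2, 3), keepdim=True) + 1e-6)  # normalize to [0, 1]
        neg_n = -neg                             # ELU negatives already lie in (-1, 0]
        # Separate attention for each region, then recombine
        return pos * self.attn_pos(pos_n) + neg * self.attn_neg(neg_n)

feat = torch.randn(1, 64, 32, 32)
print(SaliencyAttention(64)(feat).shape)  # torch.Size([1, 64, 32, 32])
```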

Object Detection and 3D Position Estimation based on Stereo Vision (스테레오 영상 기반의 객체 탐지 및 객체의 3차원 위치 추정)

  • Son, Haengseon; Lee, Seonyoung; Min, Kyoungwon; Seo, Seongjin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.10 no.4, pp.318-324, 2017
  • We introduce a stereo camera on an aircraft to detect flying objects and estimate their 3D positions. A PCT-based saliency map algorithm is proposed to detect small objects between clouds, and a stereo matching algorithm is then applied to find the disparity between the left and right cameras. To extract accurate disparity, the cost aggregation region is treated as a variable region that adapts to the detected object; in this paper, the detection result is used as the cost aggregation region. To extract even more precise disparity, sub-pixel interpolation is used to obtain floating-point disparity at the sub-pixel level. We also propose a method to estimate the spatial position of an object using the camera parameters. The approach is expected to be applicable to image-based object detection and collision avoidance systems for autonomous aircraft in the future.
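The 3D position estimation from disparity and camera parameters follows standard stereo geometry, sketched below; the numbers in the example are illustrative, not the paper's calibration.

```python
import numpy as np

def disparity_to_3d(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project a pixel with known (sub-pixel) disparity to camera coordinates.

    u, v      : pixel coordinates of the detected object center
    disparity : horizontal disparity in pixels (float, sub-pixel allowed)
    fx, fy    : focal lengths in pixels; cx, cy : principal point
    baseline  : distance between the two cameras in meters
    """
    z = fx * baseline / disparity          # depth from the stereo geometry
    x = (u - cx) * z / fx                  # lateral offset
    y = (v - cy) * z / fy                  # vertical offset
    return np.array([x, y, z])

# Example: object at pixel (700, 380) with 4.25 px disparity and a 0.3 m baseline
print(disparity_to_3d(700, 380, 4.25, fx=1000, fy=1000, cx=640, cy=360, baseline=0.3))
```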

Video Saliency Detection Using Bi-directional LSTM

  • Chi, Yang; Li, Jinjiang
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.6, pp.2444-2463, 2020
  • Saliency detection in video can allocate computing resources more rationally and reduce the amount of computation while improving accuracy. Deep learning can extract the edge features of an image, providing technical support for video saliency. This paper proposes a new detection method that combines a Convolutional Neural Network (CNN) with a Deep Bidirectional LSTM network (DB-LSTM) to learn spatio-temporal features, exploiting object motion information to generate saliency maps for consecutive video frames. We also analyzed the sample database and found that human attention and saliency transitions are time-dependent, so cross-frame saliency detection is considered as well. Finally, experiments show that our method is superior to other advanced methods.
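A structural sketch of the CNN plus bidirectional LSTM arrangement is given below in PyTorch; it is intentionally tiny and only illustrates how per-frame CNN features can be passed through a bidirectional LSTM to produce per-frame saliency maps, not the network the paper trains.

```python
import torch
import torch.nn as nn

class VideoSaliencyBiLSTM(nn.Module):
    """Toy CNN + bidirectional LSTM over per-frame features (structural sketch)."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                        # per-frame encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, feat_dim))
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 64 * 64)       # coarse per-frame saliency map

    def forward(self, clip):                             # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
        temporal, _ = self.lstm(feats)                   # temporal context, both directions
        return self.head(temporal).view(b, t, 64, 64)

clip = torch.randn(2, 5, 3, 128, 128)
print(VideoSaliencyBiLSTM()(clip).shape)  # torch.Size([2, 5, 64, 64])
```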

Window Production Method based on Low-Frequency Detection for Automatic Object Extraction of GrabCut (GrabCut의 자동 객체 추출을 위한 저주파 영역 탐지 기반의 윈도우 생성 기법)

  • Yoo, Tae-Hoon; Lee, Gang-Seong; Lee, Sang-Hun
    • Journal of Digital Convergence, v.10 no.8, pp.211-217, 2012
  • The conventional GrabCut algorithm is semi-automatic: the user must set a rectangular window surrounding the object. This paper studies automatic object detection to solve this problem by detecting the salient region based on the Human Visual System. A saliency map is computed in the Lab color space, which is based on the 'red-green' and 'blue-yellow' color-opponent theory. Saliency points are then computed from the boundaries of the low-frequency regions extracted from the saliency map. Finally, rectangular windows are obtained from the coordinates of the saliency points, and these windows are used in the GrabCut algorithm to extract objects. Various experiments show that the proposed algorithm computes rectangular windows for salient regions and extracts objects successfully.
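The overall idea of replacing GrabCut's manual rectangle with a saliency-derived window can be sketched with OpenCV as below. The sketch uses OpenCV's spectral-residual saliency (from opencv-contrib) as a stand-in for the paper's Lab-based map, and 'input.jpg' is a placeholder path.

```python
import cv2
import numpy as np

def grabcut_from_saliency(bgr):
    """Derive a rectangle from a saliency map and feed it to GrabCut,
    so no manual window is needed."""
    sal_alg = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal = sal_alg.computeSaliency(bgr)
    sal = (sal * 255).astype(np.uint8)
    _, binary = cv2.threshold(sal, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    x, y, w, h = cv2.boundingRect(binary)                 # window around the salient region

    mask = np.zeros(bgr.shape[:2], np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(bgr, mask, (x, y, w, h), bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
    return bgr * (fg[:, :, None] // 255)                  # keep only the extracted object

img = cv2.imread('input.jpg')                             # placeholder path
if img is not None:
    cv2.imwrite('object.png', grabcut_from_saliency(img))
```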

Small Object Segmentation Based on Visual Saliency in Natural Images

  • Manh, Huynh Trung; Lee, Gueesang
    • Journal of Information Processing Systems, v.9 no.4, pp.592-601, 2013
  • Object segmentation is a challenging task in image processing and computer vision. In this paper, we present a visual-attention-based segmentation method to segment small interesting objects in natural images. Unlike traditional methods, we first search for the region of interest using our novel saliency-based method, which relies mainly on band-pass filtering to obtain the appropriate frequency band. Second, we apply a Gaussian Mixture Model (GMM) to locate the object region. By incorporating visual attention analysis into object segmentation, the proposed approach narrows the search region, so accuracy is increased and computational complexity is reduced. The experimental results indicate that the approach is efficient for object segmentation in natural images, especially for small objects, and significantly outperforms traditional GMM-based segmentation.
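A minimal sketch of a band-pass (difference-of-Gaussians) attention step followed by a GMM on the detected region is shown below; the thresholds, component choice, and toy image are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.mixture import GaussianMixture

def small_object_mask(gray, sigma_small=2, sigma_large=16):
    """Band-pass saliency to find a small region of interest, then a
    2-component GMM on the intensities inside that region."""
    band = gaussian_filter(gray, sigma_small) - gaussian_filter(gray, sigma_large)
    sal = np.abs(band)
    roi = sal > sal.mean() + 2 * sal.std()        # keep only strongly salient pixels

    gmm = GaussianMixture(n_components=2, random_state=0)
    labels = gmm.fit_predict(gray[roi].reshape(-1, 1))
    obj_comp = int(np.argmax(gmm.means_.ravel()))  # call the brighter component the object

    mask = np.zeros(gray.shape, bool)
    mask[roi] = labels == obj_comp
    return mask

# Toy image: a small bright blob on a smooth dark background
img = np.zeros((100, 100))
img[40:48, 60:68] = 1.0
print(small_object_mask(img).sum(), "object pixels found")
```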

Image saliency detection based on geodesic-like and boundary contrast maps

  • Guo, Yingchun; Liu, Yi; Ma, Runxin
    • ETRI Journal, v.41 no.6, pp.797-810, 2019
  • Image saliency detection is the basis of perceptual image processing and is significant for subsequent image processing methods. Most saliency detection methods can detect only a single object against a high-contrast background and fail to extract salient objects from images with complex, low-contrast backgrounds. Using prior knowledge, this paper proposes a method for detecting salient objects by combining a boundary contrast map with geodesic-like maps. The method highlights the foreground uniformly and extracts salient objects efficiently in images with low-contrast backgrounds. The classical receiver operating characteristic (ROC) curve, which compares the saliency map with the ground-truth map, does not reflect human perception, so an ROC curve with distance (distance receiver operating characteristic, DROC) is proposed to bring the ROC curve closer to human subjective perception. Experiments on three benchmark datasets and three low-contrast image datasets, with four evaluation methods including DROC, show that the proposed approach performs well compared with eight state-of-the-art approaches.
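A geodesic-style boundary prior of the kind mentioned above can be sketched as the shortest color-weighted path from each superpixel to the image boundary; the snippet below (using scikit-image SLIC and SciPy's Dijkstra) is a rough illustration, not the paper's geodesic-like or boundary contrast maps.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra
from skimage.segmentation import slic

def geodesic_boundary_saliency(rgb, n_segments=200):
    """Shortest color-weighted path from each superpixel to the image boundary;
    superpixels far (in color terms) from the boundary are treated as salient."""
    labels = slic(rgb, n_segments=n_segments, start_label=0)
    n = labels.max() + 1
    means = np.array([rgb[labels == i].mean(axis=0) for i in range(n)])

    # Adjacency from horizontally/vertically neighbouring pixels with different labels
    pairs = set()
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        m = a != b
        pairs.update(zip(a[m].tolist(), b[m].tolist()))
    W = lil_matrix((n, n))
    for i, j in pairs:
        w = np.linalg.norm(means[i] - means[j]) + 1e-6
        W[i, j] = W[j, i] = w

    boundary = np.unique(np.concatenate(
        [labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
    dist = dijkstra(W.tocsr(), directed=False, indices=boundary).min(axis=0)
    sal = dist / (dist.max() + 1e-12)
    return sal[labels]                        # project superpixel saliency back to pixels

# Toy usage: bright square in the middle of a dark image
img = np.zeros((80, 80, 3), np.uint8); img[30:50, 30:50] = 220
print(geodesic_boundary_saliency(img, n_segments=50).shape)  # (80, 80)
```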