• Title/Summary/Keyword: saliency


An Improved Level Set Method to Image Segmentation Based on Saliency

  • Wang, Yan;Xu, Xianfa
    • Journal of Information Processing Systems / v.15 no.1 / pp.7-21 / 2019
  • To improve the edge segmentation quality of level set image segmentation and to reduce the sensitivity of the level set method to the initial contour, a saliency-based level set segmentation model using local Renyi entropy is proposed. First, the saliency map of the original image is extracted with a saliency detection algorithm, and the outline of the saliency map is used to initialize the level set. Second, the local energy and edge energy of the image are obtained using local Renyi entropy and the Canny operator, respectively, and a new adaptive weight coefficient and boundary indicator function are constructed. Finally, the local binary fitting (LBF) energy model is introduced as an external energy term. Comparative experiments on different image databases verify the robustness of the proposed model for segmenting images with intensity inhomogeneity and complicated edges.
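The local Renyi entropy term mentioned in the abstract can be sketched as below; the window size, bin count, and order `alpha` are illustrative assumptions, not the authors' exact formulation.

```python
import math

def renyi_entropy(probs, alpha=2.0):
    """Renyi entropy H_alpha = 1/(1-alpha) * log(sum p_i^alpha)."""
    s = sum(p ** alpha for p in probs if p > 0)
    return math.log(s) / (1.0 - alpha)

def local_renyi_entropy(image, y, x, win=3, bins=8, alpha=2.0):
    """Renyi entropy of the gray-level histogram in a win x win
    window centered at (y, x); image holds 8-bit gray values."""
    h, w = len(image), len(image[0])
    hist = [0] * bins
    count = 0
    r = win // 2
    for i in range(max(0, y - r), min(h, y + r + 1)):
        for j in range(max(0, x - r), min(w, x + r + 1)):
            hist[min(bins - 1, image[i][j] * bins // 256)] += 1
            count += 1
    return renyi_entropy([c / count for c in hist], alpha)
```

A homogeneous window yields zero entropy, while a textured window yields a positive value, which is what makes the term usable as a local energy.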

Analysis of the Effect of Class Classification Learning on the Saliency Map of a Self-Supervised Transformer

  • Kim, JaeWook;Kim, Hyeoncheol
    • Proceedings of the Korean Society of Computer Information Conference / 2022.07a / pp.67-70 / 2022
  • As the Transformer model, first widely adopted in NLP, has been applied to the vision domain, it has overcome the stagnant performance of existing CNN-based models and improved results in areas such as object detection and segmentation. Moreover, a ViT (Vision Transformer) model trained by self-supervised learning on images alone, without label data, can produce saliency maps that detect the regions of the important objects in an image, and this has spurred active research on object detection and semantic segmentation based on self-supervised ViT. In this paper, we attach a classifier to a ViT model and compare, through visualization, the saliency maps of a model trained from scratch with ordinary supervision and a model transfer-learned from self-supervised pretrained weights. Through this comparison, we confirm the effect that class-classification-based transfer learning has on the saliency map of a Transformer.


Window Production Method based on Low-Frequency Detection for Automatic Object Extraction of GrabCut

  • Yoo, Tae-Hoon;Lee, Gang-Seong;Lee, Sang-Hun
    • Journal of Digital Convergence / v.10 no.8 / pp.211-217 / 2012
  • The conventional GrabCut algorithm is semi-automatic: the user must set a rectangular window surrounding the object. This paper studies automatic object detection to solve this problem by detecting salient regions based on the human visual system. A saliency map is computed in the Lab color space, which follows the color-opponent theory of 'red-green' and 'blue-yellow' channels. Saliency points are then computed from the boundaries of the low-frequency regions extracted from the saliency map. Finally, rectangular windows are obtained from the coordinates of the saliency points, and these windows are fed to the GrabCut algorithm to extract objects. Various experiments demonstrate that the proposed algorithm computes rectangular windows over salient regions and extracts objects correctly.
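The pipeline of "saliency map in Lab space, then a rectangle from the salient coordinates" can be sketched minimally as follows; computing saliency as distance from the mean Lab color and the simple thresholded bounding box are assumptions for illustration, not the paper's exact low-frequency detection scheme.

```python
def color_saliency(lab_image):
    """Per-pixel saliency as distance from the mean Lab color.

    lab_image: rows of (L, a, b) tuples, where a and b encode the
    red-green and blue-yellow opponent axes."""
    pixels = [p for row in lab_image for p in row]
    n = len(pixels)
    mean = tuple(sum(p[c] for p in pixels) / n for c in range(3))
    return [[sum((p[c] - mean[c]) ** 2 for c in range(3)) ** 0.5
             for p in row] for row in lab_image]

def bounding_window(saliency, thresh):
    """Rectangle (y0, x0, y1, x1) around pixels above thresh; this
    window could then seed GrabCut."""
    coords = [(y, x) for y, row in enumerate(saliency)
              for x, s in enumerate(row) if s > thresh]
    if not coords:
        return None
    ys = [y for y, _ in coords]
    xs = [x for _, x in coords]
    return (min(ys), min(xs), max(ys), max(xs))
```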

Saliency Map Creation Method Robust to the Contour of Objects

  • Han, Sung-Ho;Hong, Yeong-Pyo;Lee, Sang-Hun
    • Journal of Digital Convergence / v.10 no.3 / pp.173-178 / 2012
  • In this paper, a new saliency map generation method is proposed that extracts objects effectively from the extracted salient region. Feature maps are first constructed from four features: edges, the hue of the HSV color model, focus, and entropy. Conspicuity maps are then generated from center-surround differences over the feature maps, and the final saliency map is formed by combining the conspicuity maps. Saliency maps generated with this procedure are compared against a conventional technique, confirming that the new method produces better results.
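The "center-surround differences, then combination" step common to this family of models can be sketched as below; the box-filter windows and plain averaging are simplifying assumptions standing in for the multi-scale Gaussian machinery usually used.

```python
def box_mean(image, y, x, r):
    """Mean value in a (2r+1)x(2r+1) box clipped to the image."""
    h, w = len(image), len(image[0])
    vals = [image[i][j]
            for i in range(max(0, y - r), min(h, y + r + 1))
            for j in range(max(0, x - r), min(w, x + r + 1))]
    return sum(vals) / len(vals)

def center_surround(feature_map, rc=1, rs=3):
    """Conspicuity as |center mean - surround mean| per pixel."""
    h, w = len(feature_map), len(feature_map[0])
    return [[abs(box_mean(feature_map, y, x, rc) -
                 box_mean(feature_map, y, x, rs))
             for x in range(w)] for y in range(h)]

def combine(maps):
    """Final saliency map: per-pixel average of conspicuity maps
    (one per feature, e.g. edge, hue, focus, entropy)."""
    return [[sum(m[y][x] for m in maps) / len(maps)
             for x in range(len(maps[0][0]))]
            for y in range(len(maps[0]))]
```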

Visual Saliency Detection Based on color Frequency Features under Bayesian framework

  • Ayoub, Naeem;Gao, Zhenguo;Chen, Danjie;Tobji, Rachida;Yao, Nianmin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.2 / pp.676-692 / 2018
  • Saliency detection has been an active research topic in neurobiology during the last few years; several cognitive and interactive systems have been designed to simulate the saliency model (an attentional mechanism that focuses on the most noteworthy part of the image). In this paper, a bottom-up saliency detection model is proposed that takes into account the color and luminance frequency features of the RGB and CIE L*a*b* color spaces of the image. We employ low-level image features and apply a band-pass filter to estimate and highlight the salient region, then compute the likelihood probability at each pixel within a Bayesian framework. Experiments on two publicly available datasets (MSRA and SED2) show that our saliency model outperforms ten state-of-the-art algorithms, achieving higher precision, better recall, and a higher F-measure.
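The per-pixel Bayesian step can be sketched as follows; treating the band-pass response as a prior and using scalar class-conditional likelihoods is an illustrative reading of the abstract, not the authors' exact model.

```python
def bayesian_saliency(prior, lik_sal, lik_bg):
    """Posterior P(salient | feature) per pixel via Bayes' rule.

    prior:   P(salient) map, e.g. a normalized band-pass response
    lik_sal: P(feature | salient) map
    lik_bg:  P(feature | background) map
    """
    post = []
    for pr, ls, lb in zip(prior, lik_sal, lik_bg):
        row = []
        for p, l1, l0 in zip(pr, ls, lb):
            num = p * l1
            den = num + (1.0 - p) * l0
            row.append(num / den if den > 0 else 0.0)
        post.append(row)
    return post
```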

Object Classification based on Weakly Supervised E2LSH and Saliency map Weighting

  • Zhao, Yongwei;Li, Bicheng;Liu, Xin;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.1 / pp.364-380 / 2016
  • The most popular approach to object classification is based on the bag-of-visual-words model, which has several fundamental problems that restrict its performance, such as low time efficiency, the synonymy and polysemy of visual words, and the lack of spatial information between visual words. In view of this, an object classification method based on weakly supervised E2LSH and saliency map weighting is proposed. First, E2LSH (Exact Euclidean Locality Sensitive Hashing) is employed to generate a group of weakly randomized visual dictionaries by clustering SIFT features of the training dataset, and the selection of hash functions is supervised, inspired by random forest ideas, to reduce the randomness of E2LSH. Second, the graph-based visual saliency (GBVS) algorithm is applied to detect the saliency map of each image, and the visual words are weighted according to the saliency prior. Finally, a saliency-map-weighted visual language model is used to accomplish object classification. Experimental results on the Pascal 2007 and Caltech-256 datasets indicate that the distinguishability of objects is effectively improved and that our method is superior to state-of-the-art object classification methods.
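The saliency-weighting idea reduces to letting each local feature vote with the saliency at its location rather than a flat count; a minimal sketch, with the function name and interface assumed for illustration:

```python
def weighted_bow_histogram(word_ids, saliency_weights, vocab_size):
    """Bag-of-visual-words histogram where each local descriptor
    votes with the saliency value at its keypoint location.

    word_ids:         visual-word index of each local descriptor
    saliency_weights: saliency value sampled at each keypoint
    """
    hist = [0.0] * vocab_size
    for w, s in zip(word_ids, saliency_weights):
        hist[w] += s
    total = sum(hist)
    return [h / total for h in hist] if total > 0 else hist
```

With a flat weight of 1.0 this degenerates to the ordinary BoW histogram, which makes the saliency prior easy to ablate.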

Implementation of a Saliency Map Model Using Independent Component Analysis

  • Sohn, Jun-Il;Lee, Min-Ho;Shin, Jang-Kyoo
    • Journal of Sensor Science and Technology / v.10 no.5 / pp.286-291 / 2001
  • We propose a new saliency map model for selecting an attended location in an arbitrary visual scene, which is one of the most important characteristics of the human vision system. In selecting an attended location, edge information can serve as the feature basis for constructing the saliency map. Edge filters are obtained from independent component analysis (ICA), which is well suited to finding independent edges in natural gray-scale scenes. To reflect the non-uniform receptor density of the retina, we use a multi-scale pyramid of the input image instead of the original input image. Computer simulation results show that the proposed saliency map model with the multi-scale property successfully generates plausible attended locations.
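The multi-scale pyramid input can be sketched as below; 2x2 block averaging is a common but assumed choice of downsampling, not necessarily the one used in the paper.

```python
def downsample(image):
    """Halve resolution by averaging non-overlapping 2x2 blocks."""
    h, w = len(image) // 2, len(image[0]) // 2
    return [[(image[2 * y][2 * x] + image[2 * y][2 * x + 1] +
              image[2 * y + 1][2 * x] + image[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(w)] for y in range(h)]

def pyramid(image, levels=3):
    """Multi-scale pyramid mimicking the retina's non-uniform
    density: the original image plus progressively coarser copies."""
    out = [image]
    for _ in range(levels - 1):
        image = downsample(image)
        out.append(image)
    return out
```

Each level would then be filtered with the ICA-derived edge filters before the responses are combined into the saliency map.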


Saliency Detection based on Global Color Distribution and Active Contour Analysis

  • Hu, Zhengping;Zhang, Zhenbin;Sun, Zhe;Zhao, Shuhuan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.12 / pp.5507-5528 / 2016
  • In computer vision, salient objects are important for extracting useful information about the foreground. With active contour analysis at its core, we propose a bottom-up saliency detection algorithm that combines a Bayesian model with the global color distribution. With the support of the active contour model, a more accurate foreground can be obtained as a foundation for both the Bayesian model and the global color distribution. Furthermore, we establish a contour-based selection mechanism to optimize the global color distribution, which is also an effective way of revising the Bayesian model. To obtain a good object contour, we first intensify the object region in the source gray-scale image with a seed-based method. Once both components are available, the final saliency map is obtained by weighting the color distribution onto the Bayesian saliency map. The contribution of this paper is that, compared with the Harris-based convex hull algorithm, the active contour extracts a more accurate, possibly non-convex foreground. Moreover, the global color distribution and the Bayesian model complement each other, resolving the scattered-saliency drawback of the Bayesian model. The detected results show that the final saliency maps generated by considering both the global color distribution and the active contour are much improved.

Vector Control of PM Motor without any Rotational Transducer PART 1 - Surface Mounted Permanent Magnet Motor

  • Jang, Ji-Hun;Ha, Jeong-Ik;Seol, Seung-Gi
    • The Transactions of the Korean Institute of Electrical Engineers B / v.50 no.9 / pp.453-458 / 2001
  • This paper presents a new vector control algorithm for the surface-mounted permanent magnet motor (SMPMM) without any rotational transducer. In principle, an SMPMM has no structural magnetic saliency, but a small saliency arises from saturation caused by the flux of the permanent magnet. Moreover, this saliency varies with load conditions, so the control performance of schemes that exploit it can easily degrade. To prevent this and to improve the performance of the proposed algorithm, the saliency of an SMPMM under various load conditions is analyzed. In the proposed algorithm, the saliency, or the impedance difference related to it, is utilized to estimate the position and speed of the rotor, and a high-frequency signal is injected into the motor to measure the impedance difference and to enhance the control performance of the system. Experimental results verify the performance of the proposed sensorless algorithm.
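The demodulation idea behind high-frequency injection can be sketched numerically: the saturation-induced saliency modulates the injected-frequency current at twice the rotor angle, so the angle can be recovered from the 2-theta component. The response model and gains below are toy assumptions, not the paper's machine model or observer.

```python
import math

def hf_current_response(theta_inj, theta_r, i_avg=1.0, i_sal=0.2):
    """Toy model of the high-frequency current magnitude: the small
    saliency modulates it at twice the rotor angle theta_r."""
    return i_avg + i_sal * math.cos(2.0 * (theta_inj - theta_r))

def estimate_rotor_angle(samples):
    """Recover the rotor angle from (injection angle, response)
    pairs by demodulating the 2-theta component; valid for angles
    in (-pi/2, pi/2] because of the 2-theta ambiguity."""
    c = sum(r * math.cos(2.0 * a) for a, r in samples)
    s = sum(r * math.sin(2.0 * a) for a, r in samples)
    return 0.5 * math.atan2(s, c)
```

The half-angle at the end reflects the inherent 180-degree ambiguity of saliency-based sensing, which real drives resolve with an extra polarity test.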


Salient Region Extraction based on Global Contrast Enhancement and Saliency Cut for Image Information Recognition of the Visually Impaired

  • Yoon, Hongchan;Kim, Baek-Hyun;Mukhriddin, Mukhiddinov;Cho, Jinsoo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.5 / pp.2287-2312 / 2018
  • Extracting key visual information from images containing natural scenes is a challenging task and an important step toward enabling the visually impaired to recognize information through tactile graphics. In this study, a novel method is proposed for extracting salient regions based on global contrast enhancement and saliency cuts in order to improve image recognition for the visually impaired. To accomplish this, an image enhancement technique is applied to natural scene images, and a saliency map is acquired that measures the color contrast of homogeneous regions against other areas of the image. The saliency maps also enable automatic salient region extraction, referred to as saliency cuts, and help obtain a high-quality binary mask. Finally, outer boundaries and inner edges are detected in the natural scene images to identify visually significant edges. Experimental results indicate that the proposed method extracts salient objects effectively and achieves remarkable performance compared to conventional methods. It is beneficial for extracting salient objects, generating simple but important edges from natural scene images, and providing information to the visually impaired.
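A saliency cut, i.e. turning the continuous saliency map into a binary mask, can be sketched with an adaptive threshold; the mean-plus-k-sigma rule below is an assumed stand-in for the paper's actual cut procedure.

```python
def saliency_cut(saliency, k=2.0):
    """Binary mask via an adaptive threshold at mean + k * std of
    the saliency map; pixels above the threshold become foreground."""
    vals = [s for row in saliency for s in row]
    n = len(vals)
    mean = sum(vals) / n
    std = (sum((v - mean) ** 2 for v in vals) / n) ** 0.5
    t = mean + k * std
    return [[1 if s > t else 0 for s in row] for row in saliency]
```

The resulting mask would then feed the boundary and inner-edge detection stages that produce the tactile-graphics line work.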