• Title/Abstract/Keywords: saliency

Search results: 225 items (processing time: 0.026 s)

An Improved Level Set Method to Image Segmentation Based on Saliency

  • Wang, Yan;Xu, Xianfa
    • Journal of Information Processing Systems / Vol. 15, No. 1 / pp.7-21 / 2019
  • To improve the edge segmentation quality of level set image segmentation and reduce the sensitivity of the level set method to the initial contour, a saliency-based level set image segmentation model using local Renyi entropy is proposed. Firstly, a saliency map of the original image is extracted with a saliency detection algorithm, and the outline of the saliency map is used to initialize the level set. Secondly, the local energy and edge energy of the image are obtained using local Renyi entropy and the Canny operator, respectively, and a new adaptive weight coefficient and boundary indicator function are constructed. Finally, the local binary fitting (LBF) energy model is introduced as an external energy term. Comparative experiments are conducted on different image databases and verify the robustness of the proposed model for segmenting images with intensity inhomogeneity and complicated edges.
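
The initialization step described above can be illustrated with a small sketch: a thresholded saliency map is turned into a signed distance function that starts the level set evolution. This is a minimal Python/NumPy illustration under assumed inputs (a saliency map normalized to [0, 1]), not the authors' Renyi-entropy/LBF model.

```python
import numpy as np
from scipy import ndimage

def init_level_set_from_saliency(saliency, thresh=0.5):
    """Build a signed-distance level set function from a saliency map.

    saliency : 2-D float array in [0, 1]; values above `thresh` are
    treated as the initial foreground region.
    """
    fg = saliency > thresh                       # initial object region
    # Signed distance: positive inside the salient region, negative outside.
    dist_in = ndimage.distance_transform_edt(fg)
    dist_out = ndimage.distance_transform_edt(~fg)
    return dist_in - dist_out

# Usage: phi = init_level_set_from_saliency(sal_map); phi would then be
# evolved with an LBF-style external energy (not shown here).
```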

Analysis of the Effect of Class Classification Learning on the Saliency Map of a Self-Supervised Transformer

  • 김재욱;김현철
    • Proceedings of the Korean Society of Computer Information Conference / 2022 66th Summer Conference, Vol. 30, No. 2 / pp.67-70 / 2022
  • As Transformer models, which first saw wide adoption in NLP, have been applied to vision tasks, they have surpassed the stagnating performance of existing CNN-based models in areas such as object detection and segmentation. In addition, ViT (Vision Transformer) models trained by self-supervised learning on images alone, without label data, can produce saliency maps that locate the regions of important objects in an image, which has spurred active research on object detection and semantic segmentation based on self-supervised ViT. In this paper, we attach a classifier to a ViT model and compare the saliency maps obtained by visualizing a model trained normally and a model transfer-learned from self-supervised pretrained weights. Through this comparison, we confirm the effect that class classification learning based transfer learning has on the saliency map of the Transformer.

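A minimal sketch of how a saliency map can be read off a self-supervised ViT's attention, assuming the last layer's attention tensor has already been extracted (e.g., via forward hooks on a DINO-style model); the tensor name and shapes below are assumptions, not the paper's code.

```python
import numpy as np

def attention_to_saliency(attn, patch_grid):
    """Turn ViT self-attention into a rough saliency map.

    attn : array of shape (num_heads, tokens, tokens) from the last
           attention layer, where token 0 is the [CLS] token.
    patch_grid : (rows, cols) of the patch layout, e.g. (14, 14) for a
                 224x224 image with 16x16 patches.
    """
    # Attention paid by [CLS] to every image patch, averaged over heads.
    cls_to_patches = attn[:, 0, 1:].mean(axis=0)
    sal = cls_to_patches.reshape(patch_grid)
    # Normalise to [0, 1] so maps from different models are comparable.
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
```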

Window Production Method based on Low-Frequency Detection for Automatic Object Extraction of GrabCut

  • 유태훈;이강성;이상훈
    • Journal of Digital Convergence / Vol. 10, No. 8 / pp.211-217 / 2012
  • The conventional GrabCut algorithm is not automatic: the user must place a rectangular window around the object region. To turn it into an automatic system, this paper studies a method, based on the human visual system, for detecting the most conspicuous region in an image. A Saliency Map representing the region of visual attention is generated in the Lab color space, which is based on the opponent-color theory of human color perception ('red/green' and 'yellow/blue'). The Saliency Map is then transformed into the frequency domain, local boundaries are exposed in the low-frequency region, and these boundaries are detected to produce Saliency Points. A window is generated automatically from the coordinates of the Saliency Points, and the object is then extracted with the GrabCut algorithm. Applying the proposed algorithm to various images, a window was generated automatically around the object region and the object was extracted.
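
The automatic-window idea can be sketched as follows: compute a saliency map, take the bounding rectangle of the most salient pixels, and pass it to GrabCut. The saliency used here is a simple Lab color-contrast stand-in rather than the paper's low-frequency boundary detection, so this is an approximation of the pipeline, not its implementation.

```python
import cv2
import numpy as np

def auto_grabcut(img_bgr, iters=5):
    """Run GrabCut with a window derived from a simple saliency map."""
    # Stand-in saliency: per-pixel distance from the mean Lab colour.
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    sal = np.linalg.norm(lab - lab.reshape(-1, 3).mean(axis=0), axis=2)
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

    # Bounding window around the most salient pixels (threshold is arbitrary).
    ys, xs = np.where(sal > 0.6)
    pts = np.column_stack((xs, ys)).astype(np.int32)
    x, y, w, h = cv2.boundingRect(pts)

    mask = np.zeros(img_bgr.shape[:2], np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(img_bgr, mask, (x, y, w, h), bgd, fgd, iters,
                cv2.GC_INIT_WITH_RECT)
    # Foreground = definite or probable foreground labels.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
```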

Saliency Map Creation Method Robust to the Contour of Objects

  • 한성호;홍영표;이상훈
    • Journal of Digital Convergence / Vol. 10, No. 3 / pp.173-178 / 2012
  • This paper proposes a saliency map generation method that can extract objects effectively by selecting and extracting the region of interest in an image. Focusing on object contours, the proposed method builds feature maps from four kinds of feature information in a single image: edges, the H (hue) component of the HSV color model, focus, and entropy. Conspicuity maps are then generated from these feature maps using center-surround differences, and the saliency map is obtained by combining the conspicuity maps. Comparing the saliency maps produced by the proposed method with those of existing methods showed the superiority of the proposed method.
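
A rough sketch of the center-surround / conspicuity-map combination step, assuming the four feature maps (edge, hue, focus, entropy) have already been computed; the blur scales and the normalization are illustrative choices, not the paper's.

```python
import cv2
import numpy as np

def center_surround(feature, center_sigma=2, surround_sigma=16):
    """Center-surround difference for one feature map: a fine ('center')
    Gaussian minus a coarse ('surround') Gaussian, rectified at zero."""
    f = feature.astype(np.float32)
    center = cv2.GaussianBlur(f, (0, 0), center_sigma)
    surround = cv2.GaussianBlur(f, (0, 0), surround_sigma)
    return np.maximum(center - surround, 0)

def saliency_from_features(feature_maps):
    """Combine per-feature conspicuity maps (e.g. edge, hue, focus,
    entropy maps) into one saliency map by normalised averaging."""
    maps = []
    for f in feature_maps:
        cs = center_surround(f)
        cs = (cs - cs.min()) / (cs.max() - cs.min() + 1e-8)
        maps.append(cs)
    return np.mean(maps, axis=0)
```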

Visual Saliency Detection Based on color Frequency Features under Bayesian framework

  • Ayoub, Naeem;Gao, Zhenguo;Chen, Danjie;Tobji, Rachida;Yao, Nianmin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 2 / pp.676-692 / 2018
  • Saliency detection, rooted in neurobiology, has been an intensely studied topic in recent years, and several cognitive and interactive systems have been designed to simulate a saliency model (an attentional mechanism that focuses on the most informative part of the image). In this paper, a bottom-up saliency detection model is proposed that takes into account the color and luminance frequency features of the RGB and CIE $L^*a^*b^*$ color spaces of the image. We employ low-level image features and apply a band-pass filter to estimate and highlight the salient region, and we compute the likelihood probability at each pixel within a Bayesian framework. Experiments on two publicly available datasets (MSRA and SED2) show that our saliency model performs better than ten state-of-the-art algorithms, achieving higher precision, recall, and F-measure.
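
A compact sketch of a band-pass (frequency-tuned style) color saliency map in CIE Lab, which approximates the color/luminance frequency features described above; the Bayesian likelihood stage is omitted, and the kernel size is an assumption.

```python
import cv2
import numpy as np

def bandpass_saliency(img_bgr):
    """Band-pass saliency in CIE Lab: per-pixel distance between a lightly
    blurred image and its global mean colour (a frequency-tuned style
    approximation of colour/luminance frequency features)."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    blurred = cv2.GaussianBlur(lab, (5, 5), 0)    # suppress high-freq noise
    mean = lab.reshape(-1, 3).mean(axis=0)        # DC (lowest-freq) component
    sal = np.linalg.norm(blurred - mean, axis=2)  # band-pass response
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
```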

Object Classification based on Weakly Supervised E2LSH and Saliency map Weighting

  • Zhao, Yongwei;Li, Bicheng;Liu, Xin;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 1 / pp.364-380 / 2016
  • The most popular approach to object classification is based on the bag-of-visual-words model, which has several fundamental problems that restrict its performance, such as low time efficiency, the synonymy and polysemy of visual words, and the lack of spatial information between visual words. In view of this, an object classification method based on weakly supervised E2LSH and saliency map weighting is proposed. Firstly, E2LSH (Exact Euclidean Locality Sensitive Hashing) is employed to generate a group of weakly randomized visual dictionaries by clustering SIFT features of the training dataset, and the selection of hash functions is supervised, inspired by random forest ideas, to reduce the randomness of E2LSH. Secondly, the graph-based visual saliency (GBVS) algorithm is applied to detect the saliency map of each image and weight the visual words according to the saliency prior. Finally, a saliency-map-weighted visual language model is used to accomplish object classification. Experimental results on the Pascal 2007 and Caltech-256 datasets indicate that the distinguishability of objects is effectively improved and that our method is superior to state-of-the-art object classification methods.
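
The saliency-weighting step can be illustrated independently of E2LSH: each local descriptor's vote in the bag-of-visual-words histogram is scaled by the saliency value (e.g., from GBVS) at its keypoint. The function below assumes precomputed word assignments, keypoint positions, and a saliency map; it is a sketch of the weighting idea, not the paper's full pipeline.

```python
import numpy as np

def weighted_bow_histogram(word_ids, keypoints_xy, saliency, vocab_size):
    """Saliency-weighted bag-of-visual-words histogram.

    word_ids     : visual-word index of each local descriptor
    keypoints_xy : (N, 2) array of (x, y) keypoint positions
    saliency     : 2-D saliency map (e.g. from GBVS), values in [0, 1]
    """
    hist = np.zeros(vocab_size, dtype=np.float64)
    for wid, (x, y) in zip(word_ids, keypoints_xy):
        # Each visual word votes with the saliency at its keypoint.
        hist[wid] += saliency[int(y), int(x)]
    norm = hist.sum()
    return hist / norm if norm > 0 else hist
```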

Implementation of saliency map model using independent component analysis

  • 손준일;이민호;신장규
    • 센서학회지 / Vol. 10, No. 5 / pp.286-291 / 2001
  • This paper proposes a saliency map model for selecting human-like visual fixation points in an arbitrary vision system. The proposed model uses the edge information of an image as the feature basis for deciding fixation points, employing independent component analysis, which is known to be one of the best methods for finding mutually independent edge components in natural still grayscale images. To account for the non-uniform distribution of visual receptors in the human visual system, a multi-resolution image pyramid is used as input instead of directly using the image captured by a visual sensor such as a camera. Computer simulations show that the proposed saliency map yields visual fixation points for arbitrary given images.

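A crude sketch of the ICA idea: learn edge-like bases from random image patches with FastICA and score each patch by its total absolute response to those bases. The paper's multi-resolution pyramid and fixation selection are not reproduced, and the patch size and component count are arbitrary choices.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.image import extract_patches_2d

def ica_saliency(gray, patch=8, n_components=32, n_train=5000, seed=0):
    """Crude ICA-based saliency: learn edge-like bases from random patches,
    then score every patch location by its total absolute response."""
    patches = extract_patches_2d(gray.astype(np.float32), (patch, patch),
                                 max_patches=n_train, random_state=seed)
    X = patches.reshape(len(patches), -1)
    X -= X.mean(axis=1, keepdims=True)               # remove patch mean
    ica = FastICA(n_components=n_components, random_state=seed).fit(X)

    # Dense responses over all patch positions (slow; illustration only).
    h, w = gray.shape
    sal = np.zeros((h - patch + 1, w - patch + 1), np.float32)
    for y in range(sal.shape[0]):
        rows = np.stack([gray[y:y + patch, x:x + patch].ravel()
                         for x in range(sal.shape[1])]).astype(np.float32)
        rows -= rows.mean(axis=1, keepdims=True)
        sal[y] = np.abs(ica.transform(rows)).sum(axis=1)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
```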

Saliency Detection based on Global Color Distribution and Active Contour Analysis

  • Hu, Zhengping;Zhang, Zhenbin;Sun, Zhe;Zhao, Shuhuan
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 12 / pp.5507-5528 / 2016
  • In computer vision, salient object detection is important for extracting useful information about the foreground. With active contour analysis at its core, this paper proposes a bottom-up saliency detection algorithm that combines a Bayesian model with the global color distribution. Supported by the active contour model, a more accurate foreground can be obtained as a foundation for both the Bayesian model and the global color distribution. Furthermore, we establish a contour-based selection mechanism to optimize the global color distribution, which also serves as an effective correction for the Bayesian model. To obtain a good object contour, we first intensify the object region in the source grayscale image with a seed-based method. Once both components are available, the final saliency map is obtained by weighting the Bayesian saliency map with the color distribution. The contribution of this paper is that, compared with the Harris-based convex hull algorithm, the active contour extracts a more accurate and possibly non-convex foreground. Moreover, the global color distribution remedies the scattered-saliency drawback of the Bayesian model through mutual complementation. According to the detection results, the final saliency maps generated by considering the global color distribution and the active contour are much improved.
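
A minimal Bayesian-saliency sketch in the spirit of the model above: color likelihoods are estimated inside and outside a coarse foreground mask (which could come from an active contour), and the posterior p(foreground | color) is used as the per-pixel saliency. The bin count and the source of the mask are assumptions.

```python
import numpy as np

def bayes_saliency(img_rgb, fg_mask, bins=8):
    """Bayesian pixel saliency: p(foreground | colour) estimated from
    colour histograms inside and outside a coarse foreground mask."""
    # Quantise 8-bit colours into a bins^3 grid.
    q = (img_rgb // (256 // bins)).astype(np.int32)
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]

    fg = fg_mask.astype(bool)
    hist_fg = np.bincount(idx[fg].ravel(), minlength=bins ** 3) + 1.0
    hist_bg = np.bincount(idx[~fg].ravel(), minlength=bins ** 3) + 1.0
    p_c_fg = hist_fg / hist_fg.sum()           # p(colour | foreground)
    p_c_bg = hist_bg / hist_bg.sum()           # p(colour | background)
    prior = fg.mean()                          # p(foreground)

    # Posterior p(foreground | colour) at every pixel.
    return p_c_fg[idx] * prior / (p_c_fg[idx] * prior +
                                  p_c_bg[idx] * (1 - prior))
```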

Vector Control of PM Motor without any Rotational Transducer PART 1 - Surface Mounted Permanent Magnet Motor

  • 장지훈;하정익;설승기
    • The Transactions of the Korean Institute of Electrical Engineers: Electrical Machinery and Energy Conversion Systems, Part B / Vol. 50, No. 9 / pp.453-458 / 2001
  • This paper presents a new vector control algorithm for the surface-mounted permanent magnet motor (SMPMM) that requires no rotational transducer. Structurally, an SMPMM has no inherent magnetic saliency, but a small saliency arises from saturation caused by the permanent magnet flux. Moreover, this saliency varies with the load conditions, so the control performance of schemes that rely on it can easily degrade. To prevent this and to improve the performance of the proposed algorithm, the saliency of an SMPMM under various load conditions is analyzed. In the proposed algorithm, the saliency, or the impedance difference related to it, is used to estimate the position and speed of the rotor, and a high-frequency signal is injected into the motor to measure the impedance difference and to enhance the control performance of the system. Experimental results verify the performance of the proposed sensorless algorithm.

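For readers unfamiliar with saliency-based sensorless control, the demodulation step can be sketched as follows: the high-frequency q-axis current measured in the estimated rotor frame is mixed with the injection carrier and low-pass filtered, giving an error signal that, through the saliency, is roughly proportional to the position error and can drive a PI/PLL observer. This is a schematic illustration with assumed signal names and parameters, not the authors' algorithm.

```python
import numpy as np

def demodulate_position_error(i_qs_hf, t, f_inj, lpf_alpha=0.05):
    """Heterodyne demodulation used in HF-injection sensorless drives.

    i_qs_hf : high-frequency q-axis current sampled in the *estimated*
              rotor frame; its carrier-frequency amplitude is roughly
              proportional to the position error via the saliency.
    t       : sample times [s]; f_inj : injection frequency [Hz].
    Returns a low-pass-filtered error signal for a PI / PLL observer.
    """
    carrier = np.sin(2.0 * np.pi * f_inj * t)
    mixed = i_qs_hf * carrier                 # shift the error term to DC
    err = np.zeros_like(mixed)
    for k in range(1, len(mixed)):            # first-order low-pass filter
        err[k] = err[k - 1] + lpf_alpha * (mixed[k] - err[k - 1])
    return err
```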

Salient Region Extraction based on Global Contrast Enhancement and Saliency Cut for Image Information Recognition of the Visually Impaired

  • Yoon, Hongchan;Kim, Baek-Hyun;Mukhriddin, Mukhiddinov;Cho, Jinsoo
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 5 / pp.2287-2312 / 2018
  • Extracting key visual information from natural scene images is a challenging task and an important step toward enabling the visually impaired to recognize information through tactile graphics. In this study, a novel method is proposed for extracting salient regions based on global contrast enhancement and saliency cuts in order to improve image recognition for the visually impaired. To accomplish this, an image enhancement technique is applied to natural scene images, and a saliency map is acquired that measures the color contrast of homogeneous regions against other areas of the image. The saliency map also supports automatic salient region extraction, referred to as a saliency cut, and helps obtain a high-quality binary mask. Finally, outer boundaries and inner edges are detected in the natural scene images to identify edges that are visually significant. Experimental results indicate that the proposed method extracts salient objects effectively and achieves remarkable performance compared with conventional methods. Our method is beneficial for extracting salient objects and generating simple but important edges from natural scene images, and for providing this information to the visually impaired.
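
A simplified end-to-end sketch of the pipeline described above: CLAHE contrast enhancement, a simple color-contrast saliency map, an Otsu-threshold "saliency cut" mask, and Canny edges restricted to the salient region. The individual components are stand-ins for the paper's more elaborate ones, and the thresholds are assumptions.

```python
import cv2
import numpy as np

def salient_edges(img_bgr):
    """Contrast enhancement -> saliency map -> binary 'saliency cut'
    mask -> edges of the salient object (a simplified pipeline)."""
    # 1. Contrast enhancement on the L channel (CLAHE).
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[..., 0] = clahe.apply(lab[..., 0])
    enhanced = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

    # 2. Simple colour-contrast saliency (distance from mean Lab colour).
    labf = lab.astype(np.float32)
    sal = np.linalg.norm(labf - labf.reshape(-1, 3).mean(axis=0), axis=2)
    sal = cv2.normalize(sal, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # 3. "Saliency cut": Otsu threshold gives a binary object mask.
    _, mask = cv2.threshold(sal, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 4. Outer boundary and inner edges restricted to the salient region.
    gray = cv2.cvtColor(enhanced, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    return cv2.bitwise_and(edges, edges, mask=mask)
```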