• Title/Summary/Keyword: Saliency Detection

Far Distance Face Detection from The Interest Areas Expansion based on User Eye-tracking Information (시선 응시 점 기반의 관심영역 확장을 통한 원 거리 얼굴 검출)

  • Park, Heesun;Hong, Jangpyo;Kim, Sangyeol;Jang, Young-Min;Kim, Cheol-Su;Lee, Minho
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.9 / pp.113-127 / 2012
  • Face detection methods based on image processing have been proposed in many forms. The most widely used is the Adaboost detector of Viola and Jones, which learns Haar-like features from training images, so its performance depends on those images. It detects faces well within a certain distance range, but when the subject is far from the camera the face becomes so small that it cannot be detected with the pre-learned Haar-like features. In this paper, we propose a far-distance face detection method that combines the Viola-Jones Adaboost detector with a saliency map and the user's attention information. The saliency map is used to select candidate face regions in the input image, and faces are then detected among the candidate regions using the Adaboost detector with pre-learned Haar-like features. The user's eye-tracking information is used to select the regions of interest. When a subject is so far from the camera that the face is hard to detect, we expand the small eye-gaze region by linear interpolation and reuse it as the input image, which improves detection performance. We confirmed that the proposed model outperforms the conventional Adaboost in both face detection performance and computation time.
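As a rough illustration of the pipeline described above, the sketch below combines a spectral-residual saliency map, linear-interpolation expansion of a small gaze region, and OpenCV's pre-trained Viola-Jones Haar cascade. It is not the authors' implementation; the ROI format, the expansion factor, and the Otsu masking step are assumptions for the example (requires opencv-contrib-python for the saliency module).

```python
# Hedged sketch of the described pipeline (not the authors' code): saliency-based
# candidate selection, linear-interpolation upscaling of a small gaze region, then
# Viola-Jones (Haar cascade) face detection.
import cv2

def detect_far_faces(image_bgr, roi, scale=4):
    """roi = (x, y, w, h) around the gaze point; scale is an assumed expansion factor."""
    x, y, w, h = roi
    patch = image_bgr[y:y + h, x:x + w]
    # Expand the small gaze region with linear interpolation, as described in the paper.
    patch = cv2.resize(patch, (w * scale, h * scale), interpolation=cv2.INTER_LINEAR)

    # Spectral-residual saliency to keep only promising candidate areas.
    sal = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = sal.computeSaliency(patch)
    _, mask = cv2.threshold((sal_map * 255).astype("uint8"), 0, 255,
                            cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    candidate = cv2.bitwise_and(patch, patch, mask=mask)

    # Pre-trained Viola-Jones detector on the expanded candidate region.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(candidate, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
```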

A New Covert Visual Attention System by Object-based Spatiotemporal Cues and Their Dynamic Fusioned Saliency Map (객체기반의 시공간 단서와 이들의 동적결합 된돌출맵에 의한 상향식 인공시각주의 시스템)

  • Cheoi, Kyungjoo
    • Journal of Korea Multimedia Society / v.18 no.4 / pp.460-472 / 2015
  • Most previous visual attention systems find attention regions from a saliency map obtained by combining multiple extracted features; such systems differ mainly in how the features are extracted and combined. This paper presents a new system that improves the extraction of color and motion features and the weighting of spatial and temporal features. Our system dynamically extracts the one of two opponent colors with the stronger response, and detects moving objects rather than moving pixels. To combine the spatial and temporal features, the proposed system sets the weights dynamically according to each feature's relative activity. Comparative results show that the suggested feature extraction and integration method improves the detection rate of attention regions.
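A minimal sketch of the dynamic fusion idea follows: the spatial and temporal maps are weighted by their relative activities. The paper does not state the exact activity measure, so (max - mean) of each normalized map is used here purely as a stand-in.

```python
# Hedged sketch of dynamic spatio-temporal fusion: the spatial and temporal feature maps
# are weighted by their relative activities; the exact activity measure is not given here,
# so (max - mean) of each normalized map is used as a stand-in.
import numpy as np

def fuse_by_relative_activity(spatial_map, temporal_map, eps=1e-8):
    maps, activities = [], []
    for m in (spatial_map, temporal_map):
        m = (m - m.min()) / (m.max() - m.min() + eps)   # normalize to [0, 1]
        maps.append(m)
        activities.append(m.max() - m.mean())           # crude "activity" proxy
    w = np.array(activities) / (sum(activities) + eps)  # relative weights
    return w[0] * maps[0] + w[1] * maps[1]              # fused saliency map
```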

Small Object Segmentation Based on Visual Saliency in Natural Images

  • Manh, Huynh Trung;Lee, Gueesang
    • Journal of Information Processing Systems / v.9 no.4 / pp.592-601 / 2013
  • Object segmentation is a challenging task in image processing and computer vision. In this paper, we present a visual-attention-based method for segmenting small objects of interest in natural images. Unlike traditional methods, we first search for the region of interest with our saliency-based method, which relies mainly on band-pass filtering at an appropriate frequency. Second, we apply a Gaussian Mixture Model (GMM) to locate the object region. By incorporating visual attention analysis into object segmentation, the proposed approach narrows the search region, which increases accuracy and reduces computational complexity. The experimental results indicate that the proposed approach is effective for object segmentation in natural images, especially for small objects, and that it significantly outperforms traditional GMM-based segmentation.
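The sketch below illustrates the two stages in simplified form: a band-pass (difference-of-Gaussians) saliency map to pick a search region, followed by a two-component GMM over pixel intensities. The filter scales, the thresholding rule, and the two-component choice are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: band-pass (difference-of-Gaussians) saliency to find a region of
# interest, then a Gaussian Mixture Model to separate object from background inside it.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.mixture import GaussianMixture

def segment_small_object(gray):
    # Band-pass filtering: difference of a fine and a coarse Gaussian blur.
    g = gray.astype(float)
    saliency = np.abs(gaussian_filter(g, 2) - gaussian_filter(g, 16))

    # Keep the most salient area as the search region
    # (assumes some pixels exceed the threshold, which holds for non-constant images).
    ys, xs = np.where(saliency > saliency.mean() + 2 * saliency.std())
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    roi = gray[y0:y1, x0:x1]

    # Two-component GMM on pixel intensities: object vs. background.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(roi.reshape(-1, 1))
    labels = gmm.predict(roi.reshape(-1, 1)).reshape(roi.shape)
    return (y0, x0), labels
```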

Vehicle License Plate Detection in Road Images (도로주행 영상에서의 차량 번호판 검출)

  • Lim, Kwangyong;Byun, Hyeran;Choi, Yeongwoo
    • Journal of KIISE / v.43 no.2 / pp.186-195 / 2016
  • This paper proposes a vehicle license plate detection method for real road environments using 8-bit MCT features and a landmark-based Adaboost method. The proposed method identifies potential license plate regions and generates a saliency map that represents the probability of the license plate's location based on the Adaboost classification score. Candidate regions whose scores exceed a given threshold are chosen from the saliency map. Each candidate region is adjusted by the local image variance and verified by an SVM over histograms of the 8-bit MCT features. The proposed method achieves a detection accuracy of 85% on various road images from Korea and Europe.
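For reference, the sketch below computes a generic 8-bit Modified Census Transform, comparing each pixel's eight neighbors against the local 3x3 mean and packing the results into one byte; whether this matches the paper's exact MCT variant is an assumption.

```python
# Hedged sketch of an 8-bit Modified Census Transform (MCT): each pixel's eight
# neighbors are compared against the local 3x3 mean and packed into one byte.
# This is a generic MCT, not necessarily the exact variant used in the paper.
import numpy as np

def mct8(gray):
    g = gray.astype(np.float32)
    h, w = g.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Local 3x3 mean for every interior pixel.
    mean = sum(g[dy:dy + h - 2, dx:dx + w - 2]
               for dy in range(3) for dx in range(3)) / 9.0
    bit = 0
    for dy in range(3):
        for dx in range(3):
            if dy == 1 and dx == 1:          # skip the center pixel -> 8 bits
                continue
            bits = (g[dy:dy + h - 2, dx:dx + w - 2] > mean).astype(np.uint8)
            out |= (bits << bit).astype(np.uint8)
            bit += 1
    return out
```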

Salient Object Detection via Multiple Random Walks

  • Zhai, Jiyou;Zhou, Jingbo;Ren, Yongfeng;Wang, Zhijian
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.4 / pp.1712-1731 / 2016
  • In this paper, we propose a novel saliency detection framework based on multiple random walks (MRW), which simulate multiple agents on a graph simultaneously. In the MRW system, two agents, representing the background and foreground seeds, traverse the graph according to a transition matrix and interact with each other until they reach equilibrium. The proposed algorithm has three steps. First, an initial segmentation partitions the input image into homogeneous regions (superpixels) for saliency computation. From these regions we construct a graph whose nodes correspond to the superpixels and whose edges between neighboring nodes encode the similarities of the corresponding superpixels. Second, to generate the background seeds, we discard the one of the four image boundaries that is least likely to belong to the background; the superpixels on the three remaining sides are labeled as background seeds. To generate the foreground seeds, we use the center prior that foreground objects tend to appear near the image center. In the last step, the foreground and background seeds are treated as two different agents in the multiple random walks to complete salient object detection. Experimental results on three benchmark databases demonstrate that the proposed method compares well against state-of-the-art methods in terms of accuracy and robustness.
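The following sketch shows only two building blocks such a framework rests on: a superpixel graph with color-similarity edge weights, and a single restarting random walk from boundary (background) seeds. The paper's interacting two-agent MRW formulation and its boundary-filtering rule are not reproduced here; SLIC superpixels, the sigma value, and the restart probability are assumptions.

```python
# Hedged sketch: SLIC superpixels, a similarity-weighted adjacency graph, and a single
# random walk with restart from boundary (background) seeds. The paper's two interacting
# agents are not modeled; this only illustrates the graph and the walk iteration.
import numpy as np
from skimage.segmentation import slic
from skimage import img_as_float

def background_walk(image_rgb, n_segments=200, sigma=10.0, restart=0.15, iters=100):
    img = img_as_float(image_rgb)
    labels = slic(img, n_segments=n_segments, start_label=0)
    n = labels.max() + 1
    # Mean color of each superpixel.
    colors = np.array([img[labels == i].mean(axis=0) for i in range(n)])

    # Adjacency from horizontal and vertical pixel neighbors with different labels.
    W = np.zeros((n, n))
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        m = a != b
        W[a[m], b[m]] = 1
        W[b[m], a[m]] = 1
    # Edge weights: color similarity between adjacent superpixels.
    dist = np.linalg.norm(colors[:, None] - colors[None, :], axis=2)
    W *= np.exp(-sigma * dist ** 2)
    P = W / (W.sum(axis=1, keepdims=True) + 1e-12)   # row-stochastic transition matrix

    # Restart distribution: superpixels touching the image boundary (background seeds).
    border = np.zeros(n)
    border[np.unique(np.concatenate(
        [labels[0], labels[-1], labels[:, 0], labels[:, -1]]))] = 1
    border /= border.sum()

    pi = np.full(n, 1.0 / n)
    for _ in range(iters):                            # random walk with restart
        pi = (1 - restart) * pi @ P + restart * border
    saliency = 1 - pi / pi.max()                      # far from background -> more salient
    return saliency[labels]                           # per-pixel saliency map
```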

Spatiotemporal Saliency-Based Video Summarization on a Smartphone (스마트폰에서의 시공간적 중요도 기반의 비디오 요약)

  • Lee, Won Beom;Williem, Williem;Park, In Kyu
    • Journal of Broadcast Engineering / v.18 no.2 / pp.185-195 / 2013
  • In this paper, we propose a spatiotemporal-saliency-based video summarization technique for smartphones. The proposed technique detects scene changes by computing color histogram differences, which are robust to camera and object motion. The similarity between adjacent frames, the face regions, and the frame saliency are then computed to analyze the spatiotemporal saliency of the video clip. An over-segmented hierarchical tree is created from the scene changes and updated iteratively using mergence and maintenance energies computed during the analysis. In the updated hierarchical tree, segments are extracted by applying a greedy algorithm to nodes with high saliency, subject to the reduction ratio and the minimum interval requested by the user. Experimental results show that the proposed method summarizes a two-minute video in about 10 seconds on a commercial smartphone, with summarization quality superior to that of the commercial video editing software Muvee.
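Of the steps above, the scene-change detection is the easiest to illustrate; the sketch below compares adjacent-frame HSV color histograms and flags a large drop in correlation as a cut. The histogram size and the 0.6 threshold are illustrative choices, not the paper's values.

```python
# Hedged sketch of the scene-change step only: per-frame HSV color histograms are
# compared between adjacent frames, and a large drop in correlation is flagged as a cut.
import cv2

def detect_scene_changes(video_path, threshold=0.6):
    cap = cv2.VideoCapture(video_path)
    changes, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Correlation close to 1 means similar frames; a sharp drop means a cut.
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < threshold:
                changes.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return changes
```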

A Scale Invariant Object Detection Algorithm Using Wavelet Transform in Sea Environment (해양 환경에서 웨이블렛 변환을 이용한 크기 변화에 무관한 물표 탐지 알고리즘)

  • Bazarvaani, Badamtseren;Park, Ki Tae;Jeong, Jongmyeon
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.3 / pp.249-255 / 2013
  • In this paper, we propose a scale-invariant algorithm for detecting objects in IR images obtained in a sea environment. After noise reduction with morphological operations, we create the horizontal-edge (HL), vertical-edge (LH), and diagonal-edge (HH) subbands of the image using the 2-D discrete Haar wavelet transform (DHWT). Considering the sea environment, Gaussian blurring is applied to the horizontal and vertical edge images at each wavelet level, and a saliency map is generated by multiplying the blurred horizontal and vertical edges and combining the levels into one image. Object candidate regions are then extracted by binarizing the saliency map, and small areas in the candidate regions are removed to produce the final result. Experimental results show the feasibility of the proposed algorithm.
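A simplified version of the saliency-map construction might look like the sketch below: a Haar wavelet decomposition, Gaussian blurring of the horizontal and vertical detail subbands, their per-level product combined at full resolution, and Otsu binarization. The decomposition depth and blur sigma are illustrative, and the morphological noise-reduction step is omitted.

```python
# Hedged sketch of the saliency-map construction: 2-level Haar DWT, Gaussian blurring of
# the horizontal and vertical detail subbands, their product per level, and an Otsu-style
# binarization. Level count and blur sigma are illustrative choices.
import numpy as np
import pywt
import cv2

def wavelet_saliency(gray, levels=2, sigma=3):
    g = gray.astype(np.float32)
    coeffs = pywt.wavedec2(g, "haar", level=levels)
    sal = np.zeros_like(g)
    for lvl in range(1, levels + 1):
        cH, cV, _ = coeffs[lvl]                      # horizontal, vertical, diagonal details
        h = cv2.GaussianBlur(np.abs(cH), (0, 0), sigma)
        v = cv2.GaussianBlur(np.abs(cV), (0, 0), sigma)
        prod = h * v                                 # strong in both directions -> salient
        sal += cv2.resize(prod, (g.shape[1], g.shape[0]))  # combine levels at full size
    sal = cv2.normalize(sal, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(sal, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    return sal, mask
```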

Product Label Detection based on the Local Structure Tensor (구조 텐서 기반의 상품 라벨 검출)

  • Chen, Yan-Juan;Lee, Myung-Eun;Kim, Soo-Hyung
    • Proceedings of the Korean Information Science Society Conference / 2011.06c / pp.397-400 / 2011
  • In this paper, we propose an approach for detecting product labels in mobile phone images based on a saliency map and the local structure tensor. Object boundary information is described better by the local structure tensor than by other edge detectors, and saliency map methods can find the most salient area and shorten the computation time by reducing the size of the original image. These two methods are therefore combined for product label detection. The experimental results show acceptable performance for the proposed approach.
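As a reference for the boundary-description step, the sketch below computes a standard local structure tensor from Sobel gradients and returns its larger eigenvalue as an edge-strength map; the smoothing scale and the choice of response measure are assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a local structure tensor: Sobel gradients, Gaussian-smoothed tensor
# components, and the larger eigenvalue as an edge/boundary strength map.
import cv2
import numpy as np

def structure_tensor_response(gray, sigma=2.0):
    g = gray.astype(np.float32)
    gx = cv2.Sobel(g, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(g, cv2.CV_32F, 0, 1, ksize=3)
    # Tensor components, locally averaged with a Gaussian window.
    Jxx = cv2.GaussianBlur(gx * gx, (0, 0), sigma)
    Jxy = cv2.GaussianBlur(gx * gy, (0, 0), sigma)
    Jyy = cv2.GaussianBlur(gy * gy, (0, 0), sigma)
    # Larger eigenvalue of the 2x2 tensor at every pixel.
    trace_half = (Jxx + Jyy) / 2.0
    diff = np.sqrt(((Jxx - Jyy) / 2.0) ** 2 + Jxy ** 2)
    return trace_half + diff
```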

Salient Object Detection Based on Regional Contrast and Relative Spatial Compactness

  • Xu, Dan;Tang, Zhenmin;Xu, Wei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.11 / pp.2737-2753 / 2013
  • In this study, we propose a novel salient object detection strategy based on regional contrast and relative spatial compactness. Our algorithm consists of four steps. First, we learn color names offline using the probabilistic latent semantic analysis (PLSA) model to find the mapping between basic color names and pixel values; the color names are used for image segmentation and region description. Second, image pixels are assigned to color names according to their values, forming color clusters. The saliency of each cluster is evaluated by its spatial compactness relative to the other clusters rather than by the intra-cluster variance alone. Third, every cluster is divided into local regions described with color name descriptors, and the regional contrast is evaluated by computing the color distance between different regions over the entire image. Last, the final saliency map is constructed by combining each color cluster's spatial compactness measure with the corresponding regional contrast. Experiments show that our algorithm outperforms several existing salient object detection methods, with higher precision and better recall on public datasets.
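One way to read the relative spatial-compactness idea is sketched below: each color cluster is scored by the spatial variance of its pixel positions relative to the average spread of all clusters. This is a simplified interpretation for illustration, not the paper's formulation.

```python
# Hedged sketch of a relative spatial-compactness score: the spatial variance of each
# cluster's pixel coordinates is compared against the average spread of all clusters.
import numpy as np

def relative_compactness(cluster_labels):
    """cluster_labels: 2-D integer array assigning each pixel to a color cluster."""
    ys, xs = np.indices(cluster_labels.shape)
    spread = {}
    for k in np.unique(cluster_labels):
        m = cluster_labels == k
        # Spatial variance of the cluster's pixel positions (smaller = more compact).
        spread[k] = ys[m].var() + xs[m].var()
    mean_spread = np.mean(list(spread.values()))
    # Compactness relative to the average spread of all clusters.
    return {k: mean_spread / (v + 1e-8) for k, v in spread.items()}
```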

Video Based Tail-Lights Status Recognition Algorithm (영상기반 차량 후미등 상태 인식 알고리즘)

  • Kim, Gyu-Yeong;Lee, Geun-Hoo;Do, Jin-Kyu;Park, Keun-Soo;Park, Jang-Sik
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.8 no.10 / pp.1443-1449 / 2013
  • Automatic detection of the vehicles ahead is an integral component of many advanced driver-assistance systems, such as collision mitigation, automatic cruise control, and automatic head-lamp dimming. Both day and night, tail-lights play an important role in detecting the vehicles ahead and recognizing their status. However, some drivers are unaware of the status of their own tail-lights, so it is useful to inform them automatically. In this paper, a method for recognizing tail-light status based on video processing and recognition technology is proposed. Background estimation, optical flow, and Euclidean distance are used to detect vehicles entering a tollgate. A saliency map is then used to detect the tail-lights and recognize their status in the Lab color space. Experiments using tollgate videos show that the proposed method can be used to report tail-light status.
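The tail-light step could be sketched roughly as below: the frame is converted to the Lab color space and strongly red pixels (high a* values) inside a detected vehicle box are counted to decide whether the lights are on. The bounding-box input, the a* threshold, and the ratio test are assumptions for the example.

```python
# Hedged sketch of the tail-light step only: convert to the Lab color space and count
# strongly red pixels (high a* channel) inside a vehicle's bounding box.
import cv2
import numpy as np

def tail_lights_on(frame_bgr, vehicle_box, a_thresh=160, min_ratio=0.01):
    x, y, w, h = vehicle_box
    roi = frame_bgr[y:y + h, x:x + w]
    lab = cv2.cvtColor(roi, cv2.COLOR_BGR2LAB)
    a_channel = lab[:, :, 1]                      # a*: green (low) to red (high)
    red_ratio = np.count_nonzero(a_channel > a_thresh) / a_channel.size
    return red_ratio > min_ratio
```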