• Title/Summary/Keyword: Saliency Pixel


The Method to Estimate Saliency Values using Gauss Weight (가우스 가중치를 이용한 돌출 값 추정을 위한 방법)

  • Yu, Young-Jung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.4
    • /
    • pp.965-970
    • /
    • 2013
  • Extracting salient regions from an image is an important preprocessing step for various image processing methods. In this paper, we introduce an improved method to estimate the saliency value of each pixel in an image. The proposed method improves on a previously studied approach that uses color and a statistical framework to estimate saliency values. First, the saliency value of each pixel is calculated using the local contrast of an image region at various scales, and the most significant saliency pixel is determined from these values. Then, the saliency value of each pixel is re-estimated using a Gaussian weight with respect to the most significant saliency pixel, and these saliency values are used to calculate an initial probability. Finally, the saliency value of each pixel is calculated by Bayes' rule. Experiments show that our approach outperforms the current statistics-based method.
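
The Gaussian re-weighting step described above can be sketched as follows. This is a minimal illustration of the idea, not the paper's exact formulation; the `sigma` parameter and the plain multiplicative weighting are assumptions.

```python
import math

def gauss_weighted_saliency(saliency, sigma=2.0):
    """Re-estimate per-pixel saliency with a Gaussian weight centred on the
    most significant saliency pixel (illustrative sketch)."""
    h, w = len(saliency), len(saliency[0])
    # locate the most significant saliency pixel
    cy, cx = max(((y, x) for y in range(h) for x in range(w)),
                 key=lambda p: saliency[p[0]][p[1]])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            d2 = (y - cy) ** 2 + (x - cx) ** 2
            # attenuate saliency by distance from the most salient pixel
            row.append(saliency[y][x] * math.exp(-d2 / (2 * sigma ** 2)))
        out.append(row)
    return out
```

Pixels far from the dominant salient pixel are attenuated, which is what makes the subsequent probabilistic estimate concentrate on a single salient region.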

Object Detection and 3D Position Estimation based on Stereo Vision (스테레오 영상 기반의 객체 탐지 및 객체의 3차원 위치 추정)

  • Son, Haengseon;Lee, Seonyoung;Min, Kyoungwon;Seo, Seongjin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.10 no.4
    • /
    • pp.318-324
    • /
    • 2017
  • We introduce a stereo camera on an aircraft to detect flying objects and estimate their 3D positions. A saliency map algorithm based on PCT is proposed to detect small objects between clouds, and a stereo matching algorithm is then applied to find the disparity between the left and right cameras. To extract accurate disparity, the cost aggregation region is treated as a variable region that adapts to the detected object; in this paper, we use the detection result as the cost aggregation region. To extract even more precise disparity, sub-pixel interpolation is used to obtain a floating-point disparity at the sub-pixel level. We also propose a method to estimate the spatial position of an object using camera parameters. We expect this approach to be applicable to image-based object detection and collision avoidance systems for autonomous aircraft in the future.
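
The sub-pixel refinement and position estimation steps can be sketched with the standard parabolic interpolation of matching costs and the pinhole stereo relation Z = f·B/d. The paper does not give its exact formulas, so the parabola fit and the focal/baseline parameters below are illustrative assumptions.

```python
def subpixel_disparity(costs, d):
    """Parabolic interpolation around the integer disparity d with the
    minimum matching cost, yielding a float-type disparity (a common
    refinement, assumed here for illustration)."""
    c0, c1, c2 = costs[d - 1], costs[d], costs[d + 1]
    denom = c0 - 2 * c1 + c2
    if denom == 0:
        return float(d)
    # vertex of the parabola through the three cost samples
    return d + 0.5 * (c0 - c2) / denom

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Pinhole stereo relation Z = f * B / d (camera-parameter based)."""
    return focal_px * baseline_m / disparity
```

Refining disparity to sub-pixel precision matters because depth error grows quadratically with range for a fixed disparity quantization.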

Triqubit-State Measurement-Based Image Edge Detection Algorithm

  • Wang, Zhonghua;Huang, Faliang
    • Journal of Information Processing Systems
    • /
    • v.14 no.6
    • /
    • pp.1331-1346
    • /
    • 2018
  • To address the problem that gradient-based edge detection operators are sensitive to noise, causing pseudo edges, a triqubit-state measurement-based edge detection algorithm is presented in this paper. Combining local and global image structure information, triqubit superposition states are used to represent pixel features so as to locate image edges. Our algorithm consists of three steps. First, an improved partial differential method is used to smooth the defect image. Second, the triqubit state is characterized by three elements (pixel saliency, edge statistical characteristics, and gray-scale contrast) to map the defect image from gray space to quantum space. Third, the edge image is output according to quantum measurement, local gradient maximization, and neighborhood chain-code searching. Simulation experiments indicate that, compared with other methods, our algorithm produces fewer pseudo edges and achieves higher edge detection accuracy.
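
Of the three output steps, local gradient maximization is the most conventional: only pixels whose gradient magnitude is a local maximum are retained as edge candidates. A one-dimensional sketch (my simplification; the paper operates on 2-D quantum-measured features) looks like this:

```python
def local_gradient_maxima(row):
    """Keep only indices whose gradient magnitude is a local maximum along
    the row -- a 1-D sketch of the local gradient maximization step."""
    return [i for i in range(1, len(row) - 1)
            if row[i] > row[i - 1] and row[i] >= row[i + 1]]
```

Thinning edge responses this way is what suppresses the thick, noisy ridges that raw gradient operators produce.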

Efficient Image Segmentation Algorithm Based on Improved Saliency Map and Superpixel (향상된 세일리언시 맵과 슈퍼픽셀 기반의 효과적인 영상 분할)

  • Nam, Jae-Hyun;Kim, Byung-Gyu
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.7
    • /
    • pp.1116-1126
    • /
    • 2016
  • Image segmentation is widely used in the pre-processing stage of image analysis, so its accuracy is important for the performance of an image-based analysis system. We propose an efficient image segmentation method that includes a filtering process for superpixels, improved saliency map information, and a merging process. The proposed algorithm removes superpixels that are dissimilar or too small, based on a comparison of the areas of smoothed superpixels, so that the generated superpixel regions remain similar in size. In addition, applying a bilateral filter to an existing saliency map, which represents human visual attention, improves the separation between objects and background. Finally, a segmentation result is obtained through the suggested merging process without any prior knowledge or information. The performance of the proposed algorithm is verified experimentally.
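
The bilateral filtering step smooths the saliency map while preserving object/background boundaries. A minimal brute-force sketch (the radius and sigma values are illustrative, not the paper's settings):

```python
import math

def bilateral_filter(img, sigma_s=1.0, sigma_r=0.1):
    """Edge-preserving smoothing of a saliency map: each output pixel is a
    weighted average where weights fall off with both spatial distance and
    intensity difference."""
    h, w = len(img), len(img[0])
    r = max(1, int(2 * sigma_s))  # window radius
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = norm = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        ws = math.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        wr = math.exp(-(img[ny][nx] - img[y][x]) ** 2
                                      / (2 * sigma_r ** 2))
                        acc += ws * wr * img[ny][nx]
                        norm += ws * wr
            out[y][x] = acc / norm
    return out
```

Because the range weight `wr` collapses across strong intensity steps, the filter flattens saliency inside objects and background without blurring the boundary between them.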

The Method to Measure Saliency Values for Salient Region Detection from an Image

  • Park, Seong-Ho;Yu, Young-Jung
    • Journal of information and communication convergence engineering
    • /
    • v.9 no.1
    • /
    • pp.55-58
    • /
    • 2011
  • In this paper, we introduce an improved method to measure the saliency values of pixels in an image. The proposed saliency measure is formulated using local color features and a statistical framework. In the preprocessing step, rough salient pixels are determined from the local contrast of an image region with respect to its neighborhood at various scales. Then, the saliency value of each pixel is calculated by Bayes' rule using the rough salient pixels. Experiments show that our approach outperforms the current Bayes' rule-based method.
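
The final Bayes' rule step amounts to converting class-conditional likelihoods of a pixel's features into a posterior saliency probability. A generic sketch of that computation (the likelihood models and prior are assumptions, not the paper's specific estimates):

```python
def bayes_saliency(likelihood_sal, likelihood_bg, prior_sal=0.5):
    """Posterior P(salient | feature) via Bayes' rule, given the feature
    likelihood under the salient and background classes."""
    prior_bg = 1.0 - prior_sal
    num = likelihood_sal * prior_sal
    den = num + likelihood_bg * prior_bg
    return num / den if den else 0.0
```

In the paper's pipeline, the rough salient pixels from the contrast step would supply the statistics from which `likelihood_sal` and `likelihood_bg` are estimated.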

Building Change Detection Using Deep Learning for Remote Sensing Images

  • Wang, Chang;Han, Shijing;Zhang, Wen;Miao, Shufeng
    • Journal of Information Processing Systems
    • /
    • v.18 no.4
    • /
    • pp.587-598
    • /
    • 2022
  • To increase building change recognition accuracy, we present a deep learning-based building change detection method using remote sensing images. In the proposed approach, by merging pixel-level and object-level information from multitemporal remote sensing images, we create the difference image (DI), and a frequency-domain significance technique is used to generate the DI saliency map. The fuzzy C-means clustering technique pre-classifies a coarse change detection map by thresholding the DI saliency map. We then extract the neighborhood features of the unchanged pixels and the changed pixels (buildings) from pixel-level and object-level feature images, which are used as valid deep neural network (DNN) training samples. The trained DNNs are then utilized to identify changes in the DI. The suggested strategy was evaluated and compared to current detection methods on two datasets. The results suggest that our proposed technique can detect more building change information and improve change detection accuracy.
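
The fuzzy C-means pre-classification step can be sketched on scalar DI-saliency values with two clusters (changed vs. unchanged). This is a minimal 1-D FCM, assuming the standard membership and centre-update formulas; the paper's actual feature space and fuzzifier setting may differ.

```python
def fcm_1d(values, iters=30, m=2.0):
    """Two-cluster fuzzy C-means on scalar saliency values.
    Returns the cluster centres and per-value membership degrees."""
    c = [min(values), max(values)]  # initial centres: extremes
    u = []
    for _ in range(iters):
        u = []
        for v in values:
            d = [abs(v - ck) + 1e-12 for ck in c]  # distances to centres
            # standard FCM membership: u_k = 1 / sum_j (d_k / d_j)^(2/(m-1))
            u.append([1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(2))
                      for k in range(2)])
        for k in range(2):  # centre update as membership-weighted mean
            den = sum(ui[k] ** m for ui in u)
            c[k] = sum((ui[k] ** m) * v for ui, v in zip(u, values)) / den
    return c, u
```

High-membership pixels in the "changed" cluster would then seed the coarse change map that selects DNN training samples.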

Location-Based Saliency Maps from a Fully Connected Layer using Multi-Shapes

  • Kim, Hoseung;Han, Seong-Soo;Jeong, Chang-Sung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.1
    • /
    • pp.166-179
    • /
    • 2021
  • Recently, with the development of technology, computer vision research based on the human visual system has been actively conducted. Saliency maps have been used to highlight visually interesting areas within an image, but they can suffer from low performance due to external factors such as an indistinct background or a light source. In this study, existing color, brightness, and contrast feature maps are passed through multiple shape and orientation filters and then connected to a fully connected layer to determine pixel intensities within the image based on location-based weights. The proposed method demonstrates better separation of the background from the area of interest in terms of color and brightness in the presence of external elements and noise. Location-based weight normalization is also effective in removing high-intensity pixels that lie outside the image or in non-interest regions. Our proposed method also demonstrates that multi-filter normalization can be processed faster using parallel processing.
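
One simple form of location-based weighting is a centre-biased Gaussian applied to the map followed by renormalisation, which suppresses spurious high responses near the borders. The sketch below is my illustration of that general idea, not the paper's learned fully connected weights; `sigma_frac` is an assumed parameter.

```python
import math

def center_bias(sal, sigma_frac=0.3):
    """Weight a saliency map by a Gaussian centred on the image centre,
    then renormalise to [0, 1] (illustrative location-based weighting)."""
    h, w = len(sal), len(sal[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sigma2 = (sigma_frac * max(h, w)) ** 2
    out = [[sal[y][x] * math.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma2))
            for x in range(w)] for y in range(h)]
    hi = max(max(r) for r in out) or 1.0  # avoid division by zero
    return [[v / hi for v in row] for row in out]
```
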

Multi-scale Diffusion-based Salient Object Detection with Background and Objectness Seeds

  • Yang, Sai;Liu, Fan;Chen, Juan;Xiao, Dibo;Zhu, Hairong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.10
    • /
    • pp.4976-4994
    • /
    • 2018
  • Diffusion-based salient object detection methods have shown excellent detection results and efficient computation in recent years. However, current diffusion-based methods still have difficulty detecting objects that appear at image boundaries or at different scales. To address these issues, this paper proposes a multi-scale diffusion-based salient object detection algorithm with background and objectness seeds. Specifically, the image is first over-segmented at several scales. Second, the background and objectness saliency of each superpixel is calculated and fused at each scale. Third, a manifold ranking method is chosen to propagate the Bayesian fusion of background and objectness saliency to the whole image. Finally, the pixel-level saliency map is constructed by a weighted summation of the saliency values at different scales. We evaluate our salient object detection algorithm against 24 state-of-the-art methods on four public benchmark datasets, i.e., ASD, SED1, SED2, and SOD. The results show that the proposed method performs favorably against the 24 state-of-the-art approaches in terms of the popular PR-curve and F-measure metrics, and visual comparisons also show that our method highlights salient objects more effectively.
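
The final fusion step, a weighted summation of per-scale maps followed by normalisation, can be sketched directly. Uniform weights are my assumption; the paper may weight scales differently.

```python
def fuse_scales(maps, weights=None):
    """Pixel-level saliency as a weighted sum of per-scale saliency maps,
    normalised to [0, 1] (illustrative fusion step)."""
    if weights is None:
        weights = [1.0 / len(maps)] * len(maps)
    h, w = len(maps[0]), len(maps[0][0])
    fused = [[sum(wt * m[y][x] for wt, m in zip(weights, maps))
              for x in range(w)] for y in range(h)]
    lo = min(min(row) for row in fused)
    hi = max(max(row) for row in fused)
    span = (hi - lo) or 1.0  # guard against a constant map
    return [[(v - lo) / span for v in row] for row in fused]
```

Averaging across scales is what lets objects that are only well-segmented at one scale still survive in the final map.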

Modified Seam Finding Algorithm based on Saliency Map to Generate 360 VR Image (360 VR 영상 제작을 위한 Saliency Map 기반 Seam Finding 알고리즘)

  • Han, Hyeon-Deok;Han, Jong-Ki
    • Journal of Broadcast Engineering
    • /
    • v.24 no.6
    • /
    • pp.1096-1112
    • /
    • 2019
  • The cameras that generate 360 VR images are too expensive for general public use. To overcome this problem, we propose using smartphones instead of a VR camera: more than 100 pictures are taken with a smartphone and stitched into a 360 VR image. In this scenario, when moving objects appear in some of the pictures, the stitched 360 VR image suffers various degradations, such as ghosting and misalignment. In this paper, we propose an algorithm that modifies seam finding algorithms by generating a saliency map in the ROI to check whether each pixel belongs to a visually salient object. Various simulation results show that the proposed algorithm effectively increases the quality of the generated 360 VR image.
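
A saliency-aware seam finder can be sketched as a dynamic-programming seam whose per-pixel energy adds a saliency penalty, so the seam routes around salient objects instead of cutting through them. This is a generic vertical-seam DP under that assumption; the weight `alpha` and the DP formulation are illustrative, not the paper's exact algorithm.

```python
def find_seam(cost, saliency, alpha=10.0):
    """Vertical seam minimising stitching cost plus a saliency penalty,
    via standard dynamic programming with 8-connected moves."""
    h, w = len(cost), len(cost[0])
    # combined per-pixel energy: stitching cost + weighted saliency
    e = [[cost[y][x] + alpha * saliency[y][x] for x in range(w)]
         for y in range(h)]
    dp = [e[0][:]]
    for y in range(1, h):
        prev = dp[-1]
        dp.append([e[y][x] + min(prev[max(0, x - 1):x + 2])
                   for x in range(w)])
    # backtrack from the cheapest bottom-row pixel
    x = min(range(w), key=lambda i: dp[-1][i])
    seam = [x]
    for y in range(h - 2, -1, -1):
        lo = max(0, x - 1)
        x = min(range(lo, min(w, x + 2)), key=lambda i: dp[y][i])
        seam.append(x)
    seam.reverse()
    return seam
```

With a sufficiently large `alpha`, the seam avoids salient pixels whenever a non-salient path exists, which is what suppresses ghosting of moving objects in the stitched panorama.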

Salient Object Detection Based on Regional Contrast and Relative Spatial Compactness

  • Xu, Dan;Tang, Zhenmin;Xu, Wei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.11
    • /
    • pp.2737-2753
    • /
    • 2013
  • In this study, we propose a novel salient object detection strategy based on regional contrast and relative spatial compactness. Our algorithm consists of four basic steps. First, we learn color names offline using the probabilistic latent semantic analysis (PLSA) model to find the mapping between basic color names and pixel values; the color names can be used for image segmentation and region description. Second, image pixels are assigned to specific color names according to their values, forming different color clusters. The saliency measure for every cluster is evaluated by its spatial compactness relative to other clusters rather than by the intra-cluster variance alone. Third, every cluster is divided into local regions that are described with color name descriptors; the regional contrast is evaluated by computing the color distance between different regions across the entire image. Last, the final saliency map is constructed by incorporating each color cluster's spatial compactness measure and the corresponding regional contrast. Experiments show that our algorithm outperforms several existing salient object detection methods, with higher precision and better recall rates on public datasets.
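
A simple spatial compactness measure for a color cluster is the inverse of the mean squared distance of its pixels to their centroid: tightly grouped clusters (likely objects) score high, scattered clusters (likely background) score low. The exact formula below is my illustration; the paper evaluates compactness relative to other clusters.

```python
def spatial_compactness(coords):
    """Compactness of a colour cluster from its pixel coordinates:
    1 / (1 + mean squared distance to the cluster centroid)."""
    n = len(coords)
    cy = sum(y for y, _ in coords) / n
    cx = sum(x for _, x in coords) / n
    var = sum((y - cy) ** 2 + (x - cx) ** 2 for y, x in coords) / n
    return 1.0 / (1.0 + var)
```
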