• Title/Summary/Keyword: saliency

Search Result 225

Pothole Detection Algorithm Based on Saliency Map for Improving Detection Performance (포트홀 탐지 정확도 향상을 위한 Saliency Map 기반 포트홀 탐지 알고리즘)

  • Jo, Young-Tae;Ryu, Seung-Ki
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.15 no.4 / pp.104-114 / 2016
  • Potholes cause diverse problems such as wheel damage and car accidents, so pothole detection technology is essential for efficient road maintenance. Previously, pothole detection relied on manual reporting, which left these problems largely unsolved. Recently, many pothole detection systems based on video cameras, which can be implemented at low cost, have been studied. In this paper, we propose a new pothole detection algorithm based on saliency map information that improves on our previously developed algorithm. The previous algorithm produces false detections in complicated situations, such as potholes overlapping with shadows or sharing a similar surface texture with the normal road surface. To address these problems, the proposed algorithm extracts more accurate pothole regions using saliency map information in two stages: candidate extraction and decision. Experimental results show that the proposed algorithm outperforms our previous algorithm.
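The two-stage structure described in the abstract (candidate extraction followed by a decision step) can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the threshold and minimum-size values are assumptions.

```python
import numpy as np

def extract_candidates(saliency, thresh=0.6):
    """Binarize a normalized saliency map to obtain candidate pothole pixels."""
    return saliency >= thresh

def decide(candidate_mask, min_pixels=4):
    """Toy decision step: accept the candidates only if the region is large
    enough to plausibly be a pothole (the paper's decision is more involved)."""
    return int(candidate_mask.sum()) >= min_pixels

# Toy saliency map with one bright (salient) blob standing in for a pothole.
sal = np.zeros((6, 6))
sal[2:5, 2:5] = 0.9
mask = extract_candidates(sal)
print(int(mask.sum()))  # 9 candidate pixels
print(decide(mask))     # True
```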

Automatic Change Detection Using Unsupervised Saliency Guided Method with UAV and Aerial Images

  • Farkoushi, Mohammad Gholami;Choi, Yoonjo;Hong, Seunghwan;Bae, Junsu;Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing / v.36 no.5_3 / pp.1067-1076 / 2020
  • In this paper, an unsupervised saliency-guided change detection method using UAV and aerial imagery is proposed. Regions that differ strongly from other areas are salient, which makes them distinct; since substantial differences exist between the two images, saliency is well suited to guiding the change detection process. Change Vector Analysis (CVA), which can extract the overall magnitude and direction of change from multi-spectral, multi-temporal remote sensing data, is used to generate an initial difference image. Combining unsupervised CVA and saliency with Principal Component Analysis (PCA), a guided change detection method is proposed for UAV and aerial images. By applying saliency generation to the difference map extracted via CVA, potentially changed areas are obtained, and by thresholding the saliency map, most areas of interest are correctly extracted. Finally, PCA is applied to extract features, and K-means clustering detects changed and unchanged regions within the extracted areas. The proposed method was applied to image sets over flooded and typhoon-damaged areas and, compared against manually extracted ground truth, performed 95 percent better than the PCA approach on all data sets. Finally, we compared our approach with the PCA K-means method to show its effectiveness.
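The CVA step that produces the initial difference image can be sketched as below. This is a minimal sketch: the mean-based threshold merely stands in for the paper's saliency-guided thresholding, and the toy images are illustrative.

```python
import numpy as np

def cva_magnitude(img_t1, img_t2):
    """Change Vector Analysis magnitude: per-pixel Euclidean length of the
    spectral difference vector between two co-registered (H, W, B) images."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    return np.sqrt((diff ** 2).sum(axis=-1))

# Two toy 3-band images; a 2x2 patch changes between the acquisition dates.
t1 = np.zeros((4, 4, 3))
t2 = np.zeros((4, 4, 3))
t2[1:3, 1:3] = 3.0
mag = cva_magnitude(t1, t2)
changed = mag > mag.mean()  # crude threshold in place of the saliency step
print(int(changed.sum()))   # 4 changed pixels
```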

Visual Search Model based on Saliency and Scene-Context in Real-World Images (실제 이미지에서 현저성과 맥락 정보의 영향을 고려한 시각 탐색 모델)

  • Choi, Yoonhyung;Oh, Hyungseok;Myung, Rohae
    • Journal of Korean Institute of Industrial Engineers / v.41 no.4 / pp.389-395 / 2015
  • According to much research in cognitive science, the impact of scene-context on human visual search in real-world images can be as important as that of saliency. This study therefore proposed an Adaptive Control of Thought-Rational (ACT-R) model of visual search in real-world images based on both saliency and scene-context. The modeling method uses the utility system of ACT-R to describe the influences of saliency and scene-context. The model was validated by comparing its predictions with eye-tracking data from experiments in a simple task in which subjects searched for targets in indoor bedroom images. Results show that the model data fit the eye-tracking data quite well. In conclusion, the modeling method proposed in this study can provide an accurate model of human performance in visual search tasks in real-world images.

Robust Face Detection Based on Knowledge-Directed Specification of Bottom-Up Saliency

  • Lee, Yu-Bu;Lee, Suk-Han
    • ETRI Journal / v.33 no.4 / pp.600-610 / 2011
  • This paper presents a novel approach to face detection that localizes faces as goal-specific saliencies in a scene, using the framework of the selective visual attention of a human with a particular goal in mind. The proposed approach aims to achieve human-like robustness as well as efficiency in face detection under large scene variations. The key is to establish how knowledge relevant to the goal interacts with the bottom-up processing of external visual stimuli for saliency detection. We propose directly incorporating goal-related knowledge into the specification and/or modification of the internal process of a general bottom-up saliency detection framework. More specifically, prior knowledge of the human face, such as its size, skin color, and shape, directly sets the window size and color signature used for computing the center of difference, and modifies the importance weight, transforming the framework into goal-specific saliency detection. Experimental evaluation shows that the proposed method reaches a detection rate of 93.4% with a false positive rate of 7.1%, indicating robustness against wide variations in scale and rotation.

Image Caption Area extraction using Saliency Map and Max Filter (중요도 맵과 최댓값 필터를 이용한 영상 자막 영역 추출)

  • Kim, Youngjin;Kim, Manbae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2014.11a / pp.63-64 / 2014
  • In this paper, caption regions in video are extracted using a saliency map and a max filter. A saliency map highlights conspicuous regions, i.e., regions whose brightness differs strongly from their surroundings and regions with strong edge features. The max filter replaces the center pixel with the maximum value in its window; it is effective at removing extreme impulse noise and is particularly useful for removing dark spikes. These two features are combined to extract the caption regions of a video.
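The max filter described above can be sketched as a plain NumPy version; the 3x3 window size and the toy image are illustrative choices, not the paper's settings.

```python
import numpy as np

def max_filter(img, size=3):
    """Replace each pixel with the maximum of its size x size neighborhood.
    Dark impulse (spike) noise is absorbed by the brighter surroundings."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + size, x:x + size].max()
    return out

# A flat bright image with one dark spike at the center.
img = np.full((5, 5), 100.0)
img[2, 2] = 0.0
filtered = max_filter(img)
print(filtered[2, 2])  # 100.0 -- the dark spike is removed
```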


A Saliency-Based Focusing Region Selection Method for Robust Auto-Focusing

  • Jeon, Jaehwan;Cho, Changhun;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing / v.1 no.3 / pp.133-142 / 2012
  • This paper presents a salient region detection algorithm for auto-focusing based on the characteristics of human visual attention. To describe saliency at the local, regional, and global levels, this paper proposes a set of novel features, including multi-scale local contrast, variance, center-surround entropy, and closeness to the center. These features are then prioritized to produce a saliency map. The major advantage of the proposed approach is twofold: i) robustness to changes in focus and ii) low computational complexity. Experimental results showed that the proposed method outperforms existing low-level feature-based methods in both robustness and accuracy for auto-focusing.
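The local-contrast feature named above can be illustrated at a single scale (the paper computes it at multiple scales); the neighborhood radius and toy image are assumptions.

```python
import numpy as np

def local_contrast(img, y, x, r):
    """Single-scale local contrast at (y, x): absolute difference between
    the pixel and the mean of its (2r+1) x (2r+1) neighborhood."""
    patch = img[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1]
    return abs(float(img[y, x]) - patch.mean())

# A single bright pixel on a dark background is highly contrastive.
img = np.zeros((5, 5))
img[2, 2] = 9.0
print(local_contrast(img, 2, 2, 1))  # 9 - mean(1.0) = 8.0
```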


Implementation of a Stereo Vision Using Saliency Map Method

  • Choi, Hyeung-Sik;Kim, Hwan-Sung;Shin, Hee-Young;Lee, Min-Ho
    • Journal of Advanced Marine Engineering and Technology / v.36 no.5 / pp.674-682 / 2012
  • A new intelligent stereo vision sensor system for the motion and depth control of unmanned vehicles was studied. A new bottom-up saliency map model for a human-like active stereo vision system, based on the biological visual process, was developed to select a target object. If the left and right cameras successfully find the same target object, the implemented active vision system focuses on the landmark and can extract depth and direction information. Using this information, the unmanned vehicle can approach the target autonomously. A number of tests of the proposed bottom-up saliency map were performed, and their results are presented.
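Once both cameras fixate the same landmark, depth follows from the standard stereo triangulation relation depth = f * B / d for a rectified pair. The focal length, baseline, and disparity below are illustrative values, not the paper's calibration.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate depth for a rectified stereo pair:
    depth = focal length (px) * baseline (m) / disparity (px)."""
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 12 cm baseline, 21 px disparity.
print(depth_from_disparity(700.0, 0.12, 21.0))  # 4.0 metres
```

Note that depth grows as disparity shrinks, so distant landmarks need sub-pixel disparity accuracy.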

Efficient Image Segmentation Algorithm Based on Improved Saliency Map and Superpixel (향상된 세일리언시 맵과 슈퍼픽셀 기반의 효과적인 영상 분할)

  • Nam, Jae-Hyun;Kim, Byung-Gyu
    • Journal of Korea Multimedia Society / v.19 no.7 / pp.1116-1126 / 2016
  • Image segmentation is widely used in the pre-processing stage of image analysis, so its accuracy matters for the performance of any image-based analysis system. An efficient image segmentation method is proposed, consisting of a filtering process for superpixels, improved saliency map information, and a merging process. The proposed algorithm removes regions that are uneven or too small, based on a comparison of the areas of smoothed superpixels, in order to keep the generated superpixels similar in size. In addition, applying a bilateral filter to an existing saliency map that represents human visual attention improves the separation between objects and background. Finally, a segmentation result is obtained through the suggested merging process without any prior knowledge or information. The performance of the proposed algorithm is verified experimentally.
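The small-superpixel filtering idea can be sketched as below; the label map and area threshold are illustrative, and the paper's actual smoothing and merge steps are more involved.

```python
import numpy as np

def small_region_mask(labels, min_area):
    """Flag superpixels whose pixel area falls below min_area; such regions
    would be merged into a neighbour to keep superpixels of similar size."""
    ids, counts = np.unique(labels, return_counts=True)
    small = [i for i, c in zip(ids, counts) if c < min_area]
    return np.isin(labels, small)

# Toy label map with three superpixels; region 2 covers only 2 pixels.
labels = np.array([[0, 0, 1],
                   [0, 0, 1],
                   [2, 2, 1]])
mask = small_region_mask(labels, min_area=3)
print(int(mask.sum()))  # 2 pixels belong to an undersized superpixel
```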

The Method to Measure Saliency Values for Salient Region Detection from an Image

  • Park, Seong-Ho;Yu, Young-Jung
    • Journal of information and communication convergence engineering / v.9 no.1 / pp.55-58 / 2011
  • In this paper, we introduce an improved method for measuring the saliency values of pixels in an image. The proposed saliency measure is formulated using local color features and a statistical framework. In a preprocessing step, rough salient pixels are determined from the local contrast of an image region with respect to its neighborhood at various scales. Then, the saliency value of each pixel is calculated by Bayes' rule using the rough salient pixels. Experiments show that our approach outperforms the current Bayes'-rule-based method.
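The Bayes'-rule step can be illustrated with scalar likelihoods: the posterior probability that a pixel is salient given its color feature, computed over a rough salient/background split. The probability values below are illustrative, not estimated from an image as in the paper.

```python
def bayes_saliency(p_feat_given_sal, p_feat_given_bg, prior_sal):
    """Posterior P(salient | feature) via Bayes' rule, given the feature
    likelihoods under the salient and background hypotheses and a prior."""
    prior_bg = 1.0 - prior_sal
    num = p_feat_given_sal * prior_sal
    den = num + p_feat_given_bg * prior_bg
    return num / den

# A color feature 8x more likely under "salient" than "background",
# with 30% of the rough pixels marked salient in preprocessing.
print(round(bayes_saliency(0.8, 0.1, 0.3), 3))  # 0.774
```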

Spatiotemporal Saliency-Based Video Summarization on a Smartphone (스마트폰에서의 시공간적 중요도 기반의 비디오 요약)

  • Lee, Won Beom;Williem, Williem;Park, In Kyu
    • Journal of Broadcast Engineering / v.18 no.2 / pp.185-195 / 2013
  • In this paper, we propose a video summarization technique for smartphones based on spatiotemporal saliency. The proposed technique detects scene changes by computing the difference between color histograms, which is robust to camera and object motion. The similarity between adjacent frames, face regions, and frame saliency are then computed to analyze the spatiotemporal saliency of a video clip. An over-segmented hierarchical tree is created from the scene changes and updated iteratively using merging and maintenance energies computed during the analysis. In the updated hierarchical tree, segmented frames are extracted by applying a greedy algorithm to nodes with high saliency, subject to the reduction ratio and minimum interval requested by the user. Experimental results show that the proposed method summarizes a 2-minute video in about 10 seconds on a commercial smartphone, with summarization quality superior to that of the commercial video editing software Muvee.
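The histogram-difference scene-change cue can be sketched as follows; the bin count, the L1 distance on grayscale histograms, and the toy frames are illustrative choices, not the paper's exact color-histogram setup.

```python
import numpy as np

def hist_diff(frame_a, frame_b, bins=8):
    """L1 distance between normalized gray-level histograms of two frames.
    A large value suggests a scene change; small per-pixel motion that keeps
    the intensity distribution similar barely affects it."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return float(np.abs(ha - hb).sum())

dark = np.zeros((4, 4))          # all intensities in the first bin
bright = np.full((4, 4), 255.0)  # all intensities in the last bin
print(hist_diff(dark, dark))     # 0.0 : same scene
print(hist_diff(dark, bright))   # 2.0 : maximal histogram change
```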