Title/Summary/Keyword: image saliency


Accurate Human Localization for Automatic Labelling of Human from Fisheye Images

  • Than, Van Pha; Nguyen, Thanh Binh; Chung, Sun-Tae
    • Journal of Korea Multimedia Society / v.20 no.5 / pp.769-781 / 2017
  • Deep learning networks such as Convolutional Neural Networks (CNNs) perform well in many computer vision applications, including image classification and object detection. For deployment on embedded systems with limited processing power and memory, a deep learning network may need to be simplified, but a simplified network cannot learn every possible scene. One realistic strategy for embedded deep learning is to construct a simplified network model optimized for the scene images of the installation site, which in turn requires automatic training for commercialization. In this paper, as an intermediate step toward automatic training under fisheye camera environments, we study more precise human localization in fisheye images and propose an accurate human localization method, the Automatic Ground-Truth Labelling Method (AGTLM). AGTLM first localizes candidate human bounding boxes using a GoogLeNet-LSTM approach, verifies them with a GoogLeNet-based CNN, and finally refines them more tightly by applying a salient object detection technique. Several experiments show the improvements of the proposed method in accuracy and tightness.
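
The final refinement step, tightening a candidate box to the salient object it contains, can be illustrated with a short sketch. The paper does not publish code, so the saliency model and the 0.5 threshold below are assumptions; any off-the-shelf saliency detector producing a map in [0, 1] would fit.

```python
import numpy as np

def tighten_bbox(saliency, bbox, thresh=0.5):
    """Shrink (x, y, w, h) to the salient pixels inside it.

    saliency: HxW float map in [0, 1]; bbox: candidate box from the detector.
    """
    x, y, w, h = bbox
    roi = saliency[y:y + h, x:x + w]
    ys, xs = np.nonzero(roi >= thresh)   # salient pixels inside the box
    if len(xs) == 0:                     # nothing salient: keep the box as-is
        return bbox
    return (x + xs.min(), y + ys.min(),
            xs.max() - xs.min() + 1, ys.max() - ys.min() + 1)
```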

Security Vulnerability Verification for Open Deep Learning Libraries (공개 딥러닝 라이브러리에 대한 보안 취약성 검증)

  • Jeong, JaeHan; Shon, Taeshik
    • Journal of the Korea Institute of Information Security & Cryptology / v.29 no.1 / pp.117-125 / 2019
  • Deep learning, which has recently been applied in various fields, is threatened by adversarial attacks. In this paper, we experimentally verify that adversarial samples generated by malicious attackers lower the classification accuracy of image classification models. Using the MNIST dataset, we measured the classification accuracy after injecting adversarial samples into an autoencoder-based classifier and a CNN (convolutional neural network) classifier, built with the TensorFlow and PyTorch libraries. Adversarial samples were generated by transforming the MNIST test set with JSMA (Jacobian-based Saliency Map Attack) and FGSM (Fast Gradient Sign Method). When these samples were injected into the classification models, accuracy decreased by between 21.82% and 39.08%.
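
Of the two attacks, FGSM is the simpler to illustrate: it perturbs each pixel by a fixed step in the direction of the sign of the loss gradient. A minimal PyTorch sketch follows; the model, the epsilon value, and the [0, 1] pixel range are assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    """Fast Gradient Sign Method: one signed-gradient step on the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # move each pixel by eps in the direction that increases the loss
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```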

A Novel Multifocus Image Fusion Algorithm Based on Nonsubsampled Contourlet Transform

  • Liu, Cuiyin; Cheng, Peng; Chen, Shu-Qing; Wang, Cuiwei; Xiang, Fenghong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.3 / pp.539-557 / 2013
  • A novel multifocus image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) is proposed in this paper. To retain the focusing properties and visual information of the source images in the fused image while remaining sensitive to human visual perception, a local multidirection variance (LEOV) fusion rule is proposed for the lowpass subband coefficients. To introduce more visual saliency, a modified local contrast is defined. In addition, based on the distribution of the highpass subband coefficients, a direction vector is proposed to constrain the modified local contrast and construct a new fusion rule for highpass subband coefficient selection. The NSCT is a flexible multiscale, multidirection, and shift-invariant tool for image decomposition, which can be implemented via the à trous algorithm. The proposed NSCT-based fusion algorithm not only prevents artifacts and errors from being introduced into the fused image, but also eliminates the 'block effect' and 'frequency aliasing' phenomena. Experimental results show that the proposed method achieves better fusion results in contrast and clarity than wavelet-based and contourlet-based fusion methods.
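
The lowpass rule selects, at each position, the coefficient whose neighborhood shows more activity. The paper's exact LEOV definition is not reproduced here; the sketch below uses plain local variance as a stand-in, and the 7-pixel window is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(a, size=7):
    a = np.asarray(a, dtype=float)
    mean = uniform_filter(a, size)
    return uniform_filter(a * a, size) - mean * mean

def fuse_lowpass(a, b, size=7):
    # keep the coefficient whose neighborhood varies more (better focused)
    mask = local_variance(a, size) >= local_variance(b, size)
    return np.where(mask, a, b)
```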

An Explainable Deep Learning-Based Classification Method for Facial Image Quality Assessment

  • Kuldeep Gurjar; Surjeet Kumar; Arnav Bhavsar; Kotiba Hamad; Yang-Sae Moon; Dae Ho Yoon
    • Journal of Information Processing Systems / v.20 no.4 / pp.558-573 / 2024
  • Considering factors such as illumination, camera quality variations, and background-specific variations, identifying a face with a smartphone-based facial image capture application is challenging. Face image quality assessment takes a face image as input and produces some form of "quality" estimate as output. Quality assessment techniques typically categorize images with deep learning methods, whose models behave as black boxes; this raises the question of their trustworthiness. Several explainability techniques have gained importance in building this trust by providing visual evidence of the image regions on which a deep learning model bases its prediction. Here, we developed a technique for reliable prediction of facial image quality prior to medical analysis and security operations. A combination of gradient-weighted class activation mapping (Grad-CAM) and local interpretable model-agnostic explanations (LIME) was used to explain the model. This approach has been implemented in the preselection of facial images for skin feature extraction, which is important in critical medical science applications. We demonstrate that the combined explanations provide better visual evidence for the model, with the saliency-map and perturbation-based explainability techniques verifying the predictions.
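
Grad-CAM, the saliency-map half of the combination, weights a convolutional layer's feature maps by the spatial mean of their gradients with respect to a class score. The sketch below is a generic single-image implementation, not the authors' code; the choice of target layer and a batch size of one are assumptions.

```python
import torch

def grad_cam(model, x, target_layer, class_idx=None):
    """Class activation map for one image x of shape (1, C, H, W)."""
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    logits = model(x)
    idx = logits[0].argmax().item() if class_idx is None else class_idx
    model.zero_grad()
    logits[0, idx].backward()
    h1.remove(); h2.remove()
    w = grads[0].mean(dim=(2, 3), keepdim=True)              # per-channel weights
    cam = torch.relu((w * feats[0]).sum(dim=1))[0].detach()  # weighted sum over channels
    return cam / (cam.max() + 1e-8)                          # normalize to [0, 1]
```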

Efficient 3D Geometric Structure Inference and Modeling for Tensor Voting based Region Segmentation (효과적인 3차원 기하학적 구조 추정 및 모델링을 위한 텐서 보팅 기반 영역 분할)

  • Kim, Sang-Kyoon; Park, Soon-Young; Park, Jong-Hyun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.3 / pp.10-17 / 2012
  • Image-based 3D scenes can now be found in many popular vision systems, computer games, and virtual reality tours. In this paper, we propose a fully automatic method for creating a 3D virtual scene from a single 2D image, similar to the creation of a pop-up illustration in a children's book. In particular, to estimate the geometric structure of a 3D scene from a single outdoor image, we apply tensor voting to image segmentation. Tensor voting exploits the fact that the pixels of a homogeneous image region usually lie close together on a smooth surface, so the tokens corresponding to the centers of such regions receive high saliency values. Our algorithm then labels the regions of the input image into coarse categories: "ground", "sky", and "vertical". These labels are used to "cut and fold" the image into a pop-up model under a set of simple assumptions. The experimental results show that our method successfully segments coarse regions in many complex natural scenes and can create a 3D pop-up model that infers structure from the segmented regions.
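
The saliency assigned to each token can be illustrated with a deliberately simplified voting pass: every token accumulates a second-order tensor from its neighbors, with vote strength decaying by distance, and the eigenvalues of that tensor measure how much consistent support the token receives. This is a toy reduction of tensor voting, not the paper's implementation; the Gaussian scale sigma and the use of the smallest eigenvalue as a region-saliency proxy are assumptions.

```python
import numpy as np

def token_saliency(points, sigma=10.0):
    """points: (N, 2) token positions; returns one saliency value per token."""
    sal = np.zeros(len(points))
    for i, p in enumerate(points):
        d = points - p
        r = np.linalg.norm(d, axis=1)
        m = r > 0
        u = d[m] / r[m, None]                  # unit directions to neighbors
        w = np.exp(-(r[m] / sigma) ** 2)       # votes decay with distance
        T = np.einsum('n,ni,nj->ij', w, u, u)  # accumulated second-order votes
        sal[i] = np.linalg.eigvalsh(T)[0]      # isotropic support => high value
    return sal
```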

An Adaptive Iterative Algorithm for Motion Deblurring Based on Salient Intensity Prior

  • Yu, Hancheng; Wang, Wenkai; Fan, Wenshi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.855-870 / 2019
  • In this paper, an adaptive iterative algorithm is proposed for motion deblurring using a salient intensity prior. It is based on the observation that the salient intensity of a clear image is sparse, while that of a blurred image becomes less sparse during the blurring process. The salient intensity prior enforces sparsity of the saliency distribution in the latent image, which guides blind deblurring in various scenarios. Furthermore, an adaptive iteration strategy adjusts the number of iterations by evaluating the quality of the latent image and the similarity of successive blur kernel estimates, effectively restraining the negative influence of excess iterations at each scale. Experiments on publicly available image deblurring datasets demonstrate that the proposed algorithm achieves state-of-the-art deblurring results at small computational cost.
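
The adaptive iteration strategy can be sketched as an early-exit loop around one deblurring step: iterate until the kernel estimate stops changing, rather than for a fixed count. The deblur_step callable, the norm used for kernel similarity, and the tolerance are placeholders, since the paper's exact criteria are not quoted here.

```python
import numpy as np

def deblur_scale(deblur_step, image, kernel, max_iter=20, tol=1e-3):
    """Run deblur_step(image, kernel) -> (image, kernel) until the kernel settles."""
    for _ in range(max_iter):
        new_image, new_kernel = deblur_step(image, kernel)
        # stop early once successive kernel estimates are nearly identical
        if np.linalg.norm(new_kernel - kernel) < tol:
            return new_image, new_kernel
        image, kernel = new_image, new_kernel
    return image, kernel
```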

Retargeting method from panorama image for Mobile device (모바일 기기를 위한 광시야각 영상의 Retargeting 기법)

  • Kim, Jung-Un; Kang, Hang-Bong
    • Proceedings of the Korea Information Processing Society Conference / 2011.04a / pp.469-472 / 2011
  • This paper proposes a hybrid image retargeting technique that accounts for the display aspect ratio of mobile devices. When a wide-field-of-view image of arbitrary aspect ratio is displayed on a mobile device, the proposed technique analyzes the image's saliency to obtain its energy distribution and gravity center, so that shrinking the image minimizes the loss of important regions and produces a natural-looking result. Based on the energy distribution, the image is partitioned into n regions of equal energy, and cropping, linear scaling, or seam carving is applied to whichever region suits it best, according to each region's distribution characteristics. Finally, the strengths of the proposed technique are verified by comparing its results with those of existing methods.
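
The equal-energy partition is the step most amenable to a short sketch: cut the image at the columns where the cumulative saliency energy crosses each k/n of the total. Cutting along columns (rather than rows) is an assumption about the paper's layout.

```python
import numpy as np

def equal_energy_cuts(saliency, n):
    """Column indices splitting the image into n strips of equal saliency energy."""
    col_energy = saliency.sum(axis=0)
    cum = np.cumsum(col_energy) / col_energy.sum()
    # first column where the cumulative energy reaches k/n, for k = 1..n-1
    return [int(np.searchsorted(cum, k / n)) for k in range(1, n)]
```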

Efficient Object-based Image Retrieval Method using Color Features from Salient Regions

  • An, Jaehyun; Lee, Sang Hwa; Cho, Nam Ik
    • IEIE Transactions on Smart Processing and Computing / v.6 no.4 / pp.229-236 / 2017
  • This paper presents an efficient object-based color image-retrieval algorithm that is suitable for the classification and retrieval of images from small to mid-scale datasets, such as images in PCs, tablets, phones, and cameras. The proposed method first finds salient regions by using regional feature vectors, and also finds several dominant colors in each region. Then, each salient region is partitioned into small sub-blocks, which are assigned 1 or 0 with respect to the number of pixels corresponding to a dominant color in the sub-block. This gives a binary map for the dominant color, and this process is repeated for the predefined number of dominant colors. Finally, we have several binary maps, each of which corresponds to a dominant color in a salient region. Hence, the binary maps represent the spatial distribution of the dominant colors in the salient region, and the union (OR operation) of the maps can describe the approximate shapes of salient objects. Also proposed in this paper is a matching method that uses these binary maps and which needs very few computations, because most operations are binary. Experiments on widely used color image databases show that the proposed method performs better than state-of-the-art and previous color-based methods.
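
The binary maps at the core of the descriptor are straightforward to sketch: a sub-block is set to 1 when enough of its pixels are close to the dominant color. The block size, color tolerance, and pixel-fraction threshold below are assumptions, as the paper's exact values are not quoted here; matching then reduces to cheap bitwise comparisons.

```python
import numpy as np

def binary_color_map(region, color, block=8, tol=30.0, min_frac=0.25):
    """region: (H, W, 3) array; color: one dominant RGB color; returns a 0/1 grid."""
    gh, gw = region.shape[0] // block, region.shape[1] // block
    bmap = np.zeros((gh, gw), dtype=np.uint8)
    for i in range(gh):
        for j in range(gw):
            blk = region[i * block:(i + 1) * block, j * block:(j + 1) * block]
            near = np.linalg.norm(blk.astype(float) - color, axis=2) < tol
            bmap[i, j] = near.mean() >= min_frac   # enough dominant-color pixels?
    return bmap

# matching two maps is a cheap binary operation, e.g. Hamming similarity:
# sim = 1.0 - np.logical_xor(map_a, map_b).mean()
```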

Salient Object Detection via Adaptive Region Merging

  • Zhou, Jingbo; Zhai, Jiyou; Ren, Yongfeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.9 / pp.4386-4404 / 2016
  • Most existing salient object detection algorithms employ segmentation to eliminate background noise and reduce computation, treating each segment as a processing unit. However, individual small segments carry little information about global content, so such schemes have limited capability for modeling global perceptual phenomena. In this paper, a novel salient object detection algorithm based on region merging is proposed. An adaptive merging scheme reassembles regions according to their color dissimilarities: a region R is merged with an adjacent region Q if R has the lowest dissimilarity with Q among all of Q's adjacent regions. To guide the merging process, superpixels located on the image boundary are treated as seeds. However, part of the image boundary may be occupied by the foreground object; to avoid this case, we locate and eliminate erroneous boundaries before region merging. We show that encouraging performance can be obtained even though only three simple region saliency measurements are adopted for each region. Experiments on four benchmark datasets, MSRA-B, SOD, SED, and iCoSeg, show that the proposed method enhances objects uniformly and achieves state-of-the-art performance compared with nine existing methods.
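
The merging rule reads as a nearest-neighbor criterion on region colors, which the sketch below makes concrete for one pass. The adjacency representation and the use of Euclidean distance on mean colors are assumptions; the paper's dissimilarity measure may differ.

```python
import numpy as np

def merge_pass(mean_colors, adjacency):
    """One merging pass: map each region Q to its most similar adjacent region R.

    mean_colors: dict region_id -> 3-element np.ndarray (mean region color);
    adjacency: dict region_id -> set of adjacent region ids.
    """
    merges = {}
    for q, neighbors in adjacency.items():
        if not neighbors:
            continue
        # R is the adjacent region with the lowest color dissimilarity to Q
        r = min(neighbors,
                key=lambda p: np.linalg.norm(mean_colors[p] - mean_colors[q]))
        merges[q] = r
    return merges
```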

A Novel Text Sample Selection Model for Scene Text Detection via Bootstrap Learning

  • Kong, Jun; Sun, Jinhua; Jiang, Min; Hou, Jian
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.771-789 / 2019
  • Text detection has been a popular research topic in computer vision, yet prevalent text detection algorithms find it difficult to avoid dependence on training datasets. To overcome this problem, we propose a novel unsupervised text detection algorithm inspired by bootstrap learning. First, a text candidate in a novel superpixel form is proposed to improve the text recall rate through image segmentation. Second, we propose a unique text sample selection model (TSSM) that extracts text samples from the current image itself, eliminating the database dependency. Specifically, to improve the precision of the samples, we combine maximally stable extremal regions (MSERs) with a saliency map to generate sample reference maps using a double-threshold scheme. Finally, a multiple kernel boosting method generates a strong text classifier by combining multiple single-kernel SVMs trained on the samples selected by the TSSM. Experimental results on standard datasets demonstrate that our text detection method is robust to complex backgrounds and multilingual text and performs stably across different standard datasets.
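
The double-threshold scheme can be read as follows: pixels that are both inside an MSER and highly salient become positive (text) samples, low-saliency pixels outside any MSER become negative samples, and everything in between is left unlabeled. The sketch below encodes that reading; the two threshold values are placeholders, not the paper's.

```python
import numpy as np

def sample_reference_maps(mser_mask, saliency, t_high=0.7, t_low=0.3):
    """mser_mask: boolean MSER map; saliency: float map in [0, 1]."""
    text = mser_mask & (saliency >= t_high)          # confident text samples
    background = ~mser_mask & (saliency <= t_low)    # confident background samples
    return text, background                          # the rest stays unlabeled
```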