• Title/Summary/Keyword: natural scene image

Salient Region Extraction based on Global Contrast Enhancement and Saliency Cut for Image Information Recognition of the Visually Impaired

  • Yoon, Hongchan; Kim, Baek-Hyun; Mukhriddin, Mukhiddinov; Cho, Jinsoo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.5 / pp.2287-2312 / 2018
  • Extracting key visual information from images containing natural scenes is a challenging task and an important step in enabling the visually impaired to recognize information through tactile graphics. In this study, a novel method is proposed for extracting salient regions based on global contrast enhancement and saliency cuts in order to improve the process of recognizing images for the visually impaired. To accomplish this, an image enhancement technique is applied to natural scene images, and a saliency map is acquired to measure the color contrast of homogeneous regions against other areas of the image. The saliency map also supports automatic salient region extraction, referred to as saliency cuts, and assists in obtaining a high-quality binary mask. Finally, outer boundaries and inner edges are detected in natural scene images to identify visually significant edges. Experimental results indicate that the proposed method extracts salient objects effectively and achieves remarkable performance compared to conventional methods. Our method offers benefits in extracting salient objects, generating simple but important edges from natural scene images, and providing information to the visually impaired.
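
A minimal sketch of the global-contrast saliency idea described in this abstract (not the authors' exact pipeline): saliency is measured as each pixel's Lab color distance from the image's mean color, and Otsu thresholding produces the binary mask from which boundaries can be traced. The input filename is a placeholder.

```python
import cv2
import numpy as np

def saliency_map(bgr: np.ndarray) -> np.ndarray:
    blurred = cv2.GaussianBlur(bgr, (5, 5), 0)
    lab = cv2.cvtColor(blurred, cv2.COLOR_BGR2LAB).astype(np.float32)
    mean = lab.reshape(-1, 3).mean(axis=0)           # global mean Lab color
    sal = np.linalg.norm(lab - mean, axis=2)         # contrast to the mean
    return cv2.normalize(sal, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def saliency_cut(bgr: np.ndarray) -> np.ndarray:
    sal = saliency_map(bgr)
    _, mask = cv2.threshold(sal, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask                                       # binary salient-region mask

img = cv2.imread("scene.jpg")                         # hypothetical input file
mask = saliency_cut(img)
edges = cv2.Canny(mask, 100, 200)                     # outer boundaries of the mask
```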

Image Scene Classification of Multiclass (다중 클래스의 이미지 장면 분류)

  • Shin, Seong-Yoon; Lee, Hyun-Chang; Shin, Kwang-Seong; Kim, Hyung-Jin; Lee, Jae-Wan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.551-552 / 2021
  • In this paper, we present a multi-class image scene classification method based on transfer learning. Multiple classes of natural scene images are classified by relying on network models pre-trained on the large ImageNet dataset. In the experiment, we obtained excellent results by classifying with an optimized ResNet model on Kaggle's Intel Image Classification dataset.
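
A hedged sketch of the transfer-learning setup this abstract describes: a ResNet pre-trained on ImageNet with its final layer replaced for the six scene classes of the Intel Image Classification dataset. The ResNet variant, hyperparameters, and data directory are illustrative assumptions, not the paper's reported values.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 6  # buildings, forest, glacier, mountain, sea, street

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():
    p.requires_grad = False                  # freeze the pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable head

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("intel_image/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for x, y in loader:                          # one illustrative epoch
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```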

A Study on Localization of Text in Natural Scene Images (자연 영상에서의 정확한 문자 검출에 관한 연구)

  • Choi, Mi-Young; Kim, Gye-Young; Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.13 no.5 / pp.77-84 / 2008
  • This paper proposes a new approach that eliminates the reflectance component for the localization of text in natural scene images. Natural scene images normally contain an illumination component as well as a reflectance component. It is well known that the reflectance component usually obstructs the task of detecting and recognizing objects such as text in the scene, since it blurs the overall image. We have developed an approach that efficiently removes reflectance components while preserving illumination components. To determine the lighting environment, we decide whether an input image is normal or polarized using a histogram of the red component. For a normal image, we acquire the text region without additional processing; for a polarized image, we remove light reflected from objects using homomorphic filtering. We then determine each text region based on a color merging technique and a saliency map. Finally, we localize the text region from these two sets of candidate regions.
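
A minimal homomorphic-filtering sketch consistent with the reflectance-removal step described above (the filter shape and gain values are assumptions). Taking the log of an image separates illumination (low frequency) from reflectance (high frequency); a Gaussian emphasis filter in the Fourier domain then attenuates the high-frequency reflectance part.

```python
import numpy as np

def homomorphic_filter(gray, gamma_low=1.0, gamma_high=0.5, sigma=30.0):
    """gray: float array in [0, 1]; returns a reflectance-suppressed image."""
    log_img = np.log1p(gray)
    spec = np.fft.fftshift(np.fft.fft2(log_img))

    rows, cols = gray.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    d2 = u[:, None] ** 2 + v[None, :] ** 2        # squared distance from center
    # gamma_high < gamma_low suppresses high frequencies (reflectance detail)
    h = gamma_low + (gamma_high - gamma_low) * (1 - np.exp(-d2 / (2 * sigma ** 2)))

    filtered = np.fft.ifft2(np.fft.ifftshift(spec * h)).real
    return np.clip(np.expm1(filtered), 0.0, 1.0)
```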

Text Region Extraction of Natural Scene Images using Gray-level Information and Split/Merge Method (명도 정보와 분할/합병 방법을 이용한 자연 영상에서의 텍스트 영역 추출)

  • Kim Ji-Soo; Kim Soo-Hyung; Choi Yeong-Woo
    • Journal of KIISE: Software and Applications / v.32 no.6 / pp.502-511 / 2005
  • In this paper, we propose a hybrid analysis method (HAM) for extracting text regions from natural scene images based on gray-intensity information. The HAM is composed of Gray-intensity Information Analysis (GIA) and Split/Merge Analysis (SMA). Our experimental results show that the proposed approach is superior to conventional methods on both simple and complex images.
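
The abstract does not spell out the GIA/SMA details, so the following is a generic quadtree split sketch for gray-level region analysis; the homogeneity criterion (gray-level range below a threshold) and block sizes are assumptions. Merging would then fuse adjacent leaves with similar mean intensities, and candidate text regions would be those contrasting with their surround.

```python
import numpy as np

def quadtree_split(gray, y, x, h, w, thresh=32, min_size=8, regions=None):
    """Recursively split until each block's gray-level range <= thresh."""
    if regions is None:
        regions = []
    block = gray[y:y + h, x:x + w]
    if (block.max() - block.min() <= thresh) or h <= min_size or w <= min_size:
        regions.append((y, x, h, w, float(block.mean())))   # homogeneous leaf
        return regions
    h2, w2 = h // 2, w // 2
    for dy, dx, hh, ww in [(0, 0, h2, w2), (0, w2, h2, w - w2),
                           (h2, 0, h - h2, w2), (h2, w2, h - h2, w - w2)]:
        quadtree_split(gray, y + dy, x + dx, hh, ww, thresh, min_size, regions)
    return regions
```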

Text Extraction from Complex Natural Images

  • Kumar, Manoj; Lee, Guee-Sang
    • International Journal of Contents / v.6 no.2 / pp.1-5 / 2010
  • The rapid growth in communication technology has led to the development of effective ways of sharing ideas and information in the form of speech and images. Understanding this information has become an important research issue and has drawn the attention of many researchers. Text in a digital image contains much important information about the scene. Detecting and extracting this text is a difficult task with many challenging issues. The main challenges in extracting text from natural scene images are variations in font size, text alignment, font color, illumination, and reflections in the images. In this paper, we propose a connected-component-based method to automatically detect text regions in natural images. Since text regions in images mostly contain repetitions of vertical strokes, we try to find patterns of closely packed vertical edges. Once a group of edges is found, the neighboring vertical edges are connected to each other. Connected regions whose geometric features lie outside the valid specifications are considered outliers and eliminated. The proposed method is more effective than existing methods for slanted or curved characters. Experimental results are given to validate our approach.
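
A hedged sketch of the vertical-edge grouping idea above: detect vertical strokes with a Sobel filter, close small horizontal gaps so neighboring edges connect, then filter connected components by simple geometric rules. The thresholds are illustrative assumptions, not the paper's values.

```python
import cv2
import numpy as np

def detect_text_regions(gray: np.ndarray):
    edges = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)    # vertical strokes
    mag = cv2.convertScaleAbs(edges)
    _, binary = cv2.threshold(mag, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # link edges

    n, _, stats, _ = cv2.connectedComponentsWithStats(closed)
    boxes = []
    for i in range(1, n):                                 # label 0 is background
        x, y, w, h, area = stats[i]
        aspect, fill = w / max(h, 1), area / max(w * h, 1)
        if h > 8 and 0.5 < aspect < 20 and fill > 0.2:    # geometric outlier filter
            boxes.append((x, y, w, h))
    return boxes
```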

Adaptive Scene Classification based on Semantic Concepts and Edge Detection (시멘틱개념과 에지탐지 기반의 적응형 이미지 분류기법)

  • Jamil, Nuraini; Ahmed, Shohel; Kim, Kang-Seok; Kang, Sang-Jil
    • Journal of Intelligence and Information Systems / v.15 no.2 / pp.1-13 / 2009
  • Scene classification and concept-based procedures have been of great interest for image categorization in large databases. Knowing the category to which a scene belongs, we can filter out uninteresting images when searching for a specific scene category such as beach, mountain, forest, or field. In this paper, we propose an adaptive segmentation method for real-world natural scene classification based on semantic modeling. Semantic modeling refers to the classification of sub-regions into semantic concepts such as grass, water, and sky. Our adaptive segmentation method utilizes edge detection to split an image into sub-regions. The frequency of occurrence of these semantic concepts represents the information in the image and classifies it into scene categories. The k-Nearest Neighbor (k-NN) algorithm is applied as the classifier. The empirical results demonstrate that the proposed adaptive segmentation method outperforms Vogel and Schiele's method in terms of accuracy.
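
A minimal sketch of the concept-occurrence representation described above: each image becomes a frequency vector of semantic concepts over its sub-regions, and a k-NN classifier compares those vectors. The concept list and toy training data are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

CONCEPTS = ["grass", "water", "sky", "rock", "sand", "foliage"]

def concept_histogram(region_labels):
    """region_labels: list of per-sub-region concept names for one image."""
    counts = np.array([region_labels.count(c) for c in CONCEPTS], float)
    return counts / max(len(region_labels), 1)            # frequency vector

# Hypothetical training images, already segmented and labeled per sub-region.
train = [(["sky", "water", "water", "sand"], "beach"),
         (["sky", "rock", "rock", "grass"], "mountain"),
         (["foliage", "foliage", "grass", "sky"], "forest")]
X = np.stack([concept_histogram(r) for r, _ in train])
y = [label for _, label in train]

knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)
query = concept_histogram(["sky", "water", "sand", "sand"])
print(knn.predict([query])[0])                            # -> "beach"
```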

Single Image Dehazing Using Dark Channel Prior and Minimal Atmospheric Veil

  • Zhou, Xiao; Wang, Chengyou; Wang, Liping; Wang, Nan; Fu, Qiming
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.1 / pp.341-363 / 2016
  • Haze or fog is a common natural phenomenon. In foggy weather, captured pictures are difficult to use in computer vision systems such as road traffic detection and target tracking. Therefore, image dehazing has become a hot topic in the field of image processing. This paper presents an overview of existing achievements in image dehazing. The intent of this paper is not to review all the relevant works that have appeared in the literature, but rather to focus on two main approaches: image dehazing based on the atmospheric veil and image dehazing based on the dark channel prior. After the overview and a comparative study, we propose an improved image dehazing method based on the two schemes mentioned above. Our method obtains fog-free images by constructing a more desirable atmospheric veil and estimating the atmospheric light more accurately. In addition, we adjust the transmission of sky regions and apply tone mapping to the recovered images. Compared with other state-of-the-art algorithms, experimental results show that images recovered by our algorithm are clearer and more natural, especially in distant scenes and places where the scene depth jumps abruptly.
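
A compact dark-channel-prior sketch of the standard haze model I(x) = J(x)t(x) + A(1 - t(x)), in the style of He et al.; the paper's veil and sky-region refinements are not reproduced here, and the omega, t0, and patch-size values are the commonly assumed defaults.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(img.min(axis=2), kernel)        # min over colors and patch

def dehaze(bgr, omega=0.95, t0=0.1):
    img = bgr.astype(np.float32) / 255.0
    dark = dark_channel(img)

    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    n = max(int(dark.size * 0.001), 1)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)

    t = 1.0 - omega * dark_channel(img / A)          # transmission estimate
    t = np.clip(t, t0, 1.0)[..., None]
    J = (img - A) / t + A                            # recovered scene radiance
    return np.clip(J * 255, 0, 255).astype(np.uint8)
```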

Extraction of Text Alignment by Tensor Voting and its Application to Text Detection (텐서보팅을 이용한 텍스트 배열정보의 획득과 이를 이용한 텍스트 검출)

  • Lee, Guee-Sang; Dinh, Toan Nguyen; Park, Jong-Hyun
    • Journal of KIISE: Software and Applications / v.36 no.11 / pp.912-919 / 2009
  • A novel algorithm using 2D tensor voting and an edge-based approach is proposed for text detection in natural scene images. Tensor voting is used based on the fact that characters in a text line are usually close together on a smooth curve, and therefore the tokens corresponding to the centers of these characters have high curve saliency values. First, a suitable edge-based method is used to find all possible text regions. Since the false positive rate of the text detection results generated by the edge-based method is high, 2D tensor voting is applied to remove false positives and retain only text regions. The experimental results show that our method successfully detects text regions in many complex natural scene images.
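
A toy approximation of the curve-saliency idea above, not the full tensor-voting framework: each token accumulates second-moment tensors of unit directions to its neighbors, and curve saliency is the eigenvalue gap (high when neighbors align along a smooth curve). Real tensor voting uses decaying stick and ball voting fields; the radius and weighting here are assumptions.

```python
import numpy as np

def curve_saliency(tokens, radius=3.0):
    """tokens: (N, 2) array of candidate character centers."""
    saliency = np.zeros(len(tokens))
    for i, p in enumerate(tokens):
        T = np.zeros((2, 2))
        for q in tokens:
            d = q - p
            dist = np.linalg.norm(d)
            if 0 < dist <= radius:
                u = d / dist
                T += np.exp(-dist**2 / radius**2) * np.outer(u, u)
        lam = np.linalg.eigvalsh(T)          # eigenvalues in ascending order
        saliency[i] = lam[1] - lam[0]        # stick (curve) saliency
    return saliency

tokens = np.array([[0, 0], [1, 0.1], [2, 0.15], [3, 0.1], [5, 4]], float)
print(curve_saliency(tokens))                # aligned tokens score higher
```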

Representative Batch Normalization for Scene Text Recognition

  • Sun, Yajie; Cao, Xiaoling; Sun, Yingying
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.7 / pp.2390-2406 / 2022
  • Scene text recognition has important application value and has attracted the interest of many researchers. At present, many methods have achieved good results, but most existing approaches attempt to improve the performance of scene text recognition at the image level. They work well for reading regular scene text. However, there are still many obstacles to recognizing text in low-quality images affected by curvature, occlusion, and blur. This exacerbates the difficulty of feature extraction because the image quality is uneven. In addition, the results of model testing are highly dependent on the training data, so there is still room for improvement in scene text recognition methods. In this work, we present a natural scene text recognizer that improves recognition performance at the feature level, comprising feature representation and feature enhancement. For feature representation, we propose an efficient feature extractor that combines Representative Batch Normalization with ResNet. It reduces the model's dependence on training data and improves the feature representation ability across different instances. For feature enhancement, we use a feature enhancement network to expand the receptive field of the feature maps, so that they contain rich feature information. The enhanced feature representation capability helps to improve the recognition performance of the model. We conducted experiments on 7 benchmarks, which show that this method is highly competitive in recognizing both regular and irregular text. The method achieved the top recognition accuracy on four benchmarks: IC03, IC13, IC15, and SVTP.
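
A hedged PyTorch sketch of a Representative-Batch-Normalization-style layer: standard batch normalization augmented with a centering calibration (instance statistics folded in before normalization) and a scaling calibration (a gated re-scaling after). This follows the general calibration idea the abstract invokes; the exact statistics and gating are simplified assumptions, not the paper's definition.

```python
import torch
import torch.nn as nn

class RepresentativeBN2d(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels)
        self.w_center = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.w_scale = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.b_scale = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        inst_mean = x.mean(dim=(2, 3), keepdim=True)   # per-instance statistics
        x = x + self.w_center * inst_mean              # centering calibration
        x = self.bn(x)                                 # standard batch norm
        inst_std = x.std(dim=(2, 3), keepdim=True)
        gate = torch.sigmoid(self.w_scale * inst_std + self.b_scale)
        return x * gate                                # scaling calibration

feat = torch.randn(4, 64, 32, 100)                     # N, C, H, W feature map
print(RepresentativeBN2d(64)(feat).shape)
```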

A Study on Visual Mise-en-Scene of VR Animation (VR 애니메이션 의 시각적 미장센 연구)

  • Lee, Lang-Goo; Chung, Jean-Hun
    • Journal of Digital Convergence / v.15 no.9 / pp.407-413 / 2017
  • Mise-en-scene is a directing method of image aesthetics for constructing the screen and space. Mise-en-scene is an important factor not only in plays and movies but also in animation, and it is a strong method for inducing the audience to immerse themselves in a work and to sustain that immersion. Based on theories of mise-en-scene in film, this study examined animation's mise-en-scene, how mise-en-scene is directed and expressed in virtual spaces, and what factors and characteristics induce and sustain audience immersion, through an analysis of the visual mise-en-scene factors of a specific VR animation case. It was found that visual mise-en-scene factors such as characters and props, background, the unique quality and friendliness of characters, natural movement and acting, symbolism and utilization, and the variety and consistency of the background induce and sustain immersion. These findings suggest the need for differentiated measures and methods to catch the audience's eyes and sustain immersion by utilizing the characteristics of visual mise-en-scene factors in future VR animation, and we expect this study to be helpful for related areas.