• Title/Summary/Keyword: Salient region extraction

10 search results

Salient Region Extraction based on Global Contrast Enhancement and Saliency Cut for Image Information Recognition of the Visually Impaired

  • Yoon, Hongchan;Kim, Baek-Hyun;Mukhriddin, Mukhiddinov;Cho, Jinsoo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.5
    • /
    • pp.2287-2312
    • /
    • 2018
  • Extracting key visual information from natural scene images is a challenging task and an important step for the visually impaired to recognize information through tactile graphics. In this study, a novel method is proposed for extracting salient regions based on global contrast enhancement and saliency cuts in order to improve the process of recognizing images for the visually impaired. To accomplish this, an image enhancement technique is applied to natural scene images, and a saliency map is acquired to measure the color contrast of homogeneous regions against other areas of the image. The saliency maps also enable automatic salient region extraction, referred to as saliency cuts, and assist in obtaining a high-quality binary mask. Finally, outer boundaries and inner edges are detected in natural scene images to identify edges that are visually significant. Experimental results indicate that the proposed method extracts salient objects effectively and achieves remarkable performance compared to conventional methods. Our method offers benefits in extracting salient objects and generating simple but important edges from natural scene images, and in providing information to the visually impaired.
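The global-contrast idea in the abstract above can be illustrated with a minimal sketch: each pixel is scored by the distance of its color from the mean image color, and a single threshold stands in for the saliency-cut step. The function names, toy image, and pixel-level (rather than region-level) contrast are illustrative assumptions, not the published method.

```python
import numpy as np

def global_contrast_saliency(image):
    """Per-pixel distance from the mean image color, a pixel-level
    simplification of the region-level global contrast the paper uses."""
    img = image.astype(np.float64)
    mean_color = img.reshape(-1, img.shape[-1]).mean(axis=0)
    sal = np.linalg.norm(img - mean_color, axis=-1)
    return sal / sal.max() if sal.max() > 0 else sal

def saliency_cut(saliency, threshold=0.5):
    """Threshold the normalized map into a binary mask; the paper's
    saliency cut is an iterative segmentation, not a single threshold."""
    return saliency >= threshold

# toy 4x4 RGB image: a red square on a gray background
img = np.full((4, 4, 3), 128, dtype=np.uint8)
img[1:3, 1:3] = [255, 0, 0]
mask = saliency_cut(global_contrast_saliency(img))
```

The red square scores well above the gray background and survives the cut, giving the kind of binary mask the edge-detection stage would consume.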

A Region-based Image Retrieval System using Salient Point Extraction and Image Segmentation (영상분할과 특징점 추출을 이용한 영역기반 영상검색 시스템)

  • 이희경;호요성
    • Journal of Broadcast Engineering
    • /
    • v.7 no.3
    • /
    • pp.262-270
    • /
    • 2002
  • Although most image indexing schemes are based on global image features, they have limited discrimination capability because they cannot capture local variations of the image. In this paper, we propose a new region-based image retrieval system that can extract important regions in the image using salient point extraction and image segmentation techniques. Our experimental results show that color and texture information in the region provides significantly improved retrieval performance compared to global feature extraction methods.

A New Hybrid Algorithm for Invariance and Improved Classification Performance in Image Recognition

  • Shi, Rui-Xia;Jeong, Dong-Gyu
    • International journal of advanced smart convergence
    • /
    • v.9 no.3
    • /
    • pp.85-96
    • /
    • 2020
  • It is important to extract salient object images and to solve the invariance problem for image recognition. In this paper we propose a new hybrid algorithm for invariance and improved classification performance in image recognition, which combines the FT (Frequency-tuned Salient Region Detection) algorithm, a guided filter, Zernike moments, and a simple artificial neural network (a multi-layer perceptron). The conventional FT algorithm is used to extract the initial salient object image, guided filtering to preserve edge details, Zernike moments to solve the invariance problem, and the multi-layer perceptron to classify the extracted image. Experimental results show that the algorithm achieves superior performance in extracting salient object images and invariant moment features, and that it classifies the extracted object images with an improved recognition rate.
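The FT stage can be sketched roughly as the absolute difference between the mean image intensity and a Gaussian-blurred copy. This sketch works on a single gray channel with a small binomial blur; the published FT algorithm operates on all three Lab channels, and the guided-filter, Zernike-moment, and MLP stages are omitted.

```python
import numpy as np

def frequency_tuned_saliency(gray):
    """|mean intensity - Gaussian-blurred intensity| per pixel, a
    single-channel simplification of frequency-tuned saliency."""
    g = gray.astype(np.float64)
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
    k /= k.sum()  # 5-tap binomial kernel approximating a Gaussian
    padded = np.pad(g, 2, mode='edge')
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, 'valid'), 1, padded)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, 'valid'), 0, rows)
    return np.abs(g.mean() - blurred)

# a single bright spot should be the most salient location
img = np.zeros((9, 9))
img[4, 4] = 255.0
sal = frequency_tuned_saliency(img)
```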

Image Retrieval Method Based on IPDSH and SRIP

  • Zhang, Xu;Guo, Baolong;Yan, Yunyi;Sun, Wei;Yi, Meng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.5
    • /
    • pp.1676-1689
    • /
    • 2014
  • At present, the Content-Based Image Retrieval (CBIR) system has become a hot research topic in the computer vision field. In a CBIR system, accurate extraction of low-level features can reduce the gap between low-level features and high-level semantics and improve retrieval precision. This paper puts forward a new retrieval method addressing the high computational complexity and low precision of global feature extraction algorithms. The new retrieval method is built on the SIFT- and Harris-based interest point detection (IPDSH) algorithm and the salient region of interest points (SRIP) algorithm, to satisfy users' interest in the specific targets of images. First, using the IPDSH and SRIP algorithms, we detected stable interest points and found salient regions; the interest points in the salient regions were named salient interest points. Second, we extracted the pseudo-Zernike moments of the salient interest points' neighborhoods as the feature vectors. Finally, we calculated the similarities between query and database images. We conducted experiments on the Caltech-101 database; the results show that this retrieval method can decrease the interference of unstable interest points in regions of non-interest and improve accuracy and recall.
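Pseudo-Zernike moments are chosen above for their rotation invariance. A simpler stand-in with the same property is the magnitude of complex geometric moments |c_pq|, sketched here for an interest-point neighborhood; the helper name and moment orders are illustrative, not the paper's feature set.

```python
import numpy as np

def complex_moment_features(patch, orders=((1, 1), (2, 0), (3, 1))):
    """Magnitudes |c_pq| of complex geometric moments of a patch.
    Under an in-plane rotation, c_pq only gains a phase factor, so
    |c_pq| is rotation invariant -- the same property the paper
    obtains from pseudo-Zernike moments."""
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w].astype(np.float64)
    z = (x - (w - 1) / 2.0) + 1j * (y - (h - 1) / 2.0)  # centered coords
    f = patch.astype(np.float64)
    return [abs((z ** p * np.conj(z) ** q * f).sum()) for p, q in orders]

# a 90-degree rotation of the patch leaves the magnitudes unchanged
patch = np.arange(25, dtype=np.float64).reshape(5, 5)
f1 = complex_moment_features(patch)
f2 = complex_moment_features(np.rot90(patch))
```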

Salient Object Extraction from Video Sequences using Contrast Map and Motion Information (대비 지도와 움직임 정보를 이용한 동영상으로부터 중요 객체 추출)

  • Kwak, Soo-Yeong;Ko, Byoung-Chul;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.11
    • /
    • pp.1121-1135
    • /
    • 2005
  • This paper proposes a moving object extraction method using a contrast map and salient points. To make the contrast map, we generate three feature maps, namely a luminance map, a color map and a directional map, and extract salient points from an image. Using these features, we can easily decide the Attention Window (AW) location. The purpose of the AW is to remove useless regions in the image, such as the background, as well as to reduce the amount of image processing. To obtain an exact location and flexible size for the AW, we use motion features instead of pre-assumptions or heuristic parameters. After determining the AW, we find edge differences between the AW and its inner area. Then, we can extract horizontal and vertical candidate regions. After finding both candidates, their intersection regions, obtained through a logical AND operation, are further processed by morphological operations. The proposed algorithm has been applied to many video sequences with a static background, such as surveillance videos. The moving objects were segmented quite well, with accurate boundaries.
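A toy version of the contrast-map idea: combine normalized luminance and color-opponency feature maps, then take the bounding box of the high-contrast pixels as the attention window. The directional map, salient points, and motion cues of the actual method are omitted, and the names and threshold are illustrative.

```python
import numpy as np

def contrast_map(rgb):
    """Combine normalized luminance and color-opponency maps into a
    single contrast map (the paper also uses a directional map and
    salient points, omitted here)."""
    img = rgb.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    feats = [
        (r + g + b) / 3.0,        # luminance map
        np.abs(r - g),            # red-green opponency
        np.abs(b - (r + g) / 2),  # blue-yellow opponency
    ]
    norm = [f / f.max() if f.max() > 0 else f for f in feats]
    return sum(norm) / len(norm)

def attention_window(cmap, threshold=0.5):
    """Bounding box (top, left, bottom, right) of high-contrast
    pixels, a stand-in for the paper's motion-guided AW."""
    ys, xs = np.nonzero(cmap >= threshold)
    return ys.min(), xs.min(), ys.max(), xs.max()

# red square on a black background
img = np.zeros((6, 6, 3))
img[2:4, 2:4] = [200.0, 0.0, 0.0]
aw = attention_window(contrast_map(img))
```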

3D Mesh Model Exterior Salient Part Segmentation Using Prominent Feature Points and Marching Plane

  • Hong, Yiyu;Kim, Jongweon
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.3
    • /
    • pp.1418-1433
    • /
    • 2019
  • In computer graphics, 3D mesh segmentation is a challenging research field. This paper presents a 3D mesh model segmentation algorithm that focuses on removing exterior salient parts from the original 3D mesh model based on prominent feature points and a marching plane. To begin with, the proposed approach uses multi-dimensional scaling to extract prominent feature points that reside on the tips of each exterior salient part of a given mesh. Subsequently, a set of planes intersects the 3D mesh; one is the marching plane, which starts marching from the prominent feature points. Through the marching process, local cross sections between the marching plane and the 3D mesh are extracted, and their corresponding areas are calculated to represent local volumes of the 3D mesh model. As the boundary region of an exterior salient part generally lies at the location where the local volume suddenly changes greatly, we can simply cut this location with the marching plane to separate the part from the mesh. We evaluated our algorithm on the Princeton Segmentation Benchmark, and the evaluation results show that it works well for some categories.
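The cutting rule above (cut where the local volume suddenly changes) reduces, for a sequence of per-slice cross-section areas, to finding the largest jump. A minimal sketch under that assumption, with the mesh/plane intersection itself left out:

```python
import numpy as np

def cut_location(areas):
    """Index of the largest jump in a sequence of per-slice
    cross-section areas measured along the marching plane; the
    exterior salient part would be cut at this slice."""
    diffs = np.abs(np.diff(np.asarray(areas, dtype=np.float64)))
    return int(diffs.argmax()) + 1

# a thin finger-like part (small areas) attached to a large body
cut = cut_location([1, 1, 1, 1, 9, 10, 10])
```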

Performance Evaluation of Pixel Clustering Approaches for Automatic Detection of Small Bowel Obstruction from Abdominal Radiographs

  • Kim, Kwang Baek
    • Journal of information and communication convergence engineering
    • /
    • v.20 no.3
    • /
    • pp.153-159
    • /
    • 2022
  • Plain radiographic analysis is the initial imaging modality for suspected small bowel obstruction. Among the many features that affect the diagnosis of small bowel obstruction (SBO), the presence of gas-filled or fluid-filled small bowel loops is the most salient feature that can be automated by computer vision algorithms. In this study, we compare three frequently applied pixel-clustering algorithms for extracting gas-filled areas without human intervention. In a comparison involving 40 suspected SBO cases, the Possibilistic C-Means and Fuzzy C-Means algorithms exhibited initialization-sensitivity problems and difficulties coping with low intensity contrast, achieving success rates of only 72.5% and 85%, respectively. The Adaptive Resonance Theory 2 algorithm is the most suitable algorithm for gas-filled region detection, achieving a 100% success rate on the 40 tested images, largely owing to its dynamic control of the number of clusters.
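Fuzzy C-Means, one of the three compared algorithms, can be sketched in one dimension on raw intensities. The quantile initialization is an illustrative choice (not the paper's), and the radiograph preprocessing and the PCM/ART2 competitors are not shown.

```python
import numpy as np

def fuzzy_c_means(values, c=2, m=2.0, iters=30):
    """Minimal 1-D Fuzzy C-Means: alternate between membership
    updates (inverse-distance weighting with fuzzifier m) and
    membership-weighted center updates."""
    x = np.asarray(values, dtype=np.float64)
    centers = np.quantile(x, np.linspace(0, 1, c))  # spread initial centers
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12  # distances
        u = 1.0 / d ** (2.0 / (m - 1.0))                   # memberships
        u /= u.sum(axis=1, keepdims=True)
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return np.sort(centers), u

# toy intensities: a dark cluster and a bright (gas-filled-like) cluster
centers, u = fuzzy_c_means([10, 12, 11, 200, 210, 205])
```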

Window Production Method based on Low-Frequency Detection for Automatic Object Extraction of GrabCut (GrabCut의 자동 객체 추출을 위한 저주파 영역 탐지 기반의 윈도우 생성 기법)

  • Yoo, Tae-Hoon;Lee, Gang-Seong;Lee, Sang-Hun
    • Journal of Digital Convergence
    • /
    • v.10 no.8
    • /
    • pp.211-217
    • /
    • 2012
  • The conventional GrabCut algorithm is semi-automatic, in that the user must set a rectangular window surrounding the object. This paper studies automatic object detection to solve this problem by detecting salient regions based on the Human Visual System. A saliency map is computed in the Lab color space, which is based on the color-opponency theory of 'red-green' and 'blue-yellow'. Saliency points are then computed from the boundaries of the low-frequency regions extracted from the saliency map. Finally, rectangular windows are obtained from the coordinate values of the saliency points, and these windows are used in the GrabCut algorithm to extract objects. Various experiments have verified that the proposed algorithm computes rectangular windows of salient regions and extracts objects effectively.
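A rough sketch of the window-production idea: mark low-frequency (low local-variance) neighborhoods, and take the bounding box of the remaining high-frequency pixels as the rectangle handed to GrabCut. The variance test and threshold are illustrative assumptions; the paper derives saliency points from a Lab-space saliency map instead.

```python
import numpy as np

def grabcut_window(gray, var_thresh=100.0):
    """Bounding rectangle (x0, y0, x1, y1) of high-frequency pixels,
    found by thresholding 3x3 local variance -- an illustrative
    stand-in for the paper's saliency-based window production."""
    g = gray.astype(np.float64)
    h, w = g.shape
    var = np.zeros_like(g)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            var[y, x] = g[y - 1:y + 2, x - 1:x + 2].var()
    ys, xs = np.nonzero(var > var_thresh)
    return xs.min(), ys.min(), xs.max(), ys.max()

# a bright object on a flat background: the window encloses it
img = np.zeros((8, 8))
img[3:5, 3:5] = 255.0
rect = grabcut_window(img)
```

The returned rectangle could then seed an interactive segmenter such as OpenCV's grabCut in place of the user-drawn window.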

Raising Visual Experience of Soccer Video for Mobile Viewers (이동형 단말기 사용자를 위한 축구경기 비디오의 시청경험 향상 방법)

  • Ahn, Il-Koo;Ko, Jae-Seung;Kim, Won-Jun;Kim, Chang-Ick
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.13 no.3
    • /
    • pp.165-178
    • /
    • 2007
  • Recent progress in multimedia signal processing and transmission technologies has contributed to the extensive use of multimedia devices to watch sports games on small LCD panels. However, most video sequences are captured for normal viewing on standard TV or HDTV and, for cost reasons, merely resized and delivered without additional editing. This may give small-display viewers an uncomfortable experience in understanding what is happening in a scene. For instance, in a soccer video sequence taken with a long-shot camera technique, the tiny objects (e.g., the soccer ball and players) may not be clearly viewed on the small LCD panel. Moreover, it is also difficult to recognize the contents of the scorebox, which contains the elapsed time and scores. This requires an intelligent display technique to provide small-display viewers with a better experience. To this end, one of the key technologies is to determine a region of interest (ROI) and display the magnified ROI on the screen, where the ROI is a part of the scene that viewers pay more attention to than other regions. Examples include the region surrounding the ball in a long shot and the scorebox located in the corner of each frame. In this paper, we propose a scheme for raising the viewing experience of multimedia mobile device users. Instead of taking a generic approach utilizing visually salient features for ROI extraction, we take a domain-specific approach that exploits the unique attributes of soccer video. The proposed scheme consists of two modules: ROI determination and scorebox extraction. The experimental results show that the proposed scheme offers useful tools for intelligent video display on multimedia mobile devices.
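The magnified-ROI display step can be illustrated with a simple clamped crop around a detected ROI center; locating the ball or scorebox is the paper's domain-specific module and is not shown, and the helper name is illustrative.

```python
import numpy as np

def roi_crop(frame, cx, cy, size):
    """Crop a size x size ROI centered near (cx, cy), clamped so the
    crop stays inside the frame -- the display-side half of the
    magnified-ROI idea."""
    h, w = frame.shape[:2]
    half = size // 2
    x0 = min(max(cx - half, 0), max(w - size, 0))
    y0 = min(max(cy - half, 0), max(h - size, 0))
    return frame[y0:y0 + size, x0:x0 + size]

frame = np.zeros((10, 10, 3))  # stand-in video frame
```

Even for ROI centers near a frame corner, the clamping keeps the crop full-sized, so the magnified view never shrinks at the borders.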

A New Temporal Filtering Method for Improved Automatic Lipreading (향상된 자동 독순을 위한 새로운 시간영역 필터링 기법)

  • Lee, Jong-Seok;Park, Cheol-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.15B no.2
    • /
    • pp.123-130
    • /
    • 2008
  • Automatic lipreading recognizes speech by observing the movement of a speaker's lips. It has recently received attention as a method of compensating for the performance degradation of acoustic speech recognition in acoustically noisy environments. One of the important issues in automatic lipreading is to define and extract salient features from the recorded images. In this paper, we propose a feature extraction method that uses a new filtering technique to obtain improved recognition performance. The proposed method eliminates frequency components that are too slow or too fast compared to the relevant speech information by applying a band-pass filter to the temporal trajectory of each pixel in the images containing the lip region; features are then extracted by principal component analysis. We show that the proposed method produces improved performance in both clean and visually noisy conditions via speaker-independent recognition experiments.
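The temporal filtering idea, suppressing pixel-trajectory components that are too slow or too fast, can be approximated with a difference of two moving averages along the time axis. The window lengths here are illustrative, not the paper's filter design, and the PCA stage is omitted.

```python
import numpy as np

def temporal_bandpass(frames, short=3, long=9):
    """Difference of two moving averages along the time axis of a
    (T, H, W) sequence: the short window passes fast variation that
    the long window removes, leaving a crude band of temporal
    frequencies per pixel trajectory."""
    f = frames.astype(np.float64)

    def moving_avg(x, n):
        k = np.ones(n) / n
        return np.apply_along_axis(lambda t: np.convolve(t, k, 'same'), 0, x)

    return moving_avg(f, short) - moving_avg(f, long)

# a constant pixel trajectory is fully suppressed away from the edges
frames = np.full((15, 2, 2), 5.0)
out = temporal_bandpass(frames)

# a fast alternation survives the band-pass
alt = np.zeros((15, 2, 2))
alt[::2] = 1.0
out_fast = temporal_bandpass(alt)
```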