• Title/Summary/Keyword: Image Feature

A Feature Re-weighting Approach for the Non-Metric Feature Space (가변적인 길이의 특성 정보를 지원하는 특성 가중치 조정 기법)

  • Lee Robert-Samuel;Kim Sang-Hee;Park Ho-Hyun;Lee Seok-Lyong;Chung Chin-Wan
    • Journal of KIISE:Databases / v.33 no.4 / pp.372-383 / 2006
  • Among the approaches to image database management, content-based image retrieval (CBIR) is viewed as having the best support for effective searching and browsing of large digital image libraries. Typical CBIR systems allow a user to provide a query image, from which low-level features are extracted and used to find 'similar' images in a database. However, there exists a semantic gap between human visual perception and low-level representations. An effective methodology for overcoming this semantic gap uses relevance feedback to perform feature re-weighting. Current approaches to feature re-weighting require the number of components in a feature representation to be the same for every image under consideration. Under this assumption, they map each component to an axis in an n-dimensional space, which we call the metric space; the feature representation is likewise stored in a fixed-length vector. However, with the emergence of features that do not have a fixed number of components in their representation, existing feature re-weighting approaches become invalid. In this paper we propose a feature re-weighting technique that supports features regardless of whether they can be mapped into a metric space. Our approach analyzes the feature distances calculated between the query image and the images in the database, and uses two-sided confidence intervals on these distances to obtain the information for feature re-weighting. There is no restriction on how the distances are calculated for each feature, which leaves the structure of feature representations free: features need not be represented as fixed-length vectors or in a metric space. Our experimental results show the effectiveness of our approach, and a comparison shows that it outperforms previous work.
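The distance-based re-weighting idea in the abstract can be sketched with a toy rule: a feature whose distances to the relevant images form a tight cluster (narrow confidence interval) gets more weight. The `reweight` function and the inverse-width rule below are illustrative assumptions, not the paper's exact formula.

```python
import statistics

def ci_half_width(distances, z=1.96):
    """Two-sided normal-approximation confidence-interval half-width
    of the mean distance."""
    n = len(distances)
    return z * statistics.stdev(distances) / (n ** 0.5)

def reweight(relevant_distances_per_feature, eps=1e-9):
    """Give each feature a weight inversely proportional to the confidence-
    interval width of its distances to the relevant images: a feature whose
    relevant distances cluster tightly is treated as more reliable.
    (Illustrative rule only, not the paper's exact scheme.)"""
    widths = [ci_half_width(d) + eps for d in relevant_distances_per_feature]
    inv = [1.0 / w for w in widths]
    total = sum(inv)
    return [v / total for v in inv]

# Feature 0: distances to relevant images cluster tightly around the query.
# Feature 1: the same images are widely scattered.
weights = reweight([[0.10, 0.12, 0.11, 0.09],
                    [0.10, 0.90, 0.40, 0.70]])
```

Because the rule only consumes distances, it places no restriction on how each feature is represented, matching the paper's non-metric setting.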

Efficient Image Search using Advanced SURF and DCD on Mobile Platform (모바일 플랫폼에서 개선된 SURF와 DCD를 이용한 효율적인 영상 검색)

  • Lee, Yong-Hwan
    • Journal of the Semiconductor & Display Technology / v.14 no.2 / pp.53-59 / 2015
  • Since the amount of digital images in use continues to grow, users find it increasingly difficult to locate specific images within large collections. This paper proposes a novel image search scheme that extracts image features using a combination of Advanced SURF (Speeded-Up Robust Features) and the DCD (Dominant Color Descriptor). The key point of this research is a new feature extraction algorithm that improves the existing SURF method by removing features that are unnecessary for image retrieval, making it adaptable to mobile systems and efficient in mobile environments. To evaluate the proposed scheme, we assessed its performance in terms of average precision and F-score on two databases commonly used in the field of image retrieval. The experimental results revealed that the proposed algorithm achieves an improvement of over 14.4% in retrieval effectiveness compared to OpenSURF. The main contribution of this paper is that the proposed approach achieves high accuracy and stability by using ASURF and DCD to search for natural images on a mobile platform.
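Combining a keypoint descriptor with a dominant-color descriptor can be illustrated with a simplified scoring rule. The `alpha` mixing weight, the weighted-RGB-centroid color distance, and the normalization constant are hypothetical stand-ins, not the paper's actual ASURF/DCD measures.

```python
def color_distance(dcd_a, dcd_b):
    """Distance between two dominant-color descriptors, here simplified to
    percentage-weighted RGB centroids: [(percentage, (r, g, b)), ...]."""
    ca = [sum(p * c[i] for p, c in dcd_a) for i in range(3)]
    cb = [sum(p * c[i] for p, c in dcd_b) for i in range(3)]
    return sum((x - y) ** 2 for x, y in zip(ca, cb)) ** 0.5

def combined_score(kp_match_ratio, dcd_a, dcd_b, alpha=0.7, color_scale=441.7):
    """Fuse a keypoint-matching ratio (SURF side, in [0, 1]) with a
    normalized color similarity (DCD side).  alpha and color_scale
    (max possible RGB distance, sqrt(3 * 255**2)) are assumptions."""
    color_sim = 1.0 - min(color_distance(dcd_a, dcd_b) / color_scale, 1.0)
    return alpha * kp_match_ratio + (1 - alpha) * color_sim

# Similar images: many keypoint matches and nearly identical dominant color.
same = combined_score(0.9, [(1.0, (200, 30, 30))], [(1.0, (198, 32, 28))])
# Dissimilar images: few matches and a very different dominant color.
diff = combined_score(0.1, [(1.0, (200, 30, 30))], [(1.0, (20, 30, 200))])
```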

Image Description and Matching Scheme Using Synthetic Features for Recommendation Service

  • Yang, Won-Keun;Cho, A-Young;Oh, Weon-Geun;Jeong, Dong-Seok
    • ETRI Journal / v.33 no.4 / pp.589-599 / 2011
  • This paper presents an image description and matching scheme using synthetic features for a recommendation service. The recommendation service is an example of smart search because it offers something before a user's request. In the proposed extraction scheme, an image is described by synthesized spatial and statistical features. The spatial feature is designed to increase discriminability by reflecting delicate variations, while the statistical feature is designed to increase robustness by absorbing small variations. To extract spatial features, we partition the image into concentric circles and extract four characteristics using a spatial relation. To extract statistical features, we apply three transforms to the image and compose a 3D histogram as the final statistical feature. The matching scheme is designed hierarchically using the proposed spatial and statistical features. The results show that each feature outperforms the compared algorithms that use spatial or statistical features. Additionally, when the whole proposed extraction and matching scheme is applied, the overall performance reaches 98.44% in terms of the correct search ratio.
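The concentric-circle partitioning used for the spatial feature can be sketched as follows. Taking the mean intensity per ring is a stand-in for the four characteristics the paper actually extracts.

```python
def ring_index(x, y, cx, cy, max_radius, n_rings):
    """Index of the concentric ring (0 = innermost) containing pixel (x, y)."""
    r = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
    return min(int(r / max_radius * n_rings), n_rings - 1)

def ring_means(image, n_rings=4):
    """Mean intensity per concentric ring of a 2-D grayscale image (list of
    rows): one simple 'spatial' characteristic per ring."""
    h, w = len(image), len(image[0])
    cx, cy = (w - 1) / 2, (h - 1) / 2
    max_radius = (cx ** 2 + cy ** 2) ** 0.5 + 1e-9
    sums = [0.0] * n_rings
    counts = [0] * n_rings
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            k = ring_index(x, y, cx, cy, max_radius, n_rings)
            sums[k] += v
            counts[k] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

# Bright center fading to dark edges: inner rings should have higher means.
img = [[max(0, 255 - 40 * max(abs(x - 4), abs(y - 4))) for x in range(9)]
       for y in range(9)]
means = ring_means(img)
```

A per-ring descriptor of this kind reflects where intensity sits relative to the image center, which is what makes it "spatial" rather than purely statistical.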

A multisource image fusion method for multimodal pig-body feature detection

  • Zhong, Zhen;Wang, Minjuan;Gao, Wanlin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.11 / pp.4395-4412 / 2020
  • Multisource image fusion has become an active topic in the last few years owing to its higher segmentation rate. A multisource image fusion method was employed to enhance the accuracy of multimodal pig-body feature segmentation. However, conventional multisource image fusion methods cannot extract superior contrast and abundant detail in the fused image. To better segment the shape feature and detect the temperature feature, a new multisource image fusion method, named NSST-GF-IPCNN, is presented. Firstly, the multisource images are decomposed into a range of multiscale and multidirectional subbands by the Nonsubsampled Shearlet Transform (NSST). Then, to better describe fine-scale texture and edge information, an even-symmetric Gabor filter and an Improved Pulse Coupled Neural Network (IPCNN) are used to fuse the low- and high-frequency subbands, respectively. Next, the fused coefficients are reconstructed into a fusion image using the inverse NSST. Finally, the shape feature is extracted using an automatic threshold algorithm and refined using morphological operations, and the highest pig-body temperature is obtained from the segmentation results. Experiments revealed that the presented fusion algorithm achieves a 2.102-4.066% higher average accuracy rate than traditional algorithms, with improved efficiency.
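The full NSST/Gabor/IPCNN pipeline is too involved for a short sketch, but the underlying fusion pattern (average the low-frequency approximation subbands; keep the larger-magnitude coefficient in the high-frequency detail subbands) can be illustrated on plain coefficient arrays. These baseline rules are what the Gabor-filter and IPCNN fusion rules refine.

```python
def fuse_low(a, b):
    """Low-frequency (approximation) coefficients: average the two sources."""
    return [[(x + y) / 2 for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def fuse_high(a, b):
    """High-frequency (detail) coefficients: keep the larger magnitude, so
    the sharper edge from either source survives in the fused image."""
    return [[x if abs(x) >= abs(y) else y for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]

# Tiny 1x2 "subbands" from two source images (e.g. visible and thermal).
low = fuse_low([[10, 20]], [[30, 40]])
high = fuse_high([[5, -9]], [[-7, 2]])
```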

Fragile Watermarking Scheme Based on Wavelet Edge Features

  • Vaishnavi, D.;Subashini, T.S.
    • Journal of Electrical Engineering and Technology / v.10 no.5 / pp.2149-2154 / 2015
  • This paper proposes a novel watermarking method to detect and localize tampering in digital images. The image used to generate the watermark is first wavelet-decomposed, and edge features retrieved from the high-frequency subbands form the watermark (Edge Feature Image), which is embedded in the cover image. Before embedding, the pixels of the cover image are scrambled using the Arnold Transform, which strengthens the security of the watermark. The generated edge feature image is embedded only in the Least Significant Bit (LSB) of the cover image. The invisibility and robustness of the proposed method are measured using Peak Signal-to-Noise Ratio (PSNR) and Normalized Correlation (NC); the results show that the proposed method performs well and detects and localizes tampering efficiently. Its invisibility also compares favorably with the existing method.
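The two embedding ingredients named in the abstract, Arnold-transform scrambling of pixel positions and LSB replacement, are both simple enough to sketch directly; the grid size and pixel values below are arbitrary.

```python
def arnold(points, n):
    """One iteration of the Arnold (cat) map on an n x n pixel grid:
    (x, y) -> ((x + y) mod n, (x + 2y) mod n).  The map is a bijection,
    so it scrambles pixel positions reversibly."""
    return [((x + y) % n, (x + 2 * y) % n) for x, y in points]

def embed_lsb(cover_pixel, watermark_bit):
    """Replace the least significant bit of an 8-bit cover pixel with one
    watermark bit; the other 7 bits are untouched, keeping the change
    visually imperceptible."""
    return (cover_pixel & ~1) | watermark_bit

def extract_lsb(pixel):
    """Recover the embedded bit."""
    return pixel & 1
```

Because the cat map is invertible, the verifier can unscramble positions with the same key (iteration count) before comparing the extracted bits against the regenerated edge-feature image.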

Morphological Feature Extraction of Microorganisms Using Image Processing

  • Kim Hak-Kyeong;Jeong Nam-Su;Kim Sang-Bong;Lee Myung-Suk
    • Fisheries and Aquatic Sciences / v.4 no.1 / pp.1-9 / 2001
  • This paper describes a procedure for extracting the feature vector of a target cell more precisely when identifying a specified cell. The classification of object type is based on a feature vector comprising the area, complexity, centroid, rotation angle, effective diameter, perimeter, width, and height of the object, so the feature vector plays a very important role in classifying objects. Because the feature vector is affected by noise and holes, noise contaminating the original image must be removed to extract the feature vector exactly. In this paper, we propose the following method. First, using Otsu's optimal threshold selection method and morphological filters such as cleaning, filling, and opening, we separate objects from the background and remove isolated particles. After a labeling step based on 4-adjacent neighborhoods, the labeled image is filtered by an area filter. From this area-filtered image, the feature vector is extracted: area, complexity, centroid, rotation angle, effective diameter, the perimeter based on the chain code, and the width and height based on the rotation matrix. To prove its effectiveness, the proposed method is applied to the yeast Zygosaccharomyces rouxii. The experimental results also show that the proposed method measures feature vectors more efficiently than Otsu's optimal threshold detection method alone.
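Otsu's optimal threshold selection, the first step of the pipeline above, picks the gray level that maximizes the between-class variance of the background/foreground split. A minimal sketch on an 8-level histogram:

```python
def otsu_threshold(histogram):
    """Otsu's method: return the threshold t that maximizes the between-class
    variance w0 * w1 * (mu0 - mu1)**2, where class 0 holds gray levels <= t."""
    total = sum(histogram)
    grand = sum(i * h for i, h in enumerate(histogram))  # sum of level*count
    best_t, best_var = 0, -1.0
    w0 = cum = 0
    for t, h in enumerate(histogram[:-1]):
        w0 += h            # pixels at or below t
        cum += t * h       # intensity mass at or below t
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = cum / w0, (grand - cum) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal histogram over 8 gray levels: dark background, bright objects.
hist = [40, 35, 5, 0, 0, 6, 30, 34]
t = otsu_threshold(hist)
```

The returned threshold falls in the valley between the two modes, which is exactly what separating cells from background requires before the morphological filtering steps.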

Feature-Based Image Retrieval using SOM-Based R*-Tree

  • Shin, Min-Hwa;Kwon, Chang-Hee;Bae, Sang-Hyun
    • Proceedings of the KAIS Fall Conference / 2003.11a / pp.223-230 / 2003
  • Feature-based similarity retrieval has become an important research issue in multimedia database systems. The features of multimedia data are useful for discriminating between multimedia objects (e.g., documents, images, video, musical scores, etc.). For example, images are represented by their color histograms, texture vectors, and shape descriptors, and are usually high-dimensional data. The performance of conventional multidimensional data structures (e.g., the R-tree family, K-D-B tree, grid file, TV-tree) tends to deteriorate as the number of dimensions of the feature vectors increases. The R*-tree is the most successful variant of the R-tree. In this paper, we propose the SOM-based R*-tree as a new indexing method for high-dimensional feature vectors. The SOM-based R*-tree combines the SOM and the R*-tree to achieve search performance that scales better to high dimensionalities. Self-Organizing Maps (SOMs) provide a mapping from high-dimensional feature vectors onto a two-dimensional space that preserves the topology of the feature vectors. The resulting map, called a topological feature map, preserves the mutual relationships (similarity) of the input data in feature space, clustering mutually similar feature vectors in neighboring nodes. Each node of the topological feature map holds a codebook vector, and a best-matching-image list (BMIL) holds the similar images closest to each codebook vector. In a topological feature map there are empty nodes to which no image is classified. When we build the R*-tree, we use the codebook vectors of the topological feature map, which eliminates the empty nodes that cause unnecessary disk accesses and degrade retrieval performance. We experimentally compare the retrieval time cost of the SOM-based R*-tree with that of an SOM and an R*-tree, using color feature vectors extracted from 40,000 images. The results show that the SOM-based R*-tree outperforms both the SOM and the R*-tree, owing to the reduced number of nodes required to build the R*-tree and the lower retrieval time cost.
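The best-matching-image-list (BMIL) construction, including the removal of empty nodes, can be sketched as a nearest-codebook assignment; the codebook vectors below are made up rather than SOM-trained.

```python
def nearest_codebook(vec, codebooks):
    """Index of the codebook vector closest (squared Euclidean) to vec."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebooks)), key=lambda i: d2(vec, codebooks[i]))

def build_bmil(images, codebooks):
    """Best-matching-image list: images grouped under their nearest codebook
    vector.  Empty nodes (no assigned images) are dropped, so they are never
    inserted into the R*-tree and never cost a disk access."""
    bmil = {i: [] for i in range(len(codebooks))}
    for name, vec in images:
        bmil[nearest_codebook(vec, codebooks)].append(name)
    return {i: imgs for i, imgs in bmil.items() if imgs}

codebooks = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]   # node 2 will stay empty
images = [("a.jpg", (0.1, 0.2)), ("b.jpg", (0.9, 1.1)), ("c.jpg", (1.2, 0.8))]
bmil = build_bmil(images, codebooks)
```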

Change Detection in Bitemporal Remote Sensing Images by using Feature Fusion and Fuzzy C-Means

  • Wang, Xin;Huang, Jing;Chu, Yanli;Shi, Aiye;Xu, Lizhong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.4 / pp.1714-1729 / 2018
  • Change detection in remote sensing images is a profound challenge in the field of remote sensing image analysis. This paper proposes a novel change detection method for bitemporal remote sensing images based on feature fusion and fuzzy c-means (FCM). Unlike state-of-the-art methods that mainly utilize a single image feature to construct the difference image, the proposed method investigates the fusion of multiple image features for this task. The subsequent problem is treated as a difference-image classification problem, for which a modified fuzzy c-means approach is proposed. The proposed method has been validated on real bitemporal remote sensing data sets, and the experimental results confirm its effectiveness.
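The fuzzy c-means step can be illustrated with the plain (unmodified) FCM update equations on 1-D difference-image values; the paper's contribution is a modification of this baseline, which is not reproduced here.

```python
def fcm(data, c=2, m=2.0, iters=20):
    """Plain fuzzy c-means on 1-D data.  Returns (centers, memberships).
    The min/max initialization is a crude choice that works for c=2."""
    centers = [min(data), max(data)]
    u = [[0.0] * c for _ in data]
    for _ in range(iters):
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        for i, x in enumerate(data):
            for j in range(c):
                d_ij = abs(x - centers[j]) + 1e-12
                u[i][j] = 1.0 / sum(
                    (d_ij / (abs(x - centers[k]) + 1e-12)) ** (2 / (m - 1))
                    for k in range(c))
        # Center update: weighted mean with weights u_ij^m
        for j in range(c):
            num = sum(u[i][j] ** m * x for i, x in enumerate(data))
            den = sum(u[i][j] ** m for i in range(len(data)))
            centers[j] = num / den
    return centers, u

# Difference-image values: low = "unchanged" pixels, high = "changed" pixels.
centers, u = fcm([0.1, 0.2, 0.15, 0.9, 1.0, 0.95])
```

Each pixel ends up with a soft membership in the "changed" and "unchanged" classes, which is the property the difference-image classification step exploits.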

Efficient Content-Based Image Retrieval Methods Using Color and Texture

  • Lee, Sang-Mi;Bae, Hee-Jung;Jung, Sung-Hwan
    • ETRI Journal / v.20 no.3 / pp.272-283 / 1998
  • In this paper, we propose efficient content-based image retrieval methods that automatically extract low-level visual features as image content. Two new feature extraction methods are presented. The first is an advanced color feature extraction derived from a modification of Stricker's method. The second is a texture feature extraction using DCT coefficients that represent dominant directions and gray-level variations of the image. In an experiment with an image database of 200 natural images, the proposed methods show higher performance than other methods, and they can be combined into an efficient hierarchical retrieval method.
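Stricker-style color features are the first three moments (mean, standard deviation, skewness) of each color channel; a single-channel sketch follows (the paper's modification of Stricker's method is not reproduced here).

```python
def color_moments(channel):
    """First three moments of one color channel, as in Stricker & Orengo's
    color-moments descriptor.  The signed cube root keeps the skewness
    in the same units as the pixel values."""
    n = len(channel)
    mean = sum(channel) / n
    var = sum((v - mean) ** 2 for v in channel) / n
    std = var ** 0.5
    skew_cubed = sum((v - mean) ** 3 for v in channel) / n
    skew = abs(skew_cubed) ** (1 / 3) * (1 if skew_cubed >= 0 else -1)
    return mean, std, skew

# A mostly-dark channel with one bright outlier is right-skewed.
mean, std, skew = color_moments([10, 12, 11, 200])
```

Three moments per channel give a 9-number color descriptor for an RGB image, far more compact than a full histogram, which is what makes it attractive for hierarchical retrieval.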

Extraction of Feature Points Using a Line-Edge Detector (선경계 검출에 의한 특징점 추출)

  • Kim, Ji-Hong;Kim, Nam-Chul
    • Proceedings of the KIEE Conference / 1987.07b / pp.1427-1430 / 1987
  • The feature points of an image play a very important role in understanding it. In particular, when an image is composed of lines, the vertices of the image provide information about its properties and structure. In this paper, a series of processing steps for extracting feature points from an actual IC image is described. The result can be used to produce a CIF (Caltech Intermediate Form) file.
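When an image is composed of lines, a vertex is simply the intersection of two detected line edges. A minimal sketch using lines in general form a*x + b*y = c, solved by Cramer's rule:

```python
def line_intersection(l1, l2, eps=1e-9):
    """Vertex (feature point) at the intersection of two lines given in
    general form a*x + b*y = c.  Returns None for (near-)parallel lines,
    where the determinant vanishes."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < eps:
        return None
    # Cramer's rule for the 2x2 linear system.
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Two perpendicular edges of a rectangular layout region: x = 3 and y = 5.
corner = line_intersection((1, 0, 3), (0, 1, 5))
```

Intersecting every pair of nearby detected line edges and keeping the points that fall on both segments yields the vertex list from which a geometry file can be written.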
