• Title/Summary/Keyword: color features

Search Results: 1,192

Channel Color Energy Feature Representing Color and Texture in Content-Based Image Retrieval (내용기반 영상검색에서 색과 질감을 나타내는 채널색에너지)

  • Jung Jae Woong;Kwon Tae Wan;Park Seop Hyeong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.1 / pp.21-28 / 2004
  • In the field of content-based image retrieval, many numerical features have been proposed to represent visual image content such as color, texture, and shape. Because these features are assumed to be independent, each is extracted without any consideration of the others. In this paper, we consider the relationship between color and texture and propose a new feature called CCE (channel color energy). Simulation results with natural images show that the proposed method outperforms the conventional regular weighted comparison method and the SCFT (sequential chromatic Fourier transform)-based color texture method.
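The abstract does not give the paper's exact CCE definition, so the following is only a generic sketch of a per-channel block-energy feature of the kind such color-texture descriptors build on (the block size and the mean-of-squares definition are assumptions):

```python
import numpy as np

def channel_energy(image, window=8):
    """Per-channel local energy: the mean squared value inside each
    non-overlapping window x window block, one map per color channel.
    Illustrative only; the paper's CCE feature is defined differently."""
    h, w, c = image.shape
    h, w = h - h % window, w - w % window       # crop to a whole number of blocks
    img = image[:h, :w].astype(float)
    # reshape into (rows of blocks, window, cols of blocks, window, channels)
    blocks = img.reshape(h // window, window, w // window, window, c)
    return (blocks ** 2).mean(axis=(1, 3))

rgb = np.random.rand(16, 16, 3)                 # toy 16x16 RGB image
feat = channel_energy(rgb)
print(feat.shape)  # (2, 2, 3)
```

Comparing such energy maps channel by channel is one simple way a retrieval system can couple color and texture information.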

Quaternion Markov Splicing Detection for Color Images Based on Quaternion Discrete Cosine Transform

  • Wang, Jinwei;Liu, Renfeng;Wang, Hao;Wu, Bin;Shi, Yun-Qing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.7 / pp.2981-2996 / 2020
  • As the number of spliced images grows, many schemes for detecting them have been proposed. In this paper, a splicing detection scheme for color images based on the quaternion discrete cosine transform (QDCT) is proposed. First, quaternion Markov features are extracted in the QDCT domain. These features consist of global and local quaternion Markov features, which use both the magnitude and the three phases to extract Markov features in two different ways; in total, 2916-D features are extracted. Finally, a support vector machine (SVM) is used to detect spliced images. In our experiments, the accuracy of the proposed scheme reaches 99.16% and 97.52% on CASIA TIDE v1.0 and CASIA TIDE v2.0, respectively, exceeding that of existing schemes.
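The core of such schemes is a Markov transition-probability matrix over thresholded, quantized coefficient differences. The sketch below shows that step for an ordinary real-valued coefficient array (a scalar simplification; the paper's quaternion magnitude/phase construction and the threshold T=3 here are assumptions):

```python
import numpy as np

def markov_features(coeffs, T=3):
    """Markov transition probability matrix of horizontally adjacent,
    thresholded coefficient differences. A scalar-valued stand-in for
    the paper's quaternion Markov features."""
    # horizontal differences, rounded and clipped to [-T, T]
    d = np.clip(np.round(coeffs[:, :-1] - coeffs[:, 1:]), -T, T).astype(int)
    M = np.zeros((2 * T + 1, 2 * T + 1))
    # count transitions between neighbouring difference values
    for a, b in zip(d[:, :-1].ravel(), d[:, 1:].ravel()):
        M[a + T, b + T] += 1
    row = M.sum(axis=1, keepdims=True)
    return M / np.where(row == 0, 1, row)       # row-normalised probabilities

P = markov_features(np.random.randn(8, 8) * 4)
print(P.shape)  # (7, 7)
```

Flattening several such matrices (per direction, per phase component) is how the feature dimension grows into the thousands before being fed to an SVM.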

Role of linked color imaging for upper gastrointestinal disease: present and future

  • Sang Pyo Lee
    • Clinical Endoscopy / v.56 no.5 / pp.546-552 / 2023
  • Techniques for upper gastrointestinal endoscopy are advancing to facilitate lesion detection and improve prognosis. However, most early tumors in the upper gastrointestinal tract exhibit subtle color changes or morphological features that are difficult to detect using white light imaging. Linked color imaging (LCI) has been developed to overcome these shortcomings; it expands or reduces color information to clarify color differences, thereby facilitating the detection and observation of lesions. This article summarizes the characteristics of LCI and advances in LCI-related research in the upper gastrointestinal tract field.

Object-Based Image Search Using Color and Texture Homogeneous Regions (유사한 색상과 질감영역을 이용한 객체기반 영상검색)

  • 유헌우;장동식;서광규
    • Journal of Institute of Control, Robotics and Systems / v.8 no.6 / pp.455-461 / 2002
  • An object-based image retrieval method is addressed. A new image segmentation algorithm and a method for comparing segmented objects are proposed. For segmentation, color and texture features are extracted from each pixel in the image. These features are used as inputs to a VQ (vector quantization) clustering method, which yields objects that are homogeneous in terms of color and texture. In this procedure, colors are quantized into a few dominant colors for simple representation and efficient retrieval. For retrieval, two comparison schemes are proposed: comparing one query object against the multiple objects of a database image, and comparing multiple query objects against the multiple objects of a database image. For fast retrieval, dominant object colors are key-indexed in the database.
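The VQ clustering step can be sketched with plain k-means over per-pixel feature vectors (the feature layout of 3 color + 2 texture channels and k=3 are assumptions, not the paper's settings):

```python
import numpy as np

def vq_segment(features, k=3, iters=20, seed=0):
    """Vector-quantize per-pixel feature vectors (e.g. color + texture)
    into k codewords and return a label map. Plain k-means stands in
    for the paper's VQ clustering."""
    h, w, d = features.shape
    X = features.reshape(-1, d).astype(float)
    rng = np.random.default_rng(seed)
    codebook = X[rng.choice(len(X), k, replace=False)]   # random initial codewords
    for _ in range(iters):
        # assign each pixel to its nearest codeword
        labels = np.argmin(((X[:, None] - codebook) ** 2).sum(-1), axis=1)
        # move each codeword to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = X[labels == j].mean(axis=0)
    return labels.reshape(h, w)

feat = np.random.rand(16, 16, 5)   # e.g. 3 color + 2 texture channels per pixel
seg = vq_segment(feat)
print(seg.shape)  # (16, 16)
```

Connected regions sharing one label then become the "homogeneous objects" that are matched at query time.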

Speech Training Tools Based on Vowel Switch/Volume Control and Its Visualization

  • Ueda, Yuichi;Sakata, Tadashi
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.441-445 / 2009
  • We have developed a real-time software tool to extract a speech feature vector whose time sequences consist of three groups of components: phonetic/acoustic features such as formant frequencies, phonemic features given as neural network outputs, and distances to Japanese phonemes. Since the phoneme distances to the five Japanese vowels can express vowel articulation, we have designed a switch, a volume control, and a color representation that are operated by pronouncing vowel sounds. As examples of this vowel interface, we have developed speech training tools for aurally or vocally handicapped children that display an image character or a rolling color ball and control a cursor's movement. In this paper, we introduce the functions and principles of these systems.
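The vowel-switch idea reduces to picking the vowel with the smallest phoneme distance and firing an action bound to it. A toy version, where the distance vector and the action table are hypothetical stand-ins for the tool's real feature extractor and UI bindings:

```python
VOWELS = "aiueo"  # the five Japanese vowels

def vowel_switch(distances, actions):
    """Pick the closest vowel from a 5-element phoneme-distance vector
    and return its bound action, or 'idle' if nothing is bound."""
    nearest = VOWELS[min(range(5), key=lambda i: distances[i])]
    return actions.get(nearest, "idle")

acts = {"a": "cursor up", "i": "cursor down"}
print(vowel_switch([0.1, 0.7, 0.9, 0.8, 0.6], acts))  # cursor up
```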


Object Cataloging Using Heterogeneous Local Features for Image Retrieval

  • Islam, Mohammad Khairul;Jahan, Farah;Baek, Joong Hwan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.11 / pp.4534-4555 / 2015
  • We propose a robust object cataloging method using multiple locally distinct heterogeneous features to aid image retrieval. Due to challenges such as variations in object size, orientation, and illumination, object recognition is an extraordinarily challenging problem. In these circumstances, we adopt a local interest point detection method that locates prototypical local components in object images. In each local component, we exploit heterogeneous features such as a gradient-weighted orientation histogram, sums of wavelet responses, and histograms over different color spaces, and combine these features to describe each component distinctively. A global signature is formed by adapting the bag-of-features model, which counts the frequencies of an image's local components with respect to the words in a dictionary. The proposed method demonstrates its excellence in classifying objects against various complex backgrounds. Our proposed local feature achieves a classification accuracy of 98%, while SURF, SIFT, BRISK, and FREAK achieve 81%, 88%, 84%, and 87%, respectively.
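The bag-of-features signature step is standard and can be sketched directly (the random descriptors and dictionary below are stand-ins; the paper's descriptors combine the heterogeneous features described above):

```python
import numpy as np

def bag_of_features(descriptors, dictionary):
    """Global signature: a normalised histogram of nearest dictionary
    'words' over an image's local descriptors."""
    # squared distance from every descriptor to every dictionary word
    d2 = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    words = np.argmin(d2, axis=1)                   # nearest word per descriptor
    hist = np.bincount(words, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()    # normalise so images of any size are comparable

rng = np.random.default_rng(1)
sig = bag_of_features(rng.random((40, 16)),         # 40 local descriptors
                      rng.random((8, 16)))          # 8-word dictionary
print(sig.shape)  # (8,)
```

Two images are then compared via any histogram distance between their signatures, which is what makes the representation global despite being built from local components.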

Target Detection Based on Moment Invariants

  • Wang, Jiwu;Sugisaka, Masanori
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2003.10a / pp.677-680 / 2003
  • Perceptual landmarks are an effective way for a mobile robot to achieve steady and reliable long-distance navigation, but the prerequisite is that those landmarks be detected and recognized robustly, at high speed, under various lighting conditions. This complicates image processing, so that speed and reliability cannot both be satisfied at the same time. Color-based target detection can quickly separate target-color regions from non-target regions in an image, but good results are obtained only under good lighting conditions. Moreover, when other objects share the target's color, additional target features are needed to tell the target apart from them; this always happens when a target is detected from a single characteristic. Furthermore, searching for only one target at a time prevents landmarks from being used efficiently, especially when several landmarks should work together. In this paper, by using the moment invariants of each landmark, we can not only search for a specified target within the separated color regions but also find multiple targets at the same time if necessary, allowing a finite set of landmarks to carry out more functions. Because moment invariants are easily combined with low-level image processing techniques, such as color-based and gradient-runs-based target detection, and are reliable features of each target, the target detection rate is improved. Experiments were carried out to verify the robustness and efficiency of this method.
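Moment invariants are a standard construction; a minimal sketch of the first two Hu invariants, which are unchanged under translation, scaling, and rotation of a region, so a landmark matches wherever it appears in the frame:

```python
import numpy as np

def hu_moments(img):
    """First two Hu moment invariants of a grayscale/binary region."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def mu(p, q):                       # central moments (translation-invariant)
        return ((x - xc) ** p * (y - yc) ** q * img).sum()

    def eta(p, q):                      # scale-normalised central moments
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

# the same square region gives the same invariants wherever it sits
a = np.zeros((32, 32)); a[4:12, 4:12] = 1
b = np.zeros((32, 32)); b[18:26, 10:18] = 1
print(np.allclose(hu_moments(a), hu_moments(b)))  # True
```

In the pipeline above, each color-separated region would have its invariants computed and compared against the stored invariants of every landmark, which is what allows several landmarks to be identified in one pass.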


Content-Based Image Retrieval Using Combined Color and Texture Features Extracted by Multi-resolution Multi-direction Filtering

  • Bu, Hee-Hyung;Kim, Nam-Chul;Moon, Chae-Joo;Kim, Jong-Hwa
    • Journal of Information Processing Systems / v.13 no.3 / pp.464-475 / 2017
  • In this paper, we present a new texture image retrieval method that combines color and texture features extracted from images by a set of multi-resolution multi-direction (MRMD) filters. The chosen MRMD filter set is simple, separates low- and high-frequency information, and provides efficient multi-resolution and multi-direction analysis. The color space used is HSV, which separates into hue, saturation, and value components that are easily analyzed and exhibit characteristics similar to the human visual system. The experiments compare retrieval precision vs. recall and feature vector dimensions. The test images include Corel DB and VisTex DB; Corel_MR DB and VisTex_MR DB, transformed from the two DBs to contain multi-resolution images; and Corel_MD DB and VisTex_MD DB, transformed from the two DBs to contain multi-direction images. According to the experimental results, the proposed method improves upon existing methods in retrieval precision and recall while also reducing feature vector dimensions.
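The precision-vs-recall evaluation used here (and in the other retrieval papers in this list) reduces to a simple set computation per query; the IDs below are made up for illustration:

```python
def precision_recall(retrieved, relevant):
    """Precision and recall of one retrieval result.

    retrieved: IDs returned by the system for a query
    relevant:  ground-truth IDs for that query
    """
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

p, r = precision_recall(retrieved=[3, 7, 1, 9], relevant=[1, 2, 3])
print(p, r)  # 0.5 0.6666666666666666
```

Sweeping the number of retrieved images and averaging over queries yields the precision-recall curves the paper compares.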

Optical and Near-IR Photometry of the NGC 4874 Globular Cluster System with the Hubble Space Telescope

  • Cho, Hyejeon;Blakeslee, John P.;Peng, Eric W.;Lee, Young-Wook
    • The Bulletin of The Korean Astronomical Society / v.38 no.2 / pp.37.1-37.1 / 2013
  • We analyze the photometric properties of the globular cluster (GC) system residing in the extended halo of NGC 4874, a central bright galaxy of the Coma cluster. The core of the Coma cluster of galaxies (Abell 1656) was observed with the HST Advanced Camera for Surveys (ACS) in the F475W (g475) and F814W (I814) filters and with the Wide Field Camera 3 IR channel (WFC3/IR) in the F160W (H160) filter. The data analysis procedure and GC candidate selection criteria are briefly described. We investigate the interesting "tilt" features in the color-magnitude diagrams of this GC system and their link to the nonlinear color-metallicity relation for GCs. The GC system of NGC 4874 exhibits a bimodal distribution in the optical g475-I814 color, with well over half the GCs falling on the red side at g475-I814 ~ 1.1. This bimodality is weakened in the optical-IR I814-H160 color; quantitative analysis of both color distributions using the Gaussian Mixture Modeling code shows that the bimodalities differ. Thus, neither color can linearly reflect an underlying metallicity bimodality, supporting the suggestion that observed bimodalities in extragalactic GC colors are a metallicity-to-color projection effect.
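The bimodality test here fits a two-component Gaussian mixture to a 1-D color distribution. A minimal EM sketch (the mock "colors" sample below is fabricated for illustration and is not the NGC 4874 data; the abstract's GMM code is a separate published tool):

```python
import numpy as np

def fit_gmm2(x, iters=200):
    """EM fit of a two-component 1-D Gaussian mixture; returns
    (weights, means, standard deviations)."""
    mu = np.array([x.min(), x.max()], float)        # spread the initial means
    sig = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / sig
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, standard deviations
        n = r.sum(axis=0)
        w, mu = n / len(x), (r * x[:, None]).sum(0) / n
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(0) / n)
    return w, mu, sig

rng = np.random.default_rng(0)
colors = np.concatenate([rng.normal(0.9, 0.05, 300),    # mock blue GC peak
                         rng.normal(1.1, 0.05, 700)])   # mock red GC peak
w, mu, sig = fit_gmm2(colors)
print(mu.round(2))   # two modes near 0.9 and 1.1
```

Comparing the likelihood of this fit against a single-Gaussian fit (and checking the peak separation) is the usual statistic for declaring a color distribution bimodal.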


Integrated Method for Text Detection in Natural Scene Images

  • Zheng, Yang;Liu, Jie;Liu, Heping;Li, Qing;Li, Gen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.11 / pp.5583-5604 / 2016
  • In this paper, we present a novel image operator to extract textual information from natural scene images. First, a powerful refiner called the Stroke Color Extension, which extends the widely used Stroke Width Transform by incorporating the color information of strokes, is proposed to significantly improve intra-character connection and non-character removal. Second, a character classifier is trained using gradient features; the classifier eliminates non-character components while retaining a large number of characters. Third, an effective extractor called the Character Color Transform combines the color information of characters with geometry features; it is used to extract potential characters that were not correctly extracted in the previous steps. Fourth, a Convolutional Neural Network model is used to verify text candidates, improving detection performance. The proposed technique is tested on two public datasets, the ICDAR2011 and ICDAR2013 datasets, and the experimental results show that our approach achieves state-of-the-art performance.
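The intuition behind using stroke color to remove non-characters is that genuine text strokes tend to be nearly uniform in color. A toy filter in that spirit (the variance threshold and the mask/image inputs are assumptions, not the paper's Stroke Color Extension):

```python
import numpy as np

def stroke_color_consistent(image, mask, max_std=0.08):
    """Accept a candidate component only if the per-channel color
    standard deviation of its stroke pixels is small; a toy version
    of filtering non-characters by stroke color uniformity."""
    pix = image[mask]                       # N x 3 colors inside the stroke mask
    return bool((pix.std(axis=0) < max_std).all())

img = np.full((10, 10, 3), 0.2)             # grey background
m = np.zeros((10, 10), bool); m[2:8, 4:6] = True
img[m] = [0.9, 0.1, 0.1]                    # uniformly red candidate stroke
print(stroke_color_consistent(img, m))      # True
```

A textured background patch caught by a width-only test would typically fail this color-uniformity check, which is the kind of gain the refiner above targets.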