• Title/Summary/Keyword: Color Feature

Content-Based Image Retrieval Using Combined Color and Texture Features Extracted by Multi-resolution Multi-direction Filtering

  • Bu, Hee-Hyung;Kim, Nam-Chul;Moon, Chae-Joo;Kim, Jong-Hwa
    • Journal of Information Processing Systems / v.13 no.3 / pp.464-475 / 2017
  • In this paper, we present a new texture image retrieval method that combines color and texture features extracted from images by a set of multi-resolution multi-direction (MRMD) filters. The chosen MRMD filter set is simple, can be separated into low- and high-frequency information, and provides efficient multi-resolution and multi-direction analysis. The color space used is the HSV color space, which separates into hue, saturation, and value components and is easily analyzed because its characteristics resemble those of the human visual system. Experiments compare precision versus recall of retrieval as well as feature vector dimensions. The test images include Corel DB and VisTex DB; Corel_MR DB and VisTex_MR DB, which are transformed from the two aforementioned DBs into multi-resolution images; and Corel_MD DB and VisTex_MD DB, transformed from the same two DBs into multi-direction images. According to the experimental results, the proposed method improves upon existing methods in terms of precision and recall of retrieval while also reducing feature vector dimensions.
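
The abstract does not specify the MRMD filter kernels; as a rough illustration of separating an HSV image into low- and high-frequency information at several resolutions, the sketch below uses a generic Laplacian-pyramid-style split on the value channel. The directional analysis and the paper's actual filter bank are not reproduced.

```python
# Illustrative sketch only: a generic multi-resolution low/high-frequency split on the
# HSV value channel. The paper's MRMD filter bank (including direction analysis) is not
# reproduced here.
import cv2
import numpy as np

def multiresolution_split(bgr_image, levels=3):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2].astype(np.float32)
    features = []
    current = v
    for _ in range(levels):
        low = cv2.pyrDown(current)                                        # low-frequency approximation
        up = cv2.pyrUp(low, dstsize=(current.shape[1], current.shape[0]))
        high = current - up                                               # high-frequency residual
        features.extend([high.mean(), high.std()])                        # simple per-level statistics
        current = low
    return np.array(features)
```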

Content-Based Image Retrieval Using Visual Features and Fuzzy Integral (시각 특징과 퍼지 적분을 이용한 내용기반 영상 검색)

  • Song Young-Jun;Kim Nam;Kim Mi-Hye;Kim Dong-Woo
    • The Journal of the Korea Contents Association / v.6 no.5 / pp.20-28 / 2006
  • This paper proposes visual-feature extraction for each band in the wavelet domain, capturing both spatial frequency features and multi-resolution features, and the combination of these visual features using a fuzzy integral. In addition, it uses a color feature representation based on the frequency of identical colors after color quantization, which reduces the quantization error that is a weakness of the existing color histogram intersection method. It is also shown that the final similarity can be represented as a linear combination of the respective factors (Homogram, color, energy) when the factors are independent of one another. For the combination patterns, a fuzzy measure is defined and the fuzzy integral is taken. Experiments are performed on a database containing 1,000 color images. The proposed method gives better performance than the conventional method in both objective and subjective evaluation.
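
As a minimal sketch of combining several per-feature similarity scores with a fuzzy integral, the snippet below implements a Sugeno fuzzy integral; the factor names and the fuzzy-measure values are illustrative assumptions, not the values used in the paper.

```python
# Minimal sketch: Sugeno fuzzy integral over three similarity factors.
# The measure values below are illustrative assumptions only.
def sugeno_integral(scores, measure):
    """scores: {factor: similarity in [0,1]}, measure: {frozenset of factors: g value}."""
    items = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    result = 0.0
    for i in range(1, len(items) + 1):
        subset = frozenset(f for f, _ in items[:i])
        h_i = items[i - 1][1]                       # i-th largest similarity
        result = max(result, min(h_i, measure[subset]))
    return result

scores = {"homogram": 0.8, "color": 0.6, "energy": 0.9}
measure = {                                          # fuzzy measure g (monotone, g(all) = 1)
    frozenset(["homogram"]): 0.4, frozenset(["color"]): 0.3, frozenset(["energy"]): 0.35,
    frozenset(["homogram", "color"]): 0.7, frozenset(["homogram", "energy"]): 0.75,
    frozenset(["color", "energy"]): 0.6, frozenset(["homogram", "color", "energy"]): 1.0,
}
print(sugeno_integral(scores, measure))
```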

Implementation of an improved real-time object tracking algorithm using brightness feature information and color information of object

  • Kim, Hyung-Hoon;Cho, Jeong-Ran
    • Journal of the Korea Society of Computer and Information / v.22 no.5 / pp.21-28 / 2017
  • As digital imaging technology has developed and become widespread, digital imaging systems are used for many purposes across society. Real-time object tracking from digital image data is one of the core technologies required in fields such as security systems and robot systems. Among existing object tracking technologies, CamShift tracks an object using its color information. Recently, image data captured with infrared cameras has become widely used to meet the varied demands on digital imaging equipment, but the existing CamShift method cannot track objects in image data without color information. Our proposed tracking algorithm tracks the object by analyzing color when valid color information exists in the image data; otherwise, it generates brightness feature information and tracks the object with it. The brightness feature information is generated from the width-to-height ratios of the areas segmented by brightness. Experimental results show that our tracking algorithm can track objects in real time not only in general image data containing color information but also in image data captured by an infrared camera.
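
A rough sketch of the color-or-brightness fallback idea using OpenCV's standard CamShift follows; the saturation threshold is an assumption, and the paper's width/height-ratio brightness feature is not reproduced.

```python
# Illustrative sketch: hue-based CamShift when color is informative, with a simple
# brightness-histogram fallback otherwise. The paper's brightness feature
# (width/height ratios of brightness-segmented areas) is not reproduced here.
import cv2
import numpy as np

def build_roi_hist(frame_bgr, window):
    x, y, w, h = window
    hsv_roi = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    # use hue if the ROI is sufficiently saturated (valid color), otherwise value (brightness);
    # the threshold of 40 is an assumed value
    channel = 0 if hsv_roi[:, :, 1].mean() > 40 else 2
    ranges = [0, 180] if channel == 0 else [0, 256]
    hist = cv2.calcHist([hsv_roi], [channel], None, [32], ranges)
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist, channel

def track_step(frame_bgr, window, roi_hist, channel):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    ranges = [0, 180] if channel == 0 else [0, 256]
    back_proj = cv2.calcBackProject([hsv], [channel], roi_hist, ranges, 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, window = cv2.CamShift(back_proj, window, criteria)
    return window
```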

Real-time Color Recognition Based on Graphic Hardware Acceleration (그래픽 하드웨어 가속을 이용한 실시간 색상 인식)

  • Kim, Ku-Jin;Yoon, Ji-Young;Choi, Yoo-Joo
    • Journal of KIISE: Computing Practices and Letters / v.14 no.1 / pp.1-12 / 2008
  • In this paper, we present a real-time algorithm for recognizing vehicle color from indoor and outdoor vehicle images based on GPU (Graphics Processing Unit) acceleration. In the preprocessing step, we construct feature vectors from sample vehicle images of different colors. We then combine the feature vectors for each color and store them as a reference texture to be used on the GPU. Given an input vehicle image, the CPU constructs its feature vector, and the GPU compares it with the sample feature vectors in the reference texture. The similarities between the input feature vector and the sample feature vectors for each color are measured, and the result is transferred back to the CPU to recognize the vehicle color. The output is categorized into seven colors: three achromatic colors (black, silver, and white) and four chromatic colors (red, yellow, blue, and green). We construct feature vectors using histograms of hue-saturation pairs and hue-intensity pairs, with a weight factor given to the saturation values. Our algorithm achieves a 94.67% color recognition rate by using a large number of sample images captured in various environments, generating feature vectors that distinguish different colors, and utilizing an appropriate likelihood function. We also accelerate color recognition by exploiting the parallel computation capability of the GPU. In the experiments, we constructed a reference texture from 7,168 sample images, 1,024 for each color. The average time to generate a feature vector is 0.509 ms for a 150×113 image. After the feature vector is constructed, GPU-based color recognition takes 2.316 ms on average, 5.47 times faster than executing the algorithm on the CPU. Our experiments were limited to vehicle images, but the algorithm can be extended to images of general objects.
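
The following CPU-side sketch illustrates the kind of saturation-weighted hue-saturation histogram feature and nearest-reference comparison described; the bin counts, weighting, likelihood function, and GPU implementation of the paper are not reproduced.

```python
# CPU-side sketch of a saturation-weighted hue-saturation histogram feature compared
# against per-color reference vectors. Bin counts and the weighting are assumed values;
# the GPU acceleration step is omitted.
import cv2
import numpy as np

def hs_feature(bgr_image, hue_bins=18, sat_bins=8):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    h = hsv[:, :, 0].ravel()
    s = hsv[:, :, 1].ravel().astype(np.float32)
    hist, _, _ = np.histogram2d(h, s, bins=[hue_bins, sat_bins],
                                range=[[0, 180], [0, 256]], weights=s / 255.0)
    vec = hist.ravel()
    return vec / (vec.sum() + 1e-9)

def nearest_color(feature, references):
    """references: {color_name: feature vector}; returns the most similar color."""
    return min(references, key=lambda c: np.linalg.norm(feature - references[c]))
```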

The Facial Area Extraction Using Multi-Channel Skin Color Model and The Facial Recognition Using Efficient Feature Vectors (Multi-Channel 피부색 모델을 이용한 얼굴영역추출과 효율적인 특징벡터를 이용한 얼굴 인식)

  • Choi Gwang-Mi;Kim Hyeong-Gyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.7 / pp.1513-1517 / 2005
  • In this paper, we use a multi-channel skin color model based on Hue, Cb, and Cg, which uses the red, green, and blue channels together while removing the brightness component, to model facial skin color more effectively and extract the facial region. We apply the efficient higher-order local autocorrelation (HOLA) function with 26 feature vectors to the segmented facial region and to its edge image extracted with the Haar wavelet to obtain feature vectors of the facial area. The computed feature vectors are used as data for facial recognition through neural network training. Simulation results demonstrate that the proposed algorithm improves both recognition rate and speed.
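
A rough sketch of multi-channel skin-color masking is given below; the threshold ranges and the simplified green-chrominance (Cg) formula are assumptions, not the paper's model parameters, and the HOLA and neural-network stages are omitted.

```python
# Rough sketch of multi-channel skin-color masking. Threshold ranges and the simplified
# Cg (green-difference chrominance) formula are illustrative assumptions only.
import cv2
import numpy as np

def skin_mask(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    h = hsv[:, :, 0].astype(np.float32)
    cb = ycrcb[:, :, 2].astype(np.float32)
    b, g, r = [bgr_image[:, :, i].astype(np.float32) for i in range(3)]
    cg = 128.0 + 0.5 * g - 0.25 * r - 0.25 * b      # simplified green chrominance (assumed)
    mask = (h < 25) & (cb > 95) & (cb < 130) & (cg > 110) & (cg < 135)
    return mask.astype(np.uint8) * 255
```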

Automatic Speechreading Feature Detection Using Color Information (색상 정보를 이용한 자동 독화 특징 추출)

  • Lee, Kyong-Ho;Yang, Ryong;Rhee, Sang-Burm
    • Journal of the Korea Society of Computer and Information / v.13 no.6 / pp.107-115 / 2008
  • Facial feature detection plays an important role in applications such as automatic speechreading, human-computer interfaces, face recognition, and face image database management. We propose an automatic speechreading feature detection algorithm for color images that uses color information. Facial feature pixels take on different values depending on luminance and chrominance in various color spaces. Facial features are detected by amplifying or reducing these values and comparing the resulting images. The eye and nose positions, the inner boundary of the lips, and the outer line of the teeth are detected, with very encouraging results.
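
In the spirit of detecting features by amplifying and reducing color values, the sketch below amplifies a chrominance difference to highlight lip pixels; the coefficients and the Otsu thresholding are assumptions, and the eye, nose, and tooth detection steps are not reproduced.

```python
# Illustrative sketch only: amplifying a chrominance difference to highlight lip pixels.
# Coefficients and thresholding are assumed; other feature detection steps are omitted.
import cv2
import numpy as np

def lip_likelihood(bgr_face):
    ycrcb = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    cr, cb = ycrcb[:, :, 1], ycrcb[:, :, 2]
    amplified = cv2.normalize(cr - 0.5 * cb, None, 0, 255, cv2.NORM_MINMAX)  # boost red chrominance
    _, mask = cv2.threshold(amplified.astype(np.uint8), 0, 255,
                            cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    return mask
```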

Content-Based Image Retrieval using Region Feature Vector (영역 특징벡터를 이용한 내용기반 영상검색)

  • Kim Dong-Woo;Song Young-Jun;Kim Young-Gil;Ah Jae-Hyeong
    • The KIPS Transactions: Part B / v.13B no.1 s.104 / pp.47-52 / 2006
  • This paper proposes a content-based image retrieval method using region feature vectors to overcome the disadvantages of existing color histogram methods, whose accuracy is reduced by quantization error, among other problems. To address this, we convert the color information to HSV space, quantize the hue component, which carries the pure color information, compute its histogram, and use it as a retrieval feature that is robust to brightness changes, translation, and rotation. We also address the most serious shortcoming of color histogram methods by dividing the image into sixteen regions and comparing the regions individually. Accuracy is further improved using edge information and the DC coefficient of the DCT. Experiments with 1,000 color images show that the proposed method achieves better precision than existing methods.
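
A minimal sketch of the region-wise idea, splitting the image into a 4x4 grid and computing a quantized hue histogram per region, follows; the bin counts and similarity measure are assumptions, and the edge/DCT-DC refinements are omitted.

```python
# Sketch of a region-wise hue-histogram feature over a 4x4 grid (sixteen regions).
# Bin counts and the similarity measure are assumed values.
import cv2
import numpy as np

def region_hue_features(bgr_image, grid=4, hue_bins=16):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0]
    features = []
    for row in np.array_split(hue, grid, axis=0):
        for region in np.array_split(row, grid, axis=1):
            hist, _ = np.histogram(region, bins=hue_bins, range=(0, 180))
            features.append(hist / (region.size + 1e-9))   # normalized per-region histogram
    return np.concatenate(features)                         # 16 regions x hue_bins values

def region_similarity(f1, f2, grid=4):
    # averaged per-region L1 distance, mapped to a similarity in [0, 1]
    return 1.0 - 0.5 * np.abs(f1 - f2).sum() / (grid * grid)
```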

Wine Label Recognition System using Image Similarity (이미지 유사도를 이용한 와인라벨 인식 시스템)

  • Jung, Jeong-Mun;Yang, Hyung-Jeong;Kim, Soo-Hyung;Lee, Guee-Sang;Kim, Sun-Hee
    • The Journal of the Korea Contents Association / v.11 no.5 / pp.125-137 / 2011
  • Research on systems that take images captured with camera phones as input has recently been conducted actively. This paper proposes a system that displays, in ranked order, wine label images similar to an input wine label. To calculate image similarity, the representative color of each cell of the image, the recognized text color, the background color, and the distribution of feature points are used as features. To compute color differences, RGB is converted to CIE-Lab, and feature points are extracted using the Harris corner detection algorithm. Weights are applied to the representative cell colors, the text color, and the background color, and the image similarity is calculated by normalizing the color differences and the difference in feature point distributions. After the similarity between the input image and the images in the database is calculated, the database images are shown in descending order of similarity, which reduces the effort users must spend searching the results again for similar wine labels.
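
The sketch below illustrates two of the cues described, per-cell representative CIE-Lab colors and Harris corner points; the grid size, Harris parameters, and the weighting and normalization used to merge the cues are assumptions.

```python
# Sketch of two cues: a representative CIE-Lab color per grid cell and Harris corner
# points. Grid size and Harris parameters are assumed; cue weighting is not reproduced.
import cv2
import numpy as np

def cell_colors_lab(bgr_image, grid=8):
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB).astype(np.float32)
    cells = []
    for row in np.array_split(lab, grid, axis=0):
        for cell in np.array_split(row, grid, axis=1):
            cells.append(cell.reshape(-1, 3).mean(axis=0))    # mean L, a, b per cell
    return np.array(cells)

def harris_points(bgr_image, thresh_ratio=0.01):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(response > thresh_ratio * response.max())
    return np.stack([xs, ys], axis=1)

def color_distance(cells_a, cells_b):
    return np.linalg.norm(cells_a - cells_b, axis=1).mean()   # mean Lab distance over cells
```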

Color-Image Guided Depth Map Super-Resolution Based on Iterative Depth Feature Enhancement

  • Lijun Zhao;Ke Wang;Jinjing Zhang;Jialong Zhang;Anhong Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.8 / pp.2068-2082 / 2023
  • With the rapid development of deep learning, Depth Map Super-Resolution (DMSR) methods have achieved increasingly advanced performance. However, when the upsampling rate is very large, these DMSR methods have difficulty capturing the structural consistency between color features and depth features. We therefore propose a color-image guided DMSR method based on iterative depth feature enhancement. Considering the feature difference between high-quality color features and low-quality depth features, we propose to decompose the depth features into High-Frequency (HF) and Low-Frequency (LF) components. Because of the structural homogeneity of the depth HF components and the HF color features, only the HF color features are used to enhance the depth HF features, without using the LF color features. Before each HF/LF depth feature decomposition, the LF component of the previous decomposition and the updated HF component are combined. After decomposing and reorganizing the recursively updated features, all the depth LF features are combined with the final updated depth HF features to obtain the enhanced depth features. The enhanced depth features are then fed into a multistage depth map fusion reconstruction block, in which a cross enhancement module is introduced to fully exploit the spatial correlation of the depth map by interleaving various features between different convolution groups. Experimental results show that the proposed method is superior to many recent DMSR methods on two objective measures, root mean square error and mean absolute deviation.
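
As a highly simplified, image-space illustration of splitting depth into LF and HF parts and enhancing only the HF part with color high frequencies, the following sketch uses Gaussian blurring; the paper itself operates on learned deep features with recursive updates and a multistage fusion network, none of which is reproduced.

```python
# Highly simplified, image-space illustration of LF/HF splitting with color-HF guidance.
# Kernel size and the guidance weight alpha are assumed values.
import cv2
import numpy as np

def enhance_depth_hf(depth, color_gray, alpha=0.3):
    depth = depth.astype(np.float32)
    color_gray = color_gray.astype(np.float32)
    depth_lf = cv2.GaussianBlur(depth, (9, 9), 0)             # low-frequency component
    depth_hf = depth - depth_lf                                # high-frequency component
    color_hf = color_gray - cv2.GaussianBlur(color_gray, (9, 9), 0)
    enhanced_hf = depth_hf + alpha * color_hf                  # HF guidance from the color image
    return depth_lf + enhanced_hf
```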

FE-CBIRS Using Color Distribution for Cut Retrieval in IPTV (IPTV에서 컷 검색을 위한 색 분포정보를 이용한 FE-CBIRS)

  • Koo, Gun-Seo
    • Journal of the Korea Society of Computer and Information / v.14 no.1 / pp.91-97 / 2009
  • This paper proposes a novel FE-CBIRS that finds the best position of a cut to be retrieved based on color feature distribution in IPTV digital content. Conventional CBIRS have used a method that classifies images using color and shape information together, as well as a method that searches using feature information of the entire image combined with feature information of partial regions extracted by segmentation. In such algorithms, the mean, standard deviation, and skewness are used as color features for the hue, saturation, and intensity components, respectively; when partial regions are used, only a few dominant colors are considered, and for shape features invariant moments are mainly computed on the extracted regions. For these reasons, conventional CBIRS have had problems with processing time and accuracy. To address these problems, this paper proposes FE-CBIRS, which speeds up retrieval by classifying and indexing the extracted color information by class and by using a set of range-restricted cuts as comparison images.
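
A small sketch of per-channel color moments (mean, standard deviation, skewness) follows; OpenCV's HSV conversion stands in for the HSI representation, and the indexing and cut-selection steps are omitted.

```python
# Sketch of per-channel color-moment features (mean, std, skewness) on an HSV image,
# used here as a stand-in for the HSI representation mentioned in the abstract.
import cv2
import numpy as np

def color_moments(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV).astype(np.float32)
    feats = []
    for ch in range(3):                                        # H, S, V channels
        x = hsv[:, :, ch].ravel()
        mean, std = x.mean(), x.std()
        skew = ((x - mean) ** 3).mean() / (std ** 3 + 1e-9)
        feats.extend([mean, std, skew])
    return np.array(feats)                                     # 9-dimensional color-moment vector
```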