• Title/Summary/Keyword: 색상 히스토그램 (color histogram)

Search Results: 221

Content-Based Retrieval System Design for Image and Video using Multiple Features (다중 특징을 이용한 영상 및 비디오 내용 기반 검색 시스템 설계)

  • Go, Byeong-Cheol;Lee, Hae-Seong;Byeon, Hye-Ran
    • Journal of KIISE:Software and Applications, v.26 no.12, pp.1519-1530, 1999
  • As the amount of multimedia information grows at a very fast rate, efficient management of multimedia databases has become increasingly important. In addition, growing user demand for content-based retrieval of non-textual data such as images has heightened interest in video indexing. This paper proposes a query-by-image environment that retrieves similar images and similar representative frames from the representative frames extracted at segmented shot boundaries and from a still-image database. Rather than relying only on the conventional discretized color histogram, image queries use the proposed CS and GS methods so that both color and directional information are considered. For query-by-face, the user selects a person in the representative frame browser; the system then extracts a face region from that frame and retrieves representative frames containing similar people using two features: the face's edge information and a biorthogonal wavelet transform.
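The discretized color histogram this abstract treats as the baseline can be sketched in a few lines. The paper's own CS and GS descriptors are not described in enough detail here to reproduce, so the following numpy sketch (function names are my own) shows only the baseline: quantize each RGB channel, build a joint histogram, and compare two histograms by intersection.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Discretized RGB color histogram, normalized to sum to 1.
    img: H x W x 3 uint8 array."""
    # Quantize each channel into `bins` levels, then form a joint bin index.
    q = (img.astype(np.int64) * bins) // 256          # values in [0, bins)
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical distributions."""
    return float(np.minimum(h1, h2).sum())
```

Histogram intersection is one common choice of similarity; distance measures such as chi-square or L1 are equally plausible here.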

Object Segmentation/Detection through learned Background Model and Segmented Object Tracking Method using Particle Filter (배경 모델 학습을 통한 객체 분할/검출 및 파티클 필터를 이용한 분할된 객체의 움직임 추적 방법)

  • Lim, Su-chang;Kim, Do-yeon
    • Journal of the Korea Institute of Information and Communication Engineering, v.20 no.8, pp.1537-1545, 2016
  • In real-time video sequences, object segmentation and tracking methods are actively applied in various applications such as surveillance systems, mobile robots, and augmented reality. This paper proposes a robust object tracking method. Background models are constructed by learning the initial part of each video sequence. The moving objects are then detected via object segmentation using background subtraction. The regions of the detected objects are continuously tracked using an HSV color histogram with a particle filter. The proposed segmentation method is superior to the average background model in terms of moving object detection. In addition, the proposed tracking method provides continuous tracking results even when multiple objects with similar colors exist and severe occlusions occur among them. Experiments on two video sequences yielded an average object overlapping rate of 85.9% and an average object tracking rate of 96.3%.
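The background-model step can be sketched minimally, assuming a simple per-pixel average over the initial frames (the paper's learned model may be more elaborate, and its particle filter is not reproduced here):

```python
import numpy as np

def learn_background(frames):
    """Average background model learned from the initial frames
    (grayscale, each H x W)."""
    return np.stack(frames).astype(float).mean(axis=0)

def segment_foreground(frame, background, thresh=30.0):
    """Binary foreground mask via background subtraction:
    pixels that deviate from the model by more than `thresh`."""
    return np.abs(frame.astype(float) - background) > thresh
```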

The Method of Wet Road Surface Condition Detection With Image Processing at Night (영상처리기반 야간 젖은 노면 판별을 위한 방법론)

  • KIM, Youngmin;BAIK, Namcheol
    • Journal of Korean Society of Transportation, v.33 no.3, pp.284-293, 2015
  • The objective of this paper is to determine road surface conditions using images collected from closed-circuit television (CCTV) cameras installed along the roadside. First, techniques for detecting wet surfaces at nighttime were examined. The literature review revealed that image processing using polarization is one of the preferred options; however, the polarization characteristics of road surface images are hard to use at nighttime because of irregular or absent lighting. In this study, we propose a new discriminant for detecting wet and dry road surfaces from nighttime CCTV image data. To characterize road surface conditions in night vision, we applied the wavelet packet transform to analyze road surface textures. Additionally, to exploit the luminance features of night CCTV images, we computed the intensity histogram based on the HSI (Hue, Saturation, Intensity) color model. With a set of 200 images taken from the field, we constructed a detection hyperplane with an SVM (Support Vector Machine). Field tests verified the ability to detect wet road surfaces and produced reliable results. The outcome of this study is also expected to be used for monitoring road surfaces to improve safety.
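The HSI-based intensity histogram feature can be illustrated with numpy, under the common simplification I = (R + G + B) / 3; the bin count is an assumption, not taken from the paper:

```python
import numpy as np

def intensity_histogram(img, bins=32):
    """Normalized intensity histogram under the HSI model,
    where I = (R + G + B) / 3.  img: H x W x 3 uint8 array."""
    intensity = img.astype(float).mean(axis=2)        # H x W intensity plane
    hist, _ = np.histogram(intensity, bins=bins, range=(0, 256))
    return hist.astype(float) / hist.sum()
```

A feature vector like this, concatenated with wavelet-packet texture measures, is the kind of input one would feed to an SVM classifier.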

Development of an Image Processing Algorithm for Paprika Recognition and Coordinate Information Acquisition using Stereo Vision (스테레오 영상을 이용한 파프리카 인식 및 좌표 정보 획득 영상처리 알고리즘 개발)

  • Hwa, Ji-Ho;Song, Eui-Han;Lee, Min-Young;Lee, Bong-Ki;Lee, Dae-Weon
    • Journal of Bio-Environment Control, v.24 no.3, pp.210-216, 2015
  • The purpose of this study was to develop an image processing algorithm that recognizes paprika and acquires its 3D coordinates from stereo images, in order to precisely control the end-effector of an automatic paprika harvester. First, H and S thresholds were set using HSI histogram analysis to extract the ROI (region of interest) from raw paprika cultivation images. Next, the fundamental matrix of the stereo camera system was calculated to match the extracted ROIs between corresponding images. Epipolar lines were acquired using the F matrix, and an 11×11 mask was used to compare pixels along each line. Distances between the extracted corresponding points were calibrated using the 3D coordinates of a calibration board. Nonlinear regression analysis was used to model the relation between the pixel disparity of corresponding points and depth (Z). Finally, the program calculated the horizontal (X) and vertical (Y) coordinates from the stereo camera geometry. The average error was 5.3 mm in the horizontal coordinate, 18.8 mm in the vertical coordinate, and 5.4 mm in depth. Most of the error occurred at depths of 400~450 mm and in distorted regions of the image.
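The textbook relation behind the disparity-to-depth step is Z = f·B/d for an ideal rectified stereo pair. The paper fits a nonlinear regression instead of using this ideal model, so the following sketch is only the standard pinhole geometry for orientation; all parameter values are made up.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Ideal rectified stereo: Z = f * B / d."""
    return focal_px * baseline_mm / np.asarray(disparity_px, dtype=float)

def xy_from_depth(u, v, cx, cy, focal_px, z_mm):
    """Back-project pixel (u, v) to horizontal (X) and vertical (Y)
    coordinates at depth Z, given the principal point (cx, cy)."""
    x = (u - cx) * z_mm / focal_px
    y = (v - cy) * z_mm / focal_px
    return x, y
```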

Edge-based spatial descriptor for content-based Image retrieval (내용 기반 영상 검색을 위한 에지 기반의 공간 기술자)

  • Kim, Nac-Woo;Kim, Tae-Yong;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP, v.42 no.5 s.305, pp.1-10, 2005
  • Content-based image retrieval systems are being actively investigated owing to their ability to retrieve images based on the actual visual content rather than on manually associated textual descriptions. In this paper, we propose a novel approach for image retrieval based on edge structural features, using an edge correlogram and a color coherence vector. After the color vector angle is applied in the pre-processing stage, an image is divided into two parts: a high-frequency image and a low-frequency image. In the low-frequency image, the global color distribution of smooth pixels is extracted by the color coherence vector, thereby incorporating spatial information into the proposed color descriptor. Meanwhile, in the high-frequency image, the distribution of gray pairs at edges is extracted by the edge correlogram. Since the proposed algorithm includes spatial and edge information between colors, it can robustly reduce the effect of significant changes in appearance and shape in image analysis. The proposed method provides a simple and flexible description of images with complex scenes in terms of the structural features of the image contents. Experimental evidence suggests that our algorithm outperforms recent histogram refinement methods for image indexing and retrieval. To index the multidimensional feature vectors, we use an R*-tree structure.
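A correlogram counts how often pairs of quantized levels co-occur at a given distance. As a simplified stand-in for the paper's edge correlogram (ignoring the edge-pixel restriction and the color coherence vector), a distance-1 horizontal gray-pair co-occurrence can be sketched as:

```python
import numpy as np

def gray_correlogram(gray, levels=8, distance=1):
    """Normalized co-occurrence histogram of quantized gray pairs at a
    fixed horizontal offset.  gray: H x W uint8 array."""
    q = (gray.astype(np.int64) * levels) // 256       # quantized levels
    a = q[:, :-distance].ravel()                      # left pixel of each pair
    b = q[:, distance:].ravel()                       # right pixel of each pair
    hist = np.bincount(a * levels + b,
                       minlength=levels * levels).astype(float)
    return hist / hist.sum()
```

A full correlogram would aggregate over several distances and all directions; this one-direction version keeps the idea visible in a few lines.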

Color Image Rendering using A Modified Image Formation Model (변형된 영상 생성 모델을 이용한 칼라 영상 보정)

  • Choi, Ho-Hyoung;Yun, Byoung-Ju
    • Journal of the Institute of Electronics Engineers of Korea SP, v.48 no.1, pp.71-79, 2011
  • The objective of the imaging pipeline is to transform the original scene into a display image that appears similar. Generally, gamma adjustment or histogram-based methods are used to improve contrast and detail. However, this is insufficient when the intensity and chromaticity of the illumination vary with geometric position. Thus, MSR (Multi-Scale Retinex) has been proposed. MSR is based on a channel-independent logarithm and depends on the scale of the Gaussian filter, which varies with the input image. Therefore, after color correction, image quality degradations such as halo artifacts, graying-out, and dominant color casts may occur. Accordingly, this paper presents a novel color correction method using a modified image formation model in which the image is decomposed into three components: global illumination, local illumination, and reflectance. The global illumination is obtained by Gaussian filtering of the original image, and the local illumination is estimated using a JND-based adaptive filter. The reflectance is then estimated by dividing the original image by the estimated global and local illumination, removing the influence of illumination effects. The output image is obtained in the sRGB color representation. Experimental results show that the proposed method yields better color correction performance than conventional methods.
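The core decomposition idea, dividing the image by a smoothed estimate of its illumination to recover reflectance, can be sketched with a box blur standing in for the paper's Gaussian and JND-based filters (which are considerably more involved):

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box blur; a simple stand-in for the Gaussian filter used
    to estimate smooth illumination."""
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode='same'),
                              1, img.astype(float))
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode='same'),
                               0, out)

def estimate_reflectance(img, eps=1.0):
    """Reflectance = image / (estimated illumination + eps).
    eps avoids division by zero in dark regions."""
    return img.astype(float) / (box_blur(img) + eps)
```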

Color Image Enhancement Based on an Improved Image Formation Model (개선된 영상 생성 모델에 기반한 칼라 영상 향상)

  • Choi, Doo-Hyun;Jang, Ick-Hoon;Kim, Nam-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP, v.43 no.6 s.312, pp.65-84, 2006
  • In this paper, we present an improved image formation model and propose a color image enhancement method based on it. In the model, an input image is represented as a product of global illumination, local illumination, and reflectance. In the proposed enhancement, an input RGB color image is converted into an HSV color image. Under the assumption of white-light illumination, the H and S component images are left unchanged and only the V component image is enhanced based on the image formation model. The global illumination is estimated by applying a linear LPF with a wide support region to the input V component image, and the local illumination by applying a JND (just noticeable difference)-based nonlinear LPF with a narrow support region to the processed image from which the estimated global illumination has been eliminated. The reflectance is estimated by dividing the input V component image by the estimated global and local illuminations. After gamma correction of the three estimated components, the output V component image is obtained from their product. Histogram modeling is then applied to obtain the final output V component image. Finally, an output RGB color image is composed from the H and S component images of the input and the final output V component image. Experimental results on a test image DB built from color images downloaded from the NASA homepage and from MPEG-7 CCD color images show that the proposed method produces output images with well-increased global and local contrast, without halo effects or color shift.
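The recombination step, gamma-correcting each estimated component and multiplying them back together, can be sketched as follows; the gamma values are illustrative, not taken from the paper:

```python
import numpy as np

def enhance_v(v, global_illum, local_illum, gammas=(0.8, 0.9, 1.2)):
    """Recombine gamma-corrected global illumination, local illumination,
    and reflectance into an output V image (all arrays in [0, 1])."""
    eps = 1e-6
    # Reflectance estimated by dividing V by the two illumination estimates.
    reflect = v / (global_illum * local_illum + eps)
    g_g, g_l, g_r = gammas
    out = (global_illum ** g_g) * (local_illum ** g_l) * (reflect ** g_r)
    return np.clip(out, 0.0, 1.0)
```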

Auto Frame Extraction Method for Video Cartooning System (동영상 카투닝 시스템을 위한 자동 프레임 추출 기법)

  • Kim, Dae-Jin;Koo, Ddeo-Ol-Ra
    • The Journal of the Korea Contents Association, v.11 no.12, pp.28-39, 2011
  • While broadband multimedia technologies have been developing, the commercial market for digital contents has also been spreading widely. Above all, the digital cartoon market, such as internet cartoons, has grown rapidly, so video cartooning has been continuously researched to address the shortage and limited variety of cartoons. Until now, video cartooning research has focused on non-photorealistic rendering and word balloons, but meaningful frame extraction must take priority when a cartooning system is deployed as a service. In this paper, we propose a new automatic frame extraction method for video cartooning systems. First, we separate the video and audio streams of a movie and extract feature parameters such as MFCC and ZCR from the audio data. The audio signal is classified into speech, music, and speech+music by comparison with pre-trained audio data using a GMM classifier, which allows speech regions to be located. For the video, we extract candidate frames using a general scene change detection method such as the histogram method, and then select meaningful frames for the cartoon by applying face detection to the extracted frames. Scene transition frames containing faces within speech regions are then extracted automatically, so that frames suitable for movie cartooning are obtained over continuous intervals of the time domain.
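The histogram-based scene change detection mentioned above is commonly implemented as a normalized histogram difference between consecutive frames; a minimal grayscale sketch, with an arbitrary cut threshold, might look like:

```python
import numpy as np

def hist_diff(f1, f2, bins=64):
    """Normalized L1 histogram difference between two grayscale frames,
    in [0, 1]; 0 means identical histograms."""
    h1, _ = np.histogram(f1, bins=bins, range=(0, 256))
    h2, _ = np.histogram(f2, bins=bins, range=(0, 256))
    return np.abs(h1 - h2).sum() / (2 * f1.size)

def detect_cuts(frames, thresh=0.5):
    """Indices where consecutive frames differ enough to mark a cut."""
    return [i + 1 for i in range(len(frames) - 1)
            if hist_diff(frames[i], frames[i + 1]) > thresh]
```

Real systems typically add temporal smoothing or adaptive thresholds to avoid false cuts from flashes and fast motion.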

Improved Binarization and Removal of Noises for Effective Extraction of Characters in Color Images (컬러 영상에서 효율적 문자 추출을 위한 개선된 2치화 및 잡음 제거)

  • 이은주;정장호
    • Journal of Information Technology Application, v.3 no.2, pp.133-147, 2001
  • This paper proposes a new algorithm for binarization and noise removal in color images containing characters and pictures. Binarization is performed with a threshold computed from the color relationship between the numbers of pixels in the background and in the character candidates, together with a pre-threshold that divides the input image into background and character candidates. The pre-threshold is computed from the R, G, B histograms of the image, and the background and character candidates are separated by it. Because the threshold can be decided dynamically according to the amount of noise, the character images are preserved while the noise is removed as much as possible. In this study, we also built a noise pattern table by analyzing the noise patterns found in various color images, in order to remove noise from the images. The noise distribution of an image can be estimated by pattern matching against this table, and that distribution classifies the noise level of the image into three categories. Since noise removal proceeds through different procedures according to the classified level, the processing time is reduced and the efficiency of noise removal is improved. Recognition experiments on characters extracted from color images by the proposed algorithm confirmed that it achieves a recognition rate comparable to that obtained for plain documents without colors or pictures.
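The paper derives its pre-threshold from the R, G, B histograms with its own rule. As a stand-in illustration of a histogram-derived global threshold (not the authors' algorithm), Otsu's method in numpy looks like:

```python
import numpy as np

def otsu_threshold(gray):
    """Histogram-based global threshold maximizing between-class variance
    (Otsu's method).  gray: uint8 array."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)                       # class-0 pixel counts
    cum_mean = np.cumsum(hist * np.arange(256)) # class-0 intensity sums
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum[t - 1], total - cum[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t - 1] / w0
        m1 = (cum_mean[255] - cum_mean[t - 1]) / w1
        var = w0 * w1 * (m0 - m1) ** 2          # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```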


Wavelet Transform-based Face Detection for Real-time Applications (실시간 응용을 위한 웨이블릿 변환 기반의 얼굴 검출)

  • 송해진;고병철;변혜란
    • Journal of KIISE:Software and Applications, v.30 no.9, pp.829-842, 2003
  • In this paper, we propose a new face detection and tracking method based on template matching for real-time applications such as teleconferencing, telecommunication, the front stage of a face recognition surveillance system, and video-phone applications. Since the main purpose of the paper is to track a face regardless of the environment, we use a template-based face tracking method. To generate robust face templates, we apply a wavelet transform to the average face image and extract three types of wavelet templates from the transformed low-resolution average face. Because template matching is generally sensitive to changes in illumination conditions, we apply min-max normalization with histogram equalization according to the variation of intensity. A tracking method is also applied to reduce the computation time and predict a precise face candidate region. Finally, facial components are detected, and from the relative distance between the two eyes we estimate the size of the facial ellipse.
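The illumination compensation mentioned above, min-max normalization followed by histogram equalization, can be sketched in numpy as follows (bin layout and output range are the usual conventions, not details from the paper):

```python
import numpy as np

def min_max_normalize(gray, lo=0.0, hi=255.0):
    """Linearly rescale intensities to [lo, hi]."""
    g = gray.astype(float)
    rng = g.max() - g.min()
    if rng == 0:
        return np.full_like(g, lo)
    return (g - g.min()) / rng * (hi - lo) + lo

def equalize_histogram(gray):
    """Map gray levels through the normalized CDF of their histogram.
    gray: uint8 array with at least two distinct values."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255
    return cdf[gray].astype(np.uint8)
```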