• Title/Summary/Keyword: Object Color


Video Scene Detection using Shot Clustering based on Visual Features (시각적 특징을 기반한 샷 클러스터링을 통한 비디오 씬 탐지 기법)

  • Shin, Dong-Wook;Kim, Tae-Hwan;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.47-60
    • /
    • 2012
  • Video data is unstructured and complex. As efficient management and retrieval of video data grow in importance, studies on video parsing based on the visual features of video content aim to reconstruct video data into a meaningful structure. Early work on video parsing focused on splitting video into shots, but detecting shot boundaries, which are defined physically, does not consider the semantic associations within video data. Recently, research has turned to clustering methods that group semantically associated shots into video scenes, which are defined by semantic boundaries. Previous scene-detection studies apply clustering algorithms whose shot-similarity measures depend mainly on color features. However, color features are noisy and change abruptly when an unexpected object intervenes, which makes it difficult to identify shots or scenes correctly and to detect gradual transitions such as dissolves, fades, and wipes. To solve these problems, we propose the Scene Detector using Color histogram, corner Edge and Object color histogram (SDCEO), which detects video scenes by clustering similar shots that make up the same event, based on visual features including the color histogram, the corner edge, and the object color histogram. Notably, SDCEO uses the edge feature together with the color feature and, as a result, effectively detects gradual as well as abrupt transitions. SDCEO consists of the Shot Bound Identifier and the Video Scene Detector. The Shot Bound Identifier comprises a Color Histogram Analysis step and a Corner Edge Analysis step.
In the Color Histogram Analysis step, SDCEO uses the color histogram feature to organize shot boundaries. The color histogram, which records the percentage of each quantized color among all pixels in a frame, is chosen for its good performance, as also reported in other work on content-based image and video analysis. To organize shot boundaries, SDCEO joins associated sequential frames into shot boundaries by measuring the similarity of the color histograms between frames. In the Corner Edge Analysis step, SDCEO identifies the final shot boundaries using the corner edge feature: it detects associated shot boundaries by comparing the corner edge feature between the last frame of the previous shot boundary and the first frame of the next. In the Key-frame Extraction step, SDCEO compares each frame with all other frames in the same shot boundary, measures similarity by the Euclidean distance between histograms, and selects the frame most similar to all the others as the key-frame. The Video Scene Detector clusters associated shots that make up the same event using hierarchical agglomerative clustering based on visual features including the color histogram and the object color histogram, repeating the clustering until the similarity distance between shot boundaries falls below the threshold h. We built a prototype of SDCEO and ran experiments against a manually constructed baseline; the results, a precision of 93.3% for shot boundary detection and 83.3% for video scene detection, are satisfactory.
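As a rough sketch of the shot-boundary step described above (not the authors' SDCEO implementation; the bin count and threshold are illustrative assumptions), consecutive frames can be grouped into shots by comparing quantized color histograms:

```python
def color_histogram(pixels, bins=8):
    """Quantize RGB pixels into bins^3 buckets; return a normalized histogram."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    return [h / len(pixels) for h in hist]

def histogram_distance(h1, h2):
    """Euclidean distance between two normalized histograms."""
    return sum((a - b) ** 2 for a, b in zip(h1, h2)) ** 0.5

def segment_shots(frames, threshold=0.5):
    """Start a new shot whenever consecutive frames differ beyond the threshold."""
    hists = [color_histogram(f) for f in frames]
    shots = [[0]]
    for i in range(1, len(frames)):
        if histogram_distance(hists[i - 1], hists[i]) > threshold:
            shots.append([i])   # abrupt histogram change: shot boundary
        else:
            shots[-1].append(i)
    return shots
```

A corner-edge comparison between the last and first frames of adjacent shots, as in SDCEO, would then refine these boundaries.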

Comparisons of Color Spaces for Shadow Elimination (그림자 제거를 위한 색상 공간의 비교)

  • Lee, Gwang-Gook;Uzair, Muhammad;Yoon, Ja-Young;Kim, Jae-Jun;Kim, Whoi-Yul
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.5
    • /
    • pp.610-622
    • /
    • 2008
  • Moving object segmentation is an essential technique for various video surveillance applications. Its results often contain shadow regions caused by the color change of shadowed pixels, so moving object segmentation is usually followed by a shadow elimination process that removes these false detections. The common assumption adopted in previous work is that, under illumination variation, the chromaticity components are preserved while the intensity component changes. Hence, color transforms that separate the luminance and chromaticity components are usually used to remove shadow pixels. In this paper, various color spaces (YCbCr, HSI, normalized rgb, Yxy, Lab, c1c2c3) are examined to find the one most appropriate for shadow elimination. There have been some efforts to compare the influence of different color spaces on shadow elimination, but they are insufficient for comparing color distortion under illumination change because they used a specific shadow elimination scheme or different thresholds for different color spaces. To relieve these limitations, (1) the gradients at shadow boundaries cast on uniformly colored regions are examined for the chromaticity components only, to compare color distortion under illumination change, and (2) the accuracy of background subtraction is analyzed via ROC curves, to compare color spaces without the problem of threshold selection. In experiments on real video sequences, the YCbCr and normalized rgb color spaces gave the best shadow elimination results among the color spaces tested.
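The chromaticity-preservation assumption can be illustrated in the normalized rgb space. This is a minimal sketch, not the paper's evaluation protocol; the tolerance value is an assumption:

```python
def normalized_rgb(r, g, b):
    """Project RGB onto the normalized rgb plane, removing intensity."""
    s = r + g + b
    if s == 0:
        return (1 / 3, 1 / 3, 1 / 3)
    return (r / s, g / s, b / s)

def is_shadow(pixel, background, chroma_tol=0.02):
    """Classify a foreground pixel as shadow when its chromaticity matches
    the background while its intensity is lower (illumination-only change)."""
    pc, bc = normalized_rgb(*pixel), normalized_rgb(*background)
    chroma_ok = all(abs(p - b) < chroma_tol for p, b in zip(pc, bc))
    return chroma_ok and sum(pixel) < sum(background)
```

A shadowed pixel that merely scales the background color keeps the same chromaticity and is removed; a genuinely different object color is kept.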


Taste in Pollen and Byukgongmuhan - Hyo-Suk's art-for-art's sake - (<화분(花粉)>과 <벽공무한(碧空無限)>에 나타난 TASTE - 효석(孝石)의 예술지상주의(藝術至上主義) -)

  • Jeoung, Kyung-Ihm
    • Journal of Fashion Business
    • /
    • v.3 no.1
    • /
    • pp.159-175
    • /
    • 1999
  • In literature, a description of costume represents an individual's characteristics when the object is an individual; when the literary object is a group in a particular region, it plays an important role in representing the culture of the time. The aesthetic consciousness of Hyo-Suk Lee, who had embraced Western dandyism, is clearly expressed in his literary works. Hyo-Suk was unique in describing life-styles, such as the beauty of costume, art for art's sake, and leisure activities, and in the color imagery of his works. The color and style of a costume reveal the mental state of the wearer and also affect the emotional states of other people. Hyo-Suk's "Pollen (화분)" and "Byukongmuhan (벽공무한)" confirm that the mentality of characters can be hinted at through the description of costume, and that the imagery carried by a particular color can be altered by circumstances and settings. Hyo-Suk applies the vivid color contrasts that newly appeared in Fauvism to his descriptions of costume; in consequence, his works reflect the color aesthetics of Modern Art, in which fine art influences applied art.


Hardware implementation of CIE1931 color coordinate system transformation for color correction (색상 보정을 위한 CIE1931 색좌표계 변환의 하드웨어 구현)

  • Lee, Seung-min;Park, Sangwook;Kang, Bong-Soon
    • Journal of IKEEE
    • /
    • v.24 no.2
    • /
    • pp.502-506
    • /
    • 2020
  • With the development of autonomous driving, object recognition technology is becoming increasingly important. Haze removal is required because hazy weather reduces visibility and detectability in object recognition; however, an image from which haze has been removed does not properly reflect the original colors, causing detection errors. In this paper, we use the CIE1931 color coordinate system to expand or reduce the color area, and present an algorithm and hardware that restore real-world colors. We also implement hardware capable of real-time processing in a 4K environment, in step with the development of image media. The hardware was written in Verilog and implemented on an SoC verification board.
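The CIE1931 transformation can be sketched as follows (a software illustration of the coordinate conversion only, not the paper's Verilog design; the sRGB/D65 matrix and the gain factor are standard-but-assumed choices):

```python
def rgb_to_xy(r, g, b):
    """Linear sRGB (0..1) to CIE1931 xy chromaticity via the sRGB/D65 matrix."""
    X = 0.4124 * r + 0.3576 * g + 0.1805 * b
    Y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    Z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    s = X + Y + Z
    if s == 0:
        return (0.3127, 0.3290)  # fall back to the D65 white point
    return (X / s, Y / s)

def expand_gamut(x, y, gain=1.1, wx=0.3127, wy=0.3290):
    """Scale chromaticity away from the D65 white point to widen the color area."""
    return (wx + gain * (x - wx), wy + gain * (y - wy))
```

White stays fixed while saturated colors move outward, which matches the idea of extending the color area after haze removal.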

A Study on Game Contents Classification Service Method using Image Region Segmentation (칼라 영상 객체 분할을 이용한 게임 콘텐츠 분류 서비스 방안에 관한 연구)

  • Park, Chang Min
    • Journal of Service Research and Studies
    • /
    • v.5 no.2
    • /
    • pp.103-110
    • /
    • 2015
  • Recently, the classification of characters in 3D FPS games has emerged as a significant issue. In this study, we propose a game character classification method that extracts meaningful objects with simple operations using image region segmentation. The method first applies a non-linear RGB color model and octree color quantization, so that the input image is represented by fewer than 20 quantized colors and a small number of meaningful color histogram bins. The image is then divided into small blocks, and the histogram intersection between adjacent blocks is computed block by block, so that block boundaries caused by texture are excluded and only the boundary blocks of objects are extracted. The region enclosed by these boundary blocks is taken as a game object and can be used in FPS game play. Through experiments, we obtain a classification accuracy of more than 80% for each feature. Using this property, characters can be classified effectively, which makes the game faster and enables more strategic play.
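The adjacent-block comparison can be sketched as below (illustrative only; the grid layout, 4-neighbourhood, and similarity threshold are assumptions, and the histograms would come from the octree-quantized colors):

```python
def histogram_intersection(h1, h2):
    """Similarity in [0, 1] between two normalized histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def boundary_blocks(block_hists, width, height, sim_threshold=0.7):
    """Mark a block as an object-boundary block when its color histogram
    differs enough from any 4-neighbour block."""
    edges = set()
    for by in range(height):
        for bx in range(width):
            h = block_hists[by * width + bx]
            for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                ny, nx = by + dy, bx + dx
                if 0 <= ny < height and 0 <= nx < width and \
                        histogram_intersection(h, block_hists[ny * width + nx]) < sim_threshold:
                    edges.add((bx, by))
                    break
    return edges
```

Blocks whose neighbours share the same quantized colors are never marked, so fine texture inside a uniform region does not produce spurious boundaries.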

Object Segmentation/Detection through learned Background Model and Segmented Object Tracking Method using Particle Filter (배경 모델 학습을 통한 객체 분할/검출 및 파티클 필터를 이용한 분할된 객체의 움직임 추적 방법)

  • Lim, Su-chang;Kim, Do-yeon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.8
    • /
    • pp.1537-1545
    • /
    • 2016
  • In real-time video sequences, object segmentation and tracking are actively applied in various applications such as surveillance systems, mobile robots, and augmented reality. This paper proposes a robust object tracking method. Background models are constructed by learning the initial part of each video sequence; moving objects are then detected by object segmentation using background subtraction. The regions of the detected objects are continuously tracked using an HSV color histogram with a particle filter. The proposed segmentation method is superior to an average background model in terms of moving object detection. In addition, the proposed tracking method provides continuous tracking results even when multiple objects with similar colors exist and severe occlusions occur among them. Experiments on two video sequences gave an average object overlap rate of 85.9% and an average object tracking rate of 96.3%.
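The background-learning and subtraction stage can be sketched as follows (grayscale, per-pixel averaging; a simplification of the paper's learned background model, with an assumed threshold):

```python
def learn_background(frames):
    """Per-pixel average over the initial frames of the sequence."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]

def segment_foreground(frame, background, threshold=30):
    """Binary mask: 1 where the frame departs from the learned background."""
    return [[1 if abs(v - b) > threshold else 0
             for v, b in zip(row, brow)]
            for row, brow in zip(frame, background)]
```

The detected regions would then seed the HSV-histogram particle filter for tracking.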

Fixed-Wing UAV's Image-Based Target Detection and Tracking using Embedded Processor (임베디드 프로세서를 이용한 고정익 무인항공기 영상기반 목표물 탐지 및 추적)

  • Kim, Jeong-Ho;Jeong, Jae-Won;Han, Dong-In;Heo, Jin-Woo;Cho, Kyeom-Rae;Lee, Dae-Woo
    • Journal of Advanced Navigation Technology
    • /
    • v.16 no.6
    • /
    • pp.910-919
    • /
    • 2012
  • In this paper, we describe the development of an on-board image processing system and its processing pipeline, and verify its performance through flight experiments. The image processing board has a single ARM (Advanced RISC Machine) processor, to which we ported embedded Linux. The object tracking algorithm is a color-based image processing algorithm designed to track an object of a specific color on the ground in real time. To verify the performance of the on-board image processing system, we performed flight tests using the PNUAV, a UAV developed by our laboratory. We also optimized the image processing algorithm and kernel to improve real-time performance. The experiments confirmed that the proposed system can consistently track a blue object within a four-pixel error range.
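Color-based detection of the blue target can be sketched as below (not the flight code; the channel thresholds are illustrative assumptions):

```python
def track_blue_object(frame, min_blue=150, margin=60):
    """Centroid of the pixels whose blue channel clearly dominates red and
    green. Returns None when no such pixel exists."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if b >= min_blue and b - max(r, g) >= margin:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

On an embedded ARM board, this per-pixel loop would be the part to optimize, for example by operating on a downscaled frame.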

Visual Object Tracking based on Particle Filters with Multiple Observation (다중 관측 모델을 적용한 입자 필터 기반 물체 추적)

  • Koh, Hyeung-Seong;Jo, Yong-Gun;Kang, Hoon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.5
    • /
    • pp.539-544
    • /
    • 2004
  • We investigate a visual object tracking algorithm based on particle filters, namely CONDENSATION, that combines multiple observation models: active contours of the digitally subtracted image and particle measurements of object color. The former is applied to matching the contour of the moving target, and the latter is used independently to enhance the likelihood of tracking a particular color of the object. Particle filters are efficient trackers because the tracking mechanism follows the Bayesian inference rule of conditional probability propagation. The experimental results demonstrate that the suggested contour-tracking particle filters are robust in a cluttered robot-vision environment.
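The CONDENSATION cycle (diffuse, weight by the observation likelihood, resample) can be sketched in one dimension; the Gaussian diffusion noise and toy likelihood are assumptions, and a real tracker would weight each particle by its contour match and color measurement:

```python
import random

def resample(particles, weights):
    """Systematic resampling: draw particles in proportion to their weights."""
    total = sum(weights)
    step = total / len(particles)
    u = random.uniform(0, step)
    out, c, i = [], weights[0], 0
    for _ in particles:
        while u > c:
            i += 1
            c += weights[i]
        out.append(particles[i])
        u += step
    return out

def condensation_step(particles, likelihood, noise=1.0):
    """One CONDENSATION iteration: predict with diffusion, weight, resample."""
    predicted = [p + random.gauss(0, noise) for p in particles]
    weights = [likelihood(p) for p in predicted]
    if sum(weights) == 0:
        return predicted  # no observation support; keep the prior
    return resample(predicted, weights)
```

Combining multiple observation models amounts to multiplying their likelihoods when weighting each particle.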

The motion estimation algorithm implemented by the color / shape information of the object in the real-time image (실시간 영상에서 물체의 색/모양 정보를 이용한 움직임 검출 알고리즘 구현)

  • Kim, Nam-Woo;Hur, Chang-Wu
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.11
    • /
    • pp.2733-2737
    • /
    • 2014
  • Motion estimation techniques for real-time video include motion detection from frame-to-frame changes; change-area detection based on background differencing and motion history images; motion detection based on optical flow; and, for motion tracking, back-projection of the histogram of the tracked object, MeanShift, which keeps its center point on the object being tracked, CamShift, which additionally tracks the object's size and direction, and the Kalman filter. In this paper, we implement and verify a motion detection algorithm based on the color and shape information of the object.
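The motion history image mentioned above can be sketched as follows (grayscale frame differencing; the duration and threshold values are assumptions):

```python
def update_motion_history(mhi, prev, curr, duration=5, threshold=20):
    """Motion history image: set moving pixels to `duration` and
    decay stationary pixels by 1 each frame."""
    h, w = len(curr), len(curr[0])
    return [[duration if abs(curr[y][x] - prev[y][x]) > threshold
             else max(mhi[y][x] - 1, 0)
             for x in range(w)] for y in range(h)]
```

Recent motion appears bright and fades over `duration` frames, so the gradient of the motion history image encodes the direction of movement.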

Implementation of Motion Detection of Human Under Fixed Video Camera (고정 카메라 환경하에서 사람의 움직임 검출 알고리즘의 구현)

  • Han, Hee-Il
    • Proceedings of the IEEK Conference
    • /
    • 2000.06d
    • /
    • pp.202-205
    • /
    • 2000
  • In this paper, we propose an algorithm that detects and tracks a moving object in video captured by a fixed camera, and classifies whether it is human. It detects the outline of the moving object by finding the local maximum points of the modulus image, i.e., the magnitude of the motion vectors, and estimates the size and center of the moving object. When an object is detected, the algorithm determines whether it is human by segmenting the face: an elliptic shape is located using the Hough transform, and the skin-color region within the ellipse is grouped.
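The modulus image and the size/center estimation can be sketched as below (an illustration, not the proposed algorithm; the threshold is an assumption, and the outline and Hough-transform steps are omitted):

```python
def modulus_image(mv):
    """Magnitude of per-pixel motion vectors (vx, vy)."""
    return [[(vx * vx + vy * vy) ** 0.5 for vx, vy in row] for row in mv]

def object_center_and_size(mod, threshold=1.0):
    """Centroid and pixel count of the significantly moving pixels."""
    pts = [(x, y) for y, row in enumerate(mod)
           for x, m in enumerate(row) if m > threshold]
    if not pts:
        return None, 0
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    return (cx, cy), len(pts)
```

The local maxima of this modulus image trace the object's outline, which is the input to the face-segmentation stage.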
