• Title/Summary/Keyword: Video sequences

Color and Motion-based Fire Detection in Video Sequences (비디오 영상에서 컬러와 움직임 기반의 화재 검출)

  • Kim, Alla;Kim, Yoon-Ho
    • Journal of Advanced Navigation Technology
    • /
    • v.15 no.3
    • /
    • pp.471-477
    • /
    • 2011
  • The wide deployment of CCTV cameras in public areas can serve not only video surveillance but also fire detection. The proposed approach is based on visual information from a static camera. Video sequences are analyzed to find fire candidates, and a spatial analysis procedure is then carried out on the detected fire-like color foreground. If the spatial and temporal variances change rapidly, in a manner close to fire motion, the candidate is classified as fire.
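
The color-then-motion pipeline described in the abstract can be sketched as below. The RGB rule (R > G > B with a bright red channel) and the variance threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fire_color_mask(frame, r_thresh=180):
    """Flag fire-like pixels with a simple RGB rule (R > G > B, R bright).

    `r_thresh` is an illustrative threshold, not taken from the paper.
    """
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    return (r > r_thresh) & (r > g) & (g > b)

def is_fire_candidate(masks, var_thresh=0.01):
    """Decide fire from the temporal variance of the candidate-pixel ratio.

    Fire flickers, so the fraction of fire-colored pixels fluctuates
    between frames; a static fire-colored object (e.g. a red sign) does not.
    """
    ratios = np.array([m.mean() for m in masks])
    return ratios.var() > var_thresh
```

A region that passes the color test but shows no temporal fluctuation is rejected, which is the role the spatial/temporal variance check plays in the paper.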

Fast Intra-Prediction Mode Decision Algorithm for H.264/AVC using Non-parametric Thresholds and Simplified Directional Masks

  • Kim, Young-Ju
    • Journal of information and communication convergence engineering
    • /
    • v.7 no.4
    • /
    • pp.501-506
    • /
    • 2009
  • In the H.264/AVC video coding standard, intra-prediction coding with various block sizes offers considerably higher coding efficiency than previous standards. To achieve this, H.264/AVC uses rate-distortion optimization (RDO) to select the best intra-prediction mode for each macroblock, which drastically increases the computational complexity of the H.264 encoder. To reduce the computational complexity and stabilize the coding performance in terms of visual quality, this paper proposes a fast intra-prediction mode decision algorithm using non-parametric thresholds and simplified directional masks. The non-parametric thresholds make the intra-coding performance independent of the type of video sequence, and the simplified directional masks reduce the computational load needed to calculate local edge information. Experimental results show that the proposed algorithm reduces more than 55% of the total encoding time with a negligible loss in PSNR and bitrate, and provides stable performance regardless of the type of video sequence.
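
The idea of using cheap directional masks instead of full RDO can be sketched as follows. This is a minimal illustration over a reduced mode set (vertical, horizontal, DC), not the paper's exact masks or thresholds:

```python
import numpy as np

# Subset of H.264 4x4 intra modes, used here for illustration only
MODE_VERTICAL, MODE_HORIZONTAL, MODE_DC = 0, 1, 2

def select_candidate_mode(block):
    """Pick a likely intra-prediction mode from simple directional measures.

    Instead of evaluating all modes with rate-distortion optimization,
    sum the absolute pixel differences along rows and columns: low
    variation down the columns suggests vertical prediction (each column
    copied from the pixel above), low variation along the rows suggests
    horizontal prediction, and a flat block suggests DC.
    """
    b = block.astype(int)
    dh = np.abs(np.diff(b, axis=1)).sum()  # variation along each row
    dv = np.abs(np.diff(b, axis=0)).sum()  # variation down each column
    if dh == 0 and dv == 0:
        return MODE_DC
    return MODE_VERTICAL if dv < dh else MODE_HORIZONTAL
```

An encoder would then run RDO only on the shortlisted mode(s), which is where the reported encoding-time saving comes from.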

Segmented Video Coding Using Variable Block-Size Segmentation by Motion Vectors (움직임벡터에 의한 가변블럭영역화를 이용한 영역기반 동영상 부호화)

  • 이기헌;김준식;박래홍;이상욱;최종수
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.4
    • /
    • pp.62-76
    • /
    • 1994
  • In this paper, a segmentation-based coding technique for video sequences is proposed. The proposed method separates an image into contour and texture parts; the visually sensitive contour part is represented by chain codes, while the visually insensitive texture part is reconstructed from a representative motion vector of each region and the mean of the segmented frame difference. The method uses a change detector to find moving areas and adopts variable block sizes to represent different motions correctly. For better quality of the reconstructed images, the displaced frame difference between the original image and the motion-compensated image reconstructed by the representative motion vector is segmented. Computer simulations with several video sequences show that the proposed method outperforms conventional ones in terms of peak signal-to-noise ratio (PSNR) and compression ratio.
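
Two of the building blocks named above, the change detector and the per-region representative motion vector plus mean frame difference, can be sketched as below. The threshold and the use of a component-wise median as the representative vector are illustrative assumptions; the paper's variable-block segmentation is not reproduced:

```python
import numpy as np

def change_detector(prev, curr, thresh=10):
    """Mark moving areas: pixels whose frame difference exceeds a threshold."""
    return np.abs(curr.astype(int) - prev.astype(int)) > thresh

def region_representative(mvs, frame_diff, region_mask):
    """Summarize a segmented region by one motion vector and one mean.

    `mvs` is an (N, 2) array of block motion vectors inside the region;
    the component-wise median serves as the representative vector, and
    the region's texture is replaced by the mean frame difference.
    """
    rep_mv = np.median(mvs, axis=0)
    mean_diff = frame_diff[region_mask].mean()
    return rep_mv, mean_diff
```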

An Efficient Motion Compensation Algorithm for Video Sequences with Brightness Variations (밝기 변화가 심한 비디오 시퀀스에 대한 효율적인 움직임 보상 알고리즘)

  • 김상현;박래홍
    • Journal of Broadcast Engineering
    • /
    • v.7 no.4
    • /
    • pp.291-299
    • /
    • 2002
  • This paper proposes an efficient motion compensation algorithm for video sequences with brightness variations. In the proposed algorithm, the brightness variation parameters are estimated and local motions are compensated. To detect frames with large brightness variations, we employ frame classification based on the cross entropy between the histograms of two successive frames, which reduces computational redundancy. Simulation results show that the proposed method yields a higher peak signal-to-noise ratio (PSNR) than conventional methods, with a low computational load, when the video scene contains large brightness changes.
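
The histogram cross-entropy test used for frame classification can be sketched as follows; the bin count and epsilon are illustrative choices, not the paper's parameters:

```python
import numpy as np

def cross_entropy(p_frame, q_frame, bins=32, eps=1e-8):
    """Cross entropy between intensity histograms of two successive frames.

    A large global brightness change shifts the intensity histogram, so
    the cross entropy between the two frames' histograms is high; such
    frames are flagged for brightness-parameter estimation before motion
    compensation, while other frames skip that extra work.
    """
    p, _ = np.histogram(p_frame, bins=bins, range=(0, 256))
    q, _ = np.histogram(q_frame, bins=bins, range=(0, 256))
    p = p / p.sum()
    q = q / q.sum()
    return float(-(p * np.log(q + eps)).sum())
```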

Fusion of Background Subtraction and Clustering Techniques for Shadow Suppression in Video Sequences

  • Chowdhury, Anuva;Shin, Jung-Pil;Chong, Ui-Pil
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.14 no.4
    • /
    • pp.231-234
    • /
    • 2013
  • This paper introduces a fusion of a background subtraction technique and the K-Means clustering algorithm for removing shadows from video sequences. Changing lighting conditions are a persistent problem for segmentation. The proposed method successfully eradicates artifacts associated with lighting changes, such as highlights, reflections and the cast shadows of moving objects, from the segmentation. The K-Means clustering algorithm is applied to the foreground, which is first extracted by the background subtraction technique. The estimated shadow region is then superimposed on the background to eliminate the effects that cause redundancy in object detection. Simulation results show that the proposed approach removes shadows and reflections from moving objects with an accuracy of more than 95% in all cases considered.

Threshold-Based Camera Motion Characterization of MPEG Video

  • Kim, Jae-Gon;Chang, Hyun-Sung;Kim, Jin-Woong;Kim, Hyung-Myung
    • ETRI Journal
    • /
    • v.26 no.3
    • /
    • pp.269-272
    • /
    • 2004
  • We propose an efficient scheme for camera motion characterization in MPEG-compressed video. The proposed scheme detects six types of basic camera motion through threshold-based qualitative interpretation, in which fixed thresholds are applied to motion model parameters estimated from MPEG motion vectors (MVs). The efficiency and robustness of the scheme are validated by experiments with real compressed video sequences.
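
The threshold-based qualitative interpretation can be sketched as below. The parameter names (horizontal/vertical translation and divergence of the motion field), the sign conventions, and the threshold values are illustrative assumptions, not the paper's:

```python
def classify_camera_motion(pan_x, pan_y, zoom, t_pan=1.0, t_zoom=0.1):
    """Map motion-model parameters to basic camera motions via fixed thresholds.

    `pan_x`/`pan_y` stand for the translational components and `zoom` for
    the divergence of a motion model estimated from MPEG motion vectors.
    Each parameter is compared against a fixed threshold to yield a
    qualitative label; several motions may co-occur (e.g. pan + zoom).
    """
    motions = []
    if zoom > t_zoom:
        motions.append("zoom_in")
    elif zoom < -t_zoom:
        motions.append("zoom_out")
    if pan_x > t_pan:
        motions.append("pan_right")
    elif pan_x < -t_pan:
        motions.append("pan_left")
    if pan_y > t_pan:
        motions.append("tilt_down")
    elif pan_y < -t_pan:
        motions.append("tilt_up")
    return motions or ["static"]
```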

Two-Layer Video Coding Using Pyramid Structure for ATM Networks (ATM 망에서 피라미드 구조를 이용한 2계층 영상부호화)

  • 홍승훈;김인권;박래홍
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1995.06a
    • /
    • pp.97-100
    • /
    • 1995
  • In the transmission of image sequences over ATM networks, the packet loss problem and channel-sharing efficiency are important. Two-layer video coding methods have been proposed as a possible solution. These methods transmit video information over the network with different levels of protection against packet loss. In this paper, a two-layer coding method using a pyramid structure is proposed; several realizations of two-layer video coding methods are presented and their performances compared.
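
A pyramid-based two-layer split can be sketched as below: a coarse base layer (which would receive strong packet-loss protection) and an enhancement residual (which can tolerate loss). The 2x block-average downsampling is an illustrative choice, not the paper's filter:

```python
import numpy as np

def two_layer_split(frame):
    """Split a frame into a coarse base layer and a detail enhancement layer.

    The base layer is a 2x-downsampled average; the enhancement layer is
    the residual of the original against the upsampled base, so that
    base + enhancement reconstructs the frame exactly.
    """
    f = frame.astype(float)
    base = (f[0::2, 0::2] + f[1::2, 0::2] + f[0::2, 1::2] + f[1::2, 1::2]) / 4
    up = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
    return base, f - up

def two_layer_merge(base, enh):
    """Reconstruct the frame: upsample the base and add the residual."""
    up = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
    return up + enh
```

If enhancement packets are lost, the decoder can still display the upsampled base layer, which is the point of the two-layer protection scheme.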

Detecting Gradual Transitions in Video Sequences (비디오 영상에서 점진적 장면전환 검출)

  • 이광국;김형준;김회율
    • Proceedings of the IEEK Conference
    • /
    • 2002.06d
    • /
    • pp.149-152
    • /
    • 2002
  • Automated video segmentation is important as the first step of video indexing, video retrieval and other applications. Unlike abrupt changes, which are relatively easy to detect, gradual transitions such as dissolves, fade-ins and fade-outs are rather difficult to detect. In this paper, we propose a method for detecting gradual transitions that is based on local statistics and less dependent on a given threshold level. Experimental results show that the proposed method detected about 85% of gradual transitions.
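
The distinction between cuts and gradual transitions can be sketched from per-frame statistics: a gradual transition is a run of many small inter-frame changes that accumulate into a large overall change, whereas a cut is a single large jump. The statistic (frame mean) and both thresholds are illustrative assumptions, not the paper's method:

```python
import numpy as np

def find_gradual_transition(frames, step_max=20, total_min=60):
    """Find the first gradual transition in a list of grayscale frames.

    Scans the per-frame mean intensity: a run of small nonzero steps
    (each <= step_max) whose total change reaches total_min is reported
    as a gradual transition (start, end); an abrupt cut breaks the run.
    Returns None if no such run exists.
    """
    means = np.array([f.mean() for f in frames])
    d = np.diff(means)
    start = None
    for i, step in enumerate(d):
        if 0 < abs(step) <= step_max:
            if start is None:
                start = i
        else:
            if start is not None and abs(means[i] - means[start]) >= total_min:
                return (start, i)
            start = None
    if start is not None and abs(means[-1] - means[start]) >= total_min:
        return (start, len(means) - 1)
    return None
```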

Automatic Extraction of Focused Video Object from Low Depth-of-Field Image Sequences (낮은 피사계 심도의 동영상에서 포커스 된 비디오 객체의 자동 검출)

  • Park, Jung-Woo;Kim, Chang-Ick
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.10
    • /
    • pp.851-861
    • /
    • 2006
  • This paper proposes a novel unsupervised video object segmentation algorithm for image sequences with low depth-of-field (DOF), a popular photographic technique that conveys the photographer's intention by keeping only an object-of-interest (OOI) in sharp focus. The proposed algorithm consists largely of two modules. The first module automatically extracts OOIs from the first frame by separating sharply focused OOIs from other out-of-focus foreground or background objects. The second module tracks the OOIs over the rest of the video sequence, with the aim of running the system in real time or, at least, semi-real time. The experimental results indicate that the proposed algorithm provides an effective tool that can serve as the basis of applications such as video analysis for virtual reality, immersive video systems, photo-realistic video scene generation, and video indexing systems.
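
Separating the focused OOI from defocused regions rests on a local focus measure: in-focus areas keep high-frequency detail while defocused areas are smooth. A per-block sketch using local variance as the focus measure (the block size, measure, and threshold are illustrative assumptions):

```python
import numpy as np

def focus_map(gray, block=8):
    """Per-block focus measure for a low depth-of-field grayscale image.

    Splits the image into block x block tiles and returns each tile's
    intensity variance; focused (detailed) tiles score high, defocused
    (smooth) tiles score near zero.
    """
    h, w = gray.shape
    h, w = h - h % block, w - w % block   # crop to a multiple of block
    g = gray[:h, :w].astype(float)
    tiles = g.reshape(h // block, block, w // block, block)
    return tiles.var(axis=(1, 3))

def focused_object_mask(gray, block=8, thresh=10.0):
    """Binary OOI mask: blocks whose focus measure exceeds a threshold."""
    return focus_map(gray, block) > thresh
```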

Analysis on Subjective Image Quality Assessments for 4K-UHD Video Viewing Environments (4K-UHD 비디오 시청환경 특성분석을 위한 주관적 화질평가 분석)

  • Park, In-Kyung;Ha, Kwang-Sung;Kim, Mun-Churl;Cho, Suk-Hee;Cho, Jin-Soo
    • Journal of Broadcast Engineering
    • /
    • v.15 no.4
    • /
    • pp.563-581
    • /
    • 2010
  • In this paper, we perform subjective visual quality assessments on UHD video for UHD TV services and analyze the results. Demand for video services has grown with the availability of DTV, the Internet and personal media devices, and with it the demand for high-definition video. Currently, 2K-HD (1920×1080) video is widely consumed over DTV, DVD, digital camcorders, security cameras and other multimedia terminals of various types; recently, digital cinema content in 4K-UHD (3840×2160) has come into popular production, and cameras, beam projectors and display panels supporting 4K-UHD video have begun to enter the multimedia market. 4K-UHD services are also expected to appear soon in broadcasting and telecommunications environments. Therefore, in this paper, subjective assessments of visual quality with respect to resolution, color format, frame rate and compression rate were carried out to provide baseline information for standardizing the signal specification and viewing environments of future UHDTV. According to the analysis, the evaluators judged UHD video to exhibit better subjective visual quality than HD. The 4K-UHD test sequences in YUV444 showed better subjective visual quality than those in YUV422 and YUV420, but there was little perceptual difference between the YUV422 and YUV420 formats. Comparing frame rates, 4K-UHD test sequences at 60 fps gave better subjective visual quality than those at 30 fps. For bit depth, HD test sequences at 10-bit depth were barely distinguishable from those at 8-bit depth. Lastly, the larger the PSNR values of the reconstructed 4K-UHD test sequences, the higher the subjective visual quality; with respect to viewing distance, the differences among encoded 4K-UHD test sequences became less distinguishable at longer distances from the display.
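
The objective metric the assessment correlates with subjective scores, PSNR, is computed as follows (standard definition, with `peak` = 255 for 8-bit video):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between a reference and a degraded frame.

    PSNR = 10 * log10(peak^2 / MSE); higher values indicate a
    reconstruction closer to the reference, which the assessment above
    found to track subjective visual quality.
    """
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```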