• Title/Summary/Keyword: Video Information

Scalable Multi-view Video Coding based on HEVC

  • Lim, Woong; Nam, Junghak; Sim, Donggyu
    • IEIE Transactions on Smart Processing and Computing, v.4 no.6, pp.434-442, 2015
  • In this paper, we propose an integrated spatial- and view-scalable video codec based on High Efficiency Video Coding (HEVC). The proposed codec is developed by exploiting the similarities and the unique features of the scalable and 3D multi-view extensions of HEVC. To improve compression efficiency, inter-layer and inter-view predictions are employed jointly, using high-level syntax elements defined to identify view and layer information. A decoded picture buffer (DPB) management algorithm is also proposed for the inter-view and inter-layer predictions. The inter-view and inter-layer motion predictions are integrated into a consolidated prediction by harmonizing them with the temporal motion prediction of HEVC. We found that the proposed scalable multi-view codec achieves bitrate reductions of 36.1%, 31.6%, and 15.8% over the ×2 parallel scalable codec, the ×1.5 parallel scalable codec, and the parallel multi-view codec, respectively.
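
As an illustration of the high-level syntax idea in the abstract above, the following minimal Python sketch models a toy decoded picture buffer that sorts reference candidates into temporal, inter-layer, and inter-view groups using hypothetical poc/layer_id/view_id identifiers; it is not the paper's actual DPB management algorithm.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Picture:
    poc: int        # picture order count (temporal position)
    layer_id: int   # spatial scalability layer (hypothetical syntax element)
    view_id: int    # camera view (hypothetical syntax element)

class ToyDPB:
    """Toy decoded picture buffer: keeps decoded pictures and builds a
    combined temporal / inter-layer / inter-view reference list."""
    def __init__(self):
        self.pictures = []

    def insert(self, pic: Picture):
        self.pictures.append(pic)

    def reference_candidates(self, current: Picture):
        temporal, inter_layer, inter_view = [], [], []
        for p in self.pictures:
            if p.layer_id == current.layer_id and p.view_id == current.view_id:
                if p.poc != current.poc:
                    temporal.append(p)        # same layer/view, other time instant
            elif p.poc == current.poc and p.view_id == current.view_id:
                inter_layer.append(p)         # same time, other spatial layer
            elif p.poc == current.poc and p.layer_id == current.layer_id:
                inter_view.append(p)          # same time, neighbouring view
        # one consolidated list, temporal references first
        return temporal + inter_layer + inter_view

dpb = ToyDPB()
dpb.insert(Picture(poc=0, layer_id=1, view_id=0))   # previous frame, same layer/view
dpb.insert(Picture(poc=1, layer_id=0, view_id=0))   # base layer at current time
dpb.insert(Picture(poc=1, layer_id=1, view_id=1))   # neighbouring view at current time
print(dpb.reference_candidates(Picture(poc=1, layer_id=1, view_id=0)))
```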

A Study on the Use of Speech Recognition Technology for Content-based Video Indexing and Retrieval (내용기반 비디오 색인 및 검색을 위한 음성인식기술 이용에 관한 연구)

  • 손종목; 배건성; 강경옥; 김재곤
    • The Journal of the Acoustical Society of Korea, v.20 no.2, pp.16-20, 2001
  • An important aspect of video program indexing and retrieval is the ability to segment a video program into meaningful segments, that is, content-based video program segmentation. In this paper, a new approach using speech recognition technology is proposed for content-based video program segmentation. The approach uses speech recognition to synchronize the closed captions with the speech signal. Experimental results demonstrate that the proposed scheme is very promising for content-based video program segmentation.
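
A minimal sketch of the synchronization idea in the abstract above: assuming a speech recognizer returns (word, start time) pairs, each closed-caption line is greedily aligned to the recognized words to produce time-stamped segments. The function and data below are hypothetical illustrations, not the paper's method.

```python
def synchronize_captions(captions, recognized):
    """Toy caption-to-speech alignment.

    captions:   list of caption strings (one per intended segment)
    recognized: list of (word, start_time_sec) pairs from a speech recognizer
    Returns (caption, start_time, end_time) triples usable as segment boundaries.
    """
    segments = []
    idx = 0
    for caption in captions:
        start, end = None, None
        for word in caption.lower().split():
            # greedily find the next recognized word matching the caption word
            while idx < len(recognized) and recognized[idx][0].lower() != word:
                idx += 1
            if idx < len(recognized):
                t = recognized[idx][1]
                start = t if start is None else start
                end = t
                idx += 1
        if start is not None:
            segments.append((caption, start, end))
    return segments

recognized = [("news", 0.0), ("at", 0.4), ("nine", 0.6), ("weather", 5.2), ("update", 5.8)]
captions = ["News at nine", "Weather update"]
print(synchronize_captions(captions, recognized))
```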

The Motion-Based Video Segmentation for Low Bit Rate Transmission (저비트율 동영상 전송을 위한 움직임 기반 동영상 분할)

  • Lee, Beom-Ro; Jeong, Jin-Hyeon
    • The Transactions of the Korea Information Processing Society, v.6 no.10, pp.2838-2844, 1999
  • Motion-based video segmentation provides a powerful tool for video compression, because it defines regions with similar motion and thereby allows a video compression system to describe motion video more efficiently. In this paper, we propose the Modified Fuzzy Competitive Learning Algorithm (MFCLA), which improves on the traditional K-means clustering algorithm, to implement motion-based video segmentation efficiently. Each segmented region is described with an affine model consisting of only six parameters. The affine model is computed from the optical flow, which describes the frame-to-frame motion of pixels. This method can be applied to low bit rate video transmission, such as video conferencing systems.
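
A minimal NumPy sketch of the six-parameter affine motion model mentioned above: given the optical-flow vectors of pixels in one segmented region, the parameters are estimated by least squares. The MFCLA clustering itself is not reproduced; the function name and synthetic data are illustrative assumptions.

```python
import numpy as np

def fit_affine_motion(points, flow):
    """Fit a 6-parameter affine motion model (u, v) = A [x, y, 1]^T by least squares.

    points: (N, 2) array of pixel coordinates (x, y) inside one segmented region
    flow:   (N, 2) array of optical-flow vectors (u, v) at those pixels
    Returns a 2x3 matrix [[a1, a2, a3], [a4, a5, a6]].
    """
    x, y = points[:, 0], points[:, 1]
    design = np.column_stack([x, y, np.ones_like(x)])        # N x 3
    params, *_ = np.linalg.lstsq(design, flow, rcond=None)   # 3 x 2
    return params.T                                          # 2 x 3

# Synthetic check: flow generated by a known affine model is recovered exactly.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(50, 2))
true_A = np.array([[0.01, 0.002, 1.5],
                   [-0.003, 0.015, -0.7]])
flow = pts @ true_A[:, :2].T + true_A[:, 2]
print(np.allclose(fit_affine_motion(pts, flow), true_A))  # True
```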

Design And Implementation of Video Retrieval System for Using Semantic-based Annotation (의미 기반 주석을 이용한 비디오 검색 시스템의 설계 및 구현)

  • 홍수열
    • Journal of the Korea Society of Computer and Information, v.5 no.3, pp.99-105, 2000
  • Video has become an important element of multimedia computing and communication environments, with applications as varied as broadcasting, education, publishing, and military intelligence. The need for efficient multimedia data retrieval methods keeps growing on account of various large-scale multimedia applications. Accordingly, the retrieval and representation of video data has become one of the main research issues in video databases. For the representation of video data there have been mainly two approaches: (1) content-based video retrieval and (2) annotation-based video retrieval. This paper designs and implements a video retrieval system using semantic-based annotation.
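
A minimal sketch of annotation-based retrieval as described above: hypothetical annotation records (video id, time interval, free-text description) are indexed by keyword and queried. This is an illustrative toy, not the paper's actual design.

```python
from collections import defaultdict

class AnnotationIndex:
    """Toy semantic-annotation index: keyword -> list of annotated video intervals."""
    def __init__(self):
        self.index = defaultdict(list)

    def annotate(self, video_id, start_sec, end_sec, description):
        for keyword in description.lower().split():
            self.index[keyword].append((video_id, start_sec, end_sec, description))

    def query(self, keyword):
        return self.index.get(keyword.lower(), [])

idx = AnnotationIndex()
idx.annotate("lecture01.mp4", 120, 180, "professor explains video retrieval")
idx.annotate("news02.mp4", 30, 55, "reporter covers retrieval of flight recorder")
print(idx.query("retrieval"))   # both annotated intervals match the keyword
```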

Using Fuzzy Neural Network to Assess Network Video Quality

  • Shi, Zhiming
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.7, pp.2377-2389, 2022
  • At present, people have ever higher requirements for network video quality, but video quality is impaired by various factors, so video quality assessment has become more and more important. This paper focuses on video quality assessment methods using different fuzzy neural networks. Firstly, the main factors that impair video quality are introduced, such as the number of stalls per unit time, the average pause time, the blur degree, and the blocking effect. Secondly, two fuzzy neural network models are used to build an objective assessment method; by adjusting the network structure to optimize the assessment model, an objective assessment value of the video quality is obtained. Meanwhile, the advantages and disadvantages of the two models are analysed. Lastly, the proposed method is compared with many recent related assessment methods, and the experimental results and the details of the assessment process are given.
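
A minimal sketch of the fuzzification step for the impairment factors listed above: each factor is mapped to a "low impairment" degree with a triangular membership function, and the degrees are averaged into a quality score. The membership ranges and the simple averaging rule are illustrative assumptions standing in for the trained fuzzy neural network in the paper.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_quality(stalls_per_min, avg_pause_sec, blur, blocking):
    """Toy fuzzy quality score in [0, 1]: fuzzify each impairment into a
    'low impairment' degree, then average the degrees (a stand-in for the
    rule base a fuzzy neural network would learn)."""
    degrees = [
        triangular(stalls_per_min, -1.0, 0.0, 3.0),   # few stalls -> high membership
        triangular(avg_pause_sec, -1.0, 0.0, 5.0),    # short pauses
        triangular(blur, -0.1, 0.0, 1.0),             # blur normalized to [0, 1]
        triangular(blocking, -0.1, 0.0, 1.0),         # blocking normalized to [0, 1]
    ]
    return sum(degrees) / len(degrees)

print(round(fuzzy_quality(0.5, 1.0, 0.2, 0.1), 3))  # lightly impaired video
print(round(fuzzy_quality(2.5, 4.0, 0.8, 0.7), 3))  # heavily impaired video
```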

An Optimal Selection of Frame Skip and Spatial Quantization for Low Bit Rate Video Coding (저속 영상부호화를 위한 최적 프레임 율과 공간 양자화 결정)

  • Bu, So-Young; Lee, Byung-Uk
    • The Journal of Korean Institute of Communications and Information Sciences, v.29 no.6C, pp.842-847, 2004
  • We present a new video coding technique to trade off frame rate and picture quality for low bit rate video coding. We present a model equation for selecting the optimal frame rate from the motion content of the source video, and the DCT quantization parameter (QP) is then determined from the selected frame rate and the target bit rate. For objective video quality measurement, we propose a simple and effective error measure for skipped frames. The proposed method improves video quality by up to 2 dB over the H.263 TMN5 encoder.
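
A minimal sketch of the frame-rate/quantization trade-off described above, under two hypothetical models: the frame rate grows linearly with a motion-activity measure, and the QP is the smallest value whose assumed bits-per-frame fits the per-frame budget. Neither model is the paper's actual equation; all parameter values are illustrative.

```python
def select_frame_rate(motion_activity, min_fps=7.5, max_fps=30.0):
    """Hypothetical model: more motion -> higher frame rate (activity in [0, 1])."""
    return min_fps + (max_fps - min_fps) * min(max(motion_activity, 0.0), 1.0)

def select_qp(bit_rate_bps, frame_rate, bits_at_qp1=60_000, qp_range=(1, 31)):
    """Pick the smallest QP whose (hypothetical) rate model meets the frame budget.

    bits_at_qp1: assumed bits needed per frame at QP 1 (e.g. a small QCIF frame)
    Rate model assumption: bits per frame ~ bits_at_qp1 / QP.
    """
    budget = bit_rate_bps / frame_rate
    for qp in range(qp_range[0], qp_range[1] + 1):
        if bits_at_qp1 / qp <= budget:
            return qp
    return qp_range[1]

fps = select_frame_rate(motion_activity=0.6)          # moderately active scene
qp = select_qp(bit_rate_bps=64_000, frame_rate=fps)   # low bit rate target
print(f"frame rate = {fps:.1f} fps, QP = {qp}")
```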

Distributed Video Compressive Sensing Reconstruction by Adaptive PCA Sparse Basis and Nonlocal Similarity

  • Wu, Minghu; Zhu, Xiuchang
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.8, pp.2851-2865, 2014
  • To improve the rate-distortion performance of distributed video compressive sensing (DVCS), an adaptive sparse basis and the nonlocal similarity of video are used jointly to reconstruct the video signal in this paper. Because motion information between frames is missing and the reference frames contain some noise, a sparse dictionary constructed from examples extracted directly from the reference frames no longer yields a good sparse representation of the interpolated block. This paper therefore proposes a new way to construct the sparse dictionary. First, an example-based data matrix is built using the motion information between frames; then principal component analysis (PCA) is used to compute the significant principal components of the data matrix; finally, the sparse dictionary is formed from these significant principal components. The merit of the proposed dictionary is that it not only adapts to the spatio-temporal characteristics of the video but also suppresses noise. In addition, because sparse priors alone cannot preserve the edges and textures of video frames well, a nonlocal similarity regularization term is introduced into the reconstruction model. Experimental results show that the proposed algorithm improves the objective and subjective quality of the video frames and achieves better rate-distortion performance for the DVCS system at the cost of some additional computational complexity.
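
A minimal NumPy sketch of the PCA dictionary construction described above: given a data matrix of example patches (assumed already gathered by motion search in the reference frames), the leading principal components become the dictionary atoms. The motion search and the compressive-sensing reconstruction are omitted.

```python
import numpy as np

def pca_dictionary(example_patches, num_atoms):
    """Build a toy PCA dictionary from example patches.

    example_patches: (N, d) matrix, one vectorized patch per row
                     (assumed already gathered by motion search in reference frames)
    num_atoms:       number of leading principal components to keep
    Returns a (d, num_atoms) dictionary with orthonormal columns.
    """
    centered = example_patches - example_patches.mean(axis=0)
    # Right singular vectors of the centered data are the principal directions;
    # keeping only the leading ones also suppresses noise in the examples.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:num_atoms].T

rng = np.random.default_rng(1)
patches = rng.normal(size=(200, 64))               # 200 example 8x8 patches, vectorized
D = pca_dictionary(patches, num_atoms=16)
print(D.shape, np.allclose(D.T @ D, np.eye(16)))   # (64, 16) True
```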

Impact of playout buffer dynamics on the QoE of wireless adaptive HTTP progressive video

  • Xie, Guannan; Chen, Huifang; Yu, Fange; Xie, Lei
    • ETRI Journal, v.43 no.3, pp.447-458, 2021
  • The quality of experience (QoE) of video streaming is degraded by playback interruptions, which can be mitigated by the playout buffers of end users. To analyze the impact of playout buffer dynamics on the QoE of wireless adaptive hypertext transfer protocol (HTTP) progressive video, we model the playout buffer as a G/D/1 queue with an arbitrary packet arrival rate and a deterministic service time. Because all video packets within a block must be available in the playout buffer before that block is decoded, a playback interruption can occur even when the playout buffer is non-empty. We analyze the queue length evolution of the playout buffer using a diffusion approximation. Closed-form expressions for user-perceived video quality are derived in terms of the buffering delay, playback duration, and interruption probability for an infinite buffer size, and in terms of the packet loss probability and re-buffering probability for a finite buffer size. Simulation results verify our theoretical analysis and reveal that the impact of playout buffer dynamics on QoE is content dependent, which can contribute to the design of QoE-driven wireless adaptive HTTP progressive video management.
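
A minimal Monte-Carlo sketch of the block-level stall behaviour described above: random packet arrivals feed a playout buffer, a block can decode only when all of its packets are buffered, and playing a block takes a deterministic number of slots. The arrival model and parameter values are illustrative assumptions; the paper's diffusion-approximation formulas are not reproduced.

```python
import random

def simulate_playout(arrival_rate, block_size, service_slots, num_blocks, seed=0):
    """Toy playout-buffer simulation (not the paper's diffusion approximation).

    arrival_rate:  mean packets arriving per playback slot (random arrivals, G side)
    block_size:    packets that must all be buffered before the next block can decode
    service_slots: deterministic playback duration of one block (D side of G/D/1)
    Returns the fraction of blocks whose decoding had to stall.
    """
    rng = random.Random(seed)

    def arrivals():
        # crude Poisson-like arrival count for one slot
        return sum(rng.random() < arrival_rate / 4 for _ in range(4))

    buffered, stalls = 0, 0
    for _ in range(num_blocks):
        if buffered < block_size:        # block not fully buffered at its decode time
            stalls += 1
            while buffered < block_size: # re-buffer until the whole block is available
                buffered += arrivals()
        for _ in range(service_slots):   # play the block out; packets keep arriving
            buffered += arrivals()
        buffered -= block_size
    return stalls / num_blocks

# Arrival rate below vs. above the playback consumption rate of one packet per slot:
print(simulate_playout(arrival_rate=0.9, block_size=20, service_slots=20, num_blocks=2000))
print(simulate_playout(arrival_rate=1.2, block_size=20, service_slots=20, num_blocks=2000))
```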

Customizing Ground Color to Deliver Better Viewing Experience of Soccer Video

  • Ahn, Il-Koo; Kim, Young-Woo; Kim, Chang-Ick
    • ETRI Journal, v.30 no.1, pp.101-112, 2008
  • In this paper, we present a method to customize the ground color in outdoor sports video to provide TV viewers with a better viewing experience or subjective satisfaction. This issue, related to content personalization, is becoming critical with the advent of mobile TV and interactive TV. In outdoor sports video, such as soccer video, the ground color is sometimes unsatisfactory to viewers. The proposed algorithm therefore focuses on customizing the ground color to deliver a better viewing experience. The algorithm comprises three modules: ground detection, shot classification, and ground color customization. We customize the ground color by considering the difference between the ground color of the input video and that of the target ground patch. Experimental results show that the proposed scheme offers useful tools for a more comfortable viewing experience and that it achieves real-time performance, even in a software-based implementation.
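
A minimal NumPy sketch of the ground-color customization step: the dominant color of the frame (in a soccer long shot, usually the grass) is estimated with a coarse RGB histogram, nearby pixels are treated as ground, and they are blended toward a target color. The paper's ground-detection and shot-classification modules are simplified here to this single dominant-color threshold, so this is an illustrative assumption rather than the published algorithm.

```python
import numpy as np

def customize_ground_color(frame, target_color, tolerance=40, blend=0.7):
    """Toy ground-color customization for one RGB frame (H x W x 3, uint8).

    1. Estimate the dominant color from a coarse 16-level-per-channel histogram.
    2. Treat pixels within `tolerance` of that color as ground.
    3. Blend the ground pixels toward `target_color`.
    """
    frame = frame.astype(np.float32)
    quantized = (frame // 16).astype(np.int32)
    codes = quantized[..., 0] * 256 + quantized[..., 1] * 16 + quantized[..., 2]
    dominant_code = np.bincount(codes.ravel()).argmax()
    dominant = np.array([(dominant_code // 256) % 16,
                         (dominant_code // 16) % 16,
                         dominant_code % 16], dtype=np.float32) * 16 + 8

    distance = np.linalg.norm(frame - dominant, axis=-1)
    ground = distance < tolerance
    out = frame.copy()
    out[ground] = (1 - blend) * frame[ground] + blend * np.asarray(target_color, np.float32)
    return out.astype(np.uint8), ground

# Synthetic frame: green "field" with a white "player" region.
frame = np.full((120, 160, 3), (60, 150, 70), dtype=np.uint8)
frame[40:60, 70:80] = (230, 230, 230)
recolored, mask = customize_ground_color(frame, target_color=(90, 120, 60))
print(mask.mean())   # fraction of pixels classified as ground (close to 1.0 here)
```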

Biological Infectious Watermarking Model for Video Copyright Protection

  • Jang, Bong-Joo; Lee, Suk-Hwan; Lim, SangHun; Kwon, Ki-Ryong
    • Journal of Information Processing Systems, v.11 no.2, pp.280-294, 2015
  • This paper presents an infectious watermarking model (IWM) for the protection of video content, based on modeling the infection route and procedure of a biological virus. Our infectious watermarking is designed as a new protection paradigm for video content, regarding the hidden watermark as an infectious virus, the video content as the host, and the codec as the contagion medium. We use pathogen, mutant, and contagion forms of the infectious watermark and define techniques for infectious watermark generation and authentication, kernel-based infectious watermarking, and content-based infectious watermarking. We experimented with our watermarking model by using existing watermarking methods as the kernel-based and content-based infectious watermarking media, and verified the practical applicability of our model through these experiments.