• Title/Summary/Keyword: MPEG-I


Computation method of effective bandwidth of VBR MPEG video traffic using the modified equivalent capacity (수정된 equivalent capacity를 이용한 VBR MPEG 비디오 트래픽의 등가대역폭 계산방법)

  • 하경봉;이창범;박래홍
    • Journal of the Korean Institute of Telematics and Electronics A / v.33A no.10 / pp.40-47 / 1996
  • A method for computing the effective bandwidth of aggregated variable bit rate (VBR) Moving Picture Experts Group (MPEG) video traffic is proposed. To compute the statistical characteristics of the aggregated MPEG traffic, we first split the input MPEG traffic into I, B, and P frame traffic and aggregate each according to its frame type. Second, the statistical characteristics of the aggregated MPEG traffic are obtained from those of the aggregated I, B, and P frame traffic. The effective bandwidth of the aggregated I frame traffic is computed using the Gaussian bound. Using the modified equivalent capacity, we obtain the effective bandwidths of the aggregated B and P frame traffic and then compute the effective bandwidth of the combined B and P frame traffic. Finally, the effective bandwidth of the aggregated MPEG traffic is computed by adding the Gaussian bound of the aggregated I frame traffic and the modified equivalent capacity of the combined B and P frame traffic. Computer simulation shows that the proposed method estimates the effective bandwidth of the aggregated MPEG traffic well.
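
The two bounds named in the abstract can be sketched in a few lines. The snippet below is a minimal illustration assuming the standard stationary (Gaussian) approximation and the standard fluid-flow equivalent capacity of Guérin et al.; the paper's specific modification of the equivalent capacity is not reproduced, and all parameters are placeholders.

```python
import math

def gaussian_bound(mean_rate, std_rate, eps=1e-6):
    """Stationary (Gaussian) approximation of effective bandwidth:
    C ~= m + alpha * sigma, with alpha = sqrt(-2*ln(eps) - ln(2*pi)),
    where eps is the target overflow probability."""
    alpha = math.sqrt(-2.0 * math.log(eps) - math.log(2.0 * math.pi))
    return mean_rate + alpha * std_rate

def equivalent_capacity(peak_rate, mean_rate, burst_len, buffer_size, eps=1e-6):
    """Standard (unmodified) fluid-flow equivalent capacity of a single
    on/off source; the paper applies a modified form of this expression
    to the aggregated B and P frame traffic."""
    rho = mean_rate / peak_rate                     # source utilization
    ab = math.log(1.0 / eps) * burst_len
    y = ab * (1.0 - rho) * peak_rate - buffer_size
    return (y + math.sqrt(y * y + 4.0 * buffer_size * ab * rho *
                          (1.0 - rho) * peak_rate)) / (2.0 * ab * (1.0 - rho))
```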


A Novel I-picture Arrangement Method for Multiple MPEG Video Transmission (다중 MPEG 비디오 전송을 위한 I-픽쳐 정렬 방안)

  • Park Sang-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.2 / pp.277-282 / 2005
  • The arrangement of the I-picture starting times of multiplexed variable bit rate (VBR) MPEG videos may significantly affect the cell loss ratio (CLR) characteristics of the multiplexed traffic. This paper presents an efficient I-picture arrangement method that can minimize the CLR of the multiplexed traffic when multiple VBR MPEG videos are multiplexed onto a single constant bit rate link. In the proposed method, the probability that the arrival rate exceeds the link capacity is used as the measure of the CLR of the multiplexed traffic. Simulation results show that the proposed method finds better arrangements than existing methods with respect to the CLR.
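
The CLR measure described in the abstract can be estimated directly from per-frame rate traces. The sketch below is illustrative only: the overflow-probability estimate follows the abstract, while the greedy phase search, the GOP length, and the function names are assumptions rather than the paper's actual optimization procedure.

```python
import numpy as np

def overflow_probability(rate_traces, offsets, link_capacity):
    """Estimate P(aggregate arrival rate > link capacity), the CLR measure,
    for a given set of I-picture start-time offsets (in frames)."""
    n = min(len(t) for t in rate_traces)
    total = np.zeros(n)
    for trace, off in zip(rate_traces, offsets):
        total += np.roll(np.asarray(trace[:n], dtype=float), off)  # shift GOP phase
    return float(np.mean(total > link_capacity))

def arrange_greedy(rate_traces, link_capacity, gop_len=12):
    """Greedy illustration: place each stream at the phase (within one GOP)
    that currently yields the lowest overflow probability."""
    offsets = [0]
    for i in range(1, len(rate_traces)):
        best = min(range(gop_len),
                   key=lambda p: overflow_probability(rate_traces[:i + 1],
                                                      offsets + [p], link_capacity))
        offsets.append(best)
    return offsets
```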

MPEG4 decoding system modeling in SystemC (SystemC를 이용한 MPEG4 복호화 시스템 모델링)

  • 이미영;이승준;배영환
    • Proceedings of the IEEK Conference / 2001.06b / pp.109-112 / 2001
  • In this paper, I present the modeling of an MPEG4 decoding system in SystemC, a C/C++-based system simulation approach. In the modeling, the MPEG4 decoding behavior is modeled and verified. I then partition the MPEG4 decoding system into several hardware components, which will be implemented in a low-level hardware design flow, and model synchronized communication between the hardware blocks through data ports.


MPEG-I RVS Software Speed-up for Real-time Application (실시간 렌더링을 위한 MPEG-I RVS 가속화 기법)

  • Ahn, Heejune;Lee, Myeong-jin
    • Journal of Broadcast Engineering / v.25 no.5 / pp.655-664 / 2020
  • Free-viewpoint image synthesis is one of the important technologies in the MPEG-I (Immersive) standard. RVS (Reference View Synthesizer), developed and used by the MPEG-I group, is a DIBR (depth image-based rendering) program that generates an image at a virtual (intermediate) viewpoint from multiple input viewpoints. RVS uses a computer-graphics mesh-surface method and outperforms previous pixel-based methods by 2.5 dB or more. Even though its OpenGL version provides a tenfold speed-up over the non-OpenGL one, it still runs at a non-real-time speed, i.e., 0.75 fps on two 2K-resolution input images. In this paper, we analyze the internals of the RVS implementation and modify its structure, achieving a 34-fold speed-up and therefore real-time performance (22-26 fps), through three key improvements: 1) reuse of OpenGL buffers and texture objects, 2) parallelization of file I/O and OpenGL execution, and 3) parallelization of GPU shader execution and buffer transfers.
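
Of the three improvements, the I/O-rendering overlap is the easiest to sketch outside of OpenGL. The snippet below is a generic producer-consumer illustration of that idea, not the actual RVS code; load_frame and render_frame are hypothetical caller-supplied stand-ins for the RVS file-reading and synthesis routines.

```python
import queue
import threading

def run_pipeline(frame_paths, load_frame, render_frame, depth=4):
    """Overlap file I/O with rendering: a loader thread prefetches frames into
    a bounded queue while the main thread renders, mirroring the idea of
    parallelizing file I/O and OpenGL execution."""
    buf = queue.Queue(maxsize=depth)

    def loader():
        for path in frame_paths:
            buf.put(load_frame(path))      # blocks when the queue is full
        buf.put(None)                      # end-of-stream marker

    threading.Thread(target=loader, daemon=True).start()
    while (frame := buf.get()) is not None:
        render_frame(frame)                # rendering overlaps the next read
```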

Standardization of MPEG-I Immersive Audio and Related Technologies (MPEG-I Immersive Audio 표준화 및 기술 동향)

  • Jang, D.Y.;Kang, K.O.;Lee, Y.J.;Yoo, J.H.;Lee, T.J.
    • Electronics and Telecommunications Trends / v.37 no.3 / pp.52-63 / 2022
  • Immersive media, also known as spatial media, has become essential with the decrease in face-to-face activities in the COVID-19 pandemic era. Teleconferencing, the metaverse, and digital twins have been developed with high expectations as immersive media services, and the demand for hyper-realistic media is increasing. Under these circumstances, MPEG-I Immersive Media is being standardized as a technology for navigable virtual reality, which is expected to be launched in the first half of 2024, and the Audio Group is working to standardize the immersive audio technology. Following this trend, this article introduces trends in MPEG-I immersive audio standardization. Further, it describes the features of the immersive audio rendering technology, focusing on the structure and function of the RM0 base technology, which was chosen after evaluating all the technologies proposed at the January 2022 MPEG Audio Meeting.

Spatio-Temporal Error Concealment of I-frame using GOP structure of MPEG-2 (MPEG-2의 GOP 구조를 이용한 I 프레임의 시공간적 오류 은닉)

  • Kang, Min-Jung;Ryu, Chul
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.1C / pp.72-82 / 2004
  • This paper proposes more robust error concealment techniques (ECTs) for MPEG-2 intra coded frames. The MPEG-2 source coding algorithm is very sensitive to transmission errors due to its use of variable-length coding. Transmission errors are handled by error correction schemes; however, they cannot always be corrected properly. Error concealment (EC) is used to conceal the errors that are not corrected and to provide minimum visual distortion at the decoder. If errors occur in an intra coded frame, that is, the starting frame of a GOP, they propagate to the other inter coded frames due to the nature of motion-compensated prediction coding, and such error propagation may cause severe visual distortion. The algorithm proposed in this paper utilizes the spatio-temporal information of neighboring inter coded frames to conceal successive slice errors occurring in the I-frame, and it overcomes the problems of previous ECTs. The proposed algorithm performs consistently even in networks where severe transmission errors frequently occur. It is implemented in an MPEG-2 video codec, and simulations confirm that it provides less visible distortion and higher PSNR than other approaches.
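
As a rough illustration of spatio-temporal concealment of lost I-frame slices, the sketch below blends a temporal copy from the previously decoded frame with a spatial interpolation between the nearest correctly received rows; the blending weight, the row-level granularity, and the assumption that the neighboring slices were received are illustrative choices, not the paper's actual method.

```python
import numpy as np

def conceal_lost_slices(frame, prev_frame, lost_slices, w_spatial=0.5):
    """Blend spatial interpolation (rows above/below each lost slice) with a
    temporal copy of the co-located rows of the previous decoded frame.
    frame, prev_frame: 2-D luminance arrays; lost_slices: list of (top, bottom)
    row ranges; assumes the rows just outside each range were received."""
    out = frame.astype(np.float64).copy()
    for top, bottom in lost_slices:
        above = frame[max(top - 1, 0)].astype(np.float64)
        below = frame[min(bottom, frame.shape[0] - 1)].astype(np.float64)
        for r in range(top, bottom):
            t = (r - top + 1) / (bottom - top + 1)        # vertical weight
            spatial = (1.0 - t) * above + t * below
            temporal = prev_frame[r].astype(np.float64)
            out[r] = w_spatial * spatial + (1.0 - w_spatial) * temporal
    return np.clip(out, 0, 255).astype(frame.dtype)
```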

A Study on Encryption Techniques for Digital Rights Management of MPEG-4 Video Streams (MPEG-4 비디오 스트림의 디지털 저작권 관리를 위한 암호화 기법 연구)

  • Kim Gunhee;Shin Dongkyoo;Shin Dongil
    • The KIPS Transactions:PartC / v.12C no.2 s.98 / pp.175-182 / 2005
  • This paper presents encryption techniques for digital rights management (DRM) solutions for MPEG-4 streams. MPEG-4 is a format for streaming multimedia, and the streams are stored in the MPEG-4 file format. We designed three encryption methods, which encrypt the macroblocks (MBs) or motion vectors (MVs) of I- and P-VOPs (Video Object Planes) extracted from the MPEG-4 file format, and used DES as the cipher. Based on these three methods, we designed and implemented a DRM solution for an Internet broadcasting service that streams MPEG-4 data, and then compared decryption speed and the quality of the rendered video sequences to determine an optimal encryption method.
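
The selective-encryption idea can be sketched as encrypting only the byte ranges that carry MB or MV data while leaving the rest of the bitstream untouched. The snippet below is a simplified illustration assuming the pycryptodome DES module; the regions list (byte offsets of MB/MV payloads) is a caller-supplied stand-in for the bitstream parsing the paper performs on the MPEG-4 file format.

```python
from Crypto.Cipher import DES   # pycryptodome; any DES implementation would do

def encrypt_regions(bitstream: bytes, regions, key: bytes) -> bytes:
    """Selectively DES-encrypt the byte ranges carrying I-/P-VOP macroblock
    or motion-vector data; regions is a list of (start, end) byte offsets."""
    cipher = DES.new(key, DES.MODE_ECB)          # key must be 8 bytes
    out = bytearray(bitstream)
    for start, end in regions:
        n = (end - start) // 8 * 8               # DES works on whole 8-byte blocks
        if n:
            out[start:start + n] = cipher.encrypt(bytes(out[start:start + n]))
    return bytes(out)
```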

Design and Implementation of MPEG-2 Compressed Video Information Management System (MPEG-2 압축 동영상 정보 관리 시스템의 설계 및 구현)

  • Heo, Jin-Yong;Kim, In-Hong;Bae, Jong-Min;Kang, Hyun-Syug
    • The Transactions of the Korea Information Processing Society / v.5 no.6 / pp.1431-1440 / 1998
  • Video data are retrieved and stored in various compressed forms according to their characteristics. In this paper, we present a generic data model that captures the structure of a video document and provides a means for indexing a video stream. Using this model, we design and implement CVIMS (the MPEG-2 Compressed Video Information Management System) to store and retrieve video documents. CVIMS extracts I-frames from MPEG-2 files, selects key-frames from the I-frames, and stores in a database the index information such as thumbnails, captions, and picture descriptors of the key-frames. CVIMS also retrieves MPEG-2 video data using the thumbnails of key-frames and various query labels.
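
Key-frame selection from the extracted I-frames can be sketched with a simple shot-change style test. The snippet below is a hypothetical illustration, not CVIMS's actual selection rule: it keeps an I-frame whenever its grey-level histogram differs enough from the last selected key-frame, with the distance measure and threshold chosen for illustration.

```python
import numpy as np

def select_key_frames(i_frames, threshold=0.25):
    """Return indices of I-frames kept as key-frames, based on the total
    variation distance between normalized grey-level histograms."""
    keys, last_hist = [], None
    for idx, frame in enumerate(i_frames):
        hist, _ = np.histogram(frame, bins=64, range=(0, 256))
        hist = hist / frame.size                        # normalize to probabilities
        if last_hist is None or 0.5 * np.abs(hist - last_hist).sum() > threshold:
            keys.append(idx)
            last_hist = hist
    return keys
```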


Adaptive Video Watermarking using the Bitrate and the Motion Vector (비트율과 움직임 벡터를 이용한 적응적 동영상 워터마킹)

  • Ahn, I.Y.
    • 전자공학회논문지 IE / v.43 no.4 / pp.37-42 / 2006
  • This paper proposes an adaptive video watermarking algorithm based on the bitrate and motion vector size in an MPEG-2 system. The watermark strength in I-frames is adapted to the quantization step size, and the strength in P- and B-frames is adapted to the quantization step size and the motion vector of each macroblock, to make the watermark more robust against the degradation that accompanies aggressive compression. Real-time watermark extraction is performed directly in the DCT domain during MPEG decoding, without fully decoding the MPEG video. Experimental simulations show that the difference between the watermarked and original frames is almost invisible, and that the watermark is resistant to frame dropping, MPEG compression, GOP conversion, and low-pass filter attacks.
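
A minimal sketch of the adaptation idea is shown below: the embedding strength grows with the quantization step size and, for P/B-frames, with the macroblock motion-vector magnitude, and the watermark is added to mid-frequency DCT coefficients. The strength rule, constants, and coefficient positions are assumptions for illustration, not the values used in the paper.

```python
import numpy as np

def watermark_strength(q_step, mv_mag=0.0, base=1.0, k_q=0.1, k_mv=0.05):
    """Illustrative strength rule: scale the watermark with the quantization
    step (I-frames) and additionally with the macroblock motion-vector
    magnitude (P/B-frames)."""
    return base + k_q * q_step + k_mv * mv_mag

def embed_block(dct_block, wm_bits, q_step, mv_mag=0.0, positions=((2, 1), (1, 2))):
    """Add a +/-alpha watermark to selected mid-frequency DCT coefficients
    of an 8x8 block."""
    alpha = watermark_strength(q_step, mv_mag)
    out = dct_block.astype(np.float64).copy()
    for bit, (u, v) in zip(wm_bits, positions):
        out[u, v] += alpha if bit else -alpha
    return out
```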

Performance Analysis on View Synthesis of 360 Videos for Omnidirectional 6DoF in MPEG-I (MPEG-I의 6DoF를 위한 360 비디오 가상시점 합성 성능 분석)

  • Kim, Hyun-Ho;Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.24 no.2 / pp.273-280 / 2019
  • 360 video is attracting attention as immersive media with the spread of VR applications, and the MPEG-I (Immersive) Visual group is actively working on standardization to support immersive media experiences with up to six degrees of freedom (6DoF). In the virtual space of omnidirectional 6DoF, which is defined as the case providing 6DoF within a restricted area, viewing the scene from any viewpoint at any position in the space requires rendering the view by synthesizing additional viewpoints called virtual omnidirectional viewpoints. This paper presents view synthesis results and their analysis, carried out as exploration experiments (EEs) on omnidirectional 6DoF in MPEG-I. Specifically, view synthesis results are presented for various synthesis conditions, such as the distance between the input views and the virtual view to be synthesized, and the number of input views selected from the given set of 360 videos providing omnidirectional 6DoF.