• Title/Abstract/Keyword: Video Encoding

Search results: 505 items (processing time: 0.024 sec)

Low-Complexity MPEG-4 Shape Encoding towards Realtime Object-Based Applications

  • Jang, Euee-Seon
    • ETRI Journal / Vol. 26, No. 2 / pp.122-135 / 2004
  • Although frame-based MPEG-4 video services have been successfully deployed since 2000, MPEG-4 video coding now faces strong competition in becoming a dominant player in the market. Object-based coding is one of the key functionalities of MPEG-4 video coding, and real-time object-based video encoding is also important for multimedia broadcasting in the near future. Object-based video services using MPEG-4 have not yet made a successful debut for several reasons. One of the critical problems is the coding complexity of object-based video coding compared with frame-based video coding. Since a video object is described with an arbitrary shape, the bitstream contains not only motion and texture data but also shape data, which introduces additional complexity on the decoder side as well as on the encoder side. In this paper, we analyze the current MPEG-4 video encoding tools and propose efficient coding technologies that reduce the complexity of the encoder. Using the proposed coding schemes, we obtain a 56 percent reduction in shape-coding complexity over the MPEG-4 video reference software (Microsoft version, 2000 edition).
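
A minimal sketch of why arbitrary-shape coding adds encoder work: an object-based VOP carries a binary alpha plane in addition to motion and texture, and each 16×16 alpha block must be classified (and, if it straddles the object boundary, shape-coded). The snippet below is an illustration only, not the paper's complexity-reduction method; all names are hypothetical.

```python
import numpy as np

def classify_alpha_blocks(alpha, bs=16):
    """Label each binary alpha block as transparent, opaque, or boundary.

    Only boundary blocks need explicit shape coding, which is the extra
    work an object-based encoder performs on top of motion/texture coding.
    """
    labels = {}
    h, w = alpha.shape
    for y in range(0, h, bs):
        for x in range(0, w, bs):
            blk = alpha[y:y + bs, x:x + bs]
            if not blk.any():
                labels[(y, x)] = "transparent"
            elif blk.all():
                labels[(y, x)] = "opaque"
            else:
                labels[(y, x)] = "boundary"   # must be shape-coded
    return labels

alpha = np.zeros((64, 64), dtype=np.uint8)
alpha[8:40, 8:40] = 1                         # toy object mask
n_boundary = sum(v == "boundary" for v in classify_alpha_blocks(alpha).values())
print(n_boundary, "boundary blocks require shape coding")
```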


고속 스케일러블 동영상 부호화 알고리듬 (A Fast Scalable Video Encoding Algorithm)

  • 문용호
    • 대한임베디드공학회논문지 / Vol. 7, No. 5 / pp.285-290 / 2012
  • In this paper, we propose a fast encoding algorithm for scalable video encoding that does not compromise coding performance. Through an analysis of the multiple motion estimation processes performed at the enhancement layer, we identify redundant motion estimations and derive a condition under which the redundant ones can be determined efficiently without additional memory. Based on this condition, the redundant motion estimation processes are excluded in the proposed algorithm. Simulation results show that the proposed algorithm is faster than the conventional fast encoding method without performance degradation or additional memory.
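
The abstract does not give the exact redundancy condition, so the sketch below only illustrates the general shape of the idea: skip an enhancement-layer motion search when a motion vector that is already available (here, hypothetically, the base-layer vector) is judged good enough. All function names and the threshold are placeholders.

```python
def enhancement_layer_me(block, ref, base_mv, cost, full_search, threshold=256):
    """Return a motion vector for an enhancement-layer block.

    If the inherited base-layer vector is already cheap enough (a stand-in
    for the paper's redundancy condition), the full search is skipped.
    """
    reused_cost = cost(block, ref, base_mv)   # cost of reusing the base-layer MV
    if reused_cost < threshold:               # hypothetical redundancy test
        return base_mv, reused_cost           # redundant search avoided
    return full_search(block, ref)            # otherwise search as usual
```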

A Study on a Compensation of Decoded Video Quality and an Enhancement of Encoding Speed

  • Sir, Jaechul;Yoon, Sungkyu;Lim, Younghwan
    • 한국컴퓨터그래픽스학회논문지 / Vol. 6, No. 3 / pp.35-40 / 2000
  • There are two problems in H.26x compression techniques. One is the compression time in the encoding process, and the other is the degradation of decoded video quality due to high compression rates. Transferring moving pictures in real time requires very high compression, which causes substantial loss of the original video data and results in degraded quality. In particular, degradation known as blocking artifacts may appear. Blocking artifacts are produced by DCT-based coding techniques because they operate without considering the correlation between pixels across block boundaries, which leads to discontinuities between adjacent blocks. This paper describes methods for compensating the quality of H.26x-decoded data and for enhancing encoding speed for real-time operation. The goal of the quality compensation is not to make the decoded video identical to the original video but to make it perceptually better to the human eye. We suggest an algorithm that reduces blocking artifacts and sharpens the decoded video at the decoder. To enhance encoding speed, we adopt a new four-step search algorithm. As shown in the experimental results, the quality compensation provides better video quality by reducing blocking artifacts, and the new four-step search algorithm with an MMX™ implementation improves encoding speed from 2.5 fps to 17 fps.
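
For the speed side, the abstract names a four-step search; the following is a simplified sketch of the classic four-step block-matching search (coarse 9-point stages with step 2, then a final 3×3 stage with step 1). It is plain Python/NumPy, not the paper's MMX™ implementation.

```python
import numpy as np

def sad(cur, ref, cy, cx, by, bx, bs=16):
    """Sum of absolute differences between the current block and a candidate."""
    h, w = ref.shape
    if not (0 <= cy <= h - bs and 0 <= cx <= w - bs):
        return np.inf                          # candidate falls outside the frame
    return np.abs(cur[by:by+bs, bx:bx+bs].astype(int)
                  - ref[cy:cy+bs, cx:cx+bs].astype(int)).sum()

def four_step_search(cur, ref, by, bx, bs=16):
    best, best_cost = (by, bx), sad(cur, ref, by, bx, by, bx, bs)
    for _ in range(3):                         # up to three coarse stages, step 2
        center = best
        for dy in (-2, 0, 2):
            for dx in (-2, 0, 2):
                c = sad(cur, ref, center[0]+dy, center[1]+dx, by, bx, bs)
                if c < best_cost:
                    best_cost, best = c, (center[0]+dy, center[1]+dx)
        if best == center:                     # minimum stayed at the center:
            break                              # go straight to the fine stage
    center = best                              # final 3x3 stage, step 1
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            c = sad(cur, ref, center[0]+dy, center[1]+dx, by, bx, bs)
            if c < best_cost:
                best_cost, best = c, (center[0]+dy, center[1]+dx)
    return (best[0]-by, best[1]-bx), best_cost
```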


A Wavelet-Based Video Watermarking Approach Robust to Re-encoding

  • 유길상;이원형
    • 한국통신학회논문지 / Vol. 33, No. 1C / pp.124-130 / 2008
  • We present in this paper a method of digital watermarking for video data based on the discrete wavelet transform. In the proposed method, a watermark signal is inserted into the decompressed bitstream, while detection is performed on the uncompressed video. This method allows detection of whether the video has been manipulated or its format has been changed. We embed the watermark in the lowest-frequency components of each frame of the uncoded video using the wavelet transform. The watermark can be extracted directly from the decoded video without access to the original video. Experimental results show that the proposed method gives watermarked video of better quality and is robust against MPEG coding, downsampling, and re-encoding to other video formats such as MPEG-4 and H.264.
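
A minimal sketch of the embedding step described above, assuming a one-level Haar DWT and a simple additive rule in the lowest-frequency (LL) subband; the paper's wavelet, embedding strength, and detection procedure are not specified here.

```python
import numpy as np

def haar2(f):
    """One-level 2D Haar transform; returns LL, HL, LH, HH subbands."""
    a, b = f[0::2, 0::2].astype(float), f[0::2, 1::2].astype(float)
    c, d = f[1::2, 0::2].astype(float), f[1::2, 1::2].astype(float)
    return (a+b+c+d)/2, (a-b+c-d)/2, (a+b-c-d)/2, (a-b-c+d)/2

def ihaar2(LL, HL, LH, HH):
    """Inverse of haar2."""
    h, w = LL.shape
    f = np.empty((2*h, 2*w))
    f[0::2, 0::2] = (LL + HL + LH + HH) / 2
    f[0::2, 1::2] = (LL - HL + LH - HH) / 2
    f[1::2, 0::2] = (LL + HL - LH - HH) / 2
    f[1::2, 1::2] = (LL - HL - LH + HH) / 2
    return f

def embed_watermark(frame, bits, alpha=2.0):
    """Additively embed +/-1 watermark symbols into the LL subband of a frame."""
    LL, HL, LH, HH = haar2(frame)
    wm = bits.reshape(LL.shape) * 2 - 1        # map {0,1} -> {-1,+1}
    return ihaar2(LL + alpha * wm, HL, LH, HH)

frame = np.random.default_rng(0).integers(0, 256, (64, 64))
bits = np.random.default_rng(1).integers(0, 2, (32 * 32,))
marked = embed_watermark(frame, bits)
```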

Stereo 360 VR을 위한 실시간 압축 영상 획득 시스템 (Real-Time Compressed Video Acquisition System for Stereo 360 VR)

  • 최민수;백준기
    • 방송공학회논문지 / Vol. 24, No. 6 / pp.965-973 / 2019
  • In this paper, we design a real-time stereo 4K@60fps 360 VR video acquisition system composed of three modules: video stream acquisition, video encoding, and video stitching. The system stitches six 2K@60fps video streams, captured from six cameras over an HDMI interface, into a stereo 4K@60fps 360 VR video in real time. In the acquisition stage, multi-threading is used to capture the video streams from each camera in real time. In the encoding stage, raw frames are transferred through memory using multiple threads and encoded in parallel to reduce the transfer load between the acquisition and stitching modules. In the stitching stage, real-time stitching is achieved by performing the stitching calibration as a preprocessing step.
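
A hedged sketch of the pipeline structure described above (per-camera capture threads, per-camera parallel encoding, and a stitcher consuming all six streams); the camera, encoder, and stitching calls are placeholders, and the real system's interfaces are not reproduced.

```python
import threading, queue

NUM_CAMERAS = 6
raw = [queue.Queue(maxsize=4) for _ in range(NUM_CAMERAS)]   # raw 2K frames
enc = [queue.Queue(maxsize=4) for _ in range(NUM_CAMERAS)]   # encoded frames

def capture_loop(cam_id, grab_frame):
    """One thread per camera: push raw frames as they arrive over HDMI."""
    while True:
        raw[cam_id].put(grab_frame(cam_id))

def encode_loop(cam_id, encode_frame):
    """Parallel per-camera encoding to cut the transfer cost to the stitcher."""
    while True:
        enc[cam_id].put(encode_frame(raw[cam_id].get()))

def stitch_loop(stitch, emit):
    """Combine one frame from each camera into a stereo 4K 360 VR frame.

    The stitching calibration is assumed precomputed, so this loop only
    applies it, which is what keeps the stage real-time.
    """
    while True:
        group = [enc[i].get() for i in range(NUM_CAMERAS)]
        emit(stitch(group))

# Example wiring (grab_frame, encode_frame, stitch, emit supplied by the system):
# for i in range(NUM_CAMERAS):
#     threading.Thread(target=capture_loop, args=(i, grab_frame), daemon=True).start()
#     threading.Thread(target=encode_loop, args=(i, encode_frame), daemon=True).start()
# threading.Thread(target=stitch_loop, args=(stitch, emit), daemon=True).start()
```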

병렬 LDPCA 채널코드 부호화 방법을 사용한 고속 분산비디오부호화 (Fast Distributed Video Coding using Parallel LDPCA Encoding)

  • 박종빈;전병우
    • 방송공학회논문지 / Vol. 16, No. 1 / pp.144-154 / 2011
  • This paper proposes a parallel processing method to further accelerate the transform-domain Wyner-Ziv distributed video encoder, which is well suited to fast, low-power video encoding. In the conventional transform-domain Wyner-Ziv distributed video coding scheme, the quantized transform coefficients are decomposed into bitplanes and each bitplane is sequentially encoded with the LDPCA channel code, so LDPCA encoding accounts for about 60% of the total encoding computation on average, and this complexity grows further as the encoding bitrate increases. To alleviate this complexity problem, we propose a parallelization method that bundles multiple bitplanes into a single message group and encodes the bundled data simultaneously in a single pass of fast LDPCA channel coding. Compared with the conventional sequential method, the proposed method speeds up LDPCA channel encoding by up to 8 times at low bitrates and 55 times at high bitrates. As a result, the relative complexity of LDPCA channel encoding within the overall transform-domain Wyner-Ziv distributed video coding is reduced to 9% on average, and the Wyner-Ziv encoder can encode 700 to 2,300 QCIF frames per second on a PC with a 2.5 GHz CPU when the GOP length is 64. The proposed method can also be applied to pixel-domain Wyner-Ziv distributed video coding using LDPCA and is therefore expected to be useful in various applications that require fast encoding.
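
A minimal numerical sketch of the bundling idea: an LDPCA encoder's core step is a syndrome computation s = Hx (mod 2), so stacking several bitplanes as columns of one matrix lets all of them be encoded in a single matrix product instead of one pass per bitplane. The parity-check matrix below is a random stand-in, not an actual LDPCA code, and the accumulation step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, num_bitplanes = 1584, 792, 8                 # toy code and bundle sizes
H = (rng.random((m, n)) < 0.01).astype(int)        # stand-in parity-check matrix
bitplanes = rng.integers(0, 2, size=(n, num_bitplanes))

# Sequential encoding: one syndrome computation per bitplane.
sequential = [(H @ bitplanes[:, k]) % 2 for k in range(num_bitplanes)]

# Bundled encoding: all bitplanes in one matrix product (the parallelization idea).
bundled = (H @ bitplanes) % 2

assert all(np.array_equal(bundled[:, k], sequential[k]) for k in range(num_bitplanes))
```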

다목적 비디오 부호화를 위한 고속 어파인 움직임 예측 방법 (Fast Affine Motion Estimation Method for Versatile Video Coding)

  • 정승원;전동산
    • 한국산업융합학회 논문집 / Vol. 25, No. 4_2 / pp.707-714 / 2022
  • Versatile Video Coding (VVC) is the most recent video coding standard, developed by the Joint Video Experts Team (JVET). It provides significantly improved coding performance compared to the previous standard, High Efficiency Video Coding (HEVC). Although VVC achieves powerful coding performance, its encoder requires tremendous computational complexity. In particular, affine motion compensation (AMC) adopts block-based 4-parameter or 6-parameter affine prediction to overcome the limits of the translational motion model, at the cost of higher encoding complexity. In this paper, we propose an early termination of AMC that determines whether the affine motion estimation for AMC is performed or not. Experimental results show that the proposed method reduces the encoding complexity of affine motion estimation (AME) by up to 16% compared to VVC Test Model 17 (VTM17).
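
The paper's early-termination criterion is not spelled out in the abstract, so the sketch below only shows the general control flow: run translational motion estimation first and skip affine motion estimation (AME) when a cheap test predicts it will not pay off. Every name and the threshold function are placeholders.

```python
def motion_estimation(cu, translational_me, affine_me, skip_threshold):
    """Decide whether to run AME for a coding unit (CU).

    translational_me / affine_me return (parameters, rate-distortion cost);
    skip_threshold(cu) stands in for the paper's early-termination test.
    """
    trans_mv, trans_cost = translational_me(cu)    # regular ME first
    if trans_cost < skip_threshold(cu):            # hypothetical early termination
        return trans_mv, trans_cost, False         # AME skipped
    affine_params, affine_cost = affine_me(cu)     # 4- or 6-parameter model
    if affine_cost < trans_cost:
        return affine_params, affine_cost, True
    return trans_mv, trans_cost, False
```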

고속 동영상 압축을 위한 양자화 과정 생략 기법 (Skipping Method of Quantization for Fast Video Encoding)

  • 송원선;김범수;홍민철
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2007 Summer Conference Proceedings / pp.323-324 / 2007
  • In this paper, we present a method of skipping the quantization step for fast video encoding. Based on a theoretical analysis of the integer transform and quantization in the H.264 video coder, we derive a sufficient threshold under which each quantized coefficient becomes zero. Skipping the quantization for coefficients below the given threshold reduces the complexity of the encoder, leading to a saving in the total encoding time. Simulation results show the capability of the proposed algorithm.
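
A sketch of the skipping condition, assuming an H.264-style scalar quantizer of the form Z = (|W|·MF + f) >> qbits: a coefficient quantizes to zero whenever |W| < ((1 << qbits) - f) / MF, so a block whose coefficients all fall under that bound can skip quantization (and the work that follows it). This shows the general idea only, not the paper's exact derivation; the example parameter values are illustrative.

```python
def zero_threshold(MF, qbits, f):
    """Magnitude below which (|W|*MF + f) >> qbits is guaranteed to be zero."""
    return ((1 << qbits) - f) / MF

def quantize_or_skip(coeffs, MF, qbits, f):
    """Quantize integer transform coefficients, or skip if all quantize to zero."""
    thr = zero_threshold(MF, qbits, f)
    if all(abs(w) < thr for w in coeffs):
        return None                                # whole block skipped
    return [((abs(w) * MF + f) >> qbits) * (1 if w >= 0 else -1) for w in coeffs]

# Toy example (parameter values chosen only for illustration):
print(quantize_or_skip([3, -2, 1, 0], MF=13107, qbits=15, f=(1 << 15) // 3))
```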


IMPLEMENTATION EXPERIMENT OF VTP BASED ADAPTIVE VIDEO BIT-RATE CONTROL OVER WIRELESS AD-HOC NETWORK

  • Ujikawa, Hirotaka;Katto, Jiro
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송공학회 2009 IWAIT / pp.668-672 / 2009
  • In a wireless ad-hoc network, knowing the available bandwidth of the time-varying channel is imperative for live video streaming applications. This is because the available bandwidth varies continuously and is strictly limited relative to the large data size of video streams. Additionally, adapting the encoding rate to a bit-rate suitable for the network reduces congestion loss and playback delay, which an excessive encoding rate would otherwise induce. While effective rate-control methods such as VTP (Video Transport Protocol) [1] have been proposed and perform well in simulation, implementing them to cooperate with the encoder and tuning their parameters remain challenging. In this paper, we show the results of an implementation experiment of a VTP-based encoding rate control method and introduce some of our parameter-tuning techniques for a video streaming application in a wireless environment.
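
A hedged sketch of an adaptation loop in the spirit described above (not the actual VTP algorithm): smooth the achieved receive rate reported by the receiver and set the encoder's target bit-rate slightly below that estimate, so an overlarge encoding rate does not cause congestion loss or playback delay. Parameter names and defaults are assumptions.

```python
def adapt_encoder_bitrate(reported_rates_bps, set_encoder_bitrate,
                          alpha=0.125, margin=0.9, floor_bps=100_000):
    """Feed a stream of achieved-rate reports into the encoder's rate control.

    alpha     : smoothing factor of the exponentially weighted moving average
    margin    : stay a little below the bandwidth estimate to avoid congestion
    floor_bps : never starve the encoder below a minimum usable rate
    """
    estimate = None
    for achieved in reported_rates_bps:
        estimate = achieved if estimate is None else (
            (1 - alpha) * estimate + alpha * achieved)
        set_encoder_bitrate(int(max(floor_bps, margin * estimate)))

# Example: adapt_encoder_bitrate([1.2e6, 0.9e6, 0.5e6, 1.0e6], print)
```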


다목적 비디오 부/복호화를 위한 다층 퍼셉트론 기반 삼항 트리 분할 결정 방법 (Multi-Layer Perceptron Based Ternary Tree Partitioning Decision Method for Versatile Video Coding)

  • 이태식;전동산
    • 한국멀티미디어학회논문지 / Vol. 25, No. 6 / pp.783-792 / 2022
  • Versatile Video Coding (VVC) is the latest video coding standard, developed in 2020 by the Joint Video Experts Team (JVET) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). Although VVC provides powerful coding performance, it requires tremendous computational complexity to determine the optimal block structures during the encoding process. In this paper, we propose a fast ternary tree decision method using two multi-layer-perceptron-based neural networks with 7-node input vectors, named STH-NN and STV-NN. After training, STH-NN and STV-NN achieved accuracies of 85% and 91%, respectively. Experimental results show that the proposed method reduces the encoding complexity by up to 25% with unnoticeable coding loss compared to the VVC test model (VTM).
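
A sketch of the decision step only: a small multi-layer perceptron takes a 7-element feature vector and outputs the probability that a ternary-tree split should be tested. The architecture, features, and (untrained) weights below are placeholders, not the trained STH-NN/STV-NN networks.

```python
import numpy as np

class TTDecisionMLP:
    """Toy 7-input MLP with one hidden layer (random, untrained weights)."""

    def __init__(self, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(size=(7, hidden)); self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(size=(hidden, 1)); self.b2 = np.zeros(1)

    def prob_split(self, features):
        h = np.maximum(features @ self.w1 + self.b1, 0.0)   # ReLU hidden layer
        z = (h @ self.w2 + self.b2)[0]
        return 1.0 / (1.0 + np.exp(-z))                     # sigmoid output

def should_test_tt_split(mlp, features, threshold=0.5):
    """Skip the costly TT rate-distortion check when the MLP says it is unlikely."""
    return mlp.prob_split(np.asarray(features, dtype=float)) >= threshold

mlp = TTDecisionMLP()
print(should_test_tt_split(mlp, [64, 64, 0.3, 1.2, 0.8, 2, 35]))   # 7 toy features
```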