• Title/Summary/Keyword: Video extrapolation


Video classifier with adaptive blur network to determine horizontally extrapolatable video content (적응형 블러 기반 비디오의 수평적 확장 여부 판별 네트워크)

  • Minsun Kim;Changwook Seo;Hyun Ho Yun;Junyong Noh
    • Journal of the Korea Computer Graphics Society / v.30 no.3 / pp.99-107 / 2024
  • While the demand for extrapolating video content horizontally or vertically is increasing, even the most advanced techniques cannot successfully extrapolate all videos. It is therefore important to determine whether a given video can be extrapolated well before attempting the actual extrapolation, which helps avoid wasting computing resources. This paper proposes a video classifier that can identify whether a video is suitable for horizontal extrapolation. The classifier utilizes optical flow and an adaptive Gaussian blur network, and can be applied to flow-based video extrapolation methods. The labeling for training was rigorously conducted through user tests and quantitative evaluations. By learning from this labeled dataset, a network was developed to determine the extrapolation capability of a given video. The proposed classifier achieved substantially more accurate classification than methods that use only the original video or a fixed blur, by effectively capturing the characteristics of the video through optical flow and the adaptive Gaussian blur network. The classifier can be used in various fields in conjunction with automatic video extrapolation techniques for immersive viewing experiences.
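The core feature of the abstract, blurring that adapts to local motion strength, can be sketched in a few lines. This is an illustrative 1-D toy with a hand-rolled Gaussian kernel, not the authors' network; `base_sigma` and `gain` are made-up parameters standing in for learned behavior:

```python
import math

def gaussian_kernel(sigma, radius=2):
    """Discrete 1-D Gaussian kernel, normalized to sum to 1."""
    weights = [math.exp(-(i * i) / (2.0 * sigma * sigma))
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def adaptive_blur_row(row, flow_mag, base_sigma=0.5, gain=1.5):
    """Blur each sample with a sigma that grows with the local flow magnitude."""
    out = []
    for i in range(len(row)):
        kernel = gaussian_kernel(base_sigma + gain * flow_mag[i])
        acc = 0.0
        for k, w in zip(range(-2, 3), kernel):
            j = min(max(i + k, 0), len(row) - 1)  # clamp at the borders
            acc += w * row[j]
        out.append(acc)
    return out

row = [0.0, 0.0, 1.0, 0.0, 0.0]
static = adaptive_blur_row(row, [0.0] * 5)   # low motion -> mild blur
moving = adaptive_blur_row(row, [2.0] * 5)   # high motion -> strong blur
print(static[2] > moving[2])  # the peak is flattened more where flow is large
```

A downstream classifier would then consume such motion-adaptive features rather than raw frames.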

Whole Frame Error Concealment with an Adaptive PU-based Motion Vector Extrapolation for HEVC

  • Kim, Seounghwi;Lee, Dongkyu;Oh, Seoung-Jun
    • IEIE Transactions on Smart Processing and Computing / v.4 no.1 / pp.16-21 / 2015
  • Most video services are transmitted over wireless networks, where video packets are likely to be lost during transmission. For this reason, numerous error concealment (EC) algorithms have been proposed to combat channel errors; however, most existing algorithms cannot effectively conceal a whole missing frame. To resolve this problem, this paper proposes a new Adaptive Prediction Unit-based Motion Vector Extrapolation (APMVE) algorithm to restore an entire missing frame encoded with High Efficiency Video Coding (HEVC). For each missing HEVC frame, it uses the prediction unit (PU) information of the previous frame to adaptively decide the size of the basic unit for error concealment and to provide a more accurate estimate of the motion vector in that basic unit than conventional methods. Simulation results showed that it is highly effective and significantly outperforms other existing frame recovery methods in terms of both objective and subjective quality.
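The PU-based extrapolation step can be sketched roughly as follows. This is a simplified toy on a coarse pixel grid, not the HEVC implementation; the PU list, the grid size, and the smallest-PU tie-break are modeled loosely after the abstract's description:

```python
# Each PU from the previous decoded frame: (x, y, size, (mvx, mvy)).
prev_pus = [
    (0, 0, 8, (4, 0)),    # large PU moving right
    (8, 0, 4, (-4, 4)),   # small PU moving down-left
]

def extrapolate_mvs(prev_pus, width, height, unit=4):
    """Project each PU along its own motion vector into the lost frame.

    Every `unit`-sized cell of the lost frame keeps the MV of the smallest
    PU whose projection covers it (the adaptive-size rule sketched here)."""
    grid = {}
    for (x, y, size, mv) in prev_pus:
        nx, ny = x + mv[0], y + mv[1]       # PU position extrapolated by its MV
        for cy in range(ny, ny + size, unit):
            for cx in range(nx, nx + size, unit):
                if 0 <= cx < width and 0 <= cy < height:
                    cell = (cx // unit, cy // unit)
                    if cell not in grid or size < grid[cell][0]:
                        grid[cell] = (size, mv)
    return {cell: mv for cell, (size, mv) in grid.items()}

mvs = extrapolate_mvs(prev_pus, width=16, height=16)
print(mvs[(1, 1)])  # cell covered by both PUs: the smaller PU's MV wins
```

Cells left without any projected MV would fall back to a default (e.g. zero motion) in a full concealment pipeline.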

Whole Frame Error Concealment with an Adaptive PU-based Motion Vector Extrapolation and Boundary Matching (적응적인 PU 기반 움직임 벡터 외삽과 경계 정합을 통한 프레임 전체 오류 은닉 방법에 관한 연구)

  • Kim, Seounghwi;Lee, Dongkyu;Oh, Seoung-Jun
    • Journal of Broadcast Engineering / v.20 no.4 / pp.533-544 / 2015
  • Recently, most video services are transmitted over wireless networks, where video packets are likely to be lost during transmission. For this reason, this paper proposes a new Error Concealment (EC) algorithm. For High Efficiency Video Coding (HEVC) bitstreams, the proposed algorithm combines Adaptive Prediction Unit-based Motion Vector Extrapolation (APMVE) with a Boundary Matching (BM) algorithm, employing both temporal and spatial correlation. APMVE adaptively decides an Error Concealment Basic Unit (ECBU) using the PU information of the previous frame, and BM, which exploits spatial correlation, is applied only to unreliable blocks. Simulation results show that the proposed algorithm provides higher subjective quality by reducing the blocking artifacts that appear in other existing algorithms.
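The boundary matching idea can be illustrated with a toy example: among candidate reconstructions of a lost block, the one whose border pixels best match the surrounding, correctly received pixels wins. This is a minimal sketch with invented data, not the paper's BM algorithm:

```python
def boundary_match_error(frame, block, x, y):
    """Sum of absolute differences between a candidate block's top/left
    border and the adjacent reconstructed pixels in the frame."""
    n = len(block)
    err = 0
    for i in range(n):
        err += abs(block[0][i] - frame[y - 1][x + i])   # top boundary
        err += abs(block[i][0] - frame[y + i][x - 1])   # left boundary
    return err

# Toy 6x6 frame with a smooth gradient; the 2x2 block at (2, 2) is lost.
frame = [[r + c for c in range(6)] for r in range(6)]
candidates = {
    (0, 0): [[4, 5], [5, 6]],   # continues the local gradient
    (2, 0): [[9, 9], [9, 9]],   # flat block, poor fit
}
best = min(candidates,
           key=lambda mv: boundary_match_error(frame, candidates[mv], 2, 2))
print(best)  # the gradient-continuing candidate has the smaller boundary error
```

In the paper's setting the candidates would come from motion-compensated blocks, and BM would only arbitrate the blocks that APMVE flagged as unreliable.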

Efficient Motion Compensated Extrapolation Technique Using Forward and Backward Motion Estimation (순방향과 역방향 움직임 추정을 이용한 효율적인 움직임 보상 외삽 기법)

  • Kwon, Hye-Gyung;Lee, Chang-Woo
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.4C / pp.207-216 / 2011
  • Motion compensated extrapolation (MCE) techniques show inferior performance compared to motion compensated interpolation techniques, since only past frames are used in MCE. MCE techniques are used for the reconstruction of corrupted frames, frame-rate up-conversion, and the generation of side information in distributed video coding systems. In this paper, the performance of various MCE techniques is evaluated and an efficient MCE technique using forward and backward motion estimation is proposed. In the proposed technique, the present frame is extrapolated by averaging two frames generated by forward and backward motion estimation, respectively. It is shown that the proposed method produces better PSNR results and fewer blocking artifacts than conventional methods.
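The averaging step can be sketched on a 1-D toy "frame". The displacement values below are invented to show how averaging the forward and backward predictions behaves when the two motion estimates disagree; this is not the paper's estimator:

```python
def shift_row(row, d):
    """Translate a 1-D 'frame' by d samples, repeating the edge value."""
    n = len(row)
    return [row[min(max(i - d, 0), n - 1)] for i in range(n)]

# Two past frames; the object (value 9) moves right by one sample per frame.
f_t2 = [0, 9, 0, 0, 0]   # frame t-2
f_t1 = [0, 0, 9, 0, 0]   # frame t-1

# Suppose forward motion estimation (t-2 -> t-1) yields displacement +1,
# while backward estimation yields +2 (the two estimates can disagree).
forward = shift_row(f_t1, 1)
backward = shift_row(f_t1, 2)

# The extrapolated frame t averages the two candidate predictions.
f_t = [(a + b) / 2 for a, b in zip(forward, backward)]
print(f_t)
```

When the estimates agree the average equals either prediction; when they disagree, averaging spreads the energy and softens the block-edge errors a single estimator would produce.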

PU-based Motion Vector Extrapolation for HEVC Error Concealment (HEVC 오류 은닉을 위한 PU 기반 움직임 벡터 외삽법)

  • Kim, Sangmin;Lee, Dong-Kyu;Park, Dongmin;Oh, Seoung-Jun
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2014.06a / pp.209-210 / 2014
  • Recently, demand for video services provided over the Internet has been increasing. However, data transmitted over networks can easily be lost due to errors. In particular, transmission errors in information compressed at a high compression rate, as in High Efficiency Video Coding (HEVC), severely affect video reconstruction. Therefore, an error concealment (EC) method is needed to maintain consistent quality in a network environment. This paper proposes a prediction unit (PU)-based motion vector extrapolation (MVE) model for HEVC EC. Since the PU is the basic unit of prediction, the pixels within a PU are likely to belong to the same object. The model therefore performs extrapolation on a per-PU basis using the PU information of the frame preceding the lost frame. In addition, considering the relationship between the lost blocks and the extrapolated blocks, the smallest PU size among the overlapped extrapolated blocks is chosen as the basic unit for EC. By reflecting PU information, this method reduces block boundary artifacts.


Side Information Extrapolation Using Motion-aligned Auto Regressive Model for Compressed Sensing based Wyner-Ziv Codec

  • Li, Ran;Gan, Zongliang;Cui, Ziguan;Wu, Minghu;Zhu, Xiuchang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.2 / pp.366-385 / 2013
  • In this paper, we propose a compressed sensing (CS) based Wyner-Ziv (WZ) codec using motion-aligned auto-regressive model (MAAR) based side information (SI) extrapolation to improve the compression performance of low-delay distributed video coding (DVC). In the CS-based WZ codec, the WZ frame is divided into small blocks whose CS measurements are acquired at the encoder, and a specific CS reconstruction algorithm is proposed to correct errors in the SI using the CS measurements at the decoder. To generate high-quality SI, the MAAR model is introduced to improve the inaccurate motion field of the auto-regressive (AR) model, and Tikhonov regularization of the MAAR coefficients and overlapped-block-based interpolation are performed to reduce blocking effects and over-fitting errors. Simulation experiments show that the proposed CS-based WZ codec with MAAR-based SI generation achieves better results than other SI extrapolation methods.
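Tikhonov-regularized estimation of AR coefficients can be sketched in closed form for the two-neighbor case. This is a generic illustration of ridge-style regularization, not the paper's MAAR model; the data and the λ value are made up:

```python
def tikhonov_ar_weights(X, y, lam=0.1):
    """Solve (X^T X + lam*I) a = X^T y for two AR coefficients (2x2 closed form)."""
    # Entries of the regularized normal equations.
    s00 = sum(x[0] * x[0] for x in X) + lam
    s01 = sum(x[0] * x[1] for x in X)
    s11 = sum(x[1] * x[1] for x in X) + lam
    b0 = sum(x[0] * t for x, t in zip(X, y))
    b1 = sum(x[1] * t for x, t in zip(X, y))
    det = s00 * s11 - s01 * s01
    return ((s11 * b0 - s01 * b1) / det, (s00 * b1 - s01 * b0) / det)

# Each target pixel is modeled from two motion-aligned neighbor pixels.
X = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
y = [0.5, 0.5, 1.0, 1.5]   # consistent with weights near (0.5, 0.5)
a = tikhonov_ar_weights(X, y, lam=0.01)
print(a)
```

The λI term keeps the normal equations well-conditioned when neighbor columns are nearly collinear, which is exactly the over-fitting failure mode the abstract mentions.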

Digital Gray-Scale/Color Image-Segmentation Architecture for Cell-Network-Based Real-Time Applications

  • Koide, Tetsushi;Morimoto, Takashi;Harada, Youmei;Mattausch, Jurgen Hans
    • Proceedings of the IEEK Conference / 2002.07a / pp.670-673 / 2002
  • This paper proposes a digital algorithm for gray-scale/color image segmentation of real-time video signals and a cell-network-based implementation architecture in state-of-the-art CMOS technology. Through extrapolation of design and simulation results, we predict that about 300×300 pixels can be integrated on a chip in 100 nm CMOS technology, realizing very high-speed segmentation at about 1600 sec per color image. Consequently, real-time color-video segmentation will become possible in the near future.


Trajectory Recovery Using Goal-directed Tracking (목표-지향 추적 기법을 이용한 궤적 복원 방법)

  • Oh, Seon Ho;Jung, Soon Ki
    • Journal of Korea Multimedia Society / v.18 no.5 / pp.575-582 / 2015
  • Obtaining the complete trajectory of an object is an important task in computer vision applications such as video surveillance. Previous studies on recovering the trajectory between two disconnected trajectory segments, however, do not take into account the object's motion characteristics or the uncertainty of the trajectory segments. In this paper, we present a novel approach, called goal-directed tracking, to recover the trajectory between two disjoint but associated trajectory segments. To incorporate the object's motion characteristics and uncertainty, a goal-directed state equation is first introduced. The goal-directed tracking framework is then constructed by integrating this equation into the object tracking and trajectory linking pipeline. Evaluation on a challenging dataset demonstrates that the proposed method can accurately recover the missing trajectory between two disconnected trajectory segments and appropriately constrain the motion of the object toward its goal (the target state) under uncertainty.
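A goal-directed state update can be sketched as a constant-velocity prediction that is blended, with growing weight, toward the velocity needed to reach the goal state. This is a hypothetical scalar illustration of the idea, not the paper's state equation:

```python
def goal_directed_recover(start, start_vel, goal, steps):
    """Recover missing positions between two trajectory segments.

    Each step blends a constant-velocity prediction with a pull toward the
    goal, so the path both respects the object's observed motion and is
    constrained to end at the target state."""
    x, v = start, start_vel
    path = []
    for k in range(steps):
        remaining = steps - k
        # Velocity needed to reach the goal in the remaining steps.
        v_goal = (goal - x) / remaining
        alpha = (k + 1) / steps          # trust the goal more as we approach it
        v = (1 - alpha) * v + alpha * v_goal
        x = x + v
        path.append(round(x, 3))
    return path

# Last observed position 0.0 with velocity 2.0; the next segment starts at 12.0.
path = goal_directed_recover(0.0, 2.0, 12.0, steps=4)
print(path)  # smoothly accelerates and lands exactly on the goal
```

The fixed blending schedule here stands in for the uncertainty weighting of the paper: a tracker would widen or narrow the pull toward the goal according to the confidence of each segment.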

Efficient Motion Compensated Extrapolation Techniques Using Forward and Backward Motion Estimation (순방향과 역방향 움직임 예측을 이용한 효율적인 움직임 보상 외삽 기법)

  • Kwon, Hye-Gyung;Lee, Chang-Woo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2010.07a / pp.24-27 / 2010
  • Motion compensated extrapolation can be used not only to reconstruct frames corrupted during transmission and to increase the frame rate, but also to generate side information in distributed video coding (DVC) systems. In this paper, the performance of various existing motion compensated extrapolation techniques is evaluated, and an efficient technique that uses both forward and backward motion estimation is proposed. Simulation results confirm that the proposed technique outperforms conventional methods.


Haptic Rendering based on Real-time Video of Deformable Bodies using Snakes Algorithm (스네이크 알고리즘을 이용한 실시간 영상기반 변형체의 햅틱 렌더링)

  • Kim, Young-Jin;Kim, Jung-Sik;Kim, Jung
    • 한국HCI학회:학술대회논문집 (Proceedings of HCI Korea) / 2007.02a / pp.58-63 / 2007
  • This paper presents a method for haptic rendering of a deformable object using real-time video, such as microscope or camera images. Image information of a slowly deforming object is extracted in real time, and image processing provides position information about its deformation and movement. When the object is deformed, its image is transferred to a computer through a camera, and the acquired image is processed with the snakes algorithm to provide position information for constructing a two-dimensional model. Haptic rendering of this virtual model is implemented to provide force feedback to a haptic device, and interpolation and extrapolation are applied to resolve the sampling-rate mismatch between the model and the haptic device for stable haptic rendering. Graphic rendering is also implemented for ease of manipulation.
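The interpolation/extrapolation step that bridges the slow video rate and the fast haptic loop can be sketched as follows. This is a generic 1-D sketch under assumed sample times (33 ms video frames, 1 ms haptic ticks), not the paper's implementation:

```python
def estimate_position(samples, t):
    """Estimate the model position at haptic time t from sparse video samples.

    Interpolates between the two surrounding video frames, or linearly
    extrapolates from the last two frames when t runs past the newest one."""
    (t0, p0), (t1, p1) = samples[-2], samples[-1]
    if t <= t1:
        for (ta, pa), (tb, pb) in zip(samples, samples[1:]):
            if ta <= t <= tb:
                w = (t - ta) / (tb - ta)
                return pa + w * (pb - pa)
    # Past the newest frame: extrapolate with the most recent velocity.
    v = (p1 - p0) / (t1 - t0)
    return p1 + v * (t - t1)

# Video frames arrive every 33 ms; the 1 kHz haptic loop queries in between.
samples = [(0.0, 10.0), (33.0, 13.3), (66.0, 16.6)]
print(estimate_position(samples, 49.5))   # interpolated between two frames
print(estimate_position(samples, 70.0))   # extrapolated past the last frame
```

Extrapolation keeps the force feedback smooth until the next frame arrives, at which point the estimate is corrected, which is the usual trade-off in rate-bridging schemes like this.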
