• Title/Summary/Keyword: 3D video


Error Concealment Using Inter-layer Correlation for Scalable Video Coding

  • Park, Chun-Su;Wang, Tae-Shick;Ko, Sung-Jea
    • ETRI Journal / v.29 no.3 / pp.390-392 / 2007
  • In this paper, we propose a new error concealment (EC) method using inter-layer correlation for scalable video coding. In the proposed method, the auxiliary motion vector (MV) and the auxiliary mode number (MN) of intra prediction are interleaved into the bitstream to recover the corrupted frame. In order to reduce the bit rate, the proposed method encodes the difference between the original and the predicted values of the MV and MN instead of the original values. Experimental results show that the proposed EC outperforms the conventional EC by 2.8 dB to 6.7 dB.

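The bit-rate saving described in the abstract above comes from differential coding: transmitting the residual between the original and the predicted MV/MN values instead of the raw values. A minimal sketch of that idea (function names are illustrative, not from the paper):

```python
def encode_mv_residual(original_mv, predicted_mv):
    """Transmit only the difference between the original and predicted
    motion vector; small residuals take fewer bits than raw values."""
    return (original_mv[0] - predicted_mv[0], original_mv[1] - predicted_mv[1])

def recover_mv(residual, predicted_mv):
    """Decoder side: re-derive the same prediction and add the residual back."""
    return (predicted_mv[0] + residual[0], predicted_mv[1] + residual[1])
```

The same residual scheme applies to the intra-prediction mode number, with the difference taken over mode indices rather than vector components.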

A Study on Rendition Management in Applying MPV (MusicPhotoVideo) to CE (Consumer Electronics) Products

  • Kim, Du-Il;Kim, Young-Yoon
    • Proceedings of the Korea Information Processing Society Conference / 2003.11a / pp.3-6 / 2003
  • MPV (MultiPhotoVideo or MusicPhotoVideo) is a proposed standard intended to improve inter-operability between digital devices and to make it easier for users to manage their content. For user convenience, MPV does not manage the content (the data itself) directly; instead, it manages content through metadata written in XML, and it defines Rendition so that the author's intent can be reproduced regardless of the hardware environment. However, applying the MPV standard to CE products, whose hardware resources are far more limited than those of IT products, poses many difficulties. This paper proposes a multimedia content layout method and a content search speed-up method for efficiently managing MPV-compliant multimedia content on CE products.


A Method for Removing Jagging Artifacts

  • Yang Seoung-Joon;Lee In-Hwan;Kwon Young-Jin
    • The Transactions of the Korean Institute of Electrical Engineers D / v.54 no.3 / pp.194-197 / 2005
  • Digital display products are becoming increasingly diverse and are pursuing high-quality image display. Digital TV supports various video signal formats, from conventional SD to digital HD, so format conversion of the video image is required. Traditional format conversion is performed by a one-dimensional linear interpolator applied in both the horizontal and vertical directions. The jagging artifact appears as a staircase-like linkage of line segments in several directions. In this paper, we present a method that effectively removes the jagging artifact using PCA (Principal Component Analysis) while preserving the detail in a given image.
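The core of the approach sketched above is estimating the local edge direction so that interpolation can follow the edge instead of cutting across it. A minimal sketch of PCA-based direction estimation on a patch's gradient field, assuming NumPy (an illustration of the principle, not the paper's exact algorithm):

```python
import numpy as np

def edge_direction(patch):
    """Estimate the dominant edge orientation (degrees in [0, 180)) of a
    grayscale patch via PCA on its per-pixel gradient vectors."""
    gy, gx = np.gradient(patch.astype(float))          # finite-difference gradients
    g = np.stack([gx.ravel(), gy.ravel()], axis=1)     # one 2D gradient per pixel
    cov = g.T @ g / len(g)                             # 2x2 gradient covariance
    w, v = np.linalg.eigh(cov)                         # eigenvalues ascending
    along = v[:, 0]                                    # smallest-eigenvalue eigenvector:
    return np.degrees(np.arctan2(along[1], along[0])) % 180.0  # the edge runs along it
```

Interpolating new pixels along the detected direction, rather than strictly horizontally and vertically, is what suppresses the staircase pattern.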

Temporal Texture Modeling for Video Retrieval

  • Kim, Do-Nyun;Cho, Dong-Sub
    • The Transactions of the Korean Institute of Electrical Engineers D / v.50 no.3 / pp.149-157 / 2001
  • In a video retrieval system, visual clues from still images and motion information from video are employed as feature vectors. We generate temporal textures to express the motion information; their advantages are that they are simple to express and easy to compute. We build these temporal textures from the wavelet coefficients (M components) that express motion information. Temporal texture feature vectors are then extracted using spatial texture features, i.e., spatial gray-level dependence. Motion amount and motion centroid are also computed from the temporal textures. Since motion trajectories provide the most important information for expressing motion properties, our modeling system extracts the main motion trajectory from the temporal textures.

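The "spatial gray-level dependence" features mentioned in the abstract above are co-occurrence statistics. A minimal sketch of a gray-level co-occurrence matrix (GLCM), assuming the input is already quantized to a small number of integer gray levels (illustrative, not the paper's exact feature set):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for displacement (dx, dy).
    img must already be quantized to integer levels in [0, levels)."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1   # count co-occurring level pairs
    return m / m.sum()                               # normalize to a joint probability
```

Scalar texture features (contrast, energy, homogeneity) are then computed from this matrix, per frame of the temporal texture.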

Design and Implementation of a JPEG Image Display Board Using FPGA

  • Kwon Byong-Heon;Seo Burm-Suk
    • Journal of Digital Contents Society / v.6 no.3 / pp.169-174 / 2005
  • In this paper, we propose an efficient design and implementation of a JPEG image display board that can display JPEG images on a TV. We use NAND flash memory to store the compressed JPEG bitstream and a video encoder to display the decoded JPEG image on the TV. We also convert YCbCr to RGB to superimpose characters on the JPEG image. The designed board is implemented using an FPGA.

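The YCbCr-to-RGB step mentioned above is a standard color-space conversion. A sketch using the ITU-R BT.601 full-range coefficients (the board itself would likely use a fixed-point hardware variant of this):

```python
def ycbcr_to_rgb(y, cb, cr):
    """Full-range BT.601 YCbCr -> RGB, with chroma centered at 128
    and results clamped to the 8-bit range."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)
```

Working in RGB makes character overlay straightforward: the on-screen-display pixels simply replace the decoded image pixels before output.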

A Study on a Convergence Video System through Floating Holograms

  • Oh, Seung-Hwan
    • Journal of Digital Convergence / v.18 no.10 / pp.397-402 / 2020
  • Holograms can be categorized into analog and digital holograms, but expensive equipment and the difficulty of content production place clear limits on what ordinary people can realize. In addition, research is needed on hologram content with added interaction, moving beyond existing static formats such as endlessly repeating content or passive viewing of a specific video. This article therefore proposes a convergence video system, focusing on the floating hologram among pseudo-hologram techniques. Eight elements of hologram interaction are identified: the camera height in three-dimensional space, the interval between 3D models, overlapped models, scale, animation, position, color, and 3D model change. Because the audience can control a floating hologram by themselves in real time, we suggest a methodology for producing popular, interactive hologram content that makes full use of the convergence video system and creates floating holograms easily, without expensive holographic equipment. The system should be deployed in actual exhibitions and refined with feedback to develop a better hologram video system.

Acceleration of Viewport Extraction for Multi-Object Tracking Results in 360-degree Video (360도 영상에서 다중 객체 추적 결과에 대한 뷰포트 추출 가속화)

  • Heesu Park;Seok Ho Baek;Seokwon Lee;Myeong-jin Lee
    • Journal of Advanced Navigation Technology / v.27 no.3 / pp.306-313 / 2023
  • Realistic and graphics-based virtual reality content is based on 360-degree video, and viewport extraction driven by the viewer's intention or by an automatic recommendation function is essential. This paper designs a viewport extraction system based on multiple object tracking in 360-degree video and proposes the parallel computing structure needed to extract multiple viewports. The viewport extraction process is parallelized with pixel-wise threads that transform ERP coordinates to 3D spherical surface coordinates and then map those spherical coordinates to 2D coordinates within the viewport. We evaluated the computation time of the proposed structure for up to 30 simultaneous viewport extractions on aerial 360-degree video sequences and confirmed up to 5240 times acceleration over a CPU-based implementation whose computation time grows in proportion to the number of viewports. When high-speed I/O or memory buffers are used to reduce ERP frame I/O time, viewport extraction can be accelerated by a further 7.82 times. The proposed parallelized viewport extraction structure can be applied to simultaneous multi-access services for 360-degree video or virtual reality content and to per-user video summarization services.
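The first stage of the pipeline described above maps each ERP pixel to a point on the unit sphere. A sketch of that transform, under assumed conventions (longitude spanning [-π, π] left to right, latitude [π/2, -π/2] top to bottom; the paper's exact conventions may differ):

```python
import math

def erp_to_sphere(u, v, width, height):
    """Map an ERP pixel (u, v) to a unit vector on the 3D sphere.
    This per-pixel independence is what makes pixel-wise GPU threads viable."""
    lon = (u / width - 0.5) * 2.0 * math.pi   # horizontal position -> longitude
    lat = (0.5 - v / height) * math.pi        # vertical position -> latitude
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    return x, y, z
```

The second stage would rotate this vector into the viewport's camera frame and project it onto the viewport plane; since every pixel is independent, both stages parallelize trivially.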

VIDEO TRAFFIC MODELING BASED ON $GEO^Y/G/{\infty}$ INPUT PROCESSES

  • Kang, Sang-Hyuk;Kim, Ba-Ra
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.12 no.3 / pp.171-190 / 2008
  • With the growth of wireless video streaming applications, an efficient video traffic model that reflects modern high-compression techniques is more desirable than ever, because wireless channel bandwidths are limited and time-varying. We propose a modeling and analysis method for video traffic based on a class of stochastic processes that we call '$GEO^Y/G/{\infty}$ input processes'. We model video traffic as a $GEO^Y/G/{\infty}$ input process with gamma-distributed batch sizes Y and a Weibull-like autocorrelation function. Using four real-encoded, full-length video traces, including action movies, a drama, and an animation, we evaluate our model against an existing one, the transformed-M/G/${\infty}$ input process, one of the most recently proposed video models in the literature. Our $GEO^Y/G/{\infty}$ model consistently provides conservative performance predictions, in terms of packet loss ratio, within acceptable error at the traffic loads of interest in practical multimedia streaming systems, while the existing transformed-M/G/${\infty}$ model fails to do so. For real-time use of our model, we analyze G/D/1/K queueing systems fed by a $GEO^Y/G/{\infty}$ input process to obtain upper estimates of the packet loss probabilities.

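A toy discrete-time simulation in the spirit of the $GEO^Y/G/{\infty}$ input above, assuming Bernoulli session starts per slot (hence geometric inter-arrival times), gamma-distributed batch sizes, and, for simplicity, a constant service time in place of a general G; all names and parameter values are illustrative, not from the paper:

```python
import random

def simulate_geo_batch_input(slots, p, batch_shape=2.0, batch_scale=1.5,
                             service=5, seed=0):
    """Discrete-time sketch: each slot, a new session starts with probability
    p and contributes a gamma-distributed rate for `service` slots; the trace
    records the aggregate rate per slot."""
    rng = random.Random(seed)
    active = []                        # [remaining_slots, session_rate]
    trace = []
    for _ in range(slots):
        if rng.random() < p:           # Bernoulli start -> geometric gaps
            rate = rng.gammavariate(batch_shape, batch_scale)
            active.append([service, rate])
        trace.append(sum(r for _, r in active))
        for s in active:               # age the active sessions
            s[0] -= 1
        active = [s for s in active if s[0] > 0]
    return trace
```

Feeding such a trace into a G/D/1/K queue simulation is how loss-ratio predictions like those in the abstract can be sanity-checked empirically.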

EFFICIENT MULTIVIEW VIDEO CODING BY OBJECT SEGMENTATION

  • Boonthep, Narasak;Chiracharit, Werapon;Chamnongthai, Kosin;Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.294-297 / 2009
  • Multi-view video consists of a set of video sequences captured from multiple viewpoints or view directions of the same scene. It contains an extremely large amount of data, plus extra information, to be stored or transmitted to the user. This paper exploits inter-view correlations among video objects and the background to reduce prediction complexity while achieving high coding efficiency in multi-view video coding. Our proposed algorithm is based on an object segmentation scheme that utilizes video object information obtained from the coded base view. This information helps us predict disparity vectors and motion vectors in the enhancement views through object registration, which leads to a high-compression, low-complexity coding scheme for the enhancement views. Experimental results show a PSNR gain of 2.5.3 dB over simulcast coding.


Extracting Graphics Information for Better Video Compression

  • Hong, Kang Woon;Ryu, Won;Choi, Jun Kyun;Lim, Choong-Gyoo
    • ETRI Journal / v.37 no.4 / pp.743-751 / 2015
  • Cloud gaming services depend heavily on the efficiency of real-time video streaming technology, owing to the limited bandwidth of the wired or wireless networks through which consecutive frame images are delivered to gamers. Video compression algorithms typically take advantage of similarities among video frames or within a single frame. This paper presents a method for computing and extracting both graphics information and object boundaries from the consecutive frame images of a game application. The method allows video compression algorithms to determine the positions and sizes of similar image blocks, which, in turn, helps achieve better compression ratios. The proposed method can be implemented easily using function call interception, a programmable graphics pipeline, and off-screen rendering. It is implemented with the widely used Direct3D API and applied to a well-known sample application to verify its feasibility and analyze its performance. The proposed method computes various kinds of graphics information with minimal overhead.