• Title/Summary/Keyword: Video Generation


Fast Generation of Digital Video Holograms Using Multiple PCs (다수의 PC를 이용한 디지털 비디오 홀로그램의 고속 생성)

  • Park, Hanhoon;Kim, Changseob;Park, Jong-Il
    • Journal of Broadcast Engineering, v.22 no.4, pp.509-518, 2017
  • High-resolution digital holograms can be generated quickly by using a PC cluster that is based on a server-client architecture and composed of several GPU-equipped PCs. However, the data transmission time between PCs becomes a major obstacle to fast generation of video holograms because it increases linearly with the number of frames. To resolve this problem, this paper proposes a multi-threading-based method. Hologram generation in each client PC consists of three processes: acquisition of light sources, CGH computation on the GPUs, and transmission of the result to the server PC. Unlike the previous method, which executes these processes sequentially, the proposed method executes them in parallel using multi-threading and thus significantly reduces the proportion of data transmission time in the total hologram generation time. Experiments confirmed that the total generation time of a high-resolution video hologram with 150 frames can be reduced by about 30%.
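
A minimal sketch of the per-frame pipelining idea follows, assuming hypothetical placeholder functions `acquire_light_sources`, `compute_cgh_on_gpu`, and `send_to_server` (not the paper's code); a real client would run the CGH kernel on the GPU.

```python
# Sketch: overlap light-source acquisition, GPU CGH computation, and transmission
# per frame with a thread pipeline, so transmission no longer serializes the loop.
import threading, queue

NUM_FRAMES = 150

def acquire_light_sources(frame_idx):
    # placeholder: load point-cloud light sources for this frame
    return {"frame": frame_idx, "points": []}

def compute_cgh_on_gpu(light_sources):
    # placeholder: run the CGH kernel and return the fringe pattern
    return {"frame": light_sources["frame"], "fringe": b""}

def send_to_server(hologram):
    # placeholder: transmit the hologram fringe pattern to the server PC
    return hologram

def stage(worker, in_q, out_q):
    while True:
        item = in_q.get()
        if item is None:            # sentinel: propagate shutdown downstream
            if out_q is not None:
                out_q.put(None)
            break
        result = worker(item)
        if out_q is not None:
            out_q.put(result)

q_acq, q_cgh = queue.Queue(maxsize=4), queue.Queue(maxsize=4)

threads = [
    threading.Thread(target=stage, args=(compute_cgh_on_gpu, q_acq, q_cgh)),
    threading.Thread(target=stage, args=(send_to_server, q_cgh, None)),
]
for t in threads:
    t.start()

# Acquisition runs in the main thread while CGH and transmission run concurrently.
for i in range(NUM_FRAMES):
    q_acq.put(acquire_light_sources(i))
q_acq.put(None)                     # signal end of stream

for t in threads:
    t.join()
```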

Multi-stream Generation Method for Intra-media Synchronization of Very Low Bit Rate Video (초저속 고압축 비디오의 미디어내 동기화를 위한 멀티 스트림 생성 기법)

  • 강경원;류권열;권기룡;문광석;김문수
    • Journal of the Institute of Convergence Signal Processing, v.2 no.3, pp.9-15, 2001
  • Very low bit rate video coding uses inter-picture coding to achieve high compression. Because inter-picture coding predicts each frame from previous frames, any packet loss during transmission degrades the image quality. In this paper, we propose a multi-stream generation method for intra-media synchronization of very low bit rate video, based on TCP for reliable transmission. The proposed approach performs reliable transmission over a TCP-based protocol and splits the video into multiple streams to enhance the robustness of delivery and withstand network jitter. Moreover, the client bandwidth is fully utilized in a highly efficient way.
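
A minimal sketch of the multi-stream idea, assuming an illustrative server address and framing (not the paper's protocol): encoded frames are interleaved round-robin across several TCP connections so that a stall on one connection does not block the whole sequence.

```python
# Sketch: send encoded frames over multiple TCP connections in round-robin order.
import socket, struct

SERVER = ("127.0.0.1", 5000)    # assumed server address for illustration
NUM_STREAMS = 4

def send_multi_stream(encoded_frames):
    socks = [socket.create_connection(SERVER) for _ in range(NUM_STREAMS)]
    try:
        for idx, frame in enumerate(encoded_frames):
            s = socks[idx % NUM_STREAMS]                  # round-robin interleaving
            header = struct.pack("!II", idx, len(frame))  # frame index + length
            s.sendall(header + frame)                     # TCP gives reliable, ordered delivery
    finally:
        for s in socks:
            s.close()
```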


AnoVid: A Deep Neural Network-based Tool for Video Annotation (AnoVid: 비디오 주석을 위한 심층 신경망 기반의 도구)

  • Hwang, Jisu;Kim, Incheol
    • Journal of Korea Multimedia Society, v.23 no.8, pp.986-1005, 2020
  • In this paper, we propose AnoVid, an automated video annotation tool based on deep neural networks that automatically generates various metadata for each scene or shot in a long drama video containing rich elements. To this end, a novel metadata schema for drama video is designed. Based on this schema, the AnoVid video annotation tool employs a total of six deep neural network models for object detection, place recognition, time-zone recognition, person recognition, activity detection, and description generation. Using these models, AnoVid can generate rich video annotation data. In addition, AnoVid not only automatically generates a JSON-format video annotation data file, but also provides various visualization facilities for checking the video content analysis results. Through experiments using a real drama video, "Misaeng", we show the practical effectiveness and performance of the proposed video annotation tool, AnoVid.
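
A minimal sketch of assembling per-shot annotations from several model outputs into a JSON file, in the spirit of a multi-model annotation tool; the field names and model stubs are assumptions, not AnoVid's actual schema or API.

```python
# Sketch: combine outputs of six per-shot models into one JSON annotation file.
import json

# Placeholder model stubs so the sketch runs end to end.
detect_objects = recognize_place = recognize_time_zone = lambda frames: []
recognize_persons = detect_activities = lambda frames: []
generate_description = lambda frames: ""

def annotate_shot(shot_id, frames):
    return {
        "shot_id": shot_id,
        "objects": detect_objects(frames),          # object detection (stub)
        "place": recognize_place(frames),           # place recognition (stub)
        "time_zone": recognize_time_zone(frames),   # time-zone recognition (stub)
        "persons": recognize_persons(frames),       # person recognition (stub)
        "activities": detect_activities(frames),    # activity detection (stub)
        "description": generate_description(frames) # description generation (stub)
    }

def write_annotations(shots, path="annotations.json"):
    data = [annotate_shot(sid, frames) for sid, frames in shots]
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False, indent=2)
```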

Video Streaming Receiver with Token Bucket Automatic Parameter Setting Scheme by Video Information File needing Successful Acknowledge Character (성공적인 확인응답이 필요한 비디오 정보 파일에 의한 토큰버킷 자동 파라메타 설정 기법을 가진 비디오 스트리밍 수신기)

  • Lee, Hyun-no;Kim, Dong-hoi;Nam, Boo-hee;Park, Seung-young
    • The Journal of Korean Institute of Communications and Information Sciences, v.40 no.10, pp.1976-1985, 2015
  • The number of packets in the playout buffer of a video streaming receiver changes with network conditions and can be saturated or exhausted by delay and jitter. In particular, if the amount of incoming video traffic exceeds the maximum allowed playout buffer size, buffer overflow can occur, which degrades the video image and causes discontinuous playout due to skipping. Also, if incoming packets are delayed by network congestion, buffer underflow causes the video to stop for rebuffering. To solve these problems, this paper proposes a video streaming receiver with a token bucket scheme that automatically establishes the important parameters, namely the token generation rate r and the maximum bucket capacity c, by adapting to the pattern of the video packets. Simulation results using network simulator-2 (NS-2) and the joint scalable video model (JSVM) show that the proposed token bucket scheme with automatic parameter setting outperforms the existing token bucket scheme with manual parameter setting in terms of the number of overflow and underflow occurrences, packet loss rate, and peak signal-to-noise ratio (PSNR) on three test video sequences.
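
A minimal sketch of a token bucket whose rate r and capacity c are derived from per-frame sizes listed in a video information file rather than set by hand; the derivation rule here is an illustrative assumption, not the paper's exact formula.

```python
# Sketch: token bucket with parameters derived automatically from video statistics.
import time

def auto_parameters(frame_sizes_bytes, fps):
    mean_rate = sum(frame_sizes_bytes) / len(frame_sizes_bytes) * fps  # bytes/s
    r = mean_rate                   # token generation rate ~ mean video bit rate
    c = max(frame_sizes_bytes)      # bucket depth covers the largest frame burst
    return r, c

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def admit(self, packet_size):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size  # enough tokens: pass packet to the playout buffer
            return True
        return False                    # otherwise hold the packet (shape the flow)
```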

Improved Side Information Generation using Field Coding for Wyner-Ziv Codec (Wyner-Ziv 부호화기를 위한 필드 부호화 기반 개선된 보조정보 생성)

  • Han, Chan-Hee;Jeon, Yeong-Il;Lee, Si-Woong
    • The Journal of the Korea Contents Association, v.9 no.11, pp.10-17, 2009
  • Wyner-Ziv video coding is a new video compression paradigm based on the distributed source coding theory of Slepian-Wolf and Wyner-Ziv. Wyner-Ziv coding enables a light-encoder/heavy-decoder structure by shifting complex modules, including motion estimation/compensation, to the decoder. Instead of performing the complicated motion estimation process in the encoder, the Wyner-Ziv decoder performs motion estimation to generate the side information that serves as the predicted signal of the Wyner-Ziv frame. The quality of the side information strongly affects the overall coding performance, since the bit rate of Wyner-Ziv coding depends directly on the side information. In this paper, an improved side information generation method using field coding is proposed. In the proposed method, top fields are coded with the existing SI generation method, and bottom fields are coded with a new SI generation method that uses the information of the top fields. Simulation results show that the proposed method improves the quality of the side information and the rate-distortion performance compared to the conventional method.
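
A minimal sketch of field-based side information (SI) generation: frames are split into top and bottom fields, and the SI for the bottom field is predicted from the already decoded top field of the same frame plus neighbouring key frames. The simple averaging used here is only a stand-in for the motion-compensated interpolation a real decoder would apply.

```python
# Sketch: predict bottom-field side information from the co-located top field
# and the bottom fields of the neighbouring key frames.
import numpy as np

def split_fields(frame):
    return frame[0::2, :], frame[1::2, :]          # top field, bottom field

def side_info_bottom(top_field_decoded, prev_key, next_key):
    _, prev_bottom = split_fields(prev_key)
    _, next_bottom = split_fields(next_key)
    temporal = (prev_bottom.astype(np.float32) + next_bottom) / 2   # temporal average
    spatial = top_field_decoded.astype(np.float32)                  # co-located top-field lines
    return ((temporal + spatial) / 2).astype(np.uint8)
```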

Cross Layer Optimization for Scalable Video Streaming (효율적인 Scalable Video Streaming을 위한 Cross Layer Optimization)

  • Yoon, Min-Young;Cho, Hee-Young;Suh, Doug-Young
    • Proceedings of the KIEE Conference, 2005.10b, pp.352-354, 2005
  • As studies on 4th-generation mobile telecommunications progress, the importance of cross-layer design is increasing. However, work so far has focused on a coordination model between only the MAC layer and the PHY layer, and it needs to be extended to the IP layer and upper layers. In this paper, we introduce a cross-layer optimization that can be used to transmit video data effectively by managing resources across layers. It provides a more adaptive way to solve the QoS problem than a single-layer approach.
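
A minimal sketch of one cross-layer adaptation loop, under assumed layer bit rates that are not taken from the paper: the lower layers report the currently available bandwidth and the application layer picks how many scalable video layers to send.

```python
# Sketch: choose the number of scalable video layers from a lower-layer bandwidth report.
LAYER_BITRATES_KBPS = [128, 256, 512]    # base layer + two enhancement layers (assumed)

def select_layers(available_kbps):
    total, selected = 0, 0
    for rate in LAYER_BITRATES_KBPS:
        if total + rate > available_kbps:
            break
        total += rate
        selected += 1
    return max(selected, 1)              # always send at least the base layer

# e.g. a PHY/MAC report of 700 kbps allows the base layer plus one enhancement layer
print(select_layers(700))                # -> 2
```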


A Study on the 3D Video Generation Technique using Multi-view and Depth Camera (다시점 카메라 및 depth 카메라를 이용한 3 차원 비디오 생성 기술 연구)

  • Um, Gi-Mun;Chang, Eun-Young;Hur, Nam-Ho;Lee, Soo-In
    • Proceedings of the IEEK Conference, 2005.11a, pp.549-552, 2005
  • This paper presents a 3D video content generation technique and system that uses multi-view images and a depth map. The proposed system uses 3-view video and depth inputs from a 3-view video camera and a depth camera for 3D video content production. Each camera is calibrated using Tsai's calibration method, and its parameters are used to rectify the multi-view images for multi-view stereo matching. The depth and disparity maps for the center view are obtained from both the depth camera and the multi-view stereo matching technique, and these two maps are fused to obtain a more reliable depth map. The resulting depth map is used not only to insert a virtual object into the scene based on depth keying, but also to synthesize virtual viewpoint images. Some preliminary test results are given to show the functionality of the proposed technique.
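
A minimal sketch of the depth-fusion step: the depth-camera map is combined with a stereo-matching disparity map (converted to depth) using a confidence-weighted average. The conversion constants and the weighting rule are illustrative assumptions, not the paper's fusion method.

```python
# Sketch: fuse an active depth-camera map with a stereo-derived depth map.
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    disparity = np.maximum(disparity, 1e-3)      # avoid division by zero
    return focal_px * baseline_m / disparity     # pinhole depth-disparity relation

def fuse_depth(depth_cam, depth_stereo, w_cam=0.6):
    # Weight the active sensor a little higher; fall back to stereo where the
    # depth camera reports no measurement (encoded here as zero).
    fused = w_cam * depth_cam + (1.0 - w_cam) * depth_stereo
    return np.where(depth_cam > 0, fused, depth_stereo)
```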


Real-Time Panorama Video Generation System using Multiple Networked Cameras (다중 네트워크 카메라 기반 실시간 파노라마 동영상 생성 시스템)

  • Choi, KyungYoon;Jun, KyungKoo
    • Journal of KIISE, v.42 no.8, pp.990-997, 2015
  • Panoramic image creation has been extensively studied. Existing methods use customized hardware or apply post-processing to stitch images seamlessly, which increases either cost or complexity. In addition, images can only be stitched under certain conditions, such as the existence of matching feature points between the images. This paper proposes a low-cost and easy-to-use system that produces real-time panoramic video. We use an off-the-shelf embedded platform to capture multiple images, which are then transmitted to a server in compressed form and merged into a single panoramic video. Finally, we analyze the performance of the implemented system by measuring the time needed to successfully create the panoramic image.
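
A minimal sketch of the server-side stitching step using OpenCV's stitcher; frame transport and synchronization are omitted, and the byte buffers are assumed to arrive one per camera for the same capture instant (this is an illustration, not the paper's implementation).

```python
# Sketch: decode compressed frames from the networked cameras and stitch them.
import cv2
import numpy as np

def stitch_frame_set(jpeg_buffers):
    frames = [cv2.imdecode(np.frombuffer(buf, np.uint8), cv2.IMREAD_COLOR)
              for buf in jpeg_buffers]
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)
    if status != cv2.Stitcher_OK:    # e.g. too few matching feature points
        return None
    return panorama
```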

A Study for Depth-map Generation using Vanishing Point (소실점을 이용한 Depth-map 생성에 관한 연구)

  • Kim, Jong-Chan;Ban, Kyeong-Jin;Kim, Chee-Yong
    • Journal of Korea Multimedia Society, v.14 no.2, pp.329-338, 2011
  • Recent augmented reality applications demand more realistic multimedia data that mixes various media. Technology that combines existing media data with audio and video dominates the media industry. In particular, there is a growing need for augmented reality, 3D content, and real-time interaction systems, which serve as communication methods and visualization tools on the Internet. Existing services do not generate the depth values needed to recover 3D spatial structure and give a sense of depth to existing content. Therefore, research on effective depth-map generation from 2D video is required. Complementing the shortcomings of existing depth-map generation methods based on 2D video, this paper proposes an enhanced depth-map generation method that defines the depth direction with respect to the vanishing-point location in a video, which no existing algorithm has defined.
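
A minimal sketch of a vanishing-point-based depth map: pixels near the vanishing point are treated as far away and depth increases with distance from it. The linear gradient used here is a simplification for illustration, not the paper's exact depth model.

```python
# Sketch: build a depth map from a vanishing-point location in the image.
import numpy as np

def depth_from_vanishing_point(height, width, vp_xy):
    vx, vy = vp_xy
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.sqrt((xs - vx) ** 2 + (ys - vy) ** 2)     # distance to the vanishing point
    return (dist / dist.max() * 255).astype(np.uint8)   # 0 = farthest, 255 = nearest
```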