Title/Summary/Keyword: Video Synthesis

FPGA-Based Post-Quantum Cryptography Hardware Accelerator Design using High Level Synthesis (HLS 를 이용한 FPGA 기반 양자내성암호 하드웨어 가속기 설계)

  • Haesung Jung; Hanyoung Lee; Hanho Lee
    • Transactions on Semiconductor Engineering, v.1 no.1, pp.1-8, 2023
  • This paper presents the design and implementation of Crystals-Kyber, a next-generation post-quantum cryptographic algorithm, as a hardware accelerator on an FPGA using High-Level Synthesis (HLS). We optimized the Crystals-Kyber algorithm using various directives provided by Vitis HLS, configured the AXI interface, and designed a hardware accelerator that can be implemented on an FPGA. We then used the Vivado tool to design the IP block and implement it on the ZYNQ ZCU106 FPGA. Finally, video was recorded and H.264-compressed with Python code in the PYNQ framework, and video encryption and decryption were accelerated using the Crystals-Kyber hardware accelerator implemented on the FPGA.
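
The abstract does not reproduce the HLS source. As a point of reference, here is a minimal sketch of what a Vitis HLS kernel skeleton with the kinds of directives mentioned above (AXI master/slave interfaces, loop pipelining) typically looks like; the kernel name, buffer sizes, and the placeholder XOR transform are illustrative assumptions, not the paper's Kyber core.

```cpp
// Sketch of a Vitis HLS kernel skeleton with AXI interface and
// pipelining directives. The body is a placeholder XOR "cipher",
// NOT Kyber itself; all names and sizes here are hypothetical.
#include <cstdint>

#define BLOCK_WORDS 256

extern "C" void crypto_accel(const uint32_t *in, uint32_t *out,
                             const uint32_t *key, int n_blocks) {
#pragma HLS INTERFACE m_axi port=in  bundle=gmem0
#pragma HLS INTERFACE m_axi port=out bundle=gmem1
#pragma HLS INTERFACE m_axi port=key bundle=gmem0
#pragma HLS INTERFACE s_axilite port=n_blocks
#pragma HLS INTERFACE s_axilite port=return

    for (int b = 0; b < n_blocks; ++b) {
        for (int i = 0; i < BLOCK_WORDS; ++i) {
#pragma HLS PIPELINE II=1
            // Placeholder transform; a real Kyber core would run
            // NTT-based polynomial arithmetic here.
            out[b * BLOCK_WORDS + i] = in[b * BLOCK_WORDS + i] ^ key[i];
        }
    }
}
```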

Performance Analysis of 3D-HEVC Video Coding (3D-HEVC 비디오 부호화 성능 분석)

  • Park, Daemin; Choi, Haechul
    • Journal of Broadcast Engineering, v.19 no.5, pp.713-725, 2014
  • Multi-view and 3D video technologies for next-generation video services are being widely studied. These technologies give users a realistic experience by supporting various views. Because the acquisition and transmission of a large number of views are costly, the main challenges for multi-view and 3D video include view synthesis, video coding, and depth coding. Recently, JCT-3V (Joint Collaborative Team on 3D Video Coding Extension Development) has been developing a new standard for multi-view and 3D video. In this paper, the major tools adopted in this standard are introduced and evaluated in terms of coding efficiency and complexity. This performance analysis should be helpful for the development of a fast 3D video encoder as well as new 3D video coding algorithms.

Design and Verification of the Motion Estimation and Compensation Unit Using Full Search Algorithm (전역탐색 알고리즘을 이용한 움직임 추정 보상부 설계 및 검증)

  • Jin Goon-Seon; Kang Jin-Ah; Lim Jae-Yoon
    • Proceedings of the IEEK Conference, 2004.06b, pp.585-588, 2004
  • This paper describes the design and verification of a motion estimation and compensation unit using the full-search algorithm. The video processor is the key device in video communication systems, and motion estimation is its key module; motion estimation and compensation are core technologies for wireless video telecommunication systems and portable multimedia systems. In this design, a Verilog simulator and logic synthesis tools are used for hardware design and verification. The motion estimation and compensation unit is coded in Verilog HDL and is simulated, verified, and implemented on a Xilinx FPGA.
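
Since the abstract centers on the full-search algorithm, a minimal software sketch of full-search block matching may help fix the idea; the block size, search range, and SAD cost below are standard choices, not details taken from the paper (whose implementation is in Verilog HDL).

```cpp
// Full-search block-matching motion estimation: exhaustively test every
// candidate displacement and keep the one with minimum SAD.
#include <cstdint>
#include <cstdlib>
#include <climits>

struct MotionVector { int dx, dy; };

// Sum of absolute differences between a block in the current frame and a
// candidate block in the reference frame (8-bit grayscale, row-major).
static int sad(const uint8_t *cur, const uint8_t *ref, int stride,
               int bx, int by, int dx, int dy, int bs) {
    int sum = 0;
    for (int y = 0; y < bs; ++y)
        for (int x = 0; x < bs; ++x)
            sum += std::abs(cur[(by + y) * stride + bx + x] -
                            ref[(by + y + dy) * stride + bx + x + dx]);
    return sum;
}

// Test every displacement in [-range, +range]^2; bounds checks on the
// reference frame are omitted for brevity.
MotionVector full_search(const uint8_t *cur, const uint8_t *ref, int stride,
                         int bx, int by, int bs, int range) {
    MotionVector best{0, 0};
    int best_sad = INT_MAX;
    for (int dy = -range; dy <= range; ++dy)
        for (int dx = -range; dx <= range; ++dx) {
            int s = sad(cur, ref, stride, bx, by, dx, dy, bs);
            if (s < best_sad) { best_sad = s; best = {dx, dy}; }
        }
    return best;
}
```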

Performance Analysis on View Synthesis of 360 Videos for Omnidirectional 6DoF in MPEG-I (MPEG-I의 6DoF를 위한 360 비디오 가상시점 합성 성능 분석)

  • Kim, Hyun-Ho; Kim, Jae-Gon
    • Journal of Broadcast Engineering, v.24 no.2, pp.273-280, 2019
  • 360 video is attracting attention as immersive media with the spread of VR applications, and the MPEG-I (Immersive) Visual group is actively working on standardization to support immersive media experiences with up to six degrees of freedom (6DoF). Omnidirectional 6DoF is defined as the case that provides 6DoF within a restricted area; in this virtual space, looking at the scene from an arbitrary position requires rendering the view by synthesizing additional viewpoints, called virtual omnidirectional viewpoints. This paper presents view-synthesis results and their analysis, carried out as exploration experiments (EEs) on omnidirectional 6DoF in MPEG-I. Specifically, results are presented for various synthesis conditions, such as the distance between the input views and the virtual view to be synthesized, and the number of input views selected from the given set of 360 videos providing omnidirectional 6DoF.
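
As background for the synthesis experiments described above, the following sketch shows the core of depth-based warping for an equirectangular 360 view: each pixel is lifted to a 3D point using its depth, translated to the virtual viewpoint, and re-projected. This is a simplified illustration under a no-rotation assumption, not the MPEG-I reference software, which additionally handles blending between input views and hole filling.

```cpp
// Warp one pixel of an equirectangular (ERP) 360 view to a translated
// virtual viewpoint: pixel -> direction -> 3D point via depth ->
// translate -> direction -> pixel.
#include <cmath>

struct Vec3 { double x, y, z; };

const double PI = 3.14159265358979323846;

// Map pixel (u, v) of a W x H ERP image, whose depth value is the radial
// distance r to the scene point, into the view of a camera translated by t.
void warp_erp_pixel(int W, int H, double u, double v, double r,
                    const Vec3 &t, double &u_out, double &v_out) {
    // Pixel coordinates to spherical angles (longitude, latitude).
    double lon = (u / W) * 2.0 * PI - PI;
    double lat = PI / 2.0 - (v / H) * PI;
    // Direction scaled by depth gives the 3D point; shift by -t to
    // express it relative to the virtual camera.
    Vec3 p{ r * std::cos(lat) * std::sin(lon) - t.x,
            r * std::sin(lat)                 - t.y,
            r * std::cos(lat) * std::cos(lon) - t.z };
    // Back to spherical angles, then to pixel coordinates.
    double rr   = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    double lon2 = std::atan2(p.x, p.z);
    double lat2 = std::asin(p.y / rr);
    u_out = (lon2 + PI) / (2.0 * PI) * W;
    v_out = (PI / 2.0 - lat2) / PI * H;
}
```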

Design and Implementation of Multi-View 3D Video Player (다시점 3차원 비디오 재생 시스템 설계 및 구현)

  • Heo, Young-Su; Park, Gwang-Hoon
    • Journal of Broadcast Engineering, v.16 no.2, pp.258-273, 2011
  • This paper designs and implements a multi-view 3D video player that operates faster than existing video players. To process large volumes of multi-view image data at high speed, a structure is proposed that approaches optimal speed in a multi-processor environment by parallelizing the component modules. To exploit concurrency at the bottleneck stages, the image decoding, synthesis, and rendering modules are designed as a pipeline. For load balancing, the decoder module is divided by viewpoint, and the image synthesis module is divided geometrically over the synthesized image. In the experiments, multi-view images were correctly synthesized, and a 3D effect was perceived when watching them on a multi-view autostereoscopic display. The proposed processing structure can process large volumes of multi-view image data at high speed, using the available processors to their maximum capacity.
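
A minimal sketch of the pipeline structure the abstract describes, assuming one thread per stage (decode, synthesis, rendering) connected by blocking queues; the stage bodies are stubs and the queue type is a generic illustration, not the paper's implementation.

```cpp
// Three-stage decode -> synthesize -> render pipeline with one thread
// per stage and blocking FIFOs between stages.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Frame { int id; std::vector<unsigned char> pixels; };

// A tiny blocking FIFO connecting two pipeline stages.
template <typename T>
class Channel {
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void push(T v) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
        cv_.notify_one();
    }
    T pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        T v = std::move(q_.front()); q_.pop();
        return v;
    }
};

Frame decode(int id)        { return Frame{id, {}}; }  // stub
Frame synthesize(Frame f)   { return f; }              // stub
void  render(const Frame &) {}                         // stub

int main() {
    Channel<Frame> decoded, synthesized;
    const int n_frames = 100;

    // Each stage runs concurrently; throughput is set by the slowest stage.
    std::thread dec([&] { for (int i = 0; i < n_frames; ++i) decoded.push(decode(i)); });
    std::thread syn([&] { for (int i = 0; i < n_frames; ++i) synthesized.push(synthesize(decoded.pop())); });
    std::thread ren([&] { for (int i = 0; i < n_frames; ++i) render(synthesized.pop()); });

    dec.join(); syn.join(); ren.join();
}
```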

Hardware Implementation of Fast Multi-resolution Motion Estimator for MPEG-4 AVC (MPEG-4 AVC를 위한 고속 다해상도 움직임 추정기의 하드웨어 구현)

  • Lim Young-hun; Jeong Yong-jin
    • The Journal of Korean Institute of Communications and Information Sciences, v.29 no.11C, pp.1541-1550, 2004
  • In this paper, we propose an advanced hardware architecture for fast multi-resolution motion estimation for the video coding standards MPEG-1/2 and MPEG-4 AVC. We describe the algorithm and derive a hardware architecture that emphasizes small area for low cost and fast operation, using shared memory, a special RAM architecture, motion vectors for 4x4-pixel blocks, spiral search, and other techniques. The proposed architecture has been verified on an ARM-interfaced emulation board using an Altera Excalibur FPGA and also by ASIC synthesis using a Samsung 0.18 µm CMOS cell library. The ASIC synthesis results show that the proposed hardware can operate at 140 MHz, processing more than 1,100 QCIF or 70 4CIF video frames per second. The hardware is to be used as a core module in a complete MPEG-4 AVC video encoder ASIC for real-time multimedia applications.
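
The multi-resolution strategy named in the title can be sketched as a coarse-to-fine search: estimate a vector with a wide search on a downsampled level, then scale it up and refine it with a small search on each finer level. The pyramid layout and search ranges below are illustrative assumptions, not the paper's hardware design.

```cpp
// Coarse-to-fine multi-resolution motion estimation over an image pyramid.
#include <cstdint>
#include <cstdlib>
#include <climits>
#include <vector>

struct Level { const uint8_t *cur, *ref; int stride; };
struct MV { int dx, dy; };

static int sad(const Level &L, int bx, int by, int dx, int dy, int bs) {
    int s = 0;
    for (int y = 0; y < bs; ++y)
        for (int x = 0; x < bs; ++x)
            s += std::abs(L.cur[(by + y) * L.stride + bx + x] -
                          L.ref[(by + y + dy) * L.stride + bx + x + dx]);
    return s;
}

// levels[0] is the coarsest; each finer level doubles the resolution.
MV multires_search(const std::vector<Level> &levels, int bx, int by,
                   int bs, int coarse_range, int refine_range) {
    MV mv{0, 0};
    for (size_t l = 0; l < levels.size(); ++l) {
        int range = (l == 0) ? coarse_range : refine_range;
        // The block position scales with the level's resolution.
        int sbx = bx >> (levels.size() - 1 - l);
        int sby = by >> (levels.size() - 1 - l);
        // Search a window centered on the vector predicted so far.
        MV best = mv; int best_sad = INT_MAX;
        for (int dy = mv.dy - range; dy <= mv.dy + range; ++dy)
            for (int dx = mv.dx - range; dx <= mv.dx + range; ++dx) {
                int s = sad(levels[l], sbx, sby, dx, dy, bs);
                if (s < best_sad) { best_sad = s; best = {dx, dy}; }
            }
        mv = best;
        if (l + 1 < levels.size()) { mv.dx *= 2; mv.dy *= 2; }  // upscale
    }
    return mv;
}
```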

Real-time Video Matting for Mobile Device (모바일 환경에서 실시간 영상 전경 추출 연구)

  • Yoon, Jong-Chul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.11 no.5, pp.487-492, 2018
  • Recently, various image-processing applications have been ported to the mobile environment as video shooting on mobile devices has become widespread. However, extracting the image foreground, one of the most important steps in image synthesis, is difficult because it requires complex computation. In this paper, we propose a video synthesis technique that divides video captured by mobile devices into foreground and background and composites the foreground onto target images in real time. Considering the characteristics of mobile shooting, our system can automatically extract the foreground of input video that contains only weak camera motion. Using SIMD- and GPGPU-based acceleration algorithms, SD-quality video can be processed on a mobile device in real time.
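
A minimal sketch of the foreground/background split and composite the abstract describes, assuming a crude static background model and a fixed threshold; the paper's actual matting method and its SIMD/GPGPU acceleration paths are not shown.

```cpp
// Per-pixel difference against a background model yields a binary matte,
// which selects between the source frame and the target image.
#include <cstdint>
#include <cstdlib>
#include <vector>

// out[i] = src[i] where the matte says "foreground", else target[i].
void matte_and_composite(const std::vector<uint8_t> &src,     // current frame (gray)
                         const std::vector<uint8_t> &bg,      // background model
                         const std::vector<uint8_t> &target,  // image to paste onto
                         std::vector<uint8_t> &out, int thresh) {
    out.resize(src.size());
    for (size_t i = 0; i < src.size(); ++i) {
        bool fg = std::abs(int(src[i]) - int(bg[i])) > thresh;  // crude matte
        out[i] = fg ? src[i] : target[i];
    }
}
```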

Study on Compositing Editing of 360˚ VR Actual Video and 3D Computer Graphic Video (360˚ VR 실사 영상과 3D Computer Graphic 영상 합성 편집에 관한 연구)

  • Lee, Lang-Goo; Chung, Jean-Hun
    • Journal of Digital Convergence, v.17 no.4, pp.255-260, 2019
  • This study concerns the efficient compositing of 360° video and 3D graphics. First, video filmed with a binocular integral-type 360° camera was stitched, and the location values of the camera and objects were extracted. The extracted location data were then brought into a 3D program to create 3D objects, and methods for natural compositing were investigated. As a result, rendering factors and a rendering method for the natural compositing of 360° video and 3D graphics were derived. First, the rendering factors are the 3D objects' location and material quality, lighting, and shadow. Second, as for the rendering method, the necessity of a rendering method based on the actual video footage was identified. The compositing methods presented through this study's process and results are expected to be helpful for the research and production of 360° video and VR video content.
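
The final compositing step such a workflow feeds into is the standard "over" operator, sketched below for premultiplied 8-bit RGBA; this is a generic illustration of layering a rendered CG element onto live-action footage, not a procedure taken from the paper.

```cpp
// Alpha compositing with the "over" operator on premultiplied RGBA.
#include <cstdint>

struct RGBA { uint8_t r, g, b, a; };  // premultiplied alpha

// out = cg + (1 - cg.alpha) * video, per channel.
RGBA over(RGBA cg, RGBA video) {
    auto blend = [&](uint8_t f, uint8_t b) {
        return uint8_t(f + (255 - cg.a) * b / 255);
    };
    return { blend(cg.r, video.r), blend(cg.g, video.g),
             blend(cg.b, video.b), blend(cg.a, video.a) };
}
```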

H.264 Encoding Technique of Multi-view Video expressed by Layered Depth Image (계층적 깊이 영상으로 표현된 다시점 비디오에 대한 H.264 부호화 기술)

  • Shin, Jong-Hong; Jee, Inn-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.14 no.2, pp.43-51, 2014
  • Because of its huge amount of data, multi-view video that includes depth images requires a new compression and encoding technique for storage and transmission. The layered depth image is an efficient representation of multi-view video data: it builds a single data structure by synthesizing the multi-view color and depth images. This paper suggests an efficient way to compress such content by constructing the layered depth image representation with 3D warping and applying video compression coding to it; specifically, it proposes an enhanced compression method that combines the layered depth image representation with H.264/AVC video coding. Experimental results confirm high compression performance and good quality of the reconstructed images.
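
A minimal sketch of the layered depth image data structure the abstract refers to: each pixel location of a reference view stores a variable-length list of color-plus-depth samples gathered by warping all input views into that view. The field layout is an assumption for illustration, not the paper's exact format.

```cpp
// Layered depth image (LDI): per-pixel lists of color+depth samples.
#include <cstdint>
#include <vector>

struct LdiSample {
    uint8_t r, g, b;   // color from one of the source views
    float   depth;     // depth along the reference-view ray
};

struct LayeredDepthImage {
    int width, height;
    // pixels[y * width + x] holds every sample that warps to (x, y),
    // typically kept sorted front-to-back for rendering.
    std::vector<std::vector<LdiSample>> pixels;

    LayeredDepthImage(int w, int h)
        : width(w), height(h), pixels(size_t(w) * h) {}

    void add_sample(int x, int y, LdiSample s) {
        pixels[size_t(y) * width + x].push_back(s);
    }
};
```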

Layered Depth Image Representation and H.264 Encoding of Multi-view Video for Free Viewpoint TV (자유시점 TV를 위한 다시점 비디오의 계층적 깊이 영상 표현과 H.264 부호화)

  • Shin, Jong Hong
    • Journal of Korea Society of Digital Industry and Information Management, v.7 no.2, pp.91-100, 2011
  • Free viewpoint TV can provide multi-angle view point images for viewer needs. In the real world, But all angle view point images can not be captured by camera. Only a few any angle view point images are captured by each camera. Group of the captured images is called multi-view image. Therefore free viewpoint TV wants to production of virtual sub angle view point images form captured any angle view point images. Interpolation methods are known of this problem general solution. To product interpolated view point image of correct angle need to depth image of multi-view image. Unfortunately, multi-view video including depth image is necessary to develop a new compression encoding technique for storage and transmission because of a huge amount of data. Layered depth image is an efficient representation method of multi-view video data. This method makes a data structure that is synthesis of multi-view color and depth image. This paper proposed enhanced compression method using layered depth image representation and H.264/AVC video coding technology. In experimental results, confirmed high compression performance and good quality reconstructed image.