• Title/Summary/Keyword: video resolution

Implementation of AR Remote Rendering Techniques for Real-time Volumetric 3D Video

  • Lee, Daehyeon; Lee, Munyong; Lee, Sang-ha; Lee, Jaehyun; Kwon, Soonchul
    • International Journal of Internet, Broadcasting and Communication / v.12 no.2 / pp.90-97 / 2020
  • Recently, with the growth of the mixed reality industrial infrastructure, related convergence research has been proposed. Real-time mixed reality services such as remote video conferencing require research on real-time acquisition, processing, and transfer methods. This paper aims to implement an AR remote rendering method for volumetric 3D video data. We propose and implement two modules: a module that parses the volumetric 3D video for a game engine, and a server rendering module. The experiment showed that volumetric 3D video sequence data of about 15 MB was compressed by 6-7%. The remote rendering module streamed at 27 fps at a resolution of 1200 × 1200. The results of this paper are expected to be applied to AR cloud services.
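
As a rough illustration of the acquisition-process-transfer pipeline summarized in the abstract above (not the authors' implementation), the Python sketch below loops over a volumetric sequence, parses each frame, renders a view at 1200 × 1200, compresses it, and paces output at 27 fps. The functions parse_volumetric_frame and render_view are hypothetical stand-ins.

    # Illustrative sketch of an acquisition-process-transfer loop for remote rendering.
    # All stage functions are hypothetical stand-ins, not the paper's actual modules.
    import time
    import zlib

    TARGET_FPS = 27          # streaming rate reported in the abstract
    WIDTH = HEIGHT = 1200    # resolution reported in the abstract

    def parse_volumetric_frame(raw_bytes):
        """Stand-in for the parsing module that feeds a game engine."""
        return raw_bytes     # a real parser would build meshes/point clouds here

    def render_view(frame, width, height):
        """Stand-in for the server rendering module (returns a raw image buffer)."""
        return bytes(width * height * 3)   # placeholder RGB buffer

    def stream(sequence, send):
        frame_interval = 1.0 / TARGET_FPS
        for raw in sequence:
            start = time.time()
            frame = parse_volumetric_frame(raw)
            image = render_view(frame, WIDTH, HEIGHT)
            send(zlib.compress(image))     # compress before transfer
            time.sleep(max(0.0, frame_interval - (time.time() - start)))

    if __name__ == "__main__":
        stream([b"frame-0", b"frame-1"], send=lambda payload: None)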

Development of Video Transfer System using LTE/WiFi for Small UAV (LTE/WiFi 기반 소형 무인기용 영상 전송 시스템 개발)

  • Bae, Joong-Won; Lee, Sang-Jeong
    • Journal of Aerospace System Engineering / v.13 no.2 / pp.10-18 / 2019
  • In this paper, we present the results of a developed LTE/Wi-Fi-based video transmission system that can be applied to small unmanned aerial vehicles of 25 kg or less. The developed system comprises an airborne datalink terminal and a ground datalink terminal, and uses LTE and Wi-Fi wireless data communication technologies to transmit video at HD resolution or higher (720p/30fps, 1080p/30fps) captured by a small UAV. The airborne device is designed to transmit real-time streaming video efficiently by incorporating an H.264 video processing board. Ground tests and evaluation showed that the developed system can transmit real-time video at close range in non-line-of-sight areas.
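
Purely as an illustration of a dual-link (LTE/Wi-Fi) datalink sender, and not the developed terminal's software, the sketch below forwards H.264 NAL units over UDP to whichever of two hypothetical ground-terminal endpoints a simple link-selection rule picks; the endpoints and the selection policy are assumptions.

    # Illustrative dual-link sender sketch; endpoints and policy are hypothetical.
    import socket

    LINKS = {
        "wifi": ("127.0.0.1", 5004),   # loopback placeholders standing in for
        "lte":  ("127.0.0.1", 5006),   # the ground datalink terminal endpoints
    }

    def choose_link(wifi_ok: bool) -> str:
        # Simple policy: prefer Wi-Fi at close range, fall back to LTE otherwise.
        return "wifi" if wifi_ok else "lte"

    def send_stream(nal_units, wifi_ok=True):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        target = LINKS[choose_link(wifi_ok)]
        for nal in nal_units:          # H.264 NAL units from the processing board
            sock.sendto(nal, target)
        sock.close()

    if __name__ == "__main__":
        send_stream([b"\x00\x00\x00\x01" + b"demo-nal"], wifi_ok=False)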

The research of transmission delay reduction for selectively encrypted video transmission scheme on real-time video streaming (실시간 비디오 스트리밍 서비스를 위한 선별적 비디오 암호화 방법의 전송지연 저감 연구)

  • Yoon, Yohann; Go, Kyungmin
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.4 / pp.581-587 / 2021
  • Real-time video streaming for multimedia content delivery and remote conferencing services is one of the technologies most sensitive to data transmission delay. Recently, because of COVID-19, real-time streaming content for these services, such as personal broadcasting and remote school classes, has increased significantly. To support these services, there is a growing emphasis on both low transmission delay and secure content delivery. We therefore propose a packet aggregation algorithm that reduces the transmission delay of selectively encrypted video for real-time video streaming services. With the proposed algorithm, the selectively encrypted video framework can control the number of MPEG-2 TS packets aggregated for low-latency transmission while taking packet priorities into account. Evaluation results on a testbed show that applying the proposed algorithm to the video framework reduces transmission delay by approximately 11% for both high- and low-resolution video.
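
The sketch below illustrates one way a priority-aware aggregation step could work for 188-byte MPEG-2 TS packets: fill a datagram up to a typical MTU, but flush immediately when a high-priority (e.g., selectively encrypted) packet arrives. The flush policy and priority encoding are assumptions for illustration, not the authors' exact algorithm.

    # Minimal sketch of priority-aware aggregation of MPEG-2 TS packets.
    TS_PACKET_SIZE = 188
    MAX_TS_PER_DATAGRAM = 7   # 7 * 188 = 1316 bytes, fits a typical 1500-byte MTU

    def aggregate(ts_packets, send):
        """ts_packets: iterable of (priority, bytes) pairs; send: callable."""
        buffer = []
        for priority, pkt in ts_packets:
            assert len(pkt) == TS_PACKET_SIZE
            buffer.append(pkt)
            # Assumed policy: priority 0 (e.g., selectively encrypted packets)
            # is flushed at once to keep its delay low; others wait until the
            # datagram is full.
            if priority == 0 or len(buffer) == MAX_TS_PER_DATAGRAM:
                send(b"".join(buffer))
                buffer.clear()
        if buffer:                 # flush any remaining packets
            send(b"".join(buffer))

    if __name__ == "__main__":
        demo = [(1, bytes(188)), (1, bytes(188)), (0, bytes(188))]
        aggregate(demo, send=lambda datagram: print(len(datagram), "bytes sent"))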

A New Coding Technique for Scalable Video Service of Digital Hologram (디지털 홀로그램의 적응적 비디오 서비스를 위한 코딩 기법)

  • Seo, Young-Ho; Bea, Yoon-Jin; Lee, Yoon-Hyuk; Choi, Hyun-Jun; Yoo, Ji-Sang; Kim, Dong-Wook
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.9 / pp.92-103 / 2012
  • In this paper, we propose a new coding algorithm for scalable holographic video service in various decoding environments. The proposed algorithm consists of hologram-based resolution-scalable coding (HRS) and light-source-based SNR-scalable coding (LSS), which are distinguished by how the hologram is generated and captured. HRS is a scalable coding technique for optically captured holograms, while LSS operates on the light sources before the hologram is generated. HRS provides a scalable service of 8 steps with compression ratios from 1:1 to 100:1 for a 1,024 × 1,024 hologram. LSS also provides various levels of service depending on the number of light-source divisions, using lossless compression. The proposed techniques enable scalable holographic video service adapted to displays of various resolutions, the computational power of the receiving equipment, and the network bandwidth.
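
As a loose illustration of resolution-scalable layering (not the HRS/LSS algorithms themselves), the sketch below splits a 1,024 × 1,024 hologram into eight resolution layers by simple decimation, so a receiver can pick the layer matching its display resolution and computing power; the layering rule is an assumption for illustration.

    # Rough illustration of resolution-scalable layering; not the paper's method.
    import numpy as np

    def build_resolution_layers(hologram: np.ndarray, steps: int = 8):
        """Return `steps` layers, from the coarsest thumbnail to full resolution."""
        layers = [hologram]
        for _ in range(steps - 1):
            layers.append(layers[-1][::2, ::2])   # simple 2x decimation per step
        return layers[::-1]                        # coarsest first

    if __name__ == "__main__":
        holo = np.random.rand(1024, 1024).astype(np.float32)
        for i, layer in enumerate(build_resolution_layers(holo)):
            ratio = holo.size / layer.size
            print(f"layer {i}: {layer.shape}, data ratio vs. full ~ {ratio:.0f}:1")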

Fast Generation of Digital Video Holograms Using Multiple PCs (다수의 PC를 이용한 디지털 비디오 홀로그램의 고속 생성)

  • Park, Hanhoon; Kim, Changseob; Park, Jong-Il
    • Journal of Broadcast Engineering / v.22 no.4 / pp.509-518 / 2017
  • High-resolution digital holograms can be generated quickly by using a PC cluster based on a server-client architecture and composed of several GPU-equipped PCs. However, the data transmission time between PCs becomes a major obstacle to fast generation of video holograms because it increases linearly with the number of frames. To resolve this problem, this paper proposes a multi-threading-based method. Hologram generation on each client PC consists of three processes: acquisition of light sources, CGH computation on GPUs, and transmission of the result to the server PC. Unlike the previous method, which executes these processes sequentially, the proposed method executes them in parallel through multi-threading and thus can significantly reduce the proportion of data transmission time in the total hologram generation time. Experiments confirmed that the total generation time of a high-resolution video hologram with 150 frames can be reduced by about 30%.
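
To make the overlap idea concrete, here is a minimal threading sketch (under stated assumptions, not the paper's GPU code) in which light-source acquisition, CGH computation, and transmission run as concurrent stages connected by queues, so transmission no longer adds linearly to the total time; the stage functions are hypothetical stand-ins.

    # Minimal sketch of overlapping the three per-frame stages with threads.
    import queue
    import threading

    def pipeline(num_frames, acquire, compute_cgh, transmit):
        q_src = queue.Queue(maxsize=2)   # acquired light sources
        q_out = queue.Queue(maxsize=2)   # computed holograms

        def producer():
            for i in range(num_frames):
                q_src.put(acquire(i))
            q_src.put(None)              # sentinel: no more frames

        def worker():
            while (src := q_src.get()) is not None:
                q_out.put(compute_cgh(src))
            q_out.put(None)

        def sender():
            while (holo := q_out.get()) is not None:
                transmit(holo)

        threads = [threading.Thread(target=t) for t in (producer, worker, sender)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    if __name__ == "__main__":
        pipeline(
            num_frames=150,  # frame count used in the paper's experiment
            acquire=lambda i: f"sources-{i}",
            compute_cgh=lambda s: f"hologram-from-{s}",
            transmit=lambda h: None,
        )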

VirtualDub as a Useful Program for Video Recording in Real-time TEM Analysis (실시간 TEM 분석에 유용한 영상 기록 프로그램, VirtualDub)

  • Kim, Jin-Gyu; Oh, Sang-Ho; Song, Kyung; Yoo, Seung-Jo; Kim, Young-Min
    • Applied Microscopy / v.40 no.1 / pp.47-51 / 2010
  • The capability of real-time observation in TEM is quite useful for studying the dynamic phenomena of materials under variable ambient conditions. In performing such experiments, the choice of video recording program is an important factor in obtaining high-quality movie streaming. Windows Movie Maker (WMM) is generally recommended as the default video recording program when using the "DV Capture" function in DigitalMicrograph™ (DM) software. However, the image quality often does not satisfy the requirements for high-resolution microscopic analysis, since severe information loss occurs during the conversion process. VirtualDub is highly recommended as a good candidate for overcoming this problem, since information loss can be minimized throughout the streaming process. In this report, we demonstrate how well VirtualDub works for high-resolution movie recording. A quantitative comparison of the information quality between images recorded by each program, WMM and VirtualDub, was carried out based on histogram analysis. As a result, the image recorded by VirtualDub was improved by ~13% in brightness and ~122% in contrast compared with the image obtained by WMM under the same imaging conditions. Remarkably, the gray gradation (i.e., the amount of information) was up to ~115% wider than that of the WMM result.
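
The sketch below shows the kind of histogram-based comparison described above, computing mean brightness, RMS contrast, and the span of occupied gray levels for frames recorded by two programs; the exact metric definitions are assumptions for illustration and may differ from the ones used in the report.

    # Illustrative histogram comparison of two recorded grayscale frames.
    import numpy as np

    def histogram_stats(gray_frame: np.ndarray):
        hist, _ = np.histogram(gray_frame, bins=256, range=(0, 255))
        occupied = np.flatnonzero(hist)             # gray levels actually used
        return {
            "brightness": float(gray_frame.mean()),
            "contrast": float(gray_frame.std()),
            "gradation": int(occupied[-1] - occupied[0] + 1) if occupied.size else 0,
        }

    def relative_gain(virtualdub_frame, wmm_frame):
        a, b = histogram_stats(virtualdub_frame), histogram_stats(wmm_frame)
        return {k: 100.0 * (a[k] - b[k]) / b[k] for k in a if b[k]}

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        wmm = rng.normal(100, 10, (512, 512)).clip(0, 255)    # synthetic frames,
        vdub = rng.normal(113, 22, (512, 512)).clip(0, 255)   # not real TEM data
        print(relative_gain(vdub, wmm))  # % gain in brightness/contrast/gradation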

SPECIFIC ANALYSIS OF WEB CAMERA AND HIGH RESOLUTION PLANETARY IMAGING (웹 카메라의 특성 분석 및 고해상도 행성촬영)

  • Park, Young-Sik; Lee, Dong-Ju; Jin, Ho; Han, Won-Yong; Park, Jang-Hyun
    • Journal of Astronomy and Space Sciences / v.23 no.4 / pp.453-464 / 2006
  • Web cameras are usually used for video communication between PCs; they have a small sensing area and cannot take long exposures, so they are generally insufficient for astronomical applications. However, a web camera is suitable for bright targets such as planets and the Moon, which do not require long exposure times, so many amateur astronomers use web cameras for planetary imaging. We used a ToUcam manufactured by Philips for planetary imaging and the commercial program RegiStax for combining the video frames. We then measured properties of the web camera, such as linearity and gain, which are commonly used to analyze CCD performance. Because the combining technique selects high-quality frames from the video, this method can produce higher-resolution planetary images than single-shot images taken with film, a digital camera, or a CCD. We describe the planetary observing method and the video-frame combining method.
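
As a hedged sketch of the select-and-combine idea behind webcam planetary imaging (alignment omitted, and not RegiStax's actual algorithm), the code below scores each video frame for sharpness, keeps the best fraction, and averages them; the sharpness metric is an assumption for illustration.

    # Minimal frame-selection-and-stacking sketch for "lucky imaging".
    import numpy as np

    def sharpness(frame: np.ndarray) -> float:
        gy, gx = np.gradient(frame.astype(np.float64))
        return float((gx**2 + gy**2).mean())       # higher = crisper frame

    def stack_best(frames, keep_fraction=0.1):
        scores = np.array([sharpness(f) for f in frames])
        keep = max(1, int(len(frames) * keep_fraction))
        best = np.argsort(scores)[-keep:]           # indices of the sharpest frames
        return np.mean([frames[i] for i in best], axis=0)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        video = [rng.random((240, 320)) for _ in range(100)]   # stand-in frames
        result = stack_best(video, keep_fraction=0.1)
        print(result.shape, result.dtype)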

Hardware Implementation of Fast Multi-resolution Motion Estimator for MPEG-4 AVC (MPEG-4 AVC를 위한 고속 다해상도 움직임 추정기의 하드웨어 구현)

  • Lim, Young-hun; Jeong, Yong-jin
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.11C / pp.1541-1550 / 2004
  • In this paper, we propose an advanced hardware architecture for fast multi-resolution motion estimation for the video coding standards MPEG-1/2 and MPEG-4 AVC. We describe the algorithm and derive a hardware architecture that emphasizes small area for low cost and fast operation, using shared memory, a special RAM architecture, motion vectors for 4×4 pixel blocks, spiral search, and so on. The proposed architecture has been verified on an ARM-interfaced emulation board using an Altera Excalibur FPGA and by ASIC synthesis using a Samsung 0.18 µm CMOS cell library. The ASIC synthesis result shows that the proposed hardware can operate at 140 MHz, processing more than 1,100 QCIF video frames or 70 4CIF video frames per second. The hardware will be used as a core module when implementing a complete MPEG-4 AVC video encoder ASIC for real-time multimedia applications.
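
For readers unfamiliar with the underlying technique, here is a small software sketch of multi-resolution motion estimation: a coarse SAD search on downsampled frames followed by refinement at full resolution around the up-scaled vector. It illustrates the general idea only; the paper's contribution is the hardware architecture (shared memory, spiral search, etc.), which is not reproduced here.

    # Software sketch of hierarchical (multi-resolution) block motion estimation.
    import numpy as np

    def downsample(img):
        return img[::2, ::2]

    def sad(a, b):
        return float(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

    def search(cur, ref, top, left, size, center, radius):
        """Full search in a (2*radius+1)^2 window around `center`; returns best MV."""
        block = cur[top:top + size, left:left + size]
        best, best_mv = None, (0, 0)
        for dy in range(center[0] - radius, center[0] + radius + 1):
            for dx in range(center[1] - radius, center[1] + radius + 1):
                y, x = top + dy, left + dx
                if 0 <= y <= ref.shape[0] - size and 0 <= x <= ref.shape[1] - size:
                    cost = sad(block, ref[y:y + size, x:x + size])
                    if best is None or cost < best:
                        best, best_mv = cost, (dy, dx)
        return best_mv

    def multires_me(cur, ref, top, left, size=4, radius=2):
        # Coarse level: half-size frames and block.
        mv = search(downsample(cur), downsample(ref), top // 2, left // 2,
                    size // 2, (0, 0), radius)
        # Fine level: refine around the up-scaled coarse vector.
        return search(cur, ref, top, left, size, (2 * mv[0], 2 * mv[1]), radius)

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
        cur = np.roll(ref, shift=(2, -4), axis=(0, 1))   # known motion (-2, +4)
        print(multires_me(cur, ref, top=16, left=16))    # expected: (-2, 4)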

Design of 8K Broadcasting System based on MMT over Heterogeneous Networks

  • Sohn, Yejin; Cho, Minju; Paik, Jongho
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.8 / pp.4077-4091 / 2017
  • This paper presents the design of a broadcasting scenario and system for 8K-resolution content. Because 8K content is four times larger than 4K content in terms of size, many technologies, such as content acquisition, video coding, and transmission, are required to handle it, and high-quality video and audio for an 8K UHD (ultra-high-definition television) service cannot be transmitted using only the current terrestrial broadcasting system. The proposed broadcasting system divides the 8K content into four 4K contents by area, and each area is hierarchically encoded by Scalable High-efficiency Video Coding (SHVC) into three layers: L0, L1, and L2. Every part of the 8K video content, divided by area and layer, is treated independently. These parts are transmitted over heterogeneous networks, such as digital broadcasting and broadband networks, after going through several processes of signal message generation, encapsulation, and packetization based on MPEG Media Transport (MMT). We propose three methods of generating streams at the sending entity so that the divided streams can be merged back into the original content at the receiving entity. First, we design the composition information, which defines the presentation structure for displays. Second, a descriptor for content synchronization is included in the signal message. Finally, we define the rules for generating "packet_id" among the packet header fields and design a transmission scheduler to acquire the divided streams quickly. We implement the 8K broadcasting system using the proposed methods and show that the 8K-resolution content is received stably and serviced with low delay.
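
To illustrate how a receiver could recover which 4K area and which SHVC layer an incoming MMT packet belongs to, the sketch below uses a hypothetical arithmetic "packet_id" numbering rule; the actual rule defined in the paper may differ, and the base value is an assumption.

    # Hypothetical packet_id assignment for 4 areas x 3 SHVC layers (L0/L1/L2).
    NUM_AREAS = 4            # the 8K picture is split into four 4K areas
    NUM_LAYERS = 3           # SHVC layers L0, L1, L2
    BASE_PACKET_ID = 0x0100  # hypothetical starting value

    def packet_id(area: int, layer: int) -> int:
        assert 0 <= area < NUM_AREAS and 0 <= layer < NUM_LAYERS
        return BASE_PACKET_ID + area * NUM_LAYERS + layer

    def demux(pid: int) -> tuple:
        """Recover (area, layer) from a packet_id at the receiving entity."""
        offset = pid - BASE_PACKET_ID
        return divmod(offset, NUM_LAYERS)          # (area, layer)

    if __name__ == "__main__":
        for area in range(NUM_AREAS):
            for layer in range(NUM_LAYERS):
                pid = packet_id(area, layer)
                assert demux(pid) == (area, layer)
                print(f"area {area}, L{layer} -> packet_id 0x{pid:04X}")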

Fast Content-preserving Seam Estimation for Real-time High-resolution Video Stitching (실시간 고해상도 동영상 스티칭을 위한 고속 콘텐츠 보존 시접선 추정 방법)

  • Kim, Taeha; Yang, Seongyeop; Kang, Byeongkeun; Lee, Hee Kyung; Seo, Jeongil; Lee, Yeejin
    • Journal of Broadcast Engineering / v.25 no.6 / pp.1004-1012 / 2020
  • We present a novel content-preserving seam estimation algorithm for real-time high-resolution video stitching. Seam estimation is one of the fundamental steps in image/video stitching; its purpose is to minimize visual artifacts in the transition areas between images. Typical seam estimation algorithms are based on optimization methods that demand intensive computation and large memory. These algorithms, however, often fail to avoid objects, resulting in cropped or duplicated objects, and they also lack temporal consistency, inducing flickering between frames. Hence, we propose an efficient and temporally consistent seam estimation algorithm that utilizes a straight line. The proposed method also uses convolutional neural network-based instance segmentation to place the seam outside of objects. Experimental results demonstrate that the proposed method produces visually plausible stitched videos with minimal visual artifacts in real time.
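
As a minimal sketch of a straight-line seam that stays outside detected objects (a simplified stand-in for the proposed method, with illustrative costs), the code below scores each candidate vertical seam in the overlap region by photometric difference plus a heavy penalty wherever an instance-segmentation mask marks an object, and picks the cheapest column.

    # Illustrative straight-seam selection that avoids object masks.
    import numpy as np

    def straight_seam_column(overlap_a, overlap_b, object_mask, object_penalty=1e6):
        """overlap_a/overlap_b: HxW grayscale overlaps; object_mask: HxW bool."""
        diff = np.abs(overlap_a.astype(np.float64) - overlap_b.astype(np.float64))
        column_cost = diff.sum(axis=0) + object_penalty * object_mask.sum(axis=0)
        return int(np.argmin(column_cost))   # x-position of the straight seam

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        a = rng.random((480, 200))
        b = a + rng.normal(0, 0.01, a.shape)          # nearly aligned overlap
        mask = np.zeros((480, 200), dtype=bool)
        mask[100:300, 80:150] = True                  # a detected object instance
        print("seam at column", straight_seam_column(a, b, mask))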