• Title/Summary/Keyword: 3D video


A Study of Video Synchronization Method for Live 3D Stereoscopic Camera (실시간 3D 영상 카메라의 영상 동기화 방법에 관한 연구)

  • Han, Byung-Wan;Lim, Sung-Jun
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.6
    • /
    • pp.263-268
    • /
    • 2013
  • A stereoscopic image is produced by three-dimensional image processing that combines the images from the left and right cameras, so synchronizing the two camera inputs is critical. This paper proposes a synchronization method for the two camera input streams. A software-based system is used to support various video formats, and the method will also be applied to a glassless (autostereoscopic) imaging system using several cameras.
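The abstract does not detail the synchronization mechanism; a common approach for pairing two live camera streams is timestamp matching. The sketch below is a hypothetical illustration — the function name, tolerance parameter, and frame representation are all assumptions, not the paper's method:

```python
def pair_stereo_frames(left, right, tolerance):
    """Pair frames from two streams by timestamp. Each list holds
    (timestamp, frame) tuples in capture order; emit pairs whose
    timestamps differ by at most `tolerance`, dropping the earlier
    frame whenever the streams drift apart."""
    pairs, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        dt = left[i][0] - right[j][0]
        if abs(dt) <= tolerance:
            pairs.append((left[i][1], right[j][1]))
            i += 1
            j += 1
        elif dt < 0:
            i += 1   # left frame too old relative to right: drop it
        else:
            j += 1   # right frame too old relative to left: drop it
    return pairs

# Toy timelines: the last right frame (t=0.20) has no left partner.
L = [(0.00, 'L0'), (0.04, 'L1'), (0.08, 'L2')]
R = [(0.01, 'R0'), (0.05, 'R1'), (0.20, 'R2')]
paired = pair_stereo_frames(L, R, tolerance=0.02)
```

Dropping the older frame when timestamps drift keeps the streams aligned at the cost of occasional frame loss, which is usually preferable to a visible left/right lag in stereoscopic playback.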

A Design of a Highly Linear 3 V 10b Video-Speed CMOS D/A Converter (높은 선형성을 가진 3 V 10b 영상 신호 처리용 CMOS D/A 변환기 설계)

  • 이성훈;전병렬;윤상원;이승훈
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.34C no.6
    • /
    • pp.28-36
    • /
    • 1997
  • In this work, a highly linear video-speed CMOS current-mode digital-to-analog converter (DAC) is proposed. A new switching scheme for the current-cell matrix of the DAC simultaneously reduces graded and symmetrical errors to improve integral nonlinearity (INL). The proposed DAC is designed to operate at any supply voltage between 3 V and 5 V, and minimizes the glitch energy of the analog outputs with deglitching circuits developed in this work. The prototype DAC was implemented in an LG 0.8 um n-well single-poly double-metal CMOS technology. Experimental results show that the differential and integral nonlinearities are less than ± LSB and ±0.8 LSB, respectively. The DAC dissipates 75 mW at a 3 V single power supply and occupies a chip area of 2.4 mm × 2.9 mm.


A Study on the Video Compression Technique based on 3D Wavelet Transform (3차원 웨이블릿 기반 영상 압축 기술에 대한 연구)

  • Zi, Cui-Hui;Moon, Joo-Hee
    • Proceedings of the IEEK Conference
    • /
    • 2007.07a
    • /
    • pp.269-270
    • /
    • 2007
  • In previous work [1], each codeblock is coded using a context adaptively selected from three kinds of scanning directions after the 3D DWT. However, some subbands still show correlation among coefficients in the horizontal, vertical, or temporal direction. In this paper, we propose a new 3D DWT-based video compression technique in which the difference of coefficients is computed in one of the three directions for every codeblock and coded with a context adaptively selected for each codeblock. Experimental results show that the proposed compression technique outperforms conventional DWT-based techniques both in objective measures and visually.
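As a rough illustration of the per-codeblock direction selection the abstract describes, one can difference a block of wavelet coefficients along each of the three axes and keep the cheapest direction. This is a minimal sketch; the cost function and selection rule here are assumptions, not the paper's actual context-modeling scheme:

```python
import numpy as np

def directional_residual(block):
    """For a 3D codeblock of coefficients, take first-order
    differences along each axis (temporal, vertical, horizontal)
    and keep the direction with the smallest absolute residual
    energy -- a crude stand-in for per-codeblock direction
    selection."""
    best_axis, best_cost, best_res = None, np.inf, None
    for axis in range(3):
        res = np.diff(block, axis=axis)
        cost = np.abs(res).sum()
        if cost < best_cost:
            best_axis, best_cost, best_res = axis, cost, res
    return best_axis, best_res

# A block that varies slowly along the time axis (axis 0) and
# strongly along the spatial axes: temporal differencing wins.
t, y, x = np.meshgrid(np.arange(8), np.arange(8), np.arange(8),
                      indexing='ij')
block = 0.1 * t + 31.0 * y + 17.0 * x
axis, res = directional_residual(block)
```

The residuals along the chosen direction are small and near-constant, which is exactly what makes them cheap for a context-adaptive entropy coder.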


Multiresolution 3D Facial Model Compression (다해상도 3D 얼굴 모델의 압축)

  • 박동희;이종석;이영식;배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2002.05a
    • /
    • pp.602-607
    • /
    • 2002
  • In this paper, we propose an approach to efficiently compress and transmit multiresolution 3D facial models for multimedia and very-low-bit-rate applications. A personal facial model is obtained by a 3D laser digitizer and further re-quantized at several resolutions according to the scope of the application, such as animation, video games, or video conferencing. By deforming 2D templates to match and re-quantize a 3D digitized facial model, we obtain its compressed model. In the present study, we create hierarchical 2D wireframe templates that are adapted according to facial feature points using the proposed piecewise chainlet affine transformation (PACT) method. After re-quantization, the 3D digitized model is reduced significantly in size without perceptual loss. Moreover, the proposed multiresolution models, with their hierarchical data structure, are well suited to progressive transmission and display across the Internet.


Implementation of an RF Module for 2.4GHz Wireless Audio/Video Transmission (2.4GHz 무선 음성/영상 송신용 RF 모듈 구현)

  • 김거성;권덕기;박종태;유종근
    • Proceedings of the IEEK Conference
    • /
    • 2002.06e
    • /
    • pp.55-58
    • /
    • 2002
  • This paper describes an RF module for 2.4 GHz wireless audio/video transmission. The pre-processed baseband input signals are FM-modulated using a VCO and then transmitted through an antenna after RF filtering. The designed circuits are implemented on a Teflon board measuring 52 mm × 62 mm. The measured maximum output signal level is around -3 dBm, and the harmonics are less than -45 dBc. The manufactured module consumes 130 mA from an 8 V supply.


A Study on the Influence of VR 360° Panoramic Video on the Theory of Modern Films (VR 360° 기술이 현대영화 이론에 끼친 영향에 관한 연구)

  • HUA, LU HUI;Kim, Hae Yoon
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2016.05a
    • /
    • pp.353-354
    • /
    • 2016
  • Cinemas have evolved (through conventional film, 3D, and 4D formats) to enhance the audience's audiovisual experience. To date, VR 360° panoramic video capture is the only technology capable of delivering the highest level of audiovisual immersion. However, this technology is completely different from traditional filmmaking techniques, and applying it to cinema fundamentally challenges established film theory.


BoF based Action Recognition using Spatio-Temporal 2D Descriptor (시공간 2D 특징 설명자를 사용한 BOF 방식의 동작인식)

  • KIM, JinOk
    • Journal of Internet Computing and Services
    • /
    • v.16 no.3
    • /
    • pp.21-32
    • /
    • 2015
  • Since spatio-temporal local features for video representation have become an important issue in model-free, bottom-up approaches to action recognition, various methods for feature extraction and description have been proposed. In particular, BoF (bag of features) has yielded consistently promising recognition results. The most important question for BoF is how to represent the dynamic information of actions in videos. Most existing BoF methods treat the video as a spatio-temporal volume and describe neighboring 3D interest points as complex volumetric patches. To simplify these complex 3D methods, this paper proposes a novel method that builds the BoF representation by learning 2D interest points directly from video data. The basic idea is to gather feature points not only from the 2D xy spatial planes of traditional frames, but also from 2D planes along the time axis, called spatio-temporal frames. Such spatio-temporal features capture dynamic information from action videos and are well suited to recognizing human actions without 3D extensions of the feature descriptors. The spatio-temporal BoF approach using SIFT and SURF feature descriptors obtains good recognition rates on a well-known action recognition dataset. Compared with the more sophisticated scheme of 3D HoG/HoF descriptors, the proposed method is easier to compute and simpler to understand.
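The core idea — gathering 2D interest points from ordinary xy frames as well as from planes sliced along the time axis — can be sketched as follows. A 2D detector such as SIFT or SURF would then run unchanged on every plane; the function name here is illustrative, not the paper's API:

```python
import numpy as np

def spatio_temporal_planes(video):
    """video: (T, H, W) grayscale volume. Yield the three families
    of 2D planes the abstract describes: ordinary xy frames plus
    the 'spatio-temporal frames' obtained by slicing along the
    time axis (xt and yt planes)."""
    T, H, W = video.shape
    for t in range(T):            # xy planes: the classic frames
        yield ('xy', video[t, :, :])
    for y in range(H):            # xt planes: time vs. width
        yield ('xt', video[:, y, :])
    for x in range(W):            # yt planes: time vs. height
        yield ('yt', video[:, :, x])

video = np.zeros((4, 3, 2))       # tiny 4-frame, 3x2-pixel clip
planes = list(spatio_temporal_planes(video))
kinds = [k for k, _ in planes]
```

Because motion shows up as structure in the xt/yt slices, ordinary 2D descriptors on these planes capture temporal dynamics without any volumetric 3D extension.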

Current State of Animation Industry and Technology Trends - Focusing on Artificial Intelligence and Real-Time Rendering (애니메이션 산업 현황과 기술 동향 - 인공지능과 실시간 렌더링 중심으로)

  • Jibong Jeon
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.5
    • /
    • pp.821-830
    • /
    • 2023
  • The advancement of Internet network technology has triggered the emergence of new OTT video content platforms, increasing demand for content and altering consumption patterns. This trend is bringing positive changes to the South Korean animation industry, where diverse and high-quality animation content is becoming increasingly important. As investment in technology grows, video production technology continues to advance. Specifically, 3D animation and VFX production technologies are enabling effects that were previously unthinkable, offering detailed and realistic graphics. The Fourth Industrial Revolution is providing new opportunities for this technological growth. The rise of Artificial Intelligence (AI) is automating repetitive tasks, thereby enhancing production efficiency and enabling innovations that go beyond traditional production methods. Cutting-edge technologies like 3D animation and VFX are being continually researched and are expected to be more actively integrated into the production process. Digital technology is also expanding the creative horizons for artists. The future of AI and advanced technologies holds boundless potential, and there is growing anticipation for how these will elevate the video content industry to new heights.

An Introduction to the MPEG Video-based Point Cloud Compression Standard (MPEG Video-based Point Cloud Compression 표준 소개)

  • Jang, Ui-Seon
    • Broadcasting and Media Magazine
    • /
    • v.26 no.2
    • /
    • pp.18-30
    • /
    • 2021
  • This article introduces the MPEG Video-based Point Cloud Compression (V-PCC) standard, which was recently finalized as an international standard. With the emergence of new media applications such as AR/VR, attention is increasingly turning to 3D graphics data. The article reviews the standardization status, the main application areas, and the key compression techniques of the V-PCC standard, which provides a standard compression technology for point cloud data — an area where efficient compression had previously received little attention.
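The central idea of V-PCC — projecting 3D points onto 2D images so that mature 2D video codecs can compress them — can be illustrated with a toy single-plane orthographic projection. Real V-PCC segments the cloud into patches with per-patch projection planes and packs them into geometry and attribute atlases; this sketch skips all of that:

```python
import numpy as np

def orthographic_depth_map(points, size=64):
    """Toy illustration of the V-PCC projection step: project 3D
    points onto a 2D grid along z, keeping the nearest depth per
    pixel, so the resulting geometry image could be handed to an
    ordinary 2D video encoder. Occluded points (here, the farther
    of two points landing on the same pixel) are simply lost,
    which is why real V-PCC uses multiple patches and layers."""
    depth = np.full((size, size), np.inf)
    for x, y, z in points:
        u, v = int(x), int(y)
        if 0 <= u < size and 0 <= v < size:
            depth[v, u] = min(depth[v, u], z)
    return depth

pts = [(1.0, 2.0, 5.0), (1.0, 2.0, 3.0), (10.0, 10.0, 7.0)]
d = orthographic_depth_map(pts)
```

The payoff of this reduction is that all the machinery of 2D video coding — motion compensation, rate control, hardware decoders — applies to the projected geometry and attribute images for free.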

Projection format and quality metrics of 360 video (360 VR 영상의 프로젝션 포맷 및 성능 평가 방식)

  • Park, Seong-Hwan;Kim, Kyu-Heon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2019.06a
    • /
    • pp.182-184
    • /
    • 2019
  • Recently, interest in technologies that provide users with more immersive content has been growing, and 360 VR video is the most representative example. MPEG (Moving Picture Experts Group), a media standardization body, is responding to this trend through the MPEG-I (Immersive) next-generation project group. MPEG-I comprises eight parts under standardization, targeting 6DoF VR video by the end of 2021. When 360 VR video is captured, its pixels exist in 3D space; to process and display them, the video must be converted to a 2D image, and the projection format is what performs this conversion. JVET (Joint Video Exploration Team) is currently studying projection formats that minimize the loss incurred in the 3D-to-2D conversion. This paper introduces the various projection formats proposed to date and the methods used to evaluate their performance.
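For reference, the most common projection format, equirectangular projection (ERP), maps sphere coordinates linearly to pixels, and the widely used WS-PSNR metric compensates for ERP's oversampling near the poles by weighting each pixel row by the cosine of its latitude. A minimal sketch of both:

```python
import math

def sphere_to_erp(lon, lat, width, height):
    """Equirectangular projection: longitude in [-pi, pi) and
    latitude in [-pi/2, pi/2] map linearly to pixel coordinates
    (u, v), with v = 0 at the north pole."""
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

def wspsnr_weight(v, height):
    """WS-PSNR row weight for ERP: cos(latitude) of the pixel-row
    center, so heavily oversampled polar rows count for less in
    the quality score."""
    lat = (0.5 - (v + 0.5) / height) * math.pi
    return math.cos(lat)
```

Formats such as the cubemap or EAC trade ERP's simplicity for more uniform sphere sampling — exactly the trade-off the surveyed evaluation methods try to quantify.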
