• Title/Summary/Keyword: 3D video

3-DTIP: 3-D Stereoscopic Tour-Into-Picture Based on Depth Map (3-DTIP: 깊이 데이터 기반 3차원 입체 TIP)

  • Jo, Cheol-Yong;Kim, Je-Dong;Jeong, Da-Un;Gil, Jong-In;Lee, Kwang-Hoon;Kim, Man-Bae
    • Proceedings of the IEEK Conference
    • /
    • 2009.05a
    • /
    • pp.28-30
    • /
    • 2009
  • This paper describes 3-DTIP (3-D Tour Into Picture), a depth-map-based method for Korean classical paintings composed of persons and landscape. Unlike conventional TIP methods that produce 2-D images or video, the proposed TIP provides users with 3-D stereoscopic content, and navigating inside a picture yields a more realistic and immersive perception. The method first constructs a depth map. The input data consist of a foreground object, a background image, a depth map, and a foreground mask. The foreground object is separated from the background, and a depth map is built for each. The background is decomposed into polygons, a depth value is assigned to each vertex, each polygon is subdivided into triangles, and Gouraud shading is used to interpolate the final depth map (a minimal sketch of this interpolation follows below). Navigation inside the picture is rendered with the OpenGL library. The proposed method was tested on "Danopungjun" and "Muyigido", famous paintings from the Chosun Dynasty, and the resulting stereoscopic video was shown to deliver a stronger 3-D perception than 2-D video.

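As a rough illustration of the depth-assignment step described in the abstract above, the sketch below interpolates per-vertex depth values across one triangle with barycentric weights, which is the per-pixel effect that Gouraud shading produces. The triangle coordinates and depth values are made-up examples, not data from the paper.

```python
import numpy as np

def rasterize_triangle_depth(depth_map, v0, v1, v2, d0, d1, d2):
    """Fill a triangle in `depth_map` by barycentric interpolation of the
    per-vertex depths (d0, d1, d2) -- a Gouraud-style smooth fill."""
    xs, ys = [v0[0], v1[0], v2[0]], [v0[1], v1[1], v2[1]]
    x_min, x_max = int(min(xs)), int(max(xs))
    y_min, y_max = int(min(ys)), int(max(ys))
    # Twice the signed triangle area; normalizes the barycentric weights.
    area = (v1[0] - v0[0]) * (v2[1] - v0[1]) - (v2[0] - v0[0]) * (v1[1] - v0[1])
    if area == 0:
        return
    for y in range(y_min, y_max + 1):
        for x in range(x_min, x_max + 1):
            w0 = ((v1[0] - x) * (v2[1] - y) - (v2[0] - x) * (v1[1] - y)) / area
            w1 = ((v2[0] - x) * (v0[1] - y) - (v0[0] - x) * (v2[1] - y)) / area
            w2 = 1.0 - w0 - w1
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # pixel lies inside the triangle
                depth_map[y, x] = w0 * d0 + w1 * d1 + w2 * d2

# Hypothetical example: one background triangle with depths assigned at its vertices.
depth = np.zeros((120, 160), dtype=np.float32)
rasterize_triangle_depth(depth, (10, 10), (150, 20), (80, 110), 0.2, 0.5, 0.9)
```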

A Study on the Kinematic Surveying Method Using the Digital Video Recorder (디지털 비디오 리코더에 의한 이동 측량 기법 연구)

  • 함창학;김원대
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.21 no.3
    • /
    • pp.229-236
    • /
    • 2003
  • This study recorded an object with a digital video recorder and then estimated 3-D positional information and reconstructed the scene. First, the accuracy of measurements from the video recorder was evaluated and its applicability tested; the method was then applied to a real object to construct a 3-D digital model. The study assumed no lens distortion in the video recorder, so that all bundles of rays pass precisely through the projection center of the lens. The image size used for the orientations is determined by the size of the CCD chip and the number of pixels. Comparing the results from the digital video recorder with a triangulation survey using a 1-second theodolite shows an average error of 0.0173 m in the x, y coordinates (a sketch of such a comparison follows below). Even without accurate information on the lens distortion and the coordinates of the projection center, the study produced acceptable results in the 3-D model reconstruction. In consequence, a 3-D model can be reconstructed from digital video camera imagery using only information about the camera type.
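
To make the accuracy comparison above concrete, here is a minimal sketch that compares x, y coordinates derived from video imagery against theodolite reference coordinates. It reports a root-mean-square figure, one common way to summarize planimetric error; the paper reports an average error, and the coordinate values below are hypothetical, not the paper's data.

```python
import numpy as np

# Hypothetical control-point coordinates (metres): video-derived vs. theodolite reference.
video_xy = np.array([[12.341, 45.120], [12.358, 45.310], [12.402, 45.502]])
theodolite_xy = np.array([[12.352, 45.128], [12.347, 45.322], [12.415, 45.489]])

diff = video_xy - theodolite_xy
rmse = np.sqrt(np.mean(np.sum(diff**2, axis=1)))  # RMS of the planimetric (x, y) error
print(f"planimetric RMSE: {rmse:.4f} m")
```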

MMT based V3C data packetizing method (MMT 기반 V3C 데이터 패킷화 방안)

  • Moon, Hyeongjun;Kim, Yeonwoong;Park, Seonghwan;Nam, Kwijung;Kim, Kyuhyeon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.06a
    • /
    • pp.836-838
    • /
    • 2022
  • 3D point clouds are a data format for representing 3D content more realistically. Because point cloud data exist in three-dimensional space, they are far larger than conventional 2D video. Recently, various methods such as V-PCC (Video-based Point Cloud Compression) have been proposed to compress large-volume 3D point cloud data, and such compression is required for the smooth transmission and storage of point clouds. V-PCC extracts the point cloud data as patches, projects them onto 2D planes so that the 3D content is converted into 2D form, and compresses the resulting 2D point cloud video with conventional 2D codecs. The 2D video produced by V-PCC can then be delivered over networks using existing 2D video transmission schemes. This paper proposes a packetization scheme that builds MPEG Media Transport (MMT) packets so that V3C (Visual Volumetric Video-based Coding) data compressed with V-PCC can be transmitted and consumed over a broadcast network (a simplified packetization sketch follows below). The scheme is verified by comparing the V3C bitstreams exchanged between the server and the client.

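The sketch below only illustrates the general idea of splitting a V3C bitstream into transport packets and reassembling it for a bitstream comparison, as the abstract describes. The 8-byte header used here is a toy stand-in, not the normative MMTP packet syntax of ISO/IEC 23008-1, and the dummy bitstream is hypothetical.

```python
import struct

def packetize_v3c(bitstream: bytes, packet_id: int, mtu_payload: int = 1400):
    """Split a V3C bitstream into simplified MMTP-like packets.

    Header fields (packet id, sequence number, fragment count) are a toy
    stand-in for the real MMTP packet header."""
    fragments = [bitstream[i:i + mtu_payload]
                 for i in range(0, len(bitstream), mtu_payload)]
    packets = []
    for seq, frag in enumerate(fragments):
        header = struct.pack("!HIH", packet_id, seq, len(fragments))
        packets.append(header + frag)
    return packets

def depacketize_v3c(packets):
    """Reassemble the original bitstream on the client side."""
    ordered = sorted(packets, key=lambda p: struct.unpack("!HIH", p[:8])[1])
    return b"".join(p[8:] for p in ordered)

# Round-trip check, mirroring the server/client bitstream comparison in the paper.
v3c_bitstream = bytes(range(256)) * 20  # dummy stand-in for a V3C bitstream
pkts = packetize_v3c(v3c_bitstream, packet_id=1)
assert depacketize_v3c(pkts) == v3c_bitstream
```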

Offline Camera Movement Tracking from Video Sequences

  • Dewi, Primastuti;Choi, Yeon-Seok;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2011.05a
    • /
    • pp.69-72
    • /
    • 2011
  • In this paper, we propose a method to track camera movement from video sequences. The method is useful for video analysis and can serve as a pre-processing step in applications such as video stabilization and marker-less augmented reality. First, features are extracted in each frame using corner detection. The features in the current frame are then matched against those in adjacent frames to compute the optical flow, which represents the relative movement of the camera (a minimal corner-tracking sketch follows below). The optical flow is analyzed to obtain the camera movement parameters, and a final estimation and correction step increases the accuracy. Performance is verified by generating a 3D map of the camera movement and embedding 3D objects into the video. The examples demonstrated in this paper show that the method is highly accurate and rarely produces jitter.

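A minimal sketch of the corner-detection and optical-flow stage described above, using standard OpenCV calls. It only estimates a crude per-frame translation from the median flow; the paper's later 3D estimation and correction stages are not reproduced, and the video path is a placeholder.

```python
import cv2
import numpy as np

def track_camera_motion(video_path):
    """Estimate a rough per-frame camera translation from sparse optical flow."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    motions = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Corner features in the previous frame.
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)
        # Lucas-Kanade optical flow into the current frame.
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good = status.ravel() == 1
        flow = (nxt[good] - pts[good]).reshape(-1, 2)
        # Median flow as a crude proxy for the camera's relative movement.
        motions.append(np.median(flow, axis=0))
        prev_gray = gray
    cap.release()
    return np.array(motions)
```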

X3D Based Web Visualization by Data Fusion of 3D Spatial Information and Video Sequence (3D 공간정보와 비디오 융합에 의한 X3D기반 웹 가시화)

  • Sohn, Hong-Gyoo;Kim, Seong-Sam;Yoo, Byoung-Hyun;Kim, Sang-Min
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.17 no.4
    • /
    • pp.95-103
    • /
    • 2009
  • Global interest in constructing 3-dimensional spatial information has risen with the development of measurement sensors and data processing technologies. Despite criticism over violations of personal privacy, CCTV cameras installed in outdoor public spaces of urban areas are used as fundamental sensors for traffic management, crime prevention, and hazard monitoring. To guarantee safety in the urban environment and support disaster prevention, a surveillance system that integrates pre-constructed 3-dimensional spatial information with CCTV data or video sequences is needed so that emergent situations can be monitored interactively in real time. In this study, we demonstrate the applicability of a prototype web-visualization system based on X3D, an international standard for real-time web visualization, by fusing 3-dimensional spatial information with video sequences (a minimal X3D scene sketch follows below).

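To illustrate the kind of data fusion the abstract describes, the sketch below writes a minimal X3D scene that loads a 3D model with an Inline node and maps a video onto a quad with a MovieTexture node. The file names, transform values, and overall layout are hypothetical; the paper's actual scene structure is not reproduced.

```python
# A minimal X3D scene (written from Python for convenience) that overlays a CCTV
# video, via a MovieTexture, onto a surface placed inside a 3D model loaded with
# an Inline node. File names are hypothetical placeholders.
X3D_SCENE = """<?xml version="1.0" encoding="UTF-8"?>
<X3D profile="Interactive" version="3.2">
  <Scene>
    <!-- Pre-constructed 3D spatial information (hypothetical file). -->
    <Inline url='"city_block.x3d"'/>
    <!-- Video sequence projected onto a thin quad near the assumed CCTV pose. -->
    <Transform translation="12.0 4.5 -8.0">
      <Shape>
        <Appearance>
          <MovieTexture url='"cctv_feed.mp4"' loop="true"/>
        </Appearance>
        <Box size="4 3 0.01"/>
      </Shape>
    </Transform>
  </Scene>
</X3D>
"""

with open("fusion_scene.x3d", "w", encoding="utf-8") as f:
    f.write(X3D_SCENE)
```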

Effects of Spatio-temporal Features of Dynamic Hand Gestures on Learning Accuracy in 3D-CNN (3D-CNN에서 동적 손 제스처의 시공간적 특징이 학습 정확성에 미치는 영향)

  • Yeongjee Chung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.3
    • /
    • pp.145-151
    • /
    • 2023
  • 3D-CNNs are one of the deep learning techniques for learning time-series data. Such three-dimensional learning generates many parameters, so high-performance hardware is required and the learning rate can be strongly affected. To improve the efficiency of dynamic hand-gesture learning with a 3D-CNN, optimal conditions for the input video data need to be found by analyzing learning accuracy under spatiotemporal changes of the input, without structural changes to the 3D-CNN model. First, the time ratio between dynamic hand-gesture actions is adjusted by setting the sampling interval of image frames in the hand-gesture video data. Second, the similarity between image frames of the input video is measured by 2D cross-correlation between classes, normalized, averaged over frames, and analyzed against learning accuracy. Based on this analysis, this work proposes two methods for effectively selecting input video data for 3D-CNN deep learning of dynamic hand gestures (a sketch of the sampling and similarity measures follows below). Experimental results show that the frame sampling interval and the inter-class similarity of image frames both affect the accuracy of the learning model.
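
A minimal sketch of the two measurements described above: sampling frames at an interval, and scoring inter-frame similarity with a normalized 2D cross-correlation averaged over frames. Zero-mean normalized correlation at zero lag is used as one common choice; the paper's exact normalization may differ, and the random clips are placeholders.

```python
import numpy as np

def sample_frames(frames, interval):
    """Keep every `interval`-th frame, adjusting the time ratio between gesture phases."""
    return frames[::interval]

def normalized_cross_correlation(a, b):
    """Zero-mean normalized 2D cross-correlation of two frames at zero lag."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def mean_similarity(frames_a, frames_b):
    """Average inter-frame similarity between two gesture clips (e.g. two classes)."""
    scores = [normalized_cross_correlation(fa, fb)
              for fa, fb in zip(frames_a, frames_b)]
    return float(np.mean(scores))

# Hypothetical 16-frame grayscale clips of two gesture classes.
clip_a = np.random.rand(16, 64, 64)
clip_b = np.random.rand(16, 64, 64)
print(mean_similarity(sample_frames(clip_a, 2), sample_frames(clip_b, 2)))
```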

Effect of Sexual Contents on Presence, Arousal, and Sexual Attitude in 3D TV (3D TV 시청환경에서 선정적 영상이 실재감과 각성, 성적 태도에 미치는 영향)

  • Kim, Hyo Sun;Kwon, Ji Young;Lee, Sangmin;Han, Kwanghee
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.2
    • /
    • pp.198-210
    • /
    • 2013
  • This study investigates the detrimental effect of watching sexual content in three-dimensional (3D) moving pictures. Increasing investment is being put into 3D adult content to boost the 3D media industry, so it is crucial to identify the effect of sexual content on viewers. A between-subject design was employed to analyze the effects of sexual content on participants who viewed the same stimuli in either 3D or 2D. A presence scale was used to evaluate how real the video clip appeared, and permissiveness toward sexual behaviors and the level of sexual arousal were measured to compare the 2D and 3D groups. The results show that those who watched the 3D clip perceived a higher sense of presence than those who watched the 2D clip. Furthermore, participants in the 3D condition reported lower permissiveness toward sexual behaviors. This confirms that 3D displays deliver a stronger visual experience and influence how viewers perceive sexual content and form attitudes toward sexual behaviors.

ROI-Based 3D Video Stabilization Using Warping (관심영역 기반 와핑을 이용한 3D 동영상 안정화 기법)

  • Lee, Tae-Hwan;Song, Byung-Cheol
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.2
    • /
    • pp.76-82
    • /
    • 2012
  • As portable camcorders have become popular, various video stabilization algorithms for removing camera shake have been developed. Most earlier algorithms were based on 2-dimensional camera motion, but recent algorithms achieve much better performance by modeling 3-dimensional camera motion. Among these, 3D video stabilization using content-preserving warps is regarded as the state of the art owing to its superior performance, but its major drawback is high computational complexity. We therefore present a computationally light full-frame warping algorithm based on an ROI (region of interest) that provides visual quality comparable to the state of the art within the ROI. For each frame, a proper ROI with a target depth is chosen, and full-frame warping based on the selected ROI is applied (a minimal ROI-based warping sketch follows below).
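
The sketch below is only a simplified stand-in for the ROI-driven warping idea: it measures motion from features tracked inside the ROI of each frame and applies one full-frame homography warp, rather than the paper's content-preserving warp mesh or its depth-based ROI selection. All parameter values are illustrative.

```python
import cv2
import numpy as np

def stabilize_with_roi(prev_frame, cur_frame, roi):
    """Warp `cur_frame` toward `prev_frame` using motion measured inside `roi` only.

    `roi` is (x, y, w, h). A single homography replaces the paper's
    content-preserving warp for simplicity."""
    x, y, w, h = roi
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    cur_gray = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY)

    # Track corner features only inside the region of interest.
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255
    pts = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 7, mask=mask)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    good = status.ravel() == 1

    # Homography that maps the shaky current frame back onto the previous one.
    H, _ = cv2.findHomography(nxt[good], pts[good], cv2.RANSAC, 3.0)
    height, width = cur_frame.shape[:2]
    return cv2.warpPerspective(cur_frame, H, (width, height))
```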

Synchronization Method of Stereoscopic Video in 3D Mobile Broadcasting through Heterogeneous Network (이종망을 통한 3D 모바일 방송에서의 스테레오스코픽 비디오 전송을 위한 동기화 방법)

  • Kwon, Ki-Deok;Yoo, Young-Hwan;Jeong, Hyeon-Jun;Lee, Gwang-Soon;Cheong, Won-Sik;Hur, Nam-Ho
    • Journal of Broadcast Engineering
    • /
    • v.17 no.4
    • /
    • pp.596-610
    • /
    • 2012
  • This paper proposes a method for providing a high-quality 3D broadcasting service in a mobile broadcasting system. Because of the limited bandwidth of the broadcasting system, audio and video data are delivered over a heterogeneous network consisting of a mobile network as well as a broadcasting network. This makes it harder to synchronize the left and right video frames of a 3D stereoscopic service, which arrive over different types of networks. The proposed method uses the offset from the initial RTP (Real-time Transport Protocol) timestamp to determine the order of frames and to find the pair of left and right frames that must be played at the same time (a minimal pairing sketch follows below). Additionally, a new signaling method is introduced that lets a mobile device request a 3D service and obtain the initial RTP timestamp.
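
A minimal sketch of pairing left and right frames by their offsets from each stream's initial RTP timestamp, as the abstract describes. The 90 kHz video clock (3000 ticks per frame at 30 fps) is a common RTP convention assumed here, and the streams and timestamps are hypothetical.

```python
def frame_index(rtp_timestamp, initial_timestamp, ticks_per_frame):
    """Order of a frame within its stream, from the offset to the initial RTP timestamp.

    RTP timestamps wrap at 2**32, so the offset is taken modulo 2**32."""
    offset = (rtp_timestamp - initial_timestamp) % (1 << 32)
    return offset // ticks_per_frame

def pair_stereo_frames(left_frames, right_frames, init_left, init_right,
                       ticks_per_frame=3000):
    """Match left frames (e.g. from the broadcast network) with the right frames
    (e.g. from the mobile network) that must be played at the same time."""
    right_by_index = {frame_index(ts, init_right, ticks_per_frame): frm
                      for ts, frm in right_frames}
    pairs = []
    for ts, frm in left_frames:
        idx = frame_index(ts, init_left, ticks_per_frame)
        if idx in right_by_index:
            pairs.append((idx, frm, right_by_index[idx]))
    return pairs

# Hypothetical streams: (rtp_timestamp, frame) tuples with different initial timestamps.
left = [(1000 + 3000 * i, f"L{i}") for i in range(5)]
right = [(777000 + 3000 * i, f"R{i}") for i in range(5)]
print(pair_stereo_frames(left, right, init_left=1000, init_right=777000))
```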

Implementing 3DoF+ 360 Video Compression System for Immersive Media (실감형 미디어를 위한 3DoF+ 360 비디오 압축 시스템 구현)

  • Jeong, Jong-Beom;Lee, Soonbin;Jang, Dongmin;Lee, Sangsoon;Ryu, Eun-Seok
    • Journal of Broadcast Engineering
    • /
    • v.24 no.5
    • /
    • pp.743-754
    • /
    • 2019
  • Systems for three degrees of freedom plus (3DoF+) and 6DoF require transmission of multi-view high-resolution 360 video to provide viewport-adaptive 360 video streaming. In this paper, we implement a 3DoF+ 360 video compression system that removes the redundancy between multi-view videos and merges the residuals into one video, so that high-quality 360 video corresponding to a user's head movement can be provided efficiently. The implementation of 3D-warping-based redundancy removal between 3DoF+ 360 videos and of residual extraction and merging is explained (a minimal residual-extraction sketch follows below). With the proposed system, a BD-rate reduction of up to 20.14% is shown compared to a conventional high-efficiency video coding (HEVC) based system.
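
To illustrate the residual extraction and merging step described above, the sketch below keeps only the blocks of an additional view that a warped prediction from the base view fails to explain, and packs them into one residual picture. The 3D warping itself, the block size, the threshold, and the packing layout are all simplifications, not the paper's implementation.

```python
import numpy as np

def extract_residual_blocks(actual_view, warped_prediction, block=16, threshold=2.0):
    """Keep blocks whose mean absolute residual against the warped prediction
    exceeds `threshold` (i.e. the parts the base view cannot predict)."""
    residual = actual_view.astype(np.int16) - warped_prediction.astype(np.int16)
    h, w = residual.shape
    kept = []
    for by in range(0, h - h % block, block):
        for bx in range(0, w - w % block, block):
            blk = residual[by:by + block, bx:bx + block]
            if np.abs(blk).mean() > threshold:
                kept.append(((by, bx), blk))
    return kept

def merge_residuals(kept_blocks, block=16, width=1024):
    """Pack the kept residual blocks row by row into one picture for encoding."""
    per_row = width // block
    rows = int(np.ceil(len(kept_blocks) / per_row)) or 1
    canvas = np.zeros((rows * block, width), dtype=np.int16)
    for i, (_, blk) in enumerate(kept_blocks):
        r, c = divmod(i, per_row)
        canvas[r * block:(r + 1) * block, c * block:(c + 1) * block] = blk
    return canvas
```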