• Title/Summary/Keyword: 3-D video


Coding Technology for Stereoscopic 3D Broadcasting (스테레오 3D 방송을 위한 비디오 부호화 기술)

  • Choe, Byeong-Ho;Kim, Yong-Hwan;Kim, Je-U;Park, Ji-Ho
    • Broadcasting and Media Magazine
    • /
    • v.15 no.1
    • /
    • pp.24-36
    • /
    • 2010
  • Nowadays, digital broadcasting providers plan to extend their services to 3D broadcasting without replacing conventional systems and equipment. Maintaining backward compatibility with the conventional 2D broadcasting system is a critical issue in digital broadcasting. To satisfy this requirement, a highly optimized MPEG-2 video encoder is essential for coding the left view, and new video coding techniques with higher performance than MPEG-4 AVC/H.264 are needed for the right view, since the terrestrial broadcasting system has a very limited and fixed bandwidth. In this paper, conventional and new video coding algorithms are analyzed to present a viable solution for best-quality stereoscopic 3D broadcasting that keeps backward compatibility within the available bandwidth.

A Trend Study on 2D to 3D Video Conversion Technology using Analysis of Patent Data (특허 분석을 통한 2D to 3D 영상 데이터 변환 기술 동향 연구)

  • Kang, Michael M.;Lee, Wookey;Lee, Rich. C.
    • Journal of Information Technology and Architecture
    • /
    • v.11 no.4
    • /
    • pp.495-504
    • /
    • 2014
  • This paper presents a strategy for intellectual property acquisition and a direction for core technology development based on an analysis of 2D-to-3D video conversion patent data. The analysis of patent trends shows that 2D-to-3D conversion is a very promising technology field. A strategic patent map built from this patent-trend research can help companies keep ahead of the competition in the 2D-to-3D image data conversion market.

2D to 3D Conversion Using The Machine Learning-Based Segmentation And Optical Flow (학습기반의 객체분할과 Optical Flow를 활용한 2D 동영상의 3D 변환)

  • Lee, Sang-Hak
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.11 no.3
    • /
    • pp.129-135
    • /
    • 2011
  • In this paper, we propose an algorithm for the 3D conversion of 2D video that uses optical flow and machine-learning-based segmentation. For segmentation that enables successful 3D conversion, we design a new energy function in which color/texture features are incorporated through a machine learning method and optical flow is introduced to focus on regions with motion. The depth map is then calculated according to the optical flow of the segmented regions, and left/right images for the 3D conversion are produced. Experiments on various videos show that the proposed method yields reliable segmentation results and depth maps for the 3D conversion of 2D video.
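The depth-from-motion and view-synthesis steps described in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's method: the segmentation energy function is omitted, the per-pixel flow magnitudes are assumed to be given, and a simple pixel-shift rendering stands in for a full depth-image-based rendering stage (disocclusion holes are left unfilled).

```python
import numpy as np

def depth_from_flow(flow_mag):
    """Normalize per-pixel motion magnitude into a [0, 1] depth map.
    Heuristic behind the idea: faster-moving regions are treated as nearer."""
    m = flow_mag.astype(np.float64)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def render_stereo(image, depth, max_disparity=4):
    """Synthesize a left/right pair by shifting each pixel horizontally
    in proportion to its depth (a toy depth-image-based rendering)."""
    h, w = image.shape[:2]
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = int(round(depth[y, x] * max_disparity))
            if x + d < w:
                left[y, x + d] = image[y, x]   # near pixels shift right in left view
            if x - d >= 0:
                right[y, x - d] = image[y, x]  # and left in the right view
    return left, right
```

A usage sketch: a region with nonzero flow receives maximum depth and is shifted by the full disparity in the synthesized views, while static background stays in place.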

A 3D Wavelet Coding Scheme for Light-weight Video Codec (경량 비디오 코덱을 위한 3D 웨이블릿 코딩 기법)

  • Lee, Seung-Won;Kim, Sung-Min;Park, Seong-Ho;Chung, Ki-Dong
    • The KIPS Transactions: Part B
    • /
    • v.11B no.2
    • /
    • pp.177-186
    • /
    • 2004
  • A weak point of motion-estimation-based video compression is that predictive video encoding algorithms require high computational complexity. To reduce the computational complexity of encoding, researchers have introduced techniques such as the 3D wavelet transform (3D-WT), which does not require motion prediction. One of the weakest points of previous 3D-WT studies is that they require too much memory for encoding and too long a delay for decoding. In this paper, we propose a technique called FS (Fast playable and Scalable) 3D-WT. This technique uses a modified Haar wavelet transform algorithm and an improved encoding algorithm to lower the memory and delay requirements. We ran tests to compare the performance of FS 3D-WT and 3D-V. FS 3D-WT exhibited the same high compression rate and the same short processing delay as 3D-V.
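A one-level 3-D Haar wavelet decomposition of the kind such codecs build on can be sketched as follows. This is a generic illustration, not the paper's modified FS 3D-WT algorithm: temporal filtering over a group of frames replaces motion prediction, followed by spatial filtering along each image axis (all dimensions assumed even).

```python
import numpy as np

def haar_1d(a, axis):
    """One Haar level along one axis: pairwise averages form the low band,
    pairwise half-differences form the high band."""
    a = np.moveaxis(a.astype(np.float64), axis, 0)
    even, odd = a[0::2], a[1::2]
    out = np.concatenate([(even + odd) / 2.0, (even - odd) / 2.0], axis=0)
    return np.moveaxis(out, 0, axis)

def ihaar_1d(a, axis):
    """Inverse of haar_1d: recombine low/high bands into the original samples."""
    a = np.moveaxis(a, axis, 0)
    n = a.shape[0] // 2
    low, high = a[:n], a[n:]
    out = np.empty_like(a)
    out[0::2], out[1::2] = low + high, low - high
    return np.moveaxis(out, 0, axis)

def haar_3d(group):
    """One decomposition level on a group of frames shaped (time, height, width):
    temporal first (no motion search), then vertical, then horizontal."""
    out = haar_1d(group, axis=0)
    out = haar_1d(out, axis=1)
    return haar_1d(out, axis=2)
```

The decomposition is lossless in floating point: applying the inverse transform along each axis recovers the original frame group exactly.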

An Analysis of Visual Fatigue Caused From Distortions in 3D Video Production (3D 영상의 제작 왜곡이 시청 피로도에 미치는 영향 분석)

  • Jang, Hyung-Jun;Kim, Yong-Goo
    • Journal of Broadcast Engineering
    • /
    • v.17 no.1
    • /
    • pp.1-16
    • /
    • 2012
  • In order to improve the workflow of 3D video production, this paper analyzes the visual fatigue caused by distortions in the 3D video production stage through a set of subjective visual assessment tests. To establish objective indicators for the subjective tests, various production-stage distortions are investigated and categorized into 7 representative visual-fatigue-producing factors. To conduct assessment tests for each category, 4 test video clips are produced by combining different extents of camera movement and object movement in the scene. Each test video is distorted to reflect each of the 7 visual-fatigue-producing factors, with 7 levels of distortion per factor, resulting in 196 5-second-long video clips for testing. Based on these test materials and the recommendation of ITU-R BT.1438, subjective visual assessment tests are conducted with 101 participants. The test results provide the relative importance and the tolerance limit of each visual-fatigue-producing factor, corresponding to the various distortions encountered in 3D video production.

Stereo Video Coding with Spatio-Temporal Scalability for Heterogeneous Collaboration Environments (이질적인 협업환경을 위한 시공간적 계위를 이용한 스테레오 비디오 압축)

  • Oh Sehchan;Lee Youngho;Woo Woontack
    • Journal of KIISE: Software and Applications
    • /
    • v.31 no.9
    • /
    • pp.1150-1160
    • /
    • 2004
  • In this paper, we propose a new 3D video coding method for heterogeneous display systems and network infrastructures over the enhanced Access Grid (e-AG), using the spatio-temporal scalability defined in MPEG-2. The proposed encoder produces several bit-streams that provide a temporally and spatially scalable 3D video service. The generated bit-streams can be delivered at the proper spatio-temporal resolution according to the network bandwidth and the processing speed and visualization capability of each client system. The proposed spatio-temporal scalability can be exploited to construct a highly scalable 3D video service in heterogeneous distributed environments.

3D Conversion of 2D Video Encoded by H.264

  • Hong, Ho-Ki;Ko, Min-Soo;Seo, Young-Ho;Kim, Dong-Wook;Yoo, Ji-Sang
    • Journal of Electrical Engineering and Technology
    • /
    • v.7 no.6
    • /
    • pp.990-1000
    • /
    • 2012
  • In this paper, we propose an algorithm that creates three-dimensional (3D) stereoscopic video from two-dimensional (2D) video encoded by H.264, instead of using two cameras as in the conventional approach. Very accurate motion vectors are available in H.264 bit-streams because of the variety of available block sizes. The proposed 2D/3D conversion algorithm creates left and right images using the extracted motion information. The type of a given image is first determined from the extracted motion information, and each image type is handled by a different conversion algorithm. Cut detection is also performed to prevent two totally different scenes from overlapping in the left and right images. Experimental results show the improved performance of the proposed algorithm.
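The cut-detection step mentioned in the abstract can be sketched with a simple histogram-difference detector. This is a common generic approach; the paper's actual detector and threshold are not specified here, so both are assumptions for illustration.

```python
import numpy as np

def histogram_difference(frame_a, frame_b, bins=16):
    """Normalized L1 distance between intensity histograms of two frames:
    0.0 for identical content distributions, 1.0 for disjoint ones."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    return np.abs(ha - hb).sum() / (2.0 * frame_a.size)

def detect_cuts(frames, threshold=0.5):
    """Return the indices i where a scene cut occurs between frame i-1 and
    frame i, i.e. where consecutive histograms differ strongly."""
    return [i for i in range(1, len(frames))
            if histogram_difference(frames[i - 1], frames[i]) > threshold]
```

In the conversion pipeline, a detected cut would reset the left/right pairing so that the two views are never taken from different scenes.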

New Texture Prediction for Multi-view Video Coding

  • Park, Ji-Ho;Kim, Yong-Hwan;Choi, Byeong-Ho
    • Proceedings of the Korean Information Display Society Conference
    • /
    • 2007.08b
    • /
    • pp.1508-1511
    • /
    • 2007
  • This paper introduces a new texture prediction for MVC (Multi-view Video Coding), which is currently being developed as an extension of ITU-T Recommendation H.264 | ISO/IEC International Standard 14496-10 AVC (Advanced Video Coding) [1]. MVC's primary target is 3D video compression for 3D display systems; thus, the key technology beyond 2D video compression is reducing inter-view correlation. It is noticed, however, that the current JMVM [2] does not effectively eliminate inter-view correlation, so there is still room to improve coding efficiency. The proposed method utilizes the similarity of inter-view residual signals and can provide an additional coding gain. Up to 0.2 dB PSNR gain with a 1.4% bit-rate saving is obtained for three multi-view test sequences.
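The core idea, exploiting the similarity of inter-view residual signals, can be sketched as a second-order prediction. This is an illustrative simplification, not the JMVM-integrated method: the residual already coded for a neighboring view predicts the current view's residual, and only the (smaller) difference needs to be coded.

```python
import numpy as np

def inter_view_predict(cur_residual, ref_view_residual):
    """Because residual signals of neighboring views are similar, code the
    difference between them (a second-order residual) instead of the raw one."""
    return cur_residual.astype(np.int16) - ref_view_residual.astype(np.int16)

def inter_view_reconstruct(coded, ref_view_residual):
    """Decoder side: add the reference view's residual back to recover the
    current view's residual exactly."""
    return coded + ref_view_residual.astype(np.int16)
```

When the two views' residuals are indeed similar, the second-order residual has smaller magnitude than the raw residual, which is where the coding gain would come from.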

Fast key-frame extraction for 3D reconstruction from a handheld video

  • Choi, Jongho;Kwon, Soonchul;Son, Kwangchul;Yoo, Jisang
    • International journal of advanced smart convergence
    • /
    • v.5 no.4
    • /
    • pp.1-9
    • /
    • 2016
  • In order to reconstruct a 3D model from a video sequence, it is essential to select key frames from which a geometric model can easily be estimated. This paper proposes a method for easily extracting informative frames from a handheld video. The method combines selection criteria based on determining an appropriate baseline between frames, frame jumping for fast searching through the video, geometric robust information criterion (GRIC) scores for the frame-to-frame homography and fundamental matrix, and blurry-frame removal. Experiments with videos taken in an indoor space show that the proposed method creates a more robust 3D point cloud than existing methods, even in the presence of motion blur and degenerate motions.
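The GRIC-based comparison between the homography and the fundamental matrix can be sketched as follows, using Torr's GRIC formulation. The per-correspondence residuals, the noise level, and the simple decision rule are assumptions for illustration; the paper's full criterion also involves baseline determination and blur checks.

```python
import math
import numpy as np

def gric(residuals, sigma, d, k, r=4, lam3=2.0):
    """Torr's Geometric Robust Information Criterion.
    residuals: per-correspondence geometric errors; sigma: assumed noise
    std-dev; d: model dimension (2 for a homography, 3 for a fundamental
    matrix); k: number of model parameters (8 for H, 7 for F);
    r: dimension of the data (4 for two-view point correspondences)."""
    n = len(residuals)
    lam1, lam2 = math.log(r), math.log(r * n)
    # robustly capped residual term, plus model-complexity penalties
    rho = np.minimum(np.square(residuals) / sigma**2, lam3 * (r - d))
    return float(rho.sum() + lam1 * d * n + lam2 * k)

def prefer_keyframe(res_h, res_f, sigma=1.0):
    """A frame pair is a good key-frame candidate when the fundamental matrix
    explains the matches better than a homography (GRIC_F < GRIC_H),
    i.e. the baseline is wide enough for 3-D structure."""
    return gric(res_f, sigma, d=3, k=7) < gric(res_h, sigma, d=2, k=8)
```

With a wide baseline the homography fits poorly (large `res_h`) and the pair is accepted; under near-planar or rotation-only motion both models fit, and the cheaper homography wins, rejecting the pair.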

A System for 3D Face Manipulation in Video (비디오 상의 얼굴에 대한 3차원 변형 시스템)

  • Park, Jungsik;Seo, Byung-Kuk;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.24 no.3
    • /
    • pp.440-451
    • /
    • 2019
  • We propose a system that allows three-dimensional manipulation of faces in video. The proposed system overlays a 3D face model, carrying the user's manipulation, on the face region of each video frame, and unlike existing applications or methods it allows 3D manipulation of the video in real time. To achieve this, the 3D morphable face model is first registered to the image. At the same time, the user's manipulation is applied to the registered model. Finally, the frame image is mapped onto the model as a texture, and the texture-mapped, deformed model is rendered. Since this process requires many operations, parallel processing is adopted for real-time performance: the system is divided into modules according to functionality, and each module runs in parallel on its own thread. Experimental results show that specific parts of a face in video can be manipulated in real time.
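The module-per-thread parallelization described above can be sketched with a generic queue-based pipeline. The stage functions standing in for registration, deformation, and rendering are placeholders, not the system's actual modules; the point is only that each stage runs on its own thread, so work on successive frames overlaps.

```python
import queue
import threading

def run_pipeline(frames, stages):
    """Run per-frame processing stages as a thread pipeline: each stage
    consumes from its input queue and feeds the next stage's queue, so
    stage i can process frame n+1 while stage i+1 processes frame n."""
    qs = [queue.Queue() for _ in range(len(stages) + 1)]
    done = object()  # sentinel marking the end of the stream

    def worker(fn, q_in, q_out):
        while True:
            item = q_in.get()
            if item is done:
                q_out.put(done)  # pass the sentinel downstream and exit
                return
            q_out.put(fn(item))

    threads = [threading.Thread(target=worker, args=(fn, qs[i], qs[i + 1]))
               for i, fn in enumerate(stages)]
    for t in threads:
        t.start()
    for f in frames:
        qs[0].put(f)
    qs[0].put(done)
    out = []
    while True:
        item = qs[-1].get()
        if item is done:
            break
        out.append(item)
    for t in threads:
        t.join()
    return out
```

Frame order is preserved because each stage has a single worker reading from a FIFO queue; in the real system the stages would be, e.g., model registration, deformation, and texture-mapped rendering.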