• Title/Summary/Keyword: depth conversion


Propriety Analysis of Depth-Map Production Methods for Depth-Map-Based 2D-to-3D Conversion - 'The Lost Bladesman' (2D to 3D Conversion에서 Depth-Map 기반 제작 사례연구 - '명장 관우' 제작 중심으로 -)

  • Kim, Hyo In;Kim, Hyung Woo
    • Smart Media Journal
    • /
    • v.3 no.1
    • /
    • pp.52-62
    • /
    • 2014
  • As stereoscopic 3D displays become widespread, demand for 3D content is increasing. Since 2010, 2D-to-3D conversion has been presented as an alternative to meet demand that 3D content production alone could not satisfy. However, converted content that emphasizes only the stereo effect has been criticized for visual fatigue and degraded quality. In this study, 13 scenes from the 2011 film 'The Lost Bladesman' were converted to 3D, and expert group interviews and surveys evaluated whether the Depth-Maps applied in the conversion were adequate in terms of 3D representation and visual fatigue. For footage with substantial motion, depth maps were produced in a cascade configuration, built by analyzing the relationships between preceding and following frames. In the experiments, the cascade Depth-Map configuration presented in this study lowered visual fatigue while producing reasonable 3D, and more than half of the expert group responded positively. The results indicate that applying the cascade Depth-Map configuration when converting 2D footage with rapid motion into 3D reduces visual fatigue, increases efficiency, and improves 3D perception.

Generation of Stereoscopic Image from 2D Image based on Saliency and Edge Modeling (관심맵과 에지 모델링을 이용한 2D 영상의 3D 변환)

  • Kim, Manbae
    • Journal of Broadcast Engineering
    • /
    • v.20 no.3
    • /
    • pp.368-378
    • /
    • 2015
  • 3D conversion technology has been studied over the past decades and integrated into commercial 3D displays and 3DTVs. 3D conversion plays an important role in the augmented functionality of three-dimensional television (3DTV) because it can easily provide 3D content. Generally, depth cues extracted from a static image are used to generate a depth map, followed by DIBR (Depth Image Based Rendering) to produce a stereoscopic image. However, except in some particular images, such depth cues are rare, so consistent depth-map quality cannot be guaranteed. It is therefore imperative to devise a 3D conversion method that produces satisfactory and consistent 3D for diverse video content. From this viewpoint, this paper proposes a novel method applicable to general types of images, utilizing saliency as well as edges. To generate a depth map, geometric perspective, an affinity model, and a binomial filter are used. In the experiments, the proposed method was applied to 24 video clips with a variety of content. A subjective test of 3D perception and visual fatigue validated satisfactory and comfortable viewing of the 3D content.
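The DIBR step that several of these abstracts rely on, shifting each pixel horizontally by a disparity derived from its depth value to synthesize left and right views, can be sketched as follows. This is a minimal illustration with assumed function and parameter names, not any paper's exact pipeline; practical DIBR also needs hole filling for the disoccluded pixels left at zero here.

```python
import numpy as np

def dibr_stereo(image, depth, max_disparity=16):
    """Synthesize left/right views by shifting pixels horizontally in
    proportion to depth (255 = nearest, shifted the most)."""
    h, w = depth.shape
    d = depth.astype(np.float32) / 255.0          # normalize to [0, 1]
    disparity = (d * max_disparity / 2).astype(np.int32)
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            xl = min(w - 1, x + disparity[y, x])  # shift right for left view
            xr = max(0, x - disparity[y, x])      # shift left for right view
            left[y, xl] = image[y, x]
            right[y, xr] = image[y, x]
    return left, right
```

Unfilled positions in the warped views (value 0) are the disocclusion holes that real renderers inpaint.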

Pattern-based Depth Map Generation for Low-complexity 2D-to-3D Video Conversion (저복잡도 2D-to-3D 비디오 변환을 위한 패턴기반의 깊이 생성 알고리즘)

  • Han, Chan-Hee;Kang, Hyun-Soo;Lee, Si-Woong
    • The Journal of the Korea Contents Association
    • /
    • v.15 no.2
    • /
    • pp.31-39
    • /
    • 2015
  • 2D-to-3D video conversion imparts 3D effects to a 2D video by generating stereoscopic views from depth cues inherent in the 2D video. This technology is a good solution to the problem of 3D content shortage during the transition to a mature 3D video era. In this paper, a low-complexity depth generation method for 2D-to-3D video conversion is presented. For temporal consistency of the global depth, a pattern-based depth generation method is newly introduced. A low-complexity refinement algorithm for local depth is also provided to improve 3D perception in object regions. Experimental results show that the proposed method outperforms conventional methods in terms of complexity and subjective quality.
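One plausible reading of the pattern-based global depth idea is to keep a few predefined global depth templates, pick one per scene, and reuse it across the scene's frames for temporal consistency. The template shapes and the correlation-based selection rule below are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

def global_depth_patterns(h, w):
    """A few predefined global depth templates (0 = far, 255 = near)."""
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
    bottom_near = 255 * yy / (h - 1)                # ground-plane scenes
    top_near = 255 * (1 - yy / (h - 1))             # e.g. overhead shots
    center_near = 255 * (1 - np.hypot(xx - w / 2, yy - h / 2)
                         / np.hypot(w / 2, h / 2))  # centered subject
    return {"bottom_near": bottom_near, "top_near": top_near,
            "center_near": center_near}

def select_pattern(frame_luma, patterns):
    """Pick the template most correlated with the frame's luminance
    layout (a crude stand-in for a real selection criterion)."""
    best, best_score = None, -np.inf
    for name, p in patterns.items():
        score = np.corrcoef(frame_luma.ravel(), p.ravel())[0, 1]
        if score > best_score:
            best, best_score = name, score
    return best
```

Because one template serves a whole scene, the global depth cannot flicker between frames, which is the temporal-consistency benefit the abstract claims.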

Applying differential techniques for 2D/3D video conversion to the objects grouped by depth information (2D/3D 동영상 변환을 위한 그룹화된 객체별 깊이 정보의 차등 적용 기법)

  • Han, Sung-Ho;Hong, Yeong-Pyo;Lee, Sang-Hun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.3
    • /
    • pp.1302-1309
    • /
    • 2012
  • In this paper, we propose applying differential 2D/3D video conversion techniques to objects grouped by depth information. One problem in converting 2D images to 3D by tracking pixel motion is that objects that do not move between adjacent frames yield no depth information. This can be solved by splitting the background and objects, extracting depth information from the motion vectors between objects, and then applying the relative height cue only to objects that have no motion information between frames. With this technique, the background and every object obtain their own depth information. The proposed method is used to generate depth maps for synthesizing 3D images with DIBR (Depth Image Based Rendering), and we verified that objects without inter-frame movement also received depth information.
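The differential assignment described above, depth from motion for moving objects with the relative-height cue as a fallback for static ones, can be illustrated roughly as below. The motion-to-depth scaling and the object labeling are invented for the sketch:

```python
import numpy as np

def depth_from_motion_and_height(motion_mag, labels):
    """Per-object depth: objects with motion get depth proportional to
    their mean motion magnitude; static objects fall back to the
    relative-height cue (lower rows in the frame = nearer)."""
    h, w = labels.shape
    height_cue = np.tile(np.linspace(0, 255, h)[:, None], (1, w))
    depth = np.zeros((h, w), dtype=np.float32)
    for obj in np.unique(labels):
        mask = labels == obj
        mean_motion = motion_mag[mask].mean()
        if mean_motion > 0:   # moving object: faster = nearer
            depth[mask] = min(255.0, 50.0 * mean_motion)
        else:                 # static object: use relative height
            depth[mask] = height_cue[mask]
    return depth
```

The key point matches the abstract: every labeled region ends up with some depth, even when its motion vectors are all zero.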

Adaptive Depth Fusion based on Reliability of Depth Cues for 2D-to-3D Video Conversion (2차원 동영상의 3차원 변환을 위한 깊이 단서의 신뢰성 기반 적응적 깊이 융합)

  • Han, Chan-Hee;Choi, Hae-Chul;Lee, Si-Woong
    • The Journal of the Korea Contents Association
    • /
    • v.12 no.12
    • /
    • pp.1-13
    • /
    • 2012
  • 3D video is regarded as next-generation content in numerous applications. 2D-to-3D video conversion technologies are strongly required to resolve the lack of 3D videos during the transition to a mature 3D video era. In 2D-to-3D conversion methods, the depth image of each scene in the 2D video is estimated, and stereoscopic video is then synthesized using DIBR (Depth Image Based Rendering) technologies. This paper proposes a novel depth fusion algorithm that integrates the multiple depth cues contained in 2D video to generate stereoscopic video. For proper depth fusion, each cue is checked for reliability in the current scene. Based on the results of these reliability tests, the scene is classified into one of four scene types, and scene-adaptive depth fusion combines the reliable depth cues to generate the final depth information. Simulation results show that each depth cue is utilized reasonably according to the scene type and that the final depth is generated from the cues that best represent the current scene.
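Reliability-gated fusion of this kind could be sketched as a weighted average over the cues that pass a reliability test. The per-cue threshold and reliability weighting below are assumptions; the paper instead classifies each scene into one of four types before fusing:

```python
import numpy as np

def fuse_depth(cues, reliability, threshold=0.5):
    """Fuse per-cue depth maps: keep only cues whose reliability score
    passes the threshold, then average them with reliability weights."""
    kept = [(d, r) for d, r in zip(cues, reliability) if r >= threshold]
    if not kept:
        raise ValueError("no reliable depth cue for this scene")
    total_w = sum(r for _, r in kept)
    fused = sum(d.astype(np.float32) * r for d, r in kept) / total_w
    return fused.astype(np.uint8)
```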

Conversion Method of 3D Point Cloud to Depth Image and Its Hardware Implementation (3차원 점군데이터의 깊이 영상 변환 방법 및 하드웨어 구현)

  • Jang, Kyounghoon;Jo, Gippeum;Kim, Geun-Jun;Kang, Bongsoon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.10
    • /
    • pp.2443-2450
    • /
    • 2014
  • In motion recognition systems using depth images, the depth image is converted into real-world 3D point cloud data so that algorithms can be applied efficiently, and after the algorithms run, the result is converted back into a depth image in projective coordinates. These coordinate conversions, however, introduce rounding errors and data loss from the applied algorithms. In this paper, we propose an efficient method for converting 3D point cloud data back into a depth image without rounding error or data loss under image size changes, together with its hardware implementation. The system was developed as a Windows program using OpenCV and tested in real time with a Kinect. In addition, it was designed in Verilog-HDL and verified on a Xilinx Zynq-7000 FPGA board.
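The projective-coordinate conversion in question amounts to projecting each 3D point through a pinhole camera model and keeping the nearest point per pixel. A minimal software sketch (intrinsics and names assumed; the integer rounding of u and v is exactly the error source the paper targets):

```python
import numpy as np

def pointcloud_to_depth(points, fx, fy, cx, cy, width, height):
    """Project 3D points (x, y, z in camera coordinates, z > 0) into a
    depth image with a pinhole model; keep the nearest point per pixel."""
    depth = np.full((height, width), np.inf, dtype=np.float32)
    for x, y, z in points:
        if z <= 0:
            continue
        u = int(round(fx * x / z + cx))  # rounding here loses precision
        v = int(round(fy * y / z + cy))
        if 0 <= u < width and 0 <= v < height:
            depth[v, u] = min(depth[v, u], z)
    depth[np.isinf(depth)] = 0.0         # unmapped pixels marked empty
    return depth
```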

Experimental Study on the Relationships between Earthwork Volumes and Soil Conversion Factor with Depth (심도별 토량환산계수와 토공량 변화에 관한 실험적 연구)

  • Gichun Kang;Kyoungchul Shin;Seong-kyu Yun
    • Land and Housing Review
    • /
    • v.14 no.3
    • /
    • pp.137-144
    • /
    • 2023
  • The volume of soil that is cut, transported, and placed as fill in a project area changes depending on the condition of the soil; the volume change rate is conventionally calculated from undisturbed samples collected from test pits at depths of 1 to 2 m below the surface. In this study, large-scale field tests were carried out. In areas with excavation depths of 10 m or more, errors arise when the soil conversion factor for the 1 to 2 m depth is applied uniformly in calculating the soil volume. According to the field tests, earthwork volumes computed with depth-specific soil conversion factors were 3.9 to 9.4% larger than volumes computed with the 2 m depth factor applied uniformly.
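As a worked illustration of why a depth-specific factor matters, consider a hypothetical three-layer cut; the volumes and conversion factors below are invented for the example, not the paper's data:

```python
def earthwork_volume(layers, factors):
    """Bank volume -> converted volume, applying a conversion factor to
    each depth layer instead of one uniform factor."""
    return sum(v * f for v, f in zip(layers, factors))

# hypothetical cut: three 1,000 m^3 layers, looser soil near the surface
layers = [1000.0, 1000.0, 1000.0]
per_depth = earthwork_volume(layers, [1.25, 1.20, 1.15])  # depth-specific
uniform = earthwork_volume(layers, [1.15, 1.15, 1.15])    # 2 m factor only
increase = (per_depth / uniform - 1) * 100  # about 4.3% more volume here
```

With these made-up figures the depth-specific total is 3,600 m³ versus 3,450 m³ uniformly, a gap of the same character as the 3.9 to 9.4% the field tests report.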

A New Copyright Protection Scheme for Depth Map in 3D Video

  • Li, Zhaotian;Zhu, Yuesheng;Luo, Guibo;Guo, Biao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.7
    • /
    • pp.3558-3577
    • /
    • 2017
  • In the 2D-to-3D video conversion process, virtual left and right views can be generated from a 2D video and its corresponding depth map by depth image based rendering (DIBR). The depth map plays an important role in the conversion system, so copyright protection for the depth map is necessary. The provided virtual views may be distributed illegally, while the depth map itself is never directly exposed to viewers. In previous works, copyright information embedded into the depth map could not be extracted from the virtual views after the DIBR process. In this paper, a new copyright protection scheme for the depth map is proposed, in which the copyright information can be detected from the virtual views even without the depth map. The experimental results show that the proposed method is robust against JPEG attacks, filtering, and noise.

Conversion from SIMS depth profiling to compositional depth profiling of multi-layer films

  • Jang, Jong-Shik;Hwang, Hye-Hyen;Kang, Hee-Jae;Kim, Kyung-Joong
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2011.02a
    • /
    • pp.347-347
    • /
    • 2011
  • Secondary ion mass spectrometry (SIMS) is attractive for quantitative analysis and depth profiling, and is well suited to the in-depth analysis of multi-layer films. Precise determination of the interfaces of multi-layer films is important both for converting an original SIMS depth profile into a compositional depth profile and for investigating the structure of multi-layer films. However, the interface between two species in a SIMS depth profile is distorted from the original structure by several effects of sputtering with energetic ions. In this study, the feasibility of the 50 atomic % definition for determining the interface between two species in SIMS depth profiles of multi-layer films was investigated with Si/Ge and Ti/Si multi-layer films. The original SIMS depth profiles were converted into compositional depth profiles using relative sensitivity factors obtained from Si-Ge and Si-Ti alloy reference films. The atomic compositions of the Si-Ge and Si-Ti alloy films were determined by Rutherford backscattering spectrometry (RBS).
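Once a profile is in atomic %, the 50 atomic % interface definition reduces to finding where the composition curve crosses 50%. A small sketch with assumed names, using linear interpolation between the bracketing measurement points:

```python
def interface_depth(depths, atomic_percent, level=50.0):
    """Locate an interface in a compositional depth profile: the depth
    where a species' concentration crosses `level` atomic %, found by
    linear interpolation between the bracketing profile points."""
    for i in range(len(depths) - 1):
        c0, c1 = atomic_percent[i], atomic_percent[i + 1]
        if (c0 - level) * (c1 - level) <= 0 and c0 != c1:
            d0, d1 = depths[i], depths[i + 1]
            return d0 + (level - c0) * (d1 - d0) / (c1 - c0)
    return None  # no crossing found
```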

Computational Integral Imaging Reconstruction of 3D Object Using a Depth Conversion Technique

  • Shin, Dong-Hak;Kim, Eun-Soo
    • Journal of the Optical Society of Korea
    • /
    • v.12 no.3
    • /
    • pp.131-135
    • /
    • 2008
  • Computational integral imaging (CII) has the advantage of generating volumetric information of a 3D scene without optical devices. However, the reconstruction process of CII requires ever larger reconstructed images, and the computational cost grows as the distance between the lenslet array and the reconstructed output plane increases. In this paper, to overcome this problem, we propose a novel CII method using a depth conversion technique. The proposed method can move a far 3D object near the lenslet array and thereby reduce the computational cost dramatically. To show the usefulness of the proposed method, we carried out preliminary experiments, whose results are presented.