• Title/Summary/Keyword: Depth image-based rendering

Model-Based Three-dimensional Multiview Object Implementation by OpenGL (OpenGL을 이용한 모델 기반 3차원 다시점 객체 구현)

  • Oh, Won-Sik;Kim, Dong-Uk;Kim, Hwa-Sung;Yoo, Ji-Sang
    • Journal of Broadcast Engineering / v.13 no.3 / pp.299-309 / 2008
  • In this paper, we propose an algorithm for generating a viewpoint-adaptive object from model-based three-dimensional multi-viewpoint images using OpenGL rendering. In the first step, we preprocess the depth map to obtain three-dimensional coordinates, which are sampled as OpenGL vertex information with the z-value taken from the depth. Next, the Delaunay triangulation algorithm constructs polygons for texture mapping from this vertex information. Finally, by mapping the texture image onto the constructed polygons, we generate the viewpoint-adaptive object from the 3D coordinates computed in OpenGL.
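
A minimal sketch of the first two steps described above (back-projecting the depth map to 3D vertices and triangulating them for texture mapping), assuming a simple pinhole camera; the intrinsics, sampling step, and helper name depth_to_mesh are illustrative, not from the paper:

```python
# Hypothetical sketch: back-project a sampled depth map to 3D vertices and
# triangulate them for texture mapping. fx/fy/cx/cy and `step` are assumed.
import numpy as np
from scipy.spatial import Delaunay

def depth_to_mesh(depth, fx=525.0, fy=525.0, cx=None, cy=None, step=4):
    """Sample the depth map on a grid, back-project with a pinhole model, triangulate."""
    h, w = depth.shape
    cx = w / 2.0 if cx is None else cx
    cy = h / 2.0 if cy is None else cy
    vs, us = np.mgrid[0:h:step, 0:w:step]                # sampled pixel grid
    z = depth[vs, us].astype(np.float64)                 # z-value = depth information
    x = (us - cx) * z / fx                               # pinhole back-projection
    y = (vs - cy) * z / fy
    vertices = np.stack([x.ravel(), y.ravel(), z.ravel()], axis=1)
    texcoords = np.stack([us.ravel() / (w - 1), 1.0 - vs.ravel() / (h - 1)], axis=1)
    tri = Delaunay(np.stack([us.ravel(), vs.ravel()], axis=1))  # triangulate in the image plane
    return vertices, texcoords, tri.simplices

if __name__ == "__main__":
    demo_depth = np.random.uniform(1.0, 3.0, (120, 160))   # synthetic depth in meters
    v, uv, faces = depth_to_mesh(demo_depth)
    print(v.shape, uv.shape, faces.shape)
```

The returned vertex positions, texture coordinates, and triangle indices would then be uploaded to OpenGL vertex and index buffers for texture-mapped rendering.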

Interactive Haptic Deformation and Material Property Modeling Algorithm (인터랙티브 햅틱 변형 및 재질감 모델링 알고리즘)

  • Lee, Beom-Chan;Kim, Jong-Phil;Park, Hye-Shin;Ryu, Je-Ha
    • Proceedings of the Korean HCI Society Conference / 2007.02a / pp.1-7 / 2007
  • This paper proposes an algorithm for directly deforming real face data acquired with a 3D scanner through haptic interaction and for modeling its material properties. Building on a graphics-hardware-based haptic rendering algorithm, the proposed method deforms the acquired 2.5D face data with a mass-spring model and models the material properties of the face (stiffness, friction, roughness). For deformation with a haptic device, a nearest-neighbor search based on a k-d tree spatial partitioning structure efficiently locates the deformation region, and a mass-spring model applied at each point computes the reaction force and deforms the object realistically. In addition, for material property modeling, we present a method for editing the roughness, stiffness, and friction of a virtual object using a depth-image-based representation (DIBR), and we propose an algorithm that applies the edited material properties directly to the object surface during rendering.
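
A hedged sketch of two ingredients named in the abstract, a k-d tree nearest-neighbor lookup around the haptic probe and a per-point spring force; the stiffness, search radius, and class name SpringSurface are assumptions for illustration only:

```python
# Hypothetical sketch: k-d tree lookup of the deformation region plus a
# per-point Hooke spring for the reaction force. Values are illustrative.
import numpy as np
from scipy.spatial import cKDTree

class SpringSurface:
    def __init__(self, points, stiffness=200.0, radius=0.02):
        self.rest = points.copy()          # rest positions of the 2.5D face points
        self.pos = points.copy()
        self.k = stiffness
        self.radius = radius
        self.tree = cKDTree(self.rest)     # spatial partitioning for fast lookup

    def interact(self, probe):
        """Displace points near the haptic probe and return a reaction force."""
        idx = self.tree.query_ball_point(probe, self.radius)
        force = np.zeros(3)
        for i in idx:
            push = probe - self.rest[i]
            self.pos[i] = self.rest[i] + 0.5 * push          # simple local deformation
            force += -self.k * (self.pos[i] - self.rest[i])  # Hooke's law per point
        return force

if __name__ == "__main__":
    cloud = np.random.rand(5000, 3) * 0.1    # synthetic stand-in for scanned face points
    surf = SpringSurface(cloud)
    print(surf.interact(np.array([0.05, 0.05, 0.05])))
```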

System Implementation for Generating Virtual View Digital Holographic using Vertical Rig (수직 리그를 이용한 임의시점 디지털 홀로그래픽 생성 시스템 구현)

  • Koo, Ja-Myung;Lee, Yoon-Hyuk;Seo, Young-Ho;Kim, Dong-Wook
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2012.11a / pp.46-49 / 2012
  • In this paper, we propose a system that acquires RGB and depth images of the same viewpoint and resolution, containing the coordinates and color information of an object, and generates a digital hologram at a virtual viewpoint, the ultimate goal of 3D video processing. First, multi-view RGB and depth images sharing the same viewpoints are captured using a cold mirror, whose transmittance differs between the visible and infrared wavelengths. After a calibration step that removes the various lens distortions of the camera system, the RGB and depth images, which have different resolutions, are rescaled to a common resolution. Next, a depth-image-based rendering (DIBR) algorithm generates the depth and RGB images of the desired virtual viewpoint. The depth information is then used to extract only the object to be rendered as a digital hologram. Finally, a computer-generated hologram (CGH) algorithm converts the extracted virtual-viewpoint object into a digital hologram.
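
A minimal sketch of the last two stages described above (depth-threshold object extraction and a point-source, Fresnel-approximation CGH sum); the wavelength, pixel pitch, thresholds, and object-plane scaling are assumed values, and the paper's actual CGH implementation may differ:

```python
# Hypothetical sketch: extract the foreground object by depth and accumulate
# point-source Fresnel phase contributions into a hologram plane.
import numpy as np

WAVELENGTH = 532e-9                    # assumed green laser wavelength (m)
PITCH = 10e-6                          # assumed hologram pixel pitch (m)

def extract_object(rgb, depth, z_near, z_far):
    """Keep only pixels whose depth lies inside the object's depth range."""
    mask = (depth > z_near) & (depth < z_far)
    ys, xs = np.nonzero(mask)
    amp = rgb[ys, xs].mean(axis=1) / 255.0          # use luminance as amplitude
    return xs, ys, depth[ys, xs], amp

def point_source_cgh(xs, ys, zs, amp, holo_size=256):
    """Accumulate the Fresnel-approximation contribution of each object point."""
    u = (np.arange(holo_size) - holo_size / 2) * PITCH
    U, V = np.meshgrid(u, u)
    k = 2 * np.pi / WAVELENGTH
    holo = np.zeros((holo_size, holo_size))
    for x, y, z, a in zip(xs, ys, zs, amp):
        px, py = (x - 64) * PITCH, (y - 64) * PITCH  # assumed object-plane scaling
        holo += a * np.cos(k * ((U - px) ** 2 + (V - py) ** 2) / (2 * z))
    return holo

if __name__ == "__main__":
    rgb = np.random.randint(0, 255, (128, 128, 3))
    depth = np.random.uniform(0.2, 1.0, (128, 128))   # synthetic depth in meters
    xs, ys, zs, amp = extract_object(rgb, depth, 0.3, 0.5)
    print(point_source_cgh(xs[:200], ys[:200], zs[:200], amp[:200]).shape)
```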

Free view video synthesis using multi-view 360-degree videos (다시점 360도 영상을 사용한 자유시점 영상 생성 방법)

  • Cho, Young-Gwang;Ahn, Heejune
    • Proceedings of the Korea Information Processing Society Conference / 2020.05a / pp.600-603 / 2020
  • A 360-degree video supports three degrees of freedom (3DoF), allowing the viewer to choose the viewing direction. In this study, we propose a six-degrees-of-freedom (6DoF) video production technique that obtains depth information from multiple 360-degree videos and uses depth-image-based rendering (DIBR) to provide free-viewpoint viewing. To this end, we extend existing planar multi-view techniques to design and implement camera parameter estimation and depth map extraction from 360-degree ERP-projected images and evaluate their performance, and we apply DIBR using the OpenGL-based RVS (Reference View Synthesizer) library.
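
A small sketch of the geometry underlying 360-degree DIBR as described above: ERP pixels plus depth are lifted to 3D points and reprojected into the ERP image of a translated virtual camera. The baseline and image size are illustrative assumptions, and a full pipeline such as RVS also handles blending and hole filling:

```python
# Hypothetical sketch of ERP <-> 3D conversions used in 360-degree DIBR.
import numpy as np

def erp_to_points(depth):
    """Map each ERP pixel to a 3D point: longitude/latitude -> unit ray * depth."""
    h, w = depth.shape
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi          # -pi .. pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi          # +pi/2 .. -pi/2
    LON, LAT = np.meshgrid(lon, lat)
    dirs = np.stack([np.cos(LAT) * np.sin(LON),
                     np.sin(LAT),
                     np.cos(LAT) * np.cos(LON)], axis=-1)
    return dirs * depth[..., None]

def points_to_erp(points, h, w):
    """Project 3D points (in the virtual camera frame) back to ERP pixel coordinates."""
    x, y, z = points[..., 0], points[..., 1], points[..., 2]
    r = np.linalg.norm(points, axis=-1) + 1e-9
    lon = np.arctan2(x, z)
    lat = np.arcsin(np.clip(y / r, -1, 1))
    u = (lon + np.pi) / (2 * np.pi) * w
    v = (np.pi / 2 - lat) / np.pi * h
    return u, v, r                                # r becomes the new depth

if __name__ == "__main__":
    depth = np.random.uniform(1.0, 5.0, (180, 360))
    pts = erp_to_points(depth)
    u, v, new_depth = points_to_erp(pts - np.array([0.1, 0.0, 0.0]), 180, 360)  # assumed 10 cm baseline
    print(u.shape, v.shape)
```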

Consider the directional hole filling method for virtual view point synthesis (가상 시점 영상 합성을 위한 방향성 고려 홀 채움 방법)

  • Mun, Ji Hun;Ho, Yo Sung
    • Smart Media Journal / v.3 no.4 / pp.28-34 / 2014
  • Recently, the depth-image-based rendering (DIBR) method has been widely used in 3D imaging applications. A virtual view image is created by 3D-warping a known view, together with its associated depth map, to a virtual viewpoint that was not captured by a camera. However, because the virtual view is produced by depth-based 3D warping, disocclusion areas appear. Many hole filling methods have been proposed to remove such disocclusion regions, including constant-color region searching, horizontal interpolation, horizontal extrapolation, and variational inpainting, but they produce various annoying artifacts when filling holes in textured regions. To solve this problem, this paper proposes a multi-directional extrapolation method that improves hole filling performance and is particularly effective for holes over backgrounds with complex texture. The direction-aware method estimates each hole pixel value from the neighboring texture pixels along several directions. Experimental results confirm that the proposed method fills the hole regions generated by virtual view synthesis more effectively.
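
A hedged sketch of directional extrapolation for disocclusion filling: each hole pixel walks along several directions to the nearest known pixel and takes the candidate from the background side. The direction set and the background-selection rule are illustrative, not the paper's exact weighting:

```python
# Hypothetical sketch of multi-directional hole filling for DIBR disocclusions.
import numpy as np

DIRS = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]

def fill_holes_directional(color, depth, hole_mask, max_steps=50):
    out = color.copy()
    h, w = hole_mask.shape
    for y, x in zip(*np.nonzero(hole_mask)):
        candidates = []
        for dy, dx in DIRS:
            ny, nx = y, x
            for _ in range(max_steps):                 # walk until a known pixel is hit
                ny, nx = ny + dy, nx + dx
                if not (0 <= ny < h and 0 <= nx < w):
                    break
                if not hole_mask[ny, nx]:
                    candidates.append((depth[ny, nx], out[ny, nx]))
                    break
        if candidates:
            # prefer the background side (larger depth = farther, under this convention)
            out[y, x] = max(candidates, key=lambda c: c[0])[1]
    return out

if __name__ == "__main__":
    color = np.random.rand(64, 64, 3)
    depth = np.random.rand(64, 64)
    mask = np.zeros((64, 64), dtype=bool)
    mask[20:30, 20:30] = True                          # synthetic disocclusion
    print(fill_holes_directional(color, depth, mask).shape)
```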

Real-time Stereo Video Generation using Graphics Processing Unit (GPU를 이용한 실시간 양안식 영상 생성 방법)

  • Shin, In-Yong;Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.16 no.4 / pp.596-601 / 2011
  • In this paper, we propose a fast depth-image-based rendering method that generates virtual view images in real time on a graphics processing unit (GPU) for a 3D broadcasting system. Before transmission, the input 2D-plus-depth video is encoded with the H.264 coding standard. At the receiver, we decode the received bitstream and generate a stereo video on a GPU, which computes in parallel. We apply a simple and efficient hole filling method that reduces both decoder complexity and hole filling errors. In addition, we design a vertically parallel structure for the forward mapping process to take advantage of the single-instruction multiple-thread architecture of the GPU, and we use high-speed GPU memory to boost computation. As a result, we can generate virtual view images 15 times faster than CPU-based processing.
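
A CPU sketch of the forward-mapping step that the paper parallelizes on the GPU: each pixel of the 2D-plus-depth input is shifted horizontally by a disparity derived from its depth, with a z-test so that nearer pixels win. The linear depth-to-disparity model is an assumption:

```python
# Hypothetical sketch of DIBR forward mapping (3D warping) with a z-buffer.
import numpy as np

def forward_map(color, depth, scale=20.0):
    """Warp the input view to a virtual view by per-pixel horizontal disparity."""
    h, w, _ = color.shape
    out = np.zeros_like(color)
    zbuf = np.full((h, w), -np.inf)
    disparity = (scale * depth).astype(int)        # assumed linear depth-to-disparity model
    for y in range(h):                              # rows are independent -> easy to parallelize
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w and depth[y, x] > zbuf[y, nx]:
                zbuf[y, nx] = depth[y, x]           # keep the nearest sample
                out[y, nx] = color[y, x]
    return out, zbuf == -np.inf                     # warped image + hole mask

if __name__ == "__main__":
    color = np.random.rand(72, 96, 3)
    depth = np.random.rand(72, 96)                  # here a larger value means nearer
    warped, holes = forward_map(color, depth)
    print(warped.shape, int(holes.sum()), "hole pixels")
```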

Template-Based Object-Order Volume Rendering with Perspective Projection (원형기반 객체순서의 원근 투영 볼륨 렌더링)

  • Koo, Yun-Mo;Lee, Cheol-Hi;Shin, Yeong-Gil
    • Journal of KIISE: Computer Systems and Theory / v.27 no.7 / pp.619-628 / 2000
  • Perspective views provide a powerful depth cue and thus aid the interpretation of complicated images. The main drawback of current perspective volume rendering is the long execution time. In this paper, we present an efficient perspective volume rendering algorithm based on coherency between rays. Two sets of templates are built for the rays cast from horizontal and vertical scanlines in the intermediate image, which is parallel to one of the volume faces. Each sample along a ray is calculated by interpolating neighboring voxels with the pre-computed weights in the templates. We also solve the problem of uneven sampling rate due to perspective ray divergence by building more templates for the regions far away from the viewpoint. Since our algorithm operates in object order, it can avoid redundant access to each voxel and exploit spatial data coherency by using a run-length encoded volume. Experimental results show that the use of templates and the object-order processing with a run-length encoded volume provide speedups compared to other approaches. Additionally, the image quality of our algorithm improves by resolving the uneven sampling rate caused by perspective ray divergence.
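
A small sketch of one ingredient from this abstract, run-length encoding the classified volume so that object-order traversal can skip transparent voxels. The opacity threshold and toy volume are assumptions, and the template construction and resampling steps are not reproduced here:

```python
# Hypothetical sketch: run-length encode non-transparent voxel runs per scanline.
import numpy as np

def run_length_encode(volume, threshold=0.05):
    """Encode each voxel scanline as (start, length) runs of non-transparent voxels."""
    runs = []
    for z in range(volume.shape[0]):
        for y in range(volume.shape[1]):
            opaque = (volume[z, y] > threshold).astype(np.int8)
            edges = np.flatnonzero(np.diff(np.concatenate(([0], opaque, [0]))))
            starts, ends = edges[0::2], edges[1::2]
            runs.append(((z, y), [(int(s), int(e - s)) for s, e in zip(starts, ends)]))
    return runs

if __name__ == "__main__":
    vol = np.zeros((8, 8, 64))
    vol[:, :, 20:40] = 0.8                       # a slab of opaque material
    encoded = run_length_encode(vol)
    print(encoded[0])                            # first scanline: ((0, 0), [(20, 20)])
```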

Analysis of Relationship between Objective Performance Measurement and 3D Visual Discomfort in Depth Map Upsampling (깊이맵 업샘플링 방법의 객관적 성능 측정과 3D 시각적 피로도의 관계 분석)

  • Gil, Jong In;Mahmoudpour, Saeed;Kim, Manbae
    • Journal of Broadcast Engineering / v.19 no.1 / pp.31-43 / 2014
  • A depth map is an important component of stereoscopic image generation. Since the depth map acquired from a depth camera has a low resolution, upsampling a low-resolution depth map to a high-resolution one has been studied for the past decades. Upsampling methods are evaluated with objective tools such as PSNR, sharpness degree, and blur metric, and the subjective quality is compared using virtual views generated by depth-image-based rendering (DIBR). However, relatively little work has analyzed the relationship between depth map upsampling and the resulting stereoscopic images. In this paper, we investigate the relationship between the subjective evaluation of stereoscopic images and the objective performance of upsampling methods using cross correlation and linear regression. Experimental results demonstrate that edge PSNR has the highest correlation with visual fatigue and the blur metric the lowest. Further, from the linear regression we obtain the relative weights of the objective measurements and introduce a formula that can estimate the 3D performance of conventional or new upsampling methods.
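
A hedged sketch of the analysis described: correlating objective upsampling metrics with subjective discomfort scores and fitting a linear model to obtain relative weights. The data below are synthetic placeholders, not the paper's measurements:

```python
# Hypothetical sketch: cross correlation and multiple linear regression between
# objective quality metrics and subjective visual-fatigue scores.
import numpy as np

rng = np.random.default_rng(0)
n = 12                                                  # assumed number of test sequences
edge_psnr = rng.uniform(25, 40, n)
sharpness = rng.uniform(5, 15, n)
blur = rng.uniform(0.2, 0.6, n)
fatigue = 5 - 0.1 * edge_psnr + rng.normal(0, 0.2, n)   # synthetic subjective scores

def pearson(a, b):
    return float(np.corrcoef(a, b)[0, 1])

print("corr(edge PSNR, fatigue):", pearson(edge_psnr, fatigue))
print("corr(blur metric, fatigue):", pearson(blur, fatigue))

# Multiple linear regression: fatigue ~ w0 + w1*edgePSNR + w2*sharpness + w3*blur
X = np.column_stack([np.ones(n), edge_psnr, sharpness, blur])
weights, *_ = np.linalg.lstsq(X, fatigue, rcond=None)
print("relative weights:", weights)
```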

Digital Watermarking on Image for View-point Change and Malicious Attacks (영상의 시점변화와 악의적 공격에 대한 디지털 워터마킹)

  • Kim, Bo-Ra;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of Broadcast Engineering / v.19 no.3 / pp.342-354 / 2014
  • This paper deals with digital watermarking methods for protecting the ownership of images, targeting ultra-high multi-view or free-view image services in which an arbitrary viewpoint image is rendered at the user side. Its main purpose is not to propose a method superior to previous ones but to show how difficult it is to construct a watermarking scheme that overcomes the viewpoint-translation attack. We therefore target images subjected to various attacks, including viewpoint translation. The paper first shows how high the error rate of the watermark data extracted from a viewpoint-translated image is for two basic 2D-image schemes, one using the 2D discrete cosine transform (2D DCT) and one using the 2D discrete wavelet transform (2D DWT). Because the difficulty of watermarking a viewpoint-translated image comes from not knowing the translated viewpoint, we propose a scheme that finds the translated viewpoint using the image and the corresponding depth information at the original viewpoint. This scheme is used to construct the two proposed non-blind watermarking methods and to show that recovering the viewpoint greatly affects the error rate of the extracted watermark. A comparison with previous methods also shows that the proposed methods are better in invisibility and robustness, even though they are non-blind.
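
A generic block-DCT watermarking sketch in the spirit of the 2D DCT baseline the paper evaluates: one bit is embedded per 8x8 block by forcing the sign of a mid-frequency coefficient. The block size, coefficient position, and strength are assumptions, and such a scheme is not robust to viewpoint translation, which is the paper's point:

```python
# Hypothetical sketch of sign-based watermark embedding in 8x8 DCT blocks.
import numpy as np
from scipy.fft import dctn, idctn

BLOCK, POS, ALPHA = 8, (3, 4), 8.0      # assumed block size, coefficient, strength

def embed(image, bits):
    out = image.astype(float).copy()
    blocks_per_row = image.shape[1] // BLOCK
    for i, bit in enumerate(bits):
        y = (i // blocks_per_row) * BLOCK
        x = (i % blocks_per_row) * BLOCK
        block = dctn(out[y:y + BLOCK, x:x + BLOCK], norm="ortho")
        block[POS] = (abs(block[POS]) + ALPHA) * (1 if bit else -1)  # sign carries the bit
        out[y:y + BLOCK, x:x + BLOCK] = idctn(block, norm="ortho")
    return out

def extract(image, n_bits):
    blocks_per_row = image.shape[1] // BLOCK
    bits = []
    for i in range(n_bits):
        y = (i // blocks_per_row) * BLOCK
        x = (i % blocks_per_row) * BLOCK
        block = dctn(image[y:y + BLOCK, x:x + BLOCK].astype(float), norm="ortho")
        bits.append(int(block[POS] > 0))
    return bits

if __name__ == "__main__":
    img = np.random.rand(64, 64) * 255
    payload = [1, 0, 1, 1, 0, 0, 1, 0]
    marked = embed(img, payload)
    print(extract(marked, len(payload)))   # recovers the payload when no attack is applied
```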

3D Head Modeling using Depth Sensor

  • Song, Eungyeol;Choi, Jaesung;Jeon, Taejae;Lee, Sangyoun
    • Journal of International Society for Simulation Surgery / v.2 no.1 / pp.13-16 / 2015
  • Purpose: We studied the reconstruction of head shape in 3D using a ToF depth sensor. A time-of-flight (ToF) camera is a range imaging camera system that resolves distance based on the known speed of light, measuring the time of flight of a light signal between the camera and the subject for each point of the image. This is a safe way to measure the head shape of plagiocephaly patients in 3D. The texture, appearance, and size of the head were reconstructed from the measured data, and we used the SDF method for a precise reconstruction. Materials and Methods: To generate a precise model, a mesh was generated using marching cubes and an SDF. Results: The ground truth was determined by measuring the heads of 10 participants three times each, and the corresponding part of the 3D model created in this experiment was measured as well. The actual head circumference and the reconstructed model were measured according to the layer 3 standard, and the measurement errors were calculated. As a result, we obtained accurate results with an average error of 0.9 cm (standard deviation 0.9, min 0.2, max 1.4). Conclusion: The suggested method completes the 3D model while minimizing errors, and the model is effective for quantitative and objective evaluation. However, because measurements were made according to the layer 3 standard, the measurement range somewhat lacks the 3D information needed to manufacture protective helmets. The measurement range will therefore need to be widened, by scanning the entire head, to enable the production of more precise and protective helmets in the future.
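
A minimal sketch of the meshing step described (marching cubes over a signed distance field), with a sphere SDF standing in for the fused ToF head scan; the grid size and radius are illustrative, and scikit-image is assumed to be available:

```python
# Hypothetical sketch: extract a triangle mesh from a signed distance field
# with marching cubes. A sphere SDF stands in for the fused ToF depth data.
import numpy as np
from skimage import measure

def sphere_sdf(size=64, radius=0.35):
    """Signed distance to a sphere sampled on a regular grid in [0, 1]^3."""
    axis = np.linspace(0.0, 1.0, size)
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    return np.sqrt((x - 0.5) ** 2 + (y - 0.5) ** 2 + (z - 0.5) ** 2) - radius

if __name__ == "__main__":
    sdf = sphere_sdf()
    # extract the zero level set of the SDF as a triangle mesh
    verts, faces, normals, values = measure.marching_cubes(sdf, level=0.0)
    print(verts.shape, faces.shape)
```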