• Title/Summary/Keyword: Depth image-based rendering


Object Segmentation Using Depth Map (깊이 맵을 이용한 객체 분리 방법)

  • Yu, Kyung-Min;Cho, Yongjoo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2013.10a
    • /
    • pp.639-640
    • /
    • 2013
  • In this study, we propose a new method that finds the region containing objects of interest, in order to generate DIBR-based intermediate images of higher quality. The method complements the existing segmentation algorithm GrabCut by finding the bounding box automatically, whereas the original algorithm requires the user to select the region manually. After applying GrabCut, the histogram of the depth map is used to separate the background from the frontal objects. Experiments show that the new method produces better results than the existing algorithm. This paper describes the new method and future research.

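The automatic bounding-box step described above can be sketched in a few lines. This is an illustrative numpy version (the function name, the `near_fraction` threshold, and the toy depth map are assumptions, not the paper's code); the actual GrabCut refinement would follow on the returned rectangle:

```python
import numpy as np

def depth_bounding_box(depth, near_fraction=0.4):
    """Estimate a foreground bounding box from a depth map.

    Pixels whose depth falls within the nearest `near_fraction` of the
    observed depth range are treated as foreground candidates; the box
    is the tight rectangle around them. Hypothetical helper illustrating
    the idea of replacing GrabCut's manual rectangle with one derived
    from the depth histogram."""
    d = depth.astype(np.float64)
    lo, hi = d.min(), d.max()
    mask = d <= lo + near_fraction * (hi - lo)   # near pixels = candidates
    ys, xs = np.nonzero(mask)
    return tuple(int(v) for v in (xs.min(), ys.min(), xs.max(), ys.max()))

# toy depth map: background at depth 100, a near object at depth 20
depth = np.full((8, 8), 100.0)
depth[2:5, 3:7] = 20.0
box = depth_bounding_box(depth)   # -> (3, 2, 6, 4)
```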

Real-time Eye Contact System Using a Kinect Depth Camera for Realistic Telepresence (Kinect 깊이 카메라를 이용한 실감 원격 영상회의의 시선 맞춤 시스템)

  • Lee, Sang-Beom;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.4C
    • /
    • pp.277-282
    • /
    • 2012
  • In this paper, we present a real-time eye contact system for realistic telepresence using a Kinect depth camera. In order to generate the eye contact image, we capture a pair of color and depth videos. Then, the single foreground user is separated from the background. Since the raw depth data contain several types of noise, we apply a joint bilateral filtering method. We then apply a discontinuity-adaptive depth filter to the filtered depth map to reduce the disocclusion area. From the color image and the preprocessed depth map, we construct a user mesh model at the virtual viewpoint. The entire system is implemented with GPU-based parallel programming for real-time processing. Experimental results show that the proposed system is effective in realizing eye contact, providing realistic telepresence.
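The joint bilateral filtering step can be illustrated with a small numpy sketch (a naive, non-real-time version; the function name and parameters are assumptions, not the authors' GPU implementation). The key point is that the range weights come from the clean color guide rather than from the noisy depth, so depth edges stay aligned with color edges:

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Smooth a noisy depth map with weights taken from a clean guide
    image (the color frame), so depth discontinuities follow color edges.
    Minimal sketch; real-time systems use separable or GPU variants."""
    h, w = depth.shape
    pad = radius
    dpad = np.pad(depth, pad, mode='edge')
    gpad = np.pad(guide, pad, mode='edge')
    out = np.zeros_like(depth, dtype=np.float64)
    # precompute the spatial Gaussian weights once
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    for y in range(h):
        for x in range(w):
            dwin = dpad[y:y + 2*pad + 1, x:x + 2*pad + 1]
            gwin = gpad[y:y + 2*pad + 1, x:x + 2*pad + 1]
            # range weights come from the GUIDE image, not the noisy depth
            rangew = np.exp(-(gwin - guide[y, x])**2 / (2 * sigma_r**2))
            wgt = spatial * rangew
            out[y, x] = (wgt * dwin).sum() / wgt.sum()
    return out
```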

A Study on Synthetic Techniques Utilizing Map of 3D Animation - A Case of Occlusion Properties (오클루전 맵(Occlusion Map)을 활용한 3D애니메이션 합성 기법 연구)

  • Park, Sung-Won
    • Cartoon and Animation Studies
    • /
    • s.40
    • /
    • pp.157-176
    • /
    • 2015
  • This research describes render-pass compositing techniques in 3D animation and their effectiveness. Because rendering is split into passes by property and composited afterwards, compositing can be both precise and fast. In particular, the occlusion pass produces soft, light shading that conveys a sense of depth and softens boundaries. An animation project created in 3D space is converted into 2D images through pass rendering and then completed in compositing software. In other words, 3D animation achieves the completeness of the originally planned work through compositing, the synthesis stage of post-production. To complete an in-depth image, a scene built in 3D software can be sent to a compositing program by rendering it per layer and per property. Since the occlusion pass can express depth even without global illumination (GI) rendering of the 3D output, it is an important map that is rarely omitted in post-production. Nonetheless, despite its importance, research and books that summarize and analyze its properties, principles, and usage remain scarce. Hence, this research summarizes the principles and usage of the occlusion map and analyzes the differences in compositing results. It also summarizes the process of assigning renderers and maps according to these properties, and the usage of compositing software. It is hoped that effective and diverse post-production expression techniques will be studied in the future, beyond the current limits of graphic expression.
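The standard use of an occlusion pass in compositing is a multiply over the beauty render; a minimal numpy sketch (the function name and the `strength` blend parameter are assumptions, not the paper's tool chain):

```python
import numpy as np

def composite_occlusion(beauty, occlusion, strength=1.0):
    """Multiply an ambient-occlusion pass over a beauty render, the
    usual 'multiply' compositing operation for occlusion maps.
    `strength` blends between no effect (0.0) and full AO (1.0)."""
    ao = 1.0 - strength * (1.0 - occlusion)   # attenuate the AO pass
    return beauty * ao
```

In a compositing package this corresponds to a Multiply node between the beauty and occlusion layers, with `strength` as the layer opacity.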

Noise filtering for Depth Images using Shape Smoothing and Z-buffer Rendering (형상 스무딩과 Z-buffer 렌더링을 이용한 깊이 영상의 노이즈 필터링)

  • Kim, Seung-Man;Park, Jeung-Chul;Cho, Ji-Ho;Lee, Kwan-H.
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2006.02a
    • /
    • pp.1188-1193
    • /
    • 2006
  • This paper proposes a noise filtering method for depth images that represent the 3D information of a dynamic object. The dynamic 3D information of a real object is acquired in real time using a depth video camera equipped with an infrared depth sensor, and can be represented as a sequence of depth images, i.e., a depth video. However, high-frequency noise arises in the depth images due to the lighting conditions of the capture environment, the reflectance properties of the object, and systematic camera errors. To remove this noise effectively, we perform 3D mesh modeling using depth image-based modeling. The resulting 3D mesh model exhibits severe shape errors in the boundary region and the interior region caused by the depth-image noise. To remove the boundary errors, we extract the boundary region from the depth image, sort the points by proximity, and remove unnecessarily duplicated points using angular deviation. A 2D Gaussian smoothing technique is then applied to produce a smooth boundary. For the interior of the shape, 3D Gaussian smoothing is applied with the boundary region as a constraint, producing an overall smooth shape. Finally, when the smoothed 3D mesh model is rendered, the normalized depth values in the depth buffer are extracted and stored so that they span the same depth range as the original depth image, yielding a globally continuous and smooth depth image. The denoised depth images produced by the proposed method can be applied to high-quality image-based rendering or depth-video-based haptic rendering.

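The 2D Gaussian smoothing of the extracted boundary can be sketched as a 1-D Gaussian kernel run along the contour points (an illustrative numpy version, not the authors' implementation; the wrap-around assumes a closed contour):

```python
import numpy as np

def gaussian_smooth_contour(points, sigma=1.0, radius=3):
    """Smooth a closed 2D boundary contour (N x 2 array) with a 1-D
    Gaussian, wrapping around the contour ends so the first and last
    points are treated as neighbours."""
    ax = np.arange(-radius, radius + 1)
    k = np.exp(-ax**2 / (2 * sigma**2))
    k /= k.sum()                       # normalized: flat contours stay put
    n = len(points)
    out = np.zeros_like(points, dtype=np.float64)
    for i in range(n):
        idx = (i + ax) % n             # wrap indices: closed contour
        out[i] = (k[:, None] * points[idx]).sum(axis=0)
    return out
```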

MPEG-I RVS Software Speed-up for Real-time Application (실시간 렌더링을 위한 MPEG-I RVS 가속화 기법)

  • Ahn, Heejune;Lee, Myeong-jin
    • Journal of Broadcast Engineering
    • /
    • v.25 no.5
    • /
    • pp.655-664
    • /
    • 2020
  • Free-viewpoint image synthesis is one of the important technologies in the MPEG-I (Immersive) standard. RVS (Reference View Synthesizer), developed and used by the MPEG group, is a DIBR (Depth Image-Based Rendering) program that generates an image at a virtual (intermediate) viewpoint from multiple input viewpoints. RVS uses a computer-graphics mesh-surface method and outperforms the previous pixel-based method by 2.5 dB or more. Although its OpenGL version is ten times faster than the non-OpenGL one, it still falls short of real time, running at 0.75 fps on two 2K-resolution input images. In this paper, we analyze the internals of the RVS implementation and modify its structure, achieving a 34-fold speed-up and therefore real-time performance (22-26 fps) through three key improvements: 1) reuse of OpenGL buffers and texture objects; 2) parallelization of file I/O and OpenGL execution; 3) parallelization of the GPU shader program and buffer transfers.
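The second improvement, overlapping file I/O with rendering, follows the classic producer/consumer pattern. A language-agnostic sketch in Python threads (RVS itself is C++/OpenGL; all names here are assumptions): the reader thread loads frame n+1 while the renderer works on frame n, with a small bounded queue providing back-pressure.

```python
import threading
import queue

def run_pipeline(read_frame, render_frame, n_frames, depth=2):
    """Overlap (file) I/O with rendering via a bounded queue: the
    reader thread loads the next frame while the main thread renders
    the current one. `depth` bounds how far I/O may run ahead."""
    q = queue.Queue(maxsize=depth)
    results = []

    def reader():
        for i in range(n_frames):
            q.put(read_frame(i))   # blocks when the renderer falls behind
        q.put(None)                # sentinel: no more frames

    t = threading.Thread(target=reader)
    t.start()
    while (frame := q.get()) is not None:
        results.append(render_frame(frame))
    t.join()
    return results
```

With real file reads and GPU work, the wall-clock time approaches max(I/O, render) per frame instead of their sum.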

Comparison of LoG and DoG for 3D reconstruction in haptic systems (햅틱스 시스템용 3D 재구성을 위한 LoG 방법과 DoG 방법의 성능 분석)

  • Sung, Mee-Young;Kim, Ki-Kwon
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.6
    • /
    • pp.711-721
    • /
    • 2012
  • The objective of this study is to propose an efficient 3D reconstruction method for developing a stereo-vision-based haptics system which can replace "robotic eyes" and "robotic touch." Haptic rendering of 3D images requires capturing the depth information and edge information of stereo images. This paper proposes 3D reconstruction methods using the LoG (Laplacian of Gaussian) and DoG (Difference of Gaussian) algorithms for edge detection, in addition to the basic 3D depth-extraction method, for better haptic rendering. Experiments evaluating the CPU time and error rates of these methods lead us to conclude that the DoG method is more efficient for haptic rendering. This paper may contribute to the investigation of effective methods for 3D image reconstruction, such as improving the performance of mobile patrol robots.
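The relationship between the two operators is that DoG approximates LoG using only two Gaussian blurs, which is why it is usually cheaper. A small numpy sketch of the DoG kernel (the radius and the conventional k = 1.6 scale ratio are assumptions, not the paper's settings):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 2-D Gaussian kernel of size (2*radius+1)^2."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return g / g.sum()

def dog_kernel(sigma, k=1.6, radius=5):
    """Difference of Gaussians: subtracting a broader blur from a
    narrower one approximates the Laplacian of Gaussian, with only
    two separable blurs needed instead of a Laplacian convolution."""
    return gaussian_kernel(sigma, radius) - gaussian_kernel(k * sigma, radius)
```

Convolving an image with this kernel and finding zero crossings gives the DoG edge map compared in the paper.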

Analysis of Digital Hologram Rendering Using a Computational Method

  • Choi, Hyun-Jun;Seo, Young-Ho;Jang, Seok-Woo;Kim, Dong-Wook
    • Journal of information and communication convergence engineering
    • /
    • v.10 no.2
    • /
    • pp.205-209
    • /
    • 2012
  • To build a real-time digital holographic display system that can be applied to next-generation television, it is important to generate digital holograms rapidly. In this paper, we analyze digital hologram rendering based on a computational scheme, examining previous recursive methods to identify the regularity between the depth-map image and the digital hologram.
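The per-point fringe computation that such recursive methods accelerate can be sketched as the classic Fresnel point-source term (illustrative only; the function name and parameters are assumptions, not the paper's recursion, which exploits the regularity between adjacent pixels to avoid recomputing this from scratch):

```python
import numpy as np

def point_hologram(nx, ny, pitch, obj_x, obj_y, obj_z, wavelength=532e-9):
    """Fringe pattern of a single object point on an nx x ny hologram
    plane with pixel pitch `pitch`: the point-source CGH term
    cos(pi/(lambda*z) * r^2) under the Fresnel approximation. A full
    depth-map hologram sums this over every depth pixel."""
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    xx, yy = np.meshgrid(x, y)
    r2 = (xx - obj_x)**2 + (yy - obj_y)**2
    phase = np.pi / (wavelength * obj_z) * r2   # Fresnel approximation
    return np.cos(phase)
```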

3D Shape Reconstruction based on Superquadrics and Single Z-buffer CSG Rendering (Superquadric과 Z-버퍼 CSG 렌더링 기반의 3차원 형상 모델링)

  • Kim, Tae-Eun
    • Journal of Digital Contents Society
    • /
    • v.9 no.2
    • /
    • pp.363-369
    • /
    • 2008
  • In this paper, we propose 3D shape reconstruction using superquadrics and a single z-buffer Constructive Solid Geometry (CSG) rendering algorithm. Superquadrics can represent various 3D shapes with 11 parameters, and both superquadrics and deformed superquadrics serve as the primitives of the CSG tree. In addition, we define effective equations using the z-buffer algorithm and the stencil buffer for synthesizing 3D models. With the proposed algorithm, we need not consider the coordinate system of each 3D model, because we simply compare the depth values of the models.

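The superquadric inside-outside function underlying those 11 parameters can be sketched as follows (only the three axis scales and two shape exponents are shown; the remaining six parameters are the rigid pose, omitted here):

```python
import numpy as np

def superquadric_inside_outside(x, y, z, a=(1.0, 1.0, 1.0), e1=1.0, e2=1.0):
    """Superquadric inside-outside function F: F < 1 inside the shape,
    F = 1 on the surface, F > 1 outside. e1 and e2 control squareness
    (e1 = e2 = 1 gives an ellipsoid); `a` holds the axis scales."""
    return ((np.abs(x / a[0]) ** (2 / e2)
             + np.abs(y / a[1]) ** (2 / e2)) ** (e2 / e1)
            + np.abs(z / a[2]) ** (2 / e1))
```

Evaluating F per fragment is also what makes z-buffer comparisons sufficient for CSG: each primitive renders its own depth, and set operations reduce to depth and stencil tests.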

Accelerating Gaussian Hole-Filling Algorithm using GPU (GPU를 이용한 Gaussian Hole-Filling Algorithm 가속)

  • Park, Jun-Ho;Han, Tack-Don
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2012.07a
    • /
    • pp.79-82
    • /
    • 2012
  • As interest in 3D multimedia services grows, a variety of related research is being discussed. The conventional method of generating stereoscopic images is to place two cameras a fixed distance apart, capture the subject, and generate the corresponding left and right views. However, this burdens the video bandwidth. To address this, much research has been devoted to DIBR (Depth Image Based Rendering) algorithms that use depth information together with a single image. Among these, the hole-filling method using a Gaussian depth map gives the most natural DIBR results, but it is markedly slower than other DIBR algorithms. In this paper, we propose a parallel-processing structure for the Gaussian hole-filling algorithm using the GPU to accelerate image generation, and present the resulting DIBR process.

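The DIBR warp plus Gaussian hole-filling idea can be sketched on a single scanline (a heavily simplified CPU version with assumed names; the paper's contribution is the GPU parallelization, which is not shown). Warping shifts each pixel by a disparity proportional to inverse depth, leaving holes where the background is uncovered, and the holes are then filled with a Gaussian-weighted average of valid neighbours:

```python
import numpy as np

def dibr_1d(colors, depths, shift_scale=1.0):
    """Warp one scanline to a virtual view: disparity is proportional
    to inverse depth, and uncovered pixels become holes (NaN).
    Nearest-wins z-testing is omitted for brevity."""
    n = len(colors)
    out = np.full(n, np.nan)
    for x in range(n):
        disp = int(round(shift_scale / depths[x]))
        nx = x + disp
        if 0 <= nx < n:
            out[nx] = colors[x]
    return out

def gaussian_hole_fill(line, sigma=1.0, radius=2):
    """Fill NaN holes with a Gaussian-weighted average of the valid
    neighbours: a simple 1-D stand-in for Gaussian hole-filling."""
    ax = np.arange(-radius, radius + 1)
    k = np.exp(-ax**2 / (2 * sigma**2))
    out = line.copy()
    for x in np.flatnonzero(np.isnan(line)):
        idx = x + ax
        ok = (idx >= 0) & (idx < len(line))
        vals = line[idx[ok]]
        valid = ~np.isnan(vals)
        if valid.any():
            w = k[ok][valid]
            out[x] = (w * vals[valid]).sum() / w.sum()
    return out
```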

Panoramic Navigation using Orthogonal Cross Cylinder Mapping and Image-Segmentation Based Environment Modeling (직각 교차 실린더 매핑과 영상 분할 기반 환경 모델링을 이용한 파노라마 네비게이션)

  • 류승택;조청운;윤경현
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.3_4
    • /
    • pp.138-148
    • /
    • 2003
    • In this paper, Orthogonal Cross Cylinder mapping and segmentation-based modeling methods are implemented for constructing an image-based navigation system. The Orthogonal Cross Cylinder (OCC) is the object formed by the intersection of two mutually orthogonal cylinders. OCC mapping eliminates the singularities that arise in environment maps and assigns an almost even amount of the environment's area to each texel. A full view from a fixed viewpoint can be obtained with OCC mapping, although it becomes difficult to render the scene once the viewpoint changes. For segmentation-based modeling, the OCC map is segmented according to the objects that form the environment, and depth values are set from the characteristics of the classified objects. This method is easily implemented on an environment map and simplifies environment modeling by extracting depth values through image segmentation. With these methods, a full-view environment navigation system can be developed.
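The OCC lookup can be sketched as choosing between the two cylinders by dominant axis (a hypothetical reading of the mapping; the paper's exact parameterization may differ): directions near the vertical axis, where a sphere map would hit its pole singularity, fall on the horizontal cylinder instead.

```python
import numpy as np

def occ_coordinates(d):
    """Map a unit view direction (x, y, z) to OCC (cylinder, angle,
    height) coordinates. Hypothetical sketch of the idea: side-facing
    directions use the vertical cylinder, near-polar directions the
    horizontal one, avoiding the pole singularity of sphere maps."""
    x, y, z = d
    if abs(y) <= max(abs(x), abs(z)):        # side region: vertical cylinder
        return ('vertical', np.arctan2(z, x), y / np.hypot(x, z))
    else:                                    # cap region: horizontal cylinder
        return ('horizontal', np.arctan2(z, y), x / np.hypot(y, z))
```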