• Title/Summary/Keyword: Image-Based Rendering

Search results: 325

View Synthesis for 3D Database (3차원 데이터베이스 구축을 위한 시점 합성)

  • 이종원;이광연;김성대
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06a / pp.17-21 / 1998
  • View synthesis generally means predicting and synthesizing the image seen from an arbitrary viewpoint, given a set of reference viewpoint images (key frames). View synthesis is a kind of image-based rendering and is a very powerful approach in computer graphics for real-time rendering and for realizing virtual reality. It requires no 3D modeling of the object, and its rendering time is independent of the object's complexity. The technique can therefore be applied to the 3D representation of objects and to building an effective database. In this paper, we propose a simple and effective view synthesis method based on image-based rendering for constructing a 3D object database. (An illustrative sketch follows this entry.)


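The abstract above describes synthesizing an arbitrary viewpoint directly from reference images without a 3D model. As a rough illustration only (not the paper's method, whose details are not given here), the following Python sketch forward-warps a single reference image with per-pixel depth to a horizontally shifted virtual camera; the rectified-camera geometry, function names, and toy inputs are assumptions.

```python
# A minimal sketch of depth-based view synthesis: warp a reference image to a
# nearby virtual viewpoint using per-pixel depth, with no 3D model of the scene.
import numpy as np

def synthesize_view(ref_image, depth, baseline, focal_length):
    """Forward-warp ref_image to a virtual camera shifted horizontally by `baseline`.

    Assumes rectified cameras: disparity = focal_length * baseline / depth.
    """
    h, w = depth.shape
    virtual = np.zeros_like(ref_image)
    z_buffer = np.full((h, w), np.inf)
    disparity = focal_length * baseline / np.maximum(depth, 1e-6)
    for y in range(h):
        for x in range(w):
            xv = int(round(x - disparity[y, x]))
            if 0 <= xv < w and depth[y, x] < z_buffer[y, xv]:
                z_buffer[y, xv] = depth[y, x]      # keep the nearest surface
                virtual[y, xv] = ref_image[y, x]
    return virtual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((120, 160, 3))
    depth = 1.0 + rng.random((120, 160))           # toy depth values in metres
    out = synthesize_view(img, depth, baseline=0.05, focal_length=500.0)
    print(out.shape)
```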
System Implementation for Generating Virtual View Digital Holographic using Vertical Rig (수직 리그를 이용한 임의시점 디지털 홀로그래픽 생성 시스템 구현)

  • Koo, Ja-Myung;Lee, Yoon-Hyuk;Seo, Young-Ho;Kim, Dong-Wook
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2012.11a / pp.46-49 / 2012
  • In this paper, we propose a system that generates a digital hologram at a virtual viewpoint, the ultimate goal of 3D video processing, by acquiring RGB and depth images of the same viewpoint and resolution, which carry the coordinates and color information of the object. First, multi-view RGB and depth images with matching viewpoints are obtained using a cold mirror, whose transmittance depends on wavelength, to separate visible and infrared light. After a calibration step that removes the various lens distortions of the camera system, the differing resolutions of the RGB and depth images are equalized. Next, the depth information and RGB image at the desired virtual viewpoint are generated with the DIBR (Depth Image Based Rendering) algorithm, and the object to be realized as a digital hologram is extracted using the depth information. Finally, the extracted virtual-viewpoint object is converted into a digital hologram with a computer-generated hologram (CGH) algorithm. (See the CGH sketch after this entry.)


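The pipeline above ends by converting the extracted virtual-view object into a digital hologram with a CGH algorithm. Below is a minimal, assumed sketch of the classic point-source CGH formulation, not the paper's implementation; the wavelength, pixel pitch, and random object points are illustrative choices.

```python
# A minimal point-source CGH sketch: accumulate the spherical wave of each
# object point on the hologram plane and keep the phase pattern.
import numpy as np

def point_source_cgh(points, amplitudes, holo_shape, pitch, wavelength):
    """points: (N, 3) array of (x, y, z) in metres, z is distance to the hologram."""
    h, w = holo_shape
    k = 2.0 * np.pi / wavelength
    ys = (np.arange(h) - h / 2) * pitch
    xs = (np.arange(w) - w / 2) * pitch
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros(holo_shape, dtype=np.complex128)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r        # spherical wave from the point
    return np.angle(field)                         # phase-only hologram pattern

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = np.column_stack([rng.uniform(-1e-3, 1e-3, 200),
                           rng.uniform(-1e-3, 1e-3, 200),
                           rng.uniform(0.1, 0.2, 200)])
    holo = point_source_cgh(pts, np.ones(200), (256, 256),
                            pitch=8e-6, wavelength=532e-9)
    print(holo.shape)
```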
Accelerating Gaussian Hole-Filling Algorithm using GPU (GPU를 이용한 Gaussian Hole-Filling Algorithm 가속)

  • Park, Jun-Ho;Han, Tack-Don
    • Proceedings of the Korean Society of Computer Information Conference / 2012.07a / pp.79-82 / 2012
  • As interest in 3D multimedia services grows, a variety of related studies are under discussion. The conventional way to produce stereoscopic video is to place two cameras a fixed distance apart, capture the subject, and generate the corresponding left and right views, but this imposes a heavy burden on video bandwidth. To address this, much research has been devoted to the DIBR (Depth Image Based Rendering) algorithm, which uses depth information together with a single image. Among these methods, hole filling based on a Gaussian depth map produces the most natural DIBR results but is markedly slower than other DIBR algorithms. In this paper, we propose a parallel processing structure for the Gaussian hole-filling algorithm on the GPU to accelerate image generation, and present a DIBR generation process that uses it. (A hole-filling sketch follows this entry.)


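The hole-filling step that the paper accelerates fills disoccluded pixels left behind by DIBR warping. The sketch below is a plain CPU illustration of Gaussian-weighted hole filling, assumed rather than taken from the paper; because every hole pixel is computed independently from the original image, the same loop maps naturally onto one GPU thread per hole pixel, which is the kind of parallel structure the paper exploits.

```python
# A minimal Gaussian hole-filling sketch: each hole pixel becomes a
# Gaussian-weighted average of the valid pixels in its neighbourhood.
import numpy as np

def gaussian_hole_fill(image, hole_mask, radius=4, sigma=2.0):
    """image: (H, W, 3) float; hole_mask: (H, W) bool, True where data is missing."""
    h, w = hole_mask.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma ** 2))
    out = image.copy()
    for y, x in zip(*np.nonzero(hole_mask)):       # independent per hole pixel
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        win = image[y0:y1, x0:x1]
        valid = ~hole_mask[y0:y1, x0:x1]
        wgt = kernel[(y0 - y + radius):(y1 - y + radius),
                     (x0 - x + radius):(x1 - x + radius)] * valid
        if wgt.sum() > 0:
            out[y, x] = (win * wgt[..., None]).sum(axis=(0, 1)) / wgt.sum()
    return out
```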
Graphic Image Dithering Technique Based on Symmetric Error Diffusion (대칭 오차 확산에 의한 그래픽 영상의 디더링 기법)

  • Kwon, Sung-Bok;Kim, Young-Mo
    • The Transactions of the Korea Information Processing Society / v.4 no.7 / pp.1893-1899 / 1997
  • Spatial dithering techniques render the illusion of continuous-tone pictures on displays that can produce only binary picture elements. In this paper, we propose a new dithering algorithm that diffuses the quantization error into nearby pixels symmetrically. The method compensates both for the artifacts that error-diffusion dithering produces on graphic images and for the shortcoming of ordered dithering, which cannot reproduce some intensity levels. We applied the method to graphic images and obtained results that overcome the shortcomings of the conventional methods. (An error-diffusion sketch follows this entry.)


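For context, the sketch below shows plain binary error-diffusion dithering, the family of methods the paper modifies; the equal "symmetric" weights used here are an illustrative assumption, not the paper's actual diffusion kernel.

```python
# A minimal binary error-diffusion dithering sketch: the quantization error of
# each pixel is pushed onto its not-yet-processed neighbours.
import numpy as np

def error_diffusion_dither(gray, threshold=0.5):
    """gray: 2-D array in [0, 1]; returns a binary image of 0s and 1s."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # neighbours still to be visited in raster order, weighted symmetrically
    taps = [((0, 1), 0.25), ((1, -1), 0.25), ((1, 0), 0.25), ((1, 1), 0.25)]
    for y in range(h):
        for x in range(w):
            new = 1.0 if img[y, x] >= threshold else 0.0
            out[y, x] = int(new)
            err = img[y, x] - new
            for (dy, dx), wgt in taps:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    img[ny, nx] += err * wgt
    return out
```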
Free view video synthesis using multi-view 360-degree videos (다시점 360도 영상을 사용한 자유시점 영상 생성 방법)

  • Cho, Young-Gwang;Ahn, Heejune
    • Proceedings of the Korea Information Processing Society Conference / 2020.05a / pp.600-603 / 2020
  • A 360-degree video supports 3DoF (3 Degrees of Freedom), in which the viewer chooses the viewing direction. In this study, we propose a 6DoF (6 Degrees of Freedom) video production technique that obtains depth information from multiple 360-degree videos and provides arbitrary-viewpoint viewing through DIBR (Depth Image Based Rendering). To this end, we extend existing planar multi-view methods to design and implement camera parameter estimation from 360 ERP (equirectangular projection) images and depth map extraction, evaluate their performance, and apply DIBR using the OpenGL-based RVS (Reference View Synthesizer) library. (An ERP geometry sketch follows this entry.)

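The key difference from planar multi-view synthesis in the work above is the equirectangular (ERP) projection. The sketch below, an assumed illustration rather than the authors' code, maps an ERP pixel to a ray on the unit sphere and lifts it to a 3D point given depth; this is the geometric building block behind ERP camera parameter estimation and DIBR with a tool such as RVS.

```python
# A minimal ERP (equirectangular projection) geometry sketch.
import numpy as np

def erp_pixel_to_ray(u, v, width, height):
    """Map ERP pixel (u, v) to a unit direction (x, y, z), y pointing up."""
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi       # longitude in (-pi, pi]
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi      # latitude in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def erp_pixel_to_point(u, v, depth, width, height, cam_position):
    """Lift an ERP pixel with depth (metres) to a world-space 3-D point."""
    return cam_position + depth * erp_pixel_to_ray(u, v, width, height)

if __name__ == "__main__":
    p = erp_pixel_to_point(1024, 512, depth=3.0, width=2048, height=1024,
                           cam_position=np.zeros(3))
    print(p)
```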
Model Development of Unit-care Welfare Facility for a Traditional Korean House Using Computer Graphics (컴퓨터 그래픽을 이용한 한옥 유니트형 노인복지시설 모델 제시)

  • Nam, Yun-Cheol
    • Journal of The Korean Digital Architecture Interior Association / v.13 no.1 / pp.23-31 / 2013
  • This paper presents computer graphics that apply a traditional Korean house (Hanok) style interior to the unit-care space of a welfare facility and proposes them as material for interior design and construction. The proposed computer graphics model is a single-story building that provides convenient circulation between rooms. It was produced with AutoCAD, a 3D modeling program (SketchUp v.8), and a rendering program (Podium v.2), based on the traditional Korean house and on related work on unit-care welfare facilities. The model combining unit care with the traditional Korean house has the following characteristics. In each room of the living space, Korean paper (Hanji) wallpaper and flooring are used, and windows, doors, and furniture with traditional patterns are placed. The living room (Daechung), representative of the traditional Korean house, and the corridor (toenmaru) are the elements that preserve the image of the traditional Korean house as much as possible. In particular, the corridor (toenmaru) is placed so that it can be used conveniently in nursing-care and home-care support facilities. The public space is arranged around the inner court (An-madang), while the living space (unit care) gains a sense of independence through separation. The bathroom and kitchen have a modern design that favors functionality over aesthetic elements.

Interpolation method of head-related transfer function based on the least squares method and an acoustic modeling with a small number of measurement points (최소자승법과 음향학적 모델링 기반의 적은 개수의 측정점에 대한 머리전달함수 보간 기법)

  • Lee, Seokjin
    • The Journal of the Acoustical Society of Korea / v.36 no.5 / pp.338-344 / 2017
  • In this paper, an interpolation method for the HRTF (Head-Related Transfer Function) is proposed, aimed especially at small measurement data sets. The proposed algorithm is based on an acoustic model of HRTFs and interpolates them by estimating the model coefficients. Estimating the model coefficients is difficult when measurement points are scarce, so the algorithm solves this problem through data augmentation using VBAP (Vector Based Amplitude Panning). The algorithm therefore consists of two steps: a data augmentation step based on VBAP and a model coefficient estimation step using the least squares method. It was evaluated in a simulation with measured data from the CIPIC (Center for Image Processing and Integrated Computing) HRTF database, and the results show that it reduces the mean-squared error by 1.5 dB to 4 dB compared with conventional algorithms. (A two-step interpolation sketch follows this entry.)

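As a rough illustration of the two-step flow in the abstract (VBAP-based data augmentation followed by least-squares model fitting), the sketch below uses assumed data shapes and a generic model basis; it is not the paper's implementation, and the actual basis functions and CIPIC measurement grid are not reproduced here.

```python
# A minimal two-step HRTF interpolation sketch: (1) create pseudo-measurements
# at new directions as VBAP-weighted combinations of measured neighbours,
# (2) fit model coefficients to the augmented set by least squares.
import numpy as np

def vbap_gains(target_dir, triplet_dirs):
    """3-D VBAP: solve for gains g so that sum_i g_i * l_i = target direction."""
    g = np.linalg.solve(np.asarray(triplet_dirs).T, target_dir)
    g = np.clip(g, 0.0, None)                      # keep non-negative gains
    return g / (np.linalg.norm(g) + 1e-12)

def augment_hrtfs(meas_dirs, meas_hrtfs, new_dirs, triplets):
    """meas_hrtfs: (M, F); triplets: indices of the 3 measured neighbours per new_dir."""
    aug = []
    for d, tri in zip(new_dirs, triplets):
        g = vbap_gains(d, meas_dirs[tri])
        aug.append(g @ meas_hrtfs[tri])            # weighted HRTF combination
    return np.array(aug)

def fit_model(basis, hrtfs):
    """Least squares: basis (M x K) @ coeffs ~= hrtfs (M x F); returns coeffs (K x F)."""
    coeffs, *_ = np.linalg.lstsq(basis, hrtfs, rcond=None)
    return coeffs
```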
Real-time Eye Contact System Using a Kinect Depth Camera for Realistic Telepresence (Kinect 깊이 카메라를 이용한 실감 원격 영상회의의 시선 맞춤 시스템)

  • Lee, Sang-Beom;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.4C / pp.277-282 / 2012
  • In this paper, we present a real-time eye contact system for realistic telepresence using a Kinect depth camera. To generate the eye contact image, we capture a pair of color and depth videos and separate the single foreground user from the background. Since the raw depth data contains several types of noise, we apply a joint bilateral filter, followed by a discontinuity-adaptive depth filter on the filtered depth map to reduce the disocclusion area. From the color image and the preprocessed depth map, we construct a user mesh model at the virtual viewpoint. The entire system is implemented with GPU-based parallel programming for real-time processing. Experimental results show that the proposed system achieves eye contact efficiently and provides realistic telepresence. (A joint bilateral filter sketch follows this entry.)

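The preprocessing stage above denoises the Kinect depth map with a joint bilateral filter guided by the registered color image. The sketch below is a simple, assumed CPU illustration of that filter (the paper's version runs on the GPU, one output pixel per thread); the parameter values are illustrative.

```python
# A minimal joint bilateral filter sketch: depth is smoothed with weights that
# combine spatial distance and colour similarity, so noise is suppressed
# without blurring across object boundaries.
import numpy as np

def joint_bilateral_depth(depth, color, radius=3, sigma_s=2.0, sigma_c=0.1):
    """depth: (H, W) float; color: (H, W, 3) float in [0, 1]."""
    h, w = depth.shape
    out = np.zeros_like(depth)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            dy, dx = np.mgrid[y0 - y:y1 - y, x0 - x:x1 - x]
            w_s = np.exp(-(dy ** 2 + dx ** 2) / (2 * sigma_s ** 2))
            diff = color[y0:y1, x0:x1] - color[y, x]
            w_c = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma_c ** 2))
            wgt = w_s * w_c
            out[y, x] = np.sum(wgt * depth[y0:y1, x0:x1]) / np.sum(wgt)
    return out
```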
Luminescence Characteristics of Blue Phosphor and Fabrication of a UV-based White LED (UV 기반 백색 LED용 청색 형광체의 발광특성 및 백색 LED 제조)

  • Jung, Hyungsik;Park, Seongwoo;Kim, Taehoon;Kim, Jongsu
    • Korean Journal of Optics and Photonics / v.25 no.4 / pp.216-220 / 2014
  • We have synthesized a CaMgSi₂O₆:Eu²⁺ blue phosphor via a solid-state reaction method. The phosphor has a monoclinic structure with space group C2/c (15) and an emission band peaking at 450 nm (blue) due to the 4f⁷-4f⁶5d transition of the Eu²⁺ ion. The emission intensity at 100 °C is 54% of the value at room temperature. A white LED was fabricated by integrating a UV LED (400 nm) with our blue phosphor plus two commercial green and red phosphors. The white LED shows a color temperature of 3500 K with a color rendering index of 87 (x = 0.3936, y = 0.3605) and a luminous efficiency of 18 lm/W, and it maintains 97% of its luminance after operation at 350 mA for 400 hours at 85 °C.

Fast Mode Decision For Depth Video Coding Based On Depth Segmentation

  • Wang, Yequn;Peng, Zongju;Jiang, Gangyi;Yu, Mei;Shao, Feng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.4 / pp.1128-1139 / 2012
  • With the development of three-dimensional displays and related technologies, depth video coding has become a new topic and attracted great attention from industry and research institutes. Because (1) depth video is not a sequence of images viewed directly by end users but an aid for rendering, and (2) depth video is simpler than the corresponding color video, a fast algorithm for depth video coding is both necessary and feasible for reducing the computational burden of the encoder. This paper proposes a fast mode decision algorithm for depth video coding based on depth segmentation. First, based on depth perception, the depth video is segmented into three regions: edge, foreground, and background. Then, different mode candidates are searched in each region to decide the encoding macroblock mode. Finally, the encoding time, bit rate, and virtual-view quality of the proposed algorithm are tested. Experimental results show that the proposed algorithm saves between 82.49% and 93.21% of the encoding time with negligible quality degradation of the rendered virtual view and a negligible bit rate increase. (A depth segmentation sketch follows below.)
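As a rough illustration of the segmentation stage described above, the sketch below labels each macroblock of a depth map as edge, foreground, or background; the gradient and depth thresholds are assumed, not the paper's criteria, and the subsequent restricted mode search is not shown.

```python
# A minimal depth segmentation sketch for fast mode decision: classify each
# macroblock by depth discontinuity strength and mean depth value.
import numpy as np

EDGE, FOREGROUND, BACKGROUND = 0, 1, 2

def classify_macroblocks(depth, mb_size=16, grad_thresh=8.0, depth_thresh=128.0):
    """depth: (H, W) array of 8-bit depth values; returns per-macroblock labels."""
    gy, gx = np.gradient(depth.astype(np.float64))
    grad = np.hypot(gx, gy)
    h, w = depth.shape
    labels = np.empty((h // mb_size, w // mb_size), dtype=np.int64)
    for by in range(labels.shape[0]):
        for bx in range(labels.shape[1]):
            sl = (slice(by * mb_size, (by + 1) * mb_size),
                  slice(bx * mb_size, (bx + 1) * mb_size))
            if grad[sl].max() > grad_thresh:        # strong depth discontinuity
                labels[by, bx] = EDGE
            elif depth[sl].mean() > depth_thresh:   # near objects: larger values in typical DIBR depth maps
                labels[by, bx] = FOREGROUND
            else:
                labels[by, bx] = BACKGROUND
    return labels
```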