• Title/Summary/Keyword: HDR Environment Map

Reconstruction of HDR Environment Map using a Single LDR Environment Map (단일 LDR 환경 맵을 이용한 HDR 환경 맵 복원)

  • Yoo, Jae-Doug;Cho, Ji-Ho;Lee, Kwan H.
    • Annual Conference of KIPS / 2010.04a / pp.550-553 / 2010
  • Techniques that composite virtual objects into real footage have recently become common in many fields, including film, advertising, augmented reality, and mixed reality. To produce a more realistic composite, the lighting information of the real background image must be applied as it is. Using this real-world lighting information requires generating an HDR (High Dynamic Range) image. In general, producing an HDR image requires either an expensive HDR camera, or an LDR (Low Dynamic Range) camera used to capture a series of LDR images at different exposure times from which the HDR image is reconstructed. To overcome these drawbacks, this paper proposes a method that reconstructs an HDR environment map from a single LDR environment map. The proposed method can restore an LDR environment map to an HDR environment map and, as the results show, produces rendering results similar to those obtained with an actual HDR image.
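The abstract does not spell out the reconstruction algorithm itself. As a rough illustration of the general idea only (recovering plausible high-intensity radiance from a single clipped LDR environment map), a minimal single-image inverse tone mapping sketch in Python is shown below; the gamma value, saturation threshold, and boost factor are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of single-image LDR-to-HDR expansion (inverse tone mapping).
# This is NOT the paper's method; it only illustrates boosting saturated (clipped)
# pixels so they can act as bright light sources in image-based lighting.
import numpy as np

def expand_ldr_to_hdr(ldr, gamma=2.2, saturation_level=0.95, boost=50.0):
    """ldr: float32 array in [0, 1], shape (H, W, 3) -- an LDR environment map."""
    linear = np.power(ldr, gamma)                    # undo display gamma (assumed value)
    luminance = linear.mean(axis=2, keepdims=True)   # rough per-pixel brightness
    # Pixels near the clipping point are assumed to hide much brighter radiance;
    # ramp their energy up smoothly instead of leaving them clipped near 1.0.
    weight = np.clip((luminance - saturation_level) / (1.0 - saturation_level), 0.0, 1.0)
    return linear * (1.0 + weight * (boost - 1.0))   # HDR radiance estimate

# Usage: hdr = expand_ldr_to_hdr(ldr_env_map); feed `hdr` to an image-based lighting renderer.
```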

Generating Dynamic Virtual Light Sources by Interpolating HDR Environment Maps (HDR 환경 맵 보간을 이용한 동적 가상 조명 생성)

  • Hwang, Gyuhyun;Park, Sanghun
    • Journal of Korea Multimedia Society / v.15 no.11 / pp.1399-1408 / 2012
  • Light sources are important visual components that affect the color and illumination of graphics objects, and the information of all real-world light sources must be stored precisely and employed appropriately to obtain photo-realistic composition results. The information of real light sources can be stored accurately in HDR environment maps; however, it is impossible to create new environment maps corresponding to dynamic virtual light sources from a single HDR environment map captured under a fixed lighting situation. In this paper, we present a technique that dynamically generates well-matched lighting information for arbitrarily selected virtual light sources using HDR environment maps created under predefined lighting positions and orientations. Using the information obtained from light intensity and distribution analysis, our technique automatically generates HDR environment maps for the virtual light sources via image interpolation. By applying the interpolated environment maps to an image-based lighting technique, we show that the virtual lights can produce photo-realistically rendered images of graphics models.
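As a rough illustration of the interpolation step only (not the paper's intensity and distribution analysis), the following Python sketch blends pre-captured HDR environment maps using inverse-distance weights derived from the virtual light position; the weighting scheme is an assumption made purely for illustration.

```python
# Minimal sketch: synthesize an environment map for a virtual light position by
# interpolating pre-captured HDR maps. Inverse-distance weights are an assumption;
# the paper derives its weights from light intensity and distribution analysis.
import numpy as np

def interpolate_env_maps(captured, target_pos, eps=1e-6):
    """captured: list of (light_position (3,), hdr_map (H, W, 3)) pairs.
    target_pos: 3D position of the desired virtual light source."""
    weights, maps = [], []
    for pos, env in captured:
        d = np.linalg.norm(np.asarray(target_pos, dtype=np.float64) - np.asarray(pos, dtype=np.float64))
        weights.append(1.0 / (d + eps))              # nearer capture -> larger weight
        maps.append(env)
    w = np.asarray(weights)
    w /= w.sum()                                     # normalize to a convex combination
    return sum(wi * m for wi, m in zip(w, maps))     # blended HDR environment map
```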

Image-based Relighting Using HDRI Environment Map & Progressive Refinement Radiosity on GPU (HDRI 환경맵과 GPU 기반 점진적 세분 래디오시티를 이용한 영상기반 재조명)

  • Kim, Jun-Hwan;Hong, Hyun-Ki
    • Journal of Korea Game Society / v.7 no.4 / pp.53-62 / 2007
  • Although radiosity can represent diffuse reflections of object surfaces by modeling energy exchange in 3D space, its computational load restricts its use in real-time applications. Therefore, GPU (Graphics Processing Unit) based radiosity algorithms have been actively developed to improve rendering performance. We implement G. Coombe's progressive refinement radiosity on the GPU in a 3D scene constructed with an HDR (High Dynamic Range) radiance map. This radiosity method can generate a photo-realistic rendering of a 3D scene in which the synthetic objects are illuminated by the environmental light sources. In the simulation results, rendering performance is analyzed according to the texel resolution of the environment map and the use of mipmapping. In addition, we compare the rendering results of our method with those of incremental radiosity.
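For reference, a minimal CPU-side sketch of progressive refinement radiosity (the algorithm the paper runs on the GPU following Coombe's formulation) is given below. The precomputed form-factor matrix, equal patch areas, and fixed iteration count are simplifying assumptions, not part of the paper's implementation.

```python
# CPU sketch of progressive refinement radiosity: repeatedly "shoot" energy from the
# patch holding the most unshot radiosity. form_factors[i, j] is assumed to be the
# fraction of radiosity shot from patch i that patch j receives (area/reciprocity
# terms folded in, zero diagonal); a real implementation evaluates these with
# hemicubes or ray casting.
import numpy as np

def progressive_radiosity(emission, reflectance, form_factors, iterations=100):
    """emission: (N, 3) emitted radiosity per patch (e.g. environment-map texels).
    reflectance: (N, 3) diffuse reflectance. form_factors: (N, N) as described above."""
    radiosity = emission.copy()
    unshot = emission.copy()
    for _ in range(iterations):
        i = np.argmax(unshot.sum(axis=1))            # brightest unshot patch (equal areas assumed)
        shoot = unshot[i].copy()
        unshot[i] = 0.0
        # Distribute the shot energy to the other patches, scaled by their reflectance.
        delta = reflectance * form_factors[i][:, None] * shoot[None, :]
        radiosity += delta
        unshot += delta
    return radiosity
```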

Deep Learning-Based Lighting Estimation for Indoor and Outdoor (딥러닝기반 실내와 실외 환경에서의 광원 추출)

  • Lee, Jiwon;Seo, Kwanggyoon;Lee, Hanui;Yoo, Jung Eun;Noh, Junyong
    • Journal of the Korea Computer Graphics Society / v.27 no.3 / pp.31-42 / 2021
  • We propose a deep learning-based method that can estimate appropriate lighting for both indoor and outdoor images. The method consists of two networks: a Crop-to-PanoLDR network and an LDR-to-HDR network. The Crop-to-PanoLDR network predicts a low dynamic range (LDR) environment map from a single, partially observed, normal field-of-view image, and the LDR-to-HDR network transforms the predicted LDR image into a high dynamic range (HDR) environment map that includes the high-intensity light information. The HDR environment map generated through this process is applied when rendering virtual objects into the given image. The direction of the estimated light, along with the ambient light illuminating the virtual object, is examined to verify the effectiveness of the proposed method. For this, the results of our method are compared with those of methods that consider either indoor images or outdoor images only. In addition, the effect of the loss function that classifies images as indoor or outdoor was tested and verified. Finally, a user test was conducted to compare the quality of the environment maps created in this study with those created by existing research.
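A skeletal sketch of how such a two-stage pipeline could be wired together at inference time is shown below; the layer choices, panorama resolution, and activation functions are placeholders and do not reflect the architectures used in the paper.

```python
# Skeletal sketch of the two-stage inference pipeline described in the abstract:
# a Crop-to-PanoLDR stage followed by an LDR-to-HDR stage. All layers and shapes
# below are placeholders, not the paper's networks.
import torch
import torch.nn as nn

class CropToPanoLDR(nn.Module):
    """Maps a normal field-of-view crop to a (low-resolution) LDR panorama."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),      # LDR values in [0, 1]
        )
    def forward(self, crop):
        pano = nn.functional.interpolate(crop, size=(128, 256))  # crop -> panorama grid (assumed size)
        return self.net(pano)

class LDRToHDR(nn.Module):
    """Expands the predicted LDR panorama into HDR radiance (non-negative, unbounded)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Softplus(),     # non-negative radiance
        )
    def forward(self, ldr_pano):
        return self.net(ldr_pano)

def estimate_lighting(image, stage1, stage2):
    """image: (1, 3, H, W) tensor. Returns an HDR environment map tensor for rendering."""
    with torch.no_grad():
        return stage2(stage1(image))
```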

Development of High Dynamic Range Panorama Environment Map Production System Using General-Purpose Digital Cameras (범용 디지털 카메라를 이용한 HDR 파노라마 환경 맵 제작 시스템 개발)

  • Park, Eun-Hea;Hwang, Gyu-Hyun;Park, Sang-Hun
    • Journal of the Korea Computer Graphics Society / v.18 no.2 / pp.1-8 / 2012
  • High dynamic range (HDR) images represent a far wider numerical range of exposures than common digital images, so they can accurately store the intensity levels of light produced by light sources in real-world scenes. Although professional HDR cameras that support fast, accurate capture have been developed, their high cost prevents their use in general working environments. The common lower-cost method of producing an HDR image is to take a set of photos of the target scene at a range of exposures with a general-purpose camera and then transform them into an HDR image with commercial software. However, this method requires complicated and accurate camera calibration, and creating the HDR environment maps used to produce high-quality imaging content involves delicate, time-consuming manual work. In this paper, we present an automatic HDR panorama environment map production system constructed to make the complicated job of taking the pictures easier. We also show that our system can be effectively applied to photo-realistic compositing tasks that combine 3D graphics models with a 2D background scene using image-based lighting techniques.
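The exposure-bracket merge at the core of such a pipeline can be sketched as follows; the hat-shaped weighting and the assumption of a linear camera response are illustrative simplifications, and the paper's calibration and panorama-stitching steps are omitted. For a more complete off-the-shelf implementation, OpenCV's MergeDebevec class performs a comparable radiance merge.

```python
# Sketch of merging bracketed LDR exposures into one HDR radiance map.
# Assumes a linear camera response and uses a simple "hat" weight that trusts
# mid-tone pixels and down-weights clipped or noisy ones.
import numpy as np

def merge_exposures(images, exposure_times):
    """images: list of float32 arrays in [0, 1], all of shape (H, W, 3).
    exposure_times: matching list of exposure times in seconds."""
    numerator = np.zeros_like(images[0])
    denominator = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        weight = 1.0 - np.abs(2.0 * img - 1.0)       # peak weight at mid-gray, ~0 when clipped
        numerator += weight * (img / t)              # per-image radiance estimate
        denominator += weight
    return numerator / np.maximum(denominator, 1e-6) # weighted-average HDR radiance
```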

3D Analysis of Scene and Light Environment Reconstruction for Image Synthesis (영상합성을 위한 3D 공간 해석 및 조명환경의 재구성)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of Korea Game Society / v.6 no.2 / pp.45-50 / 2006
  • In order to generate a photo-realistic synthesized image, the light environment should be reconstructed through 3D analysis of the scene. This paper presents a novel method for identifying the positions and characteristics of the lights (both global and local) in a real image, which are then used to illuminate the synthetic objects. First, we generate a High Dynamic Range (HDR) radiance map from omni-directional images taken by a digital camera with a fisheye lens. Then, the positions of the camera and the light sources in the scene are identified automatically from correspondences between images, without a priori camera calibration. The light sources are classified according to whether they illuminate the whole scene, and the 3D illumination environment is then reconstructed. Experimental results showed that the proposed method combined with distributed ray tracing makes photo-realistic image synthesis achievable. Animators and lighting experts in the film and animation industry are expected to benefit greatly from it.
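One step mentioned in the abstract, locating bright light sources in the HDR radiance map, can be illustrated with the short sketch below. It assumes an equirectangular (latitude-longitude) map layout and a simple luminance percentile threshold; these are not details taken from the paper, whose full pipeline (multi-view correspondence, global/local classification, distributed ray tracing) goes well beyond this.

```python
# Illustrative sketch: find bright texels in an HDR radiance map and convert their
# texel coordinates into 3D light directions (equirectangular layout assumed).
import numpy as np

def detect_light_directions(hdr_env, percentile=99.5):
    """hdr_env: (H, W, 3) HDR radiance map. Returns unit directions and luminance of bright texels."""
    h, w, _ = hdr_env.shape
    luminance = hdr_env @ np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 luma weights
    threshold = np.percentile(luminance, percentile)
    ys, xs = np.nonzero(luminance >= threshold)
    # Map texel coordinates to spherical angles, then to 3D unit vectors.
    theta = (ys + 0.5) / h * np.pi                   # polar angle: 0 at the zenith
    phi = (xs + 0.5) / w * 2.0 * np.pi               # azimuth
    directions = np.stack([np.sin(theta) * np.cos(phi),
                           np.cos(theta),
                           np.sin(theta) * np.sin(phi)], axis=1)
    return directions, luminance[ys, xs]             # candidate light directions and brightness
```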
