• Title/Summary/Keyword: 그림자 텍스처 (shadow texture)

4 search results

A shadow texture to represent the shadow including a non-convex object (오목한 물체의 그림자를 포함하는 그림자 텍스처)

  • Ryu, Tae-Gyu; Oh, Kyung-Soo
    • Proceedings of the Korean Information Science Society Conference / 2005.11a / pp.748-750 / 2005
  • To generate a shadow texture that can represent the shadows of concave objects, this paper proposes a method that generates the shadow information with the conventional shadow map technique and stores it in an atlas texture. Because the conventional shadow texture method stores shadow information in an image texture, it can use the color-processing capabilities of graphics hardware to render high-quality shadows efficiently; however, it cannot represent the shadows of concave objects and takes a long time to generate the shadow textures. Experimental results show that the new method resolves these problems of the shadow texture while still producing high-quality shadows.

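The atlas-texture packing is the paper's contribution and is not reproduced here; the sketch below shows only the standard shadow-map depth test that the method builds on. All names and the bias value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def shadow_map_lookup(shadow_depth, light_uv, light_depth, bias=1e-3):
    """Classic shadow-map test: a point is lit if its depth from the
    light is not greater than the depth stored at its projected texel.
    shadow_depth : (H, W) depth buffer rendered from the light's view.
    light_uv     : (N, 2) texel coordinates of the points, in [0, 1).
    light_depth  : (N,) depth of each point as seen from the light.
    Returns an (N,) boolean mask, True where the point is in shadow.
    """
    h, w = shadow_depth.shape
    # Convert normalized coordinates to integer texel indices.
    x = np.clip((light_uv[:, 0] * w).astype(int), 0, w - 1)
    y = np.clip((light_uv[:, 1] * h).astype(int), 0, h - 1)
    stored = shadow_depth[y, x]
    # A small bias avoids self-shadowing ("shadow acne").
    return light_depth > stored + bias
```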

Shadow Texture Generation Using Temporal Coherence (시간일관성을 이용한 그림자 텍스처 생성방법)

  • Oh, Kyoung-su; Shin, Byeong-Seok
    • Journal of Korea Multimedia Society / v.7 no.11 / pp.1550-1555 / 2004
  • Shadows increase the visual realism of computer-generated images and are a good cue for the spatial relationships between objects. Previous methods produce a shadow texture for an object by rendering all objects between that object and the light source, so the total time for generating the shadow textures of all objects is O(N²), where N is the number of objects. We propose a novel shadow texture generation method with constant processing time per object, using a shadow depth buffer. In addition, we present a method that achieves a further speed-up using temporal coherence: if transitions between the dynamic and static states are infrequent, the depth values of static objects do not vary significantly, so we can reuse the depth values of the static objects and render only the dynamic objects.

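As a rough illustration of the temporal-coherence idea (not the paper's implementation), the sketch below caches the light-view depth buffer of the static objects and merges only the dynamic objects each frame; `render_depth` is a hypothetical stand-in for a light-view render pass.

```python
import numpy as np

class ShadowDepthCache:
    """Sketch of the temporal-coherence idea: keep the light-view depth
    buffer of the static objects and merge only the dynamic objects each
    frame, so the per-frame cost no longer depends on the static count.
    render_depth(obj) is assumed to return a depth image of one object
    rendered from the light's viewpoint.
    """

    def __init__(self, resolution=(1024, 1024)):
        self.static_depth = np.full(resolution, np.inf)

    def rebuild_static(self, static_objects, render_depth):
        # Run once, and again only when an object changes between
        # the static and dynamic states.
        self.static_depth = np.full(self.static_depth.shape, np.inf)
        for obj in static_objects:
            np.minimum(self.static_depth, render_depth(obj),
                       out=self.static_depth)

    def frame_depth(self, dynamic_objects, render_depth):
        # Per frame: start from the cached static depths and composite
        # only the dynamic objects on top (nearest depth wins).
        depth = self.static_depth.copy()
        for obj in dynamic_objects:
            np.minimum(depth, render_depth(obj), out=depth)
        return depth
```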

Robust Illumination Change Detection Using Image Intensity and Texture (영상의 밝기와 텍스처를 이용한 조명 변화에 강인한 변화 검출)

  • Yeon, Seungho; Kim, Jaemin
    • Journal of Korea Multimedia Society / v.16 no.2 / pp.169-179 / 2013
  • Change detection algorithms take two image frames and return the locations of newly introduced objects that cause differences between the images. This paper presents a new change detection method that classifies intensity changes by their cause (newly introduced objects, light and shadow cast by those objects onto their surroundings, or noise) and precisely localizes the introduced objects. For classification and localization, we first analyze the histogram of the intensity difference between the two images and estimate multiple threshold values. Second, we estimate candidate object boundaries from the gradient difference between the two images. Using these threshold values and candidate object boundaries, we segment the frame-difference image into multiple regions. Finally, we classify each region as belonging to an introduced object or not, based on the texture within the region. Experiments show that the proposed method precisely localizes the objects in various scenes with different lighting.
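A minimal sketch of a two-threshold-plus-texture change detector in the spirit of this abstract follows. The quantile-based thresholds, window size, and local-variance cue are assumptions standing in for the paper's histogram analysis and texture classifier; inputs are assumed to be 2-D grayscale arrays.

```python
import numpy as np

def detect_changes(img_a, img_b, low_q=0.70, high_q=0.95):
    """Separate weak differences (shadow, reflected light, noise) from
    strong differences (real objects) with two thresholds estimated
    from the difference histogram, then keep weak changes only where a
    cheap texture cue (local variance) indicates real structure.
    """
    diff = np.abs(img_a.astype(float) - img_b.astype(float))
    # Estimate thresholds from the difference histogram via quantiles.
    t_low, t_high = np.quantile(diff, [low_q, high_q])
    strong = diff >= t_high           # very likely a new object
    weak = (diff >= t_low) & ~strong  # shadow, reflection, or noise

    # Texture cue: local variance over a small window (the naive
    # version; integral images would be used in practice).
    k = 5
    pad = np.pad(diff, k // 2, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(pad, (k, k))
    local_var = windows.var(axis=(-2, -1))

    # Keep weak changes only in locally textured areas, since cast
    # shadows tend to preserve the underlying texture.
    return strong | (weak & (local_var > local_var.mean()))
```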

Face Relighting Based on Virtual Irradiance Sphere and Reflection Coefficients (가상 복사조도 반구와 반사계수에 근거한 얼굴 재조명)

  • Han, Hee-Chul; Sohn, Kwang-Hoon
    • Journal of Broadcast Engineering / v.13 no.3 / pp.339-349 / 2008
  • We present a novel method to estimate the light source direction and relight the face texture image of a single 3D model under arbitrary unknown illumination conditions. We create a virtual irradiance sphere to detect the light source direction from a given illuminated texture image, using both normal vector mapping and weighted bilinear interpolation. We then derive a relighting equation with estimated ambient and diffuse coefficients. We provide the results of a series of experiments on light source estimation, relighting, and face recognition to show the efficiency and accuracy of the proposed method in restoring the shading and shadow areas of a face texture image. Our approach to face relighting can be used not only for illuminant-invariant face recognition applications but also for reducing visual load and improving visual performance in tasks using 3D displays.
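The abstract does not give the relighting equation itself, but a standard Lambertian version of the idea can be sketched as below: divide out the estimated old shading and multiply in the new one, per texel. The ambient/diffuse values and all names are assumptions, and the irradiance-sphere light-direction estimation is not shown.

```python
import numpy as np

def relight_texture(texture, normals, old_light, new_light,
                    ambient=0.3, diffuse=0.7):
    """Lambertian relighting sketch.
    texture : (H, W) grayscale face texture under the old light.
    normals : (H, W, 3) unit surface normals for each texel.
    old_light, new_light : unit 3-vectors pointing toward the lights.
    ambient/diffuse stand in for the coefficients the paper estimates.
    """
    n_dot_old = np.clip(normals @ old_light, 0.0, None)
    n_dot_new = np.clip(normals @ new_light, 0.0, None)
    shading_old = ambient + diffuse * n_dot_old
    shading_new = ambient + diffuse * n_dot_new
    # Avoid division blow-ups in fully shadowed texels.
    return texture * shading_new / np.maximum(shading_old, 1e-3)
```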