• Title/Summary/Keyword: render to texture

Search results: 31

Fast Computation of DWT and JPEG2000 using GPU (GPU를 이용한 DWT 및 JPEG2000의 고속 연산)

  • Lee, Man-Hee; Park, In-Kyu; Won, Seok-Jin; Cho, Sung-Dae
    • Journal of the Institute of Electronics Engineers of Korea SP, v.44 no.6, pp.9-15, 2007
  • In this paper, we propose an efficient method for processing the DWT (Discrete Wavelet Transform) on the GPU (Graphics Processing Unit). Since the DWT and EBCOT (embedded block coding with optimized truncation) are the most computationally complex submodules in JPEG2000, we design a high-performance processing framework that performs the DWT in the fragment shader of the GPU based on the render-to-texture (RTT) architecture. Experimental results show a significant performance increase: the DWT running on a modern GPU is more than 10 times faster than on a modern CPU. Furthermore, by replacing the DWT module of JasPer, the JPEG2000 reference software, the overall processing becomes 2~16 times faster than the original JasPer. The GPU-driven render-to-texture architecture proposed in this paper can also be applied to general high-speed image processing and computer vision tasks.
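
A minimal sketch of the kind of render-to-texture (RTT) setup such multi-pass GPU filtering relies on: a floating-point texture bound to a framebuffer object, with ping-pong passes in which each fragment-shader pass reads the previous result and writes the next. It assumes an OpenGL 3.3+ context, a compiled `dwtPassProgram`, and a full-screen-quad VAO; the names are illustrative, not the paper's code.

```cpp
// Render-to-texture sketch for multi-pass GPU filtering such as a DWT.
#include <GL/glew.h>

struct RttTarget {
    GLuint fbo = 0;
    GLuint tex = 0;
};

RttTarget createRttTarget(int width, int height) {
    RttTarget t;
    glGenTextures(1, &t.tex);
    glBindTexture(GL_TEXTURE_2D, t.tex);
    // Float texture so intermediate wavelet coefficients are not clamped to [0,1].
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffers(1, &t.fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, t.fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, t.tex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return t;
}

// Ping-pong between two targets: each pass reads the previous result as a texture
// and writes the next decomposition level into the other target.
void runDwtPasses(GLuint dwtPassProgram, RttTarget targets[2], GLuint srcTex,
                  int width, int height, int numPasses, GLuint fullScreenQuadVao) {
    GLuint input = srcTex;
    for (int pass = 0; pass < numPasses; ++pass) {
        RttTarget& dst = targets[pass % 2];
        glBindFramebuffer(GL_FRAMEBUFFER, dst.fbo);
        glViewport(0, 0, width, height);
        glUseProgram(dwtPassProgram);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, input);
        glUniform1i(glGetUniformLocation(dwtPassProgram, "uSource"), 0);
        glUniform1i(glGetUniformLocation(dwtPassProgram, "uPass"), pass);
        glBindVertexArray(fullScreenQuadVao);
        glDrawArrays(GL_TRIANGLES, 0, 6);   // full-screen quad drives the fragment shader
        input = dst.tex;                    // output of this pass feeds the next one
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```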

Shadow Texture Generation Using Temporal Coherence (시간일관성을 이용한 그림자 텍스처 생성방법)

  • Oh, Kyoung-su; Shin, Byeong-Seok
    • Journal of Korea Multimedia Society, v.7 no.11, pp.1550-1555, 2004
  • Shadows increase the visual realism of computer-generated images and provide a good hint for the spatial relationships between objects. Previous methods produce a shadow texture for an object by rendering all objects between that object and the light source. Consequently, the total time for generating shadow textures for all objects is O(N²), where N is the number of objects. We propose a novel shadow texture generation method with constant processing time per object using a shadow depth buffer. In addition, we present a method to achieve further speed-up using temporal coherence. If transitions between dynamic and static states are not frequent, the depth values of static objects do not vary significantly, so we can reuse the depth values of static objects and render only the dynamic objects.
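
A small sketch in the spirit of the temporal-coherence idea above: the static geometry's depth is rendered once into a cached FBO and only copied each frame, so per-frame work is limited to dynamic objects. It assumes OpenGL 3.0+ and hypothetical `drawStaticObjects()`/`drawDynamicObjects()` helpers; it is not the authors' exact pipeline.

```cpp
#include <GL/glew.h>

// Hypothetical scene-drawing helpers (not from the paper).
void drawStaticObjects();
void drawDynamicObjects();

void renderShadowDepth(GLuint staticDepthFbo, GLuint frameDepthFbo,
                       int size, bool staticSceneChanged) {
    if (staticSceneChanged) {
        // Re-render static objects into the cached depth FBO only when they change.
        glBindFramebuffer(GL_FRAMEBUFFER, staticDepthFbo);
        glViewport(0, 0, size, size);
        glClear(GL_DEPTH_BUFFER_BIT);
        drawStaticObjects();
    }
    // Start the per-frame depth map from the cached static depth instead of
    // re-rendering every object (constant cost per frame for static geometry).
    glBindFramebuffer(GL_READ_FRAMEBUFFER, staticDepthFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, frameDepthFbo);
    glBlitFramebuffer(0, 0, size, size, 0, 0, size, size,
                      GL_DEPTH_BUFFER_BIT, GL_NEAREST);
    // Only dynamic objects are rendered on top of the copied depth values.
    glBindFramebuffer(GL_FRAMEBUFFER, frameDepthFbo);
    drawDynamicObjects();
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```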


Real-Time Water Surface Simulation on GPU (GPU기반 실시간 물 표면 시뮬레이션)

  • Sung, Mankyu; Kwon, DeokHo; Lee, JaeSung
    • KIPS Transactions on Software and Data Engineering, v.6 no.12, pp.581-586, 2017
  • This paper proposes a GPU-based water surface animation and rendering technique for interactive applications such as games. On the water surface, many physical phenomena occur, including reflection and refraction that depend on the viewing direction. When representing the water surface, these effects must not only be shown in real time but also be adjusted automatically. In our implementation, we capture the reflection and refraction through the render-to-texture technique and then perturb the texture coordinates by applying a separate DU/DV map. We also make the ratio between reflection and refraction change automatically based on the Fresnel formula. All proposed methods are implemented using the OpenGL 3D graphics API.
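
A minimal sketch of how a view-dependent reflection/refraction ratio can be derived from the Fresnel equations, here via the common Schlick approximation. The shader that samples the two render-to-texture results would apply the same weight per pixel; the function names and the f0 value for water are illustrative assumptions, not taken from the paper.

```cpp
#include <algorithm>
#include <cmath>

// f0: reflectance at normal incidence (about 0.02 for an air-water interface).
// cosTheta: dot product between the surface normal and the view direction.
float fresnelSchlick(float cosTheta, float f0 = 0.02f) {
    cosTheta = std::clamp(cosTheta, 0.0f, 1.0f);
    return f0 + (1.0f - f0) * std::pow(1.0f - cosTheta, 5.0f);
}

// Blend the two render-to-texture results per channel:
// color = fresnel * reflection + (1 - fresnel) * refraction.
float blendChannel(float reflection, float refraction, float fresnel) {
    return fresnel * reflection + (1.0f - fresnel) * refraction;
}
```

Looking straight down (cosTheta near 1) the Fresnel term is small and refraction dominates; at grazing angles it approaches 1 and the surface becomes mirror-like, which is the automatic adjustment the abstract describes.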

Evaluation of Volumetric Texture Features for Computerized Cell Nuclei Grading

  • Kim, Tae-Yun; Choi, Hyun-Ju; Choi, Heung-Kook
    • Journal of Korea Multimedia Society, v.11 no.12, pp.1635-1648, 2008
  • The extraction of important features in cancer cell image analysis is a key process in grading renal cell carcinoma. In this study, we applied three-dimensional (3D) texture feature extraction methods to cell nuclei images and evaluated their validity for computerized cell nuclei grading. Individual images of 2,423 cell nuclei were extracted from 80 renal cell carcinomas (RCCs) using confocal laser scanning microscopy (CLSM). First, we applied the 3D texture mapping method to render the volume of entire tissue sections. Then, we quantified the chromatin texture by calculating 3D gray-level co-occurrence matrices (3D GLCM) and 3D run length matrices (3D GLRLM). Finally, to demonstrate the suitability of 3D texture features for grading, we performed a discriminant analysis. In addition, we conducted a principal component analysis to obtain optimized texture features. Automatic grading of cell nuclei using 3D texture features had an accuracy of 78.30%. Combining 3D textural and 3D morphological features improved the accuracy to 82.19%. As a comparative study, we also performed stepwise feature selection; using the 4 optimized features, we obtained a further improved accuracy of 84.32%. Three-dimensional texture features have potential as fundamental elements in developing a new nuclear grading system for accurate diagnosis and prognosis prediction.
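
A small sketch of a 3D gray-level co-occurrence matrix (3D GLCM) for a single voxel offset, the kind of statistic the abstract computes on the CLSM nuclei volumes. Gray levels are assumed to be pre-quantized to `levels` bins, and the volume layout and offset are illustrative assumptions.

```cpp
#include <cstdint>
#include <vector>

// Volume stored as z-major slices of size width x height, values in [0, levels).
std::vector<double> glcm3d(const std::vector<uint8_t>& volume,
                           int width, int height, int depth, int levels,
                           int dx, int dy, int dz) {
    std::vector<double> glcm(levels * levels, 0.0);
    double pairs = 0.0;
    for (int z = 0; z < depth; ++z)
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                int nx = x + dx, ny = y + dy, nz = z + dz;
                if (nx < 0 || nx >= width || ny < 0 || ny >= height || nz < 0 || nz >= depth)
                    continue;
                int a = volume[(z * height + y) * width + x];
                int b = volume[(nz * height + ny) * width + nx];
                glcm[a * levels + b] += 1.0;   // count co-occurring gray-level pairs
                pairs += 1.0;
            }
    // Normalize to joint probabilities so texture features (contrast, entropy, ...)
    // can be read directly from the matrix.
    if (pairs > 0.0)
        for (double& p : glcm) p /= pairs;
    return glcm;
}
```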


Generation of 3D Terrain Mesh Using Noise Function and Height Map (노이즈 함수 및 높이맵을 이용한 3차원 지형 메쉬의 생성)

  • Park, Sangkun
    • Journal of Institute of Convergence Technology, v.12 no.1, pp.1-5, 2022
  • This paper describes an algorithm that generates terrain using a noise function and a height map, one of the procedural terrain generation methods. The polygon mesh data structure used to represent the generated terrain concisely and render it is also described. The Perlin noise function is used as the noise technique for the terrain mesh, and the height data of the terrain are generated by combining four noise waves. In addition, the terrain height information can also be obtained from actual satellite image data. The algorithm presented in this paper generates the geometry of the polygonal terrain from the obtained height data and creates a material for texture mapping with two textures, a diffuse texture and a normal texture. The validity of the proposed terrain generation method is verified through application examples, and its feasibility is confirmed through performance tests.
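
A compact sketch of building a height map by summing four noise octaves, as the abstract describes. For brevity a hash-based value noise stands in for the Perlin noise used in the paper, and the base frequency/amplitude parameters are illustrative.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

static float hashNoise(int x, int y) {
    uint32_t h = static_cast<uint32_t>(x) * 374761393u + static_cast<uint32_t>(y) * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h & 0xFFFFFF) / static_cast<float>(0xFFFFFF);   // value in [0,1]
}

static float smoothNoise(float x, float y) {
    int xi = static_cast<int>(std::floor(x)), yi = static_cast<int>(std::floor(y));
    float tx = x - xi, ty = y - yi;
    // Bilinear interpolation of the four surrounding lattice values.
    float a = hashNoise(xi, yi),     b = hashNoise(xi + 1, yi);
    float c = hashNoise(xi, yi + 1), d = hashNoise(xi + 1, yi + 1);
    float top = a + (b - a) * tx, bottom = c + (d - c) * tx;
    return top + (bottom - top) * ty;
}

// Height map as the sum of four octaves: each octave doubles the frequency and
// halves the amplitude of the previous one.
std::vector<float> makeHeightMap(int size, float baseFrequency, float baseAmplitude) {
    std::vector<float> height(size * size);
    for (int y = 0; y < size; ++y)
        for (int x = 0; x < size; ++x) {
            float h = 0.0f, freq = baseFrequency, amp = baseAmplitude;
            for (int octave = 0; octave < 4; ++octave) {
                h += amp * smoothNoise(x * freq, y * freq);
                freq *= 2.0f;
                amp *= 0.5f;
            }
            height[y * size + x] = h;
        }
    return height;
}
```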

Stylized Specular Reflections Using Projective Textures based on Principal Curvature Analysis (주곡률 해석 기반의 투영 텍스처를 이용한 스타일 반사 효과)

  • Lee, Hwan-Jik; Choi, Jung-Ju
    • Journal of the HCI Society of Korea, v.1 no.1, pp.37-44, 2006
  • Specular reflections provide visual feedback that describes the material type of an object, its local shape, and the lighting environment. In photorealistic rendering, there has been a great deal of research on rendering specular reflections effectively based on a local reflection model. In traditional cel animations and cartoons, specular reflections play an important role in conveying artistic intent for an object and its surrounding environment, so the shapes of highlights are quite stylized. In this paper, we present a method to render and control stylized specular reflections using projective textures based on principal curvature analysis. By specifying a texture as a highlight pattern and projecting it onto the specular region of a given 3D model, we obtain a stylized representation of specular reflections. For a given polygonal model, view point, and light source, we first find the point of maximum specular intensity, and then locate the texture projector along the line parallel to the normal vector and passing through that point. The orientation of the projector is determined by the principal directions at the point. Finally, the size of the projection frustum is determined by the principal curvatures corresponding to the principal directions. The proposed method can efficiently control the position, orientation, and size of the specular reflection by translating the projector along the principal directions, rotating the projector about the normal vector, and scaling the principal curvatures, respectively. The method is applicable to real-time applications such as cartoon-style 3D games. We implemented the method with the Microsoft DirectX 9.0c SDK and programmable vertex/pixel shaders on an NVIDIA GeForce FX 7800 graphics subsystem. According to our experimental results, we can render and control the stylized specular reflections for a 3D model with several tens of thousands of triangles in real time.
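
A minimal sketch of the projector placement the abstract describes: positioned along the normal through the point of maximum specular intensity, oriented by a principal direction, and sized from the principal curvatures. The `Vec3` type, the distance and scale parameters, and the inverse-curvature sizing rule are illustrative assumptions rather than the paper's exact formulation.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

struct Projector {
    Vec3 position;                 // eye point of the projective texture
    Vec3 lookAt;                   // projector aims back at the highlight point
    Vec3 up;                       // aligned with one principal direction
    float halfWidth, halfHeight;   // frustum extents derived from principal curvatures
};

Projector placeHighlightProjector(const Vec3& maxSpecularPoint, const Vec3& normal,
                                  const Vec3& principalDir1, float curvature1,
                                  float curvature2, float distance, float sizeScale) {
    Projector p;
    p.position = { maxSpecularPoint.x + normal.x * distance,
                   maxSpecularPoint.y + normal.y * distance,
                   maxSpecularPoint.z + normal.z * distance };
    p.lookAt = maxSpecularPoint;
    p.up = principalDir1;   // orientation follows the principal direction
    // Flatter directions (small curvature) get a wider footprint, so the projected
    // highlight stretches along the surface the way a specular lobe would.
    p.halfWidth  = sizeScale / std::max(std::fabs(curvature1), 1e-4f);
    p.halfHeight = sizeScale / std::max(std::fabs(curvature2), 1e-4f);
    return p;
}
```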


Real-Time Animation of Large Crowds

  • Kang, In-Gu; Han, Jung-Hyun
    • Proceedings of the HCI Society of Korea Conference, 2007.02c, pp.318-321, 2007
  • This paper proposes a GPU-based approach to real-time skinning animation of large crowds, where each character is animated independently of the others. In the first pass of the proposed approach, skinning is done by a pixel shader and the transformed vertex data are written into the render target texture. With the transformed vertices, the second pass renders the large crowds. The proposed approach is attractive for real-time applications such as video games.
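
A small sketch of how the transformed vertex data from the first pass can be addressed in a render-target texture: one texel per skinned vertex, indexed by (character, vertex). The row-major packing and the parameter names are assumptions for illustration, not the paper's exact layout.

```cpp
struct Texel { int x; int y; };

// Each character occupies `verticesPerCharacter` consecutive texels, wrapped into
// rows of a textureWidth-wide floating-point render target. Pass 1 writes the skinned
// position of vertex (characterId, vertexId) to this texel; pass 2 fetches it back.
Texel vertexTexel(int characterId, int vertexId,
                  int verticesPerCharacter, int textureWidth) {
    int linear = characterId * verticesPerCharacter + vertexId;
    return { linear % textureWidth, linear / textureWidth };
}
```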


An Efficient Real-time Rendering Method for Compressed Terrain Dataset with Wavelet Transform (웨이블릿 변환으로 압축된 지형 데이터의 효율적인 실시간 렌더링 기법)

  • Kim, Tae-Gwon; Lee, Eun-Seok; Shin, Byeong-Seok
    • Journal of Korea Game Society, v.14 no.4, pp.45-52, 2014
  • We cannot load the entire dataset of a high-resolution terrain model into GPU memory since it is too large. Out-of-core approaches are commonly used to solve this problem. However, due to the limited bandwidth of secondary storage, it is difficult to render the terrain in real time. A method has been suggested that compresses the DEM data with a wavelet transform on the GPU and renders the decoded data. However, it is inefficient since it has to sample values from textures, convert them to vertices, and generate a mesh periodically. We propose a method that stores the approximation coefficients of the wavelet compression as vertex attributes and renders the terrain by decoding the data in the geometry shader. This reduces the amount of terrain texture transfer, since the approximation coefficients are supplied as vertex attributes, and it generates meshes without additional uploads of terrain textures.
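
A sketch of one level of a 2D wavelet decomposition on a height field, illustrating the split into the approximation band (the coefficients kept as vertex attributes in the approach above) and detail bands decoded later on the GPU. A Haar basis is used here for brevity; the paper's wavelet filter may differ.

```cpp
#include <vector>

struct HaarLevel {
    std::vector<float> approx;                        // LL band, (w/2) x (h/2)
    std::vector<float> detailH, detailV, detailD;     // LH, HL, HH bands
};

HaarLevel haarDecompose(const std::vector<float>& height, int w, int h) {
    int hw = w / 2, hh = h / 2;
    HaarLevel out{ std::vector<float>(hw * hh), std::vector<float>(hw * hh),
                   std::vector<float>(hw * hh), std::vector<float>(hw * hh) };
    for (int y = 0; y < hh; ++y)
        for (int x = 0; x < hw; ++x) {
            float a = height[(2 * y) * w + 2 * x];
            float b = height[(2 * y) * w + 2 * x + 1];
            float c = height[(2 * y + 1) * w + 2 * x];
            float d = height[(2 * y + 1) * w + 2 * x + 1];
            out.approx [y * hw + x] = (a + b + c + d) * 0.25f;   // coarse terrain
            out.detailH[y * hw + x] = (a - b + c - d) * 0.25f;   // horizontal detail
            out.detailV[y * hw + x] = (a + b - c - d) * 0.25f;   // vertical detail
            out.detailD[y * hw + x] = (a - b - c + d) * 0.25f;   // diagonal detail
        }
    return out;
}
```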

An Approximation Technique for Real-time Rendering of Phong Reflection Model with Image-based Lighting (영상 기반 조명을 적용한 퐁 반사 모델의 실시간 렌더링을위한 근사 기법)

  • Jeong, Taehong; Shin, Hyun Joon
    • Journal of the Korea Computer Graphics Society, v.20 no.1, pp.13-19, 2014
  • In this paper, we introduce a real-time method to render a 3D scene using image-based lighting. Previous approaches for image-based lighting focused on diffuse reflection and mirror-like specular reflection. In this paper, we provide a simple preprocessing approach to efficiently approximate the Phong reflection model, which has been used in computer graphics applications for several decades. For diffuse reflection, we generate a texture map by integrating the source image in a preprocessing step, similarly to previous approaches. We adopt a similar idea to produce a set of specular reflection maps for various material shininess values. By doing this, we can render a dynamic scene without high computational complexity or numerous texture map accesses.
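
A sketch of the preprocessing idea above: prefilter a lat-long environment map into a specular reflection map for one Phong shininess value, so that at run time a single lookup in the reflection direction approximates the specular sum. The lat-long parameterization, single-channel radiance, and brute-force integration are simplifying assumptions suited to an offline step.

```cpp
#include <cmath>
#include <vector>

struct Dir { float x, y, z; };

static Dir latLongDir(int px, int py, int w, int h) {
    float phi   = 2.0f * 3.14159265f * (px + 0.5f) / w;   // azimuth
    float theta = 3.14159265f * (py + 0.5f) / h;          // inclination
    return { std::sin(theta) * std::cos(phi), std::cos(theta), std::sin(theta) * std::sin(phi) };
}

// For every output direction R, accumulate environment radiance weighted by
// max(R.L, 0)^shininess and the solid angle of each source texel.
std::vector<float> prefilterSpecular(const std::vector<float>& env, int w, int h, float shininess) {
    std::vector<float> out(w * h, 0.0f);
    for (int oy = 0; oy < h; ++oy)
        for (int ox = 0; ox < w; ++ox) {
            Dir r = latLongDir(ox, oy, w, h);
            float sum = 0.0f, weightSum = 0.0f;
            for (int iy = 0; iy < h; ++iy) {
                float theta = 3.14159265f * (iy + 0.5f) / h;
                float solidAngle = std::sin(theta);   // lat-long texel weight (up to constants)
                for (int ix = 0; ix < w; ++ix) {
                    Dir l = latLongDir(ix, iy, w, h);
                    float cosA = r.x * l.x + r.y * l.y + r.z * l.z;
                    if (cosA <= 0.0f) continue;
                    float wgt = std::pow(cosA, shininess) * solidAngle;
                    sum += wgt * env[iy * w + ix];
                    weightSum += wgt;
                }
            }
            out[oy * w + ox] = weightSum > 0.0f ? sum / weightSum : 0.0f;
        }
    return out;
}
```

Repeating this for several shininess values yields the set of specular maps the abstract mentions, at the cost of an offline pass per value.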