• Title/Summary/Keyword: Rendering

Accurate and efficient GPU ray-casting algorithm for volume rendering of unstructured grid data

  • Gu, Gibeom;Kim, Duksu
    • ETRI Journal
    • /
    • v.42 no.4
    • /
    • pp.608-618
    • /
    • 2020
  • We present a novel GPU-based ray-casting algorithm for volume rendering of unstructured grid data. Our volume rendering system uses a ray-casting method that guarantees accurate rendering results. We also employ the per-pixel intersection list concept in the Bunyk algorithm to guarantee an accurate result for non-convex meshes. For efficient memory access for the lists on the GPU, we represent the intersection lists for all faces as an array with our novel construction algorithm. With the intersection lists, we perform ray-casting on a GPU, and a GPU thread handles each ray. To increase ray-coherency in a thread block and improve memory access efficiency, we extend a prior image-tile-based work distribution method to fit modern GPU architectures. We also show that a prior approach using a per-thread local buffer to reduce redundant computation is not appropriate for modern GPU architectures. Instead, we take an on-demand calculation strategy that achieves better performance even though it allows duplicate computations. We applied our method to three unstructured grid datasets with different characteristics. With a GPU, our method achieved up to 36.5 times higher performance for the ray-casting process and 19.7 times higher performance for the whole volume rendering process compared with the Bunyk algorithm using a CPU core. Also, our approach showed up to 8.2 times higher performance than a GPU-based cell projection method while generating more accurate rendering results. These results demonstrate the efficiency and accuracy of our method.
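
A minimal CPU-side sketch of the per-pixel intersection-list idea referenced in the abstract above, in Python (the names and the transfer function are hypothetical; this is a schematic of the general ray-casting structure, not the authors' GPU implementation):

```python
# Schematic sketch of per-pixel intersection-list ray casting
# (illustrative only; not the paper's GPU code).

def composite_pixel(intersections, transfer_function):
    """intersections: (depth, scalar) pairs where one ray crosses mesh faces."""
    # Sort the crossings front to back along the ray.
    intersections = sorted(intersections, key=lambda t: t[0])
    color, alpha = 0.0, 0.0
    # Each consecutive pair of crossings bounds one cell segment.
    for (d0, s0), (d1, s1) in zip(intersections, intersections[1:]):
        seg_color, seg_alpha = transfer_function(0.5 * (s0 + s1), d1 - d0)
        color += (1.0 - alpha) * seg_alpha * seg_color
        alpha += (1.0 - alpha) * seg_alpha
        if alpha > 0.99:  # early ray termination
            break
    return color, alpha
```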

Implementation of Raindrop Rendering Using Unity3D Engine (Unity3D를 이용한 빗방울 렌더링 구현)

  • Lee, MyounJae;Kim, Kyoung-Nam
    • Journal of Digital Convergence
    • /
    • v.12 no.1
    • /
    • pp.519-524
    • /
    • 2014
  • This research studies raindrop rendering. Existing games typically render raindrops with sprite images or rough texture-mapped raindrop images, so all rendered raindrops share nearly the same shape and size, which limits the sense of reality offered to players. To overcome this limitation, this paper proposes a method for generating raindrops that accounts for surface tension, contact angle, and the amount of water, and implements it with the Unity3D engine. To demonstrate its usefulness, the paper shows the raindrops generated as the area and the pulling force in the surface tension formula change. This work can help provide realism when rendering raindrops in games.
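
The "pulling force in the surface tension formula" mentioned above presumably refers to the basic relation F = γ·L (surface tension coefficient times contact-line length). A minimal sketch under that assumption, in Python (not the paper's Unity3D implementation):

```python
import math

# Minimal sketch of the basic surface-tension relation F = gamma * L,
# assuming a circular droplet footprint (illustrative assumption only).

GAMMA_WATER = 0.0728  # N/m, surface tension of water at ~20 degrees C

def pulling_force(footprint_area_m2, gamma=GAMMA_WATER):
    """Force along the contact line of a circular droplet footprint."""
    radius = math.sqrt(footprint_area_m2 / math.pi)
    contact_line_length = 2.0 * math.pi * radius
    return gamma * contact_line_length  # newtons

# Example: a 3 mm^2 footprint
print(pulling_force(3e-6))
```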

Development of Mobile Volume Visualization System (모바일 볼륨 가시화 시스템 개발)

  • Park, Sang-Hun;Kim, Won-Tae;Ihm, In-Sung
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.12 no.5
    • /
    • pp.286-299
    • /
    • 2006
  • Due to the continuing technical progress in the capabilities of modeling, simulation, and sensor devices, huge volume data with very high resolution are common. In scientific visualization, various interactive real-time techniques on high performance parallel computers to effectively render such large scale volume data sets have been proposed. In this paper, we present a mobile volume visualization system that consists of mobile clients, gateways, and parallel rendering servers. The mobile clients allow to explore the regions of interests adaptively in higher resolution level as well as specify rendering / viewing parameters interactively which are sent to parallel rendering server. The gateways play a role in managing requests / responses between mobile clients and parallel rendering servers for stable services. The parallel rendering servers visualize the specified sub-volume with rendering contexts from clients and then transfer the high quality final images back. This proposed system lets multi-users with PDA simultaneously share commonly interesting parts of huge volume, rendering contexts, and final images through CSCW(Computer Supported Cooperative Work) mode.

Mobile Volume Rendering System for Client-Server Environment (클라이언트 서버 기반 모바일 볼륨 가시화 시스템)

  • Lee, Woongkyu;Kye, Heewon
    • Journal of the Korea Computer Graphics Society
    • /
    • v.21 no.3
    • /
    • pp.17-26
    • /
    • 2015
  • In this paper, we describe a volume rendering system for a client-server environment. A single GPU-equipped PC works as the server, based on the observation that only a few concurrent users use a volume rendering system in a small hospital. As clients, we used Android mobile devices such as smartphones. The client application transforms user events into rendering requests. When the server receives a rendering request, it renders the volume using the GPU. The rendered image is compressed to JPEG or PNG format to save network bandwidth and reduce transfer time. In addition, we prune events while the user is dragging a touch in order to reduce latency; the server compensates for the pruning by interpolating the touch positions. As a result, real-time volume rendering is possible for five concurrent users on single GPU-equipped commodity hardware.
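
A minimal sketch of the drag-event pruning idea, in Python (the coalescing policy shown is an assumption, not necessarily the paper's exact method): while a rendering request is still outstanding, only the most recent touch position is kept, and the server can interpolate between the positions it actually receives.

```python
# Illustrative drag-event pruning: keep only the latest touch position
# while a rendering request is still in flight (assumed policy).

class DragEventPruner:
    def __init__(self):
        self.pending = None            # latest unsent drag position
        self.request_in_flight = False

    def on_touch_move(self, x, y):
        if self.request_in_flight:
            self.pending = (x, y)      # overwrite older positions (pruning)
        else:
            self._send((x, y))

    def on_response_received(self):
        self.request_in_flight = False
        if self.pending is not None:
            pos, self.pending = self.pending, None
            self._send(pos)

    def _send(self, pos):
        self.request_in_flight = True
        print("render request for touch position", pos)  # placeholder
```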

Implementation of Real-time Interactive Ray Tracing on GPU (GPU 기반의 실시간 인터렉티브 광선추적법 구현)

  • Bae, Sung-Min;Hong, Hyun-Ki
    • Journal of Korea Game Society
    • /
    • v.7 no.3
    • /
    • pp.59-66
    • /
    • 2007
  • Ray tracing is one of the classical global illumination methods for generating photo-realistic images with lighting effects such as reflection and refraction. However, its computational load restricts its use in real-time applications. To overcome this limitation, much research on GPU (Graphics Processing Unit) based ray tracing has been presented. In this paper, we implement the ray tracing algorithm by J. Purcell and combine it with two methods to improve rendering performance for interactive applications. First, intersection points of primary rays are determined efficiently using rasterization on graphics hardware. Second, we construct an acceleration structure over the 3D objects to improve rendering performance. Few studies have analyzed in detail the performance gained from these considerations in ray tracing. We compare our rendering system with GPU-based environment mapping and implement a wireless remote rendering system. The system is useful for interactive applications such as real-time compositing, augmented reality, and virtual reality.
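
A conceptual sketch of the first optimization above, determining primary-ray hit points by rasterization, in Python (the scene interface here is hypothetical; the paper's implementation runs as GPU shader passes):

```python
# Conceptual sketch of "primary hits via rasterization":
# rasterize once to get per-pixel hit data, then trace only secondary rays.
# The `scene` object and its methods are a hypothetical interface.

def trace_image(scene, width, height):
    image = [[None] * width for _ in range(height)]
    # Step 1: one rasterization pass stores the nearest hit position and
    # normal per pixel (what a G-buffer pass would produce).
    gbuffer = scene.rasterize_positions_and_normals(width, height)
    # Step 2: spawn only secondary rays (reflection/refraction/shadow)
    # from the rasterized hit points, skipping primary intersection tests.
    for y in range(height):
        for x in range(width):
            hit = gbuffer[y][x]
            if hit is None:
                image[y][x] = scene.background
            else:
                image[y][x] = scene.shade_with_secondary_rays(hit)
    return image
```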

Non-Photorealistic Rendering Using CUDA-Based Image Segmentation (CUDA 기반 영상 분할을 사용한 비사실적 렌더링)

  • Yoon, Hyun-Cheol;Park, Jong-Seung
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.11
    • /
    • pp.529-536
    • /
    • 2015
  • When three-dimensional objects and photo images are rendered together, the non-photorealistic rendering results are in visual discord because the two kinds of content have independent color distributions. This paper proposes a non-photorealistic rendering technique that renders both three-dimensional objects and photo images in styles such as cartoon and sketch. The proposed technique computes the color distribution of the photo images and reduces the number of colors of both the photo images and the 3D objects. NPR is then performed based on the reduced colormaps and edge features. To enhance natural scene presentation, image region segmentation is preferred when extracting and applying colormaps. However, image segmentation requires many computational operations, so non-photorealistic rendering of large frames takes a long time. To speed up the time-consuming segmentation procedure, we use GPGPU parallel computing on the GPU, which significantly improves the execution speed of the algorithm.
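
A minimal color-reduction sketch in the spirit of the colormap step above, in Python (uniform quantization is shown for brevity; the paper's CUDA-based segmentation approach is more involved):

```python
# Reduce the number of colors so that photo images and rendered 3D frames
# share a consistent, limited palette (illustrative uniform quantization).

def reduce_colors(pixels, levels=4):
    """pixels: list of (r, g, b) tuples with 0-255 channels."""
    step = 256 // levels
    def quantize(c):
        return min(255, (c // step) * step + step // 2)
    return [(quantize(r), quantize(g), quantize(b)) for r, g, b in pixels]

# Example: both a photo image and a rendered 3D frame would be passed
# through the same reduced colormap to keep their palettes consistent.
print(reduce_colors([(200, 13, 250), (37, 90, 128)], levels=4))
```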

Study on Compositing Editing of 360˚ VR Actual Video and 3D Computer Graphic Video (360˚ VR 실사 영상과 3D Computer Graphic 영상 합성 편집에 관한 연구)

  • Lee, Lang-Goo;Chung, Jean-Hun
    • Journal of Digital Convergence
    • /
    • v.17 no.4
    • /
    • pp.255-260
    • /
    • 2019
  • This study concerns efficient compositing of 360° video and 3D graphics. First, footage filmed with a binocular integral-type 360° camera was stitched, and the location values of the camera and objects were extracted. The extracted location data were then brought into a 3D program to create 3D objects, and methods for natural compositing were researched. As a result, the rendering factors and the rendering method needed for natural compositing of 360° video and 3D graphics were derived. The rendering factors were the 3D objects' location, material quality, lighting, and shadow; for the rendering method, the need for an actual-video-based rendering approach was identified. The compositing method presented through this study's process and results is expected to be helpful for research on and production of 360° video and VR video content.

Adaptive Foveated Ray Tracing Based on Time-Constrained Rendering for Head-Mounted Display (헤드 마운티드 디스플레이를 위한 시간 제약 렌더링을 이용한 적응적 포비티드 광선 추적법)

  • Kim, Youngwook;Ihm, Insung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.28 no.3
    • /
    • pp.113-123
    • /
    • 2022
  • Ray tracing-based rendering creates far more realistic images than traditional rasterization-based rendering. However, it is still burdensome when implemented for a Head-Mounted Display (HMD) system, which demands a wide field of view and a high display refresh rate. Furthermore, to present high-quality images on the HMD screen, a sufficient number of ray samples must be taken per pixel to alleviate visually annoying spatial and temporal aliasing. In this paper, we extend the recent selective foveated ray tracing technique by Kim et al. [1] and propose an improved real-time rendering technique that realizes the rendering effect of classic Whitted-style ray tracing on an HMD system. In particular, by combining hardware-accelerated ray tracing with a time-constrained rendering scheme, we show that fast HMD ray tracing well suited to the human visual system is possible.
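
A minimal sketch of how a foveated, time-constrained sampler might allocate rays, in Python (the Gaussian falloff and budget policy are assumptions, not the method of Kim et al. [1]):

```python
import math

# Illustrative foveated sample allocation: spend more rays near the gaze
# point and scale the totals to fit a per-frame time budget (assumed policy).

def samples_per_pixel(px, py, gaze, max_spp=8, min_spp=1,
                      falloff_deg=20.0, deg_per_pixel=0.05):
    eccentricity = math.hypot(px - gaze[0], py - gaze[1]) * deg_per_pixel
    weight = math.exp(-(eccentricity / falloff_deg) ** 2)  # Gaussian falloff
    return max(min_spp, round(max_spp * weight))

def scale_to_time_budget(spp_map, budget_rays):
    """Uniformly scale the sample map so the total fits the frame budget."""
    total = sum(spp_map)
    if total <= budget_rays:
        return spp_map
    scale = budget_rays / total
    return [max(1, int(s * scale)) for s in spp_map]
```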

Effects of Application of Rendered Carcass Residue on Greenhouse Gases and Pepper Growth (랜더링된 가축사체 잔류물 시용이 온실가스 및 고추 생육에 미치는 영향)

  • Jae-Hyuk Park;Dong-Wook Kim;Se-Won Kang;Ju-Sik Cho
    • Korean Journal of Environmental Agriculture
    • /
    • v.42 no.4
    • /
    • pp.457-464
    • /
    • 2023
  • The rendering residue generated by rendering disposal, an eco-friendly livestock carcass disposal method, is a useful agricultural resource, and methods for recycling it are being actively researched. This study investigated the impact of applying rendered residue directly to soil on crop productivity and the agricultural environment. The chemical properties of the rendering residue were examined: the pH was 5.47, and the OM, T-N, T-P, CaO, K2O, and MgO contents were 59.8%, 9.22%, 2.96%, 2.16%, 0.51%, and 0.10%, respectively. Treatments consisted of a control, inorganic fertilizer, and rendering residue, with the residue applied at 50%, 100%, and 200% of the nitrogen content of the inorganic fertilizer input. Greenhouse gases and ammonia were collected during the cultivation period. The rendering residue increased both the yield and growth of peppers and was effective in improving soil properties such as pH and OM after harvest. However, compared with the inorganic fertilizer treatment, it increased emissions of nitrous oxide and methane as well as ammonia. Direct agricultural use of rendering residue is therefore judged to be difficult, and an appropriate utilization method is needed.