• Title/Summary/Keyword: High quality Volume Rendering


High-quality Shear-warp Volume Rendering Using Efficient Supersampling and Pre-integration Technique (효율적인 수퍼샘플링과 선-적분을 이용한 고화질 쉬어-왑 분해 볼륨 렌더링)

  • Kye, Hee-Won;Kim, Tae-Young
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.8
    • /
    • pp.971-981
    • /
    • 2006
  • Although shear-warp volume rendering is the fastest of the software-based rendering methods, its image quality is not as good as that of other high-quality rendering methods. In this paper, we propose two methods that improve the image quality of shear-warp volume rendering without sacrificing computational efficiency. First, supersampling is performed in intermediate image space; we present an efficient way to transform between volume and image coordinates at an arbitrary sampling ratio. Second, we adapt the pre-integrated rendering technique to shear-warp rendering and propose a new data structure called the overlapped min-max map. Using this structure, empty space leaping can still be performed, so the rendering speed is maintained even though pre-integrated rendering is applied. Consequently, shear-warp rendering can generate high-quality images comparable to those generated by ray casting without a loss of speed.

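The empty-space-leaping idea behind an overlapped min-max map can be sketched as follows: the volume is divided into small blocks, each block stores the minimum and maximum density of its voxels plus a one-voxel overlap (so that the front/back sample pairs used by pre-integration near block borders are still covered), and a block is skipped whenever no density in its [min, max] range maps to a non-zero opacity. The block size, array layout, and function names below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def build_overlapped_minmax(volume, block=8):
    """Per-block min/max table. Each block is extended by one voxel so that
    the front/back sample pairs used by pre-integration near block borders
    are still covered by the block's density range."""
    zs, ys, xs = volume.shape
    nz, ny, nx = -(-zs // block), -(-ys // block), -(-xs // block)  # ceiling division
    vmin = np.empty((nz, ny, nx), volume.dtype)
    vmax = np.empty((nz, ny, nx), volume.dtype)
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                sub = volume[k * block:(k + 1) * block + 1,
                             j * block:(j + 1) * block + 1,
                             i * block:(i + 1) * block + 1]
                vmin[k, j, i], vmax[k, j, i] = sub.min(), sub.max()
    return vmin, vmax

def block_is_empty(vmin, vmax, idx, opacity_tf):
    """A block can be leaped over when no density in its [min, max] range
    maps to a non-zero opacity under the current transfer function."""
    lo, hi = int(vmin[idx]), int(vmax[idx])
    return not np.any(opacity_tf[lo:hi + 1] > 0.0)
```

During traversal, a ray entering a block for which block_is_empty() is true can be advanced directly to the block's exit face instead of being sampled step by step.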

Volume Rendering Using Multi-Textures (Multi-Textures를 이용한 Volume Rendering)

  • 박재영;이병일;최흥국
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2000.12a
    • /
    • pp.169-172
    • /
    • 2000
  • Direct volume rendering has so far been restricted to high-end graphics workstations and special-purpose hardware because of the large amount of trilinear interpolation needed to obtain high image quality. In this paper, we implemented volume rendering techniques using 2D textures on standard PC hardware. In addition, we show how the multi-texturing capabilities of modern PC graphics boards enable volume rendering. Using OpenGL extensions, we also improved the pixel operations and rendering performance.

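The core of 2D-texture-based volume rendering is drawing a stack of axis-aligned slices back to front and alpha-blending them, with the blending normally performed by the graphics hardware. The CPU sketch below, with hypothetical transfer-function arrays, only illustrates that compositing step, not the paper's OpenGL multi-texturing setup.

```python
import numpy as np

def composite_slices(volume, opacity_tf, color_tf):
    """Back-to-front 'over' compositing of axis-aligned slices, the basic
    operation of 2D-texture-based volume rendering (done on the CPU here)."""
    h, w = volume.shape[1], volume.shape[2]
    image = np.zeros((h, w, 3), np.float32)
    for z in range(volume.shape[0] - 1, -1, -1):   # back to front
        dens = volume[z]                           # one slice of densities
        alpha = opacity_tf[dens][..., None]        # per-pixel opacity
        color = color_tf[dens]                     # per-pixel RGB
        image = color * alpha + image * (1.0 - alpha)
    return image

# Example with a random 8-bit volume and simple grey-scale transfer functions.
volume = np.random.randint(0, 256, (64, 64, 64), np.uint8)
opacity_tf = np.linspace(0.0, 0.05, 256, dtype=np.float32)
color_tf = np.repeat(np.linspace(0.0, 1.0, 256, dtype=np.float32)[:, None], 3, axis=1)
image = composite_slices(volume, opacity_tf, color_tf)
```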

Fast Ambient Occlusion Volume Rendering using Local Statistics (지역적 통계량을 이용한 고속 환경-광 가림 볼륨 가시화)

  • Nam, Jinhyun;Kye, Heewon
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.2
    • /
    • pp.158-167
    • /
    • 2015
  • This study presents a new method to improve the speed of high-quality volume rendering. We accelerate ambient occlusion, one of the global illumination techniques used in traditional volume visualization. Computing ambient occlusion is time-consuming because the illumination value of a sample is determined by integrating the opacities of nearby samples. We propose an improved method that uses local statistics such as the mean and standard deviation. The local statistics are computed for each volume block, a set of nearby samples, in a pre-processing step. During rendering, the illumination value is determined efficiently by assuming that the density distribution within a block is normal. As a result, we can generate high-quality images that combine ambient occlusion with local illumination in real time.
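A rough sketch of the statistical shortcut: in pre-processing, each block stores the mean and standard deviation of its densities; at render time, the fraction of neighbouring samples whose density stays below an occluding threshold is estimated from the normal CDF instead of being integrated sample by sample. The block size, threshold, and function names are assumptions for illustration; the paper's exact occlusion weighting is not reproduced.

```python
import numpy as np
from math import erf, sqrt

def precompute_block_stats(volume, block=4):
    """Pre-processing: mean and standard deviation of density for each block."""
    zs, ys, xs = (s // block for s in volume.shape)
    v = volume[:zs * block, :ys * block, :xs * block].astype(np.float32)
    blocks = v.reshape(zs, block, ys, block, xs, block)
    return blocks.mean(axis=(1, 3, 5)), blocks.std(axis=(1, 3, 5))

def ambient_term(mean, std, occluding_density):
    """Rendering: estimate the fraction of nearby densities BELOW the
    occluding threshold (the unoccluded fraction), assuming the densities
    inside the block follow a normal distribution."""
    if std < 1e-6:
        return 0.0 if mean >= occluding_density else 1.0
    # P(X < threshold) for X ~ N(mean, std^2), via the error function.
    return 0.5 * (1.0 + erf((occluding_density - mean) / (std * sqrt(2.0))))
```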

GPU based Maximum Intensity Projection using Clipping Plane Re-rendering Method (절단면 재렌더링 기법을 이용한 GPU 기반 MIP 볼륨 렌더링)

  • Hong, In-Sil;Kye, Hee-Won;Shin, Yeong-Gil
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.3
    • /
    • pp.316-324
    • /
    • 2007
  • Maximum Intensity Projection (MIP) is used to identify patients' anatomical structures in MR or CT data sets. Recently, it has become possible to generate MIP images at interactive speed by exploiting the Graphics Processing Unit (GPU), even for large volume data sets. In texture-hardware-based volume rendering, the volume boundary planes are generally crossed obliquely by the view-aligned texture planes. Since the ray sampling rate is not increased at the volume boundary, aliasing occurs there due to data loss. In this paper, we propose an efficient method that overcomes this problem by re-rendering the volume boundary planes. Our method improves image quality by densifying the sample spacing near the volume boundary, which is a high-frequency region. Since only six additional clipping planes are needed for the re-rendering, high-quality rendering can be performed without sacrificing computational efficiency. Furthermore, our method can also be applied to Minimum Intensity Projection (MinIP) volume rendering.


Performance Analysis of Cloud Rendering Based on Web Real-Time Communication

  • Lim, Gyubeom;Hong, Sukjun;Lee, Seunghyun;Kwon, Soonchul
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.14 no.3
    • /
    • pp.276-284
    • /
    • 2022
  • In this paper, we implemented cloud rendering using WebRTC for high-quality AR and VR services. Cloud rendering is an application of cloud computing that efficiently handles the rendering of large volumes of 3D content. Conventional VR and AR services download the 3D content, so the download time grows as the content size increases. Cloud rendering instead streams images according to the user's point of view, so stable service is possible regardless of the 3D content size. We implemented cloud rendering using WebRTC and analyzed its performance, comparing the latency for 100 MB, 300 MB, and 500 MB 3D AR content in 100 Mbps and 300 Mbps internet environments. The analysis shows that cloud rendering maintains stable latency regardless of the data volume, whereas the conventional method shows increasing latency as the data volume grows. The results quantitatively evaluate the stability of cloud rendering and are expected to contribute to high-quality VR and AR services.

High Quality Volume Rendering Using the Empty Space Jittering and the Sampling Alignment Method (빈공간 교란과 샘플링 위치 정렬을 이용한 고화질 볼륨 가시화)

  • Kye, Heewon
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.7
    • /
    • pp.852-861
    • /
    • 2013
  • When users work with medical volume rendering applications, selecting a specific region of the volume data and observing it under magnification is a common operation. Because wood-grain artifacts appear in the magnified image, jittered sampling has been used to remove them. However, jittered sampling introduces noise along the volume edges. In this research, we identify the cause of this noise and present a solution. To remove the wood-grain artifact without the edge noise, we propose the empty space jittering and sampling alignment methods. Using these methods, we can produce high-quality volume rendering images without noticeable additional rendering time.
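For reference, conventional jittered sampling shifts the first sample of each ray by a random fraction of the sampling step, which breaks the coherent wood-grain pattern but also perturbs the samples at object boundaries; the paper's methods restrict that perturbation to empty space and realign the samples afterwards. The sketch below shows only the conventional jitter, with hypothetical names.

```python
import numpy as np

def jittered_start_offsets(width, height, step, seed=0):
    """Conventional jittered sampling: offset each ray's first sample by a
    random fraction of the sampling step to break the wood-grain pattern.
    (Empty space jittering would apply such an offset only while the ray is
    still in empty space, then realign the sampling positions at the edge.)"""
    rng = np.random.default_rng(seed)
    return rng.random((height, width), dtype=np.float32) * step

offsets = jittered_start_offsets(512, 512, step=0.5)
# Sample positions for pixel (x, y): t = offsets[y, x] + k * step, k = 0, 1, 2, ...
```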

Development of Mobile Volume Visualization System (모바일 볼륨 가시화 시스템 개발)

  • Park, Sang-Hun;Kim, Won-Tae;Ihm, In-Sung
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.12 no.5
    • /
    • pp.286-299
    • /
    • 2006
  • Owing to continuing advances in modeling, simulation, and sensor devices, huge volume data sets with very high resolution have become common. In scientific visualization, various interactive real-time techniques running on high-performance parallel computers have been proposed to render such large-scale volume data effectively. In this paper, we present a mobile volume visualization system that consists of mobile clients, gateways, and parallel rendering servers. The mobile clients allow users to explore regions of interest adaptively at higher resolution levels and to specify rendering/viewing parameters interactively, which are sent to the parallel rendering servers. The gateways manage the requests and responses between the mobile clients and the parallel rendering servers to provide a stable service. The parallel rendering servers visualize the specified sub-volume with the rendering contexts received from the clients and then transfer the high-quality final images back. The proposed system lets multiple PDA users simultaneously share parts of a huge volume of common interest, rendering contexts, and final images in a CSCW (Computer Supported Cooperative Work) mode.

A Data Structure for Real-time Volume Ray Casting (실시간 볼륨 광선 투사법을 위한 자료구조)

  • Lim, Suk-Hyun;Shin, Byeong-Seok
    • Journal of the Korea Computer Graphics Society
    • /
    • v.11 no.1
    • /
    • pp.40-49
    • /
    • 2005
  • Several optimization techniques have been proposed for volume ray casting, but they cannot achieve real-time frame rates. In addition, it is difficult to apply them to applications that require perspective projection. Recently, hardware-based methods using 3D texture mapping have been used for real-time volume rendering. Although their rendering speed approaches real time, larger volumes require more swapping of volume bricks because of the limited texture memory, and the image quality deteriorates compared with conventional volume ray casting. In this paper, we propose a data structure for real-time volume ray casting named PERM (Precomputed dEnsity and gRadient Map). The PERM stores interpolated density values and gradient vectors for quantized cells. Since the information that requires time-consuming computation is stored in the PERM, our method can ensure interactive frame rates on a consumer PC platform. Our method also produces high-quality images because it is based on conventional volume ray casting.

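The idea of the PERM can be pictured as a lookup table that stores, per sample cell, the values a ray caster would otherwise recompute at every sample: the interpolated density and the gradient vector used for shading. The per-voxel version below, built with central differences, is a simplification for illustration; the actual PERM is defined over quantized cells rather than voxels.

```python
import numpy as np

def build_perm(volume):
    """Precompute density and central-difference gradient for every voxel so
    the ray caster can fetch them instead of recomputing them per sample."""
    vol = volume.astype(np.float32)
    gz, gy, gx = np.gradient(vol)             # central differences along z, y, x
    perm = np.empty(vol.shape + (4,), np.float32)
    perm[..., 0] = vol                        # density
    perm[..., 1] = gx                         # gradient x
    perm[..., 2] = gy                         # gradient y
    perm[..., 3] = gz                         # gradient z
    return perm

volume = np.random.randint(0, 256, (64, 64, 64), np.uint8)
perm = build_perm(volume)
density, gradient = perm[32, 32, 32, 0], perm[32, 32, 32, 1:]
```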

Real-time Volume Rendering using Point-Primitive (포인트 프리미티브를 이용한 실시간 볼륨 렌더링 기법)

  • Kang, Dong-Soo;Shin, Byeong-Seok
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.10
    • /
    • pp.1229-1237
    • /
    • 2011
  • Volume ray casting is a direct volume rendering method that produces high-quality images and can handle semi-transparent objects. Although it produces high-quality images by sampling in the region of interest, its rendering speed is low because the color acquisition process involves repetitive memory references and accumulation of sample values. Recently, GPU-based acceleration techniques have been introduced, but they require pre-processing or additional memory. In this paper, we propose an efficient point-primitive-based method that avoids the complicated computation of GPU ray casting. It can render semi-transparent objects, yet it requires neither pre-processing nor additional memory. Our method is fast because it generates point primitives from the volume data set during the sampling process and projects them onto the image plane. It can also easily cope with changes to the opacity transfer function (OTF), because point primitives can be added or deleted in real time.
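Roughly, the point-primitive approach turns every sample that is visible under the current opacity transfer function into a small primitive (position, color, opacity) and lets the GPU project and blend those primitives, instead of compositing along rays. The voxel-level generation step below is an illustrative assumption; the actual method generates primitives during sampling and renders them through the graphics pipeline.

```python
import numpy as np

def volume_to_point_primitives(volume, opacity_tf, color_tf, min_alpha=0.01):
    """Generate point primitives (x, y, z, r, g, b, a) for every voxel whose
    opacity under the current transfer function exceeds min_alpha."""
    alpha = opacity_tf[volume]
    z, y, x = np.nonzero(alpha > min_alpha)
    dens = volume[z, y, x]
    return np.column_stack([x, y, z, color_tf[dens], alpha[z, y, x]])

# Changing the OTF only requires regenerating (adding/removing) primitives,
# not any volume pre-processing.
volume = np.random.randint(0, 256, (32, 32, 32), np.uint8)
opacity_tf = np.where(np.arange(256) > 200, 0.5, 0.0).astype(np.float32)
color_tf = np.ones((256, 3), np.float32)
points = volume_to_point_primitives(volume, opacity_tf, color_tf)
```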

High-Speed Virtual Endoscopy using Improved Space-Leaping (개선된 공간 도약법을 이용한 고속 가상 내시경 기법)

  • Shin, Byeong-Seok;Jin, Ge
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.29 no.8
    • /
    • pp.463-471
    • /
    • 2002
  • To implement virtual endoscopy, a high-speed rendering algorithm that generates accurate perspective projection images and an efficient collision detection method are essential. In this paper, we propose an efficient virtual endoscopy system based on a volume rendering technique. It can skip over empty (transparent) space using distance values produced in a preprocessing step, and it does not degrade image quality since it is an extension of ray casting. It also accelerates rendering, with minimal loss of image quality, by adjusting the sampling interval along each ray according to the ray's direction. Using the distance information, we can also simplify the collision detection for volumetric objects.
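The space-leaping part can be sketched with a precomputed distance map: for every empty voxel, store the distance to the nearest non-transparent voxel, and while a ray is in empty space advance it by that distance instead of taking fixed-size steps. The Euclidean distance transform and the names below are illustrative assumptions; the paper may use a different distance metric or traversal scheme.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def build_leap_map(volume, opacity_tf):
    """Preprocessing: for each voxel, distance (in voxels) to the nearest
    non-transparent voxel. A ray currently inside empty space can safely
    leap forward by this distance without missing any visible data."""
    empty = opacity_tf[volume] <= 0.0          # True where nothing is visible
    return distance_transform_edt(empty)       # 0 at and next to visible voxels

# During ray casting (pseudo-usage):
#   d = leap_map[int(z), int(y), int(x)]
#   step = max(d, 1.0) * base_step             # leap in empty space, else sample normally
```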