• Title/Summary/Keyword: point rendering

Accelerating Depth Image-Based Rendering Using GPU (GPU를 이용한 깊이 영상기반 렌더링의 가속)

  • Lee, Man-Hee; Park, In-Kyu
    • Journal of KIISE: Computer Systems and Theory / v.33 no.11 / pp.853-858 / 2006
  • In this paper, we propose a practical method for hardware-accelerated rendering of the depth image-based representation (DIBR) of 3D graphic objects using the graphics processing unit (GPU). The proposed method overcomes the drawbacks of conventional rendering, namely that it is slow because it receives little assistance from graphics hardware and that surface lighting is static. Utilizing the new features of modern GPUs and programmable shader support, we develop an efficient hardware-accelerated rendering algorithm for depth image-based 3D objects. Surface shading in response to varying illumination is performed in the vertex shader, while adaptive point splatting is performed in the fragment shader. Experimental results show that the rendering speed increases considerably compared with software-based rendering and the conventional OpenGL-based rendering method.
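
The split this abstract describes (per-vertex relighting, per-fragment adaptive splat sizing) can be pictured with a plain software sketch of adaptive point splatting. The NumPy function below is only an illustrative stand-in under an assumed pinhole camera; it is not the authors' GPU shader code.

```python
# Conceptual sketch (not the paper's GPU implementation): adaptive point
# splatting in software. Each camera-space point is projected to the screen
# and drawn as a square splat whose size shrinks with distance, so nearby
# points close gaps while distant points stay small. All parameters are
# illustrative assumptions.
import numpy as np

def splat_points(points, colors, width=640, height=480, focal=500.0, base_radius=2.0):
    """points: (N, 3) camera-space positions with z > 0; colors: (N, 3)."""
    image = np.zeros((height, width, 3), dtype=np.float32)
    depth = np.full((height, width), np.inf, dtype=np.float32)
    order = np.argsort(-points[:, 2])                     # draw far points first
    for i in order:
        x, y, z = points[i]
        u = int(focal * x / z + width / 2)
        v = int(focal * y / z + height / 2)
        r = max(1, int(round(base_radius * focal / z)))   # adaptive splat radius
        u0, u1 = max(0, u - r), min(width, u + r + 1)
        v0, v1 = max(0, v - r), min(height, v + r + 1)
        if u0 >= u1 or v0 >= v1:
            continue
        mask = depth[v0:v1, u0:u1] > z                    # per-splat depth test
        image[v0:v1, u0:u1][mask] = colors[i]
        depth[v0:v1, u0:u1][mask] = z
    return image
```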

The design of 3D graphics rendering processor for portable device (휴대용기기에 적합한 3차원 그래픽 렌더링 처리기의 파이프라인 설계)

  • 우현재; 정종철; 이문기
    • Proceedings of the IEEK Conference / 2003.07b / pp.1213-1216 / 2003
  • This paper proposes a 3D graphics rendering processor for portable devices. Chip size is one of the most important factors for a portable device, but a conventional 3D graphics rendering processor is not suitable because it requires many multiplication and division units. The proposed architecture therefore substitutes 32-bit fixed point for single-precision floating point, and reuses recursive units for operations of the same kind, such as color values (z, r, g, b, a) and texture values (s, t, u, v). With this approach, we reduce the number of multiplications and divisions by 66.1% and 75%, respectively, at the cost of a 2.12% performance degradation.

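The substitution of 32-bit fixed point for single-precision floating point that this paper relies on can be illustrated with a small, self-contained sketch. The Q16.16 format and helper names below are assumptions chosen for illustration, not the paper's actual datapath.

```python
# Illustrative Q16.16 fixed-point arithmetic emulated in Python; the format
# and helpers are assumptions, not the paper's hardware design.
FRACT_BITS = 16
SCALE = 1 << FRACT_BITS

def to_fixed(x: float) -> int:
    return int(round(x * SCALE))

def from_fixed(x: int) -> float:
    return x / SCALE

def fixed_mul(a: int, b: int) -> int:
    # The 32 x 32 bit product needs 64 bits before shifting back to Q16.16.
    return (a * b) >> FRACT_BITS

def fixed_div(a: int, b: int) -> int:
    return (a << FRACT_BITS) // b

# Example: interpolating one color channel halfway along an edge.
r0, r1, t = to_fixed(0.25), to_fixed(0.75), to_fixed(0.5)
r = fixed_mul(r1 - r0, t) + r0
print(from_fixed(r))  # 0.5
```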

Enhanced Image Mapping Method for Computer-Generated Integral Imaging System (집적 영상 시스템을 위한 향상된 이미지 매핑 방법)

  • Lee Bin-Na-Ra; Cho Yong-Joo; Park Kyoung-Shin; Min Sung-Wook
    • The KIPS Transactions: Part B / v.13B no.3 s.106 / pp.295-300 / 2006
  • The integral imaging system is an auto-stereoscopic display that allows users to see 3D images without wearing special glasses. In the integral imaging system, the 3D object information is captured from several viewpoints and stored as elemental images. Users can then see a reconstructed 3D image when the elemental images are displayed through a lens array. The elemental images can be created by computer graphics, which is referred to as computer-generated integral imaging. The process of creating the elemental images is called image mapping. Several image mapping methods have been proposed in the past, such as PRR (Point Retracing Rendering), MVR (Multi-Viewpoint Rendering), and PGR (Parallel Group Rendering). However, they suffer from heavy rendering computation or hit a performance barrier as the number of elemental lenses in the lens array increases. Thus, it is difficult to use them in real-time graphics applications such as virtual reality or real-time interactive games. In this paper, we propose a new image mapping method named VVR (Viewpoint Vector Rendering) that improves real-time rendering performance. This paper first describes the concept of VVR and compares the performance of its image mapping process with previous methods, and then discusses possible directions for future improvements.
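
As a point of reference for the image mapping step discussed above, the sketch below retraces 3D points through every elemental lens in the manner of PRR; it is a deliberately naive illustration (note the triple loop, which is exactly the computational burden the abstract criticizes) and is not the proposed VVR method. The pinhole lens model and all parameters are assumptions.

```python
# Naive point-retracing sketch (PRR-style), assuming an idealized pinhole per
# elemental lens with the lens array at z = 0 and the image plane at z = -gap.
import numpy as np

def point_retracing(points, colors, lens_pitch, gap, lenses_x, lenses_y, px_per_lens):
    """points: (N, 3) positions in front of the lens array (z > 0); colors: (N, 3)."""
    ei = np.zeros((lenses_y * px_per_lens, lenses_x * px_per_lens, 3), np.float32)
    for ly in range(lenses_y):
        for lx in range(lenses_x):
            # Lens center in world units (array centered at the origin).
            cx = (lx - lenses_x / 2 + 0.5) * lens_pitch
            cy = (ly - lenses_y / 2 + 0.5) * lens_pitch
            for (x, y, z), c in zip(points, colors):
                # Intersect the ray point -> lens center with the image plane.
                u = cx + (cx - x) * gap / z
                v = cy + (cy - y) * gap / z
                # Convert to a pixel inside this lens's elemental image region.
                pu = int((u - cx) / lens_pitch * px_per_lens + px_per_lens / 2)
                pv = int((v - cy) / lens_pitch * px_per_lens + px_per_lens / 2)
                if 0 <= pu < px_per_lens and 0 <= pv < px_per_lens:
                    ei[ly * px_per_lens + pv, lx * px_per_lens + pu] = c
    return ei
```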

Realistic Skin Rendering for 3D Facial Makeup (3차원 얼굴 메이크업을 위한 사실적인 피부 렌더링)

  • Lee, Sang-Hoon; Kim, Hyeon-Joong; Choi, Soo-Mi
    • Journal of Korea Multimedia Society / v.16 no.4 / pp.520-528 / 2013
  • Makeup simulation is a tool that tests various makeup methods on a virtual digital face using input and display devices. Although several simulation systems supporting various makeup styles have been developed recently, most of them are limited in realistic skin representation because they use 2D facial images. We develop a realistic makeup simulation method that can control skin reflectance and roughness parameters. The method allows a user to simulate makeup applications while changing skin parameters on high-resolution facial data acquired by 3D scanners. In addition, we use a point-based shape representation, which enables simple and flexible 3D rendering, and provide a more realistic makeup simulation by applying different skin parameters to each part of the face.
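
A minimal sketch of the kind of per-point parameter control described here: a makeup layer simply overrides the reflectance (albedo) and roughness of the points it covers before shading. The Blinn-Phong-style model and all parameter values are assumptions, not the paper's skin model.

```python
# Hedged sketch: per-point shading where a makeup layer swaps in different
# reflectance/roughness parameters. The shading model is an assumption.
import numpy as np

def shade_point(normal, light_dir, view_dir, albedo, roughness):
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = l + v
    h /= np.linalg.norm(h)
    diffuse = max(np.dot(n, l), 0.0) * albedo
    shininess = 2.0 / max(roughness ** 2, 1e-4)        # rougher => broader highlight
    specular = max(np.dot(n, h), 0.0) ** shininess
    return diffuse + specular

n = l = v = np.array([0.0, 0.0, 1.0])
bare_skin = shade_point(n, l, v, albedo=np.array([0.8, 0.6, 0.5]), roughness=0.6)
with_makeup = shade_point(n, l, v, albedo=np.array([0.9, 0.4, 0.4]), roughness=0.3)
```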

Efficient Haptic Interaction for Highly Complex Object Generated by Point-based Surfaces (점 기반 곡면으로 이루어진 복잡한 가상 물체와의 효율적인 햅틱 상호작용)

  • Lee, Beom-Chan; Kim, Duck-Bong; Park, Hye-Shin; Kim, Jong-Phil; Lee, Kwan-Heng; Ryu, Je-Ha
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.70-75 / 2007
  • This paper proposes graphic and haptic rendering algorithms that use neither connectivity information nor a precomputed hierarchical data structure. The proposed algorithm generates 3D free-form surfaces using a point-based graphic representation. For haptic interaction with the generated point-based surface object, collision detection and contact force computation, which are essential for haptic interaction, are performed on the graphics hardware (GPU) using depth images generated from the point-based surfaces.

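The depth-image-based collision handling can be pictured with the simplified, CPU-side sketch below: the haptic probe is projected into a depth image rendered from the point-based surface, and a penalty force is returned when the probe is deeper than the stored surface depth. The camera model and stiffness constant are assumptions, not the paper's GPU pipeline.

```python
# Hedged sketch of depth-image-based contact force computation.
import numpy as np

def contact_force(probe, depth_image, focal, cx, cy, stiffness=200.0):
    """probe: (x, y, z) in camera space, z increasing away from the viewer."""
    x, y, z = probe
    u = int(round(focal * x / z + cx))
    v = int(round(focal * y / z + cy))
    h, w = depth_image.shape
    if not (0 <= u < w and 0 <= v < h):
        return np.zeros(3)                          # probe outside the rendered view
    penetration = z - depth_image[v, u]             # positive if probe passed the surface
    if penetration <= 0.0:
        return np.zeros(3)
    return np.array([0.0, 0.0, -stiffness * penetration])  # push back toward the viewer
```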

Volume Rendering Using Special Point of Volume Data (체적 데이터의 특징점을 이용한 효율적인 볼륨 랜더링)

  • Kim, Hyeong-Gyun; Kim, Yong-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.2 / pp.666-669 / 2005
  • In this paper, to render volume data efficiently in 3D, feature points of the volume data are extracted and used to reconstruct the 3D shape. To render the volume data using 3D points (vertices), we propose a PEF process that extracts specific 3D points from the voxels and a vertex transformation pipeline that performs the rendering. In general, high-quality ray-traced rendering involves a large amount of computation and a correspondingly low rendering speed, and many alternative volume rendering techniques have been proposed; this paper approaches the problem from a different angle. Although the result is lower in quality and less smooth than conventional ray tracing, only the extracted data needs to be considered, so the amount of computation is greatly reduced and the processing speed is improved. In addition, a 3D rendering program was implemented with OpenGL using the proposed processing steps, providing the rotation, clipping, and zoom in/out functions of conventional ray tracing.

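A rough illustration of extracting renderable 3D points from a voxel volume, in the spirit of the feature-point (PEF) step described above: voxels above a threshold that border a voxel below it are kept as boundary points. The criterion below is an illustrative stand-in, not the paper's extraction rule.

```python
# Hedged sketch: keep voxels that sit on an iso-surface boundary as 3D points.
import numpy as np

def extract_points(volume, iso=0.5):
    """volume: (Z, Y, X) scalar field. Returns (N, 3) voxel coordinates and values."""
    above = volume >= iso
    boundary = np.zeros_like(above)
    # A boundary voxel is above the threshold with at least one neighbor below it
    # along any axis (np.roll wraps at the edges; good enough for a sketch).
    for axis in range(3):
        boundary |= above & ~np.roll(above, 1, axis=axis)
        boundary |= above & ~np.roll(above, -1, axis=axis)
    coords = np.argwhere(boundary)
    return coords, volume[boundary]
```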

Volume Rendering using Grid Computing for Large-Scale Volume Data

  • Nishihashi, Kunihiko; Higaki, Toru; Okabe, Kenji; Raytchev, Bisser; Tamaki, Toru; Kaneda, Kazufumi
    • International Journal of CAD/CAM / v.9 no.1 / pp.111-120 / 2010
  • In this paper, we propose a volume rendering method that uses grid computing for large-scale volume data. Grid computing is attractive because medical institutions and research facilities often have a large number of idle computers. The large-scale volume data is divided into sub-volumes, and the sub-volumes are rendered using grid computing. When using grid computing, different computers rarely have the same processor speed, so the order in which results return rarely matches the order in which jobs were sent. However, order is vital when combining results into a final image. Job scheduling is therefore important in grid computing for volume rendering, and we use an obstacle flag, which changes priorities dynamically, to manage sub-volume results. Obstacle flags track the visibility of each sub-volume when the line of sight from the viewpoint is obscured by other sub-volumes. The proposed dynamic job scheduling based on visibility substantially increases efficiency. Our dynamic job-scheduling method was implemented on our university's campus grid, and comparative experiments showed that the proposed method provides significant improvements in efficiency for large-scale volume rendering.
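
The visibility-driven priority idea can be sketched as follows: a pending sub-volume's priority is the number of still-unrendered sub-volumes lying between it and the viewpoint, so jobs whose blockers are already finished are dispatched first. The geometry test and data layout below are assumptions, not the paper's scheduler.

```python
# Hedged sketch of visibility-based dynamic job ordering for sub-volumes.
import numpy as np

def blockers(i, centers, viewpoint, radius):
    """Indices of sub-volumes whose center lies near the segment viewpoint -> centers[i]."""
    d = centers[i] - viewpoint
    d_sq = np.dot(d, d)
    result = []
    for j, c in enumerate(centers):
        if j == i:
            continue
        t = np.dot(c - viewpoint, d) / d_sq
        if 0.0 < t < 1.0 and np.linalg.norm(viewpoint + t * d - c) < radius:
            result.append(j)
    return result

def next_job(pending, done, centers, viewpoint, radius):
    """Pick the pending sub-volume with the fewest unfinished blockers."""
    def unfinished(i):
        return sum(1 for j in blockers(i, centers, viewpoint, radius) if j not in done)
    return min(pending, key=unfinished)
```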

Massive 3D Point Cloud Visualization by Generating Artificial Center Points from Multi-Resolution Cube Grid Structure (다단계 정육면체 격자 기반의 가상점 생성을 통한 대용량 3D point cloud 가시화)

  • Yang, Seung-Chan; Han, Soo Hee; Heo, Joon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.4 / pp.335-342 / 2012
  • 3D point clouds are widely used in architecture, civil engineering, medicine, computer graphics, and many other fields. With the improvement of 3D laser scanners, a massive 3D point cloud whose file size exceeds the computer's memory requires efficient preprocessing and visualization. We suggest a data structure to solve this problem: during preprocessing, the 3D point cloud is progressively subdivided by a cube grid structure of arbitrary cell size, and a corresponding point cloud subset is generated from the center of each grid cell. A massive 3D point cloud file was tested with two algorithms: QSplat and ours. Our grid-based algorithm showed slower preprocessing but faster rendering compared with QSplat. Our algorithm is also designed to support editing and segmentation, since it keeps the original coordinates of the 3D point cloud.
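
A minimal sketch of the multi-resolution cube-grid idea: at each level the bounding cube is split into finer cells, and one artificial point per occupied cell (here its center) stands in for all points that fall inside it. Coarse levels are drawn when zoomed out, the raw points when zoomed in. Cell sizing and level count below are assumptions.

```python
# Hedged sketch: build artificial center points per occupied grid cell at
# several resolutions of a cube grid over the point cloud's bounding box.
import numpy as np

def build_levels(points, max_level=4):
    lo = points.min(axis=0)
    hi = points.max(axis=0)
    size = (hi - lo).max() + 1e-9
    levels = []
    for level in range(1, max_level + 1):
        n = 2 ** level                                    # cells per axis at this level
        cell = size / n
        idx = np.clip(np.floor((points - lo) / cell).astype(int), 0, n - 1)
        keys = np.unique(idx, axis=0)                     # occupied cells only
        centers = lo + (keys + 0.5) * cell                # artificial center points
        levels.append(centers)
    return levels
```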

Rendering Quality Improvement Method based on Depth and Inverse Warping (깊이정보와 역변환 기반의 포인트 클라우드 렌더링 품질 향상 방법)

  • Lee, Heejea; Yun, Junyoung; Park, Jong-Il
    • Journal of Broadcast Engineering / v.26 no.6 / pp.714-724 / 2021
  • Point cloud content is immersive content recorded by acquiring points, with their 3D positions and colors, corresponding to a real environment and objects. When point cloud content consisting of 3D points with position and color information is enlarged and rendered, the gaps between the points widen and empty holes appear. In this paper, we propose a method for improving the quality of point cloud content by finding the holes that occur due to the gaps between points when the point cloud is enlarged and filling them through inverse-warping-based interpolation using depth information. Points on the back side of the object are rendered through the holes created by the gaps between points, which hinders the interpolation; to solve this, the points corresponding to the back side of the point cloud are removed first. Next, a depth map is extracted for the view in which the empty holes occur. Finally, an inverse transform is performed to fetch the corresponding pixels from the original data. Rendering content with the proposed method improved quality by 1.2 dB in average PSNR compared with the conventional method of enlarging point size to fill the blank areas.
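
The inverse-warping step can be sketched as follows: each hole pixel in the enlarged view is unprojected with its depth, transformed into the original view, and its color is fetched from the original image. The pinhole intrinsics and the pose matrix below are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of inverse-warping hole filling between two pinhole views.
import numpy as np

def fill_holes(rendered, holes, depth, K_new, K_src, T_new_to_src, src_image):
    """holes: (N, 2) pixel coords (u, v) in the new view; depth: per-pixel depth there."""
    out = rendered.copy()
    K_new_inv = np.linalg.inv(K_new)
    h, w, _ = src_image.shape
    for u, v in holes:
        z = depth[v, u]
        if not np.isfinite(z) or z <= 0:
            continue
        p_new = z * (K_new_inv @ np.array([u, v, 1.0]))    # unproject in the new view
        p_src = T_new_to_src[:3, :3] @ p_new + T_new_to_src[:3, 3]
        uv = K_src @ p_src
        us, vs = int(round(uv[0] / uv[2])), int(round(uv[1] / uv[2]))
        if 0 <= us < w and 0 <= vs < h:
            out[v, u] = src_image[vs, us]                  # copy the original color
    return out
```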

Rendering of Particle-Based Water Data Using Point Rendering Method (점 렌더링 기법을 사용한 입자 기반 물 데이터의 렌더링)

  • Lee, Jae-Hak; Cha, Deuk-Hyun; Chang, Byung-Joon; Ihm, In-Sung; Kim, Jang-Hee; Koo, Bon-Ki
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.1262-1270 / 2006
  • Grid-based simulation techniques for realistic water animation have the advantage of representing smooth water surfaces as well as natural water motion. Alongside such grid-based methods, particle-based liquid simulation techniques, which produce stable results with relatively little computation, have recently begun to be applied in animation, creating a demand for effective rendering techniques specialized for simulation data composed of particles. In this paper, we extend point-set rendering techniques, which have mainly been used for point sets obtained by sampling object surfaces such as 3D scan data, and propose a rendering technique suited to the characteristics of water data, which undergoes large topological changes and whose interior is also represented by points. In the proposed technique, a new point set representing the water surface is generated from the particle data obtained by simulation, and a normal vector and radius are determined for each point so that the characteristics of the simulated data are well reflected. In particular, by applying the extended point-set rendering technique to the processed point-set data, smooth water surfaces can be visualized while the fine details represented by the particle data are preserved.

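A rough sketch of turning simulation particles into a renderable point set as the abstract describes: particles with few neighbors are treated as surface points, their normals point away from the local center of mass, and their splat radius grows where neighbors are sparse. The neighbor search, thresholds, and radius formula are assumptions, not the authors' construction.

```python
# Hedged sketch: derive a surface point set (positions, normals, radii) from
# particle data for point splatting.
import numpy as np
from scipy.spatial import cKDTree

def surface_points(particles, h=0.1, surface_max_neighbors=20):
    """particles: (N, 3) simulation particle positions."""
    tree = cKDTree(particles)
    points, normals, radii = [], [], []
    for p in particles:
        idx = tree.query_ball_point(p, h)
        if len(idx) > surface_max_neighbors:
            continue                                   # densely surrounded: interior particle
        neighborhood = particles[idx]
        n = p - neighborhood.mean(axis=0)              # points away from the fluid body
        norm = np.linalg.norm(n)
        points.append(p)
        normals.append(n / norm if norm > 1e-8 else np.array([0.0, 0.0, 1.0]))
        # Sparser neighborhoods get larger splats so the surface stays closed.
        radii.append(0.5 * h + h * (1.0 - len(idx) / (surface_max_neighbors + 1)))
    return np.array(points), np.array(normals), np.array(radii)
```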