http://dx.doi.org/10.9766/KIMST.2020.23.6.542

Point Cloud Data Driven Level of Detail Generation in Low Level GPU Devices

Kam, JungWon (The 1st Research and Department of Electronic Engineering, Changwon National University)
Gu, BonWoo (Department of M&S 1, SIMNET Corporation)
Jin, KyoHong (The 1st Research and Department of Electronic Engineering, Changwon National University)
Publication Information
Journal of the Korea Institute of Military Science and Technology / v.23, no.6, 2020, pp. 542-553
Abstract
Virtual worlds and simulations require large-scale map rendering. However, rendering too many vertices is a computationally complex and time-consuming process. Some game development companies build 3D LOD objects for high-speed rendering, switching levels based on the distance between the camera and the 3D object. Terrain physics simulation researchers, however, need a way to recognize the original object shape from 3D LOD objects. In this paper, we propose a simple automatic LOD framework using point cloud data (PCD). The PCD is created by casting orthographic rays from six directions. Various experiments are performed to validate the effectiveness of the proposed method. We hope the proposed automatic LOD generation framework can play an important role in game development and terrain physics simulation.
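The idea in the abstract can be sketched in a few lines of code. This is a minimal illustration under stated assumptions, not the paper's implementation: an analytic sphere (`ray_sphere`) stands in for the real mesh intersection test, and `lod_from_pcd` shows one plausible way voxel quantization of the sampled PCD yields progressively coarser LOD levels; the function names are hypothetical.

```python
import math

def ray_sphere(origin, direction, center=(0.0, 0.0, 0.0), radius=1.0):
    """First intersection of a ray (unit direction) with a sphere, or None.
    Stand-in for a real mesh intersection test."""
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c              # quadratic with a = 1 (unit direction)
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0    # nearest root along the ray
    if t < 0.0:
        return None
    return tuple(origin[i] + t * direction[i] for i in range(3))

def six_direction_pcd(n=16, extent=1.5, far=3.0):
    """Sample a surface into a point cloud with orthographic rays cast
    from the six axis-aligned directions (+/-x, +/-y, +/-z)."""
    dirs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    coords = [-extent + 2.0 * extent * i / (n - 1) for i in range(n)]
    points = []
    for d in dirs:
        axis = next(i for i, v in enumerate(d) if v != 0)
        u_axis, v_axis = [i for i in range(3) if i != axis]
        for u in coords:                 # orthographic grid on the face plane
            for v in coords:
                origin = [0.0, 0.0, 0.0]
                origin[axis] = -d[axis] * far   # start outside, ray points inward
                origin[u_axis], origin[v_axis] = u, v
                hit = ray_sphere(origin, d)
                if hit is not None:
                    points.append(hit)
    return points

def lod_from_pcd(points, voxel_size):
    """Build a coarser LOD by keeping one representative point per occupied voxel."""
    cells = {}
    for p in points:
        key = tuple(math.floor(c / voxel_size) for c in p)
        cells.setdefault(key, p)
    return list(cells.values())

pcd = six_direction_pcd()
coarse_lod = lod_from_pcd(pcd, 0.5)
```

A larger `voxel_size` collapses more samples into each cell, so the point count drops as the LOD gets coarser; sweeping the voxel size produces a family of LOD levels from one PCD.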
Keywords
Level of Detail; Point Cloud Data; Volumetric Rendering;
Citations & Related Records
  • Reference
1 Burt and Adelson, The Laplacian Pyramid as a Compact Image Code, IEEE Transactions on Communications, 31(4), pp. 532-540, 1983.
2 Ghiasi and Fowlkes, Laplacian Pyramid Reconstruction and Refinement for Semantic Segmentation, In European Conference on Computer Vision, pp. 519-534, 2016.
3 He et al., A System for Rapid, Automatic Shader Level-of-Detail, ACM Transactions on Graphics (TOG), 34(6), p. 187, 2015.
4 Schneider et al., "GPU-Friendly High-Quality Terrain Rendering," 2006.
5 Wen et al., Real-Time Rendering of Large Terrain on Mobile Device, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, p. 37, 2008.
6 Zhou et al., A Hybrid Level-of-Detail Representation for Large-Scale Urban Scenes Rendering, Computer Animation and Virtual Worlds, 25(3-4), pp. 243-253, 2014.
7 Zienkiewicz et al., Monocular, Real-Time Surface Reconstruction using Dynamic Level of Detail, In 2016 Fourth International Conference on 3D Vision (3DV), IEEE, pp. 37-46, 2016.
8 Shreiner et al., OpenGL Programming Guide: The Official Guide to Learning OpenGL, Versions 3.0 and 3.1. Pearson Education, 2009.
9 Mantler et al., Displacement Mapped Billboard Clouds, In Proceedings of Symposium on Interactive 3D Graphics and Games, Citeseer, January 2007.
10 Decaudin and Neyret, Volumetric Billboards, In Computer Graphics Forum, Oxford, UK: Blackwell Publishing Ltd., Vol. 28, No. 8, pp. 2079-2089, December 2009.
11 Vichitvejpaisal and Kanongchaiyos, Enhanced Billboards for Model Simplification, 2006.
12 Laine and Karras, Efficient Sparse Voxel Octrees, IEEE Transactions on Visualization and Computer Graphics, 17(8), pp. 1048-1059, 2011.
13 Lorensen and Cline, Marching Cubes: A High Resolution 3D Surface Construction Algorithm, In ACM Siggraph Computer Graphics, ACM, Vol. 21, No. 4, pp. 163-169, August 1987.
14 Jablonski and Martyn, Real-Time Voxel Rendering Algorithm based on Screen Space Billboard Voxel Buffer with Sparse Lookup Textures, 2016.
15 Baert et al., Out-of-Core Construction of Sparse Voxel Octrees, In Proceedings of the 5th High-Performance Graphics Conference, ACM, pp. 27-32, July 2013.
16 Ginsburg et al., OpenGL ES 3.0 Programming Guide, Addison-Wesley Professional, 2014.
17 Brothaler, OpenGL ES 2 for Android: A Quick-Start Guide, Pragmatic Bookshelf, 2013.
18 Rost et al., OpenGL Shading Language, Pearson Education, 2009.
19 Sanz-Pastor et al., Volumetric Three-Dimensional Fog Rendering Technique, U.S. Patent 6,268,861, 2001.
20 Linsen, Point Cloud Representation, Technical Report, Faculty of Computer Science, University of Karlsruhe, 2001.
21 Zhao and Zhu, Image Parsing with Stochastic Scene Grammar, In Advances in Neural Information Processing Systems, pp. 73-81, 2011.
22 Yan et al., 3D Point Cloud Map Construction based on Line Segments with Two Mutually Perpendicular Laser Sensors, In 2013 13th International Conference on Control, Automation and Systems(ICCAS 2013), IEEE, pp. 1114-1116, October 2013.
23 Camplani and Salgado, Efficient Spatio-Temporal Hole Filling Strategy for Kinect Depth Maps, In Three-Dimensional Image Processing(3DIP) and Applications Ii(Vol. 8290, p. 82900E), International Society for Optics and Photonics, January 2012.
24 Eckart et al., Accelerated Generative Models for 3D Point Cloud Data, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5497-5505, 2016.
25 Belton et al., Processing Tree Point Clouds using Gaussian Mixture Models, Proceedings of the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Antalya, Turkey, pp. 11-13, 2013.
26 Arce, Nonlinear Signal Processing: A Statistical Approach, John Wiley & Sons, 2005.
27 Touma et al., 3D Mouse and Game Controller based on Spherical Coordinates System and System for use, U.S. Patent 7,683,883, 2010.
28 Zheng et al., Beyond Point Clouds: Scene Understanding by Reasoning Geometry and Physics, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3127-3134, 2013.