http://dx.doi.org/10.4218/etrij.2021-0300

Voxel-wise UV parameterization and view-dependent texture synthesis for immersive rendering of truncated signed distance field scene model  

Kim, Soowoong (Media Coding Research Section, Telecommunications & Media Research Laboratory, Electronics and Telecommunications Research Institute)
Kang, Jungwon (Media Coding Research Section, Telecommunications & Media Research Laboratory, Electronics and Telecommunications Research Institute)
Publication Information
ETRI Journal / v.44, no.1, 2022, pp. 51-61
Abstract
In this paper, we introduce a novel voxel-wise UV parameterization and view-dependent texture synthesis method for the immersive rendering of a truncated signed distance field (TSDF) scene model. The proposed UV parameterization assigns a precomputed UV map to each voxel through a UV map lookup table, thereby enabling efficient and high-quality texture mapping without a complex parameterization process. Leveraging this UV parameterization, our view-dependent texture synthesis method extracts a set of local texture maps for each voxel from the multiview color images and separates them into a single view-independent diffuse map and a set of weight coefficients for an orthogonal specular map basis. Furthermore, the view-dependent specular map for an arbitrary viewpoint is estimated by combining the specular weights of the source views according to the locations of the arbitrary and source viewpoints, yielding view-dependent textures for arbitrary views. The experimental results demonstrate that the proposed method effectively synthesizes textures for arbitrary views, thereby enabling the visualization of view-dependent effects such as specularity and mirror reflection.
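As a rough illustration of the decomposition described above, the following minimal sketch shows one way a per-voxel set of view textures could be separated into a diffuse map plus an orthogonal specular basis and recombined for a novel viewpoint. The SVD-based separation, the viewpoint-proximity blending, and all names and parameters are assumptions for illustration, not the paper's actual formulation.

    import numpy as np

    def decompose_voxel_textures(view_textures):
        # view_textures: (N, P) array, one flattened local texture map per source view.
        diffuse = view_textures.mean(axis=0)          # view-independent diffuse map
        residual = view_textures - diffuse            # view-dependent (specular) part
        # SVD of the residuals yields an orthogonal specular map basis
        u, s, vt = np.linalg.svd(residual, full_matrices=False)
        basis = vt.T                                  # (P, K) orthogonal basis maps
        weights = u * s                               # (N, K) per-view coefficients
        return diffuse, basis, weights

    def synthesize_view(diffuse, basis, weights, src_dirs, tgt_dir, sharpness=8.0):
        # Blend per-view specular weights by viewpoint proximity (hypothetical scheme)
        # and reconstruct the texture for an arbitrary target view direction.
        src_dirs = src_dirs / np.linalg.norm(src_dirs, axis=1, keepdims=True)
        tgt_dir = tgt_dir / np.linalg.norm(tgt_dir)
        blend = np.exp(sharpness * (src_dirs @ tgt_dir))   # favor nearby source views
        blend /= blend.sum()
        target_weights = blend @ weights              # (K,) interpolated coefficients
        return diffuse + basis @ target_weights       # view-dependent texture map

    # Toy usage: 6 source views, 16x16 RGB local texture map per voxel (flattened).
    rng = np.random.default_rng(0)
    textures = rng.random((6, 16 * 16 * 3))
    dirs = rng.normal(size=(6, 3))
    diffuse, basis, weights = decompose_voxel_textures(textures)
    novel = synthesize_view(diffuse, basis, weights, dirs, np.array([0.0, 0.0, 1.0]))

In the paper itself, the weighting of the source-view coefficients is derived from the locations of the arbitrary and source viewpoints; the exponential blending above is only a stand-in for such a proximity-based scheme.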
Keywords
immersive rendering; multiview image processing; texture synthesis; view-dependent texture mapping; volumetric video representation;