• Title/Summary/Keyword: 3D cloud rendering

Performance Analysis of Cloud Rendering Based on Web Real-Time Communication

  • Lim, Gyubeom; Hong, Sukjun; Lee, Seunghyun; Kwon, Soonchul
    • International Journal of Internet, Broadcasting and Communication / v.14 no.3 / pp.276-284 / 2022
  • In this paper, we implement cloud rendering using WebRTC for high-quality AR and VR services. Cloud rendering, an application of cloud computing, efficiently handles the rendering of large volumes of 3D content. Conventional VR and AR services require the 3D content to be downloaded, so the download time grows as the content size increases. Cloud rendering instead streams rendered views according to the user's viewpoint, so a stable service is possible regardless of content size. We implemented cloud rendering with WebRTC and analyzed its performance, comparing the latency of 100 MB, 300 MB, and 500 MB 3D AR content in 100 Mbps and 300 Mbps network environments. The analysis showed that cloud rendering maintained stable latency regardless of data volume, whereas the conventional download method showed increasing latency as the data volume grew. These results quantitatively evaluate the stability of cloud rendering and are expected to contribute to high-quality VR and AR services.
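
The abstract's core argument, that download latency scales with content size while streamed rendering does not, can be illustrated with a back-of-the-envelope calculation. The sketch below is not from the paper; the per-frame streaming payload and round-trip time are illustrative assumptions.

```python
# Rough latency model comparing "download the 3D content" vs. "stream rendered frames".
# The streaming payload size and round-trip time are illustrative assumptions,
# not figures from the paper.

def download_latency_s(content_mb: float, bandwidth_mbps: float) -> float:
    """Time to transfer the whole 3D asset before anything can be shown."""
    return (content_mb * 8) / bandwidth_mbps  # MB -> Mbit, divided by Mbit/s


def streaming_latency_s(frame_kb: float, bandwidth_mbps: float, rtt_s: float) -> float:
    """Per-view latency of cloud rendering: one encoded frame plus a network round trip.
    Independent of the total 3D content size, which stays on the server."""
    return (frame_kb * 8 / 1000) / bandwidth_mbps + rtt_s


for content_mb in (100, 300, 500):
    for bandwidth in (100, 300):
        dl = download_latency_s(content_mb, bandwidth)
        st = streaming_latency_s(frame_kb=150, bandwidth_mbps=bandwidth, rtt_s=0.05)
        print(f"{content_mb} MB @ {bandwidth} Mbps: download {dl:5.1f} s, streamed view {st:.3f} s")
```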

Massive 3D Point Cloud Visualization by Generating Artificial Center Points from Multi-Resolution Cube Grid Structure (다단계 정육면체 격자 기반의 가상점 생성을 통한 대용량 3D point cloud 가시화)

  • Yang, Seung-Chan; Han, Soo Hee; Heo, Joon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.4 / pp.335-342 / 2012
  • 3D point clouds are widely used in architecture, civil engineering, medicine, computer graphics, and many other fields. With the improvement of 3D laser scanners, massive 3D point clouds whose file sizes exceed a computer's memory require efficient preprocessing and visualization. We suggest a data structure to solve this problem: during preprocessing, a 3D point cloud is progressively subdivided by a multi-resolution cube grid structure, and a point cloud subset is generated from the center of each grid cell as artificial points. A massive 3D point cloud file is tested with two algorithms, QSplat and ours. Our grid-based algorithm was slower in preprocessing but rendered faster than QSplat. It is also designed to support editing and segmentation using the original coordinates of the 3D point cloud.
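
The preprocessing step described above, subdividing the cloud into cube grids and keeping one representative point per cell, can be sketched with NumPy. The cell sizes and the use of plain cell centers (rather than any paper-specific weighting) are assumptions for illustration.

```python
import numpy as np

def grid_center_points(points: np.ndarray, cell_size: float) -> np.ndarray:
    """Quantize points into a cube grid of the given cell size and return one
    artificial point per occupied cell (the cell center). Repeating this with
    progressively smaller cell sizes yields a multi-resolution hierarchy."""
    # Integer cell index of every point along x, y, z.
    cells = np.floor(points / cell_size).astype(np.int64)
    # Keep one entry per occupied cell.
    unique_cells = np.unique(cells, axis=0)
    # Artificial point = geometric center of the cell.
    return (unique_cells + 0.5) * cell_size

# Example: three resolution levels of the same cloud.
points = np.random.rand(1_000_000, 3) * 100.0   # placeholder for a scanned cloud
for level, size in enumerate([10.0, 5.0, 2.5]):
    centers = grid_center_points(points, size)
    print(f"level {level}: cell {size} m -> {len(centers)} artificial points")
```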

Point Cloud Data Driven Level of detail Generation in Low Level GPU Devices (Low Level GPU에서 Point Cloud를 이용한 Level of detail 생성에 대한 연구)

  • Kam, JungWon; Gu, BonWoo; Jin, KyoHong
    • Journal of the Korea Institute of Military Science and Technology / v.23 no.6 / pp.542-553 / 2020
  • Virtual worlds and simulations require large-scale map rendering, but rendering too many vertices is computationally complex and time-consuming. Some game development companies build 3D LOD objects for high-speed rendering based on the distance between the camera and the 3D object, and terrain physics simulation researchers need a way to recover the original object shape from such LOD objects. In this paper, we propose a simple automatic LOD framework using point cloud data (PCD), where the PCD is created by casting orthographic rays from six directions. Various experiments are performed to validate the effectiveness of the proposed method. We hope the proposed automatic LOD generation framework can play an important role in game development and terrain physics simulation.
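
One way to read the six-direction orthographic sampling is: for every pixel of six axis-aligned orthographic views, keep the first surface point hit along the view axis. The sketch below approximates this on a vertex set with simple 2D binning; the bin resolution is an assumption, and the paper's exact ray casting is not reproduced.

```python
import numpy as np

def six_view_pcd(vertices: np.ndarray, resolution: int = 128) -> np.ndarray:
    """Approximate 6-direction orthographic sampling: for each of the +/-x, +/-y, +/-z
    views, bin points on the two remaining axes and keep the point closest to the
    viewer in each bin. The union of the six views is the reduced point cloud."""
    mins, maxs = vertices.min(axis=0), vertices.max(axis=0)
    span = np.maximum(maxs - mins, 1e-9)
    kept = []
    for axis in range(3):                      # view axis: x, y or z
        uv_axes = [a for a in range(3) if a != axis]
        # Bin index on the two image axes of the orthographic view.
        uv = ((vertices[:, uv_axes] - mins[uv_axes]) / span[uv_axes] * (resolution - 1)).astype(int)
        bin_id = uv[:, 0] * resolution + uv[:, 1]
        for sign in (+1, -1):                  # looking along +axis and -axis
            depth = sign * vertices[:, axis]
            order = np.argsort(depth)          # nearest-to-viewer first
            _, first = np.unique(bin_id[order], return_index=True)
            kept.append(vertices[order[first]])
    return np.unique(np.concatenate(kept), axis=0)

verts = np.random.rand(200_000, 3)             # placeholder for mesh vertices
print(six_view_pcd(verts).shape)
```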

Development of a Remote Rendering System using Direct3D API (Direct3D API의 원격 실시간 실행 시스템 개발)

  • Lim, Choong-Gyoo
    • Journal of Korea Game Society / v.14 no.5 / pp.117-126 / 2014
  • A remote execution system for legacy 3D APIs has various applications: it can be used to implement a cloud gaming service based on real-time video streaming, or to implement GPU virtualization for simultaneously rendering many different 3D applications. While the OpenGL API consists of independent global functions, the Direct3D API consists of Microsoft COM-based interfaces and their member functions, which makes implementing a remote rendering system more difficult. The purpose of this paper is to show that the technology is applicable to any legacy 3D API by designing and implementing a remote rendering system for the Direct3D API. The implementation is applied to a sample Direct3D application, and a few experiments are performed to show its technical feasibility.
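
The basic mechanism such a system needs, intercepting each API call on the application side, serializing it, and replaying it on the rendering host, can be sketched language-agnostically. The command names and wire format below are purely illustrative; the paper wraps the actual COM-based Direct3D interfaces, which is not reproduced here.

```python
import json
import struct

# --- Application side: a proxy that records calls instead of executing them locally ---
class RemoteDeviceProxy:
    """Stands in for a 3D device object; every method call is serialized into a
    command stream to be shipped to the rendering host. The method names used
    here (clear, draw_triangles, present) are illustrative, not Direct3D's."""
    def __init__(self):
        self.stream = bytearray()

    def _emit(self, command: str, **args):
        payload = json.dumps({"cmd": command, "args": args}).encode()
        self.stream += struct.pack("<I", len(payload)) + payload  # length-prefixed frame

    def clear(self, r, g, b):         self._emit("clear", color=[r, g, b])
    def draw_triangles(self, verts):  self._emit("draw_triangles", verts=verts)
    def present(self):                self._emit("present")

# --- Rendering host: decode the stream and dispatch to the real renderer ---
def replay(stream: bytes, renderer: dict):
    offset = 0
    while offset < len(stream):
        (length,) = struct.unpack_from("<I", stream, offset)
        record = json.loads(stream[offset + 4: offset + 4 + length])
        renderer[record["cmd"]](**record["args"])   # call the real implementation
        offset += 4 + length

proxy = RemoteDeviceProxy()
proxy.clear(0, 0, 0)
proxy.draw_triangles([[0, 0, 0], [1, 0, 0], [0, 1, 0]])
proxy.present()
handlers = {name: (lambda name=name, **a: print(name, a))
            for name in ("clear", "draw_triangles", "present")}
replay(bytes(proxy.stream), handlers)
```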

GPU-based modeling and rendering techniques of 3D clouds using procedural functions (절차적 함수를 이용한 GPU기반 실시간 3D구름 모델링 및 렌더링 기법)

  • Sung, Mankyu
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.4 / pp.416-422 / 2019
  • This paper proposes GPU-based modeling and rendering of 3D clouds using procedural functions. Cloud formation is based on a modified noise function built from fBm (fractional Brownian motion). The noise values are converted into liquid-water droplet densities, a critical parameter for forming three different cloud types. At the rendering stage, the algorithm applies ray marching to decide cloud colors from the density values obtained from the noise function, with light attenuation and scattering computed in a physically based manner. The clouds are then blended into a sky that is also rendered physically, and they are moved across the sky by wind force. All algorithms are implemented and tested on the GPU using GLSL.
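
The pipeline described above, procedural density from fBm noise followed by ray marching that accumulates attenuation, is sketched below in NumPy rather than GLSL. The value-noise lattice, coverage threshold, and extinction coefficient are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
lattice = rng.random((64, 64, 64))   # value-noise lattice (stand-in for a proper gradient noise)

def value_noise(p):
    """Lookup into the random lattice (kept to nearest-neighbor for brevity)."""
    idx = np.floor(p).astype(int) % 64
    return lattice[idx[..., 0], idx[..., 1], idx[..., 2]]

def fbm(p, octaves=5, lacunarity=2.0, gain=0.5):
    """Fractional Brownian motion: sum of noise octaves with increasing frequency
    and decreasing amplitude."""
    total, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amp * value_noise(p * freq)
        amp *= gain
        freq *= lacunarity
    return total

def density(p, coverage=0.9):
    """Cloud droplet density: fBm reshaped by a coverage threshold."""
    return np.maximum(fbm(p) - coverage, 0.0)

def march_ray(origin, direction, steps=64, step_len=0.5, sigma_t=1.5):
    """Ray marching: accumulate in-scattered light and Beer-Lambert attenuation."""
    transmittance, radiance = 1.0, 0.0
    p = np.array(origin, dtype=float)
    for _ in range(steps):
        d = float(density(p))
        radiance += transmittance * d * step_len          # crude in-scattering term
        transmittance *= np.exp(-sigma_t * d * step_len)  # attenuation
        p = p + step_len * np.array(direction)
    return radiance, transmittance

print(march_ray(origin=[0.0, 0.0, 0.0], direction=[0.0, 0.0, 1.0]))
```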

A 2-Tier Server Architecture for Real-time Multiple Rendering (실시간 다중 렌더링을 위한 이중 서버 구조)

  • Lim, Choong-Gyoo
    • Journal of Korea Game Society / v.12 no.4 / pp.13-22 / 2012
  • The widespread use of broadband Internet service makes cloud computing-based gaming services possible. A game program is executed on a cloud node and its live image is delivered to a remote user's display device via video streaming, while the user's input is immediately transmitted back and applied to the game. The service is feasible because the time to process the remote user's input and return the live image can be minimized, satisfying the instant-responsiveness requirement of gaming. However, building such servers for high-quality 3D games can be very expensive, because the general-purpose graphics systems that cloud nodes typically use support only a single 3D application at a time. The server therefore needs 'real-time multiple rendering' technology to execute multiple 3D games simultaneously. This paper proposes a new 2-tier architecture of cloud nodes in which one group executes multiple games and the other produces the games' live images, and performs a few experiments to prove the feasibility of the new architecture.
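
The division of labor proposed above, one tier running many game instances and a second tier turning their frames into video streams, can be sketched as two cooperating services. The queue-based hand-off and the class and method names below are illustrative assumptions about how such a split might look, not the paper's implementation.

```python
import queue
import threading
import time

frame_queue: "queue.Queue[tuple[int, bytes]]" = queue.Queue()

class GameExecutionNode(threading.Thread):
    """Tier 1: runs several game instances and pushes their raw frames to tier 2."""
    def __init__(self, game_ids):
        super().__init__(daemon=True)
        self.game_ids = game_ids

    def run(self):
        for tick in range(3):                      # a few simulated game ticks
            for gid in self.game_ids:
                raw_frame = f"frame {tick} of game {gid}".encode()
                frame_queue.put((gid, raw_frame))
            time.sleep(0.01)

class StreamingNode(threading.Thread):
    """Tier 2: encodes frames and streams them to the matching remote player."""
    def run(self):
        while True:
            gid, raw = frame_queue.get()
            encoded = raw[::-1]                    # placeholder for video encoding
            print(f"stream to player of game {gid}: {len(encoded)} bytes")
            frame_queue.task_done()

GameExecutionNode(game_ids=[1, 2, 3]).start()
StreamingNode(daemon=True).start()
time.sleep(0.2)
frame_queue.join()
```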

Implementation of AR Remote Rendering Techniques for Real-time Volumetric 3D Video

  • Lee, Daehyeon; Lee, Munyong; Lee, Sang-ha; Lee, Jaehyun; Kwon, Soonchul
    • International Journal of Internet, Broadcasting and Communication / v.12 no.2 / pp.90-97 / 2020
  • Recently, with the growth of mixed reality industrial infrastructure, related convergence research has been proposed. Real-time mixed reality services such as remote video conferencing require research on real-time acquisition, processing, and transfer methods. This paper implements an AR remote rendering method for volumetric 3D video data. We propose and implement two modules: a module that parses the volumetric 3D video into a game engine, and a server rendering module. The experiment showed that volumetric 3D video sequence data of about 15 MB was compressed by 6-7%, and the remote module was streamed at 27 fps at a 1200 by 1200 resolution. The results of this paper are expected to be applied to AR cloud services.
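
The two modules described above, parsing the volumetric frames into an engine and rendering/streaming them on the server, correspond to a per-frame pipeline roughly like the loop below. The 1200x1200 target comes from the abstract; the function names, data fields, and placeholder encoder are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class VolumetricFrame:
    vertices: bytes      # packed 3D point/mesh data for one time step
    colors: bytes

def parse_to_engine(raw_sequence):
    """Module 1 (assumed shape): turn each raw volumetric time step into an
    engine-side frame object."""
    for raw in raw_sequence:
        yield VolumetricFrame(vertices=raw["geometry"], colors=raw["texture"])

def server_render(frame: VolumetricFrame, width=1200, height=1200) -> bytes:
    """Module 2 (assumed shape): render the frame from the AR client's pose and
    return an encoded image; here just a placeholder byte string."""
    return b"encoded-" + str(len(frame.vertices)).encode()

def stream(raw_sequence, send):
    """Per-frame loop: parse -> render -> push to the AR client (target ~27 fps)."""
    for frame in parse_to_engine(raw_sequence):
        send(server_render(frame))

stream([{"geometry": b"\x00" * 1024, "texture": b"\x00" * 256}] * 3, send=print)
```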

Client Rendering Method for Desktop Virtualization Services

  • Jang, Su Min; Choi, Won Hyuk; Kim, Won Young
    • ETRI Journal / v.35 no.2 / pp.348-351 / 2013
  • Cloud computing has recently become a significant technology trend in the IT field. Among the related technologies, desktop virtualization has been applied to various commercial applications since it provides many advantages, such as lower maintenance and operation costs and higher utilization. However, existing solutions offer very limited performance for 3D graphics applications. We therefore propose a novel method in which rendering commands are not executed at the host server but are instead delivered to the client over the network and executed by the client's graphics device. This method significantly reduces server overhead and makes it possible to provide a stable service at low cost. The results of various experiments show that the proposed method outperforms the existing solutions.
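
In contrast to server-side remote rendering, the method above forwards the rendering commands to the client and lets the client's GPU execute them. A minimal receive-and-dispatch loop on the client side might look like the sketch below; the socket framing and command set are illustrative assumptions, and the host-side capture would mirror the command-stream proxy sketched earlier for the Direct3D paper.

```python
import json
import socket
import struct

def client_render_loop(host: str, port: int, gl_dispatch: dict):
    """Receive length-prefixed rendering commands from the virtualized desktop host
    and execute them on the local graphics device via the supplied dispatch table."""
    with socket.create_connection((host, port)) as sock:
        while True:
            header = sock.recv(4)
            if len(header) < 4:
                break                                   # host closed the session
            (length,) = struct.unpack("<I", header)
            payload = b""
            while len(payload) < length:
                chunk = sock.recv(length - len(payload))
                if not chunk:
                    return
                payload += chunk
            record = json.loads(payload)
            gl_dispatch[record["cmd"]](*record.get("args", []))  # run on the local GPU

# Usage (hypothetical host name and command set; real dispatch entries would wrap
# OpenGL/Direct3D calls):
# client_render_loop("vdi-host.example", 7000,
#                    {"clear": my_clear, "draw": my_draw, "swap_buffers": my_swap})
```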

A Progressive Rendering Method to Enhance the Resolution of Point Cloud Contents (포인트 클라우드 콘텐츠 해상도 향상을 위한 점진적 렌더링 방법)

  • Lee, Heejea; Yun, Junyoung; Kim, Jongwook; Kim, Chanhee; Park, Jong-Il
    • Journal of Broadcast Engineering / v.26 no.3 / pp.258-268 / 2021
  • Point cloud content is immersive content that represents real-world objects with three-dimensional (3D) points. Its resolution can be degraded while acquiring, encoding, or decoding the point cloud data. In this paper, we propose a method for progressively enhancing the resolution of sequential point cloud content through inter-frame registration. The iterative closest point (ICP) algorithm is commonly used to register point clouds, but existing ICP algorithms handle only rigid transformations and cannot transform non-rigid bodies, such as point cloud content, whose local regions move in different directions. We overcome this limitation by registering regions with locally different motion vectors between the point cloud content of the current frame and that of the previous frame. In this manner, the resolution of point cloud content with geometric movement is enhanced by registering points between frames. In the experiments, we provide four different point cloud contents enhanced with our method.
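
As a baseline for the inter-frame registration step, rigid ICP between the previous and current frame followed by a point merge can be written with Open3D. This is only the standard rigid ICP that the abstract says is insufficient for locally different motions; the paper's region-wise extension is not reproduced, and the distance threshold and voxel size are assumptions.

```python
import numpy as np
import open3d as o3d

def merge_previous_frame(prev: o3d.geometry.PointCloud,
                         curr: o3d.geometry.PointCloud,
                         max_corr_dist: float = 0.05,
                         voxel: float = 0.005) -> o3d.geometry.PointCloud:
    """Rigidly register the previous frame onto the current one with point-to-point
    ICP, then merge the aligned points to densify the current frame."""
    result = o3d.pipelines.registration.registration_icp(
        prev, curr, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    aligned_prev = prev.transform(result.transformation)
    merged = curr + aligned_prev                     # concatenates the two clouds
    return merged.voxel_down_sample(voxel)           # remove near-duplicate points

# Usage (hypothetical file names):
# prev = o3d.io.read_point_cloud("frame_000.ply")
# curr = o3d.io.read_point_cloud("frame_001.ply")
# denser = merge_previous_frame(prev, curr)
```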

3D Cloud Animation using Cloud Modeling Method of 2D Meteorological Satellite Images (2차원 기상 위성 영상의 구름 모델링 기법을 이용한 3차원 구름 애니메이션)

  • Lee, Jeong-Jin; Kang, Moon-Koo; Lee, Ho; Shin, Byeong-Seok
    • Journal of Korea Game Society / v.10 no.1 / pp.147-156 / 2010
  • In this paper, we propose 3D cloud animation based on a cloud modeling method for 2D images retrieved from a meteorological satellite. First, we place numerous control points on the satellite images and perform thin-plate spline warping between consecutive frames to model cloud motion. In addition, the visible and infrared spectral channels are used to determine the amount and altitude of clouds for 3D cloud reconstruction. A pre-integrated volume rendering method is used to achieve seamless inter-laminar shading in real time with a small number of slices of the volume data. The proposed method successfully constructs continuously moving 3D clouds from 2D satellite images at an acceptable speed and image quality.
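
The motion-modeling step, thin-plate spline warping between control points tracked across consecutive satellite frames, can be sketched with SciPy's RBF interpolator, whose 'thin_plate_spline' kernel uses the same spline. The control-point coordinates below are invented for illustration; the paper derives them from the satellite imagery.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Matched control points in two consecutive satellite frames (illustrative values).
src = np.array([[10.0, 12.0], [40.0, 15.0], [25.0, 40.0], [60.0, 55.0], [75.0, 20.0]])
dst = np.array([[12.0, 14.0], [43.0, 14.0], [27.0, 43.0], [63.0, 57.0], [78.0, 22.0]])

# Thin-plate spline mapping from frame t to frame t+1 (x and y fitted jointly).
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")

# Warp a dense grid of pixel coordinates; intermediate cloud positions for the
# animation can be obtained by blending the original toward the warped coordinates.
h, w = 64, 96
ys, xs = np.mgrid[0:h, 0:w]
pixels = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
warped = tps(pixels)                 # where each pixel of frame t moves by frame t+1

alpha = 0.5                          # halfway between the two frames
intermediate = (1 - alpha) * pixels + alpha * warped
print(intermediate.reshape(h, w, 2).shape)
```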