• Title/Abstract/Keyword: 3D Content Rendering


Design of Security Method for Network Rendering of Augmented Reality Object (홀로그램 용 증강현실 객체의 네트워크 랜더링을 위한 보안 기법 설계)

  • Kim, Seoksoo;Kim, Donghyun
    • Journal of Convergence for Information Technology
    • /
    • v.9 no.1
    • /
    • pp.92-98
    • /
    • 2019
  • Due to the development of hologram display technology, various studies are being conducted to provide realistic content for augmented reality. Because a holographic HMD must render augmented reality objects on a small processor, low-capacity content is required. To address this, a technique is needed that renders objects from resources provided over a network. In existing augmented reality systems, resources are loaded and rendered from internal storage, so content tampering is not an issue. When resources are provided over a network, however, security problems such as content tampering and malicious code insertion must be considered. Therefore, in this paper, we propose a network rendering technique that applies security measures to provide augmented reality content on a holographic HMD device.
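
A minimal, hypothetical sketch of the kind of integrity check such a scheme implies is shown below: a rendering resource fetched over the network is verified against a SHA-256 digest assumed to come from a trusted, separately delivered manifest before it is handed to the renderer. Function and parameter names are illustrative, not the paper's design.

```python
import hashlib
import urllib.request

def fetch_verified_resource(url: str, expected_sha256: str) -> bytes:
    """Download a rendering resource and reject it if it was tampered with.

    Sketch only: the expected digest is assumed to be obtained over a
    trusted channel (e.g. a signed manifest), which is where the actual
    security design of the paper would come in.
    """
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        # Tampered or corrupted content is never passed to the renderer.
        raise ValueError(f"integrity check failed for {url}")
    return data
```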

Development of 3D Stereoscopic Image Generation System Using Real-time Preview Function in 3D Modeling Tools

  • Yun, Chang-Ok;Yun, Tae-Soo;Lee, Dong-Hoon
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.6
    • /
    • pp.746-754
    • /
    • 2008
  • A 3D stereoscopic image is conventionally generated by rendering each scene from two camera views in 3D modeling tools such as Autodesk MAX(R) and Autodesk MAYA(R) and then interleaving the views with video editing tools. However, the depth of objects in a static scene and a continuous stereo effect under view transformations are not reproduced naturally, because the user must first choose an arbitrary convergence angle and a distance between the model and the two cameras and then render the view from both cameras. The user therefore has to adjust the camera interval and re-render repeatedly, which takes too much time. In this paper, we propose a 3D stereoscopic image editing system that solves these problems, and we also expose its inherent limitations. The system generates the two camera views and confirms the stereo effect in real time inside the 3D modeling tools, so the user can intuitively judge the immersion of the 3D stereoscopic image in real time using the stereoscopic preview function.
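
As an illustration of the camera interval and convergence angle the abstract refers to, the following is a minimal sketch of a toed-in stereo camera rig: each camera is offset by half the interaxial distance and yawed toward a common convergence point. Units, values, and names are assumptions, not the paper's implementation.

```python
import math

def stereo_camera_setup(interaxial: float, convergence_dist: float):
    """Place a left/right camera pair for a toed-in stereo rig.

    Sketch of the geometry only: returns each camera's lateral offset and
    the yaw that makes both optical axes meet at the convergence distance
    on the central axis.
    """
    half = interaxial / 2.0
    yaw = math.degrees(math.atan2(half, convergence_dist))
    left = {"x_offset": -half, "yaw_deg": +yaw}
    right = {"x_offset": +half, "yaw_deg": -yaw}
    return left, right

# Example: 6.5 cm interaxial distance, convergence plane 200 cm away.
print(stereo_camera_setup(interaxial=6.5, convergence_dist=200.0))
```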


3D Object Generation and Renderer System based on VAE ResNet-GAN

  • Min-Su Yu;Tae-Won Jung;GyoungHyun Kim;Soonchul Kwon;Kye-Dong Jung
    • International journal of advanced smart convergence
    • /
    • v.12 no.4
    • /
    • pp.142-146
    • /
    • 2023
  • We present a method for generating 3D structures and rendering objects by combining a VAE (Variational Autoencoder) and a GAN (Generative Adversarial Network). The approach focuses on generating and rendering 3D models of improved quality by applying residual learning to the encoder. To reflect image features accurately, the encoder layers are stacked deep, and residual blocks are applied to overcome the problems of deep networks, namely vanishing and exploding gradients, thereby improving encoder performance and allowing the model to learn more detailed information. The generated model has more detailed voxels for a more accurate representation, is rendered with materials and lighting, and is finally converted into a mesh model. The resulting 3D models have high visual quality and accuracy, making them useful in fields such as virtual reality, game development, and the metaverse.
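
A minimal PyTorch sketch of the residual-block idea the abstract applies to the encoder follows; channel counts and layer choices are assumptions, not the paper's architecture.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A basic 2D residual block, illustrating the residual learning
    applied to the encoder; sizes are assumptions, not the paper's."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The skip connection lets gradients bypass the stacked layers,
        # mitigating vanishing/exploding gradients in a deep encoder.
        return self.relu(out + identity)
```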

Relighting 3D Scenes with a Continuously Moving Camera

  • Kim, Soon-Hyun;Kyung, Min-Ho;Lee, Joo-Haeng
    • ETRI Journal
    • /
    • v.31 no.4
    • /
    • pp.429-437
    • /
    • 2009
  • This paper proposes a novel technique for 3D scene relighting under interactive viewpoint changes. The technique is based on a deep-framebuffer framework for fast relighting computation and adopts image-based techniques to support arbitrary view changes. In the preprocessing stage, the shading parameters required by the surface shaders, such as surface color, normal, depth, ambient/diffuse/specular coefficients, and roughness, are cached into multiple deep framebuffers generated by several automatically created caching cameras. When the user designs the lighting setup, the relighting renderer builds a map connecting each screen pixel of the current rendering camera to the corresponding deep-framebuffer pixel and then computes illumination at each pixel from the cached values. All relighting computations except the deep-framebuffer pre-computation are carried out at interactive rates on the GPU.
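
The per-pixel relighting step can be illustrated with a minimal Blinn-Phong evaluation in which every surface input is read from the cached deep framebuffer and only the light changes; this is a sketch under that assumption, not the paper's actual surface shaders.

```python
import numpy as np

def relight_pixel(albedo, normal, kd, ks, shininess,
                  light_dir, light_color, view_dir):
    """Re-evaluate shading for one deep-framebuffer pixel.

    All surface inputs (albedo, normal, coefficients, roughness proxy)
    are assumed to come from the cached framebuffer; only the light and
    view vary. Blinn-Phong is used purely as an illustration.
    """
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = (l + v) / np.linalg.norm(l + v)
    diffuse = kd * max(np.dot(n, l), 0.0) * albedo
    specular = ks * max(np.dot(n, h), 0.0) ** shininess
    return light_color * (diffuse + specular)
```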

A Study on the Effective Production of Game Weapons Using ZBrush

  • YunChao Yang;Xinyi Shan;Jeanhun Chung
    • International Journal of Advanced Culture Technology
    • /
    • v.11 no.2
    • /
    • pp.397-402
    • /
    • 2023
  • With the rapid adoption of 5G, the gaming industry has undergone significant innovation, and the quality of game content and the player experience have become focal points of attention. ZBrush, a professional digital sculpting package, plays a crucial role in the production of 3D game models. In this paper, we explore application methods and techniques of ZBrush in game weapon production through specific case analyses. We provide a detailed analysis of two game weapon models, discussing the design and modeling process, low-to-high poly conversion, UV unwrapping and texture baking, material and texture creation and optimization, and final rendering. By comparing the production processes and analyzing the advantages and disadvantages of ZBrush, we establish a theoretical foundation for further design research and provide reference material for game industry professionals, aiming at higher quality and efficiency in 3D game model production.

Rendering Quality Improvement Method based on Depth and Inverse Warping (깊이정보와 역변환 기반의 포인트 클라우드 렌더링 품질 향상 방법)

  • Lee, Heejea;Yun, Junyoung;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.26 no.6
    • /
    • pp.714-724
    • /
    • 2021
  • Point cloud content is immersive content recorded by acquiring points and colors with three-dimensional position information from real environments and objects. When point cloud content consisting of three-dimensional points with position and color information is enlarged and rendered, the gaps between points widen and empty holes appear. In this paper, we propose a method that finds the holes caused by these gaps and improves rendering quality through inverse-transformation-based interpolation using depth information. Back-surface points showing through the holes hinder the interpolation, so points belonging to the back side of the point cloud are removed first. Next, a depth map is extracted at the viewpoint where the holes occur. Finally, an inverse transform is performed to fetch pixels from the original data. Rendering content with the proposed method improved quality by 1.2 dB in average PSNR compared to the conventional method of enlarging point size to fill the blank areas.
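
A minimal sketch of the depth-based inverse warping step follows: a hole pixel is unprojected using its depth, transformed into the original (source) view, and the source color is sampled there. Matrix and parameter names are illustrative, not the paper's notation.

```python
import numpy as np

def fill_hole_by_inverse_warp(u, v, depth, K_render,
                              pose_render_to_src, K_src, src_image):
    """Fill one hole pixel by inverse warping into the source view.

    Sketch only: K_render/K_src are 3x3 intrinsics, pose_render_to_src is
    a 4x4 transform, src_image is an HxWx3 array. Bounds checking and
    occlusion handling are omitted.
    """
    # Unproject the hole pixel to a 3D point in the rendering camera frame.
    x = (u - K_render[0, 2]) * depth / K_render[0, 0]
    y = (v - K_render[1, 2]) * depth / K_render[1, 1]
    p_render = np.array([x, y, depth, 1.0])

    # Transform into the source camera frame and project.
    p_src = pose_render_to_src @ p_render
    u_src = K_src[0, 0] * p_src[0] / p_src[2] + K_src[0, 2]
    v_src = K_src[1, 1] * p_src[1] / p_src[2] + K_src[1, 2]

    # Nearest-neighbour sampling of the original colour.
    return src_image[int(round(v_src)), int(round(u_src))]
```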

A Progressive Rendering Method to Enhance the Resolution of Point Cloud Contents (포인트 클라우드 콘텐츠 해상도 향상을 위한 점진적 렌더링 방법)

  • Lee, Heejea;Yun, Junyoung;Kim, Jongwook;Kim, Chanhee;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.26 no.3
    • /
    • pp.258-268
    • /
    • 2021
  • Point cloud content is immersive content that represents real-world objects with three-dimensional (3D) points. In the process of acquiring point cloud data, or of encoding and decoding it, the resolution of the content can be degraded. In this paper, we propose a method that progressively enhances the resolution of sequential point cloud content through inter-frame registration. The iterative closest point (ICP) algorithm is commonly used to register point clouds. Existing ICP algorithms can handle rigid transformations, but they cannot register non-rigid data such as point cloud content, in which different local regions have motion vectors in different directions. We overcome this limitation by registering, region by region, the parts of the previous frame and the current frame that move in different directions. In this manner, the resolution of point cloud content with geometric movement is enhanced by registering points across frames. In the experiments, we present four different point cloud contents enhanced with our method.
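
The per-region registration idea can be sketched with an off-the-shelf point-to-point ICP call; the example below assumes Open3D and is only an illustration, since the paper's contribution lies in handling local regions with differing motion vectors.

```python
import numpy as np
import open3d as o3d

def register_region(prev_region_pts, curr_frame_pts, threshold=0.02):
    """Rigidly register one local region of the previous frame onto the
    current frame with point-to-point ICP (sketch, not the paper's code).

    Inputs are Nx3 numpy arrays; `threshold` is the maximum
    correspondence distance.
    """
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(prev_region_pts)
    dst = o3d.geometry.PointCloud()
    dst.points = o3d.utility.Vector3dVector(curr_frame_pts)

    result = o3d.pipelines.registration.registration_icp(
        src, dst, threshold, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())

    # Transformed previous-frame points can then be merged into the
    # current frame to densify (enhance the resolution of) its cloud.
    src.transform(result.transformation)
    return np.asarray(src.points)
```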

Research on Utilizing Volumetric Studio for XR Content Production (XR 콘텐츠 제작을 위한 볼류메트릭 스튜디오 활용 연구)

  • Sukchang Lee;Won Ho Choi
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.5
    • /
    • pp.849-857
    • /
    • 2023
  • The volumetric studio is catalyzing the expansion of the XR content market, and demand for in-depth research on volumetric capture technology is rising accordingly. This research examines the methodology and outcomes of capturing dancers' movements as 3D video images. It also examines practical applications of volumetric capture technology by assessing the infrastructure and operational workflow of a studio specializing in this domain, and derives significant findings from that assessment. Notably, the research highlights constraints associated with video image distortion and extended rendering durations within the volumetric studio system.

Hologram based Internet of Signage Design Using Raspberry Pi

  • Timur, Khudaybergenov;Han, Jungdo;Cha, Jae-Sang
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.12
    • /
    • pp.35-41
    • /
    • 2019
  • This paper proposes the design of a remotely controllable, hologram-based interactive signage. The general idea is to build the hologram signage on the Raspberry Pi hardware platform, with an Intel RealSense R200 camera providing interaction. Remote content management is based on the Screenly software solution, and OpenCV-based methods are used to control content from the spectator's side. The work describes a 3D content rendering algorithm based on the Unity 5 game engine. An experimental model was built to design the Internet of Signage (IoS), to visualize 3D data, and to introduce a new method for visualizing and displaying 3D data on a hologram pyramid signage. A description of the working model of the hologram signage is given in this paper.
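
As a hedged illustration of the display side of such a system, the sketch below composes four pre-rendered views into the cross layout commonly used by 4-sided hologram pyramid displays; it does not reproduce the paper's Unity 5 pipeline, and the layout, sizes, and orientations are assumptions.

```python
import numpy as np

def compose_pyramid_frame(front, right, back, left, canvas_size=1080):
    """Arrange four pre-rendered views (HxWx3 uint8 arrays) into a cross
    layout for a 4-sided hologram pyramid.

    Sketch only: assumes each view fits within one quadrant of the square
    canvas, and that the display expects back/front on the top/bottom
    edges and rotated side views on the left/right edges.
    """
    h, w, _ = front.shape
    canvas = np.zeros((canvas_size, canvas_size, 3), dtype=np.uint8)
    cx = canvas_size // 2
    cy = canvas_size // 2

    canvas[0:h, cx - w // 2: cx - w // 2 + w] = back
    canvas[canvas_size - h:canvas_size, cx - w // 2: cx - w // 2 + w] = front
    canvas[cy - w // 2: cy - w // 2 + w, 0:h] = np.rot90(left)
    canvas[cy - w // 2: cy - w // 2 + w, canvas_size - h:canvas_size] = np.rot90(right, 3)
    return canvas
```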

Implementation of Stereoscopic 3D Video Player System Having Less Visual Fatigue and Its Computational Complexity Analysis for Real-Time Processing (시청피로 저감형 S3D 영상 재생 시스템 구현 및 실시간 처리를 위한 알고리즘 연산량 분석)

  • Lee, Jaesung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.12
    • /
    • pp.2865-2874
    • /
    • 2013
  • Recently, most of movies top-ranked in the box office are screening in Stereoscopic 3D, and the world's leading electronics companies such as Samsung and LG are getting the hots for 3DTV sales. However, each person has different binocular disparity and different viewing distance, and thus he or she feels the severe visual fatigue and headaches if he or she is watching 3D content with the same binocular disparity, which is very different from things he or she feels in the real world. To solve this problem, this paper proposes and implement a 3D rendering system that correct the disparity of 3D content by reflecting binocular distance and viewing distance. Then, the computational complexity is analyzed. Optical-flow and Warping algorithms turn out to consume 732 seconds and 5.7 seconds per frame, respectively. Therefore, a dedicated chip-set for both blocks is strongly required for real-time HD 3D display.