Title/Summary/Keyword: 3D Content Rendering

Performance Analysis of Cloud Rendering Based on Web Real-Time Communication

  • Lim, Gyubeom; Hong, Sukjun; Lee, Seunghyun; Kwon, Soonchul
    • International Journal of Internet, Broadcasting and Communication, v.14 no.3, pp.276-284, 2022
  • In this paper, we implement cloud rendering using WebRTC for high-quality AR and VR services. Cloud rendering, an application of cloud computing, efficiently handles the rendering of large volumes of 3D content. Conventional VR and AR services require the 3D content to be downloaded, so download time grows as content size increases. Cloud rendering instead streams rendered frames according to the user's viewpoint, enabling stable service regardless of content size. We implemented cloud rendering with WebRTC and analyzed its performance, comparing the latency of 100 MB, 300 MB, and 500 MB 3D AR content in 100 Mbps and 300 Mbps network environments. The analysis showed that cloud rendering maintains stable latency regardless of data volume, whereas the conventional method's latency increases with data volume. These results quantitatively evaluate the stability of cloud rendering and are expected to contribute to high-quality VR and AR services.
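
As a rough illustration of the frame-streaming approach this abstract describes, the following browser TypeScript sketch captures a rendering canvas as a WebRTC video track and opens a data channel for viewpoint updates. It is a minimal sketch, not the paper's implementation; the signaling exchange is only stubbed, and the pose-message format (yaw/pitch/fov) and the `applyViewpoint` hook are assumptions.

```ts
// Hypothetical hook: in a real renderer this would move the virtual camera.
function applyViewpoint(view: { yaw: number; pitch: number; fov: number }): void {
  console.debug('viewpoint update', view);
}

async function startCloudRendering(canvas: HTMLCanvasElement): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
  });

  // Stream rendered frames instead of shipping the 3D asset to the client.
  const stream = canvas.captureStream(30); // capture at ~30 fps
  for (const track of stream.getVideoTracks()) pc.addTrack(track, stream);

  // The client's viewpoint flows back on an unreliable, low-latency channel.
  const pose = pc.createDataChannel('pose', { ordered: false, maxRetransmits: 0 });
  pose.onmessage = (e) => {
    const view = JSON.parse(e.data) as { yaw: number; pitch: number; fov: number };
    applyViewpoint(view); // update the camera before rendering the next frame
  };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // Send `offer` over an application-defined signaling channel and apply the
  // remote answer with pc.setRemoteDescription(...) when it arrives.
  return pc;
}
```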

A Research of Real-time Rendering Potentials on 3D Animation Production

  • Ke Ma; Jeanhun Chung
    • International Journal of Advanced Smart Convergence, v.12 no.4, pp.293-299, 2023
  • In recent years, real-time rendering technology has developed rapidly: the quality of real-time rendered images keeps improving, and its applications have expanded from games to animation, advertising, and other fields. This paper analyzes the state of real-time rendering in 3D animation through a survey of the Chinese 3D animation market, finding that the number of 3D animations produced in China has grown over the past 20 years and that the number using real-time rendering has increased year by year, now exceeding those using offline rendering. The study then compares one real-time rendered and one offline rendered 3D animation, examining the on-screen appearance of characters, visual effects, and environment props, and weighing the advantages and disadvantages of the two rendering technologies. It concludes that real-time and offline rendered 3D animation differ little in quality and overall visual impression, and that the WYSIWYG nature of real-time rendering lets animation designers focus more on artistic expression. Real-time rendering technology therefore has strong prospects and potential in 3D animation, paving the way for designers to create 3D content more efficiently.

Performance Analysis of GLTF/GLB to Improve 3D Content Rendering Performance

  • Jae Myeong Choi; Ki-Hong Park
    • Journal of Platform Technology, v.11 no.4, pp.13-18, 2023
  • 3D content rendering is one of the important factors that gives content a sense of realism, and the process is time-consuming. In this paper, we propose a method to improve rendering performance by reducing the large volume of 3D data delivered in the web environment, and we conduct performance tests using DEM (digital elevation model) elevation data and a Blender-based 3D model. In the experiments, the digital elevation model rendered faster than the Blender-based 3D model, but the frame rate dropped momentarily when the view was moved with OrbitControls. For terrain, if frustum culling and LOD techniques are used to keep the view within a range that sustains 24 to 60 fps, a higher-quality map than one based on GeoTIFF can be produced.
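
The GLTF/GLB web setting and the mention of OrbitControls, frustum culling, and LOD suggest a three.js pipeline. The TypeScript sketch below is an assumption-laden outline of that combination, not the paper's code: the `terrain_*.glb` file names and the distance thresholds are hypothetical placeholders.

```ts
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';

// Renderer, scene, camera, and OrbitControls setup.
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.set(0, 2, 8);
const controls = new OrbitControls(camera, renderer.domElement);

// Register three detail variants of the same terrain tile as LOD levels;
// three.js picks a level by camera distance, and objects outside the view
// frustum are skipped automatically (mesh.frustumCulled defaults to true).
const lod = new THREE.LOD();
const loader = new GLTFLoader();
const variants: Array<[string, number]> = [
  ['terrain_high.glb', 0],   // full detail up close (hypothetical asset names)
  ['terrain_mid.glb', 50],   // reduced detail at mid range
  ['terrain_low.glb', 150],  // coarse mesh far away
];
for (const [url, distance] of variants) {
  loader.load(url, (gltf) => lod.addLevel(gltf.scene, distance));
}
scene.add(lod);

renderer.setAnimationLoop(() => {
  controls.update();
  renderer.render(scene, camera); // LOD selection + frustum culling happen here
});
```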

Multimodal Interaction on Automultiscopic Content with Mobile Surface Haptics

  • Kim, Jin Ryong; Shin, Seunghyup; Choi, Seungho; Yoo, Yeonwoo
    • ETRI Journal, v.38 no.6, pp.1085-1094, 2016
  • In this work, we present interactive automultiscopic content with mobile surface haptics for multimodal interaction. Our system consists of a 40-view automultiscopic display and a tablet supporting surface haptics in an immersive room. Animated graphics are projected onto the walls of the room, and the 40-view automultiscopic display is placed at the center of the front wall. The haptic tablet is installed at a mobile station so that the user can interact with it. Forty-view real-time rendering and multiplexing are achieved by establishing virtual cameras in a convergence layout. Surface-haptics rendering is synchronized with the three-dimensional (3D) objects on the display for real-time haptic interaction. We conducted an experiment to evaluate user experience with the proposed system. The results demonstrate that the system's multimodal interaction provides positive user experiences of immersion, control, user-interface intuitiveness, and 3D effects.
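
To illustrate the "virtual cameras in a convergence layout" step, here is a hedged three.js (TypeScript) sketch that places N cameras along a horizontal baseline, all aimed at a shared convergence point. The baseline width, convergence distance, and field of view are illustrative values, not figures from the paper, and the display-specific view multiplexing is omitted.

```ts
import * as THREE from 'three';

// N cameras spaced along a horizontal baseline, each toed in toward a shared
// convergence point on the display plane (illustrative parameter values).
function makeConvergedCameras(n = 40, baseline = 0.6, convergence = 2.0): THREE.PerspectiveCamera[] {
  const cams: THREE.PerspectiveCamera[] = [];
  const target = new THREE.Vector3(0, 0, -convergence); // shared convergence point
  for (let i = 0; i < n; i++) {
    const t = n === 1 ? 0 : i / (n - 1) - 0.5;          // offset in [-0.5, 0.5]
    const cam = new THREE.PerspectiveCamera(40, 16 / 9, 0.1, 100);
    cam.position.set(t * baseline, 0, 0);
    cam.lookAt(target);                                  // toe-in orientation
    cams.push(cam);
  }
  return cams;
}

// Each frame, render all 40 views and multiplex them into the display's
// interleaving pattern (display-specific, not shown here).
const views = makeConvergedCameras();
console.log(`created ${views.length} converged virtual cameras`);
```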

Spectrum-Based Color Reproduction Algorithm for Makeup Simulation of 3D Facial Avatar

  • Jang, In-Su; Kim, Jae Woo; You, Ju-Yeon; Kim, Jin Seo
    • ETRI Journal, v.35 no.6, pp.969-979, 2013
  • Various simulation applications for the hair, clothing, and makeup of a 3D avatar can give users useful information before they select a hairstyle, clothes, or cosmetics. To enhance realism, the shapes, textures, and colors of the avatars should match those found in the real world. For more realistic color reproduction of a 3D avatar, this paper proposes a spectrum-based color reproduction algorithm and a color management process for implementing it. First, a makeup color reproduction model is estimated by analyzing the measured spectral reflectance of skin samples before and after applying makeup. To implement the model in a makeup simulation system, the color management process controls all color information of the 3D facial avatar during the 3D scanning, modeling, and rendering stages. During 3D scanning with a multi-camera system, spectrum-based camera calibration and characterization are performed to estimate the spectrum data. During the virtual makeup process, the spectrum data of the 3D facial avatar is modified based on the makeup color reproduction model. Finally, during 3D rendering, the estimated spectrum is converted into RGB data through gamut mapping and display characterization.
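
The final spectrum-to-RGB stage has a standard colorimetric core, sketched below in TypeScript under stated assumptions: reflectance, illuminant, and CIE color-matching-function tables sampled on a shared wavelength grid are supplied by the caller, and simple clamping stands in for the paper's gamut mapping and display characterization.

```ts
type Spectrum = number[]; // samples on a shared wavelength grid

// Integrate reflectance x illuminant against the CIE color-matching
// functions to get XYZ, then convert to gamma-encoded sRGB.
function spectrumToSRGB(refl: Spectrum, illum: Spectrum,
                        xBar: Spectrum, yBar: Spectrum, zBar: Spectrum): [number, number, number] {
  let X = 0, Y = 0, Z = 0, norm = 0;
  for (let i = 0; i < refl.length; i++) {
    const s = refl[i] * illum[i];
    X += s * xBar[i]; Y += s * yBar[i]; Z += s * zBar[i];
    norm += illum[i] * yBar[i]; // normalize so a perfect white has Y = 1
  }
  X /= norm; Y /= norm; Z /= norm;

  // XYZ -> linear sRGB (D65 primaries), then gamma-encode; clamping is a
  // crude stand-in for the paper's gamut mapping.
  const lin = [
     3.2406 * X - 1.5372 * Y - 0.4986 * Z,
    -0.9689 * X + 1.8758 * Y + 0.0415 * Z,
     0.0557 * X - 0.2040 * Y + 1.0570 * Z,
  ];
  const enc = lin.map((c) => {
    c = Math.min(1, Math.max(0, c));
    return c <= 0.0031308 ? 12.92 * c : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
  });
  return [enc[0], enc[1], enc[2]];
}
```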

Research on the Expression of Ink-and-Wash Painting by using 3D Animation (3D애니메이션을 활용한 수묵화기법 표현연구)

  • Han, Myung-Hee
    • Journal of Korea Multimedia Society, v.13 no.7, pp.1105-1114, 2010
  • This thesis summarizes the research behind the 20-second official trailer commissioned by the Pusan International Film Festival organizing committee. Because producing digital content is highly significant at this moment, I tried to make the official trailer for the 13th Pusan International Film Festival by integrating 3D animation with ink-and-wash painting. Motivated by the western-style painter Shin Chang Sic's painting 'Arirang_HopeⅠ' (the official poster), I learned how to express ink-and-wash painting using digital techniques, considering ink-stick depth, line control, and color elements in the modeling, shading, and rendering stages.

Real-Time 2D-to-3D Conversion for 3DTV using Time-Coherent Depth-Map Generation Method

  • Nam, Seung-Woo; Kim, Hye-Sun; Ban, Yun-Ji; Chien, Sung-Il
    • International Journal of Contents, v.10 no.3, pp.9-16, 2014
  • Depth-image-based rendering is generally used in real-time 2D-to-3D conversion for 3DTV. However, inaccurate depth maps cause flickering issues between image frames in a video sequence, resulting in eye fatigue while viewing 3DTV. To resolve this flickering issue, we propose a new 2D-to-3D conversion scheme based on fast and robust depth-map generation from a 2D video sequence. The proposed depth-map generation algorithm divides an input video sequence into several cuts using a color histogram. The initial depth of each cut is assigned based on a hypothesized depth-gradient model. The initial depth map of the current frame is refined using color and motion information. Thereafter, the depth map of the next frame is updated using the difference image to reduce depth flickering. The experimental results confirm that the proposed scheme performs real-time 2D-to-3D conversions effectively and reduces human eye fatigue.
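
The difference-image depth update that suppresses flicker can be sketched as follows. This is an interpretation of the abstract, not the authors' code: pixels whose color barely changed between frames keep the previous frame's depth, while moving pixels take the freshly estimated depth. The RGBA frame layout and threshold value are assumptions.

```ts
// Temporal depth update: reuse previous depth for static pixels to reduce
// frame-to-frame depth flicker (threshold is illustrative).
function updateDepth(prevDepth: Float32Array,
                     prevFrame: Uint8ClampedArray, currFrame: Uint8ClampedArray,
                     newDepth: Float32Array, threshold = 12): Float32Array {
  const out = new Float32Array(prevDepth.length);
  for (let p = 0; p < prevDepth.length; p++) {
    const i = p * 4; // assumed RGBA pixel layout
    const diff = Math.abs(currFrame[i] - prevFrame[i])
               + Math.abs(currFrame[i + 1] - prevFrame[i + 1])
               + Math.abs(currFrame[i + 2] - prevFrame[i + 2]);
    // Static pixel: keep previous depth; moving pixel: take new estimate.
    out[p] = diff < threshold ? prevDepth[p] : newDepth[p];
  }
  return out;
}
```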

A Study of Artificial Intelligence Generated 3D Engine Animation Workflow

  • Chenghao Wang; Jeanhun Chung
    • International Journal of Advanced Smart Convergence, v.12 no.4, pp.286-292, 2023
  • Against the backdrop of the rapid development of the metaverse and artificial intelligence technologies, this article explores the possibility and potential impact of integrating AI into the traditional 3D animation production process. Through an in-depth analysis of how traditional production processes change when merged with AI technology, it distills a new, innovative workflow for 3D animation production. This new process takes full advantage of the efficiency and intelligence of AI, significantly improving the efficiency of animation production and enhancing overall animation quality. The paper further examines the creative methods and developmental implications of AI in real-time rendering engines for 3D animation, highlighting how these technologies drive innovation and optimize workflows in animation production and how they open new perspectives and possibilities for the future development of the animation industry.

Current State of Animation Industry and Technology Trends - Focusing on Artificial Intelligence and Real-Time Rendering (애니메이션 산업 현황과 기술 동향 - 인공지능과 실시간 렌더링 중심으로)

  • Jibong Jeon
    • The Journal of the Convergence on Culture Technology, v.9 no.5, pp.821-830, 2023
  • The advancement of Internet network technology has triggered the emergence of new OTT video content platforms, increasing demand for content and altering consumption patterns. This trend is bringing positive changes to the South Korean animation industry, where diverse and high-quality animation content is becoming increasingly important. As investment in technology grows, video production technology continues to advance. Specifically, 3D animation and VFX production technologies are enabling effects that were previously unthinkable, offering detailed and realistic graphics. The Fourth Industrial Revolution is providing new opportunities for this technological growth. The rise of Artificial Intelligence (AI) is automating repetitive tasks, thereby enhancing production efficiency and enabling innovations that go beyond traditional production methods. Cutting-edge technologies like 3D animation and VFX are being continually researched and are expected to be more actively integrated into the production process. Digital technology is also expanding the creative horizons for artists. The future of AI and advanced technologies holds boundless potential, and there is growing anticipation for how these will elevate the video content industry to new heights.

Feature Detection and Simplification of 3D Face Data with Facial Expressions

  • Kim, Yong-Guk; Kim, Hyeon-Joong; Choi, In-Ho; Kim, Jin-Seo; Choi, Soo-Mi
    • ETRI Journal, v.34 no.5, pp.791-794, 2012
  • We propose an efficient framework to realistically render 3D faces with a reduced set of points. First, a robust active appearance model is presented to detect facial features in the projected faces under different illumination conditions. Then, an adaptive simplification of 3D faces is proposed to reduce the number of points, yet preserve the detected facial features. Finally, the point model is rendered directly, without such additional processing as parameterization of skin texture. This fully automatic framework is very effective in rendering massive facial data on mobile devices.
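
A minimal sketch of feature-preserving simplification in the spirit of this abstract: points are decimated on a uniform grid, but points flagged as detected facial features are always kept. The grid cell size and the point structure are hypothetical, and the paper's adaptive scheme is simplified to one fixed resolution here.

```ts
type FacePoint = { x: number; y: number; z: number; isFeature: boolean };

// Keep at most one non-feature point per occupied grid cell, but never drop
// a detected facial feature (cell size is an illustrative value in meters).
function simplify(points: FacePoint[], cell = 0.01): FacePoint[] {
  const kept: FacePoint[] = [];
  const seen = new Set<string>();
  for (const p of points) {
    if (p.isFeature) { kept.push(p); continue; } // features are always preserved
    const key = `${Math.floor(p.x / cell)},${Math.floor(p.y / cell)},${Math.floor(p.z / cell)}`;
    if (!seen.has(key)) { seen.add(key); kept.push(p); }
  }
  return kept;
}
```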