• Title/Summary/Keyword: Animation workflow

Search results: 12

Comparative Analysis of 3D Tools Suitable for the Rotoscoping Cell Animation Production Process

  • Choi, Chul Young
    • International Journal of Internet, Broadcasting and Communication, v.16 no.3, pp.113-120, 2024
  • Recently, cases of applying AI tools such as ChatGPT have been increasing across many industries. As AI-generated results appear even in the areas of images and video, traditional animation production tools face the need for significant change. Unreal Engine is the tool adapting most quickly to these changes, proposing a new animation production workflow by integrating tools such as MetaHuman and Marvelous Designer. Working with realistic MetaHumans allows the production of realistic, natural movements of the kind obtained from motion capture data, but reproducing this approach is challenging for production tools that adhere to traditional methods. In this study, we investigated the differences between the cell animation workflow and the computer graphics animation production workflow, and compared and analyzed whether these differences could be reduced by creating sample movements with character rigs in Maya and Cascadeur. The results showed that a workflow similar to cell animation could be constructed with the Cascadeur tool. To strengthen this conclusion, we produced short animations with large, action-heavy movements to demonstrate and validate the findings. (A minimal keyframing sketch follows below.)
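As referenced above, a minimal sketch of the keyframing step being compared: the snippet below uses Maya's Python module (maya.cmds) to key a hypothetical rig control `arm_ctrl` pose-to-pose with stepped tangents, approximating a cel-style timing chart. The control name, attribute, and pose values are placeholders, not the paper's actual rig or data.

```python
# Minimal sketch (Maya Python): key a rig control pose-to-pose with
# stepped tangents to mimic a cel-animation timing chart.
# The control name, attribute, and pose values are hypothetical.
import maya.cmds as cmds

poses = [(1, 0.0), (3, 35.0), (5, 60.0), (9, 80.0)]  # (frame, rotateZ) key poses

for frame, value in poses:
    cmds.setKeyframe(
        "arm_ctrl",              # hypothetical rig control
        attribute="rotateZ",
        time=frame,
        value=value,
        inTangentType="linear",
        outTangentType="step",   # hold each pose until the next key
    )

# Review the stepped result before smoothing selected breakdowns.
cmds.playbackOptions(minTime=1, maxTime=12)
```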

A Study of Artificial Intelligence Generated 3D Engine Animation Workflow

  • Chenghao Wang;Jeanhun Chung
    • International Journal of Advanced Smart Convergence, v.12 no.4, pp.286-292, 2023
  • This article is set against the backdrop of the rapid development of the metaverse and artificial intelligence technologies, and explores the possibility and potential impact of integrating AI technology into the traditional 3D animation production process. Through an in-depth analysis of the differences that arise when traditional production processes are merged with AI technology, it summarizes a new, innovative workflow for 3D animation production. This new process takes full advantage of the efficiency and intelligence of AI technology, significantly improving the efficiency of animation production and enhancing the overall quality of the animations. Furthermore, the paper examines the creative methods and developmental implications of artificial intelligence technology in real-time rendering engines for 3D animation, highlighting how these technologies drive innovation, optimize workflows in animation production, and open new perspectives and possibilities for the future development of the animation industry.

A Study on Real-time Graphic Workflow For Achieving The Photorealistic Virtual Influencer

  • Haitao Jiang
    • International Journal of Advanced Smart Convergence, v.12 no.1, pp.130-139, 2023
  • Computer-generated virtual influencers are increasingly popular, especially on social media. Well-known virtual influencer characters such as Lil Miquela and Imma were created with CGI graphics workflows. That process is typically linear: iteration is challenging and costly, development efforts are frequently siloed from one another, and it provides no real-time interactive experience. In a previous study, a real-time graphic workflow was proposed for the Digital Actor Hologram project, but its output quality fell short of the results obtained from the CGI workflow. Therefore, this paper proposes a real-time engine graphic workflow for virtual influencers that supports both real-time interactive functions and realistic graphic quality. The real-time graphic workflow consists of four processes: Facial Modeling, Facial Texture, Material Shader, and Look-Development. A performance analysis against the real-time graphical workflow for Digital Actor Hologram demonstrates the usefulness of this result, and the proposed workflow should make the production of virtual influencers more efficient.

A Study on the Realization of Virtual Simulation Face Based on Artificial Intelligence

  • Zheng-Dong Hou;Ki-Hong Kim;Gao-He Zhang;Peng-Hui Li
    • Journal of Information and Communication Convergence Engineering, v.21 no.2, pp.152-158, 2023
  • In recent years, as computer-generated imagery has been applied to more industries, realistic facial animation has become an important research topic. The usual solution is to create realistically rendered 3D characters, but characters created with traditional methods always differ from the actual person and require a high cost in staff and time. Deepfake technology can achieve realistic faces and replicate facial animation: once the AI model is trained, the facial details and animation are produced automatically by the computer, and the model can be reused, reducing the human and time costs of realistic facial animation. In addition, this study summarizes how human face information is captured, proposes a new workflow for video-to-image conversion, and demonstrates that the new scheme yields higher-quality images and face-exchange results, as evaluated with no-reference image quality assessment (NR-IQA) metrics. (A rough frame-extraction sketch follows below.)
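As a loose sketch of the video-to-image conversion and quality-screening idea mentioned above (not the authors' actual pipeline), the snippet below extracts frames from a clip with OpenCV and keeps only reasonably sharp ones; the variance-of-Laplacian score is a crude stand-in for a proper no-reference IQA metric such as BRISQUE, and the file names and thresholds are illustrative.

```python
# Hedged sketch: extract training frames from a video and keep only the
# sharpest ones, as a stand-in for a no-reference IQA screening step.
# File names, stride, and threshold are illustrative, not from the paper.
import cv2

def extract_and_screen(video_path, out_dir, stride=5, min_score=100.0):
    cap = cv2.VideoCapture(video_path)
    kept, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Variance of the Laplacian as a crude sharpness/quality proxy;
            # a real NR-IQA metric (e.g. BRISQUE) would replace this score.
            score = cv2.Laplacian(gray, cv2.CV_64F).var()
            if score >= min_score:
                cv2.imwrite(f"{out_dir}/frame_{index:06d}.png", frame)
                kept += 1
        index += 1
    cap.release()
    return kept

# Example: kept = extract_and_screen("actor_capture.mp4", "frames")
```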

Real-time Markerless Facial Motion Capture of Personalized 3D Real Human Research

  • Hou, Zheng-Dong;Kim, Ki-Hong;Lee, David-Junesok;Zhang, Gao-He
    • International Journal of Internet, Broadcasting and Communication, v.14 no.1, pp.129-135, 2022
  • Digital models of real humans appear more and more frequently in VR/AR applications, and real-time markerless facial capture animation for personalized virtual human faces is an important research topic. The traditional way to achieve personalized facial animation for a real person requires several experienced animation staff, and in practice the complex process and difficult techniques can be obstacles for inexperienced users. This paper proposes a new process for this kind of work that costs less and takes less time than traditional production methods. Starting from a personalized face model of a real person obtained through 3D reconstruction, the model is first retopologized in R3ds Wrap, Avatary is then used to generate the 52 blendshape model files required by ARKit, and real-time markerless facial motion capture of the 3D real human is finally realized on the UE4 platform. The study makes rational use of the strengths of each piece of software and proposes a more efficient workflow for real-time markerless facial motion capture of personalized 3D real-human models; the process ideas presented here can be helpful to other researchers working on this kind of problem. (A small blendshape-remapping sketch follows below.)
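To make the blendshape step concrete, the sketch below remaps incoming ARKit facial coefficients (the 52 values in the 0-1 range, e.g. jawOpen, eyeBlinkLeft) onto a character's morph-target names. The target names and gains are hypothetical, and in the paper's workflow this remapping is handled inside UE4 via Live Link rather than in standalone code.

```python
# Conceptual sketch: remap ARKit blendshape coefficients (0..1) to a
# character's morph targets. Target names and gains are hypothetical; in
# the paper's workflow this happens inside UE4 via Live Link.
ARKIT_TO_MORPH = {
    "jawOpen":        ("Mouth_Open", 1.0),
    "eyeBlinkLeft":   ("Blink_L",    1.0),
    "eyeBlinkRight":  ("Blink_R",    1.0),
    "mouthSmileLeft": ("Smile_L",    0.8),  # slightly damped for this rig
}

def remap_frame(arkit_weights):
    """arkit_weights: dict of ARKit coefficient name -> float in [0, 1]."""
    morph_weights = {}
    for arkit_name, (morph_name, gain) in ARKIT_TO_MORPH.items():
        value = arkit_weights.get(arkit_name, 0.0)
        morph_weights[morph_name] = max(0.0, min(1.0, value * gain))
    return morph_weights

# Example frame from a face-capture stream:
print(remap_frame({"jawOpen": 0.42, "eyeBlinkLeft": 0.9}))
```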

Stereoscopic Contents Production Workflow Based on Nonlinear Editing (비선형 편집기반의 입체영상 제작 흐름에 관한 연구)

  • Kim, Chul-Hyun;Paik, Joon-Ki
    • Journal of Broadcast Engineering, v.15 no.3, pp.391-406, 2010
  • Digital cinema based on digital master distribution is growing, with stereoscopic film at its center. The DCI specification V1.0, announced in 2004, already takes stereoscopic screening into account, and the Society of Motion Picture and Television Engineers has established a task force to define standards for stereoscopic content viewed in the home. Today, most commercial Hollywood stereoscopic features are computer-graphics animation; however, given the nature of filmmaking, stereoscopic digital cinema also requires live-action shooting, editing, and screening. This paper examines the feasibility of reviewing stereoscopy within a nonlinear editing (NLE) system as part of the stereoscopic workflow, and proposes a new stereoscopic digital cinema workflow that incorporates this review step. Experimental results show that a 120 Hz 3D-ready television presents some obstacles for content editing, whereas most domestic stereoscopic monitors based on circular polarization allow editing to be carried out successfully. (A brief row-interleaving sketch follows below.)
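As a brief illustration of how left- and right-eye views are combined for review on a line-interleaved (circular-polarization) stereoscopic monitor, the sketch below row-interleaves two equal-size frames with NumPy/OpenCV. This is the generic display technique, not the specific NLE integration evaluated in the paper, and the file names are placeholders.

```python
# Hedged sketch: row-interleave left/right frames for preview on a
# line-interleaved (circularly polarized) stereoscopic monitor.
import numpy as np
import cv2

def interleave_rows(left, right):
    """left/right: HxWx3 uint8 frames of identical size."""
    assert left.shape == right.shape, "views must match in size"
    out = left.copy()
    out[1::2, :, :] = right[1::2, :, :]  # odd rows come from the right-eye view
    return out

left = cv2.imread("left_eye.png")    # illustrative file names
right = cv2.imread("right_eye.png")
if left is not None and right is not None:
    cv2.imwrite("interleaved_preview.png", interleave_rows(left, right))
```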

Body Motion Retargeting to Rig-space (리깅 공간으로의 몸체 동작 리타겟팅)

  • Song, Jaewon;Noh, Junyong
    • Journal of the Korea Computer Graphics Society, v.20 no.3, pp.9-17, 2014
  • This paper presents a method to retarget a source motion to the rig-space parameters of a target character that may carry a complex rig structure, as used in traditional animation pipelines. The solution allows animators to edit the retargeted motion easily and intuitively, since they can work with the same rig parameters they already use for keyframe animation. To achieve this, we analyze the correspondence between the source motion space and the target rig-space, and then perform non-linear optimization to retarget the motion into the target rig-space. We observed the general workflow practiced by animators and applied this process to the optimization step. (A toy optimization sketch follows below.)
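A toy sketch of the kind of per-frame non-linear optimization involved is given below: it assumes a hypothetical forward function rig_pose() that maps rig parameters to joint positions and uses scipy to minimize the squared distance to the source-motion joints. The two-joint planar rig, parameterization, and data are illustrative only, not the authors' rig or solver.

```python
# Toy sketch of rig-space retargeting as per-frame nonlinear optimization:
# find rig parameters whose forward-evaluated joint positions best match
# the source motion. The 2-joint planar "rig" below is purely illustrative.
import numpy as np
from scipy.optimize import minimize

def rig_pose(params, bone_lengths=(1.0, 1.0)):
    """Toy rig: two hinge angles -> 2D positions of elbow and wrist."""
    a, b = params
    l1, l2 = bone_lengths
    elbow = np.array([l1 * np.cos(a), l1 * np.sin(a)])
    wrist = elbow + np.array([l2 * np.cos(a + b), l2 * np.sin(a + b)])
    return np.concatenate([elbow, wrist])

def retarget_frame(source_joints, init_params):
    """source_joints: target joint positions (4,) for this frame."""
    objective = lambda p: np.sum((rig_pose(p) - source_joints) ** 2)
    result = minimize(objective, init_params, method="BFGS")
    return result.x

# Example frame: source elbow near (0.7, 0.7), wrist near (1.7, 0.7)
params = retarget_frame(np.array([0.7, 0.7, 1.7, 0.7]), np.zeros(2))
print(params)  # rig-space angles that best reproduce the source pose
```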

Scene Production using Unity Cinemachine (유니티 시네머신을 활용한 장면 연출 모색)

  • Park, Sung Suk
    • Journal of Information Technology Applications and Management, v.28 no.6, pp.133-143, 2021
  • Unity's Cinemachine, one of the engine's production technologies, can be used to produce 3D footage. Because Cinemachine is a game production tool, however, video creators may find its working process unfamiliar and complicated; even so, learning such new technologies is necessary for the future of increasingly advanced video content. To understand Cinemachine from a video producer's point of view, this study recreates film-style scenes, producing storyboard scenes for storytelling in Cinemachine. While directing these scenes from a written story, we defined a workflow for Unity Cinemachine and identified its advantages and points of caution. We hope this offers an accessible starting point for working with Unity Cinemachine, which continues to evolve. (A conceptual camera-blending sketch follows below.)
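As a conceptual sketch (plain Python rather than Unity's C# API) of the idea behind Cinemachine's virtual cameras: several "virtual cameras" are defined as data, and the active shot is produced by interpolating position and look-at target from the outgoing to the incoming camera over a blend time. Names and values are illustrative.

```python
# Conceptual sketch, not Unity's C# Cinemachine API: "virtual cameras" as
# data, with a timed blend between two shots by interpolating position and
# look-at target. Names and values are illustrative.
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    name: str
    position: tuple      # (x, y, z)
    look_at: tuple       # point the camera aims at

def lerp3(a, b, t):
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def blend(outgoing, incoming, elapsed, blend_time=2.0):
    """Return the blended shot at `elapsed` seconds into the cut."""
    t = min(max(elapsed / blend_time, 0.0), 1.0)  # an ease curve could shape t
    return (lerp3(outgoing.position, incoming.position, t),
            lerp3(outgoing.look_at, incoming.look_at, t))

wide  = VirtualCamera("wide",  (0.0, 5.0, -10.0), (0.0, 1.0, 0.0))
close = VirtualCamera("close", (1.0, 1.6,  -2.0), (0.0, 1.6, 0.0))
print(blend(wide, close, elapsed=1.0))  # halfway through a 2-second blend
```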

Creating and Utilization of Virtual Human via Facial Capturing based on Photogrammetry (포토그래메트리 기반 페이셜 캡처를 통한 버추얼 휴먼 제작 및 활용)

  • Ji Yun;Haitao Jiang;Zhou Jiani;Sunghoon Cho;Tae Soo Yun
    • Journal of the Institute of Convergence Signal Processing, v.25 no.2, pp.113-118, 2024
  • Recently, advances in artificial intelligence and computer graphics technology have led to the emergence of various virtual humans across media such as movies, advertisements, broadcasts, games, and social networking services (SNS). In advertising and marketing centered on virtual influencers in particular, virtual humans have already proven to be an important promotional tool for businesses in terms of time and cost efficiency. In Korea, the virtual influencer market is in its nascent stage, and both large corporations and startups are preparing to launch new virtual-influencer services without clear boundaries between them. However, because development processes are rarely disclosed publicly, these companies often face significant expenses. To address these requirements and challenges, this paper implements a photogrammetry-based facial capture system for creating realistic virtual humans and explores how these models can be used, along with application cases. The paper also examines a workflow that is optimal in terms of cost and quality, using Unreal Engine-based MetaHuman modeling to simplify the complex CG steps from facial capture to the actual animation process. Additionally, it introduces cases where virtual humans have been utilized in SNS marketing, such as on Instagram, and demonstrates the performance of the proposed Unreal Engine-based workflow by comparing it with traditional CG work.