References
- J. Svensson, "Watch out, wedding videographers, AI is coming for you," IEEE Spectrum, Nov. 2021.
- Ofcom, "Object-based media report," Sept. 2021.
- 유건식, "Changes in broadcast production in the era of OTT and living with COVID-19," 방송트렌드&인사이트, vol. 28, no. 3, 2021.
- Nevion, "5G VIRTUOSA project introduction," IRT, 2021.
- A. Pennington, "Virtual production can be real for everybody: Here's how," TV Tech, June 2020.
- R. Krishna et al., "Visual genome: Connecting language and vision using crowdsourced dense image annotations," arXiv preprint, CoRR, 2016, arXiv: 1602.07332v1.
- 유홍준, "Pilgrimage to masterpieces: Yu Hong-jun's eye for beauty 2," 눌와, 2013.
- 유미, "The concept of virtual production and an analysis of overseas production cases," 애니메이션 연구, vol. 17, no. 1, 2020, pp. 98-113.
- C.K. Ellie, "Graphics masters: Creating real-time VFX with virtual production," Genero, https://genero.com/insights/graphics-masters-creating-real-time-vfx-with-virtual-production
- 김미라, "Post-COVID video content production technology," 영상기술연구, vol. 1, no. 35, 2021, pp. 27-44. https://doi.org/10.34269/MITAK.2021.1.35.002
- 이남수, 이한결, "VFX is a galloping horse," 키움증권 리서치센터, Apr. 6, 2021.
- 삼성전자 뉴스룸, "Samsung Electronics signs a virtual studio partnership with CJ ENM," July 26, 2021.
- 이동호, "A study on production technology trends using virtual production technology," 영상기술연구, vol. 1, no. 31, 2019, pp. 61-78.
- P. Debevec, Y. Yu, and G. Borshukov, "Efficient view-dependent image-based rendering with projective texture-mapping," in Eurographics Workshop on Rendering Techniques, Springer, Vienna, Austria, 1998, pp. 105-116.
- M. Dou et al., "Fusion4D: Real-time performance capture of challenging scenes," ACM Trans. Graph., vol. 35, no. 4, 2016, pp. 1-13.
- K. Guo et al., "The relightables: Volumetric performance capture of humans with realistic relighting," ACM Trans. Graph., vol. 38, no. 6, 2019, pp. 1-19.
- M. Levoy and P. Hanrahan, "Light field rendering," in Proc. 23rd Annu. Conf. Comput. Graph. Interact. Techniques (SIGGRAPH), (New Orleans, LA, USA), Aug. 1996.
- J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), (Boston, MA, USA), 2015, pp. 3431-3440.
- L. Karacan et al., "Learning to generate images of outdoor scenes from attributes and semantic layouts," arXiv preprint, CoRR, 2016, arXiv: 1612.00215.
- Q. Chen and V. Koltun, "Photographic image synthesis with cascaded refinement networks," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), (Venice, Italy), Oct. 2017, pp. 1511-1520.
- T.C. Wang et al., "Video-to-video synthesis," arXiv preprint, CoRR, 2018, arXiv: 1808.06601.
- S.M.A. Eslami et al., "Neural scene representation and rendering," Science, vol. 360, no. 6394, 2018, pp. 1204-1210. https://doi.org/10.1126/science.aar6170
- J.Y. Zhu et al., "Visual object networks: Image generation with disentangled 3D representation," arXiv preprint, CoRR, 2018, arXiv: 1812.02725.
- M. Meshry et al., "Neural rerendering in the wild," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), (Long Beach, CA, USA), June 2019, pp. 6878-6887.
- Z. Xu et al., "Deep view synthesis from sparse photometric images," ACM Trans. Graph., vol. 38, no. 4, 2019, pp. 1-13.
- S. Saito et al., "PIFu: Pixel-aligned implicit function for high-resolution clothed human digitization," in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), (Seoul, Republic of Korea), Oct. 2019, pp. 2304-2314.
- S. Lombardi et al., "Neural volumes: Learning dynamic renderable volumes from images," arXiv preprint, CoRR, 2019, arXiv: 1906.07751.
- P. Debevec et al., "Acquiring the reflectance field of a human face," in Proc. 27th Annu. Conf. Comput. Graph. Interact. Techniques (SIGGRAPH), (New Orleans, LA, USA), July 2000, pp. 145-156.
- A. Meka et al., "Deep reflectance fields: High-quality facial reflectance field inference from color gradient illumination," ACM Trans. Graph., vol. 38, no. 4, 2019, pp. 1-12. https://doi.org/10.1145/3306346.3323027
- J. Thies et al., "Face2Face: Real-time face capture and reenactment of RGB videos," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), (Las Vegas, NV, USA), June 2016, pp. 2387-2395.
- H. Kim et al., "Deep video portraits," ACM Trans. Graph., vol. 37, no. 4, 2018, pp. 1-14.
- J. Quiroga et al., "As seen on TV: Automatic basketball video production using Gaussian-based actionness and game states recognition," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW), (Seattle, WA, USA), June 2020, pp. 3911-3920.
- H. Zhang et al., "Vid2player: Controllable video sprites that behave and appear like professional tennis players," ACM Trans. Graph., vol. 40, no. 3, 2021, pp. 1-16.
- 손정우, 한민호, 김선중, "Trends in artificial intelligence-based video content generation technology," 전자통신동향분석, vol. 34, no. 3, 2019, pp. 34-42. https://doi.org/10.22648/ETRI.2019.J.340304
- O. Fried et al., "Text-based editing of talking-head video," ACM Trans. Graph., vol. 38, no. 4, 2019.
- D. Kim, D. Joo, and J. Kim, "TiVGAN: Text to image to video generation with step-by-step evolutionary generator," IEEE Access, vol. 8, 2020, pp. 153113-153122. https://doi.org/10.1109/access.2020.3017881
- S. Tulyakov et al., "MoCoGAN: Decomposing motion and content for video generation," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), (Salt Lake City, UT, USA), June 2018, pp. 1526-1535.
- ISO/IEC 14496-11, Coding of audio-visual objects, Part 11: Scene description and application engine (BIFS, XMT, MPEG-J).
- GL Transmission Format (glTF) version 2.0, Khronos Group, 2017.
- I. Bouazizi, "MPEG-I scene description overview," mpeg-sg.org, 2021.
- 조용성 et al., "Media and AI technology: Media intelligence," 전자통신동향분석, vol. 35, no. 5, 2020, pp. 92-101. https://doi.org/10.22648/ETRI.2020.J.350508