References
- L. Dzelzkaleja, J. K. Knets, N. Rozenovskis, and A. Silitis, "Mobile apps for 3D face scanning," in Proceedings of SAI Intelligent Systems Conference, pp. 34-50, 2022. DOI: 10.1007/978-3-030-82196-8_4.
- P. Amornvit and S. Sanohkan, "The accuracy of digital face scans obtained from 3D scanners: an in vitro study," International Journal of Environmental Research and Public Health, vol. 16, no. 24, p. 5061, Dec. 2019. DOI: 10.3390/ijerph16245061.
- Z. Wang, "Robust three-dimensional face reconstruction by one-shot structured light line pattern," Optics and Lasers in Engineering, vol. 124, p. 105798, Jan. 2020. DOI: 10.1016/j.optlaseng.2019.105798.
- A. Richard, C. Lea, S. Ma, J. Gall, F. de la Torre, and Y. Sheikh, "Audio- and gaze-driven facial animation of codec avatars," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, Jan. 2021. DOI: 10.1109/wacv48630.2021.00009.
- V. Barrielle and N. Stoiber, "Realtime performance-driven physical simulation for facial animation," Computer Graphics Forum, vol. 38, no. 1, pp. 151-166, Feb. 2019. DOI: 10.1111/cgf.13450.
- T.-N. Nguyen, S. Dakpe, M.-C. Ho Ba Tho, and T.-T. Dao, "Real-time computer vision system for tracking simultaneously subject-specific rigid head and non-rigid facial mimic movements using a contactless sensor and system of systems approach," Computer Methods and Programs in Biomedicine, vol. 191, p. 105410, Jul. 2020. DOI: 10.1016/j.cmpb.2020.105410.
- Y. Pan, R. Zhang, J. Wang, N. Chen, Y. Qiu, Y. Ding, and K. Mitchell, "MienCap: Performance-based facial animation with live mood dynamics," in 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Christchurch, New Zealand, pp. 654-655, 2022. DOI: 10.1109/VRW55335.2022.00178.
- Y. Ye, S. Zhan, and Z. Juan, "High-fidelity 3D real-time facial animation using infrared structured light sensing system," Computers & Graphics, vol. 104, pp. 46-58, May 2022. DOI: 10.1016/j.cag.2022.03.007.
- K. Gu, Y. Zhou, and T. Huang, "FLNet: Landmark driven fetching and learning network for faithful talking facial animation synthesis," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 7, pp. 10861-10868, 2020. DOI: 10.1609/aaai.v34i07.6717.
- K. Vougioukas, S. Petridis, and M. Pantic, "End-to-end speech-driven realistic facial animation with temporal GANs," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 37-40, 2019.
- S. W. Bailey, D. Omens, P. Dilorenzo, and J. F. O'Brien, "Fast and deep facial deformations," ACM Transactions on Graphics, vol. 39, no. 4, Aug. 2020. DOI: 10.1145/3386569.3392397.
- P. Chandran, D. Bradley, M. Gross, and T. Beeler, "Semantic deep face models," in 2020 International Conference on 3D Vision (3DV), Fukuoka, Japan, pp. 345-354, 2020. DOI: 10.1109/3DV50981.2020.00044.
- T. Karras, T. Aila, S. Laine, A. Herva, and J. Lehtinen, "Audio-driven facial animation by joint end-to-end learning of pose and emotion," ACM Transactions on Graphics, vol. 36, no. 4, pp. 1-12, Jul. 2017. DOI: 10.1145/3072959.3073658.
- T. T. Nguyen, C. M. Nguyen, D. T. Nguyen, D. T. Nguyen, and S. Nahavandi, "Deep learning for deepfakes creation and detection," arXiv preprint arXiv:1909.11573, Sep. 2019.
- A. Tewari, M. Zollhoefer, F. Bernard, P. Garrido, H. Kim, P. Perez, and C. Theobalt, "High-fidelity monocular face reconstruction based on an unsupervised model-based face autoencoder," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 2, pp. 357-370, Feb. 2020. DOI: 10.1109/TPAMI.2018.2876842.
- J. Lin, Y. Li, and G. Yang, "FPGAN: Face de-identification method with generative adversarial networks for social robots," Neural Networks, vol. 133, pp. 132-147, Jan. 2021. DOI: 10.1016/j.neunet.2020.09.001.
- M.-Y. Liu, X. Huang, J. Yu, T.-C. Wang, and A. Mallya, "Generative adversarial networks for image and video synthesis: Algorithms and applications," Proceedings of the IEEE, vol. 109, no. 5, pp. 839-862, May 2021. DOI: 10.1109/JPROC.2021.3049196.
- T. T. Nguyen, Q. V. H. Nguyen, D. T. Nguyen, D. T. Nguyen, T. Huynh-The, S. Nahavandi, T. T. Nguyen, Q.-V. Pham, and C. M. Nguyen, "Deep learning for deepfakes creation and detection: A survey," Computer Vision and Image Understanding, vol. 223, p. 103525, Oct. 2022. DOI: 10.1016/j.cviu.2022.103525.
- I. Perov, D. Gao, N. Chervoniy, K. Liu, S. Marangonda, C. Ume, Mr. Dpfks, C. S. Facenheim, L. RP, J. Jiang, S. Zhang, P. Wu, B. Zhou, and W. Zhang, "DeepFaceLab: Integrated, flexible and extensible face-swapping framework," arXiv preprint arXiv:2005.05535, May 2020. DOI: 10.48550/arXiv.2005.05535.
- F. Jia and S. Yang, "Video face swap with DeepFaceLab," in International Conference on Computer Graphics, Artificial Intelligence, and Data Processing (ICCAID 2021), Harbin, China, vol. 12168, pp. 326-332, Mar. 2022. DOI: 10.1117/12.2631297.
- L. Her and X. Yang, "Research of image sharpness assessment algorithm for autofocus," in 2019 IEEE 4th International Conference on Image, Vision and Computing (ICIVC), Xiamen, China, pp. 93-98, 2019. DOI: 10.1109/ICIVC47709.2019.8980980.
- M. K. Rohil, N. Gupta, and P. Yadav, "An improved model for no-reference image quality assessment and a no-reference video quality assessment model based on frame analysis," Signal, Image and Video Processing, vol. 14, no. 1, pp. 205-213, Feb. 2020. DOI: 10.1007/s11760-019-01543-z.
- X. Zhou, J. Zhang, M. Li, X. Su, and F. Chen, "Thermal infrared spectrometer on-orbit defocus assessment based on blind image blur kernel estimation," Infrared Physics & Technology, vol. 130, p. 104538, May 2022. DOI: 10.1016/j.infrared.2022.104538.
- J. Rajevenceltha and V. H. Gaidhane, "An efficient approach for no-reference image quality assessment based on statistical texture and structural features," Engineering Science and Technology, an International Journal, vol. 30, p. 101039, Jun. 2022. DOI: 10.1016/j.jestch.2021.07.002.
- J. Harder, "What other programs that are part of Adobe Creative Cloud can I use to display my graphics or multimedia online?," in Graphics and Multimedia for the Web with Adobe Creative Cloud, Apress, Berkeley, CA, pp. 993-1000, Nov. 2018. DOI: 10.1007/978-1-4842-3823-3_40.
- X. Wu, P. Qu, S. Wang, L. Xie, and J. Dong, "Extend the FFmpeg framework to analyze media content," arXiv preprint arXiv:2103.03539, Mar. 2021. DOI: 10.48550/arXiv.2103.03539.
- M. Gupta, S. Shah, and S. Salmani, "Improving WhatsApp video statuses using FFMPEG and software-based encoding," in 2021 International Conference on Communication Information and Computing Technology (ICCICT), Mumbai, India, pp. 1-6, 2021. DOI: 10.1109/ICCICT50803.2021.9510129.
- C. Bartneck, D. Kulić, E. Croft, and S. Zoghbi, "Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots," International Journal of Social Robotics, vol. 1, no. 1, pp. 71-81, Jan. 2009. DOI: 10.1007/s12369-008-0001-3.