1. Ginosar, S., Bar, A., Kohavi, G., Chan, C., Owens, A., & Malik, J. (2019). Learning individual styles of conversational gesture. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3497-3506.
2. Kizilcec, R., Bailenson, J., & Gomez, C. (2015). The instructor's face in video instruction: Evidence from two large-scale field studies. Journal of Educational Psychology, 107(3), 724-739. https://doi.org/10.1037/edu0000013
3. Mio, C., Ventura-Medina, E., & Joao, E. (2019). Scenario-based eLearning to promote active learning in large cohorts: Students' perspective. Computer Applications in Engineering Education, 27(4), 894-909. https://doi.org/10.1002/cae.22123
4. Suwajanakorn, S., Seitz, S. M., & Kemelmacher-Shlizerman, I. (2017). Synthesizing Obama: Learning lip sync from audio. ACM Transactions on Graphics (TOG), 36(4), 95:1-95:13.
5. Taylor, S., Kim, T., Yue, Y., Mahler, M., Krahe, J., Rodriguez, A. G., ... & Matthews, I. (2017). A deep learning approach for generalized speech animation. ACM Transactions on Graphics (TOG), 36(4), 93:1-93:11.
6. Thies, J., Elgharib, M., Tewari, A., Theobalt, C., & Niessner, M. (2020). Neural voice puppetry: Audio-driven facial reenactment. In European Conference on Computer Vision, 716-731. Springer, Cham.
7. Vougioukas, K., Petridis, S., & Pantic, M. (2020). Realistic speech-driven facial animation with GANs. International Journal of Computer Vision, 128, 1398-1413.
8. W, H., Lee, D. H., & Kim, Y. (2021). Implementation of an integrated online class model using open-source technology and SNS. International Journal on Informatics Visualization, 5(3), 218-223. https://doi.org/10.30630/joiv.5.3.668
9. Zhou, Y., Han, X., Shechtman, E., Echevarria, J., Kalogerakis, E., & Li, D. (2020). MakeItTalk: Speaker-aware talking-head animation. ACM Transactions on Graphics (TOG), 39(6), 1-15.
10. Zhou, Y., Xu, Z., Landreth, C., Kalogerakis, E., Maji, S., & Singh, K. (2018). VisemeNet: Audio-driven animator-centric speech animation. ACM Transactions on Graphics (TOG), 37(4), 1:1-1:10.
11. Shiratori, T., Nakazawa, A., & Ikeuchi, K. (2006). Dancing-to-music character animation. In Computer Graphics Forum, 25(3), 449-458. https://doi.org/10.1111/j.1467-8659.2006.00964.x
12. Brand, M. (1999). Voice puppetry. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, 21-28.
13. Chen, L., Maddox, R. K., Duan, Z., & Xu, C. (2019). Hierarchical cross-modal talking face generation with dynamic pixel-wise loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7832-7841.
14. Lee, D. H., Kim, Y., & You, Y. Y. (2018). Learning window design and implementation based on Moodle-based interactive learning activities. Indian Journal of Public Health Research and Development, 9(8), 626-632. https://doi.org/10.5958/0976-5506.2018.00803.3
15. Eri, W. I., & Susiana (2019). Using the ADDIE model to develop learning material for actuarial mathematics. Journal of Physics: Conference Series, 1188, 012052. https://doi.org/10.1088/1742-6596/1188/1/012052
16. Karras, T., Aila, T., Laine, S., Herva, A., & Lehtinen, J. (2017). Audio-driven facial animation by joint end-to-end learning of pose and emotion. ACM Transactions on Graphics (TOG), 36(4), 94:1-94:12.
17. Kim, Y. (2017). A study on e-learning contents opening information for distribution industry labor competence. Journal of Distribution Science, 15(8), 65-73. https://doi.org/10.15722/jds.15.8.201708.65
18. Kim, Y. (2018). A design of human cloud platform framework for human resources distribution of e-learning instructional designer. Journal of Distribution Science, 16(7), 67-75. https://doi.org/10.15722/jds.16.7.201807.67