[1] Snapchat. [Online]. Available: https://play.google.com/store/apps/details?id=com.snapchat.android&hl=ko
[2] Timestamp Camera. [Online]. Available: https://play.google.com/store/apps/details?id=com.artifyapp.timestamp&hl=en_US
[3] J. Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proc. of the IEEE International Conference on Computer Vision, pp. 2242-2251, 2017.
[4] A. Almahairi, S. Rajeswar, A. Sordoni, P. Bachman, and A. Courville, "Augmented CycleGAN: Learning many-to-many mappings from unpaired data," arXiv preprint arXiv:1802.10151, 2018.
[5] Y. Lu, Y. W. Tai, and C. K. Tang, "Conditional CycleGAN for attribute guided face image generation," arXiv preprint arXiv:1705.09966, 2018.
[6] S. Hong, S. Kim, and S. Kang, "Game sprite generator using a multi discriminator GAN," KSII Transactions on Internet & Information Systems, vol. 13, no. 8, 2019.
[7] H. Su, J. Niu, X. Liu, Q. Li, J. Cui, and J. Wan, "Unpaired photo-to-manga translation based on the methodology of manga drawing," arXiv preprint arXiv:2004.10634, 2020.
[8] C. Furusawa, K. Hiroshiba, K. Ogaki, and Y. Odagiri, "Comicolorization: semi-automatic manga colorization," SIGGRAPH Asia 2017 Technical Briefs, vol. 12, pp. 1-4, 2017.
[9] T. Shi, Y. Yuan, C. Fan, Z. Zou, Z. Shi, and Y. Liu, "Face-to-parameter translation for game character auto-creation," in Proc. of the IEEE International Conference on Computer Vision, pp. 161-170, 2019.
[10] J. Kim, M. Kim, H. Kang, and K. Lee, "U-GAT-IT: unsupervised generative attentional networks with adaptive layer-instance normalization for image-to-image translation," arXiv preprint arXiv:1907.10830, 2019.
[11] L. A. Gatys, A. S. Ecker, and M. Bethge, "Image style transfer using convolutional neural networks," in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2414-2423, 2016.
[12] V. Dumoulin, J. Shlens, and M. Kudlur, "A learned representation for artistic style," arXiv preprint arXiv:1610.07629, 2016.
[13] E. Risser, P. Wilmot, and C. Barnes, "Stable and controllable neural texture synthesis and style transfer using histogram losses," arXiv preprint arXiv:1701.08893, 2017.
[14] F. Luan, S. Paris, E. Shechtman, and K. Bala, "Deep photo style transfer," in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6997-7005, 2017.
[15] D. Chen, L. Yuan, J. Liao, N. Yu, and G. Hua, "StyleBank: An explicit representation for neural image style transfer," in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2770-2779, 2017.
[16] X. Huang and S. Belongie, "Arbitrary style transfer in real-time with adaptive instance normalization," in Proc. of the IEEE International Conference on Computer Vision, pp. 1510-1519, 2017.
[17] J. Johnson, A. Alahi, and L. Fei-Fei, "Perceptual losses for real-time style transfer and super-resolution," in Proc. of European Conference on Computer Vision, vol. 9906, pp. 694-711, 2016.
[18] Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M. H. Yang, "Universal style transfer via feature transforms," Advances in Neural Information Processing Systems, pp. 386-396, 2017.
[19] A. Selim, M. Elgharib, and L. Doyle, "Painting style transfer for head portraits using convolutional neural networks," ACM Transactions on Graphics (ToG), vol. 35, no. 4, pp. 1-18, 2016.
[20] M. Lu, H. Zhao, A. Yao, F. Xu, Y. Chen, and L. Zhang, "Decoder network over lightweight reconstructed feature for fast semantic style transfer," in Proc. of the IEEE International Conference on Computer Vision, pp. 2488-2496, 2017.
[21] O. Sendik and D. Cohen-Or, "Deep correlations for texture synthesis," ACM Transactions on Graphics (ToG), vol. 36, no. 5, pp. 1-15, 2017.
[22] T. Park, A. A. Efros, R. Zhang, and J. Y. Zhu, "Contrastive learning for unpaired image-to-image translation," arXiv preprint arXiv:2007.15651, 2020.
[23] K. Cao, J. Liao, and L. Yuan, "CariGANs: Unpaired photo-to-caricature translation," arXiv preprint arXiv:1811.00222, 2018.
[24] M. Y. Liu, X. Huang, A. Mallya, T. Karras, T. Aila, J. Lehtinen, and J. Kautz, "Few-shot unsupervised image-to-image translation," in Proc. of the IEEE International Conference on Computer Vision, pp. 10551-10560, 2019.
[25] B. AlBahar and J. B. Huang, "Guided image-to-image translation with bi-directional feature transformation," in Proc. of the IEEE International Conference on Computer Vision, pp. 9016-9025, 2019.
[26] A. Royer, K. Bousmalis, S. Gouws, F. Bertsch, I. Mosseri, F. Cole, and K. Murphy, "XGAN: Unsupervised image-to-image translation for many-to-many mappings," Domain Adaptation for Visual Understanding, pp. 33-49, 2020.
[27] C. H. Lee, Z. Liu, L. Wu, and P. Luo, "MaskGAN: Towards diverse and interactive facial image manipulation," arXiv preprint arXiv:1907.11922, 2019.
[28] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, "Grad-CAM: Visual explanations from deep networks via gradient-based localization," in Proc. of the IEEE International Conference on Computer Vision, pp. 618-626, 2017.
[29] L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. Torr, "Fully-convolutional Siamese networks for object tracking," in Proc. of European Conference on Computer Vision, pp. 850-865, 2016.
[30] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[31] X. Wu, R. He, Z. Sun, and T. Tan, "A light CNN for deep face representation with noisy labels," IEEE Transactions on Information Forensics and Security, vol. 13, no. 11, pp. 2884-2896, 2018.
[32] Anime-Face-Dataset. [Online]. Available: https://github.com/Mckinsey666/Anime-Face-Dataset
[33] Black Desert, Pearl Abyss. [Online]. Available: https://www.blackdesertonline.com/midseason
|
34 |
Dlib. [Online]. Available: http://dlib.net/
|
35 |
M. Y. Liu, T. Breuel, and J. Kautz, "Unsupervised image-to-image translation networks," Advances in Neural Information Processing Systems, pp. 700-708, 2017.
|
36 |
M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter, "GANs trained by a two time-scale update rule converge to a local Nash equilibrium," Advances in Neural Information Processing Systems, pp. 6626-6637, 2017.
|
37 |
B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, "Learning deep features for discriminative localization," in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921-2929, 2016.
|