http://dx.doi.org/10.7472/jksii.2022.23.4.35

Makeup transfer by applying a loss function based on facial segmentation combining edge with color information  

Lim, So-hyun (Department of Computer Science, Kyonggi University)
Chun, Jun-chul (Department of Computer Science, Kyonggi University)
Publication Information
Journal of Internet Computing and Services / v.23, no.4, 2022, pp. 35-43
Abstract
Makeup is the most common way to improve a person's appearance. However, because makeup styles are highly diverse, applying makeup to oneself directly costs considerable time and money, so the need for makeup automation is growing. Makeup transfer is studied for this purpose: it applies a reference makeup style to a face image without makeup. Makeup transfer methods can be divided into traditional image-processing-based methods and deep-learning-based methods; among the latter, many studies build on Generative Adversarial Networks (GANs). Both approaches, however, share drawbacks: the resulting image can be unnatural, the transferred makeup can be indistinct or smeared, and the output can be overly influenced by the makeup-style face image. To express clear makeup boundaries and to reduce the influence of the makeup-style face image, this study segments the makeup region and computes a loss function using the Histogram of Oriented Gradients (HOG). HOG extracts image features from the magnitude and orientation of the edges present in an image. On this basis, we propose a makeup transfer network that learns robustly on edges. Comparing images generated by the proposed model with images generated by BeautyGAN, the base model, confirmed that the proposed model performs better; a method exploiting additional facial information is suggested as future work.
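The HOG feature and the HOG-based region loss described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: the cell size, bin count, gradient filter, and the way the facial-segmentation mask is applied (simple multiplication before computing HOG) are all assumptions for illustration.

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Minimal HOG: per-cell histograms of unsigned gradient orientation
    (0-180 degrees), weighted by gradient magnitude."""
    img = img.astype(np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # central differences, x
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # central differences, y
    mag = np.hypot(gx, gy)                   # edge magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # edge orientation
    h, w = img.shape
    ch, cw = h // cell, w // cell
    hist = np.zeros((ch, cw, bins))
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    for i in range(ch):
        for j in range(cw):
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            b = bin_idx[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            for k in range(bins):
                hist[i, j, k] = m[b == k].sum()
    return hist

def hog_region_loss(generated, reference, mask):
    """L1 distance between HOG maps of a masked facial region
    (hypothetical form of an edge-aware makeup-transfer loss term)."""
    h_gen = hog_descriptor(generated * mask)
    h_ref = hog_descriptor(reference * mask)
    return np.abs(h_gen - h_ref).mean()
```

In a GAN training loop such a term would be added, with some weight, to the adversarial and perceptual losses, penalizing the generator when the edge structure inside a segmented makeup region (e.g., lips or eyes) diverges from the reference.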
Keywords
Makeup Transfer; Generative Adversarial Networks; Histogram of Oriented Gradients; BeautyGAN; Loss function; Facial segmentation;
Citations & Related Records
Times Cited By KSCI: 3
References
1 J. Wang, C. Ke, M. Wu, M. Liu, and C. Zeng, "Infrared and visible image fusion based on Laplacian pyramid and generative adversarial network," KSII Transactions on Internet and Information Systems, vol. 15, no. 5, pp. 1761-1777, 2021. https://doi.org/10.3837/tiis.2021.05.010
2 Q. Gu, G. Wang, M. T. Chiu, Y.-W. Tai, and C.-K. Tang, "LADN: Local Adversarial Disentangling Network for Facial Makeup and De-Makeup," 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019. https://doi.org/10.1109/iccv.2019.01058
3 N. Dalal and B. Triggs, "Histograms of Oriented Gradients for Human Detection," 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), 2005. https://doi.org/10.1109/cvpr.2005.177
4 A. Hertzmann, C. E. Jacobs, N. Oliver, B. Curless, and D. H. Salesin, "Image analogies," SIGGRAPH, pp. 327-340, 2001. https://doi.org/10.1145/383259.383295
5 S. Liu, X. Ou, R. Qian, W. Wang, and X. Cao, "Makeup like a superstar: Deep localized makeup transfer network," AAAI, pp. 2568-2575, 2016. https://arxiv.org/pdf/1604.07102.pdf
6 D. Guo and T. Sim, "Digital face makeup by example," 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009. https://doi.org/10.1109/cvpr.2009.5206833
7 L. Xu, Y. Du, and Y. Zhang, "An automatic framework for example-based virtual makeup," 2013 IEEE International Conference on Image Processing, 2013. https://doi.org/10.1109/icip.2013.6738660
8 W. Jiang et al., "PSGAN: Pose and Expression Robust Spatial-Aware GAN for Customizable Makeup Transfer," 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. https://doi.org/10.1109/cvpr42600.2020.00524
9 R. Kips, P. Gori, M. Perrot, and I. Bloch, "CA-GAN: Weakly Supervised Color Aware GAN for Controllable Makeup Transfer," Computer Vision - ECCV 2020 Workshops, Springer, pp. 280-296, 2020. https://doi.org/10.1007/978-3-030-67070-2_17
10 H.-J. Chen, K.-M. Hui, S.-Y. Wang, L.-W. Tsao, H.-H. Shuai, and W.-H. Cheng, "BeautyGlow: On-Demand Makeup Transfer Framework With Reversible Generative Network," 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019. https://doi.org/10.1109/cvpr.2019.01028
11 I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative Adversarial Networks," NIPS, 2014. https://arxiv.org/pdf/1406.2661.pdf
12 L. Chen, Y. Zhang, R. Zhang, C. Tao, Z. Gan, H. Zhang, B. Li, D. Shen, C. Chen, and L. Carin, "Improving Sequence-to-Sequence Learning via Optimal Transport," ICLR, 2019. https://arxiv.org/pdf/1901.06283.pdf
13 T. Li et al., "BeautyGAN: Instance-level Facial Makeup Transfer with Deep Generative Adversarial Network," Proceedings of the 26th ACM International Conference on Multimedia, 2018. https://doi.org/10.1145/3240508.3240618
14 A. Jabbar, X. Li, M. M. Iqbal, and A. J. Malik, "FD-StackGAN: Face De-occlusion Using Stacked Generative Adversarial Networks," KSII Transactions on Internet and Information Systems, vol. 15, no. 7, pp. 2547-2567, 2021. https://doi.org/10.3837/tiis.2021.07.014
15 E. Agustsson, M. Tschannen, F. Mentzer, R. Timofte, and L. Van Gool, "Generative Adversarial Networks for Extreme Learned Image Compression," IEEE International Conference on Computer Vision (ICCV), pp. 221-231, 2019. https://arxiv.org/pdf/1804.02958.pdf
16 H. Chang, J. Lu, F. Yu, and A. Finkelstein, "PairedCycleGAN: Asymmetric Style Transfer for Applying and Removing Makeup," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018. https://doi.org/10.1109/cvpr.2018.00012
17 Q.-L. Yuan and H.-L. Zhang, "RAMT-GAN: Realistic and accurate makeup transfer with generative adversarial network," Image and Vision Computing, vol. 120, p. 104400, 2022. https://doi.org/10.1016/j.imavis.2022.104400
18 C. Yu, J. Wang, C. Peng, C. Gao, G. Yu, and N. Sang, "BiSeNet: Bilateral Segmentation Network for Real-Time Semantic Segmentation," Computer Vision - ECCV 2018, Springer, pp. 334-349, 2018. https://doi.org/10.1007/978-3-030-01261-8_20
19 M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter, "GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium," arXiv preprint arXiv:1706.08500, 2017. https://doi.org/10.48550/arXiv.1706.08500