[1] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Proc. of Advances in Neural Information Processing Systems (NIPS), pp. 2672-2680, 2014.
[2] M. Arjovsky and L. Bottou, "Towards principled methods for training generative adversarial networks," in Proc. of International Conference on Learning Representations (ICLR), 2017.
[3] X. Liu, G. Meng, S. Xiang, and C. Pan, "FontGAN: A unified generative framework for Chinese character stylization and de-stylization," arXiv preprint arXiv:1910.12604, 2019.
[4] V. M. K. Lau, "Learning by example for parametric font design," in Proc. of SIGGRAPH Asia 2009 Posters, pp. 5:1-5:1, 2009.
[5] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al., "Photo-realistic single image super-resolution using a generative adversarial network," in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4681-4690, 2017.
[6] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. N. Metaxas, "StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks," in Proc. of the IEEE International Conference on Computer Vision (ICCV), pp. 5907-5915, 2017.
[7] D. E. Knuth, TeX and METAFONT: New Directions in Typesetting. Boston, MA, USA: American Mathematical Society, 1979.
[8] Z. Lian, B. Zhao, and J. Xiao, "Automatic generation of large-scale handwriting fonts via style learning," in Proc. of SIGGRAPH Asia 2016 Technical Briefs, ACM, pp. 1-4, 2016.
[9] H. Q. Phan, H. Fu, and A. B. Chan, "FlexyFont: Learning transferring rules for flexible typeface synthesis," Computer Graphics Forum, vol. 34, no. 7, pp. 245-256, 2015.
[10] D. Ulyanov, A. Vedaldi, and V. Lempitsky, "Instance normalization: The missing ingredient for fast stylization," arXiv preprint arXiv:1607.08022, 2017.
[11] A. L. Maas, A. Y. Hannun, and A. Y. Ng, "Rectifier nonlinearities improve neural network acoustic models," in Proc. of the International Conference on Machine Learning (ICML), 2013.
[12] D. H. Ko, A. U. Hassan, S. Majeed, and J. Choi, "Font2Fonts: A modified image-to-image translation framework for font generation," in Proc. of Smart Media and Applications (SMA 2020), September 17-19, 2020.
[13] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-698, 1986.
[14] I. Avcibas, B. Sankur, and K. Sayood, "Statistical evaluation of image quality measures," Journal of Electronic Imaging, vol. 11, no. 2, pp. 206-223, 2002.
[15] T.-C. Lee, R. L. Kashyap, and C.-N. Chu, "Building skeleton models via 3-D medial surface axis thinning algorithms," CVGIP: Graphical Models and Image Processing, vol. 56, no. 6, pp. 462-478, 1994.
[16] P. Upchurch, N. Snavely, and K. Bala, "From A to Z: Supervised transfer of style and content using deep neural network generators," arXiv preprint arXiv:1603.02003, 2016.
[17] Y. Tian, "zi2zi: Master Chinese calligraphy with conditional adversarial networks," 2017. [Online]. Available: https://github.com/kaonashi-tyc/zi2zi
[18] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proc. of the IEEE International Conference on Computer Vision (ICCV), 2017.
[19] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," arXiv preprint arXiv:1511.06434, 2016.
[20] A. Odena, C. Olah, and J. Shlens, "Conditional image synthesis with auxiliary classifier GANs," in Proc. of the 34th International Conference on Machine Learning (ICML), pp. 2642-2651, 2017.
[21] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.
[22] H. Hayashi, K. Abe, and S. Uchida, "GlyphGAN: Style-consistent font generation based on generative adversarial networks," Knowledge-Based Systems, vol. 186, 2019.
[23] Y. Kataoka, T. Matsubara, and K. Uehara, "Image generation using generative adversarial networks and attention mechanism," in Proc. of the 15th IEEE/ACIS International Conference on Computer and Information Science (ICIS), 2016.
[24] M. Lucic, K. Kurach, M. Michalski, S. Gelly, and O. Bousquet, "Are GANs created equal? A large-scale study," in Proc. of the 32nd International Conference on Neural Information Processing Systems (NeurIPS), 2018.
[25] Y. Taigman, A. Polyak, and L. Wolf, "Unsupervised cross-domain image generation," arXiv preprint arXiv:1611.02200, 2016.
[26] T. Miyazaki, T. Tsuchiya, Y. Sugaya, S. Omachi, M. Iwamura, S. Uchida, and K. Kise, "Automatic generation of typographic font from a small font subset," IEEE Computer Graphics and Applications, vol. 40, no. 1, pp. 99-111, 2020.
[27] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[28] Y. Jiang, Z. Lian, Y. Tang, and J. Xiao, "SCFont: Structure-guided Chinese font generation via deep stacked networks," in Proc. of the AAAI Conference on Artificial Intelligence (AAAI), vol. 33, no. 1, pp. 4015-4022, 2019.
[29] Y. Gao, Y. Guo, Z. Lian, Y. Tang, and J. Xiao, "Artistic glyph image synthesis via one-stage few-shot learning," ACM Transactions on Graphics, vol. 38, no. 6, Article 185, pp. 1-12, 2019.
[30] Y. Jing, Y. Yang, Z. Feng, J. Ye, Y. Yu, and M. Song, "Neural style transfer: A review," IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 11, pp. 3365-3385, Nov. 2020.
[31] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in Proc. of International Conference on Learning Representations (ICLR), 2015.
[32] K. J. Liang, C. Li, G. Wang, and L. Carin, "Generative adversarial network training is a continual learning problem," arXiv preprint arXiv:1811.11083, 2018.
[33] M. Mirza and S. Osindero, "Conditional generative adversarial nets," arXiv preprint arXiv:1411.1784, 2014.
[34] S. Xu, F. Lau, W. K. Cheung, and Y. Pan, "Automatic generation of artistic Chinese calligraphy," IEEE Intelligent Systems, vol. 20, no. 3, pp. 32-39, 2005.
[35] S. Yang, J. Liu, Z. Lian, and Z. Guo, "Awesome typography: Statistics-based text effects transfer," in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2886-2895, 2016.
[36] A. Hertzmann, C. E. Jacobs, N. Oliver, B. Curless, and D. H. Salesin, "Image analogies," in Proc. of SIGGRAPH, ACM, pp. 327-340, 2001.
[37] N. D. Campbell and J. Kautz, "Learning a manifold of fonts," ACM Transactions on Graphics, vol. 33, no. 4, pp. 91:1-91:11, 2014.
[38] S. Azadi, M. Fisher, V. Kim, Z. Wang, E. Shechtman, and T. Darrell, "Multi-content GAN for few-shot font style transfer," in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7564-7573, 2018.
[39] A. Odena, C. Olah, and J. Shlens, "Conditional image synthesis with auxiliary classifier GANs," in Proc. of the 34th International Conference on Machine Learning (ICML), pp. 2642-2651, 2017.
[40] Y. Jiang, Z. Lian, Y. Tang, and J. Xiao, "DCFont: An end-to-end deep Chinese font generation system," in Proc. of SIGGRAPH Asia 2017 Technical Briefs, pp. 1-4, 2017.
[41] S. Uchida, Y. Egashira, and K. Sato, "Exploring the world of fonts for discovering the most standard fonts and the missing fonts," in Proc. of the 13th International Conference on Document Analysis and Recognition (ICDAR), pp. 441-445, 2015.