http://dx.doi.org/10.5909/JBE.2018.23.5.614

Depth Image Restoration Using Generative Adversarial Network  

Nah, John Junyeop (Inha University, Department of Information and Communication Engineering)
Sim, Chang Hun (Inha University, Department of Information and Communication Engineering)
Park, In Kyu (Inha University, Department of Information and Communication Engineering)
Publication Information
Journal of Broadcast Engineering / v.23, no.5, 2018, pp. 614-621
Abstract
This paper proposes a method for restoring corrupted depth images captured by a depth camera through unsupervised learning with a generative adversarial network (GAN). The proposed method first generates face depth images with a 3D morphable model convolutional neural network (3DMM CNN) from the large-scale CelebFaces Attributes (CelebA) and FaceWarehouse datasets, and uses them to train a deep convolutional generative adversarial network (DCGAN). The generator and discriminator are trained in a minimax game with the Wasserstein distance as the loss function. The trained generator is then used in a second optimization procedure, with a new loss function, to restore the missing regions of captured facial depth images.
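The abstract describes two stages: adversarial training of a DCGAN-style generator and discriminator with the Wasserstein loss, followed by restoration of a corrupted depth image by optimizing over the generator's latent space with a separate loss. The sketch below illustrates that pipeline in PyTorch under stated assumptions; the network sizes, hyperparameters, and names (Generator, Critic, restore_depth, the 64x64 resolution, the contextual/prior loss weighting) are illustrative choices, not the authors' actual implementation.

```python
# Minimal sketch of (1) WGAN training of a DCGAN-style generator/critic on clean
# face depth images and (2) restoring a corrupted depth image by optimizing the
# latent vector z so the generated image matches the valid (uncorrupted) pixels.
# Architecture sizes and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn

Z_DIM = 100  # latent vector size (assumed)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(  # maps z to a 64x64 single-channel depth image
            nn.ConvTranspose2d(Z_DIM, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh())

    def forward(self, z):
        return self.net(z.view(-1, Z_DIM, 1, 1))

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(  # no sigmoid: outputs an unbounded Wasserstein score
            nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(256, 1, 4, 1, 0))

    def forward(self, x):
        return self.net(x).view(-1)

def wgan_critic_step(G, D, opt_D, real, clip=0.01):
    """One critic update: maximize E[D(real)] - E[D(G(z))]."""
    z = torch.randn(real.size(0), Z_DIM)
    loss = -(D(real).mean() - D(G(z).detach()).mean())
    opt_D.zero_grad(); loss.backward(); opt_D.step()
    for p in D.parameters():  # weight clipping enforces the Lipschitz constraint
        p.data.clamp_(-clip, clip)
    return loss.item()

def wgan_generator_step(G, D, opt_G, batch_size):
    """One generator update: maximize E[D(G(z))]."""
    z = torch.randn(batch_size, Z_DIM)
    loss = -D(G(z)).mean()
    opt_G.zero_grad(); loss.backward(); opt_G.step()
    return loss.item()

def restore_depth(G, D, corrupted, mask, lam=0.1, iters=1000, lr=0.01):
    """Stage 2: keep G and D fixed and optimize z so G(z) matches the valid pixels.
    Loss = contextual term on known pixels + lam * adversarial (prior) term."""
    z = torch.randn(1, Z_DIM, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(iters):
        fake = G(z)
        contextual = ((fake - corrupted).abs() * mask).sum()  # match uncorrupted regions
        prior = -D(fake).mean()                               # keep result on the face-depth manifold
        loss = contextual + lam * prior
        opt.zero_grad(); loss.backward(); opt.step()
    # paste generated content into the holes only, keeping the measured depth elsewhere
    return mask * corrupted + (1 - mask) * G(z).detach()
```

Here `mask` is assumed to be 1 at valid depth pixels and 0 in the corrupted regions; the final composite keeps measured depth where it exists and fills the holes with the generator's output.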
Keywords
Deep learning; generative adversarial network; depth image; depth camera; restoration;
Citations & Related Records
연도 인용수 순위
  • Reference
1 C. Cao, Y. Weng, S. Zhou, Y. Tong, and K. Zhou, "FaceWarehouse: a 3D facial expression database for visual computing," IEEE Transactions on Visualization and Computer Graphics, vol. 20, no. 3, pp. 413-425, March 2014.
2 P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter, "A 3D face model for pose and illumination invariant face recognition," Proceedings of IEEE International Conference on Advanced Video and Signal Based Surveillance, October 2009.
3 Intel® RealSense™ Camera SR300, https://software.intel.com/sites/default/files/managed/0c/ec/realsense-sr300-product-data-sheet-rev-1-0.pdf (accessed August 13, 2018).
4 K. Xu, J. Zhou, and Z. Wang, "A method of hole-filling for the depth map generated by Kinect with moving objects detection," Proceedings of IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, pp. 1-5, June 2012.
5 L. Feng, L.-M. Po, X. Xu, K.-H. Ng, C.-H. Cheung, and K.-W. Cheung, "An adaptive background biased depth map hole-filling method for Kinect," Proceedings of the Annual Conference of the IEEE Industrial Electronics Society, pp. 2366-2371, November 2013.
6 S. Ikehata, J. Cho, and K. Aizawa, "Depth map inpainting and super-resolution based on internal statistics of geometry and appearance," Proceedings of IEEE International Conference on Image Processing, pp. 938-942, September 2013.
7 A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," Proceedings of International Conference on Learning Representations, May 2016.
8 R. A. Yeh, C. Chen, T. Yian Lim, A. G. Schwing, M. Hasegawa-Johnson, and M. N. Do, "Semantic image inpainting with deep generative models," Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, July 2017.
9 M. Arjovsky, S. Chintala, and L. Bottou, "Wasserstein GAN," Proceedings of International Conference on Machine Learning, vol. 70, pp. 214-223, August 2017.
10 I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial networks," Proceedings of Advances in Neural Information Processing Systems, December 2014.
11 A. T. Tran, T. Hassner, I. Masi, and G. Medioni, "Regressing robust and discriminative 3D morphable models with a very deep neural network," Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, July 2017.