Implementation of Multi Channel Network Platform based Augmented Reality Facial Emotion Sticker using Deep Learning

  • Kim, Dae-Jin (Research Institute for Image & Cultural Contents, Dongguk University)
  • Received : 2018.06.25
  • Accepted : 2018.07.25
  • Published : 2018.07.31

Abstract

Recently, a wide variety of content services over the Internet have become common, and among them MCN (Multi Channel Network) platform services have gained popularity along with the widespread adoption of smartphones. The MCN platform is based on streaming, and various features are added to improve the service; among these, augmented reality sticker services based on face recognition are widely used. In this paper, we implemented an MCN platform that masks an augmented reality sticker onto the face through facial emotion recognition, with the aim of making the service more engaging than existing ones. For facial emotion recognition, we analyzed seven facial emotions using deep learning, and by applying the corresponding emotion sticker to the face we were able to further increase user satisfaction. To implement the proposed MCN platform, emotion stickers were applied on the client side, and several servers capable of providing the streaming service were designed.

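The paper itself does not include source code, but the client-side pipeline the abstract describes (detect a face, classify one of seven emotions with a deep learning model, then mask an emotion sticker over the face) can be illustrated with a short sketch. The example below is a minimal, hypothetical illustration rather than the authors' implementation: it assumes a pre-trained 48x48 grayscale CNN saved as `emotion_cnn.h5` and RGBA sticker images in a `stickers/` directory (both placeholder names), uses an OpenCV Haar cascade for face detection, and takes its seven class labels from the FER2013 challenge data cited in reference 8.

```python
"""Minimal sketch of a facial-emotion AR sticker client (not the authors' code)."""
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Seven emotion classes of the FER2013 challenge data (reference 8).
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# Hypothetical pre-trained 48x48 grayscale emotion CNN and RGBA sticker images.
model = load_model("emotion_cnn.h5")
stickers = {e: cv2.imread(f"stickers/{e}.png", cv2.IMREAD_UNCHANGED) for e in EMOTIONS}
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def overlay_sticker(frame, sticker, x, y, w, h):
    """Alpha-blend an RGBA sticker onto the detected face region of a BGR frame."""
    sticker = cv2.resize(sticker, (w, h))
    alpha = sticker[:, :, 3:4] / 255.0                 # per-pixel opacity
    roi = frame[y:y + h, x:x + w]
    frame[y:y + h, x:x + w] = (alpha * sticker[:, :, :3] + (1 - alpha) * roi).astype(np.uint8)

def process_frame(frame):
    """Detect faces, classify the emotion of each, and mask the matching sticker."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        emotion = EMOTIONS[int(np.argmax(probs))]
        overlay_sticker(frame, stickers[emotion], x, y, w, h)
    return frame

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)                          # local camera as the stream source
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("emotion sticker preview", process_frame(frame))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

In the platform described in the abstract, the processed frames would be encoded and pushed to the streaming servers rather than shown in a local preview window as done here.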

References

  1. Dae-Jin Kim, "Implementation of One-Person Media Live System in Closed Network Environment," Journal of Digital Contents Society, Vol. 18, No. 1, pp. 1-4, 2017 https://doi.org/10.9728/dcs.2017.18.1.1
  2. Luis Rodriguez-Gil, Pablo Orduna, Javier Garcia-Zubia, Diego Lopez-de-Ipina, "Interactive live-streaming technologies and approaches for web-based applications," Multimedia Tools and Applications, Vol. 77, pp. 6471-6502, 2018 https://doi.org/10.1007/s11042-017-4556-6
  3. Jianwei Zhang, Xinchang Zhang, Chunling Yang, "Towards the multi-request mechanism in pull-based peer-to-peer live streaming systems," Computer Networks, Vol. 138, pp. 77-89, 2018 https://doi.org/10.1016/j.comnet.2018.03.031
  4. Yi Sun, Xiaogang Wang, Xiaoou Tang, "Deep Learning Face Representation from Predicting 10,000 Classes," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1891-1898, 2014
  5. Donghee Shin, "Empathy and embodied experience in virtual environment: To what extent can virtual reality stimulate empathy and embodied experience?", Computers in Human Behavior, Vol. 78, pp. 64-73, 2018 https://doi.org/10.1016/j.chb.2017.09.012
  6. Mobitalk project [Internet]. Available: http://www.maneullab.com/.
  7. Rajeev Ranjan, Swami Sankaranarayanan, Ankan Bansal, Navaneeth Bodla, Jun-Cheng Chen, Vishal M. Patel, Carlos D. Castillo, Rama Chellappa, "Deep Learning for Understanding Faces: Machines May Be Just as Good, or Better, than Humans", IEEE Signal Processing Magazine, Vol. 35, pp. 66-83, 2018 https://doi.org/10.1109/MSP.2017.2764116
  8. Facial Expression Recognition Challenge [Internet]. Available: https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data
  9. Dae-Jin Kim, "Implementation of Real-time Video Surveillance System based on Multi-Screen in Mobile-phone Environment," Journal of Digital Contents Society, Vol. 18, No. 6, pp. 1009-1015, 2017

Cited by

  1. Distributed and Parallel Deep Learning Architecture Exploiting Dynamic Stale Synchronous Parallel Method vol.20, no.2, 2019, https://doi.org/10.9728/dcs.2019.20.2.387
  2. Detecting Facial Region and Landmarks at Once via Deep Network vol.21, no.16, 2021, https://doi.org/10.3390/s21165360