Quantified Lockscreen: Integration of Personalized Facial Expression Detection and Mobile Lockscreen application for Emotion Mining and Quantified Self

Quantified Lockscreen: An Integrated Application of Personalized Facial Expression Recognition and a Mobile Lockscreen for Emotion Mining and the Quantified Self

  • 김성실 (Department of Computer Science, KAIST) ;
  • 박준수 (Department of Computer Science, KAIST) ;
  • 우운택 (Graduate School of Culture Technology, KAIST)
  • Received : 2015.05.28
  • Accepted : 2015.08.19
  • Published : 2015.11.15

Abstract

The lockscreen is one of the interfaces smartphone users encounter most frequently. Although users perform unlocking actions every day, lockscreens offer no benefit beyond security and authentication. In this paper, we replace the traditional lockscreen with an application that analyzes facial expressions in order to collect facial expression data and provide real-time feedback to users. To evaluate this concept, we implemented the Quantified Lockscreen application, which supports the following contributions of this paper: 1) an unobtrusive interface for collecting facial expression data and evaluating emotional patterns, 2) improved accuracy of facial expression detection through a personalized machine learning process, and 3) enhanced validity of emotion data through a bidirectional, multi-channel, multi-input methodology.
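The paper's implementation is not reproduced on this page, but a minimal sketch of what the personalized machine-learning step could look like follows, assuming LBP texture features (reference [19]) and an SVM classifier (reference [18]) trained per user on that user's own labelled face crops. The function names, hyperparameters, and preprocessing are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch only (not the authors' implementation): a personalized
# facial-expression classifier built from LBP texture features and an SVM,
# trained on one user's own labelled face crops. Label names, hyperparameters,
# and preprocessing are assumptions.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def lbp_histogram(gray_face, points=8, radius=1):
    """Uniform-LBP histogram of a cropped grayscale face image."""
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns plus one bin for non-uniform codes
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist


def train_personal_model(face_images, labels):
    """Fit a per-user SVM on that user's own labelled face crops."""
    features = np.array([lbp_histogram(face) for face in face_images])
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    model.fit(features, labels)
    return model


def predict_expression(model, gray_face):
    """Predict an expression label (e.g., 'happy', 'neutral') for a new face crop."""
    return model.predict(lbp_histogram(gray_face).reshape(1, -1))[0]
```

Training on a single user's faces, rather than a generic corpus, is one plausible way to realize the personalization the abstract describes; the authors' actual pipeline may differ in features, classifier, and training procedure.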

The lockscreen is one of the interfaces people encounter most frequently on mobile platforms. According to one survey, smartphone users unlock their phones an average of 150 times per day [1], yet lockscreen interfaces such as pattern locks and passwords provide little benefit beyond security and authentication. In this paper, we replace the conventional security-oriented lockscreen with a face and facial expression recognition application that uses the front-facing camera, collecting facial expression data and providing real-time feedback on changes in expression and emotion. Through experiments with the Quantified Lockscreen application, we 1) verified that continuous facial expression data can be acquired and emotional patterns analyzed through an unobtrusive lockscreen-based interface, 2) improved the accuracy of facial expression recognition and emotion detection through personalized training and analysis, and 3) confirmed the feasibility of multi-channel, multi-input emotion detection by introducing a bidirectional validation technique that strengthens the validity of emotion data inferred from facial expressions.
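The abstract emphasizes continuous, unobtrusive data acquisition and emotion-pattern analysis. A minimal sketch of how per-unlock predictions could be aggregated into daily patterns for quantified-self feedback is shown below; the record format and label set are assumptions made for this example, not the application's actual data model.

```python
# Illustrative sketch only: summarising per-unlock expression predictions into
# daily emotion distributions for quantified-self feedback. The record format
# and label set are assumptions made for this example.
from collections import Counter
from datetime import datetime

# One record per unlock event: (ISO timestamp, predicted expression label).
records = [
    ("2015-05-18T09:12:03", "neutral"),
    ("2015-05-18T13:40:11", "happy"),
    ("2015-05-18T21:05:47", "sad"),
]


def daily_emotion_distribution(records):
    """Group predictions by calendar day and return per-day label proportions."""
    per_day = {}
    for stamp, label in records:
        day = datetime.fromisoformat(stamp).date()
        per_day.setdefault(day, Counter())[label] += 1
    return {
        day: {label: count / sum(counts.values()) for label, count in counts.items()}
        for day, counts in per_day.items()
    }


print(daily_emotion_distribution(records))
# e.g. {datetime.date(2015, 5, 18): {'neutral': 0.33, 'happy': 0.33, 'sad': 0.33}}
```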

References

  1. M. Meeker. (2013, May 29). 2013 Internet Trends [Online]. Available: http://www.kpcb.com/blog/2013-internet-trends (downloaded 2015, May 18)
  2. S. Berthoz and E. L. Hill, "The validity of using self-reports to assess emotion regulation abilities in adults with autism spectrum disorder," European Psychiatry, Vol. 20, No. 3, pp. 291-298, May 2005. https://doi.org/10.1016/j.eurpsy.2004.06.013
  3. M. Swan, "The quantified self: fundamental disruption in big data science and biological discovery," Big Data, Vol. 1, No. 2, pp. 85-99, Jun. 2013. https://doi.org/10.1089/big.2012.0002
  4. Q. Zheng and Q. Jiwei, "Evaluating the emotion based on ontology," Proc. of the 2011 3rd IEEE Symposium on Web Society (SWS), pp. 32-36, 2011.
  5. T. Sharma, K. Bhanu, "Emotion estimation of physiological signals by using low power embedded system," Proc. of the Conference on Advances in Communication and Control Systems, pp. 42-45, 2013.
  6. A. Barliya, L. Omlor, M. A. Giese, A. Berthoz and T. Flash, "Expression of emotion in the kinematics of locomotion," Experimental Brain Research, Vol. 22, No. 2, pp. 159-176, 2013.
  7. M. E. Ayadi, M. S. Kamel, and F. Karray, "Survey on speech emotion recognition: Features, classification schemes, and databases," Pattern Recognition, Vol. 44, No. 3, pp. 572-587, 2011. https://doi.org/10.1016/j.patcog.2010.09.020
  8. M. Thelwall, D. Wilkinson, and S. Uppal, "Data mining emotion in social network communication: Gender differences in MySpace," Journal of the American Society for Information Science and Technology, Vol. 61, No. 1, pp. 190-199, 2010. https://doi.org/10.1002/asi.21180
  9. G. R. Duncan. (2012, Jan 9), A Smart Phone That Knows You're Angry [Online]. Available: http://www.technologyreview.com/news/426560/a-smart-phone-that-knows-youre-angry/ (downloaded 2015, Feb. 11)
  10. P. Ekman, and W. V. Friesen, "Facial action coding system," 1977.
  11. T. Kanade, J. F. Cohn and Y. Tian, "Comprehensive database for facial expression analysis," Proc. of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 484-491, 2000.
  12. R. Gross, I. Matthews, and S. Baker, "Generic vs. person specific active appearance models," Image and Vision Computing, Vol. 23, No. 12, pp. 1080-1093, 2005. https://doi.org/10.1016/j.imavis.2005.07.009
  13. J. Hamm, C. G. Kohler, R. C. Gur and R. Verma, "Automated facial action coding system for dynamic analysis of facial expressions in neuropsychiatric disorders," Journal of neuroscience methods, Vol. 200, No. 2, pp. 237-256, 2011. https://doi.org/10.1016/j.jneumeth.2011.06.023
  14. G. Castellano, L. Kessous, G. Caridakis, "Emotion recognition through multiple modalities: face, body gesture, speech," Affect and emotion in human-computer interaction, pp. 92-103, 2008.
  15. S. D'Mello and J. Kory, "Consistent but modest: a meta-analysis on unimodal and multimodal affect detection accuracies from 30 studies," Proc. of the 14th ACM International Conference on Multimodal Interaction, pp. 31-38, 2012.
  16. N. Cummins, J. Joshi, A. Dhall, V. Sethu, R. Goecke and J. Epps, "Diagnosis of depression by behavioural signals: a multimodal approach," Proc. of the 3rd ACM International Workshop on Audio/Visual Emotion Challenge, pp. 11-20, 2013.
  17. S. M. Sergio, O. C. Santos, J. G. Boticario, "Affective state detection in educational systems through mining multimodal data sources," 6th International Conference on Educational Data Mining, pp. 348-349, 2013.
  18. C. C. Chang and C. J. Lin, "LIBSVM: a library for support vector machines," ACM Transactions on Intelligent Systems and Technology, Vol. 2, No. 3, Article 27, 2011.
  19. T. Ojala, M. Pietikainen, and D. Harwood, "A comparative study of texture measures with classification based on featured distributions," Pattern recognition, Vol. 29, No. 1, pp. 51-59, 1996. https://doi.org/10.1016/0031-3203(95)00067-4