The interaction between emotion recognition through facial expression based on cognitive user-centered television

A Study of User-Centered Emotion-Recognition TV through Facial Expressions: On the Interaction between Humans and TV Based on Emotion Recognition from Human Facial Expressions

  • 이종식 (Department of Interaction Science, Sungkyunkwan University)
  • 신동희 (Department of Interaction Science, Sungkyunkwan University)
  • Received : 2014.03.05
  • Accepted : 2014.05.14
  • Published : 2014.05.31

Abstract

In this study we focus on the interaction between humans and a reactive television that uses emotion recognition through facial expressions. Most user interfaces in today's electronic products are passive and are not well fitted to users' needs. From a user-centered perspective, we propose that an emotion-based reactive television offers more effective interaction than conventional passive-input products. We have developed and studied user-centered next-generation cognitive TV models. This paper presents the results of an experiment conducted with the Fraunhofer IIS SHORE™ demo software to measure emotion recognition. This approach was based on real-time cognitive TV models, and through it we studied the relationship between humans and cognitive TV. The study proceeds in the following steps:

  1. The cognitive TV system switches ON/OFF automatically in response to people's motion.
  2. The cognitive TV selects channels directly as the viewer's facial expression changes (e.g., Neutral, Happy, Sad, and Angry modes).
  3. The cognitive TV detects the viewer's emotion from facial expressions within a fixed time window; if Happy mode is detected, the TV switches to funny or interesting shows, and if Angry mode is detected, it switches to moving or touching shows.

In addition, we focus on improving emotion recognition through facial expressions. Furthermore, cognitive TV should be adapted to personal characteristics, since users bring different personalities to human-computer interaction. In this manner, we thoroughly discuss how people feel, how the cognitive TV responds accordingly, and the effects of media as a cognitive mechanism.

References

  1. Nakano, T., Ando, H., Ishizu, H., Morie, T., & Iwata, A. "Coarse image region segmentation using resistive-fuse networks implemented in FPGA." 7th World Multiconference on Systemics, Cybernetics and Informatics, Vol. 4, pp. 186-191, 2003.
  2. Nakano, T., Morie, T., & Iwata, A. "A Face/Object Recognition System Using FPGA Implementation of Coarse Region Segmentation." SICE Annual Conference in Fukui, Fukui University, Japan, 2003.
  3. Morizet, N., Amiel, F., Hamed, I. D., & Ea, T. "A comparative implementation of PCA face recognition algorithm." ICECS 2007, 14th IEEE International Conference on Electronics, Circuits and Systems, IEEE, pp. 865-868, 2007.
  4. Sannella, Michael John. Constraint Satisfaction and Debugging for Interactive User Interfaces. Diss., University of Washington, 1994.
  5. Sharma, Sudhir, and Wang Chen. "Using model-based design to accelerate FPGA development for automotive applications." The MathWorks, 2009.
  6. Madi, R., Lahoud, R., Sawan, B., & Saghir, M. "Face Recognition on FPGA." Spring Term Report, EECE 501, 2006.
  7. Veloso, M., Pinkal, M., Uszkoreit, H., Wahlster, W., and Wooldridge, M. J. "Cognitive Technologies." 2007.
  8. Stork, Hans-Georg. "Towards a Scientific Foundation for Engineering Cognitive Systems - an Interim Report." 2007.
  9. Burns, Dan, Air Force Research Laboratory (AFRL). "FPGA Hardware Acceleration of DNA Code Library Design (and Cognitive Models)." 2008.
  10. Farabet, Clément, Cyril Poulet, and Yann LeCun. "An FPGA-based stream processor for embedded real-time vision with convolutional networks." Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on, IEEE, pp. 878-885, 2009.
  11. Jiang, Jintao, et al. "On the relationship between face movements, tongue movements, and speech acoustics." EURASIP Journal on Applied Signal Processing 11, pp. 1174-1188, 2002.
  12. Oviatt, Sharon. "Human-centered design meets cognitive load theory: designing interfaces that help people think." Proceedings of the 14th Annual ACM International Conference on Multimedia, pp. 871-880, 2006.
  13. So, Hayden Kwok-Hay, and Robert Brodersen. "A unified hardware/software runtime environment for FPGA-based reconfigurable computers using BORPH." ACM Transactions on Embedded Computing Systems (TECS) 7.2, 14, 2008.
  14. Deivamani, M., R. Baskaran, and P. Dhavachelvan. "Improving Emotion Recognition with a Learning Multi-agent System." Department of Computer Science & Engineering, Anna University, Chennai, India.
  15. Vogt, Thurid, and Elisabeth André. "Improving automatic emotion recognition from speech via gender differentiation." Proc. Language Resources and Evaluation Conference (LREC 2006), Genoa, 2006.
  16. Fiore, Stephen M. (Ed.), Cognitive Technology, http://www.cognitivetechnologyjournal.com/Default.aspx (accessed Apr. 20, 2014).
  17. Mayer, Richard E., and Roxana Moreno, University of California, Santa Barbara. "A Cognitive Theory of Multimedia Learning: Implications for Design Principles." 2000.
  18. Sullivan, Jim, University of Colorado, 2003, http://portal.acm.org/citation.cfm?id=957205.957232 (accessed Apr. 23, 2014).
  19. "Chapter 11: Cognitive Theory," http://allpsych.com/personalitysynopsis/cognitive.html (accessed Apr. 23, 2014).
  20. "Social cognitive theory," http://en.wikipedia.org/wiki/Social_cognitive_theory (accessed Apr. 23, 2014).
  21. Pao, Chung, and Chao. "Face recognition system based on front-end facial feature extraction using FPGA." 2009.
  22. "Sex-related differences in spatial cognition," http://en.wikipedia.org/wiki/Sex-related_differences_in_spatial_cognition (accessed Apr. 23, 2014).
  23. Fraunhofer IIS SHORE™, http://www.iis.fraunhofer.de/bf/bsy/produkte/shore/ (accessed Apr. 25, 2014).