Acknowledgement
Supported by: Institute for Information & Communications Technology Promotion (IITP), National Research Foundation of Korea (NRF), and Korea Evaluation Institute of Industrial Technology (KEIT)
References
- P. Ekman and W. V. Friesen, “Detecting deception from the body or face,” J. Pers. Soc. Psychol., Vol. 29, No. 3, pp. 288-298, 1974. https://doi.org/10.1037/h0036006
- I. B. Mauss and M. D. Robinson, “Measures of emotion: A review,” Cogn. Emot., Vol. 23, No. 2, pp. 209-237, 2009. https://doi.org/10.1080/02699930802204677
- M. Shah, B. Mears, C. Chakrabarti, and A. Spanias, "Lifelogging: Archival and retrieval of continuously recorded audio using wearable devices," IEEE International Conference on Emerging Signal Processing Applications, pp. 99-102, 2012.
- J. Tao and T. Tan, "Affective computing: A review," Proc. ACII 2005, pp. 981-995, 2005.
- M. Soleymani, G. Chanel, J. J. M. Kierkels, and T. Pun, “Affective characterization of movie scenes based on content analysis and physiological changes,” Int. J. Semant. Comput., Vol. 3, No. 2, pp. 235-254, 2009. https://doi.org/10.1142/S1793351X09000744
- K. Scherer, "Adding the affective dimension: a new look in speech analysis and synthesis," ICSLP, 1996.
- G. Caridakis, G. Castellano, and L. Kessous, "Multimodal emotion recognition from expressive faces, body gestures and speech," in C. Boukis, A. Pnevmatikakis, and L. Polymenakos (Eds.), Artificial Intelligence and Innovations 2007: From Theory to Applications, 2007.
- E. Navas, I. Hernaez, and I. Luengo, "An objective and subjective study of the role of semantics and prosodic features in building corpora for emotional TTS," IEEE Trans. Audio, Speech, Lang. Process., Vol. 14, No. 4, pp. 1117-1127, Jul. 2006. https://doi.org/10.1109/TASL.2006.876121
- F. Burkhardt, A. Paeschke, M. Rolfes, W. F. Sendlmeier, and B. Weiss, "A database of German emotional speech," Proc. Interspeech 2005, pp. 1517-1520, 2005.
- H. Atassi and A. Esposito, "A speaker independent approach to the classification of emotional vocal expressions," Proc. 20th IEEE Int. Conf. on Tools with Artificial Intelligence (ICTAI), 2008.
- Z. Huang, M. Dong, Q. Mao, and Y. Zhan, "Speech emotion recognition using CNN," Proc. ACM International Conference on Multimedia, pp. 801-804, 2014.
- D. Bogdanov, N. Wack, E. Gomez, S. Gulati, P. Herrera, O. Mayor, G. Roma, J. Salamon, J. Zapata, and X. Serra, "ESSENTIA: An audio analysis library for music information retrieval," ISMIR 2013, pp. 493-498, 2013.
- E.-S. Kim, et al., "Behavioral pattern modeling of human-human interaction for teaching restaurant service robots," Proc. AAAI 2015 Fall Symposium on AI for Human-Robot Interaction, 2015.
- Y. Baveye, E. Dellandréa, C. Chamaret, and L. Chen, "LIRIS-ACCEDE: A video database for affective content analysis," IEEE Trans. Affect. Comput., Vol. 6, No. 1, pp. 43-55, 2015. https://doi.org/10.1109/TAFFC.2015.2396531