References
- Antonacopoulos, A., Karatzas, D., and Bridson, D. (2006). Ground Truth for Layout Analysis Performance Evaluation. International Workshop on Document Analysis Systems. Springer Berlin Heidelberg.
- Ahn, H. (2014). Improvement of a Context-aware Recommender System through User's Emotional State Prediction. Journal of Information Technology Applications & Management, 21(4), 203-223.
- Ahn, H., Kim, S., and Kim, J. K. (2014). GA-optimized Support Vector Regression for an Improved Emotional State Estimation Model. KSII Transactions on Internet and Information Systems, 8(6), 2056-2069. https://doi.org/10.3837/tiis.2014.06.014
- Atkinson, A. P., Dittrich, W. H., Gemmell, A. J., and Young, A. W. (2004). Emotion Perception from Dynamic and Static Body Expressions in Point-Light and Full-Light Displays. Perception, 33(6), 717-746. https://doi.org/10.1068/p5096
- Baird, A. A., Gruber, S. A., Fein, D. A., Maas, L. C., Steingard, R. J., Renshaw, P. F., and Yurgelun-Todd, D. A. (1999). Functional Magnetic Resonance Imaging of Facial Affect Recognition in Children and Adolescents. Journal of the American Academy of Child & Adolescent Psychiatry, 38(2), 195-199. https://doi.org/10.1097/00004583-199902000-00019
- Banziger, T., Grandjean, D., and Scherer, K. R. (2009). Emotion Recognition from Expressions in Face, Voice, and Body: The Multimodal Emotion Recognition Test (MERT). Emotion, 9(5), 691-704. https://doi.org/10.1037/a0017088
- Barrett, L. F., and Russell, J. A. (1999). The Structure of Current Affect: Controversies and Emerging Consensus. Current Directions in Psychological Science, 8(1), 10-14. https://doi.org/10.1111/1467-8721.00003
- Bejani, M., Gharavian, D., and Charkari, N. M. (2014). Audiovisual Emotion Recognition using ANOVA Feature Selection Method and Multi-Classifier Neural Networks. Neural Computing and Applications, 24(2), 399-412. https://doi.org/10.1007/s00521-012-1228-3
- Boughrara, H., Chtourou, M., Amar, C. B., and Chen, L. (2016). Facial Expression Recognition based on a MLP Neural Network using Constructive Training Algorithm. Multimedia Tools and Applications, 75(2), 709-731. https://doi.org/10.1007/s11042-014-2322-6
- Buck, R. (1994). Social and Emotional Functions in Facial Expression and Communication: The Read-Out Hypothesis. Biological Psychology, 38(2-3), 95-115. https://doi.org/10.1016/0301-0511(94)90032-9
- Carroll, J. M., and Russell, J. A. (1997). Facial Expressions in Hollywood's Portrayal of Emotion. Journal of Personality and Social Psychology, 72(1), 164-176. https://doi.org/10.1037/0022-3514.72.1.164
- Chen, S., Tian, Y., Liu, Q., and Metaxas, D. N. (2013). Recognizing Expressions from Face and Body Gesture by Temporal Normalized Motion and Appearance Features. Image and Vision Computing, 31(2), 175-185.
- Citron, F. M., Gray, M. A., Critchley, H. D., Weekes, B. S., and Ferstl, E. C. (2014). Emotional Valence and Arousal Affect Reading in an Interactive Way: Neuroimaging Evidence for an Approach-Withdrawal Framework. Neuropsychologia, 56, 79-89. https://doi.org/10.1016/j.neuropsychologia.2014.01.002
- Cowie, R., Douglas-Cowie, E., Savvidou, S., McMahon, E., Sawey, M., and Schroder, M. (2000). Feeltrace: An Instrument for Recording Perceived Emotion in Real Time. Proceedings of the ISCA Workshop on Speech and Emotion, 19-24.
- Cybenko, G. (1989). Approximation by Superpositions of a Sigmoidal Function. Mathematics of Control, Signals and Systems, 2(4), 303-314. https://doi.org/10.1007/BF02551274
- Ekman, P. E. (1993). Facial Expression and Emotion. American Psychologist, 48(4), 384-392. https://doi.org/10.1037/0003-066X.48.4.384
- Ekman, P. E., and Davidson, R. J. (1994). The Nature of Emotion: Fundamental Questions. New York, NY, US: Oxford University Press.
- Ekman, P. E., and Friesen, W. V. (1976). Pictures of Facial Affect. Palo Alto, CA: Consulting Psychologists Press.
- Gross, M. M., Crane, E. A., and Fredrickson, B. L. (2012). Effort-Shape and Kinematic Assessment of Bodily Expression of Emotion during Gait. Human Movement Science, 31(1), 202-221. https://doi.org/10.1016/j.humov.2011.05.001
- Gunes, H., and Pantic, M. (2010). Automatic, Dimensional and Continuous Emotion Recognition. International Journal of Synthetic Emotions, 1(1), 68-99. https://doi.org/10.4018/jse.2010101605
- Gunes, H., and Piccardi, M. (2007). Bi-Modal Emotion Recognition from Expressive Face and Body Gestures. Journal of Network and Computer Applications, 30(4), 1334-1345. https://doi.org/10.1016/j.jnca.2006.09.007
- Gunes, H., Piccardi, M., and Pantic, M. (2008). Affective Computing: Focus on Emotion Expression, Synthesis, and Recognition. Vienna: I-Tech Education and Publishing KG, Vienna Austria.
- Jin, Q., Li, C., Chen, S., and Wu, H. (2015). Speech Emotion Recognition with Acoustic and Lexical Features. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE.
- Jung, M. K., and Kim, J. K. (2012). An Intelligent Determination Model of Audience Emotion for Implementing Personalized Exhibition. Journal of Intelligence and Information Systems, 18(1), 39-57.
- Kim, S., Ryoo, E., Jung, M. K., Kim, J. K., and Ahn, H. (2012). Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model. Journal of Intelligence and Information Systems, 18(3), 185-202. https://doi.org/10.13088/JIIS.2012.18.3.185
- Kolodyazhniy, V., Kreibig, S. D., Gross, J. J., Roth, W. T., and Wilhelm, F. H. (2011). An Affective Computing Approach to Physiological Emotion Specificity: Toward Subject Independent and Stimulus Independent Classification of Film Induced Emotions. Psychophysiology, 48(7), 908-922. https://doi.org/10.1111/j.1469-8986.2010.01170.x
- Koolagudi, S. G., and Rao, K. S. (2012). Emotion Recognition from Speech: A Review. International Journal of Speech Technology, 15(2), 99-117. https://doi.org/10.1007/s10772-011-9125-1
- Lang, P. J. (1995). The Emotion Probe: Studies of Motivation and Attention. American Psychologist, 50(5), 372-385. https://doi.org/10.1037/0003-066X.50.5.372
- Lee, H. C., Wu, C. Y., and Lin, T. M. (2013). Facial Expression Recognition Using Image Processing Techniques and Neural Networks. In Advances in Intelligent Systems and Applications, 2, 259-267. https://doi.org/10.1007/978-3-642-35473-1_26
- Lee, K., Choi, S. Y., Kim, J. K., and Ahn, H. (2014). Multimodal Emotional State Estimation Model for Implementation of Intelligent Exhibition Services. Journal of Intelligence and Information Systems, 20(1), 1-14. https://doi.org/10.13088/JIIS.2014.20.1.001
- Levine, D. S. (2007). Neural Network Modeling of Emotion. Physics of Life Reviews, 4(1), 37-63. https://doi.org/10.1016/j.plrev.2006.10.001
- Li, B. Y., Mian, A., Liu, W., and Krishna, A. (2013). Using Kinect for Face Recognition under Varying Poses, Expressions, Illumination and Disguise. In 2013 IEEE Workshop on Applications of Computer Vision (WACV), IEEE.
- Lischke, A., Berger, C., Prehn, K., Heinrichs, M., Herpertz, S. C., and Domes, G. (2012). Intranasal Oxytocin Enhances Emotion Recognition from Dynamic Facial Expressions and Leaves Eye-Gaze Unaffected. Psychoneuroendocrinology, 37(4), 475-481. https://doi.org/10.1016/j.psyneuen.2011.07.015
- Liu, S., Ruan, Q., Wang, C., and An, G. (2012). Tensor Rank One Differential Graph Preserving Analysis for Facial Expression Recognition. Image and Vision Computing, 30(8), 535-545. https://doi.org/10.1016/j.imavis.2012.05.004
- Lundqvist, L. O., Carlsson, F., Hilmersson, P., and Juslin, P. N. (2009). Emotional Responses to Music: Experience, Expression, and Physiology. Psychology of Music, 37(1), 61-90. https://doi.org/10.1177/0305735607086048
- Ma, L., and Khorasani, K. (2004). Facial Expression Recognition using Constructive Feedforward Neural Networks. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 34(3), 1588-1595. https://doi.org/10.1109/TSMCB.2004.825930
- Nakasone, A., Prendinger, H., and Ishizuka, M. (2005). Emotion Recognition from Electromyography and Skin Conductance. Proceedings of the 5th International Workshop on Biosignal Interpretation.
- Nicolaou, M., Gunes, H., and Pantic, M. (2011). Continuous Prediction of Spontaneous Affect from Multiple Cues and Modalities in Valence-Arousal Space. IEEE Transactions on Affective Computing, 2(2), 92-105. https://doi.org/10.1109/T-AFFC.2011.9
- Pantic, M., and Rothkrantz, L. (2003). Toward an Affect Sensitive Multimodal Human-Computer Interaction. Proceedings of the IEEE, 91(9), 1370-1390. https://doi.org/10.1109/JPROC.2003.817122
- Parrott, W. G. (2004). The Nature of Emotion. In Brewer, M. B., and Hewstone, M. (Eds.), Emotion and Motivation (pp. 5-20). Malden, MA: Blackwell.
- Rosenblum, M., Yacoob, Y., and Davis, L. S. (1996). Human Expression Recognition from Motion using a Radial Basis Function Network Architecture. IEEE Transactions on Neural Networks, 7(5), 1121-1138. https://doi.org/10.1109/72.536309
- Ryoo, E. C., Ahn, H., and Kim, J. K. (2013). The Audience Behavior-based Emotion Prediction Model for Personalized Service. Journal of Intelligence and Information Systems, 19(2), 73-85. https://doi.org/10.13088/jiis.2013.19.2.073
- Russell, J. A. (1980). A Circumplex Model of Affect. Journal of Personality and Social Psychology, 39(6), 1161-1178. https://doi.org/10.1037/h0077714
- Russell, J. A. (2003). Core Affect and the Psychological Construction of Emotion. Psychological Review, 110(1), 145-172.
- Tartter, V. C. (1980). Happy Talk: Perceptual and Acoustic Effects of Smiling on Speech. Perception & Psychophysics, 27(1), 24-27. https://doi.org/10.3758/BF03199901
- Thayer, R. E. (1989). The Biopsychology of Mood and Arousal. New York: Oxford University Press.
- Wang, W., Enescu, V., and Sahli, H. (2013). Towards Real-Time Continuous Emotion Recognition from Body Movements. International Workshop on Human Behavior Understanding, Springer International Publishing.
- Xiao, Y., Chandrasiri, N. P., Tadokoro, Y., and Oda, M. (1999). Recognition of Facial Expressions using 2D DCT and Neural Network. Electronics and Communications in Japan (Part III: Fundamental Electronic Science), 82(7), 1-11. https://doi.org/10.1002/(SICI)1520-6440(199907)82:7<1::AID-ECJC1>3.0.CO;2-E
- Yurgelun-Todd, D. A., Waternaux, C. M., Cohen, B. M., Gruber, S. A., English, C. D., and Renshaw, P. F. (1996). Functional Magnetic Resonance Imaging of Schizophrenic Patients and Comparison Subjects during Word Production. American Journal of Psychiatry, 153(2), 200-205. https://doi.org/10.1176/ajp.153.2.200
- Zeng, Z., Pantic, M., Roisman, G., and Huang, T. (2009). A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(1), 39-58. https://doi.org/10.1109/TPAMI.2008.52