http://dx.doi.org/10.13088/jiis.2012.18.1.039

The Intelligent Determination Model of Audience Emotion for Implementing Personalized Exhibition  

Jung, Min-Kyu (The School of Management, Kyunghee University)
Kim, Jae-Kyeong (The School of Management, Kyunghee University)
Publication Information
Journal of Intelligence and Information Systems / v.18, no.1, 2012, pp. 39-57
Abstract
Recently, due to the introduction of high-tech equipment into exhibitions, much attention has been focused on interactive exhibits, which can amplify the exhibition effect through interaction with the audience. An interactive exhibition also makes it possible to measure a variety of audience reactions. Among these reactions, this research uses changes in facial features that can be collected in an interactive exhibition space. We develop an artificial neural network-based prediction model that predicts the audience's response by measuring how facial features change when the audience is stimulated from a non-excited (neutral) state. The emotional state of the audience is represented with a Valence-Arousal model.

The research proposes an overall framework composed of six steps. The first step collects data for modeling; the data were gathered from visitors to the 2012 Seoul DMC Culture Open and used for the experiments. The second step extracts 64 facial features from the collected data and compensates the feature values. The third step generates the independent and dependent variables of the artificial neural network model. The fourth step selects, using statistical techniques, the independent variables that affect the dependent variables. The fifth step builds the artificial neural network model and performs the learning process using the training and test sets. The sixth step validates the prediction performance of the model on a separate validation set. The proposed model was compared with a statistical prediction model to determine whether it performs better. Although the data set in this experiment contained much noise, the proposed model showed better results than a multiple regression analysis model.

If this prediction model of audience reaction were used in a real exhibition, it could provide countermeasures and services appropriate to the reaction of the audience viewing the exhibits. Specifically, if the audience's arousal toward an exhibit is low, action can be taken to increase it, for instance by recommending other preferred contents to the audience or by using light or sound to draw attention to the exhibit. In other words, future exhibitions could be planned to satisfy diverse audience preferences, and a personalized environment that helps visitors concentrate on the exhibits can be expected. However, the proposed model still shows low prediction accuracy, for the following reasons. First, the data cover diverse visitors of a real exhibition, so it was difficult to control an optimized experimental environment; the collected data therefore contain much noise, which lowers accuracy. In further research, data collection will be conducted in a more controlled experimental environment, and work to increase the prediction accuracy of the model will continue. Second, changes of facial expression alone are thought to be insufficient for extracting audience emotions; combining facial expression with other responses, such as sound and audience behavior, would yield better results.
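A minimal end-to-end sketch of the six-step framework, written in Python with scikit-learn, is given below. It is not the authors' implementation: the synthetic data, the 0.05 significance threshold, the hidden-layer size, and the train/test/validation split ratios are illustrative assumptions. Only the overall flow follows the abstract: baseline-relative changes of 64 facial features, statistical selection of independent variables, neural network training, and validation against a multiple regression model.

# Sketch of the six-step framework described in the abstract, not the authors'
# original code. Data, thresholds, and network size are illustrative assumptions.
import numpy as np
from sklearn.feature_selection import f_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Steps 1-2 (assumed data): 64 facial-feature values per observation, expressed
# as the change from each visitor's neutral (non-excited) baseline.
n_samples, n_features = 500, 64
baseline = rng.normal(size=(n_samples, n_features))
stimulated = baseline + rng.normal(scale=0.3, size=(n_samples, n_features))
X = stimulated - baseline                      # feature change from the neutral state

# Step 3 (assumed targets): valence and arousal scores on the V-A model.
y = rng.uniform(-1, 1, size=(n_samples, 2))    # columns: [valence, arousal]

# Step 4: keep only features statistically related to either target (p < 0.05).
p_valence = f_regression(X, y[:, 0])[1]
p_arousal = f_regression(X, y[:, 1])[1]
selected = (p_valence < 0.05) | (p_arousal < 0.05)
X_sel = X[:, selected] if selected.any() else X

# Step 5: split the data and train an artificial neural network.
X_tmp, X_valid, y_tmp, y_valid = train_test_split(X_sel, y, test_size=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
ann.fit(X_train, y_train)

# Step 6: validate on held-out data and compare with multiple regression.
mlr = LinearRegression().fit(X_train, y_train)
print("ANN validation MSE:", mean_squared_error(y_valid, ann.predict(X_valid)))
print("MLR validation MSE:", mean_squared_error(y_valid, mlr.predict(X_valid)))

In this sketch the neural network and the multiple regression baseline are evaluated on the same validation split, mirroring the comparison reported in the abstract; with real facial-feature data the feature selection step would typically retain far fewer than 64 variables.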
Keywords
Emotion Determination Model; Interactive Exhibition; Valence-Arousal Model; Facial Features; Artificial Neural Network;