Exploratory Understanding of the Uncanny Valley Phenomena Based on Event-Related Potential Measurement

  • Kim, Dae-Gyu (Department of Psychology, Chung-Ang University) ;
  • Kim, Hye-Yun (Department of Psychology, Chung-Ang University) ;
  • Kim, Giyeon (Department of Psychology, Chung-Ang University) ;
  • Jang, Phil-Sik (Department of IT & Logistics, Sehan University) ;
  • Jung, Woo Hyun (Department of Psychology, Chungbuk National University) ;
  • Hyun, Joo-Seok (Department of Psychology, Chung-Ang University)
  • Received : 2015.11.23
  • Accepted : 2016.03.07
  • Published : 2016.03.31

Abstract

The uncanny valley refers to the phenomenon in which affinity for a human-like object drops sharply once the object becomes extremely similar to a human, and it has been hypothesized to arise from the cognitive load imposed by a categorical conflict over the uncanny object. Based on this hypothesis, the present study ran an oddball task in which each trial displayed a non-human, human, or uncanny face, and measured event-related potentials (ERPs) for each trial condition. In Experiment 1, which used schematic faces, a non-human face was presented in 80% of the trials (standard), a human face in another 10% (target), and an uncanny face in the remaining 10% (uncanny). Participants' responses were relatively inaccurate and delayed in both the target and uncanny oddball trials, but neither the P3 nor the N170 component differed across the three trial conditions. Experiment 2 used realistic 3-D rendered faces to increase the degree of categorical conflict and found behavioral results similar to those of Experiment 1. However, the N170 peak amplitudes in the target and uncanny trials were larger than in the standard trials, and the P3 mean amplitudes in the target and uncanny trials were comparable to each other but larger than in the standard trials. P3 latencies increased in the order of standard, target, and uncanny trials. The changes in the N170 and P3 patterns across the two experiments appear to arise from a categorical conflict: the oddball task requires the uncanny face to be categorized as a non-target even though it is perceived as a human face. The increase in cognitive load observed when realism was added to the uncanny face further suggests that the cognitive load presumed to underlie the uncanny experience depends on the amount of conflicting category information that accompanies added stimulus complexity.

The uncanny valley phenomenon refers to the situation in which negative affect arises toward an object that resembles a human, and one possible cause is the cognitive load incurred while categorizing the uncanny stimulus. Based on this cognitive-load hypothesis, the present study ran an oddball task with non-human, human, and uncanny faces and observed the event-related potentials evoked by the three types of faces. Experiment 1 used schematic faces: a non-human face was presented in 80% of the trials (standard trials), a human face in 10% (target trials), and an uncanny face in the remaining 10% (uncanny trials). Responses in the target and uncanny trials, which served as the oddball trials, were relatively inaccurate and their reaction times delayed, but differences in the P3 and N170 components across the three trial types were not evident. Experiment 2 used face stimuli whose realism was increased through 3-D rendering so as to heighten the degree of categorical conflict; at the behavioral level the results were similar to Experiment 1. In contrast, the N170 peak potentials in the target and uncanny trials were clearly larger than in the standard trials, and for the P3 component the amplitude was lowest in the standard trials, with no difference between the uncanny and target trials. The latency of the P3 component was also delayed in the order of standard, target, and uncanny trials. The change in the N170 and P3 patterns across Experiments 1 and 2 is presumably due to the categorization conflict that arises because the uncanny face is identified as a human face at the sensory level, yet the oddball task forces it to be categorized as a non-target. Furthermore, the finding that the cognitive load from categorization conflict increased when the more realistic uncanny face stimuli were used suggests that the cognitive load presumed to underlie the uncanny valley phenomenon may be driven by the increase in conflicting information that accompanies greater stimulus complexity.
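
To make the oddball design and the ERP measures described above concrete, the following is a minimal Python/NumPy sketch, not the authors' analysis pipeline: the sampling rate, epoch window, N170 and P3 measurement windows, variable names, and the synthetic noise data are all assumptions chosen for illustration. It draws an oddball sequence with the reported 80%/10%/10% trial proportions and then computes, per condition, the N170 peak amplitude, the P3 mean amplitude, and the P3 peak latency.

```python
# Illustrative sketch only: assumed values and synthetic data, not the paper's pipeline.
import numpy as np

rng = np.random.default_rng(0)

# --- Oddball trial sequence: 80% standard, 10% target, 10% uncanny ---------------
n_trials = 400
conditions = rng.choice(
    ["standard", "target", "uncanny"], size=n_trials, p=[0.8, 0.1, 0.1]
)

# --- Synthetic single-trial EEG epochs (trials x time), -200..800 ms --------------
srate = 500                                    # Hz (assumed)
times = np.arange(-0.2, 0.8, 1.0 / srate)      # seconds relative to face onset

def fake_epoch():
    # Stand-in for one recorded, baseline-corrected epoch.
    return rng.normal(0.0, 1.0, size=times.size)

epochs = {
    cond: np.vstack([fake_epoch() for _ in range(np.sum(conditions == cond))])
    for cond in ("standard", "target", "uncanny")
}

# --- Condition-wise ERPs and the measures reported in the abstract ----------------
def erp_measures(epoch_matrix):
    erp = epoch_matrix.mean(axis=0)                       # average over trials
    n170_win = (times >= 0.13) & (times <= 0.20)          # assumed N170 window
    p3_win = (times >= 0.30) & (times <= 0.60)            # assumed P3 window
    n170_peak = erp[n170_win].min()                       # N170 is a negative peak
    p3_mean = erp[p3_win].mean()                          # P3 mean amplitude
    p3_latency = times[p3_win][np.argmax(erp[p3_win])]    # latency of the P3 maximum
    return n170_peak, p3_mean, p3_latency

for cond, data in epochs.items():
    n170, p3_amp, p3_lat = erp_measures(data)
    print(f"{cond:8s}  N170 peak {n170:+.2f}  P3 mean {p3_amp:+.2f}  "
          f"P3 latency {p3_lat * 1000:.0f} ms")
```

In the actual experiments these measures would be computed from recorded EEG epochs time-locked to face onset rather than from random noise, but the windowed peak/mean/latency logic shown here is the same.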

