Motion Sickness Measurement and Analysis in Virtual Reality using Deep Neural Networks Algorithm

  • Received : 2019.02.19
  • Accepted : 2019.02.28
  • Published : 2019.03.01

Abstract

Cybersickness is a dizziness-like symptom that occurs while experiencing Virtual Reality (VR), presumed to arise mainly from a mismatch between the sensory and cognitive systems. However, since these systems cannot be measured objectively, cybersickness itself is difficult to measure, and a variety of measurement methodologies have therefore been studied. Previous studies have collected questionnaire responses or analyzed electroencephalogram (EEG) data with classical machine learning algorithms; questionnaire-based approaches lack objectivity, however, and classical machine learning has so far achieved only limited accuracy. In this work, we apply a Deep Neural Network (DNN) deep learning algorithm to EEG data for objective cybersickness measurement. We also propose a data preprocessing scheme and network structures that achieve high performance when training deep learning models on EEG data. Our approach measures cybersickness with an accuracy of up to 98.88%. In addition, we analyze the characteristics of the videos that induce cybersickness by examining the offending video segments from our experiments. Cybersickness typically arises in scenes with strong vertical motion, continuous and rapid scene changes, and situations in which the viewer floats in mid-air; we also find that it occurs under unusually persistent changes in brightness, such as a room light repeatedly switching on and off.

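The abstract summarizes the pipeline (EEG preprocessing followed by a DNN classifier) without giving its details, so the following is only a minimal sketch of what such a system can look like. Everything here is an illustrative assumption rather than the authors' published configuration: the 14-channel Emotiv-style input, the theta/alpha/beta log band-power features, the 128 Hz sampling rate, the layer sizes, and the binary sick/not-sick labels.

```python
# A minimal sketch, NOT the paper's actual network: the authors do not publish
# their architecture here, so all sizes, bands, and labels below are assumptions.
import numpy as np
import torch
import torch.nn as nn

N_CHANNELS = 14          # e.g. a 14-channel consumer EEG headset (assumption)
FS = 128                 # sampling rate in Hz (assumption)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # assumed bands

def band_powers(window: np.ndarray) -> np.ndarray:
    """Turn one EEG window (channels x samples) into per-channel log band powers."""
    freqs = np.fft.rfftfreq(window.shape[1], d=1.0 / FS)
    psd = np.abs(np.fft.rfft(window, axis=1)) ** 2
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
             for lo, hi in BANDS.values()]
    return np.log(np.concatenate(feats) + 1e-12)  # log power, a common EEG feature

class CybersicknessDNN(nn.Module):
    """Fully connected classifier: band-power features -> sick / not-sick logits."""
    def __init__(self, n_features: int = N_CHANNELS * len(BANDS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 2),   # two classes: cybersickness vs. none (assumption)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Toy training loop on random data, just to show the wiring end to end.
model = CybersicknessDNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x = torch.stack([torch.from_numpy(band_powers(np.random.randn(N_CHANNELS, FS))).float()
                 for _ in range(32)])
y = torch.randint(0, 2, (32,))          # placeholder labels, not real ratings
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```

In a real experiment the random windows and labels would be replaced by recorded EEG segments aligned with the participants' cybersickness reports, and the preprocessing and layer sizes tuned on that data.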


Cited by

  1. A Deep Learning Model for Emotion Classification of Virtual-Reality-Based 3D Spaces, vol.36, no.4, 2019, https://doi.org/10.5659/jaik_pd.2020.36.4.41