http://dx.doi.org/10.5392/JKCA.2010.10.12.027

Improvement of Environment Recognition using Multimodal Signal  

Park, Jun-Qyu (School of Electronics and Computer Engineering, Chonnam National University)
Baek, Seong-Joon (School of Electronics and Computer Engineering, Chonnam National University)
Abstract
In this study, we conducted classification experiments with a Gaussian Mixture Model (GMM) on combined features extracted from a microphone, a gyro sensor, and an acceleration sensor in nine different environment types. Previous context-aware studies recognized the environmental situation mainly from environmental sound captured by a microphone, but recognition was limited by the structural characteristics of environmental sound, which is composed of various combinations of noise. We therefore proposed methods that additionally apply gyro-sensor and acceleration-sensor data in order to reflect the movement of the recognizing agent. According to the experimental results, combining acceleration-sensor data with the existing environmental-sound features improves recognition performance by more than 5% compared with existing methods that use only environmental-sound features obtained from the microphone.
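The classification scheme the abstract describes can be sketched as fitting one GMM per environment class on concatenated sensor features and picking the class with the highest log-likelihood. The sketch below uses scikit-learn with synthetic data; the feature dimensions (13 audio features plus 3-axis gyro and 3-axis acceleration statistics), the number of mixture components, and the class separation are all assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Assumed dimensions: 13 audio features + 3-axis gyro + 3-axis acceleration.
N_CLASSES, N_TRAIN, DIM = 3, 200, 13 + 3 + 3

# Synthetic, well-separated per-class data standing in for real sensor features.
train = {c: rng.normal(loc=3 * c, scale=1.0, size=(N_TRAIN, DIM))
         for c in range(N_CLASSES)}

# Fit one GMM per environment class (4 components is an arbitrary choice).
models = {c: GaussianMixture(n_components=4, random_state=0).fit(X)
          for c, X in train.items()}

def classify(x):
    """Return the class whose GMM assigns the highest average log-likelihood."""
    scores = {c: m.score(x.reshape(1, -1)) for c, m in models.items()}
    return max(scores, key=scores.get)

# Classify a fused feature vector drawn near class 1's distribution.
sample = rng.normal(loc=3, scale=1.0, size=DIM)
print(classify(sample))
```

In this setup, adding the motion-sensor dimensions simply widens the feature vector each GMM is trained on, which is one plausible reading of how the acoustic and inertial features were combined.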
Keywords
Context Aware; Acceleration Sensor; Gyro Sensor; GMM
References
1 C. M. Bishop, Neural networks for pattern recognition, Oxford University Press, UK, 1995.
2 R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, John Wiley & Sons, Inc., 2001.
3 http://www.teco.edu/tea/
4 L. Ma, B. P. Milner, and D. Smith, “Acoustic environment classification,” ACM Transactions on Speech and Language Processing, Vol.3, No.2, pp.1-22, 2006.
5 Y. Toyoda, J. Huang, S. Ding, and Y. Liu, “Environmental sound recognition by multilayered neural networks,” International Conference on Computer and Information Technology, pp.123-127, 2004.
6 L. Couvreur and M. Laniray, “Automatic noise recognition in urban environments based on artificial neural networks and hidden Markov models,” InterNoise, Prague, Czech Republic, pp.1-8, 2004.
7 N. Sawhney, “Situational awareness from environmental sounds,” MIT Media Lab. Technical Report, 1997.
8 S. Chu, S. Narayanan, C.-C. J. Kuo, and M. J. Mataric, “Where am I? Scene recognition for mobile robots using audio features,” in Proc. ICME, 2006.
9 A. Eronen, V. Peltonen, J. Tuomi, A. Klapuri, S. Fagerlund, T. Sorsa, G. Lorho, and J. Huopaniemi, “Audio-based context recognition,” IEEE Trans. on Audio, Speech, and Language Processing, Vol.14, No.1, pp.321-329, 2006.
10 R. G. Malkin and A. Waibel, “Classifying user environment for mobile applications using linear autoencoding of ambient audio,” in Proc. ICASSP, 2005.
11 S. Chu, S. Narayanan, and C.-C. Jay Kuo, “Environmental Sound Recognition With Time-Frequency Audio Features,” IEEE Trans. on Audio, Speech, and Language Processing, Vol.17, No.6, pp.1-16, 2009.
12 A. Kobayashi, T. Iwamoto, and S. Nishiwama, “UME: Method for Estimating User Movement Using an Acceleration Sensor,” in Proc. SAINT, 2008.