References
- Alan Wexelblat, 'An approach to natural gesture in virtual environments,' ACM Transactions on Computer-Human Interaction (TOCHI), 2, 179-200, 1995 https://doi.org/10.1145/210079.210080
- Cohen, P. R. and Johnston, M., 'QuickSet: Multimodal Interaction for Simulation Set-up and Control,' in Proceedings of the Fifth Conference on Applied Natural Language Processing, 20-24, 1997
- Adam Cheyer, Luc Julia, 'MVIEWS: Multimodal Tools for the Video Analyst,' in Proceedings of IUI '98, 55-62, 1998
- Somsak Walairacht, '4 + 4 fingers manipulating virtual objects in mixed-reality environment,' Presence: Teleoperators and Virtual Environments, 11, 2002
- Buchmann, V., Violich, S., Billinghurst, M. and Cockburn, A., 'FingARtips: gesture based direct manipulation in Augmented Reality,' in Proceedings of the 2nd International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia (GRAPHITE 2004), ACM Press, 212-221, 2004
- DiZio, P., 'Proprioceptive Adaptation and Aftereffects,' in Handbook of Virtual Environments, 751-771, 2002
- Klatzky, R. and Lederman, S., 'Touch,' in Handbook of Psychology, Vol. 4, 147-176, 2003
- Andrea Corradini, Richard M. Wesson, Philip R. Cohen, 'A Map-based System Using Speech and 3D Gestures for Pervasive Computing,' in Proceedings of the 4th IEEE International Conference on Multimodal Interfaces, IEEE Computer Society, 191, 2002
- Oviatt, S. L., 'Multimodal Interfaces,' in The Human-Computer Interaction Handbook, Ed. by J. Jacko & A. Sears, Lawrence Erlbaum: New Jersey, 2002
- John R. Anderson, Cognitive Psychology and Its Implications, Korean translation by Lee Young-ae, Eulyoo Publishing, 88-91, 1987
- Koons, D. B., Sparrell, C. J. and Thorisson, K. R., 'Integrating simultaneous input from speech, gaze, and hand gestures,' in M. Maybury (Ed.), Intelligent Multimodal Interfaces, 257-276, Menlo Park, CA: AAAI Press/MIT Press, 1993
- Qiaohui Zhang, Atsumi Imamiya, Kentaro Go, Xiaoyang Mao, 'Gaze and Speech Multimodal Interface,' in Int. Conf. on Distributed Computing Systems Workshops (ICDCSW '04), 208-214, 2004
- Oviatt, S. L., 'Mutual disambiguation of recognition errors in a multimodal architecture,' in Proc. CHI '99 Human Factors in Computing Systems Conf., Pittsburgh, PA, 576-583, 1999
- Oviatt, S. L., 'User-Centered Modeling and Evaluation of Multimodal Interfaces,' in Proc. of the IEEE, 91(9), 1457-1468, 2003
- W3C, 'Multimodal Interaction Requirements,' W3C Note, 2003, http://www.w3.org/TR/2003/NOTE-mmi-reqs-20030108/
- W3C, 'Multimodal Interaction Framework,' http://www.w3.org/TR/mmi-framework/
- Jennifer L. Leopold and Allen L. Ambler, 'Keyboardless Visual Programming Using Voice, Handwriting, and Gesture,' in Proc. of the 1997 IEEE Symposium on Visual Languages (VL '97), 28-35, 1997
- Hauptmann, A. G. and McAvinney, P., 'Gesture with speech for graphics manipulation,' Int. J. Man-Machine Studies, 38, 231-249, 1993 https://doi.org/10.1006/imms.1993.1011
- Oviatt, S., DeAngeli, A. and Kuhn, K., 'Integration and synchronization of input modes during multimodal human-computer interaction,' in Proc. Conf. Human Factors in Computing Systems (CHI '97), Atlanta, GA, 415-422, 1997
- Cohen, P. R., Dalrymple, M., Pereira, F. C. N., Sullivan, J. W., Gargan, Jr. R. A., Schlossberg, J. L. and Tyler, S. W., 'Synergistic use of direct manipulation and natural language,' in Proc. Conf. Human Factors in Computing Systems (CHI '89), Austin, TX, 227-233, 1989
- Sharma, R., 'Toward multimodal human-computer interface,' in Proc. IEEE, 86(5), 853-869, 1998
- Anthony G. Greenwald, 'A Reminder about procedures needed to reliably produce perfect timesharing: Comment on Lien, McCann, Ruthruff, and Proctor,' in Journal of Experimental Psychology: Human Perception and Performance, 31(1), 221-225, 2005 https://doi.org/10.1037/0096-1523.31.1.221