[1] R. J. Mooney, "Learning to connect language and perception," Proceedings of the 23rd AAAI Conference on Artificial Intelligence, Chicago, pp. 1598-1601, July 2008.
[2] J. R. Anderson, D. Bothell, M. D. Byrne, S. Douglass, C. Lebiere, and Y. Qin, "An integrated theory of the mind," Psychological Review, vol. 111, no. 4, pp. 1036-1060, 2004.
[3] D. E. Kieras, S. D. Wood, and D. E. Meyer, "Predictive engineering models based on the EPIC architecture for a multimodal high-performance human-computer interaction task," ACM Transactions on Computer-Human Interaction, vol. 4, pp. 230-275, 1997.
[4] S. D. Lathrop and J. E. Laird, "Towards incorporating visual imagery into a cognitive architecture," Proceedings of the Eighth International Conference on Cognitive Modeling, Ann Arbor, 2007.
[5] 정만태, "2020 Vision and Strategy for the Robot Industry (로봇산업의 2020 비젼과 전략)," Policy Report 2007-63, 산업연구원 (Korea Institute for Industrial Economics and Trade), Aug. 2007.
[6] D. Roy, "Semiotic schemas: A framework for grounding language in action and perception," Artificial Intelligence, vol. 167, pp. 170-205, 2005.
[7] M. Levit and D. Roy, "Interpretation of spatial language in a map navigation task," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 37, no. 3, pp. 667-679, 2007.
[8] P. Gorniak and D. Roy, "Grounded semantic composition for visual scenes," Journal of Artificial Intelligence Research, vol. 21, pp. 429-470, 2004.
[9] J. M. Siskind, "Grounding the lexical semantics of verbs in visual perception using force dynamics and event logic," Journal of Artificial Intelligence Research, vol. 15, pp. 31-90, 2001.