SNU Videome Project: Human-Level Video Learning Technology

  • Published: 2011.02.28

Acknowledgement

Supported by: National Research Foundation of Korea (한국연구재단)

References

  1. Turing, A. M., Computing machinery and intelligence, Mind, 59: 433-460, 1950.
  2. Cassimatis, N. L., Mueller, E. T., & Winston, P. H., Achieving human-level intelligence through integrated systems and research: introduction to this special issue, AI Magazine, 27(2): 12-14, 2006.
  3. Langley, P., Cognitive architectures and general intelligent systems, AI Magazine, 27(2): 33-44, 2006.
  4. McCarthy, J., From here to human-level AI, Artificial Intelligence, 171: 1174-1182, 2007. https://doi.org/10.1016/j.artint.2007.10.009
  5. McClelland, J. L., Is a machine realization of truly human-like intelligence achievable?, Cognitive Computation, 1(1): 4-16, 2009. https://doi.org/10.1007/s12559-008-9001-8
  6. Zhang, B.-T., Hypernetworks: A molecular evolutionary architecture for cognitive learning and memory, IEEE Computational Intelligence Magazine, 3(3): 49-63, 2008.
  7. Zhang, B.-T., Cognitive learning and the multimodal memory game: Toward human-level machine learning, IEEE World Congress on Computational Intelligence (WCCI-2008), pp. 3261-3267, 2008.
  8. Zhang, B.-T., Teaching an agent by playing a multimodal memory game: challenges for machine learners and human teachers, AAAI 2009 Spring Symposium: Agents that Learn from Human Teachers, pp. 144-149, 2009.
  9. Thrun, S., A personal account of the development of Stanley, the robot that won the DARPA Grand Challenge, AI Magazine, 27(4): 69-82, 2006.
  10. Bishop, C., Pattern Recognition and Machine Learning, Springer, 2006.
  11. Duda, R. O., Hart, P. E., & Stork, D. G., Pattern Classification, Wiley, 2000.
  12. Zhang, B.-T., Next-generation machine learning technology, 정보과학회지 (Communications of KIISE), 25(3): 96-107, March 2007 (in Korean).
  13. Michalski, R. S., Carbonell, J. G., & Mitchell, T. M. (Eds.), Machine Learning: An Artificial Intelligence Approach, Springer, 1984.
  14. Rumelhart, D. E. & McClelland, J. L. (Eds.), Parallel Distributed Processing, Vol. I, MIT Press, 1987.
  15. Aarts, E. & Korst, J., Simulated Annealing and Boltzmann Machines: A Stochastic Approach to Combinatorial Optimization and Neural Computing, Wiley, 1989.
  16. Neal, R. M., Probabilistic Inference Using Markov Chain Monte Carlo Methods, Technical Report CRG-TR-93-1, Dept. of Computer Science, University of Toronto, 1993.
  17. Jordan, M. I., Learning in Graphical Models, MIT Press, 1998.
  18. Schölkopf, B. & Smola, A., Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2001.
  19. MacKay, D. J. C., Information Theory, Inference, and Learning Algorithms, Cambridge University Press, 2003.
  20. Koller, D. & Friedman, N., Probabilistic Graphical Models: Principles and Techniques, MIT Press, 2009.
  21. Hjort, N. L., Holmes, C., Müller, P., & Walker, S. G. (Eds.), Bayesian Nonparametrics, Cambridge University Press, 2010.
  22. Zhang, B.-T., Dynamic Learning: Architectures and Algorithms, Graduate Course Notes, School of Computer Science and Engineering, Seoul National University, http://bi.snu.ac.kr/Courses/g-ai10f/g-dl10f.html, 2010.
  23. Minsky, M., The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind, Simon & Schuster, 2007.
  24. Rumelhart, D. E., Brain style computation: learning and generalization, In: An Introduction to Neural and Electronic Networks, Academic Press, 1990.
  25. Hinton, G. E. & Salakhutdinov, R. R., Reducing the dimensionality of data with neural networks, Science, 313(5786): 504-507, 2006. https://doi.org/10.1126/science.1127647
  26. Rudy, J. W., The Neurobiology of Learning and Memory, Sinauer, 2008.
  27. van Hemmen, J. L. & Sejnowski, T. J., 23 Problems in Systems Neuroscience, Oxford University Press, 2006.
  28. Bear, M. F., Connors, B. W., & Paradiso, M. A., Neuroscience: Exploring the Brain, Lippincott Williams & Wilkins, 2007.
  29. Pomerantz, J. R., Topics in Integrative Neuroscience, Cambridge University Press, 2008.
  30. Gazzaniga, M. S., Ivry, R. B., & Mangun, G. R., Cognitive Neuroscience: The Biology of the Mind, Norton, 2008.
  31. Sporns, O., Networks of the Brain, MIT Press, 2010.
  32. Zhang, B.-T., Nano-bio intelligent molecular computers: where computer science meets biotechnology, nanotechnology, and cognitive brain science, 정보과학회지 (Communications of KIISE), 23(5): 41-56, May 2005 (in Korean).
  33. Sendhoff, B., Koerner, E., Sporns, O., Ritter, H., & Doya, K., Creating Brain-Like Intelligence, Springer, 2009.
  34. Doya, K., Ishii, S., Pouget, A., & Rao, R. (Eds.), Bayesian Brain: Probabilistic Approaches to Neural Coding, MIT Press, 2007.
  35. Chater, N. & Oaksford, M. (Eds.), The Probabilistic Mind: Prospects for Bayesian Cognitive Science, Oxford University Press, 2008.
  36. Griffiths, T. L., Chater, N., Kemp, C., Perfors, A., & Tenenbaum, J. B., Probabilistic models of cognition: Exploring representations and inductive biases, Trends in Cognitive Sciences, 14: 357-364, 2010.
  37. Eichenbaum, H., Learning & Memory, Norton, 2008.
  38. Spivey, M., The Continuity of Mind, Oxford University Press, 2008.
  39. Lefrancois, G. R., Theories of Human Learning, Thomson, 2006.
  40. Squire, L. R. & Kandel, E. R., Memory: From Mind to Molecules, Roberts & Company, 2009.
  41. van Campen, C., The Hidden Sense: Synesthesia in Art and Science, MIT Press, 2007.
  42. Fauconnier, G. & Turner, M., The Way We Think: Conceptual Blending and the Mind's Hidden Complexities, Basic Books, 2002.
  43. Schonfeld, D., Shan, C., Tao, D., & Wang, L., Video Search and Mining, Springer, 2010.
  44. Zheng, N. & Xue, J., Statistical Learning and Pattern Analysis for Image and Video Processing, Springer, 2009.
  45. Yuille, A. & Kersten, D., Vision as Bayesian inference: analysis by synthesis?, Trends in Cognitive Sciences, 10(7): 301-308, 2006. https://doi.org/10.1016/j.tics.2006.05.002
  46. Chater, N. & Manning, C. D., Probabilistic models of language processing and acquisition, Trends in Cognitive Sciences, 10(7): 335-344, 2006. https://doi.org/10.1016/j.tics.2006.05.006
  47. Fareed, U. & Zhang, B.-T., MMG: A learning game platform for understanding and predicting human recall memory, Lecture Notes in Artificial Intelligence: PKAW-2010, 6232: 300-309, 2010.
  48. Ha, J.-W., Kim, B.-H., Lee, B., & Zhang, B.-T., Layered hypernetwork models for cross-modal associative text and image keyword generation in multimodal information retrieval, Lecture Notes in Artificial Intelligence: PRICAI-2010, 6230: 76-87, 2010.
  49. Smith, L. B. & Yu, C., Infants rapidly learn word-referent mappings via cross-situational statistics, Cognition, 106: 333-338, 2008.
  50. Frank, M. C., Slemmer, J. A., Marcus, G., & Johnson, S. P., Information from multiple modalities helps five-month-olds learn abstract rules, Developmental Science, 12: 504-509, 2009. https://doi.org/10.1111/j.1467-7687.2008.00794.x
  51. Lee, J.-H., Lee, E.-S., & Zhang, B.-T., A hypernetwork memory-based model of infant language learning, 정보과학회논문지: 컴퓨팅의 실제 및 레터 (Journal of KIISE: Computing Practices and Letters), 15(12): 983-987, 2009 (in Korean).
  52. Heo, M.-O., Kang, M.-G., & Zhang, B.-T., Visual query expansion via incremental hypernetwork models of image and text, Lecture Notes in Artificial Intelligence: PRICAI-2010, 6230: 88-99, 2010.
  53. Luck, S. J. & Hollingworth, A. (Eds.), Visual Memory, Oxford University Press, 2008.