Online Reinforcement Learning to Search the Shortest Path in Maze Environments

  • 김병천 (Dept. of Computer Engineering, Hankyong National University) ;
  • 김삼근 (Dept. of Computer Engineering, Hankyong National University) ;
  • 윤병주 (Dept. of Computer Engineering, Myongji University)
  • Published : 2002.04.01

Abstract

Reinforcement learning is a learning method in which an agent learns through trial-and-error while interacting with a dynamic environment; it is classified into online reinforcement learning and delayed reinforcement learning. In this paper, we propose an online reinforcement learning system (ONRELS : ONline REinforcement Learning System) that can quickly search the shortest path in maze environments. At the current state, ONRELS updates the estimated value of every selectable (state, action) pair before making a state transition. ONRELS first compresses the state space of the maze environment and then learns by interacting with the compressed environment through trial-and-error. Experiments show that ONRELS can find the shortest path faster than Q-learning using the TD-error and $Q(\lambda)$-learning using $TD(\lambda)$ in maze environments.
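The abstract only outlines the update scheme, so the following Python sketch is an illustration rather than the authors' implementation: at each step the agent refreshes the value estimate of every selectable (state, action) pair at the current state before transitioning, as the abstract describes. The toy maze layout, goal reward of 1, discount factor, one-step lookahead update, and epsilon-greedy exploration are all assumptions made for illustration; the paper's actual update rule and state-space compression are not specified in the abstract.

```python
import random

GAMMA = 0.9      # discount factor (assumed for illustration)
EPSILON = 0.1    # exploration rate for trial-and-error action selection (assumed)

# Toy maze: 0 = free cell, 1 = wall.  The goal is the bottom-right cell.
MAZE = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
GOAL = (3, 3)
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def actions(s):
    """Indices of moves from s that stay inside the maze and off walls."""
    acts = []
    for i, (dr, dc) in enumerate(MOVES):
        r, c = s[0] + dr, s[1] + dc
        if 0 <= r < len(MAZE) and 0 <= c < len(MAZE[0]) and MAZE[r][c] == 0:
            acts.append(i)
    return acts

def step(s, a):
    """Deterministic transition; reward 1 only on reaching the goal."""
    dr, dc = MOVES[a]
    s2 = (s[0] + dr, s[1] + dc)
    return s2, (1.0 if s2 == GOAL else 0.0)

def episode(q, start=(0, 0), max_steps=200):
    s = start
    for _ in range(max_steps):
        if s == GOAL:
            break
        # Idea taken from the abstract: refresh the estimate of *every*
        # selectable (state, action) pair at the current state before moving.
        for a in actions(s):
            s2, r = step(s, a)
            best_next = 0.0 if s2 == GOAL else max(q.get((s2, b), 0.0) for b in actions(s2))
            q[(s, a)] = r + GAMMA * best_next
        # Then make the state transition (epsilon-greedy trial-and-error).
        acts = actions(s)
        if random.random() < EPSILON:
            a = random.choice(acts)
        else:
            a = max(acts, key=lambda b: q.get((s, b), 0.0))
        s, _ = step(s, a)
    return q

q = {}
for _ in range(50):
    episode(q)
print(sorted(q.items()))
```

Compared with standard Q-learning, which updates only the single (state, action) pair actually taken, sweeping all selectable pairs at the current state propagates value information more aggressively per step, which is consistent with the abstract's claim of faster shortest-path search.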

Keywords

References

  1. M. L. Minsky, Theory of Neural-Analog Reinforcement Systems and Application to the Brain-Model Problem, Ph.D. Thesis, Princeton University, Princeton, 1954
  2. M. L. Minsky, 'Steps toward artificial intelligence,' In Proceedings of the Institute of Radio Engineers, 49, pp.8-30, 1961
  3. A. G. Barto, D. A. White and D. A. Sofge, 'Reinforcement learning and adaptive critic methods,' Handbook of Intelligent Control, pp.469-491, 1992
  4. A. W. Moore and C. G. Atkeson, 'Prioritized sweeping: Reinforcement learning with less data and less real time,' Machine Learning, 13, pp.103-130, 1993 https://doi.org/10.1007/BF00993104
  5. C. W. Anderson, 'Learning to control an inverted pendulum using neural networks,' IEEE Control Systems Magazine, pp.31-37, 1989 https://doi.org/10.1109/37.24809
  6. F. S. Ho, 'Traffic flow modeling and control using artificial neural networks,' IEEE Control Systems, 16(5), pp.16-26, 1996 https://doi.org/10.1109/37.537205
  7. R. H. Crites and A. G. Barto, 'Improving Elevator Performance Using Reinforcement Learning,' Advances in Neural Information Processing Systems, 8, MIT Press, Cambridge MA, 1996
  8. G. Rummery and M. Niranjan, 'On-line Q-learning using connectionist systems,' Technical Report CUED/F-INFENG-TR 166, Cambridge University, U.K., 1994
  9. J. Peng and R. Williams, 'Incremental multi-step Q-learning,' Machine Learning, 22, pp.283-290, 1996 https://doi.org/10.1023/A:1018076709321
  10. P. Dayan, 'Navigating through temporal difference,' In Advances in Neural Information Processing Systems, 3, Morgan Kaufmann, 1991
  11. R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, MIT Press, 1998
  12. G. A. Rummery, Problem Solving with Reinforcement Learning, Ph.D. thesis, Cambridge University, 1995
  13. P. Cichosz, 'Truncating temporal differences: On the efficient implementation of TD($\lambda$) for reinforcement learning,' Journal of Artificial Intelligence Research, 2, pp.287-318, 1995
  14. S. P. Singh and R. S. Sutton, 'Reinforcement Learning with Replacing Eligibility Traces,' Machine Learning, 22, pp.123-158, 1996 https://doi.org/10.1007/BF00114726