• Title/Summary/Keyword: Complex Network Model (복잡계망 모델)

Search Result 1

Efficient Approximation of State Space for Reinforcement Learning Using Complex Network Models (복잡계망 모델을 사용한 강화 학습 상태 공간의 효율적인 근사)

  • Yi, Seung-Joon; Eom, Jae-Hong; Zhang, Byoung-Tak
    • Journal of KIISE: Software and Applications, v.36 no.6, pp.479-490, 2009
  • A number of temporal abstraction approaches have been suggested so far to handle the high computational complexity of Markov decision problems (MDPs). Although the structure of a temporal abstraction can significantly affect the efficiency of solving the MDP, to our knowledge none of the current temporal abstraction approaches explicitly considers the relationship between topology and efficiency. In this paper, we first show that a topological measure from the complex network literature, the mean geodesic distance, can reflect the efficiency of solving an MDP. Based on this, we develop an incremental method that systematically constructs temporal abstractions using a network model that guarantees a small mean geodesic distance. We test our algorithm on a realistic 3D game environment, and experimental results show that our model exhibits subpolynomial growth of the mean geodesic distance with respect to problem size, which enables efficient solving of the resulting MDP.
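
To make the mean geodesic distance concrete, the short Python sketch below computes it for a toy state-transition graph by breadth-first search. The graph, its node labels, and the added "shortcut" edge are hypothetical illustrations (not taken from the paper); the shortcut only stands in for the way a temporal abstraction can shorten paths through an MDP's state space.

    # Illustrative sketch (not the authors' code): mean geodesic distance of a
    # small undirected state-transition graph, computed with plain BFS.
    from collections import deque
    from itertools import combinations

    def shortest_path_length(adj, source, target):
        # Breadth-first search for the geodesic (fewest-hop) distance.
        if source == target:
            return 0
        seen = {source}
        frontier = deque([(source, 0)])
        while frontier:
            node, dist = frontier.popleft()
            for nbr in adj[node]:
                if nbr == target:
                    return dist + 1
                if nbr not in seen:
                    seen.add(nbr)
                    frontier.append((nbr, dist + 1))
        return float("inf")  # target unreachable from source

    def mean_geodesic_distance(adj):
        # Average shortest-path length over all distinct node pairs.
        pairs = list(combinations(adj, 2))
        return sum(shortest_path_length(adj, u, v) for u, v in pairs) / len(pairs)

    # Hypothetical 6-state graph: a simple chain of states, plus a variant with
    # one "shortcut" edge mimicking the effect of a temporal abstraction.
    chain = {i: set() for i in range(6)}
    for i in range(5):
        chain[i].add(i + 1)
        chain[i + 1].add(i)

    with_shortcut = {k: set(v) for k, v in chain.items()}
    with_shortcut[0].add(5)
    with_shortcut[5].add(0)

    print(mean_geodesic_distance(chain))          # ~2.33 for the plain chain
    print(mean_geodesic_distance(with_shortcut))  # 1.8: the shortcut lowers the mean

The drop in the mean after adding the single shortcut edge illustrates the quantity the abstract argues for: abstractions that keep the mean geodesic distance small keep all states reachable in few steps, which is what makes the resulting MDP cheaper to solve.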