
Online Evolution for Cooperative Behavior in Group Robot Systems  

Lee, Dong-Wook (Division for Applied Robot Technology, Korea Institute of Industrial Technology)
Seo, Sang-Wook (School of Electrical and Electronics Engineering, Chung-Ang University)
Sim, Kwee-Bo (School of Electrical and Electronics Engineering, Chung-Ang University)
Publication Information
International Journal of Control, Automation, and Systems, vol. 6, no. 2, 2008, pp. 282-287
Abstract
In distributed mobile robot systems, autonomous robots accomplish complicated tasks through intelligent cooperation with each other. This paper presents behavior learning and online distributed evolution for cooperative behavior of a group of autonomous robots. Learning and evolution capabilities are essential for a group of autonomous robots to adapt to unstructured environments. Behavior learning finds an optimal state-action mapping of a robot for a given operating condition. In behavior learning, a Q-learning algorithm is modified to handle delayed rewards in distributed robot systems. A group of robots implements cooperative behaviors through communication with other robots. Individual robots improve their state-action mappings through online evolution with a crossover operator based on the Q-values and their update frequencies. A cooperative material search problem demonstrates the effectiveness of the proposed behavior learning and online distributed evolution method for implementing cooperative behavior of a group of autonomous mobile robots.
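The two mechanisms named in the abstract, a tabular Q-learning update and a crossover that mixes Q-tables according to how often each entry has been updated, can be sketched as follows. This is a minimal illustration of the general idea, not the authors' implementation: the class and function names, the per-state update counters, and the rule "the child inherits each state's row from the more frequently updated parent" are assumptions made for the example.

```python
import random

class QTableAgent:
    """Minimal tabular Q-learning agent (standard Q-learning update rule)."""
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        # Count updates per state; the paper uses update frequencies as an
        # experience measure for crossover (exact form is an assumption here).
        self.updates = [0] * n_states
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select_action(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.randrange(len(self.q[state]))
        row = self.q[state]
        return row.index(max(row))

    def update(self, state, action, reward, next_state):
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (target - self.q[state][action])
        self.updates[state] += 1

def experience_based_crossover(a, b):
    """Hypothetical reading of the operator: for each state, the child
    inherits the Q-row of the parent that has updated that state more often."""
    child = QTableAgent(len(a.q), len(a.q[0]), a.alpha, a.gamma, a.epsilon)
    for s in range(len(a.q)):
        donor = a if a.updates[s] >= b.updates[s] else b
        child.q[s] = list(donor.q[s])
        child.updates[s] = donor.updates[s]
    return child
```

In this sketch, update frequency stands in for how much experience a robot has accumulated in a state, so the offspring table combines the better-explored regions of both parents' state spaces.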
Keywords
Cooperative behavior; distributed evolutionary algorithm; distributed mobile robot system; experience-based crossover; Q-learning; reinforcement learning
Citations & Related Records
Times Cited By KSCI: 1
Times Cited By Web of Science: 2
Times Cited By SCOPUS: 2