http://dx.doi.org/10.7746/jkros.2022.17.1.076

Online Adaptation of Control Parameters with Safe Exploration by Control Barrier Function  

Kim, Suyeong (Dept. of Mechanical Engineering, Ulsan National Institute of Science and Technology)
Son, Hungsun (Dept. of Mechanical Engineering, Ulsan National Institute of Science and Technology)
Publication Information
The Journal of Korea Robotics Society / v.17, no.1, 2022, pp. 76-85
Abstract
One of the most fundamental challenges in designing controllers for dynamic systems is the adjustment of the controller parameters. The system model is usually used to obtain an initial controller, but the parameters must ultimately be tuned by hand on the real system to achieve the best performance. To avoid this manual tuning step, data-driven methods such as machine learning have been used. Recently, reinforcement learning has emerged as an alternative, in which an agent learns a policy over a large state space by trial and error within a Markov Decision Process (MDP), a framework widely used in robotics. However, during the initial training phase the agent explores new regions of the state space with random actions and acts directly on the controller parameters of the real system, so the MDP can drive the system into safety-critical failures. The issue of 'safe exploration' has therefore become important. In this paper we satisfy the safe-exploration condition with a Control Barrier Function (CBF), which converts explicit constraints on the state space into implicit constraints on the control inputs. Given an initial low-performance controller, the proposed method automatically optimizes the parameters of the control law while the CBF guarantees safety, so that the agent can learn to predict and control unknown and often stochastic environments. Simulation results on a quadrotor UAV indicate that the proposed method can safely optimize controller parameters quickly and automatically.
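
As a rough illustration of how a CBF turns a state constraint into a constraint on the control input, the sketch below filters a nominal (possibly exploratory) command through a small quadratic program. It is only a toy example under assumed conditions, not the controller from the paper: the single-integrator dynamics, the barrier h(x) = x_max - x, the gain alpha, and the use of the cvxpy solver are all illustrative choices.

import numpy as np
import cvxpy as cp

def cbf_filter(x, u_nom, x_max=1.0, alpha=2.0):
    # Solve: min (u - u_nom)^2  s.t.  dh/dt >= -alpha * h(x),
    # which keeps the assumed safe set {x : h(x) = x_max - x >= 0} forward invariant.
    u = cp.Variable()
    h = x_max - x          # barrier value for the toy system x_dot = u
    h_dot = -u             # dh/dt = -x_dot = -u
    problem = cp.Problem(cp.Minimize(cp.square(u - u_nom)), [h_dot >= -alpha * h])
    problem.solve()
    return float(u.value)

# Exploratory commands (e.g., random actions while tuning controller gains)
# are filtered at every step, so the state never leaves x <= x_max.
x, dt = 0.9, 0.01
for _ in range(200):
    u_nom = np.random.uniform(-2.0, 2.0)   # possibly unsafe nominal action
    u_safe = cbf_filter(x, u_nom)
    x += dt * u_safe

The filter changes the nominal input only when the barrier condition would otherwise be violated, which mirrors the role the abstract assigns to the CBF: the learning agent keeps exploring controller parameters while the state constraint is enforced implicitly through the admissible control inputs.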
Keywords
Automatic Gain Tuning; Reinforcement Learning; Control Barrier Function; Safe Exploration
Citations & Related Records
Times Cited By KSCI: 6
1 N. O. Lambert, D. S. Drew, J. Yaconelli, R. Calandra, S. Levine, and K. S. J. Pister, "Low level control of a quadrotor with deep model-based reinforcement learning," arXiv:1901.03737v2 [cs.RO], 2019, [Online], https://arxiv.org/pdf/1901.03737.pdf.
2 F. Berkenkamp, A. P. Schoellig, and A. Krause, "Safe and automatic controller tuning with Gaussian processes," Workshop on Machine Learning in Planning and Control of Robot Motion, 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015, [Online], https://www.dynsyslab.org/wp-content/papercite-data/pdf/berkenkamp-icra16.pdf.
3 A. Y. Zomaya, "Reinforcement learning to adaptive control of nonlinear systems," IEEE Transactions on Systems, Man, and Cybernetics, vol. 24, no. 2, Feb., 1994, DOI: 10.1109/21.281435.
4 Z. S. Jin, H. C. Li, and H. M. Gao, "An intelligent weld control strategy based on reinforcement learning approach," The International Journal of Advanced Manufacturing Technology, Feb., 2019, DOI: 10.1007/s00170-018-2864-2.
5 J. Achiam, D. Held, A. Tamar, and P. Abbeel, "Constrained policy optimization," arXiv:1705.10528v1 [cs.LG], 2017, [Online], https://arxiv.org/pdf/1705.10528.pdf.
6 S. Gangapurwala, A. Mitchell, and I. Havoutis, "Guided constrained policy optimization for dynamic quadrupedal robot locomotion," IEEE Robotics and Automation Letters, vol. 5, no. 2, Apr., 2020, DOI: 10.1109/LRA.2020.2979656.
7 X.-S. Wang, Y.-H. Cheng, and W. Sun, "A proposal of adaptive PID controller based on reinforcement learning," Journal of China Univ. Mining and Technology, vol. 17, no. 1, 2007, [Online], http://www.paper.edu.cn/scholar/showpdf/MUT2MNzINTD0cx2h.
8 T. M. Moldovan and P. Abbeel, "Safe exploration in Markov decision processes," arXiv:1205.4810v3 [cs.LG], 2012, [Online], https://arxiv.org/pdf/1205.4810.pdf.
9 Y. Sui, A. Gotovos, J. W. Burdick, and A. Krause, "Safe exploration for optimization with Gaussian processes," 32nd International Conference on Machine Learning, 2015, [Online], http://proceedings.mlr.press/v37/sui15.pdf.
10 A. K. Akametalu, J. F. Fisac, J. H. Gillula, S. Kaynama, M. N. Zeilinger, and C. J. Tomlin, "Reachability-based safe learning with Gaussian processes," 53rd IEEE Conference on Decision and Control, Los Angeles, CA, USA, 2014, DOI: 10.1109/CDC.2014.7039601.
11 P. A. Ioannou and C. C. Chien, "Autonomous intelligent cruise control," IEEE Transactions on Vehicular Technology, vol. 42, no. 4, pp. 657-672, Nov., 1993, DOI: 10.1109/25.260745.
12 J.-M. Kai, G. Allibert, M.-D. Hua, and T. Hamel, "Nonlinear feedback control of quadrotors exploiting first-order drag effects," IFAC-PapersOnLine, Jul., 2017, DOI: 10.1016/j.ifacol.2017.08.1267.
13 A. M. Lyapunov, The General Problem of the Stability of Motion, Taylor and Francis Ltd, London, UK, 1992, [Online], https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.910.9566&rep=rep1&type=pdf.
14 K. Galloway, K. Sreenath, A. D. Ames, and J. W. Grizzle, "Torque saturation in bipedal robotic walking through control Lyapunov function-based quadratic programs," IEEE Access, vol. 3, pp. 323-332, 2015, DOI: 10.1109/ACCESS.2015.2419630.
15 A. D. Ames and M. Powell, "Towards the unification of locomotion and manipulation through control Lyapunov functions and quadratic programs," Control of Cyber-Physical Systems, vol. 449, 2013, DOI: 10.1007/978-3-319-01159-2_12.
16 S. Li, K. Li, R. Rajamani, and J. Wang, "Model predictive multiobjective vehicular adaptive cruise control," IEEE Transactions on Control Systems Technology, vol. 19, no. 3, pp. 556-566, 2011, DOI: 10.1109/TCST.2010.2049203.
17 J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, "Proximal policy optimization algorithms," arXiv:1707.06347v2 [cs.LG], 2017, [Online], https://arxiv.org/pdf/1707.06347.pdf.
18 E. D. Sontag, "A Lyapunov-like characterization of asymptotic controllability," SIAM Journal on Control and Optimization, vol. 21, no. 3, 1983, DOI: 10.1137/0321028.
19 E. Squires, P. Pierpaoli, and M. Egerstedt, "Constructive barrier certificates with applications to fixed-wing aircraft collision avoidance," 2018 IEEE Conference on Control Technology and Applications (CCTA), Aug., 2018, DOI: 10.1109/CCTA.2018.8511342.
20 P. Auer, "Using confidence bounds for exploitation-exploration trade-offs," The Journal of Machine Learning Research, vol. 3, pp. 397-422, 2002, [Online], https://www.jmlr.org/papers/volume3/auer02a/auer02a.pdf.
21 F. Berkenkamp, M. Turchetta, A. P. Schoellig, and A. Krause, "Safe model-based reinforcement learning with stability guarantees," IEEE Transactions on Automatic Control, vol. 64, no. 7, Jul., 2017, DOI: 10.1109/TAC.2018.2876389.
22 A. Vahidi and A. Eskandarian, "Research advances in intelligent collision avoidance and adaptive cruise control," IEEE Transactions on Intelligent Transportation Systems, vol. 4, no. 3, pp. 143-153, Sep., 2003, DOI: 10.1109/TITS.2003.821292.
23 G. J. L. Naus, J. Ploeg, M. J. G. Van de Molengraft, W. P. M. H. Heemels, and M. Steinbuch, "Design and implementation of parameterized adaptive cruise control: An explicit model predictive control approach," Control Engineering Practice, vol. 18, no. 8, pp. 882-892, Aug., 2010, DOI: 10.1016/j.conengprac.2010.03.012.
24 M. Sedighizadeh and A. Rezazadeh, "Adaptive PID controller based on reinforcement learning for wind turbine control," World Academy of Science, Engineering and Technology, 2008, DOI: 10.5281/zenodo.1057789.
25 F. Berkenkamp, A. P. Schoellig, and A. Krause, "Safe controller optimization for quadrotors with Gaussian processes," 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 2016, DOI: 10.1109/ICRA.2016.7487170.
26 M. N. Howell and M. C. Best, "On-line PID tuning for engine idle-speed control using continuous action reinforcement learning automata," Control Engineering Practice, vol. 8, no. 2, Feb., 2000, DOI: 10.1016/S0967-0661(99)00141-0.
27 A. Aswani, H. Gonzalez, S. S. Sastry, and C. Tomlin, "Provably safe and robust learning-based model predictive control," Automatica, vol. 49, no. 5, May, 2013, DOI: 10.1016/j.automatica.2013.02.003.
28 A. D. Ames, X. Xu, J. W. Grizzle, and P. Tabuada, "Control barrier function based quadratic programs for safety critical systems," IEEE Transactions on Automatic Control, vol. 62, no. 8, Aug., 2017, DOI: 10.1109/TAC.2016.2638961.
29 B. J. Morris, M. J. Powell, and A. D. Ames, "Continuity and smoothness properties of nonlinear optimization-based feedback controllers," 2015 54th IEEE Conference on Decision and Control (CDC), 2015, DOI: 10.1109/CDC.2015.7402101.