

Poster
in
Workshop: Safe and Robust Control of Uncertain Systems

Uncertainty-based Safety-Critical Control using Bayesian Methods

Carlos Montenegro · Santiago Jimenez Leudo · Carlos Rodríguez


Abstract:

Modern nonlinear control theory seeks to endow systems with stability and safety properties, and has been deployed successfully in multiple domains. Despite this success, model uncertainty remains a significant challenge in controller synthesis, leading to degraded performance. Reinforcement learning (RL) algorithms, on the other hand, have succeeded in controlling systems with no model at all, but their use remains limited beyond simulated applications, in large part because of the absence of safety and stability guarantees during the learning process. To address this issue, we develop a controller architecture that combines a model-free RL controller with model-based controllers and online learning of the unknown system dynamics, guaranteeing stability and safety during learning. This general framework leverages the success of RL algorithms in learning high-performance controllers, while the proposed model-based controllers guarantee safety and guide the learning process by constraining the set of explorable policies. We validate this method in simulation on a robotic Segway platform.
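To illustrate the general idea of a model-based controller constraining an RL policy's actions, the following is a minimal, self-contained sketch of a safety filter in the style of control barrier functions (CBFs). The abstract does not specify the exact safety formulation, so the scalar dynamics, the barrier function `h`, and all names here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical 1D example: state x with dynamics x_dot = u.
# Safe set: h(x) = 1 - x**2 >= 0, i.e. x must stay in [-1, 1].
# A CBF-style safety filter minimally modifies the RL action u_rl so
# that dh/dx * u >= -alpha * h(x), which keeps the safe set invariant.
# All quantities below are illustrative assumptions.

def h(x):
    """Barrier function: nonnegative inside the safe set [-1, 1]."""
    return 1.0 - x ** 2

def safety_filter(x, u_rl, alpha=1.0):
    """Return the action closest to u_rl satisfying the CBF constraint.

    Constraint: (-2x) * u >= -alpha * (1 - x^2). For this scalar
    problem the usual QP reduces to clipping u_rl against one
    linear constraint.
    """
    grad = -2.0 * x           # dh/dx
    bound = -alpha * h(x)     # right-hand side of grad * u >= bound
    if abs(grad) < 1e-9:      # constraint inactive at x = 0
        return u_rl
    if grad > 0:              # constraint reads u >= bound / grad
        return max(u_rl, bound / grad)
    return min(u_rl, bound / grad)  # grad < 0: u <= bound / grad

# Simulate: an aggressive "RL policy" always pushes right; the filter
# keeps the trajectory inside the safe set during exploration.
x, dt = 0.9, 0.01
for _ in range(1000):
    u = safety_filter(x, u_rl=5.0)
    x += dt * u
assert h(x) >= 0.0
```

In the vector-valued case this clipping step becomes a small quadratic program solved at each control step, with the learned dynamics model supplying the constraint coefficients; the RL policy is otherwise free to explore within the filtered action set.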