Reinforcement Learning for Control of Autonomous Systems
Autonomous systems are characterized by their ability to act without human control. The most prominent examples are drones and cars, but autonomous systems also appear in smart homes, agriculture, logistics, and other domains. Traditionally, control theory seeks to steer a system so that it follows a desired reference. In the last decade, model-free reinforcement learning (RL) has become popular for controlling autonomous systems because it can succeed without prior knowledge of the system dynamics. Current RL solutions focus on episodic tasks, in which an autonomous system must achieve a goal or complete a mission specified through human feedback. However, the problems in which we expect autonomous systems to significantly improve people's lives and work are far more complex: systems must achieve changing objectives quickly and accurately. Time-varying objectives arise naturally for autonomous systems as a consequence of human decisions and non-stationary environments.
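To make the notion of model-free control with time-varying objectives concrete, the sketch below shows a deliberately minimal tabular Q-learning agent on a 1-D grid whose reward penalizes the distance to a reference position; the reference switches between episodes. All names, the grid environment, and the hyperparameters are illustrative assumptions for this toy example, not the methods studied in this project.

```python
import random

# Toy setting (assumed for illustration): a point on a 1-D grid must track
# a reference position that changes over time. Model-free: the agent never
# sees the transition function, only sampled transitions and rewards.
N_STATES = 10
ACTIONS = [-1, 0, 1]  # move left, stay, move right

def step(state, action, target):
    """Deterministic dynamics; reward penalizes distance to the reference."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, -abs(nxt - target)

def train(targets, episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning; one Q-table per reference value models the
    time-varying objective (a simple stand-in for goal-conditioned RL)."""
    rng = random.Random(seed)
    Q = {t: [[0.0] * len(ACTIONS) for _ in range(N_STATES)] for t in targets}
    for ep in range(episodes):
        target = targets[ep % len(targets)]  # objective switches per episode
        s = rng.randrange(N_STATES)
        for _ in range(30):
            # Epsilon-greedy exploration over the three actions.
            a = (rng.randrange(len(ACTIONS)) if rng.random() < eps
                 else max(range(len(ACTIONS)), key=lambda i: Q[target][s][i]))
            s2, r = step(s, ACTIONS[a], target)
            # Standard Q-learning temporal-difference update.
            Q[target][s][a] += alpha * (r + gamma * max(Q[target][s2])
                                        - Q[target][s][a])
            s = s2
    return Q

def greedy_action(Q, target, state):
    """Action the learned policy takes for a given reference and state."""
    return ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[target][state][i])]
```

After training on two alternating references, the greedy policy moves toward whichever reference is active and holds position once it is reached; this is the simplest instance of the tracking behavior described above, without any model of the dynamics.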

We are interested in studying and characterizing autonomous control scenarios in which single or multiple mobile systems must act in complex environments under different time-varying tasks. Specifically, our fundamental research concentrates on algorithm design, mathematical analysis, and simulation studies.