Optimization-Based Control Methods

Students: Elective master program (Catalogue: Process dynamics)
Lecturer: Dr. Adrian Redder
Credit points: 6

The course begins with Bellman's Principle of Optimality and introduces the foundations of Dynamic Programming (DP) for solving optimization problems over finite and infinite time horizons. Building on this framework, students will study Finite and Infinite Horizon Optimal Control Problems and learn how to apply DP to real-world control problems over different time scales.
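For orientation, a standard finite-horizon formulation (the notation is generic and chosen here for illustration, not taken from the lecture notes) applies DP backwards in time:

    J_N(x_N) = g_N(x_N),
    J_k(x_k) = \min_{u_k \in U_k} \left[ g_k(x_k, u_k) + J_{k+1}\big( f_k(x_k, u_k) \big) \right], \quad k = N-1, \dots, 0,

with dynamics x_{k+1} = f_k(x_k, u_k), stage costs g_k, and terminal cost g_N; the optimal cost from an initial state x_0 is J_0(x_0).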

A significant portion of the course is then devoted to linear quadratic regulator (LQR) and linear quadratic estimator (LQE) problems, focusing on designing Optimal State Feedback controllers and state estimators for linear systems. The linear quadratic Gaussian (LQG) problem then combines LQR and LQE into an output-feedback design for systems subject to Gaussian process and measurement noise.
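To make the LQR part concrete, the following NumPy sketch (the double-integrator model and the weights are placeholder values, not examples from the course) computes the time-varying optimal state-feedback gains by the backward Riccati recursion:

    # Illustrative sketch: finite-horizon LQR gains via the backward Riccati
    # recursion for x_{k+1} = A x_k + B u_k with stage cost x'Qx + u'Ru
    # and terminal cost x'Qf x.
    import numpy as np

    def lqr_gains(A, B, Q, R, Qf, N):
        """Return gains K_0, ..., K_{N-1} such that u_k = -K_k x_k is optimal."""
        P = Qf
        gains = []
        for _ in range(N):
            # K = (R + B'PB)^{-1} B'PA
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            # Riccati update: P <- Q + A'P(A - BK)
            P = Q + A.T @ P @ (A - B @ K)
            gains.append(K)
        return gains[::-1]  # reorder so gains[k] is the gain applied at step k

    # Placeholder example: discretized double integrator.
    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.0], [dt]])
    Q, R, Qf = np.eye(2), np.eye(1), 10 * np.eye(2)
    K = lqr_gains(A, B, Q, R, Qf, N=50)
    print(K[0])  # feedback gain for the first time step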

To address optimization problems with state and input constraints, students will be introduced to Quadratic Programming (QP) and the fundamentals of Convex Optimization, which are essential for handling real-world control applications that impose limits on system inputs and states. A section on Model Predictive Control (MPC) will then focus on designing optimal control strategies for constrained linear systems by repeatedly solving constrained optimization problems in real time, offering a bridge between theory and industrial applications.
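As an illustration of how a single MPC step reduces to a QP, the sketch below uses the cvxpy modeling library; the system, horizon, weights, and input bound are assumptions made for the example rather than course data:

    # Minimal MPC sketch: one receding-horizon step posed as a QP.
    import cvxpy as cp
    import numpy as np

    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    n, m, T = 2, 1, 20
    Q, R = np.eye(n), 0.1 * np.eye(m)
    x0 = np.array([1.0, 0.0])

    x = cp.Variable((n, T + 1))
    u = cp.Variable((m, T))
    cost, constraints = 0, [x[:, 0] == x0]
    for k in range(T):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],  # dynamics
                        cp.abs(u[:, k]) <= 0.5]                    # input limits
    cost += cp.quad_form(x[:, T], Q)                               # terminal cost

    cp.Problem(cp.Minimize(cost), constraints).solve()
    print(u[:, 0].value)  # apply the first input, then re-solve at the next step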

The course then covers Approximate Dynamic Programming (ADP) and its connection to MPC, exploring methods to approximate optimal solutions for problems that are computationally intractable for exact DP techniques. Students will then use modern methods for designing Convex Optimization Control Policies, learning to create efficient and scalable control solutions based on convexity and optimization frameworks. Finally, if time permits, the course will close with a short introduction to Proximal Policy Optimization methods.
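To give a minimal flavor of ADP (the dynamics, cost, and feature map below are illustrative placeholders, not course material), fitted value iteration performs a Bellman backup at sampled states and then fits a parametric value function by least squares:

    # Sketch of fitted (approximate) value iteration with linear features.
    import numpy as np

    rng = np.random.default_rng(0)
    gamma = 0.95
    states = rng.uniform(-1, 1, size=200)           # sampled scalar states
    actions = np.linspace(-0.2, 0.2, 5)             # small finite action set

    def step(x, u):                                 # placeholder dynamics and stage cost
        return 0.9 * x + u, x**2 + 0.1 * u**2

    def features(x):                                # quadratic feature map
        return np.column_stack([np.ones_like(x), x, x**2])

    w = np.zeros(3)                                 # value-function weights
    for _ in range(50):
        # Bellman backup at every sampled state, minimizing over the action set
        targets = np.min(
            [c + gamma * features(xn) @ w for xn, c in (step(states, u) for u in actions)],
            axis=0)
        # Fit the approximate value function to the backed-up targets
        w, *_ = np.linalg.lstsq(features(states), targets, rcond=None)
    print(w)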