Welcome to the Automatic Control Group

Our primary interest is the design, analysis, simulation, and implementation of data-driven control and navigation algorithms for autonomous systems, such as ground or aerial robots.

On our web pages, you will find information about us, our research, and our teaching activities.
You can also find opportunities for open positions, open thesis topics, and open master group projects.

Announcements/News

Excited to share our paper “Palette Inpainting Diffusion Curriculum Reinforcement Learning (PIDCRL)”, accepted at WCCI 2026! Curriculum Reinforcement Learning (CRL) helps agents learn faster by structuring tasks from easy → hard. We introduce PIDCRL — a diffusion-based CRL framework that converts trajectory heatmaps into candidate goals via a pretrained image-to-image diffusion model. To select meaningful goals, we employ several strategies,…

Read more
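For readers unfamiliar with the heatmap idea, here is a minimal sketch (not the paper's pipeline) of how a batch of 2-D trajectories can be rasterized into a normalized occupancy image of the kind an image-to-image diffusion model could consume. The grid size, extent, and array shapes are illustrative assumptions.

```python
import numpy as np

def trajectory_heatmap(trajectories, bins=32, extent=(-10.0, 10.0)):
    """Rasterize a list of (T_i, 2) arrays of (x, y) positions into a
    bins x bins occupancy heatmap scaled to [0, 1].

    The extent and resolution are made-up example values, not values
    from the PIDCRL paper."""
    xy = np.concatenate(trajectories, axis=0)  # stack all trajectory points
    hist, _, _ = np.histogram2d(
        xy[:, 0], xy[:, 1], bins=bins,
        range=[[extent[0], extent[1]], [extent[0], extent[1]]],
    )
    # Normalize so the image is in [0, 1], guarding against an empty grid.
    return hist / max(hist.max(), 1e-8)
```

From such an image, candidate goals can then be read off as high- or medium-density cells; how PIDCRL actually selects among them is described in the paper.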

Accepted at IEEE WCCI 2026, our work "Fuzzy Logic Theory-based Adaptive Reward Shaping for Robust Reinforcement Learning (FARS)" explores how fuzzy-logic-based reward shaping can improve reinforcement learning in complex, real-world scenarios with high-dimensional states and long horizons.

Read more
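To give a flavor of fuzzy-logic-based reward shaping (a toy sketch, not the FARS method): the state is fuzzified into overlapping linguistic sets, here "near", "medium", and "far" from the goal, and a shaped bonus is obtained by weighting and defuzzifying the memberships. All membership ranges and weights below are invented for illustration.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular fuzzy membership on [a, c] peaking at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def fuzzy_shaped_reward(base_reward, dist_to_goal):
    """Toy fuzzy reward shaping: fuzzify distance-to-goal into three sets,
    then add a centroid-style weighted bonus to the environment reward.

    Hypothetical example values throughout -- not the FARS rule base."""
    near = tri(dist_to_goal, -1.0, 0.0, 2.0)
    medium = tri(dist_to_goal, 1.0, 3.0, 5.0)
    far = tri(dist_to_goal, 4.0, 8.0, 12.0)
    # Weighted average of per-set bonuses (1.0 near, 0.3 medium, 0.0 far).
    bonus = (1.0 * near + 0.3 * medium) / (near + medium + far + 1e-8)
    return base_reward + bonus
```

Because the sets overlap, the bonus changes smoothly with distance, which is the property that makes fuzzy shaping attractive over hard thresholds in long-horizon tasks.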

How do we stabilize learning in differentiable simulation? Our new work, accepted at IEEE WCCI 2026, “ARE: Adaptive TD-λ Return Estimation for Learning Control in Differentiable Simulation,” proposes to tackle this challenge by replacing traditional k-step returns with TD-λ returns in first-order model-based reinforcement learning (FO-MBRL) algorithms.

Read more
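The TD-λ return mentioned above has a standard backward recursion, G_t = r_t + γ[(1 − λ)V(s_{t+1}) + λ G_{t+1}], which interpolates between one-step TD targets (λ = 0) and full Monte Carlo returns (λ = 1). A minimal reference implementation of that textbook recursion (not the ARE adaptation itself):

```python
import numpy as np

def td_lambda_returns(rewards, values, gamma=0.99, lam=0.95):
    """Compute TD(lambda) returns over one trajectory segment.

    rewards[t] = r_t for t = 0..T-1
    values[t]  = V(s_{t+1}), the bootstrap value after step t

    Uses the backward recursion
        G_t = r_t + gamma * ((1 - lam) * V(s_{t+1}) + lam * G_{t+1}),
    bootstrapping the final step from values[-1]."""
    T = len(rewards)
    returns = np.zeros(T)
    g = values[-1]  # bootstrap for the step after the segment ends
    for t in reversed(range(T)):
        g = rewards[t] + gamma * ((1.0 - lam) * values[t] + lam * g)
        returns[t] = g
    return returns
```

Setting `lam=0.0` recovers one-step TD targets and `lam=1.0` recovers bootstrapped Monte Carlo returns; ARE's contribution, per the announcement, is adapting this estimator inside first-order model-based RL in differentiable simulation.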