Student Projects


The goal of the project is to study an application of the Koopman-operator-based control approach. For autonomous systems, Koopman operator theory shows that a nonlinear system can be represented as a linear system in a higher-dimensional (possibly infinite-dimensional) space. By lifting a nonlinear system to such a space, this approach enables the use of tools from linear systems theory for studying nonlinear systems. Using the Koopman operator approach, our goal is to obtain a linear system representation in a high-dimensional space and employ linear control design techniques to control the underlying nonlinear system.

The lifted system representation will be obtained in a data-driven fashion. As a result of this, as well as of further structural restrictions, the lifted representation never exactly captures the original system dynamics. Thus, the project will also aim at bounding these errors so that guarantees for the true closed-loop system can be provided using robust control tools.

For now, a pendulum system is considered as the application.
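As a minimal illustration of the lifting idea only (not the project's actual method), the Python sketch below fits a linear model of a damped pendulum in a space of hand-picked observables via an EDMD-style least-squares fit. The dynamics, time step, and observable dictionary are all illustrative assumptions:

```python
import numpy as np

# Assumed toy dynamics for illustration: damped pendulum, Euler step.
def pendulum_step(x, dt=0.01):
    th, om = x
    return np.array([th + dt * om, om + dt * (-np.sin(th) - 0.1 * om)])

def lift(x):
    th, om = x
    # Dictionary of observables: the state plus sin/cos nonlinearities.
    return np.array([th, om, np.sin(th), np.cos(th)])

# Collect snapshot pairs from random initial conditions.
rng = np.random.default_rng(0)
X, Y = [], []
for _ in range(200):
    x = rng.uniform(-1, 1, size=2)
    X.append(lift(x))
    Y.append(lift(pendulum_step(x)))
X, Y = np.array(X).T, np.array(Y).T

# Least-squares fit of the lifted linear map: Y ~ A X.
A = Y @ np.linalg.pinv(X)

# One-step prediction through the lifted linear model vs. the true map.
x0 = np.array([0.5, 0.0])
pred = A @ lift(x0)
true = lift(pendulum_step(x0))
print(np.linalg.norm(pred - true))  # small residual
```

The residual is small but nonzero: the first two lifted coordinates evolve exactly linearly in this dictionary, while the sin/cos coordinates do not, which is the kind of representation error the project seeks to bound.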

Skills needed:

- Advanced control systems (familiarity with the frequency-domain approach and robust control)

- Matlab/Simulink, LabVIEW

Comment
Contact: [email protected]
Professor(s)
Alireza Karimi
Administration
Mert Eyuboglu
Site
ddmac.epfl.ch, la.epfl.ch

During the past few years, policy gradient methods have gained renewed attention within the control community, primarily due to their global convergence properties for certain classical control problems, such as $\mathcal{H}_2$ and $\mathcal{H}_\infty$ optimal control. However, the behavior of policy gradient methods in the context of robust control with data-dependent uncertainty remains an open question.

This project focuses on the design and analysis of policy gradient algorithms for data-driven robust control. Given a set of noisy offline input-output data, and under boundedness assumptions on the noise, the goal is to propose policy gradient algorithms that minimize the worst-case closed-loop performance. Moreover, properties of the proposed algorithms, such as global optimality and convergence rate, will be investigated.
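As a toy illustration of the policy gradient viewpoint (the project itself concerns the worst-case, data-driven setting), the sketch below runs plain gradient descent on the LQR cost of an assumed scalar system and compares the result with the Riccati solution. All system and cost parameters are made up for illustration:

```python
import numpy as np

# Assumed scalar system: x+ = a x + b u, policy u = -k x, cost sum Q x^2 + R u^2.
a, b, Q, R = 1.2, 1.0, 1.0, 0.1

def cost(k):
    c = a - b * k                        # closed-loop pole
    if abs(c) >= 1:                      # unstable gain: infinite cost
        return np.inf
    return (Q + R * k**2) / (1 - c**2)   # value for a unit initial state

def grad(k, eps=1e-6):
    return (cost(k + eps) - cost(k - eps)) / (2 * eps)  # finite difference

k = 1.0                                  # stabilizing initial gain
for _ in range(500):
    k -= 1e-2 * grad(k)                  # plain gradient descent on J(k)

# Reference: optimal gain from the scalar discrete-time Riccati equation.
P = Q
for _ in range(1000):
    P = Q + a**2 * P - (a * b * P)**2 / (R + b**2 * P)
k_opt = a * b * P / (R + b**2 * P)
print(k, k_opt)  # the two gains nearly coincide
```

Despite the nonconvexity of the LQR cost in the gain, gradient descent from a stabilizing initial gain recovers the Riccati-optimal gain here, which is the gradient-domination phenomenon studied in [1].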

Comment
Required knowledge:

1. Control theory

2. Convex optimization

3. Numerical analysis

References:

[1] Maryam Fazel et al. “Global convergence of policy gradient methods for the linear quadratic regulator.” International Conference on Machine Learning. PMLR, 2018.

[2] Kaiqing Zhang, Bin Hu, and Tamer Basar. “Policy Optimization for $\mathcal{H}_2$ Linear Control with $\mathcal{H}_\infty$ Robustness Guarantee: Implicit Regularization and Global Convergence.” Learning for Dynamics and Control. PMLR, 2020.

[3] Yang Zheng, Chih-fan Pai, and Yujie Tang. “Benign nonconvex landscapes in optimal and robust control, Part I: Global optimality.” arXiv preprint arXiv:2312.15332 (2023).

Professor(s)
Alireza Karimi, Zhaoming Qin
Administration
Barbara Marie-Louise Frédérique Schenkel

In this project, the objective is to deploy a swarm of mini-hovers (Crazyflies) using ROS and a hierarchical control scheme. The drones' positions would then be measured by a motion capture system (OptiTrack) and used for swarm control. The student would need to:

1. Design a controller synthesis procedure for the low-level control of each drone.

2. Integrate the Crazyflies into the ROS architecture.

3. Design a high-level controller for the swarm dynamics.
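A minimal sketch of the hierarchy, under toy assumptions (1-D double integrators in place of quadrotors, illustrative gains, no ROS or OptiTrack): a high-level consensus law generates formation setpoints, and a low-level PD loop on each drone tracks them:

```python
import numpy as np

n, dt = 4, 0.02
offsets = np.array([0.0, 1.0, 2.0, 3.0])   # desired formation (a line)
pos = np.array([3.0, 0.0, 2.5, 1.0])        # assumed initial positions
vel = np.zeros(n)
ref = pos.copy()                            # setpoints held by the high level

for _ in range(3000):
    # High level: consensus on the formation-shifted setpoints.
    z = ref - offsets
    z += dt * (z.mean() - z)                # drive all z_i to a common value
    ref = z + offsets
    # Low level: PD tracking of each drone's setpoint (double integrator).
    acc = 8.0 * (ref - pos) - 4.0 * vel
    pos += dt * vel
    vel += dt * acc

print(pos - offsets)  # all drones settle on a common base position
```

The separation mirrors the project tasks: the low-level PD loop stands in for the per-drone controller synthesis, while the consensus update stands in for the high-level swarm controller; in the real setup the motion-capture measurements would close both loops.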

Comment
Assistant: Gupta Vaibhav
Professor(s)
Alireza Karimi, Christophe Salzmann
Administration
Barbara Marie-Louise Frédérique Schenkel
Site
ddmac.epfl.ch, la.epfl.ch