2023/2024
Projects fall
- Zhan Li “Research on in-context learning”
- Haoming Lin “Understanding self-distillation toward the neural tangent kernel”
- Somesh Mehra “Polynomial networks with train-time activations”
- Pol Plana Puigdemont “Combinatorial Optimization through Deep Reinforcement Learning”
- Mariia Vidmuk “Verification of transformers”
- Shaotian Wu “The inductive bias of polynomial neural networks”
- Daria Yakovchuk “Ethical aspects of Stable Diffusion: The Attribution Challenge”
Projects spring
- Brando Balloco “Graph Neural Networks for Scalable Combinatorial Optimization”
- Thibault Jean-Baptiste Golaz “Degree-2 polynomial expansions”
- Somesh Mehra “Polynomial networks with train-time activations”
- Syrine Noamen “Beyond additive noise: Evaluating Scene Text Recognition models”
- Daniil Pyatko “Improved theoretical bounds for infinite-horizon imitation learning in linear MDPs”
- Massimo Rizzuto “Optimistic Exploration for Deep Imitation Learning”
- Sanberk Serbest “Exploration Methods in Deep Reinforcement Learning”
- Alexander Sharipov “Polynomial networks, scaling, interpretability”
2022/2023
Projects fall
- Antoine Bonnet “Not All Wasserstein GANs Are Created Equal”
- Richard Gao “Convex duality for distributed multiagent reinforcement learning”
- Ioannis Mavrothalassitis “Federated Learning Under Distribution Shifts”
- Eelis Mielonen “Robust Domain Adaptation”
- Henry Papadatos “Autoregressive model for audio generation”
- Huan Yang “Fairness without demographics”
Projects spring
- Ioannis Bantzis “Adversarial Training: The role of width, depth and initialization”
- Lorenzo Brusca “Combinatorial Optimization through Deep Reinforcement Learning”
- Yagiz Dilberoglu “Robust domain adaptation”
- Xiyun Fu “Contrastive graph neural networks for combinatorial optimization”
- Mert Karagözlü “Double Descent on Self-Supervised Learning”
- Yuan Ma “Higher-order interactions in self-attention”
- Arvind Menon “Combinatorial optimization using reinforcement learning”
- Fan Nie “Generative models for neural combinatorial optimization”
- Lars Quaedvlieg “Tackling job-shop scheduling with reinforcement learning”
- Zechen Wu “High-frequency trading”
- Zechen Wu “Robustness in Imitation Learning”
2021/2022
Projects fall
- Mohamed Anougmar “Adaptive Federated Learning”
- Yihang Chen “Target-private classes in domain adaptation”
- Abdülkadir Gökce “Adversarial Dropout”
- Elie Graham “Private and Fair Federated Learning”
- Xavier Jeanmonod “Conditional GANs for accelerated MRI”
- Perich Pujol “Benchmarking and improving attention variants”
- Denys Pushkin “Bias in score-based generative modeling”
- Peng Ren “Neural network architectures for approximate combinatorial optimization”
- Lucas Streit “GANs for data-driven data augmentation”
- Bohan Wang “Super-exponential polynomial expansions”
- Jiaqi Wang “Learning in polynomials”
- Yongtao Wu “Audio and music synthesis in the complex domain”
Projects spring
- Elias Abad Rocamora “Guarantees on the robustness of polynomial networks”
- Daniel Berg Thomsen “Solving inverse problems using score-based generative modeling with stochastic differential equations”
- Xiangcheng Cao “Federated Learning Under Distribution Shifts”
- Justin Deschenaux “Active exploration of a combinatorial action space via a 3-player game”
- Lorena Egger “Worst-performing class in neural architecture search”
- Alec Flowers “Polynomial Neural Networks – Adaptive Polynomial Expansion”
- Orest Gkini “Few-shot domain adaptation”
- Arnaud Guibbert “Neural Architecture Search on the complex field for audio recognition”
- Sam Houliston “Data-Constrained Self-Supervised Learning”
- Berk Mandiracioglu “Data-constrained self-supervised learning”
- Perich Pujol “Architectures for neural EDA”
- Pierre Reboud “Fairness Without Demographics”
- Fernandez Ruiz De Velasco “Blind Fairness”
- Fadel Seydou “Personalized Federated Learning under Distribution Shifts”
- Eliot Walt “Active Model-Based RL”
2020/2021
Projects fall
- Alvaro Caudéran “Generative Adversarial Imitation Learning for Circuit Design”
- Lucien Iseli “Reinforcement learning for circuit design”
- Ara Ibrar Malik “An investigation of generative models for deep classifier calibration”
- Florian Ravasi “Reinforcement learning for circuit design”
- Jonathan Sauder “Adaptive Sampling for MRI”
- Aleksandr Timofeev “Finding Mixed Nash Equilibrium of GANs”
Projects spring
- Yehya El Hassan “Implementation of SketchyCGAL for NN verification in JAX”
- Kiarash Farivar “Disentangling adversarial examples”
- Nicolas Feppon “Advanced Neural Network Architectures for Neuroprosthetics”
- Tushar Goel “Disentangled Representations for Circuit RL goal vectors”
- Berk Mandiracioglu “Safe exploration algorithms for automatic optimization of Epidural Electrical Stimulation (EES)”
- Aleksandr Timofeev “Neural Architecture Search with Fairness”
- Bohan Wang “Stability of polynomial neural networks”
- Zhiyuan Wu “Representation learning in GANs”
- Zhenyu Zhu “Lipschitz constants for polynomial neural networks”
2019/2020
Projects fall
- Tianyang Dong “Reinforcement learning for sampling in MRI”
- Jiahua Wu “Robust Training and Interpretability for Fundus Imaging”
Projects spring
- Patrick Ley “Library for variational inequalities and saddle point problems”
- Cyrille Masserey “Low-power analog-to-digital conversion for multichannel neural recording”
- Ambroise Renaud “Self-supervised learning for MRI sampling”
- Luca Viano “Robust Inverse Reinforcement Learning”
- Aras Yazgan “DeepMDP replication study”
- Adrian Ziegler “Keeping kids safe online with privacy-preserving machine learning”
2018/2019
Projects fall
- Doruk Erden “Learning-based neural network decoder”
- Niklaus Immer “Probability function maximization via smooth Beta processes”
Projects spring
- Fengyu Cai “Efficient Tensor Decomposition with Application to Topic Modelling”
- Sébastien Chevaleyre “Neural network-based hardware decoder design”
- Yu-Ting Huang “Solving two-player Markov games via SGLD/sampling mechanisms”
- Zhaodong Sun “Uncertainty Quantification on MRI”
- Doga Tekin “Optimizing RL Agents with Langevin Dynamics and Homotopy-Based Optimization”
2017/2018
Projects fall
- Manuel Vonlanthen “Sampling optimization for 3D MRI and cluster computing”
Projects spring
- Xingce Bao “Learning-based compressive sampling (IoT)”
- Dilara Günay “Optimization for Tensor Decompositions”
- Jean Lamiot “Learning-based Measurement Design for Biomedical Applications”
- Julien Leal “Learning Representations via Weak Supervision Using Deep Learning”
- Qianqian Qiao “Application of the three-operator splitting method to semidefinite programming”
2016/2017
Projects spring
- Jean Benvenuti “Distributed Convex Optimization with TensorFlow”
2015/2016
Projects spring
- Chen Liu “Novel Optimization Methods for Deep Neural Networks by Non-Euclidean Geometry”
- Martin Wüthrich “Learning-Based Compressive Sensing Applied to Image Compression”
2014/2015
Projects fall
- Tasorn Sornnarong “Sparse covariance estimation for Markowitz portfolio design”
Projects spring
- Arnaud Reymond “Latent variable graphical model selection via convex optimization”
- Hugo von Däniken “Adaptive algorithms for regret minimization under partial monitoring”
2012/2013
Projects spring
- Gizem Tabak “Topic Model Discovery Using Low Rank Tensors”
- Ender Tinkir “Topic Model Discovery Using Low Rank Tensors”