Completed semester projects @ LIONS

2023/2024

Projects fall

  1. Zhan Li “Research on in-context learning”
  2. Haoming Lin “Understanding self-distillation toward the neural tangent kernel”
  3. Somesh Mehra “Polynomial networks with train-time activations”
  4. Pol Plana Puigdemont “Combinatorial Optimization through Deep Reinforcement Learning”
  5. Mariia Vidmuk “Verification of transformers”
  6. Shaotian Wu “The inductive bias of polynomial neural networks”
  7. Daria Yakovchuk “Ethical aspects of Stable Diffusion: The Attribution Challenge”

Projects spring

  1. Balloco Brando “Graph Neural Networks for Scalable Combinatorial Optimization”
  2. Golaz Thibault Jean-Baptiste “2-degree polynomial expansions”
  3. Mehra Somesh “Polynomial networks with train-time activations”
  4. Noamen Syrine “Beyond additive noise: Evaluating Scene Text Recognition model”
  5. Pyatko Daniil “Improved theoretical bounds for infinite horizon imitation learning in linear MDPs”
  6. Rizzuto Massimo “Optimistic Exploration for Deep Imitation Learning”
  7. Serbest Sanberk “Exploration Methods in Deep Reinforcement Learning”
  8. Sharipov Alexander “Polynomial networks, scaling, interpretability”

2022/2023

Projects fall

  1. Bonnet Antoine “Not All Wasserstein GANs Are Created Equal”
  2. Richard Gao “Convex duality for distributed multiagent reinforcement learning”
  3. Ioannis Mavrothalassitis “Federated Learning Under Distribution Shifts”
  4. Eelis Mielonen “Robust Domain Adaptation”
  5. Henry Papadatos “Autoregressive model for audio generation”
  6. Huan Yang “Fairness without demographics”

Projects spring

  1. Ioannis Bantzis “Adversarial Training: The role of width, depth and initialization”
  2. Lorenzo Brusca “Combinatorial Optimization through Deep Reinforcement learning”
  3. Yagiz Dilberoglu “Robust domain adaptation”
  4. Xiyun Fu “Contrastive graph neural networks for combinatorial optimization”
  5. Mert Karagözlü “Double Descent on Self-Supervised Learning”
  6. Yuan Ma “Higher-order interactions in self-attention”
  7. Arvind Menon “Combinatorial optimization using reinforcement learning”
  8. Fan Nie “Generative models for neural combinatorial optimization”
  9. Lars Quaedvlieg “Tackling job-shop scheduling with reinforcement learning”
  10. Zechen Wu “High-frequency trading”
  11. Zechen Wu “Robustness in Imitation Learning”

2021/2022

Projects fall

  1. Anougmar Mohamed “Adaptive Federated Learning”
  2. Yihang Chen “Target private classes in domain adaptation”
  3. Abdülkadir Gökce “Adversarial Dropout”
  4. Elie Graham “Private and Fair Federated Learning”
  5. Xavier Jeanmonod “Conditional GANs for accelerated MRI”
  6. Perich Pujol “Benchmarking and improving attention variants”
  7. Denys Pushkin “Bias in score-based generative modeling”
  8. Peng Ren “Neural network architectures for approximate combinatorial optimization”
  9. Lucas Streit “GANs for data-driven data augmentation”
  10. Bohan Wang “Super-exponential polynomial expansions”
  11. Jiaqi Wang “Learning in polynomials”
  12. Yongtao Wu “Audio and music synthesis in the complex domain”

Projects spring

  1. Elias Abad Rocamora “Guarantees on the robustness of polynomial networks”
  2. Daniel Berg Thomsen “Solving inverse problems using score-based generative modeling with stochastic differential equations”
  3. Xiangcheng Cao “Federated Learning Under Distribution Shifts”
  4. Justin Deschenaux “Active exploration of a combinatorial action space via a 3-player game”
  5. Lorena Egger “Worst performing class on neural architecture search”
  6. Alec Flowers “Polynomial Neural Networks – Adaptive Polynomial Expansion”
  7. Orest Gkini “Few-shot domain adaptation”
  8. Arnaud Guibbert “Neural Architecture Search on the complex field for audio recognition”
  9. Sam Houliston “Data-Constrained Self Supervised Learning”
  10. Berk Mandiracioglu “Data-constrained self-supervised learning”
  11. Perich Pujol “Architectures for neural EDA”
  12. Pierre Reboud “Fairness Without Demographics”
  13. Fernandez Ruiz De Velasco “Blind Fairness”
  14. Fadel Seydou “Personalized Federated Learning under Distribution Shifts”
  15. Eliot Walt “Active Model Based RL”

2020/2021

Projects fall

  1. Alvaro Caudéran “Generative Adversarial Imitation Learning for Circuit Design”
  2. Lucien Iseli “Reinforcement learning for circuit design”
  3. Ara Ibrar Malik “An investigation of generative models for deep classifier calibration”
  4. Florian Ravasi “Reinforcement learning for circuit design”
  5. Jonathan Sauder “Adaptive Sampling for MRI”
  6. Aleksandr Timofeev “Finding Mixed Nash Equilibrium of GANs”

Projects spring

  1. Yehya El Hassan “Implementation of SketchyCGAL for NN verification in JAX”
  2. Kiarash Farivar “Disentangling adversarial examples”
  3. Nicolas Feppon “Advanced Neural Network Architectures for Neuroprosthetics”
  4. Tushar Goel “Disentangled Representations for Circuit RL goal vectors”
  5. Berk Mandiracioglu “Safe exploration algorithms for automatic optimization of Epidural Electrical Stimulation (EES)”
  6. Aleksandr Timofeev “Neural Architecture Search with Fairness”
  7. Bohan Wang “Stability of polynomial neural networks”
  8. Zhiyuan Wu “Representation learning in GAN”
  9. Zhenyu Zhu “Lipschitz constant for polynomial neural network”

2019/2020

Projects fall

  1. Tianyang Dong “Reinforcement learning for sampling in MRI”
  2. Jiahua Wu “Robust Training and Interpretability for Fundus Imaging”

Projects spring

  1. Patrick Ley “Library for variational inequalities and saddle point problems”
  2. Cyrille Masserey “Low-power analog-to-digital conversion for multichannel neural recording”
  3. Ambroise Renaud “Self-supervised learning for MRI sampling”
  4. Luca Viano “Robust Inverse Reinforcement Learning”
  5. Aras Yazgan “DeepMDP replication study”
  6. Adrian Ziegler “Keeping kids safe online with privacy-preserving machine learning”

2018/2019

Projects fall

  1. Doruk Erden “Learning-based neural network decoder”
  2. Niklaus Immer “Probability function maximization via smooth Beta processes”

Projects spring

  1. Fengyu Cai “Efficient Tensor Decomposition with Application to Topic Modelling”
  2. Sébastien Chevaleyre “Neural network-based hardware decoder design”
  3. Yu-Ting Huang “Solving two-player Markov games via SGLD/sampling mechanisms”
  4. Zhaodong Sun “Uncertainty Quantification on MRI”
  5. Doga Tekin “Optimizing RL Agents with Langevin Dynamics and Homotopy Based Optimization”

2017/2018

Projects fall

  1. Manuel Vonlanthen “Sampling optimization for 3D MRI and cluster computing”

Projects spring

  1. Xingce Bao “Learning-based compressive sampling (IoT)”
  2. Dilara Günay “Optimization for Tensor Decompositions”
  3. Jean Lamiot “Learning based Measurement Design for Biomedical Applications”
  4. Julien Leal “Learning Representations via Weak Supervision Using Deep Learning”
  5. Qianqian Qiao “Application of three operator splitting method for semidefinite programming”

2016/2017

Projects spring

  1. Jean Benvenuti “Distributed Convex Optimization with TensorFlow”

2015/2016

Projects spring

  1. Chen Liu “Novel Optimization methods for Deep Neural Networks by Non-Euclidean Geometry”
  2. Martin Wüthrich “Learning Based Compressive Sensing applied for Image Compression”

2014/2015

Projects fall

  1. Tasorn Sornnarong “Sparse covariance estimation for Markowitz portfolio design”

Projects spring

  1. Arnaud Reymond “Latent variable graphical model selection via convex optimization”
  2. Hugo von Däniken “Adaptive algorithms for regret minimization under partial monitoring”

2012/2013

Projects spring

  1. Gizem Tabak “Topic Model Discovery Using Low Rank Tensors”
  2. Ender Tinkir “Topic Model Discovery Using Low Rank Tensors”