EPFL Pre-NeurIPS 2024 Regional Event

A machine learning pre-event on December 5, 2024 – BC 01 and ELE 117

The Conference on Neural Information Processing Systems (NeurIPS) is one of the premier conferences in machine learning.

The EPFL ELLIS Lausanne Unit, hosted within the EPFL AI Center, is delighted to invite you to its 2024 pre-NeurIPS regional event on December 5, 2024.

While the conference is highly selective, 45 EPFL papers have been accepted to this year's conference. The list of NeurIPS 2024 accepted papers with at least one EPFL author is available below.

For the fourth consecutive year, we are organizing, prior to the main conference in Canada, a local mid-scale event on the EPFL campus, and inviting anyone with an accepted NeurIPS contribution – paper, talk, poster, workshop – to apply for one of our talk* and/or poster slots**.

The event will give all EPFL students, researchers, and partners with accepted papers at NeurIPS 2024 the opportunity to present their work, and will let all students and researchers interested in machine learning research connect and discuss science.

Researchers from all institutions (not only EPFL) are welcome to apply and/or attend the event***. Please note that travel costs cannot be reimbursed.

*Talk submissions are closed.

**Posters of workshop papers are accepted, and we welcome other published NeurIPS contributions (papers, talks, posters, workshops). Featured posters include, but are not limited to, accepted NeurIPS submissions. Space permitting, contributions to other major conferences can also be showcased.

Poster boards accommodate the standard A0 format (841 × 1189 mm) in portrait orientation.

***Please note that seats are limited; therefore, not all registrations from outside EPFL may be accepted.

Venue: BC 01 and ELE 117

Registration

Deadlines for submission and registration

  • Talk submission: CLOSED
  • Registration and poster submission: December 1

Program

08:00 – 09:00 Check-in and poster setup – BC Building

09:00 – 10:20 Spotlight Talks Session 1/3: 4 spotlight talks – BC 01

Session Chair – Prof. Volkan Cevher

  • Connecting Representation, Collapse, and Trust Issues in PPO, Skander Moalla
  • Why the Metric Backbone Preserves Community Structure, Maximilien Dreveton
  • Building on Efficient Foundations: Effectively Training LLMs with Structured Feedforward Layers, Xiuying Wei
  • SGD vs GD: Rank Deficiency in Linear Networks, Aditya Varre

15-minute comfort break

10:40 – 11:40 Spotlight Talks Session 2/3: 3 spotlight talks – BC 01

Session Chair – Prof. Lenka Zdeborová

  • Super Consistency of Neural Network Landscapes and Learning Rate Transfer, Lorenzo Noci
  • A Phase Transition between Positional and Semantic Learning in a Solvable Model of Dot-Product Attention, Freya Behrens
  • Fine-Tuning Personalization in Federated Learning to Mitigate Adversarial Clients, Abdellah El Mrini

11:40 – 14:00 Standing lunch & poster session – BC Hall (Atrium)

Transition to ELE 117

14:30 – 15:30 Spotlight Talks Session 3/3: 3 spotlight talks – ELE 117 AI Center Lounge (full house on-site, and online)

Session Chair – Dr. Dorina Thanou

  • 4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities, Oguzhan Fatih Kar
  • JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models, Maksym Andriushchenko
  • Toward Semantic Gaze Target Detection, Samy Tafasca

15:30 Coffee & networking – ELE 117 AI Center Lounge

Organizing Committee and Chairs

Prof. Florent Krzakala, EPFL (Information, Learning & Physics Lab.) – ELLIS Fellow

Prof. Volkan Cevher, EPFL (Information and Inference Systems Lab.) – ELLIS Fellow

Prof. Pascal Frossard, EPFL AI Center – ELLIS Fellow and Lausanne Unit Director

Prof. Martin Jaggi, EPFL (Machine Learning and Optimization Lab.) – ELLIS Fellow

Prof. Lenka Zdeborová, EPFL (Statistical Physics of Computation Lab.) – ELLIS Fellow

Dr. Dorina Thanou, EPFL AI Center – ELLIS Scholar

Coordination

Nicolas Machado, EPFL AI Center and ELLIS Lausanne Unit

Poster list

  1. “4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities” – Roman Bachmann, Oğuzhan Fatih Kar, David Mizrahi, Ali Garjani, Mingfei Gao, David Griffiths, Jiaming Hu, Afshin Dehghan, Amir Zamir
  2. “The Feature Speed Formula: a flexible approach to scale hyper-parameters of deep neural networks” – Lénaïc Chizat, Praneeth Netrapalli
  3. “No Representation, No Trust: Connecting Representation, Collapse, and Trust Issues in PPO” – Skander Moalla, Andrea Miele, Daniil Pyatko, Razvan Pascanu, Caglar Gulcehre
  4. “JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models” – Patrick Chao, Edoardo Debenedetti, Alexander Robey, Maksym Andriushchenko, Francesco Croce, Vikash Sehwag, Edgar Dobriban, Nicolas Flammarion, George J. Pappas, Florian Tramer, Hamed Hassani, Eric Wong
  5. “Loss Landscape Characterization of Neural Networks without Over-Parametrization” – Rustem Islamov, Niccolò Ajroldi, Antonio Orvieto, Aurelien Lucchi
  6. “Graph Edit Distance with General Costs Using Neural Set Divergence” – Eeshaan Jain, Indradyumna Roy, Saswat Meher, Soumen Chakrabarti, Abir De 
  7. “Toward Semantic Gaze Target Detection” – Samy Tafasca, Anshul Gupta, Victor Bros, Jean-Marc Odobez
  8. “MTGS: A Novel Framework for Multi-Person Temporal Gaze Following and Social Gaze Prediction” – Anshul Gupta, Samy Tafasca, Arya Farkhondeh, Pierre Vuillecard, Jean-Marc Odobez
  9. “This Too Shall Pass: Removing Stale Observations in Dynamic Bayesian Optimization” – Anthony Bardou, Patrick Thiran, Giovanni Ranieri
  10. “Why the Metric Backbone Preserves Community Structure” – Maximilien Dreveton, Charbel Chucri, Matthias Grossglauser, Patrick Thiran
  11. “Generative Modelling of Structurally Constrained Graphs” – Manuel Madeira, Clément Vignac, Dorina Thanou, and Pascal Frossard
  12. “Taming Nonconvex Stochastic Mirror Descent with General Bregman Divergence” – Ilyas Fatkhullin, Niao He
  13. “Contextual Bilevel Optimization and RL for Incentive Alignment” – Vinzenz Thoma, Barna Pasztor, Andreas Krause, Giorgia Ramponi, Yifan Hu
  14. “Super Consistency of Neural Network Landscapes and Learning Rate Transfer” – Lorenzo Noci, Alex Meterez, Thomas Hofmann, Antonio Orvieto
  15. “Building on Efficient Foundations: Effectively Training LLMs with Structured Feedforward Layers” – Xiuying Wei, Skander Moalla, Razvan Pascanu, Caglar Gulcehre
  16. “SGD vs GD: Rank Deficiency in Linear Networks” – Aditya Varre, Margarita Sagitova, Nicolas Flammarion
  17. “A Phase Transition between Positional and Semantic Learning in a Solvable Model of Dot-Product Attention” – Hugo Cui, Freya Behrens, Florent Krzakala, Lenka Zdeborová
  18. “MaskSDM: Adaptive species distribution modeling through data masking” – Robin Zbinden, Nina van Tiel, Gencer Sumbul, Benjamin Kellenberger, Devis Tuia
  19. “Building Conformal Prediction Intervals with Approximate Message Passing” – Lucas Clarté, Lenka Zdeborová
  20. “Bayes-optimal learning of an extensive-width neural network from quadratically many samples” – Antoine Maillard, Emanuele Troiani, Simon Martin, Florent Krzakala, Lenka Zdeborová
  21. “SynEHRgy: Synthesizing Mixed-Type Structured Electronic Health Records using Decoder-Only Transformers” – Hojjat Karami, David Atienza, Anisoara Paraschiv-Ionescu
  22. “Fast Proxy Experiment Design for Causal Effect Identification” – Sepehr Elahi, Sina Akbari, Jalal Etesami, Negar Kiyavash, Patrick Thiran
  23. “Flexible task abstractions emerge in linear networks with fast and bounded units” – Kai Sandbrink**, Jan Bauer**, Alexandra Proca**, Christopher Summerfield, Andrew Saxe, Ali Hummos** (**equal contribution, randomized order)
  24. “Fine-Tuning Personalization in Federated Learning to Mitigate Adversarial Clients” – Youssef Allouah, Abdellah El Mrini, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot
  25. “How Far Can Transformers Reason? The Globality Barrier and Inductive Scratchpad” – Emmanuel Abbe, Samy Bengio, Aryo Lotfi, Colin Sandon, Omid Saremi
  26. “Transformers on Markov data: Constant depth suffices” – Nived Rajaraman, Marco Bondaschi, Ashok Vardhan Makkuva, Kannan Ramchandran, Michael Gastpar
  27. “Local to Global: Learning Dynamics and Effect of Initialization for Transformers” – Ashok Vardhan Makkuva, Chanakya Ekbote, Marco Bondaschi, Adway Girish, Alliot Nagle, Hyeji Kim, Michael Gastpar
  28. “Score-Based Inverse Reinforcement Learning with Neurodynamical Models for Undulatory Swimming” – Astha Gupta, Shravan Tata, Auke Ijspeert
  29. “Principled Bayesian Optimisation in Collaboration with Human Experts” – Wenjie Xu, Masaki Adachi, Colin N. Jones, Michael A. Osborne
  30. “SAMPa: Sharpness-aware Minimization Parallelized” – Wanyun Xie, Thomas Pethick, Volkan Cevher
  31. “QWO: Speeding Up Permutation-Based Causal Discovery in LiGAMs” – Mohammad Shahverdikondori, Ehsan Mokhtarian, Negar Kiyavash
  32. “How do Active Dendrite Networks Mitigate Catastrophic Forgetting?” – Sankarshan Damle, Satya Lokam, Navin Goyal
  33. “Revisiting Ensembling in One-Shot Federated Learning” – Youssef Allouah, Akash Dhasade, Rachid Guerraoui, Nirupam Gupta, Anne-Marie Kermarrec, Rafael Pinot, Rafael Pires and Rishi Sharma
  34. “Don’t Think It Twice: Exploit Shift Invariance for Efficient Online Streaming Inference of CNNs” – Christodoulos Kechris, Jonathan Dan, Jose Miranda, David Atienza
  35. “Towards the Transferability of Rewards Recovered via Regularized Inverse Reinforcement Learning” – Andreas Schlaginhaufen, Maryam Kamgarpour
  36. “Asymptotic generalization error of a single-layer graph convolutional network” – Odilon Duranthon, Lenka Zdeborová
  37. “Optical Diffusion Models for Image Generation” – Ilker Oguz, Niyazi Dinc, Mustafa Yildirim, Junjie Ke, Innfarn Yoo, Qifei Wang, Feng Yang, Christophe Moser, Demetri Psaltis

Practical information