EPFL Pre-NeurIPS 2024 Regional Event
The Conference on Neural Information Processing Systems (NeurIPS) is one of the premier conferences in machine learning.
The EPFL ELLIS Lausanne Unit, hosted within the EPFL AI Center, is delighted to invite you to its 2024 pre-NeurIPS regional event on December 5, 2024.
Although the conference is highly selective, 45 EPFL papers have been accepted to this year's conference. The list of NeurIPS 2024 accepted papers with at least one EPFL author is available below.
For the fourth consecutive year, we have decided to organize a local mid-scale event on the EPFL campus prior to the conference in Canada, and to invite anyone with an accepted contribution at NeurIPS – papers, talks, posters, workshops – to apply for one of our talk* and/or poster slots**.
The event will give all EPFL students, researchers, and partners with accepted papers at NeurIPS 2024 the opportunity to present their work, and will allow all students and researchers interested in machine learning research to connect and discuss science.
Researchers from all institutions (not only EPFL) are welcome to apply and/or attend the event***. Please note that travel costs cannot be reimbursed.
*Talk submissions are closed.
**Posters of workshop papers are accepted, and we welcome other published NeurIPS contributions (papers, talks, posters, workshops). Featured posters include, but are not limited to, accepted NeurIPS submissions. Space permitting, contributions to other major conferences can also be showcased.
Poster boards accommodate the standard A0 format (841 × 1189 mm) in portrait orientation.
***Please note that seats are limited; therefore, not all registrations from outside EPFL may be accepted.
Registration
Deadlines for submission and registration
- Talk submission: CLOSED
- Registration and poster submission: December 1
Program
08:00 – 09:00 – Check-in and poster setup – BC Building
09:00 – 10:20 – Spotlight Talks Session 1/3 (4 talks) – BC01
Session Chair – Prof. Volkan Cevher
- No Representation, No Trust: Connecting Representation, Collapse, and Trust Issues in PPO, Skander Moalla
- Why the Metric Backbone Preserves Community Structure, Maximilien Dreveton
- Building on Efficient Foundations: Effectively Training LLMs with Structured Feedforward Layers, Xiuying Wei
- SGD vs GD: Rank Deficiency in Linear Networks, Aditya Varre
15-minute comfort break
10:40 – 11:40 – Spotlight Talks Session 2/3 (3 talks) – BC01
Session Chair – Prof. Lenka Zdeborová
- Super Consistency of Neural Network Landscapes and Learning Rate Transfer, Lorenzo Noci
- A Phase Transition between Positional and Semantic Learning in a Solvable Model of Dot-Product Attention, Freya Behrens
- Fine-Tuning Personalization in Federated Learning to Mitigate Adversarial Clients, Abdellah El Mrini
11:40 – 14:00 – Standing lunch & poster session – BC Hall (Atrium)
Transition to ELE 117
14:30 – 15:30 – Spotlight Talks Session 3/3 (3 talks) – ELE 117 AI Center Lounge (on-site seating full) and online
Session Chair – Dr. Dorina Thanou
- 4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities, Oğuzhan Fatih Kar
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models, Maksym Andriushchenko
- Toward Semantic Gaze Target Detection, Samy Tafasca
15:30 – Coffee & networking – ELE 117 AI Center Lounge
Organizing Committee and Chairs
Prof. Florent Krzakala, EPFL (Information, Learning & Physics Lab.) – ELLIS Fellow
Prof. Volkan Cevher, EPFL (Information and Inference Systems Lab.) – ELLIS Fellow
Prof. Pascal Frossard, EPFL AI Center – ELLIS Fellow and Lausanne Unit Director
Prof. Martin Jaggi, EPFL (Machine Learning and Optimization Lab.) – ELLIS Fellow
Prof. Lenka Zdeborová, EPFL (Statistical Physics of Computation Lab) – ELLIS Fellow
Dr. Dorina Thanou, EPFL AI Center – ELLIS Scholar
Coordination
Nicolas Machado, EPFL AI Center and ELLIS Lausanne Unit
Poster list
- “4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities” – Roman Bachmann, Oğuzhan Fatih Kar, David Mizrahi, Ali Garjani, Mingfei Gao, David Griffiths, Jiaming Hu, Afshin Dehghan, Amir Zamir
- “The Feature Speed Formula: a flexible approach to scale hyper-parameters of deep neural networks” – Lénaïc Chizat, Praneeth Netrapalli
- “No Representation, No Trust: Connecting Representation, Collapse, and Trust Issues in PPO” – Skander Moalla, Andrea Miele, Daniil Pyatko, Razvan Pascanu, Caglar Gulcehre
- “JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models” – Patrick Chao, Edoardo Debenedetti, Alexander Robey, Maksym Andriushchenko, Francesco Croce, Vikash Sehwag, Edgar Dobriban, Nicolas Flammarion, George J. Pappas, Florian Tramer, Hamed Hassani, Eric Wong
- “Loss Landscape Characterization of Neural Networks without Over-Parametrization” – Rustem Islamov, Niccolò Ajroldi, Antonio Orvieto, Aurelien Lucchi
- “Graph Edit Distance with General Costs Using Neural Set Divergence” – Eeshaan Jain, Indradyumna Roy, Saswat Meher, Soumen Chakrabarti, Abir De
- “Toward Semantic Gaze Target Detection” – Samy Tafasca, Anshul Gupta, Victor Bros, Jean-Marc Odobez
- “MTGS: A Novel Framework for Multi-Person Temporal Gaze Following and Social Gaze Prediction” – Anshul Gupta, Samy Tafasca, Arya Farkhondeh, Pierre Vuillecard, Jean-Marc Odobez
- “This Too Shall Pass: Removing Stale Observations in Dynamic Bayesian Optimization” – Anthony Bardou, Patrick Thiran, Giovanni Ranieri
- “Why the Metric Backbone Preserves Community Structure” – Maximilien Dreveton, Charbel Chucri, Matthias Grossglauser, Patrick Thiran
- “Generative Modelling of Structurally Constrained Graphs” – Manuel Madeira, Clément Vignac, Dorina Thanou, Pascal Frossard
- “Taming Nonconvex Stochastic Mirror Descent with General Bregman Divergence” – Ilyas Fatkhullin, Niao He
- “Contextual Bilevel Optimization and RL for Incentive Alignment” – Vinzenz Thoma, Barna Pasztor, Andreas Krause, Giorgia Ramponi, Yifan Hu
- “Super Consistency of Neural Network Landscapes and Learning Rate Transfer” – Lorenzo Noci, Alex Meterez, Thomas Hofmann, Antonio Orvieto
- “Building on Efficient Foundations: Effectively Training LLMs with Structured Feedforward Layers” – Xiuying Wei, Skander Moalla, Razvan Pascanu, Caglar Gulcehre
- “SGD vs GD: Rank Deficiency in Linear Networks” – Aditya Varre, Margarita Sagitova, Nicolas Flammarion
- “A Phase Transition between Positional and Semantic Learning in a Solvable Model of Dot-Product Attention” – Hugo Cui, Freya Behrens, Florent Krzakala, Lenka Zdeborová
- “MaskSDM: Adaptive species distribution modeling through data masking” – Robin Zbinden, Nina van Tiel, Gencer Sumbul, Benjamin Kellenberger, Devis Tuia
- “Building Conformal Prediction Intervals with Approximate Message Passing” – Lucas Clarté, Lenka Zdeborová
- “Bayes-optimal learning of an extensive-width neural network from quadratically many samples” – Antoine Maillard, Emanuele Troiani, Simon Martin, Florent Krzakala, Lenka Zdeborová
- “SynEHRgy: Synthesizing Mixed-Type Structured Electronic Health Records using Decoder-Only Transformers” – Hojjat Karami, David Atienza, Anisoara Paraschiv-Ionescu
- “Fast Proxy Experiment Design for Causal Effect Identification” – Sepehr Elahi, Sina Akbari, Jalal Etesami, Negar Kiyavash, Patrick Thiran
- “Flexible task abstractions emerge in linear networks with fast and bounded units” – Kai Sandbrink**, Jan Bauer**, Alexandra Proca**, Christopher Summerfield, Andrew Saxe, Ali Hummos** (**equal contribution, randomized order)
- “Fine-Tuning Personalization in Federated Learning to Mitigate Adversarial Clients” – Youssef Allouah, Abdellah El Mrini, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot
- “How Far Can Transformers Reason? The Globality Barrier and Inductive Scratchpad” – Emmanuel Abbe, Samy Bengio, Aryo Lotfi, Colin Sandon, Omid Saremi
- “Transformers on Markov data: Constant depth suffices” – Nived Rajaraman, Marco Bondaschi, Ashok Vardhan Makkuva, Kannan Ramchandran, Michael Gastpar
- “Local to Global: Learning Dynamics and Effect of Initialization for Transformers” – Ashok Vardhan Makkuva, Chanakya Ekbote, Marco Bondaschi, Adway Girish, Alliot Nagle, Hyeji Kim, Michael Gastpar
- “Score-Based Inverse Reinforcement Learning with Neurodynamical Models for Undulatory Swimming” – Astha Gupta, Shravan Tata, Auke Ijspeert
- “Principled Bayesian Optimisation in Collaboration with Human Experts” – Wenjie Xu, Masaki Adachi, Colin N. Jones, Michael A. Osborne
- “SAMPa: Sharpness-aware Minimization Parallelized” – Wanyun Xie, Thomas Pethick, Volkan Cevher
- “QWO: Speeding Up Permutation-Based Causal Discovery in LiGAMs” – Mohammad Shahverdikondori, Ehsan Mokhtarian, Negar Kiyavash
- “How do Active Dendrite Networks Mitigate Catastrophic Forgetting?” – Sankarshan Damle, Satya Lokam, Navin Goyal
- “Revisiting Ensembling in One-Shot Federated Learning” – Youssef Allouah, Akash Dhasade, Rachid Guerraoui, Nirupam Gupta, Anne-Marie Kermarrec, Rafael Pinot, Rafael Pires, Rishi Sharma
- “Don’t Think It Twice: Exploit Shift Invariance for Efficient Online Streaming Inference of CNNs” – Christodoulos Kechris, Jonathan Dan, Jose Miranda, David Atienza
- “Towards the Transferability of Rewards Recovered via Regularized Inverse Reinforcement Learning” – Andreas Schlaginhaufen, Maryam Kamgarpour
- “Asymptotic generalization error of a single-layer graph convolutional network” – Odilon Duranthon, Lenka Zdeborová
- “Optical Diffusion Models for Image Generation” – Ilker Oguz, Niyazi Dinc, Mustafa Yildirim, Junjie Ke, Innfarn Yoo, Qifei Wang, Feng Yang, Christophe Moser, Demetri Psaltis
Practical information
- Getting to EPFL
- For any questions regarding the event, please contact [email protected]