Projects are available on the following topics (not exclusive):
- Machine Learning and Applications
- Deep Learning Science
- Image Analysis and Computer Vision
- Graph Signal Processing and Network Machine Learning
New interface – still under development: it may be good to check both this link and the page below.
Non-exhaustive project list
Physics-Informed Graph Neural Networks for High-Resolution Mesh-Based Simulations
Graph neural networks (GNNs) have proven highly effective in modeling mesh-based simulations, offering data-driven alternatives to classical numerical solvers for physical systems [1]. These approaches can approximate complex systems governed by partial differential equations with impressive speed-ups over traditional methods. However, many existing methods, such as MeshGraphNets, struggle to scale to high-resolution simulations due to the computational bottleneck introduced by message passing over large graphs and the challenge of maintaining accuracy across diverse scales [2].
Building on recent advancements like MultiScale MeshGraphNets [2], this project aims to explore and address the challenges of high-resolution simulations using physics-informed GNNs. Specifically, the project will focus on:
Efficient Representation of Dynamics: Investigating whether accurate surrogate dynamics can be learned on coarser mesh representations to alleviate the message-passing bottleneck, with a focus on incorporating physics-inspired priors to improve both the efficacy and efficiency of the representations.
Hierarchical Modeling: Developing or extending multi-scale approaches that combine fine and coarse representations to improve accuracy and computational efficiency.
The initial task will center on modeling wind-tunnel dynamics, providing a practical and interpretable testbed for evaluating scalability and accuracy. The project may also generalize to broader physical simulation tasks.
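To make the message-passing bottleneck concrete, here is a minimal sketch of one edge-based message-passing step on a mesh graph in the spirit of [1]; the module, dimensions, and toy mesh are illustrative placeholders rather than the actual MeshGraphNets architecture.

```python
import torch
import torch.nn as nn

class MeshMessagePassing(nn.Module):
    """One edge-based message-passing step on a mesh graph (illustrative sketch)."""
    def __init__(self, node_dim: int, edge_dim: int, hidden: int = 128):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * node_dim + edge_dim, hidden),
                                      nn.ReLU(), nn.Linear(hidden, edge_dim))
        self.node_mlp = nn.Sequential(nn.Linear(node_dim + edge_dim, hidden),
                                      nn.ReLU(), nn.Linear(hidden, node_dim))

    def forward(self, x, edge_index, edge_attr):
        # x: [N, node_dim] node states, edge_index: [2, E] mesh edges, edge_attr: [E, edge_dim]
        src, dst = edge_index
        # Edge update from the two endpoint states and the current edge state.
        e = self.edge_mlp(torch.cat([x[src], x[dst], edge_attr], dim=-1))
        # Sum incoming messages at each receiver node.
        agg = torch.zeros(x.size(0), e.size(-1), dtype=x.dtype, device=x.device)
        agg.index_add_(0, dst, e)
        # Residual node update, as commonly done in mesh-based GNN simulators.
        return x + self.node_mlp(torch.cat([x, agg], dim=-1)), e

# Toy usage: 1000 mesh nodes and 3000 edges; a coarsened mesh would simply have fewer nodes/edges here.
x = torch.randn(1000, 16)
edge_index = torch.randint(0, 1000, (2, 3000))
edge_attr = torch.randn(3000, 8)
x, edge_attr = MeshMessagePassing(16, 8)(x, edge_index, edge_attr)
```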
References:
[1] Pfaff, T. et al., “Learning Mesh-Based Simulation with Graph Networks,” International Conference on Learning Representations, 2021.
[2] Fortunato, M. et al., “MultiScale MeshGraphNets,” AI4Science Workshop (ICML), 2022.
Requirements:
At least one deep learning course + prior experience with PyTorch.
Contact: [email protected]
RNA Language Models for the prediction of catalytic functions of mRNA regulatory regions
Messenger RNAs (mRNAs) are molecules within cells that carry genetic information from DNA to specific cellular locations, where they provide the instructions for producing particular proteins. These molecules consist of coding sequences, which are read according to the well-known genetic code to synthesize proteins, and non-coding regions such as the 5′ UTR, 3′ UTR, and introns. These non-coding regions play crucial regulatory roles, including controlling mRNA stability, translation efficiency, and localization within the cell.
For many decades, nucleic acids were primarily understood as molecules for information storage and transfer. However, recent studies have raised an intriguing question: what is the functional potential of the non-coding regions of mRNA, beyond simply transporting the genetic code? Here we will leverage RNA language models (RNA-LMs) to test whether specific non-coding portions of the transcriptome exhibit novel functions such as ribozyme activity. The work will consist of benchmarking different RNA-LMs for ribozyme function prediction, establishing baselines, and interpreting the results.
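As a possible first baseline before turning to RNA-LMs, a simple k-mer frequency model can be fit in a few lines; the sequences and labels below are random stand-ins, and the availability of a labeled ribozyme dataset is an assumption of this sketch.

```python
import random
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Toy stand-in data: random RNA sequences with binary "ribozyme" labels.
random.seed(0)
sequences = ["".join(random.choices("ACGU", k=120)) for _ in range(200)]
labels = [random.randint(0, 1) for _ in range(200)]

# 4-mer count features; character n-grams are a common sequence-only baseline.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(4, 4), lowercase=False)
X = vectorizer.fit_transform(sequences)

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```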
References:
[1] Luisier, R. et al. Intron retention and nuclear loss of SFPQ are molecular hallmarks of ALS. Nat. Commun. 9, 2010 (2018).
[2] Petric-Howe, M. et al. Physiological intron retaining transcripts in the cytoplasm abound during human motor neurogenesis. Genome Res. (2022) doi:10.1101/gr.276898.122.
[3] Tyzack, G. E. et al. Aberrant cytoplasmic intron retention is a blueprint for RNA binding protein mislocalization in VCP-related amyotrophic lateral sclerosis. Brain vol. 144 1985–1993 Preprint at https://doi.org/10.1093/brain/awab078 (2021).
[4] Luisier, R., Andreassi, C., Fournier, L. & Riccio, A. The predicted RNA-binding protein regulome of axonal mRNAs. Genome Res. 33, 1497–1512 (2023).
[5] Andreassi, C. et al. Cytoplasmic cleavage of IMPA1 3’ UTR is necessary for maintaining axon integrity. Cell Rep. 34, 108778 (2021).
[6] Schmidt, C. M. & Smolke, C. D. A convolutional neural network for the prediction and forward design of ribozyme-based gene-control elements. Elife 10, (2021).
[7] Ziff, O. J. et al. Integrated transcriptome landscape of ALS identifies genome instability linked to TDP-43 pathology. Nat. Commun. 14, 2176 (2023).
Requirements:
Candidates should have strong mathematical and computational skills. Candidates should be familiar with Python/R and with the Linux environment. Experience with sequencing data and machine learning is an asset. Candidates need not have a biological background, but should have a strong desire to work directly with experimental biologists.
Contact: [email protected], [email protected] or [email protected]
Interpretable Deep Learning towards cardiovascular disease prediction
Cardiovascular disease (CVD) is the leading cause of death in most European countries and is responsible for more than one in three of all potential years of life lost. Myocardial ischemia and infarction are most often the result of obstructive coronary artery disease (CAD), and their early detection is of prime importance. Such detection could be based on data such as coronary angiography (CA), an X-ray-based imaging technique used to assess the coronary arteries. However, such prediction is a non-trivial task, as i) data is typically noisy and of small volume, and ii) CVDs typically result from the complex interplay of local and systemic factors ranging from cellular signaling to vascular wall histology and fluid hemodynamics. The goal of this project is to apply advanced machine learning techniques, and in particular deep learning, in order to detect culprit lesions from CA images and eventually predict myocardial infarction. Incorporating domain-specific constraints into existing learning algorithms might be needed.
References:
[1] Yang et al., Deep learning segmentation of major vessels in X-ray coronary angiography, Nature Scientific Reports, 2019.
[2] Du et al., Automatic and multimodal analysis for coronary angiography: training and validation of a deep learning architecture, Eurointervention 2020.
Requirements:
Good knowledge of machine learning and deep learning architectures. Experience with one of the deep learning libraries, and in particular PyTorch, is necessary.
Contact: [email protected]
Deep learning towards X-ray CT imaging becoming the gold standard for heart attack diagnosis
Cardiovascular disease (CVD) is the leading cause of death in most European countries and is responsible for more than one in three of all potential years of life lost. Myocardial infarction (MI), commonly known as a heart attack, is most often the result of obstructive coronary artery disease (CAD). The gold standard today for diagnosing a severe stenosis (the obstruction of the artery) in patients with symptoms of a cardiac event is through coronary angiography (CA). CA is an invasive procedure, in which a catheter is inserted into the body through an artery towards the heart. Over the last decade there have been attempts at diagnosing severe stenosis by extracting various measurements [1,2] from the non-invasive X-ray CT imaging. However, the gold standard for the final decision making for the treatment of patients still requires the invasive CA imaging. The goal of this project is to apply advanced machine learning techniques, and in particular deep learning, in order to predict if a certain suspected area shown in a CT image is considered a severe stenosis according to the CA gold standard. This will hopefully pave the way towards making the non-invasive CT imaging the gold standard for MI diagnosis.
References:
[1] Zreik, Majd, et al. “A recurrent CNN for automatic detection and classification of coronary artery plaque and stenosis in coronary CT angiography.” IEEE transactions on medical imaging 38.7 (2018): 1588-1598.
[2] Hong, Youngtaek, et al. “Deep learning-based stenosis quantification from coronary CT angiography.” Medical Imaging 2019: Image Processing. Vol. 10949. International Society for Optics and Photonics, 2019.
Requirements:
Good knowledge of machine learning and deep learning architectures. Experience with one of the deep learning libraries, and in particular PyTorch, is necessary.
Contact: [email protected] and [email protected]
Learning novel predictive representation by concept bottleneck disentanglement
Concepts are human-defined features used to explain the decision-making of black-box models with human-interpretable explanations. Such methods are especially useful in the medical domain, where we wish to explain the decision of a model trained to diagnose a medical condition (e.g., arthritis grade) from images (e.g., X-rays) with a concept a physician would look for in the image (e.g., bone spurs). Over the last few years, various methods have been developed to extract concept explanations and interpret models post hoc [1,2,3,4]. These methods assume that the models implicitly learn those concepts from the data; however, this is not guaranteed.
In more recent work, [5] introduced concept bottleneck models (CBMs), which exploit access to labels of human-interpretable concepts, as well as the downstream task label, to learn concepts explicitly. These models are trained to predict the task label y, given input x, through a bottleneck layer L that is forced to learn k labeled concepts. The authors show that, although the parametric space of the bottleneck layer is constrained, these models achieve predictive performance comparable to equivalent unconstrained baselines.
In this project we propose to combine the concept bottleneck parameters with unconstrained ones in order to learn a hybrid representation that takes both into account. Moreover, we want the unconstrained part of the bottleneck representation to be disentangled from the concept parameters to allow new information to be learned. To this end, we will experiment with different information-bottleneck disentanglement approaches, as proposed in [6,7].
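A minimal sketch of such a hybrid bottleneck is given below; the module names, dimensions, and the disentanglement penalty (left as a comment) are illustrative assumptions, not a prescribed architecture.

```python
import torch
import torch.nn as nn

class HybridConceptBottleneck(nn.Module):
    """Bottleneck = k supervised concept units + m unconstrained units (illustrative)."""
    def __init__(self, backbone, feat_dim, k_concepts, m_free, n_classes):
        super().__init__()
        self.backbone = backbone                             # any feature extractor
        self.to_concepts = nn.Linear(feat_dim, k_concepts)   # supervised with concept labels
        self.to_free = nn.Linear(feat_dim, m_free)           # unconstrained residual capacity
        self.head = nn.Linear(k_concepts + m_free, n_classes)

    def forward(self, x):
        h = self.backbone(x)
        c_logits = self.to_concepts(h)        # to be trained against concept labels (BCE)
        z_free = self.to_free(h)              # to be disentangled from the concepts
        y_logits = self.head(torch.cat([torch.sigmoid(c_logits), z_free], dim=-1))
        return y_logits, c_logits, z_free

# Toy usage with a stand-in backbone; the disentanglement penalty (e.g. inspired by [6, 7]) is only sketched.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
model = HybridConceptBottleneck(backbone, feat_dim=128, k_concepts=5, m_free=8, n_classes=4)
y_logits, c_logits, z_free = model(torch.rand(2, 3, 32, 32))
# loss = CE(y_logits, y) + lambda_c * BCE(c_logits, c) + lambda_d * disentangle(c_logits, z_free)
```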
References:
[1] Bau, D., Zhou, B., Khosla, A., Oliva, A., and Torralba, A. Network dissection: Quantifying interpretability of deep visual representations. In Computer Vision and Pattern Recognition (CVPR), pp. 6541–6549, 2017.
[2] Kim, B., Wattenberg, M., Gilmer, J., Cai, C., Wexler, J., Viegas, F., et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In International Conference on Machine Learning (ICML), pp. 2668–2677, 2018.
[3] Zhou, B., Sun, Y., Bau, D., and Torralba, A. Interpretable basis decomposition for visual explanation. In European Conference on Computer Vision (ECCV), pp. 119–134, 2018.
[4] Ghorbani, A., Wexler, J., Zou, J. Y., and Kim, B. Towards automatic concept-based explanations. In Advances in Neural Information Processing Systems (NeurIPS), pp. 9277–9286, 2019.
[5] Koh, P. W., Nguyen, T., Tang, Y. S., Mussmann, S., Pierson, E., Kim, B., and Liang, P. Concept bottleneck models. In International Conference on Machine Learning (ICML), pp. 5338–5348, 2020. PMLR.
[6] Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. beta-VAE: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations (ICLR), 2017.
[7] Klys, Jack, Jake Snell, and Richard Zemel. “Learning latent subspaces in variational autoencoders.” Advances in neural information processing systems 31 (2018).
Requirements:
Experience with machine and deep learning projects and with ML/DL libraries, preferably PyTorch. Knowledge of information theory is a plus.
Contact: [email protected]
Leveraging Biological Knowledge and Gene Ontologies to Improve Unsupervised Clustering in Single-Cell RNA-Sequencing and Spatial Transcriptomics
Single-cell RNA-sequencing (scRNA-seq) and spatial transcriptomics have emerged as breakthrough technologies for characterizing cellular heterogeneity within human tissues, including cancer biopsies. Unsupervised clustering based on the detailed transcriptomes of individual cells/tissue regions is a central component of identifying and characterizing novel cell types [1]. In cancer biology, identifying rare cell populations is highly relevant, as it can reveal drivers of therapy resistance. However, technical variability, high dimensionality (curse of dimensionality), and sparsity (high drop-out rate) in single-cell RNA-sequencing data [2] can lead to the emergence of spurious clusters, posing a significant challenge.
This collaborative research project between the Genomics and Health Informatics group at IDIAP and the LTS4 lab at EPFL aims to address this limitation by focusing on the structure of the biological system, specifically how genes collaborate to control cellular and tissue-scale functions. Novel graph-based feature representation learning methods will be proposed for individual cells, possibly using Graph Neural Networks (GNNs). Then, building on these new representations, improved cell clustering algorithms will be developed and validated against recent baseline methods [3] in their ability to (1) recover rare escapees driving tumor resistance and (2) identify spots that exhibit similar morphological structure and organisation.
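As a simple illustration of how a graph prior can enter the clustering step, the sketch below constrains agglomerative clustering with a cell-cell kNN graph; the expression matrix is a random stand-in, and in the actual project learned graph-based representations would replace the raw features.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import AgglomerativeClustering

# Toy stand-in for a cells-by-genes expression matrix (log-normalised counts).
rng = np.random.default_rng(0)
X = np.log1p(rng.poisson(1.0, size=(500, 2000)).astype(float))

# Cell-cell kNN graph: clustering is only allowed to merge connected cells.
knn = kneighbors_graph(X, n_neighbors=15, mode="connectivity", include_self=False)

clusters = AgglomerativeClustering(n_clusters=8, connectivity=knn, linkage="ward").fit_predict(X)
print(np.bincount(clusters))   # cluster sizes; rare populations would show up as small clusters
```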
References:
[1] Zhang, S., Li, X., Lin, J., Lin, Q. & Wong, K.-C. Review of single-cell RNA-seq data clustering for cell-type identification and characterization. RNA 29, 517–530 (2023).
[2] Kiselev, V. Y., Andrews, T. S. & Hemberg, M. Publisher Correction: Challenges in unsupervised clustering of single-cell RNA-seq data. Nat. Rev. Genet. 20, 310 (2019).
[3] Du, L., Han, R., Liu, B., Wang, Y. & Li, J. ScCCL: Single-Cell Data Clustering Based on Self-Supervised Contrastive Learning. IEEE/ACM Trans. Comput. Biol. Bioinform. 20, 2233–2241 (2023).
Requirements:
Good knowledge of Python and a deep learning framework of choice (PyTorch, TensorFlow, JAX); sufficient familiarity with statistics and machine learning, preferably also with Graph Neural Networks. Good knowledge of biology, or a strong interest in learning biology, is a plus.
Contact: [email protected], [email protected] or [email protected]
Unlocking the Complexity of Amyotrophic Lateral Sclerosis: Integration of Biological Knowledge and Clinical Data for Genetic Insights
Amyotrophic Lateral Sclerosis (ALS) is a complex and devastating neurodegenerative condition characterized by a diverse array of clinical presentations and progression trajectories [1]. There is growing evidence to suggest that molecular subtypes, driven by independent disease mechanisms, contribute to the observed clinical heterogeneity [2]. However, our understanding of the genetic architecture and the corresponding molecular or cellular events that underlie distinct subtypes has been limited.
The goal of this collaborative research project between the Genomics and Health Informatics group at IDIAP and the LTS4 lab at EPFL is, through the integration of genomic and clinical data, to gain deeper insights into the genetic underpinnings of the disease and pinpoint relevant molecular pathways. The project will be based on publicly available data from large-scale consortia (AnswerALS and ProjectMinE). Specifically, graph-based approaches such as graph neural networks (GNNs) [3,4] could be applied to:
1) Represent molecular pathway knowledge (GO ontology or Protein-Protein interactions) to identify accumulation of genetic mutations in specific pathways, and
2) Enable improved patient stratification and delineate the pertinent genetic mutations and molecular pathways that underlie distinct ALS subtypes.
References:
[1] Pires, S., Gromicho, M., Pinto, S., de Carvalho, M., Madeira, S.C. (2020). Patient Stratification Using Clinical and Patient Profiles: Targeting Personalized Prognostic Prediction in ALS. In: Rojas, I., Valenzuela, O., Rojas, F., Herrera, L., Ortuño, F. (eds) Bioinformatics and Biomedical Engineering. IWBBIO 2020. Lecture Notes in Computer Science, vol 12108. Springer, Cham.
[2] Eshima, J., O’Connor, S.A., Marschall, E. et al. Molecular subtypes of ALS are associated with differences in patient prognosis. Nat Commun 14, 95 (2023).
[3] Manchia M, Cullis J, Turecki G, Rouleau GA, Uher R, Alda M. The impact of phenotypic and genetic heterogeneity on results of genome wide association studies of complex diseases. PLoS One. 2013 Oct 11;8(10):e76295.
[4] Liang, B., Gong, H., Lu, L. et al. Risk stratification and pathway analysis based on graph neural network and interpretable algorithm. BMC Bioinformatics 23, 394 (2022).
Requirements:
Candidates should have strong mathematical and computational skills. Candidates should be familiar with Python/R and with the Linux environment. Experience with sequencing data and machine learning is an asset. Candidates need not have a biological background, but should have a strong desire to work directly with experimental biologists.
Contact: [email protected], [email protected], [email protected] or [email protected]
Cell-Graph Analysis with Graph Neural Networks for Immunotherapy
With the advance of imaging systems, reasonably accurate cell phenomaps, i.e., spatial maps of cells accompanied by cell phenotypes, have become more accessible. As the spatial organization of immune cells within the tumor microenvironment is believed to be a strong indicator of cancer progression [1], data-driven analysis of cell phenomaps to discover new biomarkers that help with cancer prognosis is an important and emerging research area. One straightforward idea is to use cell-graphs [2], which can later be used as input to a Graph Neural Network, for example for survival prediction [3]. However, such datasets pose many algorithmic and computational challenges, given the large variations in both the number of cells (from a few tens of thousands on a slide to a few million) and their structure, as well as the class imbalance if the objective is some form of classification. In this project, we will explore different ways of modeling cell graphs for hierarchical representation learning that has prognostic value.
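The sketch below illustrates the basic pipeline assumed here: build a cell graph from cell positions, run one message-passing round over phenotype features, and pool to a slide-level prediction; all data, names, and dimensions are toy placeholders.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import kneighbors_graph

# Toy phenomap: 1000 cells with 2-D positions and 8-dimensional phenotype vectors.
torch.manual_seed(0)
pos = torch.rand(1000, 2)
feat = torch.rand(1000, 8)

# Cell graph: connect each cell to its spatial nearest neighbours.
adj = kneighbors_graph(pos.numpy(), n_neighbors=6, mode="connectivity", include_self=False)
edge_index = torch.as_tensor(np.vstack(adj.nonzero()), dtype=torch.long)

class CellGraphClassifier(nn.Module):
    """One mean-aggregation message-passing round + global mean pooling (illustrative)."""
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden)
        self.lin2 = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x, edge_index):
        src, dst = edge_index
        h = torch.relu(self.lin1(x))
        agg = torch.zeros_like(h).index_add_(0, dst, h[src])                      # sum neighbour messages
        deg = torch.zeros(h.size(0)).index_add_(0, dst, torch.ones(src.size(0))).clamp(min=1)
        h = torch.relu(self.lin2(agg / deg.unsqueeze(-1)))                        # mean aggregation
        return self.head(h.mean(dim=0, keepdim=True))                             # slide-level prediction

logits = CellGraphClassifier(8, 64, 2)(feat, edge_index)
```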
References:
[1] Anderson, Nicole M, and M Celeste Simon. “The tumor microenvironment.” Current biology: CB vol. 30,16 (2020): R921-R925. doi:10.1016/j.cub.2020.06.081
[2] Yener, Bulent. “Cell-Graphs: Image-Driven Modeling of Structure-Function Relationship.” Communications of the ACM, January 2017, Vol. 60 No. 1, Pages 74-84. doi:10.1145/2960404
[3] Yanan Wang, Yu Guang Wang, Changyuan Hu, Ming Li, Yanan Fan, Nina Otter, Ikuan Sam, Hongquan Gou, Yiqun Hu, Terry Kwok, John Zalcberg, Alex Boussioutas, Roger J. Daly, Guido Montúfar, Pietro Liò, Dakang Xu, Geoffrey I. Webb, Jiangning Song. “Cell graph neural networks enable the digital staging of tumor microenvironment and precise prediction of patient survival in gastric cancer.” medRxiv 2021.09.01.21262086; doi: https://doi.org/10.1101/2021.09.01.21262086
Requirements:
Good knowledge of Python and a deep learning framework of choice (PyTorch, TensorFlow, JAX); sufficient familiarity with statistics and machine learning, preferably also with Graph Neural Networks. Prior experience with DataFrames (e.g., pandas) is a plus.
Contact: [email protected]
Graph Latent Diffusion Models
Graph generative models have recently undergone huge developments, mostly due to the adoption of diffusion models in the graph setting [1]. Their capability of capturing higher-order relations in graph datasets has led to impressively accurate models of complex graph distributions, with scientific applications ranging from molecular generation [1] to digital pathology [2]. Despite their remarkable expressivity, current state-of-the-art graph generative models are limited to generating small graphs, as the unordered nature of graphs makes it difficult to scale generation efficiently. In this project, we will develop a graph-specific latent diffusion model to address this scaling issue. We will take inspiration from the success of latent diffusion models for image generation [5], where the diffusion process occurs in a lower-dimensional space, and thus more efficiently, and the final sample is then decoded back into a high-resolution image.
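The core mechanism we plan to borrow is sketched below: the standard DDPM forward noising and epsilon-prediction loss applied to a low-dimensional graph latent. The encoder and denoiser are placeholder MLPs (without time conditioning), not a proposed architecture.

```python
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def noise_latent(z0, t, eps):
    # Standard DDPM forward process: z_t = sqrt(abar_t) * z_0 + sqrt(1 - abar_t) * eps
    a = alpha_bar[t].sqrt().view(-1, 1)
    s = (1.0 - alpha_bar[t]).sqrt().view(-1, 1)
    return a * z0 + s * eps

# Placeholder components: a graph encoder producing a small latent, and a latent denoiser.
encoder = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 16))
denoiser = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))   # no time conditioning here

graph_repr = torch.randn(32, 256)            # stand-in for pooled graph features
z0 = encoder(graph_repr)
t = torch.randint(0, T, (32,))
eps = torch.randn_like(z0)
zt = noise_latent(z0, t, eps)
loss = ((denoiser(zt) - eps) ** 2).mean()    # epsilon-prediction training objective
```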
References:
[1] Vignac, C. et al., “Digress: Discrete denoising diffusion for graph generation”, International Conference on Learning Representations, 2022
[2] Madeira, M. et al., “Tertiary lymphoid structures generation through graph-based diffusion”, GRAIL (MICCAI workshop), 2023
[3] Limnios, S., “Sagess: Sampling graph denoising diffusion model for scalable graph generation”, arXiv preprint arXiv:2306.16827, 2023
[4] Karami, M., “Higen: Hierarchical graph generative networks”, arXiv preprint arXiv:2305.19337, 2023.
[5] Rombach, R. et al. “High-resolution image synthesis with latent diffusion models.” Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2022.
Requirements:
Mandatory: at least one deep learning course and prior experience with PyTorch. Prior knowledge of graph deep learning and/or diffusion models is a plus.
Contact: [email protected]
Scalable Graph Generation via Link Prediction
Generative graph models [1][2] face scalability challenges due to the need to predict the existence or type of edges between all node pairs. Some approaches use sparse graph transformers to achieve performance comparable to full graph transformers, but their space complexity remains theoretically quadratic, only linearly reduced by a parameter $\lambda < 1$ [3]. To better address this issue, one possible solution is to use message-passing layers; however, current experiments show that message passing slightly underperforms transformers [3][4]. Therefore, this project has three goals: 1) reproduce the results of SparseDiff on top of the current state-of-the-art graph generation model, 2) enhance the existing message-passing and link-prediction modules [5] or the sparse transformer [6] to match the performance of full transformers, and 3) explore other scalable graph generation methods, such as latent diffusion [4].
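One way to keep memory sub-quadratic, sketched below under toy assumptions, is to score only the observed edges plus a sampled subset of negative node pairs instead of the full N×N adjacency.

```python
import torch
import torch.nn as nn

def sample_pairs(num_nodes, pos_edge_index, num_neg):
    """Score observed edges plus a random subset of negative pairs instead of all N^2 pairs."""
    neg = torch.randint(0, num_nodes, (2, num_neg))
    pairs = torch.cat([pos_edge_index, neg], dim=1)
    labels = torch.cat([torch.ones(pos_edge_index.size(1)), torch.zeros(num_neg)])
    return pairs, labels

class PairScorer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, node_emb, pairs):
        return self.mlp(torch.cat([node_emb[pairs[0]], node_emb[pairs[1]]], dim=-1)).squeeze(-1)

# Memory now scales with |E| + num_neg rather than N^2.
node_emb = torch.randn(10_000, 64)            # e.g. produced by a message-passing encoder
pos = torch.randint(0, 10_000, (2, 50_000))   # stand-in for the observed edge list
pairs, labels = sample_pairs(10_000, pos, num_neg=50_000)
loss = nn.functional.binary_cross_entropy_with_logits(PairScorer(64)(node_emb, pairs), labels)
```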
References:
[1] Vignac, C. et al., Digress: Discrete denoising diffusion for graph generation, International Conference on Learning Representations, 2022
[2] Qin, Madeira et al., DeFoG: Discrete Flow Matching for Graph Generation, Arxiv Preprint, 2024
[3] Qin et al., Sparse training of discrete diffusion models for graph generation, Arxiv Preprint, 2023
[4] Yang et al., Graphusion: Latent Diffusion for Graph Generation, IEEE Transactions on Knowledge and Data Engineering, 2024
[5] Cai et al., On the Connection Between MPNN and Graph Transformer, PMLR, 2023
[6] Shirzad et al., EXPHORMER: Sparse Transformers for Graphs, PMLR, 2023
Requirements:
Knowledge of Python and sufficient familiarity with statistics and machine learning. Prior experience with PyTorch is strongly recommended.
Contact: [email protected]
Benchmarking Vision Foundation Models for Digital Pathology
Large-scale self-supervised learning models, a.k.a. foundation models, are becoming increasingly popular due to their ability to learn universal representations useful for a variety of downstream tasks. Such models learned from tissue images are revolutionizing the field of digital pathology [1,2], improving generalizability and transferability to a wide range of challenging diagnostic tasks and clinical workflows. As cells are the key components of these tissues, we hypothesize that modeling them as cell graphs, and then processing them with appropriate foundation models, could lead to improved performance. In this ongoing project, we aim to design such approaches and develop a comprehensive benchmark of existing vision models, taking into account several self-supervised learning strategies [3, 4]. The student, who will need to be proficient in PyTorch, will first help complete the benchmark pipeline and then explore new graph-based methods.
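A minimal building block of such a benchmark is sketched below: frozen embeddings from a pretrained vision backbone followed by a linear probe. The untrained ResNet-50 and random patches are stand-ins for a pathology foundation model (e.g. along the lines of [1, 2]) and real tissue tiles.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50
from sklearn.linear_model import LogisticRegression

# Stand-in backbone; in the actual benchmark a pretrained pathology model with frozen weights is used.
backbone = resnet50(weights=None)
backbone.fc = nn.Identity()   # expose the 2048-d embedding instead of classification logits
backbone.eval()

# Toy patches and labels; in practice these are tissue tiles with diagnostic labels.
patches = torch.rand(16, 3, 224, 224)
labels = torch.randint(0, 2, (16,)).numpy()

with torch.no_grad():
    emb = backbone(patches).numpy()

# Linear probing: a logistic-regression head on the frozen embeddings.
probe = LogisticRegression(max_iter=1000).fit(emb, labels)
print("train accuracy:", probe.score(emb, labels))
```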
References:
[1] Wang, Xiyue, et al. “Transformer-based unsupervised contrastive learning for histopathological image classification.” Medical image analysis 81 (2022): 102559.
[2] Chen, Richard J., et al. “Towards a general-purpose foundation model for computational pathology.” Nature Medicine 30.3 (2024): 850-862.
[3] He, Kaiming, et al. “Masked autoencoders are scalable vision learners.” Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
[4] Caron, Mathilde, et al. “Emerging properties in self-supervised vision transformers.” Proceedings of the IEEE/CVF international conference on computer vision. 2021.
Requirements:
Good knowledge of machine learning and deep learning architectures. Experience with Python and PyTorch is strongly recommended.
Contact: [email protected], [email protected], [email protected] or [email protected]
Hypergraph neural networks for digital pathology
Hypergraphs are generalisations of graphs in which an edge can connect multiple nodes instead of just two. Hypergraphs are powerful in that they allow a broader set of nodes to be involved in each interaction. The recent development of hypergraph neural networks [1][2] has opened up an interesting application area in digital pathology. Hypergraph neural networks in digital pathology [3][4] can be thought of as an extension of hierarchical graph representations in digital pathology, which operate on tissue and cell graphs [5]. In digital pathology, hypergraph neural networks have so far been used mainly for survival prediction.
In this project, we will explore hypergraph neural networks for node-level prediction tasks on the OCELOT dataset [6] and for cancer-type prediction from TCGA datasets. We will leverage existing hypergraph neural network implementations from the https://github.com/pyt-team/TopoModelX library. Further, we will investigate explainability methods to understand which hypergraph components are important for the predictions.
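For intuition, the sketch below implements a basic hypergraph convolution in the spirit of [1], operating on a dense node-hyperedge incidence matrix with unit hyperedge weights; in the project we would rely on TopoModelX rather than this toy layer.

```python
import torch
import torch.nn as nn

class HypergraphConv(nn.Module):
    """Spectral-style hypergraph convolution in the spirit of HGNN [1] (illustrative)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, X, H):
        # X: [N, in_dim] node features; H: [N, E] incidence matrix (H[v, e] = 1 if node v is in hyperedge e).
        Dv = H.sum(dim=1).clamp(min=1)        # node degrees
        De = H.sum(dim=0).clamp(min=1)        # hyperedge degrees
        Dv_inv_sqrt = Dv.pow(-0.5).diag()
        De_inv = De.pow(-1.0).diag()
        # X' = D_v^{-1/2} H D_e^{-1} H^T D_v^{-1/2} X Theta
        A = Dv_inv_sqrt @ H @ De_inv @ H.t() @ Dv_inv_sqrt
        return torch.relu(A @ self.theta(X))

# Toy example: 6 nodes grouped into 3 hyperedges.
H = torch.tensor([[1, 0, 0], [1, 1, 0], [1, 1, 0], [0, 1, 1], [0, 0, 1], [0, 0, 1]], dtype=torch.float)
X = torch.randn(6, 8)
out = HypergraphConv(8, 4)(X, H)
```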
References:
[1] Feng, Yifan, et al. “Hypergraph neural networks.” Proceedings of the AAAI conference on artificial intelligence. Vol. 33. No. 01. 2019.
[2] Telyatnikov, Lev, et al. “Hypergraph neural networks through the lens of message passing: a common perspective to homophily and architecture design.” arXiv preprint arXiv:2310.07684 (2023).
[3] Di, Donglin, et al. “Big-hypergraph factorization neural network for survival prediction from whole slide image.” *IEEE Transactions on Image Processing* 31 (2022): 1149-1160.
[4] Benkirane, Hakim, et al. “Hyper-AdaC: adaptive clustering-based hypergraph representation of whole slide images for survival analysis.” *Machine Learning for Health*. PMLR, 2022.
[5] Pati, Pushpak, et al. “Hierarchical graph representations in digital pathology.” *Medical image analysis* 75 (2022): 102264.
[6] OCELOT grand challenge: https://ocelot2023.grand-challenge.org/evaluation-metric/
Requirements:
Experience with deep learning projects and experience with PyTorch and knowledge of graph neural networks is recommended. Knowledge of digital pathology is a plus.
Contact: [email protected].
Prototypical graph neural networks for medical image applications
Prototypical graph neural networks are interpretable by design [1][2][3]. For graph-level classification, they work by identifying important prototypes for each class. Recent works applying prototypical neural networks have shown promise in medical image analysis applications [4][5][6]. Yet, the use of prototypical graph neural networks for medical image analysis remains largely unexplored.
In this project, we will explore the use of different prototypical graph neural networks (including ProtGNN and PIGNN) on cell graphs constructed from medical images for graph-level classification tasks. In particular, we will focus on the explainability of the different architectures and their applicability to medical images. We will work on publicly available datasets (such as ACROBAT, CAMELYON and other GrandChallenge datasets).
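The core prototype mechanism is sketched below: graph embeddings are compared to learnable class prototypes, and the per-prototype similarities serve both as classification evidence and as the explanation. The similarity function and sizes are illustrative choices, not those of any specific paper.

```python
import torch
import torch.nn as nn

class PrototypeHead(nn.Module):
    """Classify a graph embedding by its similarity to learnable class prototypes (illustrative)."""
    def __init__(self, emb_dim, n_classes, protos_per_class=3):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_classes * protos_per_class, emb_dim))
        self.class_of_proto = torch.arange(n_classes).repeat_interleave(protos_per_class)
        self.n_classes = n_classes

    def forward(self, g):                        # g: [B, emb_dim] graph embeddings
        d = torch.cdist(g, self.prototypes)      # distance to every prototype
        sim = torch.log((d + 1.0) / (d + 1e-4))  # similarity that grows as the distance shrinks
        logits = torch.zeros(g.size(0), self.n_classes).index_add_(1, self.class_of_proto, sim)
        return logits, sim                       # sim can be inspected per prototype as the explanation

logits, evidence = PrototypeHead(emb_dim=64, n_classes=2)(torch.randn(5, 64))
```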
References:
[1] Zhang, Zaixi, et al. “Protgnn: Towards self-explaining graph neural networks.” Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 8. 2022.
[2] Ragno, Alessio, Biagio La Rosa, and Roberto Capobianco. “Prototype-based interpretable graph neural networks.” IEEE Transactions on Artificial Intelligence (2022).
[3] Marc, Christiansen, et al. “How Faithful are Self-Explainable GNNs?.” Learning on Graphs Conference 2023. 2023.
[4] Rymarczyk, Dawid, et al. “ProtoMIL: multiple instance learning with prototypical parts for whole-slide image classification.” Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Cham: Springer International Publishing, 2022.
[5] Yu, Jin-Gang, et al. “Prototypical multiple instance learning for predicting lymph node metastasis of breast cancer from whole-slide pathological images.” Medical Image Analysis 85 (2023): 102748.
[6] Deuschel, Jessica, et al. “Multi-prototype few-shot learning in histopathology.” Proceedings of the IEEE/CVF international conference on computer vision. 2021.
Requirements:
Experience with deep learning projects and experience with PyTorch and knowledge of graph neural networks is recommended. Knowledge of digital pathology is a plus.
Contact: [email protected].
Interpretable machine learning in personalised medicine
Modern machine learning models mostly act as black boxes, and their decisions cannot be easily inspected by humans. To trust automated decision-making, we need to understand the reasons behind predictions and gain insight into the models. This can be achieved by building models that are interpretable. Recently, different methods have been proposed for data classification, such as augmenting the training set with useful features [1], visualizing intermediate features in order to understand the input stimuli that excite individual feature maps at any layer in the model [2-3], or introducing logical rules into the network that guide the classification decision [4], [5]. The aim of this project is to study existing algorithms that attempt to interpret deep architectures by studying the structure of their inner-layer representations and, based on these methods, to find patterns for classification decisions along with coherent explanations. The studied algorithms will mostly be considered in the context of personalised medicine applications.
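As an example of the kind of method involved, the gradient-based saliency map of [2] reduces to a few lines; the untrained ResNet and random input below are placeholders for an actual model and image.

```python
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()   # untrained stand-in model
x = torch.rand(1, 3, 224, 224, requires_grad=True)         # stand-in input image

score = model(x)[0, 0]                       # score of one class of interest
score.backward()
saliency = x.grad.abs().max(dim=1).values    # per-pixel saliency map, as in Simonyan et al. [2]
```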
[1] R. Collobert, J. Weston, L. Bottou, M. M. Karlen, K. Kavukcuoglu, and P. Kuksa, “Natural language processing (almost) from scratch,”J. Mach. Learn. Res., vol. 12, pp. 2493–2537, Nov. 2011.
[2] K. Simonyan, A. Vedaldi, and A. Zisserman, “Deep inside convolutional networks: Visualising image classification models and saliency maps,” arXiv:1312.6034, 2013.
[3] L. M. Zintgraf, T. S. Cohen, T. Adel, and M. Welling, “Visualizing deep neural network decisions: Prediction difference analysis,” arXiv:1702.04595, 2017.
[4] Z. Hu, X. Ma, Z. Liu, E. Hovy, and E. Xing, “Harnessing deep neural networks with logic rules,” in ACL, 2016.
[5] Z. Hu, Z. Yang, R. Salakhutdinov, and E. Xing, “Deep neural networks with massive learned knowledge,” in Conf. on Empirical Methods in Natural Language Processing, EMNLP, 2016.
Requirements:
Familiarity with machine learning and deep learning architectures. Experience with one of the deep learning libraries and good knowledge of the corresponding coding language (preferably Python) is a plus.
Contact: [email protected]
Explainable Ejection Fraction Estimation from cardiac ultrasound videos
Quantitative assessment of cardiac function is essential for the diagnosis of cardiovascular diseases (CVD). In particular, one of the most crucial measurements of heart function in clinical routine is the left ventricular ejection fraction (LVEF), which quantifies the fraction of left-ventricular blood volume ejected between the end-diastolic and end-systolic phases of one cardiac cycle. The manual assessment of LVEF, which depends on accurate frame identification and ventricular annotation, is associated with significant inter-observer variability. The EF has been predicted using a variety of deep learning-based algorithms; however, most of them lack reliable explainability and have low accuracy due to unrealistic data augmentations. This project aims to (1) automatically estimate the EF from ultrasound videos using public datasets [1, 2], and (2) provide explainability, such as weights over the frames of an ultrasound video or attention maps over the pixels of each frame, for the LVEF estimation.
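For reference, the target quantity is computed from the end-diastolic volume (EDV) and end-systolic volume (ESV) as sketched below; the example volumes are illustrative.

```python
def ejection_fraction(edv_ml, esv_ml):
    """LVEF (%) = (EDV - ESV) / EDV * 100, from end-diastolic and end-systolic volumes."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

print(ejection_fraction(edv_ml=120.0, esv_ml=50.0))   # ~58.3%, within the typical normal range
```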
[1] Ouyang, D., He, B., Ghorbani, A. et al. Video-based AI for beat-to-beat assessment of cardiac function. Nature 580, 252–256 (2020).
[2] S. Leclerc, E. Smistad, J. Pedrosa, A. Ostvik, et al. “Deep Learning for Segmentation using an Open Large-Scale Dataset in 2D Echocardiography” in IEEE Transactions on Medical Imaging, vol. 38, no. 9, pp. 2198-2210, Sept. 2019.
Requirements:
Good knowledge of deep learning and experience with ML/DL libraries, preferably pytorch.
Contact: [email protected]
Modeling and learning dynamic graphs
Graphs provide a compact representation of complex systems describing, for example, biological, financial, or social phenomena. Graphs are often considered static objects, although in many applications the underlying systems vary over time: individuals in social networks make new connections, and drugs change how components of biological networks interact. Modelling graphs as temporal objects thus allows us to better describe and understand the dynamical behavior of these physical systems.
In this project we aim to model or learn the dynamics of temporal graphs using ideas from optimal transport. Optimal transport is a powerful mathematical tool for describing the dynamics of different types of data [1], and also has tight connections with diffusion based generative models [2].
Depending on the background of the student, the goal of this project is to use either optimal transport based graph distances [3], or graph generative models [4] to better understand temporal graphs.
References:
[1] G. Peyré, M. Cuturi. “Computational Optimal Transport: With Applications to Data Science”. Foundations and trends in machine learning. 2019.
[2] V. De Bortoli, J. Thornton, J. Heng, and A. Doucet. “Diffusion Schrödinger bridge with applications to score-based generative modeling”. Advances in Neural Information Processing Systems. 2021.
[3] H. Petric Maretic, M. El Gheche, G. Chierchia, P. Frossard. “GOT: an optimal transport framework for graph comparison”. Advances in Neural Information Processing Systems. 2019.
[4] C. Vignac, I. Krawczuk, A. Siraudin, B. Wang, V. Cevher, P. Frossard. “Digress: Discrete denoising diffusion for graph generation“. In Proceedings of the 11th International Conference on Learning Representations. 2023.
Comparing structured data with fused Gromov-Wasserstein distance
In the era of big data, it becomes crucial to quantify the similarity between data sets. A useful method for comparing data distributions is the Wasserstein distance [1]. A related metric, the Gromov-Wasserstein distance, can be used to compare structured objects such as graphs [2,3].
The two methods have been combined into the so-called fused Gromov-Wasserstein distance, which compares graph-structured data by taking into account both the underlying graph structures and the feature information [4].
In this project we will explore the fused Gromov-Wasserstein distance and its ability to compare structured data. Interesting directions for the project are, e.g., to incorporate new types of feature information or to identify subgraph structures.
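As a starting point, the open-source POT (Python Optimal Transport) library provides a fused Gromov-Wasserstein implementation; the snippet below compares two random attributed graphs, assuming a recent POT release (the exact signature should be checked against the POT documentation).

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)

# Two toy attributed graphs: structure matrices C1, C2 and node features F1, F2.
n1, n2 = 10, 12
C1 = rng.random((n1, n1)); C1 = (C1 + C1.T) / 2
C2 = rng.random((n2, n2)); C2 = (C2 + C2.T) / 2
F1, F2 = rng.random((n1, 3)), rng.random((n2, 3))

M = ot.dist(F1, F2)               # feature cost between nodes of the two graphs
p, q = ot.unif(n1), ot.unif(n2)   # uniform node weights

# alpha trades off the structure (GW) term against the feature (Wasserstein) term.
fgw_dist = ot.gromov.fused_gromov_wasserstein2(M, C1, C2, p, q, loss_fun="square_loss", alpha=0.5)
print(fgw_dist)
```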
References:
[1] G. Peyré, M. Cuturi. “Computational Optimal Transport: With Applications to Data Science”. Foundations and trends in machine learning. 2019.
[2] F. Mémoli. “Gromov-Wasserstein distances and the metric approach to object matching” Foundations of computational mathematics. 2011
[3] D. Alvarez-Melis, T. Jaakkola, S. Jegelka. Structured optimal transport. In International Conference on Artificial Intelligence and Statistics. 2018.
[4] T. Vayer, L. Chapel, R. Flamary, R. Tavenard, N. Courty. “Optimal Transport for structured data with application on graphs”. International Conference on Machine Learning (ICML). 2019
Requirements:
Some experience with machine learning and graphs is a plus.
Gromov-Wasserstein projections for Graph Neural Network
In this work, we are interested in learning representations of attributed graphs in an end-to-end fashion, for both node-level and graph-level tasks, using Optimal Transport across graph spaces. One existing approach consists of designing kernels that leverage topological properties of the observed graphs. Alternative approaches relying on Graph Neural Networks aim at learning vectorial representations of the graphs and their nodes that encode the graph structure (i.e., graph representation learning [1]). These architectures typically learn node embeddings via local permutation-invariant transformations following two dual mechanisms: i) the message-passing (MP) principle followed by a global pooling, or ii) iteratively performing hierarchical pooling that induces MP via graph coarsening principles [2, 3].
In this project, we aim at designing GNNs that leverage recent advances in Optimal Transport (OT) across spaces, naturally providing novel MP mechanisms or their dual hierarchical counterpart [4]. We will study these models in depth, following a rigorous methodology, in order to position the approaches with respect to some of the main concerns of the current GNN literature. First, we will study these approaches on well-known synthetic datasets used to assess the expressiveness limits of GNNs, i.e., their ability to distinguish graphs or their nodes in homophilic and heterophilic contexts. Finally, we will benchmark these approaches on real-world datasets commonly used by the research community. PyTorch and PyTorch Geometric implementations of the initial frameworks and experiments will be provided so that students can easily familiarise themselves with the tools involved (especially OT solvers).
References:
[1] Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher Ré, and Kevin Murphy. Machine learning on graphs: A model and comprehensive taxonomy. The Journal of Machine Learning Research, 23(1):3840–3903, 2022.
[2] Daniele Grattarola, Daniele Zambon, Filippo Maria Bianchi, and Cesare Alippi. Understanding pooling in graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2022.
[3] Chuang Liu, Yibing Zhan, Jia Wu, Chang Li, Bo Du, Wenbin Hu, Tongliang Liu, and Dacheng Tao. Graph pooling for graph neural networks: Progress, challenges, and opportunities. arXiv preprint arXiv:2204.07321, 2022.
[4] Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, and Nicolas Courty. Semi-relaxed Gromov-Wasserstein divergence with applications on graphs. In International Conference on Learning Representations, 2022.
Robust Graph Dictionary Learning
Dictionary learning is a key tool for representation learning that explains the data as a linear combination of a few basic elements. Yet, this analysis is complex in the context of graph learning, as graphs usually belong to different metric spaces. The seminal works [1, 2] filled this gap by proposing new Graph Dictionary Learning approaches that use the Gromov-Wasserstein (GW) distance from Optimal Transport (OT), or a relaxation of this distance [3], as the data-fitting term. Later on, [4] identified that these methods exhibit high sensitivity to edge noise and proposed a variant of GW to address this, leveraging robust optimization tools that can be seen as a modification of the primal GW problem.
This project will first aim at analyzing the results in [4] from different graph-theoretic perspectives, with potential contributions to the open-source package Python Optimal Transport [5]. Then we will investigate new models to improve the performance of the latter. This project naturally involves challenges in terms of solver design, implementation, and theoretical analysis at the interface of OT and graph theory, which can be studied to a greater or lesser extent depending on the student's wishes.
References:
[1] Hongteng Xu. Gromov-Wasserstein factorization models for graph clustering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 6478–6485, 2020.
[2] Cédric Vincent-Cuaz, Titouan Vayer, Rémi Flamary, Marco Corneli, and Nicolas Courty. Online graph dictionary learning. In International conference on machine learning, pages 10564–10574. PMLR, 2021.
[3] Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, and Nicolas Courty. Semi-relaxed Gromov-Wasserstein divergence and applications on graphs. In International Conference on Learning Representations, 2021.
[4] Weijie Liu, Jiahao Xie, Chao Zhang, Makoto Yamada, Nenggan Zheng, and Hui Qian. Robust graph dictionary learning. In The Eleventh International Conference on Learning Representations, 2022.
[5] Rémi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z Alaya, Aurélie Boisbunon, Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, et al. Pot: Python optimal transport. Journal of Machine Learning Research, 22(78):1–8, 2021.
Scalable template-based GNN with Optimal Transport divergences
This work aims at investigating novel Optimal Transport (OT)-based operators for Graph Neural Network (GNN) representation learning, to address graph-level tasks, e.g., classification and regression [1]. GNNs typically learn node embeddings via local permutation-invariant transformations using message passing, and then perform a global pooling step to obtain the graph representation [2]. Recently, [3] proposed a novel global pooling relational concept that led to state-of-the-art performance, placing distances to some learnable graph templates at the core of the graph representation, using the Fused Gromov-Wasserstein distance [4]. The latter results from solving a complex graph-matching problem, which greatly enhances GNN expressivity but comes with a high computational cost that limits its use to datasets of small graphs.
This project will aim at enhancing the scalability of this kind of approach from moderate to large graphs, while guaranteeing gains in terms of expressivity and/or generalization performance. It naturally includes both empirical and theoretical challenges, which can be studied to a greater or lesser extent depending on the student's wishes.
References:
[1] Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher Ré, and Kevin Murphy. Machine learning on graphs: A model and comprehensive taxonomy. The Journal of Machine Learning Research, 23(1):3840–3903, 2022.
[2] Chuang Liu, Yibing Zhan, Jia Wu, Chang Li, Bo Du, Wenbin Hu, Tongliang Liu, and Dacheng Tao. Graph pooling for graph neural networks: Progress, challenges, and opportunities. arXiv preprint arXiv:2204.07321, 2022.
[3] Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, and Nicolas Courty. Template based graph neural network with optimal transport distances. Advances in Neural Information Processing Systems, 35:11800–11814, 2022.
[4] Vayer Titouan, Nicolas Courty, Romain Tavenard, and Rémi Flamary. Optimal transport for structured data with application on graphs. In International Conference on Machine Learning, pages 6275–6284. PMLR, 2019.
Analysis of brain networks over time
We are interested in detecting, and possibly predicting, epileptic seizures using graphs extracted from EEG measurements.
Seizures occur as abnormal neuronal activity. They can affect the whole brain or localized areas and may propagate over time. The main non-invasive diagnostic tool is EEG, which measures voltage fluctuations over a person's scalp. These fluctuations correspond to the electrical activity caused by the joint activation of groups of neurons. EEG recordings can span several hours and are currently inspected "by hand" by highly specialized doctors. ML approaches could improve this analysis, and network approaches have shown promising results.
Our data consist of multiple graphs, each providing a snapshot of brain activity over a time window. Considering consecutive time windows, we obtain stochastic processes on graphs, for which we would like to identify change points. We will learn graph representations and study their evolution over time to identify changes in regime. You are expected to compare different models in terms of performance and explainability. We are particularly interested in inherently explainable methods, using graph features and classical time-series analysis. A comparison with deep learning models could be valuable as well.
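As a simple, inherently explainable baseline, the sketch below computes one graph feature per time window and flags windows that deviate from a trailing baseline; the connectivity matrices, the injected regime change, and the detection rule are all toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: one functional-connectivity matrix (19 EEG channels) per time window.
n_windows, n_ch = 300, 19
graphs = rng.random((n_windows, n_ch, n_ch)) * 0.3
graphs[180:] += 0.2                      # simulated regime change (e.g. seizure onset) at window 180

# One interpretable graph feature per window: mean connectivity strength.
density = graphs.mean(axis=(1, 2))

# Flag windows whose feature deviates strongly from a trailing baseline (simple z-score rule).
win, z = 30, np.zeros(n_windows)
for t in range(win, n_windows):
    past = density[t - win:t]
    z[t] = (density[t] - past.mean()) / (past.std() + 1e-6)
print("detected change points:", np.where(z > 4)[0][:5])
```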
The content and workload are flexible based on the student's profile and time involvement (semester project vs. MSc thesis).
Requirements:
– Time series (preferably)
– Python (numpy, sklearn)
Implementation of Hierarchical Training of Neural Networks
Deep Neural Networks (DNNs) provide state-of-the-art accuracy for many tasks, such as image classification. Since most of these networks require large computational resources and memory, they are generally executed on cloud systems, which satisfy these requirements. However, this increases the execution latency, due to the high cost of communicating the data to the cloud, and it raises privacy concerns. These issues are even more critical during the training phase, as the backward pass is naturally more resource-hungry and the required dataset is huge.
Hierarchical training [1][2] is a novel approach to implementing the training phase of DNNs in edge-cloud frameworks by dividing the computation between the two devices. It aims to keep the communication and computation costs within an acceptable range, reducing the training time while keeping the model accuracy high. Moreover, these methods inherently preserve the privacy of users.
In this project, the goal is to implement a new hierarchical training method, developed at CSEM/LTS4 using the PyTorch framework, on a two-device edge-cloud system. The edge device (e.g., an Nvidia Jetson board [3]) has fewer resources than the cloud, which is essentially a high-end GPU system. We aim to train popular neural networks (such as VGG) on this two-device system.
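The basic mechanics of such a split are sketched below on a single machine, using two torch devices as stand-ins for the Jetson and the cloud GPU; the actual project involves real inter-device communication and the method developed at CSEM/LTS4, which is not shown here.

```python
import torch
import torch.nn as nn

edge_dev = torch.device("cpu")                                             # stands in for the Jetson
cloud_dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")   # stands in for the cloud GPU

# Split a small VGG-like model: the first block runs on the edge, the rest on the cloud.
edge_part = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)).to(edge_dev)
cloud_part = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 10)).to(cloud_dev)

opt = torch.optim.SGD(list(edge_part.parameters()) + list(cloud_part.parameters()), lr=0.01)

x = torch.rand(8, 3, 32, 32, device=edge_dev)
y = torch.randint(0, 10, (8,), device=cloud_dev)

# Forward: activations cross the edge->cloud boundary (the "communication" step).
act = edge_part(x)
logits = cloud_part(act.to(cloud_dev))
loss = nn.functional.cross_entropy(logits, y)

# Backward: autograd routes gradients back across the boundary automatically.
opt.zero_grad()
loss.backward()
opt.step()
```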
References:
[1] D. Liu, X. Chen, Z. Zhou, and Q. Ling, "HierTrain: Fast Hierarchical Edge AI Learning with Hybrid Parallelism in Mobile-Edge-Cloud Computing", arXiv:2003.09876 [cs], Mar. 2020. [Online]. Available: http://arxiv.org/abs/2003.09876
[2] A. E. Eshratifar, M. S. Abrishami, and M. Pedram, "JointDNN: An Efficient Training and Inference Engine for Intelligent Mobile Cloud Computing Services", arXiv:1801.08618 [cs], Feb. 2020. [Online]. Available: http://arxiv.org/abs/1801.08618
[3] ‘NVIDIA Embedded Systems for Next-Gen Autonomous Machines’, NVIDIA. https://www.nvidia.com/en-gb/autonomous-machines/embedded-systems/ (accessed Apr. 14, 2022).
Requirements:
Experience programming on Nvidia Jetson is required. Good knowledge of deep learning in PyTorch is necessary. Experience with TensorFlow is a plus.
Contact: yamin.sepehri@epfl.ch
Black-box attack against LLMs
Recently, Large Language Models (LLMs) such as ChatGPT have seen widespread deployment. These models exhibit advanced general capabilities but pose risks of misuse by bad actors. LLMs are trained for safety and harmlessness, yet they remain susceptible to adversarial misuse: it has been shown that these systems can be forced to elicit undesired behavior [1].
Adversarial examples have been investigated in different fields of natural language processing, such as text classification [2]. The goal of this project is to extend such attacks to LLMs.
References:
[1] Jones et al., "Automatically Auditing Large Language Models via Discrete Optimization", ICML, 2023.
[2] Zhang et al., "Adversarial attacks on deep-learning models in natural language processing: A survey", ACM TIST, 2020.
Requirements:
Good knowledge of Python. Sufficient familiarity with machine/deep learning, and NLP systems. Experience with one of the deep learning libraries and in particular PyTorch.
Contact: [email protected]
TransFool against LLMs
Recently, Large Language Models (LLMs) such as ChatGPT have been increasingly deployed. However, these models can pose risks of misuse by bad actors. It has been shown that these models are prone to producing objectionable content and are vulnerable to adversarial misuse [1].
Recently, we proposed TransFool to generate adversarial examples against Neural Machine Translation models [2]. In this project, we aim to extend TransFool to reveal vulnerabilities of LLMs and force them to produce objectionable information.
References:
[1] Jones et al., “Automatically Auditing Large Language Models via Discrete Optimization”, ICML, 2023.
[2] Sadrizadeh et al., “TransFool: An Adversarial Attack against Neural Machine Translation Models”, TMLR, 2023.
Requirements:
Good knowledge of Python. Sufficient familiarity with machine/deep learning, and NLP systems. Experience with one of the deep learning libraries and in particular PyTorch.
Contact: [email protected]
Adversarial attacks against neural machine translation models
In recent years, DNN models have been used for machine translation tasks. The significant performance of Neural Machine Translation (NMT) systems has led to their growing usage in diverse areas. However, DNN models have been shown to be highly vulnerable to intentional or unintentional manipulations, which are called adversarial examples [1]. Although adversarial examples have been investigated in the field of text classification [2], they have not been well studied for NMT.
The goal of this project is to extend popular methods of generating adversarial examples against text classifiers, e.g. TextFooler [3] and BERT-Attack [4], to the case of NMT.
References:
[1] Szegedy et al., "Intriguing properties of neural networks", ICLR 2014.
[2] Zhang et al., “Adversarial attacks on deep-learning models in natural language processing: A survey”, ACM TIST, 2020.
[3] Jin et al., “Is bert really robust? a strong baseline for natural language attack on text classification and entailment”, AAAI 2020.
[4] Li et al., “BERT-ATTACK: Adversarial attack against BERT using BERT”, EMNLP 2020.
Requirements:
Good knowledge of Python. Sufficient familiarity with machine/deep learning, and NLP systems. Experience with PyTorch or TensorFlow is a plus.
Contact: [email protected]