In this project, we plan to explore methods for matching and aligning shapes extracted from biomedical images, using shape- and texture-based features. This will let us analyze how these structures vary over time and compare them across patients.
We can align two shapes by maximizing the overlap between them under a global transformation, and the alignment criterion can additionally incorporate textural features. We plan to start with an optimization-based approach and then move to a learning-based approach (i.e., deep learning models). The segmentations of the structures will be obtained with a pre-trained Voxel2Mesh network [1]. You can find more information about the project in this document.
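As a concrete illustration of the optimization-based starting point, the sketch below aligns two 2D binary masks by searching for a global rigid transformation (rotation plus translation) that maximizes their overlap. This is a minimal sketch under stated assumptions, not the project's actual method: it assumes SciPy is available, uses a simple soft-overlap score rather than a texture-aware criterion, and the helper names (`align`, `transform_mask`, `soft_overlap`) are hypothetical.

```python
# Minimal sketch: rigid 2D mask alignment by maximizing overlap.
# Assumes SciPy; helper names are illustrative, not from the project.
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

def soft_overlap(fixed, moved):
    """Sum of the element-wise product of two soft (float) masks."""
    return float(np.sum(fixed * moved))

def transform_mask(mask, angle, tx, ty):
    """Rotate a mask about its center by `angle` (radians), then shift by (tx, ty)."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    center = np.array(mask.shape) / 2.0
    # affine_transform "pulls" values: output[o] = input[rot @ o + offset]
    offset = center - rot @ center + np.array([tx, ty])
    return affine_transform(mask, rot, offset=offset, order=1)

def align(fixed, moving, x0=(0.0, 0.0, 0.0)):
    """Search (angle, tx, ty) maximizing the soft overlap of `moving` with `fixed`."""
    loss = lambda p: -soft_overlap(fixed, transform_mask(moving, *p))
    res = minimize(loss, x0=np.asarray(x0, dtype=float), method="Nelder-Mead")
    return res.x, -res.fun
```

The same structure carries over to 3D volumes (a 3x3 rotation matrix instead of 2x2) and to other overlap criteria such as a soft Dice or IoU score; a learning-based variant would replace the per-pair optimization with a network that predicts the transformation directly.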
Contact
Udaranga Wickramasinghe
Sina Honari
Reference
[1] U. Wickramasinghe, E. Remelli, G. Knott, P. Fua, "Voxel2Mesh: 3D Mesh Model Generation from Volumetric Data," MICCAI 2020, Lima, Peru, October 4-8, 2020.