Student Projects

Available Projects

The optimal integration of simultaneously acquired datasets from the 2D domain (imagery) and the 3D domain (lidar point clouds) is a straightforward task when geometric constraints are considered. In the absence of system calibration, however, the fusion of the two optical datasets becomes uncertain and often fails to meet user expectations. Deep neural networks have managed to establish a spatial relationship between the 2D and the 3D domain, but current architectures have not yet been evaluated on long-range (> 200 m) aerial datasets.

Airborne laser scanning (ALS) is a widely adopted remote sensing technology, renowned for its efficient and precise modeling of forests. This is attributed to its capability to accurately describe the geometric features of trees within a forest. However, automating the identification of individual trees and their species from ALS data poses a formidable challenge. Traditional closed-form clustering algorithms yield inaccurate segmentation results, and deep learning-based methods demand substantial amounts of labeled training data, which is impractical to establish manually.

This project aims to tackle the challenges associated with object labeling and accuracy by employing unsupervised and self-supervised approaches. Unsupervised methods are utilized to obtain a preliminary segmentation of the ALS data. Subsequently, these roughly segmented tree examples will be hand-labeled and employed to train a classifier, facilitating the identification of well-segmented tree individuals. In the final step, these labels will be used to calibrate and refine state-of-the-art segmentation and classification algorithms, employing a semi-supervised approach.
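To give a flavor of the preliminary, unsupervised segmentation step, the sketch below clusters tree-top candidates by horizontal proximity with a naive single-linkage rule. This is purely illustrative: the function name, the distance threshold, and the greedy algorithm are assumptions for the example, not the project's actual method.

```python
from math import hypot

def cluster_points(points, radius=2.0):
    """Greedy single-linkage clustering of 2D (x, y) candidates.

    A crude stand-in for unsupervised pre-segmentation of ALS data:
    points closer than `radius` (metres) end up in the same cluster.
    """
    labels = [-1] * len(points)
    current = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i] = current
        stack = [i]
        while stack:
            j = stack.pop()
            for k in range(len(points)):
                if labels[k] == -1 and hypot(points[j][0] - points[k][0],
                                             points[j][1] - points[k][1]) <= radius:
                    labels[k] = current
                    stack.append(k)
        current += 1
    return labels

# Two well-separated "trees": the sketch finds two clusters.
pts = [(0.0, 0.0), (0.5, 0.3), (10.0, 10.0), (10.4, 9.8)]
print(cluster_points(pts))  # → [0, 0, 1, 1]
```

In practice, density-based methods with adaptive parameters would replace this O(n²) toy, but the output (one label per point) is what the subsequent hand-labeling step would consume.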

This project utilizes deep learning techniques to improve the accuracy of LiDAR point clouds in remote sensing tasks. By establishing 3D correspondences, the project aims to refine point cloud accuracy and enable more precise terrain mapping, object recognition, change detection, and environmental monitoring.
The objectives include analyzing and implementing state-of-the-art techniques and evaluating their effectiveness on real, large-scale LiDAR point cloud datasets. Challenges such as noisy data, occlusions, and computational efficiency will be addressed by leveraging novel data representations, training metrics, and network architectures.
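For intuition, refining geometry from 3D correspondences often reduces to estimating a rigid transform between matched point sets. The sketch below shows the classical least-squares (Kabsch/Procrustes) solution in NumPy; the function name and test data are illustrative assumptions, and the project's learning-based methods would sit on top of, not replace, such a step.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding 3D points.
    Returns rotation R (3x3) and translation t (3,) such that
    dst ≈ src @ R.T + t (Kabsch / orthogonal Procrustes).
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                 # cross-covariance of the matches
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Sanity check: recover a known rotation about z plus a shift.
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true))  # → True
```

With noisy, partially occluded correspondences, this closed-form step is typically embedded in a robust estimation loop rather than applied once.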


In this project, you will contribute to improving a real car-mounted laser scanning system.

The project will focus on adapting a deep learning methodology developed in our lab that recognizes correspondences (i.e., points that are scanned multiple times and can be re-identified in the point cloud). These correspondences can then be leveraged to refine the estimation of the trajectory of the vehicle. The end goal is to improve the robustness of the trajectory estimation and point cloud generation pipeline when GNSS signal degradation occurs, allowing for more accurate 3D digitization of scanned areas.
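As a toy illustration of how a correspondence constrains a drifting trajectory, the sketch below refines a 1D pose chain by least squares: odometry links between consecutive poses plus one loop-closure-style constraint from a re-observed point. All numbers, weights, and the 1D simplification are assumptions for the example; the lab's actual pipeline is learning-based and operates on full 3D trajectories.

```python
import numpy as np

# Odometry reports each step as +1.1 m, but the sensor drifts: the
# true step is +1.0 m. A correspondence tells us pose 4 re-observed
# the point seen at pose 0, so the end-to-end displacement is 4.0 m.
odom = np.full(4, 1.1)             # measured increments, poses 0..4
loop_measure = 4.0

# Stack linear constraints A x = b on poses x1..x4 (x0 fixed at 0):
# each odometry row enforces x_i - x_{i-1} = odom_i; the last row
# enforces x4 - x0 = loop_measure with a higher weight.
A = np.zeros((5, 4))
b = np.zeros(5)
for i in range(4):
    A[i, i] = 1.0
    if i > 0:
        A[i, i - 1] = -1.0
    b[i] = odom[i]
w = 10.0                            # trust the loop closure more
A[4, 3] = w
b[4] = w * loop_measure
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(round(x[-1], 2))              # → 4.0 (drifted estimate was 4.4)
```

The drift is spread evenly over the chain, pulling the final pose from the drifted 4.4 m back to roughly the 4.0 m implied by the correspondence.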

The “OpenSimDrive” project is a comprehensive investigation into open-source car simulators, aiming to identify a suitable candidate for extracting dynamic models. The project further involves utilizing this model to predict linear and angular acceleration and subsequently validating its accuracy by comparing the outputs with the simulator. Additionally, a crucial aspect of the project is the integration of the validated model into a sensor fusion framework, enhancing the estimated position.
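As a minimal example of the kind of dynamic model that could be extracted and validated against a simulator, the sketch below uses the textbook kinematic bicycle model to predict yaw rate from speed and steering angle. The function name, the wheelbase value, and the model choice itself are assumptions for illustration, not the project's chosen model.

```python
import math

def bicycle_yaw_rate(v, steering, wheelbase=2.7):
    """Kinematic bicycle model: predicted yaw rate (rad/s).

    v: forward speed (m/s); steering: front-wheel angle (rad);
    wheelbase: distance between axles (m), an illustrative value.
    """
    return v * math.tan(steering) / wheelbase

# At 10 m/s with 0.1 rad of steering, the model predicts ~0.37 rad/s.
print(round(bicycle_yaw_rate(10.0, 0.1), 2))  # → 0.37
```

Validation would then compare such predictions against the angular rates logged by the simulator, and the validated model would serve as the process model inside the sensor fusion framework.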