Our lab specializes in Explainable AI for Prognostics and Health Management, focusing on enhancing trust and transparency in deep learning models for industrial applications. Our research includes:

Pulse Activation Maps (PAMs)
Interpretable fault detection in power systems.

Concept Bottleneck Models (CBMs)
Explainable remaining useful life (RUL) prediction.
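A concept bottleneck model routes every prediction through a small set of human-interpretable concept scores, so the final output can be explained in terms of those concepts. A minimal sketch, with purely hypothetical dimensions, concept names, and untrained random weights (none of these come from our models):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical setup: 8 sensor features, 3 interpretable concepts
# (e.g. "bearing wear", "imbalance", "lubrication loss"), one RUL output.
# Weights are random placeholders, not a trained model.
W_concept = rng.normal(size=(8, 3))   # input -> concept layer
w_rul = rng.normal(size=3)            # concept -> RUL head

def predict(x):
    # Bottleneck: the RUL estimate is computed ONLY from the concept
    # scores, so each prediction is explainable at the concept level.
    concepts = sigmoid(x @ W_concept)  # concept activations in [0, 1]
    rul = concepts @ w_rul             # RUL predicted from concepts alone
    return concepts, rul

concepts, rul = predict(rng.normal(size=8))
```

In practice the two stages are trained jointly or sequentially on concept-annotated data; the sketch only illustrates the bottleneck structure.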

Weakly-supervised explainable AI
Explainable AI for crack detection and growth monitoring, leveraging feature attribution methods to reduce annotation costs while maintaining detection reliability.
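Feature attribution assigns each input feature a score reflecting its influence on the model output. A minimal occlusion-style sketch (one common attribution technique, shown here on a toy stand-in model rather than our crack-detection networks):

```python
import numpy as np

def occlusion_attribution(model, x, baseline=0.0):
    """Score each feature by how much replacing it with a baseline
    value changes the model output."""
    base = model(x)
    scores = np.empty_like(x)
    for i in range(x.size):
        x_masked = x.copy()
        x_masked[i] = baseline            # occlude one feature
        scores[i] = base - model(x_masked)
    return scores

# Toy "severity" model depending only on features 0 and 2, so the
# attribution should highlight those two and be zero elsewhere.
toy_model = lambda x: 3.0 * x[0] + 1.0 * x[2]
x = np.array([1.0, 5.0, 2.0, -4.0])
scores = occlusion_attribution(toy_model, x)
print(scores)  # -> [3. 0. 2. 0.]
```

For images, the same idea is applied patch-wise rather than per scalar feature, which is what makes attribution maps useful for localizing cracks without pixel-level annotations.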