Current Project:
Mathematical Foundations for RISE of AI
While machine learning (ML) with neural networks (NNs) is delivering extraordinary results in many scientific and engineering domains, ML researchers are under no illusions about the emerging weaknesses of the NN paradigm in robustness, interpretability, bias, and reproducibility (RISE). To this end, there is growing interest in developing robust and fair training models for which rigorous certificates of correctness can be obtained, in reducing (inductive) biases and improving the interpretability of ML models, and in understanding as well as overcoming the newfound difficulties of optimizing such powerful models.
Until the 2010s, the conventional wisdom in ML, and in high-dimensional statistics in particular, was to use simple data models (e.g., linear or kernel regression), concise signal models (e.g., sparse and low-rank models), and convex loss functions, since these allow us to analyze the trade-offs between data size and computation needed to achieve learning objectives that also generalize to unseen data. This academic endeavor has been turned upside down by the performance improvements that accompanied the re-emergence of NNs. This basic research proposal provides broadly applicable, universal adaptive methods that cut to the core of the emerging, critical RISE issues. We introduce new robust learning formulations to learn unbiased, fair prediction models; develop interpretable ML approaches in order to answer questions from the so-called causal hierarchy; and make new advances in non-convex, non-concave minimax optimization formulations, which have applications in game theory, economics, and theoretical computer science. Finally, we introduce rigorous verification and certification tools towards the ultimate goal of trustworthy AI.
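As an illustrative sketch (the notation and the specific instance below are ours, not prescribed by the project), these robust learning and game-theoretic problems share the generic minimax form
\[
  \min_{x \in \mathcal{X}} \; \max_{y \in \mathcal{Y}} \; f(x, y),
\]
where the objective $f$ is typically non-convex in $x$ and non-concave in $y$ once NNs enter the model. Adversarially robust training is one such instance, with the inner player choosing norm-bounded perturbations $\delta_i$ of the inputs $a_i$ while the outer player fits the model parameters $\theta$:
\[
  \min_{\theta} \; \frac{1}{n} \sum_{i=1}^{n} \; \max_{\|\delta_i\| \le \epsilon} \ell\big(h_\theta(a_i + \delta_i),\, b_i\big),
\]
where $\ell$ is a loss, $h_\theta$ the prediction model, and $(a_i, b_i)$ the training pairs. The lack of convexity-concavity is precisely what makes the optimization questions above nonstandard.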