Understanding the performance of neural networks is one of the most compelling challenges for the current machine learning community.
Implicit regularisation
There remain many unanswered questions concerning the remarkable performance of neural networks. One of them is why the training algorithms currently in use converge to solutions which generalise well, despite very little explicit regularisation. To understand this phenomenon, the concept of implicit regularisation has emerged: if over-fitting is benign, it must be because the optimisation procedure converges towards some particular global minimum which enjoys good generalisation properties.
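A classic toy illustration of this phenomenon (a minimal NumPy sketch, not taken from the reference below) is over-parameterised least squares: gradient descent on the unregularised loss, initialised at zero, converges not to an arbitrary interpolating solution but to the minimum-l2-norm one, even though no penalty term appears anywhere in the objective.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                       # fewer samples than parameters: many global minima
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Gradient descent on the unregularised squared loss, initialised at zero.
w = np.zeros(d)
lr = 1e-2
for _ in range(50_000):
    w -= lr * X.T @ (X @ w - y) / n

# The minimum-norm interpolator, computed via the pseudo-inverse.
w_min_norm = np.linalg.pinv(X) @ y

print(np.linalg.norm(X @ w - y))       # ~0: the training data is interpolated
print(np.linalg.norm(w - w_min_norm))  # ~0: gradient descent picked the min-norm solution
```

The mechanism is that the iterates never leave the row space of X, so among all interpolators the algorithm can only reach the one of minimal norm; the "regularisation" is implicit in the dynamics, not in the loss.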
S. Pesme, L. Pillaud-Vivien, N. Flammarion, Implicit Bias of SGD for Diagonal Linear Networks: a Provable Benefit of Stochasticity, NeurIPS 2021