Deep learning has revolutionized artificial intelligence and a wide range of applied domains, driving transformative progress in computer vision, language processing, and scientific discovery.
This talk surveys the vibrant, rapidly evolving landscape of deep learning theory, the effort to uncover the mathematical foundations of learning with neural networks.
We will review key theoretical insights into optimization dynamics, the implicit biases of learning algorithms, and the generalization behavior of deep models, highlighting connections to classical learning theory, high-dimensional statistics, and approximation theory.
Along the way, we will discuss some of the major successes in analyzing overparameterized regimes, as well as open challenges in understanding feature learning and generalization under moderate overparameterization.
The talk will also spotlight emerging phenomena such as benign overfitting and grokking, a form of delayed generalization, illustrating the depth and complexity of ongoing research questions that challenge traditional notions of generalization.