Outline
The 2016 course consists of the following topics:
Lecture 1 | “Objects in Space”: definitions of norms, inner products, and metrics for vector, matrix, and tensor objects.
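The "objects in space" vocabulary can be made concrete with a few numpy one-liners; the vectors and matrices below are illustrative assumptions, not course data.

```python
import numpy as np

x = np.array([3.0, -4.0])
l2 = np.linalg.norm(x)                      # Euclidean norm: sqrt(<x, x>)
l1 = np.linalg.norm(x, 1)                   # sum of absolute values
dist = np.linalg.norm(x - np.array([0.0, -4.0]))  # metric induced by the l2 norm

A = np.array([[1.0, 2.0], [3.0, 4.0]])
fro = np.linalg.norm(A, 'fro')              # Frobenius norm, from the trace inner product
spec = np.linalg.norm(A, 2)                 # spectral norm: largest singular value
```

The spectral norm is always bounded above by the Frobenius norm, one of the norm inequalities the lecture formalizes.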
Lecture 2 | Review of basic probability theory. Maximum likelihood, M-estimators, and empirical risk minimization as a motivation for convex optimization.
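A minimal sketch (with assumed toy data) of the statistical motivation: for i.i.d. Gaussian samples, the maximum-likelihood estimate of the mean is exactly the minimizer of the empirical risk under the squared loss, and that minimizer is the sample mean.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=1000)

def empirical_risk(theta, x):
    # Average squared loss; a smooth, strongly convex function of theta.
    return np.mean((x - theta) ** 2)

theta_mle = data.mean()   # closed-form minimizer of the empirical risk
```

For richer models no closed form exists, which is where the iterative convex optimization methods of the later lectures come in.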
Lecture 3 | Fundamental concepts in convex analysis. Basics of complexity theory.
Lecture 4 | Unconstrained smooth minimization I: concept of an iterative optimization algorithm; convergence rates; characterization of functions.
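The idea of an iterative algorithm with a provable convergence rate can be sketched on an assumed toy problem: gradient descent on the quadratic f(x) = 0.5 xᵀAx with step 1/L, where L is the largest eigenvalue of A; for strongly convex quadratics the iterates converge linearly to the minimizer x* = 0.

```python
import numpy as np

A = np.diag([1.0, 10.0])         # eigenvalues give L = 10, mu = 1
L = 10.0
x = np.array([1.0, 1.0])
errors = []
for _ in range(50):
    x = x - (A @ x) / L          # gradient step: grad f(x) = A x
    errors.append(np.linalg.norm(x))
```

The recorded errors shrink by the factor 1 - mu/L = 0.9 per iteration, a linear (geometric) convergence rate.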
Lecture 5 | Unconstrained smooth minimization II: gradient and accelerated gradient methods.
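A hedged sketch (assumed example problem, not course material) comparing plain gradient descent with Nesterov-style acceleration on a badly conditioned quadratic; the momentum schedule below is the standard t_k recursion.

```python
import numpy as np

A = np.diag(np.linspace(1.0, 100.0, 50))   # condition number 100
L = 100.0
x0 = np.ones(50)

def grad(x):
    return A @ x

# Plain gradient descent with step 1/L.
x = x0.copy()
for _ in range(100):
    x = x - grad(x) / L
gd_err = np.linalg.norm(x)

# Accelerated gradient: gradient step at the extrapolated point y.
x = x0.copy()
y = x0.copy()
t = 1.0
for _ in range(100):
    x_new = y - grad(y) / L
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    y = x_new + ((t - 1.0) / t_new) * (x_new - x)
    x, t = x_new, t_new
agd_err = np.linalg.norm(x)
```

With the same number of gradient evaluations, the accelerated iterate ends closer to the minimizer, reflecting the O(1/k²) versus O(1/k) rates covered in the lecture.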
Lecture 6 | Unconstrained smooth minimization III: the quadratic case; the conjugate gradient method; variable metric algorithms.
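For the quadratic case, the conjugate gradient method can be sketched directly: minimizing f(x) = 0.5 xᵀAx - bᵀx with A symmetric positive definite is the same as solving Ax = b, and in exact arithmetic CG reaches the minimizer in at most n iterations (the matrix below is a random assumed example).

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    x = np.zeros_like(b)
    r = b - A @ x            # residual = negative gradient
    p = r.copy()             # first search direction
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)     # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # new A-conjugate direction
        rs = rs_new
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)    # symmetric positive definite
b = rng.standard_normal(5)
x = conjugate_gradient(A, b)
```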
Lecture 7 | Structured data models (e.g., sparse and low-rank) and convex gauge functions. The subgradient method.
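The subgradient method can be sketched on an assumed non-smooth toy problem, f(x) = ||x - c||₁, whose subgradient is sign(x - c); with diminishing steps the method converges, but only at the slow rate characteristic of non-smooth problems.

```python
import numpy as np

c = np.array([1.0, -2.0, 0.5])
x = np.zeros(3)
best = np.inf
for k in range(1, 2001):
    g = np.sign(x - c)                  # a subgradient of f at x
    x = x - g / k                       # diminishing step size 1/k
    best = min(best, np.sum(np.abs(x - c)))
```

Tracking the best objective value seen so far is standard here, since the subgradient method is not a descent method.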
Lecture 8 | Composite convex minimization I: proximal and accelerated proximal gradient methods.
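A hedged sketch of the proximal gradient method (ISTA) on an assumed lasso instance, min_x 0.5||Ax - b||² + λ||x||₁: the smooth part supplies the gradient step, and the l1 part is handled by its prox, the soft-thresholding map.

```python
import numpy as np

def soft_threshold(v, t):
    # Prox of t * ||.||_1: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true                       # noiseless sparse recovery instance
lam = 0.1
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
x = np.zeros(10)
for _ in range(500):
    x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
```

The prox step sets small entries exactly to zero, which is why these methods recover sparse solutions.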
Lecture 9 | Composite convex minimization II: proximal Newton-type methods; composite self-concordant minimization.
Lecture 10 | Convex demixing. Basis pursuit denoising. Convex geometry of linear inverse problems.
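A minimal sketch (assumed toy setting) of basis pursuit denoising with an identity measurement operator: min_x 0.5||x - y||² + λ||x||₁ has the closed-form soft-thresholding solution, which suppresses small noisy entries while keeping large spikes.

```python
import numpy as np

rng = np.random.default_rng(3)
signal = np.zeros(100)
signal[[5, 40, 77]] = [4.0, -3.0, 5.0]      # sparse ground truth
y = signal + 0.1 * rng.standard_normal(100)  # noisy observation
lam = 0.5
x_hat = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
```

With a general measurement matrix there is no closed form, and one falls back on the proximal methods of Lectures 8 and 9.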
Lecture 11 | Constrained convex minimization I: the primal-dual approach; smoothing approaches for non-smooth convex minimization.
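One smoothing approach can be sketched on an assumed example: the Moreau envelope of the absolute value with parameter μ is the Huber function, a smooth surrogate that lets non-smooth terms be attacked with fast gradient methods.

```python
import numpy as np

def huber(t, mu):
    # Moreau envelope of |.| with parameter mu: quadratic on [-mu, mu],
    # shifted absolute value outside; smooth with (1/mu)-Lipschitz gradient.
    return np.where(np.abs(t) <= mu, t * t / (2.0 * mu), np.abs(t) - mu / 2.0)

# Quadratic branch, then two linear branches.
vals = huber(np.array([0.5, 2.0, -3.0]), 1.0)
```

Shrinking μ tightens the approximation at the cost of a larger gradient Lipschitz constant, the trade-off that smoothing-based complexity bounds quantify.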
Lecture 12 | Constrained convex minimization II: the Frank-Wolfe method; the universal primal-dual gradient method; the alternating direction method of multipliers (ADMM).
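The Frank-Wolfe (conditional gradient) method can be sketched on an assumed least-squares problem over the l1 ball {x : ||x||₁ ≤ τ}: each iteration solves a linear minimization over the ball, whose solution is a signed scaled coordinate vector, and takes a convex combination step.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 8))
x_true = np.zeros(8)
x_true[0] = 1.0
x_true[3] = -0.5
b = A @ x_true
tau = 1.5                               # ||x_true||_1 = 1.5, so x_true is feasible
x = np.zeros(8)
for k in range(2000):
    g = A.T @ (A @ x - b)
    i = int(np.argmax(np.abs(g)))       # linear oracle: best vertex of the l1 ball
    s = np.zeros(8)
    s[i] = -tau * np.sign(g[i])
    gamma = 2.0 / (k + 2.0)             # standard step-size schedule
    x = (1.0 - gamma) * x + gamma * s
```

Because every iterate is a convex combination of points in the ball, feasibility is maintained without any projection step, the method's defining advantage.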
Lecture 13 | Classical black-box convex optimization techniques: linear programming, quadratic programming, second-order cone programming, and semidefinite programming; the simplex method and interior-point methods (IPM); hierarchies of classical formulations.
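A hedged illustration of the geometric fact underlying the simplex method: a linear program attains its optimum at a vertex of the feasible polyhedron, so a tiny assumed LP (min -x - 2y s.t. x + y ≤ 4, x + 3y ≤ 9, x, y ≥ 0) can be solved by enumerating vertices.

```python
import numpy as np
from itertools import combinations

# Constraints written as A v <= b (the last two rows encode x >= 0, y >= 0).
A = np.array([[1.0, 1.0], [1.0, 3.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 9.0, 0.0, 0.0])
c = np.array([-1.0, -2.0])

best_val, best_x = np.inf, None
for i, j in combinations(range(4), 2):      # intersect each pair of constraint lines
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                            # parallel constraints: no vertex
    v = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ v <= b + 1e-9):           # keep feasible vertices only
        val = c @ v
        if val < best_val:
            best_val, best_x = val, v
```

Enumeration is exponential in general; the simplex method walks between adjacent vertices instead, and interior-point methods avoid the boundary entirely.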