EE236C - Optimization Methods for Large-Scale Systems (Spring 2016)

Lecture notes

  1. Gradient method

  2. Quasi-Newton methods

  3. Conjugate gradient method

  4. Subgradients

  5. Subgradient method

  6. Proximal gradient method

  7. Conjugate functions

  8. The proximal mapping

  9. Accelerated proximal gradient methods

  10. Proximal point method

  11. Dual decomposition

Spring 2014 notes

Additional lectures (from previous editions of the course)

Homework and project

Homework solutions and grades are posted on the EEweb course website. (Follow the links to “Assignments” or “Grades”.)

Course information

Lectures: Kinsey 1200B. Tuesday and Thursday, 12:00PM-1:50PM.

Description. The course continues EE236B and covers several advanced and current topics in optimization, with an emphasis on large-scale algorithms for convex optimization. Topics include first-order methods for large-scale optimization (the gradient and subgradient methods, the conjugate gradient method, the proximal gradient method, and accelerated gradient methods), decomposition and splitting methods (dual decomposition, the augmented Lagrangian method, the alternating direction method of multipliers, and monotone operators and operator splitting), and (possibly) interior-point algorithms for conic optimization. A small illustrative sketch of one of these methods follows below.
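To give a flavor of the first-order methods listed above, here is a minimal sketch (not course code) of the proximal gradient method applied to an l1-regularized least-squares (lasso) problem. The problem data, the fixed step size rule, and all variable names are assumptions made for this example only.

    # Minimal sketch: proximal gradient method for the lasso problem
    #   minimize (1/2)||Ax - b||_2^2 + lam * ||x||_1.
    # Illustrative example only; data and parameters are made up.
    import numpy as np

    def soft_threshold(v, tau):
        # Proximal operator of tau*||.||_1 (soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def proximal_gradient(A, b, lam, iters=500):
        # Fixed step size t = 1/L, where L = ||A||_2^2 is the Lipschitz
        # constant of the gradient of the smooth term (1/2)||Ax - b||^2.
        t = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - b)                   # gradient of smooth part
            x = soft_threshold(x - t * grad, t * lam)  # prox step on the l1 term
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = rng.standard_normal((50, 100))
        x_true = np.zeros(100)
        x_true[:5] = 1.0                 # sparse ground truth
        b = A @ x_true
        x_hat = proximal_gradient(A, b, lam=0.1)
        print("nonzeros recovered:", np.count_nonzero(np.abs(x_hat) > 1e-3))

Each iteration takes a gradient step on the smooth least-squares term and then applies the proximal operator of the l1 term (soft-thresholding), which is the pattern developed in the proximal gradient lectures.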

Textbook and lecture notes. The lecture notes will be posted on this website. Many of the topics are covered in the following books and in the course EE364b (Convex Optimization II) at Stanford University.

Course requirements. Several homework assignments and a project.

Grading. Approximate weights in the final grade: homework 20%, project 80%.