
Tentative daily schedule (all times are in US Pacific Daylight Time - PDT)

Time (PDT)   Activity
7:30-8am     Icebreaker activity (TA-led; small groups)
8-9:30am     Lecture block 1; ends with homework assigned (entire group)
9:30-10am    Break
10-11:30am   TA-led problem-solving session (small groups)
12-1:30pm    Lecture block 2; ends with homework assigned (entire group)
1:30-2pm     Break
2-3pm        Plenary / panel / non-technical talk / other activities
Evening      Office hours with TAs; opportunities to interact / games / community-building activities



A Google calendar with links to events will also be shared with the school participants. Please see the program below for lecture schedules and course descriptions.


            Lecture block 1     Lecture block 2
Monday      Yael Kalai          Samory Kpotufe
Tuesday     Yael Kalai          Samory Kpotufe
Wednesday   Gillat Kol          Antonio Blanca
Thursday    Gillat Kol          Rediet Abebe
Friday      Antonio Blanca      Rediet Abebe


Rediet Abebe: From Policy Goals to Mathematical Objectives: Challenges in Optimizing in an Interactive World

Antonio Blanca: The Markov Chain Monte Carlo Method

This mini-course will serve as an introduction to the Markov chain Monte Carlo (MCMC) method, one of the most powerful approaches to sampling from complex probability distributions. An MCMC algorithm consists of a Markov chain designed to converge to the probability distribution from which we would like to sample efficiently. A fundamental research question in theoretical computer science, discrete mathematics, and other more applied areas is how many steps of the Markov chain are required until its distribution is close to the target distribution.

We will begin with a brief introduction to Markov chains, exploring convergence conditions and standard probabilistic and analytical methods for bounding their speed of convergence. We will then see these methods in action by considering classical Markov chains for sampling proper graph colorings, configurations from the Ising model, and others.
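To make the setup concrete, here is a minimal sketch (an illustration, not taken from the course materials) of Glauber dynamics, a classical Markov chain for sampling proper graph colorings: at each step, pick a uniformly random vertex and recolor it with a color chosen uniformly among those not used by its neighbors.

```python
# Minimal sketch of Glauber dynamics for proper q-colorings (illustrative only;
# function name and interface are our own, not from the course).
import random

def glauber_colorings(adj, q, steps, seed=0):
    """adj: dict mapping each vertex to a list of its neighbors."""
    rng = random.Random(seed)
    # Start from a proper coloring found greedily (works whenever q > max degree).
    coloring = {}
    for v in adj:
        used = {coloring[u] for u in adj[v] if u in coloring}
        coloring[v] = min(c for c in range(q) if c not in used)
    vertices = list(adj)
    for _ in range(steps):
        v = rng.choice(vertices)                    # pick a uniformly random vertex
        forbidden = {coloring[u] for u in adj[v]}   # colors used by its neighbors
        allowed = [c for c in range(q) if c not in forbidden]
        coloring[v] = rng.choice(allowed)           # recolor uniformly at random
    return coloring

# Toy usage: a 5-cycle with q = 4 colors (q >= degree + 2 keeps the chain ergodic).
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(glauber_colorings(cycle, q=4, steps=1000))
```

How many such steps are needed before the coloring produced is close to a uniformly random proper coloring is exactly the kind of mixing-time question the course will study.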

Yael Kalai: The Magic of Cryptography

In this mini-course we will teach you two cryptographic magic tricks. The first is how to generate randomness out of thin air; we will see how this trick can be used to construct secure encryption schemes (as well as other cryptographic primitives). The second is zero-knowledge proofs: like real magicians, we will see how to prove statements without revealing any information (beyond the fact that the statement is true)!
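To give a flavor of the first trick, here is a toy sketch (our own illustration, not course material, and NOT a secure scheme): a short seed is stretched into a long pseudorandom stream, which is then XORed with the message, one-time-pad style. SHA-256 in counter mode merely stands in here for a pseudorandom generator.

```python
# Toy "seed-stretching" encryption sketch: illustrative only, not secure,
# and not a construction from the course.
import hashlib

def keystream(seed: bytes, length: int) -> bytes:
    """Expand a short seed into `length` pseudorandom bytes (toy PRG stand-in)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(seed: bytes, message: bytes) -> bytes:
    # XOR the message with the pseudorandom stream; decryption is the same operation.
    return bytes(m ^ k for m, k in zip(message, keystream(seed, len(message))))

ciphertext = encrypt(b"short-seed", b"attack at dawn")
print(encrypt(b"short-seed", ciphertext))  # b'attack at dawn'
```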

Gillat Kol: Computability, Complexity, and Mathematical Logic – The Limits of Human Knowledge

Can every function be computed? Can every true mathematical statement be proven? Is solving a Sudoku puzzle harder than verifying its solution? And why is it lucky that we fail to solve some fundamental problems? In this mini-class we will adopt a computational lens, viewing real-life phenomena and entities, like our brains, evolution, and even black holes, as computational machines that run algorithms. We will ask what the limits of computation and of efficient computation are, and relate this to the study of proofs.
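A small illustration of the solving-versus-verifying gap behind the Sudoku question (our own sketch, not from the course): checking a proposed 9x9 solution takes a quick scan of rows, columns, and boxes, while finding a solution from scratch may require exploring a huge search space.

```python
# Verify a completed 9x9 Sudoku grid in a single pass (illustrative sketch).
def is_valid_sudoku(grid):
    """grid: 9x9 list of lists with entries 1..9; returns True iff every row,
    column, and 3x3 box contains each digit exactly once."""
    def ok(cells):
        return sorted(cells) == list(range(1, 10))
    rows = grid
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [[grid[3 * br + r][3 * bc + c] for r in range(3) for c in range(3)]
             for br in range(3) for bc in range(3)]
    return all(ok(group) for group in rows + cols + boxes)
```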

Samory Kpotufe: Is Universal Learning Possible?

It depends on the universe, and on our notion of ‘learning’. I will first discuss the motivation for the question as we aim for ‘true’ AI, and then discuss two notions of what we may mean by universal: first, so-called consistency (in which case universal learners exist, essentially), and second, a stronger notion in terms of ‘rates of convergence’ (in which case there is no universal learner). Throughout, I will drive the discussion using the simplest, and perhaps oldest, learner: a ‘nearest-neighbor’ method. The discussion will assume familiarity with the most basic probability concepts, e.g., mean, variance, and introductory distributions.
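For readers who have not met it before, here is a minimal sketch of the nearest-neighbor learner mentioned above (our own illustration, not course material): classify a new point by copying the label of its closest training example under Euclidean distance.

```python
# 1-nearest-neighbor prediction (illustrative sketch; names are our own).
import math

def nearest_neighbor_predict(train, x):
    """train: list of (point, label) pairs, where each point is a tuple of floats;
    returns the label of the training point closest to x."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    _, label = min(train, key=lambda pair: dist(pair[0], x))
    return label

# Toy usage: two labeled clusters in the plane.
data = [((0.0, 0.0), "A"), ((0.5, 0.2), "A"), ((5.0, 5.0), "B"), ((5.5, 4.8), "B")]
print(nearest_neighbor_predict(data, (0.3, 0.1)))  # "A"
print(nearest_neighbor_predict(data, (5.2, 5.1)))  # "B"
```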