CS-E4500 Advanced Course in Algorithms
Spring 2017: Algorithms and Uncertainty
This is an advanced-level optional course. In spring 2017 the course explores algorithm designs that cope with uncertainty, ranging from classical error-correcting codes for protecting information to proofs of correctness and algorithmic decision-making under incomplete information. Our exploration is motivated both by timely applications and by the generality of the underlying mathematical ideas from the perspective of algorithm design, such as linearity, polynomials, duality, and relaxation.
Prerequisites: Fundamentals of algorithm design and analysis (e.g. CS-E3190). Mathematical maturity.
Lecturers: Prof. Petteri Kaski and Prof. Parinya Chalermsook.
First lecture: Tue 10 January 2017, 12--14, hall T5 (Konemiehentie 2).
Grading: Points awarded from weekly problem sets determine the course grade. No exam.
Learning objectives:
- Terminology and objectives of modern algorithmics, including elements of algebraic, approximation, online, and randomised algorithms.
- Ways of coping with uncertainty, including error-correction, proofs of correctness, online data structures, and decision-making with incomplete information.
- The art of solving a large problem by reducing it to one or more smaller instances of the same or a related problem.
- (Linear) independence, dependence, and their abstractions as enablers of efficient algorithms.
- Making use of duality. Often a problem has a corresponding dual problem that is obtainable from the original (the primal) problem by means of an easy transformation. The primal and dual control each other, enabling an algorithm designer to use the interplay between the two representations.
- Relaxation and tradeoffs between objectives and resources as design tools. Instead of computing the exact optimum solution at considerable cost, often a less costly but principled approximation suffices. Instead of the complete dual, often only a randomly chosen partial dual or other relaxation suffices to arrive at a solution with high probability.
Lecture schedule (tentative)
Part 1: [P. Kaski] Coping with errors and proving correctness
Polynomials in one variable are among the most elementary and most useful mathematical objects, with broad-ranging applications from signal processing to error-correcting codes and advanced applications such as probabilistically checkable proofs. One of the main reasons why polynomials are useful in a myriad of applications is that highly efficient algorithms are known for computing with polynomials. Part 1 introduces you to this near-linear-time toolbox and its selected applications, with some algorithmic ideas dating back millennia, and some introduced only last year.
- Lecture 1 (Tue 10 Jan): Polynomials and computing with polynomials. We start with elementary computational tasks involving polynomials, such as polynomial addition, multiplication, division (quotient and remainder), greatest common divisor, evaluation, and interpolation. We observe that polynomials admit two natural representations: coefficient representation and evaluation representation. We encounter the more-than-2000-year-old algorithm of Euclid for computing a greatest common divisor.
- Lecture 2 (Tue 17 Jan): The fast Fourier transform and fast multiplication. We derive one of the most fundamental and widely deployed algorithms in all of computing, namely the fast Fourier transform and its inverse. We explore the consequences of this near-linear-time-computable duality between the coefficient and evaluation representations of a polynomial. A key consequence is that we can multiply two polynomials in near-linear time.
- Lecture 3 (Tue 31 Jan): Reductions to fast multiplication. We continue the development of the fast polynomial toolbox with fast quotient and remainder, fast batch evaluation, fast interpolation, and fast Euclid’s algorithm. Along the way we encounter the principles of reduction, Newton iteration, divide-and-conquer, homomorphic simplification, and more.
- Lecture 4 (Tue 7 Feb): Error-correcting codes and proofs. We investigate applications of polynomials in the design of error-correcting codes and proofs of correctness for computations. We find that polynomial interpolation is robust against errors, and furthermore, runs in near-linear time essentially up to the information-theoretic maximum number of errors that still enables correct recovery. We also find that the simple fact that a low-degree nonzero polynomial has only a small number of roots enables us to (probabilistically) verify the correctness of intricate computations substantially faster than running the computation from scratch.
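The elementary toolbox of Lecture 1 fits in a few lines. Below is a minimal Python sketch (illustrative, not course material) of polynomial division with remainder and Euclid's algorithm, using exact rational arithmetic and coefficient lists ordered from lowest to highest degree:

```python
from fractions import Fraction

def polydivmod(a, b):
    """Quotient and remainder of polynomial a divided by b.
    Coefficients are listed from lowest to highest degree; b != 0."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while b and b[-1] == 0:       # strip leading zeros of the divisor
        b.pop()
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    while True:
        while a and a[-1] == 0:   # strip leading zeros of the dividend
            a.pop()
        if len(a) < len(b):
            break
        shift = len(a) - len(b)
        c = a[-1] / b[-1]         # eliminate the leading term of a
        q[shift] = c
        for i, bi in enumerate(b):
            a[shift + i] -= c * bi
    return q, a                   # remainder has degree < deg b

def polygcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while any(b):
        _, r = polydivmod(a, b)
        a, b = b, r
    return [c / a[-1] for c in a]  # normalise to a monic polynomial
```

For example, dividing x^2 - 1 by x - 1 yields quotient x + 1 and zero remainder, and their greatest common divisor is x - 1. Near-linear-time versions of both operations, via reductions to fast multiplication, are the subject of Lecture 3.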
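The duality of Lecture 2 between the coefficient and evaluation representations can be illustrated with a textbook radix-2 FFT over the complex numbers (a sketch for intuition, not the course's reference implementation): to multiply, evaluate both inputs at roots of unity, multiply pointwise, and interpolate back.

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = -1 if invert else 1
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def polymul(a, b):
    """Multiply integer polynomials via evaluation-interpolation."""
    n = 1
    while n < len(a) + len(b) - 1:
        n *= 2
    fa = fft(a + [0] * (n - len(a)))
    fb = fft(b + [0] * (n - len(b)))
    prod = fft([x * y for x, y in zip(fa, fb)], invert=True)
    return [round(c.real / n) for c in prod[:len(a) + len(b) - 1]]
```

For instance, (1 + x)(1 + x) comes out as [1, 2, 1], i.e. 1 + 2x + x^2, in O(n log n) arithmetic operations rather than the O(n^2) of the schoolbook method.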
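As a taste of the Newton iteration appearing in Lecture 3, the following sketch inverts a power series modulo x^n, doubling the precision in each round. For clarity it uses naive multiplication; the near-linear-time version would plug in FFT-based multiplication. (An illustrative sketch, not the lecture's exact presentation.)

```python
from fractions import Fraction

def series_inverse(f, n):
    """Inverse of the power series f modulo x^n by Newton iteration
    g <- g * (2 - f * g), doubling the precision in each round.
    Requires f[0] != 0."""
    f = [Fraction(c) for c in f]
    g = [1 / f[0]]                     # correct modulo x^1
    prec = 1
    while prec < n:
        prec = min(2 * prec, n)
        def mulmod(p, q):              # p * q modulo x^prec (naive)
            out = [Fraction(0)] * prec
            for i, pi in enumerate(p[:prec]):
                for j, qj in enumerate(q[:prec]):
                    if i + j < prec:
                        out[i + j] += pi * qj
            return out
        t = [-c for c in mulmod(f, g)]
        t[0] += 2                      # t = 2 - f*g (mod x^prec)
        g = mulmod(g, t)               # g is now correct mod x^prec
    return g[:n]
```

For example, the inverse of 1 - x modulo x^4 is 1 + x + x^2 + x^3, the truncated geometric series. Fast quotient and remainder then reduce to this routine.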
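The root-counting argument of Lecture 4 can be sketched as a probabilistic check: to verify a claimed expansion of a product of linear factors, evaluate both sides at random points modulo a large prime. If the claim is false, the difference is a nonzero low-degree polynomial with few roots, so a random point exposes the error with high probability. (An illustrative sketch; the choice of prime and trial count are assumptions, not course material.)

```python
import random

def verify_product(roots, coeffs, p=(1 << 61) - 1, trials=5):
    """Probabilistically check that the product of (x - r) over r in
    roots has the claimed coefficients (lowest to highest degree),
    working modulo the prime p. A false claim survives one trial with
    probability at most len(roots)/p."""
    for _ in range(trials):
        x = random.randrange(p)
        lhs = 1
        for r in roots:
            lhs = lhs * (x - r) % p
        rhs = 0
        for c in reversed(coeffs):     # Horner's rule
            rhs = (rhs * x + c) % p
        if lhs != rhs:
            return False
    return True
```

Each trial costs only O(n) arithmetic operations, far less than expanding the product from scratch; this is the same principle that underlies the verification of more intricate computations.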
Part 2: [P. Chalermsook] Decision-making under incomplete information
Modern applications often involve a situation where the input data are not revealed to us completely, and sometimes they are even dynamically changing. How do we make sure that an irrevocable decision made by our algorithm does not cost us a fortune in the future?
Part 2.A: Online Competitive Analysis
- Lecture 5 (Tue 21 Feb): Decisions and data structures under uncertainty. A standard way to deal with uncertainty is competitive analysis, i.e., to bound the performance of your algorithm relative to the performance of the optimal offline algorithm that sees the whole input. The research area that studies this measure is called online algorithms.
- Lecture 6 (Tue 28 Feb): Linear programming (LP) and duality. We introduce the general framework of linear programming relaxation: to solve a discrete optimization problem, one relaxes it by allowing fractional solutions (which can be found efficiently), and the fractional solution can then be turned into an integral one, possibly with a (hopefully small) loss in the quality of the solution.
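A classic warm-up for the competitive analysis of Lecture 5 is the ski-rental problem: without knowing how many days you will ski, rent until the rent paid reaches the purchase price, then buy. This break-even strategy pays at most twice the offline optimum. (An illustrative sketch with unit rental cost by default.)

```python
def ski_rental_cost(days_skied, buy_cost, rent_cost=1):
    """Break-even online strategy: rent until the rent paid would
    reach the purchase price, then buy. Pays at most twice what the
    optimal offline strategy pays (2-competitive)."""
    threshold = buy_cost // rent_cost   # days rented before buying
    if days_skied <= threshold:
        return days_skied * rent_cost
    return threshold * rent_cost + buy_cost

def offline_optimum(days_skied, buy_cost, rent_cost=1):
    """Cost of the best decision when days_skied is known upfront."""
    return min(days_skied * rent_cost, buy_cost)
```

For example, with buy_cost 10 and 100 skiing days the online cost is 20 against an offline optimum of 10, and no input can make the ratio exceed 2.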
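The relaxation-and-rounding recipe of Lecture 6 can be sketched for vertex cover. Assume a fractional LP solution is supplied (in practice by an LP solver; here it is hand-computed for the examples): since every edge constraint reads frac[u] + frac[v] >= 1, taking every vertex with fractional value at least 1/2 yields a cover of cost at most twice the LP optimum.

```python
def round_vertex_cover(edges, frac):
    """Deterministic rounding of a fractional vertex-cover LP
    solution: pick every vertex v with frac[v] >= 1/2. Each edge
    (u, v) satisfies frac[u] + frac[v] >= 1, so at least one
    endpoint is picked; the cost at most doubles."""
    cover = {v for v, x in frac.items() if x >= 0.5}
    assert all(u in cover or v in cover for u, v in edges)
    return cover
```

On a triangle, the fractional optimum assigns 1/2 to every vertex (value 1.5); rounding picks all three vertices, cost 3 = 2 x 1.5, matching the worst case of the analysis.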
- Lecture 7 (Tue 7 Mar): Primal-dual technique for online algorithms. We discuss the connection between the LP technique and online algorithms. This connection leads to a generic technique for designing online algorithms.
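A simple offline instance of the primal-dual technique is the 2-approximation for weighted vertex cover: raise the dual variable of an uncovered edge until some endpoint's constraint becomes tight, then add the tight vertices to the cover. (An illustrative sketch, not the lecture's exact online algorithm.)

```python
def primal_dual_vertex_cover(edges, weights):
    """Primal-dual 2-approximation for weighted vertex cover. The
    dual of an uncovered edge is raised until an endpoint's weight
    constraint is tight; tight vertices enter the cover. The dual
    solution certifies the factor-2 guarantee."""
    slack = dict(weights)   # remaining slack of each vertex constraint
    cover = set()
    for u, v in edges:
        if u in cover or v in cover:
            continue
        y = min(slack[u], slack[v])   # raise the dual of edge (u, v)
        slack[u] -= y
        slack[v] -= y
        if slack[u] == 0:
            cover.add(u)
        if slack[v] == 0:
            cover.add(v)
    return cover
```

Note the algorithm only scans the edges once, which is what makes primal-dual reasoning so well suited to online settings where constraints arrive one at a time.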
Part 2.B: Online Prediction & Applications
- Lecture 8 (Tue 14 Mar; updated 21 Mar): Learning from experts framework and multiplicative weight update. Each day you wake up in the morning, sip a cup of coffee, and buy/sell stocks. Experts are giving you advice on how to get rich, but different experts have different, conflicting opinions. Whom should you trust? What is the best way to use this conflicting advice? In this lecture, you will learn a principled way to guarantee that you do not do substantially worse than the best expert.
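The multiplicative-weight-update rule of Lecture 8 fits in a few lines: maintain a weight per expert, trust each expert in proportion to its weight, and shrink the weight of every expert that incurs loss. (An illustrative sketch; the learning rate eta is an assumed parameter, and losses are taken in [0, 1].)

```python
from math import exp

def hedge(losses, eta=0.5):
    """Multiplicative weight update over n experts. losses[t][i] is
    the loss of expert i on day t, in [0, 1]. Returns the total
    expected loss of the algorithm, which is within an additive
    O(sqrt(T log n)) of the best single expert in hindsight."""
    n = len(losses[0])
    w = [1.0] * n
    total = 0.0
    for day in losses:
        s = sum(w)
        p = [wi / s for wi in w]       # trust proportionally to weight
        total += sum(pi * li for pi, li in zip(p, day))
        w = [wi * exp(-eta * li) for wi, li in zip(w, day)]
    return total
```

For instance, with two experts where one always incurs loss 1 and the other loss 0, the algorithm's expected loss stays bounded by a small constant over any horizon, since the bad expert's weight decays geometrically.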
- Lecture 9 (Tue 21 Mar; updated 28 Mar): Routing under uncertainty. Routing is a crucial task in networks and a fundamental building block in several recent advanced algorithm designs, such as fast algorithms for network flow. Demands (e.g. device A sends 1 unit of traffic to device B) keep arising in an unpredictable fashion, and a routing strategy needs to be determined so as to minimize the network "congestion". We will show how to devise a good routing strategy in advance, without knowing what will need to be routed.