CSE4500 Advanced Course in Algorithms
Spring 2020: Algorithms for Polynomials and Factor Graphs
This is an advanced-level optional course that in spring 2020 explores algorithm design for univariate polynomials and large-integer arithmetic, with a threefold motivation:
 The toolbox of near-linear-time algorithms for univariate polynomials and large integers provides a practical showcase of recurrent mathematical ideas in algorithm design, such as linearity, duality, divide-and-conquer, dynamic programming, iteration and invariants, approximation, parameterization, tradeoffs between resources and objectives, and randomization.
 We gain exposure to a number of classical and recent applications, such as secret sharing, error-correcting codes, probabilistically checkable proofs, and error-tolerant computation.
 Factor graphs and inference by contraction of factors constitute one of the fundamental approaches for representing and evaluating sum-product expressions, with applications e.g. in representing multivariate probability distributions and the associated probabilistic inference. We study a recent algorithm design for inference that also enables fast, probabilistically sound verification that the result of inference is correct.
Prerequisites:
Fundamentals of algorithm design and analysis (e.g. CSE3190). Mathematical maturity.
Lecturer: Prof. Petteri Kaski.
Teaching assistants: Minoo Zarsav and Negin Karimi.
First lecture: Tuesday 7 January 2020, 12–14, hall T5 (Konemiehentie 2).
Grading:
Points awarded from weekly problem sets determine the course grade. No exam.
Learning objectives
 Terminology and objectives of modern algorithmics, including elements of algebraic, approximation, online, and randomized algorithms. Ways of coping with uncertainty in computation, including error-correction and proofs of correctness.
 The art of solving a large problem by reduction to one or more smaller instances of the same or a related problem.
 (Linear) independence, dependence, and their abstractions as enablers of efficient algorithms.
 Making use of duality. Often a problem has a corresponding dual problem that is obtainable from the original (the primal) problem by means of an easy transformation. The primal and dual control each other, enabling an algorithm designer to use the interplay between the two representations.
 Relaxation and tradeoffs between objectives and resources as design tools. Instead of computing the exact optimum solution at considerable cost, often a less costly but principled approximation suffices. Instead of the complete dual, often only a randomly chosen partial dual or other relaxation suffices to arrive at a solution with high probability.
Synopsis of lectures
Polynomials in one variable are among the most elementary and most useful mathematical objects, with broad-ranging applications from signal processing to error-correcting codes and advanced applications such as probabilistically checkable proofs. One of the main reasons why polynomials are useful in a myriad of applications is that highly efficient algorithms are known for computing with polynomials. These lectures introduce you to this near-linear-time toolbox and its select applications, with some algorithmic ideas dating back millennia, and some introduced only in the last few years.
By virtue of the positional number system, algorithms for computing with polynomials are closely related to algorithms for computing with integers. In most cases, algorithms for polynomials are conceptually easier and thus form our principal object of study during our weekly lectures, with the corresponding algorithms for integers left for the exercises or for further study.
Factor graphs and inference by contraction of factors constitute one of the fundamental approaches for representing and evaluating sum-product expressions, with applications e.g. in representing multivariate probability distributions and associated probabilistic inference. We study a recent algorithm design for inference that also enables fast verification that the result of inference is correct. In particular, we find that we can extend scalar inference to a univariate polynomial that contains the result of inference as well as serves as a quickly verifiable proof of correctness, with probabilistic soundness.
Beyond their use in applications such as inference, factor graphs and their contraction can also be viewed as a model of computation that captures many advanced algorithm designs, such as fast matrix multiplication via low-rank decompositions of matrix multiplication tensors.
Lecture schedule (tentative)

Lecture 1 (Tue 7 Jan): Polynomials and integers.
We start with elementary computational tasks involving polynomials, such as polynomial addition, multiplication, division (quotient and remainder), greatest common divisor, evaluation, and interpolation. We observe that polynomials admit two natural representations: coefficient representation and evaluation representation. We encounter the more-than-2000-year-old algorithm of Euclid for computing a greatest common divisor. We observe the connection between polynomials in coefficient representation and integers represented in the positional number system.
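As a warm-up, here is a minimal sketch of Euclid's algorithm in its integer form; the polynomial version is structurally identical, with integer remaindering replaced by polynomial remaindering.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b)
    until the remainder vanishes; the last nonzero value is the gcd."""
    while b != 0:
        a, b = b, a % b
    return abs(a)
```

Each step strictly decreases the remainder, so termination is immediate; the near-linear-time variant studied later in the course replaces this iteration with a divide-and-conquer scheme.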
Lecture 2 (Tue 14 Jan): The fast Fourier transform and fast multiplication.
We derive one of the most fundamental and widely deployed algorithms in all of computing, namely the fast Fourier transform and its inverse. We explore the consequences of this near-linear-time-computable duality between the coefficient and evaluation representations of a polynomial. A key consequence is that we can multiply two polynomials in near-linear time. We obtain an algorithm for integer multiplication by reduction to polynomial multiplication.
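The multiplication-via-FFT pipeline can be sketched as follows: transform both coefficient vectors to the evaluation representation, multiply pointwise, and transform back. This is a textbook radix-2 Cooley–Tukey recursion over the complex numbers, not the in-place or number-theoretic variants one would use in practice.

```python
import cmath

def fft(a, invert=False):
    """Radix-2 Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2], invert)   # recurse on even-index coefficients
    odd = fft(a[1::2], invert)    # recurse on odd-index coefficients
    sign = -1 if invert else 1
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)  # twiddle factor
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def poly_mul(p, q):
    """Multiply coefficient vectors (lowest degree first) via FFT."""
    n = 1
    while n < len(p) + len(q) - 1:
        n *= 2
    fp = fft(p + [0] * (n - len(p)))
    fq = fft(q + [0] * (n - len(q)))
    prod = fft([x * y for x, y in zip(fp, fq)], invert=True)
    # divide by n for the inverse transform; round back to integers
    return [round((c / n).real) for c in prod[: len(p) + len(q) - 1]]
```

For example, `poly_mul([1, 2, 3], [4, 5])` computes the coefficients of (1 + 2x + 3x²)(4 + 5x).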
Lecture 3 (Tue 21 Jan): Quotient and remainder.
We continue the development of the fast polynomial toolbox with near-linear-time polynomial division (quotient and remainder). The methodological protagonist for this lecture is Newton iteration. We explore Newton iteration and its convergence both in the continuous and in the discrete settings, including fast quotient and remainder over the integers.
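The discrete Newton iteration can be illustrated by inverting a power series: starting from the inverse of the constant term, each round doubles the number of correct coefficients via g ← g·(2 − f·g). The sketch below uses a naive quadratic truncated multiplication for clarity; the point of the lecture is that substituting FFT-based multiplication makes the whole iteration run in near-linear time.

```python
from fractions import Fraction

def mul_trunc(p, q, n):
    """Naive product of coefficient lists, truncated mod x^n."""
    out = [Fraction(0)] * n
    for i, a in enumerate(p[:n]):
        for j, b in enumerate(q[: n - i]):
            out[i + j] += Fraction(a) * Fraction(b)
    return out

def series_inverse(f, n):
    """Invert a power series f (with f[0] != 0) modulo x^n by Newton
    iteration g <- g * (2 - f * g); precision doubles each round."""
    g = [1 / Fraction(f[0])]
    prec = 1
    while prec < n:
        prec = min(2 * prec, n)
        t = [-c for c in mul_trunc(f, g, prec)]  # -f*g mod x^prec
        t[0] += 2                                # 2 - f*g
        g = mul_trunc(g, t, prec)
    return g
```

For instance, the inverse of 1 + x modulo x⁴ is the truncated geometric series 1 − x + x² − x³.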
Lecture 4 (Tue 28 Jan): Batch evaluation and interpolation.
We derive near-linear-time algorithms for batch evaluation and interpolation of polynomials using recursive remaindering along a subproduct tree. In terms of methodological principles, we encounter algebraic divide-and-conquer, dynamic programming, and space-time tradeoffs. As an application, we encounter secret sharing.
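The secret-sharing application rests directly on evaluation and interpolation. A minimal sketch of Shamir's scheme, with an illustrative choice of prime modulus: the secret is the constant term of a random polynomial of degree k−1, shares are evaluations, and any k shares recover the secret by Lagrange interpolation at 0.

```python
import random

P = 2**61 - 1  # a prime modulus (illustrative choice)

def share(secret, k, n):
    """Split `secret` into n shares, any k of which recover it:
    evaluate a random degree-(k-1) polynomial with constant term = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Fewer than k shares reveal nothing about the secret, since any value of the constant term is consistent with them.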
Lecture 5 (Tue 4 Feb): Extended Euclidean algorithm.
This lecture continues our development of the near-linear-time toolbox for univariate polynomials and integers. First, we develop a divide-and-conquer version of the extended Euclidean algorithm for polynomials that recursively truncates the inputs to achieve near-linear running time. Second, we develop an analogous algorithm for computing the greatest common divisor of two given integers in near-linear time.
 Tue 11 Feb: Break, no lecture.
 Tue 18 Feb: Exam week, no lecture.
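For reference, the classical extended Euclidean algorithm of Lecture 5 can be sketched as follows in its integer form (the quadratic-time baseline; the lecture's divide-and-conquer variant achieves the same Bézout coefficients in near-linear time, and the polynomial version is analogous).

```python
def ext_gcd(a, b):
    """Return (g, s, t) with s*a + t*b == g == gcd(a, b).
    Maintains Bezout coefficients alongside the remainder sequence."""
    s0, s1, t0, t1 = 1, 0, 0, 1
    while b != 0:
        q = a // b
        a, b = b, a - q * b          # remainder step of Euclid
        s0, s1 = s1, s0 - q * s1     # update coefficient of original a
        t0, t1 = t1, t0 - q * t1     # update coefficient of original b
    return a, s0, t0
```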

Lecture 6 (Tue 26 Feb): Interpolation from erroneous data.
This lecture culminates our development of the near-linear-time toolbox for univariate polynomials. We present a near-linear-time polynomial interpolation algorithm that is robust to errors in the input data, up to the information-theoretic maximum number of errors for correct recovery. As an application, we encounter Reed-Solomon error-correcting codes together with near-linear-time encoding and decoding algorithms. Time permitting, we will look at an analogous setting over the integers, namely Chinese remaindering with errors.
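To make the error bound concrete, here is a brute-force reference decoder (emphatically not the near-linear-time algorithm of the lecture): a codeword of length n encoding a polynomial of degree less than k can be uniquely recovered from up to (n−k)/2 errors, since two distinct low-degree polynomials cannot both agree with the received word that often.

```python
from itertools import combinations

def interp_eval(pts, x, p):
    """Lagrange-interpolate the point set and evaluate at x over GF(p)."""
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

def rs_decode(points, k, p):
    """Unique decoding of a Reed-Solomon word with up to (n-k)//2 errors
    by exhaustive search over k-point subsets: interpolate each subset
    and accept a polynomial that agrees with enough received positions.
    Exponential-time reference implementation for small parameters only."""
    n = len(points)
    e = (n - k) // 2
    for sub in combinations(points, k):
        if sum(interp_eval(sub, x, p) == y for x, y in points) >= n - e:
            return [interp_eval(sub, x, p) for x, _ in points]
    return None
```

With n = 7 and k = 2 the decoder tolerates 2 corrupted positions, matching the information-theoretic bound.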
Lecture 7 (Tue 3 Mar): Factor graphs and contraction.
This lecture develops the basic theory of factor graphs as a representation of sum-product expressions. We encounter the operation of contraction of factors for simplifying a factor graph, and show its universality and order-invariance for evaluating sum-product expressions. As an application, we encounter factor graphs as a representation of multivariate probability distributions.
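A small sketch of these two operations, with factors represented as explicit tables (a toy encoding chosen here for illustration): direct evaluation of a sum-product expression, and contraction of a variable, which multiplies the factors touching it and sums the variable out without changing the value of the expression.

```python
from itertools import product

def sum_product(factors, domains):
    """Brute-force value of a sum-product expression. Each factor is
    (scope, table): a tuple of variable names and a dict mapping
    assignment tuples over that scope to numbers."""
    vars_ = sorted({v for scope, _ in factors for v in scope})
    total = 0
    for values in product(*(domains[v] for v in vars_)):
        assign = dict(zip(vars_, values))
        term = 1
        for scope, table in factors:
            term *= table[tuple(assign[v] for v in scope)]
        total += term
    return total

def eliminate(factors, v, domains):
    """Contract all factors containing v into one: multiply them and
    sum v out. The value of the expression is unchanged."""
    touching = [(s, f) for s, f in factors if v in s]
    rest = [(s, f) for s, f in factors if v not in s]
    new_scope = tuple(sorted({u for s, _ in touching for u in s if u != v}))
    new_table = {}
    for values in product(*(domains[u] for u in new_scope)):
        assign = dict(zip(new_scope, values))
        total = 0
        for val in domains[v]:
            assign[v] = val
            term = 1
            for s, f in touching:
                term *= f[tuple(assign[u] for u in s)]
            total += term
        new_table[values] = total
    return rest + [(new_scope, new_table)]
```

Eliminating variables one by one in any order yields the same value as direct evaluation, which is the order-invariance the lecture establishes.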
Lecture 8 (Tue 10 Mar): Verifiable and errortolerant inference for factor graphs.
We investigate some further applications of the near-linear-time toolbox for univariate polynomials involving randomization and proof systems with probabilistic soundness. We find that the elementary fact that a low-degree nonzero polynomial has only a small number of roots enables us to (probabilistically) verify the correctness of intricate computations substantially faster than running the computation from scratch. Furthermore, we observe that proof preparation intrinsically tolerates errors by virtue of Reed-Solomon coding. In particular, we show that the inference problem for factor graphs can be placed into this framework, yielding error-tolerant inference that admits fast verification for correctness.
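The few-roots fact can be turned into a verifier in a few lines. The sketch below checks a claimed polynomial product by evaluating at random points: if h ≠ f·g, the difference is a nonzero polynomial of degree at most d, so a uniformly random evaluation point over GF(p) exposes the error with probability at least 1 − d/p per trial.

```python
import random

def poly_eval(coeffs, x, p):
    """Horner evaluation of a coefficient list (lowest degree first) mod p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def probably_equal_product(f, g, h, p, trials=20):
    """Probabilistic check that h == f * g as polynomials over GF(p):
    evaluate both sides at random points; any disagreement refutes the
    claim, and agreement on all trials accepts with small error probability."""
    for _ in range(trials):
        r = random.randrange(p)
        if poly_eval(f, r, p) * poly_eval(g, r, p) % p != poly_eval(h, r, p):
            return False
    return True
```

Each trial costs time linear in the degree, far cheaper than recomputing the product itself.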
Lecture 9 (Tue 17 Mar, canceled): Further applications of factor graphs.
We look at the power of factor graphs and contraction as a model of computation for evaluating multilinear maps. We encounter the notion of a tensor of a multilinear map and study an extension of the notion of matrix rank to tensors, namely tensor rank. As examples we will encounter fast matrix multiplication, the fast Fourier transform, and Ryser's algorithm for the matrix permanent.
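The rank-decomposition viewpoint is well illustrated by Strassen's scheme: the 2×2 matrix multiplication tensor has rank at most 7, so two 2×2 matrices can be multiplied with seven scalar products instead of the naive eight, and applying the scheme recursively to block matrices gives fast matrix multiplication.

```python
def strassen_2x2(A, B):
    """Strassen's seven-product scheme for 2x2 matrices: a rank-7
    decomposition of the 2x2 matrix multiplication tensor."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # recombine the seven products into the four entries of A @ B
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```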