Section description

  • CS-E4500 Advanced Course in Algorithms D

    Spring 2023 — Algebraic Algorithms: Classical and Quantum


    PLEASE NOTE. All course activity takes place in the A+ system. The material will be available in A+ in the week of 2 January 2023. Course activity starts 10 January 2023.

    Overview. This is an advanced-level optional course that in spring 2023 explores algebraic algorithm designs for univariate polynomials and large-integer arithmetic, as well as select mathematically related concepts in quantum computation, with a three-fold motivation:

    1. The toolbox of classical near-linear-time algorithms for univariate polynomials and large integers provides a practical showcase of recurrent mathematical ideas in algorithm design such as linearity, duality, divide-and-conquer, dynamic programming, iteration and invariants, approximation, parameterization, tradeoffs between resources and objectives, and randomization. This exposure to key classical ideas enables us to relate and compare the classical notions with their high-dimensional quantum analogs, for example, the fast Fourier transform with the fast quantum Fourier transform.
    2. We gain exposure to a number of classical and more recent applications, such as secret-sharing, error-correcting codes, quantum error correction, as well as quantum algorithms for period-finding and quantum search by amplitude amplification.
    3. Among key motivators for the ongoing push towards quantum computing are (i) Shor's polynomial-time factorization algorithm for integers, a task for which no classical polynomial-time algorithm is known at the moment, as well as (ii) Grover's fast quantum search, which in many applications reduces the search time to essentially the square root of its classical black-box counterpart. We study both algorithm designs in a quantum circuit model of computation, which we develop from mathematical-minimalist first principles.


    Prerequisites: Fundamentals of algorithm design and analysis (e.g. CS-E3190). Mathematical maturity.

    Teacher: Prof. Petteri Kaski.

    First lecture: Tuesday 10 January, 12–14, Hall T5 (CS Building).

    Grading: Total points awarded from weekly problem sets determine the course grade. No exam.

    Learning objectives

    • Terminology and objectives of modern algorithmics, including elements of algebraic, approximation, randomized, and quantum algorithms.
    • The art of solving a large problem by reduction to one or more smaller instances of the same or a related problem.
    • (Linear) independence, dependence, and their abstractions as enablers of efficient algorithms. Computational serendipity of high-dimensional quantum states.
    • Making use of duality. Often a problem has a corresponding dual problem that is obtainable from the original (the primal) problem by means of an easy transformation. The primal and dual control each other, enabling an algorithm designer to use the interplay between the two representations.
    • Relaxation and tradeoffs between objectives and resources as design tools. Instead of computing the exact optimum solution at considerable cost, often a less costly but principled approximation suffices. Instead of the complete dual, often only a randomly chosen partial dual or other relaxation suffices to arrive at a solution with high probability.


    Short synopsis of lectures

    Polynomials in one variable are among the most elementary and most useful mathematical objects, with broad-ranging applications from signal processing to error-correcting codes and advanced applications such as probabilistically checkable proofs. One of the main reasons why polynomials are useful in a myriad of applications is that highly efficient algorithms are known for computing with polynomials. These lectures introduce you to this near-linear-time toolbox and its select applications, with some algorithmic ideas dating back millennia, and some introduced only in the last few years.

    By virtue of the positional number system, algorithms for computing with polynomials are closely related to algorithms for computing with integers. In most cases, algorithms for polynomials are conceptually easier and thus form our principal object of study during our weekly lectures, with the corresponding algorithms for integers left for the exercises or for further study.
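    To make the correspondence concrete, here is a minimal Python sketch (an illustration only, not course code): an integer written in base 10 is a polynomial in the base with digit coefficients, so integer multiplication reduces to multiplying the digit polynomials and then propagating carries.

```python
# A minimal sketch of the polynomial/integer correspondence: an integer in
# base B is a polynomial with digit coefficients evaluated at x = B, so
# integer multiplication is polynomial multiplication plus carry propagation.

def poly_mul(a, b):
    """Schoolbook product of coefficient vectors a, b (low degree first)."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def carry(coeffs, base=10):
    """Normalize coefficients into digits 0..base-1 by propagating carries."""
    digits, carry_val = [], 0
    for c in coeffs:
        carry_val, d = divmod(c + carry_val, base)
        digits.append(d)
    while carry_val:
        carry_val, d = divmod(carry_val, base)
        digits.append(d)
    return digits

x, y = 1234, 567
dx = [int(d) for d in reversed(str(x))]   # digits, least significant first
dy = [int(d) for d in reversed(str(y))]
digits = carry(poly_mul(dx, dy))
assert int("".join(map(str, reversed(digits)))) == x * y
```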

    A tantalizing case where the connection between polynomials and integers apparently breaks down occurs with factoring. Namely, it is known how to efficiently factor a given univariate polynomial over a finite field into its irreducible components, whereas no such classical algorithm is known at the moment for factoring a given integer into its prime factors. Indeed, the best known classical algorithms for factoring integers run in time that scales moderately exponentially in the number of digits in the input.

    However, integers can be factored efficiently in a quantum circuit model of computation, which presents one of the leading motivators for the push towards quantum computing. Our lectures start from classical Boolean circuits and reversible Boolean gates, extend to quantum circuits and quantum gates, observe the potential serendipity of quantum parallelism, and proceed to use it to our advantage by developing Shor's efficient period-finding algorithm (leading to fast integer factorization) as well as Grover's fast quantum search by amplitude amplification.


    Lecture schedule (tentative)

    Lecture 1 (Tue 10 Jan): Polynomials and integers. We start with elementary computational tasks involving polynomials, such as polynomial addition, multiplication, division (quotient and remainder), greatest common divisor, evaluation, and interpolation. We observe that polynomials admit two natural representations: coefficient representation and evaluation representation. We encounter the more-than-2000-year-old algorithm of Euclid for computing a greatest common divisor. We observe the connection between polynomials in coefficient representation and integers represented in the positional number system.
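    As a first taste of these operations, the following minimal Python sketch (an illustration only, not course code) shows Euclid's algorithm for the greatest common divisor of two integers and Horner's rule for evaluating a polynomial given in coefficient representation; evaluating the digit polynomial at the base recovers the integer it represents.

```python
# Minimal sketches of two elementary operations from Lecture 1.

def gcd(a, b):
    """Euclid's algorithm: replace (a, b) by (b, a mod b) until b = 0."""
    while b:
        a, b = b, a % b
    return a

def evaluate(coeffs, x):
    """Horner's rule: evaluate a polynomial given in coefficient
    representation coeffs = [c0, c1, ..., cn] (low degree first) at x."""
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

assert gcd(252, 198) == 18
# p(x) = 4 + 3x + 2x^2 + x^3 at x = 10 recovers the integer 1234.
assert evaluate([4, 3, 2, 1], 10) == 1234
```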

    Lecture 2 (Tue 17 Jan): The fast Fourier transform and fast multiplication. We derive one of the most fundamental and widely deployed algorithms in all of computing, namely the fast Fourier transform and its inverse. We explore the consequences of this near-linear-time-computable duality between the coefficient and evaluation representations of a polynomial. A key consequence is that we can multiply two polynomials in near-linear time. We obtain an algorithm for integer multiplication by reduction to polynomial multiplication.
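    The following is a minimal Python sketch (an illustration only, not course code) of a radix-2 fast Fourier transform over the complex numbers and of polynomial multiplication by the evaluate-multiply-interpolate route; the lectures develop the transform and its analysis in more generality.

```python
# A minimal radix-2 FFT sketch using complex roots of unity. The course may
# develop the transform over other coefficient rings; this only illustrates
# the divide-and-conquer structure and the multiplication application.
import cmath

def fft(a, invert=False):
    """Radix-2 Cooley-Tukey DFT of a (length a power of two); invert flips
    the sign of the root of unity (scaling by 1/n is left to the caller)."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def poly_mul_fft(a, b):
    """Multiply coefficient vectors: evaluate, multiply pointwise, interpolate."""
    n = 1
    while n < len(a) + len(b) - 1:
        n *= 2
    fa = fft(a + [0] * (n - len(a)))
    fb = fft(b + [0] * (n - len(b)))
    fc = [x * y for x, y in zip(fa, fb)]
    c = fft(fc, invert=True)
    return [round((x / n).real) for x in c[:len(a) + len(b) - 1]]

# (4 + 3x + 2x^2 + x^3)(6 + 5x) = 24 + 38x + 27x^2 + 16x^3 + 5x^4.
assert poly_mul_fft([4, 3, 2, 1], [6, 5]) == [24, 38, 27, 16, 5]
```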

    Lecture 3 (Tue 24 Jan): Quotient and remainder. We continue the development of the fast polynomial toolbox with near-linear-time polynomial division (quotient and remainder). The methodological protagonist for this lecture is Newton iteration. We explore Newton iteration and its convergence both in the continuous and in the discrete settings, including fast quotient and remainder over the integers.
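    The following minimal Python sketch (an illustration only, not course code) shows the Newton update g <- g(2 - fg) used to invert a power series modulo x^n, the core subroutine behind fast quotient computation; exact rational coefficients stand in here for whatever coefficient ring the lectures work over.

```python
# A minimal sketch of Newton iteration for inverting a power series f
# modulo x^n; each round roughly doubles the number of correct coefficients.
from fractions import Fraction

def poly_mul_trunc(a, b, n):
    """Product of coefficient vectors a and b, truncated modulo x^n."""
    c = [Fraction(0)] * n
    for i, ai in enumerate(a):
        if i >= n:
            break
        for j, bj in enumerate(b):
            if i + j >= n:
                break
            c[i + j] += ai * bj
    return c

def inverse_mod_xn(f, n):
    """Newton iteration g <- g * (2 - f * g), doubling the precision each round."""
    assert f[0] != 0
    g = [Fraction(1) / f[0]]
    prec = 1
    while prec < n:
        prec = min(2 * prec, n)
        fg = poly_mul_trunc(f[:prec], g, prec)
        two_minus_fg = [-c for c in fg]
        two_minus_fg[0] += 2
        g = poly_mul_trunc(g, two_minus_fg, prec)
    return g

# Example: 1 / (1 - x) = 1 + x + x^2 + ... (mod x^5).
f = [Fraction(1), Fraction(-1)]
assert inverse_mod_xn(f, 5) == [Fraction(1)] * 5
```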

    Lecture 4 (Tue 31 Jan): Batch evaluation and interpolation. We derive near-linear-time algorithms for batch evaluation and interpolation of polynomials using recursive remaindering along a subproduct tree. In terms of methodological principles, we encounter algebraic divide-and-conquer, dynamic programming, and space-time tradeoffs. As an application, we encounter secret sharing.
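    The following minimal Python sketch (an illustration only, not course code) shows the structure of batch evaluation by recursive remaindering along a subproduct tree; schoolbook products and remainders stand in for the near-linear-time subroutines developed in the lecture.

```python
# A minimal sketch of multipoint evaluation by recursive remaindering:
# reduce f modulo the product of (x - u) over each half of the points,
# then recurse on the halves.

def poly_mul(a, b):
    """Schoolbook product of coefficient vectors (low degree first)."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def poly_rem(a, m):
    """Remainder of a modulo a monic polynomial m (low degree first)."""
    a = list(a)
    while len(a) >= len(m):
        q = a[-1]  # m is monic, so the next quotient coefficient is a[-1]
        for i in range(len(m)):
            a[len(a) - len(m) + i] -= q * m[i]
        a.pop()
    return a

def multipoint_eval(f, points):
    """Evaluate f at all points by halving: f mod prod(x - u) on each half."""
    if len(points) == 1:
        r = poly_rem(f, [-points[0], 1])
        return [r[0] if r else 0]
    mid = len(points) // 2
    left, right = points[:mid], points[mid:]
    m_left, m_right = [1], [1]
    for u in left:
        m_left = poly_mul(m_left, [-u, 1])
    for u in right:
        m_right = poly_mul(m_right, [-u, 1])
    return (multipoint_eval(poly_rem(f, m_left), left)
            + multipoint_eval(poly_rem(f, m_right), right))

# f(x) = 4 + 3x + 2x^2 + x^3 at the points 0, 1, 2, 3.
assert multipoint_eval([4, 3, 2, 1], [0, 1, 2, 3]) == [4, 10, 26, 58]
```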

    Lecture 5 (Tue 7 Feb): Interpolation from erroneous data. This lecture culminates our development of the near-linear-time toolbox for univariate polynomials. Using a fast version of the extended Euclidean algorithm, we present a near-linear-time polynomial interpolation algorithm that is robust to errors in the input data up to the information-theoretic maximum number of errors for correct recovery. As an application, we encounter Reed–Solomon error-correcting codes together with near-linear-time encoding and decoding algorithms. Time permitting, we will look at an analogous setting over the integers, namely Chinese remaindering with errors.
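    For orientation, the following minimal Python sketch (an illustration only, not course code) shows the plain extended Euclidean algorithm over the integers; the lecture develops a fast polynomial version whose intermediate remainders, stopped at the right degree, yield the error-tolerant interpolant.

```python
# A minimal sketch of the (slow, integer) extended Euclidean algorithm,
# maintaining Bezout coefficients alongside the remainder sequence.

def extended_gcd(a, b):
    """Return (g, s, t) with s*a + t*b = g = gcd(a, b)."""
    r0, r1 = a, b
    s0, s1 = 1, 0
    t0, t1 = 0, 1
    while r1 != 0:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
        t0, t1 = t1, t0 - q * t1
    return r0, s0, t0

g, s, t = extended_gcd(240, 46)
assert g == 2 and s * 240 + t * 46 == g
```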

    Lecture 6 (Tue 14 Feb): Circuits, from classical to quantum. This lecture develops a basic circuit model of quantum computation by starting with classical Boolean circuits, transforming to reversible-gate Boolean circuits, and extending to quantum circuits. For the latter, we give a mathematical-minimalist review of the standard (finite-dimensional) Hilbert-space model for isolated quantum systems and state evolution by unitary transformations alternating with measurement by orthogonal observables. From a computational standpoint a key observation is the massive parallelism offered by the high dimensionality of the state space compared with classical models of computation.
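    The following minimal Python/NumPy sketch (an illustration only, not course code) shows the bare Hilbert-space model on two qubits: a state is a unit vector, gates are unitary matrices, and measuring in the computational basis outputs index k with probability equal to the squared magnitude of amplitude k.

```python
# A minimal sketch of the finite-dimensional Hilbert-space model: states are
# unit vectors, gates are unitary matrices, and measurement in the
# computational basis outputs index k with probability |amplitude_k|^2.
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)      # Hadamard gate (unitary)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])           # controlled-NOT on two qubits

state = np.zeros(4); state[0] = 1.0       # |00>
state = np.kron(H, np.eye(2)) @ state     # Hadamard on the first qubit
state = CNOT @ state                      # entangle: (|00> + |11>)/sqrt(2)

probabilities = np.abs(state) ** 2
assert np.allclose(probabilities, [0.5, 0, 0, 0.5])
```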

    Tue 21 Feb: Exam week, no lecture.

    Lecture 7 (Tue 28 Feb): Quantum error correction. This lecture gives an introduction to quantum error correction. (A more detailed synopsis will be available in early January.)

    Tue 7 Mar: Break, no lecture.

    Lecture 8 (Tue 14 Mar): The quantum Fourier transform and period-finding. This lecture gives our first encounter with nontrivial quantum algorithms and the advantage of quantum parallelism. We encounter the quantum Fourier transform (QFT) as a higher-dimensional analog of the classical discrete Fourier transform, study Coppersmith's quantum circuit design for a radix-2 QFT, and develop Shor's period-finding algorithm, which we then turn, by further classical considerations, into an algorithm for efficiently factoring a given integer.
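    The following minimal Python/NumPy sketch (an illustration only, not course code) builds the QFT as an explicit unitary matrix and applies it to a periodic state, illustrating how the transform concentrates amplitude on multiples of N divided by the period; the lecture instead builds the same transform from a polynomial number of elementary gates.

```python
# A minimal sketch of the quantum Fourier transform on n qubits as an
# explicit 2^n x 2^n unitary matrix.
import numpy as np

def qft_matrix(n_qubits):
    """QFT matrix F with F[j, k] = omega^(j*k)/sqrt(N), omega = e^(2*pi*i/N)."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)
# Unitarity: F times its conjugate transpose is the identity.
assert np.allclose(F @ F.conj().T, np.eye(8))

# Applied to a state with period 2 over 8 basis states, the amplitude
# concentrates on multiples of 8/2 = 4: the phenomenon behind period-finding.
periodic = np.zeros(8); periodic[[0, 2, 4, 6]] = 0.5
peaks = np.abs(F @ periodic) ** 2
assert np.allclose(peaks[[0, 4]], [0.5, 0.5])
assert np.allclose(peaks[[1, 2, 3, 5, 6, 7]], 0)
```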

    Lecture 9 (Tue 21 Mar): Quantum search by amplitude amplification. This lecture develops Grover's iterative amplitude amplification technique for searching a massively parallel quantum state for basis states of interest. We observe that quantum searching can in many cases be carried out with essentially the square root of the resources that would be required by classical black-box search.
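    The following minimal Python/NumPy sketch (an illustration only, not course code) simulates Grover iterations on an explicit state vector: flip the sign of the marked basis state, then reflect about the uniform superposition; after roughly (pi/4)*sqrt(N) iterations the marked state is measured with probability close to one.

```python
# A minimal sketch of Grover's amplitude amplification on a state vector of
# dimension N with a single marked basis state.
import numpy as np

N, marked = 64, 37
state = np.full(N, 1 / np.sqrt(N))                   # uniform superposition
uniform = state.copy()

iterations = int(round(np.pi / 4 * np.sqrt(N)))      # ~6 iterations for N = 64
for _ in range(iterations):
    state[marked] *= -1                              # oracle: flip the marked amplitude
    state = 2 * uniform * (uniform @ state) - state  # reflect about the uniform state

# The marked state is now observed with probability close to one,
# using about sqrt(N) oracle queries instead of the classical ~N.
assert np.abs(state[marked]) ** 2 > 0.9
```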