Credits: 5

Schedule: 09.09.2019 - 04.12.2019

Teaching Period (valid 01.08.2018-31.07.2020): 

I-II (autumn) 2018 - 2019
I-II (autumn) 2019 - 2020

Learning Outcomes (valid 01.08.2018-31.07.2020): 

After completing the course, a student can: (i) explain the main concepts and approaches related to decision making and learning in stochastic time-series systems; (ii) read scientific literature to follow this developing field; (iii) implement algorithms such as value iteration and policy gradient.
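
For illustration only, here is a minimal value-iteration sketch in Python/NumPy on a hypothetical 2-state, 2-action MDP; the transition probabilities, rewards, and discount factor are made-up example values, not course material.

    import numpy as np

    n_states, n_actions, gamma = 2, 2, 0.9

    # Hypothetical example MDP: P[s, a, s'] = transition probability,
    # R[s, a] = expected immediate reward (illustrative values only).
    P = np.array([[[0.8, 0.2], [0.1, 0.9]],
                  [[0.5, 0.5], [0.0, 1.0]]])
    R = np.array([[1.0, 0.0],
                  [0.0, 2.0]])

    V = np.zeros(n_states)
    for _ in range(1000):
        # Bellman optimality backup: Q(s,a) = R(s,a) + gamma * sum_s' P(s,a,s') V(s')
        Q = R + gamma * P @ V
        V_new = Q.max(axis=1)
        delta = np.max(np.abs(V_new - V))
        V = V_new
        if delta < 1e-8:
            break

    print("V* =", V, "greedy policy =", Q.argmax(axis=1))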

Content (valid 01.08.2018-31.07.2020): 

Modeling uncertainty. Markov decision processes. Model-based reinforcement learning. Model-free reinforcement learning. Function approximation. Policy gradient. Partially observable Markov decision processes.
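
As a second illustration, a minimal REINFORCE-style policy gradient sketch in Python/NumPy on a hypothetical two-armed bandit task; the softmax policy, learning rate, and reward model are illustrative assumptions rather than the course's reference implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    true_means = np.array([0.2, 0.8])   # hypothetical expected reward of each action
    theta = np.zeros(2)                 # policy parameters (action preferences)
    alpha = 0.1                         # learning rate

    def softmax(x):
        z = np.exp(x - x.max())
        return z / z.sum()

    baseline = 0.0
    for episode in range(2000):
        probs = softmax(theta)
        a = rng.choice(2, p=probs)                   # sample an action from the policy
        r = rng.normal(true_means[a], 0.1)           # sample a noisy reward
        # REINFORCE: grad log pi(a) = one_hot(a) - probs for a softmax policy
        grad_log_pi = -probs
        grad_log_pi[a] += 1.0
        theta += alpha * (r - baseline) * grad_log_pi
        baseline += 0.05 * (r - baseline)            # running-average baseline

    print("learned action probabilities:", softmax(theta))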

Assessment Methods and Criteria (valid 01.08.2018-31.07.2020): 

Assignments and project work.

Elaboration of the evaluation criteria and methods, and informing students of the evaluation (applies to this course implementation): 

Grading 0-5. Quizzes 20 %, Assignments 50 %, Project 30 %. No exam.

To pass: completed assignments and a completed project.

Workload (valid 01.08.2018-31.07.2020): 

Contact teaching, independent study, assignments, and project work.

Contact teaching 56 h

Independent study 74 h

Study Material (valid 01.08.2018-31.07.2020): 

Lecture notes and online material.

Prerequisites (valid 01.08.2018-31.07.2020): 

Required: Basic programming skills, basic calculus (gradient), basic vector and matrix algebra, basic probability (random variables, expectation)
Recommended: Artificial Intelligence
Useful: Machine learning - basic principles, Digital and optimal control, Stochastics and estimation

Grading Scale (valid 01.08.2018-31.07.2020): 

0-5

Further Information (valid 01.08.2018-31.07.2020): 

Language class 3: English
