In the first lecture, you will hear about how machine learning and data science have become increasingly relevant in every aspect of human life. We will then cover the course logistics and, if time permits, start to describe the formal setup of a machine learning problem.
15.09.2017. Lecture 2 - The Elements of Machine Learning
In this lecture, you will hear about the main components of a machine learning problem and its solution. We will discuss how to translate raw data into features and labels. One of the main goals of machine learning is to find a useful mapping from features to labels. Such mappings are known as predictors (for continuous-valued labels) or classifiers (for discrete-valued labels). We will discuss the notions of loss and empirical risk for assessing the quality of a particular predictor or classifier.
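The empirical risk mentioned above is simply the average loss of a predictor over the available data. A minimal sketch in Python (the toy data, the candidate predictor `h`, and the helper names are illustrative, not part of the course material):

```python
# Empirical risk: average loss of a predictor h over a dataset (xs, ys).
# All names and data here are hypothetical, for illustration only.

def empirical_risk(h, xs, ys, loss):
    """Average loss of predictor h over the labeled dataset (xs, ys)."""
    return sum(loss(h(x), y) for x, y in zip(xs, ys)) / len(xs)

def squared_loss(prediction, label):
    """Squared-error loss, a common choice for continuous-valued labels."""
    return (prediction - label) ** 2

# Toy data: the label is roughly twice the feature.
xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.0]

h = lambda x: 2.0 * x  # a candidate predictor
risk = empirical_risk(h, xs, ys, squared_loss)
```

A smaller empirical risk means the predictor fits the given data better; choosing the predictor that minimizes it is the core of empirical risk minimization.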
In this lecture, we will start covering some powerful regression methods built on the basic linear regression formalism. We will extend linear regression to fully non-linear models via kernel regression. We will also consider the probabilistic learning setting and discuss Bayesian linear regression, which lets us quantify the uncertainty in the model. Both approaches are widely used in many domains.
In this lecture, you will hear about one of the central challenges in ML: how to validate a particular ML method. By validating an ML method or model, we can assess how reliable its predictions are. It is typically very helpful to have not only a single quantitative prediction but also a measure of how confident we should be in that prediction.
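One standard way to validate an ML method is K-fold cross-validation: split the data into K folds, train on K-1 of them, measure the loss on the held-out fold, and average over all folds. A minimal sketch, where the constant "predict the mean label" model and the toy data are purely illustrative:

```python
# K-fold cross-validation sketch. The model and data are hypothetical,
# chosen only to illustrate the train/held-out split-and-average pattern.

def k_fold_splits(n, k):
    """Yield (train_indices, test_indices) pairs for K-fold CV."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

def cross_validated_risk(fit, loss, xs, ys, k=5):
    """Average held-out loss of the method `fit` over k folds."""
    risks = []
    for train, test in k_fold_splits(len(xs), k):
        h = fit([xs[i] for i in train], [ys[i] for i in train])
        risks.append(sum(loss(h(xs[i]), ys[i]) for i in test) / len(test))
    return sum(risks) / len(risks)

# Toy method: always predict the mean of the training labels.
fit_mean = lambda xs, ys: (lambda x, m=sum(ys) / len(ys): m)
squared = lambda p, y: (p - y) ** 2

xs = list(range(10))
ys = [float(x) for x in xs]
cv_risk = cross_validated_risk(fit_mean, squared, xs, ys, k=5)
```

Because every loss is computed on data the model never saw during fitting, the cross-validated risk estimates how the method will perform on new data, which is exactly what validation is after.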