MS-C2111 - Stochastic Processes, 26.10.2020-09.12.2020
Topic outline
-
Lectures are delivered online via Zoom https://aalto.zoom.us/j/63492414200. Video clips of the lectures appear here afterwards. In addition to watching the lectures, you are encouraged to study the lecture notes, which provide more in-depth information.
-
Applications of stochastic processes
-
Definition and examples of Markov chains
-
Computing transient distributions using the initial distribution and transition matrix
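The formula behind this topic, pi_t = pi_0 P^t, can be sketched in a few lines of Python. The two-state transition matrix and initial distribution below are made-up illustrations, not taken from the course material.

```python
# Transient distributions of a Markov chain: pi_t = pi_0 P^t.
# P and pi0 are hypothetical example values.

def vec_mat(pi, P):
    """Row vector times matrix: (pi P)_j = sum_i pi_i P[i][j]."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

def transient_distribution(pi0, P, t):
    """Distribution at time t, obtained by t successive multiplications."""
    pi = pi0
    for _ in range(t):
        pi = vec_mat(pi, P)
    return pi

# Hypothetical two-state chain: 0 = sunny, 1 = rainy.
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi0 = [1.0, 0.0]                        # start from a sunny day
pi3 = transient_distribution(pi0, P, 3) # distribution after 3 steps
```

Iterating the vector-matrix product t times avoids forming the full matrix power P^t explicitly.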
-
Stochastic representation of a transition matrix, simulation of Markov chains as random dynamical systems
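The stochastic representation X_{t+1} = f(X_t, U_{t+1}), with IID uniform variables U_t, suggests a direct simulation scheme: invert the CDF of the current row of the transition matrix at a uniform random number. A minimal sketch, with a made-up transition matrix:

```python
import random

# Simulating a Markov chain as a random dynamical system
# X_{t+1} = f(X_t, U_{t+1}).  P is a hypothetical example.

def step(x, u, P):
    """Driving function f: choose the next state by inverting
    the CDF of row x of P at the uniform number u in [0, 1)."""
    cdf = 0.0
    for y, p in enumerate(P[x]):
        cdf += p
        if u < cdf:
            return y
    return len(P[x]) - 1  # guard against floating-point rounding

def simulate(x0, P, t, rng):
    """Simulate t steps starting from x0; returns the path of length t + 1."""
    path = [x0]
    for _ in range(t):
        path.append(step(path[-1], rng.random(), P))
    return path

P = [[0.9, 0.1],
     [0.5, 0.5]]
rng = random.Random(42)       # fixed seed for reproducibility
path = simulate(0, P, 10, rng)
```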
-
This video by Petar Kormushev illustrates an application of reinforcement learning methods for teaching a robot to flip pancakes. The theory of reinforcement learning is based on Markov chains and Markov decision processes.
-
Sample paths of stochastic processes rarely converge, but the time-dependent probability distributions might.
-
Any limiting distribution (if one exists) is an invariant distribution
-
Example of a regular Markov chain, convergence of transition matrix powers
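The convergence of the powers P^t for a regular chain is easy to observe numerically: all rows of a high power become (numerically) identical, each equal to the invariant distribution. The two-state matrix below is a made-up example whose invariant distribution is (5/6, 1/6).

```python
# For a regular Markov chain, P^t converges to a matrix with
# identical rows, each equal to the invariant distribution.
# P is a hypothetical example.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = [[0.9, 0.1],
     [0.5, 0.5]]
Pt = P
for _ in range(49):      # compute P^50
    Pt = mat_mul(Pt, P)
# both rows of P^50 are now numerically equal to (5/6, 1/6),
# the invariant distribution of this particular chain
```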
-
Example of a periodic Markov chain
-
Example of a Markov chain with many possible limiting distributions
-
Irreducible chain = MC with a strongly connected transition diagram
-
Period of a state = Greatest common divisor of the possible return times
-
Finite, irreducible, aperiodic MC converges in distribution to its unique invariant distribution
-
Reducible Markov chains can be reduced into distinct components
-
Online version of a famous book by Sean Meyn and Richard Tweedie about Markov chains on general state spaces.
Recommended extra reading for those curious to see how Markov chains on uncountable state spaces can be analyzed using advanced techniques of stochastics and topology.
-
Modeling an inventory process as a Markov chain
-
Definition of a Markov additive process; example: November rain
-
Computing expected values of Markov additive processes in finite time horizon
-
Ergodic theorems connect time-averages with mathematical expectations, and extend laws of large numbers from IID sequences to Markov chains and Markov additive processes.
-
Passage times as extended random numbers; arithmetic with infinity
-
Expected passage times form a solution to a system of linear equations
-
Solving expected passage times in a concrete case
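As a concrete illustration of the linear system k(x) = 1 + sum_y P[x][y] k(y) (with k = 0 at the target), here is a sketch for a made-up three-state chain with an absorbing target state; the solver is a plain Gaussian elimination.

```python
# Expected passage times solve (I - P restricted to non-target states) k = 1.
# The three-state chain below is a hypothetical example.

def solve(A, b):
    """Naive Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * x for a, x in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def expected_passage_times(P, target):
    """Solve k(x) = 1 + sum_y P[x][y] k(y) for x != target, k(target) = 0."""
    states = [x for x in range(len(P)) if x != target]
    A = [[(1.0 if x == y else 0.0) - P[x][y] for y in states] for x in states]
    k = solve(A, [1.0] * len(states))
    return dict(zip(states, k))

P = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.0, 1.0]]   # state 2 is the absorbing target
k = expected_passage_times(P, target=2)
```

For this chain one can check by hand that k(1) = 3 and k(0) = 4, which the solver reproduces.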
-
Hitting probabilities form a solution to another system of linear equations
-
For one-dimensional random walks, linear equations for solving hitting probabilities become linear difference equations
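For the simple random walk on {0, ..., N} stepping up with probability p and down with probability q = 1 - p, the difference equation h(i) = p h(i+1) + q h(i-1) with h(0) = 0, h(N) = 1 has the classical gambler's-ruin solution. A sketch that also verifies the difference equation numerically (N and p are made-up values):

```python
# Hitting probabilities h(i) = P(reach N before 0 | start at i)
# for a simple random walk; closed-form gambler's-ruin solution.

def hitting_probability(i, N, p):
    q = 1.0 - p
    if abs(p - q) < 1e-12:       # symmetric walk: linear solution
        return i / N
    r = q / p
    return (1 - r**i) / (1 - r**N)

N, p = 10, 0.4                   # hypothetical parameters
h = [hitting_probability(i, N, p) for i in range(N + 1)]

# the closed form satisfies h(i) = p*h(i+1) + q*h(i-1) for 0 < i < N:
residuals = [h[i] - (p * h[i + 1] + (1 - p) * h[i - 1])
             for i in range(1, N)]
```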
-
Stochastic processes on infinite state spaces may be simpler to analyse
-
Computing with infinite vectors and matrices
-
Extending known theorems from the finite world to the infinite setting
-
Stochastic coupling method for proving the convergence theorem, Markov chain covering theorem
-
Reversible Markov chains and detailed balance conditions
-
Birth-death chains only have local transitions, which allows deriving analytical formulas for invariant distributions
-
A random walk on the nonnegative integers may or may not have an invariant distribution
-
Definition: Branching process & Galton-Watson tree
-
Branching process as an infinite-state Markov chain, transition matrix
-
Probability generating function as a tool for analysing discrete probability distributions
-
Computing the probability generating function of a random sum
-
Computing the pgf of the population size distribution using the pgf of the offspring distribution
-
Expected population size either remains constant, or grows/decreases exponentially
-
The probability of eventual extinction equals the smallest nonnegative fixed point of the offspring pgf.
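The smallest nonnegative fixed point can be computed by iterating q_{n+1} = f(q_n) from q_0 = 0, since the iterates increase monotonically to the extinction probability. The offspring distribution below (0, 1, or 2 children with probabilities 1/4, 1/4, 1/2) is a made-up example whose extinction probability is exactly 1/2.

```python
# Extinction probability of a branching process = smallest
# nonnegative fixed point of the offspring pgf f, found by
# iterating q <- f(q) from q = 0.

def pgf(s, dist):
    """Probability generating function f(s) = sum_k p_k s^k."""
    return sum(p * s**k for k, p in dist.items())

def extinction_probability(dist, tol=1e-12, max_iter=10_000):
    q = 0.0
    for _ in range(max_iter):
        q_next = pgf(q, dist)
        if abs(q_next - q) < tol:
            return q_next
        q = q_next
    return q

offspring = {0: 0.25, 1: 0.25, 2: 0.5}   # hypothetical distribution
q = extinction_probability(offspring)
# mean offspring = 1.25 > 1, so extinction is not certain: q = 0.5 here
```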
-
Populations where individuals produce one child or fewer on average will surely go extinct
-
Definition of a random point pattern as a random set
-
Homogeneous and independently scattered point patterns as models of maximally random events
-
The event count distribution of a homogeneous & independently scattered point pattern is a Poisson distribution.
-
The number of unlikely successes in a large number of trials is approximately Poisson.
-
Definition of a Poisson process
-
Winning time and winner's identity in an exponential race
-
Simulating a homogeneous & independently scattered random point pattern.
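On the line, such a point pattern can be simulated by summing IID exponential interevent times until the observation window is exhausted; the resulting event count is Poisson distributed. A minimal sketch with made-up parameters:

```python
import random

# Simulating a homogeneous, independently scattered point pattern
# (Poisson process) on [0, T] with intensity lam, via IID
# Exp(lam) interevent times.  lam and T are hypothetical values.

def poisson_points(lam, T, rng):
    points, t = [], 0.0
    while True:
        t += rng.expovariate(lam)   # exponential interevent time
        if t > T:
            return points
        points.append(t)

rng = random.Random(0)              # fixed seed for reproducibility
pts = poisson_points(lam=2.0, T=100.0, rng=rng)
# len(pts) is Poisson(lam * T) distributed, i.e. around 200 here
```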
-
Summary of basic properties of Poisson processes
-
The superposition of mutually independent Poisson processes is a Poisson process
-
Compound Poisson processes calculate cumulative rewards/costs associated with random events
-
Independently thinned Poisson processes are Poisson processes. Remarkably, the thinned processes are also mutually independent.
-
Renewal processes count random events for which the interevent times are mutually independent but not necessarily exponentially distributed
-
Homogeneous and independently scattered random point patterns can be studied in multidimensional spaces (Poisson random measures) and also in space-time settings (Poisson rain)
-
Stochastic processes and Markov property in continuous time
-
For every time step length, there is one transition matrix
-
Elements of a transition semigroup near the origin uniquely determine the semigroup at all time instants
-
Generator matrix is a matrix-valued derivative of a transition semigroup
-
The time-dependent distributions of a continuous-time Markov chain can be computed as a product of the initial distribution (as row vector) and a matrix exponential of the generator matrix
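The relation pi_t = pi_0 exp(tQ) can be demonstrated with a truncated Taylor series for the matrix exponential (in practice one would use a library routine such as scipy.linalg.expm). The two-state generator below is a made-up example with a known closed-form solution.

```python
import math

# Time-dependent distribution of a continuous-time Markov chain:
# pi_t = pi_0 * exp(tQ).  The matrix exponential is approximated
# here by the Taylor series sum_n (tQ)^n / n!; Q is hypothetical.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def expm(Q, t, terms=60):
    """exp(tQ) via a truncated Taylor series (fine for small t*Q)."""
    n = len(Q)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # I
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = mat_mul(term, [[t * q / k for q in row] for row in Q])
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

a, b, t = 1.0, 2.0, 0.7          # hypothetical rates and time
Q = [[-a, a],
     [b, -b]]
Pt = expm(Q, t)
pi0 = [1.0, 0.0]
pi_t = [sum(pi0[i] * Pt[i][j] for i in range(2)) for j in range(2)]
# closed form for this 2-state chain:
# pi_t[0] = b/(a+b) + a/(a+b) * exp(-(a+b)*t)
```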
-
Jump rates and jump probabilities associated with continuous-time Markov chains
-
Jump rates and jump probabilities determine the generator
-
Example: Machine maintenance
-
Three simulation methods:
- Scaled exponentials
- Overclocking
- Pairwise Poisson triggers
-
Continuous-time Markov chains with unbounded jump rates may experience infinitely many transitions in finite time
-
Computing time-dependent distributions using matrix exponentials
-
Computing invariant distributions using linear algebra
-
Martingales are used as stochastic models, and also as theoretical tools for analysing other stochastic processes.
-
Relative conditional expectation is defined as a random variable representing an outsider's view of an expected value computed by an insider.
-
Expected value of the number of aces in the hand of player 2, from the point of view of player 1, player 2, and the dealer.
-
Functional dependence is a special form of dependence, not to be confused with stochastic dependence
-
Arithmetic rules for working with conditional expectations
-
Definition: Martingale, submartingale, supermartingale
-
Random walks with zero drift are martingales
-
Prediction martingales form an important class of stochastic processes having convergent paths
-
Nonnegative martingales and bounded martingales converge pathwise with probability one
-
Functional transforms of Markov chains with zero drift are martingales, and so are normalised branching processes.
-
Doubling is a famous gambling strategy that seemingly provides a risk-free profit
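The strategy (double the stake after every loss, stop at the first win) can be simulated in a few lines; every completed round nets exactly +1, but the capital put at risk before the win is unbounded, which is where the apparent free lunch breaks down. The simulation parameters are made up.

```python
import random

# Doubling strategy on fair coin flips: double the stake after
# each loss, stop at the first win.  A winning flip pays twice
# the stake.  Parameters below are hypothetical.

def play_round(rng):
    """Returns (net profit, total money staked) for one round."""
    stake, spent = 1, 0
    while True:
        spent += stake
        if rng.random() < 0.5:        # win: collect twice the stake
            return 2 * stake - spent, spent
        stake *= 2                    # loss: double and try again

rng = random.Random(1)
profits, costs = zip(*(play_round(rng) for _ in range(1000)))
# every round ends with net profit exactly +1 ...
max_cost = max(costs)
# ... but max_cost shows how much capital some rounds required
```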
-
Net profit from adaptive betting can be represented as an integral process against a unit return process
-
Integrating a previsible process against a martingale produces a martingale
-
Stopping a martingale at an optional time produces a martingale
-
Doob's optional stopping theorem returns the expected value of a martingale observed at an optional time.
-
Summary of main learning outcomes and closing the course.