Approximate Bayesian Computation (a.k.a. ABC, likelihood-free inference) is a class of computational inference methods that can be used when the likelihood function is intractable or unknown, but one has a simulator that (hopefully) generates data resembling the observations when run with the correct parameters. The underlying intuition is that similar model parameters are likely to generate similar data; the practice is of course a bit more complex...
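To make the intuition concrete, here is a minimal rejection-ABC sketch in plain NumPy (a toy Gaussian model with made-up names, just to illustrate the idea; not course material):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data: 100 draws from a Gaussian with unknown mean (true mean = 2.0).
y_obs = rng.normal(loc=2.0, scale=1.0, size=100)

def simulator(mu, size=100):
    """Simulate a data set for a candidate parameter value."""
    return rng.normal(loc=mu, scale=1.0, size=size)

def summary(data):
    """Summary statistic: here simply the sample mean."""
    return data.mean()

# Rejection ABC: draw parameters from the prior, simulate data with each,
# and keep the parameters whose summary lands within epsilon of the observed one.
n_draws, epsilon = 100_000, 0.05
mu_prior = rng.uniform(-5.0, 5.0, size=n_draws)
accepted = [mu for mu in mu_prior
            if abs(summary(simulator(mu)) - summary(y_obs)) < epsilon]

print(f"accepted {len(accepted)} of {n_draws} draws; posterior mean ~ {np.mean(accepted):.2f}")
```

The accepted parameters approximate the posterior: with a small epsilon and an informative summary, their distribution concentrates around the true mean.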
ABC has applications ranging from medicine to particle physics, and is expected to revolutionize computational sciences where traditional statistical methods cannot be applied.
To pass the course you must complete all of the following (percentage of grade in parentheses):
- Give a presentation (approx 45 min) on a related topic (30%)
- Act as an opponent in 2 presentations (5%)
- Keep a learning diary (up to 1-2 pages per topic, arbitrary format) (20%)
- Finish an assignment (30%)
- Peer-review 2 assignments (5%)
- Present your assignment (approx 5-10 min) (10%)
The first meeting is on 9 January at 16:15 in room T5. It will include an introduction to ABC, and we will agree on seminar duties and the schedule.
Meeting dates (subject to changes!):
- 9 January (introduction & practicals)
- 16 January (Jonathan: Rejection sampling, MCMC-ABC, SMC-ABC; Akash: Summary statistics & distance)
- 23 January (Ossi: Convergence, validation, post-processing in ABC)
- 30 January (Ivan: Model selection in ABC)
- 20 February (Sidd: BOLFI)
- 27 February (Cagatay: Variational methods in ABC)
- 6 March (Zheyang: Classifier ABC, LFIRE)
- 13 March (Lifang: High-dimensional ABC)
- 27 March (Sachith: GANs for LIFE)
- 24 April (assignment presentations)
Opponents should look at the topic beforehand and actively ask questions. The opponents for each presentation are the presenters of the two following weeks, wrapping around at the end of the schedule. It is OK to substitute missed duties at other times as well.
Lectures should cover the general idea of the algorithm, its theory, and examples, and should last about 45 minutes. Consider implementing the algorithm yourself and presenting how it works (this is probably the best way to learn!).
Your learning diary should be brief (!) documentation of the lecture topics, based either on the lectures or on your own googling. It can be anything from notes on paper to Jupyter notebooks; the point is that you learn. :)
The assignment is to perform a case study on a suitable problem, either with real data or a toy model, and to produce a Jupyter notebook with an introduction, methods, code, and results. Please check your topic with the lecturer. The due date for assignments is 29 April, after which there is one week for peer review. The assignments should be done within the ELFI framework (disclaimer: the lecturer is one of its authors) and submitted to ELFI's Zoo area.
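As a rough sketch of what such a notebook might contain, here is a toy ELFI model in the spirit of ELFI's own tutorial (the Gaussian model, the function names, and the exact parameter choices are my own illustration; please check the current ELFI documentation for the definitive API):

```python
import numpy as np
import elfi

# Toy "observed" data: one data set of 100 draws from N(2, 1) (true mean = 2.0).
y_obs = np.random.normal(2.0, 1.0, size=(1, 100))

def simulator(mu, batch_size=1, random_state=None):
    # ELFI calls simulators in batches; mu arrives as an array of length batch_size.
    random_state = random_state or np.random
    mu = np.asarray(mu).reshape(batch_size, 1)
    return random_state.normal(mu, 1.0, size=(batch_size, 100))

def mean_summary(x):
    # Sample mean of each simulated (or observed) data set.
    return np.mean(x, axis=1)

# Build the inference model as an ELFI graph: prior -> simulator -> summary -> distance.
mu = elfi.Prior('uniform', -5, 10)                 # uniform prior on [-5, 5]
sim = elfi.Simulator(simulator, mu, observed=y_obs)
S = elfi.Summary(mean_summary, sim)
d = elfi.Distance('euclidean', S)

# Plain rejection ABC: keep the closest 1% of the simulations.
rej = elfi.Rejection(d, batch_size=1000)
result = rej.sample(1000, quantile=0.01)
print(result)
```

A real assignment would of course use a more interesting simulator, motivate the choice of summaries and distance, and discuss the results, but the overall notebook structure (model graph, inference method, analysis of the posterior) can follow this pattern.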
Some material in arbitrary order (also check references therein):
- Marin, J.-M., Pudlo, P., Robert, C. P., and Ryder, R. J. (2012). Approximate Bayesian computational methods. Statistics and Computing, 22(6):1167–1180
- Lintusaari, J., Gutmann, M. U., Dutta, R., Kaski, S., and Corander, J. (2016). Fundamentals and recent developments in approximate Bayesian computation, Systematic Biology, doi: 10.1093/sysbio/syw077
- Michael U. Gutmann and Jukka Corander (2016). Bayesian Optimization for Likelihood-Free Inference of Simulator-Based Statistical Models, Journal of Machine Learning Research 17, 1-47
- Pablo Montero and José A. Vilar. (2015) TSclust: An R Package for Time Series Clustering, Journal of Statistical Software 62, 1
- Michael U. Gutmann, Ritabrata Dutta, Samuel Kaski and Jukka Corander. Statistical Inference of Intractable Generative Models via Classification, ArXiv preprint
- Alexander Moreno, Tameem Adel, Edward Meeds, James M. Rehg, Max Welling. Automatic Variational ABC, ArXiv preprint
- Minh-Ngoc Tran, David J. Nott, Robert Kohn. Variational Bayes with Intractable Likelihood, ArXiv preprint
- George Karabatsos, Fabrizio Leisen. An Approximate Likelihood Perspective on ABC Methods, ArXiv preprint
- Ritabrata Dutta, Jukka Corander, Samuel Kaski, Michael U. Gutmann. Likelihood-free inference by ratio estimation (LFIRE), ArXiv preprint
- Stuart Barber, Jochen Voss and Mark Webster. The rate of convergence for approximate Bayesian computation, Electronic Journal of Statistics, 9, 2015
- Vinay Jethava, Devdatt Dubhashi. GANS for LIFE, ArXiv preprint
- Ong, Victor M. H., Nott, David J., Tran, Minh-Ngoc, Sisson, Scott A., and Drovandi, Christopher C. Likelihood-free inference in high dimensions with synthetic likelihood, unpublished