CS6730 - Probabilistic Graphical Models

Course Data:

Changelog:

Course title and contents updated in October 2017. The old title was "Probabilistic reasoning for AI".

Preamble:

A Probabilistic Graphical Model (PGM) is a probabilistic model in which a graph expresses the dependence structure between random variables, represented as the nodes of the graph. PGMs are commonly used in probability theory, statistics, and machine learning. This course takes a deep dive into PGMs, with a bias towards applications in machine learning.

The course provides a comprehensive introduction to probabilistic graphical models. By the end of the course, students should be able to model problems using graphical models, design inference algorithms, and learn the structure of a graphical model from data.

Course Contents:

Fundamentals: Fundamentals of Probability Theory - Views of Probability, Random Variables and Joint Distributions, Conditional Probability, Conditional Independence, Expectation and Variance, Probability Distributions - Conjugate Priors, Introduction to Exponential Family; Fundamentals of Graph Theory - Paths, Cliques, Subgraphs, Cycles and Loops.
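
To make the probability fundamentals concrete, here is a minimal Python sketch of marginalization and conditional probability on a toy joint distribution; the variable names (Rain, Sprinkler) and every number in the table are invented purely for illustration.

    # Toy joint distribution P(Rain, Sprinkler) over two binary variables;
    # all probabilities are made up for illustration.
    joint = {
        (0, 0): 0.40, (0, 1): 0.30,
        (1, 0): 0.25, (1, 1): 0.05,
    }

    def marginal_rain(r):
        """P(Rain = r), obtained by summing Sprinkler out of the joint."""
        return sum(p for (rain, _), p in joint.items() if rain == r)

    def conditional(s, r):
        """P(Sprinkler = s | Rain = r) = P(Rain = r, Sprinkler = s) / P(Rain = r)."""
        return joint[(r, s)] / marginal_rain(r)

    print(conditional(1, 1))  # 0.05 / 0.30 ≈ 0.167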

Graphical Models: Introduction - Directed Models (Bayesian Networks), Undirected Models (Markov Random Fields), Dynamic Models (Hidden Markov Models & Kalman Filters) and Factor Graphs; Conditional Independence (Bayes Ball Theorem and D-separation), Markov Blanket, Factorization (Hammersley-Clifford Theorem), Equivalence (I-Maps & Perfect Maps); Factor Graphs - Representation, Relation to Bayesian Networks and Markov Random Fields.
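
As a taste of directed-model factorization, the sketch below builds the joint distribution of a three-node chain A -> B -> C from its conditional probability tables; the chain structure and all CPT entries are assumptions made up for illustration.

    # CPTs for the chain A -> B -> C; every number is made up for illustration.
    p_a = {1: 0.6, 0: 0.4}                    # P(A = a)
    p_b_given_a = {(1, 1): 0.7, (1, 0): 0.2}  # keys (1, a): P(B = 1 | A = a)
    p_c_given_b = {(1, 1): 0.9, (1, 0): 0.3}  # keys (1, b): P(C = 1 | B = b)

    def bern(table, value, cond):
        """Look up P(X = value | cond) from a table storing P(X = 1 | cond)."""
        p1 = table[(1, cond)]
        return p1 if value == 1 else 1.0 - p1

    def joint(a, b, c):
        """P(A = a, B = b, C = c) = P(a) P(b | a) P(c | b), the DAG factorization."""
        return p_a[a] * bern(p_b_given_a, b, a) * bern(p_c_given_b, c, b)

    # Sanity check: the eight joint entries sum to 1.
    total = sum(joint(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
    print(round(total, 10))  # 1.0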

Inference in Graphical Models: Exact Inference - Variable Elimination, Elimination Orderings, Relation to Dynamic Programming, Dealing with Evidence, Forward-Backward Algorithm, Viterbi Algorithm; Junction Tree Algorithm; Belief Propagation (Sum-Product); Approximate Inference - Variational Methods (Mean Field, Kikuchi & Bethe Approximations), Expectation Propagation, Gaussian Belief Propagation; MAP Inference - Max-Product, Graph Cuts, Linear Programming Relaxations to MAP (Tree-Reweighted Belief Propagation, MPLP); Sampling - Markov Chain Monte Carlo, Metropolis-Hastings, Gibbs Sampling (Collapsing & Blocking), Particle Filtering.
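
As one concrete inference routine from this list, here is a sketch of the forward algorithm for a two-state hidden Markov model, computing the likelihood of an observation sequence by passing messages forward along the chain; the prior, transition, and emission numbers are invented for illustration.

    import numpy as np

    pi = np.array([0.5, 0.5])             # initial state distribution P(z_1)
    A = np.array([[0.8, 0.2],             # A[i, j] = P(z_{t+1} = j | z_t = i)
                  [0.3, 0.7]])
    B = np.array([[0.9, 0.1],             # B[i, o] = P(x_t = o | z_t = i)
                  [0.2, 0.8]])

    def forward_likelihood(obs):
        """P(x_1, ..., x_T): propagate forward messages and sum the last one."""
        alpha = pi * B[:, obs[0]]          # alpha_1(i) = P(z_1 = i) P(x_1 | i)
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]  # sum over the previous state, then emit
        return alpha.sum()

    print(forward_likelihood([0, 0, 1]))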

Learning in Graphical Models: Parameter Estimation - Expectation Maximization, Maximum Likelihood Estimation, Maximum Entropy, Pseudolikelihood, Bayesian Estimation, Conditional Likelihood, Structured Prediction; Learning with Approximate Inference; Learning with Latent Variables; Structure Learning - Structure Search, L1 Priors.
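
For a flavour of parameter estimation from complete data, the sketch below fits the conditional probability table P(B = 1 | A) by counting; the alpha parameter is a Dirichlet pseudo-count, so alpha = 0 gives the maximum-likelihood estimate and alpha > 0 a simple Bayesian point estimate. The data set and variable names are made up for illustration.

    from collections import Counter

    # Complete-data samples of (A, B); invented for illustration.
    data = [(1, 1), (1, 1), (1, 0), (0, 0), (0, 0), (0, 1)]

    def estimate_cpt(samples, alpha=0.0):
        """Estimate P(B = 1 | A = a) for a in {0, 1}. alpha is a Dirichlet
        pseudo-count: alpha = 0 gives the maximum-likelihood estimate,
        alpha > 0 a Laplace-smoothed Bayesian point estimate."""
        counts = Counter(samples)
        cpt = {}
        for a in (0, 1):
            n_a = counts[(a, 0)] + counts[(a, 1)]
            cpt[a] = (counts[(a, 1)] + alpha) / (n_a + 2 * alpha)
        return cpt

    print(estimate_cpt(data))             # MLE: {0: 1/3, 1: 2/3}
    print(estimate_cpt(data, alpha=1.0))  # smoothed towards 1/2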

Text Books:

Koller, D. and Friedman, N. (2009). Probabilistic Graphical Models: Principles and Techniques. MIT Press.

Reference Books:

  • Jensen, F. V. and Nielsen, T. D. (2007). Bayesian Networks and Decision Graphs. Information Science and Statistics. Springer, 2nd edition.
  • Murphy, K. P. (2013). Machine Learning: A Probabilistic Perspective. MIT Press, 4th printing.
  • Barber, D. (2011). Bayesian Reasoning and Machine Learning. Cambridge University Press, 1st edition.
  • Bishop, C. M. (2011). Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, 2nd printing.
  • Wainwright, M. and Jordan, M. (2008). Graphical Models, Exponential Families, and Variational Inference. Foundations and Trends in Machine Learning, 1:1–305.


Prerequisites:

  • Machine learning (CS4011/CS5011/CS6690/EE5177) and Probability (CS6015/MA2040/EE5110)
  • None


Credits: 4-0-0-0-8-12
Type: Elective
Date of Introduction: Jan 2008

