Week 2: Probabilistic Graphical Models (PGMs)

DSAN 5650: Causal Inference for Computational Social Science
Summer 2025, Georgetown University

Jeff Jacobs

jj1088@georgetown.edu

Wednesday, May 28, 2025

Schedule

Today’s Planned Schedule:

          Start    End      Topic
Lecture   6:30pm   6:45pm   TA Intros
          6:45pm   7:00pm   HW1 Questions and Concerns
          7:00pm   7:30pm   Motivating Examples: Causal Inference
          7:30pm   7:45pm   Your First Probabilistic Graphical Model!
Break!    7:45pm   8:00pm
          8:00pm   8:30pm   PGM Nuts and Bolts
          8:30pm   9:00pm   Course Logistics

TA Intros

Courtney Green

Wendy Hu

HW1 Questions and/or Concerns

Technical Issues (JupyterHub)

Content Issues

Motivating Examples: Causal Inference

  • The methodology we’ll use to draw inferences about social phenomena from data

Disclaimer: Unfortunate Side Effects of Engaging Seriously with Causality

You’ll no longer be able to read “scientific” writing without striking this expression (involuntarily):

“Scientific” talks will begin to sound like the following:

Blasting Off Into Causality!

Data-Generating Processes (DGPs)

  • You saw this in DSAN 5100!
  • «\(X_1, \ldots, X_n\) drawn i.i.d. Normal, mean \(\mu\), variance \(\sigma^2\)» characterizes the DGP of \((X_1, \ldots, X_n)\)

  • 5650: Dive into DGPs, rather than treating them as a black box / footnote to the Law of Large Numbers, so we can move [asymptotically!]…
  • From associational statements:
    • «\(\underbrace{\text{An increase}}_{\small\text{noun}}\) in \(X\) by 1 is associated with increase in \(Y\) by \(\beta\)»
  • To causal ones: «\(\underbrace{\text{Increasing}}_{\small\text{verb}}\) \(X\) by 1 causes \(Y\) to increase by \(\beta\)»
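
To see the associational reading in action, here is a minimal regression sketch on simulated data (the variable names and the “true” coefficient of 2 are invented for illustration, not taken from the lecture):

# Minimal sketch: estimate an *associational* beta via OLS on simulated data
set.seed(5650)
n <- 500
x <- rnorm(n)
y <- 2 * x + rnorm(n)            # pretend we don't know this DGP
coef(lm(y ~ x))["x"]             # ~2: a 1-unit increase in x is *associated with*
                                 # a ~2-unit increase in y -- no causal verb yet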

DGPs and the Emergence of Order

  • Who can guess the state of this process after 10 steps, with 1 person?
  • 10 people? 50? 100? (If they find themselves on the same spot, they stand on each other’s heads)
  • 100 steps? 1000?
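
The process itself appears as an animation in the slides; as a stand-in, here is a minimal sketch that assumes it is a simple one-dimensional random walk (each person steps left or right with equal probability at every turn):

# Assumed DGP: each of n_people independently steps -1 or +1 at each turn
set.seed(5650)
simulate_walk <- function(n_people, n_steps) {
  steps <- matrix(sample(c(-1, 1), n_people * n_steps, replace = TRUE),
                  nrow = n_people)
  rowSums(steps)                 # each person's final position
}
table(simulate_walk(n_people = 100, n_steps = 16))   # order emerges: a bell shape
table(simulate_walk(n_people = 100, n_steps = 64))   # wider, but still a bell shape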

The Result: 16 Steps

The Result: 64 Steps

“Mathematical/Scientific Modeling”

  • Thing we observe (poking out of water): data
  • Hidden but possibly discoverable via deeper dive (ecosystem under surface): DGP

So What’s the Problem?

  • Non-probabilistic models: High potential for being garbage
    • tldr: even if SUPER certain, using \(\Pr(\mathcal{H}) = 1-\varepsilon\) with tiny \(\varepsilon\) has literal life-saving advantages (Finetti 1972)
  • Probabilistic models: Getting there, still looking at “surface”
    • Of the \(N = 100\) times we observed event \(X\) occurring, event \(Y\) also occurred \(90\) of those times
    • \(\implies \Pr(Y \mid X) = \frac{\#[X, Y]}{\#[X]} = \frac{90}{100} = 0.9\)
  • Causal models: Does \(Y\) happen because of \(X\) happening? For that, need to start modeling what’s happening under the surface making \(X\) and \(Y\) “pop up” together so often
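
That surface-level conditional probability is one line of code over a hypothetical event log built to match the counts above:

# Hypothetical event log: X occurred N = 100 times, and Y co-occurred in 90 of them
obs <- data.frame(X = rep(TRUE, 100), Y = c(rep(TRUE, 90), rep(FALSE, 10)))
sum(obs$X & obs$Y) / sum(obs$X)   # Pr(Y | X) = 90/100 = 0.9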

The Shallow Problem of Causal Inference

cor(ski_df$value, law_df$value)
[1] 0.9921178

(Data from Vigen, Spurious Correlations)
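
You don’t need Vigen’s data to manufacture a correlation like this; any two series that each trend over time will do, as in this purely simulated sketch (no causal link anywhere in the DGP):

# Two independent upward-trending series; neither causes the other
set.seed(5650)
years   <- 2000:2019
skiing  <- 100 + 5 * (years - 2000) + rnorm(length(years), sd = 3)
lawyers <-  50 + 2 * (years - 2000) + rnorm(length(years), sd = 1)
cor(skiing, lawyers)   # typically > 0.95, despite zero causal connection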

This, however, is only a mini-boss. Beyond it lies the truly invincible FINAL BOSS… 🙀

The Fundamental Problem of Causal Inference

The only workable definition of «\(X\) causes \(Y\)»:

Defining Causality (Hume 1739, ruining everything as usual 😤)

\(X\) causes \(Y\) if and only if:

  1. \(X\) temporally precedes \(Y\), and
  2. In two worlds \(W_0\) and \(W_1\) where everything is exactly the same except that \(X = 0\) in \(W_0\) and \(X = 1\) in \(W_1\), we have \(Y = 0\) in \(W_0\) and \(Y = 1\) in \(W_1\)
  • The problem? We live in one world, not two identical worlds simultaneously 😭
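
A toy simulation (with hypothetical numbers) makes the problem concrete: on a computer we can generate both worlds for every unit, but for any real unit we only ever get to record one of the two.

# Hypothetical DGP: simulate both "worlds" for each unit i, then mimic reality
# by revealing only the one world that unit actually lives in
set.seed(5650)
n <- 5
y_world0 <- rbinom(n, 1, 0.3)                  # Y_i in the world where X = 0
y_world1 <- rbinom(n, 1, 0.7)                  # Y_i in the world where X = 1
x_actual <- rbinom(n, 1, 0.5)                  # which world each unit ends up in
y_observed <- ifelse(x_actual == 1, y_world1, y_world0)
data.frame(x_actual, y_observed)               # the other world's Y is gone forever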

What Is To Be Done?

Probability++

  • Tools from prob/stats (RVs, CDFs, Conditional Probability) necessary but not sufficient for causal inference!
  • Example: Say we use DSAN 5100 tools to discover:
    • Probability that event \(E_1\) occurs is \(\Pr(E_1) = 0.5\)
    • Probability that \(E_1\) occurs conditional on another event \(E_0\) occurring is \(\Pr(E_1 \mid E_0) = 0.75\)
  • Unfortunately, we still cannot infer that the occurrence of \(E_0\) causes an increase in the likelihood of \(E_1\) occurring.
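
One hedged sketch of why: a hypothetical DGP in which a common cause \(C\) drives both \(E_0\) and \(E_1\) reproduces exactly these numbers with no causal arrow from \(E_0\) to \(E_1\) at all.

# Hypothetical confounded DGP: C causes both E0 and E1; E0 does NOT cause E1
set.seed(5650)
n  <- 1e6
c_ <- rbinom(n, 1, 0.5)                          # latent common cause
e0 <- c_                                         # E0 is just a perfect symptom of C
e1 <- rbinom(n, 1, ifelse(c_ == 1, 0.75, 0.25))  # E1 depends only on C
mean(e1)            # ~0.50 = Pr(E1)
mean(e1[e0 == 1])   # ~0.75 = Pr(E1 | E0), yet E0 has no effect on E1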

Beyond Conditional Probability

  • This issue (that conditional probabilities cannot be interpreted causally) at first looked like a dead end for scientists hoping to use probability theory to study causal relationships…
  • Recent decades: researchers have built up an additional “layer” of modeling tools, augmenting existing machinery of probability to address causality head-on!
  • Pearl (2000): Formal proofs that (\(\Pr\) axioms) \(\cup\) (\(\textsf{do}\) axioms) \(\Rightarrow\) causal inference procedures successfully recover causal effects

Preview: do-Calculus

  • Extends core of probability to incorporate causality, via \(\textsf{do}\) operator
  • \(\textsf{do}(X = 5)\) is a “special” event, representing an intervention in the DGP to force \(X \leftarrow 5\); it is not the same event as \(X = 5\)!
    • \(X = 5\): observing that \(X\) took on the value 5 (for some possibly-unknown reason)
    • \(\textsf{do}(X = 5)\): intervening to force \(X \leftarrow 5\), all else in the DGP remaining the same (the intervention then “flows” through the rest of the DGP)
  • Probably the most difficult thing in 5650 to wrap your head around

  • “Special”: \(\Pr(\textsf{do}(X = 5))\) not well-defined, only \(\Pr(Y = 6 \mid \textsf{do}(X = 5))\)

  • To emphasize special-ness, we may use notation like:

    \[ \Pr(Y = 6 \mid \textsf{do}(X = 5)) \equiv \textstyle \Pr_{\textsf{do}(X = 5)}(Y = 6) \]

    to avoid confusion with “normal” events
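
To make the distinction concrete, here is a sketch using an invented confounded DGP (nothing specific from the lecture): conditioning on observing \(X = 5\) and intervening to force \(X \leftarrow 5\) give different answers about \(Y\).

# Invented DGP: Z -> X, Z -> Y, X -> Y
set.seed(5650)
n <- 1e5
z <- rnorm(n)
x <- 4 + 2 * z + rnorm(n, sd = 0.5)
y <- x + 3 * z + rnorm(n)
# "X = 5": condition on (approximately) observing the value 5
mean(y[abs(x - 5) < 0.1])    # ~6.4: seeing X near 5 also tells us Z was probably high
# "do(X = 5)": re-run the DGP with X forced to 5, leaving everything else alone
y_do <- 5 + 3 * z + rnorm(n)
mean(y_do)                   # ~5.0: the intervention severs the Z -> X link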

Causal World Unlocked 😎 (With Great Power Comes Great Responsibility…)

  • With \(\textsf{do}(\cdot)\) in hand… (alongside a DGP satisfying axioms slightly stricter than the core probability axioms)
  • We can make causal inferences from a similar pair of facts! If:
    • Probability that event \(E_1\) occurs is \(\Pr(E_1) = 0.5\),
    • The probability that \(E_1\) occurs conditional on the event \(\textsf{do}(E_0)\) occurring is \(\Pr(E_1 \mid \textsf{do}(E_0)) = 0.75\),
  • Now we can actually infer that the occurrence of \(E_0\) caused an increase in the likelihood of \(E_1\) occurring!
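
Continuing the hypothetical confounded DGP from a few slides back, here is what intervening (rather than observing) looks like in simulation, along with a contrasting DGP where \(E_0\) really does drive \(E_1\):

# do(E0): force E0 to occur for every draw, leaving the rest of the DGP untouched
set.seed(5650)
n  <- 1e6
c_ <- rbinom(n, 1, 0.5)                             # latent common cause, as before
e0 <- rep(1, n)                                     # do(E0), regardless of C
e1 <- rbinom(n, 1, ifelse(c_ == 1, 0.75, 0.25))     # E1 still depends only on C
mean(e1)      # ~0.50 = Pr(E1 | do(E0)): no change from Pr(E1), so no causal effect
# Contrast: a hypothetical DGP where E0 genuinely shifts E1's probability
e1_alt <- rbinom(n, 1, ifelse(e0 == 1, 0.75, 0.25))
mean(e1_alt)  # ~0.75 = Pr(E1 | do(E0)) > Pr(E1) = 0.5: a causal effect is revealed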

Ulysses and the [Computational] Sirens

Your First PGM!


  • Which of the variables (ovals) are observed? Which are latent?
  • What do you think the arrows represent?
  • Can we use this to find the “root cause” of (e.g.) observed chest pain? Or conversely, to predict possible ↑ in likelihood of chest pain if we start smoking?

Bayesian Inference but with Pictures

A Probabilistic Graphical Model (PGM) provides us with:

  • A formal-mathematical…
  • But also easily visualizable (by construction)…
  • Representation of a data-generating process (DGP)!

Example: Let’s model how weather \(W\) affects evening plans \(Y\): the choice between going to a party or staying in to watch movies

The Partier’s Dilemma

  1. A person \(i\) wakes up with some initial affinity for partying: \(\Pr(Y_i = \textsf{Go Out})\)
  2. \(i\) then goes to their window and observes the weather \(W_i\) outside:
    1. If the weather is sunny, \(i\)’s affinity increases: \(\Pr(Y_i = \textsf{Go} \mid W_i = \textsf{Sun}) > \Pr(Y_i = \textsf{Go})\)
    2. Otherwise, if it is rainy, \(i\)’s affinity decreases: \(\Pr(Y_i = \textsf{Go} \mid W_i = \textsf{Rain}) < \Pr(Y_i = \textsf{Go})\)

Two Main “Building Blocks”

  • Nodes like \(\require{enclose}\enclose{circle}{\kern .01em ~X~\kern .01em}\) denote Random Variables

\[ \boxed{\require{enclose}\enclose{circle}{\kern .01em ~X~\kern .01em}} \simeq \boxed{ \begin{array}{c|cc}x & \textsf{Tails} & \textsf{Heads} \\\hline \Pr(X = x) & 0.5 & 0.5\end{array}} \]

  • Edges like \(\require{enclose}\enclose{circle}{\kern .01em ~X~\kern .01em} \rightarrow \; \enclose{circle}{\kern.01em Y~\kern .01em}\) denote relationships between RVs
    • What an edge “means” can get [ontologically] tricky!
    • Retain sanity by just remembering: an edge \(\require{enclose}\enclose{circle}{\kern .01em ~X~\kern .01em} \rightarrow \; \enclose{circle}{\kern.01em Y~\kern .01em}\) is included in our PGM if we “care about” modeling the conditional probability table (CPT) of \(Y\) w.r.t. \(X\)

\[ \require{enclose}\boxed{ \enclose{circle}{\kern .01em ~X~\kern .01em} \rightarrow \; \enclose{circle}{\kern.01em Y~\kern .01em} } \simeq \boxed{ \begin{array}{c|cc} x & \Pr(Y = \textsf{Lose} \mid X = x) & \Pr(Y = \textsf{Win} \mid X = x) \\\hline \textsf{Tails} & 0.8 & 0.2 \\ \textsf{Heads} & 0.5 & 0.5 \end{array} } \]
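
In code, both building blocks are just probability tables; here is a minimal sketch of the two boxed tables above as base-R objects (the object names are mine):

# The node X's marginal distribution, and the edge X -> Y's CPT
p_x <- c(Tails = 0.5, Heads = 0.5)
cpt_y_given_x <- rbind(Tails = c(Lose = 0.8, Win = 0.2),
                       Heads = c(Lose = 0.5, Win = 0.5))
cpt_y_given_x            # one row per value of X
rowSums(cpt_y_given_x)   # sanity check: each conditional distribution sums to 1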

PGM for the Partier’s Dilemma

  • A node \(\pnode{W}\) denoting RV \(W\), which can take on values in \(\mathcal{R}_W = \{\textsf{Sun}, \textsf{Rain}\}\),
  • A node \(\pnode{Y}\) denoting RV \(Y\), which can take on values in \(\mathcal{R}_Y = \{\textsf{Go}, \textsf{Stay}\}\), and
  • An edge \(\pedge{W}{Y}\) representing the following relationship between \(W\) and \(Y\):
    • \(\Pr(Y = \textsf{Go} \mid W = \textsf{Sun}) = 0.8\)
    • \(\Pr(Y = \textsf{Stay} \mid W = \textsf{Sun}) = 0.2\)
    • \(\Pr(Y = \textsf{Go} \mid W = \textsf{Rain}) = 0.1\)
    • \(\Pr(Y = \textsf{Stay} \mid W = \textsf{Rain}) = 0.9\)
Figure 1: Our PGM of the Partier’s Dilemma
\[ \begin{array}{c|cc} & \Pr(Y = \textsf{Stay} \mid W) & \Pr(Y = \textsf{Go} \mid W) \\\hline W = \textsf{Sun} & 0.2 & 0.8 \\ W = \textsf{Rain} & 0.9 & 0.1 \end{array} \]
Figure 2: The Conditional Probability Table (CPT) for the edge \(\pedge{W}{Y}\) in Figure 1
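
Putting the pieces together, here is a sketch of this PGM as a forward simulator, using the 50/50 prior on \(W\) that the Bayes’ Rule computation below also assumes:

# Forward-sample the Partier's Dilemma: draw W from its prior, then Y from the CPT
set.seed(5650)
p_w <- c(Sun = 0.5, Rain = 0.5)
cpt <- rbind(Sun  = c(Stay = 0.2, Go = 0.8),
             Rain = c(Stay = 0.9, Go = 0.1))
n_sim <- 1e4
w <- sample(names(p_w), n_sim, replace = TRUE, prob = p_w)
y <- vapply(w, function(wi) sample(colnames(cpt), 1, prob = cpt[wi, ]), character(1))
mean(y[w == "Sun"] == "Go")   # ~0.8, matching Pr(Y = Go | W = Sun)
mean(w[y == "Go"] == "Sun")   # ~0.89, previewing the Bayes' Rule result below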

Observed vs. Latent Nodes

  • PGMs help us make valid (Bayesian) inferences about the world in the face of incomplete information!
  • Key remaining tool: separation of nodes into two categories:
    • Observed nodes (shaded)
    • Latent nodes (unshaded)
  • \(\Rightarrow\) Can use our PGM as a weather-inference machine!
  • If we observe \(i\) at a party, what can we infer about the weather outside?

Observed Partier, Latent Weather

  • We can draw this situation as a PGM with shaded and unshaded nodes, distinguishing what we know from what we’d like to infer:
 
  • And we can now use Bayes’ Rule to compute how observed information (\(i\) at party \(\Rightarrow [Y = \textsf{Go}]\)) “flows” back into \(W\)

Computation via Bayes’ Rule

  • Bayes’ Rule, \(\Pr(A \mid B) = \frac{\Pr(B \mid A)\Pr(A)}{\Pr(B)}\), tells us how to use info about \(\Pr(B \mid A)\) to obtain info about \(\Pr(A \mid B)\)!
  • We use it to obtain a distribution for \(W\) updated to incorporate new info \([Y = \textsf{Go}]\):

\[ \begin{align*} &\Pr(W = \textsf{Sun} \mid Y = \textsf{Go}) = \frac{\Pr(Y = \textsf{Go} \mid W = \textsf{Sun}) \Pr(W = \textsf{Sun})}{\Pr(Y = \textsf{Go})} \\ =\, &\frac{\Pr(Y = \textsf{Go} \mid W = \textsf{Sun}) \Pr(W = \textsf{Sun})}{\Pr(Y = \textsf{Go} \mid W = \textsf{Sun}) \Pr(W = \textsf{Sun}) + \Pr(Y = \textsf{Go} \mid W = \textsf{Rain}) \Pr(W = \textsf{Rain})} \end{align*} \]

  • Plug in info from CPT to obtain our new (conditional) probability of interest:

\[ \begin{align*} \Pr(W = \textsf{Sun} \mid Y = \textsf{Go}) &= \frac{(0.8)(0.5)}{(0.8)(0.5) + (0.1)(0.5)} = \frac{0.4}{0.4 + 0.05} \approx 0.89 \end{align*} \]

  • We’ve learned something interesting! Observing \(i\) at the party \(\leadsto\) probability of sun jumps from \(0.5\) (“prior” estimate of \(W\), best guess without any other relevant info) to \(0.89\) (“posterior” estimate of \(W\), best guess after incorporating relevant info).
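
The same computation in a few lines of code, again using the 0.5 prior on \(W\):

# Posterior Pr(W = Sun | Y = Go) via Bayes' Rule
prior_sun   <- 0.5
lik_go_sun  <- 0.8   # Pr(Y = Go | W = Sun), from the CPT
lik_go_rain <- 0.1   # Pr(Y = Go | W = Rain)
(lik_go_sun * prior_sun) /
  (lik_go_sun * prior_sun + lik_go_rain * (1 - prior_sun))   # 0.888... ~ 0.89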

References

Finetti, Bruno de. 1972. Probability, Induction and Statistics: The Art of Guessing. J. Wiley.
Hume, David. 1739. A Treatise of Human Nature: Being an Attempt to Introduce the Experimental Method of Reasoning Into Moral Subjects; and Dialogues Concerning Natural Religion. Longmans, Green.
Pearl, Judea. 2000. Causality: Models, Reasoning, and Inference. Cambridge University Press.