What is a Probability?
First things first, we need to get some very basic notions of probability down on paper before we can move into the real meat of the section: Bayesian Statistics. For the purposes of this course, a probability is just a number between 0 and 1 representing how likely we think some event is[1]. If we’re flipping a coin, for example, and we have no reason to believe that the coin is biased in any way, then we can create and adopt a model of the coin called $M_{\text{fair}}$, in which $P(H \mid M_{\text{fair}}) = 0.5$ (read this as ``the probability of seeing heads given the \fair{} model'') and $P(T \mid M_{\text{fair}}) = 0.5$.
As a quick but important aside, the reason we only have to define $P(H)$ here (with $P(T)$ being automatically derived as a result) is because the full \textbf{event space}, or set of all possible events, is $\Omega = \{H, T\}$. By the rules of probability, $P(\Omega)$, or the probability of \textit{anything} happening:

$$P(\Omega) = P(H \text{ or } T)$$

must equal 1, for a model to be valid\footnote{Hence the equations in this paragraph look like just $P(H)$ rather than $P(H \mid M_{\text{fair}})$!}. Then, knowing that the logical connectives ``and'' and ``or'' for events in a valid model must satisfy

$$P(A \text{ or } B) = P(A) + P(B) - P(A \text{ and } B),$$

we know that in any of our models we must have

$$P(H \text{ or } T) = P(H) + P(T) = 1,$$

so that

$$P(T) = 1 - P(H),$$

allowing us to immediately derive $P(T) = 0.5$ from our model’s assertion that $P(H) = 0.5$\footnote{Scrupulous readers will notice that I actually snuck a model assumption into the line above, namely, that $P(H \text{ and } T) = 0$. However, if you’re that scrupulous hopefully you also know that the ``atomic'' events within $\Omega$ must be mutually exclusive...}. In fact, this gives us a third rule that a valid probability model must satisfy:

$$P(\text{not } A) = 1 - P(A),$$

where ``not $A$'' is shorthand for ``the event $A$ does \textit{not} happen''. In our case, since the only two possible events are $H$ and $T$,

$$P(T) = P(\text{not } H) = 1 - P(H) = 1 - 0.5 = 0.5.$$
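To make the three rules above concrete, here is a quick sketch in Python (a minimal illustration of the two-outcome coin model; the variable names are my own):

```python
# The fair coin model: probabilities over the atomic events in Omega = {H, T}.
model = {"H": 0.5, "T": 0.5}

# Rule 1: P(Omega) -- the probability of *anything* happening -- must equal 1.
assert abs(sum(model.values()) - 1.0) < 1e-9

# Rule 2 (for the disjoint atomic events H and T): P(H or T) = P(H) + P(T),
# with nothing to subtract because P(H and T) = 0.
p_h_or_t = model["H"] + model["T"]

# Rule 3: P(not A) = 1 - P(A), which hands us P(T) for free.
p_not_h = 1 - model["H"]
assert p_not_h == model["T"]
print(p_h_or_t, p_not_h)   # 1.0 0.5
```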
In the back of our heads, however, we can construct an alternate model of the coin called $M_{\text{biased}}$, in which $P(H \mid M_{\text{biased}}) > 0.5$ and, correspondingly, $P(T \mid M_{\text{biased}}) < 0.5$. Then, the beauty of Bayesian statistics is that we can go out into the world and see which model best comports with what we observe. So, if we notice that the coin keeps coming up heads a suspiciously large number of times, we can change the model we believe from $M_{\text{fair}}$ to $M_{\text{biased}}$. Mathematically, we would want to do so if $\mathcal{L}(M_{\text{biased}} \mid \text{observations}) > \mathcal{L}(M_{\text{fair}} \mid \text{observations})$. This $\mathcal{L}$ literally just means ``likelihood'', and it’s mathematically the same as probability but written differently to emphasize an unusual property of this calculation: usually when we write $P(A \mid B)$ we mean that we’re computing the probability of $A$ because we want to know how likely $A$ is in a world where we know that $B$ happened, but in this case we’re working ``in reverse'', computing $P(\text{observations} \mid M)$ not because we’re interested in that quantity in and of itself but only because we want to see how \textit{likely} this outcome was given the (varying) model $M$ on the right-hand side of the conditional bar.
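As a sketch of this likelihood comparison in Python, suppose (purely for illustration; the specific bias and flip counts are my assumptions, not values from the text) that the biased model sets $P(H \mid M_{\text{biased}}) = 0.75$, and that we observe 16 heads in 20 flips:

```python
from math import comb

def likelihood(num_heads, num_flips, p_heads):
    """Binomial likelihood of the observed flips under a model with P(H) = p_heads."""
    return comb(num_flips, num_heads) * p_heads**num_heads * (1 - p_heads)**(num_flips - num_heads)

# L(M | observations) is just P(observations | M), read "in reverse".
L_fair = likelihood(16, 20, 0.5)     # likelihood of 16/20 heads under M_fair
L_biased = likelihood(16, 20, 0.75)  # likelihood under the (assumed) biased model

# We would switch from M_fair to M_biased exactly when the biased model
# makes what we actually saw more likely.
print(L_fair < L_biased)   # True for this suspiciously head-heavy run
```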
Optimal Queueing: Why Whole Foods is (Sadly but Truly) the Future
Now let’s apply what we learned in the previous section. Ever notice how, in grocery stores that let you choose which checkout line to wait in, the other lines always seem to go faster than the one you’re in? There’s a probability-theoretic reason for this! Let’s start from the simplest (non-trivial) case: a grocery store with two lines $A$ and $B$. There are only two possibilities here: either $A$ moves faster than $B$, or vice-versa. Mathematically, we can represent this situation by writing out the \textbf{event space} as $\Omega = \{AB, BA\}$, where the first element represents the case where $A$ moves faster than $B$ and the second element represents the case where $B$ moves faster than $A$. Without knowing anything about the cashiers or the customers in line, the best model we can develop \textit{a priori} is that these two outcomes are equally likely: $P(AB) = P(BA) = 0.5$. So, if you choose to enter line $A$, there’s a 50% chance that your line moves the fastest, and a 50% chance that the other line moves the fastest.
So far so good -- the probability that your line moves fastest is the same as the probability that some other line moves fastest. But what happens when we move to a grocery store with 3 lines, $A$, $B$, and $C$? In this case, the possible line orderings (from fastest to slowest) are $\Omega = \{ABC, ACB, BAC, BCA, CAB, CBA\}$. As before, we consider all of these outcomes as equally likely. So, now what is the probability that the line you choose will be the fastest? If you choose line $A$, your line is the fastest in only two of the six possible outcomes: $ABC$ and $ACB$. Since each outcome is equally likely, each has probability $\frac{1}{6}$, and thus the probability that your line moves fastest, $P(ABC \text{ or } ACB)$, is $\frac{1}{6} + \frac{1}{6} = \frac{1}{3}$\footnote{Note again that $P(ABC \text{ and } ACB) = 0$, since $ABC$ and $ACB$ are disjoint events -- it can’t both be the case that line $B$ moved faster than line $C$ \textit{and} line $C$ moved faster than line $B$.}. Then, using our ``not'' rule from above, the probability that your line does \textit{not} move fastest is $1 - \frac{1}{3} = \frac{2}{3}$.
So, even with only three lines in the store, we already see that there’s only a 33.3% chance that the line we choose will go the fastest, versus a 66.7% chance that we will see (at least) one of the other two lines moving faster...
To quickly look at the case of four lines $A$, $B$, $C$, and $D$, note that now the possible events are the $4! = 24$ orderings

$$\Omega = \{ABCD, ABDC, ACBD, \ldots, DCBA\},$$

so that whichever line you enter, the probability of it being fastest is only $\frac{6}{24} = \frac{1}{4}$, and the probability that you will see another line go faster is $\frac{3}{4}$. The logic continues in this way, such that in general if there are $n$ lines the probability that you see another line moving faster than yours is $\frac{n-1}{n}$. I used to work as a cashier at a grocery store with 10 checkout lines, putting the probability of frustration for a given customer at an astounding $\frac{9}{10} = 90\%$... though obviously they all rationally calculated this in their heads and never yelled at me upon seeing one of the 9 other lines moving faster.
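The counting argument above is easy to verify by brute force. Here’s a short Python check (enumeration code of my own) that the chance some other line beats yours matches $\frac{n-1}{n}$:

```python
from itertools import permutations
from fractions import Fraction

def p_other_line_faster(n):
    """Enumerate all n! equally likely fastest-to-slowest orderings and count
    those in which your line (line 0) is NOT the fastest."""
    orderings = list(permutations(range(n)))
    not_fastest = sum(1 for o in orderings if o[0] != 0)
    return Fraction(not_fastest, len(orderings))

# Matches the closed form (n - 1) / n for each store size we can enumerate quickly.
for n in range(2, 8):
    assert p_other_line_faster(n) == Fraction(n - 1, n)

print(p_other_line_faster(3))        # 2/3, the three-line store
print(float(Fraction(10 - 1, 10)))   # 0.9, the ten-line store from the anecdote
```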
Bayesian Statistics: A Scary Term for an Intuitive Concept
One of the scariest sentiments in politics, to me, is the notion that someone or some group “knows” that they’re “right” about something. One of the core principles that sets Marxism or anti-capitalism apart from religion and superstition[2] is the notion that we can come to hold these beliefs by looking out into the world, observing and measuring and comparing things, and updating our beliefs to incorporate whatever we learn. This is the basic intuition behind Bayesian reasoning, which puts this into practice via an “update equation” (which you don’t have to memorize!) specifying exactly how much one should “nudge” their degree of belief in some proposition, thus updating their “mental model” of the world, given observed evidence for or against it.
As simple as this seems, it turns out that Bayesian reasoning is the optimal method for drawing inferences about social phenomena, in a mathematically precise sense that we will discuss. For example, a Bayesian gambler will always beat a non-Bayesian given a sufficient number of bets. More on that later.
Probabilistic Graphical Models
“Probabilistic Graphical Model” or PGM is just a fancy term for a statistical tool which operationalizes an intuitive idea: when trying to understand a complex phenomenon with lots of “moving parts” interacting with one another, a good way to start analyzing it is often to break it down into its constituent parts and then specify how these parts work together to give rise to the phenomenon. With this in mind, a PGM is a collection of nodes (drawn as circles) representing variables and edges (drawn as arrows) representing relationships of influence between nodes, codified as “Conditional Probability Tables”. So, if we wanted to model the relationship between weather and a person’s choice of whether to go out and party or stay in and watch a movie on a given Saturday evening, we could use
A variable $W$ representing the weather, which can take on values in $\{\Sunny{}, \Rainy{}\}$,
A variable $A$ representing the person’s action, which can take on values in $\{\text{Go Out}, \text{Stay In}\}$, and
An edge from $W$ to $A$ which encodes the intuition that one is more likely to go out if it’s sunny than if it’s rainy via the probability distribution $P(A = \text{Go Out} \mid W = \Sunny{}) = 0.8$, $P(A = \text{Stay In} \mid W = \Sunny{}) = 0.2$, $P(A = \text{Go Out} \mid W = \Rainy{}) = 0.1$, and $P(A = \text{Stay In} \mid W = \Rainy{}) = 0.9$, which we can also represent as a simple Conditional Probability Table:
\begin{center}
\begin{tabular}{cc}
\hline
Weather (Value of $W$) & Probability of Going Out \\
\hline \hline
\Sunny{} & 0.8 \\
\Rainy{} & 0.1 \\
\hline
\end{tabular}
\end{center}
We need just one more thing before our PGM is complete, however: while we can use this Conditional Probability Table to obtain any information we want about $A$, notice that the table depends on information about $W$. Thus, to fully parameterize our PGM, we’ll need to supply a non-conditional probability table giving the initial distribution over the weather. In this case, let’s just say that there’s a 50/50 chance of rain or sunshine, so that $P(W = \Sunny{}) = P(W = \Rainy{}) = 0.5$.
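With both tables in hand, we can already simulate the model. Here’s a small ancestral-sampling sketch in Python (draw the parent node first, then the child given the parent; the constant and function names are my own):

```python
import random

P_SUNNY = 0.5                                 # prior: P(W = Sunny)
P_OUT_GIVEN = {"Sunny": 0.8, "Rainy": 0.1}    # CPT: P(A = Go Out | W)

def sample_day(rng):
    """Ancestral sampling: draw the weather W, then the action A given W."""
    weather = "Sunny" if rng.random() < P_SUNNY else "Rainy"
    action = "Go Out" if rng.random() < P_OUT_GIVEN[weather] else "Stay In"
    return weather, action

rng = random.Random(0)
samples = [sample_day(rng) for _ in range(100_000)]
frac_out = sum(a == "Go Out" for _, a in samples) / len(samples)
print(frac_out)   # should be near P(Go Out) = 0.5 * 0.8 + 0.5 * 0.1 = 0.45
```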
Now we have everything we need! The resulting PGM, in graphical form, is presented in Figure 3. Pretty boring, but it gets the job done.
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[
  outer/.style={draw,circle,minimum size=1cm},
  latent/.style={fill=white},
  ar/.style={->,>=latex,thick},
  node distance=0.8cm
]
\node[outer,latent] (w) {$W$};
\node[outer,latent,right=of w] (a) {$A$};
\draw[ar] (w) -- (a);
\end{tikzpicture}
\caption{A basic PGM, representing the relationship between $W$, the weather, and $A$, the subsequent action of a person deciding whether to go out or stay in for the night.}
\label{fig:pgm-noshade}
\end{figure}
The beautiful thing about PGMs, though (and the primary reason to use them), is that you can then use this model to make inferences about the world in the face of incomplete information -- i.e., the situation in pretty much every real-world problem. The key tool here is the separation of nodes into two categories: observed (represented graphically as a shaded node) and latent (represented graphically as an unshaded node). Thus we can now use our model as a weather-inference machine: if we observe that the person we’re modeling is out at a party with us, what can we infer from this information about the weather outside? We can draw this situation as a PGM with shaded and unshaded nodes, as in Figure 4, and then use Bayes’ Rule to perform calculations over the network, to see how the observed information about the person at the party “flows” back into the node representing the weather.
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[
  outer/.style={draw,circle,minimum size=1cm},
  obs/.style={fill=lightgray},
  latent/.style={fill=white},
  ar/.style={->,>=latex,thick},
  node distance=0.8cm
]
\node[outer,latent] (w) {$W$};
\node[outer,obs,right=of w] (a) {$A$};
\draw[ar] (w) -- (a);
\end{tikzpicture}
\caption{The same situation as in Figure \ref{fig:pgm-noshade}, except that the node for variable $A$ is now shaded, indicating a situation where we have observed the person’s action ($A$) but still only have a probability distribution over the weather ($W$).}
\label{fig:pgm-shaded}
\end{figure}
Keeping in mind that Bayes’ Rule tells us, for any two events $X$ and $Y$, how to use information about $P(Y \mid X)$ to obtain information about $P(X \mid Y)$:

$$P(X \mid Y) = \frac{P(Y \mid X)\,P(X)}{P(Y)}.$$
We can now apply this rule to obtain our new probability distribution over the weather, taking into account the new information that the person has chosen to go out:

$$P(W = \Sunny{} \mid A = \text{Go Out}) = \frac{P(A = \text{Go Out} \mid W = \Sunny{})\,P(W = \Sunny{})}{P(A = \text{Go Out})}.$$
And now we simply plug in the information we already have from our conditional probability table to obtain our new (conditional) probability of interest:

$$P(W = \Sunny{} \mid A = \text{Go Out}) = \frac{0.8 \times 0.5}{0.8 \times 0.5 + 0.1 \times 0.5} = \frac{0.4}{0.45} \approx 0.89,$$

where the denominator $P(A = \text{Go Out})$ has been expanded, via the law of total probability, into a sum over the two possible weather states.
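The same plug-and-chug works as a few lines of Python, using only the numbers from the Conditional Probability Table and the 50/50 prior (variable names are my own):

```python
# Values from the Conditional Probability Table and the 50/50 prior.
p_out_given_sunny = 0.8
p_out_given_rainy = 0.1
p_sunny = 0.5

# Law of total probability for the denominator: sum over the weather states.
p_out = p_out_given_sunny * p_sunny + p_out_given_rainy * (1 - p_sunny)

# Bayes' Rule: P(Sunny | Out) = P(Out | Sunny) * P(Sunny) / P(Out).
posterior_sunny = p_out_given_sunny * p_sunny / p_out
print(round(p_out, 2), round(posterior_sunny, 2))   # 0.45 0.89
```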
So we have now learned something interesting from our observation! Namely: now that we’ve observed the person out at a party, the probability that it is sunny out jumps from 0.5 (called the ``prior'' estimate of $P(W = \Sunny{})$, i.e., our best guess without any other relevant information) to 0.89 (called the ``posterior'' estimate of $P(W = \Sunny{} \mid A = \text{Go Out})$).
If you’ve taken a stats class in high school or undergrad, you probably learned that a probability represents “how likely the event is”. That definition reflects the “frequentist” philosophy of probability, but in this book we instead adopt a “Bayesian” philosophy, which foregrounds the model-builder (you) by treating a probability as representing “how likely we think the event is”. Notice the subtle but important difference in wording.
I said it explicitly in the first sentence, but from here on out basically just add “to me” to the beginning of all these opinionated statements, in your head. Disclaimer complete.