How The Posterior Got Its Width - A visual story about uncertainty in Bayesian models.
Before you read
This post assumes you're comfortable with:
- Basic mathematical notation[1].
- Probability density functions (PDFs). What it means for a continuous random variable to have a density, and why density isn't probability. Fuzzy? Check out the "What does density mean?" section.
- Bayes' theorem in the inference setting. Specifically, the form posterior ∝ likelihood × prior. More here: [2]
- Linear regression at the level of fitting a line y = β₀ + β₁x to data.
- The Gaussian distribution. That it's defined (i.e., parameterized) by a mean (μ) and a variance (σ²).
What is a probabilistic model doing?
What we are after is a model of an input-output relationship: a function from an input x to an output y. We begin with the assumption that y is a random variable. For any given x, y can take one of many values with some probability.
The uncertainty ladder
- Rung 1 - A deterministic model (e.g., a standard neural network or ordinary least squares regression (OLS)) maps a single input x to a single point prediction ŷ = f_θ(x), where θ is one set of parameters. (Here θ denotes some set of parameters that help define the function, e.g., the weights of a neural network or the βs in a linear regression.) Take the example of a univariate OLS. We assume that the data-generating function is y = β₀ + β₁x + ε, a linear function of x plus an error term ε. (This ε is exactly aleatoric uncertainty: the irreducible variation in y that remains even when x is known exactly. For OLS we assume that ε ~ N(0, σ²); all of the randomness is contained in this term. By making this assumption, we are also claiming that y is normally distributed.) We model E[y | x] = β₀ + β₁x; we are asking: what is the mean of y at each value of x? The noise hasn't disappeared, it's still in y. We assumed E[ε] = 0, so the mean-zero noise term drops out under expectation. y is a random variable; that's the whole premise. Because y is random, at any fixed x, there is a distribution of possible y values. E[y | x] collapses that distribution down to a single number, the mean. OLS gives you a function that takes x in and gives you a single number out. This is useful, but it has thrown away everything else about the distribution of y at that x.
- Rung 2 - A probabilistic model avoids collapsing the uncertainty around y to a single number. For a fixed x, the model specifies a full distribution over y at each x. What this distribution should look like is a choice we make. We can suppose a normal distribution such that y | x ~ N(μ(x), σ²): a Gaussian whose mean is given by some function μ(x) (e.g., linear, a neural network, or something else) and whose variance σ² captures the noise around the mean. (This is the key difference from rung 1. In rung 1, we only modeled the mean E[y | x]; σ² lived in our assumption about the data-generating process but wasn't part of what we fit. Here, it's a first-class parameter that is estimated jointly with the rest via maximum likelihood: find a single θ that makes the observed (x, y) pairs jointly most probable under the model [3].) This lets you make statements such as "there is a 95% chance that y falls in this interval given this x", something rung 1 could not do (see the code sketch after this list). Fit via MLE, you end up with a single best guess for the parameters, θ̂. It tells you how uncertain y is given θ, but not how uncertain θ is given the data. (Quantifying parameter uncertainty is certainly possible at this level, but requires extra machinery that still only provides an approximation of the uncertainty, among other problems that I don't quite understand yet (^_^).)
- Rung 3 - A Bayesian model takes it a step further and places a distribution over θ itself. We start with the prior p(θ) (in standard statistical lingo, p(·) denotes a probability density function and F(·) a cumulative distribution function), a belief about plausible parameter values before seeing any data, and update it to get the posterior p(θ | D). The result now carries two layers of uncertainty: the noise around what the model predicts (aleatoric), and our uncertainty about the parameters θ itself (epistemic: uncertainty due to a lack of knowledge, from not seeing enough data or because the model is incomplete). Unlike rung 2, the model can now express the difference between having seen 10 data points and 10,000, the subject of the rest of this post.
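To make rungs 1 and 2 concrete, here is a minimal sketch (synthetic data and parameter names are my own, NumPy/SciPy assumed) that fits the same toy dataset both ways: OLS for a single point prediction, and maximum likelihood for a full Gaussian over y at each x.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical toy data: y = 2 + 3x + noise
x = rng.uniform(0, 5, size=200)
y = 2 + 3 * x + rng.normal(0, 1.5, size=200)

# Rung 1: OLS collapses y at each x to a single number, E[y | x].
X = np.column_stack([np.ones_like(x), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
point_prediction = beta_ols[0] + beta_ols[1] * 3.0   # one number for x = 3

# Rung 2: model y | x ~ N(b0 + b1*x, sigma^2) and fit all three
# parameters jointly by maximum likelihood.
def neg_log_lik(params):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)                        # keep sigma positive
    return -norm.logpdf(y, loc=b0 + b1 * x, scale=sigma).sum()

fit = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0])
b0, b1, sigma = fit.x[0], fit.x[1], np.exp(fit.x[2])

# Rung 2 can answer: "with 95% probability, where does y fall at x = 3?"
mu_at_3 = b0 + b1 * 3.0
interval = norm.interval(0.95, loc=mu_at_3, scale=sigma)
print(point_prediction, mu_at_3, interval)
```

This is only meant to show the difference in what each rung returns: rung 1 hands back one number per x, while rung 2 hands back a distribution you can query for intervals.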
What is θ?
Let's continue with our linear regression example. First consider a standard linear regression of the form:

ŷ = β₀ + β₁x
For a given x, we get a new ŷ. Now, instead of that, suppose the parameters θ = (β₀, β₁, β₂, β₃) define a distribution over y (for this toy example, the variance is modeled linearly, which would allow negative values; in reality, we would prevent this with a transformation, e.g., a log scale, left out here for clarity):

y | x, θ ~ N(β₀ + β₁x, β₂ + β₃x)
θ defines a normal distribution, and for each x, you get a new distribution: x in, a distribution over y at that x out.
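As a small sketch (the names β₀…β₃ are my own labels for the four components of θ), the mapping from an input x and a θ to a distribution over y could look like:

```python
from scipy.stats import norm

def predictive_dist(x, theta):
    """Map an input x and parameters theta = (b0, b1, b2, b3)
    to a Normal distribution over y at that x."""
    b0, b1, b2, b3 = theta
    mean = b0 + b1 * x                 # mean of y is linear in x
    var = b2 + b3 * x                  # variance modeled linearly too (can go negative!)
    return norm(loc=mean, scale=var ** 0.5)

# x in, a distribution over y at that x out
dist = predictive_dist(3.0, theta=(1.0, 3.0, 0.5, 0.2))
print(dist.mean(), dist.std())
```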
In traditional ML, we would try to minimize some loss to get to a single θ (i.e., a model). In the probabilistic setting, we start with a distribution over θ. Say we start with the assumption that any value of θ is equally likely; then we would have the probability mass (note that all of the mass sums to 1) spread evenly over all possible θs within a range [a, b]. The idea is to update this distribution of mass over θ as informed by the data (i.e., the evidence) we have collected.
We start with a prior distribution p(θ). (This is a tricky object. In this example, θ consists of 4 parameters. Each parameter can take a value within some range; they don't all have to be between [a, b]. That is difficult to visualize. p(θ) is a compact way to say the joint distribution over the parameters that define the predictive distribution of y.)
We update it using our evidence (i.e., the likelihood) to get a posterior distribution:

p(θ | D) ∝ p(D | θ) p(θ)

Here p denotes a pdf and D is the data. D is a collection of (x, y) pairs that spans the input space: a set of samples that vary in the input space and have some noise in the output space. Let's suppose our data looks like this in the input space.
D is our evidence. A lot of it comes from the yellow region and very little from the green. Recall that for each x we get a distribution over y out. Let's consider an example.
What's happening here? For this data point, we evaluate how likely the observed y is under each θ. The full likelihood multiplies these across all points.
For x = 3, you get a set of distributions out. Which of these could have plausibly generated y = 10? For the second, yes: 10 is comfortably around the peak (high density). For the third, less so; it's way out in the tail (low density).
Literally: what is the likelihood of the data point given these parameters?
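A small sketch of that check, reusing the hypothetical predictive_dist from above (re-defined here so the snippet stands alone) with the y = 10 at x = 3 example and three made-up candidate θs:

```python
from scipy.stats import norm

def predictive_dist(x, theta):
    b0, b1, b2, b3 = theta
    return norm(loc=b0 + b1 * x, scale=(b2 + b3 * x) ** 0.5)

x_obs, y_obs = 3.0, 10.0

# Three candidate parameter settings (made-up values for illustration)
thetas = [
    (0.0, 1.0, 1.0, 0.1),   # mean at x=3 is 3  -> y=10 far in the tail
    (1.0, 3.0, 1.0, 0.2),   # mean at x=3 is 10 -> y=10 near the peak
    (5.0, 4.0, 1.0, 0.1),   # mean at x=3 is 17 -> y=10 out in the tail
]

for theta in thetas:
    density = predictive_dist(x_obs, theta).pdf(y_obs)
    print(theta, f"p(y=10 | x=3, theta) = {density:.4f}")
```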
Now, let's plug the visual pieces into the Bayesian update, p(θ | D) ∝ p(D | θ) p(θ):
The likelihood shapes p(θ | D). High-likelihood θs get scaled up, low-likelihood θs get squashed down. Over many examples, the process looks like this:
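As a rough sketch of that prior-to-posterior loop (a discretized grid over θ instead of a continuous density, a fixed noise scale, and made-up data concentrated in one region of x; all of these are my own simplifications):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Made-up data: most x values come from a "yellow" region around x ~ 3
x_data = rng.normal(3.0, 0.5, size=100)
y_data = 1.0 + 3.0 * x_data + rng.normal(0, 1.0, size=100)

# Discretized theta grid: here theta = (b0, b1) with a fixed noise scale,
# a simplification of the 4-parameter model so the grid stays tiny.
b0_grid, b1_grid = np.meshgrid(np.linspace(-5, 5, 81), np.linspace(-5, 5, 81))
thetas = np.column_stack([b0_grid.ravel(), b1_grid.ravel()])
sigma = 1.0

# Uniform prior: equal mass on every candidate theta
log_post = np.full(len(thetas), -np.log(len(thetas)))

# Likelihood multiplications (sums in log space), one per data point:
# thetas that give the observed y high density get weighted up.
for xi, yi in zip(x_data, y_data):
    mu = thetas[:, 0] + thetas[:, 1] * xi
    log_post += norm.logpdf(yi, loc=mu, scale=sigma)

# Normalize back into a distribution over theta
log_post -= np.logaddexp.reduce(log_post)
posterior = np.exp(log_post)
print(thetas[np.argmax(posterior)])   # thetas near (1, 3) dominate
```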
How does the distribution of data in the input space play into the training process, and what information does the Bayesian model capture about it?
Recall again that each x generates a distribution over y, but to do that, we first need to sample θ from p(θ | D) to get parameters. During training, p(θ | D) was shaped by the likelihood (i.e., the data).
Most of the data will come from the yellow region. Over many samples, as the prior-to-posterior loop goes on, samples in the yellow region contribute many likelihood multiplications, constraining which θs are plausible for x in that region. When shaping the posterior, for each example we check whether a given θ agrees with it (i.e., what's the likelihood of y given this θ?). High-agreement θs survive (weighted up by the likelihood), low-agreement θs don't (weighted down by the likelihood). These θs were mostly tested against data in the yellow region. They were asked often: does the distribution over y built with you give high likelihood to y = 10 when x = 3?
Note that these θs also say something about y when x lies in the green region. You pump such an x through and get many distributions over y, p(y | x, θ). However, the θs were rarely tested at these xs. That is, very few likelihood multiplications were applied to squash θs that disagreed with y values there. What we ideally want is a distribution over θ that does well in both regions of x. However, we don't have much evidence in this region. Many of the θs, when evaluated at such an x, will produce distributions centered at very different values of y because they were never forced to converge during training.
So when we pump a new example from this region through the machinery and get a set of distributions out, the means of these distributions will not be clustered around some number. Thus when you take the average distribution over all of these θs, the posterior predictive distribution p(y | x, D) = ∫ p(y | x, θ) p(θ | D) dθ, it will have a big spread, reflecting that there was not much training data in this region: epistemic uncertainty.
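One way to watch the width appear, continuing the grid sketch above (repeated here so it stands alone, same made-up data): draw θs according to their posterior mass and compare the spread of their predictive means at an x the data visited often versus one it barely touched.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Same made-up setup as before: data concentrated around x ~ 3
x_data = rng.normal(3.0, 0.5, size=100)
y_data = 1.0 + 3.0 * x_data + rng.normal(0, 1.0, size=100)

b0_grid, b1_grid = np.meshgrid(np.linspace(-5, 5, 81), np.linspace(-5, 5, 81))
thetas = np.column_stack([b0_grid.ravel(), b1_grid.ravel()])
sigma = 1.0

log_post = np.zeros(len(thetas))            # uniform prior (up to a constant)
for xi, yi in zip(x_data, y_data):
    log_post += norm.logpdf(yi, loc=thetas[:, 0] + thetas[:, 1] * xi, scale=sigma)
posterior = np.exp(log_post - np.logaddexp.reduce(log_post))
posterior /= posterior.sum()                # guard against float round-off

# Posterior predictive at a given x: average the per-theta Normals,
# approximated here by sampling thetas according to their posterior mass.
samples = thetas[rng.choice(len(thetas), size=2000, p=posterior)]

for x_new in (3.0, 9.0):                    # data-rich vs data-poor x
    means = samples[:, 0] + samples[:, 1] * x_new
    total_sd = np.sqrt(means.var() + sigma ** 2)   # epistemic + aleatoric spread
    print(x_new, f"spread of predictive means = {means.std():.3f}",
          f"total predictive sd = {total_sd:.3f}")
```

The predictive means barely disagree at x = 3 and fan out at x = 9, which is exactly the extra width the next paragraph describes.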
And so, that is how the posterior (predictive distribution) gets its width. In the regions the likelihood visited often, the posterior over θ got squeezed, the sampled distributions over y agree, and their average is narrow. In the regions the likelihood barely touched, many θs survived the update, each telling a different story about y, and their average distribution fans out. Epistemic uncertainty is the visible residue of how much the data got to push the prior around at each x.
There's more to the story, though. This is the "more data would help" flavor: uncertainty in the parameters θ. The other is uncertainty about the model family itself. A linear model fit to sinusoidal data will produce confident, narrow predictive bands in data-rich regions and be confidently wrong. Fixing that kind of uncertainty requires a better model, not a bigger dataset. But that's for another time.
References
- [1] Glossary of mathematical symbols. https://en.wikipedia.org/wiki/Glossary_of_mathematical_symbols
- [2] Updating Priors. https://udesh.io/Updating_priors.html
- [3] Maximum likelihood estimation. https://www.probabilitycourse.com/chapter8/8_2_3_max_likelihood_estimation.php