This post sets the background for an upcoming post on my work on more efficient use of neural samplers for Variational Inference.
Variational Inference
At the core of Bayesian Inference lies the well-known Bayes' theorem, relating our prior beliefs $p(z)$ with those obtained after observing some data $x$:
$$ p(z|x) = \frac{p(x|z) p(z)}{p(x)} = \frac{p(x|z) p(z)}{\int p(x, z) dz} $$
However, in most practical cases the denominator $p(x)$ requires intractable integration. The field of Approximate Bayesian Inference therefore seeks to approximate this posterior efficiently. For example, MCMC-based methods essentially use a sample-based empirical distribution as the approximation.
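To see the contrast with the tractable case, here is a minimal sketch (with made-up numbers of my own, not from any real model) of Bayes' theorem when $z$ takes finitely many values, so the integral in the denominator reduces to a finite sum:

```python
import numpy as np

# Toy discrete example: when z has 3 possible values, the evidence p(x)
# is a sum over them and the posterior is exactly computable.
prior = np.array([0.5, 0.3, 0.2])          # p(z) over 3 latent states
likelihood = np.array([0.9, 0.4, 0.1])     # p(x | z) for the observed x

evidence = np.sum(likelihood * prior)      # p(x) = sum_z p(x|z) p(z)
posterior = likelihood * prior / evidence  # p(z|x) by Bayes' theorem

assert np.isclose(posterior.sum(), 1.0)    # a proper distribution over z
```

Once $z$ is continuous and high-dimensional, this sum becomes an integral with no closed form, which is exactly what makes approximation necessary.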
In problems of learning latent variable models (for example, VAEs) we seek to do maximum likelihood learning for some hierarchical model $p_\theta(x) = \int p_\theta(x, z) dz$, but computing the integral is intractable and the latent variables $z$ are not observed.
Variational Inference is a method that has gained a lot of popularity recently, especially due to its scalability. It neatly allows for simultaneous inference (finding the approximate posterior) and learning (optimizing the parameters of the model) by means of the evidence lower bound (ELBO) on the marginal log-likelihood $\log p_\theta(x)$, obtained by applying importance sampling followed by Jensen's inequality:
$$ \log p_\theta(x) = \log \mathbb{E}_{p_\theta(z)} p_\theta(x|z) = \log \mathbb{E}_{q_\phi(z|x)} \frac{p_\theta(x, z)}{q_\phi(z|x)} \ge \mathbb{E}_{q_\phi(z|x)} \log \frac{p_\theta(x, z)}{q_\phi(z|x)} =: \mathcal{L} $$
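The bound can be seen at work numerically. Below is a minimal sketch on a toy conjugate model of my own choosing (not from the post): $z \sim \mathcal{N}(0,1)$, $x|z \sim \mathcal{N}(z,1)$, so $p(x) = \mathcal{N}(x; 0, 2)$ is known exactly, with a deliberately crude proposal $q(z|x) = \mathcal{N}(0,1)$:

```python
import numpy as np

# Monte Carlo estimate of the ELBO on a toy model where log p(x) is known
# in closed form, so the bound can be checked directly.
rng = np.random.default_rng(0)

def log_normal(v, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (v - mean) ** 2 / var)

x = 1.0
z = rng.normal(0.0, 1.0, size=100_000)     # z ~ q(z|x) = N(0, 1)
log_joint = log_normal(z, 0.0, 1.0) + log_normal(x, z, 1.0)  # log p(x, z)
log_q = log_normal(z, 0.0, 1.0)                              # log q(z|x)
elbo = np.mean(log_joint - log_q)          # E_q log p(x,z)/q(z|x)

log_px = log_normal(x, 0.0, 2.0)           # exact log marginal likelihood
assert elbo <= log_px                      # the Jensen gap is the KL > 0
```

Here the gap $\log p(x) - \mathcal{L}$ is exactly $D_{KL}(q(z|x) \mid\mid p(z|x)) \approx 0.40$, matching the identity below.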
This lower bound should be maximized w.r.t. both $\phi$ (variational parameters) and $\theta$ (model parameters). To better understand the effect of such optimization, it's helpful to consider the gap between the marginal log-likelihood and the bound. It's easy to show that this gap equals a Kullback-Leibler (KL) divergence:
$$ \log p_\theta(x) - \mathbb{E}_{q_\phi(z|x)} \log \frac{p_\theta(x,z)}{q_\phi(z|x)} = D_{KL}(q_\phi(z|x) \mid\mid p_\theta(z|x)) $$
Now it's easy to see that maximizing the ELBO w.r.t. $\phi$ tightens the bound and performs approximate inference: $q_\phi(z|x)$ becomes closer to the true posterior $p_\theta(z|x)$ as measured by the KL divergence. While we hope that maximizing the bound w.r.t. $\theta$ increases the marginal log-likelihood $\log p_\theta(x)$, this is obstructed by the KL divergence. In effect, maximizing the ELBO is equivalent to maximizing the marginal log-likelihood regularized with $D_{KL}(q_\phi(z|x) \mid\mid p_\theta(z|x))$, except there's no hyperparameter to control the strength of this regularization. This regularization prevents the true posterior $p_\theta(z|x)$ from deviating too much from the variational distribution $q_\phi(z|x)$. That's not all bad, since you then know the true posterior has a somewhat simple form, but on the other hand it prevents us from learning powerful and expressive models $p_\theta(x) = \int p_\theta(x|z) p_\theta(z) dz$. Therefore, if we're after expressive models $p_\theta(x)$, we should probably minimize this regularization effect, for example, by means of more expressive variational approximations.
Intuitively, the tighter the bound, the smaller the regularization effect. And it's relatively easy to obtain a tighter bound: $$ \begin{align*} \log p_\theta(x) &= \log \mathbb{E}_{p_\theta(z)} p_\theta(x|z) = \log \mathbb{E}_{q_\phi(z_{1:K}|x)} \frac{1}{K} \sum_{k=1}^K \frac{p_\theta(x, z_k)}{q_\phi(z_k|x)} \\ &\ge \mathbb{E}_{q_\phi(z_{1:K}|x)} \log\left( \frac{1}{K} \sum_{k=1}^K \frac{p_\theta(x, z_k)}{q_\phi(z_k|x)} \right) =: \mathcal{L}_K \ge \mathcal{L} \end{align*} $$ That is, simply by taking several samples to estimate the marginal likelihood $p_\theta(x)$ under the logarithm, we make the bound tighter. Such bounds are usually called IWAE bounds (after the Importance Weighted Autoencoders paper in which they were first introduced), but we'll be calling them multi-sample variational lower bounds. Such bounds were shown to correspond to using more expressive proposal distributions and are very powerful, but they require multiple evaluations of the decoder $p_\theta(x|z)$, which might be very expensive for complex models, for example, when applying VAEs to dialogue modelling.
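The ordering $\mathcal{L} \le \mathcal{L}_K \le \log p_\theta(x)$ can be checked on the same toy model as above (my own illustration: $z \sim \mathcal{N}(0,1)$, $x|z \sim \mathcal{N}(z,1)$, $q(z|x) = \mathcal{N}(0,1)$):

```python
import numpy as np

# Compare the single-sample ELBO with the K-sample IWAE bound on a toy
# model where log p(x) is known exactly.
rng = np.random.default_rng(0)

def log_normal(v, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (v - mean) ** 2 / var)

x, K, n = 1.0, 10, 20_000
z = rng.normal(0.0, 1.0, size=(n, K))      # K proposal samples per estimate
# log importance weights: log p(x, z_k) - log q(z_k|x)
log_w = log_normal(z, 0.0, 1.0) + log_normal(x, z, 1.0) - log_normal(z, 0.0, 1.0)

elbo = np.mean(log_w)                                    # K=1 bound, averaged
iwae = np.mean(np.log(np.mean(np.exp(log_w), axis=1)))   # log of K-sample average
log_px = log_normal(x, 0.0, 2.0)                         # exact log marginal

assert elbo <= iwae <= log_px              # L <= L_K <= log p(x)
```

Note that $\mathcal{L}_K \ge \mathcal{L}$ holds per batch of $K$ weights by concavity of the logarithm, not just in expectation.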
An alternative direction is to use a more expressive family of variational distributions $q_\phi(z|x)$. Moreover, with the explosion of Deep Learning we actually know one family of models that has empirically demonstrated terrific approximation capabilities: Neural Networks. We will therefore consider so-called Neural Samplers as generators of approximate posterior $q(z|x)$ samples. A Neural Sampler is simply a neural network trained to take some simple (say, Gaussian) random variable $\psi \sim q(\psi|x)$ and transform it into a $z$ that has the properties we seek. Canonical examples are GANs and VAEs, and we'll get back to them later in the discussion.
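A neural sampler can be sketched in a few lines. The architecture below (a tiny untrained MLP with sizes of my own choosing) is purely illustrative: noise $\psi$ and the conditioning input $x$ go in, a latent sample $z = f_\phi(\psi, x)$ comes out.

```python
import numpy as np

# Minimal sketch of a neural sampler: an MLP f_phi mapping simple noise
# psi ~ N(0, I), concatenated with x, to latent samples z.
rng = np.random.default_rng(0)

def neural_sampler(x, n_samples, dim_psi=4, dim_z=2, hidden=16):
    # Random (untrained) weights; training would tune these via the ELBO.
    W1 = rng.normal(size=(dim_psi + x.shape[0], hidden)) / np.sqrt(hidden)
    W2 = rng.normal(size=(hidden, dim_z)) / np.sqrt(hidden)
    psi = rng.normal(size=(n_samples, dim_psi))            # simple base noise
    inp = np.concatenate([psi, np.tile(x, (n_samples, 1))], axis=1)
    return np.tanh(inp @ W1) @ W2                          # z = f_phi(psi, x)

z = neural_sampler(np.array([1.0, -0.5]), n_samples=128)
assert z.shape == (128, 2)
```

Sampling is cheap; the trouble, as discussed next, is that the density $q_\phi(z|x)$ of these samples is intractable.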
Using neural nets here is not a new idea. There's been a lot of research in this direction, which we might roughly classify into three groups based on how they deal with the intractable $\log q_\phi(z|x)$ term:
 Flows
 Estimates
 Bounds
I'll briefly cover the first two and then discuss the last one, which is of central relevance to this post.
Flows
So-called Flow models appeared on the radar with the publication of the Normalizing Flows paper, and then quickly exploded into a hot topic of research. At the moment there exist dozens of works on all kinds of flows. The basic idea is that if the neural net defining the sampler is invertible, then by computing the determinant of its Jacobian matrix we can analytically find the density $q(z|x)$ via the change-of-variables formula. Flows further restrict the samplers to have efficiently computable Jacobian determinants. For further reading, refer to Adam Kosiorek's post.
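The change-of-variables idea is easiest to see in one dimension. Below is a minimal sketch (my own toy example, far simpler than any real flow) with the affine map $z = a\psi + b$, whose Jacobian determinant is just $a$:

```python
import numpy as np

# 1-D "affine flow": push psi ~ N(0,1) through z = a*psi + b.
# Change of variables: log q(z) = log q_base(psi) - log |det dz/dpsi|.
a, b = 2.0, 1.0

def log_base(psi):                       # base density: psi ~ N(0, 1)
    return -0.5 * (np.log(2 * np.pi) + psi ** 2)

def flow_log_density(z):
    psi = (z - b) / a                    # invert the flow
    return log_base(psi) - np.log(abs(a))

# This flow pushes N(0,1) to N(b, a^2); compare with the analytic density.
z = 0.3
analytic = -0.5 * (np.log(2 * np.pi * a ** 2) + (z - b) ** 2 / a ** 2)
assert np.isclose(flow_log_density(z), analytic)
```

Real flows stack many such invertible layers, each designed so that the Jacobian determinant stays cheap to compute.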
Flows were shown to be very powerful; they even managed to model high-dimensional data directly, as OpenAI researchers demonstrated with the Glow model. However, flow-based models require a neural network specially designed to be invertible and to have an easy-to-compute Jacobian determinant. Such restrictions might lead to inefficient parameter usage, requiring many more parameters and much more compute compared to simpler methods. The aforementioned Glow uses a lot of parameters and compute to learn modestly high-resolution images.
Estimates
Another direction is to estimate $q_\phi(z|x)/p(z)$ by means of auxiliary models. For example, the Density Ratio Trick lying at the heart of many GANs says that if you have an optimal discriminator $D^*(z, x)$ discerning samples of $q(z|x)$ from those of $p(z)$ (for the given $x$), then the following holds:
$$ \frac{D^*(z, x)}{1 - D^*(z, x)} = \frac{q(z|x)}{p(z)} $$
In practice we do not have the optimal classifier, so instead we train an auxiliary model to perform this classification. A particularly successful approach in this direction is Adversarial Variational Bayes. The biggest advantage of this method is the lack of any restrictions on the Neural Sampler (except the standard requirement of differentiability). The disadvantage is that it loses all bound guarantees and inherits a lot of stability issues from GANs.
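The identity behind the trick can be verified directly in 1-D. The densities below are my own toy choices, $q(z|x) = \mathcal{N}(1,1)$ and $p(z) = \mathcal{N}(0,1)$, for which the optimal discriminator is known analytically:

```python
import numpy as np

# Density Ratio Trick in 1-D: the optimal discriminator between q and p
# is D*(z) = q(z) / (q(z) + p(z)), and its odds recover the ratio q/p.
def normal_pdf(z, mean):
    return np.exp(-0.5 * (z - mean) ** 2) / np.sqrt(2 * np.pi)

def d_star(z):
    q, p = normal_pdf(z, 1.0), normal_pdf(z, 0.0)
    return q / (q + p)                   # optimal classifier, known here

z = 0.7
ratio_from_d = d_star(z) / (1.0 - d_star(z))
assert np.isclose(ratio_from_d, normal_pdf(z, 1.0) / normal_pdf(z, 0.0))
```

In a real GAN-style setup, $D^*$ is unknown and is replaced by a trained classifier, which is exactly where the guarantees are lost.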
Bounds and Hierarchical Variational Inference
Arguably, the most natural approach to employing Neural Samplers as variational approximations is to give an efficient lower bound on the ELBO. In particular, we'd like to give a variational lower bound on the intractable term $\log \tfrac{1}{q_\phi(z|x)}$.
You can notice that for the Neural Sampler as described above, the marginal density $q_\phi(z|x)$ has the form $q_\phi(z|x) = \int q_\phi(z|x, \psi) q_\phi(\psi|x) d\psi$, very similar to that of the VAE itself! Indeed, the Neural Sampler is a latent variable model just like the VAE, except it's conditioned on $x$. Great, you might think, we'll just reuse the bounds we derived above; problem solved, right? Well, no. The problem is that we need to give a lower bound on the negative marginal log-density, or equivalently, an upper bound on the marginal log-density.
But first we need to address one important question: what is $q_\phi(z|x, \psi)$? In the case of the GAN-like procedure we could say that this density is degenerate: $q_\phi(z|\psi, x) = \delta(z - f_\phi(\psi, x))$, where $f_\phi$ is the neural network that generates $z$ from $\psi$. While the estimation-based approach is fine with this, since it doesn't work with densities directly, for the bounds we need $q_\phi(z|x, \psi)$ to be a well-defined density, so from now on we'll assume it's some proper density, not the delta function^{2}.
Luckily, one can use the following identity
$$ \mathbb{E}_{q_\phi(\psi|z, x)} \frac{\tau_\eta(\psi|z, x)}{q_\phi(z, \psi|x)} = \frac{1}{q_\phi(z|x)} $$
where $\tau_\eta(\psi|z, x)$ is an arbitrary density we'll call the auxiliary variational distribution. Then, by applying the logarithm and Jensen's inequality, we obtain the much-needed variational upper bound:
$$ \log q_\phi(z|x) \le \mathbb{E}_{q_\phi(\psi|z, x)} \log \frac{q_\phi(z, \psi|x)}{\tau_\eta(\psi|z, x)} =: \mathcal{U} $$
Except, oops, it needs a sample from the true inverse model $q_\phi(\psi|z, x)$, which in general is not any easier to obtain than calculating $\log q_\phi(z|x)$ in the first place. Bummer? No: it turns out we can use the fact that the samples $z$ come from the same hierarchical process $q_\phi(z, \psi|x)$! Indeed, we are interested in $\log q_\phi(z|x)$ averaged over all $z \sim q_\phi(z|x)$: $$ \begin{align*} \mathbb{E}_{q_\phi(z|x)} \log q_\phi(z|x) &\le \mathbb{E}_{q_\phi(z|x)} \mathbb{E}_{q_\phi(\psi|z, x)} \log \frac{q_\phi(z, \psi|x)}{\tau_\eta(\psi|z, x)} \\ &= \mathbb{E}_{q_\phi(z, \psi|x)} \log \frac{q_\phi(z, \psi|x)}{\tau_\eta(\psi|z, x)} \\ &= \mathbb{E}_{q_\phi(\psi|x)} \mathbb{E}_{q_\phi(z|\psi, x)} \log \frac{q_\phi(z, \psi|x)}{\tau_\eta(\psi|z, x)} \end{align*} $$
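This free-sample trick can be checked numerically. Below is a minimal sketch with a toy hierarchical sampler of my own choosing, $\psi \sim \mathcal{N}(0,1)$, $z|\psi \sim \mathcal{N}(\psi,1)$, so that the marginal $q(z) = \mathcal{N}(0,2)$ is known in closed form, and a deliberately inexact auxiliary $\tau(\psi|z) = \mathcal{N}(z/2, 1)$ (the exact inverse would be $\mathcal{N}(z/2, 1/2)$):

```python
import numpy as np

# Averaged variational upper bound on log q(z) using the "free" psi that
# generated each z, compared against the exact marginal log-density.
rng = np.random.default_rng(0)

def log_normal(v, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (v - mean) ** 2 / var)

n = 100_000
psi = rng.normal(0.0, 1.0, size=n)           # psi ~ q(psi)
z = psi + rng.normal(0.0, 1.0, size=n)       # z ~ q(z|psi): free pairs (z, psi)

log_joint = log_normal(psi, 0.0, 1.0) + log_normal(z, psi, 1.0)  # log q(z, psi)
log_tau = log_normal(psi, z / 2.0, 1.0)                          # log tau(psi|z)
upper = np.mean(log_joint - log_tau)         # E log q(z,psi)/tau(psi|z)

true = np.mean(log_normal(z, 0.0, 2.0))      # E log q(z), exact density
assert upper >= true                         # the averaged upper bound holds
```

The slack between the two averages is exactly $\mathbb{E}_z D_{KL}(q(\psi|z) \mid\mid \tau(\psi|z))$, which vanishes when $\tau$ matches the true inverse model.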
These algebraic manipulations show that if we sample $z$ through the hierarchical scheme, then the $\psi$ used to generate this $z$ can be thought of as a free posterior sample^{1}. This leads to the following lower bound on the ELBO, introduced in the Hierarchical Variational Models paper:
$$ \log p_\theta(x) \ge \mathcal{L} \ge \mathbb{E}_{q_\phi(z, \psi|x)} \log \frac{p_\theta(x, z)}{ \tfrac{q_\phi(z, \psi|x)}{\tau_\eta(\psi|z, x)} } $$ Interestingly, this bound admits another interpretation. Indeed, it can be equivalently written as $$ \log p_\theta(x) \ge \mathbb{E}_{q_\phi(z, \psi|x)} \log \frac{p_\theta(x, z) \tau_\eta(\psi|z, x)}{q_\phi(z, \psi|x) } $$ which is just the ELBO for an extended model where the latent code $z$ is extended with $\psi$; and since there was no $\psi$ in the original model $p_\theta(x, z)$, we extend the model with $\tau_\eta(\psi|z, x)$ as well. This view has been investigated in the Auxiliary Deep Generative Models paper.
Let's now return to the variational upper bound $\mathcal{U}$. Can we give a multi-sample variational upper bound on $\log q_\phi(z|x)$, similar to IWAE? Following the same logic, we arrive at the following:
$$ \begin{align*} \log \frac{1}{q_\phi(z|x)} &= \log \mathbb{E}_{q_\phi(\psi_{1:K}|z, x)} \frac{1}{K} \sum_{k=1}^K \frac{\tau_\eta(\psi_k|z, x)}{q_\phi(z, \psi_k|x)} \\ &\ge \mathbb{E}_{q_\phi(\psi_{1:K}|z, x)} \log \frac{1}{K} \sum_{k=1}^K \frac{\tau_\eta(\psi_k|z, x)}{q_\phi(z, \psi_k|x)} \end{align*} $$ $$ \log q_\phi(z|x) \le \mathbb{E}_{q_\phi(\psi_{1:K}|z, x)} \log \frac{1}{\frac{1}{K} \sum_{k=1}^K \frac{\tau_\eta(\psi_k|z, x)}{q_\phi(z, \psi_k|x)}} $$
However, this bound, the Variational Harmonic Mean Estimator, is no good, as it uses $K$ samples from the true inverse model $q_\phi(\psi|x, z)$, whereas we can have only one free sample. The rest have to be obtained through expensive MCMC sampling, which doesn't scale well. Interestingly, this estimator was already presented in the original VAE paper (though buried in Appendix D), but discarded as too unstable.
Why multisample variational upper bound?
The gap between the ELBO and its tractable lower bound can be shown to be $$ \mathcal{L} - \mathbb{E}_{q_\phi(z, \psi|x)} \log \frac{p_\theta(x, z)}{ \tfrac{q_\phi(z, \psi|x)}{\tau_\eta(\psi|z, x)} } = D_{KL}(q_\phi(\psi|x, z) \mid\mid \tau_\eta(\psi|x, z)) $$ So since we'll be using some simple $\tau_\eta(\psi|x, z)$, we'll be restricting the true inverse model $q_\phi(\psi|x, z)$ to also be somewhat simple, limiting the expressivity of $q_\phi(z|x)$, and thus limiting the expressivity of $p_\theta(z|x)$... Looks like we ended up where we started, right? Well, not quite, as we might have gained more than we lost by moving the simple distribution from $q(z|x)$ to $\tau(\psi|x, z)$, but it's still not quite satisfying. A multi-sample upper bound would therefore allow us to give tighter bounds (which don't suffer from the regularization as much) without invoking any additional evaluations of the model's decoder $p_\theta(x|z)$ (see the Variational Harmonic Mean Estimator above as an example).
So... are there efficient multi-sample variational upper bounds? A year ago you might have thought the answer was "Probably not", until... [To be continued]

This is not a new result; see Grosse et al., section 4.2, the paragraph on "simulated data". ↩

The problem is that the delta function is not an ordinary function but a generalized function, and special care has to be taken when dealing with it. ↩