AutoVAE: Mismatched Variational Autoencoder with Irregular Posterior Prior Pairing


The variational autoencoder (VAE) has been used in a myriad of applications, e.g., dimensionality reduction and generative modeling. A VAE relies on a specific distributional model for stochastic sampling in the latent space. The normal distribution is the most common choice because it admits straightforward sampling, the reparameterization trick, and a differentiable closed-form expression of the Kullback–Leibler divergence. Although various other distributions, such as the Laplace, have been studied in the literature, the effect of heterogeneously pairing different distributions for the posterior and prior remains largely unexplored to date. In this paper, we investigate numerous configurations of such a mismatched VAE, e.g., one in which the uniform distribution serves as the posterior belief at the encoder while the Cauchy distribution serves as the prior belief at the decoder. When different distributions are allowed across latent nodes, the total number of potential combinations to explore grows rapidly with the number of latent nodes. We propose a novel framework called AutoVAE, which searches for a better pairing set of posterior–prior beliefs in the context of automated machine learning for hyperparameter optimization. We demonstrate that the proposed irregular pairing offers a potential gain in the variational Rényi bound. In addition, we analyze a variety of likelihood beliefs and divergence orders.
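To illustrate the core complication of a mismatched pair, consider the abstract's example of a uniform posterior with a Cauchy prior: unlike the Gaussian–Gaussian case, this pairing has no closed-form KL divergence, so the divergence term must be estimated from reparameterized samples. The following is a minimal sketch (not the paper's implementation; the function name and sample budget are illustrative assumptions):

```python
import math
import random

def mc_kl_uniform_vs_cauchy(mu, width, n_samples=50000, seed=0):
    """Monte Carlo estimate of KL(q || p) for one latent node of a
    mismatched VAE, assuming:
      q(z|x) = Uniform(mu - width/2, mu + width/2)  (posterior belief)
      p(z)   = standard Cauchy                      (prior belief)
    No closed form exists for this pair, so we average
    log q(z) - log p(z) over reparameterized samples of q.
    """
    rng = random.Random(seed)
    # The uniform density is constant on its support, so log q(z) is a constant.
    log_q = -math.log(width)
    total = 0.0
    for _ in range(n_samples):
        u = rng.random()
        # Reparameterization: z = g(mu, width, u) with u ~ Uniform(0, 1),
        # keeping z differentiable w.r.t. the encoder outputs (mu, width).
        z = mu + width * (u - 0.5)
        # Standard Cauchy log-density: log p(z) = -log(pi) - log(1 + z^2)
        log_p = -math.log(math.pi) - math.log1p(z * z)
        total += log_q - log_p
    return total / n_samples
```

Because the sampler is reparameterized, the same construction carries over to gradient-based training; the Monte Carlo average simply replaces the analytic KL term in the objective for any posterior–prior pairing the search considers.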

