Chris Cremer

I'm currently working at Cohere. Previously, I interned at Facebook Reality Labs and Microsoft Research. In the spring of 2020, I completed my PhD in Computer Science at the University of Toronto and Vector Institute, where I was advised by Quaid Morris and David Duvenaud.

Publications

On the Importance of Learning Aggregate Posteriors in
Multimodal Variational Autoencoders

We study latent variable models of two modalities: images and text. A common task for these multimodal models is conditional generation; for instance, generating an image conditioned on text. This can be achieved by sampling from the posterior of the text and then generating the image given the sampled latent variable. However, we find a problem with this approach: the posterior of the text does not match the posteriors of the images corresponding to that text. As a result, the generated images are either of poor quality or don't match the text. A similar mismatch is encountered between the prior and the marginal aggregate posterior. In this paper, we highlight the importance of learning aggregate posteriors when faced with these types of distribution mismatches. We demonstrate this on modified versions of the CLEVR and CelebA datasets. (A minimal sketch of the conditional generation pipeline follows below.)
Chris Cremer, Nate Kushman
AABI, December 2018
[pdf, bibtex]
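The conditional generation procedure described above takes only a few lines to sketch. This is a minimal illustration, not the paper's code; `text_encoder` and `image_decoder` are hypothetical trained modules standing in for the text inference network q(z | text) and the image generative network p(image | z):

```python
# Minimal sketch of text-to-image conditional generation in a
# multimodal VAE. `text_encoder` and `image_decoder` are hypothetical
# trained modules, not code from the paper.
import torch

def generate_image_from_text(text, text_encoder, image_decoder):
    # Encode the text into the parameters of its approximate posterior,
    # q(z | text) = N(mu, diag(exp(logvar))).
    mu, logvar = text_encoder(text)
    # Draw a latent sample from the text posterior.
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    # Decode the latent with the image generative model, p(image | z).
    return image_decoder(z)
```

The failure mode studied in the paper arises exactly at the sampling step: if q(z | text) places mass where the aggregate posterior of the matching images does not, the decoded images are poor or mismatched.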

Inference Suboptimality in Variational Autoencoders

We analyze approximate inference in variational autoencoders in terms of two quantities: the approximation gap, between the true posterior and the best distribution in the chosen variational family, and the amortization gap, introduced by producing the approximate posterior with a shared encoder rather than optimizing it per data point. We find that suboptimal inference is often due to amortizing inference rather than to the limited complexity of the approximating distribution. We show that this is partly because the generator learns to accommodate the choice of approximation. Furthermore, we show that the parameters used to increase the expressiveness of the approximation play a role in generalizing inference rather than simply improving the complexity of the approximation. (The gap decomposition is written out below.)
Chris Cremer, Xuechen Li, David Duvenaud
ICML, July 2018
[arxiv, bibtex]
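The two gaps referenced above decompose the total inference gap. Writing L[q] for the ELBO evaluated with distribution q, and q* for the optimal distribution within the variational family:

```latex
\log p(x) - \mathcal{L}[q]
  = \underbrace{\log p(x) - \mathcal{L}[q^{*}]}_{\text{approximation gap}}
  \;+\; \underbrace{\mathcal{L}[q^{*}] - \mathcal{L}[q]}_{\text{amortization gap}}
```

The approximation gap is a property of the chosen variational family; the amortization gap is the extra slack from producing q with a shared encoder rather than optimizing the variational parameters separately for each data point.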

Reinterpreting Importance-Weighted Autoencoders

The standard interpretation of importance-weighted autoencoders is that they maximize a tighter lower bound on the marginal likelihood than the standard evidence lower bound. We give an alternative interpretation of this procedure: it optimizes the standard variational lower bound, but with a more complex distribution. We formally derive this result, present a tighter lower bound, and visualize the implicit importance-weighted distribution. (A short sketch of the importance-weighted bound follows below.)
Chris Cremer, Quaid Morris, David Duvenaud
ICLR Workshop, April 2017
[arxiv, bibtex]
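The importance-weighted bound itself is a few lines of code. This is a minimal sketch rather than the paper's implementation; `q_params` (a Gaussian encoder) and `log_joint` (returning log p(x, z) for each sample) are hypothetical stand-ins:

```python
# Minimal sketch of the importance-weighted (IWAE) bound.
# `q_params` and `log_joint` are hypothetical stand-ins for the
# encoder and the model's joint log-density.
import math
import torch

def iwae_bound(x, q_params, log_joint, k=10):
    mu, logvar = q_params(x)              # q(z|x) = N(mu, sigma^2)
    std = torch.exp(0.5 * logvar)
    # Draw k reparameterized samples from q(z | x).
    z = mu + std * torch.randn(k, *mu.shape)
    # Importance weights: log w_i = log p(x, z_i) - log q(z_i | x).
    log_q = torch.distributions.Normal(mu, std).log_prob(z).sum(-1)
    log_w = log_joint(x, z) - log_q       # shape: (k,)
    # IWAE bound: log (1/k) sum_i w_i; k = 1 recovers the standard ELBO.
    return torch.logsumexp(log_w, dim=0) - math.log(k)
```

Under the paper's reinterpretation, this same quantity is the standard ELBO evaluated with the implicit, more expressive distribution induced by the importance-weighting procedure.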

Topics in Machine Learning

INNF+ Workshop

I co-organized a workshop at ICML 2020 on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models.
July 2020

Invertible Neural Networks and Normalizing Flows

I co-organized a workshop at ICML 2019 on Invertible Neural Networks and Normalizing Flows.
June 2019

Learning to Ignore

An exploration of how to model the information that is relevant to a trained network
April 2018
[blog]

Uncertainty in Bayesian Neural Networks

Visualizations of decision boundaries in BNNs
August 2017
[slides]

Approximate Posterior Building Blocks

This is a short review of orthogonal methods for improving inference in latent variable models. I examine the lower bounds and computational complexities of these methods, as well as their combinations.
June 2017
[pdf]

Gradients of Deep Networks

A small look at skip-connection models
March 2017
[slides] [blog]

Intro to Probability for ML

Review of probability theory
September 2015
[slides]

Dissertations

PhD Dissertation: Approximate Inference in Variational Autoencoders

Deep latent variable models are powerful tools for modelling complex distributions. However, training such a model typically requires approximate inference over its latent variables. A variational autoencoder (VAE) is a framework for jointly learning the generative and inference models of a latent variable model. This thesis provides novel analyses, applications, and interpretations of approximate inference in VAEs. (The objective underlying the framework is written out below.)
Advisors: Quaid Morris and David Duvenaud
University of Toronto, April 2020
[pdf]
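For reference, the objective that couples the two models is the evidence lower bound (ELBO), maximized jointly over the generative parameters θ and the inference parameters φ:

```latex
\log p_\theta(x) \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x)}\!\left[ \log p_\theta(x \mid z) \right]
  - \mathrm{KL}\!\left( q_\phi(z \mid x) \,\|\, p(z) \right)
```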

MSc Dissertation: Gene Expression Deconvolution with Subpopulation Proportions

Personalized cancer treatment strategies are currently hindered by intratumor heterogeneity. One source of heterogeneity, clonal evolution, can lead to genetically distinct subpopulations within a sample. Through the use of subclonal reconstruction methods, we can obtain estimates of the subpopulation proportions within a single sample. Here, I leverage these proportion estimates by incorporating them into the deconvolution of tumour gene expression data in order to estimate subclone-specific gene expression profiles. (A simplified sketch of the deconvolution step follows below.)
Advisor: Quaid Morris
University of Toronto, February 2016
[pdf]
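The core deconvolution step can be illustrated with a simplified non-negative least-squares formulation; this is a sketch of the general setup, not the thesis's exact model. Here `X` is a bulk expression matrix (samples × genes) and `P` holds the subclonal proportion estimates (samples × subclones) produced by a subclonal reconstruction method:

```python
# Simplified sketch of proportion-informed expression deconvolution:
# solve X ≈ P @ S for subclone-specific expression profiles S,
# one gene at a time, under non-negativity constraints.
import numpy as np
from scipy.optimize import nnls

def deconvolve_profiles(X, P):
    n_subclones = P.shape[1]
    n_genes = X.shape[1]
    S = np.zeros((n_subclones, n_genes))
    for g in range(n_genes):
        # Expression levels are non-negative, so constrain each fit.
        S[:, g], _ = nnls(P, X[:, g])
    return S
```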