- Felipe N. Ducau - [email protected]
- Sony Trénous - [email protected]
Motivated by the work of Chen et al. [1], we analyze the role of mutual information (MI) in variational autoencoders (VAEs). We experimentally study the behavior of this model when the MI between the latent code and the generated data is explicitly enforced as part of its loss function. Furthermore, we attempt to formalize the role of MI in the VAE objective. We give an interpretation of a lower bound on the MI as the reconstruction error of a dual VAE.
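For concreteness, the kind of objective studied here augments the standard evidence lower bound (ELBO) with an explicit MI term between the latent code and the generated data. The form below is a sketch of this idea; the weighting coefficient $\lambda$ and the exact placement of the MI term are illustrative assumptions, not details taken from the text:

$$
\mathcal{L}(\theta, \phi; x) \;=\; \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;-\; D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\|\,p(z)\big)}_{\text{standard ELBO}} \;+\; \lambda\, I(z; \hat{x}), \qquad \hat{x} \sim p_\theta(x \mid z).
$$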
[1] Xi Chen et al. "InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets". In: CoRR abs/1606.03657 (2016). URL: http://arxiv.org/abs/1606.03657.