Generative Models

Variational autoencoder (VAE)

A variational autoencoder (VAE) is a type of generative neural network used for unsupervised learning. It learns a latent representation of the input data and then generates new data points that are similar to the original data by sampling from the learned latent space.

Explanation

Variational autoencoders belong to the family of autoencoders: neural networks trained to reconstruct their input. VAEs differ from standard autoencoders in that the encoder outputs the parameters of a probability distribution over the latent space (typically a Gaussian) rather than a single deterministic latent vector. During encoding, the input is mapped to a mean and a variance, which together define a Gaussian distribution in the latent space.

To generate new data, a point is sampled from this distribution and then decoded back into the original data space.

The loss function used to train a VAE combines two terms: a reconstruction loss, which measures how well the decoder reconstructs the input from the latent representation, and a Kullback-Leibler (KL) divergence term, which keeps the learned latent distribution close to a prior, usually a standard normal distribution. This regularization encourages a well-structured latent space that supports meaningful data generation and smooth interpolation between data points.
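The encode-sample-decode pipeline and two-term loss described above can be sketched in code. The following is a minimal illustration using PyTorch; the layer sizes, the use of a Gaussian encoder with a diagonal covariance, and the mean-squared-error reconstruction term are illustrative assumptions, not the only possible choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE sketch: dimensions and layer choices are illustrative."""

    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=16):
        super().__init__()
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def encode(self, x):
        # The encoder outputs distribution parameters, not a single latent vector.
        h = F.relu(self.enc(x))
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps so gradients flow through mu and logvar.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.dec(z), mu, logvar

def vae_loss(x, x_recon, mu, logvar):
    # Reconstruction term plus KL(q(z|x) || N(0, I)), summed over the batch.
    recon = F.mse_loss(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

After training, new data points are generated by sampling `z` from the standard normal prior and passing it through the decoder alone, e.g. `model.dec(torch.randn(1, 16))`.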
