
Autoencoder

An autoencoder is a type of neural network used for unsupervised learning to learn efficient data encodings. It works by compressing the input into a latent space representation and then reconstructing the original input from this compressed representation, forcing the network to learn the most salient features of the data.

Explanation

Autoencoders consist of two main parts: an encoder and a decoder. The encoder maps the input data to a lower-dimensional latent space. This compressed representation, often called the "bottleneck," captures the most important features of the input. The decoder then reconstructs the original input from this latent representation.

The network is trained to minimize the reconstruction error, that is, the difference between the original input and the reconstructed output. By minimizing this error, the autoencoder learns to extract and represent the key features of the data.

Several variations of autoencoders exist:

- Sparse autoencoders, which encourage sparse latent representations
- Variational autoencoders, which learn probabilistic latent spaces
- Denoising autoencoders, which are trained to reconstruct clean inputs from noisy versions of them

Autoencoders are useful for dimensionality reduction, feature extraction, anomaly detection, and generative modeling.
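The encode-decode-reconstruct loop above can be sketched in a few lines of NumPy. This is a deliberately minimal linear autoencoder trained by plain gradient descent on the mean squared reconstruction error; practical autoencoders use a deep-learning framework, nonlinear activations, and real data, and the array shapes and learning rate here are illustrative choices, not fixed conventions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples with 8 features, compressed to a 3-dimensional bottleneck.
n_samples, input_dim, latent_dim = 200, 8, 3
X = rng.normal(size=(n_samples, input_dim))

# Encoder and decoder are each a single linear map (no nonlinearity, for brevity).
W_enc = rng.normal(scale=0.1, size=(input_dim, latent_dim))
W_dec = rng.normal(scale=0.1, size=(latent_dim, input_dim))

lr = 0.5
losses = []
for epoch in range(1000):
    Z = X @ W_enc          # encode: project into the latent space (bottleneck)
    X_hat = Z @ W_dec      # decode: reconstruct the input from the latent code
    loss = np.mean((X_hat - X) ** 2)   # mean squared reconstruction error
    losses.append(loss)

    # Gradients of the MSE loss with respect to both weight matrices.
    grad_out = 2.0 * (X_hat - X) / X.size
    grad_W_dec = Z.T @ grad_out
    grad_W_enc = X.T @ (grad_out @ W_dec.T)

    W_dec -= lr * grad_W_dec
    W_enc -= lr * grad_W_enc

print(f"initial loss: {losses[0]:.4f}, final loss: {losses[-1]:.4f}")
```

Because the bottleneck has fewer dimensions than the input, the network cannot simply copy its input; it must learn which directions in the data carry the most variance, which is exactly the "salient feature" pressure described above.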
