Autoencoders
Autoencoders are an unsupervised learning technique in which we leverage neural networks for the task of representation learning.
Specifically, we design a neural network architecture that imposes a bottleneck in the network, forcing a compressed knowledge representation of the original input.
If the input features were each independent of one another, this compression and subsequent reconstruction would be a very difficult task. However, if some sort of structure exists in the data (i.e., correlations between input features), this structure can be learned and consequently leveraged when forcing the input through the network's bottleneck.
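To make the bottleneck concrete, here is a minimal sketch, assuming NumPy and a hypothetical toy dataset whose 4 observed features are generated from only 2 latent factors. A linear encoder/decoder pair is used purely for illustration; real autoencoders typically stack nonlinear layers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 4 observed features driven by only 2 latent
# factors, so the features are correlated and therefore compressible.
X = rng.normal(size=(256, 2)) @ rng.normal(size=(2, 4))

# Encoder and decoder weights; the 2-unit code is the bottleneck.
W_enc = rng.normal(scale=0.1, size=(4, 2))  # 4 features -> 2-D code
W_dec = rng.normal(scale=0.1, size=(2, 4))  # 2-D code -> 4 features

code = X @ W_enc       # compressed representation at the bottleneck
X_hat = code @ W_dec   # reconstruction of the original input

print(code.shape, X_hat.shape)  # the code has fewer dimensions than X
```

Because the 4 features here are driven by 2 underlying factors, a 2-dimensional code can in principle retain everything needed to reconstruct the input.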
By treating the original input as the target output, we can take an unlabeled dataset and frame it as a supervised learning problem.
This network can be trained by minimizing the reconstruction error, which measures the difference between the original input and its reconstruction.
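As a sketch of that training objective, the loop below runs gradient descent on the mean squared reconstruction error for the same kind of toy linear autoencoder as above (an assumption for illustration; a real model would use nonlinear layers and a framework optimizer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy correlated data: 4 features from 2 latent factors (assumption).
X = rng.normal(size=(256, 2)) @ rng.normal(size=(2, 4))

# Linear autoencoder with a 2-unit bottleneck: 4 -> 2 -> 4.
W_enc = rng.normal(scale=0.1, size=(4, 2))
W_dec = rng.normal(scale=0.1, size=(2, 4))

lr = 0.1
losses = []
for step in range(500):
    Z = X @ W_enc      # encode into the bottleneck
    X_hat = Z @ W_dec  # decode back to input space
    losses.append(((X - X_hat) ** 2).mean())  # reconstruction error (MSE)

    # Gradients of the MSE with respect to each weight matrix.
    G = 2.0 * (X_hat - X) / X.size
    grad_dec = Z.T @ G
    grad_enc = X.T @ (G @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(f"reconstruction error: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Note that the input `X` serves as both the network's input and its target, which is what lets us train on unlabeled data.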
The bottleneck is a key attribute of our network design; without the presence of an information bottleneck, our network could easily learn to simply memorize the input values by passing them along through the network.