
Deep neural networks
DNNs are neural networks with deeper and more complex architectures: they have a large number of neurons in each layer and many connections between layers. The computation in each layer transforms the representation produced by the preceding layer into a slightly more abstract representation. Here, we will use the term DNN to refer specifically to the MLP, the Stacked Auto-Encoder (SAE), and the Deep Belief Network (DBN).
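To make the layer-by-layer transformation concrete, the following is a minimal sketch of an MLP in PyTorch; the layer sizes and activation function are illustrative choices, not values prescribed by the text:

import torch
import torch.nn as nn

# A small MLP: each hidden layer re-represents the output of the
# previous layer at a slightly higher level of abstraction.
mlp = nn.Sequential(
    nn.Linear(784, 256),  # input features -> first hidden representation
    nn.ReLU(),
    nn.Linear(256, 64),   # first hidden -> second, more abstract representation
    nn.ReLU(),
    nn.Linear(64, 10),    # final representation -> class scores
)

x = torch.randn(32, 784)  # a batch of 32 illustrative inputs
logits = mlp(x)           # forward pass through all layers
print(logits.shape)       # torch.Size([32, 10])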
SAEs and DBNs use AEs and Restricted Boltzmann Machines (RBMs), respectively, as the building blocks of their architectures. The main difference between these models and the MLP is that training proceeds in two phases: unsupervised pre-training followed by supervised fine-tuning.

SAE and DBN using AEs and RBMs, respectively
During unsupervised pre-training, shown in the preceding diagram, the layers are stacked sequentially and trained one layer at a time as AEs or RBMs, using unlabeled data. Afterwards, during supervised fine-tuning, an output classifier layer is stacked on top and the complete network is optimized by retraining on labeled data.
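The following is a minimal sketch of this two-phase procedure for an SAE, assuming PyTorch; the layer sizes, epoch counts, learning rate, and random stand-in data are illustrative assumptions:

import torch
import torch.nn as nn

torch.manual_seed(0)
unlabeled = torch.randn(512, 784)      # stand-in for unlabeled data
labeled_x = torch.randn(256, 784)      # stand-in for labeled inputs
labeled_y = torch.randint(0, 10, (256,))

sizes = [784, 256, 64]                 # illustrative layer sizes
encoders = []
inputs = unlabeled

# Phase 1: unsupervised pre-training, one shallow AE per layer.
for d_in, d_out in zip(sizes[:-1], sizes[1:]):
    enc = nn.Sequential(nn.Linear(d_in, d_out), nn.Sigmoid())
    dec = nn.Linear(d_out, d_in)       # decoder is used only during pre-training
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(20):                # a few illustrative epochs
        recon = dec(enc(inputs))
        loss = nn.functional.mse_loss(recon, inputs)
        opt.zero_grad()
        loss.backward()
        opt.step()
    encoders.append(enc)
    with torch.no_grad():
        inputs = enc(inputs)           # codes become the input to the next AE

# Phase 2: supervised fine-tuning of the stacked encoders plus a classifier layer.
model = nn.Sequential(*encoders, nn.Linear(sizes[-1], 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(20):
    logits = model(labeled_x)
    loss = nn.functional.cross_entropy(logits, labeled_y)
    opt.zero_grad()
    loss.backward()
    opt.step()

For a DBN, the same outline would apply, except that each layer is pre-trained as an RBM (typically with contrastive divergence) rather than as a shallow AE.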