What is «greedy layer-wise pretraining»?

Greedy layer-wise pretraining is a strategy for initializing deep neural networks, introduced by Hinton et al. (2006) for deep belief networks and generalized by Bengio et al. (2007). Instead of training all layers jointly from random initialization, layers are trained one at a time, bottom-up: each new layer is fit as an unsupervised model (typically an RBM or an autoencoder) on the outputs of the already-trained layers below it, which stay frozen while it trains — hence «greedy». The stacked weights then initialize the full network, which is fine-tuned end-to-end on the supervised task. This helped with the vanishing-gradient problems of the mid-2000s; with modern initializations, ReLU activations, and normalization layers, it is rarely needed today.
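The idea can be sketched in plain NumPy. This is a minimal illustration, not a reference implementation: each layer is a tied-weight sigmoid autoencoder trained by gradient descent on squared reconstruction error, and the function names, dimensions, and hyperparameters are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_layer(X, hidden_dim, epochs=50, lr=0.1):
    """Train one tied-weight autoencoder on X; return (W, b, codes)."""
    n, d = X.shape
    W = rng.normal(0.0, 0.1, size=(d, hidden_dim))
    b = np.zeros(hidden_dim)   # encoder bias
    c = np.zeros(d)            # decoder bias
    for _ in range(epochs):
        H = sigmoid(X @ W + b)        # encode
        R = H @ W.T + c               # decode with tied weights
        err = R - X                   # gradient of 0.5*||R - X||^2 w.r.t. R (per sample)
        dW_dec = H.T @ err / n        # decoder path gradient
        dZ = (err @ W / n) * H * (1.0 - H)   # backprop through sigmoid
        W -= lr * (dW_dec.T + X.T @ dZ)      # tied weights: sum both paths
        b -= lr * dZ.sum(axis=0)
        c -= lr * err.mean(axis=0)
    return W, b, sigmoid(X @ W + b)

# Greedy loop: each layer trains on the previous layer's codes,
# with all earlier layers frozen (they are never revisited).
X = rng.normal(size=(256, 20))        # toy unlabeled data
stack, codes = [], X
for hidden_dim in [16, 8, 4]:
    W, b, codes = pretrain_layer(codes, hidden_dim)
    stack.append((W, b))
# `stack` now holds layer-by-layer weights that would initialize
# a 20-16-8-4 network for supervised fine-tuning.
```

In the original recipe, the pretrained weights are only a starting point; a supervised head is attached on top and the whole stack is then fine-tuned jointly with backpropagation.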