This trend of increasing complexity has been pushed to its logical conclusion with the introduction of neural Turing machines (Graves et al., 2014) that learn to read from memory cells and write arbitrary content to memory cells.
Such neural networks can learn simple programs from examples of desired behavior.
For example, they can learn to sort lists of numbers given examples of scrambled and sorted sequences.
This self-programming technology is in its infancy, but in the future could in principle be applied to nearly any task.
Goodfellow, Bengio, and Courville, "Deep Learning" (2016)
A neural Turing machine (NTM) is a recurrent neural network model published by Alex Graves et al. in 2014.
NTMs combine the fuzzy pattern matching capabilities of neural networks with the algorithmic power of programmable computers.
An NTM has a neural network controller coupled to external memory resources, with which it interacts through attentional mechanisms. The memory interactions are differentiable end-to-end, which makes it possible to train the whole model with gradient descent.
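To make the read and write operations concrete, here is a minimal NumPy sketch of content-based addressing: a key emitted by the controller is compared against every memory row by cosine similarity, sharpened by a scalar beta, and normalized with a softmax; reads and writes then act on all rows in proportion to the resulting weights, so every step stays differentiable. This covers only the content-based part of the NTM addressing scheme (the full model adds interpolation, location-based shifting, and sharpening), and all names and shapes here are illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_address(memory, key, beta):
    # Cosine similarity between the key and every memory row,
    # sharpened by beta and normalized into soft attention weights.
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    return softmax(beta * sims)

def read(memory, weights):
    # Differentiable read: a convex combination of memory rows.
    return weights @ memory

def write(memory, weights, erase, add):
    # Differentiable write: each row is partially erased, then
    # incremented, in proportion to its attention weight.
    memory = memory * (1 - np.outer(weights, erase))
    return memory + np.outer(weights, add)

# Toy usage (illustrative sizes): 8 memory slots, each of width 4.
mem = np.random.randn(8, 4) * 0.1
key, beta = np.random.randn(4), 5.0
w = content_address(mem, key, beta)
r = read(mem, w)
mem = write(mem, w, erase=np.full(4, 0.5), add=np.ones(4))
```

Because every operation above is a smooth function of the controller's outputs, gradients flow from the loss back through the reads and writes into the controller's parameters.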
An NTM with a long short-term memory (LSTM) controller can infer simple algorithms such as copying, sorting, and associative recall from input and output examples alone.
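As an illustration of what "input and output examples" means in practice, the sketch below generates training pairs for the copy task: the input presents a random binary sequence followed by a delimiter, and the target requires the model to reproduce the sequence afterwards. The exact encoding here (an extra delimiter channel, zero padding during recall) is one common formulation and should be read as an assumption, not a fixed specification.

```python
import numpy as np

def copy_task_example(seq_len=5, width=8, rng=None):
    # Build one (input, target) pair for the copy task.
    if rng is None:
        rng = np.random.default_rng()
    seq = rng.integers(0, 2, size=(seq_len, width)).astype(float)
    inputs = np.zeros((2 * seq_len + 1, width + 1))
    inputs[:seq_len, :width] = seq       # present the sequence
    inputs[seq_len, width] = 1.0         # delimiter flag marks end of input
    targets = np.zeros_like(inputs)
    targets[seq_len + 1:, :width] = seq  # model must reproduce the sequence
    return inputs, targets
```

Training then consists of minimizing a per-timestep loss (e.g. binary cross-entropy) between the model's output and these targets; the model is never told *how* to copy, only shown what correct copying looks like.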