What has neuroscience given to deep learning?

Neuroscience has given us a reason to hope that a single deep learning algorithm can solve many different tasks.

Neuroscientists have found that ferrets can learn to “see” with the auditory processing region of their brain if their brains are rewired to send visual signals to that area (Von Melchner et al., 2000).

This suggests that much of the mammalian brain might use a single algorithm to solve most of the different tasks that the brain solves.

Before this hypothesis, machine learning research was more fragmented, with different communities of researchers studying natural language processing, vision, motion planning and speech recognition.

Today, these application communities are still separate, but it is common for deep learning research groups to study many or even all of these application areas simultaneously.

We are able to draw some rough guidelines from neuroscience.

The basic idea of having many computational units that become intelligent only via their interactions with each other is inspired by the brain.

The Neocognitron (Fukushima, 1980) introduced a powerful model architecture for processing images that was inspired by the structure of the mammalian visual system and later became the basis for the modern convolutional network (LeCun et al., 1998b).

Most neural networks today are based on a model neuron called the rectified linear unit.

The original Cognitron (Fukushima, 1975) introduced a more complicated version that was highly inspired by our knowledge of brain function.

The simplified modern version was developed by incorporating ideas from many viewpoints, with Nair and Hinton (2010) and Glorot et al. (2011a) citing neuroscience as an influence, and Jarrett et al. (2009) citing more engineering-oriented influences.
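As an illustrative aside (not part of the book's text), the rectified linear unit simply computes g(z) = max(0, z); a minimal sketch in Python, assuming NumPy:

import numpy as np

def relu(z):
    # Rectified linear unit: pass positive inputs through unchanged, clamp negatives to zero.
    return np.maximum(0.0, z)

# Example: relu(np.array([-2.0, 0.0, 3.0])) -> array([0., 0., 3.])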

While neuroscience is an important source of inspiration, it need not be taken as a rigid guide.

We know that actual neurons compute very different functions than modern rectified linear units, but greater neural realism has not yet led to an improvement in machine learning performance.

Also, while neuroscience has successfully inspired several neural network architectures, we do not yet know enough about biological learning for neuroscience to offer much guidance for the learning algorithms we use to train these architectures.

Media accounts often emphasize the similarity of deep learning to the brain.

While it is true that deep learning researchers are more likely to cite the brain as an influence than researchers working in other machine learning fields such as kernel machines or Bayesian statistics, one should not view deep learning as an attempt to simulate the brain.

Modern deep learning draws inspiration from many fields, especially applied math fundamentals like linear algebra, probability, information theory, and numerical optimization.

While some deep learning researchers cite neuroscience as an important source of inspiration, others are not concerned with neuroscience at all.

What is «computational neuroscience»?

Goodfellow, Bengio, Courville - «Deep Learning» (2016)

See also: