A brief history of Neural Networks, as they relate to Neuroscience

(according to Carl)

These days a standard neural network model consists of neurons with a real-valued threshold function, combined with some form of backpropagation error optimization for training. Because of their long history and recent successes in machine learning applications, this model is taken by many to be a plausible model for intelligent behavior in living organisms.
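To make this standard picture concrete, here is a minimal sketch in Python/NumPy of a single real-valued unit with one gradient-style weight update. The weights, inputs, target, and learning rate are all illustrative values, not taken from any particular network.

```python
import numpy as np

def sigmoid(z):
    """Bounded squashing nonlinearity used in classic model neurons."""
    return 1.0 / (1.0 + np.exp(-z))

# A single model "neuron": weighted sum of inputs passed through a nonlinearity.
# Weights w, bias b, and input x are illustrative values.
w = np.array([0.5, -1.2, 0.8])
b = 0.1
x = np.array([1.0, 0.3, -0.5])

y = sigmoid(w @ x + b)  # real-valued output, held for as long as we like

# One backpropagation-style gradient update toward a target output t,
# using squared error loss 0.5 * (y - t)**2.
t = 0.9
error = y - t
grad_w = error * y * (1 - y) * x  # chain rule through the sigmoid
w -= 0.1 * grad_w                 # learning rate 0.1 (arbitrary)
```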

Many neural network algorithms have drawn on biological inspiration. In particular, the hierarchical processing of visual features in cortex has inspired many network architectures.

Yann LeCun called these the family of "Multi-Stage Hubel-Wiesel Architectures", in reference to Hubel and Wiesel, the discoverers of orientation selectivity in primary visual cortex.

Many early hierarchical neural networks used hand-crafted features.

Recent progress in hierarchical convolutional neural networks has allowed the features in multiple layers to be tuned automatically.

What's the problem?

But the recent methods use computational units that look less and less like biological neurons: not only are the outputs real valued rather than spiking, but the input-output relations are derived from computational utility rather than from simplified biophysics. For example, the input-output function of many model units is a real-valued convolution operation, while for other units the original bounded sigmoid function is replaced with the unbounded rectified linear unit (ReLU).
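As a minimal illustration of that contrast, the following sketch compares the bounded sigmoid with the unbounded ReLU; the input values are arbitrary test points.

```python
import numpy as np

def sigmoid(z):
    # Bounded: output always lies in (0, 1), loosely analogous to a saturating rate.
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # Unbounded above: output grows linearly with input, with no biophysical ceiling.
    return np.maximum(0.0, z)

z = np.linspace(-5, 10, 7)
print(sigmoid(z))  # saturates near 1 for large z
print(relu(z))     # keeps growing with z
```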

And while the biophysics of computation in individual neurons is known to be rich and complex, the transformations in modern machine learning networks are not known to be within the repertoire of biological neurons.

The main reason these models do not seem realistic is that real neurons spike: the output is a binary event of short duration, in contrast to model neurons that hold a real-valued output for an indefinite period.

Many observers have noted problems with the conclusion that biological networks of neurons could implement a real-valued input-output function. While it was once believed that a rate code could implement the equivalent of a real-valued output in organic neurons, more recent observations have cast doubt on this. In particular, the speed of transmission in real decision making leaves no time for organic neurons to establish firing rates or to detect changes in those rates.
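To see why short time windows are a problem for rate codes, here is a minimal simulation sketch. It assumes Poisson spiking at an illustrative 50 Hz firing rate, and simply counts spikes in windows of different lengths to estimate the rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_rate(true_rate_hz, window_s):
    """Estimate a Poisson neuron's firing rate by counting spikes in a window."""
    count = rng.poisson(true_rate_hz * window_s)
    return count / window_s

true_rate = 50.0  # Hz, an illustrative cortical firing rate
for window in (0.010, 0.100, 1.000):  # 10 ms, 100 ms, 1 s
    estimates = [estimate_rate(true_rate, window) for _ in range(1000)]
    print(f"{window * 1000:6.0f} ms window: mean {np.mean(estimates):5.1f} Hz, "
          f"std {np.std(estimates):5.1f} Hz")
```

At a 10 ms window the expected count is only half a spike, so the rate estimate is dominated by noise; the window must grow much longer than the inter-spike interval before the estimate becomes reliable.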

At the same time, studies have also shown that relative spike timing can be highly precise, down to a time scale of milliseconds, in response to visual stimuli. These observations imply that a rate code is at best a partial explanation of real neural coding.

Spiking Neuron Models

This has led to a large body of research on spiking neuron models – models in which the neurons actually produce biologically realistic spikes.

One of the most studied coding methods in spiking neurons is spike time coding, in which the precise time of each spike relative to a baseline is taken as a real-valued output for the neuron. Latency coding is a scheme in which the delay to output after stimulus presentation increases for less salient stimuli.
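As an illustration, here is a minimal sketch of a latency code: a salience value in [0, 1] is mapped to a spike time, with more salient stimuli spiking sooner. The 5-100 ms bounds are illustrative assumptions, not measured values.

```python
import numpy as np

def latency_code(salience, t_min=0.005, t_max=0.100):
    """Map salience in [0, 1] to a spike time: stronger stimuli spike sooner.
    The bounds t_min/t_max (5-100 ms) are illustrative, not measured values."""
    salience = np.clip(salience, 0.0, 1.0)
    return t_max - salience * (t_max - t_min)

def decode(spike_time, t_min=0.005, t_max=0.100):
    """Invert the latency code to recover a real-valued output."""
    return (t_max - spike_time) / (t_max - t_min)

for s in (0.1, 0.5, 0.9):
    t = latency_code(s)
    print(f"salience {s:.1f} -> spike at {t * 1000:5.1f} ms -> decoded {decode(t):.2f}")
```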