A brief history of Neural Networks, as they relate to Neuroscience
These days a standard neural network model consists of neurons with a real-valued threshold function, combined with some form of backpropagation-based error optimization for training. Because of their long history and recent successes in machine learning applications, this model is taken by many to be a plausible model for intelligent behavior in living organisms.
- Neural Networks for Pattern Recognition (book by Bishop, 1995)
- Deep learning in neural networks: An overview. (article in Neural networks, by Schmidhuber, 2015)
Many neural network algorithms have drawn on biological inspiration. In particular, the hierarchical processing of visual features in cortex has inspired many network architectures.
- Visual Processing in Cat extrastriate cortex (article in the Annual Review of Neuroscience, by Maunsell and Newsome, 1987)
Yann LeCun called these the family of "Multi-Stage Hubel-Wiesel Architectures", in reference to Hubel and Wiesel, the discoverers of orientation selectivity in primary visual cortex.
- Convolutional networks and applications in vision (article by LeCun et al. in Proceedings of the IEEE Symposium on Circuits and Systems, 2010)
- Receptive fields, binocular interaction and functional architecture in the cat's visual cortex (article by Hubel and Wiesel in the Journal of Physiology, 1962)
Many hierarchical neural networks used hand-crafted features.
- Neocognitron: A self-organizing neural network model for a mechanism of visual pattern recognition (article by Fukushima and Miyake in the journal Pattern Recognition, 1982)
- Spike-based strategies for rapid processing (article by Delorme and Thorpe in the journal Neural Networks, 2001)
- Object recognition with features inspired by visual cortex (article by Serre et al. in proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005)
Recent progress in hierarchical convolutional neural networks has allowed the features in multiple layers to be tuned automatically.
- Imagenet classification with deep convolutional neural networks (article by Krizhevsky et al., in the proceedings of the Neural Information Processing Society conference, 2012)
What's the problem?
However, these recent methods use computational units that look less and less like biological neurons: not only are the outputs real valued rather than spiking, but the input-output relations are derived from theory rather than from simplified biophysics. For example, the input-output function of many model units is a real-valued convolution operation, while in other units the original bounded sigmoid function is replaced with the unbounded rectified linear unit (ReLU).
- Rectified linear units improve restricted Boltzmann machines (in proceedings of the 27th International Conference on Machine Learning by Nair and Hinton, 2010)
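The contrast between the bounded sigmoid and the unbounded ReLU can be sketched in a few lines. These are the standard textbook definitions of the two activation functions, not code taken from any of the works cited above:

```python
import numpy as np

def sigmoid(x):
    """Bounded activation: output saturates toward 0 or 1 for large |x|."""
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    """Rectified linear unit: zero below threshold, unbounded linear above."""
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 2.0, 10.0])
print(sigmoid(x))  # values squashed into (0, 1)
print(relu(x))     # negative inputs clipped to 0; positive inputs grow without bound
```

The unbounded output of the ReLU is one of the features that separates these units from the saturating responses of biological neurons.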
But while the biophysics of computation in individual neurons is known to be rich and complex, the transformations used in modern machine learning networks are not known to be within the repertoire of biological neurons.
The main reason these models do not seem realistic to neuroscientists is that real neurons spike: their output is a brief, binary event, in contrast to model neurons that hold a real-valued output for an indefinite period. Many observers have noted problems with the conclusion that biological networks of neurons could implement a real-valued input-output function. While it was once believed that a rate code could implement the equivalent of a real-valued output in organic neurons, more recent observations have cast doubt on this. In particular, the speed of transmission during real decision making leaves no time for organic neurons to establish firing rates, let alone detect changes in those rates.
- Surfing a spike wave down the ventral stream (paper by Van Rullen and Thorpe in the journal Vision Research, 2002)
At the same time, studies have shown that relative spike timing in response to visual stimuli can be precise on a millisecond time scale. These observations imply that a rate code is at best a partial explanation of real neural coding.
- Temporal precision in the neural code and the timescales of natural vision. (paper by Butts et al. in the journal Nature, 2007)
Spiking Neuron Models
This has led to a large body of research on spiking neuron models: models in which the neurons actually produce biologically realistic spikes.
- Spiking Neuron Models, book by Gerstner and Kistler, 2002
- Simulation of networks of spiking neurons: a review of tools and strategies (article in Journal of Computational Neuroscience by Brette et al., 2007)
- Spiking neural networks: Principles and challenges. (in proceedings of ESANN by Gruning and Bohte, 2014)
One of the most studied coding schemes for spiking neurons is spike time coding, in which the precise time of each spike relative to a baseline is taken as a real-valued output of the neuron. Latency coding is a scheme in which the delay from stimulus presentation to the output spike increases for less salient stimuli.
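As an illustration only, a toy latency code might map stimulus salience to first-spike time with a linear rule. The function name, the linear mapping, and the 50 ms window below are assumptions made for this sketch, not details taken from the cited literature:

```python
import numpy as np

def latency_code(saliences, t_max=50.0):
    """Toy latency code: the more salient the stimulus, the earlier the spike.

    Assumes salience is normalized to [0, 1]; a maximally salient input
    (salience = 1) spikes immediately, and a minimally salient input
    spikes at t_max milliseconds.
    """
    s = np.clip(np.asarray(saliences, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - s)  # spike latency in ms

# A more salient stimulus yields a shorter latency.
print(latency_code([1.0, 0.5, 0.1]))
```

Under this scheme a downstream neuron can read out an analog quantity from a single spike per input, rather than from a firing rate averaged over time.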