- Published 7 Nov 2017
- Reviewed 7 Nov 2017
Eyewire, a collaborative computational neuroscience program, is creating a 3D map of the human connectome by encouraging thousands of volunteers to help piece together the puzzle.
For millennia, humans looked to the night sky and used the stars to guide their way. And, for centuries, astronomers and physicists sought to understand the movements of the universe by gathering data and recording in great detail the positions of stars and planets.
Those scientists could predict the movement of these heavenly bodies “even though they didn’t know much about what was driving that motion,” says Alexandre Pouget, who heads the computational cognitive neuroscience laboratory at the University of Geneva. Understanding that, Pouget says, took Isaac Newton and later Albert Einstein “to put the universe into equations.”
Computational neuroscience is doing the same for the brain. Only on a greatly compressed timeline.
“Neuroscience as a whole is an astonishingly young field,” says Peter Dayan, director of the Gatsby Computational Neuroscience Unit at University College London. Indeed, if you google the number of scientific articles in the 20th century containing the word ‘neuron,’ you get a graph that’s basically flat — year after year scientists published roughly the same number of papers — until 1960, when the number just takes off.
Today, neuroscience includes a surfeit of subfields, which probe the function of the nervous system on multiple levels — from the biochemistry of neurotransmitters and their receptors to the molecular biology of synapses, the physiology of neurons, and the anatomy of neural circuits and the brain itself. All the while, this surge in neuroscience research is generating a massive amount of data.
“Computational neuroscience complements these other fields and serves as a bridge between them,” says Terence Sejnowski, director of the Computational Neurobiology Laboratory and Francis Crick Professor at the Salk Institute and Distinguished Professor at UCSD. “It looks at the brain from the perspective of information. Most areas of neuroscience are reductionist. They take things apart into smaller pieces and try to explain the mechanisms.”
While that approach has yielded many insights into neural mechanisms, Sejnowski points out, “it’s very difficult to put something together after you’ve taken it apart, to capture the complexity of how the pieces work together. Computational neuroscience is, by its nature, synthetic. It involves integrating all the pieces and observations, quantitatively analyzing the data, and building models.”
Those models attempt to explain how the brain works: for example, how you make decisions, analyze visual images, turn sound into speech, and reason about things.
These models, says Pouget, are then used to guide research. “Once you have a model, you use it to make predictions for what should happen in the next set of experiments. You test the model to confirm or disprove it.”
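To make the model-predict-test loop Pouget describes concrete, here is a minimal sketch using a leaky integrate-and-fire neuron — a textbook-style toy model, not one taken from any lab mentioned in the article. All parameter values (membrane time constant, threshold, and so on) are illustrative assumptions.

```python
def lif_spike_count(input_current, duration_ms=1000.0, dt=0.1,
                    tau=20.0, v_rest=-65.0, v_thresh=-50.0,
                    v_reset=-65.0, resistance=10.0):
    """Simulate a leaky integrate-and-fire neuron with a constant
    input current (illustrative units) and return its spike count."""
    v = v_rest
    spikes = 0
    for _ in range(int(duration_ms / dt)):
        # Membrane potential leaks toward rest while the input drives it up.
        v += (-(v - v_rest) + resistance * input_current) * (dt / tau)
        if v >= v_thresh:  # threshold crossed: record a spike, then reset
            spikes += 1
            v = v_reset
    return spikes

# The model *predicts* that firing rate rises with input current --
# a prediction an experimenter could then test against real recordings.
for current in (1.0, 2.0, 3.0):
    print(current, lif_spike_count(current))
```

Running the loop shows the testable prediction: below a certain current the neuron never reaches threshold and stays silent, and above it the spike count grows with the drive. Comparing such predicted firing rates against recorded ones is exactly the confirm-or-disprove cycle described above.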
Such model building, Pouget notes, is standard practice in all neuroscience laboratories. “But given the increasing complexity we’re facing in neuroscience, it’s becoming clear that we need theoreticians with strong backgrounds in mathematics, physics, and statistics to help us develop more abstract, global models for how the brain works—general models that are not tied to a particular data set or experimental lab.”
Scientists from the Salk Institute and the University of Texas created this computer-generated model of brain tissue from the hippocampus. Using this precise model, their research revealed that the brain’s capacity for memory is up to 10 times greater than scientists previously believed.
This is the raison d’être of computational neuroscience. “Computational neuroscience is really about providing the theoretical backbone we can use to design new experiments, to help us understand and synthesize data into a coherent framework, and even to design artificial learning systems inspired by the brain,” says Pouget.
Converting information into understanding is one of the objectives of — and the rationale behind — the BRAIN Initiative, a public-private partnership aimed at developing the tools and approaches needed to formulate a coherent picture of how the brain works, says Sejnowski, who served as a member of the Initiative’s advisory committee for NIH. And the success of the endeavor will hinge in large part on the integration of theoretical and experimental approaches — a point that Sejnowski says “comes through clearly” in the committee’s report, BRAIN 2025.
“Computational neuroscience is now really an integral part of neuroscience,” he says, “and it’s going to be increasingly important for analyzing data and building conceptual frameworks as new techniques — many funded by the BRAIN Initiative — come online.”
“Because of the BRAIN Initiative, the technology is just exploding and new discoveries are being made every day,” says Sejnowski. “There’s just so much excitement in the air and it’s all happening before our eyes. It’s like a Golden Age of neuroscience.”
But transforming all that data into working models of neural circuits — from the level of synapses up to the whole brain — “will be hopeless without computational tools and techniques,” says Sejnowski.
Pouget wholeheartedly agrees. “The brain is the most complex computational device we know in the universe,” he says. “And unless we do the math, unless we use mathematical theories, there’s absolutely no way we’re ever going to make sense of it.”