Theoretical neuroscientist Brian DePasquale uses computer models to study how the human brain works.
How have computers helped you learn?
Machines — even computers — seem simple when compared to the complicated human brain. It might seem surprising that “machine learning” is helping scientists understand that brain. Read on to find out how.
Math has taught us a lot about how to understand the physical world. But what can it teach us about our own brains, the very structures that make us who we are? That’s what theoretical neuroscientist Brian DePasquale is trying to find out.
The human brain is made up of billions of cells called neurons, linked by trillions of connections, so it’s a daunting task to use math to understand what a brain might do. But DePasquale has a strategy. He breaks the brain’s structure down into neurocircuits, which are smaller groups of neurons and connections. Then, he comes up with simple equations to try to understand how the circuits will function. But even then, sometimes the math gets too complicated, and that’s when he uses a computer model to help figure out the equations.
How does the model work? Picture yourself dropping a bouncing ball from five feet in the air. If you just drop it, it will hit the ground and start bouncing before you can even begin to figure out how fast it’s going. But if you take snapshots – photographing the ball a very short time after you let go of it, and then again the same short time later – you can work out how fast and how far it falls. Maybe in the first snapshot it’s 4.9 feet in the air. Maybe in the second it’s down to 4.7 feet. A series of snapshots: that’s how DePasquale uses the computer model to figure out just what the neurons are doing and how they work.
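Here’s a rough sketch of that snapshot idea in code (not DePasquale’s actual model, just an illustration with an assumed time step): the program nudges time forward in small increments and records the ball’s height and speed at each step.

```python
# A toy illustration of the "snapshots" idea: step time forward in small
# increments and record the ball's height and speed at each step.
# (Illustrative only; the time step dt is an assumed value.)

g = 32.2       # gravitational acceleration, in feet per second squared
dt = 0.05      # time between snapshots, in seconds

height = 5.0   # starting height, in feet
speed = 0.0    # the ball starts at rest
t = 0.0

for snapshot in range(5):
    speed += g * dt       # the ball speeds up a little between snapshots
    height -= speed * dt  # and drops a little farther each time
    t += dt
    print(f"snapshot {snapshot + 1}: t = {t:.2f} s   "
          f"height = {height:.2f} ft   speed = {speed:.2f} ft/s")
```

With a small time step like this, the first two printed heights come out close to the 4.9 and 4.7 feet in the example above.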
DePasquale’s models simulate the brain: they imitate it, in a simplified way. In his words, “most physical [and] biological processes are too complicated to understand in reality, so instead, we simulate those processes in order to understand the real thing.”
As he looks at the models, cleverly designed to approximate the real brain, DePasquale can see that neuroscience is truly “in between a science and an art.” Indeed, there are many different disciplines that go into this subject. He has colleagues who are experts in biology, medicine, genetics, and (for making those models) computer science. Sometimes they come from such disparate fields that they “don’t speak the same language,” but they are all united in their quest to crack the code on a subject that is relevant and fascinating to all of them – the very brains they use to understand their different fields. Their diversity means that, just like neurons, they can build connections and make a whole greater than their individual parts, solving a mystery greater than any one scientist could make sense of alone.
DePasquale himself comes from math and physics, giving him expertise in using and solving equations and understanding the mechanics of just how those neurons work. He didn’t always know he’d use those skills for theoretical neuroscience. When he was in high school, he hadn’t yet discovered his passion for math and science. He was good at both, but what he was most interested in was being a jazz pianist.
From the beginning, Brian was fascinated by the ways things worked. He enjoyed music theory, which let him understand just what music was and how it functioned. And he liked a challenge, too. That was why he switched to physics in college: everything else just seemed too easy. He also studied philosophy, which got Brian thinking about “why people do the things that they do.” The answers usually seemed too vague, however. Wasn’t there a better, more scientific way to answer that question? He loved the elegance and challenge of physics, but it didn’t quite captivate him enough.
He was at the University of California, San Diego in a summer exchange program (Research Experiences for Undergraduates) sponsored by the National Science Foundation when he met them: physics students like himself who had branched out and were now trying to study the neurocircuits that let birds sing songs. Like DePasquale, they loved physics for its elegance but searched beyond it. Now they had discovered neuroscience, which was the perfect subject for them – and, as it turned out, the perfect subject for him as well.
DePasquale discovered that he craved variation and was not satisfied focusing on one narrow field of study. “Neuroscience gives me that variation in a single scientific pursuit because of its multidisciplinary nature.”
In his years as a neuroscientist, DePasquale has learned that the brain is complicated and fascinating, and it doesn’t always play by the rules of the math he learned in college. For a start, the brain has a durability that seems impossible. Have you ever had an iPhone or computer that broke? It might have been only one small part that stopped working, but when it did, the whole device became useless. That’s not how brains work. People can suffer major strokes or other serious brain injuries and still survive, and their brains can often recover enough to function remarkably well again.
Not everything strange about the brain is that helpful, though. For DePasquale, the most challenging part of his work is figuring out just how the math of the brain works. The brain doesn’t behave like the smooth systems mathematicians usually use to describe the rest of the world. Most familiar functions graph as lines and curves, but when you map brain activity onto a graph it looks like something different and much harder to work with: a “step function,” also known as a “Heaviside function.”
Imagine that someone is pushing on your shoulder with a small amount of force. Your brain gives you a signal to let you feel the pressure. Just how does the brain do that? A neuron in the brain sharply changes in voltage, an event known as a spike. For that small force on your shoulder, there might be three spikes. DePasquale can measure the spikes by placing a metal wire right next to the neuron. When there is more force on your shoulder, the neuron might give off five spikes. And when there’s even more force, it might emit seven spikes. There’s no gradient — in other words, if you graphed the relationship of force (on the x-axis) vs. brain response (on the y-axis), you would see that the number of spikes doesn’t increase in a smooth line or curve. No amount of force will make the neuron emit four spikes, or six. It just jumps up suddenly, in discrete steps. Such a system seems unwieldy. “If I were to build a brain, I would not do it this way,” DePasquale says. And until recently, most mathematical biologists avoided using step functions in their models.
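In code, that staircase behavior might look something like the toy sketch below. The spike counts (3, 5, 7) come from the shoulder example; the force thresholds are made up for illustration.

```python
# A toy step function: the response jumps in discrete steps instead of
# rising smoothly. The spike counts come from the shoulder example;
# the force thresholds are invented for illustration.

def spike_count(force):
    if force < 1.0:
        return 3   # a light push: three spikes
    elif force < 2.0:
        return 5   # a firmer push: five spikes
    else:
        return 7   # an even harder push: seven spikes

for f in [0.5, 0.9, 1.2, 1.8, 2.5]:
    print(f"force = {f:.1f}  ->  {spike_count(f)} spikes")
```

No matter what force you feed in, the output never lands on four or six spikes; it jumps straight from one step to the next.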
In designing a new way to use computers to help us understand brain behavior, DePasquale improved on existing methods. The models neuroscientists have relied on for the past 30 years rested on “the dirty secret of theoretical neuroscience”: they avoided step functions altogether. They described a brain that — unlike a real one — might actually spike four times when you press against someone’s shoulder. (A real brain might go from three spikes to five: four spikes would be impossible.) Scientists smoothed their computer models into differentiable functions, allowing them to use calculus to understand how the brain works — something that’s not easily done with step functions. The functions their models produce trace a clean “S” shape (called a sigmoid function), not the broken staircase of real brain spikes.
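To see the difference, compare a hard step with the smooth sigmoid curve those older models used in its place (again, an illustrative sketch rather than a real brain model):

```python
# Comparing a hard step with the smooth, differentiable sigmoid curve that
# older models substituted for it (illustrative only).
import math

def step(x):
    """Jumps straight from 0 to 1 at x = 0, like real spiking activity."""
    return 0.0 if x < 0 else 1.0

def sigmoid(x):
    """A smooth 'S'-shaped curve that calculus handles easily."""
    return 1.0 / (1.0 + math.exp(-x))

for x in [-2.0, -1.0, -0.1, 0.1, 1.0, 2.0]:
    print(f"x = {x:+.1f}   step: {step(x):.0f}   sigmoid: {sigmoid(x):.2f}")
```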
All models are simplified to some extent, not just the ones used in neuroscience. But you can always build more and more advanced models, marching toward the truth about just how brains work. For a more accurate picture of the brain, DePasquale decided to combine the old, “cheating” model of brain behavior with more accurate information, falling back on the old model’s solutions only when there was no other way to find an answer.
His newer model uses a method known as supervised learning, which grew out of an area of computer science called “machine learning.” When a computer program recognizes your face in a photo and suggests that you tag yourself on Facebook, it’s using supervised learning to do something that a normal computer can’t do – but a brain can. Just as you can train an animal, DePasquale trains the circuit, teaching it to produce spikes in a way that looks like a function he can understand. But unlike training an animal, this training is far more exact. DePasquale doesn’t just tell the circuit it’s doing something wrong: he shows it in detail the exact step it performed incorrectly and the exact neurons responsible. What he ends up with is a model of a neural circuit, one that works realistically but is simplified enough that he can approximate the math involved.
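To get a feel for what “training” means here, the toy sketch below adjusts a single connection weight until a simple unit produces the responses we want. It’s a bare-bones version of supervised learning, not DePasquale’s actual spiking-circuit method, and the inputs, targets, and learning rate are all made up for the example.

```python
# A bare-bones supervised-learning loop (illustrative only): nudge one
# connection weight until a simple unit produces the target responses.
# All numbers here are invented for the example.

inputs  = [1.0, 2.0, 3.0, 4.0]   # pretend stimulus values
targets = [2.0, 4.0, 6.0, 8.0]   # the responses we want the unit to give

weight = 0.1                     # start with a guess
learning_rate = 0.01

for epoch in range(200):
    for x, target in zip(inputs, targets):
        output = weight * x                    # the unit's current response
        error = output - target                # exactly how far off it was
        weight -= learning_rate * error * x    # nudge the weight to shrink the error

print(f"learned weight: {weight:.3f}")         # ends up close to 2.0
```

The key idea matches the article: the training signal doesn’t just say “wrong,” it says exactly how wrong, and exactly which connection to adjust.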
There’s a lot more to come for DePasquale and theoretical neuroscience, and he couldn’t be more excited about the challenge ahead and the knowledge to be gained. “We are pre-Newtonian in biology: we have almost no mathematical grounding in our field,” he notes. But thanks to DePasquale and other math-loving scientists, we are beginning to understand even the workings of our enormously complicated brains.