Staring down a packed room at the Hyatt Regency Hotel in downtown San Francisco this March, Randy Gallistel gripped a wooden podium, cleared his throat, and presented the neuroscientists arrayed before him with a conundrum. “If the brain computed the way people think it computes,” he said, “it would boil in a minute.” All that information would overheat our CPUs.
Humans have been trying to understand the mind for millennia. And metaphors from technology, like cortical CPUs, are one of the ways we do it. Maybe it’s comforting to frame a mystery in the familiar. In ancient Greece, the brain was a hydraulic system, pumping the humors; in the 18th century, philosophers drew inspiration from the mechanical clock. Early 20th-century neuroscientists described neurons as electric wires or phone lines, passing signals like Morse code. And now, of course, the favored metaphor is the computer, with its hardware and software standing in for the biological brain and the processes of the mind.
In this technology-ridden world, it’s easy to assume that the seat of human intelligence is similar to our increasingly smart devices. But the reliance on the computer as a metaphor for the brain might be getting in the way of advancing brain research.
As Gallistel continued his presentation to the Cognitive Neuroscience Society, he described the problem with the computer metaphor. If memory works the way most neuroscientists think it does, by altering the strength of connections between neurons, then storing all that information would be far too energy-intensive, especially if memories are encoded as Shannon information: high-fidelity signals written in binary. Our engines would overheat.
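For readers who want the formal version, Shannon’s measure, which is standard information theory rather than anything specific to Gallistel’s model, puts a number on what a memory would cost to store:

```latex
I(p) = -\log_2 p               % bits carried by an outcome of probability p
H    = -\sum_i p_i \log_2 p_i  % average bits per symbol from a source
```

A fair coin flip, for example, carries exactly one bit, since $-\log_2(1/2) = 1$; the dispute on stage, in effect, was over whether the brain really pays, in energy, to store and maintain tallies of bits like these.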
Instead of throwing out the metaphor, though, scientists like Gallistel have massaged their theories, trying to square the brain’s biological reality with the demands of computation. Rather than question the assumption that the brain’s information is Shannon-like, Gallistel, a wiry emeritus professor at Rutgers, devised an alternative hypothesis: Shannon information could be stored as molecules inside the neurons themselves. Chemical bits, he argued, are cheaper than synapses. Problem solved.
This patchwork method is standard procedure in science: researchers fill in the holes in their theories as problems and evidence present themselves. But adherence to the computer metaphor might be getting out of hand, leading to all sorts of shenanigans, especially in the tech world.
“I think the brain-as-a-computer metaphor has led us astray a little bit,” says Floris de Lange, a cognitive neuroscientist at the Donders Institute in the Netherlands. “It makes people think that you can completely separate software from hardware.” That assumption leads some scientists, mind-body dualists, to argue that we won’t learn much by studying the physical brain.
Recently, neuroscientists set out to demonstrate that current techniques for studying the brain wouldn’t help much with understanding how the mind works. They took a crack at analyzing some hardware, a microprocessor running Donkey Kong, in hopes of elucidating the software, using only techniques like connectomics and electrophysiology. They couldn’t find much beyond the circuit’s off switch. Analyzing the hardware won’t give you insight into the software, QED.
But the Donkey Kong study was framed the wrong way. It assumed that what is true for a computer chip is true for a brain. The mind and the brain are far more profoundly entangled than a computer chip and its software, though. Just look at the physical traces of our memories. Over time, those memories are written into our brains in spidery networks of neurons: software building new hardware, in a way. While working at MIT, Tomás Ryan visualized that entanglement with a method that uses fluorescent proteins to label the neurons that are active while a memory is forming. Using this tool, Ryan watched memory take hold physically in the brain over time.
Ryan took the podium directly after Gallistel. “We’ve been told that if we want to understand the brain, we have to approach it from a design or an engineering perspective,” he said. “Given that we know very little about how memory is stored, we don’t need to be quite so rigid.” Ryan, a clean-shaven neurobiologist who just started his lab at Trinity College Dublin, conceded that the brain probably stores information, but Shannon information? Wrong. In molecules? Wrong as well.
Instead, Ryan displayed a slide of a satellite photo of the city of Berlin, lit at night. This was his analogy for how memory works: Not molecular bits in a cranial computer, but streetlamp infrastructure.
Looking at a recent photo of Berlin from space, you can tell East and West Berlin apart, almost 30 years after the Wall was torn down. That’s because the streetlamp infrastructure in the two halves of the city remains different to this day: West Berlin streetlamps use bright white mercury bulbs, while East Berlin uses tea-stained sodium vapor bulbs. “It’s not because they haven’t changed the lightbulbs since 1989,” says Ryan. “It’s because the setup was already there.” Even though the divide is gone, the memory of Berlin’s history is still visible in the structure of the city.
Our brains might form memories in that same way, creating a memory structure (connections between specific cells) and then maintaining that structure even as its pieces are replaced over a lifetime. The hardware is more entangled with the software because the software changes the hardware, modifying the connections as a memory takes shape. This is just a hypothesis, but a compelling one given Ryan’s data. He has found that even when rodents with a model of Alzheimer’s disease seem to forget their memories, those memories are still physically present in the brain and can be recalled artificially. It’s only the means of accessing them that’s been lost.
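To make the structure-over-substance idea concrete, here is a toy sketch in Python, an illustration of the logic rather than anything from Ryan’s lab; the Cell class and the readout function are invented for the example. A “memory” stored purely as a wiring pattern between cells survives even when every cell is swapped out, as long as each new cell inherits the old one’s connections.

```python
# Toy illustration: a memory held as a wiring pattern between cells
# persists through complete turnover of the cells themselves.

class Cell:
    def __init__(self, label):
        self.label = label  # stands in for a cell's place in the circuit


def readout(wiring):
    """The stored pattern is simply which places connect to which."""
    return sorted((a.label, b.label) for a, b in wiring)


# Encode a memory: four cells wired in a chain, A -> B -> C -> D.
cells = [Cell(label) for label in "ABCD"]
wiring = list(zip(cells, cells[1:]))
original_pattern = readout(wiring)

# Biological turnover: every cell is eventually replaced by a new one,
# but each replacement inherits its predecessor's connections.
replacements = {old: Cell(old.label) for old in cells}
wiring = [(replacements[a], replacements[b]) for a, b in wiring]

# The parts are all new, yet the readable memory is unchanged.
assert readout(wiring) == original_pattern
print(readout(wiring))  # [('A', 'B'), ('B', 'C'), ('C', 'D')]
```

Nothing about any individual component needs to last; the pattern does the remembering, which is the sense in which Berlin’s streetlamps still carry the memory of the Wall.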
Plus, what’s stored in that memory structure wouldn’t be limited to Shannon information, which by definition is high-fidelity. “Before we had digital computers we had analog computers, before that we had writing, we had painting, there were many ways of communicating information,” Ryan says, some fuzzier than others. Just because the most advanced human-made mode of information storage and communication happens to be binary right now doesn’t mean that’s how our brains evolved to work.
On the other hand, using tech as a metaphor for the brain may have had the unintended consequence of inspiring creative computer algorithms. As scientists learn more about the brain’s operation, coders are co-opting those discoveries. Artificial intelligence algorithms for object recognition borrow from the visual cortex, analyzing images using multi-layered networks with edge-detection filters much like the ones discovered in cat brains in the 1960s. “That has really made the difference between algorithms that didn’t function very well at all, for decades, and now, finally, methods that are quite good at recognizing objects,” says de Lange. If we make computers in our own image, perhaps someday they will become a good metaphor for the brain.
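As a rough sketch of what those borrowed edge-detection filters do, here is a minimal example in plain numpy; the EDGE_KERNEL and the hand-rolled convolve2d are illustrative stand-ins for the first layer of an object-recognition network, which in practice learns its filters rather than having them written by hand.

```python
import numpy as np

# A Sobel-style kernel that responds to vertical edges: dark-to-bright
# transitions along the horizontal axis produce large responses.
EDGE_KERNEL = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide the kernel over the image and sum the element-wise products,
    the same basic operation a convolutional layer applies to its input."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A toy 8x8 "image": dark on the left half, bright on the right half.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

response = convolve2d(image, EDGE_KERNEL)
print(response)  # strongest responses line up with the vertical boundary
```

Filters that look very much like this one tend to emerge on their own in the early layers of trained networks, the modern echo of the cat-cortex findings de Lange is referring to.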