At a lab near San Jose, IBM has built the digital equivalent of a rodent brain, roughly speaking. It spans 48 of the company's experimental TrueNorth chips, a new breed of processor that mimics the brain's biological building blocks.
DHARMENDRA MODHA WALKS me to the front of the room so I can see it up close. About the size of a bathroom medicine cabinet, it rests on a table against the wall, and thanks to the translucent plastic on the outside, I can see the computer chips and the circuit boards and the multi-colored lights on the inside. It looks like a prop from a ’70s sci-fi movie, but Modha describes it differently. “You’re looking at a small rodent,” he says.
He means the brain of a small rodent—or, at least, the digital equivalent. The chips on the inside are designed to behave like neurons—the basic building blocks of biological brains. Modha says the system in front of us spans 48 million of these artificial nerve cells, roughly the number of neurons packed into the head of a rodent.
Modha oversees the cognitive computing group at IBM, the company that created these “neuromorphic” chips. For the first time, he and his team are sharing their unusual creations with the outside world, running a three-week “boot camp” for academics and government researchers at an IBM R&D lab on the far side of Silicon Valley. Plugging their laptops into the digital rodent brain at the front of the room, this eclectic group of computer scientists is exploring the particulars of IBM’s architecture and beginning to build software for the chip dubbed TrueNorth.
Some researchers who got their hands on the chip at an engineering workshop in Colorado the previous month have already fashioned software that can identify images, recognize spoken words, and understand natural language. Basically, they’re using the chip to run “deep learning” algorithms, the same algorithms that drive the internet’s latest AI services, including the face recognition on Facebook and the instant language translation on Microsoft’s Skype. But the promise is that IBM’s chip can run these algorithms in smaller spaces with considerably less electrical power, letting us shoehorn more AI onto phones and other tiny devices, including hearing aids and, well, wristwatches.
“What does a neuro-synaptic architecture give us? It lets us do things like image classification at a very, very low power consumption,” says Brian Van Essen, a computer scientist at the Lawrence Livermore National Laboratory who’s exploring how deep learning could be applied to national security. “It lets us tackle new problems in new environments.”
The TrueNorth is part of a widespread movement to refine the hardware that drives deep learning and other AI services. Companies like Google and Facebook and Microsoft are now running their algorithms on machines equipped with GPUs (chips originally built to render computer graphics), and they’re moving towards FPGAs (chips you can program for particular tasks). For Peter Diehl, a PhD student in the cortical computation group at ETH Zurich and the University of Zurich, TrueNorth outperforms GPUs and FPGAs in certain situations because it consumes so little power.
The main difference, says Jason Mars, a professor of computer science at the University of Michigan, is that the TrueNorth dovetails so well with deep-learning algorithms. These algorithms mimic neural networks in much the same way IBM’s chips do, recreating the neurons and synapses in the brain. One maps well onto the other. “The chip gives you a highly efficient way of executing neural networks,” says Mars, who declined an invitation to this month’s boot camp but has closely followed the progress of the chip.
That said, the TrueNorth suits only part of the deep learning process—at least as the chip exists today—and some question how big an impact it will have. Though IBM is now sharing the chips with outside researchers, it’s years away from the market. For Modha, however, this is as it should be. As he puts it: “We’re trying to lay the foundation for significant change.”
The Brain on a Phone
Peter Diehl recently took a trip to China, where his smartphone didn’t have access to the net, an experience that cast the limitations of today’s AI in sharp relief. Without the internet, he couldn’t use a service like Google Now, which applies deep learning to speech recognition and natural language processing, because most of the computing takes place not on the phone but on Google’s distant servers. “The whole system breaks down,” he says.
Deep learning, you see, requires enormous amounts of processing power—processing power that’s typically provided by the massive data centers that your phone connects to over the net rather than locally on an individual device. The idea behind TrueNorth is that it can help move at least some of this processing power onto the phone and other personal devices, something that can significantly expand the AI available to everyday people.
To understand this, you have to understand how deep learning works. It operates in two stages.
- First, companies like Google and Facebook must train a neural network to perform a particular task. If they want to automatically identify cat photos, for instance, they must feed the neural net lots and lots of cat photos.
- Then, once the model is trained, it must actually execute the task. You provide a photo, and the system tells you whether it includes a cat. The TrueNorth, as it exists today, aims to facilitate that second stage.
Once a model is trained in a massive computer data center, the chip helps you execute the model. And because it’s small and uses so little power, it can fit onto a handheld device. This lets you do more at a faster speed, since you don’t have to send data over a network. If it becomes widely used, it could take much of the burden off data centers. “This is the future,” Mars says. “We’re going to see more of the processing on the devices.”
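To make that split concrete, here is a minimal sketch in plain Python with NumPy. It is not IBM’s code, just an illustration of the two stages: a toy classifier is trained with gradient descent (the data-center half), and only the frozen weights are handed to a small inference function (the half a chip like TrueNorth targets). The function name `classify_on_device` is a hypothetical label for illustration.

```python
import numpy as np

# --- Stage 1: training (the data-center half) ---
# A toy classifier: learn to separate two clusters of 2-D points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

w, b = np.zeros(2), 0.0
for _ in range(500):                      # plain gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))    # sigmoid predictions
    w -= 0.1 * (X.T @ (p - y)) / len(y)   # gradient of the log loss
    b -= 0.1 * np.mean(p - y)

# --- Stage 2: inference (the half a chip like TrueNorth targets) ---
# Only the frozen weights ship to the device: no training loop, no network.
def classify_on_device(point, w, b):
    return int((point @ w + b) > 0)

print(classify_on_device(np.array([0.9, 1.1]), w, b))    # -> 1
print(classify_on_device(np.array([-1.2, -0.8]), w, b))  # -> 0
```

The payoff of the split is that the inference half needs no training machinery and no network connection; shrinking that half is exactly what a small, low-power chip buys you.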
Neurons, Axons, Synapses, Spikes
Google recently discussed its efforts to run neural networks on phones, but for Diehl, the TrueNorth could take this concept several steps further. The difference, he explains, is that the chip dovetails so well with deep learning algorithms. Each chip mimics about a million neurons, and these can communicate with each other via something similar to a synapse, the connections between neurons in the brain.
The setup is quite different from what you find in chips on the market today, including GPUs and FPGAs. Whereas these chips are wired to execute particular “instructions,” the TrueNorth juggles “spikes,” much simpler pieces of information analogous to the pulses of electricity in the brain. Spikes, for instance, can show the changes in someone’s voice as they speak—or changes in color from pixel to pixel in a photo. “You can think of it as a one-bit message sent from one neuron to another,” says Rodrigo Alvarez-Icaza, one of the chip’s chief designers.
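TrueNorth’s actual neuron model is far more configurable, but a leaky integrate-and-fire neuron, a standard textbook model, captures the flavor of that quote: weighted one-bit spikes accumulate on a membrane potential, which leaks over time and emits its own one-bit spike when it crosses a threshold. The parameters below are illustrative assumptions, not the chip’s.

```python
import numpy as np

# A minimal leaky integrate-and-fire neuron: incoming one-bit spikes are
# weighted by synapses and accumulate on a membrane potential, which
# slowly leaks away; the neuron emits its own one-bit spike when the
# potential crosses a threshold. Leak and threshold values are
# illustrative, not TrueNorth's.
def run_neuron(input_spikes, weights, leak=0.1, threshold=1.0):
    potential, output = 0.0, []
    for spikes_t in input_spikes:               # one time step per row
        potential += np.dot(spikes_t, weights)  # integrate weighted spikes
        potential = max(potential - leak, 0.0)  # leak toward rest
        fired = potential >= threshold
        if fired:
            potential = 0.0                     # reset after firing
        output.append(int(fired))               # the one-bit message
    return output

# Three input lines spiking over five time steps.
spikes = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 1], [0, 0, 0], [1, 0, 1]])
print(run_neuron(spikes, weights=np.array([0.4, 0.3, 0.5])))  # -> [0, 1, 1, 0, 0]
```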
The upshot is a much simpler architecture that consumes less power. Though the chip contains 5.4 billion transistors, it draws about 70 milliwatts of power. A standard Intel computer processor, by comparison, includes 1.4 billion transistors and consumes about 35 to 140 watts. Even the ARM chips that drive smartphones consume several times more power than the TrueNorth.
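Taking those figures at face value, the gap is straightforward arithmetic; a quick back-of-the-envelope check:

```python
truenorth_watts = 0.070   # ~70 milliwatts, the figure cited above
cpu_watts = (35, 140)     # the Intel range cited above
for w in cpu_watts:
    print(f"A {w} W CPU draws ~{w / truenorth_watts:,.0f}x TrueNorth's power")
# -> ~500x at 35 W, ~2,000x at 140 W
```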
Of course, using such a chip also requires a new breed of software. That’s what researchers like Diehl are exploring at the TrueNorth boot camp, which began in early August and runs for another week at IBM’s research lab in San Jose, California. In some cases, researchers are translating existing code into the “spikes” that the chip can read (and back again). But they’re also working to build native code for the chip.
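What might translating existing code into spikes look like? One common encoding in neuromorphic work, though not necessarily the scheme the boot-camp attendees used, is rate coding: an ordinary value becomes a spike train whose firing rate is proportional to that value. A hypothetical sketch:

```python
import numpy as np

# Rate coding: translate an ordinary 8-bit pixel value into a spike train
# whose firing probability at each time step is proportional to intensity.
# This is one common neuromorphic encoding, not necessarily the exact
# scheme used at the boot camp.
def rate_code(pixel, steps=20, seed=1):
    rng = np.random.default_rng(seed)
    return (rng.random(steps) < pixel / 255).astype(int)

def decode(spike_train):
    # Recover an approximate intensity by counting spikes.
    return spike_train.mean() * 255

train = rate_code(200)       # bright pixel -> dense spike train
print(train)                 # e.g. [1 1 0 1 1 ...]
print(round(decode(train)))  # roughly 200; noisier with fewer steps
```

The trade-off is visible in the decoder: a short spike train is cheap but approximate, and longer trains recover the original value more faithfully.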
Parting Gift
Like these researchers, Modha discusses the TrueNorth mainly in biological terms. Neurons. Axons. Synapses. Spikes. And certainly, the chip mirrors such wetware in some ways. But the analogy has its limits. “That kind of talk always puts up warning flags,” says Chris Nicholson, the co-founder of deep learning startup Skymind. “Silicon operates in a very different way than the stuff our brains are made of.”
Modha admits as much. When he started the project in 2008, backed by $53.5 million in funding from Darpa, the research arm of the Department of Defense, the aim was to mimic the brain in a more complete way using an entirely different breed of chip material. But at one point, he realized this wasn’t going to happen anytime soon. “Ambitions must be balanced with reality,” he says.
In 2010, while laid up in bed with the swine flu, he realized that the best way forward was a chip architecture that loosely mimicked the brain—an architecture that could eventually recreate the brain in more complete ways as new hardware materials were developed. “You don’t need to model the fundamental physics and chemistry and biology of the neurons to elicit useful computation,” he says. “We want to get as close to the brain as possible while maintaining flexibility.”
This is TrueNorth. It’s not a digital brain. But it is a step toward a digital brain. And with IBM’s boot camp, the project is accelerating. The machine at the front of the room is really 48 separate machines, each built around its own TrueNorth processor. Next week, as the boot camp comes to a close, Modha and his team will separate them and let all those academics and researchers carry them back to their own labs, which span over 30 institutions on five continents. “Humans use technology to transform society,” Modha says, pointing to the room of researchers. “These are the humans.”
ORIGINAL: Wired
By CADE METZ
08.17.15