Illustration: IBM
Brain-inspired computers have tickled the public imagination ever since Arnold Schwarzenegger's character in “Terminator 2: Judgment Day” uttered: “My CPU is a neural net processor; a learning computer.” Today, IBM researchers backed by U.S. military funding unveiled a new computer chip that they say could revolutionize everything from smartphones to smart cars—and perhaps pave the way for neural networks to someday approach the computing capabilities of the human brain.
The IBM neurosynaptic computer chip consists of one million programmable neurons and 256 million programmable synapses conveying signals between the digital neurons. Each of the chip's 4,096 neurosynaptic cores includes the entire computing package: memory, computation, and communication. The cores all operate in parallel using "event-driven" computing, similar to the signal spikes and cascades of activity that occur when human brain cells work in concert. That architecture helps bypass the bottleneck of traditional computing, in which program instructions and operation data cannot travel over the same route simultaneously.
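One way to picture a neurosynaptic core is as a small crossbar of input axons and output neurons, with local synaptic memory deciding which incoming spikes reach which neurons. The sketch below is a simplified, hypothetical model for illustration; the class name, thresholds, and leaky integrate-and-fire update are assumptions, not IBM's actual TrueNorth core design.

```python
# Simplified, hypothetical model of a single neurosynaptic core:
# a crossbar of input axons x output neurons with local synaptic memory.
# This is an illustrative sketch, not IBM's TrueNorth core logic.

import numpy as np

class NeurosynapticCore:
    def __init__(self, n_axons=256, n_neurons=256, threshold=64, leak=1, seed=0):
        rng = np.random.default_rng(seed)
        # Local synaptic memory: which axon connects to which neuron (binary here).
        self.synapses = rng.integers(0, 2, size=(n_axons, n_neurons))
        self.potential = np.zeros(n_neurons, dtype=np.int64)  # membrane potentials
        self.threshold = threshold
        self.leak = leak

    def step(self, spiking_axons):
        """Integrate incoming spikes, then fire any neuron that crosses threshold."""
        if len(spiking_axons):
            # Only active axons contribute work: event-driven integration.
            self.potential += self.synapses[spiking_axons].sum(axis=0)
        self.potential -= self.leak                       # constant leak per tick
        np.maximum(self.potential, 0, out=self.potential)
        fired = np.flatnonzero(self.potential >= self.threshold)
        self.potential[fired] = 0                         # reset fired neurons
        return fired                                      # output spikes to route onward

# Example: route a burst of input spikes through one core.
core = NeurosynapticCore()
print(core.step(spiking_axons=np.array([3, 17, 42, 200])))
```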
“We have not built a brain,” says Dharmendra Modha, chief scientist and founder of IBM’s Cognitive Computing group at IBM Research-Almaden. “But we have come the closest to creating learning function and capturing it in silicon in a scalable way to provide new computing capability that was not possible before.”
Such capability could enable new mobile device applications that emulate the human brain's ability to swiftly process information about new events or other changes in real-world environments, whether that involves recognizing familiar sounds or a certain face in a moving crowd. IBM envisions its new chips working together with traditional computing devices as hybrid machines, providing an added dose of brain-like intelligence for smart car sensors, cloud computing applications, or mobile devices such as smartphones. The chip's architecture was detailed in a paper published in the 7 August online issue of the journal Science.
With a total of 5.4 billion transistors, the computer chip, named TrueNorth, is one of the largest CMOS chips ever built. Yet the chip uses just 70 milliwatts while running and has a power density of 20 milliwatts per square centimeter, almost 1/10,000th the power density of most modern microprocessors. That brings the new chip's efficiency much closer to that of the human brain, which consumes an astoundingly low 20 watts, less than the average incandescent light bulb.
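As a quick sanity check, the quoted figures can be combined with simple arithmetic: 70 milliwatts spread over 20 milliwatts per square centimeter implies an active area of about 3.5 square centimeters, and the "1/10,000th" comparison implies a conventional power density on the order of 200 watts per square centimeter. The snippet below only recomputes those implied values from the numbers quoted above; it introduces no new data.

```python
# Back-of-envelope check of the quoted power figures (illustrative only).

chip_power_w = 70e-3          # 70 milliwatts while running
power_density_w_cm2 = 20e-3   # 20 milliwatts per square centimeter

# Die area implied by the two figures above.
implied_area_cm2 = chip_power_w / power_density_w_cm2
print(f"implied active area: {implied_area_cm2:.1f} cm^2")            # ~3.5 cm^2

# Conventional power density implied by the '1/10,000th' comparison.
conventional_density_w_cm2 = power_density_w_cm2 * 10_000
print(f"implied conventional density: {conventional_density_w_cm2:.0f} W/cm^2")  # ~200 W/cm^2
```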
“This is literally a supercomputer the size of a postage stamp, light like a feather, and low power like a hearing aid,” Modha says.
One reason IBM was able to minimize power usage is that the chip computes only when needed. Traditional computer chips have a clock that consumes power to trigger and coordinate all computational processes. The IBM chip's digital neurons instead work together asynchronously, activating only when triggered by signal spikes. IBM also cut power consumption by creating an on-chip network to interconnect all the neurosynaptic cores and by building the chip with a low-power process technology used for making mobile devices.
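The power advantage of event-driven operation can be seen by comparing a clocked scheme, which evaluates every core on every tick, with one that does work only when a spike actually arrives. The sketch below is a schematic illustration under assumed, hypothetical activity numbers; it is not how TrueNorth's circuits are implemented.

```python
from collections import deque

# Schematic contrast between clock-driven and event-driven processing.
# Only the number of core updates matters here; this is an illustration,
# not TrueNorth's circuitry.

def clock_driven(n_cores, ticks):
    """Every core is evaluated on every clock tick, active or not."""
    return n_cores * ticks

def event_driven(spike_events):
    """Work happens only when a spike event arrives at a core."""
    queue = deque(spike_events)   # (tick, target_core) events
    work = 0
    while queue:
        queue.popleft()
        work += 1
    return work

# Sparse activity: 100 spike events spread over 1,000 ticks (hypothetical numbers).
spikes = [(t, t % 4096) for t in range(0, 1000, 10)]
print("clock-driven updates:", clock_driven(n_cores=4096, ticks=1000))  # 4,096,000
print("event-driven updates:", event_driven(spikes))                    # 100
```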
It's also a supercomputer that can easily scale up in size. IBM designed its chip architecture so that new neurosynaptic cores can simply be added within the chip. The chips themselves can be arranged in a repeatable 2-D tile pattern to create bigger machines; IBM has already tested that idea with a 16-chip configuration. That's the "blueprint of a scalable supercomputer," Modha says.
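The tiling works because every core on every chip can be assigned a coordinate in one flat 2-D grid, so a spike's destination is simply another grid address regardless of which chip it sits on. The routine below is a hypothetical addressing sketch; the grid sizes, field names, and X-then-Y routing are assumptions, not IBM's actual routing scheme.

```python
# Hypothetical addressing sketch for a 2-D tile of chips, each modeled as a
# 64x64 grid of cores (64 * 64 = 4,096). Layout and names are assumptions.

CORES_PER_SIDE = 64   # cores per chip edge in this sketch

def global_core_address(chip_x, chip_y, core_x, core_y):
    """Map (chip, core) coordinates onto one flat 2-D grid of cores."""
    return (chip_x * CORES_PER_SIDE + core_x,
            chip_y * CORES_PER_SIDE + core_y)

def route_hops(src, dst):
    """Grid hops a spike travels under simple X-then-Y (Manhattan) routing."""
    (sx, sy), (dx, dy) = src, dst
    return abs(dx - sx) + abs(dy - sy)

# A 4x4 tile of chips (16 chips) behaves like one 256x256 grid of cores.
src = global_core_address(chip_x=0, chip_y=0, core_x=10, core_y=5)
dst = global_core_address(chip_x=3, chip_y=2, core_x=63, core_y=0)
print(src, dst, "hops:", route_hops(src, dst))
```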
Past brain-inspired neural networks have used a combination of analog and digital circuitry to represent individual neurons. IBM chose to represent its neurons entirely in digital form, which provided several advantages. (At least one other project, SpiNNaker, also relies on digital neurons.)
First, the digital approach allowed IBM engineers to avoid the physical problems of analog circuits, such as variations in the manufacturing process and temperature fluctuations. Second, it provided a "one-to-one equivalence" between software and hardware, which allowed the IBM software team to build applications on a simulator even before the physical chip had been designed and tested; those applications ran without problems on the finished chip. Third, the lack of analog circuitry allowed the IBM team to dramatically shrink the size of its circuits. (IBM fabricated the chip using Samsung's 28-nm process technology, which is typical for manufacturing mobile-device chips.)
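The one-to-one software/hardware equivalence follows from the neurons being purely digital: the same integer update, run in a simulator or in silicon, must produce exactly the same spike train, with no analog drift to account for. The fragment below illustrates that determinism with an assumed integer leaky integrate-and-fire rule, which is not TrueNorth's actual neuron model.

```python
# Why a digital neuron gives bit-exact software/hardware equivalence:
# the update is pure integer arithmetic, so any two correct implementations
# produce identical spike trains. The update rule here is an assumed simple
# integer leaky integrate-and-fire, not TrueNorth's neuron model.

def digital_neuron(weights, spikes_in, threshold=100, leak=2):
    """Deterministic integer neuron: same inputs always give the same output spikes."""
    v, out = 0, []
    for t, row in enumerate(spikes_in):
        v += sum(w for w, s in zip(weights, row) if s)   # integrate active inputs
        v = max(v - leak, 0)                             # leak toward zero
        if v >= threshold:
            out.append(t)                                # emit a spike
            v = 0                                        # reset
    return out

# Two independent runs (standing in for 'simulator' and 'chip') agree exactly.
weights = [12, 7, 30]
spikes_in = [[t % 2, t % 3 == 0, t % 7 == 0] for t in range(50)]
assert digital_neuron(weights, spikes_in) == digital_neuron(weights, spikes_in)
print(digital_neuron(weights, spikes_in))
```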
IBM's new chip represents the culmination of a decade of Modha's personal research and almost six years of funding from the U.S. Defense Advanced Research Projects Agency (DARPA). Modha currently leads IBM's effort under DARPA's SyNAPSE program, a global effort that has committed US $53 million to making learning computers since 2008.
Now IBM has built an entire ecosystem of hardware and software around its new chip, including a new programming language and a curriculum to teach coders everything they need to know. The company is also reaching out to potential customers, universities, government agencies, and IBM employees to fully explore the commercial applications of its chip technology.
“Our long-term end goal is to build a ‘brain in a box’ with 100 billion synapses consuming 1 kilowatt of power,” Modha says. “In the near future, we’ll be looking at multiple things for empowering smartphones, mobile devices and cloud services with this technology.”
ORIGINAL: IEEE Spectrum
By Jeremy Hsu
Posted 7 Aug 2014 | 18:00 GMT