Thursday, May 19, 2016

Google Built Its Very Own Chips to Power Its AI Bots

GOOGLE HAS DESIGNED its own computer chip for driving deep neural networks, an AI technology that is reinventing the way Internet services operate.

This morning, at Google I/O, the centerpiece of the company’s year, CEO Sundar Pichai said that Google has designed an ASIC, or application-specific integrated circuit, that’s specific to deep neural nets. These are networks of hardware and software that can learn specific tasks by analyzing vast amounts of data. Google uses neural nets to identify objects and faces in photos, recognize the commands you speak into Android phones, and translate text from one language to another. This technology has even begun to transform the Google search engine.

Big Brains
Google calls its chip the Tensor Processing Unit, or TPU, because it underpins TensorFlow, the software engine that drives its deep learning services.

This past fall, Google released TensorFlow under an open-source license, which means anyone outside the company can use and even modify this software engine. It does not appear that Google will share the designs for the TPU, but outsiders can make use of Google’s own machine learning hardware and software via various Google cloud services.
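To give a sense of what that engine looks like from the outside, here is a minimal sketch using TensorFlow’s early Python API; the 784-in, 10-out layer shape and the zero-filled input are illustrative assumptions, not details from Google’s systems.

import tensorflow as tf

# Build a graph: a placeholder for a batch of flattened inputs,
# plus one dense softmax layer (illustrative shape, not Google's).
x = tf.placeholder(tf.float32, shape=[None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

# The graph is only a description; a session executes it on whatever
# hardware is available, whether CPUs, GPUs, or, inside Google, TPUs.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    probs = sess.run(y, feed_dict={x: [[0.0] * 784]})
    print(probs)  # uniform over 10 classes, since the weights start at zero

The point of the graph abstraction is that the same description can be executed on different devices, which is what lets hardware like the TPU slot in underneath Google’s services.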

Google is just one of many companies adding deep learning to a wide range of Internet services; Facebook, Microsoft, and Twitter are doing the same. Typically, these Internet giants drive their neural nets with graphics processing units, or GPUs, from chip makers like Nvidia. But some, including Microsoft, are also exploring field programmable gate arrays, or FPGAs, chips that can be programmed for specific tasks.
According to Google, on the massive hardware racks inside the data centers that power its online services, a TPU board fits into the same slot as a hard drive, and it delivers an order of magnitude better performance per watt for machine learning than other hardware solutions.

“TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation,” the company says in a blog post. “Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models and apply these models more quickly, so users get more intelligent results more rapidly.”
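The precision trade-off described in that quote can be made concrete with a toy example. The sketch below is generic 8-bit linear quantization in plain Python; it illustrates the general idea, not Google’s actual scheme, and the weight values are made up.

# Generic 8-bit linear quantization: map floats onto small integers over
# their observed range. Illustrative only; not Google's actual method.
def quantize(values, num_bits=8):
    lo, hi = min(values), max(values)
    levels = 2 ** num_bits - 1          # 255 steps for 8 bits
    scale = (hi - lo) / levels or 1.0   # guard against a constant input
    return [round((v - lo) / scale) for v in values], lo, scale

def dequantize(codes, lo, scale):
    return [lo + q * scale for q in codes]

weights = [0.13, -0.74, 0.02, 0.91, -0.33]  # hypothetical network weights
codes, lo, scale = quantize(weights)
print(codes)                         # integers that each fit in 8 bits
print(dequantize(codes, lo, scale))  # close to, not exactly, the originals

Storing each value in 8 bits instead of 32 means smaller, simpler arithmetic circuits, which is how a chip can pack more operations per second into the same silicon and power budget.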

This means, among other things, that Google is not using chips from companies like Nvidia, or at least is using fewer of them. It also shows that Google is more than willing to build its own chips, which is bad news for any chipmaker, most notably the world’s largest: Intel. Intel processors power the vast majority of the computer servers inside Google’s data centers, and the worry, for Intel, is that the Internet giant will one day design its own central processing units as well.

Google says it has been running TPUs for about a year, and that they were developed not long before that. After testing its first silicon, the company says, it had the chips running live applications inside its data centers within 22 days.

ORIGINAL: Wired
By Cade Metz
05.18.2016 
