NVIDIA today shifted its autonomous-driving leadership into high gear.
At a press event kicking off CES 2016, we unveiled artificial-intelligence technology that will let cars sense the world around them and pilot a safe route forward.
Dressed in his trademark black leather jacket, speaking to a crowd of some 400 automakers, media and analysts, NVIDIA CEO Jen-Hsun Huang revealed DRIVE PX 2, an automotive supercomputing platform that processes 24 trillion deep learning operations a second. That’s 10 times the performance of the first-generation DRIVE PX, now being used by more than 50 companies in the automotive world.
The new DRIVE PX 2 delivers 8 teraflops of processing power, the equivalent of 150 MacBook Pros. And it's the size of a lunchbox, in contrast to the autonomous-driving technology in use today, which takes up the entire trunk of a mid-sized sedan.
“Self-driving cars will revolutionize society,” Huang said at the beginning of his talk. “And NVIDIA’s vision is to enable them.”
Volvo to Deploy DRIVE PX in Self-Driving SUVs
Huang announced that Volvo – known worldwide for safety and reliability – will be the first automaker to deploy DRIVE PX 2.
In the world’s first public trial of autonomous driving, the Swedish automaker next year will lease 100 XC90 luxury SUVs outfitted with DRIVE PX 2 technology. The technology will help the vehicles drive autonomously around Volvo’s hometown of Gothenburg, and semi-autonomously elsewhere.
DRIVE PX 2 has the power to harness a host of sensors to build a 360-degree view of the environment around the car.
“The rear-view mirror is history,” Jen-Hsun said.
Drive Safely, by Not Driving at All
Not so long ago, pundits had questioned the safety of technology in cars. Now, with Volvo incorporating autonomous vehicles into its plan to end traffic fatalities, that script has been flipped. Autonomous cars may be vastly safer than human-piloted vehicles.
Car crashes – an estimated 93 percent of them caused by human error – kill some 1.3 million people each year. And texting while driving now kills more American teenagers than drunk driving.
There’s also a productivity issue. Americans waste some 5.5 billion hours a year stuck in traffic, costing the U.S. about $121 billion, according to the Urban Mobility Report from Texas A&M. And inefficient use of roads wastes even larger sums spent on infrastructure.
Deep Learning Hits the Road
Self-driving solutions based on computer vision can provide some answers. But tackling the infinite permutations a driver needs to react to – stray pets, swerving cars, slashing rain, road construction crews – is far too complex a programming challenge.
Deep learning enabled by NVIDIA technology can address these challenges. A highly trained deep neural network – residing on supercomputers in the cloud – captures the experience of many tens of thousands of hours of road time.
Huang noted that a number of automotive companies are already using NVIDIA’s deep learning technology to power their efforts, seeing speedups of 30-40x in training their networks compared with other technology. BMW, Daimler and Ford are among them, along with innovative Japanese startups like Preferred Networks and ZMP. And Audi said it accomplished in four hours training that had taken two years with a competing solution.
NVIDIA’s end-to-end solution for deep learning starts with NVIDIA DIGITS, a deep learning training platform used to train deep neural networks by exposing them to data collected during those hours on the road. On the other end is DRIVE PX 2, which runs inference on the trained network to guide the car safely down the road. In the middle is NVIDIA DriveWorks, a suite of software tools, libraries and modules that accelerates development and testing of autonomous vehicles.
DriveWorks enables sensor calibration, acquisition of surround data, synchronization, recording and then processing streams of sensor data through a complex pipeline of algorithms running on all of the DRIVE PX 2’s specialized and general-purpose processors.
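To make that division of labor concrete, here is a minimal Python sketch of the flow described above: synchronize a batch of sensor frames, fuse them into a single surround view, run a trained network on the fused input, and hand the result to a simple path planner. All names, data structures and the toy "network" here are hypothetical illustrations, not NVIDIA's DriveWorks or DRIVE PX 2 APIs.

```python
# Illustrative only: a toy, DriveWorks-style flow with hypothetical names.
# Steps: synchronize sensor frames -> fuse them -> run a trained model -> plan a path.
import random
from dataclasses import dataclass

@dataclass
class SensorFrame:
    source: str          # "camera", "lidar", "radar", or "ultrasonic"
    timestamp: float     # capture time, in seconds
    data: list           # stand-in for raw sensor readings

def synchronize(frames, tolerance=0.05):
    """Keep only frames captured within `tolerance` seconds of the newest one."""
    latest = max(f.timestamp for f in frames)
    return [f for f in frames if latest - f.timestamp <= tolerance]

def fuse(frames):
    """Stitch per-sensor readings into a single surround view (here, simple concatenation)."""
    return [x for f in sorted(frames, key=lambda f: f.source) for x in f.data]

def infer(surround_view):
    """Stand-in for the trained deep neural network: returns an obstacle confidence per sector."""
    return [min(1.0, abs(v)) for v in surround_view]

def plan_path(obstacle_scores, threshold=0.5):
    """Steer toward the first sector whose obstacle confidence is below the threshold."""
    for sector, score in enumerate(obstacle_scores):
        if score < threshold:
            return f"steer toward sector {sector}"
    return "stop: no clear path"

if __name__ == "__main__":
    frames = [
        SensorFrame("camera", 10.00, [random.random() for _ in range(4)]),
        SensorFrame("lidar", 10.01, [random.random() for _ in range(4)]),
        SensorFrame("radar", 9.80, [random.random() for _ in range(4)]),  # stale frame, dropped
    ]
    decision = plan_path(infer(fuse(synchronize(frames))))
    print(decision)
```

On the real platform, each of these stages runs as an accelerated pipeline across DRIVE PX 2's specialized and general-purpose processors; the sketch only shows the order of operations, not the performance characteristics.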
During the event, Huang reminded the audience that machines are already beating humans at tasks once considered impossible for computers, such as image recognition. Systems trained with deep learning can now correctly classify images more than 96 percent of the time, exceeding what humans can do on similar tasks.
He used the event to show what deep learning can do for autonomous vehicles.
A series of demos drove this home, showing in three steps how DRIVE PX 2 harnesses a host of sensors – lidar, radar, cameras and ultrasonics – to understand the world around it, in real time, and plan a safe and efficient path forward.
The World’s Biggest Infotainment System
The highlight of the demos was what Huang called the world’s largest car infotainment system: an elegant block the size of a medium-sized bedroom wall, mounted with a long horizontal screen and a long vertical one.
While a third, larger screen showed the scene that a driver would take in, the wide demo screen showed how the car — using deep learning and sensor fusion — “viewed” the very same scene in real time, stitched together from its array of sensors. On its right, the huge portrait-oriented screen showed a highly precise map that marked the car’s progress.
It’s a demo that will leave an impression on an audience that’s going to hear a lot about the future of driving in the week ahead.
By Bob Sherbin on January 3, 2016