Sunday, November 16, 2014

Robot Brains Catch Humans in 25 Years, Then Speed Right On By

The android Repliee S1, produced by Japan's Osaka University professor Hiroshi Ishiguro, performing during a dress rehearsal of Franz Kafka's "The Metamorphosis." Photographer: Yoshikazu Tsuno/AFP via Getty Images

We’ve been wrong about these robots before.

Soon after modern computers evolved in the 1940s, futurists started predicting that in just a few decades machines would be as smart as humans. Every year, the prediction seems to get pushed back another year. The consensus now is that it’s going to happen in ... you guessed it, just a few more decades.

There’s more reason to believe the predictions today. After research that’s produced everything from self-driving cars to Jeopardy!-winning supercomputers, scientists have a much better understanding of what they’re up against. And, perhaps, what we’re up against.

Nick Bostrom, director of the Future of Humanity Institute at Oxford University, lays out the best predictions of the artificial intelligence (AI) research community in his new book, “Superintelligence: Paths, Dangers, Strategies.” Here are the combined results of four surveys of AI researchers, including a poll of the most-cited scientists in the field, totaling 170 respondents.

Human-level machine intelligence is defined here as “one that can carry out most human professions at least as well as a typical human.”

By that definition, maybe we shouldn’t be so surprised about these predictions. Robots and algorithms are already squeezing the edges of our global workforce. Jobs with routine tasks are getting digitized: farmers, telemarketers, stock traders, loan officers, lawyers, journalists -- all of these professions have already felt the cold steel nudge of our new automated colleagues. 


Replication of routine isn't the kind of intelligence Bostrom is interested in. He’s talking about an intelligence with intuition and logic, one that can learn, deal with uncertainty and sense the world around it. The most interesting thing about reaching human-level intelligence isn’t the achievement itself, says Bostrom; it’s what comes next. Once machines can reason and improve themselves, the Skynet is the limit.

Computers are improving at an exponential rate. In many areas -- chess, for example -- machine skill is already superhuman. In others -- reason, emotional intelligence -- there’s still a long way to go. Whether human-level general intelligence is reached in 15 years or 150, it’s likely to be a little-observed mile marker on the road toward superintelligence.

Superintelligence is defined as one that “greatly exceeds the cognitive performance of humans in virtually all domains of interest.”

Inventor and Tesla CEO Elon Musk warns that superintelligent machines are possibly the greatest existential threat to humanity. He says the investments he's made in artificial-intelligence companies are primarily to keep an eye on where the field is headed.

“Hope we’re not just the biological boot loader for digital superintelligence,” Musk tweeted in August. “Unfortunately, that is increasingly probable.”

There are plenty of caveats to consider before we hand the keys to our earthly kingdom over to our robot offspring.
  • First, humans have a terrible track record of predicting the future. 
  • Second, people are notoriously optimistic when forecasting the future of their own industries. 
  • Third, it’s not a given that technology will continue to advance along its current trajectory, or even with its current aims.
Still, the brightest minds devoted to this evolving technology are predicting the end of human intellectual supremacy by midcentury. That should be enough to give everyone pause. The direction of technology may be inevitable, but the care with which we approach it is not.

“Success in creating AI would be the biggest event in human history,” theoretical physicist Stephen Hawking wrote in a column for The Independent in May. “It might also be the last.”

ORIGINAL: Bloomberg
By Tom Randall 
Nov 10, 2014
