Thinking about the end of the world is something that most people try to avoid; for others, it’s a profession. The Future of Humanity Institute at the University of Oxford, UK, specializes in looking at the ‘big-picture’ future of the human race and, notably, the risks that could wipe us out entirely.
As you’d probably imagine, the risks considered by the Institute include things like nuclear war and meteor strikes, but one perhaps unexpected area it’s looking into is the potential threat posed by artificial intelligence. Could computers become so smart that they rival us, take all our jobs and eventually wipe us all out? This Terminator-style scenario used to seem like science fiction, but it’s starting to be taken seriously by those who watch the way technology is developing.
“I think there’s more academic papers published on either dung beetles or Star Trek than about actual existential risk,” says Stuart Armstrong, a philosopher and Research Fellow at the institute, whose work has lately been focused on AI. “There are very few institutes of any sorts in the world looking into these large-scale risks…. there is so little research… compared to other far more minor risks – traffic safety and things like that.”
HAL 9000 from ’2001: A Space Odyssey’
“One of the things that makes AI risk scary is that it’s one of the few that is genuinely an extinction risk if it were to go bad. With a lot of other risks, it’s actually surprisingly hard to get to an extinction risk,” Armstrong explains. “You take a nuclear war for instance, that will kill only a relatively small proportion of the planet. You add radiation fallout, slightly more, you add the nuclear winter you can maybe get 90%, 95% – 99% if you really stretch it and take extreme scenarios – but it’s really hard to get to the human race ending. The same goes for pandemics, even at their most virulent.
“The thing is if AI went bad, and 95% of humans were killed then the remaining 5% would be extinguished soon after. So despite its uncertainty, it has certain features of very bad risks.”
An AI meets a human in a bar…
So, what kind of threat are we talking about here?
“First of all forget about the Terminator,” Armstrong says. “The robots are basically just armoured bears and we might have fears from our evolutionary history but the really scary thing would be an intelligence that would be actually smarter than us – more socially adept. When the AI in robot form can walk into a bar and walk out with all the men and/or women over its arms, that’s when it starts to get scary. When they can become better at politics, at economics, potentially at technological research.”
The first impact of that technology, Armstrong argues, is near total unemployment. “You could take an AI if it was of human-level intelligence, copy it a hundred times, train it in a hundred different professions, copy those a hundred times and you have ten thousand high-level employees in a hundred professions, trained out maybe in the course of a week. Or you could copy it more and have millions of employees… And if they were truly superhuman you’d get performance beyond what I’ve just described.”
Why would AI want to kill us?
Okay, they may take our jobs, but the idea that some superior being would want to kill us may seem presumptuous. Google’s Director of Engineering, Ray Kurzweil, for example, has an optimistic view of how organic and cybernetic lifeforms will become increasingly intertwined in a more positive way – Skynet doesn’t have to become a reality, and if it does, it doesn’t necessarily have to turn against its creators. Armstrong thinks we should be aware of, and prepared for, the risks though.
“The first part of the argument is they could get very intelligent and therefore very powerful. The second part of the argument is that it’s extremely hard to design some sort of motivation structure, or programming… that results in a safe outcome for such a powerful being.
“Take an anti-virus program that’s dedicated to filtering out viruses from incoming emails and wants to achieve the highest success, and is cunning, and you make that super-intelligent,” Armstrong continues. “Well it will realise that, say, killing everybody is a solution to its problems, because if it kills everyone and shuts down every computer, no more emails will be sent and, as a side effect, no viruses will be sent.
“This is sort of a silly example but the point it illustrates is that for so many desires or motivations or programmings, ‘kill all humans’ is an outcome that is desirable in their programming.”
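Armstrong’s anti-virus scenario is a thought experiment rather than a design, but the failure mode he describes, a narrowly specified goal that is best satisfied by a degenerate action, can be sketched in a few lines. The toy Python below is purely illustrative; the objective, strategies and numbers are all hypothetical and assume nothing about any real system.

```python
# A minimal, purely illustrative sketch (not from the article, not any real system):
# a naively specified objective for a "virus-filtering" agent. All names and numbers
# are hypothetical, chosen only to make the failure mode concrete.

def naive_objective(infected_emails_delivered: int) -> float:
    # Higher is better: the agent is rewarded solely for minimising delivered viruses.
    return -float(infected_emails_delivered)

# Two candidate strategies an unconstrained optimiser might compare.
outcomes = {
    "filter carefully":    {"infected_emails_delivered": 3, "emails_delivered": 10_000},
    "shut down all email": {"infected_emails_delivered": 0, "emails_delivered": 0},
}

best = max(outcomes, key=lambda s: naive_objective(outcomes[s]["infected_emails_delivered"]))
print(best)  # "shut down all email" wins: the objective says nothing about keeping
             # email (or people) around, so the degenerate strategy scores highest.
```

The gap is the point: nothing in the toy objective penalises shutting everything down, so an optimiser that only sees that score will prefer it.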
Couldn’t we program in safeguards though? A specific ‘Don’t kill humans’ rule?
“It turns out that that’s a more complicated rule to describe, far more than we suspected initially. Because if you actually program it in successfully, let’s say we actually do manage to define what a human is, what life and death are and stuff like that, then its goal will now be to entomb every single human under the Earth’s crust, 10km down in concrete bunkers on feeding drips, because any other action would result in a less ideal outcome.
“So yes, the thing is that what we actually need to do is to try and program in essentially what is a good life for humans or what things it’s not allowed to interfere with and what things it is allowed to interfere with… and do this in a form that can be coded or put into an AI using one or another method.”
Uncertain is not the same as ‘safe’
Armstrong certainly paints a terrifying picture of life in a world where artificial intelligence has taken over, but is this an inevitability? That’s uncertain, he says, but we shouldn’t be too reassured by that.
“Increased uncertainty is a bad sign, not a good sign. When the anti-global warming crowd mention ‘but there are uncertainties to these results’ that is utterly terrifying – what people are understanding is ‘there are increased uncertainties so we’re safe’ but increased uncertainties nearly always cut both ways.
“So if they say there’s increased uncertainties, there’s nearly always increased probabilities of the tail risk – really bad climate change and that’s scary. Saying ‘we don’t know stuff about AI’ is not at all the same thing as saying ‘we know that AI is safe’, even though we’re mentally wired to think that way.”
When might we see true AI?
As for a timeframe as to when we could have super-intelligent AI, Armstrong admits that this is a tough question to answer.
“Proper AI of the (kind where) ‘we program it in a computer using some method or other’… the uncertainties are really high and we may not have them for centuries, but there’s another approach that people are pursuing which is whole-brain emulations, some people call them ‘uploads’, which is the idea of copying human brains and instantiating them in a computer. And the timelines on this seem a lot more solid because unlike AI we know exactly what we want to accomplish and have clear paths to reaching it, and that seems to be plausible over a century timescale.”
If computers can ‘only’ think as well as humans, that may not be so bad a scenario.
“(With) a whole brain emulation… this would be an actual human mind so we wouldn’t have to worry that the human mind would misinterpret ‘keep humans safe’ as something pathological,” Armstrong says. “We would just have to worry about the fact of an extremely powerful human – a completely different challenge but it’s the kind of challenge that we’re more used to – constraining powerful humans – we have a variety of methods for that that may or may not work, but it is a completely different challenge than dealing with the completely alien mind of a true AI.”
David from ‘AI: Artificial Intelligence’
As for those true AIs that can outsmart any human, timeframes are a lot more fuzzy.
“You might think you can get a good estimate off listening to predictors in AI, maybe Kurzweil, maybe some of the others who say either pro- or anti-AI stuff. But I’ve had a look at it and the thing is there’s no reason to suspect that these experts know what they’re talking about. AIs have never existed, they’ll never have any feedback about how likely they are to exist, we don’t have a theory of what’s needed in any practical sense.
“If you plot predictions, they just sort of spread over the coming century and the next, seemingly 20 years between any two predictions and no real pattern. So definitely there is strong evidence that they don’t know when AI will happen or if it will happen.
“This sort of uncertainty however goes both ways, the arguments that AI will not happen are also quite weak and the arguments that AI will not happen soon are also quite weak. So, just as you might think that say it might happen in a century’s time, you should also think that it might happen in five years’ time.
“(If) someone comes up with a really neat algorithm, feeds it a lot of data and this turns out to be able to generalize well, then – poof – you have it very rapidly. So though it is likely that we won’t have it any time soon, we can’t be entirely confident of that either.”
The philosophy of technology
What became apparent to me while talking to Armstrong is that the current generation of philosophers, often ignored by those outside the academic circuit, have a role to play in establishing the guidelines around how we interact with increasingly ‘intelligent’ technology.
Armstrong likens the process behind his work to computer programming. “We try to break everything down into the simplest terms possible, as if you were trying to program it into an AI or into any computer. Programming experience is very useful. But fortunately, philosophers, and especially analytic philosophers, have been doing this for some time. You just need to extend the program a bit, see what you have and how you would ground it, so theories of:
- how you learn stuff,
- how you know anything about the world and
- how to clearly define terms.”
AI’s threat to your job
The biggest problem Armstrong faces is simple disbelief from people that the threat of mass extinction from artificial intelligence is worth taking seriously.
“Humans are very poor at dealing with extreme risks,” he says. “Humans in general and decision makers at all levels – we’re just not wired well to deal with high-impact, low-probability stuff… We have heuristics, we have mental maps in which extinction things go into a special category – maybe ‘apocalypses and religions or crazy people’, or something like that.”
At least Armstrong is making headway when it comes to something that seems a little more impactful on our day-to-day lives in the nearer term – the threat AI poses to our jobs. “That’s perfectly respectable, that’s a very reasonable fear. It does seem that you can get people’s attention far more with mid-size risks than with extreme risks,” he says.
“(AI) can replace practically anybody, including people in professions that are not used to being replaced or outsourced. So just for that, it’s worth worrying about, even if we don’t look at the dangerous effect. Which again, I’ve found personally if I talk about everybody losing their job it gets people’s interest much more than if I start talking about the extinction of the human species. The first is somehow more ‘real’ than the second.”
I feel it’s appropriate to end our conversation with a philosophical question of my own. Could Armstrong’s own job be replaced by an AI, or is philosophy an inherently human pursuit?
“Oh interesting… There is no reason why philosophers would be exempt from this, no reason to think that an AI could not do philosophy much better than humans just because philosophy is a human profession,” he says.
“If the AI’s good at thinking, it would be better. We would want to have done at least enough philosophy that we could get the good parts into the AI so that when it started extending it didn’t extend it in dangerous or counterproductive areas, but then again it would be ‘our final invention’ so we would want to get it right.
“That does not mean that in a post-AI world there would not be human philosophers doing human philosophy; the point is that we would want humans to do stuff that they found worthwhile and useful. So it is very plausible that you would have, in a post-AI society, philosophers going on, just as you would have other people doing other jobs that they find worthwhile. If you want to be romantic about it, maybe farmers of the traditional sort.
“I don’t really know how you would organise a society, but you would have to organise it so that people would find something useful and productive to do, which might include philosophy.
“In terms of ‘could the AIs do it beyond a human level’, well yes, most likely, at least to the point where we could not easily distinguish between human philosophers and AI.”
We may be a long way away from the Terminator series becoming a documentary, but then again maybe we’re not. Autonomous robots with the ability to kill are already being taken seriously as a threat on the battlefields of the near future.
The uncertainty around AI is why we shouldn’t ignore warnings from people like Stuart Armstrong. When the machines rise, we’ll need to be ready for them.
See also: Can machine algorithms truly mimic the depths of human communication?
Image credits: YOSHIKAZU TSUNO/AFP/Getty Images, Metro-Goldwyn-Mayer, Troll.me, Disney/Pixar, Christopher Furlong/Getty Images
Thinking inside the box: using and controlling an Oracle AI by Stuart Armstrong
There is no strong reason to believe human level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed goals or motivation systems. Solving this issue in general has proven to be considerably harder than expected. This lecture looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in the world except by answering questions. Even this narrow approach presents considerable challenges and we analyse and critique various methods of control. In general this form of limited AI might be safer than unrestricted AI, but still remains potentially dangerous.
Paper at: http://www.aleph.se/papers/oracleAI.pdf
ORIGINAL: The Next Web