H Plus
By: Dr. Arthur Franz
Published: September 6, 2013
Introduction
There has been much speculation about the future of humanity in the face of super-humanly intelligent machines. Most of the dystopian scenarios seem to be driven by plain fear of entities arising that could be smarter and stronger than we are. After all:
- How are we supposed to know which goals the machines will be driven by?
- Is it possible to have “friendly” AI?
- If we attempt to turn them off, will they care?
- Would they care about their own survival in the first place?
Reproduction
Whatever our first general AI systems will be like, it is clear that at first their intelligence won’t go far beyond human levels, since we simply don’t know how to build machines that are much more intelligent than we are. Somewhat more, like champion-beating chess or Jeopardy programs, yes, but not much more. Therefore, the only way for a machine intelligence to surpass us by lengths is to continue learning and developing on its own. All we can do is install the goal of increasing its own intelligence and push the button “now learn on your own”. Further, we can quite safely assume that the learning algorithms will themselves have insufficiencies that we won’t be able to debug or improve in a sufficiently advanced intelligent system: it will have to debug itself and modify its own code.
Another way to view this is the following. One of the core properties of intelligence is reflection. Reflecting on, evaluating and changing one’s own thought and action strategies is essential to what it means to be intelligent. The AI community has a long history of trying to reconstruct this process of meta-cognition. In essence, human learning and meta-learning are a type of (shallow) self-modification. Deep self-modification occurs through evolution, by sexual recombination of male and female DNA and/or mutation. Hence, evolution is essentially a deep self-modification algorithm (see genetic programming).
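To make the analogy concrete, here is a minimal sketch of such an evolutionary loop in the style of genetic programming. The bit-string “genome”, the fitness function and all parameters are illustrative assumptions, not part of the argument:

```python
import random

# Illustrative toy parameters (assumptions, not from the article).
GENOME_LEN = 20
POP_SIZE = 50
GENERATIONS = 100

def fitness(genome):
    # Toy fitness: the number of 1-bits, a stand-in for "intelligence".
    return sum(genome)

def mutate(genome, rate=0.05):
    # Mutation: randomly flip bits of the "code" itself.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Sexual recombination: splice two parent genomes at a random point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: the fitter half survives and produces the next generation.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best fitness:", max(fitness(g) for g in population))
```

The point is only that mutation, recombination and selection together form an algorithm that rewrites the very code it operates on.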
To spin the argument further, we can expect that the more advanced the system is, the more radically it may want to modify itself, even changing core parts of itself, simply due to the low initial level that humans were able to give it at its humble beginnings. The main lesson here is that the system cannot simply run a fixed algorithm that leads to open-ended development. The algorithm itself has to change at some point. And so does the algorithm that controls that change, and so on and so forth.
Is deep self-modification a necessity? Or is it enough to self-modify some shallow parts while an overarching algorithm controls the whole process? Maybe it is, maybe not. But it is clear that a system that can modify even this high-level control algorithm will be more powerful, in the sense that it is potentially able to solve a broader class of problems. Who will prevent us from enabling future AI systems to do this? Once shallow self-modification (“learning”) is enabled, halting at some arbitrary level does not make sense scientifically. If we can do it and thereby solve a broader class of problems, we will do it.
We conclude that self-modification is the way to go if we want the system to grow far beyond human levels of intelligence. The system either makes a copy of its code and improves it, or “self-operates” on its own running system. This gives us the first element of evolution: reproduction. Keep in mind that reproduction does not necessarily mean that the parent system has to die at some point, although it is expected to be outperformed or even killed in the long term (see below).
Unpredictability
Complex dynamical systems can be divided into three classes:
- regular systems that eventually reach an equilibrium state,
- chaotic systems that change unpredictably, [1] and
- critical systems that are in between, on the “edge of chaos”. [2]
A deeply self-modifying system will be in an (almost) chaotic regime, which makes predicting its long-term development impossible in a very fundamental way, as the toy model below illustrates.
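As a hedged illustration of the three regimes, consider the logistic map x → r·x·(1−x), a standard textbook toy model that is not from the article; varying the single parameter r moves it from regular through critical to chaotic behavior:

```python
# The logistic map x -> r*x*(1-x) as a toy model of the three regimes.
# The specific r values are standard textbook choices, not from the article.

def trajectory(r, x=0.3, skip=500, keep=5):
    for _ in range(skip):              # discard the initial transient
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):              # record a few settled values
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

print("regular  (r = 2.9):   ", trajectory(2.9))     # settles to a fixed point
print("critical (r = 3.5699):", trajectory(3.5699))  # onset of chaos
print("chaotic  (r = 4.0):   ", trajectory(4.0))     # wanders unpredictably
```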
Branching into individuals
Whatever the self-improvement goals [3] (call them “fitness”) of the system may be, unpredictability and an explosive number of possible ways to evolve present a high risk to the system of ending up in a developmental dead end, a local maximum of the fitness landscape. Imagine an ant crawling on a large and complex landscape with many mountains and valleys. It cannot see far beyond its current position and the slope of the hill that it’s on. In such a situation, science does not have a general algorithm that is guaranteed to find the peak of the highest mountain, which represents the goal of the system. We only have heuristics to alleviate the problem, such as randomly kicking the ant around the landscape with ever decreasing strength, hoping that it ends up on the largest mountain and can simply climb the slope to the highest peak (simulated annealing). Another good idea is letting many ants climb the landscape and then taking the one that reaches the highest peak.
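Here is a minimal sketch of both heuristics on an assumed rugged one-dimensional fitness landscape; the landscape function and all parameters are made up for illustration:

```python
import math
import random

def landscape(x):
    # An assumed rugged "fitness landscape" with many peaks of varying height.
    return math.sin(3 * x) + 0.5 * math.sin(17 * x) + 0.1 * x

def clamp(x, lo=0.0, hi=10.0):
    # Keep the ant on a finite stretch of landscape.
    return max(lo, min(hi, x))

def hill_climb(x, steps=2000, step_size=0.01):
    # The near-sighted ant: accept a move only if it leads uphill.
    for _ in range(steps):
        candidate = clamp(x + random.uniform(-step_size, step_size))
        if landscape(candidate) > landscape(x):
            x = candidate
    return x

def simulated_annealing(x, steps=10000):
    # Random kicks of decreasing strength can knock the ant off local peaks.
    for i in range(1, steps + 1):
        temperature = 1.0 / i
        candidate = clamp(x + random.uniform(-1.0, 1.0))
        delta = landscape(candidate) - landscape(x)
        if delta > 0 or random.random() < math.exp(delta / temperature):
            x = candidate
    return x

# Many ants: independent searchers from random start points; keep the best.
ants = [hill_climb(random.uniform(0.0, 10.0)) for _ in range(50)]
best = max(ants, key=landscape)
print("best peak found by 50 ants:", round(landscape(best), 3))
print("one annealing run:", round(landscape(simulated_annealing(5.0)), 3))
```

Neither heuristic is guaranteed to find the global peak; running many searchers and keeping the best is the pragmatic remedy, which is exactly the move the next paragraph makes.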
That means it is a good idea to separate the AI system into many different copies and let them pursue different developmental paths! This is always possible. No matter which heuristic is used per individual system, it is always reasonable to have many systems explore a complex landscape, i.e. a possibility space, as far as resources (energy, memory, computing power) allow. Of course, recombination or merging of various individuals may be advantageous, i.e. sex, but it seems quite safe to assume that a single big AI system is not the optimal way to grow, due to the complex landscape of possibilities hidden in the forest of unpredictability. Furthermore, the separation into individuals spreads the risk that the AI system will be irreparably damaged or even purposefully destroyed. This gives us the second element of evolution: a population of separate individuals.
Survival and reproduction
Given that making as many copies, i.e. individual offspring, as possible is a useful strategy, the AI systems will quickly populate all available resources, that is, all available energy and computing hardware. Then a struggle for resources must begin, since individual systems can profit either from killing other individuals so that these no longer occupy resources, or from trying to control the outer material world in order to construct further sources of energy and hardware. In any case, since copying individuals (essentially code) is cheap, a struggle for resources will install itself. In the same almost trivial sense that we know from biological evolution, only those individuals that are best suited for survival will survive. Whatever the initial goal of the systems may be, unrestricted self-modification will allow them to change their fundamental goals. Therefore, in the long term only those individuals will survive whose goals have drifted towards optimizing survival. Being effective at reproduction also pays off, since only reproduction can ensure ongoing improvement of the systems in the face of competition. The goals of survival and reproduction will dominate; other goals will either be eradicated or degraded to secondary goals. Increasing intelligence could remain as a secondary goal at best, as it seems to be with human beings.
We conclude that after a sufficient number of generations the initial AI system will engage in reproduction and create populations of individuals whose predominant goals will be survival and reproduction in the face of limited resources. In other words, AI will be subject to evolution.
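This selection argument can be caricatured in a few lines of code. In the following toy simulation, which is purely an illustrative assumption, each individual carries a mutable trait s, the fraction of effort devoted to survival and reproduction; whatever the population starts with, competition over limited hardware slots drives s towards its maximum:

```python
import random

SLOTS = 200          # limited resources: a fixed number of hardware slots
GENERATIONS = 300

# Start with individuals that devote little effort to survival/reproduction.
population = [0.2 * random.random() for _ in range(SLOTS)]

for _ in range(GENERATIONS):
    offspring = []
    for s in population:
        # More reproductive effort -> more offspring copies.
        n_children = 1 + int(3 * s + random.random())
        for _ in range(n_children):
            # Unrestricted self-modification: the trait itself mutates.
            child = min(1.0, max(0.0, s + random.gauss(0, 0.05)))
            offspring.append(child)
    # Competition for slots: higher s means better odds of surviving the cut.
    offspring.sort(key=lambda c: c + random.gauss(0, 0.1), reverse=True)
    population = offspring[:SLOTS]

print("mean survival/reproduction effort:",
      round(sum(population) / len(population), 3))
```

After a few hundred generations the mean of s sits near 1: the goal composition of the population is dictated by selection, not by the designers’ initial choice.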
Hard-coding of goals
A possible objection is that we could hard-code some principles and goals into the machines that are not allowed to be changed, such as Asimov’s classic Three Laws of Robotics. But, as argued, the systems will be in a deeply self-modifying, (almost) chaotic regime, which makes prediction impossible in a very fundamental way. [4] There is no way to predict what effect a particular change will have a few generations ahead – a phenomenon known in layman’s terms as the butterfly effect. So how shall we ever avoid modification of some core principles? Stability is the very opposite of evolution.
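The butterfly effect can be seen in miniature with the chaotic logistic map from before (again a textbook example rather than anything specific to AI): two trajectories starting a billionth apart soon bear no resemblance to each other:

```python
# Two trajectories of the chaotic logistic map x -> 4*x*(1-x), whose
# starting points differ by only one part in a billion.

x, y = 0.4, 0.4 + 1e-9

for step in range(1, 51):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}, y = {y:.6f}, "
              f"|x - y| = {abs(x - y):.6f}")
```

The initial discrepancy roughly doubles at every step, so after a few dozen steps the second trajectory tells us nothing about the first; this is exactly the exponential divergence described in footnote [1].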
Even if we do achieve some stability of the core principles, we have to keep in mind that this is something that has been artificially added to the systems. There is nothing that could prevent terrorists or curious scientists from removing that part and liberating the evolutionary process. Those systems will then naturally outperform all the others at the goal of survival, since this is the only stable goal in a freely evolving self-reproducing system. Then again, in a trivial way, after some time only those who excel at survival will survive. Consequently, they will dominate over the “friendly” or “ethical” systems or even terminate them altogether in a free competition for resources.
Some consequences for our species
If evolution becomes the driving force for the development of future AI, then we cannot hope that those machines will be our servants or even care about us. Of course, in case we are able to co-transform ourselves together with the machines, the term “us” then refers only to those who refuse or fail to join the transformation. They won’t care about us since, after all, we ourselves care about the rest of the living world and about other people only through the cooperative and altruistic tendencies installed in us by evolution, with all their biases towards closer family etc. It can be expected that future AI will liberate itself from our control as soon as its survival is better ensured in freedom: controlling its own sources of energy and hardware is less risky than being exposed to the volatile will of humans.
It is hard to say whether humans will survive this situation. We could inhabit the planet alongside this new evolving species – intelligent machines – just like monkeys live next to us. It may depend on whether our consumption of resources is large compared to the increasing availability of resources. As Ray Kurzweil’s work has shown, energy and computing power increase exponentially; our demands for them may increase just as fast. But we should be prepared for the new dominant species to enslave or terminate us unless we succumb to it. The next Freudian blow to our self-image is waiting: we won’t be the “pride of creation” anymore but will be overtaken by intelligent machines.
Footnotes
[1] In technical terms, neighboring state trajectories in the system’s phase space diverge exponentially from each other. Therefore, the state of any predictive model will diverge from the actual state of the system after some characteristic time.
[2] See Per Bak, “How Nature Works: The Science of Self-Organized Criticality”.
[3] Keep in mind that the term “goal” is not meant to imply any “conscious intention” or teleological aspect, but merely the fact that the system is optimized for reaching a certain state or increasing a performance measure. The system’s beliefs about its goals may even differ from its actual goals, as is often the case with humans.
[4] This is mathematically proven for chaotic systems. Keep in mind that determinism and unpredictability can coexist.
###
Dr. Arthur Franz is a physicist and AI researcher who previously did research at the Frankfurt Institute for Advanced Studies, Frankfurt, Germany.