In creating any form of AI, we must copy from biology. The argument goes as follows. A brain is a biological product, and so must be its products: perception, insight, inference, logic, mathematics, and so on. By creating AI we inevitably tap into something that biology has already invented on its own. It thus follows that the more we want an AI system to resemble a human, for example, to get a better grade on the Turing test, the more we need to copy biology.
When it comes to describing living systems, we have traditionally adopted different explanatory principles for different levels of system organization.
- One set of principles is used for “low-level” biology, such as the evolution of our genome through natural selection; a completely different set is used to describe the expression of those genes.
- A yet different type of story is used to explain what our neural networks do.
- And, needless to say, the descriptions at the very top of that organizational hierarchy, at the level of our behavior, are made with concepts that again live in a world of their own.
But what if it were possible to unify all these different aspects of biology and describe them all with a single set of principles? What if we could use the same fundamental rules to talk about the physiology of a kidney and the process of a conscious thought? What if we had concepts that could give us insights into the mental operations underlying logical inference on the one hand and the relation between phenotype and genotype on the other? This request is not so outrageous. After all, all those phenomena are biological.
One can argue that such an all-embracing theory of the living would also be beneficial for the further development of AI. The theory could guide us on what is possible and what is not. Given a certain technological approach, what are its limitations? Maybe it could answer the question of what the unitary components of intelligence are, and whether my software has enough of them.
For more inspiration, let us look into the Shannon-Wiener theory of information and appreciate how helpful this theory is for dealing with various types of communication channels (including memory storage, which is also a communication channel, only over time rather than space). We can calculate how much channel capacity is needed to transmit (or store) certain contents. We can also easily compare two communication channels and determine which one has more capacity. This allows us to directly compare devices that are otherwise incomparable. For example, an interplanetary communication system based on satellites can be compared to the DNA located within the nucleus of a human cell. Only thanks to information theory can we calculate whether a given satellite connection has enough capacity to transfer the DNA information about a human person to a hypothetical recipient on another planet. (The answer is: yes, easily.) Thus, information theory is invaluable in making these kinds of engineering decisions.
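As a rough, back-of-the-envelope illustration of that comparison (the genome size and link capacity below are assumed figures, not taken from the original text), one can count the bits directly:

```python
# Rough estimate of the information content of a human genome and the time
# needed to send it over a satellite link. Assumed figures: ~3.1e9 base pairs
# in a haploid genome, 2 bits per base (log2 of 4 bases), 10 Mbit/s link.

GENOME_BASE_PAIRS = 3.1e9   # assumed haploid genome length
BITS_PER_BASE = 2           # four possible bases -> 2 bits each
LINK_RATE_BPS = 10e6        # assumed satellite link capacity in bits/s

genome_bits = GENOME_BASE_PAIRS * BITS_PER_BASE
transfer_minutes = genome_bits / LINK_RATE_BPS / 60

print(f"Genome information content: {genome_bits / 8 / 1e9:.2f} GB")
print(f"Transfer time at 10 Mbit/s: {transfer_minutes:.0f} minutes")
# Roughly 0.8 GB and on the order of ten minutes: the channel easily suffices.
```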
So, how about intelligence? Wouldn’t it be good to come into possession of a similarly general theory for adaptive, intelligent behavior? Maybe we could use quantities other than bits to tell us why the intelligence of plants lags behind that of primates. We might also get a better idea of which essential ingredients distinguish human intelligence from that of a chimpanzee. Using the same theory we could compare
- an abacus,
- a hand-held calculator,
- a supercomputer, and
- a human intellect.
The good news is that such an overarching biological theory now exists, and it is called practopoiesis. Derived from the Ancient Greek praxis + poiesis, practopoiesis means creation of actions. The name reflects the fundamental presumption about the common property that can be found across all the different levels of organization of biological systems:
- Gene expression mechanisms act;
- bacteria act;
- organs act;
- organisms as a whole act.
Due to this focus on biological action, practopoiesis has a strong cybernetic flavor, as it has to deal with the need of acting systems to close feedback loops. Input is needed to trigger actions and to determine whether more actions are needed. For that reason, the theory is founded on the basic theorems of cybernetics, namely the law of requisite variety and the good regulator theorem.
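To make the law of requisite variety a bit more concrete, here is a toy sketch (my own illustration under simplifying assumptions, not part of the original text): a regulator can drive every disturbance to a single desired outcome only if it has at least as many distinct responses as there are distinct disturbances.

```python
# Toy illustration of Ashby's law of requisite variety. In the best case each
# regulator state neutralizes one type of disturbance, so the number of
# distinct outcomes that still leak through is at least
# ceil(disturbance_variety / regulator_variety). Illustrative only.

def minimum_outcome_variety(n_disturbances: int, n_regulator_states: int) -> int:
    return -(-n_disturbances // n_regulator_states)  # ceiling division

for k in (1, 2, 4, 8, 16):
    print(f"16 disturbance types vs {k:2d} regulator states -> "
          f"at least {minimum_outcome_variety(16, k)} distinct outcome(s)")
# Only with 16 regulator states can everything be forced to one outcome:
# "only variety can destroy variety".
```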
The key novelty of practopoiesis is that it introduces mechanisms explaining how different levels of organization mutually interact. These mechanisms help explain how genes create the anatomy of the nervous system, or how that anatomy produces behavior.
When practopoiesis is applied to the human mind and to AI algorithms, the results are quite revealing.
To understand those results, we need to introduce the concept of a practopoietic traverse. Without going into the details of what a traverse is, let us just say that it is a quantity with which one can compare the capabilities of different systems to adapt. A traverse is a kind of practopoietic equivalent of the bit of information in the Shannon-Wiener theory. Just as we can compare two communication channels by the number of bits of information transferred, we can compare two adaptive systems by the number of traverses. Thus, a traverse is not a measure of how much knowledge a system has (for that, the good old bit does the job just fine). It is rather a measure of how much capability the system has to adjust its existing knowledge, for example, when new circumstances emerge in the surrounding world.
To the best of my knowledge, no artificial intelligence algorithm in use today has more than two traverses. That means that these algorithms interact with the surrounding world at a maximum of two levels of organization. For example, an AI algorithm may receive satellite images at one level of organization and, at another level of organization, the categories into which it should learn to classify those images. We would say that this algorithm has two traverses of cybernetic knowledge. In contrast, biological behaving systems (that is, animals, including Homo sapiens) operate with three traverses.
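To make the counting concrete, here is a minimal sketch of how I read the two-traverse structure of an ordinary supervised learner: one level of interaction applies the stored knowledge to incoming data, and a second level adjusts that knowledge from category feedback. The toy learner and its update rule are my own illustrative assumptions, not taken from the theory.

```python
import random

# Minimal sketch of a two-traverse system: a threshold classifier that
# (1) acts on inputs using its current knowledge and
# (2) revises that knowledge from labeled feedback. Illustrative only.

class TwoTraverseClassifier:
    def __init__(self) -> None:
        self.threshold = 0.5                      # the stored "knowledge"

    def act(self, x: float) -> int:
        """Level 1: input from the world drives an action."""
        return int(x > self.threshold)

    def learn(self, x: float, label: int, lr: float = 0.01) -> None:
        """Level 2: category feedback from the world reshapes the knowledge."""
        self.threshold += lr * (self.act(x) - label)

clf = TwoTraverseClassifier()
for _ in range(2000):
    x = random.random()
    clf.learn(x, label=int(x > 0.7))              # environment supplies labels
print(f"learned threshold ≈ {clf.threshold:.2f}")  # settles near 0.7
```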
This makes a whole lot of difference in adaptive intelligence. Two-traversal systems can be super fast and omni-knowledgeable, and their tech specs may list peta-everything, which they sometimes already do, but these systems nevertheless remain comparatively dull next to three-traversal systems, such as a three-year-old girl or even a domestic cat.
To appreciate the difference between two and three traverses, let us go one step lower and consider systems with only one traverse. An example would be a PC without any advanced AI algorithm installed.
This computer is already light-years faster than I am at calculations, far better at storing information, and beats me at spell checking without its processor even getting warm. And yet, paradoxically, I am still the smarter one around. Thus, computational capacity and adaptive intelligence are not the same.
Importantly, the same relationship “me vs. the computer” holds for “me vs. a modern advanced AI algorithm”. I am still the more intelligent one, although the computer may have more computational power. The relationship also holds for “AI algorithm vs. non-AI computer”. Even a small AI algorithm, implemented, say, on a single PC, is in many ways more intelligent than a petaflop supercomputer without AI. Thus, there is a certain hierarchy in adaptive intelligence that is determined not by memory size or the number of floating-point operations executed per second, but by the ability to learn and adapt to the environment.
A key requirement for adaptive intelligence is the capacity to observe how well one is doing towards a certain goal, combined with the capacity to make changes and adjust in light of the feedback obtained. Practopoiesis tells us that there is not only one possible step from non-adaptive to adaptive, but that multiple adaptive steps are possible. Multiple traverses indicate a potential for adapting the ways in which we adapt.
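Continuing the toy example from above, here is a hedged sketch of what one extra adaptive level might look like: a slower, outer loop that receives its own feedback (long-run performance across whole episodes) and uses it to adjust the learning machinery itself, here simply the learning rate. This is my own illustration of “adapting the way we adapt”, not the theory’s formal construction.

```python
import random

def run_episode(lr: float, n_train: int = 300, n_test: int = 500) -> float:
    """Train a fresh threshold learner with a given learning rate and return
    its error rate on new samples: the slow feedback used by the outer loop."""
    threshold = 0.5
    for _ in range(n_train):
        x = random.random()
        threshold += lr * (int(x > threshold) - int(x > 0.7))
    test = [random.random() for _ in range(n_test)]
    return sum(int(x > threshold) != int(x > 0.7) for x in test) / n_test

# Outer (third) adaptive level: adjust how the inner learner learns,
# based on feedback gathered over whole episodes rather than single trials.
candidate_lrs = [0.001, 0.01, 0.1, 0.5]
best_lr = min(candidate_lrs, key=run_episode)
print(f"outer loop selected learning rate {best_lr}")
```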
We can go even one step further down the adaptive hierarchy and consider the least adaptive systems, e.g., a book. Provided that the book is large enough, it can contain all of the knowledge about the world, and yet it is not adaptive: it cannot, for example, rewrite itself when something in that world changes. Typical computer software can do much more and administer many changes, but there is also a lot that cannot be adjusted without a programmer. A modern AI system is even smarter and can reorganize its knowledge to a much higher degree. Nevertheless, these systems are incapable of making certain types of adjustments that a human person, or an animal, can make. Practopoiesis tells us that these systems fall into different adaptive categories, which are independent of the raw information-processing capabilities of the systems. Rather, these adaptive categories are defined by the number of levels of organization at which the system receives feedback from the environment, also referred to as traverses.
We can thus make the following hierarchical list of the best exemplars in each adaptive category:
- A book: dumbest; zero traverses
- A computer: somewhat smarter; one traverse
- An AI system: much smarter; two traverses
- A human: rules them all; three traverses
Most importantly for the creation of strong AI, practopoiesis tells us in which direction technological development should be heading:
Engineering creativity should be geared towards empowering the machines with one more traverse. To match a human, a strong AI system has to have three traverses.
Practopoietic theory also explains what is so special about the third traverse. Systems with three traverses (referred to as T3-systems) are capable of storing their past experiences in an abstract, general form, which can be used much more efficiently than in two-traversal systems. This general knowledge can be applied to the interpretation of specific novel situations such that quick, well-informed inferences are made about what is currently going on and which actions should be executed next. This process, unique to T3-systems, is referred to as anapoiesis, and can be described generally as the capability to reconstruct cybernetic knowledge that the system once had and to use this knowledge efficiently in a given novel situation.
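How such a reconstruction might look in the most schematic terms is sketched below; the “general rule”, its numbers, and the brightness example are hypothetical illustrations of reusing abstract knowledge in a novel situation, not a description of anapoiesis as formally defined.

```python
# Speculative sketch: abstract knowledge acquired slowly (a general rule) is
# used to reconstruct a situation-specific fast model on the fly, instead of
# relearning from scratch in every novel context. Names and numbers assumed.

def general_rule(context_brightness: float) -> float:
    """Slowly learned regularity: how the decision boundary tends to shift
    with a context variable (here, an assumed 'brightness')."""
    return 0.4 + 0.4 * context_brightness

def reconstruct_fast_model(context_brightness: float):
    """Instantiate a ready-to-use classifier for the current situation
    from the general knowledge (the reconstruction step)."""
    threshold = general_rule(context_brightness)
    return lambda x: int(x > threshold)

# A never-seen context is handled immediately with a well-informed guess,
# which ordinary (two-traverse) learning can then refine further.
classify = reconstruct_fast_model(0.9)        # threshold becomes 0.76
print(classify(0.8), classify(0.7))           # -> 1 0
```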
If biology has invented T3-systems and anapoiesis and has made good use of them, there is no reason why we should not be able to do the same in machines.
Danko Nikolić is a brain and mind scientist running an electrophysiology lab at the Max Planck Institute for Brain Research, and is the creator of the concept of ideasthesia. More about practopoiesis can be read here.