A new mathematical model of consciousness implies that your PC will never be conscious in the way you are
One of the most profound advances in science in recent years is the way researchers from a variety of fields are beginning to think about consciousness. Until now, the c-word has been taboo for most scientists. Any suggestion that a researcher was interested in this area would be tantamount to professional suicide.
That has begun to change thanks to a new theory of consciousness developed in the last ten years or so by Giulio Tononi, a neuroscientist at the University of Wisconsin in Madison, and others. Tononi’s key idea is that consciousness is a phenomenon in which information is integrated in the brain in a way that cannot be broken down.
So each instant of consciousness integrates the smells, sounds and sights of that moment of experience. And consciousness is simply the feeling of this integrated information experience.
What makes Tononi’s theory different from other theories of consciousness is that it can be modelled mathematically using ideas from physics and information theory. That doesn’t mean this theory is correct. But it does mean that, for the first time, neuroscientists, biologists, physicists and anybody else can all reason about consciousness using the universal language of science: mathematics.
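To get a feel for what “integrated information” means in information-theoretic terms, here is a toy sketch in Python. It computes the total correlation of two correlated binary units, a crude stand-in chosen purely for illustration; it is not Tononi’s actual measure Φ, which involves searching over partitions of a system. Whenever the quantity is greater than zero, the whole carries information that the parts, taken independently, do not.

```python
# Toy illustration only: total correlation of two binary units, a crude
# stand-in for "integration" -- NOT Tononi's phi, which is defined over
# partitions of a system's cause-effect structure.
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Joint distribution of two binary units A and B, keyed by (a, b).
# The units are strongly correlated, so the whole carries information
# beyond what the parts carry when considered independently.
joint = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}

p_a = [sum(v for (a, b), v in joint.items() if a == x) for x in (0, 1)]
p_b = [sum(v for (a, b), v in joint.items() if b == y) for y in (0, 1)]

total_correlation = entropy(p_a) + entropy(p_b) - entropy(joint.values())
print(f"total correlation: {total_correlation:.3f} bits")  # > 0: parts are not independent
```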
This has led to an extraordinary blossoming of ideas about consciousness. A few months ago, for example, we looked at how physicists are beginning to formulate the problem of consciousness in terms of quantum mechanics and information theory.
Today, Phil Maguire at the National University of Ireland and a few pals take this mathematical description even further. These guys make some reasonable assumptions about the way information can leak out of a conscious system and show that this implies that consciousness is not computable. In other words, consciousness cannot be modelled on a computer.
Maguire and co begin with a couple of thought experiments that demonstrate the nature of integrated information in Tononi’s theory. They start by imagining the process of identifying chocolate by its smell. For a human, the conscious experience of smelling chocolate is unified with everything else that a person has smelled (or indeed seen, touched, heard and so on).
This is entirely different from the process of automatically identifying chocolate using an electronic nose, which measures many different smells and senses chocolate when it picks out the ones that match some predefined signature.
A key point here is that it would be straightforward to access the memory in an electronic nose and edit the information about its chocolate experience. You could delete this with the press of a button.
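To make that contrast concrete, here is a hypothetical sketch of an electronic nose (the names and signature values are invented for illustration): its “experience” of chocolate is just an entry in a lookup table of stored sensor fingerprints, so it can be deleted outright.

```python
# Hypothetical sketch: an electronic nose's knowledge of chocolate is a
# single entry in a lookup table, so deleting it is trivial.
signatures = {
    "chocolate": [0.82, 0.10, 0.33],   # made-up sensor fingerprint
    "coffee":    [0.15, 0.91, 0.27],
}

def identify(reading, known, tolerance=0.1):
    """Return the first stored smell whose signature matches the reading."""
    for name, signature in known.items():
        if all(abs(r - s) <= tolerance for r, s in zip(reading, signature)):
            return name
    return "unknown"

print(identify([0.80, 0.12, 0.30], signatures))   # -> chocolate
del signatures["chocolate"]                        # the "press of a button"
print(identify([0.80, 0.12, 0.30], signatures))   # -> unknown
```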
But ask a neuroscientist to do the same for your own experience of the smell of chocolate—to somehow delete this—and he or she would be faced with an impossible task since the experience is correlated with many different parts of the brain.
Indeed, the experience will be integrated with all kinds of other experiences. “According to Tononi, the information generated by such [an electronic nose] differs from that generated by a human insofar as it is not integrated,” say Maguire and co.
This process of integration is crucial, and Maguire and co focus on the mathematical properties it must have. For instance, they point out that the process of integrating information, of combining it with many other aspects of experience, can be thought of as a kind of information compression.
This compression allows the original experience to be reconstructed but does not keep all of the information it originally contained.
To better understand this, they give as an analogy the sequence of numbers: 4, 6, 8, 12, 14, 18, 20, 24…. This is an infinite sequence defined as: odd primes plus 1. The definition does not itself contain all the numbers in the sequence, but it does allow them to be reproduced. It is clearly a compression of the information in the original sequence.
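A minimal sketch of that idea: the short rule “odd primes plus 1” can regenerate as many terms of the sequence as you like, so the rule acts as a compressed description of the whole infinite sequence.

```python
# A compressed definition: the short rule "odd primes plus 1" regenerates
# as many terms of the sequence as required.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def odd_primes_plus_one(count):
    """Return the first `count` terms of the sequence: each odd prime, plus 1."""
    terms, candidate = [], 3
    while len(terms) < count:
        if is_prime(candidate):
            terms.append(candidate + 1)
        candidate += 2  # even numbers greater than 2 are never prime
    return terms

print(odd_primes_plus_one(8))   # [4, 6, 8, 12, 14, 18, 20, 24]
```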
The brain, say Maguire and co, must work like this when integrating information from a conscious experience. It must allow the reconstruction of the original experience but without storing all the parts.
That leads to a problem. This kind of compression inevitably discards information. And as more information is compressed, the loss becomes greater.
But our memories cannot be like that; if they were, they would be continually haemorrhaging meaningful content. “Memory functions must be vastly non-lossy, otherwise retrieving them repeatedly would cause them to gradually decay,” say Maguire and co.
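Here is a toy illustration, not the authors’ model, of why lossy storage would be a problem: if each recall re-encoded a memory while throwing a little detail away, repeated retrieval would gradually blur the original content.

```python
# Toy illustration (not the authors' model): each "recall" mixes the stored
# values slightly and re-encodes them on a coarse grid, losing detail every time.
def lossy_reencode(values, levels=8):
    """Quantise values to a coarse grid, discarding fine detail."""
    return [round(v * levels) / levels for v in values]

def blur(values):
    """Mix each value a little with its neighbours before re-storing it."""
    mixed = []
    for i, v in enumerate(values):
        left = values[i - 1] if i > 0 else v
        right = values[i + 1] if i < len(values) - 1 else v
        mixed.append(0.8 * v + 0.1 * left + 0.1 * right)
    return mixed

memory = [0.12, 0.95, 0.40, 0.73, 0.05]   # some arbitrary "experience"
for _ in range(20):                        # every recall loses a little more
    memory = lossy_reencode(blur(memory))

print(memory)   # noticeably flattened relative to the original values
```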
The central part of their new work is to describe the mathematical properties of a system that can store integrated information in this way but without it leaking away. And this leads them to their central proof. “The implications of this proof are that we have to abandon either the idea that people enjoy genuinely [integrated] consciousness or that brain processes can be modelled computationally,” say Maguire and co.
Since Tononi’s main assumption is that consciousness is the experience of integrated information, it is the second idea that must be abandoned: brain processes cannot be modelled computationally.
They go on to discuss this in more detail. If a person’s behaviour cannot be analysed independently from the rest of their conscious experience, it implies that something is going on in their brain that is so complex it cannot feasibly be reversed, they say.
In other words, the difference between cognition and computation is that computation is reversible whereas cognition is not. And they say that is reflected in the inability of a neuroscientist to operate on the brain and remove a particular memory of the smell of chocolate.
That’s an interesting approach but it is one that is likely to be controversial. The laws of physics are computable, as far as we know. So critics might ask how the process of consciousness can take place at all if it is non-computable. Critics might even say this is akin to saying that consciousness is in some way supernatural, like magic.
But Maguire and co counter this by saying that their theory doesn’t imply that consciousness is objectively non-computable, only subjectively so. In other words, a God-like observer with perfect knowledge of the brain would not consider it non-computable. But for humans, with their imperfect knowledge of the universe, it is effectively non-computable.
There is something of a card trick about this argument. In mathematics, the idea of non-computability is not observer-dependent so it seems something of a stretch to introduce it as an explanation.
What’s more, critics might point to other weaknesses in the formulation of this problem. For example, the proof that conscious experience is non-computable depends critically on the assumption that our memories are non-lossy.
But everyday experience is surely the opposite—our brains lose most of the information that we experience consciously. And the process of repeatedly accessing memories can cause them to change and degrade. Isn’t the experience of forgetting a familiar face well documented?
Then again, critics of Maguire and co’s formulation of the problem of consciousness must not lose sight of the bigger picture—that the debate about consciousness can occur on a mathematical footing at all. That’s indicative of a sea change in this most controversial of fields.
Of course, there are important steps ahead. Perhaps the most critical is that the process of mathematical modelling must lead to hypotheses that can be experimentally tested. That’s the process by which science distinguishes between one theory and another. Without a testable hypothesis, a mathematical model is not very useful.
For example, Maguire and co could use their model to make predictions about the limits in the way information can leak from a conscious system. These limits might be testable in experiments focusing on the nature of working memory or long-term memory in humans.
That’s the next challenge for this brave new field of consciousness.
Ref: arxiv.org/abs/1405.0126 : Is Consciousness Computable? Quantifying Integrated Information Using Algorithmic Information Theory