"Consciousness to emerge". That doesn't sound particularly rigorous to me.
There are whole books on the subject that are a lot more rigorous.
I used the term "emerge" as a summary, NOT as a mysterious black box or anything. You can't expect me to rewrite whole chapters of material every time I talk about small matters of computing consciousness.
Could you perhaps precisely describe what is scientifically necessary for "consciousness to emerge"?
I can offer a summary of the most compelling theory I have heard:
A mapping of relationships between various models of the self (which are themselves modeled on reports of the states of various aspects of the body, and are called the "proto-self", the "core self", and so on) and other objects external to the self, sustained for a certain amount of time within the network. This second-order mapping can be called the "autobiographical self".
And it is also worth noting that some form of memory might be needed for the relationships in that mapping to make any kind of sense. "Memory", however, would be more of a systematic reconstruction of possible playbacks of past responses to stimuli (motor, emotional, etc.), as opposed to the conventional view that memory is a playback of a recording from the senses.
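To make that summary a bit more concrete, here is a toy sketch in Python. Every name in it (ProtoSelf, CoreSelf, AutobiographicalSelf, encounter, reconstruct_memory) is my own illustrative invention, not anything taken from the books; the only point is that the kind of second-order mapping and reconstructive memory I just described has an obvious computational shape.

from dataclasses import dataclass, field
from typing import Dict, List

# Toy illustration only: my own names and structure, loosely inspired by the
# proto-self / core-self / autobiographical-self distinction, not an
# implementation of any published model.

@dataclass
class ProtoSelf:
    """First-order reports of body states (the 'inputs')."""
    body_states: Dict[str, float]          # e.g. {"pain": 0.1, "hunger": 0.7}

@dataclass
class CoreSelf:
    """A relationship between the proto-self and one external object, right now."""
    object_id: str
    body_states_before: Dict[str, float]
    body_states_after: Dict[str, float]

@dataclass
class AutobiographicalSelf:
    """Second-order mapping: core-self relationships sustained over time."""
    history: List[CoreSelf] = field(default_factory=list)

    def encounter(self, proto: ProtoSelf, object_id: str,
                  new_states: Dict[str, float]) -> None:
        # Record how an external object changed the reported body states.
        self.history.append(CoreSelf(object_id, dict(proto.body_states), new_states))
        proto.body_states.update(new_states)

    def reconstruct_memory(self, object_id: str) -> Dict[str, float]:
        # "Memory" here is not a playback of sensor recordings; it is a
        # reconstruction: average the changes this object produced before.
        episodes = [e for e in self.history if e.object_id == object_id]
        if not episodes:
            return {}
        keys = set().union(*(e.body_states_after for e in episodes))
        return {k: sum(e.body_states_after.get(k, 0.0) for e in episodes) / len(episodes)
                for k in keys}

proto = ProtoSelf({"pain": 0.0, "hunger": 0.6})
auto = AutobiographicalSelf()
auto.encounter(proto, "food", {"hunger": 0.1})
print(auto.reconstruct_memory("food"))      # a reconstructed, not recorded, "memory"

Nobody is claiming this toy is conscious, of course. It only shows that "a sustained mapping between self-models and external objects, plus reconstructive memory" is the sort of thing that can be written down and computed.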
That summary was more or less written in my own words. But there are books that will elaborate on it, if you care to read them.
Just what "inputs" are needed?
I focused on "inputs" recently because someone brought up the idea that physiological processes other than brain computation might be necessary for consciousness. I argued that, if so, they can also be simulated to provide sufficient data. The exact details are probably not important, but if you insist on a summary:
Human-like consciousness seems to rely on states of the body being reported in some way (physical pain, perhaps from injury; or a sense of hunger and fatigue, or of being satiated and energetic) in order to develop models of the self. Each one of these reports can be fooled by anyone who intervenes in the reporting process. And there is no reason why any of them could not be simulated. You can call such reporting "inputs", if you would like.
In his book, Antonio Damasio even makes specific claims about how the posteromedial cortices (PMCs) have a unique set of connections to other parts of the brain that convey information about the body's states, including routes to the older brain-stem areas, which other parts of the brain are not exactly privy to.
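And here is an equally toy sketch of what I mean by simulating the reporting. The signal names and the simple drift-plus-noise model are my own assumptions for illustration; the point is that whatever consumes the reports only ever sees the reported values, so it cannot tell from the data alone whether they came from a body or from a simulator.

import random

# Toy illustration (my own assumptions): a "body" and a "simulator" both emit
# the same kind of interoceptive reports, so anything downstream that consumes
# the reports cannot distinguish the two sources from the data alone.

def real_body_report(t: float) -> dict:
    # Pretend these values came from actual physiology.
    return {"hunger": min(1.0, 0.1 * t), "fatigue": min(1.0, 0.05 * t), "pain": 0.0}

def simulated_body_report(t: float) -> dict:
    # Same schema, generated by a simple model plus a little noise.
    noise = lambda: random.gauss(0.0, 0.01)
    return {"hunger": min(1.0, 0.1 * t + noise()),
            "fatigue": min(1.0, 0.05 * t + noise()),
            "pain": max(0.0, noise())}

def consume(report: dict) -> str:
    # A downstream "self-model" only ever sees the reported values.
    return "eat" if report["hunger"] > 0.5 else "keep working"

for t in (1.0, 6.0):
    print(consume(real_body_report(t)), consume(simulated_body_report(t)))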
A bunch of vague waffle, a demand to disprove it and the obligatory reference to magic.
You offer no examples of what you are talking about. Conceptually, I can see no reason why such examples could exist right now. But I am willing to be proven wrong. Explain, perhaps even only in principle, how something relevant to consciousness could never, ever be computed.
In the case of consciousness, the claim appears to be that because the term is so vague, and the functionality so much more complex, it's possible to make equally vague assertions and demand that someone disprove them.
The claims I am making are NOT vague assertions. We really DO know a LOT about how consciousness works, already!! Even if we don't know everything, we are still making progress on mastering the mysteries!
And, NOTHING discovered so far contradicts the idea that consciousness can be emulated or simulated on a computer, or in a robot.
I urge you to read up on this exciting stuff before you make such accusations again.
Another frequent feature of this discussion is to make very precise and specific claims - relating to algorithms, Church-Turing and computation - and then rephrase them in a totally open way. Because if someone denies that a machine might duplicate the functionality of the brain - well, that's just a claim of mysticism and magic and god.
It will be a claim of mysticism and magic (and god, if you like), until demonstrated otherwise.
The computation claim is that no particular physical processes are essential to consciousness, as opposed to every other process going on in the human body.
The most compelling computational claim I have seen is that the REPORTING of physical processes is an essential ingredient in human-like consciousness. (A more generalized version might not even require that, but that's a rabbit hole we can explore later.)
The argument about which physical processes are (or are not) "essential" to consciousness is a non sequitur. What is really important is that there is something, somewhere, that can somehow be mapped into a model of the self.
It should be pointed out that essential functionality is something which we choose. If we want to replicate all the functionality of the heart, then we need an exact duplicate of the heart. If we decide that all we want is a pump, then we can replicate that and leave other considerations on one side.
It's an error to suppose that there is an objective definition of essential functionality.
I can agree with this, actually.
They say that the functionality of the brain is not well enough understood to make a definite assertion.
That is too bad for them. Nothing we know, so far, contradicts the assertion.
As long as productive science can be achieved by following that idea, it will continue to be followed.
I strongly recommend that, if you are interested in this subject, and in particular in the connection between the Church-Turing Thesis and artificial intelligence, you read the article, or at least the section on misunderstandings.
Can you give me an example of something that can't be computed by a Turing machine, but can still be computed by a different machine, or not?!