If the contention is that consciousness is purely a Turing issue, then that is a different contention from consciousness being Turing + something else. When Aku and I have argued that some physical element may be part of consciousness, we've been accused of mysticism and incoherence. Time dependence is precisely the kind of thing we're talking about, and it's highly significant, because it removes the Chinese Room argument, for example.
I think you guys are likely arguing past one another, and that people are sniping rather than discussing. I offer my interaction with Aku as evidence: I pointed out that he was misusing a word or two, and he responded strangely, as if his entire worldview were under attack. I see the same rhetoric in the exchange with Nescafe.
First issue. I don't see how adding time dependence is a problem for anyone. If you would just say that plainly, I don't think people would argue.
Second issue. I think that someone somewhere in here is conflating several distinct strands of the argument: the fact that, in theory, all computable problems are solvable by Turing machines; the fact that a pure Turing machine is time independent; the fact that most computation is abstract; and the claim that mental processes are computable and solvable by Turing machines, and so consciousness is too.
Somehow these separate strands of argument were fused into a single claim: that consciousness is abstract and computable on a Turing equivalent, which is time independent.
None of that follows; each of the above is a separate issue. That Turing equivalents can compute in a time-independent fashion is fine, and that they can be described in purely abstract terms is fine. But there is also nothing wrong with putting added constraints on them for particular problems. I don't see how doing so would change the proof that they could 'compute consciousness', so it is wrong to run these distinct issues together in a long discussion.
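To illustrate that point with a toy example (this is my own sketch, not something from the thread): below is a minimal Turing-style machine in Python, run once as a pure, time-independent abstraction and once with an added real-time pacing constraint. The transition table and the delay_per_step parameter are invented purely for illustration.

```python
import time

# A toy Turing-style machine: append one '1' to a unary number (a run of 1s).
# State 'scan' moves right over the 1s; on the first blank it writes a '1' and halts.
TRANSITIONS = {
    ('scan', '1'): ('scan', '1', +1),   # keep moving right over the 1s
    ('scan', '_'): ('done', '1', 0),    # write a 1 on the first blank and halt
}

def run(tape, delay_per_step=0.0):
    """Run the toy machine. The abstract result is the same whether
    delay_per_step is 0 (the 'pure', time-independent computation) or a
    positive value (an added real-time constraint on the physical run)."""
    tape = list(tape)
    state, head = 'scan', 0
    while state != 'done':
        symbol = tape[head] if head < len(tape) else '_'
        if head >= len(tape):
            tape.append('_')            # extend the tape with a blank as needed
        state, write, move = TRANSITIONS[(state, symbol)]
        tape[head] = write
        head += move
        if delay_per_step:
            time.sleep(delay_per_step)  # the added timing constraint
    return ''.join(tape)

# Same function computed either way; only the physical realization differs.
assert run('111_') == run('111_', delay_per_step=0.01) == '1111'
```

The output is identical either way: the pacing constraint is a property of the particular run, not of the function being computed, which is all the point above requires.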
I have seen several instances of over-generalization in these discussions, and they need to be pointed out.
I think both sides are missing each other's points and that old antagonisms are ruling the day rather than honest debate. It might help if everyone took a step back, tried to really understand what the other person is saying, and stopped all the personal attacks.
Third issue. This actually has nothing to do with the Chinese Room argument. The missing bit in the Chinese Room is semantic content, not a time element or anything else. What that argument shows is not that no computation can know Chinese, be aware, or 'understand', but that simple forms of computation providing only simple syntax are not capable of it. So your typical multi-line program written to get a computer to do something will get nowhere near consciousness, because it never comes anywhere close to understanding anything.
The important bit about a 'physical argument' is that once one sees that physical systems can be computers, that neurons have computation as an intrinsic property, another objection disappears: the argument that computation cannot explain consciousness because it is abstract. Computation can be dealt with as an abstraction, but it does not follow that it *is* an abstraction. Time dependence is just a consequence of the physical nature of the process.
It is the way that information processing proceeds within the system, and the way it deals with a particular type of frame issue, that provides the other missing bits of the explanation.
We must come to understand how we get semantic content, how we are or become aware, and how we feel. That is why I have repeatedly asked the questions "what is meaning?", "what is awareness?", and "what is feeling?". We have to arrive at better definitions of these concepts so that we can see how people do it. Only then can we work out how neurons do it. Only then can we emulate that with a computer.
Unless we happen to stumble on the solution with something like the Blue Brain Project. But that is unlikely, since the search space is just too large and the number of possible solutions extraordinary.
Last issue. This has nothing to do with what you said, but I see it repeated often, so I want to address it again. There is nothing special about neurotransmitters; no neurotransmitter has any special properties at all. No computer system that emulates consciousness necessarily needs anything like a neurotransmitter, but again, the engineering problem of replicating what happens at a synapse is a bear. The reason we have different neurotransmitters is to segregate different 'systems'. Dopamine doesn't cause pleasure, nor does serotonin; it is the system that does it. Those transmitters just happen to be the chemicals used in the process.