Just a recap, to re-establish why the current topic matters in terms of the OP....
The larger topic, of course, is what would happen to the experience of a conscious robot if its brain's operating speed were slowed down dramatically.
One proposal is that it must remain conscious because its brain would necessarily be some sort of Turing Machine, and TMs get the same outputs regardless of operating speed.
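To make that speed-independence claim concrete, here's a toy sketch in Python -- my own illustration, not anyone's formal construction -- of a tiny Turing-style machine. The delay parameter slows the clock between steps but never touches the transition table, so the final tape comes out the same no matter how slowly the thing runs:

```python
import time

# A minimal Turing-style machine (toy example): walk right along the tape,
# overwrite every '0' with '1', and halt at the first blank cell.
# Transition table: (state, symbol read) -> (new state, symbol to write, head move)
RULES = {
    ("scan", "0"): ("scan", "1", +1),   # overwrite and move right
    ("scan", "1"): ("scan", "1", +1),   # leave 1s alone, move right
    ("scan", " "): ("halt", " ",  0),   # blank cell: stop
}

def run(tape, delay=0.0):
    tape = list(tape) + [" "]
    state, head = "scan", 0
    while state != "halt":
        state, symbol, move = RULES[(state, tape[head])]
        tape[head] = symbol
        head += move
        time.sleep(delay)               # operating speed -- and nothing else
    return "".join(tape).strip()

print(run("0100"))              # fast -> '1111'
print(run("0100", delay=0.5))   # slow -> '1111' (identical output)
```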
So we're examining whether or not the human brain -- the only machine we can be certain produces consciousness -- is a TM or, alternatively, whether we can be sure that a TM can do everything the human brain does (since we don't know the actual mechanism that produces consciousness). Included in the latter is the question of whether rendering the products of calculation (at any speed) is sufficient to accomplish everything that a brain or a computer can accomplish.
If I'm understanding the "pro" side correctly, we can affirm that computers can do all the things a brain can do (and do them at any operating speed) because computers and brains are both information processors, and therefore information processing must be what generates consciousness.
But the notion of IP actually generating consciousness is problematic.
Searle complained that the difference between a brain and a computer is that a computer isn't aware of the meanings of any of the symbols it manipulates. But there's good reason to believe that the brain isn't aware of any such meanings, either -- at least, not the parts of the brain that actually work things out.
We see evidence of this in the kinds of linguistic and reasoning errors the brain is prone to. The brain appears to be association-driven. It works by reinforcing the memory of patterns, and associating these patterns with other patterns that occur with them in our experience.
We can see this in the way we come up with wrong words, for example. Sometimes we err by coming up with near synonyms -- the ideas associated with the sounds match -- but other times we come up with sound-alikes that mean entirely different things, or words that look alike on the page, or even the word Uncle Harry used to say all the time, because he also used to say, all the time, the word we were actually looking for.
The brain appears indifferent to the kinds of associations it's retrieving. It simply reaches for whichever associations are available, and goes with the one that is strongest, regardless of whether it "makes sense".
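To put that indifference in toy form (my own illustration, with made-up words and strengths, not a model from the literature): retrieval just grabs whatever candidate is most strongly primed, and never checks whether the kind of association involved makes any semantic sense.

```python
# Candidate words competing to be retrieved, each with an association strength
# and the kind of link that primed it. The values are invented for illustration.
candidates = [
    ("jealousy", 0.60, "near synonym"),
    ("envelope", 0.72, "sound-alike"),
    ("entropy",  0.55, "looks alike on the page"),
    ("rhubarb",  0.80, "the word Uncle Harry always used"),
]

def retrieve(cands):
    # No check of meaning anywhere: just take the strongest association.
    return max(cands, key=lambda c: c[1])

word, strength, kind = retrieve(candidates)
print(word, "--", kind)   # rhubarb -- the word Uncle Harry always used
```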
Furthermore, the brain seems to use this pattern-storage and -association technique (just how, on a physical level, nobody knows) across the board. For example, PTSD appears to be a direct result of the brain going into hyperdrive after life-threatening experiences.
When we have a traumatic experience, the brain wants to encode the various patterns of the event and strongly associate them with mortal fear. Studies with stressed mice have shown their brains "rehearsing" the trauma over and over, for example. And in humans we see recurring dreams, as well as physical panic responses to physically similar situations (which may be getting on an airplane, driving under a bridge, walking in a narrowly constricted space, being in a dense crowd of people, etc.) even though the conscious mind "knows" that no threat is present.
But if drugs that short-circuit this "rehearsal" (most of which is unconscious) are administered immediately after the trauma, PTSD can be greatly lessened or avoided.
What about a computer?
Well, we can get very basic and go back to our abacus example.
The man at the abacus is doing addition, a symbolic activity in his mind. But the abacus isn't doing any adding -- it's just having its beads moved back and forth. In the man's head, the beads represent certain number values. But those numbers are not manifest in the abacus. It's not actually manipulating any symbols -- the only thing that changes in that system is the location of the beads.
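A rough sketch of that point in code (assuming, for illustration, a simple decimal abacus with one column per digit): the device's entire state is a list of bead positions, and the number 1,264 shows up only when a human-side function interprets those positions.

```python
# Physical state of the abacus: how many beads are pushed up on each rod.
bead_positions = [1, 2, 6, 4]

def interpret(positions):
    # This step happens in the user's head, not in the abacus.
    value = 0
    for count in positions:
        value = value * 10 + count
    return value

print(bead_positions)             # [1, 2, 6, 4] -- all the abacus "has"
print(interpret(bead_positions))  # 1264 -- exists only under our interpretation
```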
Are modern computers any different? Do they actually manipulate symbols?
It doesn't appear that they do. Although they're much more complicated than the abacus, they're still physical objects, and they do what they do merely by changing physical states, although they do it much faster and are powered by electricity rather than by human beings.
But humans have found a way to make these objects do what they do by using interfaces -- both on the programming end and the user end -- which "communicate" with us in the kind of symbolic system we understand. We put commands into our programs that tell the machine to give x a value, for instance, and to increment that value by 1 every time something happens.
But is there actually an "x" in the computer? Is there actually a "1" in the computer? Well, no. No more than there was ever a 1,264 in the abacus.
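Python's own toolchain makes a handy illustration here (it sits one level above the hardware, but the point carries): the "x" and the "1" we typed survive only as entries in lookup tables attached to the compiled code, and what actually runs is a byte sequence -- and, below that, changing voltages.

```python
import dis

src = "x = 0\nx = x + 1\n"
code = compile(src, "<example>", "exec")

print(code.co_names)    # ('x',) -- the name, stored as table data
print(code.co_consts)   # typically (0, 1, None) on CPython -- the numerals, as table data
print(code.co_code)     # the raw bytes the interpreter actually steps through
dis.dis(code)           # a human-readable rendering of those bytes
```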
Both the abacus and the computer are machines that work by changing physical states, and the numbers in the abacus-user's head, and the logical statements in the programmer's lines of code, and the words and figures printed on screens and paper are things which make sense only to humans. The machines do what they do in the physical world without any reference to them at all.
We seem to get "answers" -- and we do -- but that's only because we've designed a physical apparatus that changes physical states (by hand-power or electrical power) in a way that facilitates our use of symbols. These machines merely allow humans to use symbols more quickly and powerfully.
So does the brain use symbols in its basic operation?
It doesn't appear so. It is easier for us to discuss the brain as if it did, but that's our own metaphor, a kind of short-hand that allows us to talk about events which would be impossibly complicated to discuss otherwise.
Symbolic thinking happens at a higher level. As I mentioned in another post, the brain constructs schema, or meta-patterns, that it uses as a kind of cache. We often miss details in what we see (especially as we age) because our brains don't bother to use the actual incoming stream of stimuli, but instead fill in with schema as a resource-saver.
Magicians who work with children know this all too well, because small children will often fail to be misdirected in the way that adults and teens are, because their brains are using what's coming into them from the outside rather than using short-cuts.
The use of these schema, or bundles of association, or pre-set clusters of neural activity, facilitates what we think of as symbolic thinking.
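A loose way to picture the cache idea (my own toy analogy, not a cognitive model): perception that always attends to the raw stream is expensive, while a cached schema answers quickly but can silently paper over details the raw input actually contains -- which is exactly the gap the magician exploits.

```python
# Toy "schema as cache": a stored expectation of what a familiar scene contains.
schema_cache = {"kitchen scene": ["table", "chairs", "kettle"]}

def perceive(scene_label, raw_stimuli, use_schema=True):
    if use_schema and scene_label in schema_cache:
        return schema_cache[scene_label]   # cheap: fill in from the schema
    return sorted(set(raw_stimuli))        # costly: attend to what's actually there

raw = ["table", "chairs", "kettle", "palmed coin"]
print(perceive("kitchen scene", raw))                    # schema wins; the coin is missed
print(perceive("kitchen scene", raw, use_schema=False))  # the child's route: coin noticed
```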
But underlying that symbolic thinking is the same old hardware, our big abacus, the brain.
We know that this brain generates conscious experience -- one of the many things it does. And we have every reason to believe, based on how the brain is wired and how it behaves, that consciousness is after-the-fact. When I look at a menu, I seem to consciously decide to have the asparagus, but what's really happening is that other parts of my brain do that and I become aware that "I" have "made a choice" a fraction of a second later.
The same can be said of the process of answering "What's 2 plus 2?" There's a chain reaction of pattern associations in the brain that results in my saying "4". But were any symbols manipulated in the process? It would seem that they weren't. However, it's extremely useful to model this activity as the use of symbols.
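A toy contrast, just to illustrate (both routes are my own invention): one function answers "What's 2 plus 2?" by stored association, with nothing added anywhere; the other models the same act as explicit symbol manipulation. From the outside the outputs are indistinguishable.

```python
# Route 1: pattern in, pattern out -- a stored association, no arithmetic performed.
associations = {"what's 2 plus 2?": "4"}

def answer_by_association(prompt):
    return associations[prompt.lower()]

# Route 2: the explicit, symbol-level model of the same act.
def answer_by_arithmetic(a, b):
    return str(a + b)

print(answer_by_association("What's 2 plus 2?"))  # '4'
print(answer_by_arithmetic(2, 2))                 # '4'
```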
In any case, when we ask how consciousness is generated, to propose that symbolic thinking is the cause of consciousness is to get the relationship somewhat backward. Whatever the brain does, it does by changing physical states. It's all stuff, all chemicals, all biology.
Somehow, this biology generates the real-world phenomenon of conscious awareness. We don't know how. But however it does it, we can be sure that it's a consequence of physical activity.
It must be an error to say that the processing of "information" literally generates consciousness, because "information" is nowhere in the brain. Rather, "information" is an abstraction of our own invention.
We can talk about computers and the brain in terms of information processing, certainly, and our descriptions are accurate (if not perfectly precise). But we have to be aware that we're using abstractions, that we're not actually describing how these machines actually do what they do on a purely objective level.
Given that fact, we can't simply rely on the "information processor" analogy (because that's what it is) to assert that "information processing" is what generates consciousness, or that computers -- of the type we now have -- must be able to perform all of the functions that brains can perform.
Perhaps they can. It would be wonderful.
But we cannot make that assertion by analogy.