Robot consciousness

Sorry-- your argument is similar to pl3bs's.

So it is not possible that IP is what generates consciousness. IP is purely symbolic. It is our system of understanding and talking about certain things.
The problem is that you have no evidence of anything that is purely symbolic, that is, independent of any physical substrate, such as your brain.

Remember that you couldn't have recognized anything in your own mind as a symbol until you learned the meaning of the word "symbol". So what distinguished your use of symbols before then from, say, a calculator's use of them?
 
The problem is that you have no evidence of anything that is purely symbolic, that is, independent of any physical substrate, such as your brain.

What does this even mean? I'm sorry, but I can't parse it.

Remember that you couldn't have recognized anything in your own mind as a symbol until you learned the meaning of the word "symbol". So what distinguished your use of symbols before then from, say, a calculator's use of them?

I don't take your meaning.

How does this in any way support the notion that "information processing" literally generates consciousness?
 
What does this even mean? I'm sorry, but I can't parse it.
I think we need to look closer at what we're calling a symbol, and what physical objects we're allowing to hold them. You've allowed that a 100% physical brain can. When you hear someone say "two" you recognize that as the number 2, which you also recognize is a symbol. But was this still a symbol to you when you were young, before you knew what a symbol was?

If so, how do you distinguish that case from a computer recognizing its input as the symbolic number 2? Neither the young you nor the computer have a higher-level understanding of symbols, but that wouldn't stop either of you from making use of them.
 
I think we need to look closer at what we're calling a symbol, and what physical objects we're allowing to hold them. You've allowed that a 100% physical brain can. When you hear someone say "two" you recognize that as the number 2, which you also recognize is a symbol. But was this still a symbol to you when you were young, before you knew what a symbol was?

If so, how do you distinguish that case from a computer recognizing its input as the symbolic number 2? Neither the young you nor the computer have a higher-level understanding of symbols, but that wouldn't stop either of you from making use of them.

We don't disagree there.

But I don't see why you're raising the point.
 
We don't disagree there.

But I don't see why you're raising the point.

So you now agree that both inside your brain (mind) and outside (computers) it's the underlying physics that is causal, and that in both cases we recognize symbols, but the symbols themselves aren't causal.
 
Just a recap, to re-establish why the current topic matters in terms of the OP....

The larger topic, of course, is what would happen to the experience of a conscious robot if its brain's operating speed were slowed down dramatically.

One proposal is that it must remain conscious because its brain would necessarily be some sort of Turing Machine, and TMs get the same outputs regardless of operating speed.
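To make that claim concrete, here is a minimal sketch in Python (the machine, its transition table, and the `delay` parameter are my own illustration, not anything proposed in the thread): the per-step delay stands in for operating speed and never touches the transition logic, so the output cannot depend on it.

```python
import time

# A toy Turing machine: the transition table maps (state, symbol) to
# (new_state, new_symbol, head_move). This one flips every bit on the tape.
FLIP = {("run", "0"): ("run", "1", 1),
        ("run", "1"): ("run", "0", 1),
        ("run", "_"): ("halt", "_", 0)}

def run_tm(table, tape, delay=0.0):
    """Execute the machine, sleeping `delay` seconds per step.
    The delay models operating speed; it never appears in the
    transition logic, so it cannot affect the result."""
    tape, state, head = list(tape), "run", 0
    while state != "halt":
        state, tape[head], move = table[(state, tape[head])]
        head += move
        time.sleep(delay)
    return "".join(tape)

# Same input, very different speeds, identical output.
assert run_tm(FLIP, "1011_") == run_tm(FLIP, "1011_", delay=0.2)
```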

So we're examining whether or not the human brain -- the only machine we can be certain produces consciousness -- is a TM or, alternately, whether we can be sure that a TM can do everything the human brain does (since we don't know the actual mechanism that produces consciousness). Included in the latter is the question of whether rendering the products of calculation (at any speed) is sufficient to accomplish everything that a brain or a computer can accomplish.

If I'm understanding the "pro" side correctly, we can affirm that computers can do all the things a brain can do (and do them at any operating speed) because computers and brains are both information processors, and therefore information processing must be what generates consciousness.

But the notion of IP actually generating consciousness is problematic.

Searle complained that the difference between a brain and a computer is that a computer isn't aware of the meanings of any of the symbols it manipulates. But there's good reason to believe that the brain isn't aware of any such meanings, either -- at least, not the parts of the brain that actually work things out.

We see evidence of this in the kinds of linguistic and reasoning errors the brain is prone to. The brain appears to be association-driven. It works by reinforcing the memory of patterns, and associating these patterns with other patterns that occur with them in our experience.

We can see this in the way we come up with wrong words, for example. Sometimes we err by coming up with near synonyms -- the ideas associated with the sounds match -- but other times we come up with sound-alikes that mean entirely different things, or words that look alike on the page, or even the word that Uncle Harry used to say all the time, just like he used to say the word we were looking for all the time.

The brain appears indifferent to the kinds of associations it's retrieving. It simply reaches for whichever associations are available, and goes with the one that is strongest, regardless of whether it "makes sense".
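As a toy sketch of that retrieval rule (Python; the cue, candidate words, and weights are all invented for illustration): recall is just a weighted lookup that takes the strongest association, blind to whether the link is semantic, phonetic, or pure Uncle Harry.

```python
# Hypothetical association store: each cue maps to candidate words,
# each tagged with a strength and the kind of link that built it.
associations = {
    "ocean": [("sea", 0.9, "near-synonym"),
              ("lotion", 0.6, "sound-alike"),
              ("commotion", 0.7, "word Uncle Harry always used")],
}

def recall(cue):
    # The habit-machine rule: return whichever association is
    # strongest, with no check on whether it "makes sense".
    return max(associations[cue], key=lambda cand: cand[1])

print(recall("ocean"))  # ('sea', 0.9, 'near-synonym') -- but weaken
# that link slightly and a sound-alike error wins instead
```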

Furthermore, the brain seems to use this pattern-storage and -association technique (just how, on a physical level, nobody knows) across the board. For example, PTSD appears to be a direct result of the brain going into hyperdrive after life-threatening experiences.

When we have a traumatic experience, the brain wants to encode the various patterns of the event and strongly associate them with mortal fear. Studies with stressed mice have shown their brains "rehearsing" the trauma over and over, for example. And in humans we see recurring dreams, as well as physical panic responses to physically similar situations (which may be getting on an airplane, driving under a bridge, walking in a narrowly constricted space, being in a dense crowd of people, etc.) even though the conscious mind "knows" that no threat is present.

But if drugs that short-circuit this "rehearsal" (most of which is unconscious) are administered immediately after the trauma, PTSD can be greatly lessened or avoided.

What about a computer?

Well, we can get very basic and go back to our abacus example.

The man at the abacus is doing addition, a symbolic activity in his mind. But the abacus isn't doing any adding -- it's just having its beads moved back and forth. In the man's head, the beads represent certain number values. But those numbers are not manifest in the abacus. It's not actually manipulating any symbols -- the only thing that changes in that system is the location of the beads.
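A small sketch of that distinction (hypothetical Python; the class and function names are mine): the device object holds nothing but bead positions, and the number appears only in a separate reading step that the human supplies.

```python
class Abacus:
    """Nothing here but physical state: bead positions per rod."""
    def __init__(self, rods=4):
        self.beads = [0] * rods          # beads pushed up on each rod

    def move_beads(self, rod, count):
        self.beads[rod] += count         # a purely mechanical change

def interpret_as_number(abacus):
    """The 'addition' lives here, in the observer's convention:
    read each rod as a decimal digit, most significant first."""
    return int("".join(str(b) for b in abacus.beads))

a = Abacus()
a.move_beads(2, 6)    # the device just has beads in new places...
a.move_beads(3, 4)
print(interpret_as_number(a))   # ...the 64 exists only in this reading
```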

Are modern computers any different? Do they actually manipulate symbols?

It doesn't appear that they do. Although they're much more complicated than the abacus, they're still physical objects, and they do what they do merely by changing physical states, although they do it much faster and are powered by electricity rather than by human beings.

But humans have found a way to make these objects do what they do by using interfaces -- both on the programming end and the user end -- which "communicate" with us in the kind of symbolic system we understand. We put commands into our programs that tell the machine to give x a value, for instance, and to increment that value by 1 every time something happens.

But is there actually an "x" in the computer? Is there actually a "1" in the computer? Well, no. No more than there was ever a 1,264 in the abacus.
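A quick way to see this from inside Python itself (a sketch; nothing here is specific to the thread): the name "x" survives only in tables kept for our symbolic convenience, while what executes is state changes with no "x" anywhere in them.

```python
import dis

x = 0

def bump():
    global x
    x += 1   # 'give x a value... increment that value by 1'

# The compiled function keeps 'x' only as an entry in a name table
# that exists for human-readable symbols; the machine instructions
# beneath it are register transfers and voltage changes, with no 'x'.
print(bump.__code__.co_names)   # ('x',)
dis.dis(bump)                   # LOAD_GLOBAL / STORE_GLOBAL on a table index
```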

Both the abacus and the computer are machines that work by changing physical states, and the numbers in the abacus-user's head, and the logical statements in the programmer's lines of code, and the words and figures printed on screens and paper are things which make sense only to humans. The machines do what they do in the physical world without any reference to them at all.

We seem to get "answers" -- and we do -- but that's only because we've designed a physical apparatus that changes physical states (by hand-power or electrical power) in a way that facilitates our use of symbols. They merely allow humans to use symbols more quickly and powerfully.

So does the brain use symbols in its basic operation?

It doesn't appear so. It is easier for us to discuss the brain as if it did, but that's our own metaphor, a kind of short-hand that allows us to talk about events which would be impossibly complicated to discuss otherwise.

Symbolic thinking happens at a higher level. As I mentioned in another post, the brain constructs schema, or meta-patterns, that it uses as a kind of cache. We often miss details (especially as we age) in what we see because our brains don't bother to use the actual incoming stream of stimuli, but instead fill in with schema as a resource-saver.

Magicians who work with children know this all too well: small children will often fail to be misdirected in the way that adults and teens are, because their brains are using what's coming in from the outside rather than relying on short-cuts.

The use of these schema, or bundles of association, or pre-set clusters of neural activity, facilitates what we think of as symbolic thinking.

But underlying that symbolic thinking is the same old hardware, our big abacus, the brain.

We know that this brain generates conscious experience -- one of the many things it does. And we have every reason to believe, based on how the brain is wired and how it behaves, that consciousness is after-the-fact. When I look at a menu, I seem to consciously decide to have the asparagus, but what's really happening is that other parts of my brain do that and I become aware that "I" have "made a choice" a fraction of a second later.

The same can be said of the process of answering "What's 2 plus 2?" There's a chain reaction of pattern associations in the brain that results in my saying "4". But were any symbols manipulated in the process? It would seem that they weren't. However, it's extremely useful to model this activity as the use of symbols.

In any case, when we ask how consciousness is generated, to propose that symbolic thinking is the cause of consciousness is to get the relationship somewhat backward. Whatever the brain does, it does by changing physical states. It's all stuff, all chemicals, all biology.

Somehow, this biology generates the real-world phenomenon of conscious awareness. We don't know how. But however it does it, we can be sure that it's a consequence of physical activity.

It must be an error to say that the processing of "information" literally generates consciousness, because "information" is nowhere in the brain. Rather, "information" is an abstraction of our own invention.

We can talk about computers and the brain in terms of information processing, certainly, and our descriptions are accurate (if not perfectly precise). But we have to be aware that we're using abstractions, that we're not describing how these machines actually do what they do on a purely objective level.

Given that fact, we can't simply rely on the "information processor" analogy (because that's what it is) to assert that "information processing" is what generates consciousness, or that computers -- of the type we now have -- must be able to perform all of the functions that brains can perform.

Perhaps they can. It would be wonderful.

But we cannot make that assertion by analogy.
 
So you now agree that both inside your brain (mind) and outside (computers) it's the underlying physics that is causal, and that in both cases we recognize symbols, but the symbols themselves aren't causal.

I think when we speak of recognizing symbols, we're actually on shakier ground than it may seem.

But of course it's the underlying physics that's causal.
 
Now, back to Joe and the Glacial Axon Machine.

We have Joe in our black-out room and we show him the red circle, blue square, and green triangle. Let's say we also show him a yellow octagon between each of these, but only for a subliminal duration.

When he gets out, we ask him what he saw, and he reports seeing a red circle, blue square, and green triangle.

We do it again, but this time he's under the influence of the machine while the green triangle is on the screen for a correspondingly longer period of time.

Will he report seeing the green triangle?

At first I thought he would not. Then I ran through it in my head, considering the actions of the neurons, and thought, well, there's nothing different happening, so yes, he would.

But now I'm not sure, and here's why.

Consciousness, of course, does not operate on the neural level. It requires the coordination of large sets of data.

We know that the brain will not consciously perceive events that fall below a certain real-time duration threshold (even though the brain does perceive, remember, and recall these events non-consciously). We also know that the brain will not perceive sequential events as sequential, but rather as simultaneous, if they occur within a span of real time above the subliminal threshold but below a slightly greater threshold.

So there appears to be a window of time that's a kind of conscious moment, a span of real time that consciousness treats as a single unit.

It's likely, therefore, that our stream of conscious awareness is a series of such moments, strung together like a film. This notion helps make sense of why time seems to speed up as we age, and slow down again when we're in danger.
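Those two thresholds, plus the string-of-moments idea, can be captured in a few lines (a sketch; the millisecond values are placeholders I've assumed, not figures from this discussion):

```python
SUBLIMINAL_MS = 30   # assumed: below this duration, no conscious percept
MOMENT_MS = 80       # assumed: events inside one window read as simultaneous

def report(events):
    """events: list of (label, onset_ms, duration_ms).
    Returns what consciousness 'sees': subliminal events dropped,
    events sharing a window fused into one moment."""
    seen = [e for e in events if e[2] >= SUBLIMINAL_MS]
    moments, current = [], []
    for label, onset, _ in seen:
        if current and onset - current[-1][1] < MOMENT_MS:
            current.append((label, onset))       # fused: same moment
        else:
            if current:
                moments.append([l for l, _ in current])
            current = [(label, onset)]
    if current:
        moments.append([l for l, _ in current])
    return moments

# A red circle, a 10 ms yellow octagon (dropped), then two nearly
# simultaneous flashes that fuse into a single conscious frame.
print(report([("red circle", 0, 200), ("octagon", 250, 10),
              ("blue square", 400, 200), ("green triangle", 450, 200)]))
# [['red circle'], ['blue square', 'green triangle']]
```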

So this leads to an obvious question: What is the minimum time span during which we can actually be conscious?

Can we be conscious for the duration of a single conscious moment? Or does consciousness require the stringing together of several of these?

Looking back at Joe's second experience in the black-out room, it's easy to see that he is conscious when he's not under the influence of the machine. His brain is doing what it's supposed to do, taking chunks of input, processing them, and "playing the film" which is his conscious experience -- note that "is" is meant literally... he does not "view" this film... it is his conscious experience in its entirety.

But what happens when the machine is turned on? (This machine maintains all his bodily functions, but slows down the impulses in his axons so that they take an average of 1 second to go end to end. Relative firing times at the synapses stay the same as they would otherwise be.)
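One way to picture what the machine does (a sketch with invented numbers; the factor of 1000 is only illustrative): every conduction delay is scaled identically, so the pattern of activity is preserved in the brain's own step-to-step terms while stretching enormously in real time.

```python
SLOWDOWN = 1000   # invented: millisecond-scale transits stretched toward a second

# Spike arrival times (ms) in the normal brain...
normal = [0.0, 1.2, 1.9, 3.4]
# ...and under the machine: every interval stretched by the same factor.
slowed = [t * SLOWDOWN for t in normal]

# Relative structure is intact: each inter-spike gap scales identically...
normal_gaps = [b - a for a, b in zip(normal, normal[1:])]
slowed_gaps = [b - a for a, b in zip(slowed, slowed[1:])]
print([s / n for n, s in zip(normal_gaps, slowed_gaps)])  # ~[1000.0, 1000.0, 1000.0]
# ...but events that once fit inside one real-time 'conscious moment'
# window now arrive whole seconds apart in real time.
```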

The question in my mind is this: Under those circumstances, when is Joe conscious, and for how long at a stretch? Does his consciousness slow down, does it disappear, or does it flicker on and off?

So far, I haven't come up with any definite answers, but I'm having one heck of a time coming up with any answer but that his brain never serves up a conscious moment while the machine is on.
 
I mostly agree with your description and reasoning, up to this point:

It must be an error to say that the processing of "information" literally generates consciousness, because "information" is nowhere in the brain. Rather, "information" is an abstraction of our own invention.

We can talk about computers and the brain in terms of information processing, certainly, and our descriptions are accurate (if not perfectly precise). But we have to be aware that we're using abstractions, that we're not describing how these machines actually do what they do on a purely objective level.

If we can recognize one pattern of matter and call it a "brain", and another pattern of matter (electrons, say) and call it "information", what makes one pattern abstract and not the other? The information pattern overlays the brain pattern, but they're still both patterns of the physical.

It would be abstract if the information had no connection to any particular (concrete) matter. We can hold abstract information in our brains, but language is tricky here: the information itself isn't abstract, it's what the information refers to that is abstract.
 
We know that the brain will not consciously perceive events that fall below a certain real-time duration threshold (even though the brain does perceive, remember, and recall these events non-consciously). We also know that the brain will not perceive sequential events as sequential, but rather as simultaneous, if they occur within a span of real time above the subliminal threshold but below a slightly greater threshold.

So there appears to be a window of time that's a kind of conscious moment, a span of real time that consciousness treats as a single unit.

It's likely, therefore, that our stream of conscious awareness is a series of such moments, strung together like a film. This notion helps make sense of why time seems to speed up as we age, and slow down again when we're in danger.
Is the threshold the same for all our senses, or is a touch-conscious-moment different to a vision-conscious-moment? I can't get beyond the thought that all you are doing is analysing the mechanisms by which our senses get processed and arrive at the point where we become conscious of them, rather than saying anything about consciousness itself.
 
A second pass at reviewing this:

Just a recap, to re-establish why the current topic matters in terms of the OP....

The larger topic, of course, is what would happen to the experience of a conscious robot if its brain's operating speed were slowed down dramatically.

One proposal is that it must remain conscious because its brain would necessarily be some sort of Turing Machine, and TMs get the same outputs regardless of operating speed.

So we're examining whether or not the human brain -- the only machine we can be certain produces consciousness -- is a TM or, alternately, whether we can be sure that a TM can do everything the human brain does (since we don't know the actual mechanism that produces consciousness). Included in the latter is the question of whether rendering the products of calculation (at any speed) is sufficient to accomplish everything that a brain or a computer can accomplish.
To be precise, leave out the claim that the brain is a TM, and stick with the alternative: that it is equivalent to one.

If I'm understanding the "pro" side correctly, we can affirm that computers can do all the things a brain can do (and do them at any operating speed) because computers and brains are both information processors, and therefore information processing must be what generates consciousness.

But the notion of IP actually generating consciousness is problematic.

Searle complained that the difference between a brain and a computer is that a computer isn't aware of the meanings of any of the symbols it manipulates. But there's good reason to believe that the brain isn't aware of any such meanings, either -- at least, not the parts of the brain that actually work things out.

We see evidence of this in the kinds of linguistic and reasoning errors the brain is prone to. The brain appears to be association-driven. It works by reinforcing the memory of patterns, and associating these patterns with other patterns that occur with them in our experience.

We can see this in the way we come up with wrong words, for example. Sometimes we err by coming up with near synonyms -- the ideas associated with the sounds match -- but other times we come up with sound-alikes that mean entirely different things, or words that look alike on the page, or even the word that Uncle Harry used to say all the time, just like he used to say the word we were looking for all the time.

The brain appears indifferent to the kinds of associations it's retrieving. It simply reaches for whichever associations are available, and goes with the one that is strongest, regardless of whether it "makes sense".

Furthermore, the brain seems to use this pattern-storage and -association technique (just how, on a physical level, nobody knows) across the board. For example, PTSD appears to be a direct result of the brain going into hyperdrive after life-threatening experiences.

When we have a traumatic experience, the brain wants to encode the various patterns of the event and strongly associate them with mortal fear. Studies with stressed mice have shown their brains "rehearsing" the trauma over and over, for example. And in humans we see recurring dreams, as well as physical panic responses to physically similar situations (which may be getting on an airplane, driving under a bridge, walking in a narrowly constricted space, being in a dense crowd of people, etc.) even though the conscious mind "knows" that no threat is present.

But if drugs that short-circuit this "rehearsal" (most of which is unconscious) are administered immediately after the trauma, PTSD can be greatly lessened or avoided.

What about a computer?

Well, we can get very basic and go back to our abacus example.

The man at the abacus is doing addition, a symbolic activity in his mind. But the abacus isn't doing any adding -- it's just having its beads moved back and forth. In the man's head, the beads represent certain number values. But those numbers are not manifest in the abacus. It's not actually manipulating any symbols -- the only thing that changes in that system is the location of the beads.
Unfortunately the abacus is a poor example. It's no more of a computer than a piece of paper is -- just a storage component (and a quite limited one at that).

Are modern computers any different? Do they actually manipulate symbols?

It doesn't appear that they do. Although they're much more complicated than the abacus, they're still physical objects, and they do what they do merely by changing physical states, although they do it much faster and are powered by electricity rather than by human beings.
The ability of computers to change physical states by themselves in useful ways is one crucial difference.

But humans have found a way to make these objects do what they do by using interfaces -- both on the programming end and the user end -- which "communicate" with us in the kind of symbolic system we understand. We put commands into our programs that tell the machine to give x a value, for instance, and to increment that value by 1 every time something happens.

But is there actually an "x" in the computer? Is there actually a "1" in the computer? Well, no. No more than there was ever a 1,264 in the abacus.

Both the abacus and the computer are machines that work by changing physical states, and the numbers in the abacus-user's head, and the logical statements in the programmer's lines of code, and the words and figures printed on screens and paper are things which make sense only to humans. The machines do what they do in the physical world without any reference to them at all.

We seem to get "answers" -- and we do -- but that's only because we've designed a physical apparatus that changes physical states (by hand-power or electrical power) in a way that facilitates our use of symbols. They merely allow humans to use symbols more quickly and powerfully.
The same argument can be made about children: we create them and teach them, and they appear to do the complex things that we teach them, but why don't we say that they are just doing our bidding, not "actually" using symbols? (remember that it's easy to add a random element to a computer to make its behavior unique)

So does the brain use symbols in its basic operation?

It doesn't appear so. It is easier for us to discuss the brain as if it did, but that's our own metaphor, a kind of short-hand that allows us to talk about events which would be impossibly complicated to discuss otherwise.
But that's what a symbol is -- there isn't anything else that a symbol could be a metaphor for.

Symbolic thinking happens at a higher level. As I mentioned in another post, the brain constructs schema, or meta-patterns, that it uses as a kind of cache. We often miss details (especially as we age) in what we see because our brains don't bother to use the actual incoming stream of stimuli, but instead fill in with schema as a resource-saver.

Magicians who work with children know this all too well: small children will often fail to be misdirected in the way that adults and teens are, because their brains are using what's coming in from the outside rather than relying on short-cuts.

The use of these schema, or bundles of association, or pre-set clusters of neural activity, facilitates what we think of as symbolic thinking.

But underlying that symbolic thinking is the same old hardware, our big abacus, the brain.
There is nothing restricting a computer from making all the same higher-level constructions, meta-patterns, associations, etc.

We know that this brain generates conscious experience -- one of the many things it does. And we have every reason to believe, based on how the brain is wired and how it behaves, that consciousness is after-the-fact. When I look at a menu, I seem to consciously decide to have the asparagus, but what's really happening is that other parts of my brain do that and I become aware that "I" have "made a choice" a fraction of a second later.

The same can be said of the process of answering "What's 2 plus 2?" There's a chain reaction of pattern associations in the brain that results in my saying "4". But were any symbols manipulated in the process? It would seem that they weren't. However, it's extremely useful to model this activity as the use of symbols.

In any case, when we ask how consciousness is generated, to propose that symbolic thinking is the cause of consciousness is to get the relationship somewhat backward. Whatever the brain does, it does by changing physical states. It's all stuff, all chemicals, all biology.

Somehow, this biology generates the real-world phenomenon of conscious awareness. We don't know how. But however it does it, we can be sure that it's a consequence of physical activity.

It must be an error to say that the processing of "information" literally generates consciousness, because "information" is nowhere in the brain. Rather, "information" is an abstraction of our own invention.

We can talk about computers and the brain in terms of information processing, certainly, and our descriptions are accurate (if not perfectly precise). But we have to be aware that we're using abstractions, that we're not describing how these machines actually do what they do on a purely objective level.

(I'm revising my earlier reply to this)

We don't know of any information that is independent of any concrete physical representation of it, so whatever represents information isn't abstract, though the concept of information is a slippery beast: the same information (that which informs something) can have multiple physical-pattern representations, and the same pattern can be taken as different information by two different recipients.
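That last point is easy to demonstrate (a Python sketch; the byte values are arbitrary): one and the same physical pattern yields different information depending on the recipient reading it.

```python
import struct

pattern = b"\x48\x69"   # a single physical bit pattern

# Recipient 1 treats the pattern as ASCII text...
as_text = pattern.decode("ascii")             # 'Hi'
# ...recipient 2 treats the very same bytes as a 16-bit integer.
as_number = struct.unpack(">H", pattern)[0]   # 18537

print(as_text, as_number)  # the pattern never changed; what changed
                           # is which recipient was informed, and how
```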

In any case, the recipient of the information is what determines what is information: it is being informed by it and may behave differently based on it. Our recognition of this process doesn't change it, in a computer or a brain. Calling some piece of information a symbol and another not is unsupported.
 
If we can recognize one pattern of matter and call it a "brain", and another pattern of matter (electrons, say) and call it "information", what makes one pattern abstract and not the other? The information pattern overlays the brain pattern, but they're still both patterns of the physical.

It would be abstract if the information had no connection to any particular (concrete) matter. We can hold abstract information in our brains, but language is tricky here: the information itself isn't abstract, it's what the information refers to that is abstract.

That's actually a point I've thought about myself. I'd just ask you, how do you identify "information"? What makes it "information" and not something else?
 
Is the threshold the same for all our senses, or is a touch-conscious-moment different to a vision-conscious-moment?

I don't know the answer to that. I don't know of any experiments that have been done on time thresholds for awareness of touch. It would certainly be interesting to find out.

I can't get beyond the thought that all you are doing is analysing the mechanisms by which our senses get processed and arrive at the point where we become conscious of them, rather than saying anything about consciousness itself.

I don't see the difference between the two. How can the former not say something about the latter?
 
The same argument can be made about children: we create them and teach them, and they appear to do the complex things that we teach them, but why don't we say that they are just doing our bidding, not "actually" using symbols? (remember that it's easy to add a random element to a computer to make its behavior unique)

I'm not sure it would be safe to say that children do use symbols.

But if we say they do, that still doesn't tell us that consciousness itself is generated by the manipulation of symbols.

The brain does not appear to be a logic machine, but rather a habit machine.
 
But that's what a symbol is -- there isn't anything else that a symbol could be a metaphor for.

What I mean is, when we say that the brain uses symbols, that statement itself is metaphoric.
 
Is the threshold the same for all our senses, or is a touch-conscious-moment different to a vision-conscious-moment? I can't get beyond the thought that all you are doing is analysing the mechanisms by which our senses get processed and arrive at the point where we become conscious of them, rather than saying anything about consciousness itself.

It's not that easy. The sensory receptors are inherently noisy and will be randomly triggering even when there isn't a stimulus input. Any change in the stimulus will change the rate of the firing. It's the correlation of getting input from several receptors, or longer-term changes from fewer receptors, that creates a perceivable event.

Did you know that the time it takes to perceive a light source is dependent on its brightness? If you have a small illuminated source moving in a near-dark room, it can appear to move ahead of the hand that is moving it.
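Both observations drop out of a toy integrate-to-threshold model (my own sketch; every parameter is invented): noisy receptors fire at a baseline rate, the stimulus raises that rate in proportion to brightness, and a percept occurs when the pooled count crosses a criterion -- so a dimmer source takes measurably longer to register, which would lag it behind the hand moving it.

```python
import random

def steps_to_percept(brightness, receptors=100, baseline=0.02,
                     criterion=300, seed=1):
    """Count time steps until pooled receptor firing (baseline noise
    plus a stimulus-driven rate proportional to brightness) crosses
    a detection criterion. All numbers are invented for illustration."""
    rng = random.Random(seed)
    total, steps = 0, 0
    rate = baseline + 0.05 * brightness   # per-receptor firing probability
    while total < criterion:
        steps += 1
        total += sum(rng.random() < rate for _ in range(receptors))
    return steps

print(steps_to_percept(brightness=1.0))   # bright source: detected quickly
print(steps_to_percept(brightness=0.1))   # dim source: noticeably later
```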
 
Shuttlt said:
I can't get beyond the thought that all you are doing is analysing the mechanisms by which our senses get processed and arrive at a point where we are conscious of them rather than saying anything about consciousness itself.
I don't see the difference between the two. How can the former not say something about the latter?
You don't see a difference between being consciously aware of something, and being conscious? Perhaps if stuff falls below your threshold, some algorithm or other decides not to bother the explicitly consciously aware subsystems with it? Does this tell us that the clock tick of consciousness, if there is such a thing, is the same as the threshold at which visual input gets ignored?

For me, I suspect our intuitive notion of qualia-style consciousness gradually stops making sense as one slows down an imaginary brain. At Planck timescales I have no idea what it would mean to talk about consciousness. I'm not sure that I expect to see a hard line where we go from conscious to not conscious. Perhaps.
 
That's actually a point I've thought about myself. I'd just ask you, how do you identify "information"? What makes it "information" and not something else?
My next post replaces this. The recipient of the information is what determines what is information: it is being informed by it and may behave differently based on it.

I'm not sure it would be safe to say that children do use symbols.

But if we say they do, that still doesn't tell us that consciousness itself is generated by the manipulation of symbols.

The brain does not appear to be a logic machine, but rather a habit machine.
Any child using words is clearly using symbols, and I wouldn't say that consciousness is generated by manipulating symbols. Consciousness is mainly about learning.

You'll have to explain "habit machine". Clearly we (and children) have more than habitual behaviors.

What I mean is, when we say that the brain uses symbols, that statement itself is metaphoric.
Being a metaphor for which actual case?
 
It's not that easy. The sensory receptors are inherently noisy and will be randomly triggering even when there isn't a stimulus input. Any change in the stimulus will change the rate of the firing. It's the correlation of getting input from several receptors, or longer-term changes from fewer receptors, that creates a perceivable event.

Did you know that the time it takes to perceive a light source is dependent on its brightness? If you have a small illuminated source moving in a near-dark room, it can appear to move ahead of the hand that is moving it.
So are we arguing that the speed threshold of consciousness is affected by how bright the light is, or that our visual systems are affected by this stuff while our consciousness sits on top and has to deal with it?
 
There is nothing restricting a computer from making all the same higher-level constructions, meta-patterns, associations, etc.

Well, I do expect that there will be artificial consciousness someday.

But the question I'm trying to focus on -- even with all these long posts -- is the issue of operating speed.

So let's say we create a computer that ostensibly acts like a brain. That is, it is habit-driven, relies on reinforcement and association of patterns, and makes choices based on association strength rather than any attempt to interpret the "meaning" of patterns.

The initial design has been developed to perform well in its environment -- out of the box it tends to make useful associations for the most important tasks it is normally required to perform -- even if some of these are objectively "wrong". (The equivalent to what evolution gives us.)

Feedback mechanisms tend to make a non-malfunctioning machine improve over time, so that it adapts to its environment.

Now, let's take this machine and add consciousness functionality.

We now have an actual instantiation of awareness of experience -- the input to the processes that generate this phenomenon is made up of a combination of external input and stored patterns, properly formatted. Awareness is always after-the-fact (even awareness of the machine's own decisions) but there is feedback from the components that generate consciousness back to other areas of the machine.

There is a window of real time that constitutes a "moment" of conscious awareness -- inputs spanning shorter lengths of time are still processed by other parts of the machine but are not fed into conscious awareness, events falling within the window are always experienced as simultaneous even if they are not, and sequential events over longer periods of time are perceived as sequential.

At this point, we're talking about pretty high level stuff.

So if we slow down the operating speed dramatically, does the logic of the TM apply? Must we conclude that, because a TM can perform all its operations at any arbitrary speed, therefore the consciousness function of this machine must operate to sustain the phenomenon of conscious experience regardless of operating speed?

Or, would it fail to sustain conscious experience during periods of extremely slow operation?

It seems dubious to me, because the high-level operations which generate conscious experience no longer resemble the operations of a TM.

If we allow that computers can't play movies or run lasers at any arbitrary operating speed (and for that matter that pressure-washers won't work at any arbitrary pressure) then, given the nature of what's required to perform this task, does the fact about TMs and processing speed even matter?
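The contrast is easy to sketch (hypothetical Python; the `slowdown` parameter and the 42 ms frame deadline are stand-ins I've chosen): a TM-style batch computation is indifferent to operating speed, while a task defined by real-time deadlines simply stops being that task when slowed.

```python
def batch_sum(values, slowdown=1):
    """TM-like: the answer is fixed by the input, whatever the speed."""
    total = 0
    for v in values:
        for _ in range(slowdown):   # burn arbitrary time; result unaffected
            pass
        total += v
    return total

def play_movie(frame_count, step_time_ms, deadline_ms=42):
    """Real-time: each frame must land within ~1/24 s of the last,
    or what we deliver is no longer 'playing a movie'."""
    return ["on time" if step_time_ms <= deadline_ms else "missed"
            for _ in range(frame_count)]

assert batch_sum([1, 2, 3]) == batch_sum([1, 2, 3], slowdown=10**6)
print(play_movie(3, step_time_ms=10))    # ['on time', 'on time', 'on time']
print(play_movie(3, step_time_ms=5000))  # ['missed', 'missed', 'missed']
```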
 
