
Robot consciousness

Piggy, your claim that there is an absolute time scale to consciousness is indeed bizarre.

If we had some way to see inside a human brain and film it at high speed we would see the time gaps when no synapses are firing. Yet, this observation in no way affects the operation of the brain.* If it was conscious before being observed, it will still be conscious while being observed. By symmetry, a computer brain that is conscious while running at full speed will still be conscious when running slow enough that we can observe the transitions of each logic gate.
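
Here's a quick sketch of the point in Python (a toy example with made-up update rules, obviously not a model of any real brain): a deterministic machine lands in exactly the same final state whether you run it flat out or pause between every step.

```python
import time

def step(state, symbol):
    # One deterministic update of a toy "brain": a pure function of its inputs.
    return (state * 31 + symbol) % 1_000_003

def run(inputs, pause=0.0):
    # Run the machine over the inputs, optionally sleeping between steps.
    state = 0
    for s in inputs:
        state = step(state, s)
        if pause:
            time.sleep(pause)  # slow the "clock" down arbitrarily
    return state

inputs = [3, 1, 4, 1, 5, 9, 2, 6]
assert run(inputs) == run(inputs, pause=0.1)  # the pauses change nothing
```

Nothing in the update rule even mentions wall-clock time, so nothing about the result can depend on it.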

Mathematical symmetry trumps your belief in magical time.


(*) Since this is a hypothetical observation we can ignore the effects of quantum mechanics.

You don't seem to be following what I'm saying at all. Just wait for the entire explanatory post and then I'll respond to whatever comments you have about that.
 
Is a dog self-aware? How about a rat? A snake? A cricket? A paramecium? How do we know?

Actually, there's a well-established test for "self-consciousness" in the animal psych literature. It's called the "mirror test" and was developed in 1970. Basically, allow an animal to look at itself in a mirror. Does it recognize itself as itself?

The usual way of doing this is by marking the animal in a concealed part of its body (like its forehead) and seeing if the animal will respond to seeing the animal in the mirror by grooming itself to get rid of the mark or something.

Most great apes pass the mirror test, as do magpies, elephants, and some cetaceans. Neither dogs, nor rats, nor any other type of animal (reptiles, insects, &c.) have been known to pass the mirror test.

Of course, the mirror test isn't perfect; dogs, for example, might be perfectly self-aware but simply not respond to visual stimuli because their primary sensory organ is their nose. So we can't conclude from this that dogs aren't self-aware, but we can certainly conclude that elephants are. And if you wanted to argue that dogs aren't self-aware, this is certainly evidence in your favor...
 
But in short, what I'm saying is that if you have a conscious computer, even though you can stop and start the computer and it will simply move on to the next calculation, nevertheless if you slow the processing down too much, the computer -- like a human brain -- would no longer be conscious because what you'd have is a series of non-conscious moments.

In other words, my contention is that consciousness actually requires a certain minimum amount of time of "continuous" operation -- which is to say, operation not interrupted by pauses longer than a certain threshold of time.
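
To make that concrete, here's a toy formalization of my contention in Python -- the numbers GAP_LIMIT and MIN_WINDOW are hypothetical, picked purely for illustration, not anything anyone has measured:

```python
GAP_LIMIT = 0.05   # pauses longer than this (seconds) break "continuity"
MIN_WINDOW = 0.5   # hypothesized minimum continuous run for consciousness

def has_conscious_window(timestamps):
    # True if some run of events, unbroken by pauses longer than GAP_LIMIT,
    # spans at least MIN_WINDOW seconds.
    if not timestamps:
        return False
    start = prev = timestamps[0]
    for t in timestamps[1:]:
        if t - prev > GAP_LIMIT:   # pause too long: continuity is broken
            start = t              # start a new candidate window
        if t - start >= MIN_WINDOW:
            return True
        prev = t
    return False

# The same events in the same order, just stretched out 100x:
normal = [i * 0.01 for i in range(100)]   # an event every 10 ms
slowed = [i * 1.00 for i in range(100)]   # an event every second
print(has_conscious_window(normal))  # True
print(has_conscious_window(slowed))  # False -- every moment is sub-threshold
```

On this picture, slowing the machine down doesn't change what it computes; it changes whether any stretch of the computation is ever "continuous" enough.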

This may seem like a bizarre claim from the point of view of computer science, but when you look at what we know and what we can deduce about how consciousness is generated and maintained, it doesn't seem bizarre at all.

No. It simply seems ludicrously unsupported. It's "fairies at the bottom of the garden."


An analog in the human brain is that it can, and does, take micronaps that are not noticeable to conscious awareness, which appears continuous. It can also "black out" for longer periods of time and "come to" with the result that time appears to have suddenly "jumped" -- for example, you're driving and suddenly your wheels are off the road and you realize you were momentarily asleep.

And yet we also know that events occurring over very short intervals are processed by the brain just fine, but cannot be consciously perceived. So if you were to time the blackouts so that the intervals between them were shorter than this subliminal threshold, you'd end up with a series of non-conscious moments even though the brain is working during those moments.

That's the important point: Just because the brain is operating doesn't mean that the consciousness function is working.

What I want to lay out in more detail is an argument that a robot brain which is conscious when run at normal speed would actually not be conscious when run at an extremely slow speed (e.g. pen-and-paper calculation speed) because you'd end up with a series of non-conscious states,

Completely unsupported.

in the same way that you or I would not be conscious if our brains were somehow magically able to run only for subliminal lengths of time with pauses in between.

ALSO completely unsupported. You have no idea how you or I would react if "our brains were somehow magically able to run only for subliminal lengths of time with pauses in between."
 
The brains of whales or elephants are much larger and presumably have much more processing power than our own, yet we have a tough time even evaluating their level of intelligence. How then can we create human-like intelligence? Even if we were to have the required processing power (which I don't expect we will), we don't really know what intelligence is.
 
Actually, there's a well-established test for "self-consciousness" in the animal psych literature. It's called the "mirror test" and was developed in 1970. Basically, allow an animal to look at itself in a mirror. Does it recognize itself as itself?

The usual way of doing this is by marking the animal in a concealed part of its body (like its forehead) and seeing if the animal will respond to seeing the animal in the mirror by grooming itself to get rid of the mark or something.

Most great apes pass the mirror test, as do magpies, elephants, and some cetaceans. Neither dogs, nor rats, nor any other type of animal (reptiles, insects, &c.) have been known to pass the mirror test.

Of course, the mirror test isn't perfect; dogs, for example, might be perfectly self-aware but simply not respond to visual stimuli because their primary sensory organ is their nose. So we can't conclude from this that dogs aren't self-aware, but we can certainly conclude that elephants are. And if you wanted to argue that dogs aren't self-aware, this is certainly evidence in your favor...
Isn't this just an even less satisfactory version of the Turing test? How do you tell the difference between a magpie and a p-zombie magpie?
 
This may seem like a bizarre claim from the point of view of computer science, but when you look at what we know and what we can deduce about how consciousness is generated and maintained, it doesn't seem bizarre at all.

After reading this thread for quite a while, it's pretty clear that you aren't really that familiar with what computer scientists and cognitive scientists think about consciousness. It's tough to see how you're in any position to judge whether your claim is bizarre or not.

It's like you come from a culture where all music is made on stringed instruments. One day you meet someone from a culture where music is made with horns, but the guy doesn't happen to have a horn with him. He tries to convince you that his culture makes non-stringed-instrument music, but you don't believe him, arguing that a horn doesn't have vibrating strings, it doesn't have a wooden resonating chamber, etc, so it obviously can't make music.

You may be very familiar with how the brain works, but you seem to have no awareness of the considerable thought that has been put forward concerning machine consciousness. You say that you can see how an electronic brain could be conscious, but you don't see how the pen-and-paper brain could be. You seem to be unacquainted with computation theory and the concept of Multiple Realizability. Basically, if it can run on an electronic brain, it can run on a suitably complex abacus, pen and paper, a bunch of rocks, a series of waterwheels, pulleys, and pendulums, and anything else that can realize a Turing Machine (which seems to be something else you think you know about, but don't really understand). This Planck length of perceptibility you are postulating is a non-starter. It's irrelevant.
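
Since this keeps coming up, here is about the smallest concrete Turing machine I can write down -- a standard textbook example (binary increment), sketched in Python purely for illustration. The machine just *is* the table; transistors, pen and paper, or rocks shuffled around a field can all execute the same table and must get the same answer.

```python
# A minimal Turing machine: add 1 to a binary number.
# (state, symbol) -> (symbol to write, head movement, next state)
TABLE = {
    ("carry", "1"): ("0", -1, "carry"),  # 1 + carry = 0, carry moves left
    ("carry", "0"): ("1",  0, "halt"),   # 0 + carry = 1, done
    ("carry", "_"): ("1",  0, "halt"),   # ran off the left edge: new high bit
}

def run(bits):
    tape = ["_"] + list(bits)   # "_" is the blank symbol
    head = len(tape) - 1        # start at the least significant bit
    state = "carry"
    while state != "halt":
        write, move, state = TABLE[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).strip("_")

print(run("1011"))  # 1100  (11 + 1 = 12)
print(run("111"))   # 1000  (7 + 1 = 8)
```

Notice that nothing in the table cares whether a step takes a nanosecond or a week.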
 
The brains of whales or elephants are much larger and presumably have much more processing power than our own, yet we have a tough time even evaluating their level of intelligence. How then can we create human-like intelligence? Even if we were to have the required processing power (which I don't expect we will), we don't really know what intelligence is.

Argument from incredulity, anyone?
 
Isn't this just an even less satisfactory version of the Turing test?

No. It answers an entirely different question from the standpoint of an entirely different population.

In particular, we know that elephants are self-aware, something that the Turing test could never have told us about them...

How do you tell the difference between a magpie and a p-zombie magpie?

No one outside the philosophy department cares about that particular difference. Indeed, few people outside the philosophy department even know what the term 'p-zombie' denotes, and those few who do tend to find it a stupid concept.
 
No. It answers an entirely different question from the standpoint of an entirely different population.

In particular, we know that elephants are self-aware, something that the Turing test could never have told us about them...

No one outside the philosophy department cares about that particular difference. Indeed, few people outside the philosophy department even know what the term 'p-zombie' denotes, and those few who do tend to find it a stupid concept.
If you want to say it's not a useful concept to computer scientists, I'm completely with you. As I said eons ago on this thread, I keep feeling people redefine the question based on pragmatism. That's fine so long as they don't pretend to be answering the original question any more. The new question is a technical challenge rather than an interesting question. I'm an interested novice in this, so generally I lurk and keep quiet, but it seems to me that the only reason these threads go on beyond a couple of posts is because people treat these questions as if they were the same.

Is anybody really arguing that, given a computer that gives the same output regardless of the clock speed, it matters what speed we clock it at (if we are only talking about its ability to pass some arbitrary Turing-style test)? Surely a question like that doesn't warrant >400 posts?
 
No. It simply seems ludicrously unsupported. It's "fairies at the bottom of the garden."

Completely unsupported.

ALSO completely unsupported. You have no idea how you or I would react if "our brains were somehow magically able to run only for subliminal lengths of time with pauses in between."

I know it's unsupported. As I've said, it's going to take me more time than I've had over this past weekend to write it all out.

What I'd like to do is to provide several assertions about the brain, each with links to whatever research is available on the Web. Then with that out of the way, discuss what all of those observations about the brain mean in terms of how the brain appears to create and maintain conscious awareness (or what little we can tell at this point).

Once we've got a picture of the process of conscious awareness in the brain, then we can ask what would happen if we slowed things down to one neural firing per second (assuming we could somehow keep blood pumping, lungs breathing, etc. at a normal speed to keep the body alive).
 
Is anybody really arguing that, given a computer that gives the same output regardless of the clock speed, it matters what speed we clock it at (if we are only talking about its ability to pass some arbitrary Turing-style test)? Surely a question like that doesn't warrant >400 posts?

I agree that the question has fundamentally changed. The OP seemed to be asking whether the intelligent computer would retain some kind of subjective experience and what it would be like. Then someone disputed whether such a consciousness would be possible at all, and here we are.
 
You may be very familiar with how the brain works, but you seem to have no awareness of the considerable thought that has been put forward concerning machine consciousness. You say that you can see how an electronic brain could be conscious, but you don't see how the pen-and-paper brain could be. You seem to be unacquainted with computation theory and the concept of Multiple Realizability. Basically, if it can run on an electronic brain, it can run on a suitably complex abacus, pen and paper, a bunch of rocks, a series of waterwheels, pulleys, and pendulums, and anything else that can realize a Turing Machine (which seems to be something else you think you know about, but don't really understand). This Planck length of perceptibility you are postulating is a non-starter. It's irrelevant.

From my point of view, it's amazing that anyone would think to speculate about computer consciousness (which I have no doubt is possible) without any reference to the mechanisms of consciousness.

You are assuming that the consciousness function will work at any operating speed, which is something (though not entirely) like assuming that an engine will run at any idle speed. (But I don't want to push that analogy too far.)

Like I said, I'll lay the whole thing out and then you can comment. Y'all might change my mind in the end. But not until we're actually discussing the mechanisms of consciousness.

So far, these arguments, which do not bother to make reference to how consciousness (from what we can tell) is generated, have enormous gaps that make them entirely unconvincing.
 
I agree that the question has fundamentally changed. The OP seemed to be asking whether the intelligent computer would retain some kind of subjective experience and what it would be like. Then someone disputed whether such a consciousness would be possible at all, and here we are.

Hmm. I wasn't aware that anyone was disputing the feasibility of artificial consciousness itself.
 
Hmm. I wasn't aware that anyone was disputing the feasibility of artificial consciousness itself.
Maybe not you, but others have.

Most recently this one:
The brains of whales or elephants are much larger and presumably have much more processing power than our own, yet we have a tough time even evaluating their level of intelligence. How then can we create human-like intelligence? Even if we were to have the required processing power (which I don't expect we will), we don't really know what intelligence is.
 
Argument from incredulity, anyone?

An argument from incredulity says that because I can't understand something, no one else can. In this case it's clear that no one really understands what intelligence or consciousness is, else there would not be so much research going on to identify it in other animals.

The problem is that the only tool we have for evaluating intelligence is to compare behavior to our own. Our own behavior, however, is an evolutionary solution to the problems presented by our environment. Different problems and different available tools demand different solutions so expecting something with different tools at its disposal to still use our solution seems misguided.
 
From my point of view, it's amazing that anyone would think to speculate about computer consciousness (which I have no doubt is possible) without any reference to the mechanisms of consciousness.

You are assuming that the consciousness function will work at any operating speed,

Right. That's one of the fundamental findings -- not assumptions -- of the various formalizations of "computer." They will work at any operating speed. Or more accurately, no one has found any formalizations of "computability" that are in any way speed-dependent.

So if you assume that "computer consciousness" is possible, then you implicitly assume either that our findings about "computers" remain relevant (including the finding of speed-independence) or that there's been some sort of breakthrough that renders everything we know about computers irrelevant.
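
To be concrete about what speed-independence means (a sketch, not a proof -- the checksum here is just a stand-in for any pure step-by-step computation): the transition rules of the standard models of computation take a state and a symbol as input; wall-clock time isn't an input at all. So you can check directly that every delay schedule leaves the output untouched.

```python
import random
import time

def compute(xs):
    # Any pure step-by-step computation; here, a toy running checksum.
    acc = 0
    for x in xs:
        acc = (acc * 131 + x) % 997
        yield acc  # one "clock tick" per step

def run_with_delays(xs, delays):
    # Execute the same computation under an arbitrary delay schedule.
    out = None
    for out, d in zip(compute(xs), delays):
        time.sleep(d)  # the clock stalls for d seconds between ticks
    return out

data = list(range(20))
baseline = run_with_delays(data, [0.0] * 20)
for _ in range(5):
    schedule = [random.uniform(0.0, 0.01) for _ in range(20)]
    assert run_with_delays(data, schedule) == baseline  # schedule is irrelevant
```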


Like I said, I'll lay the whole thing out and then you can comment. Y'all might change my mind in the end. But not until we're actually discussing the mechanisms of consciousness.

Unless your "mechanism of consciousness" is radically uncomputable -- which in turn would make "computer consciousness" impossible by definition -- it will have all the properties we associate with computability.
 
lomiller said:
The brains of whales or elephants are much larger and presumably have much more processing power than our own, yet we have a tough time even evaluating their level of intelligence. How then can we create human-like intelligence? Even if we were to have the required processing power (which I don't expect we will), we don't really know what intelligence is.

Arguments like that seem to have the hidden premise "...and we never will", which to my mind subjects the idea of consciousness to some sort of special pleading.

The Blue Brain Project is just one step in fully understanding the human brain; I say we go for it.
 
Is anybody really arguing that, given a computer that gives the same output regardless of the clock speed, it matters what speed we clock it at (if we are only talking about its ability to pass some arbitrary Turing-style test)?

Yes. Piggy is.
 
The problem is that the only tool we have for evaluating intelligence is to compare behavior to our own. Our own behavior, however, is an evolutionary solution to the problems presented by our environment. Different problems and different available tools demand different solutions so expecting something with different tools at its disposal to still use our solution seems misguided.

Okay, I can see your argument now. This problem is multiplied once you start to consider things like emotions and empathy.
 
