
Robot consciousness

Paul- are you familiar with Benjamin Libet's experiments on readiness potentials?
http://en.wikipedia.org/wiki/Benjamin_Libet
Read the bit from "Implications of Libet's experiments."
If the conscious and subconscious model of the universe does involve back-referring sensory inputs, then there would have to be a relationship between the robot's clock speed and its data transfer speed, or it could have no coherent awareness of its own spatial extent at any instant. Also, inputs would reach the CPU out of sync. I suspect it would have no linear sense of time passing and therefore no rational memory. It would be insane.

I presume current computers synchronise simultaneous events which travel different distances from the sensor to the CPU by delaying some signals till others catch up.
(Buffering.) Whether humans do this in software or in a neurochemical process I have no clue, but we must do it somehow; otherwise, if I slap your ear and step on your foot at the same time, you will feel the slap as occurring earlier than the stamp, because your ear is nearer to your brain.
Which you don't.
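
To put the buffering idea in computing terms, here is a rough Python sketch (the sensors, latencies, and numbers are all invented for illustration): each signal is back-dated by its known transmission delay, and nothing is presented to "awareness" until the slowest channel from the same instant has had time to arrive.

[code]
# Toy sketch of latency compensation ("buffering"). Two sensors fire at
# the same physical moment, but their signals take different times to
# reach the CPU; we re-align them before they are "experienced".

SENSOR_LATENCY = {"ear": 0.01, "foot": 0.05}  # seconds -- invented values
MAX_LATENCY = max(SENSOR_LATENCY.values())

def perceive(raw_events):
    """raw_events: (arrival_time, sensor, stimulus) tuples at the CPU.
    Returns events re-stamped to their estimated physical time, so
    simultaneous stimuli are perceived as simultaneous."""
    aligned = []
    for arrival, sensor, stimulus in raw_events:
        physical_time = arrival - SENSOR_LATENCY[sensor]
        # Hold the event until every slower channel could have arrived.
        experienced_at = physical_time + MAX_LATENCY
        aligned.append((physical_time, experienced_at, sensor, stimulus))
    return sorted(aligned)

# A slap and a stamp at t=0 arrive at different times, yet come out
# stamped with the same physical moment:
for event in perceive([(0.01, "ear", "slap"), (0.05, "foot", "stamp")]):
    print(event)
[/code]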

I suspect that this synchronisation of sensory data in time actually IS, to a considerable extent, "consciousness", though it happens unconsciously.

(One implication is that part of what "mind altering" drugs do is delay or block the delay of signals, so the brain's time sense is shot to hell.)
 
We're talking about a robot brain, which presumably doesn't operate the way a human brain operates. Do you think the method of execution of the algorithms matters? That's the question I'm asking.

~~ Paul

Well, you can't have it both ways now.

The human brain is a physical system which produces consciousness. If any robot brain is also conscious the way a human is, then we have to assume that something similar is going on.

If not, then you're going to have to break down that wall and explain the mechanism.

Outside of that, I can't see that the questions even make sense.

From what we can tell, there's a lot of parallel processing going on in the brain which, together with sequential processing from module to module, creates the phenomenon.

If you've got a robot that mimics this process, then it really makes no sense to ask whether working out the math of the program on paper is going to create consciousness. That removes essential conditions, i.e., running the thing in the actual physical robot.

If you're talking about some form of consciousness which can be achieved when the process is single-stepped, you're talking about something that's nothing like human consciousness, so you'll have to explain what in the world that might be.
 
If I ask the robot, "Are you conscious?" and it answers, "Yes.", how do I prove it wrong?
That is what the Turing Test tries to accomplish: to prove or disprove whether a machine has intelligence, which implies consciousness. It is straightforward to write a program that detects the "Are you conscious?" question and answers "yes" without any intelligence behind the answer. The assumption of the Turing Test is that you need to ask a series of questions.

The current way that most projects build "artificial intelligence" is to build up so many answers to so many questions that it is hard for a human asking the Turing Test questions to tell whether a human or a computer answered. I think it is an illusion of intelligence, just as more sophisticated computers can generate more realistic computer graphics in movies; it's still fake, even if it can fool some humans. So I believe the Turing Test is not sufficient to prove whether a robot is conscious.
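
To make that concrete, a canned responder of the sort described above is only a few lines of Python (the questions and answers here are invented for the example):

[code]
# A trivial, intelligence-free responder: it matches the question text
# and emits a canned answer. No model of self, no understanding -- just
# string lookup.

CANNED_ANSWERS = {
    "are you conscious?": "Yes.",
    "do you think?": "Of course I do.",
}

def respond(question):
    return CANNED_ANSWERS.get(question.strip().lower(), "Interesting question.")

print(respond("Are you conscious?"))  # -> Yes.
[/code]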

It's a tough question how to distinguish true consciousness from a sophisticated illusion of consciousness. I don't have a good answer.
 
crocofish said:
That is what the Turing Test tries to accomplish: to prove or disprove whether a machine has intelligence, which implies consciousness.

I disagree. I don't see why the two have to go hand in hand, why we can't imagine an intelligent animal evolving without consciousness, or a conscious animal that fails the Turing test (e.g. a dog).

Consciousness is just another feature of some evolved creatures.

And it seems to be a limited downstream process.

I think the Turing test is not an appropriate test for consciousness.

With regard to animals, I think the only way we're going to answer questions about consciousness is to crack the problem of how the brain produces it, and then look for analogous structures which appear to be behaving in the same way.

Of course, evolution often arrives at the same solution by various means, so we'll also have to keep an eye out for other types of neurological configurations that might produce the same effect.

When it comes to robots, similar logic should apply.
 
nescafe said:
Asynchronous circuits do not have a master clock (or clocks). Any synchronization is an emergent property of how the system is interconnected, not something imposed by a clock running at a given speed. That is the primary difference -- asynchronous vs. synchronous has little to do with whether the system is or can be synchronized, but everything to do with how it is synchronized.
So we need not be concerned with the clock speed specifically, but such computers still have intrinsic speeds. What happens if we slow it way down, or even if we hand-simulate it?
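
To make the hand-simulation question concrete, here is a loose Python sketch (toy code, not a real circuit model) of a clockless producer/consumer pair coordinated only by request/acknowledge flags; the result is the same however slowly, or by hand, you step it.

[code]
# Loose sketch of clockless (asynchronous) coordination: a producer and
# consumer synchronize through req/ack "wires", never a clock tick.
# Stepping the loop at any speed -- even by hand -- gives the same result.

def run_async(data):
    req, ack = None, True          # handshake wires
    pending, results = list(data), []
    while pending or req is not None:
        if ack and pending:        # producer: issue next request
            req, ack = pending.pop(0), False
        elif req is not None:      # consumer: act on outstanding request
            results.append(req * req)  # some computation on the datum
            req, ack = None, True      # acknowledge
    return results

print(run_async([1, 2, 3]))  # [1, 4, 9] at any stepping speed
[/code]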

Would your consciousness be crippled if your flicker fusion rate was 2.5 frames/min instead of 25 or so frames/second?
Excellent question. Or what if the flicker fusion rate imposed by the physical eye remained the same, but my brain ran at a significantly different speed?

~~ Paul
 
sol invictus said:
But I think the first statement contradicts the second. You can't say it's "conscious like a human, but not human" without defining or assuming a definition for "conscious".
Yes, I agree we need a definition, but I think it would have to be some sort of Turing test.

sol invictus said:
I suspect it's possible to build a robot that could pass just about any such individual test. But I also don't think there's any (non-arbitrary) test for consciousness, because I don't think it's a well-defined concept. Unless you simply define it via a test, but then - as with any sharp definition - we could easily answer your question.

For example we could use the Turing test, and then the questions in the OP are trivial to answer (no, no, and no).
Where did you get those answers? I suppose if we required the robot to answer the questions in a time frame similar to that of humans, then you're right. And that would be part of the Turing test. So let's relax the test and not require the answers at any particular speed.

I agree that the whole question of consciousness is fuzzy and fraught with poorly defined concepts. But unless we say that this fact precludes robot consciousness altogether, it still leaves this interesting issue. If it is the execution of algorithms that will produce robot consciousness, what are the requirements on the machine that executes the algorithms?

~~ Paul
 
crocofish said:
I currently see no signs of computer consciousness today at any speed. What is currently portrayed as A.I. is just big databases with fancy decision tree processing that give the illusion of intelligence. As computers get bigger and faster, the illusion gets bigger and more complex, but I believe that type of system is not headed toward consciousness.
Why do you think that the wholeness of conscious experience is anything other than a giant illusion constructed out of piles of complex components?

It's a tough question how to distinguish true consciousness from a sophisticated illusion of consciousness. I don't have a good answer.
Right. There might not even be any difference.

~~ Paul
 
Piggy said:
If you've got a robot that mimics this process, then it really makes no sense to ask whether working out the math of the program on paper is going to create consciousness. That removes essential conditions, i.e., running the thing in the actual physical robot.
Why does running the algorithms in the physical robot matter, as long as the inputs are the same?

If you're talking about some form of consciousness which can be achieved when the process is single-stepped, you're talking about something that's nothing like human consciousness, so you'll have to explain what in the world that might be.
Why couldn't it be similar to human consciousness, only slower?

As I said above, if the algorithms are self-contained, entirely deterministic, unaffected by wall clock time, and probably some other requirements, then how can it matter what speed they run at (except for speed of response) or what processor is used?

My guess is that those requirements are not met by the human brain.
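
For a machine that does meet them, here's a toy Python sketch (the update rule is invented): a deterministic state machine that never consults the wall clock produces an identical trace however slowly it runs.

[code]
import time

# A self-contained, deterministic state machine. Its trace is a pure
# function of its inputs, so processor speed changes only the latency
# of the answers, never the answers themselves.

def step(state, inp):
    return (state * 31 + inp) % 1000   # any deterministic update rule

def run(inputs, pause=0.0):
    state, trace = 0, []
    for inp in inputs:
        time.sleep(pause)              # stand-in for a slower processor
        state = step(state, inp)
        trace.append(state)
    return trace

assert run([1, 2, 3]) == run([1, 2, 3], pause=0.5)  # same trace, later
[/code]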

~~ Paul
 
Piggy said:
I think the Turing test is not an appropriate test for consciousness.

With regard to animals, I think the only way we're going to answer questions about consciousness is to crack the problem of how the brain produces it, and then look for analogous structures which appear to be behaving in the same way.
If there is no Turing test for consciousness, then I'm not sure how we can "crack the problem." How would we know that the subject(s) we are studying, and assuming are conscious, really are conscious? We might be cracking nonconsciousness by mistake.

~~ Paul
 
Piggy said:
I disagree. I don't see why the two have to go hand in hand, why we can't imagine an intelligent animal evolving without consciousness, or a conscious animal that fails the Turing test (e.g. a dog).

Consciousness is just another feature of some evolved creatures.

And it seems to be a limited downstream process.

I think the Turing test is not an appropriate test for consciousness.

With regard to animals, I think the only way we're going to answer questions about consciousness is to crack the problem of how the brain produces it, and then look for analogous structures which appear to be behaving in the same way.

Of course, evolution often arrives at the same solution by various means, so we'll also have to keep an eye out for other types of neurological configurations that might produce the same effect.

When it comes to robots, similar logic should apply.

I am aware of Mr Turing and his Test, and I think both he and I (in my post) have our tongues somewhat in our cheeks. ;) All that success on the TT would mean is that the computer had passed the Test.

If the robot refused to answer any questions other than the one about its consciousness, what could we conclude?
 
to fellow nitpickers -- assume that by consciousness we are talking about the human style that all of us who are not p-zombies reputedly possess.

Paul said:
So we need not be concerned with the clock speed specifically, but such computers still have intrinsic speeds. What happens if we slow it way down, or even if we hand-simulate it?

Well, the answer to that depends on one's frame of reference. If one can be conscious without interacting (or needing to interact) with the world or other (presumably conscious) systems, then the speed at which the conscious system operates is irrelevant, as long as forward progress is made.

If one can only be conscious (or be considered conscious) by virtue of interaction with other conscious entities, then (it would seem) one would cease being conscious once meaningful interaction is no longer possible (at least, from the perspective of whoever is determining whether any given system is conscious).

Heck, both of these frames could be valid at the same time. :)

Excellent question. Or what if the flicker fusion rate imposed by the physical eye remained the same, but my brain ran at a significantly different speed?

~~ Paul
My intuitions suggest that we may be dealing with a threshold effect -- in order to be recognized as conscious by us hairless apes, the system implementing consciousness has to have a certain (as yet undefined) level of complexity. That and $4.00 will get you overpriced froofie coffee at your local Charbucks.


Semi-related: Have you read the short story "Understand"?
 
Paul said:
Why does running the algorithms in the physical robot matter, as long as the inputs are the same?

Because the phenomenon of consciousness is something, as far as we know, that arises in real time in physical systems.

If you propose a robot that has a computer brain that produces consciousness, then it happens in the robot.

Have you ever worked out the logic for a program, then compiled it and run it to see if it works? Why doesn't it "work" when you just calculate it? Simple -- because you're not running it on the hardware.

Let's say you traced the exact sequence of neuron firings in a brain for a given conscious event. You would not expect that diagramming that sequence of firings would produce any such conscious event.

You might as well say that you could build a car by describing the assembly process.
 
Paul said:
Why couldn't it be similar to human consciousness, only slower?

As I said above, if the algorithms are self-contained, entirely deterministic, unaffected by wall clock time, and probably some other requirements, then how can it matter what speed they run at (except for speed of response) or what processor is used?

My guess is that those requirements are not met by the human brain.

Undoubtedly, given that we're dealing with a physical system, there's a range of acceptable speed which will sustain the process.

And that's a big ol' pile of assumptions you got there.
 
Paul said:
If there is no Turing test for consciousness, then I'm not sure how we can "crack the problem." How would we know that the subject(s) we are studying, and assuming are conscious, really are conscious? We might be cracking nonconsciousness by mistake.

We work with what we have and move forward.

The one thing we know is that the human brain produces consciousness.

It's almost certain that other mammalian brains do, too, and probably some other critters' brains as well. The smart money is that dogs, pigs, horses, non-human primates, and elephants are conscious.

But we know humans are, so the first step is to figure out the mechanism in humans.

Then we look for analogous systems in the brains of animals that are closest to us in evolutionary terms.

Along the way we will likely discover better means of discerning conscious activity from non-conscious activity.

I don't see any other reasonable way of proceeding at this point.
 
Paul said:
Where did you get those answers? I suppose if we required the robot to answer the questions in a time frame similar to that of humans, then you're right.

That's what I had in mind, yes.

And that would be part of the Turing test. So let's relax the test and not require the answers at any particular speed.

As in, the human questioner gets a written answer some time after asking? OK, then the answers are yes, yes, and yes.

See what I mean?
 
Why do you think that the wholeness of conscious experience is anything other than a giant illusion constructed out of piles of complex components?


Right. There might not even be any difference.

~~ Paul
What is experiencing this illusion?

In any case, when you say "there might not be a difference", you seem to acknowledge a possibility that there might indeed be a difference.
 
Paul said:
Yes, I agree we need a definition, but I think it would have to be some sort of Turing test.
I agree that the only practical test seems to be a Turing-type one, but it seems to me that one is then in the position of defining consciousness to fit the only available test.
 
