Robot consciousness

I said in my first post on this topic that if consciousness is emergent, it is an emergent feature, not an emergent property like the whiteness of clouds.
OK. I probably missed it because I do not distinguish different types of emergence; I believe emergence is different in each separate case anyway.

CM is a shorthand, which I identified upthread, for "consciousness modules".
Thank you.

As for our robot, we must assume that it uses the same method used by the human brain to produce consciousness, otherwise the question becomes nonsense. It would be like saying: "Suppose there's a car which has an engine that produces motion in a way unlike anything we currently know about or imagine -- how low could its idle speed go before it stalls?"
Well, the robot situation is a little bit different, because concepts like single-stepping make it clear that we are speaking of computers, and then it probably would not matter whether the consciousness were the result of a simulation of a human brain or of some exceedingly clever programming.

I don't think so. It seems to me that the OP posits a conscious robot (not a simulation of consciousness, but a genuine conscious entity) and asks what would happen in the real world if we slowed its processing speed.
I do not see a difference between a simulation and "real consciousness". I think there is only one kind of consciousness.

In a simulation where you simulate halting and restarting the entire universe, of course there's no change in anything. It's entirely trivial, no matter what you're simulating.
That was my thought too, and I wondered why you seemed to oppose it.

Although we are clearly in disagreement, it seems to be based more on different definitions of what consciousness is.
 
Well, the robot situation is a little bit different, because concepts like single-stepping make it clear that we are speaking of computers, and then it probably would not matter whether the consciousness were the result of a simulation of a human brain or of some exceedingly clever programming.

Well, I think we actually don't disagree here.

All I'm saying is that, for our robot, we must assume that the programming produces consciousness in the same basic way that the human brain does.

In other words, if the human brain creates consciousness by coordinating particular kinds of sets of information in particular combinations in predictable sequences, then the robot brain will do the same thing, even though it's using a circuit brain instead of a neuron brain.

So the robot brain doesn't have to be a copy of the human brain, as long as it moves data and information around in a similar manner.

The reason for this is that we simply don't know of any other way to achieve consciousness, and if we posit a completely unknown mechanism, then the question of what happens at slow processing speeds is simply unanswerable.

And we definitely agree that a "simulation of consciousness" = "real consciousness".
 
Well, I think we actually don't disagree here.
Good.

All I'm saying is that, for our robot, we must assume that the programming produces consciousness in the same basic way that the human brain does.

In other words, if the human brain creates consciousness by coordinating particular kinds of sets of information in particular combinations in predictable sequences, then the robot brain will do the same thing, even though it's using a circuit brain instead of a neuron brain.

So the robot brain doesn't have to be a copy of the human brain, as long as it moves data and information around in a similar manner.
An important difference is that this robot brain can be halted and single-stepped, but a human brain cannot; the biological mechanisms do not allow it. Halting is perhaps possible, but on resuming, a human brain would not carry on as if nothing had happened.

That is why I believe the OP can be answered with a clear "yes": the consciousness of this hypothetical robot would not be affected by single-stepping.

With humans or artificial biologically based consciousness, timing would be important, and consciousness would cease if attempts were made to slow down the processes more than a tiny amount.
 
It would make sense to me that a machine could be said to be conscious only if it were self aware.
The obvious problem is: how would one determine whether this were true of an otherwise seemingly sentient machine? A machine that is programmed to imitate human responses, and thereby mimic awareness, would not necessarily qualify. But how would one determine the difference? What test could be used?
I know that the participants in this discussion are self aware, because I assume they are all human. Any sentient non self aware robots out there to contradict this?
 
It would make sense to me that a machine could be said to be conscious only if it were self aware.
The obvious problem is: how would one determine whether this were true of an otherwise seemingly sentient machine? A machine that is programmed to imitate human responses, and thereby mimic awareness, would not necessarily qualify. But how would one determine the difference? What test could be used?
Do you know the Turing Test? It is not flawless; in fact, it is probably miserable, but as far as I know, it is the best we have got.
 
Do you know the Turing Test? It is not flawless; in fact, it is probably miserable, but as far as I know, it is the best we have got.

Yes, I am well acquainted with the "Turing test." From your link:

Turing considers the question "can machines think?" Since "thinking" is difficult to define, Turing chose to "replace the question by another which is closely related to it and is expressed in relatively unambiguous words."[2] Turing's new question is: "Are there imaginable digital computers which would do well in the [Turing test]"?[3] This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to this proposition.[4]

Thinking is certainly "difficult to define"; however, that does not render the Turing test the authoritative answer to this question. We all know what "self aware" means, just as we know what a myriad of other undefined terms in mathematics and science mean. We use those concepts, like "time", because without them we have no science. I am suggesting that "self aware" is one of those undefinable terms that we can use to determine whether a machine is conscious.
If we had a machine that could communicate through speech in a room with, say, a handful of humans, then after some interval of time, I think the people would know whether the machine were conscious or merely programmed to appear conscious.
I see no reason why this kind of "test" would not work if, instead of a room and speech, we had a machine participant on this thread.
 
How? Would you ask it to tell you about its mother?

I don't suppose there is any definite question or series of questions that would be absolutely definitive. But, I do think an extended discussion over time about life, death, the universe, art, fears, likes and dislikes, opinions, etc. would give one a sense of whether a machine were self aware. It would take some time and, probably, in the end, some intuition.

Is a dog self aware? How about a rat? a snake? a cricket? a paramecium? How do we know?
 
You mean self aware as in, there being something that it is like to be the self-aware entity you're talking to? I struggle to imagine a question, or series of questions, that could address this. Questions to determine whether you are talking to another human or not... sure. Even then, I don't see why it would be theoretically impossible for a computer to fool you reliably, eventually.

I appreciate you don't have one killer question, but what sort of angle of questioning do you think would be able to make this determination?

Is a dog self aware? How about a rat? a snake? a cricket? a paramecium? How do we know?
No way to know.
 
An important difference is that this robot brain can be halted and single-stepped, but a human brain cannot; the biological mechanisms do not allow it. Halting is perhaps possible, but on resuming, a human brain would not carry on as if nothing had happened.

That is why I believe the OP can be answered with a clear "yes": the consciousness of this hypothetical robot would not be affected by single-stepping.

With humans or artificial biologically based consciousness, timing would be important, and consciousness would cease if attempts were made to slow down the processes more than a tiny amount.

Suppose that somehow we could keep your body alive and healthy, but still slow down your brain's neural impulses to one per second.

Would you still be conscious?
 
No way to know.

There will eventually be a way to know.

Once we figure out the basics of the mechanisms and identify the signatures of conscious awareness, we'll be able to look at some dogs' brains and see if they're doing the same thing.

And maybe we're getting close.

A new paper suggests that four specific, separate processes combine as a "signature" of conscious activity. By studying the neural activity of people who are presented with two different types of stimuli – one which could be perceived consciously, and one which could not – Dr. Gaillard of INSERM and colleagues, show that these four processes occur only in the former, conscious perception task.
 
Suppose that somehow we could keep your body alive and healthy, but still slow down your brain's neural impulses to one per second.

Would you still be conscious?
I doubt it. I think I would be in the condition that we call "unconscious", but we can only know by asking a person who has tried it.
 
I doubt it. I think I would be in the condition that we call "unconscious", but we can only know by asking a person who has tried it.

Ok, why do you believe you would not be conscious?

And why do you believe our robot would be conscious under similar conditions?

ETA: I have to be out of town today, but I'm jotting down notes for a post on consciousness w/ the cites. I hope I can find all the studies I'm recollecting. Been finding new stuff, too, that has me re-thinking my mental model of consciousness. I need to get some newer books and dive into brain studies again.
 
Ok, why do you believe you would not be conscious?
It is just a feeling. It may well be wrong.

And why do you believe our robot would be conscious under similar conditions?
Because it is not biological. The technology it runs on allows unlimited interruptions without impairment of its functions.

ETA: I have to be out of town today, but I'm jotting down notes for a post on consciousness w/ the cites. I hope I can find all the studies I'm recollecting. Been finding new stuff, too, that has me re-thinking my mental model of consciousness. I need to get some newer books and dive into brain studies again.
It has been a very interesting discussion. I look forward to seeing the quotes you dig out.
 
Suppose that somehow we could keep your body alive and healthy, but still slow down your brain's neural impulses to one per second.

Would you still be conscious?

You yourself mentioned the need for maintaining coherence. The brain is an electro-chemical machine. Most attempts to slow it down (such as by cooling) will cause different rates of slowing for different chemical reactions and no slowing for electrical propagation. Eventually, the brain would stop functioning as we are used to and lose the property we call consciousness.

However, if there were a way to equally slow down every function of the brain, such as by creating a hypothetical stasis field, consciousness would not be lost.


Most digital computers, though, are designed such that their state can be frozen. This state can be copied out of the machine and written on paper. The state on paper can be hand-emulated to advance the state one clock cycle at a time. The physical machine can be destroyed and an exact copy of the machine built. The updated state can be loaded back onto the hardware and execution resumed. If this machine initially had consciousness, its consciousness will never have been lost. There could even be memories from the time it was being hand-executed on paper, if that time were long enough to acquire inputs.
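The freeze-and-restore idea can be sketched with a toy deterministic machine. The `Machine` class and its registers below are invented purely for illustration, not a model of any real hardware:

```python
import json

class Machine:
    """A toy machine whose entire state is two registers and a memory array."""
    def __init__(self, pc=0, acc=0, mem=None):
        self.pc, self.acc = pc, acc
        self.mem = mem if mem is not None else [1, 2, 3]

    def step(self):
        # One "clock cycle": add the current memory cell to the accumulator.
        self.acc += self.mem[self.pc % len(self.mem)]
        self.pc += 1

    def freeze(self):
        # The complete state, written out (it could just as well go on paper).
        return json.dumps({"pc": self.pc, "acc": self.acc, "mem": self.mem})

    @staticmethod
    def thaw(snapshot):
        # An exact copy of the machine, resuming from the saved state.
        return Machine(**json.loads(snapshot))

m = Machine()
for _ in range(4):
    m.step()
snapshot = m.freeze()    # copy the state out of the machine
del m                    # "destroy" the original hardware

m2 = Machine.thaw(snapshot)
m2.step()                # execution resumes exactly where it left off
```

A machine run continuously for the same number of steps ends in the identical state, which is the sense in which nothing is lost across the halt.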
 
It is just a feeling. It may well be wrong.


Because it is not biological. The technology it runs on allows unlimited interruptions without impairment of its functions.


It has been a very interesting discussion. I look forward to seeing the quotes you dig out.

Hi, I'm remote right now. During my drive I outlined the post in my head. If I can't follow up this evening, I'll post during the week.

But basically what I'm going to lay out is an argument that consciousness is a function which will run with a certain degree of interruption, but which has a threshold below which it cannot run, regardless of whether we're dealing with a human or robot brain.

We'll see if that argument holds up to the slings and arrows.
 
You yourself mentioned the need for maintaining coherence. The brain is an electro-chemical machine. Most attempts to slow it down (such as by cooling) will cause different rates of slowing for different chemical reactions and no slowing for electrical propagation. Eventually, the brain would stop functioning as we are used to and lose the property we call consciousness.

However, if there were a way to equally slow down every function of the brain, such as by creating a hypothetical stasis field, consciousness would not be lost.


Most digital computers, though, are designed such that their state can be frozen. This state can be copied out of the machine and written on paper. The state on paper can be hand-emulated to advance the state one clock cycle at a time. The physical machine can be destroyed and an exact copy of the machine built. The updated state can be loaded back onto the hardware and execution resumed. If this machine initially had consciousness, its consciousness will never have been lost. There could even be memories from the time it was being hand-executed on paper, if that time were long enough to acquire inputs.

If you stopped the system and restarted it, regardless of whether it's a computer or a human brain, consciousness would resume. It would be the equivalent of falling asleep and waking up.

Our brains regularly shut down the consciousness function and restart it. Of course, with a robot brain, you could potentially do it more seamlessly.

But if we're talking about single-stepping the process of cognition itself, or slowing down the steps to an extreme degree, I'm going to argue that at a certain speed consciousness is not possible because what you end up with is a string of non-conscious states, whether you're dealing with a consciousness-enabled robot brain or a human brain.

Stay tuned.
 
But if we're talking about single-stepping the process of cognition itself, or slowing down the steps to an extreme degree, I'm going to argue that at a certain speed consciousness is not possible because what you end up with is a string of non-conscious states, whether you're dealing with a consciousness-enabled robot brain or a human brain.

The computer is and always has been a string of states. That you are at some point aware of the discontinuous nature of the states in no way affects the computer. It's the same as putting yourself in a room with a super-duper advanced computer system that is able to scan your entire state every pico-second. Does this suddenly make you unconscious?
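The point that a stepped computer produces the same string of states however long it pauses can be made concrete with a small sketch. The transition rule below is an arbitrary toy, and `time.sleep` stands in for a halt of any length:

```python
import time

def run(state, steps, pause=0.0):
    """Advance a deterministic state through `steps` transitions, optionally
    halting between each one. The resulting string of states is identical
    whatever the pause: the machine cannot tell it was stopped."""
    trace = [state]
    for _ in range(steps):
        time.sleep(pause)               # the halt between states
        state = (2 * state + 1) % 97    # one deterministic transition
        trace.append(state)
    return trace

fast = run(5, 6)               # full speed
slow = run(5, 6, pause=0.01)   # "single-stepped" with pauses
assert fast == slow            # the identical string of states
```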
 
The computer is and always has been a string of states. That you are at some point aware of the discontinuous nature of the states in no way affects the computer. It's the same as putting yourself in a room with a super-duper advanced computer system that is able to scan your entire state every pico-second. Does this suddenly make you unconscious?

No, I think you don't understand what I'm saying. And I'm sorry for the delay but I was on the road yesterday and this is going to be a rather long post.

But in short, what I'm saying is that if you have a conscious computer, even though you can stop and start the computer and it will simply move on to the next calculation, nevertheless if you slow the processing down too much, the computer -- like a human brain -- would no longer be conscious because what you'd have is a series of non-conscious moments.

In other words, my contention is that consciousness actually requires a certain minimum amount of time of "continuous" operation -- which is to say, not interrupted by pauses greater than a certain minimum frame of time.

This may seem like a bizarre claim from the point of view of computer science, but when you look at what we know and what we can deduce about how consciousness is generated and maintained, it doesn't seem bizarre at all.

An analog in the human brain is that it can, and does, take micronaps that are not noticeable to conscious awareness, which appears continuous; and it can "black out" for longer periods of time and "come to" with the result that time appears to have suddenly "jumped" (for example, the experience of driving and suddenly your wheels are off the road and you realize you were momentarily asleep).

And yet we also know that events which occur at very short lengths of time are processed by the brain just fine, but cannot be consciously perceived. So if you were to time the blackouts so that the interval between them were shorter than this subliminal threshold, you'd end up with a series of non-conscious moments even though the brain is working during those moments.

That's the important point: Just because the brain is operating doesn't mean that the consciousness function is working.

What I want to lay out in more detail is an argument that a robot brain which is conscious when run at normal speed would actually not be conscious when run at an extremely slow speed (e.g. pen-and-paper calculation speed) because you'd end up with a series of non-conscious states, in the same way that you or I would not be conscious if our brains were somehow magically able to run only for subliminal lengths of time with pauses in between.
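The argument above can be modeled as a sketch. Everything here is an assumption for illustration (the 200 ms threshold is invented, not an established figure); the point is only that if a pause resets the clock, sub-threshold runs never add up to a conscious moment:

```python
def has_conscious_moment(intervals, threshold_ms=200):
    """`intervals` is a list of (run_ms, pause_ms) pairs: how long the brain
    runs uninterrupted before each pause. Under the argument above, a pause
    resets the clock, so only an individual run at or above the (assumed)
    threshold yields a conscious moment."""
    return any(run_ms >= threshold_ms for run_ms, _pause_ms in intervals)

# Normal operation: one long uninterrupted run -> conscious.
print(has_conscious_moment([(5000, 10)]))        # True
# Extreme single-stepping: the total run time is the same 5000 ms, but no
# single run crosses the threshold -> a string of non-conscious moments.
print(has_conscious_moment([(1, 1000)] * 5000))  # False
```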
 
Piggy, your claim that there is an absolute time scale to consciousness is indeed bizarre.

If we had some way to see inside a human brain and film it at high speed we would see the time gaps when no synapses are firing. Yet, this observation in no way affects the operation of the brain.* If it was conscious before being observed, it will still be conscious while being observed. By symmetry, a computer brain that is conscious while running at full speed will still be conscious when running slow enough that we can observe the transitions of each logic gate.

Mathematical symmetry trumps your belief in magical time.


(*) Since this is a hypothetical observation we can ignore the effects of quantum mechanics.
 
