
Robot consciousness

Of course you won't be able to just slow down or speed up a human's or an AI's brain situated in the real world at will and expect it to stay conscious.

And yet, that is precisely the question before us.

Consider a conscious robot with a brain composed of a computer running sophisticated software. Let's assume that the appropriately organized software is conscious in a sense similar to that of human brains.

Would the robot be conscious if we ran the computer at a significantly reduced clock speed? What if we single-stepped the program? What would this consciousness be like if we hand-executed the code with pencil and paper?
 
But let's say you create a log of the exact sensory inputs you receive while looking at that flickering image, with exact timestamp information, as well as the starting state of your computation device. You can then recreate the exact computation that happens when looking at that image, regardless of the actual speed at which it is performed. Be it a nanosecond or a thousand years, the computation and its results will stay the same. If this computation results in consciousness, it will do so regardless of the actual time it took.
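To make the replay idea concrete, here is a minimal Python sketch, assuming the brain computation can be modeled as a deterministic step function over the logged inputs (the functions and values below are toy stand-ins, not a real brain model):

[code]
import time

def step(state, sensory_input):
    # Stand-in for one tick of the brain computation; fully deterministic.
    return hash((state, sensory_input))

def replay(initial_state, input_log, delay=0.0):
    # Replaying the timestamped log: 'delay' stretches the wall-clock time
    # between steps, but cannot change the computation or its result.
    state = initial_state
    for timestamp, sensory_input in input_log:
        state = step(state, sensory_input)
        time.sleep(delay)   # a nanosecond or a thousand years: same outcome
    return state

log = [(0.00, "frame_1"), (0.04, "frame_2"), (0.08, "frame_3")]
assert replay(42, log) == replay(42, log, delay=0.5)
[/code]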

One could actually consider not only using a pencil to run a brain, but also having another person use another pencil to compute the simulation environment that brain lives in and the associated incoming sensory input. And because both of these algorithms are computable, you could even have a single TM/human with a pencil computing both the brain's activity and the simulation it lives in. For such a system it doesn't matter whatsoever at what speed it is run; it will always be conscious and in sync with its sensory input.
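And a sketch of the single-worker case, again purely illustrative, assuming both the brain and its world reduce to deterministic step functions:

[code]
def brain_step(brain_state, percept):
    # One pencil-step of the brain: new state plus a motor action.
    return brain_state + percept, brain_state % 7

def world_step(world_state, action):
    # One pencil-step of the simulated world: new state plus a percept.
    return world_state + action, world_state % 5

def cosimulate(brain_state, world_state, steps):
    # A single worker alternates between the two computations, so the
    # brain and its world advance in lock-step and can never desync.
    percept = 0
    for _ in range(steps):
        brain_state, action = brain_step(brain_state, percept)
        world_state, percept = world_step(world_state, action)
    return brain_state, world_state

# Microseconds of silicon or centuries of pencil work: same state sequence.
print(cosimulate(1, 1, steps=10))
[/code]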

Of course, all of the above assumes consciousness to be computable, as discussed earlier in this thread.

I wasn't talking about looking at movies, but the fact that our awareness is kind of like a movie, only apparently continuous.

Maintaining conscious awareness of our environment requires a certain level of processing speed, if you will.

Take the case of hearing your name at a party.

What is received by the brain modules that generate conscious experience (CMs) isn't the raw input, not by a long shot. What goes in has been highly processed, mixed with stored data, and "chunked".

What goes in is something akin to "That's my wife's voice saying my name a few feet over to my left".

Other parts of the brain do all the pre-processing. Then bundles of highly processed information are streamed into the CM. But the CM does not treat them as if they were streamed. It treats them as if they were coherent.

Often, the preprocessing introduces errors, sometimes gross errors, because it uses shortcuts. We often "see" things that aren't there, and fail to see things that are.

If we slow down the processing speed so that information drips into the CMs at a rate below the subliminal threshold, the apparent coherence is lost, and the CMs can't process the data, because it's not formatted correctly. It would be treated as discrete impulses, which the CMs can't "read".

Our brains do depend on a certain minimum speed in order to generate conscious awareness.

So the pencil brain would only work if it were part of a system which, at some points, acted much faster.
 
And yet, that is precisely the question before us.
No, the question isn't whether an AI brain in the real world will stay conscious if it's run totally out of sync with its external interfaces.
This additional constraint doesn't exist in the OP's question.
 
No, the question isn't whether an AI brain in the real world will stay conscious if it's run totally out of sync with its external interfaces.
This additional constraint doesn't exist in the OP's question.

Then how do you read the question?
 
Then how do you read the question?
There is no reference to external interfaces, so we might as well think about an AI that has none, or has none at the moment we're undertaking our experiment.

Consider this: We have an advanced robot that senses the world with some sensors. We now disconnect all external sensors (and internal ones too, just for good measure) for 5 seconds and then reconnect them. If the robot is conscious before we unplug the sensors, it is quite likely that it is conscious during the 5-second blackout period too. If we now severely underclock the robot's CPU during its blackout phase (and yes, its internal clock is dependent on the CPU clock), the robot will stay conscious and experience 5 conscious seconds, while in reality the blackout phase might have taken 10 seconds or 10,000 years, depending on the underclocking.
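As a toy illustration of the bookkeeping, assuming the robot counts internal time in CPU ticks (all numbers below are made up):

[code]
TICKS_PER_SUBJECTIVE_SECOND = 1000   # illustrative value

def blackout(subjective_seconds, underclock_factor):
    # Internal time is counted in CPU ticks, so it scales with the clock;
    # wall-clock time stretches by the underclock factor instead.
    ticks = subjective_seconds * TICKS_PER_SUBJECTIVE_SECOND
    wall_clock_seconds = subjective_seconds * underclock_factor
    return ticks, wall_clock_seconds

# Same 5 experienced seconds, wildly different elapsed real time:
print(blackout(5, underclock_factor=2))               # 10 wall-clock seconds
print(blackout(5, underclock_factor=63_072_000_000))  # ~10,000 years
[/code]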
 
Consider this: We have an advanced robot that senses the world with some sensors. We now disconnect all external sensors (and internal ones too, just for good measure) for 5 seconds and then reconnect them. If the robot is conscious before we unplug the sensors, it is quite likely that it is conscious during the 5-second blackout period too.

If it is conscious during the sensory blackout, then it is dreaming, which is fine.

In common parlance, we are "unconscious" when dreaming, but for our purposes here we have to consider dreams to be "conscious" experience, since we are aware of what are essentially hallucinations during our dreams.

So we posit a dreaming robot.

If we now severely underclock the robot's CPU during its blackout phase (and yes, its internal clock is dependent on the CPU clock), the robot will stay conscious and experience 5 conscious seconds, while in reality the blackout phase might have taken 10 seconds or 10,000 years, depending on the underclocking.

Ok, I'll need some clarification on this point here.

What actually happens physically/electronically to the robot brain when we "severely underclock the... CPU"?

What's the actual difference in the CPU's activity between the normal and underclocked states?

Thanks.
 
What actually happens physically/electronically to the robot brain when we "severely underclock the... CPU"?

What's the actual difference in the CPU's activity between the normal and underclocked states?

Thanks.
In its simplest form, underclocking would work like it does on most of today's notebook computer CPUs: by decreasing the value of the CPU multiplier. The exact method of slowing down isn't as important as the concept that a slowdown can be achieved in principle, be it by underclocking, by inserting sleep or do-nothing instructions between each of the AI's instructions, or by whatever else comes to mind.
The only necessary condition for completely equivalent computation, regardless of actual speed, is that all components of the system are slowed down proportionally, so they don't get out of sync.
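A minimal sketch of the sleep-insertion variant, assuming the AI can be treated as a program interpreted one instruction at a time (the toy instructions and names are hypothetical):

[code]
import time

def run_program(instructions, state, slowdown=0.0):
    # One hypothetical AI "instruction" executes per loop iteration. The
    # internal clock advances per instruction, not per wall-clock second,
    # so every part of the system slows down by the same factor.
    internal_clock = 0
    for op in instructions:
        state = op(state)        # execute one instruction
        internal_clock += 1      # internal time ticks with the computation
        time.sleep(slowdown)     # the inserted do-nothing pause
    return state, internal_clock

program = [lambda s: s + 1, lambda s: s * 2, lambda s: s - 3]
# Identical result and identical internal time at any speed:
assert run_program(program, 10) == run_program(program, 10, slowdown=0.01)
[/code]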
 
The position I was arguing against is that a bunch of neurons is all you need for consciousness to "emerge" from the critical mass.

That does not appear to be the case.
On what do you base this conclusion?

Consciousness, like vision, doesn't just arise from any ol' bundle of neurons. It requires particular kinds of circuitry.
That is possible, but it might remain a Turing Machine, and it can in principle be simulated on paper or in silicon. Do you have any sources for the specialised circuitry?

I'm sorry, but I don't know what you mean by "lower layer of consciousness".
The unconscious layer that is needed for consciousness.

Of course it's true that consciousness evolved like everything else in our biological world. I don't understand why you're bringing it up.
You brought up evolution, I did not. I also did not see the relevance.

Well, let's take an example of a conscious event.

First of all, we know there's a timespan below which events will not be consciously processed. Flicker that image too fast, and an observer won't be aware of it, even though his brain has processed it (which we can tell because it influences the observer's behavior).

So consciousness, as you and roger have both pointed out, is something that doesn't exist in very small frames of time, but in what we might call macro time.
So far you have not brought something up that cannot be simulated. You do realise that we are not talking about a realtime simulation of a brain, right?

Let's take the example of being aware that someone has said your name at a party.

An enormous amount of data has to be aggregated and analyzed and associated. (Yes, I know the pencil brain can aggregate data etc.)

All the incoming sounds have to be parsed, matched with stored patterns, compared with each other, triaged, prioritized.

The result is a pretty massive assemblage of simultaneous information which results in something like: "In this particular setting, that set of sounds is someone saying my name, which is more important than what I'm focusing on now, so I'll attend to it instead".

Can that feat be accomplished by feeding discrete bits of information at a very slow rate into the areas of the brain responsible for conscious awareness?
Not in a real brain, but in a simulated brain, it would be no problem.

No, because when you do that you lose the large-scale data coherence that's necessary for the brain to do this, and you have neuronal activity in discrete pulses that are below the minimum timespan for events to be consciously processed.

I suppose if you had a very slow machine that stored up information, then sent it in coherent bundles in short bursts of macro time to the modules responsible for conscious awareness, you could have intermittent bursts of consciousness, though.
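A toy sketch of that store-and-burst arrangement, assuming, purely for illustration, that "coherence" just means the bundle arrives in one piece:

[code]
import time

def slow_producer(items, delay=0.0):
    # Information trickles in arbitrarily slowly...
    buffer = []
    for item in items:
        time.sleep(delay)
        buffer.append(item)
    return buffer            # ...but is released as one coherent bundle.

def consciousness_module(bundle):
    # The module only ever sees the bundle at full speed, so on this
    # hypothesis its coherence requirement is met.
    return " ".join(bundle)

print(consciousness_module(slow_producer(["that's", "my", "name"])))
[/code]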

Essentially, we're asking if there's a stall speed for consciousness like there's a stall speed for engines.
This was the question of the OP, and I believe that for a biological brain, there is a stall speed. For a simulated brain there may be no stall speed at all. The entire contents of every element of the simulated brain, down to the last atom, could be stored somewhere and restored at a later stage, and that consciousness would never notice what had happened.
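As a sketch of the store-and-restore point, with Python's pickle standing in for whatever the real serialization mechanism would be (the step function is a made-up placeholder):

[code]
import pickle

def step(state):
    # One tick of the hypothetical brain simulation.
    return {"tick": state["tick"] + 1, "memory": state["memory"]}

state = {"tick": 0, "memory": [1, 2, 3]}
for _ in range(5):
    state = step(state)

snapshot = pickle.dumps(state)   # shelve the entire state, for an hour or an eon
restored = pickle.loads(snapshot)

# Computation resumes bit-identically; from inside, the gap does not exist.
assert step(restored) == step(state)
[/code]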
 
In its simplest form, underclocking would work like it does on most of today's notebook computer CPUs: by decreasing the value of the CPU multiplier. The exact method of slowing down isn't as important as the concept that a slowdown can be achieved in principle, be it by underclocking, by inserting sleep or do-nothing instructions between each of the AI's instructions, or by whatever else comes to mind.
The only necessary condition for completely equivalent computation, regardless of actual speed, is that all components of the system are slowed down proportionally, so they don't get out of sync.

So it would be the equivalent of, say, putting some sort of pause mechanism in the middle of each neuron, so that the impulse had to stop and wait for a prescribed amount of time before moving on?

(Impossible, yes, but in our hypothetical world.)
 
On what do you base this conclusion?

I'm not sure why you want me to repeat this, but it's because of what we observe in the operation of the brain. For example, Marvin's case, in which a stroke disrupted the pathway leading to the part of the brain that generates consciousness of emotions, resulting in a kind of emotional blindness.

We also infer this from the types of errors that are typical of conscious perception, such as the famous "gorilla on the basketball court" experiment, in which people counting ball passes often ignore a person in a gorilla suit who walks onto the court, stands amidst the players, beats his chest, and walks off.

We know from many studies that the mind doesn't shut off perception of these things. Rather, it simply fails to move that information into conscious awareness. On the other hand, the mind also fills in sensory data with stored patterns and associations to create a complete conscious experience even when the data is incomplete.

Also, there are the studies showing that we only become aware of our decisions after we make them and begin to act on them.

What emerges from all of this is a pretty clear picture of a brain that takes sensory information, combines it with stored information and internal data regarding the brain's own states, then moves highly processed information selectively to brain modules responsible for generating conscious awareness. These modules are downstream from the pre-processing centers, as well as the areas that route physical responses out to the rest of the body.

So we can be sure that consciousness does not simply emerge from a critical mass of neurons, but rather that it is a specialized function of the brain.

Since thermostats have no such capacity built into them, it's safe to say they are not conscious.
 
That is possible, but it might remain a Turing Machine, and it can in principle be simulated on paper or in silicon. Do you have any sources for the specialised circuitry?

I do, but I can't access them right now.

I'll see if the vid on Marvin is still out there, and I'll try to find other sources on the studies I mention above, as well as anything newer regarding the identification of areas of the brain responsible for generating the experience of conscious awareness. Might not collect all that til tomorrow, tho.
 
So far you have not brought something up that cannot be simulated. You do realise that we are not talking about a realtime simulation of a brain, right?

If I understand what you mean by "simulation", I'm not claiming it can't be simulated -- although I suspect that simulating consciousness would be equivalent to instantiating it.

If you're talking about a very slow machine designed to simulate the brain, then that's the question at hand -- if it computes at a much slower rate, could it still be conscious?

I'm arguing that if it were very slow, it could not, because you'd lose the data coherence necessary for the CMs to work. If you sent data to them with long pauses between impulses, all information entering the CMs would be subliminal.
 
Not in a real brain, but in a simulated brain, it would be no problem.

And on what do you base that?

In order to make it work, you're going to have to create CMs that know how to handle data in a way that our only working consciousness-generator doesn't appear able to handle.

How will that be accomplished?
 
The entire contents of every element of the simulated brain, down to the last atom, could be stored somewhere and restored at a later stage, and that consciousness would never notice what had happened.

Ok, then you're talking about stopping and starting time, for all intents and purposes.

This makes the question trivial.

If you did that, of course consciousness would continue.
 
I'm not sure why you want me to repeat this, but it's because of what we observe in the operation of the brain. For example, Marvin's case, in which a stroke disrupted the pathway leading to the part of the brain that generates consciousness of emotions, resulting in a kind of emotional blindness.
I have seen your arguments, but I never noticed anything that spoke against an emergent function of neurons or whatever you think is important in the brain.

We also infer this from the types of errors that are typical of conscious perception, such as the famous "gorilla on the basketball court" experiment, in which people counting ball passes often ignore a person in a gorilla suit who walks onto the court, stands amidst the players, beats his chest, and walks off.
What connection does this have with emergence?

We know from many studies that the mind doesn't shut off perception of these things. Rather, it simply fails to move that information into conscious awareness. On the other hand, the mind also fills in sensory data with stored patterns and associations to create a complete conscious experience even when the data is incomplete.

Also, there are the studies showing that we only become aware of our decisions after we make them and begin to act on them.
I fail to see the relevance for emergence.

What emerges from all of this is a pretty clear picture of a brain that takes sensory information, combines it with stored information and internal data regarding the brain's own states, then moves highly processed information selectively to brain modules responsible for generating conscious awareness. These modules are downstream from the pre-processing centers, as well as the areas that route physical responses out to the rest of the body.
That might be an accurate description of how the brain works, but you seem to think that emergence means that a random assembly of neurons will generate consciousness, whereas on the contrary you just made the case that we need a certain level of specialisation plus a huge number of neurons in order to get consciousness, which still does not rule out that emergence is important.

I do, but I can't access them right now.

I'll see if the vid on Marvin is still out there, and I'll try to find other sources on the studies I mention above, as well as anything newer regarding the identification of areas of the brain responsible for generating the experience of conscious awareness. Might not collect all that til tomorrow, tho.
OK, thanks. Take your time; I might not be online as much tomorrow anyway ...

If I understand what you mean by "simulation", I'm not claiming it can't be simulated -- although I suspect that simulating consciousness would be equivalent to instantiating it.
I am not sure what you mean by "instantiating" here. I think it means that we achieve consciousness, and in that case we agree.

If you're talking about a very slow machine designed to simulate the brain, then that's the question at hand -- if it computes at a much slower rate, could it still be conscious?
Yes, that was my interpretation of the question in the OP. My reply is a yes, but we might not be able to recognise that the machine is conscious if it takes too long to compute the various states.

I'm arguing that if it were very slow, it could not, because you'd lose the data coherence necessary for the CMs to work. If you sent data to them with long pauses between impulses, all information entering the CMs would be subliminal.
I am sorry, but I do not know what a "CM" is. I am not sure why a simulation or a robot would have CMs.

And on what do you base that?

In order to make it work, you're going to have to create CMs that know how to handle data in a way that our only working consciousness-generator doesn't appear able to handle.

How will that be accomplished?
By virtue of the precondition that the brain is a Turing Machine. I still do not know what a CM is, so I cannot yet answer that part of the question.

Ok, then you're talking about stopping and starting time, for all intents and purposes.

This makes the question trivial.

If you did that, of course consciousness would continue.
That answers the question of the OP about what would happen if we single-stepped the robot's brain simulation.
 
I have seen your arguments, but I never noticed anything that spoke against an emergent function of neurons or whatever you think is important in the brain.

Odd, because I made all those points before, but no matter, I've made them again.
 
What connection does this have with emergence?

Patience, my friend. Try reading the whole thing.

I'm discussing, at your request I believe, why we can say that consciousness is a specialized function of the brain rather than an emergent property like the whiteness of clouds which arises merely from having a pile of neurons working together.

Part of the reason we know this is because of clues we get from the kinds of errors the brain makes in conscious processing, together with other evidence such as what happens when a brain like Marvin's "breaks" after being damaged.

From the gorilla experiment, and others like subliminal information tests, we see that consciousness does not deal with raw data, but rather with highly processed information. From Marvin, and the decision sequencing tests, we see that consciousness is a "downstream" process.

From this, we can deduce that the brain does one heck of a lot of work, and responds physically to input, before it bothers to make us aware of what's going on. And when it does so, it has already filtered out what it "thinks" is unimportant, filled in gaps, and attached associations from memory to the information before pushing the resulting product into consciousness.

If consciousness were an emergent property that arose from the mere fact of stringing neurons or circuits together, this is not what we'd expect to see.

Instead, what we observe is a specialized function like any other, such as vision.

We don't expect vision to arise as a "property" of any old bundle of neurons like the whiteness of clouds. It's a function of the system that requires a proper setup. Ditto for consciousness.
 
That might be an accurate description of how the brain works, but you seem to think that emergence means that a random assembly of neurons will generate consciousness, whereas on the contrary you just made the case that we need a certain level of specialisation plus a huge number of neurons in order to get consciousness, which still does not rule out that emergence is important.

I said in my first post on this topic that if consciousness is emergent, it is an emergent feature, not an emergent property like the whiteness of clouds. So I'm not ruling out emergence. I was arguing against a specific type of emergence hypothesis.
 
I am sorry, but I do not know what a "CM" is. I am not sure why a simulation or a robot would have CMs.

CM is a shorthand, which I identified upthread, for "consciousness modules".

I'm finding some really interesting new stuff, tho, that might argue against the existence of CMs for generalized awareness, altho Marvin's case certainly presents evidence for a CM controlling emotional awareness.

As for our robot, we must assume that it uses the same method used by the human brain to produce consciousness, otherwise the question becomes nonsense. It would be like saying: "Suppose there's a car which has an engine that produces motion in a way unlike anything we currently know about or imagine -- how low could its idle speed go before it stalls?"
 
That answers the question of the OP about what would happen if we single-stepped the robot's brain simulation.

I don't think so. It seems to me that the OP posits a conscious robot (not a simulation of consciousness, but a genuine conscious entity) and asks what would happen in the real world if we slowed its processing speed.

In a simulation where you simulate halting and restarting the entire universe, of course there's no change in anything. It's entirely trivial, no matter what you're simulating.
 
