Robot consciousness

So nothing but brains can have consciousness, by definition? Is this in the same way that only eyes can have vision because it is a specialised function?

No, that's not what I mean (unless you want to call anything that does it a "brain", in the same way you might want to call any vision device an "eye").

In fact, there may be other ways to generate consciousness than the one our brains are using. Nature often comes to the same solution by different means.

But as of now, this is the only one we know of.

But when we look at how consciousness is generated in the cases we know of, we see that it's not simply a global emergent property that arises from having a bunch of neurons hooked up, but rather it is a specialized function of the brain.
 
I do not think it is meaningful to distinguish conscious thoughts from other thoughts, whatever they may be.

There has been some experimentation showing that people "make decisions" before they are "aware" that they have made a decision. This sounds rather futile to me. Unconscious thoughts making decisions are as much a part of consciousness as any other kinds of thoughts, so it really says more about how we experience consciousness than about what consciousness consists of.

Consciousness is your experience.

And the distinction between conscious and non-conscious activity in the brain is quite important.

We have to ask the question, not only "How are we conscious?", but "Why?"

Why does the brain bother to do this? There must be some evolutionary advantage to it.

The points you make here are ones I've made previously in this thread. Consciousness appears to be a specialized "downstream" function. Our awareness of our actions and decisions appears to be after the fact.

So why should evolution bother to do such a thing?

There's a wealth of experiments showing a distinction between conscious functions of the brain and all the rest (the majority) of activity.

It's well established that people can and do act on sensory information that the brain has picked up and processed but not bothered to make them consciously aware of. And Marvin's case shows clearly how the information fed to the module controlling emotional awareness can be short-circuited, even though the parts of the brain generating emotion continue to function.

So when we talk about a "conscious robot", we're not just talking about a robot that can do a lot of what our brains can do. We must be speaking of a robot that has this particular capacity to be "aware" of at least some of what it's doing.
 
As long as we do not know exactly how the brain achieves consciousness, it remains an opinion.

But we may not need to know the whole story about how consciousness is produced by the brain in order to answer the question.

It's possible to have enough information to answer the question without knowing everything, just as we answer questions in astronomy without knowing everything about the cosmos.

I think we do know enough to say that the pencil brain is too slow to be able to do what the brain is doing when it generates conscious awareness.
 
The OP states: "Let's assume that the appropriately organized software is conscious in a sense similar to that of human brains." Where do you read that the OP is not concerned with a virtual simulation of consciousness? How do you interpret the word "similar"?

What if there were an OP that said, "Let's assume we have enough food to feed an army...." Would you be ok with the premise that we can proceed with that thought experiment by positing that we have some sort of virtual food to feed that army?

The OP stipulates that we're dealing with a robot that is conscious like we are. In other words, we've got something here in the physical world that is actually self-aware.

No virtual "simulation" is being stipulated.
 
But when we look at how consciousness is generated in the cases we know of, we see that it's not simply a global emergent property that arises from having a bunch of neurons hooked up, but rather it is a specialized function of the brain.
How do we see this? Why don't bunches of neurons constitute the specialised function of the brain? Bunches of photoreceptor cells also make up vision.

Consciousness is your experience.
Yes, but I doubt that the experience could exist without the lower layer of consciousness.

We have to ask the question, not only "How are we conscious?", but "Why?"

Why does the brain bother to do this? There must be some evolutionary advantage to it.
Definitely, but if it is an emergent quality, evolution might have stumbled upon it by chance. In fact, I am pretty sure that evolution does not go in any special direction, unless there is a god to direct it, so it is fairly certain that consciousness just happened, and evolution opportunistically latched on to a good thing.

The points you make here are ones I've made previously in this thread. Consciousness appears to be a specialized "downstream" function. Our awareness of our actions and decisions appears to be after the fact.
Yes, our awareness is after the fact, but we are not consciously firing certain neurons, so it is rather obvious that any decision making is done on a level that is only later brought to the awareness level. Apparently, this is how consciousness works. I believe that you could not have consciousness without all of the levels.

There's a wealth of experiments showing a distinction between conscious functions of the brain and all the rest (the majority) of activity.
Do these experiments also show that consciousness could exist without the rest of the activity?

But we may not need to know the whole story about how consciousness is produced by the brain in order to answer the question.
Quite true. I said something along the same lines in an earlier post. As long as we do not know exactly what makes up consciousness, we can still simulate consciousness by simulating every single element. Once we have the knowledge of what makes up consciousness, we can cut down on everything that is not strictly necessary.

I think we do know enough to say that the pencil brain is too slow to be able to do what the brain is doing when it generates conscious awareness.
You are supposing that the timing element is important. I see no reason to accept this.

What if there were an OP that said, "Let's assume we have enough food to feed an army...." Would you be ok with the premise that we can proceed with that thought experiment by positing that we have some sort of virtual food to feed that army?
I fail to see the connection.

The OP stipulates that we're dealing with a robot that is conscious like we are. In other words, we've got something here in the physical world that is actually self-aware.

No virtual "simulation" is being stipulated.
A robot is not a biological being. "conscious like we are" necessarily means that it is a virtual simulation that is making the robot conscious. Though presumably not a paper machine :)

Many years ago I produced a virtual simulation of a tape recorder that worked exactly like a tape recorder, except that it had no tapes. I see nothing in the OP that rules out that the robot is running a virtual simulation of a brain.
 
How do we see this? Why don't bunches of neurons constitute the specialised function of the brain? Bunches of photoreceptor cells also make up vision.

Bunches of neurons do constitute the brain, obviously.

The position I was arguing against is that a bunch of neurons is all you need for consciousness to "emerge" from the critical mass.

That does not appear to be the case.

Consciousness, like vision, doesn't just arise from any ol' bundle of neurons. It requires particular kinds of circuitry.
 
Definitely, but if it is an emergent quality, evolution might have stumbled upon it by chance. In fact, I am pretty sure that evolution does not go in any special direction, unless there is a god to direct it, so it is fairly certain that consciousness just happened, and evolution opportunistically latched on to a good thing.

Of course it's true that consciousness evolved like everything else in our biological world. I don't understand why you're bringing it up.
 
Yes, our awareness is after the fact, but we are not consciously firing certain neurons, so it is rather obvious that any decision making is done on a level that is only later brought to the awareness level. Apparently, this is how consciousness works. I believe that you could not have consciousness without all of the levels.

Yes, but why are you bringing this up?

I'm totally befuddled about why you're stating all these obvious points.
 
You are supposing that the timing element is important. I see no reason to accept this.

Well, let's take an example of a conscious event.

First of all, we know there's a timespan below which events will not be consciously processed. Flicker that image too fast, and an observer won't be aware of it, even though his brain has processed it (which we can tell b/c it influences the observer's behavior).

So consciousness, as you and roger have both pointed out, is something that doesn't exist in very small frames of time, but in what we might call macro time.

Let's take the example of being aware that someone has said your name at a party.

An enormous amount of data has to be aggregated and analyzed and associated. (Yes, I know the pencil brain can aggregate data etc.)

All the incoming sounds have to be parsed, matched with stored patterns, compared with each other, triaged, prioritized.

The result is a pretty massive assemblage of simultaneous information which results in something like: "In this particular setting, that set of sounds is someone saying my name, which is more important than what I'm focusing on now, so I'll attend to it instead".

Can that feat be accomplished by feeding discrete bits of information at a very slow rate into the areas of the brain responsible for conscious awareness?

No, because when you do that you lose the large-scale data coherence that's necessary for the brain to do this, and you have neuronal activity in discrete pulses that are below the minimum timespan for events to be consciously processed.

I suppose if you had a very slow machine that stored up information, then sent it in coherent bundles in short bursts of macro time to the modules responsible for conscious awareness, you could have intermittent bursts of consciousness, though.

Essentially, we're asking if there's a stall speed for consciousness like there's a stall speed for engines.
 
A robot is not a biological being. "conscious like we are" necessarily means that it is a virtual simulation that is making the robot conscious. Though presumably not a paper machine :)

Many years ago I produced a virtual simulation of a tape recorder that worked exactly like a tape recorder, except that it had no tapes. I see nothing in the OP that rules out that the robot is running a virtual simulation of a brain.

This may be semantics, but at that point I'd argue you're no longer simulating consciousness. At that point, it's identity. You have consciousness.

As I read the OP, that's what's been stipulated.
 
I say no, and for entirely practical reasons.

The step which, from my point of view, you are failing to make is to actually go look at how consciousness is created and ask, "Is this a function that can be maintained by something that works as slowly as a pencil brain?"

I say no, because you don't have simultaneous coordination of large enough amounts of coherent data over short enough continuous spans of time to achieve it.

That's why I say that the pencil brain cannot mimic that particular function.
I am addressing this point.

This next part isn't going to convince you, but just let me say it first.

The smallest possible unit of time is Planck time, on the order of 5*10^-44 seconds. Rather brief. Nothing happens in a shorter duration than that. If we looked at your brain on that time frame, you would no longer talk about 'simultaneous coordination'. Events that are happening simultaneously are in fact happening glacially; you wouldn't even see things happening on that time scale. The impulses running from neuron to neuron in the chemical soup would appear absolutely frozen to you. You'd have to leave markers and come back days later to even notice they had moved.

You could reach in 'by hand' on that time scale and make things happen, one at a time, one neuron at a time, and have time to play a round of golf and go to the symphony between each adjustment. Heck, at that time scale, you could write a million novels between each look at a single neuron (there are only 100 billion to concern yourself with, after all). In other words, at that time scale, even though things are happening simultaneously, the signals and events are happening so slowly that there is no need for changes to happen simultaneously; you could easily make them one at a time. You wouldn't be talking about simultaneous on this time scale, and you certainly wouldn't be talking about continuous. You'd see an absolutely frozen object that appears to do nothing over 100 years.

This is unarguable: on a time scale that actually exists in our universe, the brain is basically doing nothing, 'continuous' has no meaning, and yet it produces consciousness. Hence, continuity is not a necessary property of consciousness.


Okay, so at this point I'm assuming you aren't convinced. If not, you are positing something 'special' about simultaneity that creates consciousness. From a computational point of view, there is no reason to assume that. Certainly there is a need for our neurons, running at the speeds they do, to be doing things simultaneously to get everything done in the time available. But if those neurons were running, say, 10 trillion times faster than they do right now, why couldn't you just have one trillion of them, plus a big bank of memory, do all the work of your 100 billion neurons? Computationally, they are equivalent, and I believe you have now conceded that consciousness is computational. Again, this is a thought experiment, so no need to point out that our neurotransmitters wouldn't work at that speed.

I'm not sure what you meant by 'practical'. Certainly with our brain we need parallel processing to get things done in time. But the pencil brain is not constrained by our time scale, where 10^-44 sec is too small to notice and 1 second is a small but noticeable increment. With the pencil brain, I'm saying 1 second is its Planck-time interval: unable to perceive it, the brain is frozen and basically nothing happens during it. But as 10^44 seconds pass, the pencil brain will perceive, it will think, it will be conscious. They are computationally equivalent.

I assume that you don't think the neurotransmitters are creating a 'field' or something that creates consciousness. I think we are both on the same page: it's how large bundles of neurons process information, self-referentially, that creates consciousness. It's the processing, nothing more. If so, there is no fundamental difference between parallel and sequential processing. Again, back to the Planck scale. Sure, even at that speed, there's an impulse running along nerve 10234 and another impulse along nerve 2343322. Simultaneously! Sure, but it's not the impulse that's creating consciousness; it's what happens when it reaches the neuron (broadly). A neuron gets a bunch of inputs, and at a certain threshold it fires. At the Planck scale, it'd be an astonishing miracle if two neurons got a signal at the same moment. At that time scale (if you were living such that a Planck interval felt like one second to you), you might wait decades between two events you'd swear are instantaneous. So we know that simultaneity in that sense plays no role.
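The parallel-vs-sequential equivalence claimed above can be sketched in a toy example (my own illustration, not anything from the thread): a layer of threshold "neurons" updated all at once from a snapshot of the current state yields exactly the same next state as updating them one at a time from that same snapshot, so the ordering contributes nothing.

```python
# Toy sketch: "simultaneous" vs one-at-a-time updates of threshold neurons.
# Both read from the same frozen snapshot, so they must agree.

def parallel_step(state, weights, threshold):
    # every neuron fires based on the same snapshot of the current state
    return [sum(w * s for w, s in zip(row, state)) >= threshold
            for row in weights]

def sequential_step(state, weights, threshold):
    snapshot = list(state)      # freeze the current state first
    new_state = []
    for row in weights:         # now update one neuron at a time, as slowly as you like
        new_state.append(sum(w * s for w, s in zip(row, snapshot)) >= threshold)
    return new_state

state = [1, 0, 1, 1]
weights = [[1, 0, 1, 0],
           [0, 1, 0, 1],
           [1, 1, 0, 0],
           [0, 0, 1, 1]]

# both give [True, False, False, True]
assert parallel_step(state, weights, 2) == sequential_step(state, weights, 2)
```

The pencil brain is just `sequential_step` with each append taking a very long time; the snapshot plays the role of the stored intermediate results on paper.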

So, we are left with the field concept. I don't see any evidence for such a thing, so I dismiss it. If there was a field, then speed and simultaneousness could matter, and the pencil brain, running on a substrate that doesn't generate that field, wouldn't work. But, that is just pie-in-the-sky speculation.

I can't think of any other objections or alternatives. On a time scale that actually exists in this world, your brain does not do the 'important things' simultaneously or continuously, where 'important things' means neurons receiving impulses and firing. It's the receiving, firing, and storing of data that creates the consciousness, not the mere fact of neurotransmitters propagating a signal.

Therefore, a pencil brain operating on a time scale of seconds would work too, and would perceive time passing in parcels of billions of eons.
 
I like roger's idea of reposting the OP at the top as a reminder:

Consider a conscious robot with a brain composed of a computer running sophisticated software. Let's assume that the appropriately organized software is conscious in a sense similar to that of human brains.

Would the robot be conscious if we ran the computer at a significantly reduced clock speed? What if we single-stepped the program? What would this consciousness be like if we hand-executed the code with pencil and paper?

So this question boils down to whether there's a "stall speed" for consciousness, like there is with a car engine.

Now, for the car, we could theoretically build a second car which has equivalents for all the parts of the working car in the same arrangement, but everything moves much more slowly.

Let's say there is only one event per second in this model car.

(Assuming all physics is computable, both cars are computable.)

Q: Will the second car run?

A: No.

So we cannot simply assume that a slow model brain will be conscious. It depends on what the mechanism of consciousness is.
 
So we cannot simply assume that a slow model brain will be conscious. It depends on what the mechanism of consciousness is.
Right. We all agree with this. Any given substrate, in the real world, has a minimum and maximum run speed. My beloved pencil brain wouldn't really work. Pencil fades in several hundred years, and paper deteriorates. We wouldn't actually get anything done before everything decomposed. We'd need this stuff to hang around for eons for the brain to do something useful, and that wouldn't happen.

But we are trying to discuss a more philosophical point rather than an implementation detail. Nothing, in principle, stops a pencil brain from having a consciousness that spans eons.
 
The smallest possible unit of time is Planck time, on the order of 5*10^-44 seconds. Rather brief. Nothing happens in a shorter duration than that. If we looked at your brain on that time frame, you would no longer talk about 'simultaneous coordination'. Events that are happening simultaneously are in fact happening glacially; you wouldn't even see things happening on that time scale. The impulses running from neuron to neuron in the chemical soup would appear absolutely frozen to you. You'd have to leave markers and come back days later to even notice they had moved.

You could reach in 'by hand' on that time scale and make things happen, one at a time, one neuron at a time, and have time to play a round of golf and go to the symphony between each adjustment. Heck, at that time scale, you could write a million novels between each look at a single neuron (there are only 100 billion to concern yourself with, after all). In other words, at that time scale, even though things are happening simultaneously, the signals and events are happening so slowly that there is no need for changes to happen simultaneously; you could easily make them one at a time. You wouldn't be talking about simultaneous on this time scale, and you certainly wouldn't be talking about continuous. You'd see an absolutely frozen object that appears to do nothing over 100 years.

This is unarguable: on a time scale that actually exists in our universe, the brain is basically doing nothing, 'continuous' has no meaning, and yet it produces consciousness. Hence, continuity is not a necessary property of consciousness.

Since there is no consciousness on that scale, it doesn't really matter.

In fact, let's consider why it should be that there is a subliminal timeframe at all.

Why doesn't the brain move anything that happens within very short timeframes into conscious processing? We know that the brain processes information below that timeframe, so we imagine that it could push it into conscious processing. Why doesn't it?

That's because consciousness does rely on the coordination of apparently coherent data. And below a certain time interval, that kind of coherence just doesn't exist.

You might argue that our conscious experience is like a movie -- frames that only appear continuous.

Below the subliminal threshold, the brain can still process data, but it can't lump that data into apparently coherent sets which can be used to create the experience of being aware of them.

We perceive our experience to be continuous, but it is not.

Instead, the brain has to (on very short timescales) create chunks of highly processed data that are treated as if they were coherent and simultaneous by the brain structures that generate consciousness. On the neural level, it is transferred in the same way as any other data. But these modules appear to process batches of it.

This is only possible at a high level of organization.

The result is our experience of being "aware" of things.

If we slow down the data stream so much that we break up these apparently coherent batches, then all input becomes subliminal.
 
Right. We all agree with this. Any given substrate, in the real world, has a minimum and maximum run speed. My beloved pencil brain wouldn't really work. Pencil fades in several hundred years, and paper deteriorates. We wouldn't actually get anything done before everything decomposed. We'd need this stuff to hang around for eons for the brain to do something useful, and that wouldn't happen.

But we are trying to discuss a more philosophical point rather than an implementation detail. Nothing, in principle, stops a pencil brain from having a consciousness that spans eons.

I disagree. That's like saying that nothing in principle stops our one-event-per-second car from running.

In fact, it won't be able to run.

See my above post for an explanation of why a one-impulse-per-second brain won't run the consciousness function. At that speed, everything becomes subliminal, so the brain can't be conscious of anything.
 
Certainly with our brain we need parallel processing to get things done in time. But the pencil brain is not constrained by our time scale, where 10^-44 sec is too small to notice and 1 second is a small but noticeable increment. With the pencil brain, I'm saying 1 second is its Planck-time interval: unable to perceive it, the brain is frozen and basically nothing happens during it. But as 10^44 seconds pass, the pencil brain will perceive, it will think, it will be conscious. They are computationally equivalent.

What method are you going to use to overcome the subliminal threshold?

At some point, the pencil brain has to speed up.

If it works so slowly that all transfer of information happens at pencil speed, how do you create the apparent cohesion and simultaneity which the generation of conscious awareness requires?

You could do it by storing up sufficient data to create cohesion, then dumping all that aggregated and associated data very rapidly into the right module and allowing it to run at a speed that keeps up above the threshold. But then we'd have a hybrid, of which the pencil brain is only one part.
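That hybrid design can be sketched as a simple buffer (a sketch of my own; the batch size is an arbitrary stand-in for the coherence threshold described above): the slow stage accumulates items one at a time, and only complete batches are handed over in a burst, while anything left in a partial buffer stays subliminal.

```python
# Hedged sketch of the hybrid: a slow "pencil" stage feeds an awareness
# module only in coherent bursts. BATCH_SIZE is an assumed stand-in for
# the minimum bundle of data needed to cross the subliminal threshold.

BATCH_SIZE = 4

def pencil_feed(items, batch_size=BATCH_SIZE):
    buffer = []
    for item in items:              # items arrive glacially, one per "second"
        buffer.append(item)
        if len(buffer) == batch_size:
            yield list(buffer)      # delivered all at once, above threshold
            buffer.clear()
    # a final partial buffer is never delivered: it stays subliminal

bursts = list(pencil_feed(range(10)))
# two coherent bursts; items 8 and 9 never reach awareness
assert bursts == [[0, 1, 2, 3], [4, 5, 6, 7]]
```

On this picture the pencil brain does all the slow aggregation, but the consciousness-generating module only ever sees the fast bursts, which is why the result is a hybrid rather than a pure pencil brain.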
 
On a time scale that actually exists in this world, your brain does not do the 'important things' simultaneously or continuously, where 'important things' means neurons receiving impulses and firing. It's the receiving, firing, and storing of data that creates the consciousness, not the mere fact of neurotransmitters propagating a signal.

But the action of neurons is not the only important thing.

We have to look at larger scale structures, and how they behave at that level of granularity, to understand how consciousness is created.
 
First of all, we know there's a timespan below which events will not be consciously processed. Flicker that image too fast, and an observer won't be aware of it, even though his brain has processed it (which we can tell b/c it influences the observer's behavior).
What you are describing has nothing to do with a dependence of consciousness on timespan, but with the synchronization of the computation that is (or results in) consciousness with the incoming sensory data. Of course you won't be able to just slow down or speed up a human's or AI's brain situated in the real world at will and expect it to stay conscious.

But let's say you create a log of the exact sensory inputs you receive while looking at that flickering image, with exact timestamp information, as well as the starting state of your computation device. You can then recreate the exact computation that happens when looking at that image, regardless of the actual speed at which it is performed. Be it a nanosecond or a thousand years, the computation and its results will stay the same. If this computation results in consciousness, it will do so regardless of the actual time it took.

One could actually consider not only using a pencil to run a brain, but also having another person with another pencil compute the simulation environment that brain lives in, along with the associated incoming sensory input. And because both of these algorithms are computable, you could even have a single TM/human with a pencil computing both the brain's activity and the simulation it lives in. For such a system it doesn't matter whatsoever at which speed it is run; it will always be conscious and in sync with its sensory input.

Of course, all of the above assumes consciousness to be computable, as discussed earlier in this thread.
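The replay argument can be made concrete with a minimal sketch (the `step` function here is a made-up deterministic stand-in, not a model of anything neural): given the same starting state and the same logged input trace, the computation reaches the same final state no matter how slowly each step is executed, because wall-clock speed is not an input to the computation.

```python
# Sketch: deterministic replay of a logged input trace. The substrate's
# speed (the optional delay) cannot affect the result.
import time

def step(state, sensory_input):
    # arbitrary deterministic stand-in for one tick of the computation
    return (state * 31 + sensory_input) % 1_000_003

def run(start_state, input_log, delay=0.0):
    state = start_state
    for x in input_log:
        if delay:
            time.sleep(delay)   # slow the substrate down arbitrarily
        state = step(state, x)
    return state

log = [3, 1, 4, 1, 5, 9, 2, 6]
fast = run(42, log)                 # full speed
slow = run(42, log, delay=0.001)    # "pencil speed", scaled down
assert fast == slow                 # identical final state
```

A pencil-and-paper execution is just `run` with a delay of years per step; as long as the log and the starting state are fixed, the trace of states is identical.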
 
