Robot consciousness

An analog in the human brain is that it can, and does, take micronaps that are not noticeable to conscious awareness, which appears continuous; and it can "black out" for longer periods and "come to" with the result that time appears to have suddenly "jumped" (for example, you're driving and suddenly your wheels are off the road, and you realize you were momentarily asleep).

In short, the gaps are unnoticed. Remember this.

And yet we also know that events of very short duration are processed by the brain just fine, but cannot be consciously perceived.

Not sure I want to agree with the "processed by the brain just fine" bit (it's irrelevant, anyway), but I'll agree that if a stimulus is not present for long enough, then we won't perceive it. Take, for example, a red circle flashed on a screen for one picosecond. I agree that we probably wouldn't perceive it.

So if you were to time the blackouts so that the intervals between them were shorter than this subliminal threshold, you'd end up with a series of non-conscious moments even though the brain is working during those moments.

Here's the gotcha point. By your first point, the gaps are unnoticed. So you are not left with a series of picosecond flashes of the red circle with some gaps between them. We are left with a continuous image of a red circle.

But I have to wonder at your assertion that "the brain is working during those moments". This breaks the analogy with single-stepping through the computer program.

You seem to be trying to rest your argument on the fact that the stimulus potential has time to decay during the gaps. In that case, yeah, you'd never reach the threshold to fire a neuron. But this breaks the analogy. It's not at all what the OP was asking about.
 
Maybe not you, but others have.

Most recently this one:

Piggy was discussing theoretical feasibility, which is a good deal different from technical feasibility.

Even in the sphere of theoretical feasibility there is a big difference between saying “we know it’s possible because we have examples of devices that do it” and “we know how to produce devices that do it”. IMO Piggy was referring to the former, while I’m challenging the latter, as well as technical feasibility any time soon.
 
Arguments like that seem to have the hidden premise "...and we never will", which to my mind subjects the idea of consciousness to some sort of special pleading.

If it’s there you added it yourself.
 
Yes. Piggy is.
I confess I've been dropping in and out of this thread, so I may have missed something, but I keep reading Piggy's posts as if he's saying things about the 'what it's like to be a thing that is conscious' question. Other times, as you say, he's clearly talking about pure computation.
 
I confess I've been dropping in and out of this thread, so I may have missed something, but I keep reading Piggy's posts as if he's saying things about the 'what it's like to be a thing that is conscious' question. Other times, as you say, he's clearly talking about pure computation.

Both are legitimate questions IMO.
 
They are, but arguments that apply to one reading of 'consciousness' don't necessarily (or even usually) apply to the other.
 
Right. That's one of the fundamental findings -- not assumptions -- of the various formalizations of "computer." They will work at any operating speed. Or more accurately, no one has found any formalizations of "computability" that are in any way speed-dependent.

So if you assume that "computer consciousness" is possible, then you implicitly assume either that our findings about "computers" remain relevant (including the finding of speed-independence) or that there's been some sort of breakthrough that renders everything we know about computers irrelevant.

Unless your "mechanism of consciousness" is radically uncomputable -- which in turn makes "computer consciousness" impossible by definition -- it will have all the properties we associate with computability.

From what I've read on this thread, any physical process is computable, therefore any computer, engine, animal, plant, etc. is computable. Ok.

So saying that brains and computers are computable... well, fine.

I also don't doubt that a computer should be able to do anything a human brain can do. No problem. I can accept that. No argument from me.

So far, everything we've been able to do with computers, we've been able to do at any operating speed. I'll accept that. I couldn't prove it, but I have no reason to doubt it.

But we haven't yet made a computer that's conscious. We've not made a computer that does what the brain does when it produces and maintains a state of conscious awareness.

When we do, will it be able to do THAT at any speed?

So far, no one has presented any evidence that it must be true that it could.

If it turns out that our brains could not maintain consciousness if we were to slow down neural firings to one per second, then it stands to reason that a computer -- which can do everything a brain can do -- that is configured to do what the brain does also will not be able to maintain consciousness at an equivalent speed unless (and perhaps this is the case, but so far I'm not seeing an argument for it) something about the computer brain is radically different from the human brain which somehow overcomes the problem.

So it really all depends on whether our brains produce consciousness by a method that would continue working if we were somehow able to slow neurons down radically while still keeping the body alive.
 
In short, the gaps are unnoticed. Remember this.

Not sure I want to agree with the "processed by the brain just fine" bit (it's irrelevant, anyway), but I'll agree that if a stimulus is not present for long enough, then we won't perceive it. Take, for example, a red circle flashed on a screen for one picosecond. I agree that we probably wouldn't perceive it.

Here's the gotcha point. By your first point, the gaps are unnoticed. So you are not left with a series of picosecond flashes of the red circle with some gaps between them. We are left with a continuous image of a red circle.

But I have to wonder at your assertion that "the brain is working during those moments". This breaks the analogy with single-stepping through the computer program.

You seem to be trying to rest your argument on the fact that the stimulus potential has time to decay during the gaps. In that case, yeah, you'd never reach the threshold to fire a neuron. But this breaks the analogy. It's not at all what the OP was asking about.

Well, I'll link it in the post I hope to get to tonight, but yes, there are studies showing that the brain does process subliminal information and use it in decision making. So we know that it's being perceived and processed, but for some reason it's not available to conscious awareness.

From what we can tell, events must be of a certain minimum duration in order to be usable by the modules of the brain that handle consciousness, even though events of shorter duration are usable in other ways.

I see what you mean by breaking the analogy -- we're not talking about a computer that blinks its consciousness modules on and off, but rather a computer that single-steps everything.

That's why I need a longer post than I can write up at the moment to put it all together.

It may turn out that the difference you've pointed out there is a spoiler (I have come back to it many times, wondering if I've missed something).

Perhaps we would be left with the illusion of a continuous image of a red circle.

We would only not get that effect if it were true that conscious awareness itself can't be maintained at all if processing is slowed down radically.

I'd like to argue that it is -- but I need to write it up and get all the links.

It may turn out that when I put it all together on paper instead of in my head, I have to reverse myself. I'm open to that possibility. We'll see.
 
So far, everything we've been able to do with computers, we've been able to do at any operating speed. I'll accept that. I couldn't prove it, but I have no reason to doubt it.

Yes, but I can prove it. It's implicit in the definition of "computable" as "computable by a Turing machine" since Turing machines are state machines. The time it takes to transition from state to state does not affect the input-output relationship, which is how TM computability is defined.
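To make that concrete, here's a toy sketch in Python (hypothetical code, nobody's real implementation): a little stepper that computes binary increment, with an optional delay between "state transitions". The delay changes nothing about the input-output relationship, which is the whole point.

```python
import time

# Toy state-machine-style stepper: binary increment, least significant bit first.
# An arbitrary delay between "state transitions" never changes the result --
# only the wall-clock time taken to produce it.

def increment_binary(tape, delay=0.0):
    """Increment a binary number held on a tape (list of 0s and 1s, LSB first)."""
    tape = list(tape)
    pos, carry = 0, 1
    while carry and pos < len(tape):
        time.sleep(delay)  # pause between transitions; pick any value you like
        total = tape[pos] + carry
        tape[pos], carry = total % 2, total // 2
        pos += 1
    if carry:
        tape.append(1)
    return tape

fast = increment_binary([1, 1, 0])             # run "at full speed"
slow = increment_binary([1, 1, 0], delay=0.5)  # half a second per step
assert fast == slow == [0, 0, 1]               # 3 + 1 = 4, at either speed
```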

But we haven't yet made a computer that's conscious. We've not made a computer that does what the brain does when it produces and maintains a state of conscious awareness.

When we do, will it be able to do THAT at any speed?

So far, no one has presented any evidence that it must be true that it could.

No, the evidence has been presented. TMs are not speed-dependent.


If it turns out that our brains could not maintain consciousness if we were to slow down neural firings to one per second,

Unsupported hypothesis.

then it stands to reason that a computer -- which can do everything a brain can do -- that is configured to do what the brain does also will not be able to maintain consciousness at an equivalent speed

Congratulations. You've just proven that our brains could maintain consciousness if we were to slow down neural firings to one per second. (Not-P implies not-Q; Q is proven; therefore P.)
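For what it's worth, that inference form is valid. Here's a brute-force truth-table check in Python (just an illustration):

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

# Check the pattern: from (not-P implies not-Q) and Q, conclude P.
for p, q in product([True, False], repeat=2):
    if implies(not p, not q) and q:  # both premises hold in this row
        assert p                     # ...and the conclusion P holds too
print("valid: no counterexample in the truth table")
```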
 
If it turns out that our brains could not maintain consciousness if we were to slow down neural firings to one per second...


A computer executing a program at one operation per second is not the same as a human brain firing one neuron per second. When a neuronal signal propagates, it does so because signals from afferent (input) neurons build up in the target neuron until it reaches a threshold potential, which causes it to fire.

If you stipulate that the potential is allowed to decay during the gap, then the target neuron will probably not reach its threshold potential, and yes, brain activity would stop and consciousness would cease.
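To picture that decay point in code, here's a rough leaky integrate-and-fire sketch in Python (the constants are invented purely for illustration, not real neural parameters): if inputs arrive slowly relative to the leak, the potential never reaches threshold and the neuron never fires.

```python
import math

# Rough leaky integrate-and-fire sketch (illustrative constants only).
# If the membrane potential decays between incoming spikes, widely
# spaced inputs never accumulate to the firing threshold.

def fires(n_inputs, gap, weight=1.0, threshold=3.0, leak_rate=0.5):
    """Return True if the neuron reaches threshold.

    gap: seconds between incoming spikes; leak_rate: exponential decay per second.
    """
    v = 0.0
    for _ in range(n_inputs):
        v += weight                      # each incoming spike raises the potential
        if v >= threshold:
            return True
        v *= math.exp(-leak_rate * gap)  # potential leaks away during the gap
    return False

print(fires(10, gap=0.01))  # rapid inputs accumulate -> True (the neuron fires)
print(fires(10, gap=10.0))  # slow inputs decay away  -> False (it never fires)
```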

But that is *not* what was proposed by the OP regarding the electronic brain. Or at least I doubt it was -- as that leads to the trivial conclusion I mention above. I assume that when we talk about speeding up the brain or slowing it down, we're talking about scaling all of the physics involved.
 
We would only not get that effect if it were true that conscious awareness itself can't be maintained at all if processing is slowed down radically.

Ok, I can only imagine that you are talking about subjective experience when you talk about consciousness. By subjective experience, I mean the "what it's like" stuff, the phenomenology, the qualia, whatever.

If that's the case, then all talk about Turing Machines and computability does not apply. But that also means that there is no argument to make on your part.

Subjective experience is absolutely not observable by outsiders. There is no way for you or anyone else or any machine to see what it's like for me to see red. There is no machine we can devise, no test, no scientific study, that has access to that privileged information. The upshot of this is that no one can say with certainty whether or not an intelligence -- biological, mechanical, electrical, or lithic -- has subjective experience. We assume that humans have it. We can only assume that other creatures have it. There's no good reason to assume that any other sufficiently complex information processor involving feedback loops would NOT have it.
 
Oops. I got sidetracked.

What I meant to include is that because it is beyond an event horizon of a sort, there is very little we can say about it at all. Any assertions about its dependency on timing, or clock speed, or physical substrate must be the merest speculation.

I am certain that your arguments and the articles you are going to cite have no bearing on subjective experience, but only on the computability/intelligence aspect of 'consciousness'. It seems, still, that you are confusing the two.
 
A computer executing a program at one operation per second is not the same as a human brain firing one neuron per second. When a neuronal signal propagates, it does so because signals from afferent (input) neurons build up in the target neuron until it reaches a threshold potential, which causes it to fire.

If you stipulate that the potential is allowed to decay during the gap, then the target neuron will probably not reach its threshold potential, and yes, brain activity would stop and consciousness would cease.

But that is *not* what was proposed by the OP regarding the electronic brain. Or at least I doubt it was -- as that leads to the trivial conclusion I mention above. I assume that when we talk about speeding up the brain or slowing it down, we're talking about scaling all of the physics involved.

That's not quite how I imagined it. I was thinking more along the lines of this....

Suppose we were somehow able to slow the movement of the impulse along the axon so that it took an average of 1 second to move along the length. Suppose the synapses work the same as always and everything remains coordinated.

Would that be the same as running the replica TM brain at one calculation per second?
 
Yes, but I can prove it. It's implicit in the definition of "computable" as "computable by a Turing machine" since Turing machines are state machines. The time it takes to transition from state to state does not affect the input-output relationship, which is how TM computability is defined.

Keep in mind that the human brain isn’t limited to state-based computing. It’s also a connection-based machine whose physical configuration can change based on input and past computation.

While this can be accounted for in estimating its computational power, it also implies that if you slow down the computational speed, you must also slow down the input being sent to it, or you will end up with a different computational result. This would be at odds with any real-time test for consciousness.
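A toy way to picture that last point (invented numbers, purely illustrative): a processor that samples a real-time signal at a slowed tick rate sees different input than a fast one does, so unless the world is slowed to match, the computation comes out different.

```python
# Toy illustration: a processor sampling an external real-time signal.
# Slow the processor without slowing the world and it computes on
# different input -- and therefore produces a different result.

def external_signal(t):
    """The world: a signal that flips every second (arbitrary example)."""
    return 1 if int(t) % 2 == 0 else 0

def run(tick_seconds, n_ticks=6):
    """Sample the signal once per processor tick; return what was seen."""
    return [external_signal(i * tick_seconds) for i in range(n_ticks)]

print(run(tick_seconds=1.0))  # [1, 0, 1, 0, 1, 0] -- catches the alternation
print(run(tick_seconds=2.0))  # [1, 1, 1, 1, 1, 1] -- the slowed sampler misses it
```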
 
Suppose we were somehow able to slow the movement of the impulse along the axon so that it took an average of 1 second to move along the length. Suppose the synapses work the same as always and everything remains coordinated.

Would that be the same as running the replica TM brain at one calculation per second?

I would think so, assuming I understand you correctly.
 
Keep in mind that the human brain isn’t limited to state-based computing. It’s also a connection-based machine whose physical configuration can change based on input and past computation.

While this can be accounted for in estimating its computational power, it also implies that if you slow down the computational speed, you must also slow down the input being sent to it, or you will end up with a different computational result. This would be at odds with any real-time test for consciousness.

This is all irrelevant.

Here's an example:

Say we have a software system that is meant to keep a robot upright by making thousands of micro-adjustments to servos in the legs. Obviously, if we slow down that software system to one tick per second and we keep all the other physics the same, then the robot will fall down.
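Here's that example as a crude simulation (an inverted-pendulum sketch in Python, with gains and parameters invented for illustration): the very same controller that keeps the pendulum upright at a fast control tick lets it topple at one tick per second, because the physics doesn't wait.

```python
import math

# Crude balance sketch: an inverted pendulum with a proportional-derivative
# controller (all gains and parameters invented for illustration). The same
# controller succeeds at a fast control tick and fails at one tick per
# second, because the physics runs at full speed regardless.

def stays_upright(control_dt, sim_time=10.0, physics_dt=0.001):
    theta, omega = 0.05, 0.0   # initial tilt (radians) and angular velocity
    g_over_l = 9.8             # gravity / pendulum length
    torque, next_control, t = 0.0, 0.0, 0.0
    while t < sim_time:
        if t >= next_control:                  # controller acts only on its tick
            torque = -20.0 * theta - 5.0 * omega
            next_control += control_dt
        omega += (g_over_l * math.sin(theta) + torque) * physics_dt
        theta += omega * physics_dt
        if abs(theta) > math.pi / 2:           # fallen over
            return False
        t += physics_dt
    return True

print(stays_upright(control_dt=0.01))  # fast ticks: stays upright -> True
print(stays_upright(control_dt=1.0))   # one tick per second: falls -> False
```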

See, no one is arguing that the slowed-down software would still perform its intended function. No one would argue that a super-slow computer brain would pass a real-time Turing test unless we somehow allowed for it by saying, for example, that the messages were being transmitted from Alpha Centauri.

Let's quit using "Turing test" in the same post as "Turing machine". The two are related only by the name of the man who conceived them.
 
Ok, I can only imagine that you are talking about subjective experience when you talk about consciousness. By subjective experience, I mean the "what it's like" stuff, the phenomenology, the qualia, whatever.

If that's the case, then all talk about Turing Machines and computability does not apply. But that also means that there is no argument to make on your part.

Subjective experience is absolutely not observable by outsiders. There is no way for you or anyone else or any machine to see what it's like for me to see red. There is no machine we can devise, no test, no scientific study, that has access to that privileged information. The upshot of this is that no one can say with certainty whether or not an intelligence -- biological, mechanical, electrical, or lithic -- has subjective experience. We assume that humans have it. We can only assume that other creatures have it. There's no good reason to assume that any other sufficiently complex information processor involving feedback loops would NOT have it.

I don't see that as a barrier.

The OP stipulates that we do have a conscious robot. We simply posit that we have one, and we ask "What happens to that robot's experience if we slow down the processing speed?"

We could also imagine this experiment:

Suppose we have a means of slowing down the brain's processing speed to that of a replica TM brain moving at "pencil speed". And somehow we're able to keep the lungs breathing and blood pumping etc.

We set our subject (Joe) in a perfectly dark, silent room and display a red circle on a screen for one second. The circle changes to a blue square for half a second, then back to a red circle for one second, then to a green triangle for half a second, then back to a red circle.

We let Joe out of the room and ask which distinct colors and shapes he saw.

He says, "I saw a red circle, a blue square, and a green triangle".

We write that down and send him back in.

This time, just before we flash the image of the green triangle, we turn on the machine to slow his brain down. We also increase the length of time the green triangle is on the screen accordingly. (I have no idea how long that would be -- perhaps our experimenters have to go into suspended animation during this period, or perhaps their descendants come back to complete it, or maybe robots are doing the test, but we'll ignore that.) Just after the image changes back to a red circle, we turn off the machine and Joe's brain runs at normal speed again.

We let Joe out and ask what he saw.

Does he say "A red circle, a blue square, and a green triangle" or "A red circle and a blue square"?

It all depends on whether his mind can still run the consciousness function at that slow a processing speed.

We can imagine the same experiment with our robot Jane, slowing down her computer brain's speed to that of an analogous TM running at "pencil speed" while the green triangle is being displayed for a correspondingly longer period of time.

What does she say she saw?
 
Oops. I got sidetracked.

What I meant to include is that because it is beyond an event horizon of a sort, there is very little we can say about it at all. Any assertions about its dependency on timing, or clock speed, or physical substrate must be the merest speculation.

I am certain that your arguments and the articles you are going to cite have no bearing on subjective experience, but only on the computability/intelligence aspect of 'consciousness'. It seems, still, that you are confusing the two.

No, they're about subjective experience. For instance, what are we aware of and what are we not aware of even though other parts of the brain are processing that information (e.g. subliminal images, the irrelevant gorilla); how the brain can take very small breaks from awareness without any apparent (subjective) break in consciousness; how the brain uses schema to fill in what we don't actually perceive in order to create the sensation of completeness; how different modules of the brain "disconnect" when we're in dreamless sleep but "reconnect" when we're awake or dreaming; how our sense of time speeds up as we age, but slows down temporarily when we're in immediate danger; how our brains pre-process and "triage" information before serving up selected bits of it to conscious awareness (e.g. the cocktail party effect); etc.
 
Does he say "A red circle, a blue square, and a green triangle" or "A red circle and a blue square"?

According to your experimental setup, there is no reason to think he would NOT see the green triangle. All that's happening is that the light from the green triangle enters the pupil, hits some rods and cones, activates the optic nerve...and at that point just moves slower through the brain and causes him to report seeing a green triangle.

Whatever you are trying to demonstrate about speed being a factor, you didn't show it here. A relatively dumb electronic system (camera, flash memory, and an output monitor) could report "seeing" the green triangle, no matter what speed you ran it at.

Anyway, we're back to computability. Your experiment has nothing to do with subjective experience.
 
According to your experimental setup, there is no reason to think he would NOT see the green triangle. All that's happening is that the light from the green triangle enters the pupil, hits some rods and cones, activates the optic nerve...and at that point just moves slower through the brain and causes him to report seeing a green triangle.

Whatever you are trying to demonstrate about speed being a factor, you didn't show it here. A relatively dumb electronic system (camera, flash memory, and an output monitor) could report "seeing" the green triangle, no matter what speed you ran it at.

Anyway, we're back to computability. Your experiment has nothing to do with subjective experience.

You're right that I didn't show anything about speed being a factor -- I don't have enough blocks of time right now to write up that post, as I've said before. So that'll have to wait.

I just wanted to propose the thought experiment. I'll try to answer the question myself later, in terms of how the brain generates conscious experience.

But yes, this has everything to do with subjective experience.

Your thumbnail isn't quite correct:

All that's happening is that the light from the green triangle enters the pupil, hits some rods and cones, activates the optic nerve...and at that point just moves slower through the brain and causes him to report seeing a green triangle.

What's missing is that, in order for him to be able to report seeing the green triangle, he has to be aware that he saw it.

In order for that to happen, the brain has to make "green triangle" available to the parts of the brain that control conscious experience.

If it doesn't, he will be unaware of seeing it -- even though parts of his brain processed the information -- and he'll report just the red circle and the blue square (even though, immediately after the experiment, he will be able to recognize a green triangle in a lineup of colored shapes more rapidly than a yellow pentagon).

In other words, seeing it and being aware you saw it are not necessarily the same thing.

Anyway, more on the speed thing later.
 
