Robot consciousness

Yes, but I can prove it. It's implicit in the definition of "computable" as "computable by a Turing machine" since Turing machines are state machines. The time it takes to transition from state to state does not affect the input-output relationship, which is how TM computability is defined.
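Here's a toy sketch of that speed-invariance point (my own illustration, not anyone's real code; the delay knob stands in for how fast the hardware runs):

```python
import time

# Transition table for a toy state machine: (state, symbol) -> (state, output)
TABLE = {
    ("q0", "a"): ("q1", "0"),
    ("q0", "b"): ("q0", "1"),
    ("q1", "a"): ("q0", "1"),
    ("q1", "b"): ("q1", "0"),
}

def run(tape, delay=0.0):
    state, out = "q0", []
    for symbol in tape:
        time.sleep(delay)   # how long each transition takes...
        state, o = TABLE[(state, symbol)]
        out.append(o)       # ...has no effect on what gets computed
    return "".join(out)

# Same input-output relation at any speed:
assert run("abba") == run("abba", delay=0.05)
```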

I'm curious about this: Could a TM be designed (theoretically) that was computationally equivalent to my car?
 
What's missing is that, in order for him to be able to report seeing the green triangle, he has to be aware that he saw it.

No, any dumb system can report seeing the green triangle. You still seem to be confusing the qualia in the brain with the behaviors associated with consciousness.

In fact, we "report" stuff unconsciously all the time. Yelling out an old lover's name during sex. The "tells" that human lie detectors use. Twitches, reflexes, Tourette's syndrome.

It's all captured rather well in a quote from E.M. Forster: "How do I know what I think, until I see what I say?"
 
I'm curious about this: Could a TM be designed (theoretically) that was computationally equivalent to my car?

If what you mean is: could a computer-simulated car be written such that its simulated performance on a simulated road, using simulated fuel, with a simulated driver, would be comparable to the real car's?

Then: most certainly. And down to an arbitrary level of precision--possibly barring quantum effects, but I doubt there are any that survive the aggregation into macro-scale behaviors.
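For instance (a deliberately crude sketch of mine, with made-up constants), you can push a simulated car's behavior toward the continuous ideal just by shrinking the time step:

```python
# dv/dt = thrust - drag * v, integrated with a simple Euler step.
def simulate(dt, seconds=10.0, thrust=3.0, drag=0.3):
    v, t = 0.0, 0.0
    while t < seconds:
        v += (thrust - drag * v) * dt
        t += dt
    return v

for dt in (1.0, 0.1, 0.001):
    print(dt, simulate(dt))   # converges as dt shrinks: "arbitrary precision"
```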
 
Keep in mind that the human brain isn't limited to state-based computing. It's also a connection-based machine whose physical configuration can change based on input and past computation.

Not relevant, I'm afraid. We were asked to explicitly assume that a conscious robot existed, ergo that consciousness is computable.

Which is equivalent to assuming that the human brain -- or at least the consciousness creating parts of it -- can be successfully and fully implemented in a Turing machine.

If you want to argue that consciousness is not computable, you're in good company. But it's not the discussion you're in right now, and your argument is out of place.
 
Does he say "A red circle, a blue square, and a green triangle" or "A red circle and a blue square"?
You are again letting perception (eyes) run at full speed, while cognition (consciousness) runs desynchronized at a much slower speed. This will obviously screw up the computational results, but that doesn't have much to do with the question at hand.
The cognition/control software of a vision-based industrial picking robot, for example, will completely fail if you run it at 0.1 Hz instead of the, say, 60 Hz it was designed for. But that doesn't tell us anything about consciousness.
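A toy version of that failure mode (my own sketch, made-up numbers): the controller samples a target that keeps moving in real time, so the loop rate matters enormously.

```python
def track(hz, belt_speed=0.5, seconds=10.0):
    dt = 1.0 / hz
    gripper, t = 0.0, 0.0
    while t < seconds:
        target = belt_speed * t               # the world moves in real time
        gripper += 0.8 * (target - gripper)   # step toward where it was
        t += dt
    return abs(belt_speed * seconds - gripper)  # final tracking error

print(track(60.0))   # tiny error: the controller keeps up
print(track(0.1))    # huge error: the world outran the controller
```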

If you agree that consciousness is computable, then by definition it has to be computation-speed invariant. So by saying it is not speed-invariant, you're implying it is not computable. You can't have it both ways; please choose :)
 
This is all irrelevant.

It's relevant because it means consciousness is dependent on the nature of the test being used to determine whether it exists or not.

See, no one is arguing that the software would perform its intended function. No one would argue that a super-slow computer brain would pass a real-time Turing test unless we somehow allowed for it by saying, for example, that the messages were being transmitted from Alpha Centauri.

Your example is lacking because the configuration of the connections in, and therefore the FUNCTION of, a neural network changes based on the speed of the input you feed it.
 
Your example is lacking because the configuration of the connections in, and therefore the FUNCTION of, a neural network changes based on the speed of the input you feed it.

My point is that no one is arguing for a system where the input speed and processor speed are mismatched. Even Piggy said this:
This time, just before we flash the image of the green triangle, we turn on the machine to slow his brain down. We also increase the length of time the green triangle is on the screen accordingly.
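That matched scaling is the whole trick. Here's a sketch of mine (hypothetical thresholds, scaled-down sleeps) of why stretching the stimulus and the clock together changes nothing:

```python
import time

def perceive(frames, clock_scale=1.0):
    # Each frame: (display duration in the brain's own time, stimulus).
    # clock_scale stretches wall-clock time for screen AND brain together,
    # so the input-output relation is untouched.
    report = []
    for duration, stimulus in frames:
        time.sleep(duration * clock_scale * 0.001)   # scaled wall-clock time
        if duration >= 0.1:   # long enough (in brain time) to register
            report.append(stimulus)
    return report

frames = [(0.05, "red circle"), (0.2, "green triangle")]
assert perceive(frames) == perceive(frames, clock_scale=100.0)
```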
 
We can imagine the same experiment with our robot Jane, slowing down her computer brain's speed to that of an analogous TM running at "pencil speed" while the green triangle is being displayed for a correspondingly longer period of time.

What does she say she saw?

In your example the computer should arrive at the same answer, but there are other tests. In particular, if you slowed down the computer while you were teaching it what a green triangle is, it may not be able to identify one no matter what speed you displayed it at.
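One way that could happen (my own contrived sketch, not a claim about real brains): if the learning rule keys on wall-clock coincidence and the slowdown isn't applied to the rule's time window too, stimulus and label stop co-occurring and nothing gets learned.

```python
WINDOW = 0.05   # seconds within which stimulus and label must co-occur

def learns_association(stimulus_t, label_t):
    # Hebbian-style coincidence detection with a fixed wall-clock window.
    return abs(stimulus_t - label_t) <= WINDOW

print(learns_association(0.00, 0.02))   # True: learned at normal speed
print(learns_association(0.00, 2.00))   # False: the slowdown stretched the gap
```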
 
No, any dumb system can report seeing the green triangle. You still seem to be confusing the qualia in the brain with the behaviors associated with consciousness.

In fact, we "report" stuff unconsciously all the time. Yelling out an old lover's name during sex. The "tells" that human lie detectors use. Twitches, reflexes, Tourette's syndrome.

It's all captured rather well in a quote from E.M. Forster: "How do I know what I think, until I see what I say?"

On the contrary, I'm the one recognizing the distinction.

Yes, a dumb system can be programmed to type or say "I see a green triangle" when it sees a green triangle.

But that's not the point, is it?

Because that's not the experimental situation that's being proposed.

What's being proposed is to take a conscious human being and have him look at a screen and report what he sees.

This is not a test to determine if this is a conscious being or a dumb machine. We already know this is a conscious person.

However, we know that people are aware of seeing what they see under some circumstances but not in others.

For instance, if we flash the green triangle too quickly, the person will still see it, but won't be conscious of having seen it.

In other circumstances, you can get people to be consciously unaware of a gorilla beating its chest in the middle of a basketball game, even though we know their eyes picked up the light coming from the gorilla (i.e., they viewed it).

The purpose of this experiment would be to test if Joe was aware of having seen the green triangle. We could do the same for Jane, our conscious robot (we stipulate that this robot is conscious).
 
In your example the computer should arrive at the same answer, but there are other tests. In particular, if you slowed down the computer while you were teaching it what a green triangle is, it may not be able to identify one no matter what speed you displayed it at.

...?
 
I'm curious about this: Could a TM be designed (theoretically) that was computationally equivalent to my car?

What are your car's inputs? What are its outputs? How are they related?

In a rather superficial sense, a car has NO inputs and NO outputs -- it just sits there. (Put a CD on top of a car and see what happens to it.) In this model, a car is computationally equivalent to a rock or a brick or a piece of toast. And, yes, it's very easy to make a TM that models a brick.

If you have a more involved model of what a car is (and how it differs from a brick), then you will need to define an input alphabet, an output alphabet, a language of valid inputs, and so on. If you do so, then there's no reason short of mechanical malfunction or QM effects that you can't model your car with a TM.
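To make that concrete, here's a toy formalization of mine (hypothetical alphabets, nothing like a real car): input alphabet {turn_key, press_gas, release_gas}, output alphabet {silent, idling, moving}.

```python
# (state, input symbol) -> (next state, output symbol)
CAR = {
    ("off",    "turn_key"):    ("idling", "idling"),
    ("idling", "turn_key"):    ("off",    "silent"),
    ("idling", "press_gas"):   ("moving", "moving"),
    ("moving", "release_gas"): ("idling", "idling"),
}

def drive(inputs, state="off"):
    outputs = []
    for symbol in inputs:
        state, out = CAR.get((state, symbol), (state, "silent"))
        outputs.append(out)
    return outputs

print(drive(["turn_key", "press_gas", "release_gas"]))
# ['idling', 'moving', 'idling']
```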
 
That's not quite how I imagined it. I was thinking more along the lines of this....

Suppose we were somehow able to slow the movement of the impulse along the axon so that it took an average of 1 second to move along the length. Suppose the synapses work the same as always and everything remains coordinated.

Would that be the same as running the replica TM brain at one calculation per second?

No, it probably wouldn't. One TM calculation isn't necessarily equivalent to one neural calculation, for one thing. If we were modeling a brain on a TM, we could only model one neuron at a time, which breaks the direct isomorphism right there (real brains are parallel; parallel TMs are equivalent to serial TMs, but only by serial simulation).
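Roughly like this (my illustration): a serial machine visits "parallel" neurons one at a time, and it only reproduces the simultaneous update by freezing the previous tick's state first.

```python
def step(weights, activations):
    prev = list(activations)               # freeze last tick's state
    for i, row in enumerate(weights):      # then visit neurons serially
        total = sum(w * a for w, a in zip(row, prev))
        activations[i] = 1.0 if total > 0.5 else 0.0
    return activations

w = [[0.0, 1.0],    # neuron 0 listens to neuron 1
     [1.0, 0.0]]    # neuron 1 listens to neuron 0
print(step(w, [1.0, 0.0]))   # [0.0, 1.0]: the tick-level result, computed serially
```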
 
Not relevant, I'm afraid. We were asked to explicitly assume that a conscious robot existed, ergo that consciousness is computable.

Which is equivalent to assuming that the human brain -- or at least the consciousness creating parts of it -- can be successfully and fully implemented in a Turing machine.

If you want to argue that consciousness is not computable, you're in good company. But it's not the discussion you're in right now, and your argument is out of place.

But we already know that consciousness is computable (accepting y'all's definition, which is fine by me) because our brains are conscious and they're computable.

However, when you say that consciousness can be "implemented" in a TM, are you claiming equivalence or identity?

I assume you could imagine a theoretical TM that was the computational equivalent of the system that is my car. This TM would symbolically represent every atom, and given the right inputs it would compute all the behavior of my car starting up and driving.

But that would be entirely symbolic. There would be no real "driving down the road" going on.

On the other hand, I could build a scale model out of different materials, using an electric motor instead of a combustion engine, and actually run that sucker around. As long as it did the macro-level jobs the same -- turned the wheels and such -- it wouldn't matter that a TM that's computationally equivalent to it would be different from the TM that's computationally equivalent to my car.

Same situation with a conscious human with a wet brain and a conscious robot with a computer brain.

If we build TMs that are computationally equivalent to each of these, they aren't exactly the same. But neither of them instantiates consciousness.

Let's say we run a computer simulation of my car driving down the road. As I said, there's no "driving down the road" going on in the real world, but if a human views the simulation, it reminds us of it, and we can tweak parameters to see what would happen -- we could simulate lowering the idle speed, for example, to see when it stalls.

But if all the people leave the room, and there's just a cat and dog there, they're in no danger of getting hit by the car, and they don't perceive any car.

On the other hand, if I have my electric scale model, even though it's not computationally equivalent to my car, it does actually drive like my car, and it can hit the dog or cat, and they'll react to it.

So our hypothetical conscious robot is like our electric model car. Because it does the same things on a macro level that our brains do when they generate consciousness (albeit using different materials, and even if it's not doing exactly the same thing on smaller scales), it, too, generates consciousness in reality.

But the TM is only equivalent. It can describe what happens when the brain, say, is consciously aware of seeing a green triangle. But we have no reason to believe that this makes the TM apparatus self-aware.
 
You are again letting perception (eyes) run at full speed, while cognition (consciousness) runs desynchronized at a much slower speed.

No, not in this case. Eyes are part of the brain, wired right into it. Remember, what's posited is that impulses traverse axons in an average time of 1 second. That includes the visual nerves.
 
But we already know that consciousness is computable (accepting y'all's definition, which is fine by me) because our brains are conscious and they're computable.

However, when you say that consciousness can be "implemented" in a TM, are you claiming equivalence or identity?

In this case, identity. Because you're assuming in the paragraphs above that a robot is available that is conscious, not just that simulates consciousness.

Remember that a Turing machine by itself is just hardware, and a Turing machine can do anything computable if appropriately programmed. If you have a Turing machine that is conscious, then it is conscious largely by virtue of the program it is running. In this sense, consciousness is "implemented" (it wouldn't be there if a different program were running) but genuine -- by assumption.
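In miniature (a toy of mine, nothing more): the same "hardware" does completely different things depending on the program table you hand it.

```python
def machine(program, tape):
    state, pos = "start", 0
    while state != "halt" and 0 <= pos < len(tape):
        state, write, move = program[(state, tape[pos])]
        tape[pos] = write
        pos += move
    return tape

invert = {("start", 0): ("start", 1, 1),
          ("start", 1): ("start", 0, 1)}
print(machine(invert, [1, 0, 1]))   # [0, 1, 0] -- the behavior lives in the program
```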


But the TM is only equivalent. It can describe what happens when the brain, say, is consciously aware of seeing a green triangle. But we have no reason to believe that this makes the TM apparatus self-aware.

Then you're contradicting yourself; you're no longer assuming the existence of a conscious robot.
 
What are your car's inputs? What are its outputs? How are they related?

In a rather superficial sense, a car has NO inputs and NO outputs -- it just sits there. (Put a CD on top of a car and see what happens to it.) In this model, a car is computationally equivalent to a rock or a brick or a piece of toast. And, yes, it's very easy to make a TM that models a brick.

If you have a more involved model of what a car is (and how it differs from a brick), then you will need to define an input alphabet, an output alphabet, a language of valid inputs, and so on. If you do so, then there's no reason short of mechanical malfunction or QM effects that you can't model your car with a TM.

That's what I thought.

We could model it with a TM right down to the molecules.

The inputs would be, for instance, the act of turning the key in the ignition and stepping on the gas pedal.
 
In this case, identity. Because you're assuming in the paragraphs above that a robot is available that is conscious, not just that simulates consciousness.

Then you're contradicting yourself; you're no longer assuming the existence of a conscious robot.

The confusion is yours.

You're assuming, it seems, that the TM is a conscious robot.

I'm assuming that we have a robot with a computer brain, or some sort of brain, that produces consciousness. At the nuts and bolts (or wires and chips) level, it's not the same as a neuron brain. But at a higher level of organization, it does what the human brain does to generate consciousness.

A TM which symbolically describes either the man or the robot is equivalent. But we have no reason to believe that it will be conscious.
 
The confusion is yours.

You're assuming, it seems, that the TM is a conscious robot.

Wrong direction. I assume that the conscious robot is a TM.

Because the robot's brain is a computer, it must be a TM. No other sort of computer exists.
 
I was replying to this quote from you:

What's missing is that, in order for him to be able to report seeing the green triangle, he has to be aware that he saw it.

It's not logically necessary for a person (or anything else) to have subjective experience of a green triangle to report seeing a green triangle, no matter what you assert.

My point was that your whole experiment is ONLY about information processing. Positing some additional level of "awareness" over and above information processing violates Occam's Razor and opens the door to dualism.

Anyway, I'll have to come back later today to keep playing. I need to focus on work for at least a little while today...
 
Wrong direction. I assume that the conscious robot is a TM.

Because the robot's brain is a computer, it must be a TM. No other sort of computer exists.

Ok, fine.

But that gets us no further along.

It still doesn't answer our question.

I'll have to wait for the longer post to explain why.
 
