• Quick note - the problem with Youtube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Robot consciousness

Ok, fine.

But that gets us no further along.

It still doesn't answer our question.

Yes, it does. You just don't want to admit it.

By assumption, the robot is conscious
Since the robot is conscious, the computer that is the robot's brain is conscious
Since the computer is a TM, the TM that is the robot's brain is conscious
Since a TM can be single-stepped without loss of functionality, the conscious robot can be single-stepped without loss of functionality.

Quod erat demonstrandum
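The single-stepping premise can be made concrete. Here's a minimal sketch (the machine is my own toy example, not anything from the thread): a tiny Turing machine that flips the bits of its input, driven by a step() function that executes exactly one transition per call. Any amount of real time may pass between calls; the final tape is the same either way.

```python
from collections import defaultdict

# Transition table: (state, symbol) -> (write, head_move, next_state).
RULES = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),   # blank square: stop
}

def make_tm(inp):
    tape = defaultdict(lambda: "_")    # blank everywhere except the input
    for i, ch in enumerate(inp):
        tape[i] = ch
    return {"state": "flip", "head": 0, "tape": tape}

def step(m):
    """Execute exactly one transition; return False once halted."""
    if m["state"] == "halt":
        return False
    write, move, nxt = RULES[(m["state"], m["tape"][m["head"]])]
    m["tape"][m["head"]] = write
    m["head"] += move
    m["state"] = nxt
    return True

m = make_tm("1011")
while step(m):   # single-stepping: arbitrary pauses could be inserted here
    pass
result = "".join(m["tape"][i] for i in range(4))
print(result)    # "0100" -- the same output no matter how slowly we stepped
```

Nothing in the machine's definition refers to wall-clock time, which is why pausing between transitions can't change the computed result.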
 
I was replying to this quote from you:

It's not logically necessary for a person (or anything else) to have subjective experience of a green triangle to report seeing a green triangle, no matter what you assert.

My point was that your whole experiment is ONLY about information processing. Positing some additional level of "awareness" over and above information processing violates Occam's Razor and opens the door to dualism.

Anyway, I'll have to come back later today to keep playing. I need to focus on work for at least a little while today...

I know what you were replying to. But your statement is a non sequitur. (It's also patently absurd, and contrary to experimental results, to contend that people will tell you they saw a green triangle they weren't conscious of seeing.)

There is no invitation to dualism here.

Let's do a similar but simpler experiment.

We take Joe and 99 other humans and sit them, one by one, in front of a screen and flash pictures of animals on the screen at different speeds.

We show lions, tigers, and bears for enough time to be consciously perceived, and peacocks, rhinos, and crocodiles at subliminal speeds.

We ask these people what they saw. They all report lions, tigers, and bears.

No one reports seeing peacocks, rhinos, and crocodiles. And yet, when asked immediately after the experiment to list as many animals as they can in 30 seconds, we find that all 100 subjects list at least 2 of these animals, and most list all 3. A control group that saw no subliminal images has a much lower rate of listing any of them.

That's the difference between info processing and conscious awareness.

People don't report what they weren't aware of seeing, even if they did see it.

Here are some studies:


Subliminal Messages Can Influence People In Surprising Ways
Subliminal Advertising Leaves Its Mark On The Brain
Subliminal Messages Motivate People To Actually Do Things They Already Wanted To Do
Subliminal Smells Bias Perception About A Person's Likeability
Subliminal Learning Demonstrated In Human Brain

If we do the experiment as I've described, and Joe does not report seeing the green triangle, it means he was not conscious of seeing it. If he reports seeing it, then he was conscious of seeing it.

Unless there's some process blocking memory, which I don't think you're proposing.
 
Yes, it does. You just don't want to admit it.

By assumption, the robot is conscious
Since the robot is conscious, the computer that is the robot's brain is conscious
Since the computer is a TM, the TM that is the robot's brain is conscious
Since a TM can be single-stepped without loss of functionality, the conscious robot can be single-stepped without loss of functionality.

Quod erat demonstrandum

Not at all. More to come.
 
Not at all. More to come.

All right, then. Just quickly, indicate which of the following premises is false:

1) The robot is conscious
2) The robot's brain is a computer
3) All computers are TMs
4) All TMs can be single-stepped without loss of functionality.

I don't need your "more to come," just a single number. If those four statements are true -- and the first two are the assumptions you yourself asked us to make, while the last two are provable from the definitions of "computable" -- then the conclusion follows. So which one of the four is wrong?
 
Wrong direction. I assume that the conscious robot is a TM.

Because the robot's brain is a computer, it must be a TM. No other sort of computer exists.

Neural networks are not necessarily Turing Machines. They can be simulated on Turing Machines, but if the Turing Machine isn't fast enough this may not occur in real time. If your test/definition of consciousness demands real time, then the available processing power of the underlying Turing Machine is an issue, and speeding it up or slowing it down will change your results.
 
I assume you could imagine a theoretical TM that was the computational equivalent of the system that is my car. This TM would symbolically equate to every atom, and given the right inputs it would compute all the behavior of my car starting up and driving.

But that would be entirely symbolic. There would be no real "driving down the road" going on.
What you seem to be missing is that information processing is the key here. A brain is mainly an information processor, as is a TM. But a car isn't-- its main function is moving people.

So, is there anything that you would recognize as being conscious that isn't in the form of information? If not, then there's nothing about it that isn't computable. The sampling rate and synchronization of information make a difference in how that information appears to the processor, but that sampling can be considered part of the environment. Time itself also needs to be considered as another input.

Also, there's no difference between simulating information processing and actual information processing: using a full simulation of a calculator to add 2 and 2 gives "4", which is as real a "4" as any.
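That calculator point can be shown directly. Below is a full interpreter ("simulation") of a tiny calculator language of my own invention: the 4 it produces from "2 + 2" is a perfectly ordinary 4, indistinguishable from any other.

```python
def calc(expr):
    """Evaluate a string like '2 + 2' -- a complete (if tiny) simulated calculator."""
    left, op, right = expr.split()
    a, b = int(left), int(right)
    if op == "+":
        return a + b
    if op == "-":
        return a - b
    raise ValueError(f"unknown operator: {op}")

print(calc("2 + 2"))  # prints 4
print(calc("2 + 2") == 4)  # prints True: the simulated result IS the number 4
```

The simulated addition and "real" addition yield the very same object, which is the sense in which simulated information processing just is information processing.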
 
Here are some studies:

None of this indicates that consciousness is not information processing--which is what you seem to be saying.

All those studies show is that our information processing is imperfect.

If consciousness is not information processing (i.e. computable, implementable on a Turing Machine), then what is it?

Not at all. More to come.

Give us a hint--which of these premises do you intend to refute?
 
All right, then. Just quickly, indicate which of the following premises is false:

1) The robot is conscious


False.

2) The robot's brain is a computer


Non sequitur. Brains produce consciousness, and no-one really knows exactly how neurons create consciousness (dreaming, NDEs, hibernation, meditation, the mind > brain relationship, etc., being some of the hard-to-explain phenomena out of many). Computers are completely a result of our consciousness. They will never do anything more than we program them to, whether they are performing random actions, mathematical algorithms, fractal patterns, or anything.

3) All computers are TMs


hmmm, nope. They can be explained by ... maybe ... but they are not all TMs necessarily. Maybe I'm being pedantic here...

4) All TMs can be single-stepped without loss of functionality.


In a finite amount of time TMs can only manipulate a finite amount of data. I'm not exactly sure what you mean by single-stepped though, and how this is at all relevant to consciousness.

I (as I usually do) agree with Penrose on this one. I also find his and Hameroff's alternative model of consciousness rather intriguing, though just as lacking in empirical evidence as 'standard' mechanistic models of consciousness. So in my opinion no-one has answered any of these queries definitively. Thus this is likely going to be a very long thread :)

Turing machine http://encyclopedia2.thefreedictionary.com/Infinite-time+Turing+machine
Sir Roger Penrose of Oxford University has argued that the brain can compute things that a Turing Machine cannot, which would mean that it would be impossible to create artificial intelligence
 

This was directed at Piggy, who already conceded the point, which was why drkitten used the premise. Way to jump in and fail.

Non sequitur.

Do you know what a non sequitur is? I'll give you a hint: it has to do with logical truth, not contingent truth.

hmmm, nope.

Go read someone other than Penrose.

I'm not exactly sure what you mean by single-stepped though, and how this is at all relevant to consciousness.

That's because you jumped into the thread without having a clue. And yes, I'm aware you posted earlier, but those were just as obtuse.
 
All right, then. Just quickly, indicate which of the following premises is false:

1) The robot is conscious
2) The robot's brain is a computer
3) All computers are TMs
4) All TMs can be single-stepped without loss of functionality.

I don't need your "more to come," just a single number. If those four statements are true -- and the first two are the assumptions you yourself asked us to make, while the last two are provable from the definitions of "computable" -- then the conclusion follows. So which one of the four is wrong?

Ok, I might as well knock this out.

2.

The assumption here is that when we do create artificial consciousness, it will be with the kind of computer we have now, stand-alone.

I don't doubt that, given world enough and time, we should be able to produce AC. After all, the brain is just physical stuff and it does it.

But since we haven't done it yet w/ current computers, we don't yet know if they, by themselves, are up to the job.

You have to admit, if it's true that all of our computers are TMs, and TMs can do everything they can do at any operating speed, and it turns out that we can deduce that the brain can't be conscious at any and every operating speed, we'd have to conclude that our current TM computers, all alone, will never produce consciousness.

Now I might be wrong about the speed thing. It might turn out that our brains would be conscious if we slowed things way down, and that our robot brain would be conscious at any operating speed. I might even find a glaring error in my current thinking tonight and come back and say "Whoops, got it wrong."

But because of what we don't yet know, that is not QED.
 
Computers are completely a result of our consciousness. They will never do anything more than we program them to, whether they are performing random actions, mathematical algorithms, fractal patterns, or anything.
Is free will a necessary condition for consciousness?
 
What you seem to be missing is that information processing is the key here. A brain is mainly an information processor, as is a TM. But a car isn't-- its main function is moving people.

So, is there anything that you would recognize as being conscious that isn't in the form of information?

What we do know is that the brain is a physical object and that it somehow generates consciousness in certain animals that walk around in the real world.

We can describe its "main function" as "information processing" but that doesn't mean that the kinds of computers we have now will be able to generate consciousness. Maybe, but maybe not.

Is "a sense of experiencing the world" a kind of "information"? Heck if I know.
 

If you disagree with premise #1, then you're on the wrong thread.

I (as I usually do) agree with Penrose on this one.

Yes, I figured as much. There are dozens of I-know-nothing-of-QM-but-want-to-post-anyway threads already. Why don't you go join one of them instead?
 
Ok, I might as well knock this out.

2.

Fair enough. If you're asking us to assume the existence of a non-computable robot, then there's no conclusion that can be drawn about any of its properties. Magic leprechauns can do anything they like.
 
Ok, I might as well knock this out.

2.

The assumption here is that when we do create artificial consciousness, it will be with the kind of computer we have now, stand-alone.

I don't doubt that, given world enough and time, we should be able to produce AC. After all, the brain is just physical stuff and it does it.

But since we haven't done it yet w/ current computers, we don't yet know if they, by themselves, are up to the job.

You have to admit, if it's true that all of our computers are TMs, and TMs can do everything they can do at any operating speed, and it turns out that we can deduce that the brain can't be conscious at any and every operating speed, we'd have to conclude that our current TM computers, all alone, will never produce consciousness.

Now I might be wrong about the speed thing. It might turn out that our brains would be conscious if we slowed things way down, and that our robot brain would be conscious at any operating speed. I might even find a glaring error in my current thinking tonight and come back and say "Whoops, got it wrong."

But because of what we don't yet know, that is not QED.
Piggy,

Surely all your evidence presented so far has been to do with slowing down/speeding up the inputs to the brain? What have these examples got to do with slowing down both the inputs and the brain together?
 
None of this indicates that consciousness is not information processing--which is what you seem to be saying.

All those studies show is that our information processing is imperfect.

If consciousness is not information processing (i.e. computable, implementable on a Turing Machine), then what is it?

Now you're off on a garden path.

Back to our experiment with Joe.

We know from experimentation that not all of the data processed by the brain is made available to consciousness.

If Joe comes out of the room and says he saw a green triangle, we conclude he was aware of it. If he only reports the red circle and blue square, we conclude he wasn't consciously aware of seeing the green triangle.

That's all.

So the question is, if we did this with Joe, or if we did the experiment with Jane the robot and slowed down her operating speed similarly, would they report seeing the green triangle or not?

That would tell us whether they were consciously aware of it or not.

Given the length of time it's on the screen, they should report seeing it, unless the brain-zapper or the slowing of the processing speed made consciousness fail. (Or if it prevented encoding the experience into memory.)

I'll be arguing that we should expect Joe not to report seeing the green triangle -- unless when I get it all on paper my thinking falls apart and I have to change my conclusion.
 
Fair enough. If you're asking us to assume the existence of a non-computable robot, then there's no conclusion that can be drawn about any of its properties. Magic leprechauns can do anything they like.

As I understand how y'all are using the term "computable", then everything in the universe is computable, including our conscious robot.

That does not mean that the types of computers we now have will eventually be able to simulate consciousness by themselves.

If you claim that you can prove that they can, shouldn't you be up for the Nobel?
 
Piggy,

Surely all your evidence presented so far has been to do with slowing down/speeding up the inputs to the brain? What have these examples got to do with slowing down both the inputs and the brain together?

Nothing has been said about slowing down the inputs.

If you do that, you're into roger's time dilation scenario.

We're talking about a conscious robot with a slowed brain in the real world.

Even if the robot is dreaming, physical reality goes on as usual.
 
As I understand how y'all are using the term "computable", then everything in the universe is computable,

Then I guess you don't understand the term. There are a lot of well-known uncomputable problems.
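The halting problem is the canonical example, and Turing's diagonal argument against it can be sketched in a few lines of code. The defeat() function below is my own illustration: given any candidate halting-decider, it constructs the program that the candidate must answer wrongly about.

```python
def defeat(halts):
    """Given any candidate halting-decider halts(prog) -> bool, build the
    classic diagonal program that the candidate must get wrong."""
    def diag():
        if halts(diag):    # if the decider says diag halts...
            while True:    # ...diag loops forever,
                pass
        # ...otherwise diag returns immediately.
    return diag

# An always-"doesn't halt" decider is wrong about its own diagonal program:
always_no = lambda prog: False
d = defeat(always_no)
d()  # returns at once -- so always_no was wrong about d

# An always-"halts" decider fails too: it claims defeat(always_yes) halts,
# but that program would loop forever. The same construction defeats every
# possible decider, which is why the halting problem is uncomputable.
```

No matter how clever the candidate decider is, defeat() hands it a program it misjudges, so no total, always-correct halting decider can exist.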
 
