
My take on why the study of consciousness may not be as simple as it seems:

Systems and virtual mind replies: finding the mind
Systems reply. The "systems reply" argues that it is the whole system that understands Chinese and experiences consciousness: the room, the book, the man, the paper, the pencil and the filing cabinets.
This was one of the objections I raised to the Chinese Room argument when you raised it initially, several pages back.

Why are you raising it now when you know that I am aware of this objection to the CR argument and had introduced it to this thread ages ago?

Or don't you read my posts?

So what is your point?

Here was my reply to your original Searle post:
Robin said:
I think Searle just made a blunder when he said that the Chinese Room does not understand Chinese because the people operating it did not understand Chinese.

That would not seem to be relevant since they are just part of the machinery in this case.

Also, as far as I recall, Searle did not provide a definition of "understand".

Personally I have no trouble saying the Chinese Room understands Chinese, just so long as it is actually able to carry on a conversation in Chinese - even if in very slow motion.

Similarly I would have no problem saying that a desk checked program understands just so long as it is able to meet the behavioural criteria at whatever speed.

I would baulk, however, at any certainty about either being conscious.
 
I don't know. Though real water can't make virtual flowers grow, and vice-versa, are programs modeling consciousness conscious, by definition?

Well, is a simulated decision a real decision?

Obviously, the water in Second Life won't water my real garden.

But if I "program" a "computer" in Second Life to play chess, based on a sufficiently detailed simulation of an actual chessboard (involving simulated pieces, simulated moves, and what not), that simulation can play chess in the real world.
 
drkitten said:
Anything that is capable of computing -- of performing arithmetic, really -- is either equivalent to a Turing machine or is less powerful than a Turing machine.

Since humans are capable of performing arithmetic, they are either equivalent to or less powerful than a Turing machine.
What? Why can't humans be more powerful than a Turing machine? Anything more powerful can perform arithmetic.

~~ Paul
 
So the question is, can you design a system that is capable of running an algorithm, but which will do what an algorithm won't do?

Isn't this like designing a physical system that won't obey the laws of physics?
 
What? Why can't humans be more powerful than a Turing machine?

Because Turing machines are universal (in terms of capacity, not necessarily in performance).

More formally, because nothing not explicitly counterfactual can be more powerful than a Turing machine, and because humans are not counterfactual.
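
The universality claim above can be made concrete: a few lines of ordinary code suffice to simulate any Turing machine, given its rule table. A minimal sketch (the function and rule names here are mine, for illustration only, not anything from the thread); the sample machine adds 1 to a binary number written on the tape.

```python
# Minimal Turing machine simulator (illustrative names: run_tm, rules).
# A rule maps (state, symbol_read) -> (new_state, symbol_written, head_move).
def run_tm(rules, tape, state="start", head=0, blank="_", accept="halt", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == accept:
            return "".join(cells[i] for i in sorted(cells)).strip(blank)
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("step limit exceeded")

# Rules for binary increment: scan to the rightmost digit, then propagate a carry.
rules = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt",  "1", "L"),
    ("carry", "_"): ("halt",  "1", "L"),
}

print(run_tm(rules, "1011"))  # 1011 (eleven) + 1 -> 1100 (twelve)
```

The point of the exercise is that the simulator itself is an ordinary program, so anything a given rule table can do, an ordinary computer can do too.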
 
I don't know. Though real water can't make virtual flowers grow, and vice-versa, are programs modeling consciousness conscious, by definition ?

I think there should be a question mark at the end of that sentence.
 
This was one of the objection I raised to the Chinese Room argument when you raised it initially, several pages back.
I don't know what you mean by "you raised it"?

Why are you raising it now when you know that I am aware of this objection to the CR argument and had introduced it to this thread ages ago?
It's apropos.

I would baulk, however, at any certainty about either being conscious.
Utter nonsense. Searle is making the same appeal to intuition that you are. The Turing test is a test for strong AI. Searle is trying to rebut strong AI.

BTW: Did you know that Searle wrote a book about consciousness? The Mystery of Consciousness by John R. Searle?

Did you know that he references The Chinese Room in the book?
 
No, it's like designing a car that can float.

But no one claimed that cars couldn't float. The claim, however, is that all algorithms behave according to the mathematics of information processing.

So the question asks for someone to design something that is provably impossible to design.
 
drkitten said:
More formally, because nothing can be more powerful than a Turing machine, and because humans are not examples of "nothing."
But the Church-Turing thesis is only a thesis, not a proof. I guess we're assuming it's true.

~~ Paul
 
Robin said:
That's pretty much the heart of the issue. Simulations are just models of a thing; they aren't ontologically identical to what they're intended to model.
Indeed, but sometimes the ontological difference is largely irrelevant. Adding 2 + 2 in a computer program is, for all intents and purposes, equivalent to adding 2 + 2 on your fingers. The question is whether this equivalence holds for consciousness.

~~ Paul
 
Robin said:
Similarly I would have no problem saying that a desk checked program understands just so long as it is able to meet the behavioural criteria at whatever speed.

I would baulk, however, at any certainty about either being conscious.
So what is there about consciousness that is over and above behavior?

~~ Paul
 
That does not seem to be relevant.
Your argument is an appeal to intuition. NOTHING MORE.

You don't offer evidence or proof. You simply say that you don't see how it could work and appeal to us to agree that it wouldn't.

This explains why your appeal to intuition is problematic.

You've not resolved the problem.
 
I don't know what you mean by "you raised it"?
You were, as far as I know, the first to bring up the Chinese Room argument.
It's apropos.
Well, obviously you thought so, but I am asking you why you thought it apropos, especially when I had already brought it up ages ago.
Utter nonsense. Searle is making the same appeal to intuition that you are.
No, Searle is making a specific argument with a specific structure. The person is the computer; the tiles and instructions are the program. He says the person does not understand and yet can converse in Chinese using the program; therefore a computer conversing in Chinese would not understand. Pinker seems to have misunderstood this, and I have read that Searle has corrected this misapprehension.

I, on the other hand, was making an appeal to intuition, and explicitly so (I said so to Paul and I said so to you).
The Turing test is a test for strong AI. Searle is trying to rebut strong AI.
Yes, I know. Every time you say this I say, yes, I know. Will you stop saying it so I can stop saying "yes I know"? It is getting a bit tedious.

Why are you repeating all this stuff, dragging it out?
 
But the Church-Turing thesis is only a thesis, not a proof. I guess we're assuming it's true.

No more than the Theory of Evolution or the Theory of Relativity is "only a theory."

The Church-Turing thesis is backed by proofs that all definitions of information processing so far proposed are equivalent. At this point, the list of actual definitions is quite lengthy. We don't even have a well-formed definition of anything more powerful than a Turing machine -- except for the explicitly counterfactual notion of "oracle computing," which rather blatantly assumes magic.

So Robin and Westprog are in the rather uncomfortable position of asking whether or not it's possible to exceed the speed of light. I have told them, several times, that relativity theory says that it isn't. They then ask if relativity theory applies to angelic unicorns that are defined to have the ability to exceed the speed of light at will.

Relativity theory says that no such creatures exist. Relativity theory even says that such creatures would violate the basic causal structure of the universe. If R&W want to take seriously the possibility of angelic unicorns, they first need to demonstrate that such creatures are even coherent.

Similarly, the mathematics says that no information processor more powerful than a Turing machine exists. At this point, the proponent of angelic unicorn accountants needs to step up with something more sophisticated than "could SO!"
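
The equivalence of computational models that drkitten is pointing at can be shown in miniature: two very different formalisms, ordinary arithmetic and Church numerals from the lambda calculus, compute the same addition function. A toy sketch; the names below (church, add_church, to_int) are illustrative, not standard library functions.

```python
# Model 1: ordinary arithmetic.
def add_int(a, b):
    return a + b

# Model 2: Church numerals, where the number n is represented as the
# function that applies f to x exactly n times.
def church(n):
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

def add_church(m, n):
    # m + n applications of f = m applications after n applications.
    return lambda f: lambda x: m(f)(n(f)(x))

def to_int(c):
    # Decode a Church numeral by counting applications of "add one".
    return c(lambda k: k + 1)(0)

print(add_int(2, 3))                             # 5
print(to_int(add_church(church(2), church(3))))  # 5
```

Both models give the same answer, which is the pattern behind the equivalence proofs: every proposed formalism of computation so far has turned out to compute exactly the same class of functions.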
 
BTW: What do you mean by "it understands"?
Oh for crying out loud are we to go over all this stuff yet again?

By "understand" I mean the ability to take broad information and apply it to specific situations recognising significant deviations.

Thus if the Chinese room is able to pass the Turing test in Chinese, then by my definition it understands Chinese.

If a computer can pass a school comprehension test in English, then it understands English.
 
