
My take on why indeed the study of consciousness may not be as simple

Whooaaaa... Stop, you can't keep going. Are you aware of your wallet at that time? THAT'S the point.
No, I am not aware of my wallet. But are you then saying that understanding=awareness?
This is called memory and it only works when you are conscious.
No it doesn't.
There's no need to keep using the word colloquially.
You brought up the subject.
For some time now I've said you don't need to accept my usage of the word.
But you told me my usage of the word was irrelevant to the argument.
It's not central to my point.
What is your point then? I thought that you were saying that it was completely uncontroversial and plain what Searle meant by "understand". I am saying it is not clear what Searle meant by "understand".
You are being boorish.
In what way am I being boorish?

If boorish means ill-mannered, am I using ill-mannered language towards you, like "silly" and "obtuse"?
Words are not laws of physics.
Whoever said they were?
They are simply a means to convey information. If you object to my usage of a word I'm amenable to forgoing the use of the word.
I am not objecting to your usage of the word. I am trying to find out what your usage is.
I tried to explain it to you and you won't listen to me so fine.
No, I have looked back through all your posts. The only definition you have given is the one that depended on comprehension.

You said that my definition was irrelevant. You said that Searle's definition was completely uncontroversial.

You have called me all sorts of insulting names for not knowing what this uncontroversial definition is.

And yet when I repeatedly ask you to plainly state the definition he is using you avoid the question and act as though I am being unreasonable.

All I ask is that you state "The definition of 'understand' that Searle uses in the Chinese argument is ..."

I never said otherwise. This is a straw man you've invented.
No that is my genuine interpretation of what you are saying. If we do not understand when we are unconscious then understanding goes away when we are unconscious and comes back when we are awake.

But nobody really thinks that. We think of "understanding" as something that stays with us, even when we are not conscious of it, or even conscious at all.
 
I would just like to weigh in that the Chinese Room is an utterly pointless exercise.

It is pointless because nobody in the entire field of A.I. is trying to build true intelligence by faking true intelligence with a huge rule table.

So this argument that you can make a room that converses in Chinese using only a huge rule table is irrelevant, because honestly nobody except for the natural language translation crowd even cares. Well, nobody except for the philosophy professors and grad students whose income depends on making such arguments...
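To make the "huge rule table" point concrete, here is a toy sketch of the kind of pure symbol-in, symbol-out lookup the Room relies on. The table entries and function name are invented for illustration; the point is only that nothing in the mechanism represents meaning:

```python
# Toy sketch of the Chinese Room's rule table: input symbols map to
# output symbols with no model of meaning anywhere in the mechanism.
# The entries here are invented for illustration.
RULE_TABLE = {
    "你好": "你好！",            # "hello" -> "hello!"
    "你会说中文吗？": "会。",     # "do you speak Chinese?" -> "yes."
}

def chinese_room(symbols: str) -> str:
    """Return the scripted reply for an input string, if the table has one."""
    return RULE_TABLE.get(symbols, "请再说一遍。")  # "please say that again."
```

A table covering real conversation would need astronomically many entries, which is one practical reason nobody in A.I. actually builds systems this way.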
 
"Feel"? Who said anything about "feel"?
Me. That was the whole point of my argument about the desk-checked algorithm, not that it understood, but that it felt.
You've tried to make the trivial seem significant, for reasons I simply do not know.
Whoa... back up. Swap "you" for "me". You have explicitly said that it is silly and absurd for me to claim that I don't know what Searle means by "understand".

If I have come across as belittling your intelligence or your knowledgeability then I apologise as humbly as possible. I have the highest respect for your intelligence and knowledgeability; I have on more than one occasion changed my opinion on a subject based on an argument that you have made.
Consciousness isn't exclusive of understanding, and vice versa. You are trying to create a false dichotomy: either I demonstrate that consciousness and understanding are exactly equivalent, or I accept that they are mutually exclusive.

Sorry, no.
Now that is completely unfair.

I said that Searle's argument was about understanding whereas mine was about consciousness. You said that I had found a distinction without any difference.

I never said that they were mutually exclusive, I just said they were different.
 
No, I am not aware of my wallet. But are you then saying that understanding=awareness?
I don't know how many times I can disabuse you of this notion.

Understanding, awareness, thinking, consciousness, sentience, cognition, etc., are not mutually exclusive terms. That doesn't mean that they are exactly synonyms either. You are creating a straw man.

All I ask is that you state "The definition of 'understand' that Searle uses in the Chinese argument is ..."
And I've told you that the precise definition is beside the point. Move on.

If we do not understand when we are unconscious then understanding goes away when we are unconscious and comes back when we are awake.
No consciousness, no understanding. Not exact terms, but not mutually exclusive. Both are dependent on one another, and "understanding" could very well be used as a synonym for consciousness. Not an exact one, but that's beside the point.

Fridge door is closed (no consciousness). Light is off. (No understanding).
Fridge door is open (consciousness). Light is on. (There is understanding).

They go hand in hand. That the light bulb doesn't disappear when you close the fridge door is a rather silly point.

But nobody really thinks that. We think of "understanding" as something that stays with us, even when we are not conscious of it, or even conscious at all.
Like when you are dead? You will still have understanding? Really?
 
Which also means the previous half second might have been monks and quills in an entirely different universe, simulating the me in this current simulated universe, the rules of which might be somewhat different from each other.
It might have been monks with quills in a different universe, but even if the rules of the other universe were different, the rules of the universe being experienced by the virtual brain would be identical in both cases (so far as you can detect rules of the universe in one half a second).
 
Now that is completely unfair.
I will apologize, Robin, and will cut out any barbed rhetoric. I'm sorry. You really seem to have gone out of your way to push a rather trivial point.
 
It might have been monks with quills in a different universe, but even if the rules of the other universe were different, the rules of the universe being experienced by the virtual brain would be identical in both cases (so far as you can detect rules of the universe in one half a second).

Sorry, but having worked on a few projects porting applications from one computer platform to another, including two where the target platform simulated the source platform, I can tell you that's wrong.
 
I don't know how many times I can disabuse you of this notion.
Once will do, your questions appear to point to that notion.
Understanding, awareness, thinking, consciousness, sentience, cognition, etc., are not mutually exclusive terms. That doesn't mean that they are exactly synonyms either. You are creating a straw man.
Creating a straw man would be misrepresenting your position. I have said over and over that I don't understand your position.
And I've told you that the precise definition is beside the point. Move on.
Well if the point is a non-precise definition, then let's have the non-precise definition.
No consciousness no understanding.
And as I have said, I disagree, and I have already said why.
Not exact terms but not mutually exclusive.
Depending on your definition of understand. By my definition they are clearly mutually exclusive, because by my definition even a non-conscious machine could understand something.

You see this is the point - you seem to be defining "understanding" with a conscious sensation of feeling. Whereas I think of "understanding" as an underlying brain state and only related to consciousness because consciousness is how we interface with the world.

But we can have that feeling "I understand" even when there is no subject matter to understand, triggered by drugs or certain meditation practices.

I would say that the CR understands Chinese. It just doesn't have that "I understand" feeling that humans associate with understanding.
Like when you are dead? You will still have understanding? Really?
Oh come on now who is being obtuse now? I think you know I am talking about periods of sleep and general anaesthesia.

There is a medical distinction between unconsciousness and death.
 
I said that Searle's argument was about understanding whereas mine was about consciousness.
I honestly don't think there is any significant difference as it applies to the discussion. However, I've said on a number of occasions that I was willing to forgo that premise. I can still make an argument that Chinese Room is about consciousness.

Dennett at one time said that thermostats were aware. When questioned on this later, he said that thermostats were not aware the way humans are aware (they are not self-aware).

The subject at hand is: can computers think? What does that mean? Well, it means a whole host of things. A big part of that is understanding. Now, what is the subject at hand? Can computers think? What is the context? The Turing Test. What is the Turing Test? The Turing Test is a proposal for a test of a machine's ability to demonstrate intelligence.

What is intelligence? Exactly? I don't know. Is understanding a key component of intelligence? Yes. Would I be confused if someone used "understanding" to mean some sort of sentience or consciousness? No.

In light of those premises I don't understand your complaint. Given that I've told you over and over that I was willing to forgo my original statement that Turing was speaking colloquially, I think it VERY UNFAIR of you to continue to push that point. And I think that is, in part, the basis of my frustration. I don't know why you keep doing that.
 
I will apologize, Robin, and will cut out any barbed rhetoric. I'm sorry. You really seem to have gone out of your way to push a rather trivial point.
I don't think it is a trivial point. I don't think my thought experiment is the same as Searle's.

I don't think it is trivial that my thought experiment is not the same as Searle's.

The definition of "understanding" is irrelevant to my thought experiment, because it is not related to "understanding".

Nevertheless the definition of "understanding" is far from a trivial point.
 
Depending on your definition of understand. By my definition they are clearly mutually exclusive, because by my definition even a non-conscious machine could understand something.
Thank you. Yes. I've no problem with your definition. The question becomes: was Searle's thought experiment an attempt to demonstrate that the Turing test could not demonstrate consciousness?

You see this is the point - you seem to be defining "understanding" with a conscious sensation of feeling. Whereas I think of "understanding" as an underlying brain state and only related to consciousness because consciousness is how we interface with the world.
A thermometer is aware. Is a thermometer self aware?

I would say that the CR understands Chinese. It just doesn't have that "I understand" feeling that humans associate with understanding.
Now you are interjecting an additional definition of understanding. One that is consistent with what I've been saying.

Oh come on now who is being obtuse now? I think you know I am talking about periods of sleep and general anaesthesia.
So, let me ask you this question, while you are asleep do you have that "I understand" feeling that humans associate with understanding?

There is a medical distinction between unconsciousness and death.
I think this is very relevant. If by understanding you mean that data is stored in memory then yes. If by understanding you mean that "I understand" feeling that humans associate with understanding, then you don't have that feeling while you are unconscious.
 
I don't think it is a trivial point. I don't think my thought experiment is the same as Searle's.
IMO it is.

The definition of "understanding" is irrelevant to my thought experiment, because it is not related to "understanding".
By "understanding" do you mean that "I understand" feeling that humans associate with understanding?
 
I honestly don't think there is any significant difference as it applies to the discussion. However, I've said on a number of occasions that I was willing to forgo that premise. I can still make an argument that Chinese Room is about consciousness.
But here is the point.

If you claim that the Chinese Room could, with different programming, know how a rose smelt, or how a peach tasted, or how the wind felt on its face, then the arguments would be the same.

But "understanding" of any sort has no relevance to my argument, since it is about how the virtual brain would feel. Whether it would have conscious experiences identical to those of a real brain.

None of that is remotely implied by anything Searle said.

The subject at hand is: can computers think? What does that mean?
No, that was the subject of Searle's argument. It was only brought up because you said it was equivalent to mine.

The question I was asking is: can computers feel?
In light of those premises I don't understand your complaint. Given that I've told you over and over that I was willing to forgo my original statement that Turing was speaking colloquially, I think it VERY UNFAIR of you to continue to push that point. And I think that is, in part, the basis of my frustration. I don't know why you keep doing that.
Well it is also very unfair of you to continue to say that my argument is equivalent to Searle's. If you continue to do this then I will continue to ask for that definition of "understand".

Otherwise don't say you have rebutted Searle's argument therefore you have rebutted mine.

My question is clearly "do you think it is possible that this moment you are experiencing right now could in reality be a billion years of pencils marking on papers?"
 
First I think we have to define what we mean by "feel". I've tried in other threads.

I think ultimately that we could program a computer to feel, so I don't see a real objection.
 
A thermometer is aware. Is a thermometer self aware?
Yes, if you stick the probe onto the cover of the thermometer it reads its own temperature; therefore it can be self-aware.

As I said to PixyMisa, self-awareness is trivial.
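The "probe on its own cover" point can be sketched as a toy class (the class and attribute names are invented for illustration): self-measurement is just ordinary measurement where the target happens to be the instrument itself, which is why this kind of self-awareness is trivial.

```python
class Thermometer:
    """Toy instrument: the probe reports the temperature of whatever
    object it touches. Nothing special happens when that object is
    the thermometer itself."""

    def __init__(self, temperature_c: float):
        self.temperature_c = temperature_c  # the device's own temperature

    def read(self, target) -> float:
        """Report the temperature of whatever the probe touches."""
        return target.temperature_c

    def read_self(self) -> float:
        # "Self-aware" only in the trivial sense: probe on its own cover.
        return self.read(self)
```

The design choice worth noticing is that `read_self` adds no new machinery at all; it reuses `read` unchanged, mirroring the claim that this sense of self-awareness is trivial.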
Now you are interjecting an additional definition of understanding. One that is consistent with what I've been saying.
Yes, and I am saying that I think that is the definition you and Searle are using, not the one I am using.
So, let me ask you this question, while you are asleep do you have that "I understand" feeling that humans associate with understanding?
No we don't, but as I have pointed out, it is not what I mean by the word.
I think this is very relevant. If by understanding you mean that data is stored in memory then yes.
No I don't mean data stored in memory.
If by understanding you mean that "I understand" feeling that humans associate with understanding, then you don't have that feeling while you are unconscious.
No I don't mean that either, as I thought I had made clear.

I think that is what you and Searle mean.
 
First I think we have to define what we mean by "feel". I've tried in other threads.

I think ultimately that we could program a computer to feel, so I don't see a real objection.
But do you think it possible that this moment you are experiencing right now could in reality be a billion years of people marking papers with pencils?
 
I have a question that I hope is not a derail or already dealt with earlier in this long thread. I'm not used to thinking in terms of computer function but rather in terms of neurons, so I always feel a little lost in the computer discussions.

My understanding of Searle's latter argument against the computational theory of mind is that computation is a function that has to be defined by an observer -- it is an observer dependent function.

If that is the case, then computation could never explain mind since we end up with the homunculus fallacy. There must always be an observer to assign a computational function, but the whole idea was to explain how an observer is even possible at a simpler level.

But, I think Searle misses the mark because he commits one of the more common fallacies that I see in many discussions -- he analyzes the problem at the wrong level. At base, computers, as Turing equivalents, don't compute -- their computational function is something that we have them do, it is a function we impose because they are our tools. At base, don't they just do what all Turing equivalents do -- change 1 to 0, change 0 to 1, move one space to left, move one space to right?

We can take those changes and use them to compute, but there is still something that a computer does that is more basic than computation.

The same is true of neurons. At base, they fire at certain rates that can be changed. That this can be structured to result in computation and create a homunculus (us) is how consciousness arises; but neurons are not computers at heart, just as those devices we call computers are not either. They are Turing equivalent "machines".

Or do I have this all wrong?
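The primitive operations listed above (write a 0 or 1, move one cell left or right) can be sketched as a minimal Turing-machine step loop. The machine and its transition table here are invented examples, not anything from Searle:

```python
# Minimal Turing machine sketch: the only primitives are writing a
# symbol and moving the head one cell left or right, as described above.
from collections import defaultdict

def run(tape, transitions, state="start", halt="halt", steps=1000):
    """Run a transition table over a tape string and return the result."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" is the blank symbol
    head = 0
    for _ in range(steps):
        if state == halt:
            break
        # One step: look up (state, symbol), write, and move one cell.
        state, write, move = transitions[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example machine: change 1 to 0, change 0 to 1, move right; halt on blank.
FLIP = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
```

Everything a computer does as a "Turing equivalent" reduces to repetitions of that one lookup-write-move step; calling some run of those steps a "computation" is the observer-imposed layer discussed above.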
 
But do you think it possible that this moment you are experiencing right now could in reality be a billion years of people marking papers with pencils?

I'm not sure the question makes sense because I experience in a particular span of time. It will always be difficult for us to see what we experience in short time spans as something that takes billions of years to carry out. I think it is entirely possible for the same thing to occur over billions of years of a different type of process, but for one thing, marking papers with pencils is a different arena than that in which our experience of feeling occurs.

It isn't just some form of computation that is important but the means by which it occurs because feeling in the way that we do it is something that occurs within a larger system. If calculations include the entire system, then in some sense I would say "yes" (with the exception that the actual experience is a time-sensitive phenomenon).
 
Yes, if you stick the probe onto the cover of the thermometer it reads its own temperature, therefore it can be self-aware.
In that "aha I'm aware" sense.

As I said to PixyMisa, self-awareness is trivial.
How about being aware that someone else is aware that you are aware (no, this is not a joke)? It's called theory of mind.

Yes, and I am saying that I think that is the definition you and Searle are using, not the one I am using.
I think you are confused. That seems to me to be exactly the kind of thing you are trying to demonstrate with your thought experiment.

No I don't mean data stored in memory.
Oh, ok, then no, when you are asleep you do not have understanding.

No I don't mean that either, as I thought I had made clear.
You are not at all clear.

I think that is what you and Searle mean.
I can't find a distinction between this and what you mean with your thought experiment.
 
I have a question that I hope is not a derail or already dealt with earlier in this long thread. I'm not used to thinking in terms of computer function but rather in terms of neurons, so I always feel a little lost in the computer discussions.

My understanding of Searle's latter argument against the computational theory of mind is that computation is a function that has to be defined by an observer -- it is an observer dependent function.

If that is the case, then computation could never explain mind since we end up with the homunculus fallacy. There must always be an observer to assign a computational function, but the whole idea was to explain how an observer is even possible at a simpler level.

But, I think Searle misses the mark because he commits one of the more common fallacies that I see in many discussions -- he analyzes the problem at the wrong level. At base, computers, as Turing equivalents, don't compute -- their computational function is something that we have them do, it is a function we impose because they are our tools. At base, don't they just do what all Turing equivalents do -- change 1 to 0, change 0 to 1, move one space to left, move one space to right?

We can take those changes and use them to compute, but there is still something that a computer does that is more basic than computation.

The same is true of neurons. At base, they fire at certain rates that can be changed. That this can be structured to result in computation and create a homunculus (us) is how consciousness arises; but neurons are not computers at heart, just as those devices we call computers are not either. They are Turing equivalent "machines".

Or do I have this all wrong?

Nope, you have it exactly right.

The fact that something must be defined by an observer doesn't change the fact that a system which satisfies such a definition is different from one that does not, regardless of whether an observer exists. All the observer does is ... observe.
 
