
My take on why the study of consciousness may indeed not be so simple

In any case, there's no "self-awareness" involved. There's a computer and there's a car. The computer controls the car. The car is no more self-aware than it is when it's controlled by a human being.

We may choose to describe the car/computer combination as a single self, but it doesn't imply that there's any real self-awareness involved.
Right. Nothing up to this point necessitates self-awareness.

The computer has no model of itself, and neither does the car, no matter how sophisticated the system.
The computer may or may not have a model of itself and the car. To say that this does not happen "no matter how sophisticated the system" flies in the face of the fact that programmers all over the world work on systems incorporating models of themselves every single day.

You are simply and incontrovertibly and factually wrong, Westprog. It's not true. We do this. I do this. It's real, it happens, and no amount of fact-free argumentation from you can change that.
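
To make the point concrete, here is a minimal illustrative sketch of a system that maintains a model of itself (all class, method, and attribute names here are hypothetical, invented for illustration, not from any real autonomous-vehicle codebase). The controller keeps a model of its own limits and state, separate from its model of the world, and consults that self-model when deciding what to do:

```python
# Minimal sketch of a system incorporating a model of itself.
# All names are hypothetical, for illustration only.

class CarController:
    def __init__(self, max_speed, braking_distance):
        # The controller's model of ITS OWN vehicle: its limits and
        # current state, distinct from its model of the outside world.
        self.self_model = {
            "max_speed": max_speed,
            "braking_distance": braking_distance,
            "speed": 0.0,
        }

    def safe_speed(self, distance_to_obstacle):
        """Pick a speed by consulting the model of this car's own
        braking capability, not just the external obstacle data."""
        if distance_to_obstacle <= self.self_model["braking_distance"]:
            return 0.0
        return min(self.self_model["max_speed"], distance_to_obstacle / 2.0)

car = CarController(max_speed=30.0, braking_distance=10.0)
print(car.safe_speed(5.0))    # obstacle inside braking distance -> 0.0 (stop)
print(car.safe_speed(100.0))  # capped at the car's own max speed -> 30.0
```

Nothing about this requires sophistication; even a toy controller can hold and consult a representation of its own properties.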

If we consider the computerised car as self-aware, then we should consider the car/human combination as a single entity.
You can do so. In most contexts it makes no sense.

That has a far better claim on self-awareness, since the human is aware of both himself and the car.
That is in no way a better claim.

Self-awareness is one of the many terms used in this context which just evaporate into nothingness when looked at closely.
Again, this is completely wrong in every possible way. We build systems that are self-aware because they have useful and novel behaviours beyond those of systems that lack self-awareness.
 
I asked you for your definition of "understand" and you defined it in terms of "comprehend".
NO! You said you didn't understand what the colloquial sense of understanding is. I'm simply trying to convey my understanding to you, not to provide a comprehensive, rock-solid definition. I don't think one exists.

But if "comprehend" means "understand" then your definition is circular.
I don't think they mean exactly the same thing. To comprehend, in the sense I'm using it, is different from the sense you are using. By offering the word "comprehend" I'm giving you the context in which I'm using it.

So I still don't know what you mean by "understand".
Often, when you look in a dictionary for usage, you will see synonyms that give you a sense of the word. If I understand that you exist, then I'm aware that you exist. Awareness of an external object is one of the usages the dictionary gives, via synonym, for the term "consciousness".

No, I mean are you ruling out that I can be conscious of something and not understand what I am conscious of.
It depends on your usage of the word "understand". If by "understand" you mean to be aware of its existence, then of course.

Again you are missing the point. I am trying to establish what you mean by "understand".
And I'm telling you.

You have already said that my definition is irrelevant to the argument. I just want to know what definition of "understand" is relevant to the argument.
I've no idea how many times I can tell you before you understand. At some point we have to move on.

To understand that something exists you must be conscious. That's all. You really are making a big deal of this and I don't understand why. You don't have to accept my usage of the word. I don't give a damn. I've told you over and over.

Are we going to debate the point ad nauseam?
 
OK, so explain why you think it can have a concept of one car on the road and not of another car on the road.
I don't. I never said that, and I didn't imply that.

Car. Another car. Yet another car.

"Self" is undefined. Please rephrase question.
"I" is undefined. Please rephrase question.
"You" is undefined. Please rephrase question.

Car. Car. Car. Squirrel!

The concept of self, that this car is different in that it is me, is a different abstraction from the concept of car. Indeed, the computerised car doesn't need a concept of car as such, just a concept of obstacle. (Everything else is a non-obstacle.)
 
I expect to see significant results from simulations of animal brains within the next decade. We have the capacity to do the simulations already, but as far as I know this hasn't been applied to behaviour studies for advanced animals.
So what animals has it been applied to?
How can it not? Is it a model or not? You just said it was a model. If it's a model, then it produces a model of the behaviour of the system being modelled. That's the entire point of models. If it doesn't do that, it's not a model.
So every attempt at modelling a system succeeds?
There is no rational reason to assert that a physical system cannot be modelled.
I am not aware that I made any such assertion. There is a perfectly rational reason to assert that we cannot be sure that a particular approach to modelling a physical system will be successful until it has actually been done.
 
I don't. I never said that, and I didn't imply that.
Well, don't be too sure that you are being clear about what you mean.

Will you agree then that a robotic car can be aware of itself?
Car. Another car. Yet another car.
Person. Another person. Yet another person.
"Self" is undefined. Please rephrase question.
"I" is undefined. Please rephrase question.
"You" is undefined. Please rephrase question.
I am not sure what you are talking about here.
Car. Car. Car. Squirrel!
Person. Person. Person. Squirrel!
The concept of self, that this car is different in that it is me, is a different abstraction from the concept of car.
So what? It has the concept of the car to which its controls pertain and from whose viewpoint its data must be interpreted.

So how does that differ from a human's concept of self?
 
RandFan said:
And I'm telling you.
Except that you haven't yet.
I've no idea how many times I can tell you before you understand. At some point we have to move on.
Just once will do.
To understand that something exists you must be conscious. That's all.
So the colloquial definition of understanding is being conscious?
You really are making a big deal of this and I don't understand why.
I am not making a big deal of it. The interpretation of my argument seems to hinge upon your colloquial definition of "understand" and I am simply trying to find out what that definition is.

I don't understand why you are making such a big deal of the fact that I am asking.
You don't have to accept my usage of the word. I don't give a damn.
I was happy to accept it for the purposes of the argument. I just wanted to know what it was.
I've told you over and over.
No you haven't. You gave me a definition of "understand" in terms of "comprehend". I said "comprehend" was another word for "understand" and initially you agreed. Now you appear to have a different definition but I am not clear about what it is.
Are we going to debate the point ad nauseam?
We are not debating the point. I am asking for a definition and you are telling me that you have given me the definition over and over again. But you haven't.

All I am looking for is "The colloquial definition of understand upon which the Chinese Room argument depends is ..."

Why is that so unreasonable?
 
Well, don't be too sure that you are being clear about what you mean.

Will you agree then that a robotic car can be aware of itself?
Oh, absolutely. Just that it doesn't need to be.

Person. Another person. Yet another person.

I am not sure what you are talking about here.

Person. Person. Person. Squirrel!
A car can navigate quite well without the notion of self, without being self-aware. It might be able to do a better job if it is self-aware, but it is not required.

So what? It has the concept of the car to which its controls pertain and from whose viewpoint its data must be interpreted.
No it doesn't. Movement is relative, so it can just as well view things as though it were moving the obstacles.
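
A toy 1-D sketch makes the relativity point explicit (hypothetical functions, not real navigation code): expressing every obstacle relative to the car is numerically identical to keeping the "car" fixed at zero and sliding the obstacles the other way.

```python
# Toy illustration: planning in the car's frame vs. "moving the world".
# Hypothetical 1-D example, invented for illustration only.

def obstacles_in_car_frame(car_position, obstacles):
    """Express each obstacle position relative to the car."""
    return [obs - car_position for obs in obstacles]

def move_world_instead(car_displacement, obstacles):
    """Keep the 'car' fixed at 0 and shift every obstacle backwards
    by the distance the car would have travelled."""
    return [obs - car_displacement for obs in obstacles]

obstacles = [50.0, 120.0]
# View 1: the car at x=10 sees obstacles at 40 and 110.
print(obstacles_in_car_frame(10.0, obstacles))
# View 2: the car "stays at 0" while the world slides past by 10.
print(move_world_instead(10.0, obstacles))
```

Both views produce the same numbers, which is why the controller needs no privileged concept of "self" to plan: either bookkeeping works.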

So how does that differ from a human's concept of self?
Self-awareness is the ability to examine one's own mental processes in some respect. Our computer-controlled car not only does not need to be self-aware, it doesn't even require the concept of self.
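
As a toy illustration of "examining one's own mental processes" in that minimal sense (hypothetical names, a sketch rather than a serious proposal): a controller that records its own decisions and then inspects that record is examining its own processing, however crudely.

```python
# Toy sketch of minimal introspection: the controller keeps a record
# of its own decisions and examines it. Hypothetical, illustrative only.

class IntrospectiveController:
    def __init__(self):
        self.decision_log = []   # the controller's record of itself

    def decide(self, obstacle_ahead):
        action = "brake" if obstacle_ahead else "cruise"
        self.decision_log.append(action)
        return action

    def braking_fraction(self):
        """Examine the controller's OWN past decisions, not the world."""
        if not self.decision_log:
            return 0.0
        return self.decision_log.count("brake") / len(self.decision_log)

ctrl = IntrospectiveController()
for obstacle in (True, False, False, True):
    ctrl.decide(obstacle)
print(ctrl.braking_fraction())  # 0.5: half its decisions were "brake"
```

The point stands either way: a car can navigate with or without this layer; the self-examining machinery is an addition, not a requirement.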
 
So what animals has it been applied to?
Mostly to simple creatures like flatworms, but recently to rats and cats.

So every attempt at modelling a system succeeds?
No, but every model is a model.

I am not aware that I made any such assertion. There is a perfectly rational reason to assert that we cannot be sure that a particular approach to modelling a physical system will be successful until it has actually been done.
If the "particular approach" is computation, then this is incorrect. If it's a physical system, it can be modelled computationally. If it can't be modelled computationally, it is magic.
 
A car can navigate quite well without the notion of self, without being self-aware. It might be able to do a better job if it is self-aware, but it is not required.
Well I don't understand.

Car. Car. Car.

The middle one speeds up when "accelerate" is selected. The middle one slows down when "brake" is selected. The middle one goes slightly left when "left" is applied to the steering wheel. The middle one goes slightly right when "right" is applied to the steering wheel.

So how does the car navigate if it is only aware of the first and the last, but not the middle car, to which these controls pertain?

How can it navigate when it is not aware of the object from which position the data must be interpreted?
Movement is relative, so it can just as well view things as though it were moving the obstacles.
So it is going to have a sort of Tycho Brahe universe? Where accelerating moves the world a little faster and moves all the other cars relatively? Well I suppose so, but it would still need to know where the centre of the Brahe universe was and how the rest of the universe can be moved from this centre.

In this case it is not only self-aware, it thinks it is a solipsist.
Self-awareness is the ability to examine one's own mental processes in some respect.
Oh, so that is your definition of self-aware. I only mean aware of self. Can you give me an example of examining our own mental processes?
 
Mostly to simple creatures like flatworms, but recently to rats and cats.
Got any links?
No, but every model is a model.
Even the ones that don't work.
If the "particular approach" is computation, then this is incorrect. If it's a physical system, it can be modelled computationally. If it can't be modelled computationally, it is magic.
All modelling is computational, so that is not an approach. But there are different approaches to modelling. We don't model everything from the bottom up for example.

Nobody assumes a model is going to work from the get-go, otherwise there would be no point in modelling.

All I have said is that I would prefer to wait until the results are in before deciding what the results will be.

I don't know why that is such a controversial thing to say.
 
I've no idea how many times I can tell you before you understand. At some point we have to move on.

To understand that something exists you must be conscious. That's all. You really are making a big deal of this and I don't understand why. You don't have to accept my usage of the word. I don't give a damn. I've told you over and over.

Are we going to debate the point ad nauseam?

Probably.
 
Probably.
All I was looking for was the colloquial definition of "understand" he was talking about. No debate at all, just asking a question.

It is him, not me who is keeping this going ad-nauseam. Maybe I have simply missed it - can you tell me what that definition is from the post?
 
Got any links?
Here's the Wikipedia page on the Blue Brain project, one of the more advanced in terms of complexity (if not utility).

I can't remember exactly which critters have had their neural systems completely mapped; it's something on the level of complexity of a flatworm, but not necessarily a flatworm.

Even the ones that don't work.
Depends on what you mean by "work".

All modelling is computational, so that is not an approach.
Well, we are talking about the ability of computation - in the mathematical sense - to model consciousness. So, as I keep saying, there is no rational reason to doubt that this will work.

But there are different approaches to modelling. We don't model everything from the bottom up for example.
No, but we model everything with models.

Nobody assumes a model is going to work from get go, otherwise there would be no point in modelling.
Yes. But there is no rational reason to think that a physical process can't be modelled, which is what you seem to be asserting as the default position.

All I have said is that I would prefer to wait until the results are in before deciding what the results will be.
I said:

We've already proven that a computer can perform all the functions of the brain, no matter what those functions might be.

And you responded:

I think I would prefer to wait for proof of the pudding.

When I see the first simulation of a mouse's brain function on a computer that provides mouse-like behaviour I would begin to be more confident of this.

If that is meant to be a response to my statement, it contradicts established mathematical proofs.

If it's not meant to be a response to my statement, then it doesn't necessarily contradict anything; it's just not terribly relevant.

I don't know why that is such a controversial thing to say.
Depends on how you meant it.

If you are asserting that we don't know if consciousness can be modelled computationally, then you're simply wrong.

If you are saying that we don't know what is the best approach to modelling consciousness computationally, then I'd probably say you're also wrong, but that's a point worth discussing.
 
All I was looking for was the colloquial definition of "understand" he was talking about. No debate at all, just asking a question.

It is him, not me who is keeping this going ad-nauseam. Maybe I have simply missed it - can you tell me what that definition is from the post?

Last time I did that I ended up being forced to type several forum pages worth of explication on the word "understand" and got nothing but a headache for my troubles. You're on your own on this one.
 
If that is meant to be a response to my statement, it contradicts established mathematical proofs.
What are the mathematical proofs you are talking about?
If you are saying that we don't know what is the best approach to modelling consciousness computationally, then I'd probably say you're also wrong, but that's a point worth discussing.
All I said was that I would wait for the results of the model before deciding what the results would be.

All I said. I don't have omniscience.
 
Last time I did that I ended up being forced to type several forum pages worth of explication on the word "understand" and got nothing but a headache for my troubles. You're on your own on this one.
Ah, wisdom.
 
All I was looking for was the colloquial definition of "understand" he was talking about. No debate at all, just asking a question.
I gave you the answer and you proceeded to debate the answer. It's a rather simple concept. I can't make you understand it. You won't answer my questions, though I answer yours. I ask you: can you understand something to exist if you are not conscious? You won't tell me, which leads me to think that perhaps you just don't want to know.
 
What are the mathematical proofs you are talking about?
Primarily the Church-Turing thesis, which demonstrates the equivalence of all known forms of computation. (It actually is a proof despite the name; the "thesis" part concerns its broader implications.)

All I said is that was that I would wait for the results of the model before deciding what the results would be.
Well, no, you didn't say that; the context in which you raised your objection was discussing computability, not the applicability of any particular method. But I'll accept that's what you meant.

That said, we do know that consciousness can be modelled computationally. Any physical process is either deterministic or random, or some combination of deterministic and random processes.

We can model deterministic processes with arbitrary precision. We can model random process using pseudo-random number generators with arbitrary repeat intervals.

There is simply no way a physical process can fail to be computable.
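
A minimal sketch of that claim (a hypothetical toy process, assuming nothing about brains): a system mixing a deterministic update with a random disturbance can be stepped forward with ordinary arithmetic plus a seeded pseudo-random number generator, making the whole run both computable and reproducible.

```python
import random

def step(state, rng):
    """One tick of a toy physical process: a deterministic decay
    combined with a pseudo-random disturbance."""
    deterministic = 0.9 * state          # deterministic component
    noise = rng.gauss(0.0, 0.1)          # pseudo-random component
    return deterministic + noise

rng = random.Random(42)  # fixed seed: the "random" run is reproducible
state = 1.0
for _ in range(5):
    state = step(state, rng)
print(state)
```

The seed is what makes the point: rerunning with the same seed replays exactly the same "random" trajectory, so the mixed deterministic/random process is fully captured by computation.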
 
Searle's argument was about understanding, whereas mine is about consciousness.

I think you've found a distinction without a difference. Searle was speaking in the colloquial sense when he said "understand". Otherwise there would have been no point to his thought experiment.
Searle's point was to rebut AI. Your entire complaint is silly and absurd: to think Searle's thought experiment had nothing to do with consciousness, when the entire point was to show why the Turing test could never demonstrate consciousness. I just don't get why this is so difficult when it is such a trivial point. If it were about understanding without respect to consciousness, then there would not have been so many replies by those who defend strong AI. Searle wouldn't have written about AI. Why is this even a point of debate? Are we next going to debate the meaning of the word "is"?

Let's assume for argument's sake that I'm wrong: that Searle didn't mean consciousness when he said "understand" (though how one could understand without being conscious, you won't tell us). Clearly Searle is making a case against strong AI. Right?
 
