keyfeatures (Critical Thinker) - Joined: Feb 23, 2012 - Messages: 436
Are you saying that it can't?
Did the car evolve from the horse? Is a car a horse?
Are you saying that it can't?
I'd just like to know if an artificial mind built from silicon, metal, plastic, whatever, would be conscious in the kind of "sense of self" "I"-ness way that people and other higher apes seem to be.
Under the premises that Searle offers us, the Room understands Chinese. You can't deny it; it is a valid and necessary conclusion.
The problem is as I stated it: to build the kind of static, rule-based system Searle describes is a physical impossibility. The flaw is not in the structure of the argument or in our conclusion; it is hidden in his premises.
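To make that premise concrete, here is a minimal sketch (Python; RULE_BOOK and chinese_room are invented names, not anything from the thread) of the kind of static, purely syntactic responder Searle's thought experiment imagines: every reply is produced by symbol lookup, with no interpretation anywhere.

```python
# Toy "Chinese Room": a static, purely syntactic lookup table.
# RULE_BOOK and chinese_room are invented names for illustration only.

RULE_BOOK = {
    "你好": "你好，你想聊什么？",           # greeting -> canned greeting
    "你会思考吗？": "这是一个很难的问题。",   # "can you think?" -> canned dodge
}

def chinese_room(symbols: str) -> str:
    """Look the input symbols up in the rule book; never interpret them."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # default: "please say that again"

if __name__ == "__main__":
    print(chinese_room("你好"))
    print(chinese_room("你会思考吗？"))
```

The catch the post points to: to keep this up for arbitrary conversations, the table would need an entry for every possible dialogue history, and that number grows exponentially with length, which is why a literal static rule book could never physically be built.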
That has nothing to do with the Chinese Room argument.
That means you took the right lesson from the Chinese Room argument - which is not at all the lesson Searle intended. Searle argues that strong AI is impossible, that no program could ever genuinely understand; he believes in the magic bean. That's unsupported by his argument - or by anything else, for that matter.
Quote: I'd just like to know if an artificial mind built from silicon, metal, plastic, whatever, would be conscious in the kind of "sense of self" "I"-ness way that people and other higher apes seem to be.
You mean a sense of being, "to be or not to be"?
To answer that, first you have to explain and define in what way we are conscious or have a sense of "self". Otherwise you may as well ask, can a human be god?
Quote: (S3) Syntax alone is not sufficient for semantics.
Evidence?
Quote: Did the car evolve from the horse? Is a car a horse?
Non-responsive.
Quote: For them what find the sums in natural language tricky a tautology... opposite (maths) = additive inverse. Who knows what I'm talking about?
Everyone, now that you have defined your terms.
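As a brief gloss on the shorthand quoted above: in arithmetic, the "opposite" of a number means its additive inverse, the number that sums with it to zero. A one-line statement of that definition, for reference only:

```latex
% "Opposite" (maths) = additive inverse:
\forall x \in \mathbb{R}: \quad x + (-x) = 0
% e.g. the opposite of 7 is -7, and the opposite of -3 is 3.
```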
Quote: To answer that, first you have to explain and define in what way we are conscious or have a sense of "self".
Self-referential information processing.
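For what that phrase could mean mechanically, here is a minimal sketch (Python; the SelfModel class and its method names are invented for illustration) of self-referential information processing: a system whose next state depends on a representation of its own current state. It is only a toy loop, not a claim that such a loop is conscious.

```python
# Toy "self-referential information processing":
# the system reads a description of itself and feeds it back into its next state.
# SelfModel is an invented name; nothing here is claimed to be conscious.

class SelfModel:
    def __init__(self) -> None:
        self.state = {"steps": 0, "last_input": None}

    def observe_self(self) -> dict:
        """Return a snapshot of the system's own current state."""
        return dict(self.state)

    def step(self, external_input: str) -> dict:
        """Combine external input with the self-snapshot to produce the next state."""
        self_view = self.observe_self()               # information about itself...
        self.state["steps"] = self_view["steps"] + 1  # ...shapes what it does next
        self.state["last_input"] = external_input
        return self.state

if __name__ == "__main__":
    agent = SelfModel()
    print(agent.step("hello"))   # {'steps': 1, 'last_input': 'hello'}
    print(agent.step("again"))   # {'steps': 2, 'last_input': 'again'}
```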
Quote: By simply talking to it you would not be able to tell if it is truly thinking. Therefore the answer must always be no.
OK; if you think a chatbot could fool everyone indefinitely - how would you attempt to assess whether it was really thinking? Do you feel it is impossible to establish?
Quote: OK; if you think a chatbot could fool everyone indefinitely - how would you attempt to assess whether it was really thinking? Do you feel it is impossible to establish?
And if it's impossible to establish, how is the distinction meaningful?
OK; if you think a chatbot could fool everyone indefinitely - how would you attempt to assess whether it was really thinking? Do you feel it is impossible to establish?
And if it's impossible to establish, how is the distinction meaningful?
Non-responsive.
Self-referential information processing.
Your turn.
Everyone, now that you have defined your terms.
Which, as I said, means that your "tough question" is not tough at all, but trivial.
Quote: You could assess that it was computing. If you consider that thinking, then a calculator thinks.
I did suggest we use your definition of thinking.
Quote: You might need a mechanic. Or a vet.
Non-responsive.
Quote: By this definition a computer isn't conscious.
A computer isn't necessarily conscious; computers aren't purpose-built to be. However, a computer program can be.
Quote: A computer crunches data. All data is of equal value to it.
This is patently untrue. If all data were of equal value, it would be inert.
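A concrete reading of that objection, as a minimal sketch (Python; the react function and its threshold are invented): even a trivial program does not treat all data alike, because the value of a datum decides which path the program takes at all.

```python
# Minimal illustration that data is not "of equal value" to a program:
# the same code does different things depending on what the data says.

def react(temperature_c: float) -> str:
    if temperature_c > 100.0:   # this particular value changes the behaviour
        return "shut down"
    return "keep running"

if __name__ == "__main__":
    print(react(42.0))    # keep running
    print(react(180.0))   # shut down
```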
Quote: In order to get information you need sentient subjectivity.
That's rather circular, don't you think?
Quote: If the receiver fails to pick up sarcasm does that mean the transmitter is not being sarcastic? It's all getting very difficult.
As I said, if you don't understand what you mean, that's your problem, not ours.
I did suggest we use your definition of thinking.
It seems like a simple question on a matter of opinion, yet I get the sense you're avoiding it...
Would you maintain it's not possible to assess whether something is thinking (by your definition) by talking to it?