
Explain consciousness to the layman.

For them what find the sums in natural language tricky, a tautology...

opposite (maths) = additive inverse.

Who knows what I'm talking about?
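Spelled out as a formula, in case it helps (a one-line restatement of the definition above, nothing more): the additive inverse of a number $x$ is the number $-x$ satisfying

$$x + (-x) = 0, \qquad \text{e.g. } 7 + (-7) = 0.$$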
 
I'd just like to know if an artificial mind built from silicon, metal, plastic, whatever, would be conscious in the kind of "sense of self" "I"-ness way that people and other higher apes seem to be.

To answer that, first you have to explain and define in what way we are conscious or have a sense of "self". Otherwise you may as well ask, can a human be god?
 
Under the premises that Searle offers us, the Room understands Chinese. You can't deny it; it is a valid and necessary conclusion.

The problem is as I stated it: To build such a static rule-based system as Searle suggests is a physical impossibility. The problem is not in the structure of the argument or in our conclusion, it's hidden in his premises.


That has nothing to do with the Chinese Room argument.


That means you took the right lesson from the Chinese Room argument - which is not at all the lesson Searle intended. Searle argues that AI is impossible; he believes in the magic bean. That's unsupported by his argument - or by anything else, for that matter.

That's not how the state of the art looks at it. The leading researcher is Rapaport (of SNePS), and in a 2001 paper in the Journal of Logic, Language, and Information he did his best to demolish Searle's Chinese Room, but only ended up proving Searle's point.

Searle's central point is the argument from semantics:

(S1) Computer programs are purely syntactic.
(S2) Cognition is semantic.
(S3) Syntax alone is not sufficient for semantics.
-----------------------------------------------------
Therefore: No purely syntactic computer program can exhibit semantic cognition.
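
The inference itself is airtight. Here is a minimal formal sketch of its validity (my own formalization, with hypothetical predicate names, reading S2 as "whatever cognizes has semantics"):

```lean
-- A hypothetical formalization of the syllogism; the predicate names
-- (Program, Syn, Sem, Cog) are mine, not Searle's.
variable {Thing : Type} (Program Syn Sem Cog : Thing → Prop)

example
    (S1 : ∀ p, Program p → Syn p)     -- (S1) programs are purely syntactic
    (S2 : ∀ x, Cog x → Sem x)         -- (S2) cognition is semantic
    (S3 : ∀ x, Syn x → ¬ Sem x)       -- (S3) syntax alone never yields semantics
    : ∀ p, Program p → ¬ Cog p :=     -- hence no program exhibits (semantic) cognition
  fun p hp hcog => S3 p (S1 p hp) (S2 p hcog)
```

The formalization only shows that the inference goes through; whether the premises are true is, as noted above, where the real fight is.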

From the Rapaport paper abstract:

"A theory of "syntactic-semantics" is advocated as a way of understanding how computers think (and how the Chinese Room Argument objection to the Turing Test can be overcome) ... syntax can suffice for the semantical enterprise."

Notice he is not disputing the semantic problem Searle exposed. There is a "semantical enterprise" that must be undertaken; on that point Searle is right, even by his critics' lights. The hacker argument thus comes down to there being two ways to perform this "semantical enterprise": twist syntactic techniques into producing semantic results (SNePS), or simply start trying to understand and develop semantic programming techniques. That's it. But either way Searle was absolutely right: the "semantical enterprise" MUST be undertaken for authentic cognition or thinking machines, and WITHOUT it semantic cognition cannot happen, as even Rapaport agrees.

But if these syntactic-semantic machines fail the Syntactic BS Detector Test as badly as modern chatbots (today's examples of "thinking machines") do, then they are not truly thinking either; i.e., they don't understand a damn word of what you are saying.
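
To make "purely syntactic" concrete, here is a toy sketch in Python (my illustration, not anyone's actual system): an ELIZA-style responder that shuffles surface patterns without representing what any word means, which is exactly the kind of program a Syntactic BS Detector would catch.

```python
import re

# A minimal ELIZA-style responder: pure symbol shuffling, no semantics.
# Each rule maps a surface pattern to a reply template; the program never
# represents what any word means.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.*)", re.IGNORECASE), "Is that the real reason?"),
]

def respond(utterance: str) -> str:
    """Return a canned reply chosen by syntactic pattern-matching alone."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # fallback when no pattern fires

print(respond("I am worried about the Chinese Room"))
# -> Why do you say you are worried about the Chinese Room?
```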
 
I'd just like to know if an artificial mind built from silicon, metal, plastic, whatever, would be conscious in the kind of "sense of self" "I"-ness way that people and other higher apes seem to be.
You mean a sense of being, "to be or not to be"?

I think it would depend on whether the artificial mind were alive or not. If it were alive I see no reason why not, given sufficiently advanced technology.



Psst, have you noticed how the computationalists ignore any references to life? To them life is an irrelevance.
 
To answer that, first you have to explain and define in what way we are conscious or have a sense of "self". Otherwise you may as well ask, can a human be god?

One might even think you are making a reference to being when you use the word God.

It may well be the case that one is not possible without the other.
 
For them what find the sums in natural language tricky, a tautology...

opposite (maths) = additive inverse.

Who knows what I'm talking about?
Everyone, now that you have defined your terms.

Which, as I said, means that your "tough question" is not tough at all, but trivial.
 
By simply talking to it you would not be able to tell if it is truly thinking. Therefore the answer must always be no.
OK; if you think a chatbot could fool everyone indefinitely - how would you attempt to assess whether it was really thinking? Do you feel it is impossible to establish?
 
OK; if you think a chatbot could fool everyone indefinitely - how would you attempt to assess whether it was really thinking? Do you feel it is impossible to establish?
And if it's impossible to establish, how is the distinction meaningful?
 
OK; if you think a chatbot could fool everyone indefinitely - how would you attempt to assess whether it was really thinking? Do you feel it is impossible to establish?

No, it is possible to feel but not to think it's thinking.
 
And if it's impossible to establish, how is the distinction meaningful?

It's not meaningful when you "think" it's impossible. It's also not meaningful if you "think" it's possible.
It's only meaningful when your thoughts correspond to reality.
Inventing abstract distinctions as you have for consciousness is the easy part. Showing that this distinction is meaningful by physical demonstration is the hard part. So far the demonstrations are not convincing.
So the question is whether the distinction is useful.
If your career depends on it or you fantasize about living forever or you love your mechanical toys more than humans you may be emotionally attached to this useless distinction.
This is not our problem; it's yours.
 
Self-referential information processing.

Your turn.


By this definition a computer isn't conscious. A computer crunches data. All data is of equal value to it. In order to get information you need sentient subjectivity. Information only exists for those who live in virtual reality.
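
For concreteness, here is a toy sketch of what literal "self-referential information processing" could look like in code (my illustration, purely hypothetical; whether anything like it counts as consciousness is exactly what's in dispute):

```python
# Toy sketch of "self-referential information processing": the program
# records its own activity and then processes that record as data.
# Purely illustrative; no claim that this constitutes consciousness.

class SelfMonitor:
    def __init__(self) -> None:
        self.history: list[tuple[int, int]] = []  # record of (input, output) pairs

    def process(self, datum: int) -> int:
        result = datum * 2  # the first-order processing step
        self.history.append((datum, result))
        return result

    def reflect(self) -> str:
        """Second-order step: process information about the processing itself."""
        if not self.history:
            return "I have done nothing yet."
        last_in, last_out = self.history[-1]
        return f"I have performed {len(self.history)} operation(s); the last mapped {last_in} to {last_out}."

m = SelfMonitor()
m.process(21)
print(m.reflect())  # -> I have performed 1 operation(s); the last mapped 21 to 42.
```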
 
Everyone, now that you have defined your terms.

Which, as I said, means that your "tough question" is not tough at all, but trivial.

If the receiver fails to pick up sarcasm, does that mean the transmitter is not being sarcastic? It's all getting very difficult.
 
You could assess that it was computing. If you consider that thinking, then a calculator thinks.
I did suggest we use your definition of thinking.

It seems like a simple question on a matter of opinion, yet I get the sense you're avoiding it...

Would you maintain it's not possible to assess whether something is thinking (by your definition) by talking to it?
 
You might need a mechanic. Or a vet.
Non-responsive.


By this definition a computer isn't conscious.
A computer isn't necessarily conscious; computers aren't purpose-built to be. However, a computer program can be.

A computer crunches data. All data is of equal value to it.
This is patently untrue. If all data were of equal value, it would be inert.

In order to get information you need sentient subjectivity.
That's rather circular, don't you think?

If the receiver fails to pick up sarcasm, does that mean the transmitter is not being sarcastic? It's all getting very difficult.
As I said, if you don't understand what you mean, that's your problem, not ours.
 
I did suggest we use your definition of thinking.

It seems like a simple question on a matter of opinion, yet I get the sense you're avoiding it...

Would you maintain it's not possible to assess whether something is thinking (by your definition) by talking to it?

From a scientific point of view an opinion is irrelevant. What matters is what can be known, and with what degree of certainty. You might as well ask me if I think Obama is a nice person. I could give you my answer and you could agree or disagree. So what? That's why I'm saying subjectivity is not adequate to decide whether something is thinking or not. And why the Turing test doesn't test for thinking. Human opinion is not a scientific assessment.
 