
Explain consciousness to the layman.

To realize why the above statement is so wrong, see if you can spot the difference between this sentence
Charles the First walked and talked half an hour after his head was cut off.
and this one
Charles the First walked and talked; half an hour after, his head was cut off.


The "atomic strings" are exactly the same yet the meaning (I hope you noticed) is utterly different.

Lest you nitpick about the punctuation being an added set of "atomic strings", think of the SPOKEN sentences, where the punctuation marks are just representations of the INTONATIONS of speech. How can a sentence reduced to definitions of its constituent set of "atomic strings" convey the differences in NUANCES OF MEANING and INTENTIONS of the spoken language, with all its intonations and stresses?
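The point above can be sketched in a few lines of Python: strip the punctuation from both Charles the First sentences and you get identical sequences of "atomic strings", even though the two sentences mean opposite things. (The sentences are from the thread; the tokenization approach is just an illustration.)

```python
import re

s1 = "Charles the First walked and talked half an hour after his head was cut off."
s2 = "Charles the First walked and talked; half an hour after, his head was cut off."

# Reduce each sentence to its "atomic strings": lowercase words, punctuation dropped.
words1 = re.findall(r"[a-z]+", s1.lower())
words2 = re.findall(r"[a-z]+", s2.lower())

print(words1 == words2)  # the word sequences are identical...
print(s1 == s2)          # ...but the sentences (and their meanings) are not
```

Whatever carries the difference in meaning, it is not in the set of atomic strings itself.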

Yes, you are correct, I was not clear. A language can be reduced to a set of atomic strings AND the set of rules that dictates how those strings can be combined.

However I don't consider the statement that "a dictionary is the set of atomic strings that all languages can be reduced to" to be inconsistent with that. If you use a language's rule set to reduce a language to its atomic strings, what you have is a set of atomic strings.
 
Unless (until?) a machine can feel and experience the same way as a human does, there will be no machine that 'thinks' like a human.

This is what I don't understand -- why do you think anyone holds a different view?

This is the same view held by every single supporter of the computational model of consciousness. Certainly all of us here agree 100%.

So I really hope you aren't arguing against computers being conscious on the idea that we mean the cruddy video-game AI I can program really thinks like a human, or that machines like Watson can produce a painting as well as I can (and I am not that great an artist, let me tell you).

That isn't what anyone is talking about, it is a strawman perpetuated by a few forum members to distract people from the real issues.
 
If a person thinks they are talking to an intelligent person, does that make the machine intelligent like a person?

It depends on the intelligence of the person giving the test.

I can tell you right now, if a machine passes my Turing test, it is intelligent like a person.
 
Do you believe the Jeopardy champion computer is thinking rather than furnishing table-look-up rote responses after negotiating many if/then statements and database lookups?

I'd say no, that doesn't necessarily answer the 'Is it thinking?' question.

Then you are wrong.

Try learning about how the program actually works before you make statements like this.
 
Unless (until?) a machine can feel and experience the same way as a human does, there will be no machine that 'thinks' like a human.

We must be careful of shifting the goalposts here. Artificial thinking, artificial consciousness, and artificial life are three distinct AI goals, each harder than the previous. So the above should say "Unless (until?) a machine can feel and experience the same way as a human does, there will be no machine that is conscious like a human." Artificial consciousness would indeed require experience, feeling and thinking, along with a memory and a self. In theory a simpler thinking machine could exhibit real cognition but have none of the other features.
 
It isn't interesting to anyone doing research in AI, or even anyone who knows how to program.

It is only interesting to philosophy professors, and maybe students who think they might want to be philosophy professors. And religious people.

I'm an AI programmer and it is VERY interesting to me. It taught me to give up believing syntactic programs like syntactic chatbots understand a damn thing, and to try another method. And thus were born semantic programming, the semantic web and semantic reasoning (google for more). It will help the public to do the same. It seems like non-programmers have the hardest time understanding its basic truth.
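The syntactic/semantic contrast above can be sketched in miniature. This is a hypothetical toy, not any real chatbot or reasoner: the syntactic side matches strings to canned replies with no model of meaning, while the semantic side stores facts as subject-predicate-object triples (the semantic-web style) and can infer things it was never told verbatim.

```python
# Syntactic chatbot: raw string pattern -> canned reply; nothing is "understood".
syntactic_rules = {"do you like cats": "Cats are great!"}

def syntactic_reply(text):
    return syntactic_rules.get(text.lower().strip("?!. "), "Tell me more.")

# Semantic store: facts as (subject, predicate, object) triples.
triples = {("cat", "is_a", "mammal"), ("mammal", "is_a", "animal")}

def is_a(subject, category):
    """Follow is_a links transitively -- a tiny semantic inference."""
    if (subject, "is_a", category) in triples:
        return True
    return any(is_a(mid, category)
               for s, p, mid in triples if s == subject and p == "is_a")

print(syntactic_reply("Do you like cats?"))  # matched a string, nothing more
print(is_a("cat", "animal"))                 # True: inferred, never stored verbatim
```

The chatbot can only echo what was keyed in; the triple store can answer a question no one explicitly wrote the answer to, which is the difference the post is gesturing at.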
 
Using language from one area we are familiar with to describe another area that we are not familiar with is simply a sign of the immaturity of our understanding.

It might help you, as a specialist computer expert, to understand biology, but it's simply a learning aid, not the real description.
Confusing the metaphor with the real thing is infantile.

I suppose you also think the video was of an actual ribosome?

That was intensely mean spirited! I must be close to the nerve.

Borrowing words from one discipline to apply to another is actually a good method of analysis and explanation. It takes advantage of the brain's wonderful skills of pattern recognition. Of course, analogies shouldn't be taken too far. I infer you think the extremely common phrase in biology "cellular machinery" qualifies as an overextended analogy.

Explain exactly why the ribosome cannot fairly be called a machine.
 
The Chinese Room experiment is a very interesting rebuttal of the Turing Test (also see this video at minutes 16:20 to 21:02)

Two fallacies I see in the Chinese Room thought experiment:

1) It assumes "understanding" is a magical bean, without properly examining that assumption.

2) If it's a given that there is a magic bean of understanding, the lookup table had to have been created by someone using that very bean. In that case, there's merely a time delay between the use of the magic bean and the mechanical act of looking up and replying with symbols in the room. A little like the time delay between coming up with something to say and actually saying it. The lungs, vocal cords and tongue (like the Chinese Room) have no clue of the meaning of what's being said. It's not a philosophical problem in my estimation.
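The lookup-table point can be made concrete with a toy "room". The rule book entries below are invented for illustration; the point is that the lookup step consults only the shape of the symbols, never their meaning, exactly as the operator in Searle's room does.

```python
# A toy Chinese Room: a rule book someone else wrote maps input squiggles
# to output squiggles. The lookup itself involves no understanding at all.
rule_book = {
    "你好": "你好！",          # "hello" -> "hello!"
    "你会说中文吗": "会。",     # "can you speak Chinese?" -> "yes."
}

def chinese_room(symbols):
    # Pure table lookup: symbols in, symbols out.
    return rule_book.get(symbols, "请再说一遍。")  # default: "please say it again"

print(chinese_room("你好"))  # a fluent-looking reply from a meaning-free lookup
```

Whatever "understanding" went into the replies went in when the rule book was written, not when the room is run, which is the time-delay point made above.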
 
It doesn't have to actually be a human. Does it?

Will it need to ****, eat, have periods, worry about getting cancer, have a mother, consider getting a pet dog, go to the dentist, forget where it put its keys, regret that tattoo of an ex's name? Humans think differently as a result of these functions and concerns. The human that is desperate for a piss makes different choices.

I take the embodied intelligence position to its logical conclusion. It's not just the form of the human body that impacts on the way the brain 'thinks', but also the detail of the structures and their dependence on the environment. You might be able to mimic those very convincingly, but whilst it's a mimic and not a biological human, it's still not going to think like a human. How well will the average person be able to tell the difference? It's not possible to know that. The senses can be fooled by artificial flavours, smells, images and so forth. If I think a filmed performance of an Elvis impersonator is the real Elvis performing, does that make him the real Elvis? It doesn't take a Chinese room to show that human perception/observation is not enough to judge. I'm surprised a scientist would argue that it is.
 
Communication via a common language, e.g. English. Either via speech or text.

Well obviously computers can do this. If we reduce language to signal, response, signal, even rocks communicate with each other. Without the reciprocal empathy factor, though, I don't think we can call it the same communication as between two humans. Even if I read a work by a dead human, part of the communication comes from knowing they had the same, or similar, feelings to myself. So a computer can only be seen to communicate like a human if a human is filling the gaps.

Let's use what you mean by thinking.

Would you maintain it's not possible to assess whether something is thinking by talking to it?

You could assess that it was computing. If you consider that thinking, then a calculator thinks. I can't talk to a baby either, or someone who speaks a different language. Yet I still know that they are thinking. How come?
 
Call me crazy but this sounds like some kind of woo-ish pseudo-science that was made up by someone who doesn't know much about either the brain or computing.

Is the opposite of 1 nothing? Or minus 1? It's a tough question. But probably not for someone with your impressive training. I'm stuck on: "This statement is false" - true or false? yesnoyesnoyesnoyesnoyesnoyes... You gonna give me that negative feedback routine again? Rings a bell. Keep your cool. Oh, I forgot, you're doing binary.
 
It shows the computer knows nothing but syntactic squiggles of the form IF 100101 THEN 1010110. It does not understand a single word of what you are saying.
Under the premises that Searle offers us, the Room understands Chinese. You can't deny it; it is a valid and necessary conclusion.

The problem is as I stated it: To build such a static rule-based system as Searle suggests is a physical impossibility. The problem is not in the structure of the argument or in our conclusion, it's hidden in his premises.

The modern-day proof is chatbots. Believing otherwise leads to hackers fooling you and silly nonsense. Nobody would consider one truly intelligent or cognitive, because it has exactly zero understanding of what it's saying. But listen to the hackers' hype.
That has nothing to do with the Chinese Room argument.

The Searle Chinese Room is just a thought experiment designed to bring your mind around to seeing this obvious hacker truth. It was never meant to be actually built and that is immaterial. Long live the Searle Chinese Room Experiment line in the sand. We won't be fooled.
That means you took the right lesson from the Chinese Room argument - which is not at all the lesson Searle intended. Searle argues that AI is impossible; he believes in the magic bean. That's unsupported by his argument - or by anything else, for that matter.
 
Will it need to ****, eat, have periods, worry about getting cancer, have a mother, consider getting a pet dog, go to the dentist, forget where it put its keys, regret that tattoo of an ex's name? Humans think differently as a result of these functions and concerns. The human that is desperate for a piss makes different choices.

Why would anyone want a robot that had to do that stuff? Of course a robot would have different motivations etc, who said otherwise? But what does any of that have to do with "consciousness"?

I take the embodied intelligence position to its logical conclusion. It's not just the form of the human body that impacts on the way the brain 'thinks', but also the detail of the structures and their dependence on the environment. You might be able to mimic those very convincingly, but whilst it's a mimic and not a biological human, it's still not going to think like a human. How well will the average person be able to tell the difference? It's not possible to know that. The senses can be fooled by artificial flavours, smells, images and so forth. If I think a filmed performance of an Elvis impersonator is the real Elvis performing, does that make him the real Elvis? It doesn't take a Chinese room to show that human perception/observation is not enough to judge. I'm surprised a scientist would argue that it is.

Again, I'm not asking about an artificial human meant to deceive people into thinking it is an actual human.

I'd just like to know if an artificial mind built from silicon, metal, plastic, whatever, would be conscious in the kind of "sense of self" "I"-ness way that people and other higher apes seem to be.
 