
Explain consciousness to the layman.

What?

Imagine some mad genius at some future time who may not be constrained by the ethical considerations of a University Ethics Committee.

What would make it impossible?

How many mad, genius, unethical scientists does it take to change a lightbulb?

Assuming that one person had all the skills and know-how, why would they want to? Why not just make a human if you want a human? If you're not too fussed about ethics you could even tweak genes to get a different genotypic expression.

An imitation is never the thing it imitates, it only subjectively (to the observer) manifests some of the same functions / attributes. From a practical point of view, that may be what is wanted. From that standpoint it might be judged to have succeeded. From a logical truth point of view, the simulation can't be the thing it simulates.
 

So if someone wants to make a conscious machine by building something that processes info just like a brain, with all the feedback loops and signal cross-talk etc, it still wouldn't think like a person?

It doesn't have to actually be a human. Does it?
 
Do you believe the Jeopardy champion computer is thinking rather than furnishing table-look-up rote responses after negotiating many if/then statements and database lookups?
It's certainly not using table-look-up rote responses, but whether it's thinking depends on your definition of thinking. Personally, I think it uses some of the mechanisms/algorithms a human does (in the abstract), but not enough of them and not flexibly enough to satisfy my concept of thought.

I'd say no, that doesn't necessarily answer the 'Is it thinking?' question.
?? sorry, what doesn't answer the 'Is it thinking?' question - the Turing test?

Does this mean you'd maintain that it's not possible to assess whether something is thinking by talking to it?
 
Communication via a common language, e.g. English. Either via speech or text.

Let's use what you mean by thinking.

Would you maintain it's not possible to assess whether something is thinking by talking to it?

Communication requires that both parties actually understand what is being said. A hacker could build a very clever syntactic program that understands not a single word (e.g. a chatbot). By simply talking to it you would not be able to tell whether it is truly thinking, so the answer must always be no.

An existing chatbot as illustration: http://www.jabberwacky.com/
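To make the "clever syntactic program" point concrete, here is a minimal ELIZA-style sketch in Python (my own illustration; it is not the code behind Jabberwacky or any real chatbot). It produces plausible-looking replies while manipulating nothing but strings:

```python
import re

# A hypothetical ELIZA-style bot: purely syntactic pattern -> template
# rules. It rearranges strings; nothing models the meaning of any word.
RULES = [
    (re.compile(r"\bi am (.+)", re.I),        "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I),      "How long have you felt {0}?"),
    (re.compile(r"\b(\w+) is a (\w+)", re.I), "Interesting. What makes {0} a {1}?"),
]
FALLBACK = "Tell me more."

def reply(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(reply("Socrates is a man."))    # Interesting. What makes Socrates a man?
print(reply("All men are mortal."))   # Tell me more. (no rule fires, no inference)
```

The reply to "Socrates is a man." looks engaged, but the program has no representation of Socrates, men, or anything else; it just copied substrings into a template.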
 
It wasn't my question. I suggest you direct that at the original poster, but I see you've already lost track of the conversation.

The question carried with it certain assumptions, as questions usually do, and those assumptions were what I pointed out.
 
Yes it is. The human brain does the maths that comes before the demonstration/consequence can be performed. I'm not saying consciously.
Then so do computers.

Well a human brain can see a way to show why there are an infinite number of primes.
How is that different from what computers do? (See the sketch at the end of this post.)

That may well be because a human brain does more than compute, also it intuits.
How is this "intuition" anything other than computation?

The computer does not contain a map of its own perceptions.
Of course it does.

Then we give it an incomplete set of states to do this (0,1), which doesn't help matters.
What state can't be represented by 0 or 1?
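On the primes point above: Euclid's argument is itself mechanical enough to run, which is worth keeping in mind when weighing "compute" against "intuit". A minimal sketch (mine, purely illustrative): given any finite list of primes, it computes a prime outside the list.

```python
from math import prod

def prime_outside(primes):
    """Euclid's construction: every prime factor of prod(primes) + 1
    lies outside the given list, since dividing both it and the
    product would mean dividing 1."""
    n = prod(primes) + 1
    d = 2
    while d * d <= n:          # trial division for the smallest factor
        if n % d == 0:
            return d           # smallest factor > 1 is always prime
        d += 1
    return n                   # n itself is prime

print(prime_outside([2, 3, 5]))              # 31 (2*3*5 + 1 is prime)
print(prime_outside([2, 3, 5, 7, 11, 13]))   # 59 (a factor of 30031 = 59 * 509)
```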
 
One big flaw with the Turing test: does the success of an imitation prove it is the thing?

If a person thinks they are talking to an intelligent person, does that make the machine intelligent like a person?
Yes. That's the point. Intelligence is a label for a class of behaviours; if the computer exhibits those behaviours, wherefore then should we not assign the label?

(False analogies snipped.)
 
The Chinese Room experiment is a very interesting rebuttal of the Turing Test (also see this video at minutes 16:20 to 21:02).
The Chinese Room couldn't rebut its way out of a wet paper bag.

The Room as a system understands Chinese. Any argument to the contrary contradicts the premises of the argument.

The Room as Searle describes it is, of course, utterly physically impossible; that's where our intuition leads us astray. To handle even a modest exchange of messages it would have to be larger than the visible Universe and would operate on a timeframe of trillions of years.
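A back-of-the-envelope check on that size claim (my own arithmetic, purely illustrative, and assuming the rule book is a naive lookup table keyed on whole exchanges): with a vocabulary of roughly 3,000 Chinese characters, covering every possible exchange of just twenty characters already needs

$$3000^{20} = 10^{20\,\log_{10} 3000} \approx 10^{69.5}$$

entries, and forty characters pushes it to about $10^{139}$, against only about $10^{80}$ atoms in the observable Universe.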
 
You must've missed it. Computers compute.

What do they compute? Computations. How do they compute them? In a computery way.

And we can avoid a real definition in this way for ever, really, and pretend that we are talking about something that is actually going on in some objective way.

It's quite easy to consider computations in terms of the human beings who interpret them. That would be my preferred definition. If, however, it's being insisted that a computation is going on independently of a human being, then it should be defined in some way what that means.

It's not so much that I haven't received an answer to this - it's that it's considered unfair and absurd to ask the question.

Suppose that instead of a computer we had an auto-kitchen, designed to provide nutritious and tasty meals. We'd recognise that there's no objective way to tell whether it was preparing edible meals without reference to the human being who was going to eat them. We could maybe dither about trying them out on the dog, but fundamentally we'd have to accept that the machine didn't care what it was producing. If the same were accepted about the computer, we wouldn't have an issue. Of course, it can't be accepted, because then the whole AI edifice collapses. So the issue of what computation is is never addressed, and we go round and round and round.
 
In order to represent things very real, like sheep in backyards.

The sheep are very real, but the quality of "sixness" is something that humans apply to make sense of the situation. In fact, you can never have more than one of the same thing. If you have two things, then they must be different things. In any physical context, there are an effectively infinite number of different things. Counting is a trick human beings use to pretend that things can be different and the same at the same time. It's an ingenious lie.
 
It's quite easy to consider computations in terms of the human beings who interpret them. That would be my preferred definition.
Yes, we know you prefer that, because it preserves your magic bean.

If, however, it's being insisted that a computation is going on independently of a human being, then it should be defined in some way what that means.
Human brains are computers. That human interpretation is computation.

It's not so much that I haven't received an answer to this - it's that it's considered unfair and absurd to ask the question.
Completely untrue. You've received very precise and detailed answers; you just don't like them.

That's not our problem.
 
Exactly what is it about machines or computers that makes them incapable of experiencing consciousness or feelings?

Now that's an example of arguing from ignorance.

It's a question, actually. Coming from someone who claims that other people use the term wrong, it's damn funny.

Just Asking Questions, eh?

It wasn't my question. I suggest you direct that at the original poster, but I see you've already lost track of the conversation.

The question carried with it certain assumptions, as questions usually do, and those assumptions were what I pointed out.

I don't see where you pointed out any assumptions. You just mistook a question for a logical fallacy.
 
rocketdodger said:
About what? Nothing is external to a brain in a vat. Not to mention the BIV's lack of intentionality.

"external" is relative.


Not sure what to say to you as I don't know whether I am corresponding with brain-in-a-vat rocketdodger or member-of-a-linguistic-community rocketdodger.

Meaning and content can be justifiably ascribed only to the latter.
 
The Chinese Room couldn't rebut its way out of a wet paper bag.

The Room as a system understands Chinese. Any argument to the contrary contradicts the premises of the argument.

The Room as Searle describes it is, of course, utterly physically impossible; that's where our intuition leads us astray. To handle even a modest exchange of messages it would have to be larger than the visible Universe and would operate on a timeframe of trillions of years.

It shows the computer knows nothing but syntactic squiggles of the form IF 100101 THEN 1010110. It does not understand a single word of what you are saying. The modern-day proof is chatbots. Believe otherwise and you leave yourself open to being fooled by hackers and silly nonsense. Nobody would consider a chatbot truly intelligent or cognitive, because it has exactly zero understanding of what it's saying. But listen to the hacker's hype:

Take for example the chatbot Ultra Hal: http://www.zabaware.com/
1. "Giving your computer the power of thought": FALSE. It doesn't understand a thing; the "power of thought" is just a clever syntactic trick.
2. Feature: "Understands English": FALSE. See above.

No hacker/programmer worth his salt believes any of that, and never will. It's easy to test whether it understands anything: type "Socrates is a man. All men are mortal. Is Socrates mortal?" into any chatbot and see what you get: a bunch of BS showing it has clearly not understood a word you typed in. I call this the Syntactic BS Detector Test.

The Searle Chinese Room is just a thought experiment designed to bring your mind around to seeing this obvious hacker truth. It was never meant to be actually built, so that objection is immaterial. Long live the Searle Chinese Room Experiment line in the sand. We won't be fooled.

Ultra Hal in action:
Me: Socrates is a man.
Ultrahal: I can sense that Socrates is a man.
Me: All men are mortal.
Ultrahal: All Men are Mortal is a 1946 novel by Simone de Beauvoir. It tells the story of Raimon Fosca, a man cursed to live forever.
(*BS spewing. Hasn't understood a word. Should have seen and stated the deductive conclusion of our conversation. THAT is cognition.*)
Me: Is Socrates mortal?
Ultrahal: Socrates is a man. (*WRONG. Circular BS: I told IT THAT in the first sentence.*)

LOL. C'mon. Time to get real. This is BS: not understanding English, not the power of thought (understanding/using language).
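For contrast, here is roughly what the "deductive conclusion" demanded above takes: a toy forward-chaining rule engine (my own sketch, not the code of any shipping chatbot) that derives the mortality of Socrates from the two premises. Note that passing the Syntactic BS Detector Test this way still shows only inference machinery; the engine, too, is shuffling symbols.

```python
# Toy forward-chaining inference engine (illustrative sketch only;
# no real chatbot is claimed to work this way).
# Premise 1: Socrates is a man   -> fact ("man", "Socrates")
# Premise 2: all men are mortal  -> rule ("man", "mortal")

def forward_chain(facts, rules):
    """Apply rules of the form (if_pred, then_pred) until fixpoint."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for if_pred, then_pred in rules:
            for pred, subject in list(derived):
                new_fact = (then_pred, subject)
                if pred == if_pred and new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

facts = {("man", "Socrates")}
rules = [("man", "mortal")]
print(("mortal", "Socrates") in forward_chain(facts, rules))  # True

# Swap "mortal" for any meaningless token and the derivation still
# goes through: the engine manipulates symbols, not meanings.
```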
 
If a person thinks they are talking to an intelligent person, does that make the machine intelligent like a person?



Yes. That's the point. Intelligence is a label for a class of behaviours; if the computer exhibits those behaviours, wherefore then should we not assign the label?

(False analogies snipped.)



Missed that part, did you, Pixy? Or do you just prefer to ignore it?

So basically what you're saying is: if any old street bum whack job decides that the computer-generated answering service on the other end of the line (or on the other 'side' of the computer screen) is actually a real live person, then we have proof that computers are conscious (intelligent, whatever) like a person. Or does the whack job require some rudimentary education? What level? Grade 3, grade 6, grade 10, technical college, university, PhD? Just a note: there is no more a definition of intelligence than there is a definition of consciousness, and 'it's what he just did' does not actually qualify as one.
 
A robot is a tool, not an observer. Its perception and calculation only have meaning when filtered through human observation.

To be precise, its perception and calculation only have human meaning when filtered through human observation.

And? This seems obvious to me.

The question you are utterly ignoring is whether the perception and calculation of the robot have meaning to the robot, or to other robots, or to other creatures besides humans.

Do you think a chipmunk carrying nuts to its nest cares what meaning you assign to its actions? Do you think the thought "I am carrying nuts to my nest" is going through its chipmunk brain?
 
Well a human brain can see a way to show why there are an infinite number of primes. That may well be because a human brain does more than compute, also it intuits. Given that what the human brain is seeking to do is to map its own perceptions, and it ultimately contains a map of its own perceptions, this is not surprising. The computer does not contain a map of its own perceptions. It can only calculate from the values of our perceptions we feed into it. Then we give it an incomplete set of states to do this (0,1), which doesn't help matters.

Call me crazy but this sounds like some kind of woo-ish pseudo-science that was made up by someone who doesn't know much about either the brain or computing.
 