
Explain consciousness to the layman.

Status
Not open for further replies.
Fine, include chimps (with their brains made of neurons and DNA 95% the same as humans) in the category of intelligent beings. It really doesn't change the principle.

Ok, now that you are backtracking, is there anything else we can include?

I guess all we need to do is show you a YouTube video of X doing something intelligent and you will concede that X might be an intelligent being?

It's why I prefer to leave animals aside in discussions of consciousness.
I think we all know that the reason you prefer to leave animals aside in your discussions is because of the proverbial wrench the facts about animal cognition throw into the proverbial works of your arguments.

In fact I don't know how you can tolerate the idea of a chimp doing anything cognitively better than a human. I wouldn't stand for it if I were you!!!

If they aren't conscious, it doesn't prove anything, and if they are conscious, it doesn't prove anything.
Wait -- if an animal with a simpler brain than a human's is conscious, it doesn't prove anything?

I tend to think that at the very least if X is conscious it certainly proves that a brain as simple as the brain of X can lead to consciousness. Not sure why you consider such an observation "not proving anything."

Let me try another version of this sentiment and see if it fits:
rocketdodger said:
It's why I prefer to leave computers aside in discussions of consciousness. If they aren't conscious, it doesn't prove anything, and if they are conscious, it doesn't prove anything.
 
Sometimes a yes/no answer isn't enough. Do the social and cultural skills you have learned and been taught demonstrate your own intelligence or that of the culture you learned them from?

Could this be a false dichotomy? ;)
Ha! I suppose it could be. But is there really no difference between the knowledge and experience a child acquires as it grows and the programming and data supplied to a computer? Hmmm. I confess I am stumped by that but still cussedly reluctant to accord intelligence to the machine.

I guess we are going to need a definition of intelligence to continue this side-branch in a discussion about consciousness. I offer:

The capacity to perform a whole bunch of cognitive functions

knowing it is inadequate and being uncomfortably aware that it does not rule out what I want to exclude: that computers are intelligent.

I may have this completely wrong. Maybe intelligence is an entirely mechanical thing, in our case a capacity of our personal, on-board computers, but also in our case infused or mixed up with consciousness in a way which makes it hard to separate the two. So it would be something like a hand - just a useful but essentially mechanical tool which has evolved under selection pressure to improve our chances of bumping off Neanderthals.

Hmm. I might be sold on that on reflection.
 
Whether or not there's something called intelligence embedded in these objects, it certainly needs conscious human beings to release it.

What do you mean by 'needs conscious human beings to release it' ?

It seems to suggest that an autonomous problem-solving (i.e. intelligent) machine cannot continue to autonomously problem-solve without conscious humans to take action to 'release' that intelligence, which sounds absurd. That surely can't be what you meant, so what did you mean?

If you mean that humans had to make the machine and set it going, then yes, even if such a machine could replicate, it would ultimately have a human origin - but that doesn't mean it can't be intelligent or have intelligence itself. You yourself are a biological machine of human origin, and you seem to be intelligent, although you're just the result of a complex coding of DNA interacting with a complex environment.

Is it the definition of intelligence that is the problem here? For me it's a basic generalised problem-solving capability, if that helps.
 
Fine, include chimps (with their brains made of neurons and DNA 95% the same as humans) in the category of intelligent beings. It really doesn't change the principle.

It's why I prefer to leave animals aside in discussions of consciousness. If they aren't conscious, it doesn't prove anything, and if they are conscious, it doesn't prove anything.

Consciousness is biological?
 
...is there really no difference between the knowledge and experience a child acquires as it grows and the programming and data supplied to a computer?
There is a difference in the mechanisms involved, but if you isolate it to specific examples, algorithms are algorithms.

Hmmm. I confess I am stumped by that but still cussedly reluctant to accord intelligence to the machine.

I guess we are going to need a definition of intelligence to continue this side-branch in a discussion about consciousness.

I offer:

The capacity to perform a whole bunch of cognitive functions

knowing it is inadequate and being uncomfortably aware that it does not rule out what I want to exclude: that computers are intelligent.
You could define it to explicitly exclude non-biological systems, but let's face it, that's no better than the 'intelligence of the gaps' approach I mentioned earlier, that effectively defines it as 'the clever stuff we haven't yet managed to get a machine to do'.

Maybe intelligence is an entirely mechanical thing, in our case a capacity of our personal, on-board computers, but also in our case infused or mixed up with consciousness in a way which makes it hard to separate the two. So it would be something like a hand - just a useful but essentially mechanical tool which has evolved under selection pressure to improve our chances of bumping off Neanderthals.
That's closer to my view of it - a problem-solving capability.
 
What do you mean by 'needs conscious human beings to release it' ?

It seems to suggest that an autonomous problem-solving (i.e. intelligent) machine cannot continue to autonomously problem-solve without conscious humans to take action to 'release' that intelligence, which sounds absurd. That surely can't be what you meant, so what did you mean?

He means that the behavior humans classify as "autonomous problem-solving" isn't fundamentally different from any other behavior in a way that can be expressed without referencing the already existing intelligence of humans.

For example, if you compared a functioning problem solving robot with a powered down one, and noted that the functioning robot could make decisions to keep itself functioning longer while the powered down robot didn't, westprog will respond that without humans around to notice the difference between the functional and powered down robots, the robots are essentially the same -- two hunks of metal and electronic materials.

The fact that one of the robots is doing things like responding to environmental changes differently than the other is irrelevant because both of them still respond to environmental changes, and without a human to classify one behavior as "functional" and the other as "powered down" they are just "responding to environmental changes" like anything else does ( rocks or bowls of soup in particular, westprog looooves rocks and bowls of soup ).

Note that piggy has jumped on board with this viewpoint as well -- he has stated many times in this thread that without a human to interpret the output, the internal behavior of a working computer is not different in any appreciable way than that of a powered down computer. So piggy would say that the term "autonomous problem-solving" is only meaningful to an existing intelligence, and otherwise the machine doing what we call "autonomous problem solving" is just going through the molecular motions like any other machine on any other day.
 
He means that the behavior humans classify as "autonomous problem-solving" isn't fundamentally different from any other behavior in a way that can be expressed without referencing the already existing intelligence of humans. <and so-on>
OIC, thanks.

I thought the light of rational discussion had emerged from behind the clouds of obfuscation for a moment there. My mistake.
 
What do you mean by 'needs conscious human beings to release it' ?

It seems to suggest that an autonomous problem-solving (i.e. intelligent) machine cannot continue to autonomously problem-solve without conscious humans to take action to 'release' that intelligence, which sounds absurd. That surely can't be what you meant, so what did you mean?

If you mean that humans had to make the machine and set it going, then yes, even if such a machine could replicate, it would ultimately have a human origin - but that doesn't mean it can't be intelligent or have intelligence itself. You yourself are a biological machine of human origin, and you seem to be intelligent, although you're just the result of a complex coding of DNA interacting with a complex environment.

Is it the definition of intelligence that is the problem here? For me it's a basic generalised problem-solving capability, if that helps.

I believe that here you're referring to hypothetical machines that are capable of autonomous action. If so, they fall in the same category as humans.

I was describing the various systems that humans use to extend their intelligence. I still maintain that none of these systems exhibit intelligence in isolation, and that if a human being is not involved, they possess no intelligence at all. This goes for books, tools, CDs, DVDs - and computer programs.
 
I believe that here you're referring to hypothetical machines that are capable of autonomous action.
Not necessarily hypothetical. There are plenty of machines around that are capable of autonomous action (even the Roomba floor cleaner has a large degree of autonomy). The question is when does a behaviour qualify as intelligent, regardless of agency.

I was describing the various systems that humans use to extend their intelligence. I still maintain that none of these systems exhibit intelligence in isolation, and that if a human being is not involved, they possess no intelligence at all. This goes for books, tools, CDs, DVDs - and computer programs.

Books, CDs, DVDs and other recorded media, I'd agree with. 'Tools' is too vague to comment on, but what about computer programs that control autonomous robots? There are autonomous machines that have a degree of problem-solving capability (e.g. autonomous planetary rovers) - do they 'fall into the same category as humans'?
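For illustration, the kind of autonomy being claimed for such rovers can be sketched as a minimal sense-act loop. This is a toy example with invented sensor names, not any real rover's control code: the point is only that the machine selects its next action from what it currently senses, with no human in the loop once it is running.

```python
# Toy sense-act rule: the agent chooses its action from sensed state,
# with no operator input after start-up. Sensor names are invented
# purely for illustration.
def choose_move(sensors):
    """Pick a direction based on which paths are clear."""
    if not sensors["obstacle_ahead"]:
        return "forward"
    if not sensors["obstacle_left"]:
        return "turn_left"
    if not sensors["obstacle_right"]:
        return "turn_right"
    return "reverse"

# The same fixed rule produces different behaviour in different
# environments -- the decision depends on the world, not on a human.
print(choose_move({"obstacle_ahead": False, "obstacle_left": True, "obstacle_right": True}))
print(choose_move({"obstacle_ahead": True, "obstacle_left": True, "obstacle_right": False}))
```

Whether such conditional behaviour deserves the word 'intelligent' is exactly what is in dispute here; the sketch only shows what 'autonomous' means operationally.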
 
Not necessarily hypothetical. There are plenty of machines around that are capable of autonomous action (even the Roomba floor cleaner has a large degree of autonomy). The question is when does a behaviour qualify as intelligent, regardless of agency.


When did this all become about intelligence all of a sudden....didn't we all already agree that Intelligence != consciousness....did I miss something?


But anyway...you might find a few candidates here....there are many clips, they are quite amazing....you should watch them all.

Books, CDs, DVDs and other recorded media, I'd agree with. 'Tools' is too vague to comment on, but what about computer programs that control autonomous robots? There are autonomous machines that have a degree of problem-solving capability (e.g. autonomous planetary rovers) - do they 'fall into the same category as humans'?

It is as intelligent as the programmer who programmed it because it is a REMOTELY CONTROLLED vehicle.

If you see an RC car acting intelligently in navigating around a room and you do not know it is remotely controlled.....do you call IT intelligent?

If you do not consider a remotely controlled vehicle that seems to behave very cleverly in avoiding obstacles and corners to be intelligent, because it is remotely controlled......then why would you attribute intelligence to a robot that is running a SCRIPT?

A programmed robot/rover is doing nothing but following INSTRUCTIONS that REMOTELY CONTROL it over time and space instead of just space.

Running a program regardless of how clever the program might be is just remote control...... a step higher in remote control complexity than an RC car just as an RC car is a step higher in puppetry than Punch and Judy.


ETA: If a human being seemingly did something very clever and then you discover that all he did was follow step by step instructions to do it.... would you still attribute the cleverness of the action to him?
 
It is as intelligent as the programmer who programmed it because it is a remotely controlled vehicle.
It's not a remote-control vehicle.
If you see an RC car acting intelligently in navigating around a room and you do not know it is remotely controlled.....do you call IT intelligent?
False analogy.
If you do not consider a remotely controlled vehicle that seems to behave very cleverly in avoiding obstacles and corners, intelligent because it is remotely controlled......then why would you attribute it to a robot that is running a script.
Because running a script is not remote control.
A programmed robot/rover is doing nothing but following instructions that remotely control it over time and space instead of just space.
You're equivocating.
Running a program regardless of how clever the program might be is just remote control......
...and now you're equivocating in bold and blue text.
a step higher in remote control complexity than an RC car just as an RC car is a step higher in puppetry than Punch and Judy.
So if you recognize that they are different, why are you claiming they are the same thing?
ETA: If a human being seemingly did something very clever and then you discover that all he did was follow step by step instructions to do it.... would you still attribute the cleverness of the action to him?
Another false analogy. Note that the human brain follows the laws of physics--step by step. So if a human being did anything at all that you recognize as intelligent, then the human being did so using the step by step application of physical laws on material entities. In order to apply this logic, you're going to need to invoke special pleading when it comes to humans following physical laws.

Incidentally, a common operational criterion for intelligence is simply that an entity evaluates some environment and figures out what to do. By this criterion, not all programmed entities are intelligent--and you can tell which ones aren't by whether the programmer coded specific responses to specific conditions, or didn't bother programming responses at all and simply programmed a fixed sequence of actions. This is one of the definitions used to decide whether a programming paradigm counts as "artificial intelligence" (the other popular one, common in gaming, is simply to mimic an entity for play purposes; often these are tweaked to be less intelligent in order to keep the game engaging, since people generally won't play games in which they feel they can't accomplish anything).
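The scripted-versus-reactive distinction in that criterion can be put in a few lines of toy code (hypothetical functions, purely illustrative):

```python
# A scripted entity replays a fixed action sequence, whatever the
# world is like -- no evaluation of the environment ever happens.
def scripted_actions():
    return ["forward", "forward", "turn_left", "forward"]

# A reactive entity evaluates its environment before acting -- the
# operational criterion for intelligence described above.
def reactive_action(obstacle_ahead):
    return "turn_left" if obstacle_ahead else "forward"

# The script never varies; the reactive agent's behaviour does.
print(scripted_actions())
print(reactive_action(True))   # turn_left
print(reactive_action(False))  # forward
```

By this criterion the first function is mere playback, however elaborate the sequence, while the second - trivially - conditions its behaviour on the world.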
 
Another false analogy. Note that the human brain follows the laws of physics--step by step. So if a human being did anything at all that you recognize as intelligent, then the human being did so using the step by step application of physical laws on material entities. In order to apply this logic, you're going to need to invoke special pleading when it comes to humans following physical laws.

From the above statements it is obvious you do not understand the meanings of the following concepts:
  • Follow
  • Laws of physics
  • Step by step
  • Physics

I think you ought to research and understand those concepts and then come back to discuss the topic further.
 
I think you ought to research and understand those concepts and then come back to discuss the topic further.
Do you have a specific objection?

By default, if you don't mention it, I'm going to interpret this as meaning you don't.
 