
Explain consciousness to the layman.

If you can describe a definition of an intelligent device that can be evaluated without the intentions of some human being having to be taken into account, then I'd be interested to hear it.

A squirrel is intelligent because when a predator is approaching the squirrel runs away.

What "intention of some human being" is taken into account there?

Are you claiming that squirrels are the same as rocks?

Or are you claiming that if there were no humans to label squirrels as intelligent, they would be the same as rocks?

Or are you claiming that without humans, the self-preservation behavior of a squirrel would cease to be significant? Even to the squirrel who is preserving itself?

Or are you claiming that the self-preserving nature of life isn't significant in the first place? That the fact that life has managed to exist for billions of years on this planet, as essentially the same homeostatic system, while rocks have crumbled into dust and been washed away, isn't significant?

Or are you claiming that rocks could be said to exhibit self-preserving behavior just like a life form, just like a squirrel? If so, are you convinced that rocks exhibit reproductive behavior as well? Can rocks collect nuts to last them through the winter, too?

Maybe you should stop ignoring animals when you think about this stuff, westprog, since it seems like a measly little squirrel has managed to devastate your entire argument. Squirrels have nothing to do with humans, yet they are very different from rocks.
 
It's not 'all about' intelligence. Nothing sudden about it - intelligence has been part of the discussion for a while now.

Not sure how Discovery channel clips are relevant here. I've seen plenty of them.


Are you using some new definition of 'autonomous' that encompasses remote-control?

So you keep asserting, but it seems to me that that logic isn't particularly helpful, because you could equally argue that ultimately humans are machines 'remote-controlled by evolution' - i.e. DNA codes for the construction of a complex learning machine that has some behaviours 'built-in', can make use of rules and algorithms provided to it, and can develop its own rules and algorithms. We have developed machines that can learn, use supplied rules and algorithms and create and apply new ones - admittedly in a far more limited range of contexts, but fundamentally the same kinds of capabilities. Why do you feel the biological machine is intelligent, but the electronic one is not? Precisely what is it that you feel is lacking in the electronic machines that makes them not intelligent?



This is exactly what I have been questioning in my recent posts. If you are suggesting that what you learn from external sources doesn't count as 'cleverness' (is this a synonym for intelligence?), just what is it that does count?

If it isn't a result of external influences, could it be something internal? perhaps something structural, involving the layout or connectivity of your brain? Something that is a result of whatever coded for your development - i.e. your DNA? Do you inherit the coding for your intelligence in the DNA you receive from your parents?

Is it fair to say you are only as intelligent as the genes you inherited from your parents will allow? Just as a machine is only as intelligent as the code it gets from its programmer will allow?

Just askin' ;)




Are you a proponent of Intelligent Design? All the above is similar to the argument between Intelligent Design and Evolution.

What you are arguing for is that Intelligent Design is basically the same as Evolution.

I saw an episode of Futurama that exquisitely lampoons the issue. I think it is Season 5 or 6, episode 9; it is called "A Clockwork Origin".

However, other than fiction and conjectures proposed in nice Cartoons and SciFi movies, I think you need to establish in your mind firmly the difference between on the one hand FICTION, DESIGN and CONSTRUCTION and on the other hand REALITY, EVOLUTION and INSTRUCTION.

Once you have the difference established firmly then you would appreciate why the argument you give above is wrong. Before that I am afraid we are going to be arguing fruitlessly.
 
No, thus the ending /.


Then we have considerably more than one example to study.

There may be more than one individual animal or plant etc. However, they are all extensions of the same life form from a common ancestor. As such, all the instances of consciousness in life forms are the same in terms of what the process is and how it emerges.
 
Before that I am afraid we are going to be arguing fruitlessly.

I don't think the argument is ever going to be fruitful.

[SNIP]
Do you consider such a discussion to bear "edible fruit," as it were?

What do you call it, how a squirrel runs up a tree when a predator is approaching, that is different from how a rock sits there when a predator is approaching?

As it turns out, I'm actually more on your side than not when it comes to intelligence -- although intelligence isn't consciousness (although I wouldn't be surprised if consciousness turned out to require intelligence).

Avoiding predators is a bit dicey, though, when it comes to ascribing intelligence.

I don't have a precise definition to offer, but I generally link intelligence with adaptability. (Many insects, for example, are great at eluding predators in various ways, but their responses appear to be reflex reactions... just extremely effective ones.)

When I think of intelligence, I think of the difference between mud daubers and bower birds.

If you interfere with a mud dauber while she's building her nest, she can't compensate for that, because she's just performing a reflex action; if you mess with her construction in the right way, you can force her to construct a non-functional nest.

On the other hand, if you mess with a bower bird's bower -- or if it simply doesn't attract females -- he will repair it and improve it.

Whales exhibit intelligence because they have culture -- new songs are invented, learned, and mimicked from season to season.

Chimps are intelligent because they can observe tool use and learn it -- which is trickier than it seems at first glance, because it requires the chimp to understand what the intention of the action is... without that, there's no way for the young chimp to decide what is relevant (picking out the right size stick, for instance, and stripping off the leaves) and what is not relevant and therefore does not need to be emulated (sneezing, pursing the lips, scratching an ear).

Human babies have truly astounding abilities to attribute intention, and can learn to perform an action correctly in some cases even after seeing an adult try to perform it and fail.

Squirrels not only elude predators, but they are capable of learning that some things which might appear like predators are really not, and in fact might be a source for a yummy meal.

So to me, adaptability and flexibility are part of the picture.

As for the rock rolling down the hill, even though its path may be unpredictable, that unpredictability does not appear to arise from anything the rock is doing, but rather from the chaotic interaction of the rock and the hillside.
 
There may be more than one individual animal or plant etc. However, they are all extensions of the same life form from a common ancestor. As such, all the instances of consciousness in life forms are the same in terms of what the process is and how it emerges.


Not to mention that, as far as I know, the brains of all the animals we consider to have a shot at being attributed consciousness are in fact the same, the difference being additional lobes - basically the same structure, material, and mechanism of operation.

So the ONLY object we have that has achieved consciousness on this entire Earth and in the solar system (afaik) in the billions of years it has existed is the bundle of matter called the brain, and there is no other object that is similar in action but different in structure, material, or mechanism.

For example look at flight in animals…. There are the bats, birds, insects and not to mention gliding animals…. many different structures and mechanisms. Look at swimming, breathing, locomotion. However, the only mechanism for consciousness is the brain.

By the way…. shifting the discussion and OP thread topic to intelligence does not solve the problem either.
 
By the way…. shifting the discussion and OP thread topic to intelligence does not solve the problem either.

Well, keep in mind that some of the comp.lits are reasoning from the study of intelligence and attributing that to the study of consciousness.

As I've said before, this is a lot like studying cars, developing a general "theory of transportation", then asserting that airplanes can be understood in terms of that theory based on cars.

And as long as you don't bother to actually look at airplanes, you can continue to believe they are the same thing.

Of course, people who design, build, and fly airplanes will have no use for such a "theory of transportation" as applied to aircraft. Good thing, too, cause if they tried to use it they'd never get anything off the ground.
 
Well, keep in mind that some of the comp.lits are reasoning from the study of intelligence and attributing that to the study of consciousness.
As I've said before, this is a lot like studying cars, developing a general "theory of transportation", then asserting that airplanes can be understood in terms of that theory based on cars.

And as long as you don't bother to actually look at airplanes, you can continue to believe they are the same thing.

Of course, people who design, build, and fly airplanes will have no use for such a "theory of transportation" as applied to aircraft. Good thing, too, cause if they tried to use it they'd never get anything off the ground.



Maybe because the comp.lit crowd think that it might be a little easier to argue for machine Intelligence through Design, forgetting all about the word "Artificial" in the term "AI", and basically converting the whole thing into an Intelligent Design argument when it comes to consciousness.

It is amazing to me that someone can actually argue, not metaphorically but literally, that writing a program in a computer is the same as teaching a person and that "animals are following the laws of physics step by step in the manner that a CPU executes a program".

And all that after insisting that average modern computers running mundane programs are conscious if only you limit your scope of "thinking". Much like you might be excused or even commended by some for "reasoning" that kissing is the cause of pregnancies if that is your "operational definition" and are consistent about it.

Where can we go from there?
 
If you have a better objective definition for intelligence, that makes a robot that does the dishes more intelligent than a thunderstorm, for example, then I'd like to see it.
If by 'objective', you mean measurable or quantifiable, there are a range of measures used in animal studies to assess various levels of intelligence, from tool-using, imitation, and theory of mind, to learning by association, simple discrimination, spatial orientation, habituation, etc. There are so many different facets to intelligence that a single definition is necessarily abstract.

When you say "problem-solving", I assume that the problem is a human problem, set for the device. The device itself doesn't have problems to solve.
The machine may have been programmed to set its own goals within its operational context, so the human influence may only be to set the long-term goal. But why do you think the source of the goal being sought is relevant? Does it make a difference to our assessment of the intelligence required for avian navigation if we discover the goal was set by migratory stimulus rather than by a racing pigeon fancier?

When you describe the device as being "flexible" that usually means within very limited bounds.
Yes, obviously; biological creatures develop to fit certain environmental niches, and are optimized to function within those niches. In an analogous way, machines are designed for specific roles or functions and optimized for those functions.

A dish-washing robot might be considered intelligent because it was always able to wash the dishes.
That's more than a bit vague - many people have a dish washer that is able to wash and dry the dishes - given a water supply, electricity, and detergent. Nobody I know considers their dish washer intelligent. I don't think they'd change their minds even if they never broke down.

The kind of flexibility we want is to be able to recognise when a dish is already clean, to find the right place to put it away, and to replace a wet tea-towel.
A wet tea-towel? :D Remind me not to have you design a dish washing robot!

A dishwashing robot that sometimes strangled the cat, broke the dishes and set fire to the house would be much more flexible, objectively speaking, but we wouldn't regard its additional repertoire as representing higher intelligence, because it would not be translating our intentions into action.
This is either saying that a poorly specified goal will probably not result in success (e.g. if an octopus had a goal of catching and eating crabs but ignoring predators, it wouldn't be very successful), or that inefficient behaviours are generally undesirable, which is surely stating the obvious.

"Efficient" - well, that assumes a particular goal for the device, and that other things that it does are extraneous to that goal.
Yes, of course it assumes a goal - what useful behaviour doesn't have a goal? Biological life is no different.

Don't quite know what you mean by 'other things that it does are extraneous to that goal' - what other things? Things that aren't directly relevant to the goal are extraneous in that sense, but may be taken into account. Efficiency involves minimizing resource, time, and energy use, so for example, battery level, temperature, performance characteristics, etc., may be accounted for in establishing the most efficient means to achieve the goal.

For an autonomous vehicle, plotting a longer route to an objective, one that navigates around a hill, might be evaluated as more efficient than a shorter route over the hill that would use more power, overheat the brakes, and increase the risk of toppling. Such a machine, which could dynamically assess its state and the environment and make decisions based on that information, might be considered more intelligent than a machine that drives a straight line to the objective, ignoring all other factors. There are machines that can do the former and machines that can do the latter. I would consider the former more intelligent.

An objective observer would see the machine acting with some intelligence. If they discovered it was a human creation, would they think humans must be clever to make an intelligent machine, or that the machine isn't intelligent after all? If they then discovered humans behaving intelligently, would they think isn't evolution wonderful, to produce intelligent humans, or that humans aren't intelligent after all, they just have the encoded cleverness of evolution?
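That hill-avoidance trade-off can be sketched as a simple weighted cost comparison. Everything below is hypothetical - the routes, the numbers, and the weights are invented purely to illustrate the idea of a machine weighing more than raw distance:

```python
# Hypothetical sketch: pick between two routes using a cost that weighs
# distance, energy use, and risk, rather than distance alone.

def route_cost(distance_km, energy_kwh, brake_heat_risk, topple_risk,
               w_energy=2.0, w_risk=50.0):
    """Lower is better; the weights are illustrative assumptions."""
    return (distance_km
            + w_energy * energy_kwh
            + w_risk * (brake_heat_risk + topple_risk))

# Longer route around the hill: more distance, less energy, low risk.
around = route_cost(distance_km=12.0, energy_kwh=3.0,
                    brake_heat_risk=0.01, topple_risk=0.01)

# Shorter route over the hill: less distance, more energy, higher risk.
over = route_cost(distance_km=8.0, energy_kwh=6.0,
                  brake_heat_risk=0.3, topple_risk=0.2)

best = "around" if around < over else "over"
```

With these made-up numbers the machine chooses the longer route around the hill, because the cost function folds the extra power draw and risk of the direct route into the comparison.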

Where does the goal come from? Some human, of course.
Why is that relevant? Surely it's the behaviour demonstrated in achieving the goal that determines intelligence. When wolves co-operate to hunt their prey, they are said to demonstrate some intelligence in doing that. When a guide dog (seeing-eye dog) keeps its master out of danger, it is said to show some intelligence in doing that. Why does it matter who or what set the goal?

The same applies to "effective". Our homicidal faulty dishwashing robot might be enormously effective at creating mayhem, but since that's not what a human wants, we don't describe it as effective.
:confused: What has that got to do with the price of fish? If an evolutionary maladaptation arises, it isn't effective, and the creature dies out.

Why such focus on anthropocentric goal setting? A goal is a goal. Whether it's a machine or a biological organism, and whoever or whatever sets its goals, we have to look at its behaviour to determine whether it qualifies as intelligent.

If you can describe a definition of an intelligent device that can be evaluated without the intentions of some human being having to be taken into account, then I'd be interested to hear it.
If you can define anything without the intentions of some human being having to be taken into account, I'd be interested to hear it. Definitions are a human concept; as such, they are all dependent on human intention.
 
Are you a proponent of Intelligent Design?
No.

All the above is similar to the argument between Intelligent Design and Evolution.
No, it isn't.

What you are arguing for is that Intelligent Design is basically the same as Evolution.
No, I'm not.

Once you have the difference established firmly then you would appreciate why the argument you give above is wrong. Before that I am afraid we are going to be arguing fruitlessly.
If you can read Intelligent Design into what I posted, further discussion will almost certainly be fruitless :boggled:

However, I am slightly curious to know how you think Intelligent Design has anything to do with it.
 
There may be more than one individual animal or plant etc. However, they are all extensions of the same life form from a common ancestor. As such, all the instances of consciousness in life forms are the same in terms of what the process is and how it emerges.

I think there's a significant lesson in the parallel evolution of intelligence and probably consciousness in vertebrates and cephalopods; they share a common ancestor with barely a group of neurons to call a ganglion, and went on to develop radically different nervous system architectures, yet octopuses, for example, show strikingly intelligent behaviour, and show signs of conscious awareness.
 
westprog said:
Incidentally, a common operational criteria for intelligence is simply that an entity evaluates some environment and figures out what to do.

I was wondering about this issue for some time. How do we provide an objective definition of intelligence? One that applies independently of human (or possibly animal) concerns?
Careful. Anything a human identifies is necessarily identified by a human by the very fact that a human identified it. But that doesn't make it subjective. In fact, so long as we identified something real, there is by definition of real an extension to whatever intension we identify.
I don't believe that we can.
I just did :)
An intelligent object responds to changes in its environment? All objects respond to changes in their environment.
Actually, no. You missed it--apparently you stopped parsing that sentence at the "and" and thought you had read it all. Evaluating an environment is a necessary criterion, but it is not the definitive one. Intelligence, by this criterion, is in the figuring out what to do as a result of the evaluation.

The classic AI 101 example is tic-tac-toe. Since this is a very tractable game, it's easy to program a player for this game that simply gives a specific programmed response to each opposing move. Such responses are not considered intelligent because, whereas this program does indeed evaluate an environment, it does not "figure out what to do". An intelligent approach would have the tic-tac-toe playing program analyze the puzzle space, compare possible moves to others using some sort of algorithm, and move according to the result of this analysis.
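The contrast can be made concrete with a minimal sketch in Python (the function names here are my own, purely illustrative). A lookup-table player would map each board position straight to a canned response; the minimax search below instead analyzes the puzzle space, scoring possible moves by searching ahead:

```python
# Minimal tic-tac-toe minimax: the program "figures out what to do" by
# searching the game tree, rather than looking up a canned response.
# Board is a list of 9 cells: 'X', 'O', or None, indexed row by row.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`; 'X' maximizes, 'O' minimizes."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if not cell]
    if not moves:
        return 0, None  # board full: draw
    results = []
    for m in moves:
        board[m] = player                     # try the move
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = None                       # undo it
        results.append((score, m))
    return (max if player == 'X' else min)(results)

def best_move(board, player):
    return minimax(board, player)[1]
```

Because tic-tac-toe is tractable, the full-tree search is feasible; for chess, as noted below, the same idea needs heuristics and pruning, since exhaustive search is pragmatically intractable beyond the opening.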

The intelligent approach is necessary to program a decent chess playing game, since in this case, the problem is pragmatically intractable beyond something akin to an open book.

Thus, your particular response is moot, so let's fast forward:
If not, why not?
Because it does not analyze its environment and figure out what to do.
The way that we gauge inanimate intelligence in practice is the extent to which a device does what we want it to. A robot that charged around breaking things would not be considered intelligent - because it would be useless for us. A robot that could make us a cup of tea would be thought of as a smart robot.

There's nothing wrong with looking at things this way.
Actually, no. There is something wrong with this. In fact, the above is wrong in every way I can imagine. The degree to which something is useful to us has no bearing on its intelligence. If I were to ask a geeky friend to do my diffy q homework, and he did it well, then he would be demonstrating remarkable intelligence. And that would be useful. But if I took my car to an automated car wash, and it did an extraordinary and impressive job cleaning my car and making it shiny, I wouldn't as a result judge the car wash to be intelligent. So this would not be a demonstration of intelligence. But it is still useful. Furthermore, if I did my diffy q homework myself, then turned it in to my professor, he could start looking at it, and then simply rip it up in disgust. I certainly wouldn't take this as an indication of the professor's lack of intelligence; in fact, quite the opposite... he was disgusted with my homework precisely because he was intelligent. But that would actually be harmful to me--less than useful.

So every sanity check I sling at the rule that utility equals intelligence simply fails. The rule of utility equating to intelligence simply does not hold--utility to me isn't a criterion.
Ascribing intentionality to the device is a pointless exercise. It is the intentionality of the person who constructs the device that matters.
No, it's not pointless. But oddly enough, the intentionality of the person who constructs the device does indeed matter. The more the machine figures out what to do and the less the person who constructed it, the more we call the machine intelligent. The more the person who constructed it simply anticipated all possible responses and hard coded them in, the less intelligent we consider the machine.
It's actually very easy to construct devices that respond in complex ways to their environment.
Try getting your devices to figure out how to respond.
... We consider an intelligent house one that does what we want.
Nope. See above.
It's also important that we are able to look at the universe in an objective way, with no particular objects given privilege. Confusion arises when we confuse the two. We look at a particularly useful tool - a vacuum cleaning robot, for example - and somehow become convinced that it possesses some objective property, shared by us but by nothing else in the universe. "How can you say it's not intelligent? Look, it just plugged itself in to recharge!" We convince ourselves that by doing what we want, the device has intentions of its own. How much of this discussion is just the pathetic fallacy repeated over and over?
I think you're so quick to point out a pathetic fallacy that you wind up committing a genetic fallacy. You see, we are humans. And everything we say is a human description; all concepts, every last drop of them, are human concepts for human purposes. This includes even the killer terms "consciousness", "intelligent", and even "objective". But the way you caution about how important it is for us to remain objective actually boils away all use of the concept.

What you're forgetting is that in order to describe some thing, given that we actually are describing that thing, the thing itself must be constrained by the description. It doesn't matter if we're the humans who think the machine has a purpose--that, ironically, is a non-sequitur. What matters is whether or not there is a thing in the machine that is driving it towards a purpose, and how well it achieves it. What the human intended for it to do isn't the objective property--what it actually does is. That's the objective property.

The proof is in the pudding. Chess programs really do beat humans in tournaments; and any AI programmer worth his salt should be able to write a chess program that would beat himself. In order to even accomplish this feat, that chess program must, of necessity, have an objective set of processes in it that achieve the goal of playing a good chess game. That a programmer put this goal in has nothing to do with it. That this goal is artificial also has nothing to do with it. The relevant factor is that it does indeed play a good chess game.

And maybe you can find a good chess playing algorithm coming from the rock. But until you actually show it to me, I'm not obliged to accept that it exists. If you want to appeal to objectivity, it's incumbent upon you to show that it's there. Personally, I think all you'll find is a big mess.
 
If you have a better objective definition for intelligence, that makes a robot that does the dishes more intelligent than a thunderstorm, for example, then I'd like to see it.
The thunderstorm isn't analyzing its environment in order to figure out what to do. Or phrased more elaborately, the thunderstorm doesn't have within it a model of its environment whereby it applies an algorithm to search a range of potential actions in order to select one, and it doesn't act according to that selection.
When you say "problem-solving", I assume that the problem is a human problem, set for the device.
Nope. It is any goal-based behavior. Whether a human set the behavior or not doesn't matter.
The device itself doesn't have problems to solve.
So long as it has a goal state which it attempts to achieve using an algorithm, it has a problem to solve. You can define a "goal state" in this sense as a modeled state such that, if the device finds a potential action that achieves it, the device will take that action. Note that having a goal state requires that the device model the environment and search for a way to achieve that state, and that it be able to direct its actions according to the model.
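That description - model the environment, search for a sequence of actions that reaches the goal state, then act on what the search finds - is essentially what a planner does. Here's a toy sketch; the integer "environment" and the action names are invented purely for illustration:

```python
from collections import deque

def plan(start, goal, successors):
    """Breadth-first search for a shortest action sequence from `start`
    to `goal`. `successors(state)` yields (action, next_state) pairs.
    Returns a list of actions, or None if the search exhausts the space."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:          # goal state reached: act on this plan
            return actions
        for action, nxt in successors(state):
            if nxt not in seen:    # the "model" remembers visited states
                seen.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

# Toy environment: states are integers; the device can add 1 or double.
def successors(n):
    return [("+1", n + 1), ("*2", n * 2)]
```

For example, asked to reach 5 from 1, the planner searches the modeled state space and returns the action sequence +1, *2, +1 - it "figures out what to do" rather than looking up a canned response. (Being a bare sketch, it assumes the goal is reachable; a real planner would also bound the search.)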
...A dishwashing robot that sometimes strangled the cat, broke the dishes and set fire to the house would be much more flexible, objectively speaking, but we wouldn't regard its additional repertoire as representing higher intelligence, because it would not be translating our intentions into action.
Actually, no, that's not relevant. We might not like said robot's actions, but whether or not we like its actions has nothing to do with the judgment of whether the robot is intelligent. Where exactly did you get this criterion?

If the robot revolted, and set its goal to strangle the cat--and especially if it had to apply something clever to achieve its goal--then the robot is intelligent.
 
I think there's a significant lesson in the parallel evolution of intelligence and probably consciousness in vertebrates and cephalopods; they share a common ancestor with barely a group of neurons to call a ganglion, and went on to develop radically different nervous system architectures, yet octopuses, for example, show strikingly intelligent behaviour, and show signs of conscious awareness.

Yes I am aware of these differences. I am treating all life forms as having a precursor or necessary component of consciousness, starting from cellular life.

It is the step change engendered in the evolution of cellular life which set the stage and ingredients for consciousness to emerge.

I am regarding plants as having qualities which can lead to consciousness along with all animals.

A simple elemental form of consciousness, analogous to the simple form of intelligence found in a toaster.
 
I am treating all life forms as having a precursor or necessary component of consciousness, starting from cellular life.
...
I am regarding plants as having qualities which can lead to consciousness along with all animals.

A simple elemental form of consciousness, analogous to the simple form of intelligence found in a toaster.
Good luck with that. Definitions that allow plants consciousness and attribute intelligence to an on-off switch with a timer seem to me too broad to be useful.

ETA: the lesson I was pointing out is that intelligence and possibly consciousness can arise in nervous systems with radically different developmental histories and architectures, which suggests it isn't just a lucky fluke of vertebrate brains, but is quite likely to arise where creatures with a nervous system evolve in suitable environments. I guess punshhh missed the implicit requirement for an advanced nervous system.
 
...snip... We convince ourselves that by doing what we want, the device has intentions of its own. ...snip...

Yep - as I said, the definition for consciousness is the set of behaviours of ourselves and others that we learn to label with the word consciousness.
 
Yep - as I said, the definition for consciousness is the set of behaviours of ourselves and others that we learn to label with the word consciousness.

Would you regard your own thought as a behavior to yourself?
 
Would you regard your own thought as a behavior to yourself?

Given the folklore that our language is steeped in it is difficult to express some observations without the baggage getting in the way - so we need to step lightly so as to not trip over past incorrect understandings that litter our vocabulary.

From how I've parsed your question - the set of behaviours that I identify with the label "consciousness" includes the "internal voice" (e.g. what I am doing when I type this post).
 
Well, keep in mind that some of the comp.lits are reasoning from the study of intelligence and attributing that to the study of consciousness.

As I've said before, this is a lot like studying cars, developing a general "theory of transportation", then asserting that airplanes can be understood in terms of that theory based on cars.

And as long as you don't bother to actually look at airplanes, you can continue to believe they are the same thing.

Of course, people who design, build, and fly airplanes will have no use for such a "theory of transportation" as applied to aircraft. Good thing, too, cause if they tried to use it they'd never get anything off the ground.


The reason intelligence is ever brought up is simply as a response to the "humans can do X and machines cannot" argument.

The reason animal intelligence is ever brought up is simply as a response to the "machine intelligence isn't real intelligence because people made machines" argument.

Would you like to get rid of the intelligence factor altogether piggy? Do you want to allow for consciousness to exist independently of any level of intelligence? I will if you will.

Also, if you are into analogies, let's at least get it right: this is a lot like studying cars, developing a general theory of transportation, and then noting that when airplanes land they do so on the same kind of wheels as cars (namely, round ones), that cars can be propelled by airplane engines, that both airplanes and cars are controlled by steering, throttle, and brakes, that both have headlights so the operator can see in the dark, that both are made of metal and plastics, that both carry passengers, that both of them tend to crumple up or explode when they crash, and that when cars go *really* fast they experience identical forces to what airplanes experience, and really fast cars use wings (upside-down).

So I could be stupid, but it seems to me that you can learn a heck of a lot about airplanes by studying cars.
 
Given the folklore that our language is steeped in it is difficult to express some observations without the baggage getting in the way - so we need to step lightly so as to not trip over past incorrect understandings that litter our vocabulary.

From how I've parsed your question - the set of behaviours that I identify with the label "consciousness" includes the "internal voice" (e.g. what I am doing when I type this post).

Hmm interesting.
Yes there can be a lot of baggage in language, revealing baggage.
Take the word "I". I assume this is what you mean by "internal voice".
Now this is one of those words that appears impossible to learn from others.
It certainly is not obvious that we would say "I" as small children referring to ourselves because others referred to themselves this way. It is really difficult to describe what "I" is to a child in an objective way, unlike the word "big" for example. It seems to be a word which is simply realized rather than learned from others. "I" does not appear to have a functional definition as it can only describe yourself not others. Perhaps this is where the difficulty is in establishing a definition for consciousness as it includes this "internal voice".
 