Explain consciousness to the layman.

How about if the rock gets another rock dropped on it, and breaks? Could that be considered "processing"?

Hmmmm, well actually that is a very good question.

It is certainly not "processing" in the sense of "using information to some end," unless you want to call breaking an "end." If that is the case, we might as well give up on this thread right now, since I like to think that my consciousness is a little different from a rock breaking.

However, it might be called "processing" if we include refining or changing the form of information. For example, if the first rock is "information" about something, like "whether someone dropped the rock," then the second rock breaking is also information about said event, correct? I guess you could say the second rock is converting the information into an even more specific form -- if I see the second rock break, I know not only that someone dropped the first rock, but also that the first rock had enough energy to break the second one, which is different information than my brain could access by observing the first rock alone.

However, note that it is still the brain of the observer that is doing the "using information to some end." So I would say no, the second rock in such a contrived situation is not processing information, although the difference is a subtle one.

I would point out, though, that if someone wants to say the breaking rock is processing information, they will have to admit that it does so by being a switch.
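
To make that "switch" point concrete, here is a minimal Python sketch; the threshold value, function names, and energy figures are all made up for illustration. The second rock holds at most one bit of state -- broken or intact -- and it is the observer's inference step that actually uses that bit to some end.

```python
# Minimal sketch: the second rock as a one-bit "switch" (all numbers hypothetical).
BREAK_THRESHOLD_J = 50.0  # assumed energy needed to break the second rock

def second_rock_state(impact_energy_j: float) -> bool:
    """The rock's entire 'memory': True if it broke, False if it stayed intact."""
    return impact_energy_j >= BREAK_THRESHOLD_J

def observer_inference(broken: bool) -> str:
    """The observer's brain is what actually uses the bit 'to some end'."""
    if broken:
        return "The first rock was dropped with at least 50 J -- enough to break the second."
    return "Either nothing was dropped, or the impact carried less than 50 J."

print(observer_inference(second_rock_state(impact_energy_j=72.0)))
print(observer_inference(second_rock_state(impact_energy_j=10.0)))
```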
 
ETA: If a human being seemingly did something very clever and then you discover that all he did was follow step by step instructions to do it.... would you still attribute the cleverness of the action to him?

If a computer seemingly did something very clever and then you discover that it did not follow step by step instructions to do it.... would you still attribute the cleverness of the action to the programmer?
 
This is one of the definitions of intelligence used to decide whether or not a programming paradigm falls under the heading of "artificial intelligence" (the other popular one, commonly found in gaming, is to simply mimic an entity--specifically for play purposes;

How unfortunately true.

The majority of us in the game A.I. profession wish we could spend time on more genuinely "intelligent" systems, but the reality is that such systems are simply not quite feasible yet.

The reason is the "uncanny valley" -- as we make game A.I. more "intelligent," it starts to seem less and less real, because it is still not quite as intelligent as a real entity. When the agents are more fake and just "mimic" behaviors, people tend to accept it as entertainment and play along. When they get smart, people expect them to be *really* smart, and we don't have the resources to do that yet (the primary limiting factor is animation -- neural-network-driven procedural animation isn't up to snuff yet, and approximating a human behavior set with canned animations takes potentially thousands of animations).

often, these are tweaked to not be quite so intelligent simply to promote engaging gameplay, as people generally don't try to play games they perceive they can't possibly accomplish anything in).

Yes, in fact the hardest thing to do as a game A.I. programmer is to dumb down "your baby" so the player has an easier time killing it.

At the same time, the most satisfying moments are when "your baby" annihilates a player through genuine intelligence.
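
For what it's worth, here is a rough sketch of what that tuning can look like in code; the bot, parameter names, and numbers are hypothetical, not taken from any real engine. The decision logic is untouched -- only reaction time and aim error are degraded so the player can win.

```python
import random

# Hypothetical difficulty tuning for a shooter bot: the underlying logic stays
# the same, but reaction time and aim error are loosened so players can win.
DIFFICULTY = {
    "easy":    {"reaction_s": 0.9, "aim_error_deg": 12.0},
    "normal":  {"reaction_s": 0.5, "aim_error_deg": 6.0},
    "my_baby": {"reaction_s": 0.1, "aim_error_deg": 0.5},  # annihilates players
}

def bot_shot(difficulty: str) -> dict:
    """Return the delay before the bot fires and how far the shot deviates from perfect aim."""
    cfg = DIFFICULTY[difficulty]
    aim_offset = random.uniform(-cfg["aim_error_deg"], cfg["aim_error_deg"])
    return {"delay_s": cfg["reaction_s"], "aim_offset_deg": aim_offset}

print(bot_shot("easy"))
print(bot_shot("my_baby"))
```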
 
At the same time, the most satisfying moments are when "your baby" annihilates a player through genuine intelligence.

I agree that computers can be intelligent, but what has this to do with consciousness?

I am defining consciousness in the only way we can (having only one example to study): as an emergent property of living things.

We cannot pin down what this consciousness is and how it arises. So until we can, we cannot demonstrate that consciousness emerges from our modeling of these life forms.

SRI produces intelligent computers, not conscious computers.

This position has been repeatedly demonstrated throughout this thread.
 
How about if the rock gets another rock dropped on it, and breaks? Could that be considered "processing"?

An avalanche, if big enough, would be very intelligent in its behavior, if a little narrow-minded.
 
Since I have more time now, another reply.

First off--I didn't define a brain as a symbol system. In fact, I never even gave you a definition for brain. Therefore, one has to wonder where you got that this was my definition. If it wasn't from me, that leaves only you.

Now, you have no idea what symbol manipulation is, and no inclination to find out. You instead simply want to declare your expertise about it, and accuse those who do understand it of committing grave errors in thinking. When I finally figured out the exact nature of your error and pointed it out, instead of manning up and admitting you were wrong--you simply accused me of talking "shop talk" and trying to confuse you.

So I'll say it once more. A symbol is simply a uniquely identifiable state of a system that performs a series of transformations. These symbols are simply steady states of the system under the various transformations the system performs. The symbols need not represent anything--they just transform in particular ways under those transformations. What makes a particular instance of a symbol that symbol is simply that it has the same effect under every transformation as any other instance of that symbol would have; what makes two symbols different is that some transformation can distinguish them--that is, for some transformation there is a different effect on one than on the other.

Given a set of symbols and transformations such as the above, and an input, you can perform analyses on the input using those symbols. In fact, we can define "input" into this system as some mechanism by which a particular set of symbols is produced that reflects some state of the external world. For example, suppose you had a touchpad interface showing a grid of a particular symbol we call "1", and pushing on it flips that symbol to another one we call "0". That is an input--and it can be analyzed by this system by transforming those two equivalence classes. Now, per your account, I would have to actually say that "1" means "not touched" and "0" means "touched", but really, nothing changes if I use 0 for untouched and 1 for touched. Or X for untouched and Y for touched. Or if I dropped all of the convenience labels and simply said that there is a distinct symbol invoked when you touch the pad at certain locations with a particular pressure, and another when you don't.

But in all of those cases, there is a symbol in the system that corresponds to your touching the pad versus not touching it. And whatever you call those symbols, the transformations of them can analyze that touching. And by "analyze", I simply mean that the symbols--as defined above--can be affected in various complex ways by what's being touched.

So if your touchpad can identify when someone draws the symbol "2" on it, then that means exactly what it says. It means the device can produce distinct behaviors in terms of its internally identifiable states (i.e., symbols) when someone draws the symbol "2" versus when someone doesn't. Given a device that does this, it does not matter what things are "supposed to represent"; that state will be there sometimes and not other times, and the times it is there are when you draw that symbol, and the times it isn't are when you don't.
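
Here is a deliberately toy Python sketch of that touchpad example; the grid size, the template cells for a drawn "2", and the labels are all invented for illustration. The only thing that matters is that the two cell states are distinguishable and that some transformation lands in a distinct state exactly when the "2" pattern is present.

```python
# Toy symbol system: a 3x5 touchpad grid (size and labels invented for illustration).
# Each cell is in one of two distinguishable states; the labels are arbitrary.
UNTOUCHED, TOUCHED = "1", "0"   # swap these for "X", "Y" and nothing below changes

def blank_pad():
    return [[UNTOUCHED] * 3 for _ in range(5)]   # 5 rows x 3 columns

def touch(grid, cells):
    """Input: touching the pad flips the symbol at each touched cell."""
    for r, c in cells:
        grid[r][c] = TOUCHED
    return grid

# A hypothetical template for the glyph "2" on this grid.
TWO = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 0), (2, 1), (2, 2), (3, 0), (4, 0), (4, 1), (4, 2)]

def looks_like_two(grid):
    """A 'transformation' that lands in a distinct state exactly when "2" is drawn."""
    touched_cells = {(r, c) for r in range(5) for c in range(3) if grid[r][c] == TOUCHED}
    return touched_cells == set(TWO)

print(looks_like_two(touch(blank_pad(), TWO)))               # True: the "2" state occurs
print(looks_like_two(touch(blank_pad(), [(0, 0), (4, 2)])))  # False: it doesn't
```

Rename "1" and "0" to "X" and "Y" throughout and both print statements behave identically -- which is the point about the labels.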

It's symbol manipulation. It's not poetry.

Yes this is an interesting development and no doubt will increase the intelligence of computers.

As an emulation of consciously derived subjectivity, I doubt it can at this stage compare with the subjective meaning or understanding invoked by consciousness in a human.
 
(the primary limiting factor is animation -- neural-network-driven procedural animation isn't up to snuff yet, and approximating a human behavior set with canned animations takes potentially thousands of animations).
The day I learned this is the day the gaming industry lost a good piece of nameless, unappreciated gristle for the relentless eternal crunch time mill. I've just never sympathized with the graphics > all mentality. I understand it, sure, it sells games, but it doesn't make them. If I had my druthers, I'd take all them Halo of War Duty whippersnappers and lock 'em in a room with Oregon Trail and Nethack until they at least hit disk 2 and Sokoban, respectively.

whytoobigs said:
often, these are tweaked to not be quite so intelligent simply to promote engaging gameplay, as people generally don't try to play games they perceive they can't possibly accomplish anything in
This too. Early games were mostly limited by how much stuff they could have going on in the game at once. Battles were fought with dozens of units, tops. Army bases had maybe a handful of guys guarding them, no more than a couple per room. Games today have the same scale not because of any hard limit, but because people figure that's just the way games are, and they can't think of any possible reason to go beyond that. So we get the same starcraft and quake clones we've had for decades, the only addition being prettier graphics and chest-high walls everywhere.

[/off-topic nerdrage]

punshhh said:
I am defining consciousness in the only way we can (having only one example to study): as an emergent property of living things.

We cannot pin down what this consciousness is and how it arises. So until we can, we cannot demonstrate that consciousness emerges from our modeling of these life forms.
Are you saying animals aren't conscious? Or rather, are you defining consciousness as something that animals don't have?
 
I agree that computers can be intelligent, but what has this to do with consciousness?

Well it doesn't "necessarily" have anything to do with it.

However you have to look at the history of this debate to get context.

First, we bring up the intelligence of computers just as a plain old defense against the viewpoint "computers can't do X." Because while it is true that computers haven't been able to do many of the things people said they might do, they have also been able to do many of the things people said they would never do.

Second, we bring up intelligence because other people drag intelligence into the consciousness debate. Turns out many of the things people consider an aspect of consciousness are just some sort of intelligence. If a computer can do that stuff too, then we at least know a computer can display some aspects of consciousness.

In fact this is why these debates usually devolve into arguments about fundamentals regarding things like subjective experience or emotion. It is so plainly obvious these days that computers can be intelligent, nobody ( well, almost nobody ) wastes time with red herrings about computers not being able to write poetry or play games or discover scientific theories or prove math theorems -- all things that 50 years ago many philosophers thought were exclusively human abilities and lo and behold computers have already done all of that.

I agree that intelligence probably has nothing to do with subjective experience or emotion. However, now and then people don't want to limit the discussion to those two aspects of consciousness, probably because they feel that doing so would devalue their precious consciousness, and so everyone has to rehash the same old external links proving that computers can indeed be intelligent and put people in their place. I honestly get tired of it, but what can you do?
 
[/off-topic nerdrage]
Am I the object of this?


Are you saying animals aren't conscious? Or rather, are you defining consciousness as something that animals don't have?
No I'm lumping all living things together. I am not defining what life forms are or aren't conscious. Rather I am acknowledging that consciousness is a not uncommon quality of life forms.
 
When did this all become about intelligence all of a sudden....didn't we all already agree that Intelligence != consciousness....did I miss something?
It's not 'all about' intelligence. Nothing sudden about it - intelligence has been part of the discussion for a while now.

But anyway...you might find a few candidates here....there are many clips, they are quite amazing....you should watch them all.
Not sure how Discovery channel clips are relevant here. I've seen plenty of them.

It is as intelligent as the programmer who programmed it because it is a REMOTELY CONTROLLED vehicle.

If you see an RC car acting intelligently in navigating around a room and you do not know it is remotely controlled.....do you call IT intelligent?
Are you using some new definition of 'autonomous' that encompasses remote-control?

A programmed robot/rover is doing nothing but following INSTRUCTIONS that REMOTELY CONTROL it over time and space instead of just space.

Running a program regardless of how clever the program might be is just remote control...... a step higher in remote control complexity than an RC car just as an RC car is a step higher in puppetry than Punch and Judy.
So you keep asserting, but it seems to me that that logic isn't particularly helpful, because you could equally argue that ultimately humans are machines 'remote-controlled by evolution' - i.e. DNA codes for the construction of a complex learning machine that has some behaviours 'built-in', can make use of rules and algorithms provided to it, and can develop its own rules and algorithms. We have developed machines that can learn, use supplied rules and algorithms and create and apply new ones - admittedly in a far more limited range of contexts, but fundamentally the same kinds of capabilities. Why do you feel the biological machine is intelligent, but the electronic one is not? Precisely what is it that you feel is lacking in the electronic machines that makes them not intelligent?
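
As a concrete (if textbook) illustration of a machine developing a rule it was never given, here is a minimal perceptron sketch in Python; the example data and learning rate are arbitrary, and nothing here is specific to any system discussed in this thread. The programmer supplies only a learning procedure and labelled examples; the weights that end up encoding the rule are produced by the machine itself.

```python
# Minimal perceptron sketch: the decision rule is never written down by the
# programmer; it is derived by the machine from labelled examples.
examples = [  # (inputs, desired output) -- here the hidden rule is logical OR
    ((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1),
]

w = [0.0, 0.0]
bias = 0.0

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(20):                      # a few passes over the data
    for x, target in examples:
        error = target - predict(x)
        w = [w[0] + 0.1 * error * x[0], w[1] + 0.1 * error * x[1]]
        bias += 0.1 * error

print(w, bias)                           # a rule nobody typed in explicitly
print([predict(x) for x, _ in examples]) # [0, 1, 1, 1]
```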

ETA: If a human being seemingly did something very clever and then you discover that all he did was follow step by step instructions to do it.... would you still attribute the cleverness of the action to him?

This is exactly what I have been questioning in my recent posts. If you are suggesting that what you learn from external sources doesn't count as 'cleverness' (is this a synonym for intelligence?), just what is it that does count?

If it isn't a result of external influences, could it be something internal? Perhaps something structural, involving the layout or connectivity of your brain? Something that is a result of whatever coded for your development - i.e. your DNA? Do you inherit the coding for your intelligence in the DNA you receive from your parents?

Is it fair to say you are only as intelligent as the genes you inherited from your parents will allow? Just as a machine is only as intelligent as the code it gets from its programmer will allow?

Just askin' ;)
 
Am I the object of this?
No, thus the ending /.

No I'm lumping all living things together. I am not defining what life forms are or aren't conscious. Rather I am acknowledging that consciousness is a not uncommon quality of life forms.
Then we have considerably more than one example to study.
 
By the same logic, people are not intelligent. It's their teachers and parents who are.

Quite. But by considering that logically parallel argument (i.e. yours, above), maybe the people who make the first argument can find a reasonable way to clarify their position. It's a long shot... ;)
 
Incidentally, a common operational criterion for intelligence is simply that an entity evaluates some environment and figures out what to do.

I have been wondering about this issue for some time. How do we provide an objective definition of intelligence? One that applies independently of human (or possibly animal) concerns? I don't believe that we can. An intelligent object responds to changes in its environment? All objects respond to changes in their environment. They do so in various ways, some complex, some simple. There's no quantifying it in any simple and obvious way. A pebble rolling down a slope will follow an enormously complex, unpredictable path - far more complex than a robot vacuum cleaner, for example. Do we consider that it possesses intelligence? If not, why not?

The way that we gauge inanimate intelligence in practice is the extent to which a device does what we want it to. A robot that charged around breaking things would not be considered intelligent - because it would be useless for us. A robot that could make us a cup of tea would be thought of as a smart robot.

There's nothing wrong with looking at things this way. Indeed, it's the only sensible way to evaluate technology designed with a purpose. A device is intelligent according to the complexity of the purposes for which it is used. Ascribing intentionality to the device is a pointless exercise. It is the intentionality of the person who constructs the device that matters.

It's actually very easy to construct devices that respond in complex ways to their environment. The trick is constructing devices that don't respond to their environment. When building a house, the ideal would be a house whose interior remained the same no matter what the external environment was doing. A house that automatically maintained a constant temperature regardless of wind, rain and sun would be an ideal house. Building a house that reacted in hugely complex, unpredictable ways to whatever went on outside would be very easy. It wouldn't be considered a more intelligent house, though, just because it was able to do more things. We consider an intelligent house to be one that does what we want.

It's quite normal and reasonable that we should look at the things we build according to how useful they are. It's also important that we are able to look at the universe in an objective way, with no particular objects given privilege. Confusion arises when we confuse the two. We look at a particularly useful tool - a vacuum cleaning robot, for example - and somehow become convinced that it possesses some objective property, shared by us but by nothing else in the universe. "How can you say it's not intelligent? Look, it just plugged itself in to recharge!" We convince ourselves that by doing what we want, the device has intentions of its own. How much of this discussion is just the pathetic fallacy repeated over and over?
 
Quite. But by considering that logically parallel argument (i.e. yours, above), maybe the people who make the first argument can find a reasonable way to clarify their position. It's a long shot... ;)

Perhaps someone who has experience of creating and maintaining human beings could give an estimate of how long it takes for them to develop intentions of their own.
 
How do we provide an objective definition of intelligence? One that applies independently of human (or possibly animal) concerns? I don't believe that we can.
What's wrong with something along the lines of 'flexible, efficient, and effective problem-solving'? Add 'creative' if you like. By addressing a putative reason for intelligence, it implies the necessary understanding, learning, intellect, etc.

The way that I gauge inanimate intelligence in practice is the extent to which a device does what I want it to.

I consider an intelligent house to be one that does what I want.
FTFY.

We convince ourselves that by doing what we want, the device has intentions of its own.
You may do, I certainly don't. I don't think a device 'doing what we want' has anything to do with intentionality, or, for that matter, intelligence. A pencil eraser is a device that 'does what I want'...
 
What's wrong with something along the lines of 'flexible, efficient, and effective problem-solving'? Add 'creative' if you like. By addressing a putative reason for intelligence, it implies the necessary understanding, learning, intellect, etc.


FTFY.


You may do, I certainly don't. I don't think a device 'doing what we want' has anything to do with intentionality, or, for that matter, intelligence. A pencil eraser is a device that 'does what I want'...

If you have a better objective definition of intelligence -- one that makes a robot that does the dishes more intelligent than a thunderstorm, for example -- then I'd like to see it.

When you say "problem-solving", I assume that the problem is a human problem, set for the device. The device itself doesn't have problems to solve. When you describe the device as being "flexible", that usually means flexible within very limited bounds. A dish-washing robot might be considered intelligent because it was always able to wash the dishes. The kind of flexibility we want is being able to recognise when a dish is already clean, find the right place to put it away, and replace a wet tea-towel. A dishwashing robot that sometimes strangled the cat, broke the dishes and set fire to the house would be much more flexible, objectively speaking, but we wouldn't regard its additional repertoire as representing higher intelligence, because it would not be translating our intentions into action.

"Efficient" - well, that assumes a particular goal for the device, and that other things that it does are extraneous to that goal. Where does the goal come from? Some human, of course. The same applies to "effective". Our homicidal faulty dishwashing robot might be enormously effective at creating mayhem, but since that's not what a human wants, we don't describe it as effective.

We might have a long term concept of Daneel Olivaw or Wall-E - the robot with feelings and intentions of his own - but we don't have that kind of robot. We have devices with a purpose designed into them by human beings.

If you can describe a definition of an intelligent device that can be evaluated without the intentions of some human being having to be taken into account, then I'd be interested to hear it.
 
How much of this discussion is just the pathetic fallacy repeated over and over?

I fail to see how it is a "fallacy" to describe a certain class of behaviors as "intelligence."

Look, westprog, why don't we just settle this once and for all -- YOU can provide the term you want to use.

What do you call it -- the way a squirrel runs up a tree when a predator is approaching, as opposed to the way a rock just sits there when a predator is approaching?

What do you want to call it? We are all waiting.
 