westprog said:
Incidentally, a common operational criterion for intelligence is simply that an entity evaluates some environment and figures out what to do.
I was wondering about this issue for some time. How do we provide an objective definition of intelligence? One that applies independently of human (or possibly animal) concerns?
Careful. Anything a human identifies is necessarily identified by a human, by the very fact that a human identified it. But that doesn't make it subjective. In fact, so long as we have identified something real, there is, by the very definition of "real", an extension to whatever intension we identify.
I don't believe that we can.
I just did.
An intelligent object responds to changes in its environment? All objects respond to changes in their environment.
Actually, no. You missed it--apparently you stopped parsing that sentence at the "and" and thought you had read it all. Evaluating an environment is a necessary criterion, but it is not the definitive one. Intelligence, by this criterion, lies in figuring out what to do as a result of that evaluation.
The classic AI 101 example is tic-tac-toe. Since this is a very tractable game, it's easy to program a player that simply gives a specific, pre-programmed response to each opposing move. Such responses are not considered intelligent because, whereas this program does indeed evaluate an environment, it does not "figure out what to do". An intelligent approach would have the tic-tac-toe program analyze the puzzle space, compare the possible moves against one another using some sort of algorithm, and move according to the result of that analysis.
The intelligent approach is necessary for programming a decent chess player, since in that case the problem is pragmatically intractable beyond something akin to an opening book.
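To make the contrast concrete, here's a minimal sketch in Python. The board encoding, the CANNED_MOVES table, and the player names are all my own inventions for illustration, not anybody's canonical code. The first player answers from a hand-built table; the second runs a plain minimax search over the puzzle space.

```python
# Sketch only: two tic-tac-toe "players".  A board is a tuple of nine
# cells, each 'X', 'O', or ' ' (indices 0-8, left to right, top to bottom).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

# Player 1: the "specific programmed response" approach.  It evaluates the
# environment (looks the position up) but figures nothing out; every answer
# was decided in advance by the programmer.  Only two entries are shown.
CANNED_MOVES = {
    ('X', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '): 4,  # corner opening -> take centre
    (' ', ' ', ' ', ' ', 'X', ' ', ' ', ' ', ' '): 0,  # centre opening -> take a corner
    # ...one entry for every reachable position...
}

def canned_player(board):
    return CANNED_MOVES[board]

# Player 2: the "figure out what to do" approach.  Minimax searches the
# puzzle space and compares the candidate moves against one another.
def minimax(board, player):
    other = 'O' if player == 'X' else 'X'
    w = winner(board)
    if w == player:
        return 1, None
    if w == other:
        return -1, None
    if ' ' not in board:
        return 0, None          # draw
    best_score, best_move = -2, None
    for i, cell in enumerate(board):
        if cell == ' ':
            child = board[:i] + (player,) + board[i + 1:]
            score = -minimax(child, other)[0]   # opponent's gain is our loss
            if score > best_score:
                best_score, best_move = score, i
    return best_score, best_move

def search_player(board, player='O'):
    return minimax(board, player)[1]

after_corner = ('X', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ')
print(canned_player(after_corner))        # 4 -- whatever the table says
print(search_player(after_corner, 'O'))   # 4 -- the same move, derived by search
```

Both players can produce the very same move, but only the second arrives at it by comparing the alternatives--and that comparison is the "figuring out".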
Thus, your particular response is moot, so let's fast-forward:
Because it does not analyze its environment and figure out what to do.
The way that we gauge inanimate intelligence in practice is the extent to which a device does what we want it to. A robot that charged around breaking things would not be considered intelligent - because it would be useless for us. A robot that could make us a cup of tea would be thought of as a smart robot.
There's nothing wrong with looking at things this way.
Actually, no. There is something wrong with this. In fact, the above is wrong in every way I can imagine. The degree to which something is useful to us is no indication of its intelligence. If I were to ask a geeky friend to do my diffy q homework, and he did it well, then he would be demonstrating remarkable intelligence. And that would be useful. But if I took my car to an automated car wash, and it did an extraordinary and impressive job cleaning my car and making it shiny, I wouldn't as a result judge the car wash to be intelligent. So this would not be a demonstration of intelligence. But it is still useful. Furthermore, if I did my diffy q homework myself, then turned it in to my professor, he could start looking at it and simply rip it up in disgust. I certainly wouldn't take this as an indication of the professor's lack of intelligence; in fact, quite the opposite--he was disgusted with my homework precisely because he was intelligent. But that would actually be harmful to me--less than useful.
So every sanity check I sling at the rule "utility equals intelligence" fails. The equation of utility with intelligence simply does not hold--usefulness to me isn't the criterion.
Ascribing intentionality to the device is a pointless exercise. It is the intentionality of the person who constructs the device that matters.
No, it's not pointless. But oddly enough, the intentionality of the person who constructs the device does indeed matter. The more the machine figures out what to do, and the less its constructor did that figuring for it, the more we call the machine intelligent. The more the constructor simply anticipated all possible responses and hard-coded them in, the less intelligent we consider the machine.
It's actually very easy to construct devices that respond in complex ways to their environment.
Try getting your devices to figure out how to respond.
... We consider an intelligent house one that does what we want.
Nope. See above.
It's also important that we are able to look at the universe in an objective way, with no particular objects given privilege. Confusion arises when we confuse the two. We look at a particularly useful tool - a vacuum cleaning robot, for example - and somehow become convinced that it possesses some objective property, shared by us but by nothing else in the universe. "How can you say it's not intelligent? Look, it just plugged itself in to recharge!" We convince ourselves that by doing what we want, the device has intentions of its own. How much of this discussion is just the pathetic fallacy repeated over and over?
I think you're so quick to point out a pathetic fallacy that you wind up committing a genetic fallacy. You see, we are humans. Everything we say is a human description; all concepts, every last drop of them, are human concepts for human purposes. That includes even the killer terms "consciousness", "intelligent", and even "objective". But the way you caution about how important it is for us to remain objective actually boils away any use the concept of objectivity has left.
What you're forgetting is that in order to describe some thing, given that we actually are describing that thing, the thing itself must be constrained by the description. It doesn't matter if we're the humans who think the machine has a purpose--that, ironically, is a non sequitur. What matters is whether or not there is something in the machine that is driving it toward a purpose, and how well it achieves it. What the human intended for it to do isn't the objective property--what it actually does is.
The proof is in the pudding. Chess programs really do beat humans in tournaments, and any AI programmer worth his salt should be able to write a chess program that can beat its own author. To accomplish even that feat, the program must, of necessity, contain an objective set of processes that achieve the goal of playing a good game of chess. That a programmer put the goal in has nothing to do with it. That the goal is artificial also has nothing to do with it. The relevant factor is that it does indeed play a good game of chess.
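To make "objective set of processes" concrete, here's a minimal sketch of the kind of machinery involved, assuming the python-chess library and a crude material-count evaluation that I picked purely for illustration. A real engine is enormously stronger, but it differs in degree, not in kind: there really is an evaluation and a search that the program performs, regardless of who intended what.

```python
# Hedged sketch of the machinery inside a chess program: an evaluation
# function plus a search that maximizes it.  Assumes the python-chess
# library; piece values and search depth are arbitrary choices of mine.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board):
    """Crude evaluation: material balance from White's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def negamax(board, depth):
    """Search the game tree, scoring from the side-to-move's perspective."""
    if depth == 0 or board.is_game_over():
        sign = 1 if board.turn == chess.WHITE else -1
        return sign * material(board), None
    best_score, best_move = float("-inf"), None
    for move in list(board.legal_moves):
        board.push(move)
        score = -negamax(board, depth - 1)[0]   # opponent's gain is our loss
        board.pop()
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

board = chess.Board()
score, move = negamax(board, depth=2)
print(board.san(move), score)   # a legal opening move and its evaluation
```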
And maybe you can find a good chess-playing algorithm coming from the rock. But until you actually show it to me, I'm not obliged to accept that it exists. If you want to appeal to objectivity, it's incumbent upon you to show that it's there. Personally, I think all you'll find is a big mess.