What is the worst idea in philosophy?

Originally quoted by Throg
I remember reading about an artificial intelligence program that supposedly replicated ant behaviour with just eight rules. I don't know any of the details of the study, but I cannot imagine that it replicated the sensory abilities of the ant. Nevertheless, the claim was that when you put several of these AI "ants" together, you got emergent complexity in their group behaviour far beyond what you would expect from just eight rules. Here's a link to an "interesting facts" website about ants, including the fact that they have 250,000 brain cells, which is more than I would have guessed. Interesting Facts About Ants
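To give a flavour of what I mean, here is a toy sketch in Python of how a handful of local rules per "ant" might produce trail-like group behaviour. The rules below are my own guess at the sort of thing involved, not the actual program from the study:

```python
# Toy ant colony: four local rules per ant; any trail that appears is emergent.
# Entirely hypothetical: a sketch of the idea, not the study mentioned above.
import random

SIZE = 30
NEST, FOOD = (0, 0), (SIZE - 1, SIZE - 1)
pheromone = [[0.0] * SIZE for _ in range(SIZE)]

def neighbours(x, y):
    cells = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if (dx, dy) != (0, 0)]
    return [(a, b) for a, b in cells if 0 <= a < SIZE and 0 <= b < SIZE]

class Ant:
    def __init__(self):
        self.x, self.y = NEST
        self.has_food = False

    def step(self):
        if self.has_food:                 # rule 1: carry food home, scenting the path
            pheromone[self.x][self.y] += 1.0
            self.x, self.y = max(0, self.x - 1), max(0, self.y - 1)
            if (self.x, self.y) == NEST:
                self.has_food = False
        else:                             # rule 2: follow scent if any, else wander
            opts = neighbours(self.x, self.y)
            best = max(opts, key=lambda c: pheromone[c[0]][c[1]])
            self.x, self.y = best if pheromone[best[0]][best[1]] > 0 else random.choice(opts)
            if (self.x, self.y) == FOOD:  # rule 3: pick up food when you stumble on it
                self.has_food = True

ants = [Ant() for _ in range(50)]
for _ in range(2000):
    for a in ants:
        a.step()
    for row in pheromone:                 # rule 4: pheromone slowly evaporates
        for i in range(SIZE):
            row[i] *= 0.995

print(sum(pheromone[i][i] for i in range(SIZE)), "units of scent on the nest-food diagonal")
```

No rule mentions a "trail", yet the scent tends to pile up along a path between nest and food, and later ants follow it.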


Thanks for the link. Far more brain cells than I would have expected, too. The same link also mentioned that ants may have the same processing power as a Mac II, which sounds quite impressive.

I'll trade one of my "anecdotal evidence" stories for yours. A few years ago I had a programming class that required us to take about half a dozen rules that governed how various animals (in the categories of omnivores, carnivores and herbivores) interacted with each other. These were enough rules to create a mind-bogglingly boring video game, but not much else. I'll grant you that AI can be fascinating, but my poor computer program with its ~6 rules was anything but.
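For the curious, the assignment boiled down to something roughly like this. The specific rules below are my reconstruction for illustration, not the actual coursework:

```python
# Half a dozen fixed interaction rules: enough for a simulation, not for excitement.
# (The rule table is hypothetical; I no longer have the original assignment.)
import random

RULES = {
    ("carnivore", "herbivore"): "eats",
    ("carnivore", "omnivore"):  "fights",
    ("omnivore",  "herbivore"): "eats",
    ("herbivore", "carnivore"): "flees",
    ("herbivore", "herbivore"): "grazes beside",
    ("omnivore",  "omnivore"):  "ignores",
}

animals = [random.choice(["carnivore", "omnivore", "herbivore"]) for _ in range(10)]
for _ in range(5):
    a, b = random.sample(animals, 2)        # pick two animals at random
    print(a, RULES.get((a, b), "ignores"), b)
```

Each pairing always triggers the same canned interaction, which is exactly why nothing interesting ever emerged from it.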

I know I can be a hard ass about wanting references, but I have my reasons for it. :)

Originally quoted by Paul C. Anagnostopoulos
I think Sheldrake devises shoddy protocols and then does not describe them well enough to find all the problems. I pointed out a problem with his telephone telepathy experiments and he agreed that it was a problem, although he said it didn't affect the results.

Oh, so you are the one we have to thank! I recalled reading about the procedural problems with the telephone telepathy experiments, but not who at JREF contacted Sheldrake to verify them.

I love Sheldrake's creativity and ideas, so I find these examples of his sloppiness disappointing.

Edited for syntactical sloppiness.
 
Shera said:

I'll trade one of my "anecdotal evidence" stories for yours. A few years ago I had a programming class that required us to take about half a dozen rules that governed how various animals (in the categories of omnivores, carnivores and herbivores) interacted with each other. These were enough rules to create a mind-bogglingly boring video game, but not much else. I'll grant you that AI can be fascinating, but my poor computer program with its ~6 rules was anything but.

That's not quite analogous. In order for it to be so, you would have to make the rules contingent upon the behaviour of the other AIs, in which case the emergent behaviour of the group would be more complex than the 6 rules for each individual might suggest. Of course, it occurs to me that the term "rule" is somewhat elastic, so it's entirely possible that one rule could equate to a single line of code or to ten thousand lines of code. Not very helpful, I know. Also, one would have to ask whether your rules were formulated in a conventional sequential programming language or as a neural net model. The latter produces far more complex and unpredictable behaviours from simple rules than the former. Just do a search for "neural nets" and you'll find reams of interesting stuff.
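Here is a quick Python illustration of what I mean by rules contingent on the other agents. It's a one-dimensional toy version of Schelling's segregation model, not the ant program, but it shows the same effect:

```python
# One rule per agent, and the rule depends on the OTHER agents:
# "move if fewer than a third of your neighbours are your own type."
# Clustering emerges even though no rule ever mentions clusters.
import random

line = [random.choice("AB.") for _ in range(60)]   # two agent types plus empty cells

def unhappy(i):
    nbrs = [line[j] for j in range(max(0, i - 2), min(len(line), i + 3))
            if j != i and line[j] != "."]
    return bool(nbrs) and sum(n == line[i] for n in nbrs) / len(nbrs) < 1 / 3

for _ in range(500):
    movers = [i for i in range(len(line)) if line[i] != "." and unhappy(i)]
    empties = [i for i in range(len(line)) if line[i] == "."]
    if not movers or not empties:
        break
    i, j = random.choice(movers), random.choice(empties)
    line[j], line[i] = line[i], "."        # relocate one unhappy agent

print("".join(line))   # long same-letter runs appear: emergent group structure
```

No individual rule asks for segregation, yet the group produces it. That is the sense in which group behaviour can outrun the rules of the individuals.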

I know I can be a hard ass about wanting references, but I have my reasons for it.

I can't give any references for the ant program as I never saw any. It was just something I heard and thought worth considering as an interesting possibility. Do some reading on neural nets and you'll find it's not as implausible as it might seem.
 
Gestahl said:
Singer has quite a few unorthodox and, on the surface, very odd beliefs.

I think his statement there was more of an attempt to say we should treat animals as we treat mentally deficient humans, not the other way around. However, it is a dangerous idea if used in the other direction.
Let's see some of his own words (and their context) to judge whether this is true or not.

For example, here is a quote from Practical Ethics:
“killing a disabled infant is not morally equivalent to killing a person. Very often, it is not wrong at all.”
or
No infant - disabled or not - has as strong a claim to life as beings capable of seeing themselves as distinct entities, existing over time.
He advocates the killing (replacing) of infants with spina bifida, haemophilia and Down syndrome, so he seems to be suggesting that we treat disabled infants in the same way we treat animals.
(See excerpt from Practical Ethics for context: http://www.utilitarian.net/singer/by/1993----.htm)

Also, from the FAQ on his website:
So killing a newborn baby is never equivalent to killing a person, that is, a being who wants to go on living.
(His FAQ is here for context: http://www.princeton.edu/~psinger/faq.html)
 
The worst idea in theology I have ever seen has to be Logical Deism, invented by Franko on our very own board.
 
Throg said:
That's not quite analogous. In order for it to be so, you would have to make the rules contingent upon the behaviour of the other AIs, in which case the emergent behaviour of the group would be more complex than the 6 rules for each individual might suggest. ...

It was an object-oriented program where each object's (animal's) behavior depended upon the other animals' behavior. Of course I'm sure it didn't compare to AI programs written by mathematicians with PhDs, but it was certainly not an old-fashioned procedural-language program either.

.....
I can't give any references for the ant program as I never saw any. It was just something I heard and thought worth considering as an interesting possibility. Do some reading on neural nets and you'll find it's not as implausible as it might seem.

A quick search on Google didn't seem to provide any relevant "hits". If by chance you happen to see a site on this, please do post it. Thanks.
 
Robin said:
For example, here is a quote from Practical Ethics:
...
“killing a disabled infant is not morally equivalent to killing a person. Very often, it is not wrong at all.”
But I agree. I have never seen a strong argument for the so-called sanctity of life. I do not think Homo sapiens sapiens has any special property that puts it above animals, except self-awareness. Thus infants lacking that self-awareness may be subject to lower moral status.

... now I'm reading the excerpt and the FAQ. I'm surprised, actually, as I hadn't known anyone agreed with me on this point, which began as an opinion on abortion. However, Singer doesn't say any infants *should* be replaced, just that it is not morally reprehensible for parents to take that action. The only things endangered by this are collections of animal cells that do not even understand that they exist. Certainly, they could come to understand that, but why should they, unless someone personally thinks it worthwhile to bring them to that state of maturity? For me to think a baby's life was not worth living would take a more debilitating disorder than Down syndrome, spina bifida or hemophilia, but I would not object to parents who wanted to abort a pregnancy for any reason, so long as no one else wished to adopt the baby.
 
ReFLeX said:
But I agree. I have never seen a strong argument for the so-called sanctity of life. I do not think Homo sapiens sapiens has any special property that puts it above animals, except self-awareness. Thus infants lacking that self-awareness may be subject to lower moral status.
What brought you to the conclusion that other animals lack self-awareness, in comparison to your observation that humans have it?
 
Shera said:
It was an object-oriented program where each object's (animal's) behavior depended upon the other animals' behavior. Of course I'm sure it didn't compare to AI programs written by mathematicians with PhDs, but it was certainly not an old-fashioned procedural-language program either.

Whether it was object-oriented or not (and how many computer languages haven't been, in the last twenty years?) is irrelevant; it has no bearing on the actual capabilities of the processing model. I did not intend to impugn your programming skills. One can write neural net simulations in conventional programming languages (I have done so myself) and then use the neural net to solve problems in an emergent manner, quite unlike the sequential discrete-logic solutions one would explicitly program in a conventional language.
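To make that concrete, here is a minimal sketch in Python with numpy (my own toy example, not any particular published simulation). It is a conventional, sequentially executed program, yet the XOR "logic" it ends up computing is never written down as if/then rules anywhere; it emerges in the weights during training:

```python
# A tiny neural net written in an ordinary procedural language.
# The XOR behaviour emerges from training; no branch in the code encodes it.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR: not linearly separable

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                      # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)           # backpropagate the error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)                     # gradient-descent weight updates
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(3))   # typically close to [[0], [1], [1], [0]]
```

Feed the same few lines different examples and they learn a different function. The behaviour lives in the weights, not in the control flow, which is why it feels so unlike conventional discrete-logic programming.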

A quick search on Google didn't seem to provide any relevant "hits". If by chance you happen to see a site on this, please do post it. Thanks.

You have got to be kidding. I got 286,000 hits from Google with the term "neural nets", and most of those on the first page seemed to have some relevance. Please forgive me if I treat any future claims you make in this light.
 
Batman Jr. said:
What brought you to the conclusion that other animals lack self-awareness, in comparison to your observation that humans have it?

Hmm. I can't remember what class this was, either social psych or ethics. But I learned that only humans aged 17-24 months and older, and certain primate species, are capable of identifying themselves visually. I will have to track that information down. Are you familiar with the "rouge test"?
I couldn't find a link really explaining what it was, but
http://www.longwood.edu/staff/bjornsenca/Feldman11.htm
What they do is dot a bit of red makeup on the kid's nose and see whether the kid tries to wipe it off.

Here is a guy who tries to argue that animal self-awareness is shown through supremacy, submission and other behaviours. His first example, that an animal grooming itself is "a gesture of love towards one's self" but that "it must be aware of itself being groomed", I'm not buying. I expect it comes down to what you accept as self-awareness, and I think the recognition that you have a stable existence in the most salient sensory realm is an appropriate standard.
 
Gestahl said:
Worst idea ever: the idea of teleology, or that things have a purpose intrinsic to their nature.

Teleology is absolutely crucial in archeology and many other historical and semi-historical disciplines.

The problem with teleology is not with the concept itself, but with the idiots who try to apply it beyond its area of validity. (I also can't blame Ford for all the bad drivers on the road).

I think a lot of the other concepts discussed on this thread are bad per se, in that they have no area of validity whatsoever.
 
ReFLeX said:
Hmm. I can't remember what class this was, either social psych or ethics. But I learned that only humans aged 17-24 months and older, and certain primate species, are capable of identifying themselves visually. I will have to track that information down. Are you familiar with the "rouge test"?

It's more often called the 'mirror test', or sometimes the 'Gallup test' after Gordon Gallup, who developed it (1970) as a test of animal behavior.

Gallup himself has written a semi-definitive chapter on the research to date.

As a minor note, so far, self-awareness appears to be confined to humans, other great apes and cetaceans (specifically dolphins).
 
Posted Originally by Throg
Please forgive me if I treat any future claims you make in this light.
Quite all right.

Background info for any watchers who might be curious:

Throg and I have a strong difference of opinion on the value of offering anecdotal evidence, or a theory based on something heard years ago (with its risk of imperfect recall), as though it thoroughly addresses the issue and carries the same weight as a specific source. My discretionary time has just been slashed, which means I have less time to write posts. If anyone is really interested, I suggest you read the "Why do you visit this site?" thread in the Critical Thinking forum.

However, here's a cut and paste from one of my posts in that thread.

Posted Originally by Shera
Feel free not to provide citations. But then the information offered is frequently not much better than an anecdotal story, due to imperfect recall or other reasons. I personally find it very annoying to have this type of "evidence" regarded as highly as a cited and specific source, and even more annoying to have it regarded as thoroughly addressing the issue. This is not a personal attack, just an observation based on experience with many people.
 
Shera said:
Quite all right.

Background info for any watchers who might be curious:

Throg and I have a strong difference of opinion on the value of offering anecdotal evidence, or a theory based on something heard years ago (with its risk of imperfect recall), as though it thoroughly addresses the issue and carries the same weight as a specific source. My discretionary time has just been slashed, which means I have less time to write posts. If anyone is really interested, I suggest you read the "Why do you visit this site?" thread in the Critical Thinking forum.

However, here's a cut and paste from one of my posts in that thread.

Not sure of the relevance. I have admitted that I do not have citations to hand and said that I do not have the time to do research for you. You claimed to have searched on Google and found nothing relevant on "neural nets"; it took me about four seconds to find 262,000 links. I have been honest; you have not.
 
In regard to this thread, didn't you specify an ant AI program?

For fuller context, if anyone is interested, they can read the other thread. I'm moving on.
 
I forgot to ask....why are "worst" ideas the worst?

How is one idea better or worse than another?

Idea:

Something, such as a thought or conception, that potentially or actually exists in the mind as a product of mental activity.

What qualities does an idea have to possess to be called "good"?

I can think of one - ideas are representations of something, in symbolic form. Two things an idea has to possess are reliability and validity.

Reliability means that if an idea is supposed to represent something, the idea of X, it must represent the SAME X; its definition must not change. The idea of "cat" cannot be changed to represent "dog" or "horse" or "eggplant". It must represent the same thing consistently.

Also, an idea must have validity, meaning it must accurately represent what it's created to represent. It must have precision. For the idea of "unemployment rate" to be a good idea, it must actually measure unemployment accurately. It can't measure something else, or measure it badly.

For example, if you said "the unemployment rate is X" but consistently measured it by leaving out statistics from 5 states, the idea would be reliable: you would be using the idea in the same way every time.

But....it would not be valid!
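To put numbers on the distinction (the figures below are invented purely for illustration):

```python
# Reliability vs. validity with made-up data: the biased measure gives the
# same answer every time (reliable) but systematically the wrong one (invalid).
state_unemployed = {f"state_{i}": 100 + 10 * i for i in range(50)}  # hypothetical counts
state_workforce  = {f"state_{i}": 2000 for i in range(50)}

def true_rate():
    return sum(state_unemployed.values()) / sum(state_workforce.values())

def biased_rate():
    kept = [f"state_{i}" for i in range(45)]   # always drops the same 5 states
    return (sum(state_unemployed[s] for s in kept)
            / sum(state_workforce[s] for s in kept))

print(f"true rate:   {true_rate():.2%}")    # 17.25%
print(f"biased rate: {biased_rate():.2%}")  # 16.00%: consistent, but wrong
```

The biased figure comes out identical on every run, so the measure is perfectly reliable; it just is not measuring what it claims to measure.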

Sounder ideas are more convincing than 'bad' ones, more appropriate, and based on some education or insight.
 
