Originally quoted by Throg
I remember reading about an artificial intelligence program that supposedly replicated ant behaviour with just eight rules. I don't know any of the details of the study, but I cannot imagine that it replicated the sensory abilities of the ant. Nevertheless, the claim was that when you put several of these AI "ants" together you got emergent complexity in their group behaviour far beyond what you would expect from just eight rules. Here's a link to an "interesting facts" website about ants, including the fact that they have 250,000 brain cells, which is more than I would have guessed. Interesting Facts About Ants
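I can't say what the actual eight rules were, but to give a sense of the idea, here is a rough sketch of the kind of per-ant rules these programs usually use. Everything in it (the rules, the numbers, the grid) is my own invention for illustration, not anything from the study:

```python
import random

# A guessed-at rule set for simple "AI ants": nothing here comes from
# the actual study; the rules and numbers are invented for illustration.

SIZE = 30                      # side length of a wrap-around grid
NEST = (SIZE // 2, SIZE // 2)  # all ants start at, and return to, the nest
N_ANTS = 20
STEPS = 500

food = {(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(40)}
pheromone = {}  # cell -> trail strength


class Ant:
    def __init__(self):
        self.x, self.y = NEST
        self.carrying = False

    def neighbours(self):
        return [((self.x + dx) % SIZE, (self.y + dy) % SIZE)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

    def step(self):
        here = (self.x, self.y)
        if self.carrying:
            # Rule 1: when carrying food, move one step toward the nest.
            self.x += (NEST[0] > self.x) - (NEST[0] < self.x)
            self.y += (NEST[1] > self.y) - (NEST[1] < self.y)
            # Rule 2: lay pheromone on the cell just left behind.
            pheromone[here] = pheromone.get(here, 0) + 10
            # Rule 3: drop the food on reaching the nest.
            if (self.x, self.y) == NEST:
                self.carrying = False
        elif here in food:
            # Rule 4: standing on food, pick it up.
            food.discard(here)
            self.carrying = True
        else:
            # Rule 5: otherwise wander, preferring the strongest nearby trail.
            options = self.neighbours()
            random.shuffle(options)
            if random.random() < 0.3:
                self.x, self.y = options[0]
            else:
                self.x, self.y = max(options, key=lambda c: pheromone.get(c, 0))


ants = [Ant() for _ in range(N_ANTS)]
start_food = len(food)
for _ in range(STEPS):
    for ant in ants:
        ant.step()
    # Rule 6: pheromone evaporates, so unused trails fade out.
    for cell in list(pheromone):
        pheromone[cell] *= 0.95
        if pheromone[cell] < 0.1:
            del pheromone[cell]

print(f"Food collected after {STEPS} steps: {start_food - len(food)} of {start_food}")
```

Nothing above says "build a trail between the nest and the food", yet if you watch the pheromone map while it runs, trails form, get reinforced, and evaporate on their own. That is roughly what people mean when they say the group behaviour goes beyond the individual rules.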
Thanks for the link. Far more brain cells than I would have expected, too. The same link also mentioned that ants may have the same processing power as a Mac II, which sounds quite impressive as well.
I'll trade one of my "anecdotal evidence" stories for yours. A few years ago I had a programming class that required us to take about half a dozen rules that governed how various animals (in the categories of omnivores, carnivores and herbivores) interacted with each other. Those rules were enough to create a mind-bogglingly boring video game, but not much else. I'll grant you that AI can be fascinating, but my poor computer program with its ~6 rules was anything but.
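To give a flavour of it, here is roughly the sort of rule set I mean. The species mix, the numbers and the rules themselves are just invented for illustration, not the actual assignment:

```python
import random

# An invented half-dozen rules of the sort such a class assignment uses;
# none of the behaviours or numbers below come from the actual program.

DIETS = {
    "herbivore": {"plant"},
    "carnivore": {"herbivore"},
    "omnivore": {"plant", "herbivore"},
}


class Animal:
    def __init__(self, kind):
        self.kind = kind
        self.energy = 10


def simulate(turns=50):
    plants = 100
    animals = ([Animal("herbivore") for _ in range(20)]
               + [Animal("carnivore") for _ in range(5)]
               + [Animal("omnivore") for _ in range(5)])
    for _ in range(turns):
        random.shuffle(animals)
        newborns = []
        for a in animals:
            a.energy -= 1  # Rule 1: every animal burns one energy per turn.
            if "plant" in DIETS[a.kind] and plants > 0:
                plants -= 1       # Rule 2: grazers eat a plant if any remain.
                a.energy += 2
            elif "herbivore" in DIETS[a.kind]:
                prey = [b for b in animals if b.kind == "herbivore" and b.energy > 0]
                if prey:
                    victim = random.choice(prey)
                    a.energy += victim.energy  # Rule 3: hunters eat a random herbivore.
                    victim.energy = 0          # Rule 4: eaten animals die.
            if a.energy >= 15:
                a.energy //= 2                 # Rule 5: well-fed animals reproduce.
                newborns.append(Animal(a.kind))
        # Rule 6: anything that runs out of energy is removed.
        animals = [a for a in animals + newborns if a.energy > 0]
        plants += 10  # plants regrow a little each turn
    return plants, animals


if __name__ == "__main__":
    plants, animals = simulate()
    for kind in DIETS:
        print(kind, sum(a.kind == kind for a in animals))
    print("plants remaining:", plants)
```

You could dress that up with a map and some sprites, but the underlying gameplay is exactly as thrilling as it looks.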
I know I can be a hard ass about wanting references, but I have my reasons for it.
Originally quoted by Paul C. Anagnostopoulos
I think Sheldrake devises shoddy protocols and then does not describe them well enough to find all the problems. I pointed out a problem with his telephone telepathy experiments and he agreed that it was a problem, although he said it didn't affect the results.
Oh, so you are the one we have to thank! I recalled reading about the procedural problems with the telephone telepathy experiments, but not who at JREF had contacted Sheldrake to confirm them.
I love Sheldrake's creativity and ideas, so I find these examples of his sloppiness disappointing.
Edited for syntactical sloppiness.