In particular, this is very wrong:
Jaron Lanier said:
Enough! I hope the reader can see that my game can be played ad infinitum. I can always make up a new kind of sensor from the supply store that will give me data from some part of the physical universe that is related to itself in the same way that your neurons are related to each other by a given AI proponent.
Suppose there are two systems, each consisting of two balls: system 1 consists of balls 1a and 1b, and system 2 consists of balls 2a and 2b.
Lanier claims that no matter what balls 1a and 1b do, we can find a sensor that maps their behavior onto the behavior of balls 2a and 2b.
Here is a trivial example of where that breaks down: if balls 1a and 1b influence each other -- say, by interacting whenever they come within some proximity threshold -- but balls 2a and 2b don't influence each other at all, it is literally impossible to find a mathematical isomorphism between the two systems' behavior. Any such mapping would have to preserve the fact that one ball's trajectory depends on the other's, and system 2 has no such dependence to map onto.
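This is easy to demonstrate concretely. Here is a minimal sketch, with everything assumed purely for illustration: 1-D balls at integer timesteps, and a velocity swap on close approach standing in for "influence". In system 1, ball 1b's trajectory depends on ball 1a's starting position; in system 2 it cannot, and no state-by-state "sensor" can create or erase that dependence.

```python
# Assumed toy model: two balls on a line, discrete timesteps; "influence"
# is modeled as a velocity swap whenever the balls come close.

def simulate(xa, va, xb, vb, interact, steps=100, threshold=1.5):
    """Advance two balls on a line; optionally swap velocities on close approach."""
    traj_b = []
    for _ in range(steps):
        if interact and abs(xa - xb) < threshold:
            va, vb = vb, va            # the only coupling between the balls
        xa, xb = xa + va, xb + vb
        traj_b.append(xb)
    return traj_b

# System 1 (interacting): moving ball a's start changes ball b's trajectory.
b1 = simulate(0.0, 1.0, 10.0, -1.0, interact=True)
b1_shifted = simulate(2.0, 1.0, 10.0, -1.0, interact=True)
print(b1 != b1_shifted)   # True: 1b "feels" 1a

# System 2 (non-interacting): the same shift leaves ball b untouched.
b2 = simulate(0.0, 1.0, 10.0, -1.0, interact=False)
b2_shifted = simulate(2.0, 1.0, 10.0, -1.0, interact=False)
print(b2 != b2_shifted)   # False: 2b never "feels" 2a
```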
Or another example: if balls 1a and 1b remain stationary while balls 2a and 2b move in directions that aren't parallel to each other -- again, no possible isomorphism.
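The obstruction here is simple counting, and a toy model (assumed: straight-line motion at integer timesteps in the plane) makes it explicit: the stationary pair sits in a single joint state forever, while the moving pair never revisits a state, so no one-to-one correspondence between the two behaviors can exist.

```python
# An isomorphism must at minimum be a one-to-one map of states that respects
# time, so it cannot send the single repeated state of a stationary pair
# onto the ever-new states of a moving pair.

def states(pos_a, vel_a, pos_b, vel_b, steps=50):
    """Return the set of distinct joint states (both balls' positions) visited."""
    seen = set()
    for t in range(steps):
        seen.add((pos_a[0] + t * vel_a[0], pos_a[1] + t * vel_a[1],
                  pos_b[0] + t * vel_b[0], pos_b[1] + t * vel_b[1]))
    return seen

stationary = states((0, 0), (0, 0), (5, 5), (0, 0))
moving = states((0, 0), (1, 0), (5, 5), (0, 1))   # non-parallel velocities

print(len(stationary))  # 1: one state, revisited forever
print(len(moving))      # 50: a new joint state every step -- no bijection exists
```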
No such sensors exist.
What Lanier is probably trying to say, since he can't be so stupid as to actually think the above examples are wrong, is that his "sensors" might need to be incredibly complex computers themselves. Something like that could map the behavior of two stationary balls onto two colliding balls, or onto whatever else we can imagine.
But that makes his argument irrelevant. Concluding that we could map the behavior of a rainstorm onto the behavior of your brain, using a set of computerized sensors, each of which is orders of magnitude more complex than your entire brain, and which would furthermore need to communicate with each other to perform the mapping (or, if you prefer, one gigantic sensor), smacks of stupidity worthy of some world record.
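To see how empty that move is, here is a sketch of the degenerate endpoint (every name below is illustrative, not anything Lanier actually proposed): a "sensor" that maps a stationary system onto any target behavior whatsoever, simply by storing the target and replaying it. The observed system contributes nothing; the sensor does all the computing.

```python
# Once a "sensor" may be an arbitrary computer, it can "map" a stationary
# system onto any behavior by replaying a stored script. The complexity
# lives entirely in the sensor, not in the system it observes.

target_behavior = [(t, 2 * t) for t in range(10)]   # any behavior we want to "find"

def omniscient_sensor(observed_state, t):
    """Ignores its input entirely and replays the pre-stored script."""
    _ = observed_state                # the stationary balls contribute nothing
    return target_behavior[t]

stationary_state = (0.0, 0.0)         # the balls never move
readout = [omniscient_sensor(stationary_state, t) for t in range(10)]
print(readout == target_behavior)     # True -- but the "sensor" IS the behavior
```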