Ah, I didn't know that, and it didn't seem like the narrator of the BBC show understood that either. Next time I watch it I'll see if I missed it.
Heh, I just made that up. That is my own interpretation, based on the fact that I could argue why a lookup table is not equivalent to intelligence without invoking absurd scenarios, and doing so would be far clearer to everyone.
Hence, there must have been an ulterior motive, I tell myself. I am wary of any philosopher interested in consciousness and cognition who doesn't immerse themselves in programming; it seems disingenuous. And Searle, like Penrose, is that type. (Penrose isn't a philosopher, but he isn't a programmer either, so any notion he has about what an algorithm can or cannot do is amateur, and that is why I don't respect him at all when it comes to this issue.)
Note that I feel sort of the same way about all these types, regardless of which side they support: Dennett, Blackmore, etc. I can't stand listening to people quote Daniel Dennett or Susan Blackmore talking about how little we really know when it comes to consciousness, and saying "see, they are even supporters of the computational model and they admit that we don't know much."
So, Searle was arguing that the Chinese Room, like an expert system, did not understand the subject, but was merely playing back the understanding of the experts who created the table. Funny how so many people misunderstand its point, like the point of Schrödinger's Cat.
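To be concrete about what I mean by "playing back a table", here is a toy sketch of that reading of the setup -- the entries and names are made up for illustration, not anything from Searle's paper:

```python
# Toy sketch of the lookup-table reading of the Chinese Room.
# The "expertise" lives entirely in canned question -> answer pairs
# authored by someone else; the entries below are hypothetical.
CANNED_RESPONSES = {
    "你好吗?": "我很好, 谢谢.",
    "今天天气怎么样?": "今天天气很好.",
}

def chinese_room_lookup(question: str) -> str:
    # The operator just matches symbols and copies out the stored answer;
    # whatever understanding exists belongs to whoever built the table.
    return CANNED_RESPONSES.get(question, "对不起, 我不明白.")

print(chinese_room_lookup("你好吗?"))
```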
Yeah but here is the thing -- was Searle clear that the instructions the guy in the room follows are merely some implementation of a lookup table? I don't recall that being explicitly part of the description, and if it is, he hasn't done a good job of squashing all the bad versions of the Chinese Room that are crawling around.
Because all I ever hear from armchair philosophers is that the Chinese Room is supposed to show that *any* mechanical instructions the guy follows somehow invalidate any possible understanding of Chinese that the room might have.
In other words, I see the most common interpretation as suggesting that the very idea of machine consciousness is absurd.
But you and I and anyone who thinks about it know this isn't the case -- if the instructions on the cards represent something more like CPU instructions and register values, meaning the guy is actually just executing an algorithm that could be anything, it is far less clear-cut that the idea of the room understanding Chinese is absurd. And if the instructions on the cards represent something like a neural network simulation, then it isn't clear at all that the room doesn't understand Chinese. In that case it seems like the room *does* understand Chinese.
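To make the contrast concrete, here is a toy sketch of the rulebook encoding a computation instead of canned answers -- one step of a neural-net simulation the guy could carry out with pencil and paper. The weights and sizes are invented purely for illustration:

```python
import math

# Hypothetical 2x3 weight matrix and biases; the numbers are made up.
WEIGHTS = [[0.1, -0.3, 0.7],
           [0.5,  0.2, -0.4]]
BIASES = [0.05, -0.1]

def forward_step(activations: list[float]) -> list[float]:
    # One layer of a neural-network simulation: multiply, add, squash.
    # The operator following these steps never "sees" Chinese at all,
    # yet the system as a whole may be doing far more than table lookup.
    out = []
    for row, bias in zip(WEIGHTS, BIASES):
        total = sum(w * a for w, a in zip(row, activations)) + bias
        out.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid
    return out

print(forward_step([0.2, 0.9, 0.1]))
```

The point of the two sketches is just that "the guy follows written instructions" covers both of them, and only the first one obviously fails to understand anything.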
This is just one of those cases -- like every other case in this discussion, actually -- where the confusion stems primarily from a failure to be specific about what we are talking about.