I certainly agree that it is extremely useful to model the mind, and neurons, in that way, and that tremendous strides are being made with that model in what everyone must admit are our early explorations of brain function. But in no way has a broad-based CTM been proven. Not even close.
I'm running out of time to keep up with this thread. So, I'll perhaps unfairly only respond to a bit. However, it is the crux.
Okay, certainly it has not been proven, but it follows from
everything we know about physics. Yes, physics. Physics is, as far as we know, computational. Certainly QM is - our predictions and calculations have reached a level of precision that we have never achieved in any other field.
From physics you get to chemistry. Again, chemistry is computational, so far as we can tell. We conclude this in two different ways. First, we observe that everything we have seen so far, we can compute. Second, reductionism: chemistry reduces to physics, or QM. Put another way, QM in a macro environment is what we describe as chemistry. And, as we know from Turing, any combination of computable elements is itself computable.
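Turing's point about combining computable elements can be sketched in a couple of lines of Python. The particular functions here are arbitrary illustrations, not anything from the argument; the point is only that composing two programs yields another program.

```python
def compose(f, g):
    """If f and g are each computable (modeled here as Python
    functions), their composition f(g(x)) is just another program --
    combining computable pieces never leaves the computable."""
    return lambda x: f(g(x))

double = lambda x: 2 * x   # computable
inc = lambda x: x + 1      # computable

double_then_inc = compose(inc, double)
print(double_then_inc(5))  # inc(double(5)) -> 11
```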
My field, language, is one area in particular where CTM has not yet provided as robust an explanatory framework as we might hope. (Here's a
sample critique from 2006, for example.)
"Explanatory" - I don't want to be one of those people who grasp a word out of context, but I think you probably chose this word well.
QM is not a good explanatory model of chemistry. No one uses QM to do chemistry, except in certain circumstances. There are far better models.
Yet, there is no doubt that chemistry is merely the sum behavior of QM.
Just because we can't right now come up with an easy computational model for language in
no way means that language is not computational.
This is where my assertion of dualism comes in. You are saying the brain is chemicals and networks, both of which we have extraordinary evidence are computable, and then you say the sum of those parts is not computable. That just doesn't follow without a dualist element.
You said that my analogy with the daisies was "inapt because there is no programming controlling the swaying". What you forget is that there is no "programming" in the brain, either.
Piggy, here you go again, making assertions about a field you know little about. The network of neurons and the information stored in the neurons is the programming. It's a very basic tenet of information theory. Daisies are so bad an analogy to a computational brain that I'm astonished you are suggesting it as any kind of rebuttal to what I am saying.
And recent studies into biological systems and how they behave and evolve give us reason to doubt that purely computational systems could evolve in biological specimens.
You'll have to cite those.
Systems that are very rigid, like transistors, are very bad at absorbing shocks. They are fragile. Biological systems tend to evolve with wiggle-room. They are fuzzy. Like the heart: it can take a good bit of knocking around before it goes into fibrillation... And although it is very useful to model neurons computationally, transistors they are not.
Once again you don't understand computable, and you seize on irrelevant aspects. Computable does not mean deterministic, it does not mean rigid, it does not mean an inability to handle fuzziness. And certainly physical robustness has nothing to do with it. Finally, if neurons are computable, they are computable. We are talking about equivalence, not identity.
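One way to see that fuzziness and non-determinism stay inside the computable: give a threshold model a jittery threshold. This is a sketch, not a claim about real neurons; the threshold and noise parameters are made up for illustration.

```python
import random

def noisy_fire(stimulus, threshold=10.0, noise=2.0, rng=random.Random(0)):
    """Fuzzy threshold: the effective firing point jitters around its
    nominal value, so a near-threshold stimulus fires on some trials
    and not others. Non-deterministic in behavior, yet every step is
    an ordinary computation."""
    return stimulus >= threshold + rng.gauss(0, noise)

# A stimulus sitting exactly at threshold fires only some of the time:
trials = [noisy_fire(10.0) for _ in range(1000)]
rate = sum(trials) / len(trials)
print(0.0 < rate < 1.0)  # fuzzy behavior, produced by a small program
```

Swapping the seeded pseudo-random source for a physical noise source changes nothing about the computability of the rule being applied.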
As you probably know, a simple model of a neuron consists of a synapse where the neuron picks up neurotransmitters (NTs) from adjoining neurons. When a sufficient number of NT molecules bombard the neuron, it reaches its threshold and fires, sending a signal down its length and releasing its own NTs into the next synapse. It then re-collects the NT molecules.
We can model this set-up computationally, even writing a simple program with values for n (the number of NT molecules to meet the threshold), an increment and decrement to bring the value of f (fire) from 0 to 1 and back down to 0, etc.
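The simple program described above might look like the following Python sketch. The names follow the text (`n` for the firing threshold, `f` for the firing state going 0 to 1 and back to 0); everything else is an illustrative assumption, not a serious neuron model.

```python
class Neuron:
    """Toy threshold neuron: collects neurotransmitter (NT) molecules
    until the threshold n is reached, then fires (f goes 0 -> 1 and
    back to 0), releasing NTs into the next synapse and resetting."""

    def __init__(self, n):
        self.n = n          # threshold: NT molecules needed to fire
        self.nt_count = 0   # NT molecules collected so far
        self.f = 0          # firing state: 0 = resting, 1 = firing

    def receive(self, nt_molecules):
        """Absorb NTs from the synapse; fire if the threshold is met.
        Returns the number of NT molecules released downstream."""
        self.nt_count += nt_molecules
        if self.nt_count >= self.n:
            self.f = 1           # fire: signal travels down the neuron
            released = self.n    # release our own NTs into the synapse
            self.nt_count = 0    # re-collect: reset the count
            self.f = 0           # ...and f comes back down to 0
            return released
        return 0

neuron = Neuron(n=10)
print(neuron.receive(6))  # below threshold: nothing released -> 0
print(neuron.receive(5))  # threshold crossed: fires -> 10
```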
And that's extremely useful.
But it's an idealization.
The biological reality is messier, more fluid, more open, with all sorts of other variables around it, and there's reason to believe that it has to be that way in a real-world evolved biological system.
Still computable.
I wonder - I read a book that I now forget, by a prominent language theorist, using arguments much like this. What a terrible book, because he understood nothing about computability. I wonder if you have been influenced either by him or the field in general. Because you have said nothing that is not computable. "Messy" "fluid" "open" are all ill-defined words in the space of information theory. More importantly, nothing you are describing is uncomputable. The book talked about things like Excel, and how it was exact, created the same result every time, and imagine if your taxes were computed differently each time. Sure, but only because the algorithms chosen were for computing taxes. What a misunderstanding of computing - the same misunderstanding you are showing.
So we cannot be certain, and we have good reason to doubt, that neurons actually are purely computational components, even though it is useful to model them that way at this stage of our investigation of the brain.
Only if you don't understand information theory.
And as we scale up to less granular levels of organization, this same kind of fuzziness persists. In its real-time operations, the brain deals with all sorts of competing associative impulses, and very often makes mistakes by accepting the incorrect one (although even here computational models have proven useful, by describing the accepted association in terms of the number of "hits" -- in other words, the more numerous the associations, the more likely it is that the brain will choose that option, even if those associations have nothing to do with the task at hand).
Associations and mistakes are computable. Trivially so.
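The "hits" rule described above, picking whichever association accumulates the most hits, even when the winner is irrelevant to the task, is a one-line counting operation. A minimal sketch, with a made-up associative memory:

```python
from collections import Counter

def choose_association(cues, memory):
    """Pick the response with the most associative 'hits' across all
    cues. The selection can be 'wrong' for the task at hand, but the
    rule itself is a trivial computation."""
    hits = Counter()
    for cue in cues:
        hits.update(memory.get(cue, []))
    association, _ = hits.most_common(1)[0]
    return association

# Hypothetical memory: 'bank' + 'water' drag in the river sense,
# even if the task was about money.
memory = {
    "bank":  ["money", "river", "river"],
    "water": ["river", "drink"],
}
print(choose_association(["bank", "water"], memory))  # -> river
```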
This is why I have serious doubts that a cog brain would work like a neural brain. Cogs are quite rigid, and not very handy with the kind of threshold-based workings that we see in the brain. Maybe there's a way to reproduce this with cogs, though, I don't know. But I'd have to see it to accept that it's possible. Maybe, but maybe not.
There we go with rigid Excel! Computing is not rigid, except by choice.
Can all the workings of the brain be performed by a TM? Right now, there's no reason to accept the assertion, and some very compelling evidence to make us doubt that it will turn out to be the case.
No, you are presenting personal incredulity, based on a lack of understanding of the field, as 'reason'.
Everything is physics. Physics is computable. Every combination of computable elements is computable. Without some form of dualism, brains have to be computable. Information science 101 and physics 101.
So, after all that (and ignoring the pencil-brain thing, which has turned out to be a red herring, and now I see why), let's look again at the speed question.
I'm completely uninterested in the speed question, especially when discussed with such a basic misunderstanding of physics and computation.