roger:
Having read post 267, I can now see where the gaps are in our communication on this -- or some of them, at least.
If I had known you were making such grand assumptions about the computational theory of the mind (CTM), I would not have said that I agree with you.
I certainly agree that it is extremely useful to model the mind, and neurons, in that way, and that tremendous strides are being made with that model in what everyone must admit are our early explorations of brain function. But in no way has a broad-based CTM been proven. Not even close.
I would wager -- in fact, I would wager quite a bit, at very high odds -- that although it is a "good enough" model for current investigations, it will turn out to have significantly limited explanatory power down the road.
My field, language, is one area in particular where CTM has not yet provided as robust an explanatory framework as we might hope. (Here's a sample critique from 2006, for example.)
You might be interested, btw, in Raymond Tallis's "Why the Mind is Not a Computer". It's rather thin, both in scope and in hard information about the brain, but it's an interesting examination (a la Dennett and Pinker -- though he would certainly disagree with Pinker on the topic of CTM) of how our own brains may have fallen victim to the associative nature of language, carrying spurious assumptions over into our descriptions of the brain in computational terms.
You said I'd get the Nobel Prize if I could prove that something in the brain is not computational, and certainly I'd get some kind of prize if I could do that. But not because I would be refuting anything that has supposedly been established. Rather, it would be because I settled an open question.
You said that my analogy with the daisies was "inapt because there is no programming controlling the swaying". What you forget is that there is no "programming" in the brain, either. Like the daisies, it is a purely physical, specifically biological, system interacting with the material world. But it is one which we know generates consciousness, whereas daisies do not.
And recent studies of biological systems -- how they behave and how they evolve -- give us reason to doubt that purely computational systems could evolve in living organisms.
You compare neurons to transistors, but that comparison doesn't quite fit.
Systems that are very rigid, like transistors, are very bad at absorbing shocks. They are fragile. Biological systems tend to evolve with wiggle room. They are fuzzy. Take the heart: it can take a good bit of knocking around before it goes into fibrillation.
In fact, it was recently discovered that a highly regular heartbeat is bad news, precisely because such rigidly regular rhythms are prone to fibrillation. Some random variation in the heartbeat pattern is a good thing -- it means your heart can absorb shocks and return to its natural operational state. If the rhythm gets too rigid, it can too easily be knocked into the alternate contractive pattern that will kill you.
And although it is very useful to model neurons computationally, transistors they are not.
As you probably know, a simple model of a neuron goes like this: at a synapse, the neuron picks up neurotransmitters (NTs) released by adjoining neurons. When a sufficient number of NT molecules bombard it, the neuron reaches its threshold and fires, sending a signal down its length and releasing its own NTs into the next synapse. It then re-collects the NT molecules.
We can model this set-up computationally, even writing a simple program with a value for n (the number of NT molecules needed to meet the threshold), an increment and decrement to bring the value of f (fire) from 0 up to 1 and back down to 0, and so on.
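Just to make that concrete, here is roughly what I have in mind -- a toy sketch of my own, where the threshold of 5, the variable names, and the steady drip of molecules are all arbitrary choices for illustration, not anything taken from real neuroscience:

```python
# Toy, idealized "threshold neuron". THRESHOLD_N and the drip-feed
# below are arbitrary illustrative values, not physiological ones.

THRESHOLD_N = 5  # n: NT molecules needed to reach the firing threshold

def step(nt_count, collected, f):
    """Advance the toy neuron by one time step.

    nt_count  -- NT molecules currently sitting in the synapse
    collected -- molecules the neuron has re-collected so far
    f         -- firing value, 0 (quiet) or 1 (firing)
    """
    if f == 1:
        # Just fired: bring f back down to 0 and re-collect the NTs.
        return 0, collected + nt_count, 0
    if nt_count >= THRESHOLD_N:
        # Threshold met: bring f up from 0 to 1 (the neuron fires).
        return nt_count, collected, 1
    # Below threshold: nothing happens this step.
    return nt_count, collected, 0

# Drip one NT molecule into the synapse per step and watch it fire.
nt, collected, f = 0, 0, 0
for t in range(12):
    nt += 1
    nt, collected, f = step(nt, collected, f)
    print(f"t={t:2d}  nt={nt}  f={f}  re-collected={collected}")
```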
And that's extremely useful.
But it's an idealization.
The biological reality is messier, more fluid, more open, with all sorts of other variables around it, and there's reason to believe that it has to be that way in a real-world evolved biological system.
So we cannot be certain, and we have good reason to doubt, that neurons actually are purely computational components, even though it is useful to model them that way at this stage of our investigation of the brain.
And as we scale up to less granular levels of organization, this same kind of fuzziness persists. In its real-time operations, the brain deals with all sorts of competing associative impulses, and very often makes mistakes by accepting the incorrect one. (Even here computational models have proven useful, describing the accepted association in terms of the number of "hits" -- in other words, the more numerous the associations, the more likely the brain is to choose that option, even if those associations have nothing to do with the task at hand.)
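If it helps, the "hits" idea can be put crudely into code -- again a toy of my own, with made-up candidates and associations, not any established model:

```python
from collections import Counter

# Toy "hit-counting" choice: each candidate response carries a bag of
# associations it has accumulated, and the most-associated one wins,
# relevant or not. Candidates and associations are invented purely
# for illustration.
candidates = {
    "correct answer":   ["the task at hand"],
    "familiar mistake": ["a song", "an advert", "a joke", "the task at hand"],
}

hits = Counter({name: len(assocs) for name, assocs in candidates.items()})
chosen, count = hits.most_common(1)[0]
print(f"chosen: {chosen!r} with {count} hits")
```

The heavily associated option wins even though only one of its associations has anything to do with the task -- which is exactly the kind of mistake I mean.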
This is why I have serious doubts that a cog brain would work like a neural brain. Cogs are quite rigid, and not very handy with the kind of threshold-based workings that we see in the brain. Maybe there's a way to reproduce this with cogs; I don't know. But I'd have to see it to accept that it's possible. Maybe, but maybe not.
Can all the workings of the brain be performed by a Turing machine (TM)? Right now, there's no reason to accept that assertion, and some very compelling evidence to make us doubt that it will turn out to be the case.
So, after all that (and ignoring the pencil-brain thing, which has turned out to be a red herring, and now I see why), let's look again at the speed question.
If we had a robot brain which, by whatever method, worked like a human brain -- because we have no other model to use -- what would happen to its consciousness if we slowed down the rate of data-transfer between its basic components (the equivalent of neurons)?
First, we cannot assume that this brain is some sort of TM. That would be jumping the gun.
Instead, we must assume it works like a human brain, which may be some sort of TM equivalent, but maybe (I'd say most probably) not.
Well, obviously if we slow down the rate to zero, there's no consciousness. (That's why I kept bringing that up -- not because I thought you were arguing otherwise, but as one end of a continuum.)
Somewhere between natural brain-speed and 0, then, there's a point where consciousness is not sustainable. Is it at 0?
No, it can't be. It must be higher than 0. Why? Because from cases like Marvin's, and from more recent research demonstrating that we act on stimuli before we are consciously aware of them, we see that consciousness is a specialized downstream process involving the coordination of highly processed information. Conscious awareness requires the coordination of coherent aggregate data, and neural impulses are ephemeral, so there must be some rate above 0 below which that coherence can no longer be maintained.
Right now, no one knows where that floor lies, but because we know that neurons fire very quickly, it's safe to assume that a rate of 1 impulse per second would be too slow.
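One crude way to picture why the floor must sit above zero (a toy of my own making: impulses that fade on their own, plus an arbitrary "coherence" threshold -- none of the numbers mean anything physiological):

```python
import math

def sustained_coherence(rate_hz, fade_per_sec=5.0, threshold=1.0,
                        seconds=10.0, dt=0.001):
    """Toy model: each impulse adds 1 to a 'coherence' level that decays
    exponentially (impulses are ephemeral). Return the fraction of time
    the level stays above the threshold. All values are arbitrary."""
    level, above, steps = 0.0, 0, int(seconds / dt)
    interval = 1.0 / rate_hz
    next_impulse = 0.0
    for i in range(steps):
        t = i * dt
        if t >= next_impulse:
            level += 1.0
            next_impulse += interval
        level *= math.exp(-fade_per_sec * dt)  # the impulse fades away
        if level > threshold:
            above += 1
    return above / steps

for rate in (1, 5, 20, 100):  # impulses per second
    print(f"{rate:4d}/s -> coherent {sustained_coherence(rate):.0%} of the time")
```

Run it and the slow rates spend essentially none of their time above the threshold, while the fast ones stay there almost continuously -- which is all this toy is meant to show: if the impulses fade faster than they arrive, nothing coherent can accumulate.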
Can we build a robot brain that would accept that rate?
I doubt it, because you'd have to accept a kind of flickering consciousness, like a movie run frame by frame.
But that wouldn't work, either, for reasons that another poster mentioned above.
Consciousness is not a point-in-time phenomenon. It's smeared out over time. There's a kind of Heisenberg effect to it, with associations being continually wrangled and forced into service. It's not like a camera lens, opening to the world and letting the light in.
So I can't see an impulse-per-second rate being high enough, even though no one can say, at this moment, what the floor would be.