Actually, I really don't get the time thing
If I were modeling a wave, I wouldn't say that the wave had failed to exhibit wave behavior because it was not running in real time. Time is part of the model.
If I were modeling plant growth, I would not say that it had failed to exhibit the correct behavior because it was not running in real time.
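To make that concrete, here is a purely illustrative sketch (my own, not anything from the thread): a toy 1D wave simulation in which "time" is just another variable the model steps through. The wave propagates like a wave whether the loop finishes in milliseconds or takes all day; the grid size, wave speed, and step sizes below are arbitrary assumptions.

```python
# Toy finite-difference simulation of a wave on a string.
# Model time advances by dt each loop iteration, completely
# independent of how much wall-clock time the loop actually takes.
import numpy as np

n = 200                      # grid points along the string
c = 1.0                      # wave speed in model units
dx = 1.0 / n                 # spatial step
dt = 0.5 * dx / c            # model time step (satisfies the stability condition)

x = np.linspace(0.0, 1.0, n)
u_prev = np.exp(-((x - 0.5) ** 2) / 0.001)   # initial Gaussian pulse
u = u_prev.copy()                            # start at rest (zero initial velocity)

for step in range(1000):     # 1000 steps of *model* time
    u_next = np.zeros_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + (c * dt / dx) ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next    # advance model time by dt

print("model time elapsed:", 1000 * dt)      # same answer on a fast or slow machine
```

Nothing about the simulated wave changes if the computer running it is throttled to a crawl; the only "time" the wave knows about is the dt inside the model.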
I think I understand the requirement for being in 'time'. Consider the 'this is not a pipe' painting (http://en.wikipedia.org/wiki/The_Treachery_of_Images): the model of the wave is only an abstracted ideal, a symbolic representation of something else that exists in our material world. Why, in the case of consciousness, should we expect the symbol to actually be the thing it is representing?
This is an interesting question in its own right. I tend to feel that, if we continue to progress in our technological prowess, we will eventually be able to build conscious machines. But I don't have a good answer to that question other than to note, as others have, that consciousness isn't a material thing but a process - i.e. a non-material thing. Non-material objects are not bound by the limitations of material objects.
Hmmm.....strikingly similar to arguments for ESP and other such stuff, isn't it?
On the other hand, why should a non-material object be considered 'real' when it clearly doesn't exist in our time, any more than it would be considered 'real' if it just as clearly didn't exist in our space? Time is as much a part of the familiar four-dimensional place we live in as the three spatial dimensions.
So while I don't agree with them, I think their argument has merit. We simply don't know enough yet for me to feel confident of either side.
I have no good answer to the question of why simulated consciousness should actually BE consciousness; I only know that I think it is true. If consciousness were simulated sufficiently closely, it would actually be consciousness.
So why is the model of the brain different in this respect?
Would the same go for a mouse brain model?
An ant model?
A tapeworm?
I think yes for the mouse, no for the tapeworm, and probably not for the ant, though GEB does have an interesting conversation with an ant colony in it. But those are my instinctual feelings about it. I don't know why I feel that way exactly, and I don't know why the model of the brain should be different in this respect. It just seems that way to me, and I don't feel there is a good reason to reject that perception.
Can you think of a reason why the model of the brain might be different in that respect?
For every other bodily function, in order for the overt behavior to take place -- blinking, shivering, regulating the heartbeat, regulating temperature, running, focusing light on the retina -- the firing of neurons has to be coupled with some sort of executive mechanism of another type.
Piggy, I don't know if your meaning has eluded anyone else, but I had been a bit puzzled by this argument. I had been thinking you were talking about an executive mechanism as that which coordinates the work of the other parts. But it dawned on me with this post that you meant executive as in executing - actually doing the physical work in a material sense. Your posts now make much more sense to me.
Consciousness is certainly the weirdest bodily function we know of, and it appears to be different in some profound ways from all the others.
Our brains, our consciousness, appear to be one of the main advantages we have over all other species on Earth. They are clearly fundamental to our success as a species.
The study I cited upthread, which made use of deep brain implants, finds that a "signature" of consciousness is the simultaneous activation of four different types of waves spanning the space of the brain.
Yes, thanks for that. It was quite interesting. There was also an interesting study published late last year identifying activity in the glial cells.
There's no doubt that machines can, in theory, be built which will also do consciousness, just as our bodies do consciousness.
But it will not involve "running the logic" alone.
For the behavior to happen, there must be the logic (which, again, is an abstraction for what's actually happening physically) combined with an executive mechanism of some sort to produce actual behavior.
I appreciate your articulating the issue so well. I find this issue an interesting one. Thanks.