I really do need to keep my promise and get off this topic, but since I posed a question….
I asked how one would build a machine that doesn't just respond to light, but actually produces color, the way our brains do.
One answer is to create a perfectly detailed virtual digital simulation of a normal human brain, which produces color in response to light.
But there's a huge problem with that answer.
It relies on the assumption that there exists a "Pinocchio point" at which a simulation becomes so detailed that the machine takes on the qualities and behaviors of what's being simulated.
But Pinocchio points do not exist -- no matter how detailed the simulation, the machine running the simulation never takes on the qualities and behaviors of the system being simulated.
In other words, there is no sufficiently detailed digital simulation of oxidation which will cause the computer to rust. And there is no sufficiently detailed simulation of a tornado which will cause the computer to have a windspeed, or of an aquarium which will cause the computer to be wet.
And this fact does not change simply because we are trying to simulate a bodily function, whether that's digestion or heartbeat or consciousness. Even a perfect simulation of digestion will never cause us to point at the machine and say that it is digesting.
And remember, our goal is to make a conscious machine -- the machine itself must be doing whatever is necessary to be conscious, just as our bodies are.
"But wait," some have said, "you're committing a framing error. You can't look at the machine -- you have to use the simulation itself as a frame of reference."
Two problems with that.
First, there is no such frame of reference.
Where do simulations exist? In other words, if I run a simulation of a tornado, where's the tornado? If I want to talk about what's going on in the tornado, what frame of reference do I use?
It can't be the frame of reference of the machine, because the computer running the sim has no windspeed and can't knock down houses.
So the simulated tornado doesn't exist in the machine. Where, then, does it exist?
This is where systems theory comes in.
In a system which includes only the machine, there is no tornado.
The simulated tornado exists only in the mind of an observer that is properly built to interpret the actions of the machine as a tornado.
The simulated tornado depends on a system including the machine and an observing mind.
So, for example, suppose aliens wiped out the human race and took over the world, and walked up to our machine running the virtual digital sim….
Now, these aliens have evolved on another planet, so we cannot expect that they share our same sensory apparatus or conscious qualia. Perhaps they have qualia that let them consciously experience magnetic fields (as birds likely do). They probably respond to light, but the odds are extremely small that they respond to the same tiny band of the spectrum that we do. They may respond to many of the same chemicals we can smell, but they won't have evolved to experience odors as we do -- who knows what qualia they produce in response.
And obviously, our numbers and letters are meaningless to them.
The computer running the sim is not, as we've established, actually creating a tornado. It is only producing non-tornado-like output designed to trigger ideas about tornados in human brains.
The patterns of pixels on the screen, the patterns of waves emanating from the speakers, the information in the printouts, all of these are tailored to fool a human brain specifically. The aliens, even if they've seen real tornados, won't be able to recognize a tornado in the simulation, no matter how detailed it is. It won't even sound like a tornado to them, because we haven't bothered to tailor the sound to their range of hearing (if they have such a sense).
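To make that concrete, here's a toy sketch (the wind-speed field and the color mapping are invented for illustration) of what the machine is actually doing when it "shows" us a tornado: mapping numbers onto red-green-blue triples, a format that only means anything to a visual system with human-style color receptors.

```python
# Toy sketch: a "tornado renderer" is just a mapping from numbers to
# RGB triples -- a format meaningful only to a visual system with
# human-style color receptors. Nothing in it has a windspeed.

def wind_speed_at(x, y):
    """Invented stand-in for the sim's state: wind speed in m/s."""
    return abs(100 - ((x - 8) ** 2 + (y - 8) ** 2))

def to_rgb(speed, max_speed=100):
    """Map a wind speed to an RGB triple chosen to look 'stormy'
    to human trichromatic vision. The numbers carry no wind."""
    level = min(speed / max_speed, 1.0)
    return (int(255 * level), int(255 * (1 - level)), 128)

# The machine's actual output: a 16x16 grid of integer triples.
frame = [[to_rgb(wind_speed_at(x, y)) for x in range(16)] for y in range(16)]

# An alien with different photoreceptors gets no "tornado" out of
# these numbers at all.
print(frame[8][8])  # (255, 0, 128) -- the "eye" of the storm, to us
```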
The only way to make them experience a tornado is to throw a real tornado at them.
The simulation only exists in a system containing the machine running the sim and a brain built to understand it. There is no "tornado" in some frame of reference independent of both the raw physical actions of the machine and the imagination of the observer. The "world of the simulation" does not exist.
But wait, isn't sufficient information about the tornado preserved in the actions of the machine, so that we can say there's a tornado in those actions?
No. Again, go back to the analogy of the rock, water, and shore.
The properties of the rock don't transfer to the water, and the properties of water don't transfer to the shore.
The machine doesn't preserve or reproduce any actual qualities of the tornado. It translates patterns into another medium. In doing so, it preserves information about the tornado, but not the tornado.
And that information is open-ended -- which brings us to the second problem.
What I mean by this is that you can take that same information and use it to describe a different system. For example, you can invert your axes. You could interpret the temporal data as if it were spatial data. There are all kinds of things you can do to get a different system from the same data.
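A quick sketch of that open-endedness (the numbers are made up): the very same sequence of values can be read as a time series, re-anchored as a spatial grid, or run through inverted axes -- and nothing in the data itself settles which system it describes.

```python
# The same raw data, anchored against three different backgrounds.
data = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0, 5.0, 3.0, 5.0, 8.0]

# Interpretation 1: a time series -- pressure readings, one per second.
as_time_series = list(enumerate(data))          # (second, pressure)

# Interpretation 2: a spatial grid -- elevations on a 3x4 terrain map.
as_spatial_grid = [data[row * 4:(row + 1) * 4] for row in range(3)]

# Interpretation 3: invert the axis -- read the values back-to-front.
axes_inverted = data[::-1]

print(as_time_series[5])   # (5, 9.0): "pressure at t=5"
print(as_spatial_grid[1])  # [5.0, 9.0, 2.0, 6.0]: "row 1 of the terrain"
print(axes_inverted[0])    # 8.0: same data, a different system described
```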
And you can't avoid this by making a complete simulation. First, there's Heisenberg to deal with -- which makes a completely perfect simulation impossible -- but even without that problem, an interpreter has to decide what background to anchor the information upon. You could, of course, program that in, but that only leads to infinite regress, because then that system can be construed against different backgrounds.
And even if you could avoid those problems, you've got a third problem, because without outside information, there's no way for an interpreter to know that the simulation is complete -- it could just as well be an incomplete simulation of something else!
From the point of view of the machine, there's no way to determine what it's supposed to be running a simulation of. There are always infinite options.
So now let's return to our proposed solution -- can we make a machine "see blue" by running a perfect simulation of a human body looking at a perfect simulation of a clear daytime sky, right down to the molecules and photons?
Nope.
Why?
Because to make a machine "see blue" in the real world it must be doing what our brains are doing in the real world when we see blue. When we look at the machine itself, the raw physical apparatus, we must see actual physical processes that mimic what our brains are doing when we see blue -- or else some other physical processes which have the same result. And we can't shift our frame of reference to any "world of the simulation" because such worlds are imaginary.
But wait….
We know that you can get a workstation to simulate a workstation, so isn't the brain a special case? Isn't it the case that a brain is also a general purpose computer, and therefore a computer simulating a brain is just like a computer simulating a computer?
No.
That's because it's not true that the brain is, in all its functions, a general purpose computer. (That link explains it well enough, so I refer you to that explanation.)
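For contrast, here's roughly what makes the workstation-simulating-a-workstation case special. A minimal sketch (the instruction set is invented): when one computer simulates another, the host really is computing, because computing is the one property that survives translation into another medium.

```python
# Minimal sketch of a computer simulating a computer: a toy virtual
# machine with an invented two-register instruction set.

def run(program, a=0, b=0):
    """Execute a list of (opcode, operand) pairs on the host machine."""
    for op, arg in program:
        if op == "LOAD_A":
            a = arg
        elif op == "LOAD_B":
            b = arg
        elif op == "ADD":       # a <- a + b
            a = a + b
    return a

# The simulated machine adds 2 and 3. Crucially, the host machine
# *actually computes* 5 -- computation survives being simulated,
# because simulating just is computing.
print(run([("LOAD_A", 2), ("LOAD_B", 3), ("ADD", None)]))  # 5
```

Unlike windspeed or wetness, "computes 2 + 3" is a property the host machine literally has while running the sim. The question is whether consciousness falls on the computation side of that line.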
And specifically, when we look at what the brain is doing when conscious experience is going on, we see very un-computer-like behavior. Conscious experience relies on behavior that is tightly synchronized and coordinated in time. Also, the signature waves must be generated, become coherent, and strengthen sufficiently.
This process is time-dependent; it can't be run at any arbitrary speed. This alone throws the "Turing machine" analogy right out the window.
Also, the result does not in any way resemble the output of a Turing machine or a general purpose computer. The result is this hologram-like thing I've been calling the phenogram -- the product of a physical process, not a symbolic calculation.
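To give a flavor of what "tightly synchronized oscillations becoming coherent" means, here's a toy sketch using the textbook Kuramoto model -- purely an illustration of synchronization in general, not a claim about the brain's actual mechanism. Coupled oscillators pull one another into phase as time elapses, and their coherence can be measured at every moment.

```python
# Illustration only: the Kuramoto model, a textbook picture of coupled
# oscillators becoming coherent over time. Not the brain's mechanism.
import cmath
import math
import random

random.seed(1)
N, K, dt = 50, 2.0, 0.01                                  # count, coupling, step
freqs = [random.gauss(1.0, 0.1) for _ in range(N)]        # natural frequencies
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def mean_field(phases):
    """Order parameter: r in [0,1] measures coherence; psi is the mean phase."""
    z = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(z), cmath.phase(z)

for step in range(3001):
    r, psi = mean_field(phases)
    if step % 1000 == 0:
        print(f"t={step * dt:5.2f}  coherence={r:.2f}")    # r climbs toward 1
    # d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i)
    phases = [p + dt * (w + K * r * math.sin(psi - p))
              for p, w in zip(phases, freqs)]
```

Note that the coherence builds up over elapsed time -- which is exactly the kind of time-dependence I'm pointing at in the brain.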
OK, so we can't shortcut the process by running a perfect simulation of a brain that sees blue -- first of all because perfect sims are impossible, but also because even a perfect sim wouldn't create a Pinocchio point for the machine.
Well, can we simply rig the computer to respond differently to what we see as "blue light" than it does to other types of light which we see as not blue?
No.
Why not?
Well, the answer should be clear from my post defining consciousness. Differential behavior is simply the old bounce-back system. We know we can get that without involving consciousness at all. So that's no solution.
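Just to show how cheap differential behavior is, here's a sketch (the wavelength band is labeled "blue" purely for our benefit): a few lines of comparison logic respond differently to sky light, and nobody would say this code sees anything.

```python
# Differential response to "blue" light -- the old bounce-back system.
# The band is labeled "blue" for *our* benefit; the machine just
# compares numbers. Nothing here sees anything.

BLUE_BAND_NM = (450, 495)   # roughly what humans experience as blue

def respond(wavelength_nm):
    lo, hi = BLUE_BAND_NM
    if lo <= wavelength_nm <= hi:
        return "beep"       # differential output for the "blue" band
    return "silence"

print(respond(470))  # beep     -- sky light
print(respond(650))  # silence  -- red light
```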
At the end of the day, there are no such shortcuts.
We have seen by observing animal brains that the A-->B-->C neural chain does not, by itself, generate conscious experience.
If it did, we'd be conscious of everything going on in our brains. But we're not. Not by a long shot.
Nor can we say that "self-referential" neural behavior is sufficient. There are self-referential feedback loops all over the brain which have no effect on our conscious experience.
And we can't extrapolate that out to higher-order "self-reference" -- as in self-awareness, knowing that I am a conscious being -- because such higher-order thought demands that consciousness already be present as a prerequisite.
It's not the neural chain that "feeds" our conscious experience, that generates qualia, that sparks the phenogram. It's tightly synchronized oscillations in specific areas of brain real estate, in the presence of a trio of signature deep brain waves, that somehow (nobody yet knows how) translate a subsection of that neural activity -- which contains, in part, neural translations of the stuff that bounces off our bodies and bounces around inside our bodies -- into something entirely different from either neural activity or the activity of stuff in the outside world.
So what do we need to do in order to make a machine "see blue" in response to light from the sky?
It's not enough to make it respond overtly to the light, because that can be done without making the machine generate a phenogram.
No, at the end of the day, the only way to make a machine "see blue" or have any other conscious experience will be to figure out how our own brains produce this truly bizarre bodily function -- one which results not just in some overt response, not just in some chain reaction among components, but in conscious experience that is something different from the neural activity -- and then figure out how to make a machine do the same.
But however it's done, it will have to be done in the real world, not in a virtual simulation.
That's how we know that it's a hardware problem, at least in part. There can be no programming-only solutions.
OK, that said, I really do want to extricate myself from the tar-baby of "computer consciousness" speculations and focus instead on the real meat and potatoes -- the biology of consciousness in animals.