A "flying machine", by definition, flies. Unless it's broken. It doesn't simulate flight: it flies.
Fine. And a conscious machine, by definition, is conscious. Unless it's broken. It doesn't simulate consciousness; it experiences consciousness.
The claim being made by computationalists is that simulated flight is "real in the simulated world", but not real in "our world". Some of us see that as dualism. A machine that actually flies isn't a simulation.
No one is making that claim. Simulation of flight is a real thing in the real world. Flight simulators are made of matter. You can kick them and everything. Real pilots use them to really learn things. A simulation of flight is only flight in the sense that "flight" describes the behavior of the state of the simulation (which is what we mean by "in the simulated world"), but the existence of that state and its behavior are also real phenomena in the real world.
I think conscious machines are possible. If you slowly replace neurons with transistors, I don't see why consciousness would disappear.
"I think flying machines are possible. If you slowly replace a bird's feathers, bones, muscles, and other parts with composite materials and miniature actuators, I don't see why flight ability would disappear."
The difference is, after replacing neurons with equivalent neuron-shaped transistorized devices, we would not have to stop there. Since electronic impulses along wires are faster than action potentials, we can do more replacements.
For instance, we can replace (not simulate, replace) the system of interconnections between neurons with an internal list of connections for each neuron, plus a database in a sufficiently fast external computer. Each neuron (for now) stays where it is, but instead of having direct connections with the other neurons on its list, it now reports its state through a wire to the database, and it receives the data for the state of each of its connections through another wire. So it receives and processes the same information as before.
Now we can replace (not simulate) the neurons' internal lists of connections, and instead, have the database keep track of that as well. The neuron still receives and processes the same information as before.
Also, since the neurons are no longer physically connected to one another, but instead to the database, we can rearrange the neurons spatially however we wish (as long as signal propagation times are taken into account and suitably adjusted as needed). Each neuron still receives and processes the same information as before.
But, if our database system can handle the load, we can replace (not simulate) much of the input processing the neuron performs on the information it receives, by having the database system pre-process the data it communicates to the neuron, performing the same computation itself -- so that instead of communicating all the information about all the neuron's inputs, it only tells the neuron, for example, the weighted sum. The neuron now merely has to perform its output processing (e.g. comparing the weighted sum to its current threshold, and updating the history state that determines that threshold) and tell the database when it fires.
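The division of labor at this stage can be sketched in code. This is a toy illustration only: the class names, the weight values, and the threshold-adaptation rule are all my own assumptions, not a model of real neurons. The point is just the split: the central system holds the connectivity and computes the weighted sum; all the "neuron" retains is output processing.

```python
# Hypothetical sketch of the stage just described. HubDatabase and
# OffloadedNeuron are illustrative names; the weights and the
# threshold-adaptation rule are arbitrary toy choices.

class HubDatabase:
    """Central system that holds the connectivity table and pre-computes
    each neuron's weighted input sum, so the neuron no longer needs its
    own list of connections."""
    def __init__(self, weights):
        # weights[target][source] = connection strength (toy values)
        self.weights = weights
        self.fired = set()  # ids of neurons that fired this step

    def weighted_sum_for(self, target):
        # Input processing done on the neuron's behalf: sum the weights
        # of whichever of its inputs fired.
        return sum(w for src, w in self.weights[target].items()
                   if src in self.fired)

    def report_fire(self, neuron_id):
        self.fired.add(neuron_id)


class OffloadedNeuron:
    """All that remains in the neuron itself: compare the pre-computed
    sum to its current threshold, update the history state that sets
    that threshold, and report firing back to the hub."""
    def __init__(self, neuron_id, threshold=1.0):
        self.id = neuron_id
        self.threshold = threshold

    def step(self, weighted_sum, hub):
        if weighted_sum >= self.threshold:
            hub.report_fire(self.id)
            self.threshold += 0.1  # toy adaptation: firing raises threshold
            return True
        self.threshold = max(1.0, self.threshold - 0.05)  # decay back down
        return False
```

Note that the neuron's `step` receives and produces exactly the same information as before the offloading; only where the arithmetic happens has moved.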
But at this point it makes more sense to replace (not simulate) the remainder of the neuron's behavior by having the database system take that over as well. So, instead of sending processed neural input information to the neuron and receiving the neuron's state information back, the database adds the rest of the processing the neuron was doing to its own processing task list. That adds more processing work, but reduces the communications work.
So now we have replaced -- NOT SIMULATED -- the transistorized neural brain with a computer system that does not use or need neurons at all, and that can be architected in whatever way we wish (or will work most efficiently). Does the computer have a vast array of small processors each doing the work that was formerly done by a single neuron, or is each neuron's state just represented by some data in some area of memory, with just one ultra-powerful CPU doing all the processing? It doesn't matter, except insofar as practical design issues are concerned.
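The end state of the whole progression can be sketched too. Again this is a toy illustration under the same assumptions as before (arbitrary weights, an arbitrary threshold-adaptation rule): no neuron objects remain at all; each former neuron is just an entry in a weight table plus one number of threshold state, and a single function updates the whole former brain.

```python
# Hypothetical sketch of the final stage: the central system has absorbed
# the neurons' remaining work. Each former neuron is now only data --
# a row in the weight table and a threshold value.

def step_network(weights, thresholds, fired):
    """One update of the whole former brain, performed entirely by the
    central system. weights[target][source] is connection strength,
    thresholds maps id -> current threshold, fired is the set of ids
    that fired on the previous step. Returns the new fired set."""
    new_fired = set()
    for nid, inputs in weights.items():
        total = sum(w for src, w in inputs.items() if src in fired)
        if total >= thresholds[nid]:
            new_fired.add(nid)
            thresholds[nid] += 0.1  # same toy adaptation as before
        else:
            thresholds[nid] = max(1.0, thresholds[nid] - 0.05)
    return new_fired
```

Whether this loop runs on one ultra-powerful CPU or is sharded across a vast array of small processors is exactly the kind of practical design issue that doesn't change what is being computed.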
But this is much different from the claim that simulated consciousness is actual consciousness.
Correct. That is a different kind of claim, that relates to a different line of argument.
However, only some concepts of conscious machines are reasonably described as simulated consciousness, and those serve only as an existence argument. A conscious machine need not be a simulated brain, though that (as with the machine I just described) could be one path toward designing one.
If to achieve a flying machine we had to mimic every characteristic of a bird, we would probably not have managed it yet. But even before airplanes were invented, we could answer a philosophical argument that artificial flying machines are inherently impossible with a philosophical thought experiment: "Suppose we built a machine that perfectly replicated the relevant characteristics of a bird; why would we not expect it to fly?" The answer "That would only be a simulated bird, so it couldn't really fly in the real world" would make no sense. Neither does the claim that a machine that perfectly replicated the functioning of a brain, such as the one I just described, would be merely a simulation of a brain, and so couldn't really be conscious in the real world.
If a machine replicated a human brain entirely, it should be conscious. If it weren't, I don't know whether "magic" would enter the picture. It could be that consciousness is limited to biological creatures in some strange way we don't yet understand.
"If a machine replicated a bird entirely, it should be able to fly. If it weren't, I don't know whether 'magic' would enter the picture. It could be that flying is limited to biological creatures in some strange way we don't yet understand."
Yeah, maybe. I'm not ruling it out. It could have been true for birds and flying, too, but it wasn't. Could have been true for any number of other characteristics of biological creatures as well, but it wasn't. It turns out, plants create substance out of molecules from the air, water, and soil rather than from nothing, muscles move by forces applied by the electromagnetic interactions of matter rather than from the force of will, and diseases cause illness by mere chemical and physical interaction with living tissue, rather than by the spiritual effects of sin.
Why do you seem to expect a different kind of answer for consciousness? I can see not ruling out the possibility in principle, but why do you seem to think it's the more likely possibility? That's not a rhetorical question. I'm really curious. Is it because conscious experience "feels" so different from other things in nature (if such a comparison could even be made)?
Respectfully,
Myriad