rocketdodger
Philosopher
- Joined: Jun 22, 2005
- Messages: 6,946
Second, could it not be the case that one type of simulation could map exactly onto what is simulated -- namely, any mental activity? Speaking dualistically (which is wrong, of course), there is no actual physical output from mental activity in the way that there is a physico-chemical process to digestion, for instance. So, while a simulation of digestion does not physically digest anything, is it the case that a simulation of thinking does not think? Or that a simulation of feeling does not feel? Or that a simulation of consciousness is not conscious?
Simply because most simulations do not produce the output that the "real-world" processes do, does it follow that this is true of all simulations? Is the argument simply over-generalized?
Yes, the argument is over-generalized.
Nobody in their right mind thinks consciousness is some kind of fundamental property of a substance (and anyone who does is simply uninformed).
What is left is relationships. This whole issue boils down to the simple question of whether consciousness is a feature of relationships between physical entities (making it dependent on those physical entities) or a feature of relationships between other relationships (making it independent of the physical entities, i.e. pure information).
People like westprog think that the primary relationships might be important, for example that they might need to hold between the synapses of biological neurons, and so on.
People like myself (strong AI supporters) think that the primary relationships are irrelevant and that only the secondary and greater relationships matter.
Note that in a simulation, the primary relationships really are "simulated," while the secondary and greater ones are not -- they are just as real as in any other physical system. So it fits right in with what you are asking: the question of whether a process in a simulation can still be a real process.
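The substrate-independence claim above can be illustrated with a toy sketch (my own example, not from the original post; all names are hypothetical). Two systems are built from completely different primary elements, yet the higher-order pattern of relationships -- here, the degree sequence of a small graph -- comes out identical:

```python
# Toy illustration of "relationships between relationships" being
# independent of the primary substrate. The primary relationships differ
# (edges between integers vs. edges between strings), but the
# second-order structure -- how many relationships each node
# participates in -- is the same in both systems.

def degree_sequence(edges):
    """Count how many edges touch each node, ignoring what the nodes 'are'."""
    degrees = {}
    for a, b in edges:
        degrees[a] = degrees.get(a, 0) + 1
        degrees[b] = degrees.get(b, 0) + 1
    return sorted(degrees.values())

# Substrate 1: a "physical" system whose nodes are integers.
physical = [(1, 2), (2, 3), (3, 1)]

# Substrate 2: a "simulation" of that system, with string tokens as nodes.
simulated = [("n1", "n2"), ("n2", "n3"), ("n3", "n1")]

# The higher-order structure is identical either way.
print(degree_sequence(physical) == degree_sequence(simulated))  # True
```

The point of the sketch is only this: once you compare the systems at the level of structure rather than substance, the question of which primary elements realize that structure drops out entirely.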