Yes you do. Please read Ichneumonwasp's post above.

I think we just went over this. As you put it, "Simulated oranges are not oranges in our world". Since I don't know of any other world but "our world", that means simulated oranges are simply not oranges.
There is a difference between simulation and reality for physical objects, but it is not at all clear that the same distinction holds between simulation and reality for actions (aside from the obvious truism that a simulation is not the thing itself). That is the fundamental point when discussing consciousness. Consciousness is not an object; it is an action.
The better analogy that we have used repeatedly in these discussions is with running. Simulated running is running -- not in the real world but in the simulated world. The closer the simulation is to actual running, the more the process by which we achieve the simulation tells us about how running works in the real world. Same for consciousness, at least theoretically. The idea is that a simulation can potentially provide a model for how consciousness works in the brain; not that any simulation would be consciousness in the brain. Obviously it cannot be -- it is a simulation, after all. But if the simulation provides enough information, if it is close enough to how the brain works, then we probably should even speak of it as "being conscious" within its simulated world.
ETA:
Or more succinctly, as Pixy said, it's a category error, or a framing error. The simulated orange is an orange within the simulation. A real orange is an orange in the real world.
For a start, we cannot assume that a robot will exhibit the behaviours of consciousness, and then use that as evidence that a robot can be conscious. So far, there has been no AI that has come remotely close to passing a Turing test. The assumption used to be that by 2001 HAL would be able to talk to us just like a human being over a slightly fuzzy phone line. Nowadays, most people (even those who believe in the computational approach) don't expect robots or computers to exhibit signs of consciousness for the foreseeable future.
What Piggy has indicated is that simply producing a machine designed for one form of activity doesn't mean that it will exhibit another form of activity.
If we don't know (and despite the endless assertions, we still don't know) what, precisely, produces consciousness in the only physical structure in which we know it exists, then we can't leave out aspects of that structure and be sure that the baby hasn't been thrown out with the bathwater. Either the precise mechanism that produces consciousness must be found, or else we cannot assume that it will be there.
To say as a counterargument that the hypothetical robot will exhibit consciousness is meaningless, until a robot can be found that actually does so.
Nope. That's the diametric opposite of my point.

So if I understand you correctly, a simulated brain can only have simulated consciousness in a simulated world, if we are to avoid a category error?
No, you could not eat an orange simulated by a Turing machine if you are outside the simulation.
But you could have a conversation with a simulated consciousness, you could play games with it, you could have cyber sex with it, you could show it your photos and have it react as a normal person would via a chatroom or webcam, and finally, if you hooked it up to a body it could control outside the simulation, you could walk down the street with it to the store and buy a pack of cigarettes.
You have no trouble assuming I am conscious, just because of the way I respond to your communication. If I told you I was a simulated consciousness, what would you say -- "oh, sorry, I'd change my mind because you aren't flesh and bone"? I don't get it.
The only sense in which running in a simulation resembles running in the real world is in the consciousness of the person observing the simulation, who perceives similar relationships that objectively do not exist.
A film of a person running will look, to an observer, much like an actual person running. Objectively, it's nothing like that at all - just patterns of light on a wall. To expect the simulation to exhibit all the behaviours of running would be the same category error made by viewers in the first cinemas, who fled the approaching train in panic.
Yes, this is the core of it. How is it even meaningful to discuss something that exhibits all the behaviours of X but is not X? For any X, not just consciousness?

I still maintain -- and this was the point of that previous exchange -- that it makes no sense to discuss a robot that acts exactly as if it is conscious but which is not conscious. How could you tell that is the case? The only reason I 'know' that you are conscious is because of your behavior, and vice versa (and because I know my own experiences and have every reason to believe the same of you). If we structure a robot to function the same way we do, and if it displays all the behaviors of a conscious being, on what grounds could we conclude that it is not conscious?
We have no basis on which to know any of this. There is simply no reason to assume that we could interact with a simulated consciousness any more than we can interact with a simulated orange.
A simulation is not the thing simulated.
The options seem fairly obvious to me. Either we can't discuss the matter in the first place and so should remain silent; or we can, and should try to agree on what we are talking about.
I sincerely hope you know why that is a totally inadequate response. I certainly hope that you do not think that anyone in this thread has ever argued that the behavior of any game character even remotely approximates a conscious entity.
I think that we are perfectly entitled to discuss the subject, but should accept that precise definitions of what we are talking about do not exist.
(There are wrong and misleading definitions around, of course).
Wrong. The particles of the semiconductors in the computer hardware are the particles of the semiconductors in the computer hardware. The orange is the orange.
Let me see if I understand you correctly -- the computer memory holding the data that we label an "orange" is actually not the orange in the simulation?
Where, then, is the orange in the simulation? Magic bean land? And I suppose the monitor is a magical window into magic bean land?
And since the orange interacts with other simulation constructs, I suppose that must all be in magic bean land as well?
Or are you saying the whole thing is in our imagination?
Why is that different from a normal orange? If you remove every human in the universe, an orange is no longer an orange. It becomes just a bunch of particles that maybe monkeys (although they are no longer monkeys, since that is a human word as well) like to eat. But the "orange" is gone, forever.
Doesn't make much sense to me. But then the idea of saying some things are "real" and some are "not" has no logical or mathematical basis to begin with.
Why would a simulated orange contain vitamin C in our frame?
Nope. That's the diametric opposite of my point.
An orange in a simulation has all the behaviours of an orange in our world, but it is not in our world. You - real-world you - can't eat it.
A conscious mind in a simulation has all the behaviours of a conscious mind in our world... And you can communicate with it, because information is substrate-independent.
That's intrinsic to what a simulation is; if information didn't pass between the two worlds, it wouldn't be a simulation, it wouldn't be anything at all.
So:
There's a difference between how we can interact with an orange in a simulation and an orange in our world.
There is no difference between how we can interact with a mind in a simulation and a mind in the real world.
Conflating these two - the orange and the mind - in either direction, for whatever reason, is the category error. Thinking we need to - or even can - interact with a mind in the same way we interact with an orange is the category error.
Simulated computation is real computation. Simulated art is real art. Simulated minds are real minds. When it comes to information processing, the map is the territory.
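To make that last claim concrete, here is a minimal sketch of my own (the toy run_adder interpreter is hypothetical, not anything posted earlier in the thread): arithmetic performed inside a simulated machine is still real arithmetic in our frame.

# Toy register machine: we "simulate" a machine that adds two numbers.
# The addition happens inside the simulation, yet the sum it produces
# is a real, usable sum out here in our frame.

def run_adder(program, a, b):
    """Interpret a tiny instruction list over two registers."""
    regs = {"r0": a, "r1": b}
    for op, dst, src in program:
        if op == "add":  # dst <- dst + src
            regs[dst] += regs[src]
    return regs["r0"]

ADD = [("add", "r0", "r1")]

# The simulated machine's answer is indistinguishable from native addition.
assert run_adder(ADD, 2, 3) == 2 + 3

The "simulated" sum and the "real" sum are the same number; there is no extra fact that distinguishes them. That is what substrate-independence means for information processing.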
Why would a simulated consciousness be conscious in our frame?
Thanks for stating your perspective clearly.
How is information being substrate-independent not dualistic?