That question "How does an interpreter work?" is the really interesting one.
Yes, and you're not addressing it.
But we can't get to it until we clear the decks of the kinds of mistakes that lead folks to think that a person being simulated on a computer could become conscious.
Sure, I can't wait.
In a way, it's true that the simulated person can't become conscious because simulations involve (require) observers.
But that's one of those "I was late because my daughter thought the sprinkler was a duck" sort of statements... perhaps true, but not fleshed out enough.
But you have a bootstrapping issue here. If a simulated person can't become conscious because simulations require an observer, then neither can a brain become conscious, because a brain requires an observer. After all, the ultimate simulation is the real thing. And to just start with our knowledge that a brain does have an observer is jumping the gun--even begging the question.
The simulated person cannot become conscious because it is imaginary... that's another way of saying it, but just as unclear.
But this is where what people are really trying to tell you comes into play.
You say that simulations are imaginary. But why, if simulations are so imaginary, do we bother to actually build them? The answer is obvious. It's because we need them to actually exist in order to do something.
If you want an imaginary simulation, have Donald Duck build it. If I build it, it's going to be real. I'd really like to reserve the concept of an "imaginary simulation" for the thing Donald Duck builds in a cartoon, because if I'm the boss and I pay good money to someone to run a simulation for me, they had better be able to put their hands on some physical device that is actually performing a bit of work. I want some way of firing them if they merely "imagine" that the simulation exists.
But it gets worse...
I think the key question to ask here is this: When we're talking about the man in the simulation, what all do we have to have in the room, so to speak?
Well, first, we need the simulator. We're assuming it's a computer here, which is fine. We need that thing. Let's say it's plugged in and idling.
No no no. You're trying to teach me a lesson that consigns all simulators to the world of Donald Duck. But you're coming up with all of your own tests.
I think I'll make a test of my own. My concern is that whatever rules you're coming up with have a bootstrapping issue. So to test this out, I want the thing running my simulation to be an actual interpreter.
That's not a big problem. I need someone to simulate... I volunteer. I'm of sound mind and body, and I hereby attest to my own conscious state. Furthermore, being simulated requires no effort on my part--which is perfect for me!
But now I need something to simulate me. A computer is too easy--we could debate that indefinitely, because some people have a major issue seeing a computer as having any real agency, while for others it's no problem. I have a much better idea.
I'm going to pick another physical system to simulate me. In fact, I'm going to choose as the simulator--westprog's brain. Oh, but don't worry, I'm not simply going to leave it at that--if I don't actually do any work building the simulation, then we can hardly call it one. So here's what I'll do. I'll set up the simulator by asking westprog to pretend he is me for the next few moments.
Now, you play with your simulation of a conscious entity, and I'll be running along with my own.
Now, funny thing about this idling... when you think about it, you could pick any collection of parts in that machine -- atoms or molecules or bigger pieces, whatever -- and assign any number of symbolic values to the kinds of changes they're making.
Sure. I'll give westprog a slinky and have him wait in a chair while I plan out how to map the entities in the physical system that is simulating me onto the entities in the simulated system. For giggles, I just had an interesting idea. While running my simulation, I'm going to run it in a simulated world. In fact, I won't even try too hard to make it realistic... I always wondered what it would be like if I could actually visit Candy Island. So I'm going to write a Candy Island simulator. To attach it to my simulation, I'll use virtual-reality goggles and a headset.
And if you did that, you could then describe any number of possible worlds, and track their development over time by looking at what the machine was doing.
That's true. To start with, whenever westprog says anything in English while pretending to be me, I'm going to map that to the equivalent English. We'll call that run "A". Maybe I'll run the simulation again, in run "B", XOR the letters of each word he utters together, map the results to ASCII text, and presume that my simulation is saying that instead.
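Just to pin down what I mean by the two mappings, here's a rough sketch in Python. The helper names and the sample utterance are placeholders of my own, purely for illustration--the point is only that the very same output from westprog can be read under two different symbolic assignments.

```python
# A rough sketch of the two mappings (hypothetical names and sample
# utterance). Run "A" reads westprog's English as the equivalent English;
# run "B" XORs the letters of each word together and reads each result
# as an ASCII character.

from functools import reduce


def run_a(utterance: str) -> str:
    """Run "A": the identity mapping -- English maps to the same English."""
    return utterance


def run_b(utterance: str) -> str:
    """Run "B": XOR the letters of each word together, then read each
    resulting byte as an ASCII character (some are unprintable control
    characters, which is why the output can mess up a terminal)."""
    return "".join(
        chr(reduce(lambda a, b: a ^ b, (ord(c) for c in word)))
        for word in utterance.split()
    )


if __name__ == "__main__":
    utterance = "I can't believe it! You can even eat the trees here!"
    print(run_a(utterance))        # ordinary English
    print(repr(run_b(utterance)))  # gibberish; repr() keeps any control
                                   # characters from mangling the terminal
```

Same physical behavior from westprog both times; the only thing that changes is the mapping I decide to apply to it.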
In other words, you could simply decide, in your own mind, that these sets of changes represented logical values for each type of change, and in each case you'd come up with a possible world.
Right. In run "A", westprog says, "I can't believe it! You can even eat the trees here!" ...or something similar. I know because that's what I always wanted to say if I ever visited Candy Island.
In run "B", westprog's just spouting out gibberish, most of which is not only not pronounceable, but messes up my terminal.
So what does it mean to say that the world changes?
What do we need "in the room" for that to happen?
Well in my case, the simulated me is running around in a fictional virtual world where he is on an island made entirely of candy.
The computer can sit there and change state all day, but just as no human could examine the computer and determine the state of the world you've decided to imagine (except as one of a practically infinite number of possibilities), the computer itself could not do so, even if it somehow literally "knew" everything about itself.
And therefore, westprog cannot experience Candy Island. According to the rule.
But now I'm getting suspicious. I know westprog is conscious, and would even be conscious while pretending to be me. Even when he is immersed in a fictional world.
So something about your rule is bothering me.
The computer alone in a room does not, and cannot, constitute the world you imagine.
Why? Well, because you imagined it! It's a state of your brain. It only changes when you look at the machine and subsequently change your mind about what's going on in that world.
That's correct.
I am the one that is imagining that westprog is me in Candy Island! It's all how I interpret the results when I look at where westprog is!
Only, something is wrong here. westprog is conscious the entire time, whether I'm looking at the results or not. In fact, I'm sort of jealous--he's actually experiencing the wonderful immersive virtual world of Candy Island. I'm just peeking in from time to time, trying to infer how much I would enjoy it if I were that simulated entity...
Yeah, yeah, I know. That's a mistake. I'm anthropomorphizing westprog.
This fact -- and it is a fact -- is extremely important.
In order for the state of the "world of the simulation" to change -- which is to say, the world which the simulation is intended to represent -- a brain needs to change states.
Well, certainly this particular one is true. westprog's brain state changes. Nevertheless, something is a bit odd about your application here. I can't help but think that this intended representation is supposed to be the one in my brain.
Well, let's try run "B". Oh, yeah. Gibberish. That simulation's not working. I'm going to be nice to westprog, though, and not suggest dismantling the simulator to debug it.
There is not enough information in the computer alone to make that happen.
Well, in my case, I think there's plenty of information in my simulator to make this happen. In fact, there's information to spare, and a lot of that information is probably doing other important things.
And this fact does not change when we somehow make the computer change in certain ways to custom fit what we imagine.
But it changes when I put something I know is an interpreter in the same situation. Maybe I should invoke special pleading when I do that.
So you see, we don't -- at this point -- have to ask about how those changes in brain state come about. And that, after all, is the same as asking "How does an interpreter work?"
My simulator's interpreter works just fine, thank you. But per all of the rules we went through, I am supposed to presume it doesn't work.
That's the problem. Now, you never bothered to consider the problem--if you'll note, my entire question to you was: if your rules were true, then how come we are interpreters? Hopefully you'll see what I'm getting at, since in my case I added an example that actually was an interpreter, and all of your rules still suggested that I treat it as if it wasn't one.
Your rules are garbage. They conclude that the machine is not an interpreter, and I'm sure that's the conclusion you wanted. The problem is, they reach that conclusion even when I put an actual interpreter in. So I don't trust that those rules even work.
I would love to move on to the question of how the interpreter works.
You'd better, before you start defending silly claims such as the claim that westprog is not conscious.
But the explanation -- or, rather, the partial and still unsatisfactory explanation we have -- isn't going to make sense to anyone who still accepts that the world of the simulation is anything but imaginary.
Well, westprog was only pretending to be me, sure. But he's a real person. (Well, technically, in this case he was imaginary, because it's a hypothetical experiment--but I hope you see that I was actually trying to raise exactly that problem, which you managed to completely brush off with a long post.)
And the key to making the picture come into focus isn't so much focusing on the need for the brain state (or interpreter or observer)... the key is to be extremely rigorous in making clear distinctions at all times between the physical computations of the real world, on the one hand, and the symbolic computations we imagine them to equate to when we build and use information processors.
I want you to note something here though. I never believed the simulated version of me was actually me. But the simulated version of me here was actually westprog. And he was conscious. And he is real.
If you avoid conflating those two things, you won't make the errors that lead to conscious people inside of simulations who are experiencing the world which the simulation is intended to represent.
It depends. If their simulation is being run in a Donald Duck cartoon, you have a point. If not, see your problem case above.