I don't know what you mean by "need to include brain states".
Let's say we have an artificial object--a quarter. It rolls onto the floor next to another quarter. Is that physical addition? What if, instead, I put two quarters into a vending machine? Still physical addition? In this case, note that I intentionally put two quarters in--that certainly requires brain states.
Suppose that I put five quarters in, and press A1 to get a snack. Is that physical addition? Now, how about if instead I put in one dollar bill and one quarter, and press A1 to get a snack? Still physical addition? If so, do we count the sum as the same as the first physical addition? Note that here, to get the required result, we do need $1.25 in total, however it's physically made up.
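To see the symbolic half of this laid out, here's a minimal sketch (Python, and entirely my own illustration--the dictionary of coin values just encodes the ordinary monetary conventions a brain brings to the transaction):

```python
# Illustrative sketch: two physically different deposits, one symbolic total.
# The values below are assigned conventions; nothing in the metal or paper
# "contains" them.
VALUES = {"quarter": 0.25, "dollar_bill": 1.00}

def symbolic_total(deposit):
    """Sum the conventionally assigned values of the deposited items."""
    return sum(VALUES[item] for item in deposit)

five_quarters = ["quarter"] * 5
bill_and_quarter = ["dollar_bill", "quarter"]

# Physically very different events; symbolically the same result: 1.25
assert symbolic_total(five_quarters) == symbolic_total(bill_and_quarter) == 1.25
```

The equivalence of the two deposits is a fact about the value conventions, not about the coins themselves.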
But what's so special about a brain here? It's trivial to add a physical machine that converts these placements to a physical quantity. Are you ruling out brain states just to rule out brain states?
Ok, I'll put it back into context for you and we'll move on from there.
Remember, the topic is symbolic information systems, such as adding machines and simulator machines.
What does the difference between symbolic aggregation and physical aggregation have to do with that, and what's up with the role of the brain state?
If two quarters are washed into the same spot in a gutter, or if they're picked up by a hand and placed together, these are both examples of physical aggregations of the objects.
And as your examples show, no brain state is required for this to happen -- it can happen with one, without one, doesn't matter.
But what about our marble machine? What sort of aggregation can it perform, and under what circumstances?
To examine this, we'll use an old schoolbook way of looking at systems.
Imagine a page with two big circles drawn on it, representing two systems. Whatever's inside the circle is necessary for that system; whatever isn't, ain't.
So if I want to represent the system necessary to move a pile of brush to the county composter, I need the brush pile to be in the circle, of course, and the county composter, and some sort of vehicle that can carry the brush, and some sort of passage for the vehicle between the two places.
That's a fair representation of our necessary system.
Notice we leave out the color of the vehicle, because although the vehicle will certainly have some color to it, no particular color is necessary for the system.
Ok, so let's look at the first circle, which only contains the marble machine, nothing else.
If that's all that's in the circle, can it be an information processor?
As it turns out, the answer is "No". Here's why....
Without a brain state to determine the "meanings" of the patterns of paint on the machine, they might as well not be there. You need a brain to assign those meanings and to interpret them.
Without that, all you're left with is an object you can drop marbles through.
You can prove this to yourself by viewing the video with the sound off, and ignoring the patterns of paint -- remember, inside our circle there's nothing that has any means of deriving any meaning from them. (Even if the machine were completely self-aware, this would still be the case.)
Without a brain state somewhere in the circle, the system cannot be a symbolic information processor. The only addition it can do is to channel the marbles at the top into a single group at the bottom.
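If it helps, here's that point as a sketch (Python, my own illustration--the function names and the "one marble stands for one unit" convention are hypothetical): everything the machine contributes lives in the first function; the numeric reading exists only in the second, which stands in for the brain.

```python
# Illustrative sketch: the machine alone vs. the machine plus an interpreter.

def marble_machine(marbles_in):
    """All the machine physically does: channel the marbles into one pile."""
    return len(marbles_in)  # a pile of this many marbles at the bottom

def interpret(pile):
    """The brain's contribution: assigning the pile a symbolic meaning.
    The convention here (hypothetical) is 'one marble stands for one unit'."""
    return f"the sum is {pile}"

pile = marble_machine(["marble"] * 7)
# Inside the first circle there is only the pile; "the sum is 7" appears
# nowhere in the machine -- it exists only once interpret() is run.
print(pile)             # 7
print(interpret(pile))  # the sum is 7
```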
So let's use our other circle to describe the minimum system needed for this thing to work as an info processor.
Well, we need at least one brain capable of assigning values to different aspects of the machine, determining the rules of operation, and interpreting the results of the machine's behavior. (A programmer and a user or reader.)
It might be one brain or several involved, but at least one is required.
And this is true of any object in the universe that is (or can be) used as an information processor.
By itself, alone in the circle, as an independent unobserved system, it cannot perform that function. To do so, at least one brain has to be involved in the system.
And here's an extremely important consequence of that fact:
The "world of the simulation" -- which is to say, whatever a simulation program is supposed to represent -- cannot be located in the simulator.
Remember, PixyMisa's marble machine is intended to simulate aggregation -- that is, its physical activities are designed to mimic, in a certain way and only if you understand the symbology, the process of grouping things together. In that way, it can answer the question "How many quarters do I have in the house if I've got 3 in my pocket, 32 in my coin collection, and 2 under the sofa cushions?" (3 + 32 + 2 = 37.)
But by itself, it can't do that.
It can only do that if an observing brain changes its internal states while observing it.
Which leads us to an extremely important fact:
Changes in the state of the "world of the simulation" exist as changes in the state of a brain.
Notice that there is nowhere else for this "world of the simulation" (WOTS) to be.
All we need for the simulator to run is the machine and a brain that understands its symbols and usage.
Changes in the state of the machine cannot be changes in the WOTS because, as we've seen, by themselves they mean nothing... they are only what they are, the movements of the machine.
The only other choice we have is the other object in the system... the brain, or brains.
And indeed, we find that the WOTS must change every time the relevant brain states change.
In short, the world of the simulation literally exists in your imagination, not in the simulator.
This is not philosophy, but physics.
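To restate that claim in schematic form (again Python, again purely my own sketch--the class names and the WOTS entry are hypothetical): the machine object below holds nothing but physical configuration, and the only place a simulation-level description can be written or updated is the observer's state.

```python
# Illustrative sketch of the claim: WOTS changes are brain-state changes.

class Machine:
    """Inside its own circle: physical configuration, nothing more."""
    def __init__(self):
        self.marble_positions = []  # just where the marbles physically are

class Observer:
    """The only object in the system with anywhere to keep a WOTS."""
    def __init__(self):
        self.wots = {}  # e.g. {"quarters in the house": 37}

    def read(self, machine):
        # Interpreting the machine updates *this* object's state;
        # the machine's own state is untouched by the reading.
        self.wots["quarters in the house"] = len(machine.marble_positions)

m = Machine()
o = Observer()
o.read(m)
print(o.wots)  # the WOTS lives here, not anywhere in m
```

On this picture, running the machine with no observer in the circle leaves a pile of marbles and no WOTS at all.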
Now I hope you can see the relevance to the question of whether or not a machine simulating (not replicating) a brain can be wired into a robot body and produce a conscious robot.
No, because the brain being simulated doesn't exist in the simulator.
You might as well plug in a machine simulating a sunset over Miami Beach.
But when we examine the system, it is so tempting, as we trace the sensory nerves up to the skull and reach that computer running the brain sim, to switch our perspective over to the WOTS in our own imaginations and take our eye off the ball -- which is simply the mass of moving parts in front of us.
In other words, if you want to know how a part will work in a machine -- and that's what we're doing here, sticking a computer box into a machine -- you can only pay attention to what it's physically doing.
So we could only use our marble machine inside another machine that needed a few marbles at a time. The fact that we can symbolically use that machine to add two-digit numbers doesn't mean it can work in a machine that needs 20 marbles to come out of it.
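Here's that distinction as one more sketch (Python, hypothetical names): the downstream machine runs on the physical marble count it receives, and no symbolic reading of the adder changes that.

```python
# Illustrative sketch: the marble machine as a part inside a larger machine.

def marble_machine(marbles_in):
    """Physically: marbles in, one pile out."""
    return len(marbles_in)

def downstream_machine(pile):
    """The outer machine only operates if 20 physical marbles arrive."""
    return pile >= 20

pile = marble_machine(["marble"] * 5)
print(downstream_machine(pile))  # False -- 5 marbles arrived, not 20.
# Symbolically we might read those same marbles as a two-digit sum,
# but the downstream machine doesn't run on readings; it runs on marbles.
```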
And the fact that we can "read" the simulator in order to change our own brain states which are the simulated brain... well, it doesn't tell us anything at all about whether a robot body will be conscious if we stick a machine designed to run simulations in its head rather than a machine designed to be conscious.
The answer is no, it won't.
If you want that robot to be conscious, you've got to design and build a brain that does whatever is physically necessary to make that happen -- not a machine designed to run simulations of brains.
That is what I have been saying.
And that is why consciousness cannot be programmed.
For the same reason you can't go and program yourself a new truck or a bigger house or clean laundry.