I didn't specify or mean to imply that this was only some part of the brain. It's the whole thing; I don't know where this "part of the brain" idea keeps coming from.
It comes from me. You see, in order to establish that you're wrong about structure being relevant, all I have to do is argue that a brain that is not structurally equivalent to the brain you would normally say has personal continuity still has personal continuity.
If I just establish that, I'm done.
Now, if even a tiny part of that brain, one that nevertheless plays a role in personal continuity, can be replaced by a different structure, then I have demonstrated my point.
The criterion is: I walk into a room--I smell apples, cinnamon, pie crust, and pine--and I feel like I'm at grandma's. Why? Because of an association I had as a kid at grandma's house. This is a completely subjective kind of thing--it applies to me, and not to you--so we're not merely talking about "smelling things". That's why this example was brought up, remember?
So, if the structure of my brain changes, then ipso facto it's not the same structure. But if I still feel like I'm at grandma's, then at least my sense of personal continuity remains the same. Neither you nor I can tell the difference.
I'm not going to try to argue about replacing the entire brain. Different kinds of things may call for different kinds of arguments, and those arguments may not actually relate to why I think you're wrong. But I think this part is both applicable and demonstrates a flaw in your view. It meets the full burden it needs to meet to demonstrate that flaw.
That was your objection, not mine.
But it was your example, and I have no idea how you intended to make an engineering feasibility argument by introducing it.
I'm pretty sure I did:
[In a multi-compartment model] the geometry and composition of the original neuron in this case is paramount, because accurately modeling such things is what gives rise to the right behavior. This is function from structure.
Here's where we run into problems. You think you've said something, but you never say it explicitly. If by the above you mean this:
A multi-compartmental model is structurally equivalent to the neural networks it is modeling.
...then I can know you are making a specific claim. Because you don't make the claim explicitly--you just assume that I will assume that's what you meant--I cannot address it.
However, assuming this is what you meant, you're wrong. A multi-compartmental model will likely have a different structure than the neuron it models. It will also likely not be functionally equivalent.
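To make concrete what I mean by "different structure": a multi-compartmental model is, at bottom, a handful of coupled RC circuits standing in for a neuron's continuous geometry. Here's a minimal two-compartment sketch in Python--every parameter is illustrative, fitted to nothing--just to show that the model's "structure" is a discretization scheme, not the neuron's actual anatomy:

```python
# Minimal two-compartment neuron sketch (soma + dendrite).
# A multi-compartment model reduces a neuron's continuous geometry to a
# few coupled RC circuits. All values below are illustrative only.

dt = 0.01          # ms, Euler integration step
C = 1.0            # membrane capacitance (arbitrary units)
g_leak = 0.1       # leak conductance
E_leak = -65.0     # mV, leak reversal potential
g_couple = 0.05    # axial coupling conductance between compartments

v_soma, v_dend = E_leak, E_leak
for step in range(int(200 / dt)):            # simulate 200 ms
    t = step * dt
    I_inj = 1.5 if 50 <= t < 150 else 0.0    # current injected into dendrite
    # Each compartment sees its own leak plus the axial current
    # flowing to/from its neighbor.
    dv_soma = (-g_leak * (v_soma - E_leak) + g_couple * (v_dend - v_soma)) / C
    dv_dend = (-g_leak * (v_dend - E_leak) + g_couple * (v_soma - v_dend) + I_inj) / C
    v_soma += dt * dv_soma
    v_dend += dt * dv_dend

print(f"soma voltage after 200 ms: {v_soma:.2f} mV")
```

Whether it's two compartments or two hundred, the point stands: the grid is the model's structure, and it is not the neuron's.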
If you don't want to be frustrated when talking to me, make explicit claims. All I can get from your quote is that structure gives rise to function, which I don't find much to object to. If you meant to imply that multi-compartmental models are equivalent to the neurons they model, then say so.
Divorced from structure, there is no way to get a functionally equivalent model without the kind of information extraction you disqualified here.
First off, that post is about informational continuity, not functional equivalence. Regarding functional equivalence, there's no way to get a functionally equivalent model, period. So the point is moot.
This is why I started out simply comparing computers to other computers. What should count here is simply whether the interactions are "good enough" for us to consider, and that's fuzzier. But unless you establish why you think an MCM (multi-compartment model) is acceptable and an IF (integrate-and-fire) model is not, we're not going to make much progress here at all.
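For contrast, and assuming "IF" here means the standard leaky integrate-and-fire model, here's the same kind of illustrative sketch for that. It discards spatial structure entirely--one point compartment and a reset rule--and still produces spiking behavior; whether that behavior is "good enough" is exactly the fuzzy part:

```python
# Leaky integrate-and-fire (LIF) sketch: a single point compartment with
# a hard threshold-and-reset rule. No geometry at all, yet it spikes.
# All values below are illustrative only.

dt = 0.01          # ms, Euler integration step
tau = 10.0         # ms, membrane time constant
v_rest = -65.0     # mV, resting potential
v_thresh = -50.0   # mV, spike threshold
v_reset = -70.0    # mV, post-spike reset
R = 10.0           # membrane resistance (arbitrary units)

v = v_rest
spike_times = []
for step in range(int(200 / dt)):            # simulate 200 ms
    t = step * dt
    I = 2.0 if 50 <= t < 150 else 0.0        # injected current
    v += dt * (-(v - v_rest) + R * I) / tau
    if v >= v_thresh:                        # threshold crossing = "spike"
        spike_times.append(t)
        v = v_reset

print(f"{len(spike_times)} spikes; first few (ms): "
      f"{[round(s, 2) for s in spike_times[:3]]}")
```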
But regarding informational continuity, I can certainly exploit other structures besides neural ones--not just theoretically, but in practice. If I write down my password and use that slip of paper as an aid the first few times, I've used something entirely different from the brain structure of my memory. But it's still genuinely my password; and if I claim to know it was the password I came up with, the fact that I used a sheet of paper as temporary memory space still counts toward a valid causal chain. At a smaller scale, our brains actually use "cheats" like this constantly, at a level we're not usually conscious of (one example being how the brain exploits the fact that things don't tend to change much in order to mimic our sense of a complete visual field).
However, simulating the structure sidesteps the issue by allowing you to only care about how the brain works, not what that working means.
That's a different topic altogether. Meaning comes from the way we interact with the environment. In particular, we act as agencies; as such, we are capable of recognizing patterns in the environment, initiating actions, remembering the effects of initiating actions, using those remembered effects to instantiate goal-based plans, observing the effects of carrying out those plans in order to develop a sense of asserted control (or the lack thereof), and having particular sets of drives that tune our interests in meeting particular goals.
The meaning comes from the way these kinds of interactions play out. I am driven to interact with people--to interact volitionally, as an agency. I carve out a concept of my own personhood (which merges into the sense of personal continuity). I become socialized. I learn particular "social habits"--purposes, such as that cups are "things to drink out of". I know this not only as book knowledge, but as a kind of applied knowledge--I know not only what a cup "is for", but how to recognize a cup, how to use that recognition to reach out for the cup and touch it, and how to drink out of it; all because of the flow of information throughout this entire set of interactions.