

I thought I gave the most relevant example I could think of.
You gave that example, sure. But you made no claims either way about whether the IF model you have in mind is functionally equivalent to the compartmental model.

I'm sure I can build some IF network just as good as your compartmental one, though it might be messy in the Turing Tarpit. It very likely will wind up using more neural nodes.
The geometry and composition of the original neuron in this case is paramount, because accurately modeling such things is what gives rise to the right behavior.
If the function is different it's not a functional equivalent.
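For anyone not following the two model families being argued about here, below is a rough, non-authoritative sketch. The parameter values and the "coupling" constant are invented for illustration and are not anyone's actual model: the IF unit tracks a single membrane voltage, while the toy compartmental unit's behavior depends on how its two compartments are coupled, a stand-in for the geometry mentioned above.

```python
# Rough sketch: a leaky integrate-and-fire (IF) unit versus a toy two-compartment
# unit. All parameter values are invented for illustration.
import numpy as np

DT = 0.1  # integration step, ms

def if_neuron(drive, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Single-state IF unit: one membrane voltage, spike when it crosses threshold."""
    v, spikes = 0.0, []
    for t, i_ext in enumerate(drive):
        v += DT * (-v / tau + i_ext)
        if v >= v_thresh:
            spikes.append(t * DT)
            v = v_reset
    return spikes

def two_compartment_neuron(drive, tau=10.0, v_thresh=1.0, v_reset=0.0, coupling=0.3):
    """Toy compartmental unit: input charges a 'dendrite' that leaks into the 'soma'.
    The coupling constant stands in for the geometry the post refers to."""
    v_dend, v_soma, spikes = 0.0, 0.0, []
    for t, i_ext in enumerate(drive):
        v_dend += DT * (-v_dend / tau + i_ext)
        v_soma += DT * (-v_soma / tau + coupling * (v_dend - v_soma))
        if v_soma >= v_thresh:
            spikes.append(t * DT)
            v_soma = v_reset
    return spikes

step_input = np.concatenate([np.zeros(100), 0.5 * np.ones(400)])
print("IF spike times (ms):             ", if_neuron(step_input))
print("Two-compartment spike times (ms):", two_compartment_neuron(step_input))
```

The point of the toy is only that the two units respond differently to the same drive, which is why matching one with the other takes extra work (or extra nodes).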
 
They would be functionally equivalent for the subset of neurons I mentioned, yes, and you're right that you can make a more complicated network of simple nodes perform the same as complex ones. However, a Chinese room implementation of your brain would also be functionally equivalent in the same manner.
 
In an abstract sense, sure. In practice, no. The device replacing the structure has to stimulate the rest of the brain in a timely fashion. If I feel like I'm at grandma's house 40 years later, that's no good. The functions of the entire system are not equivalent. We can't slow down reality outside the system, so to be equivalent, we must meet timing constraints.

But you still don't need the original structures to make this work. You just need to meet this sort of invariant, and you're done.
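As a loose illustration of the "invariant" being described (same outputs, delivered in time), here is a hypothetical acceptance test; the function names, latency budget, and stand-in functions are all made up:

```python
# Hypothetical acceptance test: a replacement part is "good enough" if it reproduces
# the reference outputs AND does so within a latency budget. Interfaces and numbers
# are invented for illustration.
import time

def functionally_equivalent(reference, candidate, test_inputs,
                            max_latency_s=0.001, tol=1e-9):
    for x in test_inputs:
        expected = reference(x)
        start = time.perf_counter()
        actual = candidate(x)
        latency = time.perf_counter() - start
        if abs(actual - expected) > tol:
            return False   # wrong output: not functionally equivalent
        if latency > max_latency_s:
            return False   # right output, but too late for the rest of the system
    return True

original = lambda x: 2 * x + 1          # stand-in for the original structure
replacement = lambda x: x + x + 1       # built differently, same input/output map

print(functionally_equivalent(original, replacement, range(1000)))
```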
 
The gentleman in the Chinese room that you say works too slowly. This is a hypothetical situation, so he can work as quickly as he hypothetically needs to.
 

Then it would be functionally equivalent, and as such, the rest of my brain would not know the difference. But this is a really bizarre question; CRs are usually invoked to make arguments about the nature of understanding. So I still don't know what you're getting at.

Could we just start with the IF network case before going off and speculating about these bizarre situations? Otherwise I'm just going to get confused. It doesn't help if we just randomly jump from hypothetical to hypothetical.

If that specific grandma circuit were replaced by a functionally equivalent IF circuit, my argument is that nothing would change. All detectable differences that could matter are ruled out simply because we preserved the function. So it would still smell like I was at grandma's.

Would you argue that something is different here? If so, could you explain what that would be?
 
Yes. The issue neatly illustrates the difference between simulation and emulation. This wiki page on mind uploading that I just found has more detailed information.

More relevant to your interests is that the function-only models require a third party to reverse engineer all of the information encoded in your brain before building up a new functionally equivalent way of doing the same thing, whereas structural models can dumbly mimic the biological processes faithfully enough that functional equivalence arises naturally.
 
I haven't read every bit of every post in this surreal thread, but it seemed to start off with a confusing ambiguity in the use of the term 'personal identity', then degenerate from there. I probably missed something, but I'm sure someone will explain what.

It seems to me that in a thought experiment, we can allow practical impossibilities to explore ideas. So I'm prepared (for the sake of argument) to accept that subatomically exact duplicates of an individual can be made. To remove problems of contiguity, let's say two duplicates are made and the original is destroyed. The duplicates wake up at the same time in indistinguishable rooms. I'll also accept a 100% deterministic universe, particles without identity, and whatever else is necessary for the duplicates to remain physically indistinguishable.

So we have two indistinguishable people in separate rooms: looking the same, thinking exactly the same thoughts, having the same feelings and the same personalities; their mother couldn't tell them apart, nor could any measurement. They are duplicate instances of the same personality and consciousness running on structurally identical brains. Even when they are told of each other's existence, they each feel they have their own 'identity' (personhood, sense of self), and they are right, even though they are indistinguishable and they both feel their individuality in identical ways; and they are indeed physically separate individuals, as any external observer can tell.

Naturally, as soon as one's environment differs from the other's, their experiences will differ and they will start to become distinguishable, inexact duplicates, indicating that they really were separate personal identities despite originally being indistinguishable.

It seems true that if you extend the indistinguishable environments until you have two indistinguishable universes, you can start asking empty philosophical questions like 'are they really the same person, and what do we mean by that?' - but, of course, that would also apply to everyone and everything else in the two duplicate universes. [My predilection would be that they would still be two separate universes containing indistinguishable duplicates of people].

The only thing I can see that all this tells us is that we need to be very careful with the definition and use of concepts like 'personal identity' in far-fetched thought experiments. In the real world, none of this is possible.

For me, the interesting question that arises from this thought experiment is what happens if you present these two indistinguishable deterministic individuals with each other, face-to-face (via a window-like screen, so the environment remains identical for each)? Do they both continue to exactly mirror each other's actions?
 

Oh, I think we know what'll happen! Nudge nudge, eh, you know what I mean?
 
More relevant to your interests is that the function-only models require a third party to reverse engineer all of the information encoded in your brain before building up a new functionally equivalent way of doing the same thing, whereas structural models can dumbly mimic the biological processes faithfully enough that functional equivalence arises naturally.
There's an article around somewhere describing research where rats learned a task, while the neural pattern evoked by the task was recorded, then the memory of the task was erased by blocking retrieval via the hippocampus, and finally it was artificially restored by external stimulation with the recorded pattern.

That sounds fairly close to such reverse engineering...
 

Unfortunately I can't read the article, so I am not sure of the memory disconnect mechanism, which I would like to see.
http://www.springerlink.com/content/j31u4656868m4482/#section=817377&page=1

This gives a clue:
http://medgadget.com/2011/06/brain-implant-restores-and-enhances-memory-formation.html
In one of the experiments, researchers had rats learn a task, pressing one lever rather than another to receive a reward. Electrical probes recorded the rats’ brain activity between CA3 and CA1, two regions of the hippocampus largely responsible for converting short-term memory into long-term memory. Researchers next used drugs to block the interaction between CA3 and CA1, which caused the rats to remember what they learned for only 5-10 seconds. The device was activated, and the pharmacologically blocked rats were shown to be able to form long-term memories once again, only this time with the help of the neural implant.
Now, without reading the study it is hard to say, because in the protocol I can read they aren't really talking about long-term memory.

They are talking about a short-term processing memory, which would be an analog of short-to-medium-term task memory. The protocol essentially involves a reward for pressing one lever and then pressing the other lever after the rat crosses the cage to do a nose press.

So really I am not sure that the process of 'recreation' actually involves the creation of memories that are stored and returned; rather, it allows the rat to regain the process of short-term retention.

It is not so much an implant of a memory as the return of the ability to retain briefly. (IMO)
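For readers skimming the article, here is a very rough cartoon of the record/block/replay protocol it describes. Everything below (the pattern, the "blocking", the replay rule) is invented for illustration and is not the study's actual method, which models the CA3-to-CA1 transformation rather than copying activity verbatim:

```python
# Cartoon of the record / block / replay protocol described in the quoted article.
# All data and rules here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# 1. Record the CA3->CA1 activity pattern evoked while the rat performs the task.
recorded_pattern = rng.random((16, 50))        # 16 electrodes x 50 time bins

def ca1_relay(ca3_activity, blocked=False):
    """Toy hippocampal stage: normally passes CA3 activity on; the drug 'block' silences it."""
    return np.zeros_like(ca3_activity) if blocked else ca3_activity

# 2. Block CA3->CA1: the downstream pattern is lost, and retention fails.
without_implant = ca1_relay(recorded_pattern, blocked=True)

# 3. Replay: the implant drives CA1 directly with the previously recorded pattern.
with_implant = recorded_pattern.copy()

print("pattern preserved without implant:", np.array_equal(without_implant, recorded_pattern))
print("pattern preserved with implant:   ", np.array_equal(with_implant, recorded_pattern))
```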
 
Yes. The issue neatly illustrates the difference between simulation and emulation. This wiki page on mind uploading that I just found has more detailed information.

More relevant to your interests is that the function-only models require a third party to reverse engineer all of the information encoded in your brain before building up a new functionally equivalent way of doing the same thing, whereas structural models can dumbly mimic the biological processes faithfully enough that functional equivalence arises naturally.
Again, you confuse me. We went from an impossibly fast Englishman answering Chinese questions using a complex rulebook, on top of which we built an elaborate multi-compartmental model of a specific piece of a neural network, to the pragmatic engineering concerns of building a model that is functionally equivalent?

I'm completely lost here. My original point had to do with whether or not a specific structure is necessary in order to create a sense of personal continuity; when I smell apples, pie crust, cinnamon, and pine, somehow "grandma" comes to mind, and I'm not really sure the mechanics behind how that works matter too much for me.

Mind upload seems like an entirely different topic than this.
 
Wait, whut?

No, that was my original point. You posited "information continuity" as a possible convention for keeping a sense of personal continuity, and I've been trying to figure out whether the structure of the brain is necessary by your definition. You keep saying it isn't, then rejecting all examples where it isn't, so at this point I dunno what the hell.
 
Look. One post:
The gentleman in the Chinese room that you say works too slowly. This is a hypothetical situation, so he can work as quickly as he hypothetically needs to.
Another post:
More relevant to your interests is that the function-only models require a third party to reverse engineer all of the information encoded in your brain before building up a new functionally equivalent way of doing the same thing,​
So in exhibit A, you want to make your Chinese Room work so fast that it's functionally equivalent; presumably because you think I would disagree that a hypothetical Chinese Room that is functionally equivalent counts as, well, equivalent.

In exhibit B, you want your objection to be that we cannot in practice do it.

You are inconsistent here. What are you arguing? Are you arguing that we need to have the same structure for theoretical reasons to maintain continuity? Or are you arguing that we need to have the same structure for practical reasons? State your position.

Also, it's not clear to me what you mean exactly by structure; are you claiming that a multi-compartmental neural network model has the same structure as the wetware it is modeling? I don't think it does.

Regardless, my position is simple. Only the way the information flows through the system (including the way it interacts with the environment) has to be preserved (and in particular, the kinds of information we care about--I could get into that later). Structure does not have to be preserved. That I can change the structure and make it behave the same way should suffice to argue that it doesn't have to be preserved. Making a part of the structure different, yet functionally equivalent, is a way of demonstrating that specific structure isn't necessary.
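A toy demonstration of that claim, with every stage invented for illustration: swap one stage of a processing pipeline for a structurally different but functionally identical stage, and the downstream stages cannot tell.

```python
# Toy demonstration: one stage of a pipeline is rebuilt with a different structure
# (a lookup table instead of a direct computation); the rest of the system behaves
# identically. All functions are invented for illustration.

def upstream(x):                 # whatever feeds the stage in question
    return 3 * x

def original_stage(x):           # "original structure": computes directly
    return (x * x) % 7

def replacement_stage(x):        # different structure: precomputed lookup table
    table = {i: (i * i) % 7 for i in range(1000)}
    return table[x]

def downstream(x):               # the rest of the system
    return x + 10

def whole_system(x, stage):
    return downstream(stage(upstream(x)))

inputs = range(100)
same = all(whole_system(x, original_stage) == whole_system(x, replacement_stage)
           for x in inputs)
print("downstream behaviour identical after the swap:", same)
```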
 
I'm getting a bit tired of your arguments jumping all over the place. Is a Chinese room of your brain "functionally equivalent," in whatever manner satisfies you, or not?
 
What a silly question.
Beelzebuddy said:
However, a Chinese room implementation of your brain would also be functionally equivalent in the same manner.
My response.
The device replacing the structure has to stimulate the rest of the brain in a timely fashion.​
You:
The gentleman in the Chinese room that you say works too slowly. This is a hypothetical situation, so he can work as quickly as he hypothetically needs to.
Me:
Then it would be functionally equivalent, and as such, the rest of my brain would not know the difference.

So the question you just asked me was already answered multiple posts ago.

If you want me to be explicit, I'll be explicit, again. Yes, if the CR guy is impossibly fast and is connected in the right way to the structures of the brain such that he is functionally equivalent, then he is functionally equivalent.

Presumably, this is supposed to be a problem, for some reason, for my view that a particular structure is not required. I hope you're not presuming that it's a problem for my view because we cannot in practice have a Chinese Room guy work that fast.

But this is precisely the nature of the problem you want to raise with the IF--that we cannot in practice build one that is functionally equivalent to a neural network. Unfortunately, I contended many, many posts ago that we cannot build a machine functionally equivalent to a brain--because of quantum mechanical thingies, remember?

The problem I'm having with this discussion is that you are not presenting any views. I've yet to hear whether or not you think your multi-compartmental model has the same structure as the original network, for example. I say it doesn't, you say... well, you never did say.

Now, yes, I still have my position that informational continuity is important to personal continuity, and that structure doesn't matter. What objections do you have? Please phrase it in the form of a clear specific objection, and not another example I have no idea how to relate to.

For example, explain why you think structure is necessary.
 
The device replacing the structure has to stimulate the rest of the brain in a timely fashion.
I didn't specify or mean to imply that this was only some part of the brain. It's the whole thing; I don't know where this "part of the brain" thing keeps coming from. If that doesn't change your answer, go ahead and disregard this.

I hope you're not presuming that it's a problem for my view because we cannot in practice have a Chinese Room guy work that fast.
That was your objection, not mine.
The device replacing the structure has to stimulate the rest of the brain in a timely fashion. If I feel like I'm at grandma's house 40 years later, that's no good. The functions of the entire system are not equivalent. We can't slow down reality outside the system, so to be equivalent, we must meet timing constraints.

The problem I'm having with this discussion is that you are not presenting any views. I've yet to hear whether or not you think your multi-compartmental model has the same structure as the original network, for example. I say it doesn't, you say... well, you never did say.
I'm pretty sure I did:
[In a multi-compartment model] the geometry and composition of the original neuron in this case is paramount, because accurately modeling such things is what gives rise to the right behavior. This is function from structure.

Now, yes, I still have my position that informational continuity is important to personal continuity, and that structure doesn't matter. What objections do you have? Please phrase it in the form of a clear specific objection, and not another example I have no idea how to relate to.
Divorced from structure, there is no way to get a functionally equivalent model without the kind of information extraction you disqualified here. However, simulating the structure sidesteps the issue by allowing you to only care about how the brain works, not what that working means.
 
I didn't specify or mean to imply that this was only some part of the brain. It's the whole thing; I don't know where this "part of the brain" thing keeps coming from.
It comes from me. You see, in order to establish that you're wrong about structure being relevant, all I have to do is argue that a brain that is not equivalent in structure to the brain you would normally say has personal continuity, still has personal continuity.

If I just establish that, I'm done.

Now, if even a tiny part of that brain, one that nevertheless plays a role in personal continuity, can be replaced by a different structure, then I have demonstrated my point.

The criterion is: I walk into a room--I smell apples, cinnamon, pie crust, and pine--and I feel like I'm at grandma's. Why? Because of an association I had as a kid at grandma's house. This is a completely subjective kind of thing--it applies to me, and not you, so we're not merely talking about "smelling things". That's why this example was brought up, remember?

So, if the structure of my brain changes, then ipso facto it's not the same structure. But if I still feel like I'm at grandma's, then at least my sense of personal continuity remains the same. Neither you, nor I, can tell the difference.

I'm not going to try to argue about replacing the entire brain. Different kinds of things may have different kinds of arguments, and they may not actually relate to why I think you're wrong. But I think this part is both applicable and demonstrates a flaw in your view. It meets all of the burden it has to meet to demonstrate that flaw.
That was your objection, not mine.
But it was your example, and I have no idea how you intended to make an engineering feasibility argument by introducing it.
I'm pretty sure I did:
[In a multi-compartment model] the geometry and composition of the original neuron in this case is paramount, because accurately modeling such things is what gives rise to the right behavior. This is function from structure.
Here's where we have problems. You think you say something, but you never explicitly say it. If by the above this is what you mean:
A multi-compartmental model is structurally equivalent to the neural networks it is modeling.​
...then I know you are making a specific claim. But because you do not make the claim explicitly, and instead assume I will infer that's what you meant, I cannot address it.

However, assuming this is what you meant, you're wrong. A multi-compartmental model will likely have a different structure than a neuron. It will also likely not be functionally equivalent.

If you don't want to be frustrated when talking to me, make explicit claims. All I can get from your quote is that a structure causes a function, which I don't have much objection to. If you meant to imply that multi-compartmental models were equivalent to the neurons they model, then say so.
Divorced from structure, there is no way to get a functionally equivalent model without the kind of information extraction you disqualified here.
First off, that post is about informational continuity, not functional equivalence. Regarding functional equivalence, there's no way to get a functionally equivalent model, period. So the point is moot.

This is why I started out simply comparing computers to other computers. What should count here is simply whether the interactions are "good enough" for us to consider. And that's fuzzier. But unless you establish why you think an MCM is acceptable and an IF is not, we're not going to make much progress here at all.

But regarding informational continuity, I can certainly exploit other structures besides neural ones--not just theoretically, but in practice. If I write down my password, and use that slip of paper as an aid the first few times, I've used something entirely different than the brain structure of my memory. But it's still genuinely my password; and if I claim to know it was the password I came up with, the fact that I used a sheet of paper as temporary memory space still counts towards a valid causal chain. At a smaller scale, our brains actually use "cheats" like this constantly at a level we're not usually conscious of (an example of which is how it exploits the fact that things don't tend to change much to mimic our sense of a complete visual field).
However, simulating the structure sidesteps the issue by allowing you to only care about how the brain works, not what that working means.
That's a different topic altogether. Meaning comes from the way we interact with the environment. In particular, we act as agencies; as such, we are capable of recognizing patterns in the environment, initiating actions, remembering the effects of initiating actions, using those remembered effects to instantiate goal-based plans, observing the effects of carrying out those plans in order to develop a sense of asserted control (or lack thereof), and having particular sets of drives that tune our interests in meeting particular goals.

The meaning comes from the way these kinds of interactions play out. I am driven to interact with people--interact, volitiously, as an agency. I carve out a concept of my own personhood (which merges into the sense of personal continuity). I become socialized. I learn particular "social habits"--purposes such as that cups are "things to drink out of". I know this not only as book knowledge, but as a kind of applied knowledge--I know not only what a cup "is for", but how to recognize a cup, how to use that recognition to reach out for the cup and touch it, and how to drink out of it; all because of the flow of information throughout this entire set of interactions.
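A minimal, purely illustrative sketch of the interaction loop described above (recognize the state, act, remember the effect of the action, and reuse remembered effects to pursue a goal); the "environment", the action set, and the goal are all invented:

```python
# Minimal illustrative agent loop: perceive, act, remember the effect, and reuse
# remembered effects to pick actions toward a goal. Not anyone's actual model.
import random

class Agent:
    def __init__(self, actions, goal):
        self.actions = actions
        self.goal = goal
        self.remembered_effects = {}      # action -> how much it helped last time

    def choose_action(self, state):
        helpful = [a for a, gain in self.remembered_effects.items() if gain > 0]
        return random.choice(helpful) if helpful else random.choice(self.actions)

    def step(self, state, environment):
        action = self.choose_action(state)
        new_state = environment(state, action)
        # remember the observed effect: did the action move us toward the goal?
        self.remembered_effects[action] = (abs(self.goal - state)
                                           - abs(self.goal - new_state))
        return new_state

def environment(state, action):
    return state + action                 # trivial world: actions shift the state

agent = Agent(actions=[-1, +1], goal=10)
state = 0
for _ in range(50):
    state = agent.step(state, environment)
print("state after 50 interactions:", state)
```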
 
Hello, I'm very happy to join your forum. :)

I'd like to share some thoughts on the hypothetical situation in the original topic.
We assume that a teleporting device exists, which can instantly make a complete copy of a human being down to the quantum level and we assume that the copy becomes alive instantly. I assume that what we call "personality" is the continuous, uninterrupted process happening in one's brain and body and all the memories somehow stored in the brain and body.
If we somehow manage to make a perfect copy of someone, it will be indistinguishable from the original in all respects, but only when viewed from the outside, from the point of view of external observers. The "copy" will have the same memories as the "original", and each of them could swear that he was the "original". But they will be TWO separate human beings with separate (although identical) personalities (i.e. processes happening in their bodies) and separate experiences. Assuming that the "copy" doesn't know it is the result of a copying process, but the "original" knows he was copied, only the "original" will know what happened and that there is another person identical to him.

The machine can be programmed to kill the original when done with the copying process. In this case, the original will die; he will experience death, and he and his particular personality (i.e. the process in his body) will cease to exist. This will be very real death, the kind every one of us will experience some day. The "original" will not magically continue to live or "wake up" in the "copy"; he will just be gone.

But from the point of view of an external observer nothing will change - the "copy" is identical and no one can tell that he was copied. The "copy" himself won't be able to tell that he is a copy; from his point of view nothing has changed, and he has the memories of his entire life as if he actually lived it, although he in fact began to exist just a moment ago. He will remember stepping into the device in the US and the next moment stepping out of a device in Bulgaria. The "copy" will experience teleportation, but the "original" will experience only death.
In short: Suppose you are in the US and I am here in Bulgaria. I want you teleported here. You step into the teleportation device, you hear "click", and in my device here a perfect you appears, living and breathing. As far as I'm concerned you are here. In fact the real, original you is still in the US. Then the device in the US kills the "original" you. You actually die, you experience your death, you cease to exist. You don't experience automagical "transfer" to Bulgaria, i.e. at the moment of your death you don't start experiencing being in Bulgaria. It's just over for you. But the "you" everyone else perceives is still living - here in Bulgaria. No one, not even the copied "you", will be able to tell that he is not the same being that was in the US, or that this being has died. But still a human being is dead.
It raises an ethical question - is it OK to end a life if no one will be able to tell what happened? As far as everybody is concerned you are alive and well in Bulgaria. But still a human being suffered very real death. There is no way anyone can prove by any means that the "copy" is not the "original". If for example the "original" body is completely incinerated then legally speaking no one is killed, because there is no body and the person who is supposedly killed is alive and well.
So if such a device ever existed I would not agree to step inside, because if I did I would die. But I might agree for YOU to step inside (in fact I won't), because as far as I'm concerned you will not die :)
 