
Resolution of Transporter Problem

FWIW: I think it is a great thought experiment because it is capable of getting people to question what they think of as self. It was a big reason for me to think long and hard on what I thought of as self.

Exactly. I found myself driven to really ask just what I believed was dying in the process of teletransportation. I found this liberating, and now I object to people running down the thought experiment when it seems clear that actually they aren't even prepared to let themselves really go there.

Nick
 
So whilst I (and I assume Merc) would still maintain that a person is being killed, I acknowledge that how that would be viewed by society may evolve into something that doesn't consider the "teleport death" the same way as the death that results from stabbing someone in the heart.

I think it's important to make a distinction here between killing "a person" and killing "someone." In probably every other scenario involving killing that humans have faced, they might be considered one and the same, but here there is a distinction. "One" is a process that can be recreated, not an object.

Thus I would consider the myriad emotionally-manipulative sideshows that Darat, Merc, and some others invariably come up with when faced with the Teletransporter to actually merely be demonstrative of their lack of self-awareness. I appreciate that this could be seen as an antagonistic statement, and that it will no doubt produce an unconsciously driven reaction, but that's how it is. It is needed imo to appreciate that the Teletransporter calls for some fairly major reworking of our moral perspective.

Nick
 
Exactly. I found myself driven to really ask just what I believed was dying in the process of teletransportation. I found this liberating, and now I object to people running down the thought experiment when it seems clear that actually they aren't even prepared to let themselves really go there.

Nick
Your words and thoughts. Not mine.

Personally I don't care if anyone else is interested in the thought experiment. I used to be but it seems a fruitless enterprise to get people to care about things they don't care about. If Merc finds the hypo meaningless then that is his loss. It's not as if he doesn't have a solid enough grasp of philosophy though.

Bear in mind the old saw. You can lead a horse to water but you can't make him drink. And also bear in mind the corollary, you can lead a horticulture but you can't make her think.
 
Are you Interesting Ian? Why on earth would interacting with 57 (and only 57, if I understand what you are saying here) magically make 135 hear you, from his/her shower at a completely different set of time-space coordinates? If you are talking to all of the duplicates at once, there would need to be many duplicates of you as well, and you are engaging in sloppy seconds with number 135.

No, because of the scenario. I specifically said that all copies would be magically kept in sync. And by "magically" I don't mean "in a way that isn't possible in reality," I mean "in any number of ways that would be very involved so I don't feel like going into detail."

Thus, in the scenario I gave, when a single individual interacts with any of the copies, all the other copies behave exactly the same. If I ask 57 a question, both 57 and 135 answer me, even though I am not standing in front of 135, because 135 thinks I am standing in front of it. And if I were to move to 135, 57 would be none the wiser because 57 is thinking the exact same thoughts as 135.

Only if the information is substrate-free. Which is why the assumptions you bring to the problem determine your logical conclusion.

Why does it need to be substrate free? All that needs to happen is for multiple substrates to carry the same information. Or is that what you mean by substrate free? If so, I don't see why that is such a wacky idea -- the story of a book would be "substrate free" by that definition.

Yes, again, you are assuming substrate-free information, and you are not following your example through.

I don't think you understood the example. All the copies are forcibly kept in sync.

So what happens, for example, if all copies of a work of Shakespeare are forcibly kept in sync? When a page of one degrades, that same page degrades in an identical way in all other copies. When a page is torn, or marked on, or whatever, the same thing happens to all of them. When the cover is scuffed, all covers are scuffed. You get the idea.

In what ways are the copies different than the original? By definition, only in world position. My question to you is whether two copies of a person are really different people given that they differ only in world position and nothing else.
 
Merc's objection is not that you can't create duplicates of someone, just that every time you do, by how we usually define a person, you are creating a new person. Sure, they will all say that they are Darat, and they may in fact be completely indistinguishable from the first Darat, but what you have is a group of distinct and separate Darats; they will be no more connected to one another than you and me. (Albeit it is very likely that, given a similar input from their environment, they will all respond in a very similar manner.)

However, to kill any one of those is killing a person, no matter that there will still be Darats left in the universe.

You misread my example as well. All the copies are forcibly kept in sync.

So it seems to me that killing one of the copies (and by kill I mean instantaneously remove, without affecting the other copies) is simply removing one of the redundant substrates on which the consciousness resides. Since it has other substrates on which to continue, it is no big deal. Right?
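
(A minimal sketch of the "redundant substrates" idea, in Python -- the names `SyncedPerson`, `interact`, and `remove` are purely illustrative, not anything anyone in the thread proposed. Several replicas carry identical state, every interaction is mirrored to all of them, and deleting one leaves the surviving information unchanged.)

```python
# Purely illustrative: replicas carry identical state ("the information"),
# every interaction is mirrored to all of them, and removing one replica
# leaves the surviving information untouched.

class SyncedPerson:
    def __init__(self, state, n_copies):
        # Copies differ only in "world position" (here, just an index).
        self.replicas = [{"position": i, "state": dict(state)}
                         for i in range(n_copies)]

    def interact(self, key, value):
        # "Forcibly kept in sync": the same change is applied to every copy.
        for replica in self.replicas:
            replica["state"][key] = value

    def remove(self, position):
        # "Killing" one copy = dropping one redundant substrate.
        self.replicas = [r for r in self.replicas if r["position"] != position]

    def information(self):
        # The information content, ignoring world position.
        return self.replicas[0]["state"] if self.replicas else None


darat = SyncedPerson({"memory": "childhood"}, n_copies=3)
darat.interact("last_question", "what card did you see?")  # 57 and 135 both "hear" it
before = dict(darat.information())
darat.remove(1)                                            # destroy one copy
assert darat.information() == before                       # nothing is lost
```

Of course this bookkeeping only makes sense if you grant the "self is information" assumption from the OP; it just makes that assumption explicit.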
 
Try this idea.

Right now I put you (cyborg) into suspended animation; I then duplicate you and create new-cyborg. I wake you up (cyborg) and show you videos of new-cyborg playing with your children, having a meal with your wife, and so on.

You'll just shrug your shoulders and say "Fair enough, might as well top myself since 'I'm' still playing with my kids and enjoying life with my wife?"

Why would he do that?

Why wouldn't he say "hmm, this is pretty cool, my wife and kids still have a father and I can go do some interesting stuff with my life that I wouldn't have been able to with only a single me."
 
I must admit that my certainty over the position I had previously taken in the "teleporter argument" has been shaken by some of these posts. I have put off responding until now because I really needed some time to think about my stance before opening my big mouth again.

Experience.

*I* experience my life.

I am definitely a behavioral determinist. I know that we cannot fully comprehend, or calculate, the number of variables that go into a human being's decision-making process (choices/free will). However, it is my opinion that these variables do exist, and that choice is really a massively complex matter of cause and effect, the complexity of which creates the illusion of free will.

This does not mean that my very perception, my self-awareness, my experiencing life, is an illusion.

Allow me to make a simple analogy. Most people who are not determinists in regard to human behavior feel that life is like a ride in a car, and they are in the driver's seat, in control of the car. I feel as if I have been fooled, and the car is really driving itself. I am just along for the ride. While I understand that my appreciation of the scenery is just as deterministic as the route I travel, I feel that *I* still do get to subjectively enjoy the drive.

This is why I would be scared to get into that teleporter!

I feel like *my* life experience would be snuffed out upon my destruction at pad A.

I feel like my consciousness would not "jump" to the new me. I feel that the man who materializes at pad B would have a new consciousness; obviously his life would still be deeply affected by my/his past, because that is all a part of the variable pool that goes into his decision making, but I feel that the decision *maker*, the one who is experiencing that drive, would be a new (not different, just new) experiencer of life.

Just as a thought experiment... Imagine that instead of destroying the man on pad A, you put him in cryostasis instead (it's the future after all!). The very nanosecond that you freeze the man on pad A, you create the copy of his data at pad B. You then show the copy a playing card. After showing him the card, you destroy him and simultaneously unfreeze the man at pad A.

Does he know what card was shown at pad B?

If not, then I would say that I am still too scared to ride the teleporter. Because I would fear that the man created on pad B would simply carry on where I left off, instead of actually allowing me to experience the rest of my/his life.
 
I used to think along these lines, but then I thought of the following:

You could invent a transporter that didn't make a copy but instead slowly "moved" you from one location to another, while replacing any parts it moved with a simulated part at the other end. So the process starts with a whole person at the source and a whole simulated person at the destination, and both are in sync -- if the person at the source does anything, the simulation at the destination mirrors it.

Then, as the process goes on, individual neurons or even molecules are slowly swapped out at either end. And since each neuron or even molecule is surrounded by either natural or simulated neighbors -- which are always in sync between the source and destination -- it will behave normally.

If you would agree that a teletransporter like this would preserve your self, then we can move on.

This exercise (if you agree) illustrates that what is important is a continuity of information -- the information at the destination must match exactly the information at the source. By going slowly we can ensure that the person at the destination is actually the same person as the one at the source -- by the end of the process the source is a complete simulation of the real material at the destination and we can simply shut it off like any other hologram or whatever.

But what happens if you decrease the time interval over which the process takes place?

That is what really got me thinking -- if you decrease the interval to infinitesimal, then there is no chance of a discontinuity of information regardless of what happens at the source or destination.

What happens when you decrease the interval to infinitesimal? You are simply freezing the information so neither source nor destination can change before the process is complete.

But you don't need to decrease the interval! You could also simply freeze the information and then take as long as you want to transport it. The effect would be the same -- no discontinuity.

So if you could freeze a person in some magical stasis field, copy them fully to another location, and then destroy the original before it was taken out of stasis, by all logical and mathematical accounts the copy would be the original. Of course, that is assuming consciousness is a physical process.
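
(A toy sketch of the slow-replacement argument, assuming a person can be modelled as a finite tuple of parts -- the function `transport` and the part names are made up for illustration. At every step the assembled information, read from whichever end currently holds each part, is identical to the original, which is the "no discontinuity" claim above.)

```python
# Toy model of the slow-replacement transporter: a "person" is a finite tuple
# of parts. One part at a time is moved from the source pad to the destination
# pad while the two ends stay in sync, so the assembled information never
# changes at any intermediate step.

def transport(parts):
    source = list(parts)                # real parts at pad A
    destination = [None] * len(parts)   # simulated (empty) slots at pad B
    snapshots = []

    for i in range(len(parts)):
        destination[i] = source[i]      # move one part across
        source[i] = None                # the source now merely simulates it
        # Read the person from whichever end currently holds each part.
        assembled = tuple(d if d is not None else s
                          for s, d in zip(source, destination))
        snapshots.append(assembled)

    return tuple(destination), snapshots


original = ("memories", "personality", "skills")
moved, snapshots = transport(original)

assert all(step == original for step in snapshots)  # no discontinuity at any step
assert moved == original                            # the destination holds the same person
```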
 
The idea of a slow replacement scenario is what has shaken my ideas about this so badly in the first place. Not really like the one you described however.

I would think that in a scenario like your own, where I am slowly replaced by artificial parts... My role as an "experiencer" of life would network in with those artificial parts, and I would still be experiencing life as the artificial me when my old body was finally finished moving to the new location.

I don't see how killing the artificial me at that point would allow me to continue on experiencing the life of my old body at the other end of the teleporter.

Am I too hung up on this *experiencer* of life bit? I feel like there is something wrong or unknowable about this entire scenario, but I really can't pin down what.

I feel sort of like I *shouldn't* be scared to get into the teleporter... But I am.
 
You should read the OP and see what you think. Reading what you write, it sounds like you are still hung up on the physical aspect of things. I think it is critical to understand exactly what D2 really is.
 
No, because of the scenario. I specifically said that all copies would be magically kept in sync. And by "magically" I don't mean "in a way that isn't possible in reality," I mean "in any number of ways that would be very involved so I don't feel like going into detail."

Thus, in the scenario I gave, when a single individual interacts with any of the copies, all the other copies behave exactly the same. If I ask 57 a question, both 57 and 135 answer me, even though I am not standing in front of 135, because 135 thinks I am standing in front of it. And if I were to move to 135, 57 would be none the wiser because 57 is thinking the exact same thoughts as 135.
This reminds me a bit of the "intelligent design" proponents who claim that the designer need not be god... any omnipotent, omniscient entity would do.

Your "kept in sync" is functionally identical to Interesting Ian's shared single awareness. The "any number of ways that would be very involved" is a measure of the lengths you are going to assume your conclusions without appearing to.
Why does it need to be substrate free? All that needs to happen is for multiple substrates to carry the same information. Or is that what you mean by substrate free? If so, I don't see why that is such a wacky idea -- the story of a book would be "substrate free" by that definition.
The "information" which makes up your consciousness may (unless you begin by assuming that it is all physically represented within you) include elements of the environment you are in. We certainly act differently in different situations; our remembering is facilitated by PDAs or phone books; the stimuli in our environment exert tremendous control over our public and private behavior. Our conscious awareness may be (we can assume it is; we can assume it is not, or we can await data) quite literally dependent on the physical environment we are in as well as the physical conditions of our body. (in the prior thread on this, a simple question was asked: if we replicate a thrown ball in mid-flight, are we replicating the momentum as well? Are we replicating the distance from the ground?) Your assumption that the replicants can be forcibly kept in synch is what I tried to address by requiring the environment to be copied as well. If you think you can do it without the environment, that path reflects assumptions you have made.
I don't think you understood the example. All the copies are forcibly kept in sync.
In my scenario, there is a mechanism for that. It may not be necessary, but it is sufficient. It does not assume one view or another.
So what happens, for example, if all copies of a work of Shakespeare are forcibly kept in sync? When a page of one degrades, that same page degrades in an identical way in all other copies. When a page is torn, or marked on, or whatever, the same thing happens to all of them. When the cover is scuffed, all covers are scuffed. You get the idea.
Don't tell the libertarians--you will be in trouble for scuffing my copy.
In what ways are the copies different than the original? By definition, only in world position. My question to you is whether two copies of a person are really different people given that they differ only in world position and nothing else.
Yes. Of course. As Westprog linked above. You seem to have no problem understanding that if you kill off n-1 of them at random, the last one is a real person. Clearly, that means each of them is.
 
The "any number of ways that would be very involved" is a measure of the lengths you are going to assume your conclusions without appearing to.

The assumption is that consciousness is a finite amount of information.

The conclusion is that a teletransporter doesn't kill anyone.

If you think the assumption is the same as the conclusion, you are right -- that is the whole point of logical equivalence. 2 + 2 == 4 is equivalent to saying 1 + 1 == 2. But I don't think you would accuse someone of "assuming the conclusion" when they say "if you assume 1 + 1 == 2, then it turns out 2 + 2 == 4 as well."

Or would you?

And what exactly are you attacking, the assumption or the equivalence? If you are attacking the assumption ... then nothing, because it is clearly labeled as an assumption in the OP. If you are attacking the equivalence you could at least provide some arguments showing why it is wrong.

The "information" which makes up your consciousness may (unless you begin by assuming that it is all physically represented within you) include elements of the environment you are in. We certainly act differently in different situations; our remembering is facilitated by PDAs or phone books; the stimuli in our environment exert tremendous control over our public and private behavior. Our conscious awareness may be (we can assume it is; we can assume it is not, or we can await data) quite literally dependent on the physical environment we are in as well as the physical conditions of our body. (in the prior thread on this, a simple question was asked: if we replicate a thrown ball in mid-flight, are we replicating the momentum as well? Are we replicating the distance from the ground?) Your assumption that the replicants can be forcibly kept in synch is what I tried to address by requiring the environment to be copied as well. If you think you can do it without the environment, that path reflects assumptions you have made..

The environment is irrelevant. What is relevant is the effect of the environment. Whether or not an environmental effect can be simulated or need be real is a question of implementation and doesn't change the outcome of the experiment.

Or are you claiming that a human mind would somehow "just know" whether the mountain vista in front of it was simulated or real?

EDIT: It almost seems to me that you are arguing that if the copies were kept in sync forcibly then they would in fact not be true separate copies. Well, that is what I am trying to show!

In my scenario, there is a mechanism for that. It may not be necessary, but it is sufficient. It does not assume one view or another.

No there isn't. You clearly asked how 135 would know of the question 57 was asked. If your 135 and 57 were in sync, each of them would know everything any of them know.

You seem to have no problem understanding that if you kill off n-1 of them at random, the last one is a real person. Clearly, that means each of them is.

You seem to have a problem understanding that if you burn n-1 copies of a work of Shakespeare, the last remaining instance of the work is very special simply because it is the last remaining instance of the work.
 
Last edited:
I have been over this before - in the strictest sense 'I' am not the 'I' of a second ago and I'm not the 'I' of a second to come. The 'I' is constantly 'dying'. Clearly we are not the 'same' person - but then I'm clearly not the same person I was a second ago.

So accepting that to talk about 'I' meaningfully you have to talk about 'paths' - we share paths but not identical ones and they're continually diverging. I doubt I'd kill myself but then I wouldn't be so bothered by the concept of the existence of one of my potential paths continuing in spite of my demise.

When it comes to the sort of infinitesimal difference we're talking about in the transporter scenario I'd hardly be bothered at all.

Right, so you aren't saying multiple "I"s are not created, and you agree that each "I" that is destroyed is a person being killed, just that you don't think it's a big deal. That's why I said earlier:

So whilst I (and I assume Merc) would still maintain that a person is being killed, I acknowledge that how that would be viewed by society may evolve into something that doesn't consider the "teleport death" the same way as the death that results from stabbing someone in the heart.
 
The environment is irrelevant. What is relevant is the effect of the environment. Whether or not an environmental effect can be simulated or need be real is a question of implementation and doesn't change the outcome of the experiment.

Or are you claiming that a human mind would somehow "just know" whether the mountain vista in front of it was simulated or real?
Assumes a human "mind"; when I said sufficient but perhaps not necessary, I simply am pointing out that the way we already know that we see mountain vistas is by having said stimulus in front of us. If you want to assume that you can control them some other way, you are assuming things about "minds" that divorce them from the environment. This is why you can conclude that "the environment is irrelevant".
EDIT: It almost seems to me that you are arguing that if the copies were kept in sync forcibly then they would in fact not be true separate copies. Well, that is what I am trying to show!
That is not at all what I am saying. Remember, the simple fact of their taking up different time-space coordinates was sufficient for me to say they are separate. Here, I am just attempting to point out some of the additional assumptions inherent in your description.
No there isn't. You clearly asked how 135 would know of the question 57 was asked. If your 135 and 57 were in sync, each of them would know everything any of them know.
And the "keeping in synch" is your handwaving. It presupposes all the elements necessary to conclude that self is information only, independent of substrate or environment. Of course, if we start with your assumptions we end up with your conclusions.
You seem to have a problem understanding that if you burn n-1 copies of a work of Shakespeare, the last remaining instance of the work is very special simply because it is the last remaining instance of the work.
And the second to last was very special because it could have been the last, but for the flip of a coin? In the many-worlds example, of course, any of us could be one of a multitude of Darats, but each might think he was the last (or only) remaining instance of the work.

And are you suggesting that if we have all of Shakespeare's works backed up digitally, there would be no reason other than sentimentality to keep the originals? That the only important part is the information?

Douglas Adams wrote a bit about that, with poems written on leaves, and time travel...
 
Right, so you aren't saying multiple "I"s are not created, and you agree that each "I" that is destroyed is a person being killed, just that you don't think it's a big deal.

I've never said anything different - I really don't understand where all this confusion has arisen from.
 
And are you suggesting that if we have all of Shakespeare's works backed up digitally, there would be no reason other than sentimentality to keep the originals?

There is no other reason other than sentimentality to keep the originals.

You can't argue that it's a rational decision - it's clearly an emotional attachment.
 
Assumes a human "mind"; when I said sufficient but perhaps not necessary, I simply am pointing out that the way we already know that we see mountain vistas is by having said stimulus in front of us. If you want to assume that you can control them some other way, you are assuming things about "minds" that divorce them from the environment. This is why you can conclude that "the environment is irrelevant".

I could be wrong but I believe existing evidence suggests that if you block the input from sensory neurons a person won't know they are standing in front of a mountain vista.

To me this implies that if you tricked sensory neurons into firing the way you want the individual would think they were wherever you wanted them to think they were.

But at any rate this is irrelevant because I am talking about keeping every atom of every neuron in sync (if need be -- I don't think the granularity needs to be that fine, but who knows). So you are right that I am assuming such a thing is possible.

If you don't want to make that assumption then clearly the thought experiment is not for you.

And the "keeping in synch" is your handwaving. It presupposes all the elements necessary to conclude that self is information only, independent of substrate or environment. Of course, if we start with your assumptions we end up with your conclusions.

So what? If you start with the assumption 1 + 1 == 2, you end up with the conclusion 2 + 2 == 4.

If you start with the assumption that the Earth is flat, you end up with the conclusion that sailing to the edge is a bad idea.

Both are logically valid arguments.

It seems like you are really attacking the assumption that self is information only. OK. But that wasn't the point of this thread. The point of this thread was to discuss what the implications are if that assumption is true.

And the second to last was very special because it could have been the last, but for the flip of a coin?

Sure. If what is important is that the work survives in some form, then clearly a 0.5 probability of being the last surviving work (should one of them randomly disappear) makes it much more important than a copy with a 0.001 probability of being the last surviving work but not as important as a copy with a probability of 1.0.
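
(A quick sanity check of that arithmetic, assuming copies are destroyed one at a time in a uniformly random order -- `prob_last_survivor` is an illustrative name, not anything from the thread. Each of n copies then has a 1/n chance of being the last one standing, which gives the 0.5 and roughly 0.001 figures above.)

```python
import random

# Estimate the chance that a particular copy (copy 0) ends up as the last
# survivor when the copies are destroyed one at a time in a random order.
def prob_last_survivor(n_copies, trials=100_000):
    survivals = 0
    for _ in range(trials):
        order = random.sample(range(n_copies), n_copies)  # random destruction order
        # the first n_copies - 1 in the order are destroyed; the last one survives
        survivals += (order[-1] == 0)
    return survivals / trials

print(prob_last_survivor(2))                    # ~0.5   (the coin-flip case)
print(prob_last_survivor(1000, trials=20_000))  # ~0.001 (the long-odds case)
```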

Your argument here is severely flawed because you are ignoring the fact that humans assign dynamic value to things all the time. None of the coins minted in 1500 were that special in 1500. Now the remaining ones are special simply because they are the remaining ones. So are you saying all those coins were just as special as the current survivors when they were minted? If so, why didn't anyone treat them that way?

In the many-worlds example, of course, any of us could be one of a multitude of Darats, but each might think he was the last (or only) remaining instance of the work.

I don't care about any "many worlds" interpretations because in those interpretations neither I nor anyone I care about knows anything of the other copies. In the teletransporter example I am explicitly informed of what happens.

And are you suggesting that if we have all of Shakespeare's works backed up digitally, there would be no reason other than sentimentality to keep the originals? That the only important part is the information?

Absolutely. But why is sentimentality so bad, Mercutio? If we are sentimental because those originals were touched by the great Shakespeare himself, so what? You are sounding like a theist who is deluded into thinking that value must be absolute. Why can't we assign sentimental value to objects?

What you have to realize is that not everyone is as sentimental towards their physical bodies as you are towards the original works of Shakespeare. I, for one, don't give a hoot about this rotting piece of meat my mind resides in. I don't like that I am bald, I don't like my complexion, I don't like my genetic diseases, I don't like a whole heck of a lot. And I certainly don't attach any extra value to it simply on the basis that it has been through 30 years of my life. What I do like is my mind. So if I could keep the mind and swap out the cruddy body for a new one -- ideally, one that I could pick and choose at whim -- I would be very inclined to do so.
 
There is no other reason other than sentimentality to keep the originals.

You can't argue that it's a rational decision - it's clearly an emotional attachment.

There are other related reasons: preserving information that can't yet be encoded digitally, such as the chemical composition of the paper, meaningful scratches and tears by the author, the remains of Shakespeare's dinner or lunch accidentally smeared on the paper.
 
There are other related reasons: preserving information that can't yet be encoded digitally, such as the chemical composition of the paper, meaningful scratches and tears by the author, the remains of Shakespeare's dinner or lunch accidentally smeared on the paper.

None of which have any impact whatsoever on the contents of the work.
 
