
The Star Trek Transporter Enigma

Yes. That's what I just said, I think.



I'm wondering if rocketdodger is assuming something beyond physics as we understand it -- something like Bostrom's simulation hypothesisWP, where we exist inside a computer program (which might have the simultaneous, universal [privileged] access to its simulated agents required to establish mental identity -- though why the simulators would code it so the agents were sharing a single instance of a particular mind rather than instantiating two of the class 'mind' with temporarily identical attributes still doesn't make sense, even from a programming pov).
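The programming aside can be made concrete. In most object systems, two instances with identical attributes compare equal but remain two distinct objects -- equality of description is not identity of instance. A minimal Python sketch (the `Mind` class and its attributes are purely illustrative):

```python
# Two instances of a hypothetical Mind class with identical attributes
# compare equal, but they are still two separate objects in memory.
from dataclasses import dataclass

@dataclass
class Mind:
    memories: tuple
    beliefs: tuple

a = Mind(memories=("beamed up",), beliefs=("I am Riker",))
b = Mind(memories=("beamed up",), beliefs=("I am Riker",))

print(a == b)    # True:  identical attributes (same class description)
print(a is b)    # False: two distinct instances, not one shared object
```

That is the "two of the class 'mind' with temporarily identical attributes" case: sharing a single instance would require `a is b`, which no amount of attribute-copying produces.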

I don't know what the point of it is. He seems to be just saying "Suppose that two bodies shared a single mind. Then would you admit that two bodies shared a single mind. Would you?! Would you?!" Well, yeah, but so what? We're no longer dealing with a thought experiment, we're dealing with the land of make-believe.

That's what I am considering. My point is, without assuming something like the simulation hypothesis, whether two separate systems are identical or not has no meaning within physics beyond the fact they are identical: it does not mean they aren't separate.

And no two systems are ever identical. They might have certain properties that are identical, but some other properties are always different - because if all the properties were the same, they'd be the same system.
 
I don't know what the point of it is. He seems to be just saying "Suppose that two bodies shared a single mind. Then would you admit that two bodies shared a single mind. Would you?! Would you?!" Well, yeah, but so what? We're no longer dealing with a thought experiment, we're dealing with the land of make-believe.

Since "mind" is a set of activities it's a strange phrase: "share a mind". If two people are running with identical gaits at identical speeds do we say they are "sharing a running"? A "run"? A "running-ness"? Sometimes we do in philosophy; but when we do, it's understood that we mean they are sharing a description: that their separate activities share, have, an identical description.

I think the basis of the teleporter confusion is the claim that a mind is equivalent to some algorithm which describes it. They're not. One exists within an active physical system; the other, in a set of symbols to ideally describe it. Except in cases of self-reference, descriptions of descriptors, which minds are not... Words ain't what they talk about. I don't know how else to say it. :boggled:

And no two systems are ever identical. They might have certain properties that are identical, but some other properties are always different - because if all the properties were the same, they'd be the same system.

If you include position in time and space, yes. (That may have been the upshot of the entropy entry you linked to a couple of pages ago.)
 
Since "mind" is a set of activities it's a strange phrase: "share a mind". If two people are running with identical gaits at identical speeds do we say they are "sharing a running"? A "run"? A "running-ness"? Sometimes we do in philosophy; but when we do, it's understood that we mean they are sharing a description: that their separate activities share, have, an identical description.

I think the basis of the teleporter confusion is the claim that a mind is equivalent to some algorithm which describes it. They're not. One exists within an active physical system; the other, in a set of symbols to ideally describe it. Except in cases of self-reference, descriptions of descriptors, which minds are not... Words ain't what they talk about. I don't know how else to say it. :boggled:

I think you may underestimate just how mystical the Strong AI position is.

If you include position in time and space, yes. (That may have been the upshot of the entropy entry you linked to a couple of pages ago.)

The easiest way to tell that something is different to something else is that it's somewhere else. The same thing can't be in two different places at the same time.
 
I don't know what is gained by considering such a concept - since unlike the transporter, we know it to be impossible. Let me make it clear - I've no particular interest in this particular hypothetical. I can't see what it illustrates, beyond the absurdity and impossibility of a "common mind" operating at a distance.

I know that your default position in discussions is to refuse to answer anything, and to refuse to let your position be known about anything, unless you fully understand what the implications of such a response or position will be in the context of the rest of the discussion.

I know that you have learned to do this as a defense mechanism because you have gotten burned by people turning your own words against you so many times in the past.

But in this case, I think you do know the implication of your response -- thus you also refuse to make one.

Why don't you answer my question? Because you know that as soon as you do you won't be able to smugly proclaim that the idea in the OP -- which everyone knows is stupid, btw, that's why nobody is seriously discussing it -- is the end-all be-all of this issue.

You like to be right, so you simply repeat the red herring truisms that nobody is even disputing and refuse to participate in any other way. Honestly, look at your response:
That's what was described in the OP and that's what I'm interested in.

As usual you just retreat rather than trying to actually move forward with a discussion.
 
I think you may underestimate just how mystical the Strong AI position is.

I suspect this extreme version of the teleporter enigma, with its seeming promise of eternal life -- the gospel of "He has risen!" replaced with "Beam me up, Scotty!" -- owes more to so-called Transhumanism, whose assumptions Strong AI proponents such as Douglas Hofstadter are deeply critical of.

The easiest way to tell that something is different to something else is that it's somewhere else. The same thing can't be in two different places at the same time.

Without minimizing the arguments for, which at first glance are quite seductive (plus it looks so easy on Star Trek), that does seem a major obstacle.
 
I know that your default position in discussions is to refuse to answer anything, and to refuse to let your position be known about anything, unless you fully understand what the implications of such a response or position will be in the context of the rest of the discussion.

I know that you have learned to do this as a defense mechanism because you have gotten burned by people turning your own words against you so many times in the past.

But in this case, I think you do know the implication of your response -- thus you also refuse to make one.

Why don't you answer my question? Because you know that as soon as you do you won't be able to smugly proclaim that the idea in the OP -- which everyone knows is stupid, btw, that's why nobody is seriously discussing it -- is the end-all be-all of this issue.

You like to be right, so you simply repeat the red herring truisms that nobody is even disputing and refuse to participate in any other way. Honestly, look at your response:

As usual you just retreat rather than trying to actually move forward with a discussion.

If you actually read my answers, you'd realise that I've answered this several times, in detail, and discussed it far more than it merits. Obviously if you put forward the hypothesis that two separated bodies share a common mind, and then ask if that would lead one to admit that two separated bodies share a common mind, then yes, A implies A. Since A is impossible according to the laws of physics as we understand them, I have no idea what the implication of this is supposed to be. (I assume that it will be along the lines of "Rocketdodger right, everyone else wrong".)
 
How are you defining "mind" here? As activity (or potential activity -- acquired skills, habits, behaviors in memory)? Like happiness (acting happy), or running? Treating an activity as separate from the things whose state it describes, i.e., which are doing it, is reification (fallacy)WP: a fallacy, that is, unless "mind" were somehow a special class of activity.

No, it is simpler than that. I am talking about something like taking all of the neurons in a person, putting the aggregate in a vat, and using an as-yet undefined magical technology to "connect" the neurons to the body they were removed from, such that everything functions transparently from the perspective of the neurons and the body -- third-party observers obviously see a difference.

Then, make a copy of the body and hook it up to the same neurons, such that each neuron gets information from two bodies instead of just one, and sends information to two bodies instead of just one.

In your one-question-two-replies scenario, I'm also unclear how the question is input to the second respondent across the universe. Are you assuming a universal program that recognizes the two systems have identical "minds"; that is, are acting the same way, but is unable to discriminate between them on the basis of position, so that asking either a question will elicit the same reply; prick one with a pin and both say "ouch"? Does it matter when the second says "ouch"? (That is, without a universal program to link identical minds, according to special relativity, there is always a necessary time-lag for the transmission of information between them. Whether they are separated by the diameter of the universe, which is estimated at close to 100 billion light-years, or only by the diameter of the solar system, a cricket pitch, a grapefruit, the head of a pin, is it possible to speak of them as having the same mind, without artificially privileging some intermediate reference frame for making that determination? Where does that intermediate 'universal' reference frame come from? Without it, there seems no way of even establishing identity, let alone propagating whatever consequences that is supposed to have for the mind of those identified as such.)
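As a rough numerical aside to the time-lag point: the minimum one-way signal delay between the two systems is just distance divided by c, whatever the separation. A quick sketch (the distance figures are rough, illustrative values):

```python
# Back-of-envelope check on the relativistic time-lag point:
# any signal between the two "identical" systems is limited by c.
C = 299_792_458            # speed of light, m/s

def light_lag_seconds(distance_m: float) -> float:
    """Minimum one-way signal delay across a given separation."""
    return distance_m / C

# rough, illustrative separations in metres
separations = [
    ("solar system (Neptune's orbit)", 9e12),
    ("cricket pitch", 20.12),
    ("head of a pin", 2e-3),
]

for name, d in separations:
    print(f"{name}: {light_lag_seconds(d):.3e} s")
```

The lag never reaches zero, which is the point: any "shared mind" spanning the separation needs some frame in which the two sides are synchronized, and relativity does not hand us a privileged one.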


All of that is something that one would need to consider if this was a serious possibility. It isn't. It is only an exercise to get westprog to see that he/she has a misunderstanding about the nature of our relationship with the universe.

Have you read Hume? He spends a great deal of time explaining how everything we understand -- everything -- is based on the repeated observation of events. If we observe event X, then event Y, we might think X is related to Y, or that X causes Y, etc, and if we know of other events or other facts we learned by observing events in the past, or modeling such events in our mind, we might think X actually has nothing to do with Y, and so on and so forth. But at a fundamental level, that is all there is -- that is all there can be from the perspective of a mind (any mind) -- observations of events. We call sequences of events behavior, and that is all there is -- behavior.

Every single attribute of the physical world, and anything related to it, boils down to behavior. And as we know, behavior is always relative. That is the whole point of relativity -- everything is behavior, and behavior is always relative to the observer, thus everything is relative to the observer.

Yet westprog thinks that for some (unexplained) reason that the "location" event/observation, or behavior, as observed by a third party, somehow trumps the same observation from a first party -- that it is not relative. In other words, if there was a single mind that was distributed amongst two bodies, the observation of that mind that it is in fact a single mind, is somehow irrelevant in the scheme of things because westprog clearly sees two bodies, and therefore there must be two minds.

The whole point behind all of this was to convince people that this nonsense about "location" being the end-all be-all of behavior is just that -- nonsense. It is no more important and no less important than any other behavior of a physical system, and it is certainly observer relative just like everything else.
 
If you actually read my answers, you'd realise that I've answered this several times, in detail, and discussed it far more than it merits. Obviously if you put forward the hypothesis that two separated bodies share a common mind, and then ask if that would lead one to admit that two separated bodies share a common mind, then yes, A implies A. Since A is impossible according to the laws of physics as we understand them, I have no idea what the implication of this is supposed to be. (I assume that it will be along the lines of "Rocketdodger right, everyone else wrong".)

No, you have not.

You specifically said that we can discern whether two bodies are the same person by pricking one with a pin and seeing which one reacted.

I responded with the question "what if we prick one with a pin and both bodies respond?"

Your only answer thus far has been "it isn't physically possible." Really, is that your only answer?
 
Without minimizing the arguments for, which at first glance are quite seductive (plus it looks so easy on Star Trek), that does seem a major obstacle.

Why?

Can you not have the same story in different locations at the same time?
 
The only thing of concern is whether there is something essential to consciousness that would be "lost" were the substrate to change from one set of particles to another. I say no, there isn't, because there is not a single behavior of the system -- including westprog's precious "location" -- that changes from the system's frame of reference. If you or anyone else can think of one, I would love to hear it. But I haven't heard one yet.


I agree you can recreate a perfect replica of the consciousness and (assuming we can replicate an environment perfectly, quantum unpredictability notwithstanding) have two identical identities. I thought you were arguing that consciousness somehow leaps through the transporter, whereas I think the original consciousness stays put. But actually I am not sure what you are arguing now.


You keep stipulating some link between mind1 and body2, but why? It's a bizarre assumption not indicated by the original problem, and to my mind it is pretty uninteresting. If we assume a magical link that means both bodies share one mind (ignoring issues about how one consciousness would cope with 2 sets of senses and relativity etc) then what? Okay I agree in this (impossible) situation one mind controls 2 bodies, what does this prove?


At this point it's like you're not even wrong.
 
No, you have not.

You specifically said that we can discern whether two bodies are the same person by pricking one with a pin and seeing which one reacted.

I responded with the question "what if we prick one with a pin and both bodies respond?"

Your only answer thus far has been "it isn't physically possible." Really, is that your only answer?

I really don't understand the point. I have no problem saying that a person who's been given heart, lung, kidney and liver transplants is still one person, with one mind. I might even consider that some clever mechanism might allow a single brain to control multiple bodies. Such a situation poses no particularly interesting dilemmas.

However, that's not the situation we are dealing with. We're considering two entirely separate people, where a stimulus to one will not result in a reaction from the other. The multi-body single-mind example simply shows us what we aren't dealing with.
 
I agree you can recreate a perfect replica of the consciousness and (assuming we can replicate an environment perfectly, quantum unpredictably notwithstanding) have two identical identities. I thought you were arguing that consciousness somehow leaps through the transporter, whereas I think the original consciousness stays put. But actually I am not sure what you are arguing now.

If the aim is just to confuse things, then he's doing a good job.

You keep stipulating some link between mind1 and body2, but why? It's a bizarre assumption not indicated by the original problem, and to my mind it is pretty uninteresting. If we assume a magical link that means both bodies share one mind (ignoring issues about how one consciousness would cope with 2 sets of senses and relativity etc) then what? Okay I agree in this (impossible) situation one mind controls 2 bodies, what does this prove?


At this point it's like you're not even wrong.

We were saying - of course they're two different people, they don't react to the same stimuli. He then came up with a counterexample of a person with two bodies and a single brain - which would be a single person, because it would pass the test of two bodies reacting to a single stimulus. What he doesn't seem to realise is that this either does nothing to rebut the original contention, or else supports it. It's as uninteresting a question as to whether your fingernails are part of you or not.

Basically I said that if they were the same person, they would react together to the same stimulus, but they won't, so they aren't. So he's made up a different scenario where two bodies do share a mind - not realising that this actually implicitly accepts my test as valid.
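The pin-prick test described above can be sketched as a toy program (every class and name here is hypothetical, purely illustrative): two bodies count as "one person" for this test exactly when a stimulus to either produces a reaction in both.

```python
# Toy version of the pin-prick test: bodies that share the same 'brain'
# object react together; bodies with separate brains do not.
class Body:
    def __init__(self, brain=None):
        self.brain = brain if brain is not None else []
        self.brain.append(self)   # register this body on its brain
        self.reacted = False

def prick(body):
    for b in body.brain:          # the stimulus reaches every body on this brain
        b.reacted = True

def same_person(x, y):
    x.reacted = y.reacted = False
    prick(x)                      # stimulate one body only
    return x.reacted and y.reacted

# two separate people, each with its own brain: the test says "different"
p, q = Body(), Body()
print(same_person(p, q))          # False

# the multi-body, single-brain counterexample: the test says "same"
shared = []
r, s = Body(shared), Body(shared)
print(same_person(r, s))          # True
```

Which is the point being made: the counterexample only gets its "same person" verdict by building the shared reaction in, so it accepts the test rather than refuting it.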
 
Your only answer thus far has been "it isn't physically possible." Really, is that your only answer?

It would sound like a pretty definitive answer to me. The problem is that the technology you are talking about (according to the TV shows it appears in) does not "connect" the two entities it creates. It creates two identical but different entities that are quite separate. Will Riker and Tom Riker are not "connected". Neither was "good" Kirk nor "bad" Kirk. So I am not sure where you get the idea that they are.
 
How are you defining "mind" here? As activity (or potential activity -- acquired skills, habits, behaviors in memory)? Like happiness (acting happy), or running? Treating an activity as separate from the things whose state it describes, i.e., which are doing it, is reification (fallacy)WP: a fallacy, that is, unless "mind" were somehow a special class of activity.

No, it is simpler than that. I am talking about something like taking all of the neurons in a person, putting the aggregate in a vat, and using an as-yet undefined magical technology to "connect" the neurons to the body they were removed from, such that everything functions transparently from the perspective of the neurons and the body -- third-party observers obviously see a difference.

Then, make a copy of the body and hook it up to the same neurons, such that each neuron gets information from two bodies instead of just one, and sends information to two bodies instead of just one.

Sounds like the flipside of one of those old b-movie horror flicks: instead of "The Thing with Two Brains!" this is "The Brain with Two Things!" I'm not sure how you would innervate either body into motion and activity, give it a "mind", if it were just a nerveless sac of flesh, a corpse, basically, but I guess we can leave Dr Frankenstein XIII to work out these magical details.



In your one-question-two-replies scenario, I'm also unclear how the question is input to the second respondent across the universe. Are you assuming a universal program that recognizes the two systems have identical "minds"; that is, are acting the same way, but is unable to discriminate between them on the basis of position, so that asking either a question will elicit the same reply; prick one with a pin and both say "ouch"? Does it matter when the second says "ouch"? (That is, without a universal program to link identical minds, according to special relativity, there is always a necessary time-lag for the transmission of information between them. Whether they are separated by the diameter of the universe, which is estimated at close to 100 billion light-years, or only by the diameter of the solar system, a cricket pitch, a grapefruit, the head of a pin, is it possible to speak of them as having the same mind, without artificially privileging some intermediate reference frame for making that determination? Where does that intermediate 'universal' reference frame come from? Without it, there seems no way of even establishing identity, let alone propagating whatever consequences that is supposed to have for the mind of those identified as such.)


All of that is something that one would need to consider if this was a serious possibility. It isn't. It is only an exercise to get westprog to see that he/she has a misunderstanding about the nature of our relationship with the universe.

Have you read Hume? He spends a great deal of time explaining how everything we understand -- everything -- is based on the repeated observation of events. If we observe event X, then event Y, we might think X is related to Y, or that X causes Y, etc, and if we know of other events or other facts we learned by observing events in the past, or modeling such events in our mind, we might think X actually has nothing to do with Y, and so on and so forth. But at a fundamental level, that is all there is -- that is all there can be from the perspective of a mind (any mind) -- observations of events. We call sequences of events behavior, and that is all there is -- behavior.


From my reading, Hume's point is that we never directly observe "causality"; we observe conjunctions of events, and infer causality ("X causes Y" is a kind of epistemic shorthand for "X is always seen to immediately adjacently precede Y"). So we should be careful when speaking of causality to remember that it's a concept we add to our understanding of the world, to help us organize it, it's not something we ever observe directly in the world; and as a concept that we add to the world, it's open to doubt: our understanding of "causality", anything we mean by the epistemic shorthand "X causes Y" beyond "X is always seen to immediately adjacently precede Y", may differ from actual causality (this was especially important in the era he was writing, which still tended to think of "causality" in Aristotelian terms, as teleological, directed towards an end; Hume's strictly empirical notion of causality is the modern scientific one).

Every single attribute of the physical world, and anything related to it, boils down to behavior. And as we know, behavior is always relative. That is the whole point of relativity -- everything is behavior, and behavior is always relative to the observer, thus everything is relative to the observer.

Yet westprog thinks that for some (unexplained) reason that the "location" event/observation, or behavior, as observed by a third party, somehow trumps the same observation from a first party -- that it is not relative. In other words, if there was a single mind that was distributed amongst two bodies, the observation of that mind that it is in fact a single mind, is somehow irrelevant in the scheme of things because westprog clearly sees two bodies, and therefore there must be two minds.

Again, I think this is dangerously close, and perhaps crossing over into the idealist's reification fallacy aforementioned. "Mind" is a set of activities, a description of a given body's behaviors, not a separate single thing that exists prior to any body, which can then be shared among different bodies.

The whole point behind all of this was to convince people that this nonsense about "location" being the end-all be-all of behavior is just that -- nonsense. It is no more important and no less important than any other behavior of a physical system, and it is certainly observer relative just like everything else.

I agree that location is no less important than any other attribute of a physical system. However, it does often seem as if it's being treated as if it were less important, even completely irrelevant, in 'porter supporters' [pro-transporter] arguments.

Without minimizing the arguments for, which at first glance are quite seductive (plus it looks so easy on Star Trek), that does seem a major obstacle.

Why?

Can you not have the same story in different locations at the same time?

Well, we have to be very careful as philosophers to specify exactly what we mean here by "story", making sure we don't fall into the idealist trap of assuming "the story" has some sort of prior, separate existence just because language usage suggests it. A book is a sequence of symbols intended to produce from reader to reader a similar "event" (properly speaking, sequence of mental events). We can assign each separate reading of the book, i.e., each telling of its story, a label: reading1, reading2, reading3, etc. As long as all the readings are from equivalent sequences of symbols, semantically equivalent, that is -- reading it in different translations considered as reading the "same story" -- it's customary to group all these separate readings together -- {reading1, reading2, reading3, ...} -- and to refer to this grouping -- this class description of different readings of semantically equivalent sequences of symbols -- as "the story". So it's the same story wherever it's read, but by that we only mean that it's the same class of event, an event that shares a class description with certain other events, not the same single event.
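The grouping move described above can be sketched directly (all the readers, places, and titles are made-up illustrative data): each reading is a separate event, and "the story" is just the label on the group of events that share a description.

```python
# Separate reading-events grouped by shared description: "the same story"
# names a class of events, not one single event.
from collections import defaultdict

# each tuple: (reader, location, text_read) -- hypothetical data
readings = [
    ("Ann",  "Dublin", "Hamlet"),
    ("Bob",  "Tokyo",  "Hamlet"),
    ("Cara", "Lima",   "Dracula"),
]

stories = defaultdict(list)
for reader, place, text in readings:
    stories[text].append((reader, place))   # group by the shared description

print(stories["Hamlet"])
# distinct events in distinct places, collected under one class description
```

Nothing singular travels between Dublin and Tokyo; "Hamlet" is the dictionary key, not a fourth event alongside the three readings.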

Similarly, "mind" is a class description for certain activities. To say a mind is in two places is merely to say that two separate activities share a description, nothing more; certainly not that they are a single mind, or the "same mind" (at most we can mean two minds [activities] with identical attributes / descriptions).

Speaking of stories, maybe we can illustrate the dilemma better via a simple one. Call it "Same Train":

Once upon a time all the matter in the universe was arranged into a track and two identical trains. The only difference between the two trains was position: each was at the opposite end of the track. Other than that, one might say they were the same train. Each train had the same schedule to follow, or algorithm if you will: Depart one end of the track, and continue at top speed until arriving at the other end of the track.

Now some passengers on the same train, aware of the other same train coming towards them on the same track, were a bit concerned there might be a problem.

"Don't worry," the conductor assured them. "We know the same train can never collide with itself. And because of the Principle of Positional Irrelevance, the other train, which differs from this one only in position, is in fact the same train. Therefore, no problem: there will be no collision."

Greatly relieved by the conductor's logic, the passengers returned to their berths, wondering how they could ever have been so naive as to believe there might be a problem. And so the same train continued on what seemed its intuitively obvious but logically impossible collision course with the same train.

"Woot woot!" went the same train as it entered the midway tunnel and the engineer spotted the same train's oncoming light at the end of it.

The End

As a postscript and analogy to the teleporter enigma, it seems we have two possible outcomes:

(1) The Principle of Positional Irrelevance, on which the argument for teleportation seems to rely, is true, and the same train arrives safely at each destination.

(2) The Principle of Positional Irrelevance is false, meaning the trains are in fact two different, separate trains with identical attributes, and there is a cosmologically massive collision, possibly producing a big bang... definitely a big blow to the teleporter argument.

The End? :train
 
From my reading, Hume's point is that we never directly observe "causality"; we observe conjunctions of events, and infer causality ("X causes Y" is a kind of epistemic shorthand for "X is always seen to immediately adjacently precede Y"). So we should be careful when speaking of causality to remember that it's a concept we add to our understanding of the world, to help us organize it, it's not something we ever observe directly in the world; and as a concept that we add to the world, it's open to doubt: our understanding of "causality", anything we mean by the epistemic shorthand "X causes Y" beyond "X is always seen to immediately adjacently precede Y", may differ from actual causality (this was especially important in the era he was writing, which still tended to think of "causality" in Aristotelian terms, as teleological, directed towards an end; Hume's strictly empirical notion of causality is the modern scientific one).

Yes, exactly. But what most people don't realize -- and what westprog certainly doesn't realize -- is that "location" is just another type of causality. We never directly observe it any more than we directly observe any other type of causality.

I agree that location is no less important than any other attribute of a physical system. However, it does often seem as if it's being treated as if it were less important, even completely irrelevant, in 'porter supporters' [pro-transporter] arguments.

It is not less important, or even completely irrelevant, in general. But it can be, because any attribute can be less important than others, or even completely irrelevant, depending on the context. Location isn't on some pedestal, that is what I am saying. It is just like any other attribute.
 
It would sound like a pretty definitive answer to me. The problem is that the technology you are talking about (according to the TV shows it appears in) does not "connect" the two entities it creates. It creates two identical but different entities that are quite separate. Will Riker and Tom Riker are not "connected". Neither was "good" Kirk nor "bad" Kirk. So I am not sure where you get the idea that they are.

The idea is that they are connected for an instant, then diverge into two separate identities.

The whole point of contention here is whether there is that instant of connexion or not. Nobody disagrees that after any amount of time has passed at all the two bodies are entirely different consciousnesses.

And it has nothing to do with the technology. It has to do with a fundamental question of whether the same consciousness can inhabit two physically distinct brains for even a single Planck time. I am trying to show, through hypotheticals, that there is nothing inherently contradictory about such an idea. Whether it is physically possible isn't what is important at this stage -- only logical possibility.
 
However, that's not the situation we are dealing with. We're considering two entirely separate people, where a stimulus to one will not result in a reaction from the other. The multi-body single-mind example simply shows us what we aren't dealing with.

Nope.

The situation we are dealing with is that a stimulus to the one body prior to teleportation results in reactions from both bodies after teleportation.
 
Nope.

The situation we are dealing with is that a stimulus to the one body prior to teleportation results in reactions from both bodies after teleportation.

It would help if you would actually state what you're at the moment only implying. I've been quite clear about this - post teleportation we are dealing with two different people. The fact that at a previous stage there was only one person is irrelevant. At one stage in every person's past, there were just two people, and after some activity and a wait of nine months, there would be three people. Prior to birth, things that happened to the mother had an effect that might also apply to the new, separate person - drug withdrawal, for example.

This makes no difference. There are very simple tests we can use to see whether there are two people, and following the transporter's operation, they all apply.

If we are to use the "stimulus to one body now affects two" argument to deny that there are two people, then nobody could claim to be a different person to his mother. It's a non-argument. What matters is whether a stimulus to one body affects the other, and it doesn't.
 
The idea is that they are connected for an instant, then diverge into two separate identities.

The whole point of contention here is whether there is that instant of connexion or not. Nobody disagrees that after any amount of time has passed at all the two bodies are entirely different consciousnesses.

And it has nothing to do with the technology. It has to do with a fundamental question of whether the same consciousness can inhabit two physically distinct brains for even a single Planck time. I am trying to show, through hypotheticals, that there is nothing inherently contradictory about such an idea. Whether it is physically possible isn't what is important at this stage -- only logical possibility.

I don't know whether it even means anything to describe consciousness at a single Planck time. If it does, then I see no reason to consider that two bodies at two separate locations, with no connection between them, are the same person. If this is putting location on a pedestal, so be it. Location, location, location.
 
Yes, exactly. But what most people don't realize -- and what westprog certainly doesn't realize -- is that "location" is just another type of causality. We never directly observe it any more than we directly observe any other type of causality.

We do observe differences in it.

It is not less important, or even completely irrelevant, in general. But it can be, because any attribute can be less important than others, or even completely irrelevant, depending on the context. Location isn't on some pedestal, that is what I am saying. It is just like any other attribute.

Difference in location, from the pov of either system, seems the most basic difference: enough to establish they are different systems, different synchronous frames of physical activity, whatever the status of their other attributes. So far, I'm not persuaded the teleporter enigma, even as a thought-experiment, can get around that (though it would be cool if it could).
 
