
The Star Trek Transporter Enigma

Personally, I think that the fact that I belong to a class which has other members is a totally inadequate reason to end my existence, because I'm not conscious as a class, I'm conscious as a class instance. The fact that other members of the class exist is meaningless. There are plenty of other instances of the class Human, or Mammal, or Physical Object around. That doesn't mean that I regard my own unique existence as being redundant. My existence will still be unique, because I'm the only person at this location at this time. Somebody else is at a different location. He might be like me, but if he was me he'd be where I was. I find I'm always in one place at a time.

Right, if there are other instances of you already running, that is no consolation to your personal instance being ended.

But if your instance will be started up again in a different location, where you want to be, shortly after it is ended, then I think that is a good reason to accept your personal instance being ended.
 
Key-rye-st, what a mess! I know that's far from the greatest explanation ever, but it's the best I can sort it out right now.

A better way to explain the difference is to look at things from a purely A.I. or cognitive science point of view: Assuming everything is particles, for all collections of particles X, you can partition the universe into three sets: 1) the set containing only the particles of X, 2) the set containing any collections of particles that link X to other collections, 3) collections that have nothing to do with X.

In other words, if your brain is X, your brain is part of partition 1. It is an actual instance.

If I studied your brain, the particles in my brain that are involved in the memory/thought of the study of your brain, e.g. the description of the algorithm being instanced by your brain, would be part of set 2. Those particles in my brain are a logical potential instance, to use your terms. And the similar thoughts of other humans or aliens, or in a written record of that study, or descriptions in copies of some book made about the study, etc., would also be part of set 2.

The rock on the side of the road that neither of us has seen or thought about would be part of set 3.

Make sense? I find that thinking of everything in terms of particles greatly simplifies stuff...
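Just to make the partition concrete, here's a toy sketch in Python. All the particle IDs and set sizes are made up for illustration; the only point is that the three sets are disjoint and jointly cover everything:

```python
# Toy model: the "universe" is a set of particle IDs, X is some
# collection of interest (say, a brain). "Links" stands in for the
# collections that reference X (memories, descriptions, written records);
# everything else falls into set 3.

universe = set(range(20))        # all particles (made-up IDs)
x = {0, 1, 2, 3}                 # set 1: the particles of X itself
links = {4, 5, 6}                # set 2: particles encoding references to X
rest = universe - x - links      # set 3: everything unrelated to X

# The three sets partition the universe: disjoint and jointly exhaustive.
assert x | links | rest == universe
assert x.isdisjoint(links) and x.isdisjoint(rest) and links.isdisjoint(rest)
```

The same rock-by-the-road example works here: any particle nobody has ever interacted with or described just lands in `rest` by elimination.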

Yes, I think so. If by "an instant" you mean a time interval so small it might as well be zero, then yes, as the destruction and replication events would occur in no time. But events taking no time aren't possible events -- events can be defined as "instantaneous" in calculus, but this merely sets a mathematical limit, based on infinite time-slices, we never actually reach -- so the instant teleportation hypothetical is really just saying: "take someone who's here; now imagine she's not here, she's over there, instantly; is it the same person?" Yes. Because in this fanciful case she wasn't destroyed and replicated, just imagined here, and then there. So logically, it's a purely imaginary case, and an impossible class at that, with a described candidate for membership, but no logically possible membership, no potential instance; thus, an empty set, without potential, or of course, actual instances.

Err, not really. I am saying that even though there would be no way to detect that the particles were different, they would indeed be different, because the magic machine assures us of it.

The question is whether or not there is something inherently important about the same particles.

No, I mean a spatiotemporally continuous execution, from activation to deactivation. (Unless something goes wrong with the execution, the state transitions will imply the causal links in the system, at least for their level of description).
OK I am glad we agree upon this concept.

Yes, I think you mean "[f]or
it is not separate instance according to all criteria for 'instance'"; however, it is separate for some criteria, possibly crucial criteria, as outlined in the discussion of the extra distinctions of "instance" above.
Yep, exactly, so the only question is which of those criteria are crucial. It might not seem like we've made progress, but we have -- most people insist that if the material changes, the instance must be different according to all criteria.

So would you say the difference is expectation? (This is a provocative and puzzling case, because by "causal efficacy", which is a stupid-sounding phrase I must admit, though I can't think of a better one, I really mean "causal in the way intended, expected, planned, etc." To distinguish it from causality that has effects which are useless to us because they aren't the effects we thought would happen).

Well, "expectation" might be a good human word for it, but actually I can be precise -- I just learned in another thread that in fact all the laws of physics operate bidirectionally, i.e. they are all invertible functions, and the only thing that dictates what we perceive as the flow of time in a given direction is entropy!

So I would say that the difference is that in the genuinely determined scenario you can run the laws of physics backwards, or take the state transitions backwards by inverting the transition function, and get to the correct starting point -- before the teleporter was used. On the other hand, in the random scenario, you can't, because by definition a true random event is not reversible, i.e. the state transition function is non-invertible.
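A toy sketch of the distinction (nothing here is real physics -- just an invertible map on a tiny state space next to a non-invertible "random" one):

```python
import random

N = 256  # toy state space: integers 0..255

def step(s):
    """Deterministic, invertible transition (an affine map mod N)."""
    return (5 * s + 3) % N

def unstep(s):
    """Exact inverse: 205 is the inverse of 5 mod 256 (5 * 205 = 1025 = 4*256 + 1)."""
    return (205 * (s - 3)) % N

def random_step(s):
    """'Random' transition: the output carries no information about the input."""
    return random.randrange(N)

s0 = 42
s1 = step(s0)
assert unstep(s1) == s0   # a deterministic history can be run backwards

# random_step throws away the past: knowing its output tells you nothing
# about which state produced it, so no unstep() for it can exist.
```

The deterministic `step` is a bijection on the state space, so every history can be rewound; the random one maps many pasts onto the same present, which is exactly the non-invertibility being described.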
 
Right, if there are other instances of you already running, that is no consolation to your personal instance being ended.

But if your instance will be started up again in a different location, where you want to be, shortly after it is ended, then I think that is a good reason to accept your personal instance being ended.

We aren't even saying the instance would be ended, we are saying it is more like just put on pause or something -- there is a genuine causal link between the source and destination, not a "stop" and then "start."

Or, we are saying that if you stop an instance, and then start up another instance using the data from the last state of the first instance, they are in fact the same instance, not two different ones.
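This is essentially what serialization does in software. A minimal sketch (the `Counter` class is just a stand-in for any stateful process):

```python
import pickle

class Counter:
    """A trivially stateful 'instance'."""
    def __init__(self):
        self.ticks = 0
    def tick(self):
        self.ticks += 1

original = Counter()
for _ in range(3):
    original.tick()

# "Stop" the instance: capture its complete last state...
frozen = pickle.dumps(original)
del original

# ...then "start it up again" elsewhere from that exact state.
resumed = pickle.loads(frozen)
resumed.tick()
assert resumed.ticks == 4   # it picks up exactly where it left off
```

Whether `resumed` counts as "the same instance" as `original` is of course the whole question of the thread; the code only shows that nothing about the state is lost in the stop/start.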
 
In the light speed transport, there would have been no deactivation event for the local system, no loss of physical integrity, no cessation of conscious activity (aside from whatever happened to the body during the transport).

All things only evident from the outside. In terms of subjectivity there is no net change, new location aside.

Nick
 
Good point, because in a manner of speaking -- yes and no. So it's a point of confusion I should try to clear up...

An algorithm is a class description. What does it describe? Itself: it defines a set whose members may be thought of as ideal instances that in turn potentially describe, match one-to-one in detail, an actual instance, at the level of detail of the algorithm, in whatever context we are concerned with (functional, structural, et al.). That's what I intend by a potential instance becoming an actual instance, a description only.

To avoid confusion, the "actual instance" may be called by its more traditional philosophical name, an object.

Though it might be stretching things to call a computer program an object. A process, perhaps.

Well, I'd qualify that a bit. Logical concepts are exact, in the sense that there should be no disagreement over how many "two" is, because ideal; but incomplete as descriptions of our physical observations because a priori experience. Our observations are recorded in empirical concepts, empirical meaning from our senses, which are inexact -- what the hell is "cute", exactly? -- because they are a posteriori experience, and experience, especially when you throw in emotional valuation et al, is often a lot messier than the logical necessity we imagine (though not always: groups of "two" occur in both the logical and empirical world).



Just to expand on this important point: the parabola approximates the stone's throw extremely well in a perfect vacuum -- the problem is finding one, of course -- i.e., the path is a parabola, empirically-speaking, but it's effectively impossible to describe that path to a mathematical precision, and owing to quantum fluctuations to even effect an ideal parabola if we could. The algorithm is an exact description of itself only, and only at the level of function. There are often processes that match that functionality when they work as we expect them to; however, in the real world, sometimes they don't.

I suppose that we test whether an object matches its logical description by its behaviour. If a stone lands where we predict, we consider its flight parabolic. Of course, no point on the stone will follow a perfect parabola. It's not clear if it's even meaningful to consider a perfect parabola for an object in motion.

 
Right, if there are other instances of you already running, that is no consolation to your personal instance being ended.

But if your instance will be started up again in a different location, where you want to be, shortly after it is ended, then I think that is a good reason to accept your personal instance being ended.

But I won't be in the new location. Somebody else will be there. I'm happy for him, but the fact that he's somewhere I want to be doesn't mean that I'm going to kill myself. I might just press the transport button again, so I can end up there.
 
But I won't be in the new location. Somebody else will be there. I'm happy for him, but the fact that he's somewhere I want to be doesn't mean that I'm going to kill myself. I might just press the transport button again, so I can end up there.

Why do you keep saying "somebody else?"

How can it be somebody else if it is the same instance as before?
 
Key-rye-st, what a mess! I know that's far from the greatest explanation ever, but it's the best I can sort it out right now.

A better way to explain the difference is to look at things from a purely A.I. or cognitive science point of view: Assuming everything is particles, for all collections of particles X, you can partition the universe into three sets: 1) the set containing only the particles of X, 2) the set containing any collections of particles that link X to other collections, 3) collections that have nothing to do with X.

In other words, if your brain is X, your brain is part of partition 1. It is an actual instance.

If I studied your brain, the particles in my brain that are involved in the memory/thought of the study of your brain, e.g. the description of the algorithm being instanced by your brain, would be part of set 2. Those particles in my brain are a logical potential instance, to use your terms. And the similar thoughts of other humans or aliens, or in a written record of that study, or descriptions in copies of some book made about the study, etc., would also be part of set 2.

The rock on the side of the road that neither of us has seen or thought about would be part of set 3.

Make sense? I find that thinking of everything in terms of particles greatly simplifies stuff...

As long as we don't oversimplify. Ok, let's see how this translates for me (note: I use "computational process" below to distinguish it from "algorithm", which is a description of a computational process):

[1] There is my brain -- the collection of actual particles properly arranged that form my brain -- which can be described as "my brain" (though it is my brain whether described or not); [2] there are external functional descriptions (actual instances of the algorithm) for the computational process of my brain -- collections of actual particles properly arranged that encode description routines -- which can be described by potential logical instances of the algorithm for the computational process of my brain (though they encode the description routine whether described or not); [3] and all the rest of the actual particles, to complete a universal three-way partition.

Yeah, I guess that's alright. And I think it helps to highlight the few disagreements, over what's "actual physical" and what's "logical potential".

Yes, I think so. If by "an instant" you mean a time interval so small it might as well be zero, then yes, as the destruction and replication events would occur in no time. But events taking no time aren't possible events -- events can be defined as "instantaneous" in calculus, but this merely sets a mathematical limit, based on infinite time-slices, we never actually reach -- so the instant teleportation hypothetical is really just saying: "take someone who's here; now imagine she's not here, she's over there, instantly; is it the same person?" Yes. Because in this fanciful case she wasn't destroyed and replicated, just imagined here, and then there. So logically, it's a purely imaginary case, and an impossible class at that, with a described candidate for membership, but no logically possible membership, no potential instance; thus, an empty set, without potential, or of course, actual instances.

Err, not really. I am saying that even though there would be no way to detect that the particles were different, they would indeed be different, because the magic machine assures us of it.

The question is whether or not there is something inherently important about the same particles.

Synchronously active in the same place. According to special relativity, that makes them a unique system, particles moving through space as a unit with its own time (inertial frame-of-reference relative to other frames).

No, I mean a spatiotemporally continuous execution, from activation to deactivation. (Unless something goes wrong with the execution, the state transitions will imply the causal links in the system, at least for their level of description).
OK I am glad we agree upon this concept.

"To 'agreement'!" :Banane35:

Yes, I think you mean "[f]or
it is not separate instance according to all criteria for 'instance'"; however, it is separate for some criteria, possibly crucial criteria, as outlined in the discussion of the extra distinctions of "instance" above.
Yep, exactly, so the only question is which of those criteria are crucial. It might not seem like we've made progress, but we have -- most people insist that if the material changes, the instance must be different according to all criteria.


Yes, that's an old philosophy chestnut: material's being swapped in and out, and form being slightly altered, always, of course, for any object. Which is why I wonder if synchronous unity isn't a better, even the best (correct?) way to demarcate an "object" (or process).

So would you say the difference is expectation? (This is a provocative and puzzling case, because by "causal efficacy", which is a stupid-sounding phrase I must admit, though I can't think of a better one, I really mean "causal in the way intended, expected, planned, etc." To distinguish it from causality that has effects which are useless to us because they aren't the effects we thought would happen).

Well, "expectation" might be a good human word for it, but actually I can be precise -- I just learned in another thread that in fact all the laws of physics operate bidirectionally, i.e. they are all invertible functions, and the only thing that dictates what we perceive as the flow of time in a given direction is entropy!

So I would say that the difference is that in the genuinely determined scenario you can run the laws of physics backwards, or take the state transitions backwards by inverting the transition function, and get to the correct starting point -- before the teleporter was used. On the other hand, in the random scenario, you can't, because by definition a true random event is not reversible, i.e. the state transition function is non-invertible.

Entropy. Interesting: that's a perspective I hadn't thought of.

I think randomness, operant quantum randomness, is going to be a definite wrench in the teleporter works, if it turns out it's necessary to consciousness, or compromises the match-all link-up of original to copy (that is: running events backwards will roll "snake eyes" back into a hand, but forward again the hand may not roll another snake eyes).

And we have to be careful that "link" isn't purely descriptive, i.e. ideal, so as not to mix idealism in with our assumed materialism. They don't go together at all, barring dualism, which doesn't even go together with itself.

Good enough for tonight; good discussion. :)


In the light speed transport, there would have been no deactivation event for the local system, no loss of physical integrity, no cessation of conscious activity (aside from whatever happened to the body during the transport).

All things only evident from the outside. In terms of subjectivity there is no net change, new location aside.

Nick

Right. Subjective and objective knowledge can conflict. Objective would describe what's going on, not what seems to be going on (unless consciousness is somehow a special case).


Good point, because in a manner of speaking -- yes and no. So it's a point of confusion I should try to clear up...

An algorithm is a class description. What does it describe? Itself: it defines a set whose members may be thought of as ideal instances that in turn potentially describe, match one-to-one in detail, an actual instance, at the level of detail of the algorithm, in whatever context we are concerned with (functional, structural, et al.). That's what I intend by a potential instance becoming an actual instance, a description only.

To avoid confusion, the "actual instance" may be called by its more traditional philosophical name, an object.

Though it might be stretching things to call a computer program an object. A process, perhaps.

Yes, that's a better name for it here. The object we refer to isn't a "thing", in the normal senses of these words, but a "process", sequence of events, of changes in things. And the algorithm describes the computational process, at some level of functionality (though it might suggest structure, and run-time execution, those aren't its proper subject nor object).

Well, I'd qualify that a bit. Logical concepts are exact, in the sense that there should be no disagreement over how many "two" is, because ideal; but incomplete as descriptions of our physical observations because a priori experience. Our observations are recorded in empirical concepts, empirical meaning from our senses, which are inexact -- what the hell is "cute", exactly? -- because they are a posteriori experience, and experience, especially when you throw in emotional valuation et al, is often a lot messier than the logical necessity we imagine (though not always: groups of "two" occur in both the logical and empirical world).


Just to expand on this important point: the parabola approximates the stone's throw extremely well in a perfect vacuum -- the problem is finding one, of course -- i.e., the path is a parabola, empirically-speaking, but it's effectively impossible to describe that path to a mathematical precision, and owing to quantum fluctuations to even effect an ideal parabola if we could. The algorithm is an exact description of itself only, and only at the level of function. There are often processes that match that functionality when they work as we expect them to; however, in the real world, sometimes they don't.

I suppose that we test whether an object matches its logical description by its behaviour. If a stone lands where we predict, we consider its flight parabolic. Of course, no point on the stone will follow a perfect parabola. It's not clear if it's even meaningful to consider a perfect parabola for an object in motion.

Its center of mass should come closest under ideal conditions; but physical conditions will always tend to diverge slightly from the ideal (thank you, QM! & granularity of the spacetime so-called 'continuum'!). :p
 
As long as we don't oversimplify. Ok, let's see how this translates for me (note: I use "computational process" below to distinguish it from "algorithm", which is a description of a computational process):

I consider computational process == instance of an algorithm
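In programming terms the vocabulary maps cleanly onto classes and instances. A tiny sketch (a loose analogy, not a claim about minds):

```python
class Algorithm:
    """The class description: a 'potential instance' until instantiated."""
    def __init__(self, state):
        self.state = state

# Two actual instances of the same class description:
a = Algorithm(state=1)
b = Algorithm(state=1)

# They match in description (same class, same internal state)...
assert type(a) is type(b) and a.state == b.state

# ...yet remain distinct instances, each with its own identity:
assert a is not b
```

Which is the thread's question in miniature: `a` and `b` are indistinguishable by any test of their contents, but Python still treats them as two objects, not one.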
 
Entropy. Interesting: that's a perspective I hadn't thought of.

I think randomness, operant quantum randomness, is going to be a definite wrench in the teleporter works, if it turns out it's necessary to consciousness, or compromises the match-all link-up of original to copy (that is: running events backwards will roll "snake eyes" back into a hand, but forward again the hand may not roll another snake eyes).

Wiki article on arrow of time

Certain subatomic interactions involving the weak nuclear force violate the conservation of both parity and charge conjugation, but only very rarely. An example is the kaon decay [1]. According to the CPT Theorem, this means they should also be time irreversible, and so establish an arrow of time. Such processes should be responsible for matter creation in the early universe.

So time does have a direction - in that a certain very small class of subatomic interactions are not time-reversible. It's an odd one - because nobody has been able to figure out any relationship between these interactions and the way time works and is perceived.
 
Nick227 said:
There's no way for an external observer to tell them apart. But you, the (destructively) transported fellow would know, so to speak, by never waking up.

I don't think that is so. If we take the case Hellbound then put forward... that the lights go down and both you and the clone find yourselves together in the room, there is still no way for either of you to know who is the original.

Nick

Yes, but that isn't destructive teleportation. However, I do acknowledge it as an issue for simple cloning, assuming this must be done with the source (ugly bag of mostly water) being unconscious. And if conscious, and the copy suddenly pops into existence, conscious, you still have the issue of the clone standing in the output unit, and the source in the input unit. You'd have to propose deliberate deception on the part of the clone-machine makers to make the input and output modules identical.

Note that birds and other creatures that sense the Earth's magnetic field would notice, as the clone would suddenly feel a different orientation, again neglecting deliberate deception in the cloning-machine setup.
 
Assuming an electronic thermometer that records its measurements, you gained a thermometer that has recorded the exact same thing as the thermometer you lost. In total, you didn't lose anything, just as you haven't lost anything if you cut a computer file in one folder, and paste it in another.

Interesting analogy -- sometimes the file is "moved" by literally copying, byte-for-byte, to a new spot on the HDD, with the source bytes being "deleted". Other times, the operating system just swaps some file pointers around, but the actual data is not copied.

It's much more efficient, of course, when "moving" a file to do the latter -- the former is only done when crossing filesystem boundaries, such as from C: to D: drive.
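You can watch the pointer-swap case happen from Python. On a POSIX filesystem, a rename within the same filesystem relinks the name without touching the data, so the file keeps its inode number (all paths below are made up for the demo):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "pet.app")
    dst = os.path.join(d, "moved.app")
    with open(src, "w") as f:
        f.write("hamster state")

    inode_before = os.stat(src).st_ino
    os.rename(src, dst)          # same filesystem: just relink the name
    inode_after = os.stat(dst).st_ino

    # The bytes were never copied; the same on-disk object got a new name.
    assert inode_before == inode_after
```

A cross-filesystem "move" (the C: to D: case) can't do this, which is why it falls back to copy-then-delete.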



"So," the hamster-pet-app asks, "what if my app was moved to D drive, but they forgot to delete me on the C drive. Which is me?"
 
But I won't be in the new location. Somebody else will be there. I'm happy for him, but the fact that he's somewhere I want to be doesn't mean that I'm going to kill myself. I might just press the transport button again, so I can end up there.

And it won't be YOU celebrating your next birthday party, it will be SOMEONE ELSE!! :eek:
 
And it won't be YOU celebrating your next birthday party, it will be SOMEONE ELSE!! :eek:

That's possibly true. But it was me celebrating my last birthday.

That's the trouble with the transporter arguments. If one believes them, then why not live for today? It's somebody else who'll be bankrupt, diseased and in jail.
 
That's possibly true. But it was me celebrating my last birthday.

That's the trouble with the transporter arguments. If one believes them, then why not live for today? It's somebody else who'll be bankrupt, diseased and in jail.

Umm... anyone willing to use the transporter believes the EXACT OPPOSITE of that.

The copy is me. Me ten years from now is still me. Me after I am knocked unconscious is still me. Me after a nap is still me.
 
Umm... anyone willing to use the transporter believes the EXACT OPPOSITE of that.

The copy is me. Me ten years from now is still me. Me after I am knocked unconscious is still me. Me after a nap is still me.

And if there's somebody else exactly like you?
 
And if there's somebody else exactly like you?

Then it is you.

Here is the issue: we (as a species) have never had to address this question before, and your default answer is that if someone else is exactly like you then it must somehow be a separate "you," just because.

But why? What, in a logical sense, mandates that it must be separate?
 
Then it is you.

Here is the issue: we (as a species) have never had to address this question before, and your default answer is that if someone else is exactly like you then it must somehow be a separate "you," just because.

But why? What, in a logical sense, mandates that it must be separate?

"You" is defined as a singular pronoun. Logically, "you" cannot refer to two persons. There may be "somebody else exactly like me" in another universe. That does not mean I am that person.
 
"You" is defined as a singular pronoun. Logically, "you" cannot refer to two persons. There may be "somebody else exactly like me" in another universe. That does not mean I am that person.

How can I tell if that person is me or not? There are lots of techniques we can use. I think of a number between one and a thousand. Does he know what it is? I stick a pin in him. Does it hurt both of us? Is he occupying the same space that I occupy?

If these simple tests are failed, then I know instantly it isn't me. Whether or not he happens to look like me is irrelevant. What test could I apply that would show that he is me?

Maybe he was me, at some stage in the past - when there was only one of us, and we occupied the same space time coordinates. Once we both exist, we are different people.
 
Here's a conundrum... imagine half your brain got squashed... but it so happened that someone had an artificial hemisphere ready to plug into the remaining organic part of your brain. For the sake of argument let's suppose it can perfectly duplicate the functions of your brain, including the current memory state supported by that side.

Would you consider yourself to still be yourself?
 
