
The Star Trek Transporter Enigma

Two instances of the same class. If that class is restricted to have only one member (an "identity class"), then all instances will be copies of each other (where "member" refers to potential instances).



If it turns out it isn't. As far as I'm able to determine from slogging through the morass of JREF teleportation threads, those who have a problem with getting in the teleporter argue that consciousness is an active material process of a material instance of "a person", and that destroying that person destroys that process. Those who don't have a problem argue that consciousness is the description of the material instance of "a person" and his or her active material processes (consciousness included), in other words the class or idea of this person, and that this person's consciousness, which many think of as (and which may be) nothing more than the active material process, somehow persists in its static, symbolic description.

On one side, it seems to me, philosophically speaking, is an emergent materialism which identifies the process of consciousness with its separate material instance; on the other, a Pythagorean idealism which identifies consciousness with its unique descriptive class. So whether you get in the teleporter or not depends on your metaphysics. (I incline to emergent materialism, though of all the idealisms the Pythagorean is the most tempting: by far the least silly.)

There is a third option:

My position is that human consciousness is a form of self-referential information processing. It is an algorithm -- a series of computation steps -- that knows about itself.

My position is that the steps in the algorithm -- like any other algorithm -- can be thought of as a series of state transitions within the systems the algorithm is instantiated upon. Think about how programs are executed on a computer, how each step in a program represents a set of state transitions in the hardware. Well, my position is that the algorithm of consciousness is the same kind of thing in our brain -- the steps correspond to state transitions in our neural network.

My position is that these state transitions are deterministic, assuming quantum randomness is insignificant. This means the next state is determined only by the current internal state, the current external state, and a deterministic state transition function (which in the physical domain is simply the laws of physics).

My position, then, is that you can model consciousness (any algorithm, actually) as a series of state transitions in some system somewhere. That is, F(Si(t), Se(t), t) --> Si(t+1), where F( ) is the state transition function, Si( ) is the internal state, and Se( ) is the external state. If you looked at time slices of consciousness -- we can use Planck time as the duration, since then we know we have captured every relevant event -- the algorithm would look something like this in the physical domain: S(1)-->S(2)-->S(3)--> ... -->S(current time).
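
To make that concrete, here is a toy sketch in Python. The transition function F below is an arbitrary made-up stand-in for the laws of physics, nothing brain-specific:

Code:
# A minimal sketch of the state-transition model described above.
# F is a made-up deterministic toy function standing in for the laws of physics.

def F(s_internal, s_external, t):
    """Next internal state depends only on the current internal state,
    the current external state, and the time step."""
    return (s_internal * 31 + s_external + t) % 1_000_003

def run(s0, external_inputs):
    """Produce the sequence S(1) --> S(2) --> ... by repeated application of F."""
    states = [s0]
    for t, s_ext in enumerate(external_inputs):
        states.append(F(states[-1], s_ext, t))
    return states

print(run(42, [7, 7, 0, 3]))  # same inputs always yield the same sequence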

My position is that consciousness is those deterministic transitions between states, the "-->" you see above. It is the algorithm itself, not the physical stuff the algorithm is running on. It isn't your brain, it is the directed "movement" from one state of your brain -- or any brain -- to the next.

My position is that if you take a subsequence of this algorithm -- suppose S(10)-->S(11)-->S(12) -- and split it between multiple systems, or instances, it remains the same algorithm precisely because the deterministic state transitions are exactly the same. In other words, if F(Si(10), Se(10), 10) occurs on system A, and determines state 11 on system B, and if F(Si(11), Se(11), 11) occurs on system B and determines state 12 on system C, the algorithm and hence the consciousness is exactly the same as it would be if everything occurred in the same system.

So if your brain is in state 1, and the laws of physics combined with state 1 result in state 2 one Planck time later, then the system where state 2 is located should be irrelevant. State 2 is still part of the algorithm, the same algorithm, because it was determined by state 1.
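
As a toy illustration of that irrelevance (same made-up dynamics as the sketch above), hand each step to a different labelled "system" and compare:

Code:
# The "system" a step runs on is tracked only as a label; the label never
# enters F, so it cannot affect the resulting sequence of states.

def F(s_internal, s_external, t):
    return (s_internal * 31 + s_external + t) % 1_000_003

def run_on_one_system(s0, inputs):
    states = [s0]
    for t, s_ext in enumerate(inputs):
        states.append(F(states[-1], s_ext, t))
    return states

def run_split_across_systems(s0, inputs, systems=("A", "B", "C")):
    states = [s0]
    for t, s_ext in enumerate(inputs):
        host = systems[t % len(systems)]        # step t happens "on" a different host...
        states.append(F(states[-1], s_ext, t))  # ...but the host plays no role in the result
    return states

inputs = [7, 7, 0, 3]
assert run_on_one_system(42, inputs) == run_split_across_systems(42, inputs)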

And finally, my position is that if you somehow add an intermediate step in there between determining state 2 and the system actually being set to state 2 -- such as communicating across space to an identical system that it should be set to state 2 -- the algorithm and hence the consciousness is still the same, because state 2 is still determined by state 1. The fact that there was a middleman doesn't change that key element. Nor would it change if that communication took a very, very long time -- if the original was scanned, then destroyed, and the information took a billion years to reach the destination, and only then was the copy made -- it would still be the same algorithm and hence the same consciousness. Because state 2 was determined by state 1.

In other words, if your criterion for defining "instance" relies upon the deterministic transitions from one state of the system to the next, then it is valid to say that the source and destination copies are the same instance -- since the source determines the initial state of the destination.
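
The middleman case, in the same toy terms: the next state is computed at the source, sits in transit for however long, and is then applied at the destination. The sequences come out identical:

Code:
# Store-and-forward version of the toy model. The hand-off and the delay are
# irrelevant because nothing touches the message in transit: each state is
# still determined by the previous one.

def F(s, t):
    return (s * 31 + t) % 1_000_003  # external input dropped for brevity

def direct(s0, steps):
    states = [s0]
    for t in range(steps):
        states.append(F(states[-1], t))
    return states

def via_middleman(s0, steps):
    states = [s0]
    for t in range(steps):
        message = F(states[-1], t)  # "scan": compute the next state at the source
        # (the message could sit in transit for a billion years here)
        states.append(message)      # "rebuild": apply it at the destination
    return states

assert direct(5, 8) == via_middleman(5, 8)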
 
So is your argument based on Parfit's -- as I understand it, that even though blobru#1's consciousness can't be transferred in the sense we normally think of someone's consciousness being transferred, it should be enough for blobru#1 to be destroyed knowing an exact duplicate, his namesake blobru#2, will carry on his legacy in his stead?

Yes. It's an argument about the Self, essentially. If you accept that materialism precludes the existence of a persisting self, then you shouldn't have a reason not to travel. That's how I understand it.


If I adopt an emergent materialist pov, it means the destruction (deactivation) of that separate active material process called consciousness that gave rise to the notion of "me".

Well, that's your own personal view. I also adopt an emergent materialist pov, but for me it's clear that there can be no difference between the illusory sense of experiencing self created by Nick#2 and that created by Nick#1!

Nick
 
it's clear that there can be no difference between the illusory sense of experiencing self created by Nick#2 and that created by Nick#1!

Yeah, right, Mr. Nick #227. Who are you, and what have you done with Nick #226?

:D
 
I confess I've only skimmed this thread, so I apologize if I'm duplicating points already made.

First, isn't it impossible for us to do what the OP suggests? (I don't mean just technically difficult, but impossible in principle: by Heisenberg's Uncertainty Principle, we can't know everything about every particle in our bodies.)

I think the implication is that if identical processes are going on in the two versions, then it doesn't matter about the individual particles. So if two computers are running the same program on the same data with the same output, then they are functionally identical.
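
In code terms, I take the claim to be something like this toy sketch (entirely illustrative):

Code:
# Two "machines" running the same deterministic program on the same data
# produce the same output -- so, the claim goes, they are functionally identical.

def program(data):
    return sorted(data)  # stands in for any deterministic program

data = [3, 1, 4, 1, 5, 9, 2, 6]
machine_1_output = program(list(data))
machine_2_output = program(list(data))
assert machine_1_output == machine_2_output  # indistinguishable by behaviour alone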

IMO this begs numerous questions.
 
"I" "Awareness" "Consciousness" "Self"

Whatever these are, they are caused by material, and material processes. This is what materialists think.

It is why materialists are not worried about their "self" being destroyed when they use the transporter any more than they are worried about their liver being destroyed. Because it will be rebuilt exactly on the other side, and function just the same way as it did before.

Saying we think that consciousness is 'magically transported' into the copy is as ridiculous as saying we think that our liver is magically transported into the copy. It is rebuilt. Out of material. Because that is what it was made out of before.
 
IMO this begs numerous questions.
"Begging the question" is the argumentative fallacy of treating as granted the very thing being argued. Perhaps you meant that it raises numerous questions.
 
Some might argue that the Star Trek transporter must be forever impossible because of the difficulty of resolving such paradoxes.

Here are four paradoxes related to a Star Trek transporter.

I. Suppose you keep reducing the time that separates dematerialization (DEM) and rematerialization (REM).
If the theoretical minimum between DEM and REM is zero, such that the transport becomes instantaneous, then there is an unbroken continuum of consciousness and existence for the transported object.
In this case the DEM object and the REM object are the same object.

II. Suppose that the "from" pad and the "to" pad are the same pad.
Suppose that the transport is done in empty space to guarantee that the DEM object and the REM object are composed of the same matter.
In this case the DEM object and the REM object are the same object.

III. Suppose the first and second scenarios are combined.
In this case the DEM object and the REM object are even more clearly the same object.

IV. Suppose the transporter is configured to start REM before DEM is complete.
This would not be any different from normal cellular replacement.
In this case the DEM object and the REM object are the same object.
 
Let me replace your transporter paradox with another one. Now the transporter does not transport you. It first kills you when you push the button (the potassium injection), THEN copies you, atom by atom, somewhere else, and discards the corpse, for example by burning it. The copy has all your memories up to the point you pushed the button.

Would you push the button?

I can think of two types of people who have already pushed that button -- those who are cryogenically frozen before they die (which I do not know has actually happened, so you don't have to count this one) and martyrs who believe they are going somewhere 'else.'

All it takes is a little faith in the technology. Airplanes come to mind.
 
If your criterion for defining "instance" relies upon the deterministic transitions from one state of the system to the next, then it is valid to say that the source and destination copies are the same instance -- since the source determines the initial state of the destination.


This makes, by these criteria, an instance of an algorithm identical with the algorithm (class); that is, it makes the logical or conceptual description identical with the physical or material process it describes, which raises some prima facie objections. Such as:

If the material instance is the same as its descriptive class, then we shouldn't be able to distinguish between them. There should be no way to tell a material instance of consciousness from an equivalent description of that consciousness (that is, the logical relations, the potential instances, it describes; note every class describes potential instances; moreover, an identity class, because it is restricted to a unique potential instance, is that potential instance). If an instance of the consciousness algorithm is identical to the consciousness algorithm as a descriptive class, then the potential instance of consciousness is identical to the actual instance.

Clearly, a person's consciousness is an actual instance of the class that describes it. Since the potential and actual instance are identical, there should be no way to tell them apart. If that's the case, then we should be able to do anything with a potential instance of a given consciousness that we can with an actual instance of a given consciousness: talk, picnic, go scuba-diving, get to know it better... spend some quality time with it. Yet we can't. Why not?

Maybe because the unique potential instance is not identical to an arbitrary material instance. Maybe the material embodiment, the particular material system where the consciousness is active, where the potential becomes actual, the class uniquely instantiated, shouldn't be taken for granted.
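
The object-oriented analogy makes the objection concrete. A toy Python sketch (the names are mine, purely for illustration):

Code:
# A static description of "a person" versus a running instance of one. You can
# copy and transmit the description; you can only interact with the instance.

class Person:                       # the descriptive class: potential instances only
    def __init__(self, name):
        self.name = name

    def chat(self):
        return self.name + " says hi"

description = "class Person: ..."   # static and symbolic; it never says hi

blobru_instance = Person("blobru")  # the actual, instantiated process
print(blobru_instance.chat())       # you can talk to this, not to the description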

That's an excellent description of your position, btw. If I'm not mistaken (wouldn't be the first time), from a materialist pov, I think the obvious objection to it, as I've outlined in the reductio ad absurdum above, is its attempt at a matter-less account of consciousness.


So is your argument based on Parfit's -- as I understand it, that even though blobru#1's consciousness can't be transferred in the sense we normally think of someone's consciousness being transferred, it should be enough for blobru#1 to be destroyed knowing an exact duplicate, his namesake blobru#2, will carry on his legacy in his stead?

Yes. It's an argument about the Self, essentially. If you accept that materialism precludes the existence of a persisting self, then you shouldn't have a reason not to travel. That's how I understand it.

I understand it as "if you expand your notion of 'self' to include your copies, then you shouldn't be bothered by the destruction of your particular 'self' (embodied consciousness)"; however, you may be right, I haven't looked at it in detail.

If I adopt an emergent materialist pov, it means the destruction (deactivation) of that separate active material process called consciousness that gave rise to the notion of "me".

Well, that's your own personal view. I also adopt an emergent materialist pov, but for me it's clear that there can be no difference between the illusory sense of experiencing self created by Nick#2 and that created by Nick#1!

Nick

Yes, it's certainly my own "personal" view. :D I won't argue that.

I will argue that in emergent materialism the "self" (or consciousness or person or whatever) that emerges corresponds to a distinct active material system. It can be copied, but the activities of the copy are the copy's activities, not the original's (even if they both dance divinely, with exactly the same degree of divinity). Which is to say, the illusory senses of experiencing self you refer to can be both identical and separate, just as 2 and 2 are separate but identical instances of "2" (more precisely, separate but identical instances of the symbol class "2" for the defining class of "2", or "two", or "II", "dos", et al.). I doubt that will win any of "you" over, though (although I still have high hopes for Nick87.5). ;)
 
"Begging the question" is the argumentative fallacy of treating as granted the very thing being argued. Perhaps you meant that it raises numerous questions.

In this case, the thing being taken for granted is the very nature of consciousness, as if we understood it entirely, instead of not at all. The point is that we don't know that consciousness is a computer program. We don't know what destroying a body and brain and reassembling it does. So arguments that assume that we do know are "treating as granted the thing being argued".
 
I'm trying to point out to you that if you are dead you will not be concerned that you are dead.

And I am trying to point out to you that it matters to you before you push the button. You seem to think "not fearing death, since you don't care after dying" is a materialist POV, but it has nothing to do with materialism; it reflects more a certain lack of survival instinct :).

No problem. As long as the death is instant and painless. And the "somewhere" is a place I want to go!

Nick

That is a choice of yours, but refusing that choice is not proof of pseudo-materialism; it is proof that one wants to enjoy the emergent property of consciousness as long as possible.
 
I understand it as "if you expand your notion of 'self' to include your copies, then you shouldn't be bothered by the destruction of your particular 'self' (embodied consciousness)"; however, you may be right, I haven't looked at it in detail.

Well, "expanding your notion of self" is one way. Another for me is to really ask yourself what you believe to be changed, from the copy to the original. In my experience, if you remove the notion of a dying observer (which is what materialism indicates) it gets easier!

Yes, it's certainly my own "personal" view. :D I won't argue that.

I will argue that in emergent materialism the "self" (or consciousness or person or whatever) that emerges corresponds to a distinct active material system. It can be copied, but the activities of the copy are the copy's activities, not the original's

But are you saying that the copy is in any way different, aside from being in a different environment now? If the body is identically replicated at the point of transfer, how would anything be different from a scenario where the original body could somehow be moved, at light speed, to the destination?

Do you accept that, technology permitting, there is no difference between destruction and recreation at the destination and travelling at light speed to the destination, assuming no change occurs to the original during travel?

Nick
 
That is a choice of yours, but refusing that choice is not proof of pseudo-materialism; it is proof that one wants to enjoy the emergent property of consciousness as long as possible.

But there is not in actuality a self which is enjoying it, Aepervius. This is what materialism indicates. Thus your refusal to me indicates that you do not accept a materialist understanding of self.

Nick
 
But there is not in actuality a self which is enjoying it, Aepervius. This is what materialism indicates. Thus your refusal to me indicates that you do not accept a materialist understanding of self.

Nick

Materialism indicates nothing of the sort. Materialism indicates that the emergent process of consciousness is generated by the brain, by the interaction between the neurons, and is based on material interaction only. It says ABSOLUTELY NOTHING about a self not existing. Cogito ergo sum. Now, that cogito is not something "special" separated from the material; it is only the emergent process of that material. On that we agree. Where we disagree is that you think that since this is not special, we can make a copy of it and get rid of the original. That is where you have your *fail*. Nothing in materialism negates that emergent process's attempt to avoid termination. So the original, even if a copy is created, will attempt to survive. You say "the original should not care". Please show us precisely where in the definition of materialism that comes from.

"the theory of materialism holds that the only thing that exists is matter; that all things are composed of material and all phenomena (including consciousness) are the result of material interactions."

Nowhere does it say what you claim it says.

Sure, my self is the result of material interaction, but nowhere does that philosophy hold that it does not matter whether you die or not.
 
But there is not in actuality a self which is enjoying it, Aepervius. This is what materialism indicates. Thus your refusal to me indicates that you do not accept a materialist understanding of self.

Nick

Nope. Materialism indicates that the self as a separate entity from matter DOES NOT EXIST. Indeed, the SELF is an EMERGENT property of the matter. And that is the ONLY thing materialism says, nothing more! Destroy the matter and you destroy the self. This is the WHOLE POINT I am trying to get across to you. Your understanding of the self from a materialist point of view is wrong or warped.

Not wanting my specific emergent property to be ended does not mean I imagine it is somehow not based on material interaction; it only means I do not want those material interactions to end!

Your teleporter experiment only proves that the emergent property known as the self does not want to be ended. It does not demonstrate *AT ALL* that the person doesn't hold to the materialist philosophy!
 
This makes, by these criteria, an instance of an algorithm identical with the algorithm (class); that is, it makes the logical or conceptual description identical with the physical or material process it describes, which raises some prima facie objections. Such as:

If the material instance is the same as its descriptive class, then we shouldn't be able to distinguish between them. There should be no way to tell a material instance of consciousness from an equivalent description of that consciousness (that is, the logical relations, the potential instances, it describes; note every class describes potential instances; moreover, an identity class, because it is restricted to a unique potential instance, is that potential instance). If an instance of the consciousness algorithm is identical to the consciousness algorithm as a descriptive class, then the potential instance of consciousness is identical to the actual instance.

Clearly, a person's consciousness is an actual instance of the class that describes it. Since the potential and actual instance are identical, there should be no way to tell them apart. If that's the case, then we should be able to do anything with a potential instance of a given consciousness that we can with an actual instance of a given consciousness: talk, picnic, go scuba-diving, get to know it better... spend some quality time with it. Yet we can't. Why not?

Maybe because the unique potential instance is not identical to an arbitrary material instance. Maybe the material embodiment, the particular material system where the consciousness is active, where the potential becomes actual, the class uniquely instantiated, shouldn't be taken for granted.

That's an excellent description of your position, btw. If I'm not mistaken (wouldn't be the first time), from a materialist pov, I think the obvious objection to it, as I've outlined in the reductio ad absurdum above, is its attempt at a matter-less account of consciousness.

But "classes" don't exist in and of themselves. "class" is just a way to partition systems. To say a system is an instance of a class only means that it is a member of a partition of the set of all systems, the partition which we can describe using a class description -- but that "class" thing doesn't exist out in the void, just waiting for something to instantiate it.

So an algorithm doesn't really exist at all apart from systems that instantiate it. If you have a book that describes the algorithm, that is nothing more than exactly what it is called -- a description. The algorithm itself only exists when it is instantiated upon some system.

Now, if something were to read (perhaps "process" is a better word, since I don't want to imply a human "reader") that book in a way that was isomorphic to the running of the algorithm -- e.g. "if the result is 2356 then turn to page 326262 and proceed", etc. -- then the algorithm would actually exist, in instance form, and you would be able to "talk, picnic, go scuba-diving, get to know it better... spend some quality time with it", right?
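
In toy Python terms (the "book" below is just a string I made up):

Code:
# The text of an algorithm is inert; the moment something processes it
# step-for-step, an instance of the algorithm exists.

book = '''
def greet():
    return "let's go scuba-diving"
'''                          # a static description -- you can't talk to it

namespace = {}
exec(book, namespace)        # a "reader" processes the description isomorphically
print(namespace["greet"]())  # ...and now there is an instance to interact with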

Thus as far as the transporter is concerned, the question is whether or not swapping out all the material of an instance necessarily "changes" the instance we are looking at. In other words, if the instantaneous state of your brain combined with the laws of physics and any new input (although that can be roped into the state) would result in a given state one Planck time in the future, would it matter if we swapped out every single particle and initialized them to this future state?

I say no, it does not matter. You could continually swap out particles and as long as the "swap" didn't change the sequence of state transitions that is essential to the algorithm it would always be the same instance. How could it not be the same instance, given that each step is determined by the previous one?

In other words, the fact that the source and destination are deterministically linked is what makes it the same instance and the same consciousness. I don't think there is even a question about this.
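
The swap argument in the same toy terms as the earlier sketches: replace the whole substrate at every step and compare state histories:

Code:
# Replace the entire "substrate" (the object holding the state) at every step.
# The sequence of states -- the thing identified with the instance -- is unchanged.

def F(s, t):
    return (s * 31 + t) % 1_000_003

class Substrate:
    def __init__(self, state):
        self.state = state

def run_with_swaps(s0, steps):
    substrate = Substrate(s0)
    history = [substrate.state]
    for t in range(steps):
        substrate = Substrate(F(substrate.state, t))  # brand-new "particles" each tick
        history.append(substrate.state)
    return history

def run_without_swaps(s0, steps):
    state, history = s0, [s0]
    for t in range(steps):
        state = F(state, t)
        history.append(state)
    return history

assert run_with_swaps(9, 6) == run_without_swaps(9, 6)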

What there is a question about is the case where, for example, you simply destroy someone at the source and then randomly generate initial states of a brain in a body at the destination, not "letting it go" until the initial state matches the (now destroyed) source. Even in this case, I lean towards saying it would be the same instance, because the act of checking for a "match" is a deterministic link. The only case I can think of where it would NOT be the same instance is if you destroy the source, then branch off into multiple universes where a single random brain state is picked and loaded into the destination and the body is allowed to go. If one of those random picks happens to match the (now destroyed) source, would it be the same instance? I say NO, because there is no deterministic link in that case.
 
Nope. Materialism indicates that the self as a separate entity from matter DOES NOT EXIST. Indeed, the SELF is an EMERGENT property of the matter. And that is the ONLY thing materialism says, nothing more! Destroy the matter and you destroy the self. This is the WHOLE POINT I am trying to get across to you. Your understanding of the self from a materialist point of view is wrong or warped.

Not wanting my specific emergent property to be ended does not mean I imagine it is somehow not based on material interaction; it only means I do not want those material interactions to end!

Your teleporter experiment only proves that the emergent property known as the self does not want to be ended. It does not demonstrate *AT ALL* that the person doesn't hold to the materialist philosophy!

Well, the word "self" to me indicates a broader category of behaviours and concepts than I am here taking issue with. This is why I usually restrict myself to "experiencing self" in Transporter discussions. This apparent "experiencing self" is constructed through thinking - inner monologue that occurs usually briefly post hoc to sensory information processing. The action of this thinking is to construct the notion of there being an experiencing self - an entity that is experiencing life - and to relate what is apparently happening to it. But it is purely notional. There is not in material terms an actual self which experiences. Merely the story that one exists.

This notional experiencing self is of course an emergent phenomenon, but it is not just the fact of its emergence that means it can be identically replicated. It is also the fact that it does not actually exist in the sense in which it seems to exist.

So, understanding this, one can see that because there is not in actuality an experiencing self, merely the story of one existing, it can be replicated. Just as a story can be replicated and the original book destroyed, without any loss of meaning, so the same can happen to the notional experiencing self being constructed in our own heads.

Nick
 
But there is not in actuality a self which is enjoying it, Aepervius. This is what materialism indicates. Thus your refusal to me indicates that you do not accept a materialist understanding of self.

Although I tend to agree with you, I'm not sure I entirely buy this.

I think it might be helpful to think of death as an action -- the last act you will perform. It is not a mere cessation of further life-events. Sugar-coating death by making it quick and painless doesn't necessarily make it philosophically different from having an axe murderer burst into the room and hack you to pieces.

I'm not convinced it's automatically contradictory for a materialist to not want to die, regardless of whether there's a copy of them walking around or not.

Tak
 