
The Star Trek Transporter Enigma

I'm not convinced it's automatically contradictory for a materialist to not want to die, regardless of whether there's a copy of them walking around or not.

Tak

Is this technophobia? For some reason it reminds me of the arguments about man moving faster than a horse could run. "Surely, he will be unable to breathe at such speeds?"

When the transporter appears, people will use it. Frequently. Except for a few Luddites (though by then there will be a new term for Luddites).
 
I'm not convinced it's automatically contradictory for a materialist to not want to die, regardless of whether there's a copy of them walking around or not.

Tak

This is where you are missing the whole point. The copy IS me. This is why the people using the transporter don't mind dying, because they won't be dead for very long. Or even if they are dead for very very long, they won't notice it.

It seems like you and several others are looking at it like we don't mind dying because, in some abstract sense, another version of us lives on.
If it is an exact copy, then it IS ME.

This is why we appear not to care about death: not because we are so noble as to be satisfied with a copy of us living on, but because we believe that the death will only be temporary, until we are rebuilt elsewhere by the transporter.
 
This is where you are missing the whole point. The copy IS me. This is why the people using the transporter don't mind dying, because they won't be dead for very long. Or even if they are dead for very very long, they won't notice it.

It seems like you and several others are looking at it like we don't mind dying because, in some abstract sense, another version of us lives on.
If it is an exact copy, then it IS ME.

This is why we appear not to care about death: not because we are so noble as to be satisfied with a copy of us living on, but because we believe that the death will only be temporary, until we are rebuilt elsewhere by the transporter.

Suppose it were proven, without any doubt, that the universe is, in fact, infinite in size. Hence whatever exists locally will, inevitably, be exactly duplicated somewhere, some unimaginable distance away. Would the fact that some exact duplicate undoubtedly exists somewhere else mean that you don't care whether you personally live?

In fact I don't think that once two separate instances of a person exist that there is any question about it. If I'm alive somewhere, the existence of another version of "me" somewhere else has no more to do with it than the existence of two protons. The two protons might be identical in all respects except location, but we don't consider that they are the same proton.
 
Suppose it were proven, without any doubt, that the universe is, in fact, infinite in size. Hence whatever exists locally will, inevitably, be exactly duplicated somewhere, some unimaginable distance away.

While this is a common belief, it is logically incorrect.

I can have infinite apples, only 4 watermelons, and no oranges.
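(A standard counterexample, for what it's worth: an infinite, non-repeating object need not contain every finite pattern. The binary expansion

x = 0.101001000100001...

-- a 1, then one 0, a 1, then two 0s, then three, and so on -- goes on forever without ever repeating, yet the block "11" never appears in it, let alone every possible configuration.)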
 
Suppose it were proven, without any doubt, that the universe is, in fact, infinite in size. Hence whatever exists locally will, inevitably, be exactly duplicated somewhere, some unimaginable distance away. Would the fact that some exact duplicate undoubtedly exists somewhere else mean that you don't care whether you personally live?

In fact I don't think that once two separate instances of a person exist that there is any question about it. If I'm alive somewhere, the existence of another version of "me" somewhere else has no more to do with it than the existence of two protons. The two protons might be identical in all respects except location, but we don't consider that they are the same proton.

Some may choose to swallow the all-in-one-pill and leave the 5 course haute-cuisine to us technophobes :D
 
If I'm alive somewhere, the existence of another version of "me" somewhere else has no more to do with it than the existence of two protons. The two protons might be identical in all respects except location, but we don't consider that they are the same proton.

If you are alive, yes, it has nothing to do with you. If you are dead, however, and then this copy comes into existence somewhere else, I don't see how that would be any different, from your point of view, from falling asleep and waking up somewhere else.
 
The two protons might be identical in all respects except location, but we don't consider that they are the same proton.

Unless everything you do to one proton results in an identical behavior change for the other one.

What do we "consider" them, then?
 
I think it might be helpful to think of death as an action-- the last act you will perform. It is not a mere cessation of further life-events. Sugar-coating death by making it quick and painless doesn't necessarily make it philosophically different from having an axe murderer burst into the room and hack you to pieces.

I'm not convinced it's automatically contradictory for a materialist to not want to die, regardless of whether there's a copy of them walking around or not.

For me it's clear that we are beings created through natural selection. So, of course, we don't want to die. That's the tendency. And, even now, we don't actually have Transporters. They have merely entered our consciousness at a conceptual level.

Thus, the immediate reaction is to think "No way am I going to push that button!" It's natural, as I see it.

But... there is also the rational mind. And it is capable of overriding our fears. And, for me, if you say you are a materialist then this means you also apply the principles of materialism to what you conceive or identify as "yourself." Meaning you have to accept, as I see it, that there can be nothing lost when your body is destroyed and then perfectly replicated. It might feel like "Whoa, this is scary, it will not be me that emerges from the Transporter pod, but just a copy." But if you believe materialism to be correct, you must accept that this feeling is just a part of being the product of natural selection and not necessarily representative of reality.

If you're really a materialist you have to push the button.

Nick
 
Suppose it were proven, without any doubt, that the universe is, in fact, infinite in size. Hence whatever exists locally will, inevitably, be exactly duplicated somewhere, some unimaginable distance away. Would the fact that some exact duplicate undoubtedly exists somewhere else mean that you don't care whether you personally live?

For me what makes the Transporter meaningful is that Nick227 wants to go to the place where Nick228 will emerge.

And, also, I have questions about this identical me. OK, so the body is the same, but is the environment it grew up in and all the events of its life identical? And are there forces specific to an individual location that will have created a difference? Because even in an infinite universe you only have one of each place.

Nick
 
This is where you are missing the whole point. The copy IS me. This is why the people using the transporter don't mind dying, because they won't be dead for very long. Or even if they are dead for very very long, they won't notice it.

No, I do understand your position - as a materialist, your materially-identical copy is you in every sense that matters. As a matter of fact, I mostly agree; I'm just arguing on behalf of materialists who object, because I don't think it necessarily proves them to be non-materialists.

It seems like you and several others are looking at it like we don't mind dying because, in some abstract sense, another version of us lives on.

Well, they're both you. And if you threaten to kill one of them, I don't think (s)he needs to be a non-materialist to object.
 
While this is a common belief, it is logically incorrect.

I can have infinite apples, only 4 watermelons, and no oranges.

I'm making certain assumptions about the nature of the infinite universe - particularly, that it is homogeneous. Add whatever restrictions you feel are needed to obtain certainty.
 
No, I do understand your position - as a materialist, your materially-identical copy is you in every sense that matters.

As a materialist, how do you determine what "matters"? How do you determine what "me" is?
 
As a materialist, how do you determine what "matters"? How do you determine what "me" is?

Well, that's a whole topic in its own right, but I think we're treating it as one of the preconditions in this discussion: it's presumed that we have a technology that creates exact, identical physical copies of people.

However, it might be important to specify whether this technology requires the destruction of the original, or if it can make a copy without affecting the original at all. It seems to me the two situations lead to two different ethical dilemmas.
 
I understand it as "if you expand your notion of 'self' to include your copies, then you shouldn't be bothered by the destruction of your particular 'self' (embodied consciousness)"; however, you may be right, I haven't looked at it in detail.

Well, "expanding your notion of self" is one way. Another for me is to really ask yourself what you believe to be changed, from the copy to the original. In my experience, if you remove the notion of a dying observer (which is what materialism indicates) it gets easier!

Even without the "dying observer", I'm still stuck on the deactivating local system (its relevance, according to emergent materialism).

Yes, it's certainly my own "personal" view. :D I won't argue that.

I will argue that in emergent materialism the "self" (or consciousness or person or whatever) that emerges corresponds to a distinct active material system. It can be copied, but the activities of the copy are the copy's activities, not the original's.

But are you saying that the copy is in any way different, aside from being in a different environment now?

In terms of its properties, no.

If the body is identically replicated at the point of transfer, how would anything be different from a scenario where the original body could somehow be moved, at light speed, to the destination?

In the light speed transport, there would have been no deactivation event for the local system, no loss of physical integrity, no cessation of conscious activity (aside from whatever happened to the body during the transport).

Do you accept that, technology permitting, there is no difference between destruction and recreation at the destination, and travelling at light speed to the destination, assuming no change occurs to the original during travel?

Nick

No. There is the difference of destruction (of integrity) and recreation (of integrity). Without integrity, a local system (and its properties) ceases to exist. To duplicate that integrity is to create another system.

Take a hammer to a clock and it loses the integrity required to tell time. Toss it in the trash and build another clock. Set the new clock to the time the trashed clock would have been telling if you hadn't trashed it. The new clock (whether or not it's materially identical: one might have hands and the other a digital display; so long as it's functionally identical: i.e., ticks at the same rate) is telling time in the same way (actively "conscious" of the passing of time at the same rate); but, according to special relativity, unless it is in precisely the same place the trashed clock was, it is not telling the same time; furthermore, since it is a different clock, even though it is telling time in the same way, and even if we assume it is in precisely the same place so that it is telling the same time, it is not the same telling of time (the activity of the trashed clock having ceased, temporarily).

Same with human as clock consciousness (for what else are we conscious of but the changing of space, which is the passing of time, and vice versa; therefore... probably)? :cuckooclo
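(If it helps, the clock case in a few lines of toy code -- Python, purely illustrative; the Clock class is made up for the sketch:)

# Toy model of the trashed-and-rebuilt clock (illustrative only).
class Clock:
    def __init__(self, start=0):
        self.t = start
    def tick(self):
        self.t += 1

old = Clock()
for _ in range(100):
    old.tick()                # the old clock's telling of time

new = Clock(start=old.t)      # set the new clock to the time the old
del old                       # one would have shown; trash the old one

new.tick()                    # functionally identical ticking, same
                              # reading -- but a numerically distinct
                              # activity: another telling of time, not
                              # the same telling resumed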


This makes, by these criteria, an instance of an algorithm identical with the algorithm (class); that is, it makes the logical or conceptual description identical with the physical or material process it describes, which raises some prima facie objections. Such as:

If the material instance is the same as its descriptive class, then we shouldn't be able to distinguish between them. There should be no way to tell a material instance of consciousness from an equivalent description of that consciousness (that is, the logical relations, the potential instances, it describes; note every class describes potential instances; moreover, an identity class, because it is restricted to a unique potential instance, is that potential instance). If an instance of the consciousness algorithm is identical to the consciousness algorithm as a descriptive class, then the potential instance of consciousness is identical to the actual instance.

Clearly, a person's consciousness is an actual instance of the class that describes it. Since the potential and actual instance are identical, there should be no way to tell them apart. If that's the case, then we should be able to do anything with a potential instance of a given consciousness that we can with an actual instance of a given consciousness: talk, picnic, go scuba-diving, get to know it better... spend some quality time with it. Yet we can't. Why not?

Maybe because the unique potential instance is not identical to an arbitrary material instance. Maybe the material embodiment, the particular material system where the consciousness is active, where the potential becomes actual, the class uniquely instantiated, shouldn't be taken for granted.

That's an excellent description of your position, btw. If I'm not mistaken (wouldn't be the first time), from a materialist pov, I think the obvious objection to it, as I've outlined in the reductio ad absurdum above, is its attempt at a matter-less account of consciousness.

But "classes" don't exist in and of themselves. "class" is just a way to partition systems. To say a system is an instance of a class only means that it is a member of a partition of the set of all systems, the partition which we can describe using a class description -- but that "class" thing doesn't exist out in the void, just waiting for something to instantiate it.

Not under materialism, no. Classes existing [prior to and more causal than matter, as prescriptions rather than descriptions] "out in the void" is idealism.

So an algorithm doesn't really exist at all apart from systems that instantiate it. If you have a book that describes the algorithm, that is nothing more than exactly what it is called -- a description. The algorithm itself only exists when it is instantiated upon some system.

It exists in the book (referring to potential instances). It goes from being a description (existing in a book) of no active material system to being a description (existing in a book) of an active material system.

Now, if something were to read (perhaps "process" is a better word, since I don't want to imply a human "reader") that book in a way isomorphic to the running of the algorithm, e.g. "if the result is 2356 then turn to page 326262 and proceed," etc., then the algorithm would actually exist -- in instance form -- and you would be able to
talk, picnic, go scuba-diving, get to know it better... spend some quality time with it
, right?

With the locally active material system that the algorithm in the book describes, yes.
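(To make the book-processing idea concrete, a minimal sketch -- Python, purely illustrative; the little transition table is a made-up stand-in for the book's "if the result is X, turn to page Y" rules:)

# A "book": a static table of rules describing an algorithm.
book = {
    "page_1": ("page_2", "add one"),
    "page_2": ("page_3", "double it"),
    "page_3": (None, "halt"),
}

def process(book, page="page_1"):
    # Reading the book in a way isomorphic to running the algorithm:
    # following each rule actually executes a state transition, so a
    # locally active instance of the algorithm now exists here.
    trace = []
    while page is not None:
        next_page, action = book[page]
        trace.append((page, action))
        page = next_page
    return trace

print(process(book))  # the table merely describes; process() instantiates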

Thus, as far as the transporter is concerned, the question is whether or not swapping out all the material of an instance necessarily "changes" the instance we are looking at. In other words, if the instantaneous state of your brain combined with the laws of physics and any new input (although that can be roped into the state) would result in a given state one Planck time in the future, would it matter if we swapped out every single particle and initialized them to this future state?

So long as the swapping out doesn't destroy the active integrity of the local system, I don't think it does, in theory (because, in theory, this may be happening all the time; in fact, a gradual version of this is happening locally all the time, over time).

I say no, it does not matter. You could continually swap out particles and as long as the "swap" didn't change the sequence of state transitions that is essential to the algorithm it would always be the same instance. How could it not be the same instance, given that each step is determined by the previous one?

It depends on the level of detail of the algorithm's description. One could change the material instance to execute the algorithm more or less efficiently, using different routines beneath the level of the algorithm's description. This complicates the question, of course. The same algorithm is being executed by a different local material system. It seems it's still the same locally continuous instance of the algorithm, but its material composition has been altered beyond a simple one-to-one swapping out of particles. So an altered system, a different system, it seems. Definitions become fuzzy here, as this situation isn't encountered much, or recognized as such when it is.

In the case of the altered system, I would say at the level of functionality it is the same instance; at the level of execution sequence it is a different instance; while in the case of the swapped system, it is the same instance at both levels. In either case, however, at the level of active integrity, which may also be relevant to consciousness, it's a different instance each time the algorithm is run (sometimes crashing, sometimes not, in my pc experience).
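(The swapped-out-particles case as a toy model -- Python, illustrative only, with a made-up transition law:)

# A deterministic system whose "particles" are either updated in place
# or replaced outright by fresh objects initialized to the determined
# next state.
class Particle:
    def __init__(self, value):
        self.value = value

def law(state):
    return (state * 3 + 1) % 17  # the deterministic state transition

def run(initial, steps, swap):
    p = Particle(initial)
    sequence = [p.value]
    for _ in range(steps):
        nxt = law(p.value)
        if swap:
            p = Particle(nxt)    # brand-new "matter", determined state
        else:
            p.value = nxt        # same "matter", updated in place
        sequence.append(p.value)
    return sequence

# At the level of execution sequence, the instance is the same whether
# or not the substrate is swapped at every step:
assert run(5, 10, swap=False) == run(5, 10, swap=True)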

In other words, the fact that the source and destination are deterministically linked is what makes it the same instance and the same consciousness. I don't think there is even a question about this.

As long as the universe 'knows' about the link. But under materialism, I'm not sure how it could, or why it should (here putting 'knows' in scare quotes is facetious shorthand for "as long as the determined state transition is causally efficacious"; with the possible exception of quantum entanglement, all causality is local, dependent on spatiotemporal adjacency, in the observable universe. There is still the question then of local causality to overcome.)

What there is a question about is, for example, whether it would be the same instance if you simply destroy someone at the source and then randomly generate initial states of a brain in a body at the destination, and don't "let it go" until the initial state matches the (now destroyed) source. Even in this case, I lean towards saying it would be the same instance, because the act of checking for a "match" is a deterministic link. The only case I can think of where it would NOT be the same instance is if you destroy the source, then branch off into multiple universes where a single random brain state is picked and loaded into the destination and the body is allowed to go. If one of those random picks happens to match the (now destroyed) source, would it be the same instance? I say NO, because there is no deterministic link in that case.

So the causality is only local to one universe within the multiverse: that is, global within some one select universe, but not the entire multiverse? Hmm... this is really getting messy.

As for the random initial states hypothetical, I agree that by your reasoning they should be the same instance. But I think, as outlined above, there are problems with said reasoning (besides some possibly still unresolved points of algorithm ontology, issues of levels of instance integrity re algorithms, and global causality for deterministic state transitions, in 'this' universe and across the multiverse).

And apologies in advance if the language seems to be getting bogged down in abstraction; your hypothetical raises several interesting and detailed points which I can't respond to in earnest without some philosophical jargon, unfortunately (owing also to my being too dim-witted to write more gooder, no doubt). :o
 
Well, that's a whole topic in its own right, but I think we're treating it as one of the preconditions in this discussion: it's presumed that we have a technology that creates exact, identical physical copies of people.

However, it might be important to specify whether this technology requires the destruction of the original, or if it can make a copy without affecting the original at all. It seems to me the two situations lead to two different ethical dilemmas.

Well, there's a very simple principle which we can use. We use it all the time, without even thinking about it. Do I own two Far Side mugs, or only one? Well, if the mugs are in two different places at the same time, I own two of them. The rest of the stuff about the persistence of the "me" and "is it still you" isn't relevant. Just count the number of people. That two people look the same has nothing to do with it.
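(Programmers make exactly this distinction between equality and identity; a two-assert illustration in Python, mug details invented:)

mug_a = {"design": "Far Side", "color": "white"}
mug_b = {"design": "Far Side", "color": "white"}
assert mug_a == mug_b        # identical in every describable respect
assert mug_a is not mug_b    # but two objects: count them, there are two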
 
It exists in the book (referring to potential instances). It goes from being a description (existing in a book) of no active material system to being a description (existing in a book) of an active material system.

We are just using different terminology. My "description of an algorithm" == your "algorithm" and my "algorithm" == your "instance of an algorithm."

I will adopt your version to aid communication.

It depends on the level of detail of the algorithm's description. One could change the material instance to execute the algorithm more or less efficiently, using different routines beneath the level of the algorithm's description. This complicates the question, of course. The same algorithm is being executed by a different local material system. It seems it's still the same locally continuous instance of the algorithm, but its material composition has been altered beyond a simple one-to-one swapping out of particles. So an altered system, a different system, it seems. Definitions become fuzzy here, as this situation isn't encountered much, or recognized as such when it is.

In the case of the altered system, I would say at the level of functionality it is the same instance; at the level of execution sequence it is a different instance;

We agree completely here. I think the assumption in the transporter scenario is that only "routines" below the level of detail necessary to capture -- in full -- consciousness are omitted from the description.

In other words, that any change in instance due to execution sequence would be irrelevant.

Note that this is trivially feasible to achieve because, even if the full state vector of the set of all particles at every Planck time interval is required, we can always just simulate the whole shebang rather than taking any shortcuts.

while in the case of the swapped system, it is the same instance at both levels.

So you agree that if there was a magic teleporter that could simply swap the system in an instant and then move the particles to the destination in an instant, it would be the same instance both functionally and sequentially?

In either case, however, at the level of active integrity, which may also be relevant to consciousness, it's a different instance each time the algorithm is run (sometimes crashing, sometimes not, in my pc experience).

Yes I agree.

What do you mean by "active integrity?" Do you mean the causal efficacy of each state transition in the system? Because then I think we agree completely.

As long as the universe 'knows' about the link. But under materialism, I'm not sure how it could, or why it should (here putting 'knows' in scare quotes is facetious shorthand for "as long as the determined state transition is causally efficacious"; with the possible exception of quantum entanglement, all causality is local, dependent on spatiotemporal adjacency, in the observable universe. There is still the question then of local causality to overcome.)

Yes, that is a concern. But I think the assumption in the experiment is that there is some causal efficacy, somehow. That could be entanglement (for it to work like it does in the movies it will need to be) or it could just be standard EM radiation communication between the source and destination.

If we assume that some fancy computer stuff is done at the source, to record the information needed, then sent via radiowaves to the destination, then fancy stuff rebuilds the instance at the next (or same) state of the instance, the only question remaining between us (it seems) is whether or not the functionally same instance is the same consciousness. I think we have certainly established that at some level it is indeed the same instance of the algorithm, or you it is not a separate instance according to all criteria for "instance."

So the causality is only local to one universe within the multiverse: that is, global within some one select universe, but not the entire multiverse? Hmm... this is really getting messy.

No I am saying that causality is no longer there, period. I mean, some is still there, but only in the form of "transporter is used, so get a random system configuration."

If that happens, and just by luck the proper configuration is randomly generated, I do not consider that the same instance. There is no causal efficacy, as you would say, between the state vector prior to the copy and the state vector afterwards.
 
It exists in the book (referring to potential instances). It goes from being a description (existing in a book) of no active material system to being a description (existing in a book) of an active material system.

We are just using different terminology. My "description of an algorithm" == your "algorithm" and my "algorithm" == your "instance of an algorithm."

I will adopt your version to aid communication.

Well, this is where we have to be extremely careful to make absolutely sure we are talking about the same things (which is the often boring business of philosophy: refining and extending an everyday language of thousands of words to describe billions of phenomena).

Very precisely, by "algorithm in a book" I mean the set of symbols that defines the "algorithm". Usually we conflate them, as I do in the quote above to speed communication. But to be very precise, there are three levels I want to talk about: the algorithm (the set of potential instances); the description of the algorithm (one of any set of symbols that try to define the algorithm); and material instances of the algorithm (material systems that match a potential instance wherein the algorithm may be executed). (What's worse, there are at least three semantic levels to the material instances that we have seen so far (there may be more): functional (what does it do); execution sequence, which is really a subclass of structural (how is it realized); and active integrity / continuous spatiotemporal (one run at a time). But that's an aside for now.)

Given this analysis, I think "your description of an algorithm" and "my description of an algorithm" are the same: that is, any set of symbols which define and communicate the algorithm; and I agree, in the way you are using it, "your algorithm" = "my instance of an algorithm"; however, my analysis distinguishes between potential instances, the set of which is the algorithm, and actual instances. Potential instances exist logically, at the level of logical relationships, of ideal definition, of "class". They can only be pointed to by symbols (think fictional characters in a story, numbers, etc.). Actual instances exist materially, in our physical world (versus the immaterial, logical world, which is the set of all describable worlds, many of which may be impossible, physically and/or logically). Where I distinguish the purely potential / logical / immaterial from the actual / physical / material, you do not (or at least haven't been, afaics).

In short, the symbols for the algorithm describe the idea of the algorithm, which is the set of all potential instances of the algorithm; the idea of the algorithm describes potential instances of the algorithm, and of course actual instances (for each actual instance matches a potential instance). And distributively, since the symbols describe the algorithm, and the algorithm describes the instances, the symbols, via the algorithm, describe the instances. How's that for confusing?
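(A rough programming analogy, for whatever it's worth -- not a perfect mapping, and the names are invented for the sketch:)

source = 'class Counter: ...'  # the SYMBOLS: a description that merely
                               # points to the idea, like the book's text

class Counter:                 # the CLASS: the set of potential instances
    def __init__(self):
        self.n = 0

a = Counter()                  # ACTUAL instances: particular, locally
b = Counter()                  # existing things
a.n = 5
assert a.n != b.n              # same class, distinct instances: what you
                               # do with one leaves the other untouched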

Why is this important? Beyond the obvious -- to avoid mistaking different levels, confusing the symbol for the idea, the idea for the thing, the thing for the symbol -- it allows us to talk about ideal, imaginary things, like the number two, the Wizard of Oz, etc. Looking back a few posts, I think I see the source of the confusion (my fault, as usual): You said:
But "classes" don't exist in and of themselves. "class" is just a way to partition systems. To say a system is an instance of a class only means that it is a member of a partition of the set of all systems, the partition which we can describe using a class description -- but that "class" thing doesn't exist out in the void, just waiting for something to instantiate it.
I replied:
Not under materialism, no. Classes existing [prior to and more causal than matter, as prescriptions rather than descriptions] "out in the void" is idealism.
-- in response to the second sentence, about classes existing in the void, which I took to imply existing materially, and the whole quote as referring to material existence. I should have read more carefully, or been more specific. It's right to say that classes don't have a material existence. But they do have a logical existence (meaning we can imagine them, and distinguish between them in order to talk about them). An algorithm is a class, a set of potential mechanical sequences, and has this sort of existence, a logical existence, as opposed to a material existence. So what's the difference? What is a "logical existence"?

That's been a thorny issue in philosophy for two and a half millennia, but to cut to the chase, logical existence refers to classes, descriptions, definitions, ideal entities, perfect paradigm cases, members of classes, potential instances: it's the sort of existence a line has (not drawn on a piece of paper or whatever, that's merely a representation (symbol) of the line, drawn to give us some idea of the line; we can't actually draw the ideal entity that is the defined "line", because it has no breadth, and is infinitely long, by definition; it has no actual instances, there are no infinitely long, breadthless, perfectly straight lines; however, we can communicate the idea of the "line" with appropriate symbols, define it, logically, ideally, and make use of that logical definition).

Now, an algorithm is a description, a logical definition, a class or set of potential instances, and therefore exists in the same way a line does, logically. So when I distinguish between the actual, physical instances of the algorithm and the potential, logical instances, I'm making a distinction of that sort: between an actual, physical, material instance and a potential, logical, ideal instance. The "algorithm", properly speaking, is the class, its members, the set of potential instances (or in the case of single membership, the unique potential instance) only. It's a mistake to conflate it with the actual instance, to mix logical and physical, a mistake that can lead to all sorts of paradoxes, of which the teleporter may be one.

Key-rye-st what a mess! I know that's by far not the greatest explanation ever. But that's the best I can sort it out right now. Sorry for any added confusion. If you've ever studied semiotics, it covers a lot of the same ground in distinguishing signifier (symbol), signified (idea: class of potential instances), and referent (actual instance).

It depends on the level of detail of the algorithm's description. One could change the material instance to execute the algorithm more or less efficiently, using different routines beneath the level of the algorithm's description. This complicates the question, of course. The same algorithm is being executed by a different local material system. It seems it's still the same locally continuous instance of the algorithm, but its material composition has been altered beyond a simple one-to-one swapping out of particles. So an altered system, a different system, it seems. Definitions become fuzzy here, as this situation isn't encountered much, or recognized as such when it is.

In the case of the altered system, I would say at the level of functionality it is the same instance; at the level of execution sequence it is a different instance;

We agree completely here. I think the assumption in the transporter scenario is that only "routines" below the level of detail necessary to capture -- in full -- consciousness are omitted from the description.

In other words, that any change in instance due to execution sequence would be irrelevant.

Note that this is trivially feasible to achieve because, even if the full state vector of the set of all particles at every Planck time interval is required, we can always just simulate the whole shebang rather than taking any shortcuts.

Well, assuming you don't cross the threshold into quantum uncertainty, there might be state changes in consciousness in the time required to get the information (even at scales above quantum we'd still need to ensure that the means of observing didn't alter the state). But we'll let Scotty and his class of teleporter engineers worry about all that.

while in the case of the swapped system, it is the same instance at both levels.

So you agree that if there was a magic teleporter that could simply swap the system in an instant and then move the particles to the destination in an instant, it would be the same instance both functionally and sequentially?

Yes, I think so. If by "an instant" you mean a time interval so small it might as well be zero, then yes, as the destruction and replication events would occur in no time. But events taking no time aren't possible events -- events can be defined as "instantaneous" in calculus, but this merely sets a mathematical limit, based on infinite time-slices, we never actually reach -- so the instant teleportation hypothetical is really just saying: "take someone who's here; now imagine she's not here, she's over there, instantly; is it the same person?" Yes. Because in this fanciful case she wasn't destroyed and replicated, just imagined here, and then there. So logically, it's a purely imaginary case, and an impossible class at that, with a described candidate for membership, but no logically possible membership, no potential instance; thus, an empty set, without potential, or of course, actual instances.

In either case, however, at the level of active integrity, which may also be relevant to consciousness, it's a different instance each time the algorithm is run (sometimes crashing, sometimes not, in my pc experience).

Yes I agree.

What do you mean by "active integrity?" Do you mean the causal efficacy of each state transition in the system? Because then I think we agree completely.

No, I mean a spatiotemporally continuous execution, from activation to deactivation. (Unless something goes wrong with the execution, the state transitions will imply the causal links in the system, at least for their level of description).

As long as the universe 'knows' about the link. But under materialism, I'm not sure how it could, or why it should (here putting 'knows' in scare quotes is facetious shorthand for "as long as the determined state transition is causally efficacious"; with the possible exception of quantum entanglement, all causality is local, dependent on spatiotemporal adjacency, in the observable universe. There is still the question then of local causality to overcome.)

Yes, that is a concern. But I think the assumption in the experiment is that there is some causal efficacy, somehow. That could be entanglement (for it to work like it does in the movies it will need to be) or it could just be standard EM radiation communication between the source and destination.

Well, the EM radiation communication would face relativity's limit on lightspeed; and entanglement would have to work above the quantum scale (if we assume "the consciousness" being teleported is functional at either end above the quantum scale, which we'd have to for determinism to hold for the state transitions).

If we assume that some fancy computer stuff is done at the source, to record the information needed, then sent via radiowaves to the destination, then fancy stuff rebuilds the instance at the next (or same) state of the instance, the only question remaining between us (it seems) is whether or not the functionally same instance is the same consciousness. I think we have certainly established that at some level it is indeed the same instance of the algorithm, or you it is not a separate instance according to all criteria for "instance."

Yes, I think you mean "[f]or it is not a separate instance according to all criteria for 'instance'"; however, it is separate by some criteria, possibly crucial criteria, as outlined in the discussion of the extra distinctions of "instance" above.

So the causality is only local to one universe within the multiverse: that is, global within some one select universe, but not the entire multiverse? Hmm... this is really getting messy.

No I am saying that causality is no longer there, period. I mean, some is still there, but only in the form of "transporter is used, so get a random system configuration."

If that happens, and just by luck the proper configuration is randomly generated, I do not consider that the same instance. There is no causal efficacy, as you would say, between the state vector prior to the copy and the state vector afterwards.

So would you say the difference is expectation? (This is a provocative and puzzling case, because by "causal efficacy", which is a stupid-sounding phrase I must admit, though I can't think of a better one, I really mean "causal in the way intended, expected, planned, etc." To distinguish it from causality that has effects which are useless to us because they aren't the effects we thought would happen).
 
Well, this is where we have to be extremely careful to make absolutely sure we are talking about the same things (which is the often boring business of philosophy: refining and extending an everyday language of thousands of words to describe billions of phenomena).

Very precisely, by "algorithm in a book" I mean the set of symbols that defines the "algorithm". Usually we conflate them, as I do in the quote above to speed communication. But to be very precise, there are three levels I want to talk about: the algorithm (the set of potential instances); the description of the algorithm (one of any set of symbols that try to define the algorithm); and material instances of the algorithm (material systems that match a potential instance wherein the algorithm may be executed). (What's worse, there are at least three semantic levels to the material instances that we have seen so far (there may be more): functional (what does it do); execution sequence, which is really a subclass of structural (how is it realized); and active integrity / continuous spatiotemporal (one run at a time). But that's an aside for now.)

Given this analysis, I think "your description of an algorithm" and "my description of an algorithm" are the same: that is, any set of symbols which define and communicate the algorithm; and I agree, in the way you are using it, "your algorithm" = "my instance of an algorithm"; however, my analysis distinguishes between potential instances, the set of which is the algorithm, and actual instances. Potential instances exist logically, at the level of logical relationships, of ideal definition, of "class". They can only be pointed to by symbols (think fictional characters in a story, numbers, etc.). Actual instances exist materially, in our physical world (versus the immaterial, logical world, which is the set of all describable worlds, many of which may be impossible, physically and/or logically). Where I distinguish the purely potential / logical / immaterial from the actual / physical / material, you do not (or at least haven't been, afaics).

In short, the symbols for the algorithm describe the idea of the algorithm, which is the set of all potential instances of the algorithm; the idea of the algorithm describes potential instances of the algorithm, and of course actual instances (for each actual instance matches a potential instance). And distributively, since the symbols describe the algorithm, and the algorithm describes the instances, the symbols, via the algorithm, describe the instances. How's that for confusing?

Why is this important? Beyond the obvious -- to avoid mistaking different levels, confusing the symbol for the idea, the idea for the thing, the thing for the symbol -- it allows us to talk about ideal, imaginary things, like the number two, the Wizard of Oz, etc. Looking back a few posts, I think I see the source of the confusion (my fault, as usual): You said:

But "classes" don't exist in and of themselves. "class" is just a way to partition systems. To say a system is an instance of a class only means that it is a member of a partition of the set of all systems, the partition which we can describe using a class description -- but that "class" thing doesn't exist out in the void, just waiting for something to instantiate it.

I replied:

Not under materialism, no. Classes existing [prior to and more causal than matter, as prescriptions rather than descriptions] "out in the void" is idealism.

-- in response to the second sentence, about classes existing in the void, which I took to imply existing materially, and the whole quote as referring to material existence. I should have read more carefully, or been more specific. It's right to say that classes don't have a material existence. But they do have a logical existence (meaning we can imagine them, and distinguish between them in order to talk about them). An algorithm is a class, a set of potential mechanical sequences, and has this sort of existence, a logical existence, as opposed to a material existence. So what's the difference? What is a "logical existence"?

That's been a thorny issue in philosophy for two and a half millennia, but to cut to the chase, logical existence refers to classes, descriptions, definitions, ideal entities, perfect paradigm cases, members of classes, potential instances: it's the sort of existence a line has (not drawn on a piece of paper or whatever, that's merely a representation (symbol) of the line, drawn to give us some idea of the line; we can't actually draw the ideal entity that is the defined "line", because it has no breadth, and is infinitely long, by definition; it has no actual instances, there are no infinitely long, breadthless, perfectly straight lines; however, we can communicate the idea of the "line" with appropriate symbols, define it, logically, ideally, and make use of that logical definition).

Now, an algorithm is a description, a logical definition, a class or set of potential instances, and therefore exists in the same way a line does, logically. So when I distinguish between the actual, physical instances of the algorithm and the potential, logical instances, I'm making a distinction of that sort: between an actual, physical, material instance and a potential, logical, ideal instance. The "algorithm", properly speaking, is the class, its members, the set of potential instances (or in the case of single membership, the unique potential instance) only. It's a mistake to conflate it with the actual instance, to mix logical and physical, a mistake that can lead to all sorts of paradoxes, of which the teleporter may be one.

Key-rye-st what a mess! I know that's by far not the greatest explanation ever. But that's the best I can sort it out right now. Sorry for any added confusion. If you've ever studied semiotics, it covers a lot of the same ground in distinguishing signifier (symbol), signified (idea: class of potential instances), and referent (actual instance).

If an algorithm is a logical concept - and surely it is - then it cannot be instantiated in the physical world. A logical concept is an inexact description that we apply to objects and processes in the physical world. A computer doesn't bring an algorithm "into existence" any more than throwing a stone brings a parabola into existence. They are ways for us to understand the physical actions, whether of stones or electrons.
 
If an algorithm is a logical concept - and surely it is - then it cannot be instantiated in the physical world.

Good point, because in a manner of speaking -- yes and no. So it's a point of confusion I should try to clear up...

An algorithm is a class description. What does it describe? Itself: it defines a set whose members may be thought of as ideal instances that in turn potentially describe, match one-to-one in detail, an actual instance, at the level of detail of the algorithm, in whatever context we are concerned with (functional, structural, et al.). That's what I intend by a potential instance becoming an actual instance, a description only.

To avoid confusion, the "actual instance" may be called by its more traditional philosophical name, an object.

A logical concept is an inexact description that we apply to objects and processes in the physical world.

Well, I'd qualify that a bit. Logical concepts are exact, in the sense that there should be no disagreement over how many "two" is, because they are ideal; but they are incomplete as descriptions of our physical observations because they are a priori, prior to experience. Our observations are recorded in empirical concepts, empirical meaning from our senses, which are inexact -- what the hell is "cute", exactly? -- because they are a posteriori, from experience, and experience, especially when you throw in emotional valuation et al., is often a lot messier than the logical necessity we imagine (though not always: groups of "two" occur in both the logical and empirical worlds).

A computer doesn't bring an algorithm "into existence" any more than throwing a stone brings a parabola into existence.

Just to expand on this important point: the parabola approximates the stone's throw extremely well in a perfect vacuum -- the problem is finding one, of course -- i.e., the path is a parabola, empirically-speaking, but it's effectively impossible to describe that path to a mathematical precision, and owing to quantum fluctuations to even effect an ideal parabola if we could. The algorithm is an exact description of itself only, and only at the level of function. There are often processes that match that functionality when they work as we expect them to; however, in the real world, sometimes they don't.
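(Numerically the gap shows up as soon as you add any drag at all; a quick sketch -- Python, with a made-up drag coefficient:)

import math

# Ideal (vacuum) range vs. the same throw under crude linear drag.
g, v0, angle, k, dt = 9.81, 20.0, math.radians(45), 0.1, 0.001

def range_ideal():
    return v0 ** 2 * math.sin(2 * angle) / g  # textbook parabola result

def range_with_drag():
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    x = y = 0.0
    while y >= 0.0:
        vx -= k * vx * dt          # drag opposes horizontal motion
        vy -= (g + k * vy) * dt    # gravity plus drag on vertical motion
        x += vx * dt
        y += vy * dt
    return x

print(range_ideal(), range_with_drag())  # the dragged throw falls short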

They are ways for us to understand the physical actions, whether of stones or electrons.

Yes.
 
Well, that's a whole topic in its own right, but I think we're treating it as one of the preconditions in this discussion: it's presumed that we have a technology that creates exact, identical physical copies of people.

However, it might be important to specify whether this technology requires the destruction of the original, or if it can make a copy without affecting the original at all. It seems to me the two situations lead to two different ethical dilemmas.

One could postulate the person being sent back in time to materialise in an identical chamber - so that the experience of Mr A1 was identical to that of Mr A2. Would Mr A1 be willing to shoot himself in such a situation? Would he regard the existence of Mr A2 as even relevant?

Suppose the two identical copies were presented with identical guns that would allow them to shoot themselves in the head - but due to some clever quantum device (details left as an exercise for the class) only one of them will go off. Will both of them attempt to kill themselves, certain that if one dies, the other will survive?

Personally, I think that the fact that I belong to a class which has other members is a totally inadequate reason to end my existence, because I'm not conscious as a class, I'm conscious as a class instance. The fact that other members of the class exist is meaningless. There are plenty of other instances of the class Human, or Mammal, or Physical Object around. That doesn't mean that I regard my own unique existence as being redundant. My existence will still be unique, because I'm the only person at this location at this time. Somebody else is at a different location. He might be like me, but if he was me he'd be where I was. I find I'm always in one place at a time.
 
