It exists in the book (referring to potential instances). It goes from being a description, existing in a book, of no active material system to being a description, existing in a book, of an active material system.
We are just using different terminology. My "description of an algorithm" == your "algorithm" and my "algorithm" == your "instance of an algorithm."
I will adopt your version to aid communication.
Well, this is where we have to be extremely careful to make absolutely sure we are talking about the same things (which is the often boring business of philosophy: refining and extending an everyday language of thousands of words to describe billions of phenomena).
Very precisely, by "algorithm in a book" I mean the set of symbols that defines the "algorithm". Usually we conflate them, as I do in the quote above to speed communication. But to be very precise, there are three levels I want to talk about: the algorithm (the set of potential instances); the description of the algorithm (any one of the sets of symbols that try to define the algorithm); and material instances of the algorithm (material systems that match a potential instance, wherein the algorithm may be executed). (What's worse, there are at least three semantic levels to the material instances that we have seen so far, and there may be more: functional (what does it do); execution sequence, really a subclass of structural (how is it realized); and active integrity / continuous spatiotemporal existence (one run at a time). But that's an aside for now.)
Given this analysis, I think "your description of an algorithm" and "my description of an algorithm" are the same: that is, any set of symbols which defines and communicates the algorithm; and I agree, in the way you are using it, "your algorithm" = "my instance of an algorithm"; however, my analysis distinguishes between potential instances, the set of which is the algorithm, and actual instances. Potential instances exist logically, at the level of logical relationships, of ideal definition, of "class". They can only be pointed to by symbols (think of fictional characters in a story, numbers, etc.). Actual instances exist materially, in our physical world (versus the immaterial, logical world, which is the set of all describable worlds, many of which may be impossible, physically and/or logically). Where I distinguish the purely potential / logical / immaterial from the actual / physical / material, you do not (or at least haven't been doing so, afaics).
In short, the symbols for the algorithm describe the idea of the algorithm, which is the set of all potential instances of the algorithm; the idea of the algorithm describes the potential instances, and of course the actual instances (for each actual instance matches a potential instance). And transitively, since the symbols describe the algorithm, and the algorithm describes the instances, the symbols, via the algorithm, describe the instances. How's that for confusing?
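If it helps, here is the same three-way distinction in programming terms. This is a loose analogy of my own (in Python, with all names purely illustrative): the description is a string of symbols, the algorithm is what those symbols define (a function object standing in for the class of potential runs), and each call is one actual instance.

    import textwrap

    # Level 1: the description -- a set of symbols that defines the algorithm.
    description = textwrap.dedent("""
        def euclid_gcd(a, b):
            while b:
                a, b = b, a % b
            return a
    """)

    # Level 2: the algorithm -- what the symbols define: the whole class
    # of potential executions. Here a function object stands in for it.
    namespace = {}
    exec(description, namespace)
    euclid_gcd = namespace["euclid_gcd"]

    # Level 3: actual instances -- concrete executions on a real machine.
    # Each call is one material realization of one potential instance.
    run_one = euclid_gcd(48, 18)   # one actual instance
    run_two = euclid_gcd(48, 18)   # an identical result, but a distinct run
    print(run_one, run_two)        # 6 6

The analogy is imperfect (a function object is itself just more symbols in memory), but it keeps the three levels from collapsing into one another.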
Why is this important? Beyond the obvious -- to avoid mistaking different levels, confusing the symbol for the idea, the idea for the thing, the thing for the symbol -- it allows us to talk about ideal, imaginary things, like the number two, the Wizard of Oz, etc. Looking back a few posts, I think I see the source of the confusion (my fault, as usual): You said:
But "classes" don't exist in and of themselves. "class" is just a way to partition systems. To say a system is an instance of a class only means that it is a member of a partition of the set of all systems, the partition which we can describe using a class description -- but that "class" thing doesn't exist out in the void, just waiting for something to instantiate it.
I replied:
Not under materialism, no. Classes existing [prior to and more causal than matter, as prescriptions rather than descriptions] "out in the void" is idealism.
-- in response to the second sentence, about classes existing in the void, which I took to imply existing materially; and I took the whole quote as referring to material existence. I should have read more carefully, or been more specific. It's right to say that classes don't have a material existence. But they do have a logical existence (meaning we can imagine them, and distinguish between them in order to talk about them). An algorithm is a class, a set of potential mechanical sequences, and has this sort of existence, a logical existence, as opposed to a material existence. So what's the difference? What is a "logical existence"?
That's been a thorny issue in philosophy for two and a half millennia, but to cut to the chase, logical existence refers to classes, descriptions, definitions, ideal entities, perfect paradigm cases, members of classes, potential instances: it's the sort of existence a line has. (Not a line drawn on a piece of paper or whatever; that's merely a representation (symbol) of the line, drawn to give us some idea of it. We can't actually draw the ideal entity that is the defined "line", because it has no breadth, and is infinitely long, by definition. It has no actual instances; there are no infinitely long, breadthless, perfectly straight lines. However, we can communicate the idea of the "line" with appropriate symbols, define it, logically, ideally, and make use of that logical definition.)
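To make the "line" example concrete (standard notation, my addition, nothing from earlier in the thread), the geometer's line can be defined purely logically, as a set no pencil stroke ever instantiates:

\[
\ell = \{\, (x, y) \in \mathbb{R}^{2} : ax + by = c \,\}, \qquad (a, b) \neq (0, 0)
\]

Every mark on paper is a symbol for \(\ell\); none is an instance of it, since no physical mark is breadthless or infinitely extended.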
Now, an algorithm is a description, a logical definition, a class or set of potential instances, and therefore exists in the same way a line does, logically. So when I distinguish between the actual, physical instances of the algorithm and the potential, logical instances, I'm making a distinction of that sort: between an actual, physical, material instance and a potential, logical, ideal instance. The "algorithm", properly speaking, is the class, its members, the set of potential instances (or in the case of single membership, the unique potential instance) only. It's a mistake to conflate it with the actual instance, to mix logical and physical, a mistake that can lead to all sorts of paradoxes, of which the teleporter may be one.
Key-rye-st what a mess! I know that's far from the greatest explanation ever, but it's the best I can sort it out right now. Sorry for any added confusion. If you've ever studied semiotics, it covers a lot of the same ground in distinguishing signifier (symbol), signified (idea: the class of potential instances), and referent (actual instance).
It depends on the level of detail of the algorithm's description. One could change the material instance to execute the algorithm more or less efficiently, using different routines beneath the level of the algorithm's description. This complicates the question, of course. The same algorithm is being executed by a different local material system. It seems it's still the same locally continuous instance of the algorithm, but its material composition has been altered beyond a simple one-to-one swapping out of particles. So an altered system, a different system, it seems. Definitions become fuzzy here, as this situation isn't encountered much, nor recognized as such when it is.
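Here is a toy sketch of what I mean by "different routines beneath the level of the algorithm's description" (my own example, in Python, with invented names): two routines that are the same at the level of functionality but different at the level of execution sequence.

    # Same function, different execution sequences (illustrative only).

    def sum_iterative(n):
        # Sum 0..n by explicit accumulation: many state transitions.
        total = 0
        for i in range(n + 1):
            total += i
        return total

    def sum_closed_form(n):
        # Sum 0..n by Gauss's closed form: no loop at all.
        return n * (n + 1) // 2

    # Functionally indistinguishable from the outside...
    assert all(sum_iterative(n) == sum_closed_form(n) for n in range(100))
    # ...yet a trace of their state transitions differs at every step.

At the level of the description "returns the sum of 0 through n", swapping one routine for the other changes nothing; at the level of execution sequence, it changes everything.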
In the case of the altered system, I would say at the level of functionality it is the same instance; at the level of execution sequence it is a different instance;
We agree completely here. I think the assumption in the transporter scenario is that only "routines" below the level of detail necessary to capture -- in full -- consciousness are omitted from the description.
In other words, that any change in instance due to execution sequence would be irrelevant.
Note that this is trivially feasible to achieve, because even if the full state vector of the set of all particles at every Planck time interval is required, we can always just simulate the whole shebang rather than taking any shortcuts.
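Schematically (my sketch, in Python, with a toy transition rule invented for illustration): if the full state vector and the transition rule are captured, the destination can replay every step and arrive at a bit-identical state, with no shortcuts taken.

    # Illustrative only: deterministic replay of every state transition.

    def transition(state):
        # Stand-in for the physics over one time interval.
        return tuple((s * 31 + 7) % 1009 for s in state)

    def simulate(initial, steps):
        state = initial
        for _ in range(steps):
            state = transition(state)
        return state

    source_final = simulate((1, 2, 3), 10_000)
    destination_final = simulate((1, 2, 3), 10_000)  # same rule, same start
    assert source_final == destination_final         # bit-identical replay

Determinism of the transition rule is doing all the work here, which is exactly the assumption at issue.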
Well, assuming you don't cross the threshold into quantum uncertainty, there might be state changes in consciousness in the time required to get the information (even at scales above the quantum, we'd still need to ensure that the means of observing didn't alter the state). But we'll let Scotty and his class of teleporter engineers worry about all that.
while in the case of the swapped system, it is the same instance at both levels.
So you agree that if there was a magic teleporter that could simply swap the system in an instant and then move the particles to the destination in an instant, it would be the same instance both functionally and sequentially?
Yes, I think so. If by "an instant" you mean a time interval so small it might as well be zero, then yes, as the destruction and replication events would occur in no time. But events taking no time aren't possible events -- events can be defined as "instantaneous" in calculus, but this merely sets a mathematical limit, based on ever-finer time-slices, that we never actually reach -- so the instant teleportation hypothetical is really just saying: "take someone who's here; now imagine she's not here, she's over there, instantly; is it the same person?" Yes. Because in this fanciful case she wasn't destroyed and replicated, just imagined here, and then there. So logically, it's a purely imaginary case, and an impossible class at that, with a described candidate for membership but no logically possible membership, no potential instance; thus, an empty set, without potential or, of course, actual instances.
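To spell out the calculus point (standard notation, my addition): "instantaneous" is defined only as a limit over ever-shorter intervals, never as an interval of zero duration that actually elapses:

\[
v(t) = \lim_{\Delta t \to 0} \frac{x(t + \Delta t) - x(t)}{\Delta t}
\]

The quotient is defined for every \(\Delta t > 0\); the slice \(\Delta t = 0\) itself is never reached, so no physical event occupies zero time.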
In either case, however, at the level of active integrity, which may also be relevant to consciousness, it's a different instance each time the algorithm is run (sometimes crashing, sometimes not, in my pc experience).
Yes I agree.
What do you mean by "active integrity?" Do you mean the causal efficacy of each state transition in the system? Because then I think we agree completely.
No, I mean a spatiotemporally continuous execution, from activation to deactivation. (Unless something goes wrong with the execution, the state transitions will imply the causal links in the system, at least at their level of description.)
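Roughly, as a sketch (mine, in Python, with illustrative names): two runs of the same algorithm with the same input and the same output are still two distinct activation-to-deactivation events, disjoint in time.

    # Illustrative only: "active integrity" as one continuous run at a time.
    import time

    def run_algorithm(n):
        # One activation-to-deactivation execution, stamped in time.
        started = time.perf_counter()
        total = sum(range(n + 1))
        finished = time.perf_counter()
        return total, started, finished

    first = run_algorithm(100_000)
    second = run_algorithm(100_000)

    assert first[0] == second[0]    # same result: same functionality
    assert first[1:] != second[1:]  # but disjoint spans of time: two
                                    # instances at the level of active
                                    # integrity, not one.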
As long as the universe 'knows' about the link. But under materialism, I'm not sure how it could, or why it should. (Putting 'knows' in scare quotes here is facetious shorthand for "as long as the determined state transition is causally efficacious"; with the possible exception of quantum entanglement, all causality in the observable universe is local, dependent on spatiotemporal adjacency. There is still the question, then, of local causality to overcome.)
Yes, that is a concern. But I think the assumption in the experiment is that there is some causal efficacy, somehow. That could be entanglement (for it to work like it does in the movies, it will need to be), or it could just be standard EM radiation communication between the source and destination.
Well, the EM radiation communication would face relativity's limit on lightspeed; and entanglement would have to work above the quantum scale (if we assume "the consciousness" being teleported is functional at either end above the quantum scale, which we'd have to for determinism to hold for the state transitions).
If we assume that some fancy computer stuff is done at the source, to record the information needed, which is then sent via radio waves to the destination, where fancy stuff rebuilds the instance at the next (or same) state of the instance, the only question remaining between us (it seems) is whether or not the functionally same instance is the same consciousness. I think we have certainly established that at some level it is indeed the same instance of the algorithm, or you it is not a separate instance according to all criteria for "instance."
Yes, I think you mean "[f]or it is not a separate instance according to all criteria for 'instance'"; however, it is separate for some criteria, possibly crucial criteria, as outlined in the discussion of the extra distinctions of "instance" above.
So the causality is only local to one universe within the multiverse: that is, global within some one select universe, but not the entire multiverse? Hmm... this is really getting messy.
No, I am saying that causality is no longer there, period. I mean, some is still there, but only in the form of "transporter is used, so get a random system configuration."
If that happens, and just by luck the proper configuration is randomly generated, I do not consider that the same instance. There is no causal efficacy, as you would say, between the state vector prior to the copy and the state vector afterwards.
So would you say the difference is expectation? (This is a provocative and puzzling case, because by "causal efficacy" -- a stupid-sounding phrase, I admit, though I can't think of a better one -- I really mean "causal in the way intended, expected, planned, etc.", to distinguish it from causality whose effects are useless to us because they aren't the effects we thought would happen.)