rocketdodger
Philosopher
Joined: Jun 22, 2005
Messages: 6,946
Two instances of the same class. If that class is restricted to having only one member (an "identity class"), then all instances will be copies of each other (where "member" refers to a potential instance).
If it turns out it isn't. As far as I'm able to determine from slogging through the morass of JREF teleportation threads, those who have a problem with getting in the teleporter argue that consciousness is an active material process of a material instance of "a person", and that destroying that person destroys that process. Those who don't have a problem argue that consciousness is the description of the material instance of "a person" and his or her active material processes (consciousness included) -- in other words, the class or idea of this person. On that view, this person's consciousness, which many think of as (and which may be) nothing more than the active material process, somehow persists in its static, symbolic description.
On one side, it seems to me, speaking philosophically, is an emergent materialism that identifies the process of consciousness with its separate material instance; on the other, a Pythagorean idealism that identifies consciousness with its unique descriptive class. So whether you get in the teleporter or not depends on your metaphysics. (I incline to emergent materialism; though of all the idealisms, the Pythagorean is the most tempting: by far the least silly.)
There is a third option:
My position is that human consciousness is a form of self-referential information processing. It is an algorithm -- a series of computation steps -- that knows about itself.
My position is that the steps in the algorithm -- like any other algorithm -- can be thought of as a series of state transitions within the systems the algorithm is instantiated upon. Think about how programs are executed on a computer, how each step in a program represents a set of state transitions in the hardware. Well, my position is that the algorithm of consciousness is the same kind of thing in our brain -- the steps correspond to state transitions in our neural network.
My position is that these state transitions are deterministic, assuming quantum randomness is insignificant. This means the next state is determined by only the current internal state, the current external state, and a deterministic state transition function (which in the physical domain is simply the laws of physics).
My position, then, is that you can model consciousness (any algorithm, actually) as a series of state transitions in some system somewhere. That is, F(Si(t), Se(t), t) --> Si(t+1), where F( ) is the state transition function, Si( ) is the internal state, and Se( ) is the external state. If you looked at time slices of consciousness -- we can use the Planck time as the duration, since at that resolution we know we have captured every relevant event -- the algorithm would look something like this in the physical domain: S(1) --> S(2) --> S(3) --> ... --> S(current time).
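To make the model concrete, here is a minimal sketch in Python. The transition function F and the integer states are toy stand-ins of my own invention, not anything from a real system -- the point is only that a deterministic F applied to (internal state, external state, t) yields one and only one trajectory:

```python
# Toy model of F(Si(t), Se(t), t) --> Si(t+1). The particular arithmetic
# inside F is arbitrary; what matters is that F is deterministic.

def F(s_internal, s_external, t):
    """Deterministic state transition function: the next internal state
    depends only on the current internal state, external state, and t."""
    return (s_internal * 31 + s_external + t) % 1000003

def run(s0, external_inputs):
    """Produce the trajectory S(1) --> S(2) --> ... by repeatedly applying F."""
    trajectory = [s0]
    s = s0
    for t, s_ext in enumerate(external_inputs):
        s = F(s, s_ext, t)
        trajectory.append(s)
    return trajectory

# The same initial state and the same inputs always yield the same trajectory:
assert run(42, [1, 2, 3]) == run(42, [1, 2, 3])
```

Given the starting state and the external inputs, nothing else is needed to reproduce the entire sequence -- which is the sense in which the transitions are deterministic.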
My position is that consciousness is those deterministic transitions between states, the "-->" you see above. It is the algorithm itself, not the physical stuff the algorithm is running on. It isn't your brain; it is the directed "movement" from one state of your brain -- or any brain -- to the next.
My position is that if you take a subsequence of this algorithm -- suppose S(10)-->S(11)-->S(12) -- and split it between multiple systems, or instances, it remains the same algorithm precisely because the deterministic state transitions are exactly the same. In other words, if F(Si(10), Se(10), 10) occurs on system A and determines state 11 on system B, and if F(Si(11), Se(11), 11) occurs on system B and determines state 12 on system C, the algorithm and hence the consciousness is exactly the same as it would be if everything occurred in the same system.
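The hand-off argument can be sketched the same way. In this toy version (the System class and the round-robin scheduling are illustrative assumptions, nothing more), each transition is computed on one "system" and the resulting state is handed to the next; because F is deterministic, the state sequence is identical to running everything on a single system:

```python
# Splitting the transitions across systems A, B, C versus running them
# all on one system: the trajectory is the same either way, because each
# next state is fully determined by the previous state and input.

def F(s, s_ext, t):
    # shared deterministic transition function (toy arithmetic)
    return (s * 7 + s_ext + t) % 97

class System:
    def __init__(self, name):
        self.name = name
        self.state = None

def run_split(s0, inputs, systems):
    """Step t is computed on systems[t % n]; the result is written into
    the next system in the ring, which hosts the following step."""
    states, s = [s0], s0
    for t, s_ext in enumerate(inputs):
        s = F(s, s_ext, t)                         # transition computed on this host
        target = systems[(t + 1) % len(systems)]
        target.state = s                           # hand the new state onward
        states.append(s)
    return states

inputs = [5, 9, 2, 7]
one_system = [System("A")]
three_systems = [System("A"), System("B"), System("C")]
assert run_split(10, inputs, one_system) == run_split(10, inputs, three_systems)
```

The assertion at the end is the whole point: which hardware hosts each step leaves no trace in the sequence of states.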
So if your brain is in state 1, and the laws of physics combined with state 1 result in state 2 one planck time later, then the system where state 2 is located should be irrelevant. State 2 is still part of the algorithm, the same algorithm, because it was determined by state 1.
And finally, my position is that if you somehow add an intermediate step between determining state 2 and the system actually being set to state 2 -- such as communicating across space to an identical system that it should be set to state 2 -- the algorithm and hence the consciousness is still the same, because state 2 is still determined by state 1. The fact that there was a middleman doesn't change that key element. Nor would it change if that communication took a very, very long time -- if the original was scanned, then destroyed, and the information took a billion years to reach the destination, and only then was the copy made -- it would still be the same algorithm and hence the same consciousness. Because state 2 was determined by state 1.
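The middleman point can be sketched too. In this toy version (again, my own illustrative names, with a queue standing in for an arbitrarily long transmission delay), each determined state passes through a buffer before being realized at the destination, and the realized sequence is unchanged:

```python
from collections import deque

def F(s, s_ext, t):
    # deterministic transition function (toy arithmetic)
    return (s * 13 + s_ext + t) % 211

def run_direct(s0, inputs):
    """All transitions happen within one system, no middleman."""
    states, s = [s0], s0
    for t, s_ext in enumerate(inputs):
        s = F(s, s_ext, t)
        states.append(s)
    return states

def run_with_middleman(s0, inputs):
    """Each determined state passes through a transmission buffer (the
    'middleman') before being set at the destination. The buffer stands
    in for any delay at all; it cannot change which state was determined."""
    buffer = deque()
    states, s = [s0], s0
    for t, s_ext in enumerate(inputs):
        determined = F(s, s_ext, t)  # the next state is fixed here...
        buffer.append(determined)    # ...travels through the middleman...
        s = buffer.popleft()         # ...and is finally realized at the destination
        states.append(s)
    return states

inputs = [3, 1, 4, 1, 5]
assert run_direct(8, inputs) == run_with_middleman(8, inputs)
```

Whether the hand-off is instantaneous or takes a billion years makes no difference to the buffer's contents, which is the sense in which the middleman is transparent to the algorithm.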
In other words, if your criterion for defining "instance" relies upon the deterministic transitions from one state of the system to the next, then it is valid to say that the source and destination copies are the same instance -- since the source determines the initial state of the destination.