The slow transporter, questions raised

I don't understand the Master/Slave relationship in the OP.

Suppose we are at the halfway point numerically. I'll use capitals for Master neurons and lowercase for Slave neurons.

Neuron A fires, so a fires. A affects b at one end; a affects B at the other.

In what sense is B the Master of b if it is altered by a? And doesn't it get exponentially worse as the connections increase?

It seems to me the relationship cannot hold.

The master always has priority.

So the chain would be A fires, that syncs a, then a affects B, and B fires, which syncs b.

The fact that b would also be activated by A at the same time isn't important, since the behavior of b is still exactly the same.

What *is* important is when the neuron in question is the root of some sequence of activity, i.e. if it is a sensory neuron. So at the halfway point in the process, half of the receptors in your retina will be masters at the source and half will be masters at the destination, meaning you will "see" a blend of both the source and the destination.
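
To make the priority rule concrete, here's a minimal Python sketch. The class names, the two-neuron chain, and the wiring are my own illustration, not anything specified in the thread:

```python
# Minimal sketch of the priority rule: a slave neuron ignores local
# input and fires only when its master is synced to it.

class Neuron:
    def __init__(self, name, is_master=True):
        self.name = name
        self.is_master = is_master
        self.twin = None        # the copy of this neuron in the other brain
        self.downstream = []    # physically connected local neighbours

    def receive(self, source):
        # Local input only drives a master; a slave waits for its sync.
        if self.is_master:
            self.fire()

    def fire(self):
        print(f"{self.name} fires")
        if self.twin is not None and not self.twin.is_master:
            self.twin.sync()    # master has priority: firing syncs the twin
        for n in self.downstream:
            n.receive(self)

    def sync(self):
        print(f"{self.name} fires (synced from master)")
        for n in self.downstream:
            n.receive(self)     # the slave's firing still reaches its neighbours

# Halfway point: A is mastered at the source, B at the destination.
A, a = Neuron("A"), Neuron("a", is_master=False)
B, b = Neuron("B"), Neuron("b", is_master=False)
A.twin, a.twin = a, A
B.twin, b.twin = b, B
A.downstream = [b]   # at the source, A is wired to the local copy b
a.downstream = [B]   # at the destination, a is wired to the local copy B

# Prints: A fires, a fires (synced), B fires, b fires (synced).
# b also receives input from A, but as a slave it ignores it, so
# its behavior is exactly the same either way.
A.fire()
```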
 

But the root problem remains, doesn't it? The brain isn't sequential, but parallel.

As an example of how I think the thought experiment fails: Say the person at one end is looking at an old YouTube video of a cat with its head stuck in a bag and the person at the other end is reading my post here on JREF. What would the experience be? Remember, there is a mix of master and slave concurrently.

And I don't think you can escape this by trying to hold each in a sort of environmental stasis -- because there is an internal environment to consider, even one that is changing as the master/slave relationship changes. If there is no difference, then the label master or slave would fail. If there is a difference, then it seems like each party would magnify this difference as a consequence of how the brain is self-referentially wired.
 
Well, RD's method looks sound, and doesn't contravene materialism, as far as I can tell. Aren't neurons dying and being replaced all the time anyway?

Theoretically, using this technique, you could transfer your consciousness onto silicon.

Nick

If he were removing the neurons one by one, transporting them somewhere else physically, and reconstructing the brain neuron by neuron, I would agree, but that is not what he does.

Sure, some neurons die and might be replaced, but the old neuron's connections are *lost*, and the new neuron only makes new connections (which may or may not be to the same neurons the old one was connected to, and at different potentials). And even where replacement does occur, at any given time it is minimal in comparison to the non-replaced mass.

But that is not how I read the OP; I read it as the status of each neuron being duplicated onto the other brain. The metal cube analogy, flawed as it is, hints that this is what was meant: the status is being transplanted.

If that reading is incorrect and the OP meant transplanting neurons physically (as opposed to their status), then the OP has to realize that a neuron-by-neuron brain transplant is many orders of magnitude more complex than a simple brain transplant, for exactly the same result.
 
But the root problem remains, doesn't it? The brain isn't sequential, but parallel.

As an example of how I think the thought experiment fails: Say the person at one end is looking at an old YouTube video of a cat with its head stuck in a bag and the person at the other end is reading my post here on JREF. What would the experience be? Remember, there is a mix of master and slave concurrently.

And I don't think you can escape this by trying to hold each in a sort of environmental stasis -- because there is an internal environment to consider, even one that is changing as the master/slave relationship changes. If there is no difference, then the label master or slave would fail. If there is a difference, then it seems like each party would magnify this difference as a consequence of how the brain is self-referentially wired.

But there's a situation like this already, on account of us having two eyes: binocular rivalry. If one eye sees one thing and the other a totally different scene, only one remains in consciousness, IIRC. The mind chooses.

Nick
 
If he were removing the neurons one by one, transporting them somewhere else physically, and reconstructing the brain neuron by neuron, I would agree, but that is not what he does.

As I understand it, RD#2 is a slave "avatar." He is a perfect copy of RD#1 but develops neural capability only slowly. And as each neuron in RD#2 develops, it is connected (via some unknown high-tech means) to its root position in RD#1. Thus, neuron by neuron, the subjectivity of RD#1 becomes that of RD#2. As one passes the 50% mark, more and more connections are severed until finally RD#2 exists and RD#1 ceases, yet without a significantly disturbed subjectivity. Perhaps it would also help if they were both inside some kind of generic pod.
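
A rough schematic of that schedule in Python; the neuron count and the severing rate are made-up numbers, not anything from the thread:

```python
# Neuron by neuron, mastery moves to RD#2, and past the 50% mark the
# remaining cross-links are progressively severed.

TOTAL = 100                    # pretend-brain size (an assumption)
mastered_by_rd2 = 0
cross_links = TOTAL            # high-tech links between RD#1 and RD#2

for step in range(TOTAL):
    mastered_by_rd2 += 1       # one more neuron now roots in RD#2
    if mastered_by_rd2 > TOTAL // 2:
        cross_links -= 2       # two links severed per step past halfway

print(mastered_by_rd2, cross_links)  # -> 100 0: RD#2 exists, RD#1 has ceased
```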

The classic "pseudo-materialist" objection to Parfit's Teletransporter scenario is that "I will die" - that it won't be the same "me" in the new location. With RD's variation this objection seems to be diminished.

Nick
 
As an example of how I think the thought experiment fails: Say the person at one end is looking at an old YouTube video of a cat with its head stuck in a bag and the person at the other end is reading my post here on JREF. What would the experience be? Remember, there is a mix of master and slave concurrently.

Your brain is completely isolated from reality, though -- the only exceptions are the sensory neurons.

This means you can simply do a linear combination of the inputs. In other words, the experience would be just as if you were looking at a screen with both images at 50% opacity.

The same goes for all the other senses. I don't think it would be that alien to us, since we transition from sensory state to state all the time -- hot to cold, windy to calm, dark to light, whatever.
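
A toy version of that linear combination, taking vision for concreteness; the arrays are arbitrary stand-ins for the two scenes, nothing from the thread itself:

```python
# Blend two "scenes" at 50% each, like two images at 50% opacity.
import numpy as np

source_scene = np.random.rand(480, 640, 3)       # e.g. the cat video
destination_scene = np.random.rand(480, 640, 3)  # e.g. the forum post

alpha = 0.5  # halfway point: half the sensory neurons at each location
blended = alpha * source_scene + (1 - alpha) * destination_scene
```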
 
It's hard to speculate about how such a procedure would be experienced, without knowing a lot more about how the brain works and how the technology would work.

However, even if we presume that it works as you describe, it seems to me that the fact that the two people have a certain period of shared experience doesn't eliminate the fact that, as soon as you sever the link between them, you'll have two separate but "identical" people, along with all the same philosophical issues.

So I'm not sure it brings any new insight to the table in terms of physicalist vs. supernatural consciousness.

I suppose someone who believes in a supernatural consciousness would deny that your technology would even be possible.
 
Just the usual.

Something like a seemingly continuous perception of ourselves and our environment. What you lose when you go to sleep and get back when you wake up the next morning, etc.

Okay, so consciousness is not graduated in this case, i.e. a sleeping person has no consciousness and an awake person has consciousness. It's either there or not.

I think this is the first problem. We do not experience consciousness as either there or not.
This is why it is difficult, if not impossible, to say how many functional neurons we need for consciousness, yet that is an assumption in your transporter experiment.

rocketdodger said:
As their sensory neurons were "moved" to the destination, they would see and feel parts of both locations, until eventually the source was completely replaced with the destination.

You assume that a neuron-by-neuron functional transfer will sustain a sense of consciousness at both locations.

I also do not see how you can separate function from form in a neuron.
A neuron functions because of its form/make-up and its form/make-up depends on its function.

I think there is a reason living organisms evolved to replicate through transfer of genetic material and growth.

You seem obsessed with "transferring consciousness" between matter and claiming this must have a material basis.

This has to do with wanting to "live" forever and its pursuit is called alchemy.

Interestingly the father of mechanics, Newton, had the same obsession.
 

I expected you to contribute statements to this thread that I would learn nothing from, were I to actually discuss them with you.

Thank you for living up to my expectations.
 
However, even if we presume that it works as you describe, it seems to me that the fact that the two people have a certain period of shared experience doesn't eliminate the fact that, as soon as you sever the link between them, you'll have two separate but "identical" people, along with all the same philosophical issues.

Well, not really.

At the end all the neurons at the source would be slave neurons, meaning their activity is completely determined by activity at the destination.

It's like if you had a computer actually running a program, and another computer that just had all the bits of every word magically synced to the first one. Is that second one actually running a program too? I wouldn't say so, because there is no flow of information across the transistors. That only exists in one place -- the master.
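
Here's that analogy as a few lines of Python; the "program" is just a made-up increment loop:

```python
# The master computes its next state from its current one; the slave's
# memory words are simply overwritten to match.

def master_step(memory):
    # Genuine information flow: the new state depends on the old state.
    return [(word + 1) % 256 for word in memory]

def slave_step(master_memory):
    # No computation and no causal flow between the slave's own bits:
    # every word is just forced to match the master's.
    return list(master_memory)

master = [0, 10, 20]
slave = [0, 0, 0]
for _ in range(3):
    master = master_step(master)
    slave = slave_step(master)

print(master, slave)  # identical end states, but only one was computed
```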
 
I expected you to contribute statements to this thread that I would learn nothing from, were I to actually discuss them with you.

Thank you for living up to my expectations.

Denial is a strong force in human psychology and it certainly prevents one from learning.
 
At the end all the neurons at the source would be slave neurons, meaning their activity is completely determined by activity at the destination.

Unless you're suggesting that there is in fact some non-material essence that was transferred to the destination, isn't this the same as saying that both people are experiencing the same thing? The person at the source would be having a sort of "out of body experience" of being at the destination.

But as soon as you disengage the link, doesn't the person at the source continue to live? That person would feel like he was zapped back into his body at the source, and all you've done is reversed the original conundrum.
 
Unless you're suggesting that there is in fact some non-material essence that was transferred to the destination, isn't this the same as saying that both people are experiencing the same thing? The person at the source would be having a sort of "out of body experience" of being at the destination.

No.

The person at the slave end isn't even a person -- it is just a collection of neurons behaving exactly like their masters.

The essence of consciousness is causal in nature -- neuron A causes neuron B to fire. The slave neurons, a and b, only fire because they are in sync with A and B. It is not true that neuron a causes neuron b to fire.

The difference can be illustrated by supposing that the machine breaks and the slave neurons are kept in slave status but signals from the masters are gone. In that case you could have the following happen:

Neuron A fires --> machine syncs neuron a and a fires as well --> MALFUNCTION --> Neuron B fires due to A --> machine cannot sync neuron b --> neuron b does not fire
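
The same broken-machine scenario as a few lines of Python; the names and the link_up flag are just my illustration:

```python
# Once the link drops, slave b never fires, because a -- unlike A --
# has no causal power over b.

link_up = True

def sync(slave_name):
    if link_up:
        print(f"{slave_name} fires (synced)")
    else:
        print(f"{slave_name} does NOT fire (link down; slaves ignore local input)")

print("A fires")
sync("a")          # the machine syncs neuron a
link_up = False    # MALFUNCTION
print("B fires (caused by A at the master end)")
sync("b")          # the machine cannot sync neuron b
```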
 
