Does the traditional atheistic worldview contradict materialism?

I apologize, but I don't understand your hypothetical scenario at all.
Now imagine that the person who leaves the teleporter isn't exactly you by the other metrics.
Which other metrics in particular?
When you walked into the teleporter, your memories were disassembled and spooled out onto some other medium, then spun into a flash-grown clone of you, such that the clone "lived" your life in a fraction of a second.
I don't follow what you're saying here.
Everything you remember, he remembers; only, because the brain is a stochastically self-assembled piece of work, his brain looks nothing like your brain.
There are two different kinds of comparisons going on here; we need to be precise about which comparison we're making if you want to reflect the view I'm proposing.

In one scenario I described above, a person P was cloned twice (and the original destroyed), resulting in W and E. In this case, there's continuity from P to W, and P to E. In this scenario, I would claim:
  • W gets to say he is the same person as P.
  • E gets to say he is the same person as P.
  • W does not get to claim he is the same person as E; nor does E get to claim he is the same person as W.
This is in a scenario where E and W are "as equivalent as you like" to each other; implicitly, they would also work the same as a human brain.

So:
Is he you? Why/why not?
If E' used a different kind of machine than a human brain, then I would say E' is not W, for the same reason I would say E isn't W.

I have no idea about comparing E' to P though; I'll have to wait until I figure out what you're trying to say to express an opinion on that.
 
My main point in these threads is that the structure of the brain is part and parcel of the alleged data and information, so abstraction of said data is not possible without also translating the physical structure that does the processing
-part of this is created by the actual structure and microstructures of the brain and the way they create the neural network
-the patterns of action by the cells themselves are conditioned and history-dependent

So any mechanism that translates the data to an abstract form is going to have to include an individual translation for each brain. Take the image of a dog: each brain will process that image differently from any other, so each abstraction of the data will require that correlation before translation.

In other words the data is dependent upon the brain structures. Each brain is idiomatic in the way it handles, stores and represents data.

(In the OSI network model, this is done by the presentation layer before going down the stack to the session layer. In other words, data is taken and put into a standard form: if we are transmitting ASCII, we need to say so and standardize on it (a simplification). This also happens as incoming data comes up the stack from the session layer.

So in this example there would also need to be a further translation from and to each machine's idiomatic data storage and processing.)
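As a rough sketch of that in Python (the encodings and class names here are purely illustrative, not anything from the OSI spec): each machine keeps its data in an idiomatic form, and the presentation step translates to and from a standard wire form before and after transmission.

```python
# Toy sketch of the presentation-layer idea: each "machine" keeps data in
# its own idiomatic form, so transmission needs a per-machine translation
# to and from an agreed-upon standard form.

STANDARD = "utf-8"  # the agreed-upon wire encoding

class Machine:
    def __init__(self, idiom: str):
        self.idiom = idiom          # this machine's private encoding
        self.storage: bytes = b""

    def store(self, text: str) -> None:
        # Data "at rest" lives in the machine's own idiomatic form.
        self.storage = text.encode(self.idiom)

    def transmit(self) -> bytes:
        # Presentation step: idiomatic form -> standard form.
        return self.storage.decode(self.idiom).encode(STANDARD)

    def receive(self, wire: bytes) -> None:
        # Presentation step in reverse: standard form -> idiomatic form.
        self.storage = wire.decode(STANDARD).encode(self.idiom)

a = Machine("utf-16-le")   # stores text one way
b = Machine("latin-1")     # stores the "same" text a completely different way
a.store("the image of a dog")
b.receive(a.transmit())
assert a.storage != b.storage                             # different physical forms
assert b.storage.decode(b.idiom) == "the image of a dog"  # same content
```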
 
My main point in these threads is that the structure of the brain is part and parcel of the alleged data and information, so abstraction of said data is not possible without also translating the physical structure that does the processing
Right, we're stepping pretty far into mental masturbation territory here, but that's what Saturdays are for.

In other words the data is dependent upon the brain structures. Each brain is idiomatic in the way it handles, stores and represents data.
But anyway, y2bggs, this statement is basically what I'm trying to get at, sorry for being so unclear. Mathematically speaking, it's possible for the brains of two different people to contain precisely the same information content - in terms of experiences, memories, conditioned learning, that sort of thing - but because brains are exceedingly personalized, the physical structure and neural activity encoding that information will be entirely different between the two. If the information can be divorced from the structure, these two people are the same person. If info can't be divorced from structure, why not just say structure and be done with it?
 
Mathematically speaking, it's possible for the brains of two different people to contain precisely the same information content - in terms of experiences, memories, conditioned learning, that sort of thing - but because brains are exceedingly personalized, the physical structure and neural activity encoding that information will be entirely different between the two.
I think this requires a lot more precision to discuss. We're talking about systems that evolve as a function of time and inputs from the environment, which are furthermore capable of reference. We need to compare specific entities at specific points in time.

Furthermore, you're using the term "personalization" to mean "difference", but I think this might convey the wrong idea. When P is cloned to make some W and some E, then as time progresses, W and E are going to diverge. So W and E are going to differ from each other. We can call that personalization if we like; but the very thing that accumulates in W, and accumulates a different way in E, making W different from E, is also making E different from P and W different from P. The act of "personalization" doesn't merely make two brains evolved from the same branch different; it also makes the same brain, evolving in time, different.
If the information can be divorced from the structure, these two people are the same person.
No. If you have two people at all, they are different from each other. If P is cloned into W and E, no matter how equivalent W and E's brain structures are, you would get two different people. W will not be E, same structure or not.
If info can't be divorced from structure, why not just say structure and be done with it?
And this is a different question anyway. If it weren't for those blasted quantum mechanical thingies, you could perfectly simulate what my brain would do using a computer. But we can compare those computers to each other nonetheless; you can perform the same exact simulation using a big endian machine, a little endian machine, a machine where 1 is a positive voltage and 0 is negative, one that is flipped, and a virtual machine emulating one of those machines. All of those different kinds of machines can be programmed to produce the exact same kind of output for the same inputs, yet have entirely different structures.

In this case, the actual structure doesn't matter; the outcomes will be equivalent. The information being computed will be exactly the same.
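To make that concrete, here's a toy Python sketch (values invented for illustration): the same number stored under opposite byte orders has a different physical representation, yet every computation on the information agrees exactly.

```python
import struct

# The same 32-bit integer laid out under two different "structures":
# big-endian and little-endian byte order.
value = 305419896                  # 0x12345678
big = struct.pack(">I", value)     # bytes in big-endian order
little = struct.pack("<I", value)  # bytes in little-endian order
assert big != little               # different physical representations

# Yet any computation defined on the *information* agrees exactly.
x = struct.unpack(">I", big)[0]
y = struct.unpack("<I", little)[0]
assert x == y == value
assert x + 1 == y + 1              # same outputs for the same inputs
```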
 
I meant to reply sooner, but I spent the whole weekend being mentally unable to stop playing X3. It's that good.

Anyway, I think I've been muddying the issue too much by trying to avoid introducing yet another tangential analogy, so I'm just going to go whole hog.

The brain is a gigantic encryption system. More specifically, a coding system, but I'm going to use the word "code" a lot and it hardly looks like a word anymore as it is. Past your eyeballs, everything is an encoded representation of reality. Your brain's code has built up gradually and stochastically over the course of your development. So has mine. If we experience the exact same thing at the same time, although each of our brains is processing the same information, its representation will be completely different between us, just as the same phrase can be encrypted with two different keys and have completely different ciphertexts.
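If it helps, here's a toy version of what I mean in Python - a throwaway XOR cipher, not real cryptography, with keys invented for illustration: one phrase, two keys, two completely different ciphertexts.

```python
def xor_code(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the key byte; applying the same key again decodes.
    return bytes(d ^ k for d, k in zip(data, key))

phrase = b"grandma's house"
key_a = bytes(range(1, len(phrase) + 1))        # one "developmental history"
key_b = bytes(range(101, 101 + len(phrase)))    # a completely different one

cipher_a = xor_code(phrase, key_a)
cipher_b = xor_code(phrase, key_b)
assert cipher_a != cipher_b                     # same phrase, different ciphertexts
assert xor_code(cipher_a, key_a) == xor_code(cipher_b, key_b) == phrase
```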

But the encoding isn't the information, any more than the encryption key is the message. What I've been trying to figure out is if you're including the encoding along with experiences and memories and whatnot. I mostly think you are, in which case I can do little more than shrug and say "well, I agree," but then you say stuff like this
All of those different kinds of machines can be programmed to produce the exact same kind of output for the same inputs, yet have entirely different structures.
and I wonder if you're not after all.
 
Yeah, it's a strained analogy, but it was the best I could think of to describe it.
 
Yeah, it's a strained analogy, but it was the best I could think of to describe it.


Trust me, I understand. 'Code' is already very overloaded in this discussion.

I'm afraid I don't have anything useful to contribute - I couldn't clarify soup at this point.

I have been reading too much here today and my brain is tired.
 
Past your eyeballs, everything is an encoded representation of reality. Your brain's code has built up gradually and stochastically over the course of your development. So has mine. If we experience the exact same thing at the same time, although each of our brains is processing the same information, its representation will be completely different between us, just as the same phrase can be encrypted with two different keys and have completely different ciphertexts.
Not sure what you're getting at, but I imagine that you're saying that, say, we both smell the "same thing" in terms of cinnamon, apples, pie crust, and pine; yet to me it smells like grandma's house. I'm with Complexity; I'm not sure the encryption metaphor works here.

But, sure. I've been including specifically subjective information, right down to not only what I'm smelling, but what it is like to smell it. Part of the information is the detection of these smells, and part of it is the personalized continuity of memories which makes me not only smell apple pie and pine sol, but grandma's house.

What I'm trying to tell you, though, is that there is a particular part of my brain that has a physical structure--in particular, there's a part that makes me feel like I'm at grandma's house when I smell these things. As far as the rest of the brain outside of this very specific physical structure is concerned, it really doesn't care about the structure per se--so long as whatever is stimulating the rest of the brain acts like that structure, everything will be equivalent.
 
I think Complexity's objection has more to do with "encryption" implying the existence of a third party entity, who is using your brain to send a message to another entity without a third entity reading its contents. That way madness lies. The right term is "encoding," which is third-party agnostic, but see above re: code code code code code.

so long as whatever is stimulating the rest of the brain acts like that structure
Okay, so you do need the structure, then? Is that your final answer?

In which case, I wouldn't worry about the experience/memory/outputs. If you've got the structure working properly, those all should take care of themselves.
 
Okay, so you do need the structure, then? Is that your final answer?
Huh? I could have sworn I just said the exact opposite. You do not need the structure; any functional equivalent, however said functional equivalent is structured, would suffice.
 
Okay now you're just screwing with me.
No. Go back to my analogies. Let's use a computer chip, some wires and neurotransmitter devices to replace the structure.
Our choice of shape for the chip, bit representation, endianness, and so on are in no way constrained in order to get the result we want. All we need is for the right outputs to be produced for the right inputs, and the right state semantics.

It is a prototypical black box. What is inside the black box is irrelevant, so long as it has the requisite functions. Unless you're using an odd sense of the word "structure", structure does not matter--so long as the functions of the black box are preserved, the system is none the wiser.
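Here's a minimal Python sketch of that black box (the cues, threshold, and class names are all invented for illustration): two internally unrelated implementations with the same inputs, outputs, and state semantics.

```python
# Two internally unrelated implementations of the same black box: given a
# stream of smells, report "grandma's house" once enough cues accumulate.

CUES = {"cinnamon", "apples", "pie crust", "pine"}

class NeuronishBox:
    """Accumulates a running 'activation' level and fires past a threshold."""
    def __init__(self):
        self.activation = 0.0
    def smell(self, odor: str) -> bool:
        if odor in CUES:
            self.activation += 1.0
        return self.activation >= 3.0   # "feels like grandma's house"

class LogBox:
    """Keeps a raw history and recounts it every time; no activation inside."""
    def __init__(self):
        self.log = []
    def smell(self, odor: str) -> bool:
        self.log.append(odor)
        return sum(o in CUES for o in self.log) >= 3

# Same inputs, same history-dependent outputs: the rest of the "brain"
# can't tell which one is wired in.
a, b = NeuronishBox(), LogBox()
for odor in ["coffee", "cinnamon", "apples", "pine"]:
    assert a.smell(odor) == b.smell(odor)
```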
 
No. Go back to my analogies. Let's use a computer chip, some wires and neurotransmitter devices to replace the structure.
Our choice of shape for the chip, bit representation, endianness, and so on are in no way constrained in order to get the result we want. All we need is for the right outputs to be produced for the right inputs, and the right state semantics.

It is a prototypical black box. What is inside the black box is irrelevant, so long as it has the requisite functions. Unless you're using an odd sense of the word "structure", structure does not matter--so long as the functions of the black box are preserved, the system is none the wiser.

Small aside, as I have no horse in the race: each individual is very different in sensory structures as well, so what becomes perception varies widely, as does the way the data gets organised.

So you would need a tremendously flexible 'template' to account for all the variability in sensation and perception, much less abstract cognition.

The structure of the brain does matter in the way it organises and makes patterns.

So that would also have to be part of any metadata that you want to create.
 
So you would need a tremendously flexible 'template' to account for all the variability in sensation and perception, much less abstract cognition.
We're just replacing the piece that associates the smells with grandma for now, but I agree with the general sentiment. This is reflected in the phrase "state semantics" ("state" implies the black box keeps changing the way it processes inputs according to history; "semantics" refers to a particular abstract specification of how to do so).

You can think of the chip as running a simulation of the brain structure it is replacing if you like.
 
I think Complexity's objection has more to do with "encryption" implying the existence of a third party entity, who is using your brain to send a message to another entity without a third entity reading its contents. That way madness lies. The right term is "encoding," which is third-party agnostic, but see above re: code code code code code.

Okay, so you do need the structure, then? Is that your final answer?

In which case, I wouldn't worry about the experience/memory/outputs. If you've got the structure working properly, those all should take care of themselves.


'Encryption' suggests deliberately hidden, which I thought could be willfully misunderstood by conspiracy theorists to suggest that 1) there are hidden things in the brain, 2) someone put them there, 3) someone might try to take them out, and 4) we are all doomed.

If we're going to use 'encryption', I want to buy some aluminum shares in the near future.

(besides, 'encryption' really isn't the right word)
 
We're just replacing the piece that associates the smells with grandma for now, but I agree with the general sentiment. This is reflected in the phrase "state semantics" ("state" implies the black box keeps changing the way it processes inputs according to history; "semantics" refers to a particular abstract specification of how to do so).

You can think of the chip as running a simulation of the brain structure it is replacing if you like.
Okay, I think I see what's going on here.

I've been using "structure" in the biological "structure vs function" sense. "Function" is what the biology does, "structure" is how it does it. In terms of simulation, integrate-and-fire neuron models mimic function without regard to structure, while multi-compartment models try to replicate function by mimicking structure.

Meanwhile y2bggs has been using "structure" to mean the actual material, I believe. By this definition, no computer simulation would have the same structure as the brain, it'd be all circuits and transistors and stuff, even if it was using said stuff to run an emulation using the other definition of structure.

That about cover it?
 
Okay, I think I see what's going on here.
That makes one of us.
I've been using "structure" in the biological "structure vs function" sense. "Function" is what the biology does, "structure" is how it does it.
Sure, but in this sense the structure of a computer simulation of brain activity and the structure of brain activity itself are different. A simulated structure may accumulate a particular type of influence; this can be thought of as an addition. A simulation would perform a functionally equivalent operation, but it will use an adder, built out of a binary positional numbering system, shifting, carries, and so on. This is a fundamentally different "how", but given that the outputs are used to control a real or simulated intensity, the end result is the same "what".

Unless I'm still not understanding how you're trying to use "structure".
Meanwhile y2bggs has been using "structure" to mean the actual material, I believe.
No, not material. Just a particular "implementation", so to speak. Something akin to "accumulate more voltage potential in this neuron as a result of that stimulation using a sodium pump" versus "simulate an accumulation of this representative quantity by performing an add on a positional numbering system in a register and moving the result to this location". Two different "how's" that are merely arranged to make the same "what's".
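As a toy rendering of those two "how's" in Python (a made-up example, purely for illustration): the built-in addition versus an adder built out of shifts and carries.

```python
def ripple_add(a: int, b: int) -> int:
    # Addition built out of XORs, ANDs, and shifts, with carries rippling
    # along -- one "how". (Non-negative ints only, to keep the toy simple.)
    while b:
        carry = (a & b) << 1
        a, b = a ^ b, carry
    return a

# Python's built-in + is a different "how" entirely; the "what" agrees.
for x, y in [(0, 0), (7, 5), (123, 456), (1024, 1)]:
    assert ripple_add(x, y) == x + y
```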

Can you give me examples of things which are functionally equivalent yet structurally different?
 
Can you give me examples of things which are functionally equivalent yet structurally different?

I thought I already gave the most relevant example I could think of.

The integrate-and-fire model of neural activity is the simplest type of simulation that still bears any functional resemblance to the real thing. The code at its heart is shorter than the text needed to describe how it works, and is nothing at all like how real neurons operate. All it does is sum up weighted inputs, and fire into the next set of neurons when that sum passes a threshold. The geometry and composition of the neuron it's simulating is inconsequential - everything's in the vector of synapse weights. Although the subset of neurons whose function can be accurately mimicked by this model is small, such neurons do exist. This is function without structure.
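The heart of it really is that short. A minimal sketch, with invented weights and threshold:

```python
def integrate_and_fire(inputs, weights, threshold=1.0):
    # Sum the weighted inputs; "fire" if the total crosses the threshold.
    # No geometry, no ion channels -- everything lives in the weight vector.
    return sum(w * x for w, x in zip(weights, inputs)) >= threshold

spikes = [1.0, 0.0, 1.0, 1.0]       # which presynaptic neurons fired
weights = [0.4, 0.9, 0.3, 0.5]      # synaptic weights (invented)
print(integrate_and_fire(spikes, weights))  # True: 0.4 + 0.3 + 0.5 = 1.2
```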

Multi-compartment models, such as those built in NEURON, take the opposite approach. Simulated receptors and voltage-gated ion channels are linked to geometric models of the neurons in the same spatial and electrical arrangement as the real neurons were/could be, all as biophysically accurate as possible. The activity of the neuron is a result of simulated synaptic activity causing voltage to propagate down a series of messy differential equations and out to the next set of synapses -- even for those same simple neurons the IF model might handle just as well. The geometry and composition of the original neuron in this case are paramount, because accurately modeling such things is what gives rise to the right behavior. This is function from structure.
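This isn't actual NEURON code, but a toy Python flavor of the approach, with invented parameters: voltage in each compartment evolves under a differential equation coupled to its neighbors, so the geometry -- how many compartments and how they connect -- is what shapes the output.

```python
# Toy two-compartment model (invented parameters; real tools like NEURON
# solve far richer versions of this). Voltage in each compartment obeys a
# leak term plus a coupling current to its neighbor.
dt, leak, couple = 0.1, 0.1, 0.3
v_soma, v_dend = 0.0, 0.0

for step in range(100):
    injected = 1.0 if step < 30 else 0.0   # brief current into the dendrite
    dv_soma = -leak * v_soma + couple * (v_dend - v_soma)
    dv_dend = -leak * v_dend + couple * (v_soma - v_dend) + injected
    v_soma += dt * dv_soma                 # forward-Euler integration step
    v_dend += dt * dv_dend

print(v_soma, v_dend)  # the injected voltage has propagated into the soma
```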
 
