Explain consciousness to the layman.

And unless you're a dualist, that's all that exists. Interpretation of the results of information processing is just more information processing. It can't be anything else, because there isn't anything else.

That's right... all that exists is just stuff behaving like itself.

We can use some of that stuff as information processors. We can use some of it for fuel. We can use some of it for sleds and hats.

Interpretation of the symbolic outputs of information processing is a process of physical computation, the activity of the brain. Which you can define as information processing, as long as your definition of information is anchored to the physical universe, so that stars are also processing information.
 
I think this is where you are stuck -- you think the initial mapping of reality->simulation somehow dictates the behavior of the simulation. It doesn't.

I don't know why you'd think I think that.

Just look at the math, piggy -- If transformation Th (h for human) maps reality to the simulation, and the tornado is TOR, then Th(TOR) = simulation.

To get back our conscious interpretation of the simulation we apply the inverse -- InverseTh(Th(TOR)) = TOR, or InverseTh(simulation) = TOR.

Well, TOR can't actually be run through the function Th.

When the programmer sits down to write the program, there's no tornado involved.

TOR is a mental representation of a tornado, and the function Th maps that onto a simulator. The inverse results in a new instance of TOR, a mental representation of a tornado, in another state.

For the aliens, all they need to do is figure out the transformation that takes their conceptual space to the simulation, or Ta (a for alien). Ta(TOR) = simulation. Once they find that, they can do the same thing we do -- InverseTa(simulation) = TOR.

Now replace it all -- InverseTa(Th(TOR)) = TOR.

Note that the interesting thing here is that Ta will be the composite of our Th and the mapping from the alien conceptual space to ours -- call it Ta-to-h.

Meaning, if the aliens find your information processor running a tornado sim *as well as* the keyboard and monitor, they can deduce our Th and thus the Ta-to-h.

Linguists know this implicitly, since it is how we figure out other languages given a common starting point.
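A minimal sketch of that composite, with toy dictionaries standing in for Th and Ta-to-h (every name and value below is invented purely for illustration):

    # Toy sketch only: dicts stand in for the mappings Th and Ta-to-h.
    human_to_sim   = {"TOR_human": "sim_state_42"}    # Th: human concept -> sim state
    alien_to_human = {"TOR_alien": "TOR_human"}       # Ta-to-h: alien concept -> human concept

    # Ta is the composite: alien concept -> human concept -> simulation
    def Ta(alien_concept):
        return human_to_sim[alien_to_human[alien_concept]]

    # Inverting the composite takes a sim state back to the alien's representation
    sim_to_human   = {v: k for k, v in human_to_sim.items()}
    human_to_alien = {v: k for k, v in alien_to_human.items()}

    def inverse_Ta(sim_state):
        return human_to_alien[sim_to_human[sim_state]]

    def Th(human_concept):
        return human_to_sim[human_concept]

    print(Ta("TOR_alien"))                 # -> "sim_state_42"
    print(inverse_Ta(Th("TOR_human")))     # -> "TOR_alien", i.e. InverseTa(Th(TOR)) = TOR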

Yeah, but if the only information you have is the simulator system itself, then there is no common starting point for figuring that the behavior of the simulator is supposed to map to anything at all, much less what it would be.

If the machine itself were conscious, it couldn't tell from its body's behavior that it was supposed to be a simulation of something else. It might be able to tell if given enough hints and clues from elsewhere.

By the same token, if our universe is a machine running some sort of simulation, we'd have no idea what it is supposed to be simulating, no matter how much we observed it. But it would not be simulating us, unless you propose a simulator which first simulates itself.
 
In other words, they'd still need outside information to attempt to decode it.

That is irrelevant. What is relevant is how much information they need to decode it.

If they correctly assume that it is a simulation of a tornado, then they only need a single mapping in order to decode any aspect of the entire simulation.

If they incorrectly assume it is a simulation of a watershed, then they will need a huge number of mappings in order to decode any aspect of the simulation before that aspect resembles a simulation of a watershed.

In the latter case, the amount of information needed to go from <tornado simulation> --> <watershed> is larger than the entire <tornado simulation> itself. That alone is an objective difference that they can use to rule out whether it is actually a simulation of a watershed.
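One way to put that size comparison in symbols (my own shorthand; nothing the aliens would find written in the machine): let Sim be the full record of the simulation's behavior, M_tornado the single mapping needed to read it as a tornado, and M_watershed the collection of mappings needed to force it to read as a watershed. The claim is

    |M_tornado|  <<  |Sim|  <  |M_watershed|

i.e. the watershed reading requires packing more information into the decoder than is contained in the thing being decoded, and that is the objective difference they can use to rule it out.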
 
But nothing you just said has ever been a point of contention and furthermore it is not the claim that westprog is making.

The claim made by westprog is that if we are running a simulation of a bowl of soup, down to the atomic level, we are also running a simulation of the way the neural networks of the human brain function -- it just depends on how the results are interpreted.

This is simply false for reasons that should be obvious.

Well, that's not the claim I'm discussing anyway.

Any simulation could certainly be a simulation of many different systems (even if it's complete, it could not be distinguished from an incomplete simulation of a different system) but few of them, perhaps only one of them, will describe anything that would "make sense" to a human brain.
 
Well, TOR can't actually be run through the function Th.

When the programmer sits down to write the program, there's no tornado involved.

TOR is a mental representation of a tornado, and the function Th maps that onto a simulator. The inverse results in a new instance of TOR, a mental representation of a tornado, in another state.

But there is then another function that goes from real tornado to human mental representation. I just put all of that into one function, Th.

And furthermore it doesn't (explicitly) require a programmer. We can use radar and even 2D video footage to construct an accurate model of the tornado that is then fed into the simulation, without any humans ever participating in those latter steps.

Yeah, but if the only information you have is the simulator system itself, then there is no common starting point for figuring that the behavior of the simulator is supposed to map to anything at all, much less what it would be.

Eh, that is not quite true.

A common starting point would be looking at the behavior of the components of the simulator -- in a thing built by humans, most likely some kind of switching hardware like transistors -- and recognizing that these things are switches, and make up a system that was designed specifically to process information. Or, perhaps, evolved to do the same (aliens looking at our brains would likely come to that conclusion, based on the way neurons function as switches).
 
That is irrelevant. What is relevant is how much information they need to decode it.

If they correctly assume that it is a simulation of a tornado, then they only need a single mapping in order to decode any aspect of the entire simulation.

If they incorrectly assume it is a simulation of a watershed, then they will need a huge number of mappings in order to decode any aspect of the simulation before that aspect resembles a simulation of a watershed.

In the latter case, the amount of information needed to go from <tornado simulation> --> <watershed> is larger than the entire <tornado simulation> itself. That alone is an objective difference that they can use to rule out whether it is actually a simulation of a watershed.

Ok, but what's the importance of that?

If the issue is whether it makes any sense to speak of the "world of the simulation" to mean anything other than an imaginary system (that is, a state of the brain of the observer) or a real system which is the simulating machine itself and nothing more, what impact does that have?

Is there something in there which should change my mind from "No, the world of the simulation can only refer to the physical state of the simulator machine or to a state of imagination in an observer's brain"?

If so, what is it?
 
But there is then another function that goes from real tornado to human mental representation. I just put all of that into one function, Th.

Oh? And how do you intend to reverse that function?

Eh, that is not quite true.

A common starting point would be looking at the behavior of the components of the simulator -- in a thing built by humans, most likely some kind of switching hardware like transistors -- and recognizing that these things are switches, and make up a system that was designed specifically to process information. Or, perhaps, evolved to do the same (aliens looking at our brains would likely come to that conclusion, based on the way neurons function as switches).

But you don't get to do any of that without going outside the system.

These guys are just more interpreters extracting the symbolic information by hook or crook.

If you want to claim that the machine itself has a way of privileging your intention, you can't do any of this, and you can't involve these interpreters.

And if the machine has no way of privileging or even divining your intention, then the "world of the simulation" corresponding to your intended world is not something that can exist in the machine, but only in your imagination.
 
Well, that's not the claim I'm discussing anyway.

Any simulation could certainly be a simulation of many different systems (even if it's complete, it could not be distinguished from an incomplete simulation of a different system) but few of them, perhaps only one of them, will describe anything that would "make sense" to a human brain.

Not just a human brain -- any intelligence.

No intelligence can look at a simulation of a tornado, in totality, and accurately conclude that it is a simulation of a rock. It is simply impossible. There is *no* behavior isomorphism.

If they reached such a conclusion, it would be due to logical errors on their part, not because it really was also a simulation of a rock.

Now it could look at a computer, and not bother to see the stuff going on inside, and conclude that "this is a simulation of anything that just sits there" because yes, the behavior of a computer *is* isomorphic with that of a rock in terms of net translation and rotation changes. But is that what you are talking about? I hope not, because that would be a stupidly obvious point to have wasted all these words for.
 
Ok, but what's the importance of that?

If the issue is whether it makes any sense to speak of the "world of the simulation" to mean anything other than an imaginary system (that is, a state of the brain of the observer) or a real system which is the simulating machine itself and nothing more, what impact does that have?

Is there something in there which should change my mind from "No, the world of the simulation can only refer to the physical state of the simulator machine or to a state of imagination in an observer's brain"?

If so, what is it?

The fact that simulation outputs are not limited to things that only impact human imagination.

For instance, we could make a weather simulation that controls some artificial lights positioned over a forest.

There is a single mapping that goes from simulation->reality that will result in those lights having the correct settings for time of day, cloud formations, etc.

Any other mapping will result in incorrect results, which all the forest animals and all the plants in the forest can certainly detect.

From what you know about computers, do you think it would be "good enough" to have some technician just find random mappings until the current real light matched the results from the simulation? No, of course not. He has to find a mapping that also makes all subsequent light conditions match. How many such mappings do you estimate there would be, for a very high granularity simulation? I estimate there would be 1 mapping that is smaller than the simulation itself, and any mapping larger than the simulation is irrelevant since the simulation would be of no use in that case.
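Here is a minimal sketch of the technician's problem, with invented numbers (nothing below is a real weather model; it only shows why matching a single instant is not enough):

    # Toy sketch: a pretend weather sim whose internal state codes must be mapped
    # to light levels over the forest. All functions and numbers are invented.

    def sim_state(t):
        # stands in for whatever opaque encoding the simulator uses internally
        return (3 * t + 1) % 7

    def intended_mapping(state):
        # the single mapping the designers meant: state code -> light level 0..1
        return state / 6.0

    def lucky_guess(state):
        # a mapping that happens to agree with the intended one at t = 0 only
        return 1 / 6.0 if state == 1 else 0.5

    needed  = [intended_mapping(sim_state(t)) for t in range(8)]
    guessed = [lucky_guess(sim_state(t)) for t in range(8)]

    print(needed[0] == guessed[0])   # True  -- agreement at one instant
    print(needed == guessed)         # False -- the guess diverges on later steps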
 
Oh? And how do you intend to reverse that function?

Uh, I am not a Platonic idealist. I view mathematics as merely a way to describe the world. A "function" is just a term in human language, so you don't "apply" a function to any real object. You "apply" a function to descriptions of real objects, which then results in another description.

But you don't get to do any of that without going outside the system.

These guys are just more interpreters extracting the symbolic information by hook or crook.

If you want to claim that the machine itself has a way of privileging your intention, you can't do any of this, and you can't involve these interpreters.

And if the machine has no way of privileging or even divining your intention, then the "world of the simulation" corresponding to your intended world is not something that can exist in the machine, but only in your imagination.

Let's define "world" formally to settle this once and for all. I propose:

A world is a behavior space to which a single transformation can be applied, taking the entire space to some other behavior space -- that transformation thus representing a consistent isomorphism between the two spaces.

Do you disagree with that definition?

That's why I claim a computer simulation of a tornado is a world of sorts -- a single transformation can be applied that illustrates, to any observer, the consistent isomorphism between the simulation and the real world.
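Spelled out a bit more, in my own notation (reading "behavior space" as a set of states plus a rule giving each state's successor):

    A behavior space:  W = (S, next),  with  next : S -> S.

    W is a world of W' = (S', next') if there is a single invertible
    transformation T : S -> S' such that

        T(next(s)) = next'(T(s))   for every state s in S

    -- one T that carries every behavior of W onto the corresponding behavior
    of W', which is the consistent isomorphism in the definition above.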
 
Because it necessarily follows from what Rocketdodger said that simulations can be conscious.

A simulation could be conscious, by coincidence.

If there was a conscious thing with some feature that mimicked a feature in another system, you could use that conscious thing for your simulator.

Or if the system you want to simulate happens to create a real (physical) brain in the simulator just by chance, because that physical arrangement is somehow necessary to run the sim, then you could have a conscious simulator.

Real consciousness is the result of information in relationship with itself, but only if we mean "information" as "the kind of stuff that is computed by stars", which is energy and matter.

Real consciousness is not the result of information processing, however, in the other, more abstract or metaphorical sense in which our desktop and laptop machines process information for us.

That's because that sort of information processing relies on a coordination between physical computations (the changes in a real system) and an imagined system, the result of which is an understanding of the physical computations as informational computations as well. That coordination and understanding can only be done externally from the simulating machine, by the programmer and reader.

The physical computations are what they are, regardless. Changes in state inside a computer box, for example.

The informational component isn't added to the system being used as a simulator. It exists as configurations in an observer's brain. Only the physical calculations are actually performed by the simulator -- the informational ones are the same, yes, but they are only "informational" if there is an informational output, which requires an observing brain.

If the observer is unaware of the simulation, or can't decode it, then there is no informational output of the simulation, or it's one that is not the same as the informational output to a brain that can decode it.

So to speak of a simulation being conscious, you either mean that the simulating machine has to be physically built like a conscious brain, or that an idea in your mind can itself somehow be "conscious".

Either the physical state of the machine is conscious, or the physical state of your brain is conscious, either one, but nothing else.
 
Not just a human brain -- any intelligence.

No intelligence can look at a simulation of a tornado, in totality, and accurately conclude that it is a simulation of a rock. It is simply impossible. There is *no* behavior isomorphism.

Yeah, but you can look at it and conclude that it's a simulation of another system which, when viewed in the right way, also looks like a tornado.

Or that it's a simulation of an extremely large system, but not in very great detail, which could be all sorts of things, especially if it were a fantasy world.

The fact that no one would confuse it for a rock in particular -- unless perhaps it's a very fast sim of the erosion of a rock on the lip of a waterfall, which very well might look a lot like a sim of a patch of ground subject to storms -- isn't particularly telling of anything.

Or you could conclude that it's not running a simulation, but doing something else, you can't figure out what.

If you succeed in decoding it, then the physical computations of the system have value as informational computations, because they now have informational outcomes, those being states of the matter and energy in your brain.

But of course that's all external to the system which is the simulator. The informational overlay doesn't affect it at all. It would have no way of privileging any one possible solution over any other, even if it had a way of suspecting it was being used to run a sim in the first place.
 
A simulation could be conscious, by coincidence.

If there was a conscious thing with some feature that mimicked a feature in another system, you could use that conscious thing for your simulator.

Or if the system you want to simulate happens to create a real (physical) brain in the simulator just by chance, because that physical arrangement is somehow necessary to run the sim, then you could have a conscious simulator.

Real consciousness is the result of information in relationship with itself, but only if we mean "information" as "the kind of stuff that is computed by stars", which is energy and matter.

Real consciousness is not the result of information processing, however, in the other, more abstract or metaphorical sense in which our desktop and laptop machines process information for us.

That's because that sort of information processing relies on a coordination between physical computations (the changes in a real system) and an imagined system, the result of which is an understanding of the physical computations as informational computations as well. That coordination and understanding can only be done externally from the simulating machine, by the programmer and reader.

The physical computations are what they are, regardless. Changes in state inside a computer box, for example.

The informational component isn't added to the system being used as a simulator. It exists as configurations in an observer's brain. Only the physical calculations are actually performed by the simulator -- the informational ones are the same, yes, but they are only "informational" if there is an informational output, which requires an observing brain.

If the observer is unaware of the simulation, or can't decode it, then there is no informational output of the simulation, or it's one that is not the same as the informational output to a brain that can decode it.

So to speak of a simulation being conscious, you either mean that the simulating machine has to be physically built like a conscious brain, or that an idea in your mind can itself somehow be "conscious".

Either the physical state of the machine is conscious, or the physical state of your brain is conscious, either one, but nothing else.

I don't see how you can claim to agree with me, then come up with a post like this.

Look, focus on simple things. That is where you are going astray.

If an object O exhibits behavior A, which then leads to behavior B, there is a real causal relationship between those events.

Do you agree, or disagree?

If a simulation of that object S(O) exhibits behavior S(A), which then leads to behavior S(B), there is a real causal relationship between those events.

Do you agree, or disagree?
 
Uh, I am not a Platonic idealist. I view mathematics as merely a way to describe the world. A "function" is just a term in human language, so you don't "apply" a function to any real object. You "apply" a function to descriptions of real objects, which then results in another description.

Philosophy to the rescue....

It's all well and good to deal in imaginary spaces, and it can be tremendously useful.

But if we're going to discuss what is real in this world, as we're supposedly doing, and you propose a function that describes real world events (such as a programmer mapping abstractions of the behavior of a tornado onto the behavior of an electronic machine) and you then go on to discuss the implications of inverting the function, then I expect that inversion not to be physically impossible if it is to be relevant.

If Th(TOR) describes a physical process which begins with real tornadoes and ends in a simulator operating, but Th(TOR) can't be reversed in reality, then what am I to conclude about reality as a result of inverting Th(TOR)?

Only that the process must have begun with real tornadoes, which I already knew.
 
Yeah, but you can look at it and conclude that it's a simulation of another system which, when viewed in the right way, also looks like a tornado.

Or that it's a simulation of an extremely large system, but not in very great detail, which could be all sorts of things, especially if it were a fantasy world.

The fact that no one would confuse it for a rock in particular -- unless perhaps it's a very fast sim of the erosion of a rock on the lip of a waterfall, which very well might look a lot like a sim of a patch of ground subject to storms -- isn't particularly telling of anything.

Or you could conclude that it's not running a simulation, but doing something else, you can't figure out what.

If you succeed in decoding it, then the physical computations of the system have value as informational computations, because they now have informational outcomes, those being states of the matter and energy in your brain.

But of course that's all external to the system which is the simulator. The informational overlay doesn't affect it at all. It would have no way of privileging any one possible solution over any other, even if it had a way of suspecting it was being used to run a sim in the first place.

Merging this with post 1873
 
I don't see how you can claim to agree with me, then come up with a post like this.

Look, focus on simple things. That is where you are going astray.

If an object O exhibits behavior A, which then leads to behavior B, there is a real causal relationship between those events.

Do you agree, or disagree?

If a simulation of that object S(O) exhibits behavior S(A), which then leads to behavior S(B), there is a real causal relationship between those events.

Do you agree, or disagree?

I almost agree.

The problem here is that you're being imprecise in the second part there.

If S(O) refers to a real object involved in the simulation, like a machine part, and S(A) and S(B) refer to real behaviors of that object, then yes, there's a real causal relationship.

However, if there's an informational overlay, I(O) changing from state I(A) to I(B), which is intended to correspond to real object S(O) changing from state S(A) to S(B), then that causal relationship is not real but imaginary.

It is imaginary, because it only exists as a state of representation in the brain of an observer.

Without that observer, all we have are the physical computations with no informational overlay to it.

Some of your fundamental errors come from conflating S and I.
 
TOR is a mental representation of a tornado, and the function Th maps that onto a simulator.

The mental representation of a tornado was created from real tornadoes. But for the existence of tornadoes, there would be no such concept; furthermore, we look to real tornadoes to provide us with the understanding.

The mental representation of a tornado is merely an intermediary.
 
As to the last part, we can use a definition of "information processor" which requires no interpreter of the information, but in that case, we're no longer talking about something that does what we typically think of information processors as doing, but rather all sorts of physical systems become information processors.

I don't know about that. I typically do think of all sorts of physical systems as information processors.
 
If Th(TOR) describes a physical process which begins with real tornadoes and ends in a simulator operating, but Th(TOR) can't be reversed in reality, then what am I to conclude about reality as a result of inverting Th(TOR)?

Oh I see what you are saying.

Well you could just have some kind of weather machine that takes a state of the simulation and reproduces that in the real world.

Or, if that is too farfetched for you, we can limit the input and output to the results of a tornado rather than the tornado itself.

For instance, the photons that are captured by a video camera. If Th is <camera input> --> <tornado sim>, then it is clear that the inverse is <tornado sim> --> <camera input>, which is easy to do even with current technology.
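For the inverse direction, a minimal sketch (hypothetical names and numbers; just a stand-in for rendering a sim state as the kind of frame a camera or radar would have produced):

    # Toy sketch: turn a simulated reflectivity grid into 8-bit grayscale pixel values,
    # the sort of output that could be compared directly against real footage.
    sim_reflectivity = [
        [0.0, 0.2, 0.1],
        [0.3, 0.9, 0.4],
        [0.1, 0.5, 0.2],
    ]

    def render_frame(grid):
        # the <tornado sim> --> <camera input> direction
        return [[int(round(v * 255)) for v in row] for row in grid]

    print(render_frame(sim_reflectivity))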
 
I almost agree.

The problem here is that you're being imprecise in the second part there.

If S(O) refers to a real object involved in the simulation, like a machine part, and S(A) and S(B) refer to real behaviors of that object, then yes, there's a real causal relationship.

However, if there's an informational overlay, I(O) changing from state I(A) to I(B), which is intended to correspond to real object S(O) changing from state S(A) to S(B), then that causal relationship is not real but imaginary.

It is imaginary, because it only exists as a state of representation in the brain of an observer.

Without that observer, all we have are the physical computations with no informational overlay to it.

Some of your fundamental errors come from conflating S and I.

I would like you to explain how, in I(O), I(A) can cause I(B) without S(A) causing S(B) in S(O). That is, how can an informational overlay contribute to causality?

My point is that in all cases it IS actual machine parts that are involved in the causal sequence, even if a given observer needs an informational overlay to see it.

But yes, I am speaking about the machine parts. The transistors of the computer.

So now answer this:

If there is a simulation of a neural network running on a computer, and in the real neural network a neuron fires due to the integration of signals from other neurons, is there not an isomorphic causal sequence that takes place in the transistors of the computer? And isn't that causal sequence in the transistors of the computer just as "real" as the corresponding sequence in the actual neural network? Meaning, isn't something like "voltage from transistor X caused transistor Y to switch" just as "real" as whatever happens in the neural network?
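A minimal sketch of the isomorphism being asked about (a toy integrate-and-fire rule with invented numbers; the point is only that the same causal sequence -- signals integrated, threshold crossed, output produced -- appears in both the neurons and the transistors that simulate them):

    # Toy sketch: one integrate-and-fire step. The same rule describes the causal
    # sequence whether the quantities live in a biological neuron or in the
    # transistors of a computer running the simulation.
    THRESHOLD = 1.0

    def neuron_fires(incoming_signals):
        # integration of signals from other neurons, then a threshold decision
        return sum(incoming_signals) >= THRESHOLD

    real_inputs      = [0.4, 0.3, 0.5]   # signals arriving at the biological neuron
    simulated_inputs = [0.4, 0.3, 0.5]   # the corresponding values held in transistors

    print(neuron_fires(real_inputs))        # True: the real neuron fires
    print(neuron_fires(simulated_inputs))   # True: the isomorphic event in the simulation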
 