Has consciousness been fully explained?

Status
Not open for further replies.
You can't be seriously asking this question.

I am.

Suppose I define the results of an "oil spill" in my simulated world such that any entity moving on a surface covered by the spill has its static and dynamic friction coefficients lowered.

Then, when there is an oil spill in my simulated world, any entity in that simulation will see real results from the spill -- their behavior will be changed.

For real.

I don't see why the fact that they are in a simulation rather than OPR somehow means the real behavior change is less "real." Behavior is changed, any way you slice it.

You might only see that change as a difference in the patterns of 1s and 0s in the computer registers. Who cares? The behavior is still changed. That is a real result.
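For the skeptics, here's a minimal sketch of the rule I'm describing. All the names here (Entity, OIL_FRICTION_FACTOR, the default coefficients) are made up for illustration, not part of any particular simulator:

```python
# Sketch of the "oil spill" rule: entities on an oiled surface
# have their static and dynamic friction coefficients lowered.
# All names and values here are illustrative.

OIL_FRICTION_FACTOR = 0.2  # oil cuts friction to 20% of normal

class Entity:
    def __init__(self, mu_static=0.9, mu_dynamic=0.7):
        self.mu_static = mu_static
        self.mu_dynamic = mu_dynamic

    def effective_friction(self, on_oil):
        # The effective coefficients depend on whether the
        # surface under the entity is covered by the spill.
        factor = OIL_FRICTION_FACTOR if on_oil else 1.0
        return (self.mu_static * factor, self.mu_dynamic * factor)

e = Entity()
dry = e.effective_friction(on_oil=False)
oiled = e.effective_friction(on_oil=True)
assert oiled[0] < dry[0] and oiled[1] < dry[1]
```

Once that rule is in place, any entity that crosses the spill slides differently than it otherwise would. That's the behavior change I'm calling real.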
 
No, they don't. Our brains don't hold any data or any models. They are habit machines, specially designed association machines.

You don't need to lecture me about the mechanics of our brains.

There's as much data in our brains as there is in our muscles.

You are flat out wrong.

Just because our brains encode information as part of the network topography doesn't imply that the information is not data.

And just because the only way to get at most of that information is to run the network itself doesn't imply that the information is not data.

The fact is, I can recite the alphabet. My muscles can't. To recite the alphabet requires an algorithmic mechanism, and data. There is simply no other way to get such a result.

The fact that you have a hard time understanding how a series of associative operations in a very fuzzy network could be an algorithm doesn't mean a series of associative operations in a very fuzzy network isn't an algorithm.

But if you think any sort of simulated anything is as real as things you bang up against in reality... I'm not sure what to say about that.

If you think the transistors in the RAM of my computer are less real than things you bang up against in reality ... I'm not sure what to say about that.
 
I am trying to understand your use of SOFIA. For there to be perceptions, does there have to be SOFIA, or are they parallel?

So is it like a process underlying perceptions or a side by side process?

Oh, gotcha.

Side by side, and partially overlapping, but with perception "upstream" might be the best way to think about it.

Perception happens prior to conscious awareness of the perception, and the brain can perceive things/events, remember them, respond to them, even learn from them without making any of its actions available to conscious awareness.

You can imagine perception as the loading docks, where the sensory information (regardless of what's generating it) is accepted and routed.

Most of it gets unloaded and processed and used up without getting shuffled over to the wing of the factory that produces Sofia. (Or perhaps more accurately, to the outlet points at which several wings make their products available to the global operation that produces Sofia, like a power plant adjusting its mixture of fuels.)

But some of it is sent in that direction because that's how the brain's corridors run. (The patterns made by large groups of corridors do not change, but within these larger arteries the corridors themselves change their size and how they connect all the time, which affects what happens to the input.)

So you're only aware of a part of what you perceive, and a lot of what you experience hasn't been perceived at all; it's been invented and filled in by your brain.

There's a lot going on between the loading docks and the fuel mix that's used by the Sofia process. Part of the raw perceptive material is selected, refined, and mixed with locally generated material before being fed to Sofia.
 
Just because our brains encode information as part of the network topography doesn't imply that the information is not data.

I know. But the brain is still not data, and doesn't contain any. It is whatever it is in physical space, and that's all that it is or ever can be.

The information is data, yes, but there is no information in the brain either, only tissue. "Information" is a way we talk about the brain because we can't hope to talk about it on a neural level. So we entify it all into information and data.

To imagine that the information and data actually exist independently, in the way that the physical brain exists or our bodily functions can be said to exist, is to conflate abstraction and reality at a very fundamental level.
 
I am.

Suppose I define the results of an "oil spill" in my simulated world such that any entity moving on a surface covered by the spill has its static and dynamic friction coefficients lowered.

Then, when there is an oil spill in my simulated world, any entity in that simulation will see real results from the spill -- their behavior will be changed.

For real.

I don't see why the fact that they are in a simulation rather than OPR somehow means the real behavior change is less "real." Behavior is changed, any way you slice it.

You might only see that change as a difference in the patterns of 1s and 0s in the computer registers. Who cares? The behavior is still changed. That is a real result.

Ok, but what's really happening there?

Certainly, the patterns produced by the action of the computer will change. And if you're running a sufficiently robust simulation, those patterns can make predictions about the real world.

But there is nothing resembling an oil spill going on, which is why you put it in quotes. So you can't talk about an oil spill if you want to describe, first, what's going on in objective physical reality (which is the only reality we know of at this point).

If you forget what the output is supposed to symbolize to a human viewer, you only find a computer behaving the way it's built to behave, running calcs and logging numbers and strings and changing the colors of pixels on a screen.

Those "entities" have no real-world behavior, because they don't exist. They are abstractions we create to describe clusters of data churned out by the computer. So are their "actions".

It's simply convenient for us to think of them that way. And we can get all sorts of views and outputs telling us how these entities behave in "simulated space" -- but that space itself is an abstraction -- it has no objective reality.

We can do the same for other simulated environments, such as the Hundred Acre Wood. We can simulate that in a cartoon, but that doesn't create any "cartoon space" that has any claim to reality. It just creates a "cartoon space" in our heads where we accept that a talking teddy bear lives with kangaroos in a British forest.

So we accept that these light patterns on the screen and their corresponding data are "entities".

Which brings us back to our simulated human being living in his own simulated space. And our simulated racecar on its track in its simulated space.

There's much we can learn from simulations. But we have to understand that the things they symbolize to us have no claim whatsoever to any independent objective reality. The only thing that is real is the device generating the simulation, and what it's doing is not at all correlated with what's happening in the simulation space.

If the racecar sim can't actually make something roll very fast down a track, something which could actually strike you, then we're talking about an abstraction, which can't exist objectively because it's a product of cognition.

And if our simulated human doesn't actually produce those brain waves -- and everything else that's going on -- in 4-D spacetime, then it's not really happening, period.

It's just a computer crunching ones and zeros.

Simulations are instructive, but they're not real. So you can't generate an actual Sofia event by running a simulation. Just like you can't run your computer off the grid by plugging it into itself and running a simulation of a power plant.

The simulated power plant does not generate energy. The simulated brain does not generate Sofia.
 
The fact is, I can recite the alphabet. My muscles can't. To recite the alphabet requires an algorithmic mechanism, and data. There is simply no other way to get such a result.

The fact that you have a hard time understanding how a series of associative operations in a very fuzzy network could be an algorithm doesn't mean a series of associative operations in a very fuzzy network isn't an algorithm.

I'm not saying that this isn't what we call an algorithm.

But this doesn't mean there's really any data in our brains.
 
If you think the transistors in the RAM of my computer are less real than things you bang up against in reality ... I'm not sure what to say about that.

Are you claiming to be running your computer on simulations of transistors?
 
To imagine that the information and data actually exist independently, in the way that the physical brain exists or our bodily functions can be said to exist, is to conflate abstraction and reality at a very fundamental level.

Yes, but the same can be said for anything one calls "information" and "data," including the ones and zeros on your computer.

So why aren't you complaining about that?

You have gotten me, and Pixy, and a whole bunch of other people all wrong -- we aren't conflating abstraction and reality, we are pointing out that abstraction is nothing but another feature of reality. All there *is* is reality. There is no "abstraction" floating around. "Abstraction" is just a label that intelligent entities like ourselves might apply to the behavior of a system of particles. But guess what -- everything in reality is just behavior of systems of particles.

That makes abstraction no different than anything else. It is all real.
 
If you forget what the output is supposed to symbolize to a human viewer, you only find a computer behaving the way it's built to behave, running calcs and logging numbers and strings and changing the colors of pixels on a screen.

If you forget what the behavior of a human being is supposed to symbolize to other human observers, you only find a system of particles behaving the way nature has determined it to behave.

So what?

For all of the absurdity westprog has introduced on these forums, at least he/she has brought the question of "what" we are to the very forefront of debate (at least for me).

Even though we are nothing but particle behavior, we observe each other in ways that are meaningful to us. And that is all that matters -- to us.

So how is that different from the behavior of systems in a simulation? If those systems recognize each other in some way that is meaningful to them, why does the fact that they are not in OPR matter?

Are they still not systems that behave in a certain way relative to each other?


They are abstractions we create to describe clusters of data churned out by the computer. So are their "actions".

But abstractions are only abstractions to us. They are still systems that exhibit behavior. The "abstraction" label is human dependent -- your sword cuts both ways.

It's just a computer crunching ones and zeros.

You keep saying this. But you still haven't produced a reason why this matters a hoot in the grand scheme of things.

How do you know you are not just a computer crunching ones and zeros? In some solipsistic simulation?

Tell me which of these claims you dispute. Maybe we can make more progress this way:

1) You only have access to reality through your sensory neurons, and possibly chemicals in your bloodstream that might modify neural processes.

2) It is theoretically possible to trick your sensory neurons into firing in an identical fashion to their natural firing from natural stimuli.

3) Thus it is theoretically possible to simulate all input to your nervous system, including input from your own body.

4) You have no direct access to your nervous system that is not through neurons that could be tricked in a similar fashion. That is, you have no way to confirm that a given input to any neuron is coming from a real stimulus, even from a neighboring neuron, rather than a simulated one.

5) Thus you have no way to know whether you are a simulation or not.

Now seriously, can you offer any logical arguments that refute the above 5 claims?
 
I'm not saying that this isn't what we call an algorithm.

But this doesn't mean there's really any data in our brains.

Same goes for a computer.

You seem to want to say that how we typically think of computers isn't a good fit for the brain. I agree.

But the proper course of action is to rethink how we view computers and computing, not the other way around.
 
Yes, but the same can be said for anything one calls "information" and "data," including the ones and zeros on your computer.

So why aren't you complaining about that?

You have gotten me, and Pixy, and a whole bunch of other people all wrong -- we aren't conflating abstraction and reality, we are pointing out that abstraction is nothing but another feature of reality. All there *is* is reality. There is no "abstraction" floating around. "Abstraction" is just a label that intelligent entities like ourselves might apply to the behavior of a system of particles. But guess what -- everything in reality is just behavior of systems of particles.

That makes abstraction no different than anything else. It is all real.

There isn't really any data or information in my computer, either, I hate to have to tell you.

I know that will seem patently false to you, but that's the way it is.

I mean yes, of course, there is, but only in the sense that there are talking animals in the Winnie the Pooh stories. They are definitely there when you read the story and look at the pictures, but you can examine what's physically real about the apparatus and you'll never find talking animals there.

You're right that there's no "abstraction" floating around. No argument from me there.

But let's take a simple simulation of a bridge collapsing from overload.

Say a father is explaining the concept to his son, and he draws a bridge with cars on it, and says "Let's assume each car weighs a ton, and the bridge can hold 100,000 pounds, and there are 45 cars on it right now. How many more cars can drive on?"

Then he writes down 90,000 and they begin to add "cars" by adding 2,000, getting the sum, then adding another 2,000 to the sum.

When the 51st car gets on the bridge, they shout "Oh no!" and scribble all over the bridge.
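The father's pen-and-paper exercise is just this loop. (A sketch; the constants come straight from the story above: 100,000-pound capacity, one-ton cars, 45 cars already on the bridge.)

```python
# The bridge exercise as arithmetic: keep adding cars
# until one more would exceed the bridge's capacity.
CAPACITY = 100_000      # pounds the bridge can hold
CAR_WEIGHT = 2_000      # each car weighs a ton
cars = 45
load = cars * CAR_WEIGHT  # 90,000 lb already on the bridge

while load + CAR_WEIGHT <= CAPACITY:
    cars += 1
    load += CAR_WEIGHT

# cars is now 50 and load is exactly 100,000 lb,
# so the 51st car is the one that triggers the "collapse".
```

Run it or do it with pen and ink: either way, the same abstraction is at work and no physical bridge is involved.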

That exercise didn't create any real bridge or any real collapse.

And no matter how good your sim is, it won't either. Because the bridge and the collapse require our imaginations, just as the talking teddy bear does, and just as the "information" and "data" in our computers do.

These things are not just as real as anything else, because they're not real at all.

You are confusing reality and abstraction.

And this confusion is causing errors in your thinking.

Our simulated human will NEVER cause an actual instance of consciousness, for the same reason that a simulated power plant will never generate energy.

As for your simulated entities in their simulated oil spill, if our time-bomb virus were triggered and all life on earth instantly died, there would be no entities, there would only be the workings of the computer (which is not doing any learning) as it changed states and pixels lit up and changed color.

The abstraction would have utterly ceased to be, because abstractions require imagination.

Abstractions do not exist in reality and cannot do the things that real stuff does, like leak oil, generate power, or be conscious.
 
If you forget what the behavior of a human being is supposed to symbolize to other human observers, you only find a system of particles behaving the way nature has determined it to behave.

That doesn't matter, because the behavior corresponds.

If I kick you in the leg, you feel it because your perception of me kicking you corresponds with something happening in the real world -- i.e., me kicking you.

But in the simulation, the behavior of the physically real apparatus bears no correspondence to what's being "simulated" in imaginary space. It could be human circulation, a moon landing, a race car, or a beehive.

I've been trying to get at this fundamental error in all of these threads.

If you don't correct it, you will continue to reach absurd conclusions.
 
Our simulated human will NEVER cause an actual instance of consciousness, for the same reason that a simulated power plant will never generate energy.

If a simulated computer can perform accurate calculations it doesn't seem like much of a stretch for a simulated brain to perform the same things a 'real' one does.
 
But in the simulation, the behavior of the physically real apparatus bears no correspondence to what's being "simulated" in imaginary space. It could be human circulation, a moon landing, a race car, or a beehive.

Uh what? If the behavior doesn't correspond, then we wouldn't be calling it a simulation of that thing. For example, if a simulation of a beehive has bees that only fly out in perfectly straight lines and never return, and the beehive is made out of hair, and the bees are square, then that's not really a simulation of anything we would call a beehive, is it?
 
But abstractions are only abstractions to us. They are still systems that exhibit behavior. The "abstraction" label is human dependent -- your sword cuts both ways.

No it doesn't, and no they don't, because the abstractions literally vanish when we stop observing them and thinking about them.

If your computer is simulating a group of organisms that learn to avoid a toxin, for example, the entities and toxins and learning are all a conclusion you reach based on a mathematical outcome that we've managed to cleverly display.

But the actual physical system isn't avoiding anything, isn't populated by entities, isn't doing any learning. All of that is imaginary.

There is an absolute wall of separation between reality and simulated space.

Only models can actually do real things.
 
Uh what? If the behavior doesn't correspond, then we wouldn't be calling it a simulation of that thing. For example, if a simulation of a beehive has bees that only fly out in perfectly straight lines and never return, and the beehive is made out of hair, and the bees are square, then that's not really a simulation of anything we would call a beehive, is it?

We're talking here about the difference between simulation and model.

A scale-model car can leak oil. A simulated car can't.

A perfect working model of a human being would be conscious. A simulated human being can't be.

In the model, the behavior of the physical system corresponds exactly to what's happening in OPR.

In the simulation, the behavior of the physical system -- whether it's pen and ink, or a supercomputer -- does not correspond to what's being simulated.

For example, the abacus only means 1,234 if you know how to read it. But a pile of 1,234 rocks is a pile of 1,234 rocks even if you can't count them.

No matter how good our simulated human is, he will never be conscious because the simulation can't make brain waves move across 4-D spacetime, just like it can't make water move through a radiator.
 
If a simulated computer can perform accurate calculations it doesn't seem like much of a stretch for a simulated brain to perform the same things a 'real' one does.

What do you mean by "a simulated computer"?

Actually, I don't think I want to know, but I have to ask.
 