Has consciousness been fully explained?

No, it is the behavioral dilemma: you assume that you are having that experience because you label it as such.

This is absolute corn-fed nonsense.

One cannot label an experience one does not have.

Yes, we can have hallucinations, but we cannot hallucinate the experience of Sofia itself, because it is a prerequisite to the experience of any hallucination.
 
It would be a simulated crash. And the simulated lander would then cease to exist in the simulation; it would instead be a simulated pile of junk.

The point is, the simulated lander fires its simulated rockets because, if it does not, then according to the rules of the simulation it will cease to exist in the form the simulated beings that constructed it intended.

Now, extrapolate that to our own universe. How do you know our lunar landers would "really" crash? You don't. All you know is that the rules of our universe dictate that if it did not fire those rockets it would smash into the moon and be destroyed in what we all observe as a crash.

In both cases it is just 1) entities 2) following rules.
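For what it's worth, here's a toy sketch in Python of what I mean by "entities following rules." Every name, number, and rule here is made up purely for illustration: inside the simulation, the "lander" is nothing but a few variables being updated according to rules, and a "crash" is just another state those rules can produce.

Code:
# Hypothetical toy example -- the numbers and rules are invented.
def run_lander(fire_rockets: bool, steps: int = 1000, dt: float = 0.1) -> str:
    altitude, velocity = 1000.0, 0.0   # metres, metres per second
    gravity = -1.62                    # lunar surface gravity
    thrust = 10.0                      # made-up engine acceleration

    for _ in range(steps):
        # Rule: fire the engine only if it exists, the lander is low,
        # and it is still falling faster than a safe touchdown speed.
        firing = fire_rockets and altitude < 200.0 and velocity < -2.0
        velocity += (gravity + (thrust if firing else 0.0)) * dt
        altitude += velocity * dt
        if altitude <= 0.0:
            # Another rule: hit the surface too fast and the "lander"
            # ceases to exist as a lander -- it is now simulated junk.
            return "landed" if velocity > -3.0 else "crashed: simulated junk"
    return "still descending"

print(run_lander(fire_rockets=False))  # crashed: simulated junk
print(run_lander(fire_rockets=True))   # landed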

All of which is gloriously irrelevant to the question at hand.
 
So some assert, based on wishful thinking but lacking evidence. I'm with Rummy:

1) "There are known knowns. These are things we know that we know."
And we know many things: for a simulation, the variables.

2) "There are known unknowns. That is to say, there are things that we know we don't know."
And we are firmly here regarding variables and how they interact.

3) "But there are also unknown unknowns. There are things we don't know we don't know."
And this is the problem.

This is not a problem at all.

If physical object A can do function X, then it is always theoretically possible to build a different object B that can do function X. The only limitations are time, space, materials, and sufficient information.

Because human beings can be conscious, it must be possible in theory to build a conscious machine.
 

No, this is not at all unknown. At each level of organization, there are features of the underlying units which matter to what's happening at that level, and others that don't.

You can always, in theory, swap out the low-level units for something else. (But this doesn't guarantee that this "something else" will actually exist in our universe in all cases.)

ETA: This is why synthetic motor oil is interchangeable with organic oil in your car.
 
The switching of the transistor is far more predictable than either the cell or the soup. There's a gradation.

I said the switching in a cell is more like the switching in a computer than a bowl of soup.

In other words, things like this are more like a transistor than anything in a bowl of soup:

http://en.wikipedia.org/wiki/Ion_channel

Do you disagree that the predictability and controllability of an ion channel is closer to that of a transistor than that of aggregates in a bowl of soup?

And in each case, we're choosing just the functional behaviours which interest us in order to make sense of the situation.

So what?

Are you claiming that those behaviors cease to exist when we aren't interested in them?
 
All of which is gloriously irrelevant to the question at hand.

But you said the simulated crash wouldn't be a "real" crash. My response was to show that it was "real" enough to the simulated lander.

That it is or is not "real" to us is irrelevant, since we were not the entity involved in the crash.
 
Piggy said:
So some assert, based on wishful thinking but lacking evidence. I'm with Rummy:

1) "There are known knowns. These are things we know that we know."
And we know many things: for a simulation, the variables.

2) "There are known unknowns. That is to say, there are things that we know we don't know."
And we are firmly here regarding variables and how they interact.

3) "But there are also unknown unknowns. There are things we don't know we don't know."
And this is the problem.

This is not a problem at all.

If physical object A can do function X, then it is always theoretically possible to build a different object B that can do function X. The only limitations are time, space, materials, and sufficient information.
Er, yes. So actually it is a bit of a problem.

Because human beings can be conscious, it must be possible in theory to build a conscious machine.
Maybe we should try a less complex lifeform first. A non-biological blue-green alga, for a start?

And of course we do know how to build consciousnesses. Whether they are "machines" is under discussion.

I also, and again, mention post 1160 as being applicable.
 
Piggy, so what do you think of Jaron Lanier's ideas in post 1160?

He's certainly correct, but what's interesting is that the thought experiment fails even if a perfect simulation were possible.
 
But you said the simulated crash wouldn't be a "real" crash. My response was to show that it was "real" enough to the simulated lander.

That it is or is not "real" to us is irrelevant, since we were not the entity involved in the crash.

For the purpose of the thought experiment, it makes all the difference in the world.

It is also at the heart of the mistake in your claim that our universe might "be a simulation".

Our universe can't be a simulation because it is not an abstraction.

I suppose it's possible that our universe was built by some hyperdimensional race to run a simulation. But if so, we can have no idea what sort of simulation it might be running, for the same reason that a wire in a computer cannot know whether the computer is running a simulation of a moon landing or a race car.

And I suppose it's possible that our universe could be a scale model of an identical but larger universe, but in that case we're just as real as we'd be if the other bigger universe did not exist.
 
Er, yes. So actually it is a bit of a problem.


Maybe we should try a less complex lifeform first. A non-biological blue-green alga, for a start?

You're taking everything out of context here. So these points are worthless for this discussion.
 
Btw, there seems to be another fundamental error in thinking about the thought experiment.

It appears that some want to frame it this way:

Suppose we build a computer that does everything the human brain does. Then we put it into a mechanical body that has all the sensory apparatus of a human body...

But this is classic "begging the question" -- assuming as a premise what you're attempting to prove.

It's been amply demonstrated why we can't assume that a computer by itself can do everything the brain does.

So we must begin with one of two scenarios:

A. Suppose we create a perfect computer simulation of a human body, including the brain (of course)....

B. Suppose we create a perfect scale model of a human body, including the brain (of course)....

And we must choose which of these scenarios we want to imagine.
 
But you said the simulated crash wouldn't be a "real" crash. My response was to show that it was "real" enough to the simulated lander.

That it is or is not "real" to us is irrelevant, since we were not the entity involved in the crash.

It's entirely relevant to us. The simulation is for us, not the simulated lander. There's no reason why the "lander" would know it's been in a crash. A perfectly good simulation might simply say "CRASH" when the lander occupies the same space as a rock.

Simulations do not necessarily involve entities within the simulation "knowing" anything at all. Indeed, they almost always don't. The purpose of a simulation is to tell the user of the simulation what would happen in a particular situation. It models just as much interaction between the entities as is necessary for the type of simulation taking place. The entities might not even have a specific separate existence in the program.
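A quick sketch of what I mean, in Python and purely hypothetical: the "lander" and the "rock" need not exist as separate things in the program at all. There are only a few coordinates, and the word "CRASH" is produced solely for the benefit of the user watching the simulation.

Code:
# Hypothetical toy example -- the names and coordinates are invented.
def simulate(positions_of_rocks, lander_path):
    for step, lander_position in enumerate(lander_path):
        if lander_position in positions_of_rocks:
            print(f"step {step}: CRASH")   # message exists only for the user
            return
        print(f"step {step}: position {lander_position}")
    print("no collision")

# The "lander" is just a list of coordinates; the "rocks" just a set of them.
simulate(positions_of_rocks={(3, 0)},
         lander_path=[(0, 3), (1, 2), (2, 1), (3, 0)])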
 
It's entirely relevant to us. The simulation is for us, not the simulated lander. There's no reason why the "lander" would know it's been in a crash. A perfectly good simulation might simply say "CRASH" when the lander occupies the same space as a rock.

Simulations do not necessarily involve entities within the simulation "knowing" anything at all. Indeed, they almost always don't. The purpose of a simulation is to tell the user of the simulation what would happen in a particular situation. It models just as much interaction between the entities as is necessary for the type of simulation taking place. The entities might not even have a specific separate existence in the program.

I thought the whole premise was to simulate particles down to the Planck level.
 
I don't understand what that means. Can you explain a bit more?

Well, it's what Westprog and I (at least) have been mentioning lately.

Simulations don't exist in objective physical reality (OPR). The mechanism sustaining the simulation has real electro-physical processes going on in OPR, but you can't judge from that what's being simulated. Instead, you have to be able to interpret whatever the physical output actually is (e.g., patterns of colored light, soundwaves) so your brain can imagine very precisely whatever's being simulated.

That's why we have to provide so much input to make it very precise in how it displays the output -- binary printouts don't look like racecars to us.

But the simulation never makes water flow through a radiator.

Similarly, no matter how closely we perform a simulation of the brain, we'll never make any of those signature waves move through OPR, for example, because the thing we call the simulation is an abstraction. No actual brain is operating in OPR, which is where the phenomenon of consciousness actually occurs, just as the phenomenon of spilling oil only actually occurs in OPR, even though they are different kinds of phenomena.

It's like the configuration of the abacus beads. It's only in our imagination that it "is" 1,234. The actual state of that computer system only "is" a simulation of a racecar (or a brain) because it makes us imagine a racecar.
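For a concrete (if hypothetical) analogue: the same four bytes sitting in a computer's memory "are" entirely different things depending on how we choose to read them. The interpretation is ours, not the machine's.

Code:
import struct

state = b'\x00\x00\x04\xd2'           # four bytes: the "abacus beads"
print(struct.unpack('>I', state)[0])  # read as one unsigned int: 1234
print(struct.unpack('>HH', state))    # read as two unsigned shorts: (0, 1234)
print(struct.unpack('>f', state)[0])  # read as a float: a tiny denormal number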

So we certainly are not a simulation of what we believe we experience, because that cannot be real in OPR. We cannot be an abstraction of something, because that would mean we literally are an instance of interpretation of a symbol system, which is not anything you'd confidently call "real".

We cannot actually be an abstraction, so we cannot actually be a simulation. If our universe turned out to be a machine running some kind of simulation, we'd have no way of knowing what it was.

But we could be a model in a very large bottle somewhere in some larger-dimensional universe. It would be all the same to us.
 
Well, it's what Westprog and I (at least) have been mentioning lately.

Simulations don't exist in objective physical reality (OPR). The mechanism sustaining the simulation has real electro-physical processes going on in OPR, but you can't judge from that what's being simulated. Instead, you have to be able to interpret whatever the physical output actually is (e.g., patterns of colored light, soundwaves) so your brain can imagine very precisely whatever's being simulated.

That's why we have to provide so much input to make it very precise in how it displays the output -- binary printouts don't look like racecars to us.

But the simulation never makes water flow through a radiator.

Similarly, no matter how closely we perform a simulation of the brain, we'll never make any of those signature waves move through OPR, for example, because the thing we call the simulation is an abstraction. No actual brain is operating in OPR, which is where the phenomenon of consciousness actually occurs, just as the phenomenon of spilling oil only actually occurs in OPR, even though they are different kinds of phenomena.

It's like the configuration of the abacus beads. It's only in our imagination that it "is" 1,234. The actual state of that computer system only "is" a simulation of a racecar (or a brain) because it makes us imagine a racecar.

So we certainly are not a simulation of what we believe we experience, because that cannot be real in OPR. We cannot be an abstraction of something, because that would mean we literally are an instance of interpretation of a symbol system, which is not anything you'd confidently call "real".

We cannot actually be an abstraction, so we cannot actually be a simulation. If our universe turned out to be a machine running some kind of simulation, we'd have no way of knowing what it was.

But we could be a model in a very large bottle somewhere in some larger-dimensional universe. It would be all the same to us.

....and the conclusion to Piggy's explanation is that we cannot understand something, like a simulation, without the participation of a conscious human being. In other words, knowledge of a machine's consciousness is dependent on human consciousness. Knowledge is composed of percepts/behavior and concepts/labels (Behaviorist included ;)). Concepts/labels are the children of human thinking; percepts are what we sense physically in OPR. No machine can have knowledge of its own consciousness, but humans can have knowledge of our own consciousness, since once a concept, such as consciousness, is born, it becomes a percept. On an abstract level we would never know if a machine/simulation had machine/simulation knowledge of its consciousness, as we would never objectively know how the machine/simulation defined its consciousness, unless of course we pretended that there was a metaphysical definition for consciousness such as SRIP and then pointed to a machine/simulation and said "see, I told you it was what I made it to be". :cool:
 
I thought the whole premise was to simulate particles down to the Planck level.

I don't know if this was what was intended, but it's very far from clear whether such a simulation would be theoretically possible. Certainly it wouldn't be practically possible.
 
....and the conclusion to Piggy's explanation is that we cannot understand something, like a simulation, without the participation of a conscious human being. In other words, knowledge of a machine's consciousness is dependent on human consciousness. Knowledge is composed of percepts/behavior and concepts/labels (Behaviorist included ;)). Concepts/labels are the children of human thinking; percepts are what we sense physically in OPR. No machine can have knowledge of its own consciousness, but humans can have knowledge of our own consciousness, since once a concept, such as consciousness, is born, it becomes a percept. On an abstract level we would never know if a machine/simulation had machine/simulation knowledge of its consciousness, as we would never objectively know how the machine/simulation defined its consciousness, unless of course we pretended that there was a metaphysical definition for consciousness such as SRIP and then pointed to a machine/simulation and said "see, I told you it was what I made it to be". :cool:

That's the point about the "lander" in the simulation. There's no reason why it should consider itself as a "lander". Within the simulation, it's just a pattern of voltages, changing around. Indeed, it need not consider itself an entity at all. (Leaving aside the fact that it probably isn't aware of anything). It's only a lander from a POV outside the simulation.

If we are a simulation, we may well have no idea what it is we are simulating. That is, unless we are real minds being presented with a simulated universe, à la The Matrix.
 
This is absolute corn-fed nonsense.

One cannot label an experience one does not have.

Yes, we can have hallucinations, but we cannot hallucinate the experience of Sofia itself, because it is a prerequisite to the experience of any hallucination.

Uh huh, nice retort. And as stated before, many things are conflated under consciousness, and by you now under SOFIA; you have not defined it, you have not explored it, you have not tried to delineate it. Yet you insist that it exists. SOFIA is not a prerequisite for hallucination; perception is. So you have just conflated perception into SOFIA.

Just as a whale is a fish.
 