Has consciousness been fully explained?

Status
Not open for further replies.
My adviser studied mathematics under Church and now works in cognitive science. I ran that claim about the Church thesis by him via PM earlier today, and he agrees with you (there is no evidence for it because it is wrong).

The claim has been thoroughly refuted on numerous occasions now. It will keep being repeated though. It's strange, because as phrased* it's patently nonsensical - so to refute it it's necessary to figure out what is really being claimed and show what is wrong with that.

*Clearly a computer simulation of a brain can't do everything a brain does, and nobody actually thinks that this is true.
 
Well, it could. I don't know why you'd want it to, but it certainly could.

There's a constant confusion between the functional definition, whereby a bowl of soup is entirely useless as a switch, and the physical definition, where a bowl of soup can be a switch, or hundreds of switches, or billions of switches. It's only an issue when the claim is made that there's some physical activity that's demonstrably unique to life and the devices created by living things.
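The functional/physical distinction can be made concrete with a toy sketch (the class name and the temperature encoding here are invented purely for illustration): physically, almost any system with distinguishable states can be read as a switch under some encoding, which is exactly why that fact alone tells us nothing interesting.

```python
# Physical definition: any system whose states we can threshold into 0/1.
# Functional definition: something it's actually practical to switch with.
class SoupAsSwitch:
    """Read a bowl of soup as a one-bit switch by thresholding its
    temperature: hot = 1, cold = 0. (A purely illustrative encoding.)"""

    def __init__(self, temperature_c=20.0, threshold_c=40.0):
        self.temperature_c = temperature_c
        self.threshold_c = threshold_c

    def set_bit(self, bit):
        # "Writing" means physically heating or cooling the soup.
        self.temperature_c = 60.0 if bit else 20.0

    def read_bit(self):
        return 1 if self.temperature_c > self.threshold_c else 0


bowl = SoupAsSwitch()
bowl.set_bit(1)
# Physically the soup now "is" a switch holding a 1 -- while remaining
# functionally useless: slow, lossy, and hard to wire into a circuit.
```

The point of the sketch is only that "is a switch" in the physical sense is cheap; the functional sense carries all the engineering weight.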
 
Sorry to have missed so much action over here. I've been stinking up the Sam Harris thread.

The problem with this approach is that it doesn't actually mean anything, because I don't only have access to claims about Sofia; I have access to the experience itself, as do you.

Since we all experience it, and we all have similar brains, then we can conclude we all have it.

The fact that a simulation would, in simulation space, "claim" to have it is irrelevant. It's like saying that I might not really have driven to work this morning, because a simulation of someone driving to work might include sounds coming out of the speakers which sound like a human voice claiming to be driving to work.

It's totally irrelevant.

No, it is the behavioral dilemma: you assume that you are having that experience because you label it as such. Which is fine; I am not going to say that something isn't happening, either way.

The issue is this: under a unitary standard, we can only describe the behaviors of an object and then apply labels to those behaviors. So when it comes to SOFIA, we can only label the behaviors of SOFIA and then judge whether an object meets the criteria for 'having SOFIA'.

Now what I would ask is this: how can an individual tell that they have SOFIA, or do they have a set of other events that they conflate, label, and just call SOFIA? I am not asking this to be argumentative but because it is a real issue in human biology.

We often have events that we label as one thing, and that personal label may or may not be accurate.
 
It only becomes a difficulty when you label it "a unitary standard" for analytical purposes. If you refrain from standardizing SOFIA events and instead regard them more like works of art, which have a meaning in themselves, then I don't see the difficulty.
 
 

We should apply the same standards that we use in other situations. Somebody who lives in the forest may learn to recognise the noises made by a squirrel being eaten by a wolverine. After a while, he can recognise it easily and will readily associate the noise with the event. But when he goes to the souvenir shop and picks up a toy which reproduces the same noise, he doesn't assume that a real squirrel is being eaten by a real wolverine, even if the noise made is exactly the same. He doesn't assume that something real is going on, because while one aspect of the information is the same - the high-pitched squirrel screaming - the rest of the situation is entirely different.

It might be that inside the box there is an actual wolverine eating a squirrel, but it seems very unlikely. This is one of the earliest lessons that human beings learn. When very young, we can't tell the difference between the soft toy and a kitten. Gradually, we learn that things need to have more than one thing in common to be the same thing.

When we get very sophisticated, we are able to link common factors between things that look different. We understand that at a deep level, ostriches and penguins are birds, even though they can't fly, and bats aren't, even though they can. Such classification has to be done with great care, though.
 

I don't see how this is an issue for the people claiming that the phenomena haven't been understood. Whether or not SOFIA refers to a suite of different things, or just one thing, it's only an issue for the people who claim, as per the OP, that it's been fully explained. It might be that the sensation of pain and the sensation of self are totally separate, different things, and the explanation for them is entirely different. That's only an issue when one claims to have an explanation.
 
That's like asking why a simulated lunar landing unit fires its simulated rockets in simulated space if it's not actually landing on the moon. After all, it's not really going to crash if it fails to fire the rockets.

It would be a simulated crash. And the simulated lander would then cease to exist in the simulation, it would instead be a simulated pile of junk.

The point is, the simulated lander fires its simulated rockets because if it does not then according to the rules of the simulation it will cease to exist as the simulated beings that constructed it intended.

Now, extrapolate that to our own universe. How do you know our lunar landers would "really" crash? You don't. All you know is that the rules of our universe dictate that if it did not fire those rockets it would smash into the moon and be destroyed in what we all observe as a crash.

In both cases it is just 1) entities 2) following rules.
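The "entities following rules" framing can be sketched in a few lines (a toy model with made-up numbers and thresholds, not any real simulator): the simulated lander's fate is fixed entirely by the rules of its universe, and a hard arrival simply *is* a crash within the simulation.

```python
# Toy lander: constant gravity, optional thrust, discrete time steps.
def descend(altitude, velocity, thrust_on, gravity=1.6, thrust=3.0, dt=1.0):
    """Advance one time step; returns the new (altitude, velocity)."""
    accel = (thrust if thrust_on else 0.0) - gravity
    velocity += accel * dt
    altitude = max(0.0, altitude + velocity * dt)
    return altitude, velocity

def land(altitude, velocity, fire_rockets):
    """Run the rules until the lander reaches the surface.

    fire_rockets(altitude, velocity) -> bool is the lander's "policy".
    """
    while altitude > 0.0:
        altitude, velocity = descend(
            altitude, velocity, fire_rockets(altitude, velocity))
    # Within the simulation, arriving too fast *is* a crash: the lander
    # ceases to exist as intended and becomes a simulated pile of junk.
    return "landed" if velocity > -5.0 else "crashed"
```

Free fall from 100 units (`fire_rockets` always False) ends in a crash; a simple braking policy such as "thrust whenever velocity drops below -2.0" keeps the touchdown speed survivable. In both cases: entities, following rules.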
 
Well, it could. I don't know why you'd want it to, but it certainly could.

Yes, I agree.

The point is that for a bowl of soup to switch it is a much more involved process than for a little transistor to switch.

That is why we build computers out of transistors instead of bowls of soup.
 
A complete non-biological model of a human being would be conscious, by definition.
So some assert based on wishful thinking but lacking evidence. I'm with Rummy: 1) "There are known knowns. These are things we know that we know."
And we know many things, for a simulation, variables.

2) "There are known unknowns. That is to say, there are things that we know we don't know."
And we are firmly here regarding variables and how they interact.

3) "But there are also unknown unknowns. There are things we don't know we don't know."
And this is the problem.

Although we don't know how it's done yet, we can be sure that consciousness is not happening at the cellular level. It must be at a higher level of organization/granularity.
Depending on specific definitions of consciousness, yes.

And in cases like that, it's always possible in theory to swap out the bits at a lower level of granularity, as long as they behave the same in aggregate.
Unknown.

Stars can swirl in the same way that heavy cream does in cocoa. Funnels can occur in water or air. As long as things act right at the proper level of organization, the micro-properties of the components can be quite different.
Other than in lifeforms, true enough.

my_wan said:
Not so imo. It means a simulation -- simulations certainly are done -- may be lacking variables and/or relationships needed to actually model what is being theoretically simulated.
So add the variables and/or relationships needed for a better model. The variables are still just as real.

So what you appear to say here is the simulation is a simulation because it is missing variables and/or relationships to be real. Does that mean the simulation becomes real when the missing variables and/or relationships are added to the simulation?
It will of course more and more closely model reality.

We know for a fact that our present simulations are missing variables and/or relationships, which is the point of thought experiments about simulations that aren't missing them. But saying a simulation can't be real is tantamount to saying simulations:
1) Must be missing variables
2) Needed variables can't be defined by a simulation.
1) is certainly true; as for 2), see Rumsfeld.

Yet you can't define a real variable that can't be defined. Are you saying fuzzy variables can't be defined? That's not true. We even have fuzzy logic gates where things can only be sort of true.
The real problem is deeper: how do we model the 'correct' interactions while suppressing the 'incorrect' ones?
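Fuzzy gates of the kind mentioned above really do let a proposition be only "sort of true". A minimal sketch using the standard Zadeh min/max operators (a generic illustration, not any particular hardware):

```python
# Zadeh-style fuzzy gates: truth values are degrees in [0, 1],
# so a proposition can be partially true.
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a


# A variable can be 70% "warm" and 40% "moving" at the same time.
warm, moving = 0.7, 0.4
print(fuzzy_and(warm, moving))  # 0.4 -- only as true as its weakest input
print(fuzzy_or(warm, moving))   # 0.7
print(fuzzy_not(warm))          # ~0.3
```

So "fuzzy variables can't be defined" is false in the most literal sense: they are defined, composable, and perfectly well-behaved.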

No matter how you twist it the fact remains that simulation variables are as real as any variable used to define any of your sensory or perceptual variables. Unless you want to label certain variables supernatural, thus not accessible to physics.
The physics we don't understand and 'magic' are the same.

The comments in post 1160 seem appropriate.
 
There's a constant confusion between the functional definition, whereby a bowl of soup is entirely useless as a switch, and the physical definition, where a bowl of soup can be a switch, or hundreds of switches, or billions of switches. It's only an issue when the claim is made that there's some physical activity that's demonstrably unique to life and the devices created by living things.

Yet again you fail to understand the issue.

Nobody has said some property or attribute was unique to life and the devices life creates.

The claim is that there is so much more of some property or attribute in life and the devices life creates that the behavior of those systems is radically different from the behavior of everything else. It is the behavior that is unique.

If you disagree, please show me where "running" or "reproduction" or "metabolism" occurs in any non-living or non-created system. Anywhere. Please, show me.

I honestly don't understand why you can't grasp that simple idea, even after 4 years of arguing about it.
 

Where does "reproduction" or "metabolism" occur in any non-living system?

I've never denied that life has unique qualities. I'm simply denying that there are any fundamental similarities between life and the subclass of non-life formed by life. It's not surprising that living things create objects that fit them in various ways, but just because human beings make gloves, I don't consider that "hand-shaped objects" should be considered a special class with its own behaviour.
 

The reason we build computers out of transistors is that they only switch when told. They behave in an entirely predictable, controllable way, such that the workings of a computer are entirely deterministic and as certain as possible. It's the complexity and intricacy of the bowl of soup that makes it unusable, not its simplicity.
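The predictability point can be caricatured in code (a deliberately crude toy model; the class names and the flip probability are invented for illustration): a transistor-like element changes state only when driven, while a soup-like element's state is constantly perturbed by uncontrolled micro-activity.

```python
import random

class IdealSwitch:
    """Transistor-like: state changes only when explicitly driven."""
    def __init__(self):
        self.state = 0
    def drive(self, value):
        self.state = value
    def read(self):
        return self.state

class SoupSwitch:
    """Soup-like: it has states we could label 0/1, but uncontrolled
    'thermal' churn keeps flipping them on its own."""
    def __init__(self, flip_prob=0.3, seed=None):
        self.state = 0
        self.flip_prob = flip_prob
        self.rng = random.Random(seed)
    def drive(self, value):
        self.state = value
    def read(self):
        if self.rng.random() < self.flip_prob:  # uncontrolled micro-activity
            self.state ^= 1
        return self.state


ideal, soup = IdealSwitch(), SoupSwitch(seed=42)
ideal.drive(1)
soup.drive(1)
# The ideal switch reads back 1 forever; the soup only sometimes does.
```

A computer built from `IdealSwitch` elements is deterministic; one built from `SoupSwitch` elements would spend all its effort fighting its own substrate.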
 
I don't see how this is an issue for the people claiming that the phenomena haven't been understood. Whether or not SOFIA refers to a suite of different things, or just one thing, it's only an issue for the people who claim, as per the OP, that it's been fully explained. It might be that the sensation of pain and the sensation of self are totally separate, different things, and the explanation for them is entirely different. That's only an issue when one claims to have an explanation.

Well that wasn't my claim, I said some parts are fairly well understood others aren't.
 
Where does "reproduction" or "metabolism" occur in any non-living system?

It doesn't.

Computation, however, which is what reproduction and metabolism -- and a whole bunch of other behaviors -- are built upon, occurs all over the place.

I've never denied that life has unique qualities. I'm simply denying that there are any fundamental similarities between life and the subclass of non-life formed by life.

I don't like the word "fundamental" because you clearly throw it around with different meanings at different times.

But there are similarities. Do you disagree, for example, that using your own terms the computation (or switching, whatever you want to call it) that goes on in a cell is much more predictable and controllable than the switching that goes on in a bowl of soup? And doesn't that mean that both cells and computers are similar, because both of them exhibit more predictable and controllable switching than a bowl of soup?
 
The reason we build computers out of transistors is that they only switch when told. They behave in an entirely predictable, controllable way, such that the workings of a computer are entirely deterministic and as certain as possible. It's the complexity and intricacy of the bowl of soup that makes it unusable, not its simplicity.

But what makes a transistor's switching more predictable and controllable than a bowl of soup's?

Surely there must be a "physical" reason, in some "physics" textbook somewhere.

Can you explain that?
 
So if the exact same machine with the exact same intellectual content had sensory input about the world as you perceive it, then it would be a real intelligence? But if that exact same intelligence instead perceived a computer-generated world as primary input, suddenly the exact same intelligence is no longer real because it doesn't see the world you call real?

Do you lose your sophia, or any realness to that sophia, when you are tricked into thinking you're somewhere other than where your body is, or in a simulated environment? Because it's not that hard to fool your senses and make you think you're somewhere else, with your real body in front of you.

No, that is not the point.

We still have Sofia when we dream, after all.

It comes right back to the issue I am forced to continually raise, which is the difference between running a simulation and a model.

If you simulate a racecar, nothing in OPR accelerates when the simulated car does.

It's not a matter of where the sensory input comes from.

It's a matter of whether the machine is a model of a human being, or if it's running a simulation of a human being.

A failure to distinguish these two conditions leads to errors when trying to draw conclusions.
 
So if you were a blind, deaf paraplegic, wired in so that this racecar world was the only world you could interact with - race the car, etc. - is your sophia somehow lost just because you can't perceive the real world as I know it?

I don't think so.

This is entirely irrelevant because in this case we're still dealing with an actual human being (not even a model), rather than a simulation of a human being.
 

The switching of the transistor is far more predictable than either the cell or the soup. There's a gradation. And in each case, we're choosing just the functional behaviours which interest us in order to make sense of the situation.

In most respects, a cell is far more like a bowl of soup than it is like a transistor.
 
As a matter of fact, your sense of qualia is a simulation your brain creates for you. This is why certain illusions work so well. This is why your senses can be tricked into an out-of-body experience, in which you watch your real self from over there. What you call your sense of consciousness is itself part of the qualia modeled in your head for you.

You think that when your hand is touched, you know where that touch was, yet that depends on how your body is modeled in your head, not on how your body really is. This makes touch-location illusions possible.

This is still irrelevant and hopelessly confused.

The question had nothing to do with how the brain handles information during Sofia events.

It was proposed that, because a computer simulation of a human might produce output such as "I am consciously aware" on a screen or printout or through speakers, even if it doesn't have Sofia, then we must accept the idea that an actual human might claim to have Sofia without actually having it.

Which is nonsense.

And it's nonsense because it fails to distinguish between simulations and models -- that is, between abstractions and physical reality.

That's like saying that if our computer-simulated racecar blows an oil hose, but we don't see any oil coming out of the computer, then a real racecar might not leak oil if its hose blows.

ETA: The same basic error is at the heart of the nonsensical claims that "IP causes consciousness" or "Consciousness is IP".
 
