The Hard Problem of Consciousness

Stop a moment and think through what the combinatorial possibilities are for even short grammatical English sentences. And then for a short conversation.

Your program would occupy the entirety of the visible Universe before it even got started.
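
To put very rough numbers on that (a back-of-the-envelope sketch; the vocabulary size, sentence length and conversation length below are assumptions for illustration, not measurements):

import math

# Rough, illustrative estimate of the lookup-table size a canned-reply
# bot would need. All figures are assumptions chosen for illustration.
vocab = 10_000            # assumed working vocabulary
words_per_sentence = 10   # assumed sentence length
sentences = 30            # assumed length of a short conversation

digits_per_sentence = words_per_sentence * math.log10(vocab)   # ~40
digits_per_conversation = sentences * digits_per_sentence      # ~1200

print(f"~10^{digits_per_sentence:.0f} word strings per sentence")
print(f"~10^{digits_per_conversation:.0f} possible short conversations")
# For comparison, the observable Universe holds roughly 10^80 atoms,
# so a table indexed by whole conversations could never be stored.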

I don't disagree. But we're talking hypothetically. It's a thought experiment. No one seriously thinks that p-zombies ARE real, but their hypothetical properties illuminate some complexities of the question of consciousness.

So humour me. By your definition (which, as I said, I broadly agree with), would such a bot be conscious, or not?


"Invoke" consciousness? What does that mean?

If you have self-referential information processing, you have consciousness. If not, then not. You can't "invoke" consciousness.

Generate. Produce. Is such a line of code "self-referential information processing" (no!)? Even if it produces, essentially, the p-zombie? Could you step out of bold assertion and do some thinking? I'm genuinely interested in your thoughts, and their implications, but you're not having a discussion here.
 
I'll stick my neck out here.

I don't believe (and it is nothing more than an opinion) that if we had an incredibly accurate computer simulation of a human being (running on hardware something like we have today, i.e. transistors and the like) that responded exactly as I would in its simulated universe, it would be conscious in the same way that I am.

This is not because I have any belief that humans are special or that we have any magical properties, but simply because "the map is not the territory". It's the same for any simulation or computer model of anything. For example, we can already create an incredibly accurate computer simulation of a steam engine, and we can get responses from that model that match up exactly with what would happen if we ran a real steam engine in the real world, but of course the model will never pump even a millilitre of real water. For all that the model steam engine has the "appearance" of working as a real steam engine, we know there are fundamental differences between it and the real steam engine.
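
To make the steam-engine point concrete, here is a minimal sketch of such a model (the ideal-gas simplification and the figures are mine, purely for illustration): it reproduces the sort of numbers you would measure on a real boiler, yet no water, steam or metal is anywhere involved.

R = 8.314          # gas constant, J/(mol*K)
n_moles = 100.0    # assumed amount of steam in the boiler
volume = 0.5       # assumed boiler volume in m^3

def boiler_pressure(temperature_k: float) -> float:
    """Pressure (Pa) predicted by an idealised (ideal-gas) boiler model."""
    return n_moles * R * temperature_k / volume

for t in (400.0, 450.0, 500.0):
    print(f"{t:.0f} K -> {boiler_pressure(t) / 1e5:.1f} bar")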

It's also true that it's only in the field of artificial intelligence that there is any claim that the simulation is equivalent to the thing simulated. Computer programs have been written to simulate almost anything you can think of, and nobody ever considers that, say, a simulation of the solar system is equivalent to a real solar system.
 
I don't believe (and it is nothing more than an opinion) that if we had an incredibly accurate computer simulation of a human being (running on hardware something like we have today, i.e. transistors and the like) that responded exactly as I would in its simulated universe, it would be conscious in the same way that I am.

I agree entirely. I think Mercutio and Pixy would too, though. [Would you, guys?]

Now how this is all falsifiable (from either direction) is a tricky question. But is it one that we need be concerned with? We know from empirical evidence, from people whose brains have been damaged by physical assaults on their structure (e.g. injury, Alzheimer's, Parkinson's disease) and from people with congenital brain defects, that consciousness is certainly not as we usually describe it with our language.

Yes, it's a tricky question. Which is where I'm with AMM - the blunt refusal of some to even acknowledge that it is a relevant question at all (let alone a difficult one) is frustrating. One key component of the p-zombie is that it poses these questions of falsifiability, or "How do we tell consciousness from non-consciousness [given the fact that we can't mind-read]?".
 
Your notion of "conscious" here is drawn from AI, I think, Pixy.
It is partly drawn from AI research, yes, and partly from psychology and partly from biology and partly from mathematics.

With humans it is more complex.
Yes and no. Mostly no.

Human consciousness considered as a system is very complicated. But the consciousness is not the source of the complexity.

There can be a whole heap of self-referencing processing going on completely unconsciously.
No. You may not be aware of it, but it is aware of itself.

This is precisely equivalent to claiming that other people cannot be conscious because you don't know what they are thinking.

To define "consciousness" in AI is easy. To do so in humans is more complex.
No. It works exactly the same way. The definition only becomes complex if you insist on introducing irrelevancies.
 
The most widely accepted neuronally-based models for human consciousness are the many variants of Global Workspace Theory.
Says who?

Amongst cognitive neuroscientists they clearly predominate.
Says who?

In these models, consciousness is a "global access" state in which certain information is "broadcast" to a wide base of distributed parallel networks - unconscious modules.
What makes you think these "modules" are unconscious?
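
For what it's worth, the "broadcast" idea can be caricatured in a few lines (a toy sketch only; the module names and salience scores are invented, and real Global Workspace models are neuronal rather than symbolic):

# Toy caricature of the Global Workspace "broadcast" idea.
# Modules compete; the most salient content wins access to the
# workspace and is then broadcast back to every module.
modules = {
    "vision":  (0.9, "red ball approaching"),               # (salience, content) - invented
    "hearing": (0.4, "background hum"),
    "memory":  (0.6, "red balls are thrown by children"),
}

# Competition: the highest-salience content gains "global access".
winner = max(modules.values(), key=lambda item: item[0])
conscious_content = winner[1]

# Broadcast: every module receives the winning content.
broadcast = {name: conscious_content for name in modules}
print(conscious_content)
print(broadcast)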

Thus it can be seen that self-referencing has little to do with what makes information conscious or not.
Non-sequitur.

You are trying to transfer the ultra simplistic notions of consciousness in AI to human consciousness and it simply does not work.
Then I'm sure that you - unlike AkuManiMani or Westprog or anyone else who has ever made this claim - can point out something that is actually different between human and machine consciousness.

Human consciousness (actual phenomenality) is vastly more complex and quite possibly a completely different thing from machine consciousness.
Then I'm sure that.... Oh, I said that already.
 
"And self-referential information processing is consciousness is self-referential..."

Such processes take place in the brains and bodies of individuals who are, in fact, not conscious. This assertion is flat-out wrong.



No. It is not.

We are a vastly long way from producing a computer program which is self-referential in the way that human beings are self-referential. I'm not aware of any program which deals with its own nature as a computer program, and can examine its own running code and make discoveries about it.
 
I don't disagree. But we're talking hypothetically. It's a thought experiment
It's impossible, so no program can work that way, so we dismiss it.

No-one seriously thinks that p-zombies ARE real, but that their hypothetical properties illuminate some complexities of the question of consciousness.
Actually, P-Zombies are worse: they are incoherent by definition under any self-consistent metaphysics.

So humour me. By your definition (which, as I said, I broadly agree with), would such a bot be conscious, or not?
Is it performing self-referential information processing?

Generate. Produce.
What does that mean, though, with respect to consciousness? The consciousness is not something separate from the running program that you can pick up and move somewhere else, any more than you can separate the running from the runner.

Is such a line of code "self-referential information processing" (no!)? Even if it produces, essentially, the p-zombie?
P-Zombies are, by definition, only possible under internally inconsistent metaphysical assumptions. So we can dismiss them too.

Could you step out of bold assertion and do some thinking? I'm genuinely interested in your thoughts, and their implications, but you're not having a discussion here.
You're not asking meaningful questions. So the only answer I can give you is that your questions are not meaningful.
 
We are a vastly long way from producing a computer program which is self-referential in the way that human beings are self-referential. I'm not aware of any program which deals with its own nature as a computer program, and can examine its own running code and make discoveries about it.
That you are not aware of this is not a very strong statement. You weren't aware of SHRDLU, either.

Systems incorporating optimising JIT compilers do this sort of thing. You very likely have one installed on your PC.

And most humans demonstrate little ability to do this in any case.
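
As a small illustration of a program examining its own code (a minimal sketch run as a script, not a claim about how any particular JIT works internally):

import dis
import inspect

def fib(n: int) -> int:
    """Naive Fibonacci, used here only as something to introspect."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# The program reads its own source text...
print(inspect.getsource(fib))

# ...and examines its own compiled bytecode, e.g. counting how many
# times fib refers to itself.
calls_to_self = sum(
    1 for ins in dis.get_instructions(fib) if ins.argval == "fib"
)
print(f"fib refers to itself {calls_to_self} times in its bytecode")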
 
I agree entirely. I think Mercutio and Pixy would too, though. [Would you, guys?]
No, I disagree. If it behaves exactly like you, it's conscious exactly like you.

One key component of the p-zombie is that it poses these questions of falsifiability, or "How do we tell consciousness from non-consciousness [given the fact that we can't mind-read]?".
We can mind-read, so the question doesn't arise.
 
...snip...

Yes, it's a tricky question. Which is where I'm with AMM - the blunt refusal of some to even acknowledge that it is a relevant question at all (let alone a difficult one) is frustrating. One key component of the p-zombie is that it poses these questions of falsifiability, or "How do we tell consciousness from non-consciousness [given the fact that we can't mind-read]?".

But it is only a problem (i.e. the HPC) if you assume that how we describe and talk about our "consciousness" is an accurate description of that consciousness, and as I say, the empirical data shows that it isn't.

At one time we really didn't have the data, so it was quite appropriate to try to understand "consciousness" as it was described. But today that is akin to trying to explain an optical illusion in terms of what we say we see. Consider an optical illusion that makes us think/experience, i.e. be conscious of, movement of dots when we know there is no movement; we wouldn't try to explain that illusion in terms of the dots' movement, despite that being the consistent description of the illusion.
 
It's impossible, so no program can work that way, so we dismiss it.

No. Let's not dismiss it. The questions it poses are important. It's a thought-experiment which distils what appear to be weaknesses in your position. It illustrates certain issues with the concepts you're using to base your assertions on. These are important questions. Why handwave them away? Your answers may be (and probably are) coherent, and what seem at first glance to be weaknesses may subsequently turn out not to be so. But you need to engage with the question before we'll know.

Hypothetically, is such a program conscious by your definition? I'm just posing these questions to try to get at the exact point you're making; to clarify what it is you're claiming (which I think I agree with). I am in no way suggesting that such a machine can be built, any more than those who discuss the p-zombie think it can (or does) really exist. It's just a tool to try to get you to clarify what you're proposing.

Please.


Actually, P-Zombies are worse: they are incoherent by definition under any self-consistent metaphysics.

I agree, which is why I'm using the parrot-bot example - it seems to be the type of thing you're suggesting is conscious, yet it seems to lack something that I would argue is a prerequisite for consciousness - the ability to have novel thoughts. So I wanted to ask you whether you thought such a program was conscious. If so, why? If not, why not?


Is it performing self-referential information processing?

I don't know. That's why I'm asking you. I'm being interrogative, not assertive. I think I agree with you, I just want to know exactly what it is you're claiming. Would a parrot-bot with a function to disguise repetitive answers be conscious, in your opinion?

What does that mean, though, with respect to consciousness? The consciousness is not something separate from the running program that you can pick up and move somewhere else, any more than you can separate the running from the runner.

Consciousness is a set of behaviours. On that we can agree. So, I'm asking you - do you think that a line of code in a parrot-bot that disguises repetitive answers would be one of those behaviours? Is my parrot-bot conscious? It is trivially self-referential, and processes information - but it lacks the ability to be novel. I honestly don't know if you think it is conscious, or it isn't, because you're dodging the question.
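
For concreteness, the sort of thing I have in mind by a parrot-bot with such a function is roughly this (a hypothetical sketch; the canned replies and the three-turn memory window are invented for illustration):

import random

# Hypothetical "parrot-bot": canned replies plus one trivially
# self-referential touch - it consults its own recent output so as
# not to repeat itself.
CANNED_REPLIES = [
    "How interesting. Tell me more.",
    "I see what you mean.",
    "Why do you say that?",
    "That reminds me of something you said earlier.",
]

class ParrotBot:
    def __init__(self) -> None:
        self.recent: list[str] = []   # the bot's record of its own answers

    def reply(self, _user_input: str) -> str:
        # Avoid repeating anything said in the last few turns.
        options = [r for r in CANNED_REPLIES if r not in self.recent[-3:]]
        choice = random.choice(options or CANNED_REPLIES)
        self.recent.append(choice)
        return choice

bot = ParrotBot()
for turn in ("Hello", "I had a strange dream", "It was about elephants"):
    print(bot.reply(turn))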


You're not asking meaningful questions. So the only answer I can give you is that your questions are not meaningful.

They are meaningful. How we can distinguish consciousness from non-consciousness seems to me to be the most meaningful question there is to ask in this field.
 
But it is only a problem (i.e. the HPC) if you assume that how we describe and talk about our "consciousness" is an accurate description of that consciousness, and as I say, the empirical data shows that it isn't.

At one time we really didn't have the data, so it was quite appropriate to try to understand "consciousness" as it was described. But today that is akin to trying to explain an optical illusion in terms of what we say we see. Consider an optical illusion that makes us think/experience, i.e. be conscious of, movement of dots when we know there is no movement; we wouldn't try to explain that illusion in terms of the dots' movement, despite that being the consistent description of the illusion.

I don't think you're right, even though I agree with your broad point. Whatever behaviour set really defines consciousness (and yes, neuroscience is clarifying what constitutes that behaviour set), the problem (the hard problem, if you will) remains - how do we tell consciousness from non-consciousness? Now, this is just a methodological problem rather than an ontological one (as Westprog is arguing), but it's a problem nonetheless.
 
But it doesn't actually behave like me does it?
Hrm. Depends on how you parse the original sentence.

But if you are saying, if you simulate a steam engine to an arbitrary level of detail inside a computer, do you have a real-world steam engine, the answer is (of course), no.

But if you then ask, if you simulate a human being to an arbitrary level of detail inside a computer, do you have a real-world consciousness, the answer is yes.

Because the steam engine is defined by its ability to work with matter, while consciousness is defined by its ability to work with information. And while matter cannot cross the simulation barrier, information has to.
 
No. Let's not dismiss it. The questions it poses are important. It's a thought-experiment which distils what appear to be weaknesses in your position. It illustrates certain issues with the concepts you're using to base your assertions on. These are important questions. Why handwave them away?
Uh, because it's impossible.

If I had an elephant, and it was green, and had wings, and thirteen legs, and was made of cheese, and was roughly the size and consistency of the Virgo Supercluster, would it still be an elephant?

Hypothetically, is such a program conscious by your definition?
Does it perform self-referential information processing?
 
Biological systems are "instructed"?

Yes, they are. It's called DNA.

That's tautological.

No, it's not. Some things look a certain way but are of a different nature.

Tell me how we would distinguish a conscious machine from, say, a really good chat-bot? Or would a really good chat-bot by definition be conscious?

It depends how we define consciousness. So, let's start with that. How do YOU define consciousness?

I don't know, but the answers are most certainly not self-evident.

I never claimed they were.
 
We could possibly create artificial beings who are conscious and still not know how the consciousness got there.

Of course. But it's still a matter of function, not composition. Unless you can show that humans are, somehow, "special", and how.

The problem is that the way we determine whether other human beings are conscious is very complex and hard to define. It depends hugely on having shared experience.

Not really. People have been saying that volcanoes, trees, winds and events are conscious forever. Humans have projected their own "mind" onto other things, so it's no wonder we naturally assume other humans are conscious as well. Why? Because we associate certain behaviour with a "mental state", i.e. anger, love, etc. So when the volcano explodes, we say it's angry. I don't think it's that difficult to "determine" consciousness, it's just often wrong. But the way we do it is by examining behaviour. There's simply no other way.
 
Uh, because it's impossible.

If I had an elephant, and it was green, and had wings, and thirteen legs, and was made of cheese, and was roughly the size and consistency of the Virgo Supercluster, would it still be an elephant?

Don't play dumb. You're not dumb.

The answer to the question just posed, though, is NO. Because the definition of "elephant" precludes a curdled-milk constitution.

I honestly don't know what your answer to my hypothetical would be, because you have not adequately explained what it is you mean. So, I'll ask again - is my parrot-bot conscious by your definition? I honestly don't know.


Does it perform self-referential information processing?

Do you count a simple function to prevent repeated answers as "self-referential information processing"? Again, I don't know.

You're turning what could be an interesting chat into an obtuse argument. I really don't know why.
 
...snip...

But if you then ask, if you simulate a human being to an arbitrary level of detail inside a computer, do you have a real-world consciousness, the answer is yes.

Because the steam engine is defined by its ability to work with matter, while consciousness is defined by its ability to work with information. And while matter cannot cross the simulation barrier, information has to.

I think you and I have had this discussion before... As always, we'll have to agree to disagree because, as you know, I think that such an abstraction represents a change in the information.
 
