westprog:
> How could a human pretend to be conscious when he/she wasn't actually? Sleepwalking?

An answerphone message.

> How could a human pretend to be conscious when he/she wasn't actually? Sleepwalking?

Stop a moment and think out what the combinatorial possibilities are for even short grammatical English sentences. And then for a short conversation. Your program would occupy the entirety of the visible Universe before it even got started.
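To put rough numbers on that claim (mine, back-of-the-envelope; the vocabulary size, sentence length, and turn count are illustrative assumptions, not figures from the thread):

```python
# Size of the lookup table a canned-response "answerphone" program would need.
# Assumptions (illustrative): 10,000-word vocabulary, 20-word sentences,
# a 5-turn conversation. Most word strings aren't grammatical, but even a
# tiny grammatical fraction of these totals is still astronomical.
VOCAB = 10_000
SENTENCE_LEN = 20
TURNS = 5

per_sentence = VOCAB ** SENTENCE_LEN       # 10**80 possible word strings
per_conversation = per_sentence ** TURNS   # 10**400 for five turns

ATOMS_IN_VISIBLE_UNIVERSE = 10 ** 80       # commonly cited rough figure

print(per_sentence >= ATOMS_IN_VISIBLE_UNIVERSE)   # True: one sentence already
print(per_conversation)                            # a 401-digit number
```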
"Invoke" consciousness? What does that mean?
If you have self-referential information processing, you have consciousness. If not, then not. You can't "invoke" consciousness.
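To make "self-referential information processing" concrete, here is a deliberately trivial sketch (the class and its behaviour are my own illustration, not a definition from the thread); whether anything this simple deserves the word "consciousness" is exactly what is in dispute:

```python
# A toy system that processes information about its own processing:
# it keeps a record of its own actions and reacts to that record.
class Monitor:
    def __init__(self):
        self.history = []                  # record of its own past actions

    def step(self, percept: str) -> str:
        # First-order processing: react to the input.
        action = f"respond to {percept!r}"
        # Second-order processing: inspect its own record of itself.
        if self.history and self.history[-1] == action:
            action = "note: I am repeating myself"
        self.history.append(action)
        return action

m = Monitor()
print(m.step("hello"))   # respond to 'hello'
print(m.step("hello"))   # note: I am repeating myself
```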
I'll stick my neck out here.
I don't believe (and it is nothing more than an opinion) that if we had an incredibly accurate computer simulation (running on hardware something like we have today, i.e. transistors and the like) of a human being that responded exactly as I would in its simulated universe, that simulation would be conscious in the same way that I am.
This is not because I believe humans are special or have any magical properties, but simply because "the map is not the territory". The same holds for any simulation or computer model of anything. For example, we can already create an incredibly accurate computer simulation of a steam engine, and get responses from that model that match exactly what would happen if we ran a real steam engine in the real world, but of course the model will never pump even a millilitre of real water. For all that the model steam engine has the "appearance" of working like a real steam engine, we know there are fundamental differences between it and the real engine.
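A sketch of that point (the model and its numbers are entirely mine, purely illustrative): the simulated engine can report exactly the right figures while the "water" it moves remains nothing but a variable in memory.

```python
# Toy steam-engine model: its numbers can match a real engine exactly,
# but "millilitres pumped" is just a float, not a millilitre of anything.
def simulate_engine(boiler_temp_c: float, seconds: int) -> float:
    """Millilitres 'pumped' under the model's made-up dynamics."""
    rate = max(0.0, (boiler_temp_c - 100.0) * 0.5)   # ml/s above boiling
    return rate * seconds

water_pumped = simulate_engine(boiler_temp_c=120.0, seconds=60)
print(water_pumped)   # 600.0 -- a number in memory; no real water has moved
```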
Now, how all this is falsifiable (from either direction) is a tricky question. But is it one we need be concerned with? We know from empirical evidence, from people whose brains have been physically damaged (e.g. by injury, Alzheimer's, or Parkinson's disease) and from people with congenital brain defects, that consciousness is certainly not as we usually describe it with our language.
> Your notion of "conscious" here is drawn from AI, I think, Pixy.

It is partly drawn from AI research, yes, and partly from psychology and partly from biology and partly from mathematics.

> With humans it is more complex.

Yes and no. Mostly no.

> There can be a whole heap of self-referencing processing going on completely unconsciously.

No. You may not be aware of it, but it is aware of itself.

> To define "consciousness" in AI is easy. To do so in humans is more complex.

No. It works exactly the same way. The definition only becomes complex if you insist on introducing irrelevancies.

> The most widely accepted neuronally-based models for human consciousness are the many variants of Global Workspace Theory.

Says who?

> Amongst cognitive neuroscientists they clearly predominate.

Says who?

> In these models, consciousness is a "global access" state in which certain information is "broadcast" to a wide base of distributed parallel networks - unconscious modules.

What makes you think these "modules" are unconscious?
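For context, the "global workspace" being described is, architecturally, a broadcast pattern: specialist processes compete, and the winning content is made globally available to all of them. A minimal sketch (the class, module names, and activation numbers are my own illustration, not taken from any GWT source):

```python
# Global-workspace broadcast, minimally: specialist modules compete and the
# highest-activation content is "broadcast" (made globally available) to all.
from typing import Callable

class Workspace:
    def __init__(self):
        self.modules: list[Callable[[str], None]] = []

    def subscribe(self, module: Callable[[str], None]) -> None:
        self.modules.append(module)

    def compete_and_broadcast(self, candidates: dict[str, float]) -> None:
        winner = max(candidates, key=candidates.get)   # wins global access
        for module in self.modules:                    # broadcast step
            module(winner)

ws = Workspace()
ws.subscribe(lambda c: print(f"speech module received: {c}"))
ws.subscribe(lambda c: print(f"memory module received: {c}"))
ws.compete_and_broadcast({"red light ahead": 0.9, "itchy sock": 0.2})
```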
> Thus it can be seen that self-referencing has little to do with what makes information conscious or not.

Non-sequitur.

> You are trying to transfer the ultra-simplistic notions of consciousness in AI to human consciousness and it simply does not work.

Then I'm sure that you - unlike AkuManiMani or Westprog or anyone else who has ever made this claim - can point out something that is actually different between human and machine consciousness.

> Human consciousness (actual phenomenality) is vastly more complex and quite possibly a completely different thing from machine consciousness.

Then I'm sure that.... Oh, I said that already.
"And self-referential information processing is consciousness is self-referential..."
> Such processes take place in the brains and bodies of individuals who are, in fact, not conscious. This assertion is flat-out wrong.

No. It is not.
> I don't disagree. But we're talking hypothetically. It's a thought experiment.

It's impossible, so no program can work that way, so we dismiss it.

> No-one seriously thinks that p-zombies ARE real; rather, their hypothetical properties illuminate some complexities of the question of consciousness.

Actually, P-Zombies are worse. They are actually incoherent by definition under any self-consistent metaphysics.

> So humour me. By your definition (which, as I said, I broadly agree with), would such a bot be conscious, or not?

Is it performing self-referential information processing?

> Generate. Produce.

What does that mean, though, with respect to consciousness? The consciousness is not something separate from the running program that you can pick up and move somewhere else, any more than you can separate the running from the runner.

> Is such a line of code "self-referential information processing" (no!)? Even if it produces, essentially, the p-zombie?

P-Zombies are, by definition, only possible under internally inconsistent metaphysical assumptions. So we can dismiss them too.

> Could you step out of bold assertion and do some thinking? I'm genuinely interested in your thoughts, and their implications, but you're not having a discussion here.

You're not asking meaningful questions. So the only answer I can give you is that your questions are not meaningful.

> We are a vastly long way from producing a computer program which is self-referential in the way that human beings are self-referential. I'm not aware of any program which deals with its own nature as a computer program, and can examine its own running code and make discoveries about it.

That you are not aware of this is not a very strong statement. You weren't aware of SHRDLU, either.
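As it happens, "a program which can examine its own running code" is not exotic; reflective languages offer it directly. A minimal sketch using Python's standard inspect module (the "discoveries" reported are my own trivial examples; whether this counts as a program dealing with "its own nature" is the contested question):

```python
# A program that reads its own source and reports small facts about itself.
import inspect
import sys

def self_examine() -> dict:
    # Retrieve the source code of the currently running module.
    source = inspect.getsource(sys.modules[__name__])
    return {
        "lines_of_code": len(source.splitlines()),
        "mentions_itself": "self_examine" in source,   # names itself?
    }

# Run as a script: prints e.g. {'lines_of_code': 14, 'mentions_itself': True}
print(self_examine())
```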
> I agree entirely. I think Mercutio and Pixy would too, though. [Would you, guys?]

No, I disagree. If it behaves exactly like you, it's conscious exactly like you.

> One key component of the p-zombie is that it poses these questions of falsifiability, or "How do we tell consciousness from non-consciousness [given the fact that we can't mind-read]?".

We can mind-read, so the question doesn't arise.
...snip...
> One key component of the p-zombie is that it poses these questions of falsifiability, or "How do we tell consciousness from non-consciousness [given the fact that we can't mind-read]?".

Yes, it's a tricky question. Which is where I'm with AMM - the blunt refusal of some to even acknowledge that it is a relevant question at all (let alone a difficult one) is frustrating.
> No, I disagree. If it behaves exactly like you, it's conscious exactly like you.

...snip...

> It's impossible, so no program can work that way, so we dismiss it.

> Actually, P-Zombies are worse. They are actually incoherent by definition under any self-consistent metaphysics.

> Is it performing self-referential information processing?

> What does that mean, though, with respect to consciousness? The consciousness is not something separate from the running program that you can pick up and move somewhere else, any more than you can separate the running from the runner.

> You're not asking meaningful questions. So the only answer I can give you is that your questions are not meaningful.
But it is only a problem (i.e. the HPC, the Hard Problem of Consciousness) if you assume that how we describe and talk about our "consciousness" is an accurate description of that consciousness, and, as I say, the empirical data shows that it isn't.
At one time we really didn't have the data, so it was quite appropriate to try to understand "consciousness" as it was described. But today that is akin to trying to describe an optical illusion in terms of what we say we see. Consider an optical illusion that makes us think/experience, i.e. be conscious of, movement of dots when we know there is no movement: we wouldn't try to explain that illusion in terms of the dots' movement, despite that being the consistent description of the illusion.
> But it doesn't actually behave like me does it?

Hrm. Depends on how you parse the original sentence.

> No. Let's not dismiss it. The questions it poses are important. It's a thought-experiment which distils what appear to be weaknesses in your position. It illustrates certain issues with the concepts you're using to base your assertions on. These are important questions. Why handwave them away?

Uh, because it's impossible.

> Hypothetically, is such a program conscious by your definition?

Does it perform self-referential information processing?
Biological systems are "instructed"?
That's tautological.
Tell me how we would distinguish a conscious machine from, say, a really good chat-bot? Or would a really good chat-bot by definition be conscious?
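For contrast, the "really good chat-bot" worry is about systems like this one, only scaled up enormously (a deliberately crude sketch; the patterns and replies are mine). It emits plausible answers while doing no information processing about itself at all, which is why behaviour alone is a shaky test:

```python
# ELIZA-style chat-bot: canned pattern -> canned reply, no model of itself.
import re

RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bare you conscious\b", re.I), "Of course I am conscious."),
    (re.compile(r".*", re.S), "Tell me more."),
]

def reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(reply("Are you conscious?"))  # "Of course I am conscious." -- said by
                                    # code with no state and no self-reference
```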
I don't know, but the answers are most certainly not self-evident.
We could possibly create artificial beings who are conscious and still not know how the consciousness got there.
The problem is that the way we determine whether other human beings are conscious is very complex and hard to define. It depends hugely on having shared experience.
> Uh, because it's impossible.
If I had an elephant, and it was green, and had wings, and thirteen legs, and was made of cheese, and was roughly the size and consistency of the Virgo Supercluster, would it still be an elephant?
> Does it perform self-referential information processing?
...snip...
But if you then ask: if you simulate a human being to an arbitrary level of detail inside a computer, do you have a real-world consciousness? The answer is yes.
Because the steam engine is defined by its ability to work with matter, while consciousness is defined by its ability to work with information. And while matter cannot cross the simulation barrier, information has to.
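One way to see the asymmetry being claimed here (my own illustration, not from the thread): the same program that merely represents water really does perform computation, and the computation's informational products are just as real outside the simulation as inside it.

```python
# Matter vs. information across the simulation barrier.
def simulated_pump(ml: float) -> float:
    return ml          # "water" is only a number; no real water leaves the box

def simulated_adder(a: int, b: int) -> int:
    return a + b       # the sum is real information, usable out here

print(simulated_pump(500.0))   # 500.0 "millilitres" of nothing at all
print(simulated_adder(2, 3))   # 5 -- a genuinely correct real-world answer
```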