Has consciousness been fully explained?

This is a priori assuming that consciousness is something above and beyond conscious behavior.

Which you just said you aren't doing, yet here you are doing it!

If a robot behaves exactly as if it were conscious, then it must be conscious.

I don't see why this is so hard for so many people to understand.

It's difficult to understand because it's wrong.

That's like saying that if we can make a robot that behaves as if it has muscles, then it must have muscles.

If we suppose that it's possible to rig up a machine that acts like a conscious person, yet we haven't bothered to do anything to make it actually be conscious, that only proves those tasks can be engineered some other way.

The thing doesn't simply become conscious by reaching some threshold of behavioral mimicry of other things that are conscious. And this is for the same reason that the artificial leg that's good enough to fool you doesn't acquire muscle.

When a thing behaves specifically in the ways that generate consciousness -- that is, the kinds of things going on with the neurons and brain waves -- then it will be conscious. But this critical behavior is not perceptible to a casual observer.

If a machine is built which does that, then it will be conscious, even if it behaves in ways radically different from how we behave.
 
I think your analogy is a bit off, Piggy. Creating a leg that functions just like a leg but without muscles tells us plenty about how a leg moves -- which is what we would be after in such a situation.

That it wouldn't tell us much about muscles is analogous to a robot constructed with other material than neurons telling us quite a bit about consciousness but telling us nothing about neurons. We know it won't inform us much about neurons, but that is not what we are after. We are after an explanation for consciousness. If we can see how a robot is constructed to produce that type of behavior we can get a good idea how other types of networks might be constructed to do it. Sure, it wouldn't be exactly how a brain does it, but that's why it is an analogy.

I'm not sure how one could possibly separate behavior that is indistinguishable from conscious behavior and call it not-conscious. Consciousness is that behavior, it is an action.

That it can be carried out by neurons or silicon chips is the trivial bit.

But I wasn't talking about "a robot constructed with other material than neurons" which would be capable of "telling us quite a bit about consciousness but telling us nothing about neurons".

I was talking about a robot that's not built to be conscious, but can pass for conscious. That tells us very little about consciousness because it's not a property of the robot. That's why it's analogous to the muscle in the leg -- it's an absent element in the model.

It would be very informative about the technology used to engineer the action some other way, but not so much about the engineering of the original.

And no, consciousness is not our overt behavior in response to the world, because we're not consciously aware of a great deal of that. It's a function of our brains. A genuine physical function.

But yeah, it's a trivial notion that model brains can be built (in theory). We don't know what all we'd need to make one yet, but there's no reason they can't be built.
 
How would you know? More than that, what does that even mean? Does Compy act conscious, discuss its private feelings and beliefs and thoughts, but lack, what, the consciousness molecule?

I don't understand.

Well, Compy's the one we rigged up by hooking up all its systems and making it able to learn and such, but we didn't bother to wire it for consciousness and then left him that way. So I don't know, but I wouldn't put money on him doing any of those things. I don't think anyone's betting that he will pass.

But if we assume we somehow could build a robot that passed for conscious but wasn't -- which means we figured out how to produce the behavior some other way -- that robot wouldn't lack a "consciousness molecule"; it would simply lack any apparatus that does whatever the brain is doing physically from the moment it turns on your sense of conscious awareness until it shuts it down again.

We know that neural activity in certain areas of the brain is involved, as are synchronous brain waves, and that when that behavior is going on in the brain, we have (or we are) a sense of awareness and experience, and when the brain stops that behavior, it vanishes and we vanish with it.

It's very strange, but that's what happens. When we're conscious of the world around us, we can perceive the location of our own conscious events (our own Sofia) in time and space.

The conscious machine would have to have the right design to make something equivalent happen, including the physical component, for a similar event to occur.

We can't design anything like that right now. But no, it's not a molecule.
 
If we lower our standards enough as to what we might consider conscious behavior, we might have to accept that a 'bot running an algorithm is actually conscious.

I remember a push in certain quarters of academia to re-define "literacy" very broadly, to include things like interpreting cartoons or recognizing stop signs. As far as I know it failed.

The trouble with something like this, though, is that you then have to invent another word to mean what the old one used to.

So I stand by my definition of it. It's something machines can certainly do in theory.
 
If we suppose that it's possible to rig up a machine that acts like a conscious person, yet we haven't bothered to do anything to make it actually be conscious

No, we don't suppose it is possible. It isn't. It is physically impossible, for reasons I just explained.
 
The thing doesn't simply become conscious by reaching some threshold of behavioral mimicry of other things that are conscious.

Yeah, it does, actually.

That threshold is when the internal flow of information in Compy's circuits starts to mimic the flow of information in the brains of conscious people.

Get it?
 
We understand your hope/wish/thesis/whatever you'd like to call it.

As we say, facts not in evidence. Nor, unfortunately, is it likely those facts will ever be in evidence beyond DD's medical definition of 'consciousness', and that definition (like the others) for many of us completely misses 'what it is'.
 
Well, Compy's the one we rigged up by hooking up all its systems and making it able to learn and such, but we didn't bother to wire it for consciousness and then left him that way. So I don't know, but I wouldn't put money on him doing any of those things. I don't think anyone's betting that he will pass.

But if we assume we somehow could build a robot that passed for conscious but wasn't -- which means we figured out how to produce the behavior some other way -- that robot wouldn't lack a "consciousness molecule"; it would simply lack any apparatus that does whatever the brain is doing physically from the moment it turns on your sense of conscious awareness until it shuts it down again.
How do you think that's possible?

We know that neural activity in certain areas of the brain is involved, as are synchronous brain waves
No. Synchronous brain waves are no more involved in consciousness than 2.4GHz RF noise is in running my computer. In fact, they're exactly as involved -- they're the electromagnetic signal of the clock speed of the circuit.

The conscious machine would have to have the right design to make something equivalent happen, including the physical component, for a similar event to occur.
What physical component?

We can't design anything like that right now. But no, it's not a molecule.
It's not a molecule. Great. What is it? And how do you know? How do you know there's anything there at all?

We have a brain. It's a whole lot of interconnected neurons performing computation. Out of that arises consciousness. There's nothing else there. There is no physical component other than the neural network.

You keep saying there has to be something else. Why? The neural network of our brain is physical. It controls our body; it controls everything we do. What more do you need, and why?
 
We understand your hope/wish/thesis/whatever you'd like to call it.

As we say, facts not in evidence. Nor, unfortunately, is it likely those facts will ever be in evidence beyond DD's medical definition of 'consciousness', and that definition (like the others) for many of us completely misses 'what it is'.
Really? Then what's your definition? What's the basis for your definition? And how does your definition differ from ours?
 
If consciousness evolved to make decisions, then what's the difference between that and saying it's a decision-making process -- which would mean that it's a kind of decision making?

What do you think would be the objection to that?

Or do you think that we should only talk about what it does, and never speak of why or how it evolved?

Because that is like saying chewing evolved to make chewing. It is the same thing.

Consciousness is not some process over and above making decisions. It is simply a form of making decisions. In particular, decisions about the self.

When you say it "evolved to make decisions" it implies that if you remove the decisions, it would still be there. That isn't how it works -- if you remove the things that consciousness does from the equation, there is no more consciousness.

Contrast that with teeth, or the brain. It makes sense to say teeth evolved for chewing, and the brain evolved for decision making. Consciousness did not evolve for decision making; consciousness is decision making that evolved.
 
rocketdodger said:
The thing doesn't simply become conscious by reaching some threshold of behavioral mimicry of other things that are conscious.

Yeah, it does, actually.

That threshold is when the internal flow of information in Compy's circuits starts to mimic the flow of information in the brains of conscious people.

Get it?


Hmm. You seem to defend PM's critique of Searle, which, if I interpret PM correctly, is based on Searle's implied defense of intentionality and/or motivation in human consciousness:

rocketdodger said:
Then Searle says, suppose the man memorises all those books (the Room itself is hardly physically possible, never mind memorising its contents, but let that pass). Then he notes that the man from the Room still doesn't understand Chinese, and yet he can respond to questions written in that language. Where's the part that understands Chinese? The man is now running the system that understands Chinese.

Now that's <rule8> interesting, man, that's <rule8> interesting.


So I'll paraphrase your statement in your first quote here as:

"That threshold is when the internal flow of information in Compy's circuits starts to mimic the flow of information in the brains of conscious people lacking intentionality and/or motivation."

Is Compy insane?
 
Hmm. You seem to defend PM's critique of Searle, which, if I interpret PM correctly, is based on Searle's implied defense of intentionality and/or motivation in human consciousness:
No. Searle is a full-blown dualist, and he is making exactly the same mistakes that we have seen in this thread.

Read the "systems reply" and the "virtual minds reply" here. Searle's response is exactly the same vacuous response we are seeing here from Team Anti-computationalism.

Read the other replies there too. Searle's position is the intellectual equivalent of heat death: Nothing ever changes, nothing is ever learned. Fortunately Searle is not the Universe. But I feel sorry for his students.
 
Really? Then what's your definition? What's the basis for your definition? And how does your definition differ from ours?

It's easy. The ultimate test of consciousness: if the A.I. behaves in every way that a conscious being would, then let it read Daniel Dennett's 'Quining Qualia'. If it agrees that something essential in its experience has been left out by Dennett's picture of consciousness, then the A.I. is conscious.:p
 
Hmm. You seem to defend PM's critique of Searle, which, if I interpret PM correctly, is based on Searle's implied defense of intentionality and/or motivation in human consciousness:

I have no problem with defense of intentionality and/or motivation.

I have a problem with the idea that one could construct a Chinese Room without using intentionality and/or motivation to drive the decisions it must make.

That is simply absurd, and anyone who has even tried their hand at serious programming knows it. In other words, certainly not Searle.

So I'll paraphrase your statement in your first quote here as:

"That threshold is when the internal flow of information in Compy's circuits starts to mimic the flow of information in the brains of conscious people lacking intentionality and/or motivation."

Is Compy insane?

No, you can't have anything even close to human-like consciousness without "intentionality and/or motivation," as you call it.

But even if that were not the case -- and I want to stress that it is the case -- think about this: for someone to construct the Chinese Room in lookup-table fashion, so that there is no "intentionality or motivation" in the workings, what would they need to do?

They would have to sit there and think "if someone asks the room this, the room should respond with .. what? Let me think about how I would respond..." and then they encode the response in the machinery somehow.

What do you think the implications of that are?

Among other things, it means that the guy who made the room has already had a conversation with you. He has experienced it himself. As if you were standing there talking to him from the other side of a wall. He had to make up an imaginary person to have the conversation with in order to program the lookup table. He has already spoken to you, before you ask the room even a single thing. Before you were even born. He knows everything you will ever say to the room -- by definition.

Thus he has consciously experienced your interaction with the room, yet in a different time in a different place. Meaning, the room would still be conscious by all applicable definitions -- it is simply the extension of the person that built it, having a conversation with you across time.

Now if you can find a mathematical hole in that logic, by all means tell me. But I can't. And I think it is rather absurd. So if you still want to cling to the idea of a Chinese Room built using a lookup table, go ahead -- but it is even crazier than the idea of a conscious machine in the first place.
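To make the lookup-table picture concrete, here is a minimal sketch in Python. Everything in it is invented for illustration -- the ChineseRoom class, the table contents, the gap message -- it just shows the mechanism the post describes: every reply the room can ever give must have been authored in advance by the builder, keyed on the entire conversation so far.

```python
# Toy lookup-table "Chinese Room" (illustrative only).
# The builder pre-authors replies keyed on the full conversation history,
# in effect holding the conversation with an imagined visitor in advance.

class ChineseRoom:
    def __init__(self, lookup_table):
        # Maps a tuple of all utterances so far to the room's next reply.
        self.lookup_table = lookup_table
        self.history = ()

    def respond(self, utterance):
        self.history += (utterance,)
        reply = self.lookup_table.get(self.history)
        if reply is None:
            # The builder never imagined this exchange, so the room
            # has nothing to say -- the illusion breaks immediately.
            return "[no entry: the builder never had this conversation]"
        self.history += (reply,)
        return reply


# Two pre-authored exchanges; anything else is a gap.
table = {
    ("你好",): "你好！你想聊什么？",
    ("你好", "你好！你想聊什么？", "你有意识吗？"): "你觉得呢？",
}

room = ChineseRoom(table)
print(room.respond("你好"))          # pre-authored reply
print(room.respond("你有意识吗？"))   # pre-authored reply
print(room.respond("今天天气如何？"))  # gap: this branch was never written
```

Note what the sketch makes visible: the table needs an entry for every conversation prefix the room is supposed to survive, so its size explodes combinatorially with dialogue length. That is the sense in which the builder "has already had the conversation with you" -- and why the post calls the lookup-table room even crazier than a conscious machine.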
 
It's easy. The ultimate test of consciousness: if the A.I. behaves in every way that a conscious being would, then let it read Daniel Dennett's 'Quining Qualia'. If it agrees that something essential in its experience has been left out by Dennett's picture of consciousness, then the A.I. is conscious.:p
That seems awfully close to some of the arguments that are being put forth without the smilie.
 
But I wasn't talking about "a robot constructed with other material than neurons" which would be capable of "telling us quite a bit about consciousness but telling us nothing about neurons".

I was talking about a robot that's not built to be conscious, but can pass for conscious. That tells us very little about consciousness because it's not a property of the robot. That's why it's analogous to the muscle in the leg -- it's an absent element in the model.

It would be very informative about the technology used to engineer the action some other way, but not so much about the engineering of the original.

And no, consciousness is not our overt behavior in response to the world, because we're not consciously aware of a great deal of that. It's a function of our brains. A genuine physical function.

But yeah, it's a trivial notion that model brains can be built (in theory). We don't know what all we'd need to make one yet, but there's no reason they can't be built.


But, as you seemed to imply earlier, the idea of a robot that passes for conscious but which is not conscious seems to be a non-starter. We still have the "problem of other minds", so how could you operationalize a definition for a robot that passes for conscious but is not conscious? That's the old p-zombie argument, and it doesn't seem coherent to me; there seems to be an underlying assumption in the argument that consciousness is somehow separable from the behavior that we see as consciousness.

I don't see how it is even possible. To begin, you would need a clear-cut definition of exactly what consciousness *is*. I have asked people to try to pin this down before but have found few to no takers.
 
So if I simulate an orange with my Turing Machine I can actually eat it and reduce my risk of gout?

Can anyone answer this for me? RD, maybe you can lend your superior intellect to the question.
 
Can anyone answer this for me? RD, maybe you can lend your superior intellect to the question.

This is a bit of a triumph of hope over experience, but what the heck, I'll answer it.

No, you can't eat it; nobody has suggested you can. If you took some vitamin C made in a factory, it would have the same effect, and nobody would claim that it lacked some essential 'orangeyness' that it required to make it work but that they couldn't quite define.
 