Has consciousness been fully explained?

I don't want to spend too much time on it, but I do have to come back to tie up a loose end regarding conscious machines....

From time to time there's a discussion about machines or critters that behave exactly as if they're conscious, but they're not.

The problem is, there are a lot of assumptions built into that, making it a badly formed and unproductive thought experiment.

So instead, let's take a machine and wire him up. We can call him Compy, and we'll wire up a controlling computer to some legs, some arms, some sort of eyes and ears, nose and tongue, a balancer, tactile sensors, even mechanisms that control his internal temperature and the pressure of the fluids used to work his robot parts, and so forth.

Once we've got all that done, we'll program his brain so he can remember and learn from his interactions with the world.
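
To make that concrete, here is a minimal sketch of the kind of control loop I have in mind. Every name in it is hypothetical; it's just a sense-act-remember-learn architecture, not a real robotics API.

```python
# Hypothetical sketch of Compy's control loop: sense, act, remember, learn.
# None of these classes refer to a real robotics library; they only
# illustrate the architecture described above.

import random


class Compy:
    def __init__(self):
        self.memory = []          # episodic record of interactions
        self.action_values = {}   # learned estimates of how well actions work

    def sense(self):
        # Stand-in for cameras, microphones, tactile and balance sensors, etc.
        return {"temperature": random.gauss(37.0, 0.5),
                "obstacle_near": random.random() < 0.2}

    def act(self, percept):
        # Pick the action that has worked best so far in this situation.
        options = ["step_forward", "turn_left", "wait"]
        key = percept["obstacle_near"]
        scored = [(self.action_values.get((key, a), 0.0), a) for a in options]
        return max(scored)[1]

    def learn(self, percept, action, reward):
        # Remember the episode and nudge the value of the chosen action.
        self.memory.append((percept, action, reward))
        key = (percept["obstacle_near"], action)
        old = self.action_values.get(key, 0.0)
        self.action_values[key] = old + 0.1 * (reward - old)


compy = Compy()
for _ in range(100):
    p = compy.sense()
    a = compy.act(p)
    r = -1.0 if (p["obstacle_near"] and a == "step_forward") else 0.1
    compy.learn(p, a, r)
```

Notice that nothing in this loop does anything beyond sensing, acting, and updating stored values, which matters for the next question.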

So there's Compy. Is he conscious?

No, he's not, because we haven't included any mechanism for that.

Just like every other bodily function that happens in 4-D spacetime, consciousness has some sort of physical cause that must be implemented with some sort of hardware in order to occur, and we haven't included any of that, or connected it up right with the computer signals.

It's tempting to forget this last bit, because as we hook Compy's computer up to all these other apparatuses that do what our bodies do, we can see what the hardware/software combo is. But when we come to our brains, we see that the brain doesn't employ any other organ to produce Sofia, so we think: well, there must be no specific physical mechanism then, except what's being used to "run the program", so to speak.

But that would mean that a phenomenon locatable in space and time has no direct physical cause, because there is no spare physical activity over and above what's being used to essentially run a simulation. That violates the known laws of physics, so we reject it.

Instead, we conclude that the brain itself contains sufficient hardware to do it.

Right now, nobody knows how the brain accomplishes consciousness, how it generates Sofia. So we have no idea how to take the final step of completing Compy's brain so that he's conscious.

Which brings us to the next question.... Does Compy behave as if he's conscious?

Almost certainly no.

How do we know that? Well, consciousness is resource-intensive, and it's not a peacock's tail (a mere ornament), so it must be doing something very important for it to have evolved and to be so prominent in such a dominant species as ours (on our level of magnification, admittedly).

We won't know precisely how important until we can define a sure signature of consciousness in the brain and begin drawing conclusions about other species. But given how prominent it is in us, and how we've managed to overtake the planet, it's probably no slouch.

So without it, Compy most likely has very little chance of passing for anything like a normal, sober, conscious person.
 

There is a big error in your logic, though.

It is the same error that supporters of the Chinese Room make.

In short, you don't realize that as the behavior of Compy gets closer and closer to that of a normal human it becomes less and less viable to "copy" rather than "generate."

That is, from a purely mathematical point of view, the chance that Compy would act like a human and *not* share the same mechanics for generating those behaviors is vanishingly small compared to the chance that Compy simply became conscious at some point and that the behaviors are genuine.
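
A back-of-the-envelope calculation makes the point. The numbers below are illustrative assumptions, not measurements, but any remotely realistic choices give the same verdict:

```python
# Rough size of a lookup table covering every possible conversational turn.
# Vocabulary size and exchange length are illustrative assumptions.

vocabulary = 10_000      # distinct words a competent speaker might use
exchange_length = 20     # words in a single conversational turn

entries = vocabulary ** exchange_length          # 10_000**20 == 10**80
print(f"Table entries needed: about 1e{len(str(entries)) - 1}")

# For comparison, the observable universe holds roughly 10**80 atoms,
# so even one table entry per atom barely covers a single turn -- and
# real conversations chain many turns together.
```

And that is one turn; chaining turns multiplies the exponent. That is the sense in which "copying" stops being viable and "generating" takes over.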
 

What the hell are you talking about?

There's very little chance that Compy will come anywhere close to approximating the behavior of a conscious and sober person.
 
If, for example, consciousness evolved to make decisions

I don't understand why you would skip past "consciousness is a type of decision making" and go to "consciousness evolved to make decisions."

This is where the HPC (the Hard Problem of Consciousness) comes from, and why it is actually neither hard nor a problem.

There is no evidence that consciousness is something above and beyond what you are thinking when you are conscious -- by definition.

It's like you are looking at an engine and asking "where does the go come from?" Well, the pistons go up and down and turn this crankshaft, and that turns the wheels, and the thing moves forward. "No, no -- the 'go', where does that come from? Why would a mechanic waste all that energy making the go when they could just make the car move forward without the go?" Huh? The go is the car moving forward, wtf else would the 'go' be?
 

Then what are we arguing about?

Wouldn't it be a stupid claim to say that a system that didn't behave as if it was conscious -- according to some definition -- was in fact conscious?

Nobody is making that claim, as far as I know.

The claim is that if Compy acts like Commander Data, then he is surely conscious like a human. If Compy acts like a toaster, then he is not conscious like a human.
 
Exactly. You cannot simulate human behaviours with a lookup table. The only possible way to do it is with a conscious system.

Searle looks at the Chinese Room and says, Where's the part that understands Chinese? The Room understands Chinese. Then Searle says, suppose the man memorises all those books (as if the Room itself were physically possible, never mind memorising its contents, but let that pass). Then he notes that the man from the Room still doesn't understand Chinese, and yet he can respond to questions written in that language. Where's the part that understands Chinese? The man is now running the system that understands Chinese.

It is astonishing to me that Searle cannot grasp this. It's subtle, but it's not that complicated, and it destroys his position. Well, okay, maybe it's not so astonishing that Searle can't grasp a fact that would destroy his worldview and the core of his academic career. Disappointing, shall we say.

If you assume that consciousness cannot arise from a system, and go poking about looking for the consciousness molecule, you will never find it. You might as well look for the internal combustion elves that make your car go. Asserting that they are subtle, special, immaterial elves doesn't make your case any stronger.
 
Exactly. You cannot simulate human behaviours with a lookup table. The only possible way to do it is with a conscious system.

And I have to reiterate my point that if the blabbering opponents of the computational model and/or functional model would just stop blabbering and actually do some work on the problem they would clearly see this.

Then Searle says, suppose the man memorises all those books (as if the Room itself were physically possible, never mind memorising its contents, but let that pass). Then he notes that the man from the Room still doesn't understand Chinese, and yet he can respond to questions written in that language. Where's the part that understands Chinese? The man is now running the system that understands Chinese.

Now that's [expletive] interesting, man, that's [expletive] interesting.

DISCLAIMER -- R rated language in the below clip
http://www.youtube.com/watch?v=LrLLdNuOESk#t=1m17s

Kind of like a Chinese brain condensed all down into a single person -- the computations that are consciousness being just the substrate for another set of computations! Fascinating.

I have always thought it would be mind-blowing if memes worked in such a way, almost with a life of their own on top of our conscious processes. Snow Crash-worthy stuff there.
 

The questions are:
- what behaviors define consciousness
- what behaviors does Compy exhibit


Now given the limited design of Compy, it depends on what levels of consciousness you wish to define.

Now Compy might well meet the insect criteria of behaviors, but is unlikely to meet avian or mammalian criteria, unless you want to specify how much processing and memory Compy has.

Given a certain level of those, he might rise to the level of small-brained critters with very simple systems (relatively simple, not actually simple at all): reptiles and fish.
 
Exactly. You cannot simulate human behaviours with a lookup table. The only possible way to do it is with a conscious system.

A lookup table alone, no; but say there were another set of instructions that gave rules for dealing with new words and events, and that allowed the operator in the room to devise contextual meanings and associations for the new words and events.

Then the second book also gave instructions for how to amend the original book.

Rather clunky, I know, but a possible trail to follow.
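
Here is a minimal sketch of that two-book idea, just to make it concrete. The phrases and rule contents are invented purely for illustration:

```python
# Hypothetical sketch of the "two book" setup: book one maps inputs to
# replies, book two holds a rule for extending and amending book one.
# All contents are invented for illustration.

book_one = {"ni hao": "ni hao!"}    # phrase -> canned reply

def book_two(phrase, book):
    # Meta-rule: if a phrase is unknown, coin a placeholder reply and
    # amend book one so the phrase is handled next time it appears.
    if phrase not in book:
        book[phrase] = f"qing jieshi '{phrase}'"   # "please explain ..."
    return book[phrase]

print(book_two("ni hao", book_one))    # known phrase: canned reply
print(book_two("zaijian", book_one))   # unknown phrase: book one amended
print(book_one)                        # now contains the new entry
```

Note that book_two itself never changes in this sketch; the rules for amending stay fixed.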

The Chinese Room/giant is, of course, meant to point out how we think about these things. It has many failings.
 
A lookup table alone, no; but say there were another set of instructions that gave rules for dealing with new words and events, and that allowed the operator in the room to devise contextual meanings and associations for the new words and events.

Then the second book also gave instructions for how to amend the original book.
Yep. Except that the second book would also need instructions on how to amend the second book. :)

Rather clunky, I know, but a possible trail to follow.
No, I don't think it's clunky at all. I think it's very apt.

The Chinese Room/giant is, of course, meant to point out how we think about these things. It has many failings.
Yeah, but Searle hasn't learned the moral of his own fable.
 

If consciousness evolved to make decisions, then what's the difference between saying that and saying it's a decision-making process, which would mean that it's a kind of decision making?

What do you think would be the objection to that?

Or do you think that we should only talk about what it does, and never speak of why or how it evolved?

As for "where the go comes from", you might as well say "What makes it move?", the answer to which is precisely the moving around of all those parts.

But if you were to say that the computer makes the car move, and it does so with only enough hardware to run the programming and no other hardware at all, then you'd be making an absurd statement.

Just as "going" is something the car does, consciousness is something the body does. And both motion and consciousness happen in real 4-D spacetime and are caused by physical actions.

But consciousness is not simply equivalent to "making decisions". The brain can do that without the processes that generate this phenomenon of individual awareness.

So no, we can't simply ignore the phenomenon of Sofia. Any complete explanation of consciousness will have to account for that, too.

One problem with IP-only (information-processing-only) pseudo-explanations is that they cannot account for (or even detect) that phenomenon, so they attempt to ignore it instead.
 

Well, this is my point now, isn't it?

To begin the thought experiment with "Suppose we build something that's not conscious but behaves exactly as if it were..." is to make assumptions that invalidate the thought experiment for most purposes.

When we think through the construction, we can see that we don't expect Compy to be conscious or to behave as if he is, which tells us something about the matter.

But for the sport of it, let's say that we do figure out how to create a robot that's not conscious but behaves exactly as if it were conscious. What would this tell us about consciousness?

Not a heckuva lot. It's like if we manage to make an artificial leg without muscles, but to a casual observer it looks and works exactly like a real leg with muscles. What does this tell us about how muscles work?

Not much. It tells us that there are ways to make the same things happen without using a muscle.

So if we made our passing-for-conscious robot somehow, it would only tell us that those behaviors can be achieved without consciousness. Which for now is supposition, but in any case, would not be very enlightening about how the brain generates Sofia.
 
But for the sport of it, let's say that we do figure out how to create a robot that's not conscious but behaves exactly as if it were conscious. What would this tell us about consciousness?

This assumes a priori that consciousness is something above and beyond the behavior of consciousness.

Which you just said you aren't doing, yet here you are doing it!

If a robot behaves exactly as if it were conscious, then it must be conscious.

I don't see why this is so hard for so many people to understand.

By definition, a lookup table cannot pass a Turing test -- you can always change the test to account for the fact that you are dealing with a lookup table and thwart it. This is the same reason that determinism doesn't imply one can know the future ahead of time -- if the future is dependent upon the observer, then by definition it is impossible to ever get ahead of the curve. So there is no way even God himself could program a robot to deal with every possible contingency using a lookup table.
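
As a toy illustration of that "change the test" move: the table below keys its replies on the latest question only. A real table could key on longer histories, but any fixed, finite table can be outrun the same way, since the interrogator picks the probe after seeing how the table behaves. Everything here is invented for illustration:

```python
# A lookup-table "conversationalist" and the probe that thwarts it.

lookup = {
    "hello": "hi there",
    "how are you?": "fine, thanks",
}

history = []

def table_bot(question):
    history.append(question)
    return lookup.get(question, "interesting, tell me more")

print(table_bot("hello"))           # canned reply
print(table_bot("how are you?"))    # canned reply

# Now ask something whose answer depends on the whole exchange so far.
# No fixed table can have precomputed every such probe in advance.
probe = f"repeat, in order, the {len(history)} things I've asked you"
print(table_bot(probe))             # canned evasion -- the table is thwarted
```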

The only way to get conscious behavior is via consciousness itself.
 
rocketdodger said:
But for the sport of it, let's say that we do figure out how to create a robot that's not conscious but behaves exactly as if it were conscious. What would this tell us about consciousness?

This assumes a priori that consciousness is something above and beyond the behavior of consciousness.


I presume that was why he said, at the beginning of that post:
To begin the thought experiment with "Suppose we build something that's not conscious but behaves exactly as if it were..." is to make assumptions that invalidate the thought experiment for most purposes.
 
But for the sport of it, let's say that we do figure out how to create a robot that's not conscious but behaves exactly as if it were conscious. What would this tell us about consciousness?
How would you know? More than that, what does that even mean? Does Compy act conscious, discuss its private feelings and beliefs and thoughts, but lack, what, the consciousness molecule?

I don't understand.
 
Piggy, the problem you are having is that you think the way out is to look at the experiment exclusively from the point of view of what the 'bot would need to be experiencing in order to be conscious.

This is wishful thinking, in that it is really a disguised belief that we can tell others' thoughts without observing any actual behavior that would indicate those thoughts. Something this forum is designed to trash.

I think Jaron Lanier has a better argument.

He reminds us that the Turing test cuts both ways.

If we lower our standards enough as to what we might consider conscious behavior, we might have to accept that a 'bot running an algorithm is actually conscious.

Relying on technology, through which we are increasingly experiencing another person's consciousness, may reduce our standards as to what a conscious human really might behave like physically in front of us.

Jaron's solution is a call for thinking humans to enjoy physical interactions more than the techno-glasses ones, and in this way keep our standards high.

At least this way we will always keep the computationalists busy perfecting technology instead of stalling progress for a quick buck... by persuading us to buy into the math that says pen and paper is a form of consciousness.
 


I think your analogy is a bit off, Piggy. Creating a leg that functions just like a leg but without muscles tells us plenty about how a leg moves -- which is what we would be after in such a situation.

That it wouldn't tell us much about muscles is analogous to a robot constructed with material other than neurons telling us quite a bit about consciousness but nothing about neurons. We know it won't inform us much about neurons, but that is not what we are after. We are after an explanation for consciousness. If we can see how a robot is constructed to produce that type of behavior, we can get a good idea how other types of networks might be constructed to do it. Sure, it wouldn't be exactly how a brain does it, but that's why it is an analogy.

I'm not sure how one could possibly separate behavior that is indistinguishable from conscious behavior and call it not-conscious. Consciousness is that behavior, it is an action.

That it can be carried out by neurons or silicon chips is the trivial bit.
 