Has consciousness been fully explained?

Status
Not open for further replies.
John Searle? Chinese Room John Searle?

The man's an idiot.

Wow! There is a big difference between being wrong and being an idiot. Typically, idiots don't become Rhodes Scholars and don't have entire fields of research directed at proving their argument wrong.
 
To be fair I think that the field of Strong AI has aims and motivations other than proving John Searle wrong.

In fact I think John Searle was critiquing the already existent field of Strong AI.
 
Actually I think it would be devilishly simple in the circumstances. Where would even the simplest English sentence come from if the thing you were modelling was the interactions of the components of a human brain's architecture?
The interface translating the simulated neural impulses into words.
But you did say that we would not know if we were such a simulation, which suggests that you believed that your own conscious experience could be from a simulation.
Yes. I don't automatically limit simulations to computers running instructions. They are overlapping sets; neither is a subset of the other.
All good points.
Thanks. You ask good thought-provoking questions yourself.
I don't assume that it could,
Too bad. I am intrigued by the question of whether it is the computer or the set of instructions that would be conscious.
I don't expect to see a simulation of a complete human brain and associated virtual environment any time soon, not even in my lifetime.

But I might see a complete simulation of a mouse. If we could see a simulation of a mouse that behaved like a mouse then I would tend more toward computationalism.

Yes, that would support the idea.
 
But your answer does not make sense in this context.

It is obviously not programmed to mimic human behaviours as you suggested - so any behaviours that the simulated human exhibited would have to be a consequence of the interactions in the neural architecture.
And there we part ways. Heuristic learning networks based on our current understanding of things have been programmed as surely as if hard code were running. And yeah, I know, but, but, that's how human brains are "programmed".

No. We have no frikking idea how human brains are "programmed", or if that's even the right term, GEB, be damned.

So why would it exhibit only "limited" behaviours? What are the limits?
Real time considerations first come to mind. If the ball hits it between the eyes before the hand comes up to catch it, fail.

Would it be able to talk? Would it be able to pass a primary school comprehension test?
Most likely.

Would it argue philosophy and claim to be conscious?
Now, now. Do you also have 'an entity' in mind?

But in a few hundred years, maybe. That doesn't mean it actually is conscious.
 
To be fair I think that the field of Strong AI has aims and motivations other than proving John Searle wrong.

In fact I think John Searle was critiquing the already existent field of Strong AI.

Searle coined the term Strong AI and he was critiquing cognitive science. But what I meant was that there is an extensive literature both in philosophy and cognitive science devoted to proving Searle wrong. I can pretty much guarantee that none of the people that hold the views that Pixy thinks he/she is supporting would say that Searle is an idiot.
 
So it could not, even in principle, be done on a computer that runs a program consisting of a set of instructions?

I said that the Turing model wasn't appropriate. Computers haven't been built to the Turing model for many, many years now. The Turing model is a very useful way to think about some programs - but it's not a useful way to think about computer machinery. Back in the early days of MSDOS, personal computers tended to follow a model broadly similar to Turing - but very quickly they began to need a real time capacity. That's why Microsoft copied VMS from DEC. If you want to post to JREF while playing music and watching YouTube, you need real-time interrupts built in.

No matter how fast the computer, no matter how much memory, no matter how detailed the model - it could not model a human brain interacting with an environment so as to produce human-like behaviour?

"So you're saying that no matter how loud I shout, he can't understand me? What if I were to get a PA?" "No, he doesn't understand English, no matter how loud you shout."

A computer which emulates a real-time response must have a real-time response built in as an integral part of the system. Luckily we know how to do this. For some reason, though, this hasn't percolated through to the people pushing Strong AI.
 
I said that the Turing model wasn't appropriate. Computers haven't been built to the Turing model for many, many years now. The Turing model is a very useful way to think about some programs - but it's not a useful way to think about computer machinery. Back in the early days of MSDOS, personal computers tended to follow a model broadly similar to Turing - but very quickly they began to need a real time capacity. That's why Microsoft copied VMS from DEC. If you want to post to JREF while playing music and watching YouTube, you need real-time interrupts built in.
Why do you think preemptive multitasking software is not Turing equivalent?
"So you're saying that no matter how loud I shout, he can't understand me? What if I were to get a PA?" "No, he doesn't understand English, no matter how loud you shout."
I don't understand. Is this more humour? What does it have to do with what I said?
A computer which emulates a real-time response must have a real-time response built in as an integral part of the system. Luckily we know how to do this. For some reason, though, this hasn't percolated through to the people pushing Strong AI.
I really don't understand what you are saying.

If a computer based on an instruction set, and yes with preemptive multitasking, real time interrupts etc - were to run the type of simulation I was talking about and the computer was sufficiently powerful - could the modelled human exhibit human like behaviours?
 
And there we part ways. Heuristic learning networks based on our current understanding of things have been programmed as surely as if hard code were running. And yeah, I know, but, but, that's how human brains are "programmed".

No. We have no frikking idea how human brains are "programmed", or if that's even the right term, GEB, be damned.
I am not sure what you mean by all that. The words you choose to put in my mouth are not even remotely what I would say.

The model I am talking about would model the neuronal architecture as it is in a human brain. So by definition the model would be programmed the same way a human brain is, whatever that is.
Real time considerations first come to mind. If the ball hits it between the eyes before the hand comes up to catch it, fail.
That would seem to be as much a problem for a real human as for a simulated one.
Most likely.


Now, now. Do you also have 'an entity' in mind?

But in a few hundred years, maybe. That doesn't mean it actually is conscious.
But if a computer model of the neural architecture of the human brain (in a few hundred years) could produce a virtual being arguing "Of course I am conscious, what a stupid question", but not actually being conscious wouldn't that imply that when I say "Of course I am conscious, what a stupid question", it is not because I am conscious - it is because the physics of my brain is making me say that?
 
I am not sure what you mean by all that. The words you choose to put in my mouth are not even remotely what I would say.

The model I am talking about would model the neuronal architecture as it is in a human brain. So by definition the model would be programmed the same way a human brain is, whatever that is.
And I'd say neuronal architecture is a start. We don't even have a clue what also needs to be modelled. Nor do we have a clue what actually goes on in those neurons (I speak of 'be precise' here) as human brains are 'programmed' -- or if that's even the right term for it.

That would seem to be as much a problem for a real human as for a simulated one.
You and I are proof real ones manage.

But if a computer model of the neural architecture of the human brain (in a few hundred years) could produce a virtual being arguing "Of course I am conscious, what a stupid question", but not actually being conscious wouldn't that imply that when I say "Of course I am conscious, what a stupid question", it is not because I am conscious - it is because the physics of my brain is making me say that?
Neuronal architecture plus whatever else we learn of in a few hundred or thousand years, that needs to be modelled, basically including all the i/o of a human body, and you think it'll say it's conscious. When I see it saying that I'll believe your scenario has merit; but nope, I still won't agree it is.

I can't comment on your consciousness, only on mine, but I suspect you are human and I presume conscious. We will continue to be so only as long as the physics of our brains and bodies continues to operate as needed for that consciousness to occur.
 
And I'd say neuronal architecture is a start. We don't even have a clue what also needs to be modelled. Nor do we have a clue what actually goes on in those neurons (I speak of 'be precise' here) as human brains are 'programmed' -- or if that's even the right term for it.
As I said, I don't expect to see it in my lifetime. A mouse - well maybe.
You and I are proof real ones manage.
Nah, can't catch a ball to save my life. Slow processing you see.
Neuronal architecture plus whatever else we learn of in a few hundred or thousand years, that needs to be modelled, basically including all the i/o of a human body, and you think it'll say it's conscious. When I see it saying that I'll believe your scenario has merit; but nope, I still won't agree it is.
Fair enough - I think the thought experiment has run its course.
 
Why do you think preemptive multitasking software is not Turing equivalent?

Because a time-dependent response to an external stimulus is simply not part of the Turing model. The Turing model is very simple. That's one reason why it's so useful. It's also why its application should be very restricted.


I don't understand. Is this more humour? What does it have to do with what I said?

I really don't understand what you are saying.

I'm saying that if a computer doesn't have a particular facility, then a bigger, faster computer won't have it either.

It might be that, in practice, the implementation of the Turing machine has sufficient real time capacity built in. That simply muddies the waters. If the contention is that any Turing machine, on any hardware implementation, can totally model everything that the brain does - considering how certain implementations might accidentally do it is obviously not the right approach. The right way is to consider what is theoretically necessary.

If a computer based on an instruction set, and yes with preemptive multitasking, real time interrupts etc - were to run the type of simulation I was talking about and the computer was sufficiently powerful - could the modelled human exhibit human like behaviours?

Nobody knows. We know that if it doesn't have the facility of real time response that it cannot exhibit human like behaviours.
 
Sorry, Piggy, you have to. Your questions have been answered; you clearly need more background to understand the answers. I'm pointing you to the very best resources to provide that knowledge.

Dude, that never flies around here.

And no, my questions have not been answered, or I wouldn't have to keep asking them.

More to come....
 
Because a time-dependent response to an external stimulus is simply not part of the Turing model. The Turing model is very simple. That's one reason why it's so useful. It's also why its application should be very restricted.
An O/S with pre-emptive multitasking can be written in the same programming language, and run on the same platforms, as a non-preemptive multitasking O/S.

If non-preemptive multitasking O/S's are Turing equivalent, then so are the pre-emptive multitasking O/S's.
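The point being made here - that pre-emptive switching is itself ordinary, deterministic computation - can be sketched in a few lines of Python. This is a toy illustration, not a real O/S scheduler: the "processes" are generators and the "timer interrupt" is a fixed step budget.

```python
# Toy sketch: "pre-emption" expressed as plain sequential code.
# Each process is a generator; the scheduler forcibly switches
# between them after a fixed quantum, the way a pre-emptive O/S
# switches on a timer interrupt. Every step is deterministic
# instruction-following, which is all a Turing machine needs.

def counter(name, limit):
    for i in range(limit):
        yield f"{name}:{i}"   # each yield stands in for one "instruction"

def preemptive_schedule(processes, quantum=2):
    """Round-robin scheduling with a fixed time slice (quantum)."""
    trace = []
    queue = list(processes)
    while queue:
        proc = queue.pop(0)
        for _ in range(quantum):      # run until the "timer" fires
            try:
                trace.append(next(proc))
            except StopIteration:
                break                 # process finished: don't requeue
        else:
            queue.append(proc)        # pre-empted mid-run: requeue it
    return trace

trace = preemptive_schedule([counter("A", 3), counter("B", 3)])
print(trace)  # interleaved: ['A:0', 'A:1', 'B:0', 'B:1', 'A:2', 'B:2']
```

The interleaving produced by the "interrupts" is computed by the same kind of step-by-step program as anything else, which is the sense in which pre-emptive systems remain Turing equivalent.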
I'm saying that if a computer doesn't have a particular facility, then a bigger, faster computer won't have it either.

It might be that, in practice, the implementation of the Turing machine has sufficient real time capacity built in. That simply muddies the waters. If the contention is that any Turing machine, on any hardware implementation, can totally model everything that the brain does - considering how certain implementations might accidentally do it is obviously not the right approach. The right way is to consider what is theoretically necessary.
You appear to have missed the point of what I was saying.

And along the line you appear to be saying that there are computers that are not Turing equivalent. Yes?
Nobody knows. We know that if it doesn't have the facility of real time response that it cannot exhibit human like behaviours.
So you keep saying. Can we see the thinking behind the statement?

Could a computer model the behaviour of a single molecule if it was not in real time?
 
westprog said:
I said that the Turing model wasn't appropriate. Computers haven't been built to the Turing model for many, many years now. The Turing model is a very useful way to think about some programs - but it's not a useful way to think about computer machinery. Back in the early days of MSDOS, personal computers tended to follow a model broadly similar to Turing - but very quickly they began to need a real time capacity. That's why Microsoft copied VMS from DEC. If you want to post to JREF while playing music and watching YouTube, you need real-time interrupts built in.
This from Wolfram Mathworld

The Church-Turing thesis (formerly commonly known simply as Church's thesis) says that any real-world computation can be translated into an equivalent computation involving a Turing machine. In Church's original formulation (Church 1935, 1936), the thesis says that real-world calculation can be done using the lambda calculus, which is equivalent to using general recursive functions.

The Church-Turing thesis encompasses more kinds of computations than those originally envisioned, such as those involving cellular automata, combinators, register machines, and substitution systems. It also applies to other kinds of computations found in theoretical computer science such as quantum computing and probabilistic computing.

There are conflicting points of view about the Church-Turing thesis. One says that it can be proven, and the other says that it serves as a definition for computation. There has never been a proof, but the evidence for its validity comes from the fact that every realistic model of computation, yet discovered, has been shown to be equivalent. If there were a device which could answer questions beyond those that a Turing machine can answer, then it would be called an oracle.
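To make the thesis concrete, here is a minimal Turing machine simulator in Python. Everything here is illustrative: the rules table defines a toy machine that inverts a bit string and halts, and the function names are my own.

```python
# A minimal Turing machine simulator. The transition table maps
# (state, symbol) -> (symbol to write, head move, next state).
# The tape grows to the right with blanks ("_") as needed.

def run_tm(tape, rules, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head] if head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Toy machine: invert each bit while moving right; halt on blank.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm("10110", invert))  # -> 01001
```

Per the thesis quoted above, any real-world computation could in principle be re-expressed as some (usually enormous) rules table for a machine of this kind.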
 
In the brain.


Neurons. Lots of them.


Self-referential information processing.

This is exactly what I'm talking about.

Q: What makes a car run?

A: Metal parts. Lots of them. In the engine.

Now, that answer happens to be accurate. Incredibly imprecise, but accurate.

Yet if we didn't already know the real answer, we would have no way of judging this particular proposition because it does not distinguish between any action of metal parts which does not cause a car to run, and the actions of metal parts which do cause a car to run.

In other words, the only reason we can say that this answer is accurate but imprecise is because we do know much much more than that.

Your answer, however, raises all sorts of red flags.

So far, you've provided no evidence -- besides references to tomes and lectures which you provide no summary of -- to support your contention that IP alone is a sufficient explanation.

(That word "alone" is the key.)

And on the face of it, the contention makes no sense, unless there's some sort of explanation to be had.

Consciousness is a bodily function. A behavior.

A unique one, yes, but still a bodily function.

And we know of no bodily function that can be accomplished in the way you're describing. So the exception has to be justified in some way.

So if you can provide a thumbnail of how this actually happens, then we can move on.

The reason we're stuck here is precisely because you simply continue to make this strange assertion with no evidence. ("Go read a 400 page book" or "Go take a college lecture" is not evidence.)

Let's take the example of some other bodily functions involving the brain: regulating body temperature and heartbeat.

A casual observer can't witness the brain doing these things in another person. But we can use measurement instruments to check the outcomes of the process, to track heartbeat and body temperature, and to peer inside the brain as best we can to see what's going on in there while this is happening.

Similarly, we can use instruments to see some of what the brain is doing when it does consciousness.

We know the brain is doing it. No doubt.

So here is a bodily function requiring brain activity.

As I've said, and you seem to agree, the firing of neurons leads to the firing of neurons.

You then jump to the conclusion that consciousness = the firing of neurons, but without any clear description of the precise process by which that allegedly happens.

For every other bodily function, in order for the overt behavior to take place -- blinking, shivering, regulating the heartbeat, regulating temperature, running, focusing light on the retina -- the firing of neurons has to be coupled with some sort of executive mechanism of another type.

"Running the logic" alone, with just enough mechanism to do that and no more, cannot accomplish any of these things by itself.

Now, we should not allow ourselves to make the mistake of thinking that consciousing is not a behavior, just because we haven't yet cracked the mechanism (which, apparently alone among bodily functions, appears to be handled entirely by the brain).

[Of course, this is discounting non-explanatory "explanations" consisting only of vague generalizations without any step-by-step mechanism spelled out, such as "SRIP" or "lots of neurons", which are not only hopelessly inadequate but also cannot distinguish between brain functions that are not involved in the function of consciousness and those which are -- if it can't do that, it's not an explanation of the process.]

Clearly, Sofia is a bodily function. When I'm dreaming, it's operating. When I stop dreaming, it shuts off. When I wake up, it starts again.

It's something our bodies do.

Meanwhile, all this time, the brain is always engaged in SRIP, neurons are always firing.

So clearly, irrefutably, pointing to SRIP and the firing of neurons by itself is not an explanation of the mechanism of consciousness because those things are going on whether the body is doing consciousness or not.

QED that is an insufficient explanation.

The question remains: what mechanism actually instantiates the behavior? What is the analog to the various mechanisms, above and beyond the classic chain-reaction firing of neurons, to the mechanisms that focus light on the retina, make an arm move, slow the heart down, raise goosebumps, and keep our body temp in a safe range?

If you ignore that mechanism -- whatever it may turn out to be -- then you require a ghost in the machine, unless you can provide a clear, step-by-step explanation of exactly what is going on (and where) in the generation of Sofia.
 
So consciousness is the firing of neurons - in specific patterns, but still just the firing of neurons.

The end.

This is not something that can be asserted unless you can describe precisely how it's done.

Of course consciousness relies on the firing of neurons. There's no doubt about that.

But as far as we know, there are no bodily functions which can be accomplished by IP alone, without some sort of mechanism to make the function occur.

Regulating heartbeat, for example, is not the result of IP alone. Like all bodily functions -- just like all overt behavior by machines that use computers, such as playing CDs, printing charts, or displaying graphics -- regulating the heartbeat requires the logic and some mechanism to make the behavior happen.

Consciousness is behavior.

Unless you can explain precisely how IP, in this case, manages to generate an actual performance by itself when we know of no other circumstances in which that occurs, then it seems you are simply omitting the necessary mechanism because we haven't yet figured it out.
 
You are looking for a mechanism that is not required, to explain behaviours that don't exist - that you have never even described.

Of course the behaviors exist.

Consider yourself going to sleep at night, dreaming, not dreaming, waking up the next morning.

When you're asleep and dreaming, the behavior is happening.

When you're asleep and not dreaming, it's not happening.

When you wake up again, it's happening.

There is absolutely no denying that.

And if you say no mechanism is required for this behavior -- although we need a mechanism for every single other behavior -- then you're going to have to explain with some precision what is going on.
 
Actually, I really don't get the time thing.

If I was modelling a wave then I wouldn't say that the wave had failed to exhibit wave behaviour because it was not in real time. Time is part of the model.

If I was modelling plant growth I would not say that it failed to exhibit the correct behaviour because it was not in real time.

So why is the model of the brain different in this respect?

Would the same go for a mouse brain model?

An ant model?

A tapeworm?
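The wave example can be sketched directly. In this toy Python simulation (all names illustrative), time is a model variable `t` advanced by the program itself; whether the loop takes a microsecond or a month of wall-clock time to run, the computed waveform is identical.

```python
import math

# Toy sketch: a simulation in which time is part of the model.
# Model time t advances by dt per step, independent of how long
# each step takes on the wall clock.

def simulate_wave(freq_hz, duration_s, dt):
    t, samples = 0.0, []
    while t < duration_s:
        samples.append(math.sin(2 * math.pi * freq_hz * t))
        t += dt   # advance model time, not wall-clock time
    return samples

# One second of a 1 Hz sine wave, sampled every 0.25 model-seconds.
fast = simulate_wave(1.0, 1.0, 0.25)
print(fast[:2])  # the t=0 and t=0.25 samples
```

Running the same call twice, or on a machine a thousand times slower, yields exactly the same list of samples - which is the sense in which the model's correctness does not depend on real time.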
 
As I've said, Hofstadter devotes 800 pages to explaining from first principles how this is possible in Godel, Escher, Bach; and Wolfe explains how this maps to modern psychology in his lecture series. They both cover a lot of ground, but every inch of it is illuminating. I can't cover all that ground in this edit box, and I can't cover it as well as they do in any case.

But if you understand it, you can surely provide me a thumbnail.

Go ahead, hit me with it.

I have no problem with thumbnail explanations of time dilation, black holes, freaky aspects of QM, big bang theory, and other at-first-counterintuitive theories and phenomena.

I don't see why consciousness should be any different.

You know, I used to think Dennett had a pretty good explanation: Build brain A, then build brain B to live inside brain A. Until I realized that he hadn't actually explained anything, because he didn't provide any mechanism for why this arrangement would result specifically in the phenomenon of consciousness rather than non-conscious behavior.
 
Now are you saying that the executive is a volitional component? Or that it represents another level of neural networking?

Neither. See my posts above.

Consciousness is certainly the weirdest bodily function we know of, and it appears to be different in some profound ways from all the others.

But nevertheless, this does not give us free rein to cast aside the basics of biology.

The brain's neurons can fire all they want, but if something other than that is going to happen, then some other mechanism must be involved.

We know the brain alone generates consciousness.

But so far, we're at a loss to explain how.

It's no use simply chalking it up to vague yet insufficient non-explanations like SRIP, or neurons, or parallel processing. Why? Because all of this goes on in brain functions that have nothing to do with conscious awareness.

We just don't know what is going on to make this stubbornly confounding bodily function occur.

The study I cited upthread, which made use of deep brain implants, finds that a "signature" of consciousness is the simultaneous activation of four different types of waves spanning the space of the brain.

Yes, neurons are involved in that, but it is not the classic toe-to-heel, chain reaction firing that we usually think of when we imagine neural activity.

And the exciting thing about this discovery is that, at last, we have an activity of the organ which is correlated with consciousing, but not with other activities of the brain.

Whatever the mechanism turns out to be, it will have to meet that criterion if it is to have any explanatory power.

SRIP, neural activity in general, and parallel processing in general do not meet that criterion.

There's no doubt that machines can, in theory, be built which will also do consciousness, just as our bodies do consciousness.

But it will not involve "running the logic" alone.

For the behavior to happen, there must be the logic (which, again, is an abstraction for what's actually happening physically) combined with an executive mechanism of some sort to produce actual behavior.
 