• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you still see problems, let me know.

Has consciousness been fully explained?

Status
Not open for further replies.
I'm not sure I want to get back into this, but I might as well try to keep up to speed.
What do you mean by a neural correlate of consciousness?

An NCC is what the brain is doing neurologically during a conscious experience.

For instance, different sets of neurons (some distinct, some overlapping) are activated when we imagine ourselves in a situation and when we imagine others in the same situation.

Gazzaniga's point is that even if we were to discover tomorrow what the entire gamut of NCCs was for all possible conscious experiences, we still wouldn't know why any given neural state was associated with any given conscious experience.
 
Secondly, if we did understand exactly which sequence of brain states led to every conceivable conscious feeling then of course we'd understand consciousness in exactly the same way we understand everything else. Or am I missing something?

No, there's an important difference between understanding that A and B are correlated, and understanding why they are correlated.

For example, people have always known that when viewing an action at a considerable distance, the sound lags behind what you can see, but we weren't able to understand precisely why until relatively recently.
 
That's because you don't know what you're looking for.

The "computational model" is merely the idea that consciousness comes from the behavior of our neurons, and the behavior alone.

People call it the "computational" model because it turns out the behavior of our neurons reduces to computation.

So yeah, everything you have been reading (that is credible) is support for the computational model.
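To make the "neurons reduce to computation" claim concrete, here's a minimal leaky integrate-and-fire neuron - the standard toy model of neural signalling. This is only a sketch: the parameter values are illustrative, not biological measurements, and real neurons are far messier.

```python
# A minimal leaky integrate-and-fire neuron, as a sketch of the claim
# that a neuron's signalling behaviour can be captured by computation.
# All parameter values here are illustrative, not biological measurements.

def simulate_lif(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Return the spike train (0/1 per time step) for a stream of inputs."""
    v = 0.0          # membrane potential
    spikes = []
    for i in input_current:
        v = leak * v + i          # integrate the input, with leak
        if v >= threshold:        # fire when the threshold is crossed...
            spikes.append(1)
            v = reset             # ...then reset the potential
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input accumulates until the neuron fires periodically.
print(simulate_lif([0.3] * 10))
```

The point isn't that this model is accurate - it's that the input/output behaviour is a function, and functions can be computed on any substrate.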

The notion that consciousness comes from the behavior of neurons is the physicalist or biological model. If that's the same as a computational model, then fine.

However, as it's been elaborated on this forum, with all the baggage... that is something that I do not find in the literature on the brain.
 
I have to ask -- if the computational model isn't the prevailing one, insofar as anyone actually claims to subscribe to a "model" in the first place, then what is?

There isn't one. We don't yet know how the brain performs the behavior. There is, as of now, nothing to be had which could be called a model of consciousness at all.
 
I could simulate the physical characteristics of an automobile in order to study the effects of impact on new designs. But my simulation would give no indication of how the car drives. Similarly, it sounds as though the IBM project is not *meant* to create AI, so it's not surprising that it won't. In any event, Markram's comments about *this* particular simulation say nothing about the possibility of creating a thinking machine.

That's right, it's not intended to, and it says nothing about the possibility of creating a conscious machine or how one might go about designing it.

That was my point. I was addressing specifically the claims of those who believe that a machine simulating a brain in minute detail would itself be a conscious machine.
 
It may say that elsewhere in the book you are reading, but this quote you've posted says nothing of the sort. It only says that the simulation IBM is working on will not create consciousness.

Yes it does, actually, since the simulation they're working on is in fact an attempt to build a simulated brain down to the neuron / neural column level.

Again, that doesn't imply that conscious machines are impossible, only that machines running digital simulations do not, at some magic threshold of detail, begin behaving like the systems they're simulating.
 
Gazzaniga isn't adding anything new to the discussion. It's always been an explanatory gap. My personal take on it is that the gap may remain forever more. I can live with that. To me, that gap doesn't change the possibility of machine consciousness.

That's true, I consider Gazzaniga's views on that point to be quite mainstream. Which was my point, after all.

And again, I'm certainly not arguing against the possibility of conscious machines. Never have.
 
But this analogy holds for *any* phenomenon we don't understand. What's it adding to the discussion?

Well, I've cut back in after some time away, so it might add nothing to the discussion at this particular moment.

But there are those who would like to dismiss the so-called hard problem as a non-problem. Clearly, it is a very significant problem.
 
I don't think Pixy believes that the phenomenon is understood, not completely, anyway. I think that his claim is that the basic mechanism is simple to describe.
Yes, what I am saying - and I'm far from alone here - is that consciousness in and of itself is actually dead simple and fully understood. What is complex and hard to understand is all the other stuff that brains do - sensory processing and language and memory association and so on - and that it's a huge mistake to try to label this stuff as "consciousness".
 
This is all perfectly reasonable and not in the least surprising nor damning to the case of computationalists.
Actually, as written it's subtly but profoundly wrong.

How humans think is a high level of abstraction that is provably unnecessary for constructing a machine that thinks like a human.

You need to know how the brain operates, i.e. the low-level mechanical stuff. In detail, of course. Which is what Hawkins was saying in the previous quoted paragraph - that building models that do not accurately simulate the biology of the brain does not necessarily tell us much about how the mind works.

Now that I'm fine with. It is of course possible to create an intelligent machine that doesn't resemble the biological structures of the brain at all, but if you're studying how human brains do what they do, then you might as well do it right.
 
By way of comparison, imagine consciousness itself as a single-cylinder two-stroke internal combustion engine, and the human brain as an entire Bugatti Veyron.
 
I could simulate the physical characteristics of an automobile in order to study the effects of impact on new designs. But my simulation would give no indication of how the car drives. Similarly, it sounds as though the IBM project is not *meant* to create AI, so it's not surprising that it won't. In any event, Markram's comments about *this* particular simulation say nothing about the possibility of creating a thinking machine.
Just thought I'd toss in this quote from the Wikipedia page on the Blue Brain project:

The Wikipedia page on the Blue Brain project said:
"It is not impossible to build a human brain and we can do it in 10 years," Henry Markram, director of the Blue Brain Project said in 2009 at the TED conference in Oxford.[4] In a BBC World Service interview he said: "If we build it correctly it should speak and have an intelligence and behave very much as a human does."[4]
How about that.
 
Consciousness is quite simple really.
I remember watching an ant carrying a dead lacewing across a patio for a while.
The ant was clearly conscious as it was dealing with the breeze blowing it back a few feet each time it made progress. After numerous attempts it realised that by hooking its feet along the edge of the tile it would be less likely to be blown back.
It had already realised it was more fruitful to drag it rather than carry it over its head.
On reaching a wall, after continually falling off, it realised it should leave the ant trail, follow the bottom of the wall, and rejoin the trail further on. At one point, when the fly became stuck in a crevice, it even left it there, returned to the ant trail, and brought back another ant; together they freed the fly and successfully returned to the trail. Helped by more ants, it wasn't long before the fly was taken underground into the nest.
Sounds like an intelligent, conscious being to me.

I doubt it was fully self-conscious, or that there was a complicated computation going on in its head.
 
Piggy said:
Secondly, if we did understand exactly which sequence of brain states led to every conceivable conscious feeling then of course we'd understand consciousness in exactly the same way we understand everything else. Or am I missing something?

No, there's an important difference between understanding that A and B are correlated, and understanding why they are correlated.

For example, people have always known that when viewing an action at a considerable distance, the sound lags behind what you can see, but we weren't able to understand precisely why until relatively recently.

That's an excellent example, until a few hundred years ago we didn't know that light travelled faster than sound because we couldn't measure it.

However, we can certainly measure all the relevant things about a neuron - what it's made of, how it works, etc. And as Pixy has pointed out, the quantum stuff we can't measure isn't robust enough to be involved in consciousness.

So what do you postulate the missing bit is? If consciousness isn't just an emergent property of a bunch of neurons squished into a skull, what is the missing thing that provides the explanation?
And if there isn't a missing bit, what wouldn't we understand if we knew what the neural correlate of every conscious state was?
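As an aside, the lightning/thunder lag mentioned in this exchange is easy to quantify with a back-of-the-envelope calculation. The speeds below are standard textbook values (sound at sea level, roughly 20 °C):

```python
# Rough speeds; light's travel time is utterly negligible at these distances.
SPEED_OF_SOUND_M_S = 343.0
SPEED_OF_LIGHT_M_S = 299_792_458.0

def thunder_lag_seconds(distance_m):
    """Seconds between seeing a lightning flash and hearing its thunder."""
    return distance_m / SPEED_OF_SOUND_M_S - distance_m / SPEED_OF_LIGHT_M_S

# A storm 1 km away: the flash arrives almost instantly, the thunder ~3 s later.
print(round(thunder_lag_seconds(1000.0), 2))
```

This is why the old "count the seconds and divide by three" rule gives the storm's distance in kilometres - the correlation was always observable, even before the underlying speeds could be measured.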
 
The notion that consciousness comes from the behavior of neurons is the physicalist or biological model. If that's the same as a computational model, then fine.

However, as it's been elaborated on this forum, with all the baggage... that is something that I do not find in the literature on the brain.

The computational model is not the same as the physicalist model. The essence of the computational model is that there is no physical behaviour associated with the neurons that is required for consciousness to exist, and that any physical process which follows the same computational model will produce consciousness in exactly the same way.

So it doesn't matter whether the "program" of the brain is carried out by neurons, transistors, cards or rocks laid out on a beach, the result will be exactly the same in terms of producing consciousness.

It's implicit in the computational view that consciousness is not something physical, like electromagnetism - because clearly it's not possible to substitute, say, neutrons for electrons and get the same physical effect. It's something else - like addition, for example.

It's quite important to distinguish between the physicalist and computational viewpoints. It's often claimed that proponents of physicalism - the idea that something physical that happens in the neurons creates consciousness - are denying that artificial consciousness is possible. In fact, the possibility of artificial consciousness is implicit in physicalism.

This is the essence of the disagreement which has been going on for some time now.
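The "like addition" point above can be made concrete. This is a toy illustration of multiple realisability: the same abstract operation can be realised by very different mechanisms, and the result is identical regardless of the implementation.

```python
# Two very different realisations of the same abstract operation (addition).
# The computational view says the operation is what matters, not the mechanism.

def add_by_counting(a, b):
    """Add by repeated increment - like moving rocks one at a time."""
    for _ in range(b):
        a += 1
    return a

def add_by_logic(a, b):
    """Add with bitwise half-adder logic - like transistor switching."""
    while b:
        carry = a & b   # bits that overflow
        a ^= b          # carry-free sum
        b = carry << 1  # propagate the carry
    return a

print(add_by_counting(19, 23) == add_by_logic(19, 23))  # same answer either way
```

Whether anything analogous holds for consciousness is, of course, exactly what the two sides of this thread dispute.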
 
That's an excellent example, until a few hundred years ago we didn't know that light travelled faster than sound because we couldn't measure it.

In fact, thunder and lightning enabled people to work out that light was faster than sound for a very long time - though whether they did figure it out I don't know.

However we can certainly measure all the relevant things about a neuron, what it's made of and how it works, etc. And as Pixy has pointed out, the quantum stuff we can't measure isn't robust enough to be involved in consciousness.

How do we know in advance what the relevant things are? The brain certainly stops functioning, sometimes permanently, when any number of things are disabled. Can we say for certain that the various wave functions of the brain aren't relevant for consciousness, for example?

So what do you postulate the missing bit is? If consciousness isn't just an emergent property of a bunch of neurons squished into a skull, what is the missing thing that provides the explanation?
And if there isn't a missing bit, what wouldn't we understand if we knew what the neural correlate of every conscious state was?

We wouldn't understand how to create a conscious mind.
 
The computational model is not the same as the physicalist model. The essence of the computational model is that there is no physical behaviour associated with the neurons that is required for consciousness to exist, and that any physical process which follows the same computational model will produce consciousness in exactly the same way.
The wording there is a bit weird, but yes. Neurons aren't special, it's the computation that matters.

So it doesn't matter whether the "program" of the brain is carried out by neurons, transistors, cards or rocks laid out on a beach, the result will be exactly the same in terms of producing consciousness.
Cards and rocks laid out on a beach are not switching elements.

It's implicit in the computational view that consciousness is not something physical, like electromagnetism - because clearly it's not possible to substitute, say, neutrons for electrons and get the same physical effect. It's something else - like addition, for example.
Sure. But it's not just implicit: That's how consciousness behaves. This observation has been very thoroughly explored and confirmed. Consciousness is a process, not a substance; the idea that it's a substance doesn't even make any sense given our understanding of both neuroscience and physics.

It's quite important to distinguish between the physicalist and computational viewpoints. It's often claimed that proponents of physicalism - the idea that something physical that happens in the neurons creates consciousness - are denying that artificial consciousness is possible. In fact, the possibility of artificial consciousness is implicit in physicalism.
I don't know what that's supposed to mean, but then, non-computational physical consciousness makes no sense either, so that's not a surprise.

This is the essence of the disagreement which has been going on for some time now.
Well, partly. There are also people who interject from time to time with immaterialist nonsense.

The problem with the non-computationalist side is that it's purely an argument from incredulity. (That is: you can't believe the thing is understood, therefore whatever you say instead must be true.)
 
Yes, what I am saying - and I'm far from alone here - is that consciousness in and of itself is actually dead simple and fully understood. What is complex and hard to understand is all the other stuff that brains do - sensory processing and language and memory association and so on - and that it's a huge mistake to try to label this stuff as "consciousness".

Well, I have to disagree with you, there, then. It's the "fully" that doesn't ring right. I'm pretty sure we have a good grasp of how consciousness operates but it's a bit premature to claim that we know all there is to know about it.
 
