The Hard Problem of Gravity

Why not read Blackmore's series of interviews, Conversations on Consciousness, and see how many of the interviewees consider that we've actually solved the HPC at a material level? I believe the answer is none.

In terms of theoretical modelling, sure, Dennett does, and no doubt O'Regan too. The book isn't just discussing the HPC, but most of the rest admit or imply that we don't really know yet.

As soon as there is concurrent conscious and unconscious processing, as humans have, Strong AI needs to explain it, and ideally do so in neuronal terms.

Nick

Ah, so you don't have a large number of neuroscientists who agree with your point, okay.

Fine by me.

I think you will have a hard time finding a large number of neuroscientists who even agree the HPC exists.
 
They're not talking about "what consciousness is" mechanistically (where your definition might or might not be useful). Rather they are trying to account for the phenomena of consciousness itself (subjective experience) in material, neuronal terms. They are investigating brain activity to try and understand how certain neuronal behaviour creates phenomenality. And the result thus far is that they have discovered certain brain activity associated with consciousness but they do not yet understand just why or how this relates to actual phenomenal consciousness. It is not clear yet.

Now either this is because you are switching to some really funny, abstracted, cart-before-the-horse thinking, or because you are using definitions that are removed from reality.

Perception is very well studied and modeled; you may not like that, but it is true. The study of sensation and perception is fairly well under way.

My guess is that you are using terms here that you cannot define or delineate.

"Subjective": what are you referring to here, specifically?

Give it a try, NIck2227; use strong, specific language. And if you want you can use your 'narrative self', as long as you define it as well.

But consider this: try neurodevelopment. At what age do the specifics that you think you can define begin to emerge in observable behaviors?

The sense of self (in terms of a localized body that is separated from other bodies) seems to be an emergent phenomenon at around the age of two. We can discuss which aspects I am referring to here: when object identity begins to manifest, mirroring, and then separation from other objects (identification and naming, pain expression, the beginnings of empathy).

But I have the impression that as often happens in these threads, your premise is based upon some lack of specific definition.
 
They are saying that they don't know why the phenomenon of global access should equate to consciousness.
Then they are putting the cart before the horse.

And by consciousness here they are referring to phenomenality.
Well, you are, anyway.

What's this "phenomenality"?

They're not talking about "what consciousness is" mechanistically (where your definition might or might not be useful). Rather they are trying to account for the phenomena of consciousness itself (subjective experience) in material, neuronal terms.
Of course. I know that. Except for the word "phenomena", which is gibberish.

So? Consciousness is still self-referential information processing.

They are investigating brain activity to try and understand how certain neuronal behaviour creates phenomenality.
You mean awareness.

And the result thus far is that they have discovered certain brain activity associated with consciousness but they do not yet understand just why or how this relates to actual phenomenal consciousness.
You mean awareness.

It is not clear yet.
In broad terms it is perfectly clear. They are working out the details.

But this definition is not appropriate to the context of the investigation Gaillard, Dehaene et al are involved in.
That's because you keep using the wrong word.

They are trying to uncover just how certain neuronal activity creates the actual phenomena of consciousness.
The what?

You are effectively saying "it just does it," but this is problematic because there does appear to be concurrent unconscious processing.
That bears no relation to what I said, and your objection is irrelevant. It was irrelevant the first time you raised it, and it's irrelevant now.

I point you to Damasio's Descartes' Error. If you're taking the cogito as axiomatic then of course your arguments are going to be circular and unfruitful. Descartes' cogito applies only to the narrative self.
Nope. Descartes did make an error, and Damasio rightly points that out, but that's dualism, which is irrelevant to the point of the cogito, has nothing to do with the narrative self, and would be irrelevant even if it did.

Mind you, Damasio's argument isn't particularly strong in any case.

Likely Hofstadter too.
Nope.

You'll have to give me a little more time for that one.
Okay.

Descartes doesn't really seem to be held in such high regard these days, from what I read. More the useful source of a number of intriguing and engaging errors.
Right. He started well, then fell into logical fallacy.

Better regarded in mathematics than philosophy.

He's using the terms "consciousness" and "awareness" as meaning the same, AFAICT.
Then he shouldn't.

When one is discussing phenomenality and trying to understand phenomenal consciousness this is how it is.
Discussing what to understand the what?

I don't know.
Read Hofstadter.
 
Hi INW,

I do feel I'm being misunderstood here.

I am not saying that self-reference feedback loops are not part of the model. My contention was that they are not what makes the difference between conscious and unconscious processing. My contention is that consciousness (as in phenomenal awareness) is not inherently related to self-reference.

I could be wrong, but this is my contention.

Really Nick?

Then you lack any understanding of neurology and of how neurons work.

Consider the simple single-neuron model, and how it becomes a filter and a generator.

Each neuron has a base chance of ‘firing’ (i.e. a biochemical process in which a semi-permeable membrane changes state and ions flow, causing the neuron to release neurotransmitter).

Each neuron exists in relation to other neurons, and its state is conditioned by them.

If a neuron receives a signal from a fellow neuron and then subsequently fires (within a given time window), it is more likely to fire again the next time that preceding neuron fires. This is called potentiation.

If a neuron receives a signal from a fellow neuron and then subsequently does not fire (within a given time window), it is less likely to fire the next time that preceding neuron fires. This is called attenuation.

So we can model the actual firing rate of a base neuron with a matrix calculation: add the potentiation potentials of the preceding neurons firing at a given time, subtract the attenuation potentials of the preceding neurons firing at that time, and use the result to modify the base chance. This gives you the activation ratio of the base neuron, given the arrangement of firing it receives.
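The single-neuron model above can be sketched in a few lines of Python. All names and numbers here are illustrative, not taken from any particular paper: a base firing chance is shifted up by the potentiation weights of currently firing inputs and down by the attenuation weights, and the weights themselves are updated in a Hebbian-style fashion.

```python
import numpy as np

def firing_probability(base, weights, active):
    """Chance that the base neuron fires, given which of its
    presynaptic neurons are active right now.

    base    : base firing chance of the neuron (0..1)
    weights : signed strengths of each presynaptic connection
              (positive = potentiating, negative = attenuating)
    active  : 0/1 vector of presynaptic neurons firing in this
              time window
    """
    drive = float(np.dot(weights, active))      # summed input this window
    return float(np.clip(base + drive, 0.0, 1.0))

def update_weights(weights, active, fired, rate=0.05):
    """Inputs followed by a spike are potentiated; inputs not
    followed by a spike are attenuated (per the description above)."""
    return weights + (rate if fired else -rate) * active

# Illustrative example: three inputs, two of them currently firing.
w = np.array([0.2, -0.1, 0.3])
x = np.array([1.0, 0.0, 1.0])
p = firing_probability(0.1, w, x)   # base 0.1 plus drives 0.2 and 0.3
w = update_weights(w, x, fired=True)
```

Scaling this from one neuron to a whole layer is just a matrix-vector product, which is presumably the "matrix calculation" referred to above.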


Now this is crucial to the signal processing that occurs in most areas of the brain.

For example, in the retina you have photoreceptors, and in the fovea (the area of color perception) you have arrays of neural connections at the level just next to the photoreceptors. So you have the neuron associated with each photoreceptor, but then you have the array level, which seems to work with the following schema.

Each red photoreceptor is arrayed with the photoreceptors around it, creating a ‘daisy’. So you have a red photoreceptor that sits in two separate daisy circuits: one where it is arrayed with other red receptors (red center, red petals), and another where it is arrayed with green receptors (red center, green petals). Given photostimulation, the base cell fires, but you also have two sets of reference possibilities: confirmation (surrounding cells fire in like manner) or denial (surrounding cells fire in contrast). So before the signal is even sent along the optic nerve, a sorting process is going on.

Does this make sense? It is crucial to the actual mechanisms of self-reference, which in turn are crucial to things like distinguishing edges in perceptual fields, depth perception, and the like.
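A toy version of that ‘daisy’ sorting, with invented thresholds and function names: the red-center cell is checked against its like-colored surround and against its contrasting surround, so a color edge is already flagged before the signal leaves the retina.

```python
def daisy_response(center, petals, threshold=0.5):
    """One 'daisy' circuit: compare the center photoreceptor's
    response with the mean response of its surrounding petals.
    Returns 'confirm' when the surround fires in like manner,
    'deny' when it fires in contrast."""
    surround = sum(petals) / len(petals)
    same = (center > threshold) == (surround > threshold)
    return "confirm" if same else "deny"

def presort(center, red_petals, green_petals):
    """The two reference checks a red-center cell gets before its
    signal reaches the optic nerve: one against the like (red)
    daisy, one against the contrasting (green) daisy."""
    return {
        "red_daisy": daisy_response(center, red_petals),
        "green_daisy": daisy_response(center, green_petals),
    }

# A strongly red patch: the red surround confirms, the green
# surround denies, i.e. a color contrast is flagged in the retina.
result = presort(0.9, [0.8, 0.7, 0.9], [0.1, 0.2, 0.1])
```

This is, in effect, a crude center-surround comparison, which is why the same circuitry matters for edge detection.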
 
Nick said:
* The neuronal markers for conscious access do appear to involve a considerable state change. (Even scientists use terms like "ignition" in describing what happens neuronally).

No they don't. You've completely misunderstood everything they are talking about.

Pixy, you're just getting a little crazy now. The paper is online here. In describing the neuronal precursors to conscious access the authors repeatedly use the terms "ignite" or "ignition." Please just read it if you don't believe me.

To me this means that the process taking place resembles a state change. (Please note that both above and when I used the term in the post before I used italics for "appear to" and "resembles" because I'm not suggesting that there definitively is a state change.)

Nick
 
Then they are putting the cart before the horse.

They're researching how the brain creates consciousness. If they took it as implicit that consciousness was self-reference then I dare say they would be researching how the brain references itself. But AFAICT they're not.

Using briefly flashed words preceded by, and sometimes succeeded by, visual masks, one can create a situation where both conscious and unconscious perception can be neuronally studied. And when the word is consciously reported by the subject, it is seen that three neuronal conditions are being met.

Thus, I submit, what is being studied are the neuronal conditions for conscious access.
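For concreteness, here is a toy sketch of such a masking trial. The timings and threshold are invented for illustration, not taken from the paper: the point is only that the same word can be made reportable (conscious) or unreportable (unconscious) simply by varying how soon the mask follows it.

```python
def masking_trial(soa_ms, threshold_ms=50):
    """Toy masked-word trial: a word is flashed, then a visual
    mask follows after soa_ms milliseconds. Below the (invented)
    threshold the word is not consciously reportable."""
    reportable = soa_ms >= threshold_ms
    return {"soa_ms": soa_ms,
            "condition": "conscious" if reportable else "unconscious"}

# The same stimulus yields both conditions, so conscious and
# unconscious trials can be compared neuronally.
trials = [masking_trial(soa) for soa in (16, 33, 66, 100)]
```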

I just don't see how defining consciousness as self-reference is particularly meaningful in this context. It may well be true (though that's unproven), but it anyway strikes me as very much judging the outcome before one starts experimenting.

And what makes this research exciting, and makes lots of other scientists interested in it, is not that it is just "filling in the details" as you suggest. Rather it is that it is going right into this chasm between the brain and the mind, that many scientists have believed in since Descartes, and trying to shine some light in there.

And what else is seen is that for conscious access to occur actually a fair bit of stuff needs to go on at a neuronal level. Not just we stick a little feedback loop in and it's a done deal.
Rather "the ensuing brain-scale neural assembly must “ignite” into a self-sustained reverberant state of coherent activity that involves many neurons distributed throughout the brain."

The situation is problematic for Strong AI. Certainly it's not how I imagine many Strong AI theorists would like it. When Dennett came up against this kind of research a decade or so ago, he had to climb down and re-think.

Nick
 
Last edited:
Pixy, you're just getting a little crazy now. The paper is online here. In describing the neuronal precursors to conscious access the authors repeatedly use the terms "ignite" or "ignition." Please just read it if you don't believe me.
I read it. I don't believe you.

You mean neural, if you mean anything at all.

To me this means that the process taking place resembles a state change.
No. It's a change in patterns of activity. I'll excuse you here because the mistake isn't yours, it's the authors'*; they should be more precise in their language to prevent such misunderstanding.

(Please note that both above and when I used the term in the post before I used italics for "appear to" and "resembles" because I'm not suggesting that there definitively is a state change.)
There definitively is a change in activity. "State change" isn't even properly defined in this context.

* Or - I should have realised, since I keep telling you to read Hofstadter - the translator's. Presumably the paper was originally published in French; the neural/neuronal conflict may be due to English vs. French terminology.
 
Last edited:
They're researching how the brain creates consciousness.
Right. My objection was with your description of what they are doing, which is not in fact correct.

It's not, as you said, that "they don't know why the phenomenon of global access should equate to consciousness". They are taking global access as a hypothesis and testing some predictions that might arise from it. (Though GWT is awfully fuzzy at best, at least you can do some experiments based on it.)

If they took it as implicit that consciousness was self-reference then I dare say they would be researching how the brain references itself. But AFAICT they're not.
Consciousness is self-referential information processing. And as Dancing David just noted, there's a huge amount of research into the details of referential and self-referential processing in the brain. This is the background to all neuroscience. Your objection makes as much sense as saying "If people were made of atoms, doctors would be researching how atoms assemble into people".

Using briefly flashed words preceded by, and sometimes succeeded by, visual masks, one can create a situation where both conscious and unconscious perception can be neuronally studied. And when the word is consciously reported by the subject, it is seen that three neuronal conditions are being met.

Thus, I submit, what is being studied are the neuronal conditions for conscious access.
What do you think you mean by that?

I just don't see how defining consciousness as self-reference is particularly meaningful in this context.
Because you're talking about awareness one moment and consciousness the next, and using the same term for both.

Tell me this: how can you report that you have seen a word without making reference to your awareness of that word?

It may well be true (though that's unproven), but it anyway strikes me as very much judging the outcome before one starts experimenting.
No. It's how we define consciousness. The details of how the self-reference happens will likely be interesting and possibly surprising, but there's no question but that it happens.

You might as well argue that the oceans aren't made of water.

And what makes this research exciting, and makes lots of other scientists interested in it, is not that it is just "filling in the details" as you suggest.
No, wrong again. It's exciting and interesting because it's the details that teach us stuff. We already know the oceans are made of water, because that's what we created the word "ocean" to mean. So that's not very interesting. What else is in them? How did they get there? That's what matters. We know that continents are made of rock, but that doesn't tell us why the coastlines of South America and Africa fit together.

Consciousness is self-referential information processing. But that's a definition, not a theory. It does mean that we'll find self-referential loops in the brain at some level, just as much as we'll find water in the ocean. And in fact we find these loops everywhere, and that is precisely what Gaillard and Dehaene are talking about.

Rather it is that it is going right into this chasm between the brain and the mind, that many scientists have believed in since Descartes, and trying to shine some light in there.
No it isn't. This is nonsense. This "chasm" is a fiction created by philosophers and long since discarded by most neuroscientists.

And what else is seen is that for conscious access to occur actually a fair bit of stuff needs to go on at a neuronal level. Not just we stick a little feedback loop in and it's a done deal. Rather "the ensuing brain-scale neural assembly must “ignite” into a self-sustained reverberant state of coherent activity that involves many neurons distributed throughout the brain."
Nick, THAT IS A FEEDBACK LOOP.

The situation is problematic for Strong AI.
Not even remotely.

Dennett had to climb down and re-think.
Cite.
 
Right. My objection was with your description of what they are doing, which is not in fact correct.

It's not, as you said, that "they don't know why the phenomenon of global access should equate to consciousness". They are taking global access as a hypothesis and testing some predictions that might arise from it. (Though GWT is awfully fuzzy at best, at least you can do some experiments based on it.)

As I see it, the research uses GWT as a context in which to frame its intent and to place its results, but is not really dependent upon GWT. The object is to identify markers for conscious access, and this is anyway interesting largely regardless of the theoretical model.

Consciousness is self-referential information processing.

IMO, that definition is not useful here. We are dealing with consciousness as conscious access - that which can be sensed, or at least reported as sensed.

What do you think you mean by that?

In actual terms that can be objectively verified, it's whether a sensory perception can be correctly reported. The intention/assumption is to test actual first-person perception, but this is as close as we can get.

Because you're talking about awareness one moment and consciousness the next, and using the same term for both.

In this context they are the same.

Tell me this: how can you report that you have seen a word without making reference to your awareness of that word?

My answer would depend on whether you consider the act of reporting as implicitly making reference to awareness. If we consider the term "reference" purely objectively then it is easy.

No. It's how we define consciousness. The details of how the self-reference happens will likely be interesting and possibly surprising, but there's no question but that it happens.

I have absolutely no doubt that self-reference takes place, and plenty of it. What is contentious for me is whether it is the presence or absence of self-reference that solely arbitrates whether conscious access takes place.

Nick, THAT IS A FEEDBACK LOOP.

It is not in question that there are feedback loops involved in neural assemblies. What is in question is whether this is what makes the difference between conscious and unconscious processing. My point here is that one could make just a little feedback loop. There's no need to get billions of neurons involved.


Dennett's "Fame in the Brain" metaphor replaced his "Multiple Drafts" hypothesis in the wake of research which didn't back up the latter.

Nick
 
Last edited:
I read it. I don't believe you.

You mean neural, if you mean anything at all.

No. It's a change in patterns of activity. I'll excuse you here because the mistake isn't yours, it's the authors'*; they should be more precise in their language to prevent such misunderstanding.

There definitively is a change in activity. "State change" isn't even properly defined in this context.

* Or - I should have realised, since I keep telling you to read Hofstadter - the translator's. Presumably the paper was originally published in French; the neural/neuronal conflict may be due to English vs. French terminology.

Well, as I understand it, the word "neuronal" is used when there's a need to really make it clear that neurons are involved. Presumably because the word "neural" has over time tended to become synonymous with any brain activity, not just neuronal. Perhaps not, but "neuronal" anyway seems to be a common enough word when I read ibogaine research papers.

My French is a bit rusty these days, but I believe the French counterpart of "ignite" means "to set light to," the same as in English. I imagine most words for visually obvious state changes have near-exact counterparts in major languages.

Nick
 
Last edited:
Nick said:
They are talking about what there is access to.

Dan Dennett, the godfather of Strong AI, defines it like this.

Pixy said:

Found it now, from Baars...

Baars said:
One dramatic contrast is between the vast number of unconscious neural processes happening in any given moment, compared to the very narrow bottleneck of conscious capacity. The narrow limits of consciousness have a compensating advantage: consciousness seems to act as a gateway, creating access to essentially any part of the nervous system. Even single neurons can be controlled by way of conscious feedback. Conscious experience creates access to the mental lexicon, to autobiographical memory, and to voluntary control over automatic action routines. Daniel C. Dennett has suggested that consciousness may itself be viewed as that to which ‘we’ have access. (Dennett, 1978) All these facts may be summed up by saying that consciousness creates global access.

Nick
 
Pixy,

I'm trying to get some clarity with these terms: "conscious" and "aware".

As far as I know, my body and brain are engaged in hundreds of self referential loops of which I have no awareness. (No perception of them and I'm totally oblivious to these processes.)

Then there's this symbol manipulation I call "self-awareness", which is also a lot of loopiness. I seem to be aware of this awareness, though I suspect I'm not aware of what's actually going on, just a surface process or even a derivative.

Now the dumb question for clarity's sake:
Is SHRDLU aware?
Is SHRDLU self-aware?
Is SHRDLU aware of its awareness?
Does SHRDLU have a subjective experience of selfhood?

BTW I'd recommend folks read Hofstadter's I Am a Strange Loop.
GEB is fine but somewhat dated. LOOP makes a deeper investigation into what we call selfhood.
 
Last edited:
Mathematical properties and physical properties are one and the same.

You clearly don't even understand what a "category" is, so I find it ironic that you would accuse someone else of making a category error.

Properties are just categorization. Suppose I categorize an apple as "round." Is that a mathematical property or a physical property?

An apple is clearly not round in the same sense in which a sphere is round. Nothing in the physical universe is a perfect sphere. A sphere is a mathematical concept.

Wrong.

Mathematics is a way to describe reality. We use mathematics to describe behavior. In fact, it is the only way to fully describe behavior.

That is kind of the whole point of mathematics.

Clearly not spoken to many mathematicians, then.

The vast majority of mathematics has nothing to do with the physical world, and most pure mathematicians have no interest in the applicability of their ideas. Indeed, some of them find it quite irritating when some mathematical concept turns out to be useful in the physical world.

There is no "magic world of mathematics" floating around independent of reality. It is a description of reality and as such it is entirely dependent on reality. No reality, no mathematics.

This is sort of trivial to understand, since if there was no reality, there would be no humans, and if there were no humans, there would be no mathematics.

It's wonderful to see how the most subtle of concepts and the most intractable of mysteries can be swept aside by a few bold assertions.

Mathematics is not a description of reality. It is part of reality. The undiscovered digits of pi are part of reality every bit as much as the surface of Venus.

Completely irrelevant, and your claim to the contrary betrays how little you know.

Any behavior resulting from timing or biochemical processes can also be exhibited by networks of transistors.

Well, that's obviously wrong. Can a network of transistors produce a baby? One of the better-known biochemical processes.

If your tire went flat, and you couldn't find another tire, but some engineer came up and said "here, replace the wheel with this widget, it will behave just like the tire as far as your car is concerned," what would you tell him?

Would you stubbornly insist that the behavior of the tire isn't what allows the car to go? Would you insist that it is instead some magical property of an actual rubber tire that can't be exhibited by any other entity in the universe, and that no matter what his widget does the car won't be "going" at all?

I would not be foolish enough to believe that because something produces a "round" object, it can replace the wheel. If I wanted to replace a particular component, it's never sufficient to find one property and duplicate it. It is necessary to find all the essential properties for the role, and to duplicate all of them. This is exactly where the AI view of the brain is lacking. It treats the brain as a network of switches, and decides, on very little evidence, that all the other behaviour is irrelevant.

No, it is not obvious at all. There is no scientific reason why one could not replace a neuron in a brain with a suitably advanced cybernetic device programmed to emulate that neuron and have the brain function exactly as before. None.

If you can think of one, feel free to share it with us.

Replacing a neuron with something that does exactly the same thing will obviously be unlikely to make a difference. Replacing a neuron with something that does just one thing the same and does everything else differently - say for example a transistor - would be very likely to make a difference. Hence we don't replace damaged neurons with transistors.

Many of the theorems of mathematics and computer science seem silly to those who are not educated in the relevant subjects.

Luckily, "silly" doesn't mean squat in science. Sound mathematical descriptions do.



I am not proclaiming how smart I am. I am proclaiming how uneducated in this field you are.



If you want to establish that the substrate of consciousness might be limited to biological neural networks, then you will need to demonstrate that.

It's odd that even though you're happy to proclaim my lack of education, you seem unable to understand the simplest of concepts, no matter how often I repeat and repeat them. I have never insisted that consciousness is necessarily limited to biology. I'm simply stating that we don't know what biological process is responsible for consciousness. We don't know, for example, that it's equivalent to a network of transistors.

It is quite ironic that you are able to imagine how all these different systems in the universe might act as switches -- when it suits your argument -- yet you are completely unable to imagine how anything besides a biological brain might exhibit all the behaviors of consciousness.

I can imagine all sorts of things. I don't assume that because I can imagine something that it is necessarily true.

Perhaps because the latter doesn't suit your argument?
 
I would not be foolish enough to believe that because something produces a "round" object, it can replace the wheel. If I wanted to replace a particular component, it's never sufficient to find one property and duplicate it. It is necessary to find all the essential properties for the role, and to duplicate all of them. This is exactly where the AI view of the brain is lacking. It treats the brain as a network of switches, and decides, on very little evidence, that all the other behaviour is irrelevant.
Church-Turing thesis.

Replacing a neuron with something that does exactly the same thing will obviously be unlikely to make a difference. Replacing a neuron with something that does just one thing the same and does everything else differently - say for example a transistor - would be very likely to make a difference. Hence we don't replace damaged neurons with transistors.
Church-Turing thesis.

It's odd that even though you're happy to proclaim my lack of education, you seem unable to understand the simplest of concepts, no matter how often I repeat and repeat them. I have never insisted that consciousness is necessarily limited to biology. I'm simply stating that we don't know what biological process is responsible for consciousness. We don't know, for example, that it's equivalent to a network of transistors.
Church-Turing thesis.

Three strikes, you're OUT!
 
An apple is clearly not round in the same sense in which a sphere is round. Nothing in the physical universe is a perfect sphere. A sphere is a mathematical concept.

"Sphere" is merely a categorization. It is a property. That is what all mathematical concepts are. And they apply only to reality, because reality is all that exists.

Mathematics doesn't exist in some magical void independent of reality.

Clearly not spoken to many mathematicians, then.

The vast majority of mathematics has nothing to do with the physical world, and most pure mathematicians have no interest in the applicability of their ideas. Indeed, some of them find it quite irritating when some mathematical concept turns out to be useful in the physical world.

The vast majority of poetry has nothing to do with the physical world either. Yet, the language used to create all poetry is necessarily a language used to describe reality.

It's wonderful to see how the most subtle of concepts and the most intractable of mysteries can be swept aside by a few bold assertions.

Mathematics is not a description of reality. It is part of reality. The undiscovered digits of pi are part of reality every bit as much as the surface of Venus.

Wait... isn't your whole objection to strong AI based on your assertion that mathematics is not reality in the same way physics is reality?

And now you are asserting that mathematics is part of reality every bit as much as the physical?

*looks confused*

Well, that's obviously wrong. Can a network of transistors produce a baby? One of the better-known biochemical processes.

Well, neither can a network of neurons, so clearly that point does nothing to advance your argument.

I would not be foolish enough to believe that because something produces a "round" object, it can replace the wheel. If I wanted to replace a particular component, it's never sufficient to find one property and duplicate it. It is necessary to find all the essential properties for the role, and to duplicate all of them. This is exactly where the AI view of the brain is lacking. It treats the brain as a network of switches, and decides, on very little evidence, that all the other behaviour is irrelevant.

lol.

You yourself claim that everything can be considered a switch.

So if everything can be considered a switch, westprog, then "treating the brain as a network of switches" doesn't really imply anything at all, does it?

Just be honest: you have very little idea what goes on in AI research at all. You are just some keyboard philosopher who took a little bit of programming 20 years ago when you were in college, and you think that makes you qualified to make sweeping generalizations about the entire field of computer science. Right?

Replacing a neuron with something that does exactly the same thing will obviously be unlikely to make a difference. Replacing a neuron with something that does just one thing the same and does everything else differently - say for example a transistor - would be very likely to make a difference. Hence we don't replace damaged neurons with transistors.

Right. Which is why I never said such a thing.

It's odd that even though you're happy to proclaim my lack of education, you seem unable to understand the simplest of concepts, no matter how often I repeat and repeat them. I have never insisted that consciousness is necessarily limited to biology. I'm simply stating that we don't know what biological process is responsible for consciousness. We don't know, for example, that it's equivalent to a network of transistors.

We know that if the computation performed by the neurons is what consciousness is, then it can indeed be done by a network of transistors as well.

And we have ample evidence that it is the computation being performed, not the physical attributes of the neurons themselves, that exhibits the behaviors of consciousness.

This seems pretty easy to grasp, since a fresh corpse has the same physical attributes as a living person yet exhibits none of the computation.
 
Ah, so you don't have a large number of neuroscientists who agree with your point, okay.

Fine by me.

I think you will have a hard time finding a large number of neuroscientists who even agree the HPC exists.

It's a hard thing to prove either way, as I can't easily do a straw poll, but I'll give you a quote from the Introduction to Rita Carter's 2002 book, Exploring Consciousness...

Rita Carter said:
Alas, I cannot claim that, Having Thought, I have solved the problem. This book will not let you into the secret of consciousness, because I don't know it. Nor, I think, does anyone else. Nor does it propose a radically new, improved theory. And it certainly doesn't tell you everything there is to know on the subject....
....As in all fields dotted with ivory towers, the study of consciousness has until recently been carved up between different disciplines with the practitioners of each seemingly convinced that their approach, and theirs only, will lead to enlightenment.
Neuroscience, for example, has revealed a great deal about the biological processes that accompany consciousness, and many books have been written that claim to explain how physical events give rise to subjective experience. On closer examination, however, most of them turn out to be theories about the "easy" problems - how the brain processes information, not how it turns it into feelings, thoughts, perceptions. The only comprehensive theories which deal directly with the hard problem are those which claim it does not exist - that consciousness simply is physical processes and everything else is illusory. That is a neat idea and may turn out to be correct. But few people - myself included - are satisfied by it. Carter (2002), University of California Press

Personally, I doubt many neuroscientists undertaking research projects actually spend so much time agonising over the HPC. They get on with the "easy problems." But whether they believe there is an HPC or not does not, I submit, depend on how clever they are but rather on how easily they are satisfied with a theoretical solution.

You can take the position that "the mind is what the brain does," but that is just taking a position. Nothing more. It's cool.

So, to answer your point, I can't prove it but I figure a clear majority of neuroscientists who actually research would agree that the HPC exists. Not because of their intelligence or understanding or lack of these things. But because if you're of the disposition that likes to investigate stuff, which as a researcher you likely are, you're not so likely to be happy with a pat statement like "the mind is what the brain does." Jeremy Wolfe, who Pixy cites, is a clear exception, but I figure this would hold for most researchers.

Nick
 
It's a hard thing to prove either way, as I can't easily do a straw poll, but I'll give you a quote from the Introduction to Rita Carter's 2002 book, Exploring Consciousness...
Why?

Personally, I doubt many neuroscientists undertaking research projects actually spend so much time agonising over the HPC.
Of course they don't, since the HPC is logically incoherent rubbish.

They get on with the "easy problems."
If by "easy" you mean "real".

But whether they believe there is an HPC or not does not, I submit, depend on how clever they are but rather on how easily they are satisfied with a theoretical solution.
No. The HPC is logically incoherent rubbish. Whether they believe it exists or not depends on whether they understand what is actually being proposed, because if they do, they will reject it.

You can take the position that "the mind is what the brain does," but that is just taking a position. Nothing more. It's cool.
That's absurd. The mind is what the brain does. This is better established than any other scientific hypothesis.

So, to answer your point, I can't prove it but I figure a clear majority of neuroscientists who actually research would agree that the HPC exists.
No.

Not because of their intelligence or understanding or lack of these things. But because if you're of the disposition that likes to investigate stuff, which as a researcher you likely are, you're not so likely to be happy with a pat statement like "the mind is what the brain does." Jeremy Wolfe, who Pixy cites, is a clear exception, but I figure this would hold for most researchers.
You are very, very wrong.
 
Why?

Of course they don't, since the HPC is logically incoherent rubbish.

If by "easy" you mean "real".

No. The HPC is logically incoherent rubbish. Whether they believe it exists or not depends on whether they understand what is actually being proposed, because if they do, they will reject it.

That's absurd. The mind is what the brain does. This is better established than any other scientific hypothesis.

Well, Rita Carter is a well respected scientific journalist who has worked personally with most of the main players in consciousness research. It's clear to me from the quote I provided that, as of 2002, she does not agree with you.

She clearly feels that a majority of researchers still regard the HPC as valid.

The ones that don't, notably Dennett, O'Regan, and the Churchlands (3 out of the 4 are philosophers) cannot justify their position in material terms. This is simple fact. They simply take a position and then attempt to defend it. Note in particular Dennett (2000) simply derides anyone who doesn't agree with him as "needing therapy." He does not, because he cannot, defend his position with materialist science.

I don't think anyone disputes that the computational theory of consciousness (Strong AI) may be correct. They merely point out that it is completely unproven at a neuroanatomical level.

Nick

eta: Personally I'm fine that you believe in Strong AI yourself. But to claim that pretty much every other scientist working in consciousness research agrees with you is just patent nonsense. I provided you with a quote from Baars (2005) stating that consciousness research is still in its early days. I provided you with a quote from Ramachandran (2007) stating that we are just scratching the surface. I provided you with a summary of the scene from Rita Carter (2002). You just ignore them and continue blindly on. I salute you for your fortitude, but it's clear to me that you are completely out on a limb here. Thanks anyway for prodding me into more reading!
 
But to claim that pretty much every other scientist working in consciousness research agrees with you is just patent nonsense. I provided you with a quote from Baars (2005) stating that consciousness research is still in its early days. I provided you with a quote from Ramachandran (2007) stating that we are just scratching the surface. I provided you with a summary of the scene from Rita Carter (2002). You just ignore them and continue blindly on. I salute you for your fortitude, but it's clear to me that you are completely out on a limb here. Thanks anyway for prodding me into more reading!

Clearly, there are serious researchers in the subject who agree entirely with the Pixy viewpoint, and there are cranks, fools and dimwits who disagree. And we can define their qualifications in the subject by how closely they agree with Pixy.
 
Wait... isn't your whole objection to strong AI based on your assertion that mathematics is not reality in the same way physics is reality?

Yes

And now you are asserting that mathematics is part of reality every bit as much as the physical?

Yes.

There is no contradiction. Mathematics and mathematical truths are not dependent on physical reality. Physical reality is dependent on mathematical truths.

The orbit of a planet around the sun depends on the value of pi. The value of pi does not depend on any physical quantity in the universe. It would have the same value if there was no universe. There cannot be a universe as we understand it that could possibly have a different value for pi.
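To make the orbital example concrete: one standard place where pi appears in orbital mechanics is Kepler's third law. This is just the textbook form of the law, offered as an illustration of the claim above, not something drawn from the thread itself:

```latex
% Kepler's third law: for a small body orbiting a mass M, the period T
% of an orbit with semi-major axis a is
T = 2\pi \sqrt{\frac{a^3}{G M}}
% where G is the gravitational constant. The physical quantities
% (a, G, M) set the scale of the orbit, but the factor 2*pi enters
% because one revolution sweeps 2*pi radians -- a purely mathematical
% fact, fixed independently of any physical measurement.
```

On this reading, changing G or M changes the period, but no physical change could alter the 2π, which is the sense in which the orbit "depends on the value of pi" while pi depends on nothing physical.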
 