The Hard Problem of Gravity

Well, Rita Carter is a well-respected science journalist who has worked personally with most of the main players in consciousness research. It's clear to me from the quote I provided that, as of 2002, she does not agree with you.

She clearly feels that a majority of researchers still regard the HPC as valid.
It's not clear that she does believe that, and it is in any case completely wrong. HPC is not logically coherent.

If you disagree with my assertion, all you have to do to prove me wrong is to provide a logically valid statement of HPC. Not prove it is true, just make a statement of the problem that does not contain inherent logical fallacies.

Which should be easy - assuming that it's not in fact a pile of worthless drivel.

The ones that don't, notably Dennett, O'Regan, and the Churchlands (3 out of the 4 are philosophers), cannot justify their position in material terms.
Says who? Anyway, stop reading philosophers. That's a large part of your problem.

This is simple fact.
No, it's an unsupported assertion.

They simply take a position and then attempt to defend it. Note in particular Dennett (2000) simply derides anyone who doesn't agree with him as "needing therapy." He does not, because he cannot, defend his position with materialist science.
What position are you talking about, and why can it not be defended within "materialist" science? (And what other sort of science is there?)

I don't think anyone disputes that the computational theory of consciousness (Strong AI) may be correct. They merely point out that it is completely unproven at a neuroanatomical level.
The problem is that there is no other explanation. The GWT that you keep referring to is a computational theory.

eta: Personally I'm fine that you believe in Strong AI yourself. But to claim that pretty much every other scientist working in consciousness research agrees with you is just patent nonsense. I provided you with a quote from Baars (2005) stating that consciousness research is still in its early days.
Yes. True to an extent, but irrelevant.

I provided you with a quote from Ramachandran (2007) stating that we are just scratching the surface.
True to an extent, but irrelevant.

I provide you with a summary of the scene from Rita Carter (2002).
Which was mostly wrong.

You just ignore them and continue blindly on.
Some of them are right. Baars and Ramachandran in no way contradict anything I am saying. That's why they're irrelevant to this particular argument. You've simply failed to grasp what they are saying, and somehow believe that it poses some sort of problem (which you have never managed to define) for the computational model.

That's completely wrong. There are two schools here: Those like Dennett and Hofstadter and Baars and Ramachandran and Wolfe, who are talking about computational models, and those like Chalmers and Searle and Jackson, who are talking about invisible elves.

I'll take the computational model, thanks.

I salute you for your fortitude, but it's clear to me that you are completely out on a limb here. Thanks anyway for prodding me into more reading!
No. Nick, what we are describing is mainstream neuroscience. Not everyone agrees with the exact extent and ramifications of every element we have discussed - of course not, or there would be no research left to be done - but the basics we have explained to you underpin all of modern neuroscience. Dismissing Professor Wolfe as an exception because he states unequivocally what everyone in the field already knows is the height of absurdity.
 
Clearly, there are serious researchers in the subject who agree entirely with the Pixy viewpoint, and there are cranks, fools and dimwits who disagree.
There are indeed some serious researchers who agree entirely with my viewpoint, because I came to my present viewpoint by reading their work. There are other serious researchers who agree on some points and disagree on others. And they certainly know many of the details far better than I do. I'm talking - for the most part - in very broad brush strokes here, on areas where all neuroscientists agree.

And then there's the other group, the ones who believe in magic elves.

And we can define their qualifications in the subject by how closely they agree with Pixy.
No. Just ask them if they think HPC is coherent as Chalmers states it. If they do, then you can probably safely dismiss anything they have to say on any topic.
 
Well, neither can a network of neurons, so clearly that point does nothing to advance your argument.

It's nothing to do with advancing my argument. You made a statement - that a network of transistors could be made equivalent to any biochemical process - that is simply, plain, wrong and I gave a simple counterexample to demonstrate it.

I never claimed that a network of neurons was capable of doing anything any other biochemical process could do. That would be as absurd as your statement.

It makes argument a tedious, lengthy process when the most obvious errors are defended in this kind of meandering, pointless chase after points. Why not just say "Yes, a network of transistors can't substitute for every biochemical process, but it can replace all functions of the brain". Assuming that is what you mean. I have to guess, since you can't keep focused on what you are saying.

You will find that if you simply correct misstatements as you go, your ideas will actually become clearer. I've made factual errors during this thread, and I expect to make more. I will, if they are pointed out, amend them.
 
It's nothing to do with advancing my argument. You made a statement - that a network of transistors could be made equivalent to any biochemical process - that is simply, plain, wrong and I gave a simple counterexample to demonstrate it.
Church-Turing thesis. Strike one!

I never claimed that a network of neurons was capable of doing anything any other biochemical process could do. That would be as absurd as your statement.
Church-Turing thesis. Strike two!

It makes argument a tedious, lengthy process when the most obvious errors are defended in this kind of meandering, pointless chase after points. Why not just say "Yes, a network of transistors can't substitute for every biochemical process, but it can replace all functions of the brain". Assuming that is what you mean. I have to guess, since you can't keep focused on what you are saying.
Church-Turing thesis. Strike three, you're out!

Again.

You will find that if you simply correct misstatements as you go, your ideas will actually become clearer. I've made factual errors during this thread, and I expect to make more. I will, if they are pointed out, amend them.
Well, that was at least partly correct.
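The substrate-independence point behind the Church-Turing appeals above can be made concrete with a toy sketch (an illustration only, not a model of neurons or transistors): the same function realised by two entirely different mechanisms is, viewed from the outside, the same computation.

```python
# Toy illustration of substrate independence: XOR built once from NAND
# "switches" alone, and once from arithmetic. Externally the two
# implementations are indistinguishable -- only the substrate differs.

def nand(a, b):
    """A single switching element: output 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def xor_from_nands(a, b):
    """The standard four-NAND construction of XOR."""
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def xor_from_arithmetic(a, b):
    """The same function realised by a completely different mechanism."""
    return (a + b) % 2

# The two substrates agree on every input.
for a in (0, 1):
    for b in (0, 1):
        assert xor_from_nands(a, b) == xor_from_arithmetic(a, b)
```

The point is only about equivalence of input-output behaviour, which is what the "network of transistors" claim was restricted to; it says nothing about replicating arbitrary biochemistry.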
 
Some of them are right. Baars and Ramachandran in no way contradict anything I am saying. That's why they're irrelevant to this particular argument. You've simply failed to grasp what they are saying, and somehow believe that it poses some sort of problem (which you have never managed to define) for the computational model.

The problem for me is that neither Baars nor Rama mentions the computational theory. They are neuroscientists and they are happy (apparently) researching the actual brain and trying to work out how it does what it does. Both admit that there are these huge explanatory gaps. Baars says it's going to take another century to understand what's going on and Ramachandran says we're only now just scratching the surface of the issue of selfhood. Now if they were to say...well, there are these huge explanatory gaps but of course we will one day work out how the brain does it just like a computer, or words to this effect...then I would be more convinced by your assertions. But they don't. And I'm not.

ETA: This is the thing for me. None of these guys reinforce what you say. If they came out and said... well of course we know how it happens because we have AI simulations and so now we're just trying to work out how the brain does the same thing...then I would be convinced. But they don't. Really I've looked. Dehaene, Edelman, Baars, Ramachandran - none of these guys have I seen state anything like this. They don't mention computational theories AFAICT.


That's completely wrong. There are two schools here: Those like Dennett and Hofstadter and Baars and Ramachandran and Wolfe, who are talking about computational models, and those like Chalmers and Searle and Jackson, who are talking about invisible elves.

No. That's just in your head, Pixy. The world for you may be split entirely into "people who agree with me" and "people who don't" and you may spend your hours in harmless fantasy trying to recruit those who you think are suitably distinguished into the former camp but, as even I have to admit, in reality it is not like this.

Baars' original model was abstract. But the Global Neuronal Workspace Model, as it seems to be more usually called these days, is neuronal. I mean, you can call it computational, I guess, if this is how you relate to the world, but I doubt if so many others will bother. Whichever way you call it, it is, I submit, well accepted, as Carter states, that we don't yet have the answers that the average person interested in these things seeks.

No. Nick, what we are describing is mainstream neuroscience. Not everyone agrees with the exact extent and ramifications of every element we have discussed - of course not, or there would be no research left to be done - but the basics we have explained to you underpin all of modern neuroscience. Dismissing Professor Wolfe as an exception because he states unequivocally what everyone in the field already knows is the height of absurdity.

I think if you listen again to Wolfe you will find that very early on he states his position. I take the position that the mind is what the brain does...or words to this effect. He then proceeds to relate the answers to a whole series of the easy problems. To me this action is a de facto admission that the HPC exists and he accepts that you can only currently take a position here, not fill it with actual evidence. You may choose to interpret his words differently!

Nick
 
The problem for me is that neither Baars nor Rama mentions the computational theory.
What Baars is describing is a computational theory. What do you think it is?

They are neuroscientists and they are happy (apparently) researching the actual brain and trying to work out how it does what it does.
Yes.

Both admit that there are these huge explanatory gaps.
Yes.

Baars says it's going to take another century to understand what's going on and Ramachandran says we're only now just scratching the surface of the issue of selfhood.
Yes.

What puzzles me is why you think any of this is relevant to your point.

Now if they were to say...well, there are these huge explanatory gaps but of course we will one day work out how the brain does it just like a computer, or words to this effect...then I would be more convinced by your assertions. But they don't. And I'm not.
Seriously, Nick, what other possibility are you proposing? The brain processes information. It is a computer. It does things just like a computer because that is what it is.

ETA: This is the thing for me. None of these guys reinforce what you say.
Sorry, Nick, but they do. You just don't have the fundamental concepts to realise it. Read Hofstadter. Listen to Wolfe. Both are wonderful teachers, certainly better than me. I learned a lot from them.

If they came out and said... well of course we know how it happens because we have AI simulations and so now we're just trying to work out how the brain does the same thing...then I would be convinced.
Why would they say that? AI has produced simple models of some parts of brain function. AI research is a tool for understanding consciousness. The details of brain function are going to be different, and far more complex. Everyone understands this.

But they don't. Really I've looked. Dehaene, Edelman, Baars, Ramachandran - none of these guys have I seen state anything like this. They don't mention computational theories AFAICT.
Do you understand what a computational model of consciousness is? Do you understand that as soon as you start talking about neural activity, you are talking about a computational model? Everything they are talking about is computational models. They don't say it in these papers, because that's first-year intro psych stuff. That's why Wolfe does come out and say it: Mind is what brain does. That's the foundation of all of modern neuroscience, even more fundamental than evolution is to biology.

No. That's just in your head, Pixy. The world for you may be split entirely into "people who agree with me" and "people who don't" and you may spend your hours in harmless fantasy trying to recruit those who you think are suitably distinguished into the former camp but, as even I have to admit, in reality it is not like this.
Nope. Sorry, you're wrong.

Baars' original model was abstract. But the Global Neuronal Workspace Model, as it seems to be more usually called these days, is neuronal.
That means it's computational.

I mean, you can call it computational
Yes. Because it is.

I guess, if this is how you relate to the world, but I doubt if so many others will bother.
Because it's obvious.

Whichever way you call it, it is, I submit, well accepted, as Carter states, that we don't yet have the answers that the average person interested in these things seeks.
I never said we did. I said there are computational models, and invisible elves, and that I prefer the computational models, thanks.

I think if you listen again to Wolfe you will find that very early on he states his position. I take the position that the mind is what the brain does...or words to this effect. He then proceeds to relate the answers to a whole series of the easy problems.
No. He answers the real problems. Because the so-called "hard" problems are fictions.

To me this action is a de facto admission that the HPC exists and he accepts that you can only currently take a position here, not fill it with actual evidence.
And you accuse me of projection!

No, Nick. The so-called "hard" problem is incoherent nonsense. I note once more that you can prove me wrong by simply providing a logically coherent statement of the hard problem. You don't have to show that it is true, merely that it is valid.

Chalmers can't do it, so over to you.

You may choose to interpret his words differently!
You mean, take him as meaning what he says? He specifically notes that psychology is a scientific, naturalistic discipline, which explicitly rules out HPC.

Sorry Nick, you're just wrong here.
 
It's not clear that she does believe that, and it is in any case completely wrong. HPC is not logically coherent.

For me the HPC is essentially a proposition. It proposes that there is still an explanatory gap between brain processing and actual experiential consciousness - phenomenality.

The so-called "easy problems" are to discover how the brain processes. The HPC is to understand how this processing becomes consciousness.

The Strong AI perspective on this is that there simply is no explanatory gap. It does not exist. Processing is consciousness (or "awareness" for Pixy here). The mind simply is what the brain does. There is nothing else that needs to be explained. Carter (2002) states that not so many people are convinced by this perspective.

When we look at modern brain research, particularly that emerging from France under Dehaene, Naccache and others, for me it does seem that the Strong AI interpretation is threatened.

There clearly is both conscious and unconscious processing going on concurrently in the human brain. There do appear to be many self-referencing loops all over the brain which are not consciously accessible. Consciousness only appears to emerge when vast assemblies of neurons "ignite" and "broadcast" information all over the brain. If correct it seems that none of this should be predicted by Strong AI.

Nick
 
Sorry Nick, you're just wrong here.

Well, if I'm wrong I'm wrong. It's not such a big deal. I still do not buy your statement that everyone in neuroscience accepts that consciousness is entirely computational. They don't say it. Ramachandran and Baars happily admit that we're just beginning the journey. They don't say...well of course it's all been mapped out for us by AI and we're just filling in the gaps in how a human does it. If they did, fair enough.

At the same time, Rita Carter happily states that most people aren't convinced by the computational answer to the HPC.

So, from where I'm sitting, and not working professionally in neuroscience, it seems that on one side there's you, claiming that silently all these scientists agree with the computational model regardless of the fact that they never mention it publicly....and on the other there are the scientific statements themselves...it's going to take another 100 years...we're just scratching the surface....few people are convinced by the computational answer.

Hopefully you can understand my skepticism.

Nick
 
For me the HPC is essentially a proposition. It proposes that there is still an explanatory gap between brain processing and actual experiential consciousness - phenomenality.
That isn't at all what Chalmers says, but go on.

The so-called "easy problems" are to discover how the brain processes.
Since what the brain does is processing, once you understand how it does it, why do you think there should be anything left?

The HPC is to understand how this processing becomes consciousness.
Why is this "hard", or a "problem", at all?

It's only a problem if you assume that it's something other than brain processing. But we know already that it is brain processing. That's established fact.

So now we need to study and understand that processing. That's precisely what people like Baars and Ramachandran and Wolfe and Gaillard are doing.

And when you understand that processing, you understand consciousness.

The Strong AI perspective on this is that there simply is no explanatory gap.
Wrong.

It does not exist.
Wrong.

Processing is consciousness (or "awareness" for Pixy here).
Hopelessly wrong. I've explained this to you repeatedly. This bears no relation to anything I have said at any time.

The mind simply is what the brain does.
Correct.

There is nothing else that needs to be explained.
Embarrassingly poor strawman.

Carter (2002) states that not so many people are convinced by this perspective.
Who cares?

When we look at modern brain research, particularly that emerging from France under Dehaene, Naccache and others, for me it does seem that the Strong AI interpretation is threatened.
You keep saying this, but it makes no sense whatsoever. Threatened by what?

There clearly is both conscious and unconscious processing going on concurrently in the human brain.
Yes, of course. So what? The same thing happens in computers.

There do appear to be many self-referencing loops all over the brain which are not consciously accessible.
Not accessible to you. Accessible to themselves.

You don't have access to my mental state either. Those other loops simply aren't you.

Consciousness only appears to emerge when vast assemblies of neurons "ignite" and "broadcast" information all over the brain.
At least you have the decency to use scare quotes. The neurons do not ignite. They broadcast nothing. They switch. That's what they do.

The patterns of activity in the neural network change. The patterns of activity in the neural network are always changing. It wouldn't be terribly useful otherwise.

If correct it seems that none of this should be predicted by Strong AI.
It's correct and it's entirely consistent with and (in broad terms) predicted by AI research. They are talking about feedback loops, Nick, which is exactly what we have been explaining to you all this time.
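The feedback loops being described can be sketched minimally (a toy illustration, not a model of any real neural circuit): a few binary "switches" wired in a ring, so that each unit's next state depends on the network's own current state, and the pattern of activity keeps changing rather than settling.

```python
# Toy sketch of "switches organised into a feedback loop": three binary
# threshold units wired in a ring. Each unit's next state depends on the
# network's own current state, so activity circulates indefinitely.

def step(state, weights, thresholds):
    """Advance the network one tick: each unit switches on iff the
    weighted sum of the current state exceeds its threshold."""
    return tuple(
        1 if sum(w * s for w, s in zip(row, state)) > t else 0
        for row, t in zip(weights, thresholds)
    )

# Ring wiring: unit 0 listens to unit 2, unit 1 to unit 0, unit 2 to unit 1.
weights = [
    [0, 0, 1],
    [1, 0, 0],
    [0, 1, 0],
]
thresholds = [0.5, 0.5, 0.5]

# A single pulse of activity travels around the loop with period 3:
# the pattern of activity is always changing, as described above.
state = (1, 0, 0)
for _ in range(6):
    state = step(state, weights, thresholds)
    print(state)
```

Six ticks bring the network back to its starting pattern; the dynamics are entirely a product of the feedback wiring, not of any single switch.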
 
It proposes that there is still an explanatory gap between brain processing and actual experiential consciousness - phenomenality.

Which is all fine and dandy, and as soon as someone who is in favour of this proposition can explain how one is supposed to objectively measure that which, by definition, cannot be objectively measured under their epistemology, maybe we can start treating it as something other than simply a philosophical language game.
 
That isn't at all what Chalmers says, but go on.

For me it is essentially what Chalmers says. I don't really see how one can dispute this. But, anyway...


Since what the brain does is processing, once you understand how it does it, why do you think there should be anything left?

Well, the usual reason is that it appears there should be.

Why is this "hard", or a "problem", at all?

It's only a problem if you assume that it's something other than brain processing. But we know already that it is brain processing. That's established fact.

Well, academics usually take the materialist perspective that TMIWTBD, as we've discussed. That is a fact. Whether it's true is not yet ascertained, but anyway...

So now we need to study and understand that processing. That's precisely what people like Baars and Ramachandran and Wolfe and Gaillard are doing.

Agreed.

And when you understand that processing, you understand consciousness.

Hopefully that is so.


It is the Strong AI perspective. Dennett (2000) states that those in error repeatedly look for something else after the processing is done to account for consciousness. It seems fairly straightforward to me.


See above.


Hopelessly wrong. I've explained this to you repeatedly. This bears no relation to anything I have said at any time.

whatever

Embarrassingly poor strawman.

I mean that there is no explanatory gap between processing and consciousness.

Who cares?

Not you, apparently. Seeing as it doesn't agree with what you're saying.

You keep saying this, but it makes no sense whatsoever. Threatened by what?

Well, when you have a theory it is nice when evidence supports it.


Not accessible to you. Accessible to themselves.

I have to admit that I've been getting progressively more intrigued by this notion.

The assumption is that "I" represents the whole body, yet it could well be that the body is actually an accumulation of relatively independent conscious modules and that only the bits which are connected together by this "global access" network can intercommunicate through consciousness. All the modules are conscious but what we usually term "I" is actually only the "global access" network.

You don't have access to my mental state either. Those other loops simply aren't you.

Well, other loops in the body will feed back I'm sure to the "global access" network, though not directly through what is experienced as conscious awareness.

At least you have the decency to use scare quotes. The neurons do not ignite. They broadcast nothing. They switch. That's what they do.

For sure. What I was bringing to your attention was that a very large number of them do it in synchrony, and the fact that this is so can, not unreasonably, lead one to conclude that this is needed for consciousness.

Nick
 
Which is all fine and dandy, and as soon as someone who is in favour of this proposition can explain how one is supposed to objectively measure that which, by definition, cannot be objectively measured under their epistemology, maybe we can start treating it as something other than simply a philosophical language game.

I think in reality it's more likely that if, in say 50 years time, the scientists still really aren't getting there (which I personally don't think will happen) then they will be forced to change tack. Objectivity is all very well and fine. But if perchance it isn't getting the job done then things inevitably change.

Nick
 
But if perchance it isn't getting the job done then things inevitably change.

Sounds like the argument of altMed-ers.

I think in reality it's more likely that if, in say 50 years time, the scientists still really aren't getting there (which I personally don't think will happen) then they will be forced to change tack.

Isomorphism: objective science can't deal with Homeopathy.
 
Sounds like the argument of altMed-ers.



Isomorphism: objective science can't deal with Homeopathy.

I'm just a highly practical person. Objectivity is a brain state. Nothing more, nothing less. If it does the job, great. If it doesn't, bye bye.

Nick
 
For me it is essentially what Chalmers says. I don't really see how one can dispute this.
The problem is, this is not what he says; it's a restatement of what Chalmers says that tries to make him not sound crazy.

Chalmers says, categorically, that it is impossible to produce a scientific theory of consciousness. He asserts that materialism and scientific naturalism are false.

Well, the usual reason is because that it appears that there should be.
Sorry, what is that even supposed to mean?

Well, academics usually take the materialist perspective that TMIWTBD, as we've discussed. That is a fact. Whether it's true is not yet ascertained, but anyway...
Nope, sorry, this is just wrong. We know this as a fact better than we know any other fact. Better than we know that the world is round or that the sea is full of water.

Never mind fMRI and direct neural stimulation and such. Just consider the results of five thousand years of recorded experiments with psychoactive drugs. Some of our oldest records are devoted to discussions of this subject.

"Mind is what brain does" is less controversial than the atomic theory of matter or the germ theory of disease or the heliocentric theory of the solar system.

Hopefully that is so.
"Hopefully"? Under what conditions can it not be so?

It is the Strong AI perspective. Dennett (2000) states that those in error repeatedly look for something else after the processing is done to account for consciousness. It seems fairly straightforward to me.
That explanatory gap is entirely imaginary and logically incoherent, yes. However, there is a real explanatory gap involving real theories about real processes. We have not explained everything the human brain does, not by a long shot.

I mean that there is no explanatory gap between processing and consciousness.
Consciousness is self-referential information processing. But that's not an explanation, that's a definition.

We are still working on an operational theory of the human mind. But we know already that there is an operational theory, and we know its boundaries.

Not you, apparently. Seeing as it doesn't agree with what you're saying.
It's your opinion of someone else's opinion of other people's opinions. Who cares?

Well, when you have a theory it is nice when evidence supports it.
How does that address the question? Threatened by what? As I've noted, all of this is perfectly compatible with AI research. There is no "threat" whatsoever.

I have to admit that I've been getting progressively more intrigued by this notion.
Well, okay, cool. :)

The assumption is that "I" represents the whole body, yet it could well be that the body is actually an accumulation of relatively independent conscious modules and that only the bits which are connected together by this "global access" network can intercommunicate through consciousness.
No, that's backwards. They don't "intercommunicate through consciousness". Consciousness is generated through their intercommunication.

All the modules are conscious but what we usually term "I" is actually only the "global access" network.
Essentially, yeah. There are all these autonomous and semi-autonomous processes running in your brain that are - in a very real sense - conscious themselves, but to which you do not have access. You have access to the outputs of those processes, but not to the internal states.

So not only are you an illusion, what you are an illusion of is an illusion. :)
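The "global access" picture being discussed can be caricatured in a few lines (a toy sketch only, not Baars' Global Workspace Theory or Dehaene's Global Neuronal Workspace model; the module names and numbers are invented for illustration): independent modules keep private internal state, compete for access to a workspace, and the winner's output is broadcast back to every module.

```python
# Toy caricature of a global workspace: modules keep private internal
# state (never broadcast), compete for the workspace, and the winning
# output is broadcast to all modules. Names and numbers are invented.

import random

class Module:
    def __init__(self, name):
        self.name = name
        self._private = random.random()   # internal state: never broadcast

    def output(self):
        # Other modules see only this salience-tagged output,
        # not the internal state that produced it.
        salience = random.random() * self._private
        return salience, f"{self.name}: signal"

    def receive(self, message):
        # A broadcast nudges private state; the change itself
        # is invisible to every other module.
        self._private = min(1.0, self._private + 0.1)

modules = [Module(n) for n in ("vision", "hearing", "memory")]

for tick in range(3):
    bids = [m.output() for m in modules]
    salience, winner = max(bids)     # competition for workspace access
    for m in modules:                # global broadcast of the winner
        m.receive(winner)
    print(f"tick {tick}: broadcast -> {winner}")
```

Note the direction of dependence matches the correction above: the broadcast is generated by the modules' intercommunication; nothing "communicates through" a pre-existing conscious medium.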

Well, other loops in the body will feed back I'm sure to the "global access" network, though not directly through what is experienced as conscious awareness.
Right - but the same is true when you communicate with me.

For sure. What I was bringing to your attention was that a very large number of them do it in synchrony, and the fact that this is so can, not unreasonably, lead one to conclude that this is needed for consciousness.
Well, yeah. Consciousness needs a number of switches organised into a feedback loop. That is what I've been pointing out all along.

Read Hofstadter. :)
 
There is no contradiction. Mathematics and mathematical truths are not dependent on physical reality. Physical reality is dependent on mathematical truths.

The orbit of a planet around the sun depends on the value of pi. The value of pi does not depend on any physical quantity in the universe. It would have the same value if there was no universe. There cannot be a universe as we understand it that could possibly have a different value for pi.

If you really think this, then you have a fundamental misunderstanding of the nature of mathematics.

Good luck with that.
 
It's nothing to do with advancing my argument. You made a statement - that a network of transistors could be made equivalent to any biochemical process - that is simply, plain, wrong and I gave a simple counterexample to demonstrate it.

Yes, I assumed we were speaking in the context of the behavior of neurons as viewed by other neurons. My statement is not true in general.

I hope you can see why I would make that assumption, since that is after all what we are talking about.

I never claimed that a network of neurons was capable of doing anything any other biochemical process could do. That would be as absurd as your statement.

Yes, I assumed we were speaking in the context of the behavior of neurons as viewed by other neurons. My statement is not true in general.

I hope you can see why I would make that assumption, since that is after all what we are talking about.

It makes argument a tedious, lengthy process when the most obvious errors are defended in this kind of meandering, pointless chase after points.

lol, you mean like the error of claiming a rock can switch?

That claim is wrong. Furthermore, looking at the game of mental Twister you play in order to defend such a claim, anyone can see it is meandering. Finally, defending it is pointless, because whether or not a rock satisfies some label doesn't impact the way it actually behaves, which is what we are talking about and what is important.

You will find that if you simply correct misstatements as you go, your ideas will actually become clearer. I've made factual errors during this thread, and I expect to make more. I will, if they are pointed out, amend them.

Sure thing.
 
The problem is, this is not what he says; it's a restatement of what Chalmers says that tries to make him not sound crazy.

From what I've read Chalmers either takes the position that there may be something left over after all the processing is explained, or that there categorically will be something left over.

I don't think either makes him sound crazy. Both seem on the surface fairly reasonable propositions. I think it is more that the truth is crazy!

Chalmers says, categorically, that it is impossible to produce a scientific theory of consciousness. He asserts that materialism and scientific naturalism are false.

Well, maybe on one of his more extreme days! You can cite him here if you like.

Nope, sorry, this is just wrong. We know this as a fact better than we know any other fact. Better than we know that the world is round or that the sea is full of water.

I see the fervour's building again.

Never mind fMRI and direct neural stimulation and such. Just consider the results of five thousand years of recorded experiments with psychoactive drugs. Some of our oldest records are devoted to discussions of this subject.

Drug experiences cut both ways here.

"Mind is what brain does" is less controversial than the atomic theory of matter or the germ theory of disease or the heliocentric theory of the solar system.

Perhaps to you, but to the rest of interested humanity this isn't so. The materialist theory of consciousness does remain controversial.

"Hopefully"? Under what conditions can it not be so?

That scientists spend the next 50 years studying and don't happen to get there.

That explanatory gap is entirely imaginary and logically incoherent, yes. However, there is a real explanatory gap involving real theories about real processes. We have not explained everything the human brain does, not by a long shot.

That's nothing to do with the HPC. As I said, the Strong AI position is that there exists no explanatory gap between brain processing and consciousness.


It's your opinion of someone else's opinion of other people's opinions. Who cares?

These attempts to distance yourself from anything which doesn't agree with your position do get a little predictable, Pixy.

Rita Carter (2002) wrote..."The only comprehensive theories which deal directly with the hard problem are those which claim it does not exist - that consciousness simply is physical processes and everything else is illusory. That is a neat idea and may turn out to be correct. But few people - myself included - are satisfied by it."

She does not agree with you! It's clear. It's OK. Can you simply handle this, or does your mind have to turn somersaults again trying to spin it all around?



Well, okay, cool. :)


No, that's backwards. They don't "intercommunicate through consciousness". Consciousness is generated through their intercommunication.

Consciousness is not generated through communication. It is communication. There's no separation.

Essentially, yeah. There are all these autonomous and semi-autonomous processes running in your brain that are - in a very real sense - conscious themselves, but to which you do not have access. You have access to the outputs of those processes, but not to the internal states.

So not only are you an illusion, what you are an illusion of is an illusion. :)

Not only are you an illusion, but your domain of illusory influence is even smaller than you think.


Well, yeah. Consciousness needs a number of switches organised into a feedback loop. That is what I've been pointing out all along.

In this case an exceptionally large number of switches.

Nick
 
