
Materialism - Devastator of Scientific Method! / Observer Delusion

Your argument basically boils down to...something you call 'you'...somehow (you don't explain how) has the capacity to understand that something called 'you' does not exist.

Leaving aside the obvious absurdity of such an observation...what is it that exists if 'you' do not?

Also excellent.

Hans :)
 
Massive significance? Simple word games don't change the fact that this is just another way of saying that the sense of self is constructed in the brain by neurological interactions, while apparently trying to deliberately ignore both that sense of self and the neurological interactions of the brain areas that contribute to creating it. That is the one basic trick of a magician: deliberate misdirection.

The Man,

If it were in my power to stop your brain from attending to thoughts for, say, 15 seconds, whilst still retaining full consciousness, then I can assure you that, on resumption, you would say "Ah, okay! I see what you mean. None of this is actually happening to anyone."

It isn't within my power, so I can refer you to Dan on the Centre of Narrative Gravity. Don't know if it helps any.
 
In the rabbit-out-of-the-hat analogy, the rabbit actually comes out of the hat. The trick or illusion is getting the rabbit into the hat without anyone noticing. Here, and in a previous post, you claim to be essentially watching the rabbit being constructed in that hat. That doesn't make the rabbit go away; it makes the need for a trick or illusion - how the rabbit gets into the hat - go away. To do this you again refer to your own sense of self (the rabbit) to insist that it is not the trick that's the illusion but the rabbit. I doubt you could be any more self-inconsistent if you tried.

Fair enough. It's not a perfect analogy, but I can't think of a better one right now. The idea is that, if you can see the magician with a rabbit up his sleeve, you're less inclined to believe that rabbits can emerge out of thin air, and perform science based around that pretext.

Often what happens is that people try to give all aspects of emergent brain activity equal ontological rights (validity). I'm pointing out that in the case of those aspects of mental selfhood we call "the Observer" and "the Experiencer" they can't acquire validity in this manner. They have validity only in social context, not consciousness research... except to communicate!
 
Dennett's center of narrative gravity essay rests on one very questionable assumption. He posits a robot equipped with sensors and actuators that allow it to interact with the world and other beings (e.g. asking for help if trapped), and he also posits it possessing the computational capability to create narrative, including autobiographical narrative, from that interaction. But he also declares it non-conscious, using phrases like "clanking machinery" to make the idea that the robot could possibly be conscious sound absurd.

He has, most likely, thereby entangled himself in a contradiction. He has essentially re-invented the p-zombie in mechanical form. That is not a coherent concept, if the computational process of creating narrative (including autobiographical narrative) from interaction with the world is what consciousness actually is. Or if consciousness is a necessary component of that processing ability.

Personally, I don't think you actually need the robot part for the concept to be understandable and valid. It's a long debate, but you certainly do have a point to some degree. Totally fair.

However, in the specific context I'm referring to - The Observer, or The Experiencer - I find Dennett's pointing to be useful. If we can start to comprehend how a brain comes to understand itself as having a mental self, and then develops behaviour from that pretext, we can start to see which aspects of that mental self have, and which aspects lack, validity in hard-science terms.

Without this we end up like Chalmers or Tononi utterly convinced that there must be "someone who experiences consciousness." And we set out upon our magnum opus consciousness paper, starting from an untested core assumption that will scupper it before it's past the first paragraph.

Ultimately, the idea of a self comes not from some memetic illusion, but from the evolutionary process from which the cognitive processes in question arose. Evolution requires competitive interactions between replicating organisms. If there are no competing replicating organisms with varying traits, then there can be no evolution. Since evolution is a historical fact, organisms must exist.

I think you need to distinguish between physical and mental selfhood here, and how the former most likely led to the latter. Then mental selfhood needs to be broken down into its various aspects, to ensure that each is fit for purpose.
 
That's why I asked, in all seriousness, where the illusory-self-causing "memeplex" that you claimed "crawled into [someone's] head" crawled there from. Of course you couldn't answer, because the memeplex you describe couldn't and didn't come from anywhere else. It was there first. In biological history, it was there before the first nerve cell evolved.

In my case it seems to have come initially from other people with the memeplex. I'm lying there in my cot, and it's all flowing by, and then one day my mum and dad started calling me by a name. I picked up the idea of it pretty quickly, you know - what was expected, how to behave.
 
Dennett's center of narrative gravity essay rests on one very questionable assumption. He posits a robot equipped with sensors and actuators that allow it to interact with the world and other beings (e.g. asking for help if trapped), and he also posits it possessing the computational capability to create narrative, including autobiographical narrative, from that interaction. But he also declares it non-conscious, using phrases like "clanking machinery" to make the idea that the robot could possibly be conscious sound absurd.

He has, most likely, thereby entangled himself in a contradiction. He has essentially re-invented the p-zombie in mechanical form. That is not a coherent concept, if the computational process of creating narrative (including autobiographical narrative) from interaction with the world is what consciousness actually is. Or if consciousness is a necessary component of that processing ability.

Actually, thinking more, I don't think this criticism is valid. But gotta dash right now...
 
How does this keep the scientific method from being valid? It does not require a subjective self.

I am an empty house, just a body, no mind, just brain processes.

yet this organic body can use the method, no magic self needed.
 
The Man,

If it were in my power to stop your brain from attending to thoughts for, say, 15 seconds, whilst still retaining full consciousness, then I can assure you that, on resumption, you would say "Ah, okay! I see what you mean. None of this is actually happening to anyone."

It is certainly not within your power, anyone's power or anything's power, as not "attending to thoughts" while "still retaining full consciousness" is simply a contradiction. As I've already related, I've been semi-conscious and attending somewhat to thoughts (the lucid dreaming I mentioned before). On a regular basis I am unconscious and obviously not attending to thoughts. One of the things that can happen to me during that time is that someone or something can wake me up. Eventually, though, I generally just wake up by myself.


It isn't within my power, so I can refer you to Dan on the Centre of Narrative Gravity. Don't know if it helps any.

I doubt it. While I do have a passing interest in the neurological and informational theories of consciousness, I'm not particularly interested in perpetuating memelogical tropes on that subject, especially those given to self-inconsistency or just general inconsistency.
 
Fair enough. It's not a perfect analogy, but I can't think of a better one right now. The idea is that, if you can see the magician with a rabbit up his sleeve, you're less inclined to believe that rabbits can emerge out of thin air, and perform science based around that pretext.

Well, no analogy is perfect; that's why it is just an analogy. Again, what you have said yourself, and claim to have observed yourself, is the rabbit being constructed in the hat, by the hat. So not out of thin air, but a construct of self-referential subcomponents. An analogy I've been toying with lately is that of a corporation and a board of directors.

Often what happens is that people try to give all aspects of emergent brain activity equal ontological rights (validity). I'm pointing out that in the case of those aspects of mental selfhood we call "the Observer" and "the Experiencer" they can't acquire validity in this manner. They have validity only in social context, not consciousness research... except to communicate!

Again, the things that have "ontological rights (validity)" are the neurological impulses and the interactions of the different parts of the brain that contribute to that sense. The sense itself is the output, which by feedback can become part of the input. Those "ontological rights (validity)" don't diminish just because it can be output, input and part of the processing.
 
Actually, thinking more, I don't think this criticism is valid. But gotta dash right now...

Nick227, somebody or something is posting here using your name if, as you say, it isn't you then who is it?
 
Again, the things that have "ontological rights (validity)" are the neurological impulses and the interactions of the different parts of the brain that contribute to that sense. The sense itself is the output, which by feedback can become part of the input. Those "ontological rights (validity)" don't diminish just because it can be output, input and part of the processing.

I agree. The neurological activity that creates the illusion is valid. How could it be otherwise? It's just neurological activity. I totally agree. But this does not mean that the illusion created by the activity is valid for all contexts to which it might be applied.

Because what we're talking about here is neurological activity which suggests the presence of something that simply isn't there. And then proceeds to build behaviour on this unexamined assumption. There's nothing wrong with it. Given the restrictions of a monist system it can't get the job done any other way. But simply because something is useful and highly functional within certain parameters does not make it real.

If I hand you a piece of paper which reads "There's an ogre in the basement. Don't go there!" - the paper is real. The neurological activity which created the thought about an ogre, and which then allowed it to be translated onto paper is real. The neurological activity in your brain which allowed you to interpret and understand the piece of paper is real. But this doesn't mean that the ogre is necessarily real. To find that out you'll have to go down to the basement yourself.
 
Myriad said:
Dennett's center of narrative gravity essay rests on one very questionable assumption. He posits a robot equipped with sensors and actuators that allow it to interact with the world and other beings (e.g. asking for help if trapped), and he also posits it possessing the computational capability to create narrative, including autobiographical narrative, from that interaction. But he also declares it non-conscious, using phrases like "clanking machinery" to make the idea that the robot could possibly be conscious sound absurd.

He has, most likely, thereby entangled himself in a contradiction. He has essentially re-invented the p-zombie in mechanical form. That is not a coherent concept, if the computational process of creating narrative (including autobiographical narrative) from interaction with the world is what consciousness actually is. Or if consciousness is a necessary component of that processing ability.

Actually, thinking more, I don't think this criticism is valid. But gotta dash right now...

I looked at the passage linked earlier. Yes, he makes one odd-sounding statement on p3...

DD said:
That is, I am stipulating that this is not a conscious machine, not a "thinker." It is a dumb machine, but it does have the power to write a passable novel. (If you think this is strictly impossible I can only challenge you to show why you think this must be so, and invite you to read on; in the end you may not have an interest in defending such a precarious impossibility-claim.)

... which sounds a bit odd in the light of how he later refuted Chalmers' p-zombie hypothesis. But it doesn't affect what he's saying in this context; it's just phrased strangely when you consider what he said later.
 
Nick227, somebody or something is posting here using your name if, as you say, it isn't you then who is it?

Nearly 40k posts and this is the best you can do?! Dearie me!

Ah, I get it. Your memeplex is still running 1.0. That's the version that came out when Derek Parfit first published Reasons and Persons back in the mid 80s.

It would write to him and say - "OK, Mr Parfit, if there's no persisting self then who wrote your book, eh? Gotcha there, haven't I?"
 
How does this keep the scientific method from being valid? It does not require a subjective self.

I am an empty house, just a body, no mind, just brain processes.

yet this organic body can use the method, no magic self needed.

Yes, for sure. We can measure away, as I pointed out many posts ago. But without this seemingly hard-boundaried observer, the significance of science as a means to establish what is true, what is real, is inevitably diminished.

It's like a man standing inside a large pipe and looking one way. He can see the walls of the pipe stretching out before him and he's trying to work out what is really true, what is really real. He believes that the pipe is closed off behind him, that he's leaning on something firm, something established - himself. But then one day he turns around and realizes that the pipe is not closed, but open both ways. He's not leaning on anything and suddenly has no clue who he actually is any more. He thought he was something that he now sees he cannot be. Science can still be an interesting tool for him. He can use it to design bridges, make medicines, all sorts of useful things. But to establish what is true? He laughs at the thought.
 
Yes, for sure. We can measure away, as I pointed out many posts ago. But without this seemingly hard-boundaried observer, the significance of science as a means to establish what is true, what is real, is inevitably diminished.
When did truth become the objective of the scientific method?
The brain, language and science are about models.
The scientific method is about determining which factors are reflected in the actions of apparent reality.

Truth would be part of a false dichotomy as regards science and the scientific method, which is about discerning the most accurate models.

Who brought truth into the scientific method?
 
Your argument basically boils down to...something you call 'you'...somehow (you don't explain how) has the capacity to understand that something called 'you' does not exist.

Leaving aside the obvious absurdity of such an observation...what is it that exists if 'you' do not?

And this has always been the death spiral of solipsism.

Bob says there is no proof he objectively exists. Either Bob objectively exists to make the statement, which proves it false, or Bob doesn't objectively exist, so he can't make the statement.
 
And this has always been the death spiral of solipsism.

Bob says there is no proof he objectively exists. Either Bob objectively exists to make the statement, which proves it false, or Bob doesn't objectively exist, so he can't make the statement.
Cogito ergo sum.

I exist in a matrix. The nature of that matrix is debatable, but my existence is not.
 
Nick, is your point that without an observer there is no way to sift through the good/bad evidence that any basic science requires you to do? Something along those lines?
 
Personally, I don't think you actually need the robot part for the concept to be understandable and valid. It's a long debate, but you certainly do have a point to some degree. Totally fair.

However, in the specific context I'm referring to - The Observer, or The Experiencer - I find Dennett's pointing to be useful. If we can start to comprehend how a brain comes to understand itself as having a mental self, and then develops behaviour from that pretext, we can start to see which aspects of that mental self have, and which aspects lack, validity in hard-science terms.

Without this we end up like Chalmers or Tononi utterly convinced that there must be "someone who experiences consciousness." And we set out upon our magnum opus consciousness paper, starting from an untested core assumption that will scupper it before it's past the first paragraph.


The flaw you acknowledge in Dennett's argument means that he didn't actually succeed in avoiding an infinite regress. If the mere "clanking computer" (did he have any idea how computers work when he wrote that?) could generate autobiographical narrative without also generating a sense of self, then so could a brain. But that amounts pretty much to assuming the conclusion.

I think you need to distinguish between physical and mental selfhood here, and how the former most likely led to the latter. Then mental selfhood needs to be broken down into its various aspects, to ensure that each is fit for purpose.


What is the distinction between the physical self and the mental self? What part of the mental self is not physical? Since we're talking about strict materialism here, the answer is obvious. The mental must be a subset of the physical, and all mental processes are therefore part of the organism. Either part of its form, or part of its functioning.

Instead of focusing on mental selfhood, consider mental models of the world. How they might work, what they might include, and how they might be useful to maintaining the existence of an organism.

Consider, for instance, a jellyfish that (in order to thrive in its environment) must swim upward to shallower water at night and downward into deeper water in the daytime. Would the program for this mechanism require, or benefit from, a world model that included the self? Not really. "Undulating motions move me through a stratified space toward where conditions are better" is a possible model that includes a self, but the simpler model "undulating motions alter the conditions to make them better" works just as well.

The same might be true of, say, a turtle that hatches on the beach and has to crawl to the water to survive. "Move so as to make the water closer and danger-things farther away" might work as a model, but a model that actually included spatial positions of things (and therefore, necessarily, a concept or "state variable" of self position) might also be enough of an improvement to be worth the neural overhead.

Now consider a mother bird that must collect food and bring it back to her hatchlings for them to have a chance to survive. This requires, obviously, a much more complex world model. She must find food, be aware of other birds (try to feed on what they're feeding on, but try not to let them feed on what you're feeding on), be aware of other hatchlings not her own (don't feed those!), and navigate around. A self-less world model (for instance, a massive memory table matching a vast number of possible long sequences of wing movements with the resulting changes in conditions, expanding upon the jellyfish technique) would become far too unwieldy. The overhead of a self inclusive model with elements such as self position and self motion is not only worth it but necessary. The more complex the model of the world becomes, the more the model of the self arises as figure versus ground. And this is before we add such cognitive elements as memory, planning, or language.
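The trade-off sketched above - a self-less lookup over whole action sequences versus a compact model with a self-position state variable - can be made concrete with a toy calculation. This is purely illustrative; the four-action repertoire and the counting scheme are my own assumptions, not anything from the posts:

```python
# Toy comparison (illustrative only): a "self-less" model that memorises
# every possible action sequence versus a self-inclusive model that keeps
# one transition rule per action plus a single self-position state.

ACTIONS = ["left", "right", "up", "down"]  # hypothetical primitive moves

def lookup_table_entries(sequence_length: int) -> int:
    """Entries a self-less model needs: one per possible action sequence."""
    return len(ACTIONS) ** sequence_length

def state_model_entries(sequence_length: int) -> int:
    """Entries a self-inclusive model needs: one transition rule per
    action, plus the self-position state variable (independent of length)."""
    return len(ACTIONS) + 1

for n in (1, 5, 10):
    print(f"sequence length {n}: lookup table {lookup_table_entries(n):>7}, "
          f"state model {state_model_entries(n)}")
```

Run as written, the lookup-table count grows exponentially (over a million entries by length 10) while the state-based model stays constant - which is the "figure versus ground" point in miniature: past a certain complexity, carrying a self variable is cheaper than memorising the world.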
 
It's like a man standing inside a large pipe and looking one way. He can see the walls of the pipe stretching out before him and he's trying to work out what is really true, what is really real. He believes that the pipe is closed off behind him, that he's leaning on something firm, something established - himself. But then one day he turns around and realizes that the pipe is not closed, but open both ways. He's not leaning on anything and suddenly has no clue who he actually is any more. He thought he was something that he now sees he cannot be. Science can still be an interesting tool for him. He can use it to design bridges, make medicines, all sorts of useful things. But to establish what is true? He laughs at the thought.


Suppose I were that man in the pipe, and you were there with me. Because your mental model is more complete and more accurate (you know about and can make use of the opening behind us), you would be able to amaze me with your ability to disappear and reappear at will. Simply take a step backward, and I could no longer see you or imagine where you might have gone. Unless I update my own mental model and turn around, your ability would be a mystery that I could neither duplicate nor understand. That makes your mental model superior.

So, how does this analogy apply? What does your superior understanding of cognition allow you to do, that those of us mired in the useless notion of a self cannot?

(I hope it's something that can be used to fight crime!)
 
