
Materialism - Devastator of Scientific Method! / Observer Delusion

I'm thinking that the "self is an illusion" gambit equally invalidates any claim to perception, evidence or experience, scientific or otherwise.

That which is found to have evidence and be repeatable, is illusory evidence and repeatability. That which is believed without evidence, is illusory belief without evidence.

Still doesn't excuse "woo", be it chakras, homeopathy, reiki, auras, or belief that crystals do things by being around, other than attract dust.
 
We are not. There is not a self that is conscious. There is consciousness, and within that consciousness runs a programme that makes it seem that there is someone that is experiencing consciousness.

This is just basic stuff really.

We don't know for sure that it necessarily does. Consciousness is a 3D workspace that certainly seems to emerge from brain activity. But we don't know how the brain finally is. We know what it looks like in the workspace. We know how it functions, in terms of the workspace. But finally this could mean virtually nothing. The answer could be completely left-field. Though as brain imaging develops, this one could get easier.

We don't know where consciousness is. Consciousness is actually a type of meta-space if you think about it from the perspective of the processor that is generating it. This truth many people don't grasp and so they tie together the brain and consciousness in a way that doesn't further understanding.

When you say something like "the brain is conscious" personally I think you have to be careful. The brain that is conscious is not the brain that appears in consciousness.

There's nothing it's like to be you. There's no such thing as a persisting self. This bit is just easy if you develop a certain level of subjective awareness. You can watch the illusion being created.

Dennett's a genius who took a huge position in that book. But he himself openly states that he "doesn't do neurons." He's not so much trying to tie consciousness to the brain, more sharing his insights around functionality. Dennett is a great subjective observer.

If you can clearly see the illusion of the Observer for what it is, then the HPC is immediately invalidated. It's completely gone, along with roughly half of your points above.


A pile of nonsense that does nothing more than confirm the degree to which these subjects lack any firm basis in empirical science. Which is the problem of course.

...but the 'There's no such thing as a persisting self' silliness deserves special mention...especially when, in the very next line, you refer to some mysterious quantity you refer to as 'you' who watches the illusion of 'you' being created. An illusion watching an illusion. Maybe you'd better go back to 'basic stuff' school.
 
'We are not. There is not a self that is conscious. There is consciousness, and within that consciousness runs a programme that makes it seem that there is someone that is experiencing consciousness.

This is just basic stuff really.'

Yes, you've just run into the Homunculus Problem. Clever shifting of the meaning of the word 'consciousness' into 'someone' doesn't make it go away.

First you say there is consciousness. Without further definition I have to assume from the context that you mean something that ultimately comes down to brain-processing. And you say that this brain-processing runs a program much like an ordinary computer program that creates a 'someone'. You've shifted the Homunculus Problem into the concept of the 'someone' computer program, but without the ability to show me the actual source code, the Homunculus Problem remains. The 'someone' computer program is another thing that just ends in infinite regression; you will endlessly shift the Homunculus further down the line.

Dennett's argument is just a clever way to make you think that you've gotten rid of the Homunculus, but on deeper analysis, he's still in there.

Homunculus Problem, Infinite Regression, basic stuff, really :)
 
First you say there is consciousness. Without further definition I have to assume from the context that you mean something that ultimately comes down to brain-processing. And you say that this brain-processing runs a program much like an ordinary computer program that creates a 'someone'.

No. It creates the sense of there being someone. There's a huge difference.

What you are arguing is akin to insisting that rabbits can manifest out of thin air because you've seen a guy in a black suit drag one out of a hat.

Homunculus Problem, Infinite Regression, basic stuff, really :)

The so-called homunculus problem only exists while you believe that someone is reading these words. When you can witness the constant creation and dispersal of the mental self, you can see straight through the illusion.

A memeplex has crawled inside your head, TA, and succeeded in convincing your brain that it's you. What will it come up with now?

Yours

Rogue Memeplex #1
 
The "self is an illusion" gambit is uninteresting and still doesn't solve any of the problems, because subjective experience and consciousness don't go away, even if the self is an illusion.

That's correct. But they're no longer happening to anyone. The power is gone out of them. And the psyche is usually far more concerned with who it is that is actually looking, rather than brain-in-a-vat, p zombs or the HPC.

And I never said "the brain is conscious", so please quote me accurately.

Look again FB, I said "like". I don't like to misquote either.
 
'Yes, that is the Great Fantasy of the Skeptic Mindset. Truth without metaphysics. Truth by Thought Alone. Amusingly enough it's a fantasy that materialism itself can completely overthrow. Sadly most skeptics don't have the awareness to see this.'

Completely overthrow? Then show me the source code of your 'Observer' computer program.

'You have to get past the Observer first. Once you've done that you're at first base. Then, frankly, the HPC is the least of your mind's worries!'

You haven't got past the Observer yourself. The Homunculus is still lurking. Your Observer still requires an Observer, which requires an Observer, etc. Just try to write the program, even if in pseudo-code.
 
'No. It creates the sense of there being someone. There's a huge difference.'

Then show me the 'sense-of-there-being-someone' algorithm. How do we get this _sense_? No further shifting of the Homunculus, please.
 
I thought about this for a while. Of course, a program requires some outside agent to do the programming (similar to the outside agent that I claim gives meaning to whatever a simulation is doing). You can get around that by having the environment do the programming for you, but I think you run into a problem.

So let's say we have a very powerful CPU and attach some appendages and sense organs to it, and turn it loose in a hostile environment. Every time it "dies", it tries out different sub-routines. Given enough time, it will have "evolved" and developed very efficient sub-routines for defense, resource gathering, and problem solving. If the environmental challenges are harsh enough, it might become very intelligent.

But aren't we talking about a P-zombie? At what point in all these evolving sub-routines does it develop consciousness? Maybe integrated information theory is correct and anything that integrates information (at some threshold level I guess) is conscious.
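Just to make that thought experiment concrete, here's a minimal sketch of the evolve-by-dying loop in Python. It's only an illustration: the "hostile environment", the encoding of the sub-routines as a handful of numbers, and the survival_time fitness function are all made up for the example, not taken from any real ALife or robotics setup.

```python
import random

# Toy "hostile environment": a sub-routine is just a list of numbers
# (an illustrative stand-in for defense/foraging/problem-solving behaviour).
# Fitness = how long the agent "survives" before its responses fail.
TARGET = [0.2, 0.9, 0.5, 0.7, 0.1]   # hidden demands of the environment (assumed)

def survival_time(subroutines):
    # The closer each sub-routine is to what the environment demands,
    # the longer the agent lasts. Purely illustrative.
    error = sum(abs(s - t) for s, t in zip(subroutines, TARGET))
    return 1.0 / (1.0 + error)

def mutate(subroutines, rate=0.1):
    # Each "death" spawns a variant that tries slightly different sub-routines.
    return [min(1.0, max(0.0, s + random.uniform(-rate, rate))) for s in subroutines]

def evolve(generations=200, population=30):
    pool = [[random.random() for _ in range(len(TARGET))] for _ in range(population)]
    for _ in range(generations):
        scored = sorted(pool, key=survival_time, reverse=True)
        survivors = scored[: population // 5]            # the rest "die"
        pool = [mutate(random.choice(survivors)) for _ in range(population)]
        pool[: len(survivors)] = survivors               # keep the best as-is
    return max(pool, key=survival_time)

if __name__ == "__main__":
    best = evolve()
    print("evolved sub-routines:", [round(s, 2) for s in best])
    print("survival score:", round(survival_time(best), 3))
```

Nothing in that loop looks any more conscious at generation 200 than at generation 1, which is exactly why the P-zombie question above has bite.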

Sorry been busy, though not entirely self-absorbed.

Yep, perhaps integrated information theory is correct and it is just a matter of complexity between the cross-communicating sub-routines. We do know that there are different levels of consciousness, and that even a conscious person can lose their sense of agency (at least in part). A P-zombie that doesn't know it's a P-zombie? With the distinction of external appearance of self-awareness but no internal self-awareness lost, and the system having an internal sense of self-awareness (however it is constructed), the issue of that particular “P” philosophical constraint becomes moot.
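For what it's worth, here is one toy way to put a number on "complexity between the cross-communicating sub-routines": the mutual information shared by two sub-systems, estimated from samples of their joint states. To be clear, this is not IIT's phi (phi is defined over partitions and cause-effect structure and is far more involved); it's just an assumed stand-in, with an invented joint distribution, to show what "some threshold level" of integration could even be measured against.

```python
import math
from collections import Counter

def mutual_information(samples):
    """Mutual information (in bits) between two sub-systems A and B,
    estimated from joint samples [(a, b), ...].
    A toy proxy for 'integration' -- not IIT's phi."""
    n = len(samples)
    joint = Counter(samples)
    pa = Counter(a for a, _ in samples)
    pb = Counter(b for _, b in samples)
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        mi += p_ab * math.log2(p_ab / ((pa[a] / n) * (pb[b] / n)))
    return mi

# Two cross-communicating sub-routines: B mostly copies A (high integration).
coupled = [(0, 0)] * 45 + [(1, 1)] * 45 + [(0, 1)] * 5 + [(1, 0)] * 5
# Two independent sub-routines: no integration to speak of.
independent = [(a, b) for a in (0, 1) for b in (0, 1)] * 25

print("coupled:    ", round(mutual_information(coupled), 3), "bits")
print("independent:", round(mutual_information(independent), 3), "bits")
```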
 
'No. It creates the sense of there being someone. There's a huge difference.'

Then show me the 'sense-of-there-being-someone' algorithm. How do we get this _sense_? No further shifting of the Homunculus, please.

I'm guessing it's pretty much like the "sense of there being someone else."

If my mind is able to produce agency when stimulated by moving pictures on a television screen - human is good, but cartoons work too - and if I can hold a model of someone dead in my head and thoughts (so that the question, "What would dad have done?" makes sense) and if I can attribute agency and personality to animals ("Fido is just anxious because it's raining"), then it seems I have the ability, not just to create an observer from the data coming in about "me," but about a great many things in my environment, at least some of which are illusions. This is so ingrained that we don't even think about it much; we just do it without effort.

Does each of these attributions also require a homunculus: of my dog, of my dead father, of Bugs Bunny? If not, why not, since they act in the same fashion and suffer the same paradoxes already described? Is it so surprising, with this extraordinary power to model the behavior and personality of others, that I should do exactly the same thing for my most intimate companion, my own body? That I should impute agency where there is none? All that is required is a machine which generates mental models, of others and of my own internal state.
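A minimal sketch of what such a "machine which generates mental models" might look like, in Python; purely hypothetical, with invented names and a crude most-frequent-action prediction rule. The only point it illustrates is that the same machinery models Fido, Dad, Bugs and "me", with nothing privileged about the last one.

```python
from collections import defaultdict

class AgentModel:
    """A predictive model of *some* agent, built only from observed
    (situation -> action) history. The modelled agent can be Fido,
    Dad, Bugs Bunny -- or 'me'. Same machinery in every case."""
    def __init__(self, name):
        self.name = name
        self.history = defaultdict(lambda: defaultdict(int))

    def observe(self, situation, action):
        self.history[situation][action] += 1

    def predict(self, situation):
        seen = self.history.get(situation)
        if not seen:
            return "no idea -- guess"
        # Best guess = whatever this agent has done most often before.
        return max(seen, key=seen.get)

models = {name: AgentModel(name) for name in ("Fido", "Dad", "Bugs", "me")}
models["Fido"].observe("raining", "acts anxious")
models["Dad"].observe("car breaks down", "fixes it himself")
models["Bugs"].observe("Elmer hunts him", "dresses as a blonde")
models["me"].observe("situation X", "what I remember doing")

for name, model in models.items():
    situation = next(iter(model.history))
    print(name, "in", repr(situation), "->", model.predict(situation))
```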

And why should I trust that I have some special perspective? I can say I know how Abraham Lincoln would behave in situation X with as little force as I can say how I would behave in situation X. It works just as well if I am merely guessing in either case. I can guess well, if I know how Lincoln has behaved in the past, and guess well if my memory about my own actions is accurate. But honestly? I don't actually know until I do it. Same with Lincoln. Same with my dog or Bugs. Maybe this time he won't trick Elmer Fudd by dressing like a hot blonde, but attack the bastard with a knife.
 
'Yep, perhaps integrated information theory is correct and it is just a matter of complexity between the cross-communicating sub-routines.'

The problem with this is that when too many things are considered conscious then the concept starts to lose its meaning, because it means too much. It becomes redundant and should be left out of the theory altogether.

As evidenced by this sentence:

'With the distinction of external appearance of self-awareness but no internal self-awareness lost, and the system having an internal sense of self-awareness (however it is constructed), the issue of that particular “P” philosophical constraint becomes moot.'

Redundant and should be left out of the theory altogether.

But why, then, are we having this discussion?
 
I'm thinking that the "self is an illusion" gambit equally invalidates any claim to perception, evidence or experience, scientific or otherwise.

That which is found to have evidence and be repeatable, is illusory evidence and repeatability. That which is believed without evidence, is illusory belief without evidence.

Still doesn't excuse "woo", be it chakras, homeopathy, reiki, auras, or belief that crystals do things by being around, other than attract dust.


I'm not sure the invalidation claim is necessarily correct. Doesn't even an "illusion" arise from "perception, evidence or experience, scientific or otherwise"?

While aspects of one's sense of self may be illusory, the sense of a singular and consistent self, and the self itself, is a construct of reinforced and suppressed sub-selves (so to speak), which makes it no more of an illusion than a car or building is an illusion just because they are constructions of applicable sub-components.
 
+marplots I think there's a difference between the operational definition of self or intelligence or observation, which can be, at least in part, simulated by computer programs, and questions like the HPC. I've argued that questions like the HPC boil down to questions about being, existence itself, when taken far enough. That's where the Homunculus ends: when you start asking questions about what it means to exist. The question starts to devolve into 'what is the sense of there being anything at all?' or something like that.
 
What I'm arguing, to be clear, is that unless the materialists can present a truly Homunculus-free Observer-Illusion computer program, one can still legitimately argue in favour of a non-materialistic perspective, within the boundaries I've outlined.
 
Completely overthrow? Then show me the source code of your 'Observer' computer program.

You're running it right this moment. Constructing that sentence, paying attention to those thoughts, did it not seem like someone was doing those things whilst they were happening?

That's the "I" programme. The Observer is an add-on. You're running Observer 2.0, upgraded to include vaguely convincing sounding arguments. Assuming you don't actually look, that is.


You haven't got past the Observer yourself. The Homunculus is still lurking. Your Observer still requires an Observer, which requires an Observer, etc. Just try to write the program, even if in pseudo-code.

That's the memeplex running now, trying to create a convincing argument that the illusion it's creating is real. Let the thoughts pass by and it's finished.
 
You haven't got past the Observer yourself. The Homunculus is still lurking. Your Observer still requires an Observer, which requires an Observer, etc. Just try to write the program, even if in pseudo-code.

Yup, that's the memeplex talking.

You don't need an observer to see something. Visual information is.

This is not complicated. Just hugely counter intuitive. It goes right against everything the brain has learned since early infancy.

But just look at a brain. It's a bunch of neurons and glia. How is it going to create an observer? Processing is.
 
That's what observation is. Stuff exists. You see it. You are observing stuff.

"Observation" is a verbal construct. Visual processing IS

We translate raw experience into useful constructions via thought. But this usefulness is derived from selective pressure. If a construction helps us fulfill primal needs, then it's favoured. That doesn't mean it's real.

You are watching a magician and believing that rabbits really can emerge out of thin air.
 
