Materialism - Devastator of Scientific Method! / Observer Delusion

I'm not describing consciousness. I'm pointing out that under monist materialism there can't be an actual observer or a point of observation.
How did you point this out? Your argument seems to come down to a repetition of the rejection of homunculus theory.

Consciousness is the reduction in uncertainty that develops when a larger system creates more information than its component subsystems.
I'm not following this definition. Information gain is the same as a reduction in the uncertainty of information about a system. The suggestion about more information than its component parts doesn't make any sense to me.

For example, let's say that we had three coins that could be either heads or tails. Each one has two values which we can represent with one bit. With three coins that would be three bits or eight possibilities. Sort of. It actually depends on how you use the information.

If we only care about the order of the coins then there are six possibilities.

coin1, coin2, coin3
coin1, coin3, coin2
coin3, coin1, coin2
coin3, coin2, coin1
coin2, coin3, coin1
coin2, coin1, coin3

If we only care about total heads and tails then there are four possibilities.
3 heads, 0 tails
2 heads, 1 tail
1 head, 2 tails
0 heads, 3 tails

If we care about the order of heads and tails then there are eight possibilities.
head, head, head
head, head, tail
head, tail, head
head, tail, tail
tail, head, head
tail, head, tail
tail, tail, head
tail, tail, tail

However, if we care about both the order of the coins and whether each is heads or tails, then we have 6 x 8 = 48 possibilities.

head1, head2, head3
head1, head2, tail3
head1, tail2, head3
head1, tail2, tail3
tail1, head2, head3
tail1, head2, tail3
tail1, tail2, head3
tail1, tail2, tail3

head2, head1, head3
...

head3, head2, head1
...

head1, head3, head2
...

head2, head3, head1
...

head3, head1, head2
...


We can see that we can have more information with three coins together than with three coins separately. So, are you arguing that three coins represent a conscious system?
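The counting above can be checked mechanically. Here is a minimal sketch in Python (the coin names are just illustrative labels):

```python
from itertools import permutations, product
from collections import Counter

coins = ["coin1", "coin2", "coin3"]

# Orderings of the three coins, ignoring faces: 3! = 6
orderings = list(permutations(coins))

# Head/tail totals, ignoring order: 4 distinct counts (0 through 3 heads)
totals = Counter(flips.count("head") for flips in product(["head", "tail"], repeat=3))

# Ordered head/tail sequences with the coins in a fixed order: 2^3 = 8
sequences = list(product(["head", "tail"], repeat=3))

# Both coin order and faces together: 6 * 8 = 48
combined = [(order, flips) for order in orderings for flips in sequences]

print(len(orderings), len(totals), len(sequences), len(combined))  # 6 4 8 48
```

The point being made survives the check: which "possibility space" you get depends entirely on which distinctions you decide to care about.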
 
Yes, come on, Barehl! I'm actually interested to hear your theory. I'm not going to shoot it down sentence by sentence like some do. I'm happy to let it accrue a reduction of uncertainty as it settles in.
I'm not worried about your shooting it down; it has a great deal of supporting evidence. But, I haven't published it yet so I can't talk about it in detail. I'm still working on whether knowledge theory excludes NCS like Turing machines from being cognitive. Suspecting it is one thing; proving it is another.

If GCS theory is right then General Artificial Intelligence is impossible. However, if the Computational Theory of Mind is right then there is no such thing as a General Cognitive System; this would simply be a class of computation.
 
Nick227, to tie things into a coherent narrative, perhaps you are working on one of the models of consciousness presented in this Scholarpedia entry? Identifying which would help focus what you mean about the missing observer. I'm still working through some of the more novel entries. I'd forgotten the Coalition of Neurons, which may surprisingly bring back the homunculus, or its functional equivalent (see entry). Good for a ponder moment.

As for non-computability, iirc that only arises in the Penrose-Hameroff Orch OR model (link is to cached version so you don't get a pdf). I love and hate this model. I love the potential microtubule connection; that presents a way for single-cell organisms to learn, something I'd been looking for. Hameroff's background in anesthesia would seem to put him in a unique position to focus on how to "turn off" consciousness, which of course needs to be defined in order to work on the problem. But he goes off the deep end eventually.

Let's say the research indicating quantum effects in photosynthesis bears out, and P-H are correct that QM effects are operative in microtubules. Even if so, this does not imply, as they argue, that the information is in the superimposed bubbles. Rather, far more likely the values are stored and retrieved via this mechanism, or marshalled for use. So, like photosynthesis, no mysterious outcome from using QM, just the macro/classical results. Further, there is no indication that this is the right scale at which to look for consciousness.

Back to the Scholarpedia, have to love this introduction (bold added):
Models of consciousness should be distinguished from so-called neural correlates of consciousness (Crick & Koch 1990). While the identification of correlations between aspects of brain activity and aspects of consciousness may constrain the specification of neurobiologically plausible models, such correlations do not by themselves provide explanatory links between neural activity and consciousness. Models should also be distinguished from theories that do not propose any mechanistic implementation (e.g., Rosenthal’s ‘higher-order thought’ theories, Rosenthal 2005). Consciousness models are valuable precisely to the extent that they propose such explanatory links (Seth, 2009). This article summarizes models that include computational, informational, or neurodynamic elements that propose explanatory links between neural properties and phenomenal properties.

The manner in which I've been stating this in-thread is that cognitivism is behavioral and a functional description, while neurology provides a physical account. These need to be married in theory in order for consciousness to be explained. Prior to that, there is all the magnificent detail cracking the code might reveal (in terms of scientific wow, not woo).
 

Descartes:

I think therefore I am

Nick

--- think therefore ---am.
 
Solipsism is the belief that only one's own mind exists. You go a step further and say that no mind exists.

No, I am not saying this. I'm saying that no observer of mind exists; this is a highly favoured illusion.

I'm saying that under monist materialism there simply cannot exist a point of observation.
 
So how's your exploration going so far?

Found any actual “huge fallacies” yet?

If so, has your means of deriving scientific understanding of our world been improved?

If not, has your means of deriving scientific understanding of our world been improved?

I think the main thing I've recently noticed for myself is that there cannot exist a "point of observation" in a monist materialist system. The best that can be achieved is to construct a representation that suggests a point of observation.
 
You are evading the question: Are you really in any doubt what part of the world is YOU, and what is not?

I'm pointing out that the distinction is not a priori, merely the result of adaptation.


Finally, you might want to reexamine your own argumentation. IF our locus could shift from the eyes (and other senses), then it would NOT be a mere material brain function, and then there WOULD be an observer entity.

No! You are still assuming an observer behind the eyes, someone that is seeing. You can't have a point of observation in monist materialism. It's a physical impossibility. The brain merely constructs neural representations to suggest to itself that a point of observation exists, again because of the adaptive advantage offered.

What can be seen by examining both of these aspects of phenomenal reality is that there is a huge amount of adaptively-advantageous construction going on. And it is so effective that the brain simply assumes that this is how things are. It assumes that this perspective is a priori, not constructed. And the brain of the scientist is the same! It proceeds from assumptions.
 
Back to the Scholarpedia, have to love this introduction (bold added):

Models of consciousness should be distinguished from so-called neural correlates of consciousness (Crick & Koch 1990). While the identification of correlations between aspects of brain activity and aspects of consciousness may constrain the specification of neurobiologically plausible models, such correlations do not by themselves provide explanatory links between neural activity and consciousness. Models should also be distinguished from theories that do not propose any mechanistic implementation (e.g., Rosenthal’s ‘higher-order thought’ theories, Rosenthal 2005). Consciousness models are valuable precisely to the extent that they propose such explanatory links (Seth, 2009). This article summarizes models that include computational, informational, or neurodynamic elements that propose explanatory links between neural properties and phenomenal properties.

The manner in which I've been stating this in-thread is that cognitivism is behavioral and a functional description, while neurology provides a physical account. These need to be married in theory in order for consciousness to be explained. Prior to that, there is all the magnificent detail cracking the code might reveal (in terms of scientific wow, not woo).

Hi Hlafordlaes,

One of the ramifications of what I'm pointing out, regarding the observer illusion, is that there may well not be an explanatory gap.

We tend to look at the discovery of neural correlation and say things like - OK, great. But how do you get from that to me actually seeing? Or - but my life is so vivid and intense how can mere neural activity create that?

In both these cases the assumption is that there is an observing self. The assumption is that the pipe is closed behind us.

But this perspective is not a priori. Rather it is constructed by the brain because of the huge adaptive advantage it offers.
 
I'm not following this definition. Information gain is the same as a reduction in the uncertainty of information about a system. The suggestion about more information than its component parts doesn't make any sense to me.

[...]

We can see that we can have more information with three coins together than with three coins separately. So, are you arguing that three coins represent a conscious system?

Hi Barehl,

Well, I was just quoting the idea behind Integrated Information Theory, which seems to be a popular model of consciousness these days in neuroscience. I'm not so into math myself.

With your coins, I don't know. With, say, a triangle, I guess you could say that it contains more information than the three sticks comprising it. But I don't see that this makes a triangle conscious. The extra information would seem to exist in the brain. But I'm not sure here.
 
What is paying attention to them?

You have to look. If you just grab hold of questions you only reinforce the idea of a questioner.

Besides, attention is not what it seems. It's just the brain amplifying information streams.
 
Nick appears to wish to reverse evolutionary cause and effect, by claiming that systematic mis-perceptions of reality occur because they're (in unexplained or implausible ways) "adaptive."

I'm saying that a great deal of what seems to be a priori real about our visual field only appears that way because of adaptive advantage.

In actuality, it's adaptive to perceive reality in ways that accurately reflect or usefully model what reality contains.

I think you would struggle to substantiate that statement. It is adaptive to perceive reality in ways that allow us to survive and procreate.

Don Hoffman claims that in cases where the fitness of perceptions can be tested against their truth... fitness always wins.
 

In this case, stop attaching to questions and see if there is any longer a questioner.

The brain creates the sense of there being someone doing these things through constantly attaching to thinking. Asking more questions does not help to find out whether there actually is or is not a questioner. You just stop.
 
In this case, stop attaching to questions and see if there is any longer a questioner.

The brain creates the sense of there being someone doing these things through constantly attaching to thinking. Asking more questions does not help to find out whether there actually is or is not a questioner. You just stop.


Are you saying that it is the brain that is paying attention to the thoughts?
 
Are you saying that it is the brain that is paying attention to the thoughts?

No. As I said before, attention is actually just signal amplification.

I'm saying there is no one that hears thoughts. But paying attention to them creates the sensation and belief that there is. This is how a processor creates the illusion of mental selfhood.
 
Nick227, to tie things into a coherent narrative, perhaps you are working on one of the models of consciousness presented in this Scholarpedia entry? Identifying which would help focus what you mean about the missing observer.

I checked out the linked article, thanks. I'm familiar with most of the entries. I think what I'm saying applies to most of them really.

I'm still working through some of the more novel entries. I'd forgotten the Coalition of Neurons, which may surprisingly bring back the homunculus, or its functional equivalent (see entry). Good for a ponder moment.

Hmmm...

As for non-computability, iirc that only arises in the Penrose-Hameroff Orch OR model (link is to cached version so you don't get a pdf). I love and hate this model. I love the potential microtubule connection; that presents a way for single-cell organisms to learn, something I'd been looking for. Hameroff's background in anesthesia would seem to put him in a unique position to focus on how to "turn off" consciousness, which of course needs to be defined in order to work on the problem. But he goes off the deep end eventually.

I've never been much impressed with Stu Hameroff personally.

Let's say the research indicating quantum effects in photosynthesis bears out, and P-H are correct that QM effects are operative in microtubules. Even if so, this does not imply, as they argue, that the information is in the superimposed bubbles. Rather, far more likely the values are stored and retrieved via this mechanism, or marshalled for use. So, like photosynthesis, no mysterious outcome from using QM, just the macro/classical results. Further, there is no indication that this is the right scale at which to look for consciousness.

I don't think we're going to find answers at the quantum level. Too much is explained at a neural level.

For me the issue is not the science. It's the grasping of just how counter-intuitive the reality of the situation likely is. We seek to explain phenomena that almost certainly do not exist, such as the observer. This to me is the whole issue with qualia. Qualia are not the real issue in the so-called Hard Problem. The real issue is actually not even stated, it's just taken as a given. It's that it seems like qualia are happening to someone.

Our language also doesn't help sometimes. We say things like "can the computer be conscious? Does it experience consciousness?" I don't think it helps to ascribe consciousness in this way.

Currently, I have to admit that Giulio Tononi might be onto something. I thought the way he phrased his abstract meant he was an idiot but maybe I was being hasty. If the universe is a vast informational system, obeying the 2nd law, then consciousness might be appearing to emerge from the level at which there is the greatest rebuttal of that law - the level at which the system properties of the whole most exceed the system properties of individual subsystems. Which is currently that of the human brain.
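For what it's worth, one simple way to put a number on "the whole exceeding its subsystems" is total correlation (multi-information): the sum of the parts' entropies minus the joint entropy. This is not Tononi's Φ, which is defined over partitions and is much more involved, but it gives the flavour. A toy sketch, assuming three coins constrained so the number of heads is always even (the constraint and names are purely illustrative):

```python
import math
from collections import Counter
from itertools import product

def entropy(dist):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Toy system: three binary "coins" constrained to an even number of heads (1s).
# Each of the allowed states is equally likely.
states = [s for s in product([0, 1], repeat=3) if sum(s) % 2 == 0]
p = 1 / len(states)
joint_entropy = entropy([p] * len(states))  # 4 equally likely states -> 2 bits

# Each coin on its own is heads half the time -> 1 bit apiece.
marginal_entropies = []
for i in range(3):
    counts = Counter(s[i] for s in states)
    marginal_entropies.append(entropy([c / len(states) for c in counts.values()]))

# Total correlation: how far the parts' summed entropy exceeds the whole's.
total_correlation = sum(marginal_entropies) - joint_entropy
print(total_correlation)  # 1.0 bit
```

Viewed separately the coins look like 3 bits of uncertainty, but the constraint means the whole system only carries 2; that 1-bit gap is a crude measure of how "integrated" the system is.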
 