
Do Materialism and Evolution Theory Undermine Science?

An apt question indeed!! But, it is not idealism, RD. It is pretty much straight-down-the-line modern materialism. You may recall some months ago, on another thread, me pointing out that when you start to investigate selfhood, beneath the level of the functioning independent organism, it gets weird. It gets very weird indeed. Things seem to be deeply, deeply counter-intuitive. You refused to accept this, despite me urging you to read Blackmore, Dennett, and others. I figure you clearly are interested at some level or other, elsewise why would you continue the dialogue? I'm just wondering if you will ever realise that actually it is pretty strange.
I've read Blackmore, Dennett, and others. Where's the weird?

This is the problem. It's not that things are necessarily complicated. It is that they are deeply, deeply counter-intuitive. Scientists have spent decades trying to find "places in the brain where things become conscious" and similar. They have spent decades trying to create and sustain a model that matches what seems intuitively correct. They haven't got very far. This, as I think most now realise, is because the models that work are deeply counter-intuitive. They directly challenge the brain of the scientist working on them, for what is related in the model must also be going on inside that brain. All the defences that the notional self uses to protect itself are inevitably activated to deal with any threats the model creates in the brain of the investigator, and this is the real issue.
No.
 
What if this wasn't true? What if there is a tremendous overlap in the tasks many parts of the brain are supposed to do? What I think we are finding is that there are trends for thoughts to occur in certain places, but that exceptions and differences crop up every now and then. The functional units are not as easily delineable as you might think.

I am talking about a neuron as the unit.

Unless I have missed out on some major discoveries, I don't believe it is merely a "trend" that all thought occurs on a substrate of neurons, and one neuron is easily distinguishable from its neighbors.
 
If you read the rest of that post you will see that I point out that I'm fine to consider things solely at the level of the functioning organism and conclude that there really is a self doing things. But once one gets beneath that level, as inevitably one must if one is to meaningfully investigate selfhood and consciousness, things get very strange indeed.

But what you are doing on these threads is forcing people like Pixy and me, who really do stop our consideration at the level of the functioning organism, to argue about those deeper levels we care nothing for.

Basically we say "there is a self that does things, that is our 'self'," and you reply with "ok I agree but the real self is deeper, don't you see, and it isn't what you think it is." Well of course it isn't "what we think it is" because we don't care to delve deeper -- we are materialists.

In essence you argue with materialists about dualism, but that is retarded, because we aren't dualists. This is evident from your endless attempts to pigeonhole half of us into dualism when we are clearly not dualists.
 
But what you are doing on these threads is forcing people like Pixy and me, who really do stop our consideration at the level of the functioning organism, to argue about those deeper levels we care nothing for.

That sounds honest. Thank you. But I am actually not forcing you to do anything.

Basically we say "there is a self that does things, that is our 'self'," and you reply with "ok I agree but the real self is deeper, don't you see, and it isn't what you think it is." Well of course it isn't "what we think it is" because we don't care to delve deeper -- we are materialists.

I don't think you can define materialists as those who don't care to delve deeper. Dan Dennett is a materialist. His "multiple drafts" theory, for example, delves into the realm of actual brain processing. Plenty of other researchers who claim to be materialists have done the same.

You could say, perhaps, that you are a materialist but that you are not interested in selfhood below the level of the functioning organism.

In essence you argue with materialists about dualism, but that is retarded, because we aren't dualists. This is evident from your endless attempts to pigeonhole half of us into dualism when we are clearly not dualists.

Considering self and world at the level of the whole organism is dualistic. It's overtly dualistic. You can wriggle your way out of bits of the dualism with some quasi-behaviourist modelling, but basically it's dualist.

As I see it, the situation is pretty much as Dennett called it back in the early 90s. No one wants to regard themselves as a Cartesian Dualist, it's the unhippest term on the block, but actually the vast majority are still mentally modelling selfhood on the basis of the Cartesian model.

Nick
 
Considering self and world at the level of the whole organism is dualistic.
No.

It's overtly dualistic.
No.

You can wriggle your way out of bits of the dualism with some quasi-behaviourist modelling, but basically it's dualist.
No.

As I see it, the situation is pretty much as Dennett called it back in the early 90s. No one wants to regard themselves as a Cartesian Dualist, it's the unhippest term on the block, but actually the vast majority are still mentally modelling selfhood on the basis of the Cartesian model.
No.
 
Nick227 said:
I can foresee any future criticisms I might make of your understanding being dealt with in similar manner! Lucky that I am not "in it to win it."

You could also have interpreted it as an illustration of what happens when we simply choose to change description levels when it serves to keep our perspective intact – a kind of defensive mechanism? Perhaps (from one point of view)! I realize it was still uncalled for and thus I should apologize. Sorry for that! My rhetoric can also be seen as implying you are being silly or naïve by introducing such terms when criticizing a point you have made. For that too I should apologize. Again, sorry for that!

Now, back to the discussion…

However, from another point of view it could serve to illustrate where the first-person perspective can go no further without resulting in negation of a previously discussed phenomenon… but where a third-person perspective actually might be able to continue, although in a modified manner. I submit that even though such continuation is ‘modified’ it could still be quite ‘valuable’ for understanding the self, at least the underpinnings of such phenomenon.

As I see it, it would be problematic if there was no identification. How would you consider it problematic?

It ‘could’ be problematic if ‘self-narrative’ and ‘benign user illusion’ are taken to be the same thing without awareness of the ‘potential’ difference. As I see it, a self-narrative could simply imply a narrative about a central figure in the narrative, whereas ‘user illusion’ seems to include both the narrative and the interpretation of the narrative as a narrative (regardless of content). But you seem to have cleared that up later in your reply.

Watch the duality radar here. Spelling radar too!

That is the limitation I have (speaking English as a third language). I still think you can understand what I write.

About the duality radar… well, I don’t think it is too much of a concern here. It would only come up if one confused a way of referring to something within a given context with the ontological position of dualism. If I watch hockey on TV and the commentator talks about someone passing the puck to someone else, it is fairly ineffectual to talk about ‘no one’ actually passing the puck. When talking about processes in the brain, from a third-person perspective, the same contextual situation should be understood. That is actually the strength of trying to be objective: it can illuminate and thus make it possible to communicate some detailed processes that aren’t directly accessible through introspection. We can assign properties to distinguishable “entities” without actually proposing dualism – it is simply a way to denote.

I think if you examine this notion of a "narrative interpreter" you will agree that it cannot be regarded as a self.

Some philosophers and scientists certainly seem to call it that. Why else would James call it the “I” or Baars conclude the following: “Consciousness enables access to "self" -- executive interpreters, located in part in the frontal cortex.”

It is certainly not the same as the self in any given narrative about self; it can perhaps be seen as one of the systems that make it possible for there to be a self-narrative, including narrative content, in the first place.

It is also perhaps an example of why we even talk about self-referencing systems to begin with: we can observe such “closed” mechanisms (by which we then understand some of them to be best described as self-referential). We talk about them being self-referential because they seem to at least to some degree have semi-autonomous characteristics. Our observations would also lead us to think that such “self-systems” would receive their own flow of sensory input, which empirically seems to be the case.

One problem here... it is not only the presence of a narrative about "I" that creates a sense of self. Once the notion of limited selfhood is created in the brain any thinking narrative will maintain this notional selfhood, for it will constantly appear that there must be a self that is experiencing the thoughts. Furthermore, the thoughts themselves will soon relate almost entirely to this notional self.

Which is also to say that ‘re-inventing the wheel’ every time is ineffective (conversely also called learning)!

I do not necessarily disagree with you here (regarding the notional self). I’m however skeptical about that being all that there is to this issue, i.e. that you simply talk about a tiny portion of what other people perhaps mean by self.

Whilst thinking, and identification with thought, are taking place it will tend to reinforce selfhood, pretty much no matter what the thinking is about. An exception might be a narrative specifically intended to attack the idea of notional selfhood though.

Many thoughts do not relate to the notional self either (for instance, when concentrating deeply on a task, the task might engulf “oneself”, so to speak); it is only afterward, when thoughts like “I did well (or badly) in that task” occur, that the notional self is brought back. On the one hand, there seems to be a connection between the intensity of thinking, concentration, experience and doing, and how much of a notional self is perceived at those instances. On the other hand, with extreme relaxation or meditation a similar relationship is also found.

The notional self can be attacked by the idea that the notional self is, well, just a notion. I know, it sounds trivial on paper, but it is much harder to put into practice, or at least harder to get to the result. One could maybe say that one meme is attacking another meme: the ‘benign user illusion’ is transformed into a ‘malign user illusion’ before it is eventually ignored (as Blackmore would have it).

Well, narratives are narratives. They create self. When I say one can passively observe thinking I am trying to relate that which appears to take place. Who knows?

Well, I submit that there wouldn’t be any kind of such relating unless there is some kind of self-reference that isn’t directly dependent on thinking alone. You could perhaps call that the self as observer?
 
You could also have interpreted it as an illustration of what happens when we simply choose to change description levels when it serves to keep our perspective intact – a kind of defensive mechanism? Perhaps (from one point of view)! I realize it was still uncalled for and thus I should apologize. Sorry for that! My rhetoric can also be seen as implying you are being silly or naïve by introducing such terms when criticizing a point you have made. For that too I should apologize. Again, sorry for that!

OK, thanks.

Now, back to the discussion…

However, from another point of view it could serve to illustrate where the first-person perspective can go no further without resulting in negation of a previously discussed phenomenon… but where a third-person perspective actually might be able to continue, although in a modified manner. I submit that even though such continuation is ‘modified’ it could still be quite ‘valuable’ for understanding the self, at least the underpinnings of such phenomenon.

I mean, the 1st person perspective will inevitably be problematic here, given that in many ways it itself is being challenged. Just what exactly do we experience in the moment? What is there? This question is not easy to accurately answer, though it often seems to be so. If we consider Dennett's Multiple Drafts model, then the act of interrogating the multiple drafts with a probe of this nature - "What's happening" - will produce another draft - a thought narrative. Whether this narrative accurately reflects what's going on is questionable, and certainly when it comes to probing drafts on the specific subject of selfhood it must be very questionable indeed.

So, for me, whilst I appreciate my own 1st person insights I'm also pretty skeptical of them here.

That is the limitation I have (speaking English as a third language). I still think you can understand what I write.

Yes, for sure. I'm impressed that you're this good in your 3rd language. I was thinking of "Gatzzangia" (sic). You spelt his name wrong twice, interestingly in a manner that might make a social psychologist raise eyebrows! (anglo-saxon fear of n-word) Not trying to make something of it, just noticed.


Some philosophers and scientists certainly seem to call it that. Why else would James call it the “I” or Baars conclude the following: “Consciousness enables access to "self" -- executive interpreters, located in part in the frontal cortex.”

I find his terminology here pretty Cartesian. The tendency will inevitably be to seek a "self", whether it be a single neuron or an executive level. I've no doubt that hierarchies exist, though my background knowledge is acutely limited here, but if one starts to consider them "selves" then I think a dangerous line is being crossed.

It is certainly not the same as the self in any given narrative about self; it can perhaps be seen as one of the systems that make it possible for there to be a self-narrative, including narrative content, in the first place.

For sure.

It is also perhaps an example of why we even talk about self-referencing systems to begin with: we can observe such “closed” mechanisms (by which we then understand some of them to be best described as self-referential). We talk about them being self-referential because they seem to at least to some degree have semi-autonomous characteristics. Our observations would also lead us to think that such “self-systems” would receive their own flow of sensory input, which empirically seems to be the case.

Yes. They also reinforce our own notion of self-referencing.



I do not necessarily disagree with you here (regarding the notional self). I’m however skeptical about that being all that there is to this issue, i.e. that you simply talk about a tiny portion of what other people perhaps mean by self.

Personally, I figure I'm pretty much on track with my usage of the word. I could be wrong but that's my sense of it. It's the sense that there exists someone who owns a body, has feelings, has opinions, owns a car, has a girlfriend, etc.


Many thoughts do not relate to the notional self either (for instance, when concentrating deeply on a task, the task might engulf “oneself”, so to speak); it is only afterward, when thoughts like “I did well (or badly) in that task” occur, that the notional self is brought back. On the one hand, there seems to be a connection between the intensity of thinking, concentration, experience and doing, and how much of a notional self is perceived at those instances. On the other hand, with extreme relaxation or meditation a similar relationship is also found.

Yes, I agree. Either putting oneself totally into the moment or purely observing the actions of the body or mind both tend to reduce the activity of this notional self. It forms judgments later.

However, I don't really agree that this can be construed as "Many thoughts do not relate to the notional self either." I find it rather that the intensity of the focus, or the absence of the focus, blocks the activity of this self, or identification with this activity.

The notional self can be attacked by the idea that the notional self is, well, just a notion. I know, it sounds trivial on paper, but it is much harder to put into practice, or at least harder to get to the result. One could maybe say that one meme is attacking another meme: the ‘benign user illusion’ is transformed into a ‘malign user illusion’ before it is eventually ignored (as Blackmore would have it).

Yes. Dennett has his "benign user illusion." Blackmore has it as malign. In considering the BUI and MUI as memes, it would be interesting to see which would most likely survive in the environment of the brain, ruled over by a selfplex inevitably hostile to any type of "user illusion" concept. I note that the memetics meme itself doesn't seem to have progressed so far yet.

It's also interesting to consider some of the Behaviourist models that Merc and others here create to deal with the situation. Memetically the model appears to seek to get inside the selfplex, rather Trojan Horse-like, and then influence it from the inside. It seems to be appealing particularly to those who like to consider themselves materialists yet who don't want to deal with the selfhood issues. I'm not criticising here, merely observing. Blackmore's MUI meme is far more confrontational to the selfplex. You can no doubt guess which this brain is more attracted to!


Well, I submit that there wouldn’t be any kind of such relating unless there is some kind of self-reference that isn’t directly dependent on thinking alone. You could perhaps call that the self as observer?

Can you explain more here?

Nick
 
Nick227 said:
I find his terminology here pretty Cartesian. The tendency will inevitably be to seek a "self", whether it be a single neuron or an executive level. I've no doubt that hierarchies exist, though my background knowledge is acutely limited here, but if one starts to consider them "selves" then I think a dangerous line is being crossed.

It seems to be more about distinguishing between structure and function on many different levels, thus also recognizing that the brain is both plastic and modular at the same time. This is probably why it’s such a difficult task to really understand the workings of the brain (or the whole organism). At one level, these semi-autonomous modules which sort of do their “own” things seem to exist. At another level, what they do seems to have profound consequences for the functioning of the system at a much higher level.

Like with any ecosystem, a kind of self-organization seems to be taking place. When accurately investigating parts of the system, in relation to higher level activities, a clearer picture of how the whole ecosystem functions can be created. In order to do that however, we pretty much have to define a crucial part in such a way that it signifies a relationship to the particular higher level function we only have access to by observation.

Can you explain more here?

The way I see it there’s two general options – although I might make the mistake of simplifying too much when equating ‘thinking’ with ‘inner dialogue’ here.

Option one: people aren’t completely without thought when meditating, and thus when they say they didn’t have any thoughts (or that there wasn’t thinking) they are simply confabulating in retrospect. That would also make my point moot.

Option two: there actually isn’t thinking going on (at least in terms of inner dialogue), but still there is awareness (you are not unconscious). In order for you then to relate such “experience” or “activity” or whatever you wish to call it, it seems plausible that some kind of registering was taking place. The very fact that you can relate to such an episode retrospectively seems to speak for itself in this case. I.e. the organism somehow has access to such recollection, whatever it may think about the actual experience.
 
Nick, it sounds like you are searching for some kind of 'object' instead of a process.
 
I don't think you can define materialists as those who don't care to delve deeper. Dan Dennett is a materialist. His "multiple drafts" theory, for example, delves into the realm of actual brain processing. Plenty of other researchers who claim to be materialists have done the same.

I am an A.I. programmer. My primary interest is the realm of actual brain processing.

I am not interested in playing semantic games regarding the reality of "self." That is what I mean when I say "we don't care to delve deeper." I expect Dennett would agree.

You could say, perhaps, that you are a materialist but that you are not interested in selfhood below the level of the functioning organism.

Correct. What I am interested in is the material processes that cause behavior in an organism. That may or may not fit someone's notion of "selfhood" -- and I don't care one way or the other. I expect Dennett would agree, except for the fact that such an attitude might impact sales of his books.

Considering self and world at the level of the whole organism is dualistic. It's overtly dualistic. You can wriggle your way out of bits of the dualism with some quasi-behaviourist modelling, but basically it's dualist.

As I see it, the situation is pretty much as Dennett called it back in the early 90s. No one wants to regard themselves as a Cartesian Dualist, it's the unhippest term on the block, but actually the vast majority are still mentally modelling selfhood on the basis of the Cartesian model.

See? You are doing exactly what I said you are doing.

I don't care to play this game anymore. I am an A.I. programmer who views himself as nothing but an advanced biological robot, which he could probably reprogram if he had the tools. If that is dualist, then I am a dualist, and I suspect so are a great many others, Dennett included.
 
I am talking about a neuron as the unit.

Unless I have missed out on some major discoveries, I don't believe it is merely a "trend" that all thought occurs on a substrate of neurons, and one neuron is easily distinguishable from its neighbors.
Do you have any idea just how much interpolation goes into those brain activity maps, and into statements such as "this type of thinking generally occurs in this portion of the brain"? There is no exact area. There is a lot of overlap, between different people, and even within the same person, at different times.

ETA: The neuron does not do anything, on its own. Thinking is the emergent behavior of all of them. Just like a single virtual neuron does not do anything, in a neural net algorithm.
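The emergence point can be sketched concretely. A single artificial neuron is just a weighted sum plus a threshold, and the classic illustration is that no single such unit can compute XOR, while a small network of them can. A minimal, purely illustrative Python sketch (not a model of any real brain circuit; the weights are hand-picked for the toy example):

```python
def neuron(inputs, weights, bias):
    """One unit: fires (1) if the weighted sum clears the threshold."""
    s = sum(w * x for w, x in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0

def xor_net(a, b):
    """XOR emerges from three units, none of which computes it alone."""
    h1 = neuron([a, b], [1, 1], -0.5)      # this unit acts like OR
    h2 = neuron([a, b], [-1, -1], 1.5)     # this unit acts like NAND
    return neuron([h1, h2], [1, 1], -1.5)  # AND of the two hidden units

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Inspect any one unit and you find only a weighted sum; the behaviour of interest belongs to the network as a whole.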
 
Do you have any idea just how much interpolation goes into those brain activity maps, and into statements such as "this type of thinking generally occurs in this portion of the brain"? There is no exact area. There is a lot of overlap, between different people, and even within the same person, at different times.

ETA: The neuron does not do anything, on its own. Thinking is the emergent behavior of all of them. Just like a single virtual neuron does not do anything, in a neural net algorithm.

OK, let me spell it out for you in plain English:

1) You said that in CS people break complex processes down into easily delineable functional subunits.

2) You claimed biological organisms may have no need to do so.

3) I pointed out that thought occurs on a substrate of neurons, which are easily delineable functional subunits, and which means yes, biological organisms do need to do so.

This implies both A) we should be able to model the brain using CS methods and B) whether you know it or not, your brain breaks problems down into some kind of subunit, because at the end of the day the whole thing is a bunch of neurons and the amount of processing a single neuron can do is very limited compared to an entire human thought.
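The decomposition argument in (1)-(3) can be illustrated with a toy example. Computing the parity of an arbitrarily long bit string is beyond any single two-input unit, yet a chain of identical, easily delineable subunits handles it; this is just a sketch of the principle, with hypothetical names:

```python
def xor_unit(a, b):
    """A tiny subunit: all it can do is compare two bits."""
    return a ^ b

def parity(bits):
    """The whole task, built by chaining the limited subunits:
    returns 1 if the list contains an odd number of 1s, else 0."""
    acc = 0
    for b in bits:
        acc = xor_unit(acc, b)
    return acc

print(parity([1, 0, 1, 1]))  # three 1s, so parity is odd: prints 1
```

Each subunit's contribution is trivial; the capacity to solve the full problem lives in how the subunits are composed.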
 
I am an A.I. programmer. My primary interest is the realm of actual brain processing.

I am not interested in playing semantic games regarding the reality of "self." That is what I mean when I say "we don't care to delve deeper." I expect Dennett would agree.

Correct. What I am interested in is the material processes that cause behavior in an organism. That may or may not fit someone's notion of "selfhood" -- and I don't care one way or the other. I expect Dennett would agree, except for the fact that such an attitude might impact sales of his books.

I would like to suggest something, RD. If you don't wish to explore selfhood beneath the level of the independent functioning organism, then don't do so. Just don't do it. There's no need imo to try and enlist the perspectives of other forum members or well-known philosophers to try and bolster up your position.

You seem to me to be constantly trying to reinforce your own stance through reference to others, and it just comes across to me that you are yourself very torn here. Why not just say that, if it's the case? In the above post you refer to Dan Dennett twice: once to claim that he would no doubt back up your position (something I rather doubt, though you never know), and a second time to claim that he only writes about these things to make money. I mean, is this really how you want to come across?

Nick
 
It seems to be more about distinguishing between structure and function on many different levels, thus also recognizing that the brain is both plastic and modular at the same time. This is probably why it’s such a difficult task to really understand the workings of the brain (or the whole organism). At one level, these semi-autonomous modules which sort of do their “own” things seem to exist. At another level, what they do seems to have profound consequences for the functioning of the system at a much higher level.

Like with any ecosystem, a kind of self-organization seems to be taking place. When accurately investigating parts of the system, in relation to higher level activities, a clearer picture of how the whole ecosystem functions can be created. In order to do that however, we pretty much have to define a crucial part in such a way that it signifies a relationship to the particular higher level function we only have access to by observation.

Yes, I find this so. It is thus important, as I mentioned, that one does not cross lines and start to regard certain areas of the brain as relating specifically to selfhood.


The way I see it there’s two general options – although I might make the mistake of simplifying too much when equating ‘thinking’ with ‘inner dialogue’ here.

Option one: people aren’t completely without thought when meditating, and thus when they say they didn’t have any thoughts (or that there wasn’t thinking) they are simply confabulating in retrospect. That would also make my point moot.

Option two: there actually isn’t thinking going on (at least in terms of inner dialogue), but still there is awareness (you are not unconscious).

Might it perhaps be easier to say that there is phenomenology?

In order for you then to relate such “experience” or “activity” or whatever you wish to call it, it seems plausible that some kind of registering was taking place. The very fact that you can relate to such an episode retrospectively seems to speak for itself in this case. I.e. the organism somehow has access to such recollection, whatever it may think about the actual experience.

Memories have been laid down, or narratives created during the act are recalled, perhaps.

I think that in actuality it is more that a diminished number of thoughts, or diminished identification with thought, starts to take place, and thus the traditional perspective - that of being someone who experiences things - begins to be challenged in some narratives.

Nick
 
You seem to me to be constantly trying to reinforce your own stance through reference to others, and it just comes across to me that you are yourself very torn here. Why not just say that, if it's the case? In the above post you refer to Dan Dennett twice: once to claim that he would no doubt back up your position (something I rather doubt, though you never know), and a second time to claim that he only writes about these things to make money. I mean, is this really how you want to come across?
Nick: It's not RD. It's you.
 
Nick227 said:
Yes, I find this so. It is thus important, as I mentioned, that one does not cross lines and start to regard certain areas of the brain as relating specifically to selfhood.

That would be a case where evidence should ultimately lead the way. It is thus possible to observe some functions through elimination: Hypothetically, our first observation might suggest that without certain modular functions the subject does not show/experience common characteristics of, let's say, personal selfhood. Hence as a working assumption pending further research, we would be able to say that some specific areas in the brain seem to be more important for this particular function. We still wouldn't be able to say that the self is seated in that particular area, but we should be able to somehow estimate the relative importance of that area for selfhood.

What you see as danger here is something that could also be seen as limitation. Generally it is not a good idea to argue against empirical evidence from a philosophical point of view. If we start to make rules for what is allowed and what is not, we might miss something crucial. If we assume our conclusions – for instance, that selfhood is 'only' generated by thinking – then we soon find ourselves on a slippery slope towards dogmatism. We would not be able to look elsewhere or challenge current assumptions, even though we would know that those we currently have are merely provisional.

One potential danger I see here is that a personal belief system or doctrine about selfhood could be challenged. Science should however not care.

Might it perhaps be easier to say that there is phenomenology?
Perhaps, but that is also quite a broad category. Ideally we would like to narrow it down rather than make it broader. It is, after all, "phenomenology" that we are trying to explain in a more detailed and systematic way.

Memories have been laid down, or narratives created during the act are recalled, perhaps.

I think that in actuality it is more that a diminished number of thoughts, or diminished identification with thought, starts to take place, and thus the traditional perspective - that of being someone who experiences things - begins to be challenged in some narratives.
Maybe so? But yet again, in that case it is the process of "diminished identification" that should be explained in a more detailed and systematic way.

I think it is plausible to think that some kind of meta-representation is involved (regarding thinking and identification), thus it seems logical to look for different kinds of neural patterns when the subject is "normal" vis-a-vis in "dissociative" mode. I think some research has been done with Tibetan monks where there seemed to be fluctuation in blood flow to certain brain regions between meditation and normal wakefulness.
 
That would be a case where evidence should ultimately lead the way. It is thus possible to observe some functions through elimination: Hypothetically, our first observation might suggest that without certain modular functions the subject does not show/experience common characteristics of, let's say, personal selfhood. Hence as a working assumption pending further research, we would be able to say that some specific areas in the brain seem to be more important for this particular function. We still wouldn't be able to say that the self is seated in that particular area, but we should be able to somehow estimate the relative importance of that area for selfhood.

What you see as a danger here is something that could also be seen as a limitation. Generally it is not a good idea to argue against empirical evidence from a philosophical point of view. If we start to make rules for what is allowed and what is not, we might miss something crucial. If we assume our conclusions, for instance that selfhood is 'only' generated by thinking, then we soon find ourselves on a slippery slope towards dogmatism. We wouldn't be able to look elsewhere or challenge current assumptions, even though we would know that those we currently have are merely provisional.

One potential danger I see here is that a personal belief system or doctrine about selfhood could be challenged. Science should however not care.

I should explain what I meant better. As already discussed, there are a variety of neurological processes known to be implicated in selfhood. But here we're discussing the one arising from thinking - the "I" aspect. My knowledge of these things is not great, but it seems to me that even if thinking and the beliefs generated by thinking could be tracked to specific regions of the brain, this still wouldn't give us reason to consider "selfhood" or the "I" to be located in that region, at least not without considerable proviso. My position is not so much dogmatic as a matter of making sure that one doesn't return to Cartesian logic without awareness that one is doing so.

Perhaps, but that is also quite a broad category. Ideally we would like to narrow it down rather than make it broader. It is, after all, "phenomenology" that we are trying to explain in a more detailed and systematic way.

I have problems with terms like "awareness" being used in this context. It always seems safer to me to refer to that which is materially present, lest unwanted dualities creep in.

Maybe so? But yet again, in that case it is the process of "diminished identification" that should be explained in a more detailed and systematic way.

I would love to see researchers get into identification. Blackmore, whom I personally consider a considerable luminary, writes of how thoughts create selfhood, but I've not seen her look at the notion that there can be an agency mediating action upon thought.

I don't know quite how well it sits with meme theory, but I think it's ok. I haven't seen anyone yet attempt to deal with the psychological aspects of meme theory. Are there emotion-related underpinnings to the organism's choice of which meme to house? Surely there must be.



I think it is plausible to think that some kind of meta-representation is involved (regarding thinking and identification), thus it seems logical to look for different kinds of neural patterns when the subject is "normal" vis-a-vis in "dissociative" mode. I think some research has been done with Tibetan monks where there seemed to be fluctuation in blood flow to certain brain regions between meditation and normal wakefulness.

Yes, I've read of these sorts of things. But personally, I figure we're looking for a dopamine-mediated process here. There's that dopamine kick when selfhood is constructed by the brain. It feels good to identify. There's that quasi-addictive urge to pursue just that thought, or line of reasoning, against more logical alternatives.

I suspect that problems come up here because thinking and emotions are two of the areas neurologists know least about.

Nick
 
I think materialism and evolution are the only frameworks regarding consciousness and life that have yielded, and continue to yield, an abundance of useful, deeper, and verifiable information and medical insight. I think all woo-pushers must spin their wheels obfuscating the understanding of evolution and materialism, or try to prove them false, so that they can continue to believe in their own assorted unevidenced contrary theories - theories they feel so special and wise for "believing in" without ever subjecting them to the scrutiny they give the scientific paradigm, which yields actual results.

Nick's argument is the same as Undercover Elephant's was...

Legs give rise to running... Legs are responsible for the jog around the park I took. "Jogs" and "running" (gerunds) are non-material things (nouns) that rely on material (physical) things. So are music, movement, time, patterns, laps, and flames that are blown out. I am not a dualist because I believe in these non-material "things"... in the same way I am not a secret dualist for saying that brains give rise to consciousness the way legs give rise to "running". Consciousness is our individual interpretation of the pattern of neuronal firing in our brains... there is no evidence that consciousness of any sort can exist absent a material brain. I presume that most materialists follow this line of thinking... including Dennett, Ramachandran, Blackmore, and most of those at the forefront of evolution and/or cognitive research. As much as some would like to think this is an incoherent philosophy, I presume their ignorance, obfuscation, and semantic confusion has more to do with keeping their own incoherent alternative beliefs "alive" in their "consciousness". It reminds me very much of the creationist need to not understand how natural selection gives rise to the appearance of design. If we (materialists) are right, then their unfalsifiable alternative is in jeopardy, and this scares them. They have an emotional "need" to believe in their "belief".
 
Consciousness is our individual interpretation of the pattern of neuronal firing in our brains... there is no evidence that consciousness of any sort can exist absent a material brain. I presume that most materialists follow this line of thinking... including Dennett, Ramachandran, Blackmore, and most of those on the forefront of evolution and/or cognitive research.

Exactly. Consciousness arises from the brain, that much is obvious. My interpretation of what is being discussed is that the problem is the binding problem: http://en.wikipedia.org/wiki/Binding_problem

If the self is derived from consciousness, and consciousness is derived from the collected sensory information funneled together to create a continuous experience, then what is self ultimately? Especially if consciousness is the non-material result of the brain functioning...

It seems kind of a waste to view the self only as the mechanisms of the brain and body when there is no way to prove that an AI-modeled consciousness is in any way similar to how our brains actually function. Even if you can model every neuron, that's not going to tell you what your modeled brain is thinking. It's only going to show you a modeled brain.

Unless one is in possession of the knowledge of how our brains collect and combine information to create conscious experience, arguing that it's all able to be modeled, and that the self is merely the collection of biomechanical components, appears on the surface to be somewhat naive.

That is not to say that creating a thinking AI is impossible, or that it won't give us insight into our own thinking nature. It's just that right now, AI isn't representative of a whole lot when it comes to the complexity of our own brains, so we don't know, and we can't argue very well about what we don't know.
 
