
On Consciousness

Is consciousness physical or metaphysical?


Total voters: 94 (poll closed)
Status: Not open for further replies.
Or is this a framework question in disguise? Doesn't matter. I will let the relevant scientists do their job on this one. I will let you know when they have any kind of answer to the questions I am pondering here. For the moment, as far as I can tell, they are not there yet.
So, argument from ignorance? And before you protest, you are the one who brought up Searle.
 
Your entire philosophical stance is justified by a mathematical truth, remember.
Well, that's good to hear.


Pixy's a nice guy, with some good ideas. Unfortunately many of us cannot agree with some of his assumptions and conclusions.
Certainly there are some who cannot agree with my conclusions.

None of you have ever made a coherent objection to my assumptions, though, so that's not my problem.
 
One thing I would like to know myself is when and how the physics of biology leads to the sensation of red. If you happen to know that it would be great.
Again, if you want to, you will have to use the terms as they are used in neurology, not in common usage. :)

Sensation is defined as the processes in the sense organs, except for the vestibular sense, which is an amalgam of kinesthetic and cochlear sensations, visual perceptions, and maybe more as well. In that sense it is really a perception rather than a 'sense', in strict terms.

Perception is the brain events such as the creation of the visual field or the auditory field.

Sensation is the actual biochemical processes in the sense organs, perception is the complex processing in the cortex regions.

:)
Of course, some do not even know what the sensation of red
Again it would be helpful to use the defined terms as they are defined. The sensation of red is many different events in the retina of the eye: it involves the triggering of certain photoreceptors and the non-triggering of others, the mixing of saturation between the color receptors, and the brightness/contrast response of the rods. In this case I think you are likely referring to what would be termed 'visual perception'.
:)
means because they will make it all about mechanics and nowhere about sensation itself (which is my main point, but if you want to talk specifics of certain models I am all ears).
Again, it seems you are talking about the actual process referred to as perception. The 'sensations' are more like set values of interaction bits, which are also analog amalgams of different photoreceptors.

No, we do not at this time know why we perceive the seven/three major 'colors' as opposed to stippling, crosshatching, or other possible ways that the 'colors' could appear to us.

However, there is no reason that I am aware of to think that it is anything other than biochemical and neurological events.

Which is why I ask you, what else might there be?
I could be wrong that a neurological model giving the correlates of consciousness does not already exist, but aren't we still supposed to be trying to figure out what the NCCs are, or did I miss a memo or something?
And I asked you specifically, what else would there be that could not be explained by neurological process.

They are not 'correlated' with consciousness; they are consciousness, as far as we can tell.

What data is there that might indicate anything else? From what I have seen, there is nothing at this time that is 'conscious' or exhibits 'consciousness' that does not have a neurological structure of some sort.
Or is this a framework question in disguise? Doesn't matter. I will let the relevant scientists do their job on this one. I will let you know when they have any kind of answer to the questions I am pondering here. For the moment, as far as I can tell, they are not there yet.

That is not what I asked you at all, and you pointedly ignored my question.

"When consciousness is finally figured out it will probably be nothing anyone is expecting is my guess."

Is exactly what YOU said.

So other than neurological events, what exactly do YOU think consciousness might be?

I and most people 'expect' consciousness to be neurological events in biochemical neural networks.

So please answer what do YOU think or expect that consciousness might be other than that?

(Aside from whether non-biochemical neural networks could be built.)

:)
 
If that is so, then just by looking at you I should be able to figure out what you are sensing,
Special pleading and a false dichotomy.

Can you just tell if something is radioactive?
or emits in the ultraviolet?
Or can you see the ring nebula in Lyra?

Seriously, you have made either an oversimplification or a false dichotomy.
and I can not. We can look at something together, and to whatever extent is relevant we can agree about what we see, but I just do not see the same kind of thing happening when I try and 'look' at your consciousness when your brain is being probed.

That is how the study of consciousness is different than the study of 'things' (refer to a previous post if you want to know what I mean by 'things').

Hope that helps.

Nope, it looks like a false dichotomy, an oversimplification, and special pleading to me.

:)
 
Science occurs exclusively in mind.

Don't forget that science works and metaphysics doesn't. That means that, unlike science, metaphysics is exclusively in the mind.

If you can demonstrate that metaphysics works, then I'll be expecting your announcement of winning the million dollars soon.
 
It seems that the dualists cannot fathom the idea that consciousness is physical, that is, due to physically connected matter. Build up a brain from atoms to molecules to neurons to neuron networks, and still dualists seem to insist that consciousness cannot appear unless there's some essential ethereal force, like EM and quantum entanglement, which works in the space between particles.

Why does our intuition insist on this? Why is the idea so seductive?
 
Searle's Chinese Room argument is pretty much designed to seduce our intuition. Whether that was intentional, or merely the best argument he could produce to shore up his position, I can't say, but I suspect the latter.

It's only when you examine it with serious rigour that you notice that (a) it's not logically valid, and (b) the scenario hides some huge mathematical absurdities. (Searle fails to note, for example, that the Room would need to be larger than the observable Universe, or that it would operate on timescales that would make continental drift look like hummingbirds mating.)
 
Your mind doesn't seem too materialistic to me, even though it can be explained extremely well by physical neuroscience.

The gravitational field is in no way materialistic, as another example.

Or the Planck scale of quantum phenomena, where the worlds of materialism and mathematical abstraction seem to blur via quantum nonlocality and general material befuzzlement.

Yet you couldn't be posting the above without the materialistic understanding of "quantum" that enables engineers to design and build "quantum" machines, i.e., your computer.
 
I thought of a scenario on the way to work today that will hopefully clarify for some people why machines can actually "know" something.

Suppose you are an AI programmer working on a game. Suppose you want the AI to behave a certain way, for instance something like running to the player when they are low on health because the player might give them a health pack or something ( legal disclaimer -- this is not an actual scenario in any game I am working on ).

The easiest way to do this, and the way that all current game AI will have it implemented, is for the programmer to just write a bunch of code somewhere in the AI logic that is tantamount to:

1) Check my health
2) If it is low, see if I can path to the player
3) Try to go to the player
4) Maybe play some kind of animation and voiceover to ask the player for a healthpack

Now in this scenario, I don't think anybody would claim that the AI knows anything at all. It is quite obvious that every bit of what the AI is doing was explicitly put there by the programmer. This isn't even close to a conscious behavior.
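For concreteness, the scripted approach above could be sketched roughly as follows. This is a minimal illustration, not code from any real game: `ScriptedAI`, the threshold, and every method name are invented for the example.

```python
# Minimal sketch of the hard-coded approach; all names are hypothetical.
LOW_HEALTH = 25  # assumed threshold

class ScriptedAI:
    def __init__(self):
        self.health = 100
        self.position = (0, 0)
        self.actions = []  # log of what the AI did, for illustration

    def update(self, player_pos, can_path):
        # 1) Check my health
        if self.health < LOW_HEALTH:
            # 2) If it is low, see if I can path to the player
            if can_path(self.position, player_pos):
                # 3) Try to go to the player
                self.actions.append(("move_toward", player_pos))
                # 4) Ask the player for a health pack
                self.actions.append(("play", "beg_for_healthpack"))

ai = ScriptedAI()
ai.health = 10
ai.update((5, 5), lambda a, b: True)  # pretend pathing always succeeds
print(ai.actions)
```

Every branch here was written by the programmer, which is the point: the behavior is entirely scripted.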

However there are other ways this could be implemented. Those are much harder, and less predictable, which is why they aren't ever done, but nevertheless an example might be something like the following.

a) program an underlying drive to be healthy in the AI. This is complicated but doable so I won't get into it here ( unless someone wants to know, I can elaborate in another post ), but in any case it isn't part of the "knowledge" of the AI so it is fine if it is completely programmed. This is similar to our drive to avoid pain, avoid hunger, etc.

b) program a system into the AI so it can remember arbitrary sequences of events and make inferences between them. This is also complicated but doable, there are many architectures that would work. And again, it is fine that this is "programmed."

c) put the AI in the world, lower its health, and then give it a health pack. Over and over. In many different cases.

And you are done.

What happens in the AI logic, then, is this:

1) Check my health. < it is low >
2) If it is low, prioritize finding a way to bring it back up. < I allocate CPU resources to this task >
3) Search my memory to see if any sequences of events in the past ever resulted in my health increasing < I find some events, the player giving me a healthpack >
4) If I find any such events, search my memory for other events that are related to those events, so I can build a sequence of events that might take me from my current state to getting my health increased, assuming those past events are repeatable < the player gave me a healthpack > < he was close to me > < he isn't close to me now > < moving towards something makes me closer to it >
5) Once I have generated a sequence that I think will work, I should try it < I start moving towards the player >

Now of course I embellished that with human readable words, the AI doesn't use "he" or "towards" or whatever, it just executes code. But the essential thing here is that the programmer NEVER told the AI to go to the player when it had low health. The AI learned that from being in the world and having things happen to it.

In this latter case, I would argue that the AI genuinely "knows" something. It isn't quite conscious, but this behavior is certainly far closer to what a conscious entity is capable of than any of the strawman examples people typically come up with.
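The learning approach in steps 1-5 could be sketched along these lines. Again, this is an assumption-laden toy, not the memory/inference architecture of any real AI: the event strings and the two-method interface are invented, and "inference" is reduced here to reusing the most frequently successful lead-up to a health gain.

```python
# Toy sketch of the learning approach; event names are hypothetical.
from collections import Counter

class LearningAI:
    def __init__(self):
        self.episodes = []  # b) memory of event sequences, one list per experience

    def remember(self, events):
        self.episodes.append(list(events))

    def plan_health_recovery(self):
        # 3) search memory for sequences that ended in my health increasing
        plans = Counter()
        for ep in self.episodes:
            if "health_up" in ep:
                prefix = tuple(ep[:ep.index("health_up")])
                plans[prefix] += 1
        # 4-5) reuse the most frequently successful lead-up as the plan
        return list(plans.most_common(1)[0][0]) if plans else []

# c) repetition: health drops, the player approaches, hands over a pack
ai = LearningAI()
for _ in range(5):
    ai.remember(["health_low", "move_toward_player",
                 "player_gives_pack", "health_up"])

print(ai.plan_health_recovery())
```

The plan it produces includes moving toward the player, yet no rule "go to the player when health is low" appears anywhere in the code; it was extracted from what happened to the AI in the world.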
 

Now introduce something which is good for one's health but always leads to death: life. I wonder what your AI would do?
 
Searle's Chinese Room argument is pretty much designed to seduce our intuition. Whether that was intentional, or merely the best argument he could produce to shore up his position, I can't say, but I suspect the latter.

It's only when you examine it with serious rigour that you notice that (a) it's not logically valid, and (b) the scenario hides some huge mathematical absurdities. (Searle fails to note, for example, that the Room would need to be larger than the observable Universe, or that it would operate on timescales that would make continental drift look like hummingbirds mating.)

I never completely liked Searle's Chinese Room myself. He was trying to get across an idea of 'understanding' that is not so easily captured by the example he gave. Really it makes a mess of things unfortunately.
 
It seems that the dualists cannot fathom the idea that consciousness is physical, that is, due to physically connected matter. Build up a brain from atoms to molecules to neurons to neuron networks, and still dualists seem to insist that consciousness cannot appear unless there's some essential ethereal force, like EM and quantum entanglement, which works in the space between particles.

Why does our intuition insist on this? Why is the idea so seductive?

Category error. In terms of epistemology there is the abstract mind (think mathematics and such) and sensation (that other supposedly dreaded substance that 'monists' shun without reason, the one that involves seeing red, etc.). In terms of the models we get from using the epistemology of science, we do not yet have an account of how elements in the theory (particles, neurons, etc.) lead to consciousness.

I am not a dualist or a monist; I am an Empiricist who is open to whatever models are necessary to predict as many phenomena as possible, in as reliable and consistent a manner as possible.
 
Special pleading and a false dichotomy.

Can you just tell if something is radioactive?
or emits in the ultraviolet?
Or can you see the ring nebula in Lyra?

I cannot tell if something is ultraviolet or radioactive, nor see M57 with the unaided eye (with an average telescope, sure). That was never at issue. Those are all things covered quite well by, let's say, conventional science.

The example I gave in the post before this one was not directly addressed. It concerned the fact that if you probe someone's brain and ask them what they are feeling, sensation-wise, you do not have direct access to those sensations yourself; hence, studying consciousness itself is different from studying a rock.

We do not have a consciousness telescope that we all can look in to, so that we all agree that right now Virgil is seeing red, or whatever. We can look into a telescope and agree a star is red though. That is a difference. Address that, not these silly examples you gave.

Speaking of my purported use of logical fallacy:

There was no analysis given of how there could be other options that were not presented, which would be needed to support the false dichotomy claim. The claim does not make a lot of sense anyway: I was not presenting two seemingly opposing choices, I was stating why I think the conventional method of doing science is not completely appropriate to studying consciousness (although it might be the best we can have...).

As for special pleading:

Special pleading is a form of spurious argumentation where a position in a dispute introduces favorable details or excludes unfavorable details by alleging a need to apply additional considerations without proper criticism of these considerations themselves.

What considerations am I not considering without proper criticism?

Please stop throwing out logical fallacies unless you are sure you are using them correctly. It is a waste of time.

Seriously, you have made either an oversimplification or a false dichotomy.

Seriously, you guys need to chill the hell out and try, just try, to understand what the other person is thinking.
 
Again, if you want to, you will have to use the terms as they are used in neurology, not in common usage. :)

Sensation is defined as the processes in the sense organs, except for the vestibular sense, which is an amalgam of kinesthetic and cochlear sensations, visual perceptions, and maybe more as well. In that sense it is really a perception rather than a 'sense', in strict terms.

Perception is the brain events such as the creation of the visual field or the auditory field.

Sensation is the actual biochemical processes in the sense organs, perception is the complex processing in the cortex regions.

I will not be shoe-horned. All of the above is fine so long as we are talking about various medical models. When talking about sensation itself, the 'what it is like' to be something, it misses the boat. You do what you can, though.

The above takes something that is subjective and puts a framework around how to describe objectively various processes that we think lead to subjective experiences. If the above is the only thing you think about when considering sensation then your conceptual landscape has some missing pieces.

Again it would be helpful to use the defined terms as they are defined. The sensation of red is many different events in the retina of the eye: it involves the triggering of certain photoreceptors and the non-triggering of others, the mixing of saturation between the color receptors, and the brightness/contrast response of the rods. In this case I think you are likely referring to what would be termed 'visual perception'.

Again, models (that I even agree with, surprised?). The sensation of red is the undeniable, self-evident fact whose workings we are trying to figure out using science. The model is not the same as the sensation, though (one category error of monists). If you want to talk models, I would be more than happy to (actually interested, even, because I think you have looked into the biology enough to have more expertise than I do on that subject).

Again, it seems you are talking about the actual process referred to as perception. The 'sensations' are more like set values of interaction bits, which are also analog amalgams of different photoreceptors.

No, we do not at this time know why we perceive the seven/three major 'colors' as opposed to stippling, crosshatching, or other possible ways that the 'colors' could appear to us.

Yes, interesting points.

However, there is no reason that I am aware of to think that it is anything other than biochemical and neurological events.

Not sure what you mean. That sensation is anything other than ...? In terms of models, yep, that is what it most likely is (or something similar). As you noted though, we do not have an appropriate model as yet to explain the colors we perceive and so on. I hope that day comes before I die.

Which is why I ask you, what else might there be?

In terms of what?

And I asked you specifically, what else would there be that could not be explained by neurological process.

They are not 'correlated' with consciousness; they are consciousness, as far as we can tell.

The last sentence is a cop out. When you can tell me how we should put neurons together in certain patterns (or something objectively similar) so that the color green is perceived by some entity (perceived in the subjective sense, which is part of the problem with talking to monists: to them EVERYTHING is objective because the idea of the subjective does not even exist), then everything is kosher.

Neurons are not consciousness. Neurons are neurons. Neurons might give rise to consciousness, but they are most certainly not consciousness. There is a difference.

That is not what I asked you at all, and you pointedly ignored my question.

I do not ever mean to ignore a question. Well, I am partial to CEMI, but other than that, when I said that when we do figure out consciousness it will not be what anyone expects, I meant it. I am someone, and therefore I cannot know what to expect. It is a guess, but looking over the historical patterns in science, it seems like the most probable outcome.

A problem this momentous when finally solved usually takes a form no one expected. That has happened quite a bit in fact.

Really though, I have optimism for the scientific method so I say carry on the good fight. My guess that consciousness will turn out to be something no one expects is irrelevant to the current functioning of science.
 
For Pixy Misa
[image attachment]
 