The Hard Problem of Gravity

I do not subscribe to Cartesian dualism.

You might not consciously subscribe to it, AMM; not many people these days do. But it could still be that you are modelling the inner world of the brain in accordance with how the outer world of the organism appears to function, creating a false isomorphism.

This is exceptionally common, even amongst distinguished scientists, as Dan Dennett has long pointed out.

As near as I can discern, atm, the realized capacity for subjective experience is the conscious experiencer.

How I see it is... perceptions are the result of the brain "framing" incoming information, i.e., there is a monitor. Experience is the result of it further framing that information in terms of subject-object relationships, i.e., I am seeing the monitor.

I can buy that there is a realized capacity for emotional and cognitive response. As a Strong AI fan I would actually call this "the soul." But, thinking the way I do, I'm struggling with the notion that there is a capacity for subjective experience, and that this is the experiencer.

An analogy for my conception would be to view consciousness like a taut string. The various patterns of vibration along the string are what we call experience. There is no need to invoke observers within observers in an infinite regression. All one needs to do is define that base medium of qualitative experience and you've found the conscious observer.

So, you're saying the neural substrate of brain processing is the conscious observer?

Nick
 
My contention is that what we call the 'self', while merely a transitory flux, is the organizing whole of all of those components. The AHB was not wrong, per se. I think his point was to illustrate that nothing has a permanent substantial basis; all is change and flux -- even the 'self' :)

As I understand it, the Buddha went a bit further than this; see the doctrine of Dependent Origination.
 
Everything you state here is merely an opinion, and furthermore, it originates from your prior conviction that human subjective experience is magical.

Which you have continued to assert time and time again without one iota of evidence. I've merely stated that there is no evidence for consciousness existing outside of living beings.

You've asserted that consciousness exists in human beings* and some specific devices made by human beings - and nowhere else. You don't seem to regard this as a magical theory for some reason.

What is really odd is that no matter how often Aku, or I, or anyone else says that they believe that the physical process of consciousness is something that could possibly be duplicated by a device, you insist that we think precisely the opposite. This kind of rewriting of the opponent's viewpoint is not the sign of a strong position.

You have no idea what other people mean when they say their cars hate them or their photocopier is stupid.

I, in fact, have become rather attached to the female character I am working on right now. I am perfectly aware that when she looks at me on screen, I am the one who wrote the code, and I understand perfectly how that code works, memory word by memory word. Yet, when something goes wrong, and I ask "wtf is she doing now?" I really mean it in the anthropomorphic sense. In fact, my coworkers tell me they enjoy it when I throw expletives in the form of degrading female terms her way.

So don't presume to think you know what anyone else means, because you don't, and all your assertions to the contrary amount to nothing but childish games.

I'm well aware that people in the AI business have all sorts of strange beliefs about their work. However, in the real world, a genuine belief that the car hates them is not a sign of a well grounded materialistic philosophy, but mental illness.


*Leaving animals aside for the time being.
 
Inherently, no. Invariably, yes.

Drugs don't add anything. They just scramble what wits you normally have.

The brain is in a constant drug state, genetically normalised presumably through natural selection... in terms of its capacity to perceive threats, food, and sexual opportunities (its regular drug state is presumably the best). But this does not mean that it is inevitably the best to help the organism understand how the hell it got here or what consciousness is. Tweaking those serotonin receptors with certain indoles might or might not do a better job.

What? How is dualism involved here?

see above


There are still members of the medical profession (and the odd JREFer) who regard certain ameliorative health events as happening "all in the mind," and infer from this that such events are of a different (and lesser) class than physical events of healing. They do so apparently unaware that they are recreating the Cartesian mind-body split.

Nick
 
So, what is there to consciousness that SHRDLU doesn't exhibit?

Just to dig this one up again, I wonder, Pixy, how you relate your statement above to Hofstadter's (2007) claim that Stanley, a Stanford-built robot that successfully crossed the Nevada desert in 2005, cannot think, but can only process.

Hofstadter said:
If and when Stanley ever acquires the ability to form limitless snowballing categories....then I'll be happy to say that Stanley thinks. At the present, though, its ability to cross a desert without self-destructing strikes me as comparable to an ant following a dense pheromone trail across a vacant lot without perishing. Such autonomy on the part of a robot vehicle is hardly to be sneezed at but it's a far cry from thinking and a far cry from having an "I." - Hofstadter (2007) p.190

Nick
 
You've asserted that consciousness exists in human beings* and some specific devices made by human beings - and nowhere else. You don't seem to regard this as a magical theory for some reason.

Actually, my position was that anything that switches is conscious.

You extended that, via your word games, to mean everything is conscious at one time or another.

If you want to interpret it that way, I don't really care. I don't enjoy word games like you apparently do.

What is really odd is that no matter how often Aku, or I, or anyone else says that they believe that the physical process of consciousness is something that could possibly be duplicated by a device, you insist that we think precisely the opposite. This kind of rewriting of the opponent's viewpoint is not the sign of a strong position.

That's because we are pretty sure your disclaimer "could possibly be duplicated by a device" is merely to avoid arguments you know you won't win about certain points.

Otherwise, you would be able to list some properties of such a device.

I'm well aware that people in the AI business have all sorts of strange beliefs about their work. However, in the real world, a genuine belief that the car hates them is not a sign of a well grounded materialistic philosophy, but mental illness.

What about a belief in a supernatural entity that hates them?

*Leaving animals aside for the time being.

I don't want to leave animals aside. I know you do, because the existence of a spectrum of animals from bacteria to chimpanzees poses very difficult problems for your views.

That is just too bad, and it certainly isn't my problem. What honest people do, when reality doesn't agree with their views, is change their view.
 
Actually, my position was that anything that switches is conscious.

You extended that, via your word games, to mean everything is conscious at one time or another.

I demonstrated that everything switches, at one time or another. You failed to address the contradictions in your position. As per.

The fact that everything switches is not a word game. It's a matter of how the universe works.

If you want to interpret it that way, I don't really care. I don't enjoy word games like you apparently do.



That's because we are pretty sure your disclaimer "could possibly be duplicated by a device" is merely to avoid arguments you know you won't win about certain points.

Otherwise, you would be able to list some properties of such a device.

How can anyone describe the properties of a device meant to exhibit a property that is not understood? Oh wait, we can just say "Switches are conscious" and take it from there. That's a theory all right. It doesn't explain anything, but at least it's simple.

Until it's understood what physical process creates consciousness, it's not possible to design a device that duplicates consciousness.

What about a belief in a supernatural entity that hates them?



I don't want to leave animals aside. I know you do, because the existence of a spectrum of animals from bacteria to chimpanzees poses very difficult problems for your views.

Quite what these problems are I'd love to hear. Since a chimp brain is clearly quite similar to a human brain, it seems quite reasonable to me that chimp consciousness would be comparable to human consciousness. It also seems likely that an amoeba would have little similarity.

Of course, since from the start of the argument you've been addressing mostly what you have decided my views ought to be, rather than dealing with the actual points I make, it's business as usual.

That is just too bad, and it certainly isn't my problem. What honest people do, when reality doesn't agree with their views, is change their view.

You could do that. Or you could carry on talking to the characters in your computer game.
 
AkuManiMani said:
I do not subscribe to Cartesian dualism.

You might not consciously subscribe to it, AMM; not many people these days do. But it could still be that you are modelling the inner world of the brain in accordance with how the outer world of the organism appears to function, creating a false isomorphism.

This is exceptionally common, even amongst distinguished scientists, as Dan Dennett has long pointed out.

On the surface it would seem that what I've been proposing reflects Descartes in some way. Well, I reject dualism [and other pluralisms] mainly because of the interaction problem. Descartes posited that mind and matter were fundamentally different things -- if that were the case then there would be no way for mind to affect body and vice versa. For there to be interaction between entities they must have a common basis. This is why I think that some variant of monism must be the most accurate ontological description of reality.

At the same time, there must be a way for there to be real differences and distinguishability between entities [i.e. functional plurality], otherwise existence would be an undifferentiated continuum and we would not exist to talk about it. Most of the monist ontologies I'm aware of pick some arbitrary category [like mind, matter, 'spirit', etc.] and extend that category to encompass all other entities. I think that this is counterproductive since it needlessly blends categories and causes unnecessary ambiguity.

Several months back, when I was expressing my view of metaphysics, I was told by a forum member [I've forgotten who] that what I was describing was a kind of neutral monism. I looked into the subject and found that my views are most similar to dialectical monism. When I distinguish between objective/subjective, quanta/qualia, or WaI/WaS I'm not referring to separate 'substances', as Descartes did. What I'm referring to are complementary relations. What I'm thinking of deviates quite a bit from anything Descartes ever dreamed of. I'll attempt to elaborate on what I've been thinking below.

In the conception that I'm drawing upon, I don't see 'mind' and 'matter' as being in dialectical relation to one another. It seems to me that the mind is as veridically objective as the atomic matter of the body; being veridical, the mind should, in principle, be objectively observable. In my current formulation [at least the bit I bothered to write in my philosophy term paper >_<] I see the mind as being an emergent 'metasystem', or abstractive layer, of the body. What we call conscious experience is just the 'inside' subjective perspective of a mind [its WaS correlate]. Even then, experience, as such, does not occur unless the mind is in a specific range of states. When in such states, we are able to perceive sensory impressions and mental constructs as being qualities.

If I remember correctly, I think earlier you used the analogy of there being a light turned on when we are conscious. I think that's an apt analogy. To continue w/ the metaphor, the object(s) of our most focused attention would be the most brightly illuminated while those more on the periphery of our attention would be the most dimly lit. The areas of our minds which contain elements we can be directly aware of constitute the domain of the conscious mind [CM]. The portions of our minds that we are never directly conscious of would be subconscious [the parts which involve unconscious drives, instincts, autonomic regulation of bodily functions, etc.].

Farther outside the range of the conscious mind lies the unconscious domain; the general organismic mind [OM] of the body proper, with each 'module' of it organizing the operations of various bodily systems [such as the immune system]. There aren't any absolute divisions between the specialized modules, since they are dynamically interconnected, but they can be distinguished by their distinct functions. In this conception, the conscious mind is a relatively small subset of the general mind of the body:

(Organismic Mind (Neural Mind (Conscious Mind)))

In this scheme, the concept of 'mind' is more general than a mere function of the brain; it overlays the body in general. The OM is the general organizing factor of the material substrate of the body while the conscious mind [or CM] specializes in operating the gross functions of locomotion and deliberation.


AkuManiMani said:
As near as I can discern, atm, the realized capacity for subjective experience is the conscious experiencer.

How I see it is... perceptions are the result of the brain "framing" incoming information, i.e., there is a monitor. Experience is the result of it further framing that information in terms of subject-object relationships, i.e., I am seeing the monitor.

I can buy that there is a realized capacity for emotional and cognitive response. As a Strong AI fan I would actually call this "the soul." But, thinking the way I do, I'm struggling with the notion that there is a capacity for subjective experience, and that this is the experiencer.

I think I can agree with the general idea of information being 'framed' by the brain. I just suspect that it only becomes 'experience' within the domain of what I'm calling the conscious mind, and only then when the capacity is sufficiently stimulated during dreaming and waking states. To put it more simply, the CM is the stage [or global workspace] and consciousness occurs when the stage lights are running and the actors are in role -- except in this conception, there is no audience but the light being reflected from the elements on stage.
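
To make that stage metaphor concrete, here's a toy sketch in Python (every name and number below is mine and purely illustrative; it's a cartoon of the global-workspace idea, not a model of any real brain process):

```python
# Toy "stage lights" sketch: items compete for activation; only
# the most activated get broadcast (brightly lit), the rest stay
# in the dim periphery. Illustrative only.

def broadcast(items, spotlight=1):
    """Split contents into (brightly lit, periphery) by activation."""
    ranked = sorted(items.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:spotlight]), dict(ranked[spotlight:])

contents = {"monitor": 0.9, "chair pressure": 0.3, "hum of fan": 0.1}
focus, periphery = broadcast(contents)
print("on stage:", focus)        # {'monitor': 0.9}
print("periphery:", periphery)   # the dimly lit rest
```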

AkuManiMani said:
An analogy for my conception would be to view consciousness like a taut string. The various patterns of vibration along the string are what we call experience. There is no need to invoke observers within observers in an infinite regression. All one needs to do is define that base medium of qualitative experience and you've found the conscious observer.

So, you're saying the neural substrate of brain processing is the conscious observer?

Nick

Dunno for sure. I doubt it's the neural substrate, per se, however. I think the neural substrate is what generates and maintains the capacity of the CM stage; it's only conscious at a specific range of physical states. It's part of the reason why I suspect that consciousness is a range of states undergone by the endogenous field of the brain; sensations would be perturbations of that field generated by the electrochemical activity of the neural substrate.

Of course, this is all just inference on top of speculation on top of inference. It's just an idea, but I have a strong hunch that it in some way matches reality.
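
For what it's worth, the taut-string analogy from earlier has an exact physical form. Any vibration of a string of length L with wave speed c decomposes into normal modes:

```latex
\[
  y(x,t) = \sum_{n=1}^{\infty} A_n \sin\!\left(\frac{n\pi x}{L}\right)
           \cos\!\left(\omega_n t + \phi_n\right),
  \qquad \omega_n = \frac{n\pi c}{L}.
\]
```

On the analogy, distinct 'experiences' would just be distinct amplitude patterns {A_n} over one and the same medium -- no second observer required anywhere.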
 
I demonstrated that everything switches, at one time or another. You failed to address the contradictions in your position. As per.

The fact that everything switches is not a word game. It's a matter of how the universe works.

The fact that everything is conscious is not a word game. It's a matter of how the universe works.

Of course, you will reply "that is absurd." Funny how when I use an identical argument you all of a sudden declare it to be incorrect.

How can anyone describe the properties of a device meant to exhibit a property that is not understood? Oh wait, we can just say "Switches are conscious" and take it from there. That's a theory all right. It doesn't explain anything, but at least it's simple.

Until it's understood what physical process creates consciousness, it's not possible to design a device that duplicates consciousness.

Until you understand what physical process creates consciousness, you are in no position from which to declare anyone else incorrect on the matter.

It is just that simple.

Quite what these problems are I'd love to hear. Since a chimp brain is clearly quite similar to a human brain, it seems quite reasonable to me that chimp consciousness would be comparable to human consciousness. It also seems likely that an amoeba would have little similarity.

lol.

Again, you ignore everything in the middle. You understand that there are animals halfway between a human and an amoeba, right?

Are they just "half" conscious?

Of course, since from the start of the argument you've been addressing mostly what you have decided my views ought to be, rather than dealing with the actual points I make, it's business as usual.

Westprog, your entire method of argument is to actively avoid making points.

It is no wonder everyone else has to deduce what you might mean.

For example, you haven't yet addressed my questions about crystallization, now have you? Why is that? Maybe you just didn't see the post ... or maybe you are pretending you didn't see it...

Or you could carry on talking to the characters in your computer game.

And billions of people could carry on talking to their animals... their plants... their dead relatives... their dolls... their gods...

I guess we must look pretty stupid to you.
 
Sorry, that was a typo. I meant to say "unintelligible" >_<
Makes quite a difference...

The point I was attempting to get at is that thoughts and feelings must have some intelligible pattern(s) that can be identified and decoded in some way.
Well, I suppose it's possible, but what makes you feel they must be decodable? And decoded into what, exactly?

My entire case has been that qualia are a viable concept and that they can be understood using the scientific approach. The only reason this debate has carried on so long is that many here strongly reject the concept -- I suspect for ideological reasons. On the upside, the discussion has given me an opportunity to further develop some of my metaphysical ideas :)
Fair enough, but I refer you to my previous (twice unaddressed) questions - What are the scientific questions you want answered that are not answerable by the kind of scientific approach I outlined above (i.e. detailed investigation of the information processing involved)? What do you suppose an answer or answers to those questions might look like?

I really don't like just reposting the same questions each time, so if you can't or won't answer them, just say so, preferably with a reason.
 
Software is a term which is freely used, but what is actually implied? The algorithmic concept in the mind of the programmer? The text in a programming language? The compiled code produced by that text? The actual executing program on a particular machine? These are all separate things, and the connection between them is not clear and not obvious. Using something as an analogy, even in an informal way, is dangerous when the thing being compared is ill-defined. The brain-hardware/thought-software idea has muddied the waters more than it's explained anything.
Quite. That's why these analogies make me uncomfortable. The 'brain-as-computer' idea has introduced many people to the idea that there is computation and information processing going on, but it's a two-edged sword - the brain's structure and function are nothing like the popular concept of a computer.
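
To make those distinctions concrete, here's a minimal Python sketch separating the program text, the compiled code, and the executing program (the algorithmic concept, of course, only ever lives in the programmer's head):

```python
# Three of the four things "software" can mean, as distinct
# Python objects. Purely illustrative.

# The program text (a string in a programming language):
source = "def succ(x):\n    return x + 1\n"

# The compiled code produced from that text:
code_obj = compile(source, "<example>", "exec")

# The actually executing program on a particular machine:
namespace = {}
exec(code_obj, namespace)
print(namespace["succ"](41))  # -> 42
```

Same 'software', three different objects; conflating them is exactly the muddle described above.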
 
All of the chemical processes of the body seem to be actively pushed away from thermodynamic equilibrium, so it doesn't seem that straightforward chemistry, IAOI, can be invoked as the organizing process.
That's how it all works - like being in orbit, it's a continual process of falling away from equilibrium - offset by continual input of energy. It's a leaky bucket of organisation that needs continual topping up - or a leaky boat sinking in entropy that continually needs bailing out, or...or... <must cut down on the espresso>
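A back-of-the-envelope sketch of that leaky bucket (constants arbitrary, chosen only to show the steady state):

```python
# A system that decays toward equilibrium (x -> 0) but is held
# away from it by continual input: dx/dt = pump - leak * x.
leak, pump, dt = 0.5, 1.0, 0.01
x = 0.0                           # start at equilibrium
for _ in range(2000):
    x += (pump - leak * x) * dt   # continual topping-up minus leakage
print(round(x, 2))                # settles near pump/leak = 2.0, not 0
```

Cut the pump to zero and x drains back to equilibrium; that's the whole trick.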

There is nothing in the DNA molecule itself that explicitly codes for, or dictates, morphology. This is even clearer in multicellular organisms, where all of the cells contain the same genes yet develop and function in radically different ways.
It is in the DNA molecule, but at a higher level of abstraction than the bases. Looking at the bases you won't see the wood for the trees - it's the genes that explicitly code for morphology - length of limbs, number of digits, etc.
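To illustrate the levels, here's a toy translation sketch using a tiny excerpt of the standard codon table (four codons out of sixty-four). The point is that the meaning lives at the codon level and above, not in the individual bases:

```python
# Individual bases "mean" nothing by themselves; triplets map to
# amino acids, and genes (strings of codons) code for proteins.
CODON_TABLE = {"ATG": "Met", "TGG": "Trp", "GGC": "Gly", "TAA": "STOP"}

def translate(dna):
    """Read a DNA string codon by codon into amino acids."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "?")
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

print(translate("ATGTGGGGCTAA"))  # ['Met', 'Trp', 'Gly']
```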

I think I mentioned earlier in the thread that I've recently been reading up on a field called biosemiotics. The basic idea is that what separates living systems from inanimate ones is that they are all semiotic -- they are systems of signs, codes, and meanings that direct every level of biological processes. In this scheme, a distinction is drawn between catalyzed chemical reactions and coded chemical reactions.
Before you get too involved in biosemiotics, it might be worth picking up some more basic biology & physiology.

All I know is that there must be some unitary process that allows a single microscopic cell to not only function, but unfold into a complex community of cells that considers itself a singular entity.
The genetic code generating a good number of self-organising systems.
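A deliberately crude sketch of the 'same genes, different fates' point: every cell below runs the identical rule table, but a positional signal decides which rule fires. (This is just Wolpert's French-flag idea in a dozen lines, not real developmental biology.)

```python
# Identical "genome" in every cell; fate differs only because the
# signal each cell sees differs with position. Illustrative only.
GENOME = {"high": "neuron", "medium": "muscle", "low": "skin"}

def develop(position, width=9):
    """Cell fate from a simple morphogen gradient."""
    signal = 1.0 - position / (width - 1)
    level = "high" if signal > 0.66 else "medium" if signal > 0.33 else "low"
    return GENOME[level]

print([develop(i) for i in range(9)])
# ['neuron', 'neuron', 'neuron', 'muscle', 'muscle', 'muscle',
#  'skin', 'skin', 'skin']
```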
 
The brain is in a constant drug state, genetically normalised presumably through natural selection... in terms of its capacity to perceive threats, food, and sexual opportunities (its regular drug state is presumably the best). But this does not mean that it is inevitably the best to help the organism understand how the hell it got here or what consciousness is. Tweaking those serotonin receptors with certain indoles might or might not do a better job.
Might or might not. But Nick, we've seen the results. Philosophy produced while on drugs is invariably even worse than that produced while not on drugs.

see above
How is dualism even involved here?

There are still members of the medical profession (and the odd JREFer) who regard certain ameliorative health events as happening "all in the mind," and infer from this that such events are of a different (and lesser) class than physical events of healing. They do so apparently unaware that they are recreating the Cartesian mind-body split.
No. Completely wrong.
 
Just to dig this one up again, I wonder, Pixy, how you relate your statement above to Hofstadter's (2007) claim that Stanley, a Stanford-built robot that successfully crossed the Nevada desert in 2005, cannot think, but can only process.
Fine by me.

SHRDLU can tell you exactly how and why it carried out particular actions. If Stanley can't (and as far as I'm aware, it can't) then that's exactly the difference I'm talking about.
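
For anyone who hasn't seen it, the capability in question can be sketched in a few lines: keep a trace of which goal motivated which action, and "why?" becomes a walk up the chain. (A toy illustration of the idea, not SHRDLU's actual mechanism; the block-world goals are made up.)

```python
# Each action records the goal that motivated it; answering
# "why did you do that?" is just following the chain upward.
motivated_by = {
    "grasp red block": "clear the green block",
    "clear the green block": "stack green on blue",
    "stack green on blue": "you asked me to",
}

def why(action):
    chain = []
    while action in motivated_by:
        action = motivated_by[action]
        chain.append(action)
    return chain

print(why("grasp red block"))
# ['clear the green block', 'stack green on blue', 'you asked me to']
```

Stanley, as far as anyone has reported, keeps no such trace of reasons.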
 
That's quite interesting. It sounds kinda like David Bohm's conception of the universe as being a holomovement. Seems like I've quite a bit more to read up on :0


I don't know about holomovement. Dependent origination is part of the model: all things interact with each other.

There are trains of choices, consequences and more consequences.

This leads to rebirth of suffering.


So I cannot steal without impacting others; when I walk, the earth responds ever so slightly; if I accumulate wealth, others can't. You can't have your cake and eat it. That sort of thing.

"Like there is this huge web of life man."
 
AkuManiMani said:
The point I was attempting to get at is that thoughts and feelings must have some intelligible pattern(s) that can be identified and decoded in some way.

Well, I suppose it's possible, but what makes you feel they must be decodable? And decoded into what, exactly?

Because our own thoughts are intelligible to each of us. That means that there is some underlying system of logic that they are based on.

In answer to your second question, I suppose one means would be to develop some kind of interface that would allow subjects to literally share mental experiences. Such knowledge would also be a necessary prerequisite to developing Matrix-esque VR environs or the creation of synthetic conscious agents. Of course, such technology is almost certainly a long way off -- even if it is possible.

AkuManiMani said:
My entire case has been that qualia are a viable concept and that they can be understood using the scientific approach. The only reason this debate has carried on so long is that many here strongly reject the concept -- I suspect for ideological reasons. On the upside, the discussion has given me an opportunity to further develop some of my metaphysical ideas :)

Fair enough, but I refer you to my previous (twice unaddressed) questions - What are the scientific questions you want answered that are not answerable by the kind of scientific approach I outlined above (i.e. detailed investigation of the information processing involved)?

I thought I already answered that but I suppose you want a more detailed response. Empirical investigation in a subject area like this is fine -- even vital. But if those involved don't have at least some vague conception of what they're looking for it would make the going a lot more difficult; they'll have tons of data in their laps with no means of making sense of them. Empiricism is just one half of the scientific process; w/o postulation and theory-crafting there would be no way of making sense of the raw data the world presents to us. Reality doesn't come w/ premade labels and road maps.

If researchers aren't thinking in terms of identifying mental codes in patterns of brain activity, they are less likely to find them; if no one is looking to explain what the fundamental physical basis is for why and how consciousness arises, then it will almost certainly not be found. Simply collecting data w/o a conceptual framework for making sense of that data is futile. You're less likely to find answers to questions if you don't ask them, and you're less likely to gain in understanding w/o some idea of which questions to ask.
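
To be concrete about what 'identifying mental codes' would mean operationally, here's a toy decoding sketch. The 'activity patterns' are synthetic random vectors standing in for real recordings, and the decoder is the simplest one imaginable (nearest pattern); real studies would use fMRI/EEG features and proper classifiers:

```python
# Can a decoder recover the stimulus from an activity pattern?
# Synthetic data only; the "codes" below are invented.
import random
random.seed(0)

TEMPLATES = {"red": [1, 0, 0], "green": [0, 1, 0]}  # made-up codes

def pattern(template, noise=0.3):
    return [t + random.gauss(0, noise) for t in template]

reference = {k: pattern(v) for k, v in TEMPLATES.items()}

def decode(activity):
    """Nearest-pattern decoding of a new activity vector."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(reference, key=lambda k: dist(reference[k], activity))

print(decode(pattern(TEMPLATES["red"])))  # usually 'red'
```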

I think the mistake many thinkers in the subject are making is assuming they've arrived. I think ideologies like those of Strong AI are counterproductive because they simply placate people into assuming answers to questions they've never actually answered.

Person A: "What is consciousness?"

Person B: "Oh, it's just information processing."

Person A: " What does that mean? There has to be more to it than that."

Person B: "Nonsense. We already know what it is."

Person A: "But what about stuff like feelings, colors, and sounds? Shouldn't we be trying to figure out what they are and how they come about?"

Person B: "Those are incoherent irrelevancies we already understand."

Person A: "Erm...So what are they?"

Person B: "Information processing."

Person A: "..."

What do you suppose an answer or answers to those questions might look like?

Something resembling the picture that's been forming in my mind and which I've been trying to articulate: Means of formulating an objective basis for determining what entities are conscious, and in what way, w/o having to infer it. Means of proposing realistic ways of reading, generating, and reproducing specific conscious experiences in subjects. Solid knowledge of how to go about creating artificial systems that are not only intelligent but actually sentient. Stuff like that.
 