
The Hard Problem of Gravity

"living things" is purely a label we use for certain processes we see around us - it is nothing more than an "invented" distinction. We can't even agree on a black and white definition of what a "living thing" is!

Define black. Define white.

How would you account for the fact that the "experience" of red can be "generated" by quite different "frequencies of EM radiation"?

Because our neurons communicate via electrochemical signals -- duh! :p

Okay, first, let's recap some earlier points:

[Note: In my parlance awareness is synonymous with consciousness]

Given:
Y = quanta ['unit(s)' of information]
X = qualia ['hue(s)' of experience]

What I'm describing is:

AMM

[1] Consciousness = X
[2] Thinking = Y(X)

.......

What [PixyMisa is] describing is:

PM

[1] Capacity = Y
[2] Computation = n(Y)

[PixyMisa is] confusing informational capacity with experiential awareness, and logical computation with conscious thinking.

Pixy is right about one thing, though. Brains can be thought of as a specific class of computer.

With this definition in mind....

[G1] Brains are living computers.
[G2] Minds are living virtual computers.


[Inference 1]
The 'binding problem' can be resolved by considering consciousness to be a field phenomenon.

- There is no structural 'seat of consciousness' in the brain because consciousness is a kind of field capacity generated by the collective activity of the brain

- The unified experience of consciousness can be explained by the unified nature of such a field.


[Inference 2]
Consciousness is a kind of field capacity generated by living brains at a specific range of physical states.

- This range of states I will call the 'Awareness Spectrum'

- This spectrum encompasses what we colloquially call 'states of consciousness' [such as the varying degrees of wakefulness, REM sleep, etc]


[Inference 3]
Not every computer meets the necessary physical criteria for consciousness and, to our knowledge, only living brains meet these criteria.

- Based on the preceding criteria for consciousness, artificial computers, to date, do not exhibit the capacity for consciousness.

- It is therefore unjustified to describe such computers as conscious


[Inference 4]
With the above definitions in mind, there is no metaphysical [i.e. Cartesian] distinction between mental and physical.

- According to the definition of 'matter' I'm using, a mind is 'physical' but not 'material' in the sense of being made of atoms

- The 'non-local' nature of mental properties described by Descartes can be explained by the above definitions of 'mind' and 'consciousness' as field phenomena.
 
How would you account for the fact that the "experience" of red can be "generated" by quite different "frequencies of EM radiation"?

The experience of "red" can be generated by thinking about it. It can be generated by hearing the word "red". It can be generated by reading a story in which blood is mentioned.
 
Have you read Hofstadter?

Because that theory is exactly what Gödel, Escher, Bach presents.

When I say "physical theory" I mean a paper written by a physicist, and published in a physics journal, peer reviewed by physicists. G, E, B is a fun read but it doesn't even pretend to present new ideas in physics. It's a discussion about various issues pertaining to AI and consciousness and many other things.

Of course anyone interested in the subject should read G, E, B, and The Emperor's New Mind, and a wide variety of books on the subject. But they should do so skeptically. Most of the authors are discussing the best way to extract phlogiston, or the right technique for turning lead into gold.
 
When I say "physical theory" I mean a paper written by a physicist, and published in a physics journal, peer reviewed by physicists. G, E, B is a fun read but it doesn't even pretend to present new ideas in physics. It's a discussion about various issues pertaining to AI and consciousness and many other things.

Of course anyone interested in the subject should read G, E, B, and The Emperor's New Mind, and a wide variety of books on the subject. But they should do so skeptically. Most of the authors are discussing the best way to extract phlogiston, or the right technique for turning lead into gold.

'Ey, westprog, I think you may be wasting your time trying to explain these concepts to Pixy. I doubt he can understand them without being programmed by a suitable ideological authority.
 
Whenever I see the term qualia I know I'm about to get an earful of shaving cream.

The term "qualia" is just a fancy philosophical term for something that any ten-year-old can understand. It takes a lot of work to unlearn it.

Ask a small child if he can see pictures in his head.
 
Okay, so behaviorism is just a specialized area of focus; it does not declare that focus to be all there is. Is that about accurate?
It is at one particular level of explanation. Neurology is at a different one. Both study natural objects and their actions. Compare with, say, Freud, who attempted to study whole organisms with explanations at a lower level; these mechanisms were inferred from the organismic level, not evidenced at the neurological level. They also happened to be wrong. Trying to explain one level by inferring another is tempting, but if you get beyond what you have actual evidence for, it's still just speculation.
BTW, the explanations and links you've provided so far have been very helpful but I do have one quibble.

You've mentioned before that you consider the 'mind' to be a 'fiction'. Does this mean that you view the 'mind' as non-existent as a real entity, or just not relevant to your field?
The mind, as traditionally defined (a causal entity) is an explanatory fiction. The notion that "if you didn't have a mind, you couldn't experience stuff" is purely circular; this traditional mind is worse than useless.

There was a trend for a while of saying that "the mind is what the brain does". If this were the end of it, that would be fine, if trivial. It would also be incomplete, as we clearly use more than just our brains to do the things we pretend the brain alone does (for just one example, the adrenal glands are not part of the brain, but certainly do influence our actions). If "the mind" is just a label for behavior, it is not fictional, just superfluous; we already have the word "behavior".
By my reckoning, what we call the 'mind' actually refers to a real entity. Its current status is comparable to the status of genes in Darwin's day; we don't know much about it other than that its existence can be inferred.
As, say, Freud did. He inferred ego, id, superego, as structures of the mind. I hope I don't have to elaborate on the dangers of inferring mechanisms without supporting evidence.
The purpose of all the speculation I've been doing is to try to guess at what the physical nature of this inferred entity called the 'mind' might be. I suspect that once we figure out the exact physical nature of the 'mind' we will then have a solid basis for understanding another inferred concept: Dawkins' memes.
In Skinner's "Selection by Consequences", he uses the term "elements of culture" for precisely the same thing. A behavioral replicant (either an operant behavior or an element of culture) can be observed. The facts of parent-offspring similarity, variability, and differential reproductive success can be observed. Natural Selection worked just fine without the specific mechanism of genes; the discovery of the DNA replicator simply (and importantly) provided the evidence at a different level of explanation. There are important lines of potential evidence at the neurological level that may provide one or more micro-level replicators corresponding to operant behavior (Sejnowski presented some info on octopamine-moderated channels in honeybees that looks fascinating, and corresponds to dopamine channels in us) and elements of culture (Ramachandran seems intrigued by the potential of mirror neurons; I am hopeful, but they sound too good to be true).

None of these are "mind" as the term has traditionally been used, with the causal element. None of these are enhanced by calling any part of it "mind".
 
AkuManiMani said:
Since I'm conceiving of consciousness as a field of some kind [sorry, I know you wanted me to refrain from jumping into the 'empirical' but I think this may be appropriate, bear with me >_<] then it stands to reason that it would have the same general properties as other fields; one of them would be the capacity to have an overall zero value over a particular spatial extent. My guess is that any region with this overall zero value could be said to be unconscious.


I suppose conscious experience would, by definition, be the first-person 'inside' perspective of a conscious field; the field IAOI or the 'beingness' of the field. Qualia would be the 'inside' correlation of the 'outside' aspect of field activity. Qualia could only be experienced from the 'inside' of a conscious field; absent a non-zero conscious field, there are no 'inside' qualia.
I take it then that your position is that you would 'experience nothing'? Conscious-'ness' being a kind of potentiality for experience as such, even though it might momentarily be without content, or empty. A lot could be said about that, but that's not the main point here.

Thus, it all really boils down to the assertion you made about "consciousness" being a field of some sort, which we haven't yet discovered. I'm OK with that proposition, as a proposition like any other, although, in all fairness, it's empirically unsupported so far.

To the broader philosophical part. Okay, let's assume there's such a field and that qualia is the inside correlation of the outside aspect of field activity. What follows from that is a very rudimentary question: Have you explained subjective experience at all – 'how it feels' – or have you just explained the context where subjective experience takes place? I.e., have you also defined the problem away?

Sure, if qualia is an inside correlation of the outside aspect, then that's all there is to it because the aspect is what it is, and it cannot be any other way. Yet, an entrenched philosopher would maintain that you have left something out, namely that of how it feels when experiencing. He might also ask what makes your theory better than the more conventional one, keeping in mind that there's no evidence for the field, and that it still fails to fully explain the subjective aspect of subjective experience to his liking. Obviously it doesn't even matter how you explain it empirically (field, neurons, computation, or what have you), it's still going to fall short, always is.

In summary: what insight into the "obvious" phenomenon of experiencing does it bring?
 
The term "qualia" is just a fancy philosophical term for something that any ten-year-old can understand. It takes a lot of work to unlearn it.

Ask a small child if he can see pictures in his head.

Ask a small child if the sun rises and falls in the sky.



Once we have seen things in our environment, we can use those same brain pathways without the things being in our immediate environment--just as we can use the muscles that get us from here to there in order to simply run in place. Seeing is an active process; we do not merely passively perceive. There need be no "mental image" or "qualia" to stimulate a passive process.

In short, introspection is a lousy way to gain knowledge into how we actually perceive. We think it feels like we have qualia; to me, it looks as though we perceive our environment, not our qualia. Dawkins quotes a story about Wittgenstein: 'Tell me,' the great twentieth-century philosopher Ludwig Wittgenstein once asked a friend, 'why do people always say that it was natural for man to assume that the sun went round the Earth rather than that the Earth was rotating?' His friend replied, 'Well, obviously because it just looks as though the Sun is going round the Earth.' Wittgenstein responded, 'Well, what would it have looked like if it had looked as though the Earth was rotating?'
 
The experience of "red" can be generated by thinking about it. It can be generated by hearing the word "red". It can be generated by reading a story in which blood is mentioned.

None of which requires the concept of "qualia". Semantic generalization is a perfectly good explanation.
 
Define black. Define white.
Another excellent example of Darat's point.
Because our neurons communicate via electrochemical signals -- duh! :p
I really hope that was an attempt at a joke that I just didn't get. Because as a serious answer, it's not even in the zip code.
Okay, first, let's recap some earlier points:



Pixy, is right about one thing, tho. Brains can be thought of as a specific class of computer.

With this definition in mind....

[G1] Brains are living computers.
[G2] Minds are living virtual computers.
Oh?
[Inference 1]
The 'binding problem' can be resolved by considering consciousness to be a field phenomenon.
The binding problem is a function of dualistic language.
- There is no structural 'seat of consciousness' in the brain because consciousness is a kind of field capacity generated by the collective activity of the brain
no.
- The unified experience of consciousness can be explained by the unified nature of such a field.
no.
[Inference 2]
Consciousness is a kind of field capacity generated by living brains at a specific range of physical states.
lord, no.
- This range of states I will call the 'Awareness Spectrum'
I will call it 'Bob'.
- This spectrum encompasses what we colloquially call 'states of consciousness' [such as the varying degrees of wakefulness, REM sleep, etc]
Ah... behavior. Observable behavior. continue...
[Inference 3]
Not every computer meets the necessary physical criteria for consciousness and, to our knowledge, only living brains meet these criteria.
Oh? Why does my laptop computer hate me, then?
- Based on the preceding criteria for consciousness, artificial computers, to date, do not exhibit the capacity for consciousness.
Oh? My computer just told me how long it was taking to download something. Accurately, too. It must have been aware of (or conscious of) its own rate of processing in order to do this.
- It is therefore unjustified to describe such computers as conscious
Or completely justified. Tomayto tomahto.
[Inference 4]
With the above definitions in mind, there is no metaphysical [i.e. Cartesian] distinction between mental and physical.
With the above definitions, your handwaving is labeled differently from Descartes' handwaving. Wax on, wax off.
- According to the definition of 'matter' I'm using, a mind is 'physical' but not 'material' in the sense of being made of atoms
why call it mind? What exactly are you calling mind?
- The 'non-local' nature of mental properties described by Descartes can be explained by the above definitions of 'mind' and 'consciousness' as field phenomena.
Considerably more trouble than it is worth. Assumes too much that is unsupported about what "mind" is, and does.
 
Thus, it all really boils down to the assertion you made about "consciousness" being a field of some sort, which we haven't yet discovered. I'm OK with that proposition, as a proposition like any other, although, in all fairness, it's empirically unsupported so far.

While I wouldn't go so far as to say this article is empirical support for his theory, I think it does provide some nourishment for it.

http://www.sciencedaily.com/releases/2009/03/090319224532.htm

Mercutio said:
The mind, as traditionally defined (a causal entity) is an explanatory fiction. The notion that "if you didn't have a mind, you couldn't experience stuff" is purely circular; this traditional mind is worse than useless.

This is the part I don't get. If the mind, as traditionally defined, is NOT a causal entity, then what do you consider to be a 'causal entity'?

I don't think the mind as traditionally defined is any more of a fiction than a rainbow is.
 
I don't think the mind as traditionally defined is any more of a fiction than a rainbow is.

It's not any less of one either.

That would make it simply "a fiction".
 
Ask a small child if the sun rises and falls in the sky.



Once we have seen things in our environment, we can use those same brain pathways without the things being in our immediate environment--just as we can use the muscles that get us from here to there in order to simply run in place. Seeing is an active process; we do not merely passively perceive. There need be no "mental image" or "qualia" to stimulate a passive process.

In short, introspection is a lousy way to gain knowledge into how we actually perceive. We think it feels like we have qualia; to me, it looks as though we perceive our environment, not our qualia.

Qualia are how we perceive our environment.

Dawkins quotes a story about Wittgenstein: 'Tell me,' the great twentieth-century philosopher Ludwig Wittgenstein once asked a friend, 'why do people always say that it was natural for man to assume that the sun went round the Earth rather than that the Earth was rotating?' His friend replied, 'Well, obviously because it just looks as though the Sun is going round the Earth.' Wittgenstein responded, 'Well, what would it have looked like if it had looked as though the Earth was rotating?'


And what would it look like if qualia were real? And what would it look like if qualia were an illusion?
 
None of which requires the concept of "qualia". Semantic generalization is a perfectly good explanation.

It's a good explanation of why different stimuli produce the same qualia. It doesn't explain anything about why qualia are there in the first place.

The fact that different qualia can arise from the same stimuli, and the same qualia from different stimuli - or none - seems to call in doubt the idea that qualia are just responses to the environment.
 
Oh? My computer just told me how long it was taking to download something. Accurately, too. It must have been aware of (or conscious of) its own rate of processing in order to do this.

A cauliflower just told me, quite accurately, that it was a little bit past its best. As with the computer, the understanding starts with the human, and doesn't apply to the device or the vegetable.

Because we can glean information from a computer, it doesn't mean that the computer "knows" it.
 
It's a good explanation of why different stimuli produce the same qualia. It doesn't explain anything about why qualia are there in the first place.

If they weren't there, we would all be blind, deaf, and dumb. This discussion is retarded.
 
Because we can glean information from a computer, it doesn't mean that the computer "knows" it.

This is exactly what it means.

The computer has no concept of why it is significant to you but it certainly knows these things as much as any person could be said to know them because the epistemology of how you would know such a thing is the same.

Otherwise you're arguing about "feeling" that you "know" something - which is just a label for having some emotional reaction about a belief and not an indicator of genuine knowledge.
 
