The Hard Problem of Gravity

It's not any less of one either.
We're in agreement :)
That would make it simply "a fiction".
But not for long. :(

A cauliflower just told me, quite accurately, that it was a little bit past its best. As with the computer, the understanding starts with the human, and doesn't apply to the device or the vegetable.
I think the computer and the cauliflower are at roughly similar places on the consciousness scale. ;)

What information is and how it is communicated are also very interesting questions, and they are related to the question of what consciousness is.
 
Otherwise you're arguing about "feeling" that you "know" something - which is just a label for having some emotional reaction about a belief and not an indicator of genuine knowledge.

I think this may be where I part company with a great many others here. I think emotional reactions are a form of thinking and a source of genuine knowledge. While I'll happily grant that it's nowhere near as accurate a source of knowledge as cognitive processes can provide, I'm not so sure that means it's of no use at all. I am as reluctant to discard common human emotional reactions to beliefs/observations of such things as gods, souls, free will, etc. as I would be to discard rainbows as uninteresting fictions.
 
westprog said:
The fact that different qualia can arise from the same stimuli, and the same qualia from different stimuli - or none - seems to call in doubt the idea that qualia are just responses to the environment.


I would like to know what exact instances you're referring to when you say that.

When do different qualia arise from the same stimuli, and when do the same qualia arise from different stimuli?
 
I would like to know what exact instances you're referring to when you say that.

When do different qualia arise from the same stimuli, and when do the same qualia arise from different stimuli?
Linky (png, wiki)

This assumes that qualia refers to the experiential aspect of a percept. A and B are actually the same shade.
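
This is easy to check directly, for anyone who doubts it. A minimal sketch in Python with Pillow, assuming the linked image is the usual checker-shadow picture; the file name and the pixel coordinates below are placeholders you would have to point at squares A and B yourself:

[code]
# Sketch only: "checker_shadow.png" and the coordinate pairs are placeholders;
# sample one pixel inside square A and one inside square B.
from PIL import Image

def same_shade(path, xy_a, xy_b):
    """Return True if the two sampled pixels have identical RGB values."""
    img = Image.open(path).convert("RGB")
    a = img.getpixel(xy_a)
    b = img.getpixel(xy_b)
    print("A =", a, "B =", b)
    return a == b

# same_shade("checker_shadow.png", (130, 220), (210, 140))
[/code]

If the two samples come back with the same RGB triple, the difference you still see is in the percept, not in the stimulus.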
 
We're in agreement :)
But not for long. :(

Something that is not more than or less than is simply equal.

If you agree with me, and you agree with yourself, and you don't agree with that, then you didn't really agree with me or yourself.

I am as reluctant to discard common human emotional reactions to beliefs/observations of such things as gods, souls, free will, etc. as I would be to discard rainbows as uninteresting fictions.

Which changes nothing - this being the entirety of my point.

How you feel about something is not going to change what it is.
 
In short, introspection is a lousy way to gain knowledge into how we actually perceive.

Oh, that point is solid gold!
Self introspection is blind to the process.

I credit Buddhist Insight Meditation with being able to help one see that there's no substance to a self that claims to be aware of itself.
But then there's nothing to be discovered beyond that.
There's no self observer that can perceive the process by which the illusion of a perceiving self arises.

Sure introspection can help us see how we come to some of the quirks of our personalities, but it can't see through to what underlies the marvelous mirage of our subjectivity, because it has this blinding obstruction of self.
 
This is the part I don't get. If the mind, as traditionally defined, is NOT a causal entity, then what do you consider to be a 'causal entity'?

Sorry, I did not explain myself well at all. As traditionally used, the mind is presumed to be causal--we change your behavior by first changing your mind--and yet we have no evidence for the changed mind other than the changed behavior. It could just as well be a possessing animus, a capricious god, or my psychic cat that is controlling your behavior.

If we change x, y, and z in your environment, and you start buying Coke instead of Pepsi, it is clear that the environment impacted your behavior (yes, we would need a stricter experimental design to really infer causation, but this is just an example). What we do not know, but what we seem entirely too willing to jump to, is that doing x, y, and z made you change your mind, which is what changed your behavior. Doing x, y, and z may have simply appeased the capricious god or amused my psychic cat.

Certainly, we think, feel, imagine, remember, and do all the sort of things that people attribute to a mind. But we do these things--it is not that our minds do these things, but that we do these things and label them mind. They are not an explanation or a cause of behavior; rather, they are more behavior to be explained.

I think (and feel, imagine, yadda yadda...). If I were to infer the presence of a mind from my thinking, then to claim the mind as the cause of the thinking is purely circular. Thinking is something that I do (by definition, then, a behavior--specifically, private behavior); it may be studied by looking at my brain, or at my behavior, but I defy anyone to measure my mind.

That which we typically call mind is effect, not cause. Yes, it is traditionally viewed as a causal agent. As with the HPC, it is the language we use in our folk psychology that gets us into trouble. When we look at what is actually going on, we no longer have to explain magic. We are no longer surprised by Libet's results, no longer reaching for QM to try to invoke some sort of respectable reverse causation. On the other hand, it means we are no longer magic.
 
I would like to know what exact instances you're referring to when you say that.

When do different qualia arise from the same stimuli, and when do the same qualia arise from different stimuli?

Mr Smith is not afraid of spiders. Mrs Smith is. Different qualia from the same stimulus (seeing a big spider). Mr Smith is scared of snakes. Same qualia from a different stimulus.

And how can we tell they are the same qualia? Well, we could rely on Mr and Mrs Smith's testimony ("Ooooh, I hate them snakes and spiders. They affect me the same way.") or, as Mercutio would prefer, we could rely on behaviour.
 
This is exactly what it means.

The computer has no concept of why it is significant to you but it certainly knows these things as much as any person could be said to know them because the epistemology of how you would know such a thing is the same.

Otherwise you're arguing about "feeling" that you "know" something - which is just a label for having some emotional reaction about a belief and not an indicator of genuine knowledge.

Genuine knowledge? I don't think that is possible without an emotional reaction of some kind. It's precisely because information is meaningless to the computer that it doesn't have any knowledge about the data which it is interpreting.

Every part of the universe captures information about other parts of the universe. If we are to claim that computers have knowledge, then so do rocks and trees. Read the rings on a tree, and you'll find all kinds of information. Does that mean that a tree "knows" that the winter of 1932 was especially cold, in the same sense that a human does?
 
I really don't know where that comes from. It doesn't seem to have anything to do with anything I've said.

Well, you said that human memory was different from computer state because we "know" about our memories while computers do not "know" about their state.

So, I am pointing out that you are wrong. You are wrong because humans, just like computers, can't make heads or tails of information unless it is provided at the right time in the right format. Hence, you can't attach someone else's brain to your foot and just "know" about their memories.

You only "know" about your own memories and only when the relevant neurons are hooked up exactly like they need to be and fire exactly when they need to fire. Which, coincidentally, are exactly the same requirements for a silicon computer to make use of any information.

So while it may be true that the information processing happens in a different manner between humans and silicon computers, the fact remains that the fundamentals of computation apply to both.
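
To make the "right format, right time" point concrete, here is a toy example of my own (nothing above depends on it): the same four bytes become an integer, a fractional number, or text depending entirely on which decoder reads them.

[code]
# Toy illustration: one byte string, three different interpretations.
import struct

raw = b"\x42\x48\x50\x43"

as_int = struct.unpack("<I", raw)[0]    # little-endian unsigned int: 1129334850
as_float = struct.unpack("<f", raw)[0]  # 32-bit float: roughly 208.28
as_text = raw.decode("ascii")           # ASCII characters: "BHPC"

print(as_int, as_float, as_text)
[/code]

None of those readings is "in" the bytes; each one needs the right machinery hooked up at the right moment, which is the claim being made about brains and silicon alike.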

You "think"? Now there's ironic. It's almost impossible to talk about human consciousness without using the language of human consciousness. None of which applies to computers, which don't think at all.

Well, you clearly aren't very involved with computers.

Because every single person I have ever worked with -- and that runs the gamut from network administrators to other A.I. programmers to artists to writers to grad students and professors -- commonly uses the term "it thinks <whatever>" in reference to a software construct.

What does this mean? It means either 1) computers do think, according to most people, and you are wrong, or 2) computers do not think, but people speak as if they do, which means people use the term "think" when referencing things that do not think, which possibly includes ourselves. Either way, then, your argument hasn't been strengthened.

I'm sure you really want AI to be a viable route to producing consciousness. But wanting won't make it so.

Hmmm.

You are telling an A.I. programmer "wanting won't make it so?"

No disrespect, but no **** Sherlock. Wtf do you think I am an A.I. programmer for?
 
Genuine knowledge? I don't think that is possible without an emotional reaction of some kind.

I am angry about this. I guess I genuinely know it to be wrong.

Does that mean that a tree "knows" that the winter of 1932 was especially cold, in the same sense that a human does?

No.
 
Read the rings on a tree, and you'll find all kinds of information. Does that mean that a tree "knows" that the winter of 1932 was especially cold, in the same sense that a human does?

Does a human "know" that the winter of 1932 was especially cold, in the same sense that at tree does?

Of course you will reply with "but a tree doesn't know anything."

Once again, proof enough that the HPC presupposes the truth of what it pretends to question.

And, of course, you will utterly fail to provide an operational definition of "know" that supports your argument to the contrary.
 
And how can we tell they are the same qualia? Well, we could rely on Mr and Mrs Smiths testimony ("Ooooh, I hate them snakes and spiders. They affect me the same way.") or as Mercutio would prefer, we could rely on behaviour.

Testimony is a behavior.
 
Because we can glean information from a computer, it doesn't mean that the computer "knows" it.

What about when a computer gleans information from itself?

What about when a computer gleans information from another computer?

What about when a computer gleans information from the world?

How about a dog? Does a dog "know" where its water bowl is?

What, exactly, do you think the verb "to know" actually means?
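
For what it is worth, all three of those cases are mundane in practice. A rough sketch of my own, standard library only (the URL is a made-up placeholder, so that part is left commented out):

[code]
import os, time

# 1. From itself: the process reads facts about its own state.
print("my pid:", os.getpid())

# 2. From another computer: fetch data over the network.
#    (Placeholder URL; uncomment with a real endpoint.)
# import urllib.request
# with urllib.request.urlopen("http://example.invalid/status") as resp:
#     print("remote says:", resp.read()[:80])

# 3. From the world: the wall clock, set by events outside the machine.
print("the world says it is:", time.ctime())
[/code]

Whether any of that counts as "knowing" is exactly what is in dispute; the gleaning itself is not.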
 
I take it then that your position is that you would 'experience nothing'? Conscious-'ness' being a kind of potentiality for experience as such, even though it might momentarily be without content, or empty. A lot could be said about that, but that's not the main point here.

Thus, it all really boils down to the assertion you made about "consciousness" being a field of some sort, which we haven't yet discovered. I'm OK with that proposition, as a proposition like any other, although, in all fairness, it's empirically unsupported so far.

Oh, I readily admit that it's mostly speculation. Even so, the hypothesis can justifiably be inferred from current scientific knowledge.

To the broader philosophical part. Okay, let's assume there's such a field and that qualia is the inside correlation of the outside aspect of field activity. What follows from that is a very rudimentary question: Have you explained subjective experience at all – 'how it feels' – or have you just explained the context where subjective experience takes place? I.e., have you also defined the problem away?

Sure, if qualia is an inside correlation of the outside aspect, then that's all there is to it, because the aspect is what it is, and it cannot be any other way. Yet, an entrenched philosopher would maintain that you have left something out, namely how it feels when experiencing. He might also ask what makes your theory better than the more conventional one, keeping in mind that there's no evidence for the field, and that it still fails to fully explain the subjective aspect of subjective experience to his liking. Obviously it doesn't even matter how you explain it empirically (field, neurons, computation, or what have you); it's still going to fall short, as it always does.

In summary: what insight into the "obvious" phenomenon of experiencing does it bring?

Well, I think that if it were confirmed that consciousness were a kind of field there would be two benefits as far as philosophy goes.

It would conclusively solve the binding problem. I think that this alone would set the consciousness-as-field hypothesis [CaF] as being a cut above the rest.

As to the question of how the theory would address subjective experience, I don't think that it would fully resolve the issue, but it would be a major step towards gaining a firmer understanding of it. If it were established that conscious experience was a property of a field, it would then lend itself to theoretical and empirical investigation. In principle, one would be able to gain deeper insight into what field states produce, or are correlated with, a particular 'hue' of qualia. It won't be as conclusive as the resolution to the binding problem, but it would be a major step in the quest to understand consciousness.
 
The HPC is unwarranted in my view. Whether I or anyone else has consciousness or merely the illusion of consciousness is a purely semantic distinction in my book. However, rejecting the HPC does not imply that the problem of understanding consciousness has been solved. Until computers can reproduce human behaviours associated with consciousness (and perhaps even after, since it is possible that we might produce consciousness without understanding it), I will view such claims with skepticism.
 
Oh, that point is solid gold!
Self introspection is blind to the process.

I credit Buddhist Insight Meditation with being able to help one see that there's no substance to a self that claims to be aware of itself.
But then there's nothing to be discovered beyond that.
There's no self observer that can perceive the process by which the illusion of a perceiving self arises.
Precisely! Because the observer is the illusion.

And, as an aside to AkuManiMani: Give up on the magic fields already. There is no such field, no such field is possible, and consciousness does not behave at all as if it were mediated by such a field. Rather, it behaves exactly as if communication had to go from one neuron to the next, to the next, to the next. Which we know is the case anyway.
 
Ask a small child if the sun rises and falls in the sky.

It does "rise and fall" in the sky from our perspective on earth's surface.

Once we have seen things in our environment, we can use those same brain pathways without the things being in our immediate environment--just as we can use the muscles that get us from here to there in order to simply run in place. Seeing is an active process; we do not merely passively perceive. There need be no "mental image" or "qualia" to stimulate a passive process.

'Mental image' and 'qualia' are simply terms we use to describe those processes. I don't think that it's really justified to conclude from that that 'minds' or 'qualia' do not exist.

If you want to take that route, one can argue that atoms do not exist; they are simply something that sub-atomic particles do. What are sub-atomic particles? Merely the behaviors of their constituents. One can continue this chain of reasoning all the way down until you've 'dispelled' all of reality, in much the same way you've dispelled the 'illusion' of the mind.

In short, introspection is a lousy way to gain knowledge into how we actually perceive. We think it feels like we have qualia; to me, it looks as though we perceive our environment, not our qualia.

And yet, you cannot speak on the subject without assuming qualia. 'Qualia' is just a term that we use to label the class of phenomena you just invoked ["we think", "it feels", "it looks", etc].

If [as it seems by your last statement there] you want to argue that we just directly perceive our environment you're simply taking things at face value, the very same thing you're accusing the 'qualia' advocates of. In reality every observation of an 'external' event is the observation of an 'internal' sensory event.

[This particular issue is yet another reason that I invoke the 'field' hypothesis. I won't elaborate on it here, but if you inquire about it, I will in another post.]

The reality is that neither introspection nor extrospection is more "true". Their relation is complementary in the same way that qualities and quantities are.

Dawkins quotes a story about Wittgenstein: 'Tell me,' the great twentieth-century philosopher Ludwig Wittgenstein once asked a friend, 'why do people always say that it was natural for man to assume that the sun went round the Earth rather than that the Earth was rotating?' His friend replied, 'Well, obviously because it just looks as though the Sun is going round the Earth.' Wittgenstein responded, 'Well, what would it have looked like if it had looked as though the Earth was rotating?'

But, in truth, it's really all relative to one's perspective. Relative to an observer on earth, the earth is indeed stationary and the sun, the planets, and the rest of the cosmos are moving relative to it. We assume the perspective of the earth being the body in relative motion because it's the easier of the two perspectives to calculate. Neither one is more 'true' than the other; in a transcendent sense, they both are the same.

Here's a question for you to think on: Were the theories of Special & General Relativity developed via introspection or extrospection? ;)
 
Another excellent example of Darat's point.

I know what his point is. I also suspect that he has not considered where that line of reasoning ultimately leads...

I really hope that was an attempt at a joke that I just didn't get. Because as a serious answer, it's not even in the zip code.

Did you laugh? ;)

AkuManiMani said:
Pixy is right about one thing, tho. Brains can be thought of as a specific class of computer.

With this definition in mind....

[G1] Brains are living computers.
[G2] Minds are living virtual computers.

Oh?

"Oh" what?

The binding problem is a function of dualistic language.

Okay, so?

no.
no.
lord, no.

I went thru the trouble of providing argumentation and rationale for my assertions. I expect you to extend the same courtesy.

I will call it 'Bob".

Okay.

Ah... behavior. Observable behavior. continue...

Everything can be defined as 'behavior'. So what?


Oh? Why does my laptop computer hate me, then?

You hate your laptop and are projecting [i.e. attributing] that emotion onto your laptop.

Or...

You could say that hate is simply a word and does not exist.

Or...

You could say that hate is merely a behavior, and so, does not exist.

Or....

[I think you can see where this is going...]

Oh? My computer just told me how long it was taking to download something. Accurately, too. It must have been aware of (or conscious of) its own rate of processing in order to do this.

Oh? My [eyes] just told me how long it was taking to download something. Accurately, too. It must have been aware of (or conscious of) the rate of processing in order to do this.
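
As an aside, the download dialog's report of the remaining time is nothing but arithmetic over the machine's own counters. A minimal sketch of the usual estimate, my own reconstruction rather than any particular program's code:

[code]
import time

def eta_seconds(bytes_done, bytes_total, started_at):
    """Estimate remaining seconds from the average throughput so far."""
    elapsed = time.monotonic() - started_at
    if bytes_done == 0 or elapsed == 0:
        return float("inf")        # nothing received yet, no estimate
    rate = bytes_done / elapsed    # bytes per second, averaged
    return (bytes_total - bytes_done) / rate

# Example: 30 MB of a 100 MB file after 12 seconds -> about 28 s remaining.
# print(eta_seconds(30_000_000, 100_000_000, time.monotonic() - 12))
[/code]

The estimate is only as good as the assumption that the recent rate will continue, which is why such dialogs are so often wrong; whether it amounts to the machine being "aware of its own rate of processing" is the question at issue.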


AkuManiMani said:
- It is, therefore, unjustified to describe such computers as being conscious

Or completely justified. Tomayto tomahto.

Or completely unjustified. Tomahto Tomayto.

See, I can do it too ;)

With the above definitions, your handwaving is labeled differently from Descartes' handwaving. Wax on, wax off.
Why call it mind? What exactly are you calling mind?

Define 'handwaving' and explain how it does not apply to your own assertions so far.

Considerably more trouble than it is worth. Assumes too much that is unsupported about what "mind" is, and does.

The very same argumentation you've employed can be used to 'prove' or 'disprove' anything you choose to. I established my premises, supported those premises with deductive and inductive reasoning, and explained that reasoning. All you've done, in this particular post, is make assertions with hardly any argumentation to back them up.
 
I know I said I was done talking to you but I'm bored so -- what the hell? /shrug

Precisely! Because the observer is the illusion.

But illusion is a perception; what then is perceiving?

And, as an aside to AkuManiMani: Give up on the magic fields already.

I never started on 'magic fields'. It was you, rocketdodger, et al. who invoked 'magic'. How could I 'give up' on something I never supported?

There is no such field, no such field is possible, and consciousness does not behave at all as if it were mediated by such a field. Rather, it behaves exactly as if communication had to go from one neuron to the next, to the next, to the next. Which we know is the case anyway.

PixyMisa, what is a field? :rolleyes:
 