The Hard Problem of Gravity

By fits and starts, probably, as we are doing. We can't make much progress, I think, until we do the hard work of teasing out what these words mean.

Exactly. And if it is incredibly hard to elucidate what "matter" is (we have dealt with this on the JREF, as you and others know well), what hope do we have for something like "mind"?

Yes.

Angels on a pinhead.
 
And why is it woo? And where does this talk of the illusion of consciousness come from?

Don't get me wrong. I'm FAR from being a "materialist". I believe no dualism, no monism, no "ism" can deal with reality. Talking about minds and consciousness is VERY slippery because, among other things, people in general have bought this idea that "materialism is truth", or some fantasy like that.
 
At that age, no, I don't think they are able to do that. But as we get older we learn to associate general emotions with facial expressions and body language. Eventually, we acquire the common language of the society we live in, and individuals can communicate their internal states in more detailed ways which we can relate to, on some level.

Trying to communicate our internal states is what human beings spend much of their time doing. It's fundamental to being human. Human beings who are unable to interpret the states of other human beings are regarded as seriously impaired - autistics and psychopaths.
 
To briefly state where I stand: I believe that all theoretical accounts of reality (including materialism, physicalism and even religions) are attempts to make sense of, and to summarize, shared experiential events.

In this sense, theories draw predictive laws that are solely made of buoys pointing to other buoys in a dense fog. Buoys are composed of facts and their attached meanings. In other words, we do not make maps of reality (as people normally believe); all we do (and can hope to do) is describe the previously mentioned shared experiential events in a fairly orderly way.

What this means, in the context of the present discussion, is that we all live in a closed phenomenal world (PW), and it is from this PW that we project things like "matter" and "minds" (to name the two central concepts involved constantly in this thread).

Now, do not confuse this with idealism; there is something "outside" the PW. We can deduce the existence of other PWs because language is a learned process, and we can deduce that the cause of our PWs is certain regularities, which we can call "reality beyond us".

What I don't believe is that we can create theoretical models that allow us to "see" that reality without us... a view from nowhere, if you like. All our theories are necessarily anthropocentric and have the purpose of dealing with our PW in an orderly way, nothing more... No matter how hard we try to believe otherwise, our models deal with us, not with any projected "real reality".
 
So you are saying the behavior of every system in the universe would be identical if there were no humans to observe the difference?

What is this, some kind of joke? You are joking, right?

No, I'm saying the systems would each individually be entirely different if not interpreted. It's only human interpretation that makes different executions of the same algorithm the same.
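To make that concrete (a toy sketch of my own, not anything anyone upthread wrote): here are two physically different executions, with different code paths and different memory behavior, that humans nonetheless call "the same algorithm" because the input/output mapping coincides.

```python
# Two different physical processes: a loop that mutates an accumulator,
# and a chain of recursive calls on the stack. At the hardware level
# they are entirely different systems.

def factorial_iterative(n: int) -> int:
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_recursive(n: int) -> int:
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

# We interpret them as "the same computation" because their observable
# input/output behavior matches on every input we check.
for n in range(10):
    assert factorial_iterative(n) == factorial_recursive(n)
```

Whether that sameness lives in the machines themselves or only in our interpretation of them is, of course, exactly what's being argued here.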
 
I take it from your "mad scientist" talk that you're alluding to my comment,
Actually, no! My apologies if you thought so! I was actually alluding to an old MAD magazine story, a take on the Frankenstein tale, as an illustration of how we do have a theory of public behavior. A silly one, but certainly not pointed at you.
but I don't really understand how what Belz said is "exactly the right answer" for the point I made, because my contention had to do not with behavior but with experience, and the two concepts are quite different from one another. A "private behavior" would just refer to a cognitive process that is not outwardly communicated. An "experience" would refer to the actual sensations associated with those cognitive processes. The conundrum I pose is this: since you can explain any behavior, public or private, completely in terms of cognition, where is there any room in a theory of the mind for the concept of experience?
Frankly, this really depends on what you mean by "cognitive process" and by "experience". Cognitive neuroscience has recently been finding the actual nerve pathways and interconnections which underlie one or another aspect of thinking. This is wonderful stuff, and far removed from a former cognitive science which inferred cognitive mechanisms (some assumed to be physical, but others which were unabashedly mental mechanisms) circularly, from the behavior they allegedly caused. It's the difference between opening up the hood of a car and looking at the gears, and keeping the hood closed and inferring what must be there from how the car behaves (some mechanics are good at this, but remember that they have already looked under the hoods of many cars; in this analogy, such foreknowledge is absent).

You speak of "experience" as separate from this, and I have to wonder what it adds that is not already covered by the "cognitive process". It's like the difference between seeing a tree, and seeing the image of a tree; what does the latter add? At least one person on this thread has essentially re-labeled the process of seeing a tree as an image; your words here explicitly separate the two. I dislike the language because it is dualistic, and someone could speak of an "image" (meaning the process) and be understood as speaking of some superfluous concept like qualia.

And can you really explain everything in terms of cognition? I've certainly never seen it done. I have seen cognition used to prop up some fictional entities, though.
 
That, right there, is all that needs to be said.

That it's possible for one human being to encode thoughts using a computer to pass them on to another human being? I hardly think that is devastating to my viewpoint. I could have used sign language, or written my thoughts down on a piece of paper. And if we'd all been erased from existence, what significance would those writings have?
 
A label that indicates that, if food were available, we would eat it.

But you've given considerable and thorough attention to the difference between the sensation and the behaviour. We can feel hungry but not need to eat. We can feel hungry and decide that we don't need to eat. We can feel no hunger and eat anyway - and you gave a personal example of how that might be so.

I'm saying that there is no physical explanation for the experience of sensation.
 
To briefly state where I stand: I believe that all theoretical accounts of reality (including materialism, physicalism and even religions) are attempts to make sense of, and to summarize, shared experiential events.

In this sense, theories draw predictive laws that are solely made of buoys pointing to other buoys in a dense fog. Buoys are composed of facts and their attached meanings. In other words, we do not make maps of reality (as people normally believe); all we do (and can hope to do) is describe the previously mentioned shared experiential events in a fairly orderly way.

What this means, in the context of the present discussion, is that we all live in a closed phenomenal world (PW), and it is from this PW that we project things like "matter" and "minds" (to name the two central concepts involved constantly in this thread).

Now, do not confuse this with idealism; there is something "outside" the PW. We can deduce the existence of other PWs because language is a learned process, and we can deduce that the cause of our PWs is certain regularities, which we can call "reality beyond us".

What I don't believe is that we can create theoretical models that allow us to "see" that reality without us... a view from nowhere, if you like. All our theories are necessarily anthropocentric and have the purpose of dealing with our PW in an orderly way, nothing more... No matter how hard we try to believe otherwise, our models deal with us, not with any projected "real reality".

I agree that, basically, all theories and 'isms' are models of reality and not reality itself. Even so, one must begin with a frame of reference in describing anything. Our systems of theories and languages may be, to some extent or another, self-referential, but this is the first and only basis upon which we can build any view of the world.
 
A label that indicates that, if food were available, we would eat it.

As you pointed out earlier, there are instances of people choosing to eat when they aren't really hungry. Of course, when an organism is hungry the probability that it will try to ingest food will go up, but such a description tells us very little about what the subjective experience of hunger is.

IMO, behaviorism, and some other theories of cognition, have become too content with the 'black box' approach to understanding. It's comparable to being content with Mendelian descriptions of heredity without wanting to seek out how it comes to be or what its underlying basis is :-/
 
I think there is something that it is like to be a bat or a dog. It's very different, undoubtedly, from what it is like to be a human. I don't think we experience most things in the same way that dogs and bats do. Their experience may be more like ours would be if our left hemisphere were turned off, so that the story flow stops.

But we can agree to disagree.

For me, being a human simply means processing information in the way humans do. Bats and dogs likewise. There are feelings and sensations specific to the species, I'm sure. But I don't see that this is really what Nagel meant when he asked the famous question.

Nick
 
No, I'm saying the systems would each individually be entirely different if not interpreted. It's only human interpretation that makes different executions of the same algorithm the same.

Right.

So let's see where this reasoning takes us:

1) Without human interpretation, every system in the universe is entirely different from every other system.
2) I am not the same system as you.
3) Therefore, by 1) and 2), there is no similarity between you and me other than what humans interpret as being a similarity.
4) Therefore either a) consciousness is nothing but human interpretation, or b) all systems can be conscious, or c) exactly one system can be conscious.

Congratulations, you have just shown the HPC to be garbage!

It strikes me as somewhat comical that even when you think you are arguing in favor of the HPC, you actually aren't. But then again, that's how logic works -- it always wins in the end.
 
I agree that, basically, all theories and 'isms' are models of reality and not reality itself. Even so, one must begin with a frame of reference in describing anything. Our systems of theories and languages may be, to some extent or another, self-referential, but this is the first and only basis upon which we can build any view of the world.

That is correct. Still, my point here is not that our theories are not good (for their specific purposes), but that the discussion is flawed right from the beginning. We do not live in "a material world". Computers and brains are different kinds of stuff that behave similarly but are not equal (in any sense). "Minds" are very poorly described, so badly in fact that I believe we are merely talking about woo. And in any case, as far as the common definitions of "mind" go, I'm certain that minds are not a product of the brain.

No, I'm not stating that they are different from the brain either (you know, I have to state things clearly because some people here like to jump on phrases like the one above, screaming that I'm proposing some kind of "immaterial stuff"... or some other idiocy ;)).

If we simply started from what we have, instead of imposing our ideas on reality, we would be able to reach surprisingly different conclusions, that's all I'm really saying.
 
That it's possible for one human being to encode thoughts using a computer to pass them on to another human being? I hardly think that is devastating to my viewpoint. I could have used sign language, or written my thoughts down on a piece of paper. And if we'd all been erased from existence, what significance would those writings have?

They would have significance to any system they had significance to.

What is devastating to your viewpoint is that if you enter a line of text, hit "Submit Reply," and suddenly every single human evaporates, your computer and the rest of the internet will still behave in a certain way. In particular, your computer will still package information into packets, your ISP will route them according to other information, any queries to the JREF Forum servers will be completed according to still other information, and it is just possible that latency issues might result in the information the no-longer-existing you generated being displayed on a screen somewhere. There would be no human to interpret that information in the way that humans interpret information -- so what?

The fact that binary, though completely alien and meaningless to humans, is the language of digital computers sort of spits in the face of your notion that without humans there would be no meaning, and hence no information, in the universe.
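To put that in concrete terms (a toy sketch of my own, not from any post above): text round-trips through binary entirely mechanically. The bits determine the recovered text with no interpreter in the loop.

```python
# A message is encoded to bytes, expanded into a string of 0s and 1s,
# and reconstructed from those bits alone. Every step is a mechanical
# transformation; no human needs to "read" the bits for the round trip
# to succeed.
message = "Submit Reply"
encoded = message.encode("utf-8")                      # bytes the machine handles
bits = "".join(f"{byte:08b}" for byte in encoded)      # the binary form
decoded = bytes(
    int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)
).decode("utf-8")

assert decoded == message  # the bits alone suffice to recover the text
```

Whether that counts as "meaning" without a human reader is, again, the point under dispute; the mechanics, at least, run without one.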

The same can be said of the cries of whales and the chirps of birds and the chemical signals bacteria send each other.

For you to claim that such communication between entities that clearly understand the messages would in fact be meaningless if there were no humans is, quite literally, a joke.
 
If we start from what we have, instead of imposing our ideas on reality, we would reach surprisingly different conclusions, that's all I'm really saying.

Oh, you too?

So tell me then -- what operational changes should be made in cognitive science and artificial intelligence research in order to "start from what we have, instead of imposing our ideas on reality?"

Akumanimani couldn't answer that question. Can you?
 
westprog said:
No, I'm saying the systems would each individually be entirely different if not interpreted. It's only human interpretation that makes different executions of the same algorithm the same.

Right.

So let's see where this reasoning takes us:

1) Without human interpretation, every system in the universe is entirely different from every other system.
2) I am not the same system as you.
3) Therefore, by 1) and 2), there is no similarity between you and me other than what humans interpret as being a similarity.
4) Therefore either a) consciousness is nothing but human interpretation, or b) all systems can be conscious, or c) exactly one system can be conscious.

Congratulations, you have just shown the HPC to be garbage!

It strikes me as somewhat comical that even when you think you are arguing in favor of the HPC, you actually aren't. But then again, that's how logic works -- it always wins in the end.

I'm going to have to differ with you on this one. For starters, we as humans can recognize analogous relations, but analogy is just similarity, not identity. There are points of overlap that relate all things to one another [which allows us to recognize analogy and metaphor], but the point is to define the essential characteristics of entities.

As with every other process in the universe, consciousness and mind are classes of phenomenal interaction. Artificial computers are analogous to brains in that they process and manipulate symbols to produce an output of some kind. What we seek to find out is what the essential processes are that give rise to what we colloquially call consciousness. If one were to go by the definition being proposed by S-AI, then everything is 'conscious' and we're left with the nonsensical conclusion you've come to.

It is clear that simply processing information in a self-referential manner is not sufficient to produce consciousness. That information processing is a necessary prerequisite for conscious experience is a given; everything that exists processes information in some capacity. What we seek to find out is what the sufficient conditions for consciousness are, and what its essential physical characteristics are. To date, we don't have the answers to these questions, which happen to be the core of the HPC/EMA.
 
Oh, you too?

So tell me then -- what operational changes should be made in cognitive science and artificial intelligence research in order to "start from what we have, instead of imposing our ideas on reality?"

Akumanimani couldn't answer that question. Can you?

That wasn't the question you asked me and I actually have, in fact, answered your original question.
 
You speak of "experience" as separate from this, and I have to wonder what it adds that is not already covered by the "cognitive process". It's like the difference between seeing a tree, and seeing the image of a tree; what does the latter add? At least one person on this thread has essentially re-labeled the process of seeing a tree as an image; your words here explicitly separate the two. I dislike the language because it is dualistic, and someone could speak of an "image" (meaning the process) and be understood as speaking of some superfluous concept like qualia.

And can you really explain everything in terms of cognition? I've certainly never seen it done. I have seen cognition used to prop up some fictional entities, though.
Answering the latter question first, no, the specific functions of the mind haven't been entirely explained yet, but I don't expect the brain to turn out to be a magic box, so I think one can make the educated presumption that we're eventually going to be able to explain every little thing a person does and expresses about themselves in terms of cognition.

On the issue of "experience," experience and cognition are two conceptually distinct entities, regardless of whether or not they may represent aspects of the same thing in a practical sense. When you conflate the two, you are introducing the concept of experience into a theory unnecessarily.
 
For me, being a human simply means processing information in the way humans do. Bats and dogs likewise. There are feelings and sensations specific to the species, I'm sure. But I don't see that this is really what Nagel meant when he asked the famous question.

Nick


What Nagel meant by it was that he assumed that everyone agrees that bats do experience, so there is something that it is like to be a bat. Experiencing and having feelings appear, in his view, to be closely linked if not inseparable.

Having experiences means that something occurs in an organism which can feel that 'occurrence'; I have no reason to doubt that bats not only process information but also 'feel'.

The real issue is what do we mean by 'feel'? Say, for instance, the current example - what do we mean when we say that someone feels hunger?

My own answer would be that information processing occurs, producing a behavioral tendency to want to eat. That behavioral tendency can be ignored for some other gain, or 'given in to'.

Tendencies toward action have to show up somehow. What we call feelings seems to be the way that they do.


ETA:

At least that's my take on it.
 
