
Explain consciousness to the layman.

Status
Not open for further replies.
I can often tell, from a handful of symbols on my computer screen, that someone posting on this subject is feeling angry. Often because of some symbols that I've caused to appear on their computer screen.

That's your evidence?
 
That's your evidence?

No, my evidence that human beings are able to figure out how other human beings are feeling is the entirety of human interaction. If we leave aside everything that human beings say and do, then indeed, there is no evidence.
 
No, my evidence that human beings are able to figure out how other human beings are feeling is the entirety of human interaction. If we leave aside everything that human beings say and do, then indeed, there is no evidence.

I don't understand what you are trying to say. Please restate a bit more clearly.
 
But there is no way for the robot researcher to distinguish between something that causes something to claim to have a subjective experience, and for something to actually have a subjective experience.
I disagree. We have a label, "subjective experience". We have assigned that label to something, and we use it in terms of a theory of mind.

There should be a correlate to that label in the mechanistic explanations, and a correlate to things we claim in our theory of mind to the mechanisms. There has to be that correlate, because if it's not there, we're not describing our subjective behaviors. And since the thing we labeled with the symbol "subjective experience" should play a critical role in the causal chain to our talking about subjective experiences, then any correct causal theory of our inner workings must include it.

Therefore, no correct causal theory of our inner workings can exclude our subjective experiences, if we have them. Even if our robot researcher developed this theory, the theory must include our subjective experiences as a component, if we have them.

Thus:
If he can explain the behaviour in terms which do not involve subjective experience being real, then surely he will do so.
If his theory is correct, then either his explanation does involve subjective experiences being real, or subjective experiences are not real.
We might have a number of explanations for the box producing the shape of a star, but we are going to prefer a little star shaped object inside blocking the path of the pen to the existence of a magic elf inside the box.
Subjective experiences, if real, are not magical elves.
If the robot can find some principle that positively identifies subjective experience as being real, then he might well accept it, even though it is something that he cannot access himself. However, if an explanation can be found that doesn't involve any new principle, then that is what he will tend to go with.
Subjective experiences should not be a new principle, if they are real--unless you need a new principle to create a correct causal theory.
He will note that the human being claiming subjective experience is able to make erroneous statements about his external environment due to incorrect information, and hence can assume that the claim of subjective experience is similarly subject to error.
How does the robot researcher identify that theories involving subjective experiences are erroneous? Certainly it's insufficient for the robot researcher to merely use different words for things.
 
I don't claim that the process is infallible. I do claim that it is better than random.
I'd agree with that. One might expect to do better than random when communicating. But it's not evidence for humans being 'very good' at figuring out what others are feeling.
 
I'd agree with that. One might expect to do better than random when communicating. But it's not evidence for humans being 'very good' at figuring out what others are feeling.

I think that the ability of human beings to detect the feelings of other human beings via the slightest of clues is extraordinary. It is sufficiently developed that people have to actively attempt to hide the outward appearance of their feelings in order that they not be detected. People who can't do it are considered to have a serious disorder. Whether you consider "very good" as an accurate description is of course a value judgement.
 
I disagree. We have a label, "subjective experience". We have assigned that label to something, and we use it in terms of a theory of mind.

There should be a correlate to that label in the mechanistic explanations, and a correlate to things we claim in our theory of mind to the mechanisms. There has to be that correlate, because if it's not there, we're not describing our subjective behaviors. And since the thing we labeled with the symbol "subjective experience" should play a critical role in the causal chain to our talking about subjective experiences, then any correct causal theory of our inner workings must include it.

Therefore, no correct causal theory of our inner workings can exclude our subjective experiences, if we have them. Even if our robot researcher developed this theory, the theory must include our subjective experiences as a component, if we have them.


If subjective experiences are part of a causal link, then a behavioural analysis will eventually have to take account of them. If however subjective experience is not part of a causal link, then it will be possible to make a full behavioural analysis without including subjective experience.

Even if a sufficiently detailed analysis does include subjective experience as a necessary part of a causal chain, we certainly don't have access to that causal chain so far. Even supposing that our objective robot might, given some total understanding of the system, include subjective experience as part of the model, he would not need it to explain the system given a level of understanding comparable to what we have now.

Thus:

If his theory is correct, then either his explanation does involve subjective experiences being real, or subjective experiences are not real.


Clearly if subjective experiences are real, and his explanation rests on them not being real, then his theory is not correct. This is not surprising because he lacks information that is possessed by people who do have subjective experiences - i.e. that subjective experiences are real.

Subjective experiences, if real, are not magical elves.
Subjective experiences should not be a new principle, if they are real--unless you need a new principle to create a correct causal theory.

In the world of the objective robot, subjective experiences are every bit as unreal as magical elves. They do not explain anything in the way the universe works except in the very specific case that human beings claim to have them. It is likely that the objective robot would not even understand what the human beings meant by their feelings. If he could construct a theory which explained their behaviour - including their claim of subjective experiences - without having to allow for the reality of subjective experiences - then it seems likely that he would do so.

It is indeed the case that such an objective approach would assume that subjective experience is not real. If subjective experience is real (and I am of the opinion that it undoubtedly is) then the failure of the objective approach to establish this shows that there's something lacking in the objective approach to this area.

That is in itself not surprising. The human investigator has access to all the information that the objective robot possesses, and in addition, had the ability to know that subjective experience exists. The human being is better informed than the objective robot, and hence is more likely to make a correct judgement.

How does the robot researcher identify that theories involving subjective experiences are erroneous? Certainly it's insufficient for the robot researcher to merely use different words for things.

The robot will not be able to establish as a matter of certainty that subjective experience does not exist, but he will be able to construct a perfectly valid theory that fully accounts for human behaviour within the parameters of what is already known. In order to allow that subjective experience is real he would need to incorporate a number of ill-defined additional concepts into his world model.

It might well be that given sufficient knowledge the objective robot would end up having to accept that subjective experience was real. However, that begs the question. We don't know what additional research might establish. If we assume that the objective robot has access to the same knowledge we have now, then he might well assume that subjective experience does not exist. We, on the other hand, are very unlikely to assume this, because we know it does.
 
If we assume that the objective robot has access to the same knowledge we have now, then he might well assume that subjective experience does not exist.

What an utterly stupid strawman.

Current knowledge suggesting that perhaps subjective experience isn't what people like you think it is, is not equivalent to suggesting that subjective experience doesn't exist, period.

Indeed, I don't see how anyone could rightly think current knowledge suggests such a thing since current knowledge includes the testimony from billions of humans that subjective experience is indeed real.
 
But there is no way for the robot researcher to distinguish between something that causes something to claim to have a subjective experience, and for something to actually have a subjective experience.
Of course there is. The researcher can both question the robot regarding its claim and examine the pathways leading to the claim.

He will note that the human being claiming subjective experience is able to make erroneous statements about his external environment due to incorrect information, and hence can assume that the claim of subjective experience is similarly subject to error.
And?
 
You can still be a strict materialist - it's just that what we mean by matter has changed as we have learned more about it.

Yeah but that's like saying you only eat bananas, and "oh, well, I consider all edible compounds to be bananas."

Seems like it is simpler to just call yourself a food-eater rather than a banana-eater-where-banana-means-any-type-of-food.
 
You can still be a strict materialist - it's just that what we mean by matter has changed as we have learned more about it.

Yes, it seems to be reducing in size each time I look. Soon we'll be left with elusive energies and forces in a void.
 
If subjective experiences are part of a causal link, then a behavioural analysis will eventually have to take account of them.
Sure.
If however subjective experience is not part of a causal link, then it will be possible to make a full behavioural analysis without including subjective experience.
I'm not sure this is coherent. It should be phrased the other way--if a full behavioral analysis is possible, and that analysis does not include something that can rightfully be called "subjective experience", then there's no such thing as a subjective experience.

This is the real problem scenario. If you really do have subjective experiences, then it necessarily follows that the term "subjective experiences" means something; and that requires, as a requisite condition, that we were able to associate that label with some thing. How did we ever manage to do this if there's no causal connection?
Even if a sufficiently detailed analysis does include subjective experience as a necessary part of a causal chain, we certainly don't have access to that causal chain so far.
This directly contradicts your claim that you know you have subjective experiences.
Clearly if subjective experiences are real, and his explanation rests on them not being real, then his theory is not correct.
Suppose we pick up the machine from before, and find a label on it claiming that it was made with Infografix Technology™. "Humbug!", says your robot researcher. "I cannot believe in what is obviously a ploy by a marketing guy to sell this device." And so the robot researcher opens the machine, starts fiddling with it, and then manages to figure out exactly how the machine works.

"Aha!", says the robot researcher. "Just as I suspected. I now have a complete theory of this machine's inner workings, and nowhere did I ever run across this Infografix Technology thing. I knew there was no such thing!"

Now, I suspect the robot researcher is loony. Even knowing the full workings of the machine, there's no way it can conclude that Infografix Technology does not exist, because the robot researcher forgot to figure out what Infografix Technology even means. The robot could easily be wrong, given that the workings of the machine that the robot researcher figured out are Infografix Technology.
This is not surprising because he lacks information that is possessed by people who do have subjective experiences - i.e. that subjective experiences are real.
Back up a bit. Before telling me that we have information that subjective experiences are real, tell me what it means for them to be real. And before we get there, please tell me how we came to conclude that these were the things we should be attaching the label "subjective experience" to, in order to call ourselves worthy native English speakers.
In the world of the objective robot, subjective experiences are every bit as unreal as magical elves.
There's only one world though. The objective robot needs to figure out what the words "subjective experience" refer to. Upon opening up our heads, it figures out what causes us to utter those words. Somewhere in that mess is the key to that objective robot understanding what "subjective experience" means.

The next step is for that robot to determine whether the thing it has discovered "subjective experience" should mean is actually there.

Given this, I'm not so sure I agree with your conclusions. If we are really describing subjective experiences, then there must be something there that we're talking about. This thing must play a critical causal role in our description of it. And therefore, the robot researcher should find a correlate to the term "subjective experience" and some thing that really is there, causing us to label it with that term. If the robot does not associate the term "subjective experiences" with the mechanism that causes us to describe them at this point (which should necessarily exist, if we have those things), then the robot is broken. Check the warranty.
If he could construct a theory which explained their behaviour - including their claim of subjective experiences - without having to allow for the reality of subjective experiences - then it seems likely that he would do so.
"Without having to allow for the reality" is a bit of a bigger claim than you're letting on. It's more like denying that our star maker uses Infografix Technology. Sure, you can say that a theory including Infografix Technology would be superfluous, but you cannot actually deny the reality of that theory unless you know what Infografix Technology refers to.

But in our case, your robot researcher should know exactly what causes us to claim we have subjective experiences. That in itself tells the robot what it is we're referring to.
It is indeed the case that such an objective approach would assume that subjective experience is not real.
No, it's not the case. The subjective reality and the objective reality should be the same reality. You are underestimating what it takes in order to make a claim that a thing is not real.
In order to allow that subjective experience is real he would need to incorporate a number of ill-defined additional concepts into his world model.
He would only have to incorporate a theory concerning the meaning of "subjective experience" relative to the entities he is studying. The causal mechanisms are sufficient for him to incorporate that theory.
If we assume that the objective robot has access to the same knowledge we have now, then he might well assume that subjective experience does not exist. We, on the other hand, are very unlikely to assume this, because we know it does.
You're directly contradicting yourself above. If we know subjective experiences exist, and the robot has access to the same knowledge we have now, then the robot automatically knows subjective experiences exist.
 
I think that the ability of human beings to detect the feelings of other human beings via the slightest of clues is extraordinary. It is sufficiently developed that people have to actively attempt to hide the outward appearance of their feelings in order that they not be detected. People who can't do it are considered to have a serious disorder. Whether you consider "very good" as an accurate description is of course a value judgement.

Quite. I can surmise nearly everything my partner is thinking and feeling from observing the movement in her top lip, when considered in the context of the situation at hand.
 
Yeah but the logic is the same for any monism. Nobody is a strict materialist anymore, that position makes even less sense than dualism.
I realised that the omission of the comma in bold in this sentence below from my last post made it virtually illegible.

No,
monism can include confections of dualism such as the spirit/matter relationship and the subjective/objective duality. While remaining a monism at a more fundamental level.
It should read properly this time.

My point remains: assuming monism, any kind of dualism, including spirit/matter and objective/subjective dualities, can be present provided they are aspects of an underlying monism.

It would be our limited knowledge of the nature of this monism which would fail to appreciate this situation.

This point was elucidated eloquently by Inchneumwasp before he left the forum in despair at the lack of philosophical literacy on the forum.
 
I realised that the omission of the comma in bold in this sentence below from my last post made it virtually illegible.

It should read properly this time.

My point remains: assuming monism, any kind of dualism, including spirit/matter and objective/subjective dualities, can be present provided they are aspects of an underlying monism.

It would be our limited knowledge of the nature of this monism which would fail to appreciate this situation.

This point was elucidated eloquently by Inchneumwasp before he left the forum in despair at the lack of philosophical literacy on the forum.

But that isn't really dualism then.

If a duality is an aspect of an underlying monism, then by definition each side of that duality can have knowledge of the rules governing the other.

But the whole point of spirit/matter dualism, for example, is that knowledge of the spirit is impossible to gain from the perspective of matter alone. In other words, if we made a robot that could think, but didn't have a spirit, it couldn't know anything about the spirit world.

This is in direct opposition to the fundamental tenet of monism, that there is a *single* underlying set of rules that everything follows. Because if the spirit world and the material world share the same rules, then one doesn't need to have access to the spirit world to learn the rules that the spirit world functions according to.

If you want to say that you think spirit/matter duality is possible, and that one day we will understand the full set of monistic rules that govern both, then that is a perfectly valid viewpoint. However it is certainly not dualism -- it is just another way of saying there is stuff that we don't understand yet.
 