I don't believe that something that didn't have subjective experience itself would be able to produce any kind of evaluation of it.
"Subjective experiences are not real" is a kind of evaluation of it.
I don't think that any set of rules that it could apply could determine what subjective experience is.
But subjective experiences should have a mechanism, and that mechanism should have a causal relation to your describing them with the term "subjective experience". Our robot can use this as a basis for attaching the term "subjective experience" to a meaning. In the absolute worst case, that meaning would include too many things, and the ill-defined extra that you think is critical to subjective experience would remain unknown.
But there should also be a reason, discoverable by the robot in the causal connections--at least in theory--for why you say that there is this ill-defined extra. If that reason is, in fact, based on the actual ill-defined extra, then the robot is in a better position to say what subjective experience really is than you are, even with your "cheat" of actually experiencing it. And if it is not, then there is no such ill-defined extra.
If you see a third possibility, then please, point it out.
The tools we use to make judgements are, in theory, capable of being applied automatically.
But the tools we use to make judgments are, in practice, applied automatically.
The robot would apply test A, which would say that it has something that looks like a penny, and test B, which would say that it has something that doesn't weigh the same as a penny.
You're being a very poor devil's advocate. I think I can guess why you're going along these lines--it's because you want to remove the analogy of "looks like a penny", because you think I'm defining subjective experience. That is, in fact, not what I'm doing. I'm playing the devil's advocate, granting that the robot has no subjective experience, but trying to show you that even in that scenario, the robot winds up with a perfectly good theory of what subjective experience is.
What you're trying to do, I'm guessing, is to make the robot's evaluation seem to me to be that much further removed from experiencing, by associating it with things like objective measures. However, in doing so, you are ironically weakening your argument. Here's why.
Suppose your robot does analyze pennies and photographs this way. Well, we do not. We come to the conclusion fairly quickly that the penny behind the curtain is a real penny, but the penny in the photograph is just a photograph of a real penny. Now let's say that the robot hears us make this "magical" claim that we can tell real pennies from photos of pennies without weighing them--just by looking at them. Let's further suppose that the robot doubts us, and performs a simple test of our alleged capabilities using something very similar to the MDC.
Now here's the problem. We pass the test.
Now what is the robot to do with our claims of "magical" capabilities of divining real pennies from photos of pennies? At this point it must conclude that there should be some underlying mechanism that we use that allows us to make this determination, even though it does not know exactly what this mechanism is. Remember the plotting machine making a star? Just like that.
And, similar to that machine, we claim that we use Subjective Experience technology (TM) to perform this analysis. Well, the robot doesn't know exactly what Subjective Experience technology is, so it goes about opening our heads and figuring out how we really do it.
You can guess the rest of the plot... I keep repeating this story.
If this equates to subjective experience, then everything that interacts with something else has a subjective experience of it.
I'll grant that it doesn't equate, and note that I never said it does. I still reach my conclusion that the robot doesn't arrive at the claim that subjective experience doesn't exist (with the caveat that it might, depending on the robot and the line of inquiry; nevertheless, I maintain you should check that robot's warranty).
You're missing the point of what I'm trying to do... when I say "at least analogous to" in that post, I'm not trying to claim that they "are" subjective experiences in themselves. My point is about how an entity that is minimally capable of doing so associates a word with a meaning--it has to map the word to some set of invariants that applies to the way the word is used. The analogy here is so tight that there's no reason for the robot to doubt that subjective experiences actually exist--unless you can make some specific claim that differentiates what you surmise the robot would get "subjective experience" confused with from what subjective experiences actually are.
And that's what I was looking for when I asked you to make a claim about subjective experiences that you think the robot would disagree with. In your latest reply, you suggested that this was probably impossible. Well, the implication of its being impossible is that the robot would conclude that subjective experience does exist, and that it's no different from the kinds of things the robot thinks it might be.
And furthermore, who is to say the robot's guess is wrong? Assuming subjective experiences do exist, shouldn't they correlate to something real that is part of the causal chain?
I don't find this a useful or distinguishing definition.
Doesn't matter. The whole point is that "subjective experiences are not real" is a type of claim. As I said before, you're severely underestimating what it takes to make that claim--even for "objective robots".
The objective robot experiences the penny the same way that the slot machine does, or the table. It's a conceptual aid.
As are experiences.