Hi UncaYimmy, I think there is some misunderstanding here. I didn't mean that I want your objective opinion about my previous experiences (of course you can't do that), but about my suggested protocol.
So let us give it another try:
What are your objective thoughts on the following argument regarding confirmation bias? (If you want, you may also put my protocol aside for the sake of the discussion.)
There is no confirmation bias in the outcome results of experiments that involve self-evident hits and non-self-evident misses, because **the odds are against the results showing 20 hits** while those experiments were done under random but scientifically acceptable controlled settings.
I don't understand what the part in bold means.
In order for there to be confirmation bias, there needs to be something to be confirmed. "Hit" and "miss" need to be defined as well. I don't get that from the above, so I really can't comment directly.
I will repeat what I have said before in this thread and in others. When investigating a detection ability there are three possible outcomes for a given trial: Correct Detection, Failure to Detect, and False Positive (Incorrect Detection). A false positive is clearly a miss.
But is it confirmation bias to ignore Failures to Detect (FD)? Stated another way, is an FD always a miss? No. It depends on what you're testing.
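To make the bookkeeping concrete, here is a minimal sketch of that three-outcome taxonomy (plus the implicit fourth case, a correct rejection). The trial log and the "strict vs. lenient" scoring split are my own hypothetical illustrations, not anything from an actual protocol:

```python
from collections import Counter

def classify(stimulus_present: bool, subject_responded: bool) -> str:
    """Classify a single trial into one of the outcome categories."""
    if stimulus_present and subject_responded:
        return "CD"   # Correct Detection
    if stimulus_present and not subject_responded:
        return "FD"   # Failure to Detect
    if not stimulus_present and subject_responded:
        return "FP"   # False Positive (a clear miss)
    return "CR"       # Correct Rejection: no stimulus, no response

# Hypothetical trial log: (stimulus_present, subject_responded)
trials = [(True, True), (True, False), (False, True),
          (True, True), (False, False), (True, False)]

counts = Counter(classify(s, r) for s, r in trials)

# Two scoring conventions: counting FDs as misses vs. ignoring them.
strict = counts["CD"] / (counts["CD"] + counts["FD"] + counts["FP"])
lenient = counts["CD"] / (counts["CD"] + counts["FP"])
print(counts, f"strict={strict:.2f}", f"lenient={lenient:.2f}")
```

Note how the same trial log yields very different scores depending on whether FDs count against the subject, which is exactly the "it depends what you're testing" point.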
Let's use a "hearing test" as an example. In a hearing test the subject is wearing headphones through which pure tones at various frequencies and volume levels are played. The subject indicates whenever he hears a tone. The technician then plots the frequency and volume level detected.
At the end of the test we will have a pretty good idea of what frequencies that person can hear and at what volume (if any). In this "test" we ignore the FDs. But that raises the question of whether we have performed a test or a study. In one sense it might be a good "test" to see if the person can hear at all. In another sense it is really just a "study" of the frequency response of that person's hearing.
What will complicate matters is if we have a lot of False Positives (FP). That is, the subject said he heard a tone when no tone was being played. A properly designed hearing test will play the tones at random intervals to keep the subject from anticipating when the next tone might be played. I'll leave it to the statisticians to work out the details, but suffice it to say that if there are a lot of FPs, then at best the test/study is inconclusive.
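Leaving the proper statistics to the statisticians, as above, the intuition can be sketched crudely: a detection claim is only supported if the response rate during tones clearly exceeds the response rate during silence. The function name and the margin threshold are arbitrary assumptions of mine, and this is a heuristic, not a real statistical test:

```python
def crude_verdict(hits, tone_trials, false_positives, silent_trials, margin=0.2):
    """Crude heuristic (not a proper statistical test): compare the
    response rate during tones with the response rate during silence."""
    hit_rate = hits / tone_trials
    fp_rate = false_positives / silent_trials
    if hit_rate - fp_rate >= margin:
        return "supported"
    return "inconclusive"

# Few FPs: hit rate 0.9 vs FP rate 0.1, a clear separation.
print(crude_verdict(hits=18, tone_trials=20, false_positives=2, silent_trials=20))

# Many FPs: hit rate 0.9 vs FP rate 0.8, at best inconclusive.
print(crude_verdict(hits=18, tone_trials=20, false_positives=16, silent_trials=20))
```

A proper analysis would use something like a binomial or signal-detection test, but the shape of the argument is the same: lots of FPs erase the evidential value of the hits.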
Sticking with the "hearing test," suppose somebody says that they can hear tones outside of the range of normal human hearing (below 15Hz and above 18kHz). What happens if we ignore the FDs? Well, once again it all depends. Our results might tell us that the subject can detect these tones, but only at a certain volume or above. Thus in one sense we are "ignoring" the FDs when answering the yes/no question about his hearing. In another sense we are using that data to refine the claim.
So, under what situation must we always count the misses? When they are in direct contradiction to the claim.
Continuing with this example, suppose the subject says, "I can detect tones from 5Hz through 22kHz at a sound pressure level of X or higher." In that case we cannot ignore FDs at all nor can we ignore FPs. Similarly the subject might add "with 85% accuracy" to his claim. It wouldn't change the test protocol at all - only our true/false evaluation of the data.
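Here is a sketch of how that exact claim might be checked against a trial log. Everything here is hypothetical on my part: the record layout, the helper name, and the SPL threshold of 60 standing in for the claimant's "X". The key point from above is encoded directly: neither FDs nor FPs can be ignored, and adding "with 85% accuracy" changes only the pass/fail line, not the protocol:

```python
def claim_holds(trials, low_hz=5, high_hz=22_000, min_spl=60, required_accuracy=0.85):
    """Evaluate the claim 'I can detect tones from low_hz through high_hz at
    SPL >= min_spl with required_accuracy' against trial records.
    Each trial is (freq_hz, spl, stimulus_present, responded)."""
    # Keep every silent trial (FPs count against the claim) plus every
    # tone trial that falls inside the claimed frequency/volume range.
    relevant = [(present, responded) for freq, spl, present, responded in trials
                if (not present) or (low_hz <= freq <= high_hz and spl >= min_spl)]
    # A trial is correct when response matches stimulus: CD or Correct Rejection.
    correct = sum(1 for present, responded in relevant if present == responded)
    return len(relevant) > 0 and correct / len(relevant) >= required_accuracy

trials = [
    (1000, 70, True, True),     # Correct Detection
    (12_000, 70, True, False),  # FD: counts against the claim
    (0, 0, False, True),        # FP: also counts against the claim
    (5000, 70, True, True),     # Correct Detection
    (0, 0, False, False),       # Correct Rejection
]
print(claim_holds(trials))  # False: 3/5 correct, below the claimed 85%
```

Loosening or tightening `required_accuracy` changes only the final true/false evaluation; the trials themselves are collected the same way regardless.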
So, in order to judge confirmation bias, we must know the exact claim and protocol. In my next post I will address that with regard to what you wrote early on.