This analogy is deficient in many ways. If anything, what you have is the equivalent of one of the binary questions from the list of 30. Also, you have not included an analogous target. All you have here is the answer to one descriptor question. So, left with just this, of course there is no way to extract any information!
Sorry, but you haven't understood, or I really should have made it simpler for you. My analogy was chosen to represent PEAR's actual methodology and data-collection processes in an understandable way. Underneath the larded gobbledegook of their reportage, this is the quality of their research. Other researchers have been particularly scathing for precisely these reasons. But let us push on!
Each data point (or descriptor, as you call it) is the equivalent of taking a noise measurement from your analogous radio speaker at a single point in time. So, over a period of time, you will generate many such measurements, equivalent to multiple data points. However, they will all be binary, and thus all useless, as described in the posts above.
Let me extend the analogy. If you took your initial faint noise, applied 30 different filters to it, and then compared the result of each filter to a control noise source that did not contain the voice pattern, then we might have an analogy that gets near to what is going on in the PEAR experiments.
But you are not applying 30 different filters to it. You are applying only a few: the reduction of all the data points to a few (usually two) levels.
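To make that reduction concrete, here is a minimal sketch in Python; the scores and the midpoint cut-off are assumptions for illustration, not PEAR's actual encoding:

```python
# Hypothetical descriptor scores for one trial: thirty responses on a 0-9 scale.
responses = [7, 0, 3, 9, 1, 5, 6, 2, 8, 4] * 3

# The "filter" being applied: collapse every score to one of two levels,
# using an assumed cut-off at the midpoint of the scale.
binary = [1 if score >= 5 else 0 for score in responses]

print(len(set(responses)), "distinct levels before reduction")  # 10
print(len(set(binary)), "distinct levels after reduction")      # 2
```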
Look, I'll take my analogy further for you, so you can see where it logically ends up. The PEAR report was actually an aggregation of a number of studies carried out over 25 years, with the data massaged to make them compatible. This is the equivalent of having not just one radio making machine-gun clicks, but many of them all going simultaneously. And the analysis was the result of trying to make sense of the totality of noise they all produced at once.
And what results might we expect, logically, from this sort of exercise? Given that the most detail was in the unfiltered data, it should produce the most "positive" results. But it would also be the most likely to be affected by outside interference: sunspots, passing electrical noise, and so on. Yet these are very similar to the effect being sought, so it seems reasonable to assume that any filtering would take out the good stuff too.
And when filtering WAS applied, lo and behold, the effect began to disappear rapidly. So rapidly that by the third and fourth iterations it was gone. Did you note that PEAR reverted to a lower filtering level when re-analysing some other data later? That's why.
Once again, as the wider scientific community has urged, the obvious solution is NOT more data reduction, but better measurements and better quality control. In our analogy, this would be things like using high-fidelity digital recorders, working in Faraday cages, and taking other steps to minimise base-data anomalies and maximise base-data quality and reliability.
But wait, I don't understand why you have gone to such lengths to invent an inadequate analogy that just confuses the issue and does not address my question. Why can't you answer my question by referring to the actual processes that went on? Much simpler.
Because I have answered your question in multiple ways, including an extended analogy I took the time to prepare. And yet you don't seem to have absorbed the rather obvious and salient observations.
Instead, you seem to be trying to nitpick one tiny point on the outer rim of the galaxy of this whole exercise, in what appears to be an attempt to invalidate an entire commentary on the basis of a tiny and quite irrelevant discrepancy.
Can you not understand that the analysis results don't matter if the base data is rubbish? You can't make a nice cake from rotten eggs.
They got positive results from their binary descriptor-based data sets, and positive results overall!
And yet they reported that they got nothing at the end. I quoted this above from the body of their report, and it is stated in their own abstract. Perhaps you will tell us what we who have read the report many times have missed?
I agree that the possibility of collusion and bias casts serious doubt on the data. However, the data from the PEAR paper does not support the hypothesis that the positive results from the binary data-collection experiments had anything to do with the method of analysis. Any bias present when participants answered the distributive descriptor questions would be passed straight on to a binary treatment of the data. For example:
Let's say I am asked: is there water in the scene?
I am faced with ten possible answers, so I give this question a 7, equivalent to "quite sure". I give this answer because I have a psychological bias towards water.
When converted to binary, this 7 will become a "yes". Clearly, the bias is still responsible for the "yes" answer.
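In code, that conversion might look something like this toy sketch (the unbiased score of 3, the bias of +4, and the cut-off at 5 are my assumptions for illustration, not values taken from the PEAR paper):

```python
THRESHOLD = 5  # assumed midpoint cut-off: a score of 5 or more counts as "yes"

def binarise(score):
    """Collapse a 0-9 distributive score to a yes/no answer."""
    return "yes" if score >= THRESHOLD else "no"

unbiased_score = 3                  # a hypothetical response without the bias towards water
biased_score = unbiased_score + 4   # the same judgement inflated to a 7 by the bias

print(binarise(unbiased_score))     # "no"
print(binarise(biased_score))       # "yes" -- the bias survives the conversion intact
```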
And, as you pointed out before, the conversion of the distributive scores into binary should, according to this kind of hypothesis, increase the incidence of hits.
But the data shows that the distributive scores do not produce an artifactual result when treated as binary.
Can't you see that your hypothesis does not stand up to the data?
No. For the umpteenth time: binarising the data CHANGES the results, so it WILL produce artificial positives OR negatives.
I'm getting the impression that you somehow find it amazing that a matrix of data that is damn near all zeroes to begin with doesn't change much when it is binarised... to all zeroes.
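For what it's worth, here is a rough sketch of that point using made-up data (the sparsity, the matrix size, and the cut-off are assumptions, not figures from the report):

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical descriptor matrix that is nearly all zeroes:
# 100 trials x 30 descriptors, with roughly 10% non-zero scores on a 0-9 scale.
scores = rng.choice([0, 0, 0, 0, 0, 0, 0, 0, 0, 7], size=(100, 30))

# Binarise with an assumed cut-off: any non-zero score becomes a 1.
binary = (scores > 0).astype(int)

print(f"zero entries before binarising: {np.mean(scores == 0):.1%}")
print(f"zero entries after binarising:  {np.mean(binary == 0):.1%}")
# Both figures come out around 90% -- a matrix that starts out nearly all
# zeroes is still nearly all zeroes after it is reduced to binary form.
```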