
Vision From Feeling

Whoops clicked submit instead of preview - trying to work with table tags. More later...
 
Pup:
I had a very quick and rushed cereal test on Monday, November 10th. Small samples of cereal were placed in a total of three identical paper cups. Two of these contained plain cereal, and the third contained cereal with the Lactobacillus supplement. Even with these identical containers I claimed to feel and see a distinct low, dark and heavy vibration from the plain cereal, and a distinctly tall, bright and white one from the cereal with the bacteria. The three cups were shuffled and placed randomly in a row by a friend, who would then leave the room so as not to give away which was which. My task was to identify which of the three had the bacterial supplement. Here are the results: C = Correct, F = Failed
1) C
2) C
3) F
4) C
5) C
6) C
7) C
8) C
9) F
10) C
11) C
12) C
13) C
14) F
15) C
16) F
17) C
18) F

As is typical when I have a chemical identification test, in the beginning I have good results, but after a while I begin to feel very drained and tired and I get a headache. I begin to rush and guess, wanting to get the test over with sooner, and I also become unable to identify the desired sample, as can be seen from the increase in incorrect answers toward the end. I wanted to make twenty runs but it became impossible due to the way I started to feel. Someone who is skilled in statistics can interpret these results for us.

The probability of at least 13 successes in 18 independent trials, with a probability of success of 1/3 for each trial, is 0.0008526.
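For anyone wanting to check that figure, the binomial tail probability can be computed directly. A minimal Python sketch (the 0.0008526 figure is the probability of 13 or more hits in 18 trials at chance level p = 1/3):

```python
from math import comb

def tail_prob(hits, trials, p=1/3):
    """Probability of getting `hits` or more successes in `trials`
    independent attempts, each with success probability p."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

print(tail_prob(13, 18))  # ≈ 0.0008526
```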
 
OK then, here's the thing about quitting whilst you're ahead:

Trials|Hit?|Running total|Running Percentage|Running probability|Running odds
1|1|1|100%|0.333333333|1 in 3
2|1|2|100%|0.111111111|1 in 9.01
3|0|2|67%|0.259259259|1 in 3.86
4|1|3|75%|0.111111111|1 in 9.01
5|1|4|80%|0.04526749|1 in 22.1
6|1|5|83%|0.017832647|1 in 56.08
7|1|6|86%|0.006858711|1 in 145.81
8|1|7|88%|0.002591068|1 in 385.95
9|0|7|78%|0.008281258|1 in 120.76
10|1|8|80%|0.003403953|1 in 293.78
11|1|9|82%|0.001371742|1 in 729.01
12|1|10|83%|0.000543804|1 in 1838.9
13|1|11|85%|0.000212629|1 in 4703.02
14|0|11|79%|0.000690993|1 in 1447.2
15|1|12|80%|0.000285109|1 in 3507.44
16|0|12|75%|0.000792465|1 in 1261.89
17|1|13|76%|0.000341482|1 in 2928.42
18|0|13|72%|0.000852596|1 in 1172.89

You can see how the probability of getting that score or better at that stage swings up and down. Quitting at the right time can make all the difference.
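The running-probability column above can be reproduced with the same binomial tail calculation applied after each trial. A minimal sketch, using Anita's reported hit/miss sequence:

```python
from math import comb

def tail_prob(hits, trials, p=1/3):
    # probability of `hits` or more successes in `trials` Bernoulli(p) trials
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

results = [1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0]  # C=1, F=0

running = 0
for n, hit in enumerate(results, start=1):
    running += hit
    p = tail_prob(running, n)
    print(f"{n:2d} | {running:2d}/{n:2d} | p = {p:.9f} | 1 in {1/p:.2f}")
```

The printed probabilities match the table: they swing up after each miss and down after each hit, which is exactly why the stopping point matters.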

Here's how it could have gone after that, just by guessing. There's a 1 in 9 chance that you'd have got both remaining tests right:

Trials|Hit?|Running total|Running Percentage|Running probability|Running odds
19|1|14|74%|0.000380798|1 in 2626.07
20|1|15|75%|0.000167366|1 in 5974.94

a 2 in 9 chance that you'd have got the first remaining test right then the second one wrong

Trials|Hit?|Running total|Running Percentage|Running probability|Running odds
19|1|14|74%|0.000380798|1 in 2626.07
20|0|14|70%|0.000878807|1 in 1137.91

a 2 in 9 chance that you'd have got the first remaining test wrong and then the second one right

Trials|Hit?|Running total|Running Percentage|Running probability|Running odds
19|0|13|68%|0.001874823|1 in 533.39
20|1|14|70%|0.000878807|1 in 1137.91

finally a 4 in 9 chance that you'd have got both remaining tests wrong

Trials|Hit?|Running total|Running Percentage|Running probability|Running odds
19|0|13|68%|0.001874823|1 in 533.39
20|0|13|65%|0.003724569|1 in 268.49

Taking a weighted average we find that if you'd just taken a random punt on these last two you'd have ended up with a final score of

((1 x 0.000167366) + (2 x 0.000878807) + (2 x 0.000878807) + (4 x 0.003724569))/9

= 0.002064541
or 1 in 484.37

That'd still be an impressive result but below the JREF pass mark.
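That weighted average can be checked directly from the exact binomial tails for the four possible endings. A sketch; the weights 1, 2, 2 and 4 out of 9 are the chance-level probabilities of the four hit/miss patterns for trials 19 and 20:

```python
from math import comb

def tail_prob(hits, trials, p=1/3):
    # probability of `hits` or more successes in `trials` Bernoulli(p) trials
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

# Starting from 13/18: (weight out of 9, final hits out of 20) for
# hit-hit, hit-miss, miss-hit, miss-miss on the last two trials.
endings = [(1, 15), (2, 14), (2, 14), (4, 13)]

expected = sum(w * tail_prob(hits, 20) for w, hits in endings) / 9
print(expected, 1 / expected)  # ≈ 0.002064541, i.e. about 1 in 484.4
```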
 
Someone better with statistics will hopefully jump in, but it would be hard to draw any conclusions from that cereal test. An open cup is not a sealed container. Cereal boxes have sealed plastic bags inside them. Further, if you were keeping score with each test, it was like a game of Rock Paper Scissors. You weren't necessarily using your ability to find the cereal. You could play against your friend, refining your strategy with each test. And of course you've said you are never wrong, and now you've been wrong 5 times.
I'm no statistician, nor do I play one on JREF, but using the tables at http://www.automeasure.com/chance.html

The required success rate for guessing better than random chance, given a 1/3 choice, over 20 trials is:

100:1 odds: 2-12 expected (65%)
10,000:1 odds: 0-15 (80%)
1,000,000:1 odds: 0-17 (90%)

VFF's score was 13/18 = 72%

Just going by a back-of-the-cigarette-packet guesstimate, I'd say VFF would need a score of 14 or better to get past the MDC preliminary challenge at odds of 1:1,000, and if that were the case, at the moment she would fail.

ETA: from Beth's calcs I would seem to be wrong (just). 1:1173 I think was the resultant odds?
 
Hi everyone. I'm back to answer your comments. I believe it will take me a while.


Soapy Sam:
I came here to this Forum to discuss my challenge application with the IIG. It will be done on detection of health problems, and the date for the test has not been set yet. We are all eager to find out whether I have the ability or not and I agree that there will be a lot of talking and waiting before the actual test and before the results come in. I will try to arrange simpler tests on the other aspects of my ability while we are waiting for the official test. Please understand that I am a busy college student and I will take the time for the other tests as I can.

Thank you for taking the time to respond. I was a student once. Not so busy as I might have been, and that was without any internet to distract.
I thought I was rather special in those days. Most of my classmates had similar conceits. We were very young. Thinking one's self special turned out to be very normal.
I have to agree with other posters that I see more similarities than differences between your claims and attitudes as I read them here and those of many previous challenge claimants.
There may be someone out there with a true inexplicable ability, but there definitely are many people who unconsciously kid themselves, as well as many who consciously kid others. I would expect, from what you say, that you fall into one of those two groups.
I am never sure which I prefer. Correcting honest self-delusion is rarely successful and may do harm to a genuine, but misguided individual. Revealing the deliberate falsehoods of a con man usually results in him moving a few streets over and starting again. Therefore, disproving such claims may do more harm than good.
I think you are simply mistaken. If so, you will need strength of character to accept contrary evidence and abandon belief in your own uniqueness, and with that I wish you luck.
As for the test itself, well, luck should not come into it, either way.
 
Hi Anita,

Firstly I want to say more about the selection bias involved in quitting the test once you were not doing so well.

This sort of thing can produce a notable effect and goes part of the way to explaining your favourable result with the quick and dirty test.

Check out this thread, which analyses the possibility of selection bias in the PEAR lab data.

http://www.internationalskeptics.com/forums/showthread.php?t=125294

Your first quick and dirty trial may have told you something about your abilities. You got the impression that the trials were strenuous and your ability tailed off.

Therefore with future repetitions you need to ensure that you work in shorter sessions. Ten at a time to be on the safe side.

At the same time you'd been exposed to the criticism of an unconscious selection bias. You must in future be rigorous about what is just a warm up and what is a proper test, how many trials you're going to do and therefore when you're going to stop recording data. If you feel tired, take a rest but ensure that you eventually complete the pre-specified number of trials.

Selection bias alone however is not enough to explain this result. I did a little statistical experiment in excel and from that it seems that even if you were to do 100 trials and pick the best sequence of 18 in a row from those 100, you'd expect that the best sequence would represent a 1 in 10 chance. There's only a 4% chance of getting a result such as yours from this extreme form of selection bias.

So something else is most likely going on apart from the selection bias involved in stopping the test early rather than picking it up again once you'd rested.

Firstly, it's been commented that you got immediate feedback regarding your successes and failures. This not only raises the suspicion of selection bias but also gives you information upon which you may base subsequent guesses. I'm not suggesting that you did this consciously, but if the sequence picked by the person shuffling the cups was not randomly determined, then the subconscious mind is very good at picking up such patterns. Take a look at this examination of a test that used a combination of not-quite-random sequences and immediate feedback.

http://www.csicop.org/si/2000-09/staring.html

Worth remembering that one, because Sheldrake misreports it as CSICOP replicating and therefore upholding his results.

So that's another reason why removing immediate feedback is a good idea.

It's also a good reason why the sequence should be determined randomly.

Now, a complex sequence of die rolls has been suggested for this purpose. To be honest I haven't gone through that sequence to see if it generates notable patterns, but I do notice that the positioning of the two dummies is dependent upon their starting positions, so there's the potential for a problem. Of course that's only in the position of the dummies, which should ideally be indistinguishable from one another, but we'll address the possibility that they're not later.

A much simpler method is with a single die roll and a lookup table.

die roll|Position A|Position B|Position C
1|target|dummy1|dummy2
2|target|dummy2|dummy1
3|dummy1|target|dummy2
4|dummy1|dummy2|target
5|dummy2|target|dummy1
6|dummy2|dummy1|target
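The lookup table above translates directly into code; a minimal sketch (the names "target", "dummy1", "dummy2" and positions A-C are just the labels from the table):

```python
import random

# One die roll fully determines the arrangement; each of the six
# permutations of {target, dummy1, dummy2} is equally likely.
ARRANGEMENTS = {
    1: ("target", "dummy1", "dummy2"),
    2: ("target", "dummy2", "dummy1"),
    3: ("dummy1", "target", "dummy2"),
    4: ("dummy1", "dummy2", "target"),
    5: ("dummy2", "target", "dummy1"),
    6: ("dummy2", "dummy1", "target"),
}

roll = random.randint(1, 6)  # stands in for the physical die roll
positions = dict(zip("ABC", ARRANGEMENTS[roll]))
print(roll, positions)
```

Because all six permutations appear exactly once, the target lands in each of the three positions with equal probability, which is the whole point of the table.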

With a greater number of cups per trial I'm in favour of picking tokens from a bag.

Even without immediate feedback randomisation is important. It's easier to stumble across a pattern than it is to predict a random sequence.

Do you happen to have the sequence of positions noted down? It might be interesting.

So what about actual observable clues that might be in evidence? Firstly there's your fellow experimenter. I'm pleased that you ensured they weren't in the same room as you at any point. It's surprising how often this makes a difference. So often that you might be forgiven for believing that telepathy were commonplace. In fact we all subconsciously pick up on cues; it's so easy even a horse can do it.

http://en.wikipedia.org/wiki/Clever_Hans

That's actually a flaw I note with the IIG protocol. Whilst they've taken every effort to ensure that the observer who accompanies you doesn't know the complaints of the subject you're looking at, the subjects themselves do know what ailments they're suffering from. I'm not sure that adequate steps have been taken to reduce this observer-expectancy effect.

Over here in the UK there's a mentalist by the name of Derren Brown - his book "Tricks of the Mind" is an excellent read. In it he gives a few tips for reading certain "tells" people tend to have. These are purely subconscious and, he says, they're present even if you tell the person not to give anything away. In fact he advises performers to make a big deal out of telling their subjects not to indicate anything with their facial or eye movements. It convinces both the audience and the subject that they're really not giving themselves away, when in fact they just can't help it.

In an adversarial test (for a prize, or to otherwise prove a paranormal ability to other people) you'd have a second observer accompanying you, also ignorant of where the first observer had set the target, to ensure you didn't cheat. But if the purpose of the test is merely to prove to yourself that your abilities are paranormal, then one co-experimenter is fine. Obviously you'll know for yourself if you're choosing to cheat.

I'm not concerned about the possibility of olfactory clues. If someone says that they can tell the position of a cup of breakfast cereal from the other side of a room through scent alone, I'd be inclined to call that a superhuman ability and suggest they apply for the challenge on that claim. How close exactly were you to the cups?

The fact that the cups weren't covered does concern me. You don't have to be able to see the cereal directly for the colour/texture to potentially make a difference to the ambient light within the cup. If you've got three pieces of card to label the cups positions I suggest your co-experimenter places the card on top of the cup.

My second concern is that through mild wear and tear paper cups might become distinguishable from one another. Let's say that the cups are not 100% identical. Without immediate feedback you may not be able to tell what they contain, but you can ensure you pick the same cup every time. That drops your odds of getting 20/20 from one in three and a half billion to one in three. If we could get such a protocol accepted by the JREF then there's a one in three chance of passing the preliminary test and a one in three chance of passing the final test. It works out that you'd expect to win the million bucks after nine attempts.

That's why I'd expect to see such a protocol insist on something like fresh cups every time. In fact you'd probably be looking at sealed cardboard boxes.

The sort of paper cups I'm used to aren't always very opaque. You can often see the level of the drink inside. Of course, I'm not saying that you could see right through the paper well enough to make out fine detail. Although I guess you actually are saying that; but I'm not saying you can do it by non-paranormal means. What I am saying is that you might be able to tell the levels, and if the levels aren't exactly identical then that's another way you might subconsciously distinguish one cup from another. After your initial 1-in-3 success was confirmed, it would be possible to track that cup in subsequent trials.

A conscious cheat might need one or two trial runs before starting the test proper.

So if you could, rather than shuffling cups use fresh cups refilled from the cereal box each time. When you're finished each trial, the cereal should be poured back into the box or the bin and those cups discarded. We wouldn't want bacteria clinging to the cup and confusing your abilities. Of course the cups should be filled in a preparation area to avoid the possibility of residue around the test area giving the game away.

Doing this also allows a simplification of the randomisation process, as there really will be no difference between Dummy1 and Dummy2.

If you can adopt those measures and still perform just as well then you've taken a giant step towards proving the paranormal. If not then you've taken what I might consider a far more interesting step towards investigating the true nature of your rare talent.

Hope this helps. Look forward to hearing more.
 
Here is a protocol that, while being more work, will blind the process a little more thoroughly. Using this protocol, any flight may contain any combination of treated vs. un-treated samples.

People – Preparer(s), Tester/Observer, VFF

Materials – 36 paper cups; 1 deck of cards, with one red suit removed; 36 identical opaque envelopes, each marked with a unique code/identifier; scale

Test area will be unoccupied initially.

Preparer(s) will enter test area and –

Separate test cereals into individual samples of X grams each (use paper cups as temporary containers) -
12 treated
24 non-treated

Shuffle cards.

Deal cards, one at a time, face up.

If a black card is dealt, place one non-treated sample in an envelope and seal it.

If a red card is dealt, place one treated sample in an envelope and seal it.

For every envelope, record (on a separate data sheet) the code/identifier and the sample type.

Repeat above until all envelopes have been filled and recorded.

Place all sealed envelopes in a container.

Preparer(s) leaves, taking data sheet and all materials/equipment with him (them), leaving only the container of envelopes.

Tester/Observer and VFF enter test area and -

Tester/Observer selects three (3) envelopes at random from container, and presents them to VFF.

VFF will have Y time to identify which (if any) envelope(s) contain the treated cereal.

For each flight, Tester/Observer records envelope codes and VFF’s choices.

Repeat, until all envelopes have been examined by VFF.

Preparer(s) will re-enter test area and compare Tester/Observer records with original data sheet.
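A quick way to sanity-check the dealing step is to simulate it. A sketch, under two assumptions not stated in the protocol: the deck is 26 black + 13 red cards (a 52-card deck with one red suit removed), and a dealt card is skipped once its sample type is exhausted:

```python
import random

def deal_assignments(n_treated=12, n_untreated=24):
    # 52-card deck minus one red suit: 26 black, 13 red
    deck = ["black"] * 26 + ["red"] * 13
    random.shuffle(deck)

    remaining = {"red": n_treated, "black": n_untreated}  # red => treated
    assignments = []
    for card in deck:
        if remaining[card] == 0:
            continue  # that sample type is used up; skip the card
        remaining[card] -= 1
        assignments.append("treated" if card == "red" else "untreated")
    return assignments

samples = deal_assignments()
print(samples.count("treated"), samples.count("untreated"))  # 12 24
```

The skip rule is needed because the 26 black cards outnumber the 24 non-treated samples (and the 13 red cards outnumber the 12 treated ones); the written protocol doesn't say what to do when a card is dealt for an exhausted sample type, so that step may be worth spelling out.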
 
Hi Anita,

Firstly it's been commented that you got immediate feedback regarding your successes and failures. This not only raises the suspicion of selection bias but it gives you information upon which you may base subsequent guesses.

The problem with this criticism is that her success rate declined over the course of the test, which is in line with her explanation of fatigue and not supportive of the hypothesis that the immediate feedback was producing better guesses. While that type of learning is possible, if it were happening in this particular case the error rate would have gone down rather than up over the course of the trials.

eta: In regards to the possibility of selection bias due to stopping at 18 rather than running 20 tests: if you assume that she failed both test 19 and 20, the probability of getting 13 correct guesses out of 20 independent trials with a probability of 1 out of 3 for success on each trial is 0.003725.
 
Actually, my concern with the cereal test as it was performed is simply how personal-choice randomization and instant feedback operate. Most people suck at setting random patterns. If the person shuffling the cups knew which place had the treated cereal last, they would be likely not to place it in the same spot twice, and VFF, just knowing human nature, would be likely not to choose the same spot twice as her guess. This shortens the odds against a correct guess considerably: you effectively only have to choose the correct cup out of two rather than three.

As the shuffler gets bored or tired, it is more likely that they will allow repeats, dropping the odds of VFF guessing correctly. This pretty much falls in line with the pattern of the test data.

I agree with Ocelot (and others), the cup setting must be randomized mechanically, not by some person's choice, and VFF should not see her results until the end.
 
The problem with this criticism is that her success rate declined over the course of the test, which is in line with her explanation of fatigue and not supportive of the hypothesis that the immediate feedback was producing better guesses. While that type of learning is possible, if it were happening in this particular case the error rate would have gone down rather than up over the course of the trials.

From the second trial onwards (assuming no "warm-ups") there was feedback which might potentially have helped Anita in her picking. I'm not convinced that more than one or two prior results would produce a proportional advantage in predicting how the co-experimenter would next shuffle the cups. I just don't know, and whilst it seems instinctive to suggest that a marked improvement should be seen, I can't say that it would always be apparent over the random element.

Let's consider the anti-hypothesis that, having the two previous shuffles as reference, she can predict the next shuffle with 69% accuracy. Breaking the last 16 trials into quartets, we see she got 3/4, 3/4, 3/4, then 2/4. That doesn't seem to be a significant departure from what'd be expected.

Let's say she can pick the next shuffle 71% of the time if she knows at least one prior result. Then over the next four attempts she's 3/4, the next four 3/4, then a purple patch of 4/4 and a slump to 2/4.

Again, not a huge departure from what we'd expect. Not enough to refute the anti-hypothesis that the probability is constant over time.

You can data-mine to slice and dice the sequence any way you like. Whilst there's a visible downward trend towards the end, that's not incompatible with a constant level of predictive ability overlaid by random noise.

Remember also that the sequence doesn't appear to have been predetermined, so the tailing off might equally represent the shuffler getting feedback on how to defeat Anita's strategy.

Think of it as a game of Rock Paper Scissors between the two. To begin with, Anita appears to be able to predict the shuffler's strategy very well indeed. If later in the sequence Anita isn't doing quite so well, does that mean that Anita is getting tired, or that the shuffler has developed a more effective counter-strategy?

eta: In regards to the possibility of selection bias due to stopping at 18 rather than running 20 tests: if you assume that she failed both test 19 and 20, the probability of getting 13 correct guesses out of 20 independent trials with a probability of 1 out of 3 for success on each trial is 0.003725.

You mean 13 or more, of course... Yes, I know, and if the final two went according to chance, she'd be expected to end up with a feat of probability 0.002064541.
 
I agree with Ocelot (and others), the cup setting must be randomized mechanically, not by some person's choice, and VFF should not see her results until the end.
Quite true. Your point about the randomization is well taken.

My point was that the data better support the hypothesis of fatigue rather than the hypothesis of a learning effect due to the feedback provided after each trial.

You mean 13 or more of course
Yes, I did. Thanks for the correction. You're quite right about the results being consistent with a much higher level of probability than 1 out of 3.

One thing I'd like to point out: if the sequence had been reversed, with the failures concentrated at the beginning rather than the end, would you still be arguing that the shuffler was growing tired and/or was learning to anticipate her strategy in order to better defeat her?

Also, consider that if the results had come back that she performed no better than chance and she was coming up with similar explanations to explain why the results had differed from her expectations, how would you be reacting then?
 
Good protocol! I only have one comment.

Tester/Observer selects three (3) envelopes at random from container, and presents them to VFF.

In this situation, the envelopes can be presented one at a time, two at a time, all together, whatever works best for VFF. It doesn't matter, because they are randomly filled and selected. Therefore each envelope constitutes a separate independent trial, and there is no need to stick with the three-at-a-time scenario if something else is preferred.
 
The problem with this criticism is that her success rate declined over the course of the test, which is in line with her explanation of fatigue and not supportive of the hypothesis that the immediate feedback was producing better guesses. While that type of learning is possible, if it were happening in this particular case the error rate would have gone down rather than up over the course of the trials.

I disagree. It could be that the subtle clues enabling her to unconsciously (or consciously, for that matter) determine the right cup were not always visible. This would result in the misses being scattered.

Furthermore, fatigue could also make it more difficult for her to recognize the clues as she tries to "reason out" her answer rather than just accepting what her "instinct" told her.

And the former could easily contribute to the latter.
 
Having taken a more comprehensive look at VFF's site, and having read every post in this thread up to now, I'm beginning to feel that I'm being had.


M.
 
Good protocol! I only have one comment.
In this situation, the envelopes can be presented one at a time, two at time, all together, whatever works best for VFF. It doesn't matter because they are randomly filled and selected. Therefore each envelope constitutes a separate independent trial and there is no need to stick with the three at a time scenario if something else is preferred.
Yes, I'm aware of that. The choices have to be presented in some fashion, though, so I picked the 'triangle' format that was already used.
 
Having taken a more comprehensive look at VFF's site, and having read every post in this thread up to now, I'm beginning to feel that I'm being had.

"Had" as in intentionally deceived? I don't think so. She started with the IIG like 8 months ago. It would be a very elaborate deception that doesn't seem to have any real payoff. It has none of the other earmarks of somebody getting their kicks.

I think she genuinely believes she has some sort of gift. I don't believe for a moment she can do what she says she can do in the manner she describes.

I must say I have my own gift. I used to have fun at happy hours guessing what people did for a living just by watching them. Most of the time I didn't verify my guesses, but when I did I was uncannily accurate. I'm sure if I were the visually imaginative type I could have shut my eyes and envisioned them at work doing their jobs. I should also note that while I did not keep records, I recall being pretty damned good at the TV show "To Tell The Truth" - I would make my guess before any questions were asked.

Of course, I was probably a lot less accurate than I remember. In reality I probably had a lot fewer "trials" than I recall. And I know I was picking up on all sorts of visual clues. But then again, maybe I was sensing vibrations...
 
"Had" as in intentionally deceived? I don't think so. She started with the IIG like 8 months ago. It would be a very elaborate deception that doesn't seem to have any real payoff. It has none of the other earmarks of somebody getting their kicks.

I think she genuinely believes she has some sort of gift. I don't believe for a moment she can do what she says she can do in the manner she describes.

I must say I have my own gift. I used to have fun at happy hours guessing what people did for a living just by watching them. Most of the time I didn't verify my guesses, but when I did I was uncannily accurate. I'm sure if I were the visually imaginative type I could have shut my eyes and envisioned them at work doing their jobs. I should also note that while I did not keep records, I recall being pretty damned good at the TV show "To Tell The Truth" - I would make my guess before any questions were asked.

Of course, I was probably a lot less accurate than I remember. In reality I probably had a lot fewer "trials" than I recall. And I know I was picking up on all sorts of visual clues. But then again, maybe I was sensing vibrations...


I can't put my finger on it, but it's something in her writing here that sets off the alarm.

I don't have any "special gifts" that I am aware of. If I did have one, I'd be sure to exploit it for monetary gain. At my age, I'm beginning to think life's too short to try to disabuse everyone of their belief in what is essentially horse manure.


M.
 
I can't put my finger on it, but it's something in her writing here that sets off the alarm.

I think what we're seeing is someone who is truly examining her abilities for the first time and being questioned by people who don't believe her. She's being shown her inconsistencies and the unreliability of what little testing she has done. And since the ability isn't real (sorry, Anita), there's no doubt that the more she talks about it, the more troublesome it becomes for her.

You could be right. It could be a ruse. But if there's any deception, I believe it's self-deception.
 
My point was that the data better support the hypothesis of fatigue rather than the hypothesis of a learning effect due to the feedback provided after each trial.

The learning effect is supported not by the trend but by the high proportion of correct guesses. Either Anita has a genuine paranormal ability or one or more of the effects I mentioned are in play.

I haven't seen the sequence or run tests on people's ability to apply learning to such a sequence, but I do note that the CSICOP data showed only a very gradual increase in predictive ability with Sheldrake's non-random sequences over four blocks of 60 trials. That's a lot more than 18 trials. It did, however, show a great advantage from the feedback right from the get-go, for the most part: the advantage kicks in quickly, then increases gradually over time.

I have great doubts that such an increase due to learning would be significantly apparent over a mere 18 trials. It is my contention that over just 18 trials, random chance predominates over any trend that may otherwise be having an influence. There could be a trend towards increased accuracy (through learning or some other factor), but over 18 trials random chance may cause the appearance of a downward trend almost as often as the upward trend that might be favoured by whatever factor is in play. Likewise there could be a trend towards decreased accuracy (through tiredness or some other factor), and yet random chance may cause the appearance of an upward trend almost as often as the downward trend favoured by this alternative factor.

I don't believe that the slight downward trend is significant enough in magnitude to actually require explanation.

I've performed a little Excel experiment to test this. Plotting x = trial number against y = 1 for a hit, 0 for a miss, we can fit a line to those points using the least-squares method. The gradient of that line gives us a magnitude for the downward trend. For Anita's data this gradient is -0.0258.

I then generated a random list of hits and misses. Based on Anita's success rate, each point in the sequence had a 72% chance of being a hit. No underlying trend was programmed in; the first trial had the same chance of being a hit as the last trial. I then did the same line fitting and noted the gradient. Through the power of macros I repeated this 1000 times. The average gradient, as you'd expect, was pretty close to zero, i.e. horizontal (0.0002312). The gradients varied between -0.068111455 and 0.080495356, with a standard deviation of 0.020336. We got downward trends steeper than those from Anita's data 107 times (10.7% of the time) and upward trends of greater magnitude 88 times (8.8% of the time).
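For anyone wanting to reproduce that experiment outside Excel, here's a minimal Python sketch of the same idea; the exact counts will differ from the Excel run since the simulated sequences are random, but the -0.0258 gradient for the actual data falls out of an ordinary least-squares fit:

```python
import random

results = [1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0]  # C=1, F=0

def ols_slope(ys):
    # least-squares gradient of y against trial number 1..n
    n = len(ys)
    xs = range(1, n + 1)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    sxx = sum((x - x_mean) ** 2 for x in xs)
    return sxy / sxx

actual = ols_slope(results)  # ≈ -0.0258

# Simulate 1000 trend-free sequences at Anita's overall hit rate (13/18)
random.seed(0)
hit_rate = sum(results) / len(results)
slopes = [ols_slope([1 if random.random() < hit_rate else 0 for _ in range(18)])
          for _ in range(1000)]

steeper_down = sum(s < actual for s in slopes) / len(slopes)
print(f"actual slope {actual:.4f}; "
      f"{steeper_down:.1%} of no-trend simulations were steeper downward")
```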

What this means is that if we expected no discernible change in accuracy over time, then getting a result showing a downward trend more extreme than Anita's would not be a great surprise.

I tried programming it to have a falling accuracy rate, as suggested by Anita's data, with the accuracy rate for trial 1 starting at 94% and dropping linearly to 50% by trial 18.

Over 1000 repetitions of this experiment the average gradient, as you'd expect, was close to that programmed in (-0.0257977). We got 89 results with an upward gradient, although only 2 of them were steeper than the downward trend we'd attempted to introduce.

So a downward trend like that indicated by Anita's data can be reversed in direction to show a slight upward trend just by random chance, and in rarer circumstances even completely inverted to show an upward trend just as large as the downward one we'd expect.

I then primed it with a swift-learning hypothesis. The first trial had only a one in three chance of success. Thereafter, with the advantage of knowing where the shuffler had previously placed the target, the guesser had a 70% chance of guessing where they'd move the target to (based upon Anita's success rate in the last 17 of her trials). In 1000 repetitions of this experiment a downward trend steeper than that shown by Anita's data was found 67 times, or 6.7% of the time.

As such, there is no anomalous result to explain away here.

With a slightly slower learning hypothesis, where the accuracy rate for the first trial remained one in three, the second attempt now had a 50% chance of success and subsequent attempts had a 70% chance of success, we suddenly dropped to only 5 out of a thousand showing a downward gradient steeper than Anita's.

This suggests to me that if non-random sequences and immediate feedback were the overriding factors in Anita's high success rate, then only the position of the target immediately prior to the one being predicted was relevant to the prediction being made.

These deductions are of course irrelevant if Anita had a "warm-up", or if for some other reason she knew where the target was prior to the first shuffling.

Either way, although the results are indeed slightly more consistent with a fatigue hypothesis than a learning hypothesis, it is not by a statistically significant amount. The results, even if we arbitrarily dismiss the possibility of fatigue being a factor in Anita's accuracy, are still not inconsistent with the hypothesis that Anita was learning from immediate feedback. When we consider that there is no need for a dichotomy here, and that both these and the other factors previously mentioned may have had simultaneous effects, there is even less reason to dismiss the possibility of learning.
 