PartSkeptic’s Thread for Predictions and Other Matters of Interest

Nicely put, Jay - you echo the conversation I had with my wife, although you stated the position far better than I. Thank you.

Pixel42, thank you as well. As previously mentioned, you have the patience of a saint. You’re right, of course - during these events we do learn from each other.
 
I love these threads; although I have very little to contribute, I learn a lot. Before I retired I had a JayUtah type at work who was the exact opposite: he argued constantly that we needed to do things that simply would not work, and cited personal web pages and Twitter for proof.
His reasoning was that just because 325 people have tried to do things this way doesn't mean we shouldn't try.
What I meant about Jay was that he had the same style, not the same rational arguments. :D He was also one to use many big words, mostly nonsense, to make a point rather than clear and concise statements. It got very tiresome.
I figured out how to counter his screeds by reading a lot here. He got let go as a program manager, and I, as team leader, stayed for 15 more years. :)
It's nice being right sometimes.
This dismantling by Pixel42 and JayUtah is the most entertaining thread in a long time.
 
It's now possible to mine data to absurd lengths. So let's say I have a hypothesis A and a measurable outcome X. I also collect control information, B, C, and D, all of which are potential confounds to X. I want to see whether X varies only with A, and not with B, C, or D. Software to do that is essentially free. But since it's quarantine and we're all stuck indoors, I can play with the software. Does X correlate to the Boolean expression (B and C)? (A or B, but not C)? (not B, but C and D)? If you have a lot of control variables, you have a practically infinite set of algebraic possibilities you can test for "significance."


This part reminded me of this web site: https://www.tylervigen.com/spurious-correlations


Did you know that the divorce rate in Maine correlates with per capita consumption of margarine at 99.26%?
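To see just how cheap that kind of fishing has become, here is a minimal sketch (assuming Python with NumPy; the data are simulated placeholders, not anything from a real study). X is generated independently of B, C and D, yet enumerating every Boolean combination of the "controls" still turns up combinations that look "significant":

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# X, B, C and D are generated independently, so any association found below
# is spurious by construction.
X = rng.integers(0, 2, n).astype(bool)
B, C, D = (rng.integers(0, 2, n).astype(bool) for _ in range(3))

def two_prop_z(x, g):
    """Pooled two-proportion z statistic for P(x | g) versus P(x | not g)."""
    n1, n2 = int(g.sum()), int((~g).sum())
    if n1 == 0 or n2 == 0:
        return None  # constant expression, nothing to compare
    p1, p2 = x[g].mean(), x[~g].mean()
    p = x.mean()
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se if se > 0 else None

# Every Boolean expression in B, C and D reduces to one of the 2**8 = 256
# possible truth tables over the three inputs, so just enumerate them all.
idx = B.astype(int) * 4 + C.astype(int) * 2 + D.astype(int)
hits = tested = 0
for table in itertools.product([False, True], repeat=8):
    g = np.array(table)[idx]
    z = two_prop_z(X, g)
    if z is None:
        continue
    tested += 1
    if abs(z) > 1.96:
        hits += 1

print(f"{hits} of {tested} made-up combinations look 'significant' at the 95% level")
```

With nothing but noise, a few percent of the combinations will typically clear the bar, which is exactly the point Jay was making.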
 
I have been in bed for most of the day. Not well. But I have figured out a possible test under the conditions.

I will lie in bed. My wife will put the modem in the lounge on top of the sofa which is on the other side of the wall. She will flip a coin to decide the first trial of 15 minutes. Heads is on. Then wait 15 minutes with the unit off. The third 15 minutes will be the second test run and the state will be the opposite of the first state. I will wait a fourth 15 minute period in bed. I will decide whether the test was on-off or off-on.

I can do this in the morning before I take the pain tablets. And I can try again in the evening at about 7 pm.

When the modem is on, my wife will put on a video on her cell phone in the study, communicating with the modem. We will test the emf in the lounge and in the bedroom where my head will be.

I figure that although I might be in pain, the pain will get much worse when the modem is on. The 15 minutes between is because the effect is often delayed.
Re: the highlighted
Best to do the coin flip again.
If you know it's the opposite of the previous test then it skews the results.
 
The other thing missing is the success criteria. You need someone whose maths is less rusty than mine to calculate how many trials you need to do for the result to be statistically significant, and how many hits are required to reach that result. Maybe JayUtah will oblige.

Sorry, just noticed this.

For a binomial distribution, the rule of thumb is that np and nq have to be at least 10. p and q are both 0.5 here, so you want n=20.

Now can it be done with fewer? Yes, but smaller values for n mean you have to get more of them right. This is because the binomial distribution is a discrete probability distribution. Trying to do hypothesis testing with this distribution by itself is the statistical equivalent of modeling the Sydney Opera House with Legos. Numbers that can change only by proportionally large, discrete steps don't allow for fitting curves very well, and significance testing is all about deciding how closely two curves fit. Hard to do that conclusively with Lego curves. If n=5, you essentially have to guess right all five times for it to be significant to the 95% confidence interval. You get more leeway as n increases.

When n is sufficiently large, the binomial distribution starts to approximate the normal distribution to the point where we can begin to exploit the properties of the normal distribution. Specifically the z-test for significance becomes an option. At the minimum n=20, you need to get 15 right in order for your z-value to exceed the requisite 1.96 (corresponding to the 95% interval).
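For anyone who wants to check those thresholds against the exact distribution rather than the z approximation, here is a quick sketch (assuming Python 3.8+ for math.comb):

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """One-sided probability of getting k or more hits in n fair trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(p_at_least(15, 20))  # ~0.021: 15 of 20 clears the 95% bar
print(p_at_least(14, 20))  # ~0.058: 14 of 20 does not
print(p_at_least(5, 5))    # ~0.031: with n=5 you must get all five right
print(p_at_least(4, 5))    # ~0.188: four out of five is nowhere near enough
```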

The standard experiment method in this kind of case introduces an indirection. It requires a lot more work, but the science is far less assailable. You do a run of, say, 10 trials. Chance says you will get the right answer 5 times. The number of right answers you actually get is your score for that run. That's one data point. Over many 10-trial runs, those scores are expected to fit a normal distribution around a mean of 5, if the null hypothesis holds. The binomial distribution predicts the null-hypothesis behavior of each run of n trials. The mean of the actual experimental scores over several runs (a different n from the trials per run) is what gets tested for significance against the normal distribution. Obviously this means you have to do many runs, each with enough trials in it to let the score vary suitably. If you did 100 runs of ten trials each, you'd need a mean score of at least around 5.3 in order to call it significant at 95% confidence. The fewer runs you do, the fewer degrees of freedom in your model. The fewer trials in your run, the less standard your variance. (It's still a discrete distribution -- binomial, actually, and not truly normal.) All these affect how confidently you can fit (or fail to fit) your experiment curve to the normal distribution that represents the null hypothesis.

Now these numbers are rough estimates from some quick calculations, so don't take them too seriously.
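For what it's worth, here is a rough sketch (assuming Python with NumPy) of where that "at least around 5.3" figure comes from, and of how often pure guessing would reach it under the null hypothesis:

```python
import numpy as np

runs, trials = 100, 10

# Under the null hypothesis each run's score is Binomial(10, 0.5): mean 5,
# variance 10 * 0.25 = 2.5. The standard error of the mean score over 100
# runs is therefore sqrt(2.5 / 100), about 0.158.
se = np.sqrt(trials * 0.25 / runs)
print(5 + 1.96 * se)  # ~5.31, the mean score needed for 95% confidence

# Simulate many whole experiments of pure guessing and count how often
# chance alone reaches that cut-off.
rng = np.random.default_rng(1)
means = rng.binomial(trials, 0.5, size=(10000, runs)).mean(axis=1)
print((means >= 5 + 1.96 * se).mean())  # roughly 0.02 to 0.03
```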
 
When I was participating in the MDC discussions, I'd often refer to this page, "Laws of Chance Tables".

I find it is a quick reference where you can look up random chance.
Also, it is objective and both parties can see clearly what number of hits is required for mathematical significance.

For instance, in this case there is a 50/50 chance of picking whether the router is on or off.

For a mathematically significant chance of 1 in 100, you'd need to perform at least 10 trials.
Any success rate of 1 to 9 hits out of 10 trials is still within the bounds of probability, i.e. pure chance/guessing.
Note: getting zero correct or 10 correct is mathematically significant!

For 1 in 1000, you'd need to do a minimum of 20 trials.
Hits from 2 to 18 can be expected as simply chance.

For the MDC significance level of 1 in 1,000,000, you need to perform 30 trials.
Hits from 3 to 27 can be expected as simply chance.
 
Re: the highlighted
Best to do the coin flip again.
If you know it's the opposite of the previous test then it skews the results.

Agreed. The trials have to be independent in order for them to count as separate.
I too initially thought from his wording that he was planning to do two trials each session, but when I noticed the lack of a second coin toss I assumed what he is actually proposing is that in each session he has two 'bites at the cherry' so to speak - the second, opposite, value is to allow him to confirm his choice about the first. There are just two possibilities each session: on then off, or off then on. He chooses one of those two possibilities, with a 50% probability of being right by chance. So each session is a single trial, and he is going to be doing a maximum (assuming he feels well enough to do both) of two trials each day.

Thanks for the memory refresh, Jay and Ehocking. Given that this is a dry run which is just to determine whether it's worth doing a more formal test in front of witnesses, I would suggest he initially do 10 trials with a success criterion of at least 8 correct guesses - that, whilst not a statistically significant result, would I think justify going ahead with a more formal trial. That's five days of tests, perhaps spread over a couple of weeks.
 
So each session is a single trial...

That's fine as long as it's reflected correctly in the n-value. Some protocols mandate similar reversals to control for certain biases, but I'm at a loss to imagine what that could be in this case. To be clear, I'm not opposed to that part of the protocol per se. It just has to be represented properly in the statistical model in order to satisfy the independence requirement of the binomial distribution.
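To be concrete about how it goes into the model, here is a tiny sketch (assuming Python; the protocol details are as he described them) treating each session as what it is statistically, namely a single 50/50 call:

```python
import random

def session():
    """One session: the coin flip sets the first 15-minute state, the second
    test run is forced to be the opposite, and the subject guesses which of
    the two possible sequences actually happened."""
    first_on = random.random() < 0.5                       # the coin flip
    actual = ("on", "off") if first_on else ("off", "on")
    guess = random.choice([("on", "off"), ("off", "on")])  # pure guessing
    return guess == actual

hits = sum(session() for _ in range(100_000))
print(hits / 100_000)  # ~0.5: each session is one Bernoulli(0.5) trial
```

However the session is arranged internally, it contributes exactly one hit-or-miss data point, so it counts as one trial in the n.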

Given that this is a dry run which is just to determine whether it's worth doing a more formal test in front of witnesses, I would suggest he initially do 10 trials with a success criteria of at least 8 correct guesses...

As a pilot study, sure. 8 out of 10 is on the cusp of significance.

I'm trying to answer two separate questions, probably neither of them well. I appreciate your praise, but honestly I can think of at least three other people on this forum who I consider far more expert than I in experimental statistics. The one question is, "How would a professional scientist design this experiment statistically?" That's perhaps a distraction. Interesting, but not necessarily practical.

The more important question is, "Given the constraints on the experiment we can do, what statistical model will work?" If we're constrained to a single run of n trials, and the binomial test for significance, it's certainly defined for small n-values. We don't strictly need the z-test. The question is not so much, "How many trials do I need to do?" It's more, "How many successes do I need out of n trials in order to achieve p < .05 significance?" The short answer is that as n increases, the fraction of required correct answers decreases -- but it always has to be more than chance.
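Here is a short sketch (assuming Python 3.8+ for math.comb) of exactly that question: for a single run of n fair trials, what is the smallest number of hits that achieves p < .05, and how does the required fraction shrink as n grows?

```python
from math import comb

def min_hits(n, alpha=0.05, p=0.5):
    """Smallest k such that P(k or more hits out of n) falls below alpha."""
    for k in range(n + 1):
        tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
        if tail < alpha:
            return k
    return None  # not reachable even if every trial is correct

for n in (5, 10, 20, 50, 100):
    k = min_hits(n)
    print(n, k, f"{k / n:.0%}")
```

It prints that n=5 needs 5 of 5, n=10 needs 9 of 10, and n=20 needs 15 of 20; the required fraction keeps falling toward, but never reaches, one half.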
 
That's fine as long as it's reflected correctly in the n-value. Some protocols mandate similar reversals to control for certain biases, but I'm at a loss to imagine what that could be in this case.
It reminds me of DowserDon's preference for having only one of the 10 dowsing spots in each walkway disturbed. Knowing there was just the one he needed to find increased his confidence, even though having more would have decreased the number of walkways needed to get a statistically significant result, and hence the cost of the experiment, considerably. I can sort of understand it.

Of course the other possibility is that PartSkeptic's understanding of experimental design is so poor he simply didn't realise the effect having the second on/off decision always be the opposite of the first would have on it.
 
Here is an article that sums up the concerns of industry influence:

https://www.investigate-europe.eu/en/2019/how-much-is-safe/

...Some scientists are sounding the alarm about potential health risks caused by radiation from mobile technology. Completely unfounded, assure most radiation safety authorities. They take advice from a small circle of insiders who reject alarming research – and who set safety limits.
...The following graphic shows just how close-knit this circle is:
https://www.investigate-europe.eu/wp-content/uploads/2020/06/how-much-is-safe-1.png
...“The majority of researchers are defined as dissenters and are simply shut out through a process that is not ethically justifiable.
...“ICNIRP does not have an open process for the election of its new members. It is a self-perpetuating group with no dissent allowed. Why is this not problematic?”
...The committees agree on a basic premise between themselves: The only documented health risk from mobile radiation is the heating of body tissue. The radiation safety limits are set to prevent this from happening. As long as one adheres to these, there is no health risk, according to all but one committee. For most mobile users it is easy to stay safe in relation to these limits: They are only reached or exceeded by standing directly in front of a base station at a shorter distance than 10 meters. Are not nearly five billion mobile users worldwide proof that this works well? No, argue a significant number of scientists who believe that people may be harmed by being exposed to mobile radiation far below these limits, especially in the course of many years of use.
...“There is a lot of politics in deciding what goes into a study and what is left out. For instance, excluding people over the age of 60 from a brain tumour study in Australia that was recently published, does not make any sense”.
...This particular study, co-authored by two scientists also represented in ICNIRP, concluded that there can be no link between mobile phones and brain tumours because the incidence of brain cancer in the general population has been stable for years. It sharply contrasts a paper published in England last year that showed more than a doubling of glioblastoma, the most aggressive type of brain tumour, between 1995 and 2015.
...The lesson to be learned from the tobacco issue, he thinks, is to be careful not to give too much access and influence to industry. “In 2000, WHO published a major mea culpa report on how it allowed the tobacco industry to influence its thinking. But then they repeated that with EMF. They have never given me an answer to why”.
...The ICNIRP head agrees with critics on one issue, though: More research is needed. “Absolutely. There is still much uncertainty. For example, we know too little about the long-term effects of mobile use for brain cancer to draw conclusions. We absolutely need more information”, says Eric van Rongen.



There is uncertainty, they agree, BUT they use that uncertainty to roll out a potentially dangerous technology.

Are you all happy to be lab-rats in this global experiment? I know the answer. :thumbsup:
 
It reminds me of DowserDon's preference for having only one of the 10 dowsing spots in each walkway disturbed. Knowing there was just the one he needed to find increased his confidence, even though having more would have decreased the number of walkways needed to get a statistically significant result, and hence the cost of the experiment, considerably. I can sort of understand it.

Of course the other possibility is that PartSkeptic's understanding of experimental design is so poor he simply didn't realise the effect having the second on/off decision always be the opposite of the first would have on it.


Once more you assume I am inexperienced and stupid. Read my post carefully. The one hour experiment produces one result. I have a choice between ON-OFF or OFF-ON. It is a binary choice as much as the choice between ON or OFF. It may be possible to do two a day rather than one every two days of staying home.

Tell me how you take into account that the experiment requires a human subject whose state of health varies from day to day and hour to hour. If you have a better way to deal with this please let me have the benefit of your insight and superior wisdom. (Am I allowed to counter your negative assumptions about me, with some sarcasm of my own? Or will that get me warned/kicked off?)

Yes, I can do SOME trials with only on or off to see how much my mental and physical state affect the result.
 
OK, that sounds like the beginnings of a reasonable test protocol. I assume you will synchronise watches and agree fixed times for each switch on/off.

The main thing missing is how each of you is going to record the results. I suggest you write down the sequence you think has taken place during each trial and put it in a sealed envelope labelled with the trial number, and your wife does likewise. When you have completed a set number of trials, open and compare the envelopes.

The other thing missing is the success criteria. You need someone whose maths is less rusty than mine to calculate how many trials you need to do for the result to be statistically significant, and how many hits are required to reach that result. Maybe JayUtah will oblige.


So that's two trials a day? It shouldn't take too many days to get enough data if you can manage that.


So just having the wifi on isn't sufficient, it also has to be in use? OK.


Test it how? With what? Why? When? I don't understand this bit.


OK.

My wife bangs on the wall at the beginning of the test. Before she flips the coin. Or I just shout from the bedroom that I am ready.

You know that I can calculate the odds. And do a test procedure. We did it for my test of Zener cards.

I will measure the radiation with the Gigahertz HF35C meter. RMS and PEAK in 4 directions. Directly from the wall, and at 45 degrees from the wall: up, left and right.

By what standards do you want success? I am not going to do 1000 tests. I will try 10 to start with. Remember what I said about tests failing because the subjects refuse to continue to subject themselves to harm? Here is such an example. You are setting me up to fail.
 
Once more you assume I am inexperienced and stupid. Read my post carefully.
No, I assumed you meant exactly what you have now clarified you did indeed mean, as you would know if you'd read my post carefully.

I've highlighted the wording that made your test specification ambiguous:

I will lie in bed. My wife will put the modem in the lounge on top of the sofa which is on the other side of the wall. She will flip a coin to decide the first trial of 15 minutes. Heads is on. Then wait 15 minutes with the unit off. The third 15 minutes will be the second test run and the state will be the opposite of the first state. I will wait a fourth 15 minute period in bed. I will decide whether the test was on-off or off-on.

I can certainly understand why, despite careful reading, others interpreted that as meaning you were planning to do two test runs each session, because that's exactly what I first thought myself. It took a bit of head scratching to work out what you probably meant, and I responded with the question "So that's two trials a day?" to check that I was indeed understanding you correctly.

If you'd specified your test protocol more precisely (and included the bits you left out completely) there would have been no confusion.
 
My wife bangs on the wall at the beginning of the test. Before she flips the coin. Or I just shout from the bedroom that I am ready.
OK, that will work.

You know that I can calculate the odds. And do a test procedure. We did it for my test of Zener cards.
You did not specify the success criteria, which is a vital part of the test protocol.

I will measure the radiation with the Gigahertz HF35C meter. RMS and PEAK in 4 directions. Directly from the wall, and at 45 degrees from the wall: up, left and right.
Why? When? I don't understand this bit.

By what standards do you want success?
The usual standard is statistical significance, but given the objective here something less is probably acceptable. See my later post.

I am not going to do 1000 tests. I will try 10 to start with. Remember what I said about tests failing because the subjects refuse to continue to subject themselves to harm? Here is such an example. You are setting me up to fail.
On the contrary, I am helping you to design and run a test which will have a reliable result. A reliable result is the one that corresponds with reality, which is not necessarily the one you (or indeed I) are expecting. If the result is reliable the test has not failed, whatever that result actually is.

If the test has to be abandoned because you feel too ill to continue, it will not be possible to draw a conclusion from it. That would be unfortunate, but nobody's fault. Perhaps you could try again later, and it might take you more than a couple of weeks to assemble sufficient data. As long as you discard any particular trial result before you find out whether or not you guessed correctly, that is not a problem. The objective here is to help you confirm or eliminate one possible cause of your symptoms; it is entirely for your benefit.
 
I am not doing too badly. Only one pain tablet at 8am. A little tired. Have run some errands. I will tonight do one 15-minute test and see if I can tell if it is on or off. I will wait a while and do another test to see if it is on or off. I will take the measurements. This will at least give some information.


There are some concessions from ICNIRP that there may be "effects" from RF radiation. But they say they are not "harmful".

People want instantaneous communication and games. Cell phones have a use. Cars kill people. Air pollution kills people. Alcohol kills people. Tobacco kills people (tobacco sales are now banned in SA due to Covid-19). All directly and indirectly. Modern living is a risk to health. People accept the risk.

What if RF radiation starts to affect people on a global scale? Increased mortality from cancer and heart attacks. Increased autism and epilepsy in children. More ADHD. IQs lowered. Increased Alzheimer's. Depression and suicides. Addiction to pain tablets.

I (and many others) are of the firm opinion that the effects are already noticeable and growing. I can lower my risk from some of the other toxins mentioned above but I cannot escape from cell tower radiation without a serious change in life-style.

What level of "effects" (such as listed above) is acceptable?

This is where the new argument is going because the industry cannot continue to deny the strength of the growing evidence.
 
I am not doing too badly. Only one pain tablet at 8am. A little tired. Have run some errands. I will tonight do one 15-minute test and see if I can tell if it is on or off. I will wait a while and do another test to see if it is on or off. I will take the measurements. This will at least give some information.
I still don't see what part the measurements are supposed to play, but OK.

There are some concessions from ICNIRP that there may be "effects" from RF radiation. But they say they are not "harmful".
Link?

People want instantaneous communication and games. Cell phones have a use. Cars kill people. Air pollution kills people. Alcohol kills people. Tobacco kills people (tobacco sales are now banned in SA due to Covid-19). All directly and indirectly. Modern living is a risk to health. People accept the risk.
There is no escape from risks to health. Modern living is actually safer than at any previous time in history, as the life expectancy figures attest, but there are always risks to everything we do.

What if RF radiation starts to affect people on a global scale? Increased mortality from cancer and heart attacks. Increased autism and epilepsy in children. More ADHD. IQs lowered. Increased Alzheimer's. Depression and suicides. Addiction to pain tablets.

I (and many others) are of the firm opinion that the effects are already noticeable and growing. I can lower my risk from some of the other toxins mentioned above but I cannot escape from cell tower radiation without a serious change in life-style.

What level of "effects" (such as listed above) is acceptable?
If any such link is ever established, then a judgement will have to be made as to what level of risk is worth the reward of the advantages obtained. That judgement would obviously depend on how common and serious those risks were. But I see no reason to spend much time pondering such a relatively unlikely 'what if' in the absence of objective evidence of harm, when there is so much else of obvious and immediate concern to occupy us.

This is where the new argument is going because the industry cannot continue to deny the strength of the growing evidence.
I do not currently see that happening.
 
There is uncertainty, they agree, BUT they use that uncertainty to roll out a potentially dangerous technology.

Are you all happy to be lab-rats in this global experiment? I know the answer. :thumbsup:

You continue to make allegations about ICNIRP, without providing any supporting evidence. Do you have any evidence that this group is funded by the telecoms industry, or that its output has been influenced by them?

In answer to your question, it would appear that the vast majority of the scientists and researchers in this field are quite happy to be 'lab rats', and for their families, friends and loved ones to be 'lab rats' too. Does that help you answer your question?
 
PSA.

I know for a fact that JayUtah is one of the best engineers and critical thinkers I have encountered. He always, with the accuracy of a laser beam, zeroes in on the heart of whatever matter is under discussion, with knowledge and clarity, both here and on other sites where we have interacted.

I was disgusted to see PartSkeptic (in a post now consigned to AAH) engage in such a vile attempt at a slur. A scurrilous attempt to vilify JayUtah for no reason.

I am fully aware that Jay is perfectly well capable of defending his own corner and needs no defense from me, but that was an attack that just went too far, and I feel I need to register a protest against it.
 