Is Telekinesis Real?

You're way too generous patiently trying to inform "Buddha" and make him reason and understand his mistaken ways.

My explanations are obviously not for his benefit. He doesn't read them or understand them. But someone might. If descriptive statistics "clicks" for someone because of what I wrote, I consider it time well spent.

In the context of debate, if I didn't provide at least some substance to my objections, it would likely come down to even more puerile gainsaying. It's much harder for Buddha to make the notion stick that I'm the incompetent one if my posts are substantial and his are just vitriol, gaslighting, and cherry-picking.
 
There are two issues regarding Jeffers’ experiments:

I gave 3 printed references showing that t-tests can be used only for normal distributions; there is only so much I can do. None of my opponents gave a single reference to a printed material showing that my assertion is false. I am not going to waste my time any longer discussing this topic. If someone prefers to remain in a state of perpetual ignorance, this is his problem.

Since I will be referring to normal distributions, you might find this material useful:

https://www.mathsisfun.com/data/standard-normal-distribution.html

Here is a link for a more advanced audience:

https://www.statsdirect.com/help/distributions/normal.htm

A normal distribution forms a distinct bell-shaped curve, which is shown in the article.

For his experiment Jeffers used a double-slit setup.

http://hyperphysics.phy-astr.gsu.edu/hbase/phyopt/slits.html

Obviously, this is not a bell-shaped curve but an interference pattern, which is not a normal distribution.

One of my opponents asked me some time ago if I ever admit that I was wrong about anything. Initially I wrote that Jeffers didn’t conduct his experiment because I could not find his article. Later I found his article and corrected my mistake. Now I admit that I made a mistake, which is not a big deal for me, I do not try to hide my errors. Would anyone follow my lead and admit that he/she was wrong about the t-tests?

Now I go down memory lane. I participated in many debates at various websites. I had strong, weak and average opponents. All my opponents, except for a heavily medicated person who hated psychiatrists and psychiatric prescription drugs, were reasonably smart and didn’t proudly showcase a complete lack of knowledge in many areas.

Well, as they say, there is a first time for everything. A local Pied Piper has a small retinue who accept his every word; they do not realize that he leads them into the land of total ignorance. As far as I can see, the vast majority of the board members are intelligent people who ignore the Pied Piper.

It seems to me that his followers do not realize that he produces nothing but hot air. If they want to learn something, they should listen to intelligent people who can show them a road to the land of knowledge. Or they can enter that land on their own if they read books instead of his fantasy posts.
 
""""Buddha"""" said:
For his experiment Jeffers used a double-slit setup.

http://hyperphysics.phy-astr.gsu.edu/hbase/phyopt/slits.html

Obviously, this is not a bell-shaped curve but an interference pattern, which is not a normal distribution.


:dl:

The fact that you needed to explain this, and explain it in such a "curious" way, shows how lost you are.
 
""""Buddha"""" said:
One of my opponents asked me some time ago if I ever admit that I was wrong about anything. Initially I wrote that Jeffers didn’t conduct his experiment because I could not find his article. Later I found his article and corrected my mistake. Now I admit that I made a mistake, which is not a big deal for me, I do not try to hide my errors.

Yes, because they're "unhidable". That, besides two facts: you're only admitting what you were shown to be awfully wrong about many times; and "I admit that I made a mistake, which is not a big deal for me" is unnecessary, as you are mistaken several times an hour.

""""Buddha"""" said:
Would anyone follow my lead and admit that he/she was wrong about the t-tests?

What about yourself? It's you who keeps ignoring when they are actually used.
 
""""Buddha"""" said:
I participated in many debates at various websites. I had strong, weak and average opponents. All my opponents, except for a heavily medicated person who hated psychiatrists and psychiatric prescription drugs, were reasonably smart and didn’t proudly showcase a complete lack of knowledge in many areas.



Any link to that? Why don't you lecture us by pointing to some of those great debates?



It looks like it's all more of your fantasies, or you just keep changing your user name to leave your failings hidden in the past. :rolleyes:
 
I gave 3 printed references showing that t-tests can be used only for normal distributions...

No, you proved you didn't understand the references or the science from which they are drawn. You're still stuck on the difference between a dependent and an independent variable, a mistake you've been carrying forward since you tried to describe clinical trials. You don't even know what the dependent variable is in either the PEAR or the Jeffers studies.

The t-test is used for observations that, for whatever reason, do not fit the normal distribution or cannot be expected to fit it without parameters that are unknown. This is why the observations are fit to a different distribution, the t-distribution, which can be parameterized to account for such confounding unknowns. The dependent variable in the analysis is still expected to be normally distributed, but it -- by itself -- is not the observation.
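
If a concrete illustration helps anyone following along, here's a minimal sketch of that last point (Python with NumPy/SciPy assumed; every number in it is invented, not drawn from PEAR or Jeffers): when the variance has to be estimated from the data itself, the test statistic follows the t-distribution rather than the standard normal.

Code:
# Hedged sketch: one-sample t-test on a derived per-run statistic whose
# population variance is unknown to the analyst. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Twenty hypothetical per-run deviation scores (the derived dependent variable).
sample = rng.normal(loc=0.4, scale=1.0, size=20)

# Test against a null mean of 0. The sample standard deviation stands in for
# the unknown sigma, so the statistic has a t-distribution with n-1 degrees
# of freedom instead of being a standard normal z-score.
t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")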

I said all this yesterday, and you obviously either can't understand it or have no wish to. Like it or not, it explains how both Jay and your mined quotes can be simultaneously correct.

there is only so much I can do.

...which doesn't apparently extend to dealing with what your opponents actually say. All you've done is post a bunch of snippets from elementary web sites and complain when people point out that they don't mean what you think they mean. If this is your best effort, you're not very good at it. I can explain. All you can do is mine quotes and stamp your little feet.

None of my opponents gave a single reference to a printed material...

Straw man.

Your opponents have discussed the issue thoroughly and outlined your errors, a situation that is not the least dispelled by anything you've cherry-picked from the literature. Your opponents are correcting you based on extensive knowledge of the field, not hastily-Googled snippets. While one can provide concise references to individual facts, one cannot provide references to accumulated knowledge and experience.

You're simply trying to script both sides of the debate, demanding that the rebuttals to your claims can only take certain forms before you'll pay attention. I'm displaying actual understanding. Beat that.

I am not going to waste my time any longer discussing this topic...

Translation "I'm in over my head, so I'm going to give the standard excuse for punching out of the discussion lest I embarrass myself further."

Now I admit that I made a mistake, which is not a big deal for me....

Clearly it is, because it had to be dragged from you by concerted effort of your opponents over several days. If admitting mistakes is no big deal for you, why didn't you just do it without being asked? Why didn't you recognize your error on your own and offer an apology?

When you spend so much time accusing everyone else of being stupider than you, it really does matter on what time scale you correct your own mistakes -- if in fact you ever do.

Would anyone follow my lead and admit that he/she was wrong about the t-tests?

Except you're also wrong about the t-test. Sorry, admissions of failures don't work that way. You still haven't explained why the other critics of Jeffers failed to see the "obvious" mistakes you think you found.

Obviously, this is not a bell-shaped curve but an interference pattern, which is not a normal distribution.

Are you seriously trying to compare those two things directly? Do you have the faintest clue what was actually studied for significance in the PEAR and Jeffers studies?

Now I go down memory lane.

More social engineering. You can't or won't address the actual discussion of statistical analysis except by assiduous bluffery which everyone easily sees through. So now you're resorting to shady casting of aspersions. "Nobody pay attention to Jay! Don't listen to him!" Sheesh, how desperate can a guy get? You have time to insult me but not time to address my explanation of the basis for the t-test.
 
JayUtah said:
None of my opponents gave a single reference to a printed material...
Straw man.

Your opponents have discussed the issue thoroughly and outlined your errors, a situation that is not the least dispelled by anything you've cherry-picked from the literature. Your opponents are correcting you based on extensive knowledge of the field, not hastily-Googled snippets.

And, """"Buddha"""", we're too veteran in this field of web forums for engaging in a mere debate of copypasta with cats. You may have enjoyed your macaroni-gluing posts with other posters like you in other venues and even won a couple of stars and a pack of glitter to embellish your projects.

But here you're not getting anything from us; instead we hold a mirror before you where you can watch the full horror of your own ignorance. That is proper debate technique addressed to the ones who, like you, come to web forums on false pretences to quench some narcissistic thirsts while they harm public education and all that we hold dear.
 
[Y]ou're only admitting what you were shown to be awfully wrong about many times.

And it's a tactical withdrawal to boot. "I admitted an error, now you have to." He's belatedly trying to portray himself as gallant and conciliatory only so that he can turn around and say his opponents are entrenched by comparison. Buddha's arguments are about three-fourths ham-fisted social engineering on any given day. What do we really learn from the experience? Buddha is willing to jump to a conclusion and cling to it tenaciously, letting go only when it serves his purposes -- if he lets go at all. Yet here he is today practically begging people to take his word for it. Trustworthiness and face-saving are incompatible ends.

What about yourself? It's you who keeps ignoring when they are actually used.

Indeed, how about our own trip down memory lane. "That's not a proper baseline!" Well, yes it is, and Jahn (not Palmer) is the one who decided to use it. The t-test compares two empirically sampled data sets. He didn't know this. "You have to collect all the baseline data ahead of time." Well, no, his own source -- the Navy's mid-century statistical manual -- talks about pairwise sampling, varying the dependent variable and letting the independents do what they may at each trial. That's what happened in the PK research using various kinds of apparatus. But what Buddha hopes you don't see is that he's moved ahead to accepting that the t-test works with empirical baselines.
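
For anyone who wants to see what that pairwise arrangement amounts to in practice, here is a hedged sketch (Python with NumPy/SciPy; the numbers are made up, and this is not a reproduction of Jahn's actual protocol or data): each trial contributes a baseline run and an effort run, and the t-test operates on two empirically sampled data sets.

Code:
# Hedged sketch of pairwise sampling: each trial yields a (baseline, effort)
# pair collected under the same conditions; the paired t-test works on the
# per-trial differences. Hypothetical numbers only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials, n_events = 100, 1000

baseline_counts = rng.binomial(n_events, 0.5, size=n_trials)  # no intention
effort_counts = rng.binomial(n_events, 0.5, size=n_trials)    # with intention

# The per-trial difference, not the raw bit stream, is what is assumed to be
# approximately normal; both data sets are empirical, hence a t-test.
t_stat, p_value = stats.ttest_rel(effort_counts, baseline_counts)
print(f"paired t = {t_stat:.3f}, p = {p_value:.3f}")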

Buddha is trying desperately to portray himself as the teacher. But the record shows him struggling to keep up, either silently leaving his former misconceptions behind in hopes that they'll be forgotten, or declaring that he no longer has time to debate them. Even if one doesn't understand the statistics, at the end of the day Buddha is still stuck with the fact that none of the critics-of-the-critics he's arrayed as a screen seem to agree with him on what the problems are, if any, with Jeffers' work. Is it easier to suppose that they all somehow missed what Buddha is telling us is the low-hanging fruit of the "obviously" misapplied t-test? Or is it easier to conclude that Buddha is wrong -- yet again -- in his drunkard's-walk argument? That's an answer you see even without knowing the field. Buddha insinuates that any properly qualified statistician should draw the same conclusions he does. Except they don't.

But you know what? Aside from an illustration of Buddha's method, the t-test is a subject we can allow him largely to drop, as he has demanded. Why? Because its more important role in this debate is to telegraph Buddha's major error. And a colossal one it is, too. You note:

The fact that you needed to explain this, and explain it in such a "curious" way, shows how lost you are.

What an understatement.

And by that I mean it's a challenge to convey just how far off the mark Buddha really is. "You can't apply a t-test to an experiment using a double-slit apparatus because the apparatus produces an interference pattern and not a normal distribution." That fundamentally misunderstands what all these experiments are actually recording, manipulating as the dependent variable, and treating with statistical significance tests. I mean fundamentally misunderstands. The raw behavior of the apparatus -- however devised -- is not what the significance test is performed on. It's not the dependent variable in the experiment. That's right: Buddha is fundamentally mistaken on how all the PK experiments he's discussed were statistically modeled.

Keep in mind the ever-present fact that none of Stanley Jeffers' critics or reviewers managed to latch onto this exceptionally egregious "error" of applying a significance test to something that doesn't even vaguely look like a statistical distribution. And that's because -- whatever other criticism they may wish to mount -- they know that's not what Jeffers was trying to do. Buddha's approach is essentially a cargo-cult version of how this kind of research derives the dependent variable.

Here's what I think might have happened. Buddha keeps focusing on the physics principle that drives Jahn's random-event generator (REG). The behavior of that principle can be accurately described by a properly parameterized normal distribution. Separately, a straightforward significance test exists that compares a sample to a normal distribution and gives you the probability that the random process that produced the baseline distribution could have generated that sample sequence. Those two concepts found each other in Buddha's head and produced a narrative for these experiments. It's a coherent-sounding narrative because it has the commonality of the normal distribution. They "must" be connected the way Buddha imagines, right?

Well, no. The truth is not quite that simplistic. The REG also embodies a process that converts the noise to a discrete binary value. The IEEE article describes it in depth. The law of large numbers says that over an infinite number of "clicks" of such a device, the same number of ones should appear as zeros. But the central limit theorem acknowledges that over short runs, you may get more of one digit than another. The number of runs that depart from equilibrium by a given degree of mismatch is what would ideally be a normal distribution. (But in deference to the non-ideal nature of the process and its confounds, it is fit to the t-distribution, not a normal distribution.) The double-slit phenomenon says that reconverging paths will discretize to one or another node of the interference pattern according to the probabilistic nature of quantum mechanics. Unaffected by PK, over a large number of runs, the distribution across the interference pattern nodes should be symmetrical. But over a short number of runs, there will be some preference for "left" or "right" nodes of the interference pattern. And the degree of that preference should form a normal distribution. Again, because this is empirically collected baseline data, the t-test is used. Then the goal for the subjects in Jeffers' experiment is to bias the deposition of photons to different nodes of the interference pattern to a degree that quantum mechanics cannot account for. That's the dependent variable in Jeffers' experiment.
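
Here is a toy version of that derivation, with parameters I made up rather than anything taken from Jeffers' paper, just to show where the bell-shaped quantity comes from and why the comparison against an empirically collected baseline calls for a t-test:

Code:
# Hedged sketch: the raw record is which node of the interference pattern a
# photon lands in; the derived quantity is the per-run left/right preference;
# the t-test compares effort runs against an empirical baseline.
# Invented parameters, not Jeffers' apparatus or data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
photons_per_run = 2000

def run_preference(n_runs, p_left=0.5):
    """Per-run excess of 'left'-node counts over the 50/50 expectation."""
    left_counts = rng.binomial(photons_per_run, p_left, size=n_runs)
    return left_counts - photons_per_run * p_left

baseline = run_preference(500)   # calibration runs: apparatus alone
effort = run_preference(500)     # runs with a subject trying to bias deposition

# The per-run preference is roughly bell-shaped even though the deposition
# pattern itself is an interference pattern, not a normal curve.
print("baseline mean:", baseline.mean(), "  std:", baseline.std(ddof=1))

# The baseline variance is estimated from data, not assumed: hence a t-test
# rather than a z-test against a theoretical sigma.
t_stat, p_value = stats.ttest_ind(effort, baseline, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")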

That indirection is all-important, because it forms the statistical basis for creating a usable distribution out of something that isn't natively a bell-shaped curve. Bias from whatever something is supposed to look like is the data. It's a histogrammic view of the world that comes naturally to people who work with statistics all the time. But it's something that's wholly absent from Buddha's thinking. It's safe to say Buddha has never statistically modeled a real-world science experiment. He's fixated on the nuts-and-bolts operation of experimental apparatus and he has only a simplistic, literal concept for how that translates into a statistical sequence.
 
Could it perhaps be that you and Jay have, over the years, proved that you know what you're talking about?

Possibly, and Buddha is playing the social-dynamics aspect of that for all it's worth. He's trying to characterize his opponents rather than address what they say -- entirely ad hominem. He's trying to portray me as a charismatic leader whom people will follow unquestioningly, and himself as the salvation of these poor lost souls if only people will give him due reverence.

It's obvious at this point that he's in way over his head, as he was in all his other threads here. And it's evident that he realizes this, which is why I think he's pulling out all the stops either to get me out of the picture or to convince people not to listen to me. But since he's a newcomer, his impression of my reputation and how it got that way is woefully underinformed. He's really bad at gaslighting because people can immediately recognize when he's doing it.

And then we get someone coming here whose theories both of you have managed to expose as nonsense?

Because, of course, they are nonsense. And the methods he's using to support them are the worst kind of trollish claptrap. "Here are references to the literature that prove I'm right." Well, no, those are descriptions cherry-picked from the literature of concepts to which he's alluded. But they don't support the conclusions and inferences he's drawn from those concepts. Yes, we know what a normal distribution looks like. Yes, we know what an interference pattern looks like. Documentation of each of those individually does not support the assumption that Jeffers considered them congruent, as Buddha claims.

Conversely, he demands we all respond in kind. "You must supply a reference to the literature that refutes my claim specifically." Well, no, that's not how the body of literature on any particular subject works. It's not a comprehensive laundry list of all the conceivable Thou Shalt Nots. But someone with a reasonably comprehensive understanding of the field in general can bring that to bear to discuss some particular error. That's why we have ordinary experts. Most professional statisticians have never published a book. But any professional statistician can explain what's wrong with some particular proposal she encounters. That's the essence of true expertise, not Google-fu.

I explain. That's what I do. I can do that because I have a broad basis of knowledge that isn't tied to one particular Googled-up item or one particular paper or one particular passage in a book. It's a distillation of years of study and experience.
 
“The primary control against the touching of the specimen was visual observation of the subject. However, since sessions often lasted up to two hours, Hasted conceded that full attention by the observer(s) throughout the period could not be maintained. Supplementary controls both against external physical force and electrostatic or electromagnetic artifacts included: (a) electrode sensors designed to register touching of the metal, (b) electrical shielding of the strain gauges, (c) dummy loads, and (d) video recording of target strain gauges. None of these controls, except possibly the first, were utilized in all sessions, although anomalous phenomena were recorded in the presence of each. However, details about the implementation of the controls, e.g., the precise locations of the dummy loads, were rarely reported.” Palmer, page 179

I am not sure which one of Hasted’s experiments Palmer refers to because I do not have his book, The Metal Benders. However, judging from the description of an experiment that I read (I provided a link to Hasted’s article in one of my previous posts), it was flawless, as anyone with an engineering background would confirm. Besides, “the precise locations of the dummy loads” are unimportant, any engineer knows that. If Palmer thinks that it is necessary to know the locations, he should have explained the reason why this data is of any value.

This is my question to the opponents – why would you think that the locations are important? If you fail to provide a realistic answer, this would indicate that you have very little knowledge of electromechanical engineering.

Video recording of target strain gauges is also of no value because it is not used to measure structural changes in the metal, all engineers would agree with that.

I will go slowly over this section of Palmer’s report to show how ridiculous it is.
 
But what Buddha hopes you don't see is that he's moved ahead to accepting that the t-test works with empirical baselines.

You mean that he finally understood that basic piece of knowledge and is playing the I-knew-it-from-the-very-beginning card.

Buddha is trying desperately to portray himself as the teacher. But the record shows him struggling to keep up, either silently leaving his former misconceptions behind in hopes that they'll be forgotten, or declaring that he no longer has time to debate them.

I wonder who he expected the public to be. It's obvious to those of us who are educators that he's not "teacher material," and to most of those who have been educated by educators, who quickly spot that he hasn't got the "physique for the role."

Here's what I think might have happened. Buddha keeps focusing on the physics principle that drives Jahn's random-event generator (REG). The behavior of that principle can be accurately described by a properly parameterized normal distribution. Separately, a straightforward significance test exists that compares a sample to a normal distribution and gives you the probability that the random process that produced the baseline distribution could have generated that sample sequence. Those two concepts found each other in Buddha's head and produced a narrative for these experiments. It's a coherent-sounding narrative because it has the commonality of the normal distribution. They "must" be connected the way Buddha imagines, right?

Well, no. The truth is not quite that simplistic. The REG also embodies a process that converts the noise to a discrete binary value. The IEEE article describes it in depth. The law of large numbers says that over an infinite number of "clicks" of such a device, the same number of ones should appear as zeros. But the central limit theorem acknowledges that over short runs, you may get more of one digit than another. The number of runs that depart from equilibrium by a given degree of mismatch is what would ideally be a normal distribution. (But in deference to the non-ideal nature of the process and its confounds, it is fit to the t-distribution, not a normal distribution.) The double-slit phenomenon says that reconverging paths will discretize to one or another node of the interference pattern according to the probabilistic nature of quantum mechanics. Unaffected by PK, over a large number of runs, the distribution across the interference pattern nodes should be symmetrical. But over a short number of runs, there will be some preference for "left" or "right" nodes of the interference pattern. And the degree of that preference should form a normal distribution. Again, because this is empirically collected baseline data, the t-test is used. Then the goal for the subjects in Jeffers' experiment is to bias the deposition of photons to different nodes of the interference pattern to a degree that quantum mechanics cannot account for. That's the dependent variable in Jeffers' experiment.

That indirection is all-important, because it forms the statistical basis for creating a usable distribution out of something that isn't natively a bell-shaped curve. Bias from whatever something is supposed to look like is the data. It's a histogrammic view of the world that comes naturally to people who work with statistics all the time. But it's something that's wholly absent from Buddha's thinking. It's safe to say Buddha has never statistically modeled a real-world science experiment. He's fixated on the nuts-and-bolts operation of experimental apparatus and he has only a simplistic, literal concept for how that translates into a statistical sequence.

Well, he won't plead the fifth, he'll just deny the charges.

I enjoy it more when """"Buddha"""" springs the traps himself. I had one laid with the Poisson part of his argument regarding the physics of Jeffers' experiment, and I'm sure he would have stepped into it if you hadn't explained it in detail.

I think it's now too late to lay another one regarding t-tests versus Z-tests in the context of numerous readings and their practical effect on the final figures one reaches.
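
For the curious, this is roughly the shape of the point (a throwaway Python/SciPy sketch with invented numbers, not anyone's real data): run the same statistic through the t-distribution and the standard normal, and the practical difference in the final figures evaporates as the number of readings grows.

Code:
# Hedged sketch: with numerous readings, the t-distribution approaches the
# standard normal, so t-test and Z-test p-values become nearly identical.
# All numbers are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

for n in (10, 100, 10_000):
    sample = rng.normal(loc=0.1, scale=1.0, size=n)
    t_stat = sample.mean() / (sample.std(ddof=1) / np.sqrt(n))
    p_t = 2 * stats.t.sf(abs(t_stat), df=n - 1)   # t-test p-value
    p_z = 2 * stats.norm.sf(abs(t_stat))          # Z-test p-value, same statistic
    print(f"n={n:>6}: p_t={p_t:.4f}  p_z={p_z:.4f}")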

If only """"Buddha"""" had rush-studied t-distribution, including some solved problems in a Schaum-series like book, before opening his big mouth... He's now condemned to eternally defend his persecutory conclusion without even having realized what the variables were.
 
""""Buddha"""" said:
“The primary control against the touching of the specimen was visual observation of the subject. However, since sessions often lasted up to two hours, Hasted conceded that full attention by the observer(s) throughout the period could not be maintained. Supplementary controls both against external physical force and electrostatic or electromagnetic artifacts included: (a) electrode sensors designed to register touching of the metal, (b) electrical shielding of the strain gauges, (c) dummy loads, and (d) video recording of target strain gauges. None of these controls, except possibly the first, were utilized in all sessions, although anomalous phenomena were recorded in the presence of each. However, details about the implementation of the controls, e.g., the precise locations of the dummy loads, were rarely reported.” Palmer, page 179

I am not sure which one of Hasted’s experiments Palmer refers to because I do not have his book, The Metal Benders. However, judging from the description of an experiment that I read (I provided a link to Hasted’s article in one of my previous posts), it was flawless, as anyone with an engineering background would confirm. Besides, “the precise locations of the dummy loads” are unimportant, any engineer knows that. If Palmer thinks that it is necessary to know the locations, he should have explained the reason why this data is of any value.

This is my question to the opponents – why would you think that the locations are important? If you fail to provide a realistic answer, this would indicate that you have very little knowledge of electromechanical engineering.

Video recording of target strain gauges is also of no value because it is not used to measure structural changes in the metal, all engineers would agree with that.

I will go slowly over this section of Palmer’s report to show how ridiculous it is.


:s2:


James Randi already did the debunking. Google it.


Refrain from overloading your post with stupid phrases like your « Besides, “the precise locations of the dummy loads” are unimportant, any engineer knows that.»
 
The name of the "other variables"? I'd rather say you're a person of very restricted knowledge. :D:D:D:D:D:D:D You don't even know the "simplex method" and you allow yourself the buffoonery of voir diring me?

How deeply hurt you are, caught in the open in the full horror of your ignorance! :rolleyes:

Neeners, neeners and I-dare-yous won't hide what you've already written: your failure to address all these subjects.

For instance:



You don't have the slightest idea about the physics behind the equipment either. You're basically suggesting the need to deal with a Poisson distribution to examine the problem of the total number of clients served by all tellers in all branches in all banks in all countries during the whole century.

You don't even know the slit width or the times involved in Jeffers' experiment or "your" modified Jeffers' (clearly specified by Jeffers from the beginning); otherwise you would have refrained from writing down such statistical tomfoolery.



That's just another piece of the paranoid thinking we're so accustomed to reading in your posts and in the vanity-press booklet you claim authorship of.
 
That is very smart of you not to answer my questions and to try to laugh the matter off. You might be surprised, but I have a copy of the famous book by Dantzig. But this is not my point. You failed to answer a simple question, which means that either you forgot the project that you carried out many years ago or you gave yourself a promotion.

If the first scenario is true, you have a very selective memory -- you do not remember the project but you remember how the students reacted to it. It looks more likely that the second scenario is true -- you promoted yourself from the position of a lab instructor to that of a professor.
 
If you fail to provide a realistic answer, this would indicate that you have very little knowledge of electromechanical engineering.

And creating the impression in your mind that you're so much smarter than everyone else seems to be the only goal you are interested in pursuing at this forum, which is probably why you've managed to alienate everyone except a couple people. Narcissism is so very off-putting.

We're not done with PEAR and Jeffers. As much as you'd evidently like to leave all that embarrassment behind and try out engineering as the next topic you're going to pretend to have mastered, first we still have the glaring errors in your previous discussions, which I took the time last night to explain in detail. They bear on any further discussion. So before we endure yet another lengthy pontification from you on "how wrong Palmer is," we need to keep having the discussion about how wrong Buddha is. So long as your understanding remains the yardstick for measuring others' supposed errors, that's going to be my point of focus despite your attempts to script the debate otherwise.

You fundamentally don't understand what's being measured in any of these experiments as the statistical variables. You're fixated on the detailed operation of the machinery, and you're struggling to make that fit a preconceived (and simplistically wrong) model of how the analysis works. Please address that before attempting to move on and commit the same sorts of errors all over again.
 
...how the students reacted to it.

The students' reaction was the relevant point. You're trying to change the subject and demand he describe the irrelevant details of the experiment that provoked the reaction. Instead you need to be explaining why you're making the kinds of statistics mistakes here that a beginning student would make. And trying desperately to cover them up.

It looks more likely that the second scenario is true -- you promoted yourself from the position of a lab instructor to that of a professor.

You need arguments that aren't so blatantly ad hominem.
 
That is very smart of you not to answer my questions and to try to laugh the matter off.

:dl:

But I did answer your question! It's in the italicized words. Not that you are educated enough to notice it. You don't know what the simplex method is or what a problem solved with it looks like.

You might be surprised, but I have a copy of the famous book by Dantzig. But this is not my point.

:dl:

You, name dropper, and your imaginary library. Sure, it's not the point (the point is after the non sequitur)

You failed to answer a simple question, which means that either you forgot the project that you carried out many years ago or you gave yourself a promotion.

You are right not to consider the possibility that I'm simply unwilling to follow your ridiculous "commands"; what else are the "either... or..." constructions on web forums made for? Remember, we don't work for you. You work for us, providing entertainment in exchange for a place where you can blog your fantasies, including false achievements, jobs, and possessions you could be proud of if they were real.

If the first scenario is true, you have a very selective memory -- you do not remember the project but you remember how the students reacted to it. It looks more likely that the second scenario is true -- you promoted yourself from the position of a lab instructor to that of a professor.

Then emigrate to Argentina (you're used to emigrating). There they give salaries and pension plans based on fantasies written on web forums ;). Or we can choose an arbiter to receive our pay stubs and then publish snapshots of them simultaneously.
 