
What force controls probability?

Yeah, that's what I mean. Given that probability is not very difficult math in its easiest practical formulations, I'd be surprised if it wasn't applied to military strategy until the 20th century. I wonder when it was first historically recorded being applied to games such as cards and dice.
Wikipedia attributes the formalization of probability theory to Pascal in 1654, in the context of gambling. I'd like to hear about earlier applications.
 
This may be one of those questions that has no answer, because it's worded wrong. But it occurred to me...

I understand the concept of probability, from a basic layman's perspective at least. You flip a coin randomly enough times, the occurrence of heads and tails will gradually trend toward being equal, though there may be lots of excess heads or tails at any point along the way. And there are plenty of detailed calculations to figure the probability of any number of results after any number of flips.

My question is not about the calculations, but about the force that is acting on the coin to make the predicted results happen.

...............

I might be arguing in circles here, but is entropy, the statistical probability for disorder, and which may be precisely measured and quantified, the underlying factor?
 
Isn't it actually the case that if you flip a coin many times and count the number of heads and tails, the absolute difference will tend to grow, while the ratio will tend towards 50-50?

I think I did some experimenting with this as a kid, and found that one of these would seem to grow a little faster than the other, though not so much as to mess up the probability. So while the absolute difference kept growing, it constituted a smaller and smaller part of the whole.

Like this:

10 flips: 6H, 4T, diff 2 (20%)
100 flips: 56H, 44T, diff 12 (12%)
1000 flips: 524H, 476T, diff 48 (4.8%)
etc.

The absolute diff grows, while the ratio tends towards 50-50.
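A quick simulation sketch of this pattern (the code and numbers here are mine, not the poster's): the absolute head/tail difference typically grows on the order of the square root of the number of flips, while its share of the total shrinks.

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

def flip_summary(n_flips):
    """Flip a fair coin n_flips times; return heads, tails,
    absolute difference, and the difference as a share of the total."""
    heads = sum(random.randrange(2) for _ in range(n_flips))
    tails = n_flips - heads
    diff = abs(heads - tails)
    return heads, tails, diff, diff / n_flips

for n in (10, 100, 1000, 100_000):
    heads, tails, diff, rel = flip_summary(n)
    print(f"{n:>6} flips: {heads}H {tails}T, diff {diff} ({rel:.1%})")
```

The absolute diff column generally drifts upward, while the percentage column shrinks toward zero, matching the worked example above.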


Mosquito - Heads I win, Tails you lose
 
I find it easiest to think of probability (at least at the macro, non-QM level) as being an expression of uncertainty based on limited information.
So do I normally, but I'm far too lazy to explain over the internet how you can actually use that to derive an accurate description of what's going on in a model when that uncertainty is hard-coded. It all comes down to symmetry anyway. :D
I'm not a physicist, but I imagine you can deal with quantum physics in exactly the same way, the only difference being that it might, in principle, not be possible to extract the relevant information a priori.
Oh, and re coin tosses: I'm too lazy to Google it at the moment, but I seem to recall seeing a recent study that found that a coin toss is .000001% (or something like that) more likely to be a head than a tail (or vice versa; and I'm sure it depends on which coin they were studying).

Yes, there is a 30-page master's thesis on how, if you take a coin lying heads up and gently toss it so that it just turns through at least a full rotation and doesn't bounce, it is more likely to land heads up.

On the other hand, there is a three-page paper by E. T. Jaynes describing a lazy afternoon spent with a pickle lid (an idealised unfair coin): flipping it normally like a coin didn't seem to introduce any bias, although spinning it on a table and throwing it like a frisbee did.
 
Uhm, excuse me?

But has anyone, anywhere, at any time, ever established that there is, in fact, a force of any sort that influences a coin toss? Or any other random event? Seems an awful lot like what the homeopaths come up with, such as water "remembering."
 
Yeah, that's what I mean. Given that probability is not very difficult math in easiest easiest practical formulations, I'd be surprised if it wasn't applied to military strategy until the 20th century. I wonder when it was first historically recorded being applied to games such as cards and dice.

The maths behind probability may not be that complex, but it is probably the most misunderstood phenomenon ever. People constantly misinterpret, or plain refuse to believe, what the maths tells them. The game show where you pick a door and then the host removes one and asks if you want to change your choice is an obvious example (sorry, can't remember the name). No matter how much the answer is explained, many people simply cannot grasp that the "obvious" answer is wrong, even though the simple maths shows this. The same is true for many, if not most, probabilistic things.

Going back to the military question, how many last stands and suicidal attacks have there been, successful and failed? I would say probability has always been applied to strategy and then ignored.
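The door-switching game described above is easy to check by simulation. A minimal sketch (the setup details and names are my own): switching wins whenever your first pick was wrong, so it wins about two times in three.

```python
import random

def play(switch, rng):
    """One round: 3 doors, one prize; host opens a losing, unpicked door."""
    doors = [0, 0, 0]
    doors[rng.randrange(3)] = 1          # hide the prize behind one door
    pick = rng.randrange(3)
    # host opens a door that is neither the contestant's pick nor the prize
    opened = next(d for d in range(3) if d != pick and doors[d] == 0)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return doors[pick]                   # 1 if the final pick wins

rng = random.Random(0)
n = 100_000
stay = sum(play(False, rng) for _ in range(n)) / n   # roughly 1/3
swap = sum(play(True, rng) for _ in range(n)) / n    # roughly 2/3
print(stay, swap)
```

The simulated win rates land near 1/3 for staying and 2/3 for switching, which is the "obvious answer is wrong" result the post refers to.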
 
Why this is a silly thread

I've pinpointed what bothers me in this thread, an obvious error in the OP:

I understand the concept of probability

Clearly false!!

One of the fundamental understandings of probability is that the prior coin flips in no way influence future flips ... and this can even be tested:

If there was such an influence or force, then I could conceal the results of the first thousand flips, and the woo being tested would be able to determine whether those hidden flips were skewed toward heads (or not) by studying the sequence of future flips. But (given a fair coin) all future sequences of heads and tails are equally likely.

OK? OK!
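The "no memory" claim above is easy to check empirically. A minimal sketch (the numbers and names are mine): even immediately after a streak of five heads, the next flip of a fair coin still comes up heads about half the time.

```python
import random

random.seed(2)  # fixed seed for a repeatable run

# Simulate a long run of fair coin flips (1 = heads, 0 = tails).
flips = [random.randrange(2) for _ in range(200_000)]

# Collect the flip that immediately follows every run of 5 heads.
after_streak = [flips[i] for i in range(5, len(flips))
                if flips[i - 5:i] == [1, 1, 1, 1, 1]]

print(len(after_streak), sum(after_streak) / len(after_streak))
```

The printed frequency sits close to 0.5: conditioning on the hidden or visible past tells you nothing about the next flip, exactly as the post argues.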
 
This seems to have been touched on before, but allow me to give it a try.

I'll use a coin toss as an example, but it would work with any random event (I believe).

A coin toss has two possible outcomes: heads (H) or tails (T).

Now, let's do two flips. We can have the results HH, HT, TH, or TT. Note that 50% of these cases produce a 50/50 split.

Let's go to four flips. Our results can be:
HHHH HHHT HHTH HHTT HTHH HTHT HTTH HTTT THHH THHT THTH THTT TTHH TTHT TTTH TTTT. Out of the 16 possibilities:
2 are all one side (all H or all T).
4 have three heads and a tail
4 have three tails and a head.
6 are 50/50.

So, there are more possibilities that come out to 50/50 than any other result. Even the non-50/50 results, if "averaged", would cancel each other out to 50/50.

However, this doesn't answer your question; it just explains the aggregate/collective behavior of multiple events. The same analysis can be used to show that 7 is the most likely result of throwing two dice and adding the results (or 10/11 for three dice, or 14 for four, etc). Just about any aggregate of a random event can be evaluated this way (here comes the catch) assuming each possibility is equally likely.

And that's the key. The aggregate behavior of multiple coin tosses approaches 50/50 because the chances for a single toss are 50/50. So, to the question about forces that affect probability, the answer is "whatever forces apply to a single instance of the event in question."
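The enumeration above can be reproduced mechanically; a minimal sketch that counts the 16 four-flip sequences by number of heads, and checks the two-dice claim as well:

```python
from itertools import product
from collections import Counter

# All 16 sequences of 4 flips, grouped by how many heads they contain.
counts = Counter(seq.count('H')
                 for seq in map(''.join, product('HT', repeat=4)))
print(counts)
# 0 heads: 1 sequence, 1 head: 4, 2 heads: 6, 3 heads: 4, 4 heads: 1
# -> the 50/50 split (2 heads) is the single most common outcome.

# Same idea for two dice: count each possible sum of the 36 rolls.
dice = Counter(a + b for a in range(1, 7) for b in range(1, 7))
print(dice.most_common(1))  # 7 occurs in 6 of the 36 rolls
```

This matches the breakdown in the post (2 all-one-side, 4 + 4 with a 3-1 split, 6 even) and the claim that 7 is the most likely sum of two dice.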
 
The maths behind probability may not be that complex, but it is probably the most misunderstood phenomenon ever. People constantly misinterpret, or plain refuse to believe, what the maths tells them. The game show where you pick a door and then the host removes one and asks if you want to change your choice is an obvious example (sorry, can't remember the name). No matter how much the answer is explained, many people simply cannot grasp that the "obvious" answer is wrong, even though the simple maths shows this. The same is true for many, if not most, probabilistic things.

Going back to the military question, how many last stands and suicidal attacks have there been, successful and failed? I would say probability has always been applied to strategy and then ignored.

That's not what I'm asking. I'm asking about the first time in recorded history that military strategists (or game enthusiasts) are recorded as consciously applying basic mathematical principles of probability in the strategy-making process.

I'm aware that most humans are wired to situationally make irrational decisions based on probability. The game show Deal or No Deal is studied by economists because of that. I think there was an article in Slate about it: people will generally irrationally choose to avoid a low probability of great harm even when it means missing a chance for moderate benefit, when a rational analysis of the cost/benefit and probabilities would clearly indicate that they should take the risk.
 
I think there was an article in Slate about it: people will generally irrationally choose to avoid a low probability of great harm even when it means missing a chance for moderate benefit, when a rational analysis of the cost/benefit and probabilities would clearly indicate that they should take the risk.

What's irrational about giving more weight to potential harm than potential benefit? Suppose I offer to let you play a game where there's a 5% chance you'll lose both eyes and a 95% chance you'll win 5 bucks without any bodily harm; would you play? Or would you rather play a game where there's a 50% chance you'll owe me a quarter and a 50% chance you win $2? Objectively, you're more likely to win and have a bigger payoff in the first game, but the second game is much less dangerous in case of a loss (though you are more likely to lose at the second game). So what is the rational choice?
 
What's irrational about giving more weight to potential harm than potential benefit? Suppose I offer to let you play a game where there's a 5% chance you'll lose both eyes and a 95% chance you'll win 5 bucks without any bodily harm; would you play? Or would you rather play a game where there's a 50% chance you'll owe me a quarter and a 50% chance you win $2? Objectively, you're more likely to win and have a bigger payoff in the first game, but the second game is much less dangerous in case of a loss (though you are more likely to lose at the second game). So what is the rational choice?
I don't think anyone would put their eyes at risk for 5 bucks. But you'll probably (pun intended) get some takers on your second game.
The rational choice would be not to put your eyes at stake for five bucks and rip you off in the second game instead (provided one gets more than one try, you didn't mention that).
 
What's irrational about giving more weight to potential harm than potential benefit? Suppose I offer to let you play a game where there's a 5% chance you'll lose both eyes and a 95% chance you'll win 5 bucks without any bodily harm; would you play? Or would you rather play a game where there's a 50% chance you'll owe me a quarter and a 50% chance you win $2? Objectively, you're more likely to win and have a bigger payoff in the first game, but the second game is much less dangerous in case of a loss (though you are more likely to lose at the second game). So what is the rational choice?

There's nothing irrational about what you're describing. You're comparing apples (losing both eyes) and oranges (money). In standard rational choice models, you'd need to place a dollar value on the loss of both eyes (or convert both "loss of both eyes" and "gain of X dollars" into "utility units"). No person in his right mind would play the first game, since the loss of both eyes presumably vastly outweighs the benefit of winning five dollars.

Nor is it necessarily irrational for someone to decline a game that, like your second game, has a positive expected value. (Assuming that is an option -- I'm changing your hypothetical.) People do not act to maximize their expected wealth, they act to maximize their expected utility. Economists have long understood that the diminishing marginal utility of money makes people risk-averse.

Translating that into plain English: each dollar you gain brings you less benefit than the one before. $100 would mean more to you if you were poor than if you were rich. Most of us would play your second game because gaining $2 means more to us than losing $0.25 does. But if you change the numbers to $200,000 and $25,000, some people might decline, since the harm of losing $25,000 (which for some people would mean losing their house or car or having to drop out of college, etc.) could outweigh the benefit of gaining $200,000. Change the dollar amounts to Win = 4 x your entire net worth, and Lose = your entire net worth, and I suspect most of us would pass.

(Of course, what you'd really want to do in these examples is get some backers to share in your risk.)

It's risk-aversion that explains the existence of insurance: people are willing to pay $y to be insured against an expected loss of $x, even if y>x.
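A standard way to make the diminishing-marginal-utility argument concrete (a textbook sketch, not something from this thread) is a logarithmic utility function, which builds in "each extra dollar matters less". With it, the small-stakes $2-vs-$0.25 game is accepted, while a scaled-up bet risking most of your wealth is declined even though its expected dollar value is strongly positive:

```python
import math

def expected_log_utility(wealth, win, lose, p_win=0.5):
    """Expected log-utility of wealth after a win-or-lose gamble."""
    return (p_win * math.log(wealth + win)
            + (1 - p_win) * math.log(wealth - lose))

wealth = 10_000                 # arbitrary starting wealth for illustration
base = math.log(wealth)         # utility of declining the gamble

# Small stakes: win $2 / lose $0.25 at 50/50 -- take the bet.
print(expected_log_utility(wealth, 2, 0.25) > base)            # True

# Huge stakes: win 4x your wealth / lose 99% of it at 50/50.
# Expected dollar value is +$15,050, yet log-utility says decline.
print(expected_log_utility(wealth, 4 * wealth, 0.99 * wealth) > base)  # False
```

This is the same reasoning as the net-worth example above: the harm side of a gamble is weighted by how much utility it destroys, not by its dollar amount, which is also why paying $y to insure against an expected loss of $x can be rational even when y>x.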
 
Pup said:
And of course, whatever affects the motion of coins in the air affects far more complex things as well, up to and including human behavior. But I'm not even gonna go there yet. :)
Well, you sort of have to go there to answer the question, though. Because..
So, to the question about forces that affect probability, the answer is "whatever forces apply to a single instance of the event in question."
The only thing left to discuss now is what exact forces act upon a given phenomenon considered random (from flipping a coin to the Heisenberg indeterminacy principle).
 
What drives it toward 50% over time? Our conception of things like "50%". The odds of getting HHTT and HHHH on 4 flips are still the same; nothing is driving a change in probability, but we happen to view (in most cases correctly) HHTT and TTHH as being relatively the same outcome. So in the long run, there's a larger percentage of outcomes that we would classify as "50%", or for other distributions, a tendency towards average outcomes. This is called the central limit theorem, and is what drives entropy.
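The "larger percentage of outcomes we would classify as 50%" claim can be made exact by counting sequences (a sketch; the 5-percentage-point band is my own arbitrary choice): the fraction of all n-flip sequences whose heads share lies near one half grows toward 1 as n grows.

```python
from math import comb

def mass_near_half(n):
    """Fraction of the 2**n equally likely n-flip sequences whose
    heads count is within 5 percentage points of n/2.
    (Integer comparison |20k - 10n| <= n avoids float edge cases.)"""
    total = sum(comb(n, k) for k in range(n + 1)
                if abs(20 * k - 10 * n) <= n)
    return total / 2 ** n

for n in (10, 100, 1000):
    print(n, round(mass_near_half(n), 3))  # the fraction grows with n
```

Each individual sequence stays equally likely; only the count of sequences we'd lump together as "about 50/50" grows. (A later reply in the thread points out this tendency is really the law of large numbers rather than the central limit theorem.)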
 
That's not what I'm asking. I'm asking for the first time in recorded history where military strategists (or game enthusiasts) are recorded in the historical record as consciously appling basic mathematical principles of probability in the strategy-making process.

The oldest military manual that I've seen that contains explicit references to probability theory is from 1919, but that book (Ampumaoppi by General Paul von Gerich) is based on older sources (I think it is an adapted translation of a Russian original, but that is not mentioned anywhere in the book). I would guess that that particular application was already in use by the late 19th century at the latest. The subject of probability analysis in the book was rifle fire: namely, what the distribution pattern of fired rounds is. The book contains formulas that allow a company commander to estimate how effective his rifle fire will be against an opponent at different ranges. [And many of the example situations would have been better suited to mid-19th-century battles than the post-WWI battlefield: things like lines of men facing each other on open ground.]

Though that use of the theory is more tactical. The main problem with applying probability theory to strategy is that it is practically impossible to assign probabilities accurate enough for most of the events.
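As a toy illustration of this tactical kind of calculation (my own example, not a formula from von Gerich's book): if each shot in a volley independently hits with probability p, the chance of at least one hit is 1 - (1 - p)^n, which is the sort of quantity a commander could tabulate against range.

```python
def p_at_least_one_hit(p_single, n_shots):
    """Chance of at least one hit from n independent shots,
    each hitting with probability p_single (an idealised model)."""
    return 1 - (1 - p_single) ** n_shots

# e.g. 100 rifles firing once, with a 2% per-shot hit chance
# at long range (both numbers invented for illustration):
print(round(p_at_least_one_hit(0.02, 100), 3))
```

Even a very low per-shot probability compounds into a high chance of some effect across a whole company, which is why the tactical application was tractable while whole-campaign probabilities were not.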
 
I have only skimmed this thread because you are all much too clever for me :) I think it was Hawking who said that quantum theory is actually quite deterministic, and if we could measure what was happening with all the particles at the quantum level, we could predict anything at the macro level. But of course there are too many particles, so what we see appears to be randomness. For practical purposes, we can treat this pseudo-randomness as the real thing.

The question also reminds me of my own field of clinical trials. We assign patients to treatments randomly to minimise the effects of undetected factors. Human bodies are very complex, and we don't know what else is going to change with treatment, so we try to damp out those effects by randomising them. It seems a bit like not knowing what those quanta are doing. Actually the randomisation lists we use are computer generated so they are deterministic and not really random. But again the process that produces the lists is complex and the result is close enough to random to make no practical difference.
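A minimal sketch of the kind of computer-generated randomisation list described (the details here are my own invention; real trial software does far more, e.g. stratification and concealment):

```python
import random

def randomisation_list(n_blocks, block=('A', 'A', 'B', 'B'), seed=42):
    """Simple block randomisation: shuffle each block of 4 assignments
    so the two arms stay balanced throughout recruitment.
    Seeded PRNG -> deterministic, as the post notes, but effectively random."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_blocks):
        b = list(block)
        rng.shuffle(b)      # order within each block is unpredictable
        out.extend(b)       # but each block contains exactly 2 A's and 2 B's
    return out

print(randomisation_list(3))
```

The list is fully determined by the seed, yet no one inspecting recruitment order can predict the next assignment, which is the "close enough to random to make no practical difference" point.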

Just random thoughts.........
 
Thomas, Dunstan: my toy example was just to illustrate the idea of "low probability of great harm vs chance of moderate benefit" and how the words irrational and rational shouldn't be used to describe a decision based on these probabilities. But I'll admit that Dunstan's modified version of my game 2 is a nicer example.

CaptainManacles: You seem to be thinking of the law of large numbers (and misstating/misinterpreting it, which is common).
 
My view on this is as follows:

Probability is a general descriptive model of observed behaviour.

A model is a simplification of reality.

Simplification is achieved through assumptions.

The assumptions are based on an approximation of what we expect based on theoretical gymnastics and observations.

Therefore the only link between probability and the real world is that probability is defined around real world observations. This is no different to any other theory (such as quantum physics or whatever) because we have no idea what is at the absolute root of cause and effect. Therefore all we can do with modelling is match observations and assume it is practically useful even if it is not based at the true roots of cause and effect. In this sense every model faces the same problem, not just probability.

Cheers
 
