You mean you can predict it more accurately than a random variable that was uniform on the 2 to 12 interval. You are using your definition to justify your definition.
Let's hear a strategy to do better than even betting on a uniform, uncorrelated die. I'm saying that, with this definition, it corresponds well to our intuitions.
If the system is non-uniform and/or correlated we can talk about likely outcomes. That is to say, the definition is logical and consistent.
It would be absolutely silly for me to say my example of random makes sense according to your definition of random.
Which, incidentally, you still haven't bothered to provide. Is that ever going to happen? In the absence of an alternative to compare it to, my definition is still the best one available.
Second, there are probability distributions that will be predicted with less accuracy than a uniform distribution. The sum of two dice is just one example of a non-uniform distribution. Try the same with some bi-modal or multi-modal distributions and you will find the same doesn't hold.
You're going to need to be clear about what isn't true about bi-modal distributions. As I understand it, I can do better making predictions about a bi-modal distribution than a uniform one. I'm certain I'd rather be betting on a horse race where I knew the horses were bi-modally distributed rather than uniformly.
Prediction for one die roll: 3.5 +/-2.5
It appears that, with a confidence interval, I can supply an exact prediction on a uniform distribution as well.
Yes, but notice that your confidence interval covers the entire range of possible outcomes. I can interpret that statement either as a trivial platitude or as an admission that you have no idea what the outcome will be. You might as well say, 'I can predict exactly that there will be some value.' Nice prediction, but I'd hardly say it supports your case.
In Snakes and Ladders, or any board game where movement is determined by a die roll, your position after your turn is correlated with your position at the beginning of the turn. It still isn't predictable. This is especially true in games like Trivial Pursuit, where the number of rolls in a turn is variable. Correlation doesn't necessarily make things predictable.
You make a mistake here. Let's say the game has progressed for some time, I'm on position 70 on the board, and I'm rolling one six-sided die. The only outcomes are 71-76; the capacity to predict that it isn't going to be 68 and that it isn't going to be 77 tells us nothing, because those are impossible outcomes. In other words, the outcome of the die roll is uncorrelated with the previous rolls. In a game like this you'd actually have to remap the meaning of the symbols on the die depending on previous rolls to create a correlation.
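A quick sanity check of that distinction, in case it helps. This is a toy setup I made up (Python, straight cumulative positions, no snakes, ladders, or end square): the position after a turn is almost perfectly correlated with the position before it, while the roll itself is uncorrelated with the previous roll.

[code]
import random

def pearson(xs, ys):
    """Plain Pearson correlation, to avoid any library dependence."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rolls = [random.randint(1, 6) for _ in range(10_000)]
positions = []
pos = 0
for r in rolls:
    pos += r
    positions.append(pos)

print("roll vs previous roll:        ", round(pearson(rolls[1:], rolls[:-1]), 3))
print("position vs previous position:", round(pearson(positions[1:], positions[:-1]), 3))
[/code]

The first number hovers around 0 and the second is essentially 1, which is exactly the sense in which the positions are correlated while the rolls stay unpredictable.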
Hmmm, it is possible to make a pseudo-random number generator that won't give itself away. For example, if we want a random string of 1s and 0s ...
1. The next number in the sequence is the number that would minimize the entropy of the sequence to this point.
2. If adding a "1" and adding a "0" both result in the same entropy, then choose "0".
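A minimal sketch of how I read those two rules, purely for concreteness. I'm assuming "entropy" here means the Shannon entropy of the 0/1 frequencies of the sequence so far, since it isn't spelled out:

[code]
import math

def bit_entropy(n0, n1):
    """Shannon entropy (in bits) of the empirical 0/1 frequencies."""
    total = n0 + n1
    if total == 0:
        return 0.0
    h = 0.0
    for n in (n0, n1):
        if n:
            p = n / total
            h -= p * math.log2(p)
    return h

def generate(length):
    """Follow the two rules as stated: append whichever bit minimizes
    the entropy of the sequence so far; on a tie, append 0."""
    bits, n0, n1 = [], 0, 0
    for _ in range(length):
        h0 = bit_entropy(n0 + 1, n1)  # entropy if we append a "0"
        h1 = bit_entropy(n0, n1 + 1)  # entropy if we append a "1"
        if h0 <= h1:                  # tie goes to "0"
            bits.append(0)
            n0 += 1
        else:
            bits.append(1)
            n1 += 1
    return bits

print(generate(16))
[/code]

For what it's worth, run literally this emits 0 forever, and reading "minimize" as "maximize" gives a strict 0101... alternation, so under either reading the output is easy to predict.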
It is a fundamental result from discrete number theory. I suggest you brush up on your math.
You haven't really explained your algorithm, but there is a reason they are called 'pseudo-random': if your algorithm worked, you could patent it and create a true random number generator. But there is a theorem that prevents you from going back and definitively calculating the entropy of the sequence. Read up on Kolmogorov complexity.
Or, if I'm wrong, you can patent your algorithm, or you can tell me and I'll patent it.
An example of a process that is technically random but isn't random in most practical senses is the decay of a chunk of radioactive material. Individual atoms decay at random intervals, but after one half-life passes your prediction of the amount of material left will be accurate to an incredible precision.
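A quick back-of-the-envelope simulation of that point, with made-up numbers: each atom independently survives one half-life with probability 1/2, and the aggregate prediction comes out accurate to a fraction of a percent.

[code]
import random

N = 1_000_000  # number of atoms in the chunk (made-up figure)

# Each atom independently survives one half-life with probability 1/2.
survivors = sum(1 for _ in range(N) if random.random() < 0.5)

print(f"Predicted remaining: {N // 2}")
print(f"Simulated remaining: {survivors}")
print(f"Relative error:      {abs(survivors - N / 2) / (N / 2):.4%}")
[/code]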
Well it sure seems like you're spending a lot of time arguing about 'technically random' processes. Care to make an argument about why evolution is macroscopically random? It seems like Jimbob is the only one still making macroscopic arguments.
It sure seems like what you're saying, with this example, is that systems with random components can be macroscopically non-random. So how about you explain how you can tell the difference between the two in the macroscopic world?
First, to generate a "bit" that is random and without bias, all you need is a distribution where the median lies between two possible outcomes. For example, with the roll of one die I can generate a "0" bit on a roll of less than 3.5 and a "1" otherwise. With the sum of two rolls I can't, simply because the median, 7, might come up. But on the sum of 3 dice I can generate a bit based on whether the result is above or below 10.5.
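To make that last case concrete, here is a small sketch (my own toy code, not anything anyone posted): the sum of three fair dice can never equal the median 10.5, so one trial gives an unbiased bit.

[code]
import random
from collections import Counter

def bit_from_three_dice():
    """One unbiased bit: by symmetry, the sum of three fair dice is
    above 10.5 exactly half the time and can never equal it."""
    return 1 if sum(random.randint(1, 6) for _ in range(3)) > 10.5 else 0

# The two counts should come out very close to 50/50.
print(Counter(bit_from_three_dice() for _ in range(100_000)))
[/code]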
I guess I was thinking about continuous wavefunction distributions. If the distribution is continuous, the median will always be a potential outcome in the distribution, so it will always take two trials over a continuous distribution. If it's a discrete distribution, I'll concede the point that you can get a bit in one trial, if the median is not a possible outcome.
Of course, this is a digression from my digression. But the bottom line is that to get a random bit from a skewed distribution you need to use a function that unskews the distribution. Otherwise your outcome will not be random.
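For what it's worth, one standard "unskewing" function, though not necessarily the one meant here, is von Neumann's trick: sample the skewed source in pairs and keep a bit only when the two samples differ. A rough sketch, with a made-up bias of 0.7:

[code]
import random
from collections import Counter

def biased_flip(p=0.7):
    """A skewed source: returns 1 with probability p."""
    return 1 if random.random() < p else 0

def von_neumann_bit(p=0.7):
    """Flip the skewed source twice; keep the first flip only if the
    two flips differ. The kept bit is unbiased regardless of p."""
    while True:
        a, b = biased_flip(p), biased_flip(p)
        if a != b:
            return a

# The two counts should come out very close to 50/50.
print(Counter(von_neumann_bit() for _ in range(100_000)))
[/code]

It costs extra trials (pairs that agree get thrown away), but it turns any fixed-bias source into an unbiased one.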
To conclude:
Walter, if you want to make a point about the macroscopic randomness of evolution, why don't you: #1 Propose a definition of random that you think we can agree on. #2 Start talking about evolution.
Then we can drop all these technicalities.
mijopaalmc said:
Actually, this describes your modus operandi much more closely. You provided a definition of "random" that didn't take some of the oldest and most basic concepts in probability theory and statistics into account and therefore is invalid in so far as it does not describe most systems that could be described as random.
Interesting claim, but you don't provide any warrants, any reasons, any logic behind your point, which is the converse of what I was claiming: that you don't address the logical and cogent points made by others. This is the 'broken record' strategy of communication. If you don't explain the details of your claim and ignore the details of other people's claims, it only obfuscates, because you are deliberately resisting 'getting to the bottom of' the issue. Of course, if you realize you are wrong but are just too stubborn to admit it, then obfuscation is a good way to go.
But where could I possibly find evidence of this claim??? Maybe I'll look to your next sentence?
Unfortunately, that is not my reasoning. I recognize that there seems to be some sort of cognitive break between the description of evolution as non-random and the practice of statistics within evolutionary biology. The former insists that anything that is uniformly distributed, independent, and uncorrelated is random, whereas the latter allows for those conditions and makes statements about how likely it is to expect such things given the characteristics of the data.
I make five separate points, you lump them together, and you respond 'that is not my reasoning'. No, it's not your reasoning; it is my reasoning. The onus is upon you to address my reasoning or concede the point.
The icing on the cake is that you fall back on 'evolutionary biologists don't understand the consequences of their statistics'. To the contrary: you don't understand the consequences of their statistics.
Even data that is generated by a random (equiprobable) process can, if the sample is small enough, display bias, correlation, or dependence. Similarly, small sample data that is generated by a process that is biased, correlated, or dependent can lack those properties. Thus, it becomes essential to develop methods to detect these and other properties that exist beyond the artifacts of sampling.
Correction: your straw man representation of my argument is flawed.
Correction: Mijo's correction has turned correct into incorrect.
I draw my conclusion from 5 separate arguments (and about 12 in the post before that). Your reasoning is flawed. You do not address the reasoning. You don't even explain why you think it is a straw man.
Even data that is generated by a random (equiprobable) process can, if the sample is small enough, display bias, correlation, or dependence. Similarly, small sample data that is generated by a process that is biased, correlated, or dependent can lack those properties. Thus, it becomes essential to develop methods to detect these and other properties that exist beyond the artifacts of sampling.
Again: making assertions without evidence. Also, 1-5 in the previous post still apply, most specifically #3: the conflation of the methods with the model and the model with the physical world.
I addressed this in detail ~4 posts ago in my discussion with Walter Wayne. Do try to keep up, Mijo.