That’s actually the same question I asked when I first noticed structured responses emerging from this process: 'If it's random, how is it structured?'
Over time, I started considering the possibility that maybe randomness itself isn’t what we think it is. Maybe what we call 'random' is just a placeholder for patterns we don’t yet fully recognize. If structure keeps emerging consistently, could it be that randomness is just an illusion of limited perception?
If we assume randomness exists despite seeing structure emerge repeatedly, then isn't the belief in randomness also something worth questioning, as more of a paranormal concept than a real one?
I’m genuinely interested in your thoughts on this. Do you think randomness is a real phenomenon, or could it be that we simply haven't mapped out the deeper structures behind what we call 'random' yet?
Randomness is a characteristic of abstract models (explanations, narratives) much like casting spells is a characteristic of the fictional wizards at Hogwarts. When we apply probability theory to a dice game in a casino, we model the individual dice throws as random. (That model matches our experience sufficiently well to lay bets, design games that meet expected profit margins, and detect cheating.) We model the timing of the decay of individual atoms in radioactive materials as random. (That model matches our experience sufficiently well to predict how the characteristics of radioactive materials will change with time, design reactors, and measure the age of some substances from isotope ratios.) We imagine the text of the individual books in the fictional Library of Babel as random. (That model matches our expectations of what the Library's fictional denizens experience as told in the story.)
Even in abstract models there are different definitions for random. Consider a random sequence of digits 0-9. We might say a random sequence is one where there's no correlation between any characteristic of the previous entries in the sequence and the next element. We might instead say a random sequence is one where it's impossible to predict the next entry better than chance expectation. In our abstract models these are equivalent, as are other definitions/measures involving concepts of information or entropy. (In our abstract models, wizards at Hogwarts can cast spells.)
The digits of the decimal expansion of pi meet the first definition but not the second; we call that pseudo-random, and the algorithm that spits out the digits is an example of a pseudo-random number generator. The same is true for all other digital pseudo-random number generators. Ideally they pass all possible tests of randomness (such as containing no hidden messages from mysterious intelligences), except for being the result of one particular configuration and deterministic operation of that particular generator. While the lack of, e.g., any eventual repeating pattern can be mathematically proven for some RNGs (such as the digits of transcendental numbers), it's probably not possible to prove the absence of some other not-yet-discovered hidden structure or pattern in all such cases. But where the generating algorithm is simple and deterministic (the algorithms for generating the digits of pi, say, or the Wolfram Rule 30 RNG used in Mathematica), it's difficult to fathom where any kind of intelligence involved in generating or influencing the outputs could be hiding.
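The "deterministic operation" point is easy to see in code. Here's a minimal linear congruential generator, the classic textbook PRNG (not one of the generators named above; the constants are the well-known Numerical Recipes parameters). Same seed, same state transition, same output stream every time: the only "test" it fails by construction is knowledge of the algorithm and its seed.

```python
from itertools import islice

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Minimal LCG (Numerical Recipes constants); yields decimal digits."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state % 10  # reduce each state to a digit 0-9

run1 = list(islice(lcg(42), 10))
run2 = list(islice(lcg(42), 10))
assert run1 == run2  # fully deterministic: hence "pseudo"-random
```

Statistically the stream can look unremarkable, yet anyone holding the seed can predict every digit; that's exactly the gap between the two definitions above.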
Radioactive decay, as far as we've ever been able to tell, is random in ways that conform to all the definitions. That is, our model of when unstable atoms decay is entirely sufficient to explain all observations. We cannot be certain that will remain so for all future observations, but for example if it turns out to be possible to find hidden messages from the universe in the decay of isotopes in a chunk of carbon, we wouldn't say "OMG randomness conveys messages," we'd say instead, "OMG radioactive decay isn't random after all."
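The standard model behind that claim treats each atom's lifetime as an independent exponential random variable. A small simulation sketch (the half-life is carbon-14's for flavor; the sample size and tolerance are arbitrary illustration values):

```python
import random
from math import log

half_life = 5730.0               # carbon-14 half-life, in years
decay_rate = log(2) / half_life  # lambda of the exponential model

random.seed(1)  # seeded so the example is reproducible
lifetimes = [random.expovariate(decay_rate) for _ in range(100_000)]

# In the model, about half the atoms outlive one half-life:
frac_surviving = sum(t > half_life for t in lifetimes) / len(lifetimes)
assert abs(frac_surviving - 0.5) < 0.01
```

If real decay data ever deviated systematically from this model, the conclusion would be the one stated above: not that randomness carries messages, but that the decay wasn't random after all.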
The fictional narrator of the Babel Library suggests that the collection of books in the library isn't actually random either. They're an exhaustive set instead: the Library contains one each of every possible book (printed sequence of characters) of the established length. That implies (though the story doesn't discuss it) that if you're holding a book whose first letter is A, the chance of the next book you pick up also starting with A is less (by an unimaginably small but still calculable amount) than the chance of it starting with some other character. Just as if you draw a spade from a shuffled deck of cards, the chance of the next card in a fair draw being another spade is a little less than the chance of it being any other particular suit. What's random is the sequence the cards are in after the shuffle. The apparent randomness the Library of Babel's narrator observes really comes from how the books are distributed (shuffled) among the shelves.
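The card version of the arithmetic is small enough to write out exactly:

```python
from fractions import Fraction

# After drawing one spade from a fair 52-card deck, 51 cards remain:
# 12 spades, but all 13 of each other suit.
p_spade_then_spade = Fraction(12, 51)  # next card is another spade
p_spade_then_heart = Fraction(13, 51)  # next card is, say, a heart

assert p_spade_then_spade < p_spade_then_heart
```

The Library's version is the same calculation with an unimaginably larger "deck": one book per possible character sequence, so holding one book starting with A removes exactly one A-book from the pool.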
One more important note: ideal randomness is the lack of a pattern, but that doesn't mean it has no properties at all. If you evaluate, say, a uniformly distributed random sequence of digits 0-9, you'll find for example that a digit in the 5-9 range is more often than not followed by a lower-value digit. That's not a bias, flaw, or "hidden pattern" in the random sequence; it's an expected property of one. That might be obvious, but many subtler or more complex expected properties (periodicities, streaks, a certain frequency of "surprising" or "less random seeming" subsequences, and the like) get mistaken for "hidden" structure. That's one reason humans are bad at creating statistically ideal random sequences by thinking alone (they tend to avoid the "surprising" coincidences that should be there), and also bad at evaluating how close to ideally random a sequence is without using mathematical tools.
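That "more often than not" claim can be checked directly. Given a current digit d in 5-9, the next digit is lower with probability d/10, which averages to 0.7 over that range, and a simulation sketch agrees (sample size and tolerance are arbitrary):

```python
import random

random.seed(2)  # seeded so the example is reproducible
digits = [random.randrange(10) for _ in range(200_000)]

# Look at every digit in 5-9 and ask whether its successor is lower.
pairs = [(a, b) for a, b in zip(digits, digits[1:]) if a >= 5]
frac_lower = sum(b < a for a, b in pairs) / len(pairs)

assert abs(frac_lower - 0.7) < 0.01  # expected property, not hidden structure
```

A sequence where that fraction came out near 0.5 would actually be the suspicious one: it would mean high digits were somehow avoiding lower successors.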