Ask jt, humots or Caveman about the nature of likelihood.
We're asking you to demonstrate expertise in likelihood. You have already admitted you don't understand what these named posters have said. You just glom onto them because they occasionally correct your critics; dispelling skepticism is your real goal. Just today one of those authorities reiterated his conclusion that you simply don't get how the likelihoods work in your model. Are you really sure you want to invoke someone who just told you that you were wrong?
The likelihood in Bayesian inference depends only on the given -- cause and effect is not taken into account.
Causes and effects, if any, are folded into the reckonings of the various terms of the inference. The physics of a lottery ticket are folded into the model for the denominator of the likelihood ratio. The cause-and-effect genetics are folded into the model as well. They may be a probability distribution reckoned over a number of parameters. They may be
approximated in some cases by a random variable to within an acceptable margin. But that does not mean that cause and effect do not play into a Bayesian inference.
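To make that concrete, here's a minimal sketch, with wholly hypothetical numbers, of how the "physics" of a lottery fixes the chance term in a likelihood ratio. The denominator comes from the mechanism of the draw, not from any tallied frequencies:

```python
# Hypothetical lottery: the mechanism (one winner drawn uniformly from
# the tickets sold) is what fixes the likelihood under the chance
# hypothesis. No frequentist trials needed.
TICKETS_SOLD = 1_000_000  # assumed figure, purely illustrative

p_win_given_chance = 1 / TICKETS_SOLD  # denominator of the likelihood ratio
p_win_given_rigged = 1.0               # numerator, if the draw were fixed for you

likelihood_ratio = p_win_given_rigged / p_win_given_chance
print(likelihood_ratio)  # 1,000,000 -- and the prior still has its say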
Again, take the Bayesian search, which is a better example for this than lotteries or card drawings. A house consists of five rooms: a living room, kitchen, dining room, bedroom, and bathroom. The cat is somewhere in the house, and it's your job to find her. Randomly distributing a cat among the five rooms would yield, over time, an even probability of finding the cat in any one of them. But that's not how cats work. The cat doesn't like the dining room because it's all hard surfaces. She abhors the bathroom because she associates it with baths. She likes the living room and bedroom because they have soft padded surfaces to nap on and windows to stare out of. Based on what we know about the cat, we can adjust the prior probability distribution accordingly. This is akin to the example jt512 gave you more than a year ago, where he was able to state the priors of a coin toss based on information about the physics of the coin.
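A sketch of what such a knowledge-based prior might look like; the numbers are illustrative stand-ins for our knowledge of this particular cat, not measured frequencies:

```python
# Prior over rooms, shifted away from the uniform 0.20 apiece by what
# we know of the cat's habits. Illustrative values only.
prior = {
    "living room": 0.35,  # soft surfaces, windows to stare out of
    "bedroom":     0.35,  # likewise
    "kitchen":     0.15,
    "dining room": 0.10,  # all hard surfaces
    "bathroom":    0.05,  # associated with baths
}
assert abs(sum(prior.values()) - 1.0) < 1e-9  # a proper distribution
```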
The aim of the Bayesian search is to optimize the search path to find the cat as quickly as possible. The events that drive the likelihood ratio from step to step are the observations that the cat was not found in the room just searched. The posteriors guide the next step of the search given a failed search of some room. The process is Bayesian because our probability distributions are not based on frequentist modeling but rather on knowledge.
Armed with the prior, we search the living room first and fail to find the cat. However, we have to consider false negatives when reckoning the effect of that observation on the probability distribution. The living room is brightly lit, but large and full of places for cats to hide. Conversely, the dining room is also brightly lit but has few places for a cat to hide undetected. So in transforming the prior distribution into the posterior distribution, we have to weigh the difficulty of searching each room, since that affects whether we get a false negative. Our posterior distribution would then include elements such as "the probability that the cat is in the dining room given that the cat was not observed in the living room." And that would guide where we search next. But the key is that none of that estimation was based on a random variable. In this case it was based on subjective knowledge of the particular house being searched. In other cases it can be based on discernible cause-and-effect "physics" of the problem, even if those physics are quite complicated. If an operative cause is "the room is dark" or "the room has lots of hiding places," the relevant effect is a greater probability of a false negative in searching that room. The model must reflect such things if it's to be useful in a Bayesian sense. The utility of Bayes is precisely that it is not bound to frequentist modeling.
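Here's roughly how that update could be coded. The function name and the per-room detection probabilities are my own placeholders; the detection probability is the "physics" of how hard each room is to search, and one minus it is the false-negative rate:

```python
def update_on_failed_search(dist, searched, p_detect):
    """Posterior over rooms after failing to find the cat in `searched`.

    p_detect[room] is the chance we'd spot the cat in one sweep of that
    room if she were actually there; 1 - p_detect[room] is the
    false-negative rate. Rooms with many hiding places get a low p_detect.
    """
    posterior = {}
    for room, p in dist.items():
        if room == searched:
            # P(not seen | cat is here) = 1 - p_detect: the false negative
            posterior[room] = p * (1 - p_detect[searched])
        else:
            # P(not seen in `searched` | cat is elsewhere) = 1
            posterior[room] = p
    total = sum(posterior.values())  # P(cat not seen in `searched`)
    return {room: p / total for room, p in posterior.items()}
```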
Incidentally, because our likelihood included the possibility of a false negative, the posterior probability that the cat is in the living room is still non-zero. And that's where it gets interesting, because in this particular example that posterior may still be higher than the running probability that the cat is in the bathroom. At some point our model may even suggest searching the living room a second time before searching the bathroom even once.
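Continuing the sketch above (reusing `prior` and `update_on_failed_search`, with assumed detection rates), the arithmetic bears that out:

```python
# Assumed detection rates: the living room is hard to sweep
# exhaustively; the dining room and bathroom are easy.
p_detect = {"living room": 0.4, "bedroom": 0.5, "kitchen": 0.7,
            "dining room": 0.9, "bathroom": 0.9}

dist = update_on_failed_search(prior, "living room", p_detect)
# Living room keeps 0.35 * 0.6 = 0.21 of the unnormalized mass, about
# 0.24 after renormalizing -- still well above the bathroom's ~0.06.
# So the model can recommend a second pass at the living room before
# the bathroom ever gets its first.
```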
Your problem, Jabba, is that you simply don't understand how these things are modeled. At the conceptual level. I told you this as Fatal Flaw no. 1. The answer you finally gave for that was simply to quote chapter and verse from some source about what a statistical inference is. Telling us what constitutes a valid inference does not prove you've formulated one.