Linda,
First of all, I want to thank you for your participation in this conversation. It is helpful both in examining my own ideas and in understanding other people's better.
I'm going to try to condense this post, as it's getting too long to respond to each paragraph, particularly when several relate to the same misunderstanding.
fls said:
I'm sorry, but your statement still isn't making any sense to me. What sample? What measured value?
The possibility that any particular idea, about which no information is given, is true, as modeled by a sample set of all possible ideas. A sample of randomly chosen ideas could provide an estimate of the proportion that are true within a particular margin of error.
Not at all what I was thinking of. Thank you for being more specific. I’ll try to relate this idea to our discussion now.
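So that we are picturing the same thing, here is a rough sketch, in Python, of the kind of estimate you describe; the counts are made up purely for illustration:

    import math

    # Hypothetical illustration of the estimate fls describes: draw a
    # simple random sample of ideas, count how many turn out true, and
    # report the sample proportion with a margin of error.
    n, true_count = 400, 112              # made-up numbers, not real data
    p_hat = true_count / n                # sample proportion of true ideas
    # 95% margin of error, normal approximation for a proportion
    moe = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
    print(f"{p_hat:.3f} +/- {moe:.3f}")   # 0.280 +/- 0.044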
I wouldn't make such an assumption in those cases because it wouldn't be useful. Why do you consider such examples appropriate analogies?
You stated that an assumption of 50/50 is useful. Why wouldn't it be useful in those situations?
Because we have better ways to estimate those values and we already know that a reasonable expectation for such estimates will be very far from 50/50. We may have minimized our maximum error, but we can far more effectively minimize our average error by making use of the information we already have.
I don't have any illusions about the disadvantages of the minimax approach. The advantage I find useful with this approach is that it does not require any assumptions about the expected value of such an estimate.
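To make the approach concrete, here is a small sketch in Python (purely illustrative): with no data at all, choose the point estimate of a proportion in [0, 1] whose worst-case absolute error is smallest.

    # Minimax point estimate of a proportion p in [0, 1], with no data:
    # for each candidate estimate e, the worst-case absolute error over
    # the whole interval is max(e, 1 - e); pick the e that minimizes it.
    candidates = [i / 100 for i in range(101)]
    worst_case = {e: max(e, 1 - e) for e in candidates}
    best = min(worst_case, key=worst_case.get)
    print(best, worst_case[best])   # 0.5 0.5 -- the worst case is smallest at 0.5

Nothing about where the true value actually lies enters into that calculation, which is exactly the property I find useful.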
In such models, a point estimate is needed to run the simulations. Minimizing the maximum error is a reasonable criterion for selecting the value used.
Except the 'value' you are estimating with your "minimizing the maximum error" criterion is not the proportion of matter to anti-matter, but rather the number of particles to sample in order for the characteristics of those particles to serve as a reasonable estimate of the proportion of matter to anti-matter in the universe.
What I was using it to estimate, when you originally asked your question, was the proportion, not the number of particles to sample. The minimax principle can be used to decide upon a value for an estimated parameter in the absence of sample data. I am NOT talking about using it to compute a sample size. I think this is a major misunderstanding between us, and I'm going to skip responding to the portions of your post that I think relate to it.
Now that you are aware that I'm not talking about minimizing sampling error, but about estimating a parameter value without prior assumptions, do you still consider the result an arbitrary rather than a rational choice?
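For contrast, here is the sample-size use of 0.5 that I believe you had in mind; the 95% confidence level and 3% margin of error are just example figures:

    import math

    # The *other* use of 0.5: as the conservative value in a sample-size
    # calculation, since p * (1 - p) is largest at p = 0.5.
    z, margin = 1.96, 0.03     # 95% confidence, 3% margin (example values)
    n = math.ceil(z**2 * 0.5 * (1 - 0.5) / margin**2)
    print(n)                   # 1068 -- a sample size, not an estimate of p

The 0.5 there protects the sample-size calculation against the worst case; it is not itself the estimate of the proportion.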
I choose an interval of [0, 1].
My choice has nothing to do with the idea under consideration (it could be about unicorns or teacups orbiting Saturn if that will rid you of the idea that any of this represents some sort of prejudice against gods). It has to do with being wrong most of the time if you assume an idea has a 50/50 chance of being true.
Ah, your prejudice is against point estimates then. Okay. Yes, under a continuous distribution, any point estimate of the proportion has probability zero of being exactly right. Let's switch to interval estimates then. Perhaps I can communicate my confusion with your position more clearly.
You chose the interval [0, 1]. Do you consider the probability of the interval [0, .5) to be equal to that of the interval (.5, 1]? My sense is that you do NOT give them equal probabilities. That is why I keep asking how you estimate that value. To me, your denial that you have such a value is inconsistent with finding .5 an irrational, arbitrary choice for a point estimate. If you find the probabilities of the two intervals equal, then .5 is a very reasonable and rational choice for an estimate of that value. OTOH, if you find one interval to have greater probability than the other, then you are assessing and making an estimate of the value of that probability.
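If it helps, here is the comparison I am asking you to make, with a Beta distribution standing in for whatever distribution you actually hold over that probability; the shape parameters are hypothetical, not anything you have stated:

    from scipy.stats import beta

    # Compare the probability mass of the two halves of [0, 1] under a
    # hypothetical Beta(1, 9) distribution, which piles its mass near 0.
    a, b = 1, 9
    lower = beta.cdf(0.5, a, b)   # P(p < 0.5), here about 0.998
    print(lower, 1 - lower)       # very unequal halves

Equal mass in the two halves would make .5 the median of your distribution; unequal mass means you are already carrying an estimate of that probability, whether or not you name it.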
You seem to think that raising the issue of measurement error is relevant.
I'm sorry. I guess I was using a bit of technical jargon. Sometimes I forget how much of my vocabulary is influenced by my training. This is measurement error in the sense that any probability falls into a larger class of mathematical constructs termed measures. It has nothing to do with measurement error in the sense of taking a sample and determining the maximum error of any statistics computed from that sample. (Though that definition could be considered to fall out of the one I was using.) Instead, it has to do with making a point estimate for a parameter from a theoretical distribution without any prior knowledge of the value of that parameter other than its range.
I do not understand why you cannot see how different these two issues are. I am going to think on this a bit and see if I can come up with a way to explain it that finally makes sense to you. It would help if you would read the Ioannidis paper carefully (if you haven't already). It may also help if you attempt to explain to me why you think an assumption about measurement error speaks to an assumption about the probability that an idea is true.
I hope my explanation above clears that up.
However, even if you disagree with me regarding the use of this principle, I don't understand why you would consider using such a criterion to be an arbitrary selection rather than a rational one.
Because you are estimating someone's IQ by measuring their height.
While estimating someone's IQ by measuring their height would not be accurate enough to be useful, a slight correlation could be expected, because malnourished children are stunted in both physical and mental development. It would not be a random choice, which is what I thought you meant by 'arbitrary'. Do you mean instead a choice that has a large average deviation from the true value?
Perhaps you could explain to me how you feel someone might rationally arrive at an estimate, either a point or an interval estimate? How did you rationally arrive at your opinion?
I would consider, out of the vast totality of ideas that humans have had, how many of them have been false. And I would also consider whether we are any good at finding true ideas, considering that it took us tens of thousands of years to realize even a few essential truths about the universe that were not obvious on informal observation.
Thank you for answering my last question here. This answers the question I kept asking earlier about how you arrive at an approximate value. Your consideration of the ratio of true ideas to false ideas is, I agree, a rational approach to making a preliminary estimate of that probability. You have to assume that the hypothesis in question is simply a random selection from that population and has not been subjected to selection bias; i.e., it is no more or less likely to be true than any other randomly chosen idea. While that's not an assumption I'm willing to make, if you're willing to acknowledge that assumption, I won't quarrel with it.
I have no problem with this approach as a rational one; I simply don't see it as the ONLY rational approach to the question. Incidentally, I feel there are rational theistic arguments as well.
Do you feel your approach is the only rational one, or can you answer the question immediately preceding this one in a way that gives a result different from the one you like best?