Ah! I see the problem (maybe). You are conflating two different sets of calculations. Of course we cannot take the numbers you give here and calculate alpha. What we need to know is what this theoretical distribution of numbers is measuring. Suppose we are looking at medical research, for instance, where a drug is very costly and a false alarm would be expensive, or where an illness is devastating and a miss would be intolerable. These things, which can be quantified in dollar costs, person-hours, or other numbers, are the relative costs of Type I and Type II errors in the real world (not in some hypothetical normal distribution). The relative costs of a drug, the prevalence of a disease in the population, those sorts of things are part of the analysis that goes into determining alpha.

So you say.
Say an average is theoretically distributed normally with a mean of 5 and a standard deviation of 1.
We take 20 samples and observe a mean of 4.7.
Test the hypothesis that mu = 5.
This book just sets alpha = .05. Then it calculates a test statistic and a p-value, compares the p-value to alpha, and ends up not rejecting the null hypothesis that mu = 5.
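For concreteness, here is a sketch of the calculation the book is describing, written as a two-sided z test and reading the standard deviation of 1 as the population sigma (so the standard error of the mean is 1/sqrt(20)). If the book actually runs a t test instead, the numbers only shift slightly:

```python
import math
from scipy import stats

mu0, sigma, n, xbar = 5.0, 1.0, 20, 4.7

se = sigma / math.sqrt(n)            # standard error of the mean, about 0.224
z = (xbar - mu0) / se                # test statistic, about -1.34
p_value = 2 * stats.norm.sf(abs(z))  # two-sided p-value, about 0.18

alpha = 0.05
print(f"z = {z:.3f}, p = {p_value:.3f}")
print("reject H0" if p_value < alpha else "fail to reject H0")  # fails to reject here
```

Either way the p-value comes out around 0.18, which is larger than .05, so the book fails to reject mu = 5.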
But you're saying you can calculate alpha. Can you do it here please to shut me up?
And yes, one could determine that the ideal balance of Type I and Type II error was an alpha of .07 (HIV drugs have been approved at that level, before the sample had enough power to reach .05), although in such cases social reasons will likely push researchers toward .05 or .01 because everybody else uses them.
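For what it's worth, here is a minimal, made-up sketch of that balancing act, not anything from an actual approval decision: every input (the relative costs of the two errors, the prior probability that the effect is real, the sample size, the effect size worth detecting) is an assumption chosen purely for illustration. The point is just that once those numbers are on the table, the alpha that minimizes expected cost falls out of the arithmetic rather than out of thin air:

```python
import numpy as np
from scipy import stats

# All of these inputs are invented for illustration only.
cost_type_I = 1.0    # relative cost of a false alarm (approving a useless drug)
cost_type_II = 3.0   # relative cost of a miss (withholding an effective drug)
p_h1 = 0.5           # assumed prior probability that the effect is real
n, sigma, effect = 50, 1.0, 0.5   # assumed study design and effect size to detect

def expected_cost(alpha):
    """Expected cost of a one-sided z test run at a given alpha."""
    z_crit = stats.norm.ppf(1 - alpha)                           # rejection cutoff
    power = stats.norm.sf(z_crit - effect * np.sqrt(n) / sigma)  # P(reject | H1 true)
    beta = 1 - power                                             # Type II error rate
    return (1 - p_h1) * alpha * cost_type_I + p_h1 * beta * cost_type_II

alphas = np.linspace(0.001, 0.30, 300)
costs = [expected_cost(a) for a in alphas]
best = alphas[int(np.argmin(costs))]
print(f"cost-minimizing alpha is about {best:.3f}")   # roughly 0.07 with these inputs
```

With these particular made-up costs (a miss three times as expensive as a false alarm), the minimum lands near .07; set the two costs equal and it drifts down to roughly .04, in the neighborhood of the conventional .05. The choice is, in principle, a calculation.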
My grad stats courses spent enough time on this topic that I cannot simply think alpha is chosen out of thin air.