> Re: your undemonstrated claim that they are "biased". If they are, I'd suggest you talk to JREF, since they're the ones designing biased tests according to you.

Strawman. I never said that the tests were inappropriate for their intended use. It is in the further use of these tests that a bias emerges. I have repeatedly said that; it is dishonest of you to characterise my statement as you do.
> I guess you'd have to ask the people who look at such data in real life for specifics, since that data (and many others) is collected in real life, and they cannot control many things either.

I asked a hypothetical question. It is intended to make a point. Your link shows totals by state for a number of categories. Of far more interest are some of the links from your linked page. The statistics there are gathered for the purpose of looking at driver safety. As such, the categories and reporting techniques are exactly what you would expect in order to achieve that aim. They do not present a situation analogous to my hypothetical.
"Might"? "Hint"? Yeah, that is pretty much the best you can do. The data on an unsafe intersection, if improperly collected, "might" seriously under-report the number of accidents. In comparison, then, another intersection "might" "hint" that it is worse than this first one, and get the sign or signal.But how about a simple example, for you- a number might hint that perhaps a change in the road, signals, or signs is in order.
Your answer shows either that you do not understand sampling and experimental design, or that you are ignoring it for the sake of argument.