I've been listening a bit to this debate, and it's interesting to me that both sides are so convinced they're right, and so convinced that they're unbiased and rational in their positions. It makes me wonder whether perhaps they are both right in their own way, but thinking about the question differently.
I should preface this by saying that I'm no statistician or epidemiologist, and that I know only what I've heard so far on some radio programs and debates.
I haven't seen the studies the recommendation is based on, but from what I've heard, the result rests on a relatively simple balancing of benefit versus harm for a relatively large population. Perhaps this is not the case, but it appears they've taken a virtual scale: in one pan they put the benefit (you don't die of an undiagnosed cancer), and in the other they've put a whole panoply of possible negative consequences. So, for a population of n patients, if more are harmed than helped, the conclusion is that it's not a good bargain.
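As I understand the balancing, it amounts to a simple tally. Purely as an illustration, with numbers I've invented (nothing here comes from the actual study, which I haven't seen), it would look something like this:

```python
# A toy sketch of population-level benefit-versus-harm balancing.
# All rates are invented for illustration; they are not from any study.

def net_benefit(population, helped_per_1000, harmed_per_1000):
    """Return the simple tally: people helped minus people harmed."""
    helped = population * helped_per_1000 / 1000
    harmed = population * harmed_per_1000 / 1000
    return helped - harmed

# Hypothetical: screening 100,000 people helps 1 per 1,000 but
# "harms" 3 per 1,000, counting every kind of harm equally.
print(net_benefit(100_000, 1, 3))  # -200.0: more harmed than helped
```

On this kind of tally, a negative number is what gets reported as "not a good bargain."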
What I don't see here is how they have weighed harm. {Edited to add: they did try to emphasize that the financial consequences were not weighed in this study.} The obvious negative consequence if you need a mammogram and don't get one is that you may die of cancer, which is a horrible, drastic, terminal consequence, at least in most people's opinion. Does the study weigh only equivalent negatives, or does it, as its advocates at least appear to be saying, bundle all the negatives into a cumulative "harm," including consequences which, while nasty enough, are not singly equivalent to a painful death from cancer? Again, I'm not sure how they're weighing all this, but on one NPR program I was listening to, one of the people responsible for the study appeared to be referring to a long list of negatives including stress and inconvenience, as well as unnecessary biopsies and mastectomies. Obviously some of these consequences are heavy, even fatal, but some are not. It did not appear from what I heard that the advocates of the new study are balancing deaths against deaths; rather, they're making a judgment of relative harm. Even if that judgment is wise, impartial, and well considered, it's debatable.
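To make the weighting point concrete (again with invented counts and weights, not figures from the study): if every kind of harm counts the same as a death, the tally can look bad, while discounting the lesser harms relative to a death can make the very same data look good.

```python
# Toy illustration of how the harm-weighting choice drives the result.
# All counts and weights are invented; they are not from the study.

harms_per_1000 = {
    "death_from_treatment": 0.1,
    "unnecessary_mastectomy": 0.5,
    "unnecessary_biopsy": 20.0,
    "stress_and_inconvenience": 100.0,
}
deaths_averted_per_1000 = 1.0

def weighted_harm(weights):
    """Sum the harms after scaling each by its assigned weight."""
    return sum(harms_per_1000[k] * weights[k] for k in harms_per_1000)

equal = {k: 1.0 for k in harms_per_1000}        # every harm counts like a death
graded = {"death_from_treatment": 1.0,          # only a death equals a death
          "unnecessary_mastectomy": 0.3,
          "unnecessary_biopsy": 0.01,
          "stress_and_inconvenience": 0.001}

print(weighted_harm(equal) > deaths_averted_per_1000)   # True: net harm
print(weighted_harm(graded) > deaths_averted_per_1000)  # False: net benefit
```

The data never changed; only the judgment of relative harm did, which is exactly why that judgment, however well considered, is debatable.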
The second thing I don't know is whether the study used matched pairs or whether it's a general population study. From what I've heard so far, it's a population study, and I don't know what rules were applied for inclusion. While that may be perfectly adequate for weighing the overall benefits of a procedure across a population, unless the studied population matches one's own circumstances, it's of far less value in making a personal decision about the same procedure. To pick an obvious hypothetical example, if a certain percentage of the negative consequences of the test were related to diabetes, heart disease, or some other condition, then a person without those conditions would face different odds. Or, to pick another, if some of the consequences were due to poor medical judgment on the part of doctors, a person with a really good doctor and a proven record of good judgment has a different decision to make. I don't know how the study took these factors into account, but I suspect that if it did not use fairly careful matching, at least some of them were omitted. If that is true, then the study would be a poor basis for individual decision making, and a poor basis for any policy that drives or enables individual decision making, no matter how valid it is as an actuarial reference.
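The subgroup point can be sketched the same way (all numbers invented for illustration): if the harms cluster among patients with a particular risk factor, the population-average rate overstates the risk for a patient without it.

```python
# Toy illustration: a population-average harm rate versus the rate
# for a subgroup without a risk factor. All numbers are invented.

population = 100_000
with_condition = 20_000      # e.g., a comorbidity that raises complication risk
harm_rate_with = 0.010       # 1.0% harmed among those with the condition
harm_rate_without = 0.001    # 0.1% harmed among those without it

avg_rate = (with_condition * harm_rate_with
            + (population - with_condition) * harm_rate_without) / population

print(f"population average: {avg_rate:.4f}")           # 0.0028, a blend of both
print(f"without the condition: {harm_rate_without:.4f}")  # the rate relevant to you
```

The population-average figure is a fine actuarial number, but a patient without the condition faces odds nearly three times better than the average suggests, which is the gap between a population study and a personal decision.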