Did Nate Silver nail it or what?

I'm curious how this would have played out if the predictions and results were reversed; meaning if Silver had been predicting a Romney victory this whole time and had been proven correct.

Leading up to the election, Silver made several appearances on left-leaning shows like Real Time with Bill Maher, The Rachel Maddow Show, and The Colbert Report. Would he have been asked to appear if he had been projecting an 80% likelihood of a Romney victory?

I think Rachel would have had him on regardless, given that she's such an unabashed geek. Colbert too, probably (it would have played to his schtick). Probably not Maher, though. And I can almost hear Ed Schultz's bloviating ("I don't care about your numbers and fancy computers, I know the American people, and blah blah blah...").
Yeah you guys.

Sure Nate Silver was right, but what if he was wrong?

I mean did you think about that?
 
Yeah you guys.

Sure Nate Silver was right, but what if he was wrong?

I mean did you think about that?


I think boooeee was asking what the reaction would be if Mr. Silver predicted a Romney win and Romney actually won. Basically, would liberals/Democrats still be praising Mr. Silver's accuracy, would they simply ignore him, or would they criticize his methodology?
 
However, as I pointed out yesterday, based on his percentages, his probability of getting them all right (ignoring Florida) was about 25%.

Careful. Such a conclusion assumes that the results in the various states are statistically independent. This is almost certainly false: If several states swing one way, others are then more likely to go the same direction.
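
(A minimal sketch of that point, in Python, with invented win probabilities rather than anything Silver actually published: the chance of calling every state correctly is higher when the states share a common swing than the naive product-of-probabilities figure suggests.)

import random

state_probs = [0.95, 0.90, 0.85, 0.80, 0.79, 0.75, 0.70]  # made-up P(the favored candidate wins)

def sweep_rate(shared_swing_sd, trials=200_000):
    """Fraction of simulated elections in which every favorite wins."""
    sweeps = 0
    for _ in range(trials):
        swing = random.gauss(0.0, shared_swing_sd)  # one national shift applied to every state
        if all(random.random() < min(max(p + swing, 0.0), 1.0) for p in state_probs):
            sweeps += 1
    return sweeps / trials

print("independent states:", sweep_rate(0.0))   # roughly the product of the probabilities (~0.24)
print("correlated states: ", sweep_rate(0.05))  # noticeably higher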
 
Why not? He had Frank Luntz and John Fund on in the last few weeks.


Fair enough. Maybe he would have had him on at the beginning of the show rather than having him join the panel in the middle. That spot seems to be reserved for more friendly guests. It just felt like the reason Maher had him on was to point out how much the right hates facts and logic.

But I'm inclined to agree with Donal that Maher tends to be pretty selective when it comes to math and science.
 
Careful. Such a conclusion assumes that the results in the various states are statistically independent. This is almost certainly false: If several states swing one way, others are then more likely to go the same direction.

Not only that, but the overall chance of success should be seen as a meta-probability built from the individual state probabilities, with the individual state outcomes as components of the overall result. That Obama won in the predicted manner means he had to win most or all of the states he was predicted to win; otherwise he would not have won overall.

To look at it another way, think of Obama's chance of winning as the chance of surviving an illness, with the chance of taking each individual state being the chance that a particular organ does not fail from the illness. The model successfully predicted which organs would fail and that Obama survived.
 
Fair enough. Maybe he would have had him on at the beginning of the show rather than having him join the panel in the middle. That spot seems to be reserved for more friendly guests.

I hadn't thought of that but I think you're right. And as I recall, Luntz was on at the beginning and Fund was on the panel.

But I'm inclined to agree with Donal that Maher tends to be pretty selective when it comes to math and science.

That's true, unfortunately.
 
Actually, he is also getting a little criticism along those lines from some statisticians: that his error limits were too conservative.
I totally get that line of thinking, but it's still flawed, and here's why:

Every single possible EV combination had less than a 50% chance of happening; not just that, but less of a chance than 332 for Obama. So held against that standard, every single possible result would have been deemed a failure. Silver wasn't saying 332 vs. all other possibilities. He was assigning a likelihood to each one.

Now, if it ended up at, say, 322 for Obama, and someone said, "Well that wasn't the most likely outcome, but that just proves him right," they would be foolish.

Think of the NBA lottery. If one team has the most entries, which gives them, say, a 35% chance of getting the top pick, and that team does end up winning the first pick, should people be thinking "fix!"? No. That the team with the highest probability got the pick is not a reason to doubt that they were allotted the correct number of chances. That would be ludicrous.
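
(If anyone wants to see that spelled out, here's a toy simulation in Python; the win probabilities and electoral-vote counts are stand-ins, not Silver's actual figures. The single most likely EV total still comes in well under 50%, which is exactly why "the modal outcome didn't hit 50%" is no criticism at all.)

import random
from collections import Counter

base_ev = 247  # electoral votes treated as safe for the favorite in this toy map
# (electoral votes, P(favorite carries the state)) for a handful of hypothetical swing states
swing = [(29, 0.50), (18, 0.90), (13, 0.80), (10, 0.95), (9, 0.80), (6, 0.85)]

trials = 100_000
totals = Counter()
for _ in range(trials):
    totals[base_ev + sum(ev for ev, p in swing if random.random() < p)] += 1

for ev_total, count in totals.most_common(5):
    print(f"{ev_total} EV: {count / trials:.1%}")  # the top total is "most likely" yet far below 50%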
 
Here's a good example, though I've seen the same thing elsewhere too:

The problem with this analysis is that it assumes that all of these events are independent events, like coin flips. A dozen coin flips is a dozen independent events, none of which affect each other or are affected by a common outside influence.

The elections in each state, however, while not necessarily affecting each other, are all affected more or less by the same national news media. So if Obama is getting positive coverage in the news media in Florida in the week before the election, he is almost certainly getting equally positive coverage in the news media in Virginia. If people in Wisconsin hear a negative story about Romney, then people in Ohio almost certainly hear the same negative story at the same time. That's why you can't analyze these predictions as if they were discrete, independent events.

(Just thought somebody should explain why that isn't true.)
 
Careful. Such a conclusion assumes that the results in the various states are statistically independent. This is almost certainly false: If several states swing one way, others are then more likely to go the same direction.

That could happen if the polls are systematically biased, yes. But the criticisms being leveled are not making that argument. They are basing their complaints on the claim that it was too unlikely to get them all right, even (especially?) assuming that each state was independent and random.

Of course, the criticism is that, according to Silver, the chance of him getting ALL the states right should have been less than 10%. However, aside from the fact that something with a 10% chance of happening is hardly a stretch, the biggest factor in that 10% is Florida, which basically cuts the chance in half.
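
(Back-of-the-envelope version, with made-up per-state probabilities rather than Silver's actual numbers: multiply the chance of each call being right, and a near coin-flip state like Florida cuts the whole product roughly in half.)

from math import prod

p_other_competitive = [0.97, 0.91, 0.84, 0.80, 0.79, 0.74, 0.68]  # hypothetical competitive states
p_florida = 0.50

print("all right, ignoring Florida: ", prod(p_other_competitive))              # ~0.24
print("all right, including Florida:", prod(p_other_competitive) * p_florida)  # ~0.12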
 
Every single possible EV combination had less than a 50% chance of happening; not just that, but less of a chance than 332 for Obama. So held against that standard, every single possible result would have been deemed a failure. Silver wasn't saying 332 vs. all other possibilities. He was assigning a likelihood to each one.

I've pointed this out as well (especially regarding Wang). Wang's prediction wasn't a failure by any means. Sure, he had predicted 303 as the most likely outcome, but he had 332 as the second most likely outcome, and the difference between them wasn't large. In fact, it came down pretty much totally to Florida. Wang had Romney's chance of winning Fla at 60%, so that meant that, between the options of 303 and 332, he had 303 about 60% of the time and 332 40%. In contrast, Silver had them flipped, because he had the Florida probability flipped. But again, the relative chance of 332 vs 303 just reflected the probability of Obama winning Florida.

No one who understands this thinks there is any real difference between a prediction of 20% chance of 332 and a 17% chance of 303, which is what Silver had. Both are perfectly reasonable. He even had 350ish as possible (add NC), which makes perfect sense considering NC was the one he was most likely to get wrong.
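
(Quick arithmetic on that, with rough stand-in numbers: since the 303 and 332 maps differ only by Florida, their relative likelihood is just the Florida split, whichever way a given model leans.)

p_florida_obama = 0.40    # roughly Wang's figure as described above; Silver's leaned the other way
p_rest_of_map = 0.42      # made-up joint probability of the remaining calls

p_303 = p_rest_of_map * (1 - p_florida_obama)  # Obama takes everything but Florida
p_332 = p_rest_of_map * p_florida_obama        # Obama takes Florida too
print(p_303, p_332, p_303 / p_332)             # ratio is (1 - 0.40) / 0.40, i.e. 60:40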

So yeah, Silver nailed it, but so did Wang. The most important lesson is NOT about the details of the statistical approaches, whether it is Bayesian or whatever, but that it is possible to get reliable insight from the polls.

However, this goes right back to Silver's past with Baseball Prospectus, and, in fact, can be traced back to Bill James. It's the exact type of thing that James originated, and that Silver and the BP crew advanced, when James claimed that minor league baseball stats, when properly taken into context (in particular, league, park, and age), are just as reliable as major league stats.

What Silver and Wang are doing is the same thing - polls, when properly taken into context, are reliable indicators of voting results.

Folks familiar with the history of sabermetrics are not at all surprised by this. We've seen it all before. Heck, I had a lot of interactions with some of the Baseball Prospectus guys (I don't remember Nate being involved, though) long before they ever started doing BP, and was familiar with their work. I knew they were working on projection models at the time, and they absolutely got a ton of crap from people who wouldn't believe it, just like what is happening now.
 
Okay, so Silver was right and I was wrong. It strikes me, however, that there is a risk to accepting Silver, and it is higher on the liberal side than it is on the conservative side. Suppose in 2016, Silver's method projects a pretty easy win for the GOP. Isn't there a strong risk that becomes a self-fulfilling prophecy, as liberals, dispirited by Silver's projections, stay home from the polls in droves?

What would have been Silver's projection in 2000 and 2004? He surely would have had Bush as the favorite in both years, right? Wouldn't that have made Democrats less likely to go out to the polls? While it would not have made a difference in the presidential race, it might have had major consequences in the down-ticket contests. Remember, this was the criticism that the Democrats leveled against the networks in 1980; that by declaring the election (and many states) for Reagan before the polls had even closed, they artificially deflated turnout, hurting Democratic candidates for lower offices.
 
Every single possible EV combination had less than a 50% chance of happening; not just that, but less of a chance than 332 for Obama. So held against that standard, every single possible result would have been deemed a failure. Silver wasn't saying 332 vs. all other possibilities. He was assigning a likelihood to each one.
Thank you. I'm really beginning to think it's time to give up on the idea of folks understanding statistics.
 
Okay, so Silver was right and I was wrong. It strikes me, however, that there is a risk to accepting Silver, and it is higher on the liberal side than it is on the conservative side.

Only if you believe that poll results determine voter tendencies, rather than the other way around. I don't.

Suppose in 2016, Silver's method projects a pretty easy win for the GOP. Isn't there a strong risk that becomes a self-fulfilling prophecy, as liberals, dispirited by Silver's projections, stay home from the polls in droves?

Nobody's ever proven that to happen. I have no fear of it; the GOP turned its base out roughly as expected this time, for example, it's just that their base was only good for about 47%.

What would have been Silver's projection in 2000 and 2004? He surely would have had Bush as the favorite in both years, right?

Don't know about 2000 as it was clearly much closer than any of the last three Presidential elections (I haven't looked at the state poll #s from then); in 2004 he definitely would have, as the national and state polls showed that Bush was clearly going to win. Kerry supporters had their period of 'poll denial' then.

Wouldn't that have made Democrats less likely to go out to the polls?

Again, only if you believe that poll results determine voter tendencies.

Remember, this was the criticism that the Democrats leveled against the networks in 1980; that by declaring the election (and many states) for Reagan before the polls had even closed, they artificially deflated turnout, hurting Democratic candidates for lower offices.

Possibly true in 1980 (and possibly just the usual loser-side pouting), but it's 2012 now and it will be 2014 before this issue comes to the fore again. Not a concern to me, shouldn't be a concern to the GOP.
 
Okay, so Silver was right and I was wrong. It strikes me, however, that there is a risk to accepting Silver, and it is higher on the liberal side than it is on the conservative side. Suppose in 2016, Silver's method projects a pretty easy win for the GOP. Isn't there a strong risk that becomes a self-fulfilling prophecy, as liberals, dispirited by Silver's projections, stay home from the polls in droves?

What would have been Silver's projection in 2000 and 2004? He surely would have had Bush as the favorite in both years, right? Wouldn't that have made Democrats less likely to go out to the polls? While it would not have made a difference in the presidential race, it might have had major consequences in the down-ticket contests. Remember, this was the criticism that the Democrats leveled against the networks in 1980; that by declaring the election (and many states) for Reagan before the polls had even closed, they artificially deflated turnout, hurting Democratic candidates for lower offices.

Honestly, while that is a lurking concern, I don't think you can directly compare what Silver does with what the apparent problem in 1980 was.

For instance, opinion polling showing a candidate is far behind might result in that candidate's supporters getting discouraged and staying home, or it might result in the candidate's supporters redoubling efforts to register new voters or increasing Get Out The Vote efforts. Since polling starts and is in flux so far ahead of election day, it's hard to say how any given forecast based on any given set of poll results before election day might influence turnout when that day comes.

Networks actually and definitively declaring things on election day itself, though, has a huge impact, because it's one thing to be told your candidate is down in the polls a month before the election, and quite another to learn your candidate has already lost in the key battleground state you live in just as you're getting ready to drive down to the polling station to cast your ballot.
 
