That's what I'd assume.
Consider the following: suppose they had predicted that Clinton had a 90% chance of winning every state (ignore Maine, Nebraska, and DC for simplicity). If those probabilities are exactly right, then we should expect her to lose 5 of the 50 states, and Trump's expected electoral-vote total would be 53.8 (10% of 538), with the rest going to Clinton.
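The arithmetic behind those two numbers is just linearity of expectation: each state contributes 10% of its electoral votes to Trump's expected total, so the total is 10% of 538. A quick sketch (numbers are the flat-90% scenario above, not real forecast data):

```python
n_states = 50
p_win = 0.90          # Clinton's assumed chance in every state
total_evs = 538       # whole Electoral College, glossing over the DC/ME/NE wrinkles

# Expected number of states Clinton loses: 50 * 0.10 = 5
expected_losses = n_states * (1 - p_win)

# Expected Trump electoral votes, by linearity of expectation:
# sum over states of (state EVs * 0.10) = 0.10 * 538 = 53.8
expected_trump_evs = total_evs * (1 - p_win)

print(expected_losses, expected_trump_evs)
```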
The problem is, you don't know which 5 states she would end up losing, or whether it would be 4, or 6. The chance that she wins all 50 is only about 0.5% (0.9^50). It could be that the 5 states she loses are the 5 largest in electoral votes, in which case Trump would get a lot more than 50 votes. Alternatively, maybe it's the 5 smallest states, in which case he might get 20.
Now, in this flat-90% scenario, every set of 5 states is equally likely to be the set she loses. In the real case, where the probabilities vary from state to state, the exact math is really hard. So in that situation, it's probably a lot easier to run 10 million simulations using the per-state probabilities and look at the distribution of outcomes. If I had the probabilities, I could do it easily.
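That simulation is easy to sketch. The probabilities below are the flat 90% from the scenario above, and the electoral-vote counts are invented for illustration; they sum to 535, the real 50-state total with DC's 3 votes dropped, so the expected Trump total here is 53.5 rather than 53.8. A real run would plug in each state's actual EVs and modeled probability:

```python
import random

random.seed(42)

# Hypothetical inputs: 50 states, each giving Clinton a 90% win chance.
# The EV counts are made up but sum to 535 (the 50 states without DC).
probs = [0.90] * 50
evs = [3]*12 + [6]*12 + [9]*10 + [13]*8 + [20]*4 + [29, 31, 38, 55]
assert len(evs) == 50 and sum(evs) == 535

def simulate_once():
    """One simulated election: Trump wins each state with prob 1 - p."""
    return sum(ev for p, ev in zip(probs, evs) if random.random() >= p)

n_sims = 100_000
trump_evs = [simulate_once() for _ in range(n_sims)]

avg = sum(trump_evs) / n_sims          # should land near 0.10 * 535 = 53.5
sweep = trump_evs.count(0) / n_sims    # Clinton wins everything: ~0.9**50, about 0.5%
print(f"average Trump EV: {avg:.1f}")
print(f"Clinton sweeps all 50: {sweep:.2%}")
```

The histogram of `trump_evs` is the real payoff: it shows the spread of outcomes (big-state losses vs. small-state losses) that a single expected-value number hides.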
I will say, however, that I think Silver underestimates his per-state probabilities, at least recently. In fact, his claim that he correctly called 99 of 100 states over the last two elections would suggest that. Unless his probabilities are up in the 99% range, he should be getting a lot more wrong than he is. As I pointed out above, if all the states have a 90% probability, the odds of getting them all right are only about 0.5%.
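You can make that calibration argument precise with the binomial distribution. If each of 100 calls really were a 90% proposition, 99-or-more correct would be a near-miracle; push the per-state probability to 99% and it becomes the expected outcome:

```python
from math import comb

def p_at_least(k, n, p):
    """P(at least k correct calls out of n, each correct with probability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# How likely is a 99/100 record under different per-state probabilities?
for p in (0.90, 0.95, 0.99):
    print(f"p = {p:.2f}: P(>= 99/100 correct) = {p_at_least(99, 100, p):.4%}")
```

At p = 0.90 the chance of 99+/100 is on the order of 0.03%, so a 99/100 track record is strong evidence the true per-state probabilities were much closer to certainty than 90%.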
I pointed this out after the last election. Silver's model isn't calibrated as well as he asserts, because if his stated probabilities were right, he'd be getting a lot more calls wrong. If that makes any sense.