Ziggurat
Just to be certain, I assume that last expression should be dS2/dE1, the omitted d being an oversight.
Yes, that was a typo, it should indeed be dS2/dE1.
Thanks to your help and Tim Thompson's, I think I'm getting somewhere. Question: How should I interpret the above? Why do we "want" to maximize entropy? Is it because over time it will "seek a maximum"?
Yes.
The fundamental postulate of statistical mechanics is that for an isolated system at equilibrium, every accessible microstate is equally likely. That means that for a system composed of two subsystems, every energy distribution between the two subsystems is in principle possible. But some energy distributions have relatively few corresponding microstates, and some have very many. For a macroscopic system, the number of accessible microstates as a function of energy distribution will be VERY sharply peaked. The numbers involved are just ginormous, but the relative width of the peak is really, really, REALLY small (roughly speaking, the relative width will scale as N^(-1/2), where N is the number of particles). For (say) 10^20 particles, then, the relative width of this peak is around 10^-10. So even though it's possible to end up far from the peak of your distribution, in practice it just doesn't happen, because the probabilities are too small.
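To make the "sharply peaked" claim concrete, here's a quick numerical sketch. It's my own toy illustration (not from the thread): the standard two-Einstein-solids model, where each solid has N oscillators and the pair shares q energy quanta, with one-solid multiplicity Omega(N, q) = C(q + N - 1, q). Everything is done in log space to avoid overflow.

```python
from math import lgamma, exp, sqrt

def log_omega(n_osc: int, q: int) -> float:
    """ln of the Einstein-solid multiplicity C(q + n_osc - 1, q)."""
    return lgamma(q + n_osc) - lgamma(q + 1) - lgamma(n_osc)

def relative_peak_width(n_osc: int, q_total: int) -> float:
    """Std dev of q_A when every microstate is equally likely, divided by q_total."""
    log_w = [log_omega(n_osc, qa) + log_omega(n_osc, q_total - qa)
             for qa in range(q_total + 1)]
    log_max = max(log_w)                 # normalize in log space before exponentiating
    w = [exp(lw - log_max) for lw in log_w]
    z = sum(w)
    mean = sum(qa * wi for qa, wi in enumerate(w)) / z
    var = sum((qa - mean) ** 2 * wi for qa, wi in enumerate(w)) / z
    return sqrt(var) / q_total

# Quadrupling N should roughly halve the relative width (the N^(-1/2) scaling):
for n in (100, 400, 1600):
    print(n, relative_peak_width(n, n))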
Now, in the above discussion I talked about the number of accessible states, not the entropy (which is, aside from a factor of k, equal to the natural log of the number of accessible states). The reason we use entropy rather than the number of accessible states directly is simplicity. For a system made up of two subsystems, the total number of accessible states is the product of the numbers of accessible states for each subsystem, but the total entropy is the sum of the entropies of the two subsystems. That makes entropy easier to work with. And if you take the calculus approach to finding the maximum of the number of accessible states as a function of the energy distribution, the condition you get (the dS2/dE1 expression discussed above) is naturally expressed in terms of derivatives of the logarithm of that quantity. So we work with entropy because it's simpler, but we're still just operating off the fundamental postulate, along with the knowledge that our accessible states (and thus our macrostate probabilities) are VERY sharply peaked.
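A short numerical check of both points, reusing the same toy Einstein-solid helper from the sketch above (again my own example, with made-up sizes N and Q): the total entropy of a split is a sum because the log turns the product Omega_A * Omega_B into ln(Omega_A) + ln(Omega_B), and at the most probable split the discrete slopes dS_A/dE_A and dS_B/dE_B come out equal.

```python
from math import lgamma

def log_omega(n_osc: int, q: int) -> float:
    """ln of the Einstein-solid multiplicity (same helper as above)."""
    return lgamma(q + n_osc) - lgamma(q + 1) - lgamma(n_osc)

N, Q = 500, 1000   # two solids of N oscillators each, sharing Q quanta in total

# Total entropy (in units of k) for each split q_A: a SUM of subsystem entropies.
total_s = [log_omega(N, qa) + log_omega(N, Q - qa) for qa in range(Q + 1)]
qa_star = max(range(Q + 1), key=total_s.__getitem__)

# Discrete analogue of the equilibrium condition dS_A/dE_A = dS_B/dE_B:
slope_a = log_omega(N, qa_star + 1) - log_omega(N, qa_star)
slope_b = log_omega(N, Q - qa_star + 1) - log_omega(N, Q - qa_star)
print(qa_star, slope_a, slope_b)   # expect qa_star = Q/2 and equal slopes
```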
Over time, we can expect to drift around randomly. If we're at or near equilibrium, the probability of staying near equilibrium is really high, because there are so many more microstates there. If we're off equilibrium by any significant amount (and "significant" is, per the above, quite small), then there are so many more microstates available if we move towards the equilibrium energy distribution than if we move away from it that, even with individual microstate transitions being completely random, the probability that we'll drift towards equilibrium over time is huge. REALLY huge.
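You can watch exactly that drift in the classic Ehrenfest "dog-flea" model (again just an illustrative toy of mine, not something from the thread): N fleas sit on two dogs, and at each step one flea, chosen uniformly at random, jumps to the other dog. The microstate dynamics are completely random, yet the count on dog A climbs towards N/2 and then just jitters tightly around it, because there are vastly more microstates near the even split.

```python
import random

def ehrenfest(n_fleas: int = 10_000, steps: int = 100_000) -> list:
    """N 'fleas' on two 'dogs'; each step one randomly chosen flea jumps across."""
    on_a = 0                 # start as far from equilibrium as possible
    trace = []
    for t in range(steps):
        # The chosen flea is on dog A with probability on_a / n_fleas,
        # in which case it jumps A -> B; otherwise it jumps B -> A.
        if random.random() < on_a / n_fleas:
            on_a -= 1
        else:
            on_a += 1
        if t % 10_000 == 0:
            trace.append(on_a)
    return trace

random.seed(0)
print(ehrenfest())   # climbs towards ~5000, then fluctuates narrowly around it
```

Note that nothing in the update rule "knows" about equilibrium; the drift towards the even split falls out of pure counting, which is the whole point of the paragraph above.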