Good, we're getting somewhere. Now, how do we tell whether a set of election results falls into the category that Benford's Law can be applied to, or into the category that it cannot? These things are never a black-and-white boundary, of course; it's trivially easy to construct an example where the polling numbers just barely span an order of magnitude but, because the distribution is weighted towards the centre, there are relatively few values at either end. If those ends happen to fall on values with leading digits of, say, 1 and 9 (counts running from roughly 150 up to 950), we'll see a gross deviation from Benford's Law, because there will be fewer 1's, 2's and 9's than expected.
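A quick simulation makes this concrete. The numbers here are purely hypothetical, chosen to match the sketch above: centre-weighted "polling numbers" confined to roughly 150-950, compared digit by digit against Benford's expected proportions.

```python
import math
import random
from collections import Counter

random.seed(1)

# Benford's Law: expected proportion of leading digit d is log10(1 + 1/d).
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# Hypothetical centre-weighted counts that barely span an order of magnitude:
# normally distributed around 550, truncated to the range [150, 950].
samples = []
while len(samples) < 10_000:
    x = random.gauss(550, 150)
    if 150 <= x <= 950:
        samples.append(int(x))

counts = Counter(int(str(n)[0]) for n in samples)
for d in range(1, 10):
    observed = counts[d] / len(samples)
    print(f"digit {d}: observed {observed:.3f} vs Benford {benford[d]:.3f}")
```

With a distribution like this, leading digits 1 and 9 only occur in the thinly populated tails (150-199 and 900-950), so 1's, 2's and 9's come out far below Benford's predictions while the mid digits dominate; nothing was faked, the range was simply unsuitable.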
Are you starting to see the problem? Benford's Law can indicate the possibility of faked data, by flagging a distribution that deviates from the expected one, if and only if the data range is suitable; and the symptom of an unsuitable data range is precisely that the distribution deviates from Benford's Law.
Now go back to the original claim: there's one example of Biden data that's flagged as suspect. The main visible deviation is that the proportion of 1's and 2's in the distribution is much lower than expected; but this is exactly what a data range too small for Benford's Law to apply would produce. Meanwhile, some of the Trump data shown deviates just as much from Benford's Law, but there it shows up as an over-representation of 1's, which is what you'd expect from a small data range centred on a value with a leading digit of 1.
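The Trump-side symptom is just as easy to reproduce. Again the numbers are hypothetical: a narrow, centre-weighted range of precinct-sized counts around 1500, i.e. centred on a value whose leading digit is 1.

```python
import math
import random

random.seed(2)

# Benford predicts a leading 1 about 30.1% of the time.
benford_one = math.log10(2)

# Hypothetical narrow range centred on a leading-digit-1 value:
# counts normally distributed around 1500 with a modest spread.
samples = [int(random.gauss(1500, 250)) for _ in range(10_000)]
share_of_ones = sum(1 for n in samples if str(n)[0] == "1") / len(samples)
print(f"leading 1's: {share_of_ones:.1%} vs Benford's {benford_one:.1%}")
```

Almost all of the mass falls between 1000 and 1999, so the leading digit is 1 far more often than the ~30% Benford predicts; an honest but narrow distribution produces exactly the "excess 1's" pattern.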
So we've established that some data ranges cannot be examined using Benford's Law; we've observed that such ranges produce exactly the sort of deviations seen in the data presented; and the claim doesn't give enough information to determine whether the deviations are due to a data range that's too small. Overall, that makes the claim practically worthless in the absence of corroborating information.
Dave