I say this because, to the extent that you can run the operations out of order at all, it will only be during concurrent portions of the algorithm anyway. For serial operations, the system has no choice but to pause and wait for the required results before proceeding.
As I understand the concept of a desk check, I don't believe this reasoning applies. If we have gates A'1 and A'2, with inputs (11) and (01) respectively, feeding into gate A'3 in the next cycle, then A'3 is going to get inputs (01). We can calculate (01)->1 for A'2, then (01)->1 for A'3, then (11)->0 for A'1.
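To make this concrete, here is a minimal sketch of that situation, assuming the A' gates are NAND gates (which matches the truth values above) and that each gate's inputs were recorded when A' ran; the gate names and the NAND assumption are mine, not from the original example.

```python
# A minimal sketch: NAND gates desk-checked out of order from recorded inputs.
def nand(a, b):
    return 0 if (a, b) == (1, 1) else 1

# Inputs each gate received during the run (cycle 1 feeds cycle 2).
recorded_inputs = {
    "A'1": (1, 1),   # (11) -> 0
    "A'2": (0, 1),   # (01) -> 1
    "A'3": (0, 1),   # the outputs of A'1 and A'2 from the previous cycle
}

# Desk-check the gates in any order we like; the recorded inputs already
# determine what each gate should have produced.
for gate in ("A'2", "A'3", "A'1"):   # deliberately out of order
    print(gate, nand(*recorded_inputs[gate]))
# Prints 1 for A'2, 1 for A'3, 0 for A'1, regardless of the order we check in.
```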
Suppose, for some odd reason, a mistake in the A' algorithm showed up and the A'2 gate emitted 0. Then we'd simply do something different, is all; it might even change the order of our checks (in this case it would). We use N to compute A'3 first as (00)->1, and that would look right. Then A'2 as (01)->1, and that would look wrong, and we'd be done with our desk check, concluding that the A'2 gate's calculation messed up. This means that our A'3 calculation computed what it did, yet not what it was supposed to, but so what? We found the error; it failed the desk check. We go and fix A'2 and rerun it.
So no, we don't have to wait for A'2 to complete in order to run A'3 in a post hoc desk check. We simply assume the entire thing ran smoothly, and try to prove the assumption false.
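Here is a sketch of that check on the faulty run, again assuming NAND gates and a recorded trace of what each gate actually emitted; the trace and the names are hypothetical.

```python
def nand(a, b):
    return 0 if (a, b) == (1, 1) else 1

# Trace from the faulty run: A'2 wrongly emitted 0, so A'3 actually saw (00).
trace = {
    "A'1": {"inputs": (1, 1), "emitted": 0},
    "A'2": {"inputs": (0, 1), "emitted": 0},   # should have been 1
    "A'3": {"inputs": (0, 0), "emitted": 1},
}

# Assume the run was correct and try to prove that assumption false,
# checking the gates in whatever order we like.
for gate in ("A'3", "A'2", "A'1"):
    expected = nand(*trace[gate]["inputs"])
    if expected != trace[gate]["emitted"]:
        print(f"{gate} failed the desk check: emitted "
              f"{trace[gate]['emitted']}, expected {expected}")
        break
# A'3 passes (its output matches the inputs it actually received), then A'2
# fails, and the check stops there: the error is found without running in order.
```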
If we were doing a test in the blind, however, we wouldn't have this information, so we'd have to run the whole thing in order. But I don't think we're doing that. I think all we're doing is figuring out whether A' ran correctly, and I think we already know everything A' did. But the case to consider is the one in which it did happen to run correctly, and we're running N out of order.
So your example is a little misleading, because there will be times when you simply can't run the calculations out of order without changing the results of the algorithm. Accepting that, the idea isn't as crazy as it sounded at first.
Don't you have the same problem computing entire time slices backwards? You have to know what the inputs were to slice 2000 in order to simulate that at all. That depends on what happened in 1999. I can't see a way to reasonably interpret doing a desk check backwards without having the same concerns about serial versus concurrent processing.
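For what I mean about the dependency, here is a tiny sketch; the transition rule and the states in it are made up purely for illustration.

```python
# Each time slice is computed from the previous one, so simulating slice 2000
# "in the blind" requires slice 1999 first; only a post hoc check against a
# recorded trace is free to look at the slices in any order.
def step(state):
    # Hypothetical transition rule standing in for one cycle of the machine.
    return tuple(1 - bit for bit in state)

recorded = {1999: (0, 1, 1), 2000: (1, 0, 0)}   # assumed trace from the run

# Post hoc: checking slice 2000 is easy, because the trace tells us slice 1999.
assert step(recorded[1999]) == recorded[2000]

# Blind: there is no way to call step() for slice 2000 without already having
# slice 1999 in hand, so a blind run has to proceed in order.
```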