rocketdodger
As I understand the concept of a desk check, I don't believe this reasoning applies. If gates A'1 and A'2 have inputs (11) and (01) respectively, and both feed into gate A'3 in the next cycle, then A'3 is going to get inputs (01). We can calculate (01)->1 for A'2, then (01)->1 for A'3, then (11)->0 for A'1.
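Here's a minimal sketch of that, assuming the gates are NAND gates (which fits the (11)->0 and (01)->1 values above) and a hypothetical recorded trace; the point is just that each gate is checked against its own recorded inputs, so the order of checking is arbitrary:

# Desk-check a recorded run, gate by gate, in an arbitrary order.
# NAND gates and the trace structure are assumptions, not from the post.

def nand(a, b):
    return 0 if (a and b) else 1

# Recorded inputs and outputs from the A' run.
trace = {
    "A'1": {"in": (1, 1), "out": 0},
    "A'2": {"in": (0, 1), "out": 1},
    "A'3": {"in": (0, 1), "out": 1},  # fed by A'1 and A'2
}

# Check A'2, then A'3, then A'1; no need to finish A'1's check first.
for gate in ["A'2", "A'3", "A'1"]:
    expected = nand(*trace[gate]["in"])
    print(gate, "OK" if expected == trace[gate]["out"] else "MISMATCH")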
Suppose for some odd reason a mistake in the A' algorithm showed up and the A'2 gate emitted 0. Then we'd just do something different, is all; it might even change the order of our checks (in this case it would). We'd use N to compute A'3 first as (00)->1, and that would look right. Then A'2 as (01)->1, and that would look wrong, and we'd be done with our desk check, concluding that the A'2 gate's calculation messed up. That means our A'3 calculation computed what it did, yet not what it was supposed to, but so what? We found the error: it failed the desk check. We go fix A'2 and rerun it.
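A sketch of that second scenario (same hypothetical NAND setup as above, with A'2's recorded output corrupted to 0, which also changes the inputs A'3 received):

def nand(a, b):
    return 0 if (a and b) else 1

# Hypothetical faulty trace: A'2 wrongly emitted 0, so A'3
# received (00) and correctly computed 1 from those wrong inputs.
faulty = {
    "A'1": {"in": (1, 1), "out": 0},
    "A'2": {"in": (0, 1), "out": 0},  # the error
    "A'3": {"in": (0, 0), "out": 1},  # right answer to wrong inputs
}

# Reordered check: A'3 first, then A'2.
for gate in ["A'3", "A'2"]:
    if nand(*faulty[gate]["in"]) != faulty[gate]["out"]:
        print(gate, "failed the desk check")  # fires for A'2
        break
    print(gate, "looks right")  # fires for A'3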
So no, in a post hoc desk check we don't have to wait for A'2 to complete before checking A'3. We simply assume the entire thing ran smoothly and try to prove that assumption false.
If we were doing a test in the blind, however, we wouldn't have this information, so we'd have to run the whole thing in order. But we're not doing that, I don't think. I think all we're doing is figuring out whether A' ran correctly, and we already know everything A' did. But the case to consider is the one in which it did happen to run correctly, and we're running N out of order.
Don't you have the same problem computing entire time slices backwards? You have to know what the inputs were to slice 2000 in order to simulate that at all. That depends on what happened in 1999. I can't see a way to reasonably interpret doing a desk check backwards without having the same concerns about serial versus concurrent processing.
Yeah, but I disagree with Pixy about the backwards thing too, so my argument still stands. See my post on that.
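For what it's worth, the same recorded-trace point seems to carry over to whole time slices: given the stored state of every slice, each slice can be checked against its predecessor independently, so the checks can run in any order, including backwards. A minimal sketch, with a made-up step function and trace standing in for the real simulation:

# Verify recorded time slices in reverse order.
# step() and the trace contents are hypothetical stand-ins.

def step(state):
    return [1 - b for b in state]  # toy transition: flip every bit

# Build a hypothetical recorded trace: slice t = step(slice t-1).
trace = [[0, 1, 1]]
for _ in range(4):
    trace.append(step(trace[-1]))

# Desk-check backwards: slice 4 against 3, then 3 against 2, ...
for t in range(len(trace) - 1, 0, -1):
    assert step(trace[t - 1]) == trace[t], f"slice {t} fails the check"
    print(f"slice {t} checks out against slice {t - 1}")

Checking slice 2000 only needs the recorded state of slice 1999, not a live rerun of everything before it.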