That's not necessarily good design. Such a distributed system has to be designed very robustly to account for communications failures.
By robust design I mean that, when transactions are sent from the terminal to the central server, there needs to be a protocol in place so that the terminal knows when a transaction has been successfully recorded by the central server, and also a means of identifying transactions that have not been successfully sent.
You could have a sequence like this:
- terminal sends transaction
- server records transaction
- server sends acknowledgement of recording
- terminal marks transaction as sent
If communications fail after the server has recorded the transaction but before the terminal has received the acknowledgement, the records at the terminal and the server will be left in an inconsistent state. There has to be a way to bring them back into consistency, and you can't just send the transaction again, because if the server can't identify it as a transaction it has already received, that could result in a double post.
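The usual fix is for the terminal to attach a unique ID to each transaction, so the server can recognise a retry and record it only once. Here is a minimal sketch of that idea; all the names (`record_transaction`, `terminal_send`, the `txns` table) are my own illustrations, not anything from the actual Horizon system:

```python
import sqlite3

def make_server():
    # Stand-in for the central server's store; txn_id is the
    # client-generated unique ID that makes retries safe.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE txns (txn_id TEXT PRIMARY KEY, amount INTEGER)")
    return db

def record_transaction(db, txn_id, amount):
    """Server side: record a transaction and return an acknowledgement.

    INSERT OR IGNORE makes the call idempotent: a duplicate txn_id is
    skipped rather than posted a second time."""
    db.execute(
        "INSERT OR IGNORE INTO txns (txn_id, amount) VALUES (?, ?)",
        (txn_id, amount),
    )
    db.commit()
    return {"txn_id": txn_id, "status": "recorded"}

def terminal_send(db, txn_id, amount, retries=3):
    """Terminal side: resend until acknowledged, then mark as sent."""
    for _ in range(retries):
        ack = record_transaction(db, txn_id, amount)  # may be a retry
        if ack["status"] == "recorded":
            return True  # now safe to mark the local copy as sent
    return False

db = make_server()
terminal_send(db, "T1-0001", 500)
terminal_send(db, "T1-0001", 500)  # lost ack, retried: no double post
print(db.execute("SELECT SUM(amount) FROM txns").fetchone()[0])  # 500
```

The point of the design is that resending after a lost acknowledgement is always safe, so the terminal can retry until the two sides agree, rather than guessing whether the first attempt got through.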
You may say it is relatively trivial to fix that, and it is, but back in the late '90s the world was not as connected as it is now, and it is quite possible the engineers didn't think of it.
I think you can assume we all know the basic facts of the case and you don't need to repeat them.
I don't necessarily agree.
There are many reasons why there might be a shortfall, including dishonesty or genuine mistakes, but as soon as it was clear that a pattern was emerging - which might be as simple as several sub-postmasters reporting similar problems - Horizon should have been investigated.