I have a quibble, though: Kay says “There are no 99 per cent probabilities in the real world”. Clearly, there are. That doesn’t mean, though, that you or I know what they are.
The real point is that at very low probabilities, the chance that your model is wrong dwarfs the probabilities you’re predicting. If you model a probability as 20%, but there’s a 2% chance that your model is significantly wrong, the true probability is somewhere around 20±2%. That’s useful to know. But if you model a probability as 0.2%, that doesn’t magically make your chance of having got the model wrong a hundred times smaller. What you really have is a probability of 0.2±2%. It might as well be 1±2% or 0.000001±2% — the question of how sure you are about your model matters far more than whether the model says 1% or 0.1%.
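The arithmetic above can be made concrete with a minimal sketch. It treats the situation as a mixture: with some probability the model is right and its estimate applies; otherwise the model is wrong and (a loud assumption, not from the original) we know nothing, so the event probability averages to 0.5. The function name and the uniform-ignorance fallback are both illustrative choices, not anything Kay proposes:

```python
def effective_probability(p_model, p_wrong, p_if_wrong=0.5):
    """Blend a model's estimate with the chance the model itself is wrong.

    With probability (1 - p_wrong) the model is right and the event has
    probability p_model; with probability p_wrong the model is wrong and
    we assume total ignorance, averaging to p_if_wrong (an assumption).
    """
    return (1 - p_wrong) * p_model + p_wrong * p_if_wrong

# A 20% estimate survives a 2% model-error chance almost intact:
print(effective_probability(0.20, 0.02))   # close to the original 20%
# A 0.2% estimate is swamped by the same 2% model-error chance:
print(effective_probability(0.002, 0.02))  # several times the original estimate
```

The second result is dominated entirely by the model-error term, which is the point: once the modeled probability drops well below your uncertainty about the model, the model's exact output stops carrying information.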