Consider a proof consisting of n steps.  Suppose that it is possible to check the correctness of each of these steps, but that n is so large that it is impractical to check all of them.  Instead, start checking steps chosen at random (without replacement) and stop if you find an incorrect step (in which case the theorem is not proved) or if you reach m correct steps.  What can you conclude in the latter eventuality?

 

     Let C denote the event that the proof is correct and let A denote the event that you have found m randomly selected steps correct.  By Bayes’s Theorem:

    $$P(C \mid A) = \frac{P(A \mid C)\,P(C)}{P(A \mid C)\,P(C) + P(A \mid \bar{C})\,P(\bar{C})}$$

where $P(C \mid A)$ is the conditional (or posterior) probability of C given A; $P(A \mid C)$ is the conditional probability of A given C; $P(C)$ is the unconditional (or prior) probability of C; and $P(A \mid \bar{C})$ is the conditional probability of A given the complement $\bar{C}$ of C.  Since every step of a correct proof is correct, $P(A \mid C) = 1$, and the posterior probability simplifies to:

    $$P(C \mid A) = \frac{P(C)}{P(C) + P(A \mid \bar{C})\,(1 - P(C))}$$

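For illustration only (the prior value here is an assumption of mine, not part of the argument): with $P(C) = 0.5$ and $P(A \mid \bar{C}) = 0.05$, the posterior would be

    $$P(C \mid A) = \frac{0.5}{0.5 + 0.05 \times 0.5} = \frac{0.5}{0.525} \approx 0.95.$$
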
     A low Bayesian trick is to eliminate the prior probability by considering the so-called Bayes factor, defined as the ratio of the prior to posterior odds in favor of C, which in this case (because $P(A \mid C) = 1$) reduces to $P(A \mid \bar{C})$.  Let the random variable K be the number of incorrect steps in the proof given that the proof is incorrect.  Then:

    $$P(A \mid \bar{C}) = \sum_{k=1}^{n} P(K = k \mid \bar{C})\,\frac{\binom{n-k}{m}}{\binom{n}{m}},$$

since, if k of the n steps are incorrect, the probability that the m randomly chosen steps are all correct is $\binom{n-k}{m}\big/\binom{n}{m}$.

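As a sketch of this sum in code (Python; the function name and the dict representation of the distribution of K are my own choices, not anything in the original):

from math import comb

def p_A_given_not_C(n, m, pmf_K):
    # Probability that m randomly checked steps are all correct, given that
    # the proof is incorrect and that the number of incorrect steps K has
    # distribution pmf_K = {k: P(K = k | not C)}.  If k of the n steps are
    # incorrect, all m checked steps are correct with probability
    # C(n-k, m) / C(n, m) (sampling without replacement); math.comb returns
    # 0 whenever n - k < m.
    return sum(p * comb(n - k, m) / comb(n, m) for k, p in pmf_K.items())

For example, p_A_given_not_C(100, 95, {1: 1.0}) returns 0.05, which is Case 1 below with n = 100 and m = 95.
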
     Consider two simple cases.  In Case 1:

    $$P(K = 1 \mid \bar{C}) = 1.$$

That is, conditional on the proof being incorrect, there is only a single error.  In this case, $P(A \mid \bar{C}) = \binom{n-1}{m}\big/\binom{n}{m} = (n - m)/n$, so that, for example, to achieve a Bayes factor of 0.05 (i.e., posterior odds 20 times prior odds) for a proof of n = 100 steps, you have to sample 95 of the steps.  Not much help.  In Case 2:

    $$P(K = k \mid \bar{C}) = \frac{1}{n}, \qquad k = 1, \dots, n.$$

That is, all possible numbers of incorrect steps are equally likely.  My guess is that there is a simple expression for $P(A \mid \bar{C})$, but I came up against a Limit to Laziness in trying to find it, so I resorted to computation.  Here is the result for n = 100:

 

    m      $P(A \mid \bar{C})$
    10     0.082
    20     0.038
    30     0.023
    40     0.015
    50     0.010
    60     0.007
    70     0.004
    80     0.002
    90     0.001

 

In this case, to achieve a Bayes factor of 0.05, you only have to check that around 15 randomly chosen steps are correct. 
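
Here, for what it is worth, is a minimal sketch (Python again, names mine) of the computation behind the table, plus a search for the smallest m that reaches a Bayes factor of 0.05:

from math import comb

def bayes_factor_case2(n, m):
    # Case 2: K is uniform on {1, ..., n}, so
    # P(A | not C) = (1/n) * sum_{k=1}^{n} C(n-k, m) / C(n, m).
    return sum(comb(n - k, m) for k in range(1, n + 1)) / (n * comb(n, m))

n = 100
for m in range(10, 100, 10):
    print(m, round(bayes_factor_case2(n, m), 3))   # reproduces the table above

print(next(m for m in range(1, n) if bayes_factor_case2(n, m) <= 0.05))   # 16

Incidentally, the computed values agree with $(n - m)/\bigl(n(m + 1)\bigr)$, which (if I have the algebra right) follows from $\sum_{k=1}^{n} \binom{n-k}{m} = \binom{n}{m+1}$, so the simple expression does seem to exist.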

 

Result:  It is harder to check a correct proof by a good mathematician than one by an error-prone mathematician.