
Extensibility and LCF

As de Bruijn says, we want to use abbreviations for common patterns of inference. The patterns of inference that are common may, of course, depend on the particular piece of mathematics being formalized, so it's desirable to allow ordinary users to extend the inference system, while at the same time being confident that only correct inferences are made. LCF realizes this desire exactly. Arbitrary programs can be written to prove a fact, perhaps from several premisses; the only restriction is that, although terms and types may be built up, and theorems decomposed, quite arbitrarily, theorems can only be created by the primitive inference rules. It is not necessary that the program should prove some fact in a uniform way; it might, depending on the form of the fact to be proved, invoke one of several quite different algorithms. This is not always appreciated by outsiders, as one can see from the following assertion by [davis-schwartz]: 'an LCF tactical is limited to a fixed combination of existing rules of inference'. It is not, and therein lies the value of the LCF approach as compared with simple macro languages.
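
The essence of the arrangement can be conveyed in a few lines of ML. The following is a minimal sketch in OCaml, not the kernel of any actual system: the type thm is kept abstract behind a module signature, so user code may build terms and take theorems apart freely, but can create a value of type thm only by calling the exported primitive rules.

  module Kernel : sig
    type term = Var of string | Imp of term * term
    type thm                            (* abstract: no constructor exported *)
    val hyps : thm -> term list         (* theorems may be inspected freely  *)
    val concl : thm -> term
    val assume : term -> thm            (* p |- p                            *)
    val discharge : term -> thm -> thm  (* from A |- q, get A - {p} |- p implies q      *)
    val mp : thm -> thm -> thm          (* from A |- p implies q and B |- p, get A union B |- q *)
  end = struct
    type term = Var of string | Imp of term * term
    type thm = Thm of term list * term
    let hyps (Thm (h, _)) = h
    let concl (Thm (_, c)) = c
    let assume p = Thm ([p], p)
    let discharge p (Thm (h, q)) = Thm (List.filter (fun x -> x <> p) h, Imp (p, q))
    let mp (Thm (h1, pq)) (Thm (h2, p)) =
      match pq with
      | Imp (p', q) when p' = p -> Thm (h1 @ h2, q)
      | _ -> failwith "mp: conclusions do not match"
  end

  (* A derived rule is just an ML program; e.g. |- p implies p by ASSUME then DISCH. *)
  let imp_refl p = Kernel.(discharge p (assume p))

Any derived rule, however elaborate its internal analysis or search, ultimately bottoms out in calls to these exported primitives, which is exactly the guarantee described above.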

Just how practical is this idea of breaking all inferences down to primitives? At first sight, when one considers the enormous number of special-purpose algorithms and the likely cost of generating formal proofs, it looks completely impractical; [armando-building] opine that it 'turns out to be both unnatural and ineffective' (though apparently on general grounds rather than on the evidence of having tried). And indeed, not all LCF systems have stuck with the rigorous policy of decomposing all inference to the primitives. For example, Nuprl users have quite happily added extra proof procedures for linear arithmetic or tautologies when the need arises. However, these human factors do not provide hard evidence about feasibility. The users of the HOL system, at least, have ploughed their own furrow, implementing a variety of quite sophisticated proof techniques, all decomposing to primitive inferences. We are now able to step back and survey the lessons. There are two questions: whether it is a reasonably tractable task for the programmer, and whether the resulting program is acceptably efficient (the latter question is closely related to our previous discussion of feasibility). It has emerged that many HOL derived rules are feasible and practical. We can identify two clear reasons why the apparent difficulties are often not really serious.

First, complex patterns of inference can be encoded as single theorems, proved once and for all. These can then be instantiated for the case in hand rather cheaply. This idea has long been used by HOL experts; an early example is given by [melham-tydef]. A more recent, and more complex, example is the following theorem [harrison-hol93], justifying a step in a quantifier-elimination procedure for the reals; it states that a finite set (list) of polynomials are all strictly positive throughout an interval if and only if they are all positive at the midpoint of the interval and nonzero everywhere in it.

  |- forall l a b.
       a < b and
       (forall x. a < x and x < b implies FORALL (POS x) l)
     = a < b and
       FORALL (POS ((a + b) / 2)) l and
       not (exists x. a < x and x < b and EXISTS (ZERO x) l)
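
To apply such a lemma, one simply instantiates its outer universal quantifiers with the terms arising in the case at hand; the cost is a single substitution, not a re-run of the argument that proved the lemma. A minimal sketch in HOL-style ML follows, where interval_lemma is a hypothetical name standing for the theorem above and l_tm, a_tm, b_tm are whatever terms the procedure has constructed:

  (* SPECL strips the outer universal quantifiers and substitutes the given
     terms; interval_lemma is a hypothetical name for the pre-proved theorem *)
  let use_interval_lemma l_tm a_tm b_tm =
    SPECL [l_tm; a_tm; b_tm] interval_lemma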

Many special-purpose algorithms can be understood as a natural process of transformation on terms or formulas. It is not difficult to store theorems that justify the individual steps; the algorithm can often be implemented almost exactly as one would without the primitive inference steps, and generally with a modest linear slowdown. This applies, for example, to HOL's implementation of rewriting, as well as to numerous special-purpose rules that users write in the course of their work, e.g. the conversion to differentiate algebraic expressions by proof described by [harrison-fmsd94]. There are a few delicate points over the efficiency of equality testing and the use of imperative data structures, but these do not usually cause problems. More awkward are situations where a decomposition to primitive inferences may cause certain side-condition checks to be repeated unnecessarily. This is precisely analogous to the insertion of array bounds checks by some programming language compilers, even though they can be seen, by the global logic of the program, to be unnecessary. However, it is potentially more pernicious, in that it can fundamentally change the complexity of certain operations. More experience is needed to see whether this ever becomes a problem in practice.
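
The following toy OCaml sketch illustrates the idiom (it is not actual HOL code): a 'conversion' maps a term t to an equational theorem |- t = t', each step is an instance of a stored theorem, and steps are chained by transitivity, so the proof-producing version has essentially the same structure as an unverified evaluator. Here thm is a transparent record purely for brevity; in an LCF system it would be the kernel's abstract type, and refl, trans and the congruence step would be primitive or derived rules.

  type term = Num of int | Add of term * term
  type thm = { lhs : term; rhs : term }        (* stands for |- lhs = rhs *)
  type conv = term -> thm

  let refl t = { lhs = t; rhs = t }            (* |- t = t *)
  let trans th1 th2 =                          (* from |- t1 = t2 and |- t2 = t3, get |- t1 = t3 *)
    if th1.rhs = th2.lhs then { lhs = th1.lhs; rhs = th2.rhs }
    else failwith "trans"

  (* one justified step, an instance of a stored arithmetic theorem:
     |- Add (Num m) (Num n) = Num (m + n) *)
  let add_conv = function
    | Add (Num m, Num n) -> { lhs = Add (Num m, Num n); rhs = Num (m + n) }
    | _ -> failwith "add_conv"

  (* apply a conversion bottom-up, chaining the step theorems by transitivity;
     the traversal mirrors an ordinary evaluator, with a linear overhead *)
  let rec depth_conv (c : conv) (t : term) : thm =
    match t with
    | Num _ -> (try c t with Failure _ -> refl t)
    | Add (a, b) ->
        let tha = depth_conv c a and thb = depth_conv c b in
        let t' = Add (tha.rhs, thb.rhs) in
        let congruence = { lhs = t; rhs = t' } in   (* |- t = t' by congruence *)
        trans congruence (try c t' with Failure _ -> refl t')

For instance, depth_conv add_conv applied to the term Add (Add (Num 1, Num 2), Num 3) returns a theorem whose right-hand side is Num 6, every evaluation step having been justified by an equation.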

Second, many other proof procedures rely heavily on search. For example, automated theorem provers for first order logic often perform thousands, even millions, of steps that are purely devoted to finding a proof. Once the proof is found, it is usually short and can be translated into HOL inferences very easily, so proportionately the burden of translating to primitive inferences is practically nil; performance can match any other system written in the same language. There are now several tools for first order automation in HOL, the earliest being due to [kumar-faust]; they all work in this way. Actually, it is not even necessary to perform the proof search in the same language or on the same machine; [harrison-thery2] describe the use of the Maple computer algebra system as an oracle to find solutions to integration and factorization problems, which can then be checked fairly efficiently inside HOL. LCF-style checking of a result discovered by arbitrary other means forms an interesting contrast with computational reflection, a technique relying on code verification which we consider below. [blum-checking] has put the case for result checking rather than code verification in a more general context; an example in a rather different field, computer graphics, is given by [mehlhorn-checking]. It is not essential that the 'certificate' allowing the result to be formally checked should be merely the answer; in first order automation, for example, the proof itself is the certificate. Supporting this kind of checking may lead to more emphasis on algorithms that produce such certificates. For example, [pratt-prime] has shown how to give polynomially-checkable certificates of primality for natural numbers; it would be worth investigating similar methods for other symbolic algorithms.
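
To illustrate the certificate idea in isolation from any theorem prover, here is a minimal OCaml sketch of a checker for Pratt certificates. A certificate for a prime n consists of a witness a together with the certified prime factorization of n - 1; checking it requires only a few modular exponentiations (this is the Lucas primality test), even though finding the witness and the factorization may be expensive.

  type cert =
    | Two                                      (* base case: the prime 2 *)
    | Pratt of int * (int * int * cert) list   (* witness a; (q, e, cert for q), with n-1 = product of the q^e *)

  let rec powmod b e m =                       (* b^e mod m by repeated squaring *)
    if e = 0 then 1 mod m else
    let h = powmod b (e / 2) m in
    let h2 = h * h mod m in
    if e mod 2 = 0 then h2 else h2 * (b mod m) mod m

  let rec ipow b e = if e = 0 then 1 else b * ipow b (e - 1)

  (* n is prime if a^(n-1) = 1 (mod n) but a^((n-1)/q) <> 1 (mod n) for every
     prime q dividing n-1, with each q certified prime in turn. *)
  let rec check n c =
    match c with
    | Two -> n = 2
    | Pratt (a, fs) ->
        let m = n - 1 in
        List.fold_left (fun acc (q, e, _) -> acc * ipow q e) 1 fs = m
        && powmod a m n = 1
        && List.for_all (fun (q, _, cq) -> check q cq && powmod a (m / q) n <> 1) fs

  (* certificate that 7 is prime: 6 = 2 * 3, with witness 3 *)
  let cert7 = Pratt (3, [(2, 1, Two); (3, 1, Pratt (2, [(2, 1, Two)]))])
  let () = assert (check 7 cert7)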

The combination of these two factors is enough to make it plausible that any inference patterns likely to occur in normal mathematics can be performed with acceptable efficiency as an LCF-style derived rule. Even for the special algorithms often used in system verification, it is not inconceivable that an LCF implementation would be fast enough in practice; for example, [boulton-thesis] describes an implementation of a linear arithmetic decision procedure. An apparently difficult case is the BDD algorithm, but a HOL implementation has been produced by [harrison-cj95] which exhibits only a linear slowdown, albeit a large one (a factor of around 50). Moreover, a finding/checking separation does offer some hope here, since deciding on a variable ordering can be done outside the logic, and a good ordering can cut the runtime of the main algorithm by orders of magnitude.

What are the rules that we can, in principle, implement in the LCF fashion? Suppose that a rule whose instances are schematically of the form:

  Gamma_1 |- phi_1   ...   Gamma_n |- phi_n
  -----------------------------------------
                Gamma |- phi

has the property that whenever the hypotheses are provable, so is the conclusion. That is, the rule is conservative or 'admissible'. Assuming also that the rule is describable, i.e. recursively enumerable, it is possible to write an LCF derived rule that implements it: it need simply accept the argument theorems and begin an exhaustive proof search for the corresponding conclusion.

One might argue that this doesn't really implement the rule, since if one faked some non-theorems as hypotheses, the corresponding conclusion would not result, even though any rule with an unprovable hypothesis is (vacuously) admissible. Since in an LCF system one cannot derive such non-theorems, this objection is moot. Nevertheless, it is clear that few if any practical LCF derived rules are going to work in such a fashion. Instead they are going to use the form of the hypotheses to construct a proof in the formal system. So the rules that are likely to be implemented in practice in LCF are the (recursively enumerable and) derivable ones. A rule is said to be derivable if for any instance there is a deduction in the formal system of its conclusion from the hypotheses. Crudely, we may contrast admissible rules with derivable ones by saying that the latter do not rely on any special properties of the turnstile '|-', such as the inductive closure of the set of theorems.

The fact that the standard LCF approach cannot, in one sense (and, as we have argued, in a practical sense as well), implement arbitrary admissible rules has been pointed out by [pollack-extensibility]. However, as he also stresses, many admissible rules become directly implementable if one has not merely the form of the hypotheses, but also information about the way they were proved. Consider the Deduction Theorem for Hilbert-style proof systems, which allows passage from Gamma, p |- q to Gamma |- p implies q, and 'Skolemization' in certain systems of intuitionistic logic, which allows a step from forall x. exists y. P[x,y] to exists f. forall x. P[x,f(x)]. Both of these can be justified by a transformation on the proofs of the hypothesis, but apparently not easily from the form of the hypothesis alone.
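
To make the proof-transformation idea concrete, here is a small self-contained OCaml sketch for a Hilbert system with axiom schemes K: p implies (q implies p) and S: (p implies q implies r) implies ((p implies q) implies (p implies r)), with modus ponens as the only rule. Given a proof of q that may use the hypothesis p, deduce rebuilds it into a proof of p implies q that does not; the transformation consumes the proof itself, which is precisely the information that the bare statement of the hypothesis does not carry.

  type form = Atom of string | Imp of form * form

  type proof =
    | Hyp of form                   (* a hypothesis from Gamma, or p itself *)
    | AxK of form * form            (* p implies (q implies p) *)
    | AxS of form * form * form     (* (p implies q implies r) implies ((p implies q) implies (p implies r)) *)
    | MP of proof * proof           (* from proofs of p implies q and of p, a proof of q *)

  (* the formula established by a proof object *)
  let rec concl = function
    | Hyp f -> f
    | AxK (p, q) -> Imp (p, Imp (q, p))
    | AxS (p, q, r) -> Imp (Imp (p, Imp (q, r)), Imp (Imp (p, q), Imp (p, r)))
    | MP (pf, _) -> (match concl pf with Imp (_, q) -> q | _ -> failwith "MP")

  (* Deduction Theorem as a transformation on proofs: from a proof of q
     possibly using hypothesis p, build a proof of p implies q not using p. *)
  let rec deduce p pr =
    match pr with
    | Hyp f when f = p ->           (* p implies p, by the usual S/K derivation *)
        MP (MP (AxS (p, Imp (p, p), p), AxK (p, Imp (p, p))), AxK (p, p))
    | Hyp _ | AxK _ | AxS _ ->      (* conclusion does not mention p: use K *)
        MP (AxK (concl pr, p), pr)
    | MP (pf, pg) ->                (* distribute over modus ponens with S *)
        let f = concl pg and q = concl pr in
        MP (MP (AxS (p, f, q), deduce p pf), deduce p pg)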

It is not clear that there are any useful admissible rules that are not derivable for 'reasonable' logics (such as HOL). For example, we are aware of only one major theorem-proving system based on a Hilbert-style axiomatization, namely Metamath [megill-metamath], and it uses various tricks to avoid use of the Deduction Theorem where possible. If non-derivable rules are desired, it is of course possible to store proofs as concrete objects along with the theorems. Various constructive systems such as Coq and Nuprl are already capable of doing that, since extracting programs from proofs is one of the key ideas underlying them. Of course, there is a question mark over whether such manipulations on proofs are computationally practical.


John Harrison 96/8/13