5. Logic and Computation

In the next chapter I will put forward the idea that a revolution in philosophy arising from advances in logic has yet to come. Though in some fundamental respects the revolution I anticipate does not differ from the one which failed to take root in the first half of the twentieth century, its character is nevertheless quite different in detail.

The differences result in part from the kind of philosophical critique which I have attempted in describing Logical Atomism and Logical Positivism, but are more substantially attributable to a new dynamic in the development of logic which has arisen since the invention of the digital computer. In the first half of the twentieth century the epicentre of advances in logic moved from philosophy to mathematics; in the second half it moved to theoretical computer science. Steady growth of knowledge about potential and actual applications of symbolic logic, and continual improvements in the capabilities of digital hardware and software in supporting those applications, have transformed our understanding of the potential impact of logic on philosophy.

In this chapter I describe those developments in logic of the second half of the twentieth century which seem to me most significant for our present theme, developments almost exclusively associated with digital computers and computer science.

5.1 Proof

The revolution in logic began with the important changes in the notion of proof found in Frege's Begriffsschrift.

This was not a one-off change; the notion of proof has continued to evolve from that date to the present, first as a result of the further development of symbolic logic, and later catalysed by the increasing impact of digital computers on the practice of mathematics and by the work of those information technologists interested in the mechanisation of reason and mathematics.

5.1.1 Proof as Syntax

The first stage in this evolution, which began with Frege, is to clarify the formal character of mathematical proof. The purpose of proof is to provide a standard of demonstration which is independent of intuition, at least insofar as the validation of proofs is concerned. Though intuition may be essential in the discovery of a proof, once discovered, it should be possible to verify the correctness of a proof in a completely mechanical way.

What is meant here by a mechanical verification is that, given the rules and axioms of the logical system in which the proof is conducted, and the definitions upon which the proof depends, checking the correctness of the proof is simply a matter of syntactic manipulation which can in principle be carried out without any understanding of the subject matter of the proof.

This formal idea of mathematical proof quickly became entrenched in the mathematician's idea of what a proof is, even though in practice mathematicians (except in spectacular but rare demonstrations such as Principia Mathematica) delivered less formal demonstrations which were at best only in principle capable of translation into formal proofs. The formal notion of proof also became the basis for the new discipline of Mathematical Logic, which was primarily the meta-theory of formal proof and related topics.

The gap between theory and practice so far as the nature of proof was concerned later resulted in dissent among some philosophers of mathematics, or philosophically minded mathematicians, who disputed that the formal notion of proof could be taken as definitive of proof in mathematics. However, our interest here is mainly in that formal notion and its evolution, and does not assert or depend upon its being definitive of mathematical proof. We are in fact more interested in the use of these formal kinds of proof in the mechanisation of applications of mathematics in engineering, and the adoption by mathematicians in practice of this kind of proof is not of central concern.

The formal syntactic notion of proof evolved from Frege mainly in the direction of greater precision and rigour in the definition of the formal system. This was firstly for the purposes of reasoning about these logical systems as a part of mathematical logic. Later even greater formality was required for the mechanisation of these logical systems using computers. In this latter case not only were the rules of the logic to be specified with sufficient precision that a computer could perform the checking of proofs, but special notations were devised in which logics can be described in a manner sufficiently precise for use by computers.

5.1.2 Computability and Proof

One of the very important results coming from mathematical logic, intended to explore the fundamental question of whether the whole of mathematics was formalisable, was the idea of an effective method. The key result for which this notion was clarified was the existence of unsolvable problems, established independently by Alonzo Church and Alan Turing.

The interest of these developments for our present story lies in the concepts and vocabulary they provide, which enable us to grasp the essential character of formal proof and hence to arrive at a more abstract and general characterisation of proof. This characterisation, though it antedates the invention of the digital computer, is particularly relevant to the kind of liberalisation of the idea of formal proof which will prove advantageous when computers are used for mathematics.

An effective procedure for computing a function is what we would now think of as a computer program which always terminates with the value of the function (never loops or halts with an error condition). If the function delivers a boolean result (true or false) then it effectively defines a subset of the possible inputs to the function and is said to constitute a decision procedure for that subset (or for the property whose extension is that subset).
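
As a small illustration (my own example, written in OCaml, a descendant of the ML discussed later in this chapter), the following function always terminates with a boolean verdict, and so constitutes a decision procedure for the set of prime numbers:

    (* A decision procedure: a program which terminates on every input
       with true or false, and so decides membership of the set of prime
       numbers (the property whose extension is that set). *)
    let is_prime (n : int) : bool =
      let rec no_divisor d =
        d * d > n || (n mod d <> 0 && no_divisor (d + 1))
      in
      n >= 2 && no_divisor 2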

The requirement of formal proofs that their correctness be mechanically decidable can now be expressed more precisely, since we have an agreed notion of what ``mechanically'' means. The requirement becomes that correctness of proofs be effectively decidable.

The proof of some conjecture may now be considered as falling into two parts. The first is to find a proof, the second is to check the proof. The first is, in general, hard; the second is routine (but possibly tedious).

This opens the way to mechanising not only the checking of proofs, but also their discovery, for though finding a proof is hard, it can be made routine (but extremely tedious) if you are able to recognise one when you see it. The core requirement for formality of proof is just that you can recognise a proof when you see it. So for any formal system whose language is countable, the British Museum algorithm can be used to find proofs. This algorithm consists of enumerating the candidate proofs for the conjecture at hand and testing each one for correctness. Since the possible proofs are enumerable, and testing each one takes a finite time, this method will eventually find a proof if there is one. However, if there is no proof, it will continue forever, without ever realising that its search is futile.
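
A minimal OCaml sketch of this search may help (my illustration; the two parameters are placeholders standing for whatever enumeration of candidate proofs and whatever checking function a particular formal system supplies):

    (* British Museum search, as a sketch.  The parameters stand for
       whatever a particular formal system provides:
         candidate : enumerates all candidate proofs, indexed 0, 1, 2, ...
         checks    : decides whether a candidate is a correct proof
                     of the given formula                               *)
    type formula    (* sentences of the formal language (abstract here) *)
    type proof      (* candidate proof objects (abstract here)          *)

    let prove
        (candidate : int -> proof)
        (checks : proof -> formula -> bool)
        (conjecture : formula) : proof =
      (* Try candidates in order and return the first which checks.  If
         the conjecture is a theorem this terminates; if not, the search
         runs forever: a semi-decision procedure, not a decision
         procedure. *)
      let rec search n =
        let p = candidate n in
        if checks p conjecture then p else search (n + 1)
      in
      search 0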

This procedure can be used to test mechanically for theoremhood, working just from the conjecture, but because it fails to terminate when set to work on something which is not a theorem, it is called a semi-decision procedure rather than a decision procedure.

Now we have it that for a formal system, proofhood is decidable and theoremhood is semi-decidable.

Now let us pause to consider what has happened to our notions of logic and proof.

First let me note that we implicitly have a broad notion of what kind of language can be made into a formal logical system. Any language which has a countable set of formulae, some subset of which are deemed ``true'', is a candidate.

Second, we have a notion of proof which is decidable. A proof is a path from the axioms to some conclusion which uses only steps drawn from a decidable set of primitive inferences. For this reason checking a proof is mechanisable.

Thirdly, using the decision procedure for proofhood in a mindless algorithm, we can obtain a semi-decision procedure for theoremhood. This method is in practice useless, because it is severely suboptimal in its use of computational resources. But there will be many different algorithms which give the same results, some of which will be more efficient.

Now the only algorithm we have mentioned so far, the British Museum algorithm, is handicapped in two quite different ways. Firstly it uses, by hypothesis, a completely unintelligent search algorithm, and therefore may take a very long time to find a proof. Secondly, the space it searches is determined by some specific primitive logical system, defined typically by a collection of axioms and a number of inference rules. The choice of this inference system has a major impact on the length of proofs and on the difficulty of finding them.

This is particularly damaging if the required proof involves a great deal of calculation, which may not be so frequent in pure mathematics, but will be very much more frequent if, as we propose, proof technology is brought to bear on engineering problems using applied mathematics. This is because routine computations, when translated into formal proofs, become much more complex. Doing arithmetic computation by proof, in typical mathematical foundation systems, will consume many orders of magnitude more processor time than would a correct but efficient algorithm performing the computations in a more traditional manner.

Note then that any correct semi-decision procedure is as good as another from the point of view of telling us whether a conjecture is a theorem. A semi-decision procedure need not make use of a proof system at all, in the traditional sense of that term, but if it is a semi-decision procedure it will be reliable in its pronouncements about theoremhood.

We will find, then, that for some purposes liberalisation of the notion of ``proof'' to admit arbitrary semi-decision procedures is beneficial. A specific way in which this is now commonly done is by the use of ``oracles'', which are decision or semi-decision procedures for some special domain. Another way, which is rarer but particularly relevant to the engineering applications which will preoccupy us here, is the integration of compilation technology into a logical kernel, so that evaluation of expressions in our logical languages can be accomplished in proof with optimal efficiency. A third method which has been discussed is the use of reflection principles, which admit the introduction of new rules of inference which have been shown to be logically sound.

The kind of ``liberalisation'' of proof which we have considered here goes, at its extreme, well beyond what can be expected to be generally accepted. Two considerations may be separated. The first is the claim that the result of some arbitrary semi-decision procedure can be taken to constitute a proof. The second is the question, however the results may be described, of what utility there may be in implementing such procedures. My main interest is in exploring the latter question. If the reader's only objection to my thesis is that it misuses the term ``proof'', I shall be well satisfied.

Having pressed the idea of proof to (or beyond) its limit in a rather abstract way, I will now continue with some observations about the way the notion of proof has in practice been stretched by developments which have already taken place in software to support proof.

5.1.3 The LCF Paradigm

``The LCF paradigm'' is a way of building proof tools which has contributed to the evolution of the idea of proof. This method was created by Robin Milner and his associates at the University of Edinburgh for the purpose of building a tool for developing and checking proofs in Dana Scott's Logic for Computable Functions, LCF, a logic devised for use in the study of program verification.

Because strictly formal proofs turn out generally to be very long, it was thought necessary to have them checked by machine, and desirable to have the machine provide support for constructing the proofs. To provide an environment in which researchers could explore the ways in which computers could be programmed to construct (as well as check) proofs, a special language was devised for this purpose. Because the language was intended for manipulating sentences in the object logic it was named ML, which stood for metalanguage.

This language was to be used to write the proof checker, to write general facilities for proof construction, and also by the users of the proof tool to write algorithms for constructing the specific proofs on which they were working. This presented a special problem. If the user of the tool were able to write code in ML which augmented the capabilities of the theorem prover, how could we be sure that errors in his code would not cause errors in checking proofs?

The solution to this problem was ingenious, involved innovation in programming language design, and incidentally provided the extension to the idea of proof which is our present interest. The ML language was a strictly typed language, which means that the data manipulated by programs was classified into various kinds, and the compiler was capable of checking that any section of program was only ever applied to the kind of data for which it was intended. The language also provided for abstract data types to be defined by the user. When such a data type is defined, the user provides a definitive set of primitive methods for constructing and manipulating data of this type, and once these have been defined subsequent code is not permitted to construct this kind of data other than by invoking the primitive methods (or other methods which achieve their effects through the primitive methods).

A formal system is implemented following the ``LCF paradigm'' by defining an abstract data type whose values are the theorems of the formal system. The primitive methods of the abstract data type are made to correspond precisely to the axioms and inference rules of the formal system, so every computation of a theorem involves a computation which corresponds precisely to a correct proof in the formal system.

A conjecture is then considered to be ``proven'' if the corresponding theorem can be computed.
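
To make the idea concrete, here is a toy sketch in OCaml (a descendant of the original ML). The miniature logic, the names, and the particular axiom schemes are my own choices for illustration and do not correspond to LCF or to any real kernel; the essential point is that the type of theorems is abstract, so theorems can be computed only by the functions which stand for the rules of inference.

    (* A toy "LCF-style" kernel for a fragment of propositional logic.
       Outside the module Kernel the type thm is abstract, so the only
       way to obtain a value of type thm is through the functions below,
       each of which corresponds to an axiom scheme or inference rule. *)

    type form = Var of string | Imp of form * form

    module Kernel : sig
      type thm
      val concl : thm -> form
      (* axiom scheme K :  |- A -> (B -> A) *)
      val ax_k  : form -> form -> thm
      (* axiom scheme S :  |- (A -> (B -> C)) -> ((A -> B) -> (A -> C)) *)
      val ax_s  : form -> form -> form -> thm
      (* modus ponens : from  |- A -> B  and  |- A  infer  |- B *)
      val mp    : thm -> thm -> thm
    end = struct
      type thm = form                 (* a theorem is just its conclusion *)
      let concl t = t
      let ax_k a b   = Imp (a, Imp (b, a))
      let ax_s a b c = Imp (Imp (a, Imp (b, c)), Imp (Imp (a, b), Imp (a, c)))
      let mp ab a =
        match ab with
        | Imp (a', b) when a' = a -> b
        | _ -> failwith "mp: theorems do not match"
    end

    (* A "proof" of  |- A -> A  is then just a computation of a theorem: *)
    let a = Var "A"
    let identity_thm =
      Kernel.mp
        (Kernel.mp (Kernel.ax_s a (Imp (a, a)) a) (Kernel.ax_k a (Imp (a, a))))
        (Kernel.ax_k a a)

Every value of type Kernel.thm has been produced by some sequence of calls to ax_k, ax_s and mp, that is, by a computation which mirrors a Hilbert-style proof, even though no proof object is recorded.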

On the point of safety, because the compiler does not allow the definition of new methods of constructing theorems once the abstract data type has been defined, a user is not in a position to compromise the soundness of proof ``checking''. However, by comparison with the traditional notion of proof some bizarre things have happened.

Hitherto a proof has been a syntactic object which shows how the desired conclusion can be reached from the axioms using the inference rules. Establishing a theorem would involve constructing such a proof and then checking its soundness (though there is no reason why these could not be done together). In the LCF paradigm this syntactic object is never constructed (and hence never checked). In effect, at each stage in the proof the abstract data type is checking that the next result is obtainable by a primitive inference rule from the premises available, and, having checked this, it does not bother to record the details of the proof. There is really no proof object ever constructed and no completed proof ever checked, but the computation of theorems is constrained by the abstract data type in such a way that a theorem could not be computed if there were not a proof of it.

We have therefore moved here to the notion of proof as a certain kind of computation.

Since this computation is done by a program, and the text of this program is usually retained, this program script is sometimes considered to be the proof, and we have moved to the idea of proof as a certain kind of computer program.

At this stage we note, however, that the bottleneck on derivation imposed by the primitive logical system remains largely intact. In fact, because the computational costs would otherwise be prohibitive, the abstract data type provides not only the primitive inference rules but also a selection of derived rules which are implemented without recourse to the primitive rules; but this is only a small step towards widening the bottleneck, which is still there and still of major significance in slowing down proof.

5.1.4 Oracles

Proof tools implemented following the LCF paradigm are at one extreme of a spectrum. The LCF paradigm minimises the amount of code which need be trusted and gathers this into a logical kernel. Most other kinds of proof tool are implemented in a much more trusting way (partly because the code is produced only by the implementor of the proof tool, not by its users), and consequently may take larger proof steps and achieve higher levels of computational efficiency.

However, even for LCF provers, it is now widely accepted that wherever a decision procedure is known for some subclass of the prover's language, once coded up, the decision procedure can be used as if it were an extension to the logical kernel.

Typical subdomains for which an oracle might be implemented include propositional tautologies and linear arithmetic. Though propositional tautologies of moderate size can quickly be proven through the logical kernel, the use of proof technology in digital hardware design may involve the checking of very large tautologies, for which an oracle is essential.
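
As a sketch of how such an oracle might look (the names and the brute-force checker are my own illustration, not the interface of any particular prover), a tautology oracle is simply a decision procedure whose positive verdict is turned directly into a theorem, bypassing the primitive inference rules:

    (* Sketch of a tautology "oracle".  The truth-table check is a
       decision procedure, though a deliberately naive one; a realistic
       oracle would use SAT or BDD technology.  The point is that the
       theorem is manufactured from the checker's verdict, not derived
       through the primitive rules of a kernel. *)

    type form =
      | Var of string
      | Not of form
      | And of form * form
      | Or  of form * form
      | Imp of form * form

    module Oracle : sig
      type thm
      val concl : thm -> form
      val taut_oracle : form -> thm      (* fails if not a tautology *)
    end = struct
      type thm = form
      let concl t = t

      (* the variables occurring in a formula *)
      let rec vars = function
        | Var v -> [v]
        | Not a -> vars a
        | And (a, b) | Or (a, b) | Imp (a, b) -> vars a @ vars b

      (* evaluate under an assignment given as the list of true variables *)
      let rec eval env = function
        | Var v -> List.mem v env
        | Not a -> not (eval env a)
        | And (a, b) -> eval env a && eval env b
        | Or  (a, b) -> eval env a || eval env b
        | Imp (a, b) -> not (eval env a) || eval env b

      (* brute-force check over all assignments to the variables *)
      let taut f =
        let vs = List.sort_uniq compare (vars f) in
        let rec all env = function
          | [] -> eval env f
          | v :: rest -> all (v :: env) rest && all env rest
        in
        all [] vs

      let taut_oracle f =
        if taut f then f else failwith "taut_oracle: not a tautology"
    end

In an LCF-style prover the analogous entry point typically lives inside, or alongside, the logical kernel, and may tag the theorems it produces so that reliance on the oracle remains visible to the user.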

5.1.5 Compiler Integration

The development and evolution of digital computers and their applications during the second half of the twentieth century has provided enormous commercial impetus to the development of both theory and practical methods for dealing with formal notations of diverse kinds. The main role of such notations is in communication between man and machine, their more formal character simplifying the interpretation of the notation by the machine and, importantly, simplifying the human user's understanding of how the notation will be interpreted by the machine.

One very important kind of such notations is that of programming languages. The purpose of a programming language is to describe a computation to be performed by the computer. In this it performs a part of the role which a mathematical foundation system is required to perform.

The main differences, for our present purposes, between a programming language and a foundation system are as follows:

  1. The foundation system must be able to express general laws, such as the commutativity of arithmetic addition, which, when viewed as prescriptions for computation, involve an infinite number of computation steps (typically expressed using a universal quantifier).
  2. Programming languages have evolved in the light of experience of extensive use on large applications and are more suitable for describing large and complex algorithms, or algorithms which while not intrinsically complex are required to operate on very complex information structures.
  3. Programming languages are generally suitable for processing into efficient machine code which will perform the required computations using a minimum of computational resource.
  4. Foundation systems, while generally inefficient in the way they evaluate prescribed algorithms, are capable of ``evaluating'' some of the constructs involving universal quantifiers which, if viewed as algorithmic prescriptions, could not be evaluated using finite resources.

The ability of foundation systems to deliver answers to questions involving universal quantification is of commercial value. It allows expensive and extended simulation of digital hardware to be dispensed with, in some cases, in favour of formal demonstration of correctness claims involving implicit or explicit universal quantification. This kind of phenomenon motivates the introduction of proof-based methods into the engineering design process.

In the context of engineering design and verification the downside of foundation systems becomes more significant. It then becomes worth considering whether these two paradigms, compiler technology and proof technology, can be integrated.

This is still a research topic in its infancy. The aim is to have logical foundation systems which support linguistic pluralism by semantic embedding, into which efficient and practical programming languages are embedded, and which, in the course of proof, are capable of evaluating expressions or functions defined in these languages with the same efficiency as would a state-of-the-art compiler.

In order to achieve this, the character of the process, when considered as proof, is further eroded. To do such computations efficiently it would be necessary to trust the code generated by the compiler, and to retain no record of the computation (other than its results). To do this in a proof system following the LCF paradigm, the compiler and all the code it generates would have to be considered as part of the logical kernel, or as a very broadly scoped ``oracle''. To do this in a more traditional proof system in which proofs are objects, some potentially very large steps in the proof would have highly abbreviated documentation in the proof (something like ``evaluating this function on this value yields this result'', omitting the detail of how this is established).
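
The shape such an arrangement might take can be indicated in a signature (illustrative only; the names are mine and no existing system is being described):

    (* Illustrative shape of "compiler integration": an evaluation rule
       in which compiled code stands in for the long chain of primitive
       inferences which would otherwise justify each step of a
       computation.  The compiler and the code it generates are trusted. *)
    module type EVAL_KERNEL = sig
      type term
      type thm
      (* Evaluate a closed expression of an embedded programming language
         using compiled code, returning a theorem of the form  |- e = v
         where v is the computed value.  No record of the computation is
         kept other than this equation. *)
      val eval_conv : term -> thm
    end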

5.1.6 Reflection Principles

Reflection principles are a method for getting round the computation bottleneck provided by the logical kernel of a proof system.

In general the formal system implemented by a logical kernel will be adequate for the applications for which it is intended. This means that what can be proven is sufficient, even though the speed with which it can be proven may be unsatisfactory. A reflection principle provides a logically safe way of introducing efficient oracles which does not depend upon trusting the code of the oracle.

The idea is that an oracle (meaning in this context any specially coded derived inference rule) should be proven to permit the demonstration of no results which could not be demonstrated without it; it is then allowed to be used in place of the primitive inferences which would otherwise have to be used. One can then code up efficiently a decision procedure which would be inefficient if it had to operate via the logical kernel, and evaluation of this procedure can then be taken as proof. A reflection principle is an extra inference rule supported by a logical kernel which allows a theorem to be deduced from the fact that it results from applying a meta-function, which has been proven conservative, to theorems (possibly none) which have previously been proven. The introduction of a reflection principle into the logical kernel must be done with some care, since some naive formulations are known to engender contradiction. It also involves the effective importation of the compiler into the kernel, since the benefit depends upon compiling new code and then using it for deductive inference.
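
The shape of such a rule can be sketched as follows (again purely illustrative; the names and the form of the soundness theorem are my assumptions rather than a description of any existing kernel):

    (* Illustrative shape of a reflection rule.  A meta-function over
       (representations of) formulae is compiled to efficient code and
       paired with a previously proved theorem stating, roughly,
           |- forall p. check p = true ==> provable p.
       The kernel may then accept the checker's verdict as a proof. *)
    module type REFLECTIVE_KERNEL = sig
      type form
      type thm
      val concl : thm -> form

      type verified_checker = {
        check     : form -> bool;   (* efficiently compiled code      *)
        soundness : thm;            (* its proved soundness theorem   *)
      }

      (* From the soundness theorem and the fact that the compiled
         checker accepts p, deduce  |- p  directly, without replaying
         the computation through the primitive inference rules. *)
      val reflect : verified_checker -> form -> thm
    end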

It may be noted that the merits of reflection principles over the use of unverified oracles are simply in terms of trust, and that the cost of verification might be very high indeed. Since it is normal in the application of computers to trust the compilers which are used to compile applications, which are in general more complex than the relevant parts of proof tools, it may be argued that, except where specially high standards are sought, a large logical kernel incorporating a range of powerful oracles is desirable, without the need to formally verify the oracles.

5.1.7 Computers in Mathematical Proofs

The evolution of the idea of ``proof'' which we have discussed in the previous sections has been largely invisible to mathematicians, who rarely make use of proof technology. Proof technology is developed primarily by computer scientists, and used mainly for research and applications related to computer science, e.g. hardware and software verification.

Though proof technology has not had much impact on the practice of mathematicians, they have gradually made increasing use of computers. Initially this was primarily for numerical computations in applied mathematics, later for symbolic manipulations, also mainly in applied mathematics. There have however been a number of uses of computers in complex proofs, and these have proven controversial.

There is some overlap here with philosophical debates about the nature of proof, since objectors to the use of computers in proofs generally appeal to the necessity for proofs to be subjected to peer review, taking a proof to be established by some kind of social process, in contrast with the formal notion of proof in which a proof is a formal object whose correctness is established by some computational process. However, the uses of computers in proofs by mathematicians do not generally involve ``proof technology''. It is more typical that some mechanical and time-consuming part of a very large proof is implemented as a specially coded algorithm so that it can be accomplished by machine. The classic example is that of the four colour theorem, but uses of computers of this kind are becoming increasingly common.

Though these phenomena do raise philosophical questions, and are resulting in shifts in the informal social notion of mathematical proof, they are not significant for the theme of this work.

5.2 Logic Technology

Though the last section was ostensibly a discussion of the evolution of the idea of ``proof'', a great deal of it revolved around developments in software engineering. It is the development of software to support the use of formal logical systems which is the main factor causing these ideas to change.

However, this technology is not entirely concerned specifically with proof. The use of logic is not only, and not even primarily, concerned with proof. Proof is simply a validation step which takes place at the end of a more varied creative process, and is not itself the most important part of that process.

Before we become interested in constructing and checking a proof, a process along something like the following lines is undertaken.

  1. Identify a problem domain.
  2. Think about how this problem domain can be modeled abstractly.
  3. Consider the important concepts involved in this problem domain.
  4. Formulate important conjectures about this domain using the identified concepts.
  5. Prove the conjectures.

Alternatively, in a more application-oriented form:

  1. Identify a kind of applicable technology.
  2. Devise mathematical models of the technology.
  3. Identify a specific application.
  4. Formalise the requirements which the technology is to satisfy in the application.
  5. Design, or get a computer to design, an artifact using the technology which satisfies the requirement.
  6. Verify correctness of the design using the mathematical models of the technology.

Ideally the final step would be unnecessary: the mechanised design process would give results which are ``correct by construction''.

Proof is only a part of these processes. The process involves extending formal notations to cover a new application domain, which is in part conceptual and in part notational innovation. It involves developing an understanding of this domain, which might come in the form of a mathematical theory (and hence involve some proof). It involves the formulation of a problem in this application domain, which in itself can be a very large undertaking. It involves developing a solution to the problem, which is the process of engineering design. Only then do we come to the verification of the solution by proving that it meets the requirements (or by simulation or other methods).

Roger Bishop Jones 2016-01-07