3. Revolution in Logic

The phenomenon which I here describe as a ``revolution in logic'' is the establishment of methods which with almost complete reliability are able to confirm with almost absolute certainty the truth of almost any true analytic proposition. For a handle I call this ``the formalisation of analyticity''.

In this chapter I propose to tell the ongoing story, stopping only at the present day (and proceeding into the future in a later chapter). Though the method is historical, the purpose is to provide as clear a description as possible of certain aspects of the state of the art, particularly in relation to the application of formal logical methods, as it stands at the end of the twentieth century. Historical material is presented only to the extent that it helps the explanation.

The story involves philosophers such as Frege and Russell, logicians such as Gödel and Church, and computer scientists too. It concerns philosophical and logical advances both of a fundamental and of a superficial character, and it involves advances in methods and supporting software.

The story is presented chronologically, but involves threading together a number of quite disparate sub-themes. I shall try to keep clear the different kinds of development in progress and how they bear upon the matter in hand.

Before beginning I would like to emphasise the pragmatic nature of my interest in this matter. I consider these developments important because I believe them to be useful. I do not take pragmatism to the point of redefining truth to suit it, but I do think it an effective antidote to excessive scepticism. One particular dog-in-the-manger scepticism which I wish to sidestep is that which declines to accept the truth of the logicist doctrine because of qualms about whether the formal systems needed for the derivation of mathematics are truly pure unsullied logic. I personally am happiest to regard the truths of set theory as logical truths, but the point of my story is not this, and I believe its value does not depend upon how broad a scope we attach to the phrase ``logical truth''. It is sufficient for my story that methods are now known which are effective in solving certain kinds of problems, and that an increasing wealth of computer software is under development, gradually making these methods easier and more productive in application. In the fullness of time, I will suggest in later chapters, these developments may exert a profound influence on the character of philosophy. My present purpose is to describe these new developments.

In describing the developments as ``the formalisation of analyticity'' I am placing a very personal label upon these developments, and placing upon myself the additional burden of making this label seem appropriate. A significant part of the sequel will be directed to this end.

3.1 The Vision of Leibniz

We have already mentioned (2.10) Leibniz, his characteristica universalis and his calculus ratiocinator. In this chapter I hope to show that we are coming remarkably close to realising those ambitions.

To many that will seem an odd thing to say, for as a result of the ``revolution'' discussed in this chapter we now have definitive technical results which show conclusively that neither the characteristica nor the calculus is possible. Specifically, either the incompleteness of arithmetic or the unsolvability of the halting problem (which entails it) suffices to show that a calculus ratiocinator cannot encompass even the whole of a priori knowledge, let alone empirical science. On the possibility of a characteristica universalis doubt is cast by Tarski's work on the undefinability of arithmetic truth.

Even if (as I shall suggest) these theoretical limitations are not so lethal in practice as we might fear, the search for artificial intelligence has shown that very many problem domains are not only computationally intractable, but are also highly resistant to effective heuristics.

Added to these logical and logistical difficulties there are the philosophical difficulties which arise from attempts to solve, by apparently a priori methods, real world problems whose solutions must surely be obtainable only a posteriori.

Nevertheless, the vision which inspired Leibniz remains a worthwhile source of inspiration for the next century, in which substantial economic benefits may be expected to flow from an enlightened continued development of logic technology to that same end. It is my aim in this chapter to make this opinion seem to the reader less absurd than he might now find it.

3.2 Mathematical Rigour

The period from the renaissance to the end of the eighteenth century saw a hectic development of new areas of mathematics to support the new mathematically based science and engineering. This was a pragmatic process, not materially hindered by the foundational problems in reasoning about continuous quantities which were first discovered in ancient Greece.

In the nineteenth century some mathematicians went ``back to basics'' and progressively constructed the mathematical foundations necessary to retrospectively rigorise the differential and integral calculus and all that had been built upon it.

Two key aspects of mathematics which were in need of clarification were the number systems used in analysis and the notion of ``function''.

3.2.1 Numbers

Clarification of the number systems required for analysis began with a simplification.

Methods of proof had hitherto relied heavily on the use of ``infinitesimal'' numbers. These were introduced by Leibniz, one of the two independent inventors of the calculus (the other being Newton), but had long been criticised.

Infinitesimals are vanishingly small numbers, which behave in some but not all ways as if they are zero. Because of this apparent equivocation there were doubts about the rigour of arguments involving infinitesimals, and conceptual difficulties in clarifying the nature of number systems which include them.

It was easier two hundred years ago to see how analysis could be undertaken without infinitesimals than it was to see how number systems which included them could rigorously be established. The former was achieved by introducing the notion of a ``limit'', in terms of which differentiation and integration could be defined without resort to infinitesimals.
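
In modern notation the limit concept makes no appeal to infinitesimals. The derivative, for example, is defined by

\begin{displaymath}
f'(a) \;=\; \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}
\end{displaymath}

where $\lim_{h \to 0} g(h) = L$ means that for every $\epsilon > 0$ there is a $\delta > 0$ such that $|g(h) - L| < \epsilon$ whenever $0 < |h| < \delta$. Every quantity mentioned is an ordinary real number.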

Having thus simplified the number system required by analysis, Dedekind defined the required ``real numbers'' (including the irrational numbers, thereby resolving the ancient Greek problem of incommensurable ratios) in terms of rational numbers, using ``Dedekind cuts'' (illustrated below). Rational numbers being readily definable in terms of the positive whole ``natural'' numbers, the mathematical foundations of analysis were thus secured by the reduction of analysis to arithmetic. Here mathematicians might have rested content, but philosophers pressed the matter further, seeking to reduce arithmetic to logic.
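
To illustrate Dedekind's construction: on the usual convention which identifies a real number with the lower half of its cut, the irrational $\sqrt{2}$ is simply a set of rationals,

\begin{displaymath}
\sqrt{2} \;=\; \{\, q \in \mathbb{Q} \mid q < 0 \,\vee\, q^2 < 2 \,\}
\end{displaymath}

and the ordering and arithmetic of the real numbers are then defined in terms of inclusion and operations on such sets.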

3.2.2 Functions

The idea of a ``function'' had begun with mathematical expressions defining how some value depends upon the value of others whose names appear in the expression. Thus a function $square$ might be defined by the expression


\begin{equation}
square(x) = x \cdot x \tag{3.1}
\end{equation}

indicating that the value of this function for any $x$ is to be obtained by multiplying $x$ by itself.

However, mathematicians, as well as using specific functions, also developed theories about functions in general. When functions are the subject matter of mathematical investigations, a definition based on textual expressions may seem both too complex and too narrow. A mathematician reasoning about a class of objects wants to understand the full scope of applicability of his results, and will prefer a definition which is no more restrictive than is necessary for their validity.

Thus, Dirichlet, in a paper on the representation of arbitrary functions using Fourier series, defined $y$ as a function of $x$ if to each value of $x$ there corresponds a unique value of $y$, independently of whether the correspondence could be captured by any expression or rule. This paved the way for the predominant modern conception of a function as an arbitrary ``graph'' which can be identified with a set of ordered pairs (typically, but not necessarily, of numbers).
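
On this conception a function from $X$ to $Y$ is nothing more than a set $f$ of ordered pairs satisfying a uniqueness condition:

\begin{displaymath}
f \subseteq X \times Y \quad\wedge\quad \forall x \in X.\ \exists! y \in Y.\ (x, y) \in f
\end{displaymath}

Dirichlet's celebrated example, the function taking one value on the rationals and another on the irrationals, is a perfectly good graph even though it answered to no analytic expression then accepted.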

Liberalisation of the notion of function was not uncontroversial, and though it has been very fruitful mathematically, it continues to be contested in some circles to the present day.

Two opposed fundamental conceptions of function have therefore emerged, each with myriads of variants. The first is that of function as graph, and the second that of function as rule. Whichever of these fundamental conceptions is adopted, functions are a source of paradox, and are therefore a key element in a successful explication of the logical foundations of mathematics, or of logic itself.

3.3 Frege's Logicism

Modern logic begins with Frege.

Frege sought to demonstrate that arithmetic is, not synthetic a priori as Kant had alleged, but analytic (and consequently a priori). In interpreting Kant's claim Frege did not, as some authors do, take seriously Kant's ``definition'' of analyticity in terms of inclusion of the subject of a proposition within the predicate, which he may have taken to be an error on Kant's part engendered by Aristotle's logic.

Frege explains analyticity ([6] section 3) as follows. He makes no distinction between analytic and a priori, considering them both to concern the manner in which a claim to knowledge is justified. If a proof can be constructed which depends only upon premises which are general logical laws or definitions (bearing in mind that the admissibility of the definitions may also need to be demonstrated), then the proposition so demonstrated is analytic.

Frege's refutation of what he takes Kant to be claiming then depends upon his first providing an adequate logic, then definitions of the necessary mathematical concepts, and finally a derivation of the theorems of arithmetic from that basis.

This approach can be summarised by the phrase: ``Arithmetic = Logic + Definitions'', which we will later find helpful in understanding why Frege ran into difficulties.

3.4 Foundation Systems

3.4.1 Frege's Foundations

Frege's attempt to demonstrate his logicist thesis that arithmetic is analytic may be thought of as occurring in the following stages.

Concept Notation

Frege's first step is to replace the inadequate formal logic of Aristotle by a more powerful logical system better suited to mathematical reasoning. This he called Begriffsschrift [5]. Frege did not consider this system sufficient by itself to provide the ``logic'' part of the ``logic + definitions'' prescription, even though it provided a powerful logic, allowed for definitions of non-logical concepts, and was used with such definitions by Frege himself.

Frege's Foundation System

Frege supplemented his Begriffsschrift with additional axioms before he was ready to use it as a foundation for the derivation of mathematics using only definitions. It is also notable that his writings over the period between the publication of Begriffsschrift and that of volume 1 of the Grundgesetze become noticeably more type-oriented, and the logical system of the Grundgesetze is usually considered to be a second-order logic. Though Frege had come to appreciate more fully the need for type-like distinctions, he had not noticed how easily contradictions arose in default of them, and the foundation system which he adopted proved inconsistent. With hindsight it is easily repaired, but the need for such repairs, which in effect place restrictions on abstraction, damages the philosophical rationale for the foundation system. If the rules of abstraction seem arbitrary, how can the claim be justified that they are logically fundamental?

Definitions and Derivations

To this extent Frege was certainly right, that foundation systems can be established within which the development of mathematics can proceed simply by defining new mathematical concepts as required (in terms ultimately of the set of primitive concepts of the foundation system) and deducing mathematical theorems, ultimately from the primitive axioms of the foundation system, using only the inference rules specified by the foundation system. Thus mathematics is seen to be in large part formalisable, and the status of its truths (as to whether they are truths of logic) is inherited from the foundation system.

3.4.2 Foundations to Follow

Though Frege's own system was shown to be unsatisfactory, and Frege never himself repaired it, within a decade there were two new alternatives, neither of which has since been found to be inconsistent. These were both published in 1908 and had quite different motivations.

In the same philosophical tradition as Frege, in the promotion of essentially the same philosophical thesis (which has since been called logicism), Russell devised his Theory of Types [12], which was then used as the basis for Principia Mathematica [13].

From a more mathematical standpoint, unencumbered by the desire for a convincing philosophical rationale, Ernst Zermelo published an axiomatisation of set theory in a paper entitled Investigations in the Foundations of Set Theory I [15].

There then followed a century of activity, first by mathematical logicians and then by theoretical computer scientists and software engineers, in which a great many different logical systems were devised which were capable of deriving mathematics in an essentially similar way, i.e. without the use of new logical principles, using only safe ways of defining new mathematical concepts in terms of the concepts provided in the foundation system. In a sense we could say that in 1908 the foundational problem had been solved and that all the subsequent development was about pragmatics; but applicability and impact depend upon pragmatics, and so we will take an interest in some of these later developments.

But first, some fundamental theoretical advances have light to cast.

3.5 Expressiveness and Strength

The expressiveness of a language is a measure of how much can be expressed in the language. Any well-defined language can be provided with a proof system, which permits some of the sentences of the language to be proven true. If this is done then the language with its proof system becomes a logic. The strength of a logic is a measure of how much can be proven in the logic.

Leibniz, we have seen, thought that there could be a language in which anything could be expressed, and a calculus which would permit the settlement of all problems (and hence the proof of all true conjectures). The ``revolution'' in logic has provided modern enthusiasts with methods which get them very much closer to realising Leibniz's dreams than he ever was. It has also provided some negative results, which can reasonably be understood as refuting the possibility not only of the calculus ratiocinator, but even of the lingua universalis.

However, we will see that the practical consequences of these negative theoretical results are not so clear. While the theoretical results rule out universality, they place only weak limits on how closely we can approach it. From a practical standpoint the significant problems are not the inadequate expressiveness of our languages, or the formal incompleteness of our logics. They are the problems associated with computational intractability, which we can hope to mitigate by advances in software and hardware technologies.

In this section I aim to give an account of some of these theoretical and pragmatic questions.

3.6 Expressiveness

It is implicit in Frege's conception of his program that an expressive language can be obtained by starting with a basic language providing a handful of fundamental concepts and a suitably powerful method for defining new concepts, and then adding definitions of the concepts required for whatever purpose is in hand (in Frege's case the development of arithmetic).

I aim to explain how this basic scheme can be extended in two different dimensions.

The first dimension concerns primarily how the language expresses itself, the second concerns what kinds of thing the language is capable of expressing.

In the first dimension we move from the narrow conception of definition as it was understood by Frege to the broader notion of conservative extension, and from there to the more powerful linguistic technique of semantic embedding.

In the second dimension we move from the language of arithmetic, to languages in which arbitrary analytic propositions are expressible, and from there to languages which encompass contingent synthetic truths. We do not attempt to move beyond the languages of science, concerning ourselves with the logical and the empirical but not with values, or other aspects of language in which questions of truth may play a secondary role to those of social dynamics.

3.6.1 Definitions

Frege was aware that definitions were crucial to his enterprise, and that the admissibility of definitions should be carefully scrutinised if they are not to provide the basis for unsound proofs. However, his awareness was incomplete, and he was unable to articulate adequate criteria for the admissibility of definitions. This was not apparent until Russell communicated to Frege the paradox which now bears Russell's name, and even then Frege does not appear to have grasped its significance.

Frege had two distinct criteria for the admissibility of definitions, one for objects and another for functions (which for Frege included propositional functions, or predicates). For things other than functions the rule was that a definition must give a name to something which exists, and therefore that if a name is assigned to an object satisfying some description then it is necessary to establish that there is exactly one object which satisfies the description.

In the case of functions, the admissibility of a definition depended on its determining, for every value to which the function might be applied, what value resulted from the application.

Implicit in this dual standard is a principle of functional abstraction, which in effect states that to every abstraction there corresponds a function. It is easy to see that the analogous principle in set theory results in contradictions, and Russell's paradox suffices to demonstrate this.
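
The derivation is short. Unrestricted abstraction licenses the set $R = \{\, x \mid x \notin x \,\}$ of all sets which are not members of themselves, and instantiating its membership condition with $R$ itself yields the contradiction

\begin{displaymath}
R \in R \;\Longleftrightarrow\; R \notin R
\end{displaymath}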

This problem of Frege's may be resolved by settling ontological issues before addressing what definitions are to be deemed acceptable, and then sticking rigidly to Frege's first principle, that a definition may be used only to give a name to some entity which is already known to exist.

3.6.2 Conservative Extension

It turns out that limiting the introduction of names to be done by definition, though logically safe, is pragmatically inconvenient.

The ways in which new names are introduced can be liberalised without compromising consistency in the following ways.

First, we observe that the consistency of the logic does not depend upon the name being unambiguous. A name can consistently be introduced as denoting some one object satisfying a description, without settling which one. Where a name is introduced by an indefinite rather than a definite description this may still be called a definition, but it is then a loose definition.
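
In symbols, the liberalised principle is that of extension by a fresh constant: having proved that something satisfies $\phi$, a new name may be given to such a thing,

\begin{displaymath}
\frac{\vdash \exists x.\ \phi(x)}{\vdash \phi(c)} \qquad (c\ \mbox{a new constant})
\end{displaymath}

and the extension is conservative: no sentence of the original vocabulary becomes provable which was not provable before.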

Loose definitions have the advantage of being more abstract. Where an object is needed but the choice is arbitrary among a number of equally suitable candidates, the use of a loose definition is preferable since it prevents any subsequently developed theory from making use of (and hence depending upon) features of the chosen representative which are accidental rather than essential.

Once we begin to take advantage of loose definitions it also becomes advantageous to be able to introduce more than one new name at the same time. For example, if we were to define the natural numbers we would need in the first instance to introduce three new names: one for the set of natural numbers, one for the successor function (which adds one to its argument), and one for a distinguished element of that set which is not the successor of any natural number. If these names are to be introduced in an abstract way, i.e. without actually deciding which of the many suitable sets will be used for the numbers, then the three names are best introduced together, defined as loosely denoting any combination of set, element and function which interrelate in the required manner.
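
Schematically (the names $N$, $s$ and $0$ are chosen here purely for illustration), the three names are introduced together subject to constraints such as

\begin{displaymath}
0 \in N, \qquad
\forall n \in N.\ s(n) \in N \,\wedge\, s(n) \neq 0, \qquad
\forall m, n \in N.\ s(m) = s(n) \Rightarrow m = n
\end{displaymath}

together with the principle of induction. Any set, element and function satisfying these constraints will serve, and the prior theorem that at least one such triple exists is what legitimates the loose definition.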

3.6.3 Flexible Syntax

Though we have in the notion of ``conservative extension'' a flexible and powerful way of extending our vocabulary so that new subject matters may be addressed without extending our logical foundations, this approach does not provide for the kind of notational precision which mathematics has preferred and which is now expected in many languages used with computers.

Techniques are available for defining languages in which not only can new concepts be introduced, but in which the choice of how these are written down goes beyond the choice of a name. Such techniques may be used to approach the notational sophistication found in mathematics and science.

3.6.4 Semantic Embedding

In some cases, especially in applications involving the use of computers, the introduction of new notations does not suffice, since the notations already in place may interfere with the processing of the new notations. In such cases complete new languages may be required.

To talk of a universal language in which to express analytic propositions may seem nonsensical, because analyticity is language relative, and analytic propositions can be expressed in almost any reasonable well-defined language.

One way to deal with this problem is to talk in terms of translatability. To say that a particular language is universal in respect of a certain kind of proposition is simply to say that any other language in which such propositions can be expressed is translatable into the universal language.

Semantic embedding is a technique whereby one language (the host) provides support of various kinds for another (the embedded language), via an ``embedding'' of the latter into the former which is faithful to the semantics of the embedded language. This is an oblique way of talking about a translation of the one language into the other. The phrase semantic embedding is mainly used in contexts where various kinds of processing for the embedded language, e.g. proof support, are provided using a behind-the-scenes translation into the host language.
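
Schematically: an embedding of an object language $L_1$ in a host language $L_2$ is a translation $T$ from the sentences of $L_1$ to those of $L_2$ which is faithful to their meanings, at least in the sense that truth is preserved:

\begin{displaymath}
\models_{L_1} \phi \quad\Longleftrightarrow\quad \models_{L_2} T(\phi)
\end{displaymath}

Proof support for $L_2$ then yields, behind the scenes, proof support for $L_1$.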

3.7 Strength

The publication of Principia Mathematica was a major triumph for logic as a foundation for mathematics. It effectively demonstrated the power of the new logical methods. It was natural to assume that a small logical system capable of deriving so much mathematics (albeit arduously) encompassed the whole of logical truth and would be capable of demonstrating any true mathematical conjecture.

On the strength of this kind of conviction about the complete formalisability of logic, and in the face of some remaining uncertainty about the consistency of logics, David Hilbert inaugurated the new subject of meta-mathematics, closely associated with his formalist philosophy of mathematics.

3.7.1 Completeness of First Order Logic

An important confirmation of these intuitions about the power of the new logical methods came when Kurt Gödel proved that ``first order logic'' is complete. For those inclined to consider first order logic the embodiment of logical truth, this result amounts to the complete formalisability of the truths of logic. Every valid formula of first order logic is provable, though this provides only a semi-decision procedure for first order validity (both points are stated schematically below). When it comes to the foundations of mathematics, however, this gets us less far than one might have hoped. First order logic is ontologically neutral and therefore does not provide a context in which definitions can readily be deployed. It is necessary for the development of mathematics to use axiomatic theories rather than definitions, and we find that first order axiomatic theories fall short of providing wholly adequate characterisations of the subject matters of mathematics.
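
In symbols, for any set $\Gamma$ of first order sentences and any sentence $\phi$,

\begin{displaymath}
\Gamma \models \phi \quad\Longleftrightarrow\quad \Gamma \vdash \phi
\end{displaymath}

semantic consequence and formal derivability coincide. The semi-decision procedure consists in enumerating derivations: if $\phi$ is valid the enumeration eventually delivers a proof, but if it is not the search may continue forever.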

3.7.2 Incompleteness of Arithmetic

A principal objective of Hilbert's programme was to establish, using only limited means, the consistency of the formal systems within which mathematics could be derived. This objective was shown to be unrealisable (under its most natural interpretations) by Kurt Gödel, when he demonstrated the incompleteness of arithmetic.
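
In a modern formulation (incorporating Rosser's refinement): for any consistent, recursively axiomatisable theory $T$ containing elementary arithmetic there is a sentence $G_T$ in the language of $T$ such that

\begin{displaymath}
T \nvdash G_T \qquad\mbox{and}\qquad T \nvdash \neg G_T
\end{displaymath}

and if $T$ is sound then $G_T$ is true. The truths of arithmetic therefore outrun the theorems of any such formal system.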

This showed that mathematical truth is not fully formalisable, and in consequence either that logical truth is not formalisable or that mathematical truth goes beyond logical truth. Similar observations can be made for analytic truth. If the truths of mathematics are held to be analytic, then they are not wholly formalisable; if they are held to be (perhaps by definition) formalisable, then they cannot encompass the whole of mathematics.

These alternatives are not deep problems with definite answers awaiting discovery. They reflect in a straightforward way choices about the meaning of the terms involved. Arguments can be deployed about the merits of a broader or narrower definition of the terms ``logical truth'' and ``analytic truth'', and such arguments are likely to be more valuable than debates about whether mathematical truths are logical or analytic which presume, without careful analysis or recognition of the discretion involved, some particular meaning for these terms, or, worse, which debate the matter without clarifying the semantic issues at all.

I will continue to work with the broader notions of logical and analytic truth.

The incompleteness results, though strictly about the power of logical systems, have a special lesson to teach us here, which concerns not power but expressiveness.

3.7.3 Measures of Strength

Having discovered that formalisations of arithmetic are necessarily incomplete it becomes of interest how different formalisations compare.

A naive way to compare would be simply to compare the sets of theorems derivable under different formalisations. Theories would then form a lattice under set inclusion, i.e. a theory $A$ is stronger than another theory $B$ if the theorems derivable under $B$ are a subset of those derivable under $A$.

This approach is limited, however, to different axiom systems using the same notation. To avoid this constraint we can talk about the interpretation of one theory in another. In this case a theory $A$ is stronger than a theory $B$ if $B$ can be interpreted in $A$. For this to work satisfactorily we need an adequate definition of ``interpretation''.
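
The standard notion runs along these lines: an interpretation of $B$ in $A$ is a translation $\tau$ from the language of $B$ into the language of $A$ (mapping each primitive symbol of $B$ to a defining formula of $A$, and relativising quantifiers to a formula delimiting the domain) such that

\begin{displaymath}
B \vdash \phi \quad\Longrightarrow\quad A \vdash \tau(\phi)
\end{displaymath}

Then $A$ is at least as strong as $B$ whenever such a $\tau$ exists, and mutual interpretability yields equivalence of strength.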
