2. The Project

It is my aim here to devise a philosophical framework suitable to underpin a particular enterprise, the project. This might sound like a narrowly scoped philosophy, but ``the project'' is broad enough to encompass the whole of mathematics, science and engineering when undertaken by deductively sound methods. The project is, in brief, the automation of deductive reason and its applications, undertaken in a particular manner which I hope to make clear. The relationship between that project and the philosophical ideas which I offer as a supporting context is complex. A full and detailed account of the project depends upon the philosophy, but the project plays an important role in the articulation of the philosophy; they are hand-in-glove.

In this section I make a first attempt at describing the project, trying to minimize reference to the underpinning philosophy. In subsequent chapters the philosophical issues are entered into in earnest, and in this way it is hoped that aspects of the project are made clearer.

The scope of the phrase ``automation of deductive reason'' is intended to include intelligent reasoning, but not to encompass many of the aims or methods of the field known as artificial or machine intelligence.

Here and in subsequent sections I try to make the ideas clearer partly by describing their historical evolution. In this section the connections with some of the pioneers and visionaries who have contributed to the formalization and automation of reason in ways particularly relevant to our project are drawn out. The principal figures in this are Gottfried Wilhelm Leibniz and Rudolf Carnap, as exemplars of philosophers with similar projects, while Alan Turing is discussed to provide a contrast with the kind of automation of reason I have in mind here.

2.1 Leibniz, Characteristic and Calculus

Leibniz was a rationalist, a philosopher who was particularly concerned with what can be established by reason rather than observation. He recognized the distinction between the two in the dictum:

``There are two kinds of truths, truths of reason and truths of fact.''

Despite recognizing this distinction, he perceived that all these truths might be codified formally, so that truths could be distinguished from falsehoods by computation.

His vision was universalistic in several respects. He envisaged a universal characteristic, which was to be a formal logical language in which all knowledge might be codified, and a calculus ratiocinator, a method of calculating the truth value of any sentence in his universal characteristic.

These two ideas provide the first prototype for this work.

Leibniz did not imagine that the truth of scientific hypotheses could be established in this way, but rather that the corpus of scientific knowledge could be encoded in the universal characteristic, the calculus then settling questions in the context of that knowledge. In consequence, Leibniz also promoted the collaborative development of an encyclopedia, and of scientific academies and journals. Leibniz perhaps also guessed that the computations involved might be non-trivial, and he contributed to the development of the information technology of his age, calculating machines.

In these ideas Leibniz was at least three centuries ahead of his time. The logic he knew was essentially that of Aristotle, and fell short of what is needed for the formalization of science by a wide margin. The necessary innovations in logic did not appear for another two hundred years. Beyond the question of the adequacy of modern formal languages to express the entire corpus of scientific knowledge, there are known limitations in the extent to which formal deduction can capture truth in these systems. Mechanical decision procedures are known not to exist, so for these reasons we know that no calculus could be relied upon to settle all well-formulated problems in the universal characteristic.

These limitations are perhaps of only marginal significance for applications in science and engineering, but even within these limitations there are serious problems of computational intractability. For many or most problems whose answer is in principle computable, the computation cannot be completed in a reasonable timescale or with available resources. Brute computation will therefore not suffice in any but the simplest cases, and something closer to the workings of human intelligence would be needed to come close to an effective implementation of Leibniz's calculus.

The technology of calculating machines was even further from sufficiency. It remains moot whether today's information technology is up to the task. Beyond the adequacy of the hardware, there are problems with the software which are potentially even less tractable.

Leibniz's ideas in these matters had little influence in his own time or for centuries following, not just because they were so far removed from what was then realizable, but also because they were not even published. His largest impact pressed matters in the opposite direction, and came from his independent invention of the differential and integral calculi, and in particular from the notation which he devised for this epoch-making mathematical innovation. The epoch thus spawned was two centuries of rapid development of mathematics applicable in science. These enormous advances took place despite the reduction in rigour consequent on the introduction of infinitesimal quantities. Right at the beginning of this period the philosopher Berkeley criticized these new developments, but it was two hundred years before mathematicians began to take the problems of rigour seriously and the rigorization of analysis began, ultimately leading to the new developments in logic which made logic sufficient for Leibniz's project.

The rigorization of analysis was accomplished first by the displacement of infinitesimals by a precise notion of limit and by the construction of the real numbers from the natural numbers, effectively reducing analysis to arithmetic. This done, the next challenge, taken up by Frege, was to show that arithmetic was simply a matter of logic, requiring no special metaphysical insights. In rising to this challenge Frege, inspired in part by the example of Leibniz, re-invented logic in the form of his Begriffsschrift or concept notation, which provided the basis not merely for the logicisation of arithmetic, but also potentially for Leibniz's universal characteristic.

Frege's project was more narrowly scoped, but even the formalization of arithmetic was an arduous undertaking, into which Frege had invested decades of industry when Bertrand Russell pointed out to him the inconsistency of his basic principles. At about the time when Russell and Whitehead were completing their own assault on the logicisation of mathematics (a substantial hurdle in which was ensuring against the kind of inconsistency found in Frege's system), the young Rudolf Carnap was attending, as an undergraduate, Frege's lectures on his new logic and absorbing much of Frege's attitude to rigour in deductive reasoning and its general applicability.

2.2 Carnap's Programme

Carnap was first apprised of modern logic as an undergraduate by Frege's lectures on his ``Begriffsschrift'' [Fre67,VH67]. At this stage in his development Carnap clearly showed an interest both in philosophy and in mathematics and science. He was also strongly inclined towards the precision of language which he found in Frege's logic, which he contrasted with the teaching of logic in philosophy. Within the sciences, he preferred physics because of its greater conceptual clarity. He also perceived the distinction between ethical, evaluative, emotive and metaphysical language on the one hand and scientific doctrine on the other.

From this position as an undergraduate just before the Great War he moved forward, in a period of about seven years, to the point at which we can see the formulation of the central ideas which motivated his work through the rest of his life. The most significant new influence in forming these ideas came from Bertrand Russell. Carnap became acquainted with Principia Mathematica [WR13] and began to prefer its notation to that of Frege. He also began to feel that a concept was not clearly understood until he could see how to express it in symbolic language.

At the end of 1921 Carnap read Russell's Our Knowledge of the External World [Rus21] and was inspired by Russell's characterization of the ``logico-analytic method'' in philosophy. It was at this point that Carnap self-consciously devoted himself to this philosophical method and began an intensive reading of Russell's writings on the theory of knowledge and the methodology of science.

Russell, in his work on Principia Mathematica, had undertaken with Whitehead a task similar in character to Frege's project. However, Frege's focus had been on the logicisation of mathematics. Russell had a broader conception of the applicability of the methods, and advocated a kind of philosophy in which such methods were used exclusively. Carnap took up this challenge enthusiastically.

Carnap's philosophical programme therefore involved, first, the idea that philosophy in general should be analytic in the specific sense that its methods and results are logical, and should be obtained by the new logical methods pioneered by Frege and Russell. Secondly, Carnap conceived of the role of the philosopher as being primarily concerned with facilitating the progress of empirical science. The kind of facilitation he had in mind was analogous to the innovations in mathematics undertaken by Frege and Russell: the establishment of new languages in which scientific laws could be precisely enunciated, together with the surrounding theoretical and methodological framework for the formalization of science.

The work of Frege and Russell had been universalistic in the sense that they sought a single new language in which the whole of mathematics could be developed. At this stage Carnap's thinking was along similar lines, but since he was trying to carry forward the ideas into empirical science, it did not seem feasible to stick with the same purely logical systems. Carnap's aims at this point may therefore be thought similar to Leibniz's desire for a universal characteristic in which all of science could be formalized. Later Carnap became more pluralistic, but the idea of using formal logic to encode scientific knowledge and of formalizing deductive reasoning about science places his enterprise within the scope of Leibniz's.

Like Leibniz's, Carnap's ideas were ahead of their time. He was working in the context of modern symbolic logic, which is (I shall argue) no longer shackled by the limitations of Aristotelian syllogistic. But strict formality in language and proof is arduous, and good mechanical support is a prerequisite for the widespread adoption of such methods. Principia Mathematica formalized a significant part of mathematics, and was very influential during the formative years of the new academic discipline of mathematical logic. These new developments did have a significant impact on Mathematics, which was made more rigorous through the systematic use of axiomatic set theory. But mathematicians did not follow the example of Russell and Whitehead by taking up strictly formal notations or formal proofs. Not even in mathematical logic did this happen, for mathematical logic became a meta-theoretic discipline in which formal systems were studied rather than applied. Proofs in mathematical logic may often be concerned with formal systems, but they are not themselves formal.

Not only was formalization rejected by mathematics and science, it was soon to be rejected, together with Carnap's entire philosophical outlook, by philosophy. The first major assault on Carnap's position was by Quine, who had been sniping at some of the core doctrines, notably the concept of analyticity, ever since his first exposure to Carnap's philosophy as a Harvard postgraduate visiting Carnap in Vienna. Quine's critique modulated into outright repudiation of the central tenets of Carnap's philosophy in his ``Two Dogmas of Empiricism''. It was not long before an attack came on another flank from Saul Kripke, who dismantled the identification of analyticity and necessity, which were inter-definable in Carnap's conceptual scheme, and thereby invented a new kind of metaphysics.

The idea, shared by Russell and Carnap, that philosophy should consist of logical analysis, and Carnap's further idea that it should also be conducted formally, both disappeared. Symbolic or mathematical logic does play a substantial role in analytic philosophy, but it is the meta-theoretic techniques of mathematical logic which are used in philosophy. There is no general adoption of formality as a way of achieving rigour in philosophy, and no general recognition that there is a deficit in rigour which might require such a remedy.

Another impediment to the realization of Leibniz's project was, during Carnap's lifetime, beginning to be dissolved. The digital electronic computer was invented and the first steps toward using this technology for the automation of deduction were in progress. The torch was about to pass from philosophy, not to mathematical logic, but to Computer Science.

2.3 Information Technology

These two philosophers strove to realize ambitions which were soon to be made more feasible by the invention of the stored program digital computer, and with it another academic discipline, Computer Science, which over the second half of the $20^{th}$ century would conduct an enormous amount of research relevant to both of their projects. The first home of logic, Philosophy, and the new discipline of mathematical logic treated logic meta-theoretically. They took formal logic primarily as something to be studied rather than directly used (by contrast with the earlier work of Frege, Russell and Whitehead, in which large parts of mathematics were formally derived). Computer Science, as well as conducting theoretical and meta-theoretical investigations relevant to computing, found reason to undertake proofs formally, with the assistance of its computing machinery.

For these reasons, the primary academic locus of research relevant to the ambitions of Leibniz and Carnap moved from Philosophy to Computer Science. However, the use of computers for science has been pursued predominantly by purely computational rather than formally logical methods. I hope to make this distinction clearer, to make the case that logical methods deserve to be given special consideration, to give a philosophical context in which it makes sense to do that on a substantial scale, and to consider some aspects of how this might be progressed.

2.3.1 Alan Turing

Alan Turing was a mathematical logician, computational engineer and a visionary thinker whose ideas on artificial intelligence have been influential on research in the automation of reasoning.

For our present purposes it is primarily his writing on artificial intelligence which is of interest, as encompassing objectives even broader than our own, but Turing was also an important figure in a milestone in our understanding of computation which was reached during the 1930s.

One of the tenets of the philosopher and mathematician David Hilbert in the early part of the $20^{th}$ century was that all definite mathematical problems are susceptible of solution. This thesis was held to be equivalent to the effective decidability of validity in first order predicate logic, which was called the Entscheidungsproblem. To make the problem precise enough to be given a definitive mathematical answer it was necessary to make precise the notion of effective calculability. This resulted in several different logicians putting forward different notations in which arbitrary algorithms (methods of solution) could be expressed. Church came up with a concise notation for functional abstraction known as ``the lambda calculus'', Kleene with a system for defining numerical functions by recursion, and Emil Post with a system based on transforming strings using a set of ``productions''. All these systems provided ways of defining effectively computable functions over the natural numbers, but the one which looked most like a description of a computing machine was that invented by Alan Turing, since called the Turing machine [Tur36].
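
Since several of these formalisms are only named here, a toy illustration may help to fix ideas. The following sketch of a Turing machine simulator is my own illustrative construction in Python, not anything drawn from Turing's paper; the encoding of tapes and rules and the example transition table are hypothetical choices. It shows only the general shape of the model: a finite table of rules applied to a tape of symbols.

    # A minimal, illustrative Turing machine simulator (hypothetical encoding).
    def run_turing_machine(transitions, tape, state="q0", halt="halt", steps=1000):
        """Run a one-tape machine.  transitions maps (state, symbol) to
        (new_state, new_symbol, move), with move -1 (left), 0, or +1 (right).
        Blank cells are represented by '_'."""
        cells = dict(enumerate(tape))
        pos = 0
        for _ in range(steps):
            if state == halt:
                break
            symbol = cells.get(pos, "_")
            if (state, symbol) not in transitions:
                break                      # no applicable rule: the machine stops
            state, cells[pos], move = transitions[(state, symbol)]
            pos += move
        return "".join(cells[i] for i in sorted(cells))

    # Example machine: flip every bit of the input, halting on a blank.
    flip = {
        ("q0", "0"): ("q0", "1", +1),
        ("q0", "1"): ("q0", "0", +1),
        ("q0", "_"): ("halt", "_", 0),
    }
    print(run_turing_machine(flip, "0110"))   # prints 1001_

The point is only that the behaviour of such a machine is completely determined by a finite table of rules, which is what makes the notion of effective calculability precise.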

These different ways of describing effective computational processes all turned out to be equally expressive, and this remarkable discovery underpinned the idea that they did indeed all capture the notion of effective calculability. The Entscheidungsproblem having by these means been made more definite, Church and Turing independently settled it negatively by exhibiting problems which were provably unsolvable by any effective procedure.

This result limits what could possibly be achieved by Leibniz's calculus ratiocinator, as did Gödel's earlier result on the incompleteness of arithmetic. It seems from Leibniz's description that he probably did imagine that any definite question within the scope of the scientific knowledge codified in the lingua characteristica would be answerable using his calculus and a sufficiently rapid calculator. The Gödel and Church-Turing results show that this cannot be the case. I will look a little more closely later at these and other limiting factors and consider their significance for ``the project''.

Turing also wrote on Artificial Intelligence, and his conception of Artificial Intelligence provides an alternative ideal for the automation of reason which is not predicated on machines achieving the impossible. His seminal paper in this area was Computing Machinery and Intelligence [Tur50]. Turing's essential idea was that human intelligence, though very differently implemented, is not fundamentally distinct from or more capable than the kind of information processing which could be undertaken by a sufficiently complex and powerful Turing machine. In addressing the question whether a machine can think, Turing described an ``imitation game'' to make this question more precise; this later became known as the Turing test. The Turing test involves a human observer interacting with a person and a machine through similar interfaces which conceal which of the two is involved, e.g. using a keyboard to converse with someone or something in a distant or private place. If the observer cannot tell the difference between the person and the machine when interacting with them in this way, then perhaps we should be ready to acknowledge that the machine thinks, and hence exhibits some important aspects of human intelligence.

2.3.2 The Science of Computing

During the first half of the $20^{th}$ century the technology of computation moved forward rapidly. The Turing machine provided a very simple abstract prototype for an important transition, from special purpose to general purpose computational engines. It anticipated the general purpose stored program digital computer, and these relatively complex calculating machines were just becoming technically feasible. They first appeared in the 1940s using technologies such as electromagnetic relays and the electronic vacuum tube. Soon the transistor emerged, and the technology of computation began a long period of progressive miniaturization, yielding ever smaller and faster building blocks for digital computers.

With the invention of the digital computer came the new academic discipline of Computer Science, and enthusiasm for formal systems and the automation of reason passed from Philosophy and Mathematical Logic (itself just an infant) to Computer Science. Computer scientists were interested in formal languages because they allowed algorithms to be described precisely for execution by a computer. The complexity of these algorithms rapidly increased and correctness became an issue. How could one be sure that the instructions for achieving some computational objective would realize their intended aim? Since formally defined algorithms could be construed as mathematical entities, it seemed that certainty might best be realized by a mathematical (i.e. a deductive) proof. Unfortunately these proofs themselves turned out to be even more complex than the programs whose correctness they were intended to show, and the questions then arose of how to facilitate the construction of these proofs and how to be sure that the proofs were correct. The computer itself provided one answer to these questions. The computer could assist in the construction of the proofs, and since it is in the nature of formal proofs that their correctness can be checked mechanically, it could also check the proofs. In this way, that part of Computer Science which was particularly interested in ensuring that computer programs are correct was drawn into the use of formal notations, formal deductive systems and the automation of reason.
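
The claim that the correctness of a formal proof can be checked mechanically is worth illustrating. The following is a minimal sketch of my own, not the checking kernel of any actual proof tool: formulas are represented as nested tuples, the only inference rules are citation of an axiom and modus ponens, and the checker simply verifies that each step is justified.

    # A toy proof checker, for illustration only; the representation of
    # formulas and the two-rule system are hypothetical simplifications.
    def check_proof(axioms, proof):
        """Return True iff every step of the proof is justified.
        Each step is (formula, rule, refs), where rule is "axiom" or "mp"
        and refs indexes earlier steps."""
        derived = []
        for formula, rule, refs in proof:
            if rule == "axiom":
                ok = formula in axioms
            elif rule == "mp":             # from A and A -> B, infer B
                a, imp = derived[refs[0]], derived[refs[1]]
                ok = imp == ("->", a, formula)
            else:
                ok = False
            if not ok:
                return False
            derived.append(formula)
        return True

    # A three-step proof of q from the axioms p and p -> q.
    axioms = {"p", ("->", "p", "q")}
    proof = [
        ("p",              "axiom", ()),
        (("->", "p", "q"), "axiom", ()),
        ("q",              "mp",    (0, 1)),
    ]
    print(check_proof(axioms, proof))      # prints True

However long a formal proof becomes, checking it is bookkeeping of this kind, which is why a computer can be trusted to do it.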

Meanwhile another branch of Computer Science developed which sought to program computers so that instead of merely doing large scale drudgery, they exhibited some kind of intelligence. This was the field of Artificial Intelligence, and later the related interdisciplinary enterprise of Cognitive Science. Within these disciplines a variety of approaches to the problem were pursued, but despite this variety a single important cleavage was captured by the contrast between ``neats'' and ``scruffs''. The project around which this book is constructed may be thought of as leading towards an extremely ``neat'' AI, so this distinction can be helpful in initial sketches of the project.

The distinction between neats and scruffs may be seen through the contrast between ``connectionist'' AI and methods based on theorem proving. The connectionist approach to AI approximates the human brain by simulation of a neural network using highly parallel processing. The idea is to devise a network which is capable of learning and then to teach it the required skills. If such a system achieved a competence in a complex deductive science it would do so indirectly. Such competence would be an application of its general intelligence, not a source or means of attaining that intelligence.
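
As an illustration of the connectionist idea, the following sketch (my own toy example, not any particular connectionist system) trains a single artificial neuron to reproduce the logical OR function from examples; the behaviour is learned from data rather than given as an explicit rule. The data set, learning rate and number of passes are arbitrary choices for the illustration.

    # A single perceptron trained by the classic error-correction rule.
    def train_perceptron(examples, passes=20, rate=0.1):
        w = [0.0, 0.0, 0.0]                      # [bias, w1, w2]
        for _ in range(passes):
            for (x1, x2), target in examples:
                output = 1 if w[0] + w[1] * x1 + w[2] * x2 > 0 else 0
                error = target - output
                w[0] += rate * error
                w[1] += rate * error * x1
                w[2] += rate * error * x2
        return w

    # Teach the neuron logical OR from its truth table.
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w = train_perceptron(data)
    print([1 if w[0] + w[1] * x1 + w[2] * x2 > 0 else 0 for (x1, x2), _ in data])
    # prints [0, 1, 1, 1]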

``Neats'', on the other hand, think more like Leibniz. For them the intelligent capabilities are built on a foundation which includes a formal language for representing propositional knowledge and a deductive inferential capability. This does not necessarily mean that they are aimed at different problem domains from the ``scruffs''. Neats may seek systems capable of ordinary language discourse and common sense reasoning, but still approach this on logical foundations which would be unfamiliar to most people.
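
By contrast, a ``neat'' foundation of the kind just described can be illustrated by a few lines of forward chaining over explicitly represented facts and rules. Again this is only an illustrative sketch of my own; the rule format and the example facts are hypothetical.

    # Forward chaining over Horn-clause-like rules: (premises, conclusion).
    # Knowledge is represented explicitly and new facts are derived by a
    # mechanical deductive step.
    def forward_chain(facts, rules):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if set(premises) <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    rules = [
        (("rain",), "wet_ground"),
        (("wet_ground",), "slippery_path"),
    ]
    print(sorted(forward_chain({"rain"}, rules)))
    # prints ['rain', 'slippery_path', 'wet_ground']

Here the competence is built directly from the representation and the inference step, rather than emerging from training.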

The neat/scruffy distinction is concerned with means rather than ends. There is a related distinction which concerns ends rather than means. This is the distinction between taking the aim of a research programme to be the replication of some aspect (or the whole) of human intelligence, and giving some other characterization of the objective of the research, independent of the methods or competence of human beings.


2.3.3 Automation and Intelligence

We arrive then at my first characterization of the project which this philosophical dissertation seeks to underpin.

The aim of the project is a comprehensive formalization of the deductive aspects of philosophy, mathematics, empirical science and engineering design.

2.3.4 Some Distinctions

Developments since Leibniz and Carnap give us better insights into the extent to which their projects are realizable. The diversity of related research which has been undertaken enables us to discriminate between many different but similar kinds of project, and to identify more precisely the kind of projects which Leibniz and Carnap might have entertained had they been working today.


2.4 The Philosophy

Here are the bare bones of a philosophical framework to underpin ``the project''.

The philosophy is positivistic, insofar as it delineates a particular approach to empirical science and its application, which, it is suggested, may in some ways be preferable to extant methods. In this I am not being prescriptive; I do not assert that science should be conducted in this way.

The framework is, in the first instance, conceptual. This consists to a large extent in the choice of particular meanings for terms which have a long history of shifting usage. Some of the most fundamental of these come in pairs as dichotomies, i.e. disjoint concepts which together exhaust some significant larger domain. The most fundamental of these is the distinction between logical and empirical truths. Simple though it might appear to be, it is a source of endless philosophical controversy, and it will therefore be necessary to consider in detail the evolution of this distinction over its entire history and come to as precise an understanding of this fundamental cleavage as possible.

Such fundamental concepts are not mere accidents of language, and are one element of the philosophical framework which we may think of as constituting a kind of metaphysics.
