8. Language Planning

In his intellectual autobiography [Car63a] Rudolf Carnap provides, as well as a sketch of the development of his thinking, a section covering ten important topic headings under which his ideas, aspirations, and accomplishments in these areas are discussed. One of these topic areas is ``language planning''.

Under this heading Carnap mentions two different but related problems, the problem of constructing ``an auxiliary language for international communication'', and that of constructing ``language systems in symbolic logic''. In both cases he credits Leibniz's previous contributions.

Perhaps because different aspects of the latter problem were a major part of his work and were addressed under different headings (such as logical syntax and semantics), he devoted most of the space under this heading to non-symbolic languages.

He mentions the shift in his perspective from the universalist conceptions of logic found in Frege and Russell to his own pluralism, and his concern with the difficulties arising in the choice of languages for science and the systematization of science in a pluralistic context.

Though Carnap devoted much time to devising and describing methods for defining languages, their semantics and deductive systems, the problems which might arise from the use in science of a plurality of such formal languages were not addressed.

Since his day, as a result of the invention of the digital computer, there has been an explosion in the use of formal languages and of other kinds of representation of data or knowledge.

If we wish to reason using the information held in such representations, then we embrace a pluralism broader than Carnap could have conceived, and something like Carnap's problem of ``language planning'' in its formal aspect becomes pressing.

This problem is considered here as a prelude to the discussion of architectures for formal analysis in which this diversity of representation can be encompassed without compromise to logical coherence.

8.1 Languages, Notations and Representations

The support of rigorous deductive reasoning is of the essence both in the projects of Leibniz and Carnap and in the successor we are considering.

Leibniz's conception of how propositions should be expressed in order for his calculus to operate was radical even by today's standards. The form of proposition was the subject-predicate form, of which there were four variants, and in which the subject and predicate were each represented arithmetically as a pair of co-prime natural numbers. His calculus involved quite simple arithmetic operations on these numbers to determine whether the proposition was true or false.
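To make the flavour of the proposal concrete, here is a toy reconstruction, written in Lean 4 purely as a convenient vehicle. The divisibility test shown is one common reading of Leibniz's scheme for the universal affirmative; the test and the particular numbers are illustrative assumptions, not a claim about Leibniz's actual rules.

    -- Toy reconstruction: each term is assigned a pair of coprime
    -- characteristic numbers, and (on one common reading) "every S is P"
    -- is judged true when each of P's numbers divides the corresponding
    -- number of S.
    structure CharNum where
      pos : Nat
      neg : Nat

    def everyIs (s p : CharNum) : Bool :=
      s.pos % p.pos == 0 && s.neg % p.neg == 0

    -- With "animal" as ⟨2, 3⟩ and "man" as ⟨4, 9⟩ (each pair coprime, and
    -- man's numbers multiples of animal's), "every man is an animal"
    -- comes out true.
    #eval everyIs ⟨4, 9⟩ ⟨2, 3⟩   -- true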

The merits and limitations of this proposal are not our present concern; it is mentioned to emphasize the very broad limits on the kinds of representation and proof which are open to consideration.

By contrast with Leibniz, our aim is to admit into our architecture a very broad range of representations, rather than devising one representation suited to the calculus and expecting all knowledge to be coded in that representation so that the calculus can be applied.

The general plan is that any representation be admissible provided only that it has a definite significance, preferably a formally specified significance or semantics.

8.2 Universalism and Pluralism

Frege and Russell, pioneers of the formalization of mathematics, each conceived of a single language in which the whole of mathematics might be expressed and its theorems proven.

Frege's formula has been paraphrased:

Mathematics = Logic + Definitions

This may be read as asserting that, given a suitable logic, the whole of mathematics can be developed simply by writing down formal definitions of the concepts of mathematics, and then proving the required mathematical theorems involving those concepts.
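A minimal sketch may make the pattern vivid. In the following fragment of Lean 4 (used here merely as a convenient formal system, not as Frege's own logic, and assuming a recent toolchain in which the ``omega'' arithmetic tactic is available) a concept is introduced by definition alone, and theorems involving it are then obtained purely by deduction, with no new axioms.

    -- A concept introduced by definition...
    def IsEven (n : Nat) : Prop := ∃ k, n = 2 * k

    -- ...and theorems about it derived by proof alone.
    theorem even_four : IsEven 4 := ⟨2, rfl⟩

    theorem even_add {m n : Nat} : IsEven m → IsEven n → IsEven (m + n) :=
      fun ⟨a, ha⟩ ⟨b, hb⟩ => ⟨a + b, by omega⟩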

There is no reason in this scheme even to contemplate the possibility of making use of more than one language, but if we do, problems immediately arise. The simplest is that the idea of a formal deductive system is such that deductions take place exclusively in one language. The deduction rules which govern the structure of proofs in such a system only allow premises within a single language to be used in a proof. The effect of this is to ensure that if more than one language is used, the results obtained in one language cannot be used in the proof of results in another language. This kind of awkwardness is not a feature of informal mathematics and would represent a serious and avoidable flaw in a formal approach to mathematics.

Rudolf Carnap was inspired by Bertrand Russell to adopt a ``scientific'' approach to philosophy, and hoped to do for science what Russell had done for mathematics, i.e. to establish methods suitable for the formalization of science. This was to be done by adapting the new logical methods to the domain of empirical science.

There was in this an immediate difficulty, arising from the necessity of making empirical claims. Concepts applicable to the material world cannot in general be defined in terms of purely logical concepts. It does not seem that one can proceed by analogy with Frege and plausibly argue that:

Empirical Science = Logic + Definitions

Something more seems to be needed, and it is natural to suppose that this is a language which goes beyond logic by including concepts which are empirically significant.

So it appears that Carnap's enterprise compelled him to consider new languages. Even before this, Carnap was already, by his own autobiographical account, a pluralist with respect to language (though he might not at that time have described himself in those words). His pluralism at that time was primarily in relation to distinct ways of talking about the world which were associated with different metaphysical postures. The clearest example of this is the distinction between the materialistic language of science, in which material objects are the subject matter of the language, and the phenomenalistic language of empiricist philosophers going back to Hume, who confine themselves to talking not about material objects but about the sensory evidence which we have of the material world.

It is characteristic of Carnap's quite original attitude to metaphysics that he was happy to talk in either of these languages even though their proponents took them to be representative of incompatible views on metaphysics. Carnap regarded the underlying metaphysical issues as meaningless, and took a pragmatic attitude to the use of the languages, eschewing any metaphysical commitment that their use might be supposed to entail. His pluralism, later made explicit in his ``principle of tolerance'', was to the effect that any language can be adopted, subject to pragmatic considerations; it rejected the metaphysical dogmatists who asserted the legitimacy of only that language which embodied their preferred metaphysics.

At the same time as Carnap enrolls himself in what he thinks of as Russell's program, in the 1920s, the high noon of the Vienna Circle, Hilbert, at the center of a group of mathematicians in Göttingen, is moving forward with a different conception of how to formalize mathematics, and Carnap, eager to absorb all the new developments in logic which might be relevant to his project, is paying attention.

Hilbert's approach to formal mathematics, though fully embracing the new logical methods and pressing them forward, is distinct from the universalistic conceptions of Frege and Russell and harks back to the axiomatic method first enunciated in Euclid's Elements. Hilbert had already applied the new methods to the axiomatic theory of geometry, achieving standards of rigour which for the first time surpassed those found in Euclid's geometry. This method envisages a special language for each branch of mathematics in which the primitives are ``implicitly'' defined by systems of axioms.

This is a substantial break with the ideas of Frege, who was very fussy about definitions. It is essential in the development of mathematics to pay close attention to definitions, since making use of a definition which is incoherent invites paradox and invalidates all proofs which depend on it. Risks are also associated with undertaking definitions on a piecemeal basis (defining a function first over one domain, and then over some other domain), because of the possibility that these partial definitions might conflict, or that the partiality itself might result in unsound reasoning (Frege had no methods for reasoning about partial functions).

8.3 On the Need for Synthetic Propositions

Carnap's pluralism had several sources. One was his metaphysical agnosticism, which made him disinclined to reject languages which embodied objectionable ontologies. A second was the gradual emergence after Frege's Begriffsschrift, first of direct alternatives (serving essentially the same purpose) such as Russell's Theory of Types and then of a diversity of formal languages serving a variety of purposes, such as modal logics. The third was the desire to have formal languages in which synthetic propositions could be expressed.

We will consider language planning here without addressing the last of these three concerns, confining ourselves to the problem in relation to languages in which only analytic propositions can be expressed.

[There is a problem of order here. I had material in the chapter on the architecture of knowledge which explains why and how we can do without formalization of synthetic propositions. I have moved it here, but it probably doesn't fit yet. ]

To give a relatively concrete starting point to an otherwise highly abstract process I begin by describing an architecture which meets many of the more basic requirements, and in terms of which further requirements might be made intelligible. This is an architecture for the support of interactive proof in a logical foundation system. In many respects this architecture answers to the needs of Carnap's project, but falls short of what Leibniz envisaged. Before describing that architecture we describe the idea of a logical foundation system for mathematics.

8.4 Logical Foundation Systems

In describing the kind of architecture here proposed, it is useful to begin by describing certain kinds of system which are already in use and which provide a starting point from which the proposed system may be regarded as a further development.

The kind of system I have in mind here is an ``interactive theorem prover'' for a ``logical foundation system''. In this section I describe the idea of a logical foundation system; in the next, the structure of a typical interactive theorem prover working with a logical foundation system; the proposed architecture is then approached as a transformation of that starting point.

The idea of a logical foundation system begins with Frege, and refers to the kind of logical system which is needed for his logicist project of reducing mathematics to formal logic.

A logical foundation system is a formal language, preferably with a well-defined semantics, and with a deductive system, which has sufficient semantic expressiveness and deductive strength that in it the concepts of mathematics may be defined and the theorems of mathematics derived.

From the point of view of establishing Frege's logicist thesis the system has to be in some sense ``purely logical'', and many contemporary philosophers doubt that any such system can serve as a foundation for mathematics. From our present point of view, however, this aspect is inessential; it is not necessary for us to enter here into the question of what is a logical truth. The important part is that the system, whether purely logical or not, be suitable for defining mathematical concepts and for deriving mathematical theorems from those definitions.

In speaking here of ``definition'' we do not mean mere syntactic abbreviation, but the introduction of new constants into the language satisfying certain defining conditions. This does not include implicit definition by arbitrary axiomatic extension as advocated in Hilbert's modern conception of the axiomatic method, though the primitive constants of the system will effectively be defined in that manner. It is required that the development of mathematics be possible by definitions which are conservative extensions of the logical system, and for which the system can reliably confirm that the proposed extension is conservative, and hence that the extended system will be consistent if the system being extended was.
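The contrast can be illustrated with a small Lean 4 sketch (illustrative only, and not tied to any particular foundation system). A definitional extension introduces a new constant as an abbreviation for an existing construct and is therefore conservative; an arbitrary axiom in the Hilbertian style is accepted unchecked, and a badly chosen one destroys consistency.

    -- A conservative, definitional extension: "double" abbreviates an
    -- existing construct, so nothing new becomes provable about the old
    -- vocabulary, and consistency is preserved automatically.
    def double (n : Nat) : Nat := n + n

    theorem double_two : double 2 = 4 := rfl

    -- Implicit definition by arbitrary axioms is not checked for
    -- conservativeness; this deliberately bad axiom makes the extended
    -- system inconsistent.
    axiom shrink : ∀ n : Nat, n + 1 = n

    example : False := absurd (shrink 0) (by omega)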

The axiomatic set theory ZFC is a logical foundation system in the terms described above (though the means of conservative extension are not normally regarded as part of ZFC itself). Higher order logic as formulated by Alonzo Church in his Simple Theory of Types (STT) is another foundation system which has proved popular for applications of formal mathematics in Computer Science, because of its successful implementation in a number of interactive theorem provers.

8.5 Interactive Theorem Provers

There has been continued research on the use of computers for proving theorems since shortly after the invention of digital computers. Software for theorem proving can be divided into two kinds which are called ``automatic theorem provers'' and ``interactive theorem provers'' respectively.

An automatic theorem prover is one which is supplied with a description of the result to be proved and searches for a proof without further involvement of its user. This might be a theorem prover for first order logic which is supplied with a goal to be proven and some premises from which to prove it, and is then expected to come up with a proof in first order logic. Generally, the user would provide additional information influencing the way in which the prover undertakes the search for the proof, but the search would not itself involve any further interaction with the user.
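By way of a small illustration (not a first order prover, but the same division of labour), Lean's ``omega'' decision procedure for linear arithmetic can be handed premises and a goal and left to find the proof without further interaction:

    -- The user supplies premises and goal; the procedure searches for a
    -- proof entirely on its own.
    example (x y : Nat) (h1 : x ≤ y) (h2 : y ≤ x) : x = y := by
      omega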

From the point of view of general mathematics, and especially from the point of view of applications in information systems engineering (say in software or hardware verification), an automatic prover may not be suitable because of the complexity of the context in which the proof must be conducted, or simply because the required theorem itself may be so complex that no completely automatic proof is feasible within the current state of the art.

For these situations the use of an ``interactive theorem prover'' may be preferable. Such a system presumes that overall control of the construction of the proof is in the hands of its user, who will himself determine the gross structure of the proof, with the computer assisting by filling in detail. Interactive theorem provers usually incorporate automatic proof techniques, and may interface with automatic provers to obtain the kinds of proof which are within their scope. They increasingly act as integration platforms for diverse, more narrowly focussed proof automation facilities, permitting a single problem to be solved by an appropriate mixture of methods.
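A small Lean 4 example may illustrate the interactive style (it is purely illustrative; the lemma names assumed here, List.cons_append and List.length_cons, are from Lean's standard library). The user determines the gross structure of the proof, here induction on the first list, and automation such as ``simp'' and ``omega'' fills in the detail of each subgoal.

    -- The user chooses induction; automation discharges the subgoals.
    theorem length_append (l r : List Nat) :
        (l ++ r).length = l.length + r.length := by
      induction l with
      | nil => simp                -- base case closed automatically
      | cons x xs ih =>
        simp only [List.cons_append, List.length_cons, ih]
        omega                      -- the remaining arithmetic is automatic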

The fact that the user has control over the proof architecture means that problems on a greater scale of complexity can be tackled by an interactive theorem prover, typically limited only by the amount of human effort needed to direct the proof and progressively reduce the problem to portions modest enough to be undertaken automatically.

Because of the complexity of the specifications in the context of which proofs with an interactive theorem prover are conducted, whether these be simply an aggregation of background mathematical concepts relevant to the particular branch of mathematics in which advances are to be made, or the specification of complex mathematical models of computer software or hardware, an interactive theorem prover must be able to organize a non-trivial collection of interdependent constant specifications. With typical theorem provers for variants on Church's STT this takes the form of a collection of ``theories''.

A theory in this context is like a hybrid between the concept of theory in mathematical logic (which is a set of sentences in some logical language) and the idea of a module in a modular programming language. On the logical side it is the purpose of the theory to collect together the definitions (axioms conservatively extending the logical system) which determine some particular context in which reasoning will take place. The theory (as a data structure managed by an interactive theorem prover) may act as a repository for the theorems which are proven in the context of those definitions. The organization of definitions into a hierarchy of theories also serves to record interdependencies between constant definitions.

On the modularity side, theories serve to control the scope of relevance of definitions, so that a constant is only in scope when all the constants upon which its definition depends are also in scope, and they may control various aspects of the presentation of the formal material to the user, such as details of concrete syntax.
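As an indication of the kind of data structure involved, the following toy model (not the API of any actual prover) pictures a theory as a named record of parent theories, definitions and stored theorems; the names used are hypothetical.

    -- Toy model of a theory hierarchy; names are plain strings for simplicity.
    structure Theory where
      name    : String
      parents : List String   -- theories whose constants are in scope here
      defns   : List String   -- constants defined (conservatively) in this theory
      thms    : List String   -- theorems proved and stored in this context

    -- A small hierarchy: "list" builds on "nat", which builds on "bool".
    def boolThy : Theory := ⟨"bool", [],       ["T", "F", "not", "and"],  []⟩
    def natThy  : Theory := ⟨"nat",  ["bool"], ["zero", "suc", "add"],    ["add_comm"]⟩
    def listThy : Theory := ⟨"list", ["nat"],  ["nil", "cons", "length"], ["length_append"]⟩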

8.6 Theory Hierarchy as Knowledge Base

The interactive theorem prover and its theory hierarchy represent for us the status quo ante: that from which we seek to move forward in our successor to the projects of Leibniz and Carnap.

To see this, consider how it relates to the ideas of Leibniz and Carnap.

Digital computers, sewn together by networks, already act in some respects as the repository of all human knowledge, and thus serve a role similar to that for which Leibniz promoted the development of academic journals and encyclopaedias. They do so in the rather trivial sense that all the very many academic journals which have appeared since the time of Leibniz are available online (for a price), but they also provide completely new ways of aggregating and disseminating scientific and other knowledge, which threaten to displace the now traditional academic journals. Beyond this, computers store an ever growing volume of information about all aspects of our lives and the world around us.

However, this information is mainly stored as data, the significance of which is not itself systematically recorded. It is a database, not a knowledge base.

Though the theory hierarchy of an interactive theorem prover was conceived of, and is implemented, to support only the accumulation of relatively small numbers of definitions and theorems rather than data on a large scale, it is of a very general character. Logically it would suffice for all the purposes for which computers store and manipulate data.

To explore this intermediate ground, let us consider how these systems are in fact used, and make modest extrapolations in the direction of the ambitions of Leibniz and Carnap.

The predominant uses of interactive theorem provers are in the development of purely mathematical theories, or of the kinds of applied mathematics which appear in theoretical computer science, and in the application of such theories to reasoning about various kinds of computer systems, be they software or hardware. The software side is less significant here because software may plausibly be regarded as consisting of abstract entities, computer programs being analogous to mathematical functions. The hardware side is significant because there we venture into reasoning about something concrete rather than abstract: we are using logic and mathematics to reason about the behavior of large-scale electronic artifacts.

This is significant for our conception of Carnap's program. One of the particular problems which Carnap grasped in order to support the application of modern logical methods in science was the extension of those logical methods to languages which speak of the concrete world rather than simply to languages in which one could speak only of abstract entities. This was the spur to Carnap's pluralism, to his interest in the problem of defining languages and their semantics, and in languages in which synthetic propositions could be expressed. He departed from the universalistic perspective of Frege and Russell who sought a single logical language for all purposes, because he saw these universal logical languages as confined to the demonstration of analytic propositions, and hence insufficient for the purposes of empirical science.

But here we see, in the communities developing and applying interactive theorem provers based on logical foundation systems, the application of those tools to real world problems without any adaptation of the underlying logical system for that purpose.

Two considerations contribute to the possibility, even the desirability, of sidestepping this problem addressed by Carnap. The first is that computer scientists considered first the use of these kinds of system for the verification of software, and it is reasonable to think of software as something abstract rather than as some kind of physical entity. For the verification of software there is no apparent incentive to work with languages in which synthetic propositions can be expressed.
