
2. PHILOSOPHY AND ONTOLOGY

2.1 Philosophical positions

Any formal foundation system for mathematics is necessarily connected more or less intimately with some philosophical position upon the nature of mathematics.

The three principal 'schools' of philosophy of mathematics in the twentieth century have been logicism, intuitionism, and formalism.

Logicism, of which Bertrand Russell was one of the principal proponents, is the thesis that the whole of mathematics is ultimately reducible to symbolic logic. In "The Principles of Mathematics" [Rus03] (the manuscript of which was completed on the last day of the 19th century), Russell states that: "The fact that all Mathematics is Symbolic Logic is one of the greatest discoveries of our age; and when this fact has been established, the remainder of the principles of mathematics consists in the analysis of Symbolic Logic itself."

Intuitionism, a school of thought most prominently associated with Brouwer, rejects classical mathematics in favour of the more spartan constructive mathematics. According to Bishop [Bis67] an important element of the intuitionist position is that: "every mathematical statement ultimately expresses the fact that if we perform certain computations within the set of positive integers, we shall get certain results". Intuitionists reject some of the principles of classical logic, notably the law of the excluded middle.

Formalism, a doctrine and a programme due to Hilbert, is characterised by the view that classical mathematics may be established by formal derivation from plausible axioms, provided that the consistency of the formal axiomatisation is established by "finitary" or "constructive" means.

Of these positions only the intuitionist position has survived intact to the present day, though it remains a position which the majority of working mathematicians find unacceptable.

The logicist position failed to be established primarily because two of the principles (axioms) necessary for the development of classical mathematics are difficult to establish as principles of logic. Neither the axiom of infinity nor the axiom of choice can be convincingly shown to be logically necessary propositions.

The formalist programme was shown to be unachievable by Kurt Gödel, who demonstrated that classical mathematics is not completely formalisable, and that no formalisation of arithmetic can be proven consistent by finitary means [God31].

2.2 Ontology

Associated with each of the philosophical positions outlined above are underlying ideas on the population of the universe of mathematics, on what, as far as mathematics is concerned, "exists". Logicism and formalism share similar ontologies, since they both aim to provide a foundation for "classical" mathematics. They differ to some degree in the formal system in which mathematics is derived, and differ widely in how the formalisation is to be philosophically justified, but ontologically they are broadly similar.

The underlying ontology is that of a hierarchy of sets, built up in stages from a (possibly empty) collection of individuals. Hatcher describes this in [Hat82] (speaking with reference to the Zermelo-Fraenkel axiomatisation of set theory): " ... the hierarchy of sets we envisage consists of all the sets we can obtain by starting with the null set and iterating, in the indicated manner, our power set and union operations any transfinite number of times."

This process rehabilitates Cantor's informal set theory, after Russell's paradox had shown Frege's formalisation of it to be inconsistent, by restricting abstraction so that no set can be formed until after the formation of all the sets which it contains (this is not the same as requiring that a description of a set may not mention any sets not formed before it). This may be restated as the requirement that the transitive closure of the membership relation is anti-reflexive (and hence that its reflexive closure is a partial ordering on the universe of sets). Though this last condition is not fully adhered to by all the formal foundation systems for classical mathematics (Quine's New Foundations [Qui63] being one counter-example), one of its consequences, that functions may not be members of their own domains, is present in all classical foundation systems of which I am aware.
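In symbols (a restatement added here only for concreteness, writing \in^{+} for the transitive closure of the membership relation), the condition is:

    \forall x.\ \lnot (x \in^{+} x)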

The intuition behind this ontological position is probably attributable to Bertrand Russell. The first attempt to articulate the idea is in [Rus03], and results from Russell's attempts to identify the logical errors which give rise to the paradoxes. When the idea is elaborated in Russell's Theory of Types [Rus08], it is easily confused with the proscription of impredicative definitions, but seems still to be a part of the underlying intuition. Although "first-order" axiomatisations of set theory began without such a clear commitment to a hierarchy [Zer08], in the later axiomatisations now known as ZF and NBG (see [Hat82]) the hierarchy is cleaned up by the inclusion of an axiom of regularity.

Russell's intuition cuts across the intuitions which are encouraged by an acquaintance with digital computers (for which he can hardly be blamed). In considering the behaviour of computers it is perhaps more natural to consider types as ways of interpreting objects. We can consider an object stored in a computer memory at one moment as a data value, and at the next as a program, rule, or function.

In computing we can accept a single countable domain and interpret the members of this domain in terms of types. There appears to be no clear intuitive reason to proscribe applying a rule to itself, and this is practically very useful.

The significance of the problem of self-application of functions has been argued by Dana Scott in [Sco70] and elsewhere. The theoretical underpinning of denotational semantics has required, and resulted in, a resolution of these difficulties within a classical framework. This is done by slimming down function spaces until they are small enough to be isomorphic to the domains over which the functions range. Self-application then requires a non-standard account of the result of applying a function to an argument (one which doesn't require a set, directly or indirectly, to be a member of itself, as proscribed by the axiom of regularity).
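The flavour of this may be conveyed by a loose programming analogy (a Haskell sketch added for illustration; it is not Scott's construction, which works with continuous functions over suitably structured spaces, and all names here are invented):

    -- A domain declared isomorphic to its own function space, making
    -- application type-free enough to permit self-application.
    newtype D = D (D -> D)

    apply :: D -> D -> D
    apply (D f) x = f x

    selfApply :: D -> D
    selfApply x = apply x x       -- a rule applied to itself

    identityD :: D
    identityD = D id              -- selfApply identityD yields identityD

    -- The price of self-application: evaluating this term diverges.
    omega :: D
    omega = selfApply (D selfApply)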

This solution carries rather too much baggage with it to be entirely satisfactory as a foundation system. If we take it to be founded on a first-order axiomatisation of set theory, then we have first implicitly accepted the need for a hierarchy of types. Next, by choosing "first-order" logic, we determine to do mathematics in just the first level of this hierarchy, the individuals. Then we construct a set-theoretic hierarchy within this domain of "individuals", and finally collapse this hierarchy by slimming down function spaces until they become isomorphic with their domains. Having twice accepted a system almost designed to prevent self-application, it is not surprising that some mathematical sophistication is required to construct yet again a type-free notion of function application within this framework.

Even where the formalisation of classical mathematics is not required, as in the work of Martin-Löf [Mar75,82] in formalising constructive mathematics, and that of Constable in constructing formal systems for the verification of programs [Con80], constraining the ontology to be hierarchic seems to have proven necessary to avoid inconsistency.

We choose to start from the beginning with a type-free system. The difficulty here is in giving any "mathematical" respectability to the system. The term "mathematical" is now so strongly associated with classical set theory that an account of semantics which does not ultimately result in denotations in classical set theory is in danger of being considered not mathematical.

In constructing a foundation system our ontological intuitions are crucial, and the indications are that the richness of classical ontologies is incompatible with a natural account of self-application. In settling on a foundation we will first identify our domain of discourse, which we consider an important step in ensuring the consistency of the foundation. Before we do this we will examine more closely the idea of logical truth, since this provides additional motivation for our selected ontology.

2.3 Logical pluralism and conventionalism

Concern for a single foundation system has largely been displaced by a pluralistic attitude to foundations. Logicians and philosophers study different foundation systems on their technical merits without feeling bound to choose between them; mathematicians are mostly able to work in a way that can reasonably be interpreted in any of the classical systems; and computer scientists feel free to adopt or invent any formal system which suits their purposes.

These pluralistic attitudes have a philosophical counterpart in the linguistic-conventionalist account of the status of logical principles. On this account (roughly), a logical truth is true in virtue of the meanings of the terms it contains, i.e. in virtue of accepted linguistic conventions. Pluralism and conventionalism have in common that they seem to support the view that logical truths are not absolute, but arbitrary.

If logical truth were entirely arbitrary, then it would likely be of limited utility. However, we know there to be, underlying the plurality of informal and formal logics, some common principles which we can claim to be absolute logical principles.

These principles are about conformance to rules. We suggest that, whether informally or formally, the idea of logical truth depends upon proof, that essential to the idea of proof is the view that proofs are checkable, and that the method of checking proofs must be effective. There may be doubt about whether a statement has a proof, but given a putative proof of a statement there must be an "effective procedure" for testing whether it is indeed a proof.

We now know many languages within which effective procedures may be described (lambda calculus, combinatory logic, Turing machines, recursive functions, Post productions, ...), and the fact that these have been shown to be equivalent in expressive power gives us a basis for claiming that the notion of effective computability is an absolute one. It is from this that our notion of absolute logical truth derives. The claim that a sentence is a theorem of a formal system is just the claim that a particular effectively computable partial function over sentences yields a token representative of "true" when evaluated on the given sentence.
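By way of illustration, consider a toy formal system (a Haskell sketch; the system and all names are invented for the purpose): its sole axiom is the sentence "T", and its sole rule passes from a sentence s to "I" followed by s. The checking function is total and computable, even though the search for a proof of a given sentence need not terminate:

    type Sentence = String
    type Proof    = [Sentence]            -- a derivation, step by step

    -- The single inference rule: from s infer 'I':s.
    step :: Sentence -> Sentence -> Bool
    step s t = t == 'I' : s

    -- Effective proof checking: total and terminating on every input.
    checkProof :: Proof -> Sentence -> Bool
    checkProof []     _ = False
    checkProof (p:ps) s =
         p == "T"                         -- begins at the sole axiom
      && and (zipWith step (p:ps) ps)     -- each step follows by the rule
      && last (p:ps) == s                 -- and ends at the claimed theorem

    -- checkProof ["T","IT","IIT"] "IIT"  ==  True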

The primitive formalism which we describe below, and hence the various languages which we construct from it, can express and prove just those propositions which indicate the result of applying some effective procedure to some value.

2.4 Neo-constructive ontology

We now identify a domain of discourse and the properties we wish to express over this domain.

Since we are concerned to reason about the properties of computers and their programs, we choose a more or less arbitrary denumerable domain, which we may consider as the collection of values storable in the memory of some ideal computing device. Functions are to be represented by conventions whereby the values in the domain may be interpreted as rules describing some computational procedure. Properties, or predicates, are identified with partial computable functions from the values in the domain into some subdomain designated as representing the truth values.

As an example, we select as a domain of discourse the free combinatory algebra generated from the constants K and S under the binary operation of application. By the use of an embedding of this domain into itself and a reduction process over the terms of the domain we are able to represent all partial computable functions over the domain using elements in the domain.
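This algebra can be rendered directly as a datatype (a Haskell sketch, assuming nothing beyond the text: two generators and closure under binary application; the sketches below build on it):

    -- Terms of the free combinatory algebra generated by K and S.
    data Term = K | S | App Term Term
      deriving (Eq, Show)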

The reduction process is effected by the rules:

u                => u
((K u) v)        => u
(((S u) v) w)    => ((u w) (v w))
(u v)            => (x y)                if u => x and v => y
where u, v, w, x, y are arbitrary values in our intended interpretation, and (u v) is the application of u to v.

=>* is defined as the transitive closure of =>.
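The rules may be animated as follows (a sketch over the Term datatype above; reduce performs one contraction using a leftmost-outermost strategy, one deterministic reading of the rules, and normalise iterates =>, computing along =>* with no guarantee of termination):

    -- One contraction step: the K and S rules, otherwise descend into
    -- the leftmost subterm that contains a redex.
    reduce :: Term -> Maybe Term
    reduce (App (App K u) _)         = Just u
    reduce (App (App (App S u) v) w) = Just (App (App u w) (App v w))
    reduce (App u v) =
      case reduce u of
        Just u' -> Just (App u' v)
        Nothing -> App u <$> reduce v
    reduce _ = Nothing

    -- Iterate => to a normal form where one exists; reduction need not
    -- terminate, e.g. for (x x) where x = ((S i) i) and i = ((S K) K).
    normalise :: Term -> Term
    normalise t = maybe t normalise (reduce t)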

Elements may be encoded into the domain using the rules shown below in section 3.6.

An element u of our domain is considered to satisfy the predicate represented by an element v if the application of v to the encoding of u (which we write 'u') is reducible to K, i.e. if

                    (v 'u') =>* K
The true propositions are those elements of our domain which are reducible to K under the above reduction system.
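In terms of the sketches above this test could be programmed as follows (with the encoding of u as 'u' passed in as a parameter, since it is only fixed in section 3.6; the test is semi-decidable at best, because normalise need not terminate):

    -- Does u satisfy the predicate represented by v, i.e. (v 'u') =>* K?
    satisfies :: (Term -> Term) -> Term -> Term -> Bool
    satisfies quote v u = normalise (App v (quote u)) == K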