
10. Back at Work: 1977-1998


10.1 Expert Systems in Decision Support - 1982

I wrote this paper while working on microcode architectures for mainframes (CME), when I had decided it was time to move on. The paper was never published, never even submitted for publication so far as I recall, but did help to get me a job in ICL working with expert systems.

10.2 Knowledge Bases - 1985

In 1985 I moved from expert systems (not impressed) to database management software. While working professionally on relational database software I thought privately about knowledge bases.

This topic combines an interest in AI with one in foundations, which is typical of my interests. I thought up an architecture and made some steps toward implementing it on an ICL PC (with a Z80 processor). The starting point for this is the idea that intelligence involves reflexive reasoning, so an artificial intelligence should be a self-applicative piece of software. To make it easy to reason about itself, this should be in a nice functional language, and at this time the state of the art in implementation of functional programming languages (embodied in Miranda) used combinators. So the knowledge base is a combinator, part of which is a program which takes the knowledge base (including itself, of course) as an argument and computes a new value for the knowledge base. Actually, it was to be a worm, not overwriting the database but continually adding new versions of it to a list, the space cost being mitigated by the sharing one gets for free in functional systems.
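To make the shape of the idea concrete, here is a minimal sketch in Haskell rather than in Miranda or raw combinators; the names (KB, step, history and so on) are illustrative inventions of this sketch, not taken from the original design.

    -- A knowledge base pairs some content with an update program which
    -- takes the whole knowledge base (itself included) as an argument.
    data KB = KB
      { facts  :: [String]     -- whatever the base currently records
      , evolve :: KB -> KB     -- the self-applicative update program
      }

    -- One step of evolution: apply the base's own program to itself.
    step :: KB -> KB
    step kb = evolve kb kb

    -- The "worm": never overwrite, just keep accumulating versions.
    -- Unchanged parts of successive versions are shared, not copied,
    -- which is the space saving one gets for free in a functional setting.
    history :: KB -> [KB]
    history = iterate step

    -- A trivial update program which just records how many steps have run.
    example :: KB
    example = KB ["initial fact"] grow
      where grow kb = kb { facts = ("step " ++ show (length (facts kb))) : facts kb }

    main :: IO ()
    main = mapM_ (print . facts) (take 4 (history example))

Running this prints the first four versions of the base, each extending the last.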

The foundational element comes in when you start considering the combinator which is the knowledge base not as a data structure but as a proposition, and want to take the computations which update it not as mere computations but as inferences. So what kind of a logic do we want for this? Well, the self-applicative bit made me think that this had to be a type-free system. It was going to be reasoning about a function which was to be applied to something which contained itself. The obvious answer to this need seemed to be combinatory logic, about which I knew next to nothing. So that's how I got interested in combinatory logic, and how I resumed thinking about the foundations of mathematics.

Somewhere in this period I went to a workshop at Appin in Scotland which was sponsored by ICL and was on persistent databases, so I wrote the paper ``Persistent Applicative Heaps and Knowledge Bases'' as my offering (which was a bit irrelevant since I was really there because ICL was paying). I also got a slot to talk. After hearing the tenor of the papers I sensed a consensus with which I disagreed, which was that languages with persistent storage had to have dynamic type systems. I decided not to talk about my paper at all, but instead to argue, on the fly as it were, the falsity of this common ground. The argument was along the following lines. We would expect none of these programs to raise type errors during execution. We would expect this to be provable. It must therefore be statically determinable, and this static determination could be built into a type system. Of course, I don't really remember the details of my argument; my memory isn't that good. But I still think there is a case that this is possible. Possibly not with a decidable type system, but the main point was not to advocate a static type system but simply to challenge the assumption that a dynamic type system was essential. My talk caused an uproar and I loved it. As at so many conferences and workshops in computing, the average talk went by with very little if any comment and no serious discussion. I livened up the proceedings considerably.

Some of the academics talked sympathetically to me afterwards, clearly thinking I had been through a terrible ordeal which I must be regretting. I wasn't; I was very pleased with it all. Not that the discussion was wonderful: I only had something like twenty minutes, and the discussion had to be curtailed. But my presentation itself was quite brief, and it caused instant uproar. In fact, discussion had to be curtailed simply so that I could complete the presentation of my argument (most people wanted to object without actually hearing it).

It's not that I crave infamy, but it was all pretty stodgy up 'til then.

10.3 Creative Foundations - 1986

Having decided that combinatory logic was the appropriate foundation system to use in a ``knowledge base'', I was disappointed to discover that combinatory logics, when they were consistent, were weak. I spent some time trying to devise a consistent and strong illative combinatory logic.

The first red herring on this trail was the idea that the coding of recursive functions in combinatory logic could be exploited to yield a strong system. No-one understood the idea, not even well enough to tell me why it was a duff one. I wrote two essays before I figured out what the problem was. The first was ``Logical Foundations and Formal Verification'', which was presented at an Ada Verification workshop in South Carolina, but possibly not included in the proceedings. The second was ``Creative Foundations for Program Verification'', which I thought a really neat paper, and which was published in the proceedings of Milcomp86, a military computing show with a pretence at being a conference.

It wasn't till I had some conversations with Peter Aczel and tried to explain it to him that the penny dropped for me. He asked me what the proof theoretic strength of the system was, and I said it didn't really have one. He also spent some time explaining proof theoretic strength to me, and explaining some of his systems to me. However, I was looking for something particular, and I think he took offence at my saying that his systems were not quite what I was looking for. I think he was thinking of me as if I were one of his students, and students it seems don't do that kind of thing. I think they are supposed to be more like disciples.

This was for me, I believe, a very instructive error, though it's still hard to explain exactly what the error was (mainly because it's hard to make clear the rationale which made me think that the ``creative theory'' was a foundation system).

After this I continued to think about reflexive foundation systems, particularly in combinatory logic, and went down quite a few blind alleys. This only stopped when I got deeper into well-founded set theories and concluded that what they offered was really the neatest kernel of the foundational problem, and that reflexivity if thought desirable could be done somewhere other than the foundations.

I now think that reflexive foundations, if we really want them, may be best approached via set theories with a universal set. These contain well-founded sets as well as ill-founded ones of course, and it seems that you can make them as strong as you like by talking about the well-founded part.

10.4 Well Founded Foundations - 1988

In 1988 I was for a while assigned as ICL's representative (and the secretary) on the VDM standardisation panel. VDM was a formal specification language with a kind of domain theoretic semantics, which came from Brian Monahan. I had an agenda which I had been pressing for a couple of years which said that specification languages should be foundation systems, and that ideally foundation systems should be reflexive. Well, VDM and Z were both well-founded specification languages, but the former was not really much of a foundation system so far as I could see. Ontologically it did have a hierarchy of types progressing through a function space constructor. But the function spaces were spaces of continuous functions, good enough for modelling higher order computable functions, but not for mimicking higher order logic. So VDM looked to me rather like a first order logic, an elaborate version of LCF, too weak to do serious mathematics. I don't know whether I was right about this; possibly not.

Anyway, the problem of finding a foundation system which was suitable both for Z and for a strong VDM seemed interesting. It seemed to me that the key problems to be solved, relative to the known well-founded foundations such as ZFC and HOL, were the need for polymorphism and the need for structuring. Structuring relates both to local definitions (which don't work in HOL in the way that they do in ML, and in the way one needs them to work in any serious specification language) and to support for modularity.

The game plan for this work was to work from semantic models in Cambridge HOL. First axiomatise ZFC in HOL (not worrying about the fact that the result would be strictly stronger than the first order version). This gives a rich ontology within which ontologies of the desired systems could be constructed. Then three theories were to be constructed in turn. The first was to change the ontology from an ontology of sets to one of functions. This is done, roughly, by taking the hereditarily functional sets from the domain of ZFC in HOL. There were some tweaks to this to make the resulting theory of pure functions extensional. The way I worked with this was to define a new type based on these pure functions, define various operations over these functions, think up an axiomatisation of the theory and prove the ``axioms''. The adequacy of the ``axioms'' could then be established by doing the reverse construction, going back to the theory of sets by using a subset of the pure functions which hereditarily mimics the sets of ZFC in HOL. I didn't do the last bit, and I didn't complete the proofs of the proposed axioms.
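For what it is worth, one plausible reading of ``hereditarily functional'' (this is my own gloss in ordinary set-theoretic notation, not necessarily the definition used in the HOL development) is that a set is hereditarily functional if it is a single-valued set of ordered pairs whose components are themselves hereditarily functional; by well-founded recursion:

    \mathrm{Fn}(f)  \equiv  (\forall p \in f.\ \exists x\,y.\ p = \langle x, y \rangle)
                    \wedge  (\forall x\,y\,z.\ \langle x, y \rangle \in f \wedge \langle x, z \rangle \in f \Rightarrow y = z)

    \mathrm{HFn}(f) \equiv  \mathrm{Fn}(f) \wedge \forall \langle x, y \rangle \in f.\ \mathrm{HFn}(x) \wedge \mathrm{HFn}(y)

The pure functions would then be the sets satisfying \mathrm{HFn}, before whatever tweaks are needed for extensionality.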

The next two theories were the theory of polymorphic functions, and that of ``structured functions''. A polymorphic function was a function over the pure functions, in which the functions in the domain are interpreted as assignments of types to type variables. So they are families of functions indexed by ``type'' assignments. I should mention that this is all really ``type free''. I'm heading for something like an illative combinatory logic except that it isn't really reflexive.

There is just an illusion of reflexiveness of the kind you get in a polymorphic type system. To explain this another way, the idea is like HOL without the types. It's like HOL in that everything is in the semantic domains, all the logical connectives and quantifiers included, unlike first order logic, in which there is a distinction between terms and formulae and only the terms denote values in the semantic domain (the domain of an interpretation). To do this in HOL without running into consistency problems you have the type system. In the proposed system you don't actually have types; consistency is realised through well-foundedness. There is no type system, but the well-foundedness gives you the same problems that you get with a type system, which is why we still have polymorphism. The problem here is the non-existence of functions you would like to have, like the identity function, the equality predicate, and of course the quantifiers. So we settle instead for families of functions, e.g. the family of identity functions each restricted to some domain, or the family of restricted quantifiers.
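To illustrate the kind of family intended (the notation here is mine): in a well-founded ontology of functions there can be no function defined on absolutely everything, so in place of a single identity function or a single universal quantifier one has one for each pure function A regarded as a domain, roughly

    \mathrm{id}_A   =  \{\ \langle x, x \rangle \mid x \in \mathrm{dom}\,A\ \}

    \forall_A\,p    =  \top \ \text{ iff }\ p(x) = \top \text{ for every } x \in \mathrm{dom}\,A

Polymorphism is then a matter of working uniformly with such families rather than with the single unrestricted function that cannot exist.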

The ``structured functions'' were polymorphic functions defined in contexts where various identifiers are thought of as external, perhaps because they are defined in some other module, perhaps because they are defined in some local definition. The value of the polymorphic function depends upon the values of some identifiers in the relevant context. The context is again represented by a function, which is understood as a map from names to values.
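A second small Haskell sketch, again with invented names, of how this might fit together: a polymorphic value is indexed by a type assignment, and a structured value is additionally abstracted over a context mapping the external names to their values.

    import qualified Data.Map as M

    type Name  = String
    type Value = Int                  -- stand-in for whatever values are in play
    type TyEnv = M.Map Name String    -- an assignment of "types" to type variables
    type Env   = M.Map Name Value     -- the context: external names and their values

    type Poly a       = TyEnv -> a    -- a family indexed by type assignments
    type Structured a = Env -> Poly a -- additionally dependent on the context

    -- Example: look up the external identifier "x" and the type assigned to "a".
    exampleS :: Structured (Maybe Value, Maybe String)
    exampleS env tys = (M.lookup "x" env, M.lookup "a" tys)

    main :: IO ()
    main = print (exampleS (M.fromList [("x", 1)]) (M.fromList [("a", "num")]))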

To make this work you need some constraints on the function spaces which are used to represent the polymorphic and structured functions. I didn't progress this work far enough to be sure what the necessary conditions are.

This work petered out for several reasons. Firstly, there was an influx of theoreticians from Denmark into the standardisation process, sponsored by Dines Bjorner, who had their own strong sense of what should be done with the semantics of VDM, which was quite different from my ideas. Secondly, I was taken off the VDM work and John Dawes was put back (for reasons nothing to do with me). Thirdly, doing all this formally would have been too large an undertaking. This was one of a long series of formal experiments from which a little is learnt but which could not hope to be completed with the kind of effort available.

The idea of making novel well-founded foundation systems by filtering the sets of ZFC was, however, one which I think is quite fun. I have also tinkered with category theoretic foundations in the same vein.

10.5 Stealing Proof Theoretic Strength

Before I gave up on non-well-founded foundation systems, the method I was thinking in terms of was stealing proof theoretic strength from ZFC. The idea is to start with a weak illative combinatory logic which includes a weak arithmetic with standard semantics (i.e. the numbers really are the natural numbers). Then you strengthen it by adding more arithmetic truths; one obvious contender is the one which says that ZFC is consistent. There is a result of Feferman's which isn't proved in this context but which one could hope to make work here. It says roughly that if you add to Robinson's Q an axiom stating the consistency of some other first order theory then that theory becomes interpretable in the extension to Q. Consequently adding the consistency claim gives you all the proof theoretic strength of the system you assert consistent (plus a bit).
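To put the hoped-for result symbolically (this is my rendering; the classical form of Feferman's theorem is usually stated over a base like PA rather than Robinson's Q): for a recursively axiomatised first order theory T,

    T \ \text{is interpretable in}\ \mathsf{Q} + \mathrm{Con}(T)

The ``plus a bit'' comes from the fact that the extension also proves \mathrm{Con}(T), which T itself (if consistent and strong enough for Gödel's second theorem) does not.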


10.6 Embedding Z in HOL - 1990-1993

This was one of the few good ideas I had which actually got implemented, because I got a team at ICL to do it! The result was ProofPower. Not a startling idea, obvious but effective.

The whole story is of course rather complicated; some of it is covered in [Jon10b,Jon10a].

Roger Bishop Jones 2016-01-07