# Overview:

 Distilling an essence of type assignment for the lambda calculus.
Pure type systems are a family of typed lambda calculi, each member of which is determined by a triple (S, A, R) where:

- S is a subset of the constants of the system, known as the sorts;
- A is a set of axioms of the form c:s, where c is a constant and s is a sort;
- R is a set of triples of sorts which determines what function spaces can be constructed in the system and what sort each function space inhabits.
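For concreteness, familiar systems from the lambda cube arise from such triples; here is a sketch in Python of two of them (the sort names "*" and "#" and the variable names are my own choices):

```python
# Two members of the lambda cube presented as (S, A, R) triples.
# Sorts are the strings "*" (types) and "#" (kinds, often written as a box).

# Simply typed lambda calculus (lambda-arrow):
simply_typed = (
    {"*", "#"},              # S: the sorts
    {("*", "#")},            # A: the single axiom * : #
    {("*", "*", "*")},       # R: only ordinary function spaces A -> B
)

# Second-order (polymorphic) lambda calculus (lambda-2): the extra rule
# ("#", "*", "*") allows quantification over types.
polymorphic = (
    {"*", "#"},
    {("*", "#")},
    {("*", "*", "*"), ("#", "*", "*")},
)
```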
Each PTS is a formal system in which judgements of the form:

Context ⊢ TypeAssignment

may be derived. These judgements assign a type to some term of the typed lambda calculus in the given context.

# Syntax:

    Term ::= Variable
           | Constant
           | Term Term                        (function application)
           | λ Variable : Term . Term         (functional abstraction)
           | Π Variable : Term . Term         (dependent function space construction)

    TypeAssignment   ::= Term : Term
    Declaration      ::= Variable : Term
    Declaration-list ::= Declaration , Declaration-list | Declaration
    Context          ::= Declaration-list | (empty)
    Judgement        ::= Context ⊢ TypeAssignment

# Proof Rules:

    Axioms:        ⊢ c : s                                if (c : s) ∈ A

    Start rule:    Γ ⊢ A : s
                   ⟹  Γ, x:A ⊢ x : A                      if x is fresh

    Weakening:     Γ ⊢ B : C;   Γ ⊢ A : s
                   ⟹  Γ, x:A ⊢ B : C                      if x is fresh

    Application:   Γ ⊢ F : (Π x:A. B);   Γ ⊢ C : A
                   ⟹  Γ ⊢ (F C) : B[x := C]

    Conversion:    Γ ⊢ A : B;   Γ ⊢ B' : s;   B = B'
                   ⟹  Γ ⊢ A : B'

    Abstraction:   Γ, x:A ⊢ b : B;   Γ ⊢ (Π x:A. B) : s
                   ⟹  Γ ⊢ (λ x:A. b) : (Π x:A. B)

    Product:       Γ ⊢ A : s1;   Γ, x:A ⊢ B : s2
                   ⟹  Γ ⊢ (Π x:A. B) : s3                 if (s1, s2, s3) ∈ R
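To make the rules concrete, here is a toy checker which follows them for the simply typed instance, with terms encoded as tuples for brevity. It is a sketch only: substitution is not capture-avoiding, freshness is not checked, and conversion is approximated by comparing weak head normal forms (adequate for the normalising example below, not in general):

```python
# Terms are tuples: ("var", x), ("const", c), ("app", f, a),
# ("lam", x, A, b), ("pi", x, A, B).  The PTS is the simply typed one.

AXIOMS = {("*", "#")}
RULES = {("*", "*", "*")}

def subst(t, x, v):                 # naive substitution t[x := v]
    tag = t[0]
    if tag == "var":
        return v if t[1] == x else t
    if tag == "const":
        return t
    if tag == "app":
        return ("app", subst(t[1], x, v), subst(t[2], x, v))
    tag, y, A, b = t                # lam / pi: respect shadowing by y
    return (tag, y, subst(A, x, v), b if y == x else subst(b, x, v))

def whnf(t):                        # weak head normal form, for conversion
    if t[0] == "app":
        f = whnf(t[1])
        if f[0] == "lam":
            return whnf(subst(f[3], f[1], t[2]))
        return ("app", f, t[2])
    return t

def typecheck(ctx, t):
    tag = t[0]
    if tag == "const":              # axioms: c : s if (c, s) in A
        for c, s in AXIOMS:
            if c == t[1]:
                return ("const", s)
        raise TypeError("no axiom for " + t[1])
    if tag == "var":                # start rule: look x up in the context
        return dict(ctx)[t[1]]
    if tag == "app":                # application: F : (Pi x:A.B), C : A
        fty, aty = whnf(typecheck(ctx, t[1])), typecheck(ctx, t[2])
        assert fty[0] == "pi" and whnf(fty[2]) == whnf(aty)
        return subst(fty[3], fty[1], t[2])       # B[x := C]
    if tag == "lam":                # abstraction
        _, x, A, b = t
        B = typecheck(ctx + [(x, A)], b)
        typecheck(ctx, ("pi", x, A, B))          # the Pi must itself be typable
        return ("pi", x, A, B)
    _, x, A, B = t                  # product: (s1, s2, s3) must be in R
    s1 = typecheck(ctx, A)
    s2 = typecheck(ctx + [(x, A)], B)
    for s1_, s2_, s3 in RULES:
        if (s1_, s2_) == (s1[1], s2[1]):
            return ("const", s3)
    raise TypeError("no rule for (%s, %s)" % (s1[1], s2[1]))
```

For example, in a context declaring A : *, the identity λx:A. x checks with type Πx:A. A, which itself has sort *.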

# Semantics:

Whereas all other aspects of this account of Pure Type Systems are taken from Barendregt, the following discussion of the semantics of these systems is my own attempt to make sense of the systems, and is therefore put forward in a tentative way.

### The Purposes of Semantics

A semantics for a formal system may serve many purposes, and depending on what purpose is to be served, different approaches to defining the semantics may be appropriate.

The first, and arguably primary, purpose of a semantics is to tell us what the statements in the language mean! Typically, for a logical system this would be done by identifying the kind of thing which the language is talking about (an interpretation) and then defining, in terms of such an interpretation, what the values of expressions and the truth values of formulae in the language are.

A second purpose of a semantics is to reassure us that the logical system is consistent. This comes in two parts. The first is to show that the rules of inference preserve "truth" (under all interpretations). The second is to show that there are interpretations under which the axioms are true (called models). Strictly speaking one also wants to know that not all formulae are true (since otherwise a consistency proof could be carried through with a debased semantics in which all formulae are deemed valid).

To a mathematician or professional logician there is one further and more difficult stage, which is to understand what all the models are and possibly to understand the structure of this collection as a mathematical domain. Hackers may be happy to manage without this, especially since the answer often comes out in inscrutable category theory.

Here I am concerned primarily with the easiest part, which is to give the semantics of the language, without particular concern for its consistency or for a complete understanding of the full range of models. However, I can manage some discussion of models in relation to consistency.

### Classical Set Theoretic Interpretations

For the benefit of those with a better familiarity with classical set theory than with typed lambda calculi I make the following observations on the relationship between the two (which substantially influence my presentation of the semantics which follows).

Clearly these lambda calculi are primarily about functions, which are also well treated in set theory. The other things we see are types, which are awfully like sets. In fact, when we look at the language there is an obvious set-theoretic interpretation, and under that obvious interpretation all the axioms and rules are sound, with the exception only of some instances of the rules (the triples in R).

Depending on the collection of rules you have you will find either that:

1. just about any set theory will provide a model
2. a set theory will provide a model if it's big enough
3. no well-founded set theory can provide a (well-founded, set-theoretic) model

Case 1 is uninteresting, but an example of this is the simply typed lambda calculus.

Case 2 is more interesting. The rules are essentially about closure under the dependent function space constructor; in set theory, read "axiom of replacement". If you choose a set theory with "universes" then every set is a member of a larger set which is closed under the replacement axiom, and hence closed under the dependent function space constructor. A model of such a set theory will provide "natural" models for the pure type systems whose rules follow certain constraints. Here's my conjecture about a sufficient constraint:

The sorts are partially ordered and the allowed instances of the rule are all monotonically increasing, i.e. s3 ≥ s2 and s3 ≥ s1.

For this class of pure type systems a consistency proof should work out very simply using a plain old set-theoretic model (but I'm just guessing!).
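The conjectured constraint is easy to state mechanically; here is a sketch in Python, with an illustrative two-sort ordering of my own:

```python
# Check the conjectured constraint: every rule (s1, s2, s3) in R must have
# s3 >= s1 and s3 >= s2 in a given partial order on the sorts.

def monotone(rules, leq):
    """leq(a, b) holds when a <= b in the partial order on sorts."""
    return all(leq(s1, s3) and leq(s2, s3) for (s1, s2, s3) in rules)

# Illustrative order: "*" below "#" (reflexive).
order = {("*", "*"), ("#", "#"), ("*", "#")}
leq = lambda a, b: (a, b) in order

# Simply typed lambda calculus: satisfies the constraint.
print(monotone({("*", "*", "*")}, leq))                    # True

# lambda-2 adds ("#", "*", "*"): here s1 = "#" but s3 = "*", so the
# constraint fails, consistent with the lack of well-founded models.
print(monotone({("*", "*", "*"), ("#", "*", "*")}, leq))   # False
```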

Case 3

Satisfying the monotonicity constraint on the sorts is intended to ensure that the system is well-founded, and failing it means that the pure type system might not have any well-founded models. The simplest example of such a system is probably the system λ2 (the polymorphic or "second order" lambda calculus). I believe this was proved by John Reynolds in his paper "Polymorphism is not Set-Theoretic" (you need to interpret both "polymorphism" and "set-theoretic" carefully for this to be a true claim; it sounds a tad more general than it is).

These are really the interesting systems, since the others can be read as offering less than you get in classical set theory. In a sense the point of these typed lambda calculi is just that they provide a basis for investigating kinds of polymorphism, that is, ways of getting round the awkwardness imposed by the classical constraint of well-foundedness. One way is to drop the well-foundedness, and the problem then is to get something which is consistent and strong (if you are interested in doing mathematics).

Anyway, these awkward type systems are generally studied by people into proof theory, and their consistency is generally established by normalisation results (i.e., by reasoning about syntax) rather than by semantic methods. Once you have a proof of normalisation you can make syntactic models, though you no longer need a model for the consistency proof and you don't need a particular model for describing the semantics. (Of course, it's all really much more complicated than this.) If you want a hint of more motivation, the most trendy (although getting a bit long in the tooth these days) aspect of practical computing which militates against well-foundedness, and might therefore benefit from these more interesting type systems, is inheritance in object-oriented languages. If you attempt this naively in a well-founded set theory the types turn out too "big" to be sets (i.e. the same size as the universe).

### The Semantics

An interpretation of a PTS (S,A,R) consists of:

- A universe of discourse U.
- An assignment ass of an element of U to each member of C (the constants).
- A relation "∈" over U × U.
- A partial function apply : U × U ↝ U.
- A partial function pi : U ↝ U.

An interpretation is:

- set theoretic if:
  - U is a subclass of the domain of some set theory;
  - "∈" is the restriction of the set membership relationship over that domain to U;
  - apply(x, y) = z iff z is the unique value such that (y, z) ∈ x;
  - pi f ⊆ {g | dom(g) = dom(f) ∧ ∀x. g(x) ∈ f(x)};
- well founded if there is no infinite sequence of elements e_i of U such that ∀i. e_(i+1) ∈ e_i;
- standard if it is set theoretic, it is well founded, and pi f = {g | dom(g) = dom(f) ∧ ∀x. g(x) ∈ f(x)}.
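In the standard case, pi can be computed outright when everything is finite; here is a sketch in Python, representing a function as a frozenset of argument/value pairs (my own encoding):

```python
from itertools import product

# Interpret apply and pi set-theoretically over finite sets.  A function is
# a frozenset of (argument, value) pairs.

def apply(f, y):
    (z,) = {v for (a, v) in f if a == y}   # defined only if the value is unique
    return z

def pi(f):
    # pi f = { g | dom(g) = dom(f) and for all x, g(x) in f(x) }
    dom = [a for (a, _) in f]
    choices = product(*(sorted(apply(f, a)) for a in dom))
    return {frozenset(zip(dom, c)) for c in choices}

# f sends 0 to {0, 1} and 1 to {0}; pi f has 2 * 1 = 2 dependent functions.
f = frozenset({(0, frozenset({0, 1})), (1, frozenset({0}))})
print(len(pi(f)))                              # 2
print(all(apply(g, 1) == 0 for g in pi(f)))    # True
```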

Now I've set this up so that the semantics can be given as if it were a direct interpretation in a classical set theory (just by choosing suggestive names), but all the key terms can be interpreted in ways which are quite non-standard, so it could be a non-well-founded set theory or a syntactic model.

In the context of an interpretation (U, ass, ∈, apply, pi) and an assignment val of values to free variables, the meaning m of the various constructs of the language is defined as follows. The terms may or may not have a value; if they have no value then any term formed from them has no value, and any judgement of which they are a part is false. In the following, undefinedness is not treated explicitly but arises from the application of partial functions or from the use of definite descriptions which are not satisfied in the domain of the interpretation.

    m val c = ass c
    m val v = val v
    m val (f a) = apply(m val f, m val a)
    m val (λ Variable : Term1 . Term2) = λ v : (m val Term1) . m ([Variable => v] val) Term2
    m val (Π Variable : Term1 . Term2) = pi (λ v : (m val Term1) . m ([Variable => v] val) Term2)
    m val (Variable : Term) = (m val Variable) ∈ (m val Term)
    m val (Declaration , Declaration-list) = (m val Declaration) ∧ (m val Declaration-list)
    m val [] (* empty list of declarations *) = true
    m val (Declaration-list ⊢ Type-assignment) = (m val Declaration-list) ⇒ (m val Type-assignment)
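These clauses can be animated over a finite set-theoretic interpretation; here is a sketch in Python, omitting the Π and context clauses and using an ad hoc encoding of functions as frozensets of pairs (all names are mine):

```python
# Evaluate terms in a finite set-theoretic interpretation: frozensets play
# the role of sets, and frozensets of pairs the role of functions.  Terms
# are tuples: ("var", x), ("const", c), ("app", f, a), ("lam", x, A, b),
# and ("in", a, A) for the type assignment a : A.

def apply(f, y):
    (z,) = {v for (a, v) in f if a == y}   # unique value, else undefined
    return z

def m(val, t, ass):
    tag = t[0]
    if tag == "const":
        return ass[t[1]]                   # m val c = ass c
    if tag == "var":
        return val[t[1]]                   # m val v = val v
    if tag == "app":                       # m val (f a) = apply(...)
        return apply(m(val, t[1], ass), m(val, t[2], ass))
    if tag == "lam":                       # a lambda denotes a set of pairs
        _, x, A, body = t
        return frozenset(
            (v, m({**val, x: v}, body, ass)) for v in m(val, A, ass))
    if tag == "in":                        # m val (a : A) = (m val a) in (m val A)
        return m(val, t[1], ass) in m(val, t[2], ass)
    raise NotImplementedError(tag)

# The identity on the two-element set B = {0, 1}:
ass = {"B": frozenset({0, 1})}
idB = ("lam", "x", ("const", "B"), ("var", "x"))
g = m({}, idB, ass)
print(apply(g, 1))                                    # 1
print(m({"y": 1}, ("in", ("var", "y"), ("const", "B")), ass))   # True
```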

A judgement is satisfied by an interpretation if its value as defined above is true for every assignment to its variables. An interpretation which satisfies each of a set of judgements (which we might call a theory) is a model for that set of judgements.

A proof rule is sound if any interpretation which satisfies all the premises also satisfies the conclusion.