[hist-analytic] Methods of Proof: Re: Clarity Is Not Enough

Roger Bishop Jones rbj at rbjones.com
Mon Feb 23 05:59:34 EST 2009


On Sunday 22 February 2009 16:51:23 steve bayne wrote:

> For example, a theorem in some systems
> can be derived in more ways than one. In another system
> the number of available proofs varies. So, in a sense, one
> can imagine the possibility of a metric for redundancy in
> methods of proof. 

Probably not, actually, because in most systems you can
include arbitrary irrelevancies in a proof, and this results
in there being infinitely many proofs of every proposition.
Since these are all countable infinities (unless we admit
infinitary logics), every proposition has either no proof
or countably many, and you don't get a metric.
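The padding point can be made concrete with a toy sketch (my own
illustration, not anything from the thread): if a proof is a list of
formulas in which every line is an axiom or follows from earlier lines,
then prepending unused axiom instances always yields a new, equally
valid proof, so any provable formula has infinitely many distinct proofs.

```python
def pad(proof, axiom="(p -> (q -> p))", times=1):
    """Return a new proof with `times` redundant copies of an axiom prepended.

    The extra lines are never used, so the result still proves the
    same conclusion by the same final step.
    """
    return [axiom] * times + list(proof)

# A one-line proof: the axiom instance itself.
base = ["(p -> (q -> p))"]

# Five pairwise-distinct proofs of the same conclusion.
proofs = [pad(base, times=k) for k in range(5)]
assert len({tuple(p) for p in proofs}) == 5
assert all(p[-1] == "(p -> (q -> p))" for p in proofs)
```

Iterating `pad` gives a proof of every length, hence countably many in all.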

> Optimally a method of proof would have
> no redundancy.

Proof theorists do use cut-elimination to normalise 
proofs, which does reduce the number of distinct proofs
of the same proposition.  Whether this amounts to
elimination of redundancy is moot.

> If we consider propositions as points and proofs as
> finite sets of points, then one wonders about the possibility
> of using certain theorems in topology in proof theory.

Proofs are normally considered as having more structure than
a finite set of propositions: one expects to be told from which
intermediate propositions each proposition is derived, and by
what rule.
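That extra structure can be sketched as a data structure (the names here
are my own, not standard): each step records its conclusion, the rule
applied, and the earlier steps it depends on, and a checker can verify
that, say, every modus ponens step really cites matching premises.

```python
from dataclasses import dataclass

@dataclass
class Step:
    conclusion: str
    rule: str            # e.g. "axiom" or "modus ponens"
    premises: tuple = () # indices of earlier steps in the proof

def check_modus_ponens(proof):
    """Check each 'modus ponens' step cites earlier lines A and (A -> B)."""
    for i, step in enumerate(proof):
        if step.rule != "modus ponens":
            continue
        a_idx, impl_idx = step.premises
        if a_idx >= i or impl_idx >= i:
            return False  # may only cite strictly earlier lines
        a = proof[a_idx].conclusion
        impl = proof[impl_idx].conclusion
        if impl != f"({a} -> {step.conclusion})":
            return False
    return True

proof = [
    Step("p", "axiom"),
    Step("(p -> q)", "axiom"),
    Step("q", "modus ponens", (0, 1)),
]
assert check_modus_ponens(proof)
```

Forgetting the `rule` and `premises` fields collapses this back to a bare
set of propositions, which is exactly the information loss at issue.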

> Here is a "wild" case, one for which I make no claims except to
> illustrate the general point. If we think of all the propositions
> in propositional calculus as contained within a bounded
> interval, and a proof as a sort of 'cover' in the sense of a
> subclass of open intervals which contains the union of
> the members of all propositions, then couldn't we use
> something like the Heine-Borel Theorem to prove that a
> proof covers all subclasses of propositions?

Not clear to me which subclasses of which classes of
propositions you are talking about here.
I'm not myself aware of any results along these lines.

> A far more
> general question might be: Is something like Godel numbering
> essential for some of these metatheorems?

In general, to reason about language A in language B one must
be able to represent the syntax of A in the ontology of B.
If B is arithmetic then this is called "arithmetisation"
and Godel numbers (as in the original papers) are an arithmetisation
of a version of the language of Principia Mathematica. 
Godel would probably not have chosen arithmetic for his "metalanguage"
if he had not been intent on proving the incompleteness of
arithmetic.  Set theory is a better vehicle for metatheory,
and in this case one would encode the syntax directly as sets
rather than arithmetically.
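To illustrate arithmetisation, here is a hedged sketch (my own toy
encoding, simpler than Godel's original scheme for Principia): give each
symbol a positive code, then pack a formula's symbol string into a single
number as a product of prime powers, 2^c1 * 3^c2 * 5^c3 * ..., from which
the formula can be recovered by factorisation.

```python
# Symbol codes for a tiny arithmetical language (an arbitrary choice).
SYMBOLS = {"0": 1, "s": 2, "=": 3, "(": 4, ")": 5, "+": 6}

def primes():
    """Yield 2, 3, 5, 7, ... by trial division."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n**0.5) + 1)):
            yield n
        n += 1

def godel_number(formula):
    """Encode a formula as 2^c1 * 3^c2 * ... for symbol codes c1, c2, ..."""
    g = 1
    for p, ch in zip(primes(), formula):
        g *= p ** SYMBOLS[ch]
    return g

def decode(g):
    """Recover the formula by reading off prime exponents in order."""
    inv = {v: k for k, v in SYMBOLS.items()}
    out = []
    for p in primes():
        if g == 1:
            break
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(inv[e])
    return "".join(out)

n = godel_number("s(0)=s(0)")
assert decode(n) == "s(0)=s(0)"
```

A set-theoretic metatheory would skip the number-packing entirely and
represent the same syntax tree directly as nested sets or tuples.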

Roger
