[hist-analytic] Methods of Proof: Re: Clarity Is Not Enough
baynesrb at yahoo.com
Mon Feb 23 08:46:00 EST 2009
"In general, to reason about language A in language B one must
be able to represent the syntax of A in the ontology of B."
I agree with most of what you have said here, but I'm not
sure about the above. Some maintain that there is no ontology
involved in logic at all. Now I don't agree with this, but we
have to be clear before passing on to other matters. Can you
give me an example of how syntax is represented in ontology?
Now to another point.
My interest here is redundancy in a logical system. When you
speak of "languages" I take it you mean canonical or formal
languages; if so, then I think there is a difference in how
redundancy is to be regarded. For example, in making the
grammar of a natural language explicit, lack of redundancy
has always been considered a virtue. So when you are setting
up principles or parameters that will "generate" (as in
"generative syntax") all and only the sentences of a given
language, alternatives must be evaluated, and redundancy in
the *application* of a rule becomes paramount. Now in logic there
is a difference. Let me give an example. In some proofs of
both the upward and downward versions of Lowenheim-Skolem the
occurrence of vacuous quantifiers makes no difference, so that
there is nothing really wrong with '(Ex)(Ey)(Ez)Fxy'; but in
standard linguistic theory, where natural languages are the
"object language," there is something wrong with it. So my
point was this: if you set up two axiomatic systems, and one
involves less redundancy, then that is the better one.
Redundancy would be determined by how many ways a theorem can
be proven; the more ways of doing it, the more redundancy. The
quantificational example I've given above, if duplicated in
standard linguistic theory, would yield massive
overgeneration. In logic
this doesn't lead to inconsistency, but my point was, in part,
to raise the question whether the linguistic case and the
logic case are similar. One other thing: if you take redundancy
to mean repeated application of a rule, then it might be that,
in the sense I intend, a system like Hilbert's would be less
redundant than, say, Frege's. I suppose the formalization of
arithmetic and the formalization of natural language may
differ here. Finally, on topology.
There is a connection here. Tarski wrote on it briefly, and
there have been others. It might be argued, and I think not
without reason, that topology is a branch of model theory; the
connection, notwithstanding the (perhaps) flawed suggestion of
Heine-Borel, seems real enough.
Anyway, I think we need to get clear on what, precisely, you
take to be ontology and its relation to syntax.
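
As an aside, vacuity of the sort in my example above can be
checked mechanically. A rough sketch, entirely my own and using
an ad hoc regex for the (Ex)/(Ax) prefix notation:

```python
import re

def vacuous_quantifiers(formula):
    """Return quantified variables that never occur in the matrix.

    Assumes the ad hoc prefix notation used above: a string of
    quantifiers like (Ex) or (Ay) followed by the matrix.
    """
    prefix = re.findall(r"\([EA](\w)\)", formula)   # bound variables
    matrix = re.sub(r"\([EA]\w\)", "", formula)     # strip the prefix
    return [v for v in prefix if v not in matrix]

print(vacuous_quantifiers("(Ex)(Ey)(Ez)Fxy"))  # ['z']
```

Here 'z' is bound but does no work, which is harmless in the
logical case but not in the linguistic one.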
--- On Mon, 2/23/09, Roger Bishop Jones <rbj at rbjones.com> wrote:
> From: Roger Bishop Jones <rbj at rbjones.com>
> Subject: Re: Methods of Proof: Re: Clarity Is Not Enough
> To: hist-analytic at simplelists.com
> Date: Monday, February 23, 2009, 5:59 AM
> On Sunday 22 February 2009 16:51:23 steve bayne wrote:
> > For example, a theorem in some systems
> > can be derived in more ways than one. In another
> > the number of available proofs varies. So, in a sense, one
> > can imagine the possibility of a metric for redundancy in
> > methods of proof.
> Probably not, actually, because in most systems you can
> include arbitrary irrelevancies in a proof, and this results
> in there being infinitely many proofs of every proposition.
> Since these are all countable infinities (unless we admit
> infinitary logics) all propositions either have no proof
> or countably many and you don't get a metric.
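
Interjecting here: the metric I had in mind could be restricted
to proofs of bounded length, where the counts remain finite. A
toy sketch, entirely my own and not any standard calculus, in
which "theorems" are strings, the sole axiom is "p", and each
of two rules extends a theorem by one character:

```python
from collections import Counter

AXIOM = "p"
RULES = [lambda s: s + "a",   # rule 1: append an 'a'
         lambda s: "a" + s]   # rule 2: prepend an 'a'

def proof_counts(max_steps):
    """Count distinct proofs (rule sequences from the axiom) of
    each theorem, for proofs of at most max_steps rule uses."""
    counts = Counter({AXIOM: 1})
    frontier = [AXIOM]
    for _ in range(max_steps):
        nxt = []
        for s in frontier:
            for rule in RULES:
                t = rule(s)
                counts[t] += 1   # one more distinct derivation of t
                nxt.append(t)
        frontier = nxt
    return counts

counts = proof_counts(2)
print(counts["apa"])  # 2: the two rules can apply in either order
print(counts["paa"])  # 1: only one rule order yields it
```

On this measure "apa" is more redundant than "paa", which is
the kind of comparison I meant.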
> > Optimally a method of proof would have
> > no redundancy.
> Proof theorists do use cut-elimination to normalise
> proofs, which does reduce the number of distinct proofs
> of the same proposition. Whether this amounts to
> elimination of redundancy is moot.
> > If we consider propositions as points and proofs as
> > finite sets of points, then one wonders about the
> > possibility of using certain theorems in topology in
> > proof theory.
> Proofs are normally considered as having more structure than
> a finite set of propositions; one expects to be told from
> which intermediate propositions each proposition is arrived
> at, and by what rule.
> > Here is a "wild" case, one for which I make no claims
> > except to illustrate the general point. If we think of all
> > the propositions in propositional calculus as contained
> > within an interval, and a proof as a sort of 'cover' in
> > the sense of a subclass of open intervals which contains
> > the union of the members of all propositions, then couldn't
> > we use something like the Heine-Borel Theorem to prove
> > that a proof covers all subclasses of propositions?
> Not clear to me which subclasses of which classes of
> propositions you are talking about here.
> I'm not myself aware of any results along these lines.
> > A far more general question might be: Is something like
> > Godel numbering essential for some of these metatheorems?
> In general, to reason about language A in language B one
> must be able to represent the syntax of A in the ontology
> of B. If B is arithmetic then this is called arithmetisation,
> and Godel numbers (as in the original papers) are an
> arithmetisation of a version of the language of Principia
> Mathematica. Godel would probably not have chosen arithmetic
> for his purposes if he had not been intent on proving the
> incompleteness of arithmetic. Set theory is a better vehicle
> for metatheory, and in this case one would encode the syntax
> directly as sets rather than arithmetically.
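
A minimal sketch of the arithmetisation idea, using a toy
encoding of my own rather than Godel's actual scheme: assign
each symbol a positive code and pack a string of symbols into
one number as a product of prime powers, which the fundamental
theorem of arithmetic lets us decode uniquely.

```python
# Toy symbol codes (my own choice, purely illustrative).
SYMBOL_CODES = {"(": 1, ")": 2, "E": 3, "F": 4, "x": 5, "y": 6}

def primes(n):
    """First n primes, by trial division."""
    ps = []
    k = 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def godel_number(formula):
    """Encode the i-th symbol s as p_i ** code(s) and multiply.

    Unique factorization guarantees the string can be recovered
    from the number, so syntax is represented in arithmetic.
    """
    g = 1
    for p, sym in zip(primes(len(formula)), formula):
        g *= p ** SYMBOL_CODES[sym]
    return g

print(godel_number("Fxy"))  # 2**4 * 3**5 * 5**6
```

Encoding syntax "directly as sets" would instead represent each
formula as a nested tuple or set, with no detour through number
theory.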