Roger Bishop Jones
The notation used in presenting this model is the formal specification language Z. This is in essence a weakly-typed set theory with some useful constructs and notations built into the language itself, and others available in a standard library.
It is a cultural norm among users of this specification language that formal specifications are presented interleaved with informal annotation which helps to connect the formal model with its intended application. For this reason, among others, it may be appropriate for use in this philosophical context.
My aim is, using this language, to describe a mathematical model of the intended subject matter which will permit rigorous reasoning, sufficiently reliable that those who disagree with my conclusions will be able to do so only on the basis of a disagreement with the model. To show that any of the dichotomies is untenable, very strong reasons for rejecting the model would be needed. In order to guard against the possibility of such a rejection, the model will be made as general as a naive treatment of the relevant concepts will permit.
I will take the view that all three dichotomies are ways of classifying statements where a statement is a sentence in some language with a known semantics in a context sufficient to determine the meaning of the sentence. In the case of the a priori/a posteriori distinction this concerns the kind of evidence, if any, which would be required to justify a claim to know the truth value of the relevant sentence.
These dichotomies are applicable only to sentences in context which can be expected to have a truth value, and it is to be expected that the meaning of such a sentence will provide a basis for the determination of the truth value, given a perfect knowledge of how things are. This provides the necessary minimal conditions for there to be a semantics for the portions of the relevant languages expressed denotationally and yielding truth conditions. However, we do not assume a truth functional semantics. This assumption is stronger than we need make, and we will indicate some reasons which suggest that it is stronger than we should make.
The presentation of the semantics denotationally is without loss of generality, but does presume, as is conventional in mathematics, a rich abstract ontology of raw materials from which models are constructed.
The model, we will see, is primarily a model for the semantics of languages, in terms of which it is possible to define the concepts which interest us, and to show how they are related. The languages under consideration are presumed to be capable of talking about the "real world", even though our modelling ontology is entirely abstract (all names denote, and all quantifiers range over, sets). We have no interest in the details of how possible worlds are modelled, and therefore simply introduce a name which is to be understood as denoting the set of all possible worlds:
We require that there be at least two distinct truth values, and allow that there may be more.
The model of the semantics of a language is expressed in terms of sentences, contexts, propositions and possible worlds. We have nothing to say about what these are and do not depend upon any assumptions about them. The model of a language may therefore be considered parametric in these matters. W, P, S, and C are to be understood as names for the sets of:
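The Z declarations themselves are not reproduced in this excerpt. Purely as an illustration (the class and set names below are mine, not the specification's), the parameters of the model, together with the requirement of at least two distinct truth values, might be sketched in Python as:

```python
# Illustrative sketch only: the paper's given sets are modelled as opaque
# Python classes, since nothing is assumed about their members.
class World: pass        # W: the set of possible worlds
class Proposition: pass  # P: the set of propositions
class Sentence: pass     # S: the set of sentences
class Context: pass      # C: the set of contexts

# Truth values: at least two distinct values are required, and there may
# be more (the names "true"/"false" here are illustrative placeholders).
TRUTH_VALUES = {"true", "false"}
assert len(TRUTH_VALUES) >= 2
```

The point of leaving W, P, S and C opaque is exactly the parametricity noted above: nothing downstream depends on what worlds, propositions, sentences or contexts actually are.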
In order to admit languages in which the meaning of a sentence in context is a proposition, the definition of the semantics of our language is factored into two parts. The first part is the meaning proper, which is a proposition. The second is an evaluation function, which may be thought of either as giving the semantics of propositions, or simply as giving their truth conditions (avoiding the terminological confusion which might arise from talking of the semantics of something which is itself a meaning). I will give reasons later why this step is without loss of generality, since the constraints we place on the nature of a proposition are so weak that in any language for which even a partial truth valuation is available there will be technically satisfactory candidates for the role of propositions.
Further to the introduction of propositions, and again without loss of generality, I accept that some philosophers consider that a sentence should be considered to have a meaning even when it is considered out of context. Both notions of meaning can co-exist, though for the sake of clarity different names must be used to refer to them. The most economical way to recognise this alternative concept of meaning, which I will call the content of a sentence, is simply to use a curried form of the meaning function. Instead of using a function which takes a pair of arguments, the sentence and a context in which it is to be interpreted, we use a function which takes two single arguments in succession, the sentence first. When applied to a sentence it yields a function from contexts to propositions, which, when applied to a context, gives the proposition.
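The relationship between the two readings can be sketched concretely. In this hypothetical fragment (the names are mine, and sentences, contexts and propositions are represented as strings merely for concreteness), `content` is the curried form of `meaning`:

```python
def meaning(sentence, context):
    """Uncurried form: a (sentence, context) pair yields a proposition."""
    return f"proposition[{sentence} @ {context}]"

def content(sentence):
    """Curried form: the 'content' of a sentence out of context is itself
    a function, mapping each context to a proposition."""
    return lambda context: meaning(sentence, context)

# Applying the content of a sentence to a context recovers exactly the
# proposition given by the uncurried meaning function:
p1 = meaning("it is raining", "London, noon")
p2 = content("it is raining")("London, noon")
assert p1 == p2
```

Nothing of substance hangs on the choice between the two forms; currying is a purely notational transformation.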
The only constraint we place on the meaning and valuation functions is that they are total functions over their respective domains which yield, respectively, propositions and truth values. Even the totality requirement is purely formal, since the truth value domain may contain extra values which may be used to represent undefinedness, a technical device which allows a partial function to be modelled using a total function.
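The technical device mentioned here can be illustrated as follows. In this sketch (again with illustrative names and string stand-ins for propositions and worlds, not the paper's Z), a genuinely partial valuation is made total by adding an extra truth value:

```python
# Three truth values: "undefined" is the extra value which absorbs the
# partiality of the underlying valuation.
TRUTH_VALUES = {"true", "false", "undefined"}

def valuation(proposition, world):
    """Total over its domain: every (proposition, world) pair yields some
    truth value, with "undefined" standing in for genuine partiality."""
    table = {("snow is white", "actual"): "true"}
    return table.get((proposition, world), "undefined")

assert valuation("snow is white", "actual") == "true"
# A pair for which no genuine truth value is available still gets a result:
assert valuation("the king of France is bald", "actual") == "undefined"
```

The totality requirement therefore excludes nothing: any partial valuation can be presented in this total form.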
The term proposition is the name which we give to the meaning of a sentence in context, and the context is whatever information may be necessary to settle the meaning of a sentence. Once the meaning of a sentence has been settled, the only other information needed is the world in which the truth of the proposition is to be discovered. It is therefore required that the combination of a possible world and a proposition has a determinate truth value (though we do not presume that it is "True" or "False").
The concepts which concern us are then defined as follows:
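The formal Z definitions are not reproduced in this excerpt. Purely as a hypothetical sketch of one standard possible-worlds style of definition (not necessarily the author's own), necessity of a proposition could be rendered as truth in every possible world, with a statement counting as analytic when the proposition it expresses is necessary, i.e. when its truth is settled by meaning alone:

```python
# Hypothetical sketch only: names and the toy valuation table are mine.
WORLDS = {"w1", "w2"}

def valuation(proposition, world):
    # Toy valuation: one proposition true in every world, one varying.
    table = {
        ("bachelors are unmarried", "w1"): "true",
        ("bachelors are unmarried", "w2"): "true",
        ("it is raining", "w1"): "true",
        ("it is raining", "w2"): "false",
    }
    return table[(proposition, world)]

def necessary(proposition):
    """True in every possible world under the valuation."""
    return all(valuation(proposition, w) == "true" for w in WORLDS)

assert necessary("bachelors are unmarried")
assert not necessary("it is raining")
```

On such a reading, uncertainty about which statements fall on which side of a dichotomy is uncertainty about the valuation, not a defect in the definitions themselves, which is the point pressed against Quine below.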
I will give cursory attention here to some of the arguments and counterexamples which have been used to discredit the naive view of the triple dichotomy in the latter part of the twentieth century.
A couple of preliminary observations.
I have tried in this paper to approach, in the discussion of a philosophical problem, a standard of rigour comparable to that normally expected in pure mathematics. In some respects, because I thought it clearer and more informative to do so, I have by adopting a more formal approach gone a little further than is usual in mathematics.
It is a consequence of those standards of rigour that in mathematics, once a proof is furnished, there is no requirement to review any previous arguments which may have been mustered against the proven proposition. If any doubts remained they would be doubts about some part or aspect of the proof, and would be addressed by expanding or clarifying the proof.
It is a mark of the lack of rigour in philosophy that when a new position is presented, it is expected that previous positions must still be argued against.
Quine is of interest here because of his denial of the tenability of the analytic/synthetic dichotomy. I do not know whether Quine's doubts in this matter were confined to the use of the distinction in relation to natural languages, or whether they were intended to apply to formal notations as well. I am under the impression from what I have read of Quine that he made no exception for formal notations.
The two principal arguments of which I am aware are:
In relation to his ``two dogmas of empiricism'' [Qui53], two features of Quine's argument seem to me to be doubtful.
Firstly, as a matter of general procedure, Quine puts forward a weak account of how the analyticity of statements is supposed to be established, argues that this method cannot work, and concludes that the distinction is untenable. This falls into a general but fallacious pattern of reasoning which has been common in the twentieth century. It is a special case of an even broader group of fallacious arguments in which sceptical arguments are used to arrive at definite negative conclusions, contrary to the ancient good sense of Pyrrho.
In this particular case Quine begins with the idea that an analytic truth is to be shown analytic by substitution of equivalents until a truth of pure logic is obtained, citing Frege. He then goes on, through a discussion of radical translation, to doubt that the requisite knowledge about synonymy can be obtained.
Frege was of course primarily concerned with the development of mathematics. What he had in mind was that the concepts of mathematics would be defined one by one, ultimately in terms of purely logical concepts. Insofar as the demonstration of the truths of mathematics involved substitution of synonyms it was specifically the substitution of the body of a definition for the word or construct it defines. In this particular context, for this kind of question about synonymy, it is hard to see what relevance radical translation has. To the extent that there is any doubt about the meaning of a definition the doubt is inherited from the defining expression by the defined, and causes no doubt about their synonymy.
Even had Frege's prescription been quite unworkable, this would not be conclusive. In fact, as illustrated in the preceding formal model, the idea of a definition has now been supplanted by the broader idea of a conservative extension. When mathematics is developed by logical methods from some logical foundation system using conservative extension, there is no requirement that a synonym be available for use in eliminating mathematical concepts.
Quine has therefore attacked indecisively a method which is no longer preferred.
Before I move to the relevance of holism, one more observation about Quine's method. It seems to me that Quine introduces the indeterminacy of translation as a sceptical argument intended to show the difficulty of establishing claims about synonymy. The concept of analyticity as defined in this paper is defined directly in terms of the semantics of the language in question. It will therefore be the case that any uncertainty about the meaning of the language is likely to translate into uncertainty about which statements in the language are analytic. Analytic statements are not distinctive in this; it applies equally to all statements in the language: uncertainty about meaning translates into uncertainty about truth. However, this does not reflect on the tenability of the analytic/synthetic distinction or the well-definedness of the concept of analyticity.
The concept of analyticity is well defined for any language satisfying quite minimal conditions. It is defined in terms of the semantics of an arbitrary language, even where that semantics is not fully known. If in practice we have difficulty in deciding whether a statement in the language is analytic, the remedy is to clarify the meaning of the statement, either by research or by fiat. In practice natural languages are often extended to make them more precise for some particular purpose; often they are supplemented or displaced by more formal notations; and natural languages are also made more precise by the importation of concepts from formal languages. (An example is the use of the term ``set'', once imprecise and incoherent, now used, variously according to context, in quite (if not absolutely) precise and coherent ways.)