In his essay (appropriately) titled ‘Ontological Relativity’, Willard Quine introduced the notion that there is no such thing as a fixed, standard ontology. Instead, ontology must be relativised: each individual has one or more ontologies unique to them, and we use language as a means to (where possible) translate between them. The key point in his argument was that it is impossible, purely by means of language, for me to determine whether you and I ontologise concepts for which we have a common term in the same way. That means that we cannot, as one might have thought, use language to establish a consensus ontology, for we cannot, on the basis of language alone, derive a unique meaning for common terms. To use Quine’s example, we may have an agreed term ‘rabbit’, and we may even agree on what it denotes, but we have no way of determining whether it should be ontologised as ‘an animal of such and such a shape’ or as ‘a collection of such and such kinds of body parts’. In the absence of a consensus ontology, we must therefore conclude that there is complete ontological relativity, a conclusion which is one of the starting points for my essay Against Standard Ontologies.
Now, Quine’s argument is very persuasive, but it depends largely on rather tendentious thought experiments, such as the infamous ‘gavagai’ example involving a rabbit. This is not to say that these thought experiments are invalid, but as they depend on somewhat unusual special circumstances to acquire their force, they inevitably raise the question of whether ontological relativity is truly endemic, or whether it is purely a feature of extreme cases within the realm of possible ontologies, while most of the time we can in fact establish a consensus ontology. Therefore, in this essay I shall present a formal argument based on the structure of language, one that does not depend in any way on special examples, which shows that any reasonably complex language can and must exhibit ontological relativity.
I am going to walk through the structure of language stage by stage, starting from the individual units of language and building up via grammatically correct sentences to sentences with truth value, sentences with reference to a model of the world and finally sentences that refer to the world as we perceive it. In the process we will see precisely where ontology enters and why it must be relativised.
About language and ontology
So we start from the basic units of language. In English these are words, but in other languages (especially agglutinative languages) they might be morphemes that glue together to form words. Therefore I will use the abstract term ‘element’ to refer to the basic atomic unit of language, that is to say, to any member of the collection of basic units that can be combined and recombined to form utterances.
It seems to be a general fact that in all natural languages (at least all the ones we know about) elements combine to form utterances. Utterances themselves generally consist of one or more segments, each of which is capable of standing on its own as a complete, formally correct unit of speech. That is to say, these segments can be uttered on their own and be assigned a ‘meaning’ (more on that anon). To see my meaning more precisely, consider the following sequences of English words:
1. The cat sat on the mat
2. He ate them because he
Here 1 can stand on its own: it leaves no question hanging. However, 2 is incomplete, as we are left waiting to hear why he ate them. I will call these basic segments sentences. Thus 1 is a sentence and 2 is not. The rules specifying whether a sequence of elements is or is not a sentence constitute the grammar, or syntax, of a language. So syntax tells us how to build sentences from elements.
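The idea that syntax is a body of rules deciding sentencehood can be made concrete. The following is a toy sketch of my own, not anything from Quine or from real linguistics: a handful of invented rewrite rules that accept sequence 1 above and reject sequence 2.

```python
# A toy grammar: S -> NP VP, NP -> Det N, VP -> V PP | V NP, PP -> P NP.
# Purely illustrative; real natural-language syntax is vastly richer.

LEXICON = {
    "the": "Det", "cat": "N", "mat": "N",
    "sat": "V", "on": "P",
    "he": "NP", "ate": "V", "them": "NP", "because": "C",
}

RULES = [
    (("Det", "N"), "NP"),
    (("P", "NP"), "PP"),
    (("V", "PP"), "VP"),
    (("V", "NP"), "VP"),
    (("NP", "VP"), "S"),
]

def is_sentence(words):
    """True iff the word sequence reduces to a single complete S."""
    tags = [LEXICON[w.lower()] for w in words]
    reduced = True
    while reduced:                       # crude bottom-up parsing:
        reduced = False                  # apply rules until fixpoint
        for lhs, rhs in RULES:
            n = len(lhs)
            for i in range(len(tags) - n + 1):
                if tuple(tags[i:i + n]) == lhs:
                    tags[i:i + n] = [rhs]
                    reduced = True
                    break
            if reduced:
                break
    return tags == ["S"]

print(is_sentence("the cat sat on the mat".split()))  # True
print(is_sentence("he ate them because he".split()))  # False
```

The second sequence reduces only as far as S C NP, leaving the ‘because’ clause dangling, which is exactly the incompleteness noted above.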
A grammatically correct sentence is all very well, but if we want to do anything with it we need to be able to tell its truth value. That is to say, if a sentence can be seen as an observation about the way the world is, then, given a source of information about the world to plug into it, we want to be able to tell whether that observation is accurate. The next step gives us part of this information: given a grammatically correct sentence, the semantics of the language tell us how to derive its truth value from information about a class of special elements within it: its predicates.
A predicate is a unit that predicates a property of an object (the object can be pretty well anything, from a referenced thing in the world, to another predicate, to a complete sentence) in such a way that the result of doing so is a truth value. For example, consider the following:
1. The grass is green
2. ‘All your base are belong to us’ is a grammatically correct sentence
Here 1 applies the predicate ‘is green’ to the object ‘the grass’, giving the truth value ‘true’, while 2 applies the predicate ‘is a grammatically correct sentence’ to the object ‘all your base are belong to us’, giving the truth value ‘false’. Given a predicate one can, in principle, define its extension and antiextension, which are respectively the collection of objects of which it is true and the collection of objects of which it is false.
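In computational terms, a predicate’s extension and antiextension can be modelled as two disjoint sets, with application of the predicate reducing to a membership test. A minimal sketch, with objects and sets invented for illustration:

```python
# Model a predicate by its extension (objects of which it is true) and
# antiextension (objects of which it is false). An object in neither set
# simply receives no truth value under this predicate.

class Predicate:
    def __init__(self, name, extension, antiextension):
        assert not extension & antiextension, "sets must be disjoint"
        self.name = name
        self.extension = extension
        self.antiextension = antiextension

    def apply(self, obj):
        if obj in self.extension:
            return True
        if obj in self.antiextension:
            return False
        return None  # truth-value gap

is_green = Predicate("is green",
                     extension={"the grass"},
                     antiextension={"the sky"})

print(is_green.apply("the grass"))  # True
print(is_green.apply("the sky"))    # False
print(is_green.apply("the wind"))   # None
```

The `None` case is worth noting: it is the gap that reappears later in the essay, when a predicate like ‘is green’ has no (anti)extension at all for a speaker blind from birth.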
My assertion, which appears to be true of all known natural languages, and which goes back in philosophy to Alfred Tarski, is that once I know the extension and antiextension of every predicate in a sentence, and know, for each object in the sentence, which of these it belongs to, then the semantics of the language tell me how to derive the truth value of the sentence from that information and the structure of the sentence. Consider the examples:
1. The grass is green
2. The dog, which had long hair, was rolling in something that smelled horrible
1 is obvious: as noted above, we just check whether the object ‘the grass’ is in the extension of the predicate ‘is green’; if it is, the sentence is true. 2 is more interesting; to see how it works, let me recast it:
- There was a thing x such that x smelled horrible and the dog was rolling in x and the dog had long hair
So the sentence is true precisely when (a) the dog had long hair, and there is some thing x such that (b) x smelled horrible and (c) there is a relation of ‘was rolling in’ between the dog and x. So the truth value of the sentence reduces to evaluation of three predicates.
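The recast sentence can be evaluated mechanically: quantify over a domain of things and check the three predicates (a), (b) and (c). A sketch with an invented domain, where the relational predicate ‘was rolling in’ is modelled as a set of pairs:

```python
# A domain of discriminated things, plus extensions for the three
# predicates in the recast sentence. All the data here is invented.
domain = {"the dog", "a puddle", "a stick"}

had_long_hair    = {"the dog"}                # (a)
smelled_horrible = {"a puddle"}               # (b)
was_rolling_in   = {("the dog", "a puddle")}  # (c), a set of pairs

# "There was a thing x such that x smelled horrible and the dog was
# rolling in x and the dog had long hair."
sentence_true = ("the dog" in had_long_hair) and any(
    x in smelled_horrible and ("the dog", x) in was_rolling_in
    for x in domain
)
print(sentence_true)  # True
```

Note that the evaluation never looks outside the given sets: semantics, on this picture, operates on extensions, not on the world directly, which is exactly why reference must enter as a separate step.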
Now we have our predicates with their extensions and antiextensions. At the moment we have a purely formal system of symbols that bears no relation to the world as we perceive it. How do we know how to relate the objects in a sentence to objects in the world? In other words, how do we know what ‘the dog’ in the sentence above refers to? This actually turns out to involve three steps. First, we have to identify what the things are that our world consists of; second, we have to describe each kind of thing so that we can recognise it when we see it; and third, we have to identify which of the things we discriminate within the world is the thing referenced in our sentence.
For the moment we stick with the third of these steps. Say we have correctly discriminated the world into a collection of things. We then need to be able to look at that collection and relate objects within our sentences to those things. This is what we mean by reference: a term like ‘the dog’ in our sentence above is said to refer if it corresponds precisely to a thing in the world that we have discriminated as being of the kind ‘dog’. Reference is therefore absolutely necessary if we are to be able to make any sentence we utter concrete, in the sense of relating it to the world we perceive. Moreover, even with sentences dealing with purely abstract matters, if terms do not refer, that is, if they cannot be assigned to (abstract) things of specific, well-understood, commonly agreed kinds, then there is no way that I can understand your utterances, for there is no way that I can relate the objects in your sentences to anything in my conceptual world. Thus without reference, language as a tool for communication is useless.
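Reference, on this picture, is just a mapping from terms in a sentence to things already discriminated in the world. A sketch, with all names invented: a term refers when the mapping lands on a discriminated thing of the right kind, and fails to refer when no such assignment exists.

```python
# The world, already discriminated into things, each with a kind.
discriminated_things = {
    "fido":           "dog",
    "tibbles":        "cat",
    "the_front_lawn": "grass",
}

# Reference assigns terms occurring in sentences to discriminated things.
reference = {"the dog": "fido", "the grass": "the_front_lawn"}

def refers(term, expected_kind):
    """A term refers if it maps to a thing of the expected kind."""
    thing = reference.get(term)
    return thing is not None and discriminated_things[thing] == expected_kind

print(refers("the dog", "dog"))  # True
print(refers("the cat", "cat"))  # False: no reference was ever assigned
```

The crucial point is that `discriminated_things` is taken as given here: the sketch presupposes the first two steps, which is precisely where, as the next section argues, everything becomes problematic.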
The final thing we have to deal with is the first two steps outlined above as preconditions for reference, that is to say, building a conceptual model of the kinds of things the world is made of, and then describing each kind of thing in such a way that we can discriminate instances of it within the world and ask questions about their properties (that is, assign them to the extensions or antiextensions of predicates).
This turns out to be the part of the structure which is simultaneously the most critical for evaluating the ‘meaning’ of sentences and the one about which we can say the least. The first of these claims should be obvious, in that if I divide up the world in a different way to you, then you may utter sentences that, from your point of view, reference specific objects, and yet, from my point of view, those objects do not even exist. A simple case of this would occur if I had been blind from birth, in which case colour terms would be entirely meaningless to me; words like ‘red’ and ‘green’ would be valid words, and I would even be able to determine the truth value of sentences like:
1. Green is a colour
2. An object can be red all over and green all over simultaneously
But those sentences treat ‘red’ and ‘green’ as objects of predicates like ‘is a colour’, not as predicates in their own right. As predicates, they have no reference and hence no (anti)extension, so I genuinely have no way of answering as to the truth value of:
- This dog is brown
As an additional subtlety, given the sentences:
1. Unripe tomatoes are green, ripe tomatoes are red
2. This tomato is green
Then if I were blind from birth, I could answer as to the truth value of 1, because I can learn these facts about the habitual colours of tomatoes, and yet I would have no way of answering 2 other than asking someone else to do it for me. Going the other way, say I were a human being and you were an animal with sonar-based senses (e.g. a dolphin). To such an animal, an object’s properties go beyond its visible externals and include its internal constitution in terms of density, mass distribution, etc. Thus your ontology would contain large quantities of information that simply vanish on translation into mine; you would distinguish classes of objects that I saw as identical. Ontology is inherently private.
We conclude from this that two speakers of a language can easily agree on syntax and semantics, as these are the mechanics of language, which depend only on the internal structure of a sentence and not at all on the outside world. Reference begins to be problematic; for example, consider the sentence:
- Cicero was troubled by serious crime
Does ‘Cicero’ reference the American city or the Roman senator? In either case the sentence is true, so we have to deduce the reference from context. Thus reference depends not just on the sentence itself, but on the context in which it is placed. This context has two aspects. First, we can assign reference to particular terms by ostension, that is, by (literally) pointing at an object while using the term we wish to assign to it, e.g. saying ‘This dog is brown’ while pointing out a particular dog. This can be generalised to apply to a very wide range of cases. It provides what we can consider the occasion-specific part of the context, by indicating those references that cannot be deduced from the sentence or from background knowledge. Second comes background knowledge, or what Quine calls a conceptual scheme. I do not need to have the term ‘dog’ in the sentence above defined for me, because you assume that I know what a dog is.
How can you test that I know what a dog is? The test is simply that you and I should agree on the contexts in which the term ‘dog’ can be used in a sentence, and on the truth of the resulting sentences (at least in cases where we can both make sense of those sentences). So if I were to answer ‘It’s not a dog, it’s a canary’, that would imply a failure of common reference. In this way we can determine whether you and I agree on the class of objects referenced by the term ‘dog’, and if we do, then we assume that we have a common reference.
As soon as we move on to ontology, that breaks down entirely. It may be that I break the world down in a way entirely alien to you, but have still been able to spot common features in things you reference as ‘dog’, and so can agree on the reference of the term, even though my ontology is entirely different. For example, if I had the senses of a spider, with eight eyes, complex chemical sensors (a sense of smell) and very sensitive motion detectors, my ontology might classify all items based on whether they were moving or not, so I would consider a moving dog as distinct from a stationary dog, not out of perversity or choice, but simply because my brain was wired in such a way that all visual percepts automatically came to me with a motion indicator attached. Again, if I were a robot which had eight distance sensors instead of two eyes, my ‘visual’ perception of the world would be of structures in an eight-dimensional space and would (as for the dolphin) include information about the internal structures of objects; again this information would be an inherent part of my perception, not something tagged on to a more basic perception. So if perception differs, ontology will differ.
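The point that richer perception yields finer ontological distinctions can be made concrete: if two perceivers classify the same world by different perceptual features, one may distinguish objects the other must treat as identical, and the extra information simply vanishes on translation. A sketch with invented percepts, loosely following the human, spider and dolphin examples above:

```python
# Raw things in the world, each carrying several properties.
# The data, and the choice of features per perceiver, are invented.
world = [
    {"shape": "dog-shaped", "moving": True,  "density": "low"},
    {"shape": "dog-shaped", "moving": False, "density": "low"},
    {"shape": "dog-shaped", "moving": False, "density": "high"},
]

def classify(thing, features):
    """A perceiver's kind for a thing: the tuple of features it perceives."""
    return tuple(thing[f] for f in features)

human_features   = ("shape",)            # visible externals only
spider_features  = ("shape", "moving")   # motion indicator always attached
dolphin_features = ("shape", "density")  # sonar reveals inner constitution

for features in (human_features, spider_features, dolphin_features):
    kinds = {classify(t, features) for t in world}
    print(len(kinds), "kind(s) of thing under features", features)
```

The human perceiver sees one kind of thing where the spider and the dolphin each see two, and crucially they carve the same three things into two *different* pairs: no amount of talk about ‘dogs’ will reveal which carving the other speaker is using.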
But none of us has quite the same perceptions as anyone else, and none of us has quite the same conceptual scheme. You and I will be trained by a common culture in how to break things down as far as reference goes, helped by our common neural anatomy, but as we move beyond reference into ontology, ontology being always private, we have no way of telling whether we do, in fact, share an ontology, because our only tool for testing the claim is language, and language can only tell us about reference. Therefore ontological relativity is necessary: not because we can prove it is true, but because it is necessarily impossible to prove that it is not.