The Porter Zone

Philosophical musings and more


Types and ontologies


An ontology is essentially a systematic model for how we break up the world around us into things. This clearly plays a role in perception, as it is the ontology that lets us construct mappings that say that to such and such a collection of qualia I will assign such and such a thing, hence allowing me to build a model of the world I inhabit. Ontology also plays a major role in language, in that language is itself an abstract formal means of communicating information, but that information more often than not consists of statements referring to objects, and it is an ontology that lets me translate these referring statements into things. So if you ask me to get you some strawberries, I use my ontology to translate the term ‘strawberry’ into something that I can recognise (red, small pips on surface, etc). Thus when one individual communicates to another, it is by using an ontology that they both connect referring terms in statements to referents in the world.

What this means is that if communication is to be unambiguous, the parties involved in the communication must share an ontology. Now, for the parties to completely share an ontology is in principle impossible, as demonstrated in our essay Against Standard Ontologies, but we can, by deliberately limiting the scope of our communication, limiting the complexity of our language and formalising the ways in which we refer to things, do something to minimise the risk of systemic confusion.  Therefore this paper attempts to understand what formal structures are required to provide a sufficiently rich world-picture to enable useful communication, while keeping the level of formality sufficiently high that we can avoid systemic misunderstanding, and also while acknowledging that the robot’s world-view will be radically different to ours. This last point is crucial, in that it means that our ontology must exist at a sufficiently high level that none of the referring terms we wish to use in communication actually refer to sensory percepts. Rather there must be several levels of abstraction between any referring term and the qualia we expect to perceive when we recognise the referred thing.


Before we get on to types, it is worth saying a little about what we expect of languages, because that will inevitably influence the way that we express a type system, and maybe even the kind of type system we use. I keep the discussion at the level of syntax and structural semantics rather than content.

Basic structures

So, a language is a formal system with a vocabulary of symbols which can be combined in various ways to produce sentences. We posit the basic rules:

  • Symbols in the language are functional or non-functional.
  • Functional symbols can belong to two classes, that of referring terms and that of properties.

So we have the beginnings of an ontology. Here non-functional symbols are connectives, such as ‘and’, ‘or’, etc. Symbols that are functional can refer to objects, or they can describe objects, and hence be properties. Note the crucial fact that I have not said that the division into referring terms and properties is exclusive: a symbol can be both a reference to a thing and a way of describing something else (e.g. ‘red’ is both a noun and an adjective).

In terms of an ontology, we have here defined two top-level and potentially overlapping kinds (that is to say concepts at a higher level than type): Referring and Property, so there are two top-level properties with these names that provide a basic classification. The next key step is the following:

  • Given any referring symbol x and description P then I can ascribe the property P to x, so I can form the sentence ‘x is P’.

And now I finally need:

  • All sentences have a truth value.

Truth values

Even the statement that sentences have a truth value is mired in controversy, because what do we mean by a truth value? Do we mean standard Boolean True and False, or do we include intermediate values like Unknown and Undefined, or do we go even further and allow a continuous-valued logic where the truth value is a probability? All of these choices are possible and have persuasive arguments in their favour: intermediate truth values are useful when knowledge is incomplete, and probabilities are useful when it is uncertain. But suppose I am using probabilistic truth-values and I have a property P and I say ‘x is P’ with probability 0.3; what do I mean by this? It might mean that I am uncertain about what properties x has, and I am certain with probability 0.3 that it has P, or it might mean that x is some mixture, a proportion of which, given by 0.3, is P. The first of these applications of probabilistic logic is uncontroversial; the second is problematic, for not all types can be mixed. Therefore we conclude that there are two kinds of probabilistic thinking:

  • Truth values dealing with probability arising from uncertainty. A typical sentence is ‘x is P’, which is true with probability p. This means that we are uncertain of the truth of the sentence but believe that it is true with probability p.
  • Truth values dealing with mixing. A typical sentence is ‘q% of x is P’, which, if it is true, means that we believe that x is a mixture, q% of which is P.

Note that the two kinds of probability may be combined, e.g. ‘q% of x is P’ with probability p.
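These two readings can be kept apart mechanically. As a minimal sketch in Python (the class and field names are my own invention, not drawn from any standard logic library), a truth value can carry the mixture proportion q and the confidence p as separate components:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TruthValue:
    """Truth value of 'q of x is P', held with confidence p (illustrative names)."""
    mixture: float     # q: what proportion of x is P (1.0 = 'x is P' outright)
    confidence: float  # p: how certain we are that the mixture claim holds

    def expected_fraction(self) -> float:
        # If the claim is false we assume nothing about x being P, so the
        # expected proportion of x that is P is simply p * q.
        return self.confidence * self.mixture

# 'x is P' with probability 0.3: pure uncertainty, no mixing
uncertain = TruthValue(mixture=1.0, confidence=0.3)

# '30% of x is P', asserted with certainty: pure mixing
mixed = TruthValue(mixture=0.3, confidence=1.0)

# 'q% of x is P' with probability p: both kinds combined
combined = TruthValue(mixture=0.3, confidence=0.5)
```

Note that `uncertain` and `mixed` have the same expected fraction, 0.3, even though they say quite different things about x, which is exactly why the two components must be kept distinct.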

Properties and referring symbols

We can now consider a quite important question, which is whether types are always properties or whether it is possible to have a type that is not a property. Saul Kripke (in Naming and Necessity) argues that proper names are such types. That is to say, ‘Julius Caesar’ is not a property, or even shorthand for a property, but is an unanalysed label picking out that one individual. This is, to say the least, problematic. First, each of us may know what ‘Julius Caesar’ means to us, and we may be able to communicate about him, but it would be a very brave person who claimed that my idea of Caesar is the same as yours. Second, how does this picking out work in the absence of a property-based description? Surely ‘Julius Caesar’ is just a meaningless jingle to which we have assigned meaning by associating it with a number of properties. Now let us look at things the other way: is it possible to have a property that does not refer? The answer depends largely on whether we take a property to be purely extensional. That is to say, if a property is defined only in terms of the set of objects of which it is true, then there is no need for it to refer. However, such an approach is extremely limiting, because unless we have all properties of all objects pre-programmed into us, we cannot use properties to describe an object we have not previously encountered. We do not have such pre-programming and neither will a useful robot; we have to be able to generalise. Therefore properties require a means of connecting them to the world of things, that is to say a way of referring to them.

If we take this thought to its furthest extent, it seems that the only real distinction between properties and referring symbols is that properties will in general refer to more than one thing. But even here the distinction is minimal, in that any collective symbol, e.g. ‘robot’ turns out on inspection to be nothing more than a description in disguise. It seems then that, if we follow Kripke:

  • Property x implies Referring x.
  • There is a kind ProperName such that ProperName x implies Referring x.
  • Referring x and not Property x implies ProperName x.
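The three rules above can be sketched as a toy classifier (this is my own encoding, not Kripke's formalism; the symbols and flags are invented for illustration):

```python
# Each symbol carries two primitive flags; 'proper name' is a derived kind.

def is_referring(symbol: dict) -> bool:
    # Rule 1: Property x implies Referring x, so a property always refers.
    return symbol["property"] or symbol["referring"]

def is_proper_name(symbol: dict) -> bool:
    # Rules 2 and 3: a proper name is a referring symbol that is not a property.
    return is_referring(symbol) and not symbol["property"]

red = {"referring": True, "property": True}      # both noun and adjective
caesar = {"referring": True, "property": False}  # picks out one individual
```

So `caesar` classifies as a proper name while `red`, being a property, does not, which is just rule 3 read mechanically.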

It is worth making a quick observation about free variables, that is to say symbols like ‘it’ or ‘the thing’ that allow one to substitute any desired entity in place of the symbol. These symbols are at first sight neither referring nor properties, and yet they can be either, e.g. ‘It is my head’ or ‘it is my favourite colour’, where in the latter ‘colour’ is the object of the property ‘my favourite’ and yet is itself a class of property.

Analysed and unanalysed symbols

Kripke also insisted that his proper names be unanalysed. There is considerable value in the concept of unanalysed symbols. At least one unanalysed proper name exists, that is to say ‘me’ (whether the ‘me’ be a human or a robot). Others exist in the form of objects that are self-evident, manifest facts in the universe of the speaker. So a robot may have a number of sensors which are, to it, things whose nature is fixed, whose identity is self-evident, and which have no properties other than existence. However, there are two points here. First, the information produced by those sensors need not be unanalysed or undescribed. Second, to a human observing the robot, each sensor on the robot will belong to a particular type, which may itself be defined in terms of another type, and so on. Thus we see that ontologies are indeed relative.


  • There is a kind Unanalysed which consists of symbols that are atomic, and are not defined in terms of other terms.
  • ProperName x implies Unanalysed x.
  • There are a number of unanalysed properties.

Unanalysed symbols refer to the things that we do not need to have defined because they are manifest and obvious (and hence they are very slippery, e.g. I know that ‘me’ is well-defined, but ‘you’ is hugely uncertain). There are also descriptions that are unanalysed, so a robot does not need to analyse ‘sensor’ any further. We can build a hierarchy of symbols using the relation ‘is defined in terms of’: when trying to define a symbol we analyse the way we describe it, then define those descriptions by seeing how they in turn are described, and so on until we reach unanalysed terms. Sufficient unanalysed terms must exist for this hierarchy to be finite. Therefore:

  • The property symbols of the language form a finite hierarchy with relationships based on ‘is defined in terms of’. There are no a priori constraints on the structure of the resulting directed graph save that it must contain no cycles.
  • The resulting directed graph is rooted in symbols x such that ‘Unanalysed x’ is true.
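This hierarchy is easy to sketch as a directed graph rooted in unanalysed symbols; the little vocabulary below (using the strawberry example from earlier) is invented purely for illustration:

```python
# 'is defined in terms of' as an adjacency map; roots are unanalysed symbols.
defined_in_terms_of = {
    "strawberry": ["red", "small", "pip"],
    "red": [],      # unanalysed: defined in terms of nothing further
    "small": [],
    "pip": [],
}

def is_unanalysed(symbol: str) -> bool:
    return not defined_in_terms_of[symbol]

def acyclic(graph: dict) -> bool:
    # Depth-first search along 'is defined in terms of', rejecting cycles.
    visiting, done = set(), set()
    def visit(node):
        if node in done:
            return True
        if node in visiting:
            return False          # a cycle: a symbol defined via itself
        visiting.add(node)
        ok = all(visit(n) for n in graph.get(node, []))
        visiting.discard(node)
        done.add(node)
        return ok
    return all(visit(n) for n in graph)
```

The acyclicity check is exactly the constraint stated above: definitions must bottom out in unanalysed terms rather than chase their own tails.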

Note that this hierarchy is not a hierarchy in the usual ontological sense, because it is not based on the property that a description expresses. The hierarchy tells us how symbols are defined, not what they mean.


Now we have the basics and the top-level kinds settled, we can begin to consider what kind of model we should use for types. To start, let us see what we need to do with types.


First, given any entity we need to be able to ascribe a type to it. As noted above, this applies equally to things and properties: a thing can be of one or many types, and a property can also be of one or many types. Therefore:

  • There should be a way to ascribe a type to a symbol in a language.

Before we go any further with this we need to ask: is a type part of our language or not? All of the kinds I introduced above were metalinguistic, and it may seem reasonable for types to follow that model and so inhabit a metalanguage rather than the language proper. This view assumes that types cannot be reified, or at least cannot be described within the language, for they exist outside it. But now consider a type like ‘mammal’; that this is a type might be disputed, but it is certainly treated as one in common usage. This type is clearly not metalinguistic, as it can be defined in terms of other properties within the language, and so it seems that types must themselves be symbols within the language unless we want to limit quite severely the expressiveness of the type system. This is a real departure, as most formal type systems rely on metalinguistic type labels. Once types become symbols within the language, one can have types of types and so on and so forth, as well as interaction between types and unanalysed terms. In particular, it means that the type system can evolve with the language, which is clearly a good thing, though it also means that considerable restraint is needed to ensure that whatever type system is used does not become over-complex. The price of this profusion of riches is that we are now severely limited as to how types are expressed and ascribed. Mechanisms such as the type labels of the typed lambda calculus, though usable, become extremely limiting, because now we have symbols acting in two roles depending on whether they appear as referring symbols or as ascribed types. A much simpler, and more natural, approach is to treat types as predicates or properties within the language, so a referring symbol x is of type T if ‘x is a T’ evaluates to True.

  • There is a kind Type such that Type x implies Description x and all types have kind Type.
  • Type ascription is effected by predication, so ‘x is a T’ is a model sentence.

One may think that as a corollary of this, no unanalysed symbol may belong to any type within the language. But consider, ‘me’ is an unanalysed symbol, being a proper name, and yet it carries the description of ‘person’ or ‘robot’ or whatever. Therefore in fact any referring symbol can be ascribed a type.

B. The type system

So we are now at the position that a type is a description of kind Type and we assert that a thing x has type T by saying ‘x is a T’. Now we need to discuss relations between types. A common model for ontologies is to use types that are formed into a hierarchy, so each object lies in a particular place on a tree of types, and so is a member of one type, which is itself a member of another and so on up to the root. We saw in Against Standard Ontologies that this model is untenable so something more complex is required. In order to clarify a possible confusion, note that types, being descriptions, are indeed hierarchical, but the hierarchy involves their definition and not their membership, that is to say type T1 being defined in terms of type T2 does not imply that everything that is a T1 is also a T2. Therefore there is no inconsistency in our model.

So how are types organised? It makes immediate sense to introduce a relation along the lines of ‘is a’ on types, which in fact generalises the predication relation that ascribes a type to an entity, in a way consistent with our contention that types are referring symbols. Thus I can say ‘T1 is a T2’, which implies that if any x obeys ‘x is a T1’ then in addition ‘x is a T2’. Therefore:

  • The relation ‘x is a T’, where x is any referring object and T is a type or kind, is transitive, so ‘x is a T1’ and ‘T1 is a T2’ imply ‘x is a T2’.

Note that this means that the relation ‘is a’ is not that between class and superclass or between object and class, but is rather a more complex relation that comprises both of these. However it can be useful to think of a type as a class and a referring term as an object (recalling that classes themselves are objects), and I shall refer to this analogy with object-orientation repeatedly below.

So we can model types as a graph. There is no obvious a priori structure to this graph, in particular, any one entity may belong to more than one type, for example ‘sonar sensor’ is a ‘distance sensor’ and an ‘imaging sensor’, and though ‘sonar sensor’ and ‘imaging sensor’ both have type ‘sensor’ they are themselves distinct types. Therefore, in object oriented terms, we are dealing with a type system that allows multiple inheritance.
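The transitive ‘is a’ relation can be sketched as reachability in exactly this graph, using the sensor example; the helper names and the toy instance ‘my sonar’ are my own:

```python
# 'is a' edges, covering both type-to-type and object-to-type ascription.
is_a_edges = {
    "my sonar": ["sonar sensor"],       # a referring term ascribed a type
    "sonar sensor": ["distance sensor", "imaging sensor"],  # multiple inheritance
    "distance sensor": ["sensor"],
    "imaging sensor": ["sensor"],
}

def is_a(x: str, t: str) -> bool:
    # 'x is a T' holds if T is reachable from x along 'is a' edges.
    # (Taken as reflexive here for simplicity: everything 'is a' itself.)
    if x == t:
        return True
    return any(is_a(parent, t) for parent in is_a_edges.get(x, []))
```

Note how ‘my sonar is a sensor’ holds by transitivity along either parent type, which is precisely the multiple-inheritance behaviour described above.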

More intriguingly, we have noted above that even unanalysable descriptions may themselves have types. Therefore the unanalysable symbols are not themselves the roots of the directed graph constructed by the relation ‘is a’. We must, however, ensure that the graph of types is finite by avoiding cycles. There is a subtle point here. Consider the recursively defined binary tree type:

  • If Type A then a Tree A is either:
    • A Leaf, which is a value of type A
    • A Node, which is two Tree A, one for each of the left-hand and right-hand sub-trees

Here, though the definition is recursive, the recursion is functional rather than typological; that is to say, the recursion appears in the values a tree may take, and the type itself creates no cycle within the graph of types.
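To make the point concrete, here is one possible rendering of the binary tree type in Python's typing vocabulary (a sketch only; the names mirror the definition above):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

A = TypeVar("A")

@dataclass
class Leaf(Generic[A]):
    value: A                     # a Leaf holds a value of the parameter type A

@dataclass
class Node(Generic[A]):
    left: "Tree[A]"              # left-hand sub-tree
    right: "Tree[A]"             # right-hand sub-tree

Tree = Union[Leaf[A], Node[A]]   # a Tree A is either a Leaf or a Node

def leaves(t) -> int:
    # The recursion lives in the values, not in the graph of types.
    return 1 if isinstance(t, Leaf) else leaves(t.left) + leaves(t.right)

t: Tree[int] = Node(Leaf(1), Node(Leaf(2), Leaf(3)))
```

The graph of types here is finite (Tree, Leaf, Node, A); only the values `leaves` walks over can nest to arbitrary depth.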

  • Types form a directed graph based on ‘is a’ which is finite and contains no cycles.

However there is another point here, which argues against a purely graph-theoretic view of types, which is that a recursive type such as Tree is not well-described within such a hierarchy. In particular it is an example of the critical concept of a parameterised type, which leads us into the next section.


Say I have a predicate P in my language, so I can apply it to any term x to create a new term Px. Say x has type T1 and P has type T2; what type does Px have? It can depend on T2 alone, on both T1 and T2, and in some complex cases it can depend on the precise values of P and x. The obvious way of handling this is with generalised arrow types. Recall that a basic arrow type is a type

T = T1 → T2

such that if P has type T and x has type T1 then Px must have type T2. I want to generalise this a little. What we need of a predication type is first a guard expression which states that I can only compose P with x if the type of x obeys some condition, and second a rule which states that there is some function such that the type of Px is the result of applying this function to x.

  • The most general form of type we need takes the form ‘Px is a Tx provided Gx’, where G is a Boolean predicate and T is a predicate that maps referring objects to types.

Using this model I can describe the binary tree type above by saying that it takes any type A and returns, based on it, a type Tree A as defined above. This kind of parameterisation is absolutely crucial, for example, in data analysis and artificial intelligence, where I want to take streams of data from any number of sources, which might therefore have different types, but to apply standardised processing to them regardless of the underlying type. And, more to the point, this is something I cannot necessarily do with a standard object oriented inheritance paradigm, because the range of types I want to work with might be so disparate that in fact they share no meaningful common ancestor.
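A rough sketch of such a guarded predication type follows. Two deliberate over-simplifications, purely for illustration: types are encoded as plain strings, and the type function T is applied to the argument's type rather than to the argument itself.

```python
def make_typed_predicate(guard, result_type, fn):
    """Bundle a predicate with its guard G and result-type function T."""
    def apply(x, x_type):
        if not guard(x_type):                  # Gx: may P be composed with x?
            raise TypeError(f"guard rejects argument of type {x_type!r}")
        return fn(x), result_type(x_type)      # Px, together with its type Tx
    return apply

# A parameterised 'reverse': accepts a List A for any A and returns a List A,
# regardless of what A is -- the kind of polymorphism described in the text.
rev = make_typed_predicate(
    guard=lambda t: t.startswith("List "),     # only lists are acceptable
    result_type=lambda t: t,                   # List A in, List A out
    fn=lambda xs: list(reversed(xs)),
)

value, vtype = rev([1, 2, 3], "List Int")
```

Here `rev` works over `List Int`, `List String`, and so on without those types sharing any common ancestor, while the guard rejects non-list arguments outright.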


In conclusion our need is for something quite subtle that involves a number of different structures, some based on object orientation, some closer to functional programming. We have to effect a difficult balancing act so as to keep them all in play and not allow any of the different paradigms to take over or become too complex.


Ontological relativity without the rabbit


In his essay (appropriately) titled ‘Ontological Relativity’, Willard Quine introduced the notion that there is no such thing as a fixed, standard ontology, but instead ontology must be relativised, so each individual has one or more ontologies unique to them, and that we use language as a means to translate (where possible) between them. The key point in his argument was that it is impossible, purely by means of language, for me to determine whether you and I ontologise concepts for which we have a common term in the same way. That means that we cannot, as one may have thought, use language to establish a consensus ontology, as we cannot, purely based on language, derive a unique meaning for common terms. To use Quine’s example, we may have an agreed term ‘rabbit’, and we may even agree on what it denotes, but we have no way of determining whether it should ontologise as ‘an animal of such and such a shape’ or as ‘a collection of such and such kinds of body parts’. In the absence of a consensus ontology, we must therefore conclude that there is complete ontological relativity, a fact which is one of the starting points for my essay Against Standard Ontologies.

Now, Quine’s argument is very persuasive, but it depends largely on rather tendentious thought experiments, such as the infamous case of the rabbit Gavagai.  This is not to say that these thought experiments are invalid, but as they depend on somewhat unusual special circumstances to acquire their force, they inevitably lead to the question of whether ontological relativity is truly endemic, or whether it is purely a feature of extreme cases within the realm of possible ontologies, and that most of the time we can actually establish a consensus ontology.  Therefore, in this essay I shall present a formal argument based in the structure of language, that does not depend in any way on special examples, which shows that any reasonably complex language can and must exhibit ontological relativity.


I am going to walk through the structure of language stage by stage, starting from the individual units of language and building up via grammatically correct sentences to sentences with truth value, sentences with reference to a model of the world and finally sentences that refer to the world as we perceive it.  In the process we will see precisely where ontology enters and why it must be relativised.

About language and ontology

So we start from the basic units of language. In English these are words, but in other languages (especially agglutinating languages) these might be lexemes that glue together to form words. Therefore I will use the abstract term ‘element’ to refer to the basic atomic unit of language, that is to say the collection of basic units that can be combined and recombined to form utterances.


It seems to be a general fact that in all natural languages (at least all the ones we know about) elements combine to form utterances.  Utterances themselves generally consist of one or more segments, each of which is capable of standing on its own as a complete, formally correct unit of speech.  That is to say, these segments can be uttered on their own and be assigned a ‘meaning’ (more on that anon).  To see my meaning more precisely consider the sequences of English words:

  1. The cat sat on the mat
  2. He ate them because he

Here 1 can stand on its own.  It does not beg any questions.  However, 2 is incomplete, as we do not know what it was that he ate them because of.  I will call these basic segments sentences.   Thus 1 is a sentence and 2 is not.  The rules specifying whether a sequence of elements is or is not a sentence constitute the grammar or syntax of a language.  So syntax tells us how to build sentences from elements.


A grammatically correct sentence is all very well, but if we want to do anything with it we need to be able to tell its truth value.  That is to say, if a sentence can be seen as an observation about the way the world is, we want, given a source of information about the world to plug into it, to be able to tell whether that observation is accurate.  The next step gives us part of this information, in that given a grammatically correct sentence, the semantics of the language tell us how to derive the truth value of a sentence from information about a class of special elements within it: its predicates.

A predicate is a unit that predicates a property of an object (the object can be pretty well anything, from a referenced thing in the world, to another predicate, to a complete sentence) in such a way that the result of doing so is a truth value. For example consider the following:

  1. The grass is green
  2. ‘All your base are belong to us’ is a grammatically correct sentence

Here 1 applies the predicate ‘is green’ to the object ‘the grass’, giving the truth value ‘true’, while 2 applies the predicate ‘is a grammatically correct sentence’ to the object ‘all your base are belong to us’, giving the truth value ‘false’.  Given a predicate one can, in principle, define its extension and antiextension, which are respectively the collection of objects of which it is true / false.

My assertion, which appears to be true of all known natural languages, and which goes back in philosophy to Alfred Tarski, is that once I know the extension and antiextension of all predicates in a sentence, and know, for each object in the sentence, which of these it belongs to, then the semantics of the language tell me how to derive the truth value of the sentence from that information and the structure of the sentence. Consider the examples:

  1. The grass is green
  2. The dog, which had long hair, was rolling in something that smelled horrible

1 is obvious: as noted above, we just check whether the object ‘the grass’ is in the extension of the predicate ‘is green’.  If it is then the sentence is true.  2 is more interesting; to see how it works, let me recast it:

  1. There was a thing x such that x smelled horrible and the dog was rolling in x and the dog had long hair

So the sentence is true precisely when (a) the dog had long hair, and there is some thing x such that (b) x smelled horrible and (c) there is a relation of ‘was rolling in’ between the dog and x.  So the truth value of the sentence reduces to evaluation of three predicates.
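This reduction can be sketched directly: give each predicate an extension and read the truth value of the recast sentence off set membership. The mini-world below is invented for illustration.

```python
# Extensions: the set of objects (or pairs, for two-place predicates)
# of which each predicate is true.
extension = {
    "smelled horrible": {"fox mess"},
    "had long hair": {"the dog"},
    "was rolling in": {("the dog", "fox mess")},   # a two-place predicate
}

def sentence_true(world_objects) -> bool:
    # 'There was a thing x such that x smelled horrible and the dog was
    #  rolling in x and the dog had long hair'
    return "the dog" in extension["had long hair"] and any(
        x in extension["smelled horrible"]
        and ("the dog", x) in extension["was rolling in"]
        for x in world_objects
    )
```

The semantics contributes only the shape of the computation (the conjunction and the existential quantifier); everything world-dependent is packed into the extensions.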


Now we have our predicates with their extensions and antiextensions.  At the moment we have a purely formal system of symbols that bears no relation to the world as we perceive it.  How do we know how to relate the objects in a sentence to objects in the world?  In other words, how do we know what ‘the dog’ in the sentence above refers to?  This actually turns out to involve three steps.  First we have to identify what the things are that our world consists of, then we have to describe each type of thing, so we can recognise it when we see it, then we have to identify which of the things we discriminate within the world is the thing referenced in our sentence.

For the moment we stick with the third of these steps.  Say we have correctly discriminated the world into a collection of things.  We then need to be able to look at that collection and relate objects within our sentences to those things.  This is what we mean by reference: a term like ‘the dog’ in our sentence above is said to refer if it corresponds precisely to a thing in the world that we have discriminated as being of the kind ‘dog’.  Reference is therefore, as we can see, absolutely necessary if we are to be able to make any sentence we utter concrete, in the sense of relating to the world we perceive.  Moreover, even with sentences dealing with purely abstract matters, if terms do not refer, that is, if they cannot be assigned to (abstract) things of specific, well-understood, commonly agreed kinds, then there is no way that I can understand your utterances, for there is no way that I can relate the objects in your sentences to anything in my conceptual world.  Thus without reference, language as a tool for communication is useless.


The final thing we have to deal with is the first two steps outlined above as preconditions for reference, that is to say building a conceptual model of the kinds of things the world is made of, then describing each kind of thing in such a way that we can discriminate instances of it within the world and ask questions about its properties (that is, assign it to the extensions or antiextensions of predicates).

This turns out to be the part of the structure which simultaneously is the most critical for evaluating the ‘meaning’ of sentences and the one about which we can say least. The first of these claims should be obvious, in that if I divide up the world in a different way to you then you may utter sentences that, from your point of view, reference specific objects, and yet, from my point of view, those objects do not even exist. A simple case of this would occur if I had been blind from birth, in which case colour terms would be entirely meaningless to me; words like ‘red’ and ‘green’ would be valid words, and I would even be able to determine the truth value of sentences like:

  1. Green is a colour
  2. An object can be red all over and green all over simultaneously

But those sentences treat ‘red’ and ‘green’ as objects of predicates like ‘is a colour’, not as predicates in their own right.  As predicates, they have no reference and hence no (anti)extension, so I genuinely have no way of answering as to the truth value of:

  1. This dog is brown

As an additional subtlety, given the sentences:

  1. Unripe tomatoes are green, ripe tomatoes are red
  2. This tomato is green

Then if I were blind from birth, I could answer as to the truth value of 1, because I can learn these facts about the habitual colours of tomatoes, and yet I have no way of answering 2 other than asking someone else to do it for me. Going the other way, say I were a human being and you were an animal with sonar-based senses (e.g. a dolphin). To such an animal, an object’s properties go beyond its visible externals and include its internal constitution in terms of density, mass distribution, etc. Thus your ontology would contain large quantities of information that simply vanishes on translation to mine; you would distinguish classes of objects that I saw as being identical. Ontology is inherently private.


We conclude from this that two speakers of a language can easily agree on syntax and semantics, as these are the mechanics of language, which depend only on the internal structure of a sentence and not at all on the outside world.  Reference begins to be problematic, for example consider the sentence:

  1. Cicero was troubled by serious crime

Does ‘Cicero’ reference the American city or the Roman Senator? In either case the sentence is true, so we have to deduce reference from context. Thus reference depends not just on the sentence itself, but on the context in which it is placed. This context has two aspects. First, we can assign reference to particular terms by ostension, that is by (literally) pointing at an object while using the term we wish to assign it to, e.g. saying ‘This dog is brown’ while pointing out a particular dog. This can be generalised to apply to a very wide range of cases. It provides what we can consider the occasion-specific part of the context by indicating those references that cannot be deduced from the sentence or from background knowledge. So, second comes background knowledge, or what Quine calls a conceptual scheme. I do not need to have the term ‘dog’ in the sentence above defined for me because you assume that I know what a dog is.

How can you test that I know what a dog is? The test is simply that you and I should agree on the contexts in which the term ‘dog’ can be used in a sentence and on the truth of the resulting sentences (at least in cases where we can both make sense of those sentences). So if I were to answer ‘It’s not a dog, it’s a canary’, that would imply a failure of common reference. In this way we can determine whether you and I agree on the class of objects referenced by the term ‘dog’, and if we do then we assume that we have a common reference.

As soon as we move onto ontology that breaks down entirely.  It may be that I break the world down in a way entirely alien to you, but have still been able to spot common features in things you reference as ‘dog’, and so can agree on the reference of the term, even if my ontology is entirely different.  For example, if I had the senses of a spider with eight eyes, complex chemical sensors (sense of smell) and very sensitive motion detectors, my ontology might classify all items based on whether they were moving or not, so I would consider a moving dog as distinct from a stationary dog, not out of perversity or choice, but simply because my brain was wired in such a way that all visual percepts automatically came to me with a motion indicator attached to them.  Again, if I were a robot which had eight distance sensors instead of two eyes, my ‘visual’ perception of the world would be as structures in an eight-dimensional space and would (as for the dolphin) include information about internal structures of objects, and again this information would be an inherent part of my perception, not just something tagged on to a more basic perception.  So if perception differs ontology will differ.

But now, none of us has the same perceptions as anyone else, and none of us has the same conceptual scheme as anyone else.  You and I are trained by a common culture in how to break things down as far as reference goes, and are constrained by our common neural anatomy, but as we move beyond reference into ontology, as ontology is always private, we have no way of telling whether we do, in fact, share an ontology or not, because our only tool for testing this claim is language, and language can only tell us about reference.  Therefore ontological relativity is necessary, not because we can prove it is true, but because it is necessarily impossible to prove that it is not true.

Against Standard Ontologies

1 Introduction

It is a truism that lay persons rush in where experts fear to tread. We are too well aware of the many enthusiasts who insist that they have built a perpetual motion machine, that they can square the circle, and so on and so forth. Where philosophers have long concluded that there can be no such thing as a single standard ontology, non-philosophers ignore such minor issues and set about trying to build one (e.g. SUMO, see [4]).

Unfortunately, it is still the case that there can be no such thing as a standard ontology. As I will show in this note, at best there can be a number of local ontologies, each dealing with a small, well understood problem domain, where there is only one point of view. This latter criterion is crucial: if I am building ontologies in (say) robotics, I have to accept that the points of view of the robot’s designer, programmer and user are very different, not to mention the point of view of the robot itself. Thus each of these must involve a separate ontology.

I proceed by setting out the arguments for ontological relativity, the claim that multiple equally valid ontologies are endemic. Having done this I show that there are, in fact, very severe constraints on what a candidate ontology can look like, imposed not by a world-view but by the requirement for philosophical coherence.

2 Ontological relativity

An ontology is a (hopefully systematic) collection of types whose intersections are such that by applying subsets of the types in the collection to a thing we can reach the point at which we have a sound description of that thing, and some understanding of its structure. But this is immensely problematical.

2.1 Multiple Ontologies

Consider first the case of types of things whose existence is debated. For example, I may believe in angels, you may not. So, even if an ontology extended to include the category of ‘imaginary things’ we would end up categorising angels in different ways. Thus there is no way that even one ontology can be applied in a consistent and unambiguous way across all cases and individuals.

Now consider the case where I am a classical physicist and you are trained in quantum mechanics.  Your conceptual world contains ideas such as ‘wave function’, ‘S-matrix’, ‘state vector’ and so on and so forth; mine does not.  Thus it is not a matter of our having a common set of categories but disagreeing as to how to categorise a thing; in this case you have categories whose very existence is unknown to me.  Therefore either we must conclude that multiple ontological frameworks must coexist, or we must assert that progress will inevitably drive us to bigger and better ontologies, or we must become Platonists and assert that there is a single ‘correct’ ontology, but we have not yet discovered it all.  Of these options, the second is dumbfounding in its arrogance and, less pejoratively, is merely a weak form of the third.  The third is unprovable and also faintly worrying for all those of us who are not Platonists.  Therefore multiple ontologies must coexist.

2.2 Coexisting ontologies

Third, we can do serious damage to the Platonist point of view.  Consider Quine’s famous example from [6] where you and I see a rabbit, you say gavagai and I deduce that gavagai means rabbit.  This seems perfectly sound, until we consider the assumptions inherent in the deduction.  I have assumed that you ontologise the world into things in the same way as me, so you look at what I think of as a rabbit and see a single thing.  But you could use an ontology in which the basic unit is the body part, and then there are names for particular collections of types of body part, so gavagai actually refers to the components of what I would call a rabbit.

Quine showed that, in fact, there is no way of distinguishing by purely extensional communication whether gavagai means rabbit or ‘a particular collection of types of body part’, meaning that both ontologies are equally valid and the difference causes no problem in communication.  It is therefore impossible to privilege one over the other; any attempt to do so would inevitably end up deriving more from personal prejudice than from any rigorous criterion.  Thus, not only are multiple ontologies possible, they are endemic.  In [6] Quine coined the term ontological relativity to refer to this concept that in fact there can be no preferred ontology.

2.3 Local ontologies

Therefore we must conclude that there is no global ontology that can be applied by fiat. At best there are local ontologies, tailored to specific problems or domains, between which we translate. This should not, of course, come as a surprise to anyone who regularly switches between vocabularies depending on context (e.g. technical, formal, informal).

3 Constraints on ontologies

3.1 Concrete vs Abstract

We need to be very careful with the formulation of the categories that make up ontologies, for the way we formulate them can depend on the precise world-view we want to adopt. Moreover, they can result in severe constraints being imposed on the resulting ontology. Thus any candidate ontology must be verified not just against its creators’ view of the world, but against meta-ontological requirements of coherence and consistency. In this section I demonstrate this fact by analysing one apparently safe top-level categorisation, into concrete and abstract.

3.1.1 What is concrete?

What, precisely, do we mean by concrete?  The folk-epistemology definition, that something is concrete if it is real, is far from helpful, because if I were a Platonist then, as far as I was concerned, \aleph_0 would be real; if I were a constructivist I might assert that only finite integers are real; if I were an empiricist I might deny the negative integers; and if I were a strict empiricist I might wonder whether it is actually provable that the integer 472,533,956 is realised anywhere in the physical world.  So the naive view founders on ontological relativity.

So say that a thing is real if it can be realised; that seems safe enough. A horse can be realised, so horses are real. But what about unicorns? The fact that no realised unicorns have been discovered does not mean that they cannot be realised, only that they have not been realised; there is a clear distinction between absence and impossibility. Now, we might decide to rule against unicorns because they are imaginary, but consider the case of the top quark. Top quarks have been demonstrated to be realised, so top quarks are concrete. But the top quark as a thing was hypothesised long before it was discovered, so what was its ontological status after its invention but before its discovery? If unicorns are abstract, so must the top quark have been, in which case it suddenly underwent transition from abstract to concrete upon its discovery. Thus either, once again, ontological relativity rears its ugly head, or else we have to accept the Platonic position that anything we can construct hypothetically is, in fact, concrete.

3.1.2 Types and kinds

In fact things get much worse.  When we speak of things, do we consider a thing to be anything that is realisable, or does it have to correspond to a particular object?  To put it more formally, can types and kinds be things?  To return to my example, horse is actually a type, in that it consists of a collection of qualities that allow us to ascribe identity to one particular class of things.  But surely types cannot be concrete, for (unless we are Platonists) surely the concept horse cannot be realised, precisely because it is, in the truest sense, an abstraction.

So let us suppose that all types and kinds are abstract.  What, then, is left to be concrete?  That question is very hard to answer, because once we have taken away all types, kinds and properties (for properties are merely a kind of type), what is left is formless, undistinguished stuff.  Indeed, as Quine has pointed out ([7]), even proper names can be thought of as properties, as they are essentially predicates that allow us to distinguish one thing from the rest, and hence are a property held only by that thing.  Even within the context of Kripke’s rather more Platonic universe, the rigid designator ends up as being a kind of label that picks out a particular thing ([3]), and is hence a property or type.  Thus, once we have stripped away all types and kinds what is left is things that are undistinguished and undistinguishable, the unknowable thing in itself.  The concrete category might well exist, but in so far as the purpose of an ontology is to enable proper categorisation of things, it is useless, because it is not susceptible to categorisation.

Therefore it follows that when we are building an ontology, we might, if we so wished, make an initial division into concrete and abstract, but we would immediately find that at that point we had, at least in the concrete category, gone as far as we could go, and that all subsequent work must involve the imposition of structure upon the abstract. Therefore, any ontology that attempts to maintain a distinction between the concrete and the abstract while imposing structure on the concrete is incoherent.

3.2 Hierarchies and other structures

There is a common assumption among practitioners of practical ontology that ontologies must be hierarchical, that is to say that each type or kind is a specialisation of precisely one (more general) type or kind, and so on all the way back to a single root kind. Thus the categories that make up the ontology form a simple tree. This top-down approach is strongly rooted in pre-modern systematic philosophy (see [1] and [5] for examples) but it is not obvious how realistic it is.

3.2.1 Hierarchical models are not sufficient

Consider, for example, the case of the platypus.  A platypus is a type of mammal, but it is also a type of egg-laying animal, and those two types cannot be placed in a hierarchical relation to one another.  Hence, the type platypus cannot be derived from only one parent type.  As a more conceptual example, the C declaration

typedef union {
  long l;
  double d;
} longdouble;

creates a type which is simultaneously a type of long and a type of double; in fact it is polymorphic and can be taken to be of either type.

Consider also this problem. Say I decide that a relation is a type of thing within my ontology. So it must sit somewhere in my hierarchy. But any realised relation is a relation (one type) between one or more things (one or more additional types), and so the realised relation derives from at least two types, and may derive from any number. This is evidence of a certain problem with naive ontologies: if one tries to make an ontology all-embracing then it has to end up being self-describing, so meta-ontological structures such as relation become part of the ontology and end up being related to almost everything.
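The point about relations can be made in a few lines of Haskell (a sketch; the names Relation and describe are mine, not drawn from any library): a realised relation is parameterised by the types of its relata, and so derives from at least two types besides its own.

```haskell
-- A relation is parameterised by the types of the things it relates,
-- so any realised relation depends on at least two further types.
data Relation a b = Relation String a b

describe :: (Show a, Show b) => Relation a b -> String
describe (Relation name x y) = show x ++ " " ++ name ++ " " ++ show y

main :: IO ()
main = putStrLn (describe (Relation "owns" "Alice" (3 :: Int)))
```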

3.2.2 Recursive types

In fact, we can go further. Clearly any reasonable ontology must allow for recursive types. For example, in Haskell we might specify the type of (rather ironically) a binary tree as

data Tree a = Leaf a | Node (Tree a) (Tree a)

In general, we can only define the type binary tree in terms of itself, and this is far from being the only example. A sound ontology has to allow for recursive definitions, but a hierarchy cannot.
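As a sketch of how the recursion plays out in practice (the function size is my own illustrative addition), both the type and any function that consumes it are defined in terms of themselves; neither fits into a finite top-down tree of categories.

```haskell
-- The type Tree refers to itself, and functions over it are
-- naturally recursive in the same way.
data Tree a = Leaf a | Node (Tree a) (Tree a)

size :: Tree a -> Int
size (Leaf _)   = 1
size (Node l r) = size l + size r

main :: IO ()
main = print (size (Node (Leaf 'x') (Node (Leaf 'y') (Leaf 'z'))))
```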

3.2.3 Functional types

A classic example of a type that will not fit in any hierarchy is the function type, that is to say a type of things that change the type of other things. So, for example, transducers are a type of thing that convert one type of energy to another, e.g. microphones, which convert sound energy into an electric current. We can model this as

transducer :: a -> b

where a and b are the input and output types of energy. So this function type depends crucially on two types, the input and the output.
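A minimal Haskell sketch of the transducer as a function type follows; Sound, Current and the linear response factor are illustrative assumptions of mine, not physics.

```haskell
-- Energy kinds as types; a transducer is then a value of a
-- function type between them.
newtype Sound   = Sound Double    -- sound level, arbitrary units
newtype Current = Current Double  -- electric current, arbitrary units

-- A microphone realises the function type Sound -> Current.
microphone :: Sound -> Current
microphone (Sound p) = Current (0.1 * p)  -- assumed linear response

main :: IO ()
main = let Current i = microphone (Sound 2.0) in print i
```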

3.2.4 Conditional types

This is complex enough, but the example of the tree demonstrates just how far from being a hierarchy ontology can get. Recall that we defined

data Tree a = Leaf a | Node (Tree a) (Tree a)

Here a is a parameter that can stand for any type. So this prescription tells me how to make a binary tree of type a. Continuing down this route, we can be more stringent, for example

data Eq a => Set a = Set [a]

says that I can make a set whose elements are things of any type a that happens to belong to the type Eq. In other words, I am given a type of types (i.e. Eq) and from it construct a function

Set :: Eq a => a -> Set a

This is a conditional type, in that it imposes a condition on a: if a is of type Eq then Set a is a type.

To make this concrete consider the types heap of sand, heap of bricks, heap of clothes. These fall into a pattern, in that though each of them is a type in its own right, underlying them is a more general type, the type heap. Each type of heap is formed by combining heap of… with another type from within a fairly wide class of types. So we combine a type (heap) with a type of types (types of things you can form into heaps) and derive a function (that takes a heapable type into the type of heaps of things of that type).
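The heap example can be sketched directly in Haskell (here using a GADT rather than the datatype-context syntax above; Heapable, Sand, Brick and count are all illustrative names of mine):

```haskell
{-# LANGUAGE GADTs #-}

-- Heapable is the "type of types" of things that can be heaped;
-- Heap a is a type only when a belongs to Heapable.
class Heapable a
data Sand  = Sand
data Brick = Brick
instance Heapable Sand
instance Heapable Brick

data Heap a where
  Heap :: Heapable a => [a] -> Heap a

count :: Heap a -> Int
count (Heap xs) = length xs

main :: IO ()
main = print (count (Heap [Brick, Brick, Brick]))
```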

It need hardly be said that this is entirely incompatible with notions of hierarchical or tree-based ontologies. A more subtle structure, such as that found in typed lambda calculus (see [2]), is probably required.

3.2.5 Conclusions

So we conclude that a viable ontology cannot be hierarchical or tree-based. This is not to say that it cannot have a parent-child structure, but whatever structure we choose must allow that (i) a type may have multiple parents, (ii) a type may be its own parent, and (iii) the most general rule for deriving more specialised types from less specialised must accommodate at least function types and conditional types.


  1. Aristotle. The Physics.
  2. H. Barendregt. “Lambda Calculi with Types”. In: Handbook of Logic in Computer Science. Vol. II.
  3. S. Kripke. Naming and Necessity.
  4. I. Niles and A. Pease. “Towards a Standard Upper Ontology”. 2001.
  5. Proclus. The Elements of Theology.
  6. W. V. Quine. “Ontological Relativity”. In: Ontological Relativity and Other Essays.
  7. W. V. Quine. Set Theory and Its Logic.