# The Porter Zone

Philosophical musings and more

## INTRODUCTION

An ontology is essentially a systematic model for how we break up the world around us into things. This clearly plays a role in perception, as it is the ontology that lets us construct mappings that say that to such and such a collection of qualia I will assign such and such a thing, hence allowing me to build a model of the world I inhabit. Ontology also plays a major role in language, in that language is itself an abstract formal means of communicating information, but that information more often than not consists of statements referring to objects, and it is an ontology that lets me translate these referring statements into things. So if you ask me to get you some strawberries, I use my ontology to translate the term ‘strawberry’ into something that I can recognise (red, small pips on surface, etc). Thus when one individual communicates to another, it is by using an ontology that they both connect referring terms in statements to referents in the world.

What this means is that if communication is to be unambiguous, the parties involved in the communication must share an ontology. Now, for the parties to completely share an ontology is in principle impossible, as demonstrated in our essay Against Standard Ontologies, but we can, by deliberately limiting the scope of our communication, limiting the complexity of our language and formalising the ways in which we refer to things, do something to minimise the risk of systemic confusion.  Therefore this paper attempts to understand what formal structures are required to provide a sufficiently rich world-picture to enable useful communication, while keeping the level of formality sufficiently high that we can avoid systemic misunderstanding, and also while acknowledging that a robot’s world-view will be radically different to ours. This last point is crucial, in that it means that our ontology must exist at a sufficiently high level that none of the referring terms we wish to use in communication actually refer to sensory percepts. Rather there must be several levels of abstraction between any referring term and the qualia we expect to perceive when we recognise the referred thing.

## THE LANGUAGE

Before we get on to types, it is worth saying a little about what we expect of languages, because that will inevitably influence the way that we express a type system, and maybe even the kind of type system we use. I keep the discussion at the level of syntax and structural semantics rather than content.

### Basic structures

So, a language is a formal system with a vocabulary of symbols which can be combined in various ways to produce sentences. We posit the basic rules:

• Symbols in the language are functional or non-functional.
• Functional symbols can belong to two classes: that of referring terms and that of properties.

So we have the beginnings of an ontology. Here non-functional symbols are connectives, such as ‘and’, ‘or’, etc. Symbols that are functional can refer to objects, or they can describe objects, and hence be properties. Note the crucial fact that I have not said that the division into referring terms and properties is exclusive: a symbol can be both a reference to a thing and a way of describing something else (e.g. ‘red’ is both a noun and an adjective).

In terms of an ontology, we have here defined two top-level and potentially overlapping kinds (that is to say concepts at a higher level than type): Referring and Property, so there are two top-level properties with these names that provide a basic classification. The next key step is the following:

• Given any referring symbol x and description P then I can ascribe the property P to x, so I can form the sentence ‘x is P’.

And now I finally need:

• All sentences have a truth value.

### Truth values

Even the statement that sentences have a truth value is mired in controversy, because what do we mean by a truth value? Do we mean standard Boolean True and False, or do we include intermediate values like Unknown and Undefined, or do we go even further and allow a continuous-valued logic where the truth value is a probability? All of these choices are possible and have persuasive arguments in their favour: intermediate truth values are useful when knowledge is incomplete, and probabilities are useful when it is uncertain. But suppose I am using probabilistic truth-values and I have a property P and I say ‘x is P’; what do I mean by saying that x is P with probability 0.3? It might mean that I am uncertain about what properties x has, and I am certain with probability 0.3 that it has P, or it might mean that x is some mixture, a proportion of which, given by 0.3, is P. The first of these applications of probabilistic logic is uncontroversial; the second is problematic, for not all types can be mixed. Therefore we conclude that there are two kinds of probabilistic thinking:

• Truth values dealing with probability arising from uncertainty. A typical sentence is ‘x is P’ which is true with probability p. This means that we are uncertain of the truth of sentence but believe that it is true with probability p.
• Truth values dealing with mixing. A typical sentence is ‘q% of x is P’, which, if it is true, means that we believe that x is a mixture, q% of which is P.

Note that the two kinds of probability may be combined, e.g. ‘q% of x is P’ with probability p.
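To keep the two readings from being conflated, it helps to give each its own representation. The following sketch (Python, with invented class names; none of this is part of the language being designed) simply separates the kinds of probabilistic judgement:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Uncertain:
    """'x is P' believed with probability p: uncertainty about a crisp fact."""
    sentence: str
    p: float

@dataclass(frozen=True)
class Mixture:
    """'q of x is P': x is a blend, a proportion q of which is P."""
    sentence: str
    q: float

@dataclass(frozen=True)
class UncertainMixture:
    """Both combined: 'q of x is P' is itself believed with probability p."""
    sentence: str
    q: float
    p: float

# Two incompatible readings of the bare number 0.3 attached to 'x is P':
a = Uncertain("x is P", p=0.3)                 # 30% sure that x is (wholly) P
b = Mixture("x is P", q=0.3)                   # certain that 30% of x is P
c = UncertainMixture("x is P", q=0.3, p=0.9)   # 90% sure that 30% of x is P
```

Keeping the two kinds as distinct types means a sentence can never silently slide from one reading to the other.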

### Properties and referring symbols

We can now consider a quite important question: whether types are always properties, or whether it is possible to have a type that is not a property. Saul Kripke (in Naming and Necessity) argues that proper names are such types. That is to say, ‘Julius Caesar’ is not a property, or even shorthand for a property, but is an unanalysed label picking out that one individual. This is, to say the least, problematic. First, each of us may know what ‘Julius Caesar’ means to us, and we may be able to communicate about him, but it would be a very brave person who claimed that my idea of Caesar is the same as yours. Second, how does this picking out work in the absence of a property-based description? Surely ‘Julius Caesar’ is just a meaningless jingle to which we have assigned meaning by associating it with a number of properties.

Now let us look at things the other way: is it possible to have a property that does not refer? The answer depends largely on whether or not we treat properties purely extensionally. That is to say, if a property is defined only in terms of the set of objects of which it is true, then there is no need for it to refer. However, such an approach is extremely limiting, because unless we have all properties of all objects pre-programmed into us, we cannot use properties to describe an object we have not previously encountered. We do not have such pre-programming, and neither will a useful robot; we have to be able to generalise. Therefore properties require a means of connecting them to the world of things, that is to say a way of referring.

If we take this thought to its furthest extent, it seems that the only real distinction between properties and referring symbols is that properties will in general refer to more than one thing. But even here the distinction is minimal, in that any collective symbol, e.g. ‘robot’ turns out on inspection to be nothing more than a description in disguise. It seems then that, if we follow Kripke:

• Property x implies Referring x.
• There is a kind ProperName such that ProperName x implies Referring x.
• Referring x and not Property x implies ProperName x.
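These three rules are simple enough to check mechanically. A toy sketch (Python; the vocabulary is invented for illustration and is no part of the formal language itself):

```python
# Rule 1 (Property x implies Referring x) is built into the construction:
# the referring symbols include all the properties, plus the proper names.
PROPERTIES = {"red", "robot"}                  # symbols that describe
REFERRING = PROPERTIES | {"Julius Caesar"}     # every property refers

def is_proper_name(sym: str) -> bool:
    """Rule 3: Referring x and not Property x implies ProperName x."""
    return sym in REFERRING and sym not in PROPERTIES

# 'red' refers but is also a property, so it is not a proper name;
# 'Julius Caesar' refers without describing, so it is one.
```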

It is worth making a quick observation about free variables, that is to say symbols like ‘it’ or ‘the thing’ that allow one to substitute any desired entity in place of the symbol. These symbols are at first sight neither referring nor properties, and yet they can be either, e.g. ‘It is my head’ or ‘it is my favourite colour’, where in the latter ‘colour’ is object to the property ‘my favourite’ and yet is itself a class of property.

### Analysed and unanalysed symbols

Kripke also insisted that his proper names be unanalysed. There is considerable value in the concept of unanalysed symbols. At least one unanalysed proper name exists, namely ‘me’ (whether the ‘me’ be a human or a robot). Others exist in the form of objects that are self-evident, manifest facts in the universe of the speaker. So a robot may have a number of sensors which are, to it, things whose nature is fixed, whose identity is self-evident, and which have no properties other than existence. However, there are two points here. First, the information produced by those sensors need not be unanalysed or undescribed.  Second, to a human observing the robot, each sensor on the robot will belong to a particular type, which may itself be defined in terms of another type, and so on. Thus we see that ontologies are indeed relative.

So:

• There is a kind Unanalysed which consists of symbols that are atomic, and are not defined in terms of other terms.
• ProperName x implies Unanalysed x.
• There are a number of unanalysed properties.

Unanalysed symbols refer to the things that we do not need to have defined because they are manifest and obvious (and hence they are very slippery, e.g. I know that ‘me’ is well-defined, but ‘you’ is hugely uncertain). There are also descriptions that are unanalysed, so a robot does not need to analyse ‘sensor’ any further.  We can build a hierarchy of symbols using the relation ‘is defined in terms of’: when trying to define a symbol we analyse the way we describe it, then define those descriptions by seeing how they are described, and so on until we reach unanalysed terms. Sufficient unanalysed terms must exist for this hierarchy to be finite. Therefore:

• The property symbols of the language form a finite hierarchy with relationships based on ‘is defined in terms of’. There are no a priori constraints on the structure of the resulting directed graph save that it must contain no cycles.
• The resulting directed graph is rooted in symbols x such that ‘Unanalysed x’ is true.

Note that this hierarchy is not a hierarchy in the usual ontological sense, because it is not based on the property that a description expresses. The hierarchy tells us how symbols are defined, not what they mean.
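The two conditions above, acyclicity and rootedness in unanalysed symbols, are easy to state as checks on a directed graph. A minimal sketch (Python, with an invented toy vocabulary):

```python
# 'is defined in terms of': each symbol maps to the symbols used to define it.
# An empty entry marks an unanalysed symbol. The vocabulary is invented.
DEFINED_IN_TERMS_OF = {
    "strawberry": ["red", "small", "pip"],
    "red": [],            # unanalysed
    "small": [],          # unanalysed
    "pip": ["small"],
}

def unanalysed(graph):
    """The roots: symbols not defined in terms of anything else."""
    return {s for s, deps in graph.items() if not deps}

def is_finite_hierarchy(graph):
    """True when every chain of definitions bottoms out, i.e. no cycles."""
    memo = {}
    def bottoms_out(sym, path):
        if sym in path:
            return False          # a cycle: the definition never terminates
        if sym in memo:
            return memo[sym]
        ok = all(bottoms_out(d, path | {sym}) for d in graph.get(sym, []))
        memo[sym] = ok
        return ok
    return all(bottoms_out(s, set()) for s in graph)
```

A vocabulary that defined ‘strawberry’ in terms of ‘fruit’ and ‘fruit’ in terms of ‘strawberry’ would fail the check: such a language never reaches its manifest, obvious terms.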

## THE TYPE SYSTEM

Now that we have the basics and the top-level kinds settled, we can begin to consider what kind of model we should use for types. To start, let us see what we need to do with types.

### Ascription

First, given any entity we need to be able to ascribe a type to it. As noted above, this applies equally to things and properties: a thing can be of one or many types, and a property can also be of one or many types. Therefore:

• There should be a way to ascribe a type to a symbol in a language.

Before we go any further with this we need to ask: is a type part of our language or not? All of the kinds I introduced above were metalinguistic, and it may seem reasonable for types to follow that model, and so to inhabit a metalanguage rather than the language proper. This view assumes that types cannot be reified, or at least cannot be described within the language, for they exist outside it. But now consider a type like ‘mammal’; that this is a type might be disputed, but it is certainly treated as one in common usage. This type is clearly not metalinguistic, as it can be defined in terms of other properties within the language, and so it seems that types must themselves be symbols within the language unless we want to limit quite severely the expressiveness of the type system.

This proves to be quite a constraint, as most formal type systems rely on metalinguistic type labels. Once types become symbols within the language, one can have types of types and so on, as well as interaction between types and unanalysed terms. In particular, it means that the type system can evolve with the language, which is clearly a good thing, though it follows that considerable restraint is needed to ensure that whatever type system is used does not become over-complex. The price of this profusion of riches is that we are now severely limited in how types are expressed and ascribed. Mechanisms such as the type labels of the typed lambda calculus, though usable, become extremely limiting, because now we have symbols acting in two roles depending on whether they appear as a referring symbol or as an ascribed type. A much simpler, and more natural, approach is to treat types as predicates or properties within the language, so a referring symbol x is of type T if ‘x is a T’ evaluates to True.

• There is a kind Type such that Type x implies Description x and all types have kind Type.
• Type ascription is effected by predication, so ‘x is a T’ is a model sentence.

One may think that as a corollary of this, no unanalysed symbol may belong to any type within the language. But consider, ‘me’ is an unanalysed symbol, being a proper name, and yet it carries the description of ‘person’ or ‘robot’ or whatever. Therefore in fact any referring symbol can be ascribed a type.

### The type system

So we are now at the position that a type is a description of kind Type and we assert that a thing x has type T by saying ‘x is a T’. Now we need to discuss relations between types. A common model for ontologies is to use types that are formed into a hierarchy, so each object lies in a particular place on a tree of types, and so is a member of one type, which is itself a member of another and so on up to the root. We saw in Against Standard Ontologies that this model is untenable so something more complex is required. In order to clarify a possible confusion, note that types, being descriptions, are indeed hierarchical, but the hierarchy involves their definition and not their membership, that is to say type T1 being defined in terms of type T2 does not imply that everything that is a T1 is also a T2. Therefore there is no inconsistency in our model.

So how are types organised? It makes immediate sense to introduce a relation along the lines of ‘is a’ on types, which in fact generalises the predication relation that ascribes a type to an entity, in a way consistent with our contention that types are referring symbols. Thus I can say ‘T1 is a T2’, which implies that if any x obeys ‘x is a T1’ then in addition ‘x is a T2’. Therefore:

• The relation ‘x is a T’, where x is any referring object and T is a type or kind, is transitive, so ‘x is a T1’ and ‘T1 is a T2’ imply ‘x is a T2’.

Note that this means that the relation ‘is a’ is not that between class and superclass or between object and class, but is rather a more complex relation that comprises both of these. However it can be useful to think of a type as a class and a referring term as an object (recalling that classes themselves are objects), and I shall refer to this analogy with object-orientation repeatedly below.

So we can model types as a graph. There is no obvious a priori structure to this graph; in particular, any one entity may belong to more than one type. For example ‘sonar sensor’ is a ‘distance sensor’ and an ‘imaging sensor’, and though ‘sonar sensor’ and ‘imaging sensor’ both have type ‘sensor’ they are themselves distinct types. Therefore, in object-oriented terms, we are dealing with a type system that allows multiple inheritance.
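In code, the transitive ‘is a’ relation over such a multiple-inheritance graph is just reachability. A sketch (Python; the sensor vocabulary follows the example above, and the representation is my own illustrative choice):

```python
# Each symbol (object or type) maps to the types it directly 'is a'.
IS_A = {
    "my sonar": ["sonar sensor"],
    "sonar sensor": ["distance sensor", "imaging sensor"],  # multiple inheritance
    "distance sensor": ["sensor"],
    "imaging sensor": ["sensor"],
}

def all_types(x, graph=IS_A):
    """Transitive closure: everything x 'is a', directly or indirectly."""
    result, frontier = set(), list(graph.get(x, []))
    while frontier:
        t = frontier.pop()
        if t not in result:
            result.add(t)
            frontier.extend(graph.get(t, []))
    return result
```

Note that the same relation serves for object-to-class ascription (‘my sonar’ is a ‘sonar sensor’) and class-to-superclass membership (‘sonar sensor’ is a ‘distance sensor’), exactly as the bullet above requires.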

More intriguingly, we have noted above that even unanalysable descriptions may themselves have types. Therefore the unanalysable symbols are not themselves the roots of the directed graph constructed by the relation ‘is a’. We must, however, ensure that the graph of types is finite by avoiding cycles. There is a subtle point here. Consider the recursively defined binary tree type:

• If Type A then a Tree A is either:
• A Leaf, which is a value of type A, or
• A Node, which consists of two Tree A, one for each of the left-hand and right-hand sub-trees.

Here, though the definition is recursive, the recursion is functional rather than typological; that is to say, the type itself creates no cycle within the graph of types.

• Types form a directed graph based on ‘is a’ which is finite and contains no cycles.
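The Tree example can be sketched concretely. The rendering below (Python, using `typing`; the names are the obvious ones, not a fixed formalism) shows that the recursion lives in the values, while the graph of types stays cycle-free:

```python
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

A = TypeVar("A")

@dataclass(frozen=True)
class Leaf(Generic[A]):
    value: A

@dataclass(frozen=True)
class Node(Generic[A]):
    left: "Tree[A]"
    right: "Tree[A]"

# Tree A is either a Leaf or a Node. Tree[int] contains Tree[int]
# sub-trees, but the type Tree[int] does not sit below itself in
# any 'is a' hierarchy: the recursion is in the values.
Tree = Union[Leaf[A], Node[A]]

def leaf_count(t: "Tree[A]") -> int:
    return 1 if isinstance(t, Leaf) else leaf_count(t.left) + leaf_count(t.right)

t: "Tree[int]" = Node(Leaf(1), Node(Leaf(2), Leaf(3)))
```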

However, there is another point here, which argues against a purely graph-theoretic view of types: a recursive type such as Tree is not well described within such a hierarchy. In particular it is an example of the critical concept of a parameterised type, which leads us into the next section.

### Predication

Say I have a predicate P in my language, so I can apply it to any term x to create a new term Px.  Say x has type T1 and P has type T2; what type does Px have? It can depend on T1 alone, on T2 alone, or on both, and in some complex cases it can depend on the precise values of P and x. The obvious way of handling this is with generalised arrow types. Recall that a basic arrow type is a type

T = T1 → T2

such that if P has type T and x has type T1 then Px must have type T2. I want to generalise this a little. What we need of a predication type is first a guard expression which states that I can only compose P with x if the type of x obeys some condition, and second a rule which states that there is some function such that the type of Px is the result of applying this function to x.

• The most general form of type we need takes the form ‘Px is a Tx provided Gx’, where G is a Boolean predicate and T is a function that maps referring objects to types.

Using this model I can describe the binary tree type above by saying that it takes any type A and returns, based on it, a type Tree A as defined above. This kind of parameterisation is absolutely crucial, for example, in data analysis and artificial intelligence, where I want to take streams of data from any number of sources, which might therefore have different types, but to apply standardised processing to them regardless of the underlying type. And, more to the point, this is something I cannot necessarily do with a standard object oriented inheritance paradigm, because the range of types I want to work with might be so disparate that in fact they share no meaningful common ancestor.
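A minimal sketch of such a guarded arrow type (Python; `make_arrow`, and the representation of types as tags, are my own illustrative choices, not a fixed formalism):

```python
def make_arrow(guard, type_fn, body):
    """Build 'Px is a Tx provided Gx': guard G checks the argument's type,
    type_fn T computes the type of the result, body computes the value."""
    def apply(x, x_type):
        if not guard(x_type):
            raise TypeError(f"guard rejects argument of type {x_type}")
        return body(x), type_fn(x_type)   # the value Px and its computed type
    return apply

# Example: a polymorphic single-leaf tree constructor. The guard accepts
# any type A; the type function maps A to Tree A, as in the Tree example.
leaf = make_arrow(
    guard=lambda t: True,              # no constraint on A
    type_fn=lambda t: ("Tree", t),     # T maps A to Tree A
    body=lambda x: ("Leaf", x),
)

value, ty = leaf(42, "Int")            # ty is ("Tree", "Int")
```

Because the result type is computed by a function rather than read off a shared ancestor, the disparate input types need no common superclass, which is exactly the advantage over pure inheritance claimed above.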

### Conclusion

In conclusion our need is for something quite subtle that involves a number of different structures, some based on object orientation, some closer to functional programming. We have to effect a difficult balancing act so as to keep them all in play and not allow any of the different paradigms to take over or become too complex.

# Introduction

In his essay (appropriately) titled ‘Ontological Relativity’, Willard Quine introduced the notion that there is no such thing as a fixed, standard ontology, but instead ontology must be relativised, so each individual has one or more ontologies unique to them, and that we use language as a means to (where possible) translate between them.  The key point in his argument was that it is impossible, purely by means of language, for me to determine whether you and I ontologise concepts for which we have a common term in the same way.  That means that we cannot, as one may have thought, use language to establish a consensus ontology, as we cannot, purely based on language, derive a unique meaning for common terms.  To use Quine’s example, we may have an agreed term ‘rabbit’, and we may even agree on what it denotes, but we have no way of determining whether it should ontologise as ‘an animal of such and such a shape’ or as ‘a collection of such and such kinds of body parts’.  In the absence of a consensus ontology, we must therefore conclude that there is complete ontological relativity, a fact which is one of the starting points for my essay Against Standard Ontologies.

Now, Quine’s argument is very persuasive, but it depends largely on rather tendentious thought experiments, such as the infamous case of the rabbit Gavagai.  This is not to say that these thought experiments are invalid, but as they depend on somewhat unusual special circumstances to acquire their force, they inevitably lead to the question of whether ontological relativity is truly endemic, or whether it is purely a feature of extreme cases within the realm of possible ontologies, and whether most of the time we can actually establish a consensus ontology.  Therefore, in this essay I shall present a formal argument, based on the structure of language, which does not depend in any way on special examples and which shows that any reasonably complex language can and must exhibit ontological relativity.

# Argument

I am going to walk through the structure of language stage by stage, starting from the individual units of language and building up via grammatically correct sentences to sentences with truth value, sentences with reference to a model of the world and finally sentences that refer to the world as we perceive it.  In the process we will see precisely where ontology enters and why it must be relativised.

## About language and ontology

So we start from the basic units of language.  In English these are words, but in other languages (especially agglutinating languages) these might be lexemes that glue together to form words.  Therefore I will use the abstract term ‘element’ to refer to the basic atomic unit of language, that is to say the collection of basic units that can be combined and recombined to form utterances.

### Syntax

It seems to be a general fact that in all natural languages (at least all the ones we know about) elements combine to form utterances.  Utterances themselves generally consist of one or more segments, each of which is capable of standing on its own as a complete, formally correct unit of speech.  That is to say, these segments can be uttered on their own and be assigned a ‘meaning’ (more on that anon).  To see my meaning more precisely consider the sequences of English words:

1. The cat sat on the mat
2. He ate them because he

Here 1 can stand on its own.  It leaves no question hanging.  However, 2 is incomplete, as we do not know what it was that he ate them because of.  I will call these basic segments sentences.  Thus 1 is a sentence and 2 is not.  The rules specifying whether a sequence of elements is or is not a sentence constitute the grammar or syntax of a language.  So syntax tells us how to build sentences from elements.

### Semantics

A grammatically correct sentence is all very well, but if we want to do anything with it we need to be able to tell its truth value.  That is to say, if a sentence can be seen as an observation about the way the world is, we want, given a source of information about the world to plug into it, to be able to tell whether that observation is accurate.  The next step gives us part of this information, in that given a grammatically correct sentence, the semantics of the language tell us how to derive the truth value of a sentence from information about a class of special elements within it: its predicates.

A predicate is a unit that predicates a property of an object (the object can be pretty well anything, from a referenced thing in the world, to another predicate, to a complete sentence) in such a way that the result of doing so is a truth value.  For example consider the following:

1. The grass is green
2. ‘All your base are belong to us’ is a grammatically correct sentence

Here 1 applies the predicate ‘is green’ to the object ‘the grass’, giving the truth value ‘true’, while 2 applies the predicate ‘is a grammatically correct sentence’ to the object ‘all your base are belong to us’, giving the truth value ‘false’.  Given a predicate one can, in principle, define its extension and antiextension, which are respectively the collections of objects of which it is true and of which it is false.

My assertion, which appears to be true of all known natural languages, and which goes back in philosophy to Alfred Tarski, is that once I know the extension and antiextension of all predicates in a sentence, and know which of these all objects in a sentence belong to, then the semantics of the language tell me how to derive the truth value of the sentence from that information and the structure of the sentence.  Consider the examples:

1. The grass is green
2. The dog, which had long hair, was rolling in something that smelled horrible

1 is obvious: as noted above, we just check whether the object ‘the grass’ is in the extension of the predicate ‘is green’.  If it is then the sentence is true.  2 is more interesting; to see how it works, let me recast it:

1. There was a thing x such that x smelled horrible and the dog was rolling in x and the dog had long hair

So the sentence is true precisely when (a) the dog had long hair, and there is some thing x such that (b) x smelled horrible and (c) there is a relation of ‘was rolling in’ between the dog and x.  So the truth value of the sentence reduces to evaluation of three predicates.
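The reduction becomes entirely mechanical once the extensions are fixed. In the sketch below (Python; the extensions, and the name given to the smelly thing, are invented purely for illustration) the truth of the dog sentence is exactly the three predicate checks (a), (b) and (c):

```python
# Extensions of the three predicates; unary predicates hold objects,
# the binary 'was rolling in' holds ordered pairs.
EXT = {
    "had long hair": {"the dog"},
    "smelled horrible": {"the fox mess"},
    "was rolling in": {("the dog", "the fox mess")},
}

def dog_sentence_true(domain):
    """'The dog, which had long hair, was rolling in something that
    smelled horrible': (a) and there-exists-x such that (b) and (c)."""
    return "the dog" in EXT["had long hair"] and any(
        x in EXT["smelled horrible"]                       # (b)
        and ("the dog", x) in EXT["was rolling in"]        # (c)
        for x in domain
    )

domain = {"the dog", "the fox mess", "the grass"}
```

The semantics supplies only the shape of this computation; the extensions themselves are exactly the information the formal system cannot supply, which is where reference, and then ontology, enter.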

### Reference

Now we have our predicates with their extensions and antiextensions.  At the moment we have a purely formal system of symbols that bears no relation to the world as we perceive it.  How do we know how to relate the objects in a sentence to objects in the world?  In other words, how do we know what ‘the dog’ in the sentence above refers to?  This actually turns out to involve three steps.  First we have to identify what the things are that our world consists of; second we have to describe each kind of thing, so that we can recognise it when we see it; third we have to identify which of the things we discriminate within the world is the thing referenced in our sentence.

For the moment we stick with the third of these steps.  Say we have correctly discriminated the world into a collection of things.  We then need to be able to look at that collection and relate objects within our sentences to those things.  This is what we mean by reference: a term like ‘the dog’ in our sentence above is said to refer if it corresponds precisely to a thing in the world that we have discriminated as being of the kind ‘dog’.  Reference is therefore, as we can see, absolutely necessary if we are to be able to make any sentence we utter concrete, in the sense of relating to the world we perceive.  Moreover, even with sentences dealing with purely abstract matters, if terms do not refer, that is, if they cannot be assigned to (abstract) things of specific, well-understood, commonly agreed kinds, then there is no way that I can understand your utterances, for there is no way that I can relate the objects in your sentences to anything in my conceptual world.  Thus without reference, language as a tool for communication is useless.

### Ontology

The final thing we have to deal with is the first two steps outlined above as preconditions for reference, that is to say building a conceptual model of the kinds of things the world is made of, then describing each kind of thing in such a way that we can discriminate instances of it within the world and ask questions about its properties (that is, assign it to the extensions or antiextensions of predicates).

This turns out to be the part of the structure which simultaneously is the most critical for evaluating the ‘meaning’ of sentences and the one about which we can say least.  The first of these claims should be obvious, in that if I divide up the world in a different way to you then you may utter sentences that, from your point of view, reference specific objects, and yet, from my point of view, those objects do not even exist.  A simple case of this would occur if I had been blind from birth, in which case colour terms would be entirely meaningless to me; words like ‘red’ and ‘green’ would be valid words, and I would even be able to determine the truth value of sentences like:

1. Green is a colour
2. An object can be red all over and green all over simultaneously

But those sentences treat ‘red’ and ‘green’ as objects of predicates like ‘is a colour’, not as predicates in their own right.  As predicates, they have no reference and hence no (anti)extension, so I genuinely have no way of answering as to the truth value of:

1. This dog is brown

As an additional subtlety, given the sentences:

1. Unripe tomatoes are green, ripe tomatoes are red
2. This tomato is green

Then if I were blind from birth, I could answer as to the truth value of 1, because I can learn these facts about the habitual colours of tomatoes, and yet I have no way of answering 2 other than asking someone else to do it for me.  Going the other way, say I were a human being and you were an animal with sonar-based senses (e.g. a dolphin).  To such an animal, an object’s properties go beyond its visible externals and include its internal constitution in terms of density, mass distribution, etc.  Thus your ontology would contain large quantities of information that simply vanishes on translation to mine; you would distinguish classes of objects that I saw as being identical.  Ontology is inherently private.

## Analysis

We conclude from this that two speakers of a language can easily agree on syntax and semantics, as these are the mechanics of language, which depend only on the internal structure of a sentence and not at all on the outside world.  Reference begins to be problematic, for example consider the sentence:

1. Cicero was troubled by serious crime

Does ‘Cicero’ reference the American city or the Roman Senator?  In either case the sentence is true, so we have to deduce reference from context.  Thus reference depends not just on the sentence itself, but on the context in which it is placed.  This context has two aspects.  First, we can assign reference to particular terms by ostension, that is by (literally) pointing at an object while using the term we wish to assign it to, e.g. saying ‘This dog is brown’ while pointing out a particular dog.  This can be generalised to apply to a very wide range of cases.  It provides what we can consider the occasion-specific part of the context by indicating those references that cannot be deduced from the sentence or from background knowledge.  So, second comes background knowledge, or what Quine calls a conceptual scheme.  I do not need to have the term ‘dog’ in the sentence above defined for me because you assume that I know what a dog is.

How can you test that I know what a dog is?  The test is simply that you and I should agree on the contexts in which the term ‘dog’ can be used in a sentence and on the truth of the resulting sentences (at least in cases where we can both make sense of those sentences).  So if I were to answer ‘It’s not a dog, it’s a canary’ that would imply a failure of common reference.  However, we can determine whether you and I agree on the class of objects referenced by the term ‘dog’, and if we do then we assume that we have a common reference.

As soon as we move on to ontology, that breaks down entirely.  It may be that I break the world down in a way entirely alien to you, but have still been able to spot common features in things you reference as ‘dog’, and so can agree on the reference of the term, even if my ontology is entirely different.  For example, if I had the senses of a spider with eight eyes, complex chemical sensors (sense of smell) and very sensitive motion detectors, my ontology might classify all items based on whether they were moving or not, so I would consider a moving dog as distinct from a stationary dog, not out of perversity or choice, but simply because my brain was wired in such a way that all visual percepts automatically came to me with a motion indicator attached to them.  Again, if I were a robot which had eight distance sensors instead of two eyes, my ‘visual’ perception of the world would be as structures in an eight-dimensional space and would (as for the dolphin) include information about internal structures of objects, and again this information would be an inherent part of my perception, not just something tagged on to a more basic perception.  So if perception differs, ontology will differ.

But now, none of us have the same perceptions as one another and none of us have the same conceptual schemes as one another.  You and I will be trained by common culture in how to break things down as far as reference goes, and in so far as our common neural anatomy goes, but as we move beyond reference into ontology, as ontology is always private, we have no way of telling whether we do, in fact, share an ontology or not, because our only tool for testing this claim is language, and language can only tell us about reference.  Therefore ontological relativity is necessary, not because we can prove it is true, but because it is necessarily impossible to prove that it is not true.

# 1 Introduction

It is a truism that lay persons rush in where experts fear to tread. We are too well aware of the many enthusiasts who insist that they have built a perpetual motion machine, that they can square the circle, and so on and so forth. Where philosophers have long concluded that there can be no such thing as a single standard ontology, non-philosophers ignore such minor issues and set about trying to build one (e.g. SUMO, see [4]).

Unfortunately, it is still the case that there can be no such thing as a standard ontology. As I will show in this note, at best there can be a number of local ontologies, each dealing with a small, well understood problem domain, where there is only one point of view. This latter criterion is crucial: if I am building ontologies in (say) robotics, I have to accept that the points of view of the robot’s designer, programmer and user are very different, not to mention the point of view of the robot itself. Thus each of these must involve a separate ontology.

I proceed by setting out the arguments for ontological relativity, the claim that multiple equally valid ontologies are endemic. Having done this I show that there are, in fact, very severe constraints on what a candidate ontology can look like, imposed not by a world-view but by the requirement for philosophical coherence.

# 2 Ontological relativity

An ontology is a (hopefully systematic) collection of types whose intersections are such that by applying subsets of the types in the collection to a thing we can reach the point at which we have a sound description of that thing, and some understanding of its structure. But this is immensely problematical.

## 2.1 Multiple Ontologies

Consider first the case of types of things whose existence is debated. For example, I may believe in angels, you may not. So, even if an ontology extended to include the category of ‘imaginary things’ we would end up categorising angels in different ways. Thus there is no way that even one ontology can be applied in a consistent and unambiguous way across all cases and individuals.

Now consider the case where I am a classical physicist and you are trained in quantum mechanics. Your conceptual world contains ideas such as ‘wave function’, ‘S-matrix’, ‘state vector’ and so on and so forth; mine does not. Thus it is not a matter of our having a common set of categories but disagreeing as to how to categorise a thing; in this case you have categories whose very existence is unknown to me. Therefore either we must conclude that multiple ontological frameworks must coexist, or we must assert that progress will inevitably drive us to bigger and better ontologies, or we must become Platonists and assert that there is a single ‘correct’ ontology, but we have not yet discovered it all. Of these options, the second is dumbfounding in its arrogance and, less pejoratively, is merely a weak form of the third. The third is unprovable and also faintly worrying for all those of us who are not Platonists. Therefore multiple ontologies must coexist.

## 2.2 Coexisting ontologies

Third, we can do serious damage to the Platonist point of view. Consider Quine’s famous example from [6] where you and I see a rabbit, you say gavagai and I deduce that gavagai means rabbit. Which seems perfectly sound, until we consider the assumptions inherent in this deduction. I have assumed that you ontologise the world into things in the same way as me, so you look at what I think of as a rabbit and see a single thing. But you could use an ontology in which the basic unit is the body part, and then there are names for particular collections of types of body part, so gavagai actually refers to the components of what I would call a rabbit.

Quine showed that, in fact, there is no way of distinguishing by purely extensional communication whether gavagai means rabbit or ‘a particular collection of types of body part’, meaning that both ontologies are equally valid and the difference causes no problem in communication. It is therefore impossible to privilege one over the other; any attempt to do so would inevitably end up deriving more from personal prejudice than from any rigorous criterion. Thus, not only are multiple ontologies possible, they are endemic. In [6] Quine coined the term ontological relativity to refer to this concept that in fact there can be no preferred ontology.
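Quine's point can be made mechanical with a toy sketch in Haskell (every name below is invented for illustration): two predicates built on entirely different ontologies that nonetheless assent and dissent on exactly the same occasions, so no extensional test can separate them.

```haskell
-- A toy model of Quine's gavagai argument; all names are invented.

data Scene = RabbitPresent | RabbitAbsent deriving (Eq, Show)

-- Ontology A: the basic unit is the enduring animal.
gavagaiAsRabbit :: Scene -> Bool
gavagaiAsRabbit s = s == RabbitPresent

-- Ontology B: the basic unit is the body part; 'gavagai' names
-- the co-occurrence of ears, tail and so on.
gavagaiAsParts :: Scene -> Bool
gavagaiAsParts s = earsHere s && tailHere s
  where
    earsHere = (== RabbitPresent)
    tailHere = (== RabbitPresent)

-- No purely extensional test can tell the two apart: they agree
-- on every possible scene.
extensionallyIdentical :: Bool
extensionallyIdentical =
  all (\s -> gavagaiAsRabbit s == gavagaiAsParts s)
      [RabbitPresent, RabbitAbsent]
```

The two speakers differ radically in what they take the world to contain, yet agree on the truth of every observable sentence, which is exactly the situation described above.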

## 2.3 Local ontologies

Therefore we must conclude that there is no global ontology that can be applied by fiat. At best there are local ontologies, tailored to specific problems or domains, between which we translate. This should not, of course, come as a surprise to anyone who regularly switches between vocabularies depending on context (e.g. technical, formal, informal).

# 3 Constraints on ontologies

## 3.1 Concrete vs Abstract

We need to be very careful with the formulation of the categories that make up ontologies, for the way we formulate them can depend on the precise world-view we want to adopt. Moreover, they can result in severe constraints being imposed on the resulting ontology. Thus any candidate ontology must be verified not just against its creators’ view of the world, but against meta-ontological requirements of coherence and consistency. In this section I demonstrate this fact by analysing one apparently safe top-level categorisation, into concrete and abstract.

### 3.1.1 What is concrete?

What, precisely, do we mean by concrete? The folk-epistemology definition that something is concrete if it is real is far from helpful, because if I am a Platonist then, as far as I am concerned, $\aleph_0$ is real, whereas if I am a constructivist I might assert that only finite integers are real, if I were an empiricist I might deny the negative integers, and if I were a strict empiricist I might wonder whether it is actually provable that the integer 472,533,956 is realised anywhere in the physical world. So the naive view founders on ontological relativity.

So say that a thing is real if it can be realised; that seems safe enough. A horse can be realised, so horses are real. But what about unicorns? The fact that no realised unicorns have been discovered does not mean that they cannot be realised, only that they have not been realised; there is a clear distinction between absence and impossibility. Now, we might decide to rule against unicorns because they are imaginary, but consider the case of the top quark. Top quarks have been demonstrated to be realised, so top quarks are concrete. But the top quark as a thing was hypothesised long before it was discovered, so what was its ontological status after its invention but before its discovery? If unicorns are abstract, so must the top quark have been, in which case it suddenly underwent transition from abstract to concrete upon its discovery. Thus either, once again, ontological relativity rears its ugly head, or else we have to accept the Platonic position that anything we can construct hypothetically is, in fact, concrete.

### 3.1.2 Types and kinds

In fact things get much worse. When we speak of things, do we consider a thing to be anything that is realisable, or does it have to correspond to a particular object? To put it more formally, can types and kinds be things? To return to my example, horse is actually a type, in that it consists of a collection of qualities that allow us to ascribe identity to one particular class of things. But surely types cannot be concrete, for (unless we are Platonists) surely the concept horse cannot be realised, precisely because it is, in the truest sense, an abstraction.

So let us suppose that all types and kinds are abstract. What, then, is left to be concrete? That question is very hard to answer, because once we have taken away all types, kinds and properties (for properties are merely a kind of type), what is left is formless, undistinguished stuff. Indeed, as Quine has pointed out ([7]), even proper names can be thought of as properties, as they are essentially predicates that allow us to distinguish one thing from the rest, and hence are a property held only by that thing. Even within the context of Kripke’s rather more Platonic universe, the rigid designator ends up as being a kind of label that picks out a particular thing ([3]), and is hence a property or type. Thus, once we have stripped away all types and kinds what is left is things that are undistinguished and undistinguishable, the unknowable thing in itself. The concrete category might well exist, but in as far as the purpose of an ontology is to enable proper categorisation of things, then it is useless, because it is not susceptible to categorisation.

Therefore it follows that when we are building an ontology, we might, if we so wished, make an initial division into concrete and abstract, but we would immediately find that at that point we had, at least in the concrete category, gone as far as we could go, and that all subsequent work must involve the imposition of structure upon the abstract. Therefore, any ontology that attempts to maintain a distinction between the concrete and the abstract while imposing structure on the concrete is incoherent.

## 3.2 Hierarchies and other structures

There is a common assumption among practitioners of practical ontology that ontologies must be hierarchical, that is to say that each type or kind is a specialisation of precisely one (more general) type or kind, and so on all the way back to a single root kind. Thus the categories that make up the ontology form a simple tree. This top-down approach is strongly rooted in pre-modern systematic philosophy (see [1] and [5] for examples) but it is not obvious how realistic it is.

### 3.2.1 Hierarchical models are not sufficient

Consider, for example, the case of the platypus. A platypus is a type of mammal, but it is also a type of egg-laying animal, and those two types cannot be placed in a hierarchical relation to one another. Hence, the type platypus cannot be derived from only one parent type. As a more conceptual example, the C declaration

```c
typedef union
{
    long l;
    double d;
} longdouble;
```

creates a type which is simultaneously a type of long and a type of double; in fact it is polymorphic and can be taken to be of either type.

Consider also this problem. Say I decide that a relation is a type of thing within my ontology. So it must sit somewhere in my hierarchy. But any realised relation is a relation (one type) between one or more things (one or more additional types), and so the realised relation derives from at least two types, and may derive from any number. This is evidence of a certain problem with naive ontologies: if one tries to make an ontology all-embracing then it has to end up being self-describing, so meta-ontological structures such as relation become part of the ontology and end up being related to almost everything.
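The point about relations can be sketched in Haskell (all names below are invented): the type of a realised relation is parameterised by the types of all its relata at once, so it cannot be a specialisation of a single parent.

```haskell
-- A minimal sketch of a relation as a thing in the ontology.
-- All names here are invented for illustration.

data Person = Person String
data Dog    = Dog String

-- The relation type takes the types of its relata as parameters,
-- so 'Relation Person Dog' derives from Relation, Person and Dog
-- simultaneously: at least three type-parents.
data Relation a b = Relation String a b

relationName :: Relation a b -> String
relationName (Relation n _ _) = n

owns :: Person -> Dog -> Relation Person Dog
owns = Relation "owns"
```

Note how the meta-ontological structure `Relation` ends up, as the text says, related to almost everything: any pair of types in the ontology can appear as its parameters.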

### 3.2.2 Recursive types

In fact, we can go further. Clearly any reasonable ontology must allow for recursive types. For example, in Haskell we might specify the type of (rather ironically) a binary tree as

```haskell
data Tree a = Leaf a | Node (Tree a) (Tree a)
```

In general, we can only define the type binary tree in terms of itself, and this is far from being the only example. A sound ontology has to allow for recursive definitions, but a hierarchy cannot.

### 3.2.3 Functional types

A classic example of a type that will not fit in any hierarchy is the function type, that is to say a type of things that change the type of other things. So, for example, transducers are a type of thing that convert one type of energy to another, e.g. microphones, which convert sound energy into an electric current. We can model this as

```haskell
transducer :: a -> b
```

where a and b are the input and output types of energy. So this function type depends crucially on two types, the input and the output.
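A concrete instance may help; the energy types and the sensitivity figure below are invented for illustration. The point is only that the transducer's type mentions two further types, so it cannot sit under a single parent in a hierarchy.

```haskell
-- A microphone as a function from sound to current.
-- Both types and the scaling factor are invented illustrations.

newtype Sound   = Sound Double    -- sound pressure, arbitrary units
newtype Current = Current Double  -- signal level, arbitrary units

microphone :: Sound -> Current
microphone (Sound p) = Current (0.1 * p)  -- invented sensitivity
```

Here `microphone` instantiates the schematic `transducer :: a -> b` with `a = Sound` and `b = Current`; its type is constituted by both, just as the type transducer depends on the types of energy it relates.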

### 3.2.4 Conditional types

This is complex enough, but the example of the tree demonstrates just how far from being a hierarchy an ontology can get. Recall that we defined

```haskell
data Tree a = Leaf a | Node (Tree a) (Tree a)
```

Here a is a parameter that can stand for any type. So this prescription tells me how to make a binary tree of type a. Continuing down this route, we can be more stringent, for example

```haskell
data (Eq a) => Set a = Set [a]
```

says that I can make a set whose elements are things of any type a that happens to belong to the type Eq. In other words, I am given a type of types (i.e. Eq) and from it construct a function

```haskell
Set :: (Eq a) => a -> Set a
```

This is a conditional type, in that it imposes a condition on a: if a is of type Eq then Set a is a type.

To make this concrete consider the types heap of sand, heap of bricks, heap of clothes. These fall into a pattern, in that though each of them is a type in its own right, underlying them is a more general type, the type heap. Each type of heap is formed by combining heap of… with another type from within a fairly wide class of types. So we combine a type (heap) with a type of types (types of things you can form into heaps) and derive a function (that takes a heapable type into the type of heaps of things of that type).
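The heap pattern can be sketched directly; all names below are invented. Since datatype contexts of the kind used above are deprecated in modern Haskell, this sketch instead puts the constraint on a constructor function, which expresses the same idea: a type of types (`Heapable`) and a function from any type in it to the corresponding heap type.

```haskell
-- 'Heapable' plays the role of the type of types: the class of
-- things you can form into heaps. All names are invented.
class Heapable a

data Sand  = Sand
data Brick = Brick

instance Heapable Sand
instance Heapable Brick

newtype Heap a = Heap [a]

-- The constrained constructor: given any Heapable type a, it
-- yields a value of the type 'Heap a', mirroring the function
-- from heapable types to heap types described above.
mkHeap :: Heapable a => [a] -> Heap a
mkHeap = Heap

heapSize :: Heap a -> Int
heapSize (Heap xs) = length xs
```

So `Heap Brick` (heap of bricks) and `Heap Sand` (heap of sand) are distinct types in their own right, yet both derive from the general type `Heap` combined with a member of the class of heapable types.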

It need hardly be said that this is entirely incompatible with notions of hierarchical or tree-based ontologies. A more subtle structure, such as that found in typed lambda calculus (see [2]), is probably required.

### 3.2.5 Conclusions

So we conclude that a viable ontology cannot be hierarchical or tree-based. This is not to say that it cannot have a parent-child structure, but whatever structure we choose must allow that (i) a type may have multiple parents, (ii) a type may be its own parent, and (iii) the most general rule for deriving more specialised types from less specialised must accommodate at least function types and conditional types.

# References

1. Aristotle. The Physics.
2. H Barendregt. “Lambda calculi with types”. In: Handbook of Logic in Computer Science. Vol. II.
3. S Kripke. Naming and Necessity.
4. I Niles and A Pease. Towards a Standard Upper Ontology. 2001.
5. Proclus. The Elements of Theology.
6. W Quine. “Ontological Relativity”. In: Ontological Relativity and other essays.
7. W Quine. Set Theory and its Logic.

# Introduction

We are in the habit of assuming that the way the world appears to us is the way that it is.  Folk epistemology (and, rather regrettably, some academic epistemology) tends towards the Platonic notion that when we perceive an object then we, as it were, directly apprehend its true nature in our minds.  And yet any amount of evidence suggests that this is not true.  People who are red-green colour-blind cannot distinguish red from green, but can distinguish shades of green that those of us with trichromatic vision cannot, and none of us can equal the amazing mantis shrimp with their sixteen separate colour receptors.   We perceive not the world as it is, but those parts of it that are mediated to us via our senses.

Thus it is only reasonable to say that the world as we perceive it is born of an interaction between whatever it is that is actually out there and whatever our senses are capable of perceiving.  But we can now go one stage further and note that there is considerable evidence that it is not just the hard-wired circuitry in our heads, but the ideas in our minds that structure the world we see.  In a famous experiment, people intent on counting basketball passes simply do not see a man in a gorilla suit walking across the scene, because they are not expecting gorillas.  Likewise, people whose mother tongue does not distinguish blue and green as colours tend to get very confused when those whose mother tongues do distinguish them insist that they are different.

It follows from this that there is the exciting possibility that much of the world as we think we see it, the ontology, the basic structure, is in fact an artefact of our perceptual and psychological systems, and that there is no simple or direct relationship between it and the thing in itself (whatever that is).  The only question is: how much can we be certain about?  Well, quantum mechanics suggests that our idea of ‘thing’ is unreliable, as it replaces localised things with global wave-functions that we happen to perceive as localised things, but it leaves the concepts of time and space intact.  In this essay I intend to argue that in fact even these, the apparent bedrock of being, are illusory, and that there is no such thing as the flow of time, the arrow of time, or physical space (no matter how many dimensions it has).  Instead there is a complex of instants and patches of space that we assemble because it is in our nature to expect a continuous flow of events.

So, the argument will go as follows.  First I will examine why time and space might be illusions.  Then I will analyse the appearance of continuous time and space and attempt to determine why it arises, and what it is that creates the illusion of an arrow of time.  Finally I will note some interesting correspondences between the model and ideas from modern physics.

# Why might time and space be illusions?

This section is intended by way of a taster, setting out some of the reasons for believing that time and space might not be fundamental concepts after all.  It also provides an opportunity to make first use of a style of argument to be used repeatedly in this effort, which essentially involves taking conventional assumptions and standing them on their heads.  Finally, I will look at reasons why abandoning the ideas of large-scale time and space might not be so bad a thing.

To start off, when I speak of time and space I mean large-scale structures with continuous variation, hence the idea that time is a continuous line leading from then to now, and that space is three (or more) dimensional, continuing in all directions.  In other words, it is the Copernican hypothesis that space and time are big and look pretty much the same everywhere.

In place of this, I ask what it means to say that we sense the passage of time.  Clearly we don’t experience the passage of time itself directly (though see below), so what makes us know that time is passing?  What we actually sense is change; if nothing ever changed, could we have any sense of time passing?  Likewise, if there were no spatial variation in the nature of things, would we have any concept of position or distance?  The answer to both questions has to be no.  Which then means that in fact time and space are mental constructs that we have invented in order to understand change.  In other words, we have taken the standard view that we detect change because we know about time and space, and turned it on its head.

Suppose the concepts of time and space are features of our psychology rather than of the universe.  Suppose the arrow of time and the infamous second law of thermodynamics are in fact consequences of our existence, in the sense that it is impossible for them to be false, because by virtue of our nature we can perceive only that which is in accord with them.  What this means is that we are freed from any number of worries; for example the problem in quantum mechanics that observed reality appears to be created by the observer becomes a simple tautology.  By placing ourselves firmly into the world, and confronting the effect that our preconceptions have on what we think we perceive, as opposed to taking the traditional scientific view of treating the world as if we are somehow not part of it, we can see perhaps the first glimpses of the real truth: that reality, if there is such a thing, is so far divorced from what we think it is that it is, quite literally, inconceivable.

# Time and space as illusions

## The sensation of time

### Problems with the naive theory of time

We are so used to the sensation of time passing, of an endless ‘tick’ repeating in the background of our lives, that it is very easy to assume that it is a constant, uniform structure, and that it is innate.  Let me put some flesh on these ideas.  When I say that we expect the time sense to be constant I mean that the rate at which time appears to pass should not change, so what is an hour now was an hour yesterday and will be an hour tomorrow.  When I say that it appears to be uniform I mean that we expect there to be common agreement as to the passage of time, so if I think an hour has passed, you will agree and will not think that in fact it was a year or three seconds.  Finally, when I say that it is innate I mean that it seems to us that the passage of time is part of the fabric of reality; it is not something that we create ourselves, but is simply there, waiting for us to experience it.

The first problem with this naive time sense is that it is not at all clear where it comes from.  Do we observe changing events in the outside world (waves crashing, clocks ticking, clouds moving) and deduce the passage of time from them, or is there an immanent sense of the passage of time that we experience directly?  Let us consider these two possibilities.

#### External sources

The problem with external sources is that there is no obvious external ‘tick’.  Certainly, there are a number of apparent natural rhythms, from the sub-nanosecond vibrations of caesium atoms to the five-billion-year half-life of uranium atoms, but if we restrict ourselves to the cycles we can perceive directly (the day, the lunar month, the year) there is a problem in that they tend to be quite long.  We are discussing here not unconscious bodily cycles like the circadian rhythm, but the conscious time-sense, which tends to work at a scale much shorter than even the day.  If it were to derive from (say) the diurnal cycle we would have to posit that we have a quite sophisticated internal timer capable of dividing one day into some number of equal units.  Which means that in order to derive our fast ‘tick’ from the slow natural ‘tick’ we need an internal time-sense.

So, can I proceed using irregular external stimuli to drive the time sense?  Now I run into problems with constancy and uniformity.  I cannot guarantee that the stimuli I experience are the same as those you experience, and so the only way to make the time-sense uniform is for us to have some further datum that tells us how fast the observed changing events are moving relative to the uniform ‘tick’.  But this means that once again we need an internal time-sense that has no basis in external stimuli.  As for constancy, the only way to extrapolate a constant ‘tick’ from irregular stimuli is to have a pre-existing concept of what a constant ‘tick’ is, which requires a purely internal time-sense.

Therefore, though the time sense may, as appears to be the case with the circadian rhythm, rely on external data to correct systematic errors, it cannot be purely external.  There must be an internal sense of time that understands the concepts of constancy and uniformity.

#### Immanent sources

So say that the time-sense is immanent.  Now we run into all kinds of problems, because, as we all know, any internal sense of time that we have is as far from being constant as one can get.  We have all experienced the phenomenon whereby waiting thirty seconds for a computer to switch on can seem like forever, and yet when we are happily absorbed hours can pass in what subjectively feel like moments.  Our time-sense is not constant and also, as we see from the fact that I can be happily absorbed while you are consumed with boredom, it is far from uniform.

So, subjective time is neither constant nor uniform.  Is there, perhaps, an objective time-sense that provides a basic ‘tick’ distinct from the extremely variable subjective sense of time, and that helps drive the naive time-sense?  There are two points of attack on this: first the origin of the tick, and second whether such an objective sense does in fact exist.

Consider the source of the tick.  We have bodily rhythms: heartbeats, breathing, circadian rhythms.  The first two of these are inherently variable and the third is long and has been shown to be very far from regular when it is not regulated by exposure to the external stimulus of the sun’s diurnal cycle.  Thus these sources fail on the same basis as the external time source, in that using them as the source of a ‘tick’ simply presupposes the existence of a yet more fundamental ‘tick’ used to regularise and sub-divide them.  The only alternative is an immanent time sense which is constant and uniform, and yet is not directly apparent.  That is to say, we sense time not by interpreting other sensible data, but by access to some transcendent source of information that is otherwise entirely undetectable.  This saves the theory of the naive time-sense but at an enormous cost, for such an in-principle undetectable, and therefore unprovable, time-sense smacks equally of theology and desperation.

Consider now the reality of the objective time-sense and, by implication, the naive time sense as a whole.  Do we actually have any evidence for its existence other than an idea that it should exist?  The evidence is tenuous and very susceptible to the turning-on-its-head style of argument.  Is it the fact that events have naturally defined time-stamps indicating when they started, when they ended and how long they lasted, or is it just that as we are used to living in a world of clocks, we expect to be able to impose that structure on the world?  If I say that such and such an event lasted one hour, do you agree because your time-sense tells you that it did, or because you, like me, refer to clocks that say it lasted an hour?  Are precise time-measurements possible because they are real, or do they seem real because they are possible?  The fact that we appear to feel the passage of time that clocks represent is as likely to be a result of our knowing what they claim to represent as it is a result of our having any genuine innate sense of what (say) a minute means.  It seems that we believe in an objective time sense, not because we are aware of it, but because we are led to believe, by our cultural assumption that time is real, that such a sense should exist.  On our own, all we can directly attest is the hopelessly irregular subjective time-sense.

Thus we must abandon the naive theory.

### Alternatives to the naive theory

Consider alternatives to the naive time-sense that do not suffer from these problems.  It is clear from the discussion above that this means ditching at least the concepts of constancy and uniformity, for though we could establish some kind of innate time sense, it was highly subjective and irregular.  Once again we look for sources of this time-sense, but this time we will be a bit more focussed.  The question is now: do we sense the passage of time because of concatenations of events that we interpret as indicating the passage of time, or is it that there is some genuine innate sense of time that we then apply to events?

#### Innate subjective time

Revisiting the argument above, internally we have a rather irregular long-period timer, in the form of the circadian rhythm, and a highly subjective time-sense that is a measure less of time than of boredom.  This subjective time-sense can get seriously inaccurate if it is not reset by regular reference to outside sources.  We have all seen this in the way that strong absorption can lead to a complete loss of idea as to what the time is.  Moreover it is well-known that sensory deprivation can lead to a complete failure of the subjective time-sense, in that it seems to simply cease to function.  This is most notable in sleep, where our idea of how long we have slept generally does not match the measured time, and time in dreams is often wildly at variance with the measured duration of the time of dreaming.  Thus while we may have internal timers that are capable of giving very approximate timing information, they are not constant, they are not necessarily consistent with one another or with themselves, and they depend on external stimuli to keep them accurate.

So there is no reliable internal sense of time beyond the observation that this seemed to take longer than that; that is, we have only a very crude sense of relative duration.  Moving on to more general concepts of tense, we have a very clear sense of ‘now’, represented by the eternal instant in which we live, and it seems fairly well-attested that our memories are organised so that we have a (rather unreliable) sense of ‘before’, ‘after’ and ‘simultaneously with’.  Moreover we have the concept of aspect, in that we can think of events as complete or continuing, and so relate memories based on continuing activity.

#### Internal subjective time: the arrow of time

One other aspect of the innate time-sense is the apparent ‘arrow of time’, the fact that it seems clear to us that time moves only in one direction.  Discussion of this tends to get bogged down in confusion as to whether the arrow is cause or effect.  To see the issue, consider the following.  We have two facts:

1. We sense that we are moving always from the past to the future.
2. We see glasses break and trees fall, rather than seeing trees erect themselves and glasses reconstitute themselves.

Of these, (1) is vacuous.  In fact we sense a constant present and have a changing memory which purports to represent evidence of previous present moments; I discuss this further below.  (2) is more interesting, as it appears to be genuine evidence for a time-related effect that cannot be attributed to psychology, but in fact it is susceptible to the standard turned-on-its-head argument.  Is it the case that the second law of thermodynamics is true, and things do tend towards states of higher entropy, or is it simply that we assume that they do and hence our perceptual machinery forces the world to appear that way?  I discuss this further below.

#### External subjective time

Turning to external stimuli, we have abandoned regular time sources (which tend to be slow).  Clearly things in the world around us change: leaves fall, waves break, wind blows.  But none of these provides anything like a constant or consistent time source.  It seems that all we can deduce from the world around us is that change is inevitable and that one concatenation of events will generally lead to another.  We can apparently deduce concepts of ordering and simultaneity, and (very crudely) of relative time, and we can relate instants by saying that at those two instants some particular event was still in progress, but once again it is not clear whether we have these concepts (see above) because they exist in the real world, or whether we see them in the real world because that is how we organise our memory.

### Alternative time-senses

So it seems that I have no internal source of time and all I can deduce from the outside world is that things change.  I see certain structures in the temporal organisation of the world around me, but it is not clear whether these are inherent in reality or merely artefacts arising from the structure of my mind.

#### The simple time-sense

Let us start by examining the basic temporal sense described above.  What I will call the simple time sense relies on four primitive concepts:

1. The Present: There is a specific instant in which we always exist.  All our perceptions exist in the present.  All else is memory.
2. Ordering: Memory is organised so that we can say that one memory happened before, after or simultaneously with another.
3. Aspect: Memory is organised so that we can say that different memories are memories of the same event at different times, and said event may continue into the present.
4. Duration: we can classify events by saying that one was longer or shorter than another.

These concepts are generally represented in languages as tense and aspectual structures, which are syntactical, while more exact concepts of time require idiomatic structures (consider the sheer number of ways of telling the time in English).  This is clear confirmation that while precise time is something grafted on to our basic nature, the simple time sense is, as it were, baked in.

It is very easy to see how the simple time sense might lead one to infer the existence of universal time: from tense and aspect one can easily construct the idea of time as a constant progression forming a line, with events arranged along it and now moving along it, so memories are memories of ‘earlier’ instances of ‘now’.  Again, we have turned the standard view, that our simple time sense and our languages reflect the existence of time in the world, on its head.  Much of our apparent view of the world around us is a result of interaction between highly unreliable sense data and mental categories and concepts, so why should time be exempt? Assuming (as we shall) that it is not, then the obvious question is whether any of the concepts in the simple time sense can be taken as evidence for a temporal structure in the external world, or whether they can all be treated as artefactual.

#### The endless ‘now’

The fundamental concept is that of the present.  If time is real, ‘now’ represents a particular time-slice through our perception of reality.  But if the concept of time-slice exists only in our minds then instead what we have is an eternal ‘now’ and an ever-growing collection of memories.  We sense that these memories represent other instances of ‘now’ and we organise them into a quasi-linear progression, giving rise to the simple time-sense.  In fact, it is noteworthy that often our memories are not linear, so we can have memories of two instants and yet have no idea of whether one comes before or after the other.  In other words, rather than being a straight line, our natural time-sense seems to be structured more like a river with many sources, all of which converge on ‘now’, for the one fact we can guarantee is that all memory happened before ‘now’.  Or, at least, to turn the observation on its head again, we apply the blanket term ‘before’ to all that we remember; the concept of the remembered past as a fundamental asymmetry is just as much a creation of psychology as time itself.

#### ‘Now’ creates tense, aspect and duration

It seems that this one, guaranteed fact is the basis of the concept of tense.  If I know ‘now’ that some event was ‘before’, and then at some other time I examine my memory of this ‘now’, as part of that memory I will recall that I knew that the event was ‘before’.  Therefore I now have two events, both past, because both in memory, but my memory tells me that one is in the past of the other.  Hence, based on this model, the entire edifice of our apparent sense of temporal ordering can be reduced to building chains of memories.

This is, indeed, very much how our minds work when we try to determine the order in which events occurred, ignoring for the moment such aides-mémoire as temporal labels attached to memories.  I ignore these because they clearly go beyond any innate time sense that we might have and into the territory of artificial constructs.  Thus the tense concept derives from the combination of ‘now’ and memory.  In other words, there is no a priori linear organisation of the temporal sense; it is an artefact.

Similarly, we can generate the concept of aspect from memory and the simple apposition of ‘now’ and ‘past’.  Thus far I have considered memories of other instances of ‘now’, but of course we tend to separate out memorable events as memories in their own right.  Then these memories carry with them information like which instances of ‘now’ they are associated with, which other events they coexist with, and so on and so forth.  This allows us to co-ordinate events aspectually, by saying that this event coexisted with that event and the other event, but (memory tells us) that event and the other event were not part of the same instance of ‘now’, and therefore were not simultaneous.

Finally, duration also arises in this way.  Naturally enough, events seem further away in time from us if we can remember more events between ‘now’ and them.  Indeed, this gives a neat explanation for the fact that time seems to slow down when we are deluged by events, because the constant onrush of new facts to remember creates many instances of ‘now’ to remember, and so a greater apparent distance in memory between us and the recent past (this might also explain the massively speeded up time-sense of dreams).  Likewise, periods where little worth remembering happens feel short, even if we know they are not, as little has been committed to memory.

#### The simple time-sense revisited

It therefore seems that the simple time-sense, that which we can genuinely point to as being somehow innate, can be reduced to the following basic structure:

1. We have an immanent sense of a ‘now’ state which is a unified single view of the world provided by our sensory apparatus.
2. We have memories of events and objects, including references to instances of ‘now’ other than the current one (that is, we can remember experiencing other ‘now’ states).
3. We build links between memories so if an event was already a memory at the time of the ‘now’ state we are remembering / experiencing, we interpret that as meaning that the event lies ‘before’ the remembered / experienced ‘now’ state.

As we have shown, that is all that is required.  But that means that the sense of time is no such thing.  Rather than there being any sense of time or temporal continuity, there is merely memory and a constantly changing present.  And if the present did not change, there would be no memory, and hence no concept of time.  Therefore time in the sense of a universal external ‘tick’ does not exist; it is an accounting device that arises more or less by default when we attempt to give structure to our memories.
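The chain-of-memories model just summarised is concrete enough to mechanise.  The following toy sketch is purely illustrative (the event names and the representation of memory are hypothetical, chosen for demonstration, not anything the essay commits to): it derives ‘before’ relations solely from which events were already memories at each remembered ‘now’, with no clock anywhere.

```python
# Toy illustration: a temporal ordering recovered purely from chains of
# memories.  Each remembered 'now' state records which events were already
# memories at that moment; 'before' relations then fall out transitively.

def derive_order(memories):
    """memories maps each remembered 'now' to the events already past at it.
    Returns the set of inferred (earlier, later) pairs, closed transitively."""
    before = set()
    for now, past_events in memories.items():
        for e in past_events:
            before.add((e, now))
    # transitive closure: if a < b and b < c then a < c
    changed = True
    while changed:
        changed = False
        for (a, b) in list(before):
            for (c, d) in list(before):
                if b == c and (a, d) not in before:
                    before.add((a, d))
                    changed = True
    return before

# A remembered 'now' n2 in which event e1 was already a memory, and a later
# 'now' n3 in which n2 itself was a memory (all names hypothetical):
memories = {"n2": {"e1"}, "n3": {"n2"}}
order = derive_order(memories)
assert ("e1", "n2") in order and ("n2", "n3") in order
assert ("e1", "n3") in order   # ordering recovered without any timestamps
```

Note that nothing in the sketch carries a time value; the ordering is an artefact of memory containment alone, which is exactly the claim being made.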

### Non-time and the illusion of time

If time is as illusory as I say it is, if in fact our sense of time is an artefact that arises from the linkage of memories, then there are some obvious questions that I need to answer if I am to make my case at all convincing.  The first question, which is relatively easy to dismiss, is why do we have the time sense provided by memory at all?  Why do we not simply live in the eternal present or, like simpler animals, remember only the last few instants?  The answer to that is that we are not simple animals and our survival strategy is predicated on decision-making based on large stores of memory.  Therefore we need memories, and if we are to function well, those memories need to be organised.

The next group of questions are rather more perplexing and deal with the rather deep question of why it is that, if there is no such thing as time, the universe around us seems to behave in such a way as to make us believe that there is.  There are four related questions:

1. Why do we perceive the world as changing from instant to instant when there is no fundamental external concept of time to make it change?
2. What is it that selects the succession of instants presented to us?
3. Why, if there is no deep concept of time, is it that the information we perceive and piece together in memory is so very orderly?
4. How do we as individuals come to agree with one another as to the apparent order of events?

The fundamental worry underlying all these questions is that in dismissing the concept of time from the psychological realm, replacing it with loose associations of memory, I have failed to note that the physical realm requires some motive agency to drive it forward and create the changing sensory impressions that we perceive.

In fact this is not the case.  There is a very persuasive model for how a timeless world could exist and yet result in our perceiving an apparently changing series of instants.  Moreover it brings an elegant explanation for the arrow of time and consensus history.  Therefore in this section I will start by sketching an answer to questions 1 and 2 and then show how that answer deals with questions 3 and 4.

#### The appearance of time

The fundamental observation that underlies my attempt to explain away time is this: that we do not just perceive an instant, the eternal now, but that we perceive a single instant.  That is to say, our minds integrate all the information available to them from sensory resources into a single view, a single picture of the world.  Though we can think of multiple things at once, and think of something other than what we perceive, we appear to be rigidly locked into the single point of view of the outside world, so we can only be aware of being in one instant.

Note the critical rider that this is not to say that I cannot experience several instants simultaneously; merely that any stream of consciousness can only inhabit one.  This inspires the following thought: our problem with apparently denying time only to reinvent it as the thing that presents moments to my consciousness arises because we have tacitly assumed that there is a real succession of instants reflecting my awareness of such a succession.  But suppose that instead of this, I actually experience all possible instants simultaneously and that my mind selects from this plethora a sequence of individual instants as its focus of consciousness, this sequence giving rise to the appearance of a stream of consciousness and hence of time.  This is because though I may exist in all moments, I can only be aware of one as my current eternal ‘now’.

This is a rather startling idea, and at first sight it seems to succeed only in shifting the time concept from under one carpet to another, but it will turn out to remove the need for a concept of time entirely.  So let us explore.  The key idea is that all possible instants coexist in some unordered way, unrelated to one another and with no hierarchy or ordering.  Then we are presented not with one moment that the universe selects for us but with all of the moments in this ensemble, and our minds, in a wholly automatic and non-volitional way that I will explain below, select moments from the ensemble to be the single focus of attention, with the result that time begins to appear to flow.  This means that there is no need to say that I am always located in a particular instant.  I am located in every instant, and it only seems as if I am only in one.  Therefore the problem of the selection of ‘now’ evaporates.

But it is still not clear how the apparent succession of instants occurs.  Why do I not just stick in one instant forever?  Also, it would be nice to have some idea of how the single focus of attention is selected, given that this is meant to be happening almost automatically, with no deliberation or intention on our part.

Consider any two instants.  We can say how similar they are, and how easy it would be to turn the world-picture of one into the world-picture of the other.  This can be made very precise if we use the right physics (what the right physics is is, however, another question), but all we need is the observation that this means we can say how likely it is that one world-picture can be turned into another.  This likelihood is going to be in some way (see the comments about physics above) a measure of the number of ways of getting from the first world-picture to the second.

Here it is worth taking a brief digression.  Is it not always the case that the number of ways of getting from one world-picture to another is precisely one?  Well, it would be if we had access to perfect information about the structure of the world, but as it happens we do not; our world-pictures present inadequate information about the bits of the world that we can see, and there are large chunks of it about which we neither know nor care.  So the number of ways of getting from one world-picture to another is essentially a measure of the number of ways that the bits of the world that we don’t care about can comport themselves while the bits that we do care about make the required transition.

Back to the main argument.  Given any sequence of instants, we can obtain the probability of that sequence by combining the likelihoods of moving from each instant in the sequence to the next.  Now say I sit in the middle of all this.  The idea I want to propose now is that actually I experience all possible sequences of moments and have memories corresponding to each sequence.  So I have not, in fact, selected one sequence from many.  Rather I experience the entire set of sequences.

So why does one get picked?  Because when I look in my memory I see instants represented based on their probability of occurrence; that is to say, the more ways there are of creating a particular world-picture, the more I see it.  And this, in a simple form of free market, means that I end up seeing, most of the time, memories associated with that sequence of instants that has the maximal probability.  I may also see memories from nearby sequences with almost maximal probability, creating a kind of shagginess about my memories of certain facts, but I will, on the whole, see only the sequence of maximal probability.  And then this selects the instant I see as my ‘now’, because if my memory is already on a particular sequence that means I am going to continue to see that sequence or one of higher probability.

Let me rehearse that argument, as it is crucial.  High probability sequences essentially swamp low probability sequences in memory because they produce far more world-pictures, because there are more ways of achieving them.  This means that when we introspect we see only what happens on or near the sequence of maximal probability.  And this means that, having ended up being pushed into the realm of maximal probability by sheer force of numbers, we end up sticking in it because we have nowhere else to go.

So I do not need any concept of time.  I can be equally present in all instants, though some of them are extremely hard to get to from mainstream instants.  We replace the concept of the passage of time with the concept of looking at the collection of all possible sequences of events with their representation depending on probability.  But note that now the concept of any privileged arrow of time has gone out of the window.  We are not saying that the passage of time selects events.  We are saying that we select sequences of events based on their probability as compared with other sequences as a whole and then we derive time from those sequences.  We have succeeded in turning the concept of time on its head: time does not define memory; memory defines time.

Note that I am not implying that this selection of probable sequences is either hard-wired into the cosmos or is done intentionally by our minds.  Rather, memories compete in our minds in a kind of free market, with the most common (for which read those that arose from the most probable sequences) winning out.  It is not that we choose the most probable path; it is simply that when we look in memory, it is memories of the most probable path that we are overwhelmingly likely to find.
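This free-market-of-memories argument can be made concrete with a toy calculation.  In the sketch below the instants and transition likelihoods are entirely hypothetical numbers, standing in for the ‘number of ways’ of turning one world-picture into another; the only point being illustrated is that the maximal-probability sequence dominates the ensemble of sequences.

```python
# Toy illustration: sequences of instants weighted by the product of
# transition likelihoods.  The numbers are invented; the claim shown is
# only that one sequence swamps the rest in 'memory'.
from itertools import product as cartesian

instants = ["a", "b", "c"]
# hypothetical likelihood of one world-picture turning into another
likelihood = {
    ("a", "b"): 0.8, ("a", "c"): 0.2,
    ("b", "b"): 0.1, ("b", "c"): 0.9,
    ("c", "b"): 0.3, ("c", "c"): 0.7,
}

def seq_prob(seq):
    """Probability of a sequence: product of its transition likelihoods."""
    p = 1.0
    for x, y in zip(seq, seq[1:]):
        p *= likelihood.get((x, y), 0.0)
    return p

# all length-3 sequences starting from instant 'a'
seqs = [("a",) + rest for rest in cartesian(instants, repeat=2)]
best = max(seqs, key=seq_prob)
total = sum(seq_prob(s) for s in seqs)
share = seq_prob(best) / total
# the single most probable sequence accounts for most of the ensemble,
# so introspection overwhelmingly finds memories of that path
assert share > 0.7
```

With these (invented) numbers the dominant sequence carries over 70% of the total weight, which is the ‘swamping’ the text appeals to.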

#### Orderly information and the arrow of time

Why should the selected high probability sequence of instants be orderly in the sense that it leads to smooth changes and obeys the apparent arrow of time?  The first question is reasonably simple to answer: the closer two instants are to one another, the higher the probability that one succeeds the other.  Therefore sequences where change is discontinuous will be of low probability and so are likely to be selected against.

The arrow of time is quite interesting.  At first sight one might think it is answered, because we have selected a particular path using a non-time-based approach and that forces us into accepting a particular apparent time direction.  But what about the apparent truthfulness of the second law of thermodynamics?  Why is it that the time direction we selected in a purely mechanistic way should end up favouring an overall transition from less to more disordered states of being?

If I am in some instant and I am looking at possible instants to transition to, there will generally be many more ways of transitioning to an instant that is more disordered, simply because by increasing disorder I am reducing the strictures I place on myself in selecting an instant.  Therefore the transition from one instant to another will generally favour transitions that increase disorder.  Now, when I build an entire sequence of instants, it may be that considerations involving the sequence as a whole mean that the most probable overall sequence will occasionally decrease disorder for a little while, but the overall trend will be to increased disorder.  This is exactly what we see, where local events can apparently reverse the flow of entropy over a short time-period, but the overall trend is for entropy to increase.  So the arrow of time too is an artefact.
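The counting argument here is the standard combinatorial one, and a toy calculation makes it vivid.  In the sketch below the ‘world’ is just n particles split between two halves of a box (an illustration of the counting, not the essay’s own physics); macrostates nearer the balanced, high-entropy split can be reached in vastly more ways.

```python
# Toy illustration of 'more ways of transitioning to disorder': with n
# particles in two halves of a box, the number of microstates for
# 'k particles on the left' is C(n, k), which peaks at the balanced split.
from math import comb

n = 100
ways = {k: comb(n, k) for k in range(n + 1)}

# starting skewed (k = 10), compare the number of microstates of the two
# neighbouring macrostates reachable by moving one particle
toward_balance, away = ways[11], ways[9]
assert toward_balance > away  # the disorder-increasing step has more ways

# and the balanced macrostate dwarfs the skewed one overall
assert ways[50] > 10**15 * ways[10]
```

Nothing forces any individual step towards disorder, but a sequence built by weight of numbers will trend that way, which is the artefactual arrow of time described above.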

#### Consensus history

Finally, where does consensus history come from?  There is a very simple answer: from memory and debate.  The mechanism I have proposed involves no volition; different individuals will see the same events in slightly different ways, and so end up with different sequences of instants.  But as they are in more or less the same circumstances, their sequences of instants will not differ very much.  There will indeed be, as noted above, a rather fuzzy quality to them, in that no two individuals, even in the same circumstance, will necessarily experience the exact same sequence of events, but this fuzziness will be no more than we have come to expect when trying to build history from testimony.  Indeed, experiments with eyewitness testimony make it clear that history, even in the short-term, is very far from being a deterministic repetition of agreed facts.

In addition, there is one volitional factor that we must include in the framework, that is to say the matter of influence: by communication we can influence one another, and so introduce biases into our memories, resulting in shifts in the selected sequence of events.  Thus, in a rather elegant turning on its head of the traditional view, it seems that historians do indeed make history as opposed to discover it.

## The sensation of space

### Approach

Inevitably, after deconstructing the concept of time, the next place to look is the concept of space.  This is in many ways a simpler problem, in that though space is as (apparently) all-pervasive as time, there is no equivalent of the arrow of time.  That is to say, there is no concept of necessary movement from there to here which parallels the flow from then to now, and there is no privileged direction which parallels the future-pointing vector provided by the second law of thermodynamics.  Therefore we need only consider the concept of space and localisation within it.

However there are still issues to cover in showing that spatial concepts have no a priori existence.  We will follow the same path as that followed in the discussion of time, starting from an investigation of inner and outer sources for the concept of space, moving on to the innate theory of space that encodes our basic spatial ideas, and then proposing a new model.  The discussion can be more brief than that of time, partly because much of the argument is simply a matter of taking ideas from above and substituting space and spatial concepts for time and temporal concepts.  But the main reason is, rather surprisingly, that in analysing time we have, it turns out, already done most of the heavy lifting required to understand space.  In fact, the probabilistic instant-based model for temporal perception given above is also a model for spatial perception.  This coincidence can be seen as being striking support for our theory.

### The naive sense of space

So, we have a naive sense of space, which is perhaps better described as a sense of spatial positioning and size.  That is to say we expect there to be an innate, constant and uniform concept of relative position and size for objects, with constancy and uniformity taking the same meanings as for time, so we expect the measures of relative position or size to apply universally and to be agreed on by all observers (as usual, modulo relativistic considerations).

We cannot expect an innate sense of absolute position or size to have these properties.  Absolute position requires a fixed point to act as the origin from which all distances and positions are measured.  We all have an innate fixed point, that is, ourselves, so we can each of us establish a personal innate and constant sense of absolute position.  But this choice of fixed point is not uniform, for yours differs from mine; a uniform theory of absolute position requires that we agree on some one fixed point as origin.  But no such naturally privileged point exists, so the choice of point must be arbitrary, meaning that the resulting sense of absolute position is not innate.  Similarly, absolute size requires a fixed object to act as the standard scale, and so the same argument applies.  Therefore the most we can expect is relative position and size.

Now looking at possible sources for the sense, much the same argument as was used for time works here.  An externally-sourced sense is impossible, because there is no a priori standard unit.  An internally-sourced sense founders on three facts: first, we tend to be hopeless at estimating distance and size; second, given that we can disagree with one another even about such crude concepts as ‘large’ and ‘small’, what hope is there of uniformity; and third, our perception of distance and size is hopelessly mired in the problems inherent in our sensory (primarily visual) system which, as anyone familiar with forced perspective knows, is capable of convincing us that any two objects are smaller or larger than one another regardless of their ‘actual’ relative size.  Therefore the naive sense of space is not viable.

### Alternatives to the naive theory

From now on I will discuss only the sense of size.  This is not because my arguments do not apply to distance, rather it is to spare the reader from me repeatedly saying ‘and the same applies for distance’.  It is also because the sense of size is clearly more fundamental than that of distance, given that we often deduce distance from apparent size.  Therefore, all of the following arguments relating to the sense of size apply equally, mutatis mutandis, to the sense of distance.

So what is the alternative to the naive sense of size?  As with time, let us investigate the possibilities for a sense of size that is not required to be constant or uniform, but is still innate.  As it turns out, there is no need to look at external and immanent sources separately, as there is a huge problem that applies equally wherever the source originates.

Consider again the case of forced perspective.  It is entirely possible that I can make radically different judgements as to the relative size of two objects based entirely on where I am positioned relative to them.  Moreover, I can be fooled into thinking that tiny models are gigantic by suitable use of perspective.  So whatever the sense of size is, the size it measures is not an intrinsic property of the thing being measured.  Rather, it depends entirely and only on the world-picture that my senses paint in my mind.  As such it is not a feature of external reality or the thing in itself, but is psychological in its origin.  Therefore the sense of size is innate, but is not immanent, in that it does not provide any deep and direct connection between our minds and deep reality.  It is in us and of us.

### The simple spatial sense

The sense of size seems to work as follows.  I experience a world-picture, fed to me by my sensory apparatus, and discern things within it.  I compare them and label them as larger or smaller than one another.  Then, chaining these relative sizes together, I arrive at an overall picture of the sizes of the things I see.  In order to do this I make an arbitrary choice of scale factor, which turns relative sizes into absolute sizes.  Support for this hypothesis comes from the fact that the issues with the sense of size discussed above arise from confusions in relative size.  If we dealt directly in absolute size, there would be no forced perspective.

So, the sense of size depends on two processes:

1. Breaking up the world-picture into chunks corresponding to ‘things’.
2. Comparing the size of the resulting chunks.

The second process is not particularly enlightening or interesting from a philosophical point of view.  It could be achieved by something so simple as counting the number of active neurones in two patches of the visual cortex.  The first process is much more significant, but before we can consider it, we must discuss the origin of the concept of space itself.
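The chaining of relative sizes into absolute ones, via a single arbitrary scale factor, is easy to mechanise.  In this toy sketch the objects and ratio judgements are hypothetical; the point is that fixing one anchor’s scale determines every other size, and a different choice of scale simply rescales the whole picture uniformly.

```python
# Toy illustration: absolute sizes recovered by chaining relative-size
# judgements outward from one arbitrarily scaled anchor object.

# hypothetical pairwise judgements: 'a is 2x the size of b',
# 'b is 3x the size of c'
relative = {("a", "b"): 2.0, ("b", "c"): 3.0}

def absolute_sizes(relative, anchor, scale):
    """Propagate size ratios outward from the anchor until all are fixed."""
    sizes = {anchor: scale}
    changed = True
    while changed:
        changed = False
        for (x, y), r in relative.items():
            if x in sizes and y not in sizes:
                sizes[y] = sizes[x] / r
                changed = True
            elif y in sizes and x not in sizes:
                sizes[x] = sizes[y] * r
                changed = True
    return sizes

s = absolute_sizes(relative, anchor="a", scale=6.0)
# only the ratios are meaningful; a different scale rescales uniformly
assert s == {"a": 6.0, "b": 3.0, "c": 1.0}
```

Forced perspective corresponds, in this model, to corrupted ratio judgements: one wrong entry in the table propagates through every size downstream of it.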

### The sense of space

#### Where does space come from?

We have seen that there is no reason to believe that much of the nature of the space we appear to see is intrinsic to reality, but now we need to address the fundamental question of why do we see things as inhabiting an ambient space at all?  Why do we not just see isolated and disconnected things?

The answer is surely inherent in that last statement.  Our senses give us information to use in planning actions.  Thus the concepts of the sequence of instants and sequence of locations join, and just as the sequence of instants, by virtue of being a sequence, creates the impression of a flow of time, the sequence of locations creates the impression of a flow of something.  That is to say, we experience a way of relating the collection of things I see ‘now’ to the collections of things I saw in my memory of the preceding ‘now’.  So space is a mental construct that allows us to picture a sequence of locations at consecutive instants.

To aid in planning, I need to be able to relate the things I see ‘now’ to the things I saw at a previous ‘now’.  That is to say, I need to have a way of relating ‘here’ to my world-picture that makes sense of how that world-picture changes as I progress along a sequence of instants.  There are a number of possibilities for how this can be achieved, but the most obvious are that we model ‘here’ as static and the things as changing, or we model ‘here’ as changing and the things as (largely) static.  Clearly the latter is the case.  The purpose of the model is to facilitate our selection of sequences of instants, so we need to plot a course; this is much easier with a static model.  In addition, keeping the model static has the advantage that all we have to update from instant to instant is where ‘here’ is; there is no need to keep track of the behaviour of all the things we perceive, because they remain (largely) static.

In this case, the inherent limitation of our minds that we can only handle one world-picture at a time means that I end up with a sequence of instances of the model, one per instant. Now, remember that I use this model for planning how I select instants.  That means that I have to be able to conceptualise the sequence of world-pictures resulting from any plausible sequence of instants.

#### The world as neighbourhoods

I could make this model by being aware of all possible sequences of instants as a collection of sequences.  This is simple, but it has three problems.  First, I experience nothing of the sort.  Second, to do so would mean I have to know in advance about possible sequences I might select, and yet my knowledge of the model is based purely in ‘now’; I have no foreknowledge of what is to come.  Third, it violates the single world-picture restriction, by expecting me to be conscious of multiple potential futures.  If the spatial model can only exist in ‘now’ then the only information to hand is where ‘here’ is, my knowledge of the things I perceive, and possibilities for how this can change as I transition to the next ‘now’.  In other words, it has to be compatible with the temporal model of probabilistic paths through instants, where the transition probability from instant to instant now takes into account the difference between successive models (which was always inherent in the discussion above, anyway).

So I assign to the model of ‘here now’ a collection of models of ‘there then’ each with an associated probability.  Then all my information about position is built up by joining these nearby collections together in sequences, one per instant.  What changes between successive models is my location.  So to go from the current model to the new model I need to change a small neighbourhood of ‘here’ corresponding to the set of possible ‘theres’ deriving from the most probable succeeding instants.  And I patch these neighbourhoods together, from moment to moment, to construct my trajectory.

Now I can generalise.  I have the concept of a neighbourhood of ‘here’, which is essentially my current ‘here’ and ranked candidates for my next ‘here’.  And so there is an ensemble of possible instants, which collectively represent all possible states of ‘here’.  So as I progress from instant to instant I construct the appearance of a path, in that I change from one ‘here’ to another.  And so I end up patching all these neighbourhoods together.

I do not have to be present in all instants and neighbourhoods.  That is to say, the ensemble of possible instants will contain instants where I am ‘here’ and instants where I am not.  Moreover, each choice I make of moving from ‘here’ now to ‘there’ the instant after now involves not just a selection of ‘there’ with me in it, but a selection of a collection of nearby ‘theres’ with me not in them, in that by selecting going ‘there’ next, I rule out these other ‘theres’ as my destination.  So the mere fact of patching together my trajectory makes locations off that trajectory more or less probable, and then, very naturally, the probability of my trajectory is influenced by the probability of patching together all these neighbouring locations.

But now I can go on patching together neighbourhoods, joining them via their connections, until I end up with some kind of maximal world-structure at that instant, and again the total probability of this structure is what I need to consider, not just the probability of the particular neighbourhood that I inhabit.  So, in fact, I get ‘space’, as a smooth geometrical object, for free.
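This patching-together of neighbourhoods can itself be sketched.  In the following toy construction (the locations and neighbourhood lists are hypothetical) each local neighbourhood records only ‘here’ and its candidate next ‘heres’; gluing the overlaps yields one global adjacency structure, a stand-in for ‘space’, containing connections no single neighbourhood mentioned.

```python
# Toy illustration: local neighbourhoods of 'here', glued together by
# their overlaps into a single global structure.

# hypothetical neighbourhoods: 'here' plus candidate next locations
neighbourhoods = [
    {"here": "p0", "next": {"p1", "p2"}},
    {"here": "p1", "next": {"p0", "p3"}},
    {"here": "p2", "next": {"p0", "p3"}},
    {"here": "p3", "next": {"p1", "p2"}},
]

def glue(neighbourhoods):
    """Patch the local neighbourhoods into one adjacency structure."""
    space = {}
    for nb in neighbourhoods:
        space.setdefault(nb["here"], set()).update(nb["next"])
        for other in nb["next"]:          # adjacency is symmetric
            space.setdefault(other, set()).add(nb["here"])
    return space

space = glue(neighbourhoods)
# no single neighbourhood mentions p0 and p3 together, yet the glued
# structure already supports a path between them (p0 -> p1 -> p3)
assert "p1" in space["p0"] and "p3" in space["p1"]
```

The global object is never given in advance; it emerges from the gluing, which is the sense in which we get ‘space’ for free.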

Once again, we have turned things on their head: instead of space being fundamental as a smooth geometrical object, which can be split into smaller and smaller patches, we see that the fundamental object is the infinitesimal patch centred on ‘here’ and we construct space by gluing these together.

#### Why is space uniform and Euclidean?

Geometrical objects are still potentially very complex, with curvature, distortions, holes, etc. But we perceive space as being uniform; that is to say that we perceive that things inhabit a three-dimensional space that looks the same everywhere, so that wherever ‘here’ is and wherever we look in it, sizes and lengths stay the same.  This is part of the appearance of constancy that we expect: in other words, if two things look the same size, then they are.

Of course, this is not true.  We have seen that we can only be aware of relative size.  When we see two apparently equally sized things one of which we have trained ourselves to know is larger than the other, we deduce that the larger must be further away, but equally well it could be smaller than we expect and at the same distance.  Or it could be that there is some distortion in the patch of space it inhabits that interferes with the overall scale factor.

So the reason that we see space as uniform is the same as the reason we suffer from the illusion of forced perspective.  It is that we assume that space is uniform, just as we assume that things that look the same size are.  In fact, as the only information we have is relative sizes, and given that we have no intrinsic sense of the geometry of space, it must be the case that we have to build a space which is uniform, for it is only thanks to the assumption of uniformity that it is possible for us to build a model of things’ positions from their relative sizes.  As all we know is relative sizes, to turn these into a model of distances and positions we must assume some rule, and there is only one natural and consistent way of doing this.  Turning assumptions on their heads again: we do not compute distances from sizes the way we do because space is flat; we perceive space as being flat because we compute distances from sizes the way we do.
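The argument above can be made concrete with a toy calculation (the numbers and the small-angle set-up are illustrative assumptions, not part of the essay's model): once we assume a uniform, flat space, a single rule turns apparent sizes into distances, and the forced-perspective illusion drops out immediately.

```python
import math

# Under the uniformity assumption (one global scale rule), apparent angular
# size falls off inversely with distance, so a known 'true' size plus an
# apparent angle yields a unique inferred distance.
def inferred_distance(true_size, apparent_angle):
    """Distance implied by an apparent angle, assuming flat, uniform space."""
    return true_size / math.tan(apparent_angle)

# Two things subtending the same angle: the one we have trained ourselves
# to 'know' is twice as big is inferred to be twice as far away --
# forced perspective in one line.
near = inferred_distance(1.0, math.radians(1.0))
far = inferred_distance(2.0, math.radians(1.0))
print(near, far, far / near)
```

Nothing in the sense data distinguishes "twice as big and twice as far" from "same size, same distance"; it is only the assumed rule that forces the first reading.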

#### Why is space three-dimensional?

There is no particular reason why space should be three-dimensional, but then again, there is no particular reason why it should not be.  This may just be an aspect of our mental processing.  If it is indeed intrinsic to our minds that we receive two slightly different sets of visual sense data organised as flat arrays, then three-dimensionality is intrinsic to us, but not necessarily intrinsic to the universe.  For even if the universe were seven-dimensional, we would not be able to see it as anything other than three-dimensional, because that is what our visual senses force upon us (note that those who have the use of only one eye from birth have no concept of a third dimension).

So, as with time, it may be the case that our mental models are consequences of deep physical truths about the world, or it may be the case that these apparent truths are consequences of the way we build our mental models.

### The origin of things

#### The complete model

I said earlier that I had to model our perception of things.  Before I do that, let me summarise the model as it stands at the moment; doing so will help motivate the way we construct things.

1. The basic entities in the world are instants, to each of which is attached a neighbourhood, which is a collection of putative next instants, ranked by the probability of transition from ‘now’ to that instant.  It provides relations that allow me to connect my current ‘here’ to possible next ‘heres’ (for geometers, this choice of terminology is not accidental).
2. I inhabit the set of all possible sequences of instants; sequences because of the single point-of-view restriction that my mind imposes.  However, again because of the single point-of-view restriction, I am aware of only one such sequence at a time and, by force of numbers, that sequence will be a sequence of maximal (or close to it) probability.
3. My sense of time arises from passage along a sequence.  My sense of space arises from the fact that a sequence contains at each instant the object created by patching neighbourhoods together in all possible ways.  Taking the maximum probability sequence gives the maximum probability succession of objects, which are just three-dimensional Euclidean spaces at successive instants.

Note that the construction of space from patched together neighbourhoods uses the same machinery as the construction of time from patched together instants.  Moreover, they are one and the same process: selecting a sequence of instants based on probability automatically patches the consequent instants via their connections to one another, and we then build a mental model to accommodate the resulting structure.
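The selection of a maximum-probability sequence of instants can be mimicked in a purely illustrative sketch (the three named instants and their transition probabilities are invented for the example) by brute-force enumeration over a tiny ensemble:

```python
import itertools

# Toy ensemble: three 'instants', each with a neighbourhood of putative
# next instants ranked by transition probability.  All names and numbers
# here are illustrative assumptions, not part of the essay's model.
transition = {
    'A': {'A': 0.1, 'B': 0.8, 'C': 0.1},
    'B': {'A': 0.2, 'B': 0.1, 'C': 0.7},
    'C': {'A': 0.6, 'B': 0.3, 'C': 0.1},
}

def most_probable_sequence(start, steps):
    """Enumerate every sequence of instants and return the most probable."""
    best_path, best_p = None, -1.0
    for tail in itertools.product(transition, repeat=steps):
        path = (start,) + tail
        p = 1.0
        for now, nxt in zip(path, path[1:]):
            p *= transition[now][nxt]
        if p > best_p:
            best_path, best_p = path, p
    return best_path, best_p

path, p = most_probable_sequence('A', 3)
print(path, p)
```

The single sequence that "wins" is the one whose transitions are all individually preferred, which is the sense in which, by force of numbers, we are aware of the maximal-probability history.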

#### Where do things come from?

So, I am suggesting that the fundamental entities that go to make up what we perceive as space are neighbourhoods, patches of possibility that we translate into individual points and the directions that connect them to other points.  Of course, these neighbourhoods will not all be identical: if they were then there would be no most probable state, so we would perceive nothing.  Rather there can be variation in preferences as to which other neighbourhoods they can connect to and how they can connect to them, so one can imagine stronger and weaker connections, leading to tightly clumped and more diffuse assemblages and thence beginning to give the semblance of structure or fabric to the resulting ensemble.

*Figure: massive bodies curve space-time.*

This is, I think, the final crucial insight.  We are accustomed to thinking of objects dictating the shape of the space around them, so in General Relativity, we say that a massive body warps space-time (see figure), but, to turn this on its head, what if the truly fundamental objects are the warps and curves in space-time, and the objects are artefacts that we see because we are not equipped to detect space-time curvature directly?  That interpretation is much more in the spirit of Einstein’s theory, and it is exactly what I have just described.  Preferential attachment will cause inhomogeneities in the most probable collection of patched together neighbourhoods, and I am proposing that what we think of as ‘things’ are simply our minds’ way of representing those inhomogeneities to us, in so far as we can detect them at all.

The consequences of this are quite stunning.  As well as space and time, things themselves dissolve into contingent associations of formless and unknowable units of ‘stuff’.  Everything we think we know falls apart, and we are left certain only of Descartes’ founding assertion: I think.

# Correspondences with physics

## Quantum fields

The reader who is apprised of quantum field theory will be well aware that the description I have given of how a plethora of possible states with probabilistic transitions between them gives rise to apparently stable time and space and things is rather similar to Feynman’s sum over histories model of quantum fields.  This is not surprising.  For those who are not apprised, it is a principle in quantum mechanics that if you want to find the probability of a system starting in state A and ending in state B you have to sum the contributions (strictly, the probability amplitudes) of all ways of getting from A to B.  Note, all, not just paths that obey the laws of physics or are plausible, but all possible paths.  And then, as if by magic, it turns out that what pops out as the most probable path is exactly the one that obeys the laws of physics.

This is just what I have proposed above.  Essentially we have a system consisting of a multitude of tiny patches, each corresponding to a single instant, we throw them all together and, by the magic of probability, something surprisingly like space-time emerges.  The reason for this seeming magic is exactly the same as the reason why Feynman’s trick works; less probable states cancel and concatenations of states that behave in a systematic way tend to have higher probability.  But the underlying picture is not one of space-time, just as in physics the underlying picture is not one of electrons like billiard balls, but of a strange, almost formless something about which we can know directly absolutely nothing.
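The "sum over all ways of getting from A to B" can be sketched in a few lines on a toy state space (the states and weights are invented for the example; real quantum mechanics sums complex amplitudes, which is what produces the cancellation, whereas this sketch uses positive weights for simplicity):

```python
import itertools

# Toy 'sum over histories': the total weight of getting from A to B in a
# given number of steps is the sum over *all* intermediate paths, plausible
# or not.  States and weights here are illustrative assumptions.
weight = {
    'A': {'A': 0.2, 'B': 0.5, 'C': 0.3},
    'B': {'A': 0.3, 'B': 0.4, 'C': 0.3},
    'C': {'A': 0.5, 'B': 0.2, 'C': 0.3},
}

def total_weight(start, end, steps):
    """Sum path weights over every intermediate history of the given length."""
    total = 0.0
    for mid in itertools.product(weight, repeat=steps - 1):
        path = (start,) + mid + (end,)
        w = 1.0
        for now, nxt in zip(path, path[1:]):
            w *= weight[now][nxt]
        total += w
    return total

print(total_weight('A', 'B', 2))
```

Every history contributes, but concatenations of strongly preferred transitions dominate the total, which is the sense in which a single "law-abiding" path appears to emerge from the ensemble.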

## Gravity and black holes

I have already discussed the deep relation between my model of things and General Relativity.  Einstein always considered it a failing that matter had to be introduced into his theory ‘by hand’; here we see the beginnings of a route to removing that failing.

Other interesting correspondences exist.  According to General Relativity, clocks run more slowly in a strong gravitational field, and according to our understanding of black holes, a strong gravitational field corresponds to a high entropy flux, that is to say enhanced change in the nature of things.  But in our model, as time is a percept caused by our need for book-keeping on our memories, we can argue in reverse: if more happens then we accumulate more memories, so time will inevitably appear to slow down.  The correspondence is tantalising, as are many others.  For example, the concept of ‘space-time foam’ current in some attempts to unify gravity with quantum theory is very similar to our concept of reality as being made up of tiny germs of space, each existing individually, with space as an emergent property of the ensemble.