The Porter Zone

Philosophical musings and more

Monthly Archives: October 2010

The ‘other’ in culture

What is the ‘other’?

The ‘other’ is a key idea in what is commonly known as post-modern thought. Writers such as Lacan made it into a central tool in the analysis of culture, where it is essentially defined as applying to any group that society chooses to marginalise, exclude or subordinate. This has led to some rather strange conclusions, such as Foucault’s notion that mental illness is a label used by society to render ‘other’ those it wishes to exclude.
Passing on to more sensible applications of the theory, it has borne fruit in the concept of multiculturalism, where we acknowledge that ‘other’ groups exist and create a situation in which no group is forced to adopt a majoritarian (or ‘other’) culture, but the fruits of all the existing cultures are available to all.
Unfortunately, there are negative applications too. The ‘other’ has been turned into a political tool, with the ideal of supporting the political aspirations of groups that are identified as being ‘other’. As people sufficiently broad-minded to support all ‘other’ groups are exceedingly rare, this generally turns into an excuse to support some ‘other’ groups and ignore others, the choice being based on personal preference. So, in this situation the ‘other’ theory becomes an elaborate way of giving personal prejudice the veneer of philosophical justification. Also, self-identified ‘other’ groups have tended to use the concept of being ‘other’ as a tool to assert their political presence. This has led to the concept of ‘other’ separatism that I will discuss below.

A definition of ‘other’

Before I start analysing the consequences of identification as ‘other’, it is worth seeing whether the rather woolly definition given above is philosophically meaningful. I will therefore show how multiple cultural communities can come to exist even within one apparent cultural group. That is to say, how it is that within one ‘culture’ the official culture may be the majoritarian culture, that of the majority, while there will also be minority ‘other’ cultures. The consequence is rather startling, as it directly contradicts established wisdom on the nature of the ‘other’.
I use an evolutionary model to show how subgroups can coalesce out of the majoritarian culture over time. Say we have a population of individuals within a group, who vary statistically around the mean. This is not to say that there is such a thing as a ‘normal’ person; rather there is a purely notional concept of a ‘mean’ person, who is, in a sense, the statistical average of the population as a whole. Obviously such a person need not, and most likely will not, exist. Now assume that there is a common culture across the group. This can consist of any form of information that can be passed from individual to individual, with errors creeping in en route. Provided that the strength of communication between individuals is (statistically) uniform across the population, the culture will preserve itself as a unity, though it and its mean value will change over time.
Now suppose that there is a subgroup of the population such that communication links between members of the subgroup are always stronger than those between members of the subgroup and individuals outside the subgroup. Then changes in the passed information are retained within the subgroup, and the averaging-out effect of the wider group is reduced. So, after not very long, the cultural information within the subgroup will have begun to diverge from that within the main population (the biological analogue to this is groups of animals that become to a greater or lesser extent isolated from the main population and eventually speciate, e.g. Darwin’s finches).
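For anyone who wants to see this mechanism in action rather than take it on trust, here is a minimal simulation sketch of the model just described. It is illustrative only: it assumes that an individual’s culture can be caricatured as a single number, that transmission means copying one partner’s value with a small random error, and that partners are chosen in proportion to link strength; all the names and parameter values are mine, chosen for the sketch, not part of any established model.

```python
import numpy as np

rng = np.random.default_rng(1)

N, SUB = 100, 25   # population size; the first SUB individuals form the subgroup
ROUNDS = 1000      # number of transmission rounds
NOISE = 0.02       # size of the error that creeps in at each transmission

def mean_gap(w_in, w_out):
    """Average separation between the subgroup's and the majority's mean
    'cultural value' when, each round, every individual copies the value of
    one partner (chosen in proportion to link strength) plus a small error."""
    weights = np.ones((N, N))          # default link strength
    weights[:SUB, SUB:] = w_out        # subgroup -> majority links
    weights[SUB:, :SUB] = w_out        # majority -> subgroup links
    weights[:SUB, :SUB] = w_in         # links within the subgroup
    np.fill_diagonal(weights, 0.0)     # nobody copies themselves
    probs = weights / weights.sum(axis=1, keepdims=True)

    culture = np.zeros(N)              # everyone starts from one shared culture
    gaps = []
    for _ in range(ROUNDS):
        partners = np.array([rng.choice(N, p=probs[i]) for i in range(N)])
        culture = culture[partners] + rng.normal(0.0, NOISE, N)
        gaps.append(abs(culture[:SUB].mean() - culture[SUB:].mean()))
    return float(np.mean(gaps))

print("average gap, uniform links:          %.3f" % mean_gap(1.0, 1.0))
print("average gap, strong in-group links:  %.3f" % mean_gap(1.0, 0.002))
```

With uniform links the averaging-out effect operates across the whole population and the two group means typically stay close together; with strong in-group links the subgroup’s mean typically wanders much further from the majority’s, which is the divergence described above.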
Translating this concretely, we find indeed that marginalised groups tend to communicate more within-group than without, whether due to persecution (religious minorities, homosexuals), discrimination (women, people of non-majority skin colour) or choice (closed sects). And these tend to be the groups that we think of as ‘other’. So, we can draw from this analysis three key points about ‘other’ cultures:
  • There is nothing essential about ‘other’ cultures, so there is no inherent aspect of an individual that marks them out as ‘other’; it is entirely possible for a majoritarian individual to be part of a minority ‘other’ culture if they happen to have closer ties to members of that ‘other’ group than to majoritarians.
  • There is nothing special about the membership rules for ‘other’ cultures; just about any assemblage of people can form a non-majoritarian culture, so long as they have unusually strong within-group links.
  • ‘Other’ groups need not arise from exclusion by majoritarian society; they form as a result of preferential attachment, and this need not arise purely from exclusion: it can be a result of choice.

These facts are incredibly important, as the theorists of ‘otherness’ would have us believe the exact opposite. In particular, they undermine the notion that the ‘other’ is the excluded and disempowered.

One could, of course, try to come up with some definition that means that some ‘other’ groups (e.g. women, homosexuals, non-majority skin colour groups) are the real ‘other’ while others (mainstream Christians, aristocrats) are not. But then that depends on a value judgement, and not any well-defined criterion. Being ‘oppressed’ is a popular criterion, but it has the problem that while we should clearly (that is, if we happen to be liberals) stand against oppression, and aim to undo it and its effects, that has nothing to do with culture. To assert (as one hears from time to time) that oppression somehow makes the resulting ‘other’ culture more authentic is special pleading; the judgement is based not on anything inherent in the products of the culture but on the imposition of an external idea. So, any such judgement, being based on personal choice, must be capricious; in fact, it usually seems to be a function of the commentator’s political views. But a definition of ‘other’ that (essentially) boils down to “‘other’ is what I say it is” is meaningless.
Hence, the only meaningful definition of ‘other’ is, it turns out, a non-majoritarian cultural group. One may form a personal ranking of said groups based on one’s political and cultural preferences, but one should not mistake this preference for a general theory.

Monoculture, polyculture, multiculture

Many world-views

Let us start the argument by considering Christian theology. There are different schools of Christian feminist theology applying to Latina women, Black women and White women. Now, if they are Christian then they must all be referring to one God, and yet they take radically different views of what that God is. That is to say, it is entirely plausible that one’s starting point in discovering God will depend on one’s sex and race, and that the questions one asks will also be so contingent. But the end-point should be the discovery of truths about the one God, which means that these truths should remain true regardless of one’s sex or race. Moreover, the same applies to majoritarian theologians, which implies that their theology cannot be rejected out of hand.

But, of course, that is precisely what those who define themselves as ‘other’ do do: they reject majoritarian theology as being somehow tainted and arrive at pictures of God that seem more like pictures of themselves than of any universal deity. Of course, they could assert that their God is not universal, in which case the argument stops here, but in that case they are not Christian, which they say they are, so let us continue. Consider specifically the rejection of majoritarian theology (the same argument applies to the necessary rejection of differing ‘other’ theologies). The only intellectually tenable way of doing this is to assert that (say) Aquinas was mistaken, because he had the world-view of a man in a male-dominated culture, and that world-view has been shown to be, or is taken to be, incorrect. But in that case, what guarantee is there that a feminist / womanist / latinista theologian’s world-view is any better? The hidden assumption in the preceding statement is that majoritarian thinkers’ world-view is flawed whereas that of the particular ‘other’ to which the theologian making the argument belongs is not.

Rejecting majoritarianism

The frequently rehearsed argument justifying this rejection is as follows. (1) The ‘other’ group has been oppressed by majoritarians, and now they have thrown off the shackles of that oppression; (2) they reject being forced to act and think as majoritarians; (3) they assert that the majoritarian world-view is not useful to them; (4) by extension, they assert that the majoritarian world-view is not useful at all, and that any product of it is of no value to them; (5) depending on how relativist they are, they either (5a) assert the existence of an epistemic barrier between their culture and majoritarian culture, or (5b) assert that majoritarian culture is entirely worthless. Now let us analyse this. Step (2) is trivially correct; it is not a matter of logic, but of justice. Step (3) is, as I have hinted above, questionable, as it may be that not all aspects of the majoritarian world-view are pernicious, but it is certainly the case that they should start from their own world-view and see whether there is anything useful to be gained from adopting parts of the majoritarian view, and not vice versa. Step (4) is where the argument breaks down; it and step (5) are not logical or philosophical statements, but political, being the starting point for a power-grab of greater or lesser extent (depending on which of (5a) or (5b) is chosen). And, as one would expect of political statements, they have no basis in observed fact, but are emotional statements designed to resonate with those who feel anger against majoritarian culture.

Therefore the argument in favour of rejecting the majoritarian culture is unsound. However, consider its consequences. Deploying (5b), the ‘other’ group silences the majoritarians and becomes the new majoritarian culture. But then different ‘other’ groups can do the same to that group and each other, until in the end everyone is silenced. Deploying (5a) and fractioning from majoritarian culture leads to a regression of smaller-and-smaller non-communicating monocultures, which continue to fraction until they reach the end-point of one-person cultures, and hence silence. Or the argument can be dismissed, in which case it is necessary to accept that all groups, including ‘other’ and majoritarians, have a part to play. So either everybody’s views should be taken into consideration or nobody’s should.

To say that nobody’s views should be considered is, of course, the end state of deconstruction, but it is something of a counsel of despair. We can do better than that. Say we have a number of schools of theological thought, each of which sets out from some world-view (and bear in mind that even the majoritarian culture is hugely fractured in this respect). What we could do is to have a big fight, with the strongest group getting to decide what is true. That is what is said to have happened in the past (though a quick look at the sheer variety of theological ideas espoused by majoritarians suggests that the true position is somewhat less black and white), and it is clearly not an acceptable approach. So, instead we could announce that each ‘other’ group has its own version of Christianity, that they are all equally valid, and that to try to achieve consistency between them is disallowed, as it dilutes their status as ‘other’. That is essentially what we have now. It is a position much beloved of post-deconstructionists, who revel in a false ‘diversity’ of ‘truth’. False because in fact the logical consequence of their position is that there should be a number of totalitarian groups within each of which only one ‘truth’ is permitted. True diversity can be achieved only if we all accept that we must listen to the views of those who are our ‘other’, regardless of how ‘other’ we may consider ourselves to be.

Quine’s rabbit

So how can we listen to the ‘other’, for it is surely true that something obviously true within one world-view can be not obvious at all in another? The following is a highly condensed version of an argument of W. V. O. Quine. Say you and I have no language in common, and I note that whenever you see a rabbit you use the word ‘gavagai’. What do I do? I could assert that it’s your culture and I have no right to interfere, in which case we are off down the road to island monocultures with no intercommunication save the occasional sling-shot. Or I could conclude that ‘gavagai’ means rabbit. Now it may actually be that in your culture you discuss animals not as wholes, but as a collection of body parts, so ‘gavagai’ refers to a collection of two short legs, two long legs, a body, two long ears, etc. Now there is no way that I could ever know that ‘gavagai’ conveyed much more information than the word ‘rabbit’, for I would hear ‘gavagai’ for rabbit, I would take words for individual body parts as referring to those parts, and so on. But this means that though some meaning would be lost in translating from your language to mine, the part that is lost is precisely that which you cannot express linguistically. And similarly in translating from my language to yours (to assume otherwise is a form of inverted chauvinism).

Before anyone objects, I am aware that this is a purely linguistic argument. I am not thereby denying the possibility of meaning conveyed by numinous states. However, that meaning becomes culture, which is a shared public thing, only to the extent that it can be communicated, which requires expressive ‘language’ of some form, whether it be natural language, symbolic language or the emotional languages of art. Therefore, within this broadened scope of ‘language’, it follows that those ideas that a group can communicate internally using the expressive means available to it can be translated and communicated externally. To deny this implies not only that intercommunication between cultures is impossible (a commonplace of neo-deconstructionism), but also that cultures cannot intracommunicate, so individuals are locked inside their own heads. So there are no epistemic barriers between cultures, or, to put it another way, contrary to what a university acquaintance of mine once claimed, the lyrics of Bohemian Rhapsody do not hold secrets that can only be understood if one is gay.

Putting the rabbit into action

In the case of a feminist theologian talking to Aquinas, this means that though Aquinas may not appreciate the feminist’s private meaning (or she his), they can be confident that they understand one another in as far as they limit themselves to the expressible. Which means that if the feminist theologian disagrees with Aquinas, they can identify which of his premises she takes issue with, though he may not be able to explain to her why he believes it (because he cannot explain that even to himself). And at this point they can have a discussion, which might lead to each of them understanding the other better.

So I am not denying the value of differing perspectives, far from it. I am saying that in academia as in culture, we need the input of many different world-views, as they are the only way we can become aware of unjustified cultural assumptions that shape our thinking, and begin to understand what is baggage that we can let go and what is real core belief that we cannot. Or, in a wider context, what is assumption about the way art should be that can be challenged, and what is essential to our artistic identity. We can only do this if we have a cultural marketplace, where artefacts are valued based on their merit, not their tribal adherence. As soon as we start privileging certain artefacts on the basis of their ‘other’ status, or asserting that there are epistemic barriers between groups, we are taking the first step on the road to the isolated monocultures.

Multiculturalism or death

So to conclude this part of the argument, I am arguing for multiculturalism, in which we do not destroy individual cultures or preserve them in aspic. Rather we allow them to join a wider discourse in the hope of producing something greater. The alternatives are not pleasant. In monoculturalism one group gets to assert its pre-eminence and suppresses all ‘other’ cultures. This has been tried and found wanting. In polyculturalism we defend many small totalitarian cultures in the only way we can, by retreating from contact with one another. And once that has been done it will happen again, with each of the small cultures fractioning into smaller cultures, until we achieve the end-state of the deconstructionist programme: six billion cultures, each consisting of one individual locked inside their own head. That way lies silence and death.

Appendix: epistemic barriers

I argued above that there are no epistemic barriers between groups. The argument was elementary, but relied on a certain amount of hand-waving. There is a much more powerful general argument which does, however, assume a certain amount of philosophical machinery, namely knowledge of the sorites paradox. I present it here, but the main argument does not depend on it, so readers may, if they choose, skip to the next section.
Briefly, the outcome of the sorites paradox can be stated as follows. If I have a population of things to each of which I can assign a parameter (age, gender, sexuality, race, etc), and two types such that:
  1. Things at one end of the range of the parameter are of type 1 and things at the other end are of type 2
  2. If thing A has one type then so do all other things with parameter value sufficiently close to that assigned to A

Then one of the following is true:

  1. The two types are identical
  2. The range of parameters can be divided into two regions that are clearly separated from each other

Let the population be the human population parameterised by some variable used to define groups as ‘other’ and let the types correspond to communities of intelligibility, so within a type individuals are mutually intelligible. Then the preconditions are met (clearly a small change does not affect intelligibility) and so one of the two outcomes is true. In outcome 1 the two types are identical, so the whole population is mutually intelligible and there is no epistemic barrier. In outcome 2 there is an epistemic barrier and the population can be divided into two groups, one of type 1, the other of type 2, with a clear gap between them in terms of values of the parameter. But the standard variables – gender (not sex), sexuality, race – are all extremely malleable, so this gap is unrealistic. Therefore there is no epistemic barrier. To put it more succinctly: we are one species; an epistemic barrier would require us to be two or more.
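For readers who prefer the dichotomy in symbols, here is a minimal formalisation of the argument just given. It assumes, purely for illustration, that the parameter can be treated as a real-valued variable and that the type assignment is well defined.

```latex
% Illustrative formalisation of the sorites dichotomy used above:
% P is the set of parameter values occurring in the population, and
% T : P -> {1, 2} assigns each value its type (community of intelligibility).
\[
\text{(i)}\ \exists\, a, b \in P:\ T(a) = 1,\ T(b) = 2
\qquad
\text{(ii)}\ \exists\, \varepsilon > 0:\ \forall x, y \in P,\ |x - y| < \varepsilon \Rightarrow T(x) = T(y)
\]
Premise (i) says the two ends of the range differ in type; premise (ii) is the
tolerance condition (a sufficiently small change in the parameter cannot change
the type). If any two values in $P$ can be joined by a chain of steps each
smaller than $\varepsilon$, premise (ii) forces $T$ to be constant,
contradicting (i). Hence either $T$ is constant (outcome 1: the types coincide),
or $P$ splits into regions separated by gaps of width at least $\varepsilon$
(outcome 2). As noted above, the parameters actually used to define ‘other’
groups vary near-continuously, so no such gap exists and outcome 1 follows.
```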

 

The danger of being ‘other’

Cultural isolationism

Increasingly, self-identified ‘other’ groups announce that majoritarian culture is of no value to them, and that their own culture is all they need. As such, cultural products of members of that group are asserted to adhere to different norms to those of other groups, and so cannot and should not be held to the same standard. In extreme cases it is even argued that non-members cannot appreciate or comprehend the group’s culture. This is the cultural equivalent of the fragmentation of Christian theology discussed above.

Now, the end-result of this agenda, if taken in its strong form, is obvious: every person is their own culture and communication is impossible, so all that is possible is silence and death. So, if it is obvious, why do the ‘other’ separatists not realise this fact? There seem to be three possibilities.
First, the ‘one big push’ approach. The idea is that all that is needed is to destroy majoritarian culture, and then everyone will be happy. Apart from the fact that this is incredibly destructive, it forgets that, by their own logic, if majoritarian culture is and can be of no value to them, then ‘other’ culture is and can be of no value to majoritarians: there is a key principle at work here, that everyone is someone else’s ‘other’. So the consequence would appear to be that majoritarians have no right to exist. So, say we remove the majoritarian culture and do – something – with the majoritarians. Why should the remaining ‘other’ groups not start to bicker and fracture, which one would have thought more, not less, likely in the absence of the oppressor? The idea, espoused by theorists of the ‘other’, that their shared experience of oppression will make them more reasonable, more amenable, ‘better’, is simply a statement of faith, and it has no evidential basis (indeed, the fact that feminist theology comes in White, Black and Latina forms, and Latina theology has two violently disagreeing sub-factions, suggests, on the contrary, that, to coin a phrase, ‘other’ individuals are human, all too human). Therefore, this theory cannot be taken seriously.
Second, the ‘monocultural other’ approach. The idea is that while the majoritarian view does not represent the entire population (hence ‘other’ groups form), each ‘other’ group is uniform in culture and therefore never forms its own sub-‘other’ groups. But people are inherently variable, and so these sub-groups will form unless there is some mechanism to prevent that from happening. There is a commonly expressed belief that members of ‘other’ groups are somehow more cooperative than majoritarians, but this is simply a restatement of the ‘shared experience’ idea demolished above. Another belief is that those who disagree with the set of cultural beliefs that the writer considers to be authentically ‘other’ are not really ‘other’ at all (e.g. culturally majoritarian women are said to have subordinated themselves to the patriarchy); ironically this is exactly the kind of exclusion that, it is claimed, led to the creation of the ‘other’ group in the first place. The only way to achieve the required uniformity is to impose it, which requires totalitarianism. Therefore, this theory cannot be taken seriously.
Third, the ‘multicultural other’ approach. The ‘other’ group accepts internal variation and adopts internal multiculturalism. This could work, but then why not multiculturalism across the entire population? There are two possibilities. One can assert that the majoritarian culture is inherently incapable of sharing with other cultures. Apart from being an exceedingly pejorative assertion, on a par with saying that ‘all men are sexists’ (which, regrettably, is not something I made up), this seems to ignore the fact that cultures are malleable things, and so it is entirely possible for majoritarian culture to reform, should it be given a reason to do so. So this possibility is based in prejudice, not fact. The other possibility is that one refuses to make the compromises required of any culture (see above) if it is to enter into multicultural cohabitation, so one insists on the purity of one’s isolated culture. This is simply selfish. So the ‘multicultural other’ could work but is insufficiently ambitious, and the only plausible reason for not extending it to full multiculturalism is an isolationism that means that the members of this ‘other’ group have turned majoritarians into their very own, marginalised, ‘other’.
So out of all that argument, it seems that the reason why ‘other’ separatists do not see the consequences of their position lies in delusional beliefs about human nature, delusional beliefs about their own ‘superiority’, or prejudice against their own ‘other’. In other words, all the qualities that they rightly criticise in majoritarian culture. And this is because of the key point they miss: whatever our cultural categorisation, we are human, and so share essential human nature, including both the positive traits and the negative, one of the foremost of which is tribalism. Why tribalism is so fundamental a human characteristic is a question for another day.

Isolationism and art

So far we have seen that isolationism is incoherent and unsustainable. But let’s, for the moment, pretend that an isolationist cultural group, or even an individual, manages to sustain its existence. There is actually an insidious danger inherent in defining oneself in terms of being ‘other’ that means there is a very high probability that art produced by any individual who so identifies themselves cannot be great.
Why should this be true? At first sight it sounds like a pejorative statement, but it is not. I repeat: we are dealing with an individual who entirely identifies themselves and their cultural activities in terms of being ‘other’. That is to say, they accept a cultural definition of their ‘other’ group, and, essentially, say ‘this is what I do; here and no further’. So they have made a conscious decision to limit the toolkit available to them in creating art, both technical and expressive, to that hallowed by the current definition of what their culture is.
Before I explore why this prevents great art, consider for a moment what majoritarian art with the characteristic of defining itself in terms of the artist’s majoritarian culture is like. There is a very simple answer: academicism. Composers like Cherubini, so loathed by Berlioz, painters like Landseer and Munnings are what you get: solid, competent and completely without any spark that takes them beyond competence and into greatness. Because, basically, defining yourself entirely in terms of a culture – any culture – means that you can never do anything new.
And so, back to the main argument. As I have observed elsewhere, great art always has something of the other about it. But the point is that that isn’t the ‘other’ the creator belongs to; rather it is other to that ‘other’. So to create great art, the artist has to transcend their culture and go beyond it. But if the artist is self-defined by their ‘other’ status, that is what they cannot do. And so they cannot create great art.
This is, by the way, one reason why the proliferation of different ‘other’ schools of art is not entirely a good thing. By promulgating the view that ‘other’ art does not need to be measured by the same standards as ordinary art, they essentially absolve the artist from the need to strive for greatness, because quality is measured not in terms of how their work transcends their ‘other’ status, but in how it conforms to it. Hence a new spectre appears: not only do the ‘other’ groups become monocultures, but those cultures will wither and die, or at least end up irrelevantly preserved in aspic, while individuals who seek greatness desert the ‘other’ for the mainstream.

 

One metaculture, many cultures

So, to conclude, there is nothing essentialist about culture. To say, as I have heard one critic say, that male writers cannot write convincing women characters, and indeed, should not be allowed to do so (I wish I were joking) is nonsense. To switch ‘other’, Thomas Mann could write convincingly about an elderly ephebophile in one book and an exuberantly heterosexual young man in another. Stravinsky could write brilliant jazz-inflected music without diminishing jazz or his own art.

The analysis above showed that there is only one stable situation, which I will now give a new name. What we need is a metaculture, within which many cultures exist. Each of us exists predominantly within one of those cultures. But rather than being told that that is where we must stay, and that appropriating ideas from other cultures is oppression or imperialism (take your pick), we should have in front of us the whole toolkit making up the metaculture, and be able to appropriate what we need from it. And then if we create something new, it doesn’t become the property of our culture, it becomes part of the metaculture.

So we celebrate the diversity of people, not as members of homogeneous groups, but as people, and allow each to form their own personal cultural toolkit that they use and extend as they reach for the one other that really matters – that of transcendence. That is the individual’s personal culture, and it, the selection of tools and the way they are used, reflects the individual’s nature. Some may prefer to stick broadly with the tools of a particular culture, and to extend and enrich that culture; others may prefer to create fusions of many cultures, and to create new things that belong only to the metaculture. Both approaches are equally acceptable. But to say that one can only use the tools that the doctors of cultural theory have hallowed for one’s use, based on one’s officially identified status as ‘other’ (or not) is fascism.

The creative mind

Introduction

I’ve always been interested in how creativity actually works.  Now I am myself fairly creative, in that I write here, I write fiction, I write music, etcetera.  I’m not saying that what I write is necessarily good.  That’s not important for the sake of this argument.  The key thing is that creation is something I’m used to, that I have a hand in.  

So what I’m going to do in this short piece is to introspect a bit and look at how my creative process works, and then see if there are any lessons that can be learned from it.  The discussion follows on naturally from that at the end of my piece ‘The Tyranny of Realism’, where I discussed the nature of greatness in art.  Which once again, is not a quality I claim for myself.  I am merely the lab rat from observation of which ideas follow.

My creative process 

This is how I work.  In fact, until I went into therapy the whole process disconcerted me greatly, as I seemed to be creating art without any very strong hold on what it was that emerged.  That is to say, music I wrote just happened; attempts to plan it went horribly wrong, which disconcerted me.  Even more bizarrely, designs for IT systems could appear in my mind fully-formed without my having done any actual conscious work to arrive at them.

And then I discovered two things: the Myers-Briggs indicator and the psychology of Jung.  The first taught me that the approach to creation that disconcerted me so much was simply intuition at work.  The second gave me the framework I need to understand what is going on.  So here, as a result of these insights, is what I think happens in my psyche when creative work (which can be writing music or words, writing this piece, designing a piece of software, whatever) is going on.

I am very strongly intuitive.  In my Myers-Briggs assessment, my score for intuition is maximal, so it seems I couldn’t get much more intuitive than I am.  What happens in the creative process is this.  Ideas pop into my mind, seemingly from nowhere.  I have no warning, but I can call on this capability at will, so if I sit down in front of a computer and get into the zone, ideas seem to flow straight from the aether into my fingers.  To put it mildly, this can be disconcerting.  I just spent an hour writing (some fiction, part of a prejudicial satire on Twilight as it happens) and I knew roughly what I expected to happen next plot-wise.  Which it did, up to a point, in that it ended up in the right place, but the path it chose to follow to get there was completely different from the one I had mapped out.  And what didn’t happen was that I had an idea and thought ‘that would be better’ to myself.  No, what happened was that I let go of control and the new approach simply happened.

This isn’t an isolated occurrence.  Half way through a novel I needed a house-maid to open a door for my hero, so she appeared, and she had a little bit of dialogue, and that was it.  A throw-away character, or so I thought.  What I hadn’t expected to happen was that she ended up dominating the second half of the novel, but she did.  And I never, consciously, made that decision.  It was made by whatever it is that feeds my fingers – call it my intuition.  As it happens I’m not alone in this.  The very great Polish writer Stanislaw Lem said that when he sat down to write Solaris, he had no ideas about planet-wide sentient oceans, or phantasms of dead lovers, or any of the other material that makes Solaris the amazing novel that it is.  He just sat down to write and it happened to him.  Similarly, Igor Stravinsky said that he felt that Le Sacre du Printemps was composed through him rather than by him.  As a third example, consider Shostakovich’s rebuke to a student who was not getting on well with writing the second movement of a symphony because inspiration was slow in coming: ‘You should not be waiting for inspiration, you should be writing your second movement’.

Now, I said this was disconcerting.  Why?  Well, first off, there’s the sense of lack of control that I get because I’m not consciously driving the piece forward in a controlled way.  In fact, as I said, when I have tried to do that, the results were terrible.  My intuition refused to play ball, and I had to build music rationally, almost mathematically (though, in fact, good mathematics is created intuitively too).  In fact, the results were so terrible that I ended up binning them.  So it seems that the lack of control is endemic to the creative process, which therefore makes me very suspicious when commentators claim that, say, Carl Nielsen had organised his music according to pre-determined rigorous models and that the music itself is, as it were, superfluous.

The other source of disconcertment is perhaps only worrying if you are as self-critical as I am.  That is to say, if I had sat down to write the scene with the house-maid at a different time, on a different day, would she still have taken over the latter part of the novel?  Or would she have been as minor as she was expected to be?  This apparent lack of determinacy is very worrying, as it leads to the next question: if what I actually write is so contingent, is there actually any piece of music or story or whatever ‘out there’ that’s being composed / written?  The obvious answer is ‘no’, and I’ll return to why that can be disturbing later.

Conclusions about creativity

Creativity is not Platonic

Let’s start with the last point I made, the one about the role of contingency in creation.  From the philosophical position I have reached now, that isn’t particularly disturbing, but before I sat down to try to work out what was going on when I did creative ‘stuff’, I was something of a Platonist about creativity.  That is to say, I had the notion that when an artist sits down to write, say, Macbeth, then there is some ideal form of the play ‘out there’ somewhere, and it is being gradually discovered, refined and committed to paper.

Where did I get this notion from?  Well, I’m sure you’ll find that it’s from the common popular model of how artists work.  So you get ideas like the sculptor searching for the sculpture within the stone interpreted along these lines.  Which is interesting, but wrong, because what that idea is saying is that the sculptor starts without preconceptions and lets intuition and the events of the moment, the feel of the stone, drive their hammer.  But the misconception that the finished article is inside there somewhere is commonplace.

So what I’m saying is this: if other artists work like me, and I’ve got some evidence to suggest that perhaps they do, then in fact creativity is totally non-Platonic.  There is no ‘piece’ out there, there is just a source of ideas that the artist shapes into the finished article.  So it’s meaningless to ask what would have happened if I had written that part of my novel on a different day, or if Lem had started Solaris on a different day, because creativity has two parts: an intuitive source and an intellectual foundry.  The whole process is of necessity contingent, as there is no grand plan that intuition is following, or at least none we can see.  There is just a process whereby raw ideas are shaped into a growing piece of art. 

Let me pick up on a point from above.  I said that there was no grand plan that we are aware of.  It could be that deep in my intuition there is in fact a plan that I am not consciously privy to.  But does that change anything?  I think not.  The end result is still a creative process that appears to me, at a conscious level, to be entirely contingent.  By way of example, I have on occasion sat down to write a new piece of music with, in my head, thematic material that I know I have used before in other pieces, and yet this latest piece is fresh and new, not a reflection of the older pieces.  This is entirely plausible in either model.  Either I take my themes and then, driven by intuition, assemble them into a musical fabric in a contingent manner, or my intuition gives me the themes and then, based on its Platonic ideal, directs me in constructing the piece.  There is no way I can tell these two scenarios apart, saving the arrival of a signed message from my intuition saying ‘I planned it all’.  So given the choice of ‘there is no plan, the process is contingent’ or ‘there is a plan, but such that you can’t tell it’s there, and so the process appears as if it’s contingent’, I prefer to apply Occam’s razor, select the simpler model, and learn to embrace contingency.  The Platonic model may bring comfort to some, but as the end result is the same, the value of that comfort has to be questioned.

(The frequent use of the word ‘constructed’ in the preceding paragraph should act as a warning flag about philosophical approach.  Indeed, I personally replace Platonism with Constructivism in art, in mathematics and in epistemology.  The consequences of this are rather interesting, but would take us too far off topic, and so are a subject for another time.)

Creativity is not intellectually driven 

The next point is that it’s clear from my discussion of the creative process that the conscious mind plays a subservient role.  In fact, as I’ve said a couple of times, if one lets the conscious mind take control, things can go badly wrong.  This explains why the music of so many of the mid-twentieth century Darmstadt school composers sounds rather arid: they thought that using the serial technique was what made Schoenberg et al what they were, whereas in reality Schoenberg was a great composer who happened to use the serial technique as a tool.  This is rather like thinking that if one memorises Messiaen’s book The Technique of my Musical Language, one will write music like Messiaen; Messiaen omitted one thing from that book: himself.  And therefore, we can learn much by analysing the great artists of the past, but should be wary about what we do with the fruits of that analysis.

This means then that we should, as I noted above, be very wary of academic commentators who seek out the content of art in the formal structures it adopts, as this is the same mistake as thinking that the meaning of a sentence is to be found in the grammar of the language in which it is expressed.  So, some commentators think they have understood a late piece by Stravinsky, say his Requiem Canticles, once they have been able to map every note back to the twelve-tone series in a reasonably deterministic way.  And all they have discovered is the tools he used to reify the products of his intuition.

The point behind these digressions is this.  All three composers, as well as writers and painters like Klee and Malevich, brought a (more or less) rigorous set of intellectual tools to the table.  But they created great art by making these tools and their intellect the servants of the source of creativity in the intuitive unconscious mind.  Their imitators created bad art by taking the intellectual rigour of the tools to be the source of genius and trying to do without the true source in the intuition.

The role of the intellect

So creativity starts when the unconscious mind produces an idea.  The role of the intellect is to take that idea and form it into something that can be written down or played, or painted, and in the process give it life.  If we have to get, if not Platonic, then at least Aristotelian, this is where it can happen: raw ideas are a kind of essence; before they turn into realised art they must also acquire accidents.

The raw material, whether musical or textual, emerges as pure essence, pure potential from the unconscious mind, and at that point one can’t do much with it.  If it is writing one has to turn it into sentences, if music into notes, harmonies, playable / singable parts, etc.  We’ve seen that the intellect should have no role in saying what the essence is (or else disaster strikes), but it still has a role, because we need to carry out that shaping process that gives the essence accidents that reify it.

This means that the conscious mind is dethroned from its proud position as the creator, the mighty genius overseeing the work it produces, and it becomes the servant of the intuition.  What it can do is bring to bear a toolkit of techniques that the artist has developed that allows them to realise art from raw essence.  So style is a major part of this, as it depends very much on the toolkit of formal ideas that the artist has available to them (think of Klee’s paintings with their continuous lines and the rule that crossing a line forces a change of colour).  The intuition tells one how the stone should feel, but it’s only the intellect that can work out how to translate that into movements of the hammer, and that is based on a lifetime of study and learning.

What is an idea? 

Let me address this tangentially.  I have said that ideas emerge from the unconscious into the conscious mind, where they are studied and shaped to make them ready to progress to the outer world.  So that means that the artist must have a very close relationship with their unconscious mind.  When I spoke above about getting into ‘the zone’, what I meant, in this language, was that one can open a pathway between the conscious and unconscious minds, and, if one is lucky, when that pathway is open the unconscious mind will give creative energy to the conscious mind.

So what makes the unconscious mind willing to give?  It clearly doesn’t all the time, as many artists go through sometimes quite long periods of silence.  According to Jung a well-developed psyche has two key features.  First, the conscious mind has not alienated the unconscious, so not only is it easy to open that pathway, but once it is open, the unconscious will be willing to give of its goodness.  Second, the beneficent aspects of the unconscious mind are well-developed.

The second point is critical.  Within my unconscious mind is the Shadow, repository of those things that I keep hidden, even from myself.  What comes from it can be amazing insights, but is more likely to be negative ideas, which will not be a good basis for creation.  So the artist needs to have a well-developed positive unconscious mind, which means that psychological space is reclaimed from the Shadow and turned over to the psychologically ‘good’.  This is part of what happens in the process of individuation.

In my case the positive unconscious mind is my anima (men have an anima, women an animus).  If she is well-developed then she can begin to take over space that used to be occupied by the Shadow (this is the source of the insights), and she becomes the source of creative energy.  It is, perhaps, not surprising that in man-dominated culture ‘the muse’ is generally portrayed as a woman.

So an idea becomes in this model the energy produced by my anima when I open the path to hear her.  And it is perhaps not surprising that a well-developed intuition can produce music, or sentences, or mathematics, or even designs for IT systems.  One’s anima/us is part of oneself and has the same skills as oneself.  In fact it may well be the source of those skills.

 

The tyranny of realism

‘There are deeper strata of truth in cinema, and there is such a thing as poetic, ecstatic truth. It is mysterious and elusive, and can be reached only through fabrication and imagination and stylization’ – Werner Herzog

A surprising contradiction

If you look at the history of film, two big developments are immediately obvious:

  1. The technology to create visual effects has grown ever more advanced, to the point where now it is almost impossible to tell what is ‘real’ and what is an effect, and (in principle) if you can visualise it, it should be possible to put it on the screen.
  2. Films have grown progressively more and more realistic, in the sense that they now almost entirely eschew the deliberately ‘unreal’ aspects of older films (think the Moloch machine turning into the god Moloch in Metropolis, or the consistent avoidance of right-angles in the sets of The Cabinet of Dr Caligari). Instead we get an ever increasing desire to make events on screen look as ‘real’ as possible.

This doesn’t make a lot of sense. Why is it that just when we finally have the technology to create astounding artistic effects, twisting reality to create images that will provoke, disturb or inspire the watcher, instead we use them to make things look more real, less artistic, less like the product of someone’s imagination?

I don’t think that point 1 really needs further discussion. The progression from 1902’s A Trip to the Moon to 2008’s Wall-E and beyond is, I believe, obvious. But point 2 needs some discussion: I’ve given some examples already, but it’s worth hammering it home; in addition there are some subtleties that are worth drawing out. So I’ll do that, then I’ll try to work out what this means. And then, just to show off, I’ll link these developments to trends in twentieth century art, and conclude with some observations on the role of the new in the creation of great art.

Films just keep getting less interesting

What do I mean by ‘interesting’?

By ‘interesting’ I mean that as time progresses there is less and less by way of artistically interesting visual effects in film. That isn’t to say that there aren’t amazing images. It is quite astonishing how realistically film can depict things blowing up, or people killing one another in inventively gory ways, or the exact details of an alien planet. But once you’ve watched V for Vendetta or Saw 47 or Avatar, what are you left with? And I don’t mean in terms of memories of the plot (so a strong urge to vomit, or a sneaking suspicion that there is more to leadership than having a neat mask, does not count): I mean in terms of artistic visual effects. The thing is that the explosions and dismemberments and big blue buggers are so realistic that they just skate over the surface of the imagination without ever provoking a visual artistic response.

Perhaps I’m not making myself clear. Let’s take an example from painting. Look at anything by Jack Vettriano, say this one:

Okay, it looks pleasant; it’s mildly erotic – the curve of the dancing woman’s rump is nicely limned; it’s a bit surreal. And sure it provokes you to wonder what the back-story is that led to this couple dancing on the beach. But is there anything in the image that creates a moment of revelation, some new insight, the ‘wow’ factor that leads to a new understanding of art, culture and yourself? No.

Right, now look at this:

This is Max Ernst’s The Angel of Hearth and Home. I remember when I actually saw it face-to-face, as it were, I just stood for ten minutes staring at it, trying to take in that amazing image. It communicates raw energy, a terrible jubilation, but also a feeling that something very, very evil is going on. This is done without sign-posts like a woman’s sexy bottom, a maid with an umbrella, or dancing on a beach. No, it comes straight out of the image without any need for interpretation. And this is completely independent of whether or not you like the painting, just as I insist the relevant factor in film should be independent of the plot (or indeed of whether the film as a whole is good or not).

To summarise then: what I am looking for is cases where images and artistic effects are used to create an emotional state independent of plot or event. Where merely seeing the image makes an impact. And that this impact happens at a deep, pre-conscious level. You respond consciously to the Vettriano; you respond viscerally to the Ernst.

So take the much vaunted 3D immersion within Pandora in Avatar. Sure, there’s a ‘wow’ factor, but it’s because you’re exploring the contents of the screen. Nothing there is actually very surprising. And as the film’s purpose is to make you think that this is a real, mundane, boring (if alien) place, it has no ambition to give your unconscious mind a shock. Now consider one amazing moment from The Cabinet of Doctor Caligari:

This image on its own has a jarring emotional impact: it conveys fear, dread and other forms of disquiet too complex to put into words. It creates that ‘spine-tingling’ effect, which is, of course, entirely under unconscious control, and which is a sure sign of being moved at a deep emotional level. And here’s the thing. That’s a still from The Cabinet of Doctor Caligari, and it has emotional impact without any need to know its context. Here’s another example, from Fury:

In the movie this is presented to us in a shot of its own, with no commentary. Lang knew that he was creating an image. Even out of context it is still immensely powerful as a depiction of fear, terror, horror. And the technical resources required were: one woman. At the other extreme (in several senses), here’s a (rather impressive) still from Avatar:

Yes, as I said, it’s impressive. The structures are intellectually interesting (is that really geologically possible?). But does it have the sudden jolt or spine-tingle factor? No. It’s just a landscape, rendered so realistically that I could be there. Indeed, critics praised the film for precisely this ‘you could be there’ quality, and in the process missed the point. We’re used to reality; the artistic shock comes from that moment when we go beyond reality and touch something other: the moment of transcendence.

So that’s what I am looking for: the ability of an image, or a sequence of images to have a direct emotional impact without any need to think about context or plot. I find it in abundance in pre-war movies, and very little thereafter, dying away to essentially nothing today.

So are all the interesting films really old?

Let me make an immediate exception. When I generalise about modern movies lacking the artistic shock factor, let me except once and for all the entire output of Werner Herzog. Herzog, with his stated aim of showing us what is true as opposed to what is real, is about as far from Avatar or Harry Potter or Wall-E or Random-Animal-Man or Latest-Generic-Tim-Burton-Movie or any other modern effects-fest as you can get. So I will observe right now that Herzog does the spine-tingle factor in abundance, then set him to one side as an honourable exception to a dishonourable trend.
Let me make another exception. I am talking about mainstream Western (west European and North American) cinema here: the kind of film that might get general release. The point is that The Cabinet of Doctor Caligari and Nosferatu and Der Mude Tod and Metropolis and M and The Man with the Movie Camera and The Goat and Fantasia and Glen or Glenda (remember: a movie doesn’t have to be good to be visually arresting) and most of the Fred & Ginger movies and . . . so on were all mainstream movies (or intended to be), not the sort of thing you’d have to go to an art house cinema or a film club to see. So it is with mainstream movies today that I will compare them. My purpose is to detect a broad cultural trend, and you don’t do that by examining what lies on the fringes.
So: let’s start with the past. I listed a bunch of movies just now. I’m sure you can think of more. Here are some of my favourite visual jolt moments from them:
  • The Cabinet of Doctor Caligari: the way that all the angles are wrong-angles, and the ground often consists of broken fragments pitched at changing angles creates a continuous feeling of unease, that something is very, very wrong, as a background to the entire movie. This is brilliant use of pure visceral effect that needs no words or plot device to communicate it.
  • The Cabinet of Doctor Caligari: the somnambulist opens his eyes (see the still above).
  • Nosferatu: Count Orlok rising, as if on a plank (which of course, Max Schreck was) out of darkness.
  • Der Mude Tod: the moment when we see the opening in the wall of Death’s domain, a narrow bright slit in a dark wall, with an enormous shadow.
  • Metropolis: the Moloch machine in operation.
  • Metropolis: the Moloch machine becomes Moloch the God.
  • Metropolis: the montage of eyes watching the false Maria.
  • M: Peter Lorre’s haunted face.
  • The Man with the Movie Camera: the Bolshoi Theatre folds in on itself.
  • The Goat: a speeding train comes at us and stops with a close-up of Buster Keaton sitting on the front of the engine.
  • Fantasia: the abstract animation of the Bach Toccata and Fugue.
  • Glen or Glenda: Barbara lying crushed by the tree.
  • Glen or Glenda: the accusing fingers point at Glen.
  • Fred & Ginger: in the seven ‘canonical’ Fred & Ginger movies, the sets are amazing Art Deco structures which create a subconscious feeling of stylisation and artifice absolutely essential to such artificial films. This is just a less extreme version of The Cabinet of Doctor Caligari’s approach to set design.

I think that will do. As I said, I’m sure you’ll be able to think of more. So, basically in pre-war movies there’s plenty of visual excitement. In other words, pre-war, film-makers had no trouble with the idea of using what was a visual medium to create purely visual effects that tampered with viewers’ expectations of reality, and hence created the aesthetic shock (Ed Wood was clearly a hang-over from the pre-war era, but then that is evident from his whole output: he may have been filming in the ’50s, but he was emotionally rooted in the ’30s).

After the war cinema seems to have lost its way, and the idea of using the visual medium to expand reality was gradually replaced with using it to create an increasingly great semblance to reality. This was true even when the events depicted were technically speaking impossible. Rather than glorying in their impossibility, and making an artistic statement out of it, film-makers preferred to try to con their audiences into thinking that they were possible after all. By the time of Star Wars, the battle was pretty much lost: the point was to make the audience think it was real rather than to present them with something unreal, but make it so compelling that they were sucked into it anyway, and ended up conspiring with the film-maker to transcend mere reality and replace it with something else.
In fact, one of the few modern effects to be remotely ‘unreal’ is bullet time. This is a fascinating effect, given that it allows us to create an image and then, slowly and deliberately, examine it from all possible viewpoints. But its use in practice is, to put it mildly, uninspiring, because it always seems to be used with determinedly ‘real’ (in the sense of pretending to be real) events. So this effect has potential, but it needs to be placed in the hands of a visionary intent on creating art, not a hack intent on creating dollars.
Let me finish this section with what I think is an interesting observation. You might be tempted to object to my claims about modern movies by saying ‘but there are way-out movies today: what about Being John Malkovich, or The Eternal Sunshine of the Spotless Mind?’. Well, what about them? Sure, there are weird events galore. But, for example, the portal into Malkovich’s head is portrayed with immense realism, with real mud and all. Similarly, when the house collapses near the end of The Eternal Sunshine of the Spotless Mind, it doesn’t do it in any visually exciting way: it just falls apart, plank by plank. Even in movies with completely surreal screenplays we end up with the film-makers determinedly setting out to render that surrealism in as realistic a manner as possible. They want to make it believable. And in the process they remove the magic, they remove the wonder, they remove anything that raises the film’s visual presentation above mundane, boring reality.
So to repeat: cinema is intrinsically a visual medium, and yet modern film-makers, rather than making use of the near-infinite possibilities offered them by CGI and creating truly artistic visual effects, prefer to play it safe, and try to make the things they depict look as real as possible. It is as if they don’t trust their audience to be able to manage the challenge of following a supra-realistic discourse. This is a critical observation.

Okay, so cinema isn’t visually exciting any more: what happened?

Let’s start off by knocking some ideas on the head. First, it isn’t the shift in power in the film industry from Europe to the USA. Many of my examples are Hollywood products, and it wouldn’t be hard to find more. Second, it isn’t the advent of sound. Many of my examples are talkies, and I could have mentioned even more (The Testament of Doctor Mabuse, Fury, Sleeping Beauty, anything choreographed by Busby Berkeley).
So what is it then? Recall the critical observation at the end of the last section: film-makers not trusting their audience to be able to cope with anything other than hyper-realism. Let’s explore this. In fact, there are several ways of looking at it.
The need for control
Bizarrely, in this era of big, dumb action films, where the dialogue is usually reduced to exclamations of terror and the heroine’s sole function in the movie is to show off her cleavage (I would have said figure, only, for interesting reasons I intend to go into in a future essay, shapely figures are in short supply in Hollywood right now), film-makers don’t want to stir their audiences emotionally. Let me make that statement more precise. They’re quite happy for us to react emotionally to their movies, but they want to be able to dictate the emotions that we feel. So you are meant to feel awe on seeing that still from Avatar, you are meant to feel lust when you see Susan Sarandon take her top off, you are meant to feel excited when you see one big heap of junk thump another big heap of junk in Transformers, and so on. What they don’t want is for you to feel your own emotions.
Now the problem with artistic effects is that they’re quite hard to pull off, precisely because you’re dealing with the unconscious mind, which is a complex mix of pre-human instinctive reactions, structures common to all humans (the collective unconscious) and material deriving from the individual’s experiences. You can (as my examples above showed) do it with very great artistry, but it’s a subtle and complex business, requiring a lot of time and effort, and it’s bound to be a bit hit-and-miss because you’re using something unpredictable (your unconscious mind) to try to influence something unpredictable and disparate (the audience’s unconscious minds), so success is not guaranteed.
If you’re driven by the bottom line, success is required, so you want to be sure of your audience’s reactions. Much better to either use the screenplay to tell everyone what to think and feel, or, better yet, short-circuit the human parts of the unconscious mind, with all their complexity and variability, and target the one part of the human psyche that is absolutely predictable: the instinctive unconscious that hasn’t changed to any great extent in the last few million years (just as each of us has within our brain a complete, fully functioning reptilian brain, so we have a pre-human hang-over in our psyche). If I show a straight man a picture of Susan Sarandon’s breasts, he will get aroused, and the same will happen if I show a straight woman Keanu Reeves (why?). If I show them someone being disembowelled, they will cringe with disgust. If I show them an explosion, they will react with shock and amazement. And, for reasons I really don’t care to think about, if I show them someone farting, there’s a good chance they’ll laugh.  And all of those responses are pre-human, and easily correlated with specific classes of stimuli.
So, as the art of cinema has become more of a business, as the need for predictable return on investment has become ever greater, artistic effects have been left behind and replaced with simplistic visual effects guaranteed to produce precisely calculated results from the audience. And here’s the thing: the instinctive mind gets confused if things depart from reality, because reality is what it’s wired to process in its basic mission of controlling the ‘food, flee, fuck, fight’ circuits in our brains. So if you want to make films that work at this very basic level, you have to aim for total realism. And the end result is that if I eschew any hint of visual interest, and instead go for immersive ‘reality’, I can predict how audiences will react, and my accountants will be very happy. Or, to put it another way, we’re making movies that would appeal to chimpanzees.
Fear of transcendence
I’ve just given a sound business-based reason for avoiding the use of transcendent effects, but there’s another, subtler trend going on. Culturally we seem to have become suspicious of the very idea of transcendence, as if reaching for the supramundane is somehow bad or elitist (one of our culture’s ultimate terms of derogation). Now the experience of the artistic jolt is a transcendent moment: you are almost literally taken out of yourself, losing conscious control and your individuality in the process. In Jungian terms, we could say that art is talking directly to us through the collective unconscious that is part of all of us. And individuality is highly prized in our post-war culture: what matters is me, not me and my interaction with society. When Baroness Thatcher said that ‘there is no such thing as society’ she was wrong in principle, but in terms of modern values she was right: we are no longer a society, but a collection of individuals. And it is since the war, with its terrible examples of collective beastliness, that the idea of the individual and of individual rights has come to the fore. The fascist dictatorships, with their emphasis on subordination to the collective, and their terrible acts of collective murder, made the rise of the individual, and the downfall of transcendence and that which causes it, inevitable.
Consider that other great source of transcendence: religion. Christian worship (I shall limit my discussion to Christianity as I am discussing mainstream Western culture; however my argument can be applied more widely) is, of its nature, a collective thing. We enact rituals with the purpose of (so the theory goes) losing our individuality within God by a collective re-enactment of Christ’s self-sacrifice. No wonder church attendance has fallen off since the war: it has nothing to do with a lack of religiosity – Eastern religions and a particular kind of Christianity are booming – and everything to do with the fact that people want to be individuals. So, what’s happening with these successful religions, then? Well, to be honest they’re all religion-lite. Mysticism is big, because people like the exclusivity and specialness that it implies. But of course, it’s a cut-down version. There are heaps of do-it-yourself religion books about Meister Eckhart, but the Meister wouldn’t recognise what is being said in his name: very tellingly, his insistence on the need for the death of individuality before transcendence can be achieved has been quietly dropped. And the same is true for Buddhism, Hinduism, Gnosticism, Zoroastrianism and Christianity. Complex theological ideas are replaced with simple rules and self-help; overcoming the self transforms into self-worship. So we have religion re-packaged for the age of the individual. And transcendence is, outside the small rump of traditional believers, a thing of the past. No wonder films are so boring. Let me make it clear: I am not arguing for a return to traditional religion. Far from it. I am arguing that the decline of conventional religion is a symptom of an underlying cause, one of whose other symptoms is artistically dull movies.
The desire for comfort
My final point is based on the fact that though the artistic shock may come from the unconscious mind, it has quite a significant impact on the conscious mind, in the form of powerful emotions. Now these aren’t the simple animal emotions of the kind discussed above, but more complex, conscious, specifically human, emotions. And as such they are very hard to describe: our language for describing emotions is based around the simple, animal emotions of fear, pain, lust, hunger, anger and so on. The best description seems to be as a massive release of energy, coupled to a heightened awareness, as if the consequence of the artistic shock is to remove barriers to true perception of the world, to allow one to perceive things as they really are. And this is not the same as photographic realism; that is just a precise reproduction of the world as it appears to our normal, limited senses, which has nothing to do with the supra-real world one experiences (however briefly) after the artistic shock. In Herzog’s words, this is the distinction between truth and ecstatic truth.
Now a huge energy flow and heightened awareness can be very exciting. In fact so exciting that it can be among the most intense emotional experiences one can have (it is no accident that discussion of reactions to art – and indeed of mystical experiences – is so often couched in quasi-sexual terminology). And while the energy is flowing, and one is living with the consequences of the transcendent moment or moments of contact with the other, great things can happen: inspiration, creativity and more. But remember that these emotions are conscious, and so they perturb the viewer’s conscious state. In other words, there’s intellectual effort involved. This is principally in the form of intense concentration, in which one examines the world through new eyes, but in addition, as one comes down from the ‘high’ of the transcendent state, there is a feeling both of euphoria and of being drained: all that energy had to come from somewhere.
So what I’m saying is that the artistic shock heightens the experience of watching a movie immensely (think how mundane The Cabinet of Doctor Caligari would be without all those wrong-angles), but it involves quite a lot of intellectual effort. And this leads to part of what I think has happened: when film was (comparatively speaking) new, audiences were prepared to put in the work in return for getting not just entertainment, but heightened entertainment. Compare The Wizard of Oz to any of the seven ‘canonical’ Fred & Ginger movies. The Wizard of Oz is an amazing spectacle, but that’s all it is. The Fred & Ginger movies are self-conscious works of art: the team behind them (led by Fred) were happy to assume that their audience would put in the effort. Of course, as good art, the movies can be enjoyed as pure entertainment if that is all one desires, but they make no effort to hide their aspiration to be more.
But after the war, people had had their fill of austerity and hard work and effort. They just wanted to be entertained. And so entertainment is what they were given, and so things snowballed, and we reached the point we are at now where mainstream movies are, on the whole, simple commodities which audiences take in, pretty much in the same way that one might scratch an itch. Yes, movie critics might complain and rate Transformers 2 as one of the worst movies of the year, but it was perfect mindless entertainment, so it was a smash, grossing $402,076,689 in the USA alone. By way of comparison, Synecdoche, New York, a truly great movie, which makes no bones about expecting its audiences to engage their brains throughout, grossed $3,081,925 (all figures from IMDB). And given that (as observed above) the bottom line is, increasingly, what matters to studios, mindless films are, increasingly, what the mainstream produces. And artistic merit is not a consideration.

Okay, so where else can we take this argument?

I think there’s a lot of room for extending this argument to other art-forms, and, as a consequence, making the beginnings of an attempt at explaining one of the most mystifying features of art in the twentieth century: the growing disconnect between high art and popular art. In 1927, two novels were published: To the Lighthouse and Inspector French and the Starvel Tragedy. Well, To the Lighthouse is a masterpiece, and the other, to be charitable, isn’t, but no prizes for guessing which sold more copies. Salvador Dali’s later kitsch made him a very rich man; the incomparably greater Max Ernst was only ever well-off. Harrison Birtwistle is arguably one of the two greatest living composers, and yet, for reasons that I shall never understand, it’s Andrew Lloyd Webber that people seem to like. And in all these cases, just as with the movies, serious critical taste is absolutely at variance with that of the public. And why?

First a quick comment about what I mean by realism and irrealism. I could get away without definitions for cinema, because the terms were kind of obvious, but now I need to make things more precise. What we consider real or irreal is, one might think, determined by our senses, but, as we know only too well, the relationship between what we experience and any objective reality that may be out there somewhere is, at best, somewhat tenuous. Reality is culturally determined. For example, I claim that there are separate colours green and blue. A native speaker of Vietnamese would see only one colour: xanh. So what I experience as ‘real’ depends on my cultural baggage. ‘Real’ is what is considered the norm in my culture in its depiction of how the world works; ‘irreal’ is everything else.  So culture, by defining the vocabulary used by ‘normal’ art (as opposed to innovative or conservative art), defines what we expect things to look / sound like, which in turn defines what is real.

Argument 1: control

Here the argument for film was that you can sell more cinema seats if you can predict audience response. This actually works quite well for the other forms.

  • There’s no way of predicting how readers will respond to To the Lighthouse, just that they will respond strongly. Contrast this with what publishers churn out now: lots of nice simple novels about basic emotions. Rename Eat, Pray, Love as Hunger, Self-Love, Fucking and you kind of get the point (note the nice appearance of religion-lite and the cult of individuality). And this appeal to basic emotions stays culturally ‘real’ so long as you portray people and events that, even if they are caricatures, are immediately recognisable, set within a simple, linear narrative.
  • The situation for plastic arts is almost identical to that for film. Vettriano’s picture produces a clear and simple response of ‘I wouldn’t mind doing her‘. Ernst’s is far more complex (and quite negative) emotionally.
  • For Western culture, tonal music is culturally ‘real’. Now, atonal music is actually rather good at depicting complex (usually negative) emotions, but if you want big, bold, simple emotions, tonality’s what you need.

Argument 2: fear

Carries over to all the other art forms without modification. Note in passing that atonal music is very clearly ‘other’ by its very nature, and hence worryingly closer to transcendence.

Argument 3: comfort

Again, carries over to other art forms. What is unfamiliar is less comfortable; what is demanding is less comfortable. Culturally ‘irreal’ art is both.

So basically the situation is the same as in film. And we even see in high culture evidence of a terrible malaise that has overtaken non-mainstream film. That is to say, independent film-makers have grown so used to the world of hyper-realism that they seem to fear breaking the rule of ‘everything should look as real as possible’. As I said before, Werner Herzog breaks this rule left, right and centre, but he stands alone. Who are his followers? And the same is increasingly true in other media. ‘Classical’ composers have started writing tonal music again. We are told that accessibility matters. Even, apparently, if that means compromising your artistic values.

So there you are. This isn’t a problem unique to film. Realism (or its equivalent) has taken hold everywhere. So when do we start the campaign to take the arts back for irrealism, then?

Conclusion: transcendence, newness and greatness in art

My notion of artistic ‘realism’ has some interesting consequences, one of which is that our idea of what is real changes.  But that is surely the case: one need only look at, say, how portraiture has changed over the centuries to see that.  For an ancient Egyptian, being true-to-life meant making as much of the subject visible as possible, leading to the curious flattened-out (and physically impossible) stance in Egyptian portraits.  But to an Egyptian our portraits would look unreal.  Likewise, we are not surprised to see all kinds of colours in a face, and yet in the latter part of the nineteenth century the idea was revolutionary.  Going back to my discussion of film, we can interpret the shift back to realism after the war as being a retreat in what was culturally ‘acceptable’ compared to more adventurous tastes before the war: this is just a restatement of my earlier arguments in the new, more general, language.

A gradually shifting definition of what is ‘real’ and what is culturally acceptable is what causes the ‘shock of the new’ effect.  It is very hard now for us to feel viscerally just how revolutionary the early impressionists were, but those we remember we don’t remember for shocking us, at least not in a ‘shock of the new’ way.  Many of the composers of the Sturm und Drang movement of the eighteenth century are justly forgotten: they didn’t look beyond the surface effect created by the new tools to see where they could lead, a promise that was only realised in the next century.  In cinema, effects technologies amaze when new and are old hat a few years later; merely using the technology may be enough to wow the first audiences, but it will not create lasting art.  More generally, innovations in the creative vocabulary shock, but do not of themselves create transcendence.  This could, of course, be a factor in the ‘censorship of time’ phenomenon that I have discussed elsewhere.  Works that seem transcendent masterpieces to their contemporaries are, with time, revealed as purely shocking, and pure shock does not last.

Therefore newness does not imply transcendence.  But transcendence requires a form of newness.  I do not mean that transcendence can only be achieved with the latest technical means.  The Grosse Fuge is still transcendent today (though it was loathed in its own day).  But in the course of achieving transcendence, going beyond the real, the artistic shock creates an emotional space within the consumer that is wholly new and unexpected.

Now, in principle, a great genius could still create transcendent art today using the technical means available in 1826.  However, thinking back into an earlier cultural epoch without producing pastiche is well-nigh impossible, as too many great artists of the twentieth century discovered to their cost.  And pastiche, almost by definition, cannot be transcendent.  Similarly, using the vocabulary created by the cultural norm is unlikely to create transcendence, if only because it will create work that is part of the reality it is trying to transcend.  Generalising the way that (as Roger Ebert has observed) greatness in a movie director lies ‘between the frames’, greatness in art can almost be thought of as lying in the artist’s having, through effort, transcended the norms of the art of their time.  This doesn’t mean that their transcendent language has to be avant garde: Sibelius is a case in point, his amazing final works creating a wholly new sound-world within a (more or less) traditional tonal language.  But the transcendent language must be significantly other, and we feel that otherness down the ages.  This is a key observation: great art sits significantly outside the cultural norms of its time.

So transcendence does not require the latest technical means, but an artist who sticks to the artistic language of the present or past without a compelling aesthetic reason is risking degeneration into kitsch.  For example, in film, using practical effects rather than CGI is mere perversity unless there is something about the practical effects that CGI cannot (yet) create, or some aesthetic purpose behind their use (in some unclear way it seems obvious that Fitzcarraldo would not work with a CGI boat, but it is hard to see what the director of Eternal Sunshine of the Spotless Mind gained artistically by insisting that all his effects be practical).  Because of this, great artists have always pushed the bounds of the possible (though maybe in unexpected directions), seeking that new tool that might help them capture transcendence, while lesser artists have been content to stay in the cultural shallows.  So while neophilia has its noticeable demerits, neophobia has even more.  Great art comes from the creative tension between new and old, and not from over-enthusiastic exploitation of new tools or a deliberate refusal to expand the expressive vocabulary.

The censorship of time

Let’s face it, art is stuffed

It’s a commonly held view that modern art, regardless of the art-form, is worse than the art of the past.  Cultural conservatives point to Michelangelo or Crivelli and then point to . . . whatever bizarre collection of failed comedians makes up this year’s Turner Prize shortlist and say ‘There, I told you, Sir Alfred Munnings was right, modern art is worthless.’

And this isn’t just true of the plastic arts:

  • Theatre.  The past had Shakespeare, Jonson, Sheridan, Wilde, Shaw, Coward.  We have innumerable ‘play of the film’ shows and Andrew Lloyd Webber.
  • Film.  The past had Murnau, Lang, Wilder, Cukor, Hawks, Bergman, Tarkovsky.  We have, well, we do have Werner Herzog, but no-one actually goes to see his movies.  This is the age of franchises: Terminator, Saw, Transformers, <insert animal here>man.  We live in an era where a film about (to use Peter O’Toole’s immortal phrase) ‘blue Barbie dolls’ was hailed as a masterpiece because said Barbie dolls looked so real you could almost pretend that it wasn’t a work of art you were experiencing.  Way to miss the point.
  • Music.  Modern pop music is just too depressing to contemplate: what a comedown from Morrison, Townshend and Hendrix to Spears.  So let’s talk classical music.  To be blunt, I could sit here for days listing great composers of the past; as for the present I can think of precisely two great living composers – Henze and Birtwistle – and they’re getting on a bit.
  • Books.  Defoe, Burney, Richardson, Austen, Thackeray, Brontë, Dickens, Gaskell, Trollope, Eliot, Woolf.  And that’s just the British ones.  And now we have shelves and shelves of chick-lit, endless whimsical books with whimsical titles by that bloke with the funny name who writes about whimsical things happening in Africa, and, at the pinnacle, that towering genius Stephenie Meyer.  And it isn’t that long ago that ‘The Lord of the Rings’ was voted the greatest novel ever written.  ’Nuff sed.

So yes, it’s clear, isn’t it, that the arts are in terminal decline?

No.

No?

Absolutely not, because what we have failed to take into account is what I call the censorship of time.

Perhaps not then

The thing is, you see, we suffer from a unique disadvantage when it comes to comparing contemporary art with older art: we live now.  Which means we are surrounded by contemporary art and experience all of it, bad and good.  There’s no selection other than throwing the book away or switching the radio off.

With the art of the past there has been selection.  What we read / watch / listen to / look at today isn’t the totality of what was produced.  No way.  There was loads and loads and loads of total garbage churned out by the shedload.  We just get the bits that lasted.

So that’s the first half of the censorship of time theory: we see all contemporary art, but only the best of older art.  But what about the disturbing phenomenon that people right now seem to embrace dreck with a positively unseemly abandon?

Well, this isn’t new.  Beethoven was widely considered a madman and his music unlistenable.  When Londoners first heard the start of his fifth symphony, you know, the ‘da da da dum’ bit, they burst into laughter – it was so funny, my dear, to even expect people to take music like that seriously.  In one infamous week, The Beatles were pipped to the number one spot on the charts by – Engelbert Humperdinck.  The Who had very few number ones at all.  Virginia Woolf’s books sold so few copies they scarcely paid for publishing expenses.  Turner was widely considered to be insane.  Metropolis, now thought one of the greatest films ever made, was hailed as a disaster when it was released.  In fact, the American hack who butchered it complained in his autobiography about how hard it was turning Metropolis into something watchable, and congratulated himself on managing to do so.  People didn’t flock to the theatres to hear Wilde’s latest epigrams or Shaw’s latest intellectual conundra; no, they went to see Marie Lloyd sing ‘Oh Mister Porter’ and waggle her boobs at them.

So basically, the point is this: back in whatever era you care to contemplate they had bad art, bad music, bad films, bad plays and bad books.  Hordes of them.  And they were (on the whole) what people preferred.  It is only with the passage of time that the trash has been winnowed out, leaving the (apparently) impeccable artistic record of the past.  And we are living slap-bang in the middle of said winnowing process for the art of now.  Is it surprising that, for devotees of the modern, it isn’t necessarily a pleasant place to be?

Against Godwin’s Law

Godwin’s Law: true but abused

Godwin’s Law, originally formulated by Mike Godwin in 1990, states: ‘As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches 1.’  Now I have no trouble with this statement: evidence suggests that it is almost certainly correct.

However there is another ‘Godwin’s Law’ (I shall use scare-quotes from now on to distinguish it from the original law).  In the words of Wikipedia:  ‘For example, there is a tradition in many newsgroups and other Internet discussion forums that once such a comparison is made, the thread is finished and whoever mentioned the Nazis has automatically “lost” whatever debate was in progress.‘  This statement, which is related to Godwin’s Law only in that it includes the word ‘Nazis’, is what I wish to deplore; I intend to show that it is a dangerous means of stifling debate that allows proponents of untenable propositions to maintain the appearance of being in the right by rubbishing those who differ from them.  

My attack will take the following form.  First I will demonstrate that ‘Godwin’s Law’ is useless as a tool for telling whether a statement is true or not: it can give the right result or the wrong one, and which you get has nothing to do with the truth of the matter.  Second, I will examine some forms of argument where an appeal to Hitler or the Nazis may be justified (in context): specifically slippery-slope arguments and counter-examples.

Now the point of my argument is not that ‘The Nazis did it’ is always the best, or even a good, argument: in fact it can become rather tiresome.  Rather, we should treat comparisons to Hitler and the Nazis on a case-by-case basis, instead of simply assuming that anyone who invokes them has lost the argument.  We should be wary of those who wish to reduce argument to a discussion of extreme cases, but a lazy application of ‘Godwin’s Law’ is not the answer.  As usual, we need better, subtler tools.

Analysis

‘Godwin’s Law’ is useless

Let me show that ‘Godwin’s Law’ has no power, positive or negative, for determining the truth of an argument.  In other words it is useless as a tool for deciding who ‘won’ an argument.  

So here are two arguments.  Consider:

A: All vegetarians are peaceable people

B: Hitler was a vegetarian

A: ‘Godwin’s Law’.  You lose!

Here A claims to have won the argument, in spite of the fact that B has successfully refuted A’s claim (given, of course, that Hitler was anything but peaceable), so in this case ‘Godwin’s Law’ gives the wrong result.  Now consider:

A: Performing experiments on unwilling human subjects does not mean a society is not open

B: The Nazis did it

A: ‘Godwin’s Law’.  You lose!

Here A claims to have won the argument, and happens to be in the right, but only by accident: it is not ‘Godwin’s Law’ that makes the Nazis an invalid counter-example, but rather the fact that a number of open societies (e.g. the UK and the US) have acted in ways that run counter to their founding ideals without thereby ceasing to be open.
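To make the difference between the two exchanges explicit, here is a minimal sketch in logical notation; the predicate letters are my own shorthand, introduced purely for illustration: V = ‘is a vegetarian’, P = ‘is peaceable’, E = ‘performs experiments on unwilling human subjects’, O = ‘is an open society’, h = Hitler, n = Nazi Germany.  In the first exchange A asserts

\[
\forall x\,\bigl(V(x)\rightarrow P(x)\bigr),
\]

and B supplies \(V(h)\) together with the tacit \(\neg P(h)\), which entails \(\neg\forall x\,\bigl(V(x)\rightarrow P(x)\bigr)\): a genuine refutation.  In the second exchange A asserts

\[
\neg\,\forall x\,\bigl(E(x)\rightarrow\neg O(x)\bigr),
\]

and B supplies \(E(n)\wedge\neg O(n)\), which is perfectly consistent with A’s claim and so refutes nothing; what actually supports A is a witness satisfying \(E(x)\wedge O(x)\), such as the open societies just mentioned.  Whether invoking the Nazis is legitimate thus depends entirely on the logical shape of the claim it is aimed at, which is precisely what a blanket appeal to ‘Godwin’s Law’ ignores.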

Valid uses of Hitler and the Nazis in arguments

The slippery-slope argument

What I mean by a slippery-slope argument is a situation where I assert that some proposition is generally true.  If:

  1. I can point to some starting-point scenario in which it is provably true
  2. I can also show / assert that if the proposition is true for some one scenario, then it is true for all other scenarios sufficiently close to the specified one

Then we conclude that the hypothesis is true in any scenario that can be ‘connected’ to the starting-point, in the sense that I can get from one to the other by applying small modifications.  This is called the slippery-slope for obvious reasons: if you start at the top of the slope and can roll a little way down, there is nothing to stop you getting to the bottom.

Now if I can find a scenario where the proposition is not true, then there are two possibilities: either there is a class of scenarios that cannot be connected to the starting-point, where the proposition is not true; or assumption 2 is not (always) true, in which case the conclusion is not true.  So in either case the proposition is not universally applicable.  
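For readers who like this spelled out, here is a minimal sketch of the schema in logical notation; the closeness relation R and the connected set C(s₀) are my own labels, introduced only to make the structure visible:

\[
\begin{aligned}
&(1)\quad P(s_0)\\
&(2)\quad \forall s,\,s'\;\bigl(R(s,s')\wedge P(s)\bigr)\rightarrow P(s')\\
&\;\therefore\quad \forall s\in C(s_0)\;P(s),
\end{aligned}
\]

where \(C(s_0)\) is the set of scenarios reachable from the starting-point \(s_0\) by finitely many R-steps.  Contrapositively, exhibiting a scenario \(t\) with \(\neg P(t)\) forces one of two conclusions: either \(t\notin C(s_0)\), or premise (2) fails somewhere along the way; in both cases the proposition is not universally applicable.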

As an example, consider the proposition ‘all disputes can be resolved peacefully’, which leads to the slippery-slope proposition:

P: This dispute can be resolved peacefully

It is indisputably the case that there exist disputes that can be resolved peacefully, so P is true somewhere.  Moreover, it seems reasonable that if a given dispute can be resolved peacefully, a slightly more or less complex dispute can also be resolved peacefully.  So conditions 1 and 2 for the slippery-slope hold, and all disputes connected to my starting-point can be resolved peacefully.  But my original proposition says ‘all disputes’.  Are all disputes connected to a clearly peacefully-resolvable starting-point?

And this is where Hitler and the Nazis come in.  They were so extreme, so unreasonable, so unpeaceable, so inhuman that they can provide a counter-example to more-or-less any attempt to assert a universal positive about human nature or sociology.  In this case, applying P to ‘the crisis precipitated by the invasion of Poland’ cannot (despite some rather inventive special pleading by agenda-ridden historians) realistically be said to give a positive result.  Thus P fails for that dispute, and with it the claim that all disputes can be resolved peacefully.  What is interesting, of course, is to determine:

  • Whether the Nazis are an isolated special case, or whether there is a more-or-less large class of disputes insusceptible to peaceful resolution
  • Whether there is some external criterion that we can apply to disputes to determine whether they are susceptible to peaceful resolution (think how much that would simplify the work of the UN!)

Now, you could argue ‘all you’ve done is find a counter-example, so why bother with the slippery-slope stuff?’  Well, my intention here is not to prove the original proposition untrue, but to prove that it has limited applicability, to make some effort at determining the limits of said applicability, and to hint at the fact that there may be something deeper going on that repays further study.  If I merely wish to prove a proposition untrue, a simple counter-example will do, and it is to counter-examples that we now turn.

Counter-examples

The slippery-slope argument eventually came down to finding a counter-example to a proposition.  More generally, the sheer excessive negativity of Hitler and the Nazis, the fact that they negate so many of the values we hold dear, makes them brilliant counter-examples to ill-thought-through generalisations.  E.g.

  • P: Dog-lovers are good people
  • Hitler was a dog-lover

or

  • P: A pure, untainted Englishwoman could never be drawn to anything but good
  • Unity Mitford fell in love with Hitler

or even

  • P: Bad people must be as evil in their manner as they are in their hearts
  • Hitler was generally considered rather charming

In a sense what I am drawing attention to is a combination of the banality of evil and the regrettable tendency of normally good people to be drawn to it.  Yes, Hitler was a monster, responsible for the deaths of tens of millions of people and architect of a systematic attempt to remove specific cultural / racial groups from the Earth, but he still loved his dog very much.  Regrettably, evil is not as simple as some would like us to believe – its proponents are not pantomime villains – and any tool that allows us to reveal the complexity beneath that simple label should be used.  So I say a resounding ‘no’ to ‘Godwin’s Law’.