Monday, 30 April 2007

The Personal Impact of Postmodernism

Postmodernism sounds like such an academic sort of issue, doesn't it? Why should I care about such an abstract idea? Why should I even bother to understand what it is, when it's so -- well, ivory-towerish?

Unfortunately, postmodernism has long since escaped from the ivory towers of academia. It is well and truly at large in the world. In fact, postmodernism has probably had an impact on you even today, so pervasive is its influence.

In my last post, I talked a little about one of the impacts of postmodernism: the destruction of context. With all context stripped away, we are deprived of any meaningful way to make sensible and wise decisions. This doesn't bother postmoderns, who don't believe in such things (everything is right for you, after all), but it has serious consequences for those of us who live in the real world.

But there is another personal impact of postmodernism. I know, because I have been whacked by this. It's taken me a long time to write about this, because it was, frankly, so painful. But the time has come, so let's get going.

Imagine you had a small group that, every week, got together to study and learn and share together. Over the years, sharing in each other's trials and triumphs, and learning about the world and reality together, you grow closer and closer. Eventually your group is so comfortable with one another that you really are a group of friends. And, as with all friends, you become casual and relaxed with one another, welcoming each other into your homes without any great formality, and sharing together with absolute trust.

Now imagine that, into this group, comes a new person (or two). Obviously the group has to make concessions for this new person. People are on their best behavior, trying not to be too casual or over-relaxed. Still, the nature of the group, its interests and character, won't change just because someone new comes along. The new person has a few splutters, but then seems to fit in, and the group flexes appropriately to make them feel welcome and as much a part as possible.

Then something happens. The new member finds an excuse to feel excluded from the group; it doesn't matter why or how. A power struggle ensues, with the new member threatening to drag the group leaders before the leadership they are responsible to. The new member demands that the group change its behavior to his or her preferences, or face the consequences. The new member (well, now a thoroughly ex-member) even makes legal threats. Eventually, at the request of the group's leaders, the leaders responsible for all groups get the two parties together: group leaders and new ex-member. The ex-member makes bizarre claims, demands, and outrageous racial slurs. When the mediating leadership is unimpressed by this, and the group leadership unable to comply with anything so unreasonable, the ex-member leaves the larger community and moves on to who knows where. Maybe the cycle repeats.

What is going on here? Well, what has happened is that one of the results of postmodernism has reared its ugly head. Postmodernism denies absolutes. There is no objective truth for the postmodernist. The new member was heavily influenced by postmodernism, and as a result, was open to all sorts of strange influences (since there's no way to tell strange from sensible when context is stripped away). He or she was offended by the group's robust camaraderie, and doubly offended by the group's agreement on a range of issues that they had carefully studied together, and come to agreement on.

Now, to a postmodernist, there is only one way to achieve agreement. Argument, debate, and careful instruction are not possible when there are no absolutes. No, only propaganda and force can bring about agreement. The group's consensus was clearly not the result of force, so the new member concluded that it was the result of propaganda, or brainwashing (or the cultic force of personality). When attempting to undermine this failed, he or she resorted to the only alternative open to a postmodernist: force. He or she attempted to use the threat of authority (initially ecclesiastical, then legal) to force agreement. Attempts by the group's leaders to reason about this failed utterly, almost as if the two parties spoke different languages. And, in a sense, they did. There was no common ground: in postmodernism there can never be any common ground. Eventually, the only resolution was to part ways.

This tragic little drama will be played out in increasing numbers in the coming decades, as postmodernism sinks deeper into the modern psyche. With no hope of common ground, more and more people will resort to force when they encounter what they can only interpret as coercion.

And more and more innocent people will feel the terrible, personal impact of postmodernism.

Sunday, 22 April 2007

Crisis of Context

Without context there is no meaning. And yet context is something our society has almost rid itself of.

Why is context so important, and why have we fled from it so?


Context is a very broad term, and it simply means the surroundings of something: the setting within which we can understand it.

More formally, you could define context as a set of coherent concepts, some of which have already been determined as having strong correspondence to reality.

Context is important in communication (there is plenty of discussion of context in computer UIs, for example), the arts (see this amazing story in the Washington Post, one of the most powerful examples of how important context is that I've seen -- it's a great read, too), science, society, etc.

Loss of Context

Yet our society has lost so much respect for context. How is that evidenced? Well, think about watching TV. As you watch it (unless it's something like the ABC in Australia, or the BBC in the UK), the program you're watching will be interrupted every few minutes by 30-second segments talking about something (usually products or services) with no relationship whatsoever to what you are watching. These "advertisements" are completely without context, and yet we have grown so familiar with them that we can provide our own context for them.

Another example is in the "sound bites" that make up TV "reporting". We expect, even demand, that everything that someone has said be boiled down to a ten-second "sound bite". But such brief comments can never provide us with sufficient context to really understand anything.

Even Reality TV pretends to be able to reduce people's lives to an hour a week, stripping their actions of all context.

Finally, we expect to learn something of significance in the tiny half-hour or hour long segments we allow for learning or training. We try to create "quality" time with our kids by packing lots of context-free activity into the shortest time possible, etc. etc.

You may think it ironic that someone with a blog called "Eclectica" is writing about lack of context, but pay attention to both my first post and the second part of the subtitle of the blog. These are deliberate and careful attempts to give some context to what I'm saying here.

Why Context was Lost

So why did we move so aggressively away from contextuality in our society? Why did we value it so poorly that we thought we could discard it in our headlong rush into busyness? The answer's simple, and the more formal definition above gives it away.

Context is only necessary if you value the correspondence property of truth statements. Or, to put it more simply, if you don't care whether things are really real or not, but just care about whether they "work for you", then context becomes unimportant.

The whole point of context is to provide us with a sufficient surrounding network of truths in order to be able to fit the new, potential truth we are learning into our network of beliefs. If our network of beliefs is very loose (ie. we don't care much about whether the beliefs conflict with one another) or disconnected from reality (ie. we don't bother testing our beliefs against reality), then context really isn't much use to us.

And this is precisely the direction that society moved in over the last century. Nowadays this movement is called postmodernism. As is hinted at by the name, postmodernism is a reaction to modernism. Modernism values coherence and correspondence of beliefs, but refuses to accept the existence of anything beyond nature. (I've mentioned it before, but called it by a different name, naturalism or scientism, both of which are variants of modernism.) The problem with modernism is that it gives absolutely no foundation for reason, logic, meaning, or any sort of real thought. Postmodernism recognises this and, rather than invoking supernaturalism as a foundation for these things (as Christianity does), or throwing away reason and embracing supernaturalism (as New Age does), postmodernism continues to embrace naturalism (nothing exists apart from nature) and discards reason and logic.

Without reason and logic, context is worthless. In fact, postmodernism denies the significance of context, claiming that the interpretation of truths is not contextual, but rather purely up to the individual (known as deconstructionism). While full-blown postmodernism is not accepted by most people, it has had a huge influence on our society, and the devaluing of context is one of those influences.

Context is on the way back

The good news is that context is on the way back. An easy way to see this is the rise in popularity of the "serial" programme on TV. Back in the 90s, Babylon 5 revolutionised TV SciFi with its five-year story, with complex, pre-planned character arcs for all the major characters, and its multi-layered mysteries and story. Now, more than ten years later, this trend has truly penetrated the mainstream, with shows like 24, Lost, Prison Break, and the like demanding the viewer pay attention over an extended period of time. Each episode has so much context that the "Previously on ..." teasers are simply not enough to understand the story.

The arts are well ahead of the rest of society at the moment, but there is hope that things like the news will rediscover the value of context.

Context and Christianity

What does this have to do with Christianity? Well, Christianity was not immune to the influence of postmodernism. Over the years, Christians became so ignorant of the importance of context that they fell into all sorts of error by failing to take context into account.

Greg Koukl has an excellent resource called "Never Read a Bible Verse", which explains what should not need explaining: you need to understand the context of a verse in order to understand the verse.

Context is crucial to understanding the Bible, and it's also crucial to explaining it to others. We need to insist that people pay attention to the context, even when they wish to ignore it, and we need to be prepared to show why it's so important.

So, what examples of egregious context-ignorance can you think of? What ways have you found useful for explaining the importance of context? This is an important issue, and I'll be talking about it more in the future.

Sunday, 15 April 2007

Accuracy vs Precision

One of the most useful things I learnt in first year Engineering (I think it was in my Physics class) was the dual concept of Accuracy and Precision. The way that scientists and engineers think of these terms is a principle that is incredibly useful, and yet one that is not immediately obvious.

Since relatively few people learn this, and even those who do often don't extend the lessons to other areas of their thought, these concepts tend to languish generally unused. Let's see if I can help put an end to that unjustified neglect, shall we?


Let's use the simple scientific context to explain the terms:

  • Accuracy is how close a measurement is to the true value
  • Precision is how finely the measurement is resolved -- how small the unit of measurement is (or, how many decimal places it has)

An example will help. Let's say there are two scales to measure weight: Scale A is accurate but imprecise, Scale B is precise but inaccurate.

Let's say we are trying to measure a lump of metal which has a true weight of 10.05 kg.

Now let's say that Scale A has a precision of 0.1 kg (ie. it reads in kg to one decimal place) and an accuracy of +/- 0.005 kg (ie. its measurement is guaranteed to be no less than 0.005 kg below the true weight and no more than 0.005 kg above it).

Scale A will therefore measure the weight as between 10.045 and 10.055 kg, and will round this to either 10.0 or 10.1. We can see that we can only really speak of the scale as having an effective accuracy of +/- 0.05 kg, due to the limited precision.

Now let's say Scale B has a precision of 0.001 kg, and an accuracy of +/- 0.5 kg.

The lump of metal will be measured as anywhere between 9.55 and 10.55 kg, and Scale B may display any value in that range. The decimal places on Scale B are completely meaningless, so a measurement of 9.557 kg, despite seeming more authoritative than 10.0 kg, is actually less accurate.

Thus, Scale B, with more decimal places on its readout, and which the uninitiated would have taken as being more useful to a scientist trying to measure weight, turns out to be much less useful than Scale A.
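The two scales above can be sketched as a quick simulation. This is a toy model of my own, not real instrumentation: the error model (a uniform random error within each scale's accuracy band, then rounding to the scale's precision) is an assumption made purely for illustration.

```python
import random

random.seed(0)  # reproducible runs
TRUE_WEIGHT = 10.05  # kg, the lump of metal's true weight

def scale_a(true_weight):
    """Accurate (+/- 0.005 kg) but imprecise (reads to 0.1 kg)."""
    reading = true_weight + random.uniform(-0.005, 0.005)
    return round(reading, 1)  # precision: one decimal place

def scale_b(true_weight):
    """Precise (reads to 0.001 kg) but inaccurate (+/- 0.5 kg)."""
    reading = true_weight + random.uniform(-0.5, 0.5)
    return round(reading, 3)  # precision: three decimal places

a_readings = [scale_a(TRUE_WEIGHT) for _ in range(1000)]
b_readings = [scale_b(TRUE_WEIGHT) for _ in range(1000)]

# Worst-case error actually observed on each scale:
a_err = max(abs(r - TRUE_WEIGHT) for r in a_readings)
b_err = max(abs(r - TRUE_WEIGHT) for r in b_readings)
print(f"Scale A worst error: {a_err:.3f} kg")  # ~0.05 kg, limited by precision
print(f"Scale B worst error: {b_err:.3f} kg")  # approaches 0.5 kg
```

Scale A's error is pinned at about 0.05 kg by its coarse readout, while Scale B's three "authoritative" decimal places do nothing to rescue it from errors ten times larger.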

Application Outside Simple Measurement

This principle is easily applied outside simple measurement. For example, statistics are often quoted to an extraordinary precision (40.5% of so-and-so do such-and-such), and people tend to incorrectly apply them as if they spoke accurately about each individual so-and-so, even when the standard deviation is huge -- meaning that the behaviour of very few individual so-and-sos is anywhere near that precise-sounding 40.5% figure.
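The statistics point can be seen in a toy simulation. The 40.5% figure and the spread are made-up numbers, purely for illustration of how a precisely quoted mean can describe almost no individual case:

```python
import random
import statistics

random.seed(1)
# Hypothetical survey: the percentage of time each of 1000 people
# does such-and-such, clipped to the valid 0-100% range.
rates = [min(max(random.gauss(40.5, 30.0), 0.0), 100.0) for _ in range(1000)]

mean = statistics.mean(rates)    # the figure that gets quoted, very precisely
stdev = statistics.stdev(rates)  # the huge spread that usually goes unquoted
near_mean = sum(1 for r in rates if abs(r - 40.5) <= 5) / len(rates)

print(f"mean  = {mean:.1f}%")   # precise-looking headline number
print(f"stdev = {stdev:.1f}%")  # large: individuals vary wildly
print(f"within 5 points of 40.5%: {near_mean:.0%}")  # only a small minority
```

With a standard deviation this large, only a small fraction of individuals sit anywhere near the quoted mean -- the statistic is precise, but applying it to any particular so-and-so is inaccurate.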

Radioactive dating techniques are an obvious area where high precision and low accuracy is prevalent -- radioactive dates often conflict with one another on the order of millions of years, so clearly, no matter how precise they are (and they're very precise) they are wildly inaccurate.

Another example is confusing a wealth of documentation or discussion on a topic (say, the pre-WWII documentation on the inferiority of certain races) with the accuracy of the general opinion (ie. its closeness to the truth). There are numerous examples of this: the documentation on biological evolution (voluminous and laboriously detailed, ie. precise) vs. its accuracy (hopelessly inaccurate, with massive problems like homoplasy -- independent evolution resulting in similar physical forms -- completely ignored); documentation on Freudian psychology (huge and detailed) vs. accuracy (now almost universally condemned as hopelessly incorrect), etc., etc.

Science is rich in fields with large amounts of enormously precise, hopelessly inaccurate documentation. This is not a condemnation of science, but merely a reflection of its fallible nature.

Another area where we see this is medicine, where a doctor makes a precise, but often inaccurate, diagnosis (of course, we almost force doctors into this situation, since we are rarely happy with vague diagnoses and would generally prefer a more precise, if less accurate, diagnosis and prescription).

Other examples are advertising claims, often very precise (lose 18kg in two weeks!) but hopelessly inaccurate.

Even ideologies or ideas can be precise and detailed, but terribly inaccurate. In fact, it is often a temptation for people to extend arguments beyond their expertise, and they end up making precise, but inaccurate statements in support of a precise but inaccurate position.

Take away lesson

So, what can we take away from this?

  1. Don't confuse precision with accuracy. Precision is essentially worthless without accuracy, while accuracy is always worthwhile, and becomes more valuable still as precision increases.
  2. An accurate but imprecise position (on almost anything) is more valuable than a precise but inaccurate one.
  3. Don't ask for more precision than something or someone can accurately deliver -- you'll just be burdening yourself with misleading detail.
  4. Focus on accuracy first, precision second, and things will generally work out better.

Sunday, 8 April 2007

The Improbability of God (another fallacy)

I recently read Dawkins debating Francis Collins in Time (God vs Science, Nov 13, 2006). Now Dawkins should have made mincemeat out of Collins, who is a Theistic Evolutionist, and so very much open to attack due to his views having a serious lack of coherence. However Dawkins failed to capitalise on his opponent's weaknesses, and instead made huge gaffes like, "The problem is that this says, because something [the anthropic principle] is vastly improbable, we need a God to explain it. But that God himself would be even more improbable."

This may sound like a reasonable statement to you if you share Dawkins' assumptions. But actually thinking about this statement reveals that it is completely irrational.

Dawkins is assuming that God is like a material thing, brought about by chance, so that its complexity is improbable. But a creator God either is, or isn't. You can't say, "Well, for God to have such-and-such a characteristic would require such-and-such a probability" like you can for an amoeba or a shrew.

Why not? Because the amoeba and shrew are contingent. This is a philosophical term which means that they're not necessary. The universe would have continued existing whether or not the amoeba or shrew evolved (assuming Dawkins' worldview) or were created (from a Christian worldview). God, on the other hand, is not contingent. He is what is called in philosophy "necessary". While a creator God could exist by himself, the universe (in such a worldview) could not, since it is contingent upon God's creating it. But God is not contingent on anything -- he is the ground of all reality: he is necessary.

In Dawkins' naturalistic worldview, the material universe (the space-time continuum or whatever you want to call it; Sagan called it The Cosmos) is necessary, but any of the forms it takes (such as worms and shrews) are contingent.

It is possible to calculate the probability of contingencies, since they don't have to happen (and you can therefore compare the chances of them happening to the chances of them not happening). But it is impossible, by definition, to calculate the probability of necessary things (since they are necessary, they have to be, there is no way for them to not happen so you can't compare the chances of them not happening to the chances of them happening).

The only way, then, we can choose which thing we hold necessary (or the ground of all reality or being) is as our presuppositions or assumptions. Does this mean we can't test these assumptions or presuppositions? That we're just doomed to be randomly either right or wrong, depending on which side of the fence we land on? Not at all. If our presuppositions are valid, they will lead to a valid worldview, namely one that is coherent (internally consistent) and that corresponds to reality. This is why there are no straightforward deductive or inductive arguments for the existence or non-existence of God. However, I would argue that only the presupposition of a creator God supports a valid worldview (for a number of reasons, perhaps topics of future posts, but you can also find them addressed in various places on the web and in books).

So when Dawkins claims that the existence of God is improbable, he is doing one of two things: 1) revealing his utter lack of understanding of philosophy (dangerous for someone very publicly playing a philosopher, as he does in The God Delusion) or 2) confusing God with his creation (i.e. making God contingent, but contingent upon what?).

Either mistake is pretty fundamental, and since this is a core part of his argument, seriously weakens it.

Monday, 2 April 2007

Irrelevance of Memes

Memes are an idea that some people find very appealing. They are generally used in reference to some common, powerful belief system shared by a (usually fairly large) number of people. They are often, but not always, used in a hostile sense (e.g. to dismiss a belief as merely a meme).

Memes don't add anything that doesn't already exist in the concept of ideas or beliefs. We already know that ideas and beliefs can be passed between people, and that some are more compelling than others. Memes don't actually help us understand why this is so.

But memes aren't merely superfluous, they are fallacious.

The problem with memes is that they are supposedly like viruses, but viruses of the mind. The difficulty with this is that a virus is just a virus. It isn't a referent -- it doesn't point to anything beyond itself (it has consequences beyond itself, but it doesn't mean anything, it doesn't have semantic content).

But a meme, in its role as an idea or belief, is a referent, it does point beyond itself.

To claim that an idea or belief is popular because it's a meme, and a meme has some sort of "infectious" quality, is to ignore the fact that an idea or belief points beyond itself to some correspondence with reality (or not) -- some truth or falsity.

Elaborating on the way the meme is infectious, self-sustaining, or whatever, completely ignores the semantic content of the "meme" -- the truth or falsity of the idea or belief. So proclaiming a belief suspect because it has been labeled a meme is merely assuming that being able to describe why an idea appeals somehow speaks to its truthfulness.

Equally, saying that the meme "infected" its host at a young age is discussing the origins of the belief -- its history in that person's thinking -- not its truth or falsity. This is condemning the truthfulness of the idea because of its transmission method.

So to claim that a belief is merely a meme, and therefore to be dismissed, is either begging the question (i.e. not addressing the truth or falsity of the idea itself, and merely assuming that it is false), or committing the genetic fallacy (i.e. claiming that the origins of the idea speak to its truthfulness). These are two quite common fallacies in reasoning, and descriptions of them can be found in any decent logic text.

Given this, it is then clear that the whole idea of memes is really, for all practical purposes, irrelevant.