Can we view 9/11 as history? A Hobsbawmian perspective.

Do you remember what you were doing at the precise moment when you heard about the 11 September 2001 terror attacks on New York and Washington? I do – and I’m not American. I’m a Kiwi. But I remember. Here in New Zealand, on the other side of the date-line, initial news broke in the early hours of 12 September. My wife – listening to overnight talkback radio on earpieces – heard the news and jabbed me in the ribs. ‘Wake up, a plane’s hit a building in New York.’

Thinking about tragic accidents, we got up to see whether anything was on TV. It was. And then the news got worse. Way worse. The fact that there was live coverage, here in New Zealand, underscored the scale of the tragedy as a world event.

A fireman calls for 10 more colleagues amidst the ruins of the World Trade Center, September 2001. US Navy, Public Domain, via Wikimedia Commons.

That reveals the huge dimension of those events 13 years ago. A human tragedy of appalling scale that became a defining moment not just for New York, not just for the United States – but for the planet. One that helped shape the first decade of the twenty-first century for everybody in the developed world, not least because of the behaviours, attitudes and oppositions that followed, ranging from tighter security for air travellers to wars in Iraq and Afghanistan.

The time is not yet ripe to consider these events history, for they are not. But when they are – in two, three generations, when young children view 2001 much as we view 1941, a distant time of grandparents and great grandparents – how will we see the 9/11 attacks then?

The answer, to me, emerges from the way that history, for professional historians, is all about meaning – about finding the broad shapes and patterns that turn the world of the past into the world of the present. These patterns seldom match the number system we use to count the passing years.

When we look at history that way we cannot go past the work of Eric Hobsbawm, who was to my mind perhaps the greatest historian of the twentieth century. I do not make such a statement lightly. He took the long view. The historian’s view. A view divorced from the round-number dates into which we usually divide the past, like the end of a decade or a century.

For Hobsbawm, centuries were defined by the patterns of social and economic trends. That was why he called the nineteenth century a ‘long century’, marked by its ‘age of revolution’. To him, this century began in 1789 with the revolution that ended the ancien regime in France and which began a pattern of industrial-driven social dislocation and revolt. It ended in 1914 when the ‘guns of August’ heralded the end of the old European order in its entirety. Of course the trends that led to these pivotal moments pre-dated the specific instant by decades. Nothing, historically, comes out of a vacuum. But these dates offered defining events that, for Hobsbawm, brought the wider underlying trends into a decisive and overt reality.

Distances of history: USS Arizona, 7 December 1941. In 2074, the tragedy of 9/11 will be as far removed in time as Pearl Harbor is today. How will people view it? Public domain, http://www.ibiblio.org/hyperwar/OnlineLibrary/photos/images/ac00001/ac05904.jpg

Following the same logic, Hobsbawm also argued that the twentieth century was ‘short’ – beginning in 1914, with that collapse of the old order and the rise, in its place, of a tripartite world in which democracy initially seemed to be losing to totalitarian fascism and communism. That struggle resolved with the victory (luckily) of democracy – an event Hobsbawm argued was marked by the collapse of the Soviet Union, the revolutionary state that had emerged from the First World War.

The decisive date, for Hobsbawm, was the formal end of the Cold War in 1992. By this reasoning the twenty-first century began in 1993. But I wonder. We cannot know our future – cannot say whether there will be any long and over-arching socio-political pattern to the twenty-first century. But so far, one does seem to be emerging, for the early part of it at least.

Like Hobsbawm’s long and short centuries, this shape has been defined by trends bubbling away well before the pivotal moment. They were evident for quite some time through the late twentieth century, partially masked by the over-powering priorities of the Cold War. But if we want to point, in Hobsbawmian fashion, to a defining moment – a point where those underlying issues suddenly became present and urgent in everyday consciousness – it has to be 9/11. Sure, that leaves us with a 9-year interregnum after the end of the twentieth century – but, as I say, history at the thematic level never does tidily match up with numeric dates or round numbers.

And will future historians look back on the twenty-first as a long century? A short one? That’s up to us, really – meaning, everybody on the planet – and the choices we make.

Copyright © Matthew Wright 2014

Why celebrity phone hacking is really everyone’s problem

Until last week, I’d never heard of Jennifer Lawrence, still less known that she apparently had salacious selfies on her phone’s cloud account. Now, it seems, everybody in the world has the news, and apparently the stolen pictures will be made into an art exhibition. Do I care (just checking the care-o-meter here)? No.

But what I do care about is the fact that the celebrity selfie hacking scandal is everyone’s problem.

My worry has got nothing to do with the way the public debate has been sidetracked by red-herring arguments, all flowing from the cult of celebrity that began, in the modern sense, as a Hollywood marketing device during the second decade of the twentieth century. That’s why these pictures get targeted. Hey – get a life. Celebrity Bits are the same as Everybody Else’s Bits. Get over it. Celebrities are also entitled to their privacy and property, just like everybody else.

No – the problem is the principle of data security. Everybody’s data security. It’s an arms race, on-line and off. People store all sorts of things on electronic media these days. Medical records, bank account details, passwords. Some of it ends up in the cloud. Some doesn’t, but even home computers may not be safe. Hacking goes on all the time, often looking for your bank account. It’s a sad indictment of human nature that those perpetrating this vandalism look on it as an assertion of superiority. I believe the term is ‘owned’, spelt ‘pwned’.

Artwork by Plognark http://www.plognark.com/ Creative Commons license

It’s not going to be resolved by passing laws or codes of conduct. Some immoral asshole out there, somewhere, will spoil the party.

All we can do is be vigilant. Various services are introducing two-step authentication, in which you can’t just log on by password, you have to add a code that’s sent to your phone.
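
As an aside for the technically curious, many services generate that second code on the phone itself rather than texting it, using the time-based one-time password scheme (TOTP, RFC 6238). Here’s a minimal sketch in Python using only the standard library – the base32 secret is a made-up example, and the 30-second window and six digits are just common defaults, not any particular service’s settings:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        """Time-based one-time password (RFC 6238), built on HMAC-SHA1."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval            # 30-second time step
        msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Illustrative only: a made-up base32 secret shared between the service and the phone app.
    print(totp("JBSWY3DPEHPK3PXP"))

The service holds the same secret, computes the same code, and typically accepts a step or two either side to allow for clock drift – which is why the code changes every half-minute or so.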

You still need a strong password. I am amazed that the most popular password is – uh – ‘password’, pronounced ‘Yes, I WANT you to steal my stuff’. Other stupid passwords include ‘123456’, the names of pop-culture icons (‘HarryPotter’) or something published elsewhere, like your pet’s name.

But even a password that can’t be associated with you has to meet certain criteria. The reason is mathematical – specifically, combinatorial: the number of possible passwords grows exponentially with both the length and the size of the character set. In point of fact, the math of password security gets complex, because any human-generated password won’t be truly random – and terms such as ‘entropy’ enter the mix when figuring crackability. But at the end of the day, the more characters the better, and the more variables per character the better. Check this out (the arithmetic is sketched in code after the list):

  1. Any English word. There are around 1,000,000 unique words in English (including ‘callipygian’) but that’s not many for a hack-bot looking for word matches. Your account can be cracked in less than a minute.
  2. Mis-spelt English word. Doesn’t raise the odds. Hackers expect mis-spellings or number substitutions.
  3. Eight truly random lower case letters. Better. There are 208,827,064,576 combinations of the 26-letter alpha set in lower case.
  4. Eight truly random lower and upper case letters. Even better. These produce 53,459,728,531,456 potential passwords.
  5. Eight truly random keystrokes chosen from the entire available set. Best. There are 645,753,531,245,761 possible passwords.

If you use 10 truly random keystrokes, you end up with 3,255,243,551,009,881,201 possible combinations. But even that is still crackable, given time – so the other step is to change the password. Often.
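
For anyone who wants to check the arithmetic, here’s a minimal Python sketch. It simply computes alphabet_size ** length for each case; the ‘71’ is my own inference about the keyboard set behind the keystroke figures above (71^8 and 71^10 reproduce them exactly), and the billion-guesses-per-second attacker is a purely illustrative assumption.

    # Search space for truly random passwords: alphabet_size ** length.
    # The 10**9 guesses-per-second attacker is a purely illustrative assumption.
    GUESSES_PER_SECOND = 10**9

    def search_space(alphabet_size: int, length: int) -> int:
        """Number of possible passwords of a given length over a given alphabet."""
        return alphabet_size ** length

    cases = [
        ("8 random lower-case letters", 26, 8),         # 208,827,064,576
        ("8 random mixed-case letters", 52, 8),         # 53,459,728,531,456
        ("8 random keystrokes (71-char set)", 71, 8),   # 645,753,531,245,761
        ("10 random keystrokes (71-char set)", 71, 10), # 3,255,243,551,009,881,201
    ]

    # A dictionary attack on any single English word (~1,000,000 candidates) is trivial.
    print(f"Any English word: ~{1_000_000 / GUESSES_PER_SECOND:.3f} seconds to exhaust")

    for label, alphabet, length in cases:
        n = search_space(alphabet, length)
        seconds = n / GUESSES_PER_SECOND   # worst case: try every combination
        print(f"{label}: {n:,} combinations, ~{seconds:,.0f} seconds to exhaust")

These figures only cover brute-force search over genuinely random passwords; as noted above, human-chosen passwords carry far less entropy than their length suggests, which is why changing them regularly still matters.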

Make it a habit. And – just out of interest, seeing as we’re talking about true randomness, does anybody know what the term ‘one time pad’ means?

Copyright © Matthew Wright 2014

The real truth of the First World War

There has been a growing consensus among historians in recent years that the First and Second World Wars were not separate events. They were two acts in a 31-year drama that began in 1914.

Ration party of the Royal Irish Rifles on the Somme, probably 1 July 1916. Public domain, via Wikimedia Commons.

Indeed, there are reasons to argue that this war was followed by a third act, set up by the collapse of the old order in the First World War – the rise of Communism, which was not resolved by the Second World War and led to the Cold War. That did not end until 1992. These events defined the society, politics and economics of the twentieth century; and it is for these reasons that Eric Hobsbawm has argued that this century – in those terms – was a ‘short’ century, beginning in 1914 and ending in 1992.

I’m inclined to agree. As far as the two World Wars are concerned there is little doubt about the integration between them. Briefly the argument is this. In 1918, the German state collapsed, but the advancing Allies were still – certainly by George Patton’s estimate – a few weeks off being able to beat the German army. The result was that Germany essentially retained an unbroken field army. This was dispersed by Versailles, but the soldiers, brought up like the rest of Germany on the notion of ‘Reich’, felt cheated. Into the breach leaped a shell-shocked veteran of the Ypres front, sporting the Charlie Chaplin moustache he’d devised for gas-mask wear.

SMS Baden, one of the last of Germany’s First World War super-dreadnoughts. Public domain.

It wasn’t difficult for Hitler to whip up support based on the popular sense of injustice and denied destiny, drawing power from disaffected former soldiers who formed a significant demographic group. It was also not hard for him to find a sub-culture within Germany that could be blamed. All of this was wrapped in the guise of a ‘new order’, but in reality it was nothing of the kind – the Nazis, in short, did not come out of a vacuum; they merely re-framed an idea that already existed. This connection was realised by the British as the Second World War came to an end and they wondered how to avoid repeating the mistakes of 1919. As early as 1943, Sir Robert Vansittart argued that Hitler was merely a symptom. The deeper problem was that Versailles hadn’t broken eighty-odd years’ worth of Bismarckian ‘Reich’ mentality.

This perspective demands a different view of the First World War. So far, non-military historians in New Zealand – working in ignorance of the military realities – have simply added an intellectual layer to the cliché of the First World War as a psychologically inexplicable void into which the rational world fell as a result of mechanistic international systems, the pig-headedness of stupid governments and the incompetence of Chateau-bound general officers. There has even been an attempt by one New Zealand historian to re-cast Britain and the Allies as the aggressive, evil villains of the piece. Military historians have not been seduced by such fantasies, but have still been captured by a pervasive framework of sadness, remembrance and sacrifice. Into this, again for New Zealand, have been stirred mythologies of nationalism, of the ‘birth’ of today’s nation on the shores of Gallipoli in 1915. The result of this heady mix has been a narrow orthodoxy and an equally narrow exploration of events in terms of that orthodoxy.

Landing on D-Day, 6 June 1944. Photo by Chief Photographer’s Mate (CPHOM) Robert F. Sargent, U.S. Coast Guard. Public Domain.

I question this framework, not least because of the argument that the Second World War was a specific outcome of the First. The implication of the two being different aspects of a single struggle is clear; there are questions yet to be investigated about the ‘why’ of the First World War. The issue is the extent to which the ‘Reich’ mentality was perceived as a genuine threat in 1914 when Britain (in particular) debated whether to enter the conflict, and whether and how that answer drove the Allies to persist even after available offence (infantry) had proven itself inadequate against the defence (wire, machine guns and trenches). We have to remember that fear of German imperialism had already driven Europe’s alliance structures from the 1880s. And, for New Zealand, the question is how did that intersect with – and potentially drive – the sense of pro-British imperialism that did so much to define our mind-set in the generation before 1914?

These sorts of questions are beginning to be asked in British historical circles now. I keep being invited to symposia at various universities over there, where these matters are being discussed. Unfortunately we are a long way off being able to properly pose such queries in New Zealand. Yet, realistically, that interpretation needs to be explored. Perhaps I should do it. What do you think?

Copyright © Matthew Wright 2014

Is high-tech REALLY indistinguishable from magic?

A fellow blogger asked for help the other week. What was the specific source – by page reference – to Arthur C. Clarke’s ‘Third Law’?

It was first published in his book Profiles of the Future – which was variously issued from 1958. My edition is the revised version published by Pan Books of London in 1973. And on p. 39 of that edition, as a footnote, Clarke outlines the Law: ‘Any sufficiently advanced technology is indistinguishable from magic’.

It was a throw-away point in a footnote to a lengthy chapter discussing the way conservative twentieth century science usually fails to admit to progress.

Fair point in that context, but I couldn’t help thinking of Europe’s history of exploration around the globe, which was built around wowing locals with techno-trickery and then bashing them with it. Toledo steel was one of several ways in which Hernán Cortés and subsequent marauders knocked over Central and South American kingdoms in the sixteenth century.

It was a disparity that became extreme as Europe’s technical base improved, leading – ultimately – to the appalling massacre in 1893 of spear-wielding Matabele warriors by a handful of Cecil Rhodes’ Maxim gunners. ‘Whatever happens/we have got/ the Maxim Gun/ and they have not,’ Hilaire Belloc intoned in the wake of the battle.

The conceit of the age – echoed in Clarke’s Law – was that the indigenous peoples who saw European technology looked on it as magic. And it’s true to the extent that, if we lack any concept of the principle behind something, it may as well be magic. The notion of TV, for instance, was absolutely magical before the discovery of electromagnetic transmission; and even a top scientist from (let’s say) the late seventeenth century would have little chance of comprehending one, if they saw it. But I bet that if the principle was explained, they’d soon realise it wasn’t magic at all – just following a principle not yet known.

The same’s true, I think, of the way Europe’s technology was received across the world as it spread during their age of expansion. I think the language of magic was sometimes used by indigenous peoples when the British demonstrated – usually – firearms. But that didn’t betray a lack of understanding of the foreign technical concepts. The actual problem was that they didn’t initially have the vocabulary. The best evidence I have for this is in the collision between industrialising Britain and Maori in New Zealand, during the early nineteenth century.

Maori picked up British industrial products very quickly from the 1810s, including armaments. These were acculturated – drawn into Maori systems of tikanga (culture), in part by co-opting words already in use. The musket became the ‘pu’, for instance – a word for a blowpipe. But Maori very well understood the principles – certainly going out of their way to learn about armaments and warfare. Some rangatira (chiefs) even made the journey to London to learn more, among them Hongi Hika, who visited the arsenal at Woolwich in 1821 and learned of musket-age warfare and defences; and Te Rauparaha, who was taught about trench warfare in Sydney in 1830.

For ‘contact-age’ Maori, British industrial technology was not ‘magic’ at all – it was something to be investigated, understood and co-opted for use in New Zealand. And I suspect that’s how the same technology was also received by indigenous peoples elsewhere.

I don’t know whether Clarke thought of it that way; I suspect his targets, more particularly, were fuddy-duddies in his own establishment who wouldn’t accept that there might be new scientific principles.

Is there a technology you regard as potentially ‘magical’ to others?

Copyright © Matthew Wright 2014


Swearing and cussing? Sirrah! It’s a lot of craven murrain

The other week the Prime Minister of New Zealand used a word in public that literally means the ordure of a male cow. The colloquial meaning the PM deployed it for was ‘rubbish’.

‘Thou dankish unchin-snouted malt-worm!’ William Shakespeare, the ‘Flower’ portrait c1820-1840, public domain via Wikimedia Commons.

Oooh, naughty. Or is it? Way back in 1970, the same word was publicly used by Germaine Greer when she visited New Zealand. Then, police issued an arrest warrant. This time? The PM is in the middle of an election campaign in which everything he says or does will win or lose voters – and nobody batted an eye.

But of course. In New Zealand, today’s generation don’t regard this term as particularly offensive. I’ve seen the same word used in book titles; in the US it was the title of a Penn and Teller series; and so on. But that’s swearing. Words come and go. If they didn’t, we’d all swear like that impious swiver, Will Shakespeare. Zounds! (God’s Wounds). The big word of his day was fie. But wait, there’s more. Not satisfied with the general vocabulary – which included some of the Anglo-Saxon we still use – the immortal bard is usually credited with coining around 1700 new words, many of them boisterously intended. You can check some of them out for yourself – here’s a Shakespeare insult generator.

What changes is the degree of offence society considers the word causes to ‘polite’ ears. That’s how Benjamin Tabart was able to use Shakespeare’s vilest word in his 1807 children’s tale ‘Jack and the Beanstalk’. Of course, by that time the hot potato word was ‘damn’, so offensive in polite society it was soon censored to d—d. That became a swear word too – ‘dashed’.

As always, older swear words that now seem acceptable aren’t directed ‘at’ anything. They’re abstract intensifiers that have lost connection with their original meaning. That’s different from offensive words intended to demean others’ behaviours, beliefs or cultures, which never become acceptable, any time. The fact that new terms of this latter kind keep turning up says quite a bit about the unpleasant side of the human condition.

But abstract intensifiers, directed at revealing one’s response to an ordinary event – like stepping in dog poo – are something else, and the funny thing is that any word will do, providing it’s understood. Sci-fi authors often coin new ones as devices for reinforcing the difference between our society and their future one. In Battlestar Galactica (2003-2009) the word was ‘frack’. An obvious sound-alike, but it worked well anyway. Or there’s Larry Niven’s Ringworld-series ‘futz’, which to me sounded like a mashup with putz. But you can’t fault the logic – the ‘different but not TOO different’ principle demanded of accessible SF.

I’ve only seen one place where a different word emerged. It was in Harry Harrison’s Bill the Galactic Hero. The forbidden term, the deeply offensive word of his galactic future, repeatedly used by his ‘starship troopers’? Bowb. It echoed 1930s slang, but Harrison made it the verboten word and used it with stunning effect – a multi-purpose obscene noun, verb and adjective with which readers instantly identified because of the context. ‘What’s this, bowb your buddy week?’ a trooper demands as his power suit fails and nobody stops him drowning. ‘It’s always bowb your buddy week’, the gunnery corporal tells the troops as the man sinks.

Bowb. Conveying the intensity of personal emotional response to the abstract without the current-day offence.  And that, of course, is the essence of writing – transmitting the intended emotion to the reader. Way cleverer than using existing swear words.

Trouble is, when I use bowb in conversations, people look at me funny and think I’m a gleeking, beef-witted dewberry.

Copyright © Matthew Wright 2014

 

Fringe-thinking fruit-loops or just misunderstood?

I am often bemused at the way some people seem to think. Particularly those who advocate what we might call ‘fringe’ theories.

I took this photo of the Moeraki boulders in 2007. The fact that they are not perfect spheres is evident.

Moeraki boulders, north of Dunedin. It’s been argued that they are weights used by Chinese sailors to raise sail. As I know their natural geological origin, that’s not a theory I believe myself, but hey…

These are often portrayed in pseudo-scientific terms; there is a hypothesis. Then comes the apparent basis for the hypothesis, frequently explicitly titled ‘the evidence’ or ‘the facts’. And finally, the fringe thinker tells us that this evidence therefore proves the proposal. QED.

All of which sounds suitably watertight, except that – every time – the connection between the hypothesis and the evidence offered to support it is non-existent by actual scientific measure. Or the evidence is presented without proper context.

Some years ago I was asked to review a book which hypothesised that a Chinese civilisation had existed in New Zealand before what they called ‘Maori’ arrived. (I think they mean ‘Polynesians’, but hey…)

This Chinese hypothesis stood against orthodox archaeology, which discredited the notion of a ‘pre-Maori’ settlement as early as 1923 and has since shown that New Zealand was settled by Polynesians around 1280 AD. They were the first humans ever to walk this land. Their Polynesian settler culture later developed into a distinct form whose people called themselves Maori. In other words, the Maori never ‘arrived’ – they were indigenous to New Zealand.

This picture has been built from a multi-disciplinary approach: archaeology, linguistics, genetic analysis and the available oral record. Data from all these different forms of scholarship fits together. It is also consistent with the wider picture of how the South Pacific was settled, including the places the Polynesian settlers came from.

Nonetheless, that didn’t stop someone touring the South Island looking for ‘facts’ to ‘prove’ that a Chinese civilisation had been thriving here before they were (inevitably) conquered by arriving Maori. This ‘evidence’ was packed off to the Rafter Radiation Laboratory in Gracefield, Lower Hutt, for carbon dating. And sure enough, it was of suitable age. Proof, of course, that the hypothesis had been ‘scientifically’ proven. Aha! QED.

Except, of course, it wasn’t proof at all. Like any good journalist I rang the head of the lab and discovered that they’d been given some bagged samples of debris, which they were asked to test. They did, and provided the answer without comment. The problem was that the material had been provided without context. This meant the results were scientifically meaningless.

I’m contemplating writing a book myself on the pseudo-science phenomenon with its hilarious syllogisms and wonderful exploration of every logical fallacy so far discovered. How do these crazy ideas get such traction? Why do they seem to appeal more than the obvious science?

Would anybody be interested if I wrote something on this whole intriguing phenomenon?

Copyright © Matthew Wright 2014


Lamenting the sadness of war, and of New Zealand’s war historians

Flags are at half mast today across New Zealand to mark the hundredth anniversary of the start of the First World War.

A shell bursting near New Zealand troops, Bailleul, World War I. Royal New Zealand Returned and Services’ Association: New Zealand official negatives, World War 1914-1918. Ref: 1/2-013399-G. Alexander Turnbull Library, Wellington, New Zealand. http://natlib.govt.nz/records/23121937

Over 100,000 young Kiwi men were drawn into that conflict over a four-year span. Of these, more than 58,000 became casualties, 16,500 of them dead. For a country of just on a million souls it was a heart-wrenching tragedy.

New Zealand, of course, was far from alone.

That human cost was multiplied by the fact that survivors came back damaged; this was the war that introduced ‘shell shock’ – post-traumatic stress disorder – to the world on the largest scale. During the 1920s, broken men tried to pick up the shattered threads of their lives as best they could. There was often little help. It was an experience wonderfully described in J L Carr’s A Month In The Country.

Today the overwhelming impression of the war – certainly the way that New Zealand historiography and popular recollection has been shaped – is of unrelenting tragedy. A senseless war of senseless slaughter in which stupid generals didn’t know what to do, other than send innocent men walking very slowly towards machine guns.

Call it the ‘Blackadder’ interpretation.

World War 1 New Zealand machine gunners using a captured German position, Puisieux, France. Royal New Zealand Returned and Services’ Association: New Zealand official negatives, World War 1914-1918. Ref: 1/2-013511-G. Alexander Turnbull Library, Wellington, New Zealand. http://natlib.govt.nz/records/22304585

This has been the overwhelming tenor of the key interpretations of the war, shaping even academic history. From the military viewpoint it’s not true. Despite the appalling casualty lists and human cost, the tactical reality on the ground was a good deal more sophisticated than historians usually allow. And there is a good deal else that has yet to be discussed – lost, until now, amidst the overwhelming power of human sorrow. The war’s beginning has been portrayed, narrative-style, as a mechanistic result of nationalist pride and inflexible European alliance systems. In fact, there were choices; but the underlying motives for the decision to fight have barely been discussed by historians.  Could it be that, from the viewpoint of British and French politicians in 1914, it was necessary – even essential – to make a stand? A lot was said at the time about German ‘frightfulness’. Was this propaganda or a fair assessment? How far can the underlying trends and issues be validly traced?

A New Zealand 18 pound gun in action at Beaussart, France, during World War I. Royal New Zealand Returned and Services’ Association: New Zealand official negatives, World War 1914-1918. Ref: 1/2-013221-G. Alexander Turnbull Library, Wellington, New Zealand. http://natlib.govt.nz/records/22371427

As yet, these debates have barely begun. They are being raised in Britain – I keep getting invited to contribute papers to symposia and conferences there, via the Royal Historical Society of which I am a Fellow.

Whether I can do anything about exploring the same ideas in New Zealand is moot. I write and publish on my own merits. Alas, New Zealand’s local public- and university-funded military historical crowd – all of whom prosper on full-time salaries at my expense as taxpayer – have rewarded my independent commercial work in their field by treating me like a war criminal. I know these strangers only through their public denials of the worth of my scholarship and of the commercial work I do to complement their taxpayer-funded activities. They do not respond to my correspondence, I cannot get added to mailing lists, and I have been unable to join their symposia even as audience – I only found out about the latest by accident. All from strangers who have felt unable to approach me directly in the first instance, but have been happy enough to go behind my back to attack me in public and then cower behind silence when approached over their conduct. However, I’ve been told their status is such that I have no grounds to criticise them.

To me the study of history – as with all human endeavour – is all about positively working together with good will, generous spirit and kindness. Grow the pie, and everybody benefits. But I appear to be a lone voice. And the experience makes me ask why I am paying the salaries and travel expenses of this little group, and subsidising their publications, through my taxes. There is a LOT of public money sloshing around the First World War centenary in New Zealand. Should it all accrue to a few public servants and academics who flourish at taxpayer expense and whose response to commercial authors seeking to work with them is to publicly attack and exclude the interloper?

The practical outcome is that there seems little chance of my getting support for what I want to do. I’d like to look at New Zealand’s First World War from a different perspective – not to dislodge the ‘Blackadder’ view, but to add to it. There are many questions, including issues to do with New Zealand’s national identity – something I touched on, briefly, in my book Shattered Glory (Penguin, 2010). But I can’t see myself being in a position to take that further.

But enough about the Schrecklichkeit of New Zealand’s military-historical academics. Instead, let’s take a moment to pause and think about the realities of the world a century ago – a world in which, for a few brief weeks at least, the notion of a new war seemed somehow adventurous. It would, most of those who flocked to enlist were certain, be over by Christmas 1914.

Of course it wasn’t. As always, the enthusiastic young men, the hopeful patriots, the eager populations of 1914 did not know their future.

More on this soon.

Copyright © Matthew Wright 2014