Is high-tech REALLY indistinguishable from magic?

A fellow blogger asked for help the other week: what was the specific source – by page reference – of Arthur C. Clarke’s ‘Third Law’?

It was first published in his book Profiles of the Future, which was issued in various editions from 1958 onward. My edition is the revised version published by Pan Books of London in 1973, and on p. 39 of that edition, in a footnote, Clarke states the Law: ‘Any sufficiently advanced technology is indistinguishable from magic’.

It was a throw-away point, made in a footnote to a lengthy chapter discussing the way conservative twentieth-century science usually failed to admit the possibility of progress.

Fair point in that context, but I couldn’t help thinking of Europe’s history of exploration around the globe, which was built around wowing locals with techno-trickery and then bashing them with it. Toledo steel was one of several ways in which Hernán Cortés and subsequent marauders knocked over South and Middle American kingdoms in the sixteenth century.

It was a disparity that became extreme as Europe’s technical base improved, leading – ultimately – to the appalling massacre in 1893 of spear-wielding Matabele warriors by a handful of Cecil Rhodes’ Maxim gunners. ‘Whatever happens/ we have got/ the Maxim Gun/ and they have not,’ Hilaire Belloc intoned in the wake of the battle.

The conceit of the age – echoed in Clarke’s Law – was that the indigenous peoples who saw European technology looked on it as magic. And it’s true to the extent that, if we lack any concept of the principle behind something, it may as well be magic. The notion of TV, for instance, was absolutely magical before the discovery of electromagnetic transmission; even a top scientist from (let’s say) the late seventeenth century would have little chance of comprehending one if they saw it. But I’d bet that, if the principle were explained, they’d soon realise it wasn’t magic at all – merely the product of a principle not yet known.

The same is true, I think, of the way Europe’s technology was received across the world during the age of expansion. The language of magic was sometimes used by indigenous peoples on seeing the British demonstrate – usually – firearms. But that didn’t betray an inability to grasp the foreign technical concepts; the actual problem was that they didn’t initially have the words for them. The best evidence I have for this is the collision between industrialising Britain and Maori in New Zealand during the early nineteenth century.

Maori picked up British industrial products very quickly from the 1810s, including armaments. These were acculturated – drawn into Maori systems of tikanga (culture), in part by co-opting words already in use: the musket became the ‘pu’, for instance, a word for a blowpipe. But Maori understood the principles very well, and went out of their way to learn about armaments and warfare. Some rangatira (chiefs) even made the journey abroad to learn more, among them Hongi Hika, who visited the arsenal at Woolwich in 1821 and learned of musket-age warfare and defences; and Te Rauparaha, who was taught about trench warfare in Sydney in 1830.

For ‘contact-age’ Maori, British industrial technology was not ‘magic’ at all – it was something to be investigated, understood and co-opted for use in New Zealand. And I suspect that’s how the same technology was also received by indigenous peoples elsewhere.

I don’t know whether Clarke thought of it that way; I suspect his targets, more particularly, were fuddy-duddies in his own establishment who wouldn’t accept that there might be new scientific principles.

Is there a technology you regard as potentially ‘magical’ to others?

Copyright © Matthew Wright 2014


Swearing and cussing? Sirrah! It’s a lot of craven murrain

The other week the Prime Minister of New Zealand used a word in public that literally means the ordure of a male cow. The colloquial meaning the PM deployed it for was ‘rubbish’.

‘Thou dankish unchin-snouted malt-worm!’ William Shakespeare, the ‘Flower’ portrait c1820-1840, public domain via Wikimedia Commons.

Oooh, naughty. Or is it? Way back in 1970, the same word was publicly used by Germaine Greer when she visited New Zealand. On that occasion, police issued an arrest warrant. This time? The PM is in the middle of an election campaign in which everything he says or does will win or lose voters – and nobody batted an eye.

But of course. In New Zealand, today’s generation doesn’t regard the term as particularly offensive. I’ve seen the same word used in book titles; in the US it was the title of a Penn & Teller series; and so on. But that’s swearing. Words come and go. If they didn’t, we’d all swear like that impious swiver, Will Shakespeare. Zounds! (‘God’s wounds’.) The big word of his day was ‘fie’. But wait, there’s more. Not satisfied with the general vocabulary – which included some of the Anglo-Saxon terms we still use – the immortal bard is usually credited with coining around 1700 new words, many of them boisterously intended. You can check some of them out for yourself with a Shakespeare insult generator.
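(If you’re curious how such generators typically work, here’s a minimal sketch, assuming the classic three-column ‘insult kit’ pattern: one adjective, one hyphenated compound adjective, one noun. The word lists below are illustrative – assembled from the insults quoted in this post, not taken from any particular generator.)

```python
import random

# Illustrative three-column word lists in the 'insult kit' style.
# These words come from the insults quoted in this post; any real
# generator would use much longer lists drawn from the plays.
ADJECTIVES = ["dankish", "gleeking", "craven", "impious", "yeasty"]
COMPOUNDS = ["unchin-snouted", "beef-witted", "onion-eyed", "flap-mouthed"]
NOUNS = ["malt-worm", "dewberry", "canker-blossom", "foot-licker"]

def shakespearean_insult() -> str:
    """Combine one word from each column, e.g. 'Thou craven beef-witted malt-worm!'"""
    return f"Thou {random.choice(ADJECTIVES)} {random.choice(COMPOUNDS)} {random.choice(NOUNS)}!"

if __name__ == "__main__":
    print(shakespearean_insult())
```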

What changes is the degree of offence society considers the word causes to ‘polite’ ears. That’s how Benjamin Tabart was able to use Shakespeare’s vilest word in his 1807 children’s tale ‘Jack and the Beanstalk’. Of course, by that time the hot-potato word was ‘damn’ – so offensive in polite society that it was soon censored to d—d. That euphemism became a swear word too: ‘dashed’.

As always, older swear words that now seem acceptable aren’t directed ‘at’ anything. They’re abstract intensifiers that have lost connection with their original meaning. That’s different from offensive words intended to demean others’ behaviours, beliefs or cultures, which never become acceptable at any time. The fact that new terms of this latter kind keep turning up says quite a bit about the unpleasant side of the human condition.

But abstract intensifiers, directed at revealing one’s response to an ordinary event – like stepping in dog poo – are something else, and the funny thing is that any word will do, provided it’s understood. Sci-fi authors often coin new ones as devices for reinforcing the difference between our society and their imagined futures. In Battlestar Galactica (2003-2009) the word was ‘frack’ – an obvious stand-in, but it worked well anyway. Or there’s Larry Niven’s Ringworld-series ‘futz’, which to me sounded like a mashup with ‘putz’. But you can’t fault the logic – the ‘different but not TOO different’ principle demanded of accessible SF.

I’ve seen only one place where a genuinely different word emerged: Harry Harrison’s Bill, the Galactic Hero. The forbidden term, the deeply offensive word of his galactic future, repeatedly used by his ‘starship troopers’? Bowb. It echoed 1930s slang, but Harrison made it the verboten word and used it to stunning effect – a multi-purpose obscene noun, verb and adjective with which readers instantly identified because of the context. ‘What’s this, bowb your buddy week?’ a trooper demands as his power suit fails and nobody stops him drowning. ‘It’s always bowb your buddy week,’ the gunnery corporal tells the troops as the man sinks.

Bowb. Conveying the intensity of personal emotional response to the abstract without the current-day offence. And that, of course, is the essence of writing – transmitting the intended emotion to the reader. Way cleverer than using existing swear words.

Trouble is, when I use bowb in conversations, people look at me funny and think I’m a gleeking, beef-witted dewberry.

Copyright © Matthew Wright 2014

Fringe-thinking fruit-loops, or just misunderstood?

I am often bemused by the way some people seem to think – particularly those who advocate what we might call ‘fringe’ theories.

Moeraki boulders, north of Dunedin – I took this photo in 2007. It’s been argued that they are weights used by Chinese sailors to raise sail. Since I know their natural geological origin, it’s not a theory I believe myself, but hey…

These theories are often presented in pseudo-scientific terms: there is a hypothesis; then comes the apparent basis for it, frequently explicitly titled ‘the evidence’ or ‘the facts’; and finally the fringe thinker tells us that this evidence proves the proposal. QED.

All of which sounds suitably watertight – except that, every time, the connection between the hypothesis and the evidence offered to support it is non-existent by any actual scientific measure, or the evidence is presented without proper context. It’s usually a case of affirming the consequent: ‘if my theory is true, we would find X; we found X; therefore my theory is true’.

Some years ago I was asked to review a book which hypothesised that a Chinese civilisation had existed in New Zealand before what its authors called ‘Maori’ arrived. (I think they meant ‘Polynesians’, but hey…)

This Chinese hypothesis stood against orthodox archaeology, which had discredited the notion of a ‘pre-Maori’ settlement as early as 1923 and has since shown that New Zealand was settled by Polynesians around 1280 AD – the first humans ever to walk this land. Their settler culture later developed into a distinct form whose people called themselves Maori. In other words, Maori never ‘arrived’: they were indigenous to New Zealand.

This picture has been built from a multi-disciplinary approach – archaeology, linguistics, genetic analysis and the available oral record – and the data from all these different forms of scholarship fit together. It is also consistent with the wider picture of how the South Pacific was settled, including the places the Polynesian settlers came from.

Nonetheless, that didn’t stop someone touring the South Island looking for ‘facts’ to ‘prove’ that a Chinese civilisation had been thriving here before being (inevitably) conquered by arriving Maori. This ‘evidence’ was packed off to the Rafter Radiocarbon Laboratory in Gracefield, Lower Hutt, for carbon dating. And sure enough, it was of suitable age. Proof, of course, that the hypothesis had been ‘scientifically’ proven. Aha! QED.

Except, of course, it wasn’t proof at all. Like any good journalist I rang the head of the lab and discovered that they’d been given some bagged samples of debris, which they were asked to test. They did, and provided the answer without comment. The problem was that the material had arrived without context: a radiocarbon date tells you how old organic material is, not who left it there or what culture it belonged to. Without provenance, the results were scientifically meaningless.

I’m contemplating writing a book on the pseudo-science phenomenon, with its hilarious syllogisms and its wonderful exploration of every logical fallacy so far discovered. How do these crazy ideas get such traction? Why do they seem to appeal more than the actual science?

Would anybody be interested if I wrote something on this whole intriguing phenomenon?

Copyright © Matthew Wright 2014


Lamenting the sadness of war, and of New Zealand’s war historians

Flags are at half mast today across New Zealand to mark the hundredth anniversary of the start of the First World War.

A shell bursting near New Zealand troops, Bailleul, World War I. Royal New Zealand Returned and Services’ Association: New Zealand official negatives, World War 1914-1918. Ref: 1/2-013399-G. Alexander Turnbull Library, Wellington, New Zealand. http://natlib.govt.nz/records/23121937

Over 100,000 young Kiwi men were drawn into that conflict over a four-year span. Of these, more than 58,000 became casualties, 16,500 of them dead. For a country of just on a million souls it was a heart-wrenching tragedy.

New Zealand, of course, was far from alone.

That human cost was multiplied by the fact that survivors came back damaged; this was the war that introduced ‘shell shock’ – post-traumatic stress disorder – to the world on the largest scale. During the 1920s, broken men tried to pick up the shattered threads of their lives as best they could, often with little help – an experience wonderfully described in J. L. Carr’s A Month in the Country.

Today the overwhelming impression of the war – certainly as New Zealand historiography and popular recollection have been shaped – is of unrelenting tragedy: a senseless war of senseless slaughter in which stupid generals didn’t know what to do, other than send innocent men walking very slowly towards machine guns.

Call it the ‘Blackadder’ interpretation.

World War 1 New Zealand machine gunners using a captured German position, Puisiuex, France. Royal New Zealand Returned and Services’ Association: New Zealand official negatives, World War 1914-1918. Ref: 1/2-013511-G. Alexander Turnbull Library, Wellington, New Zealand. http://natlib.govt.nz/records/22304585

This has been the overwhelming tenor of the key interpretations of the war, shaping even academic history. From the military viewpoint it’s not true. Despite the appalling casualty lists and human cost, the tactical reality on the ground was a good deal more sophisticated than historians usually allow. And there is a good deal else that has yet to be discussed – lost, until now, amidst the overwhelming power of human sorrow.

The war’s beginning has been portrayed, narrative-style, as a mechanistic result of nationalist pride and inflexible European alliance systems. In fact, there were choices; but the underlying motives for the decision to fight have barely been discussed by historians. Could it be that, from the viewpoint of British and French politicians in 1914, it was necessary – even essential – to make a stand? A lot was said at the time about German ‘frightfulness’. Was this propaganda or a fair assessment? How far can the underlying trends and issues be validly traced?

A New Zealand 18 pound gun in action at Beaussart, France, during World War I. Royal New Zealand Returned and Services’ Association: New Zealand official negatives, World War 1914-1918. Ref: 1/2-013221-G. Alexander Turnbull Library, Wellington, New Zealand. http://natlib.govt.nz/records/22371427

As yet, these debates have barely begun. They are being raised in Britain – I keep getting invited to contribute papers to symposia and conferences there, via the Royal Historical Society, of which I am a Fellow.

Whether I can do anything about exploring the same ideas in New Zealand is moot. I write and publish on my own merits. Alas, New Zealand’s local public- and university-funded military-historical crowd – all of whom prosper on full-time salaries at my expense as taxpayer – have rewarded my independent commercial work in their field by treating me like a war criminal. I know these strangers only through their public denials of the worth of my scholarship and of the commercial work I do to complement their taxpayer-funded activities. They do not respond to my correspondence, I cannot get added to mailing lists, and I have been unable to join their symposia even as audience – I only found out about the latest by accident. All this from strangers who have felt unable to approach me directly in the first instance, but who have been happy enough to go behind my back to attack me in public, then cower behind silence when approached over their conduct. However, I’ve been told their status is such that I have no grounds to criticise them.

To me the study of history – as with all human endeavour – is all about positively working together with good will, generous spirit and kindness. Grow the pie, and everybody benefits. But I appear to be a lone voice. And the experience makes me ask why I am paying the salaries and travel expenses, and subsidising the publications, of this little group through my taxes. There is a LOT of public money sloshing around the First World War centenary in New Zealand. Should it all be accumulated to a few public servants and academics who flourish at taxpayer expense, and whose response to commercial authors seeking to work with them is to publicly attack and exclude the interloper?

The practical outcome is that there seems little chance of my getting support for what I want to do. I’d like to look at New Zealand’s First World War from a different perspective – not to dislodge the ‘Blackadder’ view, but to add to it. There are many questions, including issues to do with New Zealand’s national identity – something I touched on, briefly, in my book Shattered Glory (Penguin, 2010). But I can’t see myself being in a position to take that further.

But enough about the Schrecklichkeit of New Zealand’s military-historical academics. Instead, let’s take a moment to pause and think about the realities of the world a century ago – a world in which, for a few brief weeks at least, the notion of a new war seemed somehow adventurous. It would, most of those who flocked to enlist were certain, be over by Christmas 1914.

Of course it wasn’t. As always, the enthusiastic young men, the hopeful patriots, the eager populations of 1914 did not know their future.

More on this soon.

Copyright © Matthew Wright 2014

The paradox of Europe’s high-fat, low heart-disease diets

I am always fascinated by the way science occasionally comes up with ‘insoluble questions’ or ‘paradoxes’. After a while, these tricky queries go away because, it turns out, everybody was barking up a tree to which they had been led by an expert whose ideas had captured peer and public attention.

Photo I took of the Rue de Lafayette in central Paris, one night in 2004. I scoffed as much high-fat French cuisine as I could get down this boulevard. And it was delicious.

The big one, these days, is the link between high cholesterol and heart disease. This has been dogma for decades. After the Second World War, US scientists theorised that saturated fats contributed to high cholesterol, hence clogged arteries, and therefore caused heart disease. The idea was enshrined in a US Department of Agriculture guideline in 1980.

Low fat, it seemed, was the way ahead – and it was embraced by the food industry in the US, followed by large parts of the rest of the western world.

Except Europe, where diets didn’t much change – and traditional French, German and Italian cuisine remains awash with saturated fats and high-cholesterol foods. Yet Europeans suffer less heart disease and are less obese than Americans. What’s more, since 1980 obesity has become a major issue in the United States and in other countries that followed the US low-fat lead, such as New Zealand.

A paradox! Something science can’t explain. Or is it?

The problem is that research often tests only what can be funded, something frequently framed by commercial priorities. That framework is further shaped by one of the philosophical flaws of western rational thinking: the notion that complex questions can eventually be reduced to single-cause questions and answers.

Reality is far less co-operative. The real world isn’t black-and-white. It’s not even shades of grey. It’s filled with mathematically complex systems that can sometimes settle into states of meta-stability, or which present superficial patterns to initial human observation – an observation framed by the innate human tendency to see patterns in the first instance.

For me, from my philosophical perspective, it’s intriguing that recent research suggests the link between saturated fat and ischaemic (blood-flow related) heart disease is more tenuous than thought. Certainly it’s well accepted – and was, even fifty years ago when the low-fat message was being developed – that certain types of cholesterol are utterly vital. If you had none at all in your system you’d die, because it plays a crucial role in human biochemistry on a number of levels: cholesterol even makes it possible for you to synthesise Vitamin D when exposed to sunlight. It’s so important, in fact, that your liver actually makes it.

As I understand it, recent studies suggest that the effort to diagnose and fix the problem of heart attacks on a simplistic mid-twentieth-century premise – something much of western society picked up as dogma – has been one of the factors implicated in a new epidemic of health problems. There is evidence that the current epidemic of diabetes (especially Type 2) and other diseases is one symptom of the way carbohydrates were substituted for fatty foods a generation ago, and of the way food manufacturers compensated for the reduction in saturated fats by adding sugar or artificial sweeteners. Use of corn syrup in the US, for example, is up by 198 percent on 1970 figures.

I’m not a medical doctor, and from the scientific perspective all this demands testing. But the intellectual mechanisms behind this picture seem obvious to me from the principles of logic and philosophy – I learned the latter, incidentally, at post-grad level from Peter Munz, one of only two students of both Karl Popper (who formalised the falsification principle behind modern scientific method) and Ludwig Wittgenstein (who theorised that language distorts understanding). I am in no doubt that language alone cannot convey pure concept; and I think the onus is on us to extend our understanding through careful reason – which includes being reasonable.

What am I getting at? Start with a premise and an if-then chain of reasoning, and you can build a compelling argument that is watertight of itself – but that doesn’t mean the answer is right. (‘All swans are white; this bird is a swan; therefore it is white’ is perfectly valid reasoning – right up until you meet a black swan.) Data may be incomplete, or the interplay of possibilities may not be fully considered.

What follows? A human failing – self-evident smugness, pride in the ‘discovery’, followed by over-compensation that reverses the old thinking without properly considering the lateral issues. Why? Because very few people are equipped to think ‘sideways’, and scientists aren’t exceptions.

Which would be fine if it was confined to academic papers. But it isn’t. Is it.

Copyright © Matthew Wright 2014


Science: Nil. Stupidity: 1,000,000,000

It was Albert Einstein, I believe, who suggested that only two things were infinite: the universe and stupidity. And he wasn’t sure about the universe.

According to media reports, Yoshihiro Kawaoka of the University of Wisconsin-Madison has been tinkering with the H1N1 flu virus that triggered a pandemic in 2009 and killed an estimated 500,000 people. Apparently he’s altered it to strip away the human immunity built up since 2009. There are solid scientific reasons for doing so – it teaches us how to make better vaccines. Excellent motive.

Except – e-e-e-except… the modified virus poses a threat if it escapes. Estimates of potential casualties range from a billion people down to merely 400,000,000. Kawaoka’s effort has been criticised as irresponsible, and the response, generally, has been critical.

I’m not a virologist. But I know what happened when the Justinian plague and the Black Death hit Europe, or when Europe’s diseases hit the Americas and Australasia. I know what happened in 1918-19. Diseases to which humans had no immunity. And I think if someone shows something can be done, somebody else will repeat it on that knowledge alone.

What worries me is the wider trend towards tinkering with viruses in labs. We can, I fear, only get away with it for so long without an accident. Professor Simon Wain-Hobson, of the Virology Department at the Pasteur Institute in Paris, is reported as using more direct terms: ‘If society understood what was going on,’ he was quoted in the Independent, ‘they would say, “What the F… are you doing?”’

Quite right, too.

Artwork by Plognark http://www.plognark.com/ Creative Commons license

Copyright © Matthew Wright 2014

Sherlock’s public domain – but will writing new stories be elementary?

A recent US court ruling that 50 Sherlock Holmes stories published before December 1923 are in the public domain – hence free for all to use – raises the question of whether we’re about to see a flood of new Holmes adventures.

Holmes in action during the ‘Adventure of the Abbey Grange’, illustration by Sidney Paget for Strand Magazine. Public domain, via Wikipedia.

It’s subject to possible appeal, I suppose. But it’s a tricky issue. Here in New Zealand, all of Sir Arthur Conan Doyle’s works have been in the public domain since 31 December 1980, the end of the fiftieth year after his death. But copyright terms and protections vary, and his material has remained in copyright elsewhere: some countries run 75- or 100-year terms after death, and the US has more than one term. The US court case came about, it seems, when a licensing deal with the Doyle estate tripped up.
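(As a back-of-the-envelope illustration – a minimal sketch, assuming a simple ‘life plus N years, running to the end of the calendar year’ rule, which is how the New Zealand date above works out. Real copyright law has many more wrinkles than this.)

```python
from datetime import date

def copyright_expiry(death_year: int, term_years: int = 50) -> date:
    """Under a 'life + N years' rule, copyright runs to the end of
    the Nth calendar year after the author's death."""
    return date(death_year + term_years, 12, 31)

# Conan Doyle died in 1930, so under New Zealand's life+50 term his
# works left copyright at the end of 31 December 1980.
print(copyright_expiry(1930, 50))   # 1980-12-31
print(copyright_expiry(1930, 75))   # 2005-12-31 under a life+75 term
```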

To me, that raises a question. Sure, the ruling means any author can freely go ahead and use Sherlock Holmes and all the concepts and ideas that pre-date 1923 in stories of their own. This includes most of the classic Holmes imagery, from the deerstalker cap to the pipe to the violin, to the sense that it’s always 1895 and hansom cabs are the way to get around London.

But should they?

Sherlock Holmes has been revisited by other authors before – Nicholas Meyer’s The Seven-Per-Cent Solution, for instance, or Fred Saberhagen’s The Holmes-Dracula File. And there have been innumerable adaptations of the stories for movies and TV.

Another Paget illustration, from the ‘Adventure of the Golden Pince-Nez’, for Strand magazine. Public domain, via Wikipedia.

As far as I am concerned, only two adaptations have come close to the spirit and intent of the Conan Doyle originals. There was the Granada Television Jeremy Brett/Edward Hardwicke series of the 1980s, which was utterly faithful to Doyle’s work in essential details. And there was the BBC’s 2010 Benedict Cumberbatch/Martin Freeman re-telling, which was so faithful to the spirit that we can easily imagine Conan Doyle writing it, were he starting out today. Don’t forget, Holmes was set in what was, when Doyle started writing, the modern world.

I question whether re-imagining the Holmes character is effective. There’s been stupid Holmes and smart Watson (Michael Caine/Ben Kingsley, Without a Clue, 1988), or Holmes as action hero (Robert Downey Jr/Jude Law, Sherlock Holmes, 2009). But Holmes, as Conan Doyle imagined him, is iconic – so aren’t these new characters, riffing on the old but really something else?

That highlights what, for me, is the key issue for any author writing ‘new’ Holmes stories. Sure, there’s a market. But Holmes stories are hard to do well – and really, they’re elevated fan fiction. Isn’t it better for an author to invent something new?

Thoughts?

Copyright © Matthew Wright 2014