The paradox of Europe’s high-fat, low heart-disease diets

I am always fascinated by the way science occasionally comes up with ‘insoluble questions’ or ‘paradoxes’. After a while, these tricky queries go away because, it turns out, everybody was barking up a tree to which they had been led by an expert whose ideas had captured peer and public attention.

Photo I took of the Rue de Lafayette in central Paris one night in 2004. I scoffed as much high-fat French cuisine as I could get down this boulevard. And it was delicious.

The big one, these days, is the link between high cholesterol and heart disease.  This has been dogma for decades. After the Second World War, US scientists theorised that saturated fats contributed to high cholesterol, hence clogged arteries, and therefore caused heart disease. The idea was enshrined in a US Department of Agriculture guideline in 1980.

Low fat, it seemed, was the way ahead – and it was embraced by the food industry in the US, followed by large parts of the rest of the western world.

Except Europe. They didn’t much change – and traditional French, German and Italian cuisine is awash with saturated fats and high-cholesterol foods. Yet they suffer less heart disease and are less obese than Americans. What’s more, since 1980 obesity has become a major issue in the United States and other countries that have followed the US low-fat lead, such as New Zealand.

A paradox! Something science can’t explain. Or is it?

The problem is that research often tests only what can be funded, and funding is frequently framed by commercial priorities. That framework is further shaped by one of the philosophical flaws of western rational thinking: the notion that complex questions can eventually be reduced to single-cause questions and answers.

Reality is far less co-operative. The real world isn’t black-and-white. It’s not even shades of grey. It’s filled with mathematically complex systems that can sometimes settle into states of meta-stability, or which appear to present superficial patterns to initial human observation – an observation framed by the innate human tendency to see patterns in the first place.

From my philosophical perspective, it’s intriguing that recent research suggests the link between saturated fat and ischemic (blood-flow related) heart disease is more tenuous than thought. Certainly it’s been well accepted – and was, even fifty years ago when the low-fat message was being developed – that cholesterol itself is utterly vital. If you had none at all in your system, you’d die, because it plays a crucial role in human biochemistry on a number of levels. Cholesterol even makes it possible for you to synthesise Vitamin D when exposed to sunlight. It’s one of the things humans can produce – your liver actually makes it, for these reasons.

As I understand it, recent studies suggest that the effort to diagnose and fix the problem of ‘heart attacks’ based on a simplistic mid-twentieth century premise – something picked up by much of western society as dogma – has been one of the factors implicated in a new epidemic of health problems. There is evidence that the current epidemic of diabetes (especially Type 2) and other diseases is one symptom of the way carbohydrates were substituted for fatty foods a generation ago, and of the way food manufacturers also compensated for a reduction in saturated fats by adding sugar or artificial sweeteners. Use of corn syrup in the US, for example, is up by 198 percent on 1970 figures.

I’m not a medical doctor. And from the scientific perspective all this demands testing. But the intellectual mechanisms behind this picture seem obvious to me from the principles of logic and philosophy – I learned the latter, incidentally, at post-grad level from Peter Munz, one of only two people to have studied under both Karl Popper (the philosopher who formalised modern scientific method) and Ludwig Wittgenstein (who theorised that language distorts understanding). I am in no doubt that language alone cannot convey pure concept; and I think the onus is on us to extend our understanding through careful reason – which includes being reasonable.

What am I getting at? Start with a premise and an if-then chain of reasoning, and you can build a compelling argument that is watertight of itself – but it doesn’t mean the answer is right. Data may be incomplete; or the interplay of possibilities may not be fully considered.

What follows? A human failing – self-evident smugness, pride in the ‘discovery’, followed by over-compensation that reverses the old thinking without properly considering the lateral issues. Why? Because very few people are equipped to think ‘sideways’, and scientists aren’t exceptions.

Which would be fine if it was confined to academic papers. But it isn’t. Is it.

Copyright © Matthew Wright 2014


Science: Nil. Stupidity: 1,000,000,000

It was Albert Einstein, I believe, who suggested only two things were infinite. The universe and stupidity. And he wasn’t sure about the universe.

According to media reports, Yoshihiro Kawaoka of the University of Wisconsin-Madison has been tinkering with the H1N1 flu virus that triggered a pandemic in 2009 and killed an estimated 500,000 people. Apparently, he’s altered it to take away the human immunity built up since 2009. There are solid scientific reasons for doing so – we learn how to make better vaccines. Excellent motive.

Except – e-e-e-except… the modified virus poses a threat if it escapes. Estimates of casualties range from a billion people down to merely 400,000,000. Kawaoka’s effort has been criticised as irresponsible, and the response, generally, has been critical.

I’m not a virologist. But I know what happened when the Justinian plague and the Black Death hit Europe, or when Europe’s diseases hit the Americas and Australasia. I know what happened in 1918-19. Diseases to which humans had no immunity. And I think if someone shows something can be done, somebody else will repeat it on that knowledge alone.

What worries me is the wider trend towards tinkering with viruses in labs. We can, I fear, only get away for so long without an accident. Professor Simon Wain-Hobson, of the Virology Department at the Pasteur Institute in Paris, is reported as using more direct terms. ‘If society understood what was going on,’ he was quoted as saying in the Independent, ‘they would say “What the F… are you doing?”’

Quite right, too.

Artwork by Plognark http://www.plognark.com/ Creative Commons license

Copyright © Matthew Wright 2014

Sherlock’s public domain – but will writing new stories be elementary?

A recent US court ruling that 50 Sherlock Holmes stories published before December 1923 are in the public domain – hence free for all to use – raises the question of whether we’re about to be inundated with new Holmes adventures.

Holmes in action during the ‘Adventure of the Abbey Grange’, illustration by Sidney Paget for Strand Magazine. Public domain, via Wikipedia.

It’s subject to possible appeal, I suppose. But it’s a tricky issue. Here in New Zealand, all Sir Arthur Conan Doyle’s works have been public domain since 31 December 1980, the end of the fiftieth year after his death. But copyright terms and protections vary and his material has remained in copyright elsewhere. Some countries run 75 or 100-year copyrights after death, and the US has more than one term. The US court case came about, it seems, when a licensing deal with the Doyle estate tripped up.

To me, that raises a question. Sure, the ruling means any author can freely go ahead and use Sherlock Holmes and all the concepts and ideas that pre-date 1923 in stories of their own. That includes most of the classic Holmes imagery, from the deerstalker cap to the pipe to the violin to the fact that it’s always 1895 and hansom cabs are the way to get around London.

But should they?

Sherlock Holmes has been revisited by other authors before. Nicholas Meyer’s The Seven-Per-Cent Solution, for instance. Or Fred Saberhagen’s The Holmes-Dracula File. And there have been innumerable adaptations of the stories for movies and TV.

Another Paget illustration, from the ‘Adventure of the Golden Pince-Nez’, for Strand magazine. Public domain, via Wikipedia.

As far as I am concerned, only two adaptations have come close to the spirit and intent of the Conan Doyle originals. There was the Granada television series of the 1980s with Jeremy Brett and Edward Hardwicke, which was utterly faithful to Doyle’s work in essential details. And there was the BBC’s 2010 Benedict Cumberbatch/Martin Freeman re-telling, which was so faithful to the spirit that we can easily imagine Conan Doyle writing it, were he starting out today. Don’t forget, Holmes was set in what was, when Doyle started, the modern world.

I question whether re-imagining the Holmes character is effective. There’s been stupid Holmes and smart Watson (Michael Caine/Ben Kingsley, Without a Clue, 1988). Or Holmes as action hero (Robert Downey Jr/Jude Law, Sherlock Holmes, 2009). But Holmes, as Conan Doyle imagined him, is iconic – so aren’t these effectively new characters? Riffing on the old, but really something else?

That highlights what, for me, is the key issue for any author writing ‘new’ Holmes stories. Sure, there’s a market. But Holmes stories are hard to do well – and really, it’s elevated fan fiction. Isn’t it better for an author to invent something new?

Thoughts?

Copyright © Matthew Wright 2014

Seventy years since the battle that shaped our world

It is seventy years since a friend of my family looked into the sky above his village in England and saw a cloud of aircraft fly over. And over. And over. The sky was filled with aircraft, and they were all going one way – to France.

Landing at D-Day. Photo by Chief Photographer’s Mate (CPHOM) Robert F. Sargent, U.S. Coast Guard. Public Domain.

It was D-Day, the first day of Operation OVERLORD – the Allied landing on the shores of Nazi-occupied Europe. It remains perhaps the most complex, audacious and risky military action in the history of the world. The battle plan – two years or more in the making – relied on overcoming some of the strongest defences built during that war, and was detailed down to individual pill-boxes. Even after the landings, the lodgement was stuck in a maze of hedge-rows and ditches, and there was every risk that the Germans might bring superior forces to bear before the Allies could push enough troops into it.

The world we know today was shaped by events on that Normandy coast. If the Allies had been knocked off the lodgement – or if the storm that delayed the landing on 5 June had destroyed the invasion fleet – what then? Another assault could not have been staged for years, if at all. Part of the impact was surprise; Hitler, particularly, never expected them to land in Normandy. If it had failed, the Allies could have carried on their campaign in Italy, their blockade of the Axis economy and their air campaign against the German heartland. But they could not have got involved in war on the ground in northern Europe.

Naval bombardment plan for D-Day. Public domain, via Wikipedia.

That doesn’t mean that the Germans would have got away with it. If OVERLORD had failed, the war in Europe would still have been over by mid-to-late 1945 anyway, because by D-Day the Germans had already lost the war in the east. The monstrous battles around the Kursk salient in mid-1943 effectively ended any chance of the Germans fighting Stalin to a stalemate. After that, the only real question was how long their commanders, tactically hobbled by Hitler’s foaming ‘no retreat’ demands, could delay the Soviet advance.

In the absence of an Allied threat to western Europe, the Germans could have transferred the 50 divisions they had in the west to the eastern front. But it would only have deferred the inevitable. By this time the Soviets had around 300 divisions committed to the struggle. The Luftwaffe had lost air superiority, and that wasn’t going to change in a hurry – if at all. We can forget the ‘Luftwaffe 1946’ dieselpunk fantasy. Aside from the fact that Nazi super-science wasn’t actually all that advanced, the Germans were desperately short of key materials thanks to the Allied blockade – particularly oil and chromium. Albert Speer estimated that war production would have to halt by early 1946, come what may, on the back of the chromium shortage alone.

If OVERLORD had failed, in short, the face of post-war Europe would have been Soviet. The spectre isn’t one of Soviet tanks sitting on the Channel coast, but of the Iron Curtain descending further west – perhaps on the Rhine – with France and likely Austria becoming Soviet puppets. The Cold War would have had a very different face – one without a strong Western Europe. And that raises questions about how it might have played out. I figure the Soviet system would still have collapsed – totalitarian systems do, sooner or later – but the detail of the later twentieth century would have been very different.

Thoughts?

Copyright © Matthew Wright 2014

Helping some guy who was having a heart attack – and thoughts on our duty of care

Last Sunday my wife and I were out for a walk along the Hutt river, which flows into Wellington harbour. It was a pleasant autumn morning. And then we found someone lying at the bottom of the stop-bank.

He looked derelict. He might have been sleeping, or maybe drunk or something. But he didn’t look right, so I ran down the slope and called to him.

The Hutt river and its stop banks, looking south towards the rail bridge. Usually there’s a lot more water in it than this.

He stuck his head up and for a moment there was nobody in his eyes. He had, he said, just been discharged from hospital. He was on his way home, though the suburb he named was in the opposite direction. Then I saw he still had ECG leads on his chest.

‘I’m going to call an ambulance,’ I said. He didn’t like that.

‘I don’t want to go back,’ he wheezed. ‘Want to help me? Gimme ten bucks and I’ll get a taxi home.’

‘No, you need medical help.’

He didn’t want medical help. After a bit of debate I finally said:

‘Look, I can’t not help you!’

He didn’t look cyanotic, but he was agitated and incoherent, obviously having a cardiac episode. I went back to my wife, told her what was happening, and we called an ambulance. They arrived within five minutes and took him back to hospital. I hope he was OK.

The moment got me thinking about ethics and morality and that sort of thing. We were infringing on his right to be left alone if he demanded it – and he was demanding it. He was pretty aggro about it too, which may have been symptomatic of having a heart attack. Or maybe in his own mind he was tired of life. I don’t know. Certainly, I am sure, he was tired of being in hospital.

But it wasn’t a moral dilemma for me. He was in serious trouble. He was in pain, his life was possibly on the line. There was no decision to make. He had to be helped, and the best way wasn’t to call a taxi and send him home – it was to get medical support. Fast.

These things are not optional.

Copyright © Matthew Wright 2014


OK, so I’m 1.8 percent Thog the Caveman. Cool.

It’s official. A paper published last week argues that Neandertals were just as smart as we are.

What gives, you say? Isn’t human history a glorious ascent from rats to car salesmen to politicians and finally to humans? A linear progression in which ‘advance’ is measured by brain size, and in which stupid Neandertals were doomed to be out-competed by us?

Actually…no. That’s nineteenth century thinking, which mashed period free market principles into evolutionary theory. A misconception perpetuated by that trope of walking apes, deriving from the ‘March of Progress’ that Rudolf Zallinger drew for Time-Life books in 1965.

It’s good news for me. As someone of European descent, I apparently have up to 1.8 percent Neandertal genes. I always thought that explained why I spend up to 1.8 percent of my time swilling beer, belching, dropping wheelies in my car, and head-butting large concrete objects while grunting ‘ugh ugh, Thog bring mam-muth steak to wo-man.’ But if Neandertals were as smart as us, I guess I’ll have to find another explanation. Probably the Cro Magnon coming through, I suppose.

A diagram I made of where we think everybody was, mostly, using my trusty Celestia installation and some painting tools.

So how did this come about? Scientific thinking has moved past nineteenth century market philosophy. Evolutionary biologists like Stephen Jay Gould argue that evolution isn’t about ‘directional advance’, still less measured by the increasing size of a body part. It’s about change through time, which isn’t directional.

Current theory suggests the human template hasn’t changed since H. erectus appeared around 1.8 million years ago. This hominid, fossil evidence indicates, spread from Britain to Java, and isolated populations survived up to 140,000 years ago. Remains show that H. erectus was like us from the neck down (‘post-cranial morphology’), had command of fire and made tools. The archetypal fossil is KNM-WT 15000, ‘Turkana Boy’, a 9- to 11-year-old male who might have grown to 6’1″ had he lived to adulthood. A trove of H. erectus skulls recently discovered in Dmanisi, Georgia, suggests that contemporary species previously thought separate – H. antecessor, H. ergaster, even H. habilis – were actually H. erectus.

Studies indicate that about 700,000 years ago a new species, H. heidelbergensis, ‘Heidelberg Man’, diverged from H. erectus and also migrated out of Africa – probably a side effect of following favourable climatic zones. They had brains within the modern size range. The increase, paleoanthropologists argue, came from tool-making and from the reduction in jaw size that followed cooking. Later, the theory goes, Heidelberg Man speciated into us – H. sapiens – in Africa, Neandertals in Europe and western Asia, and Denisovans in Siberia. Neandertals had a bigger brain than ours, and were physically more than twice as strong. Trying to rank these species as ‘advances’ on each other is like saying lions are more advanced than tigers.

I’ve seen it argued that all represented different ways of being human.

Neanderthal family group approximately 60,000 years ago. Artwork by Randii Oliver, public domain, courtesy NASA/JPL-Caltech.

According to the genetic record, Europeans have Neandertal genes because Neandertal men got frisky with Sapiens women in the Levant, 60,000 years ago. It was Neandertal men with Sapiens women because, had it been the other way around, the genes wouldn’t have been passed to our species – genetic evidence suggests the male progeny would have been sterile. Nor were Neandertals the only ones up for it. Recent genetic studies point to interbreeding between up to four closely related human species, back in the Palaeolithic.

None of this changes a genetic quirk shared by every modern human: we are unusually close by biological standards, with far less variation than has been observed in other primates. We are so close, in fact, that if we were dogs, we’d be the same breed. I prefer to think Labradors rather than those foo foo French things. The reason is that, around 75,000 years ago, we came very close to extinction – thus, we are all descended from a tiny and genetically homogeneous group. Because Neandertals and Denisovans were, genetically, 99.5 percent identical to us, a small intrusion of their genes makes no difference.

I don’t know that I’d want to meet a Neandertal. We were wimps – ‘gracile’. With upright foreheads and diminutive jaws, we’d have looked like children (‘neoteny’). We were the geeky-looking ones who would have got the atomic wedgies.

On the other hand, every other kind of human was wiped out by the deep cold and droughts of the last big glacial cycles. We weren’t. See what I mean when I go on in this blog about geeks winning?

Copyright © Matthew Wright 2014

My hypothesis that English is a loose language

I’ve always thought English is a loose language. Take the words ‘theory’ and ‘hypothesis’, for instance. Even dictionary definitions sometimes mix their meanings up.

Albert Einstein lecturing in 1921 – after he’d published the Special and General Theories of Relativity. Public domain, via Wikimedia Commons.

Scientifically, the word ‘theory’ means a hypothesis that has been tested and supported by empirical evidence. Take Einstein’s two theories of relativity, Special (1905) and General (1915). We call them ‘theories’, but everybody with a GPS-equipped cellphone or a GPS unit encounters proof that Einstein was right, every time they use it.

This is because GPS satellite clocks have a correction built into them to cope with Special Relativity time dilation, which arises because the satellites are moving at a different velocity from the surface of the Earth. It’s minuscule – a loss of about 6 millionths of a second every 24 hours. There’s also a General Relativity correction: the satellites orbit further from the centre of the Earth’s mass than we are on the surface, so their clocks run faster relative to ours – a gain of about 45 millionths of a second every 24 hours.

If all this sounds supremely geeky and too tiny to worry about, millionths of a second count, because it’s on differences at that order of magnitude that GPS calculates positions. If the net relativity error of 39 millionths of a second every 24 hours wasn’t corrected, GPS would kick up positional errors of around 12 km on the ground every day. Einstein, in short, was totally right, and if we didn’t use his equations to correct GPS, we’d be lost. Literally. Yet we still call his discovery a ‘theory’.
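For anyone who wants to check that arithmetic, here’s a minimal sketch in Python (my addition, not from the original post) that turns the 39-microseconds-a-day clock error into a distance, assuming GPS ranging works at the speed of light:

```python
# Back-of-the-envelope check: a clock error becomes a ranging error
# when multiplied by the speed of light.

SPEED_OF_LIGHT_M_S = 299_792_458      # metres per second
NET_CLOCK_ERROR_S_PER_DAY = 39e-6     # 45 microseconds gained minus 6 lost, per 24 hours

range_error_km_per_day = NET_CLOCK_ERROR_S_PER_DAY * SPEED_OF_LIGHT_M_S / 1000
print(f"Uncorrected GPS error grows by about {range_error_km_per_day:.1f} km per day")
# Prints roughly 11.7 km per day - in line with the 'around 12 km' figure above.
```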

A hypothesis, on the other hand, is the idea someone comes up with to explain something. Then they run tests to figure out the rules. Take gravity. Everybody knew it existed. However, Newton figured he could come up with rules – his hypothesis. Once Newton had a hypothesis, he was able to run experiments and work out how gravity actually behaved – creating his theory of gravity.

Neptune. Discovered by mathematics, thanks to Newton’s theories. A picture I made with Celestia (cool, free science software).

One of the reasons these explanations are called ‘theories’ is that science sometimes finds refinements. Einstein’s General Theory of Relativity is also a theory of gravity, integrating the extremes of time and space Einstein described in his Special theory. It replaced Newton’s theory. But that didn’t mean Newton was wrong in the terms he observed and described. On the contrary, his equations still work perfectly for the things around which he developed the theory.

So in the strictest sense, ‘hypothesis’ means ‘how we think things work’, while ‘theory’ means ‘how we’ve shown things to work’. Science sometimes creates supersets of theories, like onion skins, that explain things differently – but usually don’t invalidate the core of the earlier theory.

And my hypothesis, which I think should be elevated to theory status on this evidence, is that English is a pretty loose language. Thoughts?

Copyright © Matthew Wright 2014


And now, some shameless self promotion:

It’s also available on iTunes: https://itunes.apple.com/nz/book/bateman-illustrated-history/id835233637?mt=11

Buy the print edition here: http://www.batemanpublishing.co.nz/ProductDetail?CategoryId=96&ProductId=1410

The Big Bang theory wins again. So does Einstein.

It’s a great time to be a geek. We’re learning all sorts of extreme stuff. There’s a team led by John Kovac, from the Harvard-Smithsonian Center for Astrophysics, who’ve been beavering away at one of the fundamental questions of modern cosmology. The secret has demanded some extreme research in an extreme place: Antarctica. There’s a telescope there, BICEP2, that’s been collecting data on the cosmic microwave background. Last week, the team published their initial results.

Timeline of the universe – with the Wilkinson Microwave Anisotropy Probe at the end. Click to enlarge. Public domain, NASA.

The theory they were testing is as extreme as such things get, and it goes like this. Straight after the Big Bang, the universe was minuscule and very hot. Then it expanded – unbelievably fast in the first few trillionth-trillionths of a second, but then much more slowly. After a while it was cool enough for the atoms we know and love today to form. This ‘recombination’ epoch occurred perhaps 380,000 years after the Big Bang. One of the outcomes was that photons were released from the plasma fog – physicists call this ‘photon decoupling’.

What couldn’t quite be proven was that the early rate of expansion – ‘inflation’ – had been very high.

But now it has. And the method combines the very best of cool and of geek. This early universe can still be seen, out at the edge of visibility. That initial photon release is called the ‘cosmic microwave background’ (CMB), first predicted in 1948 by Ralph Alpher and others, and observed by accident in 1965, when it interfered with the reception of a radio antenna at Bell Laboratories. That started a flurry of research. Its temperature is around 2.725 kelvin, a shade above absolute zero. It’s that temperature because it’s been red-shifted (the wavelengths radiated from it have stretched, because the universe is expanding, and stuff further away gets stretched more). The temperature at any redshift z follows from today’s CMB temperature of 2.725 kelvin, thus: Tr = 2.725(1 + z).
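To put a number on that relation, here’s a minimal sketch in Python (my addition). The decoupling redshift of roughly 1,100 is an assumed, commonly quoted value rather than a figure from this post:

```python
# The CMB temperature at redshift z, using the relation quoted above:
# Tr = 2.725 * (1 + z), with temperatures in kelvin.

T_CMB_TODAY_K = 2.725      # present-day CMB temperature
Z_DECOUPLING = 1100        # approximate redshift of photon decoupling (assumed)

def cmb_temperature_k(z: float) -> float:
    """Temperature of the CMB radiation at redshift z, in kelvin."""
    return T_CMB_TODAY_K * (1 + z)

print(f"CMB temperature at decoupling: about {cmb_temperature_k(Z_DECOUPLING):,.0f} K")
# Roughly 3,000 K - hot enough that, any earlier, photons could not travel
# freely through the ionised plasma.
```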

The COBE satellite map of the CMB and its anisotropies. NASA, public domain, via Wikipedia.

The thing is that, way back – we’re talking 13.8 billion years ago – the universe was a tiny fraction of its current size, and the components were much closer together. Imagine a deflated balloon. Splat paint across the balloon. Now inflate the balloon. See how the paint splats move further apart from each other? But they’re still the original pattern of the splat. In the same sort of way, the CMB pattern is a snapshot of the way the universe was when ‘photon decoupling’ occurred. It’s crucial to proving the Big Bang theory. It’s long been known that the background is largely homogeneous (proving that it was once all in close proximity) but carries tiny irregularities in the pattern (anisotropy). What the BICEP2 team discovered is that the variations are polarised in a swirling pattern, a so-called B-mode.

The reason the radiation is polarised that way is that early inflation was faster than light-speed, and the gravitational waves within it were stretched, rippling the fabric of space-time in a particular way and creating the swirls. Discovering the swirls, in short, both identifies the early rate of expansion (which took the universe from a nanometre to around 250,000,000 light years in diameter in 0.00000000000000000000000000000001 of a second… I think I counted right…) and gives us an indirect view of gravitational waves for the first time. How cool is that?
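Taking the post’s own figures at face value – a nanometre to roughly 250,000,000 light years across – here’s a quick sketch in Python (my addition) of the expansion factor that implies:

```python
# Expansion factor implied by the figures quoted above (a nanometre to
# ~250 million light years), purely for a sense of scale.

METRES_PER_LIGHT_YEAR = 9.461e15
initial_diameter_m = 1e-9                           # one nanometre
final_diameter_m = 250e6 * METRES_PER_LIGHT_YEAR    # ~250 million light years

factor = final_diameter_m / initial_diameter_m
print(f"Expansion factor: about {factor:.1e}")      # roughly 2.4e33
```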

Albert Einstein lecturing in 1921 – after he’d published both the Special and General Theories of Relativity. Public domain, via Wikimedia Commons.

What’s a ‘gravitational wave’? They were first predicted nearly a century ago by Albert Einstein, whose General Theory of Relativity of 1915 was actually a theory of gravity. According to Einstein, space and time are an entwined ‘fabric’. Energy and mass (which, themselves, are the same thing) distort that fabric. Think of a thin rubber sheet (space-time), then drop a marble (mass/energy) onto it. The marble will sink, stretching the sheet. Gravitational waves? Einstein’s theory made clear that these waves had to exist. They’re ripples in the fabric.

One of the outcomes of last week’s discovery is the implication that ‘multiverses’ exist. Another is that there is not only a particle to transmit gravity, a ‘graviton’, but also an ‘inflaton’ which pushes the universe apart. Theorists suspect that ‘inflatons’ have a half-life and they were prevalent only in the very early universe.

There’s more to come from this, including new questions. But one thing is certain. Einstein’s been proven right. Again.

Copyright © Matthew Wright 2014

Coming up: More geekery, fun writing tips, and more.

Write it now: do writers always perch on a soap-box?

Back in the early 1980s, when I was a history student at Victoria University, one of the other students took me aside and nodded towards the lecturer. ‘D’you know he’s really a Liberal?’

Hmmn

The Professor in question was one of New Zealand’s leading historians of the day on the Liberal party, which was in government 1891-1912 and imploded in the early 1920s. The world had long since moved on, rendering interest in them academic. Which, I suppose, is why this Professor was studying them.

That didn’t make him a Liberal, personally. But the distinction, it seemed, was lost on his students, to whom interest and personal advocacy were one and the same. The idea’s not unique to universities – though in my experience the angry, sanctimonious and half-educated youth who inhabited the history department at the time set the gold standard.

Post-Vietnam anti-war rhetoric was well entrenched. Post-colonial thinking was on the rise. Failure to advocate it was a fast road to social ostracism, buoyed by unsubtle intellectual bullying that enforced conformity to the breathless ‘new order’. Those who failed to conform lost out socially and found that career doors were not opened.

Conflation of interest with advocacy happens in the real world too – for writers it’s an occupational hazard. Freelance journos are bound to crash into the social no-no du jour sooner or later – they write across such a wide range of subjects, and even those who focus their brand on a particular subject get tarred eventually. Non-fiction book writers hit it. Want to write a book on how the Nazis took over Germany? Be careful.

Novelists hit it – I recall reading that Jerry Pournelle and Larry Niven took a lot of stick for setting The Mote in God’s Eye in a human Empire. Were they advocating imperialism? Not at all. It was simply the setting.

That’s not to say that writing can’t be a soap-box. Often it is. But it can also be abstract – and it’s important for the writer to understand how that works – to signal the difference. Also for readers to appreciate it.

For me the trick is stepping away from the bus. Looking back and figuring out just what it is that frames the way we think. It doesn’t mean rejecting that – but it does mean understanding it. From that, it’s possible to be properly abstract. Or, indeed, to get back on the soap box, this time in an informed way.

Your thoughts?

Copyright © Matthew Wright 2014

Coming up: More writing tips, science geekery and fun. Check it out.

I miss my future. It’s been taken from me.

I miss my future. When I was a kid, 21st-century food was going to be pre-packaged space pap. We would all, inevitably, be eating  paste out of tubes. It was futuristic. It was progress.

The future of 1970: a Mars mission, 1981 style. Concept for a 1981 flight, via NASA.

Today? We’re in that future. And I still cook fresh veggies and steak. Some of it from the garden (the veggies, not the steak).

When I was a teenager, plastic cards were going to kill cash. In the 21st century we’d just have cards. It was inevitable. It was the future. Get with the program. Today? We use more cash than ever, but chequebooks died.

When I was in my twenties, video was going to kill the movies. It was inevitable. We just had to accept it. When I last looked, movies were bigger than ever – didn’t The Hobbit, Part 2,889,332 just rake in a billion at the box office?

And, of course, personal computers were going to give us the paperless office. Except that today every office is awash with …yup, paper, generated by what we produce on computer, churning out of giant multi-function copiers that run endlessly, every second the office is open.

Did we fail to adopt all these things hard or fast enough? Is it just that technology hasn’t quite delivered what was expected – but it will, it will? No. The problem is with the way we think – with the faulty way we imagine change occurs over time with technology and people. With the way we assume any novelty will dominate our whole future. With the way we inevitably home in on single-cause reasons for change, when in reality anything to do with human society is going to exist in many more than fifty shades of grey. The problem is a fundamental misunderstanding – driven by the simplistic ‘progressive’ mind-set that has so dominated popular thinking since the Age of Reason.

I know all that. But still…I miss my future.

Copyright © Matthew Wright 2014

Coming up: More writing tips, science, history and more. Watch this space.