Pluto frenzy is upon us this month

The world’s in Pluto frenzy this month. NASA’s SOFIA observatory aircraft has been operating out of Christchurch, New Zealand, to capture data on Pluto’s atmosphere by watching Pluto pass in front of a distant star – a stellar occultation – and on 14 July, some nine and a half years after leaving Earth, the New Horizons probe will storm past Pluto and its family of moons.

Simulated view of Pluto and Charon – speculative only at this stage – which I made with Celestia.

It’s our first visit to that world – and last, for the foreseeable future. And what an achievement! That probe left Earth faster than any spacecraft before it, and it’s already returned new data about the Pluto system. In the weeks after the encounter, as it transmits its hoard of information back, New Horizons will revolutionise everything we know about that remote world and its moons. Always assuming it doesn’t bang into anything, of course. At 51,500 km/h, an encounter with a grain of sand would do serious mischief. The fact that Pluto has one giant moon – Charon – and four smaller ones suggests the system might have been formed by an ancient collision, and there could be debris along the encounter path.
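As a rough back-of-the-envelope sketch of why a sand grain matters – assuming a one-milligram grain, which is my own figure rather than NASA’s – the impact energy works out at around 100 joules, about the same as being hit by a hard-bowled cricket ball:

```python
# Back-of-the-envelope sketch (assumed values, not NASA figures): kinetic
# energy of a 1 mg sand grain met at New Horizons' ~51,500 km/h flyby speed.
grain_mass_kg = 1e-6            # assume a one-milligram grain of sand
speed_m_s = 51_500 / 3.6        # 51,500 km/h is roughly 14,300 m/s

energy_joules = 0.5 * grain_mass_kg * speed_m_s ** 2
print(f"Impact energy ≈ {energy_joules:.0f} J")  # ~100 J – about a hard-bowled cricket ball
```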

The real thing: Pluto and Charon on 25 and 27 June 2015. Public domain, NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute. Yup, Pluto’s a red planet. Click to enlarge.

On the other hand, JPL officials are fairly sure the risk is minimal. The NASA team under Alan Stern used New Horizons’ long-range imager (LORRI) to look ahead for debris on 22, 23 and 26 June, concluding that the intended path ahead was safe. The Pluto system is in a state of gravitational resonance, which means any debris is expected to be clustered in discrete positions. Mostly.

New Horizons’ track through the Pluto system. Public domain, NASA/JPL.

You’ll notice I haven’t mentioned the term ‘dwarf planet’. That’s because I think it’s a stupid definition. It was voted in by the International Astronomical Union in 2006, on the last day of a conference, when just over 400 delegates out of the 10,000 in the Union remained to vote. Some 237 voted for a resolution defining ‘planet’ in terms that meant Pluto, a lot of other new Kuiper belt objects, and Ceres were ‘dwarf planets’. The nays totalled 157, so the fact is that Pluto was demoted on a majority of 80, in a motion where some 95 percent of members did not vote at all. To me, that’s not particularly valid – and I’m far from the only one to think that. However, despite a meme circulating on Facebook to the contrary, it hasn’t been rescinded.
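If you want to check that arithmetic yourself, it’s easily done – here’s a quick sketch (the 10,000 figure is the Union membership quoted above; abstainers aren’t counted):

```python
# Quick check of the 2006 IAU vote arithmetic quoted above.
votes_for, votes_against, members = 237, 157, 10_000

majority = votes_for - votes_against                    # 80
share_voting = (votes_for + votes_against) / members    # ~0.04

print(f"Majority for the resolution: {majority}")
print(f"Share of members who voted: {share_voting:.1%}")  # about 4%, i.e. 95-plus percent did not vote
```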

Part of the public howl of protest was driven by the fact that Pluto – from its discovery by Clyde Tombaugh in 1930, right up until 2006 – was always the ninth planet. Walt Disney renamed Mickey Mouse’s dog Rover after it. Pluto became iconic – to the people of the mid-twentieth century, the last, lonely world out on the edge of our solar system (probably). It was a social definition. And then suddenly 237 scientists out of 10,000 killed a popular idea that had been integral to society for 76 years.

But in any case, the definition of ‘planet’ on which the IAU voted is a rubbish one. Among other things, it requires the planet to have ‘cleared’ its vicinity of debris. Even Jupiter doesn’t match that, thanks to its Trojan asteroids. And to me, it has a philosophical problem: it’s trapped by the requirement in western thought to compartmentalise – to divide a complex and often smoothly gradated universe into sharply defined categories.

Frequently it’s an ill fit, and the IAU definition of ‘planet’ is no exception. The problem is that the reality of our solar system – particularly as it has unfolded for us since 1992, when the first Kuiper belt object other than Pluto was discovered – clearly defies such classification. Trying to jam its different contents into pre-defined ‘scientific’ categories misleads, because the bits of rock, dust, ice and gases that orbit the Sun in various ways are more complex than this.

More next week.

Copyright © Matthew Wright 2015

Is the APA’s ‘internet gaming disorder’ really a fair label for ordinary gamers?

The American Psychiatric Association recently called for study into a condition they call ‘Internet Gaming Disorder’. My gripe? However much it’s been intellectualised, ‘psychiatry’ is not a science because its diagnoses depend on personal opinion, not on testable (technically, ‘falsifiable’) empirical criteria. Where somebody is obviously in trouble, that’s not a problem. But for normal people who end up labelled ‘faulty’ because their behaviour appears to match whatever society’s latest transient panic happens to be, it is.

Screen shot from id Software’s classic 1992 shooter Wolfenstein 3D. Which wasn’t, actually, in 3D, but hey…

Trust me, I’m a psychologist…

That’s the issue. There are often genuine reasons to be concerned. But social panics are also triggered by nothing more than reaction to change. And all I can see is that the ‘Internet Gaming Disorder’ scale will be turned into yet another intellectualised device for social control by which ‘psychiatrists’ validate their own sense of self-worth at the expense of normal people, this time targeting the behaviour of a generation who spend their time interacting with each other on screen instead of face to face.

Don’t forget, it’s only forty years since the APA tried to classify ‘introversion’ as a disorder.

You can imagine what would have happened if they’d succeeded. Suddenly, introverts – who we know today are a normal part of the human spectrum – would have been told their basic nature was a clinical abnormality. Then they’d be ‘cured’ by relentless assaults on their self-worth and by being forced to spend as much time as possible trying to engage with large groups of people and then told how faulty they were for not coping. After all, it’s ‘normal’ to get energy from socialising in large groups, so just go out and do it, and learn how to make yourself a ‘normal’ person, and it’s your fault if you fail, because it proves you didn’t try hard enough and are personally worthless.

Obviously there are genuine psychiatric illnesses – which are diagnosable and treatable – but I can’t help thinking that others are defined by pop-social criteria, given gloss by the unerring ability humanity has to intellectualise itself into fantasy. This was certainly true in the early-to-mid twentieth century, when ‘psychology’ emerged from a specific German intellectual sub-culture as a reaction to the pop-social sexual mores of the day. This emerging pseudo-science, styling itself a true science (though it was not one, because it failed to meet falsifiability criteria), keyed into a period mind-set that sought to reduce a multi-shaded universe – including the human condition – to arbitrary and polarised categories.

The key false premise that gave ‘psychology’ its power was the supposition that everybody – with the exception of the ‘psychologist’ – was ‘psychologically defective’. Neurotic. This was never questioned. When fed into period conformity to social imperatives, it meant that ‘psychology’ was less a tool for discoveries about the human condition than a means for bullying normal people who didn’t exactly fit narrow and often arbitrarily (and transiently) defined behaviours. That spoke more about the nature of period society and the personal insecurities of the ‘psychologists’ than about human reality.

The concept of ‘psychiatry’ emerged, in part, from the union of this pseudo-scientific illusion with medicine; and I am not sure things have changed – for instance, one available diagnosis today is ‘ODD’ (Oppositional Defiant Disorder), an obvious label with which a ‘psychologist’ can invalidate the last-ditch defence of someone who’s come to them for help and doesn’t submit to their ego and power.

What of the idea that ‘Internet Gaming Disorder’ is worth investigating? In a social sense internet gaming is a specialised framework for interaction – a way in which people, often on different sides of the world, associate with each other. The framework is very specific, and mediated by computer.

To me this is a key issue, because I suspect a lot of gamers are also introverts, and the computer enables them to interact with others without losing energy. Gaming also frames a specific sub-culture, whose members respect the status of achievement on its own terms; the computer enables them to interact, and to have that particular interaction validated by people they respect. Of course this doesn’t describe the whole lives, personalities or social interactions of people who happen to spend time gaming; but validation in various ways is one of the drivers of the human condition; and another is the desire of strangers to validate themselves by taking that away – bullying, which (alas) I think is probably also innate.

That’s why I have alarm bells going when I find the APA trying to call computer gaming a disorder.

Obviously gamers cover a wide spectrum, and no doubt a proportion of those who focus on it will do so excessively, for various reasons – perhaps including trying to get away from being bullied. But in the main, I suspect life is well in hand and gaming is simply a way of socialising via an abstract medium. The problem I have is that the APA’s question risks all gamers being swept up in a catch-all label of ‘disorder’, just as ‘introverts’ nearly were forty years ago, along with left-handers and anybody else who didn’t conform to what was ‘psychologically’ normal.

I should add – I don’t game. I would, if I had the time, the co-ordination skills – and an internet service that had a competitive ping-time. I don’t. But in any event, that’s not the issue I’m concerned with today.

Thoughts?

Copyright © Matthew Wright 2015

Is Russia stirring up the moon landing loon conspiracies?

It seems this week that Russia’s ‘Investigative Committee’ wants an investigation into the US moon landings of 1969–72 – not so much to reveal them as fake, but to find out where missing moon rocks have gone.

Buzz Aldrin on the Moon in July 1969 with the Solar Wind Experiment – a device to measure the wind from the sun. Public domain, NASA.

I know where one is – a scrap weighing less than a gram, which is in the Carter Observatory in Wellington, New Zealand. (‘We don’ want your moonrock, silly English k’nigget. We already got one. It’s ver’ nice.’) However, apparently other small fragments – such as the one in the Rijksmuseum in the Netherlands – have been tested and found to be fake. NASA, it seems, lost track of some of its gifts.

I expect this will fire up the conspiracy camp. You know, the loons who pore over pictures of the lunar expeditions looking to ‘prove’ that NASA and the 400,000 expert professional engineers, scientists, and everybody else in the US who were directly involved in the Apollo project spent billions faking the landings, yet were so incompetent they made kiddie-grade mistakes. For instance, getting the studio lighting wrong or forgetting to put jet-blast spall under the landing motor – none of which were noticed at the time, including by the Soviets, but which are somehow blatantly obvious to the conspiracy theorists.

I mention the Soviets because they lost the moon landing race, big time. And the Cold War was in full swing – prestige was at stake and the whole reason for the race in the first place was to fight that war by abstraction and proxy. If there had been the slightest hint that the Americans had faked anything – well, the ‘gotcha’ from Moscow would have been audible around the world.

Apollo 12 lifting off. The Saturn SIV stage is the one just clear of the tower. Moments after this photo was taken, spacecraft and tower were hit by lightning. Photo: NASA http://www.hq.nasa.gov/ alsj/a12/ ap12-KSC-69PC-672.jpg

As I’ve mentioned before, there WAS a lunar landing conspiracy at the time – but it wasn’t American. It was Soviet. The problem was that, although John F Kennedy threw down the gauntlet in 1961, there was at first no commitment in the Soviet hierarchy to respond. When the Politburo did allow work towards a moon mission, it was late, underfunded, and the effort was split between rival design bureaux, each of which had its own ideas. Still, it’s possible they might have done it – perhaps, at least, been first to orbit the Moon, in 1968 – had Sergei Korolev not died in 1966.

To call Korolev a genius is an understatement. He was a brilliant, brilliant designer and a hands-on engineer, directly responsible for orbiting Sputnik in 1957 and then Vostok – with Yuri Gagarin aboard – in 1961, giving the Soviets a dramatic early lead in the ‘space race’ as a direct result of his personal attention to every bolt, wire, system and joint in the rockets and spacecraft developed by his bureau. Stuff worked because Korolev was tweaking it. And his fundamentals were sound: his Soyuz rocket (née R-7/A1) and Soyuz spacecraft remain in use today – updated, modified and developed, but still his basic design.

Without him, his bureau lost direction. They never did solve problems with their giant N-1 booster. But the pressure was on, and with the Apollo programme back on track by early 1968, the Soviets floated plans to put a manned mission into lunar orbit late that year. The CIA was aware of the plan and tipped off NASA – which prompted the daring Apollo 8 mission, only the second manned Apollo flight, which put Americans into lunar orbit that December. The Soviet effort failed when the N-1 exploded on test launch.

Saturn first-stage F-1 motor firing on test. Public domain, via Wikipedia.

In July 1969 the Soviets tried a last-ditch ploy, despatching a robot probe to return lunar soil to Earth before Apollo 11. It also failed – and once Armstrong, Collins and Aldrin were back on Earth, the Soviets denied they had ever been in the moon race at all. Never. Nix. Not ever.

In fact, they had all the hardware – including a huge lunar roving vehicle, Lunokhod, that they later sent on an unmanned mission. Today their lunar lander – which reached unmanned test-flight stage – is on display in Moscow. The spacesuits used on the ISS today are descendants of the Kretchet design intended for lunar EVA.

And some of the motors built for the ill-fated N-1 programme have been used in (wait for it) American launch vehicles – stored for 30 years and then flown. Some of them blew up, but that doesn’t change the fact that they were originally built to take Soviets to the Moon.

Copyright © Matthew Wright 2015

Has fast food made us lazy cooks? You decide…

I was presented the other day with a slightly disturbing story about an academic (has PhD, works at a university) whose day reportedly begins by slothing out of bed around 11.00 am and ambling to the nearest fried chicken joint, swiftly ingesting 7896 of the 2000 calories a normal adult needs in a day, along with triple the allowance of salt (without counting nitrites).

I was a little surprised – I mean, strokes, heart disease, diabetes and other problems pale into insignificance beside the possibility of being followed around by Stewie Griffin with his tuba:

So how is it that fast food has become so ubiquitous today? It seems to me we need to go back to the industrial revolution – 250-odd years ago now – and its cousin, the agricultural revolution (think Jethro Tull’s seed drill) – to explain it. These shifts eventually solved the eternal human problem: getting enough food. The problem was that food production, in general, also got industrialised and commercialised – and often didn’t end up focussing on what was good for health, but on what was good for profit. That’s true of a lot more than just fast food – but there’s a lot more fast food around of late than there used to be too; and a 2013 WHO report identified deregulation as one of the drivers of a rise in fast food uptake.

Triple cheeseburger by Jpneok, public domain from https://openclipart.org/detail/204444/fast-food-triple-cheeseburger

The way fast food is advertised to us underlines the fact that it’s a commercial product. It’s pushed at us for its taste, for its convenience – and let’s face it, if you’re a harassed parent, coming home tired from work to a house full of kids making noises like steam sirens while bouncing off the walls – isn’t it easier to take the family down town for Yummy Duck Bites and Schlob Burger, or get a pizza delivered, or scoff down that ubiquitous box of pressure-fried Gallus gallus domesticus? What’s more, it all tastes delicious because it’s packed with the things humans are geared to like, because we couldn’t get them easily back in hunter-gatherer days – salts, sugars and fats.

It’s easy to see how it’s become so ubiquitous. Problem is, fast food is also packed with deceptively high numbers of calories (thanks to the fats, mainly). And what happens then? Well, let me go pick up that tuba. That’s quite apart from what fast food does to essential gut bacteria. The take-out lesson? I’m sure it’s OK to eat fast food on a ‘sometimes’ basis. But I doubt it’s good to eat every day. Sure, the ingredients that go in are top-quality vegetables, milk, beef, chicken, and so on – but then they’re processed, filled with chemicals to preserve them, to condition the cooking oil, to help ensure a consistency of product, and so forth.

What do I recommend? I have healthy and fast home-cooked ‘Kiwi bloke’ recipes that nobody in my household other than me eats, which I’ll be happy to share in the comments – ask.

Copyright © Matthew Wright 2015

And now Kiwis are facing a potential mega-quake and tsunami. But of course…

This week’s news that a previously unsuspected magnitude 8+ mega-quake could hit central New Zealand and then douse the place with tsunami isn’t too surprising to me. I wrote the most recent pop-sci book on our earthquakes. It was published by Penguin Random House last year.

Living On Shaky Ground

While I was writing the book I had a chat with a seismologist at the University of Canterbury, who pointed out that New Zealand is staring down the barrel of some fairly large tectonic guns. The big one on land is the Alpine Fault, which ruptures at magnitude 8+ every few hundred years. The last big rupture was around 1717, meaning another is due about now – the probability of it happening before 2100 is around 92 percent.

Another risk factor is the Taupo volcano – another product of tectonic plate collision. This is one of the biggest volcanoes on the planet, and evidence is that a monster eruption about 27,000 years ago threw the world into an ice age. It’s got every potential to wreak similar havoc again – check out Piper Bayard’s awesome novel Firelands for her take on what might happen in the US when Taupo next ‘blows’ the world climate. We won’t mention New Zealand’s likely fate in that scenario…

OK, so I'm a geek. Today anyway. From the left: laptop, i7 4771 desktop, i7 860 desktop.

Me in ‘science writing’ mode. From the left: laptop, i7 4771 desktop, i7 860 desktop.

But New Zealand also faces another major tectonic challenge, the Hikurangi Trench, a subduction zone where the Pacific plate plunges under the Australian, off the coast of the North Island. My contact at Canterbury pointed out that this is the other big gun – a potential 8+ quake followed by tsunami that could wipe out the east coast of the North Island.

That’s where the new study comes in. It’s already known that the Southern Hikurangi Margin – the plate collision between Cook Strait and Cape Turnagain – is locked, meaning strains are building up. When they break, it’s going to be devastating – a quake of magnitude 8.4–8.7, triggering massive onshore destruction from Napier to Blenheim, followed by tsunami. Now, it seems, this region generates such quakes a couple of times a millennium. Two have been identified: one 880–800 years ago, a second 520–470 years ago.

This picture of post-quake Napier isn’t well known; it is from my collection and was published for the first time in the 2006 edition of my book ‘Quake – Hawke’s Bay 1931’.

Uh – yay. On the other hand, it doesn’t really change the risk factors. New Zealand shakes. The end. The issue isn’t worrying – it’s quantifying the risk, which is why work to explore past quakes is so important.

The report also highlights something for me. The discovery that a mega-thrust quake hit central New Zealand somewhere between 1495 and 1545 seems to unravel one mystery that has long puzzled me. At a date usually put at roughly 1460, give or take, New Zealand was riven by a rapid-fire succession of great earthquakes, all thought to be over magnitude 7.5 and most over magnitude 8. They included movement on the Alpine Fault, another movement in Wellington that turned Miramar into a peninsula, and another in Hawke’s Bay where a dramatic down-thrust created the Ahuriri lagoon.

Timing is where things get a bit vague, because the traces of past quakes can only be dated to a broad range of possible years.

The Wellington event was so huge it went down in Maori oral tradition – Haowhenua, the Land Swallower. Why swallower? That was odd, given the quake was an upthrust – but actually, it DID eat land that counted to Maori. Massive tsunami flooded the southern North Island coasts, inundating important gardens near Lake Onoke on the south of the Wairarapa. In short, swallowing the land. I was, I believe, the first one to publish that explanation, not that anybody noticed. But I digress.

The point is that the date-range for the “1460” series overlaps the date range for the newly discovered mega-thrust quake – which included tsunami. And it explains why New Zealand was, apparently, hit by so many large quakes in quick succession. Even if they were not the same event – and, seismologically, they probably weren’t – the way strains and stresses redistribute after a major quake is well known to be liable to trigger another. Is that what actually happened? Research is ongoing. We’ll see.

Copyright © Matthew Wright 2015

Chickenosaurus lives! But should we really play God with genes?

In what has to be one of the biggest ‘ewwww-factor’ experiments in a while, paleontologists at Yale recently tweaked chicken DNA to give the birds toothed jaws, a bit like Velociraptor. Although there was a lot of work involved in finding out which two genes to tamper with, the process apparently didn’t add anything to the chicken genome – it merely switched off protein-inhibitors that stopped existing genes from working.

Think Velociraptors were like Jurassic Park? Think again. They were about the size of a large turkey...and looked like this...

“I used to be a chicken. Now I’m a fake GMO Velociraptor. And I’m MAD!”

The result was dino-jaws instead of a beak. The fact that this could be done has been known since 2011. It’s just – well, the actual doing of it is a bit mad. We don’t know what gene-tampering will produce, and the team who did it were surprised by the extent of the changes they produced – the birds also developed dino-palates.

Still, this is just a lab test. I mean, what could possibly go wrong? Uh…yah…

It’s like this, folks. Sure, science is cool. We wouldn’t have all the things we enjoy today without it. But sometimes, it goes overboard. And to me, this is one of those moments. OK, we can do it – but should we play God? We don’t actually know the consequences, and it worries me that we might find out the hard way.

I’m not talking horror movies – I doubt we’ll end up with Chickenosaurs lurking in dark corners, waiting to leap out on hapless humans, Jurassic Franchise style. But genetics can so often throw curve balls. What else does that genetic alteration do? We don’t know – and when we push the edges, when we industrialise science we don’t fully understand, bad shit happens, usually out of left field. The words ‘thalidomide’ (‘stops morning sickness’) and ‘radium’ (‘go on, lick the brush before you hand-paint the watch dial’), along with one or two other tragic miscalculations, spring to mind.

Tyrannosaur jaws. Makes Jaws look like Mr Gummy. Photo I took hand-held at 1/25, ISO 1600, f 3.5. Just saying. Click to enlarge.

Plus side (a very, very small plus side) is that it looks like some science has come out of the experiment – specifically, how birds developed beaks rather than the toothed jaws of other dinosaurs. But that particular discovery, surely, didn’t need us to make a mutant Dinochicken to drive it home. We already know that birds didn’t ‘evolve from’ dinosaurs. They are dinosaurs; a specialist flying variety, but dinosaurs through and through. Just this year, paleontologists pushed back the likely origin of birds, meaning they lived alongside their cousins for much of the Jurassic and Cretaceous periods – underscoring the fact that they were simply another variety, rather than descendants, of the dinosaur family.

The compelling picture has long since emerged showing how this all worked. Dinosaurs first emerged during the Triassic period. They differed from mammals and lizards, and though initially they were lizard-like (as were mammals – think ‘synapsids’), dinosaurs developed their own unique form over time. They had pneumatised bones; many appear to have had feathers for insulation and display; they seem to have been warm-blooded; they laid eggs in nests and they slept with their heads tucked under one arm. Many were bipedal, their mostly horizontal bodies balanced by long tails; and we know their arms were feathered – becoming wings in the flying variety.

Guanlong Wucaii – an early Tyrannosaur from China. Photo I took hand-held at 1/3 second exposure, ISO 800, f 5.6. I held my breath.

Many dinosaur families, we now think, became progressively more like modern birds in appearance as time went on. By the Cretaceous period, many dinosaur types – certainly to judge by their fossils – couldn’t fly, but they were bipedal, glossy feathered and brightly coloured. Troodonts, for instance. We also think some had wattles, like turkeys. The feathered varieties confirmed so far include many members of the Tyrannosaur family, not all of which were the size of the one we know and love. Fact is that few dinosaurs were huge, and many species underwent a dramatic shrinking during the Cretaceous period.

Were we suddenly cast into a late Cretaceous forest, we’d find ourselves surrounded by dinosaurs – which to our eyes would look like funny (and quite small) ground-living ‘pseudo-birds’ with toothed ‘beak-like’ snouts. Other dinosaurs – recognisable to us as true birds – might also be in evidence. Birds, themselves, are thought to have lost their teeth and developed beaks around 116 million years ago, though some, such as Hesperornis, still had teeth more recently. Early birds, we think, were a bit rubbish at flying.

I’m on the right taking an SLR selfie while being mobbed by dinosaurs, thanks to the wonders of green screen.

When the K-T extinction event hit the planet 65 million years ago, it seems, flying dinosaurs (as in, birds) managed to survive it. They were then able to radiate out into new environmental niches, left empty by the extinction. On some of the continents, mammals also filled the niches left empty by dinosaurs. But not all.

Offshore islands – such as the New Zealand archipelago – retained their surviving dinosaur biota. And it’s intriguing that the larger New Zealand varieties – such as the moa (Dinornis) – have skeletal features and feather structure usually associated with ‘archaic’ bird fossils. They survived right up into the last millennium – succumbing, finally, when New Zealand became the last large habitable land mass on the planet to be settled by humans. And why did they die out? Alas, to judge by the industrial-scale oven complexes the Polynesian settlers built at river mouths, moa were delicious.

All of this was known well before we tried playing God with chicken genes. OK – the experiment can’t be undone. But do we need to do it again? I think not.

Copyright © Matthew Wright 2015

3D printed steak chips? It’s enough to make me go all hippy and vegetarian…

Human inventiveness seems limitless these days, so I wasn’t surprised to discover the other week that food technologists have been experimenting with 3D printed meat – currently produced, at astronomical expense, in the shape of chips.

Gallus gallus domesticus on Rarotonga, looking very much like the Red Jungle Fowl (Gallus gallus).

I’ll have my chicken free-range and wild, thanks…

Artificial food has been a long-standing SF staple – brilliantly played by Arthur C. Clarke in his hilarious 1961 satire ‘Food Of The Gods’. All food in this future was synthesised to the point where the very idea of eating something once alive had become offensive. Even the word ‘carnivore’ had to be spelt out, lest it nauseate listeners, and synthetic meat had names unassociated with animals. In classic Clarke fashion, of course, there was a twist. Food synthesisers could produce anything. And there was this synth-meat called ‘Ambrosia Plus’, which sold like hotcakes until a rival company found out what the prototype was… (I won’t spoil the fun other than to point out that there’s a word for a specific sort of meat-eating starting with ‘c’, and it isn’t ‘carnivore’.)

In the real world, 3D printed meat isn’t synthetic – it’s made of actual animal muscle cells which are artificially cultured and then sprayed, in layers, to produce the product. Currently it’s a lab technique and the obvious challenge for its proponents is to find ways of industrialising it. Also of getting customers past the ‘ewwww’ factor of eating animal tissue bred in a petri dish and vomited into chip shape through a nozzle.

To my mind the key challenge is identifying the total energy requirement – printed meat may NOT be as efficient as current ‘natural’ methods of getting meat to your dinner table, where a large part of the energy comes from sunlight, via a grassy paddock and the digestive systems of ruminants.

Mercifully, we haven’t been told ‘This Is The Way ALL Meat Will Be Eaten In Future’, ‘The Future Is Now’ and other such dribble. Predictions of that sort pivot off the ‘recency effect’, by which whatever just happened is seen as far more important than it really is when set against the wider span of history. We fall into that trap quite often – often, these days, over products launched on the back of commercial ambition. What really happens is that the ‘way of the future’ idea joins a host of others. All of these then blend together and react with society in ways that eventually – and usually generationally – produces changes, but inevitably not the ones predicted by the ‘Future Is Here’ brigade.

In one of the ironies of the way we usually imagine our future, things that do dramatically change the way we live – such as the internet – are often neither seen coming nor touted as game-changers. Certainly not in the way that food pills, flying cars and the cashless society have been.

As for artificial meat – well, I expect that if – IF – it can be industrialised, it’ll find a home in hamburger patties. But there seems little chance of it being mistaken for the real deal, still less supplanting a delicious slab of dead cow – sorry, seared sirloin – on the dinner table.

Copyright © Matthew Wright 2015