Is high-tech REALLY indistinguishable from magic?

A fellow blogger asked for help the other week: what was the specific source – by page reference – of Arthur C. Clarke’s ‘Third Law’?

It was first published in his book Profiles of the Future – which was variously issued from 1958. My edition is the revised version published by Pan Books of London in 1973. And on p. 39 of that edition, as a footnote, Clarke outlines the Law: ‘Any sufficiently advanced technology is indistinguishable from magic’.

It was a throw-away point in a footnote to a lengthy chapter discussing the way conservative twentieth century science usually fails to admit to progress.

Fair point in that context, but I couldn’t help thinking of Europe’s history of exploration around the globe, which was built around wowing locals with techno-trickery and then bashing them with it. Toledo steel was one of several advantages with which Hernán Cortés and subsequent marauders knocked over the kingdoms of Middle and South America in the sixteenth century.

It was a disparity that became extreme as Europe’s technical base improved, leading – ultimately – to the appalling massacre in 1893 of spear-wielding Matabele warriors by a handful of Cecil Rhodes’ Maxim gunners. ‘Whatever happens/we have got/the Maxim Gun/and they have not,’ Hilaire Belloc quipped in ‘The Modern Traveller’ a few years later.

The conceit of the age – echoed in Clarke’s Law – was that the indigenous peoples who saw European technology looked on it as magic. And it’s true to the extent that, if we lack any concept of the principle behind something, it may as well be magic. The notion of TV, for instance, was absolutely magical before the discovery of electromagnetic transmission; and even a top scientist from (let’s say) the late seventeenth century would have little chance of comprehending one, if they saw it. But I bet that if the principle was explained, they’d soon realise it wasn’t magic at all – just following a principle not yet known.

The same’s true, I think, of the way Europe’s technology was received across the world as it spread during the age of expansion. The vocabulary of magic was sometimes used by indigenous peoples seeing the British demonstrate – usually – firearms. But that didn’t betray a lack of understanding of the foreign technical concepts. The actual problem was that they didn’t initially have the words. The best evidence I have for this is in the collision between industrialising Britain and Maori in New Zealand, during the early nineteenth century.

Maori picked up British industrial products very quickly from the 1810s, including armaments. These were acculturated – drawn into Maori systems of tikanga (culture), in part by co-opting words already in use. The musket became the ‘pu’, for instance – a word for a blowpipe. But Maori very well understood the principles – certainly going out of their way to learn about armaments and warfare. Some rangatira (chiefs) even made the journey to London to learn more, among them Hongi Hika, who visited the arsenal at Woolwich in 1821 and learned of musket-age warfare and defences; and Te Rauparaha, who was taught about trench warfare in Sydney in 1830.

For ‘contact-age’ Maori, British industrial technology was not ‘magic’ at all – it was something to be investigated, understood and co-opted for use in New Zealand. And I suspect that’s how the same technology was also received by indigenous peoples elsewhere.

I don’t know whether Clarke thought of it that way; I suspect his targets, more particularly, were fuddy-duddies in his own establishment who wouldn’t accept that there might be new scientific principles.

Is there a technology you regard as potentially ‘magical’ to others?

Copyright © Matthew Wright 2014

Beware the next Carrington storm – a Q&A wrap-up

After last week’s post on Carrington storms – solar events able to do large-scale damage to anything electrical, especially power grids – I fielded a few questions that deserved a follow-up. And I had some new ones of my own…

Does the whole Earth get hit?
The issue isn’t the Coronal Mass Ejection (CME) that goes with the flare so much as the magnetic storm the CME provokes when it hits us. This affects the whole Earth in one hit: the Sun-side of Earth’s magnetic field is compressed, the shadow side is stretched, and then the whole field snaps back.

How powerful are these geomagnetic storms?
It depends on the CME, which – don’t forget – is super-hot plasma. The biggest can mass up to 100,000,000 tonnes, moving at up to 1000 km/second. These can really bang into our magnetic field. The current the geomagnetic storm induces in conductive material on Earth varies with the speed of the field movement and with the scale of the conductive material, which acts like an aerial: the more conductive material, the higher the voltage and current induced in it. That’s why the power grid is vulnerable – transmission lines act as aerials and transformers have copper windings.

A large solar flare observed on 8 September 2010 by NASA’s Solar Dynamics Observatory. Public Domain, NASA.

Can the excess voltages be calculated?
The voltage generated in a conductor depends on the rate of change of magnetic flux and on the direction of the field lines relative to the conductor. In a closed loop like a transformer winding, it can be calculated from Faraday’s Law of Induction (later formalised by James Clerk Maxwell), which states that the line integral of the electric field around the loop equals the negative of the rate of change of magnetic flux through it – a bit of math that quantifies the result when direction and intensity are both changing.
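As a back-of-envelope illustration, here’s a minimal Python sketch of that law for a simple closed loop. The numbers are hypothetical, order-of-magnitude figures of my own, not measurements of any real storm:

```python
def induced_emf(dB_dt, area_m2, turns=1):
    """Faraday's law for a uniform field perpendicular to a flat loop:
    EMF = -N * dPhi/dt, with flux Phi = B * A. Returns volts."""
    return -turns * dB_dt * area_m2

# Hypothetical storm figures: the field swings by 500 nT over 60 seconds.
dB_dt = 500e-9 / 60                       # tesla per second
loop = induced_emf(dB_dt, area_m2=1.0)    # a single 1 m^2 loop: negligible
winding = induced_emf(dB_dt, area_m2=1.0, turns=100_000)  # transformer-scale winding
print(f"single loop: {loop:.1e} V, 100k-turn winding: {winding * 1000:.3f} mV")
```

Even a hundred-thousand-turn winding sees only millivolts from so slow a field change – it’s the kilometres-long conductors, not small loops, that turn a geomagnetic storm into damaging voltages.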

Will a geomagnetic storm burn out all power grids?
It depends on the loading of the grid and on the intensity of the storm, which will differ from place to place because the rate of change and flux direction keep changing. A heavily loaded power grid is more vulnerable because it’s operating closer to its designed tolerances. Needless to say, in this age of engineering to cost, some grids are fully loaded in normal operation. That’s why even the modest geomagnetic storms of the last few decades have sometimes generated localised blackouts – some grids were vulnerable when others weren’t. With a big enough geomagnetic storm, all power grids would be blown out.

OK, so I’m a geek. Today anyway. From the left: laptop, i7 4771 desktop, i7 860 desktop.

What about domestic appliances – computers, hand-helds and so forth?
It depends on the intensity of the storm. Anything plugged into the mains would suffer a voltage spike. Your stove or kettle wouldn’t notice it. Your computer might lock up. A re-boot might fix it, if the power stayed on. Or gear might be physically damaged. Newer devices are more vulnerable than old, partly because the older stuff was over-engineered. Anything with looped wire in it, like an electric motor – which includes DVD drives – might be at risk. Just about everything relies on low-voltage CPUs these days, including cars, and it’s possible a really big geomagnetic storm would damage some of these. The effects probably wouldn’t be consistent across all gear because there are so many variables in electrical hardware, including whether it’s operating or not when the storm hits.

So some stuff, like the old Morrie Thou every Kiwi wishes they never got rid of, would still work and we’d otherwise mostly be OK?
Don’t forget, there won’t be any mains power, possibly not for months. No water pumps. No sewerage pumps. No heat. No light. No cooking. No battery charging. Hospitals out of action just when needed. Shall I go on?

Please don’t. Will the storm induce current in anything else?
Gas and oil pipelines. Older plumbing. They’re metal too.

Sounds scary. Is there anything we can do?
NASA has satellites on solar weather watch. They’re also implementing Solar Shield, an early-warning project. Whether anybody pays attention to warnings, or even hears them, is another matter. Even if the warning’s broadcast, who listens to dumb science stuff when the rugby news is about to start? But if you hear a warning, turn everything off, keep things unplugged, get your emergency kit stocked with food and water, buy a can opener, dig a long drop, and so on.

Is there a plus side?
We’d get amazing aurora displays towards the equator. Would that compensate for the damage? Uh…no.


Apocalypse now: why we must fear a Carrington storm

On 1 September 1859, British astronomer Richard Carrington noticed something unusual on the Sun: a flare, larger than anything he’d seen before.

Solar flare of 16 April 2012, captured by NASA’s Solar Dynamics Observatory. Image is red because it was captured at 304 angstroms. (NASA/SDO, public domain).

Less than a day later, Earth lit up. Aurorae erupted as far south as the Caribbean. All hell broke loose in telegraph systems across the world. Lines began spraying sparks. Operators received electric shocks. Some telegraphs kept working with their batteries disconnected.

Later, we figured it out. The Sun ordinarily blasts Earth with a barrage of fast-moving protons and electrons: the solar wind. Most of it is deflected by the Earth’s magnetic field, and some particles are trapped by the field, forming the Van Allen radiation belts.

Flares add to this in two ways. The first is through intense electromagnetic radiation – a mix of X-ray frequencies produced by Bremsstrahlung, coupled with enhanced broad-spectrum radiation as a result of synchrotron effects – both of them slightly abstruse results of relativistic physics. This strikes Earth, on average, 499 seconds after a major flare erupts in our direction. We’re safe on the surface from the effects; the Earth’s magnetic field and atmosphere stop even radiation on a Carrington scale. In 1859, nobody noticed. But today, astronauts on the ISS wouldn’t be safe. Nor would our satellites. So aside from the human tragedy unfolding in orbit, we’d lose everything associated with satellites – GPS, transaction systems, weather, Google Earth updates and everything else. Gone.

Buzz Aldrin on the Moon in July 1969 with the Solar Wind Experiment – a device to measure the wind from the Sun. (NASA/public domain).

It gets worse. Some flares also emit a mass of charged particles, known as a CME (Coronal Mass Ejection). Seen from the Sun, Earth is a tiny target in the sky. But sometimes we are in the way, as in 1859. The problem is that a CME hitting Earth’s magnetic field compresses it. Then the CME passes, whereupon the Earth’s magnetic field bounces back.

The bad juju is the oscillation, which causes induction on a huge scale. Induction is a principle of electromagnetics, discovered by Michael Faraday in 1831 when he found that moving a conductor through a magnetic field generates electricity in the conductor for as long as it moves. It also works vice versa – a moving magnetic field induces electricity in a stationary conductor. And electricity can be used to create magnetism. We’ve been able to exploit the effect in all sorts of ways. It’s how electric motors and loudspeakers work, for instance. Also radio, TV, bluetooth, ‘wireless’ internet broadband. Actually, pretty much everything. When inducing an electric current with magnetism, the voltage generated depends on (a) the size of the conductor, and (b) the rate of change of the magnetic flux. Maxwell’s equations apply. The longer the cable, the more voltage induced along it. That’s how aerials work – like the one in your cellphone, ‘wireless’ router, laptop – and so the list goes on.

Now scale it up. Earth’s magnetic field moves, generating electrical current in all conductive material. Zzzzzzt! That’s why so much current was generated down telegraph lines back in 1859 – they were immense aerials.
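The telegraph-line effect can be sketched with the standard first approximation used for geomagnetically induced currents: the end-to-end voltage is the geoelectric field times the line length. The numbers below are hypothetical, order-of-magnitude figures, not measurements from 1859:

```python
def induced_line_voltage(e_field_v_per_km, line_length_km):
    """First approximation for geomagnetically induced voltage on a long
    conductor: V = E * L, a uniform geoelectric field E (V/km) taken
    along a line of length L (km)."""
    return e_field_v_per_km * line_length_km

# Severe storms are often quoted at a few volts per kilometre (assumed here).
v_telegraph = induced_line_voltage(5.0, 200)    # a 200 km telegraph line
v_grid = induced_line_voltage(5.0, 1000)        # a 1000 km transmission corridor
print(f"telegraph line: {v_telegraph:.0f} V, grid corridor: {v_grid:.0f} V")
```

A few volts per kilometre is harmless across your phone charger; integrated along hundreds of kilometres of wire it becomes thousands of volts – which is why the longest conductors suffer first.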

Geothermal power station at Wairakei, New Zealand – generating up to 13 percent of the North Island’s needs, with techniques developed right here in New Zealand. Note the power lines – vulnerable to induced voltage in a Carrington event.

Fast forward to today. Heavy-duty devices like a toaster or kettle don’t contain enough conductive material for a CME event to induce damaging voltages in them, and that’s true of most appliances – though your phone or computer might be damaged, because microprocessor chips and hard drives are vulnerable to very small fluctuations. Personally, if I knew a Carrington storm was coming, I’d unplug my computer at the wall (the power cable acts as an aerial). But none of it will work afterwards anyway. Why? No mains power. That’s the problem – the power grid. Those 220,000-volt lines are plenty big enough to suffer colossal induced voltages, as are the cable windings inside the transformers that handle them. Power grids around the world go boom.

Yes, we can rebuild the system. Eventually. Estimates suggest a minimum of five months in the UK, for instance, to get enough transformers back on line. Always assuming they were available, which they might not be if every other country in the world also wanted whatever was in stock. In any case, the crisis starts within hours. Modern cities rely on electrically pumped water. Feeling thirsty? Maybe you’re lucky enough to live near a river. You struggle through crowds dipping water. Struggle home with a pan of muddy liquid. No power – how do you boil it? You have a barbecue. What happens when the gas runs out?

Now think about everything that relies on electrically pumped water. Nuclear power stations.  Their diesel generators are not designed to run for weeks or months. Think Fukushima. Over and over. I am SO GLAD I live in nuclear-free New Zealand.

This isn’t speculation. A CME-driven grid burn-out already happened to Quebec in 1989. Luckily that solar storm wasn’t colossal. Studies suggest that 1859-scale storms occur every 500 years or so, but we’re learning about the Sun all the time, and that figure may change. We had near-misses from dangerous CMEs in 2012 and earlier this year. We’re vulnerable.

A CME might not take down the whole planet. All depends on its size. But it could still do colossal damage. A study in 2013 put the potential cost of another Carrington storm at $US2,600,000,000,000. If you stacked 2.6 trillion US $1 notes, one on top of another, the pile would be 291,200 km tall – a shade over 75 percent of the average distance to the Moon. That’s without considering the human cost. But there are ways to ameliorate the issue, including shutting down the grid and disconnecting things if we get warning. If. The take-home lesson? Remember the Carrington storm. Fear it.
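The banknote stack is easy to check. A US note is roughly 0.11 mm thick – the thickness the figure above implies, and close to commonly quoted values:

```python
notes = 2.6e12                  # $2.6 trillion in $1 notes
thickness_m = 0.112e-3          # ~0.11 mm per note (assumed)
moon_km = 384_400               # average Earth-Moon distance, km

stack_km = notes * thickness_m / 1000
print(f"stack: {stack_km:,.0f} km ({stack_km / moon_km:.0%} of the way to the Moon)")
```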

If you want to read about how we might cope after a big CME, check out the novels by New Zealand author Bev Robitai: Sunstrike and Sunstrike: The Journey Home.

 

Fringe thinking fruit-loops or just misunderstood?

I am often bemused at the way some people seem to think. Particularly those who advocate what we might call ‘fringe’ theories.

I took this photo of the Moeraki boulders in 2007. The fact that they are not perfect spheres is evident.

Moeraki boulders, north of Dunedin. It’s been argued that they are weights used by Chinese sailors to raise sail. As I know their natural geological origin, that’s not a theory I believe myself, but hey…

Such theories are often portrayed in pseudo-scientific terms: there is a hypothesis; then comes the apparent basis for it, frequently explicitly titled ‘the evidence’ or ‘the facts’; and finally the fringe thinker tells us that this evidence therefore proves the proposal. QED.

All of which sounds suitably watertight, except that – every time – the connection between the hypothesis and the evidence offered to support it is non-existent by actual scientific measure. Or the evidence is presented without proper context.

Some years ago I was asked to review a book which hypothesised that a Chinese civilisation had existed in New Zealand before what they called ‘Maori’ arrived. (I think they mean ‘Polynesians’, but hey…)

This Chinese hypothesis stood against orthodox archaeology which discredited the notion of a ‘pre-Maori’ settlement as early as 1923, and has since shown that New Zealand was settled by Polynesians around 1280 AD. They were the first humans to ever walk this land. Their Polynesian settler culture, later, developed into a distinct form whose people called themselves Maori. In other words, the Maori never ‘arrived’ – they were indigenous to New Zealand.

This picture has been built from a multi-disciplinary approach: archaeology, linguistics, genetic analysis, and the available oral record. Data from all these different forms of scholarship fits together. It is also consistent with the wider picture of how the South Pacific was settled, including the places the Polynesian settlers came from.

Nonetheless, that didn’t stop someone touring the South Island looking for ‘facts’ to ‘prove’ that a Chinese civilisation had been thriving here before they were (inevitably) conquered by arriving Maori. This ‘evidence’ was packed off to the Rafter Radiocarbon Laboratory in Gracefield, Lower Hutt, for carbon dating. And sure enough, it was of suitable age. Proof, of course, that the hypothesis had been ‘scientifically’ proven. Aha! QED.

Except, of course, it wasn’t proof at all. Like any good journalist I rang the head of the lab and discovered that they’d been given some bagged samples of debris, which they were asked to test. They did, and provided the answer without comment. The problem was that the material had been provided without context. This meant the results were scientifically meaningless.

I’m contemplating writing a book myself on the pseudo-science phenomenon with its hilarious syllogisms and wonderful exploration of every logical fallacy so far discovered. How do these crazy ideas get such traction? Why do they seem to appeal more than the obvious science?

Would anybody be interested if I wrote something on this whole intriguing phenomenon?



The paradox of Europe’s high-fat, low heart-disease diets

I am always fascinated by the way science occasionally comes up with ‘insoluble questions’ or ‘paradoxes’. After a while, these tricky queries go away because, it turns out, everybody was barking up a tree to which they had been led by an expert whose ideas had captured peer and public attention.

The Rue de Lafayette one night in 2004

Photo I took of the Rue de Lafayette in central Paris. I scoffed as much high-fat French cuisine as I could get down this boulevard. And it was delicious.

The big one, these days, is the link between high cholesterol and heart disease.  This has been dogma for decades. After the Second World War, US scientists theorised that saturated fats contributed to high cholesterol, hence clogged arteries, and therefore caused heart disease. The idea was enshrined in a US Department of Agriculture guideline in 1980.

Low fat, it seemed, was the way ahead – and it was embraced by the food industry in the US, followed by large parts of the rest of the western world.

Except Europe. They didn’t much change – and traditional French, German and Italian cuisine is awash with saturated fats and high-cholesterol foods. Yet they suffer less heart disease and are less obese than Americans. What’s more, since 1980 obesity has become a major issue in the United States and other countries that have followed the US low-fat lead, such as New Zealand.

A paradox! Something science can’t explain. Or is it?

The problem is that research often tests only what can be funded, something often framed by commercial priorities. This framework is further shaped by one of the philosophical flaws of western rational thinking: the notion that complex questions can eventually be reduced to single-cause questions and answers.

Reality is far less co-operative. The real world isn’t black-and-white. It’s not even shades of grey. It’s filled with mathematically complex systems that can sometimes settle into states of meta-stability, or which appear to present superficial patterns to initial human observation. An observation framed by the innate human tendency to see patterns in the first instance.

For me, from my philosophical perspective, it’s intriguing that recent research suggests the link between saturated fat and ischemic (blood-flow related) heart disease is more tenuous than thought. Certainly it’s been well accepted – and was, even fifty years ago when the low-fat message was being developed – that types of cholesterol are utterly vital. If you had none at all in your system, you’d die, because it plays a crucial role in human biochemistry on a number of levels. Cholesterol even makes it possible for you to synthesise Vitamin D when exposed to sunlight. It’s so vital, in fact, that the body produces its own – your liver actually makes it, for these reasons.

As I understand it, recent studies suggest that the effort to diagnose and fix the problem of ‘heart attacks’ based on a simplistic mid-twentieth century premise – something picked up by much of western society as dogma – has been one of the factors implicated in a new epidemic of health problems. There is evidence that the current epidemic of diabetes (especially Type 2) and other diseases is one symptom of the way carbohydrates were substituted for fatty foods a generation ago, and of the way food manufacturers also compensated for a reduction in saturated fats by adding sugar or artificial sweeteners. Use of corn syrup in the US, for example, is up by 198 percent on 1970 figures.

I’m not a medical doctor. And from the scientific perspective all this demands testing. But the intellectual mechanisms behind this picture seem obvious to me from the principles of logic and philosophy – I learned the latter, incidentally, at post-grad level from Peter Munz, one of only two students of both Karl Popper (who formalised modern scientific method) and Ludwig Wittgenstein (who theorised that language distorts understanding). I am in no doubt that language alone cannot convey pure concept; and I think the onus is on us to extend our understanding through careful reason – which includes being reasonable.

What am I getting at? Start with a premise and an if-then chain of reasoning, and you can build a compelling argument that is watertight of itself – but it doesn’t mean the answer is right. Data may be incomplete; or the interplay of possibilities may not be fully considered.

What follows? A human failing – self-evident smugness, pride in the ‘discovery’, followed by over-compensation that reverses the old thinking without properly considering the lateral issues. Why? Because very few people are equipped to think ‘sideways’, and scientists aren’t exceptions.

Which would be fine if it was confined to academic papers. But it isn’t. Is it.



A lament to a past that might have been but never was

Conventional wisdom pins the invention of agriculture down to the ‘fertile crescent’ of the Middle East. Possibly starting in Chogha Golan some 11,700 years before the present.

A 1905 map showing Europe at the height of the last glaciation, with modern names overlaid. Public domain.

This was where humanity started on its journey to the current world of climate change, extinctions, pollution and over-consumption. However, new research suggests agriculture was also invented much earlier by the Gravettian culture who flourished during an inter-glacial period, around what is now the Black Sea, maybe 33,000 years ago. Humans around this time also domesticated dogs – the oldest evidence has been found in Belgium, dated 32,000 years before the present.

That interglacial was apparently brought to a sharp end when New Zealand’s Taupo super-volcano exploded and knocked the world back into a new sequence of Ice Ages, also apparently nipping the agricultural revolution in the bud.

But suppose it hadn’t – that the climate had stayed warm. How would the world be today, 33,000 years after the agricultural revolution instead of about 11,000 or 12,000? There was nothing inevitable about the way technology emerged. If you look at general tech – by which I mean everything from the energy harnessed to the things people had in their homes, like combs, pots, pans and so forth – we find little real difference between (say) the Roman period and the Medieval period.

The Oruanui eruption, Taupo, 26,500 BP. From http://en.wikipedia.org/wiki/File:Taupo_2.png

A lot had to do with energy sources – which were limited to wind, fire, falling water, and human and animal power. Even the invention of gunpowder did not much change the calculation: it was not until steam came along that things took off.

The industrial revolution was the product of a unique confluence that combined the thinking of the ‘age of reason’ with a climatic downturn that seemed to prod people into innovation, financed by a rising band of new-rich Englishmen who’d made their fortunes on Caribbean sugar and had money to burn.

Don’t forget – this was partly a result of chance. The Chinese never industrialised despite being just as smart, just as resourceful, and having similar opportunities. The Romans didn’t, either, earlier on, though they had a society as complex and urbanised as our modern one.

The point being that our alternative Gravettian timeline might have rolled along with what we might call the ‘Roman/Medieval’ level, forever. Or they might have industrialised. Steam engines and a moon programme 28,000 years ago? Why not?

There are other dimensions, too. Back then, Neanderthals were alive, well and living in Gibraltar. Sea levels differed – anybody heard of ‘Doggerland’? Or ‘Sahul’?

Whichever way things went, odds are that if the glaciations hadn’t done for that agricultural revolution 33,000 years ago, we’d be rag-tag bands back in the stone age again by now – this time without easily-scoopable fossil fuels and metals. Pessimistic, but when you look at the way the world’s going now, where else are we going to end up? We lost the space dream, and we’re busy smashing each other and using the resources we’ve got as if there’s no tomorrow. Which there won’t be, if this carries on.

Do you think the Gravettian world might have been different?



Busy busy busy busy…with science!

Last year I signed a contract with Penguin Random House to write a science book on a subject close to the hearts of everybody around the Pacific Rim.

OK, I’m a geek. I have three computers (temporarily) on my desk – from the left: laptop, i7 4771 desktop, i7 860 desktop – with ‘2001-esque’ wallpaper. Headphones by Sennheiser deliver Nightwish at high volume.

A science book? I’m known as a historian. And I can legitimately call myself one if I want – I have post-graduate academic qualifications in the field. Indeed, the Royal Historical Society, based at University College London, elected me a Fellow on the merit of my contribution. Which I very much appreciate: it’s one of the highest recognitions of historical scholarship worldwide.

However, I don’t label myself ‘a historian’. Nor is it my sole interest or qualification; I spent longer learning music, formally, than history – and my home field has always been physics. I began learning it aged 4, as I learned to read. Seriously. When I was 16 I won a regional science contest prize for an entry on Einsteinian physics and black holes, which I hadn’t learned at school – I had to read the papers and then deduce the math myself, without help. (I am not Sheldon…really…)

What all this adds up to is an interest in understanding stuff – in seeing the shapes and patterns and inter-relationships between things and fields. And so – a book on science. Time was tight, but I wouldn’t have agreed to the contract if I thought quality might be affected. All writing has to be fast and good. If you’ve ever been a journalist (another of my jags) you have no option. The key is having writing as second nature – and planning. Good plans also have built-in capacity to adapt to circumstance, which meant that one weekend I had to sit down with a pile of science papers and:

1. Read those science papers. These included content such as: “Our estimate based on the seismic moment equation of Aki & Richards (2002, p. 48) (M0 = μ × D × RA; where M0 is seismic moment, μ is the rigidity modulus, D is fault plane displacement and RA is rupture area).”

2. Write a draft that drew from this and a lot of other stuff, in English pitched for a general reading audience. I did end up writing occasional sentences like: “This is known as the phase velocity, and is determined by the equation v = √(g × d), where v is the velocity of the wave, d is the depth of water, and g is the acceleration of gravity.” No other way of explaining fluid dynamics, you see… and well, this is science!

3. Revise that draft to clean up the wording. Final word count added to the MS in this 48-hour burst? A shade over 7000. That’s researched and mostly finished for publication. Think about it.
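Those two formulas are simple enough to run. Here’s a hedged Python sketch with illustrative numbers of my own (not figures from the book):

```python
import math

def seismic_moment(mu, displacement_m, rupture_area_m2):
    """Aki & Richards: M0 = mu * D * RA - rigidity modulus (Pa) times
    fault-plane displacement (m) times rupture area (m^2), in N*m."""
    return mu * displacement_m * rupture_area_m2

def phase_velocity(depth_m, g=9.81):
    """Shallow-water (tsunami) phase velocity: v = sqrt(g * d), in m/s."""
    return math.sqrt(g * depth_m)

# Hypothetical quake: crustal rigidity ~3e10 Pa, 2 m of slip on a 50 x 20 km fault.
m0 = seismic_moment(3e10, 2.0, 50e3 * 20e3)
mw = (2 / 3) * (math.log10(m0) - 9.1)   # Hanks-Kanamori moment magnitude
v = phase_velocity(4000)                 # tsunami crossing 4000 m of open ocean
print(f"M0 = {m0:.1e} N*m, Mw = {mw:.1f}, tsunami speed = {v:.0f} m/s")
```

Which is rather the point of the exercise: a page of seismology shorthand boils down to a couple of multiplications a general reader can follow.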

What got sacrificed was social media – that week and most others. I kept this blog going because I’d stacked posts. I’ll be back full force. Soon. What’s more, I’m going to share how to write well, accurately and quickly. There is, dare I say, a science to it. More soon.

The book is already being promoted on Random’s website. Check it out.

Science! A good word, that. Sort of thing the late Magnus Pyke might say. Science!

