Is the APA’s ‘internet gaming disorder’ really a fair label for ordinary gamers?

The American Psychiatric Association recently called for study into a condition they call ‘Internet Gaming Disorder’. My gripe? However much it’s been intellectualised, ‘psychiatry’ is not a science because its diagnoses depend on personal opinion, not on testable (technically, ‘falsifiable’) empirical criteria. Where somebody is obviously in trouble, that’s not a problem. But for normal people who end up labelled ‘faulty’ because their behaviour appears to match whatever society’s latest transient panic happens to be, it is.

Screenshot from id Software's classic 1992 shooter Wolfenstein 3D. Which wasn't, actually, in 3D, but hey...

Trust me, I’m a psychologist…

That’s the issue. There are often genuine reasons to be concerned. But social panics are also triggered by nothing more than reaction to change. And all I can see is that the ‘Internet Gaming Disorder’ scale will be turned into yet another intellectualised device for social control by which ‘psychiatrists’ validate their own sense of self-worth at the expense of normal people, this time targeting the behaviour of a generation who spend their time interacting with each other on screen instead of face to face.

Don’t forget, it’s only forty years since the APA tried to classify ‘introversion’ as a disorder.

You can imagine what would have happened if they’d succeeded. Suddenly, introverts – who we know today are a normal part of the human spectrum – would have been told their basic nature was a clinical abnormality. Then they’d be ‘cured’ by relentless assaults on their self-worth and by being forced to spend as much time as possible trying to engage with large groups of people and then told how faulty they were for not coping. After all, it’s ‘normal’ to get energy from socialising in large groups, so just go out and do it, and learn how to make yourself a ‘normal’ person, and it’s your fault if you fail, because it proves you didn’t try hard enough and are personally worthless.

Obviously there are genuine psychiatric illnesses – which are diagnosable and treatable – but I can’t help thinking that others are defined by pop-social criteria, given gloss by humanity’s unerring ability to intellectualise itself into fantasy. This was certainly true in the early-to-mid twentieth century, when ‘psychology’ emerged from a specific German intellectual sub-culture as a reaction to the pop-social sexual mores of the day. This emerging pseudo-science, styling itself a true science (though it was not one, because it failed the falsifiability test), keyed into a period mind-set that sought to reduce a multi-shaded universe – including the human condition – to arbitrary and polarised categories.

The key false premise that gave ‘psychology’ its power was the supposition that everybody – with the exception of the ‘psychologist’ – was ‘psychologically defective’. Neurotic. This was never questioned. When fed into period conformity to social imperatives, it meant that ‘psychology’ was less a tool for discovery about the human condition than a means of bullying normal people who didn’t exactly match narrow and often artificially (socially transiently) defined behaviours. That spoke more about the nature of period society and the personal insecurities of the ‘psychologists’ than about human reality.

The concept of ‘psychiatry’ emerged, in part, from the union of this pseudo-scientific illusion with medicine; and I am not sure things have changed today – for instance, one available diagnosis today is ‘ODD’ (Oppositional Defiant Disorder), an obvious label with which a ‘psychologist’ can invalidate the last-ditch defence of someone who’s come to them for help and doesn’t submit to their ego and power.

What of the idea that ‘Internet Gaming Disorder’ is worth investigating? In a social sense internet gaming is a specialised framework for interaction – a way in which people, often on different sides of the world, associate with each other. The framework is very specific, and mediated by computer.

To me this is a key issue, because I suspect a lot of gamers are also introverts; and the computer enables them to interact with others without losing energy. Gaming also frames a specific sub-culture. Those in it respect the status of achievement within those terms. The computer enables them to interact, and to validate that particular interaction with people they respect. Of course this doesn’t describe the whole life, personalities or social interactions of people who happen to spend time gaming; but validation in various ways is one of the drivers of the human condition; and another is the desire of strangers to validate themselves by taking that away – bullying, which (alas) I think is probably also innate.

That’s why I have alarm bells going when I find the APA trying to call computer gaming a disorder.

Obviously gamers cover a wide spectrum, and no doubt a proportion who focus on it will do it excessively, for various reasons – perhaps including trying to get away from being bullied. But in the main, I suspect life is well in hand and gaming is simply a way of socialising via an abstract medium. The problem I have is that the APA’s question risks all gamers being swept up in a catch-all label of ‘disorder’, just like ‘introverts’ nearly were forty years ago, along with left-handers and anybody else who didn’t conform to ‘psychologically’ normal.

I should add – I don’t game. I would, if I had the time, the co-ordination skills – and an internet service that had a competitive ping-time. I don’t. But in any event, that’s not the issue I’m concerned with today.

Thoughts?

Copyright © Matthew Wright 2015

Has fast food made us lazy cooks? You decide…

I was presented the other day with a slightly disturbing story about an academic (has a PhD, works at a university) whose day reportedly begins by slothing out of bed around 11.00 am and ambling to the nearest fried chicken joint, swiftly ingesting 7896 of the 2000 calories a normal adult needs in a day, along with triple the allowance of salt (without counting nitrites).

I was a little surprised – I mean, strokes, heart disease, diabetes and other problems pale into insignificance beside the possibility of being followed around by Stewie Griffin with his tuba.

So how is it that fast food has got so ubiquitous today? It seems to me we need to go back to the industrial revolution – 250-odd years ago now – and its cousin, the agricultural revolution (think Jethro Tull’s seed drill) – to explain it. These shifts eventually solved the eternal human problem: getting enough food. The problem was that food production, in general, also got industrialised and commercialised – and often ended up focussing not on what was good for health, but on what was good for profit. That’s true of a lot more than just fast food – but there’s a lot more fast food around of late than there used to be, too; and a 2013 WHO report identified deregulation as one of the drivers of the rise in fast food uptake.

Triple cheeseburger by Jpneok, public domain, from https://openclipart.org/detail/204444/fast-food-triple-cheeseburger

The way fast food is advertised to us underlines the fact that it’s a commercial product. It’s pushed at us for its taste, for its convenience – and let’s face it, if you’re a harassed parent, coming home tired from work to a house full of kids making noises like steam sirens while bouncing off the walls, isn’t it easier to take the family down town for Yummy Duck Bites and Schlob Burger, or get a pizza delivered, or scoff down that ubiquitous box of pressure-fried Gallus gallus domesticus? What’s more, it all tastes delicious because it’s packed with the things humans are geared to like – the salts, sugars and fats we couldn’t easily get back in hunter-gatherer days.

It’s easy to see how it’s become so ubiquitous. Problem is, fast food is also packed with deceptively high numbers of calories (thanks to the fats, mainly). And what happens then? Well, let me go pick up that tuba. That’s quite apart from what fast food does to essential gut bacteria. The take-out lesson? I’m sure it’s OK to eat fast food on a ‘sometimes’ basis. But I doubt it’s good to eat every day. Sure, the ingredients that go in are top-quality vegetables, milk, beef, chicken, and so on – but then they’re processed, filled with chemicals to preserve them, to condition the cooking oil, to help ensure a consistency of product, and so forth.

What do I recommend? I have healthy and fast home-cooked ‘Kiwi bloke’ recipes that nobody in my household other than me eats, which I’ll be happy to share in the comments – ask.

Copyright © Matthew Wright 2015

3D printed steak chips? It’s enough to make me go all hippy and vegetarian…

Human inventiveness seems limitless these days, so I wasn’t surprised to discover the other week that food technologists have been experimenting with 3D printed meat – currently produced, at astronomical expense, in the shape of chips.

Gallus gallus domesticus on Rarotonga, looking very much like the Red Jungle Fowl (Gallus gallus).

I’ll have my chicken free-range and wild, thanks…

Artificial food has been a long-standing SF staple – brilliantly played by Arthur C. Clarke in his hilarious 1961 satire ‘The Food of the Gods’. All food in this future was synthesised, to the point where the very idea of eating something once alive had become offensive. Even the word ‘carnivore’ had to be spelt out, lest it nauseate listeners, and synthetic meat had names unassociated with animals. In classic Clarke fashion, of course, there was a twist. Food synthesisers could produce anything. And there was this synth-meat called ‘Ambrosia Plus’, which sold like hotcakes until a rival company found out what the prototype was… (I won’t spoil the fun other than to point out that there’s a word for a specific sort of meat-eating starting with ‘c’, and it isn’t ‘carnivore’.)

In the real world, 3D printed meat isn’t synthetic – it’s made of actual animal muscle cells, which are cultured artificially and then sprayed, in layers, to produce the product. Currently it’s a lab technique, and the obvious challenge for its proponents is to find ways of industrialising it. Also of getting customers past the ‘ewwww’ factor of eating animal tissue bred in a petri dish and vomited into chip shape through a nozzle.

To my mind the key challenge is identifying the total energy requirement – printed meat may NOT be as efficient as current ‘natural’ methods of getting meat to your dinner table, where a large part of the energy comes from sunlight, via a grassy paddock and the digestive systems of ruminants.

Mercifully, we haven’t been told ‘This Is The Way ALL Meat Will Be Eaten In Future’, ‘The Future Is Now’ and other such drivel. Predictions of that sort pivot off the ‘recency effect’, by which whatever has just happened is seen as far more important than it really is when set against the wider span of history. We fall into that trap quite often – these days, usually over products launched on the back of commercial ambition. What really happens is that the ‘way of the future’ idea joins a host of others. All of these then blend together and react with society in ways that eventually – and usually generationally – produce change, though inevitably not the change predicted by the ‘Future Is Here’ brigade.

In one of the ironies of the way we usually imagine our future, the things that do dramatically change the way we live – such as the internet – are often neither seen coming nor touted in advance as game-changers. Certainly not in the way that food pills, flying cars and the cashless society have been.

As for artificial meat – well, I expect that if – IF – it can be industrialised, it’ll find a home in hamburger patties. But there seems little chance of it being mistaken for the real deal, still less supplanting a delicious slab of dead cow (sorry, seared sirloin) on the dinner table.

Copyright © Matthew Wright 2015

Why does everything taste of chicken, except chicken?

I’ve always had an interest in discovering the secrets of the universe – you know, does dark matter exist, why we can’t have antigravity – and why every weird steak from crocodile to ocelot always has to taste of chicken.

Gallus gallus domesticus on Rarotonga, looking very much like the original Red Jungle Fowl (Gallus gallus).

This last has been puzzling me a lot. Not least because even chicken doesn’t taste of chicken. I found that out in 2012 when I spent a few days in Rarotonga. Over there, chickens run wild – as in, not just free range. Wild. We had one perching on our breakfast table several days in a row, hoping to be fed. They don’t get soaked in antibiotics. They don’t get imprisoned in horrible conditions before being lightly killed, dropped through a macerator, and re-constituted into Chicken Niblets. They are entirely natural. And when anybody wants chicken – let’s say to add to the khorma I bought in an Indian restaurant in Awarua – they go out and catch one.

That natural living means that Rarotongan chickens don’t taste like battery chickens. Actually, they don’t even look like battery chickens. They look more like what they were before humans got at them: Red Jungle Fowl, which – like every other bird – are actually a variety of flying dinosaur. Recently a geneticist even found out how to switch on the gene that makes chickens grow dino-jaws instead of a beak, a discovery welcomed by other geneticists with loud cries of ‘nooooooo!’ and similar endorsements.

Think birds aren’t dinosaurs? Here’s Velociraptor mongoliensis, Dilong paradoxus, and, off to the right – yup, their close relative, our friend Gallus gallus domesticus.

I conclude from all of this that (a) what we call ‘chicken’ doesn’t actually taste of chicken; and (b) if I’m to define ‘tastes of chicken’, I should be thinking of Rarotongan chickens. And I have to say that of all the unusual things I’ve eaten over the years, few of them tasted of it. For instance:

1. Snail (restaurant in Paris, Rue de Lafayette). These don’t taste of chicken. They taste of garlic flavoured rubber bands.
2. Ostrich (dinner to mark release of one of my books). Definitely not chicken, but could have been confused for filet steak.
3. Something unidentifiable in rice (riverside in Kanchanaburi). I know it was meat. It didn’t taste of chicken or, in fact, anything else. I ate it anyway.
4. Goat (my house). Absolutely not chicken. More like a sort of super-strong mutton.
5. Venison (my house). Reminiscent of liver.
6. Duck (my house). Bingo! Yes, this actually did taste of Rarotongan chicken. And duck.

I can only conclude, on this highly – er – scientific analysis, that very little actually tastes of chicken, including chicken. But I may be wrong. Have you ever eaten anything that was meant to taste of chicken – but didn’t?

Copyright © Matthew Wright 2015

Quantum physics just might become rainbow gravity

One of the biggest problems with quantum physics – apart from the way it attracts new age woo – is that it doesn’t reconcile with Einstein’s General Theory of Relativity. The two don’t meet when it comes to gravity. And so one of the major thrusts of physics since the 1940s has been to find that elusive ‘theory of everything’.

The COBE satellite map of the Cosmic Microwave Background. NASA, public domain, via Wikipedia.

We shouldn’t suppose, of course, that it’s ‘Einstein vs the world’. Our friend Albert was also pivotal to the development of quantum physics – he co-authored, for example, the first paper describing quantum entanglement, with Boris Podolsky and Nathan Rosen, in 1935.

But he didn’t like this ‘spooky action at a distance’. To Einstein, intuitively, there was something missing from what he and fellow physicists Paul Dirac, Werner Heisenberg, Niels Bohr and others were finding. The so-called Copenhagen interpretation of their observations – which remains the basis of quantum physics today – didn’t ring true. The effects were clear enough (in fact, today we’ve built computers that exploit them), but the explanation wasn’t right.

Einstein’s answer was that he and his colleagues hadn’t yet found everything. And for my money, if Einstein figured there was something yet to discover – well, the onus is on us to look for it.

The problem is that, since then, we haven’t found that missing element. All kinds of efforts have been made to reconcile quantum physics – which rules the micro-scale world of atoms and particles – with the deterministic macro-universe that Einstein’s General Theory of Relativity describes.

None have been compelling, not least because while the maths works out for some ideas – like string theory – there has been no experimental evidence that the things they describe really exist. And while it’s tempting to be drawn along by the way the language we’re using (maths) works, we do need to know it’s describing something real.

The Horsehead nebula, Barnard 33, as seen by Hubble. Wonderful, wonderful imagery.

Of late, though, there have been proposals that Einstein was quite right. There WAS something missing. Not only that, but the Large Hadron Collider has a good chance of finding it soon, as it’s ramped up to max power.

Here’s how it works. We live in a four-dimensional universe (movement up-down, left-right, forward-back, plus time). It’s possible that other dimensions and universes exist – this is a postulate of string theory. Another idea is that gravity ‘leaks’ between these universes. And this is where the LHC comes in. In its souped-up new form, the LHC may be able to concentrate enough energy to produce a microscopic black hole – if those extra dimensions really exist.

Exactly what this would mean, though, is up for debate. The results could point to some very different models of the universe than the one we’ve been wrestling with since the 1940s.

It could mean that string theory is correct – and provide the first proof of it.

Or, if a black hole forms while the LHC is running at specific energies, it could mean that ‘rainbow gravity’ is correct. This is a controversial hypothesis – an extension of Einstein’s theory of Special Relativity – in which the curvature of space-time (caused by the presence of mass) also depends on the energy of the particle moving through it. The implication is that gravity (which is a function of that curvature) affects particles of different energies differently: low-energy red light, for instance, is bent very slightly differently from higher-energy blue light. The variance is far too small to detect in normal Earth environments, but it should be detectable around a black hole. And if it’s true then – by implication – the Big Bang never happened, because the Big Bang is a function of the way gravity behaves in General Relativity. It also makes a lot of the paradoxes and mysteries associated with bleeding-edge physics go away, because according to rainbow gravity, space-time does not exist below a certain (Planck-level) scale.
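
For the mathematically curious: rainbow-gravity papers usually capture that energy dependence in a modified dispersion relation. The block below is a sketch of the generic form found in the literature – my summary, not anything specific to the LHC work described here – in which the functions f and g vary from model to model:

```latex
% Generic 'rainbow gravity' modified dispersion relation (a sketch of the
% standard form). The model-dependent functions f and g tend to 1 when the
% particle energy E is far below the Planck energy E_P, so ordinary special
% relativity is recovered at everyday energies.
E^{2}\, f^{2}\!\left(\frac{E}{E_{P}}\right) \;-\; p^{2} c^{2}\, g^{2}\!\left(\frac{E}{E_{P}}\right) \;=\; m^{2} c^{4}
```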

Another possibility is that the ability of the LHC to make black holes could mean that a ‘parallel universe’ theory is right, and the Copenhagen interpretation isn’t the right explanation for the ‘quantum’ effects we’re seeing. By this argument, what we’re seeing is not weirdness at all, but merely ‘jittering’ at very small scales where multiple universes overlap and interact. These are not the branching ‘many worlds’ that Hugh Everett proposed in place of quantum wave-function collapse. They are ordinary Einsteinian universes, in which particles behave in a perfectly ordinary manner. The math, again, can be made to work out – and actually was, last year, at Griffith University in Queensland, Australia.

It also suggests that our friend Albert was right …again.

Copyright © Matthew Wright 2015

My gripe about the misappropriation of quantum physics by new age woo

A few years ago I ended up consulting someone over a health matter. This guy seemed to be talking sense, until he started up about ‘quantum healing’. Bad move. You see, I ‘do’ physics.

Artwork by Plognark http://www.plognark.com/ Creative Commons license

One of his associates had a machine that used low voltage DC electricity to ‘heal’ by ‘quantum’ effects. This was gibberish, of course, and a brief discussion made clear that (a) the meaning of ‘quantum’ didn’t correlate with anything I knew from the work of Paul Dirac, Niels Bohr, Werner Heisenberg and the rest; and (b) invoking the word, alone, sufficed as a full explanation of how this ‘treatment’ worked.

It was, in short, total snake oil. The science is clear: quantum effects – the real ones – don’t work at macro-level. The end.

That’s why ‘quantum jumping’, ‘quantum healing’ and the rest is rubbish. I don’t doubt that ‘quantum healers’ occasionally get results. The placebo effect is well understood. And maybe sometimes they hit on something that does work. But it won’t be for the reasons they state.

Niels Bohr in 1922. Public domain, from Wikipedia.

The way quantum physics has been co-opted by new age woo is, I suppose, predictable. The real thing is completely alien to the deterministic world we live in. To help explain indeterminate ‘quantum’ principles, the original physicists offered deterministic metaphors (‘Schroedinger’s cat’) that have since been taken up as if they represented the actual workings of quantum physics.

From this emerged the misconception that the human mind is integral with the outcomes of quantum events, such as the collapse of wave functions. That’s a terribly egocentric view. Physics is more dispassionate; wave-functions resolve without human observation. Bohr pointed that out early on – the experimental outcome is NOT due to the presence of the observer.

What, then, is ‘quantum physics’? Basically, it is an attempt to explain the fact that, when we observe at extremely small scales, the universe appears ‘fuzzy’. The ‘quantum’ explanation for this fuzziness emerged in the first decades of the twentieth century from the work of Max Planck; and from a New Zealander, Ernest Rutherford, whose pioneering experiments with particle physics helped trigger a cascade of analysis. Experiments showed very odd things happening, such as pairs of particles appearing ‘entangled’, meaning they shared the same measurable properties despite being physically separated. This was described in 1935 by Einstein, Podolsky and Rosen in their original ‘EPR’ paper.
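
For anyone who prefers statistics to metaphors, here is a minimal toy sketch in Python – my own illustration of the textbook rules, not anything from the EPR paper or a model of a real experiment – of what ‘sharing the same measurable properties’ looks like for the simplest entangled pair, a two-qubit Bell state. Each qubit on its own reads 0 or 1 completely at random, yet the two readings always agree:

```python
# Toy illustration: sample repeated measurements of a Bell pair and show the
# outcomes are random individually but always match.
import numpy as np

rng = np.random.default_rng(0)

# Bell state |Phi+> = (|00> + |11>) / sqrt(2), over the basis |00>, |01>, |10>, |11>.
state = np.array([1, 0, 0, 1]) / np.sqrt(2)
probabilities = np.abs(state) ** 2          # Born rule: probability = |amplitude|^2

outcomes = rng.choice(["00", "01", "10", "11"], size=10_000, p=probabilities)

# Each individual qubit is a 50/50 coin toss...
print("fraction of '1' on the first qubit:", np.mean([o[0] == "1" for o in outcomes]))
# ...but the two readings always agree: '01' and '10' never occur.
print("fraction of matching outcomes:", np.mean([o[0] == o[1] for o in outcomes]))
```

The toy only reproduces the correlation pattern; it says nothing about how nature manages it, which is precisely what Einstein, Podolsky and Rosen were arguing about.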

Part of this boiled down to the fact that you can’t measure something precisely when the measuring tool is the same scale as the thing you’re measuring. Despite attempts to re-describe measurement conceptually, then and since (e.g. Howard, 1994), escaping that limit doesn’t seem to be possible at the ‘quantum’ level. That makes particles (aka ‘waves’) appear indeterminate.

Albert Einstein lecturing in 1921 - after he'd published both the Special and General Theories of Relativity. Public domain, via Wikimedia Commons.

All this is lab stuff, and a long way from new age woo, but it’s what got people such as Einstein, Dirac, Heisenberg, Bohr and others thinking during the early twentieth century. From that emerged quantum physics – specifically, the Copenhagen interpretation, the accepted version of how it’s meant to work. And it does produce results – we’ve built computers that operate via the superposition principle. They create ‘qubits’, for instance, by holding ions in a Paul trap, which operates using radio-frequency AC fields – not DC.

The thing is, quantum theory is incompatible with the macro-universe that Albert Einstein explained with his General Theory of Relativity a century ago. Yet that theory has been proven right. Repeatedly. Every time, every test. He was even right about phenomena that hadn’t been discovered when he developed the theory. Most of us experience how right he was every day – you realise General Relativity is part of what makes GPS work properly? Clocks on GPS satellites run slightly fast because they sit higher in Earth’s gravity well, and slightly slow because of their orbital speed; unless both relativistic effects are corrected for, the system’s position fix would drift by kilometres within a day.
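
To give a sense of scale, here’s a rough back-of-the-envelope sketch in Python, using standard textbook values rather than anything from the official GPS specification, of how large the two relativistic clock corrections are for a GPS satellite:

```python
# Back-of-envelope estimate of relativistic clock drift for a GPS satellite.
# Values are standard textbook constants; this is an illustration, not a spec.
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24            # mass of the Earth, kg
c = 2.998e8             # speed of light, m/s
R_earth = 6.371e6       # mean Earth radius, m
r_sat = 2.656e7         # GPS orbital radius (about 20,200 km altitude), m
day = 86_400            # seconds per day

v = math.sqrt(G * M / r_sat)                      # orbital speed, roughly 3.9 km/s

# Special relativity: the moving satellite clock runs slow.
sr_drift = -(v ** 2 / (2 * c ** 2)) * day         # about -7 microseconds per day

# General relativity: the clock sits higher in Earth's gravity well and runs fast.
gr_drift = (G * M / c ** 2) * (1 / R_earth - 1 / r_sat) * day   # about +46 microseconds per day

net = sr_drift + gr_drift                         # about +38 microseconds per day
print(f"net clock drift: {net * 1e6:+.1f} microseconds per day")
print(f"ranging error if uncorrected: about {net * c / 1000:.0f} km per day")
```

Roughly +46 microseconds a day from gravity and −7 from orbital speed, for a net of about +38 microseconds – which, at the speed of light, corresponds to position errors building up at more than ten kilometres a day if left uncorrected.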

So far nobody has been able to resolve the dissonance between the deterministic macro-scale and the indeterminate micro-scale. A ‘theory of everything’ has been elusive. Explanations have flowed into the abstract – for instance, deciding that reality consists of vibrating ‘strings’. But no observational proof has ever been found.

Lately, some physicists have been wondering. ‘Quantum’ effects work in the sense described – they’ve been tested. But is the ‘quantum’ explanation for those observations right? Right now there are several other potential explanations – some resurrected from old ideas – that will be tested when the Large Hadron Collider starts running at full power. All these hypotheses suggest that Einstein was right to be sceptical about the Copenhagen interpretation, which he believed was incomplete.

These new (old) hypotheses make the need to reconcile Copenhagen-style quantum physics with Einstein’s relativistic macro-scale world go away. They also have the side effect of rendering new age ‘quantum’ invocations even more ridiculous. More soon.

Copyright © Matthew Wright 2015

What ever became of all the good in the world?

I am always astonished at the limitless capacity humanity has for intellectualising itself away from care and kindness.

Quick - burn the intruding historian! Avenge ourselves!

School. If you’re accused, you’re guilty!

Many years ago, when I was at school, there was a coat cupboard at the back of the classroom. Next to the cupboard was a trestle table on which had been set a class construction project. The bell went. The class joyously leaped from their chairs and surged to the cupboard, shoving and ramming each other as they fought to get their coats and escape.

I’d hung back to wait for the scrum to clear and saw the cupboard door being forced back by the desperate mob, into the trestle table. I rushed to try and rescue it – too late. The whole lot collapsed to the floor as I got there. Needless to say I was blamed. Everybody had seen me standing over the ruin and it (again) proved what a stupid and worthless child I was, and how dare I claim I was trying to save it, I totally deserved what was coming to me.

So much for trying to be a Good Samaritan.

But – but you say – surely I had rights? No. I had absolutely none. Back then, teachers given power by the system used it to smash those the system had defined as powerless, the kids, and so validate their own sense of worth. If I was seen near a broken table and the teacher decided I had done it – well, then obviously I’d done it, and how dare I protest my innocence.

The main ethical problem with this sort of behaviour is that guilt-on-accusation and summary justice stand not just against the principles of our justice system, but also against the values of care on which western society prides itself. But that is how society seems to work, certainly these days. We have trial-and-conviction by media before someone accused of a crime has even been charged, just as one instance.

All of it is a symptom of one side of human nature. A symptom of the way humans intellectualise themselves into unkindness. It stands against what we SHOULD be doing – against the values of care, compassion, kindness and tolerance that, surely, must form a cornerstone of any society.

There is only one answer. We have to bring kindness back into the world – together. Who’s with me?

Copyright © Matthew Wright 2015
