The privacy issues kicked up by the current Facebook data scandal – which reportedly saw $83 billion wiped off its stock value last week – are scary not so much for the specific revelations that triggered it, or for the particular issue itself, but because of the deeper issue underpinning it, one shared by all social media and a good deal of the rest of the world wide web.
To me the world is spiralling, ever-faster, into a Huxleyan nightmare. Remember his Brave New World, where the people found themselves wrapped in tyranny because they had freely given away their privacy? This was distinct from Orwell’s world in which privacy was taken from people by force.
Just in the last decade we’ve been conditioned, as a society, to happily hand over personal and private data to large organisations. They also scarf other data – often without our really knowing it – through the way we interact with them. It’s a surveillance system of the kind that, a couple of generations ago, the totalitarian dictatorships merely dreamed of having.
The issue isn’t what the people we’re giving that data away to will do with it. It’s who might get hold of it later, and what they’ll do with it.
Don’t forget, the one lesson we can draw from the span of human history is that the default behaviour of human nature is for people to invalidate others in order to validate themselves. The struggle between that and acting in ways supportive of others is one of the pivots of every religion I’m aware of. Humans keep failing at it, especially at the level of large complex societies – meaning anything over about 150 individuals – which is why societies keep falling into dictatorship or tyranny, over and over, through history. It’s why injustices occur. It’s how the innocent become criminalised.
To me, the revelations of a massive privacy breach are thin-end-of-the-wedge stuff. All social media companies do it. The thing is that knowledge is power; and history – again – is littered with examples of injustice where people are victimised not because of what they have done, but because of what ‘the system’ says they have done. Witch hunting.
I see it online in behaviours now – where one side will establish a narrative relative to the other, and then behave as if that is the ‘truth’, whether it is or not. It’s flat-out bullying, but that’s how humans innately behave, it seems, and it’s difficult to defeat. I had a correspondent on this very blog, the other week, who did just that – refused to accept that anything I discussed with him was valid, abused me for my position, and then kept screaming that I was the one refusing to discuss matters.
This sort of behaviour, of itself, isn’t part of the data hand-over problem. But it is when that data is used, perhaps by people very different from those who collected it. What happens if somebody gets it who wants to use it to determine who we are, how we might behave, and so forth, for purposes other than shoving advertisements at us? What happens if they start up a narrative based not on what is true, but on what they insist must be true – and where our ability to say otherwise is taken away from us? That’s how totalitarianism works.
The problem is that all such analysis is done on the basis of partial data (surfing and social interaction habits online) which, itself, is framed by a wholly artificial system (the way that online interactions occur through ‘likes’, brief commentaries made quickly without subtlety, visits to websites recorded without context as to what the individual is actually thinking, other than what the last website was). It’s a form of psychometrics – and, like all psychometrics, the relationship with the actual person is often only coincidental.
My own experience of psychometrics is salutary: I have to answer a test where none of the answers actually fit, but I have to pick one anyway. Then somebody who doesn’t know me from Adam tells me who I Really Am. As we say in New Zealand, ‘Yeah, right’. The last time I did one it was trivial to reverse-engineer the intellectual botty-dribble on which it was based. And I puzzled the hell out of the ‘psychologist’ running the test when I gamed it.
The illusion of sophistication produced by big data and mathematical algorithms is simply that: an illusion. Mathematics is not magic; it’s a language. It cannot identify a person’s real character – only a parody shadow based on partial data filtered through an artificial system of interactions, which may or may not be true. And, as I say, what worries me is when conclusions are then drawn about individuals and acted on as if true – when they may well not be.
In that regard, consider this: I don’t think that I am all that different from most people in that sense, but I do have a diverse set of interests. Here are some:
– I like music of all flavours (except country and western)
– I have studied physics, especially cosmology and astrophysics
– I have written books on engineering (including military engineering) – it’s a bloke thing
– I have studied history (including military history but also general social, religious and other pasts)
– I am interested in human evolution and the human condition
– I am interested in the nature of philosophy and how we think, especially why it is that as a species we keep falling into moral failure
– I like writing, including non-fiction of all flavours; but also fiction, especially fantasy and sci-fi
– I am fascinated by the inter-connections between all these things and a synthesis of understanding generally
What would that add up to, for anybody who didn’t know me and was trying to reverse-engineer my politics, for instance? Or my attitudes? You know – someone trying to fit me into any of the pre-programmed ‘patterns’ that people like to slot others into, so they could draw conclusions about my likes or dislikes?
As far as I am concerned, the worst thing anybody can do is judge me and draw conclusions after a brief observation, all without checking the facts, or knowing me personally, or bothering to find out more. Yet that’s the basis of psychometrics. It’s also how social media profiling works. And, I have to say, it’s also the mechanism behind hostile book reviews where the reviewer makes pronouncements about an author’s character and personal attitudes based solely on a book that, usually, has been written for quite another purpose.
Of necessity I’ve generalised quite a bit in this post, and doubtless there will be specific instances that stand against the broad remarks I’ve made. But I hope the point’s clear.
What worries me with the whole current trend towards social media, facilitated by large corporations whose primary asset is big data, is the scope for injustice, the scope for the criminalisation and persecution of the innocent for no better reason than that they appear to fit one arbitrary pattern or another. Not now, necessarily, but in the future, and maybe when that data is in the hands of others.
It happens. The one lesson we can draw from history is that this is human nature, particularly when it comes to larger societies.
Copyright © Matthew Wright 2018