Charles Arthur is a journalist, author and speaker who has been writing on science and technology for over thirty years. He was technology editor of the Guardian from 2005 to 2014, and afterwards carried out research into social division at Cambridge University. He is the author of three books: Digital Wars, Cyber Wars, and Social Warming, which will be released this August.
Q: How did you end up as technology editor of the Guardian for almost a decade? What do you find personally interesting about technology as a driver for our age?
A: I’ve been fascinated by technology since childhood: aged 11, I was building circuits using transistors, reading science fiction, fascinated by the potential that machines and advancement might unlock. As I got older, I got into journalism as a career – working at a computer newspaper, then a business magazine, then New Scientist, then The Independent newspaper where I stayed for 10 years, and then the Guardian. Along the way I realised that technology is a substrate and that what really matters is how it affects our behaviour. Does it make us happier? If it’s misused, is the fault in the technology or in us, as humans?
Q: Over the course of those years and since, what have been the opportunities and trends in artificial intelligence and AI-powered applications that most concern and encourage you?
A: Concern? The widespread application to generate algorithmically-determined feeds in social media and search engines which are tuned to keep us engaged, rather than informed. Google’s system doesn’t necessarily give you the most accurate answer; it gives you the answer most people have pointed to. Social media doesn’t filter for truth; it filters for “attention”. But we pay attention to people having a fight in the street. Is that good?
Encourage? The advances in AI able to play recreational games, particularly Go and chess (both of which I play), mean we can see more possibilities; it’s like having aliens show us entirely new ways of thinking.
Overall, AI tends to be used in ways that are narrowly directed at specific tasks. That’s fine. The problems arise when you set two AIs to do slightly different tasks on the same systems. In my forthcoming book, Social Warming, I point out that Facebook had an AI system which tried to recommend that people join Groups, and which would autogenerate birthday and anniversary greetings for people. It also had an AI system which tried to root out terrorist content and remove it. The first was better at its job than the second, so it autogenerated business pages for terrorists and encouraged people to get in touch with them.
Q: Your last book, Cyber Wars: Hacks That Shocked The Business World, focused on computer security. Since it was published, we have seen many additional hacks of both commercial and government databases. Do you see net improvement or devolution in this area since you wrote your book?
A: One of the chapters in that book was about how ransomware had been enabled by the rise in cryptocurrency, which took it from a theoretical concept to a continual problem. I think companies are more aware that they might be hacked; they don’t use email where they can use safer encrypted messaging apps; they expect there might be security risks with new systems. But the “attack surface” of modern systems keeps getting bigger. Hacking will always be with us.
Q: As a professional journalist covering technology, you have had a front row seat for analyzing the effects that social media platforms have had on traditional news sources and standards for accurate reporting of the news. Do you think professional journalism standards themselves have been affected by how news is reported through social media?
A: As I point out in my book, even the best journalists are tempted by the immediacy of Twitter (and, to a much lesser extent, Facebook) to try to get their story out immediately, ahead of their rivals. That can mean little things get blown out of proportion because they attract attention on social media, and that details don’t get checked in the headlong rush to be first. It’s then much harder to get the correct information out there.
There’s also a reverse effect: “journalism” turns into the trawling of what’s on social media. That reduces its real quality, reduces its apparent quality (everyone can see you’re just rephrasing social media content), and distracts from doing actual journalism about issues that do affect readers.
This isn’t to say that journalism was in some nirvana before social media. (Nor the internet.) But it’s important to recognise what things are, and aren’t, good uses of your time. Rewriting the internet has always struck me as a fairly pointless exercise.
Q: Your new book coming out in August is called Social Warming: The Dangerous and Polarising Effects of Social Media. I’m intrigued by your analogy to global warming. What are the similarities you see?
A: Warming is analogue, not digital: it’s gradual, incremental, and only when it reaches a critical level do you get dramatic changes. It’s not an “on or off” thing. When you boil a pan of water, you don’t notice much difference in the water until, oh, suddenly it’s boiling and there are bubbles of steam roiling it. That’s analogue.
Global warming is slow, subtle, but enormous and all-encompassing: the whole planet is getting hotter, but that effect shows up in different places in different ways: some places get much more rain, some get far less. Then everything starts to really go haywire. Where are the glaciers? Why is the city flooding all the time? Where did these mosquitoes come from?
Social warming, similarly, has taken years to come around, but it’s now all around us, and it affects even countries where social media use is quite small, because it is used to influence those in power: they always worry about those with broadcast platforms. Social warming is what happens when you squeeze everyone into one place, make it possible for them to see all the opinions the world holds, most of them wrong (because almost everyone is wrong about something, aren’t they?), and make it possible to challenge them. It’s like squashing a sealed container of gas: it gets hotter and hotter. In the US and UK we see anger rising and misinformation spreading because of this effect. There’s no release, either.
But, like global warming, we can take measures that will reduce it. The first one is to recognise it’s happening. The second is to understand why it’s happening. The third is to take measures against it, and precisely which measures are among the things I discuss in my book.
Q: In a 2018 article you wrote, “in the attention-based world, truth and accuracy are irrelevant.” Today we are increasingly seeing overt fact checking of online news by traditional news sources and greater efforts by social media platforms to moderate content, especially of major political import. Do you believe such efforts are sincere and have any reasonable hope of success so long as the business model of social media rests on attention?
A: For traditional news sources, fact-checking what’s online is a form of attention-seeking: people like stories that upend their preconceptions, as long as it upends them in a way they agree with. (The most-read story I wrote at The Guardian was fact-checking a viral claim online saying Samsung was paying a billion-dollar fine to Apple in small change. It wasn’t, but explaining why was quite fun.)
The social media platforms have begun to introduce fact-checking (by third parties) because the sheer scale of fake news and “satire” that was going viral on them effectively threatened their reputation. When it comes to pandemic information, they have tried, though the rapidly changing guidance from health authorities has made it hard to be consistent over time.
Are they sincere? Certainly they want the fact-checking to be effective. But they’re always conflicted: Facebook doesn’t remove content that is judged untrue, and the fact-checking label isn’t as prominent as you’d expect. They’re not trying to be an encyclopaedia or a news publisher, so they want fact-checking to work as long as it doesn’t get in the way of people using their networks. But rather like the battling AI systems in Facebook, one generating content for terrorists and another trying to find terrorist content and remove it, there are two conflicting interests at work here. And only in the most exceptional circumstances would fact-checking be allowed to interfere with getting content on the network. Mark Zuckerberg has said that Holocaust denial and certain other sorts of content will be removed, which is good. But that’s the exception.
Thanks very much for answering our questions, Charles!