Why Musk’s rabble-rousing shows the limits of social media laws


What can the UK government do about Twitter? What should it do about Twitter? And does Elon Musk even care?

The multibillionaire owner of the social network, still officially branded as X, has had a fun week stirring up unrest on his platform. Aside from his own posts, a mixture of low-effort memes that look as if they’re lifted straight from 8chan and faux-concerned reposts of far-right personalities, the platform at large briefly became a crucial part of the organisation of the disorder – alongside the other two of the three Ts: TikTok and Telegram.

Everyone agrees that something should be done. Bruce Daisley, former Twitter EMEA VP, suggests personal liability:

In the short term, Musk and fellow executives should be reminded of their criminal liability for their actions under existing laws. Britain’s Online Safety Act 2023 should be beefed up with immediate effect. Prime minister Keir Starmer and his team should reflect if Ofcom – the media regulator that seems to be continuously challenged by the output and behaviour of outfits such as GB News – is fit to deal with the blurringly fast actions of the likes of Musk. In my experience, that threat of personal sanction is much more effective on executives than the risk of corporate fines. Were Musk to continue stirring up unrest, an arrest warrant for him might produce fireworks from his fingertips, but as an international jet-setter it would have the effect of focusing his mind.

Last week, London mayor Sadiq Khan had his own proposal:

‘I think very swiftly the government has realised there needs to be amendments to the Online Safety Act,’ Khan said in an interview with the Guardian. ‘I think what the government should do very quickly is check if it is fit for purpose. I think it’s not fit for purpose.’

Khan said there were ‘things that could be done by responsible social media platforms’ but added: ‘If they don’t sort their own house out, regulation is coming.’

Ewan McGaughey, a professor of law at King’s College London, had a more specific suggestion for what the government could do when I spoke to him on Monday. The Communications Act 2003 underpins much of Ofcom’s powers, he says, and is used to regulate broadcast TV and radio. But the text of the act doesn’t limit it to just those media:

If we just look at the act alone, Ofcom has the power to regulate online media content because section 232 says a “television licensable content service” includes distribution ‘by any means involving the use of an electronic communications network’. Ofcom could choose to assert its powers. Yet this is highly unlikely because Ofcom knows it would face challenge from the tech companies, including those fuelling riots and conspiracy theories.

Even if Ofcom, or the government, were unwilling to reinterpret the old act, he added, it would take only a simple change to bring Twitter under the much stricter aegis of broadcast controls:

There is no difference, for example, between Elon Musk putting out videos on X about (so called) two-tier policing, or posts on ‘detainment camps’, or that ‘civil war is inevitable’, and ITV or Sky or the BBC broadcasting news stories … The Online Safety Act is completely inadequate, since it only is written to stop ‘illegal’ content, which does not by itself include statements that are wrong, or even dangerous.

The Keep Your Promises Act

Police in Middlesbrough responded this month to rioters who had been encouraged by posts on social media. Photograph: Gary Calton/The Observer

It’s odd to feel sorry for an inanimate object, but I wonder if the Online Safety Act is getting a bit of a rough deal, since it’s barely in effect. The act, a mammoth piece of legislation with more than 200 separate clauses, passed in 2023, but the bulk of its changes will only take effect once Ofcom completes a laborious process of consultation and code-of-conduct creation.

In the meantime, all the act offers is a handful of new criminal offences, including bans on cyberflashing and epilepsy trolling. Two of the new crimes have been given their first road test this week, after portions of the old malicious communications offence were replaced by the more specific threatening and false communications offences.

But what if everything had moved quicker, and Ofcom had been up and running? Would anything have changed?

The Online Safety Act is a curious piece of legislation: an attempt to corral the worst impulses of the internet, written by a government that was simultaneously trying to position itself as the pro-free-speech side of a burgeoning culture war, and enforced by a regulator that emphatically did not want to end up casting rulings on individual social media posts.

What came out of it could be described as an elegant threading of the needle, or an ungainly botch job, depending on whom you ask. The Online Safety Act doesn’t, on its own, make anything on the web illegal. Instead, it imposes requirements on social media firms to have specific codes of conduct, and to enforce them consistently. For some types of harm, like self-injury, racial abuse, or incitement to racial hatred, the largest services have a duty to at least offer adults the option of not seeing that content, and to keep children from seeing it at all. For material that is illegal, ranging from child abuse imagery to those threatening or false communications, it requires new risk assessments to ensure that companies are actively working to fight it.

It’s easy to see why the act caused such a gigantic shrug when it was passed. Its primary outcome is a new mountain of paperwork, requiring social networks to prove that they’re doing the things they were already doing: trying to moderate racist abuse, trying to tackle child abuse imagery and trying to enforce their terms of service.

The defence of the act is that it functions less as a piece of legislation to force companies to behave differently, and more as something that lets Ofcom beat them around the head with their own promises. The easiest way to get a fine under the Online Safety Act – and following the lead of GDPR, those fines can be a meaty 10% of global turnover – is to loudly insist to your customers that you are doing something to tackle an issue on your platform, and then do nothing.


Think of it this way: the bulk of the act is designed to tackle the hypothetical foe of a tech CEO who stands up at an inquest and solemnly intones that the awful behaviour they’re seeing is against the terms of their service, before going back to the office and doing nothing at all about the problem.

The issue for Ofcom is that, well, multinational social networks aren’t actually run by cartoon villains who ignore their legal departments, overrule their moderators and merrily insist on enforcing one set of terms of service for their friends and a different one for their enemies.

Except one.

Do as I say, not as I do

Twitter under Elon Musk has become the perfect test case for the Online Safety Act. On paper, the social network is a comparatively normal one. It has a terms of service that blocks broadly the same spread of content as other large networks, if slightly more permissive on pornographic content. It has a moderation team that uses a mixture of automatic and human moderation to remove objectionable content, offers an appeals process for those who think they have been treated unfairly, and dishes out escalating strikes ultimately leading to account bans for infringement.

But there’s another tier to how Twitter works: what Elon Musk says, goes. To give just one example: last summer, a popular rightwing influencer reposted child abuse imagery that had earned its creator 129 years in jail. His motivation for doing so was unclear, but the account received an instant suspension. Then Musk stepped in:

An Elon Musk tweet reading: “only people on our CSE team have seen those pictures. For now, we will delete those posts and reinstate the account.” Photograph: X.com

On paper, Twitter’s terms of service are likely to prohibit many of the worst posts related to the riots in Britain. “Hateful conduct” is banned, as is “inciting, glorifying or expressing desire for violence”. In practice – well, the rules seem to be inconsistently applied. And that’s the point where Ofcom could start being very pushy indeed with Musk and the company he owns.
