Big tech platforms face reckoning on accountability

William Roberts
Wednesday 8 March 2023

After Elon Musk, a billionaire advocate of radical free speech, purchased Twitter, dozens of suspended accounts were restored and hundreds of jobs in content moderation and data security were axed. Researchers, including the Center for Countering Digital Hate and the Anti-Defamation League, have since reported a surge in anti-Semitic language, hate speech, bullying and abuse on the platform.

Over the years, Twitter under co-founder and former CEO Jack Dorsey created policies against misinformation, hate speech and threats of violence, and enforced sanctions for violations. It suspended right-wing provocateurs and conspiracy theorists, including former US President Donald Trump.

Musk’s new leadership comes as the legal framework for social media platforms is shifting in both the US and Europe. Lawyers on both sides of the Atlantic see a reckoning ahead: after decades of letting the internet develop unfettered, the law is catching up. ‘It’s a really fraught moment,’ says David Kaye, a clinical professor of law at the University of California and a former UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. ‘There’s a tremendous amount of uncertainty as to what the environment is going to look like six months from now, 12 months from now.’

The conundrum facing government authorities in democratic countries concerned about objectionable content online is that they often haven’t had the tools to limit platforms without violating human rights and free speech norms. This is changing, but getting the policies right will require careful balancing. In the EU, the recently enacted Digital Services Act (DSA) is imposing a new transparency and accountability framework on digital platforms.

‘The DSA is going to be fully applicable as of 2024,’ says Laurent De Muyter, Co-Chair of the IBA Communications Law Committee and a partner at Jones Day in Brussels. ‘It does indeed regulate content, but also goods and services – everything that’s online. The goal is that you remove everything that is illegal online. So, it can be illegal goods violating intellectual property rights in the marketplace. It can be hate speech and so on. What is illegal will depend on what’s been defined at national law.’

EU officials have warned Musk that Twitter will have to comply with the DSA, following Twitter’s declaration on 17 February that it had 59.8 million active users in the EU in the prior 45 days, qualifying it as a major platform subject to the law’s most sweeping provisions. The largest platforms, which the DSA sees as posing systemic risks, will be required to meet a far-reaching series of obligations. ‘It’s for each company to organise itself the way it wants. You can fire as many people as you want, as long as you comply with the law and with the DSA,’ says De Muyter. ‘You will still be required to have content moderation that complies with the DSA. You have to make sure that the practice remains compliant, so you don’t have hate speech, for example, on your platform.’

Twitter didn’t respond to Global Insight’s requests for comment. However, in a blog post at the end of November, the company noted that, while it is ‘embracing public testing’, its policies had not changed. ‘Our approach to policy enforcement will rely more heavily on de-amplification of violative content: freedom of speech, but not freedom of reach,’ the post read. The post reaffirmed that Twitter’s Trust & Safety team ‘remains strong and well-resourced’ and that ‘When urgent events manifest on the platform, we ensure that all content moderators have the guidance they need to find and address violative content.’

Meanwhile, the US Supreme Court heard oral arguments in late February in Gonzalez v Google LLC and Twitter, Inc v Taamneh, two cases involving claims brought by families of victims of terrorist attacks. For the first time, the Court will consider whether to narrow the scope of Section 230 of the US Communications Decency Act 1996, which has given internet platforms broad immunity for users’ posts. At issue is whether content recommendation algorithms, which in these two cases allegedly promoted calls to violence by Islamic State, open the platforms to liability. The Court could rule that the liability provisions of anti-terrorism statutes apply.

In Taamneh, Twitter’s lawyers have focused their argument on what’s defined as ‘aiding and abetting’ in the US anti-terrorism statute, which provides for treble damages against secondary actors. Liability must come from assisting or enabling the specific terrorist act, not merely from a generalised presence of Islamic State-affiliated propaganda on Twitter, the lawyers said in a filing.

‘Information from journalists and reporters about terrorist activity, information trying to debunk or refute claims that terrorists are making, advocacy or documentation of the kinds of impacts of terrorist violence or terrorist activity in a community, all of that potentially falls’ if the Court rules against the platforms, said Emma Llansó, Director of the Free Expression Project at the Center for Democracy & Technology, at a briefing in Washington, DC.

In Gonzalez, lawyers for Google argued in a filing that Section 230 bars claims against providers of ‘interactive computer services’ – websites such as YouTube – for any content the site didn’t create or develop. In displaying thumbnails of videos of potential interest based on user inputs, YouTube did nothing more than apply neutral algorithms, which are protected under Section 230, they argued. If the Court were to rule that anti-terrorism liability applies in this case, it would ‘threaten the basic organizational decisions of the modern internet’, Google’s lawyers said.

If that happens, lawyers expect tech platforms to renew their push in Congress for a deal on social media regulation. The outlook for legislation, however, isn’t promising, with control of Congress divided between Republicans and Democrats, who hold sharply opposing views.

Looking to the future, US courts are likely to be called upon to address attempts by state legislatures to regulate internet content. So-called anti-censorship laws passed by Republicans in Texas and Florida, prohibiting platforms from taking down political speech, have drawn legal challenges that are working their way through federal courts. ‘We are facing – potentially – the oft discussed patchwork of state laws around content regulation and content moderation that would make it incredibly difficult to run an online service,’ Llansó said.

Image credit: Close-up of the tweet ‘the bird is freed’ by Elon Musk (@elonmusk) on the Twitter website, viewed on an iPhone. Vancouver, Canada, 29 October 2022. Koshiro/AdobeStock.com
