Why has the European Commission fined X and does this threaten freedom of speech?


This article is authored by Dr Stephanie Reynolds, Associate Head of Department (Law) and Co-Director of the Liverpool Public Law Unit.

On 5 December, the European Commission fined social media platform X €120 million as part of its investigation into breaches of the EU’s Digital Services Act (DSA). X owner Elon Musk responded, via his own X account, accusing “EU Commissars [of] deciding upon the fine first and then mak[ing] up fake reasons afterwards”. This, claimed Musk, was the “rule of the unelected bureaucrat!”

These comments reflect a broader narrative, pushed by some other tech magnates and even the US Government, that European countries – including the UK which has introduced its own Online Safety Act 2023 (OSA) – censor their citizens and fail to respect free speech.

Such allegations require scrutiny of whether the Commission’s actions actually amount to censorship, and of whether platforms’ own content moderation practices can undermine free speech independently of regulation. While public actors – like the EU or the UK – can be held directly accountable under human rights law, private platforms are not bound in the same way. In a world where political debate increasingly happens on social media platforms owned by a small number of very wealthy individuals, we need to start thinking seriously about the impact of private power on contemporary democratic functioning and how this might be constrained.

Does the Commission fine amount to censorship?

The Commission’s fine followed a two-year investigation into X’s compliance with the DSA. Crucially, it does not concern the spread of illegal content, which remains under investigation, but more technical matters. The Commission found that X’s use of the ‘blue tick’ breached Article 25(1) DSA, which prohibits platforms from designing interfaces that deceive service users. Once associated with high-profile accounts, the blue tick is now available through X’s premium subscription service without, the Commission decided, sufficiently meaningful identity verification. This makes it harder for users to judge the authenticity of accounts, increasing the risk of scams. In short, the Commission has concluded that X has not met its legal responsibility to keep users safe.

Meanwhile, Article 39 DSA requires very large platforms that display adverts to maintain a repository of information about them, covering things like ad content, target audience, users reached, who paid for the ad and who benefited from it. For the Commission, such information is essential to the detection of online fraud, yet X’s own repositories were not sufficiently transparent or accessible to facilitate the identification of online threats. More generally, the Commission decided that X was not meeting its broader obligations, under Article 40(12) DSA, to provide researchers with access to its public data, ‘undermining their ability to investigate systemic risks to the European Union’.

Overall, then, there is little in the Commission’s decision that directly undermines free speech. Rather, it seeks to ensure that users and researchers are furnished with sufficient information about the content they see online to make informed decisions about how to engage with it. This transparency-based approach is increasingly central precisely because human rights law makes it more difficult for public institutions – as compared with private companies – to intervene in online speech.

Moderating content: public regulation or private terms of service?

Historically, many governments left content moderation to platforms, partly to encourage innovation but also, some argued, to enable behind-the-scenes censorship. Since platforms’ terms of service cannot be subject to human rights challenges in the way public rules can, governments could encourage platforms to apply them to certain accounts rather than censor content themselves. However, particularly during Covid-19, European governments, including the UK, became concerned that self-regulation was inadequate to address online misinformation and hate speech.

In practice, the DSA’s vague definition of ‘illegal content’, combined with swift removal requirements and heavy fines, raises academic concern about over-removal by platforms implementing its requirements at the digital coal face. The UK OSA’s thresholds are similarly difficult to discern, giving platforms leeway to remove potentially legal content by arguing that they believed it to be ‘illegal’, whilst allowing more troubling content to remain by asserting that they did not think it fell within the definition.

Beyond this, the UK Government ultimately concluded that its free speech obligations rendered it ‘inappropriate’ for the OSA to oblige platforms to remove ‘legal but harmful’ content targeted at adults, including certain ‘racist, misogynistic’ comments. Instead, large platforms are required ‘to set out clearly in their own terms and conditions what harmful material is allowed on their site’. Scholars argue this makes private companies de facto arbiters of free speech, leaving them free to remove valuable political speech while allowing harmful content to persist. This raises further questions as to whether free speech equates to free reach in the digital realm.

Free speech but not free reach?

A self-declared ‘free speech absolutist’, Musk presents X as a ‘global town square’, a notion that coheres with certain free speech theories, including that espoused in the US case of Abrams v United States that ‘the best test of truth is the power of the thought to get itself accepted in the competition of the market’.

Yet scholars argue that algorithms, engagement-driven business models that favour sensationalist content, prioritisation services like X Premium, and the effect of follower numbers on reach all lead to unequal visibility of online speech in practice. This undermines the very notion of a ‘marketplace of ideas’. Human rights law, meanwhile, continues both to shield platforms, as private actors, from responsibility for what is posted to their sites and to protect their rights as owners to apply broad content moderation policies to the posts of others. The media has also previously accused X not only of promoting Musk’s content but also, at times, of removing the posts of those who disagree with him. In short, while vigilance against state censorship is always vital, the interaction between Musk and the European Commission should additionally alert us to the profound influence of private platforms and their owners over democratic discourse and to the unequal realities of online speech.