Online harms get real? Trump’s social media ban and the future of social media regulation

On 15 December, the UK Government set out its proposals to regulate social media in the UK. With the aim of making the UK “the safest place in the world to be online”, a new legal duty of care will be established as and when the Online Harms Bill eventually comes before Parliament. Its aim is to force businesses with a significant online component to take more responsibility for the safety of their users and to tackle a number of “harms” which could be caused by content or activity on their services, all overseen and enforced by OFCOM as an independent regulator. Transparency, trust and accountability will be at the heart of the new regime, and accountability in particular is central to the argument around when and how social networks should step in and take immediate action in relation to the conduct of their users.

It’s fair to say that many view the proposed legislation with some cynicism, and it’s also fair to say that many of the larger businesses which could fall squarely within the scope of the UK’s new regulatory framework will be lobbying as hard as they can to steer the discussion in their favour. Many social networks, for example, have often made it clear that they welcome further regulation and have engaged with Governments to demonstrate their own commitment to protecting their users. Many platforms are also still getting to grips with the impact on their operations of the changes to data protection law brought about by the coming into force of the General Data Protection Regulation (GDPR) – Twitter, for example, was hit with a €450,000 fine from the Irish Data Protection Commission after a security flaw, discovered in 2019, exposed private tweets over a four-year period, a breach exacerbated by Twitter’s failure to report its occurrence.

For 20 years, “information society services” within the EU have largely been shielded from civil or criminal liability for the actions of their users as a result of the operation of the E-Commerce Directive and the UK’s E-Commerce Regulations, which brought those protections into force domestically. Essentially, if a social network is unaware of, and had no real role in creating or influencing, the content created by its users or their behaviour, then it is largely protected from any civil or criminal liability arising from it. Once put on notice of an issue, however, it must take “expeditious” action to remove it. The new Online Harms legislation is designed to be compatible with the EU’s existing and future legislation, including its own attempt to regulate social media through the impending Digital Services Act. The direction of legislative travel is to introduce significant fines and other penalties against any business in scope of the new legislation if it is breached. In the UK’s case, OFCOM will have the power to levy fines of up to 10% of global annual turnover or £18 million (whichever is the higher) for the worst examples of non-compliance. Compliance will mean working to protect users from a list of identified “harms”, which will include disinformation or, in the words of Donald Trump, “FAKE NEWS”.

The Government’s announcement of the Online Harms regime and its previous consultation highlighted the malign influence of fake news on society. The social networks have faced Governmental scrutiny in both the US and the UK in relation to their roles in recent elections, driven by concerns over microtargeting and the misuse of personal data to promote skewed or untrue content. Many took action to restrict the content and availability of political advertising on their platforms, and Twitter in particular has long been a target for criticism over its failure to do more to rein in some of the more controversial views it is often used to propagate. Of course, the focus of that criticism has often related to the activity of one of its most high-profile users: President Donald Trump.

Following Trump’s loss to Joe Biden in November 2020’s US Presidential Election, Twitter has taken more action against him, at least indirectly. For many years, its position has been that “the tweets must flow”, but in the face of Trump’s repeated and unfounded claims that election fraud was behind his loss, Twitter has labelled his tweets as containing claims which are disputed. In response, Trump has sought to remove many of the protections from liability which Twitter enjoys under US law – specifically S.230 of the Communications Decency Act. S.230 provides that an “interactive computer service” can’t be treated as the publisher or speaker of third-party content, and such services are therefore largely protected from civil liability, although there are exceptions for violations of federal criminal law. Even before we get to that issue, Twitter’s User Terms, like those of many other social networks, contain a prohibition against any use of the service for unlawful purposes. In Twitter’s case, however, Trump’s position as a head of state has afforded him even greater protection and seen Twitter hesitate to take action to stop him from tweeting altogether. Given the criticism levelled against Twitter and “Big Tech” more generally during the US Election Campaign, when action was taken to limit the spread of allegations relating to the dealings of Hunter Biden (allowing their opponents to claim that a legitimate news story was being suppressed), this kind of unilateral action may well have been viewed as a step too far. But all that changed over the last 24 hours.

Twitter’s position, and that of Facebook, Snapchat and others, changed last night as a crowd of the President’s supporters stormed the US Capitol during ceremonial proceedings to certify Joe Biden’s victory in the 2020 US Presidential Election. Amplified by right-wing news outlets such as One America News Network and Newsmax, Trump encouraged a crowd at his “Save America Rally” to march on the building in support of Republican politicians and make their voices heard. After a riot ensued, many commentators noted that Trump could have called off the rioters with one tweet. Instead, he posted a video in which he told the rioters that he “loved” them and asked them to refrain from violence, whilst continuing to allege that a landslide victory in the 2020 Election had been “stolen” from him and the American public. Shortly after, the apparently unthinkable happened: Trump’s Twitter account was locked for 12 hours over fears that his actions could incite violence, and several of his posts were removed. Facebook followed suit with a 24-hour restriction, and both platforms have warned of a permanent ban moving forward.

Political campaigning remains largely unregulated in the UK, although reform may be on the way. The Online Harms legislation currently under development is being introduced in part to combat misinformation and “fake news”. Although the UK has been fortunate never to have had to reckon with an incident like the one that took place in Washington overnight, campaigning around the last UK General Election was more closely scrutinised than ever and took place largely online; notably, the Conservatives’ use of social media was regularly “fact checked”. Unless and until that position changes, a new duty to protect their users from harm may see Twitter, Facebook and others leaning more heavily on their terms of use, and amending them, to give themselves greater scope to take action of their own accord under rules they choose to write for themselves and their users, faced with the threat of a more interventionist regulator in OFCOM, very significant fines and potential further legal action.

Whatever happens, it’s becoming clearer every day that social media is no longer the ‘Wild West’, that regulation is on its way and that online harms can lead to potentially devastating real-world consequences. We’ll keep you updated as the new legislation evolves.

Disclaimer: This document does not present a complete or comprehensive statement of the law, nor does it constitute legal advice. It is intended only to highlight issues that may be of interest to clients of BLM. Specialist legal advice should always be sought in any particular case.
