UK Online Harms White Paper: What’s the impact?

Steve Kuncewicz, partner and social media specialist at BLM, analyses the strict guidelines set out by the Government and its potential impact on the online world

The Government has made it clear for some time now that it intended, through the Online Harms White Paper, its consultation and now these proposals, to make the UK the “safest place in the world to be online”. It is doing so by moving away from a traditional position, in which online platforms and businesses have largely been shielded from oversight or direct liability, towards a rigid regulatory framework overseen by OFCOM.

Any business that operates a platform where users are deemed susceptible to “harms” falls within the scope of this legislation and is therefore covered by a new legal duty of care. Oliver Dowden has described it as a “new era of accountability” and “the most comprehensive approach yet to online regulation”, and that’s certainly not an exaggeration.

The public has, if opinion polls are to be believed, supported some form of regulation of social media, particularly in the wake of tragic incidents such as the suicide of Molly Russell and growing concern around the spread of misinformation. Both are high on the public agenda, and with the new proposals intended to address them directly, many will see this as a positive step towards protecting users.

Turning guidelines into action

There was a suspicion that these proposals might not be implemented by the Government for some time, or might be watered down. However, although this iteration doesn’t go as far as some previous proposals, it does confirm that OFCOM will be able to levy maximum fines of £18 million or 10 per cent of a firm’s global turnover, whichever is higher, for the most serious failures to comply. This mirrors the structure of fines for the most serious GDPR violations, and GDPR may well have provided a template for the new proposals more broadly, given that they also oblige some businesses to assess for themselves what “legal but harmful” content is disseminated through their platforms and to manage the related risk accordingly.

The problem is that no definition of “harmful content” is provided, with the onus placed on businesses and platforms to make their own assessment. Although OFCOM will put in place detailed codes of practice (some interim versions were made available today, dealing with child sexual exploitation and abuse as well as terrorist content), there is a real concern that forcing businesses with a significant digital presence to police themselves, without significant further guidance, may lead to inconsistent enforcement, confusion around what is needed to comply and a potential increase in civil claims.

As it stands, the new regulation applies to “any company in the world hosting user-generated content online accessible by people in the UK or enabling them to privately or publicly interact with others online”. This covers a very wide range of businesses, although compliance and responsibility will be “tiered”: those with the largest online presences and high-risk features, such as Facebook and TikTok, will be expected to do far more than others, based on a “reasonably foreseeable risk of causing significant physical or psychological harm to adults”.

Changing treatment of social media users

These guidelines are certainly going to lead to changes in terms of use, and “Category 1” businesses will need to publish transparency reports about the steps they’re taking to deal with online “harms”. They may well also lead to a lot more content being blocked as a safety measure.

There has been a long-standing debate over what responsibility social platform moderators should assume in relation to harmful content, and although these proposals up the ante, they don’t yet provide clarity on how “harms” will be defined. It’s clear, however, that big tech firms are in the Government’s crosshairs, given previous accusations that their efforts amounted to token gestures.

Certainly, the safety of children online is the main concern of the new proposals, and they won’t, as some might suggest, give platforms licence to censor the controversial views of users. There will be more lobbying to come before we end up with draft legislation to review, but just as privacy did a few years ago, the issue of “harm” is going to make its way to the top of risk registers. We can certainly expect changes in the way social media users interact with each other, and in how platforms allow them to do so, as a result.

Potential damage to the UK digital sector

It has been suggested that, without better definition, this new framework may end up placing a disproportionate amount of compliance pressure on smaller businesses. We won’t know how far that goes until we see the draft legislation, but it may discourage some businesses from coming to the UK until further clarity is provided. Given that the digital sector is a huge contributor to the UK economy, and given the current uncertainty around the Brexit negotiations, this could have a direct impact on the sector’s health and on decisions about where and how much to invest.

To protect themselves, smaller digital businesses need to think seriously about what “harm” their users are susceptible to. It’s important to map out the related risks and make this a board-level issue, ideally designating a “Harm Officer” as a single point of contact.

There’s still some way to go with this bill, and I would expect further amendments before it passes, but there’s no doubt it will change the UK tech landscape for both users and businesses, large and small.

Steve Kuncewicz, Partner and Head of Creative, Digital & Marketing sector group, BLM
