Partner Steve Kuncewicz talks political ad bans with International Bar Association
The article will examine Twitter’s recent decision to ban political ads and the criticism of Facebook for not fact-checking them; whether, as campaigners suggest, Google and Facebook should temporarily ban such ads until after the election; the threat such ads pose to democracy; whether tech firms have a responsibility to act; and the extent to which regulation can adapt.
Steve Kuncewicz, partner and social media legal expert at commercial law firm BLM, shares his views on Twitter’s recent decision to ban political advertising with the International Bar Association.
Twitter’s decision to ban all political advertising is the most radical step we’ve seen from a social platform and a direct reaction to scandals such as Cambridge Analytica, which brought about a storm of criticism from all corners of the media. With this decision, Twitter will be hoping to stave off growing scrutiny from both the public and government regulators seeking more control over social platforms.
Twitter founder Jack Dorsey recognises the power of political ads, and is willing to take action. Indeed, in his own words, “internet advertising is incredibly powerful and very effective…. [and] can be used to influence votes to affect the lives of millions”.
Facebook has faced criticism for not fact-checking their political advertising – is this criticism valid?
Of all the platforms, Facebook has been under the most scrutiny, and much of this has been valid – especially as the Cambridge Analytica scandal revolved almost entirely around its cooperation in the surreptitious micro-targeting of users. Its decision to pay the Information Commissioner’s Office’s (ICO) recent £500,000 fine with no admission of liability has done it no favours in the public sphere – especially as Twitter, Google and even Snapchat are all seen to be taking action.
The argument against Facebook has been that it doesn’t want to turn away political advertising revenue, and Dorsey has leant heavily on that point to justify Twitter’s stance that politicians shouldn’t be “paying for reach” and that there should instead be “more forward-looking political ad regulation”.
However, Facebook seems to be taking increasing note of criticism, recently banning a video from the Conservative Party after the BBC complained that the use of its footage in the clip could damage “perceptions of its impartiality”.
Should tech firms have a responsibility to act?
If tech firms don’t act, they will eventually be forced to, as the recent consultation on the British government’s Online Harms White Paper has shown. The White Paper suggests that tech firms owe a duty of care to protect their users.
Self-regulation will always be preferable, so keeping in line with public opinion on their ‘responsibility’ is a tech firm’s best way of avoiding imposed rules. More enforced rules mean less control over their own platforms.
Should this responsibility be made mandatory?
Social media platforms have sought to regulate themselves to avoid the need for the government to do that job for them – they would much prefer to be in full control of their own rules. However, if a strong enough stance on disinformation is not taken and public pressure over misinformation prevails, there’s no question that mandatory regulations will be introduced. The recent consultation on the draft Online Harms White Paper already provides clear evidence of that.
If so, where should the new rules emanate from and what form should they take? What are the benefits and possible pitfalls of such a move?
If anywhere, these rules will come from the platforms self-policing themselves, or from a new government arm dedicated to independently reviewing political advertisements. The Online Harms White Paper has already set the scene for this, outlining future plans to regulate social media, hold platforms to account and make the UK “the safest place in the world to be online” through the creation of a new statutory duty of care.
To what extent can current regulation adapt to tackle this potential problem?
The Advertising Standards Authority (ASA) is often asked to intervene over the content of political advertising, particularly if claims appear untrue or misleading. However, its view since the 1997 General Election has been that, by the time an investigation has completed, the election to which the material relates will have run its course.
Is there a threat to democracy posed by such ads? What other possible problems can they cause?
There have been plenty of questions posed about the threat to democracy from these kinds of ads – particularly surrounding Donald Trump’s election win in 2016 and the Brexit referendum. It’s incredibly difficult to measure their exact threat or impact, especially as inquiries into foreign interference have proved inconclusive, but the level of surreptitious micro-targeting is a cause for concern. In social media’s current state, foreign parties with a vested interest in the outcome of another country’s democratic vote can gain direct access to voters’ newsfeeds in an incredibly targeted way – and, with enough financial capital, could undoubtedly exert significant influence.
Disclaimer: This document does not present a complete or comprehensive statement of the law, nor does it constitute legal advice. It is intended only to highlight issues that may be of interest to customers of BLM. Specialist legal advice should always be sought in any particular case.