Deepfake regulation: a double-edged sword?
Deepfake technology is rapidly emerging as AI’s latest ‘Pandora’s box’. We are no longer limited to parodic content featuring politicians (who could ever forget the Pope wearing Moncler?); generative AI is now being actively weaponized, from misleading political deepfakes and clickbait ads featuring celebrities to schoolchildren creating explicit images of classmates.
As the capabilities of AI tools continue to advance faster than regulation, many are increasingly concerned about the very real threat they pose. New legislation is arriving, but much of it is too limited or vague to comprehensively protect people. On the other hand, these new rules have implications that could easily catch out professionals trying to use generative AI in legitimate ways.
What legal protections currently exist in the UK around deepfake technologies, and what behavior is prohibited?
Graeme Murray is a senior associate at Marks & Clerk and Michael Shaw is a partner at Marks & Clerk.
Visage varieties
First, it is important to define exactly what a deepfake is. After all, resemblances occur in nature – there is an old saying that seven people in the world look like you – so to what extent are you protected by regulation, and where can you as a company go wrong? A useful example is the 2019 ruling against the vape company Diamond Mist, whose advertising featured the slogan “Mo’s crazy about menthol” alongside images of a male model with a bald head and thick eyebrows.
Mo Farah took to Twitter to complain, concerned that people would think he had endorsed the product. Ultimately, the Advertising Standards Authority (ASA) ruled that the ad did indeed create a ‘misleading impression’: although ‘Mo’ is a common name, the model’s head and eyebrows were sufficiently reminiscent of the athlete that viewers would associate the ad with Mo Farah, one of the most famous figures in Britain with that name.
Herein lies the crux: while the image wasn’t a deepfake, it was similar enough to confuse viewers – and the same logic applies to deepfakes. If an image is misleading enough to confuse others, the person depicted has grounds to consider legal action.
Conversely, as a business you need to consider all possible interpretations of an image to ensure you can use generative AI without getting caught up in legal complications. Just because the stock gen-AI photo you use to lead a LinkedIn article seems generic doesn’t mean it is. Voice, gestures and context are all factors taken into account, but ultimately the question is: did it mislead viewers?
Current legislation surrounding deepfakes
To date, there is no legislation in the UK that provides blanket protection against deepfakes. Instead, individuals are protected by a set of rules depending on the nature of the deepfake.
Online Safety Act
The Online Safety Act contains one important provision against deepfakes. While it has been illegal to share intimate or explicit images of someone without their consent since 2015, the Online Safety Act strengthens this by also making it illegal to share intimate, AI-generated images of someone without their consent. Crucially, unlike the offence covering genuine intimate content, for deepfake images you do not have to prove that the sharer intended to cause distress, although it becomes a more serious offence if a sexual or malicious intent can be proven. It is essential to note that this provision does not criminalize the creation of an explicit deepfake, only its sharing. The Online Safety Act is also primarily aimed at removing objectionable content; many are concerned that its provisions will prove ineffective as long as the creation of intimate deepfakes remains unregulated and perpetrators escape punishment.
Advertising Standards Authority
The ASA intervenes when advertisements contain misleading content. In terms of deepfakes, this mainly concerns scam advertisements and clickbait; it is unlikely to affect ordinary people, and those running businesses should know not to use celebrities without permission – many, for example, have their likeness, gestures and voice protected by trademarks.
More interesting, however, is the gray area of resemblance that deepfakes will exacerbate. One point highlighted in the Mo Farah case was that the resemblance does not have to be identical; it just has to be close enough to mislead the viewer. With generative AI trained on copyrighted material, there is a danger that companies could inadvertently violate ASA rules by using gen-AI output that happens to resemble a real-life celebrity closely enough to cause confusion. Here, intent is irrelevant: all that matters is whether viewers were misled, and that alone could get companies into trouble with the ASA.
Civil law
The final remedy for British citizens is under civil law. While there is no specific legislation addressing deepfakes, individuals may seek redress in the following situations:
- Privacy: A deepfake can be considered a breach of someone’s right to privacy, especially if they can prove that the creator used personal data to create it, which is protected by the UK’s GDPR and the Data Protection Act 2018.
- Harassment: Multiple deepfakes created with the intent to cause alarm or fear could form the basis of a harassment charge.
- Defamation: If a deepfake damages someone’s reputation by portraying them in a false or harmful way, there is potential for a defamation case.
In such cases, it is best for a person to seek legal advice on how to proceed.
The future of deepfake legislation
So, where does the legislation go from here? Hopefully forward. The UK government has put this issue on the back burner in the run-up to the election, but with the EU AI Act leading the way, new regulations are likely on the way soon.
The bigger problem, however, is enforcement. The three avenues discussed above – the Online Safety Act, the Advertising Standards Authority and civil law – all regulate output on a case-by-case basis. Britain currently has no regulations, or proposals, to introduce safeguards around the generative programs themselves. Many are even cheering Britain’s comparative lack of regulation relative to the EU AI Act, hoping it will prove a boon for the AI industry.
However, current strategies remain ineffective. Victims need legal support to bring cases forward, and creators continue to escape repercussions. Sweeping control over the technology is also impractical; one only has to look at the GDPR for an idea of that. Efforts to impose it, such as the EU AI Act, still fail to address the problem, as open-source generative technologies remain completely unregulated.
It appears that an independent regulator will be needed – an Ofcom for AI – but how independent or effective it would prove remains to be seen. Let’s hope the new government manages to strike a balance between industry, personal protection and business innovation.
This article was produced as part of TechRadarPro’s Expert Insights channel, where we profile the best and brightest minds in today’s technology industry. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc.