What is a Deepfake?

Anyone who spends time online these days is likely to encounter deepfakes directly or come across discussions about them, as they are fast becoming an urgent matter for people to understand and for lawmakers to regulate.

While photo-editing and video-editing software has been around for many years, advances in AI (artificial intelligence) and machine learning now make it possible to create face-swapped images and videos that look more realistic than ever.

Named for the ‘deep learning’ technology behind them and the ‘fake’ content they produce, deepfakes can be photos, videos, or audio recordings that reproduce someone’s visual or vocal likeness, manipulating it to appear as though they said or did something that never really happened.

As the technology improves and the digital alterations become harder for the untrained eye to spot, it’s important for people to understand the problems this synthetic media can cause and the potential legal consequences.

Why are deepfakes bad?

While deepfake technology isn’t all bad, as it can be used for entertainment or educational purposes, it can also be used maliciously to spread misinformation or to exploit someone’s likeness without their consent.

With deepfake content often going viral online, the damage is usually done before the content can be identified as fake and taken down: millions of people can be misled, and the reputations of those whose images are digitally altered can suffer lasting harm.

Misinformation about politicians or celebrities, or misrepresentation of their image, can be harmful to the individuals on a personal level and to society more widely, as a believable deepfake can spark serious controversy before it is exposed as fake.

Deepfakes can be used to create ‘fake news’ or to cyberbully individuals online, but one of the biggest concerns about how this technology is being used is the creation of non-consensual explicit images, or ‘deepfake pornography’.

The generation of non-consensual pornography that superimposes the face of one person, whether they are a public figure or not, onto the body of another person engaging in sexual acts is becoming a major problem for online platforms.

Are deepfakes against the law?

With so much concern over how this technology might be misused and ongoing conversations about the ethics of creating deepfakes, you may be wondering: are deepfakes illegal?

While deepfakes in general are not against the law and the technology is widely available, the way they are used and the content they depict may be illegal under various UK laws.

For example, deepfakes can impersonate someone’s face or voice for fraud, defamation, or blackmail, which is especially worrying for public figures such as celebrities and politicians.

In these cases, the person whose image is being used may be able to take those producing and circulating the deepfakes to court under laws relating to privacy, data protection, or defamation (libel and slander).

Additionally, deepfake pornography has now been criminalised in the UK under the Online Safety Act 2023. When it comes to AI-generated pornography using the likeness of real people, sharing non-consensual explicit deepfakes is illegal in the UK.

This means that anyone who generates and shares such content can be prosecuted and potentially face prison time, even if they didn’t realise that non-consensual ‘deepfake porn’ is now against the law in the UK.
