The Sinister Reality of Deepfakes

We’ve all seen deepfakes that have made us chuckle on Instagram: reels of Queen Elizabeth spitting bars over a London drill beat, or Kim Jong Un and Trump in a friendly arm wrestle on a yacht.
Deepfakes are AI-generated photos or videos that mimic a real-life person or event using deep learning algorithms, sometimes created from scratch but more often by manipulating a photo or video that already exists.
Many of us are quick to disregard the malicious nature of the deepfakes we’re exposed to, perhaps because it is easier to laugh than to sit with our unease over how ‘real’ and believable they look. But there is a darker side to these AI-generated videos and images, one that lurks beyond the realm of politics and innocent entertainment.
The word ‘deepfake’ was coined in 2017 by a Reddit user who created a subreddit, named ‘deepfakes’, for sharing celebrity face-swapped pornography. Today, it takes less than 25 minutes, at no cost, to make a minute-long pornographic deepfake from nothing but a clear image of the victim’s face.
Although many of us claim that our exposure to deepfakes stays at a political and humorous level, Home Security Heroes’ findings reveal that an overwhelming 98 per cent of deepfakes are pornographic, and 99 per cent of the individuals targeted in them are women.
What’s worse, a recent survey by Crest Advisory found that one in four people are unconcerned with sexually explicit deepfakes being shared online without consent. So, why do approximately a quarter of our population appear unconcerned about the darker side of deepfakes?
Members of the police have commented on the problem’s connection to a wider epidemic of violence against women and girls, with the demographic most likely to believe that sexually explicit deepfakes are morally and legally acceptable being young men with misogynistic views.
Detective Chief Superintendent Claire Hammond from the National Centre for Violence Against Women and Girls (VAWG) and Public Protection, commented on AI’s connection to misogyny in the survey’s report, saying: “Technology companies are complicit in this abuse and have made creating and sharing abusive material as simple as clicking a button.”
She also recognised that “taking away the technology is only part of the solution. Until we address the deeply ingrained drivers of misogyny and harmful attitudes towards women and girls across society, we will not make progress.”
With much of the public oblivious to the fact that creating and sharing pornographic deepfakes is illegal, far more conversations clearly need to be had about the severity of this crime and its growing connection to misogyny.
The emotional impact of the crime also needs to be better understood. Despite the images not being ‘real’, victims targeted in pornographic deepfakes are likely to feel the same psychological impact as those who have experienced sexual assault. They are also highly unlikely to report the crime to the police, out of fear that it won’t be taken seriously.
To change these harrowing statistics, we need to deepen our concern about, and awareness of, how AI can be abused in far darker ways than what we see on our social media feeds. By the end of 2025, a projected eight million deepfakes will have been shared. The overwhelming majority of them won’t be ‘fun’ – instead, they will violate and victimise women in sexually explicit content.
“computer laptop keyboard HP Pavilion Entertainment PC” by GoodNCrazy is licensed under CC BY 2.0.