The deeper harms of deepfakes: with no control over likeness, potential for abuse is huge

Typically, warnings about the risks posed by deepfakes created with generative AI tools focus on how you might get tricked by a fake boss or politician. But arguably a bigger and far more sinister threat is identity theft – which is to say, what happens when the deepfake is you. While scam calls can empty bank accounts, deepfakes that use a person’s likeness in pornographic, violent or humiliating content can leave deeper psychological scars – especially for women.
A new article from the Ada Lovelace Institute looks at deepfakes from a cultural and social perspective, rather than simply as a financial threat.
“In July 2024, Spanish teenagers were put on probation after generating deepfake nudes of their classmates,” writes Julia Smakman. “The girls portrayed reportedly suffered from anxiety attacks and had been scared to come forward, worried they would be blamed for the images. In South Korea, teenage girls were targeted by explicit deepfake images and described similar anguish.”
The problem is already bad, and it’s about to get a lot worse. Smakman invokes the concept of the uncanny valley, noting the limitations and giveaways in today’s generation of deepfakes. “Up until now, these technical barriers have provided a thin layer of protection against complete digital impersonation.”
To run with the uncanny valley thread: if current deepfakes are our Polar Express – an early attempt at creating real-looking digital avatars – the next generation of models will enable deepfakes on par with today’s best CGI. “AI-generated videos are becoming more realistic, capable of longer-form outputs and improved physics,” Smakman notes. “In December 2024, Google gave a sneak peek of its new model Veo to show how well it is already performing on short videos. The demo provided a glimpse of a near future where distinguishing between authentic and artificial videos will become nearly impossible for the average person.”
‘Shouldn’t you be in control of how you appear to the world?’
Considering how much of the internet is pornography, it takes very little imagination to see how having one’s face stolen and convincingly put onto another body in a video could have catastrophic consequences for young women. A 2019 report estimated that 96 percent of all deepfakes online are pornographic, and 99 percent involve women who did not consent to their likeness being used.
“With advanced AI video generation, the harms of deepfake pornography will evolve from static images to indistinguishable video ‘evidence’ of intimate moments that never occurred, creating lasting psychological trauma and reputational damage,” Smakman writes. “And if history is anything to go by, this will disproportionately hurt women and girls.”
Consider how non-consensual imagery might be used for blackmail. Or how deepfakes have supercharged the evolution of “revenge porn” and child sexual abuse materials. Bad actors typically move much faster than those building defenses and crafting regulations. This has led to what Smakman calls “a complete inadequacy of current safeguards.”
Readers of Biometric Update will know how much effort is presently going into responses to the deepfake threat. But Smakman notes that efforts to date have fallen well short, with tech companies in particular failing to control how their own models are used. “Even with their vast resources and public commitments to safety, tech giants like Microsoft failed to prevent the generation of Taylor Swift deepfakes by their own models.”
At the same time, she says, “the present legal and policy tools are not sufficient to protect people against harms impacting their dignity. While governments across the world have started to pay attention to deepfakes, they still lack regulations, rights and mechanisms to effectively control them.”
Smakman says mounting adequate defenses against the weaponization of synthetic media means taking quick action on two key issues. First, developers should limit the availability of their systems, offering them “only to approved businesses under specific conditions for appropriate use, such as responsible AI licensing practices.”
“In general, tech companies should restrict the release of models and their ability to generate videos of people, until regulators and external auditors have established safeguards for misuse and verified their adequacy.”
Second, “policymakers should pressure tech companies to only release models once these safeguards have been put in place and proven to be reliable and effective.”
The proposed solutions do not inspire great confidence. It has been shown time and again what happens when industry is left to regulate itself, and the AI crusaders of Silicon Valley are particularly rapacious in their appetite for scale. Smakman concedes that hers are imperfect answers. “But they are steps to holding the tech industry accountable and to mitigate the considerable risks that AI video generation brings.”
Researcher finds huge cache of deepfake ‘nudify’ images unprotected online
To those who think the risks are overstated: vpnMentor reports on an investigation by cybersecurity researcher Jeremiah Fowler, who discovered a non-password-protected database containing 93,485 images and JSON files belonging to GenNomis by AI-NOMIS – “an AI company based in South Korea that provides face swapping and ‘nudify’ adult content as well as a marketplace where images can be bought or sold.”
The article says that the 47.8 GB database was not password-protected or encrypted, and included “numerous pornographic images, including what appeared to be disturbing AI-generated portrayals of very young people” and “images of celebrities portrayed as children, including Ariana Grande, the Kardashians, Beyoncé, Michelle Obama, Kristen Stewart and others.” Nearly all of the images were explicit and depicted adult content.
GenNomis is a good example of a tool whose developers are unlikely to exercise polite restraint when asked. The platform enables text-to-image prompts to create unrestricted images, AI personas, face-swap images and more. It supports over 45 distinct art styles, including Realistic, Anime, Cartoon, Vintage, and Cyberpunk, “allowing users to tailor their image creations to specific aesthetic preferences.”
Fowler says that while he did not see any personally identifying information (PII) or user data, “it was a wake-up call for how this technology could potentially be abused by users, and how developers must do more to protect themselves and others. This data breach opens a larger conversation on the entire industry of unrestricted image generation.”
Holding perpetrators accountable will give guidelines teeth: Fowler
It’s hardly just GenNomis offering AI image generators to create pornographic images from text prompts. There are plenty of sites offering “nudify” services that generate images of people in sexually explicit situations. According to Fowler, “any service that provides the ability to face-swap images or bodies using AI without an individual’s knowledge and consent poses serious privacy, ethical, and legal risks. These images can be highly realistic, and it may be humiliating for individuals to be portrayed in such a way without their consent.”
Fowler calls it a “wild west” in terms of deepfake regulation, a high-stakes game in which the chips are human reputations and, sometimes, lives. He notes that there have been numerous cases of individuals and young people taking their own lives over sextortion attempts.
Stronger detection mechanisms and strict verification requirements are badly needed. “In a perfect world, AI providers should have strict guardrails and protections in place to prevent misuse,” Fowler writes, once again betting on the weak hope that people who make “nudify” apps are worried about misuse, rather than counting on it. “Identifying perpetrators and holding them accountable for the content they create should be made easier, allowing service providers to remove harmful content fast.”
The laws are moving, if not fast enough. The U.S. “Take It Down Act,” which aims to criminalize the distribution of non-consensual intimate images, has passed the Senate and is awaiting action in the House of Representatives. “In October 2024, a South Korean court handed down a ten-year prison sentence to the perpetrator of a deepfake sex crime. In March 2025, a teacher in the US was arrested for using artificial intelligence to create fake pornographic videos of his students.” Fowler sees the sector trending toward more effective enforcement of guidelines that may already exist on paper but currently serve as little more than decoration.
“Explicit images of children and any other illegal activities are strictly prohibited on GenNomis – at least on paper. The guidelines also state that posting such content will result in immediate account termination and potential legal action.”
In this case, GenNomis appears to have slunk away voluntarily. Several days after Fowler sent a responsible disclosure notice, the websites of both GenNomis and AI-NOMIS went offline and the database was deleted.
Governments waking up to extent of deepfake threat
A release from the office of the Governor of New Jersey announces the signing of A3540/S2544, a law “establishing civil and criminal penalties for the production and dissemination of deceptive audio or visual media, commonly known as ‘deepfakes.’”
The law classifies “making or distributing deceptive audio or visual media for the furtherance of additional criminal activity” as a crime of the third degree, subject to imprisonment and a fine of up to $30,000.
Governor Phil Murphy says that “while artificial intelligence has proven to be a powerful tool, it must be used responsibly. My Administration is laser-focused on combating misinformation and ensuring media integrity. We stand with the victims of deepfake imagery and will continue to prioritize the safety and well-being of all New Jerseyans.”
Also present at the signing was Francesca Mani, a high school student whose peers created and shared explicit AI-generated images of her, spurring her to become an advocate for AI regulation. Representatives from the Senate, the New Jersey Coalition Against Sexual Assault (NJCASA) and the New Jersey Institute of Technology also expressed support.
Deepfakes were also front and center at an inaugural University of Mississippi symposium this week. A release says the Jordan Center for Journalism Advocacy and Innovation hosted its debut event, “Addressing the Impact of Social Media and Artificial Intelligence on Democracy,” bringing together top journalists, researchers and business leaders to discuss censorship, media literacy, ethics, and the intersection of AI and U.S. law.
Andrea Hickerson, dean of the School of Journalism and New Media, says that while the event brought together people with different political opinions, “we can all agree that lies are bad.”