It rarely takes long before new media technologies are turned to pornography. This was true of the printing press, photography and the early internet. It is also true of generative artificial intelligence (AI).
Face-swapping technology has existed for more than a decade and quickly gave rise to “deepfakes”: fabricated but convincing images and videos of real people. Generative AI has accelerated the spread of deepfake pornography, making it easier to create explicit images and videos of others.
Victims are not limited to celebrities. Deepfake nudes of classmates and teachers have been reported in schools worldwide, sometimes targeting children as young as 11. Image-based abuse is widespread, and victims often say the law offers inadequate protection.
In 2024, Australia amended its criminal code to explicitly include AI-generated pornography under laws prohibiting the distribution of sexual material without consent. Digitally manipulated sexual imagery now falls within the same legal category as genuine photographs or video recordings.
However, gaps remain. The offence primarily covers transmitting such material via a carriage service, such as the internet; there is no clear standalone offence for creating it. Ambiguities also arise because many AI tools operate through online platforms where users upload data and receive generated images in return, and whether that exchange constitutes “transmission” in the legal sense remains uncertain.
The offence also requires proof that the distributor knew the target did not consent, or was reckless as to whether they consented. What counts as “reckless” here is unclear. Someone who produces deepfake pornography without turning their mind to consent at all may well qualify. But if they claim they wrongly assumed the target would consent, on the basis that the imagery is merely fabricated, the legal position remains unsettled.
Since the law does not clearly prohibit private creation and use of deepfake pornography, individuals must rely on moral judgement. Laws often have limited influence over online behaviour. Despite being illegal, internet piracy remains widespread, partly because enforcement is inconsistent and many do not perceive it as morally serious.
Many people instinctively feel that even private use of deepfake pornography is wrong. Yet explaining why is difficult. Society generally does not condemn private sexual fantasies involving acquaintances, celebrities or strangers. The question arises whether computer-assisted fantasies are morally different.
Deepfake pornography is frequently described as a violation of privacy. Victims often report feeling exposed or violated, believing others have effectively “seen them naked”. Such imagery can appear more intrusive than a private mental fantasy.
However, AI-generated images do not reveal genuine personal details about an individual’s body or sexual behaviour. These systems typically place a person’s face onto existing footage or generate synthetic imagery based on generalised patterns. Sexual privacy concerns information unique to an individual, such as distinctive bodily features or personal sexual history. Assumptions based on general human characteristics do not reveal such personal information.
Nevertheless, the absence of a strict privacy violation does not make deepfake pornography harmless. Research and victim testimony indicate that discovering others have viewed fabricated sexual depictions can cause severe psychological and emotional harm. This harm alone provides strong ethical grounds for condemning the use of such technologies.
While AI cannot reveal the truly private aspects of sexual lives, using it to create deepfake pornography remains a morally unjustifiable act that disrespects personal dignity and autonomy.
Koplin is Lecturer in Bioethics, Monash University; Bhatia is Associate Professor in Health Law, Deakin University