Artificial intelligence and deepfakes
Artificial intelligence is sweeping the globe. It is not a new concept by any means; however, its capabilities are reaching a stage that was previously realised only in science fiction.

Artificial intelligence is technology that simulates tasks a human mind would typically perform. It is often used to improve efficiency, solve problems and assist with creativity, without the effort those tasks would normally require.
It comes in many forms, some of which most of us have already encountered in the physical and virtual worlds. These range from chatbots and virtual assistants such as ChatGPT and Siri, to the algorithms that select the content on your social media feeds, to physical robots used in manufacturing, among many others.
Everyday use of the technology seems to be in a honeymoon stage, where the ability to type a short text prompt and generate an image from those words is captivating the world. Beyond this, image-to-image generation can blend pre-existing images into convincing invented scenes, and image-to-video generation goes further still, animating imagery limited only by the creator’s imagination.
What would normally demand considerable talent and skill, or even a dedicated team working over several years, can now be produced in seconds on a mobile phone. Although the technology has not reached the level of concern of a T-800 from Terminator or HAL 9000 from 2001: A Space Odyssey, it carries a danger those films never foresaw or acknowledged, particularly in the content these tools are used to generate.
Deepfakes are essentially content, whether audio, imagery or video, that falsely depicts individuals doing or saying something they never did. The term derives from a Reddit user of the same name who, in 2017, posted sexually explicit videos that falsely featured celebrities, using open-source software to swap their faces into existing videos and imagery. As artificial intelligence advances, the ability to create believable fake content only grows.
That is not to say there cannot be positive applications of deepfakes. The entertainment industry uses the technology to digitally de-age actors or superimpose their faces onto other bodies. In education and training, deepfakes can create lifelike scenarios or bring historical figures back to life in a realistic but controlled environment, allowing for greater immersion and engagement. The technology can also benefit people who have lost their voice to disease by creating a synthetic version of it; a notable example is the restoration of Val Kilmer’s voice.
Used with ill intent, however, the same technology can cause irreparable damage to a person’s reputation, manipulate public perception and result in widespread disinformation.
More specifically, deepfake tools can be used to generate child abuse material. Headlines around the world describe charges laid over AI-generated child abuse material, school students caught selling such content, and individuals creating sexual deepfakes of their colleagues. In January 2025, the Australian Federal Police issued a warning to parents regarding an increase in the number of sexually explicit deepfakes depicting children, “including a rise in students creating material such as deepfakes for a variety of reasons, including to harass or embarrass classmates.”
Deepfakes are becoming harder to detect, but there are often giveaways visible simply by observing an image closely. These include backgrounds that appear unnatural, hands and eyes that look distorted, sound that does not sync with lip movements, and irregular movement. This guide provides further insights: DeepFakes, Can You Spot Them?
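Beyond these visual checks, readers comfortable with a little scripting can make a complementary first pass by looking at an image’s embedded metadata rather than its pixels. The short Python sketch below, using the Pillow library, is only illustrative and not a reliable detector: some AI generators leave tell-tale text chunks or EXIF fields behind, but these are easily stripped, so finding nothing proves nothing. The file name is a placeholder.

```python
# Minimal illustrative sketch (not a reliable detector): print any metadata an
# image carries, since some AI generators embed their settings or tool name.
# Absence of such traces does not mean an image is genuine.
from PIL import Image            # Pillow: pip install Pillow
from PIL.ExifTags import TAGS

def inspect_image_metadata(path: str) -> None:
    img = Image.open(path)

    # PNG text chunks: some generation tools store the text prompt and
    # settings under keys such as "parameters".
    for key, value in img.info.items():
        if isinstance(value, str):
            print(f"info[{key!r}]: {value[:120]}")

    # EXIF fields: the "Software" tag sometimes names the tool that
    # produced or last edited the image.
    for tag_id, value in img.getexif().items():
        print(f"EXIF {TAGS.get(tag_id, tag_id)}: {value}")

inspect_image_metadata("suspect_image.png")  # hypothetical file name
```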
Offences that respond to the harm caused by creating deepfake material exist in both Commonwealth and NSW legislation. It is important to understand that NSW legislation does not specifically address deepfakes; the provisions introduced in 2017 instead revolve around intimate images that have been recorded or altered through software.
The eSafety Commissioner is the Australian Government’s independent online safety regulator. Their purpose is to help safeguard Australians at risk of online harm and to promote safer online experiences. Individuals whose images or videos have been altered and posted online can contact eSafety for help to have them removed. eSafety also investigates image-based abuse, which includes intimate images that have been digitally altered, such as deepfakes.
eSafety can also help to remove online communication to or about a child that is seriously threatening, seriously intimidating, seriously harassing or seriously humiliating (known as cyberbullying), as well as illegal and restricted material that shows or encourages the sexual abuse of children, terrorism or other acts of extreme violence.
Resources:
- Everyday examples and applications of artificial intelligence (AI) | Tableau
- What is the history of artificial intelligence (AI)? | Tableau
- Sexually explicit deepfakes.pdf
- What Are Deepfakes and How Are They Created? | IEEE Spectrum
- Deepfakes | What are deepfakes? | eSafety Commissioner
- Why Deepfakes Are A Net Positive For Humanity
- Positive Use Cases of “Deepfakes” | Wilson Center
- AFP warns parents over rise in AI-generated child abuse material | Australian Federal Police