Deepfakes are rising fast. Learn how AI deepfake detection works, the risks of synthetic media, and how researchers fight AI-generated fraud and scams.

The Rise of Deepfakes
AI deepfake detection is now part of everyday digital safety as fake videos and voices grow harder to spot. Imagine receiving a video call from your bank manager asking you to urgently transfer funds. The face looks real. The voice sounds right. But it never happened, at least not in the way you think. Welcome to the age of deepfakes, where seeing is no longer believing.
Deepfakes are hyper-realistic synthetic media (images, videos, or audio) generated by artificial intelligence (AI). They can make a person appear to say or do things they never did, and today they are more convincing than ever. But here is the twist: the same AI technology that creates these fakes can also be trained to catch them through AI deepfake detection systems.
Before GenAI Was Cool: A Personal Story
Back in 2022, I was researching how to spot AI-generated fake faces at Swinburne University of Technology Sarawak Campus (Swinburne Sarawak). At that time, generative AI was not yet widely known, and a type of AI model called a Generative Adversarial Network (GAN) was leading the way in creating realistic fake images.
GANs work like a competition between two AI models: one tries to make fake images, and the other tries to detect them. As they “battle,” both improve: the faker gets better at fooling, and the detector gets better at catching fakes.
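To make that “battle” concrete, here is a deliberately tiny sketch of the adversarial loop. This is not a real image GAN: the data is one-dimensional, the “faker” only learns where to centre its samples, and the “detector” is a single logistic unit. All the numbers and names are invented for illustration, but the update rules follow the standard GAN objective (detector scores real samples high and fakes low; faker nudges its output toward scores the detector considers real).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: numbers centred at 3.0. The faker starts producing
# numbers centred at 0.0 and must learn to imitate the real ones.
REAL_MEAN, REAL_STD = 3.0, 0.5

w, b = 0.0, 0.0   # detector D(x) = sigmoid(w*x + b)
c = 0.0           # faker   G(z) = 0.5*z + c (only the centre is learned)
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = 0.5 * z + c

    # Detector update: minimise -log D(real) - log(1 - D(fake)).
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)
    b -= lr * np.mean(-(1 - d_real) + d_fake)

    # Faker update: minimise -log D(fake), i.e. fool the detector.
    d_fake = sigmoid(w * fake + b)
    c -= lr * np.mean(-(1 - d_fake) * w)

fake_mean = float(np.mean(0.5 * rng.normal(0.0, 1.0, 10000) + c))
print(f"faker's output centre after training: {fake_mean:.2f} "
      f"(real data centre: {REAL_MEAN})")
```

As training runs, the faker's output drifts from 0 toward the real data's centre: exactly the dynamic that makes full-scale GAN faces so convincing, except there the "centre" being learned is the entire statistical structure of human faces.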
The research team trained a separate AI model to recognise faces produced by well-known GAN systems such as StyleGAN2, StyleGAN3, ProGAN, and DCGAN: the same kinds of models behind many of the shockingly realistic fake faces seen online today. Even then, the results showed how convincing these synthetic faces already were. And that was back in 2022.
Deepfake Threat is Increasing
Since then, the leap in generative AI has been staggering. Tools that once required specialist computing power and weeks of training now produce photorealistic fake videos in minutes, accessible to almost anyone with a laptop.
What was once a research novelty is now a tool misused for financial scams, misinformation, and identity fraud.
Globally, fraud cases involving AI-generated voice and video impersonation have surged. In one widely reported case in the United Kingdom, a company lost over USD 240,000 after an employee was tricked by an AI-cloned voice of their CEO.
Closer to home, Malaysians have encountered AI-generated audio used in scam calls, a tactic increasingly reported alongside the notorious Macau scam variants that continue to plague the country.
The risk is not only financial. Deepfakes spread false narratives, damage reputations, and create non-consensual intimate imagery, causing real harm to real people.
AI vs AI: The Deepfake Detection Arms Race
The encouraging news is that researchers are fighting back, using AI itself as the weapon. AI deepfake detection systems analyse subtle inconsistencies that the human eye misses: unnatural eye blinking patterns, irregular skin texture under certain lighting, or pixel-level artefacts around the hairline. These signals, invisible to most viewers, act as fingerprints left behind by the generation process.
My own research focused on training a classifier to distinguish real faces from GAN-generated ones by learning these hidden patterns. The challenge, however, is that detection models must constantly evolve. Each time a new generation method emerges, detectors require retraining. It is an ongoing arms race that requires sustained research investment and collaboration between universities, industry, and policymakers.
What Can We Do to Protect Ourselves from Deepfakes?
Technology alone does not solve this problem. Media literacy matters just as much. Here are some practical steps everyone can take:
- Pause before you share. If a video or image feels shocking or too dramatic, verify it through trusted news sources before forwarding it.
- Look for tell-tale signs. Unnatural blinking, mismatched lip movements, blurry edges around the face, or inconsistent lighting can indicate a fake.
- Use verification tools. Platforms like Microsoft’s Video Authenticator and various open-source deepfake detectors are increasingly available for public use.
- Be sceptical of urgency. Scam calls and videos often create panic to override better judgement. Slow down.
The Road Ahead for AI and Deepfake Detection
The deepfake challenge is not going away, but neither are the researchers working to counter it. At institutions like Swinburne Sarawak, students engage with these problems, building the technical foundations to contribute to a safer digital future. The next generation of AI researchers, cybersecurity experts, and responsible technologists may well be sitting in a classroom in Kuching right now.
AI is neither inherently good nor bad; it reflects how we choose to use it. The same intelligence that creates convincing fakes also powers AI deepfake detection systems that expose them. The question is whether we invest in that work, support the research, and build the awareness required to stay one step ahead.
So the next time someone sends you a shocking video, ask yourself: can I trust what I see?
Dr Khaled Elkarazle is a researcher and academic with the Faculty of Engineering, Computing and Science. His research focuses on developing efficient and interpretable deep learning architectures, particularly attention mechanisms, transformer-based models, and image segmentation for healthcare and digital forensics applications. Dr Khaled is contactable at [email protected]