The rise in the capabilities of generative AI over recent years has brought a corresponding increase in the use of realistic fake faces and voices overlaid in real time, allowing scammers to supercharge a range of online scams, from romance to employment to tax fraud.
In the first half of 2024, primarily AI-generated 'influencers' targeted adult creators by deepfaking new faces onto their bodies and monetising the resulting videos, while deepfaked versions of the mayors of European capital cities posted videos calling for support for Kyiv.
More recently, real-time AI-generated deepfakes have been used to solicit money from unsuspecting individuals who are romanced online and then sent fraudulent requests for money, for wifi access or cryptocurrency schemes, among others.
Currently, the best way to detect a deepfake is through human vigilance, with individuals taking the time to cross-reference the details of the person they are chatting to and treating any request for money with caution.
However, as deepfakes continue to metastasise, wider familiarity with them may itself come to thwart attempts at scamming.