AI Impersonation: What It Is, How to Identify It, and How to Stop It

Learn how to spot AI impersonation scams, protect yourself, and prevent fraud effectively.


AI impersonation is a growing threat to businesses. The use of AI technology makes impersonation attacks that were previously easy to identify far more sophisticated and believable, which leads to major issues for the targeted businesses.

In this article, Titan Security Europe outlines what AI impersonation is, how to identify an attack, and how to prevent one.

What is AI Impersonation?

AI impersonation is a cybercrime where criminals use AI-powered social engineering to pose as a known person or organisation to steal confidential data or money from a business.

With AI-powered technology, impersonators can mimic a person’s writing style or voice – even copying their intonation – in emails or voice notes used to make contact with someone within an organisation.

For criminals who know how to use AI technology, this is an easy feat.

This is how AI impersonation – commonly known as “deepfakes” – works:

1.  AI technology is used to analyse and learn a person’s voice, speech patterns and intonations by analysing audio and video recordings obtained via social media, news stories or public speaking; or by analysing emails sent to and from the target.

2.  After the technology has learned enough of the target’s voice and speech patterns to effectively mimic them, the cybercriminal makes contact with their victim.

3.  AI-powered technology is then used to send an email or voice note that appears to be from the target, requesting confidential data or payment of an invoice from the victim.

4.  The victim, having no reason to believe the scammer is anything but a trusted colleague or third-party vendor, will then hand over the data or money – causing a major security breach.

AI technology is rapidly improving, making these attacks more and more difficult to identify and prevent.

What Is the Threat of Deepfakes?

AI Deepfakes pose a major threat to businesses, including:

Fraud: Cybercriminals use AI deepfakes to impersonate CEOs and other senior executives, leading employees of a company to assume they are speaking to the person in charge.

  • Through this, they can successfully convince employees to transfer large sums of money to them under the guise of a needed business expense or the payment of an invoice.

  • Insurance companies are at major risk of this – deepfake technology can even be used to fabricate photos of accidents to attach to otherwise baseless insurance claims. This is becoming a major issue due to recent moves towards automated claims processing, which cut out the human middleman.

  • Customers are also at risk – cybercriminals can pose as a representative of a company the customer deals with and obtain money from them directly.

Example: Arup’s Deepfake CFO

At the start of 2024, British engineering company Arup was the victim of a deepfake fraud attack resulting in a loss of almost £20 million.

  • A video conference call was held, attended by deepfake impersonations of the company’s CFO and other employees. Through this video call, a real employee was tricked into making 15 transfers totalling HK$200 million to five different accounts in Hong Kong.

  • It was a major financial loss, caused by a single employee falling for a highly realistic scam.

Misinformation: Through deepfakes, cybercriminals can intentionally spread false information regarding organisations.

  • This information can be incredibly damaging to the reputation of a company, not to mention their relationships with partners and clients alike.

  • Misinformation is a major risk to businesses in today’s world. Thanks to social media, misinformation – often dubbed “fake news” – can go viral in seconds. Many who see a news clipping or headline do not bother to check the credibility of the source; they take what they see on Facebook or X as gospel.

  • The spreading of misinformation can lead to a company rapidly losing clients, can prevent mergers and business deals, and overall lead to a drastic drop in business.

Example: Voice Cloning Targets Advertising Group

Mark Read, the CEO of WPP – the world’s largest advertising group – was the victim of a deepfake scam that used AI-powered voice cloning and YouTube footage.

  • Cybercriminals used a publicly available image of Read to create a WhatsApp account and set up a Microsoft Teams meeting that appeared to be between Read and another senior executive of the company. They attempted to solicit money and personal details from an agency leader by asking them to set up a new business.

  • During the meeting, the criminals used voice cloning and YouTube footage of the other executive, and impersonated Read through the meeting’s chat window.

  • The attack failed – but it could have caused major financial and reputational damage had it gone through.

Data Leaks: Another major target for cybercriminals is a business’ confidential data.

  • By posing as a senior figure – or simply a colleague the victim knows – cybercriminals can use deepfake emails or voice notes to request confidential data that the person being impersonated would plausibly have access to.

  • Once a cybercriminal has this data, they can leak it, sell it on or use it to blackmail the company.

  • The data can also fuel further deepfake attacks – for example, allowing the criminals to pose as other people within the company or to gain access to bank accounts directly.

Malware: Cybercriminals can use deepfake accounts to gain access to company systems and plant malware within them.

  • Malware can lead to data breaches, disruption to operations and reputational damages.

  • It can make it easier to steal sensitive information, or halt work entirely until the system is restored, leading to lost business for the company.

  • Between data leaks and operational disruption, malware attacks can lead to massive financial and reputational loss for a company.

How Can You Identify a Deepfake?

Deepfakes can be difficult to identify. Here are some signs employees can look out for to spot a scam:

Unusual Requests: Employees should be on the lookout for requests that seem out of the ordinary. Colleagues and CEOs who already have access to specific data should not need to ask for it to be sent to them, and employees should not be sending money for invoices they are not aware of.

Example: Hong Kong CEO Deepfake

At the start of 2024, a multinational Hong Kong company was the victim of a sophisticated deepfake attack resulting in major financial loss.

  • The fraudsters created a deepfake through voice cloning and fake footage to set up a video call between an employee and what appeared to be the company’s CFO.

  • The fake CFO instructed the employee to transfer $35 million to a list of “confidential” bank accounts, claiming that the money was for an upcoming corporate acquisition.

  • Everything about the call felt routine: the face and voice were familiar and the instructions were clear. The employee transferred the money, and it vanished.

Unusual Email Addresses: If contact is made from an email address with an odd request, employees should treat the address with scepticism. It is possible that cybercriminals have gained full access to the email account – but in many cases the address is only imitated, leaving subtle discrepancies between the address used and the colleague’s usual one, as sketched below.
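As an illustration of what such a check might look like, the sketch below compares a sender’s domain against a list of trusted domains and flags near-matches. The domain names and the similarity threshold are assumptions made for the example – this is a rough aid, not a substitute for a proper email security gateway.

```python
import difflib

# Domains the organisation actually uses (assumed example values).
TRUSTED_DOMAINS = {"titansecurity.example", "partnervendor.example"}

def classify_sender(address: str, threshold: float = 0.85) -> str:
    """Classify a sender address as trusted, a possible lookalike, or unknown."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    for trusted in TRUSTED_DOMAINS:
        # A high similarity score against a trusted domain, without an exact
        # match, suggests a deliberately spoofed lookalike address.
        if difflib.SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return "possible lookalike - verify before replying"
    return "unknown sender"

print(classify_sender("ceo@titansecurlty.example"))  # possible lookalike
print(classify_sender("ceo@titansecurity.example"))  # trusted
```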

Unnatural Facial Expressions: In deepfake video messages, employees should keep an eye out for any discrepancies in facial expressions. AI technology can struggle to create realistic eye movements, lip movements may not sync with the audio, and facial features can appear distorted.

Example: AI-Generated Elon Musk

In 2024, a fake video circulated on YouTube and other platforms that appeared to show Tesla CEO Elon Musk, in an interview, promoting a cryptocurrency investment opportunity.

  • The fraudsters created a high-quality deepfake of Elon Musk using old interview footage spliced with AI-generated content. The video mimicked CNBC’s branding to add authenticity.

  • It ran as a YouTube ad after the platform’s ad approval process failed to spot the fake.

  • Viewers fell for the scam, sending crypto to the given addresses and never seeing the money again.

Although this was exposed as a scam, deepfakes of CEOs targeting customers can lead to major reputational damage for businesses.

Audio Discrepancies: AI-generated voices can sound automated or robotic. While AI technology is scarily good at mimicking intonation, there may be slight discrepancies in tone or pacing that employees can pick up on.

What Can You Do?

To prevent AI impersonation, there are several things businesses can do – from training your teams to recognise AI impersonation tactics to installing verification systems that make it harder for criminals to gain access to data.

Educating Your Team

Educating your company’s team in the detection and prevention of AI-based impersonation tactics is one way to minimise the risk of an attack – especially as around two-thirds of all cyber attacks (AI impersonation included) happen as a result of employee negligence.

Train your team to:

Verify Requests: If a member of your team receives an email requesting payment, data or anything else sensitive, train them to verify the request through a separate, trusted channel.

  • Suggest that employees make a phone call to the colleague or third-party vendor who apparently requested the information to verify whether it really was them. If it was not, the employee can assume it was an attempted impersonation attack.

  • It may also be wise to establish a safe word that only employees know. Should a request for data or money come in, the employee receiving the request would ask for the safe word; only a genuine employee would be able to give it.

Research Emerging Scams: Employees should take some responsibility for staying up to date with emerging AI scams, as these evolve extremely quickly.

  • Your company could hold monthly workshops or training days in which the latest scams and prevention tactics are discussed.

  • Keep a constant conversation around AI scams going – weekly update emails may also be wise.

Report and Escalate Suspected Scams: No matter how minor or insignificant something suspicious seems, encourage employees to raise it with their higher-ups.

  • Employees should forward on any emails or links that seem suspicious. Things to look out for include messages from new geographical locations, unusual sending times, or requests that cannot be confirmed as coming from someone trusted – a simple automated check for one of these signals is sketched below.
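To show how the “unusual sending time” signal could be automated, here is a minimal sketch using Python’s standard email library to flag messages whose Date header falls outside normal working hours. The business hours and the sample message are assumed values for illustration only.

```python
from email import message_from_string
from email.utils import parsedate_to_datetime

# Assumed example policy: internal requests are normally sent 08:00-18:00.
BUSINESS_HOURS = range(8, 18)

def unusual_send_time(raw_email: str) -> bool:
    """Return True if the message's Date header falls outside business hours."""
    msg = message_from_string(raw_email)
    sent = parsedate_to_datetime(msg["Date"])
    return sent.hour not in BUSINESS_HOURS

sample = (
    "From: cfo@company.example\n"
    "Date: Tue, 03 Sep 2024 02:17:00 +0100\n"
    "Subject: Urgent invoice\n"
    "\n"
    "Please pay this today.\n"
)
print(unusual_send_time(sample))  # True - sent at 02:17, worth escalating
```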

Implementing Additional Security

It is not enough to just train your team. Installing effective security measures for data protection and account security company-wide ensures better protection against AI scams.

Multi-Factor Authentication: In order to access data, payments or accounts, all employees and third-party vendors should have to go through MFA. This can be an additional password, a personal security question, a fingerprint, facial recognition or a one-time code.

  • MFA prevents data from being accessed by outsiders – even ones claiming to be someone else. Should an AI scam succeed and an unknowing employee send data files to a criminal, the multi-factor authentication needed to open those files can still stop the criminal from getting in (a minimal one-time-code check is sketched below).
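As a simple illustration of one possible second factor, the sketch below uses the open-source pyotp library to generate and verify a time-based one-time code. The enrolment flow and variable names are assumptions for the example; a real MFA rollout would rely on a dedicated identity provider rather than a standalone script.

```python
import pyotp  # pip install pyotp

# Each employee is enrolled with their own secret, stored securely server-side
# and in their authenticator app (assumed example flow).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Provisioning secret for the authenticator app:", secret)

# At login, the employee types the 6-digit code shown in their app.
submitted_code = totp.now()  # stand-in for user input in this sketch

if totp.verify(submitted_code):
    print("Second factor accepted - grant access to the file.")
else:
    print("Code rejected - deny access and log the attempt.")
```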

Endpoint Detection and Response: EDR monitors threats in real time, analysing data such as incoming emails for suspicious behaviour and offering immediate response actions to these threats.

  • EDR can isolate any infected endpoints, quarantine files it deems malicious, provide warnings to employees and more.

  • AI scams evolve in real time, but with EDR monitoring threats in real time too, defences can evolve alongside the scams and catch suspicious activity that employees miss.

Account Security: Implementing company-wide account security through MFA, password protection and strict access rules helps prevent AI impersonation attacks by stopping criminals from gaining enough access to employee accounts to build convincing AI-generated deepfakes.

  • MFA should be in operation every time an employee logs into their account. Rules should be in place requiring employees to log in only on their working days and to log out of their accounts whenever they are away from their device.

  • Passwords should be strong – at least 10 characters, no personal information, and a mix of letters, numbers and special characters. Each password should be unique, used for only one account, and stored in a password manager that only the employee can access (a short sketch of a policy check follows).
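The password rules above can also be enforced in software. The function below is a minimal sketch of a check built around the policy described in this section; the exact rules and the sample inputs are assumptions for illustration.

```python
import string

def meets_policy(password: str, personal_info: list[str]) -> bool:
    """Check a password against the policy described above:
    at least 10 characters, mixed character classes, no personal details."""
    long_enough = len(password) >= 10
    has_letter = any(c.isalpha() for c in password)
    has_digit = any(c.isdigit() for c in password)
    has_special = any(c in string.punctuation for c in password)
    no_personal = not any(
        info.lower() in password.lower() for info in personal_info if info
    )
    return all([long_enough, has_letter, has_digit, has_special, no_personal])

print(meets_policy("Summer2024!", ["jane", "smith"]))  # True
print(meets_policy("janesmith1", ["jane", "smith"]))   # False - personal info
```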

Data Encryption: With data encryption, data is scrambled so that it is unreadable – even to AI tools – unless the person accessing it can provide the password, key or other authentication needed to decrypt it.

  • If data is encrypted, then even if an AI impersonation attack succeeds to the point of a criminal obtaining the data, it will be unreadable to them without credentials they do not have (see the sketch below).
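As an illustration of encryption at rest, the sketch below uses the Fernet symmetric scheme from the widely used Python cryptography package. The key handling is simplified for the example – in practice the key would live in a secrets manager, never alongside the data.

```python
from cryptography.fernet import Fernet, InvalidToken  # pip install cryptography

key = Fernet.generate_key()  # in practice, kept in a secrets manager
cipher = Fernet(key)

token = cipher.encrypt(b"Q3 payroll export - confidential")
print(token)  # ciphertext is unreadable without the key

# A criminal who obtains the file but not the key gets nothing usable.
try:
    Fernet(Fernet.generate_key()).decrypt(token)
except InvalidToken:
    print("Decryption failed without the correct key.")

# The legitimate key holder can still read the data.
print(cipher.decrypt(token).decode())
```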

AI technology is growing at an alarming rate, and so is the threat of AI deepfake attacks. By following the steps in this article, businesses can prepare their employees to identify attacks and prevent them from happening.
