
AI Payment Fraud in 2026: How Deepfakes and Voice Cloning Are Bypassing Finance Controls

In February 2024, a finance employee at a multinational firm in Hong Kong transferred $25 million to fraudsters. He had just come off a video call with what appeared to be the company's CFO and several senior colleagues. All of them — the CFO, every colleague on screen — were deepfakes generated from publicly available footage. The employee had no reason to doubt what he saw. He authorised the transfer. The money was gone within hours.

That case — now referred to simply as the Arup incident — was not a one-off. It was a proof of concept that has since been replicated at lower cost, with less technical skill, and at far greater scale. The barrier to creating a convincing deepfake video has collapsed. Voice cloning now requires as little as three seconds of audio. AI-generated emails match your CFO's writing style precisely because they were trained on communications your CFO actually wrote.

Finance teams are facing a threat that their existing controls were never designed to stop. Phone call verification, video confirmation, email sender checks — all of these rely on the assumption that the person on the other end is a real human being. That assumption no longer holds.

This article covers what AI payment fraud looks like in practice in 2026, why it is bypassing the controls finance teams rely on, and — critically — where the one defence AI cannot defeat actually sits.


$40B projected annual losses from AI-driven fraud by 2027, up from $12.3B in 2023 (Deloitte)
71% of organisations report an increase in AI-powered fraud attempts over the past 12 months (Trustpair 2026)
3 sec of audio is all a fraudster needs to clone a person's voice convincingly (FS-ISAC)

What changed — why AI broke the controls finance teams relied on

For decades, the primary defence against payment fraud in corporate finance was human verification. If an unusual payment request arrived, you called the person who sent it. If a vendor updated their bank details, you confirmed it with a known contact. If an executive asked for an urgent wire, you validated it face to face or on a video call. These controls worked because impersonating a person convincingly — in real time, across multiple channels — required resources and proximity that most fraudsters didn't have.

Generative AI removed that barrier completely.

What finance teams used to trust

The verification methods that finance operations teams built their processes around assumed human authenticity as the baseline. Phone calls confirmed identity because voices are unique. Video calls confirmed presence because faces are hard to fake in real time. Emails from known domains confirmed the sender. These weren't naive assumptions — they were reasonable controls for the threat environment that existed.

What AI can now replicate

Every one of those signals is now compromised. Voice cloning technology can produce a convincing replica of any person's voice from as little as three seconds of publicly available audio — a clip from a podcast, an earnings call recording, a LinkedIn video. Real-time deepfake video tools can overlay a synthetic face onto a live video call, complete with natural lip sync and expression. AI language models trained on a person's emails and communications can generate text that is indistinguishable from the genuine article — matching tone, vocabulary, sentence structure, and even specific quirks of phrasing.

The result is that a fraudster with a target, a laptop, and a $50 monthly subscription to a Fraud-as-a-Service platform can impersonate your CFO on a phone call, your supplier in an email, and a colleague on a Zoom call — all without any system access, any hacking, and any technical skill beyond knowing how to use commercial tools.

🚨 A phone call from your CFO is no longer verification. AI can replicate a voice from three seconds of audio. A video call showing your CEO's face is no longer verification. Deepfake video calls are a documented fraud vector with confirmed multi-million dollar losses. The procedures your team trusts need to reflect this reality.

The fraud-as-a-service problem

What makes this particularly alarming is not just the capability — it is the accessibility. These attacks are no longer limited to nation-state threat actors or sophisticated criminal organisations. For as little as $50 per month, low-skilled actors can access enterprise-grade phishing kits, voice cloning tools, deepfake generation platforms, automated targeting scripts, and mule network access — all bundled into subscription services that operate like legitimate SaaS businesses, complete with dashboards, customer support, and refund policies.

The Fraud-as-a-Service economy means every small business, every mid-market AP team, and every finance director who has ever appeared on a company website or earnings call is now a viable target. The cost of attacking you is lower than ever. The payoff from a successful attack — a median of $120,000 per incident — makes it highly profitable.

How fraudsters find and build their targets

Before a single deepfake is generated or a voice clone is made, a fraudster spends time on reconnaissance. This phase — identifying the right target, gathering the right material, mapping the right relationships — is what makes AI-assisted fraud so difficult to detect. By the time the attack reaches your AP team, the fraudster already knows more about your organisation's payment workflows than most of your employees do.

The raw material for these attacks is almost entirely public. Your CFO's voice is on the quarterly earnings call recording, freely available on your investor relations page. Your CEO's face and mannerisms are in the conference keynote video on YouTube. Your AP team lead's name, role, and direct contact email are on LinkedIn. Your supplier relationships — who you work with, on what projects, in what capacity — are visible through press releases, case studies, LinkedIn connections, and company announcements. A fraudster building a targeted attack against your organisation does not need to breach anything. They need a browser and a few hours.

The typical reconnaissance phase of an AI payment fraud attack covers four areas:

  1. Identifying the payment decision-makers. Who in your organisation can authorise a wire transfer? Who processes vendor bank changes? LinkedIn company pages, email signatures in public correspondence, and company website team pages answer this within minutes.
  2. Gathering voice and video samples. Earnings calls, conference appearances, webinar recordings, YouTube videos, podcast appearances — any public audio or video of your CFO, CEO, or key supplier contacts provides the training data for a voice clone or deepfake model. The more public-facing the person, the more material is available. Three seconds is the technical minimum. Most targets provide hours.
  3. Mapping supplier relationships. Which vendors do you pay regularly? What are their typical invoice amounts? What project names or contract references would an email need to cite to appear credible? Press releases, case studies, LinkedIn posts from suppliers, and company announcements provide this context in detail.
  4. Identifying timing vulnerabilities. End of quarter, financial year-end, large acquisition announcements, executive travel announcements — all create windows where unusual payment requests are less likely to be questioned. Fraudsters monitor these signals and time their attacks accordingly.
⚠️ Your public professional presence is attack surface. This is not a reason to stop publishing earnings calls or remove your team from LinkedIn. It is a reason to ensure your verification controls do not rely on recognising the person making a request — because the fraudster has done their homework. The verification step that cannot be bypassed by reconnaissance is the one that checks the destination account, not the identity of the requester.


The three AI attack patterns hitting finance teams hardest in 2026

AI payment fraud is not one attack. It is a family of attacks that share a common mechanism — impersonation using synthetic media — but exploit different moments in the payment workflow. Understanding which pattern targets which stage of your process tells you exactly where your controls need to change.

The deepfake CFO call

This is the Arup attack pattern. A finance employee receives an invitation to an urgent, confidential video call — a strategic acquisition, a regulatory matter, a sensitive supplier negotiation. On the call are the CFO, perhaps a legal counsel, perhaps a senior colleague. All appear on screen. All speak naturally. All are deepfakes generated from publicly available footage.

The attack works because it combines three elements that individually might trigger scepticism but together overwhelm it: visual confirmation (they can see the person), audio confirmation (they recognise the voice), and social proof (multiple people are on the call, all behaving normally). The employee is asked to authorise a wire transfer. They comply. The call ends. There was no real human being on the other side.

In early 2026, a mid-market manufacturing company's AP manager received a video call from what appeared to be the CFO requesting an urgent $340,000 wire to a new vendor account. The video and voice quality were convincing. The manager processed the wire. The CFO had not made the call.

Voice cloning vendor impersonation

This attack targets the bank detail change process — the highest-risk event in any AP workflow, and the one most commonly exploited by AI-assisted fraud. A supplier calls your AP team. The voice sounds exactly like the right person — the same regional accent, the same slightly informal tone, the same way they always open a call. They are updating their bank account details. They have the right invoice references. They know the payment terms. They ask for the change to be processed before the next payment run.

The AP team member updates the vendor master file. The next payment run processes. Every subsequent payment to that supplier goes to the fraudster's account. The real supplier follows up weeks later about missing invoices. By then, multiple payments have already left.

The voice was cloned from a short clip of the supplier speaking at an industry event, available publicly on YouTube. Three seconds of audio. That was all that was needed.

AI-generated BEC at scale

Business Email Compromise was already the dominant payment fraud vector before AI — BEC attacks accounted for 73% of reported cyber incidents in 2024. AI has not created BEC. It has industrialised it.

AI tools like WormGPT analyse a target's publicly available communications, professional profiles, and writing samples to generate emails that match their tone and style precisely. The email references real projects by correct name, cites actual invoice numbers, uses the vocabulary and sentence rhythm of the person it impersonates. It arrives from a lookalike domain — one letter different, one character swapped — that passes a visual check. It asks AP to redirect a payment to a new account, citing a plausible reason. There are no grammatical errors. There is no generic phrasing. There are no traditional red flags. The email passes spam filters because AI-generated text does not use the patterns those filters were trained to catch.
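The counter to lookalike domains is a system-level comparison rather than a human glance. The sketch below is a minimal illustration in Python; the vendor list and similarity threshold are assumptions for the example, not a product feature. It flags sender domains that are close to, but not exactly, a domain already on the approved vendor list:

```python
# Flag sender domains that are suspiciously close to, but not exactly,
# a domain already on the approved vendor list.
from difflib import SequenceMatcher

APPROVED_DOMAINS = {"acme-supplies.com", "northwind-logistics.co.uk"}  # illustrative

def is_lookalike(sender_domain: str, threshold: float = 0.85) -> bool:
    """Return True for a near-match of an approved domain that is not an
    exact match, e.g. one character swapped or transposed."""
    sender_domain = sender_domain.lower().strip()
    if sender_domain in APPROVED_DOMAINS:
        return False  # exact match: a known vendor domain
    return any(
        SequenceMatcher(None, sender_domain, known).ratio() >= threshold
        for known in APPROVED_DOMAINS
    )

# "acme-suppiles.com" passes a visual check but is flagged here,
# so the email is routed for manual review rather than silently trusted.
print(is_lookalike("acme-suppiles.com"))   # True
print(is_lookalike("acme-supplies.com"))   # False
```

A check like this runs before the message ever reaches an AP inbox rule, which is what "system-level domain verification, not human visual inspection" means in practice.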

"AI dynamically personalises emails to fit company tone, mentions actual account numbers, or references recent projects. This realism means even vigilant staff can fall for urgent requests to reroute wire transfers or change vendor payment details." — Turning Numbers, 2025 Financial Fraud Schemes Report

The combined attack — when all three converge

The most sophisticated attacks in 2026 don't use a single channel. They build credibility across multiple channels before the payment request arrives — each stage reinforcing the last, until the victim is so thoroughly convinced that the verification step feels unnecessary.

A typical combined attack unfolds in stages. It begins with an AI-generated email from what appears to be a known supplier, mentioning a legitimate project and flagging that their banking details will be updated shortly due to a change in banking provider. The email is professionally written, references a real project name, and comes from a domain that passes a visual check. It creates no urgency — it simply primes the AP team to expect the change.

A few days later, a call arrives from the supplier's voice — or rather, a clone of it. The caller confirms the bank change, cites the earlier email, provides the new account details, and mentions the invoice currently in queue. Everything matches. The tone is right. The AP team member makes the update.

Then a final email arrives with the updated remittance details and a polite request to confirm the change before the next payment run. It looks like a standard supplier follow-up. The AP team member confirms. The payment processes. The money leaves.

At no point did any single element of that attack look wrong. Each interaction confirmed the previous one. The attack succeeded not despite the AP team member's diligence — but because of it. They followed the process they were trained to follow. The process just didn't include the one step that would have caught it: independently verifying that the new account belonged to the supplier.

How AI payment fraud attacks are built — from reconnaissance to payment. All three attack types share the same anatomy; the only difference is the synthetic media used.

Stage 1: Reconnaissance. LinkedIn, earnings calls, the company website. Hours of work; no system access needed.
Stage 2: Synthetic media. Voice clone, deepfake video, AI-written email. Three seconds of audio and $50-a-month tooling.
Stage 3: Impersonation. The CFO call, the supplier email, the video call. Feels completely real and bypasses human judgment.
Stage 4: Payment instruction. A bank detail change or wire authorisation, approved by finance. Every human check has been passed.
Stage 5: Destination account. The one thing AI cannot fake, and the one thing that can be verified independently. Catch it here with an account ownership check.

Stages 1–4 are defeated by AI. Stage 5 cannot be — the account either belongs to the vendor or it doesn't.

Why the bank account is the one thing AI can't fake

Here is what every one of these attacks has in common — the deepfake CFO call, the voice-cloned vendor impersonation, the AI-generated BEC email: the money still has to go somewhere. A real bank account. One that exists in the real world, is registered to a real entity, and can be independently verified.

A fraudster can clone any voice. They can generate any face. They can write any email. They can impersonate any person across any channel. But they cannot deepfake a bank account. They cannot voice-clone an IBAN. The destination account — the one place in every AI fraud attack where the synthetic media ends and physical financial reality begins — is the single point that AI cannot manufacture.

This is the gap that almost no existing guidance on AI fraud identifies. Every article, every advisory, every KPMG report focuses on detecting the deepfake — better liveness checks, AI detection tools, employee training. These are worthwhile. But they address the impersonation, not the payment. If the impersonation succeeds — if the voice clone convinced your AP team, if the deepfake video passed the call — those controls have already failed. The money is about to leave.

Account ownership verification works at a different point in the chain. It doesn't try to detect the synthetic media. It verifies, independently and in real time, that the account the money is about to reach belongs to the legal entity it claims to belong to. If a voice clone successfully convinced your AP team to update a bank account, account ownership verification catches the fraudulent account before the payment processes — regardless of how convincing the social engineering was.

The social engineering may have succeeded. The fraud hasn't.
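In practice, this control is a gate at the point of payment release: the ownership check runs against independent banking data, and anything other than a match stops the payment. The sketch below is a minimal illustration of that gate in Python; the function name, return values, and fields are assumptions standing in for whichever verification provider you use, not MonitorPay's actual API.

```python
# Minimal sketch of an account-ownership gate at the point of payment release.
# `verify_account_ownership` is a hypothetical stand-in for an independent
# banking-data lookup; its name and return values are assumptions.
from dataclasses import dataclass

@dataclass
class Payment:
    vendor_legal_name: str   # legal entity named in your vendor master file
    iban: str                # destination account from the payment instruction
    amount: float

def verify_account_ownership(legal_name: str, iban: str) -> str:
    """Query an independent banking-data source and return one of
    'match', 'mismatch', or 'unverifiable'."""
    raise NotImplementedError("wire this up to your verification provider")

def hold_for_manual_review(payment: Payment, reason: str) -> None:
    # Stub: route the payment to an AP exception queue and notify approvers.
    print(f"HOLD {payment.vendor_legal_name} / {payment.iban}: {reason}")

def release_payment(payment: Payment) -> bool:
    # The gate: no independent match on account ownership, no payment,
    # regardless of how convincing the instruction that created it was.
    result = verify_account_ownership(payment.vendor_legal_name, payment.iban)
    if result != "match":
        hold_for_manual_review(payment, reason=result)
        return False
    return True  # hand off to the payment run
```

The point of the design is that the gate never inspects the instruction itself, so it does not matter whether the instruction arrived as a deepfake call, a cloned voice, or a flawless email.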

The defence that AI cannot bypass: Account ownership verification confirms via independent banking data that an account number and routing details belong to the named legal entity. No deepfake, no voice clone, no AI-generated email can change what an independent banking registry says about who owns an account. This is the specific control that closes the gap AI fraud exploits.

Verify the destination account before any payment leaves — regardless of how the instruction arrived.
MonitorPay's Account Ownership Verification and Payee Name Verification confirm in real time that the account a payment is heading to belongs to the entity named in your records. No deepfake can change that result. 190+ countries. No ERP dependency.
Request a demo

The controls that work — and the ones that no longer do

The most dangerous thing a finance team can do right now is assume their existing fraud controls cover the AI threat. Most of them were designed for a world where impersonation required physical presence or system access. That world is gone.

| Control | What it was designed to stop | Why AI defeats it | What replaces it |
|---|---|---|---|
| Phone call verification (using a number from the request) | Confirm the person requesting the payment is real | The fraudster provides their own callback number, so you call them | Call using a number from your own records, never from the request |
| Video call confirmation | Visual confirmation of identity before authorising high-value payments | Real-time deepfake video overlays a synthetic face onto a live feed; the human detection rate is 24.5% | Treat video as context, not verification, the same as email |
| Email sender domain check | Confirm the email came from the right person or company | AI-generated emails from lookalike domains (one character different) pass visual inspection and spam filters | DMARC enforcement plus system-level domain verification, not human visual inspection |
| Human judgment ("does this feel right?") | Employee intuition as a fraud filter | Deepfakes are designed to feel right; voice clones are indistinguishable; AI emails have no red flags | Process-level controls that apply regardless of how convincing the request feels |
| Annual fraud awareness training | Teach employees to spot phishing and impersonation | AI fraud doesn't look like what employees were trained to spot: no poor grammar, no generic phrasing | Scenario-based, role-specific drills where the goal is to follow the procedure, not detect the AI |
| Vendor onboarding verification (one-time) | Confirm the vendor is legitimate at the point of setup | Bank details can be changed after onboarding via a voice-cloned call or AI-generated email | Re-verify account ownership on every bank detail change, plus continuous monitoring |
| Account ownership verification (independent banking data) | Payments to accounts that do not belong to the named vendor: the account either belongs to the vendor or it doesn't | AI cannot defeat this; no deepfake changes what an independent banking registry records | This is the control. It works even when everything else has already failed |
⚠️ The verification gap that AI exploits: most finance teams verify identity before authorising a payment. Very few verify the destination account independently. AI fraud targets the first verification and sails past the second — because in most organisations the second doesn't exist. Adding account ownership verification as a mandatory final check before payment release closes the specific gap AI attacks are designed to exploit.

The cost of doing nothing — made specific

The $40 billion projection and the $25 million Arup case are important for framing the threat. But for most finance leaders, the relevant number is not the global total — it is what a single incident costs their organisation.

The cost of a single AI fraud incident vs. the cost of preventing it. Without account verification: a median loss of $120,000 per fraud incident (AFP 2025), plus investigation costs and reputational damage, and only 22% of victims recover most of what was stolen. With account verification: a real-time API call at the point of payment, running in milliseconds with no manual process, at a small fraction of that cost. One prevented incident funds years of checks.

The median loss per vendor fraud or BEC incident in 2024 was $120,000, according to AFP data. That is not the worst case — that is the middle of the distribution. Incidents involving deepfake video calls and executive impersonation tend to be significantly higher, because the level of trust generated by a convincing deepfake is proportional to the size of the payment request it can support.

Against that number, the cost of running an account ownership verification check at the point of every payment or bank detail change is orders of magnitude smaller. This is not a technology investment with uncertain ROI. A single prevented incident pays for years of verification checks. The question is not whether account verification is worth the cost. It is why it is not already mandatory in every AP workflow.
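To put numbers on that under an illustrative assumption (per-check pricing varies by provider and volume): at $1 per verification, the $120,000 median incident loss pays for 120,000 checks. An AP team running 2,000 payments and bank-detail changes a month would take five years to spend that much on verification.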

💡 47% of US companies lost more than $10 million to fraud in 2024, according to Trustpair data. For mid-market organisations without dedicated fraud teams, a single large incident does not appear as a line item — it appears in annual results, triggers board-level review, and in some cases results in personal liability for finance leadership where controls were demonstrably absent.

A practical checklist for finance teams

The following steps address AI payment fraud specifically — not generic cybersecurity hygiene. They are ordered by impact, with the highest-value controls first.

  1. Make account ownership verification a mandatory final step before payment release. For all new vendor accounts and any bank detail changes, run an independent account ownership check via a banking data source — not a callback, not a form submission, not an email reply. The payment does not leave until this check returns a match. (A sketch of how this is enforced in the system follows the checklist.)
  2. Establish a "no bank changes by email" policy — and enforce it in the system. Bank detail changes require a formal request through a verified channel. Any change received by email is logged but not processed until independently verified through your own records and an account ownership check.
  3. Source all verification phone numbers from your own records. Remove any number provided in the change request from the verification process. Your vendor master file, your internal directory, or a public official source — never the request itself.
  4. Require dual approval for all bank detail changes, enforced at the system level. Not an email chain, not a verbal sign-off. A logged, system-enforced two-person approval with named approvers and timestamps.
  5. Add a standing "urgency = more scrutiny, not less" rule to AP policy. Document it explicitly. Communicate it to every AP team member. Any request that creates time pressure to skip a verification step gets escalated, not expedited.
  6. Re-verify existing vendor accounts before large payment runs. A vendor that was legitimate six months ago may have had their account details compromised since. Continuous monitoring or pre-payment re-verification for accounts above a defined threshold catches this before the payment leaves.
  7. Run scenario-based training, not awareness sessions. Give your AP team a simulated voice-clone call. Give them a simulated AI-generated email from the CFO. The goal is not to help them spot AI — they won't reliably. The goal is to ensure they know the verification procedure applies regardless of how convincing the request feels.
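Steps 1, 2 and 4 only hold if the system refuses to apply a change that skips them. The sketch below shows one way to express that rule in code; the field names, channel values, and the shape of the ownership result are illustrative assumptions, not a specific ERP's API.

```python
# Illustrative enforcement of checklist steps 1, 2 and 4: a bank-detail change
# is applied only when it did not arrive by email, carries two named approvals,
# and an independent account-ownership check has returned a match.
from dataclasses import dataclass, field

@dataclass
class BankDetailChange:
    vendor_legal_name: str
    new_iban: str
    received_via: str                    # e.g. "portal", "phone", "email"
    approvals: set[str] = field(default_factory=set)

def approve(change: BankDetailChange, approver: str) -> None:
    # A real system would log the approver's identity and a timestamp;
    # here it simply records the named approver.
    change.approvals.add(approver)

def can_apply(change: BankDetailChange, ownership_result: str) -> bool:
    if change.received_via == "email":
        return False                     # step 2: never processed straight from email
    if len(change.approvals) < 2:
        return False                     # step 4: dual, system-enforced approval
    return ownership_result == "match"   # step 1: independent ownership check
```

The design choice that matters is that the vendor master file cannot be updated through any path that bypasses this function, so policy and enforcement are the same thing.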

Frequently asked questions

What is AI payment fraud?

AI payment fraud refers to fraud attacks on payment workflows that use artificial intelligence to generate or enhance the impersonation elements — synthetic voice, deepfake video, AI-written emails, or fabricated identities — in order to convince finance teams to authorise fraudulent payments. The underlying fraud goal is the same as traditional BEC or vendor fraud: redirect money to a fraudster-controlled account. What AI changes is the believability, the scale, and the barrier to entry of the attacks.

How does voice cloning fraud against finance teams work?

A fraudster identifies a target — a supplier's finance contact, a CFO, an AP team lead — and sources a short audio clip of their voice from publicly available material: a conference recording, a podcast appearance, an earnings call, a LinkedIn video. Commercial AI tools can generate a convincing voice clone from as little as three seconds of clear audio. The fraudster then uses this clone to call the victim company's AP or treasury team, impersonating the supplier or executive, and requests a bank detail change or payment authorisation. The voice is indistinguishable from the real person to a human listener.

What happened in the Arup deepfake fraud case?

In February 2024, a finance employee at Arup — a global engineering firm — transferred $25 million to fraudsters after participating in a video conference call where all other participants, including what appeared to be the company's CFO and multiple senior colleagues, were deepfake avatars generated from publicly available footage. The employee attended the call, received payment instructions, and authorised the wire. The fraud was only discovered when he followed up with the real CFO separately. It remains the largest single confirmed AI-enabled payment fraud incident on record.

Can fraudsters really fake a live video call?

Yes — and in most cases, you won't know from watching the call. Real-time deepfake video technology can overlay a synthetic face onto a live video feed with natural lip sync and realistic expression. Researchers have found that human detection rates for high-quality deepfake video are around 24.5% — meaning trained observers correctly identify them less than a quarter of the time. The practical implication is that visual confirmation via video call is no longer a reliable verification method for high-value payment instructions. Treat video call confirmation the same as email confirmation: useful context, not independent verification.

How big are the losses from AI-driven payment fraud?

Deloitte's Center for Financial Services projects fraud losses attributable to generative AI will reach $40 billion annually by 2027, up from $12.3 billion in 2023 — a compound annual growth rate of 32%. In North America alone, losses exceeded $200 million in Q1 2025. BEC attacks — the primary delivery mechanism for AI-enhanced payment fraud — cost businesses $2.9 billion in 2023 according to the FBI's IC3, with total exposed losses between 2013 and 2023 exceeding $55 billion globally. These figures significantly understate the actual cost because the majority of incidents go unreported.

What is BEC 2.0?

BEC 2.0 refers to the AI-enhanced evolution of Business Email Compromise attacks. Classic BEC used human-written emails impersonating executives or vendors, often detectable by generic language, poor grammar, or slightly off phrasing. BEC 2.0 uses generative AI trained on real communications to produce emails that are grammatically perfect, tonally accurate, and contextually specific — referencing real projects, real invoice numbers, and real relationships. It also extends beyond email to include voice cloning (phone-based BEC), deepfake video (video conference BEC), and multi-channel attacks that use email, text, voice, and video in combination to build credibility before the payment request arrives.

Can employee training stop AI payment fraud?

Training helps — but it cannot be the primary control. Human detection rates for high-quality AI-generated content are poor: 24.5% for deepfake video, and most people cannot distinguish a good voice clone from the real person at all. More importantly, AI fraud is explicitly designed to defeat human judgment — it creates urgency, familiarity, social proof, and authority that override analytical thinking. The realistic goal of training is not to enable employees to detect AI attacks. It is to ensure they follow verification procedures even when an attack feels convincing. The verification procedure — specifically account ownership verification — is what catches the fraud, not the employee's ability to recognise it.

What is account ownership verification, and why does it stop AI fraud?

Account ownership verification confirms, via an independent banking data source, that a specific bank account number and routing details belong to the legal entity named in your records. It is independent of the payment instruction — it does not verify who asked for the payment, it verifies where the payment is going. This is why it works against AI fraud specifically: regardless of how convincing the voice clone, the deepfake, or the AI-generated email was, if the destination account has been changed to one owned by a fraudster, the ownership check returns a mismatch and the payment is stopped before it leaves. The social engineering may have succeeded. The fraud hasn't. MonitorPay's Account Ownership Verification provides this check in real time for 190+ countries.

Do Nacha's 2026 fraud monitoring rules cover AI fraud?

Nacha's 2026 fraud monitoring rules do not mention AI or deepfakes explicitly, but they directly address the outcome AI fraud produces: payments authorised under False Pretenses — where the payment was technically approved but only because the payer was deceived about the identity of the payee or the ownership of the destination account. The rules require all non-consumer ACH Originators to implement documented, risk-based processes to detect this category of fraud. Account ownership verification at the point of payment — the specific control that catches AI-facilitated bank account substitution — is the mechanism Nacha's False Pretenses requirement is designed to mandate.

What should we do if we receive a suspicious payment instruction?

If you receive a suspicious payment instruction — regardless of channel — do not act on it. Place a hold on any pending payment to the relevant account. Contact the supposed sender using a number or email from your own records, not from the suspicious communication. If a payment has already been processed, call your bank's fraud line immediately to initiate a recall — time is critical. Preserve all evidence: the original email, phone call logs, any screen recordings, and approval trails. File a report with the FBI's IC3 (US), Action Fraud (UK), or your national cybercrime reporting body. Notify your cyber insurer within your policy's notification window.

MonitorPay · Account Verification Infrastructure

Stop AI fraud at the one point it can't fake — the destination account. Deepfakes beat authentication. They don't beat account ownership verification. MonitorPay confirms in real time that the account any payment is heading to belongs to the entity it claims to be — regardless of how convincing the instruction was. 190+ countries. No ERP dependency.