Last Tuesday, I got a Zoom call from what looked like my biggest client—same voice, same mannerisms, even the background of his home office. He needed an urgent $50K wire transfer to close a time-sensitive deal. Something felt off, so I said I’d call him back on his cell to confirm.
Turns out, he was on a flight to Denver and had no idea what I was talking about.
That near-miss shook me because I’ve been in this business for 10 years, and I pride myself on not falling for scams. But this wasn’t some Nigerian prince email—this was a real-time video call that looked and sounded exactly like my client.
The Deepfake Threat Is Real (and Growing Fast)
If you haven’t heard about the Arup case yet: in February 2024, a finance employee in Hong Kong joined a video conference with their CFO and several colleagues from the London office. Perfect video quality, crisp audio, natural conversation. Following what seemed like clear instructions from verified executives, the employee authorized 15 wire transfers totaling $25.6 million.
Every person on that video call was a deepfake.
According to ScamWatch HQ, deepfake video scams surged 700% in 2025. The technology has gotten so good that 1 in 4 Americans have been fooled by AI voice scams. And here’s the scary part: real-time voice cloning now requires only seconds of audio from a public video, conference call, or social media post.
For bookkeepers and accountants, this creates a massive problem: we can’t trust email, we can’t trust phone calls, and now we can’t even trust video.
What This Means for Small Business Bookkeepers
I work with 20+ small businesses, mostly restaurants and retail shops. My clients aren’t Fortune 500 companies with enterprise security teams—they’re busy owners who text me payment requests between shifts, send voice memos instead of emails, and sometimes ask for urgent wire transfers to vendors.
Until last week, I would’ve trusted a video call as verification. Now? I don’t know what to trust.
The research is sobering: 72% of business leaders identify AI fraud as a top operational challenge, yet 80% of companies lack protocols for handling deepfake attacks. We’re all flying blind here.
Using Beancount’s Audit Trail for Verification Documentation
One advantage we have as Beancount users: our transactions can carry arbitrary key-value metadata, a level of documentation most commercial accounting software can't match. When I process a payment now, I document:
```beancount
2026-03-28 * "Wire transfer to ABC Supplies" ^verification-protocol
  verification-method: "Callback to stored phone +1-512-555-1234, spoke with John Smith, confirmed invoice #4521"
  verification-time: "2026-03-28 14:32 CST"
  authorization-channel: "Email confirmation from [email protected] received 14:35 CST"
  Assets:Business:Checking   -50000.00 USD
  Expenses:Inventory          50000.00 USD
```
This creates an audit trail showing I followed dual-channel verification. If something goes wrong, I have documentation proving I did my due diligence.
What I’m Implementing (Starting This Week)
- No more single-channel authorizations - An email request requires a phone callback to a known number; a video-call request requires a separate email confirmation.
- Pre-shared authentication phrases - Stored in Beancount contact metadata; high-risk clients get a code phrase only they and I know.
- Callback-only policy for wire transfers - I don't care if you're on a video call; I'm calling you back on the number I have on file.
- Transaction templates with verification checklists - Beancount templates that force me to document who I contacted, when, and through which channel.
- Quarterly client education - A one-pager to all clients explaining these policies and why they exist.
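The transaction-template idea can be as simple as a helper that refuses to emit a wire-transfer entry until every checklist field is filled in. This is a hypothetical sketch; the `wire_transfer_entry` function and its field names are my own illustration, not a Beancount feature:

```python
# Fields the checklist demands before an entry may be generated (hypothetical names).
REQUIRED = ("verification_method", "verification_time", "authorization_channel")

def wire_transfer_entry(txn_date, payee, amount, debit_account, credit_account, **checks):
    """Build a Beancount wire-transfer entry; raise if any verification step is undocumented."""
    missing = [k for k in REQUIRED if not checks.get(k)]
    if missing:
        raise ValueError(f"verification checklist incomplete: {', '.join(missing)}")
    # Emit transaction-level metadata (Beancount keys use dashes, not underscores).
    meta = "\n".join(f'  {k.replace("_", "-")}: "{checks[k]}"' for k in REQUIRED)
    return (
        f'{txn_date} * "{payee}" ^verification-protocol\n'
        f"{meta}\n"
        f"  {debit_account}  -{amount:.2f} USD\n"
        f"  {credit_account}   {amount:.2f} USD\n"
    )
```

Skipping a step fails loudly instead of producing an undocumented entry, which is exactly the friction I want on a $50K wire.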
Questions for the Community
What verification protocols are you using? Have any of you encountered suspicious requests that turned out to be deepfakes?
How are you documenting verification in Beancount? I’d love to see other approaches to metadata structure.
For CPAs: what liability protection do we need? Should engagement letters specifically address deepfake verification requirements?
Is anyone using technology solutions beyond just process changes? Voice authentication, video watermarking, etc.?
This feels like one of those moments where the profession has to adapt fast. I’d rather over-verify and annoy a client than under-verify and wire $50K to a scammer.
What are you doing to protect yourself and your clients?