Picture this: You’re using AI-powered accounting software to streamline your workflow. The AI confidently categorizes your client’s transactions. Your client approves them. You review the high-level numbers, and everything looks reasonable. You file the return.
Six months later: IRS audit. Turns out the AI miscategorized $30,000 in personal expenses as legitimate business deductions. The IRS disallows them. Your client owes back taxes plus penalties—maybe $12,000-$15,000 total.
Who’s liable?
- The CPA who reviewed and signed the return?
- The software vendor whose AI made the error?
- The client who approved the transactions?
I’ve been thinking about this a lot lately as AI becomes standard in our practice. Here’s what’s keeping me up at night:
The Liability Landscape in 2026
After researching this and talking to my E&O insurance broker, here’s the uncomfortable truth: professional standards haven’t changed just because we’re using AI tools.
When I sign a tax return under penalty of perjury, the IRS holds me accountable for its accuracy regardless of what tools I used. “The AI said so” isn’t a defense. The software vendors explicitly disclaim liability for tax penalties in their terms of service. And while the client approved the transactions, they did so relying on our professional judgment.
From a regulatory perspective, we’re in the same position we’ve always been: fully responsible for the work product. The Journal of Accountancy’s February 2026 article on AI risks made this crystal clear—CPAs remain accountable under existing professional standards, and regulators won’t accept “the AI miscategorized it” as an excuse.
The E&O Insurance Problem
Here’s where it gets worse: I just renewed my professional liability insurance, and the carrier added new language about AI exclusions. They’re trying to limit coverage for claims “in any way related, directly or indirectly” to AI usage.
So we might be in this bizarre situation where:
- We’re professionally required to stay current with technology
- AI is becoming industry standard
- But our liability insurance may not cover AI-related errors
My broker and I are still negotiating this, but it’s concerning.
What Constitutes “Reasonable Review”?
This is the question I’m wrestling with: What does “reasonable professional review” mean when AI is doing the categorization?
Is it reasonable to spot-check 10% of transactions? 25%? Do I need to manually review every single one, which would defeat the purpose of automation?
For context, I handle about 80 small-business clients during tax season. AI categorization could save me 200+ hours. But if I need to review every transaction anyway, I’m not actually saving time—I’m just adding an AI step to my existing manual process.
Beancount’s Audit Trail Advantage
One reason I’m exploring Beancount more seriously: the plain text format creates an inherent audit trail. When AI categorizes transactions, I can:
- See exactly what the AI did (not a black box)
- Write queries to spot anomalies (unusually large deductions in new categories; see the example queries below)
- Track changes over time with version control
- Document my review process with comments directly in the ledger (sketched just after this list)
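Here’s a rough sketch of what I mean, using a made-up transaction. The metadata keys (ai-categorized, ai-confidence, reviewed-on) are naming conventions I’m experimenting with, not anything built into Beancount; only the ! and * flags are standard syntax:

```
; Imported with the AI's suggested category; the ! flag means "not yet reviewed"
2026-01-15 ! "Staples" "Office supplies"
  ai-categorized: TRUE
  ai-confidence: 0.87
  Expenses:Office:Supplies   142.50 USD
  Liabilities:CreditCard    -142.50 USD

; After I review it, the flag becomes * and the review is recorded in place
2026-01-15 * "Staples" "Office supplies"
  ai-categorized: TRUE
  ai-confidence: 0.87
  reviewed-on: 2026-01-20
  Expenses:Office:Supplies   142.50 USD
  Liabilities:CreditCard    -142.50 USD
```

Because it’s plain text, the flag change from ! to * lands in version control as a dated commit, which is exactly the kind of review evidence I can’t pull out of a vendor’s “approve” button.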
This feels more defensible than clicking “approve” in proprietary software where I can’t prove what I reviewed.
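And for the anomaly-spotting bullet above, here’s the kind of check I have in mind with bean-query. A minimal sketch: the file name books.beancount is hypothetical, and the queries lean on the flag convention and Expenses account naming from the example above:

```
# Everything the AI categorized that I haven't signed off on yet
bean-query books.beancount "
  SELECT date, payee, account, position
  WHERE flag = '!' AND account ~ 'Expenses:'
  ORDER BY date"

# Unusually large expense postings, regardless of review status
bean-query books.beancount "
  SELECT date, payee, account, position
  WHERE account ~ 'Expenses:' AND number >= 1000
  ORDER BY number DESC"
```

Running checks like these before filing, and keeping the output with the workpapers, feels like a review process I could actually show an examiner.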
Questions for the Community
I’m curious how others are thinking about this:
- Have you modified your engagement letters to address AI usage and where liability sits?
- What’s your review workflow when using AI categorization? How much do you verify?
- Has your E&O insurance carrier asked about AI tools or changed your coverage?
- Do you see Beancount’s transparency as an advantage for professional liability?
- How do you explain this to clients who think AI is magic and don’t understand the risk?
I’m not anti-AI—I think it’s transformative. But I’m trying to use it responsibly while protecting my license and my clients. The liability framework feels unclear right now, and I’d love to hear how others are navigating this.
What am I missing? What’s your approach?