I need to share something that’s been bothering me lately, and I’m hoping this community can help me think through it.
Last month, I was reviewing a new hire’s work—someone fresh out of college who’d been using AI-powered accounting tools throughout their degree. They’d processed three months of client transactions using one of those “smart categorization” systems. Everything looked fine at first glance. Then I noticed something odd: a $12,000 wire transfer categorized as “Office Supplies.”
When I asked about it, they said, “That’s what the AI suggested, so I clicked approve.” No hesitation. No second-guessing. Just complete trust.
That transaction was actually a down payment on equipment: it should have been capitalized as a fixed asset and depreciated, not expensed. Booking it as supplies would have overstated current-period expenses by $12,000 and thrown off the depreciation schedule entirely. When I explained this, they looked genuinely confused: “But the AI is usually right, isn’t it?”
The Training Paradox We’re Facing
Here’s the uncomfortable reality: we’re in the middle of a talent shortage crisis (some surveys put the shortfall at 83%, and CPA candidates are down 27% over the past decade), so we’re hiring junior accountants with less training than ever before. Many of them have never manually categorized transactions. They’ve only used systems that auto-categorize everything.
This creates a dangerous knowledge gap. These folks lack the pattern recognition to spot when AI gets it wrong. They don’t know what “normal” looks like because they’ve never done the tedious, repetitive work of manually categorizing 500 transactions and learning from the mistakes.
The Journal of Accountancy just published an article asking: “How will accountants learn new skills when AI does the work?” It’s a legitimate question. AI is automating the low-risk, repetitive tasks that used to be the training ground for junior staff.
Why Beancount Might Be Part of the Solution
I’ve been thinking about why I trust my Beancount ledger more than I trust most commercial systems, and I think it comes down to explicitness.
When you write a Beancount transaction, you have to think:
```beancount
2026-03-15 * "Office Depot" "Printer paper and toner"
  Expenses:Office:Supplies       127.43 USD
  Liabilities:CreditCard:Chase  -127.43 USD
```
You can’t just click “approve.” You have to type the account names. You have to understand double-entry. You have to make the decision consciously.
That explicitness is a teaching tool. It forces you to think through the categorization rather than accepting a suggestion.
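And the discipline is enforced mechanically, not just by habit. If the postings don’t sum to zero, Beancount refuses the entry outright. A minimal illustration (amounts contrived; the exact error wording may vary):

```beancount
2026-03-15 * "Office Depot" "Printer paper and toner"
  Expenses:Office:Supplies       127.43 USD
  Liabilities:CreditCard:Chase  -127.34 USD
  ; Typo in the second leg: the postings sum to 0.09 USD, not zero,
  ; so bean-check rejects the file with a "does not balance" error
  ; instead of silently recording a lopsided entry.
```

A junior who hits that wall a few times internalizes double-entry in a way that clicking “approve” never teaches.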
Training Approaches I’m Considering
I’m experimenting with a few ideas for training juniors to develop healthy skepticism:
The “100 Transactions Rule”: Require new hires to manually categorize 100 transactions in Beancount before they’re allowed to use any AI automation. Not 10. Not 50. One hundred. Enough to build pattern recognition.
Intentional Error Injection: Periodically slip obviously wrong AI categorizations into their workflow and see if they catch them. Make it a teaching moment, not a gotcha.
Balance Assertions as Checkpoints: Teach them to use Beancount’s balance assertions religiously. If your bank says $5,432.10 and your ledger disagrees, something is wrong. That’s the immune system detecting the problem.
Socratic Questioning: Instead of correcting their mistakes directly, ask leading questions: “Does that expense amount seem typical for that vendor?” “Where do you usually see transactions in that account?” Make them think through the logic.
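To make the balance-assertion checkpoint concrete, here is what I’d have them write at each statement date (account name and figures are illustrative):

```beancount
; Assert what the bank statement says the account held on this date.
; bean-check recomputes the balance from all prior postings and fails
; loudly if the ledger disagrees, pointing at the discrepancy.
2026-03-31 balance Assets:Bank:Checking  5432.10 USD
```

One assertion per statement per account turns reconciliation from a vague habit into an automated tripwire.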
What I Need from This Community
I’m not anti-AI. I use automation myself. But I’m worried we’re creating a generation of accountants who trust algorithms more than they trust their own judgment—because they’ve never developed that judgment in the first place.
So here’s what I’m asking:
- How are you training junior staff in the AI era? What’s working? What’s failing?
- At what point do you trust someone to review AI output? What’s the threshold?
- Are there exercises or workflows that build healthy skepticism without making people paranoid?
- Is Beancount’s explicit syntax actually an advantage here, or am I overthinking it?
This feels like one of those moments where the profession needs to adapt or we’re going to have a crisis in 5 years when nobody can actually do accounting anymore—they can only prompt AI and hope for the best.
Would love to hear your thoughts, especially if you’re dealing with this in your practice or firm.