I’ve been thinking a lot about the industry conversation around AI in accounting, and there’s this line I keep coming across in the literature: “AI tools often struggle with nuance, particularly in making judgment calls on sensitive financial strategies.”
At first, I nodded along. Of course AI can’t replicate human judgment. But then I started asking myself harder questions: if human nuance is supposedly our sustainable competitive advantage, why aren’t we intentionally developing it? And a more uncomfortable one: are we even working on the parts of accounting that require nuance, or are we spending most of our time on tasks AI could handle?
What I Mean By “Nuance”
AI can process data at scale, but it fundamentally can’t navigate:
- Ambiguous situations - Is this home office expense “ordinary and necessary” for this particular business? Answering that requires understanding context, intent, industry norms, and the client’s actual work patterns.
- Ethical gray areas - There’s a legal-but-questionable tax strategy that would save the client $8K. Should you recommend it? AI can present the option; it can’t weigh the client’s risk tolerance, their long-term reputation, or the broader ethical implications.
- Client-specific tradeoffs - Save $5K in taxes but increase audit risk by 15%. For one client (risk-averse, values peace of mind), that’s a terrible deal. For another (sophisticated, with strong documentation), it’s worth considering. AI doesn’t know the difference.
- Unstated preferences - A client says “minimize my taxes” but really means “minimize my taxes without creating stress, complexity, or requiring me to understand arcane rules.” The stated goal and the actual goal are different.
The Positioning Question
Here’s where it gets interesting for competitive positioning:
If AI commoditizes technical skills (data processing, calculations, basic compliance), then human nuance becomes the premium offering. Which means we need to be world-class at:
- Deep client understanding - Know their risk tolerance, family situation, long-term goals, decision-making style
- Professional judgment - Balance competing objectives: tax savings vs. simplicity vs. audit risk vs. time cost
- Communication skills - Explain tradeoffs in accessible language without talking down to clients
- Ethical reasoning - Navigate gray areas with integrity, even when it’s not the most profitable path
The Uncomfortable Question for Beancount Users
This is where I get uncomfortable with my own practice:
Does plain text accounting develop nuance skills or technical skills?
When I spend 4 hours writing a Python importer to automatically categorize transactions, I’m developing technical skills. When I spend 2 hours building a custom SQL query to generate a specific tax report, I’m developing technical skills.
Those are valuable! But they’re not the “human nuance” skills that supposedly differentiate us from AI.
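For concreteness, here’s the flavor of that importer work - a toy rule-based categorizer sketch. The payee strings and account names are hypothetical examples, not a real chart of accounts or the actual Beancount importer API:

```python
# Toy rule-based transaction categorizer, illustrating the kind of logic
# a custom importer accumulates. Rules and accounts are made-up examples.

CATEGORY_RULES = [
    # (substring to match in the payee, expense account to assign)
    ("AWS", "Expenses:Cloud:Hosting"),
    ("UNITED AIR", "Expenses:Travel:Flights"),
    ("STARBUCKS", "Expenses:Meals:Coffee"),
]

DEFAULT_ACCOUNT = "Expenses:Uncategorized"

def categorize(payee: str) -> str:
    """Return the first matching expense account for a payee string."""
    normalized = payee.upper()
    for needle, account in CATEGORY_RULES:
        if needle in normalized:
            return account
    return DEFAULT_ACCOUNT

if __name__ == "__main__":
    for payee in ["AWS EMEA", "Starbucks #1234", "ACME Consulting"]:
        print(f"{payee:20} -> {categorize(payee)}")
```

Every hour spent growing that rule list is pure technical skill: useful, but exactly the kind of pattern matching AI already does well.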
When do I practice:
- Having difficult conversations with clients about financial tradeoffs?
- Developing pattern recognition for “this client says X but needs Y”?
- Building the judgment to know when a legal strategy is technically compliant but situationally wrong?
The Revenue Implications
I’ve been tracking my billable hours, and here’s what’s sobering:
- $50/hour work (data entry, transaction categorization): Increasingly commoditized. Clients question the value. “Why does this take so long when AI is instant?”
- $150/hour work (technical compliance, tax prep, report generation): Important, but increasingly automated. Still valuable in 2026, but for how long?
- $300/hour work (strategic judgment, scenario planning, advising on major financial decisions): Irreplaceable human nuance. Clients don’t question the rate because they know AI can’t do this.
When I look at my time breakdown honestly, I’m spending maybe 20% of my time on the $300/hour nuance work, and 80% on work that’s either already commoditized or will be in 2-3 years.
That ratio needs to flip.
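The math behind that claim is simple. A back-of-the-envelope sketch, assuming (my assumption, since I haven’t split it precisely) that the 80% divides evenly between the $50 and $150 tiers:

```python
# Blended hourly rate under two time allocations. Rates come from the
# tiers above; the even split of the 80% is an assumed illustration.

RATES = {"commodity": 50, "technical": 150, "judgment": 300}

def blended_rate(mix: dict) -> float:
    """Weighted average rate for a time allocation (fractions sum to 1)."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9
    return sum(RATES[tier] * share for tier, share in mix.items())

today   = {"commodity": 0.4, "technical": 0.4, "judgment": 0.2}
flipped = {"commodity": 0.1, "technical": 0.1, "judgment": 0.8}

print(blended_rate(today))    # 140.0
print(blended_rate(flipped))  # 260.0
```

Under those assumptions, flipping the ratio nearly doubles the effective rate on the same hours - which is why this isn’t just a philosophical question.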
Questions for the Community
I’m genuinely trying to figure this out:
- How do you develop judgment and nuance? Is it pure experience (years of pattern recognition)? Mentorship? Deliberate practice? Can it be taught formally or only learned through mistakes?
- Do you have examples of “human nuance” that AI couldn’t replicate? I want to understand what makes it complex beyond “it requires judgment.”
- How do you communicate the value of nuance to clients? When they see AI tools promising instant answers, how do you articulate why the slow, thoughtful, expensive human approach is worth it?
- For Beancount practitioners specifically: Are we building nuance skills or technical skills? Is our time spent on automation actually reducing our ability to command premium rates for judgment work?
I’m asking these questions because I genuinely don’t have the answers. I worry that we’re all nodding along to “human nuance is our competitive advantage” without actually investing in developing it.
What am I missing? How are you thinking about this?