Laptop Stolen From My Car Last Week—20 Client Ledger Files Exposed. Here's What I Learned About Our Security Blind Spots

Something happened to a colleague of mine last month that’s been keeping me up at night, and I think every Beancount practitioner needs to hear this.

His laptop was stolen from his car outside a coffee shop. Standard smash-and-grab, nothing sophisticated. The laptop had Beancount ledger files for 20 small business clients—five years of transaction history, bank account numbers embedded in importer configs, Social Security numbers in payroll metadata, the works.

Here’s the thing that hit me hardest: plain text files are human readable. If that laptop wasn’t encrypted, the thief (or whoever they sell it to) can literally open any text editor and read every financial transaction. No database password to crack. No application-level security to bypass. Just… open the file.

I immediately went through my own setup and realized I had some serious gaps. Let me walk through the threat scenario and what I found.

The Threat Model for Beancount Practitioners

Physical theft is the #1 risk. We’re not running cloud servers with firewalls. We’re carrying laptops to client meetings, working from coffee shops, leaving bags in cars. The attack surface is literally “steal the hardware.”

What’s exposed in a typical Beancount setup:

  • Ledger files (.beancount) — Full transaction history, account names that often include bank identifiers
  • Importer configs — May contain API keys, bank credentials, file paths revealing client names
  • Git history — EVERYTHING is preserved. Even if you deleted sensitive data from HEAD, it’s in the commit history
  • CSV downloads — Bank statements sitting in Downloads folder with full account numbers
  • Email attachments — Client documents, tax forms, W-2s

My Security Audit Results (Embarrassing)

I checked my own setup. Here’s what I found:

| Security Layer | Status | Notes |
| --- | --- | --- |
| FileVault (disk encryption) | ✅ Enabled | But I didn’t know if my backup drive was encrypted (it wasn’t) |
| Git remote encryption | ❌ Not encrypted | Using a private GitHub repo, but GitHub employees theoretically have access |
| GPG encryption on ledgers | ❌ None | Never set it up |
| Backup encryption | ❌ Not encrypted | Time Machine to external drive, unencrypted |
| 72-hour breach response plan | ❌ Doesn’t exist | I wouldn’t know who to call first |
| Client security disclosure | ❌ Never discussed | Clients don’t know I use plain text files |

That table was a wake-up call.

The Regulatory Reality in 2026

This isn’t just about best practices anymore. The FTC Safeguards Rule now applies to CPA firms and financial service providers—including bookkeepers who handle client financial data. Key requirements:

  • Written Information Security Program (WISP) — You need a documented security program
  • Breach notification to the FTC within 30 days if 500+ consumers are affected
  • State laws vary — California’s 2026 update requires notification within 30 days; some states are stricter
  • IRS guidance for tax professionals — Specific protocol for reporting data theft

If my colleague’s laptop contained data for clients across multiple states (which mine does—I have clients in TX, CA, IL, and NY), he’d need to comply with EACH state’s notification law. That’s a nightmare of different timelines, different AG offices, different disclosure formats.

The Questions I Can’t Answer Yet

  1. Is FileVault enough? If my MacBook is stolen while sleeping (not powered off), is the encryption actually protecting data? I’ve read conflicting things about cold boot attacks.

  2. Should I GPG-encrypt individual ledger files? This would break my workflow (can’t just run bean-check on encrypted files), but it adds a layer of protection.

  3. Self-hosted Git vs. GitHub private repo — Is GitHub “secure enough” for client financial data? Or do I need a self-hosted Gitea instance with additional encryption?

  4. What goes in an engagement letter about security? Do I disclose to clients that their data lives in plain text files on my laptop? Will that terrify them?

  5. Breach response plan — If my laptop is stolen TODAY, what are literally the first 5 things I should do? In what order?

I’m in the process of building out a proper security posture, but I’d love to hear from others. Especially those of you managing multiple client ledgers.

How secure is YOUR Beancount setup right now? If your laptop was stolen from your car this afternoon, would you be ready?

Be honest. I was embarrassed by my own audit, and I think most of us would be.

Bob, this post needs to be pinned. Seriously. I run a CPA firm with 40+ clients and I guarantee most practitioners reading this have the same gaps you described.

Let me address this from the compliance and liability perspective, because the stakes are higher than most people realize.

The FTC Safeguards Rule Is Not Optional

I want to be crystal clear: if you handle client financial data in any capacity—bookkeeping, tax prep, financial planning—the FTC Safeguards Rule under the Gramm-Leach-Bliley Act almost certainly applies to you. This isn’t just for banks and insurance companies anymore.

As of 2026, the rule requires:

  1. Designate a Qualified Individual to oversee your security program (yes, even if that person is you as a solo practitioner)
  2. Conduct a written risk assessment identifying threats to client data
  3. Implement safeguards including encryption, MFA, access controls
  4. Monitor and test your safeguards regularly
  5. Train your personnel (even if it’s just you training yourself)
  6. Develop an incident response plan — not optional, not “nice to have”
  7. Report breaches to the FTC within 30 days if 500+ consumers affected

The enforcement mechanism has teeth. The FTC can pursue action against firms that fail to comply, and state AGs can pile on with their own penalties.

Your WISP Is Your Legal Shield

A Written Information Security Program isn’t bureaucratic paperwork—it’s your first line of legal defense if a breach occurs. When the regulator asks “what security measures did you have in place?” you need to hand them a document, not say “well, I had FileVault enabled I think.”

For Beancount practitioners, a WISP should specifically address:

  • Data inventory: What client data do you have, where is it stored (ledger files, Git repos, CSV downloads, email), and who has access?
  • Encryption policy: Full disk encryption (FileVault/BitLocker) is the minimum. Document that it’s enabled and verified.
  • Access controls: Strong login password, automatic screen lock after 2 minutes, separate user accounts if multiple people use the machine
  • Backup security: Encrypted backups only. Time Machine to an unencrypted external drive is a compliance violation waiting to happen.
  • Disposal procedures: When you retire a laptop, how do you ensure client data is destroyed?

To Answer Your GPG Question

Should I GPG-encrypt individual ledger files?

For professional use with client data, I’d say the workflow cost is worth it for stored/archived ledgers. But I wouldn’t GPG-encrypt your active working files. Beancount can actually load GPG-encrypted ledgers transparently (it shells out to gpg), but every bean-check run then depends on your GPG agent, and editor integration gets clunky fast.

My approach:

  • Active client ledgers: Rely on full disk encryption (FileVault) + strong login password + automatic lock
  • Archived client ledgers (engagement complete): GPG-encrypt the entire directory before archiving
  • Backups: Always encrypted (encrypted Time Machine or encrypted cloud backup like Arq + B2)
  • Git remotes: Private repos are adequate IF your GitHub account has MFA enabled and you’re not storing raw credentials in configs
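The archive step above is easy to script. Here’s a minimal sketch in Python that builds (but doesn’t run) the tar-and-GPG pipeline; the directory and output names are hypothetical, and symmetric AES-256 with a passphrase means no keyring setup is required:

```python
import shlex

def archive_encrypt_cmd(client_dir: str, out_file: str) -> str:
    """Build a tar | gpg pipeline that encrypts a finished client directory.

    Symmetric AES-256 with a passphrase prompt. Returns the command string
    so you can inspect it before running it in a shell.
    """
    d, o = shlex.quote(client_dir), shlex.quote(out_file)
    return f"tar -czf - {d} | gpg --symmetric --cipher-algo AES256 --output {o}"

# Hypothetical client directory:
print(archive_encrypt_cmd("client_acme_2025", "client_acme_2025.tar.gz.gpg"))
```

After verifying the .gpg archive decrypts correctly, securely delete the plaintext directory—otherwise you’ve encrypted a copy, not the data.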

The Engagement Letter Question

Do I disclose to clients that their data lives in plain text files?

Yes, but frame it carefully. Don’t say “your data is in plain text files on my laptop.” Say:

“Your financial data is maintained using an open-source, auditable accounting system with full version control. All data is protected by industry-standard full-disk encryption, encrypted backups, and multi-factor authentication on all access points. Our security practices comply with FTC Safeguards Rule requirements as documented in our Written Information Security Program, available for your review upon request.”

That’s accurate, professional, and actually positions Beancount’s transparency as a STRENGTH rather than a weakness. Version control = full audit trail. Open source = no vendor lock-in or opacity. Plain text = data portability and long-term accessibility.

Breach Response: First 5 Steps

Since you asked for the literal first 5 things:

  1. Report to law enforcement — File a police report immediately. This documents the theft and starts the legal clock.
  2. Remote wipe if possible — If you have Find My Mac or MDM enabled, wipe the device NOW.
  3. Identify exposed data — List every client whose data was on the device. Note what data types (SSN, bank accounts, tax returns).
  4. Consult legal counsel — Before notifying anyone, talk to a lawyer who knows breach notification law. They’ll help you navigate multi-state requirements.
  5. Begin client notifications — Work with counsel on timing and language. Each state has different requirements.

And step 0, which should have happened BEFORE the theft: have this plan written down and accessible from a device OTHER than the stolen one. Keep a printed copy in your home office safe.

Bob, I’d be happy to share a WISP template I’ve been working on that’s tailored for Beancount practitioners. Still a work in progress but it covers the basics.

This thread is hitting close to home. I went through my own security reckoning about two years ago when I started tracking rental property finances for my family alongside my personal Beancount ledger. The moment I realized I had tenant SSNs (from lease applications) referenced in metadata alongside five years of investment account numbers, I got serious fast.

Let me share what I actually implemented, because I think the practical “how” matters more than the theoretical “should.”

My Current Security Stack (Beancount-Specific)

Layer 1: Full Disk Encryption (the absolute minimum)

FileVault on Mac, BitLocker on Windows, LUKS on Linux. This is non-negotiable and honestly should be enabled by default on every machine in 2026. Bob, to answer your question about sleep mode: on modern Apple Silicon Macs, FileVault encryption is hardware-backed and the keys are stored in the Secure Enclave. The data IS encrypted even when the laptop is sleeping. Cold boot attacks against the T2/M-series chips are not practically feasible for a random thief. So yes, FileVault is enough for the physical theft scenario.

That said, if the laptop is stolen while unlocked and logged in (you step away from a coffee shop table), encryption doesn’t help. Screen lock timeout of 1-2 minutes is essential.

Layer 2: Git Repository Security

I moved away from GitHub for client-sensitive data. Here’s why: GitHub’s private repos are “private” in the sense that only authorized accounts can access them, but GitHub itself (employees, law enforcement requests, security breaches) can access the data. For personal finance tracking, private GitHub is fine. For client data with SSNs and bank accounts, I wanted more control.

My setup:

  • Self-hosted Gitea instance on a VPS with full disk encryption
  • SSH key authentication only (no password auth)
  • Repository-level access controls
  • Automatic encrypted backups to a separate location using restic

Is this overkill for 20 client ledgers? Maybe. But the VPS costs a few dollars a month and I sleep better at night.

Layer 3: Sensitive Data Separation

This is the most impactful change I made. I restructured my Beancount files to separate sensitive data from non-sensitive data:

client_acme/
  main.beancount       # Account structure, transactions (no PII)
  sensitive.beancount  # SSNs, bank account numbers, tax IDs
  importers/
    config.py          # Importer configs (may have credentials)
    credentials.env    # API keys, extracted from config

The sensitive.beancount file is pulled in via Beancount’s include directive but listed in .gitignore, so it never reaches the remote. It exists only on my encrypted local machine and in my encrypted backups. The main ledger file references accounts by name (Assets:ACME:BankOfAmerica:Checking) without embedding actual account numbers.

This means even if someone accesses the Git repo, they get transaction history but NOT the PII that triggers breach notification requirements. The data that matters for accounting (amounts, dates, categorization) is separate from the data that matters for identity theft (SSNs, account numbers).
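Concretely, the split can look something like this. The account names, dates, and the account-number value are invented for illustration; the point is that the PII-bearing open directives live in the gitignored file, which the committed ledger pulls in via include:

```beancount
; main.beancount -- committed to Git; no PII
include "sensitive.beancount"  ; listed in .gitignore, never pushed

2021-01-01 open Expenses:ACME:Payroll USD

2026-01-15 * "ACME payroll run"
  Assets:ACME:BankOfAmerica:Checking  -4200.00 USD
  Expenses:ACME:Payroll

; sensitive.beancount -- gitignored; lives only on the encrypted
; laptop and in encrypted backups
2021-01-01 open Assets:ACME:BankOfAmerica:Checking USD
  account-number: "000123456789"  ; placeholder, not a real number
```

One caveat: a clone without sensitive.beancount won’t load, so anyone pulling the repo needs either that file or a stub with the bare open directives.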

Layer 4: The “Oh Crap” Folder

I keep a folder in my home safe (yes, physical paper) with:

  • List of all clients whose data is on my machines
  • Contact info for each client
  • Which states they’re in (for notification law purposes)
  • My cyber liability insurance policy number and claims phone number
  • A printed breach response checklist
  • Login credentials for Gitea admin (to revoke access remotely)

If my laptop is stolen, I grab that folder. Everything I need to start the response is there without depending on any device that might be compromised.

The Git History Problem

Bob, you raised an excellent point about Git history. This is actually the scariest part of our toolchain for security purposes.

Even if you “delete” a sensitive file from your repo, it lives forever in Git history. The only way to truly remove it is git filter-repo or BFG Repo-Cleaner, which rewrites history. And if you’ve pushed to a remote, you need to force-push and hope nobody cloned the old history.

My rule now: never commit sensitive data to Git in the first place. Use .gitignore religiously. If you accidentally committed a file with SSNs, clean it immediately—don’t wait.

Practical Advice for Getting Started

If anyone reading this thread is starting from zero (no judgment—we all were):

  1. Today: Verify FileVault/BitLocker is enabled. On a Mac, check System Settings > Privacy & Security > FileVault. Takes 5 minutes.
  2. This week: Enable encrypted backups. On Mac, toggle encryption in Time Machine settings. Takes 1 minute.
  3. This month: Audit your Git repos for sensitive data. Search for patterns like SSN formats (XXX-XX-XXXX), account numbers, API keys.
  4. This quarter: Write your WISP (Alice’s template sounds great). Separate sensitive data from your ledger files.
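The audit in step 3 is scriptable. A rough sketch in Python — the patterns are illustrative starting points, not an exhaustive PII detector, and this scans only the working tree (for Git history, run it against `git log -p` output or use a dedicated tool like gitleaks):

```python
import re
from pathlib import Path

# Illustrative patterns; tune for your own data. These will false-positive
# on phone-like strings and will miss unformatted SSNs.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "long_account_number": re.compile(r"\b\d{10,17}\b"),
    "aws_style_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of every pattern that matches somewhere in text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

def scan_tree(root: str, exts: tuple[str, ...] = (".beancount", ".csv", ".py")) -> dict:
    """Walk a directory and report which files contain suspicious patterns."""
    hits = {}
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            found = scan_text(path.read_text(errors="ignore"))
            if found:
                hits[str(path)] = found
    return hits
```

Anything this flags in a file you’ve committed means the Git-history cleanup problem from earlier in the thread now applies to you.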

Don’t try to do everything at once. Start with encryption because it’s the highest-impact, lowest-effort change. Then layer on additional protections as you go.

The perfect is the enemy of the good here. FileVault alone would have prevented Bob’s colleague’s worst-case scenario.

Former IRS auditor here. I want to add the tax professional-specific angle because the notification requirements for us are different and in some ways MORE demanding than for general businesses.

IRS Data Theft Protocol for Tax Professionals

If you prepare tax returns (even as a side service alongside bookkeeping), the IRS has a specific protocol that kicks in the moment you suspect client data has been compromised. This is separate from and in addition to FTC and state requirements.

IRS steps you MUST take:

  1. Report to IRS immediately — Contact the IRS Stakeholder Liaison for your state. They have a dedicated process for tax professional data theft. The IRS wants to know so they can flag affected taxpayer accounts and watch for fraudulent returns filed using stolen identities.

  2. File Form 14039 (Identity Theft Affidavit) — Each affected taxpayer should file this, but YOU need to coordinate that process and provide them the information they need.

  3. Report to state tax authorities — Every state where you filed returns for affected clients needs to be notified separately. Most state departments of revenue have their own identity theft units.

  4. Contact the IRS Criminal Investigation division if you believe the theft was targeted (not random). Tax professional data is valuable specifically because it enables fraudulent refund claims.

The IRS takes this seriously because a stolen laptop with 20 clients’ tax data can lead to 20 fraudulent tax returns filed before the legitimate ones, potentially diverting hundreds of thousands in refunds.

The Multi-State Notification Nightmare

Bob mentioned having clients across TX, CA, IL, and NY. Let me walk through what notification actually looks like for those four states specifically, because the differences are jarring:

| State | Notification Deadline | AG Notification Required? | Special Requirements |
| --- | --- | --- | --- |
| California | 30 days (as of 2026) | Yes, within 15 days | Must use specific template language |
| Texas | 60 days | Yes, if 250+ residents | Must include specific types of information |
| Illinois | As soon as practical | Yes, if 500+ residents | AG can investigate even for smaller breaches |
| New York | As expeditiously as possible | Yes, always | Must also notify NY Division of State Police |

Clients in four states = four different notification letters, four different AG offices, four different timelines. And if you miss any of these, you’ve compounded the breach with a compliance violation.

The Engagement Letter Language You Need

Alice’s framing language is good, but I want to add something specific that I now include in ALL my engagement letters. This isn’t about transparency for transparency’s sake—it’s about limiting your liability exposure:

“DATA SECURITY PRACTICES: [Firm Name] maintains client data using encrypted storage systems with full disk encryption, encrypted backups, and multi-factor authentication. In the event of a security incident affecting your data, you will be notified in accordance with applicable federal and state laws. [Firm Name] maintains cyber liability insurance coverage for data security incidents. A copy of our Written Information Security Program is available upon request.”

Key additions vs. Alice’s version:

  • Explicitly mentions insurance — clients want to know they’re covered if something goes wrong
  • References notification commitment — preemptively sets expectation that you’ll follow the law
  • Mentions WISP by name — demonstrates you have a formal program, not just ad hoc practices

Cyber Liability Insurance: The Elephant in the Room

Nobody in this thread has mentioned cyber liability insurance yet, and this is arguably the most important financial protection you can have.

A data breach involving 20 clients across 4 states could easily cost:

  • Legal counsel: breach notification attorney fees
  • Notification costs: letters and credit monitoring for affected clients
  • Regulatory defense: if a state AG investigates
  • Business interruption: lost revenue while you’re handling the crisis instead of serving clients
  • Client attrition: some clients WILL leave, even if you handled it perfectly

Total exposure for a solo practitioner: easily enough to end a small practice.

Cyber liability insurance typically costs a small fraction of that per year for a solo bookkeeper or small CPA firm. It covers breach notification costs, legal defense, regulatory fines, and sometimes even PR crisis management.

If you don’t have this insurance, getting it should be priority #1—before encrypted backups, before self-hosted Git, before anything else. Because encryption prevents breaches, but insurance prevents the breach from destroying your business.

One More Thing: Tax Season Timing

I want to flag something nobody thinks about: when the breach occurs matters enormously for tax professionals. A laptop stolen in January or February—peak filing season—could lead to fraudulent returns being filed before you even finish preparing the legitimate ones. The window for damage is measured in days, not months.

During tax season, I run even tighter security: shorter screen lock timeouts, I never leave my laptop in a car, and I keep an offline backup of all client data updated nightly. The cost of a breach during filing season is an order of magnitude higher than during the off-season.