It's 6pm on the last working day of the month. Actuals are in. The Finance Director wants the pack by 9am. Somewhere in your organisation, someone is sitting in front of a spreadsheet with 40 line items and a blank text box where the commentary should go.
That's the situation. And in 2026, filling that box is still mostly a manual job.
What variance commentary actually is
If you've been on the receiving end, you know what a good commentary looks like. It doesn't list every variance. It tells you what moved the needle, why, and whether you should care. The bad version restates the numbers in prose. The useful version explains them.
Here's a concrete illustration. Same five line items. Same numbers. Two versions of what comes out below the table.

The bad version:

Revenue came in €200k below budget. Cost of sales was €20k favourable. Gross margin was therefore €180k below plan. Operating expenses were €150k above budget. EBITDA was €330k below plan at €1,170k, representing a 22% unfavourable variance.

The useful version:

Revenue was €200k below budget, driven by two enterprise deals that slipped into Q3 — timing, not a pipeline issue. Opex was €150k above plan, largely the Amsterdam office fit-out landing in April rather than May as phased. The EBITDA miss is mostly timing-related; full-year run rate looks intact.

Same numbers. One of those paragraphs earns its place in the pack. The other just describes what the reader already saw.
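For what it's worth, the roll-up behind those figures is plain arithmetic — the part no one needs help with. A quick sketch (figures in €k; the €1,500k budget EBITDA isn't stated directly, it's implied by the actual plus the miss):

```python
# Variance roll-up for the example above. Figures in EUR thousands;
# negative = unfavourable to plan.
revenue_var = -200          # revenue below budget
cogs_var = 20               # cost of sales favourable
opex_var = -150             # operating expenses above budget

gross_margin_var = revenue_var + cogs_var        # -180
ebitda_var = gross_margin_var + opex_var         # -330

actual_ebitda = 1170
budget_ebitda = actual_ebitda - ebitda_var       # 1500 (implied, not stated)
pct_variance = abs(ebitda_var) / budget_ebitda   # 0.22, i.e. 22% unfavourable
```

The maths is trivial; everything after the maths — the slipped deals, the fit-out phasing — is the part that takes the time.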
Why it's still written by hand
Variance commentary is harder to automate than it looks. The reason standard reporting tools — Power BI, Anaplan, SAP, take your pick — produce great tables and terrible narratives is structural. The tools know the numbers. They don't know what the numbers mean in context.
The table knows revenue missed by €200k. It doesn't know about the deal that slipped, or that the CFO already knows about it and doesn't want it in the headline.
Useful variance commentary requires three things that live outside the spreadsheet: context (why did this actually happen), judgment (what's material versus noise this month), and voice (language that sounds like a person who understands the business wrote it, not a system that read a column header).
Every month-end, across finance teams everywhere, someone is doing this manually. In a large company, that's several people, across several reporting packs, for several different audiences. The FD version. The board version. The segment version. Each one slightly different, each one written from scratch.
The time this takes is real. A typical month-end commentary cycle in a finance team of moderate size runs anywhere from half a day to a full one, just on narration — after the numbers are already clean. The work is low judgment, high effort, and repeated every single month. Which is exactly the profile of something worth trying to automate.
What I'm going to build
I've spent seven years in audit, which means I've sat inside the management reporting of banks, insurers, and PE funds as a reviewer rather than a producer. I've read a lot of variance commentary. I've flagged when it was misleading. I've noticed the patterns in what makes the good versions good.
Now I want to see how far a well-designed AI workflow can get toward producing the useful version automatically — and be honest in public about where it falls short.
The plan is a five-post build series. A tool that takes a structured Excel input — actuals vs budget vs prior year — and produces first-draft commentary using the Claude API. Version 1 will be rough. I'll show you the output, the prompt, and where it confidently writes nonsense. Then I'll iterate toward something that could actually sit in a finance team's workflow.
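To give a feel for the shape of it before the architecture post, here's a rough sketch of the workflow. Everything in it is a placeholder — the column names, the helper functions, and the model id are assumptions for illustration, not the final design:

```python
"""Sketch of the planned workflow: Excel in, draft commentary out.
Column names, function names, and the model id are illustrative placeholders."""
import pandas as pd


def compute_variances(df: pd.DataFrame) -> pd.DataFrame:
    # Assumed input layout: one row per line item with
    # 'line_item', 'actual', 'budget', 'prior_year' columns (EUR k).
    out = df.copy()
    out["var_vs_budget"] = out["actual"] - out["budget"]
    out["var_pct"] = out["var_vs_budget"] / out["budget"]
    return out


def load_variances(path: str) -> pd.DataFrame:
    # Structured Excel input: actuals vs budget vs prior year.
    return compute_variances(pd.read_excel(path))


def build_prompt(df: pd.DataFrame) -> str:
    # The prompt design is the real work; this is the barest skeleton of it.
    table = df.to_string(index=False)
    return (
        "You are drafting month-end variance commentary for a finance pack.\n"
        "Explain what moved the numbers and whether they matter; do not "
        "restate every line item.\n\n"
        "Actuals vs budget vs prior year (EUR k):\n" + table
    )


def draft_commentary(df: pd.DataFrame) -> str:
    # Imported here so the variance maths above runs without the SDK installed.
    from anthropic import Anthropic

    client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; pin whatever the series settles on
        max_tokens=500,
        messages=[{"role": "user", "content": build_prompt(df)}],
    )
    return message.content[0].text
```

The interesting decisions — what goes in the prompt, how materiality thresholds work, how context the spreadsheet doesn't contain gets into the draft — are exactly what the next post is about.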
I'm building it in public. The repo is live from today — no code yet, but it will be committed post by post as the series progresses. If you want to follow the build or eventually run it yourself, everything will be there.
Next post: the architecture. What the input structure looks like, what the output actually needs to do to be finance-grade, and the prompt design decisions that determine whether the commentary is useful or just plausible-sounding.
One question before then: if you work in FP&A or financial control, what does your month-end commentary process actually look like? How long does it take, who writes it, and what format does it end up in? Message me on LinkedIn — I want to build against the real version of this problem, not my assumed version of it.
Following along or building something similar? Find me on LinkedIn. The next post in this series goes up when the architecture is locked.