I got an email one morning about a newly formed innovation team at my firm. They were looking at AI, prompt engineering, what it all meant for how we work. I read it twice.
Then I did what they force-feed you with a six-foot spoon in these big firms.
I turned proactive.
I went to one of the partners on the team. Told him I was interested. Wanted to be part of it. He said "of course, let me get back to you."
He never did.
That's the thing about being proactive in a large firm. They tell you to raise your hand. What happens after is less certain.
A month later, at an office party, I bumped into a colleague who I knew was sending out those innovation emails. I asked her the same thing — is there a way in?
She said: come find me at the office one day.
I did. We sat in a meeting room and talked properly for the first time. She explained the problem she was trying to solve. Annual reports in financial services can run to 300, sometimes 400 pages. Doing a proper financial statement analysis — capturing and explaining movements year-on-year across all the notes — takes a week for a moderately experienced auditor. Sometimes more. She wanted to know if AI could do it faster.
She asked me to take a crack at it. The test case: Van Lanschot Kempen's 2024 annual report. A publicly published document — 350 pages. I work in financial services, so it was a natural fit.
I went home that evening and started thinking about approach. The work laptop had Copilot, but it felt limited — basic text and email, not much else. I later found out that's a privacy constraint. EU data regulations mean certain tools get restricted in ways that blunt their usefulness for this kind of analytical work.
So I opened ChatGPT on my personal laptop instead.
And right there, the first time I launched it that day, a pop-up appeared.
"Try the new Agent mode."
I sat back.
Of all the frigging days.
I spent time building the prompt carefully — giving the agent the brief a senior auditor would give a very capable junior. The prompt had it define materiality from scratch, documenting its own reasoning for the benchmark it selected. It set up flagging logic: context-aware thresholds, a volatility test for variances above 10%, and a sign-change override that flagged items regardless of size whenever a number's direction reversed between years. The agent was also asked to independently identify significant risk areas and run the analysis for those regardless of the other criteria.
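For the curious, the flagging logic reduces to a few simple rules. This is a minimal sketch, not the actual prompt: the line items and the materiality benchmark are made up for illustration, and only the 10% volatility threshold and the sign-change override come from the brief described above.

```python
def flag_line_item(current, prior, materiality):
    """Decide whether a year-on-year movement warrants explanation.

    Rules (thresholds illustrative except where noted):
    - a sign change between years is flagged regardless of size
    - movements at or above materiality are flagged
    - relative variance above 10% triggers a volatility flag
    """
    movement = current - prior
    flags = []
    # Sign-change override: direction of the balance reversed between years
    if prior != 0 and current * prior < 0:
        flags.append("sign_change")
    # Materiality test: absolute movement meets or exceeds the benchmark
    if abs(movement) >= materiality:
        flags.append("material_movement")
    # Volatility test: relative variance above 10% of the prior-year figure
    if prior != 0 and abs(movement / prior) > 0.10:
        flags.append("volatility")
    return flags

# Hypothetical line items (EUR m): current year vs prior year
items = [
    ("Net interest income", 420.0, 395.0),
    ("Result on financial transactions", -12.0, 8.0),  # sign reversal
    ("Staff costs", 310.0, 260.0),
]
materiality = 25.0  # illustrative benchmark
for name, cur, pri in items:
    print(f"{name}: {flag_line_item(cur, pri, materiality)}")
```

The override matters in practice: a small balance that flips from profit to loss is immaterial by size but often the most interesting movement on the page, so it gets flagged no matter what.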
Then I opened a fresh session, uploaded the 350-page PDF, and hit go.
I watched it work for a few minutes — the way agents reason through steps, pull evidence from the report, narrate their own thinking. Then I realised this was probably going to take a while.
So I went and made a coffee.
Twenty-five minutes later, I came back.
The Excel was ready.
I've done a lot of financial statement analysis over the years. FS analytical review, variance explanations, movement notes across complex group structures. A thorough job on a report this size — 40 hours for a moderately experienced auditor, conservatively.
I opened the file expecting gaps, errors, things I'd need to fix. Instead I found myself going through it like a reviewer, not a builder. The materiality calculations were solid, with the reasoning documented step by step. Every line item explained individually, cross-referenced to the relevant notes, flagged where the logic said to flag. It had even gone beyond the financials — researching the company's environment, the macro context it was operating in, the industry movements that explained what the numbers were doing.
I went to sleep that night thinking about it. Not anxiously. In that rarer way where something has genuinely shifted and your brain won't let it go.
The next time I was in the office, I showed my colleague. She didn't say much. She didn't need to — it was written across her face.
She immediately set up a meeting with the manager overseeing the audit financial services innovation team.
Same walkthrough. Same reaction. He looked at the output for a while and said:
"Things were fine until now. But after seeing agents — things are about to get very interesting."
He meant it as a compliment. Mostly. There was a pause — the kind that contains more than it says — and then he added something about the nature of certain roles changing faster than people expected. We all sat with that for a moment without saying much.
He wanted to set up a follow-up with the partner heading the team. That meeting never happened. Scheduling, competing priorities, the general friction of large organisations moving slower than the ideas inside them. I didn't push too hard. These things happen.
A couple of months later, at a firm training, a senior manager from the broader assurance innovation team gave a session on where things were heading with AI. How clients were already using it. How the audit approach itself was beginning to shift. It was the kind of talk that makes you feel the industry is actually paying attention.
I wanted to speak to him after, but he was occupied. I let it go and found a table at lunch with some of the people I'd met during the day.
A few minutes later, he walked over with his plate and asked if he could join us.
We talked about a lot of things — careers, where people were, where they wanted to go. At some point I mentioned what I'd built. He looked at me and said:
"Oh — that was yours? I've seen it. It was pretty cool."
He asked what I thought the main obstacle to implementation would be. I said consistency, at least initially — but agents learn fast, and the curve is steep. He was nodding. He told me they were seriously considering it. The thing holding it back was data privacy — ChatGPT's policies conflicting with client confidentiality requirements. But Copilot agents were coming, he said. When they did, something similar could be built inside the firm's own infrastructure.
Whether he sat down at our table by chance or by design, I genuinely don't know. I've thought about it.
The audit financial services innovation manager did eventually ask me to join the team in a more formal capacity.
It never quite materialised.
The question that 40-minute Excel file had planted wasn't "how do we use this in one process?" It was something broader: if a task that took 40 hours now takes 40 minutes, what does the next version do? And the one after that?
That was roughly a year ago.
Here's where things stand now.
Across the financial services industry, AI agents are no longer a proof of concept. Early agentic AI deployments are already reducing manual workloads in finance functions by 30–50% [1]. And adoption is accelerating fast: 82% of midsize companies and 95% of PE firms have either begun or plan to implement agentic AI in their operations in 2026, with financial planning and analysis, fraud detection, and compliance monitoring leading the way [2].
The tools have moved too. The privacy constraints that stopped my work from going live inside the firm? Infrastructure is catching up to capability — and fast [3].
What this means practically: the analysis I ran on a 350-page annual report in 40 minutes — that's not impressive anymore. That's the floor. The question now is what gets built on top of it.
I don't think finance professionals disappear. I think the ones who understand both the numbers and the tools become harder to replace, not easier. But only if they move.
The thing nobody tells you about these industrial revolutions — and that's exactly what this is — is that the people who thrive aren't the ones who predicted it earliest. They're the ones who stayed curious long enough to actually do something with what they saw.
An email landed in my inbox. I turned proactive.
It didn't materialise the way I expected. But it put a question in my head I haven't been able to put down since.
Where do you want to stand when this lands properly?
I'd genuinely like to hear where you are with this — whether you're a finance professional figuring out the right move, or someone already building in this space. Find me on LinkedIn.