Sage Future 2026: The glass box: what AI trust really means for finance
Day one was about the pace of AI adoption. Day two surfaced something bigger: AI isn't just supporting finance now. It's starting to augment the work.
If day one was about the pace and pressure bearing down on finance teams, day two made something harder to ignore.
AI is no longer just supporting finance work. It’s augmenting it.
It’s not enough to simply ask whether AI is accurate. The question becomes: can you see inside it, stand behind it, and defend it when it matters?
From analysis to action
Across Sage Future, the conversation has moved beyond AI as an analytical tool.
The focus is now on embedding AI directly into finance workflows, automating tasks and progressing work in real time.
Finance teams are no longer just interpreting outputs. They’re increasingly acting on them, with AI preparing tasks, surfacing decisions, and moving processes forward within the system itself.
Once AI moves from supporting decisions to shaping execution, the expectations change. The challenge is no longer whether AI models are accurate enough.
It's whether the systems around them can make that intelligence usable: safely, consistently, and with the level of control finance requires.
This week, Sage put that shift into practice by expanding AI agents across finance, HR, and operations.
From black box to glass box
This is where the glass box framing becomes more than a concept.
In a black box model, AI produces outputs. Finance teams consume them, then spend time reconstructing the logic, validating the numbers, and defending the conclusions to stakeholders.
Sage research with IDC puts a number on that: finance professionals are already spending over 12 hours a week on exactly this kind of work.
The glass box model inverts that. Outputs can be interrogated. Assumptions are visible. Data sources can be traced. Decisions can be explained at the point they’re made, not reconstructed after the fact.
As one session framed it: when that visibility is in place, “you stop being a passenger in your own process and start being the one driving it.”
This thinking is reflected in initiatives like Beyond the Black Box, developed with PwC—focused on making AI explainable in practice, not just in principle.
The hidden cost of AI in practice
There’s a subtler shift happening underneath all of this.
AI is reducing time spent on manual tasks. But it’s also creating new work—particularly around validation, exception handling, and oversight. And that new work isn’t always visible until something goes wrong.
One example shared on the day two stage illustrated this clearly.
An AI agent handling financial data performed well initially—then began making subtle errors: misclassifying entries, removing data, introducing inconsistencies that only surfaced under review. The outputs looked credible. The problems weren’t obvious. Human judgment caught what the model missed.
This is where much of AI’s promised productivity gain is currently being absorbed.
Not because the technology isn’t working—but because outputs need to be trusted before they can be used. And building that trust takes time, attention, and oversight.
Accountability concentrates; it doesn't disappear
This is beginning to reshape the structure of finance work itself.
Teams are spending less time producing outputs and more time verifying them. Less time processing transactions, more time managing exceptions. AI handles more of the base workflow. Humans step in where judgment, context, and accountability are required.
That model can be more efficient. But it also concentrates responsibility.
When workflows are more automated and decisions move faster, the moments of human oversight become more critical—not less. Getting those moments right matters more, precisely because there are fewer of them.
Looking ahead
The question Sage Future leaves finance leaders with isn’t whether to adopt AI. That decision is largely made.
It’s how to build an operating model around it—one that balances speed with control, automation with oversight, and insight with the accountability that finance has always required.
The glass box is the condition under which AI becomes genuinely usable in high-stakes finance environments. Day three turns to what that looks like in practice.