Every morning, a small team at one of the world's largest financial institutions opens the same reports. They're looking for movement — specifically, which line items in a massive balance sheet shifted overnight, by how much, and why. Their job is to validate that shift before publishing to the Federal Reserve.
It sounds straightforward. It is not.
The Problem: Every Morning, Same Question, Half a Workday to Answer
The liquidity operations team at this G-SIB processes roughly 1 billion records daily across a complex, multi-source data environment. By end of day, those records are consolidated into ~700,000 lines (each with ~60 attributes) and submitted to the Federal Reserve as a regulatory XML report (think 2052a, LCR).
The problem isn't the data. They have it all, and it lives in one place.
The problem is answering a deceptively simple question: "Why did this number change?"
When a balance sheet attribute moves beyond its threshold (say, $100M becomes $120M overnight), analysts need to understand whether that's a legitimate business movement, an intercompany transfer, a data quality issue, or something that warrants escalation. And they need to write commentary explaining it.
In practice, that process looked like this:
- Pull the reports
- Cross-reference yesterday's data with today's
- Identify the deltas, hunt down the source
- Write the narrative
- Repeat for dozens of line items
Total time: ~5 hours. Every day.
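The comparison step at the heart of that routine is easy to sketch. Below is a minimal, illustrative version in Python: the snapshot shape (a dict of line-item balances), the field names, and the $10M threshold are all assumptions for the sake of the example, not the institution's actual schema or policy.

```python
# Illustrative sketch of the day-over-day variance check described above.
# Snapshot format, names, and threshold are hypothetical.

THRESHOLD = 10_000_000  # flag moves larger than $10M (illustrative)

def find_variances(yesterday, today, threshold=THRESHOLD):
    """Compare two {line_item: balance} snapshots and return the
    line items whose balance moved beyond the threshold."""
    flagged = []
    for item, prior in yesterday.items():
        current = today.get(item, 0)
        delta = current - prior
        if abs(delta) > threshold:
            flagged.append({"line_item": item, "prior": prior,
                            "current": current, "delta": delta})
    return flagged

prior_day = {"secured_funding": 100_000_000, "retail_deposits": 250_000_000}
current_day = {"secured_funding": 120_000_000, "retail_deposits": 255_000_000}

for move in find_variances(prior_day, current_day):
    print(f"{move['line_item']}: {move['prior']:,} -> {move['current']:,} "
          f"(delta {move['delta']:+,})")
```

Identifying the delta is the trivial part; the five hours went into everything this sketch leaves out, namely hunting down the source of each flagged move and writing the narrative.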
The lead analyst described the bottleneck clearly: "We have so many different data attributes and so much data. The pain points are really just around the day-over-day variance analysis."
What They Tried Before
The team wasn't starting from zero. They had access to their data and tooling to extract it. What they were missing was a way to automatically generate the why: not just identify that a number changed, but surface the rationale based on historical patterns, prior commentary, and the underlying source data.
Previous approaches required analysts to manually gather context, review historical commentary, compare source-level data, and write narrative from scratch.
There was no system that could bring all of that together and produce a defensible, auditable explanation automatically.
What PromptQL Makes Possible
Once PromptQL is connected to their data environment, the shift is immediate.
Rather than pulling reports manually and writing commentary from scratch, analysts ask PromptQL natural language questions directly against their balance sheet data. PromptQL generates the queries dynamically, pulls the relevant context (including historical commentary and prior-day values), and returns a structured, explainable answer.
The key phrase the team keeps returning to is "draft commentary."
PromptQL doesn't replace the analyst's judgment. It does the investigative legwork so the analyst can focus on interpretation and sign-off.
Now they can lean on PromptQL to:
- Identify which specific data attributes have moved beyond threshold
- Surface the source system driving each change
- Generate draft narrative commentary grounded in historical patterns
- Flag anomalies that warrant escalation versus routine movement
All of this in seconds, across 15 million records with 200 attributes.
The architecture is straightforward: PromptQL sits on top of their existing data environment without requiring any rearchitecting of the underlying systems.
For a team that processes a billion records a day and publishes to the Fed, "works with what you already have" isn't a nice-to-have. It's a prerequisite.
Why Determinism Matters More Than Speed
Speed is the obvious headline. But the team's deeper requirement is something more precise: the same question has to produce the same answer, every time.
Regulatory reporting isn't a context where you can tolerate probabilistic outputs. If an analyst asks "why did this position move?" on Monday and gets one answer, then asks again on Wednesday during an audit review and gets a different one, the whole process breaks down.
Commentary submitted to the Federal Reserve needs to be traceable, defensible, and consistent.
PromptQL's deterministic execution model (where outputs are grounded in actual query results against source data rather than LLM inference alone) is what makes the use case viable in a regulated environment. Not just faster, but more accurate, and ultimately trustworthy.
The Broader Opportunity
What started as a variance analysis problem is a window into a larger pattern across regulated financial institutions: analysts who need answers from complex datasets, on a daily cadence, with zero tolerance for error.
The liquidity ops use case is one instance of that problem. Balance sheet reporting, capital adequacy, LCR submissions, and intraday risk monitoring all share the same underlying structure: large datasets, time-sensitive questions, and the requirement that every answer be explainable and reproducible.
PromptQL is built for exactly that environment.
Interested in seeing how PromptQL handles your regulatory reporting workflows? → Learn more
See it in action on your data → Request a demo