Engineering-led consulting will shape the future of enterprise AI
VentureBeat recently profiled our approach and asked whether engineering‑led firms are coming for the big consultancies’ AI business.
We're witnessing a technology with perhaps the highest share of projects stuck in pilot in the last 30 years of continuous technology disruption: 95% of enterprise AI projects are stuck in pilot, per a recent MIT study.
You've probably heard enough about forward-deployed engineers (FDEs): pioneered at Palantir, adopted by AI labs like OpenAI, and something we've done at PromptQL from Day 0.
But FDEs are not enough. From what I’ve seen across large enterprises, two ingredients determine whether AI compounds or stalls.
These are hard-earned lessons from customers we shouldn't have worked with, customers we nearly lost and then recovered, and customers where things have worked really well.
1) Leaders and engineers, in the same loop
Skip the layers. Progress accelerates when data & AI leaders with crisp business outcomes work directly with engineers who have shipped AI to real users at scale. Everything else gets in the way of rapid execution.
What goes wrong without it?
Leader without clarity: Budgets burn, but there’s no defined impact metric or decision owner. Initiative fatigue sets in and poisons the well for future AI.
Inexperienced team: LLMs look approachable, but reliable systems require intuition you only earn by shipping. We’ve learned the hard way that generic “AI training” can’t replace lived experience with rollouts.
What good looks like: small, empowered pods; a single KPI; direct Slack/Teams channels between the exec sponsor and the lead engineer; pre‑agreed escalation paths that remove blockers in hours, not quarters.
If you're partnering with a consulting firm, insist on this working model. Leaders and engineers in the same loop isn't a preference; it's the operating system for enterprise AI.
2) Tech that makes accuracy scalable, not just impressive
Pilots are easy. The real adoption killer when you roll out at scale is the confidently wrong response, which makes it impossible to maintain accuracy as usage grows.
At a high level, this means your AI solution must have the following features (a rough sketch of how they fit together follows this list):
Calibrated confidence and abstention. The system shows when it’s unsure and can choose not to answer rather than hallucinate.
Human steering at the moment of work. When confidence is low, users can nudge the system with missing context or policy choices.
Corrections that become org knowledge. Those nudges are captured with lineage and fed into the next update so the AI learns your definitions, data quirks, and processes—accuracy improves with usage.
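To make the loop concrete, here's a minimal sketch in Python. Everything in it is illustrative, not PromptQL's implementation: the `answer_with_confidence` call stands in for a model with a calibrated confidence score, the threshold is a made-up number, and the knowledge store is a toy. The shape is what matters: abstain below a threshold, take the human's steer, store it with lineage, and reuse it so the same question never needs the same correction twice.

```python
from dataclasses import dataclass, field
from typing import Optional

CONFIDENCE_THRESHOLD = 0.8  # illustrative cut-off; tune per use case


@dataclass
class Answer:
    text: str
    confidence: float        # calibrated score in [0, 1]
    abstained: bool = False  # True when the system declined to answer


@dataclass
class KnowledgeStore:
    """Captures human corrections so future runs can reuse them."""
    corrections: list = field(default_factory=list)

    def record(self, question: str, guidance: str) -> None:
        self.corrections.append({"question": question, "guidance": guidance})

    def context_for(self, question: str) -> list:
        # Naive lookup: reuse guidance recorded for the same question.
        return [c["guidance"] for c in self.corrections if c["question"] == question]


def answer_with_confidence(question: str, store: KnowledgeStore) -> Answer:
    """Stand-in for a model call that returns an answer plus a calibrated confidence."""
    prior_guidance = store.context_for(question)
    # A real system would pass `question` + `prior_guidance` to the model and
    # calibrate the score; here we fake it to show the control flow.
    confidence = 0.9 if prior_guidance else 0.4
    return Answer(
        text=f"draft answer using {len(prior_guidance)} prior correction(s)",
        confidence=confidence,
    )


def handle(question: str, store: KnowledgeStore,
           human_guidance: Optional[str] = None) -> Answer:
    """One pass of the loop: answer, abstain if unsure, fold corrections back in."""
    answer = answer_with_confidence(question, store)
    if answer.confidence >= CONFIDENCE_THRESHOLD:
        return answer
    if human_guidance is None:
        # Abstain rather than guess, and ask for a steer at the moment of work.
        return Answer(
            text="I'm not confident enough to answer; can you add context?",
            confidence=answer.confidence,
            abstained=True,
        )
    # The nudge becomes org knowledge and is reused on the retry and later questions.
    store.record(question, human_guidance)
    return answer_with_confidence(question, store)


if __name__ == "__main__":
    store = KnowledgeStore()
    q = "What counts as an 'active' customer?"
    first = handle(q, store)
    print(first.abstained, first.confidence)   # abstains on the first pass
    second = handle(q, store,
                    human_guidance="Active = any account billed in the last 90 days.")
    print(second.abstained, second.confidence) # answers with higher confidence
```

In production this gating sits on top of real calibration, retrieval, and lineage tracking, but even the toy version shows why accuracy can improve with usage instead of degrading.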
I wrote more about why “confidently wrong” quietly stalls ROI and how the accuracy flywheel works.
Specialize your AI - or stay stuck in the shallow end
If you’re a Fortune 500 data/AI leader, treat “off‑the‑shelf” as a starting point, not a strategy. Generic models don’t know your ontology, edge cases, approval paths, or risk posture, and they won’t magically learn them without an explicit learning loop.
The step‑change in value comes from customizing the AI to your domain so it continuously absorbs the tribal knowledge of your business.
At PromptQL
As an enterprise AI vendor, we've built our company around this operating model:
FDEs: Experienced AI engineers embedded with your team and accountable to a business KPI.
A platform to build an org-specific AI Analyst that is uncertainty-native and improves with use. It incrementally captures human guidance and turns it into reusable organizational knowledge, so accuracy scales with adoption, not despite it.
Enterprise AI needs a different way of working - leaders and engineers as one team, with accuracy and trust engineered from day one, and a system that learns your business over time.
That’s how engineering‑led consulting will reshape enterprise AI for success, and why it’s poised to disrupt a $200B industry.