04 Sep, 2025

3 MIN READ

Launching GAT – An SAT for your AI

We're launching a GAT design service to help AI change agents and leaders concretely define their GenAI initiatives and measure their progress.

These are the kinds of questions that become possible to answer with a GAT:

  1. What are the possible internal & external use-cases if I had an accurate AI system on top of our data & systems?
  2. For each use-case, what is the concrete business impact of this AI system at different levels of reliability and performance?
  3. Given how rapidly the AI ecosystem is evolving, how do I make the right strategic choice of vendor or technique to build the AI solution for my use-case?
  4. Given the use-case, what is the right form factor for my AI solution? What is the right entry point for my stakeholders to maximize impact?
  5. Given the use-case, how do I plan a roadmap in terms of scope and in terms of accuracy for my AI project?
  6. How can I objectively evaluate the progress of my AI project, whether it's a vendor product or something we're building in-house?

What is a GAT?

We realized that the preparation work we've done with 50+ Fortune 500 and Silicon Valley companies is valuable for AI change agents regardless of whether they choose to use PromptQL. We've turned the art of assessing planned GenAI initiatives and projects into a repeatable science that helps get AI projects out of pilot and into production, and then successfully grow adoption beyond that.

We do this by creating a domain-specific GAT - a GenAI Assessment Test - which becomes a toolkit for you to develop an AI strategy and measure your progress against it.

A GAT is like an SAT, but for an AI system. It consists of 2 key components:

  1. An evalset with business impact noted against each eval. (Read more about what evals are and how AI researchers use them to build AI here). This helps you define the scope, roadmap and expected ROI of your AI project. It also helps you measure progress concretely as your AI project goes from evaluation to pilot to rollout.
  2. A decision-making framework that maps your use-case into a 3x3 AI strategy matrix. This helps you choose the right technique to customize generic LLMs to your domain and the right model family that aligns with your workload. (A sketch of both components follows this list.)
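To make these two components concrete, here is a minimal sketch of what a single eval entry and a strategy matrix could look like in code. It is purely illustrative: the names (EvalCase, business_impact_usd) and the matrix axes are assumptions for this example, not PromptQL's actual GAT schema.

```python
# Hypothetical sketch of a single GAT eval entry and a 3x3 strategy matrix.
# Field names and axis labels are illustrative assumptions, not PromptQL's
# actual GAT schema.
from dataclasses import dataclass

@dataclass
class EvalCase:
    question: str               # the task posed to the AI system
    expected_outcome: str       # what a correct answer must contain
    business_impact_usd: float  # estimated annual value of answering this reliably
    workload: str               # "search", "act" or "solve"

# One row of a domain-specific evalset
example_eval = EvalCase(
    question="Which open invoices over $50k are at risk of churn this quarter?",
    expected_outcome="List of invoice IDs cross-referenced with churn signals",
    business_impact_usd=250_000,
    workload="search",
)

# One plausible reading of the 3x3 matrix: institutional-knowledge dependence
# (low / medium / high) crossed with workload type (search / act / solve).
# Each cell holds the technique recommended for that combination (illustrative).
strategy_matrix = {
    ("low", "search"): "latest-generation LLM with simple configuration",
    ("low", "act"): "off-the-shelf agent on a hosted LLM",
    ("high", "solve"): "adaptive system that continuously learns institutional knowledge",
    # ... remaining cells are filled in during the GAT design engagement
}
```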

Why are existing methods to assess GenAI failing?

You've likely read the recent MIT report claiming that 95% of enterprise AI deployments fail.

The root cause is more nuanced than "AI is not good enough": teams are often using the wrong AI approach for their specific problems.

Some teams build custom AI where simple configuration of a latest-generation LLM would have worked. Others force off-the-shelf solutions onto problems that require deep institutional knowledge and demand sophisticated, adaptive AI systems that can continuously learn. The result? Millions wasted, teams demoralized, and boards losing faith in AI.


The GAT methodology

  1. Evals are your only defense against AI snake oil: Evals are well known amongst AI researchers and engineers. However, we've realized that an evalset is in fact the most critical asset an AI leader has for creating clarity in driving a GenAI project, whether it's a purchased product or a solution built in-house. Evals are the only way to define the scope of an AI system, and the only way to assess its current capability level (a minimal measurement sketch follows this list).
  2. Institutional knowledge determines the approach: Every GenAI system that integrates with proprietary data and systems depends on integrating institutional knowledge (documented, tribal or tacit). Understanding the extent of that dependence, and the lifecycle of that institutional knowledge, is critical to choosing the right AI approach.
  3. Mapping workloads to LLM output types optimizes model family: Today there are 3 primary types of enterprise workloads (search, act & solve) that map directly to the generative outputs LLMs are optimized for. Understanding your workload is critical to creating a longer-term AI strategy that makes you resilient to constant model updates and lets you continuously benefit from them.
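As a rough illustration of the first point, the sketch below scores a hypothetical AI system against the EvalCase entries from the earlier sketch, weighting each passed eval by its business impact. The run_ai_system stub and the pass check are placeholder assumptions; a real GAT would define domain-specific checks for each eval.

```python
# Hypothetical harness for assessing an AI system against a GAT evalset.
# Reuses the illustrative EvalCase / example_eval from the earlier sketch.
# `run_ai_system` and `passes` are placeholder assumptions, not a real API.
from typing import Callable, List

def impact_weighted_score(
    evalset: List[EvalCase],
    run_ai_system: Callable[[str], str],
    passes: Callable[[str, EvalCase], bool],
) -> float:
    """Share of total business impact covered by the evals the system passes."""
    total_impact = sum(case.business_impact_usd for case in evalset)
    covered_impact = sum(
        case.business_impact_usd
        for case in evalset
        if passes(run_ai_system(case.question), case)
    )
    return covered_impact / total_impact if total_impact else 0.0

# Usage with a stubbed system and a naive substring check
score = impact_weighted_score(
    evalset=[example_eval],
    run_ai_system=lambda q: "List of invoice IDs cross-referenced with churn signals",
    passes=lambda answer, case: case.expected_outcome.lower() in answer.lower(),
)
print(f"Impact-weighted capability: {score:.0%}")  # 100% for this toy example
```

Tracking a score like this per release is one way to see, concretely, whether a project is moving from pilot toward production on the evals that matter most to the business.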

Read more

  1. Evals 101 for executives: What is an eval? How does it help define scope, measure progress, create a roadmap & assess ROI?
  2. The 3x3 decision-making framework for GenAI Assessment

Contact us

If you're interested in creating a GAT for your business, reach out to us to book your GAT Design Service.

It takes our team of experienced AI engineers just 5 days to deliver your first GAT. Our team knows how to find and interact with the right stakeholders in your organization, and uses our internal tools to rapidly assemble the evalset and create a 3x3 decision-making framework for your use-case.

Tanmai Gopal
Tanmai is the co-founder of PromptQL.
