PRODUCT
The innovations, features and architecture that power PromptQL’s reliability on real-world business problems.
Captures the evolving business context that exists nowhere else
Traditional semantic layers and knowledge graphs fail because they're hard to build and only capture documented knowledge. But the most valuable business insights – tribal knowledge – live in the heads of experienced employees.
PromptQL automates what traditional systems do manually: it introspects schemas, documentation, and code. Then it goes further – learning the unwritten expertise that no system captures, but teams rely on every day.
Bootstrapped automatically: Introspects schema, documentation, and code to generate the initial semantic graph.
Learns tribal knowledge: Captures insights like “deals pushed twice are at risk” and context-specific meanings like “GM” as gross margin vs. general manager.
Self-improving: Learns team-specific language and usage patterns to evolve the graph continuously.
Decouple query planning & execution
PromptQL doesn't rely on LLMs for final answers – instead, it uses LLMs to generate rich, multi-step query plans in a domain-specific language for data retrieval, computation, and semantic tasks.
By decoupling planning from execution, PromptQL avoids hallucinations, inconsistencies, and context window limits. This separation enables it to handle billion-row datasets and complex multi-system queries that overwhelm traditional methods.
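The plan-then-execute split described above can be sketched in miniature. This is a hypothetical illustration, not PromptQL's actual DSL or runtime: the LLM's only job is to author the `plan` data structure, and a deterministic executor runs it step by step.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical primitive registry: each primitive is deterministic code,
# so final answers never depend on an LLM reading raw rows.
PRIMITIVES: dict[str, Callable] = {
    "sql_query": lambda args, state: [  # stand-in for a real database call
        {"region": "EMEA", "revenue": 1200},
        {"region": "AMER", "revenue": 3400},
    ],
    "aggregate": lambda args, state: sum(r[args["field"]] for r in state[args["input"]]),
}

@dataclass
class Step:
    op: str      # which primitive to run
    args: dict   # arguments chosen by the LLM at planning time
    output: str  # name of the artifact this step produces

def execute(plan: list[Step]) -> dict:
    """Run a plan deterministically; the LLM only authored `plan`."""
    state: dict = {}
    for step in plan:
        state[step.output] = PRIMITIVES[step.op](step.args, state)
    return state

# A plan an LLM might emit for "total revenue across regions":
plan = [
    Step("sql_query", {"sql": "SELECT region, revenue FROM sales"}, "rows"),
    Step("aggregate", {"input": "rows", "field": "revenue"}, "total"),
]
result = execute(plan)
print(result["total"])  # 4600
```

Because only the compact plan passes through the model, dataset size is bounded by the executor, not by the LLM's context window.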
Query data where it lives. No ETL required.
The PromptQL engine pushes query plans directly to source systems – databases, APIs, SaaS apps, or unstructured sources.
No data movement or centralization needed
Federated execution across heterogeneous systems
Granular, field-level access control enforced at source
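The federation pattern above can be sketched with two toy sources. All names and policies here are illustrative assumptions, not PromptQL syntax: each source executes its own sub-plan locally, field-level policy is applied before anything leaves the source, and the join happens in the engine rather than in a centralized copy of the data.

```python
# Hypothetical field-level policy, enforced at each source.
POLICY = {
    "crm":     {"allowed_fields": {"account_id", "owner"}},
    "billing": {"allowed_fields": {"account_id", "arr"}},
}

def fetch(source: str, rows: list[dict]) -> list[dict]:
    """Simulate source-local execution: disallowed fields never leave."""
    allowed = POLICY[source]["allowed_fields"]
    return [{k: v for k, v in row.items() if k in allowed} for row in rows]

crm_rows = fetch("crm", [{"account_id": 1, "owner": "dana", "ssn": "redact-me"}])
billing_rows = fetch("billing", [{"account_id": 1, "arr": 90000}])

# Federated join in the engine — no ETL, no full-table copies.
joined = [
    {**c, **b}
    for c in crm_rows
    for b in billing_rows
    if c["account_id"] == b["account_id"]
]
print(joined)  # [{'account_id': 1, 'owner': 'dana', 'arr': 90000}]
```

Note that the sensitive `ssn` field is stripped at the source, so no downstream step can leak it.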
The runtime and semantic layer operate in a closed loop – enabling continuous learning and improvement.
For your org – say, ACME – PromptQL helps you build ACMEQL, a language an LLM can use to plan, reason, and act with the reliability you can expect from an analyst or engineer on staff.
PromptQL is an AI platform that continuously adapts LLMs to your domain by capturing your proprietary know-how and encoding it in a planning language the LLM can understand and generate.
The planning language incorporates terminology, processes and a semantic graph of data and tools.
Importantly, this generated language can be compiled into precise, machine-executable code that runs deterministically – handling arbitrarily large amounts of data and complex plans without LLM context limitations.
Request demo
QUESTION / TASK
[From a business or customer]
FOUNDATIONAL LLM
“ACMEQL”
“ACMEQL” LLM: A specialized model adapted to your org’s semantics and data through in-context learning – enabling accurate, context-aware reasoning.
“ACMEQL” SEMANTIC GRAPH
“ACMEQL” Semantic Graph: A dynamic semantic model that captures your unique business concepts, entities, and logic and acts as a contextual brain for the LLM.
“ACMEQL” PLAN
“ACMEQL” Plan: A structured, multi-step execution plan with human-like reasoning.
PROMPTQL LEARNING LAYER
PromptQL Learning Layer: Continuously learns from user behavior and data changes to evolve the semantic graph – no manual tagging required.
PROMPTQL RUNTIME
PromptQL Runtime: Programmatically runs the plan, with structured memory outside LLM context.
AI Platform
PROMPTQL FEDERATION
PromptQL Distributed Query Engine: Federates data across multiple data sources with governance and access control policies automatically enforced.
MCP
UNSTRUCTURED DATA
SAAS
APIs
WEB
STRUCTURED DATA
FEATURES
Connectors
Out-of-the-box connectors make it easy to ground AI with live context from internal and external systems – databases, SaaS apps, APIs, web sources, and more.
Structured memory artifacts
PromptQL uses structured memory artifacts to manage large-scale data outside the LLM context. This avoids the hallucinations, context loss, and repeatability issues that plague approaches that cram everything into a single context window.
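A minimal sketch of the idea, with hypothetical names: full result sets live in an artifact store, and only a small handle with a summary is ever placed in the model's context. Later plan steps reference artifacts by name, not by value.

```python
# Hypothetical artifact store: full results live outside the LLM's
# context; the model only ever sees a compact handle + summary.
ARTIFACTS: dict[str, list] = {}

def store_artifact(name: str, rows: list) -> dict:
    ARTIFACTS[name] = rows
    # Only this small descriptor would enter the LLM context.
    return {"artifact": name, "row_count": len(rows), "sample": rows[:2]}

rows = [{"order_id": i, "total": 10 * i} for i in range(100_000)]
handle = store_artifact("orders_2024", rows)
print(handle["row_count"])  # 100000

# A later deterministic step dereferences the artifact by name:
top = max(ARTIFACTS[handle["artifact"]], key=lambda r: r["total"])
print(top["order_id"])  # 99999
```

The 100,000 rows here would overflow any prompt, but the handle that stands in for them is a few dozen tokens regardless of dataset size.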
Editable query plans
PromptQL shows the full execution plan behind every response – in natural language and the exact DSL program. You can inspect and modify plans in real time to resolve ambiguity or apply expert input before execution.
Reliability score
PromptQL analyzes each plan it generates for reliability based on factors like incomplete instructions, ambiguous queries, and inconsistent data. It flags issues and suggests prompt refinements to improve response accuracy.
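To make the idea concrete, here is an illustrative heuristic in the spirit of such a score – PromptQL's actual scoring factors and thresholds are not described here, so every rule and weight below is an assumption.

```python
# Illustrative reliability check: flag ambiguous wording and empty
# plans, and discount the score per flag. Weights are made up.
def score_plan(prompt: str, plan_steps: list[str]) -> dict:
    flags = []
    if any(word in prompt.lower() for word in ("recent", "best", "roughly")):
        flags.append("ambiguous wording in query")
    if not plan_steps:
        flags.append("plan has no executable steps")
    return {"score": max(0.0, 1.0 - 0.3 * len(flags)), "flags": flags}

report = score_plan("Show recent deals", ["fetch deals"])
print(report["flags"])  # ['ambiguous wording in query']
```

A flagged plan like this one would prompt the user to pin down "recent" (e.g. "closed in the last 30 days") before execution.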
PromptQL Playground
An out-of-the-box assistant UI for exploring PromptQL’s planning, execution, and debug capabilities in real time.
PromptQL APIs
PromptQL APIs let you embed it in any form factor – assistants, copilots, or even full-blown agents.
PromptQL Agent API: Integrate PromptQL into your own AI agents, copilots, and assistants with developer-friendly tooling.
PromptQL Program API: Capture logic in reusable PromptQL programs and trigger them via automations, buttons, or generative APIs.
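As a rough sketch of how an Agent API integration might look from your side – the class, method, endpoint, and response shape below are all hypothetical, not PromptQL's actual client interface – you wrap the service behind a small client your agent can call as a tool:

```python
# Hypothetical client wrapper; names and response shape are illustrative.
class PromptQLClient:
    def __init__(self, endpoint: str, token: str):
        self.endpoint, self.token = endpoint, token

    def ask(self, question: str) -> dict:
        # In practice this would POST to the Agent API with the token;
        # here we return a canned shape so the sketch stays runnable.
        return {"answer": f"(answer to: {question})", "plan": ["step 1"]}

client = PromptQLClient("https://acme.promptql.example/api", "<token>")
resp = client.ask("What was Q3 gross margin by region?")
print(resp["plan"])  # ['step 1']
```

The point of the wrapper is that your agent framework sees one tool call, while planning, execution, and governance happen behind the API.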
Fine-grained access control
Secure access with granular, role-based controls – field-level, declarative, and easily integrated with your existing authentication systems.
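Declarative field-level control can be pictured with a small rule table – roles, tables, and fields here are invented for illustration and are not PromptQL's actual policy syntax:

```python
# Minimal sketch of declarative field-level access control.
RULES = {
    "analyst": {"customers": {"id", "region", "lifetime_value"}},
    "support": {"customers": {"id", "region"}},  # no financial fields
}

def apply_policy(role: str, table: str, row: dict) -> dict:
    """Return only the fields this role may see on this table."""
    allowed = RULES.get(role, {}).get(table, set())
    return {k: v for k, v in row.items() if k in allowed}

row = {"id": 7, "region": "EMEA", "lifetime_value": 52000}
print(apply_policy("support", "customers", row))
# {'id': 7, 'region': 'EMEA'}
```

Because the rules are data rather than code scattered through queries, they can be audited centrally and enforced uniformly on every plan.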
Bring your own LLM
Use any frontier model or self-hosted alternative – OpenAI, Anthropic, Gemini, open-source models, or internal deployments. No vendor lock-in.