Security

Multiplayer AI connected to all your systems can get a lot of work done for your team.

But that level of access demands security that multi-tenant SaaS wasn't designed to provide.

PromptQL is the only multiplayer AI app (web/mobile/desktop) with a Bring Your Own Cloud (BYOC) infrastructure option. All data resides in, and flows through, your cloud account. We handle only your passwordless login.

Infrastructure

BYOC — data stays in your cloud

AI Sandboxing

Every AI action mediated through the virtual layer

Supply Chain

Fixed deps, WebAssembly sandbox

Authorization

Enforced at three independent layers before data reaches the AI

Infrastructure

Your cloud. Your data. Always.

PromptQL's architecture is split into two planes with a strict, auditable boundary. Everything operationally sensitive lives in your account — not in a shared vendor database.

Your Data Plane

Customer-owned — lives in your AWS, GCP, or Azure

  • Conversation history

    Every AI session, stored in your cloud

  • Wiki & knowledge base

    Your team's institutional knowledge

  • Metadata & indexes

    Query plans, schema caches, search indexes

  • Credentials & secrets

    API keys, OAuth tokens, database passwords

Private Connectivity

AWS PrivateLink · GCP Private Service Connect · Azure Private Endpoint

Our Control Plane

Hasura-hosted — auth, billing, and observability only

  • Authentication (SSO, SAML, OIDC)

  • Billing and subscription management

  • Observability and platform health monitoring

  • Version and update delivery

Why this matters: when an AI vendor is breached, your data goes with it

In one documented incident, attackers pivoted from a compromised AI tool into a major platform's environment variables, harvesting credentials for downstream services. With PromptQL BYOC, the equivalent attack stops at your cloud perimeter — credentials never leave your account. → See the incident

AI Sandboxing

The AI thinks it has direct access. It doesn't.

PromptQL's virtual data layer intercepts every AI action before it reaches your systems. AI models see familiar SQL and HTTP interfaces — but every request is validated, scoped, and credential-stripped underneath before touching your data.

Virtual SQL Layer

All data sources are unified behind a virtual SQL interface. The AI issues standard SQL; the layer enforces RLS, applies column masks, and injects ABAC claims before any query reaches your data.

No raw database credentials are ever exposed to the AI.
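As a rough illustration of what this interception step does, here is a minimal sketch. The helper name `scope_query`, the column-mask table, and the `tenant_id` claim are all hypothetical, not PromptQL internals:

```python
# Illustrative sketch (not PromptQL source): how a virtual SQL layer can
# scope an AI-issued query before it ever reaches the database.

MASKED_COLUMNS = {"users": {"ssn", "email"}}  # hypothetical column-mask policy

def scope_query(table: str, columns: list[str], claims: dict) -> str:
    """Rewrite a naive SELECT with column masks and an RLS predicate
    derived from the caller's JWT claims."""
    visible = [c for c in columns if c not in MASKED_COLUMNS.get(table, set())]
    # ABAC: inject the tenant claim as a mandatory row filter.
    predicate = f"tenant_id = '{claims['tenant_id']}'"
    return f"SELECT {', '.join(visible)} FROM {table} WHERE {predicate}"

sql = scope_query("users", ["id", "name", "email"], {"tenant_id": "acme"})
print(sql)  # SELECT id, name FROM users WHERE tenant_id = 'acme'
```

A production layer would use parameterized queries rather than string interpolation; the point here is only that masking and row filtering happen before the query executes, regardless of what SQL the AI wrote.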

Sandboxed HTTP

All outbound AI requests are routed through a sandboxed API access layer. OAuth tokens are injected server-side: the AI sees a successful call, never the token. Domain allowlists and rate limits are enforced at the boundary.

The AI cannot exfiltrate credentials it never had access to.

Computer Agents (SCAS)

Browser and desktop agents connect through the Secure Computer Agent Service (SCAS). Personal agents are accessible only to the user who created them. Shared agents are admin-configured and available to authorized project members. Agents connect to PromptQL via secure tunnels — no open inbound ports on the agent host.

Supply Chain Security

Your execution environment is fixed, auditable, and sealed.

When AI platforms let code pull arbitrary packages at runtime, a single poisoned dependency reaches every user. PromptQL eliminates this attack surface entirely: Python code runs in a sandboxed WebAssembly environment with a fixed, vetted dependency list — no dynamic installs, ever.

WebAssembly sandbox — structurally isolated

All Python execution happens inside a WebAssembly sandbox. Code has no access to the host filesystem, network, or system calls. A compromised package cannot escape the sandbox even if executed.

Isolation is structural, not policy-based — it cannot be bypassed.

Fixed, pinned dependency list

Only the Python standard library plus a specific set of vetted packages are available: numpy, pandas, openpyxl, reportlab. No pip install at runtime, ever. Enterprise customers can request reviewed custom dependency lists.

Every dependency is pinned and auditable — supply chain is a fixed surface, not a moving target.
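A static allowlist check of this kind can be sketched in a few lines. This is an illustration of the concept, not PromptQL's sandbox code; the extra stdlib names in `ALLOWED` are assumptions:

```python
# Illustrative sketch: rejecting AI-generated Python that imports anything
# outside a fixed, vetted dependency list.
import ast

ALLOWED = {"numpy", "pandas", "openpyxl", "reportlab", "math", "json"}

def check_imports(source: str) -> list[str]:
    """Return every top-level package imported by `source` that falls
    outside the allowlist."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [(node.module or "").split(".")[0]]
        else:
            continue
        violations += [n for n in names if n not in ALLOWED]
    return violations

print(check_imports("import pandas\nimport requests"))  # ['requests']
```

In a WebAssembly sandbox the check is redundant in one sense, since a disallowed package simply is not present, but screening the source first gives the user a clear error instead of an import failure.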

Authorization

Three independent authorization layers.

Authorization in PromptQL is not a feature you configure — it's a structural guarantee enforced at every layer of the stack, before any data reaches the AI.

Layer 1

Data Authorization

  • Row-Level Security (RLS)

    Which rows the AI can see, per user/role

  • Column Restrictions

    PII and sensitive fields masked or excluded

  • Attribute-Based Access Control

    JWT claims injected into every query

  • Custom Claims

    Bring your own claims; enforced at the virtual SQL layer

Layer 2

API Integration Authorization

  • Personal OAuth

    Each user's tokens scoped to their identity

  • Shared OAuth

    Team integrations with explicit admin provisioning

  • API Key Management

    Secrets encrypted in your cloud; never plaintext

  • Scope Enforcement

    AI can only call endpoints in the declared manifest

Layer 3

Computer Agent Authorization

  • Personal agents

    Accessible only to the user who created them

  • Shared agents

    Admin-configured; available to authorized project members only

  • Session-scoped keys

    Agents connect to PromptQL via secure tunnels — no open inbound ports on the agent host

  • Audit trail

    All agent actions logged to your cloud storage

Comprehensive Audit Logs

Authorization tells you who can access what. Audit logs tell you what was actually accessed — and by whom.

SQL Query Audit

Every query executed, by whom, with full query text and result metadata

LLM Usage Audit

Token consumption tracked by user and model — cost and compliance in one view

Wiki Access Audit

Which wiki pages were accessed, when, and which were used to inform AI answers

HTTP API Call Audit

Every outbound API call made by the AI — endpoint, method, response status, and which user session triggered it

Computer Agent Action Audit

All SCAS actions logged — browser interactions, clicks, form fills — tied to user session
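To make the audit categories above concrete, here is a sketch of what an append-only audit record could look like. The field names and hash-chaining scheme are illustrative assumptions, not PromptQL's actual log format:

```python
# Illustrative sketch: an audit record written to customer-owned storage.
# Hash-chaining each record to its predecessor makes tampering detectable.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(kind: str, user: str, detail: dict, prev_hash: str = "0" * 64) -> dict:
    """Build one tamper-evident audit entry."""
    rec = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "kind": kind,            # e.g. "sql_query", "http_call", "agent_action"
        "user": user,
        "detail": detail,
        "prev_hash": prev_hash,  # links this entry to the previous one
    }
    body = json.dumps(rec, sort_keys=True)
    rec["hash"] = hashlib.sha256(body.encode()).hexdigest()
    return rec

entry = audit_record("sql_query", "alice", {"query": "SELECT 1", "rows": 12})
print(entry["kind"], len(entry["hash"]))  # sql_query 64
```

Each new record would pass the previous record's `hash` as `prev_hash`, so deleting or editing any entry breaks the chain from that point forward.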

The Threat Landscape

2026 proved it: third-party AI access is the new perimeter.

The same properties that make AI tools powerful — broad data access, trusted OAuth grants, deep system integration — make them the highest-value targets for attackers. Multi-tenant SaaS wasn't designed for AI agents that hold tokens and execute queries on behalf of thousands of users simultaneously.

Incident · Aug 2025

Salesloft Drift → Salesforce

Attackers stole OAuth refresh tokens for an AI sales integration and silently bulk-exported CRM data from 700+ organizations. MFA was irrelevant — the tokens were already trusted. AWS keys, Snowflake tokens, and CRM records were exfiltrated. FINRA issued a sector-wide alert.

Lesson: AI integrations with broad OAuth grants are org-wide skeleton keys.

FINRA cybersecurity alert
Incident · Feb–Apr 2026

Context.ai → Vercel

A Lumma Stealer infection at Context.ai — an enterprise AI tool granted deployment-level Google Workspace OAuth — gave attackers a trusted foothold inside Vercel. Non-sensitive env vars plus Supabase, Datadog, and Authkit credentials were harvested. Data listed at $2M on BreachForums.

Lesson: A breach of your AI vendor is a breach of your company.

Vercel security bulletin

How PromptQL is built

OAuth tokens injected at runtime — the AI never sees credentials

Integration tokens are injected server-side at the moment of the API call and never written to conversation context or model memory. There is no credential surface for the AI to leak.

All operational data stored exclusively in your cloud account

Conversations, credentials, wiki, metadata, and query history never transit or touch Hasura-operated infrastructure. BYOC means exactly that — your AWS, GCP, or Azure account is the only place data lives.

Authorization enforced at the virtual SQL layer, before queries reach the AI

Every query passes through RLS, column restrictions, and ABAC claims at the virtual SQL layer — not at the application layer, not post-query. The AI only ever receives already-filtered results.

Computer agents with explicit personal or shared access boundaries

Personal agents are accessible only to the user who created them. Shared agents are configured by project admins and available to authorized project members — not to arbitrary accounts. Each session uses short-lived tunnel keys.

Row-Level Security is a structural property, not a configuration option

RLS is enforced at the virtual SQL layer for every query from every user. There is no way to deploy PromptQL with RLS accidentally disabled on a data source.

Compliance & Certifications

Independently audited and certified. View our trust center →

SOC 2 Type II
ISO 27001
HIPAA
GDPR
CCPA