Configuration
Introduction
The configuration.json file is generated whenever you introspect a new connector. The file is located in the <connector_name>/connector sub-directory of its parent subgraph.
The configuration is a metadata object that lists all the database entities (such as tables) the data connector needs to know about in order to serve queries. When your database schema changes, you must update the configuration accordingly.
While the configuration.json file is generated and populated for you, you can hand-edit sections (such as native queries) to control which resources are available to your application.
Structure
The configuration object is a JSON object with the following fields:
{
"version": "v2",
"connection_uri": {
"value": "...",
"variable": "..."
},
"schemas": ["public"],
"tables": [...],
"functions": [],
}
Property: connection_uri
The connection URI for the data source. This is a required field that can be specified either as a direct string value or as a reference to an environment variable:
{
"connection_uri": {
"value": "<connection_uri>"
}
}
The connection string format differs from source to source. See the connector documentation for example connection strings for this and other sources.
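For instance, a PostgreSQL connection URI in JDBC form typically looks like the following (the host, database, and credentials are placeholders):
jdbc:postgresql://localhost:5432/mydb?user=myuser&password=mypassword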
Or using an environment variable:
{
"connection_uri": {
"variable": "JDBC_URL"
}
}
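The variable is resolved from the connector's environment at runtime. Assuming your project supplies connector environment variables through a .env file, the definition might look like this (the URI shown is a placeholder):
JDBC_URL=jdbc:postgresql://localhost:5432/mydb?user=myuser&password=mypassword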
Property: schemas
This is an optional array of schema names to include in the introspection process. If not provided, all schemas will be included. Any schema passed in the JDBC URL will take precedence.
Example:
{
"schemas": ["schema1", "schema2"]
}
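To illustrate the precedence rule, with PostgreSQL's JDBC driver a schema supplied through the currentSchema URL parameter would take precedence over this list. The exact parameter name varies by driver, so treat this as a sketch:
jdbc:postgresql://localhost:5432/mydb?currentSchema=sales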
Property: tables
An array of table definitions generated automatically during introspection. Each table definition includes metadata about the table structure, columns, primary keys, and foreign keys.
Example:
{
"tables": [
{
"name": "public.customers",
"description": "Customer information table",
"category": "TABLE",
"columns": [
{
"name": "customer_id",
"description": "Unique customer identifier",
"type": {
"scalar_type": "INT64"
},
"nullable": false,
"auto_increment": false,
"is_primarykey": true
},
{
"name": "name",
"description": "Customer name",
"type": {
"scalar_type": "STRING"
},
"nullable": false,
"auto_increment": false
},
{
"name": "location",
"description": "Geographic location",
"type": {
"scalar_type": "GEOGRAPHY"
},
"nullable": true,
"auto_increment": false
},
{
"name": "tags",
"description": "Customer tags",
"type": {
"array_type": {
"scalar_type": "STRING"
}
},
"nullable": true,
"auto_increment": false
}
],
"primary_keys": ["customer_id"],
"foreign_keys": {
"fk_customer_order": {
"column_mapping": {
"customer_id": "customer_id"
},
"foreign_collection": "public.orders"
}
}
}
]
}
Versioning & upgrading
The JDBC connector configuration uses a version field to indicate its schema version:
{
"version": "v2"
// other configuration properties
}
This version field helps the connector understand how to interpret the rest of the configuration. As the connector evolves, new configuration versions may be introduced to support new features or changes in behavior.
Configuration versions
The JDBC connector configuration has gone through the following versions:
- v1: Initial configuration format that provides the foundation for JDBC connector configuration
- v2: Current configuration format that uses jOOQ's SQLDataType for all sources, providing better type handling and compatibility across different database systems
Upgrading configuration
When a new configuration version is available, you can upgrade your existing configuration using the Hasura CLI plugin command:
# Upgrade the configuration to the latest version
ddn connector plugin --connector <subgraph>/connector/<connector>/connector.yaml upgrade --config-file /current/config/file/path --outfile /new/config/file/path
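For example, with a hypothetical subgraph named app and a connector named mydb, upgrading the configuration in place might look like this:
ddn connector plugin --connector app/connector/mydb/connector.yaml upgrade \
  --config-file app/connector/mydb/configuration.json \
  --outfile app/connector/mydb/configuration.json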
The upgrade process will automatically convert your configuration to the latest format while preserving your existing data source connections and schema information. This versioning system allows for future schema evolution while maintaining backward compatibility.
What changes during an upgrade
When upgrading your configuration from v1 to v2:
- Column type handling is improved with jOOQ's SQLDataType for better cross-database compatibility
- Configuration structure is refactored for better organization of versioned code
- Type parameters are properly handled for more robust configuration parsing
The upgrade process is designed to be non-destructive, preserving all your existing data source connections and schema information while enabling access to new features and improved type handling.
Native queries
Native queries allow you to use the SQL syntax of the underlying data source to create custom operations and expose them as models in your application. This is useful for complex queries, stored procedures, or custom functions that you want to leverage directly in your API.
Native query structure
A native query is a single SQL statement that returns results and can take arguments. The JDBC connector supports two methods for defining native queries:
- File-based approach (recommended): Store SQL queries in separate files
- Configuration-based approach: Define queries directly in the configuration.json file
File-based native queries
To create a file-based native query:
1. Create a directory structure for your native operations:
mkdir -p <subgraph>/connector/<connector>/native_operations/queries/
2. Create a SQL file with your query, using :parameter syntax for parameters:
-- <subgraph>/connector/<connector>/native_operations/queries/get_customers_by_region.sql
SELECT * FROM customers
WHERE region = :region
AND sales > :min_sales
3. Register the query using the CLI:
ddn connector plugin --connector <subgraph>/connector/<connector>/connector.yaml -- \
native-queries create --operation-path native_operations/queries/get_customers_by_region.sql --name get_customers_by_region
4. Update your metadata to track the new native query:
ddn connector-link update <connector_name> --add-all-resources
Configuration-based native queries
You can also define native queries directly in the configuration.json file:
{
"version": "v2",
"connection_uri": {
"value": "..."
},
"schemas": ["public"],
"tables": [...],
"functions": [
{
"name": "get_customers_by_region",
"description": "Get customers filtered by region",
"sql": "SELECT * FROM customers WHERE region = :region AND sales > :min_sales",
"parameters": [
{
"name": "region",
"description": "Region to filter by",
"type": {
"scalar_type": "STRING"
}
},
{
"name": "min_sales",
"description": "Minimum sales amount",
"type": {
"scalar_type": "INT64"
}
}
],
"result_type": {
"type": "array",
"element_type": {
"type": "named",
"name": "public.customers"
}
}
}
]
}
Parameter syntax
The JDBC connector supports colon prefix syntax to specify parameters. This syntax is translated to parameterized queries, which helps prevent SQL injection.
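Conceptually, a query written with a colon-prefixed parameter is rewritten into a prepared statement with a bound value rather than having the value spliced into the SQL text. A rough sketch of the idea (the actual rewriting is internal to the connector):
-- As written in the native query
SELECT * FROM customers WHERE region = :region
-- Roughly how it is executed: a prepared statement with the value bound separately
SELECT * FROM customers WHERE region = ?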
Important syntax rules
When writing native queries, follow these rules (a short example applying them follows the list):
- Parameters as scalar values only: Parameters can only be used in place of scalar values, not table names, column names, or other SQL parts
- No quoting of string parameters: Don't add quotes around parameters (use :name, not ':name')
- Single statements only: Each native query should be a single SQL statement without a trailing semicolon
- String patterns with concatenation: For LIKE patterns, use concatenation (e.g., LIKE '%' || :search || '%')
- No "hasura_" prefixed parameters: Parameter names starting with hasura_ are reserved
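As an illustration, here is a hypothetical search query that follows these rules: parameters stand in for scalar values only, string parameters are not quoted, the LIKE pattern is built with concatenation, and there is no trailing semicolon. The table and column names are placeholders based on the earlier examples.
-- Hypothetical: find customers whose name matches a search term above a sales threshold
SELECT customer_id, name, region
FROM customers
WHERE name LIKE '%' || :search || '%'
AND sales >= :min_sales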
Result types
The result_type field defines the structure of data returned by the native query:
- Scalar value: A single value (string, number, boolean, etc.)
- Array of values: A list of scalar values or objects
- Named type: References an existing table structure
- Custom object type: A custom structure defined for the query result
Once defined, native queries are exposed in your application and made available to PromptQL.