Session Isolation
Every user gets an isolated DuckDB instance.
Each authenticated user session runs queries against an ephemeral DuckDB instance that is sandboxed from the filesystem, other sessions, and dangerous SQL statements.
User A ──→ DuckDB Instance A (read-only attach, sandboxed)
User B ──→ DuckDB Instance B (read-only attach, sandboxed)
User C ──→ DuckDB Instance C (read-only attach, sandboxed)

How It Works
When a user makes their first /data request, the engine allocates a DuckDB instance from a pool. The instance has the user's attached databases loaded but runs in a restricted mode that blocks dangerous operations. When the session expires or the instance sits idle beyond the timeout, the instance is destroyed; healthy instances are returned to the pool for reuse.
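The allocate-on-first-request flow can be sketched as a small pool class. This is an illustrative sketch, not the engine's actual API: SessionPool, Instance, acquire, and release are all assumed names.

```typescript
// Hypothetical sketch of pool allocation; real instances would wrap a
// DuckDB connection rather than a plain object.
type Instance = { id: number; attached: string[] };

class SessionPool {
  private free: Instance[] = [];
  private nextId = 0;

  // Hand out a pooled instance, or create one if the pool is empty.
  acquire(databases: string[]): Instance {
    const inst = this.free.pop() ?? { id: this.nextId++, attached: [] };
    inst.attached = [...databases]; // attach the user's databases read-only
    return inst;
  }

  // Return an instance to the pool for reuse by a later session.
  release(inst: Instance): void {
    inst.attached = []; // detach everything; no state survives the session
    this.free.push(inst);
  }
}
```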
What Is Blocked
DuckDB instances are hardened by disabling capabilities that could be used for data exfiltration or denial of service:
| Blocked Category | Examples |
|---|---|
| Filesystem access | COPY TO, EXPORT DATABASE, read_csv('/etc/passwd') |
| External access | httpfs extension, read_parquet('https://...') |
| Schema modification | CREATE TABLE, DROP TABLE, ALTER TABLE |
| System functions | current_setting(), pg_read_file() |
| Unsafe extensions | Loading arbitrary extensions at runtime |
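A guard over the categories above could be sketched as follows. Real engines typically inspect the parsed statement AST rather than matching keywords; the patterns here are illustrative assumptions, not the engine's actual blocklist.

```typescript
// Hypothetical keyword-level guard mirroring the blocked categories above.
const BLOCKED_PATTERNS: RegExp[] = [
  /\bCOPY\s+.+\s+TO\b/i,                        // filesystem writes
  /\bEXPORT\s+DATABASE\b/i,                     // filesystem writes
  /\b(CREATE|DROP|ALTER)\s+TABLE\b/i,           // schema modification
  /\b(INSTALL|LOAD)\b/i,                        // loading extensions at runtime
  /\bread_(csv|parquet)\s*\(\s*'(\/|https?:)/i, // local paths and remote URLs
];

function isBlocked(sql: string): boolean {
  return BLOCKED_PATTERNS.some((p) => p.test(sql));
}
```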
All data access goes through the engine's permission layer, which validates and modifies the parameterized SQL received from the Drizzle Proxy client. Users cannot bypass permission filters or access unauthorized tables.
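One way a permission layer can modify a parameterized query is to wrap it and append a filter parameter. The tenant_id column and the wrap-and-filter strategy below are assumptions for illustration, not the engine's documented behavior.

```typescript
// Hypothetical row-level filter injection: the user's query is wrapped so
// the filter applies regardless of its shape, and the extra parameter is
// appended, keeping the statement fully parameterized.
function applyPermissionFilter(
  sql: string,
  params: unknown[],
  userId: string,
): { sql: string; params: unknown[] } {
  return {
    sql: `SELECT * FROM (${sql}) AS q WHERE q.tenant_id = ?`,
    params: [...params, userId],
  };
}
```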
Instance Pooling
Creating a DuckDB instance is fast but not free. The engine maintains a pool of pre-initialized instances to minimize latency:
const engine = createEngine({
  duckdb: {
    poolSize: 10, // max concurrent instances
    idleTimeout: 300_000, // destroy idle instances after 5 minutes
    maxMemory: '256MB', // memory limit per instance
    threads: 2, // CPU threads per instance
  },
})

| Setting | Default | Description |
|---|---|---|
| poolSize | 10 | Maximum number of concurrent DuckDB instances |
| idleTimeout | 300_000 | Milliseconds before an idle instance is destroyed |
| maxMemory | '256MB' | Maximum memory per instance |
| threads | 2 | CPU threads allocated per instance |
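The idleTimeout setting implies a periodic sweep over the pool. A minimal sketch, assuming a lastUsed timestamp and a destroy method on each pooled instance (both illustrative names):

```typescript
// Hypothetical idle-instance eviction matching the idleTimeout setting.
interface PooledInstance {
  lastUsed: number; // epoch milliseconds of last query
  destroy(): void;  // tear down the underlying DuckDB instance
}

function evictIdle(
  pool: PooledInstance[],
  idleTimeout: number,
  now: number = Date.now(),
): PooledInstance[] {
  const kept: PooledInstance[] = [];
  for (const inst of pool) {
    if (now - inst.lastUsed > idleTimeout) {
      inst.destroy(); // idle beyond the timeout: destroy, never reuse
    } else {
      kept.push(inst);
    }
  }
  return kept;
}
```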
Resource Limits
Each instance is constrained to prevent a single user from consuming all server resources:
- Memory -- hard cap via maxMemory. Queries exceeding the limit are terminated.
- CPU -- limited to threads cores. Long-running queries are killed after queryTimeout.
- Time -- queryTimeout (default 30 seconds) applies to every query. Exceeded queries return 408 Request Timeout.
const engine = createEngine({
  duckdb: {
    maxMemory: '256MB',
    threads: 2,
    queryTimeout: 30_000,
  },
})

DuckDB Hardening
The engine applies the following DuckDB settings to every session instance:
SET enable_external_access = false;
SET enable_fsst_vectors = false;
SET allow_unsigned_extensions = false;
SET lock_configuration = true;

- enable_external_access = false -- blocks httpfs, s3, and any network I/O from within DuckDB
- allow_unsigned_extensions = false -- prevents loading untrusted extensions
- lock_configuration = true -- prevents the session from changing any settings after initialization
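Applying these settings at session init might look like the sketch below, where exec stands in for whatever statement-execution interface the DuckDB binding exposes (an assumed parameter, not a documented API).

```typescript
// Hypothetical session hardening: run each SET statement in order.
const HARDENING_STATEMENTS = [
  "SET enable_external_access = false",
  "SET enable_fsst_vectors = false",
  "SET allow_unsigned_extensions = false",
  // Must come last: once the configuration is locked, the session
  // can no longer change any settings.
  "SET lock_configuration = true",
] as const;

async function hardenSession(exec: (sql: string) => Promise<void>): Promise<void> {
  for (const stmt of HARDENING_STATEMENTS) {
    await exec(stmt);
  }
}
```

Ordering matters here: locking the configuration first would cause the remaining SET statements to fail.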
Session Lifecycle
1. Request arrives with JWT
2. JWT validated → user identity extracted
3. Instance allocated from pool (or created if pool is empty)
4. Databases attached in read-only mode
5. Query executed with permission filters applied
6. Results returned to client
7. Instance returned to pool (or destroyed if pool is full)

Instances do not persist any state between requests. Each query starts from a clean slate with only the attached databases visible.
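The numbered lifecycle above can be sketched end to end as one handler. Every dependency here (verifyJwt, acquire, release, query) is an illustrative placeholder, not the engine's real interface.

```typescript
// Hypothetical request handler tracing the lifecycle steps above.
async function handleDataRequest(
  jwt: string,
  sql: string,
  params: unknown[],
  deps: {
    verifyJwt: (token: string) => { userId: string; databases: string[] };
    acquire: (databases: string[]) => {
      query: (sql: string, params: unknown[]) => Promise<unknown[]>;
    };
    release: (inst: unknown) => void;
  },
): Promise<unknown[]> {
  const { databases } = deps.verifyJwt(jwt); // steps 1-2: validate JWT
  const inst = deps.acquire(databases);      // steps 3-4: allocate, attach read-only
  try {
    // step 5 would apply permission filters to sql/params before executing
    return await inst.query(sql, params);    // step 6: run and return results
  } finally {
    deps.release(inst);                      // step 7: always return/destroy
  }
}
```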