Introduction
sqltgen is a multi-language SQL-to-code generator. You write standard SQL — a schema (DDL) and annotated query files — and sqltgen emits fully typed, idiomatic database access code in Java, Kotlin, Rust, Go, Python, TypeScript, and JavaScript.
No ORM. No reflection. No runtime query building. Just your SQL, compiled to code.
sqltgen generate
What it does
- Parse your schema — sqltgen reads CREATE TABLE and ALTER TABLE statements and builds an in-memory type model of every table and column.
- Analyse your queries — each annotated query is parsed, and its parameter types and result column types are inferred from the schema. No guessing; the inference is derived from the actual SQL.
- Emit typed code — for every configured target language, sqltgen writes source files containing model types (one per table) and typed query functions.
The generated code is ready to use with standard database drivers — JDBC for Java/Kotlin, sqlx for Rust, database/sql for Go, psycopg3/sqlite3/mysql-connector for Python, and pg / better-sqlite3 / mysql2 for TypeScript/JavaScript.
Why sqltgen?
You already wrote the SQL. ORMs ask you to translate your mental model into a different API; sqltgen lets you keep writing SQL and adds type safety around it.
It's not another dependency — it's your code. The generated files are plain
source code that belongs to your project. They use only your language's standard
database driver (JDBC, sqlx, psycopg3, pg, etc.) — there is no sqltgen-runtime
package, no framework sitting in your import path. If you stop using sqltgen
tomorrow, your code still compiles and runs. You can read it, debug it, even
hand-edit it. Stack traces go through your code and your driver, not through a
library you don't control. Upgrading sqltgen is just re-running the generator and
reviewing the diff — there is no runtime migration.
The generated code is readable. Output looks like code a competent developer wrote by hand: idiomatic function names, proper nullability, correct types. There are no codegen artifacts to work around.
No live database required — and fast because of it. sqltgen analyses your DDL statically. There is nothing to connect, migrate, or seed before you can generate. This also makes it genuinely fast: analysis and codegen are pure in-memory computation, so generation completes in milliseconds regardless of schema size. In CI, that means a single binary invocation with no infrastructure setup — no database service container, no network, no health-check polling.
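In practice that single CI invocation can double as a drift check: regenerate, then fail if the committed output no longer matches. The fragment below is a hypothetical GitHub Actions step (the step name and the drift-check idea are ours; the Docker image is the one described under Installation):

```yaml
# Hypothetical CI step: regenerate, then fail the build if the committed
# generated code is out of date with the schema and queries.
- name: Check generated code is current
  run: |
    docker run --rm -v "$PWD:/workspace" sqltgen/sqltgen generate --config sqltgen.json
    git diff --exit-code
```

The same two commands work in any CI system, since there is no database service to provision first.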
All the SQL features you use. JOINs, CTEs, subqueries, aggregates, UNION, RETURNING, list parameters — all supported. Unknown constructs are handled gracefully rather than failing the whole build.
Comparison to sqlc
sqltgen is inspired by sqlc and shares its core philosophy: write SQL, get typed functions. Both tools perform static analysis from DDL files and neither requires a running database. The main design difference is in how language targets are delivered:
| | sqltgen | sqlc |
|---|---|---|
| Implementation language | Rust | Go |
| Supported targets | Java, Kotlin, Rust, Go, Python, TypeScript, JavaScript | Go, Python, TypeScript (+ community plugins) |
| Supported dialects | PostgreSQL, SQLite, MySQL | PostgreSQL, MySQL, SQLite |
| Language targets | Built-in, native per driver | Built-in + WASM plugin system |
| Config format | sqltgen.json | sqlc.yaml |
sqltgen builds each language target directly into the binary with first-class support for the idiomatic driver in that ecosystem. sqlc's plugin system allows the community to add targets independently. Neither approach is strictly better — they reflect different trade-offs between extensibility and native integration.
Supported targets
| Language | Driver |
|---|---|
| Java | JDBC (java.sql) |
| Kotlin | JDBC (java.sql) |
| Rust | sqlx (async) |
| Python | psycopg3 / sqlite3 / mysql-connector |
| TypeScript | pg / better-sqlite3 / mysql2 |
| JavaScript | same as TypeScript (JSDoc types) |
| Go | database/sql |
Supported dialects
- PostgreSQL — full support
- SQLite — full support
- MySQL — full support
Installation
Docker (no Rust required)
The quickest way to try sqltgen without installing anything. The
sqltgen/sqltgen image on Docker Hub
contains only the binary and its runtime — no shell, no package manager.
Run from your project root, mounting the current directory as /workspace:
docker run --rm -v "$(pwd):/workspace" sqltgen/sqltgen generate --config sqltgen.json
For convenience, wrap it in a shell script and put it on your $PATH:
#!/bin/sh
exec docker run --rm -v "$(pwd):/workspace" sqltgen/sqltgen "$@"
Tags:
edge tracks the latest commit on main and is updated on every push. Versioned tags (v0.1.0, etc.) will be added with each stable release.
Build from source
The simplest install method. Requires a Rust stable toolchain.
git clone https://github.com/sqltgen/sqltgen.git
cd sqltgen
cargo build --release
The binary lands at target/release/sqltgen. Copy it anywhere on your PATH:
cp target/release/sqltgen ~/.local/bin/
cargo install
If you have Rust installed, cargo install fetches and compiles sqltgen from
crates.io:
cargo install sqltgen
The binary is placed in ~/.cargo/bin/, which is on your PATH if you installed
Rust via rustup.
Homebrew (macOS / Linux)
Distribution packages are planned for v0.1.0. This section will be updated when they are available.
brew install sqltgen/tap/sqltgen
curl / pre-built binaries
Pre-built binaries for Linux (x86_64, aarch64), macOS (x86_64, Apple Silicon), and Windows are available on the GitHub releases page.
# Linux x86_64 — replace VERSION with the latest release tag
curl -L https://github.com/sqltgen/sqltgen/releases/download/VERSION/sqltgen-x86_64-unknown-linux-musl.tar.gz \
| tar xz
chmod +x sqltgen
sudo mv sqltgen /usr/local/bin/
.deb / .rpm packages
Planned for v0.1.0.
# Debian / Ubuntu
sudo dpkg -i sqltgen_VERSION_amd64.deb
# Fedora / RHEL
sudo rpm -i sqltgen-VERSION.x86_64.rpm
Verify the installation
sqltgen --version
Expected output: sqltgen VERSION
Quickstart
This guide walks you from zero to running code in about five minutes. We will build a small bookstore schema, write a few queries, generate Java code, and run it against a PostgreSQL database.
If you prefer a different language, swap "java" for "rust", "kotlin", "python", "typescript", or "javascript" in the config below. The schema and query files stay the same.
1. Install sqltgen
Docker (no Rust required — recommended for trying sqltgen):
docker run --rm -v "$(pwd):/workspace" sqltgen/sqltgen generate --config sqltgen.json
cargo install (if you have Rust):
cargo install sqltgen
Build from source:
git clone https://github.com/sqltgen/sqltgen.git
cd sqltgen
cargo build --release
# binary at target/release/sqltgen
See the installation guide for all options including pre-built binaries and Homebrew.
2. Write your schema
Create schema.sql:
CREATE TABLE author (
id BIGSERIAL PRIMARY KEY,
name TEXT NOT NULL,
bio TEXT,
birth_year INTEGER
);
CREATE TABLE book (
id BIGSERIAL PRIMARY KEY,
author_id BIGINT NOT NULL,
title TEXT NOT NULL,
genre TEXT NOT NULL,
price NUMERIC(10, 2) NOT NULL,
published_at DATE
);
3. Annotate your queries
Create queries.sql:
-- name: GetAuthor :one
SELECT id, name, bio, birth_year
FROM author
WHERE id = @id;
-- name: ListAuthors :many
SELECT id, name, bio, birth_year
FROM author
ORDER BY name;
-- name: CreateAuthor :one
INSERT INTO author (name, bio, birth_year)
VALUES (@name, @bio, @birth_year)
RETURNING *;
-- name: DeleteAuthor :exec
DELETE FROM author WHERE id = @id;
-- name: GetBooksByAuthor :many
SELECT id, author_id, title, genre, price, published_at
FROM book
WHERE author_id = @author_id
ORDER BY title;
Each -- name: QueryName :command annotation names the query and tells sqltgen
what kind of result to expect:
- :one — return a single optional row
- :many — return all rows as a list
- :exec — execute and return nothing
- :execrows — execute and return the affected row count
4. Create the config
Create sqltgen.json:
{
"version": "1",
"engine": "postgresql",
"schema": "schema.sql",
"queries": "queries.sql",
"gen": {
"java": { "out": "gen", "package": "com.example.db" }
}
}
5. Generate
sqltgen generate
sqltgen writes the following files under gen/com/example/db/:
gen/com/example/db/
Author.java — record type for the author table
Book.java — record type for the book table
Queries.java — static query functions
Querier.java — DataSource-backed wrapper
6. Use the generated code
Add the PostgreSQL JDBC driver to your project:
<!-- pom.xml -->
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>42.7.3</version>
</dependency>
Then call the generated functions:
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.List;
import java.util.Optional;
import com.example.db.Author;
import com.example.db.Queries;
Connection conn = DriverManager.getConnection(
"jdbc:postgresql://localhost:5432/mydb", "user", "pass");
// Insert a new author
Optional<Author> author = Queries.createAuthor(conn,
"Ursula K. Le Guin", "American author of science fiction", 1929);
// Fetch by ID
Optional<Author> found = Queries.getAuthor(conn, author.get().id());
// List all authors
List<Author> all = Queries.listAuthors(conn);
// Delete
Queries.deleteAuthor(conn, author.get().id());
7. What was generated
Open gen/com/example/db/Queries.java. You will find:
- A private static final String SQL_… constant for each query.
- A static method for each query, with typed parameters matching the @param annotations, and a return type derived from the :command and the inferred result columns.
public static Optional<Author> getAuthor(Connection conn, long id)
throws SQLException {
try (var ps = conn.prepareStatement(SQL_GET_AUTHOR)) {
ps.setLong(1, id);
try (var rs = ps.executeQuery()) {
if (!rs.next()) return Optional.empty();
return Optional.of(new Author(
rs.getLong(1),
rs.getString(2),
rs.getString(3),
rs.getObject(4, Integer.class)
));
}
}
}
No reflection. No runtime overhead. The generated code does exactly what you would write by hand — just without the repetition.
Next steps
- Configuration — full reference for
sqltgen.json - Writing queries — named params, list params, RETURNING, CTEs
- Type mapping — how SQL types map to each language
- Language guides — driver setup, Querier pattern, and tips per language
Writing your schema
sqltgen reads standard DDL. Point the schema field in sqltgen.json at a
single .sql file or a directory of migration files — sqltgen loads them all
and applies every statement in order.
Tables
CREATE TABLE author (
id BIGSERIAL PRIMARY KEY,
name TEXT NOT NULL,
bio TEXT, -- nullable: no NOT NULL → nullable
birth_year INTEGER
);
CREATE TABLE book (
id BIGSERIAL PRIMARY KEY,
author_id BIGINT NOT NULL,
title TEXT NOT NULL,
genre TEXT NOT NULL,
price NUMERIC(10, 2) NOT NULL,
published_at DATE -- nullable
);
Nullability rules:
- A column with NOT NULL is non-null.
- A column without NOT NULL is nullable.
- PRIMARY KEY columns are implicitly non-null.
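Applied to the author table above, these rules would shape a generated Java model roughly like this. This is an illustrative sketch, not verbatim sqltgen output — field names and types follow the Java type-mapping rules described later in this guide:

```java
// Illustrative sketch of a model for the author table.
// NOT NULL / PRIMARY KEY columns map to non-null primitives or references;
// columns without NOT NULL map to boxed/reference types that accept null.
record Author(
    long id,           // BIGSERIAL PRIMARY KEY -> non-null primitive
    String name,       // TEXT NOT NULL         -> non-null reference
    String bio,        // TEXT                  -> nullable, may be null
    Integer birthYear  // INTEGER               -> nullable, boxed
) {}
```

Because bio and birth_year lack NOT NULL, callers must be prepared for null in those fields.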
Views
CREATE VIEW is supported. sqltgen infers the column types of a view from
its SELECT body using the same type resolver used for query parsing:
CREATE VIEW author_book_count AS
SELECT a.id, a.name, COUNT(b.id) AS book_count
FROM author a
LEFT JOIN book b ON b.author_id = a.id
GROUP BY a.id, a.name;
Views are registered in the schema as first-class entities. You can query them in annotated query files exactly like tables:
-- name: GetAuthorBookCount :one
SELECT id, name, book_count
FROM author_book_count
WHERE id = @id;
Views that reference other views are resolved correctly as long as they are declared after the views they depend on.
DROP VIEW [IF EXISTS] is also supported.
ALTER TABLE
sqltgen applies ALTER TABLE statements in declaration order, so a directory
of numbered migration files works exactly as expected:
| Operation | PostgreSQL | SQLite | MySQL |
|---|---|---|---|
| ADD COLUMN [IF NOT EXISTS] | ✅ | ✅ | ✅ |
| DROP COLUMN [IF EXISTS] | ✅ | — | ✅ |
| RENAME COLUMN … TO … | ✅ | ✅ | ✅ |
| RENAME TO … | ✅ | ✅ | ✅ |
| ALTER COLUMN … SET/DROP NOT NULL | ✅ | — | ✅ |
| ALTER COLUMN … TYPE | ✅ | — | ✅ |
| ADD [CONSTRAINT …] PRIMARY KEY | ✅ | — | ✅ |
DROP TABLE
DROP TABLE [IF EXISTS] removes the table from the schema model. Subsequent
queries referencing the dropped table will produce a warning.
What is silently ignored
Statements that do not affect the type model — CREATE INDEX, CREATE FUNCTION,
CREATE TRIGGER, CREATE SEQUENCE, COMMENT ON, etc. — are skipped without
error. This lets you point sqltgen at a real migration directory without curating
the SQL.
Migration directory example
migrations/
001_create_authors.sql
002_create_books.sql
003_add_genre_to_books.sql
004_create_sales.sql
"schema": "migrations/"
sqltgen loads the files in lexicographic order (which matches numeric prefixes) and applies all DDL statements in sequence.
If each migration file contains both an "up" and a "down" section, use
schema_stop_marker to tell sqltgen where the down section begins:
"schema": "migrations/",
"schema_stop_marker": "-- migrate:down"
Everything from the marker line onward is ignored, so only the up DDL reaches the schema parser. See Migration files with up/down sections for the full list of supported tools.
Configuration
sqltgen is configured by a JSON file, sqltgen.json by default.
sqltgen generate # reads sqltgen.json in the current directory
sqltgen generate --config path/to/sqltgen.json
sqltgen generate -c path/to/sqltgen.json
Full reference
{
"version": "1",
"engine": "postgresql",
"schema": "migrations/",
"queries": "queries.sql",
"gen": {
"java": { "out": "src/main/java", "package": "com.example.db" },
"kotlin": { "out": "src/main/kotlin", "package": "com.example.db" },
"rust": { "out": "src/db", "package": "" },
"python": { "out": "gen", "package": "" },
"typescript": { "out": "src/db", "package": "" },
"javascript": { "out": "src/db", "package": "" }
}
}
Top-level fields
| Field | Required | Description |
|---|---|---|
| version | yes | Must be "1". |
| engine | yes | SQL dialect. One of "postgresql", "sqlite", "mysql". |
| schema | yes | Path to a .sql file or a directory. See Schema path. |
| schema_stop_marker | no | Strip down-migration sections. See Migration files with up/down sections. |
| default_schema | no | Default schema for unqualified table references. See Default schema. |
| queries | yes | Query source(s). See Queries field. |
| gen | yes | Map of language key → output config. At least one entry required. |
Schema path
The schema field accepts:
- A single file — "schema.sql".
- A directory — all .sql files in the directory are loaded in lexicographic order. This is ideal for numbered migration files:

      migrations/
        001_create_users.sql
        002_create_posts.sql
        003_add_tags.sql

      "schema": "migrations/"

  sqltgen applies CREATE TABLE, ALTER TABLE, and DROP TABLE in file order, so the final schema reflects the fully-migrated state.
Statements sqltgen does not recognise (CREATE INDEX, CREATE FUNCTION, etc.)
are silently skipped.
Migration files with up/down sections
Some migration tools (dbmate, goose, golang-migrate) store both the "up" and "down" SQL in a single file, separated by a comment marker:
-- migrate:up
CREATE TABLE users (id BIGINT PRIMARY KEY, name TEXT NOT NULL);
-- migrate:down
DROP TABLE users;
Set schema_stop_marker to the down-section marker and sqltgen will discard
everything from that line onward in each file:
{
"schema": "migrations/",
"schema_stop_marker": "-- migrate:down"
}
Common values by tool:
| Tool | schema_stop_marker |
|---|---|
| dbmate | "-- migrate:down" |
| goose | "-- +goose Down" |
| golang-migrate / sql-migrate | "-- +migrate Down" |
Omit the field (or set it to null) to read files in full — this is the
default and preserves existing behaviour.
Default schema
When your DDL uses schema-qualified table names (e.g. CREATE TABLE public.users),
unqualified references in queries (e.g. SELECT * FROM users) need to know which
schema to match against. Set default_schema to control this:
{
"engine": "postgresql",
"default_schema": "public"
}
If omitted, sqltgen uses the engine's conventional default:
| Engine | Default |
|---|---|
| postgresql | "public" |
| sqlite | "main" |
| mysql | none (schema = database; specify explicitly if needed) |
This also applies to DDL operations: ALTER TABLE users will match
public.users when the default schema is "public".
Most projects can omit this field entirely. Set it only if your DDL uses a
non-standard schema (e.g. a custom PostgreSQL search_path).
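For example (hypothetical files), with the PostgreSQL default of "public" in effect, the qualified DDL and the unqualified query below refer to the same table:

```sql
-- schema.sql: DDL qualified with an explicit schema
CREATE TABLE public.users (
  id   BIGINT PRIMARY KEY,
  name TEXT NOT NULL
);

-- queries.sql: unqualified reference, resolved against default_schema ("public")
-- name: GetUser :one
SELECT id, name FROM users WHERE id = @id;
```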
Queries field
The queries field accepts three forms.
Single file
All queries land in one output file per language (Queries.java, queries.ts, etc.):
"queries": "queries.sql"
Array of paths or globs
Each file becomes its own group, named after the file stem. users.sql → group
users → UsersQueries.java / users.ts.
"queries": ["queries/users.sql", "queries/posts.sql"]
Glob patterns are supported and are sorted lexicographically:
"queries": ["queries/**/*.sql"]
An error is raised if a glob matches no files.
Grouped map
Full control over group names and which files belong to each group. Values can be a single path/glob or an array:
"queries": {
"users": "queries/users.sql",
"posts": ["queries/posts/**/*.sql", "queries/extra.sql"]
}
Output file names per group
| Group name | Java / Kotlin | Rust / Python / TypeScript / JavaScript |
|---|---|---|
| users | UsersQueries.java / .kt | users.rs / users.py / users.ts / users.js |
| posts | PostsQueries.java / .kt | posts.rs / posts.py / posts.ts / posts.js |
| (single file) | Queries.java / .kt | queries.rs / queries.py / queries.ts / queries.js |
Per-language output config (gen.*)
| Field | Required | Description |
|---|---|---|
| out | yes | Output root directory. Generated files are written under this path. |
| package | yes | Package or module name. Empty string "" for languages without packages (Rust, Python, TypeScript, JavaScript). For Java/Kotlin: "com.example.db". |
| list_params | no | Strategy for list/IN parameters: "native" (default) or "dynamic". See List parameter strategies. |
Language keys
Valid keys in the gen map:
| Key | Language |
|---|---|
| java | Java (JDBC) |
| kotlin | Kotlin (JDBC) |
| rust | Rust (sqlx) |
| python | Python (psycopg3 / sqlite3 / mysql-connector) |
| typescript | TypeScript |
| javascript | JavaScript (JSDoc types) |
| go | Go (database/sql) |
List parameter strategies
When a query uses -- @ids type[] to pass a list to WHERE id IN (@ids),
sqltgen rewrites the SQL in one of two ways.
native (default)
A single bind is used with an engine-native JSON/array expression. The list size does not need to be known at code-generation time.
| Engine | Rewritten SQL |
|---|---|
| PostgreSQL | WHERE id = ANY($1) — native PostgreSQL array |
| SQLite | WHERE id IN (SELECT value FROM json_each(?)) — JSON array string |
| MySQL | WHERE id IN (SELECT value FROM JSON_TABLE(?, '$[*]' COLUMNS (value BIGINT PATH '$'))) |
dynamic
The IN (?,?,…) clause is built at runtime with one placeholder per element.
The SQL string is reconstructed on every call.
"java": { "out": "gen", "package": "com.example.db", "list_params": "dynamic" }
Use dynamic when:
- Your driver does not support the native array/JSON approach.
- You prefer simple, portable SQL.
- Lists are always small and performance is not a concern.
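As an illustration, the dynamic rewrite amounts to splicing one placeholder per list element into the SQL before preparing the statement. The sketch below is hypothetical — the /*IN*/ marker and method name are invented for the example, and sqltgen's actual generated code may differ:

```java
// Illustrative sketch of the "dynamic" strategy: build an IN (?,?,…) clause
// with one '?' per list element, reconstructing the SQL string on each call.
static String expandInClause(String sqlTemplate, int listSize) {
    // sqlTemplate contains a single "/*IN*/" marker where the list belongs.
    StringBuilder placeholders = new StringBuilder();
    for (int i = 0; i < listSize; i++) {
        if (i > 0) placeholders.append(", ");
        placeholders.append("?");
    }
    return sqlTemplate.replace("/*IN*/", placeholders.toString());
}
```

Called with a three-element list, WHERE id IN (/*IN*/) becomes WHERE id IN (?, ?, ?), and each element is then bound positionally.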
Complete example
{
"version": "1",
"engine": "postgresql",
"schema": "migrations/",
"queries": {
"users": "sql/users.sql",
"posts": ["sql/posts.sql", "sql/post_tags.sql"],
"search": "sql/search/**/*.sql"
},
"gen": {
"java": {
"out": "src/main/java",
"package": "com.example.db",
"list_params": "dynamic"
},
"typescript": {
"out": "src/db",
"package": ""
}
}
}
Writing queries
sqltgen reads query files containing standard SQL with a lightweight annotation
format. Each query starts with a -- name: comment that gives it a name and a
command.
Annotation syntax
-- name: QueryName :command
SELECT …
- QueryName — PascalCase name used to derive function/method names in all generated backends (converted to camelCase for Java/Kotlin/TypeScript/JavaScript and snake_case for Rust/Python).
- :command — see Query commands below.
The annotation must appear immediately before the SQL statement. Any whitespace between the annotation and the statement is ignored.
Multiple queries live in the same file, separated by their annotations:
-- name: GetAuthor :one
SELECT id, name FROM author WHERE id = @id;
-- name: ListAuthors :many
SELECT id, name FROM author ORDER BY name;
Query commands
| Command | Return type | Notes |
|---|---|---|
| :one | One optional row | Optional&lt;T&gt; / T? / Option&lt;T&gt; / T \| None / T \| null |
| :many | All rows as a list | List&lt;T&gt; / Vec&lt;T&gt; / list[T] / T[] |
| :exec | Nothing | void / Unit / () / None / Promise&lt;void&gt; |
| :execrows | Affected row count | long / Long / u64 / int / Promise&lt;number&gt; |
Named parameters
Use @param_name in the SQL body. sqltgen infers the parameter type from the
context in which it is used — a WHERE clause comparison, a SET assignment,
a VALUES row, etc.
-- name: GetBook :one
SELECT id, author_id, title, genre, price, published_at
FROM book
WHERE id = @id;
@id above is compared against the book.id column (BIGINT NOT NULL), so
sqltgen infers id: i64 in Rust, id: long in Java, id: int in Python, etc.
Named parameters become named function arguments in every generated backend.
Type and nullability overrides
Above the -- name: annotation, you can add per-parameter annotation lines to
override the inferred type or nullability:
-- name: UpdateAuthorBio :exec
-- @bio null
UPDATE author SET bio = @bio WHERE id = @id;
| Annotation | Effect |
|---|---|
| -- @name null | Mark the parameter nullable |
| -- @name not null | Mark the parameter non-null |
| -- @name type | Override the type (e.g. -- @published_at date) |
| -- @name type null | Override type and mark nullable |
| -- @name type not null | Override type and mark non-null |
Valid types follow the SQL type names used in the type mapping reference:
bigint, text, boolean, date, timestamp, uuid, etc.
Annotations apply only to the query immediately following them.
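For example, here is a hypothetical range query against the bookstore schema that combines a type override with a nullability override. published_at is a nullable DATE column, so the annotations pin both parameters to non-null date values:

```sql
-- name: ListBooksPublishedBetween :many
-- @from date not null
-- @to date not null
SELECT id, title, published_at
FROM book
WHERE published_at BETWEEN @from AND @to
ORDER BY published_at;
```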
Positional parameters
Instead of @name, you can use the engine's native positional placeholders:
- PostgreSQL: $1, $2, $3, …
- SQLite: ?1, ?2, ?3, …
- MySQL: ?, ?, ? (bound in order)
-- name: ListBooksWithLimit :many
SELECT id, title, genre, price
FROM book
ORDER BY title
LIMIT $1 OFFSET $2;
Positional parameters get auto-generated names (p1, p2, …) in generated
function signatures.
Named and positional parameters cannot be mixed in the same query.
List parameters (IN clauses)
To pass a variable-length list to an IN clause, declare the parameter with
an array type annotation:
-- name: GetBooksByIds :many
-- @ids bigint[] not null
SELECT id, author_id, title, genre, price, published_at
FROM book
WHERE id IN (@ids)
ORDER BY title;
The generated function accepts a collection (List<Long> in Java, &[i64] in
Rust, list[int] in Python, number[] in TypeScript). The SQL is rewritten to
the engine's native expression — see
List parameter strategies.
RETURNING clauses
RETURNING columns are resolved just like SELECT projections. When RETURNING *
targets a single table, sqltgen reuses the existing table model type:
-- name: CreateAuthor :one
INSERT INTO author (name, bio, birth_year)
VALUES (@name, @bio, @birth_year)
RETURNING *;
-- returns: Optional<Author> (reuses Author type)
-- name: DeleteAuthor :one
DELETE FROM author WHERE id = @id
RETURNING id, name;
-- returns: Optional<DeleteAuthorRow> (explicit column list → inline type)
JOINs
Result columns from JOINs are resolved per-table. Columns from the nullable side of an outer join (LEFT/RIGHT/FULL JOIN) are automatically made nullable. When the result spans multiple tables, an inline row type is emitted:
-- name: ListBooksWithAuthor :many
SELECT b.id, b.title, b.genre, b.price,
a.name AS author_name, a.bio AS author_bio
FROM book b
JOIN author a ON a.id = b.author_id
ORDER BY b.title;
-- emits: ListBooksWithAuthorRow { id, title, genre, price, author_name, author_bio }
CTEs (WITH)
CTE result columns are available to the outer query as if they were tables.
Data-modifying CTEs (WITH … DELETE … RETURNING …) are fully supported:
-- name: GetTopSellingBooks :many
WITH book_sales AS (
SELECT book_id, SUM(quantity) AS units_sold
FROM sale_item
GROUP BY book_id
)
SELECT b.id, b.title, b.genre, bs.units_sold
FROM book b
JOIN book_sales bs ON bs.book_id = b.id
ORDER BY bs.units_sold DESC;
-- name: ArchiveOldBooks :many
WITH archived AS (
DELETE FROM book WHERE published_at < $1 RETURNING id, title
)
SELECT id, title FROM archived ORDER BY title;
Subqueries
Subqueries in WHERE, EXISTS, scalar positions in the SELECT list, and
derived tables in FROM are all supported:
-- name: GetBooksNotByAuthor :many
SELECT id, title, genre
FROM book
WHERE author_id NOT IN (SELECT id FROM author WHERE name = $1)
ORDER BY title;
-- name: GetBookWithAuthorName :many
SELECT b.id, b.title,
(SELECT a.name FROM author a WHERE a.id = b.author_id) AS author_name
FROM book b
ORDER BY b.title;
-- author_name is nullable (scalar subquery → may return NULL)
Aggregates and expressions
COUNT, SUM, MIN, MAX, AVG are all supported:
- COUNT(*) and COUNT(col) always produce a non-null integer.
- SUM, MIN, MAX, AVG produce a nullable result if the argument or its source column is nullable, or when the query has a LEFT JOIN.
-- name: CountBooksByGenre :many
SELECT genre, COUNT(*) AS book_count
FROM book
GROUP BY genre
ORDER BY genre;
-- name: GetSaleStats :one
SELECT MIN(quantity) AS min_qty,
MAX(quantity) AS max_qty,
SUM(quantity) AS sum_qty
FROM sale_item;
CASE WHEN … END, COALESCE, BETWEEN, LIKE, HAVING, and EXISTS
parameters are all inferred from their surrounding context.
UNION / INTERSECT / EXCEPT
Set operations are supported. Result columns are derived from the first branch, and column types are coerced across branches:
-- name: GetAllPeopleNames :many
SELECT name FROM author
UNION
SELECT display_name AS name FROM user_account
ORDER BY name;
DISTINCT and LIMIT / OFFSET
-- name: GetDistinctGenres :many
SELECT DISTINCT genre FROM book ORDER BY genre;
-- name: ListBooksWithLimit :many
SELECT id, title, genre, price FROM book ORDER BY title
LIMIT $1 OFFSET $2;
Table-wildcard reuse
When a query selects all columns from a single table (SELECT * or SELECT t.*)
and the join structure means no outer-join nullability is introduced, sqltgen
reuses the table's existing model type instead of emitting a per-query row struct:
-- name: GetAllBookFields :many
SELECT b.*
FROM book b
ORDER BY b.id;
-- returns: List<Book> (not List<GetAllBookFieldsRow>)
Type mapping
Each language guide below shows how SQL types map to host-language types, how nullability is expressed, and which dependencies are required.
Nullability
Nullability is determined from the schema and the query structure:
- A column with NOT NULL is non-null.
- A column without NOT NULL is nullable.
- PRIMARY KEY columns are implicitly non-null.
- Columns from the nullable (outer-joined) side of a LEFT/RIGHT/FULL JOIN are made nullable.
- Scalar subqueries in the SELECT list produce nullable results.
The sections below show the nullable form alongside the non-null form for each type.
Java
| SQL type | Non-null | Nullable |
|---|---|---|
| BOOLEAN | boolean | Boolean |
| SMALLINT | short | Short |
| INTEGER / INT | int | Integer |
| BIGINT / BIGSERIAL | long | Long |
| REAL / FLOAT4 | float | Float |
| DOUBLE PRECISION / FLOAT8 | double | Double |
| NUMERIC / DECIMAL | BigDecimal | BigDecimal |
| TEXT / VARCHAR / CHAR | String | String |
| BYTEA | byte[] | byte[] |
| DATE | LocalDate | LocalDate |
| TIME | LocalTime | LocalTime |
| TIMESTAMP | LocalDateTime | LocalDateTime |
| TIMESTAMPTZ | OffsetDateTime | OffsetDateTime |
| INTERVAL | String | String |
| UUID | UUID | UUID |
| JSON / JSONB | String | String |
| type[] | List&lt;T&gt; | List&lt;T&gt; |
| Unknown | Object | Object |
Reference types (String, BigDecimal, etc.) are nullable by passing null.
Primitive types (boolean, int, long, etc.) are boxed when the column is
nullable.
The generated code uses only java.sql and standard JDK classes. No extra
imports are needed for basic types.
Kotlin
| SQL type | Non-null | Nullable |
|---|---|---|
| BOOLEAN | Boolean | Boolean? |
| SMALLINT | Short | Short? |
| INTEGER / INT | Int | Int? |
| BIGINT / BIGSERIAL | Long | Long? |
| REAL / FLOAT4 | Float | Float? |
| DOUBLE PRECISION / FLOAT8 | Double | Double? |
| NUMERIC / DECIMAL | BigDecimal | BigDecimal? |
| TEXT / VARCHAR / CHAR | String | String? |
| BYTEA | ByteArray | ByteArray? |
| DATE | LocalDate | LocalDate? |
| TIME | LocalTime | LocalTime? |
| TIMESTAMP | LocalDateTime | LocalDateTime? |
| TIMESTAMPTZ | OffsetDateTime | OffsetDateTime? |
| INTERVAL | String | String? |
| UUID | UUID | UUID? |
| JSON / JSONB | String | String? |
| type[] | List&lt;T&gt; | List&lt;T&gt;? |
| Unknown | Any | Any? |
Rust
| SQL type | Non-null | Nullable |
|---|---|---|
| BOOLEAN | bool | Option&lt;bool&gt; |
| SMALLINT | i16 | Option&lt;i16&gt; |
| INTEGER / INT | i32 | Option&lt;i32&gt; |
| BIGINT / BIGSERIAL | i64 | Option&lt;i64&gt; |
| REAL / FLOAT4 | f32 | Option&lt;f32&gt; |
| DOUBLE PRECISION / FLOAT8 | f64 | Option&lt;f64&gt; |
| NUMERIC / DECIMAL | rust_decimal::Decimal | Option&lt;rust_decimal::Decimal&gt; |
| TEXT / VARCHAR / CHAR | String | Option&lt;String&gt; |
| BYTEA / BLOB | Vec&lt;u8&gt; | Option&lt;Vec&lt;u8&gt;&gt; |
| DATE | time::Date | Option&lt;time::Date&gt; |
| TIME | time::Time | Option&lt;time::Time&gt; |
| TIMESTAMP | time::PrimitiveDateTime | Option&lt;time::PrimitiveDateTime&gt; |
| TIMESTAMPTZ | time::OffsetDateTime | Option&lt;time::OffsetDateTime&gt; |
| INTERVAL | String | Option&lt;String&gt; |
| UUID | uuid::Uuid | Option&lt;uuid::Uuid&gt; |
| JSON / JSONB | serde_json::Value | Option&lt;serde_json::Value&gt; |
| type[] | Vec&lt;T&gt; | Option&lt;Vec&lt;T&gt;&gt; |
| Unknown | serde_json::Value | Option<serde_json::Value> |
Enable sqlx features as needed:
sqlx = { version = "0.8", features = [
"runtime-tokio",
"postgres", # or "sqlite" / "mysql"
"time", # Date, Time, Timestamp, TimestampTz
"uuid", # UUID
"rust_decimal", # NUMERIC / DECIMAL
] }
Python
| SQL type | Non-null | Nullable |
|---|---|---|
| BOOLEAN | bool | bool \| None |
| SMALLINT / INTEGER / BIGINT | int | int \| None |
| REAL / DOUBLE PRECISION | float | float \| None |
| NUMERIC / DECIMAL | decimal.Decimal | decimal.Decimal \| None |
| TEXT / VARCHAR / CHAR | str | str \| None |
| BYTEA / BLOB | bytes | bytes \| None |
| DATE | datetime.date | datetime.date \| None |
| TIME | datetime.time | datetime.time \| None |
| TIMESTAMP / TIMESTAMPTZ | datetime.datetime | datetime.datetime \| None |
| INTERVAL | datetime.timedelta | datetime.timedelta \| None |
| UUID | uuid.UUID | uuid.UUID \| None |
| JSON (psycopg3) | object | object \| None |
| JSON (sqlite3, mysql-connector) | str | str \| None |
| type[] | list[T] | list[T] \| None |
| Unknown | Any | Any \| None |
All date/time, decimal, and uuid types come from the standard library. psycopg3 automatically deserializes JSON columns to Python objects; sqlite3 and mysql-connector return the raw JSON string.
TypeScript
| SQL type | Non-null | Nullable |
|---|---|---|
| BOOLEAN | boolean | boolean \| null |
| SMALLINT / INTEGER / BIGINT | number ⚠️ | number \| null |
| REAL / DOUBLE PRECISION / NUMERIC | number | number \| null |
| TEXT / VARCHAR / CHAR | string | string \| null |
| BYTEA / BLOB | Buffer | Buffer \| null |
| DATE / TIME / TIMESTAMP / TIMESTAMPTZ | Date | Date \| null |
| INTERVAL | string | string \| null |
| UUID | string | string \| null |
| JSON / JSONB | unknown | unknown \| null |
| type[] | T[] | T[] \| null |
| Unknown | unknown | unknown \| null |
⚠️ BIGINT maps to number, which loses precision above 2⁵³. Use BigInt in application code if your IDs or values may exceed this range.
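Why this matters: a JavaScript number is an IEEE-754 double, and the cutoff can be demonstrated in any language with 64-bit doubles — shown here in Java to match the rest of this guide's examples:

```java
// IEEE-754 doubles (what a JavaScript `number` is) represent integers
// exactly only up to 2^53. Above that, adjacent integers collapse to the
// same double value — which is how BIGINT ids silently lose precision.
long maxSafe = 1L << 53;                       // 9007199254740992 (2^53)
double collided = (double) (maxSafe + 1);      // rounds back down to 2^53
boolean lossy = collided == (double) maxSafe;  // true: precision lost
```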
All types are built-in to TypeScript/Node.js — no extra dependencies needed for the type annotations themselves.
JavaScript
Type mapping is identical to TypeScript. Types are expressed as JSDoc comments
(@typedef, @param, @returns) rather than inline TypeScript syntax, but the
underlying types are the same.
Go
| SQL type | Non-null | Nullable |
|---|---|---|
| BOOLEAN | bool | sql.NullBool |
| SMALLINT / INTEGER | int32 | sql.NullInt32 |
| BIGINT / BIGSERIAL | int64 | sql.NullInt64 |
| REAL / DOUBLE PRECISION / NUMERIC | float64 | sql.NullFloat64 |
| TEXT / VARCHAR / CHAR | string | sql.NullString |
| BYTEA / BLOB | []byte | []byte |
| DATE / TIME / TIMESTAMP / TIMESTAMPTZ | time.Time | sql.NullTime |
| INTERVAL | string | sql.NullString |
| UUID | string | sql.NullString |
| JSON / JSONB | json.RawMessage | json.RawMessage |
| type[] | []T | []T |
| Unknown | interface{} | interface{} |
All nullable types use the standard database/sql null wrappers. No extra
dependencies are required beyond the driver itself.
Running examples
Runnable example projects for all backends and dialects live in examples/.
Each is a self-contained project with its own Makefile containing generate,
build, and run targets.
Prerequisites
- Docker — for PostgreSQL and MySQL examples
- Java 21 + Maven — for Java and Kotlin examples
- Go 1.23+ — for Go examples
- Node 22+ — for TypeScript and JavaScript examples
- Python 3.11+ — for Python examples
- Rust stable — for Rust examples (same toolchain used to build sqltgen)
Running a single example
make -C examples/rust/postgresql run
make -C examples/java/sqlite run
make -C examples/python/mysql run
make -C examples/typescript/postgresql run
make -C examples/go/sqlite run
Each example starts its own database container if needed, runs
sqltgen generate, compiles the project, executes it against the database,
and tears down the container.
Available examples
| Language | PostgreSQL | SQLite | MySQL |
|---|---|---|---|
| Java | ✅ | ✅ | ✅ |
| Kotlin | ✅ | ✅ | ✅ |
| Rust | ✅ | ✅ | ✅ |
| Python | ✅ | ✅ | ✅ |
| TypeScript | ✅ | ✅ | ✅ |
| JavaScript | ✅ | ✅ | ✅ |
| Go | ✅ | ✅ | — |
Running all examples at once
make run-all
This starts one shared PostgreSQL container and one shared MySQL container, runs all examples against them in sequence, and tears the containers down at the end. Faster than running each example independently.
Example layout
Every example follows the same structure:
examples/{lang}/{dialect}/
sqltgen.json — config pointing at schema.sql + queries.sql
schema.sql — bookstore DDL
queries.sql — annotated queries
Makefile — generate / build / run targets
gen/ — generated code (checked in)
src/ — application code (or main.py, Main.java, etc.)
The bookstore schema (author, book, sale, sale_item tables) is shared
across all examples. It exercises a representative range of features: JOINs,
CTEs, aggregates, RETURNING, list parameters, nullable columns, and multiple
query types.
See examples/README.md for more detail.
Java
sqltgen generates JDBC-based Java code using only the standard java.sql package.
No framework, no reflection, no extra runtime dependencies.
Configuration
"java": {
"out": "src/main/java",
"package": "com.example.db"
}
| Field | Description |
|---|---|
| out | Directory under which the package path is created. |
| package | Java package name. Files are placed at out/com/example/db/. |
| list_params | "native" (default) or "dynamic". See List parameter strategies. |
What is generated
For a schema with author and book tables and a single query file, sqltgen emits:
src/main/java/com/example/db/
Author.java — Java record for the author table
Book.java — Java record for the book table
Queries.java — static query functions
Querier.java — DataSource-backed connection-per-call wrapper
When using query grouping, each group produces its own
{GroupName}Queries.java and {GroupName}Querier.java.
Model records
// Author.java
package com.example.db;
public record Author(
long id,
String name,
String bio, // nullable → String (null if absent)
Integer birthYear // nullable → boxed Integer
) {}
- Non-null columns use primitive types (long, int, boolean, double, etc.).
- Nullable columns use boxed types (Long, Integer, Boolean, Double, etc.) or reference types (String, BigDecimal, LocalDate, etc.).
- SQL snake_case column names are converted to Java camelCase field names (birth_year → birthYear).
Query functions
// Queries.java
package com.example.db;
import java.sql.*;
import java.util.*;
public final class Queries {
public static Optional<Author> getAuthor(Connection conn, long id)
throws SQLException { … }
public static List<Author> listAuthors(Connection conn)
throws SQLException { … }
public static Optional<Author> createAuthor(Connection conn,
String name, String bio, Integer birthYear)
throws SQLException { … }
public static void deleteAuthor(Connection conn, long id)
throws SQLException { … }
public static long countAuthors(Connection conn)
throws SQLException { … } // :execrows
}
The SQL constant is emitted as a private static final String SQL_… field inside
the class. Each function prepares the statement, binds parameters, executes, and
maps the result — all in a single, self-contained method body.
Querier wrapper
Querier.java wraps a DataSource and opens a new connection per call:
// Querier.java
package com.example.db;
import javax.sql.DataSource;
import java.sql.*;
import java.util.*;
public final class Querier {
private final DataSource ds;
public Querier(DataSource ds) { this.ds = ds; }
public Optional<Author> getAuthor(long id) throws SQLException {
try (var conn = ds.getConnection()) {
return Queries.getAuthor(conn, id);
}
}
public List<Author> listAuthors() throws SQLException {
try (var conn = ds.getConnection()) {
return Queries.listAuthors(conn);
}
}
// …
}
Wiring up
Maven dependency
<!-- pom.xml — PostgreSQL driver -->
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>42.7.3</version>
</dependency>
<!-- SQLite -->
<dependency>
<groupId>org.xerial</groupId>
<artifactId>sqlite-jdbc</artifactId>
<version>3.47.0.0</version>
</dependency>
<!-- MySQL -->
<dependency>
<groupId>com.mysql</groupId>
<artifactId>mysql-connector-j</artifactId>
<version>9.1.0</version>
</dependency>
Using a plain Connection
import java.sql.Connection;
import java.sql.DriverManager;
import com.example.db.Author;
import com.example.db.Queries;
Connection conn = DriverManager.getConnection(
"jdbc:postgresql://localhost:5432/mydb", "user", "pass");
Optional<Author> author = Queries.getAuthor(conn, 1L);
List<Author> all = Queries.listAuthors(conn);
Using the Querier wrapper (connection pool)
import com.zaxxer.hikari.HikariDataSource;
import com.example.db.Querier;
var ds = new HikariDataSource();
ds.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb");
ds.setUsername("user");
ds.setPassword("pass");
var q = new Querier(ds);
Optional<Author> author = q.getAuthor(1L);
List<Author> all = q.listAuthors();
Inline row types
When a query result does not match any single table (JOINs, partial RETURNING, etc.),
sqltgen emits an inline record inside Queries.java:
public final class Queries {
public record ListBooksWithAuthorRow(
long id,
String title,
String genre,
BigDecimal price,
String authorName,
String authorBio
) {}
public static List<ListBooksWithAuthorRow> listBooksWithAuthor(Connection conn)
throws SQLException { … }
}
Naming conventions
| SQL | Java |
|---|---|
| get_author | getAuthor |
| list_books_by_genre | listBooksByGenre |
| birth_year column | birthYear field |
| Author table | Author record |
Java version requirements
The generated code requires Java 16+ for records. On Java 8–15 you would need to replace the record declarations with conventional classes providing constructors and accessor methods.
Kotlin
sqltgen generates JDBC-based Kotlin code using the standard java.sql package.
Generated code uses Kotlin idioms: data classes, nullable types, and Kotlin objects.
Configuration
"kotlin": {
"out": "src/main/kotlin",
"package": "com.example.db"
}
| Field | Description |
|---|---|
| out | Output root directory. |
| package | Kotlin package name. |
| list_params | "native" (default) or "dynamic". |
What is generated
src/main/kotlin/com/example/db/
Author.kt — data class for the author table
Book.kt — data class for the book table
Queries.kt — Kotlin object with query functions
Querier.kt — DataSource-backed wrapper object
Model data classes
// Author.kt
package com.example.db
data class Author(
val id: Long,
val name: String,
val bio: String?, // nullable → T?
val birthYear: Int? // nullable → T?
)
- Non-null columns use non-nullable Kotlin types (Long, Int, Boolean, etc.).
- Nullable columns use Kotlin nullable types (Long?, Int?, String?, etc.).
- snake_case SQL names → camelCase Kotlin property names.
Query functions
// Queries.kt
package com.example.db
import java.sql.Connection
object Queries {
fun getAuthor(conn: Connection, id: Long): Author? { … }
fun listAuthors(conn: Connection): List<Author> { … }
fun createAuthor(conn: Connection, name: String, bio: String?,
birthYear: Int?): Author? { … }
fun deleteAuthor(conn: Connection, id: Long): Unit { … }
fun countAuthors(conn: Connection): Long { … } // :execrows
}
Querier wrapper
// Querier.kt
package com.example.db
import javax.sql.DataSource
class Querier(private val ds: DataSource) {
fun getAuthor(id: Long): Author? =
ds.connection.use { Queries.getAuthor(it, id) }
fun listAuthors(): List<Author> =
ds.connection.use { Queries.listAuthors(it) }
// …
}
Wiring up
Gradle dependency
// build.gradle.kts — PostgreSQL driver
dependencies {
implementation("org.postgresql:postgresql:42.7.3")
// SQLite:
// implementation("org.xerial:sqlite-jdbc:3.47.0.0")
// MySQL:
// implementation("com.mysql:mysql-connector-j:9.1.0")
}
Using a plain Connection
import java.sql.DriverManager
import com.example.db.Queries
val conn = DriverManager.getConnection(
"jdbc:postgresql://localhost:5432/mydb", "user", "pass")
val author = Queries.getAuthor(conn, 1L)
val all = Queries.listAuthors(conn)
Using the Querier wrapper
import com.zaxxer.hikari.HikariDataSource
import com.example.db.Querier
val ds = HikariDataSource().apply {
jdbcUrl = "jdbc:postgresql://localhost:5432/mydb"
username = "user"
password = "pass"
}
val q = Querier(ds)
val author = q.getAuthor(1L)
Inline row types
object Queries {
data class ListBooksWithAuthorRow(
val id: Long,
val title: String,
val genre: String,
val price: java.math.BigDecimal,
val authorName: String,
val authorBio: String?
)
fun listBooksWithAuthor(conn: Connection): List<ListBooksWithAuthorRow> { … }
}
Naming conventions
| SQL | Kotlin |
|---|---|
| get_author | getAuthor |
| birth_year column | birthYear property |
| Author table | Author data class |
Rust
sqltgen generates async Rust code using sqlx.
Model structs derive sqlx::FromRow. Query functions are async fn returning
Result<…, sqlx::Error>.
Configuration
"rust": {
"out": "src/db",
"package": ""
}
| Field | Description |
|---|---|
| out | Output directory (relative to the project root). |
| package | Unused for Rust — set to "". |
| list_params | "native" (default) or "dynamic". |
What is generated
src/db/
mod.rs — re-exports all modules
author.rs — Author struct with sqlx::FromRow
book.rs — Book struct
_sqltgen.rs — shared helper trait (SqlxAdapter)
queries.rs — async query functions (single-file form)
users.rs — per-group query files (grouped form)
Model structs
// src/db/author.rs
#[derive(Debug, sqlx::FromRow)]
pub struct Author {
    pub id: i64,
    pub name: String,
    pub bio: Option<String>,
    pub birth_year: Option<i32>,
}
- Non-null columns → bare type (i64, String, bool, …).
- Nullable columns → Option<T>.
- snake_case SQL names → snake_case Rust field names (unchanged).
Query functions
// src/db/queries.rs
use sqlx::PgPool;
use super::author::Author;

pub async fn get_author(pool: &PgPool, id: i64)
    -> Result<Option<Author>, sqlx::Error>
{
    sqlx::query_as::<_, Author>(SQL_GET_AUTHOR)
        .bind(id)
        .fetch_optional(pool)
        .await
}

pub async fn list_authors(pool: &PgPool) -> Result<Vec<Author>, sqlx::Error> {
    sqlx::query_as::<_, Author>(SQL_LIST_AUTHORS)
        .fetch_all(pool)
        .await
}

pub async fn delete_author(pool: &PgPool, id: i64) -> Result<(), sqlx::Error> {
    sqlx::query(SQL_DELETE_AUTHOR)
        .bind(id)
        .execute(pool)
        .await
        .map(|_| ())
}

pub async fn count_authors(pool: &PgPool) -> Result<u64, sqlx::Error> {
    sqlx::query(SQL_COUNT_AUTHORS)
        .execute(pool)
        .await
        .map(|r| r.rows_affected())
}
_sqltgen.rs helper
The generated _sqltgen.rs file provides a shared SqlxAdapter trait that
abstracts over the three pool types. This is an implementation detail — you do
not need to call it directly.
mod.rs
pub mod author;
pub mod book;
pub mod queries;
mod _sqltgen;
Wiring up
Cargo.toml
[dependencies]
sqlx = { version = "0.8", features = [
"runtime-tokio",
"postgres", # or "sqlite" / "mysql"
"time", # for Date, Time, Timestamp, TimestampTz
"uuid", # for UUID
"rust_decimal", # for NUMERIC / DECIMAL
] }
tokio = { version = "1", features = ["full"] }
Using the pool directly
mod db;

use db::queries;

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    let pool = sqlx::PgPool::connect("postgres://user:pass@localhost/mydb").await?;
    let author = queries::get_author(&pool, 1).await?;
    let all = queries::list_authors(&pool).await?;
    Ok(())
}
Pool types per dialect
| Dialect | Pool type |
|---|---|
| PostgreSQL | sqlx::PgPool |
| SQLite | sqlx::SqlitePool |
| MySQL | sqlx::MySqlPool |
Inline row types
JOIN queries or partial RETURNING queries produce additional structs in the query file:
#[derive(Debug, sqlx::FromRow)]
pub struct ListBooksWithAuthorRow {
    pub id: i64,
    pub title: String,
    pub genre: String,
    pub price: rust_decimal::Decimal,
    pub author_name: String,
    pub author_bio: Option<String>,
}

pub async fn list_books_with_author(pool: &PgPool)
    -> Result<Vec<ListBooksWithAuthorRow>, sqlx::Error> { … }
List parameters
For PostgreSQL, list parameters use = ANY($1) with a slice bind:
// generated
pub async fn get_books_by_ids(pool: &PgPool, ids: &[i64])
    -> Result<Vec<Book>, sqlx::Error>
{
    sqlx::query_as::<_, Book>(SQL_GET_BOOKS_BY_IDS)
        .bind(ids)
        .fetch_all(pool)
        .await
}
For SQLite and MySQL, the native strategy uses json_each / JSON_TABLE
with a JSON-serialized string:
pub async fn get_books_by_ids(pool: &SqlitePool, ids: &[i64])
    -> Result<Vec<Book>, sqlx::Error>
{
    let ids_json = serde_json::to_string(ids).unwrap();
    sqlx::query_as::<_, Book>(SQL_GET_BOOKS_BY_IDS)
        .bind(ids_json)
        .fetch_all(pool)
        .await
}
Naming conventions
| SQL | Rust |
|---|---|
| GetAuthor | get_author |
| ListBooksWithAuthor | list_books_with_author |
| birth_year column | birth_year field |
| Author table | Author struct |
Python
sqltgen generates Python code using standard database drivers: psycopg (psycopg3)
for PostgreSQL, sqlite3 (stdlib) for SQLite, and mysql-connector-python for MySQL.
Model types are @dataclass classes. Functions are plain synchronous functions.
Configuration
"python": {
"out": "gen",
"package": ""
}
| Field | Description |
|---|---|
| out | Output directory. |
| package | Unused for Python — set to "". |
| list_params | "native" (default) or "dynamic". |
What is generated
gen/
__init__.py — barrel import
author.py — Author dataclass
book.py — Book dataclass
queries.py — query functions (single-file form)
users.py — per-group query file (grouped form)
Model dataclasses
# gen/author.py
from __future__ import annotations
import dataclasses
@dataclasses.dataclass
class Author:
id: int
name: str
bio: str | None
birth_year: int | None
- Non-null columns → bare type (int, str, bool, float, …).
- Nullable columns → T | None.
- snake_case SQL names → snake_case Python field names (unchanged).
Query functions
# gen/queries.py
import psycopg
from .author import Author
SQL_GET_AUTHOR = "SELECT id, name, bio, birth_year FROM author WHERE id = %s"
SQL_LIST_AUTHORS = "SELECT id, name, bio, birth_year FROM author ORDER BY name"
SQL_CREATE_AUTHOR = "INSERT INTO author (name, bio, birth_year) VALUES (%s, %s, %s) RETURNING *"
SQL_DELETE_AUTHOR = "DELETE FROM author WHERE id = %s"
def get_author(conn: psycopg.Connection, id: int) -> Author | None:
with conn.cursor() as cur:
cur.execute(SQL_GET_AUTHOR, (id,))
row = cur.fetchone()
if row is None:
return None
return Author(*row)
def list_authors(conn: psycopg.Connection) -> list[Author]:
with conn.cursor() as cur:
cur.execute(SQL_LIST_AUTHORS)
return [Author(*row) for row in cur.fetchall()]
def create_author(conn: psycopg.Connection,
name: str, bio: str | None,
birth_year: int | None) -> Author | None:
with conn.cursor() as cur:
cur.execute(SQL_CREATE_AUTHOR, (name, bio, birth_year))
row = cur.fetchone()
if row is None:
return None
return Author(*row)
def delete_author(conn: psycopg.Connection, id: int) -> None:
with conn.cursor() as cur:
cur.execute(SQL_DELETE_AUTHOR, (id,))
def count_authors(conn: psycopg.Connection) -> int:
with conn.cursor() as cur:
cur.execute(SQL_COUNT_AUTHORS)
return cur.rowcount # :execrows
Results are unpacked positionally (Author(*row)), so the column order in the
query must match the dataclass field order.
Querier wrapper
# gen/queries.py (excerpt)
import contextlib
from typing import Callable
class Querier:
def __init__(self, connect: Callable[[], psycopg.Connection]) -> None:
self._connect = connect
def get_author(self, id: int) -> Author | None:
with contextlib.closing(self._connect()) as conn:
return get_author(conn, id)
def list_authors(self) -> list[Author]:
with contextlib.closing(self._connect()) as conn:
return list_authors(conn)
Wiring up
PostgreSQL
pip install psycopg
import psycopg
from gen.queries import get_author, list_authors
with psycopg.connect("postgresql://user:pass@localhost/mydb") as conn:
author = get_author(conn, 1)
all_authors = list_authors(conn)
conn.commit()
SQLite (stdlib — no install needed)
import sqlite3
from gen.queries import get_author, list_authors
conn = sqlite3.connect("mydb.db")
author = get_author(conn, 1)
all_authors = list_authors(conn)
MySQL
pip install mysql-connector-python
import mysql.connector
from gen.queries import get_author, list_authors
conn = mysql.connector.connect(
host="localhost", database="mydb",
user="user", password="pass")
author = get_author(conn, 1)
all_authors = list_authors(conn)
Using the Querier wrapper
from gen.queries import Querier
import psycopg
q = Querier(lambda: psycopg.connect("postgresql://user:pass@localhost/mydb"))
author = q.get_author(1)
JSON columns
| Driver | JSON column Python type |
|---|---|
| psycopg3 | object — psycopg3 automatically deserializes JSON |
| sqlite3 | str — returns the raw JSON string |
| mysql-connector | str — returns the raw JSON string |
Application code is responsible for parsing the string in the sqlite3 and mysql-connector cases.
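For the sqlite3 and mysql-connector cases, the parsing step is a one-liner with the stdlib json module (the metadata value below is illustrative):

```python
import json

# sqlite3 / mysql-connector hand back the JSON column as a raw string ...
raw = '{"tags": ["fiction", "classic"], "pages": 328}'
# ... which application code parses explicitly.
metadata = json.loads(raw)
assert metadata["pages"] == 328
assert "fiction" in metadata["tags"]
```

With psycopg3 this step is unnecessary, since the driver deserializes JSON columns itself.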
Naming conventions
| SQL | Python |
|---|---|
| GetAuthor | get_author |
| ListBooksWithAuthor | list_books_with_author |
| birth_year column | birth_year field |
| Author table | Author class |
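The query-name conversion is the usual PascalCase-to-snake_case split. A rough sketch of the rule (not sqltgen's actual implementation):

```python
import re

def to_snake_case(name: str) -> str:
    # Insert an underscore before each interior capital, then lowercase.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

assert to_snake_case("GetAuthor") == "get_author"
assert to_snake_case("ListBooksWithAuthor") == "list_books_with_author"
```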
Python version requirements
The generated code uses PEP 604 union syntax (str | None) and from __future__ import annotations. Python 3.10+ is required for runtime evaluation of these
annotations. Python 3.11+ is recommended.
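A small demonstration (not generated code) of what the future import does: it keeps every annotation as an unevaluated string, and it is the later evaluation of the PEP 604 syntax that requires Python 3.10+.

```python
from __future__ import annotations  # PEP 563: store annotations as strings
import dataclasses

@dataclasses.dataclass
class Author:
    bio: str | None  # kept as the literal text "str | None"

# The annotation is not evaluated at class-creation time ...
assert Author.__annotations__["bio"] == "str | None"
# ... so evaluating it later (as typing.get_type_hints or a type checker
# would) is what requires Python 3.10+ at runtime.
```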
TypeScript
sqltgen generates TypeScript code with full type annotations. Database access uses pg for PostgreSQL, better-sqlite3 for SQLite, and mysql2 for MySQL.
Note: better-sqlite3 is synchronous. Generated SQLite functions do not return a Promise; they return values directly.
Configuration
"typescript": {
"out": "src/db",
"package": ""
}
| Field | Description |
|---|---|
| out | Output directory. |
| package | Unused for TypeScript — set to "". |
| list_params | "native" (default) or "dynamic". |
What is generated
src/db/
index.ts — barrel export
author.ts — Author interface
book.ts — Book interface
_sqltgen.ts — shared SqltgenAdapter type
queries.ts — async query functions + Querier class
Model interfaces
// src/db/author.ts
export interface Author {
id: number;
name: string;
bio: string | null;
birth_year: number | null;
}
- Non-null columns → bare type.
- Nullable columns → T | null.
- Column names are kept as snake_case to match the database column names exactly.
Query functions
// src/db/queries.ts
import type { ClientBase } from 'pg';
import type { Author } from './author';
const SQL_GET_AUTHOR = `SELECT id, name, bio, birth_year FROM author WHERE id = $1`;
const SQL_LIST_AUTHORS = `SELECT id, name, bio, birth_year FROM author ORDER BY name`;
const SQL_DELETE_AUTHOR = `DELETE FROM author WHERE id = $1`;
export async function getAuthor(db: ClientBase, id: number): Promise<Author | null> {
const result = await db.query<Author>(SQL_GET_AUTHOR, [id]);
return result.rows[0] ?? null;
}
export async function listAuthors(db: ClientBase): Promise<Author[]> {
const result = await db.query<Author>(SQL_LIST_AUTHORS);
return result.rows;
}
export async function deleteAuthor(db: ClientBase, id: number): Promise<void> {
await db.query(SQL_DELETE_AUTHOR, [id]);
}
export async function countAuthors(db: ClientBase): Promise<number> {
const result = await db.query(SQL_COUNT_AUTHORS);
return result.rowCount ?? 0; // :execrows
}
Querier class
export class Querier {
constructor(private connect: () => ClientBase | Promise<ClientBase>) {}
async getAuthor(id: number): Promise<Author | null> {
const db = await this.connect();
try {
return await getAuthor(db, id);
} finally {
if ('end' in db) await (db as any).end();
}
}
// …
}
Wiring up
PostgreSQL
npm install pg
npm install --save-dev @types/pg
import { Client } from 'pg';
import { getAuthor, listAuthors } from './src/db/queries';
const client = new Client({
connectionString: 'postgres://user:pass@localhost/mydb'
});
await client.connect();
const author = await getAuthor(client, 1);
const all = await listAuthors(client);
await client.end();
SQLite
npm install better-sqlite3
npm install --save-dev @types/better-sqlite3
import Database from 'better-sqlite3';
import { getAuthor, listAuthors } from './src/db/queries';
const db = new Database('mydb.db');
// SQLite functions are synchronous — no await needed
const author = getAuthor(db, 1);
const all = listAuthors(db);
MySQL
npm install mysql2
import mysql from 'mysql2/promise';
import { getAuthor } from './src/db/queries';
const conn = await mysql.createConnection({
host: 'localhost', database: 'mydb',
user: 'user', password: 'pass'
});
const author = await getAuthor(conn, 1);
await conn.end();
Inline row types
export interface ListBooksWithAuthorRow {
id: number;
title: string;
genre: string;
price: number;
author_name: string;
author_bio: string | null;
}
export async function listBooksWithAuthor(db: ClientBase): Promise<ListBooksWithAuthorRow[]> { … }
Naming conventions
| SQL | TypeScript |
|---|---|
| GetAuthor | getAuthor (function) |
| ListBooksWithAuthor | listBooksWithAuthor |
| birth_year column | birth_year (interface field — unchanged) |
| Author table | Author interface |
JavaScript
sqltgen generates JavaScript code with JSDoc type annotations. It uses the same database drivers as the TypeScript backend: pg, better-sqlite3, and mysql2.
The generated JavaScript is functionally identical to the TypeScript output, but
replaces inline TypeScript syntax with JSDoc comments (@typedef, @param,
@returns).
Configuration
"javascript": {
"out": "src/db",
"package": ""
}
| Field | Description |
|---|---|
| out | Output directory. |
| package | Unused for JavaScript — set to "". |
| list_params | "native" (default) or "dynamic". |
What is generated
src/db/
index.js — barrel export
author.js — Author typedef
book.js — Book typedef
_sqltgen.js — shared adapter typedef
queries.js — async query functions + Querier class
Model typedefs
// src/db/author.js
/**
* @typedef {Object} Author
* @property {number} id
* @property {string} name
* @property {string | null} bio
* @property {number | null} birth_year
*/
Query functions
// src/db/queries.js
const SQL_GET_AUTHOR = `SELECT id, name, bio, birth_year FROM author WHERE id = $1`;
const SQL_LIST_AUTHORS = `SELECT id, name, bio, birth_year FROM author ORDER BY name`;
/**
* @param {import('pg').ClientBase} db
* @param {number} id
* @returns {Promise<Author | null>}
*/
export async function getAuthor(db, id) {
const result = await db.query(SQL_GET_AUTHOR, [id]);
return result.rows[0] ?? null;
}
/**
* @param {import('pg').ClientBase} db
* @returns {Promise<Author[]>}
*/
export async function listAuthors(db) {
const result = await db.query(SQL_LIST_AUTHORS);
return result.rows;
}
Querier class
export class Querier {
/** @param {() => import('pg').ClientBase | Promise<import('pg').ClientBase>} connect */
constructor(connect) {
this._connect = connect;
}
/** @returns {Promise<Author | null>} */
async getAuthor(id) {
const db = await this._connect();
try {
return await getAuthor(db, id);
} finally {
if (typeof db.end === 'function') await db.end();
}
}
}
Wiring up
PostgreSQL
npm install pg
import { Client } from 'pg';
import { getAuthor, listAuthors } from './src/db/queries.js';
const client = new Client({
connectionString: 'postgres://user:pass@localhost/mydb'
});
await client.connect();
const author = await getAuthor(client, 1);
const all = await listAuthors(client);
await client.end();
SQLite
npm install better-sqlite3
import Database from 'better-sqlite3';
import { getAuthor } from './src/db/queries.js';
const db = new Database('mydb.db');
const author = getAuthor(db, 1); // synchronous
MySQL
npm install mysql2
import mysql from 'mysql2/promise';
import { getAuthor } from './src/db/queries.js';
const conn = await mysql.createConnection({
host: 'localhost', database: 'mydb',
user: 'user', password: 'pass'
});
const author = await getAuthor(conn, 1);
Differences from TypeScript
| TypeScript | JavaScript |
|---|---|
| .ts file extension | .js file extension |
| Inline interface types | @typedef JSDoc comments |
| Inline parameter types | @param JSDoc annotations |
| index.ts barrel | index.js barrel |
All drivers, driver setup, and query function semantics are identical. See the TypeScript guide for more detail.
Naming conventions
Identical to TypeScript: function names are camelCase, interface field names
follow snake_case database column names.
Go
sqltgen generates Go code using the standard database/sql package. Functions
take a context.Context and a *sql.DB and return idiomatic (T, error) pairs.
Configuration
"go": {
"out": "db",
"package": "db"
}
| Field | Description |
|---|---|
| out | Output directory. |
| package | Go package name declared at the top of each generated file. |
| list_params | "native" (default) or "dynamic". |
What is generated
db/
mod.go — package declaration
author.go — Author struct
book.go — Book struct
sqltgen.go — shared helpers (execRows, buildInClause, scanArray)
queries.go — query functions
Model structs
// db/author.go
package db
import "database/sql"
// Author represents a row from the author table.
type Author struct {
Id int64
Name string
Bio sql.NullString
BirthYear sql.NullInt32
}
- Non-null columns use bare Go types (int64, string, bool, …).
- Nullable columns use sql.Null* types (sql.NullString, sql.NullInt32, sql.NullInt64, sql.NullFloat64, sql.NullBool, sql.NullTime).
- Field names are PascalCase (e.g. birth_year → BirthYear).
Query functions
// db/queries.go
package db
import (
"context"
"database/sql"
)
// GetAuthor executes the GetAuthor query.
func GetAuthor(ctx context.Context, db *sql.DB, id int64) (*Author, error) {
row := db.QueryRowContext(ctx, SQL_GET_AUTHOR, id)
var r Author
err := row.Scan(&r.Id, &r.Name, &r.Bio, &r.BirthYear)
if err == sql.ErrNoRows {
return nil, nil
}
if err != nil {
return nil, err
}
return &r, nil
}
// ListAuthors executes the ListAuthors query.
func ListAuthors(ctx context.Context, db *sql.DB) ([]Author, error) {
rows, err := db.QueryContext(ctx, SQL_LIST_AUTHORS)
if err != nil {
return nil, err
}
defer rows.Close()
var results []Author
for rows.Next() {
var r Author
if err := rows.Scan(&r.Id, &r.Name, &r.Bio, &r.BirthYear); err != nil {
return nil, err
}
results = append(results, r)
}
return results, rows.Err()
}
// DeleteAuthor executes the DeleteAuthor query.
func DeleteAuthor(ctx context.Context, db *sql.DB, id int64) error {
_, err := db.ExecContext(ctx, SQL_DELETE_AUTHOR, id)
return err
}
// CountAuthors executes the CountAuthors query.
// :execrows — returns the affected row count.
func CountAuthors(ctx context.Context, db *sql.DB) (int64, error) { … }
Return types by command:
| :one | :many | :exec | :execrows |
|---|---|---|---|
| (*T, error) — nil, nil if no row | ([]T, error) | error | (int64, error) |
Wiring up
go.mod
go get github.com/lib/pq # PostgreSQL
go get modernc.org/sqlite # SQLite (pure Go, no CGo)
go get github.com/go-sql-driver/mysql # MySQL
PostgreSQL
import (
"context"
"database/sql"
_ "github.com/lib/pq"
"yourmodule/db"
)
conn, err := sql.Open("postgres", "postgres://user:pass@localhost/mydb?sslmode=disable")
author, err := db.GetAuthor(context.Background(), conn, 1)
all, err := db.ListAuthors(context.Background(), conn)
SQLite
import (
"context"
"database/sql"
_ "modernc.org/sqlite"
"yourmodule/db"
)
conn, err := sql.Open("sqlite", "mydb.db")
author, err := db.GetAuthor(context.Background(), conn, 1)
MySQL
import (
"context"
"database/sql"
_ "github.com/go-sql-driver/mysql"
"yourmodule/db"
)
conn, err := sql.Open("mysql", "user:pass@tcp(localhost:3306)/mydb")
author, err := db.GetAuthor(context.Background(), conn, 1)
Inline row types
type ListBooksWithAuthorRow struct {
Id int64
Title string
Genre string
Price float64
AuthorName string
AuthorBio sql.NullString
}
func ListBooksWithAuthor(ctx context.Context, db *sql.DB) ([]ListBooksWithAuthorRow, error) { … }
List parameters
For SQLite and MySQL the native strategy serializes the list to a JSON string
and uses json_each / JSON_TABLE. For PostgreSQL it uses pq.Array:
// SQLite / MySQL
func GetBooksByIds(ctx context.Context, db *sql.DB, ids []int64) ([]Book, error) {
idsJSON, _ := json.Marshal(ids)
rows, err := db.QueryContext(ctx, SQL_GET_BOOKS_BY_IDS, string(idsJSON))
…
}
// PostgreSQL
import "github.com/lib/pq"
func GetBooksByIds(ctx context.Context, db *sql.DB, ids []int64) ([]Book, error) {
rows, err := db.QueryContext(ctx, SQL_GET_BOOKS_BY_IDS, pq.Array(ids))
…
}
Naming conventions
| SQL | Go |
|---|---|
| GetAuthor | GetAuthor (PascalCase — exported) |
| ListBooksWithAuthor | ListBooksWithAuthor |
| birth_year column | BirthYear field |
| Author table | Author struct |
Contributing
Thank you for your interest in contributing to sqltgen.
The full contributing guide — build instructions, test suite, code style, and step-by-step walkthroughs for adding new backends and dialects — is in CONTRIBUTING.md in the repository root.
The detailed technical reference — architecture overview, IR data model, and guides for adding new backends, dialects, and examples — is in docs/contributor-guide.md.
Quick reference
# Clone and build
git clone https://github.com/sqltgen/sqltgen.git
cd sqltgen
cargo build
# Run all tests
cargo test
# Format and lint
cargo fmt
cargo clippy -- -D warnings
Opening a pull request
- Fork the repository and create a feature branch.
- Make your changes; run cargo fmt and cargo clippy.
- Add or update tests to cover the change.
- Open a PR with a clear description of what changes and why.
Keep PRs focused — one logical change per PR makes review much easier.
Questions?
Open an issue or start a discussion on the GitHub repository.