Introduction

sqltgen is a multi-language SQL-to-code generator. You write standard SQL — a schema (DDL) and annotated query files — and sqltgen emits fully typed, idiomatic database access code in Java, Kotlin, Rust, Go, Python, TypeScript, and JavaScript.

No ORM. No reflection. No runtime query building. Just your SQL, compiled to code.

sqltgen generate

What it does

  1. Parse your schema — sqltgen reads CREATE TABLE and ALTER TABLE statements and builds an in-memory type model of every table and column.

  2. Analyse your queries — each annotated query is parsed and its parameter types and result column types are inferred from the schema. No guessing; the inference is derived from the actual SQL.

  3. Emit typed code — for every configured target language, sqltgen writes source files containing model types (one per table) and typed query functions.

The generated code is ready to use with standard database drivers — JDBC for Java/Kotlin, sqlx for Rust, psycopg3/sqlite3/mysql-connector for Python, pg / better-sqlite3 / mysql2 for TypeScript/JavaScript.

Why sqltgen?

You already wrote the SQL. ORMs ask you to translate your mental model into a different API; sqltgen lets you keep writing SQL and adds type safety around it.

It's not another dependency — it's your code. The generated files are plain source code that belongs to your project. They use only your language's standard database driver (JDBC, sqlx, psycopg3, pg, etc.) — there is no sqltgen-runtime package, no framework sitting in your import path. If you stop using sqltgen tomorrow, your code still compiles and runs. You can read it, debug it, even hand-edit it. Stack traces go through your code and your driver, not through a library you don't control. Upgrading sqltgen is just re-running the generator and reviewing the diff — there is no runtime migration.

The generated code is readable. Output looks like code a competent developer wrote by hand: idiomatic function names, proper nullability, correct types. There are no codegen artifacts to work around.

No live database required — and fast because of it. sqltgen analyses your DDL statically. There is nothing to connect, migrate, or seed before you can generate. This also makes it genuinely fast: analysis and codegen are pure in-memory computation, so generation completes in milliseconds regardless of schema size. In CI, that means a single binary invocation with no infrastructure setup — no database service container, no network, no health-check polling.

All the SQL features you use. JOINs, CTEs, subqueries, aggregates, UNION, RETURNING, list parameters — all supported. Unknown constructs are handled gracefully rather than failing the whole build.

Comparison to sqlc

sqltgen is inspired by sqlc and shares its core philosophy: write SQL, get typed functions. Both tools perform static analysis from DDL files and neither requires a running database. The main design difference is in how language targets are delivered:

                         sqltgen                                  sqlc
Implementation language  Rust                                     Go
Supported targets        Java, Kotlin, Rust, Go, Python,          Go, Python, TypeScript
                         TypeScript, JavaScript                   (+ community plugins)
Supported dialects       PostgreSQL, SQLite, MySQL                PostgreSQL, MySQL, SQLite
Language targets         Built-in, native per driver              Built-in + WASM plugin system
Config format            sqltgen.json                             sqlc.yaml

sqltgen builds each language target directly into the binary with first-class support for the idiomatic driver in that ecosystem. sqlc's plugin system allows the community to add targets independently. Neither approach is strictly better — they reflect different trade-offs between extensibility and native integration.

Supported targets

Language     Driver
Java         JDBC (java.sql)
Kotlin       JDBC (java.sql)
Rust         sqlx (async)
Python       psycopg3 / sqlite3 / mysql-connector
TypeScript   pg / better-sqlite3 / mysql2
JavaScript   same as TypeScript (JSDoc types)
Go           database/sql

Supported dialects

  • PostgreSQL — full support
  • SQLite — full support
  • MySQL — full support

Installation

Docker (no Rust required)

The quickest way to try sqltgen without installing anything. The sqltgen/sqltgen image on Docker Hub contains only the binary and its runtime — no shell, no package manager.

Run from your project root, mounting the current directory as /workspace:

docker run --rm -v $(pwd):/workspace sqltgen/sqltgen generate --config sqltgen.json

For convenience, wrap it in a shell script and put it on your $PATH:

#!/bin/sh
exec docker run --rm -v "$(pwd):/workspace" sqltgen/sqltgen "$@"

Tags: edge tracks the latest commit on main and is updated on every push. Versioned tags (v0.1.0, etc.) will be added with each stable release.

Build from source

The simplest install method. Requires a Rust stable toolchain.

git clone https://github.com/sqltgen/sqltgen.git
cd sqltgen
cargo build --release

The binary lands at target/release/sqltgen. Copy it anywhere on your PATH:

cp target/release/sqltgen ~/.local/bin/

cargo install

If you have Rust installed, cargo install fetches and compiles sqltgen from crates.io:

cargo install sqltgen

The binary is placed in ~/.cargo/bin/, which is on your PATH if you installed Rust via rustup.

Homebrew (macOS / Linux)

Distribution packages are planned for v0.1.0. This section will be updated when they are available.

brew install sqltgen/tap/sqltgen

curl / pre-built binaries

Pre-built binaries for Linux (x86_64, aarch64), macOS (x86_64, Apple Silicon), and Windows are available on the GitHub releases page.

# Linux x86_64 — replace VERSION with the latest release tag
curl -L https://github.com/sqltgen/sqltgen/releases/download/VERSION/sqltgen-x86_64-unknown-linux-musl.tar.gz \
  | tar xz
chmod +x sqltgen
sudo mv sqltgen /usr/local/bin/

.deb / .rpm packages

Planned for v0.1.0.

# Debian / Ubuntu
sudo dpkg -i sqltgen_VERSION_amd64.deb

# Fedora / RHEL
sudo rpm -i sqltgen-VERSION.x86_64.rpm

Verify the installation

sqltgen --version

Expected output: sqltgen VERSION

Quickstart

This guide walks you from zero to running code in about five minutes. We will build a small bookstore schema, write a few queries, generate Java code, and run it against a PostgreSQL database.

If you prefer a different language, swap "java" for "kotlin", "rust", "go", "python", "typescript", or "javascript" in the config below. The schema and query files stay the same.

1. Install sqltgen

Docker (no Rust required — recommended for trying sqltgen):

docker run --rm -v $(pwd):/workspace sqltgen/sqltgen generate --config sqltgen.json

cargo install (if you have Rust):

cargo install sqltgen

Build from source:

git clone https://github.com/sqltgen/sqltgen.git
cd sqltgen
cargo build --release
# binary at target/release/sqltgen

See the installation guide for all options including pre-built binaries and Homebrew.

2. Write your schema

Create schema.sql:

CREATE TABLE author (
    id         BIGSERIAL    PRIMARY KEY,
    name       TEXT         NOT NULL,
    bio        TEXT,
    birth_year INTEGER
);

CREATE TABLE book (
    id           BIGSERIAL      PRIMARY KEY,
    author_id    BIGINT         NOT NULL,
    title        TEXT           NOT NULL,
    genre        TEXT           NOT NULL,
    price        NUMERIC(10, 2) NOT NULL,
    published_at DATE
);

3. Annotate your queries

Create queries.sql:

-- name: GetAuthor :one
SELECT id, name, bio, birth_year
FROM author
WHERE id = @id;

-- name: ListAuthors :many
SELECT id, name, bio, birth_year
FROM author
ORDER BY name;

-- name: CreateAuthor :one
INSERT INTO author (name, bio, birth_year)
VALUES (@name, @bio, @birth_year)
RETURNING *;

-- name: DeleteAuthor :exec
DELETE FROM author WHERE id = @id;

-- name: GetBooksByAuthor :many
SELECT id, author_id, title, genre, price, published_at
FROM book
WHERE author_id = @author_id
ORDER BY title;

Each -- name: QueryName :command annotation names the query and tells sqltgen what kind of result to expect:

  • :one — return a single optional row
  • :many — return all rows as a list
  • :exec — execute and return nothing
  • :execrows — execute and return the affected row count

4. Create the config

Create sqltgen.json:

{
  "version": "1",
  "engine": "postgresql",
  "schema": "schema.sql",
  "queries": "queries.sql",
  "gen": {
    "java": { "out": "gen", "package": "com.example.db" }
  }
}

5. Generate

sqltgen generate

sqltgen writes the following files under gen/com/example/db/:

gen/com/example/db/
  Author.java        — record type for the author table
  Book.java          — record type for the book table
  Queries.java       — static query functions
  Querier.java       — DataSource-backed wrapper

6. Use the generated code

Add the PostgreSQL JDBC driver to your project:

<!-- pom.xml -->
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.7.3</version>
</dependency>

Then call the generated functions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.List;
import java.util.Optional;
import com.example.db.Author;
import com.example.db.Queries;

Connection conn = DriverManager.getConnection(
    "jdbc:postgresql://localhost:5432/mydb", "user", "pass");

// Insert a new author
Optional<Author> author = Queries.createAuthor(conn,
    "Ursula K. Le Guin", "American author of science fiction", 1929);

// Fetch by ID
Optional<Author> found = Queries.getAuthor(conn, author.get().id());

// List all authors
List<Author> all = Queries.listAuthors(conn);

// Delete
Queries.deleteAuthor(conn, author.get().id());

7. What was generated

Open gen/com/example/db/Queries.java. You will find:

  • A private static final String SQL_… constant for each query.
  • A static method for each query, with typed parameters matching the query's named @ parameters, and a return type derived from the :command and the inferred result columns:

public static Optional<Author> getAuthor(Connection conn, long id)
        throws SQLException {
    try (var ps = conn.prepareStatement(SQL_GET_AUTHOR)) {
        ps.setLong(1, id);
        try (var rs = ps.executeQuery()) {
            if (!rs.next()) return Optional.empty();
            return Optional.of(new Author(
                rs.getLong(1),
                rs.getString(2),
                rs.getString(3),
                rs.getObject(4, Integer.class)
            ));
        }
    }
}

No reflection. No runtime overhead. The generated code does exactly what you would write by hand — just without the repetition.

Next steps

Writing your schema

sqltgen reads standard DDL. Point the schema field in sqltgen.json at a single .sql file or a directory of migration files — sqltgen loads them all and applies every statement in order.

Tables

CREATE TABLE author (
    id         BIGSERIAL    PRIMARY KEY,
    name       TEXT         NOT NULL,
    bio        TEXT,                      -- nullable: no NOT NULL → nullable
    birth_year INTEGER
);

CREATE TABLE book (
    id           BIGSERIAL      PRIMARY KEY,
    author_id    BIGINT         NOT NULL,
    title        TEXT           NOT NULL,
    genre        TEXT           NOT NULL,
    price        NUMERIC(10, 2) NOT NULL,
    published_at DATE           -- nullable
);

Nullability rules:

  • A column with NOT NULL is non-null.
  • A column without NOT NULL is nullable.
  • PRIMARY KEY columns are implicitly non-null.

Views

CREATE VIEW is supported. sqltgen infers the column types of a view from its SELECT body using the same type resolver used for query parsing:

CREATE VIEW author_book_count AS
SELECT a.id, a.name, COUNT(b.id) AS book_count
FROM author a
LEFT JOIN book b ON b.author_id = a.id
GROUP BY a.id, a.name;

Views are registered in the schema as first-class entities. You can query them in annotated query files exactly like tables:

-- name: GetAuthorBookCount :one
SELECT id, name, book_count
FROM author_book_count
WHERE id = @id;

Views that reference other views are resolved correctly as long as they are declared after the views they depend on.

DROP VIEW [IF EXISTS] is also supported.

ALTER TABLE

sqltgen applies ALTER TABLE statements in declaration order, so a directory of numbered migration files works exactly as expected:

Supported operations:

  • ADD COLUMN [IF NOT EXISTS]
  • DROP COLUMN [IF EXISTS]
  • RENAME COLUMN … TO …
  • RENAME TO …
  • ALTER COLUMN … SET/DROP NOT NULL
  • ALTER COLUMN … TYPE
  • ADD [CONSTRAINT …] PRIMARY KEY

DROP TABLE

DROP TABLE [IF EXISTS] removes the table from the schema model. Subsequent queries referencing the dropped table will produce a warning.

What is silently ignored

Statements that do not affect the type model — CREATE INDEX, CREATE FUNCTION, CREATE TRIGGER, CREATE SEQUENCE, COMMENT ON, etc. — are skipped without error. This lets you point sqltgen at a real migration directory without curating the SQL.

Migration directory example

migrations/
  001_create_authors.sql
  002_create_books.sql
  003_add_genre_to_books.sql
  004_create_sales.sql

"schema": "migrations/"

sqltgen loads the files in lexicographic order (which matches numeric prefixes) and applies all DDL statements in sequence.
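The ordering rule is a plain lexicographic sort, which is why zero-padded numeric prefixes matter: an unpadded 10_… would sort before 2_…. The sketch below illustrates the rule; it is not sqltgen's implementation:

```java
import java.util.List;

// Lexicographic sort of migration file names. With zero-padded prefixes
// (001, 002, …) lexicographic order equals numeric order.
public class MigrationOrder {
    public static List<String> sorted(List<String> files) {
        return files.stream().sorted().toList();
    }

    public static void main(String[] args) {
        List<String> files = List.of(
            "003_add_genre_to_books.sql",
            "001_create_authors.sql",
            "004_create_sales.sql",
            "002_create_books.sql");
        // Prints the files in migration order, 001 first.
        sorted(files).forEach(System.out::println);
    }
}
```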

If each migration file contains both an "up" and a "down" section, use schema_stop_marker to tell sqltgen where the down section begins:

"schema": "migrations/",
"schema_stop_marker": "-- migrate:down"

Everything from the marker line onward is ignored, so only the up DDL reaches the schema parser. See Migration files with up/down sections for the full list of supported tools.

Configuration

sqltgen is configured by a JSON file, sqltgen.json by default.

sqltgen generate                          # reads sqltgen.json in the current directory
sqltgen generate --config path/to/sqltgen.json
sqltgen generate -c path/to/sqltgen.json

Full reference

{
  "version": "1",
  "engine": "postgresql",
  "schema": "migrations/",
  "queries": "queries.sql",
  "gen": {
    "java":       { "out": "src/main/java",   "package": "com.example.db" },
    "kotlin":     { "out": "src/main/kotlin", "package": "com.example.db" },
    "rust":       { "out": "src/db",          "package": "" },
    "python":     { "out": "gen",             "package": "" },
    "typescript": { "out": "src/db",          "package": "" },
    "javascript": { "out": "src/db",          "package": "" }
  }
}

Top-level fields

Field               Required  Description
version             yes       Must be "1".
engine              yes       SQL dialect. One of "postgresql", "sqlite", "mysql".
schema              yes       Path to a .sql file or a directory. See Schema path.
schema_stop_marker  no        Strip down-migration sections. See Migration files with up/down sections.
default_schema      no        Default schema for unqualified table references. See Default schema.
queries             yes       Query source(s). See Queries field.
gen                 yes       Map of language key → output config. At least one entry required.

Schema path

The schema field accepts:

  • A single file — "schema.sql".

  • A directory — all .sql files in the directory are loaded in lexicographic order. This is ideal for numbered migration files:

    migrations/
      001_create_users.sql
      002_create_posts.sql
      003_add_tags.sql
    
    "schema": "migrations/"
    

    sqltgen applies CREATE TABLE, ALTER TABLE, and DROP TABLE in file order, so the final schema reflects the fully-migrated state.

Statements sqltgen does not recognise (CREATE INDEX, CREATE FUNCTION, etc.) are silently skipped.

Migration files with up/down sections

Some migration tools (dbmate, goose, golang-migrate) store both the "up" and "down" SQL in a single file, separated by a comment marker:

-- migrate:up
CREATE TABLE users (id BIGINT PRIMARY KEY, name TEXT NOT NULL);

-- migrate:down
DROP TABLE users;

Set schema_stop_marker to the down-section marker and sqltgen will discard everything from that line onward in each file:

{
  "schema": "migrations/",
  "schema_stop_marker": "-- migrate:down"
}

Common values by tool:

Tool                          schema_stop_marker
dbmate                        "-- migrate:down"
goose                         "-- +goose Down"
golang-migrate / sql-migrate  "-- +migrate Down"

Omit the field (or set it to null) to read files in full — this is the default and preserves existing behaviour.
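The marker behaviour described above amounts to truncating each file at the first marker line. A minimal sketch, assuming the marker is matched at the start of a line (leading whitespace ignored) — not sqltgen's actual implementation:

```java
// Keep only the "up" section of a migration file: everything from the
// marker line onward is dropped before the SQL reaches the schema parser.
public class StopMarker {
    public static String upSection(String sql, String marker) {
        StringBuilder out = new StringBuilder();
        for (String line : sql.split("\n", -1)) {
            if (line.strip().startsWith(marker)) break; // discard from here on
            out.append(line).append('\n');
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String file = """
            -- migrate:up
            CREATE TABLE users (id BIGINT PRIMARY KEY, name TEXT NOT NULL);

            -- migrate:down
            DROP TABLE users;
            """;
        // Prints only the up DDL; the DROP TABLE never reaches the parser.
        System.out.print(upSection(file, "-- migrate:down"));
    }
}
```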

Default schema

When your DDL uses schema-qualified table names (e.g. CREATE TABLE public.users), unqualified references in queries (e.g. SELECT * FROM users) need to know which schema to match against. Set default_schema to control this:

{
  "engine": "postgresql",
  "default_schema": "public"
}

If omitted, sqltgen uses the engine's conventional default:

Engine      Default
postgresql  "public"
sqlite      "main"
mysql       none (schema = database; specify explicitly if needed)

This also applies to DDL operations: ALTER TABLE users will match public.users when the default schema is "public".

Most projects can omit this field entirely. Set it only if your DDL uses a non-standard schema (e.g. a custom PostgreSQL search_path).

Queries field

The queries field accepts three forms.

Single file

All queries land in one output file per language (Queries.java, queries.ts, etc.):

"queries": "queries.sql"

Array of paths or globs

Each file becomes its own group, named after the file stem: users.sql → the group users, emitted as UsersQueries.java / users.ts.

"queries": ["queries/users.sql", "queries/posts.sql"]

Glob patterns are supported and are sorted lexicographically:

"queries": ["queries/**/*.sql"]

An error is raised if a glob matches no files.

Grouped map

Full control over group names and which files belong to each group. Values can be a single path/glob or an array:

"queries": {
  "users": "queries/users.sql",
  "posts": ["queries/posts/**/*.sql", "queries/extra.sql"]
}

Output file names per group

Group name     Java / Kotlin            Rust / Python / TypeScript / JavaScript
users          UsersQueries.java / .kt  users.rs / users.py / users.ts / users.js
posts          PostsQueries.java / .kt  posts.rs / posts.py / posts.ts / posts.js
(single file)  Queries.java / .kt       queries.rs / queries.py / queries.ts / queries.js

Per-language output config (gen.*)

Field        Required  Description
out          yes       Output root directory. Generated files are written under this path.
package      yes       Package or module name. Empty string "" for languages without packages (Rust, Python, TypeScript, JavaScript). For Java/Kotlin: "com.example.db".
list_params  no        Strategy for list/IN parameters: "native" (default) or "dynamic". See List parameter strategies.

Language keys

Valid keys in the gen map:

Key         Language
java        Java (JDBC)
kotlin      Kotlin (JDBC)
rust        Rust (sqlx)
python      Python (psycopg3 / sqlite3 / mysql-connector)
typescript  TypeScript
javascript  JavaScript (JSDoc types)
go          Go (database/sql)

List parameter strategies

When a query uses -- @ids type[] to pass a list to WHERE id IN (@ids), sqltgen rewrites the SQL in one of two ways.

native (default)

A single bind is used with an engine-native JSON/array expression. The list size does not need to be known at code-generation time.

Engine      Rewritten SQL
PostgreSQL  WHERE id = ANY($1) — native PostgreSQL array
SQLite      WHERE id IN (SELECT value FROM json_each(?)) — JSON array string
MySQL       WHERE id IN (SELECT value FROM JSON_TABLE(?, '$[*]' COLUMNS (value BIGINT PATH '$')))

dynamic

The IN (?,?,…) clause is built at runtime with one placeholder per element. The SQL string is reconstructed on every call.

"java": { "out": "gen", "package": "com.example.db", "list_params": "dynamic" }

Use dynamic when:

  • Your driver does not support the native array/JSON approach.
  • You prefer simple, portable SQL.
  • Lists are always small and performance is not a concern.
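As a sketch of what dynamic expansion means in practice — one placeholder per element, SQL rebuilt per call. The template and method names here are hypothetical, not sqltgen's generated code:

```java
import java.util.Collections;

// Dynamic IN-list expansion: IN (%s) becomes IN (?, ?, …) with one
// placeholder per list element, built at runtime.
public class DynamicInList {
    static final String TEMPLATE =
        "SELECT id, title FROM book WHERE id IN (%s) ORDER BY title";

    public static String expand(int n) {
        String placeholders = String.join(", ", Collections.nCopies(n, "?"));
        return TEMPLATE.formatted(placeholders);
    }

    public static void main(String[] args) {
        System.out.println(expand(3));
        // SELECT id, title FROM book WHERE id IN (?, ?, ?) ORDER BY title
    }
}
```

The trade-off is visible here: the statement text changes with the list size, so the driver cannot cache one prepared statement across calls — the cost the native strategy avoids.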

Complete example

{
  "version": "1",
  "engine": "postgresql",
  "schema": "migrations/",
  "queries": {
    "users":  "sql/users.sql",
    "posts":  ["sql/posts.sql", "sql/post_tags.sql"],
    "search": "sql/search/**/*.sql"
  },
  "gen": {
    "java": {
      "out": "src/main/java",
      "package": "com.example.db",
      "list_params": "dynamic"
    },
    "typescript": {
      "out": "src/db",
      "package": ""
    }
  }
}

Writing queries

sqltgen reads query files containing standard SQL with a lightweight annotation format. Each query starts with a -- name: comment that gives it a name and a command.

Annotation syntax

-- name: QueryName :command
SELECT …

  • QueryName — PascalCase name used to derive function/method names in all generated backends (converted to camelCase for Java/Kotlin/TypeScript/JavaScript and snake_case for Rust/Python).
  • :command — see Query commands below.
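The case conversions can be sketched as follows — assumed behaviour based on the description above, not sqltgen's own code:

```java
// PascalCase query names become camelCase (Java/Kotlin/TS/JS) or
// snake_case (Rust/Python).
public class NameStyles {
    public static String camel(String pascal) {
        // Lower-case only the first character: GetAuthor -> getAuthor.
        return Character.toLowerCase(pascal.charAt(0)) + pascal.substring(1);
    }

    public static String snake(String pascal) {
        // Insert '_' before each interior capital, then lower-case everything.
        return pascal.replaceAll("(?<=[a-z0-9])([A-Z])", "_$1").toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(camel("GetBooksByAuthor")); // getBooksByAuthor
        System.out.println(snake("GetBooksByAuthor")); // get_books_by_author
    }
}
```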

The annotation must appear immediately before the SQL statement. Any whitespace between the annotation and the statement is ignored.

Multiple queries live in the same file, separated by their annotations:

-- name: GetAuthor :one
SELECT id, name FROM author WHERE id = @id;

-- name: ListAuthors :many
SELECT id, name FROM author ORDER BY name;

Query commands

Command    Return type         Notes
:one       One optional row    Optional<T> / T? / Option<T> / T | None / T | null
:many      All rows as a list  List<T> / Vec<T> / list[T] / T[]
:exec      Nothing             void / Unit / () / None / Promise<void>
:execrows  Affected row count  long / Long / u64 / int / Promise<number>
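As a hedged sketch, the Java column of this table amounts to the mapping below. The shapes are illustrative only — the actual generated signatures also take a Connection and throw SQLException, as shown in the quickstart:

```java
// How each :command shapes the Java return type for a row type T.
public class Commands {
    enum Command { ONE, MANY, EXEC, EXECROWS }

    public static String javaReturnType(Command c, String row) {
        return switch (c) {
            case ONE      -> "Optional<" + row + ">"; // single optional row
            case MANY     -> "List<" + row + ">";     // all rows
            case EXEC     -> "void";                  // no result
            case EXECROWS -> "long";                  // affected row count
        };
    }

    public static void main(String[] args) {
        System.out.println(javaReturnType(Command.ONE, "Author"));  // Optional<Author>
        System.out.println(javaReturnType(Command.MANY, "Author")); // List<Author>
    }
}
```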

Named parameters

Use @param_name in the SQL body. sqltgen infers the parameter type from the context in which it is used — a WHERE clause comparison, a SET assignment, a VALUES row, etc.

-- name: GetBook :one
SELECT id, author_id, title, genre, price, published_at
FROM book
WHERE id = @id;

@id above is compared against the book.id column (BIGINT NOT NULL), so sqltgen infers id: i64 in Rust, id: long in Java, id: int in Python, etc.

Named parameters become named function arguments in every generated backend.

Type and nullability overrides

Above the -- name: annotation, you can add per-parameter annotation lines to override the inferred type or nullability:

-- name: UpdateAuthorBio :exec
-- @bio null
UPDATE author SET bio = @bio WHERE id = @id;

Annotation              Effect
-- @name null           Mark the parameter nullable
-- @name not null       Mark the parameter non-null
-- @name type           Override the type (e.g. -- @published_at date)
-- @name type null      Override type and mark nullable
-- @name type not null  Override type and mark non-null

Valid types follow the SQL type names used in the type mapping reference: bigint, text, boolean, date, timestamp, uuid, etc.

Annotations apply only to the query immediately following them.

Positional parameters

Instead of @name, you can use the engine's native positional placeholders:

  • PostgreSQL: $1, $2, $3, …
  • SQLite: ?1, ?2, ?3, …
  • MySQL: ?, ?, ?, … (positional, in order of appearance)

-- name: ListBooksWithLimit :many
SELECT id, title, genre, price
FROM book
ORDER BY title
LIMIT $1 OFFSET $2;

Positional parameters get auto-generated names (p1, p2, …) in generated function signatures.

Named and positional parameters cannot be mixed in the same query.

List parameters (IN clauses)

To pass a variable-length list to an IN clause, declare the parameter with an array type annotation:

-- name: GetBooksByIds :many
-- @ids bigint[] not null
SELECT id, author_id, title, genre, price, published_at
FROM book
WHERE id IN (@ids)
ORDER BY title;

The generated function accepts a collection (List<Long> in Java, &[i64] in Rust, list[int] in Python, number[] in TypeScript). The SQL is rewritten to the engine's native expression — see List parameter strategies.

RETURNING clauses

RETURNING columns are resolved just like SELECT projections. When RETURNING * targets a single table, sqltgen reuses the existing table model type:

-- name: CreateAuthor :one
INSERT INTO author (name, bio, birth_year)
VALUES (@name, @bio, @birth_year)
RETURNING *;
-- returns: Optional<Author> (reuses Author type)

-- name: DeleteAuthor :one
DELETE FROM author WHERE id = @id
RETURNING id, name;
-- returns: Optional<DeleteAuthorRow> (explicit column list → inline type)

JOINs

Result columns from JOINs are resolved per-table. Columns from the nullable side of an outer join (LEFT/RIGHT/FULL JOIN) are automatically made nullable. When the result spans multiple tables, an inline row type is emitted:

-- name: ListBooksWithAuthor :many
SELECT b.id, b.title, b.genre, b.price,
       a.name AS author_name, a.bio AS author_bio
FROM book b
JOIN author a ON a.id = b.author_id
ORDER BY b.title;
-- emits: ListBooksWithAuthorRow { id, title, genre, price, author_name, author_bio }

CTEs (WITH)

CTE result columns are available to the outer query as if they were tables. Data-modifying CTEs (WITH … DELETE … RETURNING …) are fully supported:

-- name: GetTopSellingBooks :many
WITH book_sales AS (
    SELECT book_id, SUM(quantity) AS units_sold
    FROM sale_item
    GROUP BY book_id
)
SELECT b.id, b.title, b.genre, bs.units_sold
FROM book b
JOIN book_sales bs ON bs.book_id = b.id
ORDER BY bs.units_sold DESC;

-- name: ArchiveOldBooks :many
WITH archived AS (
    DELETE FROM book WHERE published_at < $1 RETURNING id, title
)
SELECT id, title FROM archived ORDER BY title;

Subqueries

Subqueries in WHERE, EXISTS, scalar positions in the SELECT list, and derived tables in FROM are all supported:

-- name: GetBooksNotByAuthor :many
SELECT id, title, genre
FROM book
WHERE author_id NOT IN (SELECT id FROM author WHERE name = $1)
ORDER BY title;

-- name: GetBookWithAuthorName :many
SELECT b.id, b.title,
       (SELECT a.name FROM author a WHERE a.id = b.author_id) AS author_name
FROM book b
ORDER BY b.title;
-- author_name is nullable (scalar subquery → may return NULL)

Aggregates and expressions

COUNT, SUM, MIN, MAX, AVG are all supported:

  • COUNT(*) and COUNT(col) always produce a non-null integer.
  • SUM, MIN, MAX, AVG produce a nullable result if the argument or its source column is nullable, or when the query has a LEFT JOIN.

-- name: CountBooksByGenre :many
SELECT genre, COUNT(*) AS book_count
FROM book
GROUP BY genre
ORDER BY genre;

-- name: GetSaleStats :one
SELECT MIN(quantity) AS min_qty,
       MAX(quantity) AS max_qty,
       SUM(quantity) AS sum_qty
FROM sale_item;

CASE WHEN … END, COALESCE, BETWEEN, LIKE, HAVING, and EXISTS parameters are all inferred from their surrounding context.

UNION / INTERSECT / EXCEPT

Set operations are supported. Result columns are derived from the first branch, and column types are coerced across branches:

-- name: GetAllPeopleNames :many
SELECT name FROM author
UNION
SELECT display_name AS name FROM user_account
ORDER BY name;

DISTINCT and LIMIT / OFFSET

-- name: GetDistinctGenres :many
SELECT DISTINCT genre FROM book ORDER BY genre;

-- name: ListBooksWithLimit :many
SELECT id, title, genre, price FROM book ORDER BY title
LIMIT $1 OFFSET $2;

Table-wildcard reuse

When a query selects all columns from a single table (SELECT * or SELECT t.*) and the join structure means no outer-join nullability is introduced, sqltgen reuses the table's existing model type instead of emitting a per-query row struct:

-- name: GetAllBookFields :many
SELECT b.*
FROM book b
ORDER BY b.id;
-- returns: List<Book>  (not List<GetAllBookFieldsRow>)

Type mapping

Each language guide below shows how SQL types map to host-language types, how nullability is expressed, and which dependencies are required.

Nullability

Nullability is determined from the schema and the query structure:

  • A column with NOT NULL is non-null.
  • A column without NOT NULL is nullable.
  • PRIMARY KEY columns are implicitly non-null.
  • Columns from the nullable side of a LEFT/RIGHT/FULL JOIN are made nullable.
  • Scalar subqueries in the SELECT list produce nullable results.

The sections below show the nullable form alongside the non-null form for each type.


Java

SQL type                   Non-null        Nullable
BOOLEAN                    boolean         Boolean
SMALLINT                   short           Short
INTEGER / INT              int             Integer
BIGINT / BIGSERIAL         long            Long
REAL / FLOAT4              float           Float
DOUBLE PRECISION / FLOAT8  double          Double
NUMERIC / DECIMAL          BigDecimal      BigDecimal
TEXT / VARCHAR / CHAR      String          String
BYTEA                      byte[]          byte[]
DATE                       LocalDate       LocalDate
TIME                       LocalTime       LocalTime
TIMESTAMP                  LocalDateTime   LocalDateTime
TIMESTAMPTZ                OffsetDateTime  OffsetDateTime
INTERVAL                   String          String
UUID                       UUID            UUID
JSON / JSONB               String          String
type[]                     List<T>         List<T>
Unknown                    Object          Object

Reference types (String, BigDecimal, etc.) are nullable by passing null. Primitive types (boolean, int, long, etc.) are boxed when the column is nullable.

The generated code uses only java.sql and standard JDK classes. No extra imports are needed for basic types.
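For concreteness, here is a record plausibly shaped like the generated Author model from the quickstart — the nullable birth_year column becomes the boxed Integer, while the non-null id stays a primitive long. The exact generated shape may differ:

```java
// Illustrative model record: reference types accept null directly;
// primitives are boxed only for nullable columns.
public class Models {
    public record Author(long id, String name, String bio, Integer birthYear) {}

    public static void main(String[] args) {
        // bio is nullable (no NOT NULL in the schema), so null is fine here.
        Author a = new Author(1L, "Ursula K. Le Guin", null, 1929);
        System.out.println(a.name() + " (" + a.birthYear() + ")");
        System.out.println(a.bio() == null);
    }
}
```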


Kotlin

SQL type                   Non-null        Nullable
BOOLEAN                    Boolean         Boolean?
SMALLINT                   Short           Short?
INTEGER / INT              Int             Int?
BIGINT / BIGSERIAL         Long            Long?
REAL / FLOAT4              Float           Float?
DOUBLE PRECISION / FLOAT8  Double          Double?
NUMERIC / DECIMAL          BigDecimal      BigDecimal?
TEXT / VARCHAR / CHAR      String          String?
BYTEA                      ByteArray       ByteArray?
DATE                       LocalDate       LocalDate?
TIME                       LocalTime       LocalTime?
TIMESTAMP                  LocalDateTime   LocalDateTime?
TIMESTAMPTZ                OffsetDateTime  OffsetDateTime?
INTERVAL                   String          String?
UUID                       UUID            UUID?
JSON / JSONB               String          String?
type[]                     List<T>         List<T>?
Unknown                    Any             Any?

Rust

SQL type                   Non-null                 Nullable
BOOLEAN                    bool                     Option<bool>
SMALLINT                   i16                      Option<i16>
INTEGER / INT              i32                      Option<i32>
BIGINT / BIGSERIAL         i64                      Option<i64>
REAL / FLOAT4              f32                      Option<f32>
DOUBLE PRECISION / FLOAT8  f64                      Option<f64>
NUMERIC / DECIMAL          rust_decimal::Decimal    Option<rust_decimal::Decimal>
TEXT / VARCHAR / CHAR      String                   Option<String>
BYTEA / BLOB               Vec<u8>                  Option<Vec<u8>>
DATE                       time::Date               Option<time::Date>
TIME                       time::Time               Option<time::Time>
TIMESTAMP                  time::PrimitiveDateTime  Option<time::PrimitiveDateTime>
TIMESTAMPTZ                time::OffsetDateTime     Option<time::OffsetDateTime>
INTERVAL                   String                   Option<String>
UUID                       uuid::Uuid               Option<uuid::Uuid>
JSON / JSONB               serde_json::Value        Option<serde_json::Value>
type[]                     Vec<T>                   Option<Vec<T>>
Unknown                    serde_json::Value        Option<serde_json::Value>

Enable sqlx features as needed:

sqlx = { version = "0.8", features = [
    "runtime-tokio",
    "postgres",      # or "sqlite" / "mysql"
    "time",          # Date, Time, Timestamp, TimestampTz
    "uuid",          # UUID
    "rust_decimal",  # NUMERIC / DECIMAL
] }

Python

SQL type                         Non-null            Nullable
BOOLEAN                          bool                bool | None
SMALLINT / INTEGER / BIGINT      int                 int | None
REAL / DOUBLE PRECISION          float               float | None
NUMERIC / DECIMAL                decimal.Decimal     decimal.Decimal | None
TEXT / VARCHAR / CHAR            str                 str | None
BYTEA / BLOB                     bytes               bytes | None
DATE                             datetime.date       datetime.date | None
TIME                             datetime.time       datetime.time | None
TIMESTAMP / TIMESTAMPTZ          datetime.datetime   datetime.datetime | None
INTERVAL                         datetime.timedelta  datetime.timedelta | None
UUID                             uuid.UUID           uuid.UUID | None
JSON (psycopg3)                  object              object | None
JSON (sqlite3, mysql-connector)  str                 str | None
type[]                           list[T]             list[T] | None
Unknown                          Any                 Any | None

All date/time, decimal, and uuid types come from the standard library. psycopg3 automatically deserializes JSON columns to Python objects; sqlite3 and mysql-connector return the raw JSON string.


TypeScript

SQL type                               Non-null   Nullable
BOOLEAN                                boolean    boolean | null
SMALLINT / INTEGER / BIGINT            number ⚠️  number | null
REAL / DOUBLE PRECISION / NUMERIC      number     number | null
TEXT / VARCHAR / CHAR                  string     string | null
BYTEA / BLOB                           Buffer     Buffer | null
DATE / TIME / TIMESTAMP / TIMESTAMPTZ  Date       Date | null
INTERVAL                               string     string | null
UUID                                   string     string | null
JSON / JSONB                           unknown    unknown | null
type[]                                 T[]        T[] | null
Unknown                                unknown    unknown | null

⚠️ BIGINT maps to number, which loses precision above 2⁵³. Use BigInt in application code if your IDs or values may exceed this range.

All types are built-in to TypeScript/Node.js — no extra dependencies needed for the type annotations themselves.
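The ⚠️ is worth a concrete demonstration. JavaScript numbers are IEEE-754 doubles, and Java's double exhibits the same cliff at 2⁵³ — adjacent integers become indistinguishable:

```java
// Above 2^53, a 64-bit double can no longer represent every integer:
// 2^53 and 2^53 + 1 round to the same double value.
public class Precision {
    public static void main(String[] args) {
        long max = 1L << 53;                                     // 9007199254740992
        System.out.println((double) max == (double) (max + 1));  // true
    }
}
```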


JavaScript

Type mapping is identical to TypeScript. Types are expressed as JSDoc comments (@typedef, @param, @returns) rather than inline TypeScript syntax, but the underlying types are the same.


Go

SQL type                               Non-null         Nullable
BOOLEAN                                bool             sql.NullBool
SMALLINT / INTEGER                     int32            sql.NullInt32
BIGINT / BIGSERIAL                     int64            sql.NullInt64
REAL / DOUBLE PRECISION / NUMERIC      float64          sql.NullFloat64
TEXT / VARCHAR / CHAR                  string           sql.NullString
BYTEA / BLOB                           []byte           []byte
DATE / TIME / TIMESTAMP / TIMESTAMPTZ  time.Time        sql.NullTime
INTERVAL                               string           sql.NullString
UUID                                   string           sql.NullString
JSON / JSONB                           json.RawMessage  json.RawMessage
type[]                                 []T              []T
Unknown                                interface{}      interface{}

All nullable types use the standard database/sql null wrappers. No extra dependencies are required beyond the driver itself.

Running examples

Runnable example projects for all backends and dialects live in examples/. Each is a self-contained project with its own Makefile containing generate, build, and run targets.

Prerequisites

  • Docker — for PostgreSQL and MySQL examples
  • Java 21 + Maven — for Java and Kotlin examples
  • Go 1.23+ — for Go examples
  • Node 22+ — for TypeScript and JavaScript examples
  • Python 3.11+ — for Python examples
  • Rust stable — for Rust examples (same toolchain used to build sqltgen)

Running a single example

make -C examples/rust/postgresql run
make -C examples/java/sqlite run
make -C examples/python/mysql run
make -C examples/typescript/postgresql run
make -C examples/go/sqlite run

Each example starts its own database container if needed, runs sqltgen generate, compiles the project, executes it against the database, and tears down the container.

Available examples

Language    PostgreSQL  SQLite  MySQL
Java        ✓           ✓       ✓
Kotlin      ✓           ✓       ✓
Rust        ✓           ✓       ✓
Python      ✓           ✓       ✓
TypeScript  ✓           ✓       ✓
JavaScript  ✓           ✓       ✓
Go          ✓           ✓       ✓

Running all examples at once

make run-all

This starts one shared PostgreSQL container and one shared MySQL container, runs all examples against them in sequence, and tears the containers down at the end. Faster than running each example independently.

Example layout

Every example follows the same structure:

examples/{lang}/{dialect}/
  sqltgen.json     — config pointing at schema.sql + queries.sql
  schema.sql       — bookstore DDL
  queries.sql      — annotated queries
  Makefile         — generate / build / run targets
  gen/             — generated code (checked in)
  src/             — application code (or main.py, Main.java, etc.)

The bookstore schema (author, book, sale, sale_item tables) is shared across all examples. It exercises a representative range of features: JOINs, CTEs, aggregates, RETURNING, list parameters, nullable columns, and multiple query types.

See examples/README.md for more detail.

Java

sqltgen generates JDBC-based Java code using only the standard java.sql package. No framework, no reflection, no extra runtime dependencies.

Configuration

"java": {
  "out": "src/main/java",
  "package": "com.example.db"
}
Field        Description
out          Directory under which the package path is created.
package      Java package name. Files are placed at out/com/example/db/.
list_params  "native" (default) or "dynamic". See List parameter strategies.

What is generated

For a schema with author and book tables and a single query file, sqltgen emits:

src/main/java/com/example/db/
  Author.java         — Java record for the author table
  Book.java           — Java record for the book table
  Queries.java        — static query functions
  Querier.java        — DataSource-backed connection-per-call wrapper

When using query grouping, each group produces its own {GroupName}Queries.java and {GroupName}Querier.java.

Model records

// Author.java
package com.example.db;

public record Author(
    long id,
    String name,
    String bio,         // nullable → String (null if absent)
    Integer birthYear   // nullable → boxed Integer
) {}
  • Non-null columns use primitive types (long, int, boolean, double, etc.).
  • Nullable columns use boxed types (Long, Integer, Boolean, Double, etc.) or reference types (String, BigDecimal, LocalDate, etc.).
  • SQL snake_case column names are converted to Java camelCase field names (birth_year → birthYear).
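The snake_case → camelCase conversion described in the last bullet can be sketched with a small helper (a hypothetical illustration, not sqltgen's actual implementation):

```python
# Hypothetical sketch of the snake_case -> camelCase rename sqltgen applies
# to column and query names; not the generator's real code.
def to_camel_case(name: str) -> str:
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

print(to_camel_case("birth_year"))           # birthYear
print(to_camel_case("list_books_by_genre"))  # listBooksByGenre
```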

Query functions

// Queries.java
package com.example.db;

import java.sql.*;
import java.util.*;

public final class Queries {

    public static Optional<Author> getAuthor(Connection conn, long id)
            throws SQLException { … }

    public static List<Author> listAuthors(Connection conn)
            throws SQLException { … }

    public static Optional<Author> createAuthor(Connection conn,
            String name, String bio, Integer birthYear)
            throws SQLException { … }

    public static void deleteAuthor(Connection conn, long id)
            throws SQLException { … }

    public static long countAuthors(Connection conn)
            throws SQLException { … }  // :execrows
}

The SQL constant is emitted as a private static final String SQL_… field inside the class. Each function prepares the statement, binds parameters, executes, and maps the result — all in a single, self-contained method body.

Querier wrapper

Querier.java wraps a DataSource and opens a new connection per call:

// Querier.java
package com.example.db;

import javax.sql.DataSource;
import java.sql.*;
import java.util.*;

public final class Querier {
    private final DataSource ds;

    public Querier(DataSource ds) { this.ds = ds; }

    public Optional<Author> getAuthor(long id) throws SQLException {
        try (var conn = ds.getConnection()) {
            return Queries.getAuthor(conn, id);
        }
    }

    public List<Author> listAuthors() throws SQLException {
        try (var conn = ds.getConnection()) {
            return Queries.listAuthors(conn);
        }
    }
    // …
}

Wiring up

Maven dependency

<!-- pom.xml — PostgreSQL driver -->
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.7.3</version>
</dependency>

<!-- SQLite -->
<dependency>
    <groupId>org.xerial</groupId>
    <artifactId>sqlite-jdbc</artifactId>
    <version>3.47.0.0</version>
</dependency>

<!-- MySQL -->
<dependency>
    <groupId>com.mysql</groupId>
    <artifactId>mysql-connector-j</artifactId>
    <version>9.1.0</version>
</dependency>

Using a plain Connection

import java.sql.Connection;
import java.sql.DriverManager;
import com.example.db.Author;
import com.example.db.Queries;

Connection conn = DriverManager.getConnection(
    "jdbc:postgresql://localhost:5432/mydb", "user", "pass");

Optional<Author> author = Queries.getAuthor(conn, 1L);
List<Author> all        = Queries.listAuthors(conn);

Using the Querier wrapper (connection pool)

import com.zaxxer.hikari.HikariDataSource;
import com.example.db.Querier;

var ds = new HikariDataSource();
ds.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb");
ds.setUsername("user");
ds.setPassword("pass");

var q = new Querier(ds);
Optional<Author> author = q.getAuthor(1L);
List<Author> all        = q.listAuthors();

Inline row types

When a query result does not match any single table (JOINs, partial RETURNING, etc.), sqltgen emits an inline record inside Queries.java:

public final class Queries {

    public record ListBooksWithAuthorRow(
        long id,
        String title,
        String genre,
        BigDecimal price,
        String authorName,
        String authorBio
    ) {}

    public static List<ListBooksWithAuthorRow> listBooksWithAuthor(Connection conn)
            throws SQLException { … }
}

Naming conventions

SQL                  Java
get_author           getAuthor
list_books_by_genre  listBooksByGenre
birth_year column    birthYear field
Author table         Author record

Java version requirements

The generated code requires Java 16+ for records. On Java 8–15 you would need to rewrite the record declarations as standard classes with constructors and accessor methods.

Kotlin

sqltgen generates JDBC-based Kotlin code using the standard java.sql package. Generated code uses Kotlin idioms: data classes, nullable types, and Kotlin objects.

Configuration

"kotlin": {
  "out": "src/main/kotlin",
  "package": "com.example.db"
}
Field        Description
out          Output root directory.
package      Kotlin package name.
list_params  "native" (default) or "dynamic".

What is generated

src/main/kotlin/com/example/db/
  Author.kt         — data class for the author table
  Book.kt           — data class for the book table
  Queries.kt        — Kotlin object with query functions
  Querier.kt        — DataSource-backed wrapper object

Model data classes

// Author.kt
package com.example.db

data class Author(
    val id: Long,
    val name: String,
    val bio: String?,      // nullable → T?
    val birthYear: Int?    // nullable → T?
)
  • Non-null columns use non-nullable Kotlin types (Long, Int, Boolean, etc.).
  • Nullable columns use Kotlin nullable types (Long?, Int?, String?, etc.).
  • snake_case SQL names → camelCase Kotlin property names.

Query functions

// Queries.kt
package com.example.db

import java.sql.Connection

object Queries {

    fun getAuthor(conn: Connection, id: Long): Author? { … }

    fun listAuthors(conn: Connection): List<Author> { … }

    fun createAuthor(conn: Connection, name: String, bio: String?,
                     birthYear: Int?): Author? { … }

    fun deleteAuthor(conn: Connection, id: Long): Unit { … }

    fun countAuthors(conn: Connection): Long { … }  // :execrows
}

Querier wrapper

// Querier.kt
package com.example.db

import javax.sql.DataSource

class Querier(private val ds: DataSource) {

    fun getAuthor(id: Long): Author? =
        ds.connection.use { Queries.getAuthor(it, id) }

    fun listAuthors(): List<Author> =
        ds.connection.use { Queries.listAuthors(it) }
    // …
}

Wiring up

Gradle dependency

// build.gradle.kts — PostgreSQL driver
dependencies {
    implementation("org.postgresql:postgresql:42.7.3")
    // SQLite:
    // implementation("org.xerial:sqlite-jdbc:3.47.0.0")
    // MySQL:
    // implementation("com.mysql:mysql-connector-j:9.1.0")
}

Using a plain Connection

import java.sql.DriverManager
import com.example.db.Queries

val conn = DriverManager.getConnection(
    "jdbc:postgresql://localhost:5432/mydb", "user", "pass")

val author = Queries.getAuthor(conn, 1L)
val all    = Queries.listAuthors(conn)

Using the Querier wrapper

import com.zaxxer.hikari.HikariDataSource
import com.example.db.Querier

val ds = HikariDataSource().apply {
    jdbcUrl  = "jdbc:postgresql://localhost:5432/mydb"
    username = "user"
    password = "pass"
}

val q = Querier(ds)
val author = q.getAuthor(1L)

Inline row types

object Queries {

    data class ListBooksWithAuthorRow(
        val id: Long,
        val title: String,
        val genre: String,
        val price: java.math.BigDecimal,
        val authorName: String,
        val authorBio: String?
    )

    fun listBooksWithAuthor(conn: Connection): List<ListBooksWithAuthorRow> { … }
}

Naming conventions

SQL                Kotlin
get_author         getAuthor
birth_year column  birthYear property
Author table       Author data class

Rust

sqltgen generates async Rust code using sqlx. Model structs derive sqlx::FromRow. Query functions are async fn returning Result<…, sqlx::Error>.

Configuration

"rust": {
  "out": "src/db",
  "package": ""
}
Field        Description
out          Output directory (relative to the project root).
package      Unused for Rust — set to "".
list_params  "native" (default) or "dynamic".

What is generated

src/db/
  mod.rs              — re-exports all modules
  author.rs           — Author struct with sqlx::FromRow
  book.rs             — Book struct
  _sqltgen.rs         — shared helper trait (SqlxAdapter)
  queries.rs          — async query functions  (single-file form)
  users.rs            — per-group query files  (grouped form)

Model structs

#![allow(unused)]
fn main() {
// src/db/author.rs
#[derive(Debug, sqlx::FromRow)]
pub struct Author {
    pub id: i64,
    pub name: String,
    pub bio: Option<String>,
    pub birth_year: Option<i32>,
}
}
  • Non-null columns → bare type (i64, String, bool, …).
  • Nullable columns → Option<T>.
  • snake_case SQL names → snake_case Rust field names (unchanged).

Query functions

#![allow(unused)]
fn main() {
// src/db/queries.rs
use sqlx::PgPool;
use super::author::Author;

pub async fn get_author(pool: &PgPool, id: i64)
        -> Result<Option<Author>, sqlx::Error> {
    sqlx::query_as::<_, Author>(SQL_GET_AUTHOR)
        .bind(id)
        .fetch_optional(pool)
        .await
}

pub async fn list_authors(pool: &PgPool)
        -> Result<Vec<Author>, sqlx::Error> {
    sqlx::query_as::<_, Author>(SQL_LIST_AUTHORS)
        .fetch_all(pool)
        .await
}

pub async fn delete_author(pool: &PgPool, id: i64)
        -> Result<(), sqlx::Error> {
    sqlx::query(SQL_DELETE_AUTHOR)
        .bind(id)
        .execute(pool)
        .await
        .map(|_| ())
}

pub async fn count_authors(pool: &PgPool)
        -> Result<u64, sqlx::Error> {
    sqlx::query(SQL_COUNT_AUTHORS)
        .execute(pool)
        .await
        .map(|r| r.rows_affected())
}
}

_sqltgen.rs helper

The generated _sqltgen.rs file provides a shared SqlxAdapter trait that abstracts over the three pool types. This is an implementation detail — you do not need to use it directly.

mod.rs

#![allow(unused)]
fn main() {
pub mod author;
pub mod book;
pub mod queries;
mod _sqltgen;
}

Wiring up

Cargo.toml

[dependencies]
sqlx = { version = "0.8", features = [
    "runtime-tokio",
    "postgres",          # or "sqlite" / "mysql"
    "time",              # for Date, Time, Timestamp, TimestampTz
    "uuid",              # for UUID
    "rust_decimal",      # for NUMERIC / DECIMAL
] }
tokio = { version = "1", features = ["full"] }

Using the pool directly

mod db;
use db::queries;

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    let pool = sqlx::PgPool::connect("postgres://user:pass@localhost/mydb").await?;

    let author = queries::get_author(&pool, 1).await?;
    let all    = queries::list_authors(&pool).await?;

    Ok(())
}

Pool types per dialect

Dialect     Pool type
PostgreSQL  sqlx::PgPool
SQLite      sqlx::SqlitePool
MySQL       sqlx::MySqlPool

Inline row types

JOIN queries or partial RETURNING queries produce additional structs in the query file:

#![allow(unused)]
fn main() {
#[derive(Debug, sqlx::FromRow)]
pub struct ListBooksWithAuthorRow {
    pub id: i64,
    pub title: String,
    pub genre: String,
    pub price: rust_decimal::Decimal,
    pub author_name: String,
    pub author_bio: Option<String>,
}

pub async fn list_books_with_author(pool: &PgPool)
        -> Result<Vec<ListBooksWithAuthorRow>, sqlx::Error> { … }
}

List parameters

For PostgreSQL, list parameters use = ANY($1) with a slice bind:

#![allow(unused)]
fn main() {
// generated
pub async fn get_books_by_ids(pool: &PgPool, ids: &[i64])
        -> Result<Vec<Book>, sqlx::Error> {
    sqlx::query_as::<_, Book>(SQL_GET_BOOKS_BY_IDS)
        .bind(ids)
        .fetch_all(pool)
        .await
}
}

For SQLite and MySQL, the native strategy uses json_each / JSON_TABLE with a JSON-serialized string:

#![allow(unused)]
fn main() {
pub async fn get_books_by_ids(pool: &SqlitePool, ids: &[i64])
        -> Result<Vec<Book>, sqlx::Error> {
    let ids_json = serde_json::to_string(ids).unwrap();
    sqlx::query_as::<_, Book>(SQL_GET_BOOKS_BY_IDS)
        .bind(ids_json)
        .fetch_all(pool)
        .await
}
}
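The json_each technique the native strategy relies on is easy to verify with Python's stdlib sqlite3 driver (assuming a SQLite build with the JSON1 functions, which modern CPython ships with); the table name and data here are illustrative:

```python
import json
import sqlite3

# In-memory database with a toy `book` table (illustrative data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO book VALUES (?, ?)", [(1, "A"), (2, "B"), (3, "C")])

# Bind the list as a single JSON string; json_each() expands it server-side.
ids = [1, 3]
rows = conn.execute(
    "SELECT id, title FROM book WHERE id IN (SELECT value FROM json_each(?))",
    (json.dumps(ids),),
).fetchall()
print(rows)  # [(1, 'A'), (3, 'C')]
```

This is the same shape as the generated Rust: one placeholder, one JSON-serialized bind, no dynamic SQL.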

Naming conventions

SQL                  Rust
GetAuthor            get_author
ListBooksWithAuthor  list_books_with_author
birth_year column    birth_year field
Author table         Author struct

Python

sqltgen generates Python code using standard database drivers: psycopg (psycopg3) for PostgreSQL, sqlite3 (stdlib) for SQLite, and mysql-connector-python for MySQL. Model types are @dataclass classes. Functions are plain synchronous functions.

Configuration

"python": {
  "out": "gen",
  "package": ""
}
Field        Description
out          Output directory.
package      Unused for Python — set to "".
list_params  "native" (default) or "dynamic".

What is generated

gen/
  __init__.py         — barrel import
  author.py           — Author dataclass
  book.py             — Book dataclass
  queries.py          — query functions (single-file form)
  users.py            — per-group query file (grouped form)

Model dataclasses

# gen/author.py
from __future__ import annotations
import dataclasses

@dataclasses.dataclass
class Author:
    id: int
    name: str
    bio: str | None
    birth_year: int | None
  • Non-null columns → bare type (int, str, bool, float, …).
  • Nullable columns → T | None.
  • snake_case SQL names → snake_case Python field names (unchanged).

Query functions

# gen/queries.py
import psycopg
from .author import Author

SQL_GET_AUTHOR    = "SELECT id, name, bio, birth_year FROM author WHERE id = %s"
SQL_LIST_AUTHORS  = "SELECT id, name, bio, birth_year FROM author ORDER BY name"
SQL_CREATE_AUTHOR = "INSERT INTO author (name, bio, birth_year) VALUES (%s, %s, %s) RETURNING *"
SQL_DELETE_AUTHOR = "DELETE FROM author WHERE id = %s"

def get_author(conn: psycopg.Connection, id: int) -> Author | None:
    with conn.cursor() as cur:
        cur.execute(SQL_GET_AUTHOR, (id,))
        row = cur.fetchone()
        if row is None:
            return None
        return Author(*row)

def list_authors(conn: psycopg.Connection) -> list[Author]:
    with conn.cursor() as cur:
        cur.execute(SQL_LIST_AUTHORS)
        return [Author(*row) for row in cur.fetchall()]

def create_author(conn: psycopg.Connection,
                  name: str, bio: str | None,
                  birth_year: int | None) -> Author | None:
    with conn.cursor() as cur:
        cur.execute(SQL_CREATE_AUTHOR, (name, bio, birth_year))
        row = cur.fetchone()
        if row is None:
            return None
        return Author(*row)

def delete_author(conn: psycopg.Connection, id: int) -> None:
    with conn.cursor() as cur:
        cur.execute(SQL_DELETE_AUTHOR, (id,))

def count_authors(conn: psycopg.Connection) -> int:
    with conn.cursor() as cur:
        cur.execute(SQL_COUNT_AUTHORS)
        return cur.rowcount  # :execrows

Results are unpacked positionally (Author(*row)), so the column order in the query must match the dataclass field order.
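The positional mapping can be seen with a plain tuple standing in for a driver row (the values here are illustrative):

```python
from __future__ import annotations

import dataclasses

@dataclasses.dataclass
class Author:
    id: int
    name: str
    bio: str | None
    birth_year: int | None

# A driver row is a plain tuple in SELECT column order; Author(*row) maps it
# positionally onto the dataclass fields, so the orders must agree.
row = (1, "Ada", None, 1815)
author = Author(*row)
print(author.name, author.birth_year)  # Ada 1815
```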

Querier wrapper

# gen/queries.py  (excerpt)
import contextlib
from typing import Callable

class Querier:
    def __init__(self, connect: Callable[[], psycopg.Connection]) -> None:
        self._connect = connect

    def get_author(self, id: int) -> Author | None:
        with contextlib.closing(self._connect()) as conn:
            return get_author(conn, id)

    def list_authors(self) -> list[Author]:
        with contextlib.closing(self._connect()) as conn:
            return list_authors(conn)

Wiring up

PostgreSQL

pip install psycopg
import psycopg
from gen.queries import get_author, list_authors

with psycopg.connect("postgresql://user:pass@localhost/mydb") as conn:
    author = get_author(conn, 1)
    all_authors = list_authors(conn)
    conn.commit()

SQLite (stdlib — no install needed)

import sqlite3
from gen.queries import get_author, list_authors

conn = sqlite3.connect("mydb.db")
author = get_author(conn, 1)
all_authors = list_authors(conn)

MySQL

pip install mysql-connector-python
import mysql.connector
from gen.queries import get_author, list_authors

conn = mysql.connector.connect(
    host="localhost", database="mydb",
    user="user", password="pass")

author = get_author(conn, 1)
all_authors = list_authors(conn)

Using the Querier wrapper

from gen.queries import Querier
import psycopg

q = Querier(lambda: psycopg.connect("postgresql://user:pass@localhost/mydb"))
author = q.get_author(1)

JSON columns

Driver           JSON column Python type
psycopg3         object — psycopg3 automatically deserializes JSON
sqlite3          str — returns the raw JSON string
mysql-connector  str — returns the raw JSON string

Application code is responsible for parsing the string in the sqlite3 and mysql-connector cases.
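For example, with a hypothetical JSON metadata column returned by sqlite3 or mysql-connector as a string:

```python
import json

# Value as the sqlite3/mysql-connector driver returns it (illustrative);
# psycopg3 would hand back the decoded object directly.
raw = '{"tags": ["fiction", "classic"], "pages": 432}'
meta = json.loads(raw)
print(meta["tags"][0], meta["pages"])  # fiction 432
```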

Naming conventions

SQL                  Python
GetAuthor            get_author
ListBooksWithAuthor  list_books_with_author
birth_year column    birth_year field
Author table         Author class

Python version requirements

The generated code uses PEP 604 union syntax (str | None) and from __future__ import annotations. Python 3.10+ is required for runtime evaluation of these annotations. Python 3.11+ is recommended.

TypeScript

sqltgen generates TypeScript code with full type annotations. Database access uses pg for PostgreSQL, better-sqlite3 for SQLite, and mysql2 for MySQL.

Note: better-sqlite3 is synchronous. Generated SQLite functions do not return a Promise — they return values directly.

Configuration

"typescript": {
  "out": "src/db",
  "package": ""
}
Field        Description
out          Output directory.
package      Unused for TypeScript — set to "".
list_params  "native" (default) or "dynamic".

What is generated

src/db/
  index.ts            — barrel export
  author.ts           — Author interface
  book.ts             — Book interface
  _sqltgen.ts         — shared SqltgenAdapter type
  queries.ts          — async query functions + Querier class

Model interfaces

// src/db/author.ts
export interface Author {
  id: number;
  name: string;
  bio: string | null;
  birth_year: number | null;
}
  • Non-null columns → bare type.
  • Nullable columns → T | null.
  • Column names are kept as snake_case to match the database column names exactly.

Query functions

// src/db/queries.ts
import type { ClientBase } from 'pg';
import type { Author } from './author';

const SQL_GET_AUTHOR    = `SELECT id, name, bio, birth_year FROM author WHERE id = $1`;
const SQL_LIST_AUTHORS  = `SELECT id, name, bio, birth_year FROM author ORDER BY name`;
const SQL_DELETE_AUTHOR = `DELETE FROM author WHERE id = $1`;

export async function getAuthor(db: ClientBase, id: number): Promise<Author | null> {
  const result = await db.query<Author>(SQL_GET_AUTHOR, [id]);
  return result.rows[0] ?? null;
}

export async function listAuthors(db: ClientBase): Promise<Author[]> {
  const result = await db.query<Author>(SQL_LIST_AUTHORS);
  return result.rows;
}

export async function deleteAuthor(db: ClientBase, id: number): Promise<void> {
  await db.query(SQL_DELETE_AUTHOR, [id]);
}

export async function countAuthors(db: ClientBase): Promise<number> {
  const result = await db.query(SQL_COUNT_AUTHORS);
  return result.rowCount ?? 0;  // :execrows
}

Querier class

export class Querier {
  constructor(private connect: () => ClientBase | Promise<ClientBase>) {}

  async getAuthor(id: number): Promise<Author | null> {
    const db = await this.connect();
    try {
      return await getAuthor(db, id);
    } finally {
      if ('end' in db) await (db as any).end();
    }
  }
  // …
}

Wiring up

PostgreSQL

npm install pg
npm install --save-dev @types/pg
import { Client } from 'pg';
import { getAuthor, listAuthors } from './src/db/queries';

const client = new Client({
  connectionString: 'postgres://user:pass@localhost/mydb'
});
await client.connect();

const author = await getAuthor(client, 1);
const all    = await listAuthors(client);

await client.end();

SQLite

npm install better-sqlite3
npm install --save-dev @types/better-sqlite3
import Database from 'better-sqlite3';
import { getAuthor, listAuthors } from './src/db/queries';

const db = new Database('mydb.db');

// SQLite functions are synchronous — no await needed
const author = getAuthor(db, 1);
const all    = listAuthors(db);

MySQL

npm install mysql2
import mysql from 'mysql2/promise';
import { getAuthor } from './src/db/queries';

const conn = await mysql.createConnection({
  host: 'localhost', database: 'mydb',
  user: 'user', password: 'pass'
});

const author = await getAuthor(conn, 1);
await conn.end();

Inline row types

export interface ListBooksWithAuthorRow {
  id: number;
  title: string;
  genre: string;
  price: number;
  author_name: string;
  author_bio: string | null;
}

export async function listBooksWithAuthor(db: ClientBase): Promise<ListBooksWithAuthorRow[]> { … }

Naming conventions

SQL                  TypeScript
GetAuthor            getAuthor (function)
ListBooksWithAuthor  listBooksWithAuthor
birth_year column    birth_year (interface field — unchanged)
Author table         Author interface

JavaScript

sqltgen generates JavaScript code with JSDoc type annotations. It uses the same database drivers as the TypeScript backend: pg, better-sqlite3, and mysql2.

The generated JavaScript is functionally identical to the TypeScript output, but replaces inline TypeScript syntax with JSDoc comments (@typedef, @param, @returns).

Configuration

"javascript": {
  "out": "src/db",
  "package": ""
}
Field        Description
out          Output directory.
package      Unused for JavaScript — set to "".
list_params  "native" (default) or "dynamic".

What is generated

src/db/
  index.js            — barrel export
  author.js           — Author typedef
  book.js             — Book typedef
  _sqltgen.js         — shared adapter typedef
  queries.js          — async query functions + Querier class

Model typedefs

// src/db/author.js

/**
 * @typedef {Object} Author
 * @property {number} id
 * @property {string} name
 * @property {string | null} bio
 * @property {number | null} birth_year
 */

Query functions

// src/db/queries.js

const SQL_GET_AUTHOR   = `SELECT id, name, bio, birth_year FROM author WHERE id = $1`;
const SQL_LIST_AUTHORS = `SELECT id, name, bio, birth_year FROM author ORDER BY name`;

/**
 * @param {import('pg').ClientBase} db
 * @param {number} id
 * @returns {Promise<Author | null>}
 */
export async function getAuthor(db, id) {
  const result = await db.query(SQL_GET_AUTHOR, [id]);
  return result.rows[0] ?? null;
}

/**
 * @param {import('pg').ClientBase} db
 * @returns {Promise<Author[]>}
 */
export async function listAuthors(db) {
  const result = await db.query(SQL_LIST_AUTHORS);
  return result.rows;
}

Querier class

export class Querier {
  /** @param {() => import('pg').ClientBase | Promise<import('pg').ClientBase>} connect */
  constructor(connect) {
    this._connect = connect;
  }

  /** @returns {Promise<Author | null>} */
  async getAuthor(id) {
    const db = await this._connect();
    try {
      return await getAuthor(db, id);
    } finally {
      if (typeof db.end === 'function') await db.end();
    }
  }
}

Wiring up

PostgreSQL

npm install pg
import { Client } from 'pg';
import { getAuthor, listAuthors } from './src/db/queries.js';

const client = new Client({
  connectionString: 'postgres://user:pass@localhost/mydb'
});
await client.connect();

const author = await getAuthor(client, 1);
const all    = await listAuthors(client);

await client.end();

SQLite

npm install better-sqlite3
import Database from 'better-sqlite3';
import { getAuthor } from './src/db/queries.js';

const db = new Database('mydb.db');
const author = getAuthor(db, 1);  // synchronous

MySQL

npm install mysql2
import mysql from 'mysql2/promise';
import { getAuthor } from './src/db/queries.js';

const conn = await mysql.createConnection({
  host: 'localhost', database: 'mydb',
  user: 'user', password: 'pass'
});
const author = await getAuthor(conn, 1);

Differences from TypeScript

TypeScript              JavaScript
.ts file extension      .js file extension
Inline interface types  @typedef JSDoc comments
Inline parameter types  @param JSDoc annotations
index.ts barrel         index.js barrel

All drivers, driver setup, and query function semantics are identical. See the TypeScript guide for more detail.

Naming conventions

Identical to TypeScript: function names are camelCase, interface field names follow snake_case database column names.

Go

sqltgen generates Go code using the standard database/sql package. Functions take a context.Context and a *sql.DB and return idiomatic (T, error) pairs.

Configuration

"go": {
  "out": "db",
  "package": "db"
}
Field        Description
out          Output directory.
package      Go package name declared at the top of each generated file.
list_params  "native" (default) or "dynamic".

What is generated

db/
  mod.go          — package declaration
  author.go       — Author struct
  book.go         — Book struct
  sqltgen.go      — shared helpers (execRows, buildInClause, scanArray)
  queries.go      — query functions

Model structs

// db/author.go
package db

import "database/sql"

// Author represents a row from the author table.
type Author struct {
    Id        int64
    Name      string
    Bio       sql.NullString
    BirthYear sql.NullInt32
}
  • Non-null columns use bare Go types (int64, string, bool, …).
  • Nullable columns use sql.Null* types (sql.NullString, sql.NullInt32, sql.NullInt64, sql.NullFloat64, sql.NullBool, sql.NullTime).
  • Field names are PascalCase (e.g. birth_year → BirthYear).

Query functions

// db/queries.go
package db

import (
    "context"
    "database/sql"
)

// GetAuthor executes the GetAuthor query.
func GetAuthor(ctx context.Context, db *sql.DB, id int64) (*Author, error) {
    row := db.QueryRowContext(ctx, SQL_GET_AUTHOR, id)
    var r Author
    err := row.Scan(&r.Id, &r.Name, &r.Bio, &r.BirthYear)
    if err == sql.ErrNoRows {
        return nil, nil
    }
    if err != nil {
        return nil, err
    }
    return &r, nil
}

// ListAuthors executes the ListAuthors query.
func ListAuthors(ctx context.Context, db *sql.DB) ([]Author, error) {
    rows, err := db.QueryContext(ctx, SQL_LIST_AUTHORS)
    if err != nil {
        return nil, err
    }
    defer rows.Close()
    var results []Author
    for rows.Next() {
        var r Author
        if err := rows.Scan(&r.Id, &r.Name, &r.Bio, &r.BirthYear); err != nil {
            return nil, err
        }
        results = append(results, r)
    }
    return results, rows.Err()
}

// DeleteAuthor executes the DeleteAuthor query.
func DeleteAuthor(ctx context.Context, db *sql.DB, id int64) error {
    _, err := db.ExecContext(ctx, SQL_DELETE_AUTHOR, id)
    return err
}

// CountAuthors executes the CountAuthors query.
func CountAuthors(ctx context.Context, db *sql.DB) (int64, error) {
    // :execrows — returns affected row count
}

Return types by command:

Command    Return type
:one       (*T, error) — nil, nil if no row
:many      ([]T, error)
:exec      error
:execrows  (int64, error)

Wiring up

go.mod

go get github.com/lib/pq               # PostgreSQL
go get modernc.org/sqlite              # SQLite (pure Go, no CGo)
go get github.com/go-sql-driver/mysql  # MySQL

PostgreSQL

import (
    "context"
    "database/sql"
    _ "github.com/lib/pq"
    "yourmodule/db"
)

conn, err := sql.Open("postgres", "postgres://user:pass@localhost/mydb?sslmode=disable")

author, err := db.GetAuthor(context.Background(), conn, 1)
all, err    := db.ListAuthors(context.Background(), conn)

SQLite

import (
    "context"
    "database/sql"
    _ "modernc.org/sqlite"
    "yourmodule/db"
)

conn, err := sql.Open("sqlite", "mydb.db")

author, err := db.GetAuthor(context.Background(), conn, 1)

MySQL

import (
    "context"
    "database/sql"
    _ "github.com/go-sql-driver/mysql"
    "yourmodule/db"
)

conn, err := sql.Open("mysql", "user:pass@tcp(localhost:3306)/mydb")

author, err := db.GetAuthor(context.Background(), conn, 1)

Inline row types

type ListBooksWithAuthorRow struct {
    Id         int64
    Title      string
    Genre      string
    Price      float64
    AuthorName string
    AuthorBio  sql.NullString
}

func ListBooksWithAuthor(ctx context.Context, db *sql.DB) ([]ListBooksWithAuthorRow, error) { … }

List parameters

For SQLite and MySQL the native strategy serializes the list to a JSON string and uses json_each / JSON_TABLE. For PostgreSQL it uses pq.Array:

// SQLite / MySQL
func GetBooksByIds(ctx context.Context, db *sql.DB, ids []int64) ([]Book, error) {
    idsJSON, _ := json.Marshal(ids)
    rows, err := db.QueryContext(ctx, SQL_GET_BOOKS_BY_IDS, string(idsJSON))
    …
}

// PostgreSQL
import "github.com/lib/pq"

func GetBooksByIds(ctx context.Context, db *sql.DB, ids []int64) ([]Book, error) {
    rows, err := db.QueryContext(ctx, SQL_GET_BOOKS_BY_IDS, pq.Array(ids))
    …
}

Naming conventions

SQL                  Go
GetAuthor            GetAuthor (PascalCase — exported)
ListBooksWithAuthor  ListBooksWithAuthor
birth_year column    BirthYear field
Author table         Author struct

Contributing

Thank you for your interest in contributing to sqltgen.

The full contributing guide — build instructions, test suite, code style, and step-by-step walkthroughs for adding new backends and dialects — is in CONTRIBUTING.md in the repository root.

The detailed technical reference — architecture overview, IR data model, and guides for adding new backends, dialects, and examples — is in docs/contributor-guide.md.

Quick reference

# Clone and build
git clone https://github.com/sqltgen/sqltgen.git
cd sqltgen
cargo build

# Run all tests
cargo test

# Format and lint
cargo fmt
cargo clippy -- -D warnings

Opening a pull request

  1. Fork the repository and create a feature branch.
  2. Make your changes; run cargo fmt and cargo clippy.
  3. Add or update tests to cover the change.
  4. Open a PR with a clear description of what changes and why.

Keep PRs focused — one logical change per PR makes review much easier.

Questions?

Open an issue or start a discussion on the GitHub repository.