gsql API design: how it differs from GORM, sqlx, and squirrel
Releasing yet another Go SQL builder requires a reason. gsql aims for the unoccupied corner: type safety via generics, no code generation, and an explicit answer to Go's zero-value problem in partial updates.
Why write another one
GORM, sqlx, sqlc, ent, sqlboiler, squirrel, goqu, bun, jet, bob — Go's SQL space is already crowded. When I decided to release gsql into it, the first question I had to answer for myself was the reason for adding one more.
The answer fits in a single line: I wanted type safety without code generation. GORM is not type-safe. sqlc / ent / sqlboiler are type-safe but require a codegen step. squirrel / goqu give you query structure, but their string-based APIs do not let the Go compiler catch a single column-name typo. That narrow gap had been sitting there unoccupied since Go 1.18 added generics. gsql sits down in that seat.
The uncomfortable gap between existing libraries
The Go SQL landscape splits roughly along three axes.
| Track | Examples | Type-safe | Codegen | Hot-path reflection |
|---|---|---|---|---|
| ORM-like | GORM, ent | No (interface{} based) | None / required | Yes |
| Codegen | sqlc, sqlboiler | Yes | Required | None |
| String builder | squirrel, goqu | No (string-based) | None | None |
| gsql | — | Yes (generics) | None | None |
I have spent time in production with both GORM and sqlx in client work, and I know what each kind of friction feels like. GORM lets .Where("age > ?", "eighteen") slip past the compiler, only to panic in production logs at the worst time. The error is not "wrong type" — it is whatever the database happens to say when it sees a malformed parameter, surfaced through a stack trace several layers away from the offending line. Debugging it in a production incident eats hours. sqlc is clean, but during the early phase when you rewrite queries twenty times a day, threading sqlc generate through every iteration grinds. It also adds a step to your CI setup, plus a generated-code review burden that scales with team size.
squirrel deserves a separate note. It is well-designed for what it is — a structural builder that produces parameterized SQL strings. But the columns and tables you reference are still bare strings: sq.Select("id", "name").From("users").Where(sq.Eq{"age": 18}). Rename a column in the schema, and the only thing that catches the mismatch is a failing query at runtime. For a project that goes through dozens of schema migrations a year, that is the same exposure surface as raw SQL, just with prettier syntax.
After Go 1.18 brought generics, filling the "type-safe × no codegen" quadrant became technically possible. None of the major libraries had moved into it. gsql is built specifically to occupy that empty seat.
Type safety with generics
gsql's API is built on two generic types: Col[T] and Table[C]. Because columns carry their Go type, type mismatches are caught at compile time rather than at runtime.
```go
type Col[T any] struct {
    table  string
    column string
}

type UserColumns struct {
    ID   qb.Col[int64]  `db:"id"`
    Name qb.Col[string] `db:"name"`
    Age  qb.Col[int]    `db:"age"`
}

var Users = qb.NewTable[UserColumns]("users")

// Compiles
q := qb.Select(Users.Cols.ID, Users.Cols.Name).
    From(Users).
    Where(Users.Cols.Age.Gt(18))

// Compile error: passing a string to Eq on a Col[int]
// Users.Cols.Age.Eq("eighteen")
```
There is one realistic compromise here. The string db:"id" for a column name cannot be verified by the Go compiler — the database is the only thing that can ultimately say "no such column," and that is true for every library except sqlc. gsql validates identifiers at NewTable() initialization with the regex [A-Za-z_][A-Za-z0-9_]*, panicking immediately on anything else. That is where I drew the SQL-injection boundary for string identifiers.
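As a sanity check, that validation rule can be mirrored in a few lines of stdlib Go. This is a sketch of the rule as described in prose; `identRe` and `mustValidIdent` are illustrative names, not gsql's internals.

```go
package main

import (
	"fmt"
	"regexp"
)

// identRe mirrors the rule described above: an identifier must match
// [A-Za-z_][A-Za-z0-9_]* in full, or table construction panics.
var identRe = regexp.MustCompile(`^[A-Za-z_][A-Za-z0-9_]*$`)

// mustValidIdent returns the identifier unchanged, or panics
// immediately at initialization time on anything suspicious.
func mustValidIdent(name string) string {
	if !identRe.MatchString(name) {
		panic(fmt.Sprintf("invalid identifier %q", name))
	}
	return name
}

func main() {
	fmt.Println(mustValidIdent("users"))                     // passes
	fmt.Println(identRe.MatchString("users; DROP TABLE users")) // false: rejected
}
```

Because the check runs once at `NewTable()` time, a bad identifier fails at process start rather than at query time.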
Reflection runs once, inside NewTable(), and never on the query hot path. Build(), Insert().Set(...), and Update().Set(...) are all generic. In benchmarks, gsql's SelectSimple comes in at 272 ns/op while GORM lands at 1715 ns/op, roughly 6× faster, and I read that gap as the payoff of the reflect-once-then-types design.
The allocation numbers tell a similar story. SelectSimple allocates 488 B/op in gsql vs 2873 B/op in GORM, with a similar ratio across Update and Delete. None of this matters in a request that ultimately waits on a database round-trip, of course. What it does mean is that the design has headroom: you can run gsql in a hot inner loop, in a write-amplified bulk path, or in a benchmark suite without the SQL builder itself becoming the bottleneck. That is the freedom I wanted to preserve, even if a typical web request never notices.
Naming the zero-value problem
Anyone who has written partial updates in Go has run into this. You cannot tell UPDATE users SET age = 0 (intentional) from "do not touch age" using Go's zero value alone.
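The ambiguity is easy to reproduce in a few lines; `UserPatch` here is a made-up struct for illustration, not part of gsql.

```go
package main

import "fmt"

// UserPatch is a hypothetical partial-update payload.
type UserPatch struct {
	Age int
}

func main() {
	explicitZero := UserPatch{Age: 0} // intent: "set age to 0"
	untouched := UserPatch{}          // intent: "do not touch age"

	// The two intents are byte-for-byte identical in Go:
	fmt.Println(explicitZero == untouched) // true
}
```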
GORM treats zero-value struct fields as "not specified" and excludes them from UPDATE. Convenient on a good day — until the day "set age to 0" gets silently dropped, and you find out from a customer support ticket. The standard library's sql.NullString, sql.NullInt64, and friends solve part of this for scanning results, but they do not give you a clean writer-side vocabulary for partial updates. Pointer-based nullable fields (*string, *int) work as a writer-side convention, but the call sites end up taking the address of literals, plumbing pointers through service layers, and dealing with nil checks at every boundary. The code reads like Go that is apologizing for being Go.
gsql gives this ambiguity a name with Optional[T].
```go
optName := qb.Set("Alice") // included in SET
optAge := qb.Unset[int]()  // excluded from SET

err := qb.Update(Users).
    Set(qb.ValIf(Users.Cols.Name, optName)).
    Set(qb.ValIf(Users.Cols.Age, optAge)).
    Where(Users.Cols.ID.Eq(int64(1))).
    Exec(ctx, db)

// → UPDATE users SET name = ? WHERE users.id = ?
// age does not appear in SET (its current value is preserved)
```
qb.Set(0) lands in SET as "intentional zero." qb.Unset[int]() falls out as "do not touch." Instead of leaning on the language's accident around zero values, gsql pins the meaning down at the library layer with names. It feels like a quiet but load-bearing design choice.
The shape of Optional[T] is also intentionally small. Two constructors, Set and Unset. One predicate, IsSet(). One accessor, Value(). There is no Map, no OrElse, no functor algebra. The point is not to import a Maybe monad into Go; it is to give partial-update code a vocabulary that the Go zero-value rules do not provide. Anything beyond that risks pulling the API into "now you have an Optional library" territory, and I do not want gsql to grow that surface.
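For intuition, a type with exactly that surface fits in a dozen lines. This is my sketch of the shape described above, with guessed field names; it is not gsql's actual implementation.

```go
package main

import "fmt"

// Optional[T] sketch: two constructors, one predicate, one accessor,
// and nothing else. Internal field names are assumptions.
type Optional[T any] struct {
	value T
	set   bool
}

func Set[T any](v T) Optional[T] { return Optional[T]{value: v, set: true} }
func Unset[T any]() Optional[T]  { return Optional[T]{} }

func (o Optional[T]) IsSet() bool { return o.set }
func (o Optional[T]) Value() T    { return o.value }

func main() {
	age := Set(0)           // intentional zero: goes into SET
	name := Unset[string]() // untouched: falls out of SET
	fmt.Println(age.IsSet(), name.IsSet()) // true false
}
```

Note that Set(0) carries set == true even though the payload is Go's zero value, which is exactly the distinction the zero value alone cannot express.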
Most of the work was subtraction
Designing gsql cost me more time deciding what to leave out than what to add.
Deliberately omitted:
- Eager-loading magic: has_many requires you to write the two-query pattern yourself. No automatic N+1 fixing.
- Result scanner (Fetch[T]): cut to keep the hot path reflection-free. Use rows.Scan(...) directly.
- Dynamic column-name selection: no OrderByName(string), no WhereRaw(string). Identifiers must be source-code literals.
- Migrations, connection pooling, retry: those belong to the database/sql driver layer and stay out of scope.
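The two-query pattern the first bullet refers to is mechanical enough to sketch. The helpers `inPlaceholders` and `groupByUser` are hypothetical, and the SQL strings are illustrative; the point is two round-trips instead of N+1.

```go
package main

import (
	"fmt"
	"strings"
)

type Post struct {
	UserID int64
	Title  string
}

// inPlaceholders builds "?,?,?" for an IN clause with n parameters.
func inPlaceholders(n int) string {
	return strings.TrimSuffix(strings.Repeat("?,", n), ",")
}

// groupByUser indexes child rows by parent ID so the caller can
// attach them in memory, instead of issuing one query per parent.
func groupByUser(posts []Post) map[int64][]Post {
	byUser := make(map[int64][]Post, len(posts))
	for _, p := range posts {
		byUser[p.UserID] = append(byUser[p.UserID], p)
	}
	return byUser
}

func main() {
	// Query 1: load the parents, collect their IDs.
	userIDs := []int64{1, 2, 3}

	// Query 2: load all children in one shot.
	fmt.Printf("SELECT user_id, title FROM posts WHERE user_id IN (%s)\n",
		inPlaceholders(len(userIDs)))

	// Attach in memory.
	byUser := groupByUser([]Post{{1, "a"}, {1, "b"}, {2, "c"}})
	fmt.Println(len(byUser[1]), len(byUser[2])) // 2 1
}
```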
Not yet implemented:
- Subqueries, CTEs, window functions, aggregates (COUNT/SUM/AVG/GROUP BY/HAVING)
- Raw SQL escape hatch
The missing aggregates genuinely sting. Until they ship, you fall back to plain db.QueryContext(ctx, "SELECT COUNT(*) ...", args...) — which is half-finished and I know it. They are on the roadmap.
The cuts cost real expressiveness. But the moment I dilute "type-safe × no codegen × reflection-free hot path," gsql stops being the thing that occupies the empty seat. Other libraries already cover the broader feature surface, and they do it well. So I keep that line non-negotiable, and I tell people up front in the README that gsql will refuse to grow features whose only justification is "another library has it." A narrow library that keeps its promises beats a broad library that hedges them.
What I actually wanted to test with gsql
gsql is the second product I have shipped under the "Pure Go OSS library" label, sitting alongside the first one, gpdf. With gpdf, the experiment was whether a from-scratch PDF library could survive without CGO. With gsql, the experiment was different — I wanted to find out whether Go 1.18 generics could carry a production-grade SQL builder without code generation.
When the benchmarks landed at roughly 6× GORM and shoulder-to-shoulder with bun, the technical question felt answered. What is left is API-surface taste, and the ergonomics of Optional[T] are still in motion. I might rewrite parts of it after running the library for half a year, and I am not yet sure whether the answer to "should Set and Unset be top-level or live under a sub-package" will look the same in production as it does on a whiteboard.
The decision to ship under MIT first follows the same logic as gpdf. The longer-form reasoning sits in Building Pure Go micro SaaS on the side. If you want the design narrative for the library that came before this one, Building a Pure Go zero-dependency PDF library is the one to read first.
The implementation lives at the gsql README and repository. The compromises around Optional[T] are still in flux, so if you try it and something feels off, an issue would help.