It returned three.
The integration test had seeded five suppression rows in beforeAll, called GET /api/suppressions?limit=2, and walked the cursors forward expecting all five back. The first page returned two. The second page returned one. The third page came back empty.
The suppression list endpoint is a paginated GET. Tenants call it to see who they have manually blocked, who Resend marked as a hard bounce, and who complained about an email. The list grows over time and needs ordering by recency. Standard cursor pagination: order by (created_at DESC, id DESC), encode the last-seen tuple into a base64 cursor, and the next page asks for everything strictly less than that tuple. I had this pattern working on the notifications endpoint and wrote the suppressions version against the same template. Drizzle, Postgres, tuple comparison in SQL.
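The cursor mechanics described above can be sketched in a few lines. This is a minimal illustration, not the project's actual helpers (the names are hypothetical): the last-seen (created_at, id) tuple is serialized to JSON and base64-encoded into an opaque cursor string.

```typescript
// Hypothetical sketch of the cursor scheme: the last row's (created_at, id)
// tuple, JSON-serialized and base64url-encoded into an opaque string.
interface Cursor {
  createdAt: string; // serialized timestamp of the last row on the page
  id: string;        // tie-breaker: the last row's id
}

function encodeCursor(cursor: Cursor): string {
  return Buffer.from(JSON.stringify(cursor)).toString("base64url");
}

function decodeCursor(encoded: string): Cursor {
  return JSON.parse(Buffer.from(encoded, "base64url").toString("utf8"));
}
```

The next-page query decodes the cursor and asks for every (created_at, id) tuple strictly less than the decoded one.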
Where the Microseconds Disappeared
The seed used a single db.insert(...).values([row1, row2, row3, row4, row5]) call. Postgres executes that as one statement in one transaction, and now() is fixed for the duration of a transaction, so every row's created_at default evaluates to the identical value, down to the microsecond precision Postgres stores natively. From the database's point of view, the five rows are not just in the same second, not just in the same millisecond. They share a timestamp at all six decimal places.
The cursor encoding called row.createdAt.toISOString(). That returns ISO 8601 with millisecond precision: 2026-04-26T14:32:18.473Z. Three fractional digits, and the microsecond information is gone. When the next-page query compared the column value to the cursor value:
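The truncation is baked into JavaScript itself: a Date is an integer count of milliseconds since the epoch, so there is no slot for microseconds at all, and toISOString can only ever emit three fractional digits.

```typescript
// A Date stores whole milliseconds; finer precision has nowhere to live,
// so toISOString emits exactly three fractional digits.
const createdAt = new Date(Date.UTC(2026, 3, 26, 14, 32, 18, 473));
console.log(createdAt.toISOString()); // "2026-04-26T14:32:18.473Z"
```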
(created_at, id) < ('2026-04-26T14:32:18.473'::timestamp, 'last-id-here'::uuid)
Postgres compared the column's microsecond-precise value (14:32:18.473251) against the cursor's millisecond-precise value (14:32:18.473000). The column value was strictly greater. The tuple comparison returned false. Rows that should have appeared on the next page were filtered out as already-seen.
The first page returned the two newest rows correctly. The second page used the second row's millisecond-truncated timestamp as the cursor. The query asked for rows strictly less than that timestamp. The remaining three rows had timestamps strictly greater at the microsecond level, so they did not match. The endpoint silently returned an empty result. The test reported missing rows. Nothing in the logs hinted at why.
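The failure can be reproduced without a database. Postgres pads the cast cursor value out to .473000; in this fixed-width layout, lexicographic order matches chronological order, so plain string comparison shows the same result the tuple comparison produced.

```typescript
// Stored column value vs. millisecond-truncated cursor value. The layout is
// fixed-width, so string order agrees with timestamp order.
const columnValue = "2026-04-26T14:32:18.473251";
const cursorValue = "2026-04-26T14:32:18.473000";

// The page filter asks for rows strictly less than the cursor. The column
// is strictly greater, so the row is (wrongly) treated as already seen.
console.log(columnValue < cursorValue); // false
console.log(columnValue > cursorValue); // true
```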
Why Notifications Never Hit This
The notifications endpoint had been running this pattern in production without a problem. The reason it worked is that production notifications are inserted one at a time, each in its own transaction, each with a distinct now() value for created_at. Microsecond collisions are theoretically possible but vanishingly rare in practice.
Suppressions are different. The realistic write pattern is bulk: a tenant onboards and uploads their existing suppression list as a CSV, an admin runs a backfill from a previous notification provider, an auto-import grabs all hard bounces from the last 90 days. These all hit the database as batch inserts. Five rows sharing a microsecond stops being a contrived test fixture and becomes the normal case.
I considered three fixes. Round the column down to milliseconds at write time, so the cursor and the column always match. Switch the cursor format to a millisecond-precise serialization. Find a way to round-trip the microseconds losslessly through the cursor.
The first option destroys information. Postgres stores microseconds for a reason; throwing them away because the cursor is too coarse is a fix in the wrong layer. The second option amounts to the same thing. Whatever precision the cursor carries becomes the precision of the comparison, so reducing the cursor to milliseconds is identical to reducing the column.
The third option meant figuring out how to get microsecond precision out of Postgres in a form that survives a round trip through JSON, base64, and back into a SQL parameter.
to_char Saves the Round-Trip
Postgres has a function called to_char that formats timestamps using a pattern string. The pattern YYYY-MM-DD"T"HH24:MI:SS.US produces a string with microsecond precision in ISO-like format. Casting that string back to timestamp parses it without loss. The driver and the cursor never have to handle microseconds in JavaScript at all. The value moves as a string from query to cursor to next query, and Postgres does the precision work on both ends.
The list query selects an additional helper column with the formatted timestamp:
.select({
  id: tenantSuppressions.id,
  recipient: tenantSuppressions.recipient,
  reason: tenantSuppressions.reason,
  expiresAt: tenantSuppressions.expiresAt,
  createdAt: tenantSuppressions.createdAt,
  createdAtText: sqlOp<string>`to_char(${tenantSuppressions.createdAt} AT TIME ZONE 'UTC', 'YYYY-MM-DD"T"HH24:MI:SS.US')`.as('created_at_text'),
})
The createdAtText column is computed in the database for every returned row. The cursor encoder uses that string verbatim, paired with the row's id. The decoder pulls the string back out and casts it to timestamp in the next query's tuple comparison:
sqlOp`(${tenantSuppressions.createdAt}, ${tenantSuppressions.id}) < (${decoded.createdAtText}::timestamp, ${decoded.id}::uuid)`
The helper column is stripped from the response payload before the JSON goes back to the tenant, so the API surface stays clean. The byte-exact round trip happens entirely inside the cursor.
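The mapper side can be sketched like this, assuming (with hypothetical names) that query rows carry the helper column: the last row's createdAtText feeds the next cursor verbatim, and the column is dropped before the payload is serialized.

```typescript
// Hypothetical row shape: the to_char(...) helper column rides alongside
// the real fields for the duration of one request.
interface SuppressionRow {
  id: string;
  recipient: string;
  createdAtText: string; // microsecond-precise, formatted by Postgres
}

// Build the next-page cursor from the last row, then strip the helper
// column so it never reaches the API response.
function buildPage(rows: SuppressionRow[]) {
  const last = rows[rows.length - 1];
  const nextCursor = last
    ? Buffer.from(
        JSON.stringify({ createdAtText: last.createdAtText, id: last.id }),
      ).toString("base64url")
    : null;
  const items = rows.map(({ createdAtText, ...rest }) => rest);
  return { items, nextCursor };
}
```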
The fix is one extra column in the select, one cast in the where clause, one filter in the response mapper. Around fifteen lines of code. The behavior change is that bulk-inserted rows page correctly, even when five of them share a microsecond.
The Clever Fix That Failed Identically
An earlier attempt looked clever. Instead of relying on tuple comparison, I split it into two clauses: rows strictly older OR (rows at the same timestamp AND id strictly less). The Drizzle expression was something like or(lt(createdAt, cursorTs), and(eq(createdAt, cursorTs), lt(id, cursorId))). This is the textbook decomposition of a tuple comparison.
It also did not work, for the same reason. The eq(createdAt, cursorTs) branch compared the microsecond-precise column against the millisecond-truncated cursor, and the equality returned false. The second clause never matched. Same bug, more code.
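The shared failure mode is easy to see with plain values. This is the textbook identity the OR form implements, applied to the same fixed-width timestamp strings as before:

```typescript
// (a, b) < (x, y)  ≡  a < x || (a === x && b < y)
function tupleLess(a: string, b: string, x: string, y: string): boolean {
  return a < x || (a === x && b < y);
}

// Microsecond column vs. millisecond-truncated cursor: the first clause is
// false (the column compares newer), and the equality clause can never fire,
// so the row vanishes in the OR form exactly as it does in the tuple form.
const less = tupleLess(
  "2026-04-26T14:32:18.473251", "row-id-3",
  "2026-04-26T14:32:18.473000", "row-id-4",
);
console.log(less); // false
```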
I would not have noticed the OR-form failure if I had not been writing fresh tests against the seeded fixture. In production, with one row per transaction, both the tuple form and the OR form work. The seed was the only thing that exposed the precision mismatch. If I had written the suppressions endpoint with looser test fixtures (a setTimeout(10) between inserts, for instance) the bug would have shipped, and the first tenant to bulk-import a CSV would have hit it. Five hundred rows in, three hundred would page correctly and the rest would silently disappear from the list view.
The lesson I keep relearning is that test fixtures that simulate the realistic write pattern catch a class of bugs that loose fixtures hide. The seeded db.insert(...).values([...]) is exactly the shape of a bulk import. Writing the test that way was an accident, but it surfaced the bug at development time instead of in production.
Where the Recipe Lands
The fix shipped in batch 018, alongside the rest of the suppressions CRUD surface. The endpoint pages through bulk-inserted rows correctly regardless of how many share a timestamp. The cursor format is base64-encoded JSON containing the createdAtText string and the row id. The query plan uses the existing (tenant_id, created_at DESC, id DESC) index because tuple comparison maps cleanly to an index range scan in Postgres.
Pattern 006 in the project's pattern catalog already documented cursor pagination for the notifications endpoint. Rather than write a new pattern file, I extended 006 in place with the microsecond-precision recipe as a sub-section. The recipe is small enough that promoting it to a standalone pattern would be over-cataloging, but specific enough that the next endpoint that needs cursor pagination over bulk-insertable data can copy the exact to_char format string instead of rediscovering it.
Notifications are still on the millisecond-precision cursor. They have not hit the bug because nothing in the system bulk-inserts notifications. The day a backfill job lands, that endpoint gets the same fifteen-line treatment.
The whole thing is three digits of precision between two systems that both believe they are speaking ISO 8601, and the gap is invisible until five rows show up in the same transaction.