
Event-Driven Notification Hub

6 min read · Kingsley Onoh · View on GitHub


Every project in the portfolio needs notifications. The first two projects each wired their own Resend integration, their own template rendering, their own opt-out logic. The third was about to repeat the cycle when the cost of duplication became impossible to ignore. This system consolidates notification infrastructure into a single multi-tenant service. Projects fire events over HTTP with an API key. The Hub matches those events against configurable rules, renders Handlebars templates, enforces user preferences, and dispatches across four channels. One deployment, one Resend domain, unlimited consumers.

The production VPS has 1GB of RAM shared across all services, which eliminated anything memory-heavy from the stack.
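The flow described above — match rules, enforce preferences, dispatch — can be sketched in a few lines. All types and names here are illustrative, not the Hub's actual code:

```typescript
// Illustrative event envelope and pipeline stages (names are assumptions).
type HubEvent = { event_type: string; event_id: string; payload: Record<string, unknown> };
type Channel = "email" | "telegram" | "in_app" | "sms";

interface Rule {
  eventType: string;   // matched against event.event_type
  channels: Channel[]; // where to dispatch if the rule fires
  templateId: string;  // Handlebars template stored in PostgreSQL
}

// Stage 1: match an incoming event against a tenant's configured rules.
function matchRules(event: HubEvent, rules: Rule[]): Rule[] {
  return rules.filter((r) => r.eventType === event.event_type);
}

// Stage 2: enforce per-user opt-outs before dispatching.
function allowedChannels(rule: Rule, optOuts: ReadonlySet<Channel>): Channel[] {
  return rule.channels.filter((c) => !optOuts.has(c));
}
```

Template rendering and the actual channel sends sit between these two stages in the real pipeline.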

System Topology

Infrastructure Decisions

  • Runtime: Node.js 22 LTS with TypeScript 5.x in strict mode. Chose TypeScript over Go or Python because the notification pipeline is I/O-bound (HTTP calls to Resend, Telegram Bot API, database queries), not CPU-bound. TypeScript's async/await model handles concurrent channel dispatch without goroutines or thread pools, and the ecosystem (KafkaJS, Drizzle, Fastify) has first-class TypeScript support.
  • Framework: Fastify 5.x. Chose over Express because Fastify's plugin encapsulation model keeps multi-tenant middleware (API key auth, admin auth, rate limiting) scoped without leaking state between plugins. The fastify-plugin wrapper breaks encapsulation only where explicitly needed.
  • Data Layer: PostgreSQL 16 with Drizzle ORM across 7 tables. Chose Drizzle over Prisma because Drizzle generates SQL directly from TypeScript expressions, ships no query engine binary, and keeps migration files in sync with the schema definition. Chose PostgreSQL over SQLite because the system needs concurrent writes from HTTP requests, background jobs, and WebSocket acknowledgments.
  • Message Broker: KafkaJS with Redpanda for local development. Disabled in production via a USE_KAFKA environment flag. Redpanda is configured at 256MB locally but needs 150-200MB resident memory. The production VPS can't spare that budget. Events arrive via HTTP, so the Kafka round-trip adds latency without benefit at current throughput. The consumer code remains in the repository for future horizontal scaling.
  • Templating: Handlebars 4.x compiled from strings stored in PostgreSQL. Chose over React Email because templates are created and modified at runtime via REST API. React Email requires a build step. Handlebars compiles from a plain string in memory with zero toolchain dependencies.
  • Email: Resend API with per-tenant credentials stored in tenants.config.channels JSONB. One verified Resend domain serves all tenants by varying the sender address. Zero DNS changes per onboarded project.
  • WebSocket: @fastify/websocket with an in-memory Map<string, Set<WebSocket>> keyed by tenantId:userId. Chose over Socket.IO because the Hub needs only push and acknowledge, not rooms or namespaces. The tradeoff: no multi-instance sync. WebSocket connections live on a single process. At scale, this would need Redis Pub/Sub.
  • Validation: Zod v4 for request bodies, query parameters, and environment variable parsing. A single Zod schema file (src/api/schemas.ts) defines validation for every API endpoint. The config loader uses the same Zod pattern, producing readable startup failures when environment variables are missing or malformed.
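The I/O-bound rationale behind the runtime choice comes down to this: concurrent channel dispatch is a single Promise.allSettled over per-channel senders. A minimal sketch, with stubbed senders standing in for the Resend and Telegram calls:

```typescript
type Channel = "email" | "telegram" | "in_app" | "sms";

// Stubbed per-channel senders; the real service would call Resend,
// the Telegram Bot API, the WebSocket registry, and the SMS log stub.
type Sender = (userId: string, body: string) => Promise<void>;

const senders: Record<Channel, Sender> = {
  email: async () => { /* await resend.emails.send(...) */ },
  telegram: async () => { /* await bot API sendMessage */ },
  in_app: async () => { /* push over WebSocket */ },
  sms: async () => { /* log-only stub */ },
};

// Dispatch one notification across several channels concurrently.
// allSettled means one failing channel never blocks the others.
async function dispatch(channels: Channel[], userId: string, body: string) {
  const results = await Promise.allSettled(
    channels.map((c) => senders[c](userId, body)),
  );
  return results.map((r, i) => ({ channel: channels[i], ok: r.status === "fulfilled" }));
}
```

No goroutines, no thread pool: the event loop overlaps the HTTP round-trips for free.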
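The in-memory WebSocket registry from the bullet above is essentially a keyed map of socket sets. A sketch under the same assumptions (a minimal Socket interface stands in for the real WebSocket type):

```typescript
// Minimal stand-in for the ws WebSocket type; only send() matters here.
interface Socket { send(data: string): void }

// Connections keyed by `${tenantId}:${userId}`; a Set allows multiple tabs.
const connections = new Map<string, Set<Socket>>();

function register(tenantId: string, userId: string, ws: Socket) {
  const key = `${tenantId}:${userId}`;
  if (!connections.has(key)) connections.set(key, new Set());
  connections.get(key)!.add(ws);
}

// Push a payload to every open socket for one user; returns delivery count.
function push(tenantId: string, userId: string, payload: unknown): number {
  const sockets = connections.get(`${tenantId}:${userId}`);
  if (!sockets) return 0;
  for (const ws of sockets) ws.send(JSON.stringify(payload));
  return sockets.size;
}
```

Because the Map lives in one process, two app instances would each see only their own connections — hence the Redis Pub/Sub caveat.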

Constraints That Shaped the Design

  • Input: JSON events via POST /api/events with X-API-Key header. Each API key maps to a tenant. The event envelope: { event_type, event_id, payload }. Kafka topic subscription (events.*) available for local development or async processing.
  • Output: Email via Resend API, Telegram via Bot API sendMessage, in-app via WebSocket push, SMS via log-only stub.
  • Scale Handled: A handful of tenants at low event volume. The 128MB container processes events in single-digit milliseconds. At sustained throughput above ~100 events/second, the sequential pipeline would need the Kafka path re-enabled for async processing and back-pressure management.
  • Hard Constraints: 128MB container memory limit in production. 60-minute deduplication window (configurable). 90-day notification retention before automated cleanup. 50 notifications maximum per digest email with truncation. Rate limits: 10 events/minute per tenant on the HTTP endpoint, 200/minute on management routes.
  • Monitoring Boundaries: Consumer lag alerts at 500+ messages behind (Kafka mode only). Email failure rate warning when above 20% in a sliding 1-hour window. Health endpoint reports PostgreSQL connectivity, Kafka broker reachability (if enabled), and Resend API status. BetterStack polls /api/health externally.
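The 60-minute deduplication window amounts to checking each event_id against a last-seen timestamp. The mechanism below is an assumption — the brief states only the window, and the real Hub presumably checks PostgreSQL rather than an in-memory map:

```typescript
const DEDUP_WINDOW_MS = 60 * 60 * 1000; // 60 minutes, configurable

const seen = new Map<string, number>(); // event_id -> last-seen epoch ms

// Returns true if the event is new or its last sighting has aged out.
function shouldProcess(eventId: string, now: number = Date.now()): boolean {
  const last = seen.get(eventId);
  if (last !== undefined && now - last < DEDUP_WINDOW_MS) return false;
  seen.set(eventId, now);
  return true;
}
```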

Decision Log

  • Fastify plugin model over Express middleware (alternative rejected: Express). Fastify plugins scope state by default. Multi-tenant auth, admin auth, and rate limiting stay isolated. Express middleware shares a flat pipeline where one misconfigured handler can leak context to all routes.
  • Application-level tenant isolation over PostgreSQL RLS (alternative rejected: Row-Level Security). Every query already filters by tenant_id via the auth middleware that injects request.tenantId. RLS would add per-query policy evaluation for a constraint that's already enforced at the application layer. Upgrade path: one migration to enable RLS, no code rewrite.
  • Direct HTTP processing in production, Kafka in development (alternative rejected: Kafka everywhere). Redpanda needs 150-200MB resident. The VPS has 1GB shared across PostgreSQL, Traefik, and the app container. Events arrive via HTTP, so Kafka adds a round-trip without adding value at current volume. The flag is one environment variable.
  • Handlebars string compilation over React Email (alternatives rejected: React Email, MJML). Templates live in PostgreSQL rows and change via REST API at runtime. Both React Email and MJML require a build or compilation step. Handlebars compiles a string to a function in memory, which fits a database-driven template model.
  • JSONB config column over normalized config tables (alternative rejected: per-channel config tables). tenants.config.channels stores Resend and Telegram credentials in one JSONB column. Adding a new channel means adding a key, not running a migration. Validation happens at read time via Zod schemas in resolveTenantChannelConfig().
  • Digest queue marked sent on send failure (alternative rejected: retry on next cycle). A retry-on-failure digest model sends duplicate emails. If the send fails and the queue retries next hour, the user gets repeated copies. Marking as sent on failure loses one digest but prevents runaway duplicates.
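The digest tradeoff — mark sent even when the send fails — can be illustrated with a toy queue entry. This is illustrative code under stated assumptions, not the Hub's implementation:

```typescript
interface DigestEntry { userId: string; notifications: string[]; sent: boolean }

const MAX_PER_DIGEST = 50; // truncation limit from the hard constraints

// Attempt a digest send; mark sent unconditionally so a failed send
// loses one digest instead of producing duplicates on the next cycle.
async function flushDigest(entry: DigestEntry, send: (body: string[]) => Promise<void>) {
  if (entry.sent) return;
  const body = entry.notifications.slice(0, MAX_PER_DIGEST);
  try {
    await send(body);
  } catch {
    // Swallow the failure: the entry is still marked sent below.
  } finally {
    entry.sent = true;
  }
}
```

The finally block is the whole decision: delivery is at-most-once per digest cycle, by design.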
#typescript #fastify #postgresql #kafka #websocket #handlebars
