v1.0 · NestJS · ClickHouse · PostgreSQL

A self-hostable event analytics engine that scales with you.

A small, open-source ingestion engine you run on your own stack. Capture events at HTTP speed, buffer through Redis Streams, store in ClickHouse or Postgres — and query DAU, MAU, funnels, and feature usage out of the box.

Non-blocking capture · Redis-buffered ingestion · Pluggable storage · Batched worker writes

01 · Capture: POST /capture (HTTP, validated)
02 · Buffer: Redis Stream insightplus_events
03 · Worker: consumer group, batched reads + ACK
04 · Storage: ClickHouse / Postgres via DATABASE_TYPE
05 · Query: GET /analytics (DAU · MAU · funnels)

Ingest path → decoupled, durable buffer → read path

A small engine, with the parts that matter.

Six choices that keep ingestion fast, storage flexible, and the analytics layer useful from day one.

01

Non-blocking ingestion

The capture endpoint writes to a Redis Stream and returns immediately. Your app never waits on the database.

XADD insightplus_events
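A minimal sketch of what that capture path can look like. This is an assumed shape, not the project's actual code: the stream name comes from the docs above, while the field layout and the injected `StreamClient` interface are illustrative (ioredis's `xadd` matches this signature).

```typescript
import { randomUUID } from "crypto";

interface CaptureEvent {
  event: string;
  distinctId: string;
  properties?: Record<string, unknown>;
  timestamp?: string;
}

// Anything with an xadd method works here; an ioredis client fits.
interface StreamClient {
  xadd(stream: string, id: string, ...fieldValues: string[]): Promise<string | null>;
}

// Flatten an event into the alternating field/value pairs XADD expects.
export function toStreamFields(e: CaptureEvent, eventId: string): string[] {
  return [
    "eventId", eventId,
    "event", e.event,
    "distinctId", e.distinctId,
    "properties", JSON.stringify(e.properties ?? {}),
    "timestamp", e.timestamp ?? new Date().toISOString(),
  ];
}

// Enqueue and return immediately: no database sits in the hot path.
export async function capture(redis: StreamClient, e: CaptureEvent) {
  const eventId = randomUUID();
  await redis.xadd("insightplus_events", "*", ...toStreamFields(e, eventId));
  return { status: "queued", event: e.event, eventId };
}
```

Injecting the client keeps the queueing logic testable without a live Redis server.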
02

Pluggable storage

Flip DATABASE_TYPE=clickhouse or postgres. The worker picks the driver at boot — no code change.

DATABASE_TYPE=clickhouse
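One way that boot-time selection can be wired, sketched here as an assumption about the worker's internals rather than its real interface: both drivers satisfy a single store contract, so nothing downstream cares which one is active.

```typescript
interface EventRecord {
  eventId: string;
  event: string;
  distinctId: string;
  properties: Record<string, unknown>;
  timestamp: string;
}

interface EventStore {
  readonly kind: "clickhouse" | "postgres";
  insertBatch(events: EventRecord[]): Promise<void>;
}

class ClickHouseStore implements EventStore {
  readonly kind = "clickhouse";
  async insertBatch(events: EventRecord[]) {
    // a @clickhouse/client bulk insert would go here
  }
}

class PostgresStore implements EventStore {
  readonly kind = "postgres";
  async insertBatch(events: EventRecord[]) {
    // a pg multi-row INSERT would go here
  }
}

// One env var decides the driver at boot; switching needs no code change.
export function createStore(
  databaseType: string | undefined = process.env.DATABASE_TYPE,
): EventStore {
  switch (databaseType) {
    case "postgres":
      return new PostgresStore();
    case "clickhouse":
    case undefined:
      return new ClickHouseStore(); // ClickHouse is the documented default
    default:
      throw new Error(`Unknown DATABASE_TYPE: ${databaseType}`);
  }
}
```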
03

Built-in analytics

DAU, MAU, conversion funnel, most-used feature, and power users — exposed as REST endpoints, no SQL required.

/api/v1/analytics/*
04

Batched writes

A consumer group drains up to 200 events per cycle and ACKs the stream — durable, idempotent, restart-safe.

COUNT 200 · XACK
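One drain cycle of that worker, sketched with injected interfaces so the batch-then-ACK flow is visible without a live Redis. The group and consumer names are illustrative; the `COUNT 200` and the ACK-after-insert ordering follow the description above (crash mid-cycle and the pending entries are simply redelivered).

```typescript
type StreamEntry = [id: string, fields: string[]];

interface StreamConsumer {
  xreadgroup(...args: (string | number)[]): Promise<[string, StreamEntry[]][] | null>;
  xack(stream: string, group: string, ...ids: string[]): Promise<number>;
}

interface BatchSink {
  insertBatch(rows: Record<string, string>[]): Promise<void>;
}

// Turn the flat field/value array Redis returns into a plain object.
export function entryToRow(fields: string[]): Record<string, string> {
  const row: Record<string, string> = {};
  for (let i = 0; i < fields.length; i += 2) row[fields[i]] = fields[i + 1];
  return row;
}

// Read up to 200 new entries, write them as one batch, then ACK.
// ACK happens only after a successful insert, which is what makes a
// restart safe: unacknowledged entries stay pending and come back.
export async function drainOnce(redis: StreamConsumer, store: BatchSink): Promise<number> {
  const reply = await redis.xreadgroup(
    "GROUP", "insight_workers", "worker-1",
    "COUNT", 200, "STREAMS", "insightplus_events", ">",
  );
  if (!reply || reply.length === 0) return 0;
  const entries = reply[0][1];
  await store.insertBatch(entries.map(([, fields]) => entryToRow(fields)));
  await redis.xack("insightplus_events", "insight_workers", ...entries.map(([id]) => id));
  return entries.length;
}
```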
05

Typed event allowlist

A single VALID_EVENTS list rejects garbage at the edge, so your analytics layer never has to clean it up later.

events.config.ts
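A sketch of what an `events.config.ts` allowlist might contain. The event names below are illustrative, not the project's real list; the `as const` + type-predicate pattern is one idiomatic way to get both runtime rejection and compile-time narrowing from the same array.

```typescript
// Illustrative allowlist; the real project defines its own names.
export const VALID_EVENTS = [
  "registered",
  "payment_success",
  "dashboard_viewed",
  "feature_used",
] as const;

export type ValidEvent = (typeof VALID_EVENTS)[number];

// Reject unknown names at the edge: the 400 happens before Redis is touched.
export function isValidEvent(name: string): name is ValidEvent {
  return (VALID_EVENTS as readonly string[]).includes(name);
}
```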
06

Docker-first dev loop

One docker compose up -d brings up Redis, ClickHouse, and Postgres. A full local stack in under a minute.

docker compose up -d

REST in, REST out.

A capture endpoint and a handful of analytics endpoints. Switch tabs to see the request and the shape that comes back.

POST /api/v1/capture
// Body — fire-and-forget event ingestion
{
  "event": "payment_success",
  "distinctId": "user_123",
  "properties": {
    "amount": 499,
    "currency": "INR",
    "plan": "pro"
  },
  "timestamp": "2026-05-13T10:24:00Z"
}

// Response (200 OK)
{
  "status": "queued",
  "event": "payment_success",
  "eventId": "3b1d7f4e-…"
}
GET  /api/v1/analytics/all
// Every metric in a single roll-up payload
{
  "totalUsers": 12480,
  "totalEvents": 4823910,
  "dau": [{ "date": "2026-05-12", "users": 3210 }, …],
  "mau": [{ "month": "2026-05", "users": 9881 }, …],
  "mostUsedFeature": [{ "feature": "dashboard", "count": 84120 }, …],
  "conversionFunnel": {
    "registered": 12480,
    "paid": 2104,
    "rate": "16.86%"
  },
  "mostActiveUsers": [{ "distinctId": "user_52", "events": 1872 }, …]
}
GET  /api/v1/analytics/dau?days=30
// Daily Active Users — last N days, default 30
[
  { "date": "2026-05-13", "users": 3284 },
  { "date": "2026-05-12", "users": 3210 },
  { "date": "2026-05-11", "users": 3098 },
  { "date": "2026-05-10", "users": 2841 }
  // …
]
GET  /api/v1/analytics/conversion-funnel
// Registered → Paid conversion
{
  "registered": 12480,
  "paid": 2104,
  "rate": "16.86%"
}
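The funnel payload above can be derived from just the two counts. This is a sketch under one assumption: the rate is paid over registered, rendered to two decimal places.

```typescript
// Derive the conversion-funnel payload from two counts.
// Assumes rate = paid / registered, formatted to two decimals.
export function conversionFunnel(registered: number, paid: number) {
  const rate = registered === 0 ? 0 : (paid / registered) * 100;
  return { registered, paid, rate: `${rate.toFixed(2)}%` };
}
```

With the numbers from the example, 2104 of 12480 registered users paid, which rounds to the 16.86% shown.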

Running locally, in four commands.

Clone, install, bring up Redis + DB with Docker, start the server. The whole loop fits in a terminal pane.

~/analytics-engine — zsh
$ git clone git@github.com:Shobhnik13/analytics-engine.git
$ cd server && npm install
$ docker compose up -d
↳ redis · clickhouse · postgres ready
$ npm run start:dev
↳ http://localhost:7002

Tech stack

NestJS 11 · Redis 7 · ClickHouse · PostgreSQL 16 · ioredis · Pino · Docker · TypeScript

Capture runs on Express via NestJS, the worker is a long-lived Node process consuming the Redis Stream, and storage is hot-swappable between ClickHouse (default) and PostgreSQL via a single env var.