A small, open-source ingestion engine you run on your own stack. Capture events at HTTP speed, buffer through Redis Streams, store in ClickHouse or Postgres — and query DAU, MAU, funnels, and feature usage out of the box.

Six choices that keep ingestion fast, storage flexible, and the analytics layer useful from day one.

Non-blocking ingestion
The capture endpoint writes to a Redis Stream and returns immediately. Your app never waits on the database.
XADD insightplus_events

Flip DATABASE_TYPE=clickhouse or postgres. The worker picks the driver at boot — no code change.
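The boot-time switch can be pictured as a small factory keyed off the env var. This is an illustrative TypeScript sketch, not the project's actual code; `pickDriver` and the driver stubs are invented names:

```typescript
// Minimal shape every storage backend must satisfy.
interface StorageDriver {
  name: string;
  insertEvents(rows: object[]): Promise<void>;
}

// Stub drivers; the real ones would batch-insert into each store.
const clickhouseDriver: StorageDriver = {
  name: "clickhouse",
  async insertEvents(rows) { /* batch INSERT into ClickHouse */ },
};

const postgresDriver: StorageDriver = {
  name: "postgres",
  async insertEvents(rows) { /* multi-row INSERT into Postgres */ },
};

// Resolved once at worker boot — switching stores is an env change only.
export function pickDriver(
  env: Record<string, string | undefined>,
): StorageDriver {
  switch ((env.DATABASE_TYPE ?? "clickhouse").toLowerCase()) {
    case "postgres":
      return postgresDriver;
    case "clickhouse":
      return clickhouseDriver;
    default:
      throw new Error(`Unknown DATABASE_TYPE: ${env.DATABASE_TYPE}`);
  }
}
```

Resolving the driver once at boot, rather than per request, is what makes the switch free at runtime.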
DAU, MAU, conversion funnel, most-used feature, and power users — exposed as REST endpoints, no SQL required.
/api/v1/analytics/*

A consumer group drains up to 200 events per cycle and ACKs the stream — durable, idempotent, restart-safe.
COUNT 200 · XACK

A single VALID_EVENTS list rejects garbage at the edge so your analytics never has to clean it up later.
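The allowlist check amounts to a one-predicate gate at the capture edge. A minimal TypeScript sketch, assuming an invented `VALID_EVENTS` set and helper name:

```typescript
// Illustrative allowlist; the real list contents live in the project.
const VALID_EVENTS = new Set([
  "user_registered",
  "payment_success",
  "feature_used",
]);

export interface CaptureBody {
  event?: unknown;
  distinctId?: unknown;
}

// Reject garbage before it ever reaches the stream.
export function isValidEvent(body: CaptureBody): boolean {
  return (
    typeof body.event === "string" &&
    VALID_EVENTS.has(body.event) &&
    typeof body.distinctId === "string" &&
    body.distinctId.length > 0
  );
}
```

Filtering at the edge keeps every downstream table free of unknown event names, so roll-up queries never need defensive WHERE clauses.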
One docker compose up -d brings up Redis, ClickHouse, and Postgres. Local in under a minute.
A capture endpoint and a handful of analytics endpoints. Each example below shows the request and the shape that comes back.
// Body — fire-and-forget event ingestion
{
  "event": "payment_success",
  "distinctId": "user_123",
  "properties": { "amount": 499, "currency": "INR", "plan": "pro" },
  "timestamp": "2026-05-13T10:24:00Z"
}

// Response (200 OK)
{
  "status": "queued",
  "event": "payment_success",
  "eventId": "3b1d7f4e-…"
}
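On the caller's side, the body above is easy to assemble with a tiny helper. This sketch only mirrors the field names in the example; `buildCaptureEvent` itself is hypothetical, not part of the project's API:

```typescript
// Shape of the capture body, matching the example payload.
export interface CaptureEvent {
  event: string;
  distinctId: string;
  properties?: Record<string, unknown>;
  timestamp: string; // ISO 8601
}

// Defaults the timestamp to "now" so callers can fire-and-forget.
export function buildCaptureEvent(
  event: string,
  distinctId: string,
  properties?: Record<string, unknown>,
  timestamp: Date = new Date(),
): CaptureEvent {
  return { event, distinctId, properties, timestamp: timestamp.toISOString() };
}
```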
// Every metric in a single roll-up payload
{
  "totalUsers": 12480,
  "totalEvents": 4823910,
  "dau": [{ "date": "2026-05-12", "users": 3210 }, …],
  "mau": [{ "month": "2026-05", "users": 9881 }, …],
  "mostUsedFeature": [{ "feature": "dashboard", "count": 84120 }, …],
  "conversionFunnel": { "registered": 12480, "paid": 2104, "rate": "16.86%" },
  "mostActiveUsers": [{ "distinctId": "user_52", "events": 1872 }, …]
}
// Daily Active Users — last N days, default 30
[
  { "date": "2026-05-13", "users": 3284 },
  { "date": "2026-05-12", "users": 3210 },
  { "date": "2026-05-11", "users": 3098 },
  { "date": "2026-05-10", "users": 2841 }
  // …
]
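The DAU series is a distinct-count of users per calendar day. In production this runs as a ClickHouse or Postgres query; the same logic in miniature, as a TypeScript sketch with illustrative names:

```typescript
interface RawEvent {
  distinctId: string;
  timestamp: string; // ISO 8601
}

// Group events by day, then count unique distinctIds per day.
export function dailyActiveUsers(
  events: RawEvent[],
): { date: string; users: number }[] {
  const byDay = new Map<string, Set<string>>();
  for (const e of events) {
    const day = e.timestamp.slice(0, 10); // YYYY-MM-DD
    if (!byDay.has(day)) byDay.set(day, new Set());
    byDay.get(day)!.add(e.distinctId);
  }
  return [...byDay.entries()]
    .sort(([a], [b]) => b.localeCompare(a)) // newest day first, as in the API
    .map(([date, users]) => ({ date, users: users.size }));
}
```

The `Set` per day is what makes a user who fires ten events still count once.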
// Registered → Paid conversion
{ "registered": 12480, "paid": 2104, "rate": "16.86%" }
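The funnel payload is just two counts plus a derived percentage rounded to two decimals. A sketch of that derivation (the function name is invented):

```typescript
// 2104 / 12480 ≈ 0.168590 → "16.86%"
export function conversionFunnel(registered: number, paid: number) {
  const rate =
    registered === 0 ? "0.00%" : ((paid / registered) * 100).toFixed(2) + "%";
  return { registered, paid, rate };
}
```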
Clone, install, bring up Redis + DB with Docker, start the server. The whole loop fits in a terminal pane.
The capture API is a NestJS service (Express under the hood), the worker is a long-lived Node process consuming the Redis Stream, and storage is hot-swappable between ClickHouse (default) and PostgreSQL via a single env var.
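The worker's cycle (drain up to 200, store, then ACK) can be sketched with the Redis calls injected, so the ordering is visible without a live server. In the real worker, `readBatch` and `ack` would wrap XREADGROUP and XACK; all names here are illustrative:

```typescript
type StreamEntry = { id: string; fields: Record<string, string> };

// Injected stream operations; real implementations call Redis.
export interface StreamOps {
  readBatch(count: number): Promise<StreamEntry[]>;
  ack(ids: string[]): Promise<void>;
}

export interface Store {
  insertEvents(entries: StreamEntry[]): Promise<void>;
}

// One drain cycle: returns how many events were stored and acknowledged.
export async function drainOnce(
  stream: StreamOps,
  store: Store,
  count = 200,
): Promise<number> {
  const batch = await stream.readBatch(count);
  if (batch.length === 0) return 0;
  await store.insertEvents(batch);          // write first…
  await stream.ack(batch.map((e) => e.id)); // …ACK only after success
  return batch.length;
}
```

ACKing only after the insert succeeds is what makes a crash safe: unacknowledged entries stay in the consumer group's pending list and are re-delivered on restart, so inserts must be idempotent.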