Trigger.dev vs BullMQ
Every production app needs background jobs. BullMQ is the most popular way to run them in Node.js: 14M+ monthly downloads, MIT-licensed, and backed by Redis for raw throughput. Trigger.dev is a managed compute platform where those same jobs get durability, observability, and auto-scaling with nothing to provision. Choose BullMQ for lightweight queuing with full control over your stack. Choose Trigger.dev for TypeScript teams that want background jobs without managing Redis, workers, or scaling.
What is Trigger.dev?
Trigger.dev is a managed compute platform for building AI agents and workflows in TypeScript. Your code runs as normal TypeScript with no determinism constraints and no execution timeout. It includes built-in retries, queues, scheduling, and OpenTelemetry observability with custom dashboards and a SQL-style query language. Tasks can stream data to your frontend in real-time, receive typed input while running, and pause indefinitely for human approval or external events. Thousands of engineering teams run production workloads on it, processing millions of runs.
What is BullMQ?
BullMQ is a Redis-backed job queue library for Node.js with 14M+ monthly downloads, making it the most widely used background job solution in the JavaScript ecosystem. It supports delayed jobs, cron scheduling, job flows (parent-child dependencies), rate limiting, automatic retries with exponential backoff, priorities, sandboxed workers, and OpenTelemetry support (via official adapter). BullMQ is compatible with Redis, Valkey, DragonflyDB, ElastiCache (node-based clusters only, not serverless), and Upstash.
Feature deep-dive
How does developer experience and setup compare?
BullMQ gets you running in minutes with npm install plus a Redis connection. Trigger.dev gets you running with a CLI command and zero infrastructure to manage afterward.
| Feature | Trigger.dev | BullMQ |
|---|---|---|
| What you install | npm package + CLI | npm package + Redis server |
| Infrastructure | Managed compute (Cloud) or self-hosted (Docker/K8s) | Redis + your Node.js worker processes |
| Time to first job | ~5 minutes (CLI deploy) | ~10 minutes (with existing Redis) |
| Worker management | Managed by platform | Your Node.js processes (PM2, Docker, K8s) |
| Scaling | Automatic | Add more worker processes |
| Dashboard | Dashboard with OpenTelemetry traces, logs, custom dashboards, TRQL query language, AI assistant, and alerting | BullBoard (community), Taskforce.sh (commercial), Bullstudio (community) |
| Child tasks | triggerAndWait / batchTriggerAndWait | Job flows via FlowProducer (unlimited nesting) |
| Delayed execution | delay option (duration string or date) | Delayed jobs (duration or timestamp) |
| Local development | Hot reload via CLI (internet required) | Fully offline with local Redis |
| Payload validation | schemaTask with Zod, ArkType, or TypeBox for runtime-validated payloads | No built-in validation |
| AI coding tools | MCP server (docs, metrics, write tasks, trigger, monitor, deploy), agent rules, skills, llms.txt | No equivalent |
BullMQ's setup takes minutes: add the npm package, point it at Redis, and start processing jobs. The library gives you full control over workers, concurrency, and job lifecycle, and local development works fully offline. Trigger.dev takes a different approach: npx trigger.dev@latest dev starts local development, npx trigger.dev@latest deploy ships to production, and the platform handles workers, scaling, and monitoring. The tradeoff is control versus operational overhead. Trigger.dev also ships an MCP server that lets AI editors deploy, trigger tasks, and monitor runs. Agent rules add code generation guidance for Claude Code, Cursor, Windsurf, VS Code, Zed, Gemini CLI, and Cline.
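The two setup models look like this in code. Both snippets are sketches: the queue name, Redis host, and email logic are illustrative, and the Trigger.dev import path follows its v3 SDK docs.

```typescript
// BullMQ: a queue plus a worker you run and scale yourself.
// Assumes a Redis instance at localhost:6379.
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 };
const emails = new Queue("emails", { connection });

// The worker lives in a Node.js process you manage (PM2, Docker, K8s).
new Worker(
  "emails",
  async (job) => {
    console.log(`sending welcome email to ${job.data.to}`);
  },
  { connection, concurrency: 5 }
);

await emails.add("welcome", { to: "user@example.com" });
```

```typescript
// Trigger.dev: the same job as a task. No Redis and no worker process;
// `npx trigger.dev@latest deploy` ships it to managed compute.
import { task } from "@trigger.dev/sdk/v3";

export const sendWelcomeEmail = task({
  id: "send-welcome-email",
  run: async (payload: { to: string }) => {
    console.log(`sending welcome email to ${payload.to}`);
  },
});
```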
How do concurrency and rate limiting compare?
BullMQ provides per-worker concurrency, global rate limiting, and per-group controls (Pro). Trigger.dev provides per-queue and per-entity concurrency, debounce, idempotency keys, and dynamic overrides at trigger time.
| Feature | Trigger.dev | BullMQ |
|---|---|---|
| Per-worker concurrency | Per-queue concurrency limits | Concurrency option per worker instance |
| Global rate limiting | Concurrency-based (no time-based rate limiting) | Per-second/minute/hour rate limiter |
| Per-entity controls | concurrencyKey (e.g., per-user queues) | Groups with round-robin (Pro tier) |
| Dynamic overrides | Override queue and concurrency at trigger time | Set at queue/worker creation |
| Job priorities | Per-run priority at trigger time | Numeric priority per job (lower = higher) |
| Debounce | Built-in with leading/trailing modes and configurable delay | Part of deduplication (3 modes: simple, throttle, debounce) |
| Deduplication | Idempotency keys with configurable TTL | 3 modes: simple, throttle, and debounce |
| Processing order | First-in-first-out (FIFO) | First-in-first-out (FIFO) and last-in-first-out (LIFO) |
BullMQ's concurrency model is granular: set concurrency per worker instance, apply time-based rate limits (10 jobs per second), and use priorities to control processing order. BullMQ Pro extends this with group-level concurrency and round-robin processing across groups. Trigger.dev uses concurrencyKey to create per-entity queues (one concurrent job per user, per tenant, or per resource), and lets you override queue assignment at trigger time for patterns like different limits for free vs paid users. Trigger.dev also has built-in debounce (leading and trailing modes with configurable delay) and idempotency keys with TTL to prevent duplicate runs. Trigger.dev does not have time-based rate limiting, only concurrency-based controls.
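A sketch of the two approaches, with illustrative values. The BullMQ limiter is the library's documented worker option; the Trigger.dev snippet assumes a hypothetical `syncTask` whose queue has a concurrency limit of one.

```typescript
// BullMQ: time-based rate limiting and concurrency set on the worker.
import { Worker } from "bullmq";

new Worker(
  "api-calls",
  async (job) => {
    /* call a third-party API */
  },
  {
    connection: { host: "localhost", port: 6379 },
    concurrency: 5,                       // 5 jobs in flight per worker
    limiter: { max: 10, duration: 1000 }, // at most 10 jobs per second
  }
);
```

```typescript
// Trigger.dev: per-entity queues via concurrencyKey at trigger time.
// With a queue concurrency limit of 1, each user gets one run at a time.
await syncTask.trigger({ userId }, { concurrencyKey: userId });
```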
How do reliability and failure recovery compare?
BullMQ reliability depends on your Redis persistence setup and retry configuration. Trigger.dev provides durable execution via checkpoint-resume, so tasks survive crashes and restarts automatically.
| Feature | Trigger.dev | BullMQ |
|---|---|---|
| Failure recovery | Checkpoint-resume (durable execution) | Redis persistence + automatic retries |
| Data loss risk | None (checkpoints are durable) | Depends on Redis config (AOF: ~1s loss, RDB: minutes, none: all lost) |
| Retry mechanism | Built-in with configurable backoff | Built-in with configurable backoff |
| Long-running jobs | No timeout (hours, days, weeks) | No hard limit (stalled detection at 30s lock, configurable) |
| Delivery guarantee | At-least-once (with idempotency keys to prevent duplicates) | At-least-once (idempotency is your responsibility) |
BullMQ's reliability comes from Redis persistence and its built-in retry mechanism. AOF with appendfsync always gives strong queue reliability at the cost of throughput. RDB snapshots are faster but can lose recent data. For teams already running Redis with proper persistence, BullMQ queues are reliable. Trigger.dev is a different category: durable execution. Checkpoint-resume snapshots your entire process state at wait points and restores it on recovery. Your code picks up exactly where it left off, not just the job getting retried from scratch.
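Both systems expose the same retry shape as configuration: exponential backoff with a cap. A pure sketch of the delay schedule (the base and cap values are illustrative, not either library's defaults):

```typescript
// Delay before retry attempt n: base * 2^(n-1), capped at maxMs.
function backoffMs(attempt: number, baseMs = 1000, maxMs = 30_000): number {
  return Math.min(baseMs * 2 ** (attempt - 1), maxMs);
}

const schedule = [1, 2, 3, 4, 5, 6].map((n) => backoffMs(n));
console.log(schedule); // [1000, 2000, 4000, 8000, 16000, 30000]
```

In BullMQ this maps to job options like `attempts: 5` with `backoff: { type: "exponential", delay: 1000 }`; in Trigger.dev it maps to the task's retry settings (max attempts, min/max timeout, factor).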
How do AI agent capabilities compare?
BullMQ processes AI tasks as regular jobs with no new concepts to learn. Trigger.dev adds AI-specific features: real-time streaming, human-in-the-loop, and no execution timeout.
| Feature | Trigger.dev | BullMQ |
|---|---|---|
| AI framework support | Any TS framework (AI SDK, Mastra, LangGraph.js, etc.). ai.tool() exposes tasks as LLM-callable tools | Any framework (it's a queue, not an AI platform) |
| Real-time streaming | Realtime Streams (SSE to frontend and backend) | Job progress events (numeric or custom object) |
| Human-in-the-loop | Waitpoints (pause indefinitely for approval) | No built-in equivalent |
| Bidirectional communication | Input streams (typed data into running tasks) | No built-in equivalent |
| Max execution time | No limit | No hard limit (tune stalledInterval for long jobs) |
| Stalled job detection | Heartbeat-based (managed by platform) | 30s default lock, adjustable via lockDuration setting |
| Build customization | Build extensions install packages, SDKs, and system deps at build time (Prisma, FFmpeg, Playwright, Python, custom) | Your Dockerfile or package manager |
BullMQ processes AI tasks the same way it processes any other job: no new concepts, no additional APIs. Long-running LLM calls work well with tuned lockDuration and stalledInterval settings. Job progress events provide status updates to callers. Trigger.dev adds features designed for AI workflows: Realtime Streams push tokens to your frontend over SSE, input streams send typed data into running tasks (cancel signals, approvals, user messages), and Waitpoints pause tasks indefinitely for human review. The managed compute model handles stalled detection automatically via heartbeats, so there's nothing to tune for long-running calls.
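For BullMQ, the progress-event and lock-tuning pattern looks like this (a sketch: `callLLM` is a hypothetical helper, and the Redis connection is assumed local):

```typescript
import { Worker, QueueEvents } from "bullmq";

const connection = { host: "localhost", port: 6379 };

new Worker(
  "summarize",
  async (job) => {
    await job.updateProgress(25); // e.g. document fetched
    const summary = await callLLM(job.data.text); // hypothetical helper
    await job.updateProgress(100);
    return { summary };
  },
  {
    connection,
    lockDuration: 120_000, // extend the lock so a slow LLM call isn't marked stalled
  }
);

// Callers can subscribe to progress updates from another process.
const events = new QueueEvents("summarize", { connection });
events.on("progress", ({ jobId, data }) => {
  console.log(`job ${jobId}: ${data}%`);
});
```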
How does observability and debugging compare?
BullMQ provides an OpenTelemetry adapter and a community dashboard ecosystem. Trigger.dev includes built-in and custom dashboards, a SQL-style query language (TRQL), OpenTelemetry tracing, and structured logs for every run.
| Feature | Trigger.dev | BullMQ |
|---|---|---|
| Dashboard | Built-in dashboard with run explorer, custom dashboards, and widgets (charts, tables, big numbers) | BullBoard (community), Taskforce.sh (commercial), Bullstudio |
| Query language | TRQL (SQL-style) queries against runs and metrics in ClickHouse, with AI assistant | No built-in query language |
| Tracing | OpenTelemetry traces per run with span-level detail | OpenTelemetry adapter (requires tracing backend) |
| Log capture | Structured logs attached to each run, viewable in dashboard | Application-level logging (stdout) |
| Error visibility | Stack traces, payloads, run history, and output diffs in dashboard | Error events via worker event handlers |
| Alerting | Built-in failure and success notifications (email, webhook) | Via monitoring stack (Grafana, Datadog, etc.) |
| Programmatic access | SDK and REST API for running queries from your code | Redis commands or dashboard API |
| Run tagging | Up to 10 tags per run, filterable in dashboard and SDK | No built-in tagging |
BullMQ ships an OpenTelemetry adapter that integrates with your existing tracing infrastructure (Datadog, Grafana, Honeycomb). The dashboard ecosystem has grown: BullBoard for basic inspection, Taskforce.sh for commercial monitoring with SOC 2 compliance, and Bullstudio as a newer open-source option. Teams with an existing observability stack can instrument BullMQ thoroughly. Trigger.dev includes observability out of the box: OpenTelemetry tracing with span-level detail, structured log capture, error visibility with full stack traces, and a run explorer that shows every run's payload, output, and timeline. Every project ships with a built-in dashboard, and you can build custom dashboards with charts, tables, and big number widgets. TRQL (a SQL-style query language backed by ClickHouse) lets you ask questions like "what are my most expensive runs?" or "what's the p95 duration for this task?" directly, or through the built-in AI assistant. You can also run queries from your code via the SDK or REST API to power internal tools or feed data to AI agents.
What infrastructure do you need to manage?
BullMQ is a library that runs in your Node.js process with Redis. Trigger.dev is a platform that runs your code on managed compute.
| Feature | Trigger.dev | BullMQ |
|---|---|---|
| Managed option | Trigger.dev Cloud (fully managed) | Self-operated (Redis hosting via cloud providers) |
| Redis dependency | None | Redis, Valkey, DragonflyDB, ElastiCache, or Upstash |
| Worker deployment | npx trigger.dev@latest deploy | Your process manager (PM2, Docker, K8s) |
| Scaling | Automatic (managed compute) | Add more worker processes |
| Compute isolation | Each run gets its own container (configurable CPU/RAM) | Jobs share the worker process |
| Self-hosted option | Docker Compose or Kubernetes (Apache 2.0) | Always self-operated (MIT) |
BullMQ's architecture is simple and well-understood: your Node.js process connects to Redis, enqueues jobs, and processes them. You control the entire stack, from Redis configuration (maxmemory-policy: noeviction is a hard requirement) to worker scaling and health monitoring. This is a strength for teams that want full control and already operate Redis in production. Trigger.dev provides managed compute: write a task, deploy it, and the platform handles scaling, isolation, and monitoring. Each run gets its own container with a configurable machine preset (up to 4 vCPU, 8 GB RAM), so a memory spike in one task does not affect others. Both offer self-hosting: BullMQ is always self-operated (MIT), Trigger.dev runs on Docker Compose or Kubernetes (Apache 2.0).
How does the build and deploy pipeline compare?
Trigger.dev deploys with a single CLI command and manages build dependencies through config. BullMQ deploys as part of your existing Node.js application.
| Feature | Trigger.dev | BullMQ |
|---|---|---|
| Build customization | Build extensions (Prisma, FFmpeg, Playwright, Python, custom) | Your Node.js toolchain and Docker image |
| Deploy integrations | GitHub auto-deploy, Vercel integration, CLI | Your CI/CD pipeline (GitHub Actions, etc.) |
| Environments | Production, Staging, Preview (per-branch), Development | Your environment setup (flexible, you control it) |
| System dependencies | FFmpeg, Puppeteer, Sharp, Python, etc. via extensions | Install via Dockerfile or package manager |
BullMQ deploys as part of your Node.js application, so it uses whatever build and deploy pipeline you already have. If you are running Docker, you install system dependencies in your Dockerfile. Trigger.dev provides build extensions that add system-level dependencies (Prisma, FFmpeg, Playwright, Python) with a config line, plus custom extensions for anything not covered. The GitHub integration auto-deploys on every push, and preview branches create isolated environments per PR with their own API key, env vars, and schedules.
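A build extension is a config entry, not a Dockerfile. This is a sketch following the documented pattern; check the docs for exact extension import paths, and note that `<project-ref>` is a placeholder for your project reference.

```typescript
// trigger.config.ts — adds FFmpeg to the deployed image at build time.
import { defineConfig } from "@trigger.dev/sdk/v3";
import { ffmpeg } from "@trigger.dev/build/extensions/core";

export default defineConfig({
  project: "<project-ref>",
  build: {
    extensions: [ffmpeg()],
  },
});
```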
How do pricing and cost compare?
BullMQ is MIT-licensed and free. You pay for Redis hosting and your own compute. Trigger.dev bills per compute-second with a free tier included.
| Feature | Trigger.dev | BullMQ |
|---|---|---|
| Infrastructure cost | Included in compute-second pricing | Redis hosting + worker compute (you manage) |
| Free tier | Free plan available | MIT (free forever) + your infrastructure costs |
| Pricing model | Compute-seconds + per-run fee | Your infrastructure costs (Redis + compute + monitoring tools) |
| Self-hosted cost | Free (Apache 2.0) + your infra | Free (MIT) + Redis + your compute |
BullMQ's cost floor is genuinely low. The library is free, and a small Redis instance handles moderate workloads. BullMQ Pro adds groups, batches, and observables as a paid tier. As workloads grow, infrastructure costs include Redis scaling, worker compute, monitoring tools, and the engineering time to keep everything running. Trigger.dev's pricing is compute-seconds plus a per-run fee, with a free tier for getting started. Which is cheaper depends on workload shape: BullMQ stays cheap if you already run Redis and have the ops capacity, while Trigger.dev removes infrastructure and ops costs from the equation.
Code comparison
AI document summarizer
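Both versions below are sketches: they use the Vercel AI SDK for the LLM call (model choice is illustrative), and document fetching is elided.

```typescript
// Trigger.dev: summarize a document as a task. No execution timeout,
// retries configured on the task definition.
import { task } from "@trigger.dev/sdk/v3";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

export const summarizeDocument = task({
  id: "summarize-document",
  retry: { maxAttempts: 3 },
  run: async (payload: { text: string }) => {
    const { text } = await generateText({
      model: openai("gpt-4o-mini"),
      prompt: `Summarize this document:\n\n${payload.text}`,
    });
    return { summary: text };
  },
});
```

```typescript
// BullMQ: the same summarizer as a worker processor.
import { Queue, Worker } from "bullmq";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const connection = { host: "localhost", port: 6379 };
export const summarizeQueue = new Queue("summarize", { connection });

new Worker(
  "summarize",
  async (job) => {
    const { text } = await generateText({
      model: openai("gpt-4o-mini"),
      prompt: `Summarize this document:\n\n${job.data.text}`,
    });
    return { summary: text };
  },
  {
    connection,
    lockDuration: 120_000, // long LLM calls need a longer lock
  }
);
```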
Process an uploaded image (resize and optimize)
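Both sketches use sharp for the resize and WebP conversion; the `downloadImage` and `uploadImage` helpers are hypothetical stand-ins for your storage layer.

```typescript
// Trigger.dev: sharp is added via a build extension, no Dockerfile needed.
import { task } from "@trigger.dev/sdk/v3";
import sharp from "sharp";

export const optimizeImage = task({
  id: "optimize-image",
  run: async (payload: { imageUrl: string }) => {
    const input = await downloadImage(payload.imageUrl); // hypothetical helper
    const output = await sharp(input)
      .resize(1024, null, { withoutEnlargement: true })
      .webp({ quality: 80 })
      .toBuffer();
    return { url: await uploadImage(output) }; // hypothetical helper
  },
});
```

```typescript
// BullMQ: same processing in a worker; sharp's native deps must be
// installed in whatever image runs this process.
import { Worker } from "bullmq";
import sharp from "sharp";

new Worker(
  "optimize-image",
  async (job) => {
    const input = await downloadImage(job.data.imageUrl); // hypothetical helper
    const output = await sharp(input)
      .resize(1024, null, { withoutEnlargement: true })
      .webp({ quality: 80 })
      .toBuffer();
    return { url: await uploadImage(output) }; // hypothetical helper
  },
  { connection: { host: "localhost", port: 6379 } }
);
```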
Scheduled task: daily report email
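Two sketches of the same daily schedule; `sendReportEmail` is a hypothetical helper.

```typescript
// Trigger.dev: a declarative schedule attached to the task.
import { schedules } from "@trigger.dev/sdk/v3";

export const dailyReport = schedules.task({
  id: "daily-report",
  cron: "0 9 * * *", // 09:00 UTC every day
  run: async () => {
    await sendReportEmail(); // hypothetical helper
  },
});
```

```typescript
// BullMQ: a repeatable job plus a worker to process it.
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 };
const reports = new Queue("reports", { connection });

await reports.add(
  "daily-report",
  {},
  { repeat: { pattern: "0 9 * * *" } } // cron pattern, server timezone
);

new Worker(
  "reports",
  async () => {
    await sendReportEmail(); // hypothetical helper
  },
  { connection }
);
```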
What developers say about Trigger.dev
With Trigger.dev, we've summarized over a million student interactions in just a couple of weeks. We're incredibly thankful for tools like Trigger.dev that are empowering us to bring AI-driven solutions to educators and students at scale.

Ben Duggan

Moving to Trigger for our background jobs was more reliable, cheaper, and easier. We run 200,000+ monthly background jobs without worrying about infrastructure.

Alex Danilowicz
Frequently asked questions
Can I migrate from BullMQ to Trigger.dev?
Yes. BullMQ jobs map directly to Trigger.dev tasks. The job processor function becomes your task's run function, queue configuration maps to Trigger.dev queue options, and repeatable jobs become scheduled tasks. The main work is removing Redis connection setup and worker boilerplate. Most migrations take a day or two.
Do I still need Redis with Trigger.dev?
No. BullMQ requires Redis for all queue operations. Trigger.dev manages its own infrastructure with no Redis dependency. If you use Redis for other purposes (caching, sessions), you keep that. But the job queue no longer depends on it.
How does BullMQ Pro compare to Trigger.dev?
BullMQ Pro adds groups with round-robin processing, batch consumption, and observables for state-machine patterns. Trigger.dev includes batch triggering, per-entity concurrency controls, and observability in its standard offering. BullMQ Pro does not add managed infrastructure or durability guarantees, which are included in Trigger.dev Cloud.
Can BullMQ handle the same scale as Trigger.dev?
BullMQ can process 250,000+ jobs per second with DragonflyDB, which is impressive raw throughput. Throughput and operational scale are different dimensions. BullMQ throughput depends on your Redis setup, worker count, and infrastructure tuning. Trigger.dev handles scaling automatically. For most applications, both are more than capable.
What happens to my BullMQ jobs if Redis goes down?
It depends on your Redis persistence configuration. With AOF (append-only file) enabled, you lose approximately one second of data. With RDB snapshots only (the Redis default), you could lose minutes of data between snapshots. With no persistence configured, all pending jobs are lost on restart. Trigger.dev's durable execution does not have this failure mode.
Is BullMQ good for AI and LLM tasks?
BullMQ can queue AI tasks like any other job. Long-running LLM calls may trigger BullMQ's stalled job detection (default 30-second lock), which can be tuned with lockDuration and stalledInterval settings. BullMQ does not have built-in streaming, human-in-the-loop primitives, or AI-specific observability. Trigger.dev has Realtime Streams, Waitpoints, no execution timeout, and works with any TypeScript AI framework.
Does Trigger.dev support BullMQ features like job flows?
BullMQ job flows let you define parent-child job dependencies with unlimited nesting via FlowProducer. Trigger.dev has a similar concept with triggerAndWait and batchTriggerAndWait, which let you trigger child tasks and wait for their results. The API is different but the capability is equivalent.
Can I use BullMQ and Trigger.dev together?
Yes. Some teams use BullMQ for high-throughput, low-complexity jobs (sending emails, cache invalidation) and Trigger.dev for long-running or complex workflows (AI tasks, multi-step pipelines). The two can coexist in the same application.
Is Trigger.dev open source like BullMQ?
Yes. BullMQ is MIT licensed and always self-operated. Trigger.dev is Apache 2.0 licensed and fully self-hostable via Docker Compose or Kubernetes. Both are genuinely open source. The difference is that Trigger.dev also offers a managed cloud option.
What is the best background jobs solution for TypeScript?
BullMQ is the most popular background job library for Node.js with 14M+ monthly downloads. Trigger.dev is a managed compute platform that processes millions of runs for thousands of engineering teams, with built-in durability, observability, and auto-scaling. BullMQ gives you full control with zero vendor dependency. Trigger.dev gives you zero ops.
Can I run background jobs in TypeScript without managing Redis?
Yes. BullMQ requires Redis for all queue operations. Trigger.dev does not use Redis. Deploy with the CLI and the platform handles queuing, retries, and scheduling. If you want background jobs without a message broker, Trigger.dev is the simpler path.
Can I build custom dashboards and query my task data with Trigger.dev?
BullMQ does not include built-in analytics or dashboards. Trigger.dev includes built-in dashboards for every project and lets you build custom dashboards with charts, tables, and big number widgets. TRQL, a SQL-style query language backed by ClickHouse, lets you query your runs and metrics data directly or through an AI assistant that generates queries from plain English. You can also run queries from your code via the SDK or REST API.