Trigger.dev vs BullMQ

Every production app needs background jobs. BullMQ is the most popular way to run them in Node.js: 14M+ monthly downloads, MIT-licensed, and backed by Redis for raw throughput. Trigger.dev is a managed compute platform where those same jobs get durability, observability, and auto-scaling with nothing to provision. Choose BullMQ for lightweight queuing with full control over your stack. Choose Trigger.dev for TypeScript teams that want background jobs without managing Redis, workers, or scaling.

What is Trigger.dev?

Trigger.dev is a managed compute platform for building AI agents and workflows in TypeScript. Your code runs as normal TypeScript with no determinism constraints and no execution timeout. It includes built-in retries, queues, scheduling, and OpenTelemetry observability with custom dashboards and a SQL-style query language. Tasks can stream data to your frontend in real-time, receive typed input while running, and pause indefinitely for human approval or external events. Thousands of engineering teams run production workloads on it, processing millions of runs.

What is BullMQ?

BullMQ is a Redis-backed job queue library for Node.js with 14M+ monthly downloads, making it the most widely used background job solution in the JavaScript ecosystem. It supports delayed jobs, cron scheduling, job flows (parent-child dependencies), rate limiting, automatic retries with exponential backoff, priorities, sandboxed workers, and OpenTelemetry support (via official adapter). BullMQ is compatible with Redis, Valkey, DragonflyDB, ElastiCache (node-based clusters only, not serverless), and Upstash.

Feature deep-dive

How does developer experience and setup compare?

BullMQ gets you running in minutes with npm install plus a Redis connection. Trigger.dev gets you running with a CLI command and zero infrastructure to manage afterward.

| Feature | Trigger.dev | BullMQ |
| --- | --- | --- |
| What you install | npm package + CLI | npm package + Redis server |
| Infrastructure | Managed compute (Cloud) or self-hosted (Docker/K8s) | Redis + your Node.js worker processes |
| Time to first job | ~5 minutes (CLI deploy) | ~10 minutes (with existing Redis) |
| Worker management | Managed by platform | Your Node.js processes (PM2, Docker, K8s) |
| Scaling | Automatic | Add more worker processes |
| Dashboard | Dashboard with OpenTelemetry traces, logs, custom dashboards, TRQL query language, AI assistant, and alerting | BullBoard (community), Taskforce.sh (commercial), Bullstudio (community) |
| Child tasks | triggerAndWait / batchTriggerAndWait | Job flows via FlowProducer (unlimited nesting) |
| Delayed execution | delay option (duration string or date) | Delayed jobs (duration or timestamp) |
| Local development | Hot reload via CLI (internet required) | Fully offline with local Redis |
| Payload validation | schemaTask with Zod, ArkType, or TypeBox for runtime-validated payloads | No built-in validation |
| AI coding tools | MCP server (docs, metrics, write tasks, trigger, monitor, deploy), agent rules, skills, llms.txt | No equivalent |

BullMQ's setup takes minutes: add the npm package, point it at Redis, and start processing jobs. The library gives you full control over workers, concurrency, and job lifecycle, and local development works fully offline. Trigger.dev takes a different approach: npx trigger.dev@latest dev starts local development, npx trigger.dev@latest deploy ships to production, and the platform handles workers, scaling, and monitoring. The tradeoff is control versus operational overhead. Trigger.dev also ships an MCP server that lets AI editors deploy, trigger tasks, and monitor runs. Agent rules add code generation guidance for Claude Code, Cursor, Windsurf, VS Code, Zed, Gemini CLI, and Cline.
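As a minimal sketch of the BullMQ side (assuming a local Redis on localhost:6379; the emails queue and sendWelcomeEmail helper are hypothetical):

```ts
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 };

// Producer: enqueue a job from anywhere in your app
const emailQueue = new Queue("emails", { connection });
await emailQueue.add("welcome", { to: "user@example.com" });

// Consumer: a separate worker process that you deploy and scale yourself
new Worker(
  "emails",
  async (job) => {
    // job.data is whatever the producer enqueued
    await sendWelcomeEmail(job.data.to); // hypothetical app helper
  },
  { connection, concurrency: 5 }
);
```

The Trigger.dev equivalent skips the connection and worker boilerplate: you export a task and run npx trigger.dev@latest dev.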

How do concurrency and rate limiting compare?

BullMQ provides per-worker concurrency, global rate limiting, and per-group controls (Pro). Trigger.dev provides per-queue and per-entity concurrency, debounce, idempotency keys, and dynamic overrides at trigger time.

| Feature | Trigger.dev | BullMQ |
| --- | --- | --- |
| Per-worker concurrency | Per-queue concurrency limits | concurrency option per worker instance |
| Global rate limiting | Concurrency-based (no time-based rate limiting) | Per-second/minute/hour rate limiter |
| Per-entity controls | concurrencyKey (e.g., per-user queues) | Groups with round-robin (Pro tier) |
| Dynamic overrides | Override queue and concurrency at trigger time | Set at queue/worker creation |
| Job priorities | Per-run priority at trigger time | Numeric priority per job (lower = higher) |
| Debounce | Built-in with leading/trailing modes and configurable delay | Part of deduplication (3 modes: simple, throttle, debounce) |
| Deduplication | Idempotency keys with configurable TTL | 3 modes: simple, throttle, and debounce |
| Processing order | First-in-first-out (FIFO) | First-in-first-out (FIFO) and last-in-first-out (LIFO) |

BullMQ's concurrency model is granular: set concurrency per worker instance, apply time-based rate limits (10 jobs per second), and use priorities to control processing order. BullMQ Pro extends this with group-level concurrency and round-robin processing across groups. Trigger.dev uses concurrencyKey to create per-entity queues (one concurrent job per user, per tenant, or per resource), and lets you override queue assignment at trigger time for patterns like different limits for free vs paid users. Trigger.dev also has built-in debounce (leading and trailing modes with configurable delay) and idempotency keys with TTL to prevent duplicate runs. Trigger.dev does not have time-based rate limiting, only concurrency-based controls.
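As a sketch of BullMQ's time-based limiter (assuming a local Redis; the llm-calls queue name is hypothetical):

```ts
import { Worker } from "bullmq";

// At most 10 jobs started per second across this worker, 5 in flight at once
new Worker(
  "llm-calls",
  async (job) => {
    // call the rate-limited downstream API here
  },
  {
    connection: { host: "localhost", port: 6379 },
    concurrency: 5,
    limiter: { max: 10, duration: 1000 },
  }
);
```

On the Trigger.dev side, the closest control is a queue concurrency limit plus a concurrencyKey passed at trigger time; there is no per-second limiter.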

How do reliability and failure recovery compare?

BullMQ reliability depends on your Redis persistence setup and retry configuration. Trigger.dev provides durable execution via checkpoint-resume, so tasks survive crashes and restarts automatically.

| Feature | Trigger.dev | BullMQ |
| --- | --- | --- |
| Failure recovery | Checkpoint-resume (durable execution) | Redis persistence + automatic retries |
| Data loss risk | None (checkpoints are durable) | Depends on Redis config (AOF: ~1s loss, RDB: minutes, none: all lost) |
| Retry mechanism | Built-in with configurable backoff | Built-in with configurable backoff |
| Long-running jobs | No timeout (hours, days, weeks) | No hard limit (stalled detection at 30s lock, configurable) |
| Delivery guarantee | At-least-once (with idempotency keys to prevent duplicates) | At-least-once (idempotency is your responsibility) |

BullMQ's reliability comes from Redis persistence and its built-in retry mechanism. AOF with appendfsync always gives strong queue reliability at the cost of throughput. RDB snapshots are faster but can lose recent data. For teams already running Redis with proper persistence, BullMQ queues are reliable. Trigger.dev is a different category: durable execution. Checkpoint-resume snapshots your entire process state at wait points and restores it on recovery, so your code resumes exactly where it left off rather than the whole job being retried from scratch.
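Both retry mechanisms follow the same doubling shape. A sketch of the arithmetic, matching the pattern BullMQ uses for backoff: { type: "exponential", delay: base }:

```ts
// Delay before retry attempt n (1-indexed): base * 2^(n - 1)
function backoffDelay(base: number, attempt: number): number {
  return base * 2 ** (attempt - 1);
}

// With a 1s base, five attempts wait 1s, 2s, 4s, 8s, 16s
const delays = [1, 2, 3, 4, 5].map((n) => backoffDelay(1000, n));
console.log(delays); // [1000, 2000, 4000, 8000, 16000]
```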

How do AI agent capabilities compare?

BullMQ processes AI tasks as regular jobs with no new concepts to learn. Trigger.dev adds AI-specific features: real-time streaming, human-in-the-loop, and no execution timeout.

| Feature | Trigger.dev | BullMQ |
| --- | --- | --- |
| AI framework support | Any TS framework (AI SDK, Mastra, LangGraph.js, etc.). ai.tool() exposes tasks as LLM-callable tools | Any framework (it's a queue, not an AI platform) |
| Real-time streaming | Realtime Streams (SSE to frontend and backend) | Job progress events (numeric or custom object) |
| Human-in-the-loop | Waitpoints (pause indefinitely for approval) | No built-in equivalent |
| Bidirectional communication | Input streams (typed data into running tasks) | No built-in equivalent |
| Max execution time | No limit | No hard limit (tune stalledInterval for long jobs) |
| Stalled job detection | Heartbeat-based (managed by platform) | 30s default lock, adjustable via lockDuration setting |
| Build customization | Build extensions install packages, SDKs, and system deps at build time (Prisma, FFmpeg, Playwright, Python, custom) | Your Dockerfile or package manager |

BullMQ processes AI tasks the same way it processes any other job: no new concepts, no additional APIs. Long-running LLM calls work well with tuned lockDuration and stalledInterval settings. Job progress events provide status updates to callers. Trigger.dev adds features designed for AI workflows: Realtime Streams push tokens to your frontend over SSE, input streams send typed data into running tasks (cancel signals, approvals, user messages), and Waitpoints pause tasks indefinitely for human review. The managed compute model handles stalled detection automatically via heartbeats, so there's nothing to tune for long-running calls.
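As a sketch of the human-in-the-loop pattern (assuming Trigger.dev's Waitpoint token API, wait.createToken / wait.forToken; the notifyReviewer helper is hypothetical):

```ts
import { task, wait } from "@trigger.dev/sdk";

export const reviewDraft = task({
  id: "review-draft",
  run: async (payload: { draft: string }) => {
    // Create a waitpoint token and hand its ID to a human reviewer
    const token = await wait.createToken({ timeout: "7d" });
    await notifyReviewer(payload.draft, token.id); // hypothetical app helper

    // The task pauses here until the token is completed (or times out)
    const result = await wait.forToken<{ approved: boolean }>(token);
    return { published: result.ok && result.output.approved };
  },
});
```

In BullMQ you would model this as two separate jobs with external state in between, since there is no built-in pause primitive.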

How does observability and debugging compare?

BullMQ provides an OpenTelemetry adapter and a community dashboard ecosystem. Trigger.dev includes built-in and custom dashboards, a SQL-style query language (TRQL), OpenTelemetry tracing, and structured logs for every run.

| Feature | Trigger.dev | BullMQ |
| --- | --- | --- |
| Dashboard | Built-in dashboard with run explorer, custom dashboards, and widgets (charts, tables, big numbers) | BullBoard (community), Taskforce.sh (commercial), Bullstudio |
| Query language | TRQL (SQL-style) queries against runs and metrics in ClickHouse, with AI assistant | No built-in query language |
| Tracing | OpenTelemetry traces per run with span-level detail | OpenTelemetry adapter (requires tracing backend) |
| Log capture | Structured logs attached to each run, viewable in dashboard | Application-level logging (stdout) |
| Error visibility | Stack traces, payloads, run history, and output diffs in dashboard | Error events via worker event handlers |
| Alerting | Built-in failure and success notifications (email, webhook) | Via monitoring stack (Grafana, Datadog, etc.) |
| Programmatic access | SDK and REST API for running queries from your code | Redis commands or dashboard API |
| Run tagging | Up to 10 tags per run, filterable in dashboard and SDK | No built-in tagging |

BullMQ ships an OpenTelemetry adapter that integrates with your existing tracing infrastructure (Datadog, Grafana, Honeycomb). The dashboard ecosystem has grown: BullBoard for basic inspection, Taskforce.sh for commercial monitoring with SOC 2 compliance, and Bullstudio as a newer open-source option. Teams with an existing observability stack can instrument BullMQ thoroughly. Trigger.dev includes observability out of the box: OpenTelemetry tracing with span-level detail, structured log capture, error visibility with full stack traces, and a run explorer that shows every run's payload, output, and timeline. Every project ships with a built-in dashboard, and you can build custom dashboards with charts, tables, and big number widgets. TRQL (a SQL-style query language backed by ClickHouse) lets you ask questions like "what are my most expensive runs?" or "what's the p95 duration for this task?" directly, or through the built-in AI assistant. You can also run queries from your code via the SDK or REST API to power internal tools or feed data to AI agents.

What infrastructure do you need to manage?

BullMQ is a library that runs in your Node.js process with Redis. Trigger.dev is a platform that runs your code on managed compute.

| Feature | Trigger.dev | BullMQ |
| --- | --- | --- |
| Managed option | Trigger.dev Cloud (fully managed) | Self-operated (Redis hosting via cloud providers) |
| Redis dependency | None | Redis, Valkey, DragonflyDB, ElastiCache, or Upstash |
| Worker deployment | npx trigger.dev@latest deploy | Your process manager (PM2, Docker, K8s) |
| Scaling | Automatic (managed compute) | Add more worker processes |
| Compute isolation | Each run gets its own container (configurable CPU/RAM) | Jobs share the worker process |
| Self-hosted option | Docker Compose or Kubernetes (Apache 2.0) | Always self-operated (MIT) |

BullMQ's architecture is simple and well-understood: your Node.js process connects to Redis, enqueues jobs, and processes them. You control the entire stack, from Redis configuration (maxmemory-policy: noeviction is a hard requirement) to worker scaling and health monitoring. This is a strength for teams that want full control and already operate Redis in production. Trigger.dev provides managed compute: write a task, deploy it, and the platform handles scaling, isolation, and monitoring. Each run gets its own container with a configurable machine preset (up to 4 vCPU, 8 GB RAM), so a memory spike in one task does not affect others. Both offer self-hosting: BullMQ is always self-operated (MIT), Trigger.dev runs on Docker Compose or Kubernetes (Apache 2.0).

How does the build and deploy pipeline compare?

Trigger.dev deploys with a single CLI command and manages build dependencies through config. BullMQ deploys as part of your existing Node.js application.

| Feature | Trigger.dev | BullMQ |
| --- | --- | --- |
| Build customization | Build extensions (Prisma, FFmpeg, Playwright, Python, custom) | Your Node.js toolchain and Docker image |
| Deploy integrations | GitHub auto-deploy, Vercel integration, CLI | Your CI/CD pipeline (GitHub Actions, etc.) |
| Environments | Production, Staging, Preview (per-branch), Development | Your environment setup (flexible, you control it) |
| System dependencies | FFmpeg, Puppeteer, Sharp, Python, etc. via extensions | Install via Dockerfile or package manager |

BullMQ deploys as part of your Node.js application, so it uses whatever build and deploy pipeline you already have. If you are running Docker, you install system dependencies in your Dockerfile. Trigger.dev provides build extensions that add system-level dependencies (Prisma, FFmpeg, Playwright, Python) with a config line, plus custom extensions for anything not covered. The GitHub integration auto-deploys on every push, and preview branches create isolated environments per PR with their own API key, env vars, and schedules.
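A sketch of what that config line looks like (assuming the ffmpeg extension from @trigger.dev/build and a v4-style import path; the project ref is a placeholder):

```ts
// trigger.config.ts — build extensions run at deploy time
import { defineConfig } from "@trigger.dev/sdk";
import { ffmpeg } from "@trigger.dev/build/extensions/core";

export default defineConfig({
  project: "<your-project-ref>",
  dirs: ["./src/trigger"],
  build: {
    // Installs FFmpeg into the deployed image; no Dockerfile required
    extensions: [ffmpeg()],
  },
});
```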

How do pricing and cost compare?

BullMQ is MIT-licensed and free. You pay for Redis hosting and your own compute. Trigger.dev bills per compute-second with a free tier included.

| Feature | Trigger.dev | BullMQ |
| --- | --- | --- |
| Infrastructure cost | Included in compute-second pricing | Redis hosting + worker compute (you manage) |
| Free tier | Free plan available | MIT (free forever) + your infrastructure costs |
| Pricing model | Compute-seconds + per-run fee | Your infrastructure costs (Redis + compute + monitoring tools) |
| Self-hosted cost | Free (Apache 2.0) + your infra | Free (MIT) + Redis + your compute |

BullMQ's cost floor is genuinely low. The library is free, and a small Redis instance handles moderate workloads. BullMQ Pro adds groups, batches, and observables as a paid tier. As workloads grow, infrastructure costs include Redis scaling, worker compute, monitoring tools, and the engineering time to keep everything running. Trigger.dev's pricing is compute-seconds plus a per-run fee, with a free tier for getting started. Which is cheaper depends on workload shape: BullMQ's cost floor is low if you already run Redis, while Trigger.dev removes infrastructure and ops costs from the equation.
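Back-of-envelope math makes the comparison concrete. A sketch with hypothetical rates (all numbers here are assumptions for illustration; check each pricing page for real figures):

```ts
// Hypothetical rates — assumptions for illustration only
const PER_RUN_FEE = 0.000025;   // $ per run invocation (assumed)
const COMPUTE_RATE = 0.0000169; // $ per compute-second (assumed)

function managedMonthlyCost(runs: number, avgSeconds: number): number {
  return runs * PER_RUN_FEE + runs * avgSeconds * COMPUTE_RATE;
}

function selfHostedMonthlyCost(
  redisHosting: number,  // managed Redis bill
  workerCompute: number, // VMs/containers running workers
  opsHours: number,      // engineering time on queue infrastructure
  hourlyRate: number
): number {
  // BullMQ itself is free; the cost is Redis, workers, and your time
  return redisHosting + workerCompute + opsHours * hourlyRate;
}

// 500k runs averaging 3s of compute vs. a small self-hosted stack
console.log(managedMonthlyCost(500_000, 3));         // ≈ 37.85
console.log(selfHostedMonthlyCost(50, 120, 4, 100)); // 570
```

The crossover point depends entirely on run volume, run duration, and how you price your own ops time.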

Code comparison

AI document summarizer

```ts
import { task } from "@trigger.dev/sdk";
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

export const summarizeDoc = task({
  id: "summarize-doc",
  run: async (payload: { documentUrl: string }) => {
    const doc = await fetch(payload.documentUrl).then((r) => r.text());
    const { text } = await generateText({
      model: anthropic("claude-sonnet-4-20250514"),
      prompt: `Summarize this document:\n\n${doc}`,
    });
    return { summary: text };
  },
});
```
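For contrast, a sketch of the same summarizer as a BullMQ job (assuming a local Redis; the producer and worker would normally live in separate processes):

```ts
import { Queue, Worker } from "bullmq";
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

const connection = { host: "localhost", port: 6379 };

// Producer: enqueue with retries configured per job
export const summarizeQueue = new Queue("summarize-doc", { connection });
await summarizeQueue.add(
  "summarize",
  { documentUrl: "https://example.com/doc.txt" },
  { attempts: 3, backoff: { type: "exponential", delay: 1000 } }
);

// Consumer: a worker process you deploy, scale, and monitor yourself
new Worker(
  "summarize-doc",
  async (job) => {
    const doc = await fetch(job.data.documentUrl).then((r) => r.text());
    const { text } = await generateText({
      model: anthropic("claude-sonnet-4-20250514"),
      prompt: `Summarize this document:\n\n${doc}`,
    });
    return { summary: text }; // stored in Redis as the job's return value
  },
  { connection }
);
```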

Process an uploaded image (resize and optimize)

```ts
import { task } from "@trigger.dev/sdk";
import sharp from "sharp";

export const processImage = task({
  id: "process-image",
  run: async (payload: { imageUrl: string; width: number }) => {
    const response = await fetch(payload.imageUrl);
    const buffer = Buffer.from(await response.arrayBuffer());
    const optimized = await sharp(buffer)
      .resize(payload.width)
      .webp({ quality: 80 })
      .toBuffer();
    // Upload to S3, return URL...
    return { size: optimized.length };
  },
});
```

Scheduled task: daily report email

```ts
import { schedules } from "@trigger.dev/sdk";

// db, sendEmail, and formatReport are your app's own helpers
export const dailyReport = schedules.task({
  id: "daily-report",
  cron: "0 9 * * *", // 9am UTC daily
  run: async () => {
    const stats = await db.getDailyStats();
    await sendEmail({
      subject: `Daily report: ${stats.date}`,
      body: formatReport(stats),
    });
    return { sent: true, date: stats.date };
  },
});
```
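The BullMQ counterpart is a repeatable job (sketch, assuming a local Redis; db, sendEmail, and formatReport are your app's helpers as above):

```ts
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 };
const reports = new Queue("daily-report", { connection });

// Register the schedule once; BullMQ re-enqueues the job on each tick
await reports.add(
  "daily-report",
  {},
  { repeat: { pattern: "0 9 * * *", tz: "UTC" } } // 9am UTC daily
);

new Worker(
  "daily-report",
  async () => {
    const stats = await db.getDailyStats();
    await sendEmail({
      subject: `Daily report: ${stats.date}`,
      body: formatReport(stats),
    });
  },
  { connection }
);
```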

What developers say about Trigger.dev

With Trigger.dev, we've summarized over a million student interactions in just a couple of weeks. We're incredibly thankful for tools like Trigger.dev that are empowering us to bring AI-driven solutions to educators and students at scale.

Ben Duggan, MagicSchool AI

Moving to Trigger for our background jobs was more reliable, cheaper, and easier. We run 200,000+ monthly background jobs without worrying about infrastructure.

Alex Danilowicz, Magic Patterns

We are a team of 2 and have scaled to 11,500+ customers in the last 12 months. Trigger.dev was the missing piece in our journey to go fully serverless.

Pontus Abrahamsson, Midday

Frequently asked questions

Can I migrate from BullMQ to Trigger.dev?

Yes. BullMQ jobs map directly to Trigger.dev tasks. The job processor function becomes your task's run function, queue configuration maps to Trigger.dev queue options, and repeatable jobs become scheduled tasks. The main work is removing Redis connection setup and worker boilerplate. Most migrations take a day or two.

Do I still need Redis with Trigger.dev?

No. BullMQ requires Redis for all queue operations. Trigger.dev manages its own infrastructure with no Redis dependency. If you use Redis for other purposes (caching, sessions), you keep that. But the job queue no longer depends on it.

How does BullMQ Pro compare to Trigger.dev?

BullMQ Pro adds groups with round-robin processing, batch consumption, and observables for state-machine patterns. Trigger.dev includes batch triggering, per-entity concurrency controls, and observability in its standard offering. BullMQ Pro does not add managed infrastructure or durability guarantees, which are included in Trigger.dev Cloud.

Can BullMQ handle the same scale as Trigger.dev?

BullMQ can process 250,000+ jobs per second with DragonflyDB, which is impressive raw throughput. Throughput and operational scale are different dimensions. BullMQ throughput depends on your Redis setup, worker count, and infrastructure tuning. Trigger.dev handles scaling automatically. For most applications, both are more than capable.

What happens to my BullMQ jobs if Redis goes down?

It depends on your Redis persistence configuration. With AOF (append-only file) enabled, you lose approximately one second of data. With RDB snapshots only (the Redis default), you could lose minutes of data between snapshots. With no persistence configured, all pending jobs are lost on restart. Trigger.dev's durable execution does not have this failure mode.

Is BullMQ good for AI and LLM tasks?

BullMQ can queue AI tasks like any other job. Long-running LLM calls may trigger BullMQ's stalled job detection (default 30-second lock), which can be tuned with lockDuration and stalledInterval settings. BullMQ does not have built-in streaming, human-in-the-loop primitives, or AI-specific observability. Trigger.dev has Realtime Streams, Waitpoints, no execution timeout, and works with any TypeScript AI framework.

Does Trigger.dev support BullMQ features like job flows?

BullMQ job flows let you define parent-child job dependencies with unlimited nesting via FlowProducer. Trigger.dev has a similar concept with triggerAndWait and batchTriggerAndWait, which let you trigger child tasks and wait for their results. The API is different but the capability is equivalent.
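A sketch of the two shapes side by side (assuming a local Redis for BullMQ; queue and task names are hypothetical):

```ts
// BullMQ: declare the dependency tree up front with FlowProducer.
// The parent job runs only after all children complete.
import { FlowProducer } from "bullmq";

const flow = new FlowProducer({ connection: { host: "localhost", port: 6379 } });
await flow.add({
  name: "assemble-report",
  queueName: "reports",
  children: [
    { name: "fetch-sales", queueName: "data", data: { region: "eu" } },
    { name: "fetch-usage", queueName: "data", data: { region: "eu" } },
  ],
});

// Trigger.dev: compose at runtime inside a parent task's run function:
//   const sales = await fetchSales.triggerAndWait({ region: "eu" });
//   if (sales.ok) { /* use sales.output */ }
```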

Can I use BullMQ and Trigger.dev together?

Yes. Some teams use BullMQ for high-throughput, low-complexity jobs (sending emails, cache invalidation) and Trigger.dev for long-running or complex workflows (AI tasks, multi-step pipelines). The two can coexist in the same application.

Is Trigger.dev open source like BullMQ?

Yes. BullMQ is MIT licensed and always self-operated. Trigger.dev is Apache 2.0 licensed and fully self-hostable via Docker Compose or Kubernetes. Both are genuinely open source. The difference is that Trigger.dev also offers a managed cloud option.

What is the best background jobs solution for TypeScript?

BullMQ is the most popular background job library for Node.js with 14M+ monthly downloads. Trigger.dev is a managed compute platform that processes millions of runs for thousands of engineering teams, with built-in durability, observability, and auto-scaling. BullMQ gives you full control with zero vendor dependency. Trigger.dev gives you zero ops.

Can I run background jobs in TypeScript without managing Redis?

Yes. BullMQ requires Redis for all queue operations. Trigger.dev does not use Redis. Deploy with the CLI and the platform handles queuing, retries, and scheduling. If you want background jobs without a message broker, Trigger.dev is the simpler path.

Can I build custom dashboards and query my task data with Trigger.dev?

BullMQ does not include built-in analytics or dashboards. Trigger.dev includes built-in dashboards for every project and lets you build custom dashboards with charts, tables, and big number widgets. TRQL, a SQL-style query language backed by ClickHouse, lets you query your runs and metrics data directly or through an AI assistant that generates queries from plain English. You can also run queries from your code via the SDK or REST API.