Trigger.dev v3 Developer Preview is now available

Matt Aitken


Today the developer preview is live. Each day we'll let in more people as we fix bugs, increase capacity and roll out essential features. You can sign up to request access now.

We built v3 based on conversations with hundreds of users over the past year.


This video shows off writing tasks in a /trigger folder, running locally using our CLI and using the dashboard to view a run.

And here's the code for the task shown in that video:

/trigger/openai.ts

```typescript
import { task, retry } from "@trigger.dev/sdk/v3";
import OpenAI from "openai";

const openai = new OpenAI();

export const summarizeUrl = task({
  id: "summarize-url",
  //specifying retry options overrides the defaults defined in your trigger.config file
  retry: {
    maxAttempts: 10,
  },
  run: async (payload: { url: string }) => {
    //you can use fetch, but we provide a version that retries
    const response = await retry.fetch(payload.url);
    const html = await response.text();

    //if this fails, it will throw an error and retry
    const chatCompletion = await openai.chat.completions.create({
      messages: [
        {
          role: "user",
          content: `Can you summarize in bullet points the content of this web page: \n${html}`,
        },
      ],
      model: "gpt-4-turbo-preview",
    });

    if (chatCompletion.choices[0]?.message.content === undefined) {
      //sometimes OpenAI returns an empty response, let's retry by throwing an error
      throw new Error("OpenAI call failed");
    }

    return chatCompletion.choices[0].message.content;
  },
});
```

No timeouts but still serverless

Timeouts are a big problem for background tasks, especially when even seemingly short tasks can hit the limits when you factor in retrying.

We learned this the hard way with v2, because code ran on your servers (often with timeouts). That meant dividing tasks into small chunks, which is complex, makes many things impossible (no chunk can exceed the timeout), and can lead to subtle bugs.

There are zero timeouts in v3. We deploy, manage and scale your tasks on long-running servers.

Writing tasks is now far simpler and less confusing, and entirely new use cases open up.

Freeze your server costs

With our cloud product, you only pay when code is executing. This doesn't seem particularly novel until you realize that there are no timeouts and you can write code like this:


```typescript
import { task, wait, logger } from "@trigger.dev/sdk/v3";
import { PrismaClient } from "@prisma/client";
import { Resend } from "resend";

const prisma = new PrismaClient();
const resend = new Resend(process.env.RESEND_API_KEY);

export const sendReminderEmail = task({
  id: "send-reminder-email",
  run: async (payload: { todoId: string; userId: string; date: string }) => {
    //wait until the date, this could be a really long time in the future
    await wait.until({ date: new Date(payload.date) });

    const todo = await prisma.todo.findUnique({
      where: {
        id: payload.todoId,
      },
    });

    const user = await prisma.user.findUnique({
      where: {
        id: payload.userId,
      },
    });

    //send email
    const { data, error } = await resend.emails.send({
      to: user.email,
      subject: `Don't forget to ${todo.title}!`,
      html: `<p>Hello ${user.name},</p><p>...</p>`,
    });

    if (error) {
      throw new Error(`Failed to send email ${error.message} ${error.name}`);
    }

    logger.info(`Email sent to ${user.email}`, { data });
  },
});
```

If we can freeze the execution of your code we will, and you won't pay until it starts back up. Right now this happens when using the wait functions, after scheduling a retry, and when waiting for subtasks to complete.
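To make the freezing model concrete: from your code's point of view, `wait.until` is just a promise that resolves once a target date has passed. A minimal in-memory stand-in (purely illustrative; the platform checkpoints the process rather than holding a timer) might look like:

```typescript
// Hypothetical stand-in for wait.until: resolves once the given date passes.
// The real platform freezes the run and restores it later, so no process
// sits in memory holding a timer for days or months.
function waitUntil(opts: { date: Date }): Promise<void> {
  const ms = Math.max(0, opts.date.getTime() - Date.now());
  return new Promise((resolve) => setTimeout(resolve, ms));
}
```

The point of the checkpoint/restore approach is precisely that the naive version above would keep a process (and its bill) alive for the entire wait.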

The underlying technology we're using is called CRIU (Checkpoint/Restore In Userspace) and has been used at scale by Google since 2017. Each run has its own process that survives even after unfreezing. This is called Stateful Serverless and can simplify solving many problems.

Reliable by default

If a task throws an error (that you don't catch) it will be reattempted by default. Here you can see it happening 3 times:

By default we retry 3 times

You can combine and nest tasks to create robust workflows easily:


```typescript
import { task } from "@trigger.dev/sdk/v3";

export const myTask = task({
  id: "my-task",
  retry: {
    maxAttempts: 10,
  },
  run: async (payload: string) => {
    const result = await otherTask.triggerAndWait({ payload: "some data" });
    //...do other stuff
  },
});

export const otherTask = task({
  id: "other-task",
  retry: {
    maxAttempts: 5,
  },
  run: async (payload: string) => {
    return {
      foo: "bar",
    };
  },
});
```

You can configure the default retrying behavior in your trigger.config.ts file.

We also provide convenience functions that make it easy to add reliability inside your tasks, like retry.fetch() and retry.onThrow(). They use the freezing system while waiting to retry. Or you can use npm packages for this if you'd prefer.
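The retry-on-throw pattern itself is simple. Here's a hand-rolled sketch (not the SDK's implementation, and without the freezing behavior) that re-runs a function until it succeeds or attempts run out:

```typescript
// Illustrative retry-on-throw helper: call `fn`, and if it throws, try again
// up to `maxAttempts` times, optionally sleeping `delayMs` between attempts.
async function retryOnThrow<T>(
  fn: () => Promise<T>,
  opts: { maxAttempts: number; delayMs?: number }
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= opts.maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < opts.maxAttempts && opts.delayMs) {
        await new Promise((resolve) => setTimeout(resolve, opts.delayMs));
      }
    }
  }
  //all attempts failed, surface the last error
  throw lastError;
}
```

The SDK's versions go further: because the wait between attempts is frozen, you aren't paying for a process that's just sleeping.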

Observability built for long-running tasks

You get a live view into exactly what's happening in your tasks. We did this by extending OpenTelemetry (OTEL).

We automatically generate useful spans and logs when you trigger and run tasks. Plus anytime you log something in your code it will appear.

OpenAI chat log

Here you can see Prisma calls being auto-instrumented. Once an instrumentation is registered in your trigger.config.ts file, you get this level of observability without touching your task code:

trigger.config.ts

```typescript
import type { TriggerConfig } from "@trigger.dev/sdk/v3";
import { PrismaInstrumentation } from "@prisma/instrumentation";

export const config: TriggerConfig = {
  project: "proj_olbfrbvscqekuyhsrzst",
  instrumentations: [new PrismaInstrumentation()],
  retries: {
    enabledInDev: false,
    default: {
      maxAttempts: 3,
      minTimeoutInMs: 1000,
      maxTimeoutInMs: 10000,
      factor: 2,
      randomize: true,
    },
  },
  additionalFiles: ["./prisma/schema.prisma"],
  additionalPackages: ["[email protected]"],
};
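The retry defaults above describe an exponential backoff: the first delay is minTimeoutInMs, each subsequent delay is multiplied by factor, and every delay is capped at maxTimeoutInMs (randomize adds jitter on top). A small sketch of that arithmetic, jitter omitted:

```typescript
// Exponential backoff arithmetic matching the config defaults above
// (illustrative only; the SDK computes this for you, and `randomize`
// would add jitter that is omitted here).
function backoffDelayMs(
  attempt: number,
  opts = { minTimeoutInMs: 1000, maxTimeoutInMs: 10000, factor: 2 }
): number {
  //attempt 1 waits minTimeoutInMs, then the delay doubles (factor: 2) each time
  const delay = opts.minTimeoutInMs * Math.pow(opts.factor, attempt - 1);
  //never wait longer than maxTimeoutInMs
  return Math.min(delay, opts.maxTimeoutInMs);
}
```

With these defaults the delays run 1s, 2s, 4s, 8s, then stay pinned at 10s.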

Open source

Our GitHub repo is Apache 2 licensed and we really appreciate issues, discussions, and contributions. You can self-host Trigger.dev v3 on your own infrastructure using Docker. We will be adding self-hosting guides soon.

Cloud pricing

Right now, v3 is invite-only and is free for a short period – you can request access here. We will let you know when you have access.

Very soon we will be rolling out these pricing plans:

Free

  • $5 free usage each month (verified GitHub account required)
  • 10 concurrent runs
  • 24 hour log history
  • Dev & Prod environments
  • Community support

Hobby ($10/month)

  • $10 of usage included
  • 100+ concurrent runs (soft-limit that can be raised)
  • 3 day log history
  • Dev & Prod environments
  • Email support

Pro ($50/month)

  • $50 of usage included
  • 100+ concurrent runs (soft-limit that can be raised)
  • 30 day log history
  • Alerts
  • Dev, Staging & Prod environments
  • Email, private Slack support

Enterprise (talk to us)

  • Custom usage included
  • Custom concurrent runs
  • Dedicated worker cluster
  • 90 days log history
  • Single-sign on
  • SOC2 report
  • Dedicated support

All plans have unlimited seats.

Usage pricing

Each plan includes some usage each month and then you pay for anything above that.

The usage pricing is:

  • $0.25 per 1000 runs
  • $0.50 per vCPU × GB RAM hour

In DEV the code runs on your machine so you only pay for runs, not compute.

The default for a task will be 0.5 vCPU and 0.5GB RAM but will be configurable (up and down).
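As a rough sketch of how a month's bill could be estimated from these numbers. It assumes "per vCPU × GB RAM hour" charges for the product of allocated vCPU and GB per hour of execution; that reading is ours, not an official formula:

```typescript
// Hypothetical usage-cost estimate. Assumes "$0.50 per vCPU x GB RAM hour"
// means $0.50 per execution-hour per (vCPU * GB) unit -- our interpretation,
// not an official pricing formula.
function usageCostUsd(
  runs: number,
  computeHours: number,
  vcpu = 0.5, //default task allocation
  gb = 0.5 //default task allocation
): number {
  const runsCost = (runs / 1000) * 0.25; //$0.25 per 1000 runs
  const computeCost = computeHours * vcpu * gb * 0.5;
  return runsCost + computeCost;
}
```

Under this reading, 10,000 runs plus 100 compute hours at the default 0.5 vCPU / 0.5 GB would come to $2.50 + $12.50 = $15.00.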

What's available and what's coming soon

Available

  • Regular tasks
  • Testing tasks from dashboard
  • Triggering a run/batch of runs
  • Trigger and wait for result for a run/batch of runs
  • Concurrency controls
  • Per-tenant queues
  • Deploy via CLI
  • Deploy via GitHub Actions
  • Automatic reattempts
  • Atomic versioning
  • Useful local retrying functions

Coming soon

  • Self-hosting guide for our Docker provider
  • CRON and interval tasks
  • Zod tasks
  • Full text search of all runs
  • All logs view with full text search
  • Alerts for errors
  • Notifications: send data to your web app from inside a run
  • Rollbacks: easily rollback changes when errors happen
  • Webhook integrations (receiving a Stripe webhook triggers a task)

Let us know what we should prioritize and what we are missing.


You can sign up to request access now.