Changelog

Improvements, new features and fixes

  • We've added first-class support for getting runs using the SDK in version 3.0.0-beta.36. The list function is especially nice as we're using some pretty neat tricks to make it really easy to paginate through runs.

    If you haven't used any of the management functions before, the overview, which includes instructions on authentication, is a good starting point.

    Run this command in your repo to easily upgrade:


    npx trigger.dev@beta update

    runs.retrieve()


    import { runs, APIError } from "@trigger.dev/sdk/v3";

    //run IDs start with "run_"
    async function getRun(id: string) {
      try {
        const run = await runs.retrieve(id);
        return run;
      } catch (error) {
        if (error instanceof APIError) {
          console.error(
            `API error: ${error.status}, ${error.headers}, ${error.body}`
          );
        } else {
          console.error(`Unknown error: ${error.message}`);
        }
      }
    }

    You get lots of useful information about the run including:

    • status: (e.g. QUEUED, EXECUTING, COMPLETED, FAILED, etc)
    • payload: The payload that was sent to the run
    • output: The output of the run, if it has completed
    • startedAt: When the run was started
    • finishedAt: When the run finished, if it has completed
    • attempts: Information about each attempt, including the status and any error.

    Full reference docs.

    runs.list()

    Auto-pagination

    Dealing with list pagination is usually a pain. You get a bunch of items back, then you have to make another request to get the next page, and so on. We've made this super easy with the list function. You can just use a for await loop to iterate through every single matching run.


    import { runs } from "@trigger.dev/sdk/v3";

    async function fetchAllRuns() {
      const allRuns = [];

      for await (const run of runs.list({ limit: 10 })) {
        allRuns.push(run);
      }

      return allRuns;
    }

    Be warned: this will go through every single run, so you should break out of the loop when you have what you need. Or at least use a filter to only get the runs you're interested in.
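Under the hood, this style of auto-pagination is typically built on async generators. Here's a minimal self-contained sketch of the pattern, using a made-up fetchPage function rather than the SDK's actual internals:

```typescript
// A fake paginated API: returns items in pages with an optional cursor.
type PageResult<T> = { data: T[]; nextCursor?: string };

// Hypothetical page fetcher standing in for a real API call.
async function fetchPage(cursor?: string): Promise<PageResult<number>> {
  const all = [1, 2, 3, 4, 5];
  const start = cursor ? Number(cursor) : 0;
  const data = all.slice(start, start + 2);
  const next = start + 2;
  return { data, nextCursor: next < all.length ? String(next) : undefined };
}

// Async generator that yields every item, fetching pages lazily as needed.
async function* autoPaginate<T>(
  fetch: (cursor?: string) => Promise<PageResult<T>>
): AsyncGenerator<T> {
  let cursor: string | undefined = undefined;
  do {
    const page = await fetch(cursor);
    for (const item of page.data) yield item;
    cursor = page.nextCursor;
  } while (cursor !== undefined);
}

// Usage: the same shape as `for await (const run of runs.list(...))`.
async function collectAll(): Promise<number[]> {
  const items: number[] = [];
  for await (const item of autoPaginate(fetchPage)) {
    items.push(item);
  }
  return items;
}
```

Because the generator only fetches the next page when the loop asks for more items, breaking out of the loop early means no wasted requests.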

    Using getNextPage()

    Alternatively, you can get a page then use getNextPage() to get the next page.


    import { runs } from "@trigger.dev/sdk/v3";

    async function fetchRuns() {
      let page = await runs.list({ limit: 10 });

      for (const run of page.data) {
        console.log(run);
      }

      while (page.hasNextPage()) {
        page = await page.getNextPage();
        // ... do something with the next page
      }
    }

    Filtering

    You can use advanced filtering when listing runs. For example, to get all completed runs in the last year:


    for await (const run of runs.list({
      status: ["COMPLETED"],
      period: "1y",
    })) {
      console.log(run);
    }

    Or just encode-video tasks from the past day that failed and aren't tests:


    for await (const run of runs.list({
      taskIdentifier: "encode-video",
      status: ["FAILED", "CRASHED", "INTERRUPTED", "SYSTEM_FAILURE"],
      period: "1d",
      isTest: false,
    })) {
      console.log(run);
    }

    Full reference docs.

  • We strongly recommend you upgrade to the latest version of the v3 SDK because we've made some major improvements to run execution reliability.

    Run this command in your repo to easily upgrade:


    npx trigger.dev@beta update

    Run attempts

    Each v3 run has at least one attempt. An attempt is a single execution of your code: if the attempt succeeds, the run succeeds. If the attempt fails, the run is retried with more attempts until it succeeds or the maximum number of attempts is reached.

    You should read the Errors & Retrying guide. If you apply the guidance you will achieve highly reliable runs.
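The attempt loop described above can be sketched in code: execute the run, and on an uncaught error retry until it succeeds or the attempt limit is reached. This is a simplified model for illustration, not the platform's actual scheduler:

```typescript
// Simplified model of run attempts: execute `fn`, retrying on failure
// until it succeeds or `maxAttempts` is exhausted.
async function executeWithAttempts<T>(
  fn: (attempt: number) => Promise<T>,
  maxAttempts: number
): Promise<
  | { ok: true; output: T; attempts: number }
  | { ok: false; error: unknown; attempts: number }
> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const output = await fn(attempt);
      // the run succeeds as soon as one attempt succeeds
      return { ok: true, output, attempts: attempt };
    } catch (error) {
      // a failed attempt: loop again if any attempts remain
      lastError = error;
    }
  }
  return { ok: false, error: lastError, attempts: maxAttempts };
}
```

For example, a task that throws on its first two attempts and then succeeds would report three attempts in total.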

    What's changed and why is it better?

    When we first shipped v3 this is how attempts worked:

    1. Your run is taken from the queue on the platform.
    2. We create a run attempt on the platform.
    3. We spin up a new worker to execute your run.
    4. The worker runs (hopefully).
    5. The attempt succeeds or fails.
    6. Failed attempts go back into the queue.
    7. Repeat 1–6.

    This had a few problems:

    • If the worker failed to start, there'd be a hung attempt. We automatically fail attempts that haven't communicated with the platform recently, so it would try again.
    • Each attempt needed to go back into the queue, which is inefficient and puts load on the platform.
    • If an attempt didn't start, it would still count against your run's attempt limit.

    We've moved creating run attempts to the worker, so now:

    1. Your run is taken from the queue on the platform.
    2. We spin up a new worker to execute your run.
    3. The worker creates a run attempt via the platform.
    4. The worker runs (hopefully).
    5. The attempt succeeds or fails.
    6. Failed attempts are retried by the worker, no need to be requeued.
    7. Repeat 3–6.

    This is far more reliable because attempts are only created when the worker is actually running. It's also more efficient because we don't need to requeue failed attempts. Win-win.

  • From v3 SDK version 3.0.0-beta.34 we have a fully-featured SDK for managing environment variables as well as a convenient way of syncing them from other services. Read the full docs.

    Directly manipulating environment variables

    You can directly use SDK functions to manipulate environment variables. Here is a list of available functions:

    Function              Description
    envvars.list()        List all environment variables
    envvars.upload()      Upload multiple env vars. You can override existing values.
    envvars.create()      Create a new environment variable
    envvars.retrieve()    Retrieve an environment variable
    envvars.update()      Update a single environment variable
    envvars.del()         Delete a single environment variable

    Syncing environment variables from other services

    Instead of using the SDK functions above it's much easier to use our resolveEnvVars function in your trigger.config file.

    In this example we're using env vars from Infisical. You can use this with any secrets manager or environment variable provider.

    /trigger.config.ts

    import type {
      TriggerConfig,
      ResolveEnvironmentVariablesFunction,
    } from "@trigger.dev/sdk/v3";
    import { InfisicalClient } from "@infisical/sdk";

    //This runs when you run the deploy command or the dev command
    export const resolveEnvVars: ResolveEnvironmentVariablesFunction = async ({
      //the project ref (starting with "proj_")
      projectRef,
      //any existing env vars from a .env file or Trigger.dev
      env,
      //"dev", "staging", or "prod"
      environment,
    }) => {
      //the existing environment variables from Trigger.dev (or your local .env file)
      if (
        env.INFISICAL_CLIENT_ID === undefined ||
        env.INFISICAL_CLIENT_SECRET === undefined
      ) {
        //returning undefined won't modify the existing env vars
        return;
      }

      const client = new InfisicalClient({
        clientId: env.INFISICAL_CLIENT_ID,
        clientSecret: env.INFISICAL_CLIENT_SECRET,
      });

      const secrets = await client.listSecrets({
        environment,
        projectId: env.INFISICAL_PROJECT_ID!,
      });

      return {
        variables: secrets.map((secret) => ({
          name: secret.secretKey,
          value: secret.secretValue,
        })),
        // this defaults to true
        // override: true,
      };
    };

    //the rest of your config file
    export const config: TriggerConfig = {
      project: "proj_1234567890",
      //etc
    };

    Read the full docs for the details.

  • From today, organizations using the Trigger.dev Cloud won't be able to create new v2 projects by default. OK, what does this mean and why?

    v3 is the future

    Whilst v3 is in Developer Preview, it's already far more capable than v2 in many areas. It's fundamentally better for writing background jobs than v2 and similar products from competitors.

    More importantly, we're at the point now where we think it would be a mistake for someone creating a new project to use v2 when they can choose v3.

    View the v3 feature matrix to see what is currently supported. Message us on Discord if you need a feature – this helps us prioritize work.

    Exceptions

    If you're using another open source tool that relies on Trigger.dev v2 then we'll make an exception and enable new v2 projects for your account. Let us know if this is the case.

    Existing v2 projects

    This change doesn't impact existing v2 projects, they will continue to work as normal. We continue to make reliability improvements for v2 but no new features will ship now.

    Self-hosting

    Of course, you can continue to self-host Trigger.dev v2. This change doesn't impact self-hosters at all – creating new v2 projects is enabled by default when self-hosting.


    If you need urgent access to v3 then please message us on Discord and we'll see what we can do.

  • From the v3 runs page you can select up to 250 runs and then replay or cancel them all at once.

    Use the checkboxes to select runs from multiple pages and then press "Replay" or "Cancel" to apply the bulk action. You can also use the keyboard shortcuts to make this faster:

    • Replay (r)
    • Cancel (c)
    • Cancel (esc)

    Whenever you perform a "bulk action" we redirect you to the runs page with a filter applied so you can see the runs you just affected. That filter is saved so in the future you can open the Filters menu, select "Bulk action", then select the action you want to see. This is especially useful if your runs take a long time to complete.

    You can always share a link to the runs page with any filters applied; your teammates will see exactly what you see.

  • Alerts

    Get alerted when v3 run attempts fail, deployments fail, or when deployments succeed.

    Create a new alert

    Runs in v3 are attempted multiple times, depending on your retry settings. An attempt fails if an uncaught error is thrown. You can get alerted when an attempt fails – this does not mean that the entire run has failed because you can have more retries remaining. It's worth reading the full guide on Errors and retrying as it will greatly increase the reliability of your tasks.

    For each type of alert you can choose to be notified using email, Slack, or Webhook.

    Read the full alerts guide.

  • Run filtering

    On the Runs page in v3 you can do some advanced filtering to find the runs you're looking for, including:

    • Select multiple statuses.
    • Select multiple environments.
    • Select multiple tasks.
    • Select a run created time period (e.g. last 6 hours).
    • Select previous bulk actions you've applied.

    There are keyboard shortcuts to make this faster:

    • Press F to open the filter menu.
    • Press 1–0 to quickly select options from the menu.
    • Type to filter the options.
    • Press Escape to go back or close the menu.
  • On the Environment Variables page you can now add many environment variables at once, to multiple environments.

    Tip: The easiest way is to paste a list of KEY=VALUE pairs, one per line, into one of the text fields. It will automatically create a new environment variable for each line. This is the format used by .env files.
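A minimal sketch of how KEY=VALUE text like this can be parsed, ignoring comments and blank lines (real .env parsers also handle quoting and escapes):

```typescript
// Parse .env-style text: one KEY=VALUE pair per line.
// Skips blank lines and lines starting with "#".
function parseEnv(text: string): Record<string, string> {
  const vars: Record<string, string> = {};
  for (const rawLine of text.split("\n")) {
    const line = rawLine.trim();
    if (line === "" || line.startsWith("#")) continue;
    const eq = line.indexOf("=");
    if (eq === -1) continue; // not a KEY=VALUE line
    const key = line.slice(0, eq).trim();
    // split on the FIRST "=" only: the value may itself contain "="
    const value = line.slice(eq + 1).trim();
    vars[key] = value;
  }
  return vars;
}
```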

    Add env vars easily

  • v3 Staging

    You can enable the "Staging" environment for your v3 project from the "API keys" page.

    Staging Environment

    Please note that this feature will require a paid plan when we add billing in June.

    Using staging

    Staging has the same features as the Prod environment. You need to deploy your project to the Staging environment separately.

  • New tasks dashboard

    We've added some very useful information to the Tasks page:

    • The number of currently executing runs
    • The number of queued runs
    • The activity over the past 7 days, with breakdowns by status
    • The average duration of runs

    New Tasks Dashboard

    You can quickly find a task using the search field. It searches the task id, function name and file path.

  • Today we're releasing packages v3.0.0-beta.15 with some breaking changes, fixes, and API improvements.

    Breaking changes

    v3.0.0-beta.15 updates the Task.trigger, Task.batchTrigger and their *AndWait variants to use the first parameter for the payload/items, and the second parameter for options.

    Before:


    await yourTask.trigger({
      payload: { foo: "bar" },
      options: { idempotencyKey: "key_1234" },
    });
    await yourTask.triggerAndWait({
      payload: { foo: "bar" },
      options: { idempotencyKey: "key_1234" },
    });

    await yourTask.batchTrigger({
      items: [{ payload: { foo: "bar" } }, { payload: { foo: "baz" } }],
    });
    await yourTask.batchTriggerAndWait({
      items: [{ payload: { foo: "bar" } }, { payload: { foo: "baz" } }],
    });

    After:


    await yourTask.trigger({ foo: "bar" }, { idempotencyKey: "key_1234" });
    await yourTask.triggerAndWait({ foo: "bar" }, { idempotencyKey: "key_1234" });

    await yourTask.batchTrigger([
      { payload: { foo: "bar" } },
      { payload: { foo: "baz" } },
    ]);
    await yourTask.batchTriggerAndWait([
      { payload: { foo: "bar" } },
      { payload: { foo: "baz" } },
    ]);

    We've also changed the API of the triggerAndWait result. Before, if the subtask that was triggered finished with an error, we would automatically "rethrow" the error in the parent task.

    Now instead we're returning a TaskRunResult object that allows you to discriminate between successful and failed runs in the subtask:

    Before:


    try {
      const result = await yourTask.triggerAndWait({ foo: "bar" });

      // result is the output of your task
      console.log("result", result);
    } catch (error) {
      // handle subtask errors here
    }

    After:


    const result = await yourTask.triggerAndWait({ foo: "bar" });

    if (result.ok) {
      console.log(`Run ${result.id} succeeded with output`, result.output);
    } else {
      console.log(`Run ${result.id} failed with error`, result.error);
    }

    Fixes

    • Fixed an issue where triggerAndWait and batchTriggerAndWait failed to resume when an idempotency key matched an already-completed run.
    • Added additional logging around cleaning up dev workers, and we now always kill them after 5 seconds if they haven't already exited.
    • Fixed an issue that caused failed tasks when resuming after calling triggerAndWait or batchTriggerAndWait in prod/staging (this doesn't affect dev).

    The version of Node.js we use for deployed workers (latest 20.x) would crash with an out-of-memory error when the checkpoint was restored. This crash does not happen on Node 18.x or Node 21.x, so we've decided to upgrade the worker version to Node.js 21.x to mitigate this issue.

    You'll need to re-deploy to production to fix the issue.

    Improvements

    • We no longer limit individual task concurrency when no concurrency limit is set on the task or the queue the task belongs to. The concurrency limit falls back to the org/env concurrency limit instead.
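As a rough mental model, the effective limit resolves like this (a hypothetical helper for illustration, not the actual platform code):

```typescript
// Resolve the concurrency limit that applies to a task: use the task's
// own limit if set, otherwise the queue's, otherwise the org/env limit.
function effectiveConcurrencyLimit(
  taskLimit: number | undefined,
  queueLimit: number | undefined,
  envLimit: number
): number {
  return taskLimit ?? queueLimit ?? envLimit;
}
```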

    Upgrade now

    To upgrade to v3.0.0-beta.15, run:


    npm install @trigger.dev/sdk@3.0.0-beta.15

    or


    yarn add @trigger.dev/sdk@3.0.0-beta.15

    or


    pnpm add @trigger.dev/sdk@3.0.0-beta.15

  • Now you can create scheduled tasks in Trigger.dev v3. If you haven't already you should sign up for the v3 waitlist.

    At the end I'll reveal what schedule the CRON pattern in the image above, 0 17 * * 5, represents. You CRON nerds need closure.

    In the video my scheduled task posts a GIF to Slack every minute.

    Features

    • Define your task in code using schedules.task()
    • Attach many schedules (across environments) to each task in the dashboard.
    • Attach schedules using the SDK (you can do dynamic things like a schedule for each of your users using externalId)
    • List, view, edit, disable, re-enable, create and delete using the dashboard, SDK and REST API.
    • Use AI to help you create CRON patterns in the dashboard.

    Full docs here: https://trigger.dev/docs/v3/tasks-scheduled

    An example scheduled task

    This is the scheduled task code from the video above.

    /trigger/random-time-gif.ts

    import { WebClient } from "@slack/web-api";
    import { retry, schedules } from "@trigger.dev/sdk/v3";

    const slack = new WebClient(process.env.SLACK_BOT_TOKEN);

    //define the task using schedules.task()
    export const randomTimeGif = schedules.task({
      id: "random-time-gif",
      run: async (payload) => {
        //Get 25 GIFs related to "time" from the giphy API
        const randomGifResponse = await retry.fetch(
          `https://api.giphy.com/v1/gifs/search?api_key=${process.env.GIPHY_API_KEY}&q=time&limit=25&offset=0&rating=g&lang=en&bundle=messaging_non_clips`
        );
        const json = await randomGifResponse.json();

        //pick a random GIF URL from the results
        const resultCount = json.data.length;
        const randomIndex = Math.floor(Math.random() * resultCount);
        const url = json.data[randomIndex]?.url;

        if (!url) {
          throw new Error("No gif found");
        }

        //post to a Slack channel
        const result = await slack.chat.postMessage({
          text: url,
          channel: process.env.SLACK_CHANNEL_ID!,
        });

        return { success: true };
      },
    });

    As shown in the video, you can attach a schedule to this task in the dashboard. This is great for most use cases.

    Dynamic schedules (or multi-tenant)

    Using the SDK you can do more advanced scheduling. For example, you could let your users define a schedule that they want to post GIFs to their own Slack workspace channel.

    First take what we had before and modify the task slightly:

    /trigger/scheduled-slack-gif.ts

    import { WebClient } from "@slack/web-api";
    import { retry, schedules } from "@trigger.dev/sdk/v3";
    import { db } from "@/db";

    //define the task using schedules.task()
    export const scheduledSlackGifs = schedules.task({
      id: "scheduled-slack-gifs",
      run: async (payload) => {
        //we'll set the externalId to a row in our database when we create the schedule
        if (!payload.externalId) {
          throw new Error("externalId is required");
        }

        //get the details of what the user wants
        const { slackToken, slackChannelId, gifSearchQuery } =
          await db.getGifSchedule(payload.externalId);

        //Get 25 GIFs matching the user's search query from the giphy API
        const randomGifResponse = await retry.fetch(
          `https://api.giphy.com/v1/gifs/search?api_key=${
            process.env.GIPHY_API_KEY
          }&q=${encodeURIComponent(
            gifSearchQuery
          )}&limit=25&offset=0&rating=g&lang=en&bundle=messaging_non_clips`
        );
        const json = await randomGifResponse.json();

        //pick a random GIF URL from the results
        const resultCount = json.data.length;
        const randomIndex = Math.floor(Math.random() * resultCount);
        const url = json.data[randomIndex]?.url;

        if (!url) {
          throw new Error("No gif found");
        }

        //post to the user's Slack channel
        const slack = new WebClient(slackToken);
        const result = await slack.chat.postMessage({
          text: url,
          channel: slackChannelId,
        });

        return { success: true };
      },
    });

    Then in your backend code you need to register a schedule for this task. This is a Next.js server action but the only thing that matters is that it's on your server:


    "use server";

    import { scheduledSlackGifs } from "@/trigger/scheduled-slack-gif";
    import { schedules } from "@trigger.dev/sdk/v3";
    import { db } from "@/db";

    export async function registerSchedule(
      userId: string,
      cron: string,
      searchQuery: string
    ) {
      try {
        //create a record for the GIF schedule for this user
        const row = await db.createGifSchedule(userId, searchQuery);

        //create a new schedule for this GIF schedule
        const createdSchedule = await schedules.create({
          task: scheduledSlackGifs.id,
          cron,
          //the row id, so we can retrieve the row inside the run
          externalId: row.id,
          //don't allow multiple schedules for the same GIF schedule
          deduplicationKey: row.id,
        });

        return { scheduleId: createdSchedule.id };
      } catch (error) {
        console.error(error);
        return {
          error: "something went wrong",
        };
      }
    }

    The user has defined their own reminder frequency and what kind of GIFs they want.

    Read the full docs for everything that scheduled tasks allow.

    So the CRON pattern 0 17 * * 5 is every Friday at 5pm (UTC). I hope that helps you sleep at night.
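If you want to decode patterns like this yourself: the five fields of a standard CRON expression are minute, hour, day-of-month, month, and day-of-week (where 0 is Sunday, so 5 is Friday). A tiny sketch of splitting a pattern into its named fields:

```typescript
// Split a five-field cron expression into named fields.
// Standard cron field order: minute, hour, day-of-month, month, day-of-week.
function decodeCron(pattern: string) {
  const [minute, hour, dayOfMonth, month, dayOfWeek] = pattern
    .trim()
    .split(/\s+/);
  return { minute, hour, dayOfMonth, month, dayOfWeek };
}
```

So decodeCron("0 17 * * 5") reads: minute 0 of hour 17, on any day of the month, in any month, but only when the day of the week is 5, which is exactly "every Friday at 5pm".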

  • The Trigger.dev v3 Developer Preview begins in a couple of weeks. So it's the perfect time to share some of the changes coming to our dashboard.

    If you haven't already you should sign up for early access. I'll wait here.

    Runs with realtime traces

    A flat log of events is hard to follow, especially when dealing with a complex task with lots of asynchronous operations. Luckily there's a great existing solution for this: traces.

    We're using OpenTelemetry to give you realtime traces on your tasks. That means you can see a hierarchical view of everything that's happening, including the span of time, logs, and any subtasks you've triggered.

    Fun fact: OpenTelemetry doesn't support pending spans so we had to perform some wizardry to get the correct hierarchy even while a task is still running.

    View every log across all your tasks

    Logs page

    When you have a problem it's easier to search for a specific log across all your tasks. The new logs page has text search and advanced filtering so you can find problems faster. Then you can jump to see the associated run.

    Test your tasks right from the dashboard

    Test

    The new test page makes it faster to test a specific task and clearer which environment it will run against.

    Create and manage environment variables

    Environment variables

    In v3 your code is deployed in a container. This has lots of advantages, including no timeouts and far better durability. It does mean you need to add any environment variables your code needs.

    We will be shipping integrations with popular platforms like Vercel so you don't have to manually add/update these. But for now you can manage them directly in the dashboard.

    View all your deployed tasks in one place

    Deployments

    Every time you deploy, a new version of all your tasks is created. Any existing runs will continue to use the version they started on. But any runs that haven't started will use the latest version.

    On the deployments page you can quickly see all the versions that have been deployed along with the associated tasks.

  • You can (finally) rename and delete your projects and organizations.

    This has been a highly requested feature because sometimes you want to have a sandbox to play around in before you create the real thing. Or maybe you're just a terrible typist. No judgement.

    Project settings in the sidebar

    Make sure you're really sure when you delete anything because there's no going back. To guard against this we ask you to type in the slug to confirm you want to delete it.

    How to update

    The Trigger.dev Cloud is now running v2.2.36. If you are self-hosting you can upgrade by pinning to the v2.2.36 tag.

  • We've added a new section called "Events" to the side menu. Here you can view all the events that have been received in your project.

    Events are sent when using client.sendEvent(), from webhooks using our integrations, and when using HTTP Endpoints.

    Using the new run filters

    An event can trigger more than one run, so you can click through from an event to see the runs that it triggered, as well as view the payload and context.

    Thanks to Kritik-J for the excellent Pull Request.

    How to update

    The Trigger.dev Cloud is now running v2.2.28. If you are self-hosting you can upgrade by pinning to the v2.2.28 tag.

  • Filter your runs

    You can now quickly find runs by filtering by environment (Dev, Staging, Prod) and by run status.

    Using the new run filters

    All of the filters are stored in the URL so you can use the magic of copy+paste to quickly share the filtered view with your team.

    We've also changed which Dev runs we show in the runs list. Previously we only showed your Dev runs so it was impossible to see runs from your teammates. Now we show you all Dev runs, and clearly label runs which aren't yours.

    Thanks to Abhi1992002 for their great Pull Request.

    How to update

    The Trigger.dev Cloud is now running v2.2.27. If you are self-hosting you can upgrade by pinning to the v2.2.27 tag.

  • Yet Another Local Tunnel

    When working with Trigger.dev in local development, we automatically open a free ngrok tunnel to your local app server so the Trigger.dev server can send requests to it. Unfortunately, free ngrok tunnels are restricted to 120 requests per minute and return a 429 error when that limit is exceeded.

    Starting with the latest @trigger.dev/cli release, we have dropped ngrok in favor of a custom tunneling solution which we're calling "yalt.dev" (Yet Another Local Tunnel). Yalt.dev is a tunneling solution built on top of Cloudflare Workers and Durable Objects that is designed to be used with the Trigger.dev Cloud service, and it drops all limits on the number of requests that can be sent to your local app server.

    How to use it

    If you are on the latest version of the CLI, you can run npx @trigger.dev/cli@latest dev as you normally would, and the CLI will automatically use yalt.dev instead of ngrok. Currently yalt.dev is only available to users of the Trigger.dev Cloud service, so if you are self-hosting the Trigger.dev server, the CLI will automatically fall back to using ngrok.

    You can also continue to use your own custom tunnel URL by passing the -t flag to the CLI, for example npx @trigger.dev/cli@latest dev -t mycustomtunnel.com.

    How it works

    When you run the npx @trigger.dev/cli@latest dev command on your local machine, the CLI makes a request to the Trigger.dev Cloud service to create a new yalt.dev tunnel and gets back the tunnel URL (which looks like <tunnelId>.yalt.dev). That tunnel URL resolves to a Cloudflare Worker that we've deployed (and you can view the source code for it here).

    The CLI will then open a new WebSocket connection (powered by partysocket 🎈) which is forwarded to a Durable Object. Finally, the CLI will then register that tunnel URL with your Trigger.dev endpoint.

    Now that the tunnel setup is complete, when the Trigger.dev server wants to send a request to your endpoint, it makes a POST request to the <tunnelId>.yalt.dev URL. That request is handled by the same Cloudflare Worker, which forwards it to the tunnel's Durable Object instance, which holds the WebSocket connection to your local CLI in memory. The CLI receives the request, forwards it to your local app server, and responds over the WebSocket connection with your app server's response.
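That round-trip can be modeled as a simple id-matched message protocol over a bidirectional channel: one side sends a request message with an id, the other side handles it and replies with a response carrying the same id. Here's a self-contained sketch using an in-memory channel in place of the WebSocket (illustrative only, not the actual yalt.dev protocol):

```typescript
// In-memory stand-in for a bidirectional WebSocket-like channel.
type Handler = (msg: string) => void;

class Channel {
  private handler: Handler = () => {};
  peer!: Channel;
  onMessage(h: Handler) { this.handler = h; }
  // deliver asynchronously to the peer, like a real socket would
  send(msg: string) { queueMicrotask(() => this.peer.handler(msg)); }
}

function channelPair(): [Channel, Channel] {
  const a = new Channel();
  const b = new Channel();
  a.peer = b;
  b.peer = a;
  return [a, b];
}

// "CLI" side: receives request messages, asks the local app, replies with the same id.
function serveLocalApp(socket: Channel, app: (path: string) => string) {
  socket.onMessage((raw) => {
    const { id, path } = JSON.parse(raw);
    socket.send(JSON.stringify({ id, body: app(path) }));
  });
}

// "Server" side: sends a request over the tunnel and waits for the matching response.
function tunnelRequest(socket: Channel, path: string): Promise<string> {
  const id = Math.random().toString(36).slice(2);
  return new Promise((resolve) => {
    socket.onMessage((raw) => {
      const msg = JSON.parse(raw);
      if (msg.id === id) resolve(msg.body);
    });
    socket.send(JSON.stringify({ id, path }));
  });
}
```

Matching responses to requests by id is what lets a single long-lived connection multiplex many in-flight requests.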


  • Usage and billing

    You may have noticed the Usage and Billing page in the app has had a bit of a face lift. You can now subscribe to a plan directly from the app and view your current monthly usage.

    Usage

    Usage is now separated into concurrent runs and runs. Concurrent runs are the number of runs executing at the same time. Runs is the total number of runs executed in the current month.

    Usage

    Billing

    Alongside the Usage page is a new Plans tab where you can upgrade or downgrade your subscription as well as estimate your monthly usage using the calculator.

    We've made the free plan more generous – with 10,000 runs per month and up to 5 concurrent runs. Of course, you can also always self-host.

    If you need more than 10,000 runs per month or a greater number of concurrent runs, you can upgrade to the paid plans from this page.

    As a special gesture this month, we’re increasing the concurrent runs to 25 for all free users (until January 1st).

    Plans

    Previously, you could only use Trigger.dev in Node.js projects, as our SDKs and integration packages were only available for Node.js. Today we are excited to announce that we've upgraded our SDKs to support three additional JavaScript runtimes:

    We've also added an adapter for Hono.dev, which also supports these runtimes. You can read more about getting started in our Hono Quickstart or check out our Cloudflare Worker example project.

    How to update

    The Trigger.dev Cloud is now running v2.2.20. If you are self-hosting you can upgrade by pinning to the v2.2.20 tag.

    The @trigger.dev/* packages are now at v2.3.0. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • Shopify Integration

    Shopify is an e-commerce platform that needs no introduction.

    You can now use our new Shopify Integration to easily add webhook triggers and perform tasks on Shopify resources:


    client.defineJob({
      id: "shopify-create-customized-product",
      name: "Shopify - Create Customized Product",
      version: "1.0.0",
      integrations: {
        shopify,
      },
      trigger: shopify.on("customers/create"),
      run: async (payload, io, ctx) => {
        const firstName = payload.first_name;

        // Create a customized product
        const product = await io.shopify.rest.Product.save("teapot", {
          fromData: {
            title: `${firstName}'s Teapot`,
          },
        });

        return {
          message: `Hi, ${firstName}! Bet you'll love this: ${product.title}`,
        };
      },
    });

    This is the first integration to use our updated Webhook Triggers which come with a new UI. The dashboard can be found under Triggers -> Webhook Triggers. You can now see both webhook registrations and deliveries!

    image

    See more details in the Shopify integration docs.

  • Key-Value Store

    The Key-Value Store enables you to store and retrieve small chunks of serializable data in and outside of your Jobs.

    Use the io.store object to access namespaced stores inside of your Run functions. Or use client.store to access the environment store anywhere in your app!

    Sharing data across Runs with io.store.job:


    client.defineJob({
      ...
      run: async (payload, io, ctx) => {
        // this will only be undefined on the first run
        const counter = await io.store.job.get<number>("job-get", "counter");

        const currentCount = counter ?? 0;
        const incrementedCounter = currentCount + 1;

        // Job-scoped set
        await io.store.job.set("job-set", "counter", incrementedCounter);
      },
    });

    Sharing data across Jobs with io.store.env:


    client.defineJob({
      id: "job-1",
      ...
      run: async (payload, io, ctx) => {
        // store data in one job
        await io.store.env.set("cacheKey", "cross-run-shared-key", { foo: "bar" });
      },
    });

    client.defineJob({
      id: "job-2",
      ...
      run: async (payload, io, ctx) => {
        // access from a different job
        const value = await io.store.env.get<{ foo: string }>("cacheKey", "cross-run-shared-key");
      },
    });

    // or anywhere else in your app
    await client.store.env.get<{ foo: string }>("cross-run-shared-key");

    Full docs for io.store and client.store.

  • Using our new HTTP endpoints feature you can now create triggers for your jobs for any API that supports webhooks.

    We've been busy adding lots of new HTTP endpoint code examples for you to use in your projects.

    HTTP endpoint code examples added in November

    We will continue adding to this list over time. Keep an eye out for new examples in our API section. You can also contribute your own examples to our API reference repo.

  • Resend v2.0.0 support

    Our @trigger.dev/resend package has been updated to work with the latest resend-node 2.0.0 version, which brings with it a number of fixes and some additional tasks.

    See our Resend integration docs for more.

    How to update

    The trigger.dev/* packages are now at v2.2.7. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • Run Notifications

    Before today, it was a very clunky experience trying to build workflows that responded to your job runs completing or failing. But now, you can easily subscribe to run notifications and perform additional work.

    There are two ways of subscribing to run notifications:

    Across all jobs

    You can use the TriggerClient.on instance method to subscribe to all job run notifications. This is useful if you want to perform some work after any job run completes or fails:


    export const client = new TriggerClient({
      id: "my-project",
      apiKey: process.env.TRIGGER_API_KEY,
    });

    client.on("runSucceeded", async (notification) => {
      console.log(`Run on job ${notification.job.id} succeeded`);
    });

    client.on("runFailed", async (notification) => {
      console.log(`Run on job ${notification.job.id} failed`);
    });

    On a specific job

    You can also pass onSuccess or onFailure when defining a job to subscribe to notifications for that specific job:


    client.defineJob({
      id: "github-integration-on-issue",
      name: "GitHub Integration - On Issue",
      version: "0.1.0",
      trigger: github.triggers.repo({
        event: events.onIssue,
        owner: "triggerdotdev",
        repo: "empty",
      }),
      onSuccess: async (notification) => {
        console.log("Job succeeded", notification);
      },
      onFailure: async (notification) => {
        console.log("Job failed", notification);
      },
      run: async (payload, io, ctx) => {
        // ...
      },
    });

    Run notifications

    All run notifications contain the following info:

    • The run's ID
    • The run's status
    • The run's duration
    • The run's start time
    • The run's payload
    • The run's explicit statuses (if any)
    • Whether or not the run was a test run
    • Which job the run belongs to
    • Which environment the run belongs to
    • Which project the run belongs to
    • Which organization the run belongs to
    • The external account associated with the run (if any)

    Successful run notifications also contain the output of the run. Failed run notifications contain the error and the task that failed.

    You can see the full run notification schema here

    How does it work?

    Run notifications work by making a separate HTTP request to your endpoint URL after a run completes or fails, which means that you get a fresh serverless function execution to perform additional work.

    We only send these notifications if you've subscribed to them, so if you want to stop receiving them, just remove the code that subscribes to them and we'll stop sending them.

    How to update

    The Trigger.dev Cloud is now running v2.2.10. If you are self-hosting you can upgrade by pinning to the v2.2.10 tag.

    The trigger.dev/* packages are now at v2.2.7. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • New API section

    We've added a new APIs section to the site. Here you can browse APIs by category and view working code samples of how to use each API with Trigger.dev (40+ APIs and counting!).

    We've included job examples for all of our integrations, as well as code samples showing how to connect to many APIs using their official Node SDKs or fetch.

    For example, our OpenAI API page includes multiple working code samples as well as full-stack projects, all using our OpenAI integration:

    OpenAI API page

    All the code is regularly maintained and updated by our team and the amazing open source community, and can be copied and pasted to use in your own projects.

  • Task Library & more tasks

    Task Library

    We now have a dedicated docs page called Task Library where you can easily find and learn about built-in tasks to use in your jobs, like waitForEvent() or backgroundFetch():

    image

    New Tasks

    We also have a few new tasks that we've added to the library:

    io.backgroundPoll()

    image

    This task is similar to backgroundFetch, but instead of waiting for a single request to complete, it will poll a URL until it returns a certain value:


    const result = await io.backgroundPoll<{ foo: string }>("🔃", {
      url: "https://example.com/api/endpoint",
      interval: 10, // every 10 seconds
      timeout: 300, // stop polling after 5 minutes
      responseFilter: {
        // stop polling once this filter matches
        status: [200],
        body: {
          status: ["SUCCESS"],
        },
      },
    });

    We even display each poll request in the run dashboard:

    image

    See our reference docs to learn more.

    io.sendEvents()

    image

    io.sendEvents() allows you to send multiple events at a time:


    await io.sendEvents("send-events", [
      {
        name: "new.user",
        payload: {
          userId: "u_12345",
        },
      },
      {
        name: "new.user",
        payload: {
          userId: "u_67890",
        },
      },
    ]);

    See our reference docs to learn more.

    io.random()

    io.random() is identical to Math.random() when called without options, but ensures your random numbers are not regenerated on resume or retry. It returns a pseudo-random floating-point number between optional min (default: 0, inclusive) and max (default: 1, exclusive), and can optionally round to the nearest integer.
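As a sketch of the semantics described above (setting aside the caching that makes results stable across resumes and retries, and with option names that are illustrative rather than taken from the SDK), the value produced is equivalent to:

```typescript
// Sketch only: the min/max/round semantics described above, minus the
// caching that makes io.random() stable across resumes and retries.
// The parameter names here are illustrative, not the SDK's actual signature.
function randomValue(min = 0, max = 1, round = false): number {
  const value = Math.random() * (max - min) + min; // min inclusive, max exclusive
  return round ? Math.round(value) : value;
}
```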

    How to update

    The trigger.dev/* packages are now at v2.2.6. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • Wait for Request

    When we released our Replicate integration last month, we added support for tasks that could be completed via a one-time webhook request to support Replicate Prediction webhooks. Replicate webhooks work by providing a URL for a "callback" request when creating a prediction:


    await replicate.predictions.create({
      version: "d55b9f2d...",
      input: { prompt: "call me later maybe" },
      webhook: "https://example.com/replicate-webhook",
      webhook_events_filter: ["completed"], // optional
    });

    This allowed us to create an integration task that uses these webhooks to provide a seamless experience when creating a prediction:


    const prediction = await io.replicate.predictions.createAndAwait(
      "create-prediction",
      {
        version: "d55b9f2d...",
        input: {
          prompt: "call me later maybe",
        },
      }
    );

    We've now exposed the same functionality so anyone can take advantage of similar APIs with our new io.waitForRequest() built-in task. It waits for a request to be made to a unique URL and then returns the request body as the task result. This is useful for any API that requires a webhook to be set up, or a callback URL to be provided.

    For example, you could use it to interface with ScreenshotOne.com to take a screenshot of a website and resume execution once the screenshot is ready:


    const result = await io.waitForRequest(
      "screenshot-one",
      async (url) => {
        await fetch(`https://api.screenshotone.com/take`, {
          method: "POST",
          headers: {
            "Content-Type": "application/json",
          },
          body: JSON.stringify({
            access_key: process.env.SCREENSHOT_ONE_API_KEY,
            url: "https://trigger.dev",
            store: "true",
            storage_path: "my-screenshots",
            response_type: "json",
            async: "true",
            webhook_url: url, // this is the URL that will be called when the screenshot is ready
            storage_return_location: "true",
          }),
        });
      },
      {
        timeoutInSeconds: 300, // wait up to 5 minutes for the screenshot to be ready
      }
    );

    How to update

    The trigger.dev/* packages are now at v2.2.6. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • Wait for Event

    Up until now, you could only trigger a new job run when sending an event to Trigger.dev using eventTrigger():


    client.defineJob({
      id: "payment-accepted",
      name: "Payment Accepted",
      version: "1.0.0",
      trigger: eventTrigger({
        name: "payment.accepted",
        schema: z.object({
          id: z.string(),
          amount: z.number(),
          currency: z.string(),
          userId: z.string(),
        }),
      }),
      run: async (payload, io, ctx) => {
        // Do something when a payment is accepted
      },
    });

    Now with io.waitForEvent(), you can wait for an event to be sent in the middle of a job run:


    const event = await io.waitForEvent("🤑", {
      name: "payment.accepted",
      schema: z.object({
        id: z.string(),
        amount: z.number(),
        currency: z.string(),
        userId: z.string(),
      }),
      filter: {
        userId: ["user_1234"], // only wait for events from this specific user
      },
    });

    By default, io.waitForEvent() will wait for 1 hour for an event to be sent. If no event is sent within that time, it will throw an error. You can customize the timeout by passing a second argument:


    const event = await io.waitForEvent(
      "🤑",
      {
        name: "payment.accepted",
        schema: z.object({
          id: z.string(),
          amount: z.number(),
          currency: z.string(),
          userId: z.string(),
        }),
        filter: {
          userId: ["user_1234"], // only wait for events from this specific user
        },
      },
      {
        timeoutInSeconds: 60 * 60 * 24 * 7, // wait for 1 week
      }
    );

    This will allow you to build more complex workflows that simply were not possible before, or were at least a pain to implement. We're excited to see what you build with this new feature!

    Read more about it in the docs.

    How to update

    The trigger.dev/* packages are now at v2.2.6. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • We've added a Cloudflare worker proxy and SQS queue to improve the performance and reliability of our API. It's not just for our Cloud product: you can optionally use it if you're self-hosting, as all the code is in our open-source repository.

    To begin with, we're using this when events are sent to us. That happens when you use client.sendEvent and client.sendEvents. More API routes will be supported in the future.

    How does it work?

    Requests to the API are proxied through a Cloudflare worker. That worker intercepts certain API routes and sends the data to an SQS queue. The main API server then polls the queue for new data and processes it.

    Why is this better?

    If there is any downtime on the main API servers, we won't lose events. The Cloudflare worker will queue them up and the main API server will process them when it's back online.

    Also, it allows us to deal with more load than before. The Cloudflare worker can handle a lot of requests and the main API server can process them at its own pace.

    Event processing can be scaled horizontally with more workers that poll the queue.
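The routing decision the worker makes can be sketched as a simple predicate. This is illustrative only: the route paths below are assumptions for the sake of the example, not Trigger.dev's actual API routes.

```typescript
// Sketch of the proxy's routing decision: event-sending requests are queued
// in SQS for the API server to pull, everything else passes straight through
// to the origin. The paths here are illustrative assumptions.
const QUEUED_ROUTES = new Set(["/api/v1/events", "/api/v1/events/bulk"]);

function shouldQueue(method: string, path: string): boolean {
  return method === "POST" && QUEUED_ROUTES.has(path);
}
```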

    How to update

    The Trigger.dev Cloud is now running v2.2.9. If you are self-hosting you can upgrade by pinning to the v2.2.9 tag.

  • OpenAI Assistants & more

    After OpenAI DevDay last week we got busy working on our @trigger.dev/openai integration, and we're happy to announce that we now support GPT-4 Turbo, the new Assistants API, DALL·E 3, and more, along with some additional enhancements.

    GPT-4 Turbo

    GPT-4 Turbo is the newest model from OpenAI that includes up to 128K context and lower prices, and is supported by our integration by specifying the gpt-4-1106-preview model:


    await io.openai.chat.completions.create("debater-completion", {
      model: "gpt-4-1106-preview",
      messages: [
        {
          role: "user",
          content:
            'I want you to act as a debater. I will provide you with some topics related to current events and your task is to research both sides of the debates, present valid arguments for each side, refute opposing points of view, and draw persuasive conclusions based on evidence. Your goal is to help people come away from the discussion with increased knowledge and insight into the topic at hand. My first request is "I want an opinion piece about Deno."',
        },
      ],
    });

    We recommend the backgroundCreate variant, though: even though it's called Turbo, during the preview it can take a while to complete:


    // This will run in the background, so you don't have to worry about serverless function timeouts
    await io.openai.chat.completions.backgroundCreate("debater-completion", {
      model: "gpt-4-1106-preview",
      messages: [
        {
          role: "user",
          content:
            'I want you to act as a debater. I will provide you with some topics related to current events and your task is to research both sides of the debates, present valid arguments for each side, refute opposing points of view, and draw persuasive conclusions based on evidence. Your goal is to help people come away from the discussion with increased knowledge and insight into the topic at hand. My first request is "I want an opinion piece about Deno."',
        },
      ],
    });

    We've also improved completion tasks and added additional properties that make it easier to see your rate limits and how many tokens you have left:

    image

    Additionally, if an OpenAI request fails because of a rate limit error, we will automatically retry the request only after the rate limit has been reset.
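OpenAI reports when limits reset via response headers that use duration strings such as "1m30s". Computing a retry delay from one of those might look like the sketch below; this is illustrative, not the integration's actual implementation, and the header format is an assumption based on OpenAI's documented ratelimit headers.

```typescript
// Sketch: parse an OpenAI-style ratelimit reset duration ("1m30s", "250ms")
// into milliseconds, so a retry can be scheduled for after the reset rather
// than fired immediately. Not the integration's actual code.
function parseResetDuration(reset: string): number {
  const unit: Record<string, number> = { ms: 1, s: 1000, m: 60_000, h: 3_600_000 };
  let total = 0;
  for (const [, n, u] of reset.matchAll(/(\d+(?:\.\d+)?)(ms|s|m|h)/g)) {
    total += parseFloat(n) * unit[u];
  }
  return total;
}
```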

    Assistants

    Also released on DevDay was the new Assistants API, which we now have support for:


    // Create a file and wait for it to be processed
    const file = await io.openai.files.createAndWaitForProcessing("upload-file", {
      purpose: "assistants",
      file: fs.createReadStream("./fixtures/mydata.csv"),
    });

    // Create the assistant
    const assistant = await io.openai.beta.assistants.create("create-assistant", {
      name: "Data visualizer",
      description:
        "You are great at creating beautiful data visualizations. You analyze data present in .csv files, understand trends, and come up with data visualizations relevant to those trends. You also share a brief text summary of the trends observed.",
      model: payload.model,
      tools: [{ type: "code_interpreter" }],
      file_ids: [file.id],
    });

    // Sometime later, you can now use the assistant by the assistant id:
    const run = await io.openai.beta.threads.createAndRunUntilCompletion(
      "create-thread",
      {
        assistant_id: payload.id,
        thread: {
          messages: [
            {
              role: "user",
              content:
                "Create 3 data visualizations based on the trends in this file.",
              file_ids: [payload.fileId],
            },
          ],
        },
      }
    );

    if (run.status !== "completed") {
      throw new Error(
        `Run finished with status ${run.status}: ${JSON.stringify(run.last_error)}`
      );
    }

    const messages = await io.openai.beta.threads.messages.list(
      "list-messages",
      run.thread_id
    );

    For more about how to use Assistants, check out our new OpenAI docs.

    Images

    We've added support for creating images in the background, similar to how our background completion works:


    const response = await io.openai.images.backgroundCreate("dalle-3-background", {
      model: "dall-e-3",
      prompt:
        "Create a comic strip featuring miles morales and spiderpunk fighting off the sinister six",
    });

    Files

    You can now wait for a file to be processed before continuing:


    const file = await io.openai.files.create("upload-file", {
      purpose: "assistants",
      file: fs.createReadStream("./fixtures/mydata.csv"),
    });

    await io.openai.files.waitForProcessing("wait-for-file", file.id);

    Or you can combine that into a single call:


    const file = await io.openai.files.createAndWaitForProcessing("upload-file", {
      purpose: "assistants",
      file: fs.createReadStream("./fixtures/mydata.csv"),
    });

    New Docs

    We've completely rewritten our OpenAI docs to make it easier to understand how to use our integration. Check them out here.

    How to update

    The trigger.dev/* packages are now at v2.2.6. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • Often you want to trigger your Jobs from events that happen in other APIs. This is where webhooks come in.

    Now you can easily subscribe to any API that supports webhooks, without needing to use a Trigger.dev Integration. This unlocks far more Jobs than were previously possible.

    How to create an HTTP endpoint

    We want to send a Slack message when one of our cal.com meetings is cancelled. To do this we need to create an HTTP endpoint that cal.com can send a webhook to.


    //create an HTTP endpoint
    const caldotcom = client.defineHttpEndpoint({
      id: "cal.com",
      source: "cal.com",
      icon: "caldotcom",
      verify: async (request) => {
        //this helper function makes verifying most webhooks easy
        return await verifyRequestSignature({
          request,
          headerName: "X-Cal-Signature-256",
          secret: process.env.CALDOTCOM_SECRET!,
          algorithm: "sha256",
        });
      },
    });
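Under the hood, this kind of verification is an HMAC comparison. A minimal sketch of what a sha256 check like the one above involves, using Node's crypto module directly rather than the SDK helper (exact details such as hex vs. base64 encoding or header prefixes vary by provider):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of sha256 webhook signature verification: recompute the HMAC of the
// raw request body with the shared secret and compare it to the header value
// in constant time. Illustrative of the idea behind verifyRequestSignature,
// not its actual implementation.
function verifySignature(rawBody: string, signature: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  return a.length === b.length && timingSafeEqual(a, b);
}
```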

    Getting the URL and secret from the Trigger.dev dashboard

    There's a new section in the sidebar: "HTTP endpoints".

    HTTP endpoints

    From there you can select the HTTP endpoint you just created and get the URL and secret. In this case, it's cal.com.

    HTTP endpoint details

    Each environment has a different Webhook URL so you can control which environment you want to trigger Jobs for.

    Setting up the webhook in cal.com

    In cal.com you can navigate to "Settings/Webhooks/New" to create a new webhook.

    cal.com webhook

    Enter the URL and secret from the Trigger.dev dashboard and select the events you want to trigger Jobs for.

    We could select only "Booking cancelled", but we're going to select all the events so we can reuse this webhook for more than a single trigger.

    Using HTTP endpoints to create Triggers

    Then we can use that HTTP endpoint to create multiple Triggers for our Jobs. They can have different filters that use the data from the webhook.


    client.defineJob({
      id: "http-caldotcom",
      name: "HTTP Cal.com",
      version: "1.0.0",
      enabled: true,
      //create a Trigger from the HTTP endpoint above. The filter is optional.
      trigger: caldotcom.onRequest({
        filter: { body: { triggerEvent: ["BOOKING_CANCELLED"] } },
      }),
      run: async (request, io, ctx) => {
        //note that when using HTTP endpoints, the first parameter is the request
        //you need to get the body, usually it will be json so you do:
        const body = await request.json();

        //this prints out "Matt Aitken cancelled their meeting"
        await io.logger.info(
          `${body.payload.attendees
            .map((a) => a.name)
            .join(", ")} cancelled their meeting ${new Date(
            body.payload.startTime
          )}`
        );
      },
    });
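The filter above follows a simple rule: each leaf is an array of allowed values, matched against the corresponding field of the request body. A sketch of those matching semantics as we understand them (illustrative, not the server's actual code):

```typescript
// Sketch of request filter matching: objects are traversed key by key, and a
// leaf array lists the values that field is allowed to have. Illustrative of
// the filter semantics, not Trigger.dev's actual matcher.
function matchesFilter(filter: unknown, data: unknown): boolean {
  if (Array.isArray(filter)) return filter.includes(data);
  if (typeof filter === "object" && filter !== null) {
    return Object.entries(filter).every(
      ([key, sub]) =>
        typeof data === "object" &&
        data !== null &&
        matchesFilter(sub, (data as Record<string, unknown>)[key])
    );
  }
  return false;
}
```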

    See our HTTP endpoint docs for more info. Upgrade to the latest version of the SDK to start using this feature.

    How to update

    The Trigger.dev Cloud is now running v2.2.4. If you are self-hosting you can upgrade by pinning to the v2.2.4 tag.

    The trigger.dev/* packages are now at v2.2.5. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • Invoke Trigger

    Up until now, Trigger.dev supported only three types of trigger: Events, Webhooks, and Scheduled (e.g. cron/interval).

    But sometimes it makes sense to be able to invoke a Job manually, without having to specify an event, especially for cases where you want to get notified when the invoked Job Run is complete.

    To specify that a job is manually invokable, you can use the invokeTrigger() function when defining a job:


    import { z } from "zod";
    import { invokeTrigger } from "@trigger.dev/sdk";
    import { client } from "@/trigger";

    export const exampleJob = client.defineJob({
      id: "example-job",
      name: "Example job",
      version: "1.0.1",
      trigger: invokeTrigger({
        schema: z.object({
          foo: z.string(),
        }),
      }),
      run: async (payload, io, ctx) => {
        // do something with the payload
      },
    });

    And then you can invoke the job using the Job.invoke() method:


    import { exampleJob } from "./exampleJob";

    const jobRun = await exampleJob.invoke(
      { foo: "bar" },
      { callbackUrl: `${process.env.VERCEL_URL}/api/callback` }
    );

    Which is great, but things become really cool when you invoke a job from another job and wait for the invoked job to complete:


    import { exampleJob } from "./exampleJob";

    client.defineJob({
      id: "example-job2",
      name: "Example job 2",
      version: "1.0.1",
      trigger: intervalTrigger({
        seconds: 60,
      }),
      run: async (payload, io, ctx) => {
        const runResult = await exampleJob.invokeAndWaitForCompletion("⚡", {
          foo: "123",
        });
      },
    });

    You can also batch up to 25 invocations at once, and we will run them in parallel and wait for all of them to complete before continuing execution of the current job.


    import { exampleJob } from "./exampleJob";

    client.defineJob({
      id: "example-job2",
      name: "Example job 2",
      version: "1.0.1",
      trigger: intervalTrigger({
        seconds: 60,
      }),
      run: async (payload, io, ctx) => {
        const runs = await exampleJob.batchInvokeAndWaitForCompletion("⚡", [
          {
            payload: {
              userId: "123",
              tier: "free",
            },
          },
          {
            payload: {
              userId: "abc",
              tier: "paid",
            },
          },
        ]);

        // runs is an array of RunNotification objects
      },
    });

    See our Invoke Trigger docs for more info. Upgrade to the latest version of the SDK to start using this feature.

    How to update

    The Trigger.dev Cloud is now running v2.2.4. If you are self-hosting you can upgrade by pinning to the v2.2.4 tag.

    The trigger.dev/* packages are now at v2.2.5. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • New navigation

    Trigger.dev has a new side menu to make navigating the app much easier. Here's a quick overview:

    New side menu

    More of the app is now accessible from the new side menu. Project related pages are grouped together at the top, followed by organization pages.

    Organization and Projects

    Organization and Projects menu

    Switching Organizations and Projects is now much easier.

    Organization menu

    Your profile page

    You can now access your profile from the avatar icon. View profile

    All the most helpful links are now in one place. You can access the documentation, changelog, and support from the bottom of the menu.

    How to update

    The Trigger.dev Cloud is now running v2.2.4. If you are self-hosting you can upgrade by pinning to the v2.2.4 tag.

  • Next.js 14 support

    Next.js 14 was just announced on stage at Next.js Conf, and we're happy to announce that we've just released support for it in our @trigger.dev/nextjs package.

    You can now create a new Next.js 14 app with Trigger.dev as easily as:


    npx create-next-app@latest
    npx @trigger.dev/cli@latest init

    Our @trigger.dev/cli init command will automatically detect that you're using Next.js 14 and auto-configure your project, whether it uses Pages or the new App directory.

    Check out our Next.js Quickstart for more on how to get started with Next.js and Trigger.dev.

  • Auto-yielding executions

    We've just released Trigger.dev server v2.2.4 and the @trigger.dev/* packages at v2.2.2, which include a new feature called Auto Yielding Executions that will drastically cut down on serverless function timeouts and provides stronger guarantees around duplicate task executions.

    The TLDR is that our @trigger.dev/sdk will now automatically yield Job Run executions that are about to timeout, and resume them in another function execution. Previously when executing a Job Run we'd keep executing until the serverless function timed out, and resume executing only after the timeout was received.

    The issue is that we didn't have good control over when the timeout would occur; it could happen at any point during the execution. This could result in some tasks being executed multiple times, which is not ideal. It also meant unwanted timeout logs, which could cause issues with any downstream alert systems. This is what happened when we upgraded one of our projects to the new @trigger.dev/sdk release:

    image

    If you want to learn more about how this works, read the full Auto Yielding Executions discussion.
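Conceptually, the change moves the timeout decision into the SDK: before starting the next task, it checks whether enough of the function's time budget remains, and yields cleanly if not. A sketch of that decision (the safety margin value is an illustrative assumption, not the SDK's actual threshold):

```typescript
// Sketch of the auto-yield decision: rather than letting the platform kill
// the function mid-task, yield once the remaining budget drops below a
// safety margin and resume in a fresh execution. The 5s margin is an
// illustrative assumption.
function shouldYield(elapsedMs: number, limitMs: number, safetyMarginMs = 5_000): boolean {
  return limitMs - elapsedMs < safetyMarginMs;
}
```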

  • OpenAI Universal SDK

    We've made a small tweak to our OpenAI integration that allows it to be used with any OpenAI compatible API, such as Perplexity.ai:


    import { OpenAI } from "@trigger.dev/openai";

    const perplexity = new OpenAI({
      id: "perplexity",
      apiKey: process.env["PERPLEXITY_API_KEY"]!,
      baseURL: "https://api.perplexity.ai", // specify the base URL for Perplexity.ai
      icon: "brand-open-source", // change the task icon to a generic open source logo
    });

    Since Perplexity.ai is compatible with OpenAI, you can use the same tasks as with OpenAI but with open-source models, like mistral-7b-instruct:


    client.defineJob({
      id: "perplexity-tasks",
      name: "Perplexity Tasks",
      version: "0.0.1",
      trigger: eventTrigger({
        name: "perplexity.tasks",
      }),
      integrations: {
        perplexity,
      },
      run: async (payload, io, ctx) => {
        await io.perplexity.chat.completions.create("chat-completion", {
          model: "mistral-7b-instruct",
          messages: [
            {
              role: "user",
              content: "Create a good programming joke about background jobs",
            },
          ],
        });

        // Run this in the background
        await io.perplexity.chat.completions.backgroundCreate(
          "background-chat-completion",
          {
            model: "mistral-7b-instruct",
            messages: [
              {
                role: "user",
                content:
                  "If you were a programming language, what would you be and why?",
              },
            ],
          }
        );
      },
    });

    And you'll get the same experience in the Run Dashboard when viewing the logs:

    Perplexity.ai logs

    We also support the Azure OpenAI Service through the defaultHeaders and defaultQuery options:


    import { OpenAI } from "@trigger.dev/openai";

    const azureOpenAI = new OpenAI({
      id: "azure-openai",
      apiKey: process.env["AZURE_API_KEY"]!,
      icon: "brand-azure",
      baseURL:
        "https://my-resource.openai.azure.com/openai/deployments/my-gpt35-16k-deployment",
      defaultQuery: { "api-version": "2023-06-01-preview" },
      defaultHeaders: { "api-key": process.env["AZURE_API_KEY"] },
    });

  • Server v2.2.4

    These additional changes made it into the server in v2.2.4:


    await io.runTask(
      "cache-key",
      async () => {
        // do something cubey here
      },
      { icon: "3d-cube-sphere" }
    );

    • [6d3b761c][@hmacr] Fixed run list pagination
    • [5ea6a49d] Made the app work very basically on mobile devices
    • [627c767c] Fixed an issue where webhook triggers would erroneously attempt to re-register whenever jobs were indexed.

    @trigger.dev/[email protected]

    • [6769d6b4]: Detects JSRuntime (Node/Deno at the moment). Adds basic Deno support
    • [9df93d07]: Improve create-integration output. Use templates and shared configs.
    • [50e31924]: add ability to use custom tunnel in dev command
    • [0adf41c7]: Added a commented-out Next.js maxDuration to the api/trigger file created by CLI init

    How to update

    The Trigger.dev Cloud is now running v2.2.4. If you are self-hosting you can upgrade by pinning to the v2.2.4 tag.

    The trigger.dev/* packages are now at v2.2.2. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • For your Jobs to work they need to be registered with the Trigger.dev server (cloud or self-hosted). When they're registered we can trigger runs.

    We've just fixed a major problem: we now show errors in your Job definitions so you can fix them. Before you had no idea why they weren't appearing or being updated in the dashboard.

    In the console

    When you run npx @trigger.dev/cli@latest dev you'll now see any errors with your Job definitions.

    In this case, we've set an interval of less than 60 seconds on our intervalTrigger, and we've left the name off a Job:

    The console now shows errors

    In the dashboard

    These errors are also shown on the "Environments" page of the dashboard. You can manually refresh from this page as well, which is useful for Staging/Production if you haven't set up automatic refreshing.

    The dashboard now shows errors

    Other changes

    Improvements

    • Added a filter for active jobs in the dashboard (PR #601 by hmacr)
    • Replaced React Hot Toast with Sonner toasts (Issue #555 by arjunindiai)
    • Upgraded packages to use Node 18 and fetch instead of node-fetch (PR #581 by Rutam21)
    • Added contributors section to readme.md (PR #594 by mohitd404)
    • Added SvelteKit adaptor (PR #467 by Chigala)

    Fixes

    • intervalTriggers of more than 10 minutes never started in Staging/Prod (Issue #611)
    • Improved the robustness and error reporting for login with magic link

    How to update

    The Trigger.dev Cloud is now running v2.2.0. If you are self-hosting you can upgrade by pinning to the v2.2.0 tag.

    The trigger.dev/* packages are now at v2.2.0. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • New Test page

    Examples and recent payloads

    You can easily select from our example payloads and the most recent 5 payloads that triggered this Job. We also automatically populate the editor with an example or the most recent payload.

    JSON linting

    As you edit the JSON it's linted, so you get useful errors pointing you at exactly where the problems are.

    Submit using your keyboard

    You can press ⌘↵ on Mac, CTRL+Enter on Windows to quickly submit the test.

    Test editor video tour (1m 37s)

  • Highlights

    We've added a new integration for Replicate, which lets you run machine learning models with a few lines of code. It's powered by a new platform feature we call "Task Callbacks", which allows tasks to be "completed" via a webhook or failed via a timeout. You can use these with io.runTask:


    await io.runTask(
      "use-callback-url",
      async (task) => {
        // task.callbackUrl is the URL to call when the task is done
        // The output of this task will be the body POSTed to this URL
      },
      {
        name: "Use the callbackUrl to notify the caller when the task is done",
        callback: {
          enabled: true,
          timeoutInSeconds: 300, // If task.callbackUrl is not called within 300 seconds, the task will fail
        },
      }
    );
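    The callback-or-timeout behaviour can be pictured as a race between the webhook arriving and the deadline expiring. Here's a minimal sketch of that idea (our own illustration, not the platform's internals; the `completeViaCallback` name is invented):

```typescript
// Sketch only: the task settles with whichever comes first, the
// webhook callback or the timeout deadline.
function completeViaCallback<T>(
  waitForCallback: Promise<T>, // assumed to resolve when callbackUrl receives a POST
  timeoutInSeconds: number
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error("task callback timed out")),
      timeoutInSeconds * 1000
    );
  });
  // Whichever settles first wins; clear the timer once the callback resolves.
  return Promise.race([
    waitForCallback.finally(() => clearTimeout(timer)),
    timeout,
  ]);
}
```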

    Improvements

    We've done a lot of performance work this release, especially for job runs with a large number of tasks and logs. Long story short, we now do a much better job of using cached task outputs when resuming runs. For a deep dive, check out this pull request
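    Conceptually, a resumed run replays completed tasks from their stored outputs instead of re-executing them. Here's a hypothetical sketch of that behaviour (the real implementation lives server-side; `runTaskWithCache` is an invented name):

```typescript
// Hypothetical: `cache` stands in for the completed task outputs that
// accompany a run when it resumes.
async function runTaskWithCache<T>(
  cache: Map<string, unknown>,
  cacheKey: string,
  task: () => Promise<T>
): Promise<T> {
  // Cache hit: the task already completed in a previous attempt,
  // so return its recorded output without executing again.
  if (cache.has(cacheKey)) return cache.get(cacheKey) as T;
  const output = await task();
  cache.set(cacheKey, output);
  return output;
}
```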

    Bug Fixes

    • Fixed an issue with the Linear getAll types #81e886a1
    • Updated the adapter packages (Remix, Next, Astro, and Express) to return response headers.

    Credits

    Thanks to @nicktrn for the Linear fix!

    How to update

    The Trigger.dev Cloud is now running 2.1.10. If you are self-hosting you can upgrade by pinning to the v2.1.10 tag.

    To upgrade your @trigger.dev/* packages, you can issue the following command:


    npx @trigger.dev/cli@latest update

  • Replicate Integration

    Replicate lets you run machine learning models with a few lines of code, without needing to understand how machine learning works. And now you can easily use the Replicate API in your own applications using Trigger.dev and our new Replicate integration:


    client.defineJob({
      id: "replicate-cinematic-prompt",
      name: "Replicate - Cinematic Prompt",
      version: "0.1.0",
      integrations: { replicate },
      trigger: eventTrigger({
        name: "replicate.cinematic",
      }),
      run: async (payload, io, ctx) => {
        const prediction = await io.replicate.predictions.createAndAwait(
          "await-prediction",
          {
            version:
              "af1a68a271597604546c09c64aabcd7782c114a63539a4a8d14d1eeda5630c33",
            input: {
              prompt: `rick astley riding a harley through post-apocalyptic miami, cinematic, 70mm, anamorphic, bokeh`,
              width: 1280,
              height: 720,
            },
          }
        );

        return prediction.output;
      },
    });

    We make use of Replicate webhooks and a new Trigger.dev feature called "Task Callbacks" to ensure long-running predictions don't result in function timeout errors.

    See more details in the Replicate integration docs.

    Thanks to @nicktrn for the awesome work on this integration 🚀

  • Hacktoberfest 2023

    It's October! Which can only mean one thing... Hacktoberfest is back! 🎉 This year we've lined up some great swag, and plenty of GitHub issues to contribute to.

    Here's how to get involved:

    • We've created GitHub issues tagged: 🎃 hacktoberfest
    • Each issue is also tagged with points: 💎 100 points
    • For every PR you get merged, you collect points
    • Collect as many points before October 31st 2023
    • Then spend your points in our shop 🎁

    Get involved

  • Staging environment

    We've added support for an additional environment between DEV and PROD called STAGING. This environment is useful for testing your Jobs in a production-like environment before deploying to production.

    All existing projects will automatically have a STAGING environment created for them. The API Key for this environment will start with tr_stg_.

    We will be adding support for ephemeral PREVIEW environments for popular platforms like Vercel in the future, so stay tuned!

  • You can now redact data from Task outputs, so it won't be visible in the dashboard. This is useful for sensitive data like Personally Identifiable Information (PII).

    To use, add the redact option to runTask like so:


    const result = await io.runTask(
      "task-example-1",
      async () => {
        return {
          id: "evt_3NYWgVI0XSgju2ur0PN22Hsu",
          object: "event",
          api_version: "2022-11-15",
          created: 1690473903,
          data: {
            object: {
              id: "ch_3NYWgVI0XSgju2ur0C2UzeKC",
            },
          },
        };
      },
      {
        redact: {
          paths: ["data.object.id"],
        },
      }
    );

    View docs
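    Conceptually, each entry in paths is a dot-separated route into the output object, and the value at the end of it is replaced before the output is displayed. A rough sketch of that idea (ours, not the SDK's code; the `redactPaths` name and the "[REDACTED]" placeholder are assumptions for illustration):

```typescript
// Sketch: walk each dot-separated path into a deep copy of the output
// and overwrite the final value with a placeholder.
function redactPaths(output: unknown, paths: string[]): any {
  const clone = JSON.parse(JSON.stringify(output));
  for (const path of paths) {
    const keys = path.split(".");
    let node: any = clone;
    for (const key of keys.slice(0, -1)) {
      if (node == null) break;
      node = node[key];
    }
    const last = keys[keys.length - 1];
    if (node != null && typeof node === "object" && last in node) {
      node[last] = "[REDACTED]";
    }
  }
  return clone;
}
```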

  • We've had a lot of requests for using Trigger.dev with frameworks other than Next.js, so today we're announcing three new ones:

    • Next.js
    • Remix
    • Astro
    • Express

    With many more coming very soon:

    • SvelteKit
    • RedwoodJS
    • Nuxt.js
    • Nest.js
    • Fastify
  • React Status Hooks

    You can now create statuses in your Job code that lets you do some pretty cool stuff in your UI, like:

    • Show exactly what you want in your UI (with as many statuses as you want).
    • Pass arbitrary data to your UI, which you can use to render elements.
    • Update existing elements in your UI as the progress of the run continues.

    Here's some example code for a Job that generates memes. We've created a single status, generatingMemes (you can create as many as you like), and then updated it (you can update it as often as you like). This gives you fine-grained control over how you report progress and output data from your Job.


    client.defineJob({
      id: "meme-generator",
      name: "Generate memes",
      version: "0.1.1",
      trigger: eventTrigger({
        name: "generate-memes",
      }),
      run: async (payload, io, ctx) => {
        const generatingMemes = await io.createStatus("generating-memes", {
          label: "Generating memes",
          state: "loading",
          data: {
            progress: 0.1,
          },
        });

        //...do stuff, like generate memes

        await generatingMemes.update("middle-generation", {
          state: "success",
          data: {
            progress: 1,
            urls: [
              "https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExZnZoMndsdWh0MmhvY2kyaDF6YjZjZzg1ZGsxdnhhYm13a3Q1Y3lkbyZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/13HgwGsXF0aiGY/giphy.gif",
            ],
          },
        });
      },
    });

    Check out React Status Hooks in the docs.

  • Bring-Your-Own-Auth

    You can now authenticate as your users!

    Before, you could only use our Integrations with API Keys or OAuth authentication as yourself, the developer. Now, you can authenticate using the auth credentials of your users.

    Auth Resolver allows you to implement your own custom auth resolving using a third-party service like Clerk or Nango.

    Documentation

  • Showcase

    We've created a beautiful showcase of Jobs (and projects) you can build using Trigger.dev, each with code you can copy and paste to get started quickly.

    View the showcase

  • Usage Dashboard

    Welcome to our new Usage Dashboard! You can now keep track of your Jobs on the 'Usage & Billing' page of the app. Here's a list of all the data you can now see at a glance:

    • The number of Job Runs each month
    • The total number of Runs this month
    • The total number of Jobs
    • The total number of Integrations
    • The number of team members in your Organization

    usage-dashboard

    Login to your account and click 'Usage & Billing' in the side menu of a Project page to see your Usage Dashboard.

  • Linear Integration

    Streamline your project and issue tracking with our new Linear Integration.

  • We've improved our Integrations to support generic interfaces and better ergonomics.

    Previously, integrations could not support tasks with generic type parameters or fluent interfaces. For example, previously our OpenAI integration looked like this:


    await io.openai.createChatCompletion("chat-completion", {
      model: "gpt-3.5-turbo",
      messages: [
        {
          role: "user",
          content: "Create a good programming joke about background jobs",
        },
      ],
    });

    Which is now replaced with the following that much more closely matches the OpenAI SDK:


    await io.openai.chat.completions.create("chat-completion", {
      model: "gpt-3.5-turbo",
      messages: [
        {
          role: "user",
          content: "Create a good programming joke about background jobs",
        },
      ],
    });

    Tasks can also now have generic type parameters as well, which is useful for integrations like Supabase or Airtable that have user-defined schemas:


    const table = io.airtable
      .base(payload.baseId)
      .table<LaunchGoalsAndOkRs>(payload.tableName);

    const records = await table.getRecords("multiple records", {
      fields: ["Status"],
    });

  • Interact with your Airtable bases with our new Airtable Integration.

  • Improved Documentation

    We've improved our documentation for:

  • New testing package

    We've added a new @trigger.dev/testing package.

  • We've fixed the Zod errors that were occurring because of excessively deep type instantiation when using eventTrigger and Zod 3.22.2.

  • Thanks to Liran Tal, we now have a native package to use Trigger.dev with Astro.

    To update your existing projects to the latest version of the SDK, run the following command:


    npx @trigger.dev/cli update

    If you are self-hosting the Trigger.dev service, you'll need to update to the latest image:


    docker pull triggerdotdev/trigger.dev:v2.0.0
    # or
    docker pull triggerdotdev/trigger.dev:latest@sha256:00d9d9646c3781c04b84b4a7fe2c3b9ffa79e22559ca70ffa1ca1e9ce570a799

    If you are using the Trigger.dev Cloud, you'll automatically get the latest version of the service.

    • a907e2a: chore: updated the type in the eventId argument (thx @Chigala ✨)
  • Our CLI has been updated with some fixes and improvements:

    • 3ce5397: Added the send-event command
    • 3897e6e: Make it more clear which API key the init command expects
    • dd10717: Added --hostname option to the cli dev command
    • 8cf8544: Bugfix: @trigger.dev/cli init now correctly identifies the App Dir when using JS (thx @Chigala ✨)
    • 4e78da3: fix: Add an update sub-command the @trigger.dev/cli that updates all @trigger.dev/* packages (thx @hugomn ✨)
    • 135cb49: fixed the cli init log message to show the correct path to the app route created (thx @Chigala ✨)
  • We've updated our OpenAI package to use the new and improved v4 of the OpenAI SDK.

    All of our existing tasks should work as before, but now when you use the .native property you'll get back a nice and shiny v4 SDK:


    import { OpenAI } from "@trigger.dev/openai";

    const openai = new OpenAI({
      id: "openai",
      apiKey: process.env["OPENAI_API_KEY"]!,
    });

    // Before: v3 SDK
    openai.native.createCompletion({...});

    // Now: v4 SDK
    openai.native.completions.create({...});

    We've also added some new tasks for fine-tuning jobs:

    • createFineTuningJob - Create a fine-tuning job for a fine-tuning model
    • retrieveFineTuningJob - Retrieve a fine-tuning job for a fine-tuning model
    • listFineTuningJobs - List fine-tuning jobs for a fine-tuning model
    • cancelFineTuningJob - Cancel a fine-tuning job for a fine-tuning model
    • listFineTuningJobEvents - List fine-tuning job events for a fine-tuning model
  • Cancel delayed events

    When sending events, you can delay the delivery by setting either the deliverAt or deliverAfter option:


    await client.sendEvent(
      {
        id: "event-1",
        name: "example.event",
        payload: { hello: "world" },
      },
      {
        deliverAfter: 1000 * 60 * 60 * 24, // 1 day
      }
    );

    You can now easily cancel delayed events to prevent subsequent job runs with the new cancelEvent method:


    await client.cancelEvent("event-1");

    This functionality requires @trigger.dev/[email protected] or later.
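    One way to picture delayed delivery plus cancellation is a pending-events store keyed by event id: sending with deliverAfter records a future deliverAt, cancelling removes the entry, and nothing runs for a cancelled id. A toy model of that behaviour (ours, not the server's implementation; the class and method shapes are invented):

```typescript
// Toy model: events wait until their deliverAt time and can be
// cancelled by id before they fire.
type DelayedEvent = { id: string; name: string; deliverAt: number };

class EventScheduler {
  private pending = new Map<string, DelayedEvent>();

  sendEvent(
    event: { id: string; name: string },
    opts: { deliverAfter: number },
    now = Date.now()
  ) {
    this.pending.set(event.id, { ...event, deliverAt: now + opts.deliverAfter });
  }

  cancelEvent(id: string) {
    this.pending.delete(id);
  }

  // Deliver every event whose deliverAt has passed, returning their names.
  tick(now = Date.now()): string[] {
    const due = [...this.pending.values()].filter((e) => e.deliverAt <= now);
    for (const e of due) this.pending.delete(e.id);
    return due.map((e) => e.name);
  }
}
```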

  • You can now disable jobs in your code by setting the enabled option to false:


    client.defineJob({
      id: "example-job",
      name: "Example Job",
      version: "0.1.0",
      trigger: eventTrigger({ name: "example.event" }),
      enabled: false,
      run: async (payload, io, ctx) => {
        // your job code here
      },
    });

    Which will show the job as disabled in the dashboard:

    disabled job

    Once you've disabled your job, you can delete it from the dashboard:

    delete job

    For more detailed information, check out our documentation on managing Jobs.

  • We had an issue where runs that included tasks that had large task outputs could not be resumed after a delay. This was because we send completed task outputs in the request body when we resume a run, and some platforms have a limit on the size of the request body. We now cap the size of the task outputs we send in the request body to 3.5MB.
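    The capping idea can be sketched as follows (our illustration with assumed names; the server's exact accounting may differ): include completed task outputs in the resume request body only while the serialized total stays under the limit.

```typescript
// Sketch: accumulate outputs until adding the next one would push the
// serialized body over the cap.
const MAX_BODY_BYTES = 3.5 * 1024 * 1024; // assumption: measured on the JSON body

function outputsWithinCap(
  outputs: unknown[],
  maxBytes = MAX_BODY_BYTES
): unknown[] {
  const included: unknown[] = [];
  let total = 0;
  for (const output of outputs) {
    const bytes = new TextEncoder().encode(JSON.stringify(output)).length;
    if (total + bytes > maxBytes) break; // stop before exceeding the cap
    included.push(output);
    total += bytes;
  }
  return included;
}
```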

  • Increased performance

    We've redesigned the way we queue and execute job runs in order to increase the speed of job execution.

    • Fixed the cached task miss issue in the @trigger.dev/sdk which should speed up resumed runs by A LOT
    • Allow setting graphile worker concurrency settings through env vars WORKER_CONCURRENCY and EXECUTION_WORKER_CONCURRENCY
    • Allow setting Prisma pool settings through env vars DATABASE_CONNECTION_LIMIT and DATABASE_POOL_TIMEOUT
    • You can now selectively enable/disable the workers through WORKER_ENABLED=false and EXECUTION_WORKER_ENABLED=false. This means the image can be deployed as 2 or 3 separate services:
      • A WebApp service that serves the API and the Dashboard
      • A Worker service that runs tasks that have been added to the standard worker
      • An Execution Worker service that only runs "run execution" tasks
    • Deprecated the JobOptions.queue options as we are no longer using that to control job concurrency. We'll add proper queue support in the future.
  • Trigger.dev v2.0.0

    We've dropped the beta label on our v2.0.0 release of the Trigger.dev service, and moving forward we'll be updating the version number like good semantic version citizens.

    Blog post