Changelog

Improvements, new features and fixes

  • Resend v2.0.0 support

    Our @trigger.dev/resend package has been updated to work with the latest resend-node 2.0.0 version, which brings with it a number of fixes and some additional tasks.

    See our Resend integration docs for more.

    How to update

    The trigger.dev/* packages are now at v2.2.7. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • Run Notifications

    Before today, it was a very clunky experience trying to build workflows that responded to your job runs completing or failing. But now, you can easily subscribe to run notifications and perform additional work.

    There are two ways of subscribing to run notifications:

    Across all jobs

    You can use the TriggerClient.on instance method to subscribe to all job run notifications. This is useful if you want to perform some work after any job run completes or fails:


    export const client = new TriggerClient({
      id: "my-project",
      apiKey: process.env.TRIGGER_API_KEY,
    });

    client.on("runSucceeded", async (notification) => {
      console.log(`Run on job ${notification.job.id} succeeded`);
    });

    client.on("runFailed", async (notification) => {
      console.log(`Run on job ${notification.job.id} failed`);
    });

    On a specific job

    You can also pass onSuccess or onFailure when defining a job to subscribe to notifications for that specific job:


    client.defineJob({
      id: "github-integration-on-issue",
      name: "GitHub Integration - On Issue",
      version: "0.1.0",
      trigger: github.triggers.repo({
        event: events.onIssue,
        owner: "triggerdotdev",
        repo: "empty",
      }),
      onSuccess: async (notification) => {
        console.log("Job succeeded", notification);
      },
      onFailure: async (notification) => {
        console.log("Job failed", notification);
      },
      run: async (payload, io, ctx) => {
        // ...
      },
    });

    Run notifications

    All run notifications contain the following info:

    • The run's ID
    • The run's status
    • The run's duration
    • The run's start time
    • The run's payload
    • The run's explicit statuses (if any)
    • Whether or not the run was a test run
    • Which job the run belongs to
    • Which environment the run belongs to
    • Which project the run belongs to
    • Which organization the run belongs to
    • The external account associated with the run (if any)

    Successful run notifications also contain the output of the run. Failed run notifications contain the error and the task that failed.
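
    For example, a runSucceeded handler might log a few of these fields. This is just an illustrative sketch — the property names below (other than job.id, which appears in the examples above) are assumptions, so check the full schema linked underneath for the exact shape:

    client.on("runSucceeded", async (notification) => {
      // `id`, `status` and `output` are assumed field names — see the schema link below
      console.log(`Run ${notification.id} finished with status ${notification.status}`);
      console.log(`Job: ${notification.job.id}`);
      console.log("Output:", notification.output); // only present on successful runs
    });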

    You can see the full run notification schema here.

    How does it work?

    Run notifications work by making a separate HTTP request to your endpoint URL after a run completes or fails, which means that you get a fresh serverless function execution to perform additional work.

    We only send these notifications if you've subscribed to them, so if you want to stop receiving them, just remove the code that subscribes and we'll stop sending them.

    How to update

    The Trigger.dev Cloud is now running v2.2.10. If you are self-hosting you can upgrade by pinning to the v2.2.10 tag.

    The trigger.dev/* packages are now at v2.2.7. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • New API section

    We've added a new APIs section to the site. Here you can browse APIs by category and view working code samples of how to use each API with Trigger.dev (40+ APIs and counting!).

    We've included job examples for all of our integrations, as well as code samples showing how to connect to many APIs using their official Node SDKs or fetch.

    For example, our OpenAI API page includes multiple working code samples as well as full-stack projects, all using our OpenAI integration:

    OpenAI API page

    All the code is regularly maintained and updated by our team and the amazing open source community, and can be copied and pasted to use in your own projects.

  • Task Library & more tasks

    Task Library

    We now have a dedicated docs page called Task Library where you can easily find and learn about built-in tasks to use in your jobs, like waitForEvent() or backgroundFetch():

    image

    New Tasks

    We also have a few new tasks that we've added to the library:

    io.backgroundPoll()

    image

    This task is similar to backgroundFetch, but instead of waiting for a single request to complete, it will poll a URL until it returns a certain value:


    const result = await io.backgroundPoll<{ foo: string }>("🔃", {
      url: "https://example.com/api/endpoint",
      interval: 10, // every 10 seconds
      timeout: 300, // stop polling after 5 minutes
      responseFilter: {
        // stop polling once this filter matches
        status: [200],
        body: {
          status: ["SUCCESS"],
        },
      },
    });

    We even display each poll request in the run dashboard:

    image

    See our reference docs to learn more.

    io.sendEvents()

    image

    io.sendEvents() allows you to send multiple events at a time:


    await io.sendEvents("send-events", [
      {
        name: "new.user",
        payload: {
          userId: "u_12345",
        },
      },
      {
        name: "new.user",
        payload: {
          userId: "u_67890",
        },
      },
    ]);

    See our reference docs to learn more.

    io.random()

    io.random() is identical to Math.random() when called without options, but ensures your random numbers are not regenerated on resume or retry. It returns a pseudo-random floating-point number between an optional min (default: 0, inclusive) and max (default: 1, exclusive), and can optionally round the result to the nearest integer.
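
    For example, a minimal sketch (assuming the same cache-key-first call style as the other io tasks):

    // Stable across resumes and retries, unlike a bare Math.random()
    const value = await io.random("pick-a-number", {
      min: 0, // inclusive
      max: 100, // exclusive
      round: true, // round the result to the nearest integer
    });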

    How to update

    The trigger.dev/* packages are now at v2.2.6. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • Wait for Request

    When we released our Replicate integration last month, we added support for tasks that can be completed via a one-time webhook request, in order to handle Replicate Prediction webhooks. Replicate webhooks work by providing a URL for a "callback" request when creating a prediction:


    await replicate.predictions.create({
      version: "d55b9f2d...",
      input: { prompt: "call me later maybe" },
      webhook: "https://example.com/replicate-webhook",
      webhook_events_filter: ["completed"], // optional
    });

    This allowed us to create an integration task that uses these webhooks to provide a seamless experience when creating a prediction:


    const prediction = await io.replicate.predictions.createAndAwait(
      "create-prediction",
      {
        version: "d55b9f2d...",
        input: {
          prompt: "call me later maybe",
        },
      }
    );

    We've now exposed the same functionality so anyone can take advantage of similar APIs with our new io.waitForRequest() built-in task. It creates a task that waits for a request to be made to a specific URL and then returns the request body as the task result. This is useful for any API that requires a webhook to be set up or a callback URL to be provided.

    For example, you could use it to interface with ScreenshotOne.com to take a screenshot of a website and resume execution once the screenshot is ready:


    const result = await io.waitForRequest(
      "screenshot-one",
      async (url) => {
        await fetch(`https://api.screenshotone.com/take`, {
          method: "POST",
          headers: {
            "Content-Type": "application/json",
          },
          body: JSON.stringify({
            access_key: process.env.SCREENSHOT_ONE_API_KEY,
            url: "https://trigger.dev",
            store: "true",
            storage_path: "my-screeshots",
            response_type: "json",
            async: "true",
            webhook_url: url, // this is the URL that will be called when the screenshot is ready
            storage_return_location: "true",
          }),
        });
      },
      {
        timeoutInSeconds: 300, // wait up to 5 minutes for the screenshot to be ready
      }
    );

    How to update

    The trigger.dev/* packages are now at v2.2.6. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • Wait for Event

    Up until now, you could only trigger a new job run when sending an event to Trigger.dev using eventTrigger():


    client.defineJob({
      id: "payment-accepted",
      name: "Payment Accepted",
      version: "1.0.0",
      trigger: eventTrigger({
        name: "payment.accepted",
        schema: z.object({
          id: z.string(),
          amount: z.number(),
          currency: z.string(),
          userId: z.string(),
        }),
      }),
      run: async (payload, io, ctx) => {
        // Do something when a payment is accepted
      },
    });

    Now with io.waitForEvent(), you can wait for an event to be sent in the middle of a job run:


    const event = await io.waitForEvent("🤑", {
      name: "payment.accepted",
      schema: z.object({
        id: z.string(),
        amount: z.number(),
        currency: z.string(),
        userId: z.string(),
      }),
      filter: {
        userId: ["user_1234"], // only wait for events from this specific user
      },
    });

    By default, io.waitForEvent() will wait for 1 hour for an event to be sent. If no event is sent within that time, it will throw an error. You can customize the timeout by passing a second argument:


    const event = await io.waitForEvent(
      "🤑",
      {
        name: "payment.accepted",
        schema: z.object({
          id: z.string(),
          amount: z.number(),
          currency: z.string(),
          userId: z.string(),
        }),
        filter: {
          userId: ["user_1234"], // only wait for events from this specific user
        },
      },
      {
        timeoutInSeconds: 60 * 60 * 24 * 7, // wait for 1 week
      }
    );

    This will allow you to build more complex workflows that simply were not possible before, or were at least a pain to implement. We're excited to see what you build with this new feature!

    Read more about it in the docs.

    How to update

    The trigger.dev/* packages are now at v2.2.6. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • We've added a Cloudflare worker proxy and SQS queue to improve the performance and reliability of our API. It's not just for our Cloud product, you can optionally use it if you're self-hosting as all the code is in our open source repository.

    To begin with we're using this when events are sent to us. That happens when you use client.sendEvent and client.sendEvents. More API routes will be supported in the future.

    How does it work?

    Requests to the API are proxied through a Cloudflare worker. That worker intercepts certain API routes and sends the data to an SQS queue. The main API server then polls the queue for new data and processes it.
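
    For example, a client.sendEvent call like this one is now intercepted by the worker and queued, then picked up by the server. The call itself doesn't change:

    // Nothing changes in your code; the routing happens on our side
    await client.sendEvent({
      name: "example.event",
      payload: { hello: "world" },
    });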

    Why is this better?

    If there is any downtime on the main API servers, we won't lose events. The Cloudflare worker will queue them up and the main API server will process them when it's back online.

    Also, it allows us to deal with more load than before. The Cloudflare worker can handle a lot of requests and the main API server can process them at its own pace.

    Event processing can be scaled horizontally with more workers that poll the queue.

    How to update

    The Trigger.dev Cloud is now running v2.2.9. If you are self-hosting you can upgrade by pinning to the v2.2.9 tag.

  • OpenAI Assistants & more

    After OpenAI DevDay last week we got busy working on our @trigger.dev/openai integration, and we're happy to announce that we now support GPT-4 Turbo, the new Assistants API, DALL·E 3, and more, along with some additional enhancements.

    GPT-4 Turbo

    GPT-4 Turbo is the newest model from OpenAI, with up to 128K of context and lower prices. It's supported by our integration by specifying the gpt-4-1106-preview model:


    await io.openai.chat.completions.create("debater-completion", {
      model: "gpt-4-1106-preview",
      messages: [
        {
          role: "user",
          content:
            'I want you to act as a debater. I will provide you with some topics related to current events and your task is to research both sides of the debates, present valid arguments for each side, refute opposing points of view, and draw persuasive conclusions based on evidence. Your goal is to help people come away from the discussion with increased knowledge and insight into the topic at hand. My first request is "I want an opinion piece about Deno."',
        },
      ],
    });

    We recommend the backgroundCreate variant, though: even though it's called Turbo, during the preview it can take a while to complete:


    // This will run in the background, so you don't have to worry about serverless function timeouts
    await io.openai.chat.completions.backgroundCreate("debater-completion", {
      model: "gpt-4-1106-preview",
      messages: [
        {
          role: "user",
          content:
            'I want you to act as a debater. I will provide you with some topics related to current events and your task is to research both sides of the debates, present valid arguments for each side, refute opposing points of view, and draw persuasive conclusions based on evidence. Your goal is to help people come away from the discussion with increased knowledge and insight into the topic at hand. My first request is "I want an opinion piece about Deno."',
        },
      ],
    });

    We've also improved completion tasks and added additional properties that make it easier to see your rate limits and how many tokens you have left:

    image

    Additionally, if an OpenAI request fails because of a rate limit error, we will automatically retry the request only after the rate limit has been reset.

    Assistants

    Also released on DevDay was the new Assistants API, which we now have support for:


    // Create a file and wait for it to be processed
    const file = await io.openai.files.createAndWaitForProcessing("upload-file", {
      purpose: "assistants",
      file: fs.createReadStream("./fixtures/mydata.csv"),
    });

    // Create the assistant
    const assistant = await io.openai.beta.assistants.create("create-assistant", {
      name: "Data visualizer",
      description:
        "You are great at creating beautiful data visualizations. You analyze data present in .csv files, understand trends, and come up with data visualizations relevant to those trends. You also share a brief text summary of the trends observed.",
      model: payload.model,
      tools: [{ type: "code_interpreter" }],
      file_ids: [file.id],
    });

    // Sometime later, you can now use the assistant by the assistant id:
    const run = await io.openai.beta.threads.createAndRunUntilCompletion(
      "create-thread",
      {
        assistant_id: payload.id,
        thread: {
          messages: [
            {
              role: "user",
              content:
                "Create 3 data visualizations based on the trends in this file.",
              file_ids: [payload.fileId],
            },
          ],
        },
      }
    );

    if (run.status !== "completed") {
      throw new Error(
        `Run finished with status ${run.status}: ${JSON.stringify(run.last_error)}`
      );
    }

    const messages = await io.openai.beta.threads.messages.list(
      "list-messages",
      run.thread_id
    );

    For more about how to use Assistants, check out our new OpenAI docs.

    Images

    We've added support for creating images in the background, similar to how our background completion works:


    const response = await io.openai.images.backgroundCreate("dalle-3-background", {
      model: "dall-e-3",
      prompt:
        "Create a comic strip featuring miles morales and spiderpunk fighting off the sinister six",
    });

    Files

    You can now wait for a file to be processed before continuing:


    const file = await io.openai.files.create("upload-file", {
      purpose: "assistants",
      file: fs.createReadStream("./fixtures/mydata.csv"),
    });

    await io.openai.files.waitForProcessing("wait-for-file", file.id);

    Or you can combine that into a single call:


    const file = await io.openai.files.createAndWaitForProcessing("upload-file", {
      purpose: "assistants",
      file: fs.createReadStream("./fixtures/mydata.csv"),
    });

    New Docs

    We've completely rewritten our OpenAI docs to make it easier to understand how to use our integration. Check them out here.

    How to update

    The trigger.dev/* packages are now at v2.2.6. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • Often you want to trigger your Jobs from events that happen in other APIs. This is where webhooks come in.

    Now you can easily subscribe to any API that supports webhooks, without needing to use a Trigger.dev Integration. This should let you create far more Jobs than was previously possible.

    How to create an HTTP endpoint

    We want to send a Slack message when one of our cal.com meetings is cancelled. To do this we need to create an HTTP endpoint that cal.com can send a webhook to.


    //create an HTTP endpoint
    const caldotcom = client.defineHttpEndpoint({
      id: "cal.com",
      source: "cal.com",
      icon: "caldotcom",
      verify: async (request) => {
        //this helper function makes verifying most webhooks easy
        return await verifyRequestSignature({
          request,
          headerName: "X-Cal-Signature-256",
          secret: process.env.CALDOTCOM_SECRET!,
          algorithm: "sha256",
        });
      },
    });

    Getting the URL and secret from the Trigger.dev dashboard

    There's a new section in the sidebar: "HTTP endpoints".

    HTTP endpoints

    From there you can select the HTTP endpoint you just created and get the URL and secret. In this case, it's cal.com.

    HTTP endpoint details

    Each environment has a different Webhook URL so you can control which environment you want to trigger Jobs for.

    Setting up the webhook in cal.com

    In cal.com you can navigate to "Settings/Webhooks/New" to create a new webhook.

    cal.com webhook

    Enter the URL and secret from the Trigger.dev dashboard and select the events you want to trigger Jobs for.

    We could select only "Booking cancelled", but we're going to select all the events so we can reuse this webhook for more than a single trigger.

    Using HTTP endpoints to create Triggers

    Then we can use that HTTP endpoint to create multiple Triggers for our Jobs, each with different filters based on the data from the webhook.


    client.defineJob({
      id: "http-caldotcom",
      name: "HTTP Cal.com",
      version: "1.0.0",
      enabled: true,
      //create a Trigger from the HTTP endpoint above. The filter is optional.
      trigger: caldotcom.onRequest({
        filter: { body: { triggerEvent: ["BOOKING_CANCELLED"] } },
      }),
      run: async (request, io, ctx) => {
        //note that when using HTTP endpoints, the first parameter is the request
        //you need to get the body, usually it will be json so you do:
        const body = await request.json();

        //this prints out "Matt Aitken cancelled their meeting"
        await io.logger.info(
          `${body.payload.attendees
            .map((a) => a.name)
            .join(", ")} cancelled their meeting ${new Date(
            body.payload.startTime
          )}`
        );
      },
    });

    See our HTTP endpoint docs for more info. Upgrade to the latest version of the SDK to start using this feature.

    How to update

    The Trigger.dev Cloud is now running v2.2.4. If you are self-hosting you can upgrade by pinning to the v2.2.4 tag.

    The trigger.dev/* packages are now at v2.2.5. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • Invoke Trigger

    Up until now, Trigger.dev only supported the following 3 types of triggers: Events, Webhooks, and Scheduled (e.g. cron/interval).

    But sometimes it makes sense to be able to invoke a Job manually, without having to specify an event, especially for cases where you want to get notified when the invoked Job Run is complete.

    To specify that a job is manually invokable, you can use the invokeTrigger() function when defining a job:


    import { invokeTrigger } from "@trigger.dev/sdk";
    import { client } from "@/trigger";

    export const exampleJob = client.defineJob({
      id: "example-job",
      name: "Example job",
      version: "1.0.1",
      trigger: invokeTrigger({
        schema: z.object({
          foo: z.string(),
        }),
      }),
      run: async (payload, io, ctx) => {
        // do something with the payload
      },
    });

    And then you can invoke the job using the Job.invoke() method:


    import { exampleJob } from "./exampleJob";

    const jobRun = await exampleJob.invoke(
      { foo: "bar" },
      { callbackUrl: `${process.env.VERCEL_URL}/api/callback` }
    );

    That's great on its own, but things become really cool when you invoke a job from another job and wait for the invoked job to complete:


    import { exampleJob } from "./exampleJob";

    client.defineJob({
      id: "example-job2",
      name: "Example job 2",
      version: "1.0.1",
      trigger: intervalTrigger({
        seconds: 60,
      }),
      run: async (payload, io, ctx) => {
        const runResult = await exampleJob.invokeAndWaitForCompletion("⚡", {
          foo: "123",
        });
      },
    });

    You can also batch up to 25 invocations at once, and we will run them in parallel and wait for all of them to complete before continuing execution of the current job.


    import { exampleJob } from "./exampleJob";

    client.defineJob({
      id: "example-job2",
      name: "Example job 2",
      version: "1.0.1",
      trigger: intervalTrigger({
        seconds: 60,
      }),
      run: async (payload, io, ctx) => {
        const runs = await exampleJob.batchInvokeAndWaitForCompletion("⚡", [
          {
            payload: {
              userId: "123",
              tier: "free",
            },
          },
          {
            payload: {
              userId: "abc",
              tier: "paid",
            },
          },
        ]);

        // runs is an array of RunNotification objects
      },
    });

    See our Invoke Trigger docs for more info. Upgrade to the latest version of the SDK to start using this feature.

    How to update

    The Trigger.dev Cloud is now running v2.2.4. If you are self-hosting you can upgrade by pinning to the v2.2.4 tag.

    The trigger.dev/* packages are now at v2.2.5. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • New navigation

    Trigger.dev has a new side menu to make navigating the app much easier. Here's a quick overview:

    New side menu

    More of the app is now accessible from the new side menu. Project-related pages are grouped together at the top, followed by organization pages.

    Organization and Projects

    Organization and Projects menu

    Switching Organizations and Projects is now much easier.

    Organization menu

    Your profile page

    You can now access your profile from the avatar icon. View profile

    All the most helpful links are now in one place. You can access the documentation, changelog, and support from the bottom of the menu. View profile

    How to update

    The Trigger.dev Cloud is now running v2.2.4. If you are self-hosting you can upgrade by pinning to the v2.2.4 tag.

  • Next.js 14 support

    Next.js 14 was just announced on stage at Next.js Conf, and we've just released support for it in our @trigger.dev/[email protected] release.

    You can now create a new Next.js 14 app with Trigger.dev as easily as:


    npx create-next-app@latest
    npx @trigger.dev/cli@latest init

    Our @trigger.dev/cli init command will automatically detect that you're using Next.js 14 and auto-configure your project, whether it uses Pages or the new App directory.

    Check out our Next.js Quickstart for more on how to get started with Next.js and Trigger.dev.

  • Auto-yielding executions

    We've just released Trigger.dev server v2.2.4 and the @trigger.dev/* packages at 2.2.2, which include a new feature called Auto Yielding Executions. It drastically cuts down on serverless function timeouts and provides stronger guarantees against duplicate task executions.

    The TLDR is that our @trigger.dev/sdk will now automatically yield Job Run executions that are about to time out, and resume them in another function execution. Previously, when executing a Job Run we'd keep executing until the serverless function timed out, and only resume after the timeout was received.

    The issue is that we didn't have good control over when the timeout would occur, and it could occur at any time during the execution. This could result in some tasks getting executed multiple times, which is not ideal. It would also mean unwanted timeout logs, which could cause issues with any downstream alert systems. This is what happened when upgrading one of our projects to the new @trigger.dev/[email protected]:

    image

    If you want to learn more about how this works, read the full Auto Yielding Executions discussion.

  • OpenAI Universal SDK

    We've made a small tweak to our OpenAI integration that allows it to be used with any OpenAI-compatible API, such as Perplexity.ai:


    import { OpenAI } from "@trigger.dev/openai";

    const perplexity = new OpenAI({
      id: "perplexity",
      apiKey: process.env["PERPLEXITY_API_KEY"]!,
      baseURL: "https://api.perplexity.ai", // specify the base URL for Perplexity.ai
      icon: "brand-open-source", // change the task icon to a generic open source logo
    });

    Since Perplexity.ai is compatible with OpenAI, you can use the same tasks as with OpenAI but with open-source models, like mistral-7b-instruct:


    client.defineJob({
      id: "perplexity-tasks",
      name: "Perplexity Tasks",
      version: "0.0.1",
      trigger: eventTrigger({
        name: "perplexity.tasks",
      }),
      integrations: {
        perplexity,
      },
      run: async (payload, io, ctx) => {
        await io.perplexity.chat.completions.create("chat-completion", {
          model: "mistral-7b-instruct",
          messages: [
            {
              role: "user",
              content: "Create a good programming joke about background jobs",
            },
          ],
        });

        // Run this in the background
        await io.perplexity.chat.completions.backgroundCreate(
          "background-chat-completion",
          {
            model: "mistral-7b-instruct",
            messages: [
              {
                role: "user",
                content:
                  "If you were a programming language, what would you be and why?",
              },
            ],
          }
        );
      },
    });

    And you'll get the same experience in the Run Dashboard when viewing the logs:

    Perplexity.ai logs

    We also support the Azure OpenAI Service through the defaultHeaders and defaultQuery options:


    import { OpenAI } from "@trigger.dev/openai";

    const azureOpenAI = new OpenAI({
      id: "azure-openai",
      apiKey: process.env["AZURE_API_KEY"]!,
      icon: "brand-azure",
      baseURL:
        "https://my-resource.openai.azure.com/openai/deployments/my-gpt35-16k-deployment",
      defaultQuery: { "api-version": "2023-06-01-preview" },
      defaultHeaders: { "api-key": process.env["AZURE_API_KEY"] },
    });

  • Server v2.2.4

    These additional changes made it into the server in v2.2.4:


    await io.runTask(
      "cache-key",
      async () => {
        // do something cubey here
      },
      { icon: "3d-cube-sphere" }
    );

    • [6d3b761c][@hmacr] Fixed run list pagination
    • [5ea6a49d] Made the app work very basically on mobile devices
    • [627c767c] Fixed an issue where webhook triggers would erroneously attempt to re-register whenever jobs were indexed.

    @trigger.dev/[email protected]

    • [6769d6b4]: Detects JSRuntime (Node/Deno at the moment). Adds basic Deno support
    • [9df93d07]: Improve create-integration output. Use templates and shared configs.
    • [50e31924]: add ability to use custom tunnel in dev command
    • [0adf41c7]: Added a commented-out Next.js maxDuration to the api/trigger file created by CLI init

    How to update

    The Trigger.dev Cloud is now running v2.2.4. If you are self-hosting you can upgrade by pinning to the v2.2.4 tag.

    The trigger.dev/* packages are now at v2.2.2. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • For your Jobs to work they need to be registered with the Trigger.dev server (cloud or self-hosted). When they're registered we can trigger runs.

    We've just fixed a major problem: we now show errors in your Job definitions so you can fix them. Before, you had no idea why your Jobs weren't appearing or being updated in the dashboard.

    In the console

    When you run npx @trigger.dev/cli@latest dev you'll now see any errors with your Job definitions.

    In this case, we've set an interval of less than 60 seconds on our intervalTrigger, and we've left the name off a Job:

    The console now shows errors

    In the dashboard

    These errors are also shown on the "Environments" page of the dashboard. You can manually refresh from this page as well, which is useful for Staging/Production if you haven't set up automatic refreshing.

    The dashboard now shows errors

    Other changes

    Improvements

    • Added a filter for active jobs in the dashboard (PR #601 by hmacr)
    • Replaced React Hot Toast with Sonner toasts (Issue #555 by arjunindiai)
    • Upgraded packages to use Node 18 and fetch instead of node-fetch (PR #581 by Rutam21)
    • Added contributors section to readme.md (PR #594 by mohitd404)
    • Added SvelteKit adapter (PR #467 by Chigala)

    Fixes

    • intervalTriggers of >10 mins didn't ever start in Staging/Prod (Issue #611)
    • Improved the robustness and error reporting for login with magic link

    How to update

    The Trigger.dev Cloud is now running v2.2.0. If you are self-hosting you can upgrade by pinning to the v2.2.0 tag.

    The trigger.dev/* packages are now at v2.2.0. You can update using the following command:


    npx @trigger.dev/cli@latest update

  • New Test page

    Examples and recent payloads

    You can easily select from our example payloads and the most recent 5 payloads that triggered this Job. We also automatically populate the editor with an example or the most recent payload.

    JSON linting

    As you edit the JSON it's linted, so you get useful errors pointing you to where the problems are.

    Submit using your keyboard

    You can press ⌘↵ on Mac, CTRL+Enter on Windows to quickly submit the test.

    Test editor video tour (1m 37s)

  • Highlights

    We've added a new integration for Replicate, which lets you run machine learning models with a few lines of code. It's powered by a new platform feature we call "Task Callbacks", which allows tasks to be "completed" via a webhook or failed via a timeout. You can use callbacks yourself with io.runTask:


    await io.runTask(
      "use-callback-url",
      async (task) => {
        // task.callbackUrl is the URL to call when the task is done
        // The output of this task will be the body POSTed to this URL
      },
      {
        name: "Use the callbackUrl to notify the caller when the task is done",
        callback: {
          enabled: true,
          timeoutInSeconds: 300, // If task.callbackUrl is not called within 300 seconds, the task will fail
        },
      }
    );

    Improvements

    We've done a lot of performance work this release, especially for job runs with a large number of tasks and logs. Long story short, we now do a much better job of using cached task outputs when resuming runs. For a deep dive, check out this pull request.

    Bug Fixes

    • Fixed an issue with the Linear getAll types #81e886a1
    • Updated the adapter packages (Remix, Next, Astro, and Express) to return response headers.

    Credits

    Thanks to @nicktrn for the Linear fix!

    How to update

    The Trigger.dev Cloud is now running 2.1.10. If you are self-hosting you can upgrade by pinning to the v2.1.10 tag.

    To upgrade your @trigger.dev/* packages, you can issue the following command:


    npx @trigger.dev/cli@latest update

  • Replicate Integration

    Replicate lets you run machine learning models with a few lines of code, without needing to understand how machine learning works. And now you can easily use the Replicate API in your own applications using Trigger.dev and our new Replicate integration:


    client.defineJob({
      id: "replicate-cinematic-prompt",
      name: "Replicate - Cinematic Prompt",
      version: "0.1.0",
      integrations: { replicate },
      trigger: eventTrigger({
        name: "replicate.cinematic",
      }),
      run: async (payload, io, ctx) => {
        const prediction = await io.replicate.predictions.createAndAwait(
          "await-prediction",
          {
            version:
              "af1a68a271597604546c09c64aabcd7782c114a63539a4a8d14d1eeda5630c33",
            input: {
              prompt: `rick astley riding a harley through post-apocalyptic miami, cinematic, 70mm, anamorphic, bokeh`,
              width: 1280,
              height: 720,
            },
          }
        );

        return prediction.output;
      },
    });

    We make use of Replicate webhooks and a new feature of Trigger.dev called "Task Callbacks" to ensure long-running predictions don't result in function timeout errors.

    See more details in the Replicate integration docs.

    Thanks to @nicktrn for the awesome work on this integration 🚀

  • Hacktoberfest 2023

    It's October! Which can only mean one thing... Hacktoberfest is back! 🎉 This year we've lined up some great swag, and plenty of GitHub issues to contribute to.

    Here's how to get involved:

    • We've created GitHub issues tagged: 🎃 hacktoberfest
    • Each issue is also tagged with points: 💎 100 points
    • For every PR you get merged, you collect points
    • Collect as many points as you can before October 31st 2023
    • Then spend your points in our shop 🎁

    Get involved

  • Staging environment

    We've added support for an additional environment between DEV and PROD called STAGING. This environment is useful for testing your Jobs in a production-like environment before deploying to production.

    All existing projects will automatically have a STAGING environment created for them. The API Key for this environment will start with tr_stg_.
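
    Pointing a deployment at STAGING is just a matter of which API key you supply. A minimal sketch (the env var name here is only an example — use whatever your setup already reads):

    // The same client works in DEV, STAGING and PROD; only the key differs.
    // In your staging deployment, set this to the tr_stg_... key.
    export const client = new TriggerClient({
      id: "my-project",
      apiKey: process.env.TRIGGER_API_KEY,
    });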

    We will be adding support for ephemeral PREVIEW environments for popular platforms like Vercel in the future, so stay tuned!

  • You can now redact data from Task outputs, so it won't be visible in the dashboard. This is useful for sensitive data like Personally Identifiable Information (PII).

    To use it, add the redact option to runTask like so:


    const result = await io.runTask(
      "task-example-1",
      async () => {
        return {
          id: "evt_3NYWgVI0XSgju2ur0PN22Hsu",
          object: "event",
          api_version: "2022-11-15",
          created: 1690473903,
          data: {
            object: {
              id: "ch_3NYWgVI0XSgju2ur0C2UzeKC",
            },
          },
        };
      },
      {
        redact: {
          paths: ["data.object.id"],
        },
      }
    );

    View docs

  • We've had a lot of requests to use Trigger.dev with frameworks other than Next.js, so today we're announcing three new adapters. Here's the full list of supported frameworks:

    • Next.js
    • Remix
    • Astro
    • Express

    With many more coming very soon:

    • SvelteKit
    • RedwoodJS
    • Nuxt.js
    • Nest.js
    • Fastify
  • React Status Hooks

    You can now create statuses in your Job code that let you do some pretty cool stuff in your UI, like:

    • Show exactly what you want in your UI (with as many statuses as you want).
    • Pass arbitrary data to your UI, which you can use to render elements.
    • Update existing elements in your UI as the progress of the run continues.

    Here's some example code for a Job that generates memes. We've created a single status, generatingMemes (you can create as many as you like), and then updated it (you can update it as often as you like). This gives you fine-grained control over how you report progress and output data from your Job.


    client.defineJob({
      id: "meme-generator",
      name: "Generate memes",
      version: "0.1.1",
      trigger: eventTrigger({
        name: "generate-memes",
      }),
      run: async (payload, io, ctx) => {
        const generatingMemes = await io.createStatus("generating-memes", {
          label: "Generating memes",
          state: "loading",
          data: {
            progress: 0.1,
          },
        });

        //...do stuff, like generate memes

        await generatingMemes.update("middle-generation", {
          state: "success",
          data: {
            progress: 1,
            urls: [
              "https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExZnZoMndsdWh0MmhvY2kyaDF6YjZjZzg1ZGsxdnhhYm13a3Q1Y3lkbyZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/13HgwGsXF0aiGY/giphy.gif",
            ],
          },
        });
      },
    });
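
    On the frontend, reading those statuses might look roughly like this — a sketch assuming the useEventRunStatuses hook from @trigger.dev/react (see the docs link below for the exact API):

    "use client";
    import { useEventRunStatuses } from "@trigger.dev/react";

    export function MemeProgress({ eventId }: { eventId: string }) {
      const { statuses, error } = useEventRunStatuses(eventId);
      if (error) return <p>Something went wrong</p>;
      // Each status exposes the label, state and data set from the Job code above
      return (
        <>
          {statuses?.map((status) => (
            <p key={status.key}>
              {status.label}: {status.state}
            </p>
          ))}
        </>
      );
    }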

    Check out React Status Hooks in the docs.

  • Bring-Your-Own-Auth

    You can now authenticate as your users!

    Before, you could only use our Integrations with API Keys or OAuth authentication as yourself, the developer. Now, you can authenticate using the auth credentials of your users.

    Auth Resolver allows you to implement custom auth resolution using a third-party service like Clerk or Nango.
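
    A rough sketch of the shape this takes — the integration (slack) and the lookup helper here are placeholders, so see the documentation below for the exact API:

    // Resolve the token for the user associated with a run from your own
    // auth provider (e.g. Clerk or Nango) instead of Trigger.dev's stored OAuth
    client.defineAuthResolver(slack, async (ctx) => {
      if (!ctx.account?.id) return;
      const token = await getUserSlackToken(ctx.account.id); // hypothetical helper
      return { type: "oauth", token };
    });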

    Documentation

  • Showcase

    We've created a beautiful showcase of Jobs (and projects) that you can build using Trigger.dev. Each one includes code you can copy and paste to get started quickly.

    View the showcase

  • Usage Dashboard

    Welcome to our new Usage Dashboard! You can now keep track of your Jobs on the 'Usage & Billing' page of the app. Here's a list of all the data you can now see at a glance:

    • The number of Job Runs each month
    • The total number of Runs this month
    • The total number of Jobs
    • The total number of Integrations
    • The number of team members in your Organization

    usage-dashboard

    Login to your account and click 'Usage & Billing' in the side menu of a Project page to see your Usage Dashboard.

  • Linear Integration

    Streamline your project and issue tracking with our new Linear Integration.

  • We've improved our Integrations to support generic interfaces and better ergonomics.

    Previously, integrations could not support tasks with generic type parameters or fluent interfaces. For example, our OpenAI integration used to look like this:


    await io.openai.createChatCompletion("chat-completion", {
      model: "gpt-3.5-turbo",
      messages: [
        {
          role: "user",
          content: "Create a good programming joke about background jobs",
        },
      ],
    });

    This has been replaced with the following, which much more closely matches the OpenAI SDK:


    await io.openai.chat.completions.create("chat-completion", {
      model: "gpt-3.5-turbo",
      messages: [
        {
          role: "user",
          content: "Create a good programming joke about background jobs",
        },
      ],
    });

    Tasks can now also have generic type parameters, which is useful for integrations like Supabase or Airtable that have user-defined schemas:


    const table = io.airtable
      .base(payload.baseId)
      .table<LaunchGoalsAndOkRs>(payload.tableName);

    const records = await table.getRecords("muliple records", {
      fields: ["Status"],
    });

  • Interact with your Airtable bases with our new Airtable Integration.

  • Improved Documentation

    We've improved our documentation for:

  • New testing package

    We've added a new @trigger.dev/testing package.

  • We've fixed the Zod errors that were occurring because of excessively deep Type instantiation when using eventTrigger and Zod 3.22.2.

  • Thanks to Liran Tal, we now have a native package to use Trigger.dev with Astro.

    To update your existing projects to the latest version of the SDK, run the following command:


    npx @trigger.dev/cli update

    If you are self-hosting the Trigger.dev service, you'll need to update to the latest image:


    docker pull triggerdotdev/trigger.dev:v2.0.0
    # or
    docker pull triggerdotdev/trigger.dev:latest@sha256:00d9d9646c3781c04b84b4a7fe2c3b9ffa79e22559ca70ffa1ca1e9ce570a799

    If you are using the Trigger.dev Cloud, you'll automatically get the latest version of the service.

    • a907e2a: chore: updated the type in the eventId argument (thx @Chigala ✨)
  • Our CLI has been updated with some fixes and improvements:

    • 3ce5397: Added the send-event command
    • 3897e6e: Make it more clear which API key the init command expects
    • dd10717: Added --hostname option to the cli dev command
    • 8cf8544: Bugfix: @trigger.dev/cli init now correctly identifies the App Dir when using JS (thx @Chigala ✨)
    • 4e78da3: fix: Add an update sub-command the @trigger.dev/cli that updates all @trigger.dev/* packages (thx @hugomn ✨)
    • 135cb49: fixed the cli init log message to show the correct path to the app route created (thx @Chigala ✨)
  • We've updated our OpenAI package to use the new and improved v4 of the OpenAI SDK.

    All of our existing tasks should work as before, but now when you use the .native property you'll get back a nice and shiny v4 SDK:


    import { OpenAI } from "@trigger.dev/openai";

    const openai = new OpenAI({
      id: "openai",
      apiKey: process.env["OPENAI_API_KEY"]!,
    });

    // Before: v3 SDK
    openai.native.createCompletion({...});

    // Now: v4 SDK
    openai.native.completions.create({...});

    We've also added some new tasks for working with fine-tuning jobs (see the sketch after this list):

    • createFineTuningJob - Create a fine-tuning job for a fine-tuning model
    • retrieveFineTuningJob - Retrieve a fine-tuning job for a fine-tuning model
    • listFineTuningJobs - List fine-tuning jobs for a fine-tuning model
    • cancelFineTuningJob - Cancel a fine-tuning job for a fine-tuning model
    • listFineTuningJobEvents - List fine-tuning job events for a fine-tuning model
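
    For example, kicking off a fine-tuning job might look roughly like this (the task name comes from the list above; parameter names follow the OpenAI API, so check the task docs for the exact shape):

    // `file` is a previously uploaded training file
    const fineTune = await io.openai.createFineTuningJob("fine-tune", {
      training_file: file.id,
      model: "gpt-3.5-turbo",
    });
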
  • Cancel delayed events

    When sending events, you can delay the delivery by setting either the deliverAt or deliverAfter option:


    await client.sendEvent(
      {
        id: "event-1",
        name: "example.event",
        payload: { hello: "world" },
      },
      {
        deliverAfter: 1000 * 60 * 60 * 24, // 1 day
      }
    );

    You can now easily cancel delayed events to prevent subsequent job runs with the new cancelEvent method:


    await client.cancelEvent("event-1");

    This functionality requires @trigger.dev/[email protected] or later.

  • You can now disable jobs in your code by setting the enabled option to false:


    client.defineJob({
      id: "example-job",
      name: "Example Job",
      version: "0.1.0",
      trigger: eventTrigger({ name: "example.event" }),
      enabled: false,
      run: async (payload, io, ctx) => {
        // your job code here
      },
    });

    Which will show the job as disabled in the dashboard:

    disabled job

    Once you've disabled your job, you can delete it from the dashboard:

    delete job

    For more detailed information, check out our documentation on managing Jobs.

  • We had an issue where runs that included tasks with large outputs could not be resumed after a delay. This was because we send completed task outputs in the request body when we resume a run, and some platforms have a limit on the size of the request body. We now cap the size of the task outputs we send in the request body at 3.5MB.

  • Increased performance

    We've redesigned the way we queue and execute job runs in order to increase the speed of job execution.

    • Fixed the cached task miss issue in the @trigger.dev/sdk which should speed up resumed runs by A LOT
    • Allow setting graphile worker concurrency settings through env vars WORKER_CONCURRENCY and EXECUTION_WORKER_CONCURRENCY
    • Allow setting Prisma pool settings through env vars DATABASE_CONNECTION_LIMIT and DATABASE_POOL_TIMEOUT
    • You can now selectively enable/disable the workers through WORKER_ENABLED=false and EXECUTION_WORKER_ENABLED=false. This means the image can be deployed as 2 or 3 separate services:
      • A WebApp service that serves the API and the Dashboard
      • A Worker service that runs tasks that have been added to the standard worker
      • An Execution Worker service that only runs "run execution" tasks
    • Deprecated the JobOptions.queue options as we are no longer using that to control job concurrency. We'll add proper queue support in the future.
  • Trigger.dev v2.0.0

    We've dropped the beta label on our v2.0.0 release of the Trigger.dev service, and moving forward we'll be updating the version number like good semantic version citizens.

    Blog post