Changelog
Improvements, new features and fixes
Our `@trigger.dev/resend` package has been updated to work with the latest resend-node 2.0.0 version, which brings with it a number of fixes and some additional tasks. See our Resend integration docs for more.
How to update
The `@trigger.dev/*` packages are now at v2.2.7. You can update using the following command:

```bash
npx @trigger.dev/cli@latest update
```

Before today, it was a very clunky experience trying to build workflows that responded to your job runs completing or failing. But now, you can easily subscribe to run notifications and perform additional work.
There are two ways of subscribing to run notifications:
Across all jobs
You can use the `TriggerClient.on` instance method to subscribe to all job run notifications. This is useful if you want to perform some work after any job run completes or fails:

```ts
export const client = new TriggerClient({
  id: "my-project",
  apiKey: process.env.TRIGGER_API_KEY,
});

client.on("runSucceeded", async (notification) => {
  console.log(`Run on job ${notification.job.id} succeeded`);
});

client.on("runFailed", async (notification) => {
  console.log(`Run on job ${notification.job.id} failed`);
});
```

On a specific job
You can also pass `onSuccess` or `onFailure` when defining a job to subscribe to notifications for that specific job:

```ts
client.defineJob({
  id: "github-integration-on-issue",
  name: "GitHub Integration - On Issue",
  version: "0.1.0",
  trigger: github.triggers.repo({
    event: events.onIssue,
    owner: "triggerdotdev",
    repo: "empty",
  }),
  onSuccess: async (notification) => {
    console.log("Job succeeded", notification);
  },
  onFailure: async (notification) => {
    console.log("Job failed", notification);
  },
  run: async (payload, io, ctx) => {
    // ...
  },
});
```

Run notifications
All run notifications contain the following info:
- The run's ID
- The run's status
- The run's duration
- The run's start time
- The run's payload
- The run's explicit statuses (if any)
- Whether or not the run was a test run
- Which job the run belongs to
- Which environment the run belongs to
- Which project the run belongs to
- Which organization the run belongs to
- The external account associated with the run (if any)
Successful run notifications also contain the output of the run. Failed run notifications contain the error and the task that failed.
You can see the full run notification schema here.
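As a rough illustration of the list above, here's a hypothetical TypeScript shape for a run notification. The field names here are assumptions for the sketch; the linked schema is the authoritative reference.

```typescript
// Hypothetical notification shape, inferred from the list above.
// Field names are illustrative assumptions, not the SDK's actual types.
interface RunNotification {
  id: string;
  status: "SUCCESS" | "FAILURE";
  duration: number; // ms
  startedAt: Date;
  payload: unknown;
  statuses: Array<{ key: string; label: string }>; // explicit statuses, if any
  isTest: boolean;
  job: { id: string; version: string };
  environment: { slug: string };
  project: { id: string };
  organization: { id: string };
  account?: { id: string }; // external account, if any
  output?: unknown; // present on successful runs
  error?: unknown; // present on failed runs
}

// Example consumer: build a log line from a notification.
function summarize(n: RunNotification): string {
  return n.status === "SUCCESS"
    ? `Run ${n.id} on job ${n.job.id} succeeded in ${n.duration}ms`
    : `Run ${n.id} on job ${n.job.id} failed`;
}
```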
How does it work?
Run notifications work by making a separate HTTP request to your endpoint URL after a run completes or fails, which means that you get a fresh serverless function execution to perform additional work.
We only send these notifications if you've subscribed to them. If you want to stop receiving them, just remove the code that subscribes to them and we'll stop sending them.
How to update
The Trigger.dev Cloud is now running v2.2.10. If you are self-hosting you can upgrade by pinning to the v2.2.10 tag.

The `@trigger.dev/*` packages are now at v2.2.7. You can update using the following command:

```bash
npx @trigger.dev/cli@latest update
```

We've added a new APIs section to the site. Here you can browse APIs by category and view working code samples of how to use each API with Trigger.dev (40+ APIs and counting!).
We've included job examples for all of our integrations, as well as code samples showing how to connect to many APIs using their official Node SDKs or fetch.
For example, our OpenAI API page includes multiple working code samples as well as full-stack projects, all using our OpenAI integration:
All the code is regularly maintained and updated by our team and the amazing open source community, and can be copied and pasted to use in your own projects.
Task Library
We now have a dedicated docs page called Task Library where you can easily find and learn about built-in tasks to use in your jobs, like waitForEvent() or backgroundFetch():
New Tasks
We also have a few new tasks that we've added to the library:
io.backgroundPoll()
This task is similar to `backgroundFetch`, but instead of waiting for a single request to complete, it will poll a URL until it returns a certain value:

```ts
const result = await io.backgroundPoll<{ foo: string }>("🔃", {
  url: "https://example.com/api/endpoint",
  interval: 10, // every 10 seconds
  timeout: 300, // stop polling after 5 minutes
  responseFilter: {
    // stop polling once this filter matches
    status: [200],
    body: {
      status: ["SUCCESS"],
    },
  },
});
```

We even display each poll request in the run dashboard:
See our reference docs to learn more.
io.sendEvents()
io.sendEvents() allows you to send multiple events at a time:
```ts
await io.sendEvents("send-events", [
  {
    name: "new.user",
    payload: {
      userId: "u_12345",
    },
  },
  {
    name: "new.user",
    payload: {
      userId: "u_67890",
    },
  },
]);
```

See our reference docs to learn more.
io.random()
`io.random()` is identical to `Math.random()` when called without options, but ensures your random numbers are not regenerated on resume or retry. It returns a pseudo-random floating-point number between an optional min (default: 0, inclusive) and max (default: 1, exclusive), and can optionally round the result to the nearest integer.

How to update
The `@trigger.dev/*` packages are now at v2.2.6. You can update using the following command:

```bash
npx @trigger.dev/cli@latest update
```

When we released our Replicate integration last month, we added support for tasks that could be completed via a one-time webhook request to support Replicate Prediction webhooks. Replicate webhooks work by providing a URL for a "callback" request when creating a prediction:
```ts
await replicate.predictions.create({
  version: "d55b9f2d...",
  input: { prompt: "call me later maybe" },
  webhook: "https://example.com/replicate-webhook",
  webhook_events_filter: ["completed"], // optional
});
```

This allowed us to create an integration task that uses these webhooks to provide a seamless experience when creating a prediction:
```ts
const prediction = await io.replicate.predictions.createAndAwait(
  "create-prediction",
  {
    version: "d55b9f2d...",
    input: {
      prompt: "call me later maybe",
    },
  }
);
```

We've now exposed the same functionality so anyone can take advantage of similar APIs with our new `io.waitForRequest()` built-in task. This task will wait for a request to be made to a specific URL, and then return the request body as the task result. This is useful for any API that requires a webhook to be set up, or a callback URL to be provided.
For example, you could use it to interface with ScreenshotOne.com to take a screenshot of a website and resume execution once the screenshot is ready:
```ts
const result = await io.waitForRequest(
  "screenshot-one",
  async (url) => {
    await fetch(`https://api.screenshotone.com/take`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        access_key: process.env.SCREENSHOT_ONE_API_KEY,
        url: "https://trigger.dev",
        store: "true",
        storage_path: "my-screenshots",
        response_type: "json",
        async: "true",
        webhook_url: url, // this is the URL that will be called when the screenshot is ready
        storage_return_location: "true",
      }),
    });
  },
  {
    timeoutInSeconds: 300, // wait up to 5 minutes for the screenshot to be ready
  }
);
```

How to update
The `@trigger.dev/*` packages are now at v2.2.6. You can update using the following command:

```bash
npx @trigger.dev/cli@latest update
```

Up until now, you could only trigger a new job run when sending an event to Trigger.dev using `eventTrigger()`:

```ts
client.defineJob({
  id: "payment-accepted",
  name: "Payment Accepted",
  version: "1.0.0",
  trigger: eventTrigger({
    name: "payment.accepted",
    schema: z.object({
      id: z.string(),
      amount: z.number(),
      currency: z.string(),
      userId: z.string(),
    }),
  }),
  run: async (payload, io, ctx) => {
    // Do something when a payment is accepted
  },
});
```

Now with `io.waitForEvent()`, you can wait for an event to be sent in the middle of a job run:

```ts
const event = await io.waitForEvent("🤑", {
  name: "payment.accepted",
  schema: z.object({
    id: z.string(),
    amount: z.number(),
    currency: z.string(),
    userId: z.string(),
  }),
  filter: {
    userId: ["user_1234"], // only wait for events from this specific user
  },
});
```

By default, `io.waitForEvent()` will wait for 1 hour for an event to be sent. If no event is sent within that time, it will throw an error. You can customize the timeout by passing a second argument:

```ts
const event = await io.waitForEvent(
  "🤑",
  {
    name: "payment.accepted",
    schema: z.object({
      id: z.string(),
      amount: z.number(),
      currency: z.string(),
      userId: z.string(),
    }),
    filter: {
      userId: ["user_1234"], // only wait for events from this specific user
    },
  },
  {
    timeoutInSeconds: 60 * 60 * 24 * 7, // wait for 1 week
  }
);
```

This allows you to build more complex workflows that simply were not possible before, or were at least a pain to implement. We're excited to see what you build with this new feature!
Read more about it in the docs.
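Conceptually, the `filter` option shown above matches event payload fields against arrays of allowed values. Here is a standalone sketch of that matching idea; it is purely illustrative and not the SDK's actual implementation:

```typescript
// Sketch of array-of-allowed-values filter matching, as used conceptually by
// the `filter` option above. Illustrative only; not the SDK's real code.
type Filter = { [key: string]: Array<string | number | boolean> };

function matchesFilter(payload: Record<string, unknown>, filter: Filter): boolean {
  // Every filtered field must equal one of its allowed values.
  return Object.entries(filter).every(([key, allowed]) =>
    allowed.some((value) => payload[key] === value)
  );
}
```

For example, `matchesFilter({ userId: "user_1234" }, { userId: ["user_1234"] })` matches, while any other `userId` does not.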
How to update
The
trigger.dev/*
packages are now atv2.2.6
. You can update using the following command:_10npx @trigger.dev/cli@latest updateWe've added a Cloudflare worker proxy and SQS queue to improve the performance and reliability of our API. It's not just for our Cloud product, you can optionally use it if you're self-hosting as all the code is in our open source repository.
To begin with, we're using this when events are sent to us, which happens when you use `client.sendEvent` and `client.sendEvents`. More API routes will be supported in the future.

How does it work?
Requests to the API are proxied through a Cloudflare worker. That worker intercepts certain API routes and sends the data to an SQS queue. The main API server then polls the queue for new data and processes it.
Why is this better?
If there is any downtime on the main API servers, we won't lose events. The Cloudflare worker will queue them up and the main API server will process them when it's back online.
Also, it allows us to deal with more load than before. The Cloudflare worker can handle a lot of requests and the main API server can process them at its own pace.
Event processing can be scaled horizontally with more workers that poll the queue.
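The decoupling described above can be pictured with a minimal in-memory queue. The real system uses a Cloudflare worker in front of SQS, so this is purely illustrative:

```typescript
// Minimal illustration of the ingest/process split described above.
// The edge worker only enqueues (cheap and always available); the API
// server drains the queue at its own pace, in batches it can handle.
class EventQueue<T> {
  private items: T[] = [];

  // Called on the ingest path (the worker in the real system).
  enqueue(event: T): void {
    this.items.push(event);
  }

  // Called by the processing side's poller (the API server).
  drain(batchSize: number): T[] {
    return this.items.splice(0, batchSize);
  }

  get size(): number {
    return this.items.length;
  }
}
```

If the processing side goes down, events simply accumulate in the queue and are drained when it comes back, which is exactly the durability property described above.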
How to update
The Trigger.dev Cloud is now running v2.2.9. If you are self-hosting you can upgrade by pinning to the v2.2.9 tag.

After OpenAI DevDay last week we got busy working on our `@trigger.dev/openai` integration, and we're happy to announce that we now support GPT-4 Turbo, the new Assistants API, DALL-E 3, and more, as well as some additional enhancements.

GPT-4 Turbo
GPT-4 Turbo is the newest model from OpenAI, with up to 128K context and lower prices. It's supported by our integration by specifying the `gpt-4-1106-preview` model:

```ts
await io.openai.chat.completions.create("debater-completion", {
  model: "gpt-4-1106-preview",
  messages: [
    {
      role: "user",
      content:
        'I want you to act as a debater. I will provide you with some topics related to current events and your task is to research both sides of the debates, present valid arguments for each side, refute opposing points of view, and draw persuasive conclusions based on evidence. Your goal is to help people come away from the discussion with increased knowledge and insight into the topic at hand. My first request is "I want an opinion piece about Deno."',
    },
  ],
});
```

We recommend the `backgroundCreate` variant, though: even though it's called Turbo, during preview it can take a while to complete:

```ts
// This will run in the background, so you don't have to worry about serverless function timeouts
await io.openai.chat.completions.backgroundCreate("debater-completion", {
  model: "gpt-4-1106-preview",
  messages: [
    {
      role: "user",
      content:
        'I want you to act as a debater. I will provide you with some topics related to current events and your task is to research both sides of the debates, present valid arguments for each side, refute opposing points of view, and draw persuasive conclusions based on evidence. Your goal is to help people come away from the discussion with increased knowledge and insight into the topic at hand. My first request is "I want an opinion piece about Deno."',
    },
  ],
});
```

We've also improved completion tasks and added additional properties that make it easier to see your rate limits and how many tokens you have left:
Additionally, if an OpenAI request fails because of a rate limit error, we will automatically retry the request only after the rate limit has been reset.
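The retry-after-reset idea boils down to computing a wait time from the reset timestamp reported by the API. A simplified standalone sketch (header parsing is elided, and the reset time is assumed to arrive as epoch milliseconds; this is an illustration of the idea, not the integration's code):

```typescript
// Sketch of the retry-after-reset idea: given the time at which the rate
// limit window resets, compute how long to wait before retrying the request.
function retryDelayMs(resetAtMs: number, nowMs: number): number {
  // Never return a negative delay; if the window already reset, retry now.
  return Math.max(0, resetAtMs - nowMs);
}
```

Waiting exactly until the reset avoids hammering the API with retries that are guaranteed to fail with another rate limit error.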
Assistants
Also released on DevDay was the new Assistants API, which we now have support for:
```ts
// Create a file and wait for it to be processed
const file = await io.openai.files.createAndWaitForProcessing("upload-file", {
  purpose: "assistants",
  file: fs.createReadStream("./fixtures/mydata.csv"),
});

// Create the assistant
const assistant = await io.openai.beta.assistants.create("create-assistant", {
  name: "Data visualizer",
  description:
    "You are great at creating beautiful data visualizations. You analyze data present in .csv files, understand trends, and come up with data visualizations relevant to those trends. You also share a brief text summary of the trends observed.",
  model: payload.model,
  tools: [{ type: "code_interpreter" }],
  file_ids: [file.id],
});

// Sometime later, you can now use the assistant by the assistant id:
const run = await io.openai.beta.threads.createAndRunUntilCompletion(
  "create-thread",
  {
    assistant_id: payload.id,
    thread: {
      messages: [
        {
          role: "user",
          content:
            "Create 3 data visualizations based on the trends in this file.",
          file_ids: [payload.fileId],
        },
      ],
    },
  }
);

if (run.status !== "completed") {
  throw new Error(
    `Run finished with status ${run.status}: ${JSON.stringify(run.last_error)}`
  );
}

const messages = await io.openai.beta.threads.messages.list(
  "list-messages",
  run.thread_id
);
```

For more about how to use Assistants, check out our new OpenAI docs.
Images
We've added support for creating images in the background, similar to how our background completion works:
```ts
const response = await io.openai.images.backgroundCreate("dalle-3-background", {
  model: "dall-e-3",
  prompt:
    "Create a comic strip featuring miles morales and spiderpunk fighting off the sinister six",
});
```

Files
You can now wait for a file to be processed before continuing:
```ts
const file = await io.openai.files.create("upload-file", {
  purpose: "assistants",
  file: fs.createReadStream("./fixtures/mydata.csv"),
});

await io.openai.files.waitForProcessing("wait-for-file", file.id);
```

Or you can combine that into a single call:
```ts
const file = await io.openai.files.createAndWaitForProcessing("upload-file", {
  purpose: "assistants",
  file: fs.createReadStream("./fixtures/mydata.csv"),
});
```

New Docs
We've completely rewritten our OpenAI docs to make it easier to understand how to use our integration. Check them out here.
How to update
The `@trigger.dev/*` packages are now at v2.2.6. You can update using the following command:

```bash
npx @trigger.dev/cli@latest update
```

Often you want to trigger your Jobs from events that happen in other APIs. This is where webhooks come in.
Now you can easily subscribe to any API that supports webhooks, without needing to use a Trigger.dev Integration. This unlocks far more Jobs than were previously possible.
How to create an HTTP endpoint
We want to send a Slack message when one of our cal.com meetings is cancelled. To do this we need to create an HTTP endpoint that cal.com can send a webhook to.
```ts
//create an HTTP endpoint
const caldotcom = client.defineHttpEndpoint({
  id: "cal.com",
  source: "cal.com",
  icon: "caldotcom",
  verify: async (request) => {
    //this helper function makes verifying most webhooks easy
    return await verifyRequestSignature({
      request,
      headerName: "X-Cal-Signature-256",
      secret: process.env.CALDOTCOM_SECRET!,
      algorithm: "sha256",
    });
  },
});
```

Getting the URL and secret from the Trigger.dev dashboard
There's a new section in the sidebar: "HTTP endpoints".
From there you can select the HTTP endpoint you just created and get the URL and secret. In this case, it's cal.com.
Each environment has a different Webhook URL so you can control which environment you want to trigger Jobs for.
Setting up the webhook in cal.com
In cal.com you can navigate to "Settings/Webhooks/New" to create a new webhook.
Enter the URL and secret from the Trigger.dev dashboard and select the events you want to trigger Jobs for.
We could only select "Booking cancelled" but we're going to select all the events so we can reuse this webhook for more than just a single trigger.
Using HTTP endpoints to create Triggers
Then we can use that HTTP endpoint to create multiple Triggers for your Jobs. They can have different filters, using the data from the webhook.
```ts
client.defineJob({
  id: "http-caldotcom",
  name: "HTTP Cal.com",
  version: "1.0.0",
  enabled: true,
  //create a Trigger from the HTTP endpoint above. The filter is optional.
  trigger: caldotcom.onRequest({
    filter: { body: { triggerEvent: ["BOOKING_CANCELLED"] } },
  }),
  run: async (request, io, ctx) => {
    //note that when using HTTP endpoints, the first parameter is the request
    //you need to get the body, usually it will be json so you do:
    const body = await request.json();

    //this prints out "Matt Aitken cancelled their meeting"
    await io.logger.info(
      `${body.payload.attendees
        .map((a) => a.name)
        .join(", ")} cancelled their meeting ${new Date(
        body.payload.startTime
      )}`
    );
  },
});
```

See our HTTP endpoint docs for more info. Upgrade to the latest version of the SDK to start using this feature.
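Under the hood, helpers like the `verifyRequestSignature` call shown earlier amount to an HMAC comparison of the raw request body against the signature header. A standalone sketch using Node's crypto module (illustrative; the real helper also handles header extraction and encoding details):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Compute an HMAC-SHA256 of the raw request body and compare it to the
// hex signature from the webhook header, using a constant-time comparison
// to avoid timing attacks.
function verifySignature(rawBody: string, signatureHex: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected, "hex");
  const b = Buffer.from(signatureHex, "hex");
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Verifying against the raw body (before any JSON parsing) matters, because re-serialized JSON rarely matches the exact bytes the sender signed.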
How to update
The Trigger.dev Cloud is now running v2.2.4. If you are self-hosting you can upgrade by pinning to the v2.2.4 tag.

The `@trigger.dev/*` packages are now at v2.2.5. You can update using the following command:

```bash
npx @trigger.dev/cli@latest update
```

Up until now, Trigger.dev only supported the following 3 types of triggers: Events, Webhooks, and Scheduled (e.g. cron/interval).
But sometimes it makes sense to be able to invoke a Job manually, without having to specify an event, especially for cases where you want to get notified when the invoked Job Run is complete.
To specify that a job is manually invokable, use the `invokeTrigger()` function when defining it:

```ts
import { invokeTrigger } from "@trigger.dev/sdk";
import { z } from "zod";
import { client } from "@/trigger";

export const exampleJob = client.defineJob({
  id: "example-job",
  name: "Example job",
  version: "1.0.1",
  trigger: invokeTrigger({
    schema: z.object({
      foo: z.string(),
    }),
  }),
  run: async (payload, io, ctx) => {
    // do something with the payload
  },
});
```

You can then invoke the job using the `Job.invoke()` method:

```ts
import { exampleJob } from "./exampleJob";

const jobRun = await exampleJob.invoke(
  { foo: "bar" },
  { callbackUrl: `${process.env.VERCEL_URL}/api/callback` }
);
```

This is useful on its own, but things get really interesting when you invoke a job from another job and wait for the invoked job to complete:
```ts
import { exampleJob } from "./exampleJob";

client.defineJob({
  id: "example-job2",
  name: "Example job 2",
  version: "1.0.1",
  trigger: intervalTrigger({
    seconds: 60,
  }),
  run: async (payload, io, ctx) => {
    const runResult = await exampleJob.invokeAndWaitForCompletion("⚡", {
      foo: "123",
    });
  },
});
```

You can also batch up to 25 invocations at once, and we will run them in parallel and wait for all of them to complete before continuing execution of the current job.
```ts
import { exampleJob } from "./exampleJob";

client.defineJob({
  id: "example-job2",
  name: "Example job 2",
  version: "1.0.1",
  trigger: intervalTrigger({
    seconds: 60,
  }),
  run: async (payload, io, ctx) => {
    const runs = await exampleJob.batchInvokeAndWaitForCompletion("⚡", [
      {
        payload: {
          userId: "123",
          tier: "free",
        },
      },
      {
        payload: {
          userId: "abc",
          tier: "paid",
        },
      },
    ]);

    // runs is an array of RunNotification objects
  },
});
```

See our Invoke Trigger docs for more info. Upgrade to the latest version of the SDK to start using this feature.
How to update
The Trigger.dev Cloud is now running v2.2.4. If you are self-hosting you can upgrade by pinning to the v2.2.4 tag.

The `@trigger.dev/*` packages are now at v2.2.5. You can update using the following command:

```bash
npx @trigger.dev/cli@latest update
```

Trigger.dev has a new side menu to make navigating the app much easier. Here's a quick overview:
New side menu
More of the app is now accessible from the new side menu. Project related pages are grouped together at the top, followed by organization pages.
Organization and Projects menu
Switching Organizations and Projects is now much easier.
Your profile page
You can now access your profile from the avatar icon.
Helpful links
All the most helpful links are now in one place. You can access the documentation, changelog, and support from the bottom of the menu.
How to update
The Trigger.dev Cloud is now running v2.2.4. If you are self-hosting you can upgrade by pinning to the v2.2.4 tag.
Next.js 14 was just announced on stage at Next.js Conf, and we're happy to say we've just released support for it in our `@trigger.dev/[email protected]` release.

You can now create a new Next.js 14 app with Trigger.dev as easily as:

```bash
npx create-next-app@latest
npx @trigger.dev/cli@latest init
```

Our `@trigger.dev/cli init` command will automatically detect that you're using Next.js 14 and auto-configure your project, whether it uses Pages or the new App directory. Check out our Next.js Quickstart for more on how to get started with Next.js and Trigger.dev.
We've just released Trigger.dev server v2.2.4 and `@trigger.dev/*` packages at 2.2.2, which include a new feature called Auto Yielding Executions. It drastically cuts down on serverless function timeouts and provides stronger guarantees around duplicate task executions.

The TL;DR is that our `@trigger.dev/sdk` will now automatically yield Job Run executions that are about to time out, and resume them in another function execution. Previously, when executing a Job Run we'd keep executing until the serverless function timed out, and resume executing only after the timeout was received.

The issue was that we didn't have good control over when the timeout would occur; it could happen at any time during the execution. This could result in some tasks getting executed multiple times, which is not ideal. It also meant unwanted timeout logs, which could cause issues with any downstream alert systems. This is what happened when upgrading one of our projects to the new `@trigger.dev/[email protected]`:

If you want to learn more about how this works, read the full Auto Yielding Executions discussion.
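Conceptually, auto-yielding checks the remaining time budget between tasks and checkpoints cleanly instead of running into the platform timeout. A simplified standalone illustration (not the SDK internals):

```typescript
// Simplified illustration of auto-yielding: execute tasks in order, but stop
// cleanly (yield) when the remaining time budget is too small, returning the
// index to resume from in the next function execution. Because we yield at a
// task boundary, no task is left half-done and re-executed.
type StepResult = { completed: number; yielded: boolean };

function runWithBudget(
  tasks: Array<() => void>,
  startIndex: number,
  remainingMs: () => number,
  safetyMarginMs: number
): StepResult {
  for (let i = startIndex; i < tasks.length; i++) {
    if (remainingMs() < safetyMarginMs) {
      // Yield before the platform kills us; resume at index i later.
      return { completed: i, yielded: true };
    }
    tasks[i]();
  }
  return { completed: tasks.length, yielded: false };
}
```

The key property is that yielding is a decision the run makes itself, at a known point, rather than an interruption imposed by the platform at an arbitrary point.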
We've made a small tweak to our OpenAI integration that allows it to be used with any OpenAI compatible API, such as Perplexity.ai:
```ts
import { OpenAI } from "@trigger.dev/openai";

const perplexity = new OpenAI({
  id: "perplexity",
  apiKey: process.env["PERPLEXITY_API_KEY"]!,
  baseURL: "https://api.perplexity.ai", // specify the base URL for Perplexity.ai
  icon: "brand-open-source", // change the task icon to a generic open source logo
});
```

Since Perplexity.ai is compatible with OpenAI, you can use the same tasks as with OpenAI but with open source models, like `mistral-7b-instruct`:

```ts
client.defineJob({
  id: "perplexity-tasks",
  name: "Perplexity Tasks",
  version: "0.0.1",
  trigger: eventTrigger({
    name: "perplexity.tasks",
  }),
  integrations: {
    perplexity,
  },
  run: async (payload, io, ctx) => {
    await io.perplexity.chat.completions.create("chat-completion", {
      model: "mistral-7b-instruct",
      messages: [
        {
          role: "user",
          content: "Create a good programming joke about background jobs",
        },
      ],
    });

    // Run this in the background
    await io.perplexity.chat.completions.backgroundCreate(
      "background-chat-completion",
      {
        model: "mistral-7b-instruct",
        messages: [
          {
            role: "user",
            content:
              "If you were a programming language, what would you be and why?",
          },
        ],
      }
    );
  },
});
```

And you'll get the same experience in the Run Dashboard when viewing the logs:
We also support the Azure OpenAI Service through the `defaultHeaders` and `defaultQuery` options:

```ts
import { OpenAI } from "@trigger.dev/openai";

const azureOpenAI = new OpenAI({
  id: "azure-openai",
  apiKey: process.env["AZURE_API_KEY"]!,
  icon: "brand-azure",
  baseURL:
    "https://my-resource.openai.azure.com/openai/deployments/my-gpt35-16k-deployment",
  defaultQuery: { "api-version": "2023-06-01-preview" },
  defaultHeaders: { "api-key": process.env["AZURE_API_KEY"] },
});
```

Server v2.2.4
These additional changes made it into the server in v2.2.4:
- [abc9737a][@chaturrved] Support for Tabler Icons in tasks:
```ts
await io.runTask(
  "cache-key",
  async () => {
    // do something cubey here
  },
  { icon: "3d-cube-sphere" }
);
```

- [6d3b761c][@hmacr] Fixed run list pagination
- [5ea6a49d] Made the app minimally usable on mobile devices
- [627c767c] Fixed an issue where webhook triggers would erroneously attempt to re-register whenever jobs were indexed.
@trigger.dev/[email protected]
- [6769d6b4]: Detects JSRuntime (Node/Deno at the moment). Adds basic Deno support
- [9df93d07]: Improve create-integration output. Use templates and shared configs.
- [50e31924]: Added the ability to use a custom tunnel with the dev command
- [0adf41c7]: Added a commented-out Next.js `maxDuration` to the api/trigger file generated by CLI init
How to update
The Trigger.dev Cloud is now running v2.2.4. If you are self-hosting you can upgrade by pinning to the v2.2.4 tag.

The `@trigger.dev/*` packages are now at v2.2.2. You can update using the following command:

```bash
npx @trigger.dev/cli@latest update
```

For your Jobs to work they need to be registered with the Trigger.dev server (cloud or self-hosted). When they're registered we can trigger runs.
We've just fixed a major problem: we now show errors in your Job definitions so you can fix them. Before, you had no idea why Jobs weren't appearing or being updated in the dashboard.
In the console
When you run `npx @trigger.dev/cli@latest dev` you'll now see any errors with your Job definitions. In this case, we've set an interval of less than 60 seconds on our `intervalTrigger`, and we've left the name off a Job:

In the dashboard
These errors are also shown on the "Environments" page of the dashboard. You can manually refresh from this page as well, which is useful for Staging/Production if you haven't set up automatic refreshing.
Other changes
Improvements
- Added a filter for active jobs in the dashboard (PR #601 by hmacr)
- Replaced React Hot Toast with Sonner toasts (Issue #555 by arjunindiai)
- Upgraded packages to use Node 18 and fetch instead of node-fetch (PR #581 by Rutam21)
- Added contributors section to readme.md (PR #594 by mohitd404)
- Added SvelteKit adapter (PR #467 by Chigala)
Fixes
- intervalTriggers of more than 10 minutes never started in Staging/Prod (Issue #611)
- Improved the robustness and error reporting for login with magic link
How to update
The Trigger.dev Cloud is now running v2.2.0. If you are self-hosting you can upgrade by pinning to the v2.2.0 tag.

The `@trigger.dev/*` packages are now at v2.2.0. You can update using the following command:

```bash
npx @trigger.dev/cli@latest update
```

Examples and recent payloads
You can easily select from our example payloads or the five most recent payloads that triggered this Job. We also automatically populate the editor with an example or the most recent payload.
JSON linting
As you edit the JSON it's linted, which means you get useful errors pointing you at where the problems are.
Submit using your keyboard
You can press ⌘↵ on Mac, CTRL+Enter on Windows to quickly submit the test.
Test editor video tour (1m 37s)
Highlights
We've added a new integration for Replicate, which lets you run machine learning models with a few lines of code. It's powered by a new platform feature we call "Task Callbacks", which allows tasks to be completed via a webhook or failed via a timeout. You can use callbacks with `io.runTask`:

```ts
await io.runTask(
  "use-callback-url",
  async (task) => {
    // task.callbackUrl is the URL to call when the task is done
    // The output of this task will be the body POSTed to this URL
  },
  {
    name: "Use the callbackUrl to notify the caller when the task is done",
    callback: {
      enabled: true,
      timeoutInSeconds: 300, // If task.callbackUrl is not called within 300 seconds, the task will fail
    },
  }
);
```

Improvements
We've done a lot of performance work this release, especially for job runs with a large number of tasks and logs. In short, we now make much better use of cached task outputs when resuming runs. For a deep dive, check out this pull request.
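The core idea behind cached task outputs can be pictured with a small standalone sketch: a keyed task whose output was already recorded is skipped on resume, and the recorded output is returned directly (illustrative only, not the platform's implementation):

```typescript
// Simplified sketch of resuming with cached task outputs: if a task key
// already has a recorded output, return it instead of re-executing the task.
function runTaskCached<T>(
  key: string,
  task: () => T,
  cache: Map<string, unknown>
): T {
  if (cache.has(key)) {
    return cache.get(key) as T; // resume path: no re-execution
  }
  const output = task();
  cache.set(key, output); // record the output so a resumed run can skip this task
  return output;
}
```

This is why resuming a run with thousands of completed tasks can be fast: the completed prefix of the run is replayed from recorded outputs rather than re-executed.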
Bug Fixes
- Fixed an issue with the Linear `getAll` types #81e886a1
- Updated the adapter packages (Remix, Next, Astro, and Express) to return response headers.
Credits
Thanks to @nicktrn for the Linear fix!
How to update
The Trigger.dev Cloud is now running v2.1.10. If you are self-hosting you can upgrade by pinning to the v2.1.10 tag.

To upgrade your `@trigger.dev/*` packages, you can issue the following command:

```bash
npx @trigger.dev/cli@latest update
```
Replicate lets you run machine learning models with a few lines of code, without needing to understand how machine learning works. And now you can easily use the Replicate API in your own applications using Trigger.dev and our new Replicate integration:
```ts
client.defineJob({
  id: "replicate-cinematic-prompt",
  name: "Replicate - Cinematic Prompt",
  version: "0.1.0",
  integrations: { replicate },
  trigger: eventTrigger({
    name: "replicate.cinematic",
  }),
  run: async (payload, io, ctx) => {
    const prediction = await io.replicate.predictions.createAndAwait(
      "await-prediction",
      {
        version:
          "af1a68a271597604546c09c64aabcd7782c114a63539a4a8d14d1eeda5630c33",
        input: {
          prompt: `rick astley riding a harley through post-apocalyptic miami, cinematic, 70mm, anamorphic, bokeh`,
          width: 1280,
          height: 720,
        },
      }
    );

    return prediction.output;
  },
});
```

We make use of Replicate webhooks and a new feature of Trigger.dev called "Task Callbacks" to ensure long-running predictions don't result in function timeout errors.
See more details in the Replicate integration docs.
Thanks to @nicktrn for the awesome work on this integration 🚀
It's October! Which can only mean one thing... Hacktoberfest is back! 🎉 This year we've lined up some great swag, and plenty of GitHub issues to contribute to.
Here's how to get involved:
- We've created GitHub issues tagged: 🎃 hacktoberfest
- Each issue is also tagged with points: 💎 100 points
- For every PR you get merged, you collect points
- Collect as many points before October 31st 2023
- Then spend your points in our shop 🎁
We've added support for an additional environment between `DEV` and `PROD` called `STAGING`. This environment is useful for testing your Jobs in a production-like environment before deploying to production.

All existing projects will automatically have a `STAGING` environment created for them. The API Key for this environment will start with `tr_stg_`.

We will be adding support for ephemeral `PREVIEW` environments for popular platforms like Vercel in the future, so stay tuned!

You can now redact data from Task outputs, so it won't be visible in the dashboard. This is useful for sensitive data like Personally Identifiable Information (PII).
To use it, add the `redact` option to `runTask` like so:

```ts
const result = await io.runTask(
  "task-example-1",
  async () => {
    return {
      id: "evt_3NYWgVI0XSgju2ur0PN22Hsu",
      object: "event",
      api_version: "2022-11-15",
      created: 1690473903,
      data: {
        object: {
          id: "ch_3NYWgVI0XSgju2ur0C2UzeKC",
        },
      },
    };
  },
  {
    redact: {
      paths: ["data.object.id"],
    },
  }
);
```

We've had a lot of requests for using Trigger.dev with frameworks other than Next.js, so today we're announcing three new ones:
- Next.js
- Remix
- Astro
- Express
With many more coming very soon:
- SvelteKit
- RedwoodJS
- Nuxt.js
- Nest.js
- Fastify
You can now create `statuses` in your Job code that let you do some pretty cool stuff in your UI, like:

- Show exactly what you want in your UI (with as many statuses as you want).
- Pass arbitrary data to your UI, which you can use to render elements.
- Update existing elements in your UI as the run progresses.

Here's some example code for a Job that generates memes. We've created a single status `generatingMemes` (you can create as many as you like) and then updated it (you can update it as often as you like). It gives you fine-grained control over how you report progress and output data from your Job.

```ts
client.defineJob({
  id: "meme-generator",
  name: "Generate memes",
  version: "0.1.1",
  trigger: eventTrigger({
    name: "generate-memes",
  }),
  run: async (payload, io, ctx) => {
    const generatingMemes = await io.createStatus("generating-memes", {
      label: "Generating memes",
      state: "loading",
      data: {
        progress: 0.1,
      },
    });

    //...do stuff, like generate memes

    await generatingMemes.update("middle-generation", {
      state: "success",
      data: {
        progress: 1,
        urls: [
          "https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExZnZoMndsdWh0MmhvY2kyaDF6YjZjZzg1ZGsxdnhhYm13a3Q1Y3lkbyZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/13HgwGsXF0aiGY/giphy.gif",
        ],
      },
    });
  },
});
```

Check out React Status Hooks in the docs.
You can now authenticate as your users!
Before, you could only use our Integrations with API Keys or OAuth authentication as yourself, the developer. Now, you can authenticate using the auth credentials of your users.
Auth Resolver allows you to implement your own custom auth resolving using a third-party service like Clerk or Nango.
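At its core, an auth resolver maps the user a run is triggered for to that user's stored credentials. Here's an illustrative sketch of that idea (all names here are hypothetical, not the SDK's actual API; the `Map` stands in for a token store like Clerk or Nango):

```ts
// Illustrative sketch of what an auth resolver does: given the user a run
// belongs to, look up that user's OAuth token in a third-party token store.
// All names here are hypothetical.
type AuthResolverResult = { type: "oauth"; token: string };

// Stand-in for a credential store such as Clerk or Nango.
const tokenStore = new Map<string, string>([
  ["user_123", "gho_exampletoken"],
]);

async function resolveAuth(userId: string): Promise<AuthResolverResult> {
  const token = tokenStore.get(userId);
  if (!token) {
    throw new Error(`No credentials stored for user ${userId}`);
  }
  return { type: "oauth", token };
}

// Usage: the resolved token is what an Integration would then use
// to call the third-party API on the user's behalf.
resolveAuth("user_123").then((auth) => {
  console.log(auth.type); // "oauth"
});
```

The real SDK wires a function like this into your Integrations; see the Auth Resolver docs for the actual API.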
We've created a beautiful showcase for Jobs (and projects) that you can build using Trigger.dev. Each one has code you can copy and paste to get started quickly.
Welcome to our new Usage Dashboard! You can now keep track of your Jobs on the 'Usage & Billing' page of the app. Here's a list of all the data you can now see at a glance:
- The number of Job Runs each month
- The total number of Runs this month
- The total number of Jobs
- The total number of Integrations
- The number of team members in your Organization
Log in to your account and click 'Usage & Billing' in the side menu of a Project page to see your Usage Dashboard.
Streamline your project and issue tracking with our new Linear Integration.
We've improved our Integrations to support generic interfaces and better ergonomics.
Previously, integrations could not support tasks with generic type parameters or fluent interfaces. For example, our OpenAI integration looked like this:

```ts
await io.openai.createChatCompletion("chat-completion", {
  model: "gpt-3.5-turbo",
  messages: [
    {
      role: "user",
      content: "Create a good programming joke about background jobs",
    },
  ],
});
```

This is now replaced with the following, which much more closely matches the OpenAI SDK:

```ts
await io.openai.chat.completions.create("chat-completion", {
  model: "gpt-3.5-turbo",
  messages: [
    {
      role: "user",
      content: "Create a good programming joke about background jobs",
    },
  ],
});
```

Tasks can now have generic type parameters as well, which is useful for integrations like Supabase or Airtable that have user-defined schemas:

```ts
const table = io.airtable
  .base(payload.baseId)
  .table<LaunchGoalsAndOkRs>(payload.tableName);

const records = await table.getRecords("multiple records", {
  fields: ["Status"],
});
```

Interact with your Airtable bases with our new Airtable Integration.
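The `LaunchGoalsAndOkRs` type in the Airtable example above is user-defined. A minimal sketch of such a record type might look like this (the field names are assumptions for illustration; yours will match your base's schema):

```ts
// Hypothetical record shape for the Airtable table used above.
// Field names are illustrative assumptions, not part of the integration.
type LaunchGoalsAndOkRs = {
  Status: "On track" | "At risk" | "Complete";
  Goal: string;
  Quarter: string;
};

// The generic parameter lets the integration type the records it returns,
// so accessing a field like `Status` is checked at compile time:
const example: LaunchGoalsAndOkRs = {
  Status: "On track",
  Goal: "Ship the Airtable integration",
  Quarter: "Q4 2023",
};

console.log(example.Status); // "On track"
```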
We've improved our documentation for:
We've added a new @trigger.dev/testing package.
We've fixed the Zod errors that were occurring because of excessively deep type instantiation when using `eventTrigger` and Zod 3.22.2.
Thanks to Liran Tal, we now have a native package to use Trigger.dev with Astro.
To update your existing projects to the latest version of the SDK, run the following command:
```bash
npx @trigger.dev/cli update
```

If you are self-hosting the Trigger.dev service, you'll need to update to the `latest` image:

```bash
docker pull triggerdotdev/trigger.dev:v2.0.0
# or
docker pull triggerdotdev/trigger.dev:latest@sha256:00d9d9646c3781c04b84b4a7fe2c3b9ffa79e22559ca70ffa1ca1e9ce570a799
```

If you are using the Trigger.dev Cloud, you'll automatically get the latest version of the service.
Our CLI has been updated with some fixes and improvements:
- 3ce5397: Added the send-event command
- 3897e6e: Make it more clear which API key the init command expects
- dd10717: Added the `--hostname` option to the cli dev command
- 8cf8544: Bugfix: @trigger.dev/cli init now correctly identifies the App Dir when using JS (thx @Chigala ✨)
- 4e78da3: fix: Add an update sub-command to the @trigger.dev/cli that updates all @trigger.dev/* packages (thx @hugomn ✨)
- 135cb49: Fixed the cli init log message to show the correct path to the app route created (thx @Chigala ✨)
We've updated our OpenAI package to use the new and improved v4 of the OpenAI SDK.
All of our existing tasks should work as before, but now when you use the `.native` property you'll get back a nice and shiny v4 SDK:

```ts
import { OpenAI } from "@trigger.dev/openai";

const openai = new OpenAI({
  id: "openai",
  apiKey: process.env["OPENAI_API_KEY"]!,
});

// Before: v3 SDK
openai.native.createCompletion({...});

// Now: v4 SDK
openai.native.completions.create({...});
```

We've also added some new tasks for managing fine-tuning jobs:

- `createFineTuningJob` - Create a fine-tuning job
- `retrieveFineTuningJob` - Retrieve a fine-tuning job
- `listFineTuningJobs` - List fine-tuning jobs
- `cancelFineTuningJob` - Cancel a fine-tuning job
- `listFineTuningJobEvents` - List events for a fine-tuning job
When sending events, you can delay the delivery by setting either the `deliverAt` or `deliverAfter` option:

```ts
await client.sendEvent(
  {
    id: "event-1",
    name: "example.event",
    payload: { hello: "world" },
  },
  {
    deliverAfter: 1000 * 60 * 60 * 24, // 1 day
  }
);
```

You can now easily cancel delayed events to prevent subsequent job runs with the new `cancelEvent` method:

```ts
await client.cancelEvent("event-1");
```

This functionality requires `@trigger.dev/[email protected]` or later.

You can now disable jobs in your code by setting the `enabled` option to `false`:

```ts
client.defineJob({
  id: "example-job",
  name: "Example Job",
  version: "0.1.0",
  trigger: eventTrigger({ name: "example.event" }),
  enabled: false,
  run: async (payload, io, ctx) => {
    // your job code here
  },
});
```

Which will show the job as disabled in the dashboard:
Once you've disabled your job, you can delete it from the dashboard:
For more detailed information, check out our documentation on managing Jobs.
We had an issue where runs that included tasks that had large task outputs could not be resumed after a delay. This was because we send completed task outputs in the request body when we resume a run, and some platforms have a limit on the size of the request body. We now cap the size of the task outputs we send in the request body to 3.5MB.
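Conceptually, the cap is a byte-size check on the serialized output (a sketch of the idea; the function name and exact mechanics here are illustrative, only the 3.5MB figure comes from the fix above):

```ts
// Illustrative sketch: decide whether a completed task's output is small
// enough to include in the resume request body. The 3.5MB figure matches
// the fix described above; everything else is hypothetical.
const MAX_OUTPUT_BYTES = 3.5 * 1024 * 1024;

function fitsInResumeBody(output: unknown): boolean {
  // Measure the serialized size in bytes, not characters,
  // since request-body limits are byte limits.
  const bytes = new TextEncoder().encode(JSON.stringify(output)).length;
  return bytes <= MAX_OUTPUT_BYTES;
}

console.log(fitsInResumeBody({ ok: true })); // true
```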
We've redesigned the way we queue and execute job runs in order to increase the speed of job execution.
- Fixed the cached task miss issue in the `@trigger.dev/sdk`, which should speed up resumed runs by A LOT
- Allow setting graphile worker concurrency settings through the env vars `WORKER_CONCURRENCY` and `EXECUTION_WORKER_CONCURRENCY`
- Allow setting Prisma pool settings through the env vars `DATABASE_CONNECTION_LIMIT` and `DATABASE_POOL_TIMEOUT`
- You can now selectively enable/disable the workers through `WORKER_ENABLED=false` and `EXECUTION_WORKER_ENABLED=false`. This means the image can be deployed as 2 or 3 separate services:
  - A WebApp service that serves the API and the Dashboard
  - A Worker service that runs tasks that have been added to the standard worker
  - An Execution Worker service that only runs "run execution" tasks
- Deprecated the `JobOptions.queue` option, as we are no longer using it to control job concurrency. We'll add proper queue support in the future.
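The way the enable flags combine can be sketched as follows (illustrative only, not the actual startup code; the env var names are the ones listed above):

```ts
// Sketch: which services a deployment runs, based on the env vars above.
// This mirrors the described behavior; it is not the actual startup code.
function servicesToRun(env: Record<string, string | undefined>): string[] {
  const services = ["webapp"]; // the API + Dashboard always run
  if (env.WORKER_ENABLED !== "false") services.push("worker");
  if (env.EXECUTION_WORKER_ENABLED !== "false") services.push("execution-worker");
  return services;
}

// A split deployment would set WORKER_ENABLED=false on one instance and
// run the standard worker in a second instance:
console.log(servicesToRun({ WORKER_ENABLED: "false" }));
// ["webapp", "execution-worker"]
```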
We've dropped the `beta` label on our v2.0.0 release of the Trigger.dev service, and moving forward we'll be updating the version number like good semantic versioning citizens.