The main difference is that v3 is far simpler. That’s because in v3 your code is deployed to our servers (unless you self-host), which are long-running.
The prompt in the accordion below gives good results when using Anthropic Claude 3.5 Sonnet. You’ll need a relatively large token limit.
Don’t forget to paste your own v2 code in a markdown codeblock at the bottom of the prompt before running it.
I would like you to help me convert from Trigger.dev v2 to Trigger.dev v3.
The important differences:
1. The syntax for creating “background jobs” has changed. In v2 it looked like this:
```ts
import { eventTrigger } from "@trigger.dev/sdk";
import { client } from "@/trigger";
import { db } from "@/lib/db";
import { z } from "zod";
import { eq } from "drizzle-orm";

client.defineJob({
  enabled: true,
  id: "my-job-id",
  name: "My job name",
  version: "0.0.1",
  // This is triggered by an event using eventTrigger. You can also trigger Jobs with webhooks,
  // on schedules, and more: https://trigger.dev/docs/documentation/concepts/triggers/introduction
  trigger: eventTrigger({
    name: "theevent.name",
    schema: z.object({
      phoneNumber: z.string(),
      verified: z.boolean(),
    }),
  }),
  run: async (payload, io) => {
    // everything needed to be wrapped in io.runTask in v2, to make it possible for long-running code to work
    const result = await io.runTask("get-stuff-from-db", async () => {
      const socials = await db.query.Socials.findMany({
        where: eq(Socials.service, "tiktok"),
      });
      return socials;
    });

    io.logger.info("Completed fetch successfully");
  },
});
```
In v3 it looks like this:
```ts
import { logger, task } from "@trigger.dev/sdk/v3";
import { db } from "@/lib/db";
import { eq } from "drizzle-orm";

export const getCreatorVideosFromTikTok = task({
  id: "my-job-id",
  run: async (payload: { phoneNumber: string; verified: boolean }) => {
    // in v3 there are no timeouts, so you can just use the code as is, no need to wrap in `io.runTask`
    const socials = await db.query.Socials.findMany({
      where: eq(Socials.service, "tiktok"),
    });

    // use `logger` instead of `io.logger`
    logger.info("Completed fetch successfully");
  },
});
```
Notice that in v2 the schema on eventTrigger defined the payload type. In v3 the payload type is defined by the TypeScript type of the run function’s payload parameter.
2. v2 had integrations with some APIs. Any package that isn’t @trigger.dev/sdk can be replaced with an official SDK. The syntax may need to be adapted.
For example:
v2:
```ts
import { invokeTrigger } from "@trigger.dev/sdk";
import { OpenAI } from "@trigger.dev/openai";
import { client } from "@/trigger";

const openai = new OpenAI({
  id: "openai",
  apiKey: process.env.OPENAI_API_KEY!,
});

client.defineJob({
  id: "openai-job",
  name: "OpenAI Job",
  version: "1.0.0",
  trigger: invokeTrigger(),
  integrations: {
    openai, // Add the OpenAI client as an integration
  },
  run: async (payload, io, ctx) => {
    // Now you can access it through the io object
    const completion = await io.openai.chat.completions.create("completion", {
      model: "gpt-3.5-turbo",
      messages: [
        {
          role: "user",
          content: "Create a good programming joke about background jobs",
        },
      ],
    });
  },
});
```
Would become in v3:
```ts
import { task } from "@trigger.dev/sdk/v3";
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

export const openaiJob = task({
  id: "openai-job",
  run: async (payload) => {
    const completion = await openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages: [
        {
          role: "user",
          content: "Create a good programming joke about background jobs",
        },
      ],
    });
  },
});
```
So don’t use the @trigger.dev/openai package in v3, use the official OpenAI SDK.
Bear in mind that the syntax for the latest official SDK will probably be different from the @trigger.dev integration SDK. You will need to adapt the code accordingly.
3. The most critical difference is that inside the run function you do NOT need to wrap everything in io.runTask. Anything inside a runTask callback can be extracted and used directly in the main body of the function, without wrapping.
4. The import for task in v3 is import { task } from "@trigger.dev/sdk/v3";
5. You can trigger tasks from other tasks. In v2 this was typically done by either calling io.sendEvent() or by calling yourOtherTask.invoke(). In v3 you call .trigger() on the other task; there are no events in v3.
v2:
This is a (very contrived) example that does a long OpenAI API call (>10s), stores the result in a database, waits for 5 mins, and then returns the result.
First, the old v2 code, which uses the OpenAI integration. Comments inline:
v2 OpenAI task
```ts
import { client } from "~/trigger";
import { eventTrigger } from "@trigger.dev/sdk";
import { z } from "zod";
// 1. A Trigger.dev integration for OpenAI
import { OpenAI } from "@trigger.dev/openai";

const openai = new OpenAI({
  id: "openai",
  apiKey: process.env["OPENAI_API_KEY"]!,
});

// 2. Use the client to define a "Job"
client.defineJob({
  id: "openai-tasks",
  name: "OpenAI Tasks",
  version: "0.0.1",
  trigger: eventTrigger({
    name: "openai.tasks",
    schema: z.object({
      prompt: z.string(),
    }),
  }),
  // 3. integrations are added and come through to `io` in the run fn
  integrations: {
    openai,
  },
  run: async (payload, io, ctx) => {
    // 4. You use `io` to get the integration
    // 5. Also note that "backgroundCreate" was needed for OpenAI
    //    to do work that lasted longer than your serverless timeout
    const chatCompletion = await io.openai.chat.completions.backgroundCreate(
      // 6. You needed to add "cacheKeys" to any "task"
      "background-chat-completion",
      {
        messages: [{ role: "user", content: payload.prompt }],
        model: "gpt-3.5-turbo",
      }
    );
    const result = chatCompletion.choices[0]?.message.content;
    if (!result) {
      // 7. throwing an error at the top-level in v2 failed the task immediately
      throw new Error("No result from OpenAI");
    }

    // 8. io.runTask needed to be used to prevent work from happening twice
    const dbRow = await io.runTask("store-in-db", async (task) => {
      // 9. Custom logic can be put here
      //    Anything returned must be JSON-serializable, so no Date objects etc.
      return saveToDb(result);
    });

    // 10. Wait for 5 minutes.
    //     You need a cacheKey and the 2nd param is a number
    await io.wait("wait some time", 60 * 5);

    // 11. Anything returned must be JSON-serializable, so no Date objects etc.
    return result;
  },
});
```
In v3 we eliminate a lot of code, mainly because we don’t need tricks to try to avoid timeouts. Here’s the equivalent v3 code:
v3 OpenAI task
```ts
import { logger, task, wait } from "@trigger.dev/sdk/v3";
// 1. Official OpenAI SDK
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// 2. Jobs don't exist now, use "task"
export const openaiTask = task({
  id: "openai-task",
  // 3. Retries happen if a task throws an error that isn't caught
  //    The default settings are in your trigger.config.ts (used if not overridden here)
  retry: {
    maxAttempts: 3,
  },
  run: async (payload: { prompt: string }) => {
    // 4. Use the official SDK
    // 5. No timeouts, so this can take a long time
    const chatCompletion = await openai.chat.completions.create({
      messages: [{ role: "user", content: payload.prompt }],
      model: "gpt-3.5-turbo",
    });
    const result = chatCompletion.choices[0]?.message.content;
    if (!result) {
      // 6. throwing an error at the top-level will retry the task (if retries are enabled)
      throw new Error("No result from OpenAI");
    }

    // 7. No need to use runTask, just call the function
    const dbRow = await saveToDb(result);

    // 8. You can provide seconds, minutes, hours etc.
    //    You don't need cacheKeys in v3
    await wait.for({ minutes: 5 });

    // 9. You can return anything that's serializable using SuperJSON
    //    That includes undefined, Date, bigint, RegExp, Set, Map, Error and URL.
    return result;
  },
});
```
In v2 there were different trigger types and triggering each type was slightly different.
v2 triggering
```ts
async function yourBackendFunction() {
  // 1. for `eventTrigger` you use `client.sendEvent`
  const event = await client.sendEvent({
    name: "openai.tasks",
    payload: {
      prompt: "Create a good programming joke about background jobs",
    },
  });

  // 2. for `invokeTrigger` you'd call `invoke` on the job
  const { id } = await invocableJob.invoke({
    prompt: "What is the meaning of life?",
  });
}
```
We’ve unified triggering in v3. You use trigger() or batchTrigger(), which you can call on any type of task, including scheduled tasks, webhooks, and so on.
v3 triggering
```ts
async function yourBackendFunction() {
  // call `trigger()` on any task
  const handle = await openaiTask.trigger({
    prompt: "Tell me a programming joke",
  });
}
```