Here we add Prisma and OpenAI instrumentations to your `trigger.config.ts` file.
```ts trigger.config.ts theme={null}
import { defineConfig } from "@trigger.dev/sdk";
import { PrismaInstrumentation } from "@prisma/instrumentation";
import { OpenAIInstrumentation } from "@traceloop/instrumentation-openai";
export default defineConfig({
  project: "<project ref>",
  // Your other config settings...
  telemetry: {
    instrumentations: [new PrismaInstrumentation(), new OpenAIInstrumentation()],
  },
});
```
3. Hit the "Save" button to apply the changes.
Now whenever you push to your main branch, Vercel will deploy your application to the production environment without promoting it, and you can control the promotion manually.
### Deploy with Trigger.dev
Now we want to deploy that same commit to Trigger.dev, and then promote the Vercel deployment when that completes. Here's a sample GitHub Actions workflow that does this:
```yml theme={null}
name: Deploy to Trigger.dev (prod)
on:
push:
branches:
- main
concurrency:
group: ${{ github.workflow }}
cancel-in-progress: true
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Use Node.js 20.x
uses: actions/setup-node@v4
with:
node-version: "20.x"
- name: Install dependencies
run: npm install
- name: Wait for vercel deployment (push)
id: wait-for-vercel
uses: ludalex/vercel-wait@v1
with:
project-id: ${{ secrets.VERCEL_PROJECT_ID }}
team-id: ${{ secrets.VERCEL_SCOPE_NAME }}
token: ${{ secrets.VERCEL_TOKEN }}
sha: ${{ github.sha }}
- name: 🚀 Deploy Trigger.dev
id: deploy-trigger
env:
TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }}
run: |
npx trigger.dev@latest deploy
- name: Promote Vercel deploy
run: npx vercel promote $VERCEL_DEPLOYMENT_ID --yes --token $VERCEL_TOKEN --scope $VERCEL_SCOPE_NAME
env:
VERCEL_DEPLOYMENT_ID: ${{ steps.wait-for-vercel.outputs.deployment-id }}
VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
VERCEL_SCOPE_NAME: ${{ secrets.VERCEL_SCOPE_NAME }}
```
This workflow does the following:
1. Waits for the Vercel deployment to complete using the `ludalex/vercel-wait` action.
2. Deploys the tasks to Trigger.dev using the `npx trigger.dev deploy` command. There's no need to use the `--skip-promotion` flag because we want to promote the deployment.
3. Promotes the Vercel deployment using the `npx vercel promote` command.
For this workflow to work, you need to set up the following secrets in your GitHub repository:
* `TRIGGER_ACCESS_TOKEN`: Your Trigger.dev personal access token. View the instructions [here](/github-actions) to learn more.
* `VERCEL_TOKEN`: Your Vercel personal access token. You can find this in your Vercel account settings.
* `VERCEL_PROJECT_ID`: Your Vercel project ID. You can find this in your Vercel project settings.
* `VERCEL_SCOPE_NAME`: Your Vercel team slug.
Check out our [example repo](https://github.com/ericallam/vercel-atomic-deploys) to see this workflow in action.
Copy the API key from the dashboard and set the `TRIGGER_SECRET_KEY` environment variable, and then any tasks you trigger will run against the deployed version:
```txt .env theme={null}
TRIGGER_SECRET_KEY="tr_prod_abc123"
```
Now you can trigger your tasks:
```ts theme={null}
import { myTask } from "./trigger/tasks";
await myTask.trigger({ foo: "bar" });
```
See our [triggering tasks](/triggering) guide for more information.
## Versions
When you deploy your tasks, Trigger.dev creates a new version of all tasks in your project. A version is a snapshot of your tasks at a point in time, which ensures that existing runs are not affected by subsequent code changes.
### Current version
When you deploy, the version number is automatically incremented, and the new version is set as the current version for that environment.
This allows you to deploy and test a new version without affecting new task runs. When you want to promote the version, you can do so from the CLI:
```bash theme={null}
npx trigger.dev promote 20250228.1
```
Or from the dashboard.
To learn more about skipping promotion and how this enables atomic deployments, see our [Atomic deployment](/deployment/atomic-deployment) guide.
## Staging deploys
By default, the `deploy` command will deploy to the `prod` environment. If you want to deploy to a different environment, you can use the `--env` flag:
```bash theme={null}
npx trigger.dev deploy --env staging
```
Now you can trigger tasks against the staging environment by setting the `TRIGGER_SECRET_KEY` environment variable to the staging API key:
```txt .env theme={null}
TRIGGER_SECRET_KEY="tr_stg_abcd123"
```
Currently, we only support two environments: `prod` and `staging`. Multiple environments are on our roadmap which you can track [here](https://feedback.trigger.dev/p/more-environments).
## Environment variables
To add custom environment variables to your deployed tasks, you need to add them to your project in the Trigger.dev dashboard, or automatically sync them using our [syncEnvVars](/config/config-file#syncenvvars) or [syncVercelEnvVars](/config/config-file#syncvercelenvvars) build extensions.
For more information on environment variables, see our [environment variables](/deploy-environment-variables) guide.
## Troubleshooting
When things go wrong with your deployment, there are a few things you can do to diagnose the issue:
### Dry runs
You can do a "dry run" of the deployment to see what is built and uploaded without actually deploying:
```bash theme={null}
npx trigger.dev deploy --dry-run
# Dry run complete. View the built project at /
```
We recommend you automatically create a preview branch for each git branch when a Pull Request is opened and then archive it automatically when the PR is merged/closed.
The process to use preview branches looks like this:
1. Create a preview branch
2. Deploy to the preview branch (1+ times)
3. Trigger runs using your Preview API key (`TRIGGER_SECRET_KEY`) and the branch name (`TRIGGER_PREVIEW_BRANCH`).
4. Archive the preview branch when the branch is done.
There are two main ways to do this:
1. Automatically: using GitHub Actions (recommended).
2. Manually: in the dashboard and/or using the CLI.
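For the manual route, the CLI flow can be sketched like this (an illustration using the `--env preview` and `--branch` flags that the deploy command accepts; archiving can be done from the dashboard):

```bash theme={null}
# Deploy the current git branch to a preview branch
# (the branch is created on first deploy if it doesn't already exist)
npx trigger.dev@latest deploy --env preview

# Or target an explicitly named branch
npx trigger.dev@latest deploy --env preview --branch my-feature
```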
### Limits on active preview branches
We restrict the number of active preview branches (per project). You can archive a preview branch at any time (automatically or manually) to unlock another slot – or you can upgrade your plan.
Once archived, you can still view the branch's dashboard, but you can't trigger or execute runs (or perform other write operations).
This limit exists because each branch has an independent concurrency limit. For the Cloud product these are the limits:
| Plan | Active preview branches |
| ----- | ----------------------- |
| Free | 0 |
| Hobby | 5 |
| Pro | 20 (more available as a paid add-on) |
For full details see our [pricing page](https://trigger.dev/pricing).
## Triggering runs and using the SDK
Before deploying to preview branches, it's important to understand that you must set the `TRIGGER_PREVIEW_BRANCH` environment variable in addition to the `TRIGGER_SECRET_KEY` environment variable.
When deploying to somewhere that supports `process.env` (like Node.js runtimes) you can just set the environment variables:
```bash theme={null}
TRIGGER_SECRET_KEY="tr_preview_1234567890"
TRIGGER_PREVIEW_BRANCH="your-branch-name"
```
If you're deploying somewhere that doesn't support `process.env` (like some edge runtimes) you can manually configure the SDK:
```ts theme={null}
import { configure } from "@trigger.dev/sdk";
import { myTask } from "./trigger/myTasks";
configure({
secretKey: "tr_preview_1234567890", // WARNING: Never actually hardcode your secret key like this
previewBranch: "your-branch-name",
});
async function triggerTask() {
await myTask.trigger({ userId: "1234" }); // Trigger a run in your-branch-name
}
```
## Preview branches with GitHub Actions (recommended)
This GitHub Action will:
1. Automatically create a preview branch for your Pull Request (if the branch doesn't already exist).
2. Deploy the preview branch.
3. Archive the preview branch when the Pull Request is merged/closed.
```yml .github/workflows/trigger-preview-branches.yml theme={null}
name: Deploy to Trigger.dev (preview branches)
on:
pull_request:
types: [opened, synchronize, reopened, closed]
jobs:
deploy-preview:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Use Node.js 20.x
uses: actions/setup-node@v4
with:
node-version: "20.x"
- name: Install dependencies
run: npm install
- name: Deploy preview branch
run: npx trigger.dev@latest deploy --env preview
env:
TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }}
```
For this workflow to work, you need to set the following secrets in your GitHub repository:
* `TRIGGER_ACCESS_TOKEN`: A Trigger.dev personal access token (they start with `tr_pat_`). [Learn how to create one and set it in GitHub](/github-actions#creating-a-personal-access-token).
Notice that the deploy command ends with `--env preview`. We automatically detect the preview branch from the GitHub Actions environment variables.
You can manually specify the branch using the `--branch` flag.
You can also archive a branch manually when you're finished with it.
## Environment variables
You can set environment variables for "Preview" and they will get applied to all branches (existing and new). You can also set environment variables for a specific branch. If they are set for both then the branch-specific variables will take precedence.
These can be set manually in the dashboard, or automatically at deploy time using the [syncEnvVars()](/config/extensions/syncEnvVars) or [syncVercelEnvVars()](/config/extensions/syncEnvVars#syncvercelenvvars) build extensions.
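The precedence rule can be illustrated with a small sketch (illustrative only; the dashboard and build extensions apply this resolution for you):

```typescript theme={null}
// Illustrative only: branch-specific variables override "Preview"-wide ones.
type EnvVars = Record<string, string>;

function resolveEnvVars(previewWide: EnvVars, branchSpecific: EnvVars): EnvVars {
  // Later spreads win, so branch-specific values take precedence.
  return { ...previewWide, ...branchSpecific };
}

const resolved = resolveEnvVars(
  { API_URL: "https://preview.example.com", LOG_LEVEL: "info" },
  { API_URL: "https://my-branch.example.com" }
);
// resolved.API_URL === "https://my-branch.example.com" (branch wins)
// resolved.LOG_LEVEL === "info" (falls through from "Preview")
```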
### Sync environment variables
Full instructions are in the [syncEnvVars()](/config/extensions/syncEnvVars) documentation.
```ts trigger.config.ts theme={null}
import { defineConfig } from "@trigger.dev/sdk";
// You will need to install the @trigger.dev/build package
import { syncEnvVars } from "@trigger.dev/build/extensions/core";
export default defineConfig({
//... other config
build: {
// This will automatically detect and sync environment variables
extensions: [
syncEnvVars(async (ctx) => {
// You can fetch env variables from a 3rd party service like Infisical, Hashicorp Vault, etc.
// The ctx.branch will be set if it's a preview deployment.
return await fetchEnvVars(ctx.environment, ctx.branch);
}),
],
},
});
```
### Sync Vercel environment variables
You need to set the `VERCEL_ACCESS_TOKEN`, `VERCEL_PROJECT_ID` and `VERCEL_TEAM_ID` environment variables. You can find these in the Vercel dashboard. Full instructions are in the [syncVercelEnvVars()](/config/extensions/syncEnvVars#syncvercelenvvars) documentation.
The extension will automatically detect a preview branch deploy from Vercel and sync the appropriate environment variables.
```ts trigger.config.ts theme={null}
import { defineConfig } from "@trigger.dev/sdk";
// You will need to install the @trigger.dev/build package
import { syncVercelEnvVars } from "@trigger.dev/build/extensions/core";
export default defineConfig({
//... other config
build: {
// This will automatically detect and sync environment variables
extensions: [syncVercelEnvVars()],
},
});
```
# Errors & Retrying
Source: https://trigger.dev/docs/errors-retrying
How to deal with errors and write reliable tasks.
When an uncaught error is thrown inside your task, that task attempt will fail.
You can configure retrying in two ways:
1. In your [trigger.config file](/config/config-file) you can set the default retrying behavior for all tasks.
2. On each task you can set the retrying behavior.
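For example, project-wide defaults look roughly like this (a sketch based on the SDK's retry settings; tune the values to your workload):

```ts trigger.config.ts theme={null}
import { defineConfig } from "@trigger.dev/sdk";

export default defineConfig({
  project: "<project ref>",
  retries: {
    enabledInDev: false, // don't retry while running the local dev server
    default: {
      maxAttempts: 3,
      minTimeoutInMs: 1_000,
      maxTimeoutInMs: 10_000,
      factor: 2, // exponential backoff between attempts
      randomize: true, // add jitter to avoid retry stampedes
    },
  },
});
```

Individual tasks can override these defaults by passing a `retry` option (e.g. `retry: { maxAttempts: 10 }`) to `task()`.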
### Production and staging branches
When you connect a repository, its default branch is initially used as the production tracking branch.
When you configure a production or staging branch, every push to that branch will trigger a deployment.
Our build server will install the project dependencies, build your project, and deploy it to the corresponding environment.
If there are multiple consecutive pushes to a tracked branch, the later deployments will be queued until the previous deployment completes.
Alternatively, you can follow these steps on GitHub:
1. Go to your GitHub account settings
2. Navigate to **Settings** → **Applications** → **Installed GitHub Apps**
3. Click **Configure** next to `Trigger.dev App`
4. Update repository access under `Repository access`
Changes to repository access will be reflected immediately in your Trigger.dev project settings.
## Environment variables at build time
You can expose environment variables during the build and deployment process by prefixing them with `TRIGGER_BUILD_`.
In the build server, the `TRIGGER_BUILD_` prefix is stripped from the variable name, i.e., `TRIGGER_BUILD_MY_TOKEN` is exposed as `MY_TOKEN`.
Build extensions will also have access to these variables.
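The renaming behaves like this sketch (illustrative only; the build server does this for you):

```typescript theme={null}
// Illustrative: expose only TRIGGER_BUILD_-prefixed variables, with the prefix stripped.
const PREFIX = "TRIGGER_BUILD_";

function buildEnv(processEnv: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(processEnv)) {
    if (key.startsWith(PREFIX)) {
      out[key.slice(PREFIX.length)] = value;
    }
  }
  return out;
}

const exposed = buildEnv({
  TRIGGER_BUILD_MY_TOKEN: "abc123", // exposed at build time as MY_TOKEN
  DATABASE_URL: "postgres://example", // not prefixed, so not exposed at build time
});
// exposed.MY_TOKEN === "abc123"; exposed.DATABASE_URL === undefined
```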
## Example task
In this example, we'll create a workflow that generates and translates copy. This approach is particularly effective when a task decomposes into sequential steps, where each step's output can be validated before it is passed to the next.
**This task:**
* Uses `generateText` from [Vercel's AI SDK](https://sdk.vercel.ai/docs/introduction) to interact with OpenAI models
* Uses `experimental_telemetry` to provide LLM logs
* Generates marketing copy based on subject and target word count
* Validates the generated copy meets word count requirements (±10 words)
* Translates the validated copy to the target language while preserving tone
```typescript theme={null}
import { openai } from "@ai-sdk/openai";
import { task } from "@trigger.dev/sdk";
import { generateText } from "ai";
export interface TranslatePayload {
marketingSubject: string;
targetLanguage: string;
targetWordCount: number;
}
export const generateAndTranslateTask = task({
id: "generate-and-translate-copy",
maxDuration: 300, // Stop executing after 5 mins of compute
run: async (payload: TranslatePayload) => {
// Step 1: Generate marketing copy
const generatedCopy = await generateText({
model: openai("o1-mini"),
messages: [
{
role: "system",
content: "You are an expert copywriter.",
},
{
role: "user",
content: `Generate as close as possible to ${payload.targetWordCount} words of compelling marketing copy for ${payload.marketingSubject}`,
},
],
experimental_telemetry: {
isEnabled: true,
functionId: "generate-and-translate-copy",
},
});
// Gate: Validate the generated copy meets the word count target
const wordCount = generatedCopy.text.split(/\s+/).length;
if (
wordCount < payload.targetWordCount - 10 ||
wordCount > payload.targetWordCount + 10
) {
throw new Error(
`Generated copy length (${wordCount} words) is outside acceptable range of ${
payload.targetWordCount - 10
}-${payload.targetWordCount + 10} words`
);
}
// Step 2: Translate to target language
const translatedCopy = await generateText({
model: openai("o1-mini"),
messages: [
{
role: "system",
content: `You are an expert translator specializing in marketing content translation into ${payload.targetLanguage}.`,
},
{
role: "user",
content: `Translate the following marketing copy to ${payload.targetLanguage}, maintaining the same tone and marketing impact:\n\n${generatedCopy.text}`,
},
],
experimental_telemetry: {
isEnabled: true,
functionId: "generate-and-translate-copy",
},
});
return {
englishCopy: generatedCopy,
translatedCopy,
};
},
});
```
## Run a test
On the Test page in the dashboard, select the `generate-and-translate-copy` task and include a payload like the following:
```json theme={null}
{
  "marketingSubject": "The controversial new Jaguar electric concept car",
  "targetLanguage": "Spanish",
  "targetWordCount": 100
}
```
This example payload generates copy and then translates it using sequential LLM calls. The translation only begins after the generated copy has been validated against the word count requirements.
# AI agents overview
Source: https://trigger.dev/docs/guides/ai-agents/overview
Real world AI agent example tasks using Trigger.dev
## Overview
These guides will show you how to set up different types of AI agent workflows with Trigger.dev. The examples take inspiration from Anthropic's blog post on [building effective agents](https://www.anthropic.com/research/building-effective-agents).
## Example task
In this example, we'll create a workflow that simultaneously checks content for issues while responding to customer inquiries. This approach is particularly effective when tasks require multiple perspectives or parallel processing streams, with the orchestrator synthesizing the results into a cohesive output.
**This task:**
* Uses `generateText` from [Vercel's AI SDK](https://sdk.vercel.ai/docs/introduction) to interact with OpenAI models
* Uses `experimental_telemetry` to provide LLM logs
* Uses [`batch.triggerByTaskAndWait`](/triggering#batch-triggerbytaskandwait) to run customer response and content moderation tasks in parallel
* Generates customer service responses using an AI model
* Simultaneously checks for inappropriate content while generating responses
```typescript theme={null}
import { openai } from "@ai-sdk/openai";
import { batch, task } from "@trigger.dev/sdk";
import { generateText } from "ai";
// Task to generate customer response
export const generateCustomerResponse = task({
id: "generate-customer-response",
run: async (payload: { question: string }) => {
const response = await generateText({
model: openai("o1-mini"),
messages: [
{
role: "system",
content: "You are a helpful customer service representative.",
},
{ role: "user", content: payload.question },
],
experimental_telemetry: {
isEnabled: true,
functionId: "generate-customer-response",
},
});
return response.text;
},
});
// Task to check for inappropriate content
export const checkInappropriateContent = task({
id: "check-inappropriate-content",
run: async (payload: { text: string }) => {
const response = await generateText({
model: openai("o1-mini"),
messages: [
{
role: "system",
content:
"You are a content moderator. Respond with 'true' if the content is inappropriate or contains harmful, threatening, offensive, or explicit content, 'false' otherwise.",
},
{ role: "user", content: payload.text },
],
experimental_telemetry: {
isEnabled: true,
functionId: "check-inappropriate-content",
},
});
return response.text.toLowerCase().includes("true");
},
});
// Main task that coordinates the parallel execution
export const handleCustomerQuestion = task({
id: "handle-customer-question",
run: async (payload: { question: string }) => {
const {
runs: [responseRun, moderationRun],
} = await batch.triggerByTaskAndWait([
{
task: generateCustomerResponse,
payload: { question: payload.question },
},
{
task: checkInappropriateContent,
payload: { text: payload.question },
},
]);
// Check moderation result first
if (moderationRun.ok && moderationRun.output === true) {
return {
response:
"I apologize, but I cannot process this request as it contains inappropriate content.",
wasInappropriate: true,
};
}
// Return the generated response if everything is ok
if (responseRun.ok) {
return {
response: responseRun.output,
wasInappropriate: false,
};
}
// Handle any errors
throw new Error("Failed to process customer question");
},
});
```
## Run a test
On the Test page in the dashboard, select the `handle-customer-question` task and include a payload like the following:
```json theme={null}
{
"question": "Can you explain 2FA?"
}
```
When triggered with a question, the task simultaneously generates a response while checking for inappropriate content using two parallel LLM calls. The main task waits for both operations to complete before delivering the final response.
# Route a question to a different AI model
Source: https://trigger.dev/docs/guides/ai-agents/route-question
Create an AI agent workflow that routes a question to a different AI model depending on its complexity
## Overview
**Routing** is a workflow pattern that classifies an input and directs it to a specialized followup task. This pattern allows for separation of concerns and building more specialized prompts, which is particularly effective when there are distinct categories that are better handled separately. Without routing, optimizing for one kind of input can hurt performance on other inputs.
## Example task
In this example, we'll create a workflow that routes a question to a different AI model depending on its complexity. This approach is particularly effective when tasks require different models or approaches for different inputs.
**This task:**
* Uses `generateText` from [Vercel's AI SDK](https://sdk.vercel.ai/docs/introduction) to interact with OpenAI models
* Uses `experimental_telemetry` in the source verification and historical analysis tasks to provide LLM logs
* Routes questions using a lightweight model (`o1-mini`) to classify complexity
* Directs simple questions to `gpt-4o` and complex ones to `o3-mini`
* Returns both the answer and metadata about the routing decision
````typescript theme={null}
import { openai } from "@ai-sdk/openai";
import { task } from "@trigger.dev/sdk";
import { generateText } from "ai";
import { z } from "zod";
// Schema for router response
const routingSchema = z.object({
  model: z.enum(["gpt-4o", "o3-mini"]),
  reason: z.string(),
});
// Router prompt template
const ROUTER_PROMPT = `You are a routing assistant that determines the complexity of questions.
Analyze the following question and route it to the appropriate model:
- Use "gpt-4o" for simple, common, or straightforward questions
- Use "o3-mini" for complex, unusual, or questions requiring deep reasoning
Respond with a JSON object in this exact format:
{"model": "gpt-4o" or "o3-mini", "reason": "your reasoning here"}
Question: `;
export const routeAndAnswerQuestion = task({
id: "route-and-answer-question",
run: async (payload: { question: string }) => {
// Step 1: Route the question
const routingResponse = await generateText({
model: openai("o1-mini"),
messages: [
{
role: "system",
content:
"You must respond with a valid JSON object containing only 'model' and 'reason' fields. No markdown, no backticks, no explanation.",
},
{
role: "user",
content: ROUTER_PROMPT + payload.question,
},
],
temperature: 0.1,
experimental_telemetry: {
isEnabled: true,
functionId: "route-and-answer-question",
},
});
// Add error handling and cleanup
let jsonText = routingResponse.text.trim();
if (jsonText.startsWith("```")) {
jsonText = jsonText.replace(/```json\n|\n```/g, "");
}
const routingResult = routingSchema.parse(JSON.parse(jsonText));
// Step 2: Get the answer using the selected model
const answerResult = await generateText({
model: openai(routingResult.model),
messages: [{ role: "user", content: payload.question }],
});
return {
answer: answerResult.text,
selectedModel: routingResult.model,
routingReason: routingResult.reason,
};
},
});
````
## Run a test
Triggering our task with a simple question shows it routing to the gpt-4o model and returning the answer with reasoning:
```json theme={null}
{
"question": "How many planets are there in the solar system?"
}
```
# Translate text and refine it based on feedback
Source: https://trigger.dev/docs/guides/ai-agents/translate-and-refine
This guide will show you how to create a task that translates text and refines it based on feedback.
## Overview
This example is based on the **evaluator-optimizer** pattern, where one LLM generates a response while another provides evaluation and feedback in a loop. This is particularly effective for tasks with clear evaluation criteria where iterative refinement provides better results.
## Example task
This example task translates text into a target language and refines the translation over a number of iterations based on feedback provided by the LLM.
**This task:**
* Uses `generateText` from [Vercel's AI SDK](https://sdk.vercel.ai/docs/introduction) to generate the translation
* Uses `experimental_telemetry` to provide LLM logs on the Run page in the dashboard
* Runs for a maximum of 10 iterations
* Uses `generateText` again to evaluate the translation
* Recursively calls itself to refine the translation based on the feedback
```typescript theme={null}
import { task } from "@trigger.dev/sdk";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
interface TranslationPayload {
text: string;
targetLanguage: string;
previousTranslation?: string;
feedback?: string;
rejectionCount?: number;
}
export const translateAndRefine = task({
id: "translate-and-refine",
run: async (payload: TranslationPayload) => {
const rejectionCount = payload.rejectionCount || 0;
// Bail out if we've hit the maximum attempts
if (rejectionCount >= 10) {
return {
finalTranslation: payload.previousTranslation,
iterations: rejectionCount,
status: "MAX_ITERATIONS_REACHED",
};
}
// Generate translation (or refinement if we have previous feedback)
const translationPrompt = payload.feedback
? `Previous translation: "${payload.previousTranslation}"\n\nFeedback received: "${payload.feedback}"\n\nPlease provide an improved translation addressing this feedback.`
: `Translate this text into ${payload.targetLanguage}, preserving style and meaning: "${payload.text}"`;
const translation = await generateText({
model: openai("o1-mini"),
messages: [
{
role: "system",
content: `You are an expert literary translator into ${payload.targetLanguage}.
Focus on accuracy first, then style and natural flow.`,
},
{
role: "user",
content: translationPrompt,
},
],
experimental_telemetry: {
isEnabled: true,
functionId: "translate-and-refine",
},
});
// Evaluate the translation
const evaluation = await generateText({
model: openai("o1-mini"),
messages: [
{
role: "system",
content: `You are an expert literary critic and translator focused on practical, high-quality translations.
Your goal is to ensure translations are accurate and natural, but not necessarily perfect.
This is iteration ${rejectionCount + 1} of a maximum of 10 iterations.
RESPONSE FORMAT:
- If the translation meets 90%+ quality: Respond with exactly "APPROVED" (nothing else)
- If improvements are needed: Provide only the specific issues that must be fixed
Evaluation criteria:
- Accuracy of meaning (primary importance)
- Natural flow in the target language
- Preservation of key style elements
DO NOT provide detailed analysis, suggestions, or compliments.
DO NOT include the translation in your response.
IMPORTANT RULES:
- First iteration MUST receive feedback for improvement
- Be very strict on accuracy in early iterations
- After 3 iterations, lower quality threshold to 85%`,
},
{
role: "user",
content: `Original: "${payload.text}"
Translation: "${translation.text}"
Target Language: ${payload.targetLanguage}
Iteration: ${rejectionCount + 1}
Previous Feedback: ${
payload.feedback ? `"${payload.feedback}"` : "None"
}
${
rejectionCount === 0
? "This is the first attempt. Find aspects to improve."
: 'Either respond with exactly "APPROVED" or provide only critical issues that must be fixed.'
}`,
},
],
experimental_telemetry: {
isEnabled: true,
functionId: "translate-and-refine",
},
});
// If approved, return the final result
if (evaluation.text.trim() === "APPROVED") {
return {
finalTranslation: translation.text,
iterations: rejectionCount,
status: "APPROVED",
};
}
// If not approved, recursively call the task with feedback
return await translateAndRefine
.triggerAndWait({
text: payload.text,
targetLanguage: payload.targetLanguage,
previousTranslation: translation.text,
feedback: evaluation.text,
rejectionCount: rejectionCount + 1,
})
.unwrap();
},
});
```
## Run a test
On the Test page in the dashboard, select the `translate-and-refine` task and include a payload like the following:
```json theme={null}
{
"text": "In the twilight of his years, the old clockmaker's hands, once steady as the timepieces he crafted, now trembled like autumn leaves in the wind.",
"targetLanguage": "French"
}
```
This example payload translates the text into French and should be suitably difficult to require a few iterations, depending on the model used and the prompt criteria you set.
# Verify a news article
Source: https://trigger.dev/docs/guides/ai-agents/verify-news-article
Create an AI agent workflow that verifies the facts in a news article
## Overview
This example demonstrates the **orchestrator-workers** pattern, where a central AI agent dynamically breaks down complex tasks and delegates them to specialized worker agents. This pattern is particularly effective when tasks require multiple perspectives or parallel processing streams, with the orchestrator synthesizing the results into a cohesive output.
## Example task
Our example task uses multiple LLM calls to extract claims from a news article and analyze them in parallel, combining source verification and historical context to assess their credibility.
**This task:**
* Uses `generateText` from [Vercel's AI SDK](https://sdk.vercel.ai/docs/introduction) to interact with OpenAI models
* Uses `experimental_telemetry` to provide LLM logs
* Uses [`batch.triggerByTaskAndWait`](/triggering#batch-triggerbytaskandwait) to orchestrate parallel processing of claims
* Extracts factual claims from news articles using the `o1-mini` model
* Evaluates claims against recent sources and analyzes historical context in parallel
* Combines results into a structured analysis report
```typescript theme={null}
import { openai } from "@ai-sdk/openai";
import { batch, logger, task } from "@trigger.dev/sdk";
import { CoreMessage, generateText } from "ai";
// Define types for our workers' outputs
interface Claim {
id: number;
text: string;
}
interface SourceVerification {
claimId: number;
isVerified: boolean;
confidence: number;
explanation: string;
}
interface HistoricalAnalysis {
claimId: number;
feasibility: number;
historicalContext: string;
}
// Worker 1: Claim Extractor
export const extractClaims = task({
id: "extract-claims",
run: async ({ article }: { article: string }) => {
try {
const messages: CoreMessage[] = [
{
role: "system",
content:
"Extract distinct factual claims from the news article. Format as numbered claims.",
},
{
role: "user",
content: article,
},
];
const response = await generateText({
model: openai("o1-mini"),
messages,
});
const claims = response.text
.split("\n")
.filter((line: string) => line.trim())
.map((claim: string, index: number) => ({
id: index + 1,
text: claim.replace(/^\d+\.\s*/, ""),
}));
logger.info("Extracted claims", { claimCount: claims.length });
return claims;
} catch (error) {
logger.error("Error in claim extraction", {
error: error instanceof Error ? error.message : "Unknown error",
});
throw error;
}
},
});
// Worker 2: Source Verifier
export const verifySource = task({
id: "verify-source",
run: async (claim: Claim) => {
const response = await generateText({
model: openai("o1-mini"),
messages: [
{
role: "system",
content:
"Verify this claim by considering recent news sources and official statements. Assess reliability.",
},
{
role: "user",
content: claim.text,
},
],
experimental_telemetry: {
isEnabled: true,
functionId: "verify-source",
},
});
return {
claimId: claim.id,
isVerified: false,
confidence: 0.7,
explanation: response.text,
};
},
});
// Worker 3: Historical Context Analyzer
export const analyzeHistory = task({
id: "analyze-history",
run: async (claim: Claim) => {
const response = await generateText({
model: openai("o1-mini"),
messages: [
{
role: "system",
content:
"Analyze this claim in historical context, considering past announcements and technological feasibility.",
},
{
role: "user",
content: claim.text,
},
],
experimental_telemetry: {
isEnabled: true,
functionId: "analyze-history",
},
});
return {
claimId: claim.id,
feasibility: 0.8,
historicalContext: response.text,
};
},
});
// Orchestrator
export const newsFactChecker = task({
id: "news-fact-checker",
run: async ({ article }: { article: string }) => {
// Step 1: Extract claims
const claimsResult = await batch.triggerByTaskAndWait([
{ task: extractClaims, payload: { article } },
]);
if (!claimsResult.runs[0].ok) {
logger.error("Failed to extract claims", {
error: claimsResult.runs[0].error,
runId: claimsResult.runs[0].id,
});
throw new Error(
`Failed to extract claims: ${claimsResult.runs[0].error}`
);
}
const claims = claimsResult.runs[0].output;
// Step 2: Process claims in parallel
const parallelResults = await batch.triggerByTaskAndWait([
...claims.map((claim) => ({ task: verifySource, payload: claim })),
...claims.map((claim) => ({ task: analyzeHistory, payload: claim })),
]);
// Split and process results
const verifications = parallelResults.runs
.filter(
(run): run is typeof run & { ok: true } =>
run.ok && run.taskIdentifier === "verify-source"
)
.map((run) => run.output as SourceVerification);
const historicalAnalyses = parallelResults.runs
.filter(
(run): run is typeof run & { ok: true } =>
run.ok && run.taskIdentifier === "analyze-history"
)
.map((run) => run.output as HistoricalAnalysis);
return { claims, verifications, historicalAnalyses };
},
});
```
## Run a test
On the Test page in the dashboard, select the `news-fact-checker` task and include a payload like the following:
```json theme={null}
{
"article": "Tesla announced a new breakthrough in battery technology today. The company claims their new batteries will have 50% more capacity and cost 30% less to produce. Elon Musk stated this development will enable electric vehicles to achieve price parity with gasoline cars by 2024. The new batteries are scheduled to enter production next quarter at the Texas Gigafactory."
}
```
This example payload verifies the claims in the news article and provides a report on the results.
# dotenvx
Source: https://trigger.dev/docs/guides/community/dotenvx
A dotenvx package for Trigger.dev.
This is a community developed package from [dotenvx](https://dotenvx.com/) that enables you to use dotenvx with Trigger.dev.
[View the docs](https://dotenvx.com/docs/background-jobs/triggerdotdev)
# Fatima
Source: https://trigger.dev/docs/guides/community/fatima
A Fatima package for Trigger.dev.
This is a community developed package from [@Fgc17](https://github.com/Fgc17) that enables you to use Fatima with Trigger.dev.
[View the Fatima docs](https://fatimajs.vercel.app/docs/adapters/trigger)
[View the repo](https://github.com/Fgc17/fatima)
# Rate limiter
Source: https://trigger.dev/docs/guides/community/rate-limiter
A rate limiter for Trigger.dev.
This is a community developed package from [@ian](https://github.com/ian) that uses Redis to rate limit Trigger.dev tasks.
[View the repo](https://github.com/ian/trigger-rate-limiting)
# SvelteKit setup guide
Source: https://trigger.dev/docs/guides/community/sveltekit
A plugin for SvelteKit to integrate with Trigger.dev.
export const framework_0 = "SvelteKit"
This is a community developed Vite plugin from [@cptCrunch\_](https://x.com/cptCrunch_) that enables seamless integration between SvelteKit and Trigger.dev by allowing you to use your SvelteKit functions directly in your Trigger.dev projects.
## Features
* Use SvelteKit functions directly in Trigger.dev tasks
* Automatic function discovery and export
* TypeScript support with type preservation
* Works with Trigger.dev V3
* Configurable directory scanning
## Prerequisites
* Setup a project in {framework_0}
* Ensure TypeScript is installed
* [Create a Trigger.dev account](https://cloud.trigger.dev)
* Create a new Trigger.dev project
## Setup
[View setup guide on npm](https://www.npmjs.com/package/triggerkit)
```bash theme={null}
npm i triggerkit
```
## Relevant code
* **Meme generator task**:
* The [memegenerator.ts](https://github.com/triggerdotdev/examples/blob/main/meme-generator-human-in-the-loop/src/trigger/memegenerator.ts) task:
* Generates two meme variants using DALL-E 3
* Uses [batchTriggerAndWait](/triggering#yourtask-batchtriggerandwait) to generate multiple meme variants simultaneously (this is because you can only generate 1 image at a time with DALL-E 3)
* Creates a [waitpoint token](/wait-for-token)
* Sends the generated images with approval buttons to Slack for review
* Handles the approval workflow
* **Approval Endpoint**:
* The waitpoint approval handling is in [page.tsx](https://github.com/triggerdotdev/examples/blob/main/meme-generator-human-in-the-loop/src/app/endpoints/\[slug]/page.tsx), which processes:
* User selections from Slack buttons
* Waitpoint completion with the chosen meme variant
* Success/failure feedback to the approver
## Learn more
To learn more, take a look at the following resources:
* [Waitpoint tokens](/wait-for-token) - learn about waitpoint tokens in Trigger.dev and human-in-the-loop flows
* [OpenAI DALL-E API](https://platform.openai.com/docs/guides/images) - learn about the DALL-E image generation API
* [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API
* [Slack Incoming Webhooks](https://api.slack.com/messaging/webhooks) - learn about integrating with Slack
# OpenAI Agents SDK for Python guardrails
Source: https://trigger.dev/docs/guides/example-projects/openai-agent-sdk-guardrails
This example project demonstrates how to implement different types of guardrails using the OpenAI Agent SDK for Python with Trigger.dev.
## Overview
This project is a practical guide demonstrating:
* **Three types of AI guardrails**: Input validation, output checking, and real-time streaming monitoring
* Integration of the [OpenAI Agent SDK for Python](https://openai.github.io/openai-agents-python/) with [Trigger.dev](https://trigger.dev) for production AI workflows
* Triggering Python scripts from tasks using our [Python build extension](/config/extensions/pythonExtension)
* **Educational examples** of implementing guardrails for AI safety and control mechanisms
* Real-world scenarios like math tutoring agents with content validation and complexity monitoring
Guardrails are safety mechanisms that run alongside AI agents to validate input, check output, monitor streaming content in real-time, and prevent unwanted or harmful behavior.
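The three guardrail types can be sketched in a few lines of plain TypeScript. This is an illustration of the concept only, not the OpenAI Agent SDK API, and every name below is made up for the example:

```typescript
// Hypothetical guardrail helpers, for illustration only.
type GuardrailResult = { allowed: boolean; reason?: string };

// Input guardrail: reject prompts outside the agent's scope (here, math tutoring).
function inputGuardrail(prompt: string): GuardrailResult {
  const onTopic = /\d|math|algebra|equation/i.test(prompt);
  return onTopic
    ? { allowed: true }
    : { allowed: false, reason: "Prompt is not a math question" };
}

// Output guardrail: block answers containing a banned phrase.
function outputGuardrail(answer: string): GuardrailResult {
  return answer.includes("SECRET")
    ? { allowed: false, reason: "Output contains blocked content" }
    : { allowed: true };
}

// Streaming guardrail: check each chunk as it arrives and stop early on a violation.
function* streamWithGuardrail(chunks: string[]): Generator<string> {
  for (const chunk of chunks) {
    if (!outputGuardrail(chunk).allowed) return;
    yield chunk;
  }
}
```

The real SDK wires these checks into the agent loop for you; the sketch just shows the three interception points: before the model runs, after it returns, and per-chunk while streaming.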
## GitHub repo
Hi {name},
{message}
This is just a simple implementation, you can customize the email to be as complex as you want. Check out the [React email templates](https://react.email/templates) for more inspiration.
## Testing your task
To test this task in the [dashboard](https://cloud.trigger.dev), you can use the following payload:
```json theme={null}
{
"to": "recipient@example.com",
"name": "Jane Doe",
"message": "Thank you for signing up for our service!",
"subject": "Welcome to Acme Inc."
}
```
## Deploying your task
Deploy the task to production using the Trigger.dev CLI `deploy` command.
## Using Cursor / AI to build your emails
In this video you can see how we use Cursor to build a welcome email.
We recommend using our [Cursor rules](https://trigger.dev/changelog/cursor-rules-writing-tasks/) to help you build your tasks and emails.
#### Video: creating a new email template using Cursor
#### The generated email template
#### The generated code
```tsx emails/trigger-welcome-email.tsx theme={null}
import {
Body,
Button,
Container,
Head,
Heading,
Hr,
Html,
Img,
Link,
Preview,
Section,
Text,
} from "@react-email/components";
const baseUrl = process.env.VERCEL_URL ? `https://${process.env.VERCEL_URL}` : "";
export interface TriggerWelcomeEmailProps {
name: string;
}
export const TriggerWelcomeEmail = ({ name }: TriggerWelcomeEmailProps) => (
Hello ${payload.name},

Welcome to Trigger.dev
`,
        });

        if (error) {
          // Throwing an error will trigger a retry of this block
          throw error;
        }

        return data;
      },
      { maxAttempts: 3 }
    );

    // Then wait 3 days
    await wait.for({ days: 3 });

    // Send the second email
    const secondEmailResult = await retry.onThrow(
      async ({ attempt }) => {
        const { data, error } = await resend.emails.send({
          from: "hello@trigger.dev",
          to: payload.email,
          subject: "Some tips for you",
          html: `Hello ${payload.name},

Here are some tips for you…
`,
        });

        if (error) {
          // Throwing an error will trigger a retry of this block
          throw error;
        }

        return data;
      },
      { maxAttempts: 3 }
    );

    //etc...
  },
});
```

## Testing your task

To test this task in the dashboard, you can use the following payload:

```json theme={null}
{
  "userId": "123",
  "email": "
```
## Testing your task
To test this task in the [dashboard](https://cloud.trigger.dev), you can use the following payload:
```json theme={null}
{
"title": "My Awesome OG image",
"imageUrl": "
If you go back to your terminal you'll see that the dev command also shows the task status and links to the run log.
You can add values for your local dev environment, staging and prod. In this case we will add the `DATABASE_URL` for the production environment.
For more information on authenticating with Trigger.dev, see the [API keys page](/apikeys).
## Triggering your task in Next.js
Here are the steps to trigger your task in the Next.js App and Pages router and Server Actions. Alternatively, check out this repo for a [full working example](https://github.com/triggerdotdev/example-projects/tree/main/nextjs/server-actions/my-app) of a Next.js app with a Trigger.dev task triggered using a Server Action.
Visit the [Trigger.dev dashboard](https://cloud.trigger.dev) to see your run.
You can add values for your local dev environment, staging and prod.
You can also add environment variables in code by following the steps on the [Environment Variables page](/deploy-environment-variables#in-your-code).
## Deploying your task to Trigger.dev
For this guide, we'll manually deploy your task by running the [CLI deploy command](/cli-deploy) below. Other ways to deploy are listed in the next section.
If you go back to your terminal you'll see that the dev command also shows the task status and links to the run log.
For more information on authenticating with Trigger.dev, see the [API keys page](/apikeys).
## Triggering your task in Remix
Visit the [Trigger.dev dashboard](https://cloud.trigger.dev) to see your run.
You can add values for your local dev environment, staging and prod.
You can also add environment variables in code by following the steps on the [Environment Variables page](/deploy-environment-variables#in-your-code).
## Deploying your task to Trigger.dev
For this guide, we'll manually deploy your task by running the [CLI deploy command](/cli-deploy) below. Other ways to deploy are listed in the next section.
In this guide, you'll learn how to use Sequin to trigger Trigger.dev tasks from database changes.
## Prerequisites
You are about to create a [regular Trigger.dev task](/tasks-regular) that you will execute whenever a post is inserted or updated in your database. Sequin will detect all the changes on the `posts` table and then send the payload of the post to an API endpoint that will call `tasks.trigger()` to create the embedding and update the database.
As long as you create an HTTP endpoint that Sequin can deliver webhooks to, you can use any web framework or edge function (e.g. Supabase Edge Functions, Vercel Functions, Cloudflare Workers, etc.) to invoke your Trigger.dev task. In this guide, we'll show you how to setup Trigger.dev tasks using Next.js API Routes.
You'll need the following to follow this guide:
* A Next.js project with [Trigger.dev](https://trigger.dev) installed
5. On the next screen, select **Push** to have Sequin send the events to your webhook URL. Click **Continue**.
6. Now, give your consumer a name (i.e. `posts_push_consumer`) and in the **HTTP Endpoint** section select the `local_endpoint` you created above. Add the exact API route you created in the previous step (i.e. `/api/create-embedding-for-post`):
7. Click the **Create Consumer** button.
If you go back to your terminal you'll see that the dev command also shows the task status and links to the run log.
Then, in [Supabase](https://supabase.com/dashboard/projects), select your project, navigate to 'Project settings'
## Deploy your task and trigger it from your edge function
The task will be triggered when your edge function URL is accessed.
Check your [cloud.trigger.dev](http://cloud.trigger.dev) dashboard and you should see a successful `hello-world` task.
**Congratulations, you have run a simple Hello World task from a Supabase edge function!**
Call your table `video_transcriptions`.
## Create and deploy the Trigger.dev task
### Generate the Database type definitions
To allow you to use TypeScript to interact with your table, you need to [generate the type definitions](https://supabase.com/docs/guides/api/rest/generating-types) for your Supabase table using the Supabase CLI.
```bash theme={null}
supabase gen types --lang=typescript --project-id
```
### Deploying your task
You can now deploy your task using the following command:
Then, in [Supabase](https://supabase.com/dashboard/projects), select the project you want to use, navigate to 'Project settings'
### Create a new Edge Function using the Supabase CLI
Now create an Edge Function using the Supabase CLI. Call it `video-processing-handler`. This function will be triggered by the Database Webhook.
```bash theme={null}
supabase functions new video-processing-handler
```
```ts functions/video-processing-handler/index.ts theme={null}
// Setup type definitions for built-in Supabase Runtime APIs
import "jsr:@supabase/functions-js/edge-runtime.d.ts";
import { tasks } from "npm:@trigger.dev/sdk@latest";
// Import the videoProcessAndUpdate task from the trigger folder
import type { videoProcessAndUpdate } from "../../../src/trigger/videoProcessAndUpdate.ts";
// 👆 type only import
// Sets up a Deno server that listens for incoming JSON requests
Deno.serve(async (req) => {
const payload = await req.json();
// This payload will contain the video url and id from the new row in the table
const videoUrl = payload.record.video_url;
const id = payload.record.id;
// Trigger the videoProcessAndUpdate task with the videoUrl payload
await tasks.trigger
```
Then, go to 'Database'
Your Database Webhook is now ready to use.
## Triggering the entire workflow
Your `video-processing-handler` Edge Function is now set up to trigger the `videoProcessAndUpdate` task every time a new row is inserted into your `video_transcriptions` table.
To do this, go back to your Supabase project dashboard, click on 'Table Editor'
Add a new item under `video_url`, with a public video url.
Once the new table row has been inserted, check your [cloud.trigger.dev](http://cloud.trigger.dev) project 'Runs' list
Once the run has completed successfully, go back to your Supabase `video_transcriptions` table, and you should see that in the row containing the original video URL, the transcription has now been added to the `transcription` column.
**Congratulations! You have completed the full workflow from Supabase to Trigger.dev and back again.**
## Learn more about Supabase and Trigger.dev
### Full walkthrough guides from development to deployment
Because we use standard OpenTelemetry, you can instrument your code and OpenTelemetry compatible libraries to get detailed traces and logs of your tasks. The above trace instruments both Prisma and the AWS SDK:
```ts trigger.config.ts theme={null}
import { defineConfig } from "@trigger.dev/sdk";
import { PrismaInstrumentation } from "@prisma/instrumentation";
import { AwsInstrumentation } from "@opentelemetry/instrumentation-aws-sdk";
export default defineConfig({
project: "
You can view your usage page by clicking the "Organization" menu in the top left of the dashboard and then clicking "Usage".
## Create billing alerts
Configure billing alerts in your dashboard to get notified when you approach spending thresholds. This helps you:
* Catch unexpected cost increases early
* Identify runaway tasks before they become expensive
You can view your billing alerts page by clicking the "Organization" menu in the top left of the dashboard and then clicking "Settings".
## Reduce your machine sizes
The larger the machine, the more it costs per second. [View the machine pricing](https://trigger.dev/pricing#computePricing).
Start with the smallest machine that works, then scale up only if needed:
```ts theme={null}
// Default: small-1x (0.5 vCPU, 0.5 GB RAM)
export const lightTask = task({
id: "light-task",
// No machine config needed - uses small-1x by default
run: async (payload) => {
// Simple operations
},
});
// Only use larger machines when necessary
export const heavyTask = task({
id: "heavy-task",
machine: "medium-1x", // 1 vCPU, 2 GB RAM
run: async (payload) => {
// CPU/memory intensive operations
},
});
```
You can also override machine size when triggering if you know certain payloads need more resources. [Read more about machine sizes](/machines).
## Avoid duplicate work using idempotencyKey
Idempotency keys prevent expensive duplicate work by ensuring the same operation isn't performed multiple times. This is especially valuable during task retries or when the same trigger might fire multiple times.
When you use an idempotency key, Trigger.dev remembers the result and skips re-execution, saving you compute costs:
```ts theme={null}
export const expensiveApiCall = task({
id: "expensive-api-call",
run: async (payload: { userId: string }) => {
// This expensive operation will only run once per user
await wait.for(
{ seconds: 30 },
{
idempotencyKey: `user-processing-${payload.userId}`,
idempotencyKeyTTL: "1h",
}
);
const result = await processUserData(payload.userId);
return result;
},
});
```
You can use idempotency keys with various wait functions:
```ts theme={null}
// Skip waits during retries
const token = await wait.createToken({
idempotencyKey: `daily-report-${new Date().toDateString()}`,
idempotencyKeyTTL: "24h",
});
// Prevent duplicate child task execution
await childTask.triggerAndWait(
{ data: payload },
{
idempotencyKey: `process-${payload.id}`,
idempotencyKeyTTL: "1h",
}
);
```
The `idempotencyKeyTTL` controls how long the result is cached. Use shorter TTLs (like "1h") for time-sensitive operations, or longer ones (up to 30 days default) for expensive operations that rarely need re-execution. This prevents both unnecessary duplicate work and stale data issues.
## Do more work in parallel in a single task
Sometimes it's more efficient to do more work in a single task than split across many. This is particularly true when you're doing lots of async work such as API calls – most of the time is spent waiting, so it's an ideal candidate for doing calls in parallel inside the same task.
```ts theme={null}
export const processItems = task({
id: "process-items",
run: async (payload: { items: string[] }) => {
// Process all items in parallel
const promises = payload.items.map((item) => processItem(item));
// This works very well for API calls
await Promise.all(promises);
},
});
```
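If some of those parallel calls are allowed to fail without failing the whole run, `Promise.allSettled` lets you collect successes and failures in a single pass. A generic sketch, not tied to any particular API (`processItem` here is a hypothetical stand-in for your real call):

```typescript
async function processAll(items: string[]) {
  // Hypothetical per-item processor; replace with your real API call.
  const processItem = async (item: string) => {
    if (item === "bad") throw new Error(`failed: ${item}`);
    return item.toUpperCase();
  };

  // Unlike Promise.all, allSettled never rejects: every result is
  // reported as either fulfilled or rejected.
  const results = await Promise.allSettled(items.map(processItem));

  const succeeded = results
    .filter((r): r is PromiseFulfilledResult<string> => r.status === "fulfilled")
    .map((r) => r.value);
  const failed = results.filter((r) => r.status === "rejected").length;

  return { succeeded, failed };
}
```

Use `Promise.all` when any failure should fail the run (and trigger your retry settings), and `allSettled` when partial results are acceptable.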
## Don't needlessly retry
When an error is thrown in a task, your run will be automatically reattempted based on your [retry settings](/tasks/overview#retry-options).
Try setting lower `maxAttempts` for less critical tasks:
```ts theme={null}
export const apiTask = task({
id: "api-task",
retry: {
maxAttempts: 2, // Don't retry forever
},
run: async (payload) => {
// API calls that might fail
},
});
```
This is very useful for intermittent errors, but if there's a permanent error you don't want to retry because you will just keep failing and waste compute. Use [AbortTaskRunError](/errors-retrying#using-aborttaskrunerror) to prevent a retry:
```ts theme={null}
import { task, AbortTaskRunError } from "@trigger.dev/sdk";
export const someTask = task({
id: "some-task",
run: async (payload) => {
const result = await doSomething(payload);
if (!result.success) {
// This is a known permanent error, so don't retry
throw new AbortTaskRunError(result.error);
}
return result;
},
});
```
## Use appropriate maxDuration settings
Set realistic maxDurations to prevent runs from executing for too long:
```ts theme={null}
export const boundedTask = task({
id: "bounded-task",
maxDuration: 300, // 5 minutes max
run: async (payload) => {
// Task will be terminated after 5 minutes
},
});
```
# Idempotency
Source: https://trigger.dev/docs/idempotency
An API call or operation is “idempotent” if it has the same result when called more than once.
We currently support idempotency at the task level, meaning that if you trigger a task with the same `idempotencyKey` twice, the second request will not create a new task run.
## `idempotencyKey` option
You can provide an `idempotencyKey` to ensure that a task is only triggered once with the same key. This is useful if you are triggering a task within another task that might be retried:
```ts theme={null}
import { idempotencyKeys, task } from "@trigger.dev/sdk";
export const myTask = task({
id: "my-task",
retry: {
maxAttempts: 4,
},
run: async (payload: any) => {
// This idempotency key will be unique to this task run, meaning the childTask will only be triggered once across all retries
const idempotencyKey = await idempotencyKeys.create("my-task-key");
// childTask will only be triggered once with the same idempotency key
await childTask.trigger({ foo: "bar" }, { idempotencyKey });
// Do something else, that may throw an error and cause the task to be retried
throw new Error("Something went wrong");
},
});
```
You can use the `idempotencyKeys.create` SDK function to create an idempotency key before passing it to the `options` object.
By default, when running inside a task, we automatically inject the run ID into the generated idempotency key. You can turn this off by passing the `scope` option to `idempotencyKeys.create`:
```ts theme={null}
import { idempotencyKeys, task } from "@trigger.dev/sdk";
export const myTask = task({
id: "my-task",
retry: {
maxAttempts: 4,
},
run: async (payload: any) => {
// This idempotency key will be globally unique, meaning only a single task run will be triggered with this key
const idempotencyKey = await idempotencyKeys.create("my-task-key", { scope: "global" });
// childTask will only be triggered once with the same idempotency key
await childTask.trigger({ foo: "bar" }, { idempotencyKey });
},
});
```
If you are triggering a task from your backend code, you can use the `idempotencyKeys.create` SDK function to create an idempotency key.
```ts theme={null}
import { idempotencyKeys, tasks } from "@trigger.dev/sdk";
// You can also pass an array of strings to create an idempotency key
const idempotencyKey = await idempotencyKeys.create([myUser.id, "my-task"]);
await tasks.trigger("my-task", { some: "data" }, { idempotencyKey });
```
You can also pass a string to the `idempotencyKey` option, without first creating it with `idempotencyKeys.create`.
```ts theme={null}
import { myTask } from "./trigger/myTasks";
// You can pass a plain string directly as the idempotency key
await myTask.trigger({ some: "data" }, { idempotencyKey: myUser.id });
```
By default idempotency keys are stored for 30 days. You can change this by passing the `idempotencyKeyTTL` option when triggering a task:
```ts theme={null}
import { idempotencyKeys, task, wait } from "@trigger.dev/sdk";
export const myTask = task({
id: "my-task",
retry: {
maxAttempts: 4,
},
run: async (payload: any) => {
const idempotencyKey = await idempotencyKeys.create("my-task-key");
// The idempotency key will expire after 60 seconds
await childTask.trigger({ foo: "bar" }, { idempotencyKey, idempotencyKeyTTL: "60s" });
await wait.for({ seconds: 61 });
// The idempotency key will have expired, so the childTask will be triggered again
await childTask.trigger({ foo: "bar" }, { idempotencyKey });
// Do something else, that may throw an error and cause the task to be retried
throw new Error("Something went wrong");
},
});
```
You can use the following units for the `idempotencyKeyTTL` option:
* `s` for seconds (e.g. `60s`)
* `m` for minutes (e.g. `5m`)
* `h` for hours (e.g. `2h`)
* `d` for days (e.g. `3d`)
## Payload-based idempotency
We don't currently support payload-based idempotency, but you can implement it yourself by hashing the payload and using the hash as the idempotency key.
```ts theme={null}
import { idempotencyKeys, tasks } from "@trigger.dev/sdk";
import { createHash } from "node:crypto";
// Somewhere in your code
const idempotencyKey = await idempotencyKeys.create(hash(childPayload));
// childTask will only be triggered once with the same idempotency key
await tasks.trigger("child-task", { some: "payload" }, { idempotencyKey });
// Create a hash of the payload using Node.js crypto
// Ideally, you'd do a stable serialization of the payload before hashing, to ensure the same payload always results in the same hash
function hash(payload: any): string {
const hash = createHash("sha256");
hash.update(JSON.stringify(payload));
return hash.digest("hex");
}
```
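The stable serialization mentioned in the comment above can be as simple as recursively sorting object keys before stringifying, so that `{ a: 1, b: 2 }` and `{ b: 2, a: 1 }` produce the same hash. A minimal sketch (it handles plain objects, arrays and JSON primitives, but not cycles or `undefined` values):

```typescript
import { createHash } from "node:crypto";

// Stable stringify: sorts object keys at every level so that key
// insertion order doesn't change the resulting string.
function stableStringify(value: unknown): string {
  if (Array.isArray(value)) {
    return `[${value.map(stableStringify).join(",")}]`;
  }
  if (value !== null && typeof value === "object") {
    const entries = Object.entries(value as Record<string, unknown>)
      .sort(([a], [b]) => a.localeCompare(b))
      .map(([k, v]) => `${JSON.stringify(k)}:${stableStringify(v)}`);
    return `{${entries.join(",")}}`;
  }
  return JSON.stringify(value);
}

function stableHash(payload: unknown): string {
  return createHash("sha256").update(stableStringify(payload)).digest("hex");
}
```

You can then pass `stableHash(childPayload)` to `idempotencyKeys.create` in place of the plain `JSON.stringify` hash shown above.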
## Important notes
Idempotency keys, even the ones scoped globally, are actually scoped to the task and the environment. This means that your keys cannot collide with keys from other environments (e.g. dev will never collide with prod), other projects, or other orgs.
If you use the same idempotency key for triggering different tasks, the tasks will not be idempotent, and both tasks will be triggered. There's currently no way to make multiple tasks idempotent with the same key.
# Welcome to the Trigger.dev docs
Source: https://trigger.dev/docs/introduction
Find all the resources and guides you need to get started
The run log shows you exactly what happened in every run of your tasks. It is comprised of logs, traces and spans.
## Logs
You can use `console.log()`, `console.error()`, etc as normal and they will be shown in your run log. This is the standard function so you can use it as you would in any other JavaScript or TypeScript code. Logs from any functions/packages will also be shown.
### logger
We recommend that you use our `logger` object which creates structured logs. Structured logs will make it easier for you to search the logs to quickly find runs.
```ts /trigger/logging.ts theme={null}
import { task, logger } from "@trigger.dev/sdk";
export const loggingExample = task({
id: "logging-example",
run: async (payload: { data: Record
```
You can [add instrumentations](/config/config-file#instrumentations). The Prisma one above will automatically trace all Prisma queries.
### Add custom traces
If you want to add custom traces to your code, you can use the `logger.trace` function. It will create a new OTEL trace and you can set attributes on it.
```ts theme={null}
import { logger, task } from "@trigger.dev/sdk";
export const customTrace = task({
id: "custom-trace",
run: async (payload) => {
//you can wrap code in a trace, and set attributes
const user = await logger.trace("fetch-user", async (span) => {
span.setAttribute("user.id", "1");
//...do stuff
//you can return a value
return {
id: "1",
name: "John Doe",
fetchedAt: new Date(),
};
});
const usersName = user.name;
},
});
```
# Machines
Source: https://trigger.dev/docs/machines
Configure the number of vCPUs and GBs of RAM you want the task to use.
The `machine` configuration is optional. Using higher spec machines will increase the cost of running the task but can also improve the performance of the task if it is CPU or memory bound.
```ts /trigger/heavy-task.ts theme={null}
import { task } from "@trigger.dev/sdk";
export const heavyTask = task({
id: "heavy-task",
machine: "large-1x",
run: async ({ payload, ctx }) => {
//...
},
});
```
The default machine is `small-1x` which has 0.5 vCPU and 0.5 GB of RAM. You can change the default machine in your `trigger.config.ts` file:
```ts trigger.config.ts theme={null}
import type { TriggerConfig } from "@trigger.dev/sdk";
export const config: TriggerConfig = {
machine: "small-2x",
// ... other config
};
```
## Machine configurations
| Preset | vCPU | Memory (GB) | Disk space |
| :----------------- | :--- | :---------- | :--------- |
| micro | 0.25 | 0.25 | 10GB |
| small-1x (default) | 0.5 | 0.5 | 10GB |
| small-2x | 1 | 1 | 10GB |
| medium-1x | 1 | 2 | 10GB |
| medium-2x | 2 | 4 | 10GB |
| large-1x | 4 | 8 | 10GB |
| large-2x | 8 | 16 | 10GB |
You can view the Trigger.dev cloud pricing for these machines [here](https://trigger.dev/pricing#computePricing).
## Overriding the machine when triggering
You can also override the task machine when you [trigger](/triggering) it:
```ts theme={null}
await tasks.trigger
```
If you are spawning a child process and you want to monitor its memory usage, you can pass the `processName` option to the `ResourceMonitor` class:
```ts /src/trigger/example.ts theme={null}
const resourceMonitor = new ResourceMonitor({
ctx,
processName: "ffmpeg",
});
```
This will produce logs that includes the memory and CPU usage of the `ffmpeg` process:
### Explicit OOM errors
You can explicitly throw an Out Of Memory error in your task. This can be useful if you use a native package that detects it's about to run out of memory and stops before it does; if you can detect this condition, you can then throw this error yourself.
```ts /trigger/heavy-task.ts theme={null}
import { task, OutOfMemoryError } from "@trigger.dev/sdk";
export const yourTask = task({
id: "your-task",
machine: "medium-1x",
run: async (payload: any, { ctx }) => {
//...
throw new OutOfMemoryError();
},
});
```
If OOM errors happen regularly, you need to either optimize the memory-efficiency of your code or increase the machine size.
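One common way to improve memory-efficiency is to work through large datasets in fixed-size batches instead of materializing everything at once, so peak memory is bounded by the batch size rather than the total input. A generic sketch (`handleBatch` is a hypothetical callback standing in for your real per-batch work):

```typescript
// Process items in fixed-size batches so peak memory stays bounded
// by the batch size rather than the total dataset size.
async function processInBatches<T, R>(
  items: T[],
  batchSize: number,
  handleBatch: (batch: T[]) => Promise<R[]>
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    // Only one batch's worth of intermediate data is alive at a time.
    results.push(...(await handleBatch(batch)));
  }
  return results;
}
```

For truly large inputs you'd stream from the source rather than hold `items` in an array at all, but the batching pattern alone is often enough to keep a task on a smaller machine.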
### Retrying with a larger machine
If you are seeing rare OOM errors, it might make sense to add a setting to your task to retry with a larger machine when an OOM error happens:
```ts /trigger/heavy-task.ts theme={null}
import { task } from "@trigger.dev/sdk";
export const yourTask = task({
id: "your-task",
machine: "medium-1x",
retry: {
outOfMemory: {
machine: "large-1x",
},
},
run: async (payload: any, { ctx }) => {
//...
},
});
```
Status: {run.status}
Progress: {run.completedAt ? "Complete" : "Running..."}
### Usage
Activate the subagent in your prompts by requesting it explicitly:
```markdown theme={null}
use the trigger-dev-expert subagent to create a trigger.dev job that accepts a video url, processes it with ffmpeg to extract the audio, runs the audio through a text-to-speech API like openai, and then uploads both the transcription and the audio to s3
```
The subagent works best when combined with the appropriate rule sets installed alongside it, providing both high-level architectural guidance and detailed implementation knowledge.
## Supported AI clients
The Trigger.dev rules work across a wide range of AI coding assistants and editors:
| Client | Rule activation | Docs |
| :------------------ | :------------------------------------------------------- | :---------------------------------------------------------------- |
| **Cursor** | Automatic when working in trigger directories | [Link](https://docs.cursor.com/en/context/rules#rules/) |
| **Claude Code** | Context-aware activation + custom subagent | [Link](https://docs.anthropic.com/en/docs/claude-code) |
| **VSCode Copilot** | Integration with GitHub Copilot chat | [Link](https://code.visualstudio.com/docs/copilot/overview) |
| **Windsurf** | Automatic activation in Trigger.dev projects | [Link](https://docs.windsurf.com/windsurf/cascade/memories#rules) |
| **Gemini CLI** | Command-line integration | [Link](https://ai.google.dev/gemini-api/docs) |
| **Cline** | Automatic context detection | [Link](https://github.com/cline/cline) |
| **Sourcegraph AMP** | Code intelligence integration | [Link](https://sourcegraph.com/docs) |
| **Kilo** | Custom rule integration | [Link](https://kilocode.ai/docs/advanced-usage/custom-rules) |
| **Ruler** | Rule management | [Link](https://github.com/intellectronica/ruler) |
| **AGENTS.md** | Universal format for OpenAI Codex, Jules, OpenCode, etc. | |
### Rule activation behavior
Different AI tools handle rules differently:
* **Automatic Activation**: Cursor, Windsurf, VSCode Copilot, and Cline automatically apply relevant rules when working in Trigger.dev projects or when `trigger.config.ts` is detected
* **Context-Aware**: Claude Code intelligently applies rules based on the current context and file types
* **Manual Integration**: AGENTS.md clients and others append rules to configuration files for manual activation
## Keeping rules updated
Trigger.dev rules are regularly updated to reflect new features, API changes, and best practices. The CLI includes automatic update detection.
### Automatic update notifications
When running `npx trigger.dev@latest dev`, you'll be notified when newer rule versions are available, along with the command to update them.
### Manual updates
Update rules anytime with:
```bash theme={null}
npx trigger.dev@latest install-rules
```
The update process replaces existing rules without creating duplicates, keeping your configuration files clean and organized.
### Why updates matter
* **Current API patterns**: Access the latest Trigger.dev APIs and features
* **Performance optimizations**: Benefit from improved patterns and practices
* **Deprecated pattern avoidance**: Prevent AI assistants from generating outdated code
* **New feature support**: Immediate access to newly released capabilities
## Getting started
1. Install the rules:
```bash theme={null}
npx trigger.dev@latest install-rules
```
2. Follow the prompts to install the rules for your AI client.
3. Consider installing the `trigger-dev-expert` subagent if using Claude Code.
## Next steps
* [Install the MCP server](/mcp-introduction) for complete Trigger.dev integration
* [Explore MCP tools](/mcp-tools) for project management and task execution
# MCP Introduction
Source: https://trigger.dev/docs/mcp-introduction
Learn how to install and configure the Trigger.dev MCP Server
## What is the Trigger.dev MCP Server?
The Trigger.dev MCP (Model Context Protocol) Server enables AI assistants to interact directly with your Trigger.dev projects. It provides a comprehensive set of tools to:
* Search Trigger.dev documentation
* Initialize new Trigger.dev projects
* List and manage your projects and organizations
* Get task information and trigger task runs
* Deploy projects to different environments
* Monitor run details and list runs with filtering options
## Installation
### Automatic Installation (Recommended)
The easiest way to install the Trigger.dev MCP Server is using the interactive installation wizard:
```bash theme={null}
npx trigger.dev@latest install-mcp
```
This command will guide you through:
1. Selecting which MCP clients to configure
2. Choosing installation scope (user, project, or local)
3. Automatically configuring the selected clients
## Command Line Options
The `install-mcp` command supports the following options:
### Core Options
* `-p, --project-ref
The self-hosting guide covers two alternative setups. The first option uses a simple setup where you run everything on one server. With the second option, the webapp and worker components are split on two separate machines.
You're going to need at least one Debian (or derivative) machine with Docker and Docker Compose installed. We'll also use Ngrok to expose the webapp to the internet.
## Support
It's dangerous to go alone! Join the self-hosting channel on our [Discord server](https://discord.gg/NQTxt5NA7s).
## Caveats
If you go back to your terminal you'll see that the dev command also shows the task status and links to the run log.
Runs can also find themselves in lots of other states depending on what's happening at any given time. The following sections describe all the possible states in more detail.
### Initial states
## Run completion
A run is considered finished when:
1. The last attempt succeeds, or
2. The task has reached its retry limit and all attempts have failed
At this point, the run will have either an output (if successful) or an error (if failed).
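The two outcomes can be distinguished when you retrieve a finished run: on success it has an `output`, on failure an `error`. A minimal sketch of branching on them (the `summarizeFinishedRun` helper is illustrative, not part of the SDK):

```typescript
// Illustrative helper: classify a retrieved run by its outcome.
// On success the run has an `output`; on failure it has an `error`.
function summarizeFinishedRun(run: { status: string; output?: unknown; error?: unknown }): string {
  if (run.output !== undefined) return `succeeded: ${JSON.stringify(run.output)}`;
  if (run.error !== undefined) return `failed: ${JSON.stringify(run.error)}`;
  return `not finished (status: ${run.status})`;
}

// Usage with the SDK (run ID is hypothetical):
// import { runs } from "@trigger.dev/sdk";
// const run = await runs.retrieve("run_1234");
// console.log(summarizeFinishedRun(run));
console.log(summarizeFinishedRun({ status: "COMPLETED", output: { ok: true } })); // succeeded: {"ok":true}
```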
## Boolean helpers
Run objects returned from the API and Realtime include convenient boolean helper methods to check the run's status:
```ts theme={null}
import { runs } from "@trigger.dev/sdk";
const run = await runs.retrieve("run_1234");
if (run.isCompleted) {
console.log("Run completed successfully");
}
```
* **`isQueued`**: Returns `true` when the status is `QUEUED`, `PENDING_VERSION`, or `DELAYED`
* **`isExecuting`**: Returns `true` when the status is `EXECUTING` or `DEQUEUED`. These count against your concurrency limits.
* **`isWaiting`**: Returns `true` when the status is `WAITING`. These do not count against your concurrency limits.
* **`isCompleted`**: Returns `true` when the status is any of the completed statuses
* **`isCanceled`**: Returns `true` when the status is `CANCELED`
* **`isFailed`**: Returns `true` when the status is any of the failed statuses
* **`isSuccess`**: Returns `true` when the status is `COMPLETED`
These helpers are also available when subscribing to Realtime run updates:
```ts theme={null}
import { runs } from "@trigger.dev/sdk";
for await (const run of runs.subscribeToRun("run_1234")) {
if (run.isCompleted) {
console.log("Run completed successfully!");
break;
}
}
```
## Advanced run features
### Idempotency Keys
When triggering a task, you can provide an idempotency key to ensure the task is executed only once, even if triggered multiple times. This is useful for preventing duplicate executions in distributed systems.
```ts theme={null}
await yourTask.trigger({ foo: "bar" }, { idempotencyKey: "unique-key" });
```
* If a run with the same idempotency key is already in progress, the new trigger will be ignored.
* If the run has already finished, the previous output or error will be returned.
See our [Idempotency docs](/idempotency) for more information.
### Canceling runs
You can cancel an in-progress run using the API or the dashboard:
```ts theme={null}
await runs.cancel(runId);
```
When a run is canceled:
* The task execution is stopped
* The run is marked as canceled
* The task will not be retried
* Any in-progress child runs are also canceled
### Time-to-live (TTL)
TTL is a time-to-live setting that defines the maximum duration a run can remain in a queued state before being automatically expired. You can set a TTL when triggering a run:
```ts theme={null}
await yourTask.trigger({ foo: "bar" }, { ttl: "10m" });
```
If the run hasn't started within the specified TTL, it will automatically expire, returning the status `Expired`. This is useful for time-sensitive tasks where immediate execution is important. For example, when you queue many runs simultaneously and exceed your concurrency limits, some runs might be delayed - using TTL ensures they only execute if they can start within your specified timeframe.
Note that dev runs automatically have a 10-minute TTL. In Staging and Production environments, no TTL is set by default.
### Delayed runs
You can schedule a run to start after a specified delay:
```ts theme={null}
await yourTask.trigger({ foo: "bar" }, { delay: "1h" });
```
This is useful for tasks that need to be executed at a specific time in the future.
### Replaying runs
You can create a new run with the same payload as a previous run:
```ts theme={null}
await runs.replay(runId);
```
This is useful for re-running a task with the same input, especially for debugging or recovering from failures. The new run will use the latest version of the task.
You can also replay runs from the dashboard using the same or different payload. Learn how to do this [here](/replaying).
### Waiting for runs
#### triggerAndWait()
The `triggerAndWait()` function triggers a task and then lets you wait for the result before continuing. [Learn more about triggerAndWait()](/triggering#yourtask-triggerandwait).
#### batchTriggerAndWait()
Similar to `triggerAndWait()`, the `batchTriggerAndWait()` function lets you batch trigger a task and wait for all the results. [Learn more about batchTriggerAndWait()](/triggering#yourtask-batchtriggerandwait).
### Runs API
#### runs.list()
List runs in a specific environment. You can filter the runs by status, created at, task identifier, version, and more:
```ts theme={null}
import { runs } from "@trigger.dev/sdk";
// Get the first page of runs, returning up to 20 runs
let page = await runs.list({ limit: 20 });
for (const run of page.data) {
console.log(run);
}
// Keep getting the next page until there are no more runs
while (page.hasNextPage()) {
page = await page.getNextPage();
// Do something with the next page of runs
}
```
You can also use an Async Iterator to get all runs:
```ts theme={null}
import { runs } from "@trigger.dev/sdk";
for await (const run of runs.list({ limit: 20 })) {
console.log(run);
}
```
You can provide multiple filters to the `list()` function to narrow down the results:
```ts theme={null}
import { runs } from "@trigger.dev/sdk";
const response = await runs.list({
status: ["QUEUED", "EXECUTING"], // Filter by status
taskIdentifier: ["my-task", "my-other-task"], // Filter by task identifier
from: new Date("2024-04-01T00:00:00Z"), // Filter by created at
to: new Date(),
version: "20241127.2", // Filter by deployment version,
tag: ["tag1", "tag2"], // Filter by tags
batch: "batch_1234", // Filter by batch ID
schedule: "sched_1234", // Filter by schedule ID
});
```
#### runs.retrieve()
Fetch a single run by its ID:
```ts theme={null}
import { runs } from "@trigger.dev/sdk";
const run = await runs.retrieve(runId);
```
You can provide the type of the task to correctly type the `run.payload` and `run.output`:
```ts theme={null}
import { runs } from "@trigger.dev/sdk";
import type { myTask } from "./trigger/myTask";
const run = await runs.retrieve<typeof myTask>(runId);
```
## Configuring for a task
You can set a `maxDuration` on a specific task:
```ts /trigger/max-duration-task.ts theme={null}
import { task } from "@trigger.dev/sdk";
export const maxDurationTask = task({
id: "max-duration-task",
maxDuration: 300, // 300 seconds or 5 minutes
run: async (payload: any, { ctx }) => {
//...
},
});
```
This will override the default `maxDuration` set in the config file. If you have a config file with a default `maxDuration` of 60 seconds, and you set a `maxDuration` of 300 seconds on a task, the task will run for 300 seconds.
You can "turn off" the Max duration set in your config file for a specific task like so:
```ts /trigger/max-duration-task.ts theme={null}
import { task, timeout } from "@trigger.dev/sdk";
export const maxDurationTask = task({
id: "max-duration-task",
maxDuration: timeout.None, // No max duration
run: async (payload: any, { ctx }) => {
//...
},
});
```
## Configuring for a run
You can set a `maxDuration` on a specific run when you trigger a task:
```ts /trigger/max-duration.ts theme={null}
import { maxDurationTask } from "./trigger/max-duration-task";
// Trigger the task with a maxDuration of 300 seconds
const run = await maxDurationTask.trigger(
{ foo: "bar" },
{
maxDuration: 300, // 300 seconds or 5 minutes
}
);
```
You can also set the `maxDuration` to `timeout.None` to turn off the max duration for a specific run:
```ts /trigger/max-duration.ts theme={null}
import { maxDurationTask } from "./trigger/max-duration-task";
import { timeout } from "@trigger.dev/sdk";
// Trigger the task with no maxDuration
const run = await maxDurationTask.trigger(
{ foo: "bar" },
{
maxDuration: timeout.None, // No max duration
}
);
```
## maxDuration in run context
You can access the `maxDuration` set for a run in the run context:
```ts /trigger/max-duration-task.ts theme={null}
import { task } from "@trigger.dev/sdk";
export const maxDurationTask = task({
id: "max-duration-task",
maxDuration: 300, // 300 seconds or 5 minutes
run: async (payload: any, { ctx }) => {
console.log(ctx.run.maxDuration); // 300
},
});
```
## maxDuration and lifecycle functions
When a task run exceeds the `maxDuration`, the lifecycle functions `cleanup`, `onSuccess`, and `onFailure` will not be called.
# Run metadata
Source: https://trigger.dev/docs/runs/metadata
Attach a small amount of data to a run and update it as the run progresses.
You can attach up to 256KB of metadata to a run, which you can then access from inside the run function, via the API, Realtime, and in the dashboard. You can use metadata to store additional, structured information on a run. For example, you could store your user’s full name and corresponding unique identifier from your system on every task that is associated with that user. Or you could store the progress of a long-running task, or intermediate results that you want to access later.
## Usage
Add metadata to a run when triggering by passing it as an object to the `trigger` function:
```ts theme={null}
const handle = await myTask.trigger(
{ message: "hello world" },
{ metadata: { user: { name: "Eric", id: "user_1234" } } }
);
```
You can get the current metadata at any time by calling `metadata.get()` or `metadata.current()` (only inside a run):
```ts theme={null}
import { task, metadata } from "@trigger.dev/sdk";
export const myTask = task({
id: "my-task",
run: async (payload: { message: string }) => {
// Get the whole metadata object
const currentMetadata = metadata.current();
console.log(currentMetadata);
// Get a specific key
const user = metadata.get("user");
console.log(user.name); // "Eric"
},
});
```
Any of these methods can be called anywhere "inside" the run function, or a function called from the run function:
```ts theme={null}
import { task, metadata } from "@trigger.dev/sdk";
export const myTask = task({
id: "my-task",
run: async (payload: { message: string }) => {
doSomeWork();
},
});
async function doSomeWork() {
// Set the value of a specific key
metadata.set("progress", 0.5);
}
```
If you call any of the metadata methods outside of the run function, they will have no effect:
```ts theme={null}
import { metadata } from "@trigger.dev/sdk";
// Somewhere outside of the run function
function doSomeWork() {
metadata.set("progress", 0.5); // This will do nothing
}
```
This means it's safe to call these methods anywhere in your code, and they will only have an effect when called inside the run function.
### API
You can use the `runs.retrieve()` SDK function to get the metadata for a run:
```ts theme={null}
import { runs } from "@trigger.dev/sdk";
const run = await runs.retrieve("run_1234");
console.log(run.metadata);
```
See the [API reference](/management/runs/retrieve) for more information.
## Size limit
The maximum size of the metadata object is 256KB. If you exceed this limit, the SDK will throw an error. If you are self-hosting Trigger.dev, you can change this limit by setting the `TASK_RUN_METADATA_MAXIMUM_SIZE` environment variable. For example, to set the limit to 16KB, you would set `TASK_RUN_METADATA_MAXIMUM_SIZE=16384`.
# Priority
Source: https://trigger.dev/docs/runs/priority
Specify a priority when triggering a run.
You can set a priority when you trigger a run. This allows you to prioritize some of your runs over others, so they are started sooner. This is very useful when:
* You have critical work that needs to start more quickly (and you have long queues).
* You want runs for your premium users to take priority over free users.
The value for priority is a time offset in seconds that determines the order of dequeuing.
If you specify a priority of `10`, the run will dequeue before a run that was triggered with no priority 8 seconds earlier, as in this example:
```ts theme={null}
// no priority = 0
await myTask.trigger({ foo: "bar" });
//... imagine 8s pass by
// this run will start before the run above that was triggered 8s ago (with no priority)
await myTask.trigger({ foo: "bar" }, { priority: 10 });
```
If you passed a value of `3600` the run would dequeue before runs that were triggered an hour ago (with no priority).
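Conceptually, the priority offset is subtracted from the enqueue time when ordering the queue. The sketch below illustrates that ordering rule only; it is not the SDK's internal implementation:

```typescript
// Effective queue time = enqueue time minus the priority offset (in seconds).
// Runs dequeue in ascending order of effective queue time.
function effectiveQueueTime(enqueuedAtMs: number, prioritySeconds = 0): number {
  return enqueuedAtMs - prioritySeconds * 1000;
}

const t0 = Date.parse("2024-01-01T00:00:00Z");
const noPriority = effectiveQueueTime(t0);               // triggered first, no priority
const withPriority = effectiveQueueTime(t0 + 8_000, 10); // triggered 8s later, priority 10

// The later run dequeues first: its effective time is 2 seconds earlier.
console.log(withPriority < noPriority); // true
```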
## Feature comparison
While [limits](#limits) are generally configurable when self-hosting, some features are only available on Trigger.dev Cloud:
| Feature | Cloud | Self-hosted | Description |
| :---------------- | :---- | :---------- | :-------------------------------------- |
| Warm starts | ✅ | ❌ | Faster startups for consecutive runs |
| Auto-scaling | ✅ | ❌ | No need for manual worker node scaling |
| Checkpoints | ✅ | ❌ | Non-blocking waits, less resource usage |
| Dedicated support | ✅ | ❌ | Direct access to our support team |
| Community support | ✅ | ✅ | Access to our Discord community |
| ARM support | ✅ | ✅ | ARM-based deployments |
## Limits
Most of the [limits](/limits) are configurable when self-hosting, with some hardcoded exceptions. You can configure them via environment variables on the [webapp](/self-hosting/env/webapp) container.
| Limit | Configurable | Hardcoded value |
| :---------------- | :----------- | :-------------- |
| Concurrency | ✅ | — |
| Rate limits | ✅ | — |
| Queued tasks | ✅ | — |
| Task payloads | ✅ | — |
| Batch payloads | ✅ | — |
| Task outputs | ✅ | — |
| Batch size | ✅ | — |
| Log size | ✅ | — |
| Machines | ✅ | — |
| OTel limits | ✅ | — |
| Log retention | — | Never deleted |
| I/O packet length | ❌ | 128KB |
| Alerts | ❌ | 100M |
| Schedules | ❌ | 100M |
| Team members | ❌ | 100M |
| Preview branches | ❌ | 100M |
### Machine overrides
You can override the machine type for a task by setting the `MACHINE_PRESETS_OVERRIDE_PATH` environment variable to a JSON file with the following structure:
```json theme={null}
{
"defaultMachine": "small-1x",
"machines": {
"micro": { "cpu": 0.25, "memory": 0.25 },
"small-1x": { "cpu": 0.5, "memory": 0.5 },
"small-2x": { "cpu": 1, "memory": 1 }
// ...etc
}
}
```
All fields are optional. Partial overrides are supported:
```json theme={null}
{
"defaultMachine": "small-2x",
"machines": {
"small-1x": { "memory": 2 }
}
}
```
## Community support
It's dangerous to go alone! Join the self-hosting channel on our [Discord server](https://discord.gg/NQTxt5NA7s).
## Next steps
### 2. Adding tags inside the `run` function
Use the `tags.add()` function to add tags to a run from inside the `run` function. This will add the tag `product_1234567` to the run:
```ts theme={null}
import { logger, task, tags } from "@trigger.dev/sdk";
export const myTask = task({
id: "my-task",
run: async (payload: { message: string }, { ctx }) => {
// Get the tags from when the run was triggered using the context
// This is not updated if you add tags during the run
logger.log("Tags from the run context", { tags: ctx.run.tags });
// Add tags during the run (a single string or array of strings)
await tags.add("product_1234567");
},
});
```
Reminder: a run can have at most 10 tags, counting both tags set when triggering and tags added inside the run function. If calling `tags.add()` would push the total above 10, we log an error and ignore the new tags.
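A sketch of that cap, assuming duplicate tags are only counted once (the helper below is illustrative, not an SDK function):

```typescript
// Illustrative check mirroring the 10-tag cap described above.
const MAX_TAGS_PER_RUN = 10;

function canAddTags(existingTags: string[], newTags: string[]): boolean {
  // Tags from triggering and from tags.add() count toward the same limit.
  const total = new Set([...existingTags, ...newTags]).size;
  return total <= MAX_TAGS_PER_RUN;
}

console.log(canAddTags(["user_123", "org_456"], ["product_1234567"])); // true
console.log(canAddTags(Array.from({ length: 10 }, (_, i) => `tag_${i}`), ["one_more"])); // false
```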
### Propagating tags to child runs
Tags do not propagate to child runs automatically. By default runs have no tags and you have to set them explicitly.
It's easy to propagate tags if you want:
```ts theme={null}
import { task } from "@trigger.dev/sdk";
export const myTask = task({
id: "my-task",
run: async (payload: Payload, { ctx }) => {
// Pass the tags from ctx into the child run
const { id } = await otherTask.trigger(
{ message: "triggered from myTask" },
{ tags: ctx.run.tags }
);
},
});
```
## Filtering runs by tags
You can filter runs by tags in the dashboard and in the SDK.
### In the dashboard
On the Runs page open the filter menu, choose "Tags" and then start typing in the name of the tag you want to filter by. You can select it and it will restrict the results to only runs with that tag. You can add multiple tags to filter by more than one.
### Using `runs.list()`
You can provide filters to the `runs.list` SDK function, including an array of tags.
```ts theme={null}
import { runs } from "@trigger.dev/sdk";
// Loop through all runs with the tag "user_123456" that have completed
for await (const run of runs.list({ tag: "user_123456", status: ["COMPLETED"] })) {
console.log(run.id, run.taskIdentifier, run.finishedAt, run.tags);
}
```
# Tasks: Overview
Source: https://trigger.dev/docs/tasks/overview
Tasks are functions that can run for a long time and provide strong resilience to failure.
There are different types of tasks including regular tasks and [scheduled tasks](/tasks/scheduled).
## Hello world task and how to trigger it
Here's an incredibly simple task:
```ts /trigger/hello-world.ts theme={null}
import { task } from "@trigger.dev/sdk";
export const helloWorld = task({
//1. Use a unique id for each task
id: "hello-world",
//2. The run function is the main function of the task
run: async (payload: { message: string }) => {
//3. You can write code that runs for a long time here, there are no timeouts
console.log(payload.message);
},
});
```
You can trigger this in two ways:
1. From the dashboard [using the "Test" feature](/run-tests).
2. Trigger it from your backend code. See the [full triggering guide here](/triggering).
Here's how to trigger a single run from elsewhere in your code:
```ts Your backend code theme={null}
import { helloWorld } from "./trigger/hello-world";
async function triggerHelloWorld() {
//This triggers the task and returns a handle
const handle = await helloWorld.trigger({ message: "Hello world!" });
//You can use the handle to check the status of the task, cancel and retry it.
console.log("Task is running with handle", handle.id);
}
```
You can also [trigger a task from another task](/triggering), and wait for the result.
## Defining a `task`
The task function takes an object with the following fields.
### The `id` field
This is used to identify your task so it can be triggered, managed, and you can view runs in the dashboard. This must be unique in your project – we recommend making it descriptive and unique.
### The `run` function
Your custom code inside `run()` will be executed when your task is triggered. It’s an async function that has two arguments:
1. The run payload - the data that you pass to the task when you trigger it.
2. An object with `ctx` about the run (Context), and any output from the optional `init` function that runs before every run attempt.
Anything you return from the `run` function will be the result of the task. Data you return must be JSON serializable: strings, numbers, booleans, arrays, objects, and null.
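For instance, the value returned from `run()` becomes the run's output, and the transformation itself can be an ordinary function. The task id and payload shape below are made-up values for illustration:

```typescript
// Pure transformation that a run() function could return.
// The result only uses JSON-serializable values (here: an object with a number).
function wordCount(text: string): { words: number } {
  return { words: text.trim().split(/\s+/).filter(Boolean).length };
}

// Inside a task definition it would be returned from run():
// import { task } from "@trigger.dev/sdk";
// export const summarize = task({
//   id: "summarize-text", // hypothetical task id
//   run: async (payload: { text: string }) => wordCount(payload.text),
// });
console.log(wordCount("Hello from Trigger.dev").words); // 3
```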
### `retry` options
A task is retried if an error is thrown; by default we retry 3 times.
You can set the number of retries and the delay between retries in the `retry` field:
```ts /trigger/retry.ts theme={null}
export const taskWithRetries = task({
id: "task-with-retries",
retry: {
maxAttempts: 10,
factor: 1.8,
minTimeoutInMs: 500,
maxTimeoutInMs: 30_000,
randomize: false,
},
run: async (payload: any, { ctx }) => {
//...
},
});
```
For more information read [the retrying guide](/errors-retrying).
It's also worth mentioning that you can [retry a block of code](/errors-retrying) inside your tasks as well.
### `queue` options
Queues allow you to control the concurrency of your tasks. This allows you to have one-at-a-time execution and parallel executions. There are also more advanced techniques like having different concurrencies for different sets of your users. For more information read [the concurrency & queues guide](/queue-concurrency).
```ts /trigger/one-at-a-time.ts theme={null}
export const oneAtATime = task({
id: "one-at-a-time",
queue: {
concurrencyLimit: 1,
},
run: async (payload: any, { ctx }) => {
//...
},
});
```
### `machine` options
Some tasks require more vCPUs or GBs of RAM. You can specify these requirements in the `machine` field. For more information read [the machines guide](/machines).
```ts /trigger/heavy-task.ts theme={null}
export const heavyTask = task({
id: "heavy-task",
machine: {
preset: "large-1x", // 4 vCPU, 8 GB RAM
},
run: async (payload: any, { ctx }) => {
//...
},
});
```
### `maxDuration` option
By default tasks can execute indefinitely, which can be great! But you also might want to set a `maxDuration` to prevent a task from running too long. You can set the `maxDuration` on a task, and all runs of that task will be stopped if they exceed the duration.
```ts /trigger/long-task.ts theme={null}
export const longTask = task({
id: "long-task",
maxDuration: 300, // 300 seconds or 5 minutes
run: async (payload: any, { ctx }) => {
//...
},
});
```
See our [maxDuration guide](/runs/max-duration) for more information.
## Global lifecycle hooks
### `init` function
This function is called before a run attempt:
```ts /trigger/init.ts theme={null}
export const taskWithInit = task({
id: "task-with-init",
init: async ({ payload, ctx }) => {
//...
},
run: async (payload: any, { ctx }) => {
//...
},
});
```
You can also return data from the `init` function that will be available in the params of the `run`, `cleanup`, `onSuccess`, and `onFailure` functions.
```ts /trigger/init-return.ts theme={null}
export const taskWithInitReturn = task({
id: "task-with-init-return",
init: async ({ payload, ctx }) => {
return { someData: "someValue" };
},
run: async (payload: any, { ctx, init }) => {
console.log(init.someData); // "someValue"
},
});
```
These are the options when creating a schedule:
| Name | Description |
| ----------------- | --------------------------------------------------------------------------------------------- |
| Task | The id of the task you want to attach to. |
| Cron pattern | The schedule in cron format. |
| Timezone | The timezone the schedule will run in. Defaults to "UTC" |
| External id | An optional external id, usually you'd use a userId. |
| Deduplication key | An optional deduplication key. If you pass the same value, it will update rather than create. |
| Environments | The environments this schedule will run in. |
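The same options can be supplied when creating a schedule imperatively from your code. A sketch, where the task id, cron pattern, and user id are all made-up values (usage via the SDK's schedules API is commented, as an assumption):

```typescript
// Illustrative builder for the options in the table above.
// Passing the same deduplicationKey again updates the schedule rather than creating a new one.
function dailyDigestSchedule(userId: string) {
  return {
    task: "daily-digest",             // hypothetical task id
    cron: "0 9 * * *",                // every day at 09:00
    timezone: "America/New_York",     // defaults to "UTC" if omitted
    externalId: userId,               // e.g. your user id
    deduplicationKey: `${userId}-daily-digest`,
  };
}

// Usage (assumes the SDK's schedules API):
// import { schedules } from "@trigger.dev/sdk";
// const created = await schedules.create(dailyDigestSchedule("user_123456"));
console.log(dailyDigestSchedule("user_123456").deduplicationKey); // "user_123456-daily-digest"
```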
When you use both `delay` and `ttl`, the TTL will start counting down from the time the run is enqueued, not from the time the run is triggered.
So for example, when using the following code:
```ts theme={null}
await myTask.trigger({ some: "data" }, { delay: "10m", ttl: "1h" });
```
The timeline would look like this:
1. The run is created at 12:00:00
2. The run is enqueued at 12:10:00
3. The TTL starts counting down from 12:10:00
4. If the run hasn't started by 13:10:00, it will be expired
For this reason, the `ttl` option only accepts durations and not absolute timestamps.
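The timeline above can be computed directly; a small sketch (not SDK code) that reproduces the 12:00 → 12:10 → 13:10 example:

```typescript
// The TTL countdown starts when the run is enqueued (after the delay), not when it is created.
function expiryTime(createdAtMs: number, delayMs: number, ttlMs: number): number {
  const enqueuedAtMs = createdAtMs + delayMs; // delay elapses first
  return enqueuedAtMs + ttlMs;                // then the TTL counts down
}

const createdAt = Date.parse("2024-01-01T12:00:00Z");
const expiresAt = expiryTime(createdAt, 10 * 60_000, 60 * 60_000); // delay: "10m", ttl: "1h"
console.log(new Date(expiresAt).toISOString()); // "2024-01-01T13:10:00.000Z"
```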
### `idempotencyKey`
You can provide an `idempotencyKey` to ensure that a task is only triggered once with the same key. This is useful if you are triggering a task within another task that might be retried:
```typescript theme={null}
import { idempotencyKeys, task } from "@trigger.dev/sdk";
export const myTask = task({
id: "my-task",
retry: {
maxAttempts: 4,
},
run: async (payload: any) => {
// By default, idempotency keys generated are unique to the run, to prevent retries from duplicating child tasks
const idempotencyKey = await idempotencyKeys.create("my-task-key");
// childTask will only be triggered once with the same idempotency key
await childTask.trigger(payload, { idempotencyKey });
// Do something else, that may throw an error and cause the task to be retried
},
});
```
For more information, see our [Idempotency](/idempotency) documentation.
Hello ${payload.name},
...
`,
    });
  },
});
```
This allows you to write linear code without having to worry about the complexity of scheduling or managing cron jobs.
In the Trigger.dev Cloud we automatically pause execution of tasks when they are waiting for longer than a few seconds. When triggering and waiting for subtasks, the parent is checkpointed and while waiting does not count towards compute usage. When waiting for a time period (`wait.for` or `wait.until`), if the wait is longer than 5 seconds we checkpoint and it does not count towards compute usage.
## `throwIfInThePast`
You can optionally throw an error if the date is already in the past when the function is called:
```ts theme={null}
await wait.until({ date: new Date(date), throwIfInThePast: true });
```
You can of course use try/catch if you want to do something special in this case.
## Wait idempotency
You can pass an idempotency key to any wait function, allowing you to skip waits if the same idempotency key is used again. This can be useful if you want to skip waits when retrying a task, for example:
```ts theme={null}
// Specify the idempotency key and TTL when waiting until a date:
await wait.until({
  date: futureDate,
  idempotencyKey: "my-idempotency-key",
  idempotencyKeyTTL: "1h",
});
```
# Writing tasks: Overview
Source: https://trigger.dev/docs/writing-tasks-introduction
Tasks are the core of Trigger.dev. They are long-running processes that are triggered by events.
Before digging deeper into the details of writing tasks, you should read the [fundamentals of tasks](/tasks/overview) to understand what tasks are and how they work.
## Writing tasks
| Topic | Description |
| :------------------------------------------- | :-------------------------------------------------------------------------------------------------- |
| [Logging](/logging) | View and send logs and traces from your tasks. |
| [Errors & retrying](/errors-retrying) | How to deal with errors and write reliable tasks. |
| [Wait](/wait) | Wait for periods of time or for external events to occur before continuing. |
| [Concurrency & Queues](/queue-concurrency) | Configure what you want to happen when there is more than one run at a time. |
| [Realtime notifications](/realtime/overview) | Send realtime notifications from your task that you can subscribe to from your backend or frontend. |
| [Versioning](/versioning) | How versioning works. |
| [Machines](/machines) | Configure the CPU and RAM of the machine your task runs on. |
| [Idempotency](/idempotency) | Protect against mutations happening twice. |
| [Replaying](/replaying) | You can replay a single task or many at once with a new version of your code. |
| [Max duration](/runs/max-duration) | Set a maximum duration for your task to run. |
| [Tags](/tags) | Tags allow you to easily filter runs in the dashboard and when using the SDK. |
| [Metadata](/runs/metadata) | Attach a small amount of data to a run and update it as the run progresses. |
| [Usage](/run-usage) | Get compute duration and cost from inside a run, or for a specific block of code. |
| [Context](/context) | Access the context of the task run. |
| [Bulk actions](/bulk-actions) | Run actions on many task runs at once. |
| [Priority](/runs/priority) | Specify a priority when triggering a task. |
| [Hidden tasks](/hidden-tasks) | Create tasks that are not exported from your trigger files but can still be executed. |
## Our library of examples, guides and projects