# API keys
How to authenticate with Trigger.dev so you can trigger tasks.
### Authentication and your secret keys
When you [trigger a task](/triggering) from your backend code, you need to set the `TRIGGER_SECRET_KEY` environment variable.
Each environment has its own secret key. You can find the value on the API keys page in the Trigger.dev dashboard:
![How to find your secret key](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/api-keys.png)
### Automatically Configuring the SDK
To automatically configure the SDK with your secret key, you can set the `TRIGGER_SECRET_KEY` environment variable. The SDK will automatically use this value when calling API methods (like `trigger`).
```bash .env
TRIGGER_SECRET_KEY="tr_dev_…"
```
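With the variable set, API calls authenticate automatically. A minimal sketch (assuming a task with id `my-task` exists in your project):
```ts
import { tasks } from "@trigger.dev/sdk/v3";

export async function triggerMyTask() {
  // Uses TRIGGER_SECRET_KEY from the environment; no configure() call needed
  const handle = await tasks.trigger("my-task", { userId: "1234" });
  return handle.id;
}
```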
If you are self-hosting and need to change the default URL, you can set `TRIGGER_API_URL` in the same way.
```bash .env
TRIGGER_API_URL="https://trigger.example.com"
```
The default URL is `https://api.trigger.dev`.
### Manually Configuring the SDK
If you prefer to manually configure the SDK, you can call the `configure` method:
```ts
import { configure } from "@trigger.dev/sdk/v3";
import { myTask } from "./trigger/myTasks";
configure({
secretKey: "tr_dev_1234", // WARNING: Never actually hardcode your secret key like this
baseURL: "https://mytrigger.example.com", // Optional
});
async function triggerTask() {
await myTask.trigger({ userId: "1234" }); // This will use the secret key and base URL you configured
}
```
# Changelog
Our [changelog](https://trigger.dev/changelog) is the best way to stay up to date with the latest changes to Trigger.
# CLI deploy command
The `trigger.dev deploy` command can be used to deploy your tasks to our infrastructure.
Run the command like this:
```bash npm
npx trigger.dev@latest deploy
```
```bash pnpm
pnpm dlx trigger.dev@latest deploy
```
```bash yarn
yarn dlx trigger.dev@latest deploy
```
The deploy will fail in CI if any version mismatches are detected. Ensure everything runs locally first
using the [dev](/cli-dev-commands) command, and don't bypass the version checks!
It performs a few steps to deploy:
1. Optionally updates packages when running locally.
2. Compiles and bundles the code.
3. Deploys the code to the Trigger.dev instance.
4. Registers the tasks as a new version in the environment (prod by default).
You can also set up [GitHub Actions](/github-actions) to deploy your tasks automatically.
## Arguments
```
npx trigger.dev@latest deploy [path]
```
The path to the project. Defaults to the current directory.
## Options
* The name of the config file found at the project path. Defaults to `trigger.config.ts`.
* The project ref. Required if there is no config file.
* Load environment variables from a file. This will only hydrate the `process.env` of the CLI process, not the tasks.
* Skip checking for `@trigger.dev` package updates.
* The deployment environment. Defaults to `prod`, but you can specify `staging`.
* Create a deployable build but don't deploy it. Prints out the build path so you can inspect it.
* The platform to build the deployment image for. Defaults to `linux/amd64`.
* Turn off syncing environment variables with the Trigger.dev instance.
### Common options
These options are available on most commands.
* The login profile to use. Defaults to `"default"`.
* Override the default API URL. If not specified, it uses `https://api.trigger.dev`. This can also be set via the `TRIGGER_API_URL` environment variable.
* The CLI log level to use. Options are `debug`, `info`, `log`, `warn`, `error`, and `none`. This does not affect the log level of your trigger.dev tasks. Defaults to `log`.
* Opt out of sending telemetry data. This can also be done via the `TRIGGER_TELEMETRY_DISABLED` environment variable; just set it to anything other than an empty string.
* Shows the help information for the command.
* Displays the version number of the CLI.
### Self-hosting
These options are typically used when [self-hosting](/open-source-self-hosting) or for local development.
* `--self-hosted`: builds and loads the image using your local docker. Use the `--registry` option to specify the registry to push the image to, or just use `--push` to push to the default registry.
* `--load-image`: loads the built image into your local docker after building it.
* `--registry`: the registry to push the image to when using `--self-hosted`. Automatically enables `--push`.
* `--push`: when using the `--self-hosted` flag, push the image to the registry.
* `--namespace`: the namespace to use when pushing the image to the registry. For example, if pushing to Docker Hub, the namespace is your Docker Hub username.
* The networking mode for `RUN` instructions when using `--self-hosted`.
## Examples
### Push to Docker Hub (self-hosted)
An example of deploying to Docker Hub when using a self-hosted setup:
```bash
npx trigger.dev@latest deploy \
--self-hosted \
--load-image \
--registry docker.io \
--namespace mydockerhubusername
```
# CLI dev command
The `trigger.dev dev` command is used to run your tasks locally.
This runs a server on your machine that can execute Trigger.dev tasks:
```bash npm
npx trigger.dev@latest dev
```
```bash pnpm
pnpm dlx trigger.dev@latest dev
```
```bash yarn
yarn dlx trigger.dev@latest dev
```
It will first perform an update check to prevent version mismatches, failed deploys, and other errors. You will always be prompted first.
You will see in the terminal that the server is running and listening for tasks. When you run a task, you will see it in the terminal along with a link to view it in the dashboard.
It is worth noting that each task runs in a separate Node process. This means that if you have a long-running task, it will not block other tasks from running.
## Options
* The name of the config file found at the project path. Defaults to `trigger.config.ts`.
* The project ref. Required if there is no config file.
* Load environment variables from a file. This will only hydrate the `process.env` of the CLI process, not the tasks.
* Skip checking for `@trigger.dev` package updates.
### Common options
These options are available on most commands.
* The login profile to use. Defaults to `"default"`.
* Override the default API URL. If not specified, it uses `https://api.trigger.dev`. This can also be set via the `TRIGGER_API_URL` environment variable.
* The CLI log level to use. Options are `debug`, `info`, `log`, `warn`, `error`, and `none`. This does not affect the log level of your trigger.dev tasks. Defaults to `log`.
* Opt out of sending telemetry data. This can also be done via the `TRIGGER_TELEMETRY_DISABLED` environment variable; just set it to anything other than an empty string.
* Shows the help information for the command.
* Displays the version number of the CLI.
## Concurrently running your dev server and Trigger.dev
Install the concurrently package as a dev dependency:
```bash
npm i -D concurrently
```
Then add something like this in your package.json scripts:
```json
"scripts": {
"dev": "concurrently --raw --kill-others npm:dev:*",
"dev:trigger": "npx trigger.dev@latest dev",
// Add your framework-specific dev script here, for example:
// "dev:next": "next dev",
// "dev:remix": "remix dev",
//...
}
```
# CLI init command
Use these options when running the CLI `init` command.
Run the command like this:
```bash npm
npx trigger.dev@latest init
```
```bash pnpm
pnpm dlx trigger.dev@latest init
```
```bash yarn
yarn dlx trigger.dev@latest init
```
## Options
* By default, the init command assumes you are using TypeScript. Use this flag to initialize a project that uses JavaScript.
* The project ref to use when initializing the project.
* The version of the `@trigger.dev/sdk` package to install. Defaults to `latest`.
* Skip installing the `@trigger.dev/sdk` package.
* Override the existing config file if it exists.
* Additional arguments to pass to the package manager. Accepts CSV for multiple args.
### Common options
These options are available on most commands.
* The login profile to use. Defaults to `"default"`.
* Override the default API URL. If not specified, it uses `https://api.trigger.dev`. This can also be set via the `TRIGGER_API_URL` environment variable.
* The CLI log level to use. Options are `debug`, `info`, `log`, `warn`, `error`, and `none`. This does not affect the log level of your trigger.dev tasks. Defaults to `log`.
* Opt out of sending telemetry data. This can also be done via the `TRIGGER_TELEMETRY_DISABLED` environment variable; just set it to anything other than an empty string.
* Shows the help information for the command.
* Displays the version number of the CLI.
# Introduction
The Trigger.dev CLI has a number of options and commands to help you develop locally, self host, and deploy your tasks.
## Options
* Shows the help information for the command.
* Displays the version number of the CLI.
## Commands
| Command | Description |
| :------------------------------------------- | :----------------------------------------------------------------- |
| [login](/cli-login-commands) | Login with Trigger.dev so you can perform authenticated actions. |
| [init](/cli-init-commands) | Initialize your existing project for development with Trigger.dev. |
| [dev](/cli-dev-commands) | Run your Trigger.dev tasks locally. |
| [deploy](/cli-deploy-commands) | Deploy your Trigger.dev v3 project to the cloud. |
| [whoami](/cli-whoami-commands) | Display the current logged in user and project details. |
| [logout](/cli-logout-commands) | Logout of Trigger.dev. |
| [list-profiles](/cli-list-profiles-commands) | List all of your CLI profiles. |
| [update](/cli-update-commands) | Updates all `@trigger.dev/*` packages to match the CLI version. |
# CLI list-profiles command
Use these options when using the `list-profiles` CLI command.
Run the command like this:
```bash npm
npx trigger.dev@latest list-profiles
```
```bash pnpm
pnpm dlx trigger.dev@latest list-profiles
```
```bash yarn
yarn dlx trigger.dev@latest list-profiles
```
## Options
### Common options
These options are available on most commands.
* The CLI log level to use. Options are `debug`, `info`, `log`, `warn`, `error`, and `none`. This does not affect the log level of your trigger.dev tasks. Defaults to `log`.
* Opt out of sending telemetry data. This can also be done via the `TRIGGER_TELEMETRY_DISABLED` environment variable; just set it to anything other than an empty string.
* Shows the help information for the command.
* Displays the version number of the CLI.
# CLI login command
Use these options when logging in to Trigger.dev using the CLI.
Run the command like this:
```bash npm
npx trigger.dev@latest login
```
```bash pnpm
pnpm dlx trigger.dev@latest login
```
```bash yarn
yarn dlx trigger.dev@latest login
```
## Options
### Common options
These options are available on most commands.
* The login profile to use. Defaults to `"default"`.
* Override the default API URL. If not specified, it uses `https://api.trigger.dev`. This can also be set via the `TRIGGER_API_URL` environment variable.
* The CLI log level to use. Options are `debug`, `info`, `log`, `warn`, `error`, and `none`. This does not affect the log level of your trigger.dev tasks. Defaults to `log`.
* Opt out of sending telemetry data. This can also be done via the `TRIGGER_TELEMETRY_DISABLED` environment variable; just set it to anything other than an empty string.
* Shows the help information for the command.
* Displays the version number of the CLI.
# CLI logout command
Use these options when using the `logout` CLI command.
Run the command like this:
```bash npm
npx trigger.dev@latest logout
```
```bash pnpm
pnpm dlx trigger.dev@latest logout
```
```bash yarn
yarn dlx trigger.dev@latest logout
```
## Options
### Common options
These options are available on most commands.
* The login profile to use. Defaults to `"default"`.
* Override the default API URL. If not specified, it uses `https://api.trigger.dev`. This can also be set via the `TRIGGER_API_URL` environment variable.
* The CLI log level to use. Options are `debug`, `info`, `log`, `warn`, `error`, and `none`. This does not affect the log level of your trigger.dev tasks. Defaults to `log`.
* Opt out of sending telemetry data. This can also be done via the `TRIGGER_TELEMETRY_DISABLED` environment variable; just set it to anything other than an empty string.
* Shows the help information for the command.
* Displays the version number of the CLI.
# CLI update command
Use these options when using the `update` CLI command.
Run the command like this:
```bash npm
npx trigger.dev@latest update
```
```bash pnpm
pnpm dlx trigger.dev@latest update
```
```bash yarn
yarn dlx trigger.dev@latest update
```
## Options
### Common options
These options are available on most commands.
* The CLI log level to use. Options are `debug`, `info`, `log`, `warn`, `error`, and `none`. This does not affect the log level of your trigger.dev tasks. Defaults to `log`.
* Opt out of sending telemetry data. This can also be done via the `TRIGGER_TELEMETRY_DISABLED` environment variable; just set it to anything other than an empty string.
* Shows the help information for the command.
* Displays the version number of the CLI.
# CLI whoami command
Use these options to display the current logged in user and project details.
Run the command like this:
```bash npm
npx trigger.dev@latest whoami
```
```bash pnpm
pnpm dlx trigger.dev@latest whoami
```
```bash yarn
yarn dlx trigger.dev@latest whoami
```
## Options
### Common options
These options are available on most commands.
* The login profile to use. Defaults to `"default"`.
* Override the default API URL. If not specified, it uses `https://api.trigger.dev`. This can also be set via the `TRIGGER_API_URL` environment variable.
* The CLI log level to use. Options are `debug`, `info`, `log`, `warn`, `error`, and `none`. This does not affect the log level of your trigger.dev tasks. Defaults to `log`.
* Opt out of sending telemetry data. This can also be done via the `TRIGGER_TELEMETRY_DISABLED` environment variable; just set it to anything other than an empty string.
* Shows the help information for the command.
* Displays the version number of the CLI.
# Discord Community
Please [join our community on Discord](https://trigger.dev/discord) to ask questions, share your projects, and get help from other developers.
# The trigger.config.ts file
This file is used to configure your project and how it's built.
The `trigger.config.ts` file is used to configure your Trigger.dev project. It is a TypeScript file at the root of your project that exports a default configuration object. Here's an example:
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
// Your project ref (you can see it on the Project settings page in the dashboard)
project: "",
//The paths for your trigger folders
dirs: ["./trigger"],
retries: {
//If you want to retry a task in dev mode (when using the CLI)
enabledInDev: false,
//the default retry settings. Used if you don't specify on a task.
default: {
maxAttempts: 3,
minTimeoutInMs: 1000,
maxTimeoutInMs: 10000,
factor: 2,
randomize: true,
},
},
});
```
The config file handles a lot of things, like:
* Specifying where your trigger tasks are located using the `dirs` option.
* Setting the default retry settings.
* Configuring OpenTelemetry instrumentations.
* Customizing the build process.
* Adding global task lifecycle functions.
The config file is bundled with your project, so code imported in the config file is also bundled,
which can affect build times and cold start duration. One important qualification: anything defined
in the `build` config is automatically stripped out of the config file, and imports used only inside
the build config will be tree-shaken out.
## Lifecycle functions
You can add lifecycle functions to get notified when any task starts, succeeds, or fails using `onStart`, `onSuccess` and `onFailure`:
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "",
// Your other config settings...
onSuccess: async (payload, output, { ctx }) => {
console.log("Task succeeded", ctx.task.id);
},
onFailure: async (payload, error, { ctx }) => {
console.log("Task failed", ctx.task.id);
},
onStart: async (payload, { ctx }) => {
console.log("Task started", ctx.task.id);
},
init: async (payload, { ctx }) => {
console.log("I run before any task is run");
},
});
```
Read more about task lifecycle functions in the [tasks overview](/tasks/overview).
## Instrumentations
We use OpenTelemetry (OTEL) for our run logs. This means you get a lot of information about your tasks with no effort. But you probably want to add more information to your logs. For example, here's all the Prisma calls automatically logged:
![The run log](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/auto-instrumentation.png)
Here we add Prisma and OpenAI instrumentations to your `trigger.config.ts` file.
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { PrismaInstrumentation } from "@prisma/instrumentation";
import { OpenAIInstrumentation } from "@traceloop/instrumentation-openai";
export default defineConfig({
project: "",
// Your other config settings...
instrumentations: [new PrismaInstrumentation(), new OpenAIInstrumentation()],
});
```
There is a [huge library of instrumentations](https://opentelemetry.io/ecosystem/registry/?language=js) you can easily add to your project like this.
Some we recommend:
| Package | Description |
| --------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ |
| `@opentelemetry/instrumentation-undici` | Logs all fetch calls (inc. Undici fetch) |
| `@opentelemetry/instrumentation-http` | Logs all HTTP calls |
| `@prisma/instrumentation` | Logs all Prisma calls; you need to [enable tracing](https://github.com/prisma/prisma/tree/main/packages/instrumentation) |
| `@traceloop/instrumentation-openai` | Logs all OpenAI calls |
`@opentelemetry/instrumentation-fs`, which logs all file system calls, is currently not supported.
## Runtime
We currently only officially support the `node` runtime, but you can try our experimental `bun` runtime by setting the `runtime` option in your config file:
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "",
// Your other config settings...
runtime: "bun",
});
```
See our [Bun guide](/guides/frameworks/bun) for more information.
## Default machine
You can specify the default machine for all tasks in your project:
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "",
// Your other config settings...
defaultMachine: "large-1x",
});
```
See our [machines documentation](/machines) for more information.
## Log level
You can set the log level for your project:
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "",
// Your other config settings...
logLevel: "debug",
});
```
The `logLevel` only determines which logs are sent to the Trigger.dev instance when using the `logger` API. All `console` based logs are always sent.
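For example, with the default `logLevel: "log"` the `logger.debug` call below is not sent, while the `console` call always is (a minimal sketch using the SDK's `logger`):
```ts
import { logger, task } from "@trigger.dev/sdk/v3";

export const logDemo = task({
  id: "log-demo",
  run: async () => {
    logger.debug("Only sent when logLevel allows debug");
    console.log("Always sent, regardless of logLevel");
  },
});
```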
## Max duration
You can set the default `maxDuration` for all tasks in your project:
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "",
// Your other config settings...
maxDuration: 60, // 60 seconds
});
```
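Individual tasks can override this default with their own `maxDuration` (a minimal sketch):
```ts
import { task } from "@trigger.dev/sdk/v3";

export const longRunningTask = task({
  id: "long-running-task",
  maxDuration: 300, // 5 minutes, overrides the project-wide default
  run: async (payload: { url: string }) => {
    // long-running work here
  },
});
```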
See our [maxDuration guide](/runs/max-duration) for more information.
## Build configuration
You can customize the build process using the `build` option:
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "",
// Your other config settings...
build: {
// Don't bundle these packages
external: ["header-generator"],
},
});
```
The `trigger.config.ts` file is included in the bundle, but with the `build` configuration
stripped out. This means any imports only used inside the `build` configuration are also removed
from the final bundle.
### External
All code is bundled by default, but you can exclude some packages from the bundle using the `external` option:
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "",
// Your other config settings...
build: {
external: ["header-generator"],
},
});
```
When a package is excluded from the bundle, it will be added to a dynamically generated package.json file in the build directory. The version of the package will be the same as the version found in your `node_modules` directory.
Each entry in `external` should be a package name, not necessarily the import path. For example, if you want to exclude the `ai` package but you are importing `ai/rsc`, you should just include `ai` in the `external` array:
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "",
// Your other config settings...
build: {
external: ["ai"],
},
});
```
Any packages that install or build a native binary should be added to external, as native binaries
cannot be bundled. For example, `re2`, `sharp`, and `sqlite3` should be added to external.
### JSX
You can customize the `jsx` options that are passed to `esbuild` using the `jsx` option:
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "",
// Your other config settings...
build: {
jsx: {
// Use the Fragment component instead of React.Fragment
fragment: "Fragment",
// Use the h function instead of React.createElement
factory: "h",
// Turn off automatic runtime
automatic: false,
},
},
});
```
By default we enable [esbuild's automatic JSX runtime](https://esbuild.github.io/content-types/#auto-import-for-jsx), which means you don't need to import `React` in your JSX files. You can disable this by setting `automatic` to `false`.
See the [esbuild JSX documentation](https://esbuild.github.io/content-types/#jsx) for more information.
### Conditions
You can add custom [import conditions](https://esbuild.github.io/api/#conditions) to your build using the `conditions` option:
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "",
// Your other config settings...
build: {
conditions: ["react-server"],
},
});
```
These conditions affect how imports are resolved during the build process. For example, the `react-server` condition will resolve `ai/rsc` to the server version of the `ai/rsc` export.
Custom conditions will also be passed to the `node` runtime when running your tasks.
### Extensions
Build extensions allow you to hook into the build system and customize the build process or the resulting bundle and container image (in the case of deploying). You can use pre-built extensions by installing the `@trigger.dev/build` package into your `devDependencies`, or you can create your own.
#### additionalFiles
Import the `additionalFiles` build extension and use it in your `trigger.config.ts` file:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { additionalFiles } from "@trigger.dev/build/extensions/core";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [
additionalFiles({ files: ["wrangler/wrangler.toml", "./assets/**", "./fonts/**"] }),
],
},
});
```
This will copy the files specified in the `files` array to the build directory. The `files` array can contain globs. The output paths will match the path of the file, relative to the root of the project.
The root of the project is the directory that contains the `trigger.config.ts` file.
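At runtime you can then read a copied file with ordinary file system calls. A minimal sketch, assuming the task process's working directory is the project root (it reads the `wrangler/wrangler.toml` file from the example above):
```ts
import { readFile } from "node:fs/promises";
import { task } from "@trigger.dev/sdk/v3";

export const readAdditionalFile = task({
  id: "read-additional-file",
  run: async () => {
    // The file keeps its project-relative path in the build output
    return await readFile("./wrangler/wrangler.toml", "utf-8");
  },
});
```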
#### `additionalPackages`
Import the `additionalPackages` build extension and use it in your `trigger.config.ts` file:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { additionalPackages } from "@trigger.dev/build/extensions/core";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [additionalPackages({ packages: ["wrangler"] })],
},
});
```
This allows you to include additional packages in the build that are not automatically included via imports. This is useful if you want to install a package that includes a CLI tool that you want to invoke in your tasks via `exec`. We will try to automatically resolve the version of the package, but you can specify a version using the `@` symbol:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { additionalPackages } from "@trigger.dev/build/extensions/core";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [additionalPackages({ packages: ["wrangler@1.19.0"] })],
},
});
```
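A task could then shell out to the installed CLI. A minimal sketch using Node's built-in `child_process` (the `wrangler --version` invocation is purely illustrative):
```ts
import { promisify } from "node:util";
import { execFile } from "node:child_process";
import { task } from "@trigger.dev/sdk/v3";

const exec = promisify(execFile);

export const wranglerVersion = task({
  id: "wrangler-version",
  run: async () => {
    // wrangler is available because additionalPackages installed it
    const { stdout } = await exec("npx", ["wrangler", "--version"]);
    return stdout.trim();
  },
});
```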
#### `emitDecoratorMetadata`
If you need support for the `emitDecoratorMetadata` typescript compiler option, import the `emitDecoratorMetadata` build extension and use it in your `trigger.config.ts` file:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { emitDecoratorMetadata } from "@trigger.dev/build/extensions/typescript";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [emitDecoratorMetadata()],
},
});
```
This is usually required if you are using certain ORMs, like TypeORM, that require this option to be enabled. It's not enabled by default because there is a performance cost to enabling it.
emitDecoratorMetadata works by hooking into the esbuild bundle process and using the TypeScript
compiler API to compile files where we detect the use of decorators. This means you must have
`emitDecoratorMetadata` enabled in your `tsconfig.json` file, as well as `typescript` installed in
your `devDependencies`.
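For example, a hypothetical TypeORM entity like this only works correctly when decorator metadata is emitted, because TypeORM infers column types from the TypeScript property types:
```ts
import "reflect-metadata";
import { Entity, PrimaryGeneratedColumn, Column } from "typeorm";

@Entity()
export class User {
  @PrimaryGeneratedColumn()
  id!: number;

  @Column() // column type is inferred from the property's TypeScript type
  email!: string;
}
```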
#### Prisma
If you are using Prisma, you should use the prisma build extension. It:
* Automatically handles copying prisma files to the build directory.
* Generates the prisma client during the deploy process.
* Optionally migrates the database during the deploy process.
* Supports TypedSQL and multiple schema files.
You can use it for a simple Prisma setup like this:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { prismaExtension } from "@trigger.dev/build/extensions/prisma";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [
prismaExtension({
version: "5.19.0", // optional, we'll automatically detect the version if not provided
schema: "prisma/schema.prisma",
}),
],
},
});
```
This does not have any effect when running the `dev` command, only when running the `deploy`
command.
If you want to also run migrations during the build process, you can pass in the `migrate` option:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { prismaExtension } from "@trigger.dev/build/extensions/prisma";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [
prismaExtension({
schema: "prisma/schema.prisma",
migrate: true,
directUrlEnvVarName: "DATABASE_URL_UNPOOLED", // optional - the name of the environment variable that contains the direct database URL if you are using a direct database URL
}),
],
},
});
```
If you have multiple `generator` statements defined in your schema file, you can pass in the `clientGenerator` option to specify the `prisma-client-js` generator, which will prevent other generators from being generated:
```prisma schema.prisma
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
directUrl = env("DATABASE_URL_UNPOOLED")
}
// We only want to generate the prisma-client-js generator
generator client {
provider = "prisma-client-js"
}
generator kysely {
provider = "prisma-kysely"
output = "../../src/kysely"
enumFileName = "enums.ts"
fileName = "types.ts"
}
```
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { prismaExtension } from "@trigger.dev/build/extensions/prisma";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [
prismaExtension({
schema: "prisma/schema.prisma",
clientGenerator: "client",
}),
],
},
});
```
If you are using [TypedSQL](https://www.prisma.io/typedsql), you'll need to enable it via the `typedSql` option:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { prismaExtension } from "@trigger.dev/build/extensions/prisma";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [
prismaExtension({
schema: "prisma/schema.prisma",
typedSql: true,
}),
],
},
});
```
The `prismaExtension` will inject the `DATABASE_URL` environment variable into the build process. Learn more about setting environment variables for deploying in our [Environment Variables](/deploy-environment-variables) guide.
These environment variables are only used during the build process and are not embedded in the final container image.
#### syncEnvVars
The `syncEnvVars` build extension replaces the deprecated `resolveEnvVars` export. Check out our [syncEnvVars documentation](/deploy-environment-variables#sync-env-vars-from-another-service) for more information.
```ts
import { syncEnvVars } from "@trigger.dev/build/extensions/core";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [syncEnvVars()],
},
});
```
#### syncVercelEnvVars
The `syncVercelEnvVars` build extension syncs environment variables from your Vercel project to Trigger.dev.
You need to set the `VERCEL_ACCESS_TOKEN` and `VERCEL_PROJECT_ID` environment variables, or pass
in the token and project ID as arguments to the `syncVercelEnvVars` build extension. If you're
working with a team project, you'll also need to set `VERCEL_TEAM_ID`, which can be found in your
team settings. You can find / generate the `VERCEL_ACCESS_TOKEN` in your Vercel
[dashboard](https://vercel.com/account/settings/tokens). Make sure the scope of the token covers
the project with the environment variables you want to sync.
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { syncVercelEnvVars } from "@trigger.dev/build/extensions/core";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [syncVercelEnvVars()],
},
});
```
#### audioWaveform
Previously, we installed [Audio Waveform](https://github.com/bbc/audiowaveform) in the build image. That's been moved to a build extension:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { audioWaveform } from "@trigger.dev/build/extensions/audioWaveform";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [audioWaveform()], // uses version 1.1.0 of audiowaveform by default
},
});
```
#### puppeteer
**WEB SCRAPING:** When web scraping, you MUST use a proxy to comply with our terms of service. Direct scraping of third-party websites without the site owner's permission using Trigger.dev Cloud is prohibited and will result in account suspension. See [this example](/guides/examples/puppeteer#scrape-content-from-a-web-page) which uses a proxy.
To use Puppeteer in your project, add these build settings to your `trigger.config.ts` file:
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { puppeteer } from "@trigger.dev/build/extensions/puppeteer";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [puppeteer()],
},
});
```
And add the following environment variable in your Trigger.dev dashboard on the Environment Variables page:
```bash
PUPPETEER_EXECUTABLE_PATH="/usr/bin/google-chrome-stable"
```
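Puppeteer picks up `PUPPETEER_EXECUTABLE_PATH` from the environment, so no extra launch configuration is needed. A minimal sketch (remember the proxy requirement from the warning above when scraping):
```ts
import puppeteer from "puppeteer";
import { task } from "@trigger.dev/sdk/v3";

export const pageTitle = task({
  id: "page-title",
  run: async (payload: { url: string }) => {
    const browser = await puppeteer.launch();
    try {
      const page = await browser.newPage();
      await page.goto(payload.url);
      return await page.title();
    } finally {
      await browser.close();
    }
  },
});
```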
Follow [this example](/guides/examples/puppeteer) to get set up with Trigger.dev and Puppeteer in your project.
#### ffmpeg
You can add the `ffmpeg` build extension to your build process:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { ffmpeg } from "@trigger.dev/build/extensions/core";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [ffmpeg()],
},
});
```
By default, this will install the version of `ffmpeg` that is available in the Debian package manager. If you need a specific version, you can pass in the version as an argument:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { ffmpeg } from "@trigger.dev/build/extensions/core";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [ffmpeg({ version: "6.0-4" })],
},
});
```
This extension will also add the `FFMPEG_PATH` and `FFPROBE_PATH` to your environment variables, making it easy to use popular ffmpeg libraries like `fluent-ffmpeg`.
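For example, `fluent-ffmpeg` reads `FFMPEG_PATH` and `FFPROBE_PATH` automatically, so a task can use it without extra configuration. A minimal sketch (the input URL and output path are illustrative):
```ts
import ffmpeg from "fluent-ffmpeg";
import { task } from "@trigger.dev/sdk/v3";

export const transcodeVideo = task({
  id: "transcode-video",
  run: async (payload: { inputUrl: string }) => {
    const outputPath = "/tmp/output.mp4";
    // fluent-ffmpeg finds the binaries via FFMPEG_PATH and FFPROBE_PATH
    await new Promise<void>((resolve, reject) => {
      ffmpeg(payload.inputUrl)
        .output(outputPath)
        .on("end", () => resolve())
        .on("error", reject)
        .run();
    });
    return { outputPath };
  },
});
```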
Follow [this example](/guides/examples/ffmpeg-video-processing) to get set up with Trigger.dev and FFmpeg in your project.
#### esbuild plugins
You can easily add existing or custom esbuild plugins to your build process using the `esbuildPlugin` extension:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { esbuildPlugin } from "@trigger.dev/build/extensions";
import { sentryEsbuildPlugin } from "@sentry/esbuild-plugin";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [
esbuildPlugin(
sentryEsbuildPlugin({
org: process.env.SENTRY_ORG,
project: process.env.SENTRY_PROJECT,
authToken: process.env.SENTRY_AUTH_TOKEN,
}),
// optional - only runs during the deploy command, and adds the plugin to the end of the list of plugins
{ placement: "last", target: "deploy" }
),
],
},
});
```
#### aptGet
You can install system packages into the deployed image using the `aptGet` extension:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { aptGet } from "@trigger.dev/build/extensions/core";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [aptGet({ packages: ["ffmpeg"] })],
},
});
```
If you want to install a specific version of a package, you can specify the version like this:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { aptGet } from "@trigger.dev/build/extensions/core";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [aptGet({ packages: ["ffmpeg=6.0-4"] })],
},
});
```
#### Custom extensions
You can create your own extensions to further customize the build process. Extensions are objects with a `name` and zero or more lifecycle hooks (`onBuildStart` and `onBuildComplete`) that let you modify the `BuildContext` object passed to the build process, for example by adding layers. This is how the `aptGet` extension is implemented:
```ts
import { BuildExtension } from "@trigger.dev/core/v3/build";
export type AptGetOptions = {
packages: string[];
};
export function aptGet(options: AptGetOptions): BuildExtension {
return {
name: "aptGet",
onBuildComplete(context) {
if (context.target === "dev") {
return;
}
context.logger.debug("Adding apt-get layer", {
pkgs: options.packages,
});
context.addLayer({
id: "apt-get",
image: {
pkgs: options.packages,
},
});
},
};
}
```
Instead of creating this function and worrying about types, you can define an extension inline in your `trigger.config.ts` file:
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [
{
name: "aptGet",
onBuildComplete(context) {
if (context.target === "dev") {
return;
}
context.logger.debug("Adding apt-get layer", {
pkgs: ["ffmpeg"],
});
context.addLayer({
id: "apt-get",
image: {
pkgs: ["ffmpeg"],
},
});
},
},
],
},
});
```
We'll be expanding the documentation on how to create custom extensions in the future. For now, you are encouraged to look at the existing extensions in the `@trigger.dev/build` package for inspiration, which you can see in our repo [here](https://github.com/triggerdotdev/trigger.dev/tree/main/packages/build/src/extensions).
# Build extensions
Customize how your project is built and deployed to Trigger.dev with build extensions
Build extensions allow you to hook into the build system and customize the build process or the resulting bundle and container image (in the case of deploying). See our [trigger.config.ts reference](/config/config-file#extensions) for more information on how to install and use our built-in extensions. Build extensions can do the following:
* Add additional files to the build
* Add dependencies to the list of externals
* Add esbuild plugins
* Add additional npm dependencies
* Add additional system packages to the image build container
* Add commands to run in the image build container
* Add environment variables to the image build container
* Sync environment variables to your Trigger.dev project
## Creating a build extension
Build extensions are added to your `trigger.config.ts` file, with a required `name` and optional build hook functions. Here's a simple example of a build extension that just logs a message when the build starts:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "my-project",
build: {
extensions: [
{
name: "my-extension",
onBuildStart: async (context) => {
console.log("Build starting!");
},
},
],
},
});
```
You can also extract that out into a function instead of defining it inline, in which case you will need to import the `BuildExtension` type from the `@trigger.dev/build` package:
You'll need to add the `@trigger.dev/build` package to your `devDependencies` before the below
code will work. Make sure its version matches that of the installed `@trigger.dev/sdk` package.
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { BuildExtension } from "@trigger.dev/build";
export default defineConfig({
project: "my-project",
build: {
extensions: [myExtension()],
},
});
function myExtension(): BuildExtension {
return {
name: "my-extension",
onBuildStart: async (context) => {
console.log("Build starting!");
},
};
}
```
## Build hooks
### externalsForTarget
This allows the extension to add additional dependencies to the list of externals for the build. This is useful for dependencies that are not included in the bundle, but are expected to be available at runtime.
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "my-project",
build: {
extensions: [
{
name: "my-extension",
externalsForTarget: async (target) => {
return ["my-dependency"];
},
},
],
},
});
```
### onBuildStart
This hook runs before the build starts. It receives the `BuildContext` object as an argument.
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "my-project",
build: {
extensions: [
{
name: "my-extension",
onBuildStart: async (context) => {
console.log("Build starting!");
},
},
],
},
});
```
If you want to add an esbuild plugin, you must do so in the `onBuildStart` hook. Here's an example of adding a custom esbuild plugin:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "my-project",
build: {
extensions: [
{
name: "my-extension",
onBuildStart: async (context) => {
context.registerPlugin({
name: "my-plugin",
setup(build) {
build.onLoad({ filter: /.*/, namespace: "file" }, async (args) => {
return {
contents: "console.log('Hello, world!')",
loader: "js",
};
});
},
});
},
},
],
},
});
```
You can use the `BuildContext.target` property to determine if the build is for `dev` or `deploy`:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "my-project",
build: {
extensions: [
{
name: "my-extension",
onBuildStart: async (context) => {
if (context.target === "dev") {
console.log("Building for dev");
} else {
console.log("Building for deploy");
}
},
},
],
},
});
```
### onBuildComplete
This hook runs after the build completes. It receives the `BuildContext` object and a `BuildManifest` object as arguments. This is where you can add one or more `BuildLayer`s to the context.
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  project: "my-project",
  build: {
    extensions: [
      {
        name: "my-extension",
        onBuildComplete: async (context, manifest) => {
          context.addLayer({
            id: "more-dependencies",
            dependencies: {
              // illustrative package name and version
              "my-extra-package": "^1.0.0",
            },
          });
        },
      },
    ],
  },
});
```
See the [addLayer](#addlayer) documentation for more information on how to use `addLayer`.
## BuildTarget
Can either be `dev` or `deploy`, matching the CLI command name that is being run.
```sh
npx trigger.dev@latest dev # BuildTarget is "dev"
npx trigger.dev@latest deploy # BuildTarget is "deploy"
```
## BuildContext
### addLayer()
The layer to add to the build context. See the [BuildLayer](#buildlayer) documentation for more information.
### registerPlugin()
* The esbuild plugin to register.
* An optional target to register the plugin for. If not provided, the plugin will be registered for all targets.
* An optional placement for the plugin. If not provided, the plugin will be registered in place. This allows you to control the order of plugins.
### resolvePath()
Resolves a path relative to the project's working directory.
The path to resolve.
```ts
const resolvedPath = context.resolvePath("my-other-dependency");
```
### Properties
* The target of the build, either `dev` or `deploy`.
* The runtime of the project (either node or bun).
* The project ref.
* The trigger directories to search for tasks.
* The build configuration object.
* The working directory of the project.
* The root workspace directory of the project.
* The path to the package.json file.
* The path to the lockfile (package-lock.json, yarn.lock, or pnpm-lock.yaml).
* The path to the trigger.config.ts file.
* The path to the tsconfig.json file.
* A logger object that can be used to log messages to the console.
## BuildLayer
A unique identifier for the layer.
An array of commands to run in the image build container.
```ts
commands: ["echo 'Hello, world!'"];
```
These commands are run after packages have been installed and the code copied into the container in the "build" stage of the Dockerfile. This means you cannot install system packages in these commands because they won't be available in the final stage. To do that, please use the `pkgs` property of the `image` object.
An array of system packages to install in the image build container.
An array of instructions to add to the Dockerfile.
Environment variables to add to the image build container, but only during the "build" stage
of the Dockerfile. This is where you'd put environment variables that are needed when running
any of the commands in the `commands` array.
Environment variables that should sync to the Trigger.dev project, which will then be available
in your tasks at runtime. Importantly, these are NOT added to the image build container, but
are instead added to the Trigger.dev project and stored securely.
An object of dependencies to add to the build. The key is the package name and the value is the
version.
```ts
dependencies: {
"my-dependency": "^1.0.0",
};
```
### Examples
Add a command that will echo the value of an environment variable:
```ts
context.addLayer({
id: "my-layer",
commands: [`echo $MY_ENV_VAR`],
build: {
env: {
MY_ENV_VAR: "Hello, world!",
},
},
});
```
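Install a system package through the image instead of `commands`, mirroring the `aptGet` extension shown earlier (system packages must go in `image.pkgs` so they survive into the final stage):
```ts
context.addLayer({
  id: "image-packages",
  image: {
    // Installed during the image build, available in the final image
    pkgs: ["imagemagick"],
  },
});
```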
## Troubleshooting
When creating a build extension, you may run into issues with the build process. One thing that can help is turning on `debug` logging when running either `dev` or `deploy`:
```sh
npx trigger.dev@latest dev --log-level debug
npx trigger.dev@latest deploy --log-level debug
```
Another helpful tool is the `--dry-run` flag on the `deploy` command, which will bundle your project and generate the Containerfile (e.g. the Dockerfile) without actually deploying it. This can help you see what the final image will look like and debug any issues with the build process.
```sh
npx trigger.dev@latest deploy --dry-run
```
You should also take a look at our built-in extensions for inspiration on how to create your own. You can find them in [the source code here](https://github.com/triggerdotdev/trigger.dev/tree/main/packages/build/src/extensions).
# Context
Get the context of a task run.
Context (`ctx`) is a way to get information about a run.
The context object does not change whilst your code is executing. This means values like `ctx.run.durationMs` will be fixed at the moment the `run()` function is called.
```typescript Context example
import { task } from "@trigger.dev/sdk/v3";
export const parentTask = task({
id: "parent-task",
run: async (payload: { message: string }, { ctx }) => {
if (ctx.environment.type === "DEVELOPMENT") {
return;
}
},
});
```
## Context properties
* The exported function name of the task, e.g. `myTask` if you defined it like this: `export const myTask = task(...)`.
* The ID of the task.
* The file path of the task.
* The ID of the execution attempt.
* The attempt number.
* The start time of the attempt.
* The ID of the background worker.
* The ID of the background worker task.
* The current status of the attempt.
* The ID of the task run.
* The context of the task run.
* An array of [tags](/tags) associated with the task run.
* Whether this is a [test run](/run-tests).
* The creation time of the task run.
* The start time of the task run.
* An optional [idempotency key](/idempotency) for the task run.
* The [maximum number of attempts](/triggering#maxattempts) allowed for this task run.
* The duration of the task run in milliseconds when the `run()` function is called. For live values use the [usage SDK functions](/run-usage).
* The cost of the task run in cents when the `run()` function is called. For live values use the [usage SDK functions](/run-usage).
* The base cost of the task run in cents when the `run()` function is called. For live values use the [usage SDK functions](/run-usage).
* The [version](/versioning) of the task run.
* The [maximum allowed duration](/runs/max-duration) for the task run.
* The ID of the queue.
* The name of the queue.
* The ID of the environment.
* The slug of the environment.
* The type of the environment (PRODUCTION, STAGING, DEVELOPMENT, or PREVIEW).
* The ID of the organization.
* The slug of the organization.
* The name of the organization.
* The ID of the project.
* The reference of the project.
* The slug of the project.
* The name of the project.
* Optional information about the batch, if applicable:
  * The ID of the batch.
* Optional information about the machine preset used for execution:
  * The name of the machine preset.
  * The CPU allocation for the machine.
  * The memory allocation for the machine.
  * The cost in cents per millisecond for this machine preset.
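A short sketch pulling a few of these fields off `ctx` (using names confirmed elsewhere in these docs, like `ctx.task.id` and `ctx.environment.type`):
```ts
import { task } from "@trigger.dev/sdk/v3";

export const ctxDemo = task({
  id: "ctx-demo",
  run: async (_payload: unknown, { ctx }) => {
    // Log which task, run, and environment this attempt belongs to
    console.log(ctx.task.id, ctx.run.id, ctx.environment.type);
  },
});
```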
# Environment Variables
Any environment variables used in your tasks need to be added so the deployed code will run successfully.
An environment variable in Node.js is accessed in your code using `process.env.MY_ENV_VAR`.
We deploy your tasks and scale them up and down when they are triggered, so any environment variables you use in your tasks need to be accessible to us for your code to run successfully.
## In the dashboard
### Setting environment variables
In the sidebar select the "Environment Variables" page, then press the "New environment variable" button.
![Environment variables page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/environment-variables-page.jpg)
You can add values for your local dev environment, staging, and prod.
![Environment variables page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/environment-variables-panel.jpg)
Specifying Dev values is optional. They will be overridden by values in your .env file when running locally.
### Editing environment variables
You can edit an environment variable's values. You cannot edit the key name, you must delete and create a new one.
![Environment variables page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/environment-variables-actions.png)
![Environment variables page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/environment-variables-edit-popover.png)
### Deleting environment variables
Environment variables are fetched and injected before a run begins, so deleting one can cause runs that expect it to fail.
![Environment variables page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/environment-variables-actions.png)
This will immediately delete the variable.
![Environment variables page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/environment-variables-delete-popover.png)
## In your code
You can use our SDK to get and manipulate environment variables. You can also easily sync environment variables from another service into Trigger.dev.
### Directly manipulating environment variables
We have a complete set of SDK functions (and REST API) you can use to directly manipulate environment variables.
| Function | Description |
| -------------------------------------------------- | ----------------------------------------------------------- |
| [envvars.list()](/management/envvars/list) | List all environment variables |
| [envvars.upload()](/management/envvars/import) | Upload multiple env vars. You can override existing values. |
| [envvars.create()](/management/envvars/create) | Create a new environment variable |
| [envvars.retrieve()](/management/envvars/retrieve) | Retrieve an environment variable |
| [envvars.update()](/management/envvars/update) | Update a single environment variable |
| [envvars.del()](/management/envvars/delete) | Delete a single environment variable |
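For example, listing and creating variables with the SDK might look like this. A sketch only: the project ref, environment slug, and values are illustrative, and the exact signatures are documented in the pages linked above:
```ts
import { envvars } from "@trigger.dev/sdk/v3";

// List all env vars for an environment
const vars = await envvars.list("proj_yourprojectref", "prod");

// Create a new env var in that environment
await envvars.create("proj_yourprojectref", "prod", {
  name: "SLACK_API_KEY",
  value: "slack_123456",
});
```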
### Sync env vars from another service
You could use the SDK functions above but it's much easier to use our `syncEnvVars` build extension in your `trigger.config` file.
To use the `syncEnvVars` build extension, you should first install the `@trigger.dev/build`
package into your devDependencies.
In this example we're using env vars from [Infisical](https://infisical.com).
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { syncEnvVars } from "@trigger.dev/build/extensions/core";
import { InfisicalClient } from "@infisical/sdk";
export default defineConfig({
build: {
extensions: [
syncEnvVars(async (ctx) => {
const client = new InfisicalClient({
clientId: process.env.INFISICAL_CLIENT_ID,
clientSecret: process.env.INFISICAL_CLIENT_SECRET,
});
const secrets = await client.listSecrets({
environment: ctx.environment,
projectId: process.env.INFISICAL_PROJECT_ID!,
});
return secrets.map((secret) => ({
name: secret.secretKey,
value: secret.secretValue,
}));
}),
],
},
});
```
#### Syncing environment variables from Vercel
To sync environment variables from your Vercel projects to Trigger.dev, you can use our build extension. Check out our [syncing environment variables from Vercel guide](/guides/examples/vercel-sync-env-vars).
#### Deploy
When you run the [CLI deploy command](/cli-deploy) directly or using [GitHub Actions](/github-actions) it will sync the environment variables from [Infisical](https://infisical.com) to Trigger.dev. This means they'll appear on the Environment Variables page so you can confirm that it's worked.
This means that you need to redeploy your Trigger.dev tasks if you change the environment variables in [Infisical](https://infisical.com).
The `process.env.INFISICAL_CLIENT_ID`, `process.env.INFISICAL_CLIENT_SECRET` and
`process.env.INFISICAL_PROJECT_ID` will need to be supplied to the `deploy` CLI command. You can
do this via the `--env-file .env` flag or by setting them as environment variables in your
terminal.
#### Dev
`syncEnvVars` does not have any effect when running the `dev` command locally. If you want to inject environment variables from another service into your local environment you can do so via a `.env` file or just supplying them as environment variables in your terminal. Most services will have a CLI tool that allows you to run a command with environment variables set:
```sh
infisical run -- npx trigger.dev@latest dev
```
Any environment variables set in the CLI command will be available to your local Trigger.dev tasks.
### The syncEnvVars callback return type
You can return env vars as an object with string keys and values, or an array of names + values.
```ts
return {
MY_ENV_VAR: "my value",
MY_OTHER_ENV_VAR: "my other value",
};
```
or
```ts
return [
{
name: "MY_ENV_VAR",
value: "my value",
},
{
name: "MY_OTHER_ENV_VAR",
value: "my other value",
},
];
```
This means that for most secrets services you won't need to convert the data into a different format.
### Using Google credential JSON files
Securely pass a Google credential JSON file to your Trigger.dev task using environment variables.
In your terminal, run the following command and copy the resulting base64 string:
```bash
base64 path/to/your/service-account-file.json
```
Follow [these steps](/deploy-environment-variables) to set a new environment variable using the base64 string as the value.
```bash .env
GOOGLE_CREDENTIALS_BASE64=""
```
Add the following code to your Trigger.dev task:
```ts
import { google } from "googleapis";
const credentials = JSON.parse(
  Buffer.from(process.env.GOOGLE_CREDENTIALS_BASE64!, "base64").toString("utf8")
);
const auth = new google.auth.GoogleAuth({
credentials,
scopes: ["https://www.googleapis.com/auth/cloud-platform"],
});
const client = await auth.getClient();
```
You can now use the `client` object to make authenticated requests to Google APIs.
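For example, the client's generic `request` method (from the underlying `google-auth-library`) can call any REST endpoint covered by the scopes above; the Cloud Resource Manager URL here is just an illustration:

```ts
// List the GCP projects visible to the service account (illustrative endpoint)
const res = await client.request({
  url: "https://cloudresourcemanager.googleapis.com/v1/projects",
});
console.log(res.data);
```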
# Errors & Retrying
How to deal with errors and write reliable tasks.
When an uncaught error is thrown inside your task, that task attempt will fail.
You can configure retrying in two ways:
1. In your [trigger.config file](/config/config-file) you can set the default retrying behavior for all tasks.
2. On each task you can set the retrying behavior.
By default, when you create your project using the CLI init command, retrying is disabled in the DEV
environment. You can enable it in your [trigger.config file](/config/config-file).
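For reference, here's a sketch of what those defaults can look like in `trigger.config.ts` (the exact option names are documented in the [config file docs](/config/config-file)):

```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  project: "<project ref>",
  retries: {
    // Opt in to retrying while running `dev` locally
    enabledInDev: true,
    // Defaults for every task; individual tasks can override these
    default: {
      maxAttempts: 3,
      minTimeoutInMs: 1000,
      maxTimeoutInMs: 10_000,
      factor: 2,
      randomize: true,
    },
  },
});
```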
## A simple example with OpenAI
This task will retry 10 times with exponential backoff.
* `openai.chat.completions.create()` can throw an error.
* The result can be empty and we want to try again. So we manually throw an error.
```ts /trigger/openai.ts
import { task } from "@trigger.dev/sdk/v3";
import OpenAI from "openai";
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
export const openaiTask = task({
id: "openai-task",
//specifying retry options overrides the defaults defined in your trigger.config file
retry: {
maxAttempts: 10,
factor: 1.8,
minTimeoutInMs: 500,
maxTimeoutInMs: 30_000,
randomize: false,
},
run: async (payload: { prompt: string }) => {
//if this fails, it will throw an error and retry
const chatCompletion = await openai.chat.completions.create({
messages: [{ role: "user", content: payload.prompt }],
model: "gpt-3.5-turbo",
});
if (chatCompletion.choices[0]?.message.content === undefined) {
//sometimes OpenAI returns an empty response, let's retry by throwing an error
throw new Error("OpenAI call failed");
}
return chatCompletion.choices[0].message.content;
},
});
```
## Combining tasks
One way to gain reliability is to break your work into smaller tasks and [trigger](/triggering) them from each other. Each task can have its own retrying behavior:
```ts /trigger/multiple-tasks.ts
import { task } from "@trigger.dev/sdk/v3";
export const myTask = task({
id: "my-task",
retry: {
maxAttempts: 10,
},
run: async (payload: string) => {
const result = await otherTask.triggerAndWait("some data");
//...do other stuff
},
});
export const otherTask = task({
id: "other-task",
retry: {
maxAttempts: 5,
},
run: async (payload: string) => {
return {
foo: "bar",
};
},
});
```
Another benefit of this approach is that you can view the logs and retry each task independently from the dashboard.
## Retrying smaller parts of a task
Another complementary strategy is to perform retrying inside your task.
We provide some useful functions that you can use to retry smaller parts of a task. Of course, you can also write your own logic or use other packages.
### retry.onThrow()
You can retry a block of code that can throw an error, with the same retry settings as a task.
```ts /trigger/retry-on-throw.ts
import { task, logger, retry } from "@trigger.dev/sdk/v3";
export const retryOnThrow = task({
id: "retry-on-throw",
run: async (payload: any) => {
//Will retry up to 3 times. If it fails 3 times it will throw.
const result = await retry.onThrow(
async ({ attempt }) => {
//throw on purpose the first 2 times, obviously this is a contrived example
if (attempt < 3) throw new Error("failed");
//...
return {
foo: "bar",
};
},
{ maxAttempts: 3, randomize: false }
);
//this will log out after 3 attempts of retry.onThrow
logger.info("Result", { result });
},
});
```
If all of the attempts with `retry.onThrow` fail, an error will be thrown. You can catch this or
let it cause a retry of the entire task.
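For example, here's a sketch of catching the final failure yourself (`fetchFlakyData` is a hypothetical helper):

```ts
try {
  const data = await retry.onThrow(async () => fetchFlakyData(), { maxAttempts: 3 });
  // ...use data
} catch (error) {
  // All 3 attempts failed. Handle the error here to prevent the whole task retrying,
  // or rethrow it to hand control back to the task's retry settings.
}
```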
### retry.fetch()
You can use `fetch`, `axios`, or any other library in your code.
But we do provide a convenient function to perform HTTP requests with conditional retrying based on the response:
```ts /trigger/retry-fetch.ts
import { task, logger, retry } from "@trigger.dev/sdk/v3";
export const taskWithFetchRetries = task({
id: "task-with-fetch-retries",
  run: async (payload: any, { ctx }) => {
//if the Response is a 429 (too many requests), it will retry using the data from the response. A lot of good APIs send these headers.
const headersResponse = await retry.fetch("http://my.host/test-headers", {
retry: {
byStatus: {
"429": {
strategy: "headers",
limitHeader: "x-ratelimit-limit",
remainingHeader: "x-ratelimit-remaining",
resetHeader: "x-ratelimit-reset",
resetFormat: "unix_timestamp_in_ms",
},
},
},
});
const json = await headersResponse.json();
logger.info("Fetched headers response", { json });
//if the Response is a 500-599 (issue with the server you're calling), it will retry up to 10 times with exponential backoff
const backoffResponse = await retry.fetch("http://my.host/test-backoff", {
retry: {
byStatus: {
"500-599": {
strategy: "backoff",
maxAttempts: 10,
factor: 2,
minTimeoutInMs: 1_000,
maxTimeoutInMs: 30_000,
randomize: false,
},
},
},
});
const json2 = await backoffResponse.json();
logger.info("Fetched backoff response", { json2 });
//You can additionally specify a timeout. In this case if the response takes longer than 1 second, it will retry up to 5 times with exponential backoff
const timeoutResponse = await retry.fetch("https://httpbin.org/delay/2", {
timeoutInMs: 1000,
retry: {
timeout: {
maxAttempts: 5,
factor: 1.8,
minTimeoutInMs: 500,
maxTimeoutInMs: 30_000,
randomize: false,
},
},
});
const json3 = await timeoutResponse.json();
logger.info("Fetched timeout response", { json3 });
return {
result: "success",
payload,
json,
json2,
json3,
};
},
});
```
If all of the attempts with `retry.fetch` fail, an error will be thrown. You can catch this or let
it cause a retry of the entire task.
## Advanced error handling and retrying
We provide a `handleError` callback on the task and in your `trigger.config` file. This gets called when an uncaught error is thrown in your task.
You can:
* Inspect the error, log it, and return a different error if you'd like.
* Modify the retrying behavior based on the error, payload, context, etc.
If you don't return anything from the function it will use the settings on the task (or inherited from the config). So you only need to use this to override things.
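Here's a minimal sketch of the shape of the callback before the more involved example below (the `TypeError` check is purely illustrative):

```ts
import { task } from "@trigger.dev/sdk/v3";

export const mySensitiveTask = task({
  id: "my-sensitive-task",
  run: async (payload: { url: string }) => {
    // ...
  },
  handleError: async (payload, error, { ctx, retryAt }) => {
    // A programming error won't be fixed by retrying, so skip retrying (illustrative check)
    if (error instanceof TypeError) {
      return { skipRetrying: true };
    }
    // Returning nothing falls back to the task's normal retry settings
  },
});
```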
### OpenAI error handling example
OpenAI calls can fail for a lot of reasons and the ideal retry behavior is different for each.
In this complicated example:
* We skip retrying if there's no Response status.
* We skip retrying if you've run out of credits.
* If there are no Response headers we let the normal retrying logic handle it (return undefined).
* If we've run out of requests or tokens we retry at the time specified in the headers.
```ts tasks.ts
import { task } from "@trigger.dev/sdk/v3";
import { OpenAI } from "openai";
import { calculateISO8601DurationOpenAIVariantResetAt, openai } from "./openai.js";
export const openaiTask = task({
id: "openai-task",
retry: {
maxAttempts: 1,
},
run: async (payload: { prompt: string }) => {
const chatCompletion = await openai.chat.completions.create({
messages: [{ role: "user", content: payload.prompt }],
model: "gpt-3.5-turbo",
});
return chatCompletion.choices[0].message.content;
},
handleError: async (payload, error, { ctx, retryAt }) => {
if (error instanceof OpenAI.APIError) {
if (!error.status) {
return {
skipRetrying: true,
};
}
if (error.status === 429 && error.type === "insufficient_quota") {
return {
skipRetrying: true,
};
}
if (!error.headers) {
//returning undefined means the normal retrying logic will be used
return;
}
const remainingRequests = error.headers["x-ratelimit-remaining-requests"];
const requestResets = error.headers["x-ratelimit-reset-requests"];
if (typeof remainingRequests === "string" && Number(remainingRequests) === 0) {
return {
retryAt: calculateISO8601DurationOpenAIVariantResetAt(requestResets),
};
}
const remainingTokens = error.headers["x-ratelimit-remaining-tokens"];
const tokensResets = error.headers["x-ratelimit-reset-tokens"];
if (typeof remainingTokens === "string" && Number(remainingTokens) === 0) {
return {
retryAt: calculateISO8601DurationOpenAIVariantResetAt(tokensResets),
};
}
}
},
});
```
```ts openai.ts
import { OpenAI } from "openai";
export const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
export function calculateISO8601DurationOpenAIVariantResetAt(
resets: string,
now: Date = new Date()
): Date | undefined {
// Check if the input is null or undefined
if (!resets) return undefined;
// Regular expression to match the duration string pattern
const pattern = /^(?:(\d+)d)?(?:(\d+)h)?(?:(\d+)m)?(?:(\d+(?:\.\d+)?)s)?(?:(\d+)ms)?$/;
const match = resets.match(pattern);
// If the string doesn't match the expected format, return undefined
if (!match) return undefined;
// Extract days, hours, minutes, seconds, and milliseconds from the string
const days = parseInt(match[1] ?? "0", 10) || 0;
const hours = parseInt(match[2] ?? "0", 10) || 0;
const minutes = parseInt(match[3] ?? "0", 10) || 0;
const seconds = parseFloat(match[4] ?? "0") || 0;
const milliseconds = parseInt(match[5] ?? "0", 10) || 0;
// Calculate the future date based on the current date plus the extracted time
const resetAt = new Date(now);
resetAt.setDate(resetAt.getDate() + days);
resetAt.setHours(resetAt.getHours() + hours);
resetAt.setMinutes(resetAt.getMinutes() + minutes);
resetAt.setSeconds(resetAt.getSeconds() + Math.floor(seconds));
resetAt.setMilliseconds(
resetAt.getMilliseconds() + (seconds - Math.floor(seconds)) * 1000 + milliseconds
);
return resetAt;
}
```
## Preventing retries
### Using `AbortTaskRunError`
You can prevent retries by throwing an `AbortTaskRunError`. This will fail the task attempt and disable retrying.
```ts /trigger/myTasks.ts
import { task, AbortTaskRunError } from "@trigger.dev/sdk/v3";
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
export const openaiTask = task({
id: "openai-task",
run: async (payload: { prompt: string }) => {
//if this fails, it will throw an error and stop retrying
const chatCompletion = await openai.chat.completions.create({
messages: [{ role: "user", content: payload.prompt }],
model: "gpt-3.5-turbo",
});
if (chatCompletion.choices[0]?.message.content === undefined) {
// If OpenAI returns an empty response, abort retrying
throw new AbortTaskRunError("OpenAI call failed");
}
return chatCompletion.choices[0].message.content;
},
});
```
### Using try/catch
Sometimes you want to catch an error and don't want to retry the task. You can use try/catch as you normally would. In this example we fallback to using Replicate if OpenAI fails.
```ts /trigger/myTasks.ts
import { task } from "@trigger.dev/sdk/v3";
import OpenAI from "openai";
import Replicate from "replicate";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN });
export const openaiTask = task({
id: "openai-task",
run: async (payload: { prompt: string }) => {
try {
//if this fails, it will throw an error and retry
const chatCompletion = await openai.chat.completions.create({
messages: [{ role: "user", content: payload.prompt }],
model: "gpt-3.5-turbo",
});
if (chatCompletion.choices[0]?.message.content === undefined) {
//sometimes OpenAI returns an empty response, let's retry by throwing an error
throw new Error("OpenAI call failed");
}
return chatCompletion.choices[0].message.content;
} catch (error) {
//use Replicate if OpenAI fails
      const output = await replicate.run(
        "meta/llama-2-70b-chat:02e509c789964a7ea8736978a43525956ef40397be9033abf9fd2badfe68c9e3",
        {
          input: {
            prompt: payload.prompt,
            max_new_tokens: 250,
          },
        }
      );
      if (output === undefined) {
        //retry if Replicate fails
        throw new Error("Replicate call failed");
      }
      return output;
}
},
});
```
# Overview & Authentication
Using the Trigger.dev SDK from your frontend application.
You can use certain SDK functions in your frontend application to interact with the Trigger.dev API. This guide will show you how to authenticate your requests and use the SDK in your frontend application.
## Authentication
You must authenticate your requests using a "Public Access Token" when using the SDK in your frontend application. To create a Public Access Token, you can use the `auth.createPublicToken` function in your backend code:
```ts
import { auth } from "@trigger.dev/sdk/v3";

const publicToken = await auth.createPublicToken();
```
To use a Public Access Token in your frontend application, you can call the `auth.configure` function or the `auth.withAuth` function:
```ts
import { auth } from "@trigger.dev/sdk/v3";
auth.configure({
accessToken: publicToken,
});
// or
await auth.withAuth({ accessToken: publicToken }, async () => {
// Your code here will use the public token
});
```
### Scopes
By default a Public Access Token has limited permissions. You can specify the scopes you need when creating a Public Access Token:
```ts
const publicToken = await auth.createPublicToken({
scopes: {
read: {
runs: true,
},
},
});
```
This will allow the token to read all runs, which is probably not what you want. You can specify only certain runs by passing an array of run IDs:
```ts
const publicToken = await auth.createPublicToken({
scopes: {
read: {
runs: ["run_1234", "run_5678"],
},
},
});
```
You can scope the token to only read certain tasks:
```ts
const publicToken = await auth.createPublicToken({
scopes: {
read: {
tasks: ["my-task-1", "my-task-2"],
},
},
});
```
Or tags:
```ts
const publicToken = await auth.createPublicToken({
scopes: {
read: {
tags: ["my-tag-1", "my-tag-2"],
},
},
});
```
Or a specific batch of runs:
```ts
const publicToken = await auth.createPublicToken({
scopes: {
read: {
batch: "batch_1234",
},
},
});
```
You can also combine scopes. For example, to read only certain tasks and tags:
```ts
const publicToken = await auth.createPublicToken({
scopes: {
read: {
tasks: ["my-task-1", "my-task-2"],
tags: ["my-tag-1", "my-tag-2"],
},
},
});
```
### Expiration
By default, Public Access Tokens expire after 15 minutes. You can specify a different expiration time when creating a Public Access Token:
```ts
const publicToken = await auth.createPublicToken({
expirationTime: "1hr",
});
```
* If `expirationTime` is a string, it will be treated as a time span
* If `expirationTime` is a number, it will be treated as a Unix timestamp
* If `expirationTime` is a `Date`, it will be treated as a date
The format used for a time span is the same as the [jose package](https://github.com/panva/jose), which is a number followed by a unit. Valid units are: "sec", "secs", "second", "seconds", "s", "minute", "minutes", "min", "mins", "m", "hour", "hours", "hr", "hrs", "h", "day", "days", "d", "week", "weeks", "w", "year", "years", "yr", "yrs", and "y". It is not possible to specify months. 365.25 days is used as an alias for a year. If the string is suffixed with "ago", or prefixed with a "-", the resulting time span gets subtracted from the current unix timestamp. A "from now" suffix can also be used for readability when adding to the current unix timestamp.
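Putting those three forms together (assuming, as with jose, that a numeric timestamp is in seconds):

```ts
// Time span string
const tokenA = await auth.createPublicToken({ expirationTime: "24h" });

// Unix timestamp (seconds)
const tokenB = await auth.createPublicToken({
  expirationTime: Math.floor(Date.now() / 1000) + 60 * 60,
});

// Date object
const tokenC = await auth.createPublicToken({
  expirationTime: new Date(Date.now() + 60 * 60 * 1000),
});
```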
## Auto-generated tokens
When triggering a task from your backend, the `handle` received from the `trigger` function now includes a `publicAccessToken` field. This token can be used to authenticate requests in your frontend application:
```ts
import { tasks } from "@trigger.dev/sdk/v3";
const handle = await tasks.trigger("my-task", { some: "data" });
console.log(handle.publicAccessToken);
```
By default, tokens returned from the `trigger` function expire after 15 minutes and have a read scope for that specific run and any tags associated with it. You can customize the expiration of the auto-generated tokens by passing a `publicAccessToken` option to the `trigger` function:
```ts
const handle = await tasks.trigger(
"my-task",
{ some: "data" },
{
tags: ["my-tag"],
},
{
publicAccessToken: {
expirationTime: "1hr",
},
}
);
```
You will also get back a Public Access Token when using the `batchTrigger` function:
```ts
import { tasks } from "@trigger.dev/sdk/v3";
const handle = await tasks.batchTrigger("my-task", [
{ payload: { some: "data" } },
{ payload: { some: "data" } },
{ payload: { some: "data" } },
]);
console.log(handle.publicAccessToken);
```
## Available SDK functions
Currently the following functions are available in the frontend SDK:
### runs.retrieve
The `runs.retrieve` function allows you to retrieve a run by its ID.
```ts
import { tasks, runs, auth } from "@trigger.dev/sdk/v3";
// Somewhere in your backend code
const handle = await tasks.trigger("my-task", { some: "data" });
// In your frontend code
auth.configure({
accessToken: handle.publicAccessToken,
});
const run = await runs.retrieve(handle.id);
```
Learn more about the `runs.retrieve` function in the [runs.retrieve doc](/management/runs/retrieve).
### runs.subscribeToRun
The `runs.subscribeToRun` function allows you to subscribe to a run by its ID, and receive updates in real-time when the run changes.
```ts
import { tasks, runs, auth } from "@trigger.dev/sdk/v3";
// Somewhere in your backend code
const handle = await tasks.trigger("my-task", { some: "data" });
// In your frontend code
auth.configure({
accessToken: handle.publicAccessToken,
});
for await (const run of runs.subscribeToRun(handle.id)) {
// This will log the run every time it changes
console.log(run);
}
```
See the [Realtime doc](/realtime) for more information.
### runs.subscribeToRunsWithTag
The `runs.subscribeToRunsWithTag` function allows you to subscribe to runs with a specific tag, and receive updates in real-time when the runs change.
```ts
import { tasks, runs, auth } from "@trigger.dev/sdk/v3";
// Somewhere in your backend code
const handle = await tasks.trigger("my-task", { some: "data" }, { tags: ["my-tag"] });
// In your frontend code
auth.configure({
accessToken: handle.publicAccessToken,
});
for await (const run of runs.subscribeToRunsWithTag("my-tag")) {
// This will log the run every time it changes
console.log(run);
}
```
See the [Realtime doc](/realtime) for more information.
## React hooks
We also provide React hooks to make it easier to use the SDK in your React application. See our [React hooks](/frontend/react-hooks) documentation for more information.
## Triggering tasks
We don't currently support triggering tasks from the frontend SDK. If this is something you need, please let us know by [upvoting the feature](https://feedback.trigger.dev/p/ability-to-trigger-tasks-from-frontend).
# React hooks
Using the Trigger.dev v3 API from your React application.
Our react hooks package provides a set of hooks that make it easy to interact with the Trigger.dev API from your React application, using our [frontend API](/frontend/overview). You can use these hooks to fetch runs, batches, and subscribe to real-time updates.
## Installation
Install the `@trigger.dev/react-hooks` package in your project:
```bash npm
npm add @trigger.dev/react-hooks
```
```bash pnpm
pnpm add @trigger.dev/react-hooks
```
```bash yarn
yarn add @trigger.dev/react-hooks
```
## Authentication
Before you can use the hooks, you need to provide a public access token to the `TriggerAuthContext` provider. Learn more about [authentication in the frontend guide](/frontend/overview).
```tsx
import { TriggerAuthContext } from "@trigger.dev/react-hooks";
export function SetupTrigger({ publicAccessToken }: { publicAccessToken: string }) {
  return (
    <TriggerAuthContext.Provider value={{ accessToken: publicAccessToken }}>
      {/* Your components that use our hooks */}
    </TriggerAuthContext.Provider>
  );
}
```
Now children components can use the hooks to interact with the Trigger.dev API. If you are self-hosting Trigger.dev, you can provide the `baseURL` to the `TriggerAuthContext` provider.
```tsx
import { TriggerAuthContext } from "@trigger.dev/react-hooks";
export function SetupTrigger({ publicAccessToken }: { publicAccessToken: string }) {
  return (
    <TriggerAuthContext.Provider
      value={{ accessToken: publicAccessToken, baseURL: "https://your-trigger-instance.com" }}
    >
      {/* Your components that use our hooks */}
    </TriggerAuthContext.Provider>
  );
}
```
### Next.js and client components
If you are using Next.js with the App Router, you have to make sure the component that uses the `TriggerAuthContext` is a client component. So for example, the following code will not work:
```tsx app/page.tsx
import { TriggerAuthContext } from "@trigger.dev/react-hooks";
export default function Page() {
  return (
    <TriggerAuthContext.Provider value={{ accessToken: "your-access-token" }}>
      {/* Your components that use our hooks */}
    </TriggerAuthContext.Provider>
  );
}
```
That's because `Page` is a server component and the `TriggerAuthContext.Provider` uses client-only react code. To fix this, wrap the `TriggerAuthContext.Provider` in a client component:
```tsx components/TriggerProvider.tsx
"use client";
import { TriggerAuthContext } from "@trigger.dev/react-hooks";
export function TriggerProvider({
accessToken,
children,
}: {
accessToken: string;
children: React.ReactNode;
}) {
  return (
    <TriggerAuthContext.Provider value={{ accessToken }}>
      {children}
    </TriggerAuthContext.Provider>
  );
}
```
### Passing the token to the frontend
Techniques for passing the token to the frontend vary depending on your setup. Here are a few ways to do it for different setups:
#### Next.js App Router
If you are using Next.js with the App Router and you are triggering a task from a server action, you can use cookies to store and pass the token to the frontend.
```tsx actions/trigger.ts
"use server";
import { tasks } from "@trigger.dev/sdk/v3";
import type { exampleTask } from "@/trigger/example";
import { redirect } from "next/navigation";
import { cookies } from "next/headers";
export async function startRun() {
  const handle = await tasks.trigger<typeof exampleTask>("example", { foo: "bar" });
// Set the auto-generated publicAccessToken in a cookie
cookies().set("publicAccessToken", handle.publicAccessToken);
redirect(`/runs/${handle.id}`);
}
```
Then in the `/runs/[id].tsx` page, you can read the token from the cookie and pass it to the `TriggerProvider`.
```tsx pages/runs/[id].tsx
import { TriggerProvider } from "@/components/TriggerProvider";
import { cookies } from "next/headers";

export default function RunPage({ params }: { params: { id: string } }) {
  const publicAccessToken = cookies().get("publicAccessToken");

  return (
    <TriggerProvider accessToken={publicAccessToken?.value ?? ""}>
      {/* Your components that use our hooks */}
    </TriggerProvider>
  );
}
```
Instead of a cookie, you could also use a query parameter to pass the token to the frontend:
```tsx actions/trigger.ts
"use server";

import { tasks } from "@trigger.dev/sdk/v3";
import type { exampleTask } from "@/trigger/example";
import { redirect } from "next/navigation";

export async function startRun() {
  const handle = await tasks.trigger<typeof exampleTask>("example", { foo: "bar" });
redirect(`/runs/${handle.id}?publicAccessToken=${handle.publicAccessToken}`);
}
```
And then in the `/runs/[id].tsx` page:
```tsx pages/runs/[id].tsx
import { TriggerProvider } from "@/components/TriggerProvider";
export default function RunPage({
params,
searchParams,
}: {
params: { id: string };
searchParams: { publicAccessToken: string };
}) {
  return (
    <TriggerProvider accessToken={searchParams.publicAccessToken}>
      {/* Your components that use our hooks */}
    </TriggerProvider>
  );
}
```
Another alternative would be to use a server-side rendered page to fetch the token and pass it to the frontend:
```tsx pages/runs/[id].tsx
import { TriggerProvider } from "@/components/TriggerProvider";
import { generatePublicAccessToken } from "@/trigger/auth";
export default async function RunPage({ params }: { params: { id: string } }) {
// This will be executed on the server only
const publicAccessToken = await generatePublicAccessToken(params.id);
  return (
    <TriggerProvider accessToken={publicAccessToken}>
      {/* Your components that use our hooks */}
    </TriggerProvider>
  );
}
```
```tsx trigger/auth.ts
import { auth } from "@trigger.dev/sdk/v3";
export async function generatePublicAccessToken(runId: string) {
return auth.createPublicToken({
scopes: {
read: {
runs: [runId],
},
},
expirationTime: "1h",
});
}
```
## Usage
### SWR vs Realtime hooks
We offer two "styles" of hooks: SWR and Realtime. The SWR hooks use the [swr](https://swr.vercel.app/) library to fetch data once and cache it. The Realtime hooks use [Trigger.dev realtime](/realtime) to subscribe to updates in real-time.
It can be a little confusing which one to use because [swr](https://swr.vercel.app/) can also be
configured to poll for updates. But because of rate-limits and the way the Trigger.dev API works,
we recommend using the Realtime hooks for most use-cases.
All hooks named `useRealtime*` are Realtime hooks, and all hooks named `use*` are SWR hooks.
#### Common SWR hook options
You can pass the following options to all SWR hooks:
* `revalidateOnFocus`: Revalidate the data when the window regains focus.
* `revalidateOnReconnect`: Revalidate the data when the browser regains a network connection.
* `refreshInterval`: Poll for updates at the specified interval (in milliseconds). Polling is not recommended for most use-cases; use the Realtime hooks instead.
#### Common SWR hook return values
* `error`: An error object if an error occurred while fetching the data.
* `isLoading`: A boolean indicating if the data is currently being fetched.
* `isValidating`: A boolean indicating if the data is currently being revalidated.
* `isError`: A boolean indicating if an error occurred while fetching the data.
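As a sketch, assuming the hooks forward these standard SWR options as a second argument:

```tsx
"use client";
import { useRun } from "@trigger.dev/react-hooks";

export function RunStatus({ runId }: { runId: string }) {
  const { run, error, isLoading } = useRun(runId, {
    revalidateOnFocus: false,
    refreshInterval: 0, // polling disabled; prefer the Realtime hooks for live updates
  });

  if (isLoading) return <div>Loading...</div>;
  if (error) return <div>Error: {error.message}</div>;

  return <div>Status: {run.status}</div>;
}
```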
### useRun
The `useRun` hook allows you to fetch a run by its ID.
```tsx
"use client"; // This is needed for Next.js App Router or other RSC frameworks
import { useRun } from "@trigger.dev/react-hooks";
export function MyComponent({ runId }: { runId: string }) {
const { run, error, isLoading } = useRun(runId);
  if (isLoading) return <div>Loading...</div>;
  if (error) return <div>Error: {error.message}</div>;

  return <div>Run: {run.id}</div>;
}
```
The `run` object returned is the same as the [run object](/management/runs/retrieve) returned by the Trigger.dev API. To correctly type the run's payload and output, you can provide the type of your task to the `useRun` hook:
```tsx
import { useRun } from "@trigger.dev/react-hooks";
import type { myTask } from "@/trigger/myTask";
export function MyComponent({ runId }: { runId: string }) {
  const { run, error, isLoading } = useRun<typeof myTask>(runId);

  if (isLoading) return <div>Loading...</div>;
  if (error) return <div>Error: {error.message}</div>;

  // Now run.payload and run.output are correctly typed
  return <div>Run: {run.id}</div>;
}
```
### useRealtimeRun
The `useRealtimeRun` hook allows you to subscribe to a run by its ID.
```tsx
"use client"; // This is needed for Next.js App Router or other RSC frameworks
import { useRealtimeRun } from "@trigger.dev/react-hooks";
export function MyComponent({ runId }: { runId: string }) {
const { run, error } = useRealtimeRun(runId);
  if (error) return <div>Error: {error.message}</div>;

  return <div>Run: {run.id}</div>;
}
```
To correctly type the run's payload and output, you can provide the type of your task to the `useRealtimeRun` hook:
```tsx
import { useRealtimeRun } from "@trigger.dev/react-hooks";
import type { myTask } from "@/trigger/myTask";
export function MyComponent({ runId }: { runId: string }) {
  const { run, error } = useRealtimeRun<typeof myTask>(runId);

  if (error) return <div>Error: {error.message}</div>;

  // Now run.payload and run.output are correctly typed
  return <div>Run: {run.id}</div>;
}
```
See our [Realtime documentation](/realtime) for more information.
### useRealtimeRunsWithTag
The `useRealtimeRunsWithTag` hook allows you to subscribe to multiple runs with a specific tag.
```tsx
"use client"; // This is needed for Next.js App Router or other RSC frameworks
import { useRealtimeRunsWithTag } from "@trigger.dev/react-hooks";
export function MyComponent({ tag }: { tag: string }) {
const { runs, error } = useRealtimeRunsWithTag(tag);
  if (error) return <div>Error: {error.message}</div>;

  return (
    <div>
      {runs.map((run) => (
        <div key={run.id}>Run: {run.id}</div>
      ))}
    </div>
  );
}
```
To correctly type the runs payload and output, you can provide the type of your task to the `useRealtimeRunsWithTag` hook:
```tsx
import { useRealtimeRunsWithTag } from "@trigger.dev/react-hooks";
import type { myTask } from "@/trigger/myTask";
export function MyComponent({ tag }: { tag: string }) {
  const { runs, error } = useRealtimeRunsWithTag<typeof myTask>(tag);

  if (error) return <div>Error: {error.message}</div>;

  // Now runs[i].payload and runs[i].output are correctly typed
  return (
    <div>
      {runs.map((run) => (
        <div key={run.id}>Run: {run.id}</div>
      ))}
    </div>
  );
}
```
If `useRealtimeRunsWithTag` could return multiple different types of tasks, you can pass a union of all the task types to the hook:
```tsx
import { useRealtimeRunsWithTag } from "@trigger.dev/react-hooks";
import type { myTask1, myTask2 } from "@/trigger/myTasks";
export function MyComponent({ tag }: { tag: string }) {
  const { runs, error } = useRealtimeRunsWithTag<typeof myTask1 | typeof myTask2>(tag);

  if (error) return <div>Error: {error.message}</div>;

  // You can narrow down the type of the run based on the taskIdentifier
  for (const run of runs) {
    if (run.taskIdentifier === "my-task-1") {
      // run is correctly typed as a run of myTask1
    } else if (run.taskIdentifier === "my-task-2") {
      // run is correctly typed as a run of myTask2
    }
  }

  return (
    <div>
      {runs.map((run) => (
        <div key={run.id}>Run: {run.id}</div>
      ))}
    </div>
  );
}
```
See our [Realtime documentation](/realtime) for more information.
### useRealtimeBatch
The `useRealtimeBatch` hook allows you to subscribe to a batch of runs by the batch ID.
```tsx
"use client"; // This is needed for Next.js App Router or other RSC frameworks
import { useRealtimeBatch } from "@trigger.dev/react-hooks";
export function MyComponent({ batchId }: { batchId: string }) {
const { runs, error } = useRealtimeBatch(batchId);
  if (error) return <div>Error: {error.message}</div>;

  return (
    <div>
      {runs.map((run) => (
        <div key={run.id}>Run: {run.id}</div>
      ))}
    </div>
  );
}
```
See our [Realtime documentation](/realtime) for more information.
# GitHub Actions
You can easily deploy your tasks with GitHub actions.
This simple GitHub action file will deploy your Trigger.dev tasks when new code is pushed to the `main` branch and the `trigger` directory has changes in it.
The deploy step will fail if any version mismatches are detected. Please see the [version
pinning](/github-actions#version-pinning) section for more details.
```yaml .github/workflows/release-trigger-prod.yml
name: Deploy to Trigger.dev (prod)
on:
push:
branches:
- main
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Use Node.js 20.x
uses: actions/setup-node@v4
with:
node-version: "20.x"
- name: Install dependencies
run: npm install
- name: π Deploy Trigger.dev
env:
TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }}
run: |
npx trigger.dev@latest deploy
```
```yaml .github/workflows/release-trigger-staging.yml
name: Deploy to Trigger.dev (staging)
# Requires manually calling the workflow from a branch / commit to deploy to staging
on:
workflow_dispatch:
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Use Node.js 20.x
uses: actions/setup-node@v4
with:
node-version: "20.x"
- name: Install dependencies
run: npm install
- name: π Deploy Trigger.dev
env:
TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }}
run: |
npx trigger.dev@latest deploy --env staging
```
If you already have a GitHub action file, you can just add the final step "π Deploy Trigger.dev" to your existing file.
## Creating a Personal Access Token
Go to your profile page and click on the ["Personal Access
Tokens"](https://cloud.trigger.dev/account/tokens) tab to create a new access token.
Then, in your GitHub repository, go to 'Settings' -> 'Secrets and variables' -> 'Actions' -> 'New repository secret'.
Add the name `TRIGGER_ACCESS_TOKEN` and the value of your access token. ![Add TRIGGER\_ACCESS\_TOKEN
in GitHub](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/github-access-token.png)
## Version pinning
The `@trigger.dev/*` package versions need to be in sync with the `trigger.dev` CLI version, otherwise there will be errors and unpredictable behavior. Hence, the `deploy` command will automatically fail during CI on any version mismatches.
Tip: add the deploy command to your `package.json` file to keep versions managed in the same place. For example:
```json
{
"scripts": {
"deploy:trigger-prod": "npx trigger.dev@3.0.0 deploy",
"deploy:trigger": "npx trigger.dev@3.0.0 deploy --env staging"
}
}
```
Your workflow file will follow the version specified in the `package.json` script, like so:
```yaml .github/workflows/release-trigger.yml
- name: π Deploy Trigger.dev
env:
TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }}
run: |
npm run deploy:trigger
```
You should pin the same version that you run locally during dev and manual deploys. The current version is displayed in the CLI banner, but you can also check it by appending `--version` to any command.
## Self-hosting
When self-hosting, you will have to take a few additional steps:
* Specify the `TRIGGER_API_URL` environment variable. You can add it to the GitHub secrets the same way as the access token. This should point at your webapp domain, for example: `https://trigger.example.com`
* Set up Docker, as you will need to build and push the image to your registry. On [Trigger.dev Cloud](https://cloud.trigger.dev) this is all done remotely.
* Add your registry credentials to the GitHub secrets.
* Use the `--self-hosted` and `--push` flags when deploying.
Other than that, your GitHub action file will look very similar to the one above:
```yaml .github/workflows/release-trigger-self-hosted.yml
name: Deploy to Trigger.dev (self-hosted)
on:
push:
branches:
- main
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Use Node.js 20.x
uses: actions/setup-node@v4
with:
node-version: "20.x"
- name: Install dependencies
run: npm install
# docker setup - part 1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
# docker setup - part 2
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: π Deploy Trigger.dev
env:
TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }}
# required when self-hosting
TRIGGER_API_URL: ${{ secrets.TRIGGER_API_URL }}
# deploy with additional flags
run: |
npx trigger.dev@latest deploy --self-hosted --push
```
# GitHub repo
Trigger.dev is [Open Source on GitHub](https://github.com/triggerdotdev/trigger.dev). You can contribute to the project by submitting issues, pull requests, or simply by using it and providing feedback.
You can also [self-host](/open-source-self-hosting) the project if you want to run it on your own infrastructure.
# Creating a project
This guide will show you how to create a new Trigger.dev project.
## Prerequisites
* [Create a Trigger.dev account](https://cloud.trigger.dev)
* Login to the Trigger.dev [dashboard](https://cloud.trigger.dev)
## Create a new Trigger.dev project
Click on "Projects" in the left hand side menu then click on "Create a new Project" button in the top right corner .
![Create a project page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/creating-a-project/creating-a-project-1.png)
![Name your project](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/creating-a-project/creating-a-project-2.png)
Once you have created your project, you can find your project ref (to add to your `trigger.config` file) and rename your project by clicking "Project settings" in the left-hand side menu.
![Useful project settings](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/creating-a-project/creating-a-project-3.png)
## Useful next steps
* Set up Trigger.dev in 3 minutes with the [quick start](/quick-start) guide.
* Learn what tasks are and how to write them.
# Generate an image using DALLΒ·E 3
This example will show you how to generate an image using DALLΒ·E 3 and text using GPT-4o with Trigger.dev.
## Overview
This example demonstrates how to use Trigger.dev to make reliable calls to AI APIs, specifically OpenAI's GPT-4o and DALL-E 3. It showcases automatic retrying with a maximum of 3 attempts, built-in error handling to avoid timeouts, and the ability to trace and monitor API calls.
## Task code
```ts trigger/generateContent.ts
import { task } from "@trigger.dev/sdk/v3";
import OpenAI from "openai";
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
type Payload = {
theme: string;
description: string;
};
export const generateContent = task({
id: "generate-content",
retry: {
maxAttempts: 3, // Retry up to 3 times
},
run: async ({ theme, description }: Payload) => {
// Generate text
const textResult = await openai.chat.completions.create({
model: "gpt-4o",
messages: generateTextPrompt(theme, description),
});
if (!textResult.choices[0]) {
throw new Error("No content, retryingβ¦");
}
// Generate image
const imageResult = await openai.images.generate({
model: "dall-e-3",
prompt: generateImagePrompt(theme, description),
});
if (!imageResult.data[0]) {
throw new Error("No image, retryingβ¦");
}
return {
text: textResult.choices[0],
image: imageResult.data[0].url,
};
},
});
function generateTextPrompt(theme: string, description: string): any {
  // The chat completions API expects an array of messages, not a bare string
  return [{ role: "user", content: `Theme: ${theme}\n\nDescription: ${description}` }];
}

function generateImagePrompt(theme: string, description: string): string {
  return `Theme: ${theme}\n\nDescription: ${description}`;
}
```
## Testing your task
To test this task in the dashboard, you can use the following payload:
```json
{
"theme": "A beautiful sunset",
"description": "A sunset over the ocean with a tiny yacht in the distance."
}
```
# Transcribe audio using Deepgram
This example will show you how to transcribe audio using Deepgram's speech recognition API with Trigger.dev.
## Overview
Transcribe audio using [Deepgram's](https://developers.deepgram.com/docs/introduction) speech recognition API.
## Key Features
* Transcribe audio from a URL
* Use the Nova 2 model for transcription
## Task code
```ts trigger/deepgramTranscription.ts
import { createClient } from "@deepgram/sdk";
import { logger, task } from "@trigger.dev/sdk/v3";
// Initialize the Deepgram client, using your Deepgram API key (you can find this in your Deepgram account settings).
const deepgram = createClient(process.env.DEEPGRAM_SECRET_KEY);
export const deepgramTranscription = task({
id: "deepgram-transcribe-audio",
run: async (payload: { audioUrl: string }) => {
const { audioUrl } = payload;
logger.log("Transcribing audio from URL", { audioUrl });
// Transcribe the audio using Deepgram
const { result, error } = await deepgram.listen.prerecorded.transcribeUrl(
{
url: audioUrl,
},
{
model: "nova-2", // Use the Nova 2 model for the transcription
smart_format: true, // Automatically format transcriptions to improve readability
diarize: true, // Recognize speaker changes and assign a speaker to each word in the transcript
}
);
if (error) {
logger.error("Failed to transcribe audio", { error });
throw error;
}
console.dir(result, { depth: null });
// Extract the transcription from the result
const transcription = result.results.channels[0].alternatives[0].paragraphs?.transcript;
logger.log(`Generated transcription: ${transcription}`);
return {
result,
};
},
});
```
## Testing your task
To test this task in the dashboard, you can use the following payload:
```json
{
"audioUrl": "https://dpgr.am/spacewalk.wav"
}
```
# Convert an image to a cartoon using Fal.ai
This example task generates an image from a URL using Fal.ai and uploads it to Cloudflare R2.
## Walkthrough
This video walks through the process of creating this task in a Next.js project.
## Prerequisites
* An existing project
* A [Trigger.dev account](https://cloud.trigger.dev) with Trigger.dev [initialized in your project](/quick-start)
* A [Fal.ai](https://fal.ai/) account
* A [Cloudflare](https://developers.cloudflare.com/r2/) account with an R2 bucket setup
## Task code
This task converts an image to a cartoon using Fal.ai, and uploads the result to Cloudflare R2.
```ts trigger/fal-ai-image-to-cartoon.ts
import { logger, task } from "@trigger.dev/sdk/v3";
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import * as fal from "@fal-ai/serverless-client";
import fetch from "node-fetch";
import { z } from "zod";
// Initialize fal.ai client
fal.config({
credentials: process.env.FAL_KEY, // Get this from your fal.ai dashboard
});
// Initialize S3-compatible client for Cloudflare R2
const s3Client = new S3Client({
// How to authenticate to R2: https://developers.cloudflare.com/r2/api/s3/tokens/
region: "auto",
endpoint: process.env.R2_ENDPOINT,
credentials: {
accessKeyId: process.env.R2_ACCESS_KEY_ID ?? "",
secretAccessKey: process.env.R2_SECRET_ACCESS_KEY ?? "",
},
});
export const FalResult = z.object({
images: z.tuple([z.object({ url: z.string() })]),
});
export const falAiImageToCartoon = task({
id: "fal-ai-image-to-cartoon",
run: async (payload: { imageUrl: string; fileName: string }) => {
logger.log("Converting image to cartoon", payload);
// Convert image to cartoon using fal.ai
const result = await fal.subscribe("fal-ai/flux/dev/image-to-image", {
input: {
prompt: "Turn the image into a cartoon in the style of a Pixar character",
image_url: payload.imageUrl,
},
onQueueUpdate: (update) => {
logger.info("Fal.ai processing update", { update });
},
});
const $result = FalResult.parse(result);
const [{ url: cartoonImageUrl }] = $result.images;
// Download the cartoon image
const imageResponse = await fetch(cartoonImageUrl);
const imageBuffer = await imageResponse.arrayBuffer().then(Buffer.from);
// Upload to Cloudflare R2
const r2Key = `cartoons/${payload.fileName}`;
const uploadParams = {
Bucket: process.env.R2_BUCKET, // Create a bucket in your Cloudflare dashboard
Key: r2Key,
Body: imageBuffer,
ContentType: "image/png",
};
logger.log("Uploading cartoon to R2", { key: r2Key });
await s3Client.send(new PutObjectCommand(uploadParams));
logger.log("Cartoon uploaded to R2", { key: r2Key });
return {
originalUrl: payload.imageUrl,
cartoonUrl: `File uploaded to storage at: ${r2Key}`,
};
},
});
```
### Testing your task
You can test your task by triggering it from the Trigger.dev dashboard.
```json
"imageUrl": "", // Replace with the URL of the image you want to convert to a cartoon
"fileName": "" // Replace with the name you want to save the file as in Cloudflare R2
```
# Generate an image from a prompt using Fal.ai and Trigger.dev Realtime
This example task generates an image from a prompt using Fal.ai and shows the progress of the task on the frontend using Trigger.dev Realtime.
## Walkthrough
This video walks through the process of creating this task in a Next.js project.
## Prerequisites
* An existing project
* A [Trigger.dev account](https://cloud.trigger.dev) with Trigger.dev [initialized in your project](/quick-start)
* A [Fal.ai](https://fal.ai/) account
## Task code
This task generates an image from a prompt and an input image using Fal.ai, and streams progress updates to the frontend with Trigger.dev Realtime.
```ts trigger/fal-ai-image-from-prompt-realtime.ts
import * as fal from "@fal-ai/serverless-client";
import { logger, schemaTask } from "@trigger.dev/sdk/v3";
import { z } from "zod";
export const FalResult = z.object({
images: z.tuple([z.object({ url: z.string() })]),
});
export const payloadSchema = z.object({
imageUrl: z.string().url(),
prompt: z.string(),
});
export const realtimeImageGeneration = schemaTask({
id: "realtime-image-generation",
schema: payloadSchema,
run: async (payload) => {
const result = await fal.subscribe("fal-ai/flux/dev/image-to-image", {
input: {
image_url: payload.imageUrl,
prompt: payload.prompt,
},
onQueueUpdate: (update) => {
logger.info("Fal.ai processing update", { update });
},
});
const $result = FalResult.parse(result);
const [{ url: cartoonUrl }] = $result.images;
return {
imageUrl: cartoonUrl,
};
},
});
```
### Testing your task
You can test your task by triggering it from the Trigger.dev dashboard. Here's an example payload:
```json
{
"imageUrl": "https://static.vecteezy.com/system/resources/previews/005/857/332/non_2x/funny-portrait-of-cute-corgi-dog-outdoors-free-photo.jpg",
"prompt": "Dress this dog for Christmas"
}
```
# Video processing with FFmpeg
These examples show you how to process videos in various ways using FFmpeg with Trigger.dev.
export const packages_0 = "ffmpeg"
## Prerequisites
* A project with [Trigger.dev initialized](/quick-start)
* [FFmpeg](https://www.ffmpeg.org/download.html) installed on your machine
### Adding the FFmpeg build extension
To use these example tasks, you'll first need to add our FFmpeg extension to your project configuration like this:
```ts trigger.config.ts
import { ffmpeg } from "@trigger.dev/build/extensions/core";
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [ffmpeg()],
},
});
```
[Build extensions](/config/config-file#extensions) allow you to hook into the build system and
customize the build process or the resulting bundle and container image (in the case of
deploying). You can use pre-built extensions or create your own.
You'll also need to add `@trigger.dev/build` to your `package.json` file under `devDependencies` if you don't already have it there.
## Compress a video using FFmpeg
This task demonstrates how to use FFmpeg to compress a video, reducing its file size while maintaining reasonable quality, and upload the compressed video to R2 storage.
### Key Features
* Fetches a video from a given URL
* Compresses the video using FFmpeg with various compression settings
* Uploads the compressed video to R2 storage
### Task code
```ts trigger/ffmpeg-compress-video.ts
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import { logger, task } from "@trigger.dev/sdk/v3";
import ffmpeg from "fluent-ffmpeg";
import fs from "fs/promises";
import fetch from "node-fetch";
import { Readable } from "node:stream";
import os from "os";
import path from "path";
// Initialize S3 client
const s3Client = new S3Client({
// How to authenticate to R2: https://developers.cloudflare.com/r2/api/s3/tokens/
region: "auto",
endpoint: process.env.R2_ENDPOINT,
credentials: {
accessKeyId: process.env.R2_ACCESS_KEY_ID ?? "",
secretAccessKey: process.env.R2_SECRET_ACCESS_KEY ?? "",
},
});
export const ffmpegCompressVideo = task({
id: "ffmpeg-compress-video",
run: async (payload: { videoUrl: string }) => {
const { videoUrl } = payload;
// Generate temporary file names
const tempDirectory = os.tmpdir();
const outputPath = path.join(tempDirectory, `output_${Date.now()}.mp4`);
// Fetch the video
const response = await fetch(videoUrl);
// Compress the video
await new Promise((resolve, reject) => {
if (!response.body) {
return reject(new Error("Failed to fetch video"));
}
ffmpeg(Readable.from(response.body))
.outputOptions([
"-c:v libx264", // Use H.264 codec
"-crf 28", // Higher CRF for more compression (28 is near the upper limit for acceptable quality)
"-preset veryslow", // Slowest preset for best compression
"-vf scale=iw/2:ih/2", // Reduce resolution to 320p width (height auto-calculated)
"-c:a aac", // Use AAC for audio
"-b:a 64k", // Reduce audio bitrate to 64k
"-ac 1", // Convert to mono audio
])
.output(outputPath)
.on("end", resolve)
.on("error", reject)
.run();
});
// Read the compressed video
const compressedVideo = await fs.readFile(outputPath);
const compressedSize = compressedVideo.length;
// Log compression results
logger.log(`Compressed video size: ${compressedSize} bytes`);
logger.log(`Temporary compressed video file created`, { outputPath });
// Create the r2Key for the extracted audio, using the base name of the output path
const r2Key = `processed-videos/${path.basename(outputPath)}`;
const uploadParams = {
Bucket: process.env.R2_BUCKET,
Key: r2Key,
Body: compressedVideo,
};
// Upload the video to R2 and get the URL
await s3Client.send(new PutObjectCommand(uploadParams));
logger.log(`Compressed video saved to your r2 bucket`, { r2Key });
// Delete the temporary compressed video file
await fs.unlink(outputPath);
logger.log(`Temporary compressed video file deleted`, { outputPath });
// Return the compressed video buffer and r2 key
return {
Bucket: process.env.R2_BUCKET,
r2Key,
};
},
});
```
### Testing your task
To test this task, use this payload structure:
```json
{
"videoUrl": "" // Replace with the URL of the video you want to upload
}
```
## Extract audio from a video using FFmpeg
This task demonstrates how to use FFmpeg to extract audio from a video, convert it to WAV format, and upload it to R2 storage.
### Key Features
* Fetches a video from a given URL
* Extracts the audio from the video using FFmpeg
* Converts the extracted audio to WAV format
* Uploads the extracted audio to R2 storage
### Task code
```ts trigger/ffmpeg-extract-audio.ts
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import { logger, task } from "@trigger.dev/sdk/v3";
import ffmpeg from "fluent-ffmpeg";
import fs from "fs/promises";
import fetch from "node-fetch";
import { Readable } from "node:stream";
import os from "os";
import path from "path";
// Initialize S3 client
const s3Client = new S3Client({
// How to authenticate to R2: https://developers.cloudflare.com/r2/api/s3/tokens/
region: "auto",
endpoint: process.env.R2_ENDPOINT,
credentials: {
accessKeyId: process.env.R2_ACCESS_KEY_ID ?? "",
secretAccessKey: process.env.R2_SECRET_ACCESS_KEY ?? "",
},
});
export const ffmpegExtractAudio = task({
id: "ffmpeg-extract-audio",
run: async (payload: { videoUrl: string }) => {
const { videoUrl } = payload;
// Generate temporary file names
const tempDirectory = os.tmpdir();
const outputPath = path.join(tempDirectory, `audio_${Date.now()}.wav`);
// Fetch the video
const response = await fetch(videoUrl);
// Extract the audio
await new Promise((resolve, reject) => {
if (!response.body) {
return reject(new Error("Failed to fetch video"));
}
ffmpeg(Readable.from(response.body))
.outputOptions([
"-vn", // Disable video output
"-acodec pcm_s16le", // Use PCM 16-bit little-endian encoding
"-ar 44100", // Set audio sample rate to 44.1 kHz
"-ac 2", // Set audio channels to stereo
])
.output(outputPath)
.on("end", resolve)
.on("error", reject)
.run();
});
// Read the extracted audio
const audioBuffer = await fs.readFile(outputPath);
const audioSize = audioBuffer.length;
// Log audio extraction results
logger.log(`Extracted audio size: ${audioSize} bytes`);
logger.log(`Temporary audio file created`, { outputPath });
// Create the r2Key for the extracted audio, using the base name of the output path
const r2Key = `extracted-audio/${path.basename(outputPath)}`;
const uploadParams = {
Bucket: process.env.R2_BUCKET,
Key: r2Key,
Body: audioBuffer,
};
// Upload the audio to R2 and get the URL
await s3Client.send(new PutObjectCommand(uploadParams));
logger.log(`Extracted audio saved to your R2 bucket`, { r2Key });
// Delete the temporary audio file
await fs.unlink(outputPath);
logger.log(`Temporary audio file deleted`, { outputPath });
// Return the audio file path, size, and R2 URL
return {
Bucket: process.env.R2_BUCKET,
r2Key,
};
},
});
```
### Testing your task
To test this task, use this payload structure:
Make sure to provide a video URL that contains audio. If the video does not have audio, the task
will fail.
```json
{
"videoUrl": "" // Replace with the URL of the video you want to upload
}
```
## Generate a thumbnail from a video using FFmpeg
This task demonstrates how to use FFmpeg to generate a thumbnail from a video at a specific time and upload the generated thumbnail to R2 storage.
### Key Features
* Fetches a video from a given URL
* Generates a thumbnail from the video at the 5-second mark
* Uploads the generated thumbnail to R2 storage
### Task code
```ts trigger/ffmpeg-generate-thumbnail.ts
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import { logger, task } from "@trigger.dev/sdk/v3";
import ffmpeg from "fluent-ffmpeg";
import fs from "fs/promises";
import fetch from "node-fetch";
import { Readable } from "node:stream";
import os from "os";
import path from "path";
// Initialize S3 client
const s3Client = new S3Client({
// How to authenticate to R2: https://developers.cloudflare.com/r2/api/s3/tokens/
region: "auto",
endpoint: process.env.R2_ENDPOINT,
credentials: {
accessKeyId: process.env.R2_ACCESS_KEY_ID ?? "",
secretAccessKey: process.env.R2_SECRET_ACCESS_KEY ?? "",
},
});
export const ffmpegGenerateThumbnail = task({
id: "ffmpeg-generate-thumbnail",
run: async (payload: { videoUrl: string }) => {
const { videoUrl } = payload;
// Generate output file name
const tempDirectory = os.tmpdir();
const outputPath = path.join(tempDirectory, `thumbnail_${Date.now()}.jpg`);
// Fetch the video
const response = await fetch(videoUrl);
// Generate the thumbnail
await new Promise((resolve, reject) => {
if (!response.body) {
return reject(new Error("Failed to fetch video"));
}
ffmpeg(Readable.from(response.body))
.screenshots({
count: 1,
folder: "/tmp",
filename: path.basename(outputPath),
size: "320x240",
timemarks: ["5"], // 5 seconds
})
.on("end", resolve)
.on("error", reject);
});
// Read the generated thumbnail
const thumbnail = await fs.readFile(outputPath);
// Create the r2Key for the extracted audio, using the base name of the output path
const r2Key = `thumbnails/${path.basename(outputPath)}`;
const uploadParams = {
Bucket: process.env.R2_BUCKET,
Key: r2Key,
Body: thumbnail,
};
// Upload the thumbnail to R2 and get the URL
await s3Client.send(new PutObjectCommand(uploadParams));
const r2Url = `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com/${process.env.R2_BUCKET}/${r2Key}`;
logger.log("Thumbnail uploaded to R2", { url: r2Url });
// Delete the temporary file
await fs.unlink(outputPath);
// Return the thumbnail buffer, path, and R2 URL
return {
thumbnailBuffer: thumbnail,
thumbnailPath: outputPath,
r2Url,
};
},
});
```
### Testing your task
To test this task in the dashboard, you can use the following payload:
```json
{
"videoUrl": "" // Replace with the URL of the video you want to upload
}
```
## Local development
To test this example task locally, be sure to install on your local machine any packages used by the build extensions in your `trigger.config.ts` file. In this case, you need to install FFmpeg.
# Crawl a URL using Firecrawl
This example demonstrates how to crawl a URL using Firecrawl with Trigger.dev.
## Overview
Firecrawl is a tool for crawling websites and extracting clean markdown that's structured in an LLM-ready format.
Here are two examples of how to use Firecrawl with Trigger.dev:
## Prerequisites
* A project with [Trigger.dev initialized](/quick-start)
* A [Firecrawl](https://firecrawl.dev/) account
## Example 1: crawl an entire website with Firecrawl
This task crawls a website and returns the `crawlResult` object. You can set the `limit` parameter to control the number of URLs that are crawled.
```ts trigger/firecrawl-url-crawl.ts
import FirecrawlApp from "@mendable/firecrawl-js";
import { task } from "@trigger.dev/sdk/v3";
// Initialize the Firecrawl client with your API key
const firecrawlClient = new FirecrawlApp({
apiKey: process.env.FIRECRAWL_API_KEY, // Get this from your Firecrawl dashboard
});
export const firecrawlCrawl = task({
id: "firecrawl-crawl",
run: async (payload: { url: string }) => {
const { url } = payload;
// Crawl: scrapes all the URLs of a web page and return content in LLM-ready format
const crawlResult = await firecrawlClient.crawlUrl(url, {
limit: 100, // Limit the number of URLs to crawl
scrapeOptions: {
formats: ["markdown", "html"],
},
});
if (!crawlResult.success) {
throw new Error(`Failed to crawl: ${crawlResult.error}`);
}
return {
data: crawlResult,
};
},
});
```
### Testing your task
You can test your task by triggering it from the Trigger.dev dashboard.
```json
"url": "" // Replace with the URL you want to crawl
```
## Example 2: scrape a single URL with Firecrawl
This task scrapes a single URL and returns the `scrapeResult` object.
```ts trigger/firecrawl-url-scrape.ts
import FirecrawlApp, { ScrapeResponse } from "@mendable/firecrawl-js";
import { task } from "@trigger.dev/sdk/v3";
// Initialize the Firecrawl client with your API key
const firecrawlClient = new FirecrawlApp({
apiKey: process.env.FIRECRAWL_API_KEY, // Get this from your Firecrawl dashboard
});
export const firecrawlScrape = task({
id: "firecrawl-scrape",
run: async (payload: { url: string }) => {
const { url } = payload;
// Scrape: scrapes a URL and get its content in LLM-ready format (markdown, structured data via LLM Extract, screenshot, html)
const scrapeResult = (await firecrawlClient.scrapeUrl(url, {
formats: ["markdown", "html"],
})) as ScrapeResponse;
if (!scrapeResult.success) {
throw new Error(`Failed to scrape: ${scrapeResult.error}`);
}
return {
data: scrapeResult,
};
},
});
```
### Testing your task
You can test your task by triggering it from the Trigger.dev dashboard.
```json
"url": "" // Replace with the URL you want to scrape
```
# Convert documents to PDF using LibreOffice
This example demonstrates how to convert documents to PDF using LibreOffice with Trigger.dev.
export const packages_0 = "libreoffice"
## Prerequisites
* A project with [Trigger.dev initialized](/quick-start)
* [LibreOffice](https://www.libreoffice.org/download/libreoffice-fresh/) installed on your machine
* A [Cloudflare R2](https://developers.cloudflare.com) account and bucket
### Using our `aptGet` build extension to add the LibreOffice package
To deploy this task, you'll need to add LibreOffice to your project configuration, like this:
```ts trigger.config.ts
import { aptGet } from "@trigger.dev/build/extensions/core";
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [
aptGet({
packages: ["libreoffice"],
}),
],
},
});
```
[Build extensions](/config/config-file#extensions) allow you to hook into the build system and
customize the build process or the resulting bundle and container image (in the case of
deploying). You can use pre-built extensions or create your own.
You'll also need to add `@trigger.dev/build` to your `package.json` file under `devDependencies` if you don't already have it there (e.g. `npm install --save-dev @trigger.dev/build`).
## Convert a document to PDF using LibreOffice and upload to R2
This task demonstrates how to use LibreOffice to convert a document (.doc or .docx) to PDF and upload the PDF to an R2 storage bucket.
### Key Features
* Fetches a document from a given URL
* Converts the document to PDF
* Uploads the PDF to R2 storage
### Task code
```ts trigger/libreoffice-pdf-convert.ts
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import { task } from "@trigger.dev/sdk/v3";
import libreoffice from "libreoffice-convert";
import { promisify } from "node:util";
import path from "path";
import fs from "fs";
const convert = promisify(libreoffice.convert);
// Initialize S3 client
const s3Client = new S3Client({
// How to authenticate to R2: https://developers.cloudflare.com/r2/api/s3/tokens/
region: "auto",
endpoint: process.env.R2_ENDPOINT,
credentials: {
accessKeyId: process.env.R2_ACCESS_KEY_ID ?? "",
secretAccessKey: process.env.R2_SECRET_ACCESS_KEY ?? "",
},
});
export const libreOfficePdfConvert = task({
id: "libreoffice-pdf-convert",
run: async (payload: { documentUrl: string }, { ctx }) => {
// Set LibreOffice path for production environment
if (ctx.environment.type !== "DEVELOPMENT") {
process.env.LIBREOFFICE_PATH = "/usr/bin/libreoffice";
}
try {
// Create temporary file paths
const inputPath = path.join(process.cwd(), `input_${Date.now()}.docx`);
const outputPath = path.join(process.cwd(), `output_${Date.now()}.pdf`);
// Download file from URL
const response = await fetch(payload.documentUrl);
const buffer = Buffer.from(await response.arrayBuffer());
fs.writeFileSync(inputPath, buffer);
const inputFile = fs.readFileSync(inputPath);
// Convert to PDF using LibreOffice
const pdfBuffer = await convert(inputFile, ".pdf", undefined);
fs.writeFileSync(outputPath, pdfBuffer);
// Upload to R2
const key = `converted-pdfs/output_${Date.now()}.pdf`;
await s3Client.send(
new PutObjectCommand({
Bucket: process.env.R2_BUCKET,
Key: key,
Body: fs.readFileSync(outputPath),
})
);
// Cleanup temporary files
fs.unlinkSync(inputPath);
fs.unlinkSync(outputPath);
return { pdfLocation: key };
} catch (error) {
console.error("Error converting PDF:", error);
throw error;
}
},
});
```
### Testing your task
To test this task, use this payload structure:
```json
{
"documentUrl": "" // Replace with the URL of the document you want to convert
}
```
## Local development
To test this example task locally, be sure to install any packages from the build extensions you added to your `trigger.config.ts` file to your local machine. In this case, you need to install {packages_0}.
# Call OpenAI with retrying
This example will show you how to call OpenAI with retrying using Trigger.dev.
## Overview
Sometimes OpenAI calls can take a long time to complete, or they can fail. This task will retry if the API call fails completely or if the response is empty.
## Task code
```ts trigger/openai.ts
import { task } from "@trigger.dev/sdk/v3";
import OpenAI from "openai";
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
export const openaiTask = task({
id: "openai-task",
//specifying retry options overrides the defaults defined in your trigger.config file
retry: {
maxAttempts: 10,
factor: 1.8,
minTimeoutInMs: 500,
maxTimeoutInMs: 30_000,
randomize: false,
},
run: async (payload: { prompt: string }) => {
//if this fails, it will throw an error and retry
const chatCompletion = await openai.chat.completions.create({
messages: [{ role: "user", content: payload.prompt }],
model: "gpt-3.5-turbo",
});
if (!chatCompletion.choices[0]?.message.content) {
//sometimes OpenAI returns an empty response, let's retry by throwing an error
throw new Error("OpenAI call failed");
}
return chatCompletion.choices[0].message.content;
},
});
```
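To get a feel for the schedule these options produce, here's a small sketch that prints the approximate delay before each retry. It assumes the conventional exponential backoff formula `min(maxTimeoutInMs, minTimeoutInMs * factor^(attempt - 1))`; the platform's exact formula and jitter behavior may differ:
```ts
// Approximate (un-jittered) retry delays implied by the options above,
// assuming a conventional exponential backoff formula.
const opts = { maxAttempts: 10, factor: 1.8, minTimeoutInMs: 500, maxTimeoutInMs: 30_000 };

for (let attempt = 1; attempt < opts.maxAttempts; attempt++) {
  const delay = Math.min(opts.maxTimeoutInMs, opts.minTimeoutInMs * opts.factor ** (attempt - 1));
  console.log(`Delay before retry ${attempt}: ~${Math.round(delay)}ms`);
}
// => ~500ms, ~900ms, ~1620ms, ~2916ms, ~5249ms, ~9448ms, ~17006ms, then capped at ~30000ms
```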
## Testing your task
To test this task in the dashboard, you can use the following payload:
```json
{
"prompt": "What is the meaning of life?"
}
```
# Turn a PDF into an image using MuPDF
This example will show you how to turn a PDF into an image using MuPDF and Trigger.dev.
export const packages_0 = "mupdf-tools from MuPDF"
## Overview
This example demonstrates how to use Trigger.dev to turn a PDF into a series of images using MuPDF and upload them to Cloudflare R2.
## Update your build configuration
To use this example, add the build settings below to your `trigger.config.ts` file. They ensure that `mupdf-tools` (which provides the `mutool` command) and `curl` are installed when you deploy your task. You can learn more about this and see more build settings [here](/config/config-file#aptget).
```ts trigger.config.ts
import { aptGet } from "@trigger.dev/build/extensions/core";
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [aptGet({ packages: ["mupdf-tools", "curl"] })],
},
});
```
## Task code
```ts trigger/pdfToImage.ts
import { logger, task } from "@trigger.dev/sdk/v3";
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import { execSync } from "child_process";
import fs from "fs";
import path from "path";
// Initialize S3 client
const s3Client = new S3Client({
region: "auto",
endpoint: process.env.S3_ENDPOINT,
credentials: {
accessKeyId: process.env.R2_ACCESS_KEY_ID ?? "",
secretAccessKey: process.env.R2_SECRET_ACCESS_KEY ?? "",
},
});
export const pdfToImage = task({
id: "pdf-to-image",
run: async (payload: { pdfUrl: string; documentId: string }) => {
logger.log("Converting PDF to images", payload);
const pdfPath = `/tmp/${payload.documentId}.pdf`;
const outputDir = `/tmp/${payload.documentId}`;
// Download PDF and convert to images using MuPDF
execSync(`curl -s -o ${pdfPath} ${payload.pdfUrl}`);
fs.mkdirSync(outputDir, { recursive: true });
execSync(`mutool convert -o ${outputDir}/page-%d.png ${pdfPath}`);
// Upload images to R2
const uploadedUrls = [];
for (const file of fs.readdirSync(outputDir)) {
const s3Key = `images/${payload.documentId}/${file}`;
const uploadParams = {
Bucket: process.env.S3_BUCKET,
Key: s3Key,
Body: fs.readFileSync(path.join(outputDir, file)),
ContentType: "image/png",
};
logger.log("Uploading to R2", uploadParams);
await s3Client.send(new PutObjectCommand(uploadParams));
const s3Url = `https://${process.env.S3_BUCKET}.r2.cloudflarestorage.com/${s3Key}`;
uploadedUrls.push(s3Url);
logger.log("Image uploaded to R2", { url: s3Url });
}
// Clean up
fs.rmSync(outputDir, { recursive: true, force: true });
fs.unlinkSync(pdfPath);
logger.log("All images uploaded to R2", { urls: uploadedUrls });
return {
imageUrls: uploadedUrls,
};
},
});
```
## Testing your task
To test this task in the dashboard, you can use the following payload:
```json
{
"pdfUrl": "https://pdfobject.com/pdf/sample.pdf",
"documentId": "unique-document-id"
}
```
## Local development
To test this example task locally, be sure to install any packages from the build extensions you added to your `trigger.config.ts` file to your local machine. In this case, you need to install {packages_0}.
# Puppeteer
These examples demonstrate how to use Puppeteer with Trigger.dev.
export const packages_0 = "the Puppeteer library."
## Prerequisites
* A project with [Trigger.dev initialized](/quick-start)
* [Puppeteer](https://pptr.dev/guides/installation) installed on your machine
## Overview
There are 3 example tasks to follow on this page:
1. [Basic example](/guides/examples/puppeteer#basic-example)
2. [Generate a PDF from a web page](/guides/examples/puppeteer#generate-a-pdf-from-a-web-page)
3. [Scrape content from a web page](/guides/examples/puppeteer#scrape-content-from-a-web-page)
**WEB SCRAPING:** When web scraping, you MUST use a proxy to comply with our terms of service. Direct scraping of third-party websites without the site owner's permission using Trigger.dev Cloud is prohibited and will result in account suspension. See [this example](/guides/examples/puppeteer#scrape-content-from-a-web-page) which uses a proxy.
## Build configurations
To use all examples on this page, you'll first need to add these build settings to your `trigger.config.ts` file:
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { puppeteer } from "@trigger.dev/build/extensions/puppeteer";
export default defineConfig({
project: "",
// Your other config settings...
build: {
// This is required to use the Puppeteer library
extensions: [puppeteer()],
},
});
```
Learn more about [build configurations](/config/config-file#build-configuration) including setting default retry settings, customizing the build environment, and more.
## Set an environment variable
Set the following environment variable in your [Trigger.dev dashboard](/deploy-environment-variables) or [using the SDK](/deploy-environment-variables#in-your-code):
```bash
PUPPETEER_EXECUTABLE_PATH="/usr/bin/google-chrome-stable"
```
## Basic example
### Overview
In this example we use [Puppeteer](https://pptr.dev/) to log out the title of a web page, in this case from the [Trigger.dev](https://trigger.dev) landing page.
### Task code
```ts trigger/puppeteer-basic-example.ts
import { logger, task } from "@trigger.dev/sdk/v3";
import puppeteer from "puppeteer";
export const puppeteerTask = task({
id: "puppeteer-log-title",
run: async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto("https://trigger.dev");
const content = await page.title();
logger.info("Content", { content });
await browser.close();
},
});
```
### Testing your task
There's no payload required for this task so you can just click "Run test" from the Test page in the dashboard. Learn more about testing tasks [here](/run-tests).
## Generate a PDF from a web page
### Overview
In this example we use [Puppeteer](https://pptr.dev/) to generate a PDF from the [Trigger.dev](https://trigger.dev) landing page and upload it to [Cloudflare R2](https://developers.cloudflare.com/r2/).
### Task code
```ts trigger/puppeteer-generate-pdf.ts
import { logger, task } from "@trigger.dev/sdk/v3";
import puppeteer from "puppeteer";
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
// Initialize S3 client
const s3Client = new S3Client({
region: "auto",
endpoint: process.env.S3_ENDPOINT,
credentials: {
accessKeyId: process.env.R2_ACCESS_KEY_ID ?? "",
secretAccessKey: process.env.R2_SECRET_ACCESS_KEY ?? "",
},
});
export const puppeteerWebpageToPDF = task({
id: "puppeteer-webpage-to-pdf",
run: async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
const response = await page.goto("https://trigger.dev");
const url = response?.url() ?? "No URL found";
// Generate PDF from the web page
const generatePdf = await page.pdf();
logger.info("PDF generated from URL", { url });
await browser.close();
// Upload to R2
const s3Key = `pdfs/test.pdf`;
const uploadParams = {
Bucket: process.env.S3_BUCKET,
Key: s3Key,
Body: generatePdf,
ContentType: "application/pdf",
};
logger.log("Uploading to R2 with params", uploadParams);
// Upload the PDF to R2 and return the URL.
await s3Client.send(new PutObjectCommand(uploadParams));
const s3Url = `https://${process.env.S3_BUCKET}.r2.cloudflarestorage.com/${s3Key}`;
logger.log("PDF uploaded to R2", { url: s3Url });
return { pdfUrl: s3Url };
},
});
```
### Testing your task
There's no payload required for this task so you can just click "Run test" from the Test page in the dashboard. Learn more about testing tasks [here](/run-tests).
## Scrape content from a web page
### Overview
In this example we use [Puppeteer](https://pptr.dev/) with a [BrowserBase](https://www.browserbase.com/) proxy to scrape the GitHub stars count from the [Trigger.dev](https://trigger.dev) landing page and log it out. See [this list](/guides/examples/puppeteer#proxying) for more proxying services we recommend.
When web scraping, you MUST use the technique below which uses a proxy with Puppeteer. Direct
scraping without using `browserWSEndpoint` is prohibited and will result in account suspension.
### Task code
```ts trigger/scrape-website.ts
import { logger, task } from "@trigger.dev/sdk/v3";
import puppeteer from "puppeteer-core";
export const puppeteerScrapeWithProxy = task({
id: "puppeteer-scrape-with-proxy",
run: async () => {
const browser = await puppeteer.connect({
browserWSEndpoint: `wss://connect.browserbase.com?apiKey=${process.env.BROWSERBASE_API_KEY}`,
});
const page = await browser.newPage();
try {
// Navigate to the target website
await page.goto("https://trigger.dev", { waitUntil: "networkidle0" });
// Scrape the GitHub stars count
const starCount = await page.evaluate(() => {
const starElement = document.querySelector(".github-star-count");
const text = starElement?.textContent ?? "0";
const numberText = text.replace(/[^0-9]/g, "");
return parseInt(numberText, 10);
});
logger.info("GitHub star count", { starCount });
return { starCount };
} catch (error) {
logger.error("Error during scraping", {
error: error instanceof Error ? error.message : String(error),
});
throw error;
} finally {
await browser.close();
}
},
});
```
### Testing your task
There's no payload required for this task so you can just click "Run test" from the Test page in the dashboard. Learn more about testing tasks [here](/run-tests).
## Local development
To test this example task locally, be sure to install any packages from the build extensions you added to your `trigger.config.ts` file to your local machine. In this case, you need to install {packages_0}.
## Proxying
If you're using Trigger.dev Cloud and Puppeteer or any other tool to scrape content from websites you don't own, you'll need to proxy your requests. **If you don't you'll risk getting our IP address blocked and we will ban you from our service.**
Here is a list of proxy services we recommend:
* [Browserbase](https://www.browserbase.com/)
* [Brightdata](https://brightdata.com/)
* [Browserless](https://browserless.io/)
* [Oxylabs](https://oxylabs.io/)
* [ScrapingBee](https://scrapingbee.com/)
* [Smartproxy](https://smartproxy.com/)
# Generate a PDF using react-pdf and save it to R2
This example will show you how to generate a PDF using Trigger.dev.
## Overview
This example demonstrates how to use Trigger.dev to generate a PDF using `react-pdf` and save it to Cloudflare R2.
## Task code
This example must be a .tsx file to use React components.
```tsx trigger/generateResumePDF.tsx
import { logger, task } from "@trigger.dev/sdk/v3";
import { renderToBuffer, Document, Page, Text, View } from "@react-pdf/renderer";
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
// Initialize R2 client
const r2Client = new S3Client({
// How to authenticate to R2: https://developers.cloudflare.com/r2/api/s3/tokens/
region: "auto",
endpoint: process.env.R2_ENDPOINT,
credentials: {
accessKeyId: process.env.R2_ACCESS_KEY_ID ?? "",
secretAccessKey: process.env.R2_SECRET_ACCESS_KEY ?? "",
},
});
export const generateResumePDF = task({
id: "generate-resume-pdf",
run: async (payload: { text: string }) => {
// Log the payload
logger.log("Generating PDF resume", payload);
// Render the resume document to a PDF buffer using the imported react-pdf components
const pdfBuffer = await renderToBuffer(
  <Document>
    <Page size="A4">
      <View>
        <Text>{payload.text}</Text>
      </View>
    </Page>
  </Document>
);
// Generate a unique filename based on the text and current timestamp
const filename = `${payload.text.replace(/\s+/g, "-").toLowerCase()}-${Date.now()}.pdf`;
// Set the R2 key for the PDF file
const r2Key = `resumes/${filename}`;
// Set the upload parameters for R2
const uploadParams = {
Bucket: process.env.R2_BUCKET,
Key: r2Key,
Body: pdfBuffer,
ContentType: "application/pdf",
};
// Log the upload parameters
logger.log("Uploading to R2 with params", uploadParams);
// Upload the PDF to R2
await r2Client.send(new PutObjectCommand(uploadParams));
// Return the Bucket and R2 key for the uploaded PDF
return {
Bucket: process.env.R2_BUCKET,
Key: r2Key,
};
},
});
```
## Testing your task
To test this task in the dashboard, you can use the following payload:
```json
{
"text": "Hello, world!"
}
```
# Send a sequence of emails using Resend
This example will show you how to send a sequence of emails over several days using Resend with Trigger.dev.
## Overview
Each email is wrapped in `retry.onThrow`. This will retry the block of code if an error is thrown. This is useful when you don't want to retry the whole task, but just a part of it. The entire task will still use the default retry settings, so it can also retry.
Additionally, this task uses `wait.for` to wait for a certain amount of time before sending the next email. During the waiting time, the task will be paused and will not consume any resources.
## Task code
```ts trigger/email-sequence.ts
import { retry, task, wait } from "@trigger.dev/sdk/v3";
import { Resend } from "resend";
const resend = new Resend(process.env.RESEND_API_KEY);
export const emailSequence = task({
id: "email-sequence",
run: async (payload: { userId: string; email: string; name: string }) => {
console.log(`Start email sequence for user ${payload.userId}`, payload);
// Send the first email immediately
const firstEmailResult = await retry.onThrow(
async ({ attempt }) => {
const { data, error } = await resend.emails.send({
from: "hello@trigger.dev",
to: payload.email,
subject: "Welcome to Trigger.dev",
html: `
Hello ${payload.name},
Welcome to Trigger.dev
`,
});
if (error) {
// Throwing an error will trigger a retry of this block
throw error;
}
return data;
},
{ maxAttempts: 3 }
);
// Then wait 3 days
await wait.for({ days: 3 });
// Send the second email
const secondEmailResult = await retry.onThrow(
async ({ attempt }) => {
const { data, error } = await resend.emails.send({
from: "hello@trigger.dev",
to: payload.email,
subject: "Some tips for you",
html: `
Hello ${payload.name},
Here are some tips for you…
`,
});
if (error) {
// Throwing an error will trigger a retry of this block
throw error;
}
return data;
},
{ maxAttempts: 3 }
);
//etc...
},
});
```
## Testing your task
To test this task in the dashboard, you can use the following payload:
```json
{
"userId": "123",
"email": "", // Replace with your test email
"name": "Alice Testington"
}
```
# Scrape the top 3 articles from Hacker News and email yourself a summary every weekday
This example demonstrates how to scrape the top 3 articles from Hacker News using BrowserBase and Puppeteer, summarize them with ChatGPT and send a nicely formatted email summary to yourself every weekday using Resend.
export const packages_0 = "the Puppeteer library"
## Overview
In this example we'll be using a number of different tools and features to:
1. Scrape the content of the top 3 articles from Hacker News
2. Summarize each article
3. Email the summaries to yourself
And we'll be using the following tools and features:
* [Schedules](/tasks/scheduled) to run the task every weekday at 9 AM
* [Batch Triggering](/triggering#yourtask-batchtriggerandwait) to run separate child tasks for each article while the parent task waits for them all to complete
* [idempotencyKey](/triggering#idempotencykey) to prevent tasks being triggered multiple times
* [BrowserBase](https://browserbase.com/) to proxy the scraping of the Hacker News articles
* [Puppeteer](https://pptr.dev/) to scrape the articles linked from Hacker News
* [OpenAI](https://platform.openai.com/docs/overview) to summarize the articles
* [Resend](https://resend.com/) to send a nicely formatted email summary
**WEB SCRAPING:** When web scraping, you MUST use a proxy to comply with our terms of service. Direct scraping of third-party websites without the site owner's permission using Trigger.dev Cloud is prohibited and will result in account suspension. See [this example](/guides/examples/puppeteer#scrape-content-from-a-web-page) which uses a proxy.
## Prerequisites
* A project with [Trigger.dev initialized](/quick-start)
* [Puppeteer](https://pptr.dev/guides/installation) installed on your machine
* A [BrowserBase](https://browserbase.com/) account
* An [OpenAI](https://platform.openai.com/docs/overview) account
* A [Resend](https://resend.com/) account
## Build configuration
First up, add these build settings to your `trigger.config.ts` file:
```tsx trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { puppeteer } from "@trigger.dev/build/extensions/puppeteer";
export default defineConfig({
project: "",
// Your other config settings...
build: {
// This is required to use the Puppeteer library
extensions: [puppeteer()],
},
});
```
Learn more about [build configurations](/config/config-file#build-configuration) including setting default retry settings, customizing the build environment, and more.
### Environment variables
Set the following environment variables in your local `.env` file to run this task locally. Before deploying your task, set them in the [Trigger.dev dashboard](/deploy-environment-variables) or [using the SDK](/deploy-environment-variables#in-your-code):
```bash
BROWSERBASE_API_KEY=""
OPENAI_API_KEY=""
RESEND_API_KEY=""
```
### Task code
```tsx trigger/scrape-hacker-news.tsx
import { render } from "@react-email/render";
import { logger, schedules, task, wait } from "@trigger.dev/sdk/v3";
import { OpenAI } from "openai";
import puppeteer from "puppeteer-core";
import { Resend } from "resend";
import { HNSummaryEmail } from "./summarize-hn-email";
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const resend = new Resend(process.env.RESEND_API_KEY);
// Parent task (scheduled to run 9AM every weekday)
export const summarizeHackerNews = schedules.task({
id: "summarize-hacker-news",
cron: {
pattern: "0 9 * * 1-5",
timezone: "Europe/London",
}, // Run at 9 AM, Monday to Friday
run: async () => {
// Connect to BrowserBase to proxy the scraping of the Hacker News articles
const browser = await puppeteer.connect({
browserWSEndpoint: `wss://connect.browserbase.com?apiKey=${process.env.BROWSERBASE_API_KEY}`,
});
logger.info("Connected to Browserbase");
const page = await browser.newPage();
// Navigate to Hacker News and scrape top 3 articles
await page.goto("https://news.ycombinator.com/news", {
waitUntil: "networkidle0",
});
logger.info("Navigated to Hacker News");
const articles = await page.evaluate(() => {
const items = document.querySelectorAll(".athing");
return Array.from(items)
.slice(0, 3)
.map((item) => {
const titleElement = item.querySelector(".titleline > a");
const link = titleElement?.getAttribute("href");
const title = titleElement?.textContent;
return { title, link };
});
});
logger.info("Scraped top 3 articles", { articles });
await browser.close();
await wait.for({ seconds: 5 });
// Use batchTriggerAndWait to process articles
const summaries = await scrapeAndSummarizeArticle
.batchTriggerAndWait(
articles.map((article) => ({
payload: { title: article.title!, link: article.link! },
idempotencyKey: article.link,
}))
)
.then((batch) =>
batch.runs.filter((run) => run.ok).map((run) => run.output)
);
// Send email using Resend
await resend.emails.send({
from: "Hacker News Summary ",
to: ["james@trigger.dev"],
subject: "Your morning HN summary",
html: render(<HNSummaryEmail articles={summaries} />),
});
logger.info("Email sent successfully");
},
});
// Child task for scraping and summarizing individual articles
export const scrapeAndSummarizeArticle = task({
id: "scrape-and-summarize-articles",
retry: {
maxAttempts: 3,
minTimeoutInMs: 5000,
maxTimeoutInMs: 10000,
factor: 2,
randomize: true,
},
run: async ({ title, link }: { title: string; link: string }) => {
logger.info(`Summarizing ${title}`);
const browser = await puppeteer.connect({
browserWSEndpoint: `wss://connect.browserbase.com?apiKey=${process.env.BROWSERBASE_API_KEY}`,
});
const page = await browser.newPage();
// Prevent all assets from loading, images, stylesheets etc
await page.setRequestInterception(true);
page.on("request", (request) => {
if (
["script", "stylesheet", "image", "media", "font"].includes(
request.resourceType()
)
) {
request.abort();
} else {
request.continue();
}
});
await page.goto(link, { waitUntil: "networkidle0" });
logger.info(`Navigated to article: ${title}`);
// Extract the main content of the article
const content = await page.evaluate(() => {
const articleElement = document.querySelector("article") || document.body;
return articleElement.innerText.trim().slice(0, 1500); // Limit to 1500 characters
});
await browser.close();
logger.info(`Extracted content for article: ${title}`, { content });
// Summarize the content using ChatGPT
const response = await openai.chat.completions.create({
model: "gpt-4o",
messages: [
{
role: "user",
content: `Summarize this article in 2-3 concise sentences:\n\n${content}`,
},
],
});
logger.info(`Generated summary for article: ${title}`);
return {
title,
link,
summary: response.choices[0].message.content,
};
},
});
```
## Create your email template using React Email
To prevent the main example from becoming too cluttered, we'll create a separate file for our email template. It's formatted using [React Email](https://react.email/docs/introduction) components so you'll need to install the package to use it.
Notice how this file is imported into the main task code and passed to Resend to send the email.
```tsx summarize-hn-email.tsx
import React from "react";
import {
Html,
Head,
Body,
Container,
Section,
Heading,
Text,
Link,
} from "@react-email/components";
interface Article {
title: string;
link: string;
summary: string | null;
}
export const HNSummaryEmail: React.FC<{ articles: Article[] }> = ({ articles }) => (
  <Html>
    <Head />
    <Body>
      <Container>
        <Heading>Your Morning HN Summary</Heading>
        {/* Lay out each article as a linked title followed by its summary */}
        {articles.map((article, index) => (
          <Section key={index}>
            <Link href={article.link}>{article.title}</Link>
            <Text>{article.summary || "No summary available"}</Text>
          </Section>
        ))}
      </Container>
    </Body>
  </Html>
);
```
## Local development
To test this example task locally, be sure to install any packages from the build extensions you added to your `trigger.config.ts` file to your local machine. In this case, you need to install {packages_0}.
## Testing your task
To test this task in the dashboard, use the Test page and set the schedule date to "Now" to ensure the task triggers immediately. Then click "Run test" and wait for the task to complete.
# Track errors with Sentry
This example demonstrates how to track errors with Sentry using Trigger.dev.
## Overview
Automatically send errors and source maps to your Sentry project from your Trigger.dev tasks. Sending source maps to Sentry allows for more detailed stack traces when errors occur, as Sentry can map the minified code back to the original source code.
## Prerequisites
* A [Sentry](https://sentry.io) account and project
* A [Trigger.dev](https://trigger.dev) account and project
## Build configuration
To send errors to Sentry when there are errors in your tasks, you'll need to add this build configuration to your `trigger.config.ts` file. This will then run every time you deploy your project.
You will need to set the `SENTRY_AUTH_TOKEN` and `SENTRY_DSN` environment variables. You can find
the `SENTRY_AUTH_TOKEN` in your Sentry dashboard, in settings -> developer settings -> auth tokens
and the `SENTRY_DSN` in your Sentry dashboard, in settings -> projects -> your project -> client
keys (DSN). Add these to your `.env` file, and in your [Trigger.dev
dashboard](https://cloud.trigger.dev), under environment variables in your project's sidebar.
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { esbuildPlugin } from "@trigger.dev/build/extensions";
import { sentryEsbuildPlugin } from "@sentry/esbuild-plugin";
import * as Sentry from "@sentry/node";
export default defineConfig({
project: "",
// Your other config settings...
build: {
extensions: [
esbuildPlugin(
sentryEsbuildPlugin({
org: "",
project: "",
// Find this auth token in settings -> developer settings -> auth tokens
authToken: process.env.SENTRY_AUTH_TOKEN,
}),
{ placement: "last", target: "deploy" }
),
],
},
init: async () => {
Sentry.init({
// The Data Source Name (DSN) is a unique identifier for your Sentry project.
dsn: process.env.SENTRY_DSN,
// Update this to match the environment you want to track errors for
environment: process.env.NODE_ENV === "production" ? "production" : "development",
});
},
onFailure: async (payload, error, { ctx }) => {
Sentry.captureException(error, {
extra: {
payload,
ctx,
},
});
},
});
```
[Build extensions](/config/config-file#extensions) allow you to hook into the build system and
customize the build process or the resulting bundle and container image (in the case of
deploying). You can use pre-built extensions or create your own.
## Testing that errors are being sent to Sentry
To test that errors are being sent to Sentry, you need to create a task that will fail.
This task takes no payload, and will throw an error.
```ts trigger/sentry-error-test.ts
import { task } from "@trigger.dev/sdk/v3";
export const sentryErrorTest = task({
id: "sentry-error-test",
retry: {
// Only retry once
maxAttempts: 1,
},
run: async () => {
const error = new Error("This is a custom error that Sentry will capture");
error.cause = { additionalContext: "This is additional context" };
throw error;
},
});
```
After creating the task, deploy your project.
```bash npm
npx trigger.dev@latest deploy
```
```bash pnpm
pnpm dlx trigger.dev@latest deploy
```
```bash yarn
yarn dlx trigger.dev@latest deploy
```
Once deployed, navigate to the `test` page in the sidebar of your [Trigger.dev dashboard](https://cloud.trigger.dev), click on your `prod` environment, and select the `sentryErrorTest` task.
Run a test task with an empty payload by clicking the `Run test` button.
Your run should then fail, and if everything is set up correctly, you will see an error in the Sentry project dashboard shortly after.
# Process images using Sharp
This example demonstrates how to process images using the Sharp library with Trigger.dev.
export const packages_0 = "the Sharp image processing library"
## Overview
This task processes and watermarks an image using the Sharp library, and then uploads it to R2 storage.
## Prerequisites
* A project with [Trigger.dev initialized](/quick-start)
* The [Sharp](https://sharp.pixelplumbing.com/install) library installed on your machine
* An R2-compatible object storage service, such as [Cloudflare R2](https://developers.cloudflare.com/r2)
## Adding the build configuration
To use this example, you'll first need to add these build settings to your `trigger.config.ts` file:
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "",
// Your other config settings...
build: {
// This is required to use the Sharp library
external: ["sharp"],
},
});
```
Any packages that install or build a native binary should be added to external, as native binaries
cannot be bundled.
## Key features
* Resizes a JPEG image to 800x800 pixels
* Adds a watermark to the image, positioned in the bottom-right corner, using a PNG image
* Uploads the processed image to R2 storage
## Task code
```ts trigger/sharp-image-processing.ts
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import { logger, task } from "@trigger.dev/sdk/v3";
import fs from "fs/promises";
import os from "os";
import path from "path";
import sharp from "sharp";
// Initialize R2 client using your R2 account details
const r2Client = new S3Client({
region: "auto",
endpoint: process.env.R2_ENDPOINT,
credentials: {
accessKeyId: process.env.R2_ACCESS_KEY_ID ?? "",
secretAccessKey: process.env.R2_SECRET_ACCESS_KEY ?? "",
},
});
export const sharpProcessImage = task({
id: "sharp-process-image",
retry: { maxAttempts: 1 },
run: async (payload: { imageUrl: string; watermarkUrl: string }) => {
const { imageUrl, watermarkUrl } = payload;
const outputPath = path.join(os.tmpdir(), `output_${Date.now()}.jpg`);
const [imageResponse, watermarkResponse] = await Promise.all([
fetch(imageUrl),
fetch(watermarkUrl),
]);
const imageBuffer = await imageResponse.arrayBuffer();
const watermarkBuffer = await watermarkResponse.arrayBuffer();
const outputBuffer = await sharp(Buffer.from(imageBuffer))
  .resize(800, 800) // Resize the image to 800x800px
  .composite([
    {
      input: Buffer.from(watermarkBuffer),
      gravity: "southeast", // Position the watermark in the bottom-right corner
    },
  ])
  .jpeg() // Convert to jpeg
  .toBuffer(); // Convert to buffer
await fs.writeFile(outputPath, outputBuffer); // Write the buffer to file
const r2Key = `processed-images/${path.basename(outputPath)}`;
const uploadParams = {
  Bucket: process.env.R2_BUCKET,
  Key: r2Key,
  Body: await fs.readFile(outputPath),
};
const upload = new Upload({
  client: r2Client,
  params: uploadParams,
});
await upload.done();
logger.log("Image uploaded to R2 storage.", {
  path: `/${process.env.R2_BUCKET}/${r2Key}`,
});
await fs.unlink(outputPath); // Clean up the temporary file
return { r2Key }; // Return the key so the run output records the upload location
},
});
```
## Testing your task
To test this task in the dashboard, you can use the following payload:
```json
{
"imageUrl": "", // Replace with a URL to a JPEG image
"watermarkUrl": "" // Replace with a URL to a PNG watermark image
}
```
## Local development
To test this example task locally, be sure to install any packages from the build extensions you added to your `trigger.config.ts` file to your local machine. In this case, you need to install {packages_0}.
# Trigger a task from Stripe webhook events
This example demonstrates how to handle Stripe webhook events using Trigger.dev.
## Overview
This example shows how to set up a webhook handler in your existing app for incoming Stripe events. The handler triggers a task when a `checkout.session.completed` event is received. This is easily customisable to handle other Stripe events.
## Key features
* Shows how to create a Stripe webhook handler in your app
* Triggers a task from your backend when a `checkout.session.completed` event is received
## Environment variables
You'll need to configure the following environment variables for this example to work:
* `STRIPE_WEBHOOK_SECRET` The secret key used to verify the Stripe webhook signature.
* `TRIGGER_API_URL` Your Trigger.dev API URL: `https://api.trigger.dev`
* `TRIGGER_SECRET_KEY` Your Trigger.dev secret key
## Setting up the Stripe webhook handler
First you'll need to create a [Stripe webhook](https://stripe.com/docs/webhooks) handler route that listens for POST requests and verifies the Stripe signature.
Here are examples of how you can set up a handler using different frameworks:
```ts Next.js
// app/api/stripe-webhook/route.ts
import { NextResponse } from "next/server";
import { tasks } from "@trigger.dev/sdk/v3";
import Stripe from "stripe";
import type { stripeCheckoutCompleted } from "@/trigger/stripe-checkout-completed";
// 👆 **type-only** import
export async function POST(request: Request) {
const signature = request.headers.get("stripe-signature");
const payload = await request.text();
if (!signature || !payload) {
return NextResponse.json(
{ error: "Invalid Stripe payload/signature" },
{
status: 400,
}
);
}
const event = Stripe.webhooks.constructEvent(
payload,
signature,
process.env.STRIPE_WEBHOOK_SECRET as string
);
// Perform the check based on the event type
switch (event.type) {
case "checkout.session.completed": {
// Trigger the task only if the event type is "checkout.session.completed"
const { id } = await tasks.trigger<typeof stripeCheckoutCompleted>(
  "stripe-checkout-completed",
  event.data.object
);
return NextResponse.json({ runId: id });
}
default: {
// Return a response indicating that the event is not handled
return NextResponse.json(
{ message: "Event not handled" },
{
status: 200,
}
);
}
}
}
```
```ts Remix
// app/webhooks.stripe.ts
import { type ActionFunctionArgs, json } from "@remix-run/node";
import type { stripeCheckoutCompleted } from "src/trigger/stripe-webhook";
// 👆 **type-only** import
import { tasks } from "@trigger.dev/sdk/v3";
import Stripe from "stripe";
export async function action({ request }: ActionFunctionArgs) {
// Validate the Stripe webhook payload
const signature = request.headers.get("stripe-signature");
const payload = await request.text();
if (!signature || !payload) {
return json({ error: "Invalid Stripe payload/signature" }, { status: 400 });
}
const event = Stripe.webhooks.constructEvent(
payload,
signature,
process.env.STRIPE_WEBHOOK_SECRET as string
);
// Perform the check based on the event type
switch (event.type) {
case "checkout.session.completed": {
// Trigger the task only if the event type is "checkout.session.completed"
const { id } = await tasks.trigger<typeof stripeCheckoutCompleted>(
  "stripe-checkout-completed",
  event.data.object
);
return json({ runId: id });
}
default: {
// Return a response indicating that the event is not handled
return json({ message: "Event not handled" }, { status: 200 });
}
}
}
```
## Task code
This task is triggered when a `checkout.session.completed` event is received from Stripe.
```ts trigger/stripe-checkout-completed.ts
import { task } from "@trigger.dev/sdk/v3";
import type stripe from "stripe";
export const stripeCheckoutCompleted = task({
id: "stripe-checkout-completed",
run: async (payload: stripe.Checkout.Session) => {
// Add your custom logic for handling the checkout.session.completed event here
},
});
```
## Testing your task locally
To test everything is working you can use the Stripe CLI to send test events to your endpoint:
1. Install the [Stripe CLI](https://stripe.com/docs/stripe-cli#install), and login
2. Follow the instructions to [test your handler](https://docs.stripe.com/webhooks#test-webhook). This will include a temporary `STRIPE_WEBHOOK_SECRET` that you can use for testing.
3. When triggering the event, use the `checkout.session.completed` event type. With the Stripe CLI: `stripe trigger checkout.session.completed`
4. If your endpoint is set up correctly, you should see the Stripe events logged in your console with a status of `200`.
5. Then, check the [Trigger.dev](https://cloud.trigger.dev) dashboard and you should see the successful run of the `stripe-checkout-completed` task.
For more information on setting up and testing Stripe webhooks, refer to the [Stripe Webhook Documentation](https://stripe.com/docs/webhooks).
# Supabase database operations using Trigger.dev
These examples demonstrate how to run basic CRUD operations on a table in a Supabase database using Trigger.dev.
## Add a new user to a table in a Supabase database
This is a basic task which inserts a new row into a table from a Trigger.dev task.
### Key features
* Shows how to set up a Supabase client using the `@supabase/supabase-js` library
* Shows how to add a new row to a table using `insert`
### Prerequisites
* A [Supabase account](https://supabase.com/dashboard/) and a project set up
* In your Supabase project, create a table called `user_subscriptions`.
* In your `user_subscriptions` table, create a new column:
* `user_id`, with the data type: `text`
### Task code
```ts trigger/supabase-database-insert.ts
import { createClient } from "@supabase/supabase-js";
import { task } from "@trigger.dev/sdk/v3";
// Generate the Typescript types using the Supabase CLI: https://supabase.com/docs/guides/api/rest/generating-types
import { Database } from "database.types";
// Create a single Supabase client for interacting with your database
// 'Database' supplies the type definitions to supabase-js
const supabase = createClient<Database>(
// These details can be found in your Supabase project settings under `API`
process.env.SUPABASE_PROJECT_URL as string, // e.g. https://abc123.supabase.co - replace 'abc123' with your project ID
process.env.SUPABASE_SERVICE_ROLE_KEY as string // Your service role secret key
);
export const supabaseDatabaseInsert = task({
id: "add-new-user",
run: async (payload: { userId: string }) => {
const { userId } = payload;
// Insert a new row into the user_subscriptions table with the provided userId
const { error } = await supabase.from("user_subscriptions").insert({
user_id: userId,
});
// If there was an error inserting the new user, throw an error
if (error) {
throw new Error(`Failed to insert new user: ${error.message}`);
}
return {
message: `New user added successfully: ${userId}`,
};
},
});
```
This task uses your service role secret key to bypass Row Level Security. There are different ways
of configuring your [RLS
policies](https://supabase.com/docs/guides/database/postgres/row-level-security), so always make
sure you have the correct permissions set up for your project.
### Testing your task
To test this task in the [Trigger.dev dashboard](https://cloud.trigger.dev), you can use the following payload:
```json
{
"userId": "user_12345"
}
```
If the task completes successfully, you will see a new row in your `user_subscriptions` table with the `user_id` set to `user_12345`.
## Update a user's subscription on a table in a Supabase database
This task shows how to update a user's subscription on a table. It checks if the user already has a subscription and either inserts a new row or updates an existing row with the new plan.
This type of task is useful for managing user subscriptions, updating user details, or performing other operations you might need to do on a database table.
### Key features
* Shows how to set up a Supabase client using the `@supabase/supabase-js` library
* Adds a new row to the table if the user doesn't exist using `insert`
* Checks if the user already has a plan, and if they do updates the existing row using `update`
* Demonstrates how to use [AbortTaskRunError](https://trigger.dev/docs/errors-retrying#using-aborttaskrunerror) to stop the task run without retrying if an invalid plan type is provided
### Prerequisites
* A [Supabase account](https://supabase.com/dashboard/) and a project set up
* In your Supabase project, create a table called `user_subscriptions` (if you haven't already)
* In your `user_subscriptions` table, create these columns (if they don't already exist):
* `user_id`, with the data type: `text`
* `plan`, with the data type: `text`
* `updated_at`, with the data type: `timestamptz`
### Task code
```ts trigger/supabase-update-user-subscription.ts
import { createClient } from "@supabase/supabase-js";
import { AbortTaskRunError, task } from "@trigger.dev/sdk/v3";
// Generate the Typescript types using the Supabase CLI: https://supabase.com/docs/guides/api/rest/generating-types
import { Database } from "database.types";
// Define the allowed plan types
type PlanType = "hobby" | "pro" | "enterprise";
// Create a single Supabase client for interacting with your database
// 'Database' supplies the type definitions to supabase-js
const supabase = createClient<Database>(
// These details can be found in your Supabase project settings under `API`
process.env.SUPABASE_PROJECT_URL as string, // e.g. https://abc123.supabase.co - replace 'abc123' with your project ID
process.env.SUPABASE_SERVICE_ROLE_KEY as string // Your service role secret key
);
export const supabaseUpdateUserSubscription = task({
id: "update-user-subscription",
run: async (payload: { userId: string; newPlan: PlanType }) => {
const { userId, newPlan } = payload;
// Abort the task run without retrying if the new plan type is invalid
if (!["hobby", "pro", "enterprise"].includes(newPlan)) {
throw new AbortTaskRunError(
`Invalid plan type: ${newPlan}. Allowed types are 'hobby', 'pro', or 'enterprise'.`
);
}
// Query the user_subscriptions table to check if the user already has a subscription
const { data: existingSubscriptions } = await supabase
.from("user_subscriptions")
.select("user_id")
.eq("user_id", userId);
if (!existingSubscriptions || existingSubscriptions.length === 0) {
// If there are no existing users with the provided userId and plan, insert a new row
const { error: insertError } = await supabase.from("user_subscriptions").insert({
user_id: userId,
plan: newPlan,
updated_at: new Date().toISOString(),
});
// If there was an error inserting the new subscription, throw an error
if (insertError) {
throw new Error(`Failed to insert user subscription: ${insertError.message}`);
}
} else {
// If the user already has a subscription, update their existing row
const { error: updateError } = await supabase
.from("user_subscriptions")
// Set the plan to the new plan and update the timestamp
.update({ plan: newPlan, updated_at: new Date().toISOString() })
.eq("user_id", userId);
// If there was an error updating the subscription, throw an error
if (updateError) {
throw new Error(`Failed to update user subscription: ${updateError.message}`);
}
}
// Return an object with the userId and newPlan
return {
userId,
newPlan,
};
},
});
```
This task uses your service role secret key to bypass Row Level Security. There are different ways
of configuring your [RLS
policies](https://supabase.com/docs/guides/database/postgres/row-level-security), so always make
sure you have the correct permissions set up for your project.
### Testing your task
To test this task in the [Trigger.dev dashboard](https://cloud.trigger.dev), you can use the following payload:
```json
{
"userId": "user_12345",
"newPlan": "pro"
}
```
If the task completes successfully, you will see a new row in your `user_subscriptions` table with the `user_id` set to `user_12345`, the `plan` set to `pro`, and the `updated_at` timestamp updated to the current time.
## Learn more about Supabase and Trigger.dev
### Full walkthrough guides from development to deployment
Learn how to trigger a task from a Supabase edge function when a URL is visited.
Learn how to trigger a task from a Supabase edge function when an event occurs in your database.
### Task examples with code you can copy and paste
Run basic CRUD operations on a table in a Supabase database using Trigger.dev.
Download a video from a URL and upload it to Supabase Storage using S3.
# Uploading files to Supabase Storage
This example demonstrates how to upload files to Supabase Storage using Trigger.dev.
## Overview
This example shows how to upload a video file to Supabase Storage using two different methods.
* [Upload to Supabase Storage using the Supabase client](/guides/examples/supabase-storage-upload#example-1-upload-to-supabase-storage-using-the-supabase-storage-client)
* [Upload to Supabase Storage using the AWS S3 client](/guides/examples/supabase-storage-upload#example-2-upload-to-supabase-storage-using-the-aws-s3-client)
## Upload to Supabase Storage using the Supabase client
This task downloads a video from a provided URL into a buffer and then uploads it to Supabase Storage using the Supabase client.
### Task code
```ts trigger/supabase-storage-upload.ts
import { createClient } from "@supabase/supabase-js";
import { logger, task } from "@trigger.dev/sdk/v3";
import fetch from "node-fetch";
// Initialize Supabase client
const supabase = createClient(
process.env.SUPABASE_PROJECT_URL ?? "",
process.env.SUPABASE_SERVICE_ROLE_KEY ?? ""
);
export const supabaseStorageUpload = task({
id: "supabase-storage-upload",
run: async (payload: { videoUrl: string }) => {
const { videoUrl } = payload;
const bucket = "my_bucket"; // Replace "my_bucket" with your bucket name
const objectKey = `video_${Date.now()}.mp4`;
// Download video data as a buffer
const response = await fetch(videoUrl);
if (!response.ok) {
throw new Error(`HTTP error! status: ${response.status}`);
}
const videoBuffer = await response.buffer();
// Upload the video directly to Supabase Storage
const { error } = await supabase.storage.from(bucket).upload(objectKey, videoBuffer, {
contentType: "video/mp4",
upsert: true,
});
if (error) {
throw new Error(`Error uploading video: ${error.message}`);
}
logger.log(`Video uploaded to Supabase Storage bucket`, { objectKey });
// Return the video object key and bucket
return {
objectKey,
bucket: bucket,
};
},
});
```
### Testing your task
To test this task in the dashboard, you can use the following payload:
```json
{
"videoUrl": "" // Replace with the URL of the video you want to upload
}
```
## Upload to Supabase Storage using the AWS S3 client
This task downloads a video from a provided URL into a buffer and then uploads it to Supabase Storage using the AWS S3 client.
### Key features
* Fetches a video from a provided URL
* Uploads the video file to Supabase Storage using S3
### Task code
```ts trigger/supabase-storage-upload-s3.ts
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";
import { logger, task } from "@trigger.dev/sdk/v3";
import fetch from "node-fetch";
// Initialize S3 client for Supabase Storage
const s3Client = new S3Client({
region: process.env.SUPABASE_REGION, // Your Supabase project's region e.g. "us-east-1"
endpoint: `https://${process.env.SUPABASE_PROJECT_ID}.supabase.co/storage/v1/s3`,
credentials: {
// These credentials can be found in your supabase storage settings, under 'S3 access keys'
accessKeyId: process.env.SUPABASE_ACCESS_KEY_ID ?? "",
secretAccessKey: process.env.SUPABASE_SECRET_ACCESS_KEY ?? "",
},
});
export const supabaseStorageUploadS3 = task({
id: "supabase-storage-upload-s3",
run: async (payload: { videoUrl: string }) => {
const { videoUrl } = payload;
// Fetch the video as an ArrayBuffer
const response = await fetch(videoUrl);
const videoArrayBuffer = await response.arrayBuffer();
const videoBuffer = Buffer.from(videoArrayBuffer);
const bucket = "my_bucket"; // Replace "my_bucket" with your bucket name
const objectKey = `video_${Date.now()}.mp4`;
// Upload the video directly to Supabase Storage
await s3Client.send(
new PutObjectCommand({
Bucket: bucket,
Key: objectKey,
Body: videoBuffer,
})
);
logger.log(`Video uploaded to Supabase Storage bucket`, { objectKey });
// Return the video object key
return {
objectKey,
bucket: bucket,
};
},
});
```
### Testing your task
To test this task in the dashboard, you can use the following payload:
```json
{
"videoUrl": "" // Replace with the URL of the video you want to upload
}
```
## Learn more about Supabase and Trigger.dev
### Full walkthrough guides from development to deployment
Learn how to trigger a task from a Supabase edge function when a URL is visited.
Learn how to trigger a task from a Supabase edge function when an event occurs in your database.
### Task examples with code you can copy and paste
Run basic CRUD operations on a table in a Supabase database using Trigger.dev.
Download a video from a URL and upload it to Supabase Storage using S3.
# Using the Vercel AI SDK
This example demonstrates how to use the Vercel AI SDK with Trigger.dev.
## Overview
The [Vercel AI SDK](https://www.npmjs.com/package/ai) is a simple way to use AI models from many different providers, including OpenAI, Microsoft Azure, Google Generative AI, Anthropic, Amazon Bedrock, Groq, Perplexity and [more](https://sdk.vercel.ai/providers/ai-sdk-providers).
It provides a consistent interface to interact with the different AI models, so you can easily switch between them without needing to change your code.
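For example, switching a `generateText` call from OpenAI to Anthropic only requires changing the model argument. Here's a minimal sketch, assuming you've installed `@ai-sdk/anthropic`, set the `ANTHROPIC_API_KEY` environment variable, and that the model names shown are still current:
```ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai"; // Requires OPENAI_API_KEY
import { anthropic } from "@ai-sdk/anthropic"; // Requires ANTHROPIC_API_KEY

async function compareProviders(prompt: string) {
  // The call shape is identical for both providers; only the model changes
  const fromOpenAI = await generateText({ model: openai("gpt-4-turbo"), prompt });
  const fromAnthropic = await generateText({
    model: anthropic("claude-3-5-sonnet-20240620"), // Example model name; check for a current one
    prompt,
  });
  return { openai: fromOpenAI.text, anthropic: fromAnthropic.text };
}
```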
## Generate text using OpenAI
This task shows how to use the Vercel AI SDK to generate text from a prompt with OpenAI.
### Task code
```ts trigger/vercel-ai-sdk-openai.ts
import { logger, task } from "@trigger.dev/sdk/v3";
import { generateText } from "ai";
// Install the package of the AI model you want to use, in this case OpenAI
import { openai } from "@ai-sdk/openai"; // Ensure OPENAI_API_KEY environment variable is set
export const openaiTask = task({
id: "openai-text-generate",
run: async (payload: { prompt: string }) => {
const chatCompletion = await generateText({
model: openai("gpt-4-turbo"),
// Add a system message which will be included with the prompt
system: "You are a friendly assistant!",
// The prompt passed in from the payload
prompt: payload.prompt,
});
// Log the generated text
logger.log("chatCompletion text:" + chatCompletion.text);
return chatCompletion;
},
});
```
## Testing your task
To test this task in the dashboard, you can use the following payload:
```json
{
"prompt": "What is the meaning of life?"
}
```
## Learn more about Vercel and Trigger.dev
### Walk-through guides from development to deployment
Learn how to setup Trigger.dev with Next.js, using either the pages or app router.
Learn how to create a webhook handler for incoming webhooks in a Next.js app, and trigger a task from it.
### Task examples
Learn how to automatically sync environment variables from your Vercel projects to Trigger.dev.
Learn how to use the Vercel AI SDK, which is a simple way to use AI models from different
providers, including OpenAI, Anthropic, Amazon Bedrock, Groq, Perplexity etc.
# Syncing environment variables from your Vercel projects
This example demonstrates how to sync environment variables from your Vercel project to Trigger.dev.
## Build configuration
To sync environment variables, you just need to add our build extension to your `trigger.config.ts` file. This extension will then automatically run every time you deploy your Trigger.dev project.
You need to set the `VERCEL_ACCESS_TOKEN` and `VERCEL_PROJECT_ID` environment variables, or pass
in the token and project ID as arguments to the `syncVercelEnvVars` build extension. If you're
working with a team project, you'll also need to set `VERCEL_TEAM_ID`, which can be found in your
team settings. You can find / generate the `VERCEL_ACCESS_TOKEN` in your Vercel
[dashboard](https://vercel.com/account/settings/tokens). Make sure the scope of the token covers
the project with the environment variables you want to sync.
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { syncVercelEnvVars } from "@trigger.dev/build/extensions/core";
export default defineConfig({
project: "",
// Your other config settings...
build: {
// Add the syncVercelEnvVars build extension
extensions: [syncVercelEnvVars()],
},
});
```
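If you'd rather pass the credentials explicitly instead of relying on the environment variables, the extension also accepts options. Here's a sketch; the option names (`projectId`, `vercelAccessToken`, `vercelTeamId`) are assumptions, so verify them against the extension's exported types before using:
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { syncVercelEnvVars } from "@trigger.dev/build/extensions/core";

export default defineConfig({
  project: "",
  // Your other config settings...
  build: {
    extensions: [
      // Option names are assumptions; check the extension's types for the exact API
      syncVercelEnvVars({
        projectId: process.env.VERCEL_PROJECT_ID,
        vercelAccessToken: process.env.VERCEL_ACCESS_TOKEN,
        vercelTeamId: process.env.VERCEL_TEAM_ID, // Only needed for team projects
      }),
    ],
  },
});
```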
[Build extensions](/config/config-file#extensions) allow you to hook into the build system and
customize the build process or the resulting bundle and container image (in the case of
deploying). You can use pre-built extensions or create your own.
## Running the sync operation
To sync the environment variables, all you need to do is run our `deploy` command. You should see some output in the console indicating that the environment variables have been synced, and they should now be available in your Trigger.dev dashboard.
```bash
npx trigger.dev@latest deploy
```
## Learn more about Vercel and Trigger.dev
### Walk-through guides from development to deployment
Learn how to setup Trigger.dev with Next.js, using either the pages or app router.
Learn how to create a webhook handler for incoming webhooks in a Next.js app, and trigger a task from it.
### Task examples
Learn how to automatically sync environment variables from your Vercel projects to Trigger.dev.
Learn how to use the Vercel AI SDK, which is a simple way to use AI models from different
providers, including OpenAI, Anthropic, Amazon Bedrock, Groq, Perplexity etc.
# Bun guide
This guide will show you how to setup Trigger.dev with Bun
export const framework_0 = "Bun"
A specific Bun version is currently required for the dev command to work. This is due to a [bug](https://github.com/oven-sh/bun/issues/13799) with IPC. Please use Bun version 1.1.24 or lower: `curl -fsSL https://bun.sh/install | bash -s -- bun-v1.1.24`
We now have experimental support for Bun. This guide will show you how to set up Trigger.dev in your existing Bun project, test an example task, and view the run.
The trigger.dev CLI does not yet support Bun, so you will need to run the CLI using Node.js. Bun
will still be used to execute your tasks, even in the `dev` environment.
## Prerequisites
* Set up a project in {framework_0}
* Ensure TypeScript is installed
* [Create a Trigger.dev account](https://cloud.trigger.dev)
* [Create a new Trigger.dev project](/guides/dashboard/creating-a-project)
## Initial setup
The easiest way to get started is to use the CLI. It will add Trigger.dev to your existing project, create a `/trigger` folder and give you an example task.
Run this command in the root of your project to get started:
```bash npm
npx trigger.dev@latest init --runtime bun
```
```bash pnpm
pnpm dlx trigger.dev@latest init --runtime bun
```
```bash yarn
yarn dlx trigger.dev@latest init --runtime bun
```
It will do a few things:
1. Log you into the CLI if you're not already logged in.
2. Create a `trigger.config.ts` file in the root of your project.
3. Ask where you'd like to create the `/trigger` directory.
4. Create the `/src/trigger` directory with an example task, `/src/trigger/example.[ts/js]`.
Install the "Hello World" example task when prompted. We'll use this task to test the setup.
Open the `/src/trigger/example.ts` file and replace the contents with the following:
```ts example.ts
import { Database } from "bun:sqlite";
import { task } from "@trigger.dev/sdk/v3";

export const bunTask = task({
  id: "bun-task",
  run: async (payload: { query: string }) => {
    const db = new Database(":memory:");
    const query = db.query("select 'Hello world' as message;");
    console.log(query.get()); // => { message: "Hello world" }

    return {
      message: "Query executed",
    };
  },
});
```
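Once the dev server (next step) is running, you could also trigger this task from elsewhere in your backend code. A minimal sketch, assuming the example file lives at `src/trigger/example.ts` and `TRIGGER_SECRET_KEY` is set:
```ts
import { bunTask } from "./src/trigger/example";

// Queue a run; the payload shape matches the task's `run` signature
const handle = await bunTask.trigger({ query: "select 'Hello world' as message;" });
console.log(handle.id);
```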
The CLI `dev` command runs a server for your tasks. It watches for changes in your `/trigger` directory and communicates with the Trigger.dev platform to register your tasks, perform runs, and send data back and forth.
It can also update your `@trigger.dev/*` packages to prevent version mismatches and failed deploys. You will always be prompted first.
```bash npm
npx trigger.dev@latest dev
```
```bash pnpm
pnpm dlx trigger.dev@latest dev
```
```bash yarn
yarn dlx trigger.dev@latest dev
```
The CLI `dev` command spits out various useful URLs. Right now we want to visit the Test page.
You should see our Example task in the list; select it. Most tasks have a "payload" which you enter in the JSON editor, but our example task doesn't need any input.
Press the "Run test" button.
![Test page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/test-page.png)
Congratulations, you should see the run page which will live reload showing you the current state of the run.
![Run page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/run-page.png)
If you go back to your terminal you'll see that the dev command also shows the task status and links to the run log.
![Terminal showing completed run](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/terminal-completed-run.png)
## Known issues
* Certain OpenTelemetry instrumentation will not work with Bun, because Bun does not support Node's `register` hook. This means that some libraries that rely on this hook will not work with Bun.
# Drizzle setup guide
This guide will show you how to set up Drizzle ORM with Trigger.dev
## Overview
This guide will show you how to set up [Drizzle ORM](https://orm.drizzle.team/) with Trigger.dev, test and view an example task run.
## Prerequisites
* An existing Node.js project with a `package.json` file
* Ensure TypeScript is installed
* A [PostgreSQL](https://www.postgresql.org/) database server running locally, or accessible via a connection string
* Drizzle ORM [installed and initialized](https://orm.drizzle.team/docs/get-started) in your project
* A `DATABASE_URL` environment variable set in your `.env` file, pointing to your PostgreSQL database (e.g. `postgresql://user:password@localhost:5432/dbname`)
## Initial setup (optional)
Follow these steps if you don't already have Trigger.dev set up in your project.
The easiest way to get started is to use the CLI. It will add Trigger.dev to your existing project, create a `/trigger` folder and give you an example task.
Run this command in the root of your project to get started:
```bash npm
npx trigger.dev@latest init
```
```bash pnpm
pnpm dlx trigger.dev@latest init
```
```bash yarn
yarn dlx trigger.dev@latest init
```
It will do a few things:
1. Log you into the CLI if you're not already logged in.
2. Create a `trigger.config.ts` file in the root of your project.
3. Ask where you'd like to create the `/trigger` directory.
4. Create the `/trigger` directory with an example task, `/trigger/example.[ts/js]`.
Install the "Hello World" example task when prompted. We'll use this task to test the setup.
The CLI `dev` command runs a server for your tasks. It watches for changes in your `/trigger` directory and communicates with the Trigger.dev platform to register your tasks, perform runs, and send data back and forth.
It can also update your `@trigger.dev/*` packages to prevent version mismatches and failed deploys. You will always be prompted first.
```bash npm
npx trigger.dev@latest dev
```
```bash pnpm
pnpm dlx trigger.dev@latest dev
```
```bash yarn
yarn dlx trigger.dev@latest dev
```
The CLI `dev` command spits out various useful URLs. Right now we want to visit the Test page.
You should see our Example task in the list; select it. Most tasks have a "payload" which you enter in the JSON editor, but our example task doesn't need any input.
Press the "Run test" button.
![Test page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/test-page.png)
Congratulations, you should see the run page which will live reload showing you the current state of the run.
![Run page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/run-page.png)
If you go back to your terminal you'll see that the dev command also shows the task status and links to the run log.
![Terminal showing completed run](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/terminal-completed-run.png)
## Creating a task using Drizzle and deploying it to production
First, create a new task file in your `trigger` folder.
This is a simple task that will add a new user to your database; we will call it `drizzle-add-new-user`.
For this task to work correctly, you will need to have a `users` table schema defined with Drizzle
that includes `name`, `age` and `email` fields.
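If you don't have such a schema yet, a minimal sketch (assuming it lives at `src/db/schema.ts`) could look like this:
```ts src/db/schema.ts
import { pgTable, serial, text, integer } from "drizzle-orm/pg-core";

// Minimal users table with the fields the task below expects
export const users = pgTable("users", {
  id: serial("id").primaryKey(),
  name: text("name").notNull(),
  age: integer("age").notNull(),
  email: text("email").notNull(),
});
```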
```ts /trigger/drizzle-add-new-user.ts
import { task } from "@trigger.dev/sdk/v3";
import { drizzle } from "drizzle-orm/node-postgres";
import { users } from "src/db/schema";

// Initialize Drizzle client
const db = drizzle(process.env.DATABASE_URL!);

export const addNewUser = task({
  id: "drizzle-add-new-user",
  run: async (payload: typeof users.$inferInsert) => {
    // Create the new user and return the inserted row
    const [user] = await db.insert(users).values(payload).returning();

    return {
      createdUser: user,
      message: "User created successfully",
    };
  },
});
```
Next, in your `trigger.config.js` file, add `pg` to the `externals` array. `pg` is a non-blocking PostgreSQL client for Node.js.
It is marked as an external to ensure that it is not bundled into the task's bundle, and instead will be installed and loaded from `node_modules` at runtime.
```js /trigger.config.js
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
  project: "", // Your project reference
  // Your other config settings...
  build: {
    externals: ["pg"],
  },
});
```
Once the build configuration is added, you can now deploy your task using the Trigger.dev CLI.
```bash npm
npx trigger.dev@latest deploy
```
```bash pnpm
pnpm dlx trigger.dev@latest deploy
```
```bash yarn
yarn dlx trigger.dev@latest deploy
```
In your Trigger.dev dashboard sidebar click "Environment Variables", and then the "New environment variable" button.
![Environment variables page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/environment-variables-page.jpg)
You can add values for your local dev environment, staging, and prod. In this case, we will add the `DATABASE_URL` for the production environment.
![Environment variables panel](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/environment-variables-panel.jpg)
To test this task, go to the 'test' page in the Trigger.dev dashboard and run the task with the following payload:
```json
{
"name": "", // e.g. "John Doe"
"age": "", // e.g. 25
"email": "" // e.g. "john@doe.test"
}
```
Congratulations! You should now see a new completed run, and a new user with the credentials you provided should be added to your database.
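You can also trigger the task from your backend code instead of the dashboard. A minimal sketch, assuming `TRIGGER_SECRET_KEY` is set for the environment you want to target:
```ts
import { addNewUser } from "./trigger/drizzle-add-new-user";

// Queue a run with a payload matching the users insert type
const handle = await addNewUser.trigger({
  name: "John Doe",
  age: 25,
  email: "john@doe.test",
});
console.log(handle.id);
```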
## Useful next steps
Learn what tasks are and their options
Learn how to write your own tasks
Learn how to deploy your task manually using the CLI
Learn how to deploy your task using GitHub actions
# Next.js setup guide
This guide will show you how to set up Trigger.dev in your existing Next.js project, test an example task, and view the run.
This guide can be followed for both the App and Pages routers, as well as Server Actions.
## Prerequisites
* Set up a project in Next.js
* Ensure TypeScript is installed
* [Create a Trigger.dev account](https://cloud.trigger.dev)
* [Create a new Trigger.dev project](/guides/dashboard/creating-a-project)
## Initial setup
The easiest way to get started is to use the CLI. It will add Trigger.dev to your existing project, create a `/trigger` folder and give you an example task.
Run this command in the root of your project to get started:
```bash npm
npx trigger.dev@latest init
```
```bash pnpm
pnpm dlx trigger.dev@latest init
```
```bash yarn
yarn dlx trigger.dev@latest init
```
It will do a few things:
1. Log you into the CLI if you're not already logged in.
2. Create a `trigger.config.ts` file in the root of your project.
3. Ask where you'd like to create the `/trigger` directory.
4. Create the `/trigger` directory with an example task, `/trigger/example.[ts/js]`.
Install the "Hello World" example task when prompted. We'll use this task to test the setup.
The CLI `dev` command runs a server for your tasks. It watches for changes in your `/trigger` directory and communicates with the Trigger.dev platform to register your tasks, perform runs, and send data back and forth.
It can also update your `@trigger.dev/*` packages to prevent version mismatches and failed deploys. You will always be prompted first.
```bash npm
npx trigger.dev@latest dev
```
```bash pnpm
pnpm dlx trigger.dev@latest dev
```
```bash yarn
yarn dlx trigger.dev@latest dev
```
The CLI `dev` command spits out various useful URLs. Right now we want to visit the Test page.
You should see our Example task in the list; select it. Most tasks have a "payload" which you enter in the JSON editor, but our example task doesn't need any input.
Press the "Run test" button.
![Test page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/test-page.png)
Congratulations, you should see the run page which will live reload showing you the current state of the run.
![Run page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/run-page.png)
If you go back to your terminal you'll see that the dev command also shows the task status and links to the run log.
![Terminal showing completed run](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/terminal-completed-run.png)
## Set your secret key locally
Set your `TRIGGER_SECRET_KEY` environment variable in your `.env.local` file if using the Next.js App router or `.env` file if using Pages router. This key is used to authenticate with Trigger.dev, so you can trigger runs from your Next.js app. Visit the API Keys page in the dashboard and select the DEV secret key.
![How to find your secret key](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/api-keys.png)
For more information on authenticating with Trigger.dev, see the [API keys page](/apikeys).
## Triggering your task in Next.js
Here are the steps to trigger your task in the Next.js App and Pages router and Server Actions. Alternatively, check out this repo for a [full working example](https://github.com/triggerdotdev/example-projects/tree/main/nextjs/server-actions/my-app) of a Next.js app with a Trigger.dev task triggered using a Server Action.
Add a Route Handler by creating a `route.ts` file (or `route.js` file) in the `app/api` directory like this: `app/api/hello-world/route.ts`.
Add this code to your `route.ts` file which imports your task along with `NextResponse` to handle the API route response:
```ts app/api/hello-world/route.ts
// Next.js API route support: https://nextjs.org/docs/api-routes/introduction
import type { helloWorldTask } from "@/trigger/example";
import { tasks } from "@trigger.dev/sdk/v3";
import { NextResponse } from "next/server";

// tasks.trigger also works with the edge runtime
// export const runtime = "edge";

export async function GET() {
  const handle = await tasks.trigger<typeof helloWorldTask>("hello-world", "James");

  return NextResponse.json(handle);
}
```
Run your Next.js app:
```bash npm
npm run dev
```
```bash pnpm
pnpm run dev
```
```bash yarn
yarn dev
```
Run the dev server from Step 2 of the [Initial Setup](/guides/frameworks/nextjs#initial-setup) section above if it's not already running:
```bash npm
npx trigger.dev@latest dev
```
```bash pnpm
pnpm dlx trigger.dev@latest dev
```
```bash yarn
yarn dlx trigger.dev@latest dev
```
Now visit the URL in your browser to trigger the task. Ensure the port number is the same as the one you're running your Next.js app on. For example, if you're running your Next.js app on port 3000, visit:
```bash
http://localhost:3000/api/hello-world
```
You should see the CLI log the task run with a link to view the logs in the dashboard.
![Trigger.dev CLI showing a successful run](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/trigger-cli-run-success.png)
Visit the [Trigger.dev dashboard](https://cloud.trigger.dev) to see your run.
Create an `actions.ts` file in the `app/api` directory and add this code, which imports your `helloWorldTask` task. Make sure to include `"use server";` at the top of the file.
```ts app/api/actions.ts
"use server";
import type { helloWorldTask } from "@/trigger/example";
import { tasks } from "@trigger.dev/sdk/v3";
export async function myTask() {
try {
const handle = await tasks.trigger(
"hello-world",
"James"
);
return { handle };
} catch (error) {
console.error(error);
return {
error: "something went wrong",
};
}
}
```
For the purposes of this guide, we'll create a button with an `onClick` event that triggers your task. We'll add this to the `page.tsx` file so we can trigger the task by clicking the button. Make sure to import your task and include `"use client";` at the top of your file.
```ts app/page.tsx
"use client";
import { myTask } from "./actions";
export default function Home() {
return (
);
}
```
Run your Next.js app:
```bash npm
npm run dev
```
```bash pnpm
pnpm run dev
```
```bash yarn
yarn dev
```
Open your app in a browser, making sure the port number is the same as the one you're running your Next.js app on. For example, if you're running your Next.js app on port 3000, visit:
```bash
http://localhost:3000
```
Run the dev server from Step 2 of the [Initial Setup](/guides/frameworks/nextjs#initial-setup) section above if it's not already running:
```bash npm
npx trigger.dev@latest dev
```
```bash pnpm
pnpm dlx trigger.dev@latest dev
```
```bash yarn
yarn dlx trigger.dev@latest dev
```
Then click the button we created in your app to trigger the task. You should see the CLI log the task run with a link to view the logs.
![Trigger.dev CLI showing a successful run](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/trigger-cli-run-success.png)
Visit the [Trigger.dev dashboard](https://cloud.trigger.dev) to see your run.
Create an API route in the `pages/api` directory. Then create a `hello-world.ts` (or `hello-world.js`) file for your task and copy this code example:
```ts pages/api/hello-world.ts
// Next.js API route support: https://nextjs.org/docs/api-routes/introduction
import type { helloWorldTask } from "@/trigger/example";
import { tasks } from "@trigger.dev/sdk/v3";
import type { NextApiRequest, NextApiResponse } from "next";

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse<{ id: string }>
) {
  const handle = await tasks.trigger<typeof helloWorldTask>("hello-world", "James");

  res.status(200).json(handle);
}
```
Run your Next.js app:
```bash npm
npm run dev
```
```bash pnpm
pnpm run dev
```
```bash yarn
yarn dev
```
Run the dev server from Step 2 of the [Initial Setup](/guides/frameworks/nextjs#initial-setup) section above if it's not already running:
```bash npm
npx trigger.dev@latest dev
```
```bash pnpm
pnpm dlx trigger.dev@latest dev
```
```bash yarn
yarn dlx trigger.dev@latest dev
```
Now visit the URL in your browser to trigger the task. Ensure the port number is the same as the one you're running your Next.js app on. For example, if you're running your Next.js app on port 3000, visit:
```bash
http://localhost:3000/api/hello-world
```
You should see the CLI log the task run with a link to view the logs in the dashboard.
![Trigger.dev CLI showing a successful run](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/trigger-cli-run-success.png)
Visit the [Trigger.dev dashboard](https://cloud.trigger.dev) to see your run.
## Automatically sync environment variables from your Vercel project (optional)
If you want to automatically sync environment variables from your Vercel project to Trigger.dev, you can add our `syncVercelEnvVars` build extension to your `trigger.config.ts` file.
You need to set the `VERCEL_ACCESS_TOKEN` and `VERCEL_PROJECT_ID` environment variables, or pass
in the token and project ID as arguments to the `syncVercelEnvVars` build extension. If you're
working with a team project, you'll also need to set `VERCEL_TEAM_ID`, which can be found in your
team settings. You can find / generate the `VERCEL_ACCESS_TOKEN` in your Vercel
[dashboard](https://vercel.com/account/settings/tokens). Make sure the scope of the token covers
the project with the environment variables you want to sync.
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { syncVercelEnvVars } from "@trigger.dev/build/extensions/core";
export default defineConfig({
  project: "",
  // Your other config settings...
  build: {
    extensions: [syncVercelEnvVars()],
  },
});
```
For more information, see our [Vercel sync environment
variables](/guides/examples/vercel-sync-env-vars) guide.
## Manually add your environment variables (optional)
If you have any environment variables in your tasks, be sure to add them in the dashboard so deployed code runs successfully. In Node.js, these environment variables are accessed in your code using `process.env.MY_ENV_VAR`.
In the sidebar select the "Environment Variables" page, then press the "New environment variable" button.
![Environment variables page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/environment-variables-page.jpg)
You can add values for your local dev environment, staging, and prod.
![Environment variables panel](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/environment-variables-panel.jpg)
You can also add environment variables in code by following the steps on the [Environment Variables page](/deploy-environment-variables#in-your-code).
## Deploying your task to Trigger.dev
For this guide, we'll manually deploy your task by running the [CLI deploy command](/cli-deploy) below. Other ways to deploy are listed in the next section.
```bash npm
npx trigger.dev@latest deploy
```
```bash pnpm
pnpm dlx trigger.dev@latest deploy
```
```bash yarn
yarn dlx trigger.dev@latest deploy
```
### Other ways to deploy
Use GitHub Actions to automatically deploy your tasks whenever new code is pushed and when the `trigger` directory has changes in it. Follow [this guide](/github-actions) to set up GitHub Actions.
We're working on adding an official [Vercel integration](/vercel-integration) which you can follow the progress of [here](https://feedback.trigger.dev/p/vercel-integration-3).
## Troubleshooting & extra resources
### Revalidation from your Trigger.dev tasks
[Revalidation](https://vercel.com/docs/incremental-static-regeneration/quickstart#on-demand-revalidation) allows you to purge the cache for an ISR route. To revalidate an ISR route from a Trigger.dev task, you have to set up a handler for the `revalidate` event. This is an API route that you can add to your Next.js app.
This handler will run the `revalidatePath` function from Next.js, which purges the cache for the given path.
The handlers are slightly different for the App and Pages router:
#### Revalidation handler: App Router
If you are using the App router, create a new revalidation route at `app/api/revalidate/path/route.ts`:
```ts app/api/revalidate/path/route.ts
import { NextRequest, NextResponse } from "next/server";
import { revalidatePath } from "next/cache";

export async function POST(request: NextRequest) {
  try {
    const { path, type, secret } = await request.json();

    // Create a REVALIDATION_SECRET and set it in your environment variables
    if (secret !== process.env.REVALIDATION_SECRET) {
      return NextResponse.json({ message: "Invalid secret" }, { status: 401 });
    }

    if (!path) {
      return NextResponse.json({ message: "Path is required" }, { status: 400 });
    }

    revalidatePath(path, type);

    return NextResponse.json({ revalidated: true });
  } catch (err) {
    console.error("Error revalidating path:", err);
    return NextResponse.json({ message: "Error revalidating path" }, { status: 500 });
  }
}
```
#### Revalidation handler: Pages Router
If you are using the Pages router, create a new revalidation route at `pages/api/revalidate/path.ts`:
```ts pages/api/revalidate/path.ts
import type { NextApiRequest, NextApiResponse } from "next";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  try {
    if (req.method !== "POST") {
      return res.status(405).json({ message: "Method not allowed" });
    }

    const { path, secret } = req.body;

    if (secret !== process.env.REVALIDATION_SECRET) {
      return res.status(401).json({ message: "Invalid secret" });
    }

    if (!path) {
      return res.status(400).json({ message: "Path is required" });
    }

    await res.revalidate(path);

    return res.json({ revalidated: true });
  } catch (err) {
    console.error("Error revalidating path:", err);
    return res.status(500).json({ message: "Error revalidating path" });
  }
}
```
#### Revalidation task
This task takes a `path` as a payload and will revalidate the path you specify, using the handler you set up previously.
To run this task locally you will need to set the `REVALIDATION_SECRET` environment variable in your `.env.local` file (or `.env` file if using Pages router).
To run this task in production, you will need to set the `REVALIDATION_SECRET` environment variable in Vercel, in your project settings, and also in your environment variables in the Trigger.dev dashboard.
```ts trigger/revalidate-path.ts
import { logger, task } from "@trigger.dev/sdk/v3";

const NEXTJS_APP_URL = process.env.NEXTJS_APP_URL; // e.g. "http://localhost:3000" or "https://my-nextjs-app.vercel.app"
const REVALIDATION_SECRET = process.env.REVALIDATION_SECRET; // Create a REVALIDATION_SECRET and set it in your environment variables

export const revalidatePath = task({
  id: "revalidate-path",
  run: async (payload: { path: string }) => {
    const { path } = payload;

    try {
      const response = await fetch(`${NEXTJS_APP_URL}/api/revalidate/path`, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          path: `${NEXTJS_APP_URL}/${path}`,
          secret: REVALIDATION_SECRET,
        }),
      });

      if (response.ok) {
        logger.log("Path revalidation successful", { path });
        return { success: true };
      } else {
        logger.error("Path revalidation failed", {
          path,
          statusCode: response.status,
          statusText: response.statusText,
        });
        return {
          success: false,
          error: `Revalidation failed with status ${response.status}: ${response.statusText}`,
        };
      }
    } catch (error) {
      logger.error("Path revalidation encountered an error", {
        path,
        error: error instanceof Error ? error.message : String(error),
      });
      return {
        success: false,
        error: `Failed to revalidate path due to an unexpected error`,
      };
    }
  },
});
```
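To trigger this task from your backend code rather than the dashboard, you could do something like the following; the import path assumes the file above:
```ts
import { revalidatePath } from "./trigger/revalidate-path";

// Revalidate the "blog" path of your Next.js app
const handle = await revalidatePath.trigger({ path: "blog" });
console.log(handle.id);
```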
#### Testing the revalidation task
You can test your revalidation task in the Trigger.dev dashboard on the testing page, using the following payload.
```json
{
"path": "" // e.g. "blog"
}
```
### Next.js build failing due to missing API key in GitHub CI
This issue occurs during the Next.js app build process on GitHub CI, where the Trigger.dev SDK expects the `TRIGGER_SECRET_KEY` environment variable to be set at build time. Next.js attempts to compile routes and create static pages, which can cause issues with SDKs that require runtime environment variables. The solution is to mark the relevant pages as dynamic to prevent Next.js from trying to make them static. You can do this by adding the following line to the route file:
```ts
export const dynamic = "force-dynamic";
```
### Correctly passing event handlers to React components
An issue can sometimes arise when you try to pass a function directly to the `onClick` prop. This is because the function may require specific arguments or context that are not available when the event occurs. By wrapping the function call in an arrow function, you ensure that the handler is called with the correct context and any necessary arguments. For example:
This works:
```tsx
<button onClick={() => myTask()}>Trigger my task</button>
```
Whereas this does not work:
```tsx
<button onClick={myTask()}>Trigger my task</button>
```
## Learn more about Vercel and Trigger.dev
### Walk-through guides from development to deployment
Learn how to set up Trigger.dev with Next.js, using either the pages or app router.
Learn how to create a webhook handler for incoming webhooks in a Next.js app, and trigger a task from it.
### Task examples
Learn how to automatically sync environment variables from your Vercel projects to Trigger.dev.
Learn how to use the Vercel AI SDK, which is a simple way to use AI models from different
providers, including OpenAI, Anthropic, Amazon Bedrock, Groq, Perplexity etc.
## Useful next steps
Learn what tasks are and their options
Learn how to write your own tasks
Learn how to deploy your task manually using the CLI
Learn how to deploy your task using GitHub actions
# Triggering tasks with webhooks in Next.js
Learn how to trigger a task from a webhook in a Next.js app.
## Prerequisites
* [A Next.js project, set up with Trigger.dev](/guides/frameworks/nextjs)
* [cURL](https://curl.se/) installed on your local machine. This will be used to send a POST request to your webhook handler.
## Adding the webhook handler
The webhook handler in this guide will be an API route.
This will be different depending on whether you are using the Next.js pages router or the app router.
### Pages router: creating the webhook handler
Create a new file `pages/api/webhook-handler.ts` or `pages/api/webhook-handler.js`.
In your new file, add the following code:
```ts /pages/api/webhook-handler.ts
import type { helloWorldTask } from "@/trigger/example";
import { tasks } from "@trigger.dev/sdk/v3";
import type { NextApiRequest, NextApiResponse } from "next";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // Parse the webhook payload
  const payload = req.body;

  // Trigger the helloWorldTask with the webhook data as the payload
  await tasks.trigger<typeof helloWorldTask>("hello-world", payload);

  res.status(200).json({ message: "OK" });
}
```
This code will handle the webhook payload and trigger the 'Hello World' task.
### App router: creating the webhook handler
Create a new file at `app/api/webhook-handler/route.ts` or `app/api/webhook-handler/route.js`.
In your new file, add the following code:
```ts /app/api/webhook-handler/route.ts
import type { helloWorldTask } from "@/trigger/example";
import { tasks } from "@trigger.dev/sdk/v3";
import { NextResponse } from "next/server";

export async function POST(req: Request) {
  // Parse the webhook payload
  const payload = await req.json();

  // Trigger the helloWorldTask with the webhook data as the payload
  await tasks.trigger<typeof helloWorldTask>("hello-world", payload);

  return NextResponse.json("OK", { status: 200 });
}
```
This code will handle the webhook payload and trigger the 'Hello World' task.
## Triggering the task locally
Now that you have your webhook handler set up, you can trigger the 'Hello World' task from it. We will do this locally using cURL.
First, run your Next.js app.
```bash npm
npm run dev
```
```bash pnpm
pnpm run dev
```
```bash yarn
yarn dev
```
Then, open up a second terminal window and start the Trigger.dev dev server:
```bash npm
npx trigger.dev@latest dev
```
```bash pnpm
pnpm dlx trigger.dev@latest dev
```
```bash yarn
yarn dlx trigger.dev@latest dev
```
To send a POST request to your webhook handler, open up a terminal window on your local machine and run the following command:
If `http://localhost:3000` isn't the URL of your locally running Next.js app, replace the URL in
the command below with that URL instead.
```bash
curl -X POST -H "Content-Type: application/json" -d '{"Name": "John Doe", "Age": "87"}' http://localhost:3000/api/webhook-handler
```
This will send a POST request to your webhook handler, with a JSON payload.
After running the command, you should see a successful dev run and a 200 response in your terminals.
If you now go to your [Trigger.dev dashboard](https://cloud.trigger.dev), you should also see a successful run for the 'Hello World' task with the payload you sent, in this case: `{"Name": "John Doe", "Age": "87"}`.
## Learn more about Vercel and Trigger.dev
### Walk-through guides from development to deployment
Learn how to set up Trigger.dev with Next.js, using either the pages or app router.
Learn how to create a webhook handler for incoming webhooks in a Next.js app, and trigger a task from it.
### Task examples
Learn how to automatically sync environment variables from your Vercel projects to Trigger.dev.
Learn how to use the Vercel AI SDK, which is a simple way to use AI models from different
providers, including OpenAI, Anthropic, Amazon Bedrock, Groq, Perplexity etc.
# Node.js setup guide
This guide will show you how to set up Trigger.dev in your existing Node.js project, test an example task, and view the run.
export const framework_0 = "Node.js"
## Prerequisites
* Set up a project in {framework_0}
* Ensure TypeScript is installed
* [Create a Trigger.dev account](https://cloud.trigger.dev)
* [Create a new Trigger.dev project](/guides/dashboard/creating-a-project)
## Initial setup
The easiest way to get started is to use the CLI. It will add Trigger.dev to your existing project, create a `/trigger` folder and give you an example task.
Run this command in the root of your project to get started:
```bash npm
npx trigger.dev@latest init
```
```bash pnpm
pnpm dlx trigger.dev@latest init
```
```bash yarn
yarn dlx trigger.dev@latest init
```
It will do a few things:
1. Log you into the CLI if you're not already logged in.
2. Create a `trigger.config.ts` file in the root of your project.
3. Ask where you'd like to create the `/trigger` directory.
4. Create the `/trigger` directory with an example task, `/trigger/example.[ts/js]`.
Install the "Hello World" example task when prompted. We'll use this task to test the setup.
The CLI `dev` command runs a server for your tasks. It watches for changes in your `/trigger` directory and communicates with the Trigger.dev platform to register your tasks, perform runs, and send data back and forth.
It can also update your `@trigger.dev/*` packages to prevent version mismatches and failed deploys. You will always be prompted first.
```bash npm
npx trigger.dev@latest dev
```
```bash pnpm
pnpm dlx trigger.dev@latest dev
```
```bash yarn
yarn dlx trigger.dev@latest dev
```
The CLI `dev` command spits out various useful URLs. Right now we want to visit the Test page.
You should see our Example task in the list; select it. Most tasks have a "payload" which you enter in the JSON editor, but our example task doesn't need any input.
Press the "Run test" button.
![Test page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/test-page.png)
Congratulations, you should see the run page which will live reload showing you the current state of the run.
![Run page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/run-page.png)
If you go back to your terminal you'll see that the dev command also shows the task status and links to the run log.
![Terminal showing completed run](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/terminal-completed-run.png)
## Useful next steps
Learn what tasks are and their options
Learn how to write your own tasks
Learn how to deploy your task manually using the CLI
Learn how to deploy your task using GitHub actions
# Prisma setup guide
This guide will show you how to set up Prisma with Trigger.dev
## Overview
This guide will show you how to set up [Prisma](https://www.prisma.io/) with Trigger.dev, test and view an example task run.
## Prerequisites
* An existing Node.js project with a `package.json` file
* Ensure TypeScript is installed
* A [PostgreSQL](https://www.postgresql.org/) database server running locally, or accessible via a connection string
* Prisma ORM [installed and initialized](https://www.prisma.io/docs/getting-started/quickstart) in your project
* A `DATABASE_URL` environment variable set in your `.env` file, pointing to your PostgreSQL database (e.g. `postgresql://user:password@localhost:5432/dbname`)
## Initial setup (optional)
Follow these steps if you don't already have Trigger.dev set up in your project.
The easiest way to get started is to use the CLI. It will add Trigger.dev to your existing project, create a `/trigger` folder and give you an example task.
Run this command in the root of your project to get started:
```bash npm
npx trigger.dev@latest init
```
```bash pnpm
pnpm dlx trigger.dev@latest init
```
```bash yarn
yarn dlx trigger.dev@latest init
```
It will do a few things:
1. Log you into the CLI if you're not already logged in.
2. Create a `trigger.config.ts` file in the root of your project.
3. Ask where you'd like to create the `/trigger` directory.
4. Create the `/trigger` directory with an example task, `/trigger/example.[ts/js]`.
Install the "Hello World" example task when prompted. We'll use this task to test the setup.
The CLI `dev` command runs a server for your tasks. It watches for changes in your `/trigger` directory and communicates with the Trigger.dev platform to register your tasks, perform runs, and send data back and forth.
It can also update your `@trigger.dev/*` packages to prevent version mismatches and failed deploys. You will always be prompted first.
```bash npm
npx trigger.dev@latest dev
```
```bash pnpm
pnpm dlx trigger.dev@latest dev
```
```bash yarn
yarn dlx trigger.dev@latest dev
```
The CLI `dev` command spits out various useful URLs. Right now we want to visit the Test page.
You should see our Example task in the list; select it. Most tasks have a "payload" which you enter in the JSON editor, but our example task doesn't need any input.
Press the "Run test" button.
![Test page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/test-page.png)
Congratulations, you should see the run page which will live reload showing you the current state of the run.
![Run page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/run-page.png)
If you go back to your terminal you'll see that the dev command also shows the task status and links to the run log.
![Terminal showing completed run](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/terminal-completed-run.png)
## Creating a task using Prisma and deploying it to production
First, create a new task file in your `trigger` folder.
This is a simple task that will add a new user to the database.
For this task to work correctly, you will need to have a `user` model in your Prisma schema with
an `id` field, a `name` field, and an `email` field.
```ts /trigger/prisma-add-new-user.ts
import { PrismaClient } from "@prisma/client";
import { task } from "@trigger.dev/sdk/v3";

// Initialize Prisma client
const prisma = new PrismaClient();

export const addNewUser = task({
  id: "prisma-add-new-user",
  run: async (payload: { name: string; email: string; id: number }) => {
    const { name, email, id } = payload;

    // This will create a new user in the database
    const user = await prisma.user.create({
      data: {
        name: name,
        email: email,
        id: id,
      },
    });

    return {
      message: `New user added successfully: ${user.id}`,
    };
  },
});
```
Next, configure the Prisma [build extension](https://trigger.dev/docs/config/extensions/overview) in the `trigger.config.js` file to include the Prisma client in the build.
This will ensure that the Prisma client is available when the task runs.
For a full list of options available in the Prisma build extension, see the [Prisma build extension documentation](https://trigger.dev/docs/config/config-file#prisma).
```js /trigger.config.js
import { defineConfig } from "@trigger.dev/sdk/v3";
import { prismaExtension } from "@trigger.dev/build/extensions/prisma";

export default defineConfig({
  project: "", // Your project reference
  // Your other config settings...
  build: {
    extensions: [
      prismaExtension({
        version: "5.20.0", // optional, we'll automatically detect the version if not provided
        // update this to the path of your Prisma schema file
        schema: "prisma/schema.prisma",
      }),
    ],
  },
});
```
[Build extensions](/config/config-file#extensions) allow you to hook into the build system and
customize the build process or the resulting bundle and container image (in the case of
deploying). You can use pre-built extensions or create your own.
We use OpenTelemetry to [instrument](https://trigger.dev/docs/config/config-file#instrumentations) our tasks and collect telemetry data.
If you want to automatically log all Prisma queries and mutations, you can use the Prisma instrumentation extension.
```js /trigger.config.js
import { defineConfig } from "@trigger.dev/sdk/v3";
import { PrismaInstrumentation } from "@prisma/instrumentation";

export default defineConfig({
  // ...your other config settings
  instrumentations: [new PrismaInstrumentation()],
});
```
This provides much more detailed information about your tasks with minimal effort.
With the build extension and task configured, you can now deploy your task using the Trigger.dev CLI.
```bash npm
npx trigger.dev@latest deploy
```
```bash pnpm
pnpm dlx trigger.dev@latest deploy
```
```bash yarn
yarn dlx trigger.dev@latest deploy
```
In your Trigger.dev dashboard sidebar click "Environment Variables", and then the "New environment variable" button.
You can add values for your local dev environment, staging, and prod. In this case, we will add the `DATABASE_URL` for the production environment.
![Environment variables panel](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/environment-variables-panel.jpg)
To test this task, go to the 'test' page in the Trigger.dev dashboard and run the task with the following payload:
```json
{
"name": "", // e.g. "John Doe"
"email": "", // e.g. "john@doe.test"
"id": // e.g. 12345
}
```
Congratulations! You should now see a new completed run, and a new user with the credentials you provided should be added to your database.
## Useful next steps
Learn what tasks are and their options
Learn how to write your own tasks
Learn how to deploy your task manually using the CLI
Learn how to deploy your task using GitHub actions
# Remix setup guide
This guide will show you how to set up Trigger.dev in your existing Remix project, test an example task, and view the run.
export const framework_0 = "Remix"
## Prerequisites
* Set up a project in {framework_0}
* Ensure TypeScript is installed
* [Create a Trigger.dev account](https://cloud.trigger.dev)
* [Create a new Trigger.dev project](/guides/dashboard/creating-a-project)
## Initial setup
The easiest way to get started is to use the CLI. It will add Trigger.dev to your existing project, create a `/trigger` folder and give you an example task.
Run this command in the root of your project to get started:
```bash npm
npx trigger.dev@latest init
```
```bash pnpm
pnpm dlx trigger.dev@latest init
```
```bash yarn
yarn dlx trigger.dev@latest init
```
It will do a few things:
1. Log you into the CLI if you're not already logged in.
2. Create a `trigger.config.ts` file in the root of your project.
3. Ask where you'd like to create the `/trigger` directory.
4. Create the `/trigger` directory with an example task, `/trigger/example.[ts/js]`.
Install the "Hello World" example task when prompted. We'll use this task to test the setup.
The CLI `dev` command runs a server for your tasks. It watches for changes in your `/trigger` directory and communicates with the Trigger.dev platform to register your tasks, perform runs, and send data back and forth.
It can also update your `@trigger.dev/*` packages to prevent version mismatches and failed deploys. You will always be prompted first.
```bash npm
npx trigger.dev@latest dev
```
```bash pnpm
pnpm dlx trigger.dev@latest dev
```
```bash yarn
yarn dlx trigger.dev@latest dev
```
The CLI `dev` command spits out various useful URLs. Right now we want to visit the Test page.
You should see our Example task in the list; select it. Most tasks have a "payload" which you enter in the JSON editor, but our example task doesn't need any input.
Press the "Run test" button.
![Test page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/test-page.png)
Congratulations, you should see the run page which will live reload showing you the current state of the run.
![Run page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/run-page.png)
If you go back to your terminal you'll see that the dev command also shows the task status and links to the run log.
![Terminal showing completed run](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/terminal-completed-run.png)
## Set your secret key locally
Set your `TRIGGER_SECRET_KEY` environment variable in your `.env` file. This key is used to authenticate with Trigger.dev, so you can trigger runs from your Remix app. Visit the API Keys page in the dashboard and select the DEV secret key.
![How to find your secret key](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/api-keys.png)
For more information on authenticating with Trigger.dev, see the [API keys page](/apikeys).
## Triggering your task in Remix
Create a new file called `api.hello-world.ts` (or `api.hello-world.js`) in the `app/routes` directory like this: `app/routes/api.hello-world.ts`.
Add this code to your `api.hello-world.ts` file which imports your task:
```ts app/routes/api.hello-world.ts
import type { helloWorldTask } from "../../src/trigger/example";
import { tasks } from "@trigger.dev/sdk/v3";

export async function loader() {
  const handle = await tasks.trigger<typeof helloWorldTask>("hello-world", "James");

  return new Response(JSON.stringify(handle), {
    headers: { "Content-Type": "application/json" },
  });
}
```
Run your Remix app:
```bash npm
npm run dev
```
```bash pnpm
pnpm run dev
```
```bash yarn
yarn dev
```
Run the dev server from Step 2 of the [Initial Setup](/guides/frameworks/remix#initial-setup) section above if it's not already running:
```bash npm
npx trigger.dev@latest dev
```
```bash pnpm
pnpm dlx trigger.dev@latest dev
```
```bash yarn
yarn dlx trigger.dev@latest dev
```
Now visit the URL in your browser to trigger the task. Ensure the port number is the same as the one you're running your Remix app on. For example, if you're running your Remix app on port 3000, visit:
```bash
http://localhost:3000/api/hello-world
```
You should see the CLI log the task run with a link to view the logs in the dashboard.
![Trigger.dev CLI showing a successful run](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/trigger-cli-run-success.png)
Visit the [Trigger.dev dashboard](https://cloud.trigger.dev) to see your run.
## Manually add your environment variables (optional)
If you have any environment variables in your tasks, be sure to add them in the dashboard so deployed code runs successfully. In Node.js, these environment variables are accessed in your code using `process.env.MY_ENV_VAR`.
In the sidebar select the "Environment Variables" page, then press the "New environment variable" button.
![Environment variables page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/environment-variables-page.jpg)
You can add values for your local dev environment, staging, and prod.
![Environment variables panel](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/environment-variables-panel.jpg)
You can also add environment variables in code by following the steps on the [Environment Variables page](/deploy-environment-variables#in-your-code).
## Deploying your task to Trigger.dev
For this guide, we'll manually deploy your task by running the [CLI deploy command](/cli-deploy) below. Other ways to deploy are listed in the next section.
```bash npm
npx trigger.dev@latest deploy
```
```bash pnpm
pnpm dlx trigger.dev@latest deploy
```
```bash yarn
yarn dlx trigger.dev@latest deploy
```
### Other ways to deploy
Use GitHub Actions to automatically deploy your tasks whenever new code is pushed and when the `trigger` directory has changes in it. Follow [this guide](/github-actions) to set up GitHub Actions.
We're working on adding an official [Vercel integration](/vercel-integration) which you can follow the progress of [here](https://feedback.trigger.dev/p/vercel-integration-3).
## Deploying to Vercel Edge Functions
Before we start, it's important to note that:
* We'll be using a type-only import for the task to ensure compatibility with the edge runtime.
* The `@trigger.dev/sdk/v3` package supports the edge runtime out of the box.
There are a few extra steps to follow to deploy your `/api/hello-world` API endpoint to Vercel Edge Functions.
Update your API route to use the `runtime: "edge"` option and change it to an `action()` so we can trigger the task from a curl request later on.
```ts app/routes/api.hello-world.ts
import { tasks } from "@trigger.dev/sdk/v3";
import type { helloWorldTask } from "../../src/trigger/example"; // **type-only** import

// include this at the top of your API route file
export const config = {
  runtime: "edge",
};

export async function action({ request }: { request: Request }) {
  // This is where you'd authenticate the request
  const payload = await request.json();
  const handle = await tasks.trigger<typeof helloWorldTask>("hello-world", payload);

  return new Response(JSON.stringify(handle), {
    headers: { "Content-Type": "application/json" },
  });
}
```
Create or update the `vercel.json` file with the following:
```json vercel.json
{
  "buildCommand": "npm run vercel-build",
  "devCommand": "npm run dev",
  "framework": "remix",
  "installCommand": "npm install",
  "outputDirectory": "build/client"
}
```
Update your `package.json` to include the following scripts:
```json package.json
"scripts": {
"build": "remix vite:build",
"dev": "remix vite:dev",
"lint": "eslint --ignore-path .gitignore --cache --cache-location ./node_modules/.cache/eslint .",
"start": "remix-serve ./build/server/index.js",
"typecheck": "tsc",
"vercel-build": "remix vite:build && cp -r ./public ./build/client"
},
```
Push your code to a Git repository and create a new project in the Vercel dashboard. Select your repository and follow the prompts to complete the deployment.
In the Vercel project settings, add your Trigger.dev secret key:
```bash
TRIGGER_SECRET_KEY=your-secret-key
```
You can find this key in the Trigger.dev dashboard under API Keys; select the key for the environment you want to use.
![How to find your secret key](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/api-keys.png)
Once you've added the environment variable, deploy your project to Vercel.
Ensure you have also deployed your Trigger.dev task. See [deploy your task
step](/guides/frameworks/remix#deploying-your-task-to-trigger-dev).
After deployment, you can test your task in production by running this curl command:
```bash
curl -X POST https://your-app.vercel.app/api/hello-world \
-H "Content-Type: application/json" \
-d '{"name": "James"}'
```
This sends a POST request to your API endpoint with a JSON payload.
### Additional notes
The `vercel-build` script in `package.json` is specific to Remix projects on Vercel, ensuring that static assets are correctly copied to the build output.
The `runtime: "edge"` configuration in the API route allows for better performance on Vercel's Edge Network.
## Additional resources for Remix
How to create a webhook handler in a Remix app, and trigger a task from it.
## Useful next steps
Learn what tasks are and their options
Learn how to write your own tasks
Learn how to deploy your task manually using the CLI
Learn how to deploy your task using GitHub actions
# Triggering tasks with webhooks in Remix
Learn how to trigger a task from a webhook in a Remix app.
## Prerequisites
* [A Remix project, set up with Trigger.dev](/guides/frameworks/remix)
* [cURL](https://curl.se/) installed on your local machine. This will be used to send a POST request to your webhook handler.
## Adding the webhook handler
The webhook handler in this guide will be an API route. Create a new file `app/routes/api.webhook-handler.ts` or `app/routes/api.webhook-handler.js`.
In your new file, add the following code:
```ts app/routes/api.webhook-handler.ts
import type { ActionFunctionArgs } from "@remix-run/node";
import { tasks } from "@trigger.dev/sdk/v3";
import type { helloWorldTask } from "src/trigger/example";

export async function action({ request }: ActionFunctionArgs) {
  const payload = await request.json();

  // Trigger the helloWorldTask with the webhook data as the payload
  await tasks.trigger<typeof helloWorldTask>("hello-world", payload);

  return new Response("OK", { status: 200 });
}
```
This code will handle the webhook payload and trigger the 'Hello World' task.
## Triggering the task locally
Now that you have a webhook handler set up, you can trigger the 'Hello World' task from it. We will do this locally using cURL.
First, run your Remix app.
```bash npm
npm run dev
```
```bash pnpm
pnpm run dev
```
```bash yarn
yarn dev
```
Then, open up a second terminal window and start the Trigger.dev dev server:
```bash npm
npx trigger.dev@latest dev
```
```bash pnpm
pnpm dlx trigger.dev@latest dev
```
```bash yarn
yarn dlx trigger.dev@latest dev
```
To send a POST request to your webhook handler, open up a terminal window on your local machine and run the following command:
If `http://localhost:5173` isn't the URL of your locally running Remix app, replace the URL in the
command below with that URL instead.
```bash
curl -X POST -H "Content-Type: application/json" -d '{"Name": "John Doe", "Age": "87"}' http://localhost:5173/api/webhook-handler
```
This will send a POST request to your webhook handler, with a JSON payload.
After running the command, you should see a successful dev run and a 200 response in your terminals.
If you now go to your [Trigger.dev dashboard](https://cloud.trigger.dev), you should also see a successful run for the 'Hello World' task with the payload you sent, in this case: `{"Name": "John Doe", "Age": "87"}`.
# Sequin database triggers
This guide will show you how to trigger tasks from database changes using Sequin
[Sequin](https://sequinstream.com) allows you to trigger tasks from database changes. Sequin captures every insert, update, and delete on a table and then ensures a task is triggered for each change.
Often, task runs coincide with database changes. For instance, you might want to use a Trigger.dev task to generate an embedding for each post in your database.
In this guide, you'll learn how to use Sequin to trigger Trigger.dev tasks from database changes.
## Prerequisites
You are about to create a [regular Trigger.dev task](/tasks-regular) that you will execute whenever a post is inserted or updated in your database. Sequin will detect all the changes on the `posts` table and then send the payload of the post to an API endpoint that will call `tasks.trigger()` to create the embedding and update the database.
As long as you create an HTTP endpoint that Sequin can deliver webhooks to, you can use any web framework or edge function (e.g. Supabase Edge Functions, Vercel Functions, Cloudflare Workers, etc.) to invoke your Trigger.dev task. In this guide, we'll show you how to set up Trigger.dev tasks using Next.js API Routes.
You'll need the following to follow this guide:
* A Next.js project with [Trigger.dev](https://trigger.dev) installed
If you don't have one already, follow [Trigger.dev's Next.js setup
guide](/guides/frameworks/nextjs) to set up your project. You can return to this guide when
you're ready to write your first Trigger.dev task.
* A [Sequin](https://console.sequinstream.com/register) account
* A Postgres database (Sequin works with any Postgres database version 12 and up) with a `posts` table.
## Create a Trigger.dev task
Start by creating a new Trigger.dev task that takes in a Sequin change event as a payload, creates an embedding, and then inserts the embedding into the database:
In your `src/trigger/tasks` directory, create a new file called `create-embedding-for-post.ts` and add the following code:
```ts trigger/create-embedding-for-post.ts
import { task } from "@trigger.dev/sdk/v3";
import { OpenAI } from "openai";
import { upsertEmbedding } from "../utils";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

export const createEmbeddingForPost = task({
  id: "create-embedding-for-post",
  run: async (payload: {
    record: {
      id: number;
      title: string;
      body: string;
      author: string;
      createdAt: string;
      embedding: string | null;
    };
    metadata: {
      table_schema: string;
      table_name: string;
      consumer: {
        id: string;
        name: string;
      };
    };
  }) => {
    // Create an embedding using the title and body of payload.record
    const content = `${payload.record.title}\n\n${payload.record.body}`;
    const embedding = (
      await openai.embeddings.create({
        model: "text-embedding-ada-002",
        input: content,
      })
    ).data[0].embedding;

    // Upsert the embedding in the database. See utils.ts for the implementation
    await upsertEmbedding(embedding, payload.record.id);

    // Return the updated record
    return {
      ...payload.record,
      embedding: JSON.stringify(embedding),
    };
  },
});
```
```ts utils.ts
import pg from "pg";

export async function upsertEmbedding(embedding: number[], id: number) {
  const client = new pg.Client({
    connectionString: process.env.DATABASE_URL,
  });
  await client.connect();

  try {
    const query = `
      INSERT INTO post_embeddings (id, embedding)
      VALUES ($2, $1)
      ON CONFLICT (id)
      DO UPDATE SET embedding = $1
    `;
    const values = [JSON.stringify(embedding), id];

    const result = await client.query(query, values);
    console.log(`Updated record in database. Rows affected: ${result.rowCount}`);

    return result.rowCount;
  } catch (error) {
    console.error("Error updating record in database:", error);
    throw error;
  } finally {
    await client.end();
  }
}
```
This task takes in a Sequin record event, creates an embedding, and then upserts the embedding into a `post_embeddings` table.
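Note that this guide never creates the `post_embeddings` table itself. Here's a minimal one-off setup script, assuming you store the embedding as JSON text (you might use a pgvector `vector` column instead):
```ts setup-db.ts
import pg from "pg";

// One-off setup: create the post_embeddings table used by upsertEmbedding
const client = new pg.Client({ connectionString: process.env.DATABASE_URL });
await client.connect();
await client.query(`
  CREATE TABLE IF NOT EXISTS post_embeddings (
    id INTEGER PRIMARY KEY,
    embedding TEXT NOT NULL
  )
`);
await client.end();
```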
Register the `create-embedding-for-post` task to your Trigger.dev cloud project by running the following command:
```bash
npx trigger.dev@latest dev
```
In the Trigger.dev dashboard, you should now see the `create-embedding-for-post` task:
You've successfully created a Trigger.dev task that will create an embedding for each post in your
database. In the next step, you'll create an API endpoint that Sequin can deliver records to.
## Setup API route
You'll now create an API endpoint that will receive posts from Sequin and then trigger the `create-embedding-for-post` task.
This guide covers how to setup an API endpoint using the Next.js App Router. You can find examples
for Next.js Server Actions and Pages Router in the [Trigger.dev
documentation](https://trigger.dev/docs/guides/frameworks/nextjs).
Add a route handler by creating a new `route.ts` file in a `/app/api/create-embedding-for-post` directory:
```ts app/api/create-embedding-for-post/route.ts
import type { createEmbeddingForPost } from "@/trigger/create-embedding-for-post";
import { tasks } from "@trigger.dev/sdk/v3";
import { NextResponse } from "next/server";

export async function POST(req: Request) {
  const authHeader = req.headers.get("authorization");
  if (!authHeader || authHeader !== `Bearer ${process.env.SEQUIN_WEBHOOK_SECRET}`) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }

  const payload = await req.json();
  const handle = await tasks.trigger<typeof createEmbeddingForPost>(
    "create-embedding-for-post",
    payload
  );

  return NextResponse.json(handle);
}
```
This route handler will receive records from Sequin, parse them, and then trigger the `create-embedding-for-post` task.
You'll need to set four environment variables in a `.env.local` file:
```bash
SEQUIN_WEBHOOK_SECRET=your-secret-key
TRIGGER_SECRET_KEY=secret-from-trigger-dev
OPENAI_API_KEY=sk-proj-asdfasdfasdf
DATABASE_URL=postgresql://
```
The `SEQUIN_WEBHOOK_SECRET` ensures that only Sequin can access your API endpoint.
The `TRIGGER_SECRET_KEY` is used to authenticate requests to Trigger.dev and can be found in the **API keys** tab of the Trigger.dev dashboard.
The `OPENAI_API_KEY` and `DATABASE_URL` are used to create an embedding using OpenAI and connect to your database. Be sure to add these as [environment variables](https://trigger.dev/docs/deploy-environment-variables) in Trigger.dev as well.
You've successfully created an API endpoint that can receive record payloads from Sequin and
trigger a Trigger.dev task. In the next step, you'll setup Sequin to trigger the endpoint.
## Create Sequin consumer
You'll now configure Sequin to send every row in your `posts` table to your Trigger.dev task.
1. Login to your Sequin account and click the **Add New Database** button.
2. Enter the connection details for your Postgres database.
If you need to connect to a local dev database, flip the **use localhost** switch and follow the instructions to create a tunnel using the [Sequin CLI](https://sequinstream.com/docs/cli).
3. Follow the instructions to create a publication and a replication slot by running two SQL commands in your database:
```sql
create publication sequin_pub for all tables;
select pg_create_logical_replication_slot('sequin_slot', 'pgoutput');
```
4. Name your database and click the **Connect Database** button.
Sequin will connect to your database and ensure that it's configured properly.
If you need step-by-step connection instructions to connect Sequin to your database, check out our [quickstart guide](https://sequinstream.com/docs/quickstart).
Now, create a tunnel to your local endpoint so Sequin can deliver change payloads to your local API:
1. In the Sequin console, open the **HTTP Endpoint** tab and click the **Create HTTP Endpoint** button.
2. Enter a name for your endpoint (e.g. `local_endpoint`) and flip the **Use localhost** switch. Follow the instructions in the Sequin console to [install the Sequin CLI](https://sequinstream.com/docs/cli), then run:
```bash
sequin tunnel --ports=3001:local_endpoint
```
3. Now, click **Add encryption header** and set the key to `Authorization` and the value to `Bearer SEQUIN_WEBHOOK_SECRET`, using the `SEQUIN_WEBHOOK_SECRET` value from your `.env.local` file.
4. Click **Create HTTP Endpoint**.
Create a push consumer that will capture posts from your database and deliver them to your local endpoint:
1. Navigate to the **Consumers** tab and click the **Create Consumer** button.
2. Select your `posts` table (i.e. `public.posts`).
3. You want to ensure that every post receives an embedding - and that embeddings are updated as posts are updated. To do this, select to process **Rows** and click **Continue**.
You can also use **changes** for this particular use case, but **rows** comes with some nice replay and backfill features.
4. You'll now set the sort and filter for the consumer. For this guide, we'll sort by `updated_at` and start at the beginning of the table. We won't apply any filters:
5. On the next screen, select **Push** to have Sequin send the events to your webhook URL. Click **Continue**.
6. Now, give your consumer a name (e.g. `posts_push_consumer`) and in the **HTTP Endpoint** section select the `local_endpoint` you created above. Add the exact API route you created in the previous step (i.e. `/api/create-embedding-for-post`):
7. Click the **Create Consumer** button.
Your Sequin consumer is now created and ready to send events to your API endpoint.
## Test end-to-end
To test the complete workflow, first confirm that:
1. The Next.js app is running: `npm run dev`
2. The Trigger.dev dev server is running: `npx trigger.dev@latest dev`
3. The Sequin tunnel is running: `sequin tunnel --ports=3001:local_endpoint`
Now, insert a new row into your `posts` table:
```sql
insert into
posts (title, body, author)
values
(
'The Future of AI',
'An insightful look into how artificial intelligence is shaping the future of technology and society.',
'Alice H Johnson'
);
```
In the Sequin console, navigate to the [**Trace**](https://console.sequinstream.com/trace) tab and confirm that Sequin delivered the event to your local endpoint:
In your local terminal, you should see a `200` response in your Next.js app:
```bash
POST /api/create-embedding-for-post 200 in 262ms
```
Finally, in the [**Trigger.dev dashboard**](https://cloud.trigger.dev/), navigate to the Runs page and confirm that the task run completed successfully:
Every time a post is created or updated, Sequin will deliver the row payload to your API endpoint
and Trigger.dev will run the `create-embedding-for-post` task.
## Next steps
With Sequin and Trigger.dev, every post in your database will now have an embedding. This is a simple example of how you can trigger long-running tasks on database changes.
From here, add error handling and deploy to production:
* Add [retries](/errors-retrying) to your Trigger.dev task to ensure that any errors are captured and logged, as shown in the sketch after this list.
* Deploy to [production](/guides/frameworks/nextjs#deploying-your-task-to-trigger-dev) and update your Sequin consumer to point to your production database and endpoint.
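As a sketch of what adding retries could look like (the `retry` option names are the standard Trigger.dev task retry settings; the simplified payload type is an assumption for illustration):
```ts trigger/create-embedding-for-post.ts
import { task } from "@trigger.dev/sdk/v3";

export const createEmbeddingForPost = task({
  id: "create-embedding-for-post",
  // Retry failed attempts with exponential backoff before giving up
  retry: {
    maxAttempts: 3,
    minTimeoutInMs: 1000,
    maxTimeoutInMs: 10000,
    factor: 2,
  },
  run: async (payload: { record: { id: number } }) => {
    // ... create the embedding and upsert it, as shown earlier in this guide
  },
});
```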
# Triggering tasks from Supabase edge functions
This guide will show you how to trigger a task from a Supabase edge function, and then view the run in our dashboard.
The project created in this guide can be found in this [GitHub
repo](https://github.com/triggerdotdev/example-projects/tree/main/supabase).
## Overview
Supabase edge functions allow you to trigger tasks either when an event is sent from a third party (e.g. when a new Stripe payment is processed, when a new user signs up to a service, etc), or when there are any changes or updates to your Supabase database.
This guide shows you how to set up and deploy a simple Supabase edge function example that triggers a task when an edge function URL is accessed.
## Prerequisites
* Ensure you have the [Supabase CLI](https://supabase.com/docs/guides/cli/getting-started) installed
* Since Supabase CLI version 1.123.4, you must have [Docker Desktop installed](https://supabase.com/docs/guides/functions/deploy#deploy-your-edge-functions) to deploy Edge Functions
* Ensure TypeScript is installed
* [Create a Trigger.dev account](https://cloud.trigger.dev)
* [Create a new Trigger.dev project](/guides/dashboard/creating-a-project)
## Initial setup
If you already have a Supabase project on your local machine you can skip this step.
You can create a new project by running the following command in your terminal using the Supabase CLI:
```bash
supabase init
```
If you are using VS Code, be sure to answer 'y' when asked to generate VS Code settings for Deno,
and install any recommended extensions.
If your project does not already have a `package.json` file (e.g. if you are using Deno), create it manually in your project's root folder.
If your project has a `package.json` file you can skip this step.
This is required for the Trigger.dev SDK to work correctly.
```json package.json
{
"devDependencies": {
"typescript": "^5.6.2"
}
}
```
Update your TypeScript version to the latest version available.
The easiest way to get started is to use the CLI. It will add Trigger.dev to your existing project, create a `/trigger` folder and give you an example task.
Run this command in the root of your project to get started:
```bash npm
npx trigger.dev@latest init
```
```bash pnpm
pnpm dlx trigger.dev@latest init
```
```bash yarn
yarn dlx trigger.dev@latest init
```
It will do a few things:
1. Log you into the CLI if you're not already logged in.
2. Create a `trigger.config.ts` file in the root of your project.
3. Ask where you'd like to create the `/trigger` directory.
4. Create the `/trigger` directory with an example task, `/trigger/example.[ts/js]`.
Install the "Hello World" example task when prompted. We'll use this task to test the setup.
The CLI `dev` command runs a server for your tasks. It watches for changes in your `/trigger` directory and communicates with the Trigger.dev platform to register your tasks, perform runs, and send data back and forth.
It can also update your `@trigger.dev/*` packages to prevent version mismatches and failed deploys. You will always be prompted first.
```bash npm
npx trigger.dev@latest dev
```
```bash pnpm
pnpm dlx trigger.dev@latest dev
```
```bash yarn
yarn dlx trigger.dev@latest dev
```
The CLI `dev` command spits out various useful URLs. Right now we want to visit the Test page.
You should see our Example task in the list. Select it. Most tasks have a "payload" which you enter in the JSON editor, but our example task doesn't need any input.
Press the "Run test" button.
![Test page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/test-page.png)
Congratulations, you should see the run page which will live reload showing you the current state of the run.
![Run page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/run-page.png)
If you go back to your terminal you'll see that the dev command also shows the task status and links to the run log.
![Terminal showing completed run](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/terminal-completed-run.png)
## Create a new Supabase edge function and deploy it
We'll call this example `edge-function-trigger`.
In your project, run the following command in the terminal using the Supabase CLI:
```bash
supabase functions new edge-function-trigger
```
Replace the placeholder code in your `edge-function-trigger/index.ts` file with the following:
```ts functions/edge-function-trigger/index.ts
// Setup type definitions for built-in Supabase Runtime APIs
import "jsr:@supabase/functions-js/edge-runtime.d.ts";
// Import the Trigger.dev SDK - pin it to the version of the SDK you are using, e.g. "3.0.0". You can find this in your package.json file.
import { tasks } from "npm:@trigger.dev/sdk@3.0.0/v3";
// Import your task type from your /trigger folder
import type { helloWorldTask } from "../../../src/trigger/example.ts";
// 👆 **type-only** import
Deno.serve(async () => {
await tasks.trigger(
// Your task id
"hello-world",
// Your task payload
"Hello from a Supabase Edge Function!"
);
return new Response("OK");
});
```
You can only import the `type` from the task.
Tasks in the `trigger` folder use Node, so they must stay in there or they will not run,
especially if you are using a different runtime like Deno. Also do not add "`npm:`" to imports
inside your task files, for the same reason.
You can now deploy your edge function with the following command in your terminal:
```bash
supabase functions deploy edge-function-trigger --no-verify-jwt
```
`--no-verify-jwt` removes the JSON Web Token (JWT) requirement from the authorization header. By
default this check is on, but it is not required for this example. Learn more about JWTs
[here](https://supabase.com/docs/guides/auth/jwts).
Follow the CLI instructions and once complete you should now see your new edge function deployment in your Supabase edge functions dashboard.
There will be a link to the dashboard in your terminal output, or you can find it at this URL:
`https://supabase.com/dashboard/project/your-project-id/functions`
Replace `your-project-id` with your actual project ID.
## Set your Trigger.dev prod secret key in the Supabase dashboard
To trigger a task from your edge function, you need to set your Trigger.dev secret key in the Supabase dashboard.
To do this, first go to your Trigger.dev [project dashboard](https://cloud.trigger.dev) and copy the `prod` secret key from the API keys page.
![How to find your prod secret key](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/api-key-prod.png)
Then, in [Supabase](https://supabase.com/dashboard/projects), select your project, navigate to 'Project settings' , click 'Edge functions' in the configurations menu, and then click the 'Add new secret' button.
Add `TRIGGER_SECRET_KEY` with the pasted value of your Trigger.dev `prod` secret key.
![Add secret key in Supabase](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/supabase-keys-1.png)
## Deploy your task and trigger it from your edge function
Next, deploy your `hello-world` task to [Trigger.dev cloud](https://cloud.trigger.dev).
```bash npm
npx trigger.dev@latest deploy
```
```bash pnpm
pnpm dlx trigger.dev@latest deploy
```
```bash yarn
yarn dlx trigger.dev@latest deploy
```
To trigger the task, simply open the `edge-function-trigger` URL. You can find your deployed Edge Functions in the Supabase dashboard at:
`https://supabase.com/dashboard/project/your-project-id/functions`
Replace `your-project-id` with your actual project ID.
In your Supabase project, go to your Edge function dashboard, find `edge-function-trigger`, copy the URL, and paste it into a new window in your browser.
Once loaded you should see "OK" on the new screen.
![Edge function URL](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/supabase-function-url.png)
The task will be triggered when your edge function URL is accessed.
Check your [cloud.trigger.dev](http://cloud.trigger.dev) dashboard and you should see a successful `hello-world` task.
**Congratulations, you have run a simple Hello World task from a Supabase edge function!**
## Learn more about Supabase and Trigger.dev
### Full walkthrough guides from development to deployment
Learn how to trigger a task from a Supabase edge function when a URL is visited.
Learn how to trigger a task from a Supabase edge function when an event occurs in your database.
### Task examples with code you can copy and paste
Run basic CRUD operations on a table in a Supabase database using Trigger.dev.
Download a video from a URL and upload it to Supabase Storage using S3.
# Triggering tasks from Supabase Database Webhooks
This guide shows you how to trigger a transcribing task when a row is added to a table in a Supabase database, using a Database Webhook and Edge Function.
The project created in this guide can be found in this [GitHub
repo](https://github.com/triggerdotdev/example-projects/tree/main/supabase).
## Overview
Supabase and Trigger.dev can be used together to create powerful workflows triggered by real-time changes in your database tables:
* A Supabase Database Webhook triggers an Edge Function when a row including a video URL is inserted into a table
* The Edge Function triggers a Trigger.dev task, passing the `video_url` column data from the new table row as the payload
* The Trigger.dev task then:
* Uses [FFmpeg](https://www.ffmpeg.org/) to extract the audio track from a video URL
* Uses [Deepgram](https://deepgram.com) to transcribe the extracted audio
* Updates the original table row using the `record.id` in Supabase with the new transcription using `update`
## Prerequisites
* Ensure you have the [Supabase CLI](https://supabase.com/docs/guides/cli/getting-started) installed
* Since Supabase CLI version 1.123.4, you must have [Docker Desktop installed](https://supabase.com/docs/guides/functions/deploy#deploy-your-edge-functions) to deploy Edge Functions
* Ensure TypeScript is installed
* [Create a Trigger.dev account](https://cloud.trigger.dev)
* [Create a new Trigger.dev project](/guides/dashboard/creating-a-project)
* [Create a new Deepgram account](https://deepgram.com/) and get your API key from the dashboard
## Initial setup
If you already have a Supabase project on your local machine you can skip this step.
You can create a new project by running the following command in your terminal using the Supabase CLI:
```bash
supabase init
```
If you are using VS Code, be sure to answer 'y' when asked to generate VS Code settings for Deno,
and install any recommended extensions.
If your project does not already have a `package.json` file (e.g. if you are using Deno), create it manually in your project's root folder.
If your project has a `package.json` file you can skip this step.
This is required for the Trigger.dev SDK to work correctly.
```json package.json
{
"devDependencies": {
"typescript": "^5.6.2"
}
}
```
Update your TypeScript version to the latest version available.
The easiest way to get started is to use the CLI. It will add Trigger.dev to your existing project, create a `/trigger` folder and give you an example task.
Run this command in the root of your project to get started:
```bash npm
npx trigger.dev@latest init
```
```bash pnpm
pnpm dlx trigger.dev@latest init
```
```bash yarn
yarn dlx trigger.dev@latest init
```
It will do a few things:
1. Log you into the CLI if you're not already logged in.
2. Create a `trigger.config.ts` file in the root of your project.
3. Ask where you'd like to create the `/trigger` directory.
4. Create the `/trigger` directory with an example task, `/trigger/example.[ts/js]`.
Choose "None" when prompted to install an example task. We will create a new task for this guide.
## Create a new table in your Supabase database
First, in the Supabase project dashboard, you'll need to create a new table to store the video URL and transcription.
To do this, click on 'Table Editor' in the left-hand menu and create a new table.
![How to create a new Supabase table](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/supabase-new-table-1.png)
Call your table `video_transcriptions`.
Add two new columns, one called `video_url` with the type `text`, and another called `transcription`, also with the type `text`.
![How to create a new Supabase table 2](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/supabase-new-table-2.png)
## Create and deploy the Trigger.dev task
### Generate the Database type definitions
To allow you to use TypeScript to interact with your table, you need to [generate the type definitions](https://supabase.com/docs/guides/api/rest/generating-types) for your Supabase table using the Supabase CLI.
```bash
supabase gen types --lang=typescript --project-id your-project-ref --schema public > database.types.ts
```
Replace `your-project-ref` with your Supabase project reference ID. This can be found in your Supabase project settings under 'General'.
### Create the transcription task
Create a new task file in your `/trigger` folder. Call it `videoProcessAndUpdate.ts`.
This task takes a video from a public video url, extracts the audio using FFmpeg and transcribes the audio using Deepgram. The transcription summary will then be updated back to the original row in the `video_transcriptions` table in Supabase.
You will need to install some additional dependencies for this task:
```bash npm
npm install @deepgram/sdk @supabase/supabase-js fluent-ffmpeg
```
```bash pnpm
pnpm add @deepgram/sdk @supabase/supabase-js fluent-ffmpeg
```
```bash yarn
yarn add @deepgram/sdk @supabase/supabase-js fluent-ffmpeg
```
These dependencies will allow you to interact with the Deepgram and Supabase APIs and extract audio from a video using FFmpeg.
```ts /trigger/videoProcessAndUpdate.ts
// Install any missing dependencies below
import { createClient as createDeepgramClient } from "@deepgram/sdk";
import { createClient as createSupabaseClient } from "@supabase/supabase-js";
import { logger, task } from "@trigger.dev/sdk/v3";
import ffmpeg from "fluent-ffmpeg";
import fs from "fs";
import { Readable } from "node:stream";
import os from "os";
import path from "path";
import { Database } from "../../database.types";
// Create a single Supabase client for interacting with your database
// 'Database' supplies the type definitions to supabase-js
const supabase = createSupabaseClient(
// These details can be found in your Supabase project settings under `API`
process.env.SUPABASE_PROJECT_URL as string, // e.g. https://abc123.supabase.co - replace 'abc123' with your project ID
process.env.SUPABASE_SERVICE_ROLE_KEY as string // Your service role secret key
);
// Your DEEPGRAM_SECRET_KEY can be found in your Deepgram dashboard
const deepgram = createDeepgramClient(process.env.DEEPGRAM_SECRET_KEY);
export const videoProcessAndUpdate = task({
id: "video-process-and-update",
run: async (payload: { videoUrl: string; id: number }) => {
const { videoUrl, id } = payload;
logger.log(`Processing video at URL: ${videoUrl}`);
// Generate temporary file names
const tempDirectory = os.tmpdir();
const outputPath = path.join(tempDirectory, `audio_${Date.now()}.wav`);
const response = await fetch(videoUrl);
// Extract the audio using FFmpeg
await new Promise((resolve, reject) => {
if (!response.body) {
return reject(new Error("Failed to fetch video"));
}
ffmpeg(Readable.from(response.body))
.outputOptions([
"-vn", // Disable video output
"-acodec pcm_s16le", // Use PCM 16-bit little-endian encoding
"-ar 44100", // Set audio sample rate to 44.1 kHz
"-ac 2", // Set audio channels to stereo
])
.output(outputPath)
.on("end", resolve)
.on("error", reject)
.run();
});
logger.log(`Audio extracted from video`, { outputPath });
// Transcribe the audio using Deepgram
const { result, error } = await deepgram.listen.prerecorded.transcribeFile(
fs.readFileSync(outputPath),
{
model: "nova-2", // Use the Nova 2 model
smart_format: true, // Automatically format the transcription
diarize: true, // Enable speaker diarization
}
);
if (error) {
throw error;
}
const transcription = result.results.channels[0].alternatives[0].paragraphs?.transcript;
logger.log(`Transcription: ${transcription}`);
// Delete the temporary audio file
fs.unlinkSync(outputPath);
logger.log(`Temporary audio file deleted`, { outputPath });
const { error: updateError } = await supabase
.from("video_transcriptions")
// Update the transcription column
.update({ transcription: transcription })
// Find the row by its ID
.eq("id", id);
if (updateError) {
throw new Error(`Failed to update transcription: ${updateError.message}`);
}
return {
message: `Summary of the audio: ${transcription}`,
result,
};
},
});
```
When updating your tables from a Trigger.dev task which has been triggered by a database change,
be extremely careful not to cause an infinite loop. Ensure you have the correct conditions in
place to prevent this.
### Adding the FFmpeg build extension
Before you can deploy the task, you'll need to add the FFmpeg build extension to your `trigger.config.ts` file.
```ts trigger.config.ts
// Add this import
import { ffmpeg } from "@trigger.dev/build/extensions/core";
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "", // Replace with your project ref
// Your other config settings...
build: {
// Add the FFmpeg build extension
extensions: [ffmpeg()],
},
});
```
[Build extensions](/config/config-file#extensions) allow you to hook into the build system and
customize the build process or the resulting bundle and container image (in the case of
deploying). You can use pre-built extensions or create your own.
You'll also need to add `@trigger.dev/build` to your `package.json` file under `devDependencies`
if you don't already have it there.
### Add your Deepgram and Supabase environment variables to your Trigger.dev project
You will need to add your `DEEPGRAM_SECRET_KEY`, `SUPABASE_PROJECT_URL` and `SUPABASE_SERVICE_ROLE_KEY` as environment variables in your Trigger.dev project. This can be done in the 'Environment Variables' page in your project dashboard.
![Adding environment variables](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/environment-variables-page.jpg)
### Deploying your task
You can now deploy your task using the following command:
```bash npm
npx trigger.dev@latest deploy
```
```bash pnpm
pnpm dlx trigger.dev@latest deploy
```
```bash yarn
yarn dlx trigger.dev@latest deploy
```
## Create and deploy the Supabase Edge Function
### Add your Trigger.dev prod secret key to the Supabase dashboard
Go to your Trigger.dev [project dashboard](https://cloud.trigger.dev) and copy the `prod` secret key from the API keys page.
![How to find your prod secret key](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/api-key-prod.png)
Then, in [Supabase](https://supabase.com/dashboard/projects), select the project you want to use, navigate to 'Project settings' , click 'Edge Functions' in the configurations menu, and then click the 'Add new secret' button.
Add `TRIGGER_SECRET_KEY` with the pasted value of your Trigger.dev `prod` secret key.
![Add secret key in Supabase](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/supabase-keys-1.png)
### Create a new Edge Function using the Supabase CLI
Now create an Edge Function using the Supabase CLI. Call it `video-processing-handler`. This function will be triggered by the Database Webhook.
```bash
supabase functions new video-processing-handler
```
```ts functions/video-processing-handler/index.ts
// Setup type definitions for built-in Supabase Runtime APIs
import "jsr:@supabase/functions-js/edge-runtime.d.ts";
import { tasks } from "npm:@trigger.dev/sdk@latest/v3";
// Import the videoProcessAndUpdate task from the trigger folder
import type { videoProcessAndUpdate } from "../../../src/trigger/videoProcessAndUpdate.ts";
// 👆 type-only import
// Sets up a Deno server that listens for incoming JSON requests
Deno.serve(async (req) => {
const payload = await req.json();
// This payload will contain the video url and id from the new row in the table
const videoUrl = payload.record.video_url;
const id = payload.record.id;
// Trigger the videoProcessAndUpdate task with the videoUrl payload
await tasks.trigger("video-process-and-update", { videoUrl, id });
console.log(payload ?? "No name provided");
return new Response("ok");
});
```
Tasks in the `trigger` folder use Node, so they must stay in there or they will not run,
especially if you are using a different runtime like Deno. Also do not add "`npm:`" to imports
inside your task files, for the same reason.
### Deploy the Edge Function
Now deploy your new Edge Function with the following command:
```bash
supabase functions deploy video-processing-handler
```
Follow the CLI instructions, selecting the same project you added your `prod` secret key to, and once complete you should see your new Edge Function deployment in your Supabase Edge Functions dashboard.
There will be a link to the dashboard in your terminal output.
## Create the Database Webhook
In your Supabase project dashboard, click 'Project settings', then the 'API' tab, and copy the `anon` `public` API key from the table.
![How to find your Supabase API keys](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/supabase-api-key.png)
Then, go to 'Database', click on 'Webhooks', and then click 'Create a new hook'.
![How to create a new webhook](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/supabase-create-webhook-1.png)
Call the hook `edge-function-hook`.
Select the new table you have created:
`public` `video_transcriptions`.
Choose the `insert` event.
![How to create a new webhook 2](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/supabase-create-webhook-2.png)
Under 'Webhook configuration', select 'Supabase Edge Functions'.
Under 'Edge Function', choose `POST` and select the Edge Function you have created: `video-processing-handler`.
Under 'HTTP Headers', add a new header with the key `Authorization` and the value `Bearer your-anon-key` (replace `your-anon-key` with the `anon` `public` API key you copied earlier).
Supabase Edge Functions require a JSON Web Token [JWT](https://supabase.com/docs/guides/auth/jwts)
in the authorization header. This is to ensure that only authorized users can access your edge
functions.
Click 'Create webhook'.
![How to create a new webhook 3](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/supabase-create-webhook-3.png)
Your Database Webhook is now ready to use.
## Triggering the entire workflow
Your `video-processing-handler` Edge Function is now set up to trigger the `videoProcessAndUpdate` task every time a new row is inserted into your `video_transcriptions` table.
To test this, go back to your Supabase project dashboard, click on 'Table Editor' in the left-hand menu, click on the `video_transcriptions` table, and then click 'Insert', 'Insert Row'.
![How to insert a new row 1](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/supabase-new-table-3.png)
Add a new item under `video_url` with a public video URL.
You can use the following public video URL for testing: `https://content.trigger.dev/Supabase%20Edge%20Functions%20Quickstart.mp4`.
![How to insert a new row 2](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/supabase-new-table-4.png)
Once the new table row has been inserted, check your [cloud.trigger.dev](http://cloud.trigger.dev) project 'Runs' list and you should see a `videoProcessAndUpdate` task run in progress, triggered when you added the new row with the video URL to your `video_transcriptions` table.
![Supabase successful run](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/supabase-run-result.png)
Once the run has completed successfully, go back to your Supabase `video_transcriptions` table, and you should see that in the row containing the original video URL, the transcription has now been added to the `transcription` column.
![Supabase successful table update](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/supabase-table-result.png)
**Congratulations! You have completed the full workflow from Supabase to Trigger.dev and back again.**
## Learn more about Supabase and Trigger.dev
### Full walkthrough guides from development to deployment
Learn how to trigger a task from a Supabase edge function when a URL is visited.
Learn how to trigger a task from a Supabase edge function when an event occurs in your database.
### Task examples with code you can copy and paste
Run basic CRUD operations on a table in a Supabase database using Trigger.dev.
Download a video from a URL and upload it to Supabase Storage using S3.
# Supabase overview
Guides and examples for using Supabase with Trigger.dev.
## Learn more about Supabase and Trigger.dev
### Full walkthrough guides from development to deployment
Learn how to trigger a task from a Supabase edge function when a URL is visited.
Learn how to trigger a task from a Supabase edge function when an event occurs in your database.
### Task examples with code you can copy and paste
Run basic CRUD operations on a table in a Supabase database using Trigger.dev.
Download a video from a URL and upload it to Supabase Storage using S3.
# Using webhooks with Trigger.dev
Guides for using webhooks with Trigger.dev.
## Overview
Webhooks are a way to send and receive events from external services. Triggering tasks using webhooks allows you to add real-time, event-driven functionality to your app.
A webhook handler is code that executes in response to an event. Handlers can be endpoints in your framework's routing, triggered by an external service.
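As a minimal sketch (the route path and task id here are hypothetical placeholders), a webhook handler in a Next.js route that hands the event off to a task might look like this:
```ts app/api/webhook/route.ts
import { tasks } from "@trigger.dev/sdk/v3";
import { NextResponse } from "next/server";

export async function POST(req: Request) {
  // Parse the event sent by the external service
  const payload = await req.json();
  // Hand the event off to a background task and respond immediately
  const handle = await tasks.trigger("my-task", payload);
  return NextResponse.json(handle);
}
```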
## Webhook guides
How to create a webhook handler in a Next.js app, and trigger a task from it.
How to create a webhook handler in a Remix app, and trigger a task from it.
How to create a Stripe webhook handler and trigger a task when a 'checkout session completed'
event is received.
Learn how to trigger a task from a Supabase edge function when an event occurs in your database.
# Frameworks, guides and examples overview
An ever growing list of guides and examples to help you get setup with Trigger.dev.
## Frameworks
* [Bun](/guides/frameworks/bun)
* [Node.js](/guides/frameworks/nodejs)
* [Next.js](/guides/frameworks/nextjs)
* [Remix](/guides/frameworks/remix)
## Guides
Get set up fast using our detailed walk-through guides.
| Guide | Description |
| :----------------------------------------------------------------------------------------- | :------------------------------------------------ |
| [Prisma](/guides/frameworks/prisma) | How to setup Prisma with Trigger.dev |
| [Sequin database triggers](/guides/frameworks/sequin) | Trigger tasks from database changes using Sequin |
| [Supabase edge function hello world](/guides/frameworks/supabase-edge-functions-basic) | Trigger tasks from Supabase edge function |
| [Supabase database webhooks](/guides/frameworks/supabase-edge-functions-database-webhooks) | Trigger tasks using Supabase database webhooks |
| [Using webhooks in Next.js](/guides/frameworks/nextjs-webhooks) | Trigger tasks from a webhook in Next.js |
| [Using webhooks in Remix](/guides/frameworks/remix-webhooks) | Trigger tasks from a webhook in Remix |
| [Stripe webhooks](/guides/examples/stripe-webhook) | Trigger tasks from incoming Stripe webhook events |
## Example tasks
Tasks you can copy and paste to get started with Trigger.dev. They can all be extended and customized to fit your needs.
| Example task | Description |
| :---------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------- |
| [DALLΒ·E 3 image generation](/guides/examples/dall-e3-generate-image) | Use OpenAI's GPT-4o and DALLΒ·E 3 to generate an image and text. |
| [Deepgram audio transcription](/guides/examples/deepgram-transcribe-audio) | Transcribe audio using Deepgram's speech recognition API. |
| [Fal.ai image to cartoon](/guides/examples/fal-ai-image-to-cartoon) | Convert an image to a cartoon using Fal.ai, and upload the result to Cloudflare R2. |
| [Fal.ai with Realtime](/guides/examples/fal-ai-realtime) | Generate an image from a prompt using Fal.ai and show the progress of the task on the frontend using Realtime. |
| [FFmpeg video processing](/guides/examples/ffmpeg-video-processing) | Use FFmpeg to process a video in various ways and save it to Cloudflare R2. |
| [Firecrawl URL crawl](/guides/examples/firecrawl-url-crawl) | Learn how to use Firecrawl to crawl a URL and return LLM-ready markdown. |
| [LibreOffice PDF conversion](/guides/examples/libreoffice-pdf-conversion) | Convert a document to PDF using LibreOffice. |
| [OpenAI with retrying](/guides/examples/open-ai-with-retrying) | Create a reusable OpenAI task with custom retry options. |
| [PDF to image](/guides/examples/pdf-to-image) | Use `MuPDF` to turn a PDF into images and save them to Cloudflare R2. |
| [React to PDF](/guides/examples/react-pdf) | Use `react-pdf` to generate a PDF and save it to Cloudflare R2. |
| [Puppeteer](/guides/examples/puppeteer) | Use Puppeteer to generate a PDF or scrape a webpage. |
| [Resend email sequence](/guides/examples/resend-email-sequence) | Send a sequence of emails over several days using Resend with Trigger.dev. |
| [Scrape Hacker News](/guides/examples/scrape-hacker-news) | Scrape Hacker News using BrowserBase and Puppeteer, summarize the articles with ChatGPT and send an email of the summary every weekday using Resend. |
| [Sentry error tracking](/guides/examples/sentry-error-tracking) | Automatically send errors to Sentry from your tasks. |
| [Sharp image processing](/guides/examples/sharp-image-processing) | Use Sharp to process an image and save it to Cloudflare R2. |
| [Supabase database operations](/guides/examples/supabase-database-operations) | Run basic CRUD operations on a table in a Supabase database using Trigger.dev. |
| [Supabase Storage upload](/guides/examples/supabase-storage-upload) | Download a video from a URL and upload it to Supabase Storage using S3. |
| [Vercel AI SDK](/guides/examples/vercel-ai-sdk) | Use Vercel AI SDK to generate text using OpenAI. |
| [Vercel sync environment variables](/guides/examples/vercel-sync-env-vars) | Automatically sync environment variables from your Vercel projects to Trigger.dev. |
If you would like to see a guide for your framework, or an example task for your use case, please
request it in our [Discord server](https://trigger.dev/discord) and we'll add it to the list.
# Upgrading from v2
How to upgrade v2 jobs to v3 tasks, and how to use them together.
## Changes from v2 to v3
The main difference is that things in v3 are far simpler. That's because in v3 your code is deployed to our servers (unless you self-host) which are long-running.
1. No timeouts.
2. No `io.runTask()` (and no `cacheKeys`).
3. Just use official SDKs, not integrations.
4. `task`s are the new primitive, not `job`s.
## Convert your v2 job using an AI prompt
The prompt in the accordion below gives good results when using Anthropic Claude 3.5 Sonnet. You'll need a relatively large token limit.
Don't forget to paste your own v2 code in a markdown codeblock at the bottom of the prompt before running it.
I would like you to help me convert from Trigger.dev v2 to Trigger.dev v3.
The important differences:
1. The syntax for creating "background jobs" has changed. In v2 it looked like this:
```ts
import { eventTrigger } from "@trigger.dev/sdk";
import { client } from "@/trigger";
import { db } from "@/lib/db";
client.defineJob({
enabled: true,
id: "my-job-id",
name: "My job name",
version: "0.0.1",
// This is triggered by an event using eventTrigger. You can also trigger Jobs with webhooks, on schedules, and more: https://trigger.dev/docs/documentation/concepts/triggers/introduction
trigger: eventTrigger({
name: "theevent.name",
schema: z.object({
phoneNumber: z.string(),
verified: z.boolean(),
}),
}),
run: async (payload, io) => {
//everything needed to be wrapped in io.runTask in v2, to make it possible for long-running code to work
const result = await io.runTask("get-stuff-from-db", async () => {
const socials = await db.query.Socials.findMany({
where: eq(Socials.service, "tiktok"),
});
return socials;
});
io.logger.info("Completed fetch successfully");
},
});
```
In v3 it looks like this:
```ts
import { logger, task } from "@trigger.dev/sdk/v3";
import { db } from "@/lib/db";
export const getCreatorVideosFromTikTok = task({
id: "my-job-id",
run: async (payload: { phoneNumber: string, verified: boolean }) => {
//in v3 there are no timeouts, so you can just use the code as is, no need to wrap in `io.runTask`
const socials = await db.query.Socials.findMany({
where: eq(Socials.service, "tiktok"),
});
//use `logger` instead of `io.logger`
logger.info("Completed fetch successfully");
},
});
```
Notice that the schema on v2 `eventTrigger` defines the payload type. In v3 that needs to be done on the TypeScript type of the `run` payload param.
2\. v2 had integrations with some APIs. Any package that isn't `@trigger.dev/sdk` can be replaced with an official SDK. The syntax may need to be adapted.
For example:
v2:
```ts
import { OpenAI } from "@trigger.dev/openai";
const openai = new OpenAI({
id: "openai",
apiKey: process.env.OPENAI_API_KEY!,
});
client.defineJob({
id: "openai-job",
name: "OpenAI Job",
version: "1.0.0",
trigger: invokeTrigger(),
integrations: {
openai, // Add the OpenAI client as an integration
},
run: async (payload, io, ctx) => {
// Now you can access it through the io object
const completion = await io.openai.chat.completions.create("completion", {
model: "gpt-3.5-turbo",
messages: [
{
role: "user",
content: "Create a good programming joke about background jobs",
},
],
});
},
});
```
Would become in v3:
```ts
import OpenAI from "openai";
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
export const openaiJob = task({
id: "openai-job",
run: async (payload) => {
const completion = await openai.chat.completions.create(
{
model: "gpt-3.5-turbo",
messages: [
{
role: "user",
content: "Create a good programming joke about background jobs",
},
],
});
},
});
```
So don't use the `@trigger.dev/openai` package in v3, use the official OpenAI SDK.
Bear in mind that the syntax for the latest official SDK will probably be different from the @trigger.dev integration SDK. You will need to adapt the code accordingly.
3\. The most critical difference is that inside the `run` function you do NOT need to wrap everything in `io.runTask`. So anything inside there can be extracted out and be used in the main body of the function without wrapping it.
4\. The import for `task` in v3 is `import { task } from "@trigger.dev/sdk/v3";`
5\. You can trigger jobs from other jobs. In v2 this was typically done by either calling `io.sendEvent()` or by calling `yourOtherTask.invoke()`. In v3 you call `.trigger()` on the other task, there are no events in v3.
v2:
```ts
export const parentJob = client.defineJob({
id: "parent-job",
run: async (payload, io) => {
//send event
await client.sendEvent({
name: "user.created",
payload: { name: "John Doe", email: "john@doe.com", paidPlan: true },
});
//invoke
await exampleJob.invoke({ foo: "bar" }, {
idempotencyKey: `some_string_here_${
payload.someValue
}_${new Date().toDateString()}`,
});
},
});
```
v3:
```ts
export const parentJob = task({
id: "parent-job",
run: async (payload) => {
//trigger
await userCreated.trigger({ name: "John Doe", email: "john@doe.com", paidPlan: true });
//trigger, you can pass in an idempotency key
await exampleJob.trigger({ foo: "bar" }, {
idempotencyKey: `some_string_here_${
payload.someValue
}_${new Date().toDateString()}`,
});
}
});
```
Can you help me convert the following code from v2 to v3? Please include the full converted code in the answer, do not truncate it anywhere.
## OpenAI example comparison
This is a (very contrived) example that does a long OpenAI API call (>10s), stores the result in a database, waits for 5 mins, and then returns the result.
### v2
First, the old v2 code, which uses the OpenAI integration. Comments inline:
```ts v2 OpenAI task
import { client } from "~/trigger";
import { eventTrigger } from "@trigger.dev/sdk";
//1. A Trigger.dev integration for OpenAI
import { OpenAI } from "@trigger.dev/openai";
const openai = new OpenAI({
id: "openai",
apiKey: process.env["OPENAI_API_KEY"]!,
});
//2. Use the client to define a "Job"
client.defineJob({
id: "openai-tasks",
name: "OpenAI Tasks",
version: "0.0.1",
trigger: eventTrigger({
name: "openai.tasks",
schema: z.object({
prompt: z.string(),
}),
}),
//3. integrations are added and come through to `io` in the run fn
integrations: {
openai,
},
run: async (payload, io, ctx) => {
//4. You use `io` to get the integration
//5. Also note that "backgroundCreate" was needed for OpenAI
// to do work that lasted longer than your serverless timeout
const chatCompletion = await io.openai.chat.completions.backgroundCreate(
//6. You needed to add "cacheKeys" to any "task"
"background-chat-completion",
{
messages: [{ role: "user", content: payload.prompt }],
model: "gpt-3.5-turbo",
}
);
const result = chatCompletion.choices[0]?.message.content;
if (!result) {
//7. throwing an error at the top-level in v2 failed the task immediately
throw new Error("No result from OpenAI");
}
//8. io.runTask needed to be used to prevent work from happening twice
const dbRow = await io.runTask("store-in-db", async (task) => {
//9. Custom logic can be put here
// Anything returned must be JSON-serializable, so no Date objects etc.
return saveToDb(result);
});
//10. Wait for 5 minutes.
// You need a cacheKey and the 2nd param is a number
await io.wait("wait some time", 60 * 5);
//11. Anything returned must be JSON-serializable, so no Date objects etc.
return result;
},
});
```
### v3
In v3 we eliminate a lot of code, mainly because we don't need tricks to avoid timeouts. Here's the equivalent v3 code:
```ts v3 OpenAI task
import { logger, task, wait } from "@trigger.dev/sdk/v3";
//1. Official OpenAI SDK
import OpenAI from "openai";
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
//2. Jobs don't exist now, use "task"
export const openaiTask = task({
id: "openai-task",
//3. Retries happen if a task throws an error that isn't caught
  // The default settings are in your trigger.config.ts (used if not overridden here)
retry: {
maxAttempts: 3,
},
run: async (payload: { prompt: string }) => {
//4. Use the official SDK
//5. No timeouts, so this can take a long time
const chatCompletion = await openai.chat.completions.create({
messages: [{ role: "user", content: payload.prompt }],
model: "gpt-3.5-turbo",
});
const result = chatCompletion.choices[0]?.message.content;
if (!result) {
//6. throwing an error at the top-level will retry the task (if retries are enabled)
throw new Error("No result from OpenAI");
}
//7. No need to use runTask, just call the function
const dbRow = await saveToDb(result);
//8. You can provide seconds, minutes, hours etc.
// You don't need cacheKeys in v3
await wait.for({ minutes: 5 });
//9. You can return anything that's serializable using SuperJSON
// That includes undefined, Date, bigint, RegExp, Set, Map, Error and URL.
return result;
},
});
```
## Triggering tasks comparison
### v2
In v2 there were different trigger types and triggering each type was slightly different.
```ts v2 triggering
async function yourBackendFunction() {
//1. for `eventTrigger` you use `client.sendEvent`
const event = await client.sendEvent({
name: "openai.tasks",
payload: { prompt: "Create a good programming joke about background jobs" },
});
//2. for `invokeTrigger` you'd call `invoke` on the job
const { id } = await invocableJob.invoke({
prompt: "What is the meaning of life?",
});
}
```
### v3
We've unified triggering in v3. You use `trigger()` or `batchTrigger()` which you can do on any type of task. Including scheduled, webhooks, etc if you want.
```ts v3 triggering
async function yourBackendFunction() {
//call `trigger()` on any task
const handle = await openaiTask.trigger({
prompt: "Tell me a programming joke",
});
}
```
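`batchTrigger()` works the same way but queues several runs of a task in one call. A short sketch (the payload values are illustrative):
```ts v3 batch triggering
async function yourBatchBackendFunction() {
  // Queue multiple runs of the same task in a single call
  const batchHandle = await openaiTask.batchTrigger([
    { payload: { prompt: "Tell me a programming joke" } },
    { payload: { prompt: "Tell me a database joke" } },
  ]);
}
```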
## Upgrading your project
1. Make sure to upgrade all of your trigger.dev packages to v3 first.
```bash
npx @trigger.dev/cli@latest update --to 3.0.0
```
2. Follow the [v3 quick start](/quick-start) to get started with v3. Our new CLI will take care of the rest.
## Using v2 together with v3
You can use v2 and v3 in the same codebase. This can be useful where you already have v2 jobs or where we don't support features you need (yet).
We do not support calling v3 tasks from v2 jobs or vice versa.
# Email us
You can [email us](https://trigger.dev/contact) by filling out this form.
# Slack support
If you have a paid Trigger.dev account, you can request a private Slack Connect channel.
To do this:
1. Login to the [Trigger.dev web app](https://cloud.trigger.dev).
2. Subscribe to a paid plan if you haven't already.
3. In the bottom-left corner click "Join our Slack".
# How it works
Understand how Trigger.dev works and how it can help you.
## Introduction
Trigger.dev v3 allows you to integrate long-running async tasks into your application and run them in the background. This allows you to offload tasks that take a long time to complete, such as sending multi-day email campaigns, processing videos, or running long chains of AI tasks.
For example, the task below processes a video with `ffmpeg` and sends the results to an S3 bucket, then updates a database with the results and sends an email to the user.
```ts /trigger/video.ts
import { logger, task } from "@trigger.dev/sdk/v3";
import { updateVideoUrl } from "../db.js";
import ffmpeg from "fluent-ffmpeg";
import { Readable } from "node:stream";
import type { ReadableStream } from "node:stream/web";
import * as fs from "node:fs/promises";
import * as path from "node:path";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { sendEmail } from "../email.js";
import { getVideo } from "../db.js";
// Initialize S3 client
const s3Client = new S3Client({
region: process.env.AWS_REGION,
});
export const convertVideo = task({
id: "convert-video",
retry: {
maxAttempts: 5,
minTimeoutInMs: 1000,
maxTimeoutInMs: 10000,
factor: 2,
},
run: async ({ videoId }: { videoId: string }) => {
const { url, userId } = await getVideo(videoId);
const outputPath = path.join("/tmp", `output_${videoId}.mp4`);
const response = await fetch(url);
await new Promise((resolve, reject) => {
ffmpeg(Readable.fromWeb(response.body as ReadableStream))
.videoFilters("scale=iw/2:ih/2")
.output(outputPath)
.on("end", resolve)
.on("error", reject)
.run();
});
const processedContent = await fs.readFile(outputPath);
// Upload to S3
const s3Key = `processed-videos/output_${videoId}.mp4`;
const uploadParams = {
Bucket: process.env.S3_BUCKET,
Key: s3Key,
Body: processedContent,
};
await s3Client.send(new PutObjectCommand(uploadParams));
const s3Url = `https://${process.env.S3_BUCKET}.s3.amazonaws.com/${s3Key}`;
logger.info("Video converted", { videoId, s3Url });
// Update database
await updateVideoUrl(videoId, s3Url);
await sendEmail(
userId,
"Video Processing Complete",
`Your video has been processed and is available at: ${s3Url}`
);
return { success: true, s3Url };
},
});
```
Now in your application, you can trigger this task by calling:
```ts
import { NextResponse } from "next/server";
import { tasks } from "@trigger.dev/sdk/v3";
import type { convertVideo } from "./trigger/video";
// 👆 **type-only** import
export async function POST(request: Request) {
const body = await request.json();
// Trigger the task, this will return before the task is completed
const handle = await tasks.trigger("convert-video", body);
return NextResponse.json(handle);
}
```
This will schedule the task to run in the background and return a handle that you can use to check the status of the task. This allows your backend application to respond quickly to the user and offload the long-running task to Trigger.dev.
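For example, you could look the run up later with the SDK's `runs.retrieve()` (a sketch; the exact status values shown are illustrative):
```ts
import { runs } from "@trigger.dev/sdk/v3";

// Look up the run using the id from the handle returned by trigger()
const run = await runs.retrieve(handle.id);
console.log(run.status); // e.g. "QUEUED", "EXECUTING", "COMPLETED"
```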
## The CLI
Trigger.dev comes with a CLI that allows you to initialize Trigger.dev into your project, deploy your tasks, and run your tasks locally. You can run it via `npx` like so:
```sh
npx trigger.dev@latest login # Log in to your Trigger.dev account
npx trigger.dev@latest init # Initialize Trigger.dev in your project
npx trigger.dev@latest dev # Run your tasks locally
npx trigger.dev@latest deploy # Deploy your tasks to the Trigger.dev instance
```
All these commands work with the Trigger.dev cloud and/or your self-hosted instance. It supports multiple profiles so you can easily switch between different accounts or instances.
```sh
npx trigger.dev@latest login --profile my-profile -a https://trigger.example.com # Log in to a specific profile on a self-hosted instance
npx trigger.dev@latest dev --profile my-profile # Run your tasks locally using a specific profile
npx trigger.dev@latest deploy --profile my-profile # Deploy your tasks using a specific profile
```
## Trigger.dev architecture
Trigger.dev implements a serverless architecture (without timeouts!) that allows you to run your tasks in a scalable and reliable way. When you run `npx trigger.dev@latest deploy`, we build and deploy your task code to your Trigger.dev instance. Then, when you trigger a task from your application, it's run in a secure, isolated environment with the resources you need to complete the task. A simplified diagram for a task execution looks like this:
```mermaid
sequenceDiagram
participant App
participant Trigger.dev
participant Task Worker
App->>Trigger.dev: Trigger task
Trigger.dev-->>App: Task handle
Trigger.dev->>Task Worker: Run task
Task Worker-->>Trigger.dev: Task completed
```
In reality there are many more components involved, such as the task queue, the task scheduler, the task worker pool, and logging, but this diagram gives you a high-level overview of how Trigger.dev works.
## The Checkpoint-Resume System
Trigger.dev implements a powerful Checkpoint-Resume System that enables efficient execution of long-running background tasks in a serverless-like environment. This system allows tasks to pause, checkpoint their state, and resume seamlessly, optimizing resource usage and enabling complex workflows.
Here's how the Checkpoint-Resume System works:
1. **Task Execution**: When a task is triggered, it runs in an isolated environment with all necessary resources.
2. **Subtask Handling**: If a task needs to trigger a subtask, it can do so and wait for its completion using `triggerAndWait`
3. **State Checkpointing**: While waiting for a subtask or during a programmed pause (e.g., `wait.for({ seconds: 30 })`), the system uses CRIU (Checkpoint/Restore In Userspace) to create a checkpoint of the task's entire state, including memory, CPU registers, and open file descriptors.
4. **Resource Release**: After checkpointing, the parent task's resources are released, freeing up the execution environment.
5. **Efficient Storage**: The checkpoint is efficiently compressed and stored on disk, ready to be restored when needed.
6. **Event-Driven Resumption**: When a subtask completes or a wait period ends, Trigger.dev's event system triggers the restoration process.
7. **State Restoration**: The checkpoint is loaded back into a new execution environment, restoring the task to its exact state before suspension.
8. **Seamless Continuation**: The task resumes execution from where it left off, with any subtask results or updated state seamlessly integrated.
This approach allows Trigger.dev to manage resources efficiently, handle complex task dependencies, and provide a virtually limitless execution time for your tasks, all while maintaining the simplicity and scalability of a serverless architecture.
Example of a parent and child task using the Checkpoint-Resume System:
```ts
import { task, wait } from "@trigger.dev/sdk/v3";
export const parentTask = task({
id: "parent-task",
run: async () => {
console.log("Starting parent task");
// This will cause the parent task to be checkpointed and suspended
const result = await childTask.triggerAndWait({ data: "some data" });
console.log("Child task result:", result);
// This will also cause the task to be checkpointed and suspended
await wait.for({ seconds: 30 });
console.log("Resumed after 30 seconds");
return "Parent task completed";
},
});
export const childTask = task({
id: "child-task",
run: async (payload: { data: string }) => {
console.log("Starting child task with data:", payload.data);
    // Simulate some work with a short pause
    await new Promise((resolve) => setTimeout(resolve, 5000));
return "Child task result";
},
});
```
The diagram below illustrates the flow of the parent and child tasks using the Checkpoint-Resume System:
```mermaid
sequenceDiagram
participant App
participant Trigger.dev
participant Parent Task
participant Child Task
participant CR System
participant Storage
App->>Trigger.dev: Trigger parent task
Trigger.dev->>Parent Task: Start execution
Parent Task->>Child Task: Trigger child task
Parent Task->>CR System: Request snapshot
CR System->>Storage: Store snapshot
CR System-->>Parent Task: Confirm snapshot stored
Parent Task->>Trigger.dev: Release resources
Child Task->>Trigger.dev: Complete execution
Trigger.dev->>CR System: Request parent task restoration
CR System->>Storage: Retrieve snapshot
CR System->>Parent Task: Restore state
Parent Task->>Trigger.dev: Resume execution
Parent Task->>Trigger.dev: Complete execution
```
This is why, in the Trigger.dev Cloud, we don't charge for the time waiting for subtasks or the
time spent in a paused state.
## Durable execution
Trigger.dev's Checkpoint-Resume System, combined with idempotency keys, enables durable execution of complex workflows. This approach allows for efficient retries and caching of results, ensuring that work is not unnecessarily repeated in case of failures.
### How it works
1. **Task breakdown**: Complex workflows are broken down into smaller, independent subtasks.
2. **Idempotency keys**: Each subtask is assigned a unique idempotency key.
3. **Result caching**: The output of each subtask is cached based on its idempotency key.
4. **Intelligent retries**: If a failure occurs, only the failed subtask and subsequent tasks are retried.
### Example: Video processing workflow
Let's rewrite the `convert-video` task above to be more durable:
```ts /trigger/video.ts
import { idempotencyKeys, logger, task } from "@trigger.dev/sdk/v3";
import { processVideo, sendUserEmail, uploadToS3 } from "./tasks.js";
import { updateVideoUrl } from "../db.js";
export const convertVideo = task({
id: "convert-video",
retry: {
maxAttempts: 5,
minTimeoutInMs: 1000,
maxTimeoutInMs: 10000,
factor: 2,
},
run: async ({ videoId }: { videoId: string }) => {
// Automatically scope the idempotency key to this run, across retries
const idempotencyKey = await idempotencyKeys.create(videoId);
// Process video
const { processedContent } = await processVideo
.triggerAndWait({ videoId }, { idempotencyKey })
.unwrap(); // Calling unwrap will return the output of the subtask, or throw an error if the subtask failed
// Upload to S3
const { s3Url } = await uploadToS3
.triggerAndWait({ processedContent, videoId }, { idempotencyKey })
.unwrap();
// Update database
await updateVideoUrl(videoId, s3Url);
// Send email, we don't need to wait for this to finish
await sendUserEmail.trigger({ videoId, s3Url }, { idempotencyKey });
return { success: true, s3Url };
},
});
```
```ts /trigger/tasks.ts
import { task, logger } from "@trigger.dev/sdk/v3";
import ffmpeg from "fluent-ffmpeg";
import { Readable } from "node:stream";
import type { ReadableStream } from "node:stream/web";
import * as fs from "node:fs/promises";
import * as path from "node:path";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { sendEmail } from "../email.js";
import { getVideo } from "../db.js";
// Initialize S3 client
const s3Client = new S3Client({
region: process.env.AWS_REGION,
});
export const processVideo = task({
id: "process-video",
run: async ({ videoId }: { videoId: string }) => {
const { url } = await getVideo(videoId);
const outputPath = path.join("/tmp", `output_${videoId}.mp4`);
const response = await fetch(url);
await logger.trace("ffmpeg", async (span) => {
await new Promise((resolve, reject) => {
ffmpeg(Readable.fromWeb(response.body as ReadableStream))
.videoFilters("scale=iw/2:ih/2")
.output(outputPath)
.on("end", resolve)
.on("error", reject)
.run();
});
});
const processedContent = await fs.readFile(outputPath);
await fs.unlink(outputPath);
return { processedContent: processedContent.toString("base64") };
},
});
export const uploadToS3 = task({
id: "upload-to-s3",
run: async (payload: { processedContent: string; videoId: string }) => {
const { processedContent, videoId } = payload;
const s3Key = `processed-videos/output_${videoId}.mp4`;
const uploadParams = {
Bucket: process.env.S3_BUCKET,
Key: s3Key,
Body: Buffer.from(processedContent, "base64"),
};
await s3Client.send(new PutObjectCommand(uploadParams));
const s3Url = `https://${process.env.S3_BUCKET}.s3.amazonaws.com/${s3Key}`;
return { s3Url };
},
});
export const sendUserEmail = task({
id: "send-user-email",
run: async ({ videoId, s3Url }: { videoId: string; s3Url: string }) => {
const { userId } = await getVideo(videoId);
return await sendEmail(
userId,
"Video Processing Complete",
`Your video has been processed and is available at: ${s3Url}`
);
},
});
```
### How retries work
Let's say the email sending fails in our video processing workflow. Here's how the retry process works:
1. The main task throws an error and is scheduled for retry.
2. When retried, it starts from the beginning, but leverages cached results for completed subtasks.
Here's a sequence diagram illustrating this process:
```mermaid
sequenceDiagram
participant Main as Main Task
participant Process as Process Video
participant Upload as Upload to S3
participant DB as Update Database
participant Email as Send Email
Main->>Process: triggerAndWait (1st attempt)
Process-->>Main: Return result
Main->>Upload: triggerAndWait (1st attempt)
Upload-->>Main: Return result
Main->>DB: Update
Main->>Email: triggerAndWait (1st attempt)
Email--xMain: Fail
Main-->>Main: Schedule retry
Main->>Process: triggerAndWait (2nd attempt)
Process-->>Main: Return cached result
Main->>Upload: triggerAndWait (2nd attempt)
Upload-->>Main: Return cached result
Main->>DB: Update (idempotent)
Main->>Email: triggerAndWait (2nd attempt)
Email-->>Main: Success
```
## The build system
When you run `npx trigger.dev@latest deploy` or `npx trigger.dev@latest dev`, we build your task code using our build system, which is powered by [esbuild](https://esbuild.github.io/). When deploying, the code is packaged up into a Docker image and deployed to your Trigger.dev instance. When running in dev mode, the code is built and run locally on your machine. Some features of our build system include:
* **Bundled by default**: Code and dependencies are bundled and tree-shaken by default.
* **Build extensions**: Use and write custom build extensions to transform your code or the resulting Docker image.
* **ESM output**: We output ESM, which allows tree-shaking and better performance.
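For example, here's a minimal sketch of a config that uses the build options; the `external` option keeps a package out of the bundle so it's installed in the image instead, and `sharp` here is just a hypothetical native dependency:
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
  project: "<project ref>",
  build: {
    // Keep this native dependency out of the esbuild bundle;
    // it will be installed in the deployment image instead
    external: ["sharp"],
  },
});
```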
You can review the build output by running deploy with the `--dry-run` flag, which will output the Containerfile and the build output.
Learn more about working with our build system in the [configuration docs](/config/config-file).
## Dev mode
When you run `npx trigger.dev@latest dev`, we run your task code locally on your machine. All scheduling is still done in the Trigger.dev server instance, but the task code is run locally. This allows you to develop and test your tasks locally before deploying them to the cloud, and is especially useful for debugging and testing.
* The same build system is used in dev mode, so you can be sure that your code will run the same locally as it does in the cloud.
* Changes are automatically detected and a new version is spun up when you save your code.
* Add debuggers and breakpoints to your code and debug it locally.
* Each task is run in a separate process, so you can run multiple tasks in parallel.
* Auto-cancels tasks when you stop the dev server.
Trigger.dev currently does not support "offline" dev mode, where you can run tasks without an
internet connection. [Please let us know](https://feedback.trigger.dev/) if this is a feature you
want/need.
## Staging and production environments
Trigger.dev supports deploying to multiple "deployed" environments, such as staging and production. This allows you to test your tasks in a staging environment before deploying them to production. You can deploy to a new environment by running `npx trigger.dev@latest deploy --env <env>`, where `<env>` is the name of the environment you want to deploy to. Each environment has its own API Key, which you can use to trigger tasks in that environment.
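For example, to deploy to the staging environment:
```bash
npx trigger.dev@latest deploy --env staging
```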
## OpenTelemetry
The Trigger.dev logging and task dashboard is powered by OpenTelemetry traces and logs, which allows you to trace your tasks and auto-instrument your code. We also auto-correlate logs from subtasks and parent tasks, making it easy to view the entire trace of a task execution. A single run of the video processing task above looks like this in the dashboard:
![OpenTelemetry trace](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/opentelemetry-trace.png)
Because we use standard OpenTelemetry, you can instrument your code and OpenTelemetry compatible libraries to get detailed traces and logs of your tasks. The above trace instruments both Prisma and the AWS SDK:
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { PrismaInstrumentation } from "@prisma/instrumentation";
import { AwsInstrumentation } from "@opentelemetry/instrumentation-aws-sdk";
export default defineConfig({
project: "",
instrumentations: [new PrismaInstrumentation(), new AwsInstrumentation()],
});
```
# Idempotency
An API call or operation is "idempotent" if it has the same result when called more than once.
We currently support idempotency at the task level, meaning that if you trigger a task with the same `idempotencyKey` twice, the second request will not create a new task run.
## `idempotencyKey` option
You can provide an `idempotencyKey` to ensure that a task is only triggered once with the same key. This is useful if you are triggering a task within another task that might be retried:
```typescript
import { idempotencyKeys, task } from "@trigger.dev/sdk/v3";
export const myTask = task({
id: "my-task",
retry: {
maxAttempts: 4,
},
run: async (payload: any) => {
// By default, idempotency keys generated are unique to the run, to prevent retries from duplicating child tasks
const idempotencyKey = await idempotencyKeys.create("my-task-key");
// childTask will only be triggered once with the same idempotency key
await childTask.triggerAndWait(payload, { idempotencyKey });
// Do something else, that may throw an error and cause the task to be retried
},
});
```
You can use the `idempotencyKeys.create` SDK function to create an idempotency key before passing it to the `options` object.
When running inside a task, we automatically inject the run ID into generated idempotency keys by default. You can turn this off by passing the `scope` option to `idempotencyKeys.create`:
```typescript
import { idempotencyKeys, task } from "@trigger.dev/sdk/v3";
export const myTask = task({
id: "my-task",
retry: {
maxAttempts: 4,
},
run: async (payload: any) => {
// This idempotency key will be the same for all runs of this task
const idempotencyKey = await idempotencyKeys.create("my-task-key", { scope: "global" });
// childTask will only be triggered once with the same idempotency key
await childTask.triggerAndWait(payload, { idempotencyKey });
// This is the same as the above
await childTask.triggerAndWait(payload, { idempotencyKey: "my-task-key" });
},
});
```
If you are triggering a task from your backend code, you can use the `idempotencyKeys.create` SDK function to create an idempotency key.
```typescript
import { idempotencyKeys, tasks } from "@trigger.dev/sdk/v3";
// You can also pass an array of strings to create an idempotency key
const idempotencyKey = await idempotencyKeys.create([myUser.id, "my-task"]);
await tasks.trigger("my-task", { some: "data" }, { idempotencyKey });
```
You can also pass a string to the `idempotencyKey` option, without first creating it with `idempotencyKeys.create`.
```typescript
import { myTask } from "./trigger/myTasks";
// Pass a plain string directly as the idempotency key
await myTask.trigger({ some: "data" }, { idempotencyKey: myUser.id });
```
Make sure you provide sufficiently unique keys to avoid collisions.
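As a quick sketch of the deduplication behavior (assuming a hypothetical task with id `my-task`), triggering twice with the same key only creates one run:
```typescript
import { tasks } from "@trigger.dev/sdk/v3";
// Both calls use the same key, so only one run is created
const first = await tasks.trigger("my-task", { some: "data" }, { idempotencyKey: "order-1234" });
// This second call is deduplicated and won't create a new run
const second = await tasks.trigger("my-task", { some: "data" }, { idempotencyKey: "order-1234" });
```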
You can pass the `idempotencyKey` when calling `batchTrigger` as well:
```typescript
import { idempotencyKeys, tasks } from "@trigger.dev/sdk/v3";
await tasks.batchTrigger("my-task", [
{
payload: { some: "data" },
options: { idempotencyKey: await idempotencyKeys.create(myUser.id) },
},
]);
```
## Payload-based idempotency
We don't currently support payload-based idempotency, but you can implement it yourself by hashing the payload and using the hash as the idempotency key.
```typescript
import { idempotencyKeys, tasks } from "@trigger.dev/sdk/v3";
import { createHash } from "node:crypto";
// Somewhere in your code
const idempotencyKey = await idempotencyKeys.create(hash(childPayload));
// childTask will only be triggered once with the same idempotency key
await tasks.trigger("child-task", { some: "payload" }, { idempotencyKey });
// Create a hash of the payload using Node.js crypto
// Ideally, you'd do a stable serialization of the payload before hashing, to ensure the same payload always results in the same hash
function hash(payload: any): string {
const hash = createHash("sha256");
hash.update(JSON.stringify(payload));
return hash.digest("hex");
}
```
## Important notes
Idempotency keys, even those scoped globally, are actually scoped to the task and the environment. This means your keys can never collide with keys from other environments (e.g. dev will never collide with prod), or with keys from other projects and orgs.
If you use the same idempotency key for triggering different tasks, the tasks will not be idempotent, and both tasks will be triggered. There's currently no way to make multiple tasks idempotent with the same key.
# Introduction
Welcome to the Trigger.dev v3 documentation.
## What is Trigger.dev (v3)?
Trigger.dev v3 makes it easy to write reliable long-running tasks without timeouts.
* We run your tasks with no timeouts. You don't have to manage any infrastructure (unless you [self-host](/open-source-self-hosting)). Workers are automatically scaled and managed for you.
* We provide a multi-tenant queue that is used when triggering tasks.
* We provide an SDK and CLI for writing tasks in your existing codebase, inside [/trigger folders](/config/config-file).
* We provide different types of tasks: [regular](/tasks-regular) and [scheduled](/tasks/scheduled).
* We provide a dashboard for monitoring, debugging, and managing your tasks.
We're [open source](https://github.com/triggerdotdev/trigger.dev) and you can choose to use the [Trigger.dev Cloud](https://cloud.trigger.dev) or [Self-host Trigger.dev](/open-source-self-hosting) on your own infrastructure.
## Getting started
* Go from zero to running your first task in 3 minutes.
* Tasks are the core of Trigger.dev. Learn what they are and how to write them.
* Detailed guides for setting up Trigger.dev with popular frameworks and services, including Next.js, Remix, Supabase and more.
* Code you can use in your own projects, including OpenAI, Deepgram, FFmpeg, Puppeteer, Stripe, Supabase and more.
## Getting help
We'd love to hear from you or give you a hand getting started. Here are some ways to get in touch with us. We'd also ❤️ your support.
* The help forum is a great place to get help with any questions about Trigger.dev.
* Follow us on [X (Twitter)](https://twitter.com/triggerdotdev) to get the latest updates and news.
* Arrange a call with one of the founders. We can help answer questions and give 1-on-1 help building your first task.
* Check us out on GitHub at [triggerdotdev/trigger.dev](https://github.com/triggerdotdev/trigger.dev).
# Limits
There are some hard and soft limits that you might hit.
## Concurrency limits
| Pricing tier | Limit |
| :----------- | :------------------- |
| Free | 5 concurrent runs |
| Hobby | 25 concurrent runs |
| Pro | 100+ concurrent runs |
If you need more than 100 concurrent runs on the Pro tier, you can request more by contacting us via [email](https://trigger.dev/contact) or [Discord](https://trigger.dev/discord).
## Rate limits
Generally speaking, each SDK call is an API call.
| Limit | Details |
| :---- | :------------------------ |
| API | 1,500 requests per minute |
The most common cause of hitting the API rate limit is calling `trigger()` on a task in a loop. Instead, use `batchTrigger()`, which triggers multiple tasks in a single API call. You can have up to 100 tasks in a single batch trigger call.
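As a rough sketch (assuming a hypothetical `my-task` and a list of user IDs):
```ts
import { tasks } from "@trigger.dev/sdk/v3";
const userIds = ["user_1", "user_2", "user_3"]; // hypothetical data
// Avoid this: one API call per iteration counts against the rate limit
for (const userId of userIds) {
  await tasks.trigger("my-task", { userId });
}
// Prefer this: a single API call for up to 100 payloads
await tasks.batchTrigger(
  "my-task",
  userIds.map((userId) => ({ payload: { userId } }))
);
```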
## Queued tasks
The number of queued tasks by environment.
| Limit | Details |
| :------ | :----------------- |
| Dev | At most 500 |
| Staging | At most 10 million |
| Prod | At most 10 million |
## Schedules
| Pricing tier | Limit |
| :----------- | :----------------- |
| Free | 5 per project |
| Hobby | 100 per project |
| Pro | 1,000+ per project |
When attaching schedules to tasks we strongly recommend you add them [in our dashboard](/tasks/scheduled#attaching-schedules-in-the-dashboard) if they're "static". That way you can control them easily per environment.
If you add them [dynamically using code](/management/schedules/create), make sure you add a `deduplicationKey` so you don't attach the same schedule to a task multiple times. If you don't, your task will be triggered multiple times, it will cost you more, and you will hit the limit.
If you're creating schedules for your users, you will definitely need to request more schedules from us.
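Here's a minimal sketch of attaching a schedule with a `deduplicationKey` (the task identifier and key values are hypothetical); running it again with the same key updates the existing schedule instead of creating a duplicate:
```ts
import { schedules } from "@trigger.dev/sdk/v3";
// Safe to run repeatedly: the deduplicationKey prevents duplicate schedules
const schedule = await schedules.create({
  task: "daily-digest", // hypothetical task identifier
  cron: "0 8 * * *", // every day at 8am
  externalId: "user_1234", // hypothetical: your user's ID
  deduplicationKey: "user_1234-daily-digest",
});
```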
## Log retention
| Pricing tier | Limit |
| :----------- | :------ |
| Free | 1 day |
| Hobby | 7 days |
| Pro | 30 days |
## Log size
We limit the size of logs to prevent oversized data potentially causing issues.
#### Attribute Limits
* Span Attribute Count Limit: 256
* Log Attribute Count Limit: 256
* Span Attribute Value Length Limit: 1028 characters
* Log Attribute Value Length Limit: 1028 characters
#### Event and Link Limits
* Span Event Count Limit: 10
* Link Count Limit: 2
* Attributes per Link Limit: 10
* Attributes per Event Limit: 10
#### I/O Packet Length Limit
128 KB (131,072 bytes)
#### Attribute Clipping Behavior
* Attributes exceeding the value length limit (1028 characters) are discarded.
* If the total number of attributes exceeds 256, additional attributes are not included.
#### Attribute Value Size Calculation
* Strings: Actual length of the string
* Numbers: 8 bytes
* Booleans: 4 bytes
* Arrays: Sum of the sizes of all elements
* Undefined or null: 0 bytes
## Task payloads and outputs
| Limit | Details |
| :--------------------- | :--------------------------------------------- |
| Single trigger payload | Must not exceed 10MB |
| Batch trigger payload | The total of all payloads must not exceed 10MB |
| Task outputs | Must not exceed 10MB |
Payloads and outputs that exceed 512KB will be offloaded to object storage, and a presigned URL will be provided to download the data when calling `runs.retrieve`. You don't need to handle this in your tasks, however, as we transparently upload and download the data during operation.
## Alerts
An alert destination is a single email address, Slack channel, or webhook URL that you want to send alerts to. If you're on the Pro tier and need more than 100 alert destinations, you can request more by contacting us via [email](https://trigger.dev/contact) or [Discord](https://trigger.dev/discord).
| Pricing tier | Limit |
| :----------- | :---------------------- |
| Free | 1 alert destination |
| Hobby | 3 alert destinations |
| Pro | 100+ alert destinations |
## Machines
The default machine is `small-1x` which has 0.5 vCPU and 0.5 GB of RAM. You can optionally configure a higher spec machine which will increase the cost of running the task but can also improve the performance of the task if it is CPU or memory bound.
See the [machine configurations](/machines#machine-configurations) for more details.
## Team members
| Pricing tier | Limit |
| :----------- | :--------------- |
| Free | 5 team members |
| Hobby | 5 team members |
| Pro | 25+ team members |
# Logging and tracing
How to use the built-in logging and tracing system.
![The run log](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/run-log.png)
The run log shows you exactly what happened in every run of your tasks. It is made up of logs, traces, and spans.
## Logs
You can use `console.log()`, `console.error()`, etc as normal and they will be shown in your run log. This is the standard function so you can use it as you would in any other JavaScript or TypeScript code. Logs from any functions/packages will also be shown.
### logger
We recommend that you use our `logger` object which creates structured logs. Structured logs will make it easier for you to search the logs to quickly find runs.
```ts /trigger/logging.ts
import { task, logger } from "@trigger.dev/sdk/v3";
export const loggingExample = task({
id: "logging-example",
run: async (payload: { data: Record<string, unknown> }) => {
//the first parameter is the message, the second parameter must be a key-value object (Record<string, unknown>)
logger.debug("Debug message", payload.data);
logger.log("Log message", payload.data);
logger.info("Info message", payload.data);
logger.warn("You've been warned", payload.data);
logger.error("Error message", payload.data);
},
});
```
## Tracing and spans
Tracing is a way to follow the flow of your code. It's very useful for debugging and understanding how your code is working, especially with long-running or complex tasks.
Trigger.dev uses OpenTelemetry tracing under the hood, with automatic tracing for many things like task triggering, task attempts, and HTTP requests.
| Name | Description |
| ------------- | -------------------------------- |
| Task triggers | Each time a task is triggered. |
| Task attempts | Each attempt of a task run. |
| HTTP requests | HTTP requests made by your code. |
### Adding instrumentations
![The run log](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/auto-instrumentation.png)
You can [add instrumentations](/config/config-file#instrumentations). The Prisma one above will automatically trace all Prisma queries.
### Add custom traces
If you want to add custom traces to your code, you can use the `logger.trace` function. It will create a new OTEL span and you can set attributes on it.
```ts
import { logger, task } from "@trigger.dev/sdk/v3";
export const customTrace = task({
id: "custom-trace",
run: async (payload) => {
//you can wrap code in a trace, and set attributes
const user = await logger.trace("fetch-user", async (span) => {
span.setAttribute("user.id", "1");
//...do stuff
//you can return a value
return {
id: "1",
name: "John Doe",
fetchedAt: new Date(),
};
});
const usersName = user.name;
},
});
```
# Machines
Configure the number of vCPUs and GBs of RAM you want the task to use.
The `machine` configuration is optional. Using higher spec machines will increase the cost of running the task but can also improve the performance of the task if it is CPU or memory bound.
```ts /trigger/heavy-task.ts
import { task } from "@trigger.dev/sdk/v3";
export const heavyTask = task({
id: "heavy-task",
machine: {
preset: "large-1x",
},
run: async ({ payload, ctx }) => {
//...
},
});
```
The default machine is `small-1x` which has 0.5 vCPU and 0.5 GB of RAM. You can change the default machine in your `trigger.config.ts` file:
```ts trigger.config.ts
import type { TriggerConfig } from "@trigger.dev/sdk/v3";
export const config: TriggerConfig = {
machine: "small-2x",
// ... other config
};
```
## Machine configurations
| Preset | vCPU | Memory (GB) | Disk space |
| ------------------ | ---- | ------ | ---------- |
| micro | 0.25 | 0.25 | 10GB |
| small-1x (default) | 0.5 | 0.5 | 10GB |
| small-2x | 1 | 1 | 10GB |
| medium-1x | 1 | 2 | 10GB |
| medium-2x | 2 | 4 | 10GB |
| large-1x | 4 | 8 | 10GB |
| large-2x | 8 | 16 | 10GB |
You can view the Trigger.dev cloud pricing for these machines [here](https://trigger.dev/pricing#computePricing).
# Create Env Var
v3-openapi POST /api/v1/projects/{projectRef}/envvars/{env}
Create a new environment variable for a specific project and environment.
# Delete Env Var
v3-openapi DELETE /api/v1/projects/{projectRef}/envvars/{env}/{name}
Delete a specific environment variable for a specific project and environment.
# Import Env Vars
v3-openapi POST /api/v1/projects/{projectRef}/envvars/{env}/import
Upload multiple environment variables for a specific project and environment.
# List Env Vars
v3-openapi GET /api/v1/projects/{projectRef}/envvars/{env}
List all environment variables for a specific project and environment.
# Retrieve Env Var
v3-openapi GET /api/v1/projects/{projectRef}/envvars/{env}/{name}
Retrieve a specific environment variable for a specific project and environment.
# Update Env Var
v3-openapi PUT /api/v1/projects/{projectRef}/envvars/{env}/{name}
Update a specific environment variable for a specific project and environment.
# Overview & Authentication
Using the Trigger.dev v3 management API
## Installation
The management API is available through the same `@trigger.dev/sdk` package used in defining and triggering tasks. If you have already installed the package in your project, you can skip this step.
```bash npm
npm i @trigger.dev/sdk@latest
```
```bash pnpm
pnpm add @trigger.dev/sdk@latest
```
```bash yarn
yarn add @trigger.dev/sdk@latest
```
## Usage
All `v3` functionality is provided through the `@trigger.dev/sdk/v3` module. You can import the entire module or individual resources as needed.
```ts
import { configure, runs } from "@trigger.dev/sdk/v3";
configure({
// this is the default; if the `TRIGGER_SECRET_KEY` environment variable is set, you can omit calling configure
secretKey: process.env["TRIGGER_SECRET_KEY"],
});
async function main() {
const result = await runs.list({
limit: 10,
status: ["COMPLETED"],
});
}
main().catch(console.error);
```
## Authentication
There are two methods of authenticating with the management API: using a secret key associated with a specific environment in a project (`secretKey`), or using a personal access token (`personalAccessToken`). Both methods should only be used in a backend server, as they provide full access to the project.
There is a separate authentication strategy when making requests from your frontend application.
See the [Frontend guide](/frontend/overview) for more information. This guide is for backend usage
only.
Certain API functions work with both authentication methods, but require different arguments depending on the method used. For example, the `runs.list` function can be called using either a `secretKey` or a `personalAccessToken`, but the `projectRef` argument is required when using a `personalAccessToken`:
```ts
import { configure, runs } from "@trigger.dev/sdk/v3";
// Using secretKey authentication
configure({
secretKey: process.env["TRIGGER_SECRET_KEY"], // starts with tr_dev_ or tr_prod_
});
function secretKeyExample() {
return runs.list({
limit: 10,
status: ["COMPLETED"],
});
}
// Using personalAccessToken authentication
configure({
secretKey: process.env["TRIGGER_ACCESS_TOKEN"], // starts with tr_pat_
});
function personalAccessTokenExample() {
// Notice the projectRef argument is required when using a personalAccessToken
return runs.list("prof_1234", {
limit: 10,
status: ["COMPLETED"],
projectRef: "tr_proj_1234567890",
});
}
```
Consult the following table to see which endpoints support each authentication method.
| Endpoint | Secret key | Personal Access Token |
| ---------------------- | ---------- | --------------------- |
| `task.trigger` | ✅ | |
| `task.batchTrigger` | ✅ | |
| `runs.list` | ✅ | ✅ |
| `runs.retrieve` | ✅ | |
| `runs.cancel` | ✅ | |
| `runs.replay` | ✅ | |
| `envvars.list` | ✅ | ✅ |
| `envvars.retrieve` | ✅ | ✅ |
| `envvars.upload` | ✅ | ✅ |
| `envvars.create` | ✅ | ✅ |
| `envvars.update` | ✅ | ✅ |
| `envvars.del` | ✅ | ✅ |
| `schedules.list` | ✅ | |
| `schedules.create` | ✅ | |
| `schedules.retrieve` | ✅ | |
| `schedules.update` | ✅ | |
| `schedules.activate` | ✅ | |
| `schedules.deactivate` | ✅ | |
| `schedules.del` | ✅ | |
### Secret key
Secret key authentication scopes the API access to a specific environment in a project, and works with certain endpoints. You can read our [API Keys guide](/apikeys) for more information.
### Personal Access Token (PAT)
A PAT is a token associated with a specific user, and gives access to all the orgs, projects, and environments that the user has access to. You can identify a PAT by the `tr_pat_` prefix. Because a PAT does not scope access to a specific environment, you must provide the `projectRef` argument when using a PAT (and sometimes the environment as well).
For example, when uploading environment variables using a PAT, you must provide the `projectRef` and `environment` arguments:
```ts
import { configure, envvars } from "@trigger.dev/sdk/v3";
configure({
secretKey: process.env["TRIGGER_ACCESS_TOKEN"], // starts with tr_pat_
});
await envvars.upload("proj_1234", "dev", {
variables: {
MY_ENV_VAR: "MY_ENV_VAR_VALUE",
},
override: true,
});
```
## Handling errors
When the SDK method is unable to connect to the API server, or the API server returns a non-successful response, the SDK will throw an `ApiError` that you can catch and handle:
```ts
import { runs, APIError } from "@trigger.dev/sdk/v3";
async function main() {
try {
const run = await runs.retrieve("run_1234");
} catch (error) {
if (error instanceof APIError) {
console.error(`API error: ${error.status}, ${error.headers}, ${error.body}`);
} else {
console.error(`Unknown error: ${error.message}`);
}
}
}
```
## Retries
The SDK will automatically retry requests that fail due to network errors or server errors. By default, the SDK will retry requests up to 3 times, with an exponential backoff delay between retries.
You can customize the retry behavior by passing a `requestOptions` option to the `configure` function:
```ts
import { configure } from "@trigger.dev/sdk/v3";
configure({
requestOptions: {
retry: {
maxAttempts: 5,
minTimeoutInMs: 1000,
maxTimeoutInMs: 5000,
factor: 1.8,
randomize: true,
},
},
});
```
All SDK functions also take a `requestOptions` parameter as the last argument, which can be used to customize the request options. You can use this to disable retries for a specific request:
```ts
import { runs } from "@trigger.dev/sdk/v3";
async function main() {
const run = await runs.retrieve("run_1234", {
retry: {
maxAttempts: 1, // Disable retries
},
});
}
```
When running inside a task, the SDK ignores customized retry options for certain functions (e.g.,
`task.trigger`, `task.batchTrigger`), and uses retry settings optimized for task execution.
## Auto-pagination
All list endpoints in the management API support auto-pagination.
You can use `for await … of` syntax to iterate through items across all pages:
```ts
import { runs } from "@trigger.dev/sdk/v3";
async function fetchAllRuns() {
const allRuns = [];
for await (const run of runs.list({ limit: 10 })) {
allRuns.push(run);
}
return allRuns;
}
```
You can also use helpers on the return value from any `list` method to get the next/previous page of results:
```ts
import { runs } from "@trigger.dev/sdk/v3";
async function main() {
let page = await runs.list({ limit: 10 });
for (const run of page.data) {
console.log(run);
}
while (page.hasNextPage()) {
page = await page.getNextPage();
// ... do something with the next page
}
}
```
## Advanced usage
### Accessing raw HTTP responses
All API methods return a `Promise` subclass `ApiPromise` that includes helpers for accessing the underlying HTTP response:
```ts
import { runs } from "@trigger.dev/sdk/v3";
async function main() {
const { data: run, response: raw } = await runs.retrieve("run_1234").withResponse();
console.log(raw.status);
console.log(raw.headers);
const response = await runs.retrieve("run_1234").asResponse(); // Returns a Response object
console.log(response.status);
console.log(response.headers);
}
```
# List runs
v3-openapi GET /api/v1/projects/{projectRef}/runs
List runs in a project, across multiple environments, using Personal Access Token auth. You can filter the runs by status, created at, task identifier, version, and more.
# Cancel run
v3-openapi POST /api/v2/runs/{runId}/cancel
Cancels an in-progress run. If the run is already completed, this will have no effect.
# List runs
v3-openapi GET /api/v1/runs
List runs in a specific environment. You can filter the runs by status, created at, task identifier, version, and more.
# Replay run
v3-openapi POST /api/v1/runs/{runId}/replay
Creates a new run with the same payload and options as the original run.
# Reschedule run
v3-openapi POST /api/v1/runs/{runId}/reschedule
Updates a delayed run with a new delay. Only valid when the run is in the DELAYED state.
# Retrieve run
v3-openapi GET /api/v3/runs/{runId}
Retrieve information about a run, including its status, payload, output, and attempts. If you authenticate with a Public API key, we will omit the payload and output fields for security reasons.
# Update metadata
v3-openapi PUT /api/v1/runs/{runId}/metadata
Update the metadata of a run.
# Activate Schedule
v3-openapi POST /api/v1/schedules/{schedule_id}/activate
Activate a schedule by its ID. This will only work on `IMPERATIVE` schedules that were created in the dashboard or using the imperative SDK functions like `schedules.create()`.
# Create Schedule
v3-openapi POST /api/v1/schedules
Create a new `IMPERATIVE` schedule based on the specified options.
# Deactivate Schedule
v3-openapi POST /api/v1/schedules/{schedule_id}/deactivate
Deactivate a schedule by its ID. This will only work on `IMPERATIVE` schedules that were created in the dashboard or using the imperative SDK functions like `schedules.create()`.
# Delete Schedule
v3-openapi DELETE /api/v1/schedules/{schedule_id}
Delete a schedule by its ID. This will only work on `IMPERATIVE` schedules that were created in the dashboard or using the imperative SDK functions like `schedules.create()`.
# List Schedules
v3-openapi GET /api/v1/schedules
List all schedules. You can also paginate the results.
# Retrieve Schedule
v3-openapi GET /api/v1/schedules/{schedule_id}
Get a schedule by its ID.
# Get timezones
v3-openapi GET /api/v1/timezones
Get all timezones supported by scheduled tasks.
# Update Schedule
v3-openapi PUT /api/v1/schedules/{schedule_id}
Update a schedule by its ID. This will only work on `IMPERATIVE` schedules that were created in the dashboard or using the imperative SDK functions like `schedules.create()`.
# Batch trigger
v3-openapi POST /api/v1/tasks/{taskIdentifier}/batch
Batch trigger a task with up to 100 payloads.
# Trigger
v3-openapi POST /api/v1/tasks/{taskIdentifier}/trigger
Trigger a task by its identifier.
# Contributing
You can contribute to Trigger.dev in many ways.
Go to our [GitHub repository](https://github.com/triggerdotdev/trigger.dev) and open an issue or a pull request. We are always looking for contributors to help us improve Trigger.dev. You can contribute in many ways, including:
* Reporting bugs
* Suggesting new features
* Writing documentation
* Writing code
* Reviewing code
* Translating the app
* Sharing the app with others
* Giving feedback
* And more!
# Self-hosting
You can self-host Trigger.dev on your own infrastructure.
Security, scaling, and reliability concerns are not fully addressed here. This guide is meant for evaluation purposes and won't result in a production-ready deployment. This guide is for Docker only. We don't currently provide documentation for Kubernetes.
## Overview
The self-hosting guide covers two alternative setups. The first option uses a simple setup where you run everything on one server. With the second option, the webapp and worker components are split on two separate machines.
You're going to need at least one Debian (or derivative) machine with Docker and Docker Compose installed. We'll also use Ngrok to expose the webapp to the internet.
## Support
It's dangerous to go alone! Join the self-hosting channel on our [Discord server](https://discord.gg/NQTxt5NA7s).
## Caveats
The v3 worker components don't have ARM support yet.
This guide outlines a quick way to start self-hosting Trigger.dev for evaluation purposes - it won't result in a production-ready deployment. Security, scaling, and reliability concerns are not fully addressed here.
As self-hosted deployments tend to have unique requirements and configurations, we don't provide specific advice for securing your deployment, scaling up, or improving reliability.
Should the burden ever get too much, we'd be happy to see you on [Trigger.dev cloud](https://trigger.dev/pricing) where we deal with these concerns for you.
* The [docker checkpoint](https://docs.docker.com/reference/cli/docker/checkpoint/) command is an experimental feature which may not work as expected. It won't be enabled by default. Instead, the containers will stay up and their processes frozen. They won't consume CPU but they *will* consume RAM.
* The `docker-provider` does not currently enforce any resource limits. This means your tasks can consume up to the total machine CPU and RAM. Having no limits may be preferable when self-hosting, but can impact the performance of other services.
* The worker components (not the tasks!) have direct access to the Docker socket. This means they can run any Docker command. To restrict access, you may want to consider using [Docker Socket Proxy](https://github.com/Tecnativa/docker-socket-proxy).
* The task containers are running with host networking. This means there is no network isolation between them and the host machine. They will be able to access any networked service on the host.
* There is currently no support for adding multiple worker machines, but we're working on it.
## Requirements
* 4 CPU
* 8 GB RAM
* Debian or derivative
* Optional: A separate machine for the worker components
You will also need a way to expose the webapp to the internet. This can be done with a reverse proxy, or with a service like Ngrok. We will be using the latter in this guide.
## Option 1: Single server
This is the simplest setup. You run everything on one server. It's a good option if you have spare capacity on an existing machine, and have no need to independently scale worker capacity.
### Server setup
Some very basic steps to get started:
1. [Install Docker](https://docs.docker.com/get-docker/)
2. [Install Docker Compose](https://docs.docker.com/compose/install/)
3. [Install Ngrok](https://ngrok.com/download)
```bash
# add ngrok repo
curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc | \
sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null && \
echo "deb https://ngrok-agent.s3.amazonaws.com buster main" | \
sudo tee /etc/apt/sources.list.d/ngrok.list
# add docker repo
curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc && \
sudo chmod a+r /etc/apt/keyrings/docker.asc && \
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# update and install
sudo apt-get update
sudo apt-get install -y \
docker.io \
docker-compose-plugin \
ngrok
```
### Trigger.dev setup
1. Clone the [Trigger.dev docker repository](https://github.com/triggerdotdev/docker)
```bash
git clone https://github.com/triggerdotdev/docker
cd docker
```
2. Run the start script and follow the prompts
```bash
./start.sh # hint: you can append -d to run in detached mode
```
#### Manual
Alternatively, you can follow these manual steps after cloning the docker repo:
1. Create the `.env` file
```bash
cp .env.example .env
```
2. Generate the required secrets
```bash
echo MAGIC_LINK_SECRET=$(openssl rand -hex 16)
echo SESSION_SECRET=$(openssl rand -hex 16)
echo ENCRYPTION_KEY=$(openssl rand -hex 16)
echo PROVIDER_SECRET=$(openssl rand -hex 32)
echo COORDINATOR_SECRET=$(openssl rand -hex 32)
```
3. Replace the default secrets in the `.env` file with the generated ones
4. Run docker compose to start the services
```bash
. lib.sh # source the helper function
docker_compose -p=trigger up
```
### Tunnelling
You will need to expose the webapp to the internet. You can use Ngrok for this. If you already have a working reverse proxy setup and a domain, you can skip to the last step.
1. Start Ngrok. You may get prompted to sign up - it's free.
```bash
./tunnel.sh
```
2. Copy the domain from the output, for example: `1234-42-42-42-42.ngrok-free.app`
3. Uncomment the `TRIGGER_PROTOCOL` and `TRIGGER_DOMAIN` lines in the `.env` file. Set the domain to the one you copied.
```bash
TRIGGER_PROTOCOL=https
TRIGGER_DOMAIN=1234-42-42-42-42.ngrok-free.app
```
4. Quit the start script and launch it again, or run this:
```bash
./stop.sh && ./start.sh
```
### Registry setup
If you want to deploy v3 projects, you will need access to a Docker registry. The [CLI deploy](/cli-deploy) command will push the images, and then the worker machine can pull them when needed. We will use Docker Hub as an example.
1. Sign up for a free account at [Docker Hub](https://hub.docker.com/)
2. Edit the `.env` file and add the registry details
```bash
DEPLOY_REGISTRY_HOST=docker.io
DEPLOY_REGISTRY_NAMESPACE=<your_username>
```
3. Log in to Docker Hub both locally and on your server. For the split setup, this will be the worker machine. You may want to create an [access token](https://hub.docker.com/settings/security) for this.
```bash
docker login -u <your_username> docker.io
```
4. Required on some systems: Run the login command inside the `docker-provider` container so it can pull deployment images to run your tasks.
```bash
docker exec -ti \
trigger-docker-provider-1 \
docker login -u <your_username> docker.io
```
5. Restart the services
```bash
./stop.sh && ./start.sh
```
6. You can now deploy v3 projects using the CLI with these flags:
```
npx trigger.dev@latest deploy --self-hosted --push
```
## Option 2: Split services
With this setup, the webapp will run on a different machine than the worker components. This allows independent scaling of your workload capacity.
### Webapp setup
All steps are the same as for a single server, except for the following:
1. **Startup.** Run the start script with the `webapp` argument
```bash
./start.sh webapp
```
2. **Tunnelling.** This is now *required*. Please follow the [tunnelling](/open-source-self-hosting#tunnelling) section.
### Worker setup
1. **Environment variables.** Copy your `.env` file from the webapp to the worker machine:
```bash
# an example using scp
scp -3 root@<webapp_machine>:docker/.env root@<worker_machine>:docker/.env
```
2. **Startup.** Run the start script with the `worker` argument
```bash
./start.sh worker
```
3. **Tunnelling.** This is *not* required for the worker components.
4. **Registry setup.** Follow the [registry setup](/open-source-self-hosting#registry-setup) section but run the last command on the worker machine - note the container name is different:
```bash
docker exec -ti \
trigger-worker-docker-provider-1 \
docker login -u <your_username> docker.io
```
## Additional features
### Large payloads
By default, payloads over 512KB will be offloaded to S3-compatible storage. If you don't provide the required env vars, runs with payloads larger than this will fail.
For example, using Cloudflare R2:
```bash
OBJECT_STORE_BASE_URL="https://<bucket>.<account_id>.r2.cloudflarestorage.com"
OBJECT_STORE_ACCESS_KEY_ID="<access_key_id>"
OBJECT_STORE_SECRET_ACCESS_KEY="<secret_access_key>"
```
Alternatively, you can increase the threshold:
```bash
# size in bytes, example with 5MB threshold
TASK_PAYLOAD_OFFLOAD_THRESHOLD=5242880
```
### Version locking
There are several reasons to lock the version of your Docker images:
* **Backwards compatibility.** We try our best to maintain compatibility with older CLI versions, but it's not always possible. If you don't want to update your CLI, you can lock your Docker images to that specific version.
* **Ensuring full feature support.** Sometimes, new CLI releases will also require new or updated platform features. Running unlocked images can make any issues difficult to debug. Using a specific tag can help here as well.
By default, the images will point at the latest versioned release via the `v3` tag. You can override this by specifying a different tag in your `.env` file. For example:
```bash
TRIGGER_IMAGE_TAG=v3.0.4
```
### Auth options
By default, magic link auth is the only login option. If the `RESEND_API_KEY` env var is not set, the magic links will be logged by the webapp container and not sent via email.
All email addresses can sign up and log in this way. If you would like to restrict this, you can use the `WHITELISTED_EMAILS` env var. For example:
```bash
# every email that does not match this regex will be rejected
WHITELISTED_EMAILS="authorized@yahoo\.com|authorized@gmail\.com"
```
It's currently impossible to restrict GitHub OAuth logins by account name or email like above, so this method is *not recommended* for self-hosted instances. It's also very easy to lock yourself out of your own instance.
Only enable GitHub auth if you understand the risks! We strongly advise you against this.
Your GitHub OAuth app needs a callback URL `https://<your_domain>/auth/github/callback` and you will have to set the following env vars:
```bash
AUTH_GITHUB_CLIENT_ID=<your_client_id>
AUTH_GITHUB_CLIENT_SECRET=<your_client_secret>
```
### Checkpoint support
This requires an *experimental Docker feature*. Successfully checkpointing a task today does not
mean you will be able to restore it tomorrow. Your data may be lost. You've been warned!
Checkpointing allows you to save the state of a running container to disk and restore it later. This can be useful for
long-running tasks that need to be paused and resumed without losing state. Think fan-out and fan-in, or long waits in email campaigns.
The checkpoints will be pushed to the same registry as the deployed images. Please see the [registry setup](#registry-setup) section for more information.
#### Requirements
* Debian, **NOT** a derivative like Ubuntu
* Additional storage space for the checkpointed containers
#### Setup
Under the hood this uses Checkpoint and Restore in Userspace, or [CRIU](https://github.com/checkpoint-restore/criu) for short. We'll have to do a few things to get this working:
1. Install CRIU
```bash
sudo apt-get update
sudo apt-get install criu
```
2. Tweak the config so we can successfully checkpoint our workloads
```bash
mkdir -p /etc/criu
cat << EOF >/etc/criu/runc.conf
tcp-close
EOF
```
3. Make sure everything works
```bash
sudo criu check
```
4. Enable Docker experimental features by adding the following to `/etc/docker/daemon.json`
```json
{
"experimental": true
}
```
5. Restart the Docker daemon
```bash
sudo systemctl restart docker
```
6. Uncomment `FORCE_CHECKPOINT_SIMULATION=0` in your `.env` file. Alternatively, run this:
```bash
echo "FORCE_CHECKPOINT_SIMULATION=0" >> .env
```
7. Restart the services
```bash
# if you're running everything on the same machine
./stop.sh && ./start.sh
# if you're running the worker on a different machine
./stop.sh worker && ./start.sh worker
```
## Updating
Once you have everything set up, you will periodically want to update your Docker images. You can easily do this by running the update script and restarting your services:
```bash
./update.sh
./stop.sh && ./start.sh
```
Sometimes, we will make more extensive changes that require pulling updated compose files, scripts, etc from our docker repo:
```bash
git pull
./stop.sh && ./start.sh
```
Occasionally, you may also have to update your `.env` file, but we will try to keep these changes to a minimum. Check the `.env.example` file for new variables.
### From beta
If you're coming from the beta CLI package images, you will need to:
* **Stash your changes.** If you made any changes, stash them with `git stash`.
* **Switch branches.** We moved back to main. Run `git checkout main` in your docker repo.
* **Pull in updates.** We've added a new container for [Electric](https://github.com/electric-sql/electric) and made some other improvements. Run `git pull` to get the latest updates.
* **Apply your changes.** If you stashed your changes, apply them with `git stash pop`.
* **Update your images.** We've also published new images. Run `./update.sh` to pull them.
* **Restart all services.** Run `./stop.sh && ./start.sh` and you're good to go.
In summary, run this wherever you cloned the docker repo:
```bash
# if you made changes
git stash
# switch to the main branch and pull the latest changes
git checkout main
git pull
# if you stashed your changes
git stash pop
# update and restart your services
./update.sh
./stop.sh && ./start.sh
```
## Troubleshooting
* **Deployment fails at the push step.** The machine running `deploy` needs registry access:
```bash
docker login -u <your_username>
# this should now succeed
npx trigger.dev@latest deploy --self-hosted --push
```
* **Prod runs fail to start.** The `docker-provider` needs registry access:
```bash
# single server? run this:
docker exec -ti \
trigger-docker-provider-1 \
docker login -u <your_username> docker.io
# split webapp and worker? run this on the worker:
docker exec -ti \
trigger-worker-docker-provider-1 \
docker login -u <your_username> docker.io
```
## CLI usage
This section highlights some of the CLI commands and options that are useful when self-hosting. Please check the [CLI reference](/cli-introduction) for more in-depth documentation.
### Login
To avoid being redirected to the [Trigger.dev Cloud](https://cloud.trigger.dev) login page when using the CLI, you can specify the URL of your self-hosted instance with the `--api-url` or `-a` flag. For example:
```bash
npx trigger.dev@latest login -a http://trigger.example.com
```
Once you've logged in, the CLI will remember your login details and you won't need to specify the URL again with other commands.
#### Custom profiles
You can specify a custom profile when logging in. This allows you to easily use the CLI with our cloud product and your self-hosted instance at the same time. For example:
```
npx trigger.dev@latest login -a http://trigger.example.com --profile my-profile
```
You can then use this profile with other commands:
```
npx trigger.dev@latest dev --profile my-profile
```
To list all your profiles, use the `list-profiles` command:
```
npx trigger.dev@latest list-profiles
```
#### Verify login
It can be useful to check you have successfully logged in to the correct instance. You can do this with the `whoami` command, which will also show the API URL:
```bash
npx trigger.dev@latest whoami
# with a custom profile
npx trigger.dev@latest whoami --profile my-profile
```
### Deploy
On [Trigger.dev Cloud](https://cloud.trigger.dev), we build deployments remotely and push those images for you. When self-hosting you will have to do that locally yourself. This can be done with the `--self-hosted` and `--push` flags. For example:
```
npx trigger.dev@latest deploy --self-hosted --push
```
### CI / GitHub Actions
When running the CLI in a CI environment, your login profiles won't be available. Instead, you can use the `TRIGGER_API_URL` and `TRIGGER_ACCESS_TOKEN` environment
variables to point at your self-hosted instance and authenticate.
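For example, a minimal sketch of a CI step (the URL is a placeholder for your own instance):
```bash
export TRIGGER_API_URL="https://trigger.example.com"
export TRIGGER_ACCESS_TOKEN="tr_pat_..." # your Personal Access Token
npx trigger.dev@latest deploy --self-hosted --push
```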
For more detailed instructions, see the [GitHub Actions guide](/github-actions).
## Telemetry
By default, the Trigger.dev webapp sends telemetry data to our servers. This data is used to improve the product and is not shared with third parties. If you would like to opt out of this, you can set the `TRIGGER_TELEMETRY_DISABLED` environment variable in your `.env` file. The value doesn't matter, it just can't be empty. For example:
```bash
TRIGGER_TELEMETRY_DISABLED=1
```
# Concurrency & Queues
Configure what you want to happen when there is more than one run at a time.
Controlling concurrency is useful when you have a task that can't be run concurrently, or when you want to limit the number of runs to avoid overloading a resource.
## One at a time
This task will only ever have a single run executing at a time. All other runs will be queued until the current run is complete.
```ts /trigger/one-at-a-time.ts
import { task } from "@trigger.dev/sdk/v3";
export const oneAtATime = task({
id: "one-at-a-time",
queue: {
concurrencyLimit: 1,
},
run: async (payload) => {
//...
},
});
```
## Parallelism
You can execute lots of tasks at once by combining high concurrency with [batch triggering](/triggering) (or just triggering in a loop).
```ts /trigger/parallelism.ts
import { task } from "@trigger.dev/sdk/v3";
export const parallelism = task({
id: "parallelism",
queue: {
concurrencyLimit: 100,
},
run: async (payload) => {
//...
},
});
```
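For example, a minimal sketch of batch triggering this task from your backend (the import path is illustrative):
```ts
import { parallelism } from "~/trigger/parallelism";
// One API call that creates 50 runs; up to 100 of them can execute at once
await parallelism.batchTrigger(Array.from({ length: 50 }, (_, i) => ({ payload: { index: i } })));
```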
Be careful with high concurrency. If you're doing API requests you might hit rate limits. If
you're hitting your database you might overload it.
Your organization has a maximum concurrency limit which depends on your plan. If you're a paying
customer you can request a higher limit by [contacting us](https://www.trigger.dev/contact).
## Defining a queue
As well as putting queue settings directly on a task, you can define a queue and reuse it across multiple tasks. This allows you to share the same concurrency limit:
```ts /trigger/queue.ts
import { queue, task } from "@trigger.dev/sdk/v3";
const myQueue = queue({
name: "my-queue",
concurrencyLimit: 1,
});
export const task1 = task({
id: "task-1",
queue: {
name: "my-queue",
},
run: async (payload: { message: string }) => {
// ...
},
});
export const task2 = task({
id: "task-2",
queue: {
name: "my-queue",
},
run: async (payload: { message: string }) => {
// ...
},
});
```
## Setting the concurrency when you trigger a run
When you trigger a task you can override the concurrency limit. This is really useful if you sometimes have high priority runs.
The task:
```ts /trigger/override-concurrency.ts
import { task } from "@trigger.dev/sdk/v3";
export const generatePullRequest = task({
id: "generate-pull-request",
queue: {
//normally when triggering this task it will be limited to 1 run at a time
concurrencyLimit: 1,
},
run: async (payload) => {
//todo generate a PR using OpenAI
},
});
```
Triggering from your backend and overriding the concurrency:
```ts app/api/push/route.ts
import { generatePullRequest } from "~/trigger/override-concurrency";
export async function POST(request: Request) {
const data = await request.json();
if (data.branch === "main") {
//trigger the task, with a different queue
const handle = await generatePullRequest.trigger(data, {
queue: {
//the "main-branch" queue will have a concurrency limit of 10
//this triggered run will use that queue
name: "main-branch",
concurrencyLimit: 10,
},
});
return Response.json(handle);
} else {
//triggered with the default (concurrency of 1)
const handle = await generatePullRequest.trigger(data);
return Response.json(handle);
}
}
```
## Concurrency keys and per-tenant queuing
If you're building an application where you want to run tasks for your users, you might want a separate queue for each of your users. (It doesn't have to be users, it can be any entity you want to separately limit the concurrency for.)
You can do this by using `concurrencyKey`. It creates a separate queue for each value of the key.
Your backend code:
```ts app/api/pr/route.ts
import { generatePullRequest } from "~/trigger/override-concurrency";
export async function POST(request: Request) {
const data = await request.json();
if (data.isFreeUser) {
//free users can only have 1 PR generated at a time
const handle = await generatePullRequest.trigger(data, {
queue: {
//every free user gets a queue with a concurrency limit of 1
name: "free-users",
concurrencyLimit: 1,
},
concurrencyKey: data.userId,
});
//return a success response with the handle
return Response.json(handle);
} else {
//trigger the task, with a different queue
const handle = await generatePullRequest.trigger(data, {
queue: {
//every paid user gets a queue with a concurrency limit of 10
name: "paid-users",
concurrencyLimit: 10,
},
concurrencyKey: data.userId,
});
//return a success response with the handle
return Response.json(handle);
}
}
```
# Quick start
How to get started in 3 minutes using the CLI and SDK.
In this guide we will:
1. Create a `trigger.config.ts` file and a `/trigger` directory with an example task.
2. Get you to run the task using the CLI.
3. Show you how to view the run logs for that task.
You can either:
* Use the [Trigger.dev Cloud](https://cloud.trigger.dev).
* Or [self-host](/open-source-self-hosting) the service.
Once you've created an account, follow the steps in the app to:
1. Complete your account details.
2. Create your first Organization and Project.
The easiest way to get started is to use the CLI. It will add Trigger.dev to your existing project, create a `/trigger` folder and give you an example task.
Run this command in the root of your project to get started:
```bash npm
npx trigger.dev@latest init
```
```bash pnpm
pnpm dlx trigger.dev@latest init
```
```bash yarn
yarn dlx trigger.dev@latest init
```
It will do a few things:
1. Log you into the CLI if you're not already logged in.
2. Create a `trigger.config.ts` file in the root of your project.
3. Ask where you'd like to create the `/trigger` directory.
4. Create the `/trigger` directory with an example task, `/trigger/example.[ts/js]`.
Install the "Hello World" example task when prompted. We'll use this task to test the setup.
The CLI `dev` command runs a server for your tasks. It watches for changes in your `/trigger` directory and communicates with the Trigger.dev platform to register your tasks, perform runs, and send data back and forth.
It can also update your `@trigger.dev/*` packages to prevent version mismatches and failed deploys. You will always be prompted first.
```bash npm
npx trigger.dev@latest dev
```
```bash pnpm
pnpm dlx trigger.dev@latest dev
```
```bash yarn
yarn dlx trigger.dev@latest dev
```
The CLI `dev` command spits out various useful URLs. Right now we want to visit the Test page.
You should see our Example task in the list. Select it. Most tasks have a "payload" which you enter in the JSON editor, but our example task doesn't need any input.
Press the "Run test" button.
![Test page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/test-page.png)
Congratulations! You should see the run page, which will live reload to show you the current state of the run.
![Run page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/run-page.png)
If you go back to your terminal you'll see that the dev command also shows the task status and links to the run log.
![Terminal showing completed run](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/terminal-completed-run.png)
## Next steps
* Learn how to trigger tasks from your code.
* Tasks are the core of Trigger.dev. Learn what they are and how to write them.
# Realtime overview
Using the Trigger.dev v3 realtime API
Trigger.dev Realtime is a set of APIs that allow you to subscribe to runs and get real-time updates on the run status. This is useful for monitoring runs, updating UIs, and building realtime dashboards.
## How it works
The Realtime API is built on top of [Electric SQL](https://electric-sql.com/), an open-source PostgreSQL syncing engine. The Trigger.dev API wraps Electric SQL and provides a simple API to subscribe to [runs](/runs) and get real-time updates.
## Usage
After you trigger a task, you can subscribe to the run using the `runs.subscribeToRun` function. This function returns an async iterator that you can use to get updates on the run status.
```ts
import { runs, tasks } from "@trigger.dev/sdk/v3";
// Somewhere in your backend code
async function myBackend() {
const handle = await tasks.trigger("my-task", { some: "data" });
for await (const run of runs.subscribeToRun(handle.id)) {
// This will log the run every time it changes
console.log(run);
}
}
```
Every time the run changes, the async iterator will yield the updated run. You can use this to update your UI, log the run status, or take any other action.
Alternatively, you can subscribe to changes to any run that includes a specific tag (or tags) using the `runs.subscribeToRunsWithTag` function.
```ts
import { runs } from "@trigger.dev/sdk/v3";
// Somewhere in your backend code
for await (const run of runs.subscribeToRunsWithTag("user:1234")) {
// This will log the run every time it changes, for all runs with the tag "user:1234"
console.log(run);
}
```
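Note that runs pick up tags when they're triggered. For example, a sketch of tagging a run so the subscription above receives its updates (see the [tags docs](/tags) for details):
```ts
import { tasks } from "@trigger.dev/sdk/v3";
// Tag the run at trigger time so subscribers to "user:1234" see its updates
await tasks.trigger("my-task", { some: "data" }, { tags: ["user:1234"] });
```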
If you've used `batchTrigger` to trigger multiple runs, you can also subscribe to changes to all the runs triggered in the batch using the `runs.subscribeToBatch` function.
```ts
import { runs } from "@trigger.dev/sdk/v3";
// Somewhere in your backend code
for await (const run of runs.subscribeToBatch("batch-id")) {
// This will log the run every time it changes, for all runs in the batch with the ID "batch-id"
console.log(run);
}
```
### React hooks
We also provide a set of React hooks that make it easy to use the Realtime API in your React components. See the [React hooks doc](/frontend/react-hooks) for more information.
## Run changes
You will receive updates whenever a run changes for the following reasons:
* The run moves to a new state. See our [run lifecycle docs](/runs#the-run-lifecycle) for more information.
* [Run tags](/tags) are added or removed.
* [Run metadata](/runs/metadata) is updated.
## Run object
The run object returned by the async iterator is NOT the same as the run object returned by the `runs.retrieve` function. This is because Electric SQL streams changes from a single PostgreSQL table, and the run object returned by `runs.retrieve` is a combination of multiple tables.
The run object returned by the async iterator has the following fields:
* `id`: The run ID.
* `taskIdentifier`: The task identifier.
* `payload`: The input payload for the run.
* `output`: The output result of the run.
* `createdAt`: Timestamp when the run was created.
* `updatedAt`: Timestamp when the run was last updated.
* `number`: Sequential number assigned to the run.
* `status`: Current status of the run. Possible values:
| Status | Description |
| -------------------- | --------------------------------------------------------------------------------------------------------- |
| `WAITING_FOR_DEPLOY` | Task hasn't been deployed yet but is waiting to be executed |
| `QUEUED` | Run is waiting to be executed by a worker |
| `EXECUTING` | Run is currently being executed by a worker |
| `REATTEMPTING` | Run has failed and is waiting to be retried |
| `FROZEN` | Run has been paused by the system, and will be resumed by the system |
| `COMPLETED` | Run has been completed successfully |
| `CANCELED` | Run has been canceled by the user |
| `FAILED` | Run has been completed with errors |
| `CRASHED` | Run has crashed and won't be retried, most likely the worker ran out of resources, e.g. memory or storage |
| `INTERRUPTED` | Run was interrupted during execution, mostly this happens in development environments |
| `SYSTEM_FAILURE` | Run has failed to complete, due to an error in the system |
| `DELAYED` | Run has been scheduled to run at a specific time |
| `EXPIRED` | Run has expired and won't be executed |
| `TIMED_OUT`          | Run has reached its maxDuration and has been stopped                                                        |
* `durationMs`: Duration of the run in milliseconds.
* `costInCents`: Total cost of the run in cents.
* `baseCostInCents`: Base cost of the run in cents before any additional charges.
* `tags`: Array of tags associated with the run.
* `idempotencyKey`: Key used to ensure idempotent execution.
* `expiredAt`: Timestamp when the run expired.
* `ttl`: Time-to-live duration for the run.
* `finishedAt`: Timestamp when the run finished.
* `startedAt`: Timestamp when the run started.
* `delayUntil`: Timestamp until which the run is delayed.
* `queuedAt`: Timestamp when the run was queued.
* `metadata`: Additional metadata associated with the run.
* `error`: Error information if the run failed.
* `isTest`: Indicates whether this is a test run.
## Type-safety
You can infer the types of the run's payload and output by passing the type of the task to the `subscribeToRun` function. This will give you type-safe access to the run's payload and output.
```ts
import { runs, tasks } from "@trigger.dev/sdk/v3";
import type { myTask } from "./trigger/my-task";
// Somewhere in your backend code
async function myBackend() {
const handle = await tasks.trigger<typeof myTask>("my-task", { some: "data" });
for await (const run of runs.subscribeToRun<typeof myTask>(handle.id)) {
// This will log the run every time it changes
console.log(run.payload.some);
if (run.output) {
// This will log the output if it exists
console.log(run.output.some);
}
}
}
```
When using `subscribeToRunsWithTag`, you can pass a union of task types for all the possible tasks that can have the tag.
```ts
import { runs } from "@trigger.dev/sdk/v3";
import type { myTask, myOtherTask } from "./trigger/my-task";
// Somewhere in your backend code
for await (const run of runs.subscribeToRunsWithTag<typeof myTask | typeof myOtherTask>("my-tag")) {
// You can narrow down the type based on the taskIdentifier
switch (run.taskIdentifier) {
case "my-task": {
console.log("Run output:", run.output.foo); // This will be type-safe
break;
}
case "my-other-task": {
console.log("Run output:", run.output.bar); // This will be type-safe
break;
}
}
}
```
## Run metadata
The run metadata API gives you the ability to add or update custom metadata on a run, which will cause the run to be updated. This allows you to extend the realtime API with custom data attached to a run that can be used for various purposes. Some common use cases include:
* Adding a link to a related resource
* Adding a reference to a user or organization
* Adding a custom status with progress information
See our [run metadata docs](/runs/metadata) for more on how to use this feature.
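For example, here's a rough sketch of a task reporting progress through metadata while a backend subscriber watches the updates (the task id, payload shape, and `progress` key are illustrative, not part of the API):

```ts
import { task, metadata, runs } from "@trigger.dev/sdk/v3";

// A task that reports progress via metadata. Each metadata.set() updates the
// run, so realtime subscribers receive a fresh run object with the new value.
export const importTask = task({
  id: "import-task", // hypothetical task id
  run: async (payload: { rows: string[] }) => {
    for (let i = 0; i < payload.rows.length; i++) {
      // ...process payload.rows[i] here...
      await metadata.set("progress", (i + 1) / payload.rows.length);
    }
  },
});

// Somewhere in your backend code
export async function watchImport(runId: string) {
  for await (const run of runs.subscribeToRun(runId)) {
    console.log("progress:", run.metadata?.progress);
  }
}
```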
### Using with Realtime & React hooks
We suggest combining run metadata with the Realtime API and our [React hooks](/frontend/react-hooks) to bridge the gap between your Trigger.dev tasks and your UI. This allows you to update your UI in real time as the run metadata changes. As a simple example, you could add a custom status with a progress value to a run, and update your UI based on that progress.
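As a hedged sketch of the React side (the component name and the numeric `progress` metadata key are illustrative, and assume the task writes that key as shown above):

```ts
import { useRealtimeRun } from "@trigger.dev/react-hooks";

// Hypothetical progress bar driven by the run's metadata.
export function ImportProgress({ runId }: { runId: string }) {
  const { run, error } = useRealtimeRun(runId);

  if (error) return <div>Error: {error.message}</div>;

  // Metadata values are JSON, so cast the assumed "progress" key to a number.
  const progress = (run?.metadata?.progress as number | undefined) ?? 0;

  return <progress value={progress} max={1} />;
}
```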
We have a full demo app repo available [here](https://github.com/triggerdotdev/nextjs-realtime-simple-demo).
## Limits
The Realtime API in the Trigger.dev Cloud limits the number of concurrent subscriptions, depending on your plan. If you exceed the limit, you will receive an error when trying to subscribe to a run. For more information, see our [pricing page](https://trigger.dev/pricing).
## Known issues
There is currently a known issue where the Realtime API does not work when subscribing to a run whose payload or output is large and therefore stored in the object store instead of the database. We are working on a fix for this issue: [https://github.com/triggerdotdev/trigger.dev/issues/1451](https://github.com/triggerdotdev/trigger.dev/issues/1451). As a workaround, keep payloads and outputs below 128KB when using the Realtime API.
# runs.subscribeToRun
Subscribes to all changes to a run.
```ts Example
import { runs } from "@trigger.dev/sdk/v3";
for await (const run of runs.subscribeToRun("run_1234")) {
console.log(run);
}
```
This function subscribes to all changes to a run. It returns an async iterator that yields the run object whenever the run is updated. The iterator will complete when the run is finished.
### Authentication
This function supports both server-side and client-side authentication. For server-side authentication, use your API key. For client-side authentication, you must generate a public access token with one of the following scopes:
* `read:runs`
* `read:runs:<run ID>` (e.g. `read:runs:run_1234` to scope the token to a specific run)
To generate a public access token, use the `auth.createPublicToken` function:
```ts
import { auth } from "@trigger.dev/sdk/v3";
// Somewhere in your backend code
const publicToken = await auth.createPublicToken({
scopes: {
read: {
runs: ["run_1234"],
},
},
});
```
### Response
The AsyncIterator yields an object with the following properties:
* `id`: The run ID.
* `taskIdentifier`: The task identifier.
* `payload`: The input payload for the run.
* `output`: The output result of the run.
* `createdAt`: Timestamp when the run was created.
* `updatedAt`: Timestamp when the run was last updated.
* `number`: Sequential number assigned to the run.
* `status`: Current status of the run. Possible values:
| Status | Description |
| -------------------- | --------------------------------------------------------------------------------------------------------- |
| `WAITING_FOR_DEPLOY` | Task hasn't been deployed yet but is waiting to be executed |
| `QUEUED` | Run is waiting to be executed by a worker |
| `EXECUTING` | Run is currently being executed by a worker |
| `REATTEMPTING` | Run has failed and is waiting to be retried |
| `FROZEN` | Run has been paused by the system, and will be resumed by the system |
| `COMPLETED` | Run has been completed successfully |
| `CANCELED` | Run has been canceled by the user |
| `FAILED` | Run has been completed with errors |
| `CRASHED` | Run has crashed and won't be retried, most likely the worker ran out of resources, e.g. memory or storage |
| `INTERRUPTED` | Run was interrupted during execution, mostly this happens in development environments |
| `SYSTEM_FAILURE` | Run has failed to complete, due to an error in the system |
| `DELAYED` | Run has been scheduled to run at a specific time |
| `EXPIRED` | Run has expired and won't be executed |
| `TIMED_OUT`          | Run has reached its maxDuration and has been stopped                                                        |
* `durationMs`: Duration of the run in milliseconds.
* `costInCents`: Total cost of the run in cents.
* `baseCostInCents`: Base cost of the run in cents before any additional charges.
* `tags`: Array of tags associated with the run.
* `idempotencyKey`: Key used to ensure idempotent execution.
* `expiredAt`: Timestamp when the run expired.
* `ttl`: Time-to-live duration for the run.
* `finishedAt`: Timestamp when the run finished.
* `startedAt`: Timestamp when the run started.
* `delayUntil`: Timestamp until which the run is delayed.
* `queuedAt`: Timestamp when the run was queued.
* `metadata`: Additional metadata associated with the run.
* `error`: Error information if the run failed.
* `isTest`: Indicates whether this is a test run.
# runs.subscribeToRunsWithTag
Subscribes to all changes to runs with a specific tag.
```ts Example
import { runs } from "@trigger.dev/sdk/v3";
for await (const run of runs.subscribeToRunsWithTag("user:1234")) {
console.log(run);
}
```
This function subscribes to all changes to runs with a specific tag. It returns an async iterator that yields the run object whenever a run with the specified tag is updated. This iterator will never complete, so you must manually break out of the loop when you no longer want to receive updates.
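For example, you might break out once you've seen a terminal status (a minimal sketch; the tag and exit condition are illustrative):

```ts
import { runs } from "@trigger.dev/sdk/v3";

// Stop listening once a run with this tag reaches a terminal status.
for await (const run of runs.subscribeToRunsWithTag("user:1234")) {
  console.log(run.status);

  if (run.status === "COMPLETED" || run.status === "FAILED") {
    break; // exiting the loop ends the subscription
  }
}
```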
### Authentication
This function supports both server-side and client-side authentication. For server-side authentication, use your API key. For client-side authentication, you must generate a public access token with one of the following scopes:
* `read:runs`
* `read:tags:<tag name>` (e.g. `read:tags:user:1234` to scope the token to a specific tag)
To generate a public access token, use the `auth.createPublicToken` function:
```ts
import { auth } from "@trigger.dev/sdk/v3";
// Somewhere in your backend code
const publicToken = await auth.createPublicToken({
scopes: {
read: {
tags: ["user:1234"],
},
},
});
```
### Response
The AsyncIterator yields an object with the following properties:
* `id`: The run ID.
* `taskIdentifier`: The task identifier.
* `payload`: The input payload for the run.
* `output`: The output result of the run.
* `createdAt`: Timestamp when the run was created.
* `updatedAt`: Timestamp when the run was last updated.
* `number`: Sequential number assigned to the run.
* `status`: Current status of the run. Possible values:
| Status | Description |
| -------------------- | --------------------------------------------------------------------------------------------------------- |
| `WAITING_FOR_DEPLOY` | Task hasn't been deployed yet but is waiting to be executed |
| `QUEUED` | Run is waiting to be executed by a worker |
| `EXECUTING` | Run is currently being executed by a worker |
| `REATTEMPTING` | Run has failed and is waiting to be retried |
| `FROZEN` | Run has been paused by the system, and will be resumed by the system |
| `COMPLETED` | Run has been completed successfully |
| `CANCELED` | Run has been canceled by the user |
| `FAILED` | Run has been completed with errors |
| `CRASHED` | Run has crashed and won't be retried, most likely the worker ran out of resources, e.g. memory or storage |
| `INTERRUPTED` | Run was interrupted during execution, mostly this happens in development environments |
| `SYSTEM_FAILURE` | Run has failed to complete, due to an error in the system |
| `DELAYED` | Run has been scheduled to run at a specific time |
| `EXPIRED` | Run has expired and won't be executed |
| `TIMED_OUT`          | Run has reached its maxDuration and has been stopped                                                        |
* `durationMs`: Duration of the run in milliseconds.
* `costInCents`: Total cost of the run in cents.
* `baseCostInCents`: Base cost of the run in cents before any additional charges.
* `tags`: Array of tags associated with the run.
* `idempotencyKey`: Key used to ensure idempotent execution.
* `expiredAt`: Timestamp when the run expired.
* `ttl`: Time-to-live duration for the run.
* `finishedAt`: Timestamp when the run finished.
* `startedAt`: Timestamp when the run started.
* `delayUntil`: Timestamp until which the run is delayed.
* `queuedAt`: Timestamp when the run was queued.
* `metadata`: Additional metadata associated with the run.
* `error`: Error information if the run failed.
* `isTest`: Indicates whether this is a test run.
# useRealtimeRun
Subscribes to all changes to a run in a React component.
```ts Example
import { useRealtimeRun } from "@trigger.dev/react-hooks";
import type { myTask } from "./trigger/tasks";
export function MyComponent({ runId }: { runId: string }) {
const { run, error } = useRealtimeRun<typeof myTask>(runId);
if (error) return <div>Error: {error.message}</div>;
return <div>Run: {run?.id}</div>;
}
```
This react hook subscribes to all changes to a run. See the [React hooks doc](/frontend/react-hooks) for more information on how to use this hook.
### Response
The react hook returns an object with the following properties:
The run object. See the [Run object doc](realtime/overview#run-object) for more information.
An error object if an error occurred while subscribing to a run.
# useRealtimeRunsWithTag
Subscribes to all changes to runs with a specific tag in a React component.
```ts Example
import { useRealtimeRunsWithTag } from "@trigger.dev/react-hooks";
import type { myTask, myOtherTask } from "./trigger/tasks";
export function MyComponent({ tag }: { tag: string }) {
const { runs, error } = useRealtimeRunsWithTag<typeof myTask | typeof myOtherTask>(tag);
if (error) return <div>Error: {error.message}</div>;
return (
<div>
{runs.map((run) => (
<div key={run.id}>Run: {run.id}</div>
))}
</div>
);
}
```
This react hook subscribes to all changes to runs with a specific tag. See the [React hooks doc](/frontend/react-hooks) for more information on how to use this hook.
### Response
The react hook returns an object with the following properties:
An array of run objects. See the [Run object doc](/realtime/overview#run-object) for more
information.
An error object if an error occurred while subscribing.
# Replaying
A replay is a copy of a run with the same payload but against the latest version in that environment. This is useful if something went wrong and you want to try again with the latest version of your code.
### Replaying from the UI
![Select a task, then in the bottom right
click "Replay"](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/replay-run-action.png)
You can edit the payload (if available) and choose the environment to replay the run in.
![Select a task, then in the bottom right
click "Replay"](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/replay-run-modal.png)
![On the runs page, press the triple dot button](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/replay-runs-list.png)
![Click replay](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/replay-runs-list-popover.png)
### Replaying using the SDK
You can replay a run using the SDK:
```ts
import { runs } from "@trigger.dev/sdk/v3";

const replayedRun = await runs.replay(run.id);
```
When you call `trigger()` or `batchTrigger()` on a task you receive back a run handle which has an `id` property. You can use that `id` to replay the run.
You can also access the run id from inside a run. You could write this to your database and then replay it later.
```ts
import { task } from "@trigger.dev/sdk/v3";

export const simpleChildTask = task({
id: "simple-child-task",
run: async (payload, { ctx }) => {
// the run ID (and other useful info) is in ctx
const runId = ctx.run.id;
},
});
```
### Bulk replaying
You can replay multiple runs at once by selecting them from the table on the Runs page using the checkbox on the left hand side of the row. Then click the "Replay runs" button from the bulk action bar that appears at the bottom of the screen.
This is especially useful if you have lots of failed runs and want to run them all again. To do this, first filter the runs by the status you want, then select all the runs you want to replay and click the "Replay runs" button from the bulk action bar at the bottom of the page.
# Request a feature
If you have a feature request or idea for Trigger, we'd love to hear it! You can submit your ideas on our [public roadmap](https://feedback.trigger.dev/). We're always looking for feedback on what to build next, so feel free to submit your ideas or vote on existing ones.
# Roadmap
See what's coming up next on our [public roadmap](https://feedback.trigger.dev/roadmap). We're always looking for feedback on what to build next, so feel free to submit your ideas or vote on existing ones.
# Run tests
You can use the dashboard to run a test of your tasks.
From the "Test" page in the sidebar of the dashboard you can run a test for any of your tasks, that includes for any environment.
![Select an environment](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/test-select-environment.png)
![Select the task to test](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/test-select-task.png)
Select a recent payload as a starting point or write one from scratch. Payloads must be valid JSON; you will see helpful errors if they're not. Press the "Run test" button or use the keyboard shortcut to run the test.
![Enter a payload](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/test-set-payload.png)
![View the run live](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/run-in-progress.png)
# Usage
Get compute duration and cost from inside a run, or for a specific block of code.
## Getting the run cost and duration
You can get the cost and duration of the current run, including any previous attempts (retries) of the same run.
```ts
import { task, usage, wait } from "@trigger.dev/sdk/v3";

export const heavyTask = task({
id: "heavy-task",
machine: {
preset: "medium-2x",
},
run: async (payload, { ctx }) => {
// Do some compute
const result = await convertVideo(payload.videoUrl);
// Get the current cost and duration up until this line of code
// This includes the compute time of the previous lines
let currentUsage = usage.getCurrent();
/* currentUsage = {
compute: {
attempt: {
costInCents: 0.01700,
durationMs: 1000,
},
total: {
costInCents: 0.0255,
durationMs: 1500,
},
},
baseCostInCents: 0.0025,
totalCostInCents: 0.028,
}
*/
// In the cloud product we do not count waits towards the compute cost or duration.
// We also don't include time between attempts or before the run starts executing your code.
// So this line does not affect the cost or duration.
await wait.for({ seconds: 5 });
// This will give the same result as before the wait.
currentUsage = usage.getCurrent();
// Do more compute
const result2 = await convertVideo(payload.videoUrl);
// This would give a different value
currentUsage = usage.getCurrent();
},
});
```
In Trigger.dev Cloud we do not count waits, time between attempts, or time before your code starts executing towards the compute cost or duration.
## Getting the run cost and duration from your backend
You can use [runs.retrieve()](/management/runs/retrieve) to get a single run or [runs.list()](/management/runs/list) to get a list of runs. The response will include `costInCents`, `baseCostInCents`, and `durationMs` fields.
```ts single run
import { runs } from "@trigger.dev/sdk/v3";
const run = await runs.retrieve("run-id");
console.log(run.costInCents, run.baseCostInCents, run.durationMs);
const totalCost = run.costInCents + run.baseCostInCents;
```
```ts multiple runs
import { runs } from "@trigger.dev/sdk/v3";
let totalCost = 0;
for await (const run of runs.list({ tag: "user_123456" })) {
totalCost += run.costInCents + run.baseCostInCents;
console.log(run.costInCents, run.baseCostInCents, run.durationMs);
}
console.log("Total cost", totalCost);
```
## Getting the cost and duration of a block of code
You can also wrap code with `usage.measure` to get the cost and duration of that block of code:
```ts
import { logger, usage } from "@trigger.dev/sdk/v3";

// Inside a task run function, or inside a function that's called from there.
const { result, compute } = await usage.measure(async () => {
//...Do something for 1 second
return {
foo: "bar",
};
});
logger.info("Result", { result, compute });
/* result = {
foo: "bar"
}
compute = {
costInCents: 0.01700,
durationMs: 1000,
}
*/
```
This will work from inside the `run` function, our lifecycle hooks (like `onStart`, `onFailure`, `onSuccess`, etc.), or any function you're calling from the `run` function. It won't work for code that's not executed using Trigger.dev.
# Runs
Understanding the lifecycle of task run execution in Trigger.dev
In Trigger.dev, the concepts of runs and attempts are fundamental to understanding how tasks are executed and managed. This article explains these concepts in detail and provides insights into the various states a run can go through during its lifecycle.
## What are runs?
A run is created when you trigger a task (e.g. calling `yourTask.trigger({ foo: "bar" })`). It represents a single instance of a task being executed and contains the following key information:
* A unique run ID
* The current status of the run
* The payload (input data) for the task
* Lots of other metadata
## The run lifecycle
A run can go through **various** states during its lifecycle. The following diagram illustrates a typical state transition where a single run is triggered and completes successfully:
![Run Lifecycle](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/run-lifecycle.png)
Runs can also find themselves in lots of other states depending on what's happening at any given time. The following sections describe all the possible states in more detail.
### Initial States
* **Waiting for deploy**: If a task is triggered before it has been deployed, the run enters this state and waits for the task to be deployed.
* **Delayed**: When a run is triggered with a delay, it enters this state until the specified delay period has passed.
* **Queued**: The run is ready to be executed and is waiting in the queue.
### Execution States
* **Executing**: The task is currently running.
* **Reattempting**: The task has failed and is being retried.
* **Frozen**: The task has been frozen and is waiting to be resumed.
### Final States
* **Completed**: The task has successfully finished execution.
* **Canceled**: The run was manually canceled by the user.
* **Failed**: The task has failed to complete successfully.
* **Timed out**: The task has failed because it exceeded its `maxDuration`.
* **Crashed**: The worker process crashed during execution (likely due to an Out of Memory error).
* **Interrupted**: In development mode, when the CLI is disconnected.
* **System failure**: An unrecoverable system error has occurred.
* **Expired**: The run's Time-to-Live (TTL) has passed before it could start executing.
## Attempts
An attempt represents a single execution of a task within a run. A run can have one or more attempts, depending on the task's retry settings and whether it fails. Each attempt has:
* A unique attempt ID
* A status
* An output (if successful) or an error (if failed)
When a task fails, it will be retried according to its retry settings, creating new attempts until it either succeeds or reaches the retry limit.
![Run with retries](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/run-with-retries.png)
## Run completion
A run is considered finished when:
1. The last attempt succeeds, or
2. The task has reached its retry limit and all attempts have failed
At this point, the run will have either an output (if successful) or an error (if failed).
## Advanced run features
### Idempotency Keys
When triggering a task, you can provide an idempotency key to ensure the task is executed only once, even if triggered multiple times. This is useful for preventing duplicate executions in distributed systems.
```ts
await yourTask.trigger({ foo: "bar" }, { idempotencyKey: "unique-key" });
```
* If a run with the same idempotency key is already in progress, the new trigger will be ignored.
* If the run has already finished, the previous output or error will be returned.
### Canceling runs
You can cancel an in-progress run using the API or the dashboard:
```ts
import { runs } from "@trigger.dev/sdk/v3";

await runs.cancel(runId);
```
When a run is canceled:
* The task execution is stopped
* The run is marked as canceled
* The task will not be retried
* Any in-progress child runs are also canceled
### Time-to-live (TTL)
You can set a TTL when triggering a run:
```ts
yourTask.trigger({ foo: "bar" }, { ttl: "10m" });
```
If the run hasn't started within the specified TTL, it will automatically expire. This is useful for time-sensitive tasks. Note that dev runs automatically have a 10-minute TTL.
![Run with TTL](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/run-with-ttl.png)
### Delayed runs
You can schedule a run to start after a specified delay:
```ts
yourTask.trigger({ foo: "bar" }, { delay: "1h" });
```
This is useful for tasks that need to be executed at a specific time in the future.
![Run with delay](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/run-with-delay.png)
### Replaying runs
You can create a new run with the same payload as a previous run:
```ts
await runs.replay(runId);
```
This is useful for re-running a task with the same input, especially for debugging or recovering from failures. The new run will use the latest version of the task.
You can also replay runs from the dashboard using the same or different payload. Learn how to do this [here](/replaying).
### Waiting for runs
#### triggerAndWait()
The `triggerAndWait()` function triggers a task and then lets you wait for the result before continuing. [Learn more about triggerAndWait()](/triggering#yourtask-triggerandwait).
![Run with triggerAndWait](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/run-with-triggerAndWait\(\).png)
#### batchTriggerAndWait()
Similar to `triggerAndWait()`, the `batchTriggerAndWait()` function lets you batch trigger a task and wait for all the results [Learn more about batchTriggerAndWait()](/triggering#yourtask-batchtriggerandwait).
![Run with batchTriggerAndWait](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/run-with-batchTriggerAndWait\(\).png)
### Runs API
The runs API provides methods to interact with and manage runs:
```ts
import { runs } from "@trigger.dev/sdk/v3";

// List all runs
await runs.list();
// Get a specific run by ID
await runs.retrieve(runId);
// Replay a run
await runs.replay(runId);
// Reschedule a run
await runs.reschedule(runId, delay);
// Cancel a run
await runs.cancel(runId);
```
These methods allow you to access detailed information about runs and their attempts, including payloads, outputs, parent runs, and child runs.
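For example, a minimal sketch using `runs.retrieve()` and the run fields documented above:

```ts
import { runs } from "@trigger.dev/sdk/v3";

const run = await runs.retrieve("run_1234");

console.log(run.status, run.durationMs); // current status and duration
console.log(run.output); // the task's return value, once it has succeeded
console.log(run.tags, run.metadata); // tags and metadata attached to the run
```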
### Real-time updates
You can subscribe to run updates in real-time using the `subscribeToRun()` function:
```ts
for await (const run of runs.subscribeToRun(runId)) {
console.log(run);
}
```
For more on real-time updates, see the [Realtime](/realtime) documentation.
### Triggering runs for undeployed tasks
It's possible to trigger a run for a task that hasn't been deployed yet. The run will enter the "Waiting for deploy" state until the task is deployed. Once deployed, the run will be queued and executed normally.
This feature is particularly useful in CI/CD pipelines where you want to trigger tasks before the deployment is complete.
# Max duration
Set a maximum duration for a task to run.
By default tasks can execute indefinitely, which can be great! But you also might want to set a `maxDuration` to prevent a task from running too long. You can set the `maxDuration` for a run in the following ways:
* Across all your tasks in the [config](/config/config-file#max-duration)
* On a specific task
* On a specific run when you [trigger a task](/triggering#maxduration)
## How it works
The `maxDuration` is set in seconds, and is compared to the CPU time elapsed since the start of a single execution (which we call attempts) of the task. The CPU time is the time that the task has been actively running on the CPU, and does not include time spent waiting during the following:
* `wait.for` calls
* `triggerAndWait` calls
* `batchTriggerAndWait` calls
You can inspect the CPU time of a task inside the run function with our `usage` utility:
```ts /trigger/max-duration.ts
import { task, usage } from "@trigger.dev/sdk/v3";
export const maxDurationTask = task({
id: "max-duration-task",
maxDuration: 300, // 300 seconds or 5 minutes
run: async (payload: any, { ctx }) => {
let currentUsage = usage.getCurrent();
currentUsage.attempt.durationMs; // The CPU time in milliseconds since the start of the run
},
});
```
The above value will be compared to the `maxDuration` you set. If the task exceeds the `maxDuration`, it will be stopped with the following error:
![Max duration error](https://mintlify.s3-us-west-1.amazonaws.com/trigger/runs/max-duration-error.png)
The minimum maxDuration is 5 seconds. The maximum is ~68 years.
## Configuring a default max duration
You can set a default `maxDuration` for all tasks in your [config file](/config/config-file#max-duration). This will apply to all tasks unless you override it on a specific task or run.
```ts /config/default-max-duration.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
//Your project ref (you can see it on the Project settings page in the dashboard)
project: "proj_gtcwttqhhtlasxgfuhxs",
maxDuration: 60, // 60 seconds or 1 minute
});
```
## Configuring for a task
You can set a `maxDuration` on a specific task:
```ts /trigger/max-duration-task.ts
import { task } from "@trigger.dev/sdk/v3";
export const maxDurationTask = task({
id: "max-duration-task",
maxDuration: 300, // 300 seconds or 5 minutes
run: async (payload: any, { ctx }) => {
//...
},
});
```
This will override the default `maxDuration` set in the config file. If you have a config file with a default `maxDuration` of 60 seconds, and you set a `maxDuration` of 300 seconds on a task, the task will be allowed to run for up to 300 seconds.
You can "turn off" the Max duration set in your config file for a specific task like so:
```ts /trigger/max-duration-task.ts
import { task, timeout } from "@trigger.dev/sdk/v3";
export const maxDurationTask = task({
id: "max-duration-task",
maxDuration: timeout.None, // No max duration
run: async (payload: any, { ctx }) => {
//...
},
});
```
## Configuring for a run
You can set a `maxDuration` on a specific run when you trigger a task:
```ts /trigger/max-duration.ts
import { maxDurationTask } from "./trigger/max-duration-task";
// Trigger the task with a maxDuration of 300 seconds
const run = await maxDurationTask.trigger(
{ foo: "bar" },
{
maxDuration: 300, // 300 seconds or 5 minutes
}
);
```
You can also set the `maxDuration` to `timeout.None` to turn off the max duration for a specific run:
```ts /trigger/max-duration.ts
import { maxDurationTask } from "./trigger/max-duration-task";
import { timeout } from "@trigger.dev/sdk/v3";
// Trigger the task with no maxDuration
const run = await maxDurationTask.trigger(
{ foo: "bar" },
{
maxDuration: timeout.None, // No max duration
}
);
```
## maxDuration in run context
You can access the `maxDuration` set for a run in the run context:
```ts /trigger/max-duration-task.ts
import { task } from "@trigger.dev/sdk/v3";
export const maxDurationTask = task({
id: "max-duration-task",
maxDuration: 300, // 300 seconds or 5 minutes
run: async (payload: any, { ctx }) => {
console.log(ctx.run.maxDuration); // 300
},
});
```
## maxDuration and lifecycle functions
When a task run exceeds the `maxDuration`, the lifecycle functions `cleanup`, `onSuccess`, and `onFailure` will not be called.
# Run metadata
Attach a small amount of data to a run and update it as the run progresses.
You can attach up to 4KB (4,096 bytes) of metadata to a run, which you can then access from inside the run function, via the API, and in the dashboard. You can use metadata to store additional, structured information on a run. For example, you could store your user's full name and corresponding unique identifier from your system on every task that is associated with that user.
## Usage
Add metadata to a run by passing it as an object to the `trigger` function:
```ts
const handle = await myTask.trigger(
{ message: "hello world" },
{ metadata: { user: { name: "Eric", id: "user_1234" } } }
);
```
Then inside your run function, you can access the metadata like this:
```ts
import { task, metadata } from "@trigger.dev/sdk/v3";
export const myTask = task({
id: "my-task",
run: async (payload: { message: string }) => {
const user = metadata.get("user");
console.log(user.name); // "Eric"
console.log(user.id); // "user_1234"
},
});
```
You can also update the metadata during the run:
```ts
import { task, metadata } from "@trigger.dev/sdk/v3";
export const myTask = task({
id: "my-task",
run: async (payload: { message: string }) => {
// Do some work
await metadata.set("progress", 0.1);
// Do some more work
await metadata.set("progress", 0.5);
// Do even more work
await metadata.set("progress", 1.0);
},
});
```
You can get the current metadata at any time by calling `metadata.get()` or `metadata.current()` (again, only inside a run):
```ts
import { task, metadata } from "@trigger.dev/sdk/v3";
export const myTask = task({
id: "my-task",
run: async (payload: { message: string }) => {
// Get the whole metadata object
const currentMetadata = metadata.current();
console.log(currentMetadata);
// Get a specific key
const user = metadata.get("user");
console.log(user.name); // "Eric"
},
});
```
You can update metadata inside a run using `metadata.set()`, `metadata.save()`, or `metadata.del()`:
```ts
import { task, metadata } from "@trigger.dev/sdk/v3";
export const myTask = task({
id: "my-task",
run: async (payload: { message: string }) => {
// Set a key
await metadata.set("progress", 0.5);
// Update the entire metadata object
await metadata.save({ progress: 0.6 });
// Delete a key
await metadata.del("progress");
},
});
```
Any of these methods can be called anywhere "inside" the run function, or a function called from the run function:
```ts
import { task, metadata } from "@trigger.dev/sdk/v3";
export const myTask = task({
id: "my-task",
run: async (payload: { message: string }) => {
await doSomeWork();
},
});
async function doSomeWork() {
await metadata.set("progress", 0.5);
}
```
If you call any of the metadata methods outside of the run function, they will have no effect:
```ts
import { metadata } from "@trigger.dev/sdk/v3";
// Somewhere outside of the run function
async function doSomeWork() {
await metadata.set("progress", 0.5); // This will do nothing
}
```
This means it's safe to call these methods anywhere in your code, and they will only have an effect when called inside the run function.
Calling `metadata.current()` or `metadata.get()` outside of the run function will always return
undefined.
These methods also work inside any task lifecycle hook, either attached to the specific task or the global hooks defined in your `trigger.config.ts` file.
```ts myTasks.ts
import { task, metadata } from "@trigger.dev/sdk/v3";
export const myTask = task({
id: "my-task",
run: async (payload: { message: string }) => {
// Your run function work here
},
onStart: async () => {
await metadata.set("progress", 0.5);
},
onSuccess: async () => {
await metadata.set("progress", 1.0);
},
});
```
```ts trigger.config.ts
import { defineConfig, metadata } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "proj_1234",
onStart: async () => {
await metadata.set("progress", 0.5);
},
});
```
## Metadata propagation
Metadata is NOT propagated to child tasks. If you want to pass metadata to a child task, you must do so explicitly:
```ts
import { task, metadata } from "@trigger.dev/sdk/v3";
export const myTask = task({
id: "my-task",
run: async (payload: { message: string }) => {
await metadata.set("progress", 0.5);
await childTask.trigger(payload, { metadata: metadata.current() });
},
});
```
## Type-safe metadata
The metadata APIs are currently loosely typed, accepting any object that is JSON-serializable:
```ts
// ❌ You can't pass a top-level array
const handle = await myTask.trigger(
{ message: "hello world" },
{ metadata: [{ user: { name: "Eric", id: "user_1234" } }] }
);
// ❌ You can't pass a string as the entire metadata:
const handle = await myTask.trigger(
{ message: "hello world" },
{ metadata: "this is the metadata" }
);
// ❌ You can't pass in a function or a class instance
const handle = await myTask.trigger(
{ message: "hello world" },
{ metadata: { user: () => "Eric", classInstance: new HelloWorld() } }
);
// ✅ You can pass in dates and other JSON-serializable objects
const handle = await myTask.trigger(
{ message: "hello world" },
{ metadata: { user: { name: "Eric", id: "user_1234" }, date: new Date() } }
);
```
If you pass in an object like a Date, it will be serialized to a string when stored in the
metadata. That also means that when you retrieve it using `metadata.get()` or
`metadata.current()`, you will get a string back. You will need to deserialize it back to a Date
object if you need to use it as a Date.
We recommend wrapping the metadata API in a [Zod](https://zod.dev) schema (or your validator library of choice) to provide type safety:
```ts
import { task, metadata } from "@trigger.dev/sdk/v3";
import { z } from "zod";
const Metadata = z.object({
user: z.object({
name: z.string(),
id: z.string(),
}),
date: z.coerce.date(), // Coerce the date string back to a Date object
});
type Metadata = z.infer<typeof Metadata>;
// Helper function to get the metadata object in a type-safe way
// Note: you would probably want to use .safeParse instead of .parse in a real-world scenario
function getMetadata() {
return Metadata.parse(metadata.current());
}
export const myTask = task({
id: "my-task",
run: async (payload: { message: string }) => {
const currentMetadata = getMetadata();
console.log(currentMetadata.user.name); // "Eric"
console.log(currentMetadata.user.id); // "user_1234"
console.log(currentMetadata.date); // Date object
},
});
```
## Inspecting metadata
### Dashboard
You can view the metadata for a run in the Trigger.dev dashboard. The metadata will be displayed in the run details view:
![View run metadata dashboard](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/run-metadata.png)
### API
You can use the `runs.retrieve()` SDK function to get the metadata for a run:
```ts
import { runs } from "@trigger.dev/sdk/v3";
const run = await runs.retrieve("run_1234");
console.log(run.metadata);
```
See the [API reference](/management/runs/retrieve) for more information.
## Size limit
The maximum size of the metadata object is 4KB. If you exceed this limit, the SDK will throw an error. If you are self-hosting Trigger.dev, you can increase this limit by setting the `TASK_RUN_METADATA_MAXIMUM_SIZE` environment variable. For example, to increase the limit to 16KB, you would set `TASK_RUN_METADATA_MAXIMUM_SIZE=16384`.
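For example, in your self-hosted instance's environment:

```bash .env
# Self-hosted only: raise the run metadata limit from 4KB to 16KB
TASK_RUN_METADATA_MAXIMUM_SIZE=16384
```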
# Tags
Tags allow you to easily filter runs in the dashboard and when using the SDK.
## What are tags?
We support up to 5 tags per run. Each one must be a string between 1 and 64 characters long.
We recommend prefixing your tags with their type and then an underscore or colon. For example, `user_123456` or `video:123`.
Many great APIs, like Stripe, already prefix their IDs with the type and an underscore. Like
`cus_123456` for a customer.
We don't enforce prefixes but if you use them you'll find it easier to filter and it will be clearer what the tag represents.
## How to add tags
There are two ways to add tags to a run:
1. When triggering the run.
2. Inside the `run` function, using `tags.add()`.
### 1. Adding tags when triggering the run
You can add tags when triggering a run using the `tags` option. All the different [trigger](/triggering) methods support this.
```ts trigger
const handle = await myTask.trigger(
{ message: "hello world" },
{ tags: ["user_123456", "org_abcdefg"] }
);
```
```ts batchTrigger
const batch = await myTask.batchTrigger([
{
payload: { message: "foo" },
options: { tags: "product_123456" },
},
{
payload: { message: "bar" },
options: { tags: ["user_123456", "product_3456789"] },
},
]);
```
This will create a run with the tags `user_123456` and `org_abcdefg`. They look like this in the runs table:
![How tags appear in the dashboard](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/tags-org-user.png)
### 2. Adding tags inside the `run` function
Use the `tags.add()` function to add tags to a run from inside the `run` function. This will add the tag `product_1234567` to the run:
```ts
import { task, tags, logger } from "@trigger.dev/sdk/v3";
export const myTask = task({
id: "my-task",
run: async (payload: { message: string }, { ctx }) => {
// Get the tags from when the run was triggered using the context
// This is not updated if you add tags during the run
logger.log("Tags from the run context", { tags: ctx.run.tags });
// Add tags during the run (a single string or array of strings)
await tags.add("product_1234567");
},
});
```
Reminder: you can only have up to 5 tags per run. If you call `tags.add()` and the total number of tags would exceed 5, we log an error and ignore the new tags. This limit includes tags added when triggering and tags added inside the run function.
### Propagating tags to child runs
Tags do not propagate to child runs automatically. By default runs have no tags and you have to set them explicitly.
It's easy to propagate tags if you want:
```ts
import { task } from "@trigger.dev/sdk/v3";

export const myTask = task({
id: "my-task",
run: async (payload: Payload, { ctx }) => {
// Pass the tags from ctx into the child run
const { id } = await otherTask.trigger(
{ message: "triggered from myTask" },
{ tags: ctx.run.tags }
);
},
});
```
## Filtering runs by tags
You can filter runs by tags in the dashboard and in the SDK.
### In the dashboard
On the Runs page open the filter menu, choose "Tags" and then start typing in the name of the tag you want to filter by. You can select it and it will restrict the results to only runs with that tag. You can add multiple tags to filter by more than one.
![Filter by tags](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/tags-filtering.png)
### Using `runs.list()`
You can provide filters to the `runs.list` SDK function, including an array of tags.
```ts
import { runs } from "@trigger.dev/sdk/v3";
// Loop through all runs with the tag "user_123456" that have completed
for await (const run of runs.list({ tag: "user_123456", status: ["COMPLETED"] })) {
console.log(run.id, run.taskIdentifier, run.finishedAt, run.tags);
}
```
# Tasks: Overview
Tasks are functions that can run for a long time and provide strong resilience to failure.
There are different types of tasks including regular tasks and [scheduled tasks](/tasks/scheduled).
## Hello world task and how to trigger it
Here's an incredibly simple task:
```ts /trigger/hello-world.ts
import { task } from "@trigger.dev/sdk/v3";
//1. You need to export each task, even if it's a subtask
export const helloWorld = task({
//2. Use a unique id for each task
id: "hello-world",
//3. The run function is the main function of the task
run: async (payload: { message: string }) => {
//4. You can write code that runs for a long time here, there are no timeouts
console.log(payload.message);
},
});
```
You must `export` each task, even subtasks inside the same file. Exporting a task makes it accessible so its configuration can be registered with the platform.
You can trigger this in two ways:
1. From the dashboard [using the "Test" feature](/run-tests).
2. Trigger it from your backend code. See the [full triggering guide here](/triggering).
Here's how to trigger a single run from elsewhere in your code:
```ts Your backend code
import { helloWorld } from "./trigger/hello-world";
async function triggerHelloWorld() {
//This triggers the task and returns a handle
const handle = await helloWorld.trigger({ message: "Hello world!" });
//You can use the handle to check the status of the task, cancel and retry it.
console.log("Task is running with handle", handle.id);
}
```
You can also [trigger a task from another task](/triggering), and wait for the result.
## Defining a `task`
The task function takes an object with the following fields.
### The `id` field
This is used to identify your task so it can be triggered and managed, and so you can view its runs in the dashboard. This must be unique in your project; we recommend making it descriptive.
### The `run` function
Your custom code inside `run()` will be executed when your task is triggered. It's an async function that has two arguments:
1. The run payload - the data that you pass to the task when you trigger it.
2. An object with `ctx` about the run (Context), and any output from the optional `init` function that runs before every run attempt.
Anything you return from the `run` function will be the result of the task. Data you return must be JSON serializable: strings, numbers, booleans, arrays, objects, and null.
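For example, a task that returns a JSON-serializable object as its output (the task id and payload shape are illustrative):

```ts
import { task } from "@trigger.dev/sdk/v3";

export const summarizeText = task({
  id: "summarize-text", // hypothetical task id
  run: async (payload: { text: string }) => {
    // The returned object is JSON-serializable, so it becomes the run's output
    return {
      characters: payload.text.length,
      words: payload.text.split(/\s+/).filter(Boolean).length,
    };
  },
});
```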
### `retry` options
A task is retried if an error is thrown; by default we retry 3 times.
You can set the number of retries and the delay between retries in the `retry` field:
```ts /trigger/retry.ts
import { task } from "@trigger.dev/sdk/v3";

export const taskWithRetries = task({
id: "task-with-retries",
retry: {
maxAttempts: 10,
factor: 1.8,
minTimeoutInMs: 500,
maxTimeoutInMs: 30_000,
randomize: false,
},
run: async (payload: any, { ctx }) => {
//...
},
});
```
For more information read [the retrying guide](/errors-retrying).
It's also worth mentioning that you can [retry a block of code](/errors-retrying) inside your tasks as well.
### `queue` options
Queues allow you to control the concurrency of your tasks. This allows you to have one-at-a-time execution and parallel executions. There are also more advanced techniques like having different concurrencies for different sets of your users. For more information read [the concurrency & queues guide](/queue-concurrency).
```ts /trigger/one-at-a-time.ts
import { task } from "@trigger.dev/sdk/v3";

export const oneAtATime = task({
id: "one-at-a-time",
queue: {
concurrencyLimit: 1,
},
run: async (payload: any, { ctx }) => {
//...
},
});
```
### `machine` options
Some tasks require more vCPUs or GBs of RAM. You can specify these requirements in the `machine` field. For more information read [the machines guide](/machines).
```ts /trigger/heavy-task.ts
import { task } from "@trigger.dev/sdk/v3";

export const heavyTask = task({
id: "heavy-task",
machine: {
preset: "large-1x", // 4 vCPU, 8 GB RAM
},
run: async (payload: any, { ctx }) => {
//...
},
});
```
### `maxDuration` option
By default tasks can execute indefinitely, which can be great! But you also might want to set a `maxDuration` to prevent a task from running too long. You can set the `maxDuration` on a task, and all runs of that task will be stopped if they exceed the duration.
```ts /trigger/long-task.ts
import { task } from "@trigger.dev/sdk/v3";

export const longTask = task({
id: "long-task",
maxDuration: 300, // 300 seconds or 5 minutes
run: async (payload: any, { ctx }) => {
//...
},
});
```
See our [maxDuration guide](/runs/max-duration) for more information.
## Lifecycle functions
![Lifecycle functions](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/lifecycle-functions.png)
### `init` function
This function is called before a run attempt:
```ts /trigger/init.ts
import { task } from "@trigger.dev/sdk/v3";

export const taskWithInit = task({
id: "task-with-init",
init: async (payload, { ctx }) => {
//...
},
run: async (payload: any, { ctx }) => {
//...
},
});
```
You can also return data from the `init` function that will be available in the params of the `run`, `cleanup`, `onSuccess`, and `onFailure` functions.
```ts /trigger/init-return.ts
import { task } from "@trigger.dev/sdk/v3";

export const taskWithInitReturn = task({
id: "task-with-init-return",
init: async (payload, { ctx }) => {
return { someData: "someValue" };
},
run: async (payload: any, { ctx, init }) => {
console.log(init.someData); // "someValue"
},
});
```
Errors thrown in the `init` function are ignored.
### `cleanup` function
This function is called after the `run` function is executed, regardless of whether the run was successful or not. It's useful for cleaning up resources, logging, or other side effects.
```ts /trigger/cleanup.ts
import { task } from "@trigger.dev/sdk/v3";

export const taskWithCleanup = task({
id: "task-with-cleanup",
cleanup: async (payload, { ctx }) => {
//...
},
run: async (payload: any, { ctx }) => {
//...
},
});
```
Errors thrown in the `cleanup` function will fail the attempt.
### `middleware` function
This function is called before the `run` function; it allows you to wrap the run function with custom code.
An error thrown in `middleware` is just like an uncaught error in the run function: it will
propagate through to `handleError()` and then will fail the attempt (causing a retry).
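A minimal sketch of a timing middleware, assuming the hook receives a `next()` function that invokes the wrapped run:

```ts /trigger/middleware.ts
import { task, logger } from "@trigger.dev/sdk/v3";

export const taskWithMiddleware = task({
  id: "task-with-middleware",
  // Assumption: middleware receives the payload plus a next() function that
  // runs the task; errors thrown here propagate as described above
  middleware: async (payload, { ctx, next }) => {
    const start = Date.now();
    try {
      await next(); // execute the run function
    } finally {
      logger.log("Run duration", { durationMs: Date.now() - start });
    }
  },
  run: async (payload: any, { ctx }) => {
    //...
  },
});
```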
### `onStart` function
When a task run starts, the `onStart` function is called. It's useful for sending notifications, logging, and other side effects. This function will only be called once per run (not per retry). If you want to run code before each retry, use the `init` function.
```ts /trigger/on-start.ts
import { task } from "@trigger.dev/sdk/v3";

export const taskWithOnStart = task({
id: "task-with-on-start",
onStart: async (payload, { ctx }) => {
//...
},
run: async (payload: any, { ctx }) => {
//...
},
});
```
You can also define an `onStart` function in your `trigger.config.ts` file to get notified when any task starts.
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "proj_1234",
onStart: async (payload, { ctx }) => {
console.log("Task started", ctx.task.id);
},
});
```
Errors thrown in the `onStart` function are ignored.
### `onSuccess` function
When a task run succeeds, the `onSuccess` function is called. It's useful for sending notifications, logging, syncing state to your database, or other side effects.
```ts /trigger/on-success.ts
import { task } from "@trigger.dev/sdk/v3";

export const taskWithOnSuccess = task({
id: "task-with-on-success",
onSuccess: async (payload, output, { ctx }) => {
//...
},
run: async (payload: any, { ctx }) => {
//...
},
});
```
You can also define an `onSuccess` function in your `trigger.config.ts` file to get notified when any task succeeds.
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "proj_1234",
onSuccess: async (payload, output, { ctx }) => {
console.log("Task succeeded", ctx.task.id);
},
});
```
Errors thrown in the `onSuccess` function are ignored.
### `onFailure` function
When a task run fails, the `onFailure` function is called. It's useful for sending notifications, logging, or other side effects. It will only be executed once the task run has exhausted all its retries.
```ts /trigger/on-failure.ts
import { task } from "@trigger.dev/sdk/v3";

export const taskWithOnFailure = task({
id: "task-with-on-failure",
onFailure: async (payload, error, { ctx }) => {
//...
},
run: async (payload: any, { ctx }) => {
//...
},
});
```
You can also define an `onFailure` function in your `trigger.config.ts` file to get notified when any task fails.
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "proj_1234",
onFailure: async (payload, error, { ctx }) => {
console.log("Task failed", ctx.task.id);
},
});
```
Errors thrown in the `onFailure` function are ignored.
### `handleError` functions
You can define a function that will be called when an error is thrown in the `run` function, that allows you to control how the error is handled and whether the task should be retried.
Read more about `handleError` in our [Errors and Retrying guide](/errors-retrying).
Uncaught errors will throw a special internal error of the type `HANDLE_ERROR_ERROR`.
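As a rough sketch of the shape (hedged; see the Errors and Retrying guide for the exact options you can return):

```ts /trigger/handle-error.ts
import { task } from "@trigger.dev/sdk/v3";

export const taskWithErrorHandling = task({
  id: "task-with-error-handling",
  run: async (payload: any, { ctx }) => {
    //...
  },
  // Assumption: handleError receives the thrown error and can return
  // directives that control retrying (shown here: skipping retries)
  handleError: async (payload, error, { ctx }) => {
    if (error instanceof TypeError) {
      // A coding bug won't be fixed by retrying, so skip the retries
      return { skipRetrying: true };
    }
    // Returning nothing falls back to the task's retry settings
  },
});
```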
## Next steps
Learn how to trigger your tasks from your code.
Tasks are the core of Trigger.dev. Learn how to write them.
# Scheduled tasks (cron)
A task that is triggered on a recurring schedule using cron syntax.
Scheduled tasks are only for recurring tasks. If you want to trigger a one-off task at a future time, you should [use the delay option](/triggering#delay).
## Defining a scheduled task
This task will run when any of the attached schedules trigger. Scheduled tasks have a predefined payload with some useful properties:
```ts
import { schedules } from "@trigger.dev/sdk/v3";
export const firstScheduledTask = schedules.task({
id: "first-scheduled-task",
run: async (payload) => {
//when the task was scheduled to run
//note this will be slightly different from new Date() because it takes a few ms to run the task
console.log(payload.timestamp); //is a Date object
//when the task was last run
//this can be undefined if it's never been run
console.log(payload.lastTimestamp); //is a Date object or undefined
//the timezone the schedule was registered with, defaults to "UTC"
//this is in IANA format, e.g. "America/New_York"
//See the full list here: https://cloud.trigger.dev/timezones
console.log(payload.timezone); //is a string
//If you want to output the time in the user's timezone do this:
const formatted = payload.timestamp.toLocaleString("en-US", {
timeZone: payload.timezone,
});
//the schedule id (you can have many schedules for the same task)
//using this you can remove the schedule, update it, etc
console.log(payload.scheduleId); //is a string
//you can optionally provide an external id when creating the schedule
//usually you would set this to a userId or some other unique identifier
//this can be undefined if you didn't provide one
console.log(payload.externalId); //is a string or undefined
//the next 5 dates this task is scheduled to run
console.log(payload.upcoming); //is an array of Date objects
},
});
```
You can see from the comments that the payload has several useful properties:
* `timestamp` - the time the task was scheduled to run, as a UTC date.
* `lastTimestamp` - the time the task was last run, as a UTC date.
* `timezone` - the timezone the schedule was registered with, defaults to "UTC". In IANA format, e.g. "America/New\_York".
* `scheduleId` - the id of the schedule that triggered the task
* `externalId` - the external id you (optionally) provided when creating the schedule
* `upcoming` - the next 5 times the task is scheduled to run
This task will NOT get triggered on a schedule until you attach a schedule to it. Read on for how
to do that.
Like all tasks they don't have timeouts, they should be placed inside a [/trigger folder](/config/config-file), and you [can configure them](/tasks/overview#defining-a-task).
## How to attach a schedule
Now that we've defined a scheduled task, we need to define when it will actually run. To do this we need to attach one or more schedules.
There are two ways of doing this:
* **Declarative:** defined on your `schedules.task`. They sync when you run the dev command or deploy.
* **Imperative:** created from the dashboard or by using the imperative SDK functions like `schedules.create()`.
A scheduled task can have multiple schedules attached to it, including a declarative schedule
and/or many imperative schedules.
### Declarative schedules
These sync when you run the [dev](/cli-dev) or [deploy](/cli-deploy) commands.
To create them you add the `cron` property to your `schedules.task()`. This property is optional and is only used if you want to add a declarative schedule to your task:
```ts
import { schedules } from "@trigger.dev/sdk/v3";

export const firstScheduledTask = schedules.task({
id: "first-scheduled-task",
//every two hours (UTC timezone)
cron: "0 */2 * * *",
run: async (payload, { ctx }) => {
//do something
},
});
```
If you use a string it will be in UTC. Alternatively, you can specify a timezone like this:
```ts
import { schedules } from "@trigger.dev/sdk/v3";

export const secondScheduledTask = schedules.task({
id: "second-scheduled-task",
cron: {
//5am every day Tokyo time
pattern: "0 5 * * *",
timezone: "Asia/Tokyo",
},
run: async (payload) => {},
});
```
When you run the [dev](/cli-dev) or [deploy](/cli-deploy) commands, declarative schedules will be synced. If you add, delete or edit the `cron` property it will be updated when you run these commands. You can view your schedules on the Schedules page in the dashboard.
### Imperative schedules
Alternatively you can explicitly attach schedules to a `schedules.task`. You can do this in the Schedules page in the dashboard by just pressing the "New schedule" button, or you can use the SDK to create schedules.
The advantage of imperative schedules is that they can be created dynamically, for example, you could create a schedule for each user in your database. They can also be activated, disabled, edited, and deleted without deploying new code by using the SDK or dashboard.
To use imperative schedules you need to do two things:
1. Define a task in your code using `schedules.task()`.
2. Attach 1+ schedules to the task either using the dashboard or the SDK.
## Supported cron syntax
```
* * * * *
┬ ┬ ┬ ┬ ┬
│ │ │ │ │
│ │ │ │ └── day of week (0 - 7, 1L - 7L) (0 or 7 is Sun)
│ │ │ └──── month (1 - 12)
│ │ └────── day of month (1 - 31, L)
│ └──────── hour (0 - 23)
└────────── minute (0 - 59)
```
"L" means the last. In the "day of week" field, 1L means the last Monday of the month. In the "day of month" field, L means the last day of the month.
We do not support seconds in the cron syntax.
## When schedules won't trigger
There are two situations when a scheduled task won't trigger:
* For Dev environments scheduled tasks will only trigger if you're running the dev CLI.
* For Staging/Production environments scheduled tasks will only trigger if the task is in the current deployment (latest version). We won't trigger tasks from previous deployments.
## Attaching schedules in the dashboard
You need to attach a schedule to a task before it will run on a schedule. You can attach static schedules in the dashboard:
In the sidebar select the "Schedules" page, then press the "New schedule" button. Or you can follow the onboarding and press the "Create in dashboard" button.
![Blank schedules page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/schedules-blank.png)
Fill in the form and press "Create schedule" when you're done.
![Create schedule form](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/schedules-create.png)
These are the options when creating a schedule:
| Name | Description |
| ----------------- | --------------------------------------------------------------------------------------------- |
| Task | The id of the task you want to attach to. |
| Cron pattern | The schedule in cron format. |
| Timezone          | The timezone the schedule will run in. Defaults to "UTC".                                       |
| External id | An optional external id, usually you'd use a userId. |
| Deduplication key | An optional deduplication key. If you pass the same value, it will update rather than create. |
| Environments | The environments this schedule will run in. |
## Attaching schedules with the SDK
You call `schedules.create()` to create a schedule from your code. Here's the simplest possible example:
```ts
const createdSchedule = await schedules.create({
//The id of the scheduled task you want to attach to.
task: firstScheduledTask.id,
//The schedule in cron format.
cron: "0 0 * * *",
//this is required, it prevents you from creating duplicate schedules. It will update the schedule if it already exists.
deduplicationKey: "my-deduplication-key",
});
```
The `task` id must be a task that you defined using `schedules.task()`.
You can create many schedules with the same `task`, `cron`, and `externalId` but only one with the same `deduplicationKey`.
This means you can have thousands of schedules attached to a single task, but only one schedule per `deduplicationKey`. Here's an example with all the options:
```ts
const createdSchedule = await schedules.create({
//The id of the scheduled task you want to attach to.
task: firstScheduledTask.id,
//The schedule in cron format.
cron: "0 0 * * *",
// Optional, it defaults to "UTC". In IANA format, e.g. "America/New_York".
// In this case, the task will run at midnight every day in New York time.
// If you specify a timezone it will automatically work with daylight saving time.
timezone: "America/New_York",
//Optionally, you can specify your own IDs (like a user ID) and then use it inside the run function of your task.
//This allows you to have per-user cron tasks.
externalId: "user_123456",
//You can only create one schedule with this key.
//If you use it twice, the second call will update the schedule.
//This is useful because you don't want to create duplicate schedules for a user.
deduplicationKey: "user_123456-todo_reminder",
});
```
See [the SDK reference](/management/schedules/create) for full details.
### Dynamic schedules (or multi-tenant schedules)
By using the `externalId` you can have schedules for your users. This is useful for things like reminders, where you want to have a schedule for each user.
A reminder task:
```ts /trigger/reminder.ts
import { schedules } from "@trigger.dev/sdk/v3";
//this task will run when any of the attached schedules trigger
export const reminderTask = schedules.task({
id: "todo-reminder",
run: async (payload) => {
if (!payload.externalId) {
throw new Error("externalId is required");
}
//get user using the externalId you used when creating the schedule
const user = await db.getUser(payload.externalId);
//send a reminder email
await sendReminderEmail(user);
},
});
```
Then in your backend code, you can create a schedule for each user:
```ts Next.js API route
import { reminderTask } from "~/trigger/reminder";
//app/reminders/route.ts
export async function POST(request: Request) {
//get the JSON from the request
const data = await request.json();
//create a schedule for the user
const createdSchedule = await schedules.create({
task: reminderTask.id,
//8am every day
cron: "0 8 * * *",
//the user's timezone
timezone: data.timezone,
//the user id
externalId: data.userId,
//this makes it impossible to have two reminder schedules for the same user
deduplicationKey: `${data.userId}-reminder`,
});
//return a success response with the schedule
return Response.json(createdSchedule);
}
```
You can also retrieve, list, delete, deactivate and re-activate schedules using the SDK. More on that later.
## Testing schedules
You can test a scheduled task in the dashboard. Note that the `scheduleId` will always come through as `sched_1234` to the run.
In the sidebar select the "Test" page, then select a scheduled task from the list (they have a
clock icon on them) ![Test page](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/schedules-test.png)
Fill in the form \[1]. You can select from a recent run \[2] to pre-populate the fields. Press "Run
test" when you're ready ![Schedule test form](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/schedules-test-form.png)
## Managing schedules with the SDK
### Retrieving an existing schedule
```ts
const retrievedSchedule = await schedules.retrieve(scheduleId);
```
See [the SDK reference](/management/schedules/retrieve) for full details.
### Listing schedules
```ts
const allSchedules = await schedules.list();
```
See [the SDK reference](/management/schedules/list) for full details.
### Updating a schedule
```ts
const updatedSchedule = await schedules.update(scheduleId, {
task: firstScheduledTask.id,
cron: "0 0 1 * *",
externalId: "ext_1234444",
deduplicationKey: "my-deduplication-key",
});
```
See [the SDK reference](/management/schedules/update) for full details.
### Deactivating a schedule
```ts
const deactivatedSchedule = await schedules.deactivate(scheduleId);
```
See [the SDK reference](/management/schedules/deactivate) for full details.
### Activating a schedule
```ts
const activatedSchedule = await schedules.activate(scheduleId);
```
See [the SDK reference](/management/schedules/activate) for full details.
### Deleting a schedule
```ts
const deletedSchedule = await schedules.del(scheduleId);
```
See [the SDK reference](/management/schedules/delete) for full details.
### Getting possible timezones
You might want to show a dropdown menu in your UI so your users can select their timezone. You can get a list of all possible timezones using the SDK:
```ts
const timezones = await schedules.timezones();
```
See [the SDK reference](/management/schedules/timezones) for full details.
# schemaTask
Define tasks with a runtime payload schema and validate the payload before running the task.
The `schemaTask` function allows you to define a task with a runtime payload schema. This schema is used to validate the payload before running the task or when triggering a task directly. If the payload does not match the schema, the task will not execute.
## Usage
```ts
import { schemaTask } from "@trigger.dev/sdk/v3";
import { z } from "zod";
const myTask = schemaTask({
id: "my-task",
schema: z.object({
name: z.string(),
age: z.number(),
}),
run: async (payload) => {
console.log(payload.name, payload.age);
},
});
```
`schemaTask` takes all the same options as [task](/tasks/overview), with the addition of a `schema` field. The `schema` field is a schema parser function from a schema library or a custom parser function.
We will probably eventually combine `task` and `schemaTask` into a single function, but because
that would be a breaking change, we are keeping them separate for now.
When you trigger the task directly, the payload will be validated against the schema before the [run](/runs) is created:
```ts
import { tasks } from "@trigger.dev/sdk/v3";
import { myTask } from "./trigger/myTasks";
// This will call the schema parser function and validate the payload
await myTask.trigger({ name: "Alice", age: "oops" }); // this will throw an error
// This will NOT call the schema parser function
await tasks.trigger("my-task", { name: "Alice", age: "oops" }); // this will not throw an error
```
The error thrown when the payload does not match the schema will be the same as the error thrown by the schema parser function. For example, if you are using Zod, the error will be a `ZodError`.
We will also validate the payload every time before the task is run, so you can be sure that the payload is always valid. In the example above, the task would fail with a `TaskPayloadParsedError` error and skip retrying if the payload does not match the schema.
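For example, with the Zod task above, you could catch the validation error when triggering (a sketch; the cast is only there to get an invalid payload past the compiler):
```ts
import { z } from "zod";
import { myTask } from "./trigger/myTasks";

try {
  await myTask.trigger({ name: "Alice", age: "oops" as unknown as number });
} catch (error) {
  if (error instanceof z.ZodError) {
    console.error("Invalid payload:", error.issues);
  }
}
```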
## Input/output schemas
Certain schema libraries, like Zod, split their type inference into "schema in" and "schema out". This means that you can define a single schema that will produce different types when triggering the task and when running the task. For example, you can define a schema that has a default value for a field, or a string coerced into a date:
```ts
import { schemaTask } from "@trigger.dev/sdk/v3";
import { z } from "zod";
const myTask = schemaTask({
id: "my-task",
schema: z.object({
name: z.string().default("John"),
age: z.number(),
dob: z.coerce.date(),
}),
run: async (payload) => {
console.log(payload.name, payload.age);
},
});
```
In this case, the trigger payload type is `{ name?: string; age: number; dob: string }`, but the run payload type is `{ name: string; age: number; dob: Date }`. So you can trigger the task with a payload like this:
```ts
await myTask.trigger({ age: 30, dob: "2020-01-01" }); // this is valid
await myTask.trigger({ name: "Alice", age: 30, dob: "2020-01-01" }); // this is also valid
```
## Supported schema types
### Zod
You can use the [Zod](https://zod.dev) schema library to define your schema. The schema will be validated using Zod's `parse` function.
```ts
import { schemaTask } from "@trigger.dev/sdk/v3";
import { z } from "zod";
export const zodTask = schemaTask({
id: "types/zod",
schema: z.object({
bar: z.string(),
baz: z.string().default("foo"),
}),
run: async (payload) => {
console.log(payload.bar, payload.baz);
},
});
```
### Yup
```ts
import { schemaTask } from "@trigger.dev/sdk/v3";
import * as yup from "yup";
export const yupTask = schemaTask({
id: "types/yup",
schema: yup.object({
bar: yup.string().required(),
baz: yup.string().default("foo"),
}),
run: async (payload) => {
console.log(payload.bar, payload.baz);
},
});
```
### Superstruct
```ts
import { schemaTask } from "@trigger.dev/sdk/v3";
import { object, string } from "superstruct";
export const superstructTask = schemaTask({
id: "types/superstruct",
schema: object({
bar: string(),
baz: string(),
}),
run: async (payload) => {
console.log(payload.bar, payload.baz);
},
});
```
### ArkType
```ts
import { schemaTask } from "@trigger.dev/sdk/v3";
import { type } from "arktype";
export const arktypeTask = schemaTask({
id: "types/arktype",
schema: type({
bar: "string",
baz: "string",
}).assert,
run: async (payload) => {
console.log(payload.bar, payload.baz);
},
});
```
### @effect/schema
```ts
import { schemaTask } from "@trigger.dev/sdk/v3";
import * as Schema from "@effect/schema/Schema";
// For some funny typescript reason, you cannot pass the Schema.decodeUnknownSync directly to schemaTask
const effectSchemaParser = Schema.decodeUnknownSync(
Schema.Struct({ bar: Schema.String, baz: Schema.String })
);
export const effectTask = schemaTask({
id: "types/effect",
schema: effectSchemaParser,
run: async (payload) => {
console.log(payload.bar, payload.baz);
},
});
```
### runtypes
```ts
import { schemaTask } from "@trigger.dev/sdk/v3";
import * as T from "runtypes";
export const runtypesTask = schemaTask({
id: "types/runtypes",
schema: T.Record({
bar: T.String,
baz: T.String,
}),
run: async (payload) => {
console.log(payload.bar, payload.baz);
},
});
```
### valibot
```ts
import { schemaTask } from "@trigger.dev/sdk/v3";
import * as v from "valibot";
// For some funny typescript reason, you cannot pass the v.parser directly to schemaTask
const valibotParser = v.parser(
v.object({
bar: v.string(),
baz: v.string(),
})
);
export const valibotTask = schemaTask({
id: "types/valibot",
schema: valibotParser,
run: async (payload) => {
console.log(payload.bar, payload.baz);
},
});
```
### typebox
```ts
import { schemaTask } from "@trigger.dev/sdk/v3";
import { Type } from "@sinclair/typebox";
import { wrap } from "@typeschema/typebox";
export const typeboxTask = schemaTask({
id: "types/typebox",
schema: wrap(
Type.Object({
bar: Type.String(),
baz: Type.String(),
})
),
run: async (payload) => {
console.log(payload.bar, payload.baz);
},
});
```
### Custom parser function
You can also define a custom parser function that will be called with the payload before the task is run. The parser function should return the parsed payload or throw an error if the payload is invalid.
```ts
import { schemaTask } from "@trigger.dev/sdk/v3";

export const customParserTask = schemaTask({
  id: "types/custom-parser",
  schema: (data: unknown) => {
    // This is a custom parser that does actual validation (not just casting)
    if (typeof data !== "object" || data === null) {
      throw new Error("Invalid data");
    }
    const { bar, baz } = data as { bar: unknown; baz: unknown };
    if (typeof bar !== "string" || typeof baz !== "string") {
      throw new Error("Invalid data");
    }
    return { bar, baz };
  },
  run: async (payload) => {
    console.log(payload.bar, payload.baz);
  },
});
```
# Triggering
Tasks need to be triggered in order to run.
Trigger tasks **from your backend**:
| Function | This works | What it does |
| :----------------------- | :--------- | :-------------------------------------------------------------------------------------------------------------------------- |
| `tasks.trigger()` | Anywhere | Triggers a task and gets a handle you can use to fetch and manage the run. [Read more](#tasks-trigger) |
| `tasks.batchTrigger()` | Anywhere | Triggers a task multiple times and gets a handle you can use to fetch and manage the runs. [Read more](#tasks-batchtrigger) |
| `tasks.triggerAndPoll()` | Anywhere   | Triggers a task and then polls the run until it's complete. [Read more](#tasks-triggerandpoll)                              |
Trigger tasks **from inside a run**:
| Function | This works | What it does |
| :------------------------------- | :---------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `yourTask.trigger()` | Anywhere | Triggers a task and gets a handle you can use to monitor and manage the run. It does not wait for the result. [Read more](#yourtask-trigger) |
| `yourTask.batchTrigger()` | Anywhere | Triggers a task multiple times and gets a handle you can use to monitor and manage the runs. It does not wait for the results. [Read more](#yourtask-batchtrigger) |
| `yourTask.triggerAndWait()` | Inside task | Triggers a task and then waits until it's complete. You get the result data to continue with. [Read more](#yourtask-triggerandwait) |
| `yourTask.batchTriggerAndWait()` | Inside task | Triggers a task multiple times in parallel and then waits until they're all complete. You get the resulting data to continue with. [Read more](#yourtask-batchtriggerandwait) |
Additionally, [scheduled tasks](/tasks/scheduled) are triggered **automatically** on their schedule, and webhooks are triggered when a webhook is received.
## Scheduled tasks
You should attach one or more schedules to your `schedules.task()` to trigger it on a recurring schedule. [Read the scheduled tasks docs](/tasks/scheduled).
## Authentication
When you trigger a task from your backend code, you need to set the `TRIGGER_SECRET_KEY` environment variable. You can find the value on the API keys page in the Trigger.dev dashboard. [More info on API keys](/apikeys).
## Triggering from your backend
You can trigger any task from your backend code using the `tasks.trigger()` or `tasks.batchTrigger()` SDK functions.
Do not trigger tasks directly from your frontend. If you do, you will leak your private
Trigger.dev API key.
You can use Next.js Server Actions but [you need to be careful with bundling](/guides/frameworks/nextjs#triggering-your-task-in-next-js).
### tasks.trigger()
Triggers a single run of a task with the payload you pass in, and any options you specify, without needing to import the task.
By using `tasks.trigger()`, you can pass in the task type as a generic argument, giving you full
type checking. Make sure you use a `type` import so that your task code is not imported into your
application.
```ts Next.js API route
import { tasks } from "@trigger.dev/sdk/v3";
import type { emailSequence } from "~/trigger/emails";
// 👆 **type-only** import
//app/email/route.ts
export async function POST(request: Request) {
//get the JSON from the request
const data = await request.json();
// Pass the task type to `trigger()` as a generic argument, giving you full type checking
const handle = await tasks.trigger<typeof emailSequence>("email-sequence", {
to: data.email,
name: data.name,
});
//return a success response with the handle
return Response.json(handle);
}
```
```ts Remix
import { tasks } from "@trigger.dev/sdk/v3";
import type { emailSequence } from "~/trigger/emails";
// 👆 **type-only** import
export async function action({ request, params }: ActionFunctionArgs) {
if (request.method !== "POST") {
throw new Response("Method Not Allowed", { status: 405 });
}
//get the JSON from the request
const data = await request.json();
// Pass the task type to `trigger()` as a generic argument, giving you full type checking
const handle = await tasks.trigger<typeof emailSequence>("email-sequence", {
to: data.email,
name: data.name,
});
//return a success response with the handle
return json(handle);
}
```
### tasks.batchTrigger()
Triggers multiple runs of a task with the payloads you pass in, and any options you specify, without needing to import the task.
By using `tasks.batchTrigger()`, you can pass in the task type as a generic argument, giving you
full type checking. Make sure you use a `type` import so that your task code is not imported into
your application.
```ts Next.js API route
import { tasks } from "@trigger.dev/sdk/v3";
import type { emailSequence } from "~/trigger/emails";
// 👆 **type-only** import
//app/email/route.ts
export async function POST(request: Request) {
//get the JSON from the request
const data = await request.json();
// Pass the task type to `batchTrigger()` as a generic argument, giving you full type checking
const batchHandle = await tasks.batchTrigger<typeof emailSequence>(
"email-sequence",
data.users.map((u) => ({ payload: { to: u.email, name: u.name } }))
);
//return a success response with the handle
return Response.json(batchHandle);
}
```
```ts Remix
import { tasks } from "@trigger.dev/sdk/v3";
import type { emailSequence } from "~/trigger/emails";
export async function action({ request, params }: ActionFunctionArgs) {
if (request.method !== "POST") {
throw new Response("Method Not Allowed", { status: 405 });
}
//get the JSON from the request
const data = await request.json();
// Pass the task type to `batchTrigger()` as a generic argument, giving you full type checking
const batchHandle = await tasks.batchTrigger<typeof emailSequence>(
"email-sequence",
data.users.map((u) => ({ payload: { to: u.email, name: u.name } }))
);
//return a success response with the handle
return json(batchHandle);
}
```
### tasks.triggerAndPoll()
Triggers a single run of a task with the payload you pass in, and any options you specify, and then polls the run until it's complete.
By using `tasks.triggerAndPoll()`, you can pass in the task type as a generic argument, giving you
full type checking. Make sure you use a `type` import so that your task code is not imported into
your application.
```ts Next.js API route
import { tasks } from "@trigger.dev/sdk/v3";
import type { emailSequence } from "~/trigger/emails";
//app/email/route.ts
export async function POST(request: Request) {
//get the JSON from the request
const data = await request.json();
// Pass the task type to `triggerAndPoll()` as a generic argument, giving you full type checking
const result = await tasks.triggerAndPoll<typeof emailSequence>(
"email-sequence",
{
to: data.email,
name: data.name,
},
{ pollIntervalMs: 5000 }
);
//return a success response with the result
return Response.json(result);
}
```
```ts Remix
import { tasks } from "@trigger.dev/sdk/v3";
import type { emailSequence } from "~/trigger/emails";
export async function action({ request, params }: ActionFunctionArgs) {
if (request.method !== "POST") {
throw new Response("Method Not Allowed", { status: 405 });
}
//get the JSON from the request
const data = await request.json();
// Pass the task type to `triggerAndPoll()` as a generic argument, giving you full type checking
const result = await tasks.triggerAndPoll<typeof emailSequence>(
"email-sequence",
{
to: data.email,
name: data.name,
},
{ pollIntervalMs: 5000 }
);
//return a success response with the result
return json(result);
}
```
The above code is just a demonstration of the API. We don't recommend calling `triggerAndPoll()` in an API route like this, because it blocks the request until the task is complete.
## Triggering from inside a run
Task instance methods are available on the `Task` object you receive when you define a task. We recommend you use these methods inside another task to trigger subtasks.
### yourTask.trigger()
Triggers a single run of a task with the payload you pass in, and any options you specify. It does NOT wait for the result.
If called from within a task, you can use the `AndWait` version to pause execution until the triggered run is complete.
If you need to call `trigger()` on a task in a loop, use [`batchTrigger()`](/triggering#yourtask-batchtrigger) instead, which will trigger up to 100 tasks in a single call.
```ts /trigger/my-task.ts
import { myOtherTask } from "~/trigger/my-other-task";
export const myTask = task({
id: "my-task",
run: async (payload: string) => {
const handle = await myOtherTask.trigger("some data");
//...do other stuff
},
});
```
### yourTask.batchTrigger()
Triggers multiple runs of a task with the payloads you pass in, and any options you specify. It does NOT wait for the result.
```ts /trigger/my-task.ts
import { myOtherTask } from "~/trigger/my-other-task";
export const myTask = task({
id: "my-task",
run: async (payload: string) => {
const batchHandle = await myOtherTask.batchTrigger([{ payload: "some data" }]);
//...do other stuff
},
});
```
### yourTask.triggerAndWait()
This is where it gets interesting. You can trigger a task and then wait for the result. This is useful when you need to call a different task and then use the result to continue with your task.
Avoid calling `triggerAndWait()` in a loop. Instead, use `batchTriggerAndWait()` if you can, or a for loop if you can't, as shown in the examples below.
To control concurrency using batch triggers, you can set `queue.concurrencyLimit` on the child task.
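For example, a child task defined like this (a sketch) caps how many of the batched runs execute at once:
```ts
import { task } from "@trigger.dev/sdk/v3";

export const childTask = task({
  id: "child-task",
  queue: {
    // at most 5 of the batched child runs execute concurrently
    concurrencyLimit: 5,
  },
  run: async (payload: string) => {
    //...do the work for one item
  },
});
```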
```ts /trigger/batch.ts
export const batchTask = task({
id: "batch-task",
run: async (payload: string) => {
const results = await childTask.batchTriggerAndWait([
{ payload: "item1" },
{ payload: "item2" },
]);
console.log("Results", results);
//...do stuff with the results
},
});
```
```ts /trigger/loop.ts
export const loopTask = task({
id: "loop-task",
run: async (payload: string) => {
//this will be slower than the batch version
//as we have to resume the parent after each iteration
for (let i = 0; i < 2; i++) {
const result = await childTask.triggerAndWait(`item${i}`);
console.log("Result", result);
//...do stuff with the result
}
},
});
```
```ts /trigger/parent.ts
export const parentTask = task({
id: "parent-task",
run: async (payload: string) => {
const result = await childTask.triggerAndWait("some-data");
console.log("Result", result);
//...do stuff with the result
},
});
```
The `result` object is a "Result" type that needs to be checked to see if the child task run was successful:
```ts /trigger/parent.ts
export const parentTask = task({
id: "parent-task",
run: async (payload: string) => {
const result = await childTask.triggerAndWait("some-data");
if (result.ok) {
console.log("Result", result.output); // result.output is the typed return value of the child task
} else {
console.error("Error", result.error); // result.error is the error that caused the run to fail
}
},
});
```
If instead you just want to get the output of the child task, and throw an error if the child task failed, you can use the `unwrap` method:
```ts /trigger/parent.ts
export const parentTask = task({
id: "parent-task",
run: async (payload: string) => {
const output = await childTask.triggerAndWait("some-data").unwrap();
console.log("Output", output);
},
});
```
You can also catch the error if the child task fails and get more information about the error:
```ts /trigger/parent.ts
import { task, SubtaskUnwrapError } from "@trigger.dev/sdk/v3";
export const parentTask = task({
id: "parent-task",
run: async (payload: string) => {
try {
const output = await childTask.triggerAndWait("some-data").unwrap();
console.log("Output", output);
} catch (error) {
if (error instanceof SubtaskUnwrapError) {
console.error("Error in fetch-post-task", {
runId: error.runId,
taskId: error.taskId,
cause: error.cause,
});
}
}
},
});
```
This method should only be used inside a task. If you use it outside a task, it will throw an
error.
### yourTask.batchTriggerAndWait()
You can batch trigger a task and wait for all the results. This is useful for the fan-out pattern, where you need to call a task multiple times and then wait for all the results to continue with your task.
Avoid calling `batchTriggerAndWait()` repeatedly in a loop. Instead, pass in all items at once and set an appropriate `maxConcurrency`. Alternatively, trigger the batches sequentially with a for loop, as shown below.
To control concurrency, you can set `queue.concurrencyLimit` on the child task.
```ts /trigger/batch.ts
export const batchTask = task({
id: "batch-task",
run: async (payload: string) => {
const results = await childTask.batchTriggerAndWait([
{ payload: "item1" },
{ payload: "item2" },
]);
console.log("Results", results);
//...do stuff with the results
},
});
```
```ts /trigger/loop.ts
export const loopTask = task({
id: "loop-task",
run: async (payload: string) => {
//this will be slower than a single batchTriggerAndWait()
//as we have to resume the parent after each iteration
for (let i = 0; i < 2; i++) {
const result = await childTask.batchTriggerAndWait([
{ payload: `itemA${i}` },
{ payload: `itemB${i}` },
]);
console.log("Result", result);
//...do stuff with the result
}
},
});
```
When using `batchTriggerAndWait`, you have full control over how to handle failures within the batch. The method returns an array of run results, allowing you to inspect each run's outcome individually and implement custom error handling.
Here's how you can manage run failures:
1. **Inspect individual run results**: Each run in the returned array has an `ok` property indicating success or failure.
2. **Access error information**: For failed runs, you can examine the `error` property to get details about the failure.
3. **Choose your failure strategy**: You have two main options:
* **Fail the entire batch**: Throw an error if any run fails, causing the parent task to reattempt.
* **Continue despite failures**: Process the results without throwing an error, allowing the parent task to continue.
4. **Implement custom logic**: You can create sophisticated handling based on the number of failures, types of errors, or other criteria.
Here's an example of how you might handle run failures:
```ts /trigger/batchTriggerAndWait.ts
const result = await batchChildTask.batchTriggerAndWait([
{ payload: "item1" },
{ payload: "item2" },
{ payload: "item3" },
]);
// Result will contain the finished runs.
// They're only finished if they have succeeded or failed.
// "Failed" means all attempts failed
for (const run of result.runs) {
// Check if the run succeeded
if (run.ok) {
logger.info("Batch task run succeeded", { output: run.output });
} else {
logger.error("Batch task run error", { error: run.error });
//You can choose if you want to throw an error and fail the entire run
throw new Error(`Fail the entire run because ${run.id} failed`);
}
}
```
```ts /trigger/nested.ts
export const batchParentTask = task({
id: "parent-task",
run: async (payload: string) => {
const results = await childTask.batchTriggerAndWait([
{ payload: "item4" },
{ payload: "item5" },
{ payload: "item6" },
]);
console.log("Results", results);
//...do stuff with the result
},
});
```
This method should only be used inside a task. If you use it outside a task, it will throw an
error.
## Options
All of the above functions accept an options object:
```ts
await myTask.trigger({ some: "data" }, { delay: "1h", ttl: "1h" });
await myTask.batchTrigger([{ payload: { some: "data" }, options: { delay: "1h" } }]);
```
The following options are available:
### `delay`
When you want to trigger a task now, but have it run at a later time, you can use the `delay` option:
```ts
// Delay the task run by 1 hour
await myTask.trigger({ some: "data" }, { delay: "1h" });
// Delay the task run by 88 seconds
await myTask.trigger({ some: "data" }, { delay: "88s" });
// Delay the task run by 1 hour and 52 minutes and 18 seconds
await myTask.trigger({ some: "data" }, { delay: "1h52m18s" });
// Delay until a specific time
await myTask.trigger({ some: "data" }, { delay: "2024-12-01T00:00:00" });
// Delay using a Date object
await myTask.trigger({ some: "data" }, { delay: new Date(Date.now() + 1000 * 60 * 60) });
// Delay using a timezone
await myTask.trigger({ some: "data" }, { delay: new Date("2024-07-23T11:50:00+02:00") });
```
Runs that are delayed and have not been enqueued yet will display in the dashboard with a "Delayed" status:
![Delayed run in the dashboard](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/delayed-runs.png)
Delayed runs will be enqueued at the time specified, and will run as soon as possible after that
time, just as a normally triggered run would.
You can cancel a delayed run using the `runs.cancel` SDK function:
```ts
import { runs } from "@trigger.dev/sdk/v3";
await runs.cancel("run_1234");
```
You can also reschedule a delayed run using the `runs.reschedule` SDK function:
```ts
import { runs } from "@trigger.dev/sdk/v3";
// The delay option here takes the same format as the trigger delay option
await runs.reschedule("run_1234", { delay: "1h" });
```
The `delay` option is also available when using `batchTrigger`:
```ts
await myTask.batchTrigger([{ payload: { some: "data" }, options: { delay: "1h" } }]);
```
### `ttl`
You can set a TTL (time to live) when triggering a task, which will automatically expire the run if it hasn't started within the specified time. This is useful for ensuring that a run doesn't get stuck in the queue for too long.
All runs in development have a default `ttl` of 10 minutes. You can disable this by setting the
`ttl` option.
```ts
import { myTask } from "./trigger/myTasks";
// Expire the run if it hasn't started within 1 hour
await myTask.trigger({ some: "data" }, { ttl: "1h" });
// If you specify a number, it will be treated as seconds
await myTask.trigger({ some: "data" }, { ttl: 3600 }); // 1 hour
```
When a run is expired, it will be marked as "Expired" in the dashboard:
![Expired runs in the dashboard](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/expired-runs.png)
When you use both `delay` and `ttl`, the TTL will start counting down from the time the run is enqueued, not from the time the run is triggered.
So for example, when using the following code:
```ts
await myTask.trigger({ some: "data" }, { delay: "10m", ttl: "1h" });
```
The timeline would look like this:
1. The run is created at 12:00:00
2. The run is enqueued at 12:10:00
3. The TTL starts counting down from 12:10:00
4. If the run hasn't started by 13:10:00, it will be expired
For this reason, the `ttl` option only accepts durations and not absolute timestamps.
### `idempotencyKey`
You can provide an `idempotencyKey` to ensure that a task is only triggered once with the same key. This is useful if you are triggering a task within another task that might be retried:
```typescript
import { idempotencyKeys, task } from "@trigger.dev/sdk/v3";
export const myTask = task({
id: "my-task",
retry: {
maxAttempts: 4,
},
run: async (payload: any) => {
// By default, idempotency keys generated are unique to the run, to prevent retries from duplicating child tasks
const idempotencyKey = await idempotencyKeys.create("my-task-key");
// childTask will only be triggered once with the same idempotency key
await childTask.triggerAndWait(payload, { idempotencyKey });
// Do something else, that may throw an error and cause the task to be retried
},
});
```
For more information, see our [Idempotency](/idempotency) documentation.
### `queue`
When you trigger a task you can override the concurrency limit. This is really useful if you sometimes have high priority runs.
The task:
```ts /trigger/override-concurrency.ts
const generatePullRequest = task({
id: "generate-pull-request",
queue: {
//normally when triggering this task it will be limited to 1 run at a time
concurrencyLimit: 1,
},
run: async (payload) => {
//todo generate a PR using OpenAI
},
});
```
Triggering from your backend and overriding the concurrency:
```ts app/api/push/route.ts
import { generatePullRequest } from "~/trigger/override-concurrency";
export async function POST(request: Request) {
const data = await request.json();
if (data.branch === "main") {
//trigger the task, with a different queue
const handle = await generatePullRequest.trigger(data, {
queue: {
//the "main-branch" queue will have a concurrency limit of 10
//this triggered run will use that queue
name: "main-branch",
concurrencyLimit: 10,
},
});
return Response.json(handle);
} else {
//triggered with the default (concurrency of 1)
const handle = await generatePullRequest.trigger(data);
return Response.json(handle);
}
}
```
### `concurrencyKey`
If you're building an application where you want to run tasks for your users, you might want a separate queue for each of your users. (It doesn't have to be users, it can be any entity you want to separately limit the concurrency for.)
You can do this by using `concurrencyKey`. It creates a separate queue for each value of the key.
Your backend code:
```ts app/api/pr/route.ts
import { generatePullRequest } from "~/trigger/override-concurrency";
export async function POST(request: Request) {
const data = await request.json();
if (data.isFreeUser) {
//free users can only have 1 PR generated at a time
const handle = await generatePullRequest.trigger(data, {
queue: {
//every free user gets a queue with a concurrency limit of 1
name: "free-users",
concurrencyLimit: 1,
},
concurrencyKey: data.userId,
});
//return a success response with the handle
return Response.json(handle);
} else {
//trigger the task, with a different queue
const handle = await generatePullRequest.trigger(data, {
queue: {
//every paid user gets a queue with a concurrency limit of 10
name: "paid-users",
concurrencyLimit: 10,
},
concurrencyKey: data.userId,
});
//return a success response with the handle
return Response.json(handle);
}
}
```
### `maxAttempts`
You can set the maximum number of attempts for a task run. If the run fails, it will be retried up to the number of attempts you specify.
```ts
await myTask.trigger({ some: "data" }, { maxAttempts: 3 });
await myTask.trigger({ some: "data" }, { maxAttempts: 1 }); // no retries
```
This will override the `retry.maxAttempts` value set in the task definition.
### `tags`
View our [tags doc](/tags) for more information.
### `metadata`
View our [metadata doc](/runs/metadata) for more information.
### `maxDuration`
View our [maxDuration doc](/runs/max-duration) for more information.
## Large Payloads
We recommend keeping your task payloads as small as possible. There is currently a hard limit of 10MB on task payloads.
If your payload size is larger than 512KB, instead of saving the payload to the database, we will upload it to an S3-compatible object store and store the URL in the database.
When your task runs, we automatically download the payload from the object store and pass it to your task function. We will also return a `payloadPresignedUrl` from the `runs.retrieve` SDK function so you can download the payload yourself if needed:
```ts
import { runs } from "@trigger.dev/sdk/v3";
const run = await runs.retrieve(handle);
if (run.payloadPresignedUrl) {
const response = await fetch(run.payloadPresignedUrl);
const payload = await response.json();
console.log("Payload", payload);
}
```
We also use this same system for large task outputs, and will return a corresponding
`outputPresignedUrl`. Task outputs are limited to 100MB.
If you need to pass larger payloads, you'll need to upload the payload to your own storage and pass a URL to the file in the payload instead. For example, upload the file to S3 and then send a presigned URL to it in the payload:
```ts /yourServer.ts
import { myTask } from "./trigger/myTasks";
import { s3Client, getSignedUrl, PutObjectCommand, GetObjectCommand } from "./s3";
import { createReadStream } from "node:fs";
// Upload file to S3
await s3Client.send(
new PutObjectCommand({
Bucket: "my-bucket",
Key: "my-file.json",
Body: createReadStream("large-payload.json"),
})
);
// Create presigned URL
const presignedUrl = await getSignedUrl(
s3Client,
new GetObjectCommand({
Bucket: "my-bucket",
Key: "my-file.json",
}),
{
expiresIn: 3600, // expires in 1 hour
}
);
// Now send the URL to the task
const handle = await myTask.trigger({
url: presignedUrl,
});
```
```ts /trigger/myTasks.ts
import { task } from "@trigger.dev/sdk/v3";
export const myTask = task({
id: "my-task",
run: async (payload: { url: string }) => {
// Download the file from the URL
const response = await fetch(payload.url);
const data = await response.json();
// Do something with the data
},
});
```
### Batch Triggering
When using `batchTrigger` or `batchTriggerAndWait`, the total size of all payloads cannot exceed 10MB. This means if you are doing a batch of 100 runs, each payload should be less than 100KB.
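If you have more items than fit within those limits, one approach (a sketch, assuming the `myTask` import from earlier) is to split them into chunks and trigger one batch per chunk:
```ts
import { myTask } from "./trigger/myTasks";

async function batchTriggerInChunks(payloads: { some: string }[], chunkSize = 100) {
  for (let i = 0; i < payloads.length; i += chunkSize) {
    const chunk = payloads.slice(i, i + chunkSize);
    await myTask.batchTrigger(chunk.map((payload) => ({ payload })));
  }
}
```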
# Common problems
Some common problems you might experience and their solutions
## Development
### `EACCES: permission denied`
If you see this error:
```
6090 verbose stack Error: EACCES: permission denied, rename '/Users/user/.npm/_cacache/tmp/f1bfea11' -> '/Users/user/.npm/_cacache/content-v2/sha512/31/d8/e094a47a0105d06fd246892ed1736c02eae323726ec6a3f34734eeb71308895dfba4f4f82a88ffe7e480c90b388c91fc3d9f851ba7b96db4dc33fbc65528'
```
First, clear the npm cache:
```sh
npm cache clean --force
```
Then, if clearing the cache doesn't fix it, change the permissions of the npm folder:
```sh
sudo chown -R $(whoami) ~/.npm
```
## Deployment
Running the `trigger.dev deploy` command builds and deploys your code. Sometimes there can be issues building your code.
You can run the deploy command with the `--log-level debug` flag. This will spit out a lot of information about the deploy. If you can't figure out the problem from the debug output, please join [our Discord](https://trigger.dev/discord) and create a help forum post. Do NOT share the extended debug logs publicly as they might reveal private information about your project.
You can also review the build by supplying the `--dry-run` flag. This will build your project but not deploy it. You can then inspect the build output on your machine.
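For example:
```sh
# Verbose deploy logs - do NOT share these publicly
npx trigger.dev@latest deploy --log-level debug

# Build without deploying, then inspect the output on your machine
npx trigger.dev@latest deploy --dry-run
```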
Here are some common problems and their solutions:
### `Failed to build project image: Error building image`
There should be a link below the error message to the full build logs on your machine. Take a look at these to see what went wrong. If you can't figure it out, join [our Discord](https://trigger.dev/discord) and share the logs with us privately. Do NOT share them publicly as the verbose logs might reveal private information about your project.
### `Deployment encountered an error`
Usually there will be some useful guidance below this message. If you can't figure out what's going wrong then join [our Discord](https://trigger.dev/discord) and create a Help forum post with a link to your deployment.
## Project setup issues
### `The requested module 'node:events' does not provide an export named 'addAbortListener'`
If you see this error it means you're not on a supported version of Node:
```
SyntaxError: The requested module 'node:events' does not provide an export named 'addAbortListener'
at ModuleJob._instantiate (node:internal/modules/esm/module_job:123:21)
at async ModuleJob.run (node:internal/modules/esm/module_job:189:5)
Node.js v19.9.0
```
You need to be on at least these minor versions:
| Version | Minimum |
| ------- | ------- |
| 18 | 18.20+ |
| 20 | 20.5+ |
| 21 | 21.0+ |
| 22 | 22.0+ |
## Runtime issues
### `Environment variable not found:`
Your code is deployed separately from the rest of your app(s) so you need to make sure that you set any environment variables you use in your tasks in the Trigger.dev dashboard. [Read the guide](/deploy-environment-variables).
### `Error: @prisma/client did not initialize yet.`
Prisma uses code generation to create the client from your schema file. This means you need to add a bit of config so we can generate this file before your tasks run: [Read the guide](/config/config-file#prisma).
### When triggering subtasks the parent task finishes too soon
Make sure that you always use `await` when you call `trigger`, `triggerAndWait`, `batchTrigger`, and `batchTriggerAndWait`. If you don't, it's likely the task(s) won't be triggered because the calling process can be terminated before the network calls are sent.
### Rate limit exceeded
The most common cause of hitting the API rate limit is calling `trigger()` on a task in a loop. Instead, use `batchTrigger()`, which will trigger multiple tasks in a single API call. You can have up to 100 tasks in a single batch trigger call.
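For example (a sketch; `users` and the `email-sequence` task id are stand-ins):
```ts
import { tasks } from "@trigger.dev/sdk/v3";

const users = [{ email: "a@example.com", name: "Ada" }]; // stand-in data

// Instead of one API call per user:
// for (const user of users) {
//   await tasks.trigger("email-sequence", { to: user.email, name: user.name });
// }

// Make a single API call for up to 100 users:
await tasks.batchTrigger(
  "email-sequence",
  users.map((user) => ({ payload: { to: user.email, name: user.name } }))
);
```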
View the [rate limits](/limits) page for more information.
### `Crypto is not defined`
This can happen in different situations, for example when using plain strings as idempotency keys. Support for `Crypto` without a special flag was added in Node `v19.0.0`. You will have to upgrade Node - we recommend even-numbered major releases, e.g. `v20` or `v22`. Alternatively, you can switch from plain strings to the `idempotencyKeys.create` SDK function. [Read the guide](/idempotency).
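For example, instead of passing a plain string, create the key with the SDK (a sketch; `childTask` stands in for your own task):
```ts
import { idempotencyKeys } from "@trigger.dev/sdk/v3";
import { childTask } from "./trigger/myTasks"; // stand-in import

// Instead of: childTask.trigger(payload, { idempotencyKey: "my-plain-string" })
const idempotencyKey = await idempotencyKeys.create("my-task-key");
await childTask.trigger({ some: "data" }, { idempotencyKey });
```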
## Framework specific issues
### NestJS swallows all errors/exceptions
If you're using NestJS and you add code like this into your tasks you will prevent any errors from being surfaced:
```ts
export const simplestTask = task({
id: "nestjs-example",
run: async (payload) => {
//by doing this you're swallowing any errors
const app = await NestFactory.createApplicationContext(AppModule);
await app.init();
//etc...
},
});
```
NestJS has a global exception filter that catches all errors and swallows them, so we can't receive them. Our current recommendation is to not use NestJS inside your tasks. If you're a NestJS user you can still use Trigger.dev but just don't use NestJS inside your tasks like this.
### React is not defined
If you see this error:
```
Worker failed to start ReferenceError: React is not defined
```
Either add this to your file:
```ts
import React from "react";
```
Or change the tsconfig jsx setting:
```json
{
"compilerOptions": {
//...
"jsx": "react-jsx"
}
}
```
### Next.js build failing due to missing API key in GitHub CI
This issue occurs during the Next.js app build process on GitHub CI, where the Trigger.dev SDK expects the TRIGGER\_SECRET\_KEY environment variable to be set at build time. Next.js attempts to compile routes and create static pages, which can cause issues with SDKs that require runtime environment variables. The solution is to mark the relevant pages as dynamic to prevent Next.js from trying to make them static. You can do this by adding the following line to the route file:
```ts
export const dynamic = "force-dynamic";
```
### Correctly passing event handlers to React components
An issue can sometimes arise when you try to pass a function directly to the `onClick` prop. This is because the function may require specific arguments or context that are not available when the event occurs. By wrapping the function call in an arrow function, you ensure that the handler is called with the correct context and any necessary arguments. For example:
This works (a minimal illustration; `MyButton`, `handleClick`, and `itemId` are stand-ins):
```tsx
<MyButton onClick={() => handleClick(itemId)} />
```
Whereas this does not work, because `handleClick` is called immediately during render instead of on click:
```tsx
<MyButton onClick={handleClick(itemId)} />
```
# Alerts
Get alerted when runs or deployments fail, or when deployments succeed.
Click on "Alerts" in the left hand side menu, then click on "New alert" to open the new alert modal.
![Email alerts](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/troubleshooting-alerts-blank.png)
Choose to be notified by email, Slack notification or webhook whenever:
* a run fails
* a deployment fails
* a deployment succeeds
![Email alerts](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/troubleshooting-alerts-modal.png)
Click on the triple dot menu on the right side of the table row and select "Disable" or "Delete".
![Disable and delete alerts](https://mintlify.s3-us-west-1.amazonaws.com/trigger/images/troubleshooting-alerts-disable-delete.png)
# GitHub Issues
Please [file an issue on GitHub](https://github.com/triggerdotdev/trigger.dev/issues) to report bugs or request features.
# Uptime Status
Get email notifications when Trigger.dev creates, updates or resolves a platform incident.
[Subscribe](https://status.trigger.dev/)
# Upgrade to new build system
How to update to 3.0.0 from the beta
The Trigger.dev packages are now at version `3.0.x` in the `latest` tag. This is our first official release of v3 under the latest tag, and we recommend anyone still using packages in the `beta` tag to upgrade to the latest version. This guide will help you upgrade your project to the latest version of Trigger.dev.
The major changes in this release are a new build system, which is more flexible and powerful than the previous build system. We've also made some changes to the `trigger.dev` CLI to improve the developer experience.
The main features of the new build system are:
* **Bundling by default**: All dependencies are bundled by default, so you no longer need to specify which dependencies to bundle. This solves a whole bunch of issues related to monorepos.
* **Build extensions**: A new way to extend the build process with custom logic. This is a more flexible and powerful way to extend the build process compared to the old system. (including custom esbuild plugin support)
* **Improved configuration**: We've migrated to using [c12](https://github.com/unjs/c12) to power our configuration system.
* **Improved error handling**: We now do a much better job of reporting any errors that happen during the indexing process, by loading your trigger task files dynamically.
* **Improved cold start times**: Previously, we would load all your trigger task files at once, which could lead to long cold start times. Now we load your trigger task files dynamically, which should improve cold start times.
## Update packages
To use the new build system, you have to update to use our latest packages. Update the `@trigger.dev/sdk` package in your package.json:
```json
"@trigger.dev/sdk": "^3.0.0",
```
You will also need to update your usage of the `trigger.dev` CLI to use the latest release. If you run the CLI via `npx` you can update to the latest release like so:
```sh
# old way
npx trigger.dev@3.0.0-beta.56 dev
# using the latest release
npx trigger.dev@latest dev
```
If you've added the `trigger.dev` CLI to your `devDependencies`, then you should update the version to point to the latest release:
```json
"trigger.dev": "^3.0.0",
```
Once you've done that, make sure you re-install your dependencies using `npm i` or the equivalent with your preferred package manager.
If you deploy using GitHub actions, make sure you update the version there too.
## Update your `trigger.config.ts`
The new build system does not affect your trigger task files at all, so those can remain unchanged. However, you may need to make changes to your `trigger.config.ts` file.
### `defineConfig`
You should now import the `defineConfig` function from `@trigger.dev/sdk/v3` and export the config as the default export:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "",
});
```
### Deprecated: `dependenciesToBundle`
The new build system will bundle all dependencies by default, so `dependenciesToBundle` no longer makes any sense and can be removed.
#### Externals
Now that all dependencies are bundled, there are some situations where bundling a dependency doesn't work, and needs to be made external (e.g. when a dependency includes a native module). You can now specify these dependencies as build externals in the `defineConfig` function:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
export default defineConfig({
project: "",
build: {
external: ["native-module"],
},
});
```
`external` is an array of strings, where each string is the name of a dependency that should be made external. Glob expressions are also supported and use the [minimatch](https://github.com/isaacs/minimatch) matcher.
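For example (the package names are illustrative):
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  project: "",
  build: {
    // exact names and minimatch-style globs both work
    external: ["native-module", "@my-scope/*"],
  },
});
```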
### additionalFiles
The `additionalFiles` option has been moved to our new build extension system.
To use build extensions, you'll need to add the `@trigger.dev/build` package to your `devDependencies`:
```sh
npm add @trigger.dev/build@latest -D
```
Now you can import the `additionalFiles` build extension and use it in your `trigger.config.ts` file:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { additionalFiles } from "@trigger.dev/build/extensions/core";
export default defineConfig({
project: "",
build: {
extensions: [
additionalFiles({ files: ["wrangler/wrangler.toml", "./assets/**", "./fonts/**"] }),
],
},
});
```
### additionalPackages
The `additionalPackages` option has been moved to our new build extension system.
To use build extensions, you'll need to add the `@trigger.dev/build` package to your `devDependencies`:
```sh
npm add @trigger.dev/build@latest -D
```
Now you can import the `additionalPackages` build extension and use it in your `trigger.config.ts` file:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { additionalPackages } from "@trigger.dev/build/extensions/core";
export default defineConfig({
project: "",
build: {
extensions: [additionalPackages({ packages: ["wrangler"] })],
},
});
```
### resolveEnvVars
The `resolveEnvVars` export has been moved to our new build extension system.
To use build extensions, you'll need to add the `@trigger.dev/build` package to your `devDependencies`:
```sh
npm add @trigger.dev/build@latest -D
```
Now you can import the `syncEnvVars` build extension and use it in your `trigger.config.ts` file:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { syncEnvVars } from "@trigger.dev/build/extensions/core";
export default defineConfig({
project: "",
build: {
extensions: [
syncEnvVars(async (params) => {
return {
MY_ENV_VAR: "my-value",
};
}),
],
},
});
```
The `syncEnvVars` callback function works very similarly to the deprecated `resolveEnvVars` handler, but now instead of returning an object with a `variables` key that contains the environment variables, you return an object with the environment variables directly (see the example above).
One other difference is that `params.env` now only contains the environment variables that are set in Trigger.dev, not the environment variables from the process. If you want to access the environment variables from the process, you can use `process.env`.
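For example, here's a sketch that pulls secrets from a hypothetical `fetchSecretsFromVault` helper, using `params` for the Trigger.dev context and `process.env` for local values:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { syncEnvVars } from "@trigger.dev/build/extensions/core";

// Hypothetical helper that returns a Record<string, string> of secrets
declare function fetchSecretsFromVault(
  token: string | undefined,
  environment: string
): Promise<Record<string, string>>;

export default defineConfig({
  project: "",
  build: {
    extensions: [
      syncEnvVars(async (params) => {
        // params.env holds the variables already set in Trigger.dev;
        // use process.env for variables from the local process
        const vaultToken = process.env.VAULT_TOKEN; // hypothetical local variable
        return await fetchSecretsFromVault(vaultToken, params.environment);
      }),
    ],
  },
});
```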
See the [syncEnvVars](/deploy-environment-variables#sync-env-vars-from-another-service) documentation for more information.
### emitDecoratorMetadata
If you make use of decorators in your code, and have enabled the `emitDecoratorMetadata` tsconfig compiler option, you'll need to enable this in the new build system using the `emitDecoratorMetadata` build extension:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { emitDecoratorMetadata } from "@trigger.dev/build/extensions/typescript";
export default defineConfig({
project: "",
build: {
extensions: [emitDecoratorMetadata()],
},
});
```
### Prisma
We've created a build extension to support using Prisma in your Trigger.dev tasks. To use this extension, you'll need to add the `@trigger.dev/build` package to your `devDependencies`:
```sh
npm add @trigger.dev/build@latest -D
```
Then you can import the `prismaExtension` build extension and use it in your `trigger.config.ts` file, passing in the path to your Prisma schema file:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { prismaExtension } from "@trigger.dev/build/extensions/prisma";
export default defineConfig({
project: "",
build: {
extensions: [
prismaExtension({
schema: "prisma/schema.prisma",
}),
],
},
});
```
This will make sure that your Prisma client is generated during the build process when deploying to Trigger.dev.
This does not have any effect when running the `dev` command, so you'll need to make sure you
generate your client locally first.
If you want to also run migrations during the build process, you can pass in the `migrate` option:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { prismaExtension } from "@trigger.dev/build/extensions/prisma";
export default defineConfig({
project: "",
build: {
extensions: [
prismaExtension({
schema: "prisma/schema.prisma",
migrate: true,
directUrlEnvVarName: "DATABASE_URL_UNPOOLED", // optional - the name of the environment variable that contains the direct (unpooled) database URL, if you use one
}),
],
},
});
```
If you have multiple `generator` statements defined in your schema file, you can pass in the `clientGenerator` option to specify the `prisma-client-js` generator, which will prevent the other generators from running:
```prisma schema.prisma
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
directUrl = env("DATABASE_URL_UNPOOLED")
}
// We only want to generate the prisma-client-js generator
generator client {
provider = "prisma-client-js"
}
generator kysely {
provider = "prisma-kysely"
output = "../../src/kysely"
enumFileName = "enums.ts"
fileName = "types.ts"
}
```
```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { prismaExtension } from "@trigger.dev/build/extensions/prisma";
export default defineConfig({
project: "",
build: {
extensions: [
prismaExtension({
schema: "prisma/schema.prisma",
clientGenerator: "client",
}),
],
},
});
```
### audioWaveform
Previously, we installed [Audio Waveform](https://github.com/bbc/audiowaveform) in the build image. That's been moved to a build extension:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { audioWaveform } from "@trigger.dev/build/extensions/audioWaveform";
export default defineConfig({
project: "",
build: {
extensions: [audioWaveform()], // uses version 1.1.0 of audiowaveform by default
},
});
```
### esbuild plugins
You can now add esbuild plugins to customize the build process using the `esbuildPlugin` build extension. The example below shows how to automatically upload sourcemaps to Sentry using their esbuild plugin:
```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { esbuildPlugin } from "@trigger.dev/build/extensions";
import { sentryEsbuildPlugin } from "@sentry/esbuild-plugin";
export default defineConfig({
project: "",
build: {
extensions: [
esbuildPlugin(
sentryEsbuildPlugin({
org: process.env.SENTRY_ORG,
project: process.env.SENTRY_PROJECT,
authToken: process.env.SENTRY_AUTH_TOKEN,
}),
// optional - only runs during the deploy command, and adds the plugin to the end of the list of plugins
{ placement: "last", target: "deploy" }
),
],
},
});
```
## Changes to the `trigger.dev` CLI
### No more typechecking during deploy
We no longer run typechecking during the deploy command. This was causing issues with some projects, and we found that it wasn't necessary to run typechecking during the deploy command. If you want to run typechecking before deploying to Trigger.dev, you can run the `tsc` command before running the `deploy` command.
```sh
tsc && npx trigger.dev@latest deploy
```
Or if you are using GitHub actions, you can add an additional step to run the `tsc` command before deploying to Trigger.dev.
```yaml
- name: Install dependencies
run: npm install
- name: Typecheck
run: npx tsc
- name: π Deploy Trigger.dev
env:
TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }}
run: |
npx trigger.dev@latest deploy
```
### deploy `--dry-run`
You can now inspect the build output of your project without actually deploying it to Trigger.dev by using the `--dry-run` flag:
```sh
npx trigger.dev@latest deploy --dry-run
```
This will save the build output and print the path to the build output directory. If you face any issues with deploying, please include the build output in your issue report.
### `--env-file`
You can now pass the path to your local `.env` file using the `--env-file` flag during `dev` and `deploy` commands:
```sh
npx trigger.dev@latest dev --env-file ../../.env
npx trigger.dev@latest deploy --env-file ../../.env
```
The `.env` file works slightly differently in `dev` vs `deploy`:
* In `dev`, the `.env` file is loaded into the CLI's `process.env` and also into the environment variables of the Trigger.dev environment.
* In `deploy`, the `.env` file is loaded into the CLI's `process.env` but not into the environment variables of the Trigger.dev environment. If you want to sync the environment variables from the `.env` file to the Trigger.dev environment variables, you can use the `syncEnvVars` build extension.
### dev debugging in VS Code
Debugging your tasks code in `dev` is now supported via VS Code, without having to pass in any additional flags. Create a launch configuration in `.vscode/launch.json`:
```json launch.json
{
"version": "0.2.0",
"configurations": [
{
"name": "Trigger.dev: Dev",
"type": "node",
"request": "launch",
"cwd": "${workspaceFolder}",
"runtimeExecutable": "npx",
"runtimeArgs": ["trigger.dev@latest", "dev"],
"skipFiles": ["/**"],
"sourceMaps": true
}
]
}
```
Then select the `Trigger.dev: Dev` configuration in the debug panel, set breakpoints in your tasks code, and start debugging.
### TRIGGER\_ACCESS\_TOKEN in dev
You can now authenticate the `dev` command using the `TRIGGER_ACCESS_TOKEN` environment variable. Previously this was only supported in the `deploy` command.
```sh
TRIGGER_ACCESS_TOKEN=<your-access-token> npx trigger.dev@latest dev
```
### Better deploy support for self-hosters
You can now specify a custom registry and namespace when deploying via a self-hosted instance of Trigger.dev:
```sh
npx trigger.dev@latest deploy \
--self-hosted \
--load-image \
--registry docker.io \
--namespace mydockerhubusername
```
All you have to do is create a repository in Docker Hub that matches the project ref of your Trigger.dev project (e.g. `proj_rrkpdguyagvsoktglnod`).
Docker Hub will automatically create a repository the first time you push, which is public by
default. If you want to keep these images private, make sure you create the repository before you
first run the `deploy` command.
## Known issues
* Path aliases are not yet supported in your `trigger.config.ts` file. To work around this issue you'll need to rewrite path aliases to their relative paths. (See [this](https://github.com/unjs/jiti/issues/166) and [this](https://knip.dev/reference/known-issues#path-aliases-in-config-files) for more info.)
* `*.test.ts` and `*.spec.ts` files inside the trigger dirs will be bundled and could cause issues. You'll need to move these files outside of the trigger dirs to avoid this issue.
# How to upgrade the Trigger.dev packages
When we release fixes and new features, we recommend you upgrade your Trigger.dev packages.
## Update command
Run this command in your project:
```sh
npx trigger.dev@latest update
```
This will update all of the Trigger.dev packages in your project to the latest version.
## Running the CLI locally
When you run the CLI locally, use the latest version for the `dev` and `deploy` commands:
```sh
npx trigger.dev@latest dev
```
```sh
npx trigger.dev@latest deploy
```
These commands will also give you the option to upgrade if you are behind on versions.
## Deploying with GitHub Actions
You can deploy using [GitHub Actions](/github-actions). We recommend locking the CLI version in your workflow file, so make sure to upgrade it when you update your packages.
The deploy step will fail if version mismatches are detected. It's important that you update the
version using the steps below.
In your `.github/workflows` folder you can find your workflow yml files. You may have a prod
and a staging one.
In the steps you'll see a `run` command that runs the trigger.dev deploy CLI command. Make
sure to update this version to the latest one (e.g. `npx trigger.dev@3.0.0 deploy`).
## package.json dev dependency
Instead of using `npx`, `pnpm dlx` or `yarn dlx` you can add the Trigger.dev CLI as a dev dependency to your package.json file.
For example:
```json
{
"devDependencies": {
"trigger.dev": "3.0.0"
}
}
```
If you've done this, make sure to update the version to match the `@trigger.dev/sdk` package.
Once you have added the `trigger.dev` package to your `devDependencies`, you can use `npm exec trigger.dev`, `pnpm exec trigger.dev`, or `yarn exec trigger.dev` to run the CLI.
But we recommend adding your dev and deploy commands to the `scripts` section of your `package.json` file:
```json
{
"scripts": {
"dev:trigger": "trigger dev",
"deploy:trigger": "trigger deploy"
}
}
```
Then you can run `npm run dev:trigger` and `npm run deploy:trigger` to run the CLI.
# Vercel integration
Automatically deploy your associated tasks when you deploy to Vercel.
This feature is in review. Stay up to date with progress and vote on its priority on our [Roadmap](https://feedback.trigger.dev/roadmap).
# Versioning
We use atomic versioning to ensure that started tasks are not affected by changes to the task code.
A version is a bundle of tasks at a certain point in time.
## Version identifiers
Version identifiers look like this:
* `20240313.1` - March 13th, 2024, version 1
* `20240313.2` - March 13th, 2024, version 2
* `20240314.1` - March 14th, 2024, version 1
You can see there are two parts to the version identifier:
* The date (in reverse `YYYYMMDD` format)
* The version number
Version numbers are incremented each time a new version is created for that date and environment. So it's possible to have `20240313.1` in both the `dev` and `prod` environments.
## Version locking
When a task run starts it is locked to the latest version of the code (for that environment). Once locked it won't change versions, even if you deploy new versions. This is to ensure that a task run is not affected by changes to the code.
### Child tasks and version locking
Trigger and wait functions version lock child task runs to the parent task run's version. This ensures the results from child runs match what the parent task expects. If you don't wait, version locking doesn't apply.
| Trigger function | Parent task version | Child task version | isLocked |
| ----------------------- | ------------------- | ------------------ | -------- |
| `trigger()` | `20240313.2` | Latest | No |
| `batchTrigger()` | `20240313.2` | Latest | No |
| `triggerAndWait()` | `20240313.2` | `20240313.2` | Yes |
| `batchTriggerAndWait()` | `20240313.2` | `20240313.2` | Yes |
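To make the difference concrete, here's a minimal sketch (the `childTask` import is hypothetical) showing both cases from inside a parent task:
```ts
import { task } from "@trigger.dev/sdk/v3";
import { childTask } from "./child"; // hypothetical child task

export const parentTask = task({
  id: "parent-task",
  run: async (payload: { userId: string }) => {
    // Not version locked: the child run executes on the latest version
    await childTask.trigger({ userId: payload.userId });

    // Version locked: the child run executes on this parent run's version
    const result = await childTask.triggerAndWait({ userId: payload.userId });
    return result;
  },
});
```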
## Local development
When running the local server (using `npx trigger.dev dev`), every relevant code change automatically creates a new version of all tasks.
A task run will continue on the version it was locked to. We do this by spawning a new process for each task run, which ensures the run is not affected by changes to the code.
## Deployment
Every deployment creates a new version of all tasks for that environment.
## Retries and reattempts
When a task has an uncaught error it will [retry](/errors-retrying), assuming you have not set `maxAttempts` to 0. Retries are locked to the original version of the run.
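As a rough sketch (the task and `retry` values here are illustrative), configuring attempts looks like this; every attempt stays on the run's original version:
```ts
import { task } from "@trigger.dev/sdk/v3";

export const flakyTask = task({
  id: "flaky-task",
  retry: { maxAttempts: 3 }, // retries run on the version the run was locked to
  run: async (payload: { url: string }) => {
    // An uncaught error here triggers a retry (up to 3 attempts in total)
    const res = await fetch(payload.url);
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    return await res.json();
  },
});
```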
## Replays
A "replay" is a new run of a task that uses the same inputs but will use the latest version of the code. This is useful when you fix a bug and want to re-run a task with the same inputs. See [replaying](/replaying) for more information.
# Video walkthrough
Go from zero to a working task in your Next.js app in 10 minutes.
### In this video we cover the following topics:
* [0:00](https://youtu.be/YH_4c0K7fGM?si=J8svVzotZtyTXDap\&t=0) – [Install Trigger.dev](/quick-start) in an existing Next.js project
* [1:44](https://youtu.be/YH_4c0K7fGM?si=J8svVzotZtyTXDap\&t=104) – [Run and test](/run-tests) the "Hello, world!" example project
* [2:09](https://youtu.be/YH_4c0K7fGM?si=FMTP8ep_cDBCU0_x\&t=128) – Create and run an AI image generation task that uses [Fal.ai](https://fal.ai) – ([View the code](/guides/examples/fal-ai-image-to-cartoon))
* [6:25](https://youtu.be/YH_4c0K7fGM?si=pPc8iLI2Y9FGD3yo\&t=385) – Create and run a [Realtime](/realtime/overview) example using [React hooks](/frontend/react-hooks) – ([View the code](/guides/examples/fal-ai-realtime))
* [11:10](https://youtu.be/YH_4c0K7fGM?si=Mjd0EvvNsNlVouvY\&t=670) – [Deploy your task](/cli-deploy) to the Trigger.dev Cloud
# Wait: Overview
During your run you can wait for a period of time or for something to happen.
Waiting allows you to write complex tasks as a set of async code, without having to schedule another task or poll for changes.
In the Trigger.dev Cloud we automatically pause execution of tasks when they are waiting for
longer than a few seconds. You are not charged when execution is paused.
| Function | What it does |
| -------------------------------------- | ----------------------------------------------------------------------------------------- |
| [wait.for()](/wait-for) | Waits for a specific period of time, e.g. 1 day. |
| [wait.until()](/wait-until) | Waits until the provided `Date`. |
| [wait.forRequest()](/wait-for-request) | Waits until a matching HTTP request is received, and gives you the data to continue with. |
| [waitForEvent()](/wait-for-event)      | Waits until a matching event has been received.                                            |
# Wait for
Wait for a period of time, then continue execution.
Inside your tasks you can wait for a period of time before you want execution to continue.
```ts /trigger/long-task.ts
import { task, wait } from "@trigger.dev/sdk/v3";

export const veryLongTask = task({
id: "very-long-task",
run: async (payload) => {
await wait.for({ seconds: 5 });
await wait.for({ minutes: 10 });
await wait.for({ hours: 1 });
await wait.for({ days: 1 });
await wait.for({ weeks: 1 });
await wait.for({ months: 1 });
await wait.for({ years: 1 });
},
});
```
This allows you to write linear code without having to worry about the complexity of scheduling or managing cron jobs.
In the Trigger.dev Cloud we automatically pause execution of tasks when they are waiting for
longer than a few seconds. You are not charged when execution is paused.
# Wait for event
Wait until an event has been received, then continue execution.
This feature is in review. Stay up to date with progress and vote on its priority on our [Roadmap](https://feedback.trigger.dev/roadmap).
# Wait for request
Wait until a `Request` has been received at the provided URL, then continue execution.
This feature is in review. Stay up to date with progress and vote on its priority on our [Roadmap](https://feedback.trigger.dev/roadmap).
# Wait until
Wait until a date, then continue execution.
This example sends a reminder email to a user at the specified datetime.
```ts /trigger/reminder-email.ts
import { task, wait } from "@trigger.dev/sdk/v3";
import { Resend } from "resend";

const resend = new Resend(process.env.RESEND_API_KEY);

export const sendReminderEmail = task({
id: "send-reminder-email",
run: async (payload: { to: string; name: string; date: string }) => {
//wait until the date
await wait.until({ date: new Date(payload.date) });
//send the reminder email
const { data, error } = await resend.emails.send({
from: "hello@trigger.dev",
to: payload.to,
subject: "Don't forget…",
html: `
Hello ${payload.name},
...
`,
});
},
});
```
This allows you to write linear code without having to worry about the complexity of scheduling or managing cron jobs.
In the Trigger.dev Cloud we automatically pause execution of tasks when they are waiting for
longer than a few seconds. You are not charged when execution is paused.
## `throwIfInThePast`
You can optionally throw an error if the date is already in the past when the function is called:
```ts
await wait.until({ date: new Date(date), throwIfInThePast: true });
```
You can, of course, use try/catch if you want to do something special in this case, as in the sketch below.
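For example, here's a minimal sketch (the task is hypothetical) that catches the error and continues immediately:
```ts
import { logger, task, wait } from "@trigger.dev/sdk/v3";

export const remindOnce = task({
  id: "remind-once",
  run: async (payload: { date: string }) => {
    try {
      await wait.until({ date: new Date(payload.date), throwIfInThePast: true });
    } catch (error) {
      // The date was already in the past: log it and continue without waiting
      logger.warn("Reminder date already passed, continuing immediately");
    }
    // ... continue with the rest of the task
  },
});
```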
# Writing tasks: Introduction
Tasks are the core of Trigger.dev. They are long-running processes that are triggered by events.
Before digging deeper into the details of writing tasks, you should read the [fundamentals of tasks](/tasks/overview) to understand what tasks are and how they work.
## Writing tasks
| Topic | Description |
| :----------------------------------------- | :-------------------------------------------------------------------------------------------------- |
| [Logging](/logging) | View and send logs and traces from your tasks. |
| [Errors & retrying](/errors-retrying) | How to deal with errors and write reliable tasks. |
| [Wait](/wait) | Wait for periods of time or for external events to occur before continuing. |
| [Concurrency & Queues](/queue-concurrency) | Configure what you want to happen when there is more than one run at a time. |
| [Versioning](/versioning) | How versioning works. |
| [Machines](/machines)                      | Configure the CPU and RAM of the machine your task runs on.                                         |
| [Idempotency](/idempotency) | Protect against mutations happening twice. |
| [Replaying](/replaying) | You can replay a single task or many at once with a new version of your code. |
| [Notifications](/notifications) | Send realtime notifications from your task that you can subscribe to from your backend or frontend. |