# API keys

How to authenticate with Trigger.dev so you can trigger tasks.

### Authentication and your secret keys

When you [trigger a task](/triggering) from your backend code, you need to set the `TRIGGER_SECRET_KEY` environment variable. Each environment has its own secret key. You can find the value on the API keys page in the Trigger.dev dashboard:

![How to find your secret key](https://mintlify.s3.us-west-1.amazonaws.com/trigger/images/api-keys.png)

### Automatically Configuring the SDK

To automatically configure the SDK with your secret key, set the `TRIGGER_SECRET_KEY` environment variable. The SDK will use this value when calling API methods (like `trigger`).

```bash .env
TRIGGER_SECRET_KEY="tr_dev_…"
```

If you are self-hosting and need to change the default URL, set the `TRIGGER_API_URL` environment variable:

```bash .env
TRIGGER_API_URL="https://trigger.example.com"
```

The default URL is `https://api.trigger.dev`.

### Manually Configuring the SDK

If you prefer to configure the SDK manually, call the `configure` method:

```ts
import { configure } from "@trigger.dev/sdk/v3";
import { myTask } from "./trigger/myTasks";

configure({
  secretKey: "tr_dev_1234", // WARNING: Never actually hardcode your secret key like this
  baseURL: "https://mytrigger.example.com", // Optional
});

async function triggerTask() {
  await myTask.trigger({ userId: "1234" }); // This will use the secret key and base URL you configured
}
```

# Changelog

Our [changelog](https://trigger.dev/changelog) is the best way to stay up to date with the latest changes to Trigger.dev.

# CLI deploy command

The `trigger.dev deploy` command can be used to deploy your tasks to our infrastructure.

Run the command like this:

```bash npm
npx trigger.dev@latest deploy
```

```bash pnpm
pnpm dlx trigger.dev@latest deploy
```

```bash yarn
yarn dlx trigger.dev@latest deploy
```

This will fail in CI if any version mismatches are detected.
Ensure everything runs locally first using the [dev](/cli-dev-commands) command and don't bypass the version checks!

It performs a few steps to deploy:

1. Optionally updates packages when running locally.
2. Compiles and bundles the code.
3. Deploys the code to the Trigger.dev instance.
4. Registers the tasks as a new version in the environment (prod by default).

You can also set up [GitHub Actions](/github-actions) to deploy your tasks automatically.

## Arguments

```
npx trigger.dev@latest deploy [path]
```

The path to the project. Defaults to the current directory.

## Options

- The name of the config file found at the project path. Defaults to `trigger.config.ts`.
- The project ref. Required if there is no config file.
- Load environment variables from a file. This will only hydrate the `process.env` of the CLI process, not the tasks.
- Skip checking for `@trigger.dev` package updates.
- The environment to deploy to. Defaults to `prod` but you can specify `staging`.
- Create a deployable build but don't deploy it. Prints out the build path so you can inspect it.
- The platform to build the deployment image for. Defaults to `linux/amd64`.
- Turn off syncing environment variables with the Trigger.dev instance.

### Common options

These options are available on most commands.

- The login profile to use. Defaults to "default".
- Override the default API URL. If not specified, it uses `https://api.trigger.dev`. This can also be set via the `TRIGGER_API_URL` environment variable.
- The CLI log level to use. Options are `debug`, `info`, `log`, `warn`, `error`, and `none`. This does not affect the log level of your trigger.dev tasks. Defaults to `log`.
- Opt-out of sending telemetry data. This can also be done via the `TRIGGER_TELEMETRY_DISABLED` environment variable. Just set it to anything other than an empty string.
- Shows the help information for the command.
- Displays the version number of the CLI.

### Self-hosting

These options are typically used when [self-hosting](/open-source-self-hosting) or for local development.
- Builds and loads the image using your local docker. Use the `--registry` option to specify the registry to push the image to when using `--self-hosted`, or just use `--push` to push to the default registry.
- Loads the built image into your local docker after building it.
- Specify the registry to push the image to when using `--self-hosted`. Will automatically enable `--push`.
- When using the `--self-hosted` flag, push the image to the registry.
- The namespace to use when pushing the image to the registry. For example, if pushing to Docker Hub, the namespace is your Docker Hub username.
- The networking mode for RUN instructions when using `--self-hosted`.

## Examples

### Push to Docker Hub (self-hosted)

An example of deploying to Docker Hub when using a self-hosted setup:

```bash
npx trigger.dev@latest deploy \
  --self-hosted \
  --load-image \
  --registry docker.io \
  --namespace mydockerhubusername
```

# CLI dev command

The `trigger.dev dev` command is used to run your tasks locally. This runs a server on your machine that can execute Trigger.dev tasks:

```bash npm
npx trigger.dev@latest dev
```

```bash pnpm
pnpm dlx trigger.dev@latest dev
```

```bash yarn
yarn dlx trigger.dev@latest dev
```

It will first perform an update check to prevent version mismatches, failed deploys, and other errors. You will always be prompted first.

You will see in the terminal that the server is running and listening for tasks. When you run a task, you will see it in the terminal along with a link to view it in the dashboard.

It is worth noting that each task runs in a separate Node process. This means that if you have a long-running task, it will not block other tasks from running.

## Options

- The name of the config file found at the project path. Defaults to `trigger.config.ts`.
- The project ref. Required if there is no config file.
- Load environment variables from a file. This will only hydrate the `process.env` of the CLI process, not the tasks.
- Skip checking for `@trigger.dev` package updates.

### Common options

These options are available on most commands.

- The login profile to use. Defaults to "default".
- Override the default API URL. If not specified, it uses `https://api.trigger.dev`. This can also be set via the `TRIGGER_API_URL` environment variable.
- The CLI log level to use. Options are `debug`, `info`, `log`, `warn`, `error`, and `none`. This does not affect the log level of your trigger.dev tasks. Defaults to `log`.
- Opt-out of sending telemetry data. This can also be done via the `TRIGGER_TELEMETRY_DISABLED` environment variable. Just set it to anything other than an empty string.
- Shows the help information for the command.
- Displays the version number of the CLI.

## Concurrently running the terminal

Install the `concurrently` package as a dev dependency, then you can run your framework's dev server and the Trigger.dev dev command at the same time:

```bash
concurrently --raw --kill-others npm:dev:remix npm:dev:trigger
```

Then add something like this in your package.json scripts:

```json
"scripts": {
  "dev": "concurrently --raw --kill-others npm:dev:*",
  "dev:trigger": "npx trigger.dev@latest dev",
  // Add your framework-specific dev script here, for example:
  // "dev:next": "next dev",
  // "dev:remix": "remix dev",
  //...
}
```

# CLI init command

Use these options when running the CLI `init` command.

Run the command like this:

```bash npm
npx trigger.dev@latest init
```

```bash pnpm
pnpm dlx trigger.dev@latest init
```

```bash yarn
yarn dlx trigger.dev@latest init
```

## Options

- By default, the init command assumes you are using TypeScript. Use this flag to initialize a project that uses JavaScript.
- The project ref to use when initializing the project.
- The version of the `@trigger.dev/sdk` package to install. Defaults to `latest`.
- Skip installing the `@trigger.dev/sdk` package.
- Override the existing config file if it exists.
- Additional arguments to pass to the package manager. Accepts CSV for multiple args.

### Common options

These options are available on most commands.

- The login profile to use. Defaults to "default".
- Override the default API URL. If not specified, it uses `https://api.trigger.dev`. This can also be set via the `TRIGGER_API_URL` environment variable.
- The CLI log level to use. Options are `debug`, `info`, `log`, `warn`, `error`, and `none`. This does not affect the log level of your trigger.dev tasks. Defaults to `log`.
- Opt-out of sending telemetry data.
This can also be done via the `TRIGGER_TELEMETRY_DISABLED` environment variable. Just set it to anything other than an empty string.
- Shows the help information for the command.
- Displays the version number of the CLI.

# Introduction

The Trigger.dev CLI has a number of options and commands to help you develop locally, self host, and deploy your tasks.

## Options

- Shows the help information for the command.
- Displays the version number of the CLI.

## Commands

| Command                                      | Description                                                        |
| :------------------------------------------- | :----------------------------------------------------------------- |
| [login](/cli-login-commands)                 | Login with Trigger.dev so you can perform authenticated actions.   |
| [init](/cli-init-commands)                   | Initialize your existing project for development with Trigger.dev. |
| [dev](/cli-dev-commands)                     | Run your Trigger.dev tasks locally.                                |
| [deploy](/cli-deploy-commands)               | Deploy your Trigger.dev v3 project to the cloud.                   |
| [whoami](/cli-whoami-commands)               | Display the current logged in user and project details.            |
| [logout](/cli-logout-commands)               | Logout of Trigger.dev.                                             |
| [list-profiles](/cli-list-profiles-commands) | List all of your CLI profiles.                                     |
| [update](/cli-update-commands)               | Updates all `@trigger.dev/*` packages to match the CLI version.    |

# CLI list-profiles command

Use these options when using the `list-profiles` CLI command.

Run the command like this:

```bash npm
npx trigger.dev@latest list-profiles
```

```bash pnpm
pnpm dlx trigger.dev@latest list-profiles
```

```bash yarn
yarn dlx trigger.dev@latest list-profiles
```

## Options

### Common options

These options are available on most commands.

- The CLI log level to use. Options are `debug`, `info`, `log`, `warn`, `error`, and `none`. This does not affect the log level of your trigger.dev tasks. Defaults to `log`.
- Opt-out of sending telemetry data. This can also be done via the `TRIGGER_TELEMETRY_DISABLED` environment variable. Just set it to anything other than an empty string.
- Shows the help information for the command.
- Displays the version number of the CLI.

# CLI login command

Use these options when logging in to Trigger.dev using the CLI.

Run the command like this:

```bash npm
npx trigger.dev@latest login
```

```bash pnpm
pnpm dlx trigger.dev@latest login
```

```bash yarn
yarn dlx trigger.dev@latest login
```

## Options

### Common options

These options are available on most commands.

- The login profile to use. Defaults to "default".
- Override the default API URL. If not specified, it uses `https://api.trigger.dev`. This can also be set via the `TRIGGER_API_URL` environment variable.
- The CLI log level to use. Options are `debug`, `info`, `log`, `warn`, `error`, and `none`. This does not affect the log level of your trigger.dev tasks. Defaults to `log`.
- Opt-out of sending telemetry data. This can also be done via the `TRIGGER_TELEMETRY_DISABLED` environment variable. Just set it to anything other than an empty string.
- Shows the help information for the command.
- Displays the version number of the CLI.

# CLI logout command

Use these options when using the `logout` CLI command.

Run the command like this:

```bash npm
npx trigger.dev@latest logout
```

```bash pnpm
pnpm dlx trigger.dev@latest logout
```

```bash yarn
yarn dlx trigger.dev@latest logout
```

## Options

### Common options

These options are available on most commands.

- The login profile to use. Defaults to "default".
- Override the default API URL. If not specified, it uses `https://api.trigger.dev`. This can also be set via the `TRIGGER_API_URL` environment variable.
- The CLI log level to use. Options are `debug`, `info`, `log`, `warn`, `error`, and `none`. This does not affect the log level of your trigger.dev tasks. Defaults to `log`.
- Opt-out of sending telemetry data. This can also be done via the `TRIGGER_TELEMETRY_DISABLED` environment variable. Just set it to anything other than an empty string.
- Shows the help information for the command.
- Displays the version number of the CLI.
# CLI update command

Use these options when using the `update` CLI command.

Run the command like this:

```bash npm
npx trigger.dev@latest update
```

```bash pnpm
pnpm dlx trigger.dev@latest update
```

```bash yarn
yarn dlx trigger.dev@latest update
```

## Options

### Common options

These options are available on most commands.

- The CLI log level to use. Options are `debug`, `info`, `log`, `warn`, `error`, and `none`. This does not affect the log level of your trigger.dev tasks. Defaults to `log`.
- Opt-out of sending telemetry data. This can also be done via the `TRIGGER_TELEMETRY_DISABLED` environment variable. Just set it to anything other than an empty string.
- Shows the help information for the command.
- Displays the version number of the CLI.

# CLI whoami command

Use these options to display the current logged in user and project details.

Run the command like this:

```bash npm
npx trigger.dev@latest whoami
```

```bash pnpm
pnpm dlx trigger.dev@latest whoami
```

```bash yarn
yarn dlx trigger.dev@latest whoami
```

## Options

### Common options

These options are available on most commands.

- The login profile to use. Defaults to "default".
- Override the default API URL. If not specified, it uses `https://api.trigger.dev`. This can also be set via the `TRIGGER_API_URL` environment variable.
- The CLI log level to use. Options are `debug`, `info`, `log`, `warn`, `error`, and `none`. This does not affect the log level of your trigger.dev tasks. Defaults to `log`.
- Opt-out of sending telemetry data. This can also be done via the `TRIGGER_TELEMETRY_DISABLED` environment variable. Just set it to anything other than an empty string.
- Shows the help information for the command.
- Displays the version number of the CLI.

# Discord Community

Please [join our community on Discord](https://trigger.dev/discord) to ask questions, share your projects, and get help from other developers.

# The trigger.config.ts file

This file is used to configure your project and how it's built.
The `trigger.config.ts` file is used to configure your Trigger.dev project. It is a TypeScript file at the root of your project that exports a default configuration object. Here's an example:

```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  // Your project ref (you can see it on the Project settings page in the dashboard)
  project: "",
  // The paths for your trigger folders
  dirs: ["./trigger"],
  retries: {
    // If you want to retry a task in dev mode (when using the CLI)
    enabledInDev: false,
    // The default retry settings. Used if you don't specify on a task.
    default: {
      maxAttempts: 3,
      minTimeoutInMs: 1000,
      maxTimeoutInMs: 10000,
      factor: 2,
      randomize: true,
    },
  },
});
```

The config file handles a lot of things, like:

* Specifying where your trigger tasks are located using the `dirs` option.
* Setting the default retry settings.
* Configuring OpenTelemetry instrumentations.
* Customizing the build process.
* Adding global task lifecycle functions.

The config file is bundled with your project, so code imported in the config file is also bundled, which can affect build times and cold start duration. One important qualification: anything defined in the `build` config is automatically stripped out of the config file, and imports used only inside the build config will be tree-shaken out.

## Dirs

You can specify the directories where your tasks are located using the `dirs` option:

```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  project: "",
  dirs: ["./trigger"],
});
```

If you omit the `dirs` option, we will automatically detect directories named `trigger` in your project, but we recommend specifying the directories explicitly. The `dirs` option is an array of strings, so you can specify multiple directories if you have tasks in multiple locations.
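For instance, a project that keeps tasks in two separate folders (the folder names below are illustrative) could list both:

```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  project: "",
  // Both folders are scanned for tasks; paths are relative to the project root
  dirs: ["./trigger", "./src/trigger"],
});
```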
We will search for TypeScript and JavaScript files in the specified directories and include them in the build process. We automatically exclude files that have `.test` or `.spec` in the name, but you can customize this by specifying glob patterns in the `ignorePatterns` option:

```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  project: "",
  dirs: ["./trigger"],
  ignorePatterns: ["**/*.my-test.ts"],
});
```

## Lifecycle functions

You can add lifecycle functions to get notified when any task starts, succeeds, or fails using `onStart`, `onSuccess` and `onFailure`. You can also add an `init` function, which runs before a task is executed:

```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  project: "",
  // Your other config settings...
  onSuccess: async (payload, output, { ctx }) => {
    console.log("Task succeeded", ctx.task.id);
  },
  onFailure: async (payload, error, { ctx }) => {
    console.log("Task failed", ctx.task.id);
  },
  onStart: async (payload, { ctx }) => {
    console.log("Task started", ctx.task.id);
  },
  init: async (payload, { ctx }) => {
    console.log("I run before any task is run");
  },
});
```

Read more about task lifecycle functions in the [tasks overview](/tasks/overview).

## Instrumentations

We use OpenTelemetry (OTEL) for our run logs. This means you get a lot of information about your tasks with no effort. But you probably want to add more information to your logs. For example, here are all the Prisma calls automatically logged:

![The run log](https://mintlify.s3.us-west-1.amazonaws.com/trigger/images/auto-instrumentation.png)

Here we add Prisma and OpenAI instrumentations to your `trigger.config.ts` file.

```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { PrismaInstrumentation } from "@prisma/instrumentation";
import { OpenAIInstrumentation } from "@traceloop/instrumentation-openai";

export default defineConfig({
  project: "",
  // Your other config settings...
  telemetry: {
    instrumentations: [new PrismaInstrumentation(), new OpenAIInstrumentation()],
  },
});
```

There is a [huge library of instrumentations](https://opentelemetry.io/ecosystem/registry/?language=js) you can easily add to your project like this. Some we recommend:

| Package                                 | Description                                                                                                               |
| --------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- |
| `@opentelemetry/instrumentation-undici` | Logs all fetch calls (inc. Undici fetch)                                                                                   |
| `@opentelemetry/instrumentation-http`   | Logs all HTTP calls                                                                                                        |
| `@prisma/instrumentation`               | Logs all Prisma calls; you need to [enable tracing](https://github.com/prisma/prisma/tree/main/packages/instrumentation)   |
| `@traceloop/instrumentation-openai`     | Logs all OpenAI calls                                                                                                      |

`@opentelemetry/instrumentation-fs`, which logs all file system calls, is currently not supported.

## Runtime

We currently only officially support the `node` runtime, but you can try our experimental `bun` runtime by setting the `runtime` option in your config file:

```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  project: "",
  // Your other config settings...
  runtime: "bun",
});
```

See our [Bun guide](/guides/frameworks/bun) for more information.

## Default machine

You can specify the default machine for all tasks in your project:

```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  project: "",
  // Your other config settings...
  defaultMachine: "large-1x",
});
```

See our [machines documentation](/machines) for more information.

## Log level

You can set the log level for your project:

```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  project: "",
  // Your other config settings...
  logLevel: "debug",
});
```

The `logLevel` only determines which logs are sent to the Trigger.dev instance when using the `logger` API. All `console`-based logs are always sent.

## Max duration

You can set the default `maxDuration` for all tasks in your project:

```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  project: "",
  // Your other config settings...
  maxDuration: 60, // 60 seconds
});
```

See our [maxDuration guide](/runs/max-duration) for more information.

## Build configuration

You can customize the build process using the `build` option:

```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  project: "",
  // Your other config settings...
  build: {
    // Don't bundle these packages
    external: ["header-generator"],
  },
});
```

The `trigger.config.ts` file is included in the bundle, but with the `build` configuration stripped out. This means any imports only used inside the `build` configuration are also removed from the final bundle.

### External

All code is bundled by default, but you can exclude some packages from the bundle using the `external` option:

```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  project: "",
  // Your other config settings...
  build: {
    external: ["header-generator"],
  },
});
```

When a package is excluded from the bundle, it will be added to a dynamically generated package.json file in the build directory. The version of the package will be the same as the version found in your `node_modules` directory.

Each entry in `external` should be a package name, not necessarily the import path. For example, if you want to exclude the `ai` package, but you are importing `ai/rsc`, you should just include `ai` in the `external` array:

```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  project: "",
  // Your other config settings...
  build: {
    external: ["ai"],
  },
});
```

Any packages that install or build a native binary should be added to `external`, as native binaries cannot be bundled. For example, `re2`, `sharp`, and `sqlite3` should be added to `external`.

### JSX

You can customize the `jsx` options that are passed to `esbuild` using the `jsx` option:

```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  project: "",
  // Your other config settings...
  build: {
    jsx: {
      // Use the Fragment component instead of React.Fragment
      fragment: "Fragment",
      // Use the h function instead of React.createElement
      factory: "h",
      // Turn off automatic runtime
      automatic: false,
    },
  },
});
```

By default we enable [esbuild's automatic JSX runtime](https://esbuild.github.io/content-types/#auto-import-for-jsx), which means you don't need to import `React` in your JSX files. You can disable this by setting `automatic` to `false`. See the [esbuild JSX documentation](https://esbuild.github.io/content-types/#jsx) for more information.

### Conditions

You can add custom [import conditions](https://esbuild.github.io/api/#conditions) to your build using the `conditions` option:

```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  project: "",
  // Your other config settings...
  build: {
    conditions: ["react-server"],
  },
});
```

These conditions affect how imports are resolved during the build process. For example, the `react-server` condition will resolve `ai/rsc` to the server version of the `ai/rsc` export.

Custom conditions will also be passed to the `node` runtime when running your tasks.

### Extensions

Build extensions allow you to hook into the build system and customize the build process or the resulting bundle and container image (in the case of deploying). You can use pre-built extensions by installing the `@trigger.dev/build` package into your `devDependencies`, or you can create your own.
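As a rough sketch, a custom extension is an object with a `name` and one or more build hooks. The hook name and context shape below are assumptions for illustration — check the types exported by `@trigger.dev/build` for the real interface:

```ts trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  project: "",
  // Your other config settings...
  build: {
    extensions: [
      {
        // Hypothetical inline extension - verify the hook names against
        // the BuildExtension type in @trigger.dev/build before relying on this
        name: "my-logging-extension",
        onBuildComplete: async (context) => {
          console.log("Build finished");
        },
      },
    ],
  },
});
```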
#### additionalFiles

Import the `additionalFiles` build extension and use it in your `trigger.config.ts` file:

```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { additionalFiles } from "@trigger.dev/build/extensions/core";

export default defineConfig({
  project: "",
  // Your other config settings...
  build: {
    extensions: [
      additionalFiles({ files: ["wrangler/wrangler.toml", "./assets/**", "./fonts/**"] }),
    ],
  },
});
```

This will copy the files specified in the `files` array to the build directory. The `files` array can contain globs. The output paths will match the path of the file, relative to the root of the project. The root of the project is the directory that contains the `trigger.config.ts` file.

#### `additionalPackages`

Import the `additionalPackages` build extension and use it in your `trigger.config.ts` file:

```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { additionalPackages } from "@trigger.dev/build/extensions/core";

export default defineConfig({
  project: "",
  // Your other config settings...
  build: {
    extensions: [additionalPackages({ packages: ["wrangler"] })],
  },
});
```

This allows you to include additional packages in the build that are not automatically included via imports. This is useful if you want to install a package that includes a CLI tool that you want to invoke in your tasks via `exec`.

We will try to automatically resolve the version of the package, but you can specify the version by using the `@` symbol:

```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { additionalPackages } from "@trigger.dev/build/extensions/core";

export default defineConfig({
  project: "",
  // Your other config settings...
  build: {
    extensions: [additionalPackages({ packages: ["wrangler@1.19.0"] })],
  },
});
```

#### `emitDecoratorMetadata`

If you need support for the `emitDecoratorMetadata` TypeScript compiler option, import the `emitDecoratorMetadata` build extension and use it in your `trigger.config.ts` file:

```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { emitDecoratorMetadata } from "@trigger.dev/build/extensions/typescript";

export default defineConfig({
  project: "",
  // Your other config settings...
  build: {
    extensions: [emitDecoratorMetadata()],
  },
});
```

This is usually required if you are using certain ORMs, like TypeORM, that require this option to be enabled. It's not enabled by default because there is a performance cost to enabling it.

emitDecoratorMetadata works by hooking into the esbuild bundle process and using the TypeScript compiler API to compile files where we detect the use of decorators. This means you must have `emitDecoratorMetadata` enabled in your `tsconfig.json` file, as well as `typescript` installed in your `devDependencies`.

#### Prisma

If you are using Prisma, you should use the prisma build extension. It:

* Automatically handles copying prisma files to the build directory.
* Generates the prisma client during the deploy process.
* Optionally migrates the database during the deploy process.
* Supports TypedSQL and multiple schema files.

You can use it for a simple Prisma setup like this:

```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { prismaExtension } from "@trigger.dev/build/extensions/prisma";

export default defineConfig({
  project: "",
  // Your other config settings...
  build: {
    extensions: [
      prismaExtension({
        version: "5.19.0", // optional, we'll automatically detect the version if not provided
        schema: "prisma/schema.prisma",
      }),
    ],
  },
});
```

This does not have any effect when running the `dev` command, only when running the `deploy` command.
If you want to also run migrations during the build process, you can pass in the `migrate` option: ```ts import { defineConfig } from "@trigger.dev/sdk/v3"; import { prismaExtension } from "@trigger.dev/build/extensions/prisma"; export default defineConfig({ project: "", // Your other config settings... build: { extensions: [ prismaExtension({ schema: "prisma/schema.prisma", migrate: true, directUrlEnvVarName: "DATABASE_URL_UNPOOLED", // optional - the name of the environment variable that contains the direct database URL, if you are using one }), ], }, }); ``` If you have multiple `generator` statements defined in your schema file, you can pass in the `clientGenerator` option to specify the `prisma-client-js` generator, which will prevent the other generators from being run. Some examples where you may need to do this include when using the `prisma-kysely` or `prisma-json-types-generator` generators. ```prisma schema.prisma datasource db { provider = "postgresql" url = env("DATABASE_URL") directUrl = env("DATABASE_URL_UNPOOLED") } // We only want to generate the prisma-client-js generator generator client { provider = "prisma-client-js" } generator kysely { provider = "prisma-kysely" output = "../../src/kysely" enumFileName = "enums.ts" fileName = "types.ts" } generator json { provider = "prisma-json-types-generator" } ``` ```ts trigger.config.ts import { defineConfig } from "@trigger.dev/sdk/v3"; import { prismaExtension } from "@trigger.dev/build/extensions/prisma"; export default defineConfig({ project: "", // Your other config settings... build: { extensions: [ prismaExtension({ schema: "prisma/schema.prisma", clientGenerator: "client", }), ], }, }); ``` If you are using [TypedSQL](https://www.prisma.io/typedsql), you'll need to enable it via the `typedSql` option: ```ts import { defineConfig } from "@trigger.dev/sdk/v3"; import { prismaExtension } from "@trigger.dev/build/extensions/prisma"; export default defineConfig({ project: "", // Your other config settings...
build: { extensions: [ prismaExtension({ schema: "prisma/schema.prisma", typedSql: true, }), ], }, }); ``` The `prismaExtension` will inject the `DATABASE_URL` environment variable into the build process. Learn more about setting environment variables for deploying in our [Environment Variables](/deploy-environment-variables) guide. These environment variables are only used during the build process and are not embedded in the final container image. #### syncEnvVars The `syncEnvVars` build extension replaces the deprecated `resolveEnvVars` export. Check out our [syncEnvVars documentation](/deploy-environment-variables#sync-env-vars-from-another-service) for more information. ```ts import { syncEnvVars } from "@trigger.dev/build/extensions/core"; export default defineConfig({ project: "", // Your other config settings... build: { extensions: [syncEnvVars()], }, }); ``` #### syncVercelEnvVars The `syncVercelEnvVars` build extension syncs environment variables from your Vercel project to Trigger.dev. You need to set the `VERCEL_ACCESS_TOKEN` and `VERCEL_PROJECT_ID` environment variables, or pass in the token and project ID as arguments to the `syncVercelEnvVars` build extension. If you're working with a team project, you'll also need to set `VERCEL_TEAM_ID`, which can be found in your team settings. You can find / generate the `VERCEL_ACCESS_TOKEN` in your Vercel [dashboard](https://vercel.com/account/settings/tokens). Make sure the scope of the token covers the project with the environment variables you want to sync. ```ts import { defineConfig } from "@trigger.dev/sdk/v3"; import { syncVercelEnvVars } from "@trigger.dev/build/extensions/core"; export default defineConfig({ project: "", // Your other config settings... build: { extensions: [syncVercelEnvVars()], }, }); ``` #### audioWaveform Previously, we installed [Audio Waveform](https://github.com/bbc/audiowaveform) in the build image. 
That's been moved to a build extension: ```ts import { defineConfig } from "@trigger.dev/sdk/v3"; import { audioWaveform } from "@trigger.dev/build/extensions/audioWaveform"; export default defineConfig({ project: "", // Your other config settings... build: { extensions: [audioWaveform()], // uses version 1.1.0 of audiowaveform by default }, }); ``` #### puppeteer **WEB SCRAPING:** When web scraping, you MUST use a proxy to comply with our terms of service. Direct scraping of third-party websites without the site owner's permission using Trigger.dev Cloud is prohibited and will result in account suspension. See [this example](/guides/examples/puppeteer#scrape-content-from-a-web-page) which uses a proxy. To use Puppeteer in your project, add these build settings to your `trigger.config.ts` file: ```ts trigger.config.ts import { defineConfig } from "@trigger.dev/sdk/v3"; import { puppeteer } from "@trigger.dev/build/extensions/puppeteer"; export default defineConfig({ project: "", // Your other config settings... build: { extensions: [puppeteer()], }, }); ``` And add the following environment variable in your Trigger.dev dashboard on the Environment Variables page: ```bash PUPPETEER_EXECUTABLE_PATH="/usr/bin/google-chrome-stable" ``` Follow [this example](/guides/examples/puppeteer) to get set up with Trigger.dev and Puppeteer in your project. #### ffmpeg You can add the `ffmpeg` build extension to your build process: ```ts import { defineConfig } from "@trigger.dev/sdk/v3"; import { ffmpeg } from "@trigger.dev/build/extensions/core"; export default defineConfig({ project: "", // Your other config settings... build: { extensions: [ffmpeg()], }, }); ``` By default, this will install the version of `ffmpeg` that is available in the Debian package manager.
If you need a specific version, you can pass in the version as an argument: ```ts import { defineConfig } from "@trigger.dev/sdk/v3"; import { ffmpeg } from "@trigger.dev/build/extensions/core"; export default defineConfig({ project: "", // Your other config settings... build: { extensions: [ffmpeg({ version: "6.0-4" })], }, }); ``` This extension will also add the `FFMPEG_PATH` and `FFPROBE_PATH` to your environment variables, making it easy to use popular ffmpeg libraries like `fluent-ffmpeg`. Follow [this example](/guides/examples/ffmpeg-video-processing) to get set up with Trigger.dev and FFmpeg in your project. #### esbuild plugins You can easily add existing or custom esbuild plugins to your build process using the `esbuildPlugin` extension: ```ts import { defineConfig } from "@trigger.dev/sdk/v3"; import { esbuildPlugin } from "@trigger.dev/build/extensions"; import { sentryEsbuildPlugin } from "@sentry/esbuild-plugin"; export default defineConfig({ project: "", // Your other config settings... build: { extensions: [ esbuildPlugin( sentryEsbuildPlugin({ org: process.env.SENTRY_ORG, project: process.env.SENTRY_PROJECT, authToken: process.env.SENTRY_AUTH_TOKEN, }), // optional - only runs during the deploy command, and adds the plugin to the end of the list of plugins { placement: "last", target: "deploy" } ), ], }, }); ``` #### aptGet You can install system packages into the deployed image using the `aptGet` extension: ```ts import { defineConfig } from "@trigger.dev/sdk/v3"; import { aptGet } from "@trigger.dev/build/extensions/core"; export default defineConfig({ project: "", // Your other config settings... build: { extensions: [aptGet({ packages: ["ffmpeg"] })], }, }); ``` If you want to install a specific version of a package, you can specify the version like this: ```ts import { defineConfig } from "@trigger.dev/sdk/v3"; import { aptGet } from "@trigger.dev/build/extensions/core"; export default defineConfig({ project: "", // Your other config settings...
build: { extensions: [aptGet({ packages: ["ffmpeg=6.0-4"] })], }, }); ``` #### Custom extensions You can create your own extensions to further customize the build process. An extension is an object with a `name` and zero or more lifecycle hooks (`onBuildStart` and `onBuildComplete`) that allow you to modify the `BuildContext` object that is passed to the build process, for example by adding layers. For example, this is how the `aptGet` extension is implemented: ```ts import { BuildExtension } from "@trigger.dev/core/v3/build"; export type AptGetOptions = { packages: string[]; }; export function aptGet(options: AptGetOptions): BuildExtension { return { name: "aptGet", onBuildComplete(context) { if (context.target === "dev") { return; } context.logger.debug("Adding apt-get layer", { pkgs: options.packages, }); context.addLayer({ id: "apt-get", image: { pkgs: options.packages, }, }); }, }; } ``` Instead of creating this function and worrying about types, you can define an extension inline in your `trigger.config.ts` file: ```ts trigger.config.ts import { defineConfig } from "@trigger.dev/sdk/v3"; export default defineConfig({ project: "", // Your other config settings...
build: { extensions: [ { name: "aptGet", onBuildComplete(context) { if (context.target === "dev") { return; } context.logger.debug("Adding apt-get layer", { pkgs: ["ffmpeg"], }); context.addLayer({ id: "apt-get", image: { pkgs: ["ffmpeg"], }, }); }, }, ], }, }); ``` We'll be expanding the documentation on how to create custom extensions in the future, but for now you are encouraged to look at the existing extensions in the `@trigger.dev/build` package for inspiration, which you can see in our repo [here](https://github.com/triggerdotdev/trigger.dev/tree/main/packages/build/src/extensions). # Build extensions Customize how your project is built and deployed to Trigger.dev with build extensions Build extensions allow you to hook into the build system and customize the build process or the resulting bundle and container image (in the case of deploying). See our [trigger.config.ts reference](/config/config-file#extensions) for more information on how to install and use our built-in extensions. Build extensions can do the following: * Add additional files to the build * Add dependencies to the list of externals * Add esbuild plugins * Add additional npm dependencies * Add additional system packages to the image build container * Add commands to run in the image build container * Add environment variables to the image build container * Sync environment variables to your Trigger.dev project ## Creating a build extension Build extensions are added to your `trigger.config.ts` file, with a required `name` and optional build hook functions.
Here's a simple example of a build extension that just logs a message when the build starts: ```ts import { defineConfig } from "@trigger.dev/sdk/v3"; export default defineConfig({ project: "my-project", build: { extensions: [ { name: "my-extension", onBuildStart: async (context) => { console.log("Build starting!"); }, }, ], }, }); ``` You can also extract that out into a function instead of defining it inline, in which case you will need to import the `BuildExtension` type from the `@trigger.dev/build` package: You'll need to add the `@trigger.dev/build` package to your `devDependencies` before the below code will work. Make sure its version matches that of the installed `@trigger.dev/sdk` package. ```ts import { defineConfig } from "@trigger.dev/sdk/v3"; import { BuildExtension } from "@trigger.dev/build"; export default defineConfig({ project: "my-project", build: { extensions: [myExtension()], }, }); function myExtension(): BuildExtension { return { name: "my-extension", onBuildStart: async (context) => { console.log("Build starting!"); }, }; } ``` ## Build hooks ### externalsForTarget This allows the extension to add additional dependencies to the list of externals for the build. This is useful for dependencies that are not included in the bundle, but are expected to be available at runtime. ```ts import { defineConfig } from "@trigger.dev/sdk/v3"; export default defineConfig({ project: "my-project", build: { extensions: [ { name: "my-extension", externalsForTarget: async (target) => { return ["my-dependency"]; }, }, ], }, }); ``` ### onBuildStart This hook runs before the build starts. It receives the `BuildContext` object as an argument. ```ts import { defineConfig } from "@trigger.dev/sdk/v3"; export default defineConfig({ project: "my-project", build: { extensions: [ { name: "my-extension", onBuildStart: async (context) => { console.log("Build starting!"); }, }, ], }, }); ``` If you want to add an esbuild plugin, you must do so in the `onBuildStart` hook.
Here's an example of adding a custom esbuild plugin: ```ts import { defineConfig } from "@trigger.dev/sdk/v3"; export default defineConfig({ project: "my-project", build: { extensions: [ { name: "my-extension", onBuildStart: async (context) => { context.registerPlugin({ name: "my-plugin", setup(build) { build.onLoad({ filter: /.*/, namespace: "file" }, async (args) => { return { contents: "console.log('Hello, world!')", loader: "js", }; }); }, }); }, }, ], }, }); ``` You can use the `BuildContext.target` property to determine if the build is for `dev` or `deploy`: ```ts import { defineConfig } from "@trigger.dev/sdk/v3"; export default defineConfig({ project: "my-project", build: { extensions: [ { name: "my-extension", onBuildStart: async (context) => { if (context.target === "dev") { console.log("Building for dev"); } else { console.log("Building for deploy"); } }, }, ], }, }); ``` ### onBuildComplete This hook runs after the build completes. It receives the `BuildContext` object and a `BuildManifest` object as arguments. This is where you can add one or more `BuildLayer`s to the context. ```ts import { defineConfig } from "@trigger.dev/sdk/v3"; export default defineConfig({ project: "my-project", build: { extensions: [ { name: "my-extension", onBuildComplete: async (context, manifest) => { context.addLayer({ id: "more-dependencies", dependencies: { "my-dependency": "^1.0.0" }, }); }, }, ], }, }); ``` See the [addLayer](#addlayer) documentation for more information on how to use `addLayer`. ## BuildTarget Can either be `dev` or `deploy`, matching the CLI command name that is being run. ```sh npx trigger.dev@latest dev # BuildTarget is "dev" npx trigger.dev@latest deploy # BuildTarget is "deploy" ``` ## BuildContext ### addLayer() The layer to add to the build context. See the [BuildLayer](#buildlayer) documentation for more information. ### registerPlugin() The esbuild plugin to register. An optional target to register the plugin for.
If not provided, the plugin will be registered for all targets. An optional placement for the plugin. If not provided, the plugin will be registered in place. This allows you to control the order of plugins. ### resolvePath() Resolves a path relative to the project's working directory. The path to resolve. ```ts const resolvedPath = context.resolvePath("my-other-dependency"); ``` ### properties The target of the build, either `dev` or `deploy`. The runtime of the project (either node or bun) The project ref The trigger directories to search for tasks The build configuration object The working directory of the project The root workspace directory of the project The path to the package.json file The path to the lockfile (package-lock.json, yarn.lock, or pnpm-lock.yaml) The path to the trigger.config.ts file The path to the tsconfig.json file A logger object that can be used to log messages to the console. ## BuildLayer A unique identifier for the layer. An array of commands to run in the image build container. ```ts commands: ["echo 'Hello, world!'"]; ``` These commands are run after packages have been installed and the code copied into the container in the "build" stage of the Dockerfile. This means you cannot install system packages in these commands because they won't be available in the final stage. To do that, please use the `pkgs` property of the `image` object. An array of system packages to install in the image build container. An array of instructions to add to the Dockerfile. Environment variables to add to the image build container, but only during the "build" stage of the Dockerfile. This is where you'd put environment variables that are needed when running any of the commands in the `commands` array. Environment variables that should sync to the Trigger.dev project, which will then be available in your tasks at runtime. Importantly, these are NOT added to the image build container, but are instead added to the Trigger.dev project and stored securely.
An object of dependencies to add to the build. The key is the package name and the value is the version. ```ts dependencies: { "my-dependency": "^1.0.0", }; ``` ### examples Add a command that will echo the value of an environment variable: ```ts context.addLayer({ id: "my-layer", commands: [`echo $MY_ENV_VAR`], build: { env: { MY_ENV_VAR: "Hello, world!", }, }, }); ``` ## Troubleshooting When creating a build extension, you may run into issues with the build process. One thing that can help is turning on `debug` logging when running either `dev` or `deploy`: ```sh npx trigger.dev@latest dev --log-level debug npx trigger.dev@latest deploy --log-level debug ``` Another helpful tool is the `--dry-run` flag on the `deploy` command, which will bundle your project and generate the Containerfile (e.g. the Dockerfile) without actually deploying it. This can help you see what the final image will look like and debug any issues with the build process. ```sh npx trigger.dev@latest deploy --dry-run ``` You should also take a look at our built-in extensions for inspiration on how to create your own. You can find them in [the source code here](https://github.com/triggerdotdev/trigger.dev/tree/main/packages/build/src/extensions). # Context Get the context of a task run. Context (`ctx`) is a way to get information about a run. The context object does not change whilst your code is executing. This means values like `ctx.run.durationMs` will be fixed at the moment the `run()` function is called. ```typescript Context example import { task } from "@trigger.dev/sdk/v3"; export const parentTask = task({ id: "parent-task", run: async (payload: { message: string }, { ctx }) => { if (ctx.environment.type === "DEVELOPMENT") { return; } }, }); ``` ## Context properties The exported function name of the task e.g. `myTask` if you defined it like this: `export const myTask = task(...)`. The ID of the task. The file path of the task. The ID of the execution attempt. The attempt number.
The start time of the attempt. The ID of the background worker. The ID of the background worker task. The current status of the attempt. The ID of the task run. The context of the task run. An array of [tags](/tags) associated with the task run. Whether this is a [test run](/run-tests). The creation time of the task run. The start time of the task run. An optional [idempotency key](/idempotency) for the task run. The [maximum number of attempts](/triggering#maxattempts) allowed for this task run. The duration of the task run in milliseconds when the `run()` function is called. For live values use the [usage SDK functions](/run-usage). The cost of the task run in cents when the `run()` function is called. For live values use the [usage SDK functions](/run-usage). The base cost of the task run in cents when the `run()` function is called. For live values use the [usage SDK functions](/run-usage). The [version](/versioning) of the task run. The [maximum allowed duration](/runs/max-duration) for the task run. The ID of the queue. The name of the queue. The ID of the environment. The slug of the environment. The type of the environment (PRODUCTION, STAGING, DEVELOPMENT, or PREVIEW). The ID of the organization. The slug of the organization. The name of the organization. The ID of the project. The reference of the project. The slug of the project. The name of the project. Optional information about the batch, if applicable. The ID of the batch. Optional information about the machine preset used for execution. The name of the machine preset. The CPU allocation for the machine. The memory allocation for the machine. The cost in cents per millisecond for this machine preset. # Environment Variables Any environment variables used in your tasks need to be added so the deployed code will run successfully. An environment variable in Node.js is accessed in your code using `process.env.MY_ENV_VAR`. We deploy your tasks and scale them up and down when they are triggered. 
So any environment variables you use in your tasks need to be accessible to us so your code will run successfully. ## In the dashboard ### Setting environment variables In the sidebar select the "Environment Variables" page, then press the "New environment variable" button. ![Environment variables page](https://mintlify.s3.us-west-1.amazonaws.com/trigger/images/environment-variables-page.jpg) You can add values for your local dev environment, staging and prod. ![Environment variables page](https://mintlify.s3.us-west-1.amazonaws.com/trigger/images/environment-variables-panel.jpg) Specifying Dev values is optional. They will be overridden by values in your .env file when running locally. ### Editing environment variables You can edit an environment variable's values. You cannot edit the key name; you must delete the variable and create a new one. ![Environment variables page](https://mintlify.s3.us-west-1.amazonaws.com/trigger/images/environment-variables-actions.png) ![Environment variables page](https://mintlify.s3.us-west-1.amazonaws.com/trigger/images/environment-variables-edit-popover.png) ### Deleting environment variables Environment variables are fetched and injected before a run begins. So if you delete one, runs that expect it to be set may fail. ![Environment variables page](https://mintlify.s3.us-west-1.amazonaws.com/trigger/images/environment-variables-actions.png) This will immediately delete the variable. ![Environment variables page](https://mintlify.s3.us-west-1.amazonaws.com/trigger/images/environment-variables-delete-popover.png) ## In your code You can use our SDK to get and manipulate environment variables. You can also easily sync environment variables from another service into Trigger.dev. ### Directly manipulating environment variables We have a complete set of SDK functions (and REST API) you can use to directly manipulate environment variables.
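As a sketch of what a deploy script using these functions might look like (the project ref, environment slug, and variable values below are placeholders, and the exact argument shapes are assumptions — check the linked reference pages for the precise signatures):

```typescript
import { configure, envvars } from "@trigger.dev/sdk/v3";

// Configure the SDK as shown earlier in this guide
configure({ secretKey: process.env.TRIGGER_SECRET_KEY });

export async function setupEnvVars() {
  // Create a variable in the "staging" environment of a (hypothetical) project
  await envvars.create("proj_yourprojectref", "staging", {
    name: "SLACK_API_KEY",
    value: "slack_123456",
  });

  // List the variables now present in that environment
  const variables = await envvars.list("proj_yourprojectref", "staging");
  console.log(variables.map((v) => v.name));
}
```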
| Function | Description | | -------------------------------------------------- | ----------------------------------------------------------- | | [envvars.list()](/management/envvars/list) | List all environment variables | | [envvars.upload()](/management/envvars/import) | Upload multiple env vars. You can override existing values. | | [envvars.create()](/management/envvars/create) | Create a new environment variable | | [envvars.retrieve()](/management/envvars/retrieve) | Retrieve an environment variable | | [envvars.update()](/management/envvars/update) | Update a single environment variable | | [envvars.del()](/management/envvars/delete) | Delete a single environment variable | ### Sync env vars from another service You could use the SDK functions above but it's much easier to use our `syncEnvVars` build extension in your `trigger.config` file. To use the `syncEnvVars` build extension, you should first install the `@trigger.dev/build` package into your devDependencies. In this example we're using env vars from [Infisical](https://infisical.com). ```ts trigger.config.ts import { defineConfig } from "@trigger.dev/sdk/v3"; import { syncEnvVars } from "@trigger.dev/build/extensions/core"; import { InfisicalSDK } from "@infisical/sdk"; export default defineConfig({ build: { extensions: [ syncEnvVars(async (ctx) => { const client = new InfisicalSDK(); await client.auth().universalAuth.login({ clientId: process.env.INFISICAL_CLIENT_ID!, clientSecret: process.env.INFISICAL_CLIENT_SECRET!, }); const { secrets } = await client.secrets().listSecrets({ environment: ctx.environment, projectId: process.env.INFISICAL_PROJECT_ID!, }); return secrets.map((secret) => ({ name: secret.secretKey, value: secret.secretValue, })); }), ], }, }); ``` #### Syncing environment variables from Vercel To sync environment variables from your Vercel projects to Trigger.dev, you can use our build extension. 
Check out our [syncing environment variables from Vercel guide](/guides/examples/vercel-sync-env-vars). #### Deploy When you run the [CLI deploy command](/cli-deploy) directly or using [GitHub Actions](/github-actions) it will sync the environment variables from [Infisical](https://infisical.com) to Trigger.dev. This means they'll appear on the Environment Variables page so you can confirm that it's worked. This means that you need to redeploy your Trigger.dev tasks if you change the environment variables in [Infisical](https://infisical.com). The `process.env.INFISICAL_CLIENT_ID`, `process.env.INFISICAL_CLIENT_SECRET` and `process.env.INFISICAL_PROJECT_ID` will need to be supplied to the `deploy` CLI command. You can do this via the `--env-file .env` flag or by setting them as environment variables in your terminal. #### Dev `syncEnvVars` does not have any effect when running the `dev` command locally. If you want to inject environment variables from another service into your local environment you can do so via a `.env` file or just supplying them as environment variables in your terminal. Most services will have a CLI tool that allows you to run a command with environment variables set: ```sh infisical run -- npx trigger.dev@latest dev ``` Any environment variables set in the CLI command will be available to your local Trigger.dev tasks. ### The syncEnvVars callback return type You can return env vars as an object with string keys and values, or an array of names + values. ```ts return { MY_ENV_VAR: "my value", MY_OTHER_ENV_VAR: "my other value", }; ``` or ```ts return [ { name: "MY_ENV_VAR", value: "my value", }, { name: "MY_OTHER_ENV_VAR", value: "my other value", }, ]; ``` This should mean that for most secret services you won't need to convert the data into a different format. ### Using Google credential JSON files Securely pass a Google credential JSON file to your Trigger.dev task using environment variables. 
In your terminal, run the following command and copy the resulting base64 string: ``` base64 -i path/to/your/service-account-file.json ``` Follow [these steps](/deploy-environment-variables) to set a new environment variable using the base64 string as the value. ``` GOOGLE_CREDENTIALS_BASE64="" ``` Add the following code to your Trigger.dev task: ```ts import { google } from "googleapis"; const credentials = JSON.parse( Buffer.from(process.env.GOOGLE_CREDENTIALS_BASE64, "base64").toString("utf8") ); const auth = new google.auth.GoogleAuth({ credentials, scopes: ["https://www.googleapis.com/auth/cloud-platform"], }); const client = await auth.getClient(); ``` You can now use the `client` object to make authenticated requests to Google APIs. # Errors & Retrying How to deal with errors and write reliable tasks. When an uncaught error is thrown inside your task, that task attempt will fail. You can configure retrying in two ways: 1. In your [trigger.config file](/config/config-file) you can set the default retrying behavior for all tasks. 2. On each task you can set the retrying behavior. By default, when you create your project using the CLI init command, retrying is disabled in the DEV environment. You can enable it in your [trigger.config file](/config/config-file). ## A simple example with OpenAI This task will retry 10 times with exponential backoff. * `openai.chat.completions.create()` can throw an error. * The result can be empty and we want to try again. So we manually throw an error.
```ts /trigger/openai.ts import { task } from "@trigger.dev/sdk/v3"; import OpenAI from "openai"; const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY, }); export const openaiTask = task({ id: "openai-task", //specifying retry options overrides the defaults defined in your trigger.config file retry: { maxAttempts: 10, factor: 1.8, minTimeoutInMs: 500, maxTimeoutInMs: 30_000, randomize: false, }, run: async (payload: { prompt: string }) => { //if this fails, it will throw an error and retry const chatCompletion = await openai.chat.completions.create({ messages: [{ role: "user", content: payload.prompt }], model: "gpt-3.5-turbo", }); if (chatCompletion.choices[0]?.message.content === undefined) { //sometimes OpenAI returns an empty response, let's retry by throwing an error throw new Error("OpenAI call failed"); } return chatCompletion.choices[0].message.content; }, }); ``` ## Combining tasks One way to gain reliability is to break your work into smaller tasks and [trigger](/triggering) them from each other. Each task can have its own retrying behavior: ```ts /trigger/multiple-tasks.ts import { task } from "@trigger.dev/sdk/v3"; export const myTask = task({ id: "my-task", retry: { maxAttempts: 10, }, run: async (payload: string) => { const result = await otherTask.triggerAndWait("some data"); //...do other stuff }, }); export const otherTask = task({ id: "other-task", retry: { maxAttempts: 5, }, run: async (payload: string) => { return { foo: "bar", }; }, }); ``` Another benefit of this approach is that you can view the logs and retry each task independently from the dashboard. ## Retrying smaller parts of a task Another complementary strategy is to perform retrying inside your task. We provide some useful functions that you can use to retry smaller parts of a task. Of course, you can also write your own logic or use other packages. ### retry.onThrow() You can retry a block of code that can throw an error, with the same retry settings as a task.
```ts /trigger/retry-on-throw.ts import { task, logger, retry } from "@trigger.dev/sdk/v3"; export const retryOnThrow = task({ id: "retry-on-throw", run: async (payload: any) => { //Will retry up to 3 times. If it fails 3 times it will throw. const result = await retry.onThrow( async ({ attempt }) => { //throw on purpose the first 2 times, obviously this is a contrived example if (attempt < 3) throw new Error("failed"); //... return { foo: "bar", }; }, { maxAttempts: 3, randomize: false } ); //this will log out after 3 attempts of retry.onThrow logger.info("Result", { result }); }, }); ``` If all of the attempts with `retry.onThrow` fail, an error will be thrown. You can catch this or let it cause a retry of the entire task. ### retry.fetch() You can use `fetch`, `axios`, or any other library in your code. But we do provide a convenient function to perform HTTP requests with conditional retrying based on the response: ```ts /trigger/retry-fetch.ts import { task, logger, retry } from "@trigger.dev/sdk/v3"; export const taskWithFetchRetries = task({ id: "task-with-fetch-retries", run: async (payload: any, { ctx }) => { //if the Response is a 429 (too many requests), it will retry using the data from the response. A lot of good APIs send these headers.
const headersResponse = await retry.fetch("http://my.host/test-headers", { retry: { byStatus: { "429": { strategy: "headers", limitHeader: "x-ratelimit-limit", remainingHeader: "x-ratelimit-remaining", resetHeader: "x-ratelimit-reset", resetFormat: "unix_timestamp_in_ms", }, }, }, }); const json = await headersResponse.json(); logger.info("Fetched headers response", { json }); //if the Response is a 500-599 (issue with the server you're calling), it will retry up to 10 times with exponential backoff const backoffResponse = await retry.fetch("http://my.host/test-backoff", { retry: { byStatus: { "500-599": { strategy: "backoff", maxAttempts: 10, factor: 2, minTimeoutInMs: 1_000, maxTimeoutInMs: 30_000, randomize: false, }, }, }, }); const json2 = await backoffResponse.json(); logger.info("Fetched backoff response", { json2 }); //You can additionally specify a timeout. In this case if the response takes longer than 1 second, it will retry up to 5 times with exponential backoff const timeoutResponse = await retry.fetch("https://httpbin.org/delay/2", { timeoutInMs: 1000, retry: { timeout: { maxAttempts: 5, factor: 1.8, minTimeoutInMs: 500, maxTimeoutInMs: 30_000, randomize: false, }, }, }); const json3 = await timeoutResponse.json(); logger.info("Fetched timeout response", { json3 }); return { result: "success", payload, json, json2, json3, }; }, }); ``` If all of the attempts with `retry.fetch` fail, an error will be thrown. You can catch this or let it cause a retry of the entire task. ## Advanced error handling and retrying We provide a `handleError` callback on the task and in your `trigger.config` file. This gets called when an uncaught error is thrown in your task. You can * Inspect the error, log it, and return a different error if you'd like. * Modify the retrying behavior based on the error, payload, context, etc. If you don't return anything from the function it will use the settings on the task (or inherited from the config). 
So you only need to use this to override things. ### OpenAI error handling example OpenAI calls can fail for a lot of reasons and the ideal retry behavior is different for each. In this complicated example: * We skip retrying if there's no Response status. * We skip retrying if you've run out of credits. * If there are no Response headers we let the normal retrying logic handle it (return undefined). * If we've run out of requests or tokens we retry at the time specified in the headers. ```ts tasks.ts import { task } from "@trigger.dev/sdk/v3"; import { OpenAI } from "openai"; import { calculateISO8601DurationOpenAIVariantResetAt, openai } from "./openai.js"; export const openaiTask = task({ id: "openai-task", retry: { maxAttempts: 1, }, run: async (payload: { prompt: string }) => { const chatCompletion = await openai.chat.completions.create({ messages: [{ role: "user", content: payload.prompt }], model: "gpt-3.5-turbo", }); return chatCompletion.choices[0].message.content; }, handleError: async (payload, error, { ctx, retryAt }) => { if (error instanceof OpenAI.APIError) { if (!error.status) { return { skipRetrying: true, }; } if (error.status === 429 && error.type === "insufficient_quota") { return { skipRetrying: true, }; } if (!error.headers) { //returning undefined means the normal retrying logic will be used return; } const remainingRequests = error.headers["x-ratelimit-remaining-requests"]; const requestResets = error.headers["x-ratelimit-reset-requests"]; if (typeof remainingRequests === "string" && Number(remainingRequests) === 0) { return { retryAt: calculateISO8601DurationOpenAIVariantResetAt(requestResets), }; } const remainingTokens = error.headers["x-ratelimit-remaining-tokens"]; const tokensResets = error.headers["x-ratelimit-reset-tokens"]; if (typeof remainingTokens === "string" && Number(remainingTokens) === 0) { return { retryAt: calculateISO8601DurationOpenAIVariantResetAt(tokensResets), }; } } }, }); ``` ```ts openai.ts import { OpenAI } from "openai"; export const openai =
new OpenAI({ apiKey: process.env.OPENAI_API_KEY }); export function calculateISO8601DurationOpenAIVariantResetAt( resets: string, now: Date = new Date() ): Date | undefined { // Check if the input is null or undefined if (!resets) return undefined; // Regular expression to match the duration string pattern const pattern = /^(?:(\d+)d)?(?:(\d+)h)?(?:(\d+)m)?(?:(\d+(?:\.\d+)?)s)?(?:(\d+)ms)?$/; const match = resets.match(pattern); // If the string doesn't match the expected format, return undefined if (!match) return undefined; // Extract days, hours, minutes, seconds, and milliseconds from the string const days = parseInt(match[1] ?? "0", 10) || 0; const hours = parseInt(match[2] ?? "0", 10) || 0; const minutes = parseInt(match[3] ?? "0", 10) || 0; const seconds = parseFloat(match[4] ?? "0") || 0; const milliseconds = parseInt(match[5] ?? "0", 10) || 0; // Calculate the future date based on the current date plus the extracted time const resetAt = new Date(now); resetAt.setDate(resetAt.getDate() + days); resetAt.setHours(resetAt.getHours() + hours); resetAt.setMinutes(resetAt.getMinutes() + minutes); resetAt.setSeconds(resetAt.getSeconds() + Math.floor(seconds)); resetAt.setMilliseconds( resetAt.getMilliseconds() + (seconds - Math.floor(seconds)) * 1000 + milliseconds ); return resetAt; } ``` ## Preventing retries ### Using `AbortTaskRunError` You can prevent retries by throwing an `AbortTaskRunError`. This will fail the task attempt and disable retrying.
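To make the behavior of `calculateISO8601DurationOpenAIVariantResetAt` above concrete, here is a standalone sketch that applies the same regex and arithmetic but returns the parsed offset in milliseconds (an illustrative helper, not part of the SDK):

```typescript
// Same duration grammar as above ("1d2h3m4.5s6ms"), reduced to a
// millisecond offset instead of an absolute Date.
function parseOpenAIResetMs(resets: string): number | undefined {
  const pattern = /^(?:(\d+)d)?(?:(\d+)h)?(?:(\d+)m)?(?:(\d+(?:\.\d+)?)s)?(?:(\d+)ms)?$/;
  const match = resets.match(pattern);
  if (!match) return undefined; // unexpected format
  const days = parseInt(match[1] ?? "0", 10) || 0;
  const hours = parseInt(match[2] ?? "0", 10) || 0;
  const minutes = parseInt(match[3] ?? "0", 10) || 0;
  const seconds = parseFloat(match[4] ?? "0") || 0;
  const milliseconds = parseInt(match[5] ?? "0", 10) || 0;
  return (((days * 24 + hours) * 60 + minutes) * 60 + seconds) * 1000 + milliseconds;
}

// "1m20s" is 80_000 ms, so retryAt lands 80 seconds in the future.
```

OpenAI's `x-ratelimit-reset-*` headers use this `6m0s`-style format rather than standard ISO 8601 durations, which is why a custom parser is needed at all.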
```ts /trigger/myTasks.ts import { task, AbortTaskRunError } from "@trigger.dev/sdk/v3"; export const openaiTask = task({ id: "openai-task", run: async (payload: { prompt: string }) => { //if this fails, it will throw an error and stop retrying const chatCompletion = await openai.chat.completions.create({ messages: [{ role: "user", content: payload.prompt }], model: "gpt-3.5-turbo", }); if (chatCompletion.choices[0]?.message.content === undefined) { // If OpenAI returns an empty response, abort retrying throw new AbortTaskRunError("OpenAI call failed"); } return chatCompletion.choices[0].message.content; }, }); ``` ### Using try/catch Sometimes you want to catch an error and not retry the task. You can use try/catch as you normally would. In this example we fall back to using Replicate if OpenAI fails. ```ts /trigger/myTasks.ts import { task } from "@trigger.dev/sdk/v3"; export const openaiTask = task({ id: "openai-task", run: async (payload: { prompt: string }) => { try { //if this fails, it will throw an error and retry const chatCompletion = await openai.chat.completions.create({ messages: [{ role: "user", content: payload.prompt }], model: "gpt-3.5-turbo", }); if (chatCompletion.choices[0]?.message.content === undefined) { //sometimes OpenAI returns an empty response, let's retry by throwing an error throw new Error("OpenAI call failed"); } return chatCompletion.choices[0].message.content; } catch (error) { //use Replicate if OpenAI fails const prediction = await replicate.run( "meta/llama-2-70b-chat:02e509c789964a7ea8736978a43525956ef40397be9033abf9fd2badfe68c9e3", { input: { prompt: payload.prompt, max_new_tokens: 250, }, } ); if (prediction.output === undefined) { //retry if Replicate fails throw new Error("Replicate call failed"); } return prediction.output; } }, }); ``` # Overview & Authentication Using the Trigger.dev SDK from your frontend application.
You can use our [React hooks](/frontend/react-hooks) in your frontend application to interact with the Trigger.dev API. This guide will show you how to generate Public Access Tokens that can be used to authenticate your requests. ## Authentication To create a Public Access Token, you can use the `auth.createPublicToken` function in your **backend** code: ```tsx const publicToken = await auth.createPublicToken(); // πŸ‘ˆ this public access token has no permissions, so is pretty useless! ``` ### Scopes By default a Public Access Token has no permissions. You must specify the scopes you need when creating a Public Access Token: ```ts const publicToken = await auth.createPublicToken({ scopes: { read: { runs: true, // ❌ this token can read all runs, possibly useful for debugging/testing }, }, }); ``` This will allow the token to read all runs, which is probably not what you want. You can specify only certain runs by passing an array of run IDs: ```ts const publicToken = await auth.createPublicToken({ scopes: { read: { runs: ["run_1234", "run_5678"], // βœ… this token can read only these runs }, }, }); ``` You can scope the token to only read certain tasks: ```ts const publicToken = await auth.createPublicToken({ scopes: { read: { tasks: ["my-task-1", "my-task-2"], // πŸ‘ˆ this token can read all runs of these tasks }, }, }); ``` Or tags: ```ts const publicToken = await auth.createPublicToken({ scopes: { read: { tags: ["my-tag-1", "my-tag-2"], // πŸ‘ˆ this token can read all runs with these tags }, }, }); ``` Or a specific batch of runs: ```ts const publicToken = await auth.createPublicToken({ scopes: { read: { batch: "batch_1234", // πŸ‘ˆ this token can read all runs in this batch }, }, }); ``` You can also combine scopes. 
For example, to read runs with specific tags and for specific tasks: ```ts const publicToken = await auth.createPublicToken({ scopes: { read: { tasks: ["my-task-1", "my-task-2"], tags: ["my-tag-1", "my-tag-2"], }, }, }); ``` ### Expiration By default, Public Access Tokens expire after 15 minutes. You can specify a different expiration time when creating a Public Access Token: ```ts const publicToken = await auth.createPublicToken({ expirationTime: "1hr", }); ``` * If `expirationTime` is a string, it will be treated as a time span * If `expirationTime` is a number, it will be treated as a Unix timestamp * If `expirationTime` is a `Date`, it will be treated as a date The format used for a time span is the same as the [jose package](https://github.com/panva/jose), which is a number followed by a unit. Valid units are: "sec", "secs", "second", "seconds", "s", "minute", "minutes", "min", "mins", "m", "hour", "hours", "hr", "hrs", "h", "day", "days", "d", "week", "weeks", "w", "year", "years", "yr", "yrs", and "y". It is not possible to specify months. 365.25 days is used as an alias for a year. If the string is suffixed with "ago", or prefixed with a "-", the resulting time span gets subtracted from the current unix timestamp. A "from now" suffix can also be used for readability when adding to the current unix timestamp. ## Auto-generated tokens When triggering a task from your backend, the `handle` received from the `trigger` function now includes a `publicAccessToken` field. This token can be used to authenticate requests in your frontend application: ```ts import { tasks } from "@trigger.dev/sdk/v3"; const handle = await tasks.trigger("my-task", { some: "data" }); console.log(handle.publicAccessToken); ``` By default, tokens returned from the `trigger` function expire after 15 minutes and have a read scope for that specific run.
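Putting the three `expirationTime` forms together, here is a sketch of how a string span, a Unix timestamp, or a `Date` could each be resolved to an absolute expiry. The unit table and the seconds-based timestamp handling are assumptions for illustration, not the SDK's internals:

```typescript
// Illustrative resolution of the three accepted expirationTime forms.
// Only a subset of the jose unit aliases is shown here.
const UNIT_MS: Record<string, number> = {
  s: 1_000, sec: 1_000, secs: 1_000, seconds: 1_000,
  m: 60_000, min: 60_000, mins: 60_000, minutes: 60_000,
  h: 3_600_000, hr: 3_600_000, hrs: 3_600_000, hours: 3_600_000,
  d: 86_400_000, day: 86_400_000, days: 86_400_000,
  w: 604_800_000, week: 604_800_000, weeks: 604_800_000,
  y: 31_557_600_000, yr: 31_557_600_000, years: 31_557_600_000, // 365.25 days
};

function resolveExpiration(
  expirationTime: string | number | Date,
  now: Date = new Date()
): Date {
  if (expirationTime instanceof Date) return expirationTime; // already absolute
  if (typeof expirationTime === "number") {
    return new Date(expirationTime * 1000); // Unix timestamp in seconds (assumed)
  }
  const match = expirationTime.match(/^(\d+(?:\.\d+)?)\s*([a-z]+)$/i);
  const unit = match ? UNIT_MS[match[2].toLowerCase()] : undefined;
  if (!match || unit === undefined) {
    throw new Error(`Unrecognized time span: ${expirationTime}`);
  }
  return new Date(now.getTime() + parseFloat(match[1]) * unit);
}

// resolveExpiration("1hr", now) is one hour after `now`.
```

The "ago"/"from now" suffixes described above would subtract or add relative to the current timestamp in the same way.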
You can customize the expiration of the auto-generated tokens by passing a `publicTokenOptions` object to the `trigger` function: ```ts const handle = await tasks.trigger( "my-task", { some: "data" }, { tags: ["my-tag"], }, { publicAccessToken: { expirationTime: "1hr", }, } ); ``` You will also get back a Public Access Token when using the `batchTrigger` function: ```ts import { tasks } from "@trigger.dev/sdk/v3"; const handle = await tasks.batchTrigger("my-task", [ { payload: { some: "data" } }, { payload: { some: "data" } }, { payload: { some: "data" } }, ]); console.log(handle.publicAccessToken); ``` ## Usage To learn how to use these Public Access Tokens, see our [React hooks](/frontend/react-hooks) guide. # Overview Using the Trigger.dev v3 API from your React application. Our React hooks package provides a set of hooks that make it easy to interact with the Trigger.dev API from your React application, using our [frontend API](/frontend/overview). You can use these hooks to fetch runs, subscribe to real-time updates, and trigger tasks from your frontend application. ## Installation Install the `@trigger.dev/react-hooks` package in your project: ```bash npm npm add @trigger.dev/react-hooks ``` ```bash pnpm pnpm add @trigger.dev/react-hooks ``` ```bash yarn yarn add @trigger.dev/react-hooks ``` ## Authentication All hooks accept an optional last argument `options` that accepts an `accessToken` param, which should be a valid Public Access Token. Learn more about [generating tokens in the frontend guide](/frontend/overview). ```tsx import { useRealtimeRun } from "@trigger.dev/react-hooks"; export function MyComponent({ runId, publicAccessToken, }: { runId: string; publicAccessToken: string; }) { const { run, error } = useRealtimeRun(runId, { accessToken: publicAccessToken, // This is required baseURL: "https://your-trigger-dev-instance.com", // optional, only needed if you are self-hosting Trigger.dev }); // ...
} ``` Alternatively, you can use our `TriggerAuthContext` provider: ```tsx import { TriggerAuthContext } from "@trigger.dev/react-hooks"; export function SetupTrigger({ publicAccessToken }: { publicAccessToken: string }) { return ( <TriggerAuthContext.Provider value={{ accessToken: publicAccessToken }}> {/* Your components that use our hooks */} </TriggerAuthContext.Provider> ); } ``` Now children components can use the hooks to interact with the Trigger.dev API. If you are self-hosting Trigger.dev, you can provide the `baseURL` to the `TriggerAuthContext` provider. ```tsx import { TriggerAuthContext } from "@trigger.dev/react-hooks"; export function SetupTrigger({ publicAccessToken }: { publicAccessToken: string }) { return ( <TriggerAuthContext.Provider value={{ accessToken: publicAccessToken, baseURL: "https://your-trigger-dev-instance.com" }}> {/* Your components that use our hooks */} </TriggerAuthContext.Provider> ); } ``` ### Next.js and client components If you are using Next.js with the App Router, you have to make sure the component that uses the `TriggerAuthContext` is a client component. So for example, the following code will not work: ```tsx app/page.tsx import { TriggerAuthContext } from "@trigger.dev/react-hooks"; export default function Page() { return ( <TriggerAuthContext.Provider value={{ accessToken: "…" }}> {/* ... */} </TriggerAuthContext.Provider> ); } ``` That's because `Page` is a server component and the `TriggerAuthContext.Provider` uses client-only React code. To fix this, wrap the `TriggerAuthContext.Provider` in a client component: ```tsx components/TriggerProvider.tsx "use client"; import { TriggerAuthContext } from "@trigger.dev/react-hooks"; export function TriggerProvider({ accessToken, children, }: { accessToken: string; children: React.ReactNode; }) { return ( <TriggerAuthContext.Provider value={{ accessToken }}> {children} </TriggerAuthContext.Provider> ); } ``` ### Passing the token to the frontend Techniques for passing the token to the frontend vary depending on your setup. Here are a few ways to do it for different setups: #### Next.js App Router If you are using Next.js with the App Router and you are triggering a task from a server action, you can use cookies to store and pass the token to the frontend.
```tsx actions/trigger.ts "use server"; import { tasks } from "@trigger.dev/sdk/v3"; import type { exampleTask } from "@/trigger/example"; import { redirect } from "next/navigation"; import { cookies } from "next/headers"; export async function startRun() { const handle = await tasks.trigger<typeof exampleTask>("example", { foo: "bar" }); // Set the auto-generated publicAccessToken in a cookie cookies().set("publicAccessToken", handle.publicAccessToken); // βœ… this token only has access to read this run redirect(`/runs/${handle.id}`); } ``` Then in the `/runs/[id].tsx` page, you can read the token from the cookie and pass it to the `TriggerProvider`. ```tsx pages/runs/[id].tsx import { TriggerProvider } from "@/components/TriggerProvider"; export default function RunPage({ params }: { params: { id: string } }) { const publicAccessToken = cookies().get("publicAccessToken"); return ( <TriggerProvider accessToken={publicAccessToken?.value}> {/* Your run page content */} </TriggerProvider> ); } ``` Instead of a cookie, you could also use a query parameter to pass the token to the frontend: ```tsx actions/trigger.ts import { tasks } from "@trigger.dev/sdk/v3"; import type { exampleTask } from "@/trigger/example"; import { redirect } from "next/navigation"; import { cookies } from "next/headers"; export async function startRun() { const handle = await tasks.trigger<typeof exampleTask>("example", { foo: "bar" }); redirect(`/runs/${handle.id}?publicAccessToken=${handle.publicAccessToken}`); } ``` And then in the `/runs/[id].tsx` page: ```tsx pages/runs/[id].tsx import { TriggerProvider } from "@/components/TriggerProvider"; export default function RunPage({ params, searchParams, }: { params: { id: string }; searchParams: { publicAccessToken: string }; }) { return ( <TriggerProvider accessToken={searchParams.publicAccessToken}> {/* Your run page content */} </TriggerProvider> ); } ``` Another alternative would be to use a server-side rendered page to fetch the token and pass it to the frontend: ```tsx pages/runs/[id].tsx import { TriggerProvider } from "@/components/TriggerProvider"; import { generatePublicAccessToken } from "@/trigger/auth"; export default async function RunPage({ params }: { params: { id: string } }) { // This will be executed on the server only const publicAccessToken = await generatePublicAccessToken(params.id); return ( <TriggerProvider accessToken={publicAccessToken}> {/* Your run page content */} </TriggerProvider> ); } ``` ```tsx trigger/auth.ts import { auth } from "@trigger.dev/sdk/v3"; export async function generatePublicAccessToken(runId: string) { return auth.createPublicToken({ scopes: { read: { runs: [runId], }, }, expirationTime: "1h", }); } ``` ## SWR vs Realtime hooks We offer two "styles" of hooks: SWR and Realtime. The SWR hooks use the [swr](https://swr.vercel.app/) library to fetch data once and cache it. The Realtime hooks use [Trigger.dev realtime](/realtime) to subscribe to updates in real-time. It can be a little confusing which one to use because [swr](https://swr.vercel.app/) can also be configured to poll for updates. But because of rate limits and the way the Trigger.dev API works, we recommend using the Realtime hooks for most use-cases. ## SWR Hooks ### useRun The `useRun` hook allows you to fetch a run by its ID. ```tsx "use client"; // This is needed for Next.js App Router or other RSC frameworks import { useRun } from "@trigger.dev/react-hooks"; export function MyComponent({ runId }: { runId: string }) { const { run, error, isLoading } = useRun(runId); if (isLoading) return
<div>Loading...</div>; if (error) return <div>Error: {error.message}</div>; return <div>Run: {run.id}</div>
; } ``` The `run` object returned is the same as the [run object](/management/runs/retrieve) returned by the Trigger.dev API. To correctly type the run's payload and output, you can provide the type of your task to the `useRun` hook: ```tsx import { useRun } from "@trigger.dev/react-hooks"; import type { myTask } from "@/trigger/myTask"; export function MyComponent({ runId }: { runId: string }) { const { run, error, isLoading } = useRun<typeof myTask>(runId, { refreshInterval: 0, // Disable polling }); if (isLoading) return
<div>Loading...</div>; if (error) return <div>Error: {error.message}</div>; // Now run.payload and run.output are correctly typed return <div>Run: {run.id}</div>
; } ``` ### Common options You can pass the following options to all SWR hooks: Revalidate the data when the window regains focus. Revalidate the data when the browser regains a network connection. Poll for updates at the specified interval (in milliseconds). Polling is not recommended for most use-cases. Use the Realtime hooks instead. ### Common return values An error object if an error occurred while fetching the data. A boolean indicating if the data is currently being fetched. A boolean indicating if the data is currently being revalidated. A boolean indicating if an error occurred while fetching the data. ## Realtime hooks See our [Realtime hooks documentation](/frontend/react-hooks/realtime) for more information. ## Trigger Hooks See our [Trigger hooks documentation](/frontend/react-hooks/triggering) for more information. # Realtime hooks Get live updates from the Trigger.dev API in your frontend application. These hooks allow you to subscribe to runs, batches, and streams using [Trigger.dev realtime](/realtime). Before reading this guide: * Read our [Realtime documentation](/realtime) to understand how the Trigger.dev realtime API works. * Read how to [set up and authenticate](/frontend/overview) using the `@trigger.dev/react-hooks` package. ## Hooks ### useRealtimeRun The `useRealtimeRun` hook allows you to subscribe to a run by its ID. ```tsx "use client"; // This is needed for Next.js App Router or other RSC frameworks import { useRealtimeRun } from "@trigger.dev/react-hooks"; export function MyComponent({ runId, publicAccessToken, }: { runId: string; publicAccessToken: string; }) { const { run, error } = useRealtimeRun(runId, { accessToken: publicAccessToken, }); if (error) return
<div>Error: {error.message}</div>; return <div>Run: {run.id}</div>
; } ``` To correctly type the run's payload and output, you can provide the type of your task to the `useRealtimeRun` hook: ```tsx import { useRealtimeRun } from "@trigger.dev/react-hooks"; import type { myTask } from "@/trigger/myTask"; export function MyComponent({ runId, publicAccessToken, }: { runId: string; publicAccessToken: string; }) { const { run, error } = useRealtimeRun<typeof myTask>(runId, { accessToken: publicAccessToken, }); if (error) return
<div>Error: {error.message}</div>; // Now run.payload and run.output are correctly typed return <div>Run: {run.id}</div>
; } ``` You can supply an `onComplete` callback to the `useRealtimeRun` hook to be called when the run is completed or errored. This is useful if you want to perform some action when the run is completed, like navigating to a different page or showing a notification. ```tsx import { useRealtimeRun } from "@trigger.dev/react-hooks"; export function MyComponent({ runId, publicAccessToken, }: { runId: string; publicAccessToken: string; }) { const { run, error } = useRealtimeRun(runId, { accessToken: publicAccessToken, onComplete: (run, error) => { console.log("Run completed", run); }, }); if (error) return
<div>Error: {error.message}</div>; return <div>Run: {run.id}</div>
; } ``` See our [Realtime documentation](/realtime) for more information about the type of the run object and more. ### useRealtimeRunsWithTag The `useRealtimeRunsWithTag` hook allows you to subscribe to multiple runs with a specific tag. ```tsx "use client"; // This is needed for Next.js App Router or other RSC frameworks import { useRealtimeRunsWithTag } from "@trigger.dev/react-hooks"; export function MyComponent({ tag }: { tag: string }) { const { runs, error } = useRealtimeRunsWithTag(tag); if (error) return
<div>Error: {error.message}</div>; return ( <div> {runs.map((run) => ( <div key={run.id}>Run: {run.id}</div> ))} </div>
); } ``` To correctly type the runs' payload and output, you can provide the type of your task to the `useRealtimeRunsWithTag` hook: ```tsx import { useRealtimeRunsWithTag } from "@trigger.dev/react-hooks"; import type { myTask } from "@/trigger/myTask"; export function MyComponent({ tag }: { tag: string }) { const { runs, error } = useRealtimeRunsWithTag<typeof myTask>(tag); if (error) return
<div>Error: {error.message}</div>; // Now runs[i].payload and runs[i].output are correctly typed return ( <div> {runs.map((run) => ( <div key={run.id}>Run: {run.id}</div> ))} </div>
); } ``` If `useRealtimeRunsWithTag` could return multiple different types of tasks, you can pass a union of all the task types to the hook: ```tsx import { useRealtimeRunsWithTag } from "@trigger.dev/react-hooks"; import type { myTask1, myTask2 } from "@/trigger/myTasks"; export function MyComponent({ tag }: { tag: string }) { const { runs, error } = useRealtimeRunsWithTag<typeof myTask1 | typeof myTask2>(tag); if (error) return
<div>Error: {error.message}</div>; // You can narrow down the type of the run based on the taskIdentifier for (const run of runs) { if (run.taskIdentifier === "my-task-1") { // run is correctly typed as myTask1 } else if (run.taskIdentifier === "my-task-2") { // run is correctly typed as myTask2 } } return ( <div> {runs.map((run) => ( <div key={run.id}>Run: {run.id}</div> ))} </div>
); } ``` ### useRealtimeBatch The `useRealtimeBatch` hook allows you to subscribe to a batch of runs by its batch ID. ```tsx "use client"; // This is needed for Next.js App Router or other RSC frameworks import { useRealtimeBatch } from "@trigger.dev/react-hooks"; export function MyComponent({ batchId }: { batchId: string }) { const { runs, error } = useRealtimeBatch(batchId); if (error) return
<div>Error: {error.message}</div>; return ( <div> {runs.map((run) => ( <div key={run.id}>Run: {run.id}</div> ))} </div>
); } ``` See our [Realtime documentation](/realtime) for more information. ### useRealtimeRunWithStreams The `useRealtimeRunWithStreams` hook allows you to subscribe to a run by its ID and also receive any streams that are emitted by the task. See our [Realtime documentation](/realtime#streams) for more information about emitting streams from a task. ```tsx "use client"; // This is needed for Next.js App Router or other RSC frameworks import { useRealtimeRunWithStreams } from "@trigger.dev/react-hooks"; export function MyComponent({ runId, publicAccessToken, }: { runId: string; publicAccessToken: string; }) { const { run, streams, error } = useRealtimeRunWithStreams(runId, { accessToken: publicAccessToken, }); if (error) return
<div>Error: {error.message}</div>; return ( <div> <div>Run: {run.id}</div> {Object.keys(streams).map((stream) => ( <div key={stream}>Stream: {stream}</div> ))} </div>
); } ``` You can provide the type of the streams to the `useRealtimeRunWithStreams` hook: ```tsx import { useRealtimeRunWithStreams } from "@trigger.dev/react-hooks"; import type { myTask } from "@/trigger/myTask"; type STREAMS = { openai: string; // this is the type of each "part" of the stream }; export function MyComponent({ runId, publicAccessToken, }: { runId: string; publicAccessToken: string; }) { const { run, streams, error } = useRealtimeRunWithStreams<typeof myTask, STREAMS>(runId, { accessToken: publicAccessToken, }); if (error) return
<div>Error: {error.message}</div>; const text = streams.openai?.map((part) => part).join(""); return ( <div> <div>Run: {run.id}</div> <div>{text}</div> </div>
); } ``` As you can see above, each stream is an array of the type you provided, keyed by the stream name. If instead of a pure text stream you have a stream of objects, you can provide the type of the object: ```tsx import { useRealtimeRunWithStreams } from "@trigger.dev/react-hooks"; import type { TextStreamPart } from "ai"; import type { myTask } from "@/trigger/myTask"; type STREAMS = { openai: TextStreamPart<{}> }; export function MyComponent({ runId, publicAccessToken, }: { runId: string; publicAccessToken: string; }) { const { run, streams, error } = useRealtimeRunWithStreams<typeof myTask, STREAMS>(runId, { accessToken: publicAccessToken, }); if (error) return
<div>Error: {error.message}</div>; const text = streams.openai ?.filter((stream) => stream.type === "text-delta") ?.map((part) => part.text) .join(""); return ( <div> <div>Run: {run.id}</div> <div>{text}</div> </div>
); } ``` ## Common options ### accessToken & baseURL You can pass the `accessToken` option to the Realtime hooks to authenticate the subscription. ```tsx import { useRealtimeRun } from "@trigger.dev/react-hooks"; export function MyComponent({ runId, publicAccessToken, }: { runId: string; publicAccessToken: string; }) { const { run, error } = useRealtimeRun(runId, { accessToken: publicAccessToken, baseURL: "https://my-self-hosted-trigger.com", // Optional if you are using a self-hosted Trigger.dev instance }); if (error) return
<div>Error: {error.message}</div>; return <div>Run: {run.id}</div>
; } ``` ### enabled You can pass the `enabled` option to the Realtime hooks to enable or disable the subscription. ```tsx import { useRealtimeRun } from "@trigger.dev/react-hooks"; export function MyComponent({ runId, publicAccessToken, enabled, }: { runId: string; publicAccessToken: string; enabled: boolean; }) { const { run, error } = useRealtimeRun(runId, { accessToken: publicAccessToken, enabled, }); if (error) return
<div>Error: {error.message}</div>; return <div>Run: {run.id}</div>
; } ``` This allows you to conditionally disable using the hook based on some state. ### id You can pass the `id` option to the Realtime hooks to change the ID of the subscription. ```tsx import { useRealtimeRun } from "@trigger.dev/react-hooks"; export function MyComponent({ id, runId, publicAccessToken, enabled, }: { id: string; runId: string; publicAccessToken: string; enabled: boolean; }) { const { run, error } = useRealtimeRun(runId, { accessToken: publicAccessToken, enabled, id, }); if (error) return
<div>Error: {error.message}</div>; return <div>Run: {run.id}</div>
; } ``` This allows you to change the ID of the subscription based on some state. Passing in a different ID will unsubscribe from the current subscription and subscribe to the new one (and remove any cached data). ### experimental\_throttleInMs The `*withStreams` variants of the Realtime hooks accept an `experimental_throttleInMs` option to throttle the updates from the server. This can be useful if you are getting too many updates and want to reduce the number of updates. ```tsx import { useRealtimeRunsWithStreams } from "@trigger.dev/react-hooks"; export function MyComponent({ tag, publicAccessToken, }: { tag: string; publicAccessToken: string; }) { const { runs, error } = useRealtimeRunsWithStreams(tag, { accessToken: publicAccessToken, experimental_throttleInMs: 1000, // Throttle updates to once per second }); if (error) return
<div>Error: {error.message}</div>; return ( <div> {runs.map((run) => ( <div key={run.id}>Run: {run.id}</div> ))} </div>
); } ``` # Trigger hooks Triggering tasks from your frontend application. We provide a set of hooks that can be used to trigger tasks from your frontend application. ## Demo We've created a [Demo application](https://github.com/triggerdotdev/realtime-llm-battle) that demonstrates how to use our React hooks to trigger tasks in a Next.js application. The application uses the `@trigger.dev/react-hooks` package to trigger a task and subscribe to the run in real-time. ## Installation Install the `@trigger.dev/react-hooks` package in your project: ```bash npm npm add @trigger.dev/react-hooks ``` ```bash pnpm pnpm add @trigger.dev/react-hooks ``` ```bash yarn yarn add @trigger.dev/react-hooks ``` ## Authentication To authenticate a trigger hook, you must provide a special one-time use "trigger" token. These tokens are very similar to [Public Access Tokens](/frontend/overview#authentication), but they can only be used once to trigger a task. You can generate a trigger token using the `auth.createTriggerPublicToken` function in your backend code: ```ts import { auth } from "@trigger.dev/sdk/v3"; // Somewhere in your backend code const triggerToken = await auth.createTriggerPublicToken("my-task"); ``` These tokens also expire, with the default expiration time being 15 minutes.
You can specify a custom expiration time by passing an `expirationTime` parameter: ```ts import { auth } from "@trigger.dev/sdk/v3"; // Somewhere in your backend code const triggerToken = await auth.createTriggerPublicToken("my-task", { expirationTime: "24hr", }); ``` You can also pass multiple tasks to the `createTriggerPublicToken` function to create a token that can trigger multiple tasks: ```ts import { auth } from "@trigger.dev/sdk/v3"; // Somewhere in your backend code const triggerToken = await auth.createTriggerPublicToken(["my-task-1", "my-task-2"]); ``` You can also pass the `multipleUse` parameter to create a token that can be used multiple times: ```ts import { auth } from "@trigger.dev/sdk/v3"; // Somewhere in your backend code const triggerToken = await auth.createTriggerPublicToken("my-task", { multipleUse: true, // ❌ Use this with caution! }); ``` After generating the trigger token in your backend, you must pass it to your frontend application. We have a guide on how to do this in the [React hooks overview](/frontend/react-hooks/overview#passing-the-token-to-the-frontend). ## Hooks ### useTaskTrigger The `useTaskTrigger` hook allows you to trigger a task from your frontend application. ```tsx "use client"; // This is needed for Next.js App Router or other RSC frameworks import { useTaskTrigger } from "@trigger.dev/react-hooks"; import type { myTask } from "@/trigger/myTask"; // πŸ‘† This is the type of your task export function MyComponent({ publicAccessToken }: { publicAccessToken: string }) { // pass the type of your task here πŸ‘‡ const { submit, handle, error, isLoading } = useTaskTrigger<typeof myTask>("my-task", { accessToken: publicAccessToken, // πŸ‘ˆ this is the "trigger" token }); if (error) { return
<div>Error: {error.message}</div>; } if (handle) { return <div>Run ID: {handle.id}</div>
; } return ( ); } ``` `useTaskTrigger` returns an object with the following properties: * `submit`: A function that triggers the task. It takes the payload of the task as an argument. * `handle`: The run handle object. This object contains the ID of the run that was triggered, along with a Public Access Token that can be used to access the run. * `isLoading`: A boolean that indicates whether the task is currently being triggered. * `error`: An error object that contains any errors that occurred while triggering the task. The `submit` function triggers the task with the specified payload. You can additionally pass an optional [options](/triggering#options) argument to the `submit` function: ```tsx submit({ foo: "bar" }, { tags: ["tag1", "tag2"] }); ``` #### Using the handle object You can use the `handle` object to initiate a subsequent [realtime hook](/frontend/react-hooks/realtime#userealtimerun) to subscribe to the run. ```tsx "use client"; // This is needed for Next.js App Router or other RSC frameworks import { useTaskTrigger, useRealtimeRun } from "@trigger.dev/react-hooks"; import type { myTask } from "@/trigger/myTask"; // πŸ‘† This is the type of your task export function MyComponent({ publicAccessToken }: { publicAccessToken: string }) { // pass the type of your task here πŸ‘‡ const { submit, handle, error, isLoading } = useTaskTrigger<typeof myTask>("my-task", { accessToken: publicAccessToken, // πŸ‘ˆ this is the "trigger" token }); // use the handle object to preserve type-safety πŸ‘‡ const { run, error: realtimeError } = useRealtimeRun(handle, { accessToken: handle?.publicAccessToken, enabled: !!handle, // Only subscribe to the run if the handle is available }); if (error) { return
<div>Error: {error.message}</div>; } if (handle) { return <div>Run ID: {handle.id}</div>; } if (realtimeError) { return <div>Error: {realtimeError.message}</div>; } if (run) { return <div>Run ID: {run.id}</div>
; } return ( ); } ``` We've also created some additional hooks that allow you to trigger tasks and subscribe to the run in one step: ### useRealtimeTaskTrigger The `useRealtimeTaskTrigger` hook allows you to trigger a task from your frontend application and then subscribe to the run using Realtime: ```tsx "use client"; // This is needed for Next.js App Router or other RSC frameworks import { useRealtimeTaskTrigger } from "@trigger.dev/react-hooks"; import type { myTask } from "@/trigger/myTask"; export function MyComponent({ publicAccessToken }: { publicAccessToken: string }) { const { submit, run, error, isLoading } = useRealtimeTaskTrigger<typeof myTask>("my-task", { accessToken: publicAccessToken, }); if (error) { return
Error: {error.message}
; } // This is the realtime run object, which will automatically update when the run changes if (run) { return
Run ID: {run.id}
; } return ( ); } ``` ### useRealtimeTaskTriggerWithStreams The `useRealtimeTaskTriggerWithStreams` hook allows you to trigger a task from your frontend application and then subscribe to the run in using Realtime, and also receive any streams that are emitted by the task. ```tsx "use client"; // This is needed for Next.js App Router or other RSC frameworks import { useRealtimeTaskTriggerWithStreams } from "@trigger.dev/react-hooks"; import type { myTask } from "@/trigger/myTask"; type STREAMS = { openai: string; // this is the type of each "part" of the stream }; export function MyComponent({ publicAccessToken }: { publicAccessToken: string }) { const { submit, run, streams, error, isLoading } = useRealtimeTaskTriggerWithStreams< typeof myTask, STREAMS >("my-task", { accessToken: publicAccessToken, }); if (error) { return
Error: {error.message}
; } if (streams && run) { const text = streams.openai?.map((part) => part).join(""); return (
Run ID: {run.id}
{text}
); } return ( ); } ``` # GitHub Actions You can easily deploy your tasks with GitHub actions. This simple GitHub action file will deploy your Trigger.dev tasks when new code is pushed to the `main` branch and the `trigger` directory has changes in it. The deploy step will fail if any version mismatches are detected. Please see the [version pinning](/github-actions#version-pinning) section for more details. ```yaml .github/workflows/release-trigger-prod.yml name: Deploy to Trigger.dev (prod) on: push: branches: - main jobs: deploy: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - name: Use Node.js 20.x uses: actions/setup-node@v4 with: node-version: "20.x" - name: Install dependencies run: npm install - name: πŸš€ Deploy Trigger.dev env: TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }} run: | npx trigger.dev@latest deploy ``` ```yaml .github/workflows/release-trigger-staging.yml name: Deploy to Trigger.dev (staging) # Requires manually calling the workflow from a branch / commit to deploy to staging on: workflow_dispatch: jobs: deploy: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - name: Use Node.js 20.x uses: actions/setup-node@v4 with: node-version: "20.x" - name: Install dependencies run: npm install - name: πŸš€ Deploy Trigger.dev env: TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }} run: | npx trigger.dev@latest deploy --env staging ``` If you already have a GitHub action file, you can just add the final step "πŸš€ Deploy Trigger.dev" to your existing file. ## Creating a Personal Access Token Go to your profile page and click on the ["Personal Access Tokens"](https://cloud.trigger.dev/account/tokens) tab. Click on 'Settings' -> 'Secrets and variables' -> 'Actions' -> 'New repository secret' Add the name `TRIGGER_ACCESS_TOKEN` and the value of your access token. 
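If you prefer the terminal, the GitHub CLI can store the secret as well. This is a sketch that assumes you have `gh` installed and authenticated for the repository; the token value is a placeholder:

```bash
# Store the Personal Access Token as a repository secret named TRIGGER_ACCESS_TOKEN
# (replace the placeholder value with your real token from the dashboard)
gh secret set TRIGGER_ACCESS_TOKEN --body "tr_pat_..."
```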
![Add TRIGGER\_ACCESS\_TOKEN in GitHub](https://mintlify.s3.us-west-1.amazonaws.com/trigger/images/github-access-token.png)

## Version pinning

The `@trigger.dev/*` package versions need to be in sync with the version of the `trigger.dev` CLI you deploy with, otherwise there will be errors and unpredictable behavior. Hence, the `deploy` command will automatically fail during CI on any version mismatches.

Tip: add the deploy command to your `package.json` file to keep versions managed in the same place. For example:

```json
{
  "scripts": {
    "deploy:trigger-prod": "npx trigger.dev@3.0.0 deploy",
    "deploy:trigger": "npx trigger.dev@3.0.0 deploy --env staging"
  }
}
```

Your workflow file will follow the version specified in the `package.json` script, like so:

```yaml .github/workflows/release-trigger.yml
- name: πŸš€ Deploy Trigger.dev
  env:
    TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }}
  run: |
    npm run deploy:trigger
```

You should use the version you run locally during dev and manual deploy. The current version is displayed in the banner, but you can also check it by appending `--version` to any command.

## Self-hosting

When self-hosting, you will have to take a few additional steps:

* Specify the `TRIGGER_API_URL` environment variable. You can add it to the GitHub secrets the same way as the access token. This should point at your webapp domain, for example: `https://trigger.example.com`
* Set up Docker, as you will need to build and push the image to your registry. On [Trigger.dev Cloud](https://cloud.trigger.dev) this is all done remotely.
* Add your registry credentials to the GitHub secrets.
* Use the `--self-hosted` and `--push` flags when deploying.
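Run from your own machine, the same self-hosted deploy can be sketched as follows. The instance URL and token are placeholders for your own values:

```bash
# Point the CLI at your self-hosted webapp instead of https://api.trigger.dev
# (https://trigger.example.com is a placeholder for your own domain)
export TRIGGER_API_URL="https://trigger.example.com"

# A Personal Access Token created in the dashboard (placeholder value)
export TRIGGER_ACCESS_TOKEN="tr_pat_..."

# Build the image locally, push it to your registry, then deploy
npx trigger.dev@latest deploy --self-hosted --push
```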
Other than that, your GitHub Actions file will look very similar to the one above:

```yaml .github/workflows/release-trigger-self-hosted.yml
name: Deploy to Trigger.dev (self-hosted)

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Use Node.js 20.x
        uses: actions/setup-node@v4
        with:
          node-version: "20.x"

      - name: Install dependencies
        run: npm install

      # docker setup - part 1
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      # docker setup - part 2
      - name: Login to DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: πŸš€ Deploy Trigger.dev
        env:
          TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }}
          # required when self-hosting
          TRIGGER_API_URL: ${{ secrets.TRIGGER_API_URL }}
        # deploy with additional flags
        run: |
          npx trigger.dev@latest deploy --self-hosted --push
```

# GitHub repo

Trigger.dev is [Open Source on GitHub](https://github.com/triggerdotdev/trigger.dev). You can contribute to the project by submitting issues, pull requests, or simply by using it and providing feedback.

You can also [self-host](/open-source-self-hosting) the project if you want to run it on your own infrastructure.

# Creating a project

This guide will show you how to create a new Trigger.dev project.

## Prerequisites

* [Create a Trigger.dev account](https://cloud.trigger.dev)
* Login to the Trigger.dev [dashboard](https://cloud.trigger.dev)

## Create a new Trigger.dev project

Click on "Projects" in the left-hand side menu, then click the "Create a new Project" button in the top right corner.
![Create a project page](https://mintlify.s3.us-west-1.amazonaws.com/trigger/images/creating-a-project/creating-a-project-1.png)

![Name your project](https://mintlify.s3.us-west-1.amazonaws.com/trigger/images/creating-a-project/creating-a-project-2.png)

Once you have created your project, you can find your Project ref to add to your `trigger.config` file, and rename your project, by clicking "Project settings" in the left-hand side menu.

![Useful project settings](https://mintlify.s3.us-west-1.amazonaws.com/trigger/images/creating-a-project/creating-a-project-3.png)

## Useful next steps

Setup Trigger.dev in 3 minutes

Learn what tasks are and how to write them

# Next.js Batch LLM Evaluator

This example Next.js project evaluates multiple LLM models using the Vercel AI SDK and streams updates to the frontend using Trigger.dev Realtime.

## Overview

This demo is a full stack example that uses the following:

* A [Next.js](https://nextjs.org/) app with [Prisma](https://www.prisma.io/) for the database.
* Trigger.dev [Realtime](https://trigger.dev/launchweek/0/realtime) to stream updates to the frontend.
* The Vercel [AI SDK](https://sdk.vercel.ai/docs/introduction) to work with multiple LLM models (OpenAI, Anthropic, xAI).
* The new [`batch.triggerByTaskAndWait`](https://trigger.dev/docs/triggering#batch-triggerbytaskandwait) method to distribute work across multiple tasks.

## GitHub repo

Click here to view the full code for this project in our examples repository on GitHub. You can fork it and use it as a starting point for your own project.

## Video