Docker compose
You can self-host Trigger.dev on your own infrastructure using Docker.
The following instructions will use docker compose to spin up a Trigger.dev instance. Make sure to read the self-hosting overview first.
As self-hosted deployments tend to have unique requirements and configurations, we don’t provide specific advice for securing your deployment, scaling up, or improving reliability.
Should the burden ever get too much, we’d be happy to see you on Trigger.dev cloud where we deal with these concerns for you.
Warning: This guide alone is unlikely to result in a production-ready deployment. Security, scaling, and reliability concerns are not fully addressed here.
What’s new?
Goodbye v3, hello v4! We made quite a few changes:
- Much simpler setup. The provider and coordinator are now combined into a single supervisor. No more startup scripts, just `docker compose up`.
- Automatic container cleanup. The supervisor will automatically clean up containers that are no longer needed.
- Support for multiple worker machines. This is a big one, and we’re very excited about it! You can now scale your workers horizontally as needed.
- Resource limits enforced by default. This means that tasks will be limited to the total CPU and RAM of the machine preset, preventing noisy neighbours.
- No direct Docker socket access. The compose file now comes with Docker Socket Proxy by default. Yes, you want this.
- No host networking. All containers are now running with network isolation, using only the network access they need.
- No checkpoint support. This was only ever experimental when self-hosting and not recommended. It caused a bunch of issues. We decided to focus on the core features instead.
- Built-in container registry and object storage. You can now deploy and execute tasks without needing third party services for this.
- Improved CLI commands. You don’t need any additional flags to deploy anymore, and there’s a new command to easily `switch` between profiles.
- Whitelisting for GitHub OAuth. Any whitelisted email addresses will now also apply to sign-ins via GitHub, unlike v3 where they only applied to magic links.
Requirements
These are the minimum requirements for running the webapp and worker components. They can run on the same machine or on separate machines.
It’s fine to run everything on the same machine for testing. To be able to scale your workers, you will want to run them separately.
Prerequisites
To run the webapp and worker components, you will need:
- Docker 20.10.0+
- Docker Compose 2.20.0+
Webapp
This machine will host the webapp, postgres, redis, and related services.
- 2+ vCPU
- 4+ GB RAM
Worker
This machine will host the supervisor and all of the runs.
- 2+ vCPU
- 4+ GB RAM
How many workers and resources you need will depend on your workloads and concurrency requirements.
For example:
- 10 concurrency x `small-1x` (0.5 vCPU, 0.5 GB RAM) = 5 vCPU and 5 GB RAM
- 20 concurrency x `small-1x` (0.5 vCPU, 0.5 GB RAM) = 10 vCPU and 10 GB RAM
- 100 concurrency x `small-1x` (0.5 vCPU, 0.5 GB RAM) = 50 vCPU and 50 GB RAM
- 100 concurrency x `small-2x` (1 vCPU, 1 GB RAM) = 100 vCPU and 100 GB RAM
You may need to spin up multiple workers to handle peak concurrency. The good news is you don’t have to know the exact numbers upfront. You can start with a single worker and add more as needed.
Setup
Webapp
- Clone the repository
- Create a `.env` file
- Start the webapp
- Configure the webapp using the environment variables in your `.env` file, then apply the changes
- You should now be able to access the webapp at `http://localhost:8030`. When logging in, check the container logs for the magic link
- (optional) Initialize a new project

The sketch below walks through these steps.
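A minimal sketch of these steps. The repository URL is real, but the path to the webapp compose files and the `webapp` service name are assumptions about the repository layout — adjust to match your checkout:

```bash
# clone the repository (a shallow clone is enough)
git clone --depth=1 https://github.com/triggerdotdev/trigger.dev.git
cd trigger.dev/hosting/docker/webapp   # assumed path to the webapp compose files

# create a .env file from the provided example
cp .env.example .env

# start the webapp stack
docker compose up -d

# after editing .env, apply the changes by recreating the containers
docker compose up -d

# when logging in, watch the logs for the magic link (service name assumed)
docker compose logs -f webapp

# (optional) initialize a new project against your instance
npx trigger.dev@v4-beta init -a http://localhost:8030
```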
Worker
- Clone the repository
- Create a `.env` file
- Start the worker
- Configure the supervisor using the environment variables in your `.env` file, including the worker token
- Apply the changes
- Repeat as needed for additional workers

The sketch below walks through these steps.
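A minimal sketch, again assuming the worker compose files live under `hosting/docker/worker` in the repository (the path is an assumption):

```bash
git clone --depth=1 https://github.com/triggerdotdev/trigger.dev.git
cd trigger.dev/hosting/docker/worker   # assumed path to the worker compose files

# create a .env file from the provided example
cp .env.example .env

# start the supervisor
docker compose up -d

# after editing .env (including TRIGGER_WORKER_TOKEN), apply the changes
docker compose up -d
```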
Combined
If you want to run the webapp and worker on the same machine, just replace the `up` command with the following:
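One possible invocation, assuming the repository ships separate compose files for the two stacks — the file names here are hypothetical, so check your checkout for the actual ones:

```bash
# bring up both stacks together by combining their compose files
docker compose -f webapp/docker-compose.yml -f worker/docker-compose.yml up -d
```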
Worker token
When running the combined stack, worker bootstrap is handled automatically. When running the webapp and worker separately, you will need to manually set the worker token.
On the first run, the webapp will generate a worker token and store it in a shared volume. It will also print the token to the console. It should look something like this:
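For illustration only — the value below is a placeholder, and the `tr_wgt_` prefix is an assumption about the token format:

```bash
tr_wgt_1a2b3c4d5e6f...
```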
You can then uncomment and set the `TRIGGER_WORKER_TOKEN` environment variable in your `.env` file.
Don’t forget to restart the worker container for the changes to take effect:
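A sketch; recreating the containers (rather than a plain `docker compose restart`, which does not reload environment variables) ensures the new value is picked up:

```bash
# .env on the worker machine — paste the token printed by the webapp
TRIGGER_WORKER_TOKEN=tr_wgt_...

# then recreate the worker containers so the change takes effect
docker compose up -d --force-recreate
```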
Registry setup
The registry is used to store and pull deployment images. When testing the stack locally, the defaults should work out of the box.
When deploying to production, you will need to set the correct URL and generate secure credentials for the registry.
Default settings
The default settings for the registry are:
- Registry: `localhost:5000`
- Username: `registry-user`
- Password: `very-secure-indeed`
You should change these before deploying to production, especially the password. You can find more information about how to do this in the official registry docs.
Logging in
When self-hosting, builds run locally. You will have to log in to the registry on every machine that runs the `deploy` command. You should only have to do this once:
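With the default settings above, that would be:

```bash
docker login localhost:5000 -u registry-user
```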
This will prompt for the password. Afterwards, the deploy command should work as expected.
Object storage
This is mainly used for large payloads and outputs. There are a few simple steps to follow to get started.
Default settings
The default settings for the object storage are:
- Endpoint: `http://localhost:9000`
- Username: `admin`
- Password: `very-safe-password`
You should change these before deploying to production, especially the password.
Setup
- Login to the dashboard: `http://localhost:9001`
- Create a bucket named `packets`
- For production, you will want to set up a dedicated user and not use the root credentials above
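If you prefer the command line, here is a sketch using the MinIO client (`mc`) — assuming the bundled object store is MinIO, which the default ports and credentials above suggest but which this guide doesn’t state outright:

```bash
# point mc at the local object store using the default root credentials
mc alias set trigger http://localhost:9000 admin very-safe-password

# create the bucket used for large payloads and outputs
mc mb trigger/packets
```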
Authentication
The specific set of variables required will depend on your choice of email transport or alternative login methods like GitHub OAuth.
Magic link
By default, magic link auth is the only login option. If the `EMAIL_TRANSPORT` env var is not set, the magic links will be logged by the webapp container and not sent via email.
Resend
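A sketch of the relevant `.env` entries. Apart from `EMAIL_TRANSPORT`, the variable names here (`RESEND_API_KEY`, `FROM_EMAIL`, `REPLY_TO_EMAIL`) are assumptions:

```bash
EMAIL_TRANSPORT=resend
RESEND_API_KEY=<your-resend-api-key>
FROM_EMAIL=trigger@example.com
REPLY_TO_EMAIL=no-reply@example.com
```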
SMTP
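A sketch of the relevant `.env` entries. Apart from `EMAIL_TRANSPORT` and `SMTP_SECURE`, the variable names are assumptions:

```bash
EMAIL_TRANSPORT=smtp
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_SECURE=false   # STARTTLS on port 587; see the note below
SMTP_USER=<smtp-user>
SMTP_PASSWORD=<smtp-password>
FROM_EMAIL=trigger@example.com
```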
Note that setting `SMTP_SECURE=false` does not mean the email is sent insecurely. It simply means that the connection is secured using the modern STARTTLS protocol command instead of implicit TLS. You should only set this to `true` when the SMTP server host directs you to do so (generally when using port 465).
AWS SES
Credentials are to be supplied as with any other program using the AWS SDK.
In this scenario, you would likely either supply the additional environment variables `AWS_REGION`, `AWS_ACCESS_KEY_ID`, and `AWS_SECRET_ACCESS_KEY` or, when running on AWS, use credentials supplied by the EC2 IMDS.
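A sketch of the relevant `.env` entries. The `AWS_*` names are the standard SDK variables mentioned above; the `EMAIL_TRANSPORT` value for SES and `FROM_EMAIL` are assumptions:

```bash
EMAIL_TRANSPORT=aws-ses   # assumed value for the SES transport
FROM_EMAIL=trigger@example.com
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=<access-key-id>
AWS_SECRET_ACCESS_KEY=<secret-access-key>
```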
GitHub OAuth
To authenticate with GitHub, you will need to set up a GitHub OAuth app. It needs a callback URL of `https://<your_webapp_domain>/auth/github/callback` and you will have to set the following env vars:
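For example — the variable names below are assumptions based on common conventions, so check your `.env.example` for the actual ones:

```bash
AUTH_GITHUB_CLIENT_ID=<your-github-client-id>
AUTH_GITHUB_CLIENT_SECRET=<your-github-client-secret>
```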
Restricting access
All email addresses can sign up and log in this way. If you would like to restrict this, you can use the `WHITELISTED_EMAILS` env var. For example:
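A sketch, assuming the variable accepts a regex that allowed addresses must match:

```bash
WHITELISTED_EMAILS="user1@example\.com|user2@example\.com"
```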
This will apply to all auth methods including magic link and GitHub OAuth.
Version locking
There are several reasons to lock the version of your Docker images:
- Backwards compatibility. We try our best to maintain compatibility with older CLI versions, but it’s not always possible. If you don’t want to update your CLI, you can lock your Docker images to that specific version.
- Ensuring full feature support. Sometimes, new CLI releases will also require new or updated platform features. Running unlocked images can make any issues difficult to debug. Using a specific tag can help here as well.
By default, the images will point at the latest versioned release via the `v4-beta` tag. You can override this by specifying a different tag in your `.env` file. For example:
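A sketch; `TRIGGER_IMAGE_TAG` is a hypothetical variable name used here for illustration — check your compose file for the one it actually reads:

```bash
TRIGGER_IMAGE_TAG=v4.0.0
```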
Troubleshooting
- Deployment fails at the push step. The machine running `deploy` needs registry access. See the registry setup section for more details.
- Magic links don’t arrive. The webapp container needs to be able to send emails. You probably need to set up an email transport. See the authentication section for more details. You should check the logs of the webapp container to see the magic link:
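Assuming the service is named `webapp` in your compose file:

```bash
docker compose logs -f webapp
```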
CLI usage
This section highlights some of the CLI commands and options that are useful when self-hosting. Please check the CLI reference for more in-depth documentation.
Login
To avoid being redirected to Trigger.dev Cloud when using the CLI, you need to specify the URL of your self-hosted instance with the `--api-url` or `-a` flag. For example:
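A sketch, using a placeholder domain and the `v4-beta` CLI tag:

```bash
npx trigger.dev@v4-beta login -a https://trigger.example.com
```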
Once you’ve logged in, you shouldn’t have to specify the URL again with other commands.
Profiles
You can specify a profile when logging in. This allows you to easily use the CLI with multiple instances of Trigger.dev. For example:
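Here `my-instance` and the domain are placeholders:

```bash
npx trigger.dev@v4-beta login -a https://trigger.example.com --profile my-instance
```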
Logging in with a new profile will also make it the new default profile.
To use a specific profile, you can use the `--profile` flag with other commands:
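For instance, deploying with the placeholder profile from above:

```bash
npx trigger.dev@v4-beta deploy --profile my-instance
```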
To list all your profiles, use the `list-profiles` command:
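Using the `v4-beta` CLI tag:

```bash
npx trigger.dev@v4-beta list-profiles
```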
To remove a profile, use the `logout` command:
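Again with a placeholder profile name:

```bash
npx trigger.dev@v4-beta logout --profile my-instance
```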
To switch to a different profile, use the `switch` command:
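A sketch, assuming `switch` takes the profile name as an argument:

```bash
npx trigger.dev@v4-beta switch my-instance
```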
Whoami
It can be useful to check you are logged into the correct instance. Running this will also show the API URL:
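Using the `v4-beta` CLI tag:

```bash
npx trigger.dev@v4-beta whoami
```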
CI / GitHub Actions
When running the CLI in a CI environment, your login profiles won’t be available. Instead, you can use the `TRIGGER_API_URL` and `TRIGGER_ACCESS_TOKEN` environment variables to point at your self-hosted instance and authenticate.
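For example, as environment variables in your CI configuration — the domain and token value are placeholders:

```bash
TRIGGER_API_URL=https://trigger.example.com
TRIGGER_ACCESS_TOKEN=<your-access-token>
```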
For more detailed instructions, see the GitHub Actions guide.
Telemetry
By default, the Trigger.dev webapp sends telemetry data to our servers. This data is used to improve the product and is not shared with third parties. If you would like to opt out, you can set the `TRIGGER_TELEMETRY_DISABLED` environment variable on the webapp container. The value doesn’t matter, it just can’t be empty. For example:
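Any non-empty value works:

```bash
TRIGGER_TELEMETRY_DISABLED=1
```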