Overview

Self-hosting architecture

The self-hosting guide comes in two parts. The first part is a simple setup where you run everything on one server. In the second part, the webapp and worker components are split across two separate machines.

You’re going to need at least one Debian (or derivative) machine with Docker and Docker Compose installed. We’ll also use Ngrok to expose the webapp to the internet.

Caveats

The v3 worker components don’t have ARM support yet.

This guide outlines a quick way to start self-hosting Trigger.dev. Scaling, security, and reliability concerns are not fully addressed here. It’s unlikely to result in a production-ready deployment on its own, but it’s a good starting point.

As self-hosted deployments tend to have unique requirements and configurations, we don’t provide specific advice for scaling up or improving security and reliability.

Should the burden ever get too much, we’d be happy to see you on Trigger.dev cloud where we deal with these concerns for you.

Requirements

  • 4 CPU
  • 8 GB RAM
  • Debian or derivative
  • Optional: A separate machine for the worker components

You will also need a way to expose the webapp to the internet. This can be done with a reverse proxy, or with a service like Ngrok. We will be using the latter in this guide.

Part 1: Single server

This is the simplest setup. You run everything on one server. It’s a good option if you have spare capacity on an existing machine, and have no need to independently scale worker capacity.

Server setup

Some very basic steps to get started:

  1. Install Docker
  2. Install Docker Compose
  3. Install Ngrok

On a Debian server, you can install everything you need with the following commands:

curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc | \
    sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null && \
    echo "deb https://ngrok-agent.s3.amazonaws.com buster main" | \
    sudo tee /etc/apt/sources.list.d/ngrok.list

sudo apt-get update
sudo apt-get install -y \
    docker.io \
    docker-compose \
    ngrok
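
A quick way to confirm the installs succeeded is to ask each tool for its version:

docker --version
docker-compose --version
ngrok version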

Trigger.dev setup

  1. Clone the Trigger.dev docker repository and check out the v3 branch
git clone https://github.com/triggerdotdev/docker
cd docker
git checkout v3
  2. Run the start script and follow the prompts
./start.sh # hint: you can append -d to run in detached mode

Manual setup

Alternatively, you can follow these manual steps after cloning the docker repo:

  1. Create the .env file
cp .env.example .env
  2. Generate the required secrets
echo MAGIC_LINK_SECRET=$(openssl rand -hex 16)
echo SESSION_SECRET=$(openssl rand -hex 16)
echo ENCRYPTION_KEY=$(openssl rand -hex 16)
echo PROVIDER_SECRET=$(openssl rand -hex 32)
echo COORDINATOR_SECRET=$(openssl rand -hex 32)
  3. Replace the default secrets in the .env file with the generated ones (see the sketch after this list for a way to write them in automatically)

  4. Run docker compose to start the services

. lib.sh # source the helper function
docker_compose -p=trigger up
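
If you prefer to write the generated values from step 2 straight into the .env file instead of copy-pasting them, a loop like the one below works. This is just a sketch: it assumes each variable already appears in .env on its own line, as it does in .env.example, and it should be run before starting the services.

# overwrite the placeholder secrets in .env with freshly generated values
for var in MAGIC_LINK_SECRET SESSION_SECRET ENCRYPTION_KEY; do
    sed -i "s/^$var=.*/$var=$(openssl rand -hex 16)/" .env
done
for var in PROVIDER_SECRET COORDINATOR_SECRET; do
    sed -i "s/^$var=.*/$var=$(openssl rand -hex 32)/" .env
done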

Tunnelling

You will need to expose the webapp to the internet. You can use Ngrok for this. If you already have a working reverse proxy setup and a domain, you can skip to the last step.

  1. Start Ngrok. You may get prompted to sign up - it’s free.
./tunnel.sh
  2. Copy the domain from the output, for example: 1234-42-42-42-42.ngrok-free.app

  3. Uncomment the TRIGGER_PROTOCOL and TRIGGER_DOMAIN lines in the .env file. Set them as shown below, using the domain you copied.

TRIGGER_PROTOCOL=https
TRIGGER_DOMAIN=1234-42-42-42-42.ngrok-free.app
  4. Quit the start script and launch it again, or run this:
./stop.sh && ./start.sh
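
Once the webapp is back up, you can confirm it’s reachable through the tunnel. Using the example domain from above (substitute your own):

curl -I https://1234-42-42-42-42.ngrok-free.app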

Registry setup

If you want to deploy v3 projects, you will need access to a Docker registry. The CLI deploy command will push the images, and then the worker machine can pull them when needed. We will use Docker Hub as an example.

  1. Sign up for a free account at Docker Hub

  2. Edit the .env file and add the registry details

DEPLOY_REGISTRY_HOST=docker.io
DEPLOY_REGISTRY_NAMESPACE=<your_dockerhub_username>
  3. Log in to Docker Hub both locally and on your server. For the split setup, this will be the worker machine. You may want to create an access token for this (a non-interactive login example follows this list).
docker login -u <your_dockerhub_username>
  4. Restart the services
./stop.sh && ./start.sh
  5. You can now deploy v3 projects using the CLI with these flags:
npx trigger.dev@beta deploy --self-hosted --push
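
If you created an access token for the login step, you can pipe it in instead of typing a password interactively. A small sketch, assuming the token is exported as DOCKERHUB_TOKEN (the variable name is just an example):

# non-interactive Docker Hub login using an access token
echo "$DOCKERHUB_TOKEN" | docker login -u <your_dockerhub_username> --password-stdin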

Part 2: Split services

With this setup, the webapp will run on a different machine than the worker components. This allows independent scaling of your workload capacity.

Webapp setup

All steps are the same as in Part 1, except for the following:

  1. Run the start script with the webapp argument
./start.sh webapp
  2. Tunnelling is now required. Please follow the tunnelling section above.

Worker setup

  1. Copy your .env file from the webapp to the worker machine
# an example using scp
scp -3 root@<webapp_machine>:docker/.env root@<worker_machine>:docker/.env
  2. Run the start script with the worker argument
./start.sh worker
  3. Tunnelling is not required for the worker components.

Checkpoint support

This requires an experimental Docker feature. Successfully checkpointing a task today does not mean you will be able to restore it tomorrow. Your data may be lost. You’ve been warned!

Checkpointing allows you to save the state of a running container to disk and restore it later. This can be useful for long-running tasks that need to be paused and resumed without losing state. Think fan-out and fan-in, or long waits in email campaigns.
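
For a sense of what this looks like at the Docker level once the setup below is complete, the experimental checkpoint commands are roughly as follows. This is purely illustrative and not part of the Trigger.dev setup; the container and checkpoint names are made up.

# start a throwaway container, freeze its state to disk, then resume it later
docker run -d --name sleeper busybox sleep 600
docker checkpoint create sleeper cp1
docker start --checkpoint cp1 sleeper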

The checkpoints will be pushed to the same registry as the deployed images. Please see the Registry setup section for more information.

Requirements

  • Debian, NOT a derivative like Ubuntu
  • Additional storage space for the checkpointed containers

Setup

Under the hood, this uses Checkpoint and Restore in Userspace, or CRIU for short. We’ll have to do a few things to get this working:

  1. Install CRIU
sudo apt-get update
sudo apt-get install criu
  2. Tweak the config so we can successfully checkpoint our workloads
sudo mkdir -p /etc/criu

cat << EOF | sudo tee /etc/criu/runc.conf
tcp-close
EOF
  3. Make sure everything works
sudo criu check
  4. Enable Docker experimental features by adding the following to /etc/docker/daemon.json
{
  "experimental": true
}
  5. Restart the Docker daemon
sudo systemctl restart docker
  6. Uncomment FORCE_CHECKPOINT_SIMULATION=0 in your .env file. Alternatively, run this:
echo "FORCE_CHECKPOINT_SIMULATION=0" >> .env
  7. Restart the services
# if you're running everything on the same machine
./stop.sh && ./start.sh

# if you're running the worker on a different machine
./stop.sh worker && ./start.sh worker
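
Once the services are back up, you can sanity-check that the Docker daemon picked up the experimental flag; the output should include a line like "Experimental: true":

docker info 2>/dev/null | grep -i experimental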

Telemetry

By default, the Trigger.dev webapp sends telemetry data to our servers. This data is used to improve the product and is not shared with third parties. If you would like to opt out of this, you can set the TRIGGER_TELEMETRY_DISABLED environment variable in your .env file. The value doesn’t matter; it just can’t be empty. For example:

TRIGGER_TELEMETRY_DISABLED=1