1 new feature, 2 improvements, 2 bug fixes, and 7 server changes.
Highlights
Input streams for bidirectional communication with running tasks
Tasks can now receive data while they're running via typed input streams. Define a stream schema once and use it anywhere: wait in your task, send from your backend or frontend.
```ts
// streams.ts — define once, share everywhere
import { streams } from "@trigger.dev/sdk";

export const approval = streams.input<{ approved: boolean; reviewer: string }>({
  id: "approval",
});
```
```ts
// task — suspend until a reviewer responds (frees compute while waiting)
const result = await approval.wait({ timeout: "7d" });
if (result.ok && result.output.approved) {
  await publish(draft);
}
```
```ts
// backend — send data to the running task
await approval.send(runId, { approved: true, reviewer: "[email protected]" });
```
Use .once() for a non-suspending wait, .on() to register a persistent listener (useful for cancel signals on AI tasks), or send from the frontend with the new useInputStreamSend React hook. (#3146)
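As a rough mental model of the three consumption modes, here is a self-contained sketch. This is not the SDK's implementation: `ToyStream` and its method names are hypothetical, and in the real SDK delivery crosses process boundaries from your backend to the running task.

```ts
// Toy model of the consumption modes (hypothetical sketch, not the SDK's code).
type Listener<T> = (value: T) => void;

class ToyStream<T> {
  private persistent: Listener<T>[] = []; // .on()-style: fires for every value
  private oneShot: Listener<T>[] = [];    // wait/.once()-style: fires one time

  // wait-style: a promise that resolves with the next value sent
  next(): Promise<T> {
    return new Promise((resolve) => this.oneShot.push(resolve));
  }

  // .on()-style: register a listener that stays attached
  on(listener: Listener<T>): void {
    this.persistent.push(listener);
  }

  // .send()-style: deliver one value to all current consumers
  send(value: T): void {
    const waiting = this.oneShot;
    this.oneShot = []; // one-shot consumers are detached after delivery
    for (const l of waiting) l(value);
    for (const l of this.persistent) l(value);
  }
}
```

A persistent listener is what makes cancel signals practical: the task keeps working while the listener sits attached, instead of blocking on a wait.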
Improvements
- Increase batch trigger processing concurrency limits: Free plan 1 → 5, Hobby plan stays at 10, Pro plan 10 → 50. (#3079)
- Add `PAYLOAD_TOO_LARGE` error code so oversized batch trigger items fail gracefully with a pre-failed run instead of aborting the entire batch. (#3137)
Bug fixes
- Fix slow batch queue processing caused by spurious cooloff on concurrency blocks and a race condition where retry attempt counts were not atomically updated during message re-queue. (#3079)
- Fix `batchTriggerAndWait` variants returning `unknown` for `run.taskIdentifier` instead of the correct value. (#3080)
Server changes
These changes are included in the v4.4.2 Docker image and are already live on Trigger.dev Cloud:
- Two-level tenant dispatch for batch queue processing, replacing the single master queue with a two-level index for O(1) tenant selection and fair scheduling regardless of queue count. (#3133)
- Server-side input streams support: API routes for sending data to running tasks, SSE reading, waitpoint creation, and a Redis cache for fast `.send()`-to-`.wait()` bridging. Includes dashboard span support and s2-lite support for self-hosted deployments. (#3146)
- Increase batch queue processing concurrency limits: Free plan 1 → 5, Hobby plan stays at 10, Pro plan 10 → 50. Cooloff on concurrency blocks is also disabled. (#3079)
- Move batch queue global rate limiter from the FairQueue claim phase to the BatchQueue worker queue consumer for accurate per-item rate limiting. Add a worker queue depth cap to prevent unbounded growth. (#3166)
- Fix a race condition in the waitpoint system where a run could be blocked by a completed waitpoint and never resumed due to a PostgreSQL MVCC issue. Most likely to occur when `wait.forToken()` and `wait.completeToken()` were called at the same moment. (#3075)
- Gracefully handle oversized NDJSON batch items instead of aborting the stream. Oversized items are emitted as pre-failed runs with `PAYLOAD_TOO_LARGE`, while the rest of the batch processes normally. Also fixes invalid JSON errors on lines following an oversized chunk. (#3137)
- Require the real user to be an admin during an impersonation session. Previously only the impersonation cookie was checked; now the admin flag is verified on every request and the session falls back to the real user if admin access has been revoked. (#3078)
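The two-level dispatch idea above can be illustrated with a small in-memory sketch. This is only an illustration of the data-structure shape, not the server's actual code: `TwoLevelQueue` is a hypothetical name, and the real implementation is backed by persistent queue infrastructure rather than process memory.

```ts
// Sketch of a two-level dispatch index (illustrative, not the server's code).
// Level 1: a ring of tenants that currently have pending work.
// Level 2: a per-tenant FIFO of messages.
class TwoLevelQueue<T> {
  private ring: string[] = [];        // tenants with pending work
  private ringIndex = 0;              // round-robin cursor for fairness
  private perTenant = new Map<string, T[]>();

  enqueue(tenant: string, msg: T): void {
    let q = this.perTenant.get(tenant);
    if (!q) {
      q = [];
      this.perTenant.set(tenant, q);
      this.ring.push(tenant);         // O(1) tenant registration
    }
    q.push(msg);
  }

  // O(1) fair dequeue: next tenant in the ring, one message per turn,
  // regardless of how deep any single tenant's queue is.
  dequeue(): { tenant: string; msg: T } | undefined {
    if (this.ring.length === 0) return undefined;
    this.ringIndex %= this.ring.length;
    const tenant = this.ring[this.ringIndex];
    const q = this.perTenant.get(tenant)!;
    const msg = q.shift()!;
    if (q.length === 0) {
      // drained tenant: swap-remove keeps ring maintenance O(1)
      this.ring[this.ringIndex] = this.ring[this.ring.length - 1];
      this.ring.pop();
      this.perTenant.delete(tenant);
    } else {
      this.ringIndex++;               // advance the cursor for the next turn
    }
    return { tenant, msg };
  }
}
```

The key property is that picking the next tenant never requires scanning all queues, so a tenant with three messages and a tenant with one alternate fairly instead of the larger queue starving the smaller one.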
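The tolerant NDJSON handling can be sketched as follows. This is a simplified sketch: `parseNdjsonBatch`, `MAX_ITEM_BYTES`, and the `INVALID_JSON` code are illustrative assumptions, while `PAYLOAD_TOO_LARGE` is the real error code from this release.

```ts
// Sketch of tolerant NDJSON batch parsing (illustrative assumptions throughout).
const MAX_ITEM_BYTES = 1024; // assumed per-item limit, for the sketch only

type ParsedItem =
  | { ok: true; item: unknown }
  | { ok: false; error: "PAYLOAD_TOO_LARGE" | "INVALID_JSON" };

function parseNdjsonBatch(body: string): ParsedItem[] {
  return body
    .split("\n")                        // split into whole lines first, so an
    .filter((line) => line.length > 0)  // oversized line cannot corrupt the next one
    .map((line): ParsedItem => {
      if (new TextEncoder().encode(line).length > MAX_ITEM_BYTES) {
        // emit a pre-failed item instead of aborting the whole batch
        return { ok: false, error: "PAYLOAD_TOO_LARGE" };
      }
      try {
        return { ok: true, item: JSON.parse(line) };
      } catch {
        return { ok: false, error: "INVALID_JSON" };
      }
    });
}
```

Splitting on line boundaries before size-checking is what fixes the follow-on failures: an oversized chunk is dropped cleanly, so the lines after it still parse as valid JSON.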
How to upgrade
Update the @trigger.dev/* packages to v4.4.2 using your package manager:
```bash
npx trigger.dev@latest update       # npm
pnpm dlx trigger.dev@latest update  # pnpm
yarn dlx trigger.dev@latest update  # yarn
bunx trigger.dev@latest update      # bun
```
Self-hosted users: update your Docker image to `ghcr.io/triggerdotdev/trigger.dev:v4.4.2`.

