Customer Story

How Tierly builds AI-powered pricing intelligence using Trigger.dev

Gerasimos Plegas

Founder, Tierly

Gerasimos Plegas, Founder at Tierly, shares how they orchestrate complex multi-step AI workflows for competitive pricing analysis using Trigger.dev. Each analysis coordinates 10+ AI models, dozens of data extraction tasks, and human-in-the-loop review gates.

Building AI-powered pricing intelligence for SaaS

At Tierly, we help SaaS companies understand where they stand in their competitive landscape. Our platform automatically analyzes competitor pricing pages, extracts tier data, matches comparable tiers across products, scores them using AI, and generates comprehensive reports with actionable recommendations.

A typical analysis involves:

  1. Finding and scraping pricing pages from a company URL
  2. Discovering competitors using AI research tools
  3. Extracting pricing tier data from multiple sources in parallel
  4. Matching comparable tiers across products
  5. Running AI-powered scoring across 6 pricing attributes
  6. Generating detailed reports with recommendations

Each analysis coordinates 10+ AI models and dozens of pricing page analysis requests, and produces structured recommendations, all while providing real-time progress updates to users.

The challenges we faced before Trigger.dev

Our initial architecture tried to run these complex AI pipelines as synchronous API routes. The problems became apparent quickly:

Timeouts everywhere. A single pricing analysis can take 5-15 minutes with multiple AI model calls (GPT-4o, GPT-5, o3), pricing page analysis operations, and complex data processing. Other cloud providers enforce much shorter request timeouts (e.g. 60 seconds), which made this approach impossible.

Rate limiting nightmares. We needed to coordinate scraping requests across Firecrawl's 50 concurrent request limit while processing multiple competitor URLs in parallel. Managing this manually led to 429 errors and failed analyses.

No visibility into failures. When a step failed deep in the pipeline, we had no easy way to see what happened, retry from the right point, or understand the cascade of issues.

Human-in-the-loop complexity. Users need to review and optionally edit extracted pricing tiers before scoring. Implementing approval gates with traditional approaches meant complex webhook systems and state management.

What we needed was durable execution that could handle long-running AI workflows, coordinate parallel operations, provide observability, and support human review gates natively.

Why we chose Trigger.dev

I discovered Trigger.dev when researching solutions for long-running AI workflows. Several features made it the clear choice:

  • Durable execution with automatic retries and checkpointing
  • First-class TypeScript support that fits our Next.js codebase
  • Queues with concurrency control for rate limiting
  • Wait tokens for human-in-the-loop approval flows
  • Python extension for running Playwright alongside Node.js tasks
  • Real-time metadata updates for progress tracking
  • Full observability across all environments

The TypeScript-native approach meant we could define our entire workflow logic in the same language as our application, with full type safety across task payloads and outputs.

How Trigger.dev fits into our architecture

Trigger.dev powers our entire analysis pipeline. Here's how we've structured it:

1. Parallel orchestration with batch triggers

Our main workflow runs two independent chains in parallel:


const {
  runs: [pricingRun, competitorRun],
} = await batch.triggerByTaskAndWait([
  {
    task: pricingChainOrchestrator,
    payload: { company, analysisId, totalSteps },
  },
  {
    task: competitorChainOrchestrator,
    payload: { company, analysisId, count, selectionType },
  },
]);

The pricing chain finds the pricing URL and extracts product tiers. The competitor chain discovers competitors and extracts their tiers. Both run completely independently, cutting total analysis time nearly in half.
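
For context, each chain is itself an ordinary Trigger.dev task that awaits its own child tasks. Here's a minimal sketch of what a chain orchestrator could look like; the task ids, payload shapes, and child tasks are illustrative, not Tierly's actual code:


import { task } from "@trigger.dev/sdk/v3";

// Hypothetical child tasks -- the names, module path, and payloads are illustrative
import { findPricingUrlTask, productTiersTask } from "./scraping-tasks";

export const pricingChainOrchestrator = task({
  id: "pricing-chain-orchestrator",
  run: async (payload: { company: string; analysisId: string; totalSteps: number }) => {
    // Each child run is durable, so a retry resumes from the failed step
    const urlResult = await findPricingUrlTask.triggerAndWait({ company: payload.company });
    if (!urlResult.ok) throw new Error("Could not find a pricing page");

    const tiersResult = await productTiersTask.triggerAndWait({ url: urlResult.output.url });
    if (!tiersResult.ok) throw new Error("Tier extraction failed");

    return { productTiers: tiersResult.output };
  },
});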

2. Human-in-the-loop with wait tokens

Users can review extracted pricing data before analysis continues. We use Trigger.dev's wait tokens to pause execution until approval:


const productReviewToken = await wait.createToken({ timeout: "1h" });
await metadata.parent.set("productReviewTokenId", productReviewToken.id);

const productReview = await wait.forToken<{
  approved: boolean;
  editedTiers?: Tiers;
}>(productReviewToken);

const finalProductTiers = productReview.output.editedTiers || productTiers;

The frontend displays a review modal, and users can edit tier data directly. When they approve, the workflow continues with their modifications.
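
On the approval side, completing the token is what resumes the paused run. A rough sketch of the API route behind the review modal, assuming wait.completeToken and a token id read back from the run's metadata (the route and request body shape are ours, not Tierly's):


import { wait } from "@trigger.dev/sdk/v3";

// Hypothetical Next.js route handler -- path and body shape are illustrative
export async function POST(request: Request) {
  const { tokenId, approved, editedTiers } = await request.json();

  // Resumes the wait.forToken() call in the workflow with the reviewer's decision
  await wait.completeToken(tokenId, { approved, editedTiers });

  return Response.json({ success: true });
}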

3. Rate limiting with queues

We share Firecrawl's scraping API across multiple concurrent analyses. Trigger.dev queues handle this elegantly:


export const firecrawlScrapingQueue = queue({
  name: "firecrawl-scraping",
  concurrencyLimit: 45, // 90% of Firecrawl's 50 limit
});

export const productTiersTask = task({
  id: "product-tiers",
  queue: firecrawlScrapingQueue,
  // ...
});

Every scraping task shares the same queue, ensuring we never exceed API limits regardless of how many analyses run simultaneously.

4. Python extension for Playwright

This one came as a bonus! Some competitor sites require JavaScript rendering, and with Trigger.dev's Python extension we were finally able to run Playwright alongside our Node.js tasks.


export default defineConfig({
  build: {
    extensions: [
      pythonExtension({
        requirementsFile: "./requirements.txt",
        scripts: ["./python/**/*.py"],
      }),
      installPlaywrightChromium(),
    ],
  },
});

This allows us to scrape JavaScript-heavy pricing pages that pure HTTP requests can't handle, giving us a solid fallback to Firecrawl.
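
Calling into that Python script from a task might look roughly like this, assuming the runScript helper from the @trigger.dev/python package (the script path, arguments, and JSON-over-stdout contract are illustrative):


import { task } from "@trigger.dev/sdk/v3";
import { python } from "@trigger.dev/python";

export const playwrightScrapeTask = task({
  id: "playwright-scrape", // illustrative id
  run: async (payload: { url: string }) => {
    // Executes the Playwright script bundled by pythonExtension() above
    const result = await python.runScript("./python/scrape_page.py", [payload.url]);

    // Assumes the script prints a JSON document to stdout
    return JSON.parse(result.stdout);
  },
});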

5. Real-time progress with metadata

Users see live progress as their analysis runs. Metadata updates propagate to the frontend via our realtime subscriptions:


await metadata.set("progress", {
  currentStep: "scoreTiers",
  currentStepName: "Scoring and analyzing tiers",
  completedSteps: 4,
  totalSteps: 7,
  percentage: 57,
});

Each step updates its progress, and child tasks use metadata.parent.set() to bubble status up to the parent run.
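
On the client, those updates can be consumed with Trigger.dev's React hooks; a minimal sketch using useRealtimeRun, assuming a public access token is passed down and the progress shape shown above (the component itself is illustrative):


"use client";

import { useRealtimeRun } from "@trigger.dev/react-hooks";

// Illustrative progress component -- props and metadata shape are assumptions
export function AnalysisProgress({ runId, accessToken }: { runId: string; accessToken: string }) {
  const { run } = useRealtimeRun(runId, { accessToken });

  const progress = run?.metadata?.progress as
    | { currentStepName: string; percentage: number }
    | undefined;

  if (!progress) return <p>Starting analysis...</p>;

  return (
    <p>
      {progress.currentStepName} ({progress.percentage}%)
    </p>
  );
}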

6. Progressive AI model fallbacks

Our tier extraction uses progressive model escalation with Trigger.dev's retry system:


// Attempt 1: gpt-4o-mini (fast, cheap)
// Attempt 2: gpt-4o (more capable)
// Attempt 3: gpt-4o with markdown fallback
const model = retryCount === 0 ? openai("gpt-4o-mini") : openai("gpt-4o");
const format = retryCount < 2 ? "json" : "markdown";

If the fast model fails to extract valid data, we automatically retry with more capable models. Trigger.dev's retry configuration handles the backoff:


retry: {
  maxAttempts: 3,
  factor: 1.8,
  minTimeoutInMs: 1000,
  maxTimeoutInMs: 30000,
  randomize: true,
}
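
Putting the escalation and the retry config together, the extraction task might look roughly like this, assuming the attempt number is read from the task context and the Vercel AI SDK is used for structured output (the task id, schema, and prompt are illustrative):


import { task } from "@trigger.dev/sdk/v3";
import { openai } from "@ai-sdk/openai";
import { generateObject } from "ai";
import { z } from "zod";

// Illustrative schema -- the real tier schema is richer than this
const tierSchema = z.object({
  tiers: z.array(z.object({ name: z.string(), price: z.string() })),
});

export const extractTiersTask = task({
  id: "extract-tiers", // illustrative id
  retry: {
    maxAttempts: 3,
    factor: 1.8,
    minTimeoutInMs: 1000,
    maxTimeoutInMs: 30000,
    randomize: true,
  },
  run: async (payload: { pageContent: string }, { ctx }) => {
    // ctx.attempt.number is 1-based, so the first attempt uses the cheap model
    const retryCount = ctx.attempt.number - 1;
    const model = retryCount === 0 ? openai("gpt-4o-mini") : openai("gpt-4o");

    // Throwing on invalid output lets Trigger.dev's backoff retry with the bigger model
    const { object } = await generateObject({
      model,
      schema: tierSchema,
      prompt: `Extract the pricing tiers from this page:\n\n${payload.pageContent}`,
    });

    return object;
  },
});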

Key features we rely on

Beyond the core architecture, several Trigger.dev features have been essential:

Long task timeouts. Our report generation task uses GPT-5.1 with extended reasoning, which can take 10-20 minutes. Trigger.dev handles this without issue:


export default defineConfig({
  maxDuration: 3600, // 1 hour max
});
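
maxDuration can also be set on an individual task when only one step needs the long ceiling; a small sketch (the task id and duration are illustrative):


import { task } from "@trigger.dev/sdk/v3";

export const generateReportTask = task({
  id: "generate-report", // illustrative id
  maxDuration: 1800, // 30 minutes for this task alone, overriding the project default
  run: async (payload: { analysisId: string }) => {
    // long-running report generation with extended-reasoning models goes here
  },
});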

Batch operations. We extract tiers from multiple competitor URLs using batch triggers, processing them in parallel while respecting rate limits.
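
For example, fanning tier extraction out over the discovered competitor URLs might look roughly like this, with every child run flowing through the shared firecrawl-scraping queue (the task id, module path, and payload shape are illustrative):


import { task } from "@trigger.dev/sdk/v3";
import { productTiersTask } from "./scraping-tasks"; // the queue-bound task shown earlier

export const competitorTiersFanOut = task({
  id: "competitor-tiers-fan-out", // illustrative id
  run: async (payload: { competitorUrls: string[]; analysisId: string }) => {
    // One extraction run per competitor; the shared queue caps concurrency
    // at 45 no matter how many runs we enqueue here
    const { runs } = await productTiersTask.batchTriggerAndWait(
      payload.competitorUrls.map((url) => ({
        payload: { url, analysisId: payload.analysisId },
      }))
    );

    // Keep only the successful extractions
    return runs.flatMap((run) => (run.ok ? [run.output] : []));
  },
});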

Type-safe task chaining. Results flow between tasks with full TypeScript types, catching schema mismatches at compile time rather than runtime.

Preview environments. Every PR gets its own Trigger.dev environment, so we can test workflow changes without affecting production.

The impact of Trigger.dev

Trigger.dev has transformed how we build and operate our AI pipeline:

  • Analyses complete reliably even with 10+ AI model calls and dozens of scraping requests
  • Human review gates work seamlessly without complex webhook infrastructure
  • Rate limiting is automatic across all concurrent analyses
  • Full visibility into every step of every analysis, making debugging straightforward
  • Development velocity increased since workflow logic lives in TypeScript alongside our app

Trigger.dev handles our orchestration so we can focus on building better pricing intelligence for SaaS teams.
