
# Respond to customer inquiry and check for inappropriate content

Create an AI agent workflow that responds to customer inquiries while checking if their text is inappropriate.
## Overview
Parallelization is a workflow pattern where multiple tasks or processes run simultaneously instead of sequentially, allowing for more efficient use of resources and faster overall execution. It’s particularly valuable when different parts of a task can be handled independently, such as running content analysis and response generation at the same time.
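Framework aside, the core idea can be shown in a few lines of TypeScript. Here, `draftReply` and `moderate` are hypothetical stand-ins for the two LLM calls:

```ts
// Hypothetical stand-in for an LLM call that drafts a customer-facing answer.
async function draftReply(question: string): Promise<string> {
  return `Thanks for reaching out! Here's what I can tell you about "${question}"...`;
}

// Hypothetical stand-in for an LLM call that screens the text for policy issues.
async function moderate(question: string): Promise<{ flagged: boolean }> {
  return { flagged: false };
}

export async function handleInquiry(question: string): Promise<string> {
  // The two steps share the same input and don't depend on each other's
  // output, so they can run concurrently instead of back to back.
  const [reply, moderation] = await Promise.all([
    draftReply(question),
    moderate(question),
  ]);
  return moderation.flagged ? "We're unable to respond to this inquiry." : reply;
}
```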
## Example task
In this example, we’ll create a workflow that simultaneously checks content for issues while responding to customer inquiries. This approach is particularly effective when tasks require multiple perspectives or parallel processing streams, with the orchestrator synthesizing the results into a cohesive output.
This task:
- Uses `generateText` from Vercel's AI SDK to interact with OpenAI models
- Uses `experimental_telemetry` to provide LLM logs
- Uses `batch.triggerByTaskAndWait` to run the customer response and content moderation tasks in parallel (see the sketch after this list)
- Generates customer service responses using an AI model
- Simultaneously checks for inappropriate content while generating responses
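As a minimal sketch of how these pieces might fit together (the task IDs, payload shape, model choice, and prompts below are illustrative assumptions, not the canonical example):

```ts
import { batch, task } from "@trigger.dev/sdk/v3";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

// Child task: draft a reply to the customer's inquiry.
export const generateCustomerResponse = task({
  id: "generate-customer-response",
  run: async ({ question }: { question: string }) => {
    const { text } = await generateText({
      model: openai("gpt-4o-mini"),
      prompt: `You are a helpful customer service agent. Answer this inquiry: ${question}`,
      // Emit generation telemetry (prompts, completions, timings) so the
      // LLM calls show up in your logs.
      experimental_telemetry: { isEnabled: true, functionId: "generate-customer-response" },
    });
    return text;
  },
});

// Child task: check the same inquiry for inappropriate content.
export const checkInappropriateContent = task({
  id: "check-inappropriate-content",
  run: async ({ question }: { question: string }) => {
    const { text } = await generateText({
      model: openai("gpt-4o-mini"),
      prompt: `Reply with exactly "true" if the following text is inappropriate (harmful, offensive, or abusive), otherwise "false": ${question}`,
      experimental_telemetry: { isEnabled: true, functionId: "check-inappropriate-content" },
    });
    return { isInappropriate: text.trim().toLowerCase() === "true" };
  },
});

// Orchestrator task: trigger both child tasks in parallel, wait for both to
// finish, then synthesize the results into a single response.
export const handleCustomerQuestion = task({
  id: "handle-customer-question",
  run: async (payload: { question: string }) => {
    const {
      runs: [responseRun, moderationRun],
    } = await batch.triggerByTaskAndWait([
      { task: generateCustomerResponse, payload },
      { task: checkInappropriateContent, payload },
    ]);

    if (moderationRun.ok && moderationRun.output.isInappropriate) {
      return { response: "We're sorry, but we can't respond to this inquiry." };
    }

    if (responseRun.ok) {
      return { response: responseRun.output };
    }

    return { response: "Sorry, something went wrong. Please try again later." };
  },
});
```

Because the two child runs are independent, the orchestrator's latency is roughly that of the slower call rather than the sum of both.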
## Run a test
On the Test page in the dashboard, select the `handle-customer-question` task and include a payload like the following:
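The exact fields depend on the task's payload schema; assuming the single `question` field used in the sketch above, a minimal payload might be:

```json
{
  "question": "Hi, I'd like to return a pair of shoes I bought last week. What do I need to do?"
}
```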
When triggered with a question, the task simultaneously generates a response while checking for inappropriate content using two parallel LLM calls. The main task waits for both operations to complete before delivering the final response.