Turn your face into a super-hero with NextJS, Replicate, and Trigger.dev

Eric Allam

CTO, Trigger.dev

TL;DR

This tutorial is super fun! You'll learn how to build a web application that allows users to generate AI images of themselves based on the prompt provided.

Before we start, head over to:

LINK: Generate a new avatar and post it in the comments! (To find good prompts check https://lexica.art)

In this tutorial, you will learn the following:

  • Upload images seamlessly in Next.js,
  • Generate stunning AI images with Replicate and swap their faces with yours, and
  • Send emails via Resend in Trigger.dev.


Your background job management for NextJS

Trigger.dev is an open-source library that enables you to create and monitor long-running jobs for your app with NextJS, Remix, Astro, and so many more!

If you can spend 10 seconds giving us a star, I would be super grateful 💖 https://github.com/triggerdotdev/trigger.dev


Set up the Wizard 🧙‍♂️

The application consists of two pages: the Home page, which collects the user's email address, image, gender, and an optional custom prompt, and the Success page, which tells users that their image is being generated and will be sent to their email once it's ready.

The best part? All these tasks are handled seamlessly by Trigger.dev.🤩

Run the code snippet below in your terminal to create a TypeScript Next.js project.


npx create-next-app image-generator

Main page 🏠

Update the index.tsx file to display a form that enables users to enter their email address and gender, an optional custom prompt, and upload a picture of themselves.


"use client";
import Head from "next/head";
import { FormEvent, useState } from "react";
import { useRouter } from "next/navigation";

export default function Home() {
  const [selectedFile, setSelectedFile] = useState<File>();
  const [userPrompt, setUserPrompt] = useState<string>("");
  const [email, setEmail] = useState<string>("");
  const [gender, setGender] = useState<string>("");
  const router = useRouter();

  const handleSubmit = async (e: FormEvent<HTMLFormElement>) => {
    e.preventDefault();
    console.log({ selectedFile, userPrompt, email, gender });
    router.push("/success");
  };

  return (
    <main className="flex min-h-screen w-full flex-col items-center justify-center px-4 md:p-8">
      <Head>
        <title>Avatar Generator</title>
      </Head>
      <header className="mb-8 flex w-full flex-col items-center justify-center">
        <h1 className="text-4xl font-bold">Avatar Generator</h1>
        <p className="opacity-60">
          Upload a picture of yourself and generate your avatar
        </p>
      </header>

      <form
        method="POST"
        className="flex w-full flex-col md:w-[60%]"
        onSubmit={(e) => handleSubmit(e)}
      >
        <label htmlFor="email">Email Address</label>
        <input
          type="email"
          required
          className="mb-3 border-[1px] px-4 py-2"
          value={email}
          onChange={(e) => setEmail(e.target.value)}
        />

        <label htmlFor="gender">Gender</label>
        <select
          className="mb-4 rounded border-[1px] px-4 py-3"
          name="gender"
          id="gender"
          value={gender}
          onChange={(e) => setGender(e.target.value)}
          required
        >
          <option value="">Select</option>
          <option value="male">Male</option>
          <option value="female">Female</option>
        </select>

        <label htmlFor="image">Upload your picture</label>
        <input
          name="image"
          type="file"
          className="mb-3 rounded-md border-[1px] px-4 py-2"
          accept=".png, .jpg, .jpeg"
          required
          onChange={({ target }) => {
            if (target.files) {
              const file = target.files[0];
              setSelectedFile(file);
            }
          }}
        />
        <label htmlFor="prompt">
          Add custom prompt for your avatar
          <span className="opacity-60">(optional)</span>
        </label>
        <textarea
          rows={4}
          className="w-full border-[1px] p-3"
          name="prompt"
          id="prompt"
          value={userPrompt}
          placeholder="Copy image prompts from https://lexica.art"
          onChange={(e) => setUserPrompt(e.target.value)}
        />
        <button
          type="submit"
          className="mt-5 rounded bg-blue-500 px-6 py-4 text-lg text-white hover:bg-blue-700"
        >
          Generate Avatar
        </button>
      </form>
    </main>
  );
}

The code snippet above displays the required input fields and a button that logs all the user inputs to the console.

The Success page ✅

After users submit the form on the home page, they are automatically redirected to the Success page. This page confirms the receipt of their request and informs them that they will receive the AI-generated image via email as soon as it is ready.

Create a success.tsx file in the pages directory and copy the code snippet below into it.


import Link from "next/link";
import Head from "next/head";

export default function Success() {
  return (
    <div className="flex min-h-screen w-full flex-col items-center justify-center">
      <Head>
        <title>Success | Avatar Generator</title>
      </Head>
      <h2 className="mb-2 text-3xl font-bold">Thank you! 🌟</h2>
      <p className="mb-4 text-center">
        Your image will be delivered to your email, once it is ready! 💫
      </p>
      <Link
        href="/"
        className="rounded bg-blue-500 px-4 py-3 text-white hover:bg-blue-600"
      >
        Generate another
      </Link>
    </div>
  );
}

Uploading images to a Next.js server

From the form, users need to be able to upload an image to the Next.js server so that its face can later be swapped onto an AI-generated picture.

To do this, I'll walk you through how to upload files in Next.js using Formidable - a Node.js module for parsing form data, especially file uploads.

Install Formidable to your Next.js project:


npm install formidable @types/formidable

Before we proceed, update the handleSubmit function to send the user's data to an endpoint on the server.


const handleSubmit = async (e: FormEvent<HTMLFormElement>) => {
  e.preventDefault();
  try {
    if (!selectedFile) return;
    const formData = new FormData();
    formData.append("image", selectedFile);
    formData.append("gender", gender);
    formData.append("email", email);
    formData.append("userPrompt", userPrompt);
    //👇🏻 post data to the server's endpoint
    await fetch("/api/generate", {
      method: "POST",
      body: formData,
    });
    //👇🏻 redirect to the Success page
    router.push("/success");
  } catch (err) {
    console.error({ err });
  }
};

Create the /api/generate endpoint on the server (pages/api/generate.ts) and disable the default Next.js body parser, as shown below.


import type { NextApiRequest, NextApiResponse } from "next";

//👇🏻 disables the default Next.js body parser
export const config = {
  api: {
    bodyParser: false,
  },
};

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  res.status(200).json({ message: "Hello world" });
}

Add this code snippet directly below the config object to convert the uploaded image to base64 format. It uses Formidable and Node's Writable stream, so include the corresponding imports at the top of the file (shown at the start of the snippet).


//👇🏻 add these imports at the top of the file
import formidable from "formidable";
import { Writable } from "stream";

//👇🏻 creates a writable stream that stores a chunk of data
const fileConsumer = (acc: any) => {
  const writable = new Writable({
    write: (chunk, _enc, next) => {
      acc.push(chunk);
      next();
    },
  });

  return writable;
};

const readFile = (req: NextApiRequest, saveLocally?: boolean) => {
  // @ts-ignore
  const chunks: any[] = [];
  //👇🏻 creates a formidable instance that uses the fileConsumer function
  const form = formidable({
    keepExtensions: true,
    fileWriteStreamHandler: () => fileConsumer(chunks),
  });

  return new Promise((resolve, reject) => {
    form.parse(req, (err, fields: any, files: any) => {
      //👇🏻 converts the image to base64
      const image = Buffer.concat(chunks).toString("base64");
      //👇🏻 logs the result
      console.log({
        image,
        email: fields.email[0],
        gender: fields.gender[0],
        userPrompt: fields.userPrompt[0],
      });

      if (err) reject(err);
      resolve({ fields, files });
    });
  });
};

From the code snippet above:

  • The fileConsumer function creates a writable stream in Node.js that stores the chunks of data to be written.
  • The readFile function creates a Formidable instance that uses fileConsumer as its custom fileWriteStreamHandler. The handler ensures that the image data is collected in the chunks array.
  • It also logs the user's image (in base64 format), email, gender, and custom prompt.

Finally, modify the handler function to execute the readFile function.


export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  await readFile(req, true);

  res.status(200).json({ message: "Processing!" });
}
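If you want to sanity-check the endpoint without going through the form, you can post a multipart request from a small script. A quick sketch, assuming Node 18+ (for the built-in fetch, FormData, and Blob), the dev server running on localhost:3000, and a local me.png — save it as something like test-upload.mjs and run it with node:

import { readFile } from "node:fs/promises";

//👇🏻 build the same multipart payload the form would send
const form = new FormData();
form.append(
  "image",
  new Blob([await readFile("./me.png")], { type: "image/png" }),
  "me.png"
);
form.append("email", "you@example.com");
form.append("gender", "male");
form.append("userPrompt", "");

//👇🏻 post it to the endpoint and print the response
const res = await fetch("http://localhost:3000/api/generate", {
  method: "POST",
  body: form,
});
console.log(res.status, await res.json());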

Congratulations! 🎉 You've learnt how to upload images in base64 format in Next.js. In the upcoming sections, I'll walk you through generating images with AI models on Replicate and sending them to users via email with Resend and Trigger.dev.


Managing long-running jobs with Trigger.dev 🏄‍♂️

Trigger.dev is an open-source library that offers three ways of triggering jobs: webhooks, schedules, and events. Schedules are ideal for recurring tasks, events activate a job when a payload is sent, and webhooks trigger jobs in real time when specific events occur in an external service.
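To make that concrete, here is a minimal sketch of an event-driven job next to a scheduled one, assuming the v2 SDK's eventTrigger and cronTrigger helpers and the client exported from your project's trigger file (set up in the next section):

import { eventTrigger, cronTrigger } from "@trigger.dev/sdk";
import { client } from "@/trigger";

//👇🏻 runs whenever an event named "user.registered" is sent with client.sendEvent()
client.defineJob({
  id: "welcome-user",
  name: "Welcome User",
  version: "0.0.1",
  trigger: eventTrigger({ name: "user.registered" }),
  run: async (payload, io, ctx) => {
    await io.logger.info("A new user registered!", { payload });
  },
});

//👇🏻 runs on a schedule, e.g. every day at 9am UTC
client.defineJob({
  id: "daily-report",
  name: "Daily Report",
  version: "0.0.1",
  trigger: cronTrigger({ cron: "0 9 * * *" }),
  run: async (payload, io, ctx) => {
    await io.logger.info("Time to send the daily report!");
  },
});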

Here, you'll learn how to create and trigger jobs within your Next.js project.

How to add Trigger.dev to a Next.js application

Sign up for a Trigger.dev account. Once registered, create an organisation and choose a project name for your jobs.

Select Next.js as your framework and follow the process for adding Trigger.dev to an existing Next.js project.

Otherwise, click Environments & API Keys on the sidebar menu of your project dashboard.

Copy your DEV server API key and run the code snippet below to install Trigger.dev. Follow the instructions carefully.


npx @trigger.dev/cli@latest init

Start your Next.js project.


npm run dev

In another terminal, run the following code snippet to establish a tunnel between Trigger.dev and your Next.js project.


npx @trigger.dev/cli@latest dev

Rename the jobs/examples.ts file to jobs/functions.ts. This is where all the jobs are processed.

Next, install Zod - a TypeScript-first type-checking and validation library that enables you to verify the data type of a job's payload.


npm install zod
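As a quick illustration, a Zod schema describes the expected shape of a payload, and parse throws if the data doesn't match. A small sketch using the same shape the avatar job will expect later:

import { z } from "zod";

//👇🏻 the payload shape the "generate.avatar" job will expect
const payloadSchema = z.object({
  image: z.string(),
  email: z.string(),
  gender: z.string(),
  userPrompt: z.string().nullable(),
});

//👇🏻 passes — every field has the right type
payloadSchema.parse({
  image: "<base64 string>",
  email: "user@example.com",
  gender: "female",
  userPrompt: null,
});

//👇🏻 throws a ZodError — email is missing and gender has the wrong type
payloadSchema.parse({ image: "<base64 string>", gender: 1, userPrompt: null });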

In Trigger.dev, jobs can be triggered using the client.sendEvent() method. Therefore, modify the readFile function in pages/api/generate.ts to send an event that triggers the job (defined in the next section), passing the user's data as the payload.


const readFile = (req: NextApiRequest, saveLocally?: boolean) => {
  // @ts-ignore
  const chunks: any[] = [];
  const form = formidable({
    keepExtensions: true,
    fileWriteStreamHandler: () => fileConsumer(chunks),
  });

  return new Promise((resolve, reject) => {
    form.parse(req, (err, fields: any, files: any) => {
      const image = Buffer.concat(chunks).toString("base64");
      //👇🏻 sends the payload to the job
      client.sendEvent({
        name: "generate.avatar",
        payload: {
          image,
          email: fields.email[0],
          gender: fields.gender[0],
          userPrompt: fields.userPrompt[0],
        },
      });

      if (err) reject(err);
      resolve({ fields, files });
    });
  });
};
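Note that client here is the Trigger.dev client generated when you ran the init command. Assuming it lives in the default trigger.ts file created for Next.js projects, import it at the top of the API route:

//👇🏻 at the top of pages/api/generate.ts — adjust the path if your trigger.ts lives elsewhere
import { client } from "@/trigger";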


Creating the faces with Replicate

Replicate is a web platform that allows users to run models at scale in the cloud. Here, you'll learn how to generate and swap image faces using AI models on Replicate.

Follow the steps below to accomplish this:

Visit the Replicate home page, click the Sign in button to log in via your GitHub account, and generate your API token.

Copy your API token, the Stability AI model URI - for generating images, and the Faceswap AI model URI into the .env.local file.


REPLICATE_API_TOKEN=<your_API_token>
STABILITY_AI_URI=stability-ai/sdxl:c221b2b8ef527988fb59bf24a8b97c4561f1c671f73bd389f866bfb27c061316
FACESWAP_API_URI=lucataco/faceswap:9a4298548422074c3f57258c5d544497314ae4112df80d116f0d2109e843d20d

Next, go to the Trigger.dev integration page and install the Replicate package.


npm install @trigger.dev/replicate@latest

Import and initialize Replicate within the jobs/functions.ts file.


import { Replicate } from "@trigger.dev/replicate";

const replicate = new Replicate({
  id: "replicate",
  //👇🏻 uses the REPLICATE_API_TOKEN from .env.local
  apiKey: process.env.REPLICATE_API_TOKEN!,
});

Update the jobs/functions.ts file to generate an image using the prompt provided by the user or a default prompt.


import { z } from "zod";
//👇🏻 client and eventTrigger are already imported in the file scaffolded by the init command

client.defineJob({
  id: "generate-avatar",
  name: "Generate Avatar",
  //👇🏻 integrates Replicate
  integrations: { replicate },
  version: "0.0.1",
  trigger: eventTrigger({
    name: "generate.avatar",
    schema: z.object({
      image: z.string(),
      email: z.string(),
      gender: z.string(),
      userPrompt: z.string().nullable(),
    }),
  }),
  run: async (payload, io, ctx) => {
    const { email, image, gender, userPrompt } = payload;

    await io.logger.info("Avatar generation started!", { image });

    const imageGenerated = await io.replicate.run("create-model", {
      identifier: process.env.STABILITY_AI_URI!,
      input: {
        prompt: `${
          userPrompt
            ? userPrompt
            : `A professional ${gender} portrait suitable for a social media avatar. Please ensure the image is appropriate for all audiences.`
        }`,
      },
    });

    await io.logger.info(JSON.stringify(imageGenerated));
  },
});

The code snippet above generates an AI image based on the prompt and logs it on your Trigger.dev dashboard.

Remember, the goal is to generate an AI image and then swap the user's face onto it. Next, let's handle the face swap.

Copy this function to the top of the jobs/functions.ts file. It converts the generated image's URL into a data URI, which is the format the face-swap AI model accepts.


//👇🏻 converts an image URL to a data URI
const urlToBase64 = async (image: string) => {
  const response = await fetch(image);
  const arrayBuffer = await response.arrayBuffer();
  const buffer = Buffer.from(arrayBuffer);
  const base64String = buffer.toString("base64");
  const mimeType = "image/png";
  const dataURI = `data:${mimeType};base64,${base64String}`;
  return dataURI;
};

Update the Trigger.dev job to send both the user's image and generated image as parameters to the faceswap model.


client.defineJob({
  id: "generate-avatar",
  name: "Generate Avatar",
  //👇🏻 keep the Replicate integration from before
  integrations: { replicate },
  version: "0.0.1",
  trigger: eventTrigger({
    name: "generate.avatar",
    schema: z.object({
      image: z.string(),
      email: z.string(),
      gender: z.string(),
      userPrompt: z.string().nullable(),
    }),
  }),
  run: async (payload, io, ctx) => {
    const { email, image, gender, userPrompt } = payload;

    await io.logger.info("Avatar generation started!", { image });

    const imageGenerated = await io.replicate.run("create-model", {
      identifier: process.env.STABILITY_AI_URI!,
      input: {
        prompt: `${
          userPrompt
            ? userPrompt
            : `A professional ${gender} portrait suitable for a social media avatar. Please ensure the image is appropriate for all audiences.`
        }`,
      },
    });

    const swappedImage = await io.replicate.run("create-image", {
      identifier: process.env.FACESWAP_API_URI!,
      input: {
        // @ts-ignore
        target_image: await urlToBase64(imageGenerated.output),
        swap_image: "data:image/png;base64," + image,
      },
    });
    await io.logger.info("Swapped image: ", { swappedImage });
    await io.logger.info("✨ Congratulations, your image has been swapped! ✨");
  },
});

The code snippet above builds data URIs for both the AI-generated image and the user's image, then sends them to the face-swap model, which returns the URL of the swapped image.

Congratulations! 🎉 You've learnt how to generate AI images of yourself with Replicate. In the upcoming section, you'll learn how to send these images via email with Resend.

PS: You can also get custom prompts for your images from Lexica.


Sending emails with Resend via Trigger.dev

Resend is an email API that enables you to send texts, attachments, and email templates easily. With Resend, you can build, test, and deliver transactional emails at scale.

Visit the signup page, create an account, generate an API key, and save it in the .env.local file.


RESEND_API_KEY=<place_your_API_key>

Install the Trigger.dev Resend integration package to your Next.js project.


npm install @trigger.dev/resend

Import Resend into the /jobs/functions.ts file as shown below.


import { Resend } from "@trigger.dev/resend";

const resend = new Resend({
  id: "resend",
  apiKey: process.env.RESEND_API_KEY!,
});

Finally, integrate Resend into the job and send the swapped image to the user's email.


client.defineJob({
  id: "generate-avatar",
  name: "Generate Avatar",
  // ---👇🏻 integrates Resend alongside the existing Replicate integration ---
  integrations: { replicate, resend },
  version: "0.0.1",
  trigger: eventTrigger({
    name: "generate.avatar",
    schema: z.object({
      image: z.string(),
      email: z.string(),
      gender: z.string(),
      userPrompt: z.string().nullable(),
    }),
  }),
  run: async (payload, io, ctx) => {
    const { email, image, gender, userPrompt } = payload;
    //👇🏻 -- After swapping the images, add the code snippet below --
    await io.logger.info("Swapped image: ", { swappedImage });

    //👇🏻 -- Sends the swapped image to the user --
    await io.resend.sendEmail("send-email", {
      //👇🏻 Resend requires a "from" address; use one on your verified domain (onboarding@resend.dev works for testing)
      from: "onboarding@resend.dev",
      to: [email],
      subject: "Your avatar is ready! 🌟🤩",
      text: `Hi! \n View and download your avatar here - ${swappedImage.output}`,
    });

    await io.logger.info(
      "✨ Congratulations, the image has been delivered! ✨"
    );
  },
});

Congratulations!🎉 You've completed the project for this tutorial.

Conclusion

So far, you've learnt how to

  • upload images to a local directory in Next.js,
  • create and manage long-running jobs with Trigger.dev,
  • generate AI images using various models on Replicate, and
  • send emails via Resend in Trigger.dev.

As an open-source developer, you're invited to join our community to contribute and engage with maintainers. Don't hesitate to visit our GitHub repository to contribute and create issues related to Trigger.dev.

The source for this tutorial is available here: https://github.com/triggerdotdev/blog/tree/main/avatar-generator

Thank you for reading!
