How to Batch Generate 100 Product Videos with Python
You have 100 products. Each one needs a UGC-style video ad — a real-looking person talking about the product with subtitles, music, and a CTA. Doing this manually would take weeks. With the agent-media Python SDK you can script the entire process, read your product catalog from a CSV, and generate every video programmatically.
What you need
Python 3.8+
Any recent Python version works. The SDK supports 3.8 through 3.13.
An agent-media account
Sign up at agent-media.ai and grab your API key from the dashboard. It starts with ma_.
Your product data
A CSV, a database query, a JSON file — anything that gives you product names, descriptions, and the actor you want for each.
Install the SDK
Install the official Python SDK from PyPI. It includes both a synchronous and an async client, automatic retries, and typed response objects.
pip install agent-media
Then set your API key as an environment variable:
export AGENT_MEDIA_API_KEY=ma_xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Prepare your data
Create a CSV with one row per video. At minimum you need a product name, a script or description for the actor to read, and the actor slug. You can export this from Shopify, your CMS, or a simple spreadsheet.
product_name,description,actor_slug
Hydra Serum,"I have been using this serum for two weeks and my skin has never looked better. It absorbs instantly and does not leave a greasy feel.",sofia
Protein Pro,"This protein powder mixes clean with no chalky taste. One scoop after my workout and I am good to go.",marcus
Sleep Stack,"I started taking this before bed and I fall asleep in minutes. No groggy mornings either.",aisha
Daily Greens,"I hate vegetables but this tastes like a green apple smoothie. I drink it every morning now.",james
Glow Oil,"Two drops of this and my skin literally glows. My friends keep asking what I changed.",luna
Tip: write scripts in a first-person, conversational tone. The AI actor delivers them as natural UGC testimonials.
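If your catalog lives in code rather than a spreadsheet, you can write the CSV with the standard library instead of exporting by hand. A minimal sketch, with a made-up two-product catalog standing in for your real data:

```python
import csv

# Hypothetical catalog rows; replace with your own export logic
# (Shopify API, database query, etc.).
products = [
    {
        "product_name": "Hydra Serum",
        "description": "I have been using this serum for two weeks and my skin has never looked better.",
        "actor_slug": "sofia",
    },
    {
        "product_name": "Protein Pro",
        "description": "This protein powder mixes clean with no chalky taste.",
        "actor_slug": "marcus",
    },
]

# DictWriter quotes descriptions containing commas automatically.
with open("products.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["product_name", "description", "actor_slug"])
    writer.writeheader()
    writer.writerows(products)
```

The `newline=""` argument matters on Windows; without it the csv module writes blank lines between rows.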
Write the basic generation script
This script reads your CSV, loops through each row, and calls client.create_video() for every product. It waits for each video to finish before moving to the next one.
import csv
import os
from agent_media import AgentMedia
client = AgentMedia(api_key=os.environ["AGENT_MEDIA_API_KEY"])
with open("products.csv") as f:
    reader = csv.DictReader(f)
    for row in reader:
        video = client.create_video(
            script=row["description"],
            actor=row["actor_slug"],
            subtitle_style="hormozi",
            wait_for_completion=True,
        )
        print(f"[{row['product_name']}] {video.url}")

This works, but it is sequential: each video waits for the previous one to finish rendering. For 100 products that means a long wait. Let us fix that.
Add concurrency with asyncio
The SDK ships an async client that works with asyncio. Instead of generating one video at a time, you can fire off multiple requests in parallel using asyncio.gather(). The example below batches requests in groups of 10 to avoid overwhelming the API.
import asyncio
import csv
import os
from agent_media import AsyncAgentMedia
client = AsyncAgentMedia(api_key=os.environ["AGENT_MEDIA_API_KEY"])
BATCH_SIZE = 10
async def generate_video(row: dict) -> dict:
    video = await client.create_video(
        script=row["description"],
        actor=row["actor_slug"],
        subtitle_style="hormozi",
        wait_for_completion=True,
    )
    return {"product": row["product_name"], "url": video.url}
async def main():
    with open("products.csv") as f:
        rows = list(csv.DictReader(f))

    results = []
    for i in range(0, len(rows), BATCH_SIZE):
        batch = rows[i : i + BATCH_SIZE]
        batch_results = await asyncio.gather(
            *[generate_video(row) for row in batch]
        )
        results.extend(batch_results)
        print(f"Batch {i // BATCH_SIZE + 1} complete: {len(batch)} videos")

    for r in results:
        print(f"[{r['product']}] {r['url']}")

asyncio.run(main())

That is 10 videos generating in parallel instead of one at a time. For 100 products that is 10 batches, dramatically faster than sequential processing.
Handle failures gracefully
When you are generating 100 videos, some will inevitably fail — a network hiccup, a transient API error, an invalid actor slug. The production-ready version adds retry logic, structured logging, and writes results to an output CSV so you know exactly which products succeeded and which need a rerun.
import asyncio
import csv
import logging
import os
from agent_media import AsyncAgentMedia
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
logger = logging.getLogger(__name__)
client = AsyncAgentMedia(api_key=os.environ["AGENT_MEDIA_API_KEY"])
BATCH_SIZE = 10
MAX_RETRIES = 3
async def generate_with_retry(row: dict) -> dict:
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            video = await client.create_video(
                script=row["description"],
                actor=row["actor_slug"],
                subtitle_style="hormozi",
                wait_for_completion=True,
            )
            logger.info(f"OK: {row['product_name']}")
            return {
                "product_name": row["product_name"],
                "status": "success",
                "url": video.url,
                "credits": video.credits_cost,
            }
        except Exception as e:
            logger.warning(
                f"Attempt {attempt}/{MAX_RETRIES} failed for "
                f"{row['product_name']}: {e}"
            )
            if attempt < MAX_RETRIES:
                await asyncio.sleep(2 ** attempt)
    logger.error(f"FAILED: {row['product_name']}")
    return {
        "product_name": row["product_name"],
        "status": "failed",
        "url": "",
        "credits": 0,
    }
async def main():
    with open("products.csv") as f:
        rows = list(csv.DictReader(f))

    results = []
    for i in range(0, len(rows), BATCH_SIZE):
        batch = rows[i : i + BATCH_SIZE]
        batch_results = await asyncio.gather(
            *[generate_with_retry(row) for row in batch]
        )
        results.extend(batch_results)

    # Write output CSV
    with open("output.csv", "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["product_name", "status", "url", "credits"]
        )
        writer.writeheader()
        writer.writerows(results)

    succeeded = sum(1 for r in results if r["status"] == "success")
    logger.info(f"Done: {succeeded}/{len(results)} videos generated")

asyncio.run(main())

The output CSV gives you a clear record of what was generated:
product_name,status,url,credits
Hydra Serum,success,https://pub-xxx.r2.dev/videos/abc123.mp4,306
Protein Pro,success,https://pub-xxx.r2.dev/videos/def456.mp4,252
Sleep Stack,failed,,0
Filter the output for failed rows and rerun just those — no need to regenerate the entire batch.
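The rerun itself is a small join between the two CSVs: take the product names that failed in output.csv and keep only the matching rows from products.csv. A sketch of that filtering step, written over in-memory rows so the logic is easy to test (in practice, load both lists with csv.DictReader):

```python
def rows_to_rerun(products: list, results: list) -> list:
    # Collect the names of products whose generation failed...
    failed = {r["product_name"] for r in results if r["status"] == "failed"}
    # ...and keep only the matching input rows for the next run.
    return [p for p in products if p["product_name"] in failed]

# Example data mirroring products.csv and output.csv.
products = [
    {"product_name": "Hydra Serum", "description": "...", "actor_slug": "sofia"},
    {"product_name": "Sleep Stack", "description": "...", "actor_slug": "aisha"},
]
results = [
    {"product_name": "Hydra Serum", "status": "success"},
    {"product_name": "Sleep Stack", "status": "failed"},
]

retry_rows = rows_to_rerun(products, results)
print([r["product_name"] for r in retry_rows])  # ['Sleep Stack']
```

Feed retry_rows back into the generation script in place of the full CSV and only the failed products are regenerated.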
Use webhooks instead of polling
The examples above use wait_for_completion=True, which polls the API until each video is ready. For production systems you probably want fire-and-forget: submit all 100 jobs, then let agent-media POST the results to your server when each video finishes.
import csv
import os
from agent_media import AgentMedia
client = AgentMedia(api_key=os.environ["AGENT_MEDIA_API_KEY"])
WEBHOOK_URL = "https://your-app.com/api/video-complete"
with open("products.csv") as f:
    reader = csv.DictReader(f)
    for row in reader:
        job = client.create_video(
            script=row["description"],
            actor=row["actor_slug"],
            subtitle_style="hormozi",
            webhook_url=WEBHOOK_URL,
            metadata={"product_name": row["product_name"]},
        )
        print(f"Queued: {row['product_name']} -> {job.id}")

When each video finishes, agent-media sends a POST to your webhook:
{
"event": "video.completed",
"jobId": "job_abc123",
"status": "completed",
"video": {
"url": "https://pub-xxx.r2.dev/videos/abc123.mp4",
"duration": 10.2,
"actor": "sofia",
"creditsCost": 306
},
"metadata": {
"product_name": "Hydra Serum"
},
"timestamp": "2026-04-14T12:34:56Z"
}

The metadata field passes through untouched, so you can match each webhook callback to the original product in your database.
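On the receiving side, the handler only needs to parse the JSON body and use metadata to locate the product. A minimal stdlib sketch; the port is an assumption, and a production service would run behind a real web framework and verify the request (e.g. via a signature header, if agent-media provides one):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_payload(payload: dict) -> tuple:
    # Match the callback to the original product via the metadata we attached.
    product = payload["metadata"]["product_name"]
    url = payload["video"]["url"]
    return product, url

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        if payload.get("event") == "video.completed":
            product, url = handle_payload(payload)
            print(f"[{product}] {url}")  # persist to your database here
        # Acknowledge quickly so agent-media does not retry the delivery.
        self.send_response(200)
        self.end_headers()

# To run the receiver:
# HTTPServer(("", 8000), WebhookHandler).serve_forever()
```

Respond with a 2xx as fast as possible and do any heavy post-processing (downloading the file, publishing the ad) asynchronously.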
Cost breakdown: 100 product videos
Video generation costs 30 credits per second of output, so a typical 10-second product video costs 300 credits. Here is what 100 videos look like:
| Item | Value |
|---|---|
| Videos | 100 |
| Avg duration | ~10 seconds |
| Credits per video | ~300 |
| Total credits | ~30,000 |
| Cost (at $0.01/credit) | ~$300 |
| Cost per video | ~$3.00 |
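The table is simple arithmetic: credits scale linearly with output seconds at 30 credits per second, and the dollar figure here assumes the $0.01-per-credit rate from the table (check the pricing page for your tier). A quick estimator:

```python
CREDITS_PER_SECOND = 30
PRICE_PER_CREDIT = 0.01  # dollars; assumption, varies by pricing tier

def estimate(num_videos: int, avg_seconds: float) -> dict:
    # Credits are linear in output duration.
    credits_per_video = CREDITS_PER_SECOND * avg_seconds
    total_credits = credits_per_video * num_videos
    return {
        "credits_per_video": credits_per_video,
        "total_credits": total_credits,
        "total_cost": total_credits * PRICE_PER_CREDIT,
        "cost_per_video": credits_per_video * PRICE_PER_CREDIT,
    }

est = estimate(100, 10)
print(est)  # 300 credits/video, 30,000 total, $300 total, $3.00 per video
```

Run it with your actual average durations before committing to a batch; a 15-second average pushes the total to 45,000 credits.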
Compare that to hiring 100 UGC creators at $50-200 each. The API approach is roughly 20-60x cheaper and delivers in minutes, not weeks. See the full tier breakdown on the pricing page.
TypeScript equivalent
Prefer TypeScript? The same batch generation pattern works with the @agentmedia/sdk npm package. Here is the async version with batching:
import { AgentMedia } from "@agentmedia/sdk";
import { parse } from "csv-parse/sync";
import { readFileSync, writeFileSync } from "fs";
const client = new AgentMedia({
  apiKey: process.env.AGENT_MEDIA_API_KEY,
});

interface Product {
  product_name: string;
  description: string;
  actor_slug: string;
}

const rows: Product[] = parse(readFileSync("products.csv"), {
  columns: true,
  skip_empty_lines: true,
});
const BATCH_SIZE = 10;
const results: { product: string; url: string }[] = [];
for (let i = 0; i < rows.length; i += BATCH_SIZE) {
  const batch = rows.slice(i, i + BATCH_SIZE);
  const videos = await Promise.all(
    batch.map((row) =>
      client.createVideo({
        script: row.description,
        actor: row.actor_slug,
        subtitleStyle: "hormozi",
        waitForCompletion: true,
      })
    )
  );
  for (let j = 0; j < batch.length; j++) {
    results.push({
      product: batch[j].product_name,
      url: videos[j].url,
    });
  }
  console.log(`Batch ${Math.floor(i / BATCH_SIZE) + 1} done`);
}

// Write results
const output = results
  .map((r) => `${r.product},${r.url}`)
  .join("\n");
writeFileSync("output.csv", "product,url\n" + output);
console.log(`Generated ${results.length} videos`);

Start generating product videos at scale
100 products. 100 videos. One Python script. Get your API key and start generating today.