2026-05-07 • Morgan Reeves • 12 min
How We Actually Ship Ten Landing Pages Without Turning Them Into Thin Spam
Batch generation sounds magical until you end up with ten variants that all say the same thing in slightly different fonts. Here is the messy middle ground that still ships fast.
I used to treat “generate ten landing pages” like ordering appetizers at a busy restaurant: shout the SKUs and hope something edible arrives. The pipeline cheerfully complied—every page had a hero, three bullets, and a primary button—but nobody believed them. Traffic bounced because each variant smelled algorithmically plausible rather than personally convincing.
The breakthrough wasn’t model prompts alone; it was framing landing pages as paired comparisons instead of lottery tickets. Every candidate idea had to answer “who hurts”, “what they fear admitting”, and “what feels credible tomorrow”, plus state one ruthless constraint that differs across variants.
Start narrow enough that humans could argue about specifics
Spreadsheet paralysis sneaks in when prompts drift toward vibes (“better onboarding”). Rewrite prompts until your teammate argues about whether the messaging skews toward CTO guilt or founding-team burnout; that level of specificity survives templating. Concretely, each brief pins down the following (see the sketch after this list):
- One job-to-be-done per page (avoid stacking personas unless your segmentation pipeline proves lift)
- A headline lane measured against embarrassment (“would someone screenshot this?”) rather than cleverness
- Evidence placeholders baked in early (quotes, metrics, logos), even if half of them ship marked as pending external validation
- Single measurable outcome above the fold: signup, waitlist, or guided demo—never three competing CTAs
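It helps to make the brief a real data structure the pipeline can reject, not a paragraph in a doc. A minimal sketch, assuming a Python pipeline; the `PageBrief` fields and `validate` rules are illustrative assumptions, not a canonical schema:

```python
from dataclasses import dataclass, field

@dataclass
class PageBrief:
    """One landing-page variant, pinned to a single argument."""
    slug: str                 # experiment-readable, e.g. "pricing-confidence-vs-security-paranoia"
    job_to_be_done: str       # exactly one; no persona stacking
    who_hurts: str            # the person the pain belongs to
    feared_admission: str     # what they fear admitting out loud
    credible_tomorrow: str    # the claim they could believe by next week
    ruthless_constraint: str  # the one promise THIS variant refuses to make
    cta: str                  # single above-the-fold outcome
    evidence_slots: list[str] = field(default_factory=lambda: ["quote", "metric", "logo"])

ALLOWED_CTAS = {"signup", "waitlist", "guided-demo"}

def validate(brief: PageBrief) -> list[str]:
    """Return rejection notes for briefs that drifted back toward vibes."""
    problems = []
    if " and " in brief.job_to_be_done:
        problems.append("reject persona stacking: one job-to-be-done per page")
    if brief.cta not in ALLOWED_CTAS:
        problems.append("reject competing CTAs: pick one measurable outcome")
    return problems
```

The point of the `validate` step is social, not technical: it turns “this brief feels vague” into a concrete rejection your teammate can argue with.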
AI drafts behave better when they imitate a ruthless critique circle rather than a polite brainstorming session.
Batch mode beats heroic prompting because friction hides in naming
Generate filenames that survive Slack archaeology (`pain-paywall-vs-pay-later-founders.md`) and make slug lineage mirror your experiment hypotheses (`pricing-confidence-vs-security-paranoia`). Retrieval bots, and sleepy teammates scanning dashboards, will thank you weeks later.
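One way to make that convention mechanical rather than aspirational. A small sketch, again assuming Python; the `pain-X-vs-Y-audience` shape is our local convention, not any standard:

```python
import re
from pathlib import Path

def hypothesis_slug(pain: str, alternative: str, audience: str) -> str:
    """Encode the hypothesis in the slug, not just the page topic."""
    def clean(s: str) -> str:
        # Lowercase, collapse anything non-alphanumeric into single hyphens.
        return re.sub(r"[^a-z0-9]+", "-", s.lower()).strip("-")
    return f"pain-{clean(pain)}-vs-{clean(alternative)}-{clean(audience)}"

# hypothesis_slug("paywall", "pay later", "founders")
# -> "pain-paywall-vs-pay-later-founders"
draft_path = Path("drafts") / f"{hypothesis_slug('paywall', 'pay later', 'founders')}.md"
```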
QA gates earn skeptical eyeballs rather than checkbox vibes when the rubric is phrased as rejection verbs (“reject vagueness”, “reject symmetrical clichés”). Automated tooling shines brightest when it routes mediocre-but-readable drafts back into an iteration bucket instead of silently merging bad variants.
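A gate like that can be embarrassingly simple. A sketch under the same Python assumption; the banned-phrase list and word-count threshold are placeholder heuristics you would tune against your own rejects:

```python
from enum import Enum

class Verdict(Enum):
    SHIP = "ship"
    ITERATE = "iterate"  # readable but mediocre: back into the loop
    REJECT = "reject"

# Rejection rules phrased as verbs; this banned-phrase list is illustrative.
VAGUE_PHRASES = ["streamline your workflow", "unlock value", "take it to the next level"]

def gate(draft: str) -> tuple[Verdict, list[str]]:
    """Route drafts explicitly instead of silently merging them."""
    notes = []
    lowered = draft.lower()
    for phrase in VAGUE_PHRASES:
        if phrase in lowered:
            notes.append(f"reject vagueness: contains '{phrase}'")
    if len(draft.split()) < 120:
        notes.append("reject thinness: not enough argument to earn the CTA")
    if not notes:
        return Verdict.SHIP, notes
    # Mediocre-but-readable drafts go to the iteration bucket, not the trash.
    return (Verdict.ITERATE if len(notes) <= 2 else Verdict.REJECT), notes
```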
Publish fewer pages publicly and obsess over comparison loops
Ship three genuinely differentiated pages plus placeholder shells for internal debates before blasting DNS everywhere. Track engagement deltas qualitatively—people forwarding screenshots beats vanity CTR spikes.
After launch, resist rewriting winners wholesale; mutate headlines while preserving the narrative spine, so attribution chatter stays intelligible during retro pizza debates.
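One cheap way to enforce that discipline in the pipeline is to record lineage explicitly whenever a headline mutates. A sketch under the same Python assumption; `Variant`, `body_hash`, and the slug-suffix convention are all illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Variant:
    slug: str
    parent: str | None  # lineage: which winner this headline mutates
    headline: str
    body_hash: str      # hash of the unchanged narrative spine

def mutate_headline(winner: Variant, new_headline: str, suffix: str) -> Variant:
    """Spawn a child that changes ONLY the headline, keeping the spine."""
    return Variant(
        slug=f"{winner.slug}--{suffix}",
        parent=winner.slug,
        headline=new_headline,
        body_hash=winner.body_hash,  # identical body => the delta belongs to the headline
    )
```

Because the body hash never changes between parent and child, any engagement delta is attributable to the headline alone, which is exactly what keeps the retro arguments short.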