500 posts. 5 hubs. 25 topic clusters.

Doing that manually? You’d need 10 writers, an SEO manager, and a project coordinator who doesn’t sleep. We do it with Claude Sonnet, 10 Python scripts, and n8n workflows — producing 12 SEO-optimized posts per week.

This is the exact AI content pipeline SEO teams ask us about. Every step. Every script. Every quality gate that keeps output consistent at scale.

No theory. No “it depends.” Just the system we run, the code we ship, and the results we measure.

Want the Full System Blueprint?

Read the pillar post: AI SEO Operation — Full Stack Breakdown


The Pipeline at a Glance

Here’s the full workflow, start to finish:

  1. Keyword Mapping — keyword_mapper.py finds and clusters keywords by intent
  2. Outline Generation — Templates + editorial_calendar.yaml define structure
  3. Drafting — Claude Sonnet writes from pillar_template.md or supporting_post_template.md
  4. Quality Scoring — content_scorer.py runs 15 automated checks on every draft
  5. Publishing — wp_publisher.py pushes to WordPress via REST API
  6. Orchestration — n8n ties it all together with scheduled triggers

Six steps. Five scripts. One n8n instance. That’s the entire operation.

Each step has a single job. If something breaks, you know exactly where. If quality dips, the scorer catches it before anything goes live.

WEEKLY OUTPUT

12 Posts

Published per week at full pipeline capacity


Step 1 — Keyword Mapping (keyword_mapper.py)

Everything starts with keywords. Not content ideas. Not brainstorms. Keywords.

keyword_mapper.py takes a hub topic — say, “AI-Powered SEO” — and generates full keyword clusters. It groups every keyword by search intent:

  • Informational → “how does AI content scoring work”
  • Transactional → “best AI SEO tool for WordPress”
  • Navigational → “DesignCopy content pipeline tutorial”
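The internals of keyword_mapper.py aren't shown here, but the intent-bucketing step can be sketched as a simple rule-based classifier. The marker word lists below are illustrative assumptions, not the script's actual rules:

```python
# Minimal sketch of intent classification, assuming a rule-based approach.
# TRANSACTIONAL_MARKERS and NAVIGATIONAL_MARKERS are hypothetical lists.

TRANSACTIONAL_MARKERS = {"best", "buy", "tool", "pricing", "vs", "review"}
NAVIGATIONAL_MARKERS = {"login", "tutorial", "docs", "designcopy"}

def classify_intent(keyword: str) -> str:
    """Bucket a keyword as informational, transactional, or navigational."""
    words = set(keyword.lower().split())
    if words & TRANSACTIONAL_MARKERS:
        return "transactional"
    if words & NAVIGATIONAL_MARKERS:
        return "navigational"
    return "informational"  # default bucket when no marker matches

keywords = [
    "how does AI content scoring work",
    "best AI SEO tool for WordPress",
    "DesignCopy content pipeline tutorial",
]
clusters: dict[str, list[str]] = {}
for kw in keywords:
    clusters.setdefault(classify_intent(kw), []).append(kw)
```

A production mapper would lean on API data (search volume, SERP features) rather than word lists alone, but the grouping logic follows this shape.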

The script outputs three things:

  1. Cluster assignments (which keywords belong together)
  2. Pillar vs. supporting post assignments (which keyword anchors a cluster, which ones support it)
  3. Estimated search volume pulled from API data

This step takes about 4 minutes per hub. It replaces what used to be a 2-day manual research sprint.

💡 Pro Tip

Map ALL keywords for a cluster before writing a single word. This prevents duplicate content and ensures internal links make sense from day one. Writing before mapping creates orphan posts that don’t connect to anything.

KEYWORD MAPPER — SAMPLE OUTPUT

{
  "hub": "ai-powered-seo",
  "cluster": "1.1",
  "pillar_keyword": "AI SEO operation full stack",
  "supporting_keywords": [
    "AI content pipeline SEO",
    "content scorer python SEO",
    "AI keyword research automation",
    "wp publisher REST API SEO"
  ],
  "intent_distribution": {
    "informational": 68,
    "transactional": 22,
    "navigational": 10
  }
}

Step 2 — Drafting with Claude Sonnet

All writing goes through Claude Sonnet. Non-negotiable.

We don’t mix models. We don’t use cheaper alternatives for “simpler” posts. Every single draft — pillar or supporting — gets written by Sonnet.

Template-First Drafting

Every post starts from a template. Pillar posts use pillar_template.md. Supporting posts use supporting_post_template.md. The templates specify:

  • Section structure (exact H2s and their order)
  • Word count targets per section
  • Visual density targets (callouts, tables, code blocks per 1,000 words)
  • Internal link requirements (minimum count + mandatory link targets)
  • YAML frontmatter fields

The writer (Claude Sonnet) doesn’t decide structure. The template does. This keeps 500 posts consistent without a human editor reviewing every outline.

Batch by Cluster

We write an entire cluster in one sprint. That means 1 pillar post + 4-5 supporting posts, all drafted together. Why? Because the pillar defines the internal linking structure. Supporting posts reference it. Writing them together means cross-links are accurate on the first draft.

CLAUDE SONNET — PROMPT STRUCTURE

You are writing Post {cluster_id}.{post_number} for DesignCopy.net.

Hub: {hub_name}
Cluster: {cluster_name}
Post Type: {pillar | supporting}
Template: {template_path}
Focus Keyword: {focus_keyword}

RULES:
- No paragraph longer than 3 sentences
- Visual break every 100-150 words
- Min 3 internal links, min 2 external sources
- Use styled HTML callouts from the style guide
- BANNED WORDS: [banned_words_list]

Write the full post following the template structure.
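Filling that prompt from cluster metadata is a plain template render. A minimal sketch, with `{post_type}` standing in for the `{pillar | supporting}` choice and all field values hypothetical:

```python
# Sketch: rendering the drafting prompt from cluster metadata.
# Field names mirror the prompt structure above; values are examples.

PROMPT_TEMPLATE = """You are writing Post {cluster_id}.{post_number} for DesignCopy.net.

Hub: {hub_name}
Cluster: {cluster_name}
Post Type: {post_type}
Template: {template_path}
Focus Keyword: {focus_keyword}
"""

def build_prompt(meta: dict) -> str:
    """Render the drafting prompt; raises KeyError if a field is missing."""
    return PROMPT_TEMPLATE.format(**meta)

prompt = build_prompt({
    "cluster_id": "1.1",
    "post_number": 3,
    "hub_name": "AI-Powered SEO",
    "cluster_name": "Content Pipeline",
    "post_type": "supporting",
    "template_path": "templates/supporting_post_template.md",
    "focus_keyword": "AI content pipeline SEO",
})
```

Keeping the prompt as a single template means every draft request is reproducible from the cluster metadata alone.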

⚠️ Warning

Don’t use Flash or Kimi for content writing. They produce adequate text but fail our quality gates consistently. Sonnet is 3x more expensive but passes quality scoring on first draft 85% of the time. The rewrite cost of cheaper models wipes out any savings.


Step 3 — Quality Gate (content_scorer.py)

This is where bad content dies. Every draft runs through content_scorer.py before a human ever sees it.

The scorer runs 15 automated checks:

| Check | Rule | Auto-Fix? |
| --- | --- | --- |
| Paragraph length | Max 3 sentences | No |
| Sentence length | Max 20 words avg | No |
| Readability | Flesch 60-80 | No |
| Keyword in H1 | Required | Flag |
| Keyword in first 100 words | Required | Flag |
| Internal links | Min 3 | Flag |
| External sources | Min 2 | Flag |
| Visual density | 5+ per 1,000 words | Flag |
| Banned words | Tier 1 list | Auto-replace |
| Image frequency | Every 275 words | Flag |
| Meta description length | 150-160 chars | Flag |
| H2 count | Min 5 | Flag |
| CTA count | Min 2 | Flag |
| Duplicate content | Cosine similarity < 0.3 | Flag |
| Schema markup | Required type present | Flag |
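Two of those checks can be sketched in a few lines each. This is an illustrative reconstruction, not the actual content_scorer.py source:

```python
import re

def check_paragraph_length(text: str, max_sentences: int = 3) -> list[str]:
    """Flag any paragraph longer than max_sentences (no auto-fix)."""
    flags = []
    for i, para in enumerate(text.split("\n\n")):
        sentences = [s for s in re.split(r"(?<=[.!?])\s+", para.strip()) if s]
        if len(sentences) > max_sentences:
            flags.append(f"paragraph {i}: {len(sentences)} sentences")
    return flags

def check_keyword_in_h1(markdown: str, keyword: str) -> bool:
    """True if the focus keyword appears in the first H1 heading."""
    m = re.search(r"^#\s+(.*)$", markdown, re.MULTILINE)
    return bool(m) and keyword.lower() in m.group(1).lower()
```

Each check returns either a list of flags or a boolean, so the scorer can tally failures per draft and feed them into the grading step.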

The Grading System

Each draft gets a letter grade:

  • A → Publish immediately. All 15 checks pass.
  • B → Minor fixes needed. 1-2 flags, usually internal links or image spacing.
  • C → Rewrite sections. 3+ flags or a readability failure.
  • F → Scrap and redo. Fundamental structural problems.
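The grade thresholds above map directly to a small function. A sketch under the stated rules (A = zero flags, B = 1-2 flags, C = 3+ flags or a readability failure, F = structural failure):

```python
def grade(flags: int, readability_fail: bool = False,
          structural_fail: bool = False) -> str:
    """Map flag counts to the letter grades used by the quality gate."""
    if structural_fail:
        return "F"  # fundamental structural problems: scrap and redo
    if flags >= 3 or readability_fail:
        return "C"  # rewrite sections
    if flags >= 1:
        return "B"  # minor fixes needed
    return "A"      # all checks pass: publish immediately
```

Keeping the mapping this explicit makes the gate auditable: a draft's grade is fully determined by its flag list.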

FIRST-DRAFT PASS RATE

85%

Claude Sonnet drafts scoring A or B on first submission

The banned words check is the only auto-fix. Words like “delve,” “landscape,” and “tapestry” get replaced automatically with approved alternatives. Everything else gets flagged for human review.
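The auto-replace itself is a regex substitution over a replacement map. The approved alternatives below are illustrative, since the real Tier 1 list isn't shown:

```python
import re

# Hypothetical replacement map; the real approved-alternatives list differs.
REPLACEMENTS = {"delve": "dig", "landscape": "field", "tapestry": "mix"}

def replace_banned(text: str) -> str:
    """Swap banned words for approved alternatives (case-insensitive match;
    replacements are emitted lowercase in this sketch)."""
    pattern = re.compile(r"\b(" + "|".join(REPLACEMENTS) + r")\b", re.IGNORECASE)
    return pattern.sub(lambda m: REPLACEMENTS[m.group(1).lower()], text)
```

Word-boundary anchors (`\b`) matter here: without them, "landscaped" or "delves" would be mangled mid-word.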

💡 Pro Tip

Run content_scorer.py BEFORE any manual review. It catches 90% of issues automatically. Don’t waste human attention on problems a script can find in 3 seconds.


Step 4 — Publishing (wp_publisher.py + RankMath)

Once a post passes scoring, wp_publisher.py handles everything from markdown to live WordPress page.

Here’s what it does:

  1. Reads the markdown file with YAML frontmatter
  2. Converts markdown to clean HTML
  3. Pushes to WordPress via the REST API
  4. Sets RankMath metadata — focus keyword, meta description, schema type
  5. Assigns categories, tags, and hub relationships
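The REST push in step 3 can be sketched with the standard library and WordPress application-password auth. This is an assumed minimal shape, not the actual wp_publisher.py; RankMath metadata is omitted because exposing its meta fields over REST requires extra site-side setup:

```python
import base64
import json
import urllib.request

def build_payload(title: str, html: str, status: str = "draft") -> dict:
    """Map a converted draft onto the WordPress /wp/v2/posts schema."""
    return {"title": title, "content": html, "status": status}

def publish(site: str, user: str, app_password: str,
            payload: dict) -> urllib.request.Request:
    """Build a POST request against the WordPress REST API posts endpoint."""
    token = base64.b64encode(f"{user}:{app_password}".encode()).decode()
    return urllib.request.Request(
        f"https://{site}/wp-json/wp/v2/posts",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )  # caller sends it with urllib.request.urlopen(req)
```

Returning the `Request` instead of sending it keeps the function testable offline; the `--draft` mode corresponds to `status="draft"` in the payload.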

The script supports four modes:

  • --draft → Creates a WordPress draft for manual review
  • --publish → Goes live immediately
  • --update → Updates an existing post (preserves URL, updates content)
  • --cluster → Publishes all posts in a cluster in the correct order (pillar first, then supporting)

WP_PUBLISHER — PUBLISH COMMAND

python wp_publisher.py \
  --file outputs/drafts/ai-seo-content-pipeline-automated.md \
  --mode draft \
  --site designcopy.net \
  --auth $WP_APP_PASSWORD \
  --rankmath-focus "AI content pipeline SEO" \
  --cluster 1.1

💡 Pro Tip

Always publish as a draft first and review in WordPress before going live. Content pushed through the REST API doesn’t always render styled HTML callouts the way the front end does. A 30-second visual check prevents broken formatting.


Step 5 — Orchestration (n8n)

n8n is the glue. It connects every script into automated workflows that run without manual triggers.

Scheduled Workflows

We run three recurring workflows:

  • Daily — Keyword monitoring. Checks for ranking changes and new keyword opportunities.
  • Weekly — Content batch. Triggers drafting for the next cluster in the editorial calendar.
  • Monthly — Freshness audit. Flags posts older than 90 days that need content updates.

Error Handling

If content_scorer.py returns a grade C or F, the workflow pauses. It doesn’t try to fix the content automatically. Instead, it sends a Telegram notification with:

  • The post title and cluster ID
  • The specific checks that failed
  • A direct link to the draft file

A human picks it up from there. Automation handles volume. Humans handle judgment calls.
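That failure notice maps cleanly onto the Telegram Bot API's sendMessage method. A sketch of what the n8n workflow does under the hood, with the message format assumed from the three items above:

```python
import urllib.parse
import urllib.request

def format_failure_message(title: str, cluster_id: str,
                           failed_checks: list[str], draft_path: str) -> str:
    """Assemble the quality-gate failure notice: title + cluster, failed
    checks, and a link to the draft file."""
    checks = "\n".join(f"- {c}" for c in failed_checks)
    return (f"Quality gate failed: {title} (cluster {cluster_id})\n"
            f"{checks}\nDraft: {draft_path}")

def notify(bot_token: str, chat_id: str, text: str) -> urllib.request.Request:
    """Build a POST to the Telegram Bot API sendMessage endpoint."""
    params = urllib.parse.urlencode({"chat_id": chat_id, "text": text})
    return urllib.request.Request(
        f"https://api.telegram.org/bot{bot_token}/sendMessage",
        data=params.encode(),
        method="POST",
    )  # send with urllib.request.urlopen(...)
```

In practice n8n's built-in Telegram node handles this; the sketch just shows there's no magic behind the notification step.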

Infrastructure

n8n runs self-hosted on a $5/month VPS. It uses about 200MB RAM. No paid plans, no per-workflow fees. The entire orchestration layer costs less than a single freelance article.

“The best automation isn’t the one that does the most. It’s the one that knows when to stop and ask a human.”

— DesignCopy Engineering


Our Publishing Cadence

We don’t publish randomly. Every post fits a schedule tied to cluster completion.

The Phases

Phase 0 (Month 1-2): Foundation.
Hub pages go up first. Then the first 2 clusters — fully complete with pillar + all supporting posts. This gives Google a clear topical structure to crawl from day one.

Phase 1 (Month 3-6): Core build-out.
8 posts per week. We target the highest-volume clusters first. Each cluster ships complete — never half-finished.

Phase 2 (Month 7-12): Full velocity.
12 posts per week. All 25 clusters in active production. Freshness updates begin for Phase 0 content.

Why Batch by Cluster?

Three reasons:

  1. Internal links work on publish day — no broken references
  2. Google sees topical depth immediately, not scattered signals
  3. Writers (Claude Sonnet) maintain context across related posts, producing tighter content

Scattered publishing — one post here, one there — kills topical authority. Clusters build it.

See the Full Toolchain

Scripts, templates, and workflow configs: AI SEO Operation — Full Stack Breakdown


FAQ

How long does it take to produce one blog post?

From keyword to published draft: about 18 minutes. Keyword mapping is batched (4 minutes for a full cluster). Drafting takes 8-10 minutes per post. Scoring runs in under 30 seconds. Publishing takes about 15 seconds. The human review step — scanning the WordPress draft — adds another 3-5 minutes.

Can I use this pipeline without n8n?

Yes. n8n is the orchestration layer, not the engine. You can run each script manually from the command line. keyword_mapper.py → write the draft → run content_scorer.py → run wp_publisher.py. n8n just automates the sequence and handles scheduling. If you’re producing fewer than 5 posts per week, manual execution works fine.

What happens when content_scorer.py fails a post?

Grade C posts get specific flags — you’ll know exactly which sections need rewriting. Fix those sections and re-run the scorer. Grade F posts are rare (less than 3% with Sonnet) and usually mean a template mismatch or a prompt error. Those get regenerated from scratch.

Is AI-generated content good for SEO?

Google’s stance is clear: they care about content quality, not content origin. Our AI content pipeline SEO approach works because the quality gates enforce the same standards a senior editor would. The scorer doesn’t care who wrote it. It checks readability, keyword placement, link structure, and visual density. If it passes, it performs.

How do you prevent duplicate content across 500 posts?

Three layers. First, keyword_mapper.py assigns unique focus keywords — no two posts target the same term. Second, templates enforce different structural patterns for pillar vs. supporting posts. Third, content_scorer.py runs a cosine similarity check against all existing posts. Anything above 0.3 similarity gets flagged.
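The third layer's similarity check can be sketched with a plain bag-of-words cosine. The real scorer likely vectorizes with TF-IDF, but the threshold logic is the same:

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine between two texts (sketch; a production
    scorer would typically use TF-IDF weighting instead of raw counts)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def is_duplicate(draft: str, existing: list[str],
                 threshold: float = 0.3) -> bool:
    """Flag a draft whose similarity to any existing post exceeds 0.3."""
    return any(cosine_similarity(draft, post) > threshold for post in existing)
```

Comparing every new draft against all 500 posts is cheap at this scale; caching the existing posts' vectors keeps the check under a second.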


🔎 Key Takeaways

  • The AI content pipeline SEO workflow has 6 steps: keyword mapping, outline generation, drafting, quality scoring, publishing, and orchestration.
  • Claude Sonnet handles all drafting — it passes quality gates on first draft 85% of the time.
  • content_scorer.py runs 15 automated checks and grades every post A through F before any human review.
  • wp_publisher.py converts markdown to WordPress posts with full RankMath metadata via REST API.
  • Batch by cluster, not by individual post. Ship complete topical groups so internal links work from day one.
  • n8n orchestrates everything on a $5/month self-hosted VPS using 200MB RAM.

What to Read Next

Build Your Own AI Content Pipeline

Start with the pillar post. It covers every script, template, and workflow config you need to replicate this system.