Most of the conversation about AI in WordPress is still about content generation — write a paragraph here, suggest a layout there. That’s useful, but it’s not where the leverage is.

The leverage is making products themselves callable by AI agents. Not “ask the AI to draft something for you to paste in” — let the agent operate the product directly, end-to-end, inside guardrails.

Over the last few weeks I shipped that for Promptless WP: a Model Context Protocol (MCP) connector that lets Claude scaffold pages, deploy structured content, build navigation menus, and apply design tokens to a real WordPress site. Six tools, full audit trail, no UI required.

Here’s what mattered.

The sandbox problem and the bridge solution

Claude Cowork runs in a sandboxed environment that blocks outbound HTTP. So a naïve “Claude → WordPress REST API” call fails immediately with a proxy error.

The fix is architectural: the MCP server doesn’t live in the sandbox. It runs as a Node.js process on the user’s machine, talks to Claude over stdio (one of the MCP protocol’s standard transports), and forwards authenticated HTTPS requests to the WordPress site.

Claude (sandbox)  ⟷  MCP server (user's machine)  ⟶  WordPress REST API
       (stdio MCP)               (HTTPS + Basic Auth)

The MCP server itself is a single Node.js file with no external dependencies — just Node’s http and https. It auto-detects whether the host is using LSP-style Content-Length framing or newline-delimited JSON-RPC, so the same script works across Claude Desktop, Claude Code, and any other MCP host.
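That auto-detection can be sketched roughly like this. This is an illustrative reconstruction, not the shipped Promptless WP code: the function names and the regex details are assumptions, but the idea is the same — peek at the incoming bytes and decide between the two framings.

```javascript
// Decide how the host frames JSON-RPC messages by inspecting the first
// bytes on stdin: LSP-style hosts open every message with a
// Content-Length header; others send one JSON object per line.
function detectFraming(firstChunk) {
  if (/^Content-Length:\s*\d+/i.test(firstChunk)) return "content-length";
  return "ndjson";
}

// Pull one complete message off the front of the buffer, or return null
// if the buffer doesn't hold a full message yet.
function extractMessage(buffer, framing) {
  if (framing === "content-length") {
    const match = buffer.match(/^Content-Length:\s*(\d+)\r?\n\r?\n/i);
    if (!match) return null; // header not complete yet
    const bodyStart = match[0].length;
    const length = Number(match[1]);
    if (buffer.length < bodyStart + length) return null; // body still arriving
    return {
      message: JSON.parse(buffer.slice(bodyStart, bodyStart + length)),
      rest: buffer.slice(bodyStart + length),
    };
  }
  const newline = buffer.indexOf("\n");
  if (newline === -1) return null;
  return {
    message: JSON.parse(buffer.slice(0, newline)),
    rest: buffer.slice(newline + 1),
  };
}
```

Because both framings carry identical JSON-RPC payloads, everything above this layer is transport-agnostic — which is what lets one script serve Claude Desktop, Claude Code, and other MCP hosts.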

That’s not glamorous engineering. But it’s the difference between “demo on my machine” and “actually works.”

Authentication: don’t reinvent the wheel

The temptation with any new API is to build a custom token system: generate keys, hash them, store them in wp_options, ship a “Connector Settings” page where users paste them in.

I started there. Then I deleted it.

WordPress 5.6+ ships Application Passwords natively. They support HTTP Basic over HTTPS, are scoped per-application, are revocable from the WordPress admin in two clicks, and are already validated by WordPress’s REST permission system. Zero new auth code. Real audit trail. Standard tooling.

The only thing left for me to write was a permission callback — current_user_can('edit_pages') plus a premium-license check. That’s it.

The principle: when the platform you’re building on already solved a problem well, you build on top of their solution rather than around it. New code is liability.

Six tools, atomic operations, fail loudly

The connector exposes six tools, including preflight, scaffold, batch_deploy, and reset.

Two design decisions worth calling out:

Atomic scaffolding. When scaffold creates a 12-page site, all 12 either succeed or none do. A partial failure rolls back. The reason: agents retry. If a partial scaffold leaves orphan pages and the agent retries, you get duplicate pages and broken parent/child hierarchies — a debugging nightmare your support inbox will eventually have to deal with. Atomicity moves the failure to a clean error message instead.
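The rollback semantics amount to create-all-or-delete-what-you-made. A sketch, with createPage and deletePage standing in for the real WordPress REST calls:

```javascript
// Illustrative all-or-nothing scaffold: create every page, and if any
// creation fails, delete the ones already created before rethrowing.
async function scaffoldAtomically(pages, createPage, deletePage) {
  const created = [];
  try {
    for (const page of pages) {
      created.push(await createPage(page)); // returns the new page ID
    }
    return created;
  } catch (err) {
    // Roll back in reverse order so children are removed before parents.
    for (const id of created.reverse()) {
      await deletePage(id);
    }
    throw new Error(`scaffold rolled back: ${err.message}`);
  }
}
```

The agent that hits the rolled-back error can simply retry from scratch — exactly the behavior a partial scaffold would have broken.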

Per-endpoint rate limits. preflight allows 60 requests/minute (cheap, agents poll it). batch_deploy allows 2/minute. reset allows 1/minute. The pattern: rate limits scale inversely with cost. Read-heavy endpoints stay open; destructive endpoints get throttled. This isn’t about thwarting attackers — it’s about giving agents (which can loop accidentally) a clear backpressure signal before they trash a site.
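A fixed-window limiter along those lines might look like this. The per-endpoint budgets mirror the numbers above; the class itself is a sketch, not the shipped PHP implementation:

```javascript
// Illustrative fixed-window rate limiter with per-endpoint budgets that
// scale inversely with cost.
class EndpointLimiter {
  constructor(limits) {
    this.limits = limits;     // { endpoint: maxRequestsPerMinute }
    this.windows = new Map(); // endpoint -> { start, count }
  }

  allow(endpoint, now = Date.now()) {
    const limit = this.limits[endpoint];
    if (limit === undefined) return true; // unthrottled endpoint
    const w = this.windows.get(endpoint);
    if (!w || now - w.start >= 60_000) {
      this.windows.set(endpoint, { start: now, count: 1 });
      return true;
    }
    if (w.count < limit) {
      w.count++;
      return true;
    }
    return false; // caller should surface a clear backpressure error
  }
}

const limiter = new EndpointLimiter({
  preflight: 60,   // cheap, polled constantly
  batch_deploy: 2, // expensive, writes content
  reset: 1,        // destructive, once a minute at most
});
```

The important part is the return value on denial: the agent should get a structured "slow down" error it can reason about, not a silent drop.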

Reuse, don’t duplicate

The connector code is small — about 1,500 lines across six PHP classes — because it doesn’t reimplement anything the plugin already does. It calls the existing license manager, hands off content saves to the existing REST controller, lets the SEO manager and content sync layer handle the rest. The connector is a thin layer that translates “what an agent wants to do” into “what the plugin already knows how to do.”

This is the unglamorous secret to shipping things that don’t break: most of the work is composing existing systems correctly, not writing new ones.

What this enables

There’s a difference between a product that has an AI feature and a product that’s agent-callable.

The first is “click here, the AI helps you do this thing inside our UI.”

The second is “an agent can use this product as one tool in a larger workflow, with proper auth, rate limits, audit trails, and rollback semantics.”

The second is what survives. AI capabilities will keep improving — the products that benefit aren’t the ones with the prettiest chat UIs, they’re the ones whose primitives are exposed cleanly enough that an agent can compose them.

That’s what the MCP connector does for Promptless: it stops being just a page builder and becomes a deployment target for any agent workflow that knows how to produce structured content.

If you’re building anything in this space, the question I’d push you to ask is: what would your product have to look like for a competent AI agent to use it well? Most of the answer is boring engineering — auth, idempotency, atomicity, rate limits, audit trails. But that’s the work that makes AI integration actually shippable.