Install

Five surfaces, one content model.

Content-standards enforcement is moving upstream into the generation layer. Install ContentRX where your team actually writes product copy — in the IDE, on the command line, in pull requests — with the Figma plugin alongside for the strings that arrive through design.

All five surfaces hit the same public API. One API key covers them all. Pick the ones your team lives in.

Surface 1

MCP server — Claude Code, Cursor, any MCP client

The ContentRX MCP server exposes four tools to any MCP client: evaluate_copy, classify_moment, explain_violation, and list_standards. Claude Code or Cursor can check a string inline during generation — the LLM narrates the moment first, then the verdict, then the rationale chain on demand.

Claude Code:

claude mcp add contentrx -- uvx contentrx-mcp

Any MCP client (stdio):

export CONTENTRX_API_KEY=cx_...
uvx contentrx-mcp

Source + full tool surface: mcp-server.
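Many MCP clients are configured with a JSON block rather than a CLI command. A minimal sketch of that config, assuming the same `uvx contentrx-mcp` stdio launch shown above (the top-level key and config file location vary by client; check your client's MCP docs):

```json
{
  "mcpServers": {
    "contentrx": {
      "command": "uvx",
      "args": ["contentrx-mcp"],
      "env": { "CONTENTRX_API_KEY": "cx_..." }
    }
  }
}
```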

Surface 2

LSP server — inline diagnostics in any LSP editor

Diagnostics appear as you type, the same way TypeScript errors do. Yellow squiggles for violations; blue squiggles for review-recommended strings. Right-click a diagnostic to rewrite the string in place (a Claude-backed call to /api/suggest-fix), open the standard's rationale page, or mark it as a false positive.

VS Code / Cursor (one-click — the extension launches the server for you):

# Install the ContentRX extension from the Marketplace
# Then: command palette → "ContentRX: Set API key"

Any LSP editor (Zed, Neovim, JetBrains, Emacs lsp-mode):

uv tool install contentrx-lsp        # or: pipx install contentrx-lsp
export CONTENTRX_API_KEY=cx_...
# Point your editor's LSP client at `contentrx-lsp` (stdio)
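For Neovim specifically, pointing the built-in LSP client at the stdio binary takes a few lines of Lua. A sketch, assuming `contentrx-lsp` is on your PATH and `CONTENTRX_API_KEY` is exported (Neovim 0.8+; not official config):

```lua
-- Attach contentrx-lsp to JSX/TSX buffers over stdio
vim.api.nvim_create_autocmd("FileType", {
  pattern = { "javascriptreact", "typescriptreact" },
  callback = function()
    vim.lsp.start({
      name = "contentrx",
      cmd = { "contentrx-lsp" },  -- stdio transport
    })
  end,
})
```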

Scope: JSX / TSX text children + copy attributes (alt, aria-label, placeholder, title, tooltip, label). Random string literals aren't extracted — false-positive risk is too high. Source: lsp-server.

Surface 3

CLI — contentrx-cli on PyPI

Stdlib-only runtime (no requests, no httpx). One pip install and you're checking strings from any terminal or CI runner. Exit codes are part of the public API so pipelines can gate on them.

pip install contentrx-cli
export CONTENTRX_API_KEY=cx_...
contentrx "Click here"
contentrx --batch strings.txt --json
contentrx --explain "Are you sure?"

--explain prints the full rationale chain after the verdict. --json emits the raw API response for scripting. Full flag list: cli-client.
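Because --json emits the raw response, gating logic can stay in a few lines of stdlib Python. A sketch with a hypothetical response shape (the field names below are illustrative assumptions, not the documented schema):

```python
import json

# Hypothetical --json payload; the real schema is whatever the API returns.
sample = '{"verdict": "violation", "moment": "destructive-confirm", "rationale": []}'

def exit_code(raw: str) -> int:
    """Map a verdict to a CI-style exit code (0 = pass, 1 = anything else)."""
    return 0 if json.loads(raw).get("verdict") == "pass" else 1

print(exit_code(sample))  # prints 1
```

In a real pipeline you'd read the response from stdin rather than a literal, and the verdict-to-exit-code mapping should match whatever the CLI itself documents.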

Surface 4

GitHub Action — PR gate

Drop a YAML snippet into .github/workflows/ and ContentRX evaluates strings touched in every pull request. Use fail-on: review to block merges on review-recommended verdicts, or stay permissive with the default fail-on: violation.

# .github/workflows/contentrx.yml
name: ContentRX
on: pull_request
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: thenewforktimes/contentrx-action@v1
        with:
          api-key: ${{ secrets.CONTENTRX_API_KEY }}
          fail-on: violation  # or review

Action source + the full input surface: github-action.

Surface 5 · alongside

Figma plugin — design-time check

The Figma plugin catches strings that arrive through design — badges, empty states, onboarding flows — before they land in code. Per-string verdicts, moment banners, three-button stance on every finding (Agree / Disagree / Ship anyway), and the rationale chain on demand.

Install the plugin from its Figma Community listing.

Sign in once via the dashboard to mint an API key; paste it into the plugin's sign-in panel.

Stacking surfaces

The surfaces stack cleanly. A team typically runs the MCP server locally for inline checks during authoring, the LSP server for diagnostics as they type, the GitHub Action as the PR gate, and the Figma plugin for designer-led flows. The CLI covers batch jobs and one-off checks from any terminal. All five share one model, one quota, one set of team rules.

See /model for the taxonomy, /accuracy for the calibration numbers, and /dashboard to mint an API key.