About the model
ContentRX is the content model that a working senior content designer would run on their own UI copy. The 47 standards, the 13 moments, the weighting system that says “in a destructive confirmation, emphasize the consequence; in a first-encounter, relax the tone” — all of it carries one designer's judgment calls, attributed and published.
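A minimal sketch of that weighting idea, assuming a lookup from moment to per-standard weights: the standard IDs come from this page, but the weight values, the `destructive_action` and `first_encounter` keys as a schema, and the scoring function are all invented for illustration, not the model's actual shape.

```python
# Hypothetical moment-weighting table. Standard IDs (VT-05, ACT-01, CLR-01)
# are real ContentRX standards; every number here is an invented example.
MOMENT_WEIGHTS = {
    # In a destructive confirmation, consequence-naming standards dominate.
    "destructive_action": {"ACT-01": 1.0, "VT-05": 0.6, "CLR-01": 0.8},
    # In a first encounter, tone standards relax.
    "first_encounter": {"ACT-01": 0.7, "VT-05": 0.3, "CLR-01": 1.0},
}

def weighted_score(moment: str, findings: dict[str, float]) -> float:
    """Combine per-standard severities (0..1) using the moment's weights.

    Standards with no weight for this moment fall back to a neutral 0.5.
    """
    weights = MOMENT_WEIGHTS.get(moment, {})
    return sum(weights.get(std, 0.5) * sev for std, sev in findings.items())
```

The point of the table, on this page's own account, is that the same finding scores differently depending on the moment it lands in.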
Who wrote the model
Robo — a senior content designer. {bio: years shipping product copy, notable teams or companies, anything Robo wants surfaced here — brackets make this block fail the copy-pin test until edited} The pattern recognition that gets built over a career of that work is what the model encodes. The rules you can look up in a style guide are one input; the judgment calls about whether a modal needed to exist at all are a different input, and they're what the model is really for.
None of the moments or weights are inventions. Every one of them is a distillation of a live review where a content designer said “that button shouldn't say Submit, it should say what happens next.” The /sources page lists every style guide and OSS repo the model leaned on; the changelog shows every revision as it happens.
What a content designer sees
Take an error message that reads: “An unexpected error occurred.”
A grammar linter reads this as a fine sentence. A content designer reads it and asks three things:
- Does it own the failure? “An unexpected error” is passive about something the system did. “We couldn't load your dashboard.” names the actor and names the specific failure.
- Does it blame the user? “Invalid input” tells the user they did something wrong. “We didn't recognize that email format.” describes the state without assigning fault.
- Does it point somewhere? An error without a next action is a dead end. “Try reloading — if it keeps happening, let us know at support@example.com.” closes the loop.
Those three questions aren't in Grammarly's job description. They are in ContentRX's. The same three questions drive VT-05 (voice in error recovery), ACT-01 (specific verbs over generic affirmatives), and CLR-01 (plain language, matched to the audience).
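As a loose sketch, the three questions could be mechanized like this. The regex heuristics and function names are assumptions invented for illustration, far cruder than whatever ContentRX actually runs:

```python
import re

def owns_failure(msg: str) -> bool:
    """Flag actor-less framing like 'An unexpected error occurred.'"""
    return not re.search(r"\ban (unexpected|unknown) error\b", msg, re.I)

def avoids_blame(msg: str) -> bool:
    """Flag user-blaming phrasing like 'Invalid input'."""
    return not re.search(r"\binvalid\b|\byou (entered|provided)\b", msg, re.I)

def points_somewhere(msg: str) -> bool:
    """Require a next action: a recovery verb or a contact route."""
    return bool(re.search(r"\b(try|retry|reload|contact|let us know)\b", msg, re.I))

def review(msg: str) -> list[str]:
    """Run the three questions; return the ones the copy fails."""
    findings = []
    if not owns_failure(msg):
        findings.append("doesn't own the failure")
    if not avoids_blame(msg):
        findings.append("blames the user")
    if not points_somewhere(msg):
        findings.append("no next action")
    return findings
```

Run against the example above, “An unexpected error occurred.” fails on ownership and on next action, while the rewritten copy passes all three.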
Why the model is public
The moat isn't the rules. The rules are visible — 47 of them, each with a permalink, pass/fail examples, applicable content types, and a version history. Anyone can read the taxonomy, and anyone can disagree with a specific call.
What's harder to replicate is the weighting — which standards get emphasized in which moment, and why. That's what a content designer builds over a career, and it's what /model makes browsable. The model gets better every time Robo dismisses a verdict as “the standard doesn't apply here” — the override signal feeds back into calibration. And every weekly kappa measurement lands on /accuracy with its own confidence interval; when measured ceiling diverges from the design target, thresholds move with the measurement, not with the target.
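The weekly kappa measurement is presumably Cohen's kappa over paired verdicts; here is a minimal sketch with a rough large-sample standard error for the confidence interval. That the pairing is model verdicts against a human reviewer is an assumption, and the data in the test is invented.

```python
import math

def cohens_kappa(a: list[str], b: list[str]) -> tuple[float, float]:
    """Return (kappa, approximate standard error) for two paired label lists.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is chance agreement from each rater's marginal label frequencies.
    Assumes p_e < 1 (the raters don't each use a single identical label).
    """
    n = len(a)
    labels = set(a) | set(b)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    kappa = (p_o - p_e) / (1 - p_e)
    se = math.sqrt(p_o * (1 - p_o) / (n * (1 - p_e) ** 2))
    return kappa, se
```

A 95% interval is then `kappa ± 1.96 * se`, which is the “its own confidence interval” each /accuracy measurement would carry.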
Why this isn't the layer you already have
Grammarly checks grammar. LanguageTool checks grammar plus style. Alex checks inclusive language. They're all excellent at what they do, and running ContentRX on top of any of them is expected, not redundant.
ContentRX is checking a different thing: that a destructive confirmation names what will be destroyed, that a permissions button asks for access instead of declaring submission, that an empty state points somewhere. None of those are grammatical errors. All of them are content-design errors. A different job, a different model.
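One of those checks fits in a few lines, which is exactly why a grammar linter never runs it. The function name and the substring heuristic are assumptions for illustration, not how ContentRX implements the standard:

```python
def names_the_destroyed_object(copy: str, object_name: str) -> bool:
    """Pass only if a destructive confirmation names the thing it will destroy."""
    return object_name.lower() in copy.lower()

# "Are you sure?" is grammatically flawless and fails the check;
# naming the object passes it.
names_the_destroyed_object("Are you sure?", "Q3 report")                      # False
names_the_destroyed_object("Delete 'Q3 report'? This can't be undone.", "Q3 report")  # True
```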
How to disagree with the model
If you run ContentRX and the verdict is wrong, there are three paths:
- Disagree with this finding — per-violation three-button stance captures your override for team-level analytics and, at scale, moves the standard's calibration.
- Correct the moment — if the tool detected destructive_action when you were writing a first-encounter, the moment banner has a picker that routes your correction into the moment-classifier backlog.
- Expand the rationale chain — every verdict ships with its full pipeline, hop by hop, with confidence at each step. Upstream misdetection is a one-click feedback path; you shouldn't have to guess which hop went sideways.
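The three paths could plausibly share one feedback payload; every field name below is invented for illustration and is not ContentRX's actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Feedback:
    """Hypothetical shape for the three disagreement paths above."""
    verdict_id: str
    stance: Optional[str] = None            # the per-violation three-button stance
    corrected_moment: Optional[str] = None  # e.g. "first_encounter" when the
                                            # detector said destructive_action
    flagged_hop: Optional[int] = None       # which pipeline hop went sideways

# One payload per path: an override, a moment correction, or an upstream flag.
fb = Feedback(verdict_id="v_123", corrected_moment="first_encounter")
```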