Every recommendation has been stress-tested. Here’s how.

Every message is stress-tested against modeled audience panels — simulating how policymaker staffers, beat reporters, and public segments would actually respond. What survives the stress test is what ships.

From event to deployable messaging

Your message degrades every time someone repeats it

Your talking points travel through three people before reaching a decision-maker. We stress-test what arrives.

You say

“AI companies have spent $175 million buying elections and corrupting our democracy. We need to ban this kind of spending before it’s too late.”

Staffer tells colleague

“Some AI advocacy group came in saying Big Tech is buying elections — they want to ban PAC spending or something. Pretty aggressive.”

Colleague tells partner

“There’s this push to ban AI companies from political spending. Sounds like another regulation thing.”

Partner tells friend

“Some people want to ban tech companies from donating to campaigns. You know how that goes.”

Verdict: Message Lost

By the third retelling, the specific facts ($175M, ads about everything except AI) are gone. The message has degraded into a generic “ban corporate spending” line — indistinguishable from any other money-in-politics argument. The unique insight has disappeared.
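The retelling chain above can be sketched as a simple survival check. This is an illustrative toy, not the production simulation: the claims and per-hop retention sets are hypothetical stand-ins for what each reteller actually preserves.

```python
# Hypothetical sketch: a message is a set of specific claims, and each
# simulated retelling hop is the set of claims that hop preserves.
# Whatever survives every hop is what actually reaches the decision-maker.

def survives_retelling(claims, hops):
    """Return the claims still intact after every retelling hop."""
    remaining = set(claims)
    for retained in hops:
        remaining &= retained  # a claim survives only if this hop keeps it
    return remaining

original = {"$175M figure", "ads avoid AI topic", "ban PAC spending"}
hops = [
    {"$175M figure", "ban PAC spending"},  # staffer -> colleague
    {"ban PAC spending"},                  # colleague -> partner
    {"ban PAC spending"},                  # partner -> friend
]

print(survives_retelling(original, hops))  # only the generic ask survives
```

By the last hop, only the generic “ban spending” claim remains — exactly the degradation pattern described above.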

Words carry political DNA

One word can cost 9–20 points of support. These aren’t style preferences — they’re documented in studies of 60,000+ people.

Avoid “Ban” → Use “Standards”

9–20 pt swing. Research on 60,000+ participants across 23 countries. “Standards” outperforms across the political spectrum.

Avoid “Regulation” → Use “Safety standards”

“Regulation” triggers a freedom-vs-control frame with conservative audiences. “Safety standards” activates a level-playing-field intuition instead.

Avoid “Accountability” → Retold as “more regulations”

In simulated retelling tests, “accountability” degrades to “more regulations” — the original framing gets stripped and replaced with the opposition’s preferred frame.
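The word-level guidance above amounts to a substitution table. A minimal checker might look like the sketch below; the swap table mirrors the findings, but the function itself is a hypothetical illustration, not the Safe to Say tooling.

```python
# Hypothetical word-swap checker: flag high-risk words in a draft message
# and suggest the better-performing alternative from the findings above.

SWAPS = {
    "ban": "standards",
    "regulation": "safety standards",
}

def flag_risky_words(message):
    """Return a dict of risky words found in the message -> suggested swap."""
    lowered = message.lower()
    return {word: swap for word, swap in SWAPS.items() if word in lowered}

print(flag_risky_words("We need a ban and tough regulation."))
# flags both risky words with their suggested replacements
```

A real checker would need word-boundary matching (so “urban” doesn’t trip the “ban” flag), but the table-lookup structure is the point.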

What this methodology is — and what it isn’t

What it is

Safe to Say stress-tests every message against 17 audience models — representing policymaker staffers, journalists, and public segments — grounded in the strongest available communications research. Each model decides whether it would engage with the message in its typical context before providing full analysis.

This catches the most common failure: messages that are substantively sound but get scrolled past, filed without reading, or forwarded without the key point.
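The engage-first-then-analyze gate described above can be sketched as a two-stage screen. Everything here is a hypothetical illustration — the model names, engagement rules, and threshold are invented stand-ins for the 17 audience models.

```python
# Hypothetical two-stage screen: each audience model first decides whether
# it would engage with the message at all; full analysis runs only for
# messages that clear the engagement gate for enough models.

def screen(message, models, threshold=0.5):
    engaged = [m for m in models if m["engages"](message)]
    if len(engaged) / len(models) < threshold:
        return {"verdict": "filtered out", "engaged": len(engaged)}
    # Full per-model analysis would run here; stubbed as a pass-through.
    return {"verdict": "analyze", "engaged": len(engaged)}

# Toy engagement rules standing in for modeled audience behavior.
models = [
    {"name": "staffer",  "engages": lambda msg: len(msg.split()) <= 30},
    {"name": "reporter", "engages": lambda msg: "$" in msg},
    {"name": "public",   "engages": lambda msg: "ban" not in msg.lower()},
]

print(screen("AI firms spent $175M on election ads. Safety standards now.", models))
```

The design point is the gate itself: a message that no model would open never reaches full analysis, which is how the scrolled-past and filed-unread failures get caught early.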

What it’s built on

Every recommendation traces to specific research in a curated evidence base.

This base grows with every issue. As new message testing data, polling, and RCTs on AI governance communications are published, they’re incorporated. The goal is a compounding knowledge advantage — each brief builds on everything before it.

What it isn’t

This is not polling. Not a focus group. Not a prediction engine. It’s a screening tool — narrowing thousands of message candidates against a curated evidence base before real-world deployment.

The value is in what it eliminates. Most messaging fails not because the right option wasn’t considered, but because the wrong options weren’t caught. This methodology catches backfires, retelling failures, and mobilization gaps before your message meets a real audience.

A starting point, not a finish line.

Know what to say before the story breaks

Published when the framing window opens.

Subscribe on Substack