Who builds this — and why

About Safe to Say

The AI governance community has good arguments but weak framing. The other side has $175 million in PAC spending, 3,500 lobbyists, and professional messaging operations. The governance side has policy papers, conviction, and talking points that have never been stress-tested against the audiences they need to reach.

Safe to Say exists to close that gap. Each issue takes a live AI governance event and produces concrete, audience-specific messaging guidance — what to avoid, what to use, and what’s safe to say — for policymakers, press, and public. Every recommendation is grounded in communications research and stress-tested against modeled audiences before the framing window closes.

No shared messaging infrastructure exists for AI governance communications. The clean energy sector has the Potential Energy Coalition: 60,000 respondents, 23 countries, 8 years of message testing. AI governance has nothing equivalent. Safe to Say is building it.

Every organization fighting for AI governance deserves messaging as rigorous as the industry’s.

The methodology

Each Safe to Say brief is produced using simulated audience testing — AI-modeled audience reactions across 17 personas representing policymaker, press, and public segments. The methodology is designed and operated by a communications strategist with four years of federal policy communications experience, and grounded in empirical research on public opinion, media framing, and political communication, including the only published AI-specific audience segmentation study.

Simulated testing is a screening tool, not a prediction engine. It narrows thousands of message candidates down to a short list, checking each against a curated evidence base, before real-world deployment.
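
For readers who want a mental model of that screening step, here is a minimal, illustrative sketch in Python. It is not Safe to Say's actual pipeline: the persona names, the score_message stub, and the 0.5 floor are hypothetical, and a real run would replace the heuristic with AI-modeled audience reactions grounded in the evidence base.

```python
from dataclasses import dataclass

# Hypothetical persona segments; the real methodology uses 17.
PERSONAS = ["policymaker_staffer", "beat_reporter", "skeptical_public"]

@dataclass
class Result:
    message: str
    scores: dict   # persona -> modeled reaction score (0-1)
    flagged: bool  # True if any segment falls below the floor

def score_message(message: str, persona: str) -> float:
    """Stand-in for an AI-modeled audience reaction.

    A real implementation would condition on the persona and the
    curated evidence base; this trivial jargon penalty only
    illustrates the shape of the screening step.
    """
    jargon = {"existential", "alignment tax", "p(doom)"}
    penalty = sum(term in message.lower() for term in jargon) * 0.2
    return max(0.0, 0.8 - penalty)

def screen(candidates: list[str], floor: float = 0.5) -> list[Result]:
    """Narrow a large candidate pool: keep messages that clear the
    floor for every segment, flag the rest for human review."""
    results = []
    for msg in candidates:
        scores = {p: score_message(msg, p) for p in PERSONAS}
        results.append(Result(msg, scores, flagged=min(scores.values()) < floor))
    return results

if __name__ == "__main__":
    for r in screen([
        "Independent testing before release, like we already require for planes.",
        "Unaligned AI is an existential risk; the alignment tax is worth paying.",
    ]):
        status = "needs review" if r.flagged else "passes screen"
        print(f"{status}: {r.message}")
```

The point is only the shape: many candidates in, per-segment scores out, and anything that fails a segment goes to human review rather than being auto-approved.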

When the methodology’s recommendation conflicts with communications judgment — for example, when a message scores well with simulated audiences but relies on a pattern that research shows backfires long-term — the recommendation is overridden and the reasoning is documented. The system assists. The judgment is human.

Safe to Say is an independent project serving the AI governance ecosystem. It is not affiliated with any single organization — the methodology and recommendations serve the field, not a position.

Read more about the methodology


Ready for the next story that breaks?

Subscribe — free, event-driven, backed by research.

Subscribe on Substack