Gone is the culture of “No.” In its place is the culture of “Can Do.” This is a call for real impact, scaled responsibly and transparently. For the first time, the use of AI is specifically tied to driving the agency mission, as well as to increasing efficiency. The light is green for agency IT teams to move forward with proofs of concept.
But if you’re staring down a 30-, 90-, or 180-day deadline contained in the memo—whether appointing a Chief AI Officer, forming an AI Governance Board, or publishing an AI strategy—you might be wondering where to start.
M-25-21 Isn’t a Compliance Checklist—It’s a Roadmap for Action
The good news: Responsible AI adoption doesn’t require a moonshot. It just requires an innovative, iterative approach grounded in real-world mission needs. Fortunately, industry practice has caught up with the often valid concerns that slowed adoption, taming what only last year was a wild west of risks. Acceptable mitigations now exist, including Guard Rails, Response Auditing, Anomaly Detection, Secure Private LLMs, Bias Detection, and Data Catalogs.
The memo empowers agencies to move quickly while putting best practices in place for high-impact AI systems. It calls for proactive governance, minimum risk management practices, and AI strategies that go beyond policy-speak. The message is clear: AI isn’t a side project anymore. It’s central to how government improves services, enhances equity, and delivers value to the American people.
So how do you go from policy to practice?
We suggest a pragmatic three-move approach that aligns directly with M-25-21’s intent: fast, focused, and risk-aware execution.
OMB doesn’t ask you to start from scratch—it encourages reuse and adaptation. Most agencies already operate under frameworks like NIST’s AI Risk Management Framework, existing IT security policies, and privacy guidelines. That’s your baseline. Be able to answer how you are satisfying the following:
1) Detecting requests for critical data, such as PII, financial records, and mission-specific information, using Guard Rails, Response Auditing, and Anomaly Detection (a minimal sketch of this pattern follows this list).
2) Ensuring the privacy of all data uploaded to the LLM with solutions like Secure Private LLMs.
3) Managing and limiting what data individual users can see via properly maintained Data Catalogs linked to Guard Rails and Response Auditing.
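To make the first item concrete, here is a minimal sketch of how a Guard Rail paired with Response Auditing might screen prompts and responses for critical data. The patterns, function names, and blocking behavior are illustrative assumptions, not any specific product’s API; real deployments would use far richer detectors.

```python
import re

# Illustrative patterns a Guard Rail might use to flag critical data in
# prompts or responses. Production systems would use richer detectors
# (NER models, checksum validation, agency-specific rules).
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def guard_rail_check(text: str) -> list[str]:
    """Return the names of any critical-data patterns found in the text."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

def audited_response(user: str, prompt: str, response: str, audit_log: list) -> str:
    """Screen a prompt/response pair and record the outcome for later review."""
    flags = sorted(set(guard_rail_check(prompt) + guard_rail_check(response)))
    audit_log.append({"user": user, "flags": flags})  # the Response Auditing trail
    if flags:
        return "[BLOCKED: request involved critical data: " + ", ".join(flags) + "]"
    return response

audit_log: list = []
print(audited_response("analyst-1", "Look up SSN 123-45-6789", "...", audit_log))
# -> [BLOCKED: request involved critical data: ssn]
```

Anomaly Detection would sit alongside a screen like this, watching the audit trail for unusual access patterns rather than inspecting individual requests.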
Identify three to five key standards or policies your AI use case must align with. If you’re exploring AI for tasks like document redaction or scheduling, ensure your short list covers data security, civil rights protections, and privacy safeguards.
This step doesn’t require a 50-page compliance document. Instead, consider it your AI project’s “nutritional label”: clear, concise, and confidence-boosting.
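One lightweight way to capture that label is a short, reviewable record mapping the use case to the standards it must satisfy and how. The entries below are hypothetical examples, not a mandated format.

```python
# Hypothetical "nutritional label" for a document-redaction pilot: a short,
# reviewable mapping from each applicable standard to how the project meets it.
AI_PROJECT_LABEL = {
    "use_case": "FOIA document redaction",
    "standards": {
        "NIST AI Risk Management Framework": "risk rubric reviewed before each pilot phase",
        "Agency IT security policy": "model hosted as a Secure Private LLM inside the agency boundary",
        "Privacy safeguards": "Guard Rails and Response Auditing keep PII within approved systems",
    },
}

for standard, how in AI_PROJECT_LABEL["standards"].items():
    print(f"{standard}: {how}")
```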
Governance shouldn’t be a barrier—it should be a launchpad. That’s why your next step is creating a simple evaluation rubric to guide AI proposals.
This doesn’t mean assembling a 20-person steering committee on day one. Instead, form a nimble task force of three to five people who understand both your agency’s mission and the tech ecosystem. Task them with answering three key questions for each AI initiative; the sketch below shows one hypothetical set.
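As a starting point, here is a minimal sketch of what such a rubric could look like as a weighted score. The three questions, the weights, and the approval threshold are hypothetical; swap in criteria that fit your agency’s mission and risk posture.

```python
# Hypothetical one-page rubric: the task force scores each criterion 1-5.
RUBRIC = {
    "mission_impact": {"question": "How directly does this advance the agency mission?", "weight": 0.40},
    "data_risk":      {"question": "How sensitive is the data involved (PII, financial, mission-critical)?", "weight": 0.35},
    "readiness":      {"question": "Do we have the data, staff, and mitigations to pilot this now?", "weight": 0.25},
}

APPROVAL_THRESHOLD = 3.5  # illustrative cutoff for green-lighting a pilot

def score_proposal(scores: dict[str, int]) -> float:
    """Compute a weighted score; data_risk is inverted so lower risk scores higher."""
    total = 0.0
    for name, entry in RUBRIC.items():
        raw = scores[name]
        if name == "data_risk":
            raw = 6 - raw  # invert: a risk score of 1 contributes like a 5
        total += raw * entry["weight"]
    return total

# Example: a FOIA-redaction pilot as scored by the task force.
proposal = {"mission_impact": 4, "data_risk": 2, "readiness": 4}
verdict = "pilot" if score_proposal(proposal) >= APPROVAL_THRESHOLD else "rework"
print(score_proposal(proposal), "->", verdict)  # 4.0 -> pilot
```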
Run your rubric on one high-priority, low-risk use case, such as automating FOIA redactions or routing citizen inquiries through a policy chatbot. You’ll gain real performance, risk, and opportunity data, plus a governance playbook you can reuse and evolve.
This is the kind of "rubrics over red tape" governance that M-25-21 envisions—one that enables trust without paralyzing progress.
M-25-21 mandates that agencies document and report minimum risk practices for "high-impact" AI. But not every project starts there.
Start with lower-risk applications that deliver high mission impact. For example, one agency used AI to streamline processing for small business grants. The team pulled from existing federal data standards, set a clear goal ("reduce processing time by 20%"), and used a basic risk rubric to guide a pilot.
When the system over-flagged applications, the team refined its parameters—not by shelving the project but by tweaking the rubric. Within a month, they had a functional model with measurable results and lessons learned.
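In practice, that kind of refinement can be as simple as keeping the flagging logic behind tunable parameters instead of hard-coded rules. The sketch below is a hypothetical illustration of the idea, not the agency’s actual system.

```python
from dataclasses import dataclass

@dataclass
class FlaggingConfig:
    # Tunable knobs the team can adjust when the model over- or under-flags,
    # instead of shelving the pilot. The values here are illustrative.
    risk_threshold: float = 0.8       # minimum model score to flag an application
    require_two_signals: bool = True  # flag only when two independent checks agree

def should_flag(model_score: float, rule_hit: bool, cfg: FlaggingConfig) -> bool:
    """Flag an application for human review based on the current config."""
    if cfg.require_two_signals:
        return model_score >= cfg.risk_threshold and rule_hit
    return model_score >= cfg.risk_threshold or rule_hit

# When the pilot over-flagged, tightening the threshold cut false positives
# without retraining the model.
cfg = FlaggingConfig(risk_threshold=0.9)
print(should_flag(0.85, True, cfg))  # False: below the tightened threshold
```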
This approach builds credibility with leadership, users, and oversight bodies. It also aligns with M-25-21’s core themes: transparency, accountability, and iteration.
Giving a Green Light
The new AI policy does not contain specifics on the governance of agency AI projects. Each agency must make its own mitigation decisions, and an overly risk-averse posture could still slow down AI adoption.
So, if you’re unsure where to start, pick your standards, draft a one-page rubric, and pilot a use case. You’ll meet the spirit (and timelines) of M-25-21 while building internal momentum for broader adoption. With the right leadership and a strong desire to solve problems, the road to responsible AI adoption is now open.
Need help defining your AI governance process, setting up your rubric, or selecting that first use case? Through our DEAM practice, STS is currently working with federal clients to operationalize these steps—turning policy into progress.