
Building Strong AI Governance in 2026
The state of AI in 2025
AI governance is taking center stage in 2025. It’s August 2025, and companies are feeling the pressure to put new AI tools to work. Yet according to Reshaping Business With Artificial Intelligence: Closing the Gap Between Ambition and Action, a study published in the MIT Sloan Management Review, only 14 percent of surveyed executives, managers, and analysts believed AI was having a large effect on their organization’s offerings (Ransbotham et al., 2017). Meanwhile, AI leaders including Meta are extending compensation offers reportedly worth upwards of $100 million to attract top research talent (PYMNTS, 2025). The gap between ambition and implementation has never been wider.
While some organizations race to scale their AI capabilities, others are still hesitating, citing unresolved questions around data privacy, explainability, and model misuse. These fears aren’t unfounded, but they can’t be an excuse for inaction. The solution isn’t to slow down. It’s to govern better.
So how do you operationalize AI governance in a way that accelerates innovation and ensures safety?
Invert, always invert: A mental model for AI safety
One of the most effective ways to think about AI governance comes from an unexpected place: the German mathematician Carl Jacobi, by way of investor Charlie Munger. Munger famously advised, “Invert, always invert”: if you want to achieve a good outcome, first define what would guarantee a bad one, and then avoid those steps (Munger, 2005).
Let’s apply that to AI.
How do you ensure that your company doesn’t use AI safely?
- Don’t understand the risks that stem from AI.
- Don’t have AI safety policies in the first place.
- Don’t foster discussions about AI safety. Maintain silos.
- Don’t make it easy to comply with AI safety policies. Let shadow IT flourish.
Once the failure modes are understood, the path forward becomes straightforward:
- Consider the risks that stem from AI.
- Maintain clear policies concerning AI safety.
- Break down silos and foster discussion of new and emerging threats and best practices.
- Make monitoring and compliance easy (a minimal policy-as-code sketch follows this list).
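To make the last two points concrete, here is a minimal policy-as-code sketch in Python. The tool allowlist, field names, and rules are illustrative assumptions, not a prescribed standard; the point is that each inverted failure mode can become an explicit, automatable check.

```python
from dataclasses import dataclass

# Hypothetical policy-as-code check. The allowlist, fields, and rules
# below are illustrative assumptions, not an established standard.

APPROVED_TOOLS = {"internal-llm", "vendor-chat-enterprise"}  # assumed allowlist

@dataclass
class AIUseCase:
    name: str
    tool: str
    handles_personal_data: bool
    risk_assessment_done: bool
    policy_acknowledged: bool

def check_use_case(uc: AIUseCase) -> list[str]:
    """Return a list of policy violations; an empty list means compliant."""
    violations = []
    if not uc.risk_assessment_done:
        violations.append("No risk assessment: AI risks were never considered.")
    if not uc.policy_acknowledged:
        violations.append("Owner has not acknowledged the AI safety policy.")
    if uc.tool not in APPROVED_TOOLS:
        violations.append(f"Unapproved tool '{uc.tool}': likely shadow IT.")
    if uc.handles_personal_data and uc.tool not in APPROVED_TOOLS:
        violations.append("Personal data routed through an unvetted tool.")
    return violations

if __name__ == "__main__":
    case = AIUseCase("support-triage", "random-saas-bot", True, False, True)
    for violation in check_use_case(case):
        print("-", violation)
```

A check like this can run at intake or in CI, turning the list above from guidance into guardrails.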
Putting it all together: AI governance that scales
AI safety is not just about technology; it’s a socio-technical challenge. It requires aligned people, processes, and technologies, guided by a governance structure designed to evolve alongside the systems it supports.
Here’s what that looks like in practice:
- Start with a strong charter. Define the purpose, scope, and goals of your AI governance program.
- Secure executive sponsorship. Leadership buy-in is essential to break down barriers and signal organizational commitment.
- Balance centralization and decentralization. Policies should be standardized where necessary but allow flexibility based on use-case complexity and risk.
- Invest in enablement. Training, office hours, and regular communications help embed safety into everyday operations.
- Make it easy. As Nobel laureate economist Richard Thaler put it, “If you want people to do something, make it easy” (Thaler, 2015). This principle applies to every aspect of responsible AI deployment, from submitting model reviews to accessing risk playbooks; a minimal intake sketch follows this list.
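As one illustration of “make it easy,” here is a minimal intake sketch in Python. The risk tiers, queue names, and the submit_model_review helper are all hypothetical; the design point is that a single call with sensible defaults makes the compliant path the path of least resistance.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# A "make it easy" intake sketch. Risk tiers, queue names, and
# submit_model_review are hypothetical; a real program would map
# these onto its own review workflow.

REVIEW_QUEUES = {            # assumed routing: risk tier -> review path
    "low": "self-service-checklist",
    "medium": "governance-office-hours",
    "high": "ai-review-board",
}

@dataclass
class ModelReview:
    model_name: str
    risk_tier: str
    queue: str
    submitted: Optional[date] = None

def submit_model_review(model_name: str, risk_tier: str = "low") -> ModelReview:
    """One call with sensible defaults: the easy path is the compliant path."""
    if risk_tier not in REVIEW_QUEUES:
        raise ValueError(f"Unknown risk tier: {risk_tier!r}")
    review = ModelReview(model_name, risk_tier, REVIEW_QUEUES[risk_tier], date.today())
    print(f"{model_name} -> {review.queue}")
    return review

if __name__ == "__main__":
    submit_model_review("churn-forecaster")                      # low-risk default
    submit_model_review("loan-approval-llm", risk_tier="high")   # escalated
```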
Closing thoughts
AI governance is not about slowing innovation. It’s about unlocking it safely. Moving from high-level AI principles to practical, actionable policies is the next frontier. The companies that get this right will not only move faster but also operate with greater trust, resilience, and long-term impact.
References
Munger, C. T. (2005). Poor Charlie’s Almanack: The wit and wisdom of Charles T. Munger (P. D. Kaufman, Ed.; 3rd ed., Chapter 4). Donning Company Publishers.
PYMNTS. (2025, August 1). Top AI researchers field hundred-million-dollar offers amid talent war. PYMNTS. https://www.pymnts.com/artificial-intelligence-2/2025/top-ai-researchers-field-hundred-million-dollar-offers-amid-talent-war/
Ransbotham, S., Kiron, D., Gerbert, P., & Reeves, M. (2017, September 6). Reshaping business with artificial intelligence: Closing the gap between ambition and action. MIT Sloan Management Review. https://sloanreview.mit.edu/projects/reshaping-business-with-artificial-intelligence/
Thaler, R. H. (2015). Misbehaving: The making of behavioral economics (Chapter 33). W. W. Norton & Company.
About the Author
Peter Baldridge is an analytics professional with deep expertise in machine learning, statistics, and AI-centered design. With a background in delivering high-impact analytics solutions across industries, he brings a practical, results-driven perspective to evaluating emerging technologies like generative AI. His experience ranges from text classification and NLP automation to large-scale forecasting and risk modeling, equipping him to explore both the potential and pitfalls of generative AI with a balanced, analytical lens.