Building Generative AI Guardrails
Image credit: Author + Midjourney 5
Your people are already using ChatGPT. They're drafting client emails, analyzing data, and summarizing meeting notes. Some are feeding it proprietary information without realizing the risk.
Most executives know this is happening. KPMG's research confirms it: 90% of leaders have significant concerns about GenAI risks, yet only 6% have a dedicated team addressing them. That's not a gap. That's a chasm.
The Real Risk Isn't AI: It's Flying Blind
Here's what we're seeing: companies either block AI tools entirely or let people use whatever they want with vague "be careful" guidance. Both approaches fail.
The blockers lose talent to competitors and fall further behind every quarter. The hands-off approach exposes confidential data and builds liability you can't see until it's too late.
The middle path requires something most organizations haven't built yet: the capacity to think clearly about AI, not just react to it.
Here's What Works
Effective guardrails aren't about compliance documents. They're about building organizational intelligence for a new kind of risk.
Start with visibility. You can't manage what you can't see. Most companies have no idea which teams are using AI, for what purposes, or with what data. That's the first thing to fix.
Create practical boundaries, not bureaucracy. Your people need to know what's safe to try and what requires escalation. Tools like Nova and Credo AI can help, but the real work is cultural. Can someone in finance experiment with AI for forecasting without waiting three months for IT approval? Can your sales team test AI drafting tools without legal panic?
Make it easy to do the right thing. If your approved tools are slower and clunkier than the free version of ChatGPT, people will keep using ChatGPT. Period.
Build Capability, Not Just Policy
The 72% of executives who see AI boosting productivity aren't wrong. But productivity gains come from people who understand both the tool's potential and its limits.
That means training that goes beyond "here's how to write a prompt." Your teams need to recognize when AI output is useful versus risky. They need to spot hallucinations, understand data lineage, and know when human judgment is non-negotiable.
Companies that crack this aren't just writing policy. They're creating sandbox environments where experimentation is encouraged within clear boundaries. They're measuring adoption and behavior change, not just checking compliance boxes.
The Leadership Gap
Here's an uncomfortable truth: 68% of companies haven't appointed anyone to lead their GenAI strategy. That means these decisions are happening by default, driven by whatever tools people discover on their own.
Your competitors who figure this out first will move faster while protecting what matters. They'll attract the talent that wants to work with cutting-edge tools. They'll find the innovation opportunities hidden in daily workflow inefficiencies.
But none of that happens without someone who can translate between technical possibility and business reality. Someone who understands that transformation isn't about tools; it's about how people think and work.
Where to Start
If you're reading this and thinking "we need to do something," here's your first move:
Map what's already happening. Talk to ten people across different functions. Ask what AI tools they're using and why. You'll likely be surprised.
Then decide: are you going to build the capacity to use AI wisely, or are you going to let that capacity build itself in the shadows?
AI Content Disclaimer: The human author who wrote this augmented the final post with Generative AI tools for ideation (ChatGPT), research (Perplexity AI), and image creation (Midjourney).
