Shadow AI Is the New Shadow IT (And It's Worse)
I was Shadow IT.
It started with a Microsoft Access tool for tracking yields in an advanced manufacturing area. Yield tracking tools existed, but they didn’t fit this division’s process. Getting new software approved and built would take years. I built a solution in a few weeks. Today I could vibe-code it in an hour.
But my shadow IT didn’t stop there. I got tired of waiting for data engineers to stand up a reporting server fed by change data capture from our SQL databases. So I built it myself in a night, running on Jupyter and Airflow. It worked, and I got there first.
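For a sense of scale, here’s a minimal sketch of that kind of overnight job: an hourly Airflow DAG that pulls recent change data capture rows from a source database and rebuilds a flat reporting table. The connection strings, table names, columns, and schedule are invented for illustration, and import paths vary slightly across Airflow versions.

```python
# A sketch only: DSNs, table names, and columns are hypothetical,
# and Airflow import paths differ slightly between major versions.
from datetime import datetime

import pandas as pd
from sqlalchemy import create_engine
from airflow import DAG
from airflow.operators.python import PythonOperator

SOURCE_DSN = "postgresql://readonly@prod-db/erp"        # hypothetical CDC source
REPORT_DSN = "postgresql://reporter@report-db/reports"  # hypothetical reporting server


def refresh_yield_report():
    """Pull the last day of change data capture rows and rebuild a flat reporting table."""
    src = create_engine(SOURCE_DSN)
    dst = create_engine(REPORT_DSN)
    changes = pd.read_sql(
        "SELECT * FROM cdc.yield_events WHERE commit_ts > now() - interval '1 day'",
        src,
    )
    changes.to_sql("daily_yield", dst, if_exists="replace", index=False)


with DAG(
    dag_id="shadow_yield_report",
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",  # rebuild the reporting table every hour
    catchup=False,
) as dag:
    PythonOperator(task_id="refresh_yield_report", python_callable=refresh_yield_report)
```

Thirty-odd lines, one evening, no ticket. That is exactly why this kind of thing gets built in the shadows.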
If I still worked for an enterprise, you can bet I’d be vibe-coding solutions to critical problems right now. And the company would let me do it, because the value would be too high to refuse. IT would be screaming about security and supportability risks, just like they did the first time around.
This is already happening inside most companies. Employees are quietly building AI workflows, vibe-coding internal tools, automating reports and sales motions and ops tasks. They’re running these systems on personal machines, personal accounts, personal API keys. This isn’t malicious. It’s motivated, capable people trying to move faster than formal systems allow.
This Is Different From Shadow IT
Shadow IT was about unapproved software, unsanctioned tools, workarounds. Shadow AI is about something harder to see: unowned logic, untracked costs, invisible dependencies, systems no one fully understands.
My Access tool? When I left that role, it probably died. Or worse, it kept running and someone inherited a system they didn’t build and couldn’t maintain. No documentation. No security review. No continuity plan. That was manageable for a yield tracker. It’s not manageable for AI systems touching customer data, automating decisions, or running business logic at scale.
Banning Won’t Work
You can’t outlaw curiosity. You can’t slow down capable people indefinitely. You can’t compete with “good enough built in an afternoon.” I know this because I was that person. The value I delivered was real. The problems I solved were real. And if leadership had clamped down, I wouldn’t have stopped building. I just would have stopped telling them.
That doesn’t reduce risk. It increases it.
The Only Path Forward Is Ownership
Most companies cannot answer these questions today: What AI-powered systems exist? Who owns them? What data do they touch? What do they cost? What breaks if they stop?
If no one can answer those questions, the business is exposed—whether the tools “work” or not.
This doesn’t require a massive rebuild. It requires making the invisible visible, separating experiments from production, and putting boring structure around fast innovation. Not to slow it down. To enable it.
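What does “making the invisible visible” look like in practice? At minimum, a registry with one entry per AI-powered system, answering the five questions above. Here’s a minimal sketch in Python; the field names and example values are invented for illustration, not a prescribed schema.

```python
# A sketch only: field names and the example entry are invented, not a standard.
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    name: str                   # what exists
    owner: str                  # who owns it (a person, not "the team")
    data_touched: list[str]     # what data it reads or writes
    monthly_cost_usd: float     # what it costs (API usage, compute, licenses)
    depends_on_it: list[str]    # what breaks if it stops
    status: str = "experiment"  # "experiment" or "production"


registry = [
    AISystemRecord(
        name="quote-followup-drafter",
        owner="jane.doe@example.com",
        data_touched=["CRM contacts", "open quotes"],
        monthly_cost_usd=40.0,
        depends_on_it=["weekly sales pipeline report"],
        status="production",
    ),
]
```

A spreadsheet would do the same job. The point is that every shadow system gets an entry, and an owner, before it touches customer data or a production decision.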
The people building shadow AI aren’t the problem. They’re showing you where the value is. The question is whether leadership will meet them there—or pretend it isn’t happening until something breaks.
Related: Speed — why enterprises need to enable AI, not ban it.