The Deploy Button Problem
Building is the easy part. Shipping is where it gets complicated.
March 16, 2026
I recently wrote about how Claude Code changed the way I operate as a non-technical COO.
The response surprised me. People reached out, started experimenting, sent me screenshots of things they’d built over a weekend. One person on our team prototyped an internal workflow before lunch that would have taken two weeks to spec and get into a sprint.
That’s the magic, and I believe in it more than ever. The democratization of making.
But I’ve been losing sleep over something, and I think anyone leading a team through this moment needs to hear it. Because there’s a specific failure mode that nobody’s talking about. Not the AI safety researchers, not the governance committees, not the think pieces about whether AI will take our jobs. It’s simpler and scarier than all of that.
It’s the deploy button.
What “Deploy” Means When You Don’t Know What It Means
I’ll start with my own screwup.
I used Claude to build an OKR tracking site for our company. Internal tool. Objectives, key results, progress updates. The kind of stuff you absolutely do not want public. I told Claude I wanted it restricted to people within the company. We implemented authentication. It looked right. It felt right. Claude confirmed it was set up correctly. We deployed it.
Then, on a whim, I opened the site in a different browser and logged in with my personal Gmail. Not my work email. My personal one.
It worked. I was looking at our company’s OKRs, logged in as someone who should have had zero access.
The fix was simple. Took five minutes. But the point is that Claude had built what I asked for (a login flow), not what I meant: a login flow that only let our people in. I’d described the intent, Claude had implemented something that looked like it fulfilled the intent, and neither of us caught that it didn’t. I only caught it because I’ve spent enough years around software to reflexively poke at things after they ship. I knew to test the unhappy path.
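To make the gap concrete: here’s a minimal sketch of the kind of check that was missing. This is my reconstruction, not the actual code from our site — authentication proved *who* someone was, but nothing ever asked whether they *belonged*. The domain name below is a placeholder.

```python
ALLOWED_DOMAIN = "example.com"  # placeholder for your company's email domain

def is_authorized(email: str) -> bool:
    """Login proves identity; this one extra line decides membership."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain == ALLOWED_DOMAIN

# Without this check, both of these logins succeed:
assert is_authorized("teammate@example.com")   # work account: allowed
assert not is_authorized("someone@gmail.com")  # personal Gmail: rejected
```

The whole bug fits in one missing function. That’s what makes it so easy for both a human and a model to skip.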
Now here’s what keeps me up at night: what if nobody had thought to check? What if instead of OKRs, it was a dashboard pulling customer data from our CRM? Claude would have happily offered to deploy it. “Want me to deploy this?” And we would have said yes, because why wouldn’t we? Claude had been helpful and accurate all day.
“Deploy” sounds like the next logical step. It sounds like “save” or “finish.” It doesn’t sound like “put your company’s data on the internet.”
And deploy is just the most visible version of this. The same understanding gap shows up in database commands, API configurations, permission changes. Any moment where the stakes silently escalate and the prompt looks the same as every other prompt.
The Permission Prompt Problem
Here’s what makes this genuinely hard to solve. Claude Code actually has a security model. It asks permission before it does things. It shows you the command it wants to run and waits for you to approve it.
I’ve read Anthropic’s security documentation and one line stood out: “You’re responsible for reviewing proposed code and commands for safety before approval.”
That’s the whole ballgame. The security model assumes you can evaluate what you’re approving. That’s a big assumption. Most of the time, even I’m not sure. I got lucky with the OKR site because I happened to think to test it. That’s not a repeatable system. And nothing about the interface tells you when the stakes just changed. The permission prompt doesn’t convey magnitude. Approving a deploy command looks exactly like approving a file rename. There’s no flashing red warning that says “THIS WILL PUT YOUR DATA ON THE INTERNET.” It’s just another yes/no in a stream of yes/no’s.
We’ve actually developed internal shorthand for this. When Claude asks for permission, you can press “1” to approve that one action, or “2” to approve that action and all similar ones going forward. We call it “2-ing it.” Mashing the 2 key without reading what you’re agreeing to, because you’re in the zone and Claude has been right about the last fifteen things. You’re not just saying yes once. You’re saying yes forever.
Everyone who’s used these tools knows the feeling. You’re building, it’s working, the momentum is intoxicating, and then Claude asks if it can do something and you hit 2 because stopping to read feels like friction. I once let Claude delete my wife’s irreplaceable family photos because I was 2-ing it and approved a command I didn’t read. That was personal stakes. Now imagine that instinct with company data, or a database command, or anything else where the stakes silently escalated.
Why This Is My Problem and Not IT’s
There’s a whole conversation happening in security circles right now about “Shadow AI.” 77% of employees paste data into AI prompts, most from accounts their company doesn’t even know about. CISOs are writing governance frameworks. Compliance officers are building risk matrices.
That’s important work. But it’s not my problem. My problem is more human than that.
My problem is: I have a team of smart, capable people who just discovered they can build software. That’s an incredible unlock. And I also know that one of those people, on a random Tuesday afternoon, could accidentally publish our customer list to the internet. In our industry, that’s not just embarrassing. It’s a genuine liability. Our customers trust us with sensitive operational and geospatial data, and we can’t afford to treat deployment as an afterthought. Not because my team is careless. Because we’re all still learning, myself included. Because the tool was helpful. Because they said “yes” to a prompt they didn’t fully understand.
That’s not a governance framework problem. That’s a leadership problem.
Friendly Friction
I’ll be honest: I don’t have a system yet. I’m figuring this out in real time. But I have a concept I keep coming back to, and it comes from a previous life.
When I was at Revel we implemented helmet selfies. Before every ride, you had to take a photo of yourself wearing a helmet. It was friction. It added a step. Some people hated it. But it was friendly friction. It didn’t stop you from riding. It just made you pause for two seconds and think about safety before you twisted the throttle. That pause saved people.
That’s what I’m trying to build for AI tools. Not rules that stop people from building. Friction that makes them pause before they deploy.
Here’s where I’m starting, in phases. Start small, learn, then expand.
Phase one is a few people I trust, working on non-sensitive projects, with me looking over their shoulder. Not to gatekeep, but to learn where the gaps are. What confuses people? Where do they 2 it without understanding? I need to see the failure modes before I can build guardrails for them.
Tools that intercept, not just warn. There’s a plugin called Safety Net that intercepts destructive commands before they execute. I want more tools like this. Things that make the dangerous path harder, not just the safe path easier.
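The intercept idea can be sketched in a few lines. This is a toy illustration of the concept, not how Safety Net actually works: a hook that refuses obviously destructive commands outright instead of asking a question you might 2 through. The pattern list is illustrative, not exhaustive.

```python
import re

# Commands nobody should be able to approve by reflex. Illustrative only.
BLOCKED_PATTERNS = [
    r"\brm\s+-rf\b",                  # recursive force-delete
    r"\bdrop\s+(table|database)\b",   # destructive SQL
]

def intercept(command: str) -> bool:
    """Return True if the command may run, False if it's hard-blocked."""
    lowered = command.lower()
    return not any(re.search(p, lowered) for p in BLOCKED_PATTERNS)

assert intercept("git status")                       # harmless: runs
assert not intercept("rm -rf ~/photos")              # blocked, no prompt to 2
assert not intercept("psql -c 'DROP TABLE okrs'")    # blocked, no prompt to 2
```

The point of the design is that a block is not a yes/no. There’s no key to mash.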
Deployment as a checkpoint, not a step. This is the cultural piece. Building is encouraged. Deploying requires a conversation. Claude will offer to deploy. That’s what it does. But “Claude offered” is not the same as “we decided.” Before anything goes beyond someone’s laptop, I want them to answer three questions: What data does this touch? Who else needs to know this exists? What happens when you’re on vacation and it breaks? If there’s no good answer to that last one, it stays a prototype.
A prototype is not infrastructure, and the distance between those two things used to be months of engineering. Now it’s a single “yes.”
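If I were to turn the checkpoint into tooling, it might look something like this: a pre-deploy gate that won’t proceed until all three questions have a real answer on record. Hypothetical sketch, not a tool we’ve built.

```python
# The three questions that stand between a prototype and infrastructure.
QUESTIONS = [
    "What data does this touch?",
    "Who else needs to know this exists?",
    "What happens when you're on vacation and it breaks?",
]

def ready_to_deploy(answers: dict[str, str]) -> bool:
    """Deploy only if every question has a non-empty answer recorded."""
    return all(answers.get(q, "").strip() for q in QUESTIONS)

# One answered question isn't enough; silence on any of them keeps it a prototype.
assert not ready_to_deploy({QUESTIONS[0]: "internal OKRs only"})
assert ready_to_deploy({q: "answered and reviewed" for q in QUESTIONS})
```

The code is trivial on purpose. The value isn’t the script; it’s that “we decided” leaves a paper trail that “Claude offered” never does.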
The Uncomfortable Middle
I want to be honest about where this leaves me: uncomfortable.
The security research says blocking AI tools drives adoption underground and makes everything worse. The governance people say you need frameworks and committees and audit trails. The AI evangelists say you’re overthinking it and just let people build.
I think all three are partially right and none of them are sitting where I’m sitting. Between a team that just discovered superpowers and a responsibility to make sure those superpowers don’t blow a hole in the company.
There’s no clean answer here. There’s no framework that resolves the tension between “this tool is incredible and I want everyone to have it” and “this tool can publish our customer data with a single click.” You just live in that tension and make the best decisions you can. With conversations, with guardrails that don’t depend on someone reading a permission prompt correctly, and with the uncomfortable acknowledgment that giving people powerful tools means accepting some risk you can’t fully control.
The deploy button isn’t going away. My job is to make sure the people pressing it know what it does.
Last week I watched a teammate build something with Claude for the first time. It was working. They were excited. Claude asked for permission to do something and I saw their hand move toward the 2 key. I said “read that one first.” They did. It was fine. Nothing dangerous. But the pause mattered.
That pause is the whole thing I’m trying to build.
If you’re navigating this too, figuring out how to give your team these tools without losing sleep, I want to hear how. I don’t think anyone has the playbook yet. But I think we can write it together.
Feel free to shoot me a note: asa@nearspacelabs.com.