The AI + Security Issue
What are some AI security concerns you're dealing with?
There has been a lot of signal lately around the intersection of AI + Security. Maybe it’s because I’m in the thick of it, pushing AI vendors to centralize their security, or maybe because a new and big AI + Security conference is happening this week. There are some super exciting talks I’m looking forward to catching. What are some talks you’re looking forward to? Drop a comment.
In this issue I will go over some things to consider when trying to secure your enterprise regarding AI tooling, as well as some resources I stumbled upon along the way.
AI Generated Code Security
As you may know, I am a big fan of Claude Code. Been using it since spring of 2025.
The thing with new shiny tools is that they can be very nascent in maturity. However, as with all things AI, Claude Code, Cursor, and Codex have been improving dramatically.
There are two parts here: the tools themselves, and the frontier models behind them (Opus, ChatGPT, and Gemini), and more importantly the code-generating models.
Briefly regarding the models: code quality has been going up with every new release. However, better code doesn’t always mean secure code. They should still be regarded as junior engineers.
As for the interfaces and tools themselves, they are maturing. However, for enterprises to start adopting them, the tools need adequate centralized security controls and device-management integration.
For example, Cursor’s enterprise controls are some of the best I’ve seen for coding agents. They’re pretty extensive and give you a lot of centralized control over enterprise Cursor agents.
Codex and Claude have some centralized control, but they’re still maturing. For example, Claude Code’s recommendation for centralizing Claude.md files is to push them out using your MDM. Claude Code does have sandboxing features, but it requires additional measures like macOS’s Seatbelt sandbox or Linux’s bubblewrap to ensure isolation is actually in place.
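For teams going the bubblewrap route, here’s a minimal sketch of what that containment might look like on Linux. The bind paths and the agent command (`my-agent`) are illustrative placeholders, not Claude Code’s actual invocation:

```shell
#!/bin/sh
# Sketch: run a coding agent with no network access and a read-only
# system, exposing only the current project directory read-write.
# "my-agent" stands in for whatever agent binary you actually run.
AGENT="my-agent"
SANDBOX="bwrap \
  --ro-bind /usr /usr \
  --symlink usr/bin /bin \
  --symlink usr/lib /lib \
  --bind $PWD /work \
  --chdir /work \
  --dev /dev \
  --proc /proc \
  --unshare-net \
  --unshare-pid \
  --die-with-parent"

# Record the wrapped invocation so you can inspect it before running:
echo "$SANDBOX $AGENT" | tee /tmp/sandbox_cmd.txt

# Only execute if both bubblewrap and the agent binary actually exist:
if command -v bwrap >/dev/null 2>&1 && command -v "$AGENT" >/dev/null 2>&1; then
  exec $SANDBOX "$AGENT"
fi
```

`--unshare-net` is what removes internet access; the `--ro-bind`/`--bind` pair is what keeps the filesystem read-only outside the project directory.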
This reminds me of the AWS days, when they designed their account services and structure without scalability in mind and had to go back and add security controls afterwards.
Is the code secure?
Ahhh, the $1M question. Is the code secure? I would argue it’s only as secure, or as security-minded, as the engineer running it.
Let me ask you this: Do engineers code securely by default? No! Of course not. Some do, but the majority do not. They just need to ship things.
This is the same thing.
For example, say you have an agent create Terraform for you. Will it work? Yeah! It may well work, probably in one shot. However, will it be secure? Likely not.
A security engineer knows what to look for: IAM and STS misconfigurations, secrets written to files, default encryption vs. KMS encryption.
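To make those checks concrete, here’s a deliberately crude sketch of the kind of grep a reviewer might run over generated Terraform before reaching for a real scanner like tfsec or Checkov. The resource snippet is a made-up example of the failure modes above:

```shell
#!/bin/sh
# Write a sample of "agent-generated" Terraform with two classic issues:
# a hardcoded secret, and default SSE-S3 encryption instead of a KMS key.
cat > /tmp/generated.tf <<'EOF'
resource "aws_db_instance" "app" {
  username = "admin"
  password = "hunter2"          # secret committed straight into code
}
resource "aws_s3_bucket_server_side_encryption_configuration" "b" {
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"  # default encryption, not aws:kms
    }
  }
}
EOF

# Flag hardcoded credentials:
grep -n 'password *=' /tmp/generated.tf
# Flag default encryption where a customer-managed KMS key was expected:
grep -n '"AES256"' /tmp/generated.tf
```

A real review would also trace IAM policies and STS assume-role chains, which greps can’t do; that’s where dedicated scanners and a human earn their keep.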
What’s funny is that what you’ll end up with is two agents battling it out: one agent producing code, and the other playing security engineer, finding vulnerabilities.
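That generator/reviewer loop can be sketched as below. The `gen_agent` and `sec_agent` functions are stubs standing in for real agent calls (e.g. two CLI invocations with different system prompts); everything here is illustrative, not any vendor’s actual API:

```shell
#!/bin/sh
# Generator/reviewer loop sketch. gen_agent and sec_agent are stubs; in
# practice each would shell out to a coding agent with a different system
# prompt (one builds, one hunts for vulnerabilities).
gen_agent() { printf 'resource "aws_s3_bucket" "b" {}\n'; }
sec_agent() { printf ''; }   # stub reviewer: reports no findings

gen_agent "write terraform for an s3 bucket" > /tmp/main.tf
rounds=0
while [ "$rounds" -lt 3 ]; do    # cap the back-and-forth at 3 rounds
  findings=$(sec_agent "review /tmp/main.tf for IAM, secrets, KMS issues")
  [ -z "$findings" ] && break    # reviewer is satisfied, stop iterating
  gen_agent "fix these findings: $findings" > /tmp/main.tf
  rounds=$((rounds + 1))
done
echo "converged after $rounds fix round(s)"
```

The round cap matters: without it, two disagreeing agents will happily burn tokens forever.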
Who’s to blame for bad AI-generated code?
Not sure how this is even a debate, but apparently it’s happening. It’s bad enough that people are talking about replacing engineers with coding agents; now engineers don’t want to be responsible for the output. Sounds like we’re handing everything over.
This came from recent news coverage about AWS outages caused by an AI coding bot blunder.
This goes back to the cloud days. People thought (and still think, unfortunately) that using the cloud is secure. No. Cloud providers have a shared responsibility model. Same with cars and seat belts.
Cursor’s blame is an interesting feature: you can see which code was actually generated by AI.
There is so much to consider regarding generated-code security: malicious MCP servers and skills, API abuse, observability, and actually vulnerable code. AI security vendors and tools are popping up to solve some of the nuanced problems that primary vendors are not solving. But the landscape is shifting so quickly that primary AI vendors will have to bake in enterprise-style security management right from the beginning.
Openclaw Security
Openclaw is super powerful. What do you do when you have something powerful, though? Do you just let it loose, or put guardrails on it and try to contain it? Think of a powerful engine in a racecar. So much work has to go into making sure that engine doesn’t fly out of the car and destroy the driver.
Talked to a friend the other day who made Openclaw work really well for his company. He contained it: no internet access, access to only certain Slack channels, and read-only access to GitHub. It worked wonders for him and his team! The beauty is that it has the ability to update and modify itself. It can run cron jobs and update its instructions for future guidance.
This is an excellent model of how things can go RIGHT!
My friend and co-host Adrian Sanabria wrote a piece on Openclaw specifically. Check it out!
Awesome AI Security Repo
Ran into this GitHub repo recently and found it pretty extensive.
A collection of awesome resources related to AI security (GitHub)
There is so much to cover in AI security that one article can’t do it justice. We haven’t even touched on model security or models faking alignment. Stay tuned for more updates.