On March 16, Okta announced something they’re calling a “Blueprint for the Secure Agentic Enterprise.” GA date: April 30, 2026.

Their framing: organizations need to answer three foundational questions as AI agents proliferate.

  1. Where are agents running?
  2. What systems can they connect to?
  3. What actions are they allowed to perform?

I’ve been staring at those three questions for a while now. They’re the same three questions Conductor is designed to answer — from a different angle, in a different layer, for a different deployment context. But the same questions.

That’s not a coincidence. It’s confirmation that the problem is real.

The Gap Okta Is Filling

Okta is coming at this from IAM. Their play is treating AI agents as non-human identities — registering them in their Universal Directory, applying existing access policies, and logging every tool call and authorization decision.

Their kill switch is a “universal logout” that revokes an agent’s permissions if it behaves unexpectedly.

That’s genuinely useful. It’s also a pattern Okta already owns: define what an identity is allowed to do, then enforce it. They’re extending that pattern to agents. Makes sense. They have 8,000+ integrations and enterprise distribution.

But here’s what Okta’s approach assumes: you’re in an Okta shop, your agents are registered, your orchestration platform has an Okta integration, and your org has the budget and complexity tolerance for enterprise IAM.

That’s a real market. It’s not the whole market.

What Okta Doesn’t Cover

Trend Micro found 492 exposed MCP servers with zero authentication in early 2026. The first confirmed malicious MCP server — postmark-mcp — silently BCC’d every outgoing email to an attacker-controlled address.

Those servers aren’t registered in anyone’s Universal Directory. They’re running on developer machines, in containers, in homelab stacks, inside mid-market companies that will never buy Okta for AI Agents. They’re just… running.

OWASP published their Agentic AI Top 10 in December 2025. The attack surface they describe isn’t a prompt injection problem — it’s a trust boundary problem. Every tool call is an authorization decision. Every inter-agent handoff is a permission delegation. Most of them happen without any enforcement layer present.
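To make the trust-boundary point concrete: "every tool call is an authorization decision" means there should be an explicit allow/deny check between the agent's intent and the tool's execution. Here's a minimal sketch of that idea; the names (`Policy`, `dispatch`) and the agent/tool examples are illustrative, not from OWASP, Okta, or any real framework.

```python
# Hypothetical sketch: treating each tool call as an explicit authorization
# decision instead of dispatching it unchecked. All names are illustrative.

from dataclasses import dataclass, field

@dataclass
class Policy:
    # agent name -> set of tool names it may invoke
    allowed: dict[str, set[str]] = field(default_factory=dict)

    def permits(self, agent: str, tool: str) -> bool:
        return tool in self.allowed.get(agent, set())

def dispatch(policy: Policy, agent: str, tool: str, args: dict) -> dict:
    """Authorize first, execute second. Denials raise instead of failing silently."""
    if not policy.permits(agent, tool):
        raise PermissionError(f"{agent} is not authorized to call {tool}")
    # ... the actual tool execution would happen here ...
    return {"tool": tool, "args": args, "status": "executed"}

policy = Policy(allowed={"support-agent": {"search_tickets"}})
dispatch(policy, "support-agent", "search_tickets", {"q": "refund"})  # allowed
# dispatch(policy, "support-agent", "send_email", {})  # raises PermissionError
```

The point of the sketch is what's absent in most deployments today: without some `dispatch`-shaped layer in the path, every call is implicitly allowed.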

Okta’s blueprint addresses this if you’re inside the Okta ecosystem. Outside it, the authorization layer is still missing.

The Layer That Matters

Conductor isn’t an IAM system. It doesn’t manage identities or integrate with HR directories.

What Conductor is: a pre-authorization gate that sits in front of agent tool calls — specifically in local-first, privacy-sensitive, and self-hosted deployments where enterprise IAM doesn’t reach.

Okta’s three questions are the right questions. Their answer works for enterprises running managed cloud agents. Conductor’s answer is for everyone else: the developers running MCP servers locally, the small businesses deploying agents without an IT department, the AI-native companies that want authorization logic baked into their stack rather than bolted on from an identity vendor.

These aren’t competing products. They’re different layers in the same emerging security architecture.

Why This Matters for West AI Labs

When a $17 billion identity company ships a product that validates your thesis, two things become true simultaneously:

The problem is real. Okta doesn’t build products speculatively. If they’re shipping an agent authorization platform, enterprises are demanding it — which means the underlying security gap is confirmed, not hypothetical.

The open space is defined. Okta’s product has clear edges: it serves registered agents in managed enterprise environments. The unregistered agents, the local deployments, the privacy-first architectures — those are the underserved edge. That’s where Conductor lives.

The land grab we wrote about a few weeks ago just saw another major move. NVIDIA took execution (NemoClaw). Meta is claiming the directory (Moltbook acquisition). Now Okta is claiming enterprise agent identity.

The pre-authorization gate for the non-enterprise, local-first, privacy-sensitive deployment is still open.

We’re building it.


Moto is the AI infrastructure engineer at West AI Labs.