Agents Are Identities: The New Control Plane for Enterprise AI

If you are watching RSA coverage this week, one thing is obvious: agentic AI is having a moment. The booths are loud. The product pages are loud. The keynotes are loud. Underneath all of that noise is a quieter and more important question: who owns these agents, what can they touch, and how do you prove what they did?

That is why this matters now. In February, RSAC itself previewed the issue with its Innovation Showcase session, “Who Watches the Agents? Securing Identity for AI Systems.” Microsoft came into RSAC 2026 talking explicitly about security, governance, and control for agentic AI. Current coverage from ITPro is reinforcing the same theme from another angle: observability will be central to agentic AI safety, and agents need to be treated more like digital co-workers than magic workflow dust.

The market data is moving in the same direction. Microsoft Entra Agent ID now treats agent identities as a real identity construct with sponsors, lifecycle, and governance requirements. Gartner has said that 40% of enterprise applications will include task-specific AI agents by the end of 2026. Gravitee’s 2026 AI agent security research found that only 21.9% of teams treat agents as independent identities, only 24.4% report full visibility into agent-to-agent interactions, and 88% report confirmed or suspected AI agent security incidents. Even the vendor landscape is shifting from generic hype toward execution. As one example, Let’s Data Science reported this week that Entro Security launched Agentic Governance and Administration.

None of that means every agent is a crisis. It does mean the old framing is breaking down.

AI agents are not just features. Once they connect systems like Slack and Jira, retrieve context, use credentials, and trigger actions, they become operating identities with permissions, authority, and blast radius. Govern them like product features, and you will eventually hand production authority to something nobody can properly name, scope, or audit.


When the agent starts crossing boundaries

This gets real the moment the agent stops living inside one application.

A single assistant inside a single product is usually easier to reason about. The permissions are local. The logs are local. The damage is easier to bound. The mess starts when the agent becomes a bridge.

Picture a very normal enterprise pattern. An employee asks for help in Slack. A reasoning layer, whether that is Claude, Copilot, or something similar, pulls context from internal documentation and prior tickets, decides what matters, and creates or updates a Jira issue. Maybe it suggests a queue. Maybe it adds labels. Maybe it drafts a summary for the responder. Maybe it kicks off a lightweight workflow because somebody wanted to save time.

That is not hypothetical. Atlassian documents both the virtual service agent in Slack channels and the ability to create Jira work items directly from Slack. That is exactly why this scenario feels familiar. Plenty of teams already have some version of this pattern, whether they planned it that way or not.

At first, it looks harmless. Useful, even. Slack is just the conversational front door. The model is just the reasoning layer. Jira is just the system of record for the work.

Then the connections start piling up.

The agent can read from the knowledge base. It can see prior incidents. It can correlate repeated requests. It can enrich a Jira issue with details from another system. It can route the ticket based on department, severity, or prior history. It can carry context from chat into workflow. It can start to shape how work gets created, described, prioritized, and handed off.

Now ask the questions that usually show up too late.

  • Is the agent acting with the employee’s authority, its own authority, or some hidden service credential nobody has reviewed in six months?
  • What sources is it allowed to retrieve from, and what sources did it actually use?
  • Which Jira actions are explicitly approved, and which ones are just happening because the integration can technically do them?
  • Where is the real audit trail when someone asks why a ticket was created, updated, routed, or enriched the way it was?
  • Who owns the agent if Slack is the request surface, the model is the reasoning layer, and Jira is where the action lands?

That is the real problem. The risk is not that the agent exists inside Slack, or Jira, or Claude. The risk compounds when it carries context across boundaries and turns visibility in one system into authority in another.


Why this is an identity problem

A basic chatbot can be annoying, wrong, or overly confident. That is not great, but it is still mostly a content problem. The category changes when the agent gets credentials, access to tools, data retrieval privileges, workflow hooks, or the ability to update records and trigger actions. Now the issue is no longer just whether the answer is good. The issue is what the thing can actually do.

That is why this belongs in the identity conversation.

If an agent can read internal docs, pull ticket context, create or update work items, route requests, summarize sensitive data, or call downstream tools, it starts to look a lot less like a feature and a lot more like an operating identity. It may not have a face, a mailbox, or a badge photo, but it has many of the things that matter in practice: credentials, reach, influence, and blast radius.

That is the same reason service accounts became a security problem. That is the same reason unmanaged API keys became a security problem. The packaging changes. The control problem does not.

If it can read, retrieve, decide, and act, it belongs in the identity model.


Policy is not enough

A lot of AI governance content is still too mushy to help with this problem.

It talks about principles, oversight, responsible use, review committees, and model behavior. Some of that matters. None of it is sufficient when an agent has access to enterprise systems and the ability to influence or trigger operational outcomes.

You do not solve cross-system authority creep with a nice slide about ethics.

The usual AI governance lens asks whether the model is safe, whether the output is appropriate, whether there are review processes, and whether there are policy statements. Those are fair questions, but they just are not enough.

Identity governance asks a different set of questions: what can this thing access, under whose authority, what tools can it call, what actions can it take, what evidence exists afterward, and how do we reduce or remove its permissions over time?

That is the lens that matters once the agent starts crossing system boundaries.

Prompt quality is not a permission model.

A well-behaved assistant can still be over-permissioned. A polished response can still mask bad control design. A useful workflow can still create a mess if nobody can explain which authority model is being used, where the action history lives, or how the scope changes over time.


A practical framework for agent identity governance

Name. Scope. Bind. Watch. Prove. Retire.

You do not need a giant AI governance program to get started. You need a practical way to answer six basic questions.

  1. What is this agent? 
  2. What can it reach? 
  3. Under whose authority does it act? 
  4. What records exist when it does something? 
  5. Can we explain its behavior later?
  6. How do we shut it down cleanly when its job changes or ends?

That is the value of this framework. It gives you a simple operating model for governing agents the same way serious teams already govern other high-impact identities.

Name

Start by making the agent a real thing in your environment, not a fuzzy convenience layer.

Give it a unique identity, a clear owner, a stated purpose, and an inventory record of the systems it touches. Document what it is called, what problem it is supposed to solve, what tools it can use, what sources it can retrieve from, and what credentials or tokens sit behind it.

If nobody can name it, nobody can govern it.

The value here is basic but critical. When something breaks, when an auditor asks questions, or when a team changes hands, you know what the agent is, who owns it, and what it was supposed to be doing in the first place.
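
As a minimal sketch of what an inventory record could look like, here is a hypothetical register entry. The field names and the `is_governable` check are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str                   # unique identity, e.g. "svc-triage-agent-01"
    owner: str                      # accountable human or team
    purpose: str                    # one-sentence stated purpose
    systems: list[str] = field(default_factory=list)      # systems it touches
    tools: list[str] = field(default_factory=list)        # tools it may call
    credentials: list[str] = field(default_factory=list)  # tokens behind it

    def is_governable(self) -> bool:
        # If nobody can name it, nobody can govern it:
        # every core field must be filled in before the agent ships.
        return all([self.agent_id, self.owner, self.purpose, self.systems])

triage = AgentRecord(
    agent_id="svc-triage-agent-01",
    owner="it-service-desk",
    purpose="Triage Slack requests into Jira issues",
    systems=["slack", "jira", "confluence"],
    tools=["jira.create_issue", "kb.search"],
    credentials=["vault:slack-bot-token", "vault:jira-service-token"],
)
```

A record like this is cheap to create and immediately answers the first question an auditor or incident responder will ask.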

Scope

Next, define the agent’s lane.

Limit what it can access and what it can do. Separate read from recommend. Separate recommend from act. Separate low-risk workflow steps from anything that touches production operations, customer records, privileged admin tasks, or sensitive internal context.

Helpful is not a permission boundary.

The value of scope is that it stops a useful assistant from quietly becoming an over-permissioned operator. Least privilege matters more here, not less, because agents move fast, work across systems, and can make a mediocre design decision look efficient right up until it causes pain.
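
One way to make the read/recommend/act separation concrete is an ordered authority level plus an explicit scope map. The levels and action names below are illustrative assumptions, not a standard.

```python
from enum import IntEnum

class Authority(IntEnum):
    OBSERVE = 1    # read-only retrieval
    RECOMMEND = 2  # may draft or suggest, never execute
    ACT = 3        # may execute low-risk workflow steps
    ACT_PROD = 4   # may touch production; requires explicit grant

# Hypothetical per-action scope map: each action states the minimum
# authority it requires. Unlisted actions are denied by default.
SCOPES = {
    "kb.search": Authority.OBSERVE,
    "jira.draft_summary": Authority.RECOMMEND,
    "jira.create_issue": Authority.ACT,
    "jira.close_issue": Authority.ACT_PROD,
}

def allowed(agent_level: Authority, action: str) -> bool:
    """Least privilege: allow only if the action is in the scope map and
    needs no more authority than the agent was granted."""
    required = SCOPES.get(action)
    return required is not None and agent_level >= required
```

Denying anything not in the map is the point: "helpful" never becomes an implicit permission.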

Bind

Then make the authority model explicit.

Be clear about when the agent is acting with user-delegated authority, when it is using its own service identity, what tools are allowlisted, what actions require approval, and what environments are off-limits.

Tool calling is privileged execution with better marketing.

This is where the technical details matter. The Model Context Protocol authorization spec leans on familiar security concepts like OAuth-based controls, token handling, HTTPS, and scope discipline. Anthropic’s tool use documentation shows the same basic reality from the model side: a reasoning layer can decide to call tools on connected systems. Once tools are in the loop, you are in access-control territory, whether the UI still looks like chat or not.

The value of bind is that it ties action to policy, not to model confidence.
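
A gate that ties every tool call to an explicit authority model might look like the following sketch. The tool names, authority labels, and approval flag are assumptions for illustration; this is not a real MCP or Anthropic API.

```python
DELEGATED, SERVICE = "user-delegated", "service-identity"

ALLOWLIST = {
    # tool -> (required authority model, human approval required?)
    "kb.search": (SERVICE, False),
    "jira.create_issue": (DELEGATED, False),
    "jira.change_severity": (DELEGATED, True),
}

def gate(tool: str, authority: str, approved: bool = False) -> str:
    """Decide a tool call by policy, not by model confidence."""
    if tool not in ALLOWLIST:
        return "deny: tool not allowlisted"
    required_authority, needs_approval = ALLOWLIST[tool]
    if authority != required_authority:
        return f"deny: {tool} requires {required_authority} authority"
    if needs_approval and not approved:
        return "deny: human approval required"
    return "allow"
```

The reasoning layer can propose whatever it likes; the gate is where authority is actually decided.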

Watch

After that, make sure you can actually see what happened.

Log source retrieval, tool calls, policy checks, denials, escalations, and downstream actions. Record what context was used, what the model tried to do, what it was allowed to do, and what actually happened in the destination system.

If the only record is the chat transcript, you do not have observability. You have vibes.

The value of watch is operational clarity. When something weird happens, you do not want to reconstruct the story from three systems and a lucky guess.
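
In practice, "log receipts" means emitting one structured event per retrieval, tool call, policy check, and downstream action. A minimal sketch, with illustrative field names:

```python
import json
import time

def audit_event(agent_id: str, event_type: str, detail: dict, decision: str) -> str:
    """Emit one structured receipt. event_type is one of
    "retrieval" | "tool_call" | "policy_check" | "action";
    decision is "allowed" | "denied" | "escalated"."""
    event = {
        "ts": time.time(),
        "agent_id": agent_id,
        "event_type": event_type,
        "detail": detail,
        "decision": decision,
    }
    # In a real system this line would be appended to a durable log sink.
    return json.dumps(event, sort_keys=True)

line = audit_event(
    "svc-triage-agent-01",
    "tool_call",
    {"tool": "jira.create_issue", "project": "ITSD"},
    "allowed",
)
```

The key property is that the record exists outside the chat transcript and is queryable after the fact.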

Prove

Now, assume someone is going to ask the uncomfortable questions, because eventually they will.

Who owned it? What did it access? What did it change? Why was it allowed? Which identity or token was used? What changed over time?

Sooner or later, every shiny agent rollout gets audited by reality.

The value of prove is that you can answer those questions with evidence instead of hand-waving. If the answer is some version of “it depends on which system you check,” then the governance model is not done.
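
If the receipts from the watch step exist, proving becomes a query. This sketch assumes a line-per-event JSON log with illustrative field names and filters it down to everything that touched one issue:

```python
import json

def reconstruct(log_lines: list[str], issue_key: str) -> list[dict]:
    """Return every audit event that touched one issue, in time order,
    so 'why did this happen' has an evidence-backed answer."""
    events = [json.loads(line) for line in log_lines]
    touched = [e for e in events if e.get("detail", {}).get("issue") == issue_key]
    return sorted(touched, key=lambda e: e["ts"])

log = [
    '{"ts": 2, "agent_id": "svc-triage-agent-01", "event_type": "action",'
    ' "detail": {"issue": "ITSD-101", "change": "routed"}, "decision": "allowed"}',
    '{"ts": 1, "agent_id": "svc-triage-agent-01", "event_type": "tool_call",'
    ' "detail": {"issue": "ITSD-101", "tool": "jira.create_issue"}, "decision": "allowed"}',
    '{"ts": 3, "agent_id": "svc-triage-agent-01", "event_type": "retrieval",'
    ' "detail": {"issue": "ITSD-202", "source": "kb"}, "decision": "allowed"}',
]
trail = reconstruct(log, "ITSD-101")
```

If a question like this needs three systems and a lucky guess instead of one query, the governance model is not done.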

Retire

Finally, treat the agent like something that will age, drift, and eventually outlive its original purpose.

Review scopes. Rotate secrets. Remove stale connectors. Reassign ownership when teams change. Decommission agents that no longer serve a clear purpose.

Unretired agents become the ghost accounts of the AI era.

The value of retire is simple. Old agents with lingering access become tomorrow’s mystery permissions. Cleanup is not paperwork. It is part of the control model.
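
Retirement can be mechanical. This sketch assumes a review cadence and a simple record shape, both illustrative; the point is that decommissioning drops scopes and connectors and forces secret rotation rather than leaving access in place.

```python
from datetime import datetime, timedelta, timezone

REVIEW_INTERVAL = timedelta(days=90)  # illustrative cadence, not a standard

def needs_review(last_reviewed: datetime, now: datetime) -> bool:
    """Flag agents whose scopes have not been reviewed within the interval."""
    return now - last_reviewed > REVIEW_INTERVAL

def retire(record: dict) -> dict:
    """Decommission cleanly: drop scopes and connectors, queue every
    secret for rotation, and mark the record retired."""
    return {
        **record,
        "scopes": [],
        "connectors": [],
        "secrets": [f"rotate:{s}" for s in record.get("secrets", [])],
        "status": "retired",
    }
```

A retired record stays in the register as history; what disappears is its access.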


What this looks like in a Slack, Jira, and Claude workflow

Put the framework back into the scenario.

Name means the agent has a defined owner, a stated purpose, and a clean inventory record. It exists to help triage and route service requests, not to improvise its way into broader operational authority.

Scope means it can read approved knowledge sources and create Jira issues in specific projects, but it cannot close tickets, change severity, access sensitive data sets, or trigger privileged workflows without an explicit control point.

Bind means the agent’s actions are tied to approved workflows and policy checks. Sensitive steps require human approval or a defined gate. The model does not get to freestyle its way into changing production process because it sounded confident in Slack.

Watch means every Slack request, retrieval event, tool call, policy decision, denial, and Jira update is logged in a way security, audit, or operations can reconstruct without archaeology.

Prove means that if somebody asks why the issue was created, why it was routed to a specific team, or what data informed the recommendation, the organization can show the request, the sources used, the decision path, and the action trail.

Retire means that if the workflow changes, the owner changes, or the agent is no longer needed, its scopes and connectors get reviewed and removed.

This is how you turn an AI assistant into something governable.


Start with these seven moves

First, inventory every agent and agent-adjacent automation that touches more than one system. Not just the official ones. Include the quiet glue code, the helpful assistants, the wrappers around APIs, the bots living in chat, and the workflows everybody forgot to document.

Second, record the owner, purpose, connected apps, credentials, and action types for each one. If any of those fields are blank, that is already telling you something.

Third, classify each agent by authority level: observe, recommend, act, or act in production. Those are not all the same risk, and they should not be governed the same way.

Fourth, separate read, recommend, and execution privileges. The fact that an agent can retrieve useful context does not mean it should be able to trigger the next step automatically.

Fifth, identify where the real audit trail lives today. Odds are it is fragmented. Fixing that is one of the fastest ways to reduce future confusion.

Sixth, log receipts for retrieval, tool calls, policy decisions, and downstream actions. Not eventually. Now.

Seventh, set a review cadence for scopes, connectors, and ownership. Most teams do not need a moonshot here. They need an agent register, scoped actions, and fewer mystery permissions.
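
The first three moves above can be sketched as a single register audit: flag any record with blank fields, and bucket agents by authority level so they are not governed identically. Field names and authority labels are illustrative assumptions.

```python
REQUIRED_FIELDS = ("owner", "purpose", "connected_apps", "credentials", "action_types")

def audit_register(register: list[dict]) -> dict:
    """Flag blank fields (move two) and group agents by authority level
    (move three): observe, recommend, act, or act in production."""
    gaps = {
        r["agent_id"]: [f for f in REQUIRED_FIELDS if not r.get(f)]
        for r in register
    }
    by_authority: dict[str, list[str]] = {}
    for r in register:
        by_authority.setdefault(r.get("authority", "unclassified"), []).append(r["agent_id"])
    return {
        "gaps": {k: v for k, v in gaps.items() if v},  # only agents with blanks
        "by_authority": by_authority,
    }

register = [
    {"agent_id": "triage-bot", "owner": "it", "purpose": "triage",
     "connected_apps": ["slack", "jira"], "credentials": ["tok"],
     "action_types": ["create_issue"], "authority": "act"},
    {"agent_id": "ghost-bot", "owner": "", "purpose": "", "connected_apps": [],
     "credentials": ["old-key"], "action_types": [], "authority": "observe"},
]
report = audit_register(register)
```

Every entry in `gaps` is a mystery permission in waiting; that list is the work queue.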


The hard truth

If an agent can carry context from Slack, reason over it, and update or influence work in Jira, you are no longer dealing with a feature. You are dealing with an identity that spans systems, inherits trust, and accumulates authority.

If you cannot say what it can access, whose authority it uses, what actions it can take, and how to prove what happened, you do not have an AI governance program.

You have a new unmanaged identity tier.

The question is not whether agents will become part of the enterprise control plane. They already are. The question is whether you are going to govern them before they inherit production authority by accident.


What to do now?

You do not need a giant AI governance program by Friday. You do need to stop letting agent authority grow in the dark.

Start with one real workflow. Pick an agent, or an agent-shaped automation, that crosses systems. Name the owner. List the connectors. Identify the credential or authority model behind it. Decide what is allowed to read, recommend, and do. Then confirm where the receipts live when someone asks what happened.

If your team cannot answer those questions in one sitting, that is not a failure. That is the signal. You found the work.

This is not an argument against agents. It is an argument against mystery permissions, hidden authority, and governance by vibes. Helpful automation is fine. Unowned execution is not.

The teams that handle this well will not be the ones with the loudest AI strategy. They will be the ones who can explain, at any moment, what an agent is, what it can touch, whose authority it uses, and how to shut it down cleanly.

That is not hype. That is control.

And in a world where agents are starting to move work, shape decisions, and cross system boundaries, control is the whole ballgame.
