RACI, re-review triggers, and the accountability gap inside already-approved SaaS
What happens after an already-approved platform adds new agentic capability and the organization keeps treating the old approval as if it still covers the new blast radius?
That is where the ownership model starts to fail.
In practice, the issue is often not a rogue bot built in secret. It is a trusted platform that gains new connectors, deeper retrieval, broader delegated actions, or new memory and automation behavior after the last meaningful security review. The tool is still sanctioned. The capability surface inside it may not have been re-evaluated with the same discipline. Microsoft’s current guidance for governing agent identities and its separation of owners, sponsors, and managers both point in the same direction: the identity is not the novelty anymore; the governance model is.
That is the gap this blog is about.
This is not a hypothetical risk class anymore. The NIST AI Risk Management Framework’s Govern and Manage functions are explicit that risk management does not stop at deployment — they require ongoing monitoring and re-evaluation whenever context, capabilities, or impacts change. The OWASP Top 10 for Agentic Applications, released in December 2025 and peer-reviewed by over 100 security researchers and practitioners, names tool misuse and privilege expansion inside existing systems as first-class risks. Both frameworks are pointing at the same problem: one-time approval does not cover a capability surface that keeps moving.
If you have read Verizon’s 2025 DBIR: Third-Party Risk, Identity Sprawl, and the Hard Truth About Modern Breach Vectors, Beyond Human: Securing Agentic AI and Non-Human Identities in a Breach-Driven World, Copilot, Can You Keep a Secret?, or When Hypergrowth Meets Identity Reality, you already know the broader pattern: access always scales faster than accountability unless somebody forces the issue.
The approved-tool problem
Take a platform like Asana.
Most enterprises do not think of Asana as suspicious. They think of it as what it is: a widely used, already-approved collaboration and work-management platform. That is exactly why it is such a useful example.
Over time, Asana has added a more agentic operating model. AI Studio introduced no-code AI workflows embedded directly into the place where work is already happening, and Asana describes those smart workflows as a way for workflow owners in operations, program management, and IT to build AI into existing processes without hard-coding. AI Teammates extends that model further by letting teams design custom AI teammates with defined roles, permissions, and responsibilities, keep them private until shared, and refine them over time with feedback and team memory. Asana also documents that admins can enable or disable Asana AI features at the domain level, and that AI Teammates follow the same access model as users in the domain. That is exactly the kind of capability evolution mature SaaS platforms should be doing. It is also exactly the kind of evolution that should trigger a fresh governance look.
That is the point. Asana is not the bad guy in this story. It is a good example of what modern enterprise software looks like now: approved platforms keep getting more capable.
And when that capability expands, the right questions are no longer just “Was this app reviewed?” They become more specific and more operational:
- Who owns the new agentic behavior?
- What data can now be discovered, summarized, or retained?
- What new connectors or scopes are in play?
- What actions can the agent now trigger?
- Who signs off on that change in reach?
- Who re-reviews it when the product surface changes again?
Those are not procurement questions. They are governance questions.
Ownership drift is the real failure mode
This is where the model breaks.
- The business approved the platform.
- The admin team manages the tenant.
- The workflow owner turns on the new capability.
- The identity team provisions around it.
- Security reviewed the platform some time ago.
- The vendor ships new agentic capability in a release note nobody treated like a material access change.
Now, who owns the expanded reach?
That is the real problem.
Not whether the organization understands that agents are non-human identities (NHIs). Whether anybody can answer, clearly and calmly, who is accountable for the new data reach, connector scope, approval path, retention behavior, and disablement workflow after the platform evolves.
That is why the better question is not “Who built the agent?” It is “Who owns the re-review when a trusted platform gains new agentic reach?” Okta’s framing on governing AI agent identity is useful here because it keeps coming back to visibility, accountability, and control. That is the same issue, just in simpler language.
The ownership model that works
Responsible
The technical operator who configures the agent identity, connectors, permissions, runtime behavior, and disablement path.
This might be a platform team, application engineering, or an internal automation owner. Their job is to make the control mechanics real. Their job is not to silently absorb business accountability forever.
Accountable
A named business or service owner who approves the agent’s purpose, acceptable scope, and continued use.
This is the person who owns the risk of the capability being present in production. Not the tenant admin. Not the engineer who clicked the toggle first. The business owner whose workflow, service, or process now depends on the agent.
If the capability expands, this is the person who should say yes again.
Consulted
Identity/security, the data owner, the application owner, and sometimes legal or compliance, depending on what the agent touches.
This is where organizations catch the hidden dependency problem early: data exposure, connector sprawl, secrets management, over-privileged service accounts, and approval workflow risk. If the capability touches sensitive content, the data owner belongs here. If it writes into a business system, the application owner belongs here. If it expands identity or access surface, the identity team belongs here, whether anyone enjoys the meeting or not.
Informed
IT ops, support, audit, and incident response.
They should not be learning about the existence of an agentic workflow from a support case or an incident bridge.
This model is not theoretical. In Microsoft’s current agent governance approach, there is a clear distinction between technical administration and delegated human responsibility, and sponsorship is designed to transfer if a sponsor leaves so there is always a human accountable for lifecycle and access decisions. That is essentially Microsoft acknowledging the same enterprise truth identity teams have known for years: ownership has to survive org-chart drift.
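If this RACI structure is going to survive org-chart drift, it helps to record it as data rather than tribal knowledge. Here is a minimal sketch in Python; the `AgentOwnership` class, its field names, and its checks are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentOwnership:
    """Illustrative RACI record for one agentic capability in an approved platform."""
    capability: str
    accountable: str                 # named business/service owner: a person, not a team
    responsible: str                 # technical operator for identity, connectors, disablement
    consulted: list = field(default_factory=list)   # identity/security, data owner, app owner, legal
    informed: list = field(default_factory=list)    # IT ops, support, audit, incident response

    def gaps(self) -> list:
        """Flag the failure modes described above: missing or team-shaped owners."""
        problems = []
        for role in ("accountable", "responsible"):
            value = getattr(self, role)
            if not value or value.lower().endswith("team"):
                problems.append(f"{role} must name a person, not a team: {value!r}")
        if not self.consulted:
            problems.append("no consulted parties: identity/security and data owner missing")
        return problems
```

An empty `gaps()` result is the bar before the capability ships; a non-empty one is the cue that the RACI is not done yet.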
What should trigger a re-review
The right ownership model is not just about who approves the capability once.
It is about what forces the next review.
A meaningful re-review should trigger when any of these changes:
- A new connector is enabled
- The agent gains access to a new data source
- The permission model expands from read to write
- The agent can now take action in a new system
- Retention, memory, or feedback behavior changes
- A new model, AI partner, or processing path is introduced
- The original accountable owner leaves or changes roles
That list matters because approved tools evolve. AI Studio is specifically about embedding AI into existing workflows, and AI Teammates are explicitly designed to take on work, use context, and improve over time. None of that is a problem by itself. It just means the review boundary has to move with the capability boundary.

This is where industry guidance has caught up. NIST AI RMF is explicit that the MAP function must be re-applied as context, capabilities, and potential impacts change: not as a one-time gate at deployment, but as an ongoing obligation tied to system evolution. The OWASP Agentic Top 10 frames tool misuse and unchecked privilege expansion as distinct, named risk categories, not edge cases.

The principle of least agency, the agentic equivalent of least privilege, requires that capability scope stay tightly bounded to what the current approved use case actually needs. When the capability grows, the scope review grows with it. That is not opinion. That is the emerging consensus of every serious framework touching this space.
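One way to operationalize that trigger list is to diff the currently observed capability surface against the last approved snapshot. A hedged sketch follows, assuming both states can be exported from the platform's admin tooling; the `CapabilitySnapshot` fields and the `re_review_reasons` helper are illustrative, not a real API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CapabilitySnapshot:
    """Point-in-time view of one agent's reach (illustrative fields)."""
    connectors: frozenset        # enabled connectors
    data_sources: frozenset      # data the agent can discover or summarize
    can_write: bool              # read-only vs. write/act permissions
    action_systems: frozenset    # systems the agent can trigger actions in
    retention_mode: str          # memory / retention / feedback behavior
    model_provider: str          # model, AI partner, or processing path
    accountable_owner: str       # the named human who approved this scope

def re_review_reasons(approved: CapabilitySnapshot, current: CapabilitySnapshot) -> list:
    """Return every trigger from the list above that fires between snapshots."""
    reasons = []
    if current.connectors - approved.connectors:
        reasons.append("new connector enabled")
    if current.data_sources - approved.data_sources:
        reasons.append("new data source in reach")
    if current.can_write and not approved.can_write:
        reasons.append("permission model expanded from read to write")
    if current.action_systems - approved.action_systems:
        reasons.append("agent can act in a new system")
    if current.retention_mode != approved.retention_mode:
        reasons.append("retention/memory behavior changed")
    if current.model_provider != approved.model_provider:
        reasons.append("new model or processing path introduced")
    if current.accountable_owner != approved.accountable_owner:
        reasons.append("accountable owner changed")
    return reasons
```

Any non-empty result means the approval on file no longer describes the capability in production, which is the whole point of the trigger list.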
Anti-patterns that keep failing
“The app is already approved.”
The platform may be approved. The expanded agentic behavior may not have been reviewed in its current shape.
An approved platform does not automatically mean approved blast radius.
“The team that built the workflow owns it forever.”
That works right up until they transfer, leave, or forget it exists. Durable ownership has to survive personnel changes, not depend on memory and goodwill.
“The admin controls are the governance model.”
Admin controls matter. They are not ownership. A toggle is not a RACI.
“Security owns it because Security raised the risk.”
Security can set conditions, review risk, and enforce gates. That does not make Security the accountable business owner for a capability another team depends on.
“The service principal or integration record is the owner.”
Metadata is not accountability. If the best answer to “who owns this?” is a directory object, the ownership model is already broken.
“Human approvals make it safe.”
Only if the human side is actually secure. In GitHub’s published agentic security principles, agents gather context only from authorized users and operate under the permissions and context granted by the initiating user. That is a useful ownership lesson: human-in-the-loop only helps if attribution, authorization, and context still make sense. Auth0 lands in a similar place by focusing on fine-grained authorization for AI agents, especially least-privilege identities and policy checks on every call.
“If it goes wrong, we’ll turn it off.”
Can you? Quickly? Cleanly? With confidence? Cloudflare’s MCP governance guidance is useful here because it centers governance on vetting, authorizing, and auditing interactions, along with controlling which tools are authorized and who can access them. That is exactly the kind of disable-and-audit thinking organizations need once agents are acting inside approved platforms.
What good looks like
A workable ownership model is visible in the operating model.
- It names a human accountable owner.
- It names a technical operator.
- It names a re-review trigger.
- It separates data ownership from platform administration.
- It defines who approves scope expansion.
- It provides a clean disable path.
- It survives org changes.
That is not flashy. It is also the difference between a useful agentic capability and an orphaned privilege tier.
NIST AI RMF calls this continuous governance: the Govern function applies across the full AI lifecycle, not just the approval gate. OWASP’s Agentic Top 10 builds the same argument from the security side, centering least agency and ongoing scope enforcement as the foundational controls for deployed agents. Both frameworks are converging on the same operating model. An ownership structure that names humans, defines re-review triggers, and survives org-chart drift is not an advanced program. It is table stakes for what the field is now calling mature agentic governance.
A simple ownership test
Ask five questions:
- Who is the named accountable owner?
- Who runs the technical identity and connectors?
- Who signs off when the agent’s reach changes?
- Who reviews the data and action surface?
- Who disables it if the owner leaves or the agent misbehaves?
If any answer starts with a team name instead of a person, depends on a directory object instead of a role, or assumes “the platform review probably covered that,” the ownership model is not done.
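The failure patterns in that last sentence are mechanical enough to lint for. A rough sketch follows; the regex patterns and the `ownership_test` helper are illustrative assumptions, not a real tool:

```python
import re

# Heuristics for the three failure modes named above (illustrative, not exhaustive).
BAD_PATTERNS = [
    (re.compile(r"\bteam\b", re.I), "names a team instead of a person"),
    (re.compile(r"^(svc[-_]|sp[-_]|app[-_])", re.I), "points at a directory object, not a role"),
    (re.compile(r"probably|assume", re.I), "relies on an assumed prior review"),
]

def ownership_test(answers: dict) -> dict:
    """Map each of the five questions to its answer; return the ones that fail."""
    failures = {}
    for question, answer in answers.items():
        if not answer.strip():
            failures[question] = "no answer"
            continue
        for pattern, reason in BAD_PATTERNS:
            if pattern.search(answer):
                failures[question] = reason
                break
    return failures
```

An empty result does not prove the model is done, but a non-empty one proves it is not.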
And if it is not done, the governance problem is not theoretical. It is already in production.
Final thought
What usually breaks is not the recognition that agents are identities.
What breaks is the assumption that platform approval freezes capability in time.
It does not.
Approved tools evolve. Connectors expand. Memory changes. Workflows get smarter. Retrieval gets deeper. Action surfaces grow.
When that happens, the ownership model has to grow with it.
If it does not, then the gap is not in the technology.
It is in the accountability.
