AI adoption is no longer experimental. It is operational. Across enterprises, AI agents are reading email, pulling from CRMs, generating reports, interacting with enterprise search, and executing tasks across SaaS platforms. And in many cases, security teams do not fully understand where those agents exist or what they can access.
I explored this new CISO reality with Rinki Sethi (Chief Security & Strategy Officer, Upwind Security) and Andrew Wilder (Chief Security Officer, Vetcor), and one theme kept surfacing.
The risk is not theoretical. It is invisible. As Andrew put it, “you can only control the risks that you know about.”
And right now, many organizations do not know enough.
For years, we operated with a relatively clean separation between human accounts and non-human accounts. “Pre the AI agent era, you had a real clear delineation between birthright accounts and non-human accounts,” said Andrew. That clarity is gone. “Now with AI agents, those lines are very blurred because an AI agent can take your birthright accounts access and do lots of stuff with it.”
This is the core shift.
An AI agent can act using delegated user access. It can retrieve, synthesize, and execute across systems. It can behave continuously rather than discretely. And from a risk perspective, that changes everything.
If what appears to be normal user activity is actually an autonomous agent operating on that user’s behalf, your traditional assumptions break. This is no longer just identity governance. It is identity plus agency.
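To make the broken assumption concrete, here is a minimal, illustrative sketch (synthetic timestamps, hypothetical thresholds, not a production detector) that flags sessions whose request cadence looks continuous and machine-steady rather than human-discrete:

```python
from statistics import mean

def looks_agent_driven(timestamps, max_gap_s=5.0, min_events=20):
    """Flag a session as agent-like when it sustains a steady,
    near-continuous request cadence that a human rarely produces.
    Thresholds are illustrative, not tuned values."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Continuous behavior: small, uniform gaps between actions.
    return mean(gaps) < max_gap_s and max(gaps) < 3 * max_gap_s

# Synthetic examples: a human browsing vs. an agent polling.
human = [0, 41, 95, 180, 240, 390, 500, 620, 700, 845,
         900, 1000, 1100, 1180, 1300, 1420, 1500, 1600, 1700, 1800]
agent = [i * 2.0 for i in range(60)]   # one action every 2 seconds

print(looks_agent_driven(human))  # False
print(looks_agent_driven(agent))  # True
```

Both sessions run under the same user identity; only the behavioral shape distinguishes them, which is exactly why identity-only controls fall short here.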
Despite the risks and security team trepidation, AI adoption is not waiting for security maturity. Security leaders do not get to pause AI until everything is perfect. Businesses are feeling the AI productivity gains. The market sees the competitive advantage.
Rinki described the acceleration clearly: “It is just like a revolution happening within months.”
This is not a three-year migration cycle like early cloud adoption. Embedded AI functionality is rolling out across enterprise platforms almost by default. The market is experiencing a drastic shift.
SaaS sprawl, identity sprawl, service accounts, and foundational governance were not fully controlled before AI agents entered most enterprise environments, and, as Rinki noted, they are certainly not better controlled now.
This is not introducing a new category of weakness. It is accelerating exposure to old ones. AI becomes the amplifier.
Before policy. Before enforcement. Before architecture debates. Discovery.
Andrew was direct about this: “the first thing that I would do… is identify what AI we have in our organization? What agents do we have? What SaaS apps do we have that are using it?”
That is the starting point.
In our own experience at Opsin, organizations are often shocked by what they uncover in their 24 Hour Opsin Risk Assessment. One enterprise customer discovered 400 AI agents running in their environment, entirely unbeknownst to them.
That is not an outlier story. Many organizations believe AI usage is limited to a few sanctioned deployments. In reality, shadow adoption and embedded agents proliferate fast, and security teams must move just as fast to measure the exposure across the organization. Without that baseline, everything else is guesswork.
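A first-pass discovery can start from data most teams already have. The sketch below assumes a hypothetical export of third-party app grants (the field names and sample records are invented for illustration) and does a crude keyword triage to surface AI tools and agents for review:

```python
# Hypothetical export of third-party app grants (e.g. from an IdP
# or SaaS admin console); field names and records are illustrative.
grants = [
    {"app": "Acme CRM", "user": "pat@example.com",
     "scopes": ["contacts.read"]},
    {"app": "CopilotDraft AI", "user": "pat@example.com",
     "scopes": ["mail.read", "mail.send"]},
    {"app": "NotesSummarizer Agent", "user": "sam@example.com",
     "scopes": ["files.read.all"]},
]

AI_HINTS = ("ai", "agent", "copilot", "gpt", "assistant", "llm")

def find_ai_grants(grants):
    """First-pass inventory: flag grants whose app name suggests an
    AI tool or agent, so each can be reviewed for scope and owner."""
    return [g for g in grants
            if any(h in g["app"].lower() for h in AI_HINTS)]

for g in find_ai_grants(grants):
    print(g["app"], "->", g["user"], g["scopes"])
```

Name-matching is deliberately naive; the point is that even a rough inventory of who granted what to which app beats no baseline at all.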
There was an important tension in the conversation around whether organizations should prioritize AI security immediately or focus on fundamentals first. Here are the key takeaways.
Many enterprises formed AI governance committees. Principles were drafted. Policies were written, and Rinki acknowledged that effort: “We were coming up with AI governance committees… but putting it on paper isn’t going to solve anything.” Enforcement must be technical.
Agents do not read policy documents. They execute based on permissions and integration boundaries. That’s why mature organizations are now using AI to control AI. And yet, that too introduces another layer of complexity.
If AI is controlling AI, then who is controlling that? This is where human oversight remains critical. “You need to have a human in the loop,” said Andrew, “for checks and balances.” Automation is necessary. Blind trust is not.
We discussed DLP and CASB in the context of agentic behavior. These tools were built to assume standard actors making discrete, observable requests, but agents do not operate that way. They are dynamic and continuous, and that difference matters. If your controls assume explicit human-driven requests, they may miss implicit agent-driven actions.
Rinki reinforced that tool stacks are not fully aligned yet: “They’re not quite plugged in with the new… enterprise AI or agentic creation tools that are available.”
This is why runtime context becomes essential. Identity alone is not enough. Data alone is not enough. You need to see the interaction in context.
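One way to picture "interaction in context" is a decision that weighs identity, actor type, data sensitivity, and action together. This is a minimal sketch with invented field names and toy rules, not a real policy engine:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """A single access event seen at runtime.
    Fields are illustrative; real telemetry will differ."""
    identity: str          # whose credentials were used
    actor: str             # "human" or "agent" acting on their behalf
    resource_label: str    # e.g. "public", "internal", "restricted"
    action: str            # e.g. "read", "export"

def allow(event: Interaction) -> bool:
    # Identity alone is not enough: the same user identity gets a
    # stricter answer when an agent is the actor and the data is
    # sensitive, or when the action moves data out of the system.
    if event.actor == "agent" and event.resource_label == "restricted":
        return False
    if event.actor == "agent" and event.action == "export":
        return False
    return True

print(allow(Interaction("pat", "human", "restricted", "read")))  # True
print(allow(Interaction("pat", "agent", "restricted", "read")))  # False
```

The same identity reading the same data gets different answers depending on who, or what, is actually acting, which is the contextual signal identity-only or data-only controls cannot express.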
There was strong alignment between the three of us on one point: you cannot stop AI. Security leaders are not there to prevent movement; they are there to enable controlled speed for innovation. As Andrew put it, “cybersecurity leaders are like the brakes on a Formula One race car… the car can’t win the race if it doesn’t have good brakes.”
Strong governance does not slow innovation. It allows the business to move faster without catastrophic failure.
If I distill the discussion into practical guidance for CISOs over the next year, it looks like this:
Remember, you can only control the risks you know about. AI agents already have access in your environment. The real question is whether you know where.