Your AI Agents Already Have Access. You Just Don’t Know Where.


Key Takeaways

The biggest AI risk today isn’t misuse. It’s invisibility. Many organizations don’t yet know where these agents exist or what they can access.
AI agents are collapsing the distinction between humans and service accounts by operating with delegated user access, making “normal” activity potentially autonomous behavior.
Over-permissioned data, SaaS sprawl, and identity gaps already existed. AI agents simply surface and amplify those weaknesses at machine speed.
Before policy, enforcement, or architecture decisions, security teams must first answer a simple question: Where is AI actually operating in the environment? Without that baseline, governance is guesswork.
The role of security is not to slow AI. It’s to install brakes.

▶️ WATCH THE PANEL DISCUSSION

AI adoption is no longer experimental. It is operational. Across enterprises, AI agents are reading email, pulling from CRMs, generating reports, interacting with enterprise search, and executing tasks across SaaS platforms. And in many cases, security teams do not fully understand where those agents exist or what they can access.

I explored this new CISO reality with Rinki Sethi (Chief Security & Strategy Officer, Upwind Security) and Andrew Wilder (Chief Security Officer, Vetcor), and one theme kept surfacing.

The risk is not theoretical. It is invisible. As Andrew put it, “you can only control the risks that you know about.”

And right now, many organizations do not know enough.

AI Agents Are Blurring Identity Boundaries

For years, we operated with a relatively clean separation between human accounts and non-human accounts. “Pre the AI agent era, you had a real clear delineation between birthright accounts and non-human accounts,” said Andrew. That clarity is gone.  “Now with AI agents, those lines are very blurred because an AI agent can take your birthright accounts access and do lots of stuff with it.”

This is the core shift.

An AI agent can act using delegated user access. It can retrieve, synthesize, and execute across systems. It can behave continuously rather than discretely. And from a risk perspective, that changes everything.

If what appears to be normal user activity is actually an autonomous agent operating on that user’s behalf, your traditional assumptions break.  This is no longer just identity governance. It is identity plus agency.
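The shift from discrete to continuous behavior is observable in practice. As a minimal sketch (the log shape and thresholds here are illustrative assumptions, not any vendor's API), one heuristic is to flag accounts whose sustained action rate exceeds what a human plausibly produces:

```python
from collections import defaultdict

def flag_agent_like_sessions(events, window_s=60, max_human_actions=60):
    """Flag user_ids whose action count in any sliding time window exceeds a
    human-plausible threshold. `events` is a list of (user_id, timestamp_seconds)
    audit records; the window and threshold are illustrative assumptions."""
    by_user = defaultdict(list)
    for user_id, ts in events:
        by_user[user_id].append(ts)

    flagged = set()
    for user_id, stamps in by_user.items():
        stamps.sort()
        left = 0
        for right in range(len(stamps)):
            # Shrink the window until it spans at most window_s seconds.
            while stamps[right] - stamps[left] > window_s:
                left += 1
            if right - left + 1 > max_human_actions:
                flagged.add(user_id)
                break
    return flagged
```

A user clicking every few seconds stays under the threshold; an agent acting with that user's delegated access and issuing hundreds of calls per minute does not. Real detection needs far richer signals, but the point stands: the same credential, behaving continuously, is a different actor.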

Business Is Going to Move Regardless

Despite the risks and security team trepidation, AI adoption is not waiting for security maturity.  Security leaders do not get to pause AI until everything is perfect. Businesses are feeling the AI productivity gains. The market sees the competitive advantage.

Rinki described the acceleration clearly: “It is just like a revolution happening within months.”

This is not a three-year migration cycle like early cloud adoption. Embedded AI functionality is rolling out across enterprise platforms almost by default. The market is experiencing a drastic shift.

AI Amplifies What Was Already Broken

SaaS sprawl, identity sprawl, service accounts, and foundational governance were not fully controlled before AI agents entered most enterprise environments, and, as Rinki noted, they are certainly not better controlled now.

AI is not introducing a new category of weakness. It is accelerating exposure to old ones. AI becomes the amplifier.

Discovery Comes First

Before policy. Before enforcement. Before architecture debates. Discovery.

Andrew was direct about this: “the first thing that I would do… is identify what AI we have in our organization? What agents do we have? What SaaS apps do we have that are using it?”

That is the starting point.

In our own experience at Opsin, organizations are often shocked by what they uncover in their 24 Hour Opsin Risk Assessment. One enterprise customer discovered 400 AI agents running in its environment, none of which it had known about.

That is not an outlier story. Many organizations believe AI usage is limited to a few sanctioned deployments. In reality, shadow adoption and embedded agents proliferate quickly. Security teams must mobilize to establish how broad that exposure actually is across the organization. Without that baseline, everything else is guesswork.
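Andrew's question, what AI do we have, can be operationalized as a grant inventory: pull the third-party app grants from your identity provider and group them by app and scope. The record layout below is an assumption for illustration, not a real IdP schema:

```python
# Hypothetical export of delegated app grants from an identity provider.
# Field names ("app", "user", "scopes") are illustrative assumptions.
grants = [
    {"app": "SalesBot AI", "user": "alice", "scopes": ["mail.read", "files.read.all"]},
    {"app": "Expense Tracker", "user": "bob", "scopes": ["expenses.read"]},
    {"app": "SalesBot AI", "user": "carol", "scopes": ["mail.read", "mail.send"]},
]

# Example high-risk scopes worth reviewing first; adjust to your platform.
HIGH_RISK_SCOPES = {"mail.read", "mail.send", "files.read.all"}

def inventory(grants):
    """Group grants by app: which users delegated access, and which
    high-risk scopes the app holds anywhere in the tenant."""
    apps = {}
    for g in grants:
        entry = apps.setdefault(g["app"], {"users": set(), "risky_scopes": set()})
        entry["users"].add(g["user"])
        entry["risky_scopes"].update(s for s in g["scopes"] if s in HIGH_RISK_SCOPES)
    return apps
```

Even this crude rollup answers the first-order questions: which apps exist, who granted them access, and which ones can read mail or files at tenant scale.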

Maturity Still Matters

There was an important tension in the conversation around whether organizations should prioritize AI security immediately or focus on fundamentals first. Here were the key takeaways:

  • Understand your program maturity: Get brilliant at the basics before you look at AI. If identity governance is weak, AI agents will expose that weakness faster. If data permissions are overly broad, AI will surface that exposure through simple prompts.
  • Use AI as justification to strengthen foundational controls: The urgency around AI can become the leverage that finally funds a proper security program and hardens foundational controls.

Agents Do Not Read Policy Documents

Many enterprises formed AI governance committees. Principles were drafted. Policies were written, and Rinki acknowledged that effort: “We were coming up with AI governance committees… but putting it on paper isn’t going to solve anything.” Enforcement must be technical.

Agents do not read policy documents. They execute based on permissions and integration boundaries. That’s why mature organizations are now using AI to control AI. And yet, that too introduces another layer of complexity.

If AI is controlling AI, then who is controlling that? This is where human oversight remains critical. “You need to have a human in the loop,” said Andrew, “for checks and balances.” Automation is necessary. Blind trust is not.

Legacy Tools Were Built for a Different Era

We discussed DLP and CASB in the context of agentic behavior. These tools were built assuming standard actors making discrete, observable requests, but agents do not operate that way. They are dynamic and continuous, and that difference matters. If your controls assume explicit human-driven requests, they may miss implicit agent-driven actions.

Rinki reinforced that tool stacks are not fully aligned yet: “They’re not quite plugged in with the new… enterprise AI or agentic creation tools that are available.”

This is why runtime context becomes essential. Identity alone is not enough. Data alone is not enough. You need to see the interaction in context.
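What "interaction in context" means in practice is a policy decision that consumes identity, data sensitivity, and the action at once, not each in isolation. The labels and rules below are illustrative assumptions, not a product policy:

```python
from dataclasses import dataclass

@dataclass
class Request:
    actor: str          # the identity the action runs as
    on_behalf_of: str   # the delegating human, or "" for direct human use
    data_label: str     # e.g. "public", "internal", "restricted"
    action: str         # e.g. "read", "export"

def decide(req: Request) -> str:
    """Toy contextual policy: evaluate identity, data label, and action
    together. Thresholds and labels are assumptions for illustration."""
    agentic = req.on_behalf_of != ""  # delegated access implies an agent
    if req.data_label == "restricted" and agentic and req.action == "export":
        return "block"                   # no autonomous export of restricted data
    if req.data_label == "restricted" and agentic:
        return "require_human_approval"  # human in the loop for sensitive access
    return "allow"
```

Note that the same restricted export is allowed for the human directly but blocked for an agent acting on her behalf; no identity-only or data-only control can express that distinction.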

Do Not Slow AI Innovation. Install Brakes.

There was strong alignment between the three of us on one point: You cannot stop AI. Security leaders are not in place to prevent movement but to enable controlled speed for innovation. As Andrew put it, “cybersecurity leaders are like the brakes on a Formula One race car…the car can’t win the race if it doesn’t have good brakes.”

Strong governance does not slow innovation. It allows the business to move faster without catastrophic failure.

The Next 6 to 12 Months

If I distill the discussion into practical guidance for CISOs over the next year, it looks like this:

  1. Assess maturity: Get brilliant at the basics of your program.
  2. Gain visibility: Discovery is the first step.
  3. Align with the business: Support the most critical AI use cases rather than attempting to block everything.
  4. Move towards runtime monitoring and contextual enforcement: Identity, data, prompts, and outputs must be evaluated together.

Remember, you can only control the risks you know about. AI agents already have access in your environment. The real question is whether you know where.

About the Author
James Pham
James Pham is the Co-Founder and CEO of Opsin, with a background in machine learning, data security, and product development. He previously led ML-driven security products at Abnormal Security and holds an MBA from MIT, where he focused on data analytics and AI.

