Beyond DLP: How Culligan Secures and Governs Agentic AI with Opsin


Key Takeaways

Legacy security tools were built for channels. Agentic AI ignores channels. AI assistants surface sensitive data inside an answer using the user's existing permissions, so there is no egress channel for pattern-matching tools to inspect. You need visibility into prompts, retrievals, and agent behavior at the source.
Identity is the new perimeter for AI. Every copilot and custom agent inherits a human identity, and any user who interacts with that agent inherits its reach by proxy. Govern who can build agents, who can deploy them, and what data each one can actually touch.
Pick an AI governance framework you can operationalize. Culligan chose NIST AI RMF over the EU AI Act because it offered concrete Govern, Map, Measure, and Manage functions that mapped to their existing security program. Pair the framework with a written policy, a weekly cross-functional committee, and external counsel for regulatory readiness.
Self-service remediation is what scales security teams. When data owners get clear context on what is exposed and how to fix it, 40-45% of issues resolve without security touching them. That only works if your platform produces context, not just alerts.

AI agents act on enterprise data autonomously, and legacy security tools cannot see what they do. In a recent LinkedIn Live, Opsin Co-Founder and CPO Oz Wasserman sat down with Amir Niaz, CISO at Culligan International, to walk through how a global manufacturer with 300+ acquisitions in four years is governing agentic AI without slowing the business down. This is what they covered.

Why do legacy security tools fail in agentic AI environments?

Legacy data security tools and the first wave of AI security tools both rely on the same model: pattern matching against known-bad content moving through defined channels. AI assistants ignore that model entirely. The user simply asks a question, and the AI inherits the user's permissions to surface the answer wherever the data lives.

Amir Niaz, CISO at Culligan, put it directly:

"AI identity is basically your identity. So whatever access you have, AI has that access. Your ChatGPTs and Copilots are finding issues which they're technically bypassing your DLP. So my DLP controls are: don't show credit card numbers, don't show bank account numbers. But in a chat bar, if you say 'show me the financial history of EMEA for last year,' DLP is not going to catch it. Opsin gave us visibility to show it to the leadership that this is what people are finding."

The shift from pattern matching to semantic, identity-driven retrieval is why a new category is needed. AI does not exfiltrate data through the channels legacy tools were built to watch. It surfaces it inside an answer.
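The gap can be sketched in a few lines. A channel-based DLP rule matches raw patterns, such as credit card numbers, on content leaving a defined channel; an AI answer that conveys the same sensitive information as prose contains no matchable pattern. The rule and sample texts below are illustrative, not Culligan's actual DLP configuration.

```python
import re

# A typical channel-based DLP rule: match raw credit card numbers on egress.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def dlp_blocks(outbound_text: str) -> bool:
    """Return True if the pattern-matching rule would block this content."""
    return bool(CARD_PATTERN.search(outbound_text))

# A raw data export trips the rule as expected:
raw_export = "Customer card: 4111 1111 1111 1111"
print(dlp_blocks(raw_export))  # True: the literal pattern is present

# An AI assistant answering "show me the financial history of EMEA" returns
# the sensitive substance as prose, so there is nothing for the rule to match:
ai_answer = "EMEA revenue last year was down 4%, driven by churned accounts."
print(dlp_blocks(ai_answer))   # False: semantically sensitive, no pattern
```

This is the structural reason inspection has to move upstream, to prompts and retrievals, rather than waiting at the egress channel.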

What does the agent identity sprawl problem actually look like?

When employees can build their own agents, every admin becomes a potential identity-inheritance vector. An agent created by a CRM admin can do anything that CRM admin can do, and any user who interacts with the agent inherits that reach by proxy.

Oz framed it on the live:

"People were using Microsoft Copilot as their own identity but consuming data they probably should not access. Now we are with agents where people can create the agents. If they don't lock their actual identity, that concept can be breached. Think about your CRM admins that could literally create an agent with their own respective permissions, and that agent can do whatever they could do, and people interact with it and consume whatever data they want."

This is why agent governance starts at discovery. You cannot secure what you cannot see, and most security teams cannot see the agents being spun up inside Copilot Studio, internal builders, or SaaS plugins.
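The inheritance problem Oz describes can be reduced to a single question: whose identity does the agent use at retrieval time? If the agent executes with its creator's permissions, every caller gets the creator's reach by proxy; enforcing the caller's own identity closes that gap. The permission model and role names below are hypothetical, for illustration only.

```python
# Hypothetical permission model illustrating agent identity inheritance.
PERMISSIONS = {
    "crm_admin": {"accounts", "pipeline", "salary_data"},
    "sales_rep": {"accounts", "pipeline"},
}

def agent_query(resource: str, creator: str, caller: str,
                enforce_caller: bool) -> bool:
    """Return True if the agent serves the resource to the caller."""
    # Insecure default: the agent runs as its creator, so any caller
    # inherits the creator's full reach by proxy.
    identity = caller if enforce_caller else creator
    return resource in PERMISSIONS[identity]

# A sales rep asks an admin-built agent for salary data:
print(agent_query("salary_data", "crm_admin", "sales_rep",
                  enforce_caller=False))  # True: exposed via the admin
print(agent_query("salary_data", "crm_admin", "sales_rep",
                  enforce_caller=True))   # False: caller's own access applies
```

Discovery matters precisely because you cannot tell which of these two modes a shadow agent is running in until you can see the agent at all.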

How is Culligan governing agentic AI today?

There is no single accepted framework for governing agentic AI today; standards bodies are still catching up. OWASP's GenAI Data Security Risks and Mitigations 2026, which Opsin co-founder and CPO Oz Wasserman contributed to, defines 21 distinct risk categories, while the EU AI Act, the NIST AI RMF, and a patchwork of US state laws all approach the problem differently. Culligan made a pragmatic call: build the program on the NIST AI RMF (Risk Management Framework), because the company already aligned to NIST for general security, and the framework's Govern, Map, Measure, and Manage functions could be operationalized today.

Amir's blueprint:

"From a security standpoint, early on we said this is not just a security issue, this is a business issue. The CISO and CIO are 100% aligned. Then we brought in our DPO, we brought in legal teams from both regions. We understood that there is nobody with the skill set that's needed to govern this. The EU AI Act is really hard to understand what controls it's asking us to do. So we decided we're just going to follow the NIST AI RMF. We brought in an external law firm that started to implement Map, Measure, and Monitor for us. We have a 14-page AI policy that covers everything from legal to compliance to governance, and what good looks like for Culligan. We meet on a weekly basis, sometimes twice a week, at the committee level. If somebody is requesting an AI tool, most are point solutions. We evaluate: is our data going to be used for training their models? What happens if we off-board that vendor? We look at it from a GDPR lens. There's a lot of third-party risk management before we onboard a vendor."

The next frontier Culligan is solving for: regulator readiness.

"We're now focusing on when the regulators come knocking on your door, because if AI made a mistake, how do we answer those tough questions? That's the governance model we're building."

Why is "less is more" the right starting point for agentic AI?

Because agents inherit access to everything they can reach, the cheapest control is reducing what is reachable in the first place. For Culligan, the data sprawl problem is amplified by acquisition history.

"Culligan has done a little over 300 acquisitions in the last four years, so we have a lot of data that nobody has touched. It's just sitting there. This is where the less is more. We're trying to eliminate as much data as we can, and then have some guardrails. If you're developing an agent, have some kind of a committee that reviews it. Today this is a part-time job for every committee member, so we're trying to establish a chief AI officer type organization within Culligan that will have the skill set and the resources to get in front of this thing."

Two takeaways for any enterprise: stale data is agent fuel, and AI governance eventually needs a full-time owner.

What does automated remediation look like in practice?

Culligan reduced AI exposure issues by 40-45% through self-service remediation alone. The principle: if AI created the problem, AI should help fix it.

"I'm a big automation guy. I don't like to do busy work. I said, well, if AI broke this for us, AI should fix it for us. We came into automation and said, can we provide instructions to the people and see if they can fix their own things. We had SharePoint sites that were open to public, files and folders, someone's passport pictures. During COVID, when you traveled to Europe, you had to take a picture of your passport with a negative COVID test, so people had shared all that information in their OneDrive and Teams folders. We were finding all this stuff that was critical. The other piece we realized very quickly was GDPR compliance was out the door because data is stored everywhere. Some items we were doing were okay in the U.S. but not in EMEA, especially Germany. With Opsin giving us visibility, we reduced the number by automation about 40-45%. People were nice enough to fix their own problems. The rest needed a little bit more nudge."

Self-service remediation works when the system tells the data owner exactly what is exposed, why it matters, and how to fix it without filing a ticket.
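That last sentence is the whole design: route each finding to its data owner as a message carrying the what, the why, and the how, instead of a bare alert into the security queue. A minimal sketch of that routing step follows; the finding fields and fix text are invented for illustration, not Opsin's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    owner: str      # data owner who can self-remediate
    location: str   # where the exposure lives
    exposure: str   # what is exposed
    why: str        # business or regulatory impact
    fix: str        # concrete remediation steps

def owner_notice(f: Finding) -> str:
    """Build a self-service remediation message: what, why, and how."""
    return (
        f"Hi {f.owner}: {f.exposure} in {f.location} is reachable by AI "
        f"assistants. Why it matters: {f.why} To fix it yourself: {f.fix}"
    )

finding = Finding(
    owner="j.doe",
    location="a public SharePoint site",  # example drawn from the webcast
    exposure="a passport scan",
    why="personal data exposure is a GDPR risk, especially in EMEA.",
    fix="restrict the site to your team and delete the file.",
)
print(owner_notice(finding))
```

When the message is specific enough, the owner can act without filing a ticket, which is what let Culligan close 40-45% of issues without security touching them.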

How does Opsin help CISOs answer the three questions that matter to the board?

Every CISO eventually has to answer three questions about a new risk class, and Amir laid out the framework directly:

"There are basically three pains: what is the risk, how is it applied to Culligan, and if we don't do anything, what's the biggest material impact to Culligan and our business that can be. The first two have been answered by Opsin, which is really helpful for us. So now it's just a matter of getting in front of it and saying, okay, we have all the data, we have all the understanding, we know we understand the risk, and then this is how we can fix it."

Opsin closes the gap between "we have an AI policy" and "we can prove it works." The platform gives CISOs the evidence base they need to brief leadership, satisfy auditors, and prioritize remediation by business impact.

Catch the Full Recording HERE

See Opsin for yourself

Get an AI agent risk assessment for your environment in under 24 hours. Opsin maps every agent, copilot, and GenAI app touching your data, prioritizes the exposures that matter, and shows you exactly how to remediate them.

Free 24 Hour Risk Assessment →

About the Author
Opsin Security
Purpose-built for enterprise AI, Opsin delivers visibility, context, and protection across the LLMs and cloud environments your organization is already using, from Microsoft Copilot and ChatGPT Enterprise to Google Gemini and Claude. Opsin makes AI risk visible, clear, and actionable, enabling security teams to safely scale AI adoption.
LinkedIn Bio >


