How Culligan’s CISO Safely Scaled AI Across 15,000 Global Users: Enterprise AI Security Lessons

Highlights

  • Oversharing is the silent enterprise threat. AI tools amplify decades of misconfigured access, especially in small to mid-size firms lacking AI governance.
  • Security leaders should enable, not impede. Shifting from gatekeeper to strategic partner empowers innovation while managing risk.
  • Ask why and how, not just who and what. Adding intent and context into access decisions strengthens AI governance and reduces misuse.

A Real Conversation with Amir Niaz, VP & Global CISO at Culligan International

In this Oversharing episode, Amir Niaz shares how Culligan, a 140-country water services provider, adopted Microsoft Copilot and other GenAI tools without losing control of its data.

Rather than blocking AI, Culligan chose to observe, guide, and collaborate with users from the start. This meant building governance in parallel with adoption, forming a global AI steering committee, and tackling two high-stakes risks:

  1. Shadow AI tools sending sensitive data to uncontrolled environments.
  2. Long-standing oversharing issues in SharePoint, OneDrive, and Teams, amplified by AI’s ability to surface them instantly.

From Early Use Cases to Global Policy

Culligan’s first AI experiments were pragmatic:

  • Reading customer emails and auto-creating orders 24/7.
  • Converting after-hours voicemails into Salesforce cases.
  • Using Copilot for executive meeting minutes.

These quick wins proved AI’s business value, but also underscored the need for policy. The steering committee worked with outside counsel to draft a 12-page global AI policy covering generative AI usage, data classification, access controls, and decision-making transparency.

Oversharing: The Risk They Couldn’t Ignore

Copilot’s pilot rollout surfaced a hidden problem: users accessing sensitive HR, sales, and finance data far beyond their job scope. The root cause wasn’t AI. It was years of permissive sharing, public SharePoint sites, and ungoverned folders. AI just made it too visible to ignore.

Culligan paused AI expansion until the scope was clear. Opsin’s assessment scored the data exposure risk at 87 out of 100, prompting a global cleanup and user-level remediation.

AI to Fix AI-Driven Problems

With Opsin, Culligan deployed an AI-powered remediation chatbot to guide users through fixing their own oversharing issues. This shifted responsibility from IT policing to user accountability, while also educating teams on secure sharing practices.

Within months, critical exposures dropped sharply, and Culligan’s risk score fell from 87 to 14. The bot continues scanning and engaging users in real time, embedding better habits for future AI expansion.

Final Takeaway: Governance in Tandem with Adoption

Amir’s advice is simple: “If you block AI, you lose visibility and influence.” Instead, start with enablement, wrap it with cross-functional governance, and tackle foundational data issues before scaling. Use AI not just for productivity, but to safeguard the environment it operates in.

Ready to hear the full story? Watch the webcast at the top of this page.



