
In this Oversharing episode, Amir Niaz shares how Culligan, a 140-country water services provider, adopted Microsoft Copilot and other GenAI tools without losing control of its data.
Rather than blocking AI, Culligan chose to observe, guide, and collaborate with users from the start. This meant building governance in parallel with adoption, forming a global AI steering committee, and tackling two high-stakes risks:
Culligan’s first AI experiments were pragmatic:
These quick wins proved AI’s business value, but also underscored the need for policy. The steering committee worked with outside counsel to draft a 12-page global AI policy covering generative AI usage, data classification, access controls, and decision-making transparency.
Copilot’s pilot rollout surfaced a hidden problem: users accessing sensitive HR, sales, and finance data far beyond their job scope. The root cause wasn’t AI. It was years of permissive sharing, public SharePoint sites, and ungoverned folders. AI just made it too visible to ignore.
Culligan paused AI expansion until the scope of the problem was clear. Opsin’s assessment found a data exposure risk score of 87 out of 100, prompting a global cleanup and user-level remediation.
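The episode doesn’t define how such a risk score is computed, but the general idea can be sketched with a toy calculation. Everything below is hypothetical, not Opsin’s actual methodology: score each file by how sensitive it is and how broadly it is shared, then normalize against the worst case where everything is shared via an “anyone” link.

```python
# Toy illustration of a 0-100 data-exposure risk score.
# All category names, scope names, and weights are hypothetical,
# chosen only to illustrate the concept; not Opsin's methodology.
from dataclasses import dataclass

# Hypothetical sensitivity weights per data category.
SENSITIVITY = {"hr": 3, "finance": 3, "sales": 2, "general": 1}

# Hypothetical exposure weights per sharing scope.
EXPOSURE = {"owner_only": 0.0, "team": 0.2, "org_wide": 0.6, "anyone_link": 1.0}

@dataclass
class Item:
    category: str  # e.g. "hr", "finance", "sales", "general"
    scope: str     # e.g. "owner_only", "team", "org_wide", "anyone_link"

def exposure_score(items: list[Item]) -> int:
    """Exposure-weighted sensitivity, as a percentage of the worst case
    in which every item is exposed via an 'anyone' link."""
    if not items:
        return 0
    actual = sum(SENSITIVITY[i.category] * EXPOSURE[i.scope] for i in items)
    worst = sum(SENSITIVITY[i.category] for i in items)  # all at weight 1.0
    return round(100 * actual / worst)
```

Under a scheme like this, remediation that downgrades broadly shared sensitive files to team-only access is exactly what drives the score down, which mirrors the trajectory described in the episode.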
With Opsin, Culligan deployed an AI-powered remediation chatbot to guide users through fixing their own oversharing issues. This shifted responsibility from IT policing to user accountability, while also educating teams on secure sharing practices.
Within months, critical exposures dropped sharply, and Culligan’s risk score fell from 87 to 14. The bot continues scanning and engaging users in real time, embedding better habits for future AI expansion.
Amir’s advice is simple: “If you block AI, you lose visibility and influence.” Instead, start with enablement, wrap it with cross-functional governance, and tackle foundational data issues before scaling. Use AI not just for productivity, but to safeguard the environment it operates in.
Ready to hear the full story? Watch the webcast at the top of this page.