
Guardrails, Not Guesswork: How Healthcare Is Tackling GenAI Oversharing


Highlights

  • Healthcare organizations face major GenAI risks from accidental oversharing, especially through tools like Microsoft Copilot.
  • Default sharing settings and vague use cases often lead to PHI exposure, even when unintentional.
  • Effective mitigation requires clear governance, stakeholder involvement, and role-aware access controls that evolve with AI capabilities.

GenAI is transforming how enterprises operate — but without the right safeguards, it’s also exposing sensitive data in ways many organizations aren’t prepared for. In the first edition of our Oversharing Perspective series, I sat down with Mike D’Arezzo, Executive Director of Information Security and GRC at Wellstar Health, to talk about one of the most pressing challenges in AI adoption: accidental data exposure.

Why Oversharing in GenAI Tools Poses a Real Threat

For healthcare organizations like Wellstar, the stakes are especially high. HIPAA compliance and patient trust depend on protecting sensitive data — including PHI — from unintentional leaks. But as Mike emphasized, oversharing isn’t just a healthcare issue. Any organization using Microsoft Copilot, ChatGPT, or other GenAI platforms faces similar risks.

“My biggest concern is data — how it gets into GenAI tools, who can access it, and whether we can pull it back if it leaks.”
— Mike D’Arezzo

From GenAI Use Cases to Enterprise Guardrails

The conversation made one thing clear: deploying GenAI without purpose creates unnecessary risk. To mitigate oversharing, Mike recommends:

  • Starting with clear use cases and success metrics
  • Involving legal, compliance, and risk teams early
  • Forming an AI governance council to ensure alignment and accountability

This isn’t about blocking AI — it’s about enabling safe, scalable, and compliant adoption.

Real-World Oversharing Risks with Microsoft Copilot

Mike shared specific examples of how oversharing happens:

  • Employees accidentally uploading PHI into Copilot prompts
  • SharePoint folders defaulting to org-wide visibility
  • Users asking Copilot to surface files they shouldn’t access

Even when unintentional, these incidents can lead to severe consequences, from compliance violations to reputational damage.

“Every Microsoft tool is built for collaboration by default, and that makes oversharing way too easy.”
— Mike D’Arezzo
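
A common guardrail for the first risk above is screening prompts for likely PHI before they ever reach a GenAI tool. As a rough illustration only (the patterns below are hypothetical and far simpler than a production DLP or PHI-detection service), a minimal pattern-based redactor might look like:

```python
import re

# Hypothetical patterns for a few common PHI identifiers. A real
# deployment would rely on a dedicated DLP/PHI detection service,
# not regexes alone.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "DOB": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def redact_phi(prompt: str) -> tuple[str, list[str]]:
    """Replace likely PHI with placeholders and report what was found."""
    findings = []
    for label, pattern in PHI_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, findings

clean, found = redact_phi(
    "Summarize the care plan for MRN: 12345678, DOB 01/02/1980"
)
```

The point is the checkpoint, not the patterns: intercepting prompts at a single choke point gives security teams a place to log, block, or redact before data leaves the organization's control.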

Security Isn’t Just IT’s Job

Mike’s philosophy is that GenAI security must be cross-functional. His framework includes:

  • Education: Run lunch-and-learns, targeted training, and scenario-based awareness
  • Engagement: Include skeptics and champions alike in your AI rollout
  • Policy: Keep policies flexible to evolve with new AI capabilities and emerging threats

“Get people involved, especially the ones who challenge you.”
— Mike D’Arezzo

What’s Next for GenAI in Regulated Industries?

Mike sees a shift toward agentic AI — systems that not only generate content but also act on behalf of users. That future demands tighter identity and access controls. According to Mike:

  • AI will begin building workflows and systems autonomously
  • Natural language interfaces will replace traditional UIs
  • Role-based access control (RBAC) will be essential to prevent abuse

“You and I could ask Copilot the same question and get different answers, because of our roles.”
— Mike D’Arezzo
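
The role-dependent answers Mike describes can be thought of as a retrieval filter: before a model grounds its response in any documents, the result set is trimmed to what the asker's role permits. A minimal sketch of that idea (the roles, classifications, and documents here are hypothetical, not Copilot's actual mechanism):

```python
# Map each role to the document classifications it may read.
ROLE_PERMISSIONS = {
    "clinician": {"clinical", "general"},
    "analyst": {"financial", "general"},
}

# Toy document store; a real system would query SharePoint/Graph
# with the caller's identity instead.
DOCUMENTS = [
    {"title": "Q3 budget", "classification": "financial"},
    {"title": "Care protocol", "classification": "clinical"},
    {"title": "Holiday schedule", "classification": "general"},
]

def retrieve_for(role: str, docs=DOCUMENTS) -> list[str]:
    """Return only the documents the given role is cleared to see."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    return [d["title"] for d in docs if d["classification"] in allowed]
```

With this filter in place, a clinician and an analyst asking the identical question are grounded in different document sets — which is exactly why the same prompt yields different answers.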

Final Takeaway: Build the Guardrails Now

GenAI offers immense promise, but its potential will only be realized if it’s deployed securely. As Mike warned, “We need to build the firewalls now — not after we’ve already burned the house down.”

About the Author

James Pham is the Co-Founder and CEO of Opsin, with a background in machine learning, data security, and product development. He previously led ML-driven security products at Abnormal Security and holds an MBA from MIT, where he focused on data analytics and AI.


