GenAI is transforming how enterprises operate — but without the right safeguards, it's also exposing sensitive data in ways many organizations aren’t prepared for. In the first edition of our Oversharing Perspective series, I sat down with Mike D’Arezzo, Executive Director of Information Security and GRC at Wellstar Health, to talk about one of the most pressing challenges in AI adoption: accidental data exposure.
For healthcare organizations like Wellstar, the stakes are especially high. HIPAA compliance and patient trust depend on protecting sensitive data — including PHI — from unintentional leaks. But as Mike emphasized, oversharing isn’t just a healthcare issue. Any organization using Microsoft Copilot, ChatGPT, or other GenAI platforms faces similar risks.
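To make the exposure path concrete, here is a minimal Python sketch of the kind of pre-prompt scrubbing a data loss prevention (DLP) layer might apply before text ever reaches a GenAI service. The patterns and the redact_phi helper are illustrative assumptions on my part, not a production PHI detector, which would cover far more identifier types and lean on dedicated classification tooling.

```python
import re

# Illustrative patterns only -- a real PHI detector covers many more
# identifier types and typically relies on dedicated DLP tooling.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),  # hypothetical MRN format
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_phi(prompt: str) -> str:
    """Replace anything matching a known PHI pattern with a labeled placeholder."""
    for label, pattern in PHI_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

raw = "Summarize the visit for MRN: 00123456 and call back at 404-555-0123."
print(redact_phi(raw))
# -> Summarize the visit for [REDACTED-MRN] and call back at [REDACTED-PHONE].
```

The design point is placement: the filter sits between the user and the model, so a leak is stopped before the prompt ever leaves the organization’s boundary.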
The conversation made one thing clear: deploying GenAI without a defined purpose creates unnecessary risk, and Mike offered a set of practical recommendations for mitigating oversharing.
This isn’t about blocking AI — it’s about enabling safe, scalable, and compliant adoption.
Mike shared specific examples of how oversharing happens in practice.
Even when unintentional, these incidents can lead to severe consequences, from compliance violations to reputational damage.
Mike’s philosophy is that GenAI security must be cross-functional, built on a framework of shared responsibility rather than a single owning team.
Mike sees a shift toward agentic AI: systems that not only generate content but act on behalf of users. That future, he argues, demands tighter identity and access controls.
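As one hedged illustration of what tighter controls could look like, the sketch below scopes an agent to a deny-by-default tool allowlist derived from the calling user’s role. AgentPolicy, ROLE_GRANTS, and the tool names are hypothetical examples; in practice this enforcement would live in the identity provider and API gateway, not in application code alone.

```python
from dataclasses import dataclass, field

# Hypothetical role-to-tool grants; in production these would live in an
# identity provider or policy engine rather than in code.
ROLE_GRANTS = {
    "clinician": {"read_schedule", "draft_note"},
    "billing": {"read_invoice"},
}

@dataclass
class AgentPolicy:
    """Deny-by-default tool authorization for an agent acting on a user's behalf."""
    role: str
    allowed_tools: set = field(init=False)

    def __post_init__(self) -> None:
        # Unknown roles get an empty grant set: the agent can do nothing.
        self.allowed_tools = ROLE_GRANTS.get(self.role, set())

    def authorize(self, tool: str) -> bool:
        return tool in self.allowed_tools

policy = AgentPolicy(role="billing")
assert policy.authorize("read_invoice")
assert not policy.authorize("draft_note")  # the agent cannot act beyond its grant
```

The agent inherits the user’s entitlements rather than carrying its own elevated identity, which keeps an autonomous system from acting beyond what the person behind it could do.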
GenAI offers immense promise, but its potential will only be realized if it’s deployed securely. As Mike warned, “We need to build the firewalls now — not after we’ve already burned the house down.”