GenAI is transforming how enterprises operate — but without the right safeguards, it’s also exposing sensitive data in ways many organizations aren’t prepared for. In the first edition of our Oversharing Perspective series, I sat down with Mike D’Arezzo, Executive Director of Information Security and GRC at Wellstar Health, to talk about one of the most pressing challenges in AI adoption: accidental data exposure.
For healthcare organizations like Wellstar, the stakes are especially high. HIPAA compliance and patient trust depend on protecting sensitive data — including PHI — from unintentional leaks. But as Mike emphasized, oversharing isn’t just a healthcare issue. Any organization using Microsoft Copilot, ChatGPT, or other GenAI platforms faces similar risks.
“My biggest concern is data — how it gets into GenAI tools, who can access it, and whether we can pull it back if it leaks.”
— Mike D’Arezzo
The conversation made one thing clear: deploying GenAI without purpose creates unnecessary risk. To mitigate oversharing, Mike recommends:
This isn’t about blocking AI — it’s about enabling safe, scalable, and compliant adoption.
Mike shared specific examples of how oversharing happens:
Even when unintentional, these incidents can lead to severe consequences, from compliance violations to reputational damage.
“Every Microsoft tool is built for collaboration by default, and that makes oversharing way too easy.”
— Mike D’Arezzo
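That default-open posture is also something you can measure. As a minimal sketch (not Wellstar's tooling or an official Microsoft sample), the Python below lists a SharePoint/OneDrive file's sharing links via the Microsoft Graph API and flags the broad scopes that make oversharing easy. It assumes you already hold a Graph access token with file-read permissions; ACCESS_TOKEN, DRIVE_ID, and ITEM_ID are placeholders.

```python
# Sketch: flag a file's overly broad sharing links via Microsoft Graph.
# ACCESS_TOKEN, DRIVE_ID, and ITEM_ID are placeholders you must supply.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "..."      # acquire via MSAL or another OAuth flow
DRIVE_ID = "<drive-id>"
ITEM_ID = "<item-id>"

def list_broad_permissions(drive_id: str, item_id: str) -> list[dict]:
    """Return sharing permissions whose scope reaches beyond named users."""
    resp = requests.get(
        f"{GRAPH}/drives/{drive_id}/items/{item_id}/permissions",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    permissions = resp.json().get("value", [])
    # "anonymous" = anyone with the link; "organization" = everyone in tenant.
    return [
        p for p in permissions
        if p.get("link", {}).get("scope") in ("anonymous", "organization")
    ]

if __name__ == "__main__":
    for perm in list_broad_permissions(DRIVE_ID, ITEM_ID):
        link = perm["link"]
        print(link["scope"], link.get("type", "?"), link.get("webUrl", ""))
```

Links scoped to "anonymous" or "organization" are exactly the ones a tenant-wide GenAI index will happily surface to users who were never meant to see the underlying file.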
Mike’s philosophy is that GenAI security must be cross-functional. His framework includes:
“Get people involved, especially the ones who challenge you.”
— Mike D’Arezzo
Mike sees a shift toward agentic AI: systems that not only generate content but act on behalf of users. That future demands tighter identity and access controls. According to Mike:
“You and I could ask Copilot the same question and get different answers, because of our roles.”
— Mike D’Arezzo
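Role-aware answers like the one Mike describes come from security-trimming retrieval before the model ever sees a document. The sketch below is a hypothetical illustration of that pattern, not how Copilot is actually implemented; Document, CORPUS, and the role labels are all invented for the example.

```python
# Toy illustration of permission-trimmed retrieval: the assistant only
# builds its prompt from documents the caller is entitled to read, so two
# users asking the same question can get different answers.
# All names and data here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    title: str
    content: str
    allowed_roles: frozenset[str]

CORPUS = [
    Document("Patient readmission report", "PHI-heavy analysis ...",
             frozenset({"clinician", "compliance"})),
    Document("Visitor parking policy", "Garage B is open 24/7 ...",
             frozenset({"clinician", "compliance", "staff"})),
]

def retrieve(query: str, user_roles: frozenset[str]) -> list[Document]:
    """Keyword match, but only over documents the user can already read.

    Trimming here, before prompt assembly, is the control that matters:
    the model cannot overshare text that never enters its context window.
    """
    terms = query.lower().split()
    return [
        d for d in CORPUS
        if d.allowed_roles & user_roles
        and any(t in (d.title + " " + d.content).lower() for t in terms)
    ]

def build_prompt(query: str, user_roles: frozenset[str]) -> str:
    docs = retrieve(query, user_roles)
    context = "\n".join(f"- {d.title}: {d.content}" for d in docs) or "- (none)"
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    question = "what does the readmission report say?"
    # Same question, different roles, different context, different answers.
    print(build_prompt(question, frozenset({"clinician"})))
    print(build_prompt(question, frozenset({"staff"})))
```

The design choice worth noting: access control happens at retrieval time, not in the prompt or in post-filtering, which is why the staff user's prompt contains no trace of the PHI-bearing report.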
GenAI offers immense promise, but its potential will only be realized if it’s deployed securely. As Mike warned, “We need to build the firewalls now — not after we’ve already burned the house down.”