GenAI is transforming how teams across every industry operate — and global sports is no exception. In this episode of the Oversharing webcast, we sat down with Paul Sheth, CISO and Head of IT & Development at WTA (Women’s Tennis Association), to explore how GenAI is being rolled out across a distributed, multi-stakeholder organization — while keeping data exposure and shadow AI in check.
WTA sees GenAI adoption touching nearly every workflow — from refining internal communications to simplifying travel and scheduling. Copilot and other LLM tools act as virtual assistants that improve productivity across departments.
But with that value comes risk: for GenAI to be useful, it needs access to data. And that brings us to the problem of oversharing.
Paul didn’t mince words: oversharing is the most immediate risk for organizations adopting GenAI. Because large language models ingest whatever data they can access — structured or unstructured — anything exposed in SharePoint, OneDrive, or team folders is fair game.
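To make that exposure concrete, here is a minimal, hypothetical sketch of the kind of audit a team could run against Microsoft Graph to flag OneDrive or SharePoint files shared through broad links. The token handling, environment variables (GRAPH_TOKEN, DRIVE_ID), and scope of the walk are assumptions for illustration only, not anything described in the webcast.

```python
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumes an app registration with Files.Read.All and a bearer token
# already acquired (e.g. via MSAL) and exported as GRAPH_TOKEN.
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}

def broadly_shared(drive_id: str, item_id: str) -> bool:
    """Return True if the item has an anonymous or organization-wide sharing link."""
    url = f"{GRAPH}/drives/{drive_id}/items/{item_id}/permissions"
    perms = requests.get(url, headers=HEADERS, timeout=30).json().get("value", [])
    return any(
        p.get("link", {}).get("scope") in ("anonymous", "organization")
        for p in perms
    )

def audit_drive(drive_id: str) -> None:
    """Walk the root folder of a drive and report items shared with broad links."""
    url = f"{GRAPH}/drives/{drive_id}/root/children"
    while url:
        page = requests.get(url, headers=HEADERS, timeout=30).json()
        for item in page.get("value", []):
            if broadly_shared(drive_id, item["id"]):
                print(f"Broadly shared: {item['name']}")
        url = page.get("@odata.nextLink")  # follow pagination, if any

if __name__ == "__main__":
    audit_drive(os.environ["DRIVE_ID"])  # hypothetical drive ID supplied by the operator
```

In practice, a check like this would be scoped to whatever sites and drives the GenAI assistant can actually reach, with the results fed into an existing access-review process rather than printed to a console.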
To stay ahead of risk, Paul recommends a layered strategy. As part of it, WTA’s security team is developing controls to monitor and detect oversharing and shadow AI use across the organization.
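As one hypothetical example of what such a detection control might look like (the domain list, log format, and file name below are assumptions for the sketch, not WTA’s actual tooling), a simple script could flag traffic to unsanctioned GenAI services in a proxy log export:

```python
import csv
from collections import Counter

# Hypothetical illustration only: the domains, column names, and file name
# are assumptions for this example, not a description of WTA's controls.
UNSANCTIONED_GENAI = {"chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com"}

def shadow_ai_hits(log_path: str) -> Counter:
    """Count per-user requests to unsanctioned GenAI services in a proxy CSV export."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects "user" and "host" columns
            if row["host"].lower() in UNSANCTIONED_GENAI:
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    for user, count in shadow_ai_hits("proxy_export.csv").most_common():
        print(f"{user}: {count} requests to unsanctioned GenAI tools")
```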
Paul’s advice to security leaders is clear: “If you try to block GenAI, users will just go around you.” Instead, lean into education, shared accountability, and composable controls that adapt over time. This isn’t about enforcing with fear. It’s about building confidence with care.
Ready to dive into the full discussion? Watch the webcast at the top of this page.