
SAN FRANCISCO — March 13, 2026 | Opsin, an AI security and governance platform built to help enterprises scale AI safely, today announced support for Claude Enterprise, expanding its ability to secure and govern AI usage across the most widely adopted enterprise AI platforms.
With this launch, Opsin now provides AI risk governance, visibility, and remediation across:
This expansion allows security teams to manage AI risk across multiple AI models and cloud ecosystems from a single platform.
As enterprises rapidly deploy AI copilots, agents, and assistants across their organizations, security teams face a growing challenge: understanding how AI systems interact with enterprise data and where those interactions create risk.
“Enterprise AI adoption is accelerating across multiple platforms at the same time,” said James Pham, CEO and co-founder of Opsin. “Organizations are deploying Copilot, ChatGPT, Gemini, and Claude in parallel. Security teams need a way to understand where AI is interacting with sensitive data and govern that risk without slowing down innovation.”
Enterprise AI systems such as Claude Enterprise enable employees to summarize documents, analyze internal information, generate code, and interact with knowledge bases through natural language prompts.
While these capabilities drive productivity, they also introduce new security challenges.
AI systems often retrieve information dynamically from enterprise systems including document repositories, collaboration platforms, and internal knowledge bases. This creates potential risks such as:
Traditional security tools were designed for static data and predictable workflows. AI systems behave differently, retrieving and generating information dynamically in response to user prompts.
As a result, many organizations lack visibility into how AI systems are interacting with enterprise data.
Opsin helps organizations understand and govern enterprise AI risk by analyzing AI interactions with full context across users, prompts, data access, and responses.
With Claude Enterprise support, security teams can:
By providing visibility and actionable remediation, Opsin enables organizations to scale AI adoption safely without creating governance bottlenecks.
The addition of Claude Enterprise reinforces Opsin’s AI-agnostic approach to enterprise AI security.
Most enterprises are not standardizing on a single AI platform. Instead, they are adopting multiple AI systems across different cloud ecosystems and business units.
Opsin was designed to provide centralized governance across these environments, allowing security teams to manage AI risk consistently across Claude, Copilot, ChatGPT, Gemini, and internally built AI applications.
“AI is quickly becoming the interface for how employees access and interact with enterprise data,” said Pham. “To scale AI safely, organizations need visibility into how these systems actually behave. Our goal is to make AI risk clear and easy to manage so companies can move forward with AI confidently.”
Organizations deploying Claude Enterprise can learn more about securing the platform with Opsin here:
https://www.opsinsecurity.com/use-cases/claude-enterprise-security
Opsin provides security coverage across the Claude deployment models enterprises use most.
Supported environments include:
Coverage extends to the data Claude accesses, not just the interface employees use. Opsin maps what internal systems Claude can reach, which employees have access, and where sensitive data exposure exists across each deployment model. Learn more about AI governance.
Opsin helps organizations identify where Claude usage creates compliance gaps and maps their posture against relevant regulatory frameworks.
Compliance support includes:
Opsin helps organizations identify compliance gaps and build the governance controls needed before regulators ask questions. Learn more about GenAI security.
Opsin integrates with Claude via API to provide visibility into how employees use AI across the organization.
Monitoring capabilities include:
Opsin balances security visibility with privacy. Conversation content can be masked by default, with controlled access for authorized investigations. All access to monitoring data is logged for compliance and auditing.
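The masking-by-default pattern described above can be sketched in Python. This is an illustrative sketch only; the class, method, and field names are assumptions for the example, not Opsin's actual interface. Conversation content is masked unless the requester is authorized, and every access attempt, granted or denied, is appended to an audit log.

```python
from dataclasses import dataclass, field

@dataclass
class ConversationMonitor:
    """Illustrative sketch: masked-by-default conversation monitoring.

    Names here are hypothetical, not Opsin's real API.
    """
    _records: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)
    authorized: set = field(default_factory=set)

    def ingest(self, record_id: str, user: str, content: str) -> None:
        # Store the raw conversation internally; it is never exposed directly.
        self._records[record_id] = {"user": user, "content": content}

    def view(self, record_id: str, requester: str) -> dict:
        rec = self._records[record_id]
        granted = requester in self.authorized
        # Every access attempt is logged for compliance and auditing,
        # whether or not it is granted.
        self.audit_log.append({"who": requester, "record": record_id, "granted": granted})
        content = rec["content"] if granted else "[MASKED]"
        return {"user": rec["user"], "content": content}
```

In this sketch, an unauthorized requester still sees the record's metadata but receives `[MASKED]` in place of the conversation text, while an investigator added to the `authorized` set sees the plaintext; both reads land in the audit log.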
Learn more about AI Detection and Response.
Anthropic provides strong model-level safety controls, but enterprise security teams need a separate layer of visibility into how Claude is used across the organization.
Gaps Opsin fills that Anthropic’s controls do not address:
Anthropic secures the model. Opsin secures how your organization uses it. Together, they give enterprises the coverage needed to scale Claude confidently.
Opsin is designed to provide security visibility without treating employees as suspects. Monitoring is focused on data exposure and policy violations, not personal surveillance.
Privacy protections built into the platform:
Opsin gives security teams the signal they need to act on real risks without exposing employee activity unnecessarily. Learn more about AI Detection and Response.
Opsin delivers an initial Claude security risk assessment within 24 hours of connecting your environment.
The assessment process includes:
Unlike manual audits or employee self-reporting, Opsin provides automated visibility into how Claude is actually used across the enterprise. This allows security teams to identify oversharing risks quickly and deploy governance controls before scaling AI adoption.
Custom Claude projects allow employees to build AI workflows using prompts, internal knowledge sources, and integrated tools. These projects help teams automate work and analyze internal data more efficiently.
Security risks from Claude Projects:
Projects represent a form of citizen development: employees building tools without IT involvement. While Claude Projects accelerate productivity, they also create new governance challenges. Opsin discovers these projects, maps their data connections, and assesses their risk so organizations can govern them safely.
Learn more about agentic AI security.
Claude can access internal documents, proprietary data, and employee prompts to generate answers and summaries. While this improves productivity, it can also expose sensitive information if permissions are overly broad or employees paste confidential data or upload sensitive files into prompts. Security teams often lack visibility into these interactions, making it difficult to detect oversharing, data exposure, or policy violations as Claude adoption grows.
What Claude Enterprise does not provide:
To safely scale Claude across the enterprise, organizations need a security layer that provides visibility into AI interactions, detects policy violations, and governs custom AI projects. That’s where Opsin helps.