
Oz Wasserman (CPO and Co-Founder, Opsin) was a contributing author on the OWASP GenAI Data Security Risks and Mitigations 2026, bringing his experience helping leading enterprise security teams navigate AI adoption. Here are his top takeaways.
The OWASP GenAI Data Security Risks and Mitigations 2026 (v1.0) is a comprehensive framework covering 21 distinct risk categories for Large Language Models (LLMs), Generative AI, and Agentic AI systems. Published in March 2026, it is not a replacement for the OWASP Top 10 — it goes deeper, focusing specifically on how AI pipelines create new data exposure surfaces that traditional security frameworks were never designed to address.
Each of the 21 risk entries (DSGAI01–DSGAI21) includes attack scenarios, attacker capability profiles, real-world CVEs, and a tiered mitigation structure organized into Foundational, Hardening, and Advanced tiers — a crawl-walk-run approach designed to meet organizations wherever they are in their AI security maturity.
As a contributing author on the guide, I want to walk through what I consider the most important insights for security and product teams building or deploying AI today.
Before getting into specific risks, there is one architectural reality that underpins nearly every entry in the guide: the GenAI context window has no internal access controls.
When a model processes a request, it pulls together data from multiple sources — system prompts, user input, RAG-retrieved documents, tool outputs, conversation history — and collapses them into a single flat namespace. A sensitive HR document retrieved via RAG sits alongside a casual user query with equal trust weight. There is no native mechanism to mark data as "available for reasoning but not for direct output."
This is a fundamental shift from every prior computing model, and it is the reason why so many of the risks in the OWASP guide cannot be solved by securing the model itself. They require controls at every point where data enters and exits the AI pipeline.
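To make the flattening concrete, here is an illustrative sketch (not from the guide) of how a typical pipeline assembles a context window. All names are hypothetical; the point is where the sensitivity metadata disappears.

```python
# Illustrative sketch: a typical pipeline flattens sources of very different
# sensitivity into one undifferentiated context string.

def build_context(system_prompt, rag_docs, history, user_input):
    """Concatenate all sources into the single flat namespace the model sees.

    Note what is lost: rag_docs may carry per-document sensitivity labels,
    but the assembled string carries no per-segment trust or
    access-control metadata.
    """
    parts = [system_prompt]
    parts += [doc["text"] for doc in rag_docs]   # sensitivity labels dropped here
    parts += [turn["text"] for turn in history]
    parts.append(user_input)
    return "\n\n".join(parts)                    # one flat string, equal trust weight

context = build_context(
    system_prompt="You are a helpful assistant.",
    rag_docs=[{"text": "[CONFIDENTIAL] HR compensation summary...",
               "label": "restricted"}],          # label never reaches the model
    history=[],
    user_input="Summarize what you know about salaries.",
)
```

Once `build_context` returns, nothing downstream can distinguish the HR document from the casual user query, which is exactly the flat-namespace problem described above.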
The guide structures risks to follow data as it moves through a GenAI system:
Direct Exposure Risks
Pipeline Integrity Risks
Governance and Compliance Risks
GenAI-Specific Attack Surfaces
Operational Infrastructure Risks
Model as Data Artifact
The guide is consistent on this point: organizations need a continuous, accurate inventory of every AI asset — training datasets, vector stores, RAG sources, agent memory, prompt logs, and tool integrations — before controls can be meaningfully applied. The DSPM section of the report frames this as the foundational capability everything else depends on.
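One way to picture that foundational capability is a minimal asset registry keyed on the asset types the guide lists. This is a hedged sketch, not the report's prescribed design; `AIAsset` and `Inventory` are illustrative names.

```python
# Minimal sketch of a continuous AI asset inventory, covering the asset
# types the guide enumerates. All class and field names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

ASSET_TYPES = {"training_dataset", "vector_store", "rag_source",
               "agent_memory", "prompt_log", "tool_integration"}

@dataclass
class AIAsset:
    name: str
    asset_type: str          # one of ASSET_TYPES
    owner: str
    classification: str      # e.g. "public", "internal", "restricted"
    last_seen: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class Inventory:
    def __init__(self):
        self._assets = {}

    def register(self, asset: AIAsset):
        if asset.asset_type not in ASSET_TYPES:
            raise ValueError(f"unknown asset type: {asset.asset_type}")
        self._assets[asset.name] = asset

    def unclassified(self):
        # Controls can only be applied to assets the inventory knows about;
        # any asset without a classification is a visible gap.
        return [a for a in self._assets.values() if not a.classification]

inv = Inventory()
inv.register(AIAsset("hr-embeddings", "vector_store", "sec-team", "restricted"))
```

The useful property is the `unclassified()` query: the inventory surfaces governance gaps instead of silently omitting unknown assets.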
The report identifies a specific and recurring failure pattern: classification labels, retention schedules, and erasure obligations that stop at the raw data layer provide no protection once AI pipeline processing begins. DSGAI07 documents how this propagation gap causes erasure obligations to fail — when a source record is deleted, derived embeddings, fine-tuning artifacts, and cached retrievals may persist and continue surfacing that data. The report is clear that without data-to-model lineage, machine unlearning and targeted retraining cannot even be scoped, let alone executed.
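The scoping problem can be sketched in a few lines. This is a hypothetical lineage map, not anything the report ships: the idea is that without a source-to-artifact mapping like this, an erasure request has no computable scope.

```python
# Hypothetical sketch of data-to-model lineage: deleting a source record
# must also locate derived embeddings, fine-tuning artifacts, and cached
# retrievals. All identifiers below are illustrative.
from collections import defaultdict

class LineageGraph:
    def __init__(self):
        # source record id -> set of derived artifact ids
        self._derived = defaultdict(set)

    def record_derivation(self, source_id, artifact_id):
        self._derived[source_id].add(artifact_id)

    def erasure_scope(self, source_id):
        """Everything that must be deleted or unlearned along with the source."""
        return self._derived.pop(source_id, set())

lineage = LineageGraph()
lineage.record_derivation("employee:42", "embedding:vec-981")
lineage.record_derivation("employee:42", "finetune:ckpt-7")
lineage.record_derivation("employee:42", "cache:retr-3c")

# On a deletion request for employee:42, the scope is now computable:
to_erase = lineage.erasure_scope("employee:42")
```

If the derivations were never recorded, `erasure_scope` returns an empty set and the derived artifacts silently survive the deletion, which is the propagation gap DSGAI07 describes.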
The credential and identity risks described in DSGAI02 and DSGAI06 are not addressed by traditional IAM alone. The report specifically identifies the OAuth architectural mismatch — three-legged consent flows designed for human delegation being applied to autonomous agents — as an exploitable structural property. The mitigations the report recommends include per-agent identity issuance, task-scoped credentials that expire at task completion, and machine-to-machine credential patterns that don't depend on human delegation events.
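The mitigations above can be sketched as a task-scoped credential lifecycle, in contrast to a long-lived delegated OAuth token. This is purely illustrative: `issue_credential` and `complete_task` are hypothetical names, not a real library's API.

```python
# Sketch of per-agent, task-scoped credentials: each agent has its own
# identity, scopes are narrowed to one task, and the token dies when the
# task completes rather than when a human session ends. Names are illustrative.
import secrets
from dataclasses import dataclass

@dataclass
class TaskCredential:
    agent_id: str        # per-agent identity, not a delegated human identity
    task_id: str
    scopes: tuple        # narrowed to what this one task needs
    token: str
    active: bool = True

def issue_credential(agent_id, task_id, scopes):
    return TaskCredential(agent_id, task_id, tuple(scopes),
                          secrets.token_urlsafe(32))

def complete_task(cred):
    # Credential lifetime is bound to the task, not to a user consent flow.
    cred.active = False

cred = issue_credential("agent-billing-01", "task-8842", ["invoices:read"])
# ... agent performs the task, authenticating with cred.token ...
complete_task(cred)
```

The design choice worth noting is that expiry is an event (task completion), not a duration, so a stalled or hijacked agent cannot keep replaying a still-valid token.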
DSGAI08 explicitly cites EU AI Act Article 10 training data governance requirements entering force in August 2026 and describes the data lineage, classification, quality documentation, and bias evaluation controls required. The report frames the deadline as a driver for accelerating the governance controls it describes throughout — not a standalone compliance exercise.
The enterprise security conversations over the next six months will likely be full of compliance anxiety. Our view is simpler. The enterprises that treat AI data governance as a security discipline rather than a legal checkbox will be the ones that scale AI confidently and stay ahead of whatever comes next. The goal was never just to be compliant. It was always to be in control.