Securing Generative AI in the Enterprise: Expert Insights from a 30-Year IT Veteran

Industry Insights
Webcast

How CIOs Can Balance Innovation with Security When Deploying AI Applications

Introduction

As generative AI transforms business operations, CIOs face a critical challenge: harnessing AI’s power while maintaining robust security. Karl Moskofian, former CIO at Gainsight with over 30 years of IT experience, shares expert insights on navigating this complex landscape.

From Security Lockdown to Strategic AI Adoption

Early AI Security Concerns

When ChatGPT emerged, many organizations adopted defensive strategies. “There were companies saying, ‘we’re just shutting the door. We’re not allowing people to touch it at all,’” Moskofian recalls. Initial concerns included:

  • Data Training Risks: AI models training on user data without consent
  • Data Leakage: Security vulnerabilities in web interfaces
  • Unknown Threats: Limited understanding of security implications

The Evolution

Forward-thinking leaders recognized AI’s potential early. Within six months, providers like OpenAI addressed major security concerns through enhanced data protection, improved privacy controls, and clear contractual commitments.

The Hidden Danger: Data Oversharing in AI Environments

The Core Problem

“We’ve all got overshared data in our environment,” Moskofian emphasizes. “AI isn’t creating a new problem; it’s exploding the severity of an existing one.”

Before AI-powered search, overshared files were difficult to discover. Now, AI finds everything instantly, including improperly shared sensitive information.

Real-World Impact

Moskofian shares a compelling example: “Someone shared salary information with the ‘anyone with the link can view’ setting. When enterprise AI search was implemented at Gainsight, employees began reporting: ‘I’m seeing this in my search results. Pretty sure I’m not supposed to be seeing this.’”

Building Effective AI Security Strategy

1. Proactive Data Governance

Key Actions:

  • Audit existing file sharing permissions across platforms
  • Implement regular data access reviews
  • Establish clear data classification standards
  • Monitor unusual sharing patterns
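The first of these actions can be scripted against whatever permissions report your file-sharing platform exports. The sketch below is illustrative: the record fields (`name`, `scope`, `owner`) and the keyword list are assumptions, not any specific platform's API, so adapt them to your own export format.

```python
# Hedged sketch: flag broadly shared files in an exported permissions report.
# Record fields and keywords below are hypothetical examples -- adapt to your
# platform's export (Google Drive, SharePoint, Box, etc.).

RISKY_SCOPES = {"anyone_with_link", "public"}
SENSITIVE_KEYWORDS = ("salary", "compensation", "ssn", "offer letter")

def audit_sharing(records):
    """Return broadly shared records, marking likely-sensitive filenames."""
    findings = []
    for rec in records:
        if rec["scope"] in RISKY_SCOPES:
            name = rec["name"].lower()
            sensitive = any(k in name for k in SENSITIVE_KEYWORDS)
            findings.append({**rec, "likely_sensitive": sensitive})
    # Surface likely-sensitive files first so reviewers triage them early.
    findings.sort(key=lambda f: not f["likely_sensitive"])
    return findings

if __name__ == "__main__":
    sample = [
        {"name": "2024 Salary Bands.xlsx", "scope": "anyone_with_link",
         "owner": "hr@example.com"},
        {"name": "Team Offsite Photos", "scope": "domain",
         "owner": "ops@example.com"},
        {"name": "Roadmap Draft", "scope": "public",
         "owner": "pm@example.com"},
    ]
    for f in audit_sharing(sample):
        print(f["name"], f["scope"],
              "SENSITIVE" if f["likely_sensitive"] else "")
```

Keyword matching on filenames is only a triage heuristic; a real review still needs content-level classification and a human in the loop.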

2. Risk-Based AI Adoption

Trust but Verify Approach:

  • Evaluate vendor security commitments
  • Configure privacy settings properly
  • Establish clear data handling agreements
  • Conduct regular security assessments

3. Department-Specific User Education

Generic training fails because it lacks relevance. Customize approaches:

  • HR Teams: Focus on compliance and legal obligations
  • Professional Services: Emphasize customer data protection requirements
  • Engineering: Address intellectual property and code security

Explain the “why” behind policies: legal implications, business impacts, and personal accountability.

Managing Custom AI Applications

The Challenge

AI’s natural language accessibility democratizes development but creates governance challenges:

  • Benefits: Rapid innovation, reduced IT bottlenecks, business-driven solutions
  • Risks: Uncontrolled data access, lack of security reviews, hidden exposure pathways

The Solution

Implement structured processes:

  • Sandbox environments for unrestricted experimentation
  • Approval gates before production deployment
  • Security reviews for shared applications
  • Clear guidelines for data access and sharing
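The approval-gate step above can be made concrete as a simple promotion check. This is a minimal sketch under assumed field names (`security_review`, `data_classes`, `dlp_signoff`), not any vendor's workflow API; the rules are placeholders for your own policy.

```python
# Hedged sketch: a minimal sandbox-to-production gate for custom AI apps.
# All field names and rules here are illustrative assumptions.

REQUIRED_CHECKS = ("security_review", "data_access_documented", "owner_assigned")

def promotion_blockers(app):
    """Return unmet requirements; an empty list means OK to promote."""
    blockers = [check for check in REQUIRED_CHECKS if not app.get(check)]
    # Apps touching restricted data also need an explicit DLP sign-off.
    if "restricted" in app.get("data_classes", []) and not app.get("dlp_signoff"):
        blockers.append("dlp_signoff")
    return blockers

if __name__ == "__main__":
    app = {
        "name": "sales-notes-summarizer",
        "security_review": True,
        "data_access_documented": True,
        "owner_assigned": True,
        "data_classes": ["restricted"],
    }
    print(promotion_blockers(app))
```

The point of encoding the gate this way is that experimentation in the sandbox stays unrestricted, while promotion becomes a deterministic, auditable check rather than an ad hoc conversation.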

Comprehensive AI Policy Framework

Learning from Cloud Adoption

AI adoption follows a similar pattern: high risk/low value → risk reduction/value increase → mature deployment

Policy Recommendations

  • Enterprise AI Services: Trust established vendors (Google, Microsoft) with proper configurations
  • External AI Tools: Clear data guidelines and approval processes
  • Custom Development: Mandatory security reviews and sandbox-to-production workflows

Future Outlook: The AI Hype Cycle

Drawing from decades of experience, Moskofian predicts AI will follow the traditional technology hype cycle:

  1. Peak of Inflated Expectations (Current)
  2. Trough of Disillusionment (Emerging)
  3. Slope of Enlightenment (Future realistic adoption)
  4. Plateau of Productivity (Mature implementation)

Human-AI Collaboration

AI will augment, not replace, human capabilities:

  • Engineering: AI assists with code, humans provide creativity
  • Customer Success: AI analyzes data, humans build relationships
  • Decision Making: AI provides insights, humans provide judgment

Key Implementation Steps

Immediate Actions (Months 1-3)

  • Audit current data sharing practices
  • Implement basic AI usage policies
  • Begin department-specific training
  • Establish vendor evaluation criteria

Governance Phase (Months 4-6)

  • Deploy data loss prevention tools
  • Create approval workflows for custom AI
  • Implement regular access reviews
  • Develop AI incident response procedures
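To make the DLP step less abstract, here is a lightweight pre-send check in the spirit of DLP tooling. Commercial DLP products are far more sophisticated; the regex patterns below are simplified examples, not production-grade detectors.

```python
import re

# Hedged sketch: scan text bound for an AI tool for obviously sensitive
# patterns. These regexes are simplified illustrations only.

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_prompt(text):
    """Return names of sensitive patterns found in outbound text."""
    return sorted(name for name, rx in PATTERNS.items() if rx.search(text))

if __name__ == "__main__":
    hits = scan_prompt("Summarize: employee 123-45-6789 requested a raise")
    print(hits)
```

A check like this can sit in a browser extension, proxy, or API gateway; the real value is blocking or warning before the data leaves your environment, not after.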

Essential Takeaways for CIOs

  1. Security as Enabler: Frame AI security as enabling innovation, not blocking it
  2. Proactive Governance: Address data oversharing before implementing AI tools
  3. Balanced Assessment: Trust established vendors while maintaining oversight
  4. Targeted Training: Provide relevant, department-specific education
  5. Structured Innovation: Enable experimentation within controlled environments

Conclusion

Successfully deploying enterprise AI requires balancing innovation with security. As Moskofian emphasizes: “We’ve got to find a way to securely and safely enable our companies to really get all the value that’s sitting in front of us with this technology.”

The key is moving beyond fear-based policies to frameworks that enable responsible innovation. Organizations that master this balance will turn security from a barrier into a competitive advantage in the AI-driven economy.

About the Author

Oz Wasserman is the Founder of Opsin, with over 15 years of cybersecurity experience focused on security engineering, data security, governance, and product development. He has held key roles at Abnormal Security, FireEye, and Reco.AI, and has a strong background in security engineering from his military service.
