GenAI is transforming how enterprises operate, but it’s also surfacing long-standing security blind spots. In this edition of our Oversharing Perspective series, I sat down with Steve Januario, Deputy CIO at Bill.com, to talk about the messy reality of AI adoption: what happens when AI reveals risks you didn’t even know you had.
Bill.com moved quickly on GenAI, adopting tools like Glean for enterprise search, Gemini for productivity, and Copilot for engineering. What made Steve’s approach different was his openness about risk and his willingness to course-correct as issues arose.
Glean helped employees uncover long-buried documents, but that visibility created new liabilities. Sensitive files such as salary data and calibration reviews suddenly appeared in search.
As Steve noted, you cannot wait until everything is clean before deploying GenAI. The reality is that your environment will always be messy, so leaders need to solve issues as they surface rather than holding out for perfection.
GenAI did not create new problems; it magnified existing ones. Misconfigured JIRA boards, default-sharing settings, and exposed onboarding files all came to light once AI tools began indexing them.
Steve’s response was layered defense, not lockdown: default to least-privilege sharing, train users with hands-on demos, and deploy tools like Opsin to detect oversharing at scale. In his view, enterprises now need AI-powered safeguards to keep up with AI-driven risks.
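To make the oversharing-detection idea concrete, here is a minimal sketch of a permission audit: flag files whose names suggest sensitive content but whose sharing scope is broader than a restricted allow-list. Everything here is illustrative, including the file records, the sharing labels, and the keyword list; it is not Opsin’s actual logic, and a real deployment would pull permission metadata from the Google Drive or SharePoint admin APIs rather than a hardcoded list.

```python
# Hypothetical sketch: flag potentially overshared files from permission
# metadata. Records, sharing labels, and keywords are illustrative only.

SENSITIVE_KEYWORDS = ("salary", "calibration", "compensation", "review")

def is_overshared(record: dict) -> bool:
    """Flag a file when its name suggests sensitive content AND it is
    visible beyond an explicit allow-list (e.g. an org-wide link)."""
    name = record["name"].lower()
    sensitive = any(k in name for k in SENSITIVE_KEYWORDS)
    broad = record["sharing"] in ("anyone-with-link", "domain-wide")
    return sensitive and broad

def scan(records: list[dict]) -> list[str]:
    """Return the names of all flagged files."""
    return [r["name"] for r in records if is_overshared(r)]

if __name__ == "__main__":
    files = [
        {"name": "2023 Salary Bands.xlsx", "sharing": "domain-wide"},
        {"name": "Team Offsite Agenda", "sharing": "domain-wide"},
        {"name": "Calibration Review H2", "sharing": "restricted"},
    ]
    # Only the broadly shared salary file is flagged; the restricted
    # calibration review passes because its scope is already narrow.
    print(scan(files))
```

The point of even a toy version like this is the pairing Steve describes: least-privilege as the default, plus automated detection for the exceptions that slip through.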
Rather than banning tools, Steve emphasized enabling safe use. That means meeting users in their daily environments such as Slack, Google, and Microsoft while adding guardrails instead of gates. Heavy-handed “no” policies only drive shadow IT; the goal is to create safe pathways for adoption.
Looking ahead, Steve sees agentic AI as the next wave: systems that do not just generate content but also act on behalf of users. Adoption, however, will depend heavily on education.
He recommends staged training (Beginner → Intermediate → Advanced), embracing “citizen development” to unlock innovation, and even using AI itself to build adaptive learning programs.
GenAI is not a passing trend; it is a paradigm shift. But productivity and security do not have to conflict. With the right leadership, layered defenses, and ongoing education, enterprises can innovate at speed without losing control of their data.