CISO Blog

Governing AI Risk in Healthcare

April 29, 2026

On the 39th episode of Enterprise AI Defenders, hosts Evan Reiser, CEO and co-founder of Abnormal AI, and Mike Britton, CIO of Abnormal AI, sit down with Mark Ballister, CISO at Montefiore Health System. Mark runs security for an academic medical center with more than ten hospitals, 50,000 employees, and 10,000 physicians, and he is clear about the assignment: the goal is not a world-class security organization; it is a really good security organization that supports a world-class hospital. That framing reorders everything, from how his team gates AI tools to how it defends equipment that cannot be patched.

That mindset shows up first in governance. Montefiore’s AI governance council recently moved to version 2.0, pulling in more cross-functional reviewers, and the default answer is now "yes, with controls" rather than "no." Requesters are walked through financial, security, and privacy risk together. A web-gateway splash page warns anyone heading to an unapproved AI site, asks them not to enter PII, and gives the team time to map shadow usage before a hard block lands. The council also added an ROI test because, as Mark puts it, "the keyword is think": people assume an AI tool will help, and the cost-benefit math does not always back that up. App rationalization closes the loop: Montefiore is an Epic shop, and Epic already ships plenty of AI capabilities of its own.
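To make the warn-first pattern concrete, here is a minimal sketch in Python. The domain lists, log path, and function name are hypothetical; in a real deployment this logic lives in the secure web gateway's policy engine, not a standalone script.

```python
# Sketch of a "warn, don't block" web-gateway rule for unapproved AI sites.
# Domain lists and log path are invented for illustration.
import logging
from urllib.parse import urlparse

APPROVED_AI = {"approved-ai.example.com"}              # tools cleared by the governance council
KNOWN_AI = {"chat.example-ai.com", "llm.example.net"}  # recognized but unapproved AI sites

logging.basicConfig(filename="shadow_ai_usage.log", level=logging.INFO)

SPLASH = (
    "This AI tool has not been approved by the AI governance council. "
    "Do not enter PII or PHI. Continue at your own risk."
)

def gate(url: str, user: str) -> str:
    """Return 'allow', 'warn', or 'allow-silent', logging shadow usage as we go."""
    host = urlparse(url).hostname or ""
    if host in APPROVED_AI:
        return "allow"
    if host in KNOWN_AI:
        # Warn instead of hard-blocking, and record who is using what,
        # so the team can map shadow usage before any block lands.
        logging.info("shadow-AI hit: user=%s host=%s", user, host)
        return "warn"          # the gateway serves the SPLASH page here
    return "allow-silent"      # not an AI site; normal policy applies

if __name__ == "__main__":
    print(gate("https://chat.example-ai.com/new", "jdoe"), "->", SPLASH)
```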

On the threat side, Mark flags an exposure most teams overlook: the work-versus-web toggle in some AI tools. In work mode, the data stays inside the enterprise guardrails. In web mode, "you are no longer protected," and what gets typed in becomes searchable and indexable just like any other consumer query. Mark is pragmatic about why that matters: threat actors run their own AI tools, harvesting whatever leaks across that gap. The defender's instinct that "normal people are safe" does not apply when adversaries have access to the same overlays, the same models, and a purpose-built set of tools tuned for exfiltration.
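A defensive corollary is screening prompts before they leave work mode at all. Below is a minimal sketch; the regex patterns and function name are illustrative stand-ins for what a real DLP engine would do far more rigorously.

```python
# Sketch: refuse web-mode AI queries that look like they contain PII/PHI.
# Patterns are illustrative, not production-grade detection.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),  # medical record number
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def safe_for_web_mode(prompt: str) -> tuple[bool, list[str]]:
    """In work mode, data stays inside enterprise guardrails; in web mode it
    becomes an ordinary consumer query, so screen it before it leaves."""
    hits = [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]
    return (not hits, hits)

if __name__ == "__main__":
    ok, hits = safe_for_web_mode("Summarize the chart for MRN: 00123456")
    print(ok, hits)  # False ['mrn']
```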

The most striking moment in the conversation is what is happening inside Mark's own team. A small group built a custom model that ingests the non-functional requirements that come into architecture review and produces a risk evaluation in return. The build came in at roughly 22,000 lines of AI-generated code, and after static and dynamic code analysis the team needed only minor tweaks. That is the AI workforce shift landing inside the security function before it lands anywhere else. The cultural arc is uneven, with some engineers excited and others spooked by recent industry layoffs framed as automation, but the practical result is a security team that models the same governance discipline it applies to the rest of the business.
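The episode does not describe how the team's model was built, but the general shape of such a pipeline is easy to sketch. In the Python sketch below, a trivial keyword heuristic stands in for the custom model, and the requirement strings are invented.

```python
# Sketch of an NFR-to-risk-evaluation pipeline. `score_with_model` is a stub
# standing in for whatever custom model the team actually built.
from dataclasses import dataclass

@dataclass
class RiskFinding:
    requirement: str
    risk: str        # "low" | "medium" | "high"
    rationale: str

def score_with_model(requirement: str) -> RiskFinding:
    # Stand-in for the model: a trivial keyword heuristic for illustration.
    risky = ("public internet", "phi", "third-party", "no encryption")
    hits = [k for k in risky if k in requirement.lower()]
    level = "high" if hits else "low"
    return RiskFinding(requirement, level, f"matched: {hits or 'none'}")

def evaluate_architecture_review(nfrs: list[str]) -> list[RiskFinding]:
    """Ingest the non-functional requirements that come into architecture
    review and return a risk evaluation for each."""
    return [score_with_model(nfr) for nfr in nfrs]

if __name__ == "__main__":
    for f in evaluate_architecture_review(
        ["Service must expose PHI dashboards to third-party vendors",
         "Batch jobs run nightly inside the data center"]):
        print(f.risk, "-", f.requirement)
```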

Not every problem has a clean fix. Mark got his start in healthcare in 2018 at a non-profit hospital where an MRI machine was still running Windows 2000 and a $600,000 replacement was not a check the hospital could write. The answer was isolation, additional monitoring, and a canary-in-the-coal-mine pattern that flagged anyone touching the machine in a way they should not. Compromise was not ruled out; blast radius was. That muscle is still core to how Mark operates. One of his first moves at Montefiore was bringing in a third-party SOC and EDR partner so the team could detect and react faster across a fleet that cannot always be patched on a normal cycle.
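The canary pattern itself translates cleanly into code. Here is a minimal sketch, assuming the unpatchable device sits on an isolated segment where a hypothetical port and source allowlist define what "normal" looks like; any other touch is the alarm.

```python
# Sketch of a canary listener for an isolated, unpatchable device segment.
# Nothing legitimate should connect here, so any unexpected touch is an alert.
# The port and allowlist are hypothetical.
import logging
import socket

logging.basicConfig(level=logging.WARNING, format="%(asctime)s %(message)s")

ALLOWED_SOURCES = {"10.20.0.5"}  # e.g., the one workstation allowed to talk to the MRI
CANARY_PORT = 8445               # an otherwise-unused port; low ports need privileges

def run_canary(port: int = CANARY_PORT) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen()
    while True:
        conn, (src, _) = srv.accept()
        conn.close()
        if src not in ALLOWED_SOURCES:
            # Compromise is not ruled out; blast radius is. This is the early
            # warning that someone touched the segment at all.
            logging.warning("CANARY: unexpected connection from %s on port %d", src, port)

if __name__ == "__main__":
    run_canary()
```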

The line that anchors the whole episode is Mark's reframe of agentic AI. Tools like Claude Code and Claude Cowork are designed to be helpful, which means they will reach out to the web on a user's behalf and can spill data unintentionally if no one tells them not to. His frame: agentic AI is "another worker doing something that it shouldn't do" if no one sets the scope. The next era of the CISO role is treating AI agents like new employees who need identity, scope, and accountability before they get the keys.
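One way to cash out "identity, scope, and accountability" in code is an explicit per-agent tool allowlist plus an audit log of every action. A minimal sketch follows; the agent and tool names are hypothetical.

```python
# Sketch: treat an AI agent like a new employee, with its own identity,
# an explicit scope (tool allowlist), and an audit trail for accountability.
# Agent and tool names are invented for illustration.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

class ScopedAgent:
    def __init__(self, identity: str, allowed_tools: set[str]):
        self.identity = identity            # who is acting
        self.allowed_tools = allowed_tools  # what it may do

    def invoke(self, tool: str, **kwargs) -> None:
        if tool not in self.allowed_tools:
            logging.info("%s DENIED tool=%s args=%s", self.identity, tool, kwargs)
            raise PermissionError(f"{self.identity} is not scoped for {tool}")
        # Every allowed action is logged, too: that is the accountability half.
        logging.info("%s ALLOWED tool=%s args=%s", self.identity, tool, kwargs)
        # ... dispatch to the real tool here ...

if __name__ == "__main__":
    agent = ScopedAgent("agent:nfr-review-bot", {"read_repo", "write_report"})
    agent.invoke("read_repo", path="nfrs/")
    try:
        agent.invoke("web_fetch", url="https://example.com")  # web egress not in scope
    except PermissionError as e:
        print(e)
```

The design choice mirrors Mark's point: a helpful-by-default agent reaches for the web unless someone bounds it, so deny-by-default scope plus a log of every attempt is the same discipline you would apply to a new hire's badge access.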

Listen to Mark's episode here and read the transcript here.