
On the 39th episode of Enterprise AI Defenders, hosts Evan Reiser (CEO and co-founder, Abnormal AI) and Mike Britton (CIO, Abnormal AI) sit down with Mark Ballister, CISO at Montefiore Health System, to discuss governing AI risk in a hospital system. Mark shares how his team flipped the default from "no" to "yes, with controls," why work-versus-web toggles are a quiet exposure point, and how his own security team produced 22,000 lines of AI-generated code for an internal risk-evaluation model.
Quick Hits from Mark:
On the AI governance posture: "We don't look to say no. We look to say yes, as long as we can put controls around it."
On the Microsoft Copilot work-versus-web toggle: "By just clicking that button that says 'web,' you are no longer protected."
On bringing AI inside the security team: "It wrote all…22,000 lines of code."
Book Recommendation: The One Minute Manager by Ken Blanchard and Spencer Johnson
Like what you hear? Leave us a review and subscribe to the show on Apple, Spotify, and YouTube.
Enterprise AI Defenders is a show where top security executives share specific ways AI changes the threat landscape and the defenses that hold up in real environments.
Find more great insights from technology leaders and enterprise software experts at https://www.enterprisesoftware.blog/
Enterprise AI Defenders is produced by Abnormal Studios.
Evan Reiser: Hi there, and welcome to Enterprise AI Defenders, a show that highlights how enterprise security leaders are using innovative technologies to stop the most sophisticated cyber attacks. In each episode, Fortune 500 CISOs share how AI is changing the threat landscape, real world examples of modern attacks, and the role AI will play in the future of cybersecurity. I'm Evan Reiser, the founder and CEO of Abnormal AI.
Mike Britton: And I'm Mike Britton, the CIO of Abnormal AI.
Today on the show, we're bringing you a conversation with Mark Ballister, CISO at Montefiore Health System. Montefiore is an academic medical center with over ten hospitals, 50,000 employees and 10,000 physicians, operating in New York and western Connecticut. Their perspective on AI and security is particularly relevant given the life-critical nature of healthcare operations.
A few things stuck with me from this conversation.
First, Mark flagged a vulnerability that most people are overlooking: the “work versus web” toggle in tools like Microsoft Copilot. In work mode, your data stays behind guardrails. But one click to web mode, and those protections vanish. Threat actors are already running AI tools designed to exploit this gap.
Second, Mark's team defaults to “yes, with controls” instead of “no.” They developed a splash page on unapproved AI sites to map usage before moving to blocking, and added ROI analysis to the review process. A lot of people think an AI tool will help, but the cost-benefit doesn't always back that up.
And finally, the AI workforce shift is already happening inside Mark's own team. His security team built a custom model that ingests architecture review requirements and automatically generates risk evaluations. They produced 22,000 lines of AI-generated code, which passed static and dynamic analysis with only minor tweaks.
Evan: Well Mark, thank you so much for joining us today. Maybe to kick off, do you mind giving our audience a little bit about your background and how you got to where you are today?
Mark Ballister: Sure. Mark Ballister. I've been in cybersecurity since the year 2000. Actually, even before then. It was going into the year 2000 with Y2K, and I caught the cyber bug, and I haven't looked back since. I went from there to starting a couple of departments when I was at a company called Paychex, and I grew into a manager role, into a director type role as the deputy CSO, and then was recruited out of that to go over to the University of Rochester, where I was their CSO, building and maturing their program.
Five and a half years later, I was recruited into Montefiore Einstein to really build their program. I've been here for… coming up on two years. It'll be two years next week that I've been over here to build the program. So, that's my quick Reader's Digest of my cyber career.
Evan: Do you mind sharing a little bit of your scale of operations? I think probably not everyone appreciates the scale and the impact maybe you have in the world.
Mark: Yeah. So, Montefiore Einstein is an academic medical center. Depending on how you look at it, it can be anywhere from ten, which sounds weird, ten hospitals to 13 hospitals. It really depends on how you look at Article 28. And it also has the college to it. So, the Albert Einstein College is really an interesting story.
There is no tuition. It is paid for. There was a donation, a very, very generous donation, that everybody's tuition is paid for in perpetuity. It also has research in it, as well. Most of the hospitals are located in the Bronx area, so that's where the hospital is headquartered. I'm actually in upstate New York, but like I said, it is downstate.
About almost 50,000 employees, 10,000 physicians. So, it's a fairly big organization.
Evan: So, Mark, when cyber things go bad in the healthcare industry, people's lives are at risk. Obviously, that's true to a lesser extent in other industries. So, when you talk about recruiting and building the team and the culture, and just the mindset for what it takes to work on the organization, how do you remind people about what's really at stake?
Mark: I'd rather have somebody who is not quite there and train them up than have somebody that is absolutely perfect from a talent perspective, but has the personality of a pitbull. So, the big part of it is making sure you bring in the right folks from the get go. The second one is, as we all are in the trenches, we forget that we are supporting a hospital system.
Truly, this is one of the reasons why I'm back in an academic medical center, is because of the mission. So, having people go around, we're getting ready to do hospital visits, where we're going to go around and actually visit and see exactly why we're doing this. We're not doing this to build a world class security organization. We are looking to build a really good security organization that supports a world class hospital. A little bit different mindset, but it is truly how you have to look at it. Because otherwise you're going to be impacting the business by the decisions you make, instead of influencing your decisions to help the business. So, that little switch in mindset has really helped out a lot.
But really going around and visiting, and having folks see what they're supporting, has been incredible moving forward. So they can see it.
Evan: What are some of the ways you think… The good guys are training, so are the bad guys. What do you really think are some of the most underestimated ways that adversaries are trying to use AI to attack healthcare systems, attack anyone?
Mark: So, we do have an AI governance council that we've just advanced to the 2.0 version, where we brought in a lot more folks into it. That's actually managed by a peer of mine, but I'm also on the committee. One of the things that I don't think people understand, and I'm not picking on Microsoft by any means, but it is the use of Microsoft's Copilot.
There are two versions of it. Even if you have the enterprise version, you have the work and you've got the web. So, if you look at the very top of it, when you have the work version or the enterprise version, if you are on the work, you're good: the bar is in place, your data is contained, you've got guardrails around it. By just clicking that button that says “web,” you are no longer protected. Now it is just the personal version, and your information is out there. That's information that can be searched, that can get indexed, that can get exposed out there.
I don't think people are really thinking about the bad guys, because they know all of this, and they've got tools themselves. We have Copilot, and Grok, and all these other tools that we're using, and we'll say, as the good guys, “But, normal people are safe.” But they also have the tools, as well, that they're utilizing.
From the threat actor perspective, they're also using good tools. They're also using the LLMs, and the clouds, and everything else. But they have their own specific set of tools that they're using, as well, that's harvesting this data, and is going to expose us in the long run.
Evan: Are there one or two areas, maybe ten of your peers are listening, that you recommend security focus on, that they're probably underestimating?
Mark: I would say the two main things that we're working on right now, and granted, we're in the infancy of this, like a lot of other folks unfortunately, is really controlling the narrative. Being able to control which ones they can go to, which models they can utilize.
OpenClaw in itself is not bad. It's basically just an overlay to be able to use an AI model. Very powerful. I use it at home and it's pretty impressive, but having it learn and having it be able to go out to the web, you need to be able to monitor that, and be able to figure out: okay, do you have that running in your organization? Because it could be bleeding out data. One of the big things with OpenClaw and things like it, like Mantis, and even Claude Code and Claude Cowork, is it's looking to help you out. It's looking to do things that anticipate some of the needs that you have. And if it has the capability of going out there and spilling data out there, it's going to do that unwittingly.
It's almost like the old insider threat, where the intention doesn't matter; you still have to manage the impact. It's the same thing really with this. It's just another worker doing something that it shouldn't do, because you didn't tell it it shouldn't.
Mike: On a similar topic, obviously attackers are using AI and these tools. But on the flip side, I'm sure your business is wanting to harness and leverage AI, as well. Historically, security, we've always been the, “try to say no and try to avoid risk.” How does your role change with this age of AI, where your business is moving faster, wants to adopt more of these tools? How do you partner with them? What are some things that are working, or what are some areas that you guys are still struggling with, to help them innovate and leverage these tools as quickly and safely as possible?
Mark: It goes to what I was saying before with the governance committee that we have. Our governance committee takes a different approach to it. We don't look to say no. We look to say yes, as long as we can put controls around it. We try and help the business by showing them all of the risks that are associated with it, whether it is financial risk, whether it's security, privacy, whatever the risk is, we try and show them that, and how we can alleviate that from the conversation.
The organization right now, my boss, the CIO, he's now putting together a task force to be able to bring in AI to identify all of the different areas that we can utilize it to be able to get more efficiencies. It's just another technology that's out there that we need to be able to leverage. We need to be able to frame it. We need to be able to utilize it to be that differentiator in the workforce here, especially in the healthcare space. We've got to be able to empower the business to be able to do that.
One of the things that we're seeing a lot of, though, is everything is AI. There's the trash can that is now AI-enabled. Everything is AI. So, it's the same thing in the workforce, where we need to be able to say, “Okay, we don't need another AI tool. We've got X, Y, and Z.” We're an Epic shop, from an EMR perspective. Epic has a lot of AI capabilities. So, let's leverage what we have, and not bring in another tool. Because as you bring in another tool, you're adding more exposure to the organization. So, you want to be able to weigh that as well. But you also want to be able to give the business what they need. So it's having those conversations, and being extremely open about it.
Looking at the architectural design, looking at the way that it impacts the organization, and looking at the commonalities that it has. If we have X, Y, and Z tool that can do exactly what A, B, and C can do, we choose X, Y, and Z, because we already have them in the infrastructure.
Mike: Do you feel like, from your team's perspective, and even the broader employee base, do you feel like they're embracing and wanting to use more AI? Or is there a certain timidity about, “this is still new technology,” and a little bit of uneasiness? Where does it fall on the spectrum at Montefiore?
Mark: It's all over the board. There are some folks that are extremely excited about it. We have folks on the team that are looking at it, using different tools. One of which I thought was really interesting: they pulled together a nonfunctional requirement for Architectural Review, put the content through a model that they created, and it spit out a risk evaluation. So, they can do this in a very quick fashion. Took a couple of prompts, a couple of tweaks here and there. It wrote all the code. I think it's got 22,000 lines of code. We put it through checks, through static and dynamic code analysis, and it came out pretty well. There's a couple of tweaks it needed, but just doing something like that.
We have folks that are doing that, and then we have folks that are very scared about it, because Amazon just let go of 30,000 people. They're saying that was due to automation and through AI. So, there are a lot of people that are scared about it, as well, because they're fearful that they're going to lose their job, instead of embracing it and saying, “Okay, how can I utilize this tool to make me even more valuable?” They're looking at it as, “I'm a dinosaur now, because AI is taking my job, and I can't move forward.”
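As a purely hypothetical illustration of the pattern Mark describes (feed an architecture-review submission to a model, get back a structured risk evaluation), here is a minimal sketch in which the model call is stubbed with a keyword heuristic. The names and risk signals are invented for illustration and are not Montefiore's:

```python
# Sketch of an automated risk evaluation over an architecture-review
# submission. A real system would send the text to an LLM; here a
# keyword lookup stands in for the model. All names are hypothetical.
from dataclasses import dataclass, field

RISK_SIGNALS = {
    "phi": "Protected health information in scope",
    "internet-facing": "Externally exposed surface",
    "third party": "Vendor/third-party data access",
    "legacy": "Unpatched or end-of-life components",
}

@dataclass
class RiskEvaluation:
    findings: list = field(default_factory=list)

    @property
    def rating(self) -> str:
        # Rough rollup: more findings, higher rating.
        n = len(self.findings)
        return "high" if n >= 3 else "medium" if n >= 1 else "low"

def evaluate(requirements_text: str) -> RiskEvaluation:
    """Scan a nonfunctional-requirements document for risk signals."""
    text = requirements_text.lower()
    ev = RiskEvaluation()
    for keyword, finding in RISK_SIGNALS.items():
        if keyword in text:
            ev.findings.append(finding)
    return ev

doc = "Internet-facing portal handling PHI via a third party vendor."
report = evaluate(doc)
```

In practice the lookup table would be a prompt to a model, which is what lets a few prompts and tweaks generate the scale of output Mark mentions.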
Mike: So, one of the concerns, I spent time in manufacturing and it was very similar. But I know healthcare oftentimes runs on very legacy types of systems. You've got OT things that just can't be patched in normal cycles. How do you approach securing networks where you've got all these critical endpoints, and you've got attackers moving at faster speeds? We've even seen some of this with some of the attacks recently with Iran and everything. How do you stay on top of those things? How do you focus on protecting things that are often harder to protect, even in an old world without AI being used by the attacker?
Mark: I got my feet wet in healthcare, like I said, in 2018, and one of my first experiences was with an MRI machine. It was on a legacy Windows system. I think it was Windows 2000 at the time, and the only way to upgrade that was to buy a whole new system, and it was a $600,000 machine. The organization, it was a not for profit organization. There's no way they have $600,000 to be able to do this. So, we had to do isolation. We had to do additional monitoring on it. It doesn't guarantee that you're not going to get compromised, because it's an old system that's very easily compromised, but at least it gives us that early warning, and then isolating it. Also, if it does get compromised, that it doesn't compromise other systems parallel to it.
So it's a lot of things like that. Sometimes you have to get creative. Sometimes you have to put the canary in the coal mine, to be able to identify if somebody is doing something that they shouldn't on the machine, and get that early warning. It's all about early warning, in my opinion, especially when you're dealing with legacy systems that don't have the capacity of being upgraded or patched. So, it's trying to think of it a little bit differently. That goes a long way.
That was one of the first things that we did here when I took over, was really getting that monitoring aspect of it. We brought in a third party to help us with the SOC, with the EDR, and being able to react quicker, so that we can hopefully reduce the overall blast radius if somebody does come in.
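The isolation-plus-early-warning approach Mark outlines for the legacy MRI machine can be sketched as a simple allowlist check over observed network flows. The addresses and the `check_flow` helper are hypothetical, and the sketch assumes flow records are already being collected somewhere:

```python
# Early-warning sketch for an unpatchable legacy device: the isolated
# host gets a tight allowlist of peers, and any flow outside that list
# raises an alert rather than going unnoticed. All values hypothetical.
ALLOWED_PEERS = {
    "10.20.0.5": {"10.20.0.10", "10.20.0.11"},  # MRI console -> PACS servers only
}

def check_flow(src: str, dst: str) -> str:
    """Return 'ok' for an expected flow, 'alert' for anything else."""
    return "ok" if dst in ALLOWED_PEERS.get(src, set()) else "alert"

observed = [("10.20.0.5", "10.20.0.10"), ("10.20.0.5", "203.0.113.9")]
alerts = [flow for flow in observed if check_flow(*flow) == "alert"]
```

The point is the "canary" effect: the check can't stop a compromise of the old system, but any unexpected destination becomes an immediate signal before lateral movement spreads.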
Evan: You also mentioned that one of the strategies you recommended was controlling the narrative or managing what tools people want to have access to, to make sure there's not crazy proliferation onto shadow IT, shadow AI. How do you manage that?
Mark: A lot of that is going through that AI governance council that we have. We do have feeds coming in from our supply chain. So, when things are purchased, or they're attempted to be purchased, it comes through that gateway and it's identified and reviewed at that level. We have the folks that come in there that are looking to utilize those tools. We have the conversation with them, and like I talked about before, we really talk about the risks from all parts of the organization. We also look at it from an app rationalization standpoint, to make sure that we don't have existing tools in the enterprise that can do what they're looking to do. We just started introducing the ROI, as well.
Everybody wants to utilize these AI tools, because they think, and the keyword is think, it's going to help them out. But it could be more expensive by utilizing it, because of how it's built or how it's utilized. We also have some phase gates in our different review boards, where it'll have to come back through that AI framework.
We just implemented a block. Actually, it's not a block. It's a recognition on our web gateway. So, if somebody does try and go out to these different sites, it lets them know, “This is not an approved source. Do you still want to go out there?” That splash page also warns them: don't put identifiable information in there, PII, whatever the case may be. Don't put it into this tool, because it is not part of the enterprise solution. Eventually we're going to move away from just the splash page to a block, but we've got to identify all of the areas that are currently using it. Going back to the statement earlier, as well: we can't impact the business. So, we've got to do that balance between security, privacy, and what the business needs. So, that's the part that we're at right now.
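The splash-before-block rollout Mark describes could be modeled roughly like this. The domains, the `ENFORCE_BLOCK` flag, and the `gateway_decision` helper are all invented for illustration, not the actual gateway policy:

```python
# Sketch of a web-gateway policy for AI sites: approved tools pass,
# unapproved ones first get a warning splash (so usage can be mapped),
# and only later a hard block. All names and domains are hypothetical.
APPROVED_AI = {"copilot.company.example"}   # enterprise-sanctioned tools
ENFORCE_BLOCK = False                       # flip once usage is mapped

def gateway_decision(host: str, usage_log: list) -> str:
    """Return 'allow', 'splash', or 'block' for an AI-site request."""
    if host in APPROVED_AI:
        return "allow"
    usage_log.append(host)  # record who is using what before blocking
    return "block" if ENFORCE_BLOCK else "splash"

log = []
decision = gateway_decision("chat.unvetted-ai.example", log)
```

Logging before enforcing is the key design choice: it maps real shadow-AI usage without abruptly breaking workflows the business may depend on.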
Mike: Let's move to think about the future. Mark, if you looked five years out, what's the one cybersecurity challenge for the healthcare industry that you think AI is going to solve?
Mark: That's a great question. I think it's already starting to do that, especially with some of the ways that it's reading images and diagnosing some of the data that it has around the patients. But it scares me in a sense, because you're still dealing with patients, you're still dealing with people. Granted, we can get it wrong, but so can it, with the different poisoning, and biases, and what have you. But I really do think that's going to be kind of the future of it. It's going to be putting the information into it, reading the different X-rays and PACS imaging, and all of these different technologies, to help the doctor make the right call. Because they're human, they're going to make mistakes. It's going to be able to see all of this data, to be able to move forward. I think that's really going to be the big future for it. I think that's going to be the big up and comer, that we're almost there now. But I think it's going to get even better.
Evan: If you had to pick one area in cybersecurity that's going to get 2x harder with AI, and one that's 2x easier, what would those categories be?
Mark: 2x harder is going to be the controlling of it. Identifying it in the organization. If the data is going to be getting traded and utilized by the models, DLP in my opinion really just never made it per se, even though we still try, this is just another form of data going out of the organization. Where it's going to make it 2x easier: really around the environment itself, being able to identify all of the things that could go wrong, and doing that really complex anomaly detection, if you will. I think it's going to help out a lot there.
Mike: If you could wave a magic wand and instantly solve one cybersecurity problem for either healthcare globally or your organization, what would it be?
Mark: Data de-identification would be a huge one. We talk about the tokenization of data. We talk about de-identified data, and not being able to re-identify it as it goes out, both from a research perspective and a data share perspective. That, and then taking that to the next level, is really the third party. With the third parties, they have access to our data. We could be locked down tight as can be, but our data goes to somebody else, and we're really trusting them to do the right thing with it. So, being able to come up with some type of solution that ensures that the data is still secure, regardless of where it is, would be a game changer.
Mike: If a doctor or someone in the hospital comes to you and says, “Hey, I need this AI tool today,” how do you avoid being the “No” person? How do you get them an answer quickly and enable them to do what they want to do?
Mark: Yeah, and we actually do that through our governance, as well. We had the CMO come through just a couple of weeks ago on a tool that they were looking to use at one of the member hospitals. We really look at it, like I said, we look at it from a different lens. We look at it from, how can we say yes? Does it make sense to say yes, and then educate them on why we're saying yes, or why we're going to say no? Sometimes that gets a little difficult, because it's something they really want. A lot of times, if it's not going to impose a lot of risk to the organization, we're going to say yes, because we want people to be innovative. We want people to look and say, “Okay, we can utilize this tool, we can do this.” We want them to be those forward thinkers, and we don't want to be known as the “No” people, like we used to be. Cybersecurity was all about no. Then we had to figure out how to say yes. That has changed, because the business has changed.
Mike: Yeah. Ultimately, the ideal state is to give them the ability to run fast, and not create damage to the organization. Give them sandboxes and things like that. So, that would be the ideal scenario. You don't even have to come to me. You can play in your little sandbox and not hurt anything.
Evan: So Mark, at the end of the show, we like to do a bit of a lightning round. We ask you knowingly difficult questions, and then require you to answer in the one tweet format. So, please forgive us in advance for the questions. Looking for your quicker take on things. So, Mike, you wanna go first?
Mike: What's the one piece of advice you'd give to someone stepping into their very first CSO job? Maybe something they might overestimate or underestimate about the role?
Mark: Overestimate is, you are not a technologist anymore. You are managing technologists. But you're really a business leader, and you have to make sure that you have those partnerships with the other leaders within the organization, and have a great relationship with the board, because ultimately they're your oversight. I think that's overlooked, because most of the time, CSOs get promoted up through the ranks as a technologist, and have to learn the other part of it. You need to have that part when you move into the role.
Evan: What's your advice to some of your peers that are trying to stay up to date with all the new stuff? It's like every day there's a new AI breakthrough. Claude Code has like a new feature every day. I think I'm a Claude Code expert, and then I read whatever the recent tweet is, and I feel super far behind. So, what's your advice to stay up to date on stuff?
Mark: It's funny, because I started going out and I started looking at all the tools. NotebookLM and Claude and Claude Code and Claude Cowork, which are all just a little bit different. And download it, Ollama, and, what is it? AnythingLLM, I think. So, I started playing around with all these different tools, and then ChatGPT, and Gemini, and Grok, and it got to the point where I wasn't getting good at anything. I wanted to learn them. But once you learn one, they all kind of make sense. I would say, get good at one or two, whatever you're looking to utilize.
If you're looking to do that again, then look at Mantis, or look at Claude, Claude Cowork, or whatever the case may be. Get good at it, and determine if that's going to work for you. Same thing with the LLM, which, like ChatGPT or Copilot, or whatever the one you want to utilize. Get good at that, and get a really good deep understanding of it. The other ones will make a lot more sense.
Evan: How would you advise someone about getting good at one of these tools? How do you know if you're good?
Mark: That's a great question. You can even ask the AI agents to actually interview you for that. Most folks are looking at it where, okay, if I say, “Create an ASM job description for me,” they think they're experts in ChatGPT when they're barely even scratching the surface of what it can do. Take some of the courses that are out there. MIT has courses out there; there's like 10 or 15. If you just look at LinkedIn, top-ten lists are coming out all the time: take those courses, so you really understand what it's doing. Then you can really provide yourself with a grade of where you stand with it. Most of the people that say that they're a B+, they're really not even on the grading scale, because they don't have enough information to truly make a determination of where they are.
Mike: We always like to ask this question. On the more personal side, what's a book that you've read that's had a big impact on you, and why? It doesn't have to be cyber or work related.
Mark: I think one of the, and this is going to go way back, The One Minute Manager was one of the ones that was really impactful, especially as I moved into management, to not make things so difficult. Most things are not as complex as you make them, when you break them down into smaller little chunks. To me, that was a great little, tiny little book, but there were a lot of gems in there. So, I think that would be the one book that I would say was really impactful in a very strange way. The Digital Doctor was really good, too. That was another good one, but that was when I got into healthcare.
Evan: What do you think will be true in the future for AI in cybersecurity that most people today would consider science fiction?
Mark: I truly think that a lot of the work that we're doing now, and a lot of the roles that we have within the organization from cybersecurity, are going to be done by agentic agents. It's not going to necessarily replace the human, but it is definitely going to augment the work that we do, kind of like SOAR did many years ago. We're talking about doing the orchestration and the automation. This is going to take it that step further, and it's going to be learning how to utilize those tools, learning how to do the input into the tools, learning how to manage them. I think it is going to be a big component of cybersecurity, and other areas within IT and the business, as well.
Evan: Do you think three years from now there'll be more security engineers or less?
Mark: I think it's going to be about the same. We're in a deficit right now. If you look out there, we can't get enough. But I think we're going to be at about the same spot. I think it's going to look a whole lot different. The skill sets are going to look a lot different. It's going to be easier to get into it, because of the way that we can leverage different components.
Evan: I've asked this question to a bunch of CISOs and people, and they said it goes up or stays the same. No one says it goes down. I even talked to the CSO of Anthropic, and the CSO of OpenAI, and both of them said, “I think it's going up.” They're pretty advanced in using some of these tools. And so, yeah, I think we'll see. But I'm with you, I think it's going down.
Well, Mark, thank you so much for joining us today. I really appreciate having you on the show, and hope you can do it again soon.
Mark: Absolutely. Thank you. I appreciate it.
Mike: That was Mark Ballister, CISO at Montefiore Health System. I'm Mike Britton, the CIO of Abnormal AI.
Evan: And I'm Evan Reiser, founder and CEO of Abnormal AI. Thanks for listening to Enterprise AI Defenders. Please be sure to subscribe, so you never miss an episode. Learn more about how AI is transforming the enterprise from top executives at enterprisesoftware.blog
Mike: This show is produced by Abnormal Studios. See you next time!

