
On the 37th episode of Enterprise AI Defenders, hosts Evan Reiser and Mike Britton talk with Matt Posid, Chief Security Officer at KPMG US. AI accelerates the attacker’s playbook by increasing overall capability and reducing the time between vulnerability discovery and exploitation. Matt explains why KPMG consolidated cyber, insider risk, physical security, life safety, resilience, and third-party risk into one enterprise security program, and how defenders can keep up by pairing strong controls with AI-enabled workflows and clear governance.
Quick hits from Matt:
On how AI changes the threat curve: “AI is really good at a couple of things. It is really good at making people better, and it’s really good at making people faster.”
On deepfakes and why fundamentals still work: “The controls we’ve had to protect against non-AI-based attacks are still, in many cases, effective against the AI-based variants.”
On why defenders must fight at AI speed: “If the bad guys are using certain tools, the good guys probably have to also, in order to keep up with the capabilities, the velocity that we need to defend.”
Recent Book Recommendation: Unreasonable Hospitality by Will Guidara
Like what you hear? Leave us a review and subscribe to the show on Apple, Spotify, and YouTube.
Enterprise AI Defenders is a show where top security executives share how moves to the cloud have created an evolved threat landscape that requires new tools to protect against cybercrime. Find more great lessons from tech leaders and enterprise software experts at https://www.enterprisesoftware.blog/.
Enterprise AI Defenders is produced by Abnormal Studios.
Evan Reiser: Hi there, and welcome to Enterprise AI Defenders, a show that highlights how enterprise security leaders are using innovative technologies to stop the most sophisticated cyberattacks. In each episode, Fortune 500 CISOs share how AI is changing the threat landscape—real-world examples of modern attacks, and the role AI will play in the future of cybersecurity. I’m Evan Reiser, the founder and CEO of Abnormal AI.
Mike Britton: And I’m Mike Britton, the CISO of Abnormal AI. Today on the show, we’re bringing you a conversation with Matt Posid, Chief Security Officer at KPMG US. KPMG is a $40 billion global professional services network and one of the Big Four. As a firm trusted with sensitive client and financial data across industries and geographies, KPMG’s security posture is closely tied to client trust and compliance—making Matt’s views on AI and security especially relevant.
There are three interesting things that stood out to me in the conversation with Matt.
First, AI is democratizing cyberattacks in a way that should change how we think about the threat landscape. The good news: controls you already have for old-fashioned fraud—like requiring invoice numbers, pre-registered vendors, and verified bank accounts—still work against AI-powered deepfake attacks.
Second, when generative AI exploded onto the scene, KPMG didn’t block it. They put up a splash page. The message reminded employees to use approved tools for client work while encouraging them to experiment and learn.
And finally, Matt’s sharp point: your edge isn’t tools—it’s people and organizational design. When insider risk arises, you don’t solve it in one lane. You connect the dots across data access, policy violations, and real-world behavior to see the full pattern early.
Evan: Matt, first of all, thank you so much for joining us today. Do you mind giving our audience a little bit of background about your role today, and maybe how you got into cybersecurity?
Matt Posid: Sure, Evan—and thanks for having me here today. I currently serve as the Chief Security Officer for KPMG’s U.S. firm. In that role, I run a multi-domain security program that focuses on everything from cybersecurity to insider risk, physical security, life safety matters, compliance, and I also run our business resilience program and third-party risk programs. We’ve integrated all of those things into one enterprise security program that I have the opportunity to lead.
I’ve been in this role for about two and a half years. Before this current role, everything I just described was fractured at KPMG. We had security happening all over the place. The cyber program was part of it. The physical security program was coupled with other parts of our business. And we recognized the world had changed—threats are multi-domain—and we needed to pull everything together.
Prior to my current role, I spent a couple of years as KPMG’s Chief Information Security Officer, running our cyber program. So I’ve been with the firm about five years in total. Before that, I spent close to 20 years at the Central Intelligence Agency—and honestly, I spent most of that time having nothing to do with security, or actually, said a different way: being on the opposite side of the coin from security.
I was an IT engineer and IT innovator—really trying to push technology services—and I constantly got into challenges and debates with the security program about how to apply innovative technology against the CIA’s mission space. After about 15 years of doing that, I had the opportunity to shift from being an IT innovator over to security, so I could make sure the security program at CIA was absolutely focused on adopting innovative technology that would allow us to bolster the CIA’s mission in the world.
Once I got into that role—as Deputy CSO first, but ultimately as CSO of the Central Intelligence Agency—I fell in love with security. I found I was able to contribute significantly to the agency’s mission from the security team, and I was able to touch all facets of the mission. It was truly a horizontal that cut across everything.
I fell in love with it, and I’ve been doing security ever since.
Evan: Do you mind sharing a little bit more about KPMG? In the enterprise world, it’s a household name, but it may not be for every listener.
Matt: KPMG globally is a $40 billion brand, give or take, and it is one of the Big Four professional services and accounting firms across the globe. It’s most known for our accounting business—that’s sort of the tried-and-true reputation of KPMG—but we’re much more multifaceted than that.
We absolutely are an accounting business. We also offer tax services to our clients. We offer deal advisory services. We offer professional and management consulting services. And we also have a number of related subsidiaries that allow us to do that. So we provide a full suite of professional services to meet the needs of our clients. Our clients range from Fortune 100-size clients all the way down to mid-market and smaller clients across the globe.
So we provide these kinds of professional services and consulting to nearly any kind of company.
Mike: We have the outsider’s perspective, and obviously you are the person protecting the organization. Maybe you could give our listeners some insight into what it looks like to protect KPMG from the inside as an organization.
Matt: Yeah. I think one of the most interesting and challenging aspects of the role is that we essentially are a third party to some of the world’s biggest brands across all kinds of different regulated sectors that our clients operate in.
And if we really think about third-party risk these days—third-party risk is critical to any security program. There are studies out there saying third parties are responsible for upwards of 30–35% of data incidents and cyber incidents. And so we have to care about our third parties.
So all of our clients are absolutely worried about the security program at KPMG and how we can help our clients meet their security obligations. But that means we have to be responsive to each of their unique needs.
Those needs vary from client to client, from sector to sector. And so we have to have a security program that is built with the flexibility to meet the needs of any and all of those different client obligations—those client regulatory requirements—and those client needs from us. That means we maintain a very complex, very multifaceted security program.
It makes it incredibly complex to run, but at the same time, that’s part of the challenge that makes it really exciting as a security leader.
Mike: I’d love to understand, from your perspective at KPMG, what are some things that enterprises maybe underestimate or overestimate about the impact of AI? Not so much on the benefits of it, but what it does to the threat landscape or your enterprise security program.
Matt: AI is really good at a couple of things. It’s really good at making people better, and it’s really good at making people faster. And this creates two fundamental paradigm shifts, I think, for the security landscape.
If you think of bad guys in three different buckets—your amateur, your criminal, and your nation state—there are absolute capability and maturity differences between those three. Everyone’s now getting better, though. Our amateurs now have more tools and more capabilities than they’ve ever had before. And I now have to worry about a group of people who might not have been as risky to me before.
At the same time, everyone’s getting faster at what they do. Vulnerabilities are going to be exploited faster. Every company out there has patch management cycles and IT hygiene challenges, and has a cadence where they’re used to applying patches and applying updates and doing maintenance and things of that nature. If the attacks are coming at me faster than they ever have before, we might find ourselves in a situation where that typical rhythm is no longer fast enough to keep up with changes in the threat.
Fundamentally, we’ve democratized cyberattack capabilities. Anyone and everyone can now do this if they want to, and we’ve made them significantly faster at doing it. And so I think that creates a world where the volume of potential attacks is wider, and the velocity of those attacks is faster, than it has ever been before.
On top of that, we see attackers putting new tools in their toolboxes. And this is a little bit different than what I just talked about a few minutes ago. So let’s take fraud or impersonation. These have always been concerns for security professionals. Somebody could email us and pretend to be our CFO or our CEO and attempt to cause a professional to take action. Now we’re seeing those same kinds of things with deepfake videos and AI-based attacks, and that makes them much harder to spot or detect.
The good news is: good security programs have been worried about these threats for a really long time, and the controls we’ve had to protect against non-AI-based attacks are still, in many cases, effective against the AI-based variants.
Let me give you a real example. If you’re worried about impersonation leading to payment fraud, you likely have robust controls in your accounts payable department—perhaps you need to send me an invoice number, perhaps you have to be a pre-registered vendor, perhaps you have to have a pre-registered bank account. If you have those controls to combat old-fashioned kinds of fraud, when you get the deepfake video that pretends to be from your CFO and asks for payment, if you’re following your proper procedures, that deepfake still should not be effective—even though it’s incredibly lifelike.
So some of the controls we’ve had for years actually still prove very effective against AI-based impersonation and fraud attacks.
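The controls Matt describes reduce to a short pre-payment checklist that applies no matter how the request arrives. Below is a minimal sketch of that checklist in Python; the vendor registry, verified account records, and field names are hypothetical stand-ins for whatever an accounts payable system actually maintains, not KPMG’s process.

```python
# Illustrative sketch only: the payment controls described above (invoice number
# required, vendor pre-registered, bank account pre-verified). All data is invented.
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    vendor_id: str
    invoice_number: str
    bank_account: str

# Hypothetical registries that accounts payable would maintain out of band.
REGISTERED_VENDORS = {"VEND-1001", "VEND-1002"}
VERIFIED_ACCOUNTS = {"VEND-1001": "US-12345678", "VEND-1002": "US-87654321"}

def approve_payment(req: PaymentRequest) -> tuple[bool, str]:
    """Apply the same checks whether the request arrived by email, phone call,
    or a convincing deepfake video of the CFO."""
    if not req.invoice_number:
        return False, "rejected: no invoice number supplied"
    if req.vendor_id not in REGISTERED_VENDORS:
        return False, "rejected: vendor is not pre-registered"
    if VERIFIED_ACCOUNTS.get(req.vendor_id) != req.bank_account:
        return False, "rejected: bank account does not match verified record"
    return True, "passed automated checks; route to human approver"

# An urgent "CFO" request to wire money to an unregistered vendor and new account
# fails exactly the way an old-fashioned fraud email would:
print(approve_payment(PaymentRequest("VEND-9999", "INV-001", "US-00000000")))
```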
Evan: Where are the areas where you think enterprises are underestimating the level of transformation required to keep up with the AI-enabled adversary?
Matt: I think this is an area where I can be both the poison and the antidote. Not to over fearmonger or anything, but it’s like any other arms race. If the bad guys are using certain tools, the good guys probably have to, also, in order to keep up with the capabilities and the velocity that we need to defend.
And so that’s going to drive us to one of those other dimensions I talked about: security with AI. How do we leverage these tools to bolster our ability to defend? How do we make us better? How do we fight at computer speed—at AI speed?
Also, how do we leverage that to upscale our program, instead of having junior analysts manually analyzing data? How do I leverage technology to be doing that for me, and leverage my humans—not displace them—but leverage them to move higher up the stack, to be level two, level three analysts, where they can solve harder and harder problems? Just like the attackers using these tools, I think we have to also.
Mike: From your perspective as a CISO, how do you balance enabling that innovation with also protecting sensitive customer data, proprietary data, things like that? Where’s the balance?
Matt: I think that’s the challenge a lot of us are focused on right now. I’ll start out by saying security professionals, but I’m going to expand this as I talk.
If you recall from my introduction, my background is as an IT innovator. I want to leverage innovative tech to achieve our business objective. That’s how I’m wired. That’s how I still think of myself in many ways. But now I’m sitting in a security role where I’m accountable for also protecting the organization.
If we go back in time a couple of years to the huge spike in the emergence of generative AI, you start to see headline after headline of organizations blocking these kinds of sites because they didn’t know what to do with them yet, and they didn’t know how to protect themselves.
At KPMG, we took a different approach. We put a splash page in front of these sites, and we said: as a reminder, you have to use approved technology services to provide professional services to our clients. But we encourage you to experiment and learn and play with these tools—not for service delivery—but to explore how they might help your business in the future.
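A rough illustration of the splash-page pattern Matt describes: instead of blocking generative AI sites at the proxy, interpose a one-time policy reminder and then let the user through. The domain list, policy text, and acknowledgment flag below are invented for the sketch; a real deployment would live in a secure web gateway or proxy, not application code.

```python
# Hedged illustration of "splash page instead of block." Domains and wording are
# hypothetical examples, not KPMG's actual configuration.
GENAI_DOMAINS = {"chat.example-ai.com", "generate.example-llm.net"}  # hypothetical

SPLASH_TEXT = (
    "Reminder: client work must use approved technology services. "
    "You are welcome to experiment and learn with this tool, but not for service delivery."
)

def route_request(host: str, user_acknowledged: bool) -> dict:
    """Return a proxy decision: pass the request through, or interpose a
    one-time policy reminder page."""
    if host not in GENAI_DOMAINS:
        return {"action": "allow"}
    if user_acknowledged:
        # The user already saw the reminder this session; let them through.
        return {"action": "allow"}
    # Interpose the reminder rather than blocking outright.
    return {"action": "splash", "message": SPLASH_TEXT}

print(route_request("chat.example-ai.com", user_acknowledged=False))
print(route_request("chat.example-ai.com", user_acknowledged=True))
```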
That allowed us to do a few things. One, it allowed us to build really robust business cases on the power of these tools and how they might help us internally. Two, it let us quickly see a lot of the risks of these tools, so we could build the right organizational governance around how we adopt them. And three, it gave us a really good head start on many of our competitors.
As clients across the world were trying to figure out how to adopt AI in smart ways, we felt we had a good head start, and we felt we went to market very, very fast on how to help clients adopt these capabilities.
But I want to come back to that second one. Understanding the risks allowed us to build the right governance around this. What we did was build something for our internal firm use that guided how we adopt these tools. It looked at the security risks, but also the legal risks, the ethical risks, the transparency risks, and all of the other facets that go into AI risk—above and beyond just security.
We looked at who all of those stakeholders were, how they needed to contribute to the approval processes, and we built this internal process that worked incredibly well. In fact, so well that we branded it and marketed it as the KPMG Trusted AI Framework. And we are now leveraging the same frameworks we used internally to help clients adopt these AI technology services.
So it’s something I’m very focused on as a security professional, but it’s something I don’t unilaterally own because there are so many other kinds of risks related to AI also.
Evan: What’s our advantage, right? What are some of the techniques or tools or training or assets that we have to fight back? Is it the data advantage? Is it that we know the behavior of our organization?
Matt: I would love one day to see bad guys at their staff meeting saying, “Oh, wait a second. That threat is not aligned with our policy. We can’t do that one.” Or sitting around saying, “Wait a second—this is an insider-assisted cyberattack, and we know their cyber program or their insider risk program doesn’t talk. This one is not fair to use.”
I would love to see a real-world scenario where they are governed by the same rules that defenders are. But that’s exactly what a defender’s world is. We need to move fast, but our policy says X. Or we need to zig, and our policy says zag first, then zig.
Those are the things we are often challenged with—and why any risk management domain requires some degree of judgment to be able to make calls to deviate from those where appropriate, to better safeguard the organization.
So pulling the thread on where your question was going—what can we do as defenders? Where can we gain advantage? Tools are tools. Everyone has the same tools. Vulnerabilities are vulnerabilities. Everyone has the same access there. I think we have to focus on the things we can uniquely use to differentiate ourselves. And the number one thing is talent.
At the end of the day, talent matters a ton—especially as we look into an uncertain future. In three to five years, we will be defending against threats that have not been invented yet, using technology that has not been invented yet.
We cannot be static in how we think about the cyber threat, how we think about our tooling, or how we think about our approach. We have to be fluid, adaptable, and we need to invest in talent that is agile and adaptable. So as the world changes, the threat landscape changes, and the tools change, our people can move with that.
If we ever find we are wed to tools or processes or approaches, I think we’ve taken a game that’s already really hard and put one hand behind our back. We have to be adaptable and agile.
So number one is invest in our talent and our people. The other thing—and we’re seeing this a lot over the last five to ten years—is making sure security is positioned right within your company.
If we go back ten years ago, cyber was maybe a niche domain within an IT program. Then we started to see more organizations say cyber is not just an IT problem—it’s a business problem. And it may not be well positioned under a CIO or a Chief Technology Officer to properly protect the business because, in some ways, it needs to challenge and pressure that leader. That’s always hard to do when it’s your boss.
So you started to see this trend of more security programs peeling out of IT and becoming peers to those technology leaders, much like we have done.
I think the next stage is multi-domain security: making sure, as we think about things like insider risk, we don’t have a traditional security program thinking that’s employee background checks and investigations and misbehavior, and cyber professionals thinking that’s data loss prevention. It’s both. But if we’re not treating it like a whole-person risk, we’re not seeing the entirety of the problem.
So: one, invest in talent. Two, position security in the right place in the company to have the influence it needs to drive change. And three, think holistically about security problems rather than in individual silos, so we’re combating the total threat facing the organization.
Evan: There’s no doubt that someone listening right now works in cybersecurity, or maybe they’re a junior in cybersecurity thinking, “How can I step up? I just heard Matt talk about how we’ve got to innovate and improve, but I’m just a tier-two analyst. I don’t know what I can do.” What would you say to that person who can maybe do more than they realize?
Matt: The first thing I’d say is your career is a bit more like a jungle gym than a ladder. I talked about how I grew up as an IT innovator—an IT engineer. When I was 20 or 30 or 35, I envisioned a future role for me as being a Chief Technology Officer or a CIO.
And even in my IT career, I moved around a lot. I was in networks, data centers, cloud computing, virtualization, application development, and data management. I moved around intentionally to become a very broad IT professional. I never envisioned moving to security—I envisioned a move to the C-suite. Then an opportunity to move to security came up.
I took it because I thought working there for a few years would allow me to help fix the security program and get it to a position where it would be more business-oriented. I thought taking that out-of-body experience would make me a better IT leader when I came back. And I found I just haven’t gone back yet.
It’s only been a decade or so. Maybe I’ll look one day. But don’t be afraid of that uncertain horizon. Don’t be afraid to take a role that may not align with where you thought you were going, because turning it down might close doors that would otherwise open a whole realm of possibilities.
If I had been closed off to thinking about security as an IT professional, I would not be where I am today. And I think I have a great role at a phenomenal organization that lets me contribute to the value of the organization, protect our brand and our clients, and have a fulfilling career at the same time.
So don’t think vertical is the only answer. Even within IT, I did a lot of horizontals. Look at roles that you can use to build your skill set and acumen to better position yourself for wherever the future might take you. The future is uncertain. Roles you might end up in may not even exist today. It’s building the right set of skills and competencies that gets you ready.
Mike: Where do you think we’re going? Do you think we ever get to a point where we trust AI enough to be fully autonomous?
Matt: AI is certainly awesome. Let’s just get that out there: AI is awesome. But it’s not magical. It doesn’t do everything, and it doesn’t do everything perfectly. And I think the answer to your question, in part, comes to: what kind of AI are we talking about?
As a cyber defender, I absolutely love chat-based AI tools. I use them every day, over and over again. But as a cyber defender, I can’t copy billions of alerts into one of those chat-based tools and get a meaningful security outcome. I need AI built into my tool, built into my workflow, and purpose-made for the problem I’m trying to solve.
Do I think we get to a world where, with unique niche business problems that have highly focused AI solutions, we are doing some kind of solution that doesn’t have humans in the loop? Probably—just like we did with automation.
A difference is that automation can be very deterministic, and AI isn’t always. But humans aren’t always right either. We need to get to a point where the technology is as good as—or better than—the humans. Then we need to say: what decisions can we allow it to make that are low-risk enough that it’s okay if it sometimes gets them wrong, just like we do with humans?
What decisions do we allow a level-one analyst to make versus a director of incident response versus your head of cyber defense versus your CSO versus your CEO? We allow different humans with different sets of experience, knowledge, and understanding of the business to make different levels of decisions. I think we absolutely get to a world where AI is making some degree of decisions in our environment.
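One way to picture that tiered delegation is as an explicit authority map, where the approver an action requires reflects how costly a wrong call would be, and the lowest tier can be automation. The tiers, actions, and assignments below are hypothetical illustrations of the idea, not KPMG policy.

```python
# Sketch of tiered decision authority: low-risk actions can be delegated to AI,
# higher-risk ones escalate to progressively more senior humans. All values invented.
AUTHORITY_ORDER = ["ai_agent", "l1_analyst", "ir_director", "cso", "ceo"]

# Minimum authority required per action, keyed by the cost of getting it wrong.
REQUIRED_AUTHORITY = {
    "close_duplicate_alert": "ai_agent",
    "quarantine_single_endpoint": "l1_analyst",
    "disable_executive_account": "ir_director",
    "take_customer_facing_system_offline": "cso",
    "public_breach_disclosure": "ceo",
}

def may_decide(actor: str, action: str) -> bool:
    """True if the actor's authority level meets or exceeds what the action requires."""
    required = REQUIRED_AUTHORITY[action]
    return AUTHORITY_ORDER.index(actor) >= AUTHORITY_ORDER.index(required)

print(may_decide("ai_agent", "close_duplicate_alert"))      # True: low risk, delegable
print(may_decide("ai_agent", "disable_executive_account"))  # False: escalate to a human
```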
I think we’re probably already there in some cases. Though in anything sensitive, we absolutely have humans in the loop here at KPMG. I suspect there are businesses out there that are letting AI make decisions.
So yes, I think we get there. What that looks like, what tools, whether or not that is general purpose—I don’t have an answer. We don’t know what tomorrow’s technology inventions will be and where they’ll take us, but we have to be willing to have a dynamic enough strategy.
A strategy is kind of like a hurricane plot. I know where it’s going to be on day one or in the next couple of hours, and by day two, day three, day four, we have a wider cone of uncertainty. We know where it’s not going, but we don’t know precisely where it’s going to be.
Technology is the exact same. We see the trends we’re on. We can guess where it might be. We know where it’s probably not going to go, but it could be anywhere inside that cone of uncertainty. And as technology and security executives, we have to be prepared for anything in that cone.
Evan: As we like to end the episode, Matt, with a bit of a lightning round—the idea is we’re going to ask questions that are possible to answer succinctly, and I’m going to try to get the one-tweet version from you.
Matt: That sounds fun. Let’s do it.
Mike: What advice would you give to a new CISO stepping into their very first CISO job—maybe something they might overestimate or underestimate about the role?
Matt: First thing I’d say is make sure you learn the business. If you don’t understand the business imperatives and what the business risk tolerance is, you can’t be an effective CISO.
Evan: How do you personally stay up to date on this stuff? What’s your information diet?
Matt: Two things. One, I listen to things like this. I listen to a variety of podcasts about the security profession, the state of AI, the state of technology change. I try to stay abreast of current trends and events, threats, etc.
Two, I surround myself with really smart people who do the same thing. Hopefully that creates an ecosystem around me where we’re each learning different things, and together we come up with a corpus of knowledge that helps us be successful.
Mike: What’s a book that you’ve read that’s had a big impact on you—and why? It doesn’t necessarily have to be a cyber or even work-related book.
Matt: I recently heard a talk by Will Guidara, who wrote a book on the same topic, Unreasonable Hospitality. The message he gave was how the little things add up to make a significant difference, and how you approach building organizational culture and serving those around you.
At the end of the day, security is a business enabler, and understanding how we can make the process better for the business—there was so much in his talk that I think we can apply to what we do in the security world.
Evan: Matt, what do you think is going to be true about the future of AI in cybersecurity that maybe most people in the industry today would consider science fiction?
Matt: There are so many aspects of any risk management decision that require a judgment call. Going back to a conversation earlier: do I think AI will make decisions? Yes. But where in the stack will it make them—at the analyst level, at the CSO level, at the CEO level?
I think there is always going to be a human aspect of security or risk management. And I read articles saying this is something we’re just going to “AI away,” and security will be the easiest thing in the world—there won’t be any more flaws. I think that’s a naive worldview on where the security industry is going.
I think this will always be a challenging back-and-forth between attacker and defender.
Evan: Matt, thank you so much for joining us today. Really enjoyed your perspective, and I’m looking forward to doing that again soon.
Matt: Thanks for having me here. I really appreciate it.
Mike: That was Matt Posid, Chief Security Officer at KPMG US. I’m Mike Britton, the CISO of Abnormal AI.
Evan: And I’m Evan Reiser, the founder and CEO of Abnormal AI. Thanks for listening to Enterprise AI Defenders. Please be sure to subscribe so you never miss an episode. Learn more about how AI is transforming cybersecurity and enterprise software, and hear exclusive stories about technology innovations at scale.
Mike: The show is produced by Abnormal Studios. See you next time.

