
On the 38th episode of Enterprise AI Defenders, host Mike Britton talks with TJ Mann, Global Chief Information Security Officer at Lockton. TJ argues the fastest path to resilience is not chasing every shiny tool; it is treating identity, APIs, and SaaS configuration as the new frontline, because attackers “don’t need to breach your network anymore,” they need one compromised identity, integration, or misconfiguration. He also breaks down how AI shifts email and impersonation risk toward hyper-personalized social engineering, and why Lockton is investing heavily in employee awareness, muscle memory for reporting, and identity-first controls to keep fraud from becoming a business process.
Quick hits from TJ
On AI-driven impersonation: “We are seeing… hyper personalized social engineering… deepfake voice or face or audio or video or both.”
On what changed in cloud security: “Identity… is the new perimeter.”
On what attackers really need now: “The bad guys don’t need to breach your network anymore. They just need to compromise one identity, one integration or one misconfiguration.”
Recent Book Recommendation: Ikigai by Héctor García and Francesc Miralles
Evan Reiser: Hi there, and welcome to Enterprise AI Defenders, a show that highlights how enterprise security leaders are using innovative technologies to stop the most sophisticated cyber attacks. In each episode, Fortune 500 CISOs share how AI has changed the threat landscape, real-world examples of modern attacks, and the role AI will play in the future of cybersecurity. I’m Evan Reiser, the founder and CEO of Abnormal AI.
Mike Britton: And I’m Mike Britton, the CIO of Abnormal AI. Today on the show, we’re bringing you a conversation with TJ Mann, global chief information security officer at Lockton. Lockton is a global insurance brokerage that provides insurance, risk management, and employee benefits, with over 140 offices worldwide and over 13,000 employees.
There are three interesting things that stood out to me in the conversation with TJ. First, TJ explains why AI-driven social engineering is getting harder to spot. Deepfake voice and video paired with business context-aware phishing is pushing the problem to a whole new level. TJ’s response is to fight fire with fire. Lockton has doubled down on its security awareness training using real attacks hitting their own environment.
Second, TJ shares a clean, measurable ROI story from security operations. During a SIEM migration, his team used AI to identify which logs were actually being used versus those that were just collected. The result: the cost of that logging function was cut by over 30%.
And finally, in cloud environments, attackers don’t need to break in through the network. They just need one weak link: a compromised identity, a risky integration, or a single misconfiguration. From there, lateral movement is faster and easier to hide.
TJ, thanks so much for joining us. I’ve really been looking forward to this episode. To start, can you give our audience a brief overview of your career—how you got into cyber—and tell us about your current role at Lockton?
TJ Mann: Currently at Lockton, I oversee the global information security program, the global information security strategy, vision, and operations, and everything that comes with it.
My career has been around—if I had to sum it up in a sentence—turning security from a gate into a growth engine, so we can embed it in architecture, in culture, and in the metrics that we’re sharing with the business: the way we’re talking about security, and the way we’re designing security.
Mike: Maybe for those in our audience that may not fully understand the depth and breadth of Lockton’s operations—or what the footprint looks like—give us something that sets Lockton’s cyber program apart from maybe other insurance companies or other organizations.
TJ: I think what sets Lockton’s security program apart—first and foremost—is that we provide risk advice to the clients who come in and purchase cyber insurance policies through us. So there is a certain level of expectancy that if we’re giving advice, we’re doing some things right ourselves.
We are on a build journey. I think security programs are never fully built because the infrastructure, the landscape, the environment changes so rapidly. I think there’s always work every year where you look at it like, “Okay, crap—we need to start building in this area now.” So it’s ongoing work.
But I think we have a lot of good focus on identity. We understand that is one variable we can control within the equation—we have a little bit of an upper hand. So we’re focused on strengthening our identity areas.
I think another big one, from a risk perspective, is financial fraud. It continues to be a big risk because we have data from so many different partners that we work with. We work with carriers directly, so there is a lot of sensitive data, especially in the benefits space.
So how do we look at that and make sure we’re addressing financial fraud? We use advanced email protection systems—Abnormal being one of the industry leaders—so that’s an area we’re heavily invested in as well, knowing that employees are going to fall for malicious hooks. How do we make sure we have a little bit of certainty in terms of what is going to get through and what is not going to get through?
And I think everything else—having a SOC, looking at applications—that’s table stakes. I think AI is another big area. Like every company, we are investing; we’re embracing it. As a security leader, personally, I think security leaders need to embrace it. It’s here to stay. It’s not going to go anywhere, so we better figure out how to protect the company and secure its usage and enable safe usage of AI, rather than saying, “Hey, what are we doing?”
Mike: Where do you see AI attacks hitting Lockton—and the insurance brokers and insurance companies in general? And from your unique perspective of seeing your customers, where do you see AI attacks more broadly?
TJ: Some of the things that we are seeing—and some of my peers that I’ve talked to are seeing—is hyper-personalized social engineering. That could be either deepfake voice or face, or audio or video, or both.
Another area that we’ll start seeing more in the future is automated exploit chains—chained attacks. That takes quite a bit of manual effort today and is kind of part and parcel of your APT threat actors who have that kind of expertise and manpower in one room to be able to do the chaining in the right timing, in the right spot. But I think some of that automation is going to be deadly. It’s probably coming our way as an industry.
And one other area that’ll be really difficult—and it’s extremely difficult to investigate today—is insider threats. I think that’s going to become even more difficult to figure out what happened and do those investigations. Insider threats are going to get more complex.
But if you look at most organizations, with email as a threat vector, business email compromise is the most dangerous one. It’s more likely to get through, and it’s more hazardous. Some of the products on the market are doing well with other kinds of email-based threats. But if you put hyper-personalized social engineering on top of that, it becomes very easy to get past the controls.
Then you’re looking at deepfake voice, you’re looking at video, you’re looking at true business context-aware phishing emails—knowing what’s happening in the industry. “Hey, you had an event yesterday,” and using publicly available information to craft social engineering attacks at scale. I think that’s going to be huge.
Mike: What I would say from that point, too, is: with the rise of deepfakes and how good AI is getting—and, you know, this is as bad as it’s ever going to be; it’s only going to get better infinitely faster—has that made you rethink awareness around your employee base and how you teach and provide those awareness examples and training to your employees?
TJ: Yes. We doubled down on awareness. I think we’re doing almost double the awareness that we were doing last year. The primary reason is AI-based attacks and the business email compromise stuff, because those are hard to identify and detect.
If we can make it stick in people’s minds how to spot and report, I think half the job is done. People, most of the time, don’t even know: where do I send it, how do I report it?
So we’re giving these real examples that we’re getting hit with as part of the awareness, to tell people: this isn’t a story or a news article. This is here. This is where you work. It’s happening to us, just like it’s happening to everybody else out there.
So here are some of the things: how to spot, how to detect—look for the eyelashes, look for the hair not moving, or “Hey, you’re saying the same thing to different questions.” Spot those indicators—and how do you report it?
Mike: If you had to say, “Look, this is the one area you cannot skimp on. This is the one area you have to invest”—what would that be? Is there one area you’d say you absolutely have to nail this and get it right?
TJ: The threat landscape has changed. Everything’s moving to the cloud. The cloud didn’t just shift where our data is—it shifted who touches it, how they touch it, and how attackers can grab it.
There are a few areas. Identity is top of mind—that’s the new perimeter. So investing in identity areas from a threat detection perspective.
APIs—making sure you’ve got good API standards. How do you make sure you’ve got everything written down? Who has the keys, and how do you exchange them? Because everything’s API-exposed rather than server-exposed now.
So you’ve got all these vendors who are in your environment, and it’s not your infrastructure. How are you assessing them, making sure they’ve got the right things in place to protect your data and your customers’ data?
And then you’ve got all the SaaS software in your environment. Making sure you’ve got eyes on it—especially from a data leakage perspective—and all the AI capabilities that are part of it. Who is training? Who’s using your data? Is your data leaving? Is somebody training models on your data?
Combined, all of these are probably the bare minimum in the new threat landscape that we’re living in. Because the bad guys don’t need to breach your network anymore. They just need to compromise one identity, one integration, or one misconfiguration. That’s all they need.
And moving laterally is much faster in the cloud than it is on-prem—easy to conceal. So from an investment perspective: threat detection and response, and identity.
Mike: So when it comes to the defender side and what you’re building at Lockton for your program, what are some tangible results you’ve seen from AI technologies that maybe some who listen to the show might be surprised to hear about?
TJ: I think the best is yet to come, but there are areas where we’re already seeing benefits. Enormous impact—practical impact—I think detection correlation is one of those areas.
We talked about identity and APIs being the new perimeter, and attack vectors and misconfigurations—linking all these weak signals. Usually what’s getting through is low-and-slow attacks: using an IP address, trying to do reconnaissance—“Where’s the user from? Okay, I’m going to find an IP from the same city so I don’t flag impossible travel,” or “I’m going to stay within the same IP ranges.”
So the weak signals—as I call them—I think AI is already doing that. It’s connecting these weak signals across identity, network, endpoint, and cloud—making correlations that would either get past a human or be extremely difficult for a human to catch. That’s already happening, and it’s going to get better over time.
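The “weak signals” idea can be sketched in code. This is a toy illustration, not a description of Lockton’s tooling: the `Login` fields, thresholds, and weights are all invented for the example. The point is that signals which are benign on their own (plausible-but-odd travel speed, a new device) add up to a risky combination.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    user: str
    ts: float        # unix seconds
    lat: float
    lon: float
    new_device: bool

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def risk_score(prev: Login, cur: Login) -> float:
    """Combine weak signals into one score. Each signal alone looks benign;
    together they suggest an attacker staying 'low and slow'."""
    score = 0.0
    hours = max((cur.ts - prev.ts) / 3600, 1e-6)
    speed = haversine_km(prev.lat, prev.lon, cur.lat, cur.lon) / hours
    if speed > 900:       # faster than a commercial flight: impossible travel
        score += 0.6
    elif speed > 100:     # plausible but unusual movement: weak signal only
        score += 0.2
    if cur.new_device:    # unfamiliar device: another weak signal
        score += 0.3
    return min(score, 1.0)
```

A same-city login from a known device scores zero, while a cross-ocean hop inside an hour from a new device crosses an alerting threshold, even though an attacker who stayed “within the same IP ranges” would trip neither rule by itself.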
Hunting has gotten better. The capabilities to do a threat hunt earlier—which manually would take hours—now take minutes. You have enormous information and indicators and attack paths available through AI. Security Copilot does some of that.
And I think it’ll help defenders get better quickly. Writing new playbooks is easier. We just tell it, “Here’s a new attack vector I saw—write a playbook.” Building detection configurations will be faster.
But I think the best is yet to come. There’s generative AI, there are large language models and training, but true practical capabilities, I think, are yet to come in products and commodities for everyday things.
Mike: What are you doing at Lockton to stay aligned with your business?
TJ: It’s our job as security leaders—technologists—to understand and educate: “Okay, what do you want?” and figure out what the right solution is.
We’re fortunate that we have a kickstart on that: there’s a dedicated leader globally for the organization who’s responsible for AI—introducing AI, building AI, using AI, building internal products.
Internally, we have Copilot and other technologies—fully embracing AI for all our employees. So there is awareness. There are resources. There is an organization—a group of people. We have internal policies around it. We have to use it within the right guidelines, within the right guardrails—making sure it’s safe, it’s smart business practice, and we’re not putting ourselves into a privacy issue or regulatory issue.
Being a global business, a big chunk of our business is international. There’s the EU AI Act that applies to a lot of our offices. Latin America, Australia, Pacific, Asia—everybody’s got something. The U.S. is probably the most relaxed in that space.
From a privacy and AI perspective, if you look at insurance as an industry: it’s regulated at the state level in the U.S., not at the federal level. So we don’t have a lot of strict laws around that that bind private companies.
So working in an organization like Lockton, there’s a lot more regulatory scrutiny internationally than there is in the U.S.—besides, obviously, you have NYDFS, you have a SOC 2 kind of scrutiny that we go through. But states are getting better: CCPA, NYDFS—AI laws are coming at the state level, more privacy laws are coming at the state level. So we’re not going to be able to enjoy this for very long.
So the security program we’re building anticipates all of that—making sure we have a common denominator in terms of what we’re building.
There’s a cross-functional body with privacy and compliance folks. Anytime anybody wants to buy something that has AI in it, it goes through the process. We look at it and understand: do you really need it? We’re building something in-house that’s going to do it in the next three months—can you wait?
So we’re trying to do it consciously—trying to do it in a way that doesn’t stifle innovation—trying to do it safely and smartly.
Mike: Is there a particular cybersecurity initiative or capability within your team that you’ve deployed recently that you’re proud of—and maybe outsiders may not necessarily understand or appreciate?
TJ: Most recently, what comes to mind is: we’re switching our security information and event management (SIEM) tool. That’s the tool that gathers logs from all the various technology devices and correlates them.
As part of that switch, we were able to purchase a product that deduplicates those logs—gives us intelligence using AI to understand what we’re actually using, and what logs we’re not using. So we were able to identify a huge percentage of logs that we were not using, but were collecting and paying for—all that ingestion and collection.
So we were able to cut down the cost of that function by over 30%, which is huge. I’m extremely proud of the team for that effort. That’s tangible benefits to the organization—making us more efficient, smarter, in how we protect the organization.
Mike: And as you look at how things are evolving and what’s ahead, is there anything you’re excited about with AI and cyber as the capabilities continue to expand?
TJ: The true capabilities of AI—commodities or products on the shelf—I think people are still building. It’s going to come.
But I think it would be really cool if, as part of developments in AI, we have dynamic scores for identities. Some of the things that are going to happen in the not-so-distant future: being able to understand, as an identity interacts with so many different things—some service is pinging it, somebody’s trying to log in, there’s a DNS request—looking at what is changing the trust score of the identity. What is the dynamic trust score for the identity?
Self-protecting data—data bits that are able to understand and know how they’re being used, what’s happening, and maybe self-protection with each data bit. Some of these things sound like science fiction, but I think they’re going to be here sooner than later. We’ll see that in our lifetimes.
My hope is that the internet fabric itself, with the use of AI, can learn, and we’re able to fix our mistake: when we built the internet, we built it completely insecure. So the hope is that AI can secure it in some ways.
Mike: With the use of AI on the rise by attackers and the use of AI on the rise by defenders, what do you think it’s going to do to the cyber insurance marketplace?
TJ: I think that’s a great question. I’m not an insurance analyst, but I can take a stab based on what I’ve seen going through the ransomware threat and how the market responded.
I think there will be a time where cyber insurance will be expensive because the bad guys will figure it out first and be successful in AI-based attacks. Some of that is happening now—you’ve got threat actors like Scattered Spider who are extremely successful in their operations. You’ve got others who are going out, and social engineering is working.
And with AI, as we talked about earlier, it’s going to get more convincing. I think that’s going to continue. So there will be more AI-based attacks, and there will be more successful AI-based attacks across the globe. I think that’s going to jump the policy prices—that’s my guess.
And I think it’ll take a couple of years until the market catches up with products that are strong enough and could be deployed enterprise-wide or globally in organizations and big businesses that will be able to counter those attacks.
So I think it’ll peak and then come back down—just like ransomware and the insurance market when it first started. If you look at maybe 10–15 years ago, the policy prices and the scrutiny were so high. You’d try to get a policy and they’d ask like 100 questions, and you didn’t know what those meant or how they related to the policy.
But there’s more expertise on the insurance and business side. There are more cyber people who understand cybersecurity, so that has changed.
Now there are good products. So if you show that you’ve got the right investments, your policy premiums are probably not very high—versus early on in the age of ransomware, we didn’t even have products that were capturing and catching all that stuff.
So it’s the correlation between how quickly we throw good products in the market, so the insurance industry sees: “Okay, you can cover your risk—the policy premiums will come down.” The first few years probably will be high.
Mike: All right, TJ—we have about five to ten minutes left. You’ve given us all sorts of great insights throughout this episode, but we like to do this small lightning round at the end of every episode. Think of it like the one-tweet version.
What single piece of advice would you give to a security leader who’s stepping into their very first CISO job—maybe something they might overestimate or underestimate about the role?
TJ: Don’t take it personally. It’s just a job.
Mike: Where’s the number one place you go to stay up to date with all of the news related to cyber and technology?
TJ: I’ve got an app which collects security news from all the different sources, and that’s what I use. That’s what I look at first thing in the morning after I wake up—what happened overnight?
Mike: On a personal note, what’s a book that you’ve read that’s had a big impact on you, and why? It doesn’t necessarily have to be cyber or work-related either.
TJ: There’s a book called Ikigai. Ikigai is a Japanese word, and it talks about the ways of living life and how to live a life with which you’re content. It talks about the habits of Japanese people, and ikigai as in the purpose of life. It talks about what different things people in Japan do to stay prosperous and live a purposeful life—like being out, being social.
And I think there are principles around embracing imperfection. If a cup is broken, you look at it as: the imperfection is beautiful, versus trying to see it as a wrong thing. But I think that’s a really good book to find—how do you stay content in life?
Mike: Excellent. We’ll have to add that to our list—our ever-growing list.
I’m going to throw a random one out here, just because you mentioned The Terminator earlier as kind of your inspiration for getting into cyber. I’ve often said where we’re going with AI could either end like The Terminator or the Disney movie WALL-E. In your opinion, where do you think it ends—towards one of those or somewhere in between? Where do you think this all ends up?
TJ: I think it’s going to end up like some of the other movies that are being made—that robots are going to live alongside humans. We’re going to have relationships with machines in the future. I think people are going to keep them as, “Hey, this is my support person,” for somebody who may need it.
I think the future is coexistence. I do think that at some point machines are going to become smarter than humans at the pace that they’re learning. But I think the future is coexistence. I think there will be machines among us in our lifetimes—we’ll see that.
Mike: All right, final question. What do you think is going to be true about the future of AI and cyber that most people would consider science fiction today?
TJ: I think some of the things that I talked about earlier are probably science fiction—that the internet will become secure. If you tell people today, they’ll laugh, because inherently it is insecure. But I think it’s possible with the right AI use and training it in the right way to look for the right things.
I think security becomes embedded at the protocol and fabric level. It’s probably science fiction, but it’s probably doable—and will probably happen in our lifetime.
Continuous cyber resilience is probably science fiction, but it’s probably possible. And with a lot of products yet to hit the market with AI capabilities, I think that’s going to come in the near future.
And I also mentioned earlier self-protecting controls on each data piece. I think those will happen. They’re not here yet, but it’ll be fun and easy when that happens.
Mike: All right. Well, TJ, thanks so much for being our guest today on the show. I enjoyed the conversation, and I think there are a lot of great insights our audience can get out of this. Appreciate you taking the time out of your busy schedule to join us on the podcast.
TJ: Sure. Thanks for having me.
Mike: That was TJ Mann, global chief information security officer at Lockton. I’m Mike Britton, the CIO of Abnormal AI.
Evan: And I’m Evan Reiser, the founder and CEO of Abnormal AI. Thanks for listening to Enterprise AI Defenders. Please be sure to subscribe so you never miss an episode. Learn more about how AI is transforming cybersecurity at enterprisesoftware.blog. This show is produced by Abnormal Studios. See you next time.

