On the 25th episode of Enterprise AI Defenders, hosts Evan Reiser and Mike Britton, both executives at Abnormal Security, talk with Joshua Brown, former Vice President and Chief Information Security Officer at H&R Block. H&R Block is one of the largest tax preparation companies in the United States, with tens of millions of customers relying on its services each year. Managing security for a global tax enterprise requires defending against large-scale fraud, identity theft, and AI-powered social engineering attacks—all while ensuring compliance with strict regulatory requirements. In this conversation, Joshua discusses how AI is accelerating cyber attacks, the challenges of using AI for fraud detection in financial services, and the impact of automation on the next generation of cybersecurity teams.
Quick hits from Joshua:
On the state of fraud in financial services and how AI can help: “If you're talking about a normal year, you might see a thousand potentially fraudulent returns, and then suddenly it jumps up to a million or more. You don’t have enough analysts to look through that. It’s not possible. You have to do something with machine learning or AI to be able to narrow that down and help make faster decisions.”
On balancing the need for efficiency and the need for future talent in cybersecurity: “I think businesses are so hungry for efficiency that they risk gutting their talent pipelines. If we’re not careful, we’re going to end up with a senior workforce and no way to develop new security talent.”
On leadership strategy in security: “How you motivate a team is by connecting them with the why of what they’re doing and letting them figure out the how. That’s why you hire people smarter than you, right? It’s not so that everybody does things the way you do it.”
Book Recommendation: Right Kind of Wrong by Amy Edmondson
Evan: Hi there and welcome to Enterprise AI Defenders, a show that highlights how enterprise security leaders are using innovative technologies to stop the most sophisticated cyber attacks. In each episode, Fortune 500 CISOs share how AI has changed the threat landscape, real-world examples of modern attacks, and the role AI can play in the future of cybersecurity.
I'm Evan Reiser, the CEO and founder of Abnormal Security.
Mike: And I’m Mike Britton, the CIO & CISO of Abnormal Security.
Today on the show, we're bringing you a conversation with Joshua Brown, former Vice President and Global Chief Information Security Officer at H&R Block.
H&R Block is one of the largest tax preparation companies in the US, with tens of millions of customers relying on its services each year. Managing security for a global enterprise requires defending against large-scale fraud, identity theft, and AI-powered social engineering attacks—all while ensuring compliance with strict regulatory requirements.
In this conversation, Joshua discusses how AI accelerates cyber attacks, the challenges of using AI for fraud detection, and the impact of automation on the next generation of cybersecurity teams.
Evan: Josh, thank you so much for joining us. Do you mind sharing a little bit about kind of the role you're in today and maybe how you got there in your career?
Joshua: Yeah. So I actually just finished a three year, nine month stint as the global CISO for H&R Block.
I was brought in by the previous CISO as director of security engineering to help reboot the program. When he left after two and a half years, I took the seat. I'm really, really proud of what I accomplished during my time there. We went from a small team with mostly outsourced services to a globally distributed team about seven times the size of what we started with, with all the services being provided in-house. Just a tremendous amount of talent, and a tremendous amount of pride and ownership in what that team became.
Mike: You know, there's been so much going on over the last couple of years, with rapid developments and such an explosion of adoption of different technologies in the enterprise.
How do you see the threat landscape expanding, and how do you think it's changed with generative AI at the forefront and everybody being in this cloud-first world today?
Joshua: Yeah, it's a great question, and my answer is colored by my past and my experiences. I came up when script kiddies were a big thing. For anybody listening who doesn't know what a script kiddie is, those are the bad guys who would take work that other people had put time into. They didn't necessarily understand the tool, and they didn't have the capability to write the tool themselves, but they could use it. And you got a proliferation of moderately sophisticated attacks from unsophisticated attackers.
I think the same thing is happening with AI right now. It's enabling people to make a leap forward in what they're able to do as an attacker. It makes it easier, and it makes it faster. Those are the two things, and the time aspect in particular is what really concerns me.
What we were seeing already, just in the last year, was a proliferation of much more sophisticated phishing attacks, smishing attacks, all of the -ishings. A lot of them, you could tell, were AI-generated, and attackers can go from registering a domain to sending out that first attack in a matter of minutes. A lot of these things have been scripted for a long time; that's not new. What's new is how quickly AI can find everything about the person I want to target on the web and come up with five email topics that are going to have a high rate of success. The attacker didn't have to do any of that hard work. That whole recon-and-execution aspect can be largely automated through AI.
Evan: What you're saying is that the time to pull off an attack is going down. And due to the growing complexity of enterprise environments, the mean time to respond is going up. Maybe we're already there, but at some point those lines cross and we get into a pretty dangerous state, because human brains have some latency to them, right?
So at what point does the paradigm of how we do things become unsustainable?
Joshua: Right? Yeah, I think it's a great point, and it's extremely worrisome. I picked out phishing because I think most of the attacks we were seeing were against people, because people are hard to patch.
And if you look at what we're already seeing now, there's been some press in just the last couple of weeks about zero-day vulnerabilities discovered by AI that fuzzing tools hadn't picked up, things like that. We used to think that if you could get a critical patched in seven days, you were doing really well.
Well, that's probably not going to be good enough. That's the problem: as these things are detected, the attack is going to be driven by AI, and it's going to be a much quicker turnaround. So yeah, I think the timelines are definitely shrinking.
And so what I'm waiting to see, and I know a lot of people are talking about it as the marketing spin machine goes into overdrive ("our tool is a hundred percent more AI than it was yesterday"), is real substance. To be fair, I haven't been diving deeply into this, just looking at tools, but I haven't seen anything yet that's just blown me away, like, wow, that's an amazing use of AI. I think as platforms like SIEM and XDR platforms start integrating it more into their tool sets, you're going to, in theory, improve fidelity and improve the rate at which you're able to analyze the potential incidents that come up, and maybe categorize things so you can put the more serious ones in front of your big-brain people to look at. But it's a pretty significant problem.
I'll give you another example. There's been such a rise in identity theft across the U.S., and globally of course, but our taxpayers are in the U.S. All of the information you would need to file with the IRS is out there: Social Security numbers, former places you lived, all of that kind of stuff. All of those knowledge-based questions that those platforms used to validate that you are who you say you are can be easily gamed. And this has exploded, really since COVID.
And so if you're talking about a normal year, you might see a thousand potentially fraudulent returns, and then it jumps up to, let's say, a million, a million and a half, two million. I don't care how big your team is, you don't have enough analysts to look through that. It's not possible. So you have to do something with machine learning or AI to be able to narrow that down and help you make faster decisions.
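An editor's sketch of the kind of triage Joshua describes: using an unsupervised anomaly score to surface the riskiest returns for human review. The features, the isolation-forest choice, and the thresholds are illustrative assumptions, not H&R Block's actual model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy features per return: [refund_amount, days_into_season,
# dependents_changed, new_bank_account_flag] -- purely illustrative.
legit = rng.normal(loc=[3000, 30, 0, 0], scale=[1500, 15, 0.5, 0.1], size=(5000, 4))
suspect = rng.normal(loc=[9000, 2, 2, 1], scale=[2000, 1, 0.5, 0.1], size=(50, 4))
returns = np.vstack([legit, suspect])

# Fit an unsupervised model; no labeled fraud data is required.
model = IsolationForest(contamination=0.01, random_state=0).fit(returns)
scores = model.score_samples(returns)  # lower score = more anomalous

# Route only the most anomalous 1% to human analysts instead of all 5,050.
top_n = int(len(returns) * 0.01)
review_queue = np.argsort(scores)[:top_n]
print(f"{top_n} of {len(returns)} returns routed to analysts")
```

The point is the funnel, not the model: whatever scores the returns, analysts only ever see the short queue at the end.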
Then the big problem is that fraud schemes change so substantially year to year. We did this: we trained a machine learning model on one year of data and got really good fidelity. Then we fed it the next year's data, and it was garbage. We expect people to be able to adapt and figure those things out, right? From what I've seen there's some promising stuff, but it's not there yet. Like I said, this is another area of asymmetry that defenders are going to have to deal with for the time being.
Evan: You can imagine a world, I don't know if it's one year out or ten years out, where some magical AI can go through and make sure all of our code is perfect and everything is perfectly patched. It's all amazing, but people are still fallible.
And people are the interface into these enterprise systems. You can have perfect authentication, but if one person clicks the wrong thing, you're bypassing the technical controls. I'm a little bit newer to cybersecurity, but historically it seems like we've been very focused on infrastructure.
I'm not trying to lead the witness here, but to what extent do you think we're barking up the wrong tree? We're spending a lot of time talking about infrastructure and not a lot of time about people. Five years out, how does that mix change? Are we even working on the right things in cybersecurity?
Joshua: Well, I vacillate, depending on my mood, between feeling optimistic and cynical about the value of end-user security awareness training. I have to remind myself that people aren't patchable, but they are teachable. Still, if the success of your security program depends on end users making the right security choice 100 percent of the time, you maybe need to get a different job. That is a very hopeful strategy, and it is probably not going to be very successful.
I have seen some AI being used by some of the security awareness, phish testing, and training programs. So if somebody fails a phish test a couple of times, the AI can give them more tailored training rather than the generic test that we send out to everybody each month. I think there's probably some value there.
But really you have to focus on the other side. We had set this North Star of wanting to make passwords not matter anymore, because people are going to give up their credentials. We see it constantly, across all industries. And that flies in the face of, "well, we got our click-prone rate down to 5 percent." It still just takes one person. So what you have to look at is: okay, I want to make it so that your credentials don't matter anymore. That's a combination of a whole bunch of other buzzwords, like zero trust or SASE or SSE or any of those things, where you should only be able to get to the things you're authorized to get to, when you need to get to them, and you shouldn't be able to get from one thing you have access to to another thing you have access to. Those should be discrete connections. Really, from a risk-mitigation point of view, we're just trying to get the blast radius as small as possible. We know that small accidents and incidents are going to happen; we want them to not matter to the business.
Evan: Today, state-of-the-art phishing training is that you probably have someone in the organization whose job is to think about which people should get which types of simulations or training, whatever is most relevant for them.
It's not hard to imagine a future where some of that work can be done with AI. If AI knows people's jobs, then in a hundred-thousand-person organization it's presumably going to know more about what's relevant for those individual people than the world's best training analyst. Obviously there will still be a role for people on the team, but it seems like the nature of security organizations, or even the org chart, is going to change in the era of AI.
There are some roles that get added, and some roles that get split. Everyone probably has some responsibility from their job description being crossed off, and now it's an AI.
How does the organization of security teams change a couple of years down the road, as AI changes the architecture, the threat landscape, and the tools that teams are using?
Joshua: I was just having this conversation with a developer friend of mine the other day. In the org he's in, they're really pushing these copilot tools for the developers, and the goal from the business perspective is a 20 to 30 percent productivity increase out of the development teams. Great.
Now, the cynic in me says the business is going to recoup that by reducing the size of their development teams by 20 to 30 percent, rather than saying, look at how much new, cool stuff we can get out there, look at how much tech debt we can burn down, all of those sorts of things. But as far as the structure of the team goes, this was the thrust of the conversation we had.
Really experienced developers, seniors and principals, are going to use those tools to basically remove all of the rote stuff from their jobs. It's going to become a force multiplier for the senior people. For the junior people, and I'm talking really, really junior, associate-level folks, you're at risk of potentially removing some of those people from the workforce.
What concerns me about that has nothing to do with the accuracy or capability of the AI tools. It's: okay, how do we build senior talent? What happens to those talent pipelines?
It reminds me of the movie Children of Men, where all of a sudden women stop being able to have babies globally, just instantly, and they don't know why. You see the impact of that cycle through the entire economy: no more pediatricians, then no more nursery schools, then no more kindergartens, no more elementary schools, and then no more colleges, because nobody's coming through to go through those things. I worry about that kind of an impact.
I think businesses are so hungry for efficiency that I would just warn: don't trade off long-term gains for short-term gains. You have to have a long-term strategy for this stuff. In my programs, young-in-career talent, not necessarily young in age, was absolutely critical to building a really comprehensive, well-knit, living organism of a program.
And so we leaned way into moms re-entering the workforce, second-career people, ex-military, ex-teachers. One of my favorite examples was a guy who was an undercover KCK narcotics cop for 14 years. He came in and ended up leading our forensics division because he had the right mindset.
What I would hope happens is the same hope I had for SOAR technologies: we want to let the big-brain people use their big brains on big-brain problems, so you want to automate all the rote stuff away. AI is going to help and accelerate that, but we also have to look at the other side: okay, what do we give to interns? What do we give to our new people? You need those people to be able to grow and learn, and we already have a talent shortage in the industry. I think AI can help, but it could also really hurt long term in terms of what we're doing industry-wide.
Evan: That's actually a really good point that I don't think anyone else has brought up on the show. There are some obvious wins in increasing productivity and removing some of that work, but then how do you get the next generation of senior people?
Right. It's not super obvious.
Joshua: Yeah, it's a real problem. And if we expect our colleges and universities to be pre-professional programs and to provide whatever training is needed, that is not happening now, and I wouldn't anticipate it happening in the future.
And I worry less about bigger organizations, which have a team that can absorb some of these changes, and much more about small and mid-sized businesses, which are already underserved from a security perspective.
Mike: Do you have any examples or tangible things that you've seen from AI that were maybe a little bit surprising? I think we've touched on some of the areas where AI is potentially able to solve problems, but where has it surprised you?
Joshua: Well, I wouldn't say I'm surprised, but I am heartened to see what some of the major SIEM platforms, at least the next-gen SIEM platforms, are starting to do. I was looking at one the other day that was claiming it automatically created detections mapped to 90 percent of the MITRE ATT&CK framework, or something like that.
For most teams doing that work manually, it is hard work to figure those things out and get those detections built. So I thought that was pretty cool. If you can use AI to improve the fidelity of the tools you already have, that's a force multiplier, just like automation was supposed to be. That's pretty heartening.
I've been disappointed at how some places claim they're using AI or ML and you don't really see a difference in the effectiveness of the product; it's all happening behind the scenes. I don't want to call out any specific products, but I was really impressed by one not too long ago, a SaaS security posture management type of tool. What it's doing is mining the graph databases these platforms keep. Microsoft has a graph database in the background, and I know SailPoint is transitioning to graph; some of the big players in the space are switching to graph databases on the back end. You can mine that data and collect all of the really weak signals, and if you have the right kind of learning model there, it can really help you sift through that data, much the way a SIEM would. Anyway, these kinds of point solutions are starting to show real promise in that space. And again, it's not just driving down detection time, it's driving up detection accuracy. The last thing you want is your SOC swarmed with constant alerts that are of low value.
Mike: You mentioned SIEM, and I feel like that's one of those areas that's just long in the tooth and ripe for disruption. Do you feel like there are other domains or technology families where disruption is long overdue, and AI could be that disruptor?
Joshua: Well, yeah. For me, it's the tools that had great promise where the promise wasn't really actualized. UEBA is a great example of that, and lots of different platforms build UEBA into them. If you're a Microsoft shop and you've got E5 licensing, you can do stuff like impossible travel; those are the most basic examples.
So I'll tell you about a weird attack pattern I was discussing with some peers not too long ago. It was one of those Rumsfeldian unknown-unknowns type of things; we hadn't really thought about it, because we're so focused on the devices we give people, through which they access resources of value.
Whether that's a tax pro who's doing taxes, or a sales guy who's combing through his CRM tool to find the next prospect. We started seeing people get targeted on their personal devices, and these are not BYOD devices. These are literally, I'm at grandma's house, and now I'm getting targeted on grandma's computer.
Part of this, I think, has to do with how our digital identities follow us wherever we are. It was really a puzzler for us. We would hope that a user would know: this is my personal device, I do zero work on it, and now it's prompting me for my work credentials, so I probably shouldn't enter those. But as we said, we know people are going to do that.
So then, from our side, the question was: how do we detect that? It was a bit of a head-scratcher moment, followed by a lot of navel-gazing, like, I feel kind of sick to my stomach now and maybe need to go sit somewhere dark for a while and just rock. But some of these tools are now looking at those back-end graph databases around identity and saying: hey, the person gave the right username and password, and they did complete multi-factor authentication, but this is a device we've never seen before. And then you see the behavior of that account immediately after the bad guys have the credentials; all this bad stuff happens in a flash, because it's all scripted. AI recognizing that, collating it, and raising the red flag is going to be way faster than your analyst poring over that data trying to figure out what's going on.
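A rough sketch of the detection logic described here: credentials and MFA both succeed, but the device is unseen and the session immediately fires a scripted burst of actions. The event fields, action names, device inventory, and thresholds are all illustrative assumptions, not any vendor's actual rules.

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    user: str
    device_id: str
    mfa_passed: bool
    actions: list[tuple[float, str]] = field(default_factory=list)  # (secs after login, action)

# Devices previously seen for each user (hypothetical inventory).
known_devices: dict[str, set[str]] = {"jdoe": {"corp-laptop-123"}}

def risk_flags(s: Session, burst_window: float = 60.0, burst_count: int = 5) -> list[str]:
    """Flag sessions that pass MFA but still look like a scripted takeover."""
    flags = []
    if s.device_id not in known_devices.get(s.user, set()):
        flags.append("unseen_device")
    # Scripted attacks do many things within seconds; humans rarely do.
    if len([t for t, _ in s.actions if t <= burst_window]) >= burst_count:
        flags.append("post_login_burst")
    return flags

# "Grandma's computer" scenario: valid credentials, MFA completed,
# then an immediate scripted burst of persistence and exfiltration steps.
attack = Session(
    user="jdoe", device_id="grandmas-pc", mfa_passed=True,
    actions=[(1, "add_mfa_device"), (3, "create_inbox_rule"),
             (5, "mass_mail"), (8, "download_contacts"), (9, "oauth_grant")],
)
print(risk_flags(attack))
```

The key idea is that authentication success is not the end of the decision: device history and post-login behavior still carry signal.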
So force multiplier, time reducer, efficacy increaser: those are all powerful things. But you're right, SIEM is ripe for disruption, and I think there are a lot of other tools, identity platforms probably, where we're going to see lots more AI.
It's also fascinating to me how some of these players are now expanding their scope to try to become platform plays, to compete with the Microsofts of the world. CrowdStrike started off as EDR, a really good EDR, and now they've got all of these adjacent services and they're trying to take over that whole space. Good for them, right? If you can do it well, it's a great play to make.
Evan: So Josh, for maybe the last five minutes or so, we like to do a bit of a lightning round: quick-hit, one-tweet answers to questions that are impossibly hard to answer in one tweet.
So Mike, why don't you kick us off? We've got like four or so.
Mike: Sure. What advice would you give to a security leader who's stepping into their very first CISO job about maybe something they might overestimate or underestimate about the role?
Joshua: Spend more time listening, especially your first 90 days. A lot of people want to come in, they want to make a difference, they want to show they know what they're doing, they want to make an impact.
If you're coming into an established program or an established company, you need to take the time to learn from the people who are there and have been in the trenches, instead of dictating.
The other piece I would say is that people want to be creative, and they want to be producers, not ticket-takers, largely. How you motivate a team is by connecting them with the why of what they're doing and letting them figure out the how. That's why you hire people smarter than you, right? It's not so that everybody does things the way you do. So those are the two bits of advice I would give. I know you only asked for one, but you got a bonus.
Evan: Josh, one of the reasons we were excited to have you on the show is that you're pretty forward-thinking when it comes to where cybersecurity is going. We just spent the last 40 minutes talking about that.
If there's a CISO out there who feels like they're on the lagging edge of that kind of knowledge rather than the leading edge, what would be your advice on how to stay up to date and start thinking a little more about what's important going forward?
Joshua: Yeah, it's, you know, Evan, it's a great question.
It's tough, right? Because, as you noted early on, the CISO role is pretty demanding: high stress, high pressure, and there's always stuff to learn. So what I tried to do, and I wasn't always successful, was carve out an hour each day just for random research.
There's a ton of stuff; you can go down the rabbit hole in a variety of places, Reddit, or just Googling things, and there's tons of content on YouTube. Go to conferences, network, meet people, and talk to smart people. And don't just go to security conferences; look for CIO/CISO conferences. Those are pretty good, because what you want to do is expose yourself not just to what your company is thinking and doing, but to what the industry is thinking and doing.
Especially in the last few months, I didn't do a great job of that because we were so focused on getting ready for tax season. But it's important in this role to be a lifetime learner; you have to be digesting as much information as you can, all the time.
Mike: So speaking of that, on the more personal side, what's a book that you've read that's had a big impact on you and why? And it doesn't have to be work related.
Joshua: The last businessy book I read was called Right Kind of Wrong. I forget the author's name, but she's brilliant. It really reinforced for me that for any security department, really any IT department, maybe broader than that, to thrive, you have to enable people to take reasonable risks, and you have to motivate them to want to take reasonable risks. I think Brené Brown said that growth comes at the point of struggle, right?
We don't learn by winning; we learn by losing. Too many places will say, oh, it's a blameless culture, but they don't celebrate the loss in terms of driving lessons from it, and I think that's really important. We used to say we put the guardrails up in the background so that it's a safe space to try dangerous things. I think that is a fundamental truth of trying to engender creativity and risk-taking.
Evan: What advice would you share to inspire the next generation of security leaders?
Joshua: It's getting people to see the value of what security practitioners bring to the table, regardless of the industry: private, public, education, it doesn't matter. You have to get people to care about the outcome; otherwise, you're just getting a paycheck. I've certainly worked with people in the past who just want to work a 9-to-5, get paid, and go home. And I get that; our lives are complicated enough, and we're more than just what we do. But the people who become really successful in cyber really eat, live, and breathe it. If you don't have a passion for it, it's probably not the right fit, because the burnout rate is high, the hours can be long, and you're not always everybody's favorite person, especially in a corporate environment. It's too easy for a department to become the place where good ideas go to die. You really have to think differently about it, and you have to be passionate about the outcomes you're trying to drive.
Evan: Josh, really appreciate you making time. Great catching up with you and looking forward to chatting more soon.
Joshua: Hey, it's been my pleasure guys. Thanks so much.
Mike: That was Joshua Brown, former Vice President and Global Chief Information Security Officer at H&R Block. I'm Mike Britton, the CIO & CISO of Abnormal Security.
Evan: And I'm Evan Reiser, the founder and CEO of Abnormal Security. Thanks for listening to Enterprise AI Defenders. Please be sure to subscribe, so you never miss an episode. Learn more about how AI is transforming the enterprise from top executives at enterprisesoftware.blog, and hear their exclusive stories about technology innovations at scale.
This show is produced by Josh Meer. See you next time.