On the 27th episode of Enterprise AI Defenders, hosts Evan Reiser and Mike Britton, both executives at Abnormal AI, talk with Gareth Packham, Chief Information Security Officer at Save the Children International. Save the Children is one of the world's largest nonprofit organizations focused on protecting the rights and well-being of children. Operating in over 100 countries, it delivers healthcare, education, and emergency response programs—often in high-risk, conflict-affected areas. In this conversation, Gareth shares insights on the life-or-death stakes of cybersecurity in humanitarian work, the rising danger of AI-powered impersonation and fraud, and why driving behavioral change—not just awareness—is the next frontier in protecting global organizations.
Quick hits from Gareth:
On the real-world consequences of cybersecurity failures at Save the Children: “Without sounding glib or flippant—it really isn't. It can be a matter of life and death. We have information on children and families… in the wrong hands, that could put them at risk of physical harm.”
On the threat of AI-generated impersonation: “A few years ago, we were seeing business email compromise attempts asking to approve invoices. Now, it’s shifted to things like deepfake video. When someone says, ‘Let’s jump on a call,’ and you see a video of someone that looks and sounds like your CEO, you really need to challenge that.”
On the limits of awareness training: “The challenge with a lot of awareness programs is that they’re static. People might remember the right answer on a quiz, but it doesn't mean they’ll act the right way under pressure. We need to stop checking boxes and start measuring actual behavior change.”
Book Recommendation: Flow by Mihaly Csikszentmihalyi
Evan: Hi there and welcome to Enterprise AI Defenders, a show that highlights how enterprise security leaders are using innovative technologies to stop the most sophisticated cyber attacks. In each episode, Fortune 500 CISOs share how AI has changed the threat landscape, real-world examples of modern attacks, and the role AI can play in the future of cybersecurity. I'm Evan Reiser, the CEO and founder of Abnormal AI.
Mike: And I'm Mike Britton, the CIO & CISO of Abnormal AI.
Today on the show, we're bringing you a conversation with Gareth Packham, CISO of Save the Children International. Save the Children International is one of the world's largest nonprofit organizations focused on protecting the rights and well-being of children. Operating in over 100 countries, it delivers programs in healthcare, education, and emergency response, often in high-risk, conflict-affected areas.
In this conversation, Gareth shares insights on the life-or-death stakes of cybersecurity in humanitarian work, the rising danger of AI-fueled impersonation and fraud, and why driving behavioral change with the latest AI tooling is the next frontier in protecting global organizations.
Evan: Well, Gareth, thanks so much for making time. It's good to see you. Maybe before we dive in, do you mind sharing a little bit about your background and how you got to where you are today?
Gareth: Yeah, sure. I mean, to be honest, it was no surprise I ended up in an IT or information security role. I was always interested in computers when I was young, and in fledgling information security and things like that.
I started in consulting and data analysis, then moved to a more generalist IT role and became an IT manager and then an IT director. But it was really the information security area that I found the most interesting.
It was at a time when cybersecurity attacks were beginning to get into the media. I could see that it was going to be a growth area: the more we depend on computers, the more people are going to try to attack and disrupt that. So I moved into an information security role. And prior to my current role, I was the chief information security officer at a university.
In the UK. I was there five or six years. Really enjoyed that, particularly trying to make fast-paced transformations in an institution like a university that isn't really used to fast-paced change. So that was challenging at times, but I really enjoyed it.
And then about four years ago, I moved to Save the Children International. So I'm Director of Information Security and Data Protection and Global CISO.
Evan: Do you mind sharing a little bit about Save the Children, what the mission is, and why cybersecurity matters so much?
Gareth: So, Save the Children is one of the world's largest not for profit organizations and certainly the world's largest not for profit organization dedicated to children and children's rights.
We deliver international programming, and when I talk about international programming, it's healthcare for children, it's education, it's children's rights, as well as what we're probably best known for in the media, which is emergency disaster response: coming in after earthquakes, flooding, civil conflict, or war, particularly in places like Gaza and Ukraine.
Evan: You guys have a very worthy mission, right? And cybersecurity matters; it has a real-world impact for the organization. Can we help bring that to light? What happens if things go wrong? It has a real-world impact on people.
Gareth: We have information on children and families that we work with. They may be internally displaced. They may be refugees. Some of our staff are quite high-risk individuals; they're operating in countries that may not fully agree with the work we're doing. In the wrong hands, information on their location, information on the children and families we work with, could put them at risk of physical harm.
I mean, when I've done risk assessments in the past, risk of harm to persons generally isn't something you look at in an information security assessment. But that's first and foremost the risk that we're most concerned about stopping and reducing in the organization.
Mike: Obviously, Save the Children is a very unique organization, and while there are probably aspects of your security program that are very similar to other types of organizations, there are probably some very unique, different things that your security team and your security program do.
Are there any particular use cases that the average listener might not fully appreciate?
Gareth: Yeah, I suppose one thing that's quite different about our organization: we've got around 16,000-17,000 staff. Now, if we were in the insurance sector, probably 15,000 out of those 16,000 staff would be sat at a desk all day in a nice air-conditioned office, whether that's London, Boston, wherever.
Whereas the vast majority of our employees are working out in the field. They're working in refugee camps. They're working in schools or medical centers. Sometimes they are literally working in tents.
So we're having to protect individuals who are working outside of a typical office environment, in places where technology probably hasn't progressed as much as in other areas. We do still have legitimate use cases for people using USB drives. I mean, in your personal life, unless you were maybe building a bootable OS or something like that, when was the last time you really used a USB drive?
We operate in parts of the world where there's no internet connectivity, where we work with partner organizations and the only way of sharing information is on a USB drive that has to be carried from one individual to another. I can't just lock down every USB port in the organization like any large organization would do now, because that has real-world implications.
It's always a balance between complete security and allowing people to do their jobs, and depending on what sector you're in, that's a sliding scale. But we have to really think about it: if we accidentally stop someone working, one, that's going to impact the children and families we work with, but two, it could also impact their own physical safety.
Mike: I'm sure Save the Children is no different than a lot of other organizations where you're looking to innovate, looking to take advantage of things like SaaS and AI, while attackers are also looking to do the same thing.
How do you feel the threat landscape is evolving as these types of technologies come more to the forefront? I'm sure cybercriminals want to attack Save the Children for various reasons, and that keeps you trying to stay apace with them.
Gareth: Yeah, it is a challenge. I mean, obviously we operate across a big geographical area, but also quite a big sociopolitical landscape as well, so we see a whole wide range of threats. And I think AI is a force for good. But as you both know, threat actors are beginning to use it more and more, particularly putting together better business email compromise campaigns and better phishing emails.
It's quite difficult for me and my team, when we see a phishing email, to know whether it came from AI, but we're beginning to suspect that more and more of them do. And the challenge for us as a global organization is that for a lot of our workforce, English isn't their first language. With these tools, an attack looks like a professionally written email, which is quite difficult for a non-native English speaker to spot. It looks like an email that could have come from someone in finance, or someone in HR.
So it's definitely presenting challenges. And I think that's why, within our security team and the tools and services we use, we are now beginning to leverage more AI and machine learning to keep up with both the sophistication of attacks and the volume of those attacks. And I think that kind of arms race is only going to increase.
Evan: Gareth, what do you think are some of the biggest cybersecurity issues arising as AI becomes more prevalent that you don't think people are talking enough about?
Gareth: I think one area we're really concerned about is the rise of deepfakes, particularly deepfake audio and deepfake video impersonating individuals to approve false invoices and things like that. A few years ago we started to see a spike in business email compromise: can you approve this invoice? And you can do your checks, and we've got some really fantastic tools that help us say, this is unusual, this isn't the address this usually comes from. And obviously you've got your training and awareness program for that vigilance.
But when someone says, let's jump on a call, and you see a video of someone that looks like your CEO or your CFO, and they're saying, oh yeah, it's for this reason, approve that, you really need to challenge that. Obviously, experienced members of staff who might have worked with some of these individuals, or who are familiar with deepfakes, might spot something like that.
But I suppose these attacks work because eventually you are going to find someone who's a) tired, b) new in the role and doesn't want to let people down, or c) on a bad connection with poor bandwidth, so they might think the glitches are just their internet connection. And it's easy to be fooled.
And I think there are some fledgling technologies that can detect deepfakes. I don't think any of them are really mainstream or have really proved their efficacy as far as I've seen so far, so I think it's definitely an issue that needs to be tackled. And it raises a wider issue in cybersecurity and IT management around proving your identity: when someone can fake a video of you on a real-time call, how do you really prove that it's you? What do you use for that? That's something we're increasingly looking at.
And as I mentioned a short while ago, talking about conditional access: if an individual claims to be this person, and yet they're logging in with a laptop you've never seen before, from a location you've never seen before, you probably need something like AI or ML to bring all that telemetry and intelligence together in real time and raise an alert saying this probably isn't the individual you think it is. I think that's where there probably is a gap at the moment: the data is out there, but bringing all that telemetry together in real time is a big challenge.
Evan: What does that mean for the future of work, when you can no longer trust the authenticity of digital communications? When you don't know if it's really us on this podcast versus our AI avatars, let alone the Zoom meetings or the emails or the text messages or any other kind of digital communication or collaboration, how does the world work? Because especially in this remote world, and especially for an organization as distributed as yours, as Mike mentioned, we have to put a lot of trust in digital communications. So how do we survive, if not thrive, in that future world?
Gareth: The term zero trust has been bandied about in information security for probably the last 10 years. And I think it's only really now that you are actually in that kind of situation: what can you trust?
But in order to work, there has to be a level of trust. What's really important is that you have that level of trust, but you also have an insurance policy: you know what data is being exposed, you know what systems are potentially at risk, and then you protect against that. One term CISOs often like to use, and I think we use it flippantly sometimes, is defense in depth, but with these AI tools it is increasingly really, really important.
And I think certainly as a CISO, that's my challenge. And probably as cybersecurity vendors, that's your challenge as well: how can we scale our technologies and operate them faster and faster to deal with what is going to be an increased range of attacks?
And I think what's really scary, I was reading an article, I can't remember where, about some AI tools that are now being used where they'll try one attack, and if that doesn't work, they'll try another one. But they'll also learn from the behavior of the defender's systems, around what technologies are being used, and then learn on the fly: okay, this isn't going to work, but I'll try this. And that's really scary, because there are some quite sophisticated hackers out there, and some quite unsophisticated hackers out there.
I think AI, if it's not managed, is potentially going to put that kind of nation-state-level hacking capability in the hands of any group.
Evan: Yeah, that nightmare scenario, which probably two years ago would have sounded like science fiction, is starting to feel a lot more real today, right? Maybe five years ago, if you wanted to create a really personalized attack, if you were, I don't know, a cyber warfare division or an offensive division, you might have a bunch of people doing research on the victim, the organization, and their social network. You might have had another team of ten people building tools to launch those personalized attacks. And now, like you said, the petty criminal can utilize AI tools to do all that for them, right?
Using OpenAI's deep research to learn about someone, using Cursor or code generation to build the tools to send autonomous, personalized attacks that adapt. Again, that would have sounded like science fiction to all three of us a couple of years ago. And now it's like, oh yeah, that's happening today. I don't think we would be too surprised.
Gareth: And at a slightly lower level, we are beginning to see a rise in the number of spear phishing and personalized attacks. I think that is probably because of the use of generative AI tools to speed up the reconnaissance and research part of it.
Fortunately, we've detected them and we've stopped them. It could be a coincidence that it's happening around the time that generative AI tools are in the news and more people are using them, but I don't think it is. I think it is just making it easier for people to do that reconnaissance. You can also use these tools to map organizational hierarchies, because certainly for an organization like Save the Children, we're a big organization.
We have to be in the media and things like that. And like you said, three or four years ago, it would have taken someone actually sitting down with Google searches, reading lots of documents, trying to put together on a whiteboard: so-and-so reports to so-and-so, they report there. If you're good at using generative AI tools, you could do that in 20 minutes.
You can actually build quite a detailed picture. And to combat that, we started to run some sessions. We've had a guy we've known in the past, he's got his own company now, and he's run some really good sessions with our finance teams around open-source intelligence and how cybercriminals and AI tools are gathering that: limiting social media profiles, limiting what goes out in press releases and different reports. It's been really insightful, actually, and a lot of the people in that training had never really thought about it. And I suppose there's one good thing I like about cybersecurity.
When we do those kinds of training and awareness sessions, yes, we're doing them because we want people to protect our organizational data and systems, but it also helps them in their personal lives. With an increased number of scam emails and phishing texts, that awareness, that vigilance, that experience of being able to spot scams helps people at work and at home as well.
Mike: So I guess part of that, too, is awareness, which has always been such a big area for security programs to help train their users. With the advent of AI and the blurring of the lines between legitimate communications and attacks, where do you feel, from an awareness and training perspective, you're going to have to reimagine new ways to do that? Where is security awareness failing today to keep up with the threats from an AI perspective, and what needs to be different, or needs to be true, about security awareness training for your employees six months from now, a year from now?
Gareth: Yeah, I think the challenge with many information security awareness programs is that they're too static and they don't really drive behavioural change.
The content of a course is important, but people are busy. They do lots of different online learning, and they're busy in their day jobs. Someone might remember that I've said you need to report an incident by such-and-such, but is that 10-minute online course driving behavioural change? So I think there are two different aspects to that.
One is about keeping training programs innovative and agile, changing them up. Last year, we really ramped up our own information security training and awareness program. We started to do quizzes; we actually gave a prize giveaway for one of them. We've done some online games. We've done some interactive workshops, both online and in person in some of our regional areas, as well as more traditional video-based or reading-based training. But I think you need to combine that with looking at how you measure the impact of those.
We do simulated phishing tests, like lots of organizations, but I'm probably less interested in the click rate, which is the common metric of how many people click on a link, and more in how many people report it: how many people are flagging it up to the IT support desk or using the phishing reporting email. Is it particular offices, particular sectors, particular parts of the business that aren't doing that? Because that's really the behavioural change. We're not only saying this is a scam, but: I need to make other people aware of this. Not because I'm going to get in trouble if I don't, but because I don't want my colleagues to fall victim to this, so I need to report it to IT security.
I think the challenge with a lot of training and awareness programs is that you're preaching to the converted. The people who most engage with voluntary training and awareness, if you give them 30 videos they can choose from, are the kind of people who would be doing the right thing anyway. They're not the kind of people who are going to be clicking on links; they're going to be really wary about downloading apps and things like that. And they're really interested in it because they're already engaged; they're eating it up because they love it. The people with the risky behaviour aren't going to engage unless you can force them to. So I think we really need to up our game in information security training and awareness: how can you reach those high-risk users? How do you identify them? And then how can you actually work with them to drive that behavioural change?
And what makes it even more challenging for us is that we operate globally, and there are quite different cultures and different responses to that. In some parts of the world, an email from your manager or your director telling you to do A, B, and C and not do D is really, really effective: people don't want to do the wrong thing because they'll get in trouble or lose their job. But as we know, in other parts of the world, particularly in the West, the attitude is more, I don't need to do that training. So it's about matching that to people's job roles, but also their culture and their geographical location.
Mike: And where do you feel AI can help drive that behavioral change? I feel like that's an area where there's a lot of promise and a lot of opportunity. But where do you see it actually making a difference in helping drive some of that behavioral change you're talking about?
Gareth: I think probably in that first stage of identifying bad or risky behaviour. I could go through everyone's internet history, but if you do that for 16,000-17,000 users, it's not going to be practical.
With AI tools, looking at what kinds of websites people visit, you can actually anonymize that, but you can then get an idea of where the risks are: where people are downloading apps, where they're visiting sites that may be compromised, where there's other risky activity.
I mean, yes, you have to preserve privacy. You don't want to be monitoring every keystroke of every individual all the time. But I think AI gives you the power to start to gather that together, so that you can say, these are the individuals I need to target. And then you also get real-time feedback. So you identify a cohort of individuals who are high risk.
You can apply multiple interventions. That could be workshops; it could be, like I said, online games. But then you can actually look: in the next three months, has that worked? Have they stopped using that SaaS application or whatever? Have they stopped downloading these apps or putting loads of different applications on their laptops?
And if they haven't, okay, we'll try something else. I think there probably hasn't been that scientific approach to training and awareness that has been used in some other areas of cybersecurity. The metrics used at the moment are too simplistic, but AI probably represents an opportunity to do some really clever stuff in that area. And I'm sure there are some cleverer people than me working on solutions like that, that can leverage those technologies at the moment.
And I think that's probably an area overall in cybersecurity where AI can bring computational power and scale to bear on the effectiveness of cybersecurity controls, whether they're technical controls or social controls, because it allows that crunching of big data in real time to give insights we probably hadn't seen in the past. I suppose I'd love to know, and perhaps cybersecurity vendors don't want to know, but imagine a tool that actually helps you do your business value analysis: an AI tool that says, okay, you invested this much money, and this is what it's delivered for you over the last 6-12 months.
Yeah, I can do it, and we do it already, but it's quite tricky, and there's a lot of manual analysis, a lot of legwork in it. And often you do it once and think, that's done, I'm not going to do that for another 12 months, another 24 months. But AI gives you the ability to do that kind of assessment and rationalization continually, in real time.
Mike: I'd love to double-click on some general AI use cases for your security team. What are some interesting or tangible results you've seen from AI that a lot of your peers may not necessarily have seen from that same type of use case?
Gareth: I suppose in the email security solution we use, which is AI/ML-based, the ability to stop quite complex business email compromise attacks has been a real game changer for us. We work in a sector where, unfortunately, we are a target for cyber-enabled fraud. That ability to use AI and ML to analyze patterns of emails and behavior matters, because realistically you're not going to get fraud teams proactively working through all your suppliers and emails to identify unusual traffic. So that's one area where we've seen significant impact in reducing cyber-enabled fraud around business email compromise and other attacks like that. That's been one really beneficial use of AI.
Mike: I'd love to get your 30-second opinion: obviously AI is changing the landscape from a technology perspective, both in how the bad guys are using it and how your teams are using it. But where do you see AI impacting the workforce?
Obviously, AI is not going to eliminate all jobs, and all work is not going to be done by AI, but I do think there are going to be shifts in the types of people you're looking for and the impacts it's going to have. What do you see at Save the Children?
Gareth: I think whether it's Save the Children or elsewhere, it will shift the focus of people's roles. I'm thinking about my own career: I started off as a data analyst many years ago, working in Excel spreadsheets and things like that, and I considered myself an Excel guru.
I knew all the formulas; I could do all the fancy formatting. You don't need that anymore. I've seen some AI tools work with spreadsheets where you can just tell it in plain text what you want to pull out of the data. So those kinds of roles are changing, and I think there's a challenge for individuals around how they provide value, because pure reliance on a set of technical skills is probably going to disappear.
I think it's about oversight, governance, and bringing together quite different concepts and patterns. I won't get into a conversation on artificial general intelligence, which is in the news at the moment, whether that's six months away or six years away; it's an interesting field, but probably a conversation for another day. Within Save the Children, people are now looking at where they can use gen AI tools to get rid of the minutiae, get rid of those repetitive, boring tasks, so that they have more time to focus on delivering value, which for us is obviously more time for delivering better programs for children: healthcare, education. Or for people who work in fundraising, getting rid of those repetitive tasks so they can focus their experience and insight on securing more funding.
There have been some early adopters in any organization, people dipping their toes into those tools. I'm old enough to remember when the internet first came in and people started to use search engines. You were almost seen as a wizard if you knew how to use a search engine well. That's probably how I progressed in tech, really: I just seemed to be quite good at search engines and things like that, whereas other people struggled with how to get it to do a search.
And I think that's where we need to start training people on how to use generative AI tools effectively, and on what the risks are. We've already done that within our own organization, putting in guardrails and policies. But you need to remind people of the risks, but also the benefits: where it is safe to use these technologies and the types of tasks that can be automated. We have some fantastically intelligent individuals working for us, and we all know there are elements of any job that can sometimes be repetitive and boring. How do we remove those so that people can use their skills and experience to deliver more value? So I think it's going to have a profound impact.
But with it, and certainly for managers, I think there has to be caution around things like automated responses. One area I find worrying is automating things like approval emails, where you still need to review the document. If it gets too easy to just auto-reply, yes, I reviewed it, please proceed, you could be setting yourself up for a fall if you're not careful.
Evan: We like to do, Gareth, at the end, a bit of a lightning round. So we're looking for your one-tweet responses to questions that are very difficult to answer in one tweet.
So please forgive us. Mike, do you want to kick it off for us? We've got four or five.
Mike: Sure. So what advice would you give to a security leader who's stepping into their very first CISO job? Maybe something they might overestimate or underestimate about the role.
Gareth: The advice I would give is: make sure you really understand your business and its strategy. You're only going to be able to provide value to your organization if you understand what that organization is trying to do.
Evan: So Gareth, you just talked about how getting in a little early on some of the new tools kind of makes you feel like a wizard. There are a lot of CISOs out there who are maybe nervous that they'll be behind on AI. Is there any best practice you can share about how you keep up with the latest in AI or new cyber technologies, given technology is advancing so fast?
Gareth: Yeah, I have my go-to tech sites that I trust. Ars Technica has some fantastic articles on cybersecurity and AI; I trust the journalists, and there's a really good community of technologists behind it who will really critique things, and you can get lost in the comments. But also talking to your cybersecurity partners, exploiting their knowledge, and to peers, whether that's at conferences or networking events, and not being afraid to ask. Say, I don't really know a lot about these gen AI tools. How are you using them? What are you using them for?
I think there are people who are quite evangelical, a bit like with the internet early on, about how they're using these tools and what they've found works and what doesn't.
Mike: What's a book that you've read that's had a big impact on you, and why? It doesn't need to be work-related, and maybe not even something you've read recently.
Gareth: This book, Flow, by Mihaly Csikszentmihalyi, a Hungarian author. It's around the flow state, around focus and concentration, and the subtitle is about the psychology of happiness.
I read it probably nine to twelve months ago, and it's probably had the most profound impact on my productivity and work of any book I've read in quite some time, just around how the mind works and the challenges in our psychology that sometimes stop us concentrating on things.
There are some really good examples from sport, some really good examples from medicine, and things like that. I've recommended it to a lot of friends, and I'd thoroughly recommend anyone read it. Really good book.
Evan: What's some advice you would share to inspire the next generation of security leaders?
Gareth: I think the advice I would share is to not be afraid of innovation in cybersecurity. There's been a traditional progression over the last 15-20 years, like you said, Evan, coming from that kind of infrastructure background. And I think there's still an element of castle-and-moat around a lot of cybersecurity programs that is now effectively meaningless, because we are so reliant on distributed systems and SaaS and things like that. Yes, there are still some organizations that have a core data center and need to lock that down.
So, security leaders: don't be scared to be innovative. Don't be scared to challenge the status quo. That would probably be my advice.
Evan: Gareth, thank you so much for joining us today. Really appreciate you sharing your thoughts and, uh, looking forward to chatting again with you soon.
Gareth: You're welcome. Thanks for inviting me on the call.
Mike: That was Gareth Packham, CISO of Save the Children International. I'm Mike Britton, the CIO & CISO of Abnormal AI.
Evan: And I'm Evan Reiser, the founder and CEO of Abnormal AI. Thanks for listening to Enterprise AI Defenders. Please be sure to subscribe, so you never miss an episode. Learn more about how AI is transforming the enterprise from top executives at enterprisesoftware.blog. This show is produced by Josh Meer. See you next time.