Mark Ferguson sees AI as a practical tool that enhances defenders’ ability to analyze data, reduce noise, and get to answers far faster than manual methods ever could.
For the Chief Information Security Officer at Bombardier, a global leader in business aircraft design, development, and manufacture, artificial intelligence is one piece of a broader security evolution, one that helps defenders respond to a rapidly changing threat landscape without proportionally increasing team size or toil.
In the conversation, Ferguson explained the unique complexities of aviation cybersecurity, described how criminals will leverage AI alongside defenders, and shared real examples where AI, even in its early practical applications, has already helped his team do things they couldn’t do fast enough before.
Bombardier is a specialized aircraft manufacturer with about 18,000 employees plus thousands of contractors and partners. Ferguson described the organization’s core business: designing, building, and selling premium business jets, operations that involve complex engineering systems and long product lifecycles that must be supported for decades.
That complexity feeds directly into cybersecurity challenges. Ferguson explained that the company’s reliance on supplier ecosystems, a migration to the cloud, and an expansive legacy IT estate all contribute to risk. Many critical systems still exist simply because aircraft and their supporting tooling must remain viable for 25 years or more, making the security environment both vast and heterogeneous.
As he put it: the perimeter is gone. In today’s era, “the perimeter is where the person is.” Users connecting from different devices and geographies mean that defending the enterprise requires new approaches and technologies, not just traditional controls.
Asked about criminals using AI, Ferguson offered a nuanced view: attackers will use the same capabilities defenders seek to harness, but in ways that scale volume and lower cost. He explained that for attackers, “work is all about volume,” and AI will widen the net of potential targets because automation and generative tools make it much easier to craft campaigns at scale.
He expects threat actors to use AI to improve phishing emails, automate reconnaissance, and experiment with deepfakes, not necessarily in dramatic sci-fi senses, but in ways that meaningfully change attack economics.
Ferguson also pointed to a more immediate internal risk: well-intentioned employees feeding confidential data into public generative tools. Today, this is a far more likely cause of data exposure than overtly AI-enabled attacks. He said that’s “probably the biggest issue” for many organizations at this stage, rather than sophisticated AI-augmented criminal campaigns.
While cautioning against hype, Ferguson shared a clear example where an AI tool helped his team more quickly interpret complex telemetry than was realistically possible through manual investigation.
When an “impossible traveler” login pattern was flagged, his team fed the data into a security AI copilot. In seconds, it identified the relevant change and contextualized who made it and what had occurred, something that would have taken far longer by sifting through logs manually. “Copilot came back within seconds… and we were able to narrow in on what the change was, who made the change.”
This concrete example illustrates AI’s utility not as sci-fi prediction but as tangible augmentation, turning a sea of event and login data into a rapid, contextual answer and accelerating investigative timelines.
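The episode doesn’t describe Bombardier’s tooling, but the logic behind an “impossible traveler” alert is straightforward: if two consecutive logins imply a travel speed no aircraft could achieve, flag them. Purely as an illustration, here is a minimal geovelocity sketch; the record fields and the 900 km/h threshold are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    user: str
    when: datetime
    lat: float
    lon: float

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def impossible_travel(logins, max_speed_kmh=900.0):
    """Flag consecutive login pairs whose implied speed exceeds a jet's."""
    alerts = []
    ordered = sorted(logins, key=lambda l: l.when)
    for prev, cur in zip(ordered, ordered[1:]):
        hours = (cur.when - prev.when).total_seconds() / 3600
        if hours <= 0:
            continue  # simultaneous events are skipped in this toy version
        if haversine_km(prev.lat, prev.lon, cur.lat, cur.lon) / hours > max_speed_kmh:
            alerts.append((prev, cur))
    return alerts
```

A login in Montreal followed an hour later by one in Singapore would be flagged; Montreal followed three hours later by Toronto would not. A real detection, of course, sits inside an identity platform with geo-IP enrichment rather than a standalone script.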
Ferguson also pointed to a problem familiar to many security leads: data volume has grown beyond the capacity of teams to manually interpret it. He described giving a team a list of 11,000 SaaS applications used across the organization and asking which were most critical. The volume made a human-only analytical approach impractical.
His response was instructive: turn to AI and data scientists to extract meaningful insights from the data, to rank and focus efforts on the most important applications. This is precisely the kind of scaling challenge where AI shines: distilling insights from volumes humans can’t examine.
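The episode doesn’t reveal the scoring model Ferguson’s data scientists used. As a hedged sketch of how a team might triage an 11,000-app inventory before deeper analysis, here is a toy criticality ranking; the fields and weights are invented for illustration only.

```python
from dataclasses import dataclass

@dataclass
class SaaSApp:
    name: str
    users: int               # distinct employees observed using the app
    handles_sensitive: bool  # touches confidential or regulated data
    integrations: int        # connections into other corporate systems

def criticality(app, max_users):
    """Hypothetical weighted score in [0, 1]: adoption, data risk, coupling."""
    adoption = app.users / max_users if max_users else 0.0
    sensitivity = 1.0 if app.handles_sensitive else 0.0
    coupling = min(app.integrations, 10) / 10  # cap so one factor can't dominate
    return 0.4 * adoption + 0.4 * sensitivity + 0.2 * coupling

def rank_apps(inventory, top_n=50):
    """Return the top_n apps by descending criticality score."""
    max_users = max((a.users for a in inventory), default=0)
    return sorted(inventory, key=lambda a: criticality(a, max_users), reverse=True)[:top_n]
```

Even a crude score like this turns an unreviewable list into a prioritized queue, which is the practical point Ferguson makes: let the machine rank, let humans review the top of the list.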
Ferguson is clear that aviation, like many enterprise contexts, cannot simply adopt external AI tools without governance and guardrails. The risk of inadvertently exposing sensitive data to public models or workflows is a practical concern. Organizations must think proactively about how tools are used, where data goes, and who has access.
This points to a broader leadership insight: AI’s potential requires process, governance, and risk management as much as technical implementation. Without those pieces, deploying AI widely can create as many headaches as it solves.
Toward the end of the episode, Ferguson offered guidance that rises above tactical implementation, reinforcing that cybersecurity leadership today blends technical fluency, business context, and strategic communication, and that AI has a role to play if it’s thoughtfully woven into that fabric.
Across the discussion, Ferguson returned to a consistent set of lessons for CISOs and security leaders navigating AI’s role in modern defense.
Mark Ferguson’s perspective presents a balanced, grounded view of AI in cybersecurity, not as a magic wand, but as a critical amplifier of analytical capacity and investigative productivity. In a domain as safety-critical and complex as aviation, AI helps defenders cope with scale, extract insight from noise, and align security strategies with business realities rather than fear-driven hype.
As the episode’s examples showed, AI’s promise comes when it accelerates understanding and enables teams to navigate complexity more confidently, something every enterprise can appreciate.