On the 23rd episode of Enterprise AI Defenders, hosts Evan Reiser and Mike Britton, both executives at Abnormal Security, talk with Matt Modica, Vice President and Chief Information Security Officer at BJC HealthCare. BJC HealthCare is one of the largest non-profit healthcare organizations in the United States, operating 14 hospitals across Missouri and Illinois. BJC has over 30,000 employees and over 4,200 doctors across its network. In this conversation, Matt discusses the unique challenges of securing patient privacy in a digital world, new opportunities and risks in healthcare with recent AI advancements, and aligning security practices with an AI-enabled future.
Quick hits from Matt:
On the increasing effectiveness of AI-powered attacks: “Voice technology and mimicking a person got very good. Pretending to be somebody else and trying to get credential access or compromise credentials, it's not just executives anymore. It's anybody with a credential. So the credential is valuable and they're being sold. It's just a matter of how criminals can best get the ID and password to be able to sell.”
On critical areas where AI allows us to focus more attention: “We have time to do the things we've always talked about wanting to do. We've talked about wanting to do more threat hunting, about wanting to do more risk quantification. We've always talked about wanting to do a better job and be more proactive in shifting security left in our, in our agile environment, our workflows and things. So we have some time to do that now because we're making some of those things either automated or more efficient.”
On the maintained need for humans in the loop with enterprise AI: “When you're running a large enterprise, uptime is of utmost importance. If I change a firewall rule that blocks something legitimate, I'm going to hear about that. If that was done because it was a low security risk, but the automation decided to do that, then there's a lot of ramifications there. I don't know if we'll ever get to a hundred percent full automation. I think we're always going to have to have someone validating accuracy and the models, and making sure that our risk tolerance as an organization is taken into consideration as we instrument those things or allow those things to take action on our behalf.”
Recent Book Recommendation: The One Minute Manager by Ken Blanchard and Spencer Johnson
Evan: Hi there and welcome to Enterprise AI Defenders, a show that highlights how enterprise security leaders are using innovative technologies to stop the most sophisticated cyber attacks. In each episode, Fortune 500 CISOs share how AI has changed the threat landscape, real-world examples of modern attacks, and the role AI can play in the future of cybersecurity. I'm Evan Reiser, the CEO and founder of Abnormal Security.
Mike: And I’m Mike Britton, the CIO and CISO of Abnormal Security.
Today on the show, we're bringing you a conversation with Matt Modica, Vice President and Chief Information Security Officer at BJC HealthCare. BJC HealthCare is one of the largest non-profit health care organizations in the United States, operating 14 hospitals across Missouri and Illinois. BJC has over 30,000 employees and over 4,200 doctors across its network.
In this conversation, Matt discusses the unique challenges of securing patient privacy in a digital world, new opportunities and risks in healthcare with recent AI advancements, and aligning security practices with an AI-enabled future.
Evan: Matt, first of all, thank you so much for joining. Do you want to share a little bit about kind of your background and how you got to where you are today?
Matt: Yeah, yeah, sure. So, uh, currently the Vice President, Chief Information Security Officer at BJC Health System. Been in that role for seven and a half years. I would say, you know, at the start of my career, I was going to be a graphic designer, right? If you interviewed me a long time ago, uh, when I was in school and doing internships and everything, um, I interned with a couple of companies here in the St. Louis area and, um, just loved doing that kind of stuff, right?
So, fast forward after college graduation, I'm in a temp job trying to break into graphic design work and things like that. Um, and my wife's cousin was a recruiter at the time for IT in an organization here in St. Louis, and at my wedding reception, by the way, he said, Hey, you know, I don't have anything in graphic design or marketing, but I do have this job in IT. So why don't you try that? When you get back from your honeymoon, come back, let's talk about it and figure it out.
So, uh, long story short, got that job. It was a business analyst role, essentially, working, uh, on project management efforts and things in infrastructure. Um, eventually found out that I love project management and IT and all the technology components associated with it. So I've always been kind of a geek and a nerd. Um, and so just kind of built my career from there. One thing led to another.
Ran a lot of projects, infrastructure-based projects and things. Ultimately, I was responsible for a team of project managers and architects in the security space, and that's how I got into security, right? And so, uh, from there did, you know, certification and accreditation for some DoD services and, um, identity access management and all that fun stuff, vendor access management, third-party risk, as we call it now.
And then, uh, moved into several different roles across, uh, multiple companies. So ultimately landing here at BJC.
Evan: So if I understand right, BJC is one of the largest nonprofit healthcare systems or organizations. Do you want to share a little about the organization and kind of maybe talk a little bit about what makes it unique?
Matt: Yeah, absolutely. So BJC Health System. We actually recently merged with St. Luke's Health System in Kansas City. Um, prior to that we were called BJC HealthCare, in the St. Louis metropolitan region. And so across both of those entities, across what we call BJC Health System, we have 28 hospitals. We have 44,000 employees. We're about $10 billion in revenue. Um, and we have hundreds of clinics and care facilities outside of those hospital locations.
Um, and I just pulled some numbers earlier, as far as, you know, a day in the life of BJC, right? So, um, inpatient stays: we have approximately 3,100 folks that stay overnight with us, right? Every single night, um, in one of our hospitals. Emergency department visits: we have about 2,000 every single day that come into our emergency departments across the different regions. 650 different surgeries every single day. 50 babies are born every day in our facilities. Um, and the biggest number that we have is about 12,000 outpatient visits at all of our clinics and facilities every single day.
Right. So a lot of folks, you know, they haven't heard about BJC sometimes, but then when you put some of those numbers in perspective, they're like, wow, that's kind of a big thing, right? And so, so we're a large, uh, really large health system regionally in the, uh, in these regions.
Mike: So what are some unique cybersecurity use cases at BJC that the average listener might not fully appreciate?
Matt: Yeah, yeah. So I'd say it may not be unique for healthcare, but I'd say healthcare is unique in the fact that, um, you know, patient safety is number one. Privacy is also another very close number one, right? And so not only do we have regulations and things that we have to follow, but it's really, um, you know, of the utmost importance to make sure that people are not accessing things they shouldn't be. Um, and they're not exposing data that they shouldn't be exposing, um, and it's only on a need-to-know basis, right?
And so, a lot of people would say, well, yeah, that's security, right? That's what we do for data privacy and data security. It's a whole other level, right, when you get to the healthcare environment. Um, and especially when you think about some of the numbers I shared earlier about how large BJC is and how many patients we have and serve, um, it's a large number, right? And getting it right every single time is the expectation, right? Um, including accidental disclosure and all those kinds of things. And so, because of our, again, regulatory requirements for what we have to submit in the event we have a situation, um, there's a lot of repercussions that can occur there.
So, so for us, you know, it's not only about how do we keep the operations up and running? That's absolutely important. Um, we have a variety of downtime procedures and continuity plans and things in place to help with that. Um, it's really, you know, how are we protecting the patient when they're here and when they're gone from our facility, right?
Mike: So, Matt, we've seen a shift in technology today, and it's, you know, moving at this incredibly fast pace, and especially in the AI era. And while there's a broad adoption of cloud platforms across most organizations, and it's been beneficial, also, this changes our security paradigm.
How do you see the threat landscape change in today's cloud first and AI first world?
Matt: You know, from my perspective, you can't build walls high enough. We could never build walls high enough before. But now we have many walls in many different locations. And, um, I would say the SaaS and cloud-based, um, you know, movement, if you will, is just going to continue. It's just going to get even deeper into these microservices and other kinds of things. And as I think about all of the interfaces that we have for some of our clinical platforms, right, as an example: there are hundreds, if not close to a thousand, interfaces to our main EMR. And each one of those is a different SaaS system, an on-premise system, could be a variety of things, right?
And so, so we can't just think about these big platforms. We have to think about all of the things that they're connecting to and what's the downstream look like. Right. And, I would say that's probably the biggest shift that I've seen.
We used to be able to kind of build ourselves into a fortress, right? Um, with on-prem, uh, hosted stuff and things that we thought we could control to begin with. But, um, I think in reality, we never really had full control. We were always just working on some of those things.
So, so I would say, um, definitely just that explosion of, you know, microservices, API kind of things. Um, all the different interfaces, um, and really understanding like, okay, where, where do our four walls end? Where does our responsibility from a data integrity and privacy and security perspective end? And when does the third party or fourth party or fifth party begin?
And that begins with inventory. And understanding workflow and understanding, you know, kind of where things are going and why they're going in the places they shouldn't. So, to me, it's all that bifurcated stuff, and then it's all the, you know, what are those processes and when is enough enough, right?
From an organization's perspective to say, this is our responsibility. This is others. Um, and you know, what are those levers we can pull to, to better defend all those things.
Mike: And I think it kind of depends on industry and company, but where is BJC, from the context of the business side, in looking to AI to solve problems? And how do you stay up to speed with your business as they're probably moving very fast to embrace new technology like AI?
Matt: Yeah. Yeah. We actually just announced, so we have an academic partnership with Washington University here in St. Louis, and Wash U and BJC just recently announced a center focused on AI, right? And the idea is clinical AI use cases. Um, one of the first ones we're working on, and a lot of folks in healthcare are working on, is how do you make the physician and the caregiver more efficient, right? And not just from a cost savings perspective or a revenue perspective. It's really work-life balance, right? It's really, how can they pay more attention to the patient while they're in the room rather than typing on the keyboard and things like that, right? So there's a lot of opportunity there to help make our clinicians more efficient and more effective, um, and, you know, some care possibilities as well, right?
So I think there are, um, opportunities we're looking at to say, hey, how can we get to a diagnosis faster? How can we take into consideration thousands of patients' histories versus maybe the histories of what that physician knows and works with on a day-to-day basis, right? So there are a lot of potential opportunities there, and that's what that center for AI has been tasked to do: to say, hey, what are those things that we should experiment with?
How should we do it? How can we be leaders in that space and how can we share with others?
Evan: You mentioned a couple of use cases about, you know, doctors and physicians using AI-powered tools to maybe do better note-taking, or to pull in more context, or better help patients, or even remember, you know, intricacies in a diagnosis.
AI is obviously a powerful tool that makes a lot of work kind of easier and, you know, some of these tools more accessible. How do you see that affecting the threat landscape?
Matt: Yeah. Yeah. You know, I've thought about this and I think a lot of times people will think about the threat landscape on, Hey, what are the bad actors trying to do to us? Um, I would say one of the bigger risks that I see is what are we going to accidentally do to ourselves?
And so, from that perspective, let's just take diagnosis. We have to be very careful when we're providing a diagnosis or treatment plan for a patient, and we have to make sure that there's proper due care, there's proper analysis, there's, you know, good accuracy in that diagnosis and that treatment plan.
And so, using a large language model to capture ambient notes when a physician is talking to a patient. Um, and let's just say that doctor says, yeah, that looks good enough, and clicks OK. And, you know, what if that diagnosis is wrong? So we have not only, you know, a duty to care and make sure that we're providing patients with the best care possible. We also need to make sure that that's accurate, right? And it's something that we can stand behind and, uh, and focus on.
So I'd say, you know, one of the bigger threats is those data privacy aspects. What if the patient didn't consent to me recording them? Because even though it's a large language model, it still has to listen, right?
And so maybe there are things there, from a regulatory perspective, that we need to focus on. So I see a lot of regulatory and accidental disclosure kinds of things that can happen, um, if we're not careful about it and if we're not thinking about it and going in with our eyes open. Which is, you know, why we have that dedicated group now to say, hey, what are those things? How do we verify? How do we validate the models? How do we make sure that it's actually hearing correctly and interpreting correctly? Um, and what does that audit cycle look like around that?
It's not much different than what we have today for healthcare, right? A lot of healthcare has dictation, you know, recorded dictation by physicians on notes and things like that. And so you're only as good as the person typing the notes for you, or the system typing the notes for you. Um, but there are audit procedures and things in place, you know, to make sure that we have the right things in place there. So I'd say that's kind of maybe a little more of the nontraditional healthcare perspective on AI.
Um, outside of that, you know, from a threat landscape perspective, from a security perspective, we've seen it in phishing emails just like everybody else, right? Um, much better targeted, much better worded, more difficult to detect.
We're starting to see, um, as others in the industry are, voice-changing capabilities, right? And so, uh, folks pretending to be executives or pretending to be someone else, um, using voice and voiceprint technology and AI capabilities around that. So, um, definitely starting to see those kinds of things, and, you know, we've had to put some other procedures and different things in place for identity proofing and identity verification, which is going to be a continuous challenge, I think, for all of us.
Mike: Are there any specific attacks that you've seen or heard about maybe from your peers where, you know, you're seeing high potential where it's probably they're leveraging AI or generative AI, and you thought, man, that's a really unique way for them to attack, you know, any specific examples we could dig into?
Matt: Yeah, you know, I'd say, um, like I said, I've seen it, I'll say firsthand, not necessarily at BJC, but at other locations, around the voice technology, right, and mimicking a person. It got very good.
You know, it's the help desk situation that everybody's dealing with, of people calling in, pretending to be somebody else and trying to get credential access or compromise credentials. Again, it's all about the identity verification. What are we doing to bolster that? So I'd say that's a big one right now.
And it's not just executives anymore. It's anybody with a credential, right? And so the credential is valuable, and they're being sold out there. And so it's just a matter of how they can best get the ID and password and, uh, maybe an active session, right, um, to be able to sell.
Evan: So across the healthcare system, right, I can imagine you're going to have new AI tools, new AI technologies, right? Maybe it's too early to determine the exact form. There are going to be these new AI-generated attacks. You're going to have some new AI-generated security tools, right, to help on defense. How does it affect your team, right?
Does everyone's job change a little bit, right? There's some work they used to do that AI can now do. There are probably new responsibilities. Um, like, how do you see the organizational model changing, right? Are there new kinds of cultural things that you need to set in place to encourage people to really test out these new technologies? How do you see it affecting kind of, uh, the team and culture?
Matt: Yeah, I think, um, I would say we have some folks that are fine and excited about it, and we have some that are worried about it, right? And I can understand both perspectives. And I would say that I'm excited about it, because what we are doing will change. We still need defenders. We still need people on our team to protect the organization.
Um, how we do that is going to be different. And if you're not used to change by now, I'm sorry, you're going to get used to it, right? So I would say, for me, what I try to tell my team is, look, you know, it's like the book: what got you here is not going to get you there, right?
So it's, you need to figure out how can you leverage these tools? How can we do it in a safe way? Um, to make our jobs more efficient. So we have time to do the things we've always talked about wanting to do.
We've always talked about wanting to do more threat hunting. We've always talked about wanting to do more risk quantification. We've always talked about wanting to do a better job and be more proactive in shifting security left in our agile environment, our agile workflows and things. So we have some time to do that now, because we're making some of those things either automated or more efficient, those kinds of things, right?
And so that's my perspective on it, right? But I can totally understand somebody being scared about, gosh, I'm doing this thing, I like doing this thing, and that's totally changing my dynamic and shifting what I do. Um, and I can get that, right? And for those folks, what I would recommend is: we have a lot of continuing education capabilities and things, our partnership with Wash U and other universities. You know, leverage those benefits that you have through employers and others to go learn, right?
And to go continuously figure out how you change, right? And, and how you can be on that cutting edge and bringing those things to the team. Because again, we need that diverse thought. We need that diverse perspective on how to do things.
Mike: Let's double click into some other AI use cases for the security team. Maybe touch on some tangible results that you've seen from AI that other people might be surprised to hear about, or maybe underestimate the power of AI in those particular areas.
I know you mentioned risk management as one, but are there, are there some other areas where you've really seen AI shine from a security use perspective?
Matt: Yeah, you know, um, I would say the risk quantification thing is interesting, as I mentioned before. I'd say the other area, um, honestly, which is kind of right in line with all the large language model stuff, is security awareness, right?
I mean, rewording something to be more effective to read, to reach a very diverse audience, right? So, you know, at BJC, I have a variety of personas of users, right? Um, I would say the majority of our caregivers are here to do just that, to provide care, and their focus is not on email and things like that. It's about the patient that's sitting in front of them, right? And they want to do their best to help correct whatever's ailing that patient. Um, and so developing something that can resonate and quickly get the point across to folks for whom, you know, that's not their primary focus, um, is key.
And so I love the idea, and we have started to use a little bit of large language modeling, um, in some of our communications and, you know, some of the pieces that we're looking at.
Mike: I love your thoughts on security awareness and AI, especially generative AI, being able to disrupt there. Um, maybe just give your perspective on, you know, what's broken about security awareness today, and what do you think are some particular things around security awareness that AI can really make more meaningful? Because I do agree, I think it's hard to measure impact, and I do think this is an area that's ripe for disruption.
Matt: Yeah, some of the challenges I see with awareness today are, you know, really that you have very short attention spans from all your different users. And how do you craft a message, and how do you get the message out, about the urgent or emerging things that are going on? How do you make sure you're not always crying wolf, right, or coming across as crying wolf?
How do you make sure it's relevant to the user groups and the folks that you're trying to communicate to? So I'd say it's really just making it real, right? And making people understand, or helping people understand: this affects your work life, but it also affects your personal life, significantly. And here are some things you can do to protect your personal life.
And even though that's not, you know, typically what a business is supposed to do, we care about our folks. We care about our employees, right? So how do we make sure that they have good cyber hygiene, at work and at home? Um, and how do we point to some things there.
So I'd say it's really, um, you know, for me, it's that messaging, and how do you get your message across in a timely, efficient, um, simple way that gets your point across and gets, ultimately, the outcome you're trying to achieve, right? So in the case of phishing: great, so I failed a simulation test. Now what? Right? So, you know, we could potentially use AI solutions to help hone in on, well, you clicked on this specific thing, so that was actually your problem. It's not, I'm going to send you a general training class. It's a, hey, you didn't recognize the link, you clicked on it, and you shouldn't have done that.
Right. And so there are some tools out there that are starting to do that now, helping folks hone in on, you know, that very quick training aspect versus a broad-based, kind of broad-strokes, everybody's-got-to-do-this security training thing.
Evan: And Matt, where do you see it going long term? Do you think that the AI will understand kind of every worker, right? The doctor in the ER, right? And say, hey, across our healthcare system, other ER doctors get these types of attacks, so, you know, be very careful when you're discussing, you know, patient matters. Is that the kind of future you're envisioning?
Matt: Yes, and I already see it in some tools that are out there, right? I mean, I think there's more behavioral analysis, and more so anomalies that we can pull out, to say, hey, this is what a typical physician looks and feels and acts like.
Um, we're seeing this change, we're seeing these people doing X, Y, Z, or hey, we're, we see people using personal data to call into the helpdesk to do X, Y, Z for these user groups, right? The more we can target that, you know, I think it makes it more real and it's not just a, oh, well, that doesn't happen to us, right?
It's a, it's a, no, it happened to my friend, like, in the unit next door, right?
Evan: Yeah.
Mike: Yeah. I guess just quickly too: you know, everyone's hyping AI, and I think it does solve a lot of problems. But is there anything you're seeing where AI is promising to solve a problem, and maybe it's just overhyped and not necessarily needed in that space?
Matt: You know, and I can't remember who said this, but I've heard it before and I would agree. It's that AI is going to automagically take care of all of our security threats and immediately block something from happening. And while I think some of it's possible, maybe you can do something like that when you're smaller, right? A smaller organization, a smaller company. Um, but when you're running a large enterprise, uptime is of utmost importance. If I change a firewall rule that blocks something legitimate, I'm going to hear about that. Um, and if that was done because it was a low security risk, but the automation decided to do that, then there's just a lot of ramifications there.
So I don't know if we'll ever get to a hundred percent full automation. I think we're always going to have to have someone validating accuracy and the models, and making sure that our risk tolerance as an organization is taken into consideration as we instrument those things or allow those things to take action on our behalf.
Um, so I think we ought to be careful with it. But I just think it's a little overhyped to say everything's going to change and we're not going to have people doing incident response and, uh, security threat reviews and hunts and things like that. I think you still need a brain, a real brain, behind it somewhere.
Evan: So the way we like to end our episodes, we like to do what we call a lightning round, so we're kind of looking for your quicker hits, kind of like your one-tweet responses. And these questions are very difficult to answer in one tweet, so, uh, please forgive us.
Uh, but Mike, do you want to kick it off for us?
Mike: Yeah. So what's one piece of advice you'd give to a security leader who's stepping into their very first CISO job? Maybe something they might overestimate or underestimate about the role.
Matt: Give yourself time. So that would be the tweet response. The longer response for that would be, um, again, I'm a very impatient person. Most people want to hit the ground running and they want to see immediate results.
You need to listen and learn. You need to take your time to understand what the problems are and what those things are that you're trying to solve, or that you think you need to go solve. Um, and then make your plans and go for it, right? Um, you can move fast, but you also need to just take a breath, listen, learn, and understand the organization you're in, or just joined.
Evan: Appreciate you sharing. So, um, as I was saying earlier, you know, AI is one of the fastest-moving technologies we've seen, right? You mentioned, you know, it's changing what happens in the hospital, what happens, uh, with doctors and patients and other clinicians. Um, what would be your advice for how a CISO can stay up to date with, you know, the latest AI trends, right?
Either how it's affecting their business, how it's affecting the criminals, how it affects new cybersecurity technologies or programs or process, you know, how do, how do you stay up to date and what's your advice for others?
Matt: So for me, it's, you know, it's really reading and taking in and ingesting and constant learning, right? You can't just stop and put it on autopilot. You need to learn and understand, in whatever way you go do that. The only thing I would encourage folks to do is do it in a way that is balanced, right?
So a lot of times, you know, there's always an agenda, or there always seems to be an agenda, behind a piece of some sort, right? So cut through that. Read both sides, things you do agree with, things you disagree with, and make your judgment. You know, decide what your perspective is on those things.
Mike: On the more personal side, what's a book that you've read that had a big impact on you, and why? And it doesn't have to be cybersecurity-related. It could be personal, could be leadership, whatever.
Evan: Could be a cool Star Wars book. I mean, it could be anything.
Matt: I would say, it's gonna be so boring, but it's one of the first management books I ever read: The One Minute Manager, right? It's so easy, so quick to read, but it's something that we tend to forget, right? How do we, um, quickly round with folks? How do you stay in touch with folks? How do you make sure that you're accessible? Um, you know, that you're really listening and learning from everybody, right? And so I think it's, uh, you know, those little moments of time that are precious for folks.
Evan: So last question is, um, what's the advice you'd have for the next generation of security leaders or aspiring security leaders or cybersecurity professionals, what wisdom or advice would you share with them?
Matt: I would say be comfortable in the gray, right? So not everything is black and white. Not everything is right or wrong. Especially when you're in the business world and in personal life too, right? I mean, just again, kind of going back to that balance concept, look at things you agree with, look at things you disagree with. Make sure you're taking those things into account. Understand that you're not going to be able to fix everything. A lot of folks in security want to get into this job because they want to fix everything. And that's great. But the reality is we can't fix 100 percent of everything. So we have to fix the most important things.
Um, and so how you deal with that, how you deal with not being able to fix something, but mitigating a risk or letting a risk go because it's just not something we need to deal with right now, that can be tough. Especially for folks new to security, because like I said, they really want to be that, um, I'm going to come in, I'm going to fix and change the world and do everything for the better.
And believe me, I want to as well. I want to fix it all. But it's just not feasible. So we got to make sure we're focused on the right things.
Evan: Well, I appreciate you sharing, Matt. And, um, I imagine, uh, there are lots of people out there that are going to really pull some key insights from the show.
So I appreciate you taking time with us today. Um, yeah, really sincerely thank you for joining Matt. And, um, I'm looking forward to chatting again soon.
Matt: Thank you for the opportunity. I appreciate it.
Mike: That was Matt Modica, Vice President and Chief Information Security Officer at BJC HealthCare. I'm Mike Britton, the CIO and CISO of Abnormal Security.
Evan: And I'm Evan Reiser, the founder and CEO of Abnormal Security.
Thanks for listening to Enterprise AI Defenders. Please be sure to subscribe, so you never miss an episode. Learn more about how AI is transforming the enterprise from top executives at enterprisesoftware.blog, and hear their exclusive stories about technology innovations at scale. This show is produced by Josh Meer. See you next time.