On the 67th episode of Enterprise AI Innovators, host Evan Reiser (CEO and co-founder, Abnormal AI) talks with Joel Hron, Chief Technology Officer at Thomson Reuters. Joel shares how Thomson Reuters is rebuilding 150-year-old knowledge-work franchises in legal, tax, and compliance around agentic AI, what changed when more than half of his engineers' code started being written by AI, and why the right mental model for working with AI is "colleague," not "copilot."
Quick Hits from Joel:
On the engineer-to-controller reframe: "Your job as an engineer shifted from being the contributor and owner of the code base to more being the controller and governor of the code base."
On the trust gaps blocking enterprise agents: "The control system around the agent is something that I think really needs to be built out further for enterprises to get comfortable with allowing agents to just do work in a more independent way."
On doing technical review at 5,000-engineer scale: "You can literally go clone the repo and spend an hour with Claude or with Codex talking about the code."
Book Recommendation: Thinking, Fast and Slow by Daniel Kahneman.
Evan Reiser: Hi there, and welcome to Enterprise AI Innovators, a show where top technology executives share how AI is transforming the enterprise. In each episode, guests uncover the real-world applications of AI, from improving products and optimizing operations to redefining the customer experience. I'm Evan Reiser, the founder and CEO of Abnormal AI.
Saam Motamedi: And I'm Saam Motamedi, a general partner at Greylock Partners.
Evan: Today I’m talking with Joel Hron, CTO at Thomson Reuters.
Thomson Reuters is a global information services company with more than 150 years of history. They’re the world's largest legal software provider and are home to some of the largest tax and compliance software businesses in the world. Their perspective on AI is especially compelling because they're deploying it directly into the knowledge work that lawyers, accountants, and tax professionals depend on every day.
A few things stuck with me from this conversation:
First, Joel drew a clear line between using AI as a copilot versus using it as a colleague. The shift happened when one of his engineers noticed that 50% of their code was being written by AI. At that point, the engineer isn’t the primary contributor anymore; they’re the controller. Joel thinks this reframing will extend well beyond software engineering.
Second, Thomson Reuters has built agents that take a stack of W-2s and 1099s, read and extract the data, apply the tax law, and compute a complete tax return. The tax professional's job shifts from pulling numbers off documents to advising clients on their financial decisions. AI isn't just improving a workflow. It's filling a real labor gap.
And finally, AI has made Joel more technically engaged as a CTO, not less. With 5,000 engineers and 100-plus products, it was impossible for him to ever look at anyone's code. Now, before a Friday engineering meeting, he can clone the repo and spend an hour with Claude or Codex to actually understand what's happening. He shows up with real direction instead of just discussing project status.
Evan: Joel, thank you so much for joining us today. Maybe kick us off, do you mind sharing a little bit about your background, how you got to where you are today, and your current role at Thomson Reuters?
Joel Hron: So I’ve been at Thomson Reuters for about four years now. I’ve been in this role as CTO for just over a year. Early in my career, I actually didn’t start out in software. I started out in mechanical engineering, working on simulators and sonar systems, and I worked in the energy industry for a while. Those early days really got me into computer science and linear algebra and, sort of by translation, AI and machine learning over the course of years.
So I did that for quite a while. At one point, the timing felt right to go out with a few guys to start and grow a startup, which we grew over the course of five or six years and successfully exited to Thomson Reuters. That was about four years ago.
When I joined, the company was at a really interesting time. Obviously, the first days were really focused on taking what was a small company of maybe 80 or 90 people and really integrating it into the fabric of the larger enterprise. But it was a really interesting moment. About nine months after I joined, OpenAI released ChatGPT.
For a company like Thomson Reuters that works in law and tax, it was, as it was for every company regardless of industry, quite an abrupt moment of, “Okay, what just happened? What does this mean for me? How do I need to reflect and adapt our strategy top to bottom?”
And having just been acquired not so long ago, and having quite a lot of experience in the applied AI space, I found myself front and center in that conversation of trying to help shape our strategy in the early days of how we adapted to those changes. So that was a really phenomenally interesting time.
I think it’s really just kind of grown and blossomed since then. It’s been fun to really operate at that scale.
Evan: I didn’t fully appreciate the scope and scale of your operations. Would you share a little bit more there, just so we have more context about you and the organization?
Joel: So the company has existed for more than 150 years, dating back to when Reuters used to deliver the news on carrier pigeon. So a tremendously rich history, but also a rich history that’s grounded in some really time-tested but also perpetual ethics and truths.
Mainly that’s around delivering trusted information to the world, as you think about that job to be done. Certainly most people recognize the Reuters news business, but we’re the largest legal software provider in the world, probably most famously known and recognized for our Westlaw product, which is the number one legal research product in the world, but also Practical Law, which is the largest legal know-how content set in the world.
We also have some of the largest tax products in the world. We’re not necessarily a household name like Intuit; we sell primarily to large and small accounting firms. We also have quite a tremendous transactional tax compliance business. Any time you go online and buy something and the tax gets computed, that’s our system doing that work.
We feel like a big part of the supply chain from that perspective. We also have a large compliance and audit software business; most of the large audit providers in the world license software from us. So we touch a lot of segments of the market that really anchor on deep knowledge work, with deep expertise being a pivotal part of it.
Most of our products have some element of that expertise and content that powers what they do. That could be a tax engine. It could be, in the case of legal research, a bunch of case law and markup. But we’re also one of the largest employers of legal professionals and tax professionals in the world.
We have experts in-house that support the development of a lot of those underlying content assets and data assets that drive the products at the end of the day. So that’s a little bit about who we are. Like I said, I definitely didn’t appreciate or understand the breadth and scale of that when I first joined the company either.
But it’s been quite eye-opening getting to know that.
Evan: What do you think is going to be one of the big jumps in terms of how most enterprises will get the full value of GenAI versus the current technology today?
Joel: I think, from an enterprise perspective, there are really two things today that are holding back the pace of adoption. One is what I mentioned before, which is trust. Not just from the accountability of the accuracy of the AI system; that’s one element of trust. But also, the element of: what do I explicitly trust an agent to access? What files can it access on my system? On my computer?
What other web applications can it access? Can it access my password manager? And if it does access those things, what can it do with them? The control system around the agent is something that I think really needs to be built out further for enterprises to get comfortable with allowing agents to just do work in a more independent way.
Then the second element of trust, like I said, is building the processes around agents actually doing work versus just being thought partners. I think that’s the step that needs to happen from a human change management standpoint. You’re seeing this happen in software engineering right now, where the job is being redefined and people are now formulating a different understanding of, “What is my role as an engineer?”
The value that you create as an engineer is not the lines of code that you write. In fact, it’s never been the lines of code you write. The lines of code were just the way for you to distribute and express your ideas and judgments. So, what being an engineer means is actually changing.
Some engineers are jumping headfirst into that. Some of them saw their value as the lines of code, and you’ve got to really reframe their thinking there. I think that kind of reframing of roles will happen more broadly beyond software engineers as we go through the next few years too.
Evan: I saw something you presented or wrote recently, and you called it the transition from “copilot to colleague.” Can you share a little more about that? It feels like it relates to this conversation.
Joel: Yeah, I think so. I think it speaks to this mindset change. One of my engineers said something really insightful, at the tail end of last year, when the hockey stick of engineering capability was happening with agents. He said, “Something really changed for my team when we hit 50-plus percent of our code being written by AI.”
It was insightful because it signaled that a human was no longer the primary contributor to the code base. Some other entity was; in this case, AI. So now your job as an engineer shifted from being the contributor and owner of the code base to more being the controller and governor of the code base.
You have to now worry about building systems around AI to steer and correct and guide what it does, rather than doing that work yourself. The more that we start to think about AI as a human-like contributor, the better frame of mind we put ourselves in in terms of how to actually use it.
That’s really what I was trying to get to in terms of copilot to colleague. In some ways, a copilot is: you’re sitting there talking to something to stir up ideas or fill up a blank page. But when you think about it as a colleague, you actually think about delegating work.
If I were to delegate work to a direct report or somebody in my org, I would give them all the information they needed to be successful, but not so much information that I might as well do the job myself. That’s the same sort of act that needs to happen with AI: building systems that give AI enough information to succeed, but not so much that handing it over amounts to doing the job yourself.
That’s the mental model that people need to get into in terms of how they should think about working with AI in the future.
Evan: I think I saw your CoCounsel probably had like a million users. I was kind of curious: what are some of the ways your customers or users are using AI? How is AI changing how they engage with the company or how they use the products? Some things that maybe some of our listeners might find surprising about that.
Joel: Yeah, that’s interesting. You’re right, there are a million users with CoCounsel today. We see really strong adoption. I think a lot of people have looked at the legal industry for a lot of years as this laggard industry that was slow to adopt technology. But from the day ChatGPT started to reshape the technology landscape, the legal industry has been eager and ready to try new technology.
There’s been a lot of eager adoption by the legal industry, but also a lot of hesitation. You see a lot in the news about the consequences of being wrong in the legal industry, particularly when using AI, and the risks that come with that. So there’s a lot of apprehension at the same time, but also a lot of recognition that, “Look, we need to lean in here because it’s really going to reshape how we do work.”
So I think that’s been really good. Our customers have been really great partners in terms of how we build. Thomson Reuters as a whole has really invested and excelled for many, many years in the legal research domains, particularly litigation. About midyear last year, we released something called Westlaw Deep Research, which, across all of the legal industry, might have been one of the more profound product releases.
We hear substantial feedback from customers about the quality of the research being done with that product. Perhaps like you experience using deep research on the web, where you’re shocked at how it can read through 200 websites and synthesize them into a coherent study, we’re doing the same things with deep legal research.
It’s really impressive. It’s really changed the narrative of how people think about legal research. We’re excited to be in beta with our next version of CoCounsel right now, which takes a similar paradigm to a broader swath of legal workflows. In tax, we’ve actually built products that automate the development of tax returns: take a shoebox of W-2s and 1099s, run it through all the tax regulation and tax law, and compute a tax return at the end of the day.
We’re certainly trying to scale up the complexity of those use cases, but we see real potential for agents to complete tax returns now, as well. We’re quite excited about continuing to build that product.
Evan: What are some of the surprising ways you’re using AI across the products? The average person, unfamiliar with your domain, might not fully appreciate the true impact and power, or the new opportunities AI is creating for customers.
Joel: Let me give you maybe a couple of things that would stand out. As an individual in the United States, you have to file tax returns every year. For many people, you might have a CPA that does that return. What your CPA would normally do is send you, and if you’re anything like me, like 30 emails to try to get you to give them all the documents they might need: your W-2s, your 1099s, or whatever.
They would sit there and pore through them one by one, write down all these values into Excel, and use their knowledge of the tax law to then figure out, “Okay, do I take this deduction? How do I do this?” And map that into a tax calculator and ultimately bring you back a return.
They say, “Okay, well, here you go. This is what we’re going to file. This is what credit you’re going to get. This is the refund you’re going to get,” and so forth. With AI today, we now have agents that can do all parts of that process.
The tax professional now basically takes these documents and puts them into our system. Our system reads them and extracts all the appropriate data values. It looks at the tax law and makes interpretations as to how certain data elements should be treated based on the tax law, what positions are most advantageous to take in terms of deductions versus not, and then computes all of that into an actual tax return.
So the tax professional’s job now turns much more into advisory. Rather than just doing grunt work of pulling data off of documents and dumping it into a tax calculator, they’re advising clients much more directly now on how they should think about their taxes, or what alternatives they could think about next year in terms of how they could think about their businesses or their personal lives.
Their job, hopefully, is much more fulfilling in that way. There’s also a significant decline in the number of people graduating and taking the CPA exam. So there are more people filing taxes and fewer people doing them. You have this real constraint in the market, and AI is really helping fill that void.
If you look across the industries that we serve, there are similar examples of that as well.
Evan: What about on the internal AI use side, rank-and-file adoption? Are there some ways you and the team are using AI that your team has come back to you about? Some things your team has been doing with AI that maybe some of your peers might find surprising, might be like, “Oh wow, we should be doing that too.”
Joel: Yeah. In terms of internal AI adoption, like I said, one of my engineers mentioned that 50% figure to me, and I hung on that pretty hard, because last year most of our effort was all about adoption. It was like, “Hey, I just want to make sure that I’ve got 90-plus percent of my org using these tools every day.”
Then we shifted and we said, “Okay, we want to start thinking about not just using the tools, but how we’re using the tools.” For me, in software engineering in particular, I think that meant actually shipping code with AI, like making PRs with AI independently rather than having humans curate every step of the way.
So that’s one of the main OKRs that we have, through the middle of the year: to be AI-first in our engineering teams. We still have quite heavy levels of human review and these kinds of things, but we really want to be quite intentional about how we’re using AI-first in the development flow.
In partnership with our product and design organizations, there’s a lot to be said about just being able to create prototypes faster. But at the end of the day, it’s not just being able to create a prototype. It’s that the prototype the product or design person makes is a much better translation of what’s actually in their head than a written requirement.
Documentation, basically. You can actually use that artifact to develop with AI to say, “I need it to do this, I need it to look like this.” You’re actually using this mockup as the requirements spec to AI. There’s a much tighter loop there in terms of things getting lost in translation or lack of communication.
So those two things together have a really powerful flywheel effect, if you will. The last thing I would say is, I’m really still trying to get better at this at the moment, but it’s really about bringing better transparency and visibility to the whole organization about all the things happening in the organization.
I believe any engineer on my team should be able to ask, “What’s happening with our customers? What did customers say about this product feature in the last 30 days?” We should be able to look at every interaction we had through support or other mechanisms to give our engineering teams more directly a picture of what is happening with the state of their application or the state of this product scope or whatever it may be. That’s just one example, but I think it exists in sales, it exists in marketing.
This data layer of AI is incredibly important and empowering for the rest of the organization. We’re trying to spend a lot of time getting that right and making it available and accessible to more people. I can already see examples where we have done it.
It’s really opened up a lot of doors.
Evan: When you think about your team and you’re thinking about building the next generation, like, “I want a fully AI-powered team,” where are some of the skills that are kind of going up in value versus going down in value?
Joel: Yeah, 100%. I actually wrote an article on this recently, but I really think that these deep specializations, in some areas, have become less important. I don’t think that they’re unimportant. I guess I still believe in this T-shaped model of an engineer, but I think the horizontal slice of the T is more important now than the vertical slice of the T. And your ability to traverse that horizontal slice of the T, I think, is even more important than it ever was before.
A lot of people use the term curiosity, and I do like that term, but it’s not just about being able to be curious. It’s also about being able to learn quickly and fail quickly. A lot of people can be curious and go try a lot of things, but they can ultimately just swim in circles and get nowhere.
The people who are curious and can apply that curiosity to learn and evolve are the ones who have outsized potential in this world. Those are the ones that we really see thriving. They’re also the ones who, just personally, don’t see themselves in a box.
Like I told you, my academic career was as a mechanical engineer. I had no business working as a software engineer. But I always identified myself as an engineer, and an engineer’s job is to solve problems. I don’t care what mechanism I’m using to solve that problem, I’m going to go try to solve it.
The more that people can identify themselves as problem solvers rather than front-end engineers, or back-end engineers, or whatever, the more they open those doors up for themselves to do things that they perhaps aren’t necessarily trained to do in the traditional sense of the word.
Evan: Okay, so speaking of new things, tell me a little bit about how your personal day-to-day has changed in the age of AI. Are there tools you’re using? Are there workflows that have dramatically changed, or things that maybe now, given you can do so much with AI, you’re spending more time on, or vice versa?
How are you personally, as a leader and executive, changing how you work?
Joel: When I have ideas that are strategic or need to be thought through, I’ll run multi-day conversations with AI to work through those problems before I ever bring them to my team or anybody else. I try to test corners of things that are in my head, or intuitions that I have, quite rigorously before I talk to anybody else about them. I find it really helpful to do that, just in terms of thinking.
Then the last thing, which is maybe the most impactful, is it’s allowed me to be way more hands-on than I could have ever been before. What I mean by that is, there are 5,000 engineers in my organization. It was impossible for me to ever look at a line of code that any of them were doing, and it’s 100-plus products.
But now, if I’ve got a meeting with the team coming up on Friday and we’re talking about a product, you can literally go clone the repo and spend an hour with Claude or with Codex talking about the code: what’s in it and what’s changed recently. Get grounded in what’s going on from a technical perspective and be a lot more useful in that conversation versus just coming in and talking about the niceties of how the project is going. That has changed how I operate. First of all, it’s a lot more exciting for me to be able to work at that level. But it also helps me give better guidance and direction to the teams, which is the most important thing at the end of the day.
Evan: At the end of the show, I do a bit of a lightning round where I basically try to get your one-tweet version of some questions that are a little hard to answer.
So, how do you think companies should measure the success of the CTO in the age of AI?
Joel: I think companies should measure the success of the CTO based on velocity and impact right now. First of all, velocity is non-negotiable. But, I think a lot of people are doing really simple, ephemeral things that have a three-month-long shelf life. CTOs should be shipping things that have primitives that are going to be durable, and I think that will prove to be more impactful than just the flashy next thing.
Evan: Let’s say you were talking to someone who just stepped into a new CTO role, and they’re like, “Hey, just give me three must-do things in my first three months, six months, to make sure that I can get the ball rolling around AI transformation.” What might be three areas to focus on?
Joel: First would be coding tools. I would say pick a coding tool and get to more than 50% of your code being shipped by AI as the first thing. The second would be talent. I would index on what you believe is important for talent going forward versus in the past, and where you are relative to that.
Chart a path to make sure your talent is mapped to where you need to be going. Then think about what it is you’re doing. AI has sort of afforded people a lot of opportunities to do anything they could dream up. I think it’s more important to figure out what’s really going to matter going forward and make sure you’re working on the right things.
Evan: What’s your advice for your peers about how to make sure they’re staying close to the frontier of technology, given it’s moving so fast?
Joel: Use it. Use it every day. Just use it. When you see a press release drop on Twitter or something like that, go see what it’s all about. At least for me, that’s how I learn. I just go use it.
Evan: That resonates with me. I have to play around with it to really understand. Okay, so here’s the personal side. What’s a book you’ve read that’s had a big impact on you? I’d love to hear why. And it doesn’t have to be work-related.
Joel: I’ve given this answer before, but I would say Thinking, Fast and Slow by Daniel Kahneman is probably the most impactful book I’ve ever read. First of all, I like social psychology, but it also really helped me understand how to work with others. Not just operate and communicate from my perspective and with my bias, but also perceive how others might come to this conversation with their perspective, with their bias, and adapt myself to the situation to ultimately either sell my idea, or get alignment, or whatever it is I need to do.
It helps me move faster by having that social awareness, if you will, as well as personal awareness, because I have my own biases and things like this. That recognition, and also that he presents it in an exceptionally data-driven way, has really shaped how I think about interaction in general.
Evan: The last question is my favorite one: what do you think could be true about the future of AI’s impact in the world that most people consider science fiction today?
Joel: I believe we’re going to go through a bubble, even though I think it’s going to be hugely impactful in the long run. As hugely beneficial as I find it myself today, I think the economic models of how this actually scales aren’t quite proven yet. There’s a lot of really highly leveraged investment, sort of betting on it.
It just feels quite aggressive from that perspective. So I think there’s going to be some short-term bumps in the road as we figure out how the model of that works, but the end state of it will be as impactful as people project it will be.
Evan: Well, Joel, I’ve got a hundred more questions for our version 2 of this episode, but I appreciate you taking the time to join us. Thank you so much.
Joel: Thanks for having me, Evan.
Evan: That was Joel Hron, CTO at Thomson Reuters.
Saam: Thanks for listening to Enterprise AI Innovators. I'm Saam Motamedi, a general partner at Greylock Partners.
Evan: And I'm Evan Reiser, the founder and CEO of Abnormal AI. Please be sure to subscribe so you never miss an episode. Learn more about enterprise AI transformation at enterprise software (dot) blog.
This show is produced by Abnormal Studios. We'll see you next time.