On the 66th episode of Enterprise AI Innovators, host Evan Reiser (Founder and CEO, Abnormal AI) talks with Aldo Noseda, Chief Information Officer at Eastman Chemical Company. Noseda frames Eastman's AI posture in a way that matches the reality of a 100+ year-old chemical manufacturer: move fast where the downside is limited, and raise the bar sharply where AI output could influence physical operations. Eastman is a roughly $10 billion specialty chemicals company, and Aldo's throughline is practical: build capability that people will actually use, package value into customer-facing offerings, and treat risk tolerance as situational, not philosophical.
The most concrete example is that Eastman is not keeping AI inside the enterprise. The company has begun offering digital solutions to customers "in the form of services," including a product called Fluid Genius. Eastman sells thermal fluid that customers use to move heat through plant operations, and the fluid degrades over time. Fluid Genius embeds an AI capability that predicts degradation, enabling customers to time maintenance and avoid costly disruptions. Aldo's point matters for enterprise leaders who want AI to show up as revenue, not just productivity: start with a narrow operational pain that customers already pay to solve, then wrap the model in a workflow that makes the timing of action obvious.
Inside Eastman, the complementary move is what Aldo calls "AI for the masses." Rather than betting on only a handful of advanced projects, Eastman built an internal engine that leverages tools already on the market and wraps them with security and customization for Eastman's needs. That wrapper is the product decision: it creates a sanctioned place for day-to-day use without every use case becoming a heavy IT project. The result is broad employee pull, with Aldo citing approximately 6,000 recurring users. The adoption lesson is simple: if you want enterprise-wide usage, remove friction, make it safe, and meet people at the individual-work level before you ask for deeper workflow rewrites.
From there, Eastman pursued operational wins that compress time-to-value. Aldo points to the Tier 1 IT helpdesk as an example of high-volume, repeatable work that maps cleanly to scripted resolution paths. Eastman “loaded the script” and put an AI layer on top, standing up a working experience in two weeks. He also shares a striking internal signal on software throughput: programmers moved from roughly 5,000 lines of code per month to around 40,000 using AI agents. Regardless of how each organization measures output quality, the takeaway is that leaders need a few visible, repeatable wins that change employees' daily experience. Those wins become the credibility engine for more ambitious initiatives, including more complex growth use cases and deeper process redesign.
The chemical industry context forces sharper boundaries around trust. Aldo is explicit that these systems are probabilistic and can hallucinate, so deployment decisions should be governed by consequence. If the use case is a sales coaching tool, some risk can be tolerated because a human can verify, and the downside is contained. If the output can influence “opening or closing a valve in the manufacturing plant,” the organization needs higher scrutiny, validation, and control. He extends the same realism to the broader AI conversation: the industry talks plenty about possibility but not enough about “translation to value” and change management. Eastman’s response includes training, a communication plan, and a change management plan, plus space for experimentation in a safe sandbox so employees can learn what works without creating avoidable exposure. The operating principle he returns to is not hype; it is sequencing: widen access with guardrails, prove value in visible workflows, and only then push into higher-stakes domains with stricter controls and accountability.