Siemens’ CTO on industrial AI: train doors matter more than brakes

Industrial AI delivers the most value in everyday, high-frequency processes, like predicting train door failures, where accuracy, reliability, and domain-specific data are critical. However, the real challenge isn’t the technology itself but organizational readiness, including data sharing, workflow integration, and preserving expert knowledge.

In a recent episode of MIT Sloan’s “Me, Myself, and AI” podcast, Peter Koerte – Siemens’ Chief Technology Officer and Chief Strategy Officer – made a point that’s easy to miss if you’re skimming for big-picture AI strategy talk.

He was discussing predictive maintenance for trains. Most people would assume the brakes are the most failure-critical component. Siemens’ data says otherwise: it’s the doors. They open and close thousands of times a day, wear down faster than almost anything else on the vehicle, and when they fail, the train stops running. Using AI, Siemens can now predict door failures several days in advance.

It’s a small example, but it says something important about where industrial AI actually creates value. Not in the flashy demos. In the mundane, high-frequency processes that nobody thinks about until they break.
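Siemens hasn't published how its door-failure prediction works, but the shape of such a system is easy to sketch: watch a wear indicator (say, mean door actuation time per day, a hypothetical signal) trend upward, fit the trend, and extrapolate to a failure threshold. A minimal illustration, with invented numbers and field names:

```python
def days_until_threshold(daily_means_ms, threshold_ms=900.0):
    """Extrapolate a linear wear trend in door actuation time.

    daily_means_ms: mean door-cycle duration per day (hypothetical telemetry).
    Returns estimated days until the threshold is crossed, or None if the
    trend is flat or improving.
    """
    n = len(daily_means_ms)
    if n < 2:
        return None
    xs = range(n)
    x_bar = sum(xs) / n
    y_bar = sum(daily_means_ms) / n
    # Ordinary least-squares slope: extra milliseconds of actuation per day.
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, daily_means_ms))
    sxx = sum((x - x_bar) ** 2 for x in xs)
    slope = sxy / sxx
    if slope <= 0:
        return None  # no wear trend to extrapolate
    current = daily_means_ms[-1]
    if current >= threshold_ms:
        return 0.0
    return (threshold_ms - current) / slope
```

With four days of slowing doors, `days_until_threshold([800, 810, 820, 830])` estimates seven days of margin. A production system would be far richer (multiple sensors, learned thresholds, per-fleet baselines), but the point stands: the signal is mundane, high-frequency, and very predictable.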

Industrial AI is not consumer AI

Koerte drew a clear line between consumer AI and industrial AI. Consumer models train on publicly available internet data – text, images, conversations. Industrial AI depends on proprietary, domain-specific data that companies are often reluctant to share. A model that writes emails and a model that predicts equipment failure operate in entirely different worlds.

The accuracy requirements are also different. A chatbot that gets something wrong wastes a few minutes of someone’s time. An industrial AI system that misreads a sensor value or miscalculates a tolerance can shut down a production line, damage equipment, or create safety hazards. Koerte was blunt about this: in industrial contexts, AI needs to be safe, reliable, and trustworthy. There’s no “move fast and break things” when the things you’re breaking are physical.

This distinction matters because a lot of the current AI conversation treats all models as roughly interchangeable. They’re not. The infrastructure, data pipelines, validation requirements, and failure modes for industrial applications look nothing like those for a consumer chatbot.

The data problem nobody wants to talk about

One of the more honest moments in the conversation was about data sharing. Industrial AI is hungry for domain-specific data, but companies treat that data (rightly) as proprietary. The result is a chicken-and-egg situation: AI models need diverse industrial data to become reliable, but no company wants to hand over operational data that could reveal inefficiencies, production volumes, or competitive information.

Koerte pointed to data-sharing partnerships as a way forward – arrangements where companies contribute data under controlled terms and get tangible benefits back, like better maintenance predictions or reduced downtime. Siemens itself has seen companies using its Senseye Predictive Maintenance platform reduce maintenance costs by 40% and cut machine downtime by half. But getting there requires trust and structure that most industries haven’t built yet.

This is probably the single biggest bottleneck for industrial AI adoption right now. Not model capability, not compute costs. Just the basic willingness to let data flow where it needs to go.

Smart glasses and the knowledge drain

An interesting tangent in the discussion was about smart glasses on the factory floor. Siemens announced a collaboration with Meta at CES 2026 to bring industrial AI to Ray-Ban smart glasses. The idea: a maintenance technician wearing the glasses gets real-time guidance overlaid on what they’re looking at. Step-by-step instructions, historical repair data, alerts from the predictive maintenance system.

The use case Koerte was most interested in wasn’t efficiency – it was knowledge preservation. Manufacturing companies across Europe are losing experienced workers to retirement faster than the roles can be backfilled. When a 30-year veteran leaves, their institutional knowledge about specific machines, edge cases, and workarounds leaves with them. Smart glasses with AI can capture some of that knowledge and make it available to the next person on the shift.

It’s a pragmatic approach to a problem that training programs alone don’t solve. A new technician can learn procedures from a manual. They can’t learn from a manual that the third valve on line 7 sticks in cold weather and needs a specific tap before it’ll open smoothly. That kind of knowledge either gets recorded and embedded in systems, or it disappears.
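"Recorded and embedded in systems" can be as unglamorous as a structured note tied to an asset. A sketch of what such a record might look like – invented field names and naming scheme, not any Siemens schema:

```python
from dataclasses import dataclass, field

@dataclass
class MaintenanceNote:
    """One captured piece of tribal knowledge, tied to a specific asset."""
    asset_id: str                  # e.g. "line7/valve3" (hypothetical scheme)
    symptom: str
    workaround: str
    conditions: list[str] = field(default_factory=list)  # when it applies
    author: str = "unknown"

def notes_for(asset_id, notes):
    """Return the notes a technician's display should surface for an asset."""
    return [n for n in notes if n.asset_id == asset_id]

# The valve example from the text, captured before the veteran retires:
kb = [
    MaintenanceNote(
        asset_id="line7/valve3",
        symptom="valve sticks before opening",
        workaround="tap the actuator housing once before cycling",
        conditions=["ambient below 5 C"],
        author="30-year veteran",
    )
]
```

The hard part isn’t the data structure – it’s getting the note written down at all, which is exactly what a heads-up display during the repair makes plausible.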

The transformation problem

The most relevant part of the conversation for anyone involved in digital transformation wasn’t about any specific technology. It was Koerte’s point that integrating AI is fundamentally an organizational challenge, not a technical one.

New AI tools disrupt established workflows. They raise questions about who does what, how decisions get made, and what happens to roles that were built around manual processes. A factory that introduces AI-based quality inspection doesn’t just need the model – it needs to figure out what happens when the model flags something. Who reviews it? What’s the escalation path? How does it connect to the manufacturing execution system (MES)? Who owns the data?

These aren’t engineering questions. They’re management questions. And they tend to take longer to answer than the technical implementation itself.
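Once those management questions have answers, encoding them is almost trivial by comparison. A hypothetical routing rule for a flagged part – thresholds and destinations are illustrative, not any real MES integration:

```python
def route_inspection_flag(defect_score, confidence, review_threshold=0.7):
    """Decide what happens when an AI quality model flags a part.

    All thresholds and destination names are assumptions; in practice they
    encode the management decisions the model itself can't make.
    """
    if confidence < review_threshold:
        return "human_review"          # low-confidence flags need a person
    if defect_score >= 0.9:
        return "scrap_and_notify_mes"  # clear defect: stop it at the line
    return "rework_queue"              # likely defect, salvageable
```

Three lines of logic – but each branch presupposes an answer to “who reviews it?” and “what’s the escalation path?”, which is why the organizational work dominates the timeline.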

Koerte also acknowledged that Siemens doesn’t build all of this alone. The company partners with Nvidia on simulation and digital twins, with AWS and Google on cloud infrastructure, and with specialized AI companies on specific vertical applications. That ecosystem approach is probably more realistic than any single-vendor strategy for most industrial companies.

What this means in practice

The Siemens example is useful because it’s concrete. This isn’t a startup demo or a research paper. It’s a company with 300,000 employees and 50 years of applied AI experience talking about what works in production.

A few things stand out. First, the highest-value AI applications in industry tend to be boring. Predicting door failures, routing maintenance crews, optimizing building climate systems. Not glamorous, but measurable.

Second, domain expertise matters more than model sophistication. Siemens’ advantage isn’t that it has a better language model than anyone else. It’s that it has decades of industrial data and engineers who know what the data means.

Third, the organizational side is the hard part. The technology is increasingly ready. The question is whether companies have the data infrastructure, the process clarity, and the management buy-in to actually use it.

For companies in the DACH region and Central Europe, where manufacturing is a core economic driver, these aren’t abstract observations. The gap between companies that figure out industrial AI and those that don’t will show up in operational costs, maintenance budgets, and the ability to retain institutional knowledge as the workforce turns over. It’s not a future problem. It’s a current one, getting more expensive to postpone with every quarter.