How leading companies are actually scaling industrial AI (and what you can learn from them)

At Hello Tomorrow Consulting, a recurring theme in conversations with senior leaders across industrial sectors is that their organisations are not lacking AI in industry initiatives; most have several running simultaneously. Nor do they lack the budget, vendor relationships or internal appetite for transformation. Rather, a more specific frustration keeps surfacing: the AI is working in the narrow sense that the models perform and the industrial pilots deliver results, but almost none of it has changed how work is actually done.

According to McKinsey, around 78% of organisations now report using AI in at least one business function, yet only around 6% achieve significant bottom-line impact, defined as attributing five percent or more of EBIT to their use of AI. That gap is explained by the decisions organisations make, or fail to make, about how AI connects to operational reality: who owns the data it depends on, whether the people responsible for acting on its outputs actually trust it, and whether the knowledge that makes it useful has ever been properly extracted from the heads of the people who hold it.

We hosted a knowledge-sharing webinar with senior leaders from Engie, Thales, and AI Bobby to explore the key pain points faced by our Hello Tomorrow Consulting clients, and how we can support them on their AI in industry growth journey.

Here are the main takeaways.

Engie: Start with the simplest model and the most domain knowledge

Mihir Sarkar, Head of AI at Engie Research and Innovation, has spent seven years applying AI to physical energy systems – power plants, networks, storage assets – and his starting point is one that most corporate AI teams instinctively resist. Rather than reaching for sophistication, he starts with the problem, then builds the simplest model that can address it, loaded with as much domain knowledge as the organisation can bring to bear. As he put it:

“Starting with the simplest AI model you can build, with the least amount of data and the most amount of domain knowledge, makes the better usage of it.”

That principle matters more than it might first appear, because industrial data is almost never in the condition people assume it to be. It is fragmented across systems, inconsistent in format, and full of gaps that only become visible once a model tries to act on it. Making it usable requires sustained work: cleaning, structuring, and building the shared definitions that allow a system to understand what the data actually means in context.

That work depends entirely on domain expertise, and for most of Engie’s history, that expertise lived exclusively in the heads of experienced technicians who had worked the same assets for decades. Engie is now systematically drawing it out, capturing it in ticketing systems, ERP platforms, and graph databases, and feeding it back into AI models directly. The result is what Mihir calls hybrid models: data-driven AI trained on sensor data, augmented with physics-based equations, and enriched with the unstructured knowledge that operators have accumulated over careers.

Getting to that point, however, surfaces an organisational problem that most AI roadmaps overlook entirely. The teams responsible for generating and maintaining the data that feeds a model are rarely the teams that benefit from its outputs. That misalignment is one of the most consistent reasons pilots never reach production: the people being asked to invest in data quality have no stake in what the model produces. Resolving it requires deliberate decisions about data ownership and incentive alignment well before the technical work begins, and organisations that treat it as an afterthought consistently pay for that decision later.

Once that foundation is in place, the operational shift Mihir describes becomes possible, and it is perhaps the most significant of the three. Most predictive maintenance systems stop at generating an alert: they tell an operator something is wrong and leave the rest to human judgment. What Engie is building goes further, combining AI with physics-based modelling to diagnose the root cause and propose the next maintenance action. The operator receives not just a signal but a reasoned suggestion grounded in what is actually happening inside the asset. That changes the relationship between the system and the people using it in a way that a dashboard of alerts never does, and it is ultimately what determines whether adoption becomes self-sustaining or stalls the moment the implementation team moves on.

Takeaway

Before scoping your next AI initiative, three questions are worth answering first: 

  • What does your most experienced engineer know that is not written down anywhere? 
  • Who owns the data this system will depend on, and do they have any reason to care about its quality? 
  • When you scope an AI in industry initiative, do you clearly define the objective it needs to support? 

Hello Tomorrow Consulting can support you in designing and implementing your AI in industry strategy.

Thales: Setting up the right governance frameworks is a key success factor

While Engie’s case establishes what it takes to embed AI inside a single organisation’s operations, Juliette Mattioli, VP AI Fellow at Thales, takes the argument into its most demanding context – air traffic management, defence systems, and critical infrastructure – where the standard for AI is not accuracy alone but verifiable proof, and where the consequences of getting it wrong extend well beyond a missed business target.

Thales has been working with AI for forty years, but for most of that time, Juliette notes, it was essentially craftsmanship: bespoke solutions built by individual experts for specific projects, rarely portable and impossible to audit at scale. What changed, she explains, is the emergence of AI engineering as a formal discipline, one that Thales has spent the last six years actively helping to build, revisiting software and systems engineering practices from the ground up to account for the specific properties of AI systems. 

That effort produced a governance framework built around four properties that any AI deployed in a critical environment must be able to demonstrate: 

– Security, meaning the system is resilient to adversarial interference and data manipulation;

– Explainability, meaning a human operator can understand and account for any decision the system makes;

– Responsibility, meaning accountability is unambiguous when something goes wrong;

– Validity, meaning the system does what it is supposed to do, all of it and only that.

That last property is where the operational discipline becomes most demanding, and where the lesson for corporate teams outside defence is sharpest. Juliette is explicit that deploying a valid system is not the end of the governance obligation. A system that met the validity standard at deployment may diverge from it over time as data distributions shift, edge cases accumulate, and the operational environment evolves. The engineering discipline Thales has built exists precisely to catch that drift before it becomes consequential, through continuous monitoring, structured audit trails, and the kind of version control that most commercial AI deployments do not yet treat as a baseline requirement.

The reason this matters beyond aerospace and defence is that regulatory and organisational expectations around AI accountability are moving in the same direction across every sector. The teams that begin building audit-ready systems now, rather than retrofitting governance onto models that were never designed for it, are the ones that will be able to move quickly when those expectations arrive rather than pausing to rebuild.

Takeaway

Juliette Mattioli’s work highlights the concept of trustworthy AI as a structural requirement for the broader corporate world. As AI becomes involved in higher-stakes decisions relating to quality control, the supply chain, and financial operations, the governance and explainability requirements for which Thales has already developed solutions will become essential.

At Hello Tomorrow Consulting, we help organisations treat trustworthiness as a design principle from the outset, rather than as an afterthought for compliance purposes. The companies that do are the ones whose systems will actually be used and scaled.

AI Bobby: Turn fragmented expertise into cumulative intelligence with the right partners 

Both Mihir and Juliette are solving the knowledge and governance problem inside their organisations. Dominik Grabinski, founder of AI Bobby, a DeepBright Ventures portfolio company and a Hello Tomorrow Deep Tech Pioneer, is dealing with the same problem at a different scale: across an entire value chain, where the knowledge needed to solve a shared industrial challenge is scattered across companies that have every incentive to protect it and no existing mechanism to share it. That dynamic is worth understanding carefully, because it surfaces in some form across most complex industrial sectors.

Dominik structures his argument around four lessons built from direct experience in the alternative protein sector, an industry where, by his own account, more than $20 billion has been invested over two decades and where the sector still accounts for less than one percent of the global protein market. His diagnosis of why is instructive well beyond food and ingredients.

The first lesson is about targeting. AI must be directed at the actual bottleneck limiting performance, not the most impressive demonstration or the most fashionable business case. In the protein world, that bottleneck is functionality: understanding with precision how a protein will behave under real industrial conditions of extraction, processing, formulation, and scaling. Most AI initiatives in the space, he observes, address different problems entirely, which is why the investment has not translated into market penetration. The same pattern of misaligned targeting appears across sectors.

The second lesson concerns data quality over data volume. A smaller, well-understood set of decision-relevant data points will consistently outperform a larger, unstructured one. The practical implication is not to wait until more data is available, but to invest in understanding what the data already held actually means: its context, its relevance, and its limitations. This is the same point Mihir made about industrial sensor data, and the fact that it recurs across two very different technical domains suggests it is structural rather than sector-specific.

The third is about embedding. AI only creates compounding value when it becomes part of how teams actually work: how they search, how they decide, how they prioritise, how they learn from outcomes. Until that integration happens, AI remains a tool that sits alongside the work rather than inside it, which means its outputs are consulted occasionally rather than acted on consistently.

The fourth lesson is where Dominik is most direct, and where the argument moves from organisational to systemic. In the alternative protein sector, no single player sees the full picture. The seed producer, the ingredient supplier, the formulator, and the final brand each hold a piece of the knowledge needed to solve the functionality problem, and when each protects that knowledge tightly enough, the entire system moves at the speed of its most isolated participant. As Dominik puts it:

“The biggest problem is not lack of talent. It is not even a lack of data, since we have plenty of it. The real problem is that knowledge is everywhere, but usable intelligence is nowhere.”

The pharmaceutical industry faced a structurally similar problem in clinical trial data and began building infrastructure to address it, most notably through TransCelerate Biopharma, a non-profit consortium founded in 2012 by a group of major pharmaceutical companies to reduce duplication, establish shared data standards, and create common platforms for clinical trial information. The model did not eliminate competitive boundaries; instead it drew them more precisely, allowing organisations to collaborate on pre-competitive knowledge while protecting what was genuinely proprietary. The progress has been incremental, but the infrastructure it created continues to accelerate development cycles in ways that individual organisations could not achieve independently.

The food and ingredients industry is approaching a comparable inflection point. The evidence from the pharmaceutical industry suggests that cross-industry data collaboration can accelerate innovation. But can the sector establish the governance frameworks necessary for this before the cost of continued fragmentation becomes unsustainable?

As Dominik puts it: “If AI is the engine, data collaboration is the fuel.” What he is describing is a shift from fragmented expertise to cumulative intelligence: knowledge that compounds at an ecosystem level rather than remaining locked inside individual organisations. This shift is what makes the other three lessons worthwhile.

Takeaway

In the AI era, infrastructure that enables shared learning within an organisation and across sectors – between corporate R&D, start-ups and research institutions – is a major competitive asset. Companies that invest in this infrastructure now will set the pace for everyone else. This requires identifying what your organisation is willing to share, choosing the right partners to share it with, and establishing those relationships early on.

With our network of over 10,000 qualified startups and ecosystem players worldwide, Hello Tomorrow Consulting can help you identify and engage with the right partners for your AI in industry journey.

Recognising the cross-industry pattern in industrial AI, and where it leaves you

Three industries, three distinct technical contexts, one structural finding running through all of them. 

– Engie built measurable operational value not by deploying sophisticated models but by encoding decades of domain expertise into systems that operators could act on with confidence.

– Thales demonstrated that the difference between AI that gets adopted and AI that gets bypassed is whether the people responsible for acting on its outputs can understand, verify, and trust what it tells them.

– AI Bobby identified that the real bottleneck in alternative protein innovation is not technological capability but the fragmentation of knowledge across a value chain where no single player sees the full picture.

What these three cases have in common is a series of organisational decisions that were made before technology became the main focus. These decisions included extracting and encoding the knowledge of the most experienced people, establishing governance around data ownership, designing for trustworthiness from the outset and investing in the ecosystem of partners needed to turn fragmented expertise into something that can be built upon. None of these decisions are technically complex. However, they are the ones that distinguish organisations that use AI from those that scale it, and the longer they are deferred, the harder they are to implement.

Before scoping your next AI initiative, five questions are worth sitting with honestly. 

– Is the domain knowledge your organisation has accumulated structured clearly enough to feed a system – or does it still live primarily in the heads of your most experienced people?

– When you scope an AI initiative, do you clearly define the objective it needs to support?

– Who owns the data your AI systems depend on – and do they have any reason to care about its quality?

– When your systems generate outputs, can you prove why – and will that proof still hold in three years?

– Who are the partners – startups, research institutions, peers in adjacent sectors – who hold the knowledge your organisation needs and would benefit from what you hold?

If any of those questions surfaces a gap, that is precisely where the work starts. And it is the kind of work that goes significantly better when you are not navigating it alone.

Let's Talk

Hello Tomorrow supports corporates in their innovation journey and in identifying and engaging with relevant startups. We engage with our partners across several complementary levels:

– Strategic foresight: market mapping, trend analysis, and technology watch to identify emerging opportunities

– Startup scouting & open innovation challenges: running targeted scouting or designing and managing open innovation challenges

– Startup evaluation: supporting the assessment of startups through dedicated due diligence

– Collaboration support: structuring and de-risking partnerships by defining frameworks such as IP terms, R&D timelines, and resource allocation

Get in touch with our consulting team

Authors

Massera Winigah
Marketing & Communications Manager

Ksenia Etcheverry
Head of Marketing & Communications

Contributors

Fannie Delavelle
Consulting Senior Manager – Deep Tech & Innovation Strategy
