The Moats Are Falling: Will Consulting Survive AI?

Posted on Fri 13 March 2026 in Industry • 10 min read

There's a joke that makes the rounds among engineers: Doctors will survive AI because everybody needs a fall guy. Engineers will survive because no engineer will use AI.

It's funny because both halves contain an uncomfortable truth. The liability moat is real — someone has to stamp the drawing, someone goes to court when the bridge fails. And the conservatism of the engineering profession (some disciplines more than others) is also real — we're slow to adopt, slow to trust, slow to change.

But I've been thinking about this more seriously lately. Not as a joke, but as a genuine question: what happens to engineering and scientific consulting when AI can do most of the analysis?

I don't ask this as a spectator. I've spent years in engineering consulting — first during my PhD and research fellow days, and now building machine learning software that data scientists use to deliver consulting in domains I wasn't originally trained in. I've watched AI change my own work in ways I didn't expect. And some of what I've seen makes me genuinely uncertain about where this profession is headed.

The Democratisation Didn't Start with AI

We tend to talk about AI as if it appeared out of nowhere in 2022. It didn't. I see it as the latest stage in a democratisation of knowledge that has been underway for decades.

Wikipedia made information accessible. Google made it searchable. But AI does something qualitatively different: it teaches. It doesn't just store knowledge or help you find it — it explains, contextualises, and adapts to your level. It can synthesise large chunks of seemingly disparate information into coherent explanations. For someone with strong foundations and a hunger to learn, it's like having a patient domain expert available at all hours.

This matters enormously for consulting. The traditional consulting model rests on knowledge asymmetry — the consultant knows things the client doesn't. That asymmetry has been eroding for years, but AI accelerates it dramatically. A client can now ask an LLM to explain failure modes, generate preliminary analysis, or cross-check a consultant's recommendation. The "information broker" role of consulting is collapsing.

The question is: what's left when the domain expertise is cheap?

What Consulting Actually Sells

Let me be direct about what engineering/scientific consulting delivers. It's not reports. It's not analysis. It's not even expertise in the narrow sense. What clients actually pay for is judgment.

Judgment is knowing which analysis to run and which to skip. To borrow an analogy from my engineering days: it's walking onto a site, looking at a crack, and knowing from experience whether it's cosmetic or structural — not because you ran a model, but because you've seen hundreds of cracks in hundreds of structures. It's the ability to say "this analysis looks right, but I know it's wrong because..." It's integrating structural, environmental, economic, and social considerations into a recommendation that accounts for uncertainty and risk appetite.

AI is getting remarkably good at the execution layer — running analysis, processing data, drafting reports. It's making progress on interpretation — spotting patterns, flagging anomalies. But it barely touches the judgment layer. Or at least, not yet.

This creates a three-layer picture of consulting value:

Execution — doing the analysis. AI disrupts this heavily. What took a junior engineer two weeks now takes two days, sometimes two hours.

Interpretation — understanding what the analysis means. AI is partially capable here, but still misses context that experienced engineers catch intuitively.

Judgment — deciding what to do about it. This remains deeply human. It requires experience, ethical reasoning, risk calibration, and the kind of cross-domain thinking that comes from years of seeing how systems actually behave.

If AI handles execution and partially handles interpretation, then the consultant's job becomes primarily about judgment. But most scientific consulting firms aren't structured for that. They have large teams whose day-to-day work is execution — running models, processing data, writing reports. If AI takes over that work, the org chart doesn't make sense anymore.

The Moats Are Falling

For decades, engineering and scientific consulting firms have built their competitive advantage on a few key moats: proprietary software, accumulated domain expertise, and closely guarded methodologies. These moats are falling faster than most firms realise.

AI is, at its core, a plagiariser. It copies. It absorbs the collective knowledge of published literature, technical reports, methodologies, and standards — and makes that knowledge available to anyone who asks. The IP concerns that dominated headlines when ChatGPT launched in 2022? Society has largely moved on. The technology can't be stopped, and we've collectively reconciled with that fact.

But here's what keeps me up at night: it only takes a few leaks to drain an entire industry's moat.

Even if your firm is careful about protecting intellectual property, it doesn't matter if a handful of firms elsewhere in the world aren't. Employees using LLMs can inadvertently feed proprietary knowledge into systems that learn from it. Tech companies may not hold up their end of service agreements. A few leaks in the dam, and the reservoir of closely guarded expertise becomes publicly accessible knowledge. It's a tragedy of the commons, and I don't think most organisations appreciate how quickly it's happening.

I wish I had a clean answer for how firms should protect their knowledge. I don't. But ignoring the problem isn't a strategy.

The Real Divide: Knowers vs. Figure-Outers

Here's what I think is the most important — and least discussed — shift that AI brings to consulting.

The most valuable person in consulting used to be the one who knew the most. The walking encyclopedia. The person who had memorised standards, understood regulatory nuances, accumulated decades of pattern recognition in a narrow domain. This person was expensive to replicate because their knowledge took 20 years to build.

AI changes the equation. If domain knowledge is increasingly accessible — if the regulations, standards, and accumulated wisdom of a field can be surfaced on demand — then the premium shifts. It shifts toward the person who can figure things out.

The figure-outer has a different skill set. First-principles reasoning. Mathematical maturity. The meta-skill of learning how to learn. The ability to pick up a new domain, understand its structure, identify its core problems, and start contributing at a high level — not in 20 years, but in months.

My PhD taught me many things, but the most important was this: if you can think from first principles and you don't have bottlenecks in mathematics or complex reasoning, there isn't much you can't learn. The only question was always time. How much time does it take to ramp up in a new domain?

AI compresses that time dramatically. Not for everyone — garbage in, garbage out. If you don't have strong foundations, AI won't build them for you. But for people with the right toolkit and a genuine drive to learn, AI is a force multiplier. The foundations are the differentiator. AI is the multiplier. A multiplier on zero is still zero.

This has profound implications for how consulting firms should think about hiring, team composition, and what "expertise" actually means. The domain keeper — the person whose value comes from knowing the rules — is increasingly vulnerable. The figure-outer — the person whose value comes from reasoning ability and adaptability — is increasingly powerful.

Young Firms, Old Problems

There's a structural advantage that younger, leaner organisations have in this transition, and it comes down to something mundane: their tech stack.

Many established consulting firms are locked into proprietary software ecosystems. Their workflows depend on specific tools, their institutional knowledge lives in specific platforms, and any change requires waiting for software vendors to integrate new capabilities. When AI advances, they wait for the next software release.

Younger firms building on open ecosystems — Python-based tools, open-source libraries, modular architectures — can move much faster. They can train language models to act as agents that orchestrate their analytical workflows. They can integrate new capabilities as they emerge. MCP (the Model Context Protocol) wasn't a thing two years ago; today it enables agentic patterns that fundamentally change how software tools interact with AI. Firms built on flexible, programmable foundations can absorb these advances immediately. Firms waiting for vendor integration are perpetually behind.

And here's what concerns me: most engineering and scientific consulting firms seem to think that building LLM chatbots and RAG systems is "adopting AI." Writing better emails. Drafting reports faster. That's surface-level adoption. It's useful, but it's not where the real disruption happens.

The real opportunity — and the real threat to firms that miss it — is when AI moves into the hardcore engineering and scientific work itself. Think end-to-end probabilistic simulation pipelines that burn GPU compute, orchestrated autonomously by LLM agents. Not a chatbot that summarises your report, but an agent that designs the simulation, runs it, interprets the results, and flags the cases that need human judgment. That's the vision. And the firms built on open, programmable foundations are the ones that can get there.
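To make that pattern concrete, here's a deliberately miniature sketch of the run-interpret-escalate loop. Everything in it is invented for illustration — the toy Monte Carlo simulation, the failure-probability threshold, and the case names are placeholders, not a real agent framework or a real analysis. The shape is the point: execution runs autonomously and cheaply, and only the marginal cases are escalated to a human.

```python
import random


def run_simulation(load_mean, load_sd, capacity, n=10_000, seed=42):
    """Toy Monte Carlo (hypothetical): probability that a normally
    distributed demand exceeds a fixed capacity."""
    rng = random.Random(seed)  # seeded for reproducibility
    failures = sum(rng.gauss(load_mean, load_sd) > capacity for _ in range(n))
    return failures / n


def triage(cases, threshold=0.05):
    """Route each case: auto-accept low estimated risk, flag the rest
    for human judgment. The 5% threshold is an arbitrary placeholder."""
    accepted, flagged = [], []
    for name, params in cases.items():
        p_fail = run_simulation(**params)
        (flagged if p_fail > threshold else accepted).append((name, p_fail))
    return accepted, flagged


# Placeholder cases: one with a comfortable margin, one marginal.
cases = {
    "beam_a": {"load_mean": 80, "load_sd": 10, "capacity": 120},
    "beam_b": {"load_mean": 100, "load_sd": 15, "capacity": 110},
}
accepted, flagged = triage(cases)
```

In a real pipeline the `run_simulation` stub would be a GPU-backed solver and the triage policy would be far richer, but the division of labour is the same: the agent burns compute on execution and interpretation, and the consultant's attention is spent only where judgment is actually needed.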

This doesn't mean large firms can't adapt. But it means adaptation requires more than buying an AI subscription. It requires rethinking how work is structured, how tools are built, and how quickly the organisation can evolve.

And here's the thing that established firms should find genuinely uncomfortable: AI is getting better at judgment too. Not just execution, not just interpretation — judgment. It's not there yet, but the trajectory is clear. With the right vision, a resource-constrained young organisation can go all-in on AI and use its massive leverage to either tackle problems grander than what traditional consulting even attempts, or close the gap with the biggest and best firms in a terrifyingly short time. The competitive advantage that large consultancies have spent decades building? It wouldn't take much to erode it.

Opportunity and Existential Risk

I don't want to end this as either a doom piece or a hype piece, because it's genuinely both.

The opportunity is enormous. AI makes it possible for smaller teams to deliver work that previously required large organisations. It makes cross-domain consulting viable for people with strong foundations. It democratises access to analytical capability. For consultants who genuinely deliver judgment and insight — not just hours on a timesheet — AI makes their work more valuable, not less. If you solve a problem worth millions, the fact that you solved it in four hours instead of four hundred doesn't diminish your contribution. It might increase it.

But the existential risks are real. The erosion of knowledge moats threatens firms that haven't built defensible value beyond "we know things you don't." The IP leakage problem has no clean solution. And the cultural conservatism of engineering — the very thing that makes the profession trustworthy and safe — also makes it slow to adapt.

What I keep coming back to is this: the teams and organisations that will thrive are the ones that hire figure-outers. Not people who toe the line, but people who can think from first principles, learn rapidly, and adapt as the ground shifts beneath them. Because with AI, figuring things out has become dramatically easier for people with the right foundations.

The premium isn't on knowing anymore. It's on thinking.