Why Capability Maturity Assessment Doesn’t Work – And Why GenAI Won’t Fix It (On Its Own)

Most organisations struggle with strategy execution. Not because they lack strategy – but because they lack a clear, continuous understanding of whether their organisation is actually capable of delivering it. This is exactly what capability maturity assessment is supposed to solve. And yet, in practice, it doesn’t.

The uncomfortable reality

In theory, maturity assessments should be a core management tool. In practice, they are:

  • Infrequent (often annual or ad hoc)
  • Partial (covering only a subset of capabilities)
  • Weakly connected to actual decision-making

In my recent research with Enterprise Architecture practitioners, this pattern was consistent:

Capability maturity assessment is not embedded as a continuous discipline; it is episodic and tactical.

That alone should give us pause: if something is genuinely valuable for strategy execution, it doesn’t get used once a year. The default explanation is simple – maturity assessments are too time-consuming. But that’s not what the data shows. The biggest barriers are not time, effort, or tooling. They are:

  1. Lack of management buy-in
  2. Limited perceived value
  3. Weak integration into governance

In other words, organisations don’t avoid maturity assessment because it’s hard.
They avoid it because they don’t believe it’s worth doing. That is a very different problem.

A deeper issue: we’re measuring the wrong thing

Even when assessments are performed, there is a structural flaw. Most maturity models are process-centric. They measure formalisation, control, and compliance. But capabilities are not just processes. They are the combination of people, technology, data, and governance.

So what happens?

We end up measuring how well things are documented and controlled, not how well they actually work. This leads to a subtle but critical distortion: organisations can appear “mature” while still being ineffective.

Enter GenAI – and the illusion of a solution

Generative AI seems to offer a way out. It promises faster assessments, greater consistency, and the possibility of continuous evaluation. And to be fair, many practitioners see that potential. But here’s the catch: the real constraint isn’t analysis. It’s trust.

The real bottleneck: data and legitimacy

Two things emerged very clearly:

  1. Data quality is a major limiting factor
  2. Stakeholder trust does not follow from technical objectivity

Even if GenAI produces consistent outputs, organisations still ask: Is the data reliable? Does this reflect reality? Can I defend this decision? As one practitioner put it: “Bad data in, bad data out.” (Although, perhaps those weren’t the exact words he used). GenAI doesn’t remove the need for good data. It makes it unavoidable.

The more interesting insight

Perhaps the most important finding is this:

There is a gap between the realised value of maturity assessment today and its intrinsic value under better conditions.

Most organisations don’t get much value from current approaches, but they still believe that, if done properly, it should be valuable. That tension explains why maturity assessment persists – despite underperforming in practice.

What actually needs to change

If we take this seriously, the solution is not just better tools or better models. It’s a shift in how we think about maturity assessment altogether. From periodic, static diagnostics to continuous, strategy-linked capability management. That means:

  1. Linking assessments directly to strategic objectives
  2. Embedding them into governance and delivery
  3. Triggering reassessment based on real change (not fixed cycles)
  4. Using GenAI to augment analysis (not replace judgement)

A final thought

GenAI is often positioned as the breakthrough. But in this case, it’s more of a forcing function. It exposes the underlying problem: Capability maturity assessment has never really been treated as a core management system. Until that changes, making it faster or more automated won’t fundamentally alter its impact. The real opportunity is not just to improve assessment. It is to rethink it entirely – as a continuous, integrated mechanism for managing strategy execution.


This article is based on an MSc thesis. To read the full report in detail, download it here.