AI investment is surging, driven by board mandates, competitive fear, and the pressure of not being left behind, yet most organisations have very little to show for it. The pattern is familiar: an organisation deploys a chatbot because the board asked for “an AI initiative,” celebrates the launch, and calls it transformation.
The problem is not a lack of ambition. It is a confusion between wanting something and knowing how to get it. “We will grow AI-driven revenue by 30%” is not a strategy. It is a goal. And when organisations commit technology budgets on the basis of a goal dressed up as strategy, they are not investing. They are hoping.
There is a framework that explains exactly why this happens and what to do instead. Chapter 5 of Richard Rumelt’s Good Strategy / Bad Strategy (Rumelt 2011) gets to the heart of it. Rumelt argues that a real strategy is a tool for overcoming obstacles: not a vision statement, not a goal, not a budget. He calls the structure behind it the strategic kernel, with three elements that must work together: a diagnosis that names the real obstacle honestly, a guiding policy that decides how to attack it, and coherent actions that point every resource (money, people, time) in the same direction, as illustrated in Figure 1.
The kernel gives you the structure. But in The Crux (Rumelt 2022), Rumelt adds a sharper demand: within your diagnosis, you must identify the crux, the one obstacle that, if removed, unlocks everything else. Most organisations sense where it lies. They just choose not to go there. Instead, they build strategy around it while the real obstacle sits untouched.
Nowhere is this more visible than in enterprise AI. Despite unprecedented investment, 80% of organisations are running pilots, yet only 5% have extracted measurable value (Joshi 2026; MLQ.ai 2025). That gap is not a technology failure; it is a strategy failure, and often a double one. The kernel is rarely applied properly: no honest diagnosis, no real guiding policy, just a list of initiatives dressed up as direction. Without the kernel, the crux goes unfound. The result is not merely effort pointed at the wrong problem. It is effort that is not pointed at anything at all.
The question is how to operationalise these ideas inside a real organisation. That is where technology roadmapping comes in. At its core, a technology roadmap is a structured visual framework that aligns market needs, business objectives, and technology capabilities across time (Phaal et al. 2004). It is not a project plan or a feature backlog; it is a strategic instrument that forces the hard questions: why are we doing this, what exactly are we building, and do we have what it takes (Phaal et al. 2010)? When built around the kernel, it becomes the mechanism that translates diagnosis into direction and direction into coherent action.
This post — the first of two — explores the strategic framework behind this combination: how Rumelt’s kernel and structured technology roadmapping diagnose the same failure, and why their integration offers a more disciplined path. Part 2 puts the framework into practice with a five-step AI roadmapping process and a guide to spotting bad AI strategy before it costs you.
Why Most AI Roadmaps Fail — and What the Kernel Changes
The failure starts before the roadmap is drawn. IBM reports that 64% of CEOs admit to investing in technology before fully understanding its potential impact (IBM Institute for Business Value 2025). There are two ways this goes wrong. The first is a technology-driven strategy — starting with available capabilities and working backwards, hoping a business case materialises. The second, less obvious failure is waiting for perfect market insight before engaging with the technology at all. In AI, neither extreme works. The roadmapping literature points to a more disciplined path: one that starts with a real, felt problem, uses technology awareness to sharpen the diagnosis, and then asks what capabilities are genuinely required to solve it (Vishnevskiy et al. 2016; Noh et al. 2021), as shown in Figure 2.
To correct this, organisations need a roadmap that starts with a real problem, not with an available capability. In AI, this does not mean ignoring what the technology makes possible. It means using that awareness as an input to diagnosis, not as a substitute for it. According to Phaal et al. (2010), a roadmap only becomes truly strategic when it honestly addresses five questions: Why, What, How, When, and How Well. Without all five, you do not have a strategy — you have a to-do list with a Gantt chart attached, as shown in Figure 3.
What makes this combination powerful is how naturally Rumelt’s three kernel elements map onto the roadmapping questions that Kerr et al. (2022) identify: Where are we now? Where do we want to go? How can we get there? and onto the knowledge layers that Phaal et al. (2004) describe. Rather than two separate frameworks running in parallel, they function as three lenses on the same strategic problem, each reinforcing the others, as shown in Table 1.
| Rumelt’s Kernel | Roadmapping Question | Roadmap Layer | Strategic Focus |
|---|---|---|---|
| Diagnosis | Where are we now? | Know-Why | Market trends, value gaps, and identifying the “Crux” |
| Guiding Policy | Where do we want to go? | Know-What | Strategic intent, AI offerings, and business alignment |
| Coherent Actions | How can we get there? | Know-How | Skills, capabilities, and execution |
| Coherent Actions | When and how well? | Know-When / How Well | Timelines, milestones, performance metrics, and continuous review |
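To make the mapping in Table 1 concrete, it can be expressed as a small data structure. This is a hypothetical illustration of the alignment between the two frameworks, not code from Rumelt’s or Phaal’s work; the names `RoadmapRow`, `KERNEL_ROADMAP`, and `layers_for` are invented for this sketch.

```python
# Hypothetical sketch: encoding the kernel-to-roadmap mapping as data.
# The frameworks themselves are prose; this only mirrors Table 1.
from dataclasses import dataclass


@dataclass(frozen=True)
class RoadmapRow:
    kernel_element: str   # Rumelt's kernel element
    question: str         # roadmapping question (Kerr et al. 2022)
    layer: str            # knowledge layer (Phaal et al. 2004)
    focus: str            # strategic focus


KERNEL_ROADMAP = [
    RoadmapRow("Diagnosis", "Where are we now?", "Know-Why",
               "Market trends, value gaps, identifying the crux"),
    RoadmapRow("Guiding Policy", "Where do we want to go?", "Know-What",
               "Strategic intent, AI offerings, business alignment"),
    RoadmapRow("Coherent Actions", "How can we get there?", "Know-How",
               "Skills, capabilities, execution"),
    RoadmapRow("Coherent Actions", "When and how well?", "Know-When / How Well",
               "Timelines, milestones, metrics, continuous review"),
]


def layers_for(kernel_element: str) -> list[str]:
    """Return the roadmap layers a given kernel element maps onto."""
    return [row.layer for row in KERNEL_ROADMAP
            if row.kernel_element == kernel_element]
```

One property the structure makes visible: coherent actions span two layers, because execution (Know-How) and its timing and measurement (Know-When / How Well) are separate rows of the roadmap even though they serve one kernel element.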
That mapping matters because it changes what the roadmapping process is asked to do. The real value of strategic roadmapping is not the diagram you produce; it is the disciplined thinking the process forces you to do. When the roadmap is built around the kernel and the process is properly facilitated (Phaal et al. 2004; Komssi et al. 2013), each element of the kernel finds its operational expression in the roadmap, directly addressing the failure modes described above:
- Strategic alignment: diagnosis and guiding policy ensure market, product, and technology investments pull in the same direction, not against each other.
- Shared language: the roadmap makes the kernel visible across diverse teams without requiring everyone to read the same strategy document.
- Honest prioritisation: the crux drives resource allocation, so effort flows to what matters most rather than to what is loudest in the room.
- Adaptive planning: the “How Well” layer keeps the roadmap live, updating as assumptions are tested and new information arrives.
- Managed experimentation: coherent actions replace scattered pilots, each tied to a specific question the organisation is trying to answer.
The layered nature of this framework — Market, Product, and Technology working together — is shown in Figure 4.
The question is no longer whether to build a roadmap. It is whether to build one that actually works.
Understanding why the kernel changes what a roadmap can do is the necessary first step. But a framework without a process for applying it stays theoretical. Part 2 picks up here — walking through a five-step AI roadmapping process built around the kernel, and giving you the tools to recognise a bad AI strategy before it costs you.
References
Citation
```bibtex
@online{faustine2026,
  author = {Faustine, Anthony},
  title = {The {Hype} Vs. {The} {Kernel:} {Why} {Your} {AI} {Strategy} Is {Failing}},
  date = {2026-03-28},
  url = {https://sambaiga.github.io/pages/blog/posts/2026/03/},
  langid = {en}
}
```