AI workforce planning: Why your skills data matters more than your AI tool

Organisations are investing heavily in AI for workforce planning. Most are building on a foundation that will make those investments fail.

Editorial Team
09.04.2026

Only 8% of organisations have reliable data on the skills their workforce currently possesses. That figure, from Gartner's 2024 HR research, is the number that should be front of mind for every organisation currently evaluating — or already deploying — AI workforce planning tools. Because AI workforce planning is only as good as the skills data you feed it. For the remaining 92% of organisations, that foundation is still missing.

The urgency is understandable. The World Economic Forum's Future of Jobs Survey 2025 found that 63% of employers consider skills gaps the single biggest barrier to business transformation. Organisations are under real pressure to close those gaps — and AI workforce planning tools promise exactly that. But most are deploying the technology without first fixing the data that would allow the AI to see their skills gaps clearly. The result is unreliable outputs, even when the tooling is advanced.

This is not a technology problem. AI tools are increasingly capable. It is a data foundation problem – and it needs to be solved before the AI layer goes on top, not after.

AI adoption in HR is accelerating regardless. According to SHRM's 2025 Talent Trends report, 43% of organisations used AI for HR tasks in 2025 – up from just 26% the year before. Workforce planning is one of the primary use cases: predicting future skill requirements, identifying gaps before they become critical, matching people to roles at a speed no spreadsheet or staffing meeting could match. The promise is real. But the organisations seeing the strongest returns are not necessarily the ones who bought the most sophisticated tools first. They are the ones who got their data right first.

What AI workforce planning actually requires

AI workforce planning tools work by analysing patterns in workforce data to predict future needs, identify gaps, and model scenarios. The more accurate and current the input data, the more useful the output. 

The inverse is equally true: if the underlying skills data is outdated, incomplete, or structured inconsistently, the AI will tend to reproduce and amplify those limitations at scale.

Most organisations' skills data is all three. Skills are recorded in job titles that no longer reflect what people actually do. They live in CVs that were last updated when someone joined. They exist in performance reviews that measure output rather than capability. And they are stored in different systems that don't talk to each other, using different terminology for the same things. The data treats skills as static – a record of what someone could do at a point in time, not a live picture of what they can do now. AI tools built on that foundation are making decisions about a workforce that no longer exists.
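The terminology problem in particular — different systems recording the same capability under different labels — is usually the first thing a skills data cleanup has to tackle. A minimal sketch of what that normalisation step looks like; the synonym map and skill names here are invented for illustration, not drawn from any specific taxonomy:

```python
# Minimal sketch of skill-name normalisation: collapse the different
# labels that different HR systems use for the same capability onto
# one canonical skill name. The synonym map is invented for illustration.

SYNONYMS = {
    "js": "javascript",
    "java script": "javascript",
    "ms excel": "excel",
    "microsoft excel": "excel",
    "pm": "project management",
}

def normalise(skill: str) -> str:
    """Lowercase, trim, and collapse known synonyms to a canonical name."""
    key = skill.strip().lower()
    return SYNONYMS.get(key, key)

# Four raw entries from different systems resolve to three distinct skills.
raw = ["JS", "Java Script", "Microsoft Excel", "Project management"]
print(sorted({normalise(s) for s in raw}))
# → ['excel', 'javascript', 'project management']
```

Real deployments use curated skill taxonomies rather than a hand-built dictionary, but the principle is the same: until two records of the same skill resolve to the same name, no downstream analysis can count them together.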

In practice, this shows up in very concrete ways: people who are overbooked assigned to projects, capability gaps identified too late, or hiring decisions made for skills that already exist internally but were not visible.

The skills gap in AI workforce planning itself

There is a clear contradiction emerging. Organisations are investing in AI to solve their skills gap problem – but the skills gap in their own data is what prevents the AI from helping.

Consider what an AI workforce planning tool needs to function well. It needs to know what capabilities exist in the workforce right now, at what level of proficiency. It needs to understand which skills are in demand for current and upcoming projects. It needs to map those two pictures against each other – the supply of capability and the demand for it – and identify where the distance is greatest and most urgent. It needs to do this continuously, not once a year.
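The supply-versus-demand mapping described above reduces to a simple comparison once the data is reliable. This is a deliberately minimal sketch, not any vendor's implementation; the skill names and headcounts are invented, and real tools would factor in proficiency levels, timing, and availability:

```python
# Minimal sketch of a skills gap analysis: compare how many people can
# cover each skill today (supply) against what upcoming work requires
# (demand), and rank the shortfalls. All figures are invented examples.

# Supply: headcount with a validated, current capability per skill.
supply = {"python": 4, "data engineering": 1, "project management": 6}

# Demand: headcount needed per skill for current and upcoming projects.
demand = {"python": 3, "data engineering": 4, "ux research": 2}

def gap_analysis(supply, demand):
    """Return skills sorted by shortfall (demand minus supply), largest first."""
    gaps = {}
    for skill, needed in demand.items():
        available = supply.get(skill, 0)
        shortfall = needed - available
        if shortfall > 0:
            gaps[skill] = shortfall
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

print(gap_analysis(supply, demand))
# → [('data engineering', 3), ('ux research', 2)]
```

The arithmetic is trivial — the hard part is everything the article describes: keeping `supply` accurate and current. If that dictionary reflects job titles or six-month-old CVs rather than validated capabilities, the ranking is precise-looking noise.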

Without a skills intelligence platform maintaining that live picture, the AI is modelling from a snapshot that has already aged. The gap analysis it produces reflects what the workforce looked like six months ago. The staffing recommendations it makes are based on job titles, not validated capabilities. The scenario planning it enables is built on assumptions rather than evidence.

There is a second, less obvious problem layered on top of this. Even when skills data exists, it typically reflects whoever the decision-maker already knows – the people in visible roles, familiar offices, recent projects. People with relevant capabilities in quieter roles, different locations, or less prominent teams get overlooked not because they lack the skills, but because those skills are not visible at the moment the decision is being made. The AI inherits this blind spot. It can only surface what the data contains. If the data is shaped by proximity bias, so are the outputs.

This is why, as Daniel Nilsson, co-founder of MuchSkills, puts it: "A skills gap isn't always about missing people. Most skills gaps can be closed with existing staff – through better visibility of what people already know." The reflex to deploy a new AI tool – or to hire – is often triggered by poor skills visibility, not actual capability absence.

The wrong kind of data is still the wrong kind of data

There is a more fundamental problem that even well-resourced AI workforce planning initiatives run into – and it is not just that the data is stale. It is that most AI tools are working from the wrong kind of data to begin with.

The dominant approach in the market is top-down: AI tools infer what skills people probably have based on job titles, career histories, and labour market signals – what the market says people in that role typically know. 

Platforms such as Eightfold and Lightcast are often strongest on the demand side of the equation. They provide deep insight into which skills the labour market values, which skills people in comparable roles at other organisations tend to have, and which capabilities are trending in job postings. That intelligence is genuinely useful for benchmarking and hiring.

But it does not tell you what your specific people can actually do today.

That distinction matters for internal workforce decisions. When you are deciding who to staff on a project, who to develop for a future role, or where your capability gaps are most acute – you are not making a statistical inference about a population. You are making a decision about real individuals in your organisation. Labour market data cannot answer that question. Only bottom-up data can.

This does not make external, market-level data irrelevant. Labour market intelligence is critical for understanding where demand is heading, how roles are evolving, and which skills are becoming more valuable over time. The limitation is not the data itself, but applying it to decisions about individuals without a clear internal view.

Ready to see where your team stands? Use our Skills Gap Analysis tool to build your AI-ready workforce today.
