
Daniel Nilsson, co-founder and CEO of MuchSkills, on the decisions, trade-offs, and surprises behind one of the largest skills mapping initiatives ever undertaken
In 2025, the Nigerian Federal Government launched PASGA – the Personnel Audit and Skills Gap Analysis – one of the most ambitious civil service reform initiatives in the country's recent history. Coordinated by the Office of the Head of the Civil Service of the Federation (OHCSF) and led on the ground by Lagos-based management consultancy Phillips Consulting Limited, the programme – known internally as Project Phoenix – set out to establish a comparable, structured skills baseline across nearly 55,000 public-sector employees spanning 45 ministries. MuchSkills provided the technology platform that made the skills assessment possible at that scale, with close to 3.9 million skill entries, 4,800+ certifications, and 19,000+ development goals contributed by employees through the platform.
Below, Daniel Nilsson – co-founder and CEO of MuchSkills – reflects on what it actually took to design and deliver that result: The structural decisions made before a single employee logged in, the operational realities that forced the team to adapt in real time, and what the experience revealed about skills mapping at scale. For the full strategic case study, read Skills visibility at national scale: Lessons from Project Phoenix.
Several things, but the scale is the obvious starting point. 55,000 employees is a different category of challenge from anything we had handled before – and probably from most skills initiatives that have been done anywhere. But scale alone wasn't what made it genuinely hard.
What made it hard was the operating conditions. Government-issued email credentials were not yet in active use across much of the workforce. Workforce data reflected the complexity of a large, evolving institution – some records incomplete, others reflecting structural changes that had not yet propagated across central systems. And the whole thing had to be completed within a year-end delivery window.
So we were not just running a skills assessment. We were building on top of a workforce foundation that was still being verified and stabilised beneath us – which demanded a different kind of operational resilience than a standard enterprise implementation.
This took significant time and it had to come first. The Nigerian Federal Civil Service spans 45 ministries – each with its own mandate, internal departments, and workforce composition. Within each ministry, there are three dimensions we needed to understand and map: Bands, which are grade levels indicating seniority; departments, which are the operational units responsible for specific functions; and cadres, which are job families grouping roles with shared responsibility areas.
The government has a formal scheme of service that defines the cadres and the roles beneath them – everything from vehicle drivers to environmental engineers. That gave us a structured foundation to work from. But understanding how 55,000 people were distributed across all of this, and how movement between ministries through posting and secondment affected that picture, required careful work before we could configure anything.
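To make those three dimensions concrete, here is a minimal sketch of how an employee record sits at the intersection of ministry, department, cadre, and band. This is purely illustrative – the names, field choices, and values are invented, not the actual PASGA data model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Assignment:
    ministry: str      # one of the 45 ministries
    department: str    # operational unit within the ministry
    cadre: str         # job family from the scheme of service
    band: int          # grade level indicating seniority

@dataclass
class Employee:
    ippis_number: str  # government employee identifier
    assignment: Assignment

# An employee is located by all three dimensions at once, so a wrong
# value on any axis makes the whole profile feel wrong to them.
example = Employee(
    ippis_number="000000",
    assignment=Assignment(
        ministry="Ministry of Environment",
        department="Environmental Assessment",
        cadre="Environmental Engineering",
        band=12,
    ),
)
```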
Getting this right mattered enormously. If an employee logged in and found themselves assigned to the wrong ministry, the wrong department, or the wrong grade level, the assessment would feel wrong to them – and wrong data would flow into the reporting. The structural work was not setup overhead. It determined whether the whole exercise would be credible.
Doing a project of this scale using individual roles would have been nearly impossible. In a civil service of this size, there are thousands of distinct roles. Mapping skills for each one would have added enormous setup time, produced data that was almost impossible to compare across ministerial boundaries, and created a taxonomy so detailed it would have collapsed under its own weight.
Instead, we worked with cadres – job families. A cadre groups many roles that share the same core responsibility areas and capability expectations. Focusing at the cadre level rather than the role level meant we could capture what actually mattered for analysis: Patterns across groups of people doing related work, rather than attempting precision at a level of detail that wouldn't improve the insight.
For large organisations, this is a lesson worth sitting with. The instinct is often to be as precise as possible – to map every role, every competency, every sub-skill. But at scale, that level of detail creates friction without creating understanding. The question leaders need to answer is not "what can this specific person do?" It is "what can this group of people do, and where are we strong or underdeveloped?" Cadres answer that question. Roles make it harder to ask it.
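The shift from role-level to cadre-level analysis is easy to illustrate. The sketch below – with invented records and skill names, not the programme's actual data – shows the shape of the aggregation: the unit of analysis is the job family, not the individual role.

```python
from collections import defaultdict
from statistics import mean

# Invented self-assessments on a 1-9 scale.
assessments = [
    {"cadre": "Administrative Officer", "skill": "Policy drafting", "level": 6},
    {"cadre": "Administrative Officer", "skill": "Policy drafting", "level": 4},
    {"cadre": "Administrative Officer", "skill": "Data analysis",   "level": 2},
    {"cadre": "Environmental Engineer", "skill": "Data analysis",   "level": 7},
]

# Group by (cadre, skill) to answer "what can this group of people do?"
by_cadre_skill = defaultdict(list)
for a in assessments:
    by_cadre_skill[(a["cadre"], a["skill"])].append(a["level"])

for (cadre, skill), levels in sorted(by_cadre_skill.items()):
    print(f"{cadre} / {skill}: n={len(levels)}, avg={mean(levels):.1f}")
```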
There was genuine debate about this before the project started. The concern – which is understandable – is that without manager validation, employees might inflate their skill levels, and the data would be unreliable.
We came to see that concern as the wrong frame. At this scale, in a hierarchical organisation of this complexity, requiring manager validation would have introduced bottlenecks that could have seriously compromised the programme's momentum – and it's worth noting that even in our own recommended practice, we reserve validation for key critical skills rather than the full skills profile. In this context, managers would have become a source of delay and inconsistency – and the data might have looked more validated on paper while being less honest in practice.
More fundamentally, we were not trying to produce perfect individual-level data. We were trying to produce credible pattern-level data – data good enough to tell leaders where the civil service is strong, where it is underdeveloped, and where investment is needed. For that purpose, broad and honest self-assessment from 55,000 people is far more valuable than validated data from a fraction of them.
What we focused on instead was making honest self-assessment the easy path. Employees used a 1-9 proficiency scale with clear descriptors. They could see how colleagues had assessed themselves – but we surfaced beginner and intermediate profiles first, deliberately, so that employees at any level felt safe sharing accurate information rather than feeling pressure to perform. That design decision turned out to matter. The engagement we saw, and the distribution of skill levels in the data, suggest people were genuinely trying to represent themselves accurately.
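A rough sketch of that design decision, with invented scale descriptors and profiles – the actual wording and ranking logic used in the programme are not public:

```python
# Hypothetical anchors for a 1-9 proficiency scale.
SCALE_ANCHORS = {
    1: "Aware of the skill, little practical experience",
    3: "Can apply it with guidance",
    5: "Works independently on routine tasks",
    7: "Handles complex cases and coaches others",
    9: "Recognised authority who sets the standard",
}

# Sorting colleague profiles ascending surfaces beginner and
# intermediate self-assessments first, so honest low scores feel normal.
profiles = [("A", 8), ("B", 2), ("C", 5)]
for name, level in sorted(profiles, key=lambda p: p[1]):
    anchor = max(k for k in SCALE_ANCHORS if k <= level)
    print(f"{name}: {level}/9 – {SCALE_ANCHORS[anchor]}")
```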
Identity and access quickly became the single largest operational barrier. Government-issued email credentials were not yet in active use across much of the workforce – employees had been communicating through personal accounts and WhatsApp rather than official channels, reflecting the reality of digital infrastructure at national scale. Establishing verified access for tens of thousands of employees became a central workstream in its own right.
To manage this, we built a dedicated portal – skills.ohcsf.gov.ng – where employees could authenticate using their IPPIS number, which is their government employee identifier. The portal also collected missing data points – grade level, cadre, ministry assignment – before routing employees into the assessment itself. It became the central entry point for the entire programme, and it hosted instructional video content that was watched more than 44,000 times over the course of the project.
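In outline, the portal's routing logic looked something like the sketch below. The function names, field list, and verification check are assumptions for illustration – not the actual skills.ohcsf.gov.ng implementation:

```python
REQUIRED_FIELDS = ("grade_level", "cadre", "ministry")

def verify_ippis(ippis_number: str | None) -> bool:
    # Placeholder: the real check validates against the government
    # payroll register; here we only require a non-empty value.
    return bool(ippis_number)

def route(employee_record: dict) -> str:
    if not verify_ippis(employee_record.get("ippis_number")):
        return "verification"     # identity must be confirmed first
    missing = [f for f in REQUIRED_FIELDS if not employee_record.get(f)]
    if missing:
        return "data_collection"  # fill structural gaps before assessing
    return "assessment"           # verified and complete: start

print(route({"ippis_number": "123456", "cadre": "Administrative Officer"}))
# -> "data_collection" (grade_level and ministry still missing)
```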
That video investment was one of the better decisions we made. When you cannot communicate directly with tens of thousands of individuals via email or automated workflows, clear and accessible instructions become load-bearing infrastructure. People watched, followed the steps, and completed the process. Without that, the support burden would have been unmanageable.
The support volume grew as the programme progressed – we're now at somewhere between 2,500 and 3,000 requests in total. Cluster consultants account for maybe 5% of that. The rest comes from employees – people working through access issues, navigating the verification steps, resolving discrepancies in their ministry or department assignments, or asking how to complete the assessment.
What kept it from becoming a bottleneck was the network of 15 cluster consultants – specialists from Phillips Consulting's network, each responsible for one or more ministries. They were the on-the-ground layer between us and the employees. We communicated with them via WhatsApp groups, which meant issues could be raised and resolved quickly without formal escalation chains. We also built a dedicated guide and FAQ resource for them so that common questions could be handled without coming to us at all.
The platform itself also had to adapt as load increased. Pages that performed well under normal enterprise conditions showed strain when tens of thousands of users were active simultaneously. We identified those points quickly and resolved them – in some cases within hours. A small number of edge cases around screen sizes and display also emerged and were fixed as they appeared.
The honest reflection is that operational readiness is as important as design quality. A well-designed programme that cannot absorb real-world load or adapt quickly when something breaks will not deliver. The team's ability to respond fast – to surface problems, decide, and fix without bureaucratic delay – was as critical as any structural decision made before launch.
Genuinely – how much they did beyond what was asked of them.
The participation rate was strong, and the average number of skills recorded per contributing employee exceeded 70, which is a high number for any skills assessment programme. But beyond the required steps, employees were setting development goals, recording certifications, exploring parts of the platform they had not been directed to, and engaging with features we had not specifically promoted for this project.
There is something important in that. The instinct in large organisations is often to worry that employees won't engage – that they will see skills assessment as HR paperwork and do the minimum. What we consistently see, and what Project Phoenix confirmed at a scale we had never tested before, is that employees actually want to map their skills when the process is clear, the tool is easy to use, and they can see their own profile taking shape. They want to be seen accurately. They want their capabilities recognised. When the design makes that possible, engagement follows.
One finding that stayed with me: The skill levels we saw across the civil service were higher than expected. There is significant talent in a workforce of this size that has simply never been made visible – to leaders, or to the employees themselves. That is not unique to government. It is probably true of most large organisations.
The most fundamental change is that capability across the Nigerian Federal Civil Service can now be examined in a structured, comparable way – across ministries, departments, grade levels, and cadres – for the first time. Leaders can see a capability profile for their ministry, compare it against others, and understand where they are relatively strong and where they are underdeveloped.
What matters about this is not any single finding – the analysis belongs to the client and the consulting teams who will use it to develop recommendations. What matters is the type of conversation it makes possible. Before this project, workforce decisions in the civil service were made on inference, historical records, and what individual leaders happened to know about their people. Now there is a common reference point – a shared, structured picture that everyone is looking at. That shifts the conversation from opinion to evidence.
That skills visibility is achievable – even at extreme scale, even under difficult conditions, even without perfect data to start from. There is a tendency in large organisations to treat the complexity of their workforce as a reason not to attempt this kind of initiative. The data isn't clean enough. The governance isn't aligned. There are too many divisions, too many roles, too many variables. Project Phoenix is evidence that those are design problems, not fundamental barriers.
The other thing I would say is: Do not get trapped by roles. The reflex in most organisations is to try to map skills for every individual role – and at scale, that approach collapses. Focus instead on skill families, on the groups of people doing related work. Ask what skills that group needs to do great work. Then ask whether you have them. That question, asked clearly and answered with real data, is where workforce strategy stops being theoretical and starts being useful.
And do not underestimate soft skills. Technical capabilities matter enormously – but the organisations that are genuinely ready for whatever comes next are the ones where people are curious, adaptable, and able to learn. Those qualities can be mapped, measured, and developed. They just rarely are. That is a missed opportunity for almost every large organisation we have worked with.
Three things above all. First, skills visibility is achievable even under difficult conditions – fragmented data, inconsistent governance, and scale are design problems, not reasons to avoid the effort. Second, the design of the system matters more than the system itself: focus on patterns over perfection, skill families over roles, and trust over enforcement. Third, skills visibility should be treated as ongoing infrastructure rather than a one-off exercise – its value compounds over time as the baseline is refined and used to inform real decisions. For the full story of how Project Phoenix was designed and delivered, read the Project Phoenix case study.
Most organisations start with a skills gap analysis – a structured assessment of what capabilities exist across the workforce compared to what is needed. This gives leaders a baseline to work from and makes the case internally for investing in a broader skills intelligence programme. For very large or complex organisations, starting with a single division or business unit as a pilot is often the most practical route – it proves the value before scaling across the whole organisation.
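At its simplest, a gap analysis compares current capability against a target per skill. The numbers below are invented, purely to show the shape of the calculation:

```python
# Average self-assessed level per skill vs. a target level (both on
# the same 1-9 scale). All figures are illustrative.
current = {"Data analysis": 3.8, "Policy drafting": 6.1}
target  = {"Data analysis": 6.0, "Policy drafting": 6.0}

gaps = {skill: target[skill] - current.get(skill, 0.0) for skill in target}
for skill, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    status = "underdeveloped" if gap > 0 else "at or above target"
    print(f"{skill}: gap {gap:+.1f} ({status})")
```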
The initial mapping is the foundation, not the finish line. Skills data needs to stay live – updated as people develop new capabilities, complete certifications, or move between roles. MuchSkills is designed for continuous use rather than periodic audits: employees update their own profiles, managers receive prompts to review critical skills, and the platform surfaces changes over time. The organisations that get the most value from skills intelligence treat it as ongoing infrastructure – not a one-off exercise.
Yes – MuchSkills is designed to sit alongside existing HR systems, not replace them. SAP SuccessFactors, Workday, and similar platforms manage HR data well. What they don't do is make skills visible, searchable, and actionable at the depth organisations need for workforce planning and development decisions. MuchSkills integrates with these systems and acts as the skills intelligence layer on top – structured, comparable skills data that makes your existing HR investment more strategic. See how MuchSkills fits into your stack.
If your organisation is facing a skills visibility challenge at scale – whether across divisions, geographies, or a complex workforce – book a demo to see how MuchSkills approaches it, or explore the platform to get started.