How a national-scale skills matrix and workforce intelligence framework was designed and delivered across nearly 55,000 public-sector employees
Most large organisations cannot answer a basic question: do we have the skills to execute our strategy? Not because the ambition is missing – but because the data never existed in a usable form.
Challenge
The Nigerian Federal Civil Service needed a structured skills baseline across nearly 55,000 employees, 45 ministries, and multiple grade levels – in three months, with fragmented workforce data, limited digital infrastructure, and no prior skills mapping at any scale.
Solution
MuchSkills built a dedicated assessment portal, a shared skills taxonomy across all 45 ministries, and a custom analytics environment for cross-government comparison. Nearly 55,000 civil servants contributed close to 3.9 million skill entries. For the first time, Nigeria's Federal Civil Service had a consistent, comparable skills baseline it could act on.

Most large organisations cannot answer a straightforward board question: do we have the skills to execute our strategy? Not because the question is unreasonable, but because the data to answer it has never existed in a usable form. Skills live in CVs, manager memories, and HR systems that were never designed to make skills capability visible at scale. And when organisations do attempt skills mapping at scale, they typically produce data no one trusts – too fragmented to compare, too slow to collect, too brittle to support decisions. Project Phoenix was a test of whether that outcome was inevitable, or a design problem.
In late 2025, the Nigerian Federal Government launched PASGA – the Personnel Audit and Skills Gap Analysis – one of the most ambitious civil service reform initiatives in Nigeria's recent history. Coordinated by the Office of the Head of the Civil Service of the Federation (OHCSF) and led on the ground by Lagos-based management consultancy Phillips Consulting Limited (pcl), the programme – known internally as Project Phoenix – set out to establish a comparable, structured skills baseline across nearly 55,000 public-sector employees spanning 45 ministries, multiple grade levels, departments, and cadres. MuchSkills, an AI-powered skills intelligence platform, provided the technology that made the skills assessment possible at that scale. This was not a pilot or a theoretical exercise. It was a time-bound, operational programme carried out under real-world constraints.
Within three months, nearly 55,000 employees had contributed skills data through the MuchSkills platform – adding close to 3.9 million individual skill entries, 4,800+ certifications, and 19,000+ development goals. The average number of skills recorded per contributing employee exceeded 70. For the first time, the Nigerian Federal Civil Service had a consistent, comparable skills baseline it could use to analyse capability across ministries, identify gaps, and make evidence-based workforce decisions at national scale.
This piece reflects on what it took to design and deliver that visibility: the decisions made, the trade-offs involved, and the lessons relevant to any large or compliance-driven organisation seeking to move from assumed capability to evidence-based insight.
Most organisations do not have true skills visibility. They rely on roles, titles, CVs, and fragmented records to infer capability – rather than having a clear, structured view of the skills that actually exist across the workforce. This gap is the primary reason MuchSkills was created.
Project Phoenix took that challenge to an entirely different level.
The Nigerian Federal Civil Service is one of the largest civil services in the world. Its 45 ministries operate with distinct mandates, internal structures, and capability requirements – ranging from administrative and policy functions to technical, engineering, and operational roles. Employees span a wide range of grade levels, from entry positions to senior leadership. Officers move between ministries through deployment, posting, and secondment, and these movements are not always synchronised across central systems.
The ambition of Project Phoenix was defined by two tightly connected objectives running in parallel. The first was a physical personnel verification exercise – civil servants attending designated locations across the country to present documentation, confirm their records, and be verified as active members of the workforce. This strand was led entirely by Phillips Consulting and its network of 15 cluster consultants on the ground. The second was the digital skills assessment: every verified employee completing a structured self-assessment of their skills, certifications, and development goals through the MuchSkills platform. Both strands had to run simultaneously within the same compressed delivery window – which meant the skills assessment was being built and deployed while the workforce foundation it depended on was still being verified and corrected.
MuchSkills provided the skills assessment platform and core skills architecture – the technology layer that made structured, comparable data collection possible at national scale. The applied framework was developed in close collaboration with Phillips Consulting, adapted to reflect the specific structure of the Nigerian Federal Civil Service. Phillips Consulting led workforce verification, ministry-level coordination, and end-user support. As skills data was structured within the MuchSkills platform, the consulting teams used it – together with MuchSkills' reporting tools and cross-ministry analytics – to generate insights and develop recommendations. In short: MuchSkills enabled structured data collection and comparability at scale; Phillips Consulting and its cluster network translated that visibility into analysis, advisory work, and policy guidance. For more on how skills intelligence works as a foundation for decisions like these, see MuchSkills' skills intelligence guide.
Delivering this ambition required operating in conditions very different from most private-sector skills initiatives.
Workforce data reflected the complexity of a large, evolving institution – some records incomplete, others reflecting structural changes that had not yet propagated across central systems. Key attributes – department, grade level, cadre, contact details – could not be assumed to be correct across the board. The programme operated while its own structural foundation was still being stabilised.
Identity and access quickly became the single largest operational barrier. Government-issued email credentials were not yet in active use across much of the workforce – employees had been communicating through personal accounts and WhatsApp rather than official channels, reflecting the reality of digital infrastructure at national scale. Establishing verified access for tens of thousands of employees became a central workstream in its own right.
To manage authentication without universal email access, MuchSkills built a dedicated portal – skills.ohcsf.gov.ng – through which employees could use their IPPIS number (their government employee identifier) to verify themselves and complete a data collection step before beginning their skills assessment. The portal also hosted instructional video content that was viewed more than 44,000 times over the course of the project, proving essential for participation at scale without face-to-face support at every location.
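To make the access flow concrete, the sketch below shows what an IPPIS-gated entry step can look like in principle. It is a minimal illustration only – the record fields, messages, and lookup logic are assumptions, and the actual implementation behind skills.ohcsf.gov.ng is not public.

```python
# Minimal sketch of an IPPIS-number verification step, assuming a hypothetical
# in-memory store of verified personnel records. Field names and lookup logic
# are illustrative only, not the portal's actual implementation.
from dataclasses import dataclass

@dataclass
class PersonnelRecord:
    ippis_number: str
    full_name: str
    ministry: str
    verified: bool  # set True once physical verification is complete

# Hypothetical store populated from the verification exercise.
VERIFIED_RECORDS = {
    "123456": PersonnelRecord("123456", "A. Example", "Ministry of Works", True),
}

def start_assessment(ippis_number: str) -> str:
    """Gate access to the skills assessment on a verified IPPIS number."""
    record = VERIFIED_RECORDS.get(ippis_number.strip())
    if record is None:
        return "IPPIS number not found - contact your cluster consultant."
    if not record.verified:
        return "Record found but not yet verified - complete verification first."
    # Here the portal would confirm contact and posting details
    # before handing the employee over to the skills assessment itself.
    return f"Welcome {record.full_name} ({record.ministry}) - assessment unlocked."

print(start_assessment("123456"))
```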
Communication could not depend on official email alone. It relied on posters, government-wide broadcasts, and close coordination with cluster consultants – 15 specialists, each responsible for one or more ministries, who worked with employees on the ground and communicated with the MuchSkills team via WhatsApp groups for fast issue resolution. Across the project lifecycle, MuchSkills handled approximately 2,500–3,000 support requests, the overwhelming majority from civil servants working through access and assessment issues, alongside sustained support for the consulting network.
The platform itself had to adapt in real time. Pages that functioned well at normal enterprise load proved too slow when tens of thousands of users were active simultaneously. Screen-size edge cases emerged. Data fields that were expected to be populated across the workforce turned out to be incomplete for significant segments of it – a common reality in large-scale workforce systems. Each issue was identified and resolved within hours or days – none required stopping the programme.
Time pressure was non-negotiable. The programme launched with a year-end delivery window. Design, configuration, communication, and execution progressed in parallel. Assumptions were tested quickly, adjustments made in real time. These constraints were not treated as obstacles but as the operating conditions the programme needed to succeed within.
"We are not looking for perfection. We are looking for patterns – for decision material. Even if the data is not perfect, we will see where we're strong, where we're less strong, and where skill levels require immediate attention. That is the interesting data." – Daniel Nilsson, CEO, MuchSkills
Given these constraints, Project Phoenix was shaped by a small number of deliberate design decisions. They determined whether skills visibility could be achieved in a way that was credible, scalable, and usable for decision-making.
One of the most consequential decisions was to avoid role-level skills mapping. In a civil service of this size, attempting a role-level skills matrix would have meant structuring thousands of distinct roles across 45 ministries – adding months to setup without improving the quality of insight.
Instead, the project focused on cadres: job families that group roles with shared responsibility areas and capability expectations. The Nigerian Federal Civil Service has an established scheme of service that defines these cadres, from drivers and administrative officers through to engineers and policy specialists. By working within this structure rather than against it, the project was able to capture skills at a level meaningful for analysis while remaining manageable at national scale. Cadre-level mapping made cross-ministry comparison possible in a way that role-level mapping never could.
Another critical decision was to prioritise pattern recognition over individual-level precision. At this scale, the value of skills data lies in understanding distributions, strengths, and gaps across groups – not in achieving perfect accuracy at the individual level.
Manager validation was considered and rejected. In a large hierarchical organisation, requiring manager sign-off on skills assessments risks introducing bottlenecks, inconsistency, and hesitation among employees – ultimately slowing delivery and reducing participation. Instead, the programme relied on employee self-assessment using a 1–9 proficiency scale grouped into beginner, intermediate, and expert bands, supported by design features that made honest engagement the path of least resistance.
One of these was the approach to peer visibility. Employees could see how colleagues had assessed themselves – but MuchSkills deliberately surfaced beginner and intermediate profiles first. Employees who saw that colleagues at similar levels had assessed themselves at moderate proficiency felt safer sharing accurate information rather than inflating their own. The result was participation that was both broad and, by the patterns it revealed, credible.
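A small sketch of these two mechanics, under stated assumptions: the cut points that group the 1–9 scale into beginner, intermediate, and expert bands are not published and are assumed here, and the ordering function simply illustrates surfacing lower bands first.

```python
# Sketch of the self-assessment bands and peer-visibility ordering described above.
# The band cut points are an assumption, not the programme's published mapping.

def band(score: int) -> str:
    """Map a 1-9 self-assessed proficiency score to a band (assumed cut points)."""
    if not 1 <= score <= 9:
        raise ValueError("score must be between 1 and 9")
    if score <= 3:
        return "beginner"
    if score <= 6:
        return "intermediate"
    return "expert"

BAND_ORDER = {"beginner": 0, "intermediate": 1, "expert": 2}

def peer_feed(colleague_scores: list[tuple[str, int]]) -> list[tuple[str, str]]:
    """Order colleague self-assessments so lower bands appear first."""
    labelled = [(name, band(score)) for name, score in colleague_scores]
    return sorted(labelled, key=lambda item: BAND_ORDER[item[1]])

print(peer_feed([("Ada", 8), ("Bayo", 2), ("Chika", 5)]))
# [('Bayo', 'beginner'), ('Chika', 'intermediate'), ('Ada', 'expert')]
```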
Deciding what to measure – and at what level – proved as important as deciding how to measure it. Project Phoenix mapped skills across three dimensions: band or grade level, department, and cadre.
Grade levels provided insight into cultural and behavioural maturity across the organisation – reflecting the expectation that seniority correlates with certain leadership and cross-functional capabilities. Departments made it possible to understand operational capability, identify strengths and weaknesses, and surface future skill needs within specific functional areas. Cadres enabled comparison across groups of related roles without collapsing everything into overly generic categories.
This three-dimensional structure allowed data to be analysed in multiple ways while remaining coherent and comparable across the system. It also made the assessment intuitive for employees – which helped participation rates.
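As an illustration of that three-dimensional structure, the sketch below aggregates the same set of skill entries by grade level, department, and cadre. The entries and names are invented for illustration, not programme data.

```python
# Minimal sketch: the same skill entries analysed along each of the three
# dimensions - grade level, department, and cadre. Entries are invented examples.
from collections import defaultdict
from statistics import mean

entries = [
    # (grade_level, department, cadre, skill, score 1-9)
    ("GL-08", "Planning", "Administrative Officer", "Data analysis", 4),
    ("GL-12", "Planning", "Administrative Officer", "Data analysis", 7),
    ("GL-08", "Engineering Services", "Engineer", "Project management", 5),
]

def average_by(dimension_index: int) -> dict[tuple[str, str], float]:
    """Average proficiency per (dimension value, skill) for one dimension."""
    buckets: dict[tuple[str, str], list[int]] = defaultdict(list)
    for entry in entries:
        buckets[(entry[dimension_index], entry[3])].append(entry[4])
    return {key: round(mean(scores), 2) for key, scores in buckets.items()}

print(average_by(0))  # by grade level
print(average_by(1))  # by department
print(average_by(2))  # by cadre
```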
Before any ministry-specific variation was introduced, a shared base taxonomy was established across all 45 ministries. It covered soft skills, cultural and behavioural capabilities, and core technical areas that cut across ministerial boundaries.
Without this foundation, meaningful comparison would not have been possible. When skills data from 45 different ministries is aggregated and reported, each skill name must carry the same meaning regardless of where it originated. Standardisation came first. Customisation was introduced only where it added analytical value – not where it would fragment understanding.
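The principle of standardising first and customising only where it adds value can be sketched as a base taxonomy that ministry-specific extensions may add to but never redefine. Skill names here are hypothetical, not the Project Phoenix taxonomy itself.

```python
# Sketch: one shared base taxonomy for all ministries, with ministry-specific
# extensions merged on top but never allowed to redefine a shared skill name.
# Skill names and categories are illustrative assumptions.

BASE_TAXONOMY = {
    "Communication": "soft",
    "Stakeholder management": "behavioural",
    "Data analysis": "technical",
}

def build_ministry_taxonomy(extensions: dict[str, str]) -> dict[str, str]:
    """Merge ministry extensions into the base taxonomy without overriding it."""
    clashes = set(extensions) & set(BASE_TAXONOMY)
    if clashes:
        raise ValueError(f"Extensions may not redefine shared skills: {clashes}")
    return {**BASE_TAXONOMY, **extensions}

works_taxonomy = build_ministry_taxonomy({"Road design": "technical"})
print(sorted(works_taxonomy))
```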
The project was designed to encourage participation rather than enforce it. Clear instructions, simplified workflows, a mobile-optimised platform, and the peer visibility features described above all reduced friction at the point of assessment. Employees who completed all stages of the project were also offered a certification – a signal of recognition that drove further engagement. MuchSkills' certification tracking capability made it possible to record and verify completions at scale.
Across the project, employees went beyond what was asked of them. They enriched profiles, set development goals, and explored platform features they were never directed to use. At the scale of a national civil service, that level of voluntary engagement is unusual. It reflects the cumulative effect of design decisions that treated skills visibility as something worth participating in, not an administrative obligation. For more on how MuchSkills approaches skills architecture and taxonomy design, see the MuchSkills methodology.
By the conclusion of Project Phoenix, the most significant change was not a single metric or report. It was a shift in visibility.
For the first time, skills across the Nigerian Federal Civil Service could be examined through a consistent, cross-ministry skills matrix spanning ministries, departments, grades, and cadres. The scale of participation demonstrated that large-scale engagement in skills mapping is achievable, even across complex public-sector organisations operating under real-world constraints.
Nearly 55,000 employees contributed skills data. The average number of skills recorded per contributing employee exceeded 70 – a strong signal of serious engagement, not minimal compliance. Certifications that had previously been scattered across documents and individual records were formalised within a single system. Development goals were set by thousands of employees who had never been asked to think about their growth in a structured way.
At an organisational level, each ministry now has a capability profile across departments, grades, and cadres, enabling cross-government comparison. Relative strengths, underdeveloped areas, and imbalances that had previously been invisible are now visible.
Because each ministry was onboarded into MuchSkills as a separate organisation, cross-ministry comparison was not possible within the platform directly. To enable it, MuchSkills built a dedicated analytics environment for OHCSF on top of the platform – aggregating skills data across all 45 ministries into a single interface where analysts can examine profile completion rates, skill adoption patterns, and proficiency distributions at both national and ministry level, without needing to query each ministry's data separately.
A key feature of this environment is calibrated scoring. Because Project Phoenix relied on self-assessed proficiency across a workforce that had never previously undertaken structured skills mapping, raw scores needed to be interpreted against a cohort baseline rather than in absolute terms. The analytics layer applies a calibration factor derived from the observed average across the full dataset, adjusting scores to a consistent midpoint and making cross-ministry comparisons meaningful even where rating tendencies differ between ministries.
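One plausible reading of that calibration step, sketched below: derive a factor from the observed mean of the full dataset and rescale ministry averages towards a common midpoint on the 1–9 scale. The exact formula used in the OHCSF environment is not published; the midpoint and clipping behaviour here are assumptions.

```python
# Sketch of calibrated scoring: a calibration factor derived from the observed
# average across the full dataset, used to rescale ministry averages towards a
# consistent midpoint. Midpoint, clipping, and data are assumptions.
from statistics import mean

TARGET_MIDPOINT = 5.0  # assumed midpoint of the 1-9 scale

def calibrate(ministry_scores: dict[str, list[float]]) -> dict[str, float]:
    """Return calibrated mean proficiency per ministry."""
    all_scores = [s for scores in ministry_scores.values() for s in scores]
    factor = TARGET_MIDPOINT / mean(all_scores)  # factor from the full dataset
    return {
        ministry: round(min(9.0, max(1.0, mean(scores) * factor)), 2)
        for ministry, scores in ministry_scores.items()
    }

print(calibrate({
    "Ministry A": [6, 7, 8],      # tends to rate high
    "Ministry B": [3, 4, 4, 5],   # tends to rate low
}))
```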
The framework also reflects the OHCSF's six-pillar skills structure – covering core civil service capabilities, digital and data skills, leadership, project and change management, service delivery, and innovation – with each ministry's workforce profiled against all six.
This makes it possible to ask not just what skills the civil service has in aggregate, but where specific ministries are strong, where gaps exist relative to the pillar benchmarks, and which skills are least adopted and most in need of development investment. The structured ministry summaries generated through this environment form the analytical foundation for Phillips Consulting's recommendations to OHCSF.
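A simple way to express that pillar-level gap analysis is to compare each ministry's average score per pillar against a benchmark and report the shortfall. The benchmark value and scores below are illustrative assumptions, not programme data.

```python
# Sketch: gap to benchmark per pillar of the six-pillar structure.
# Benchmark and scores are illustrative assumptions on the 1-9 scale.

PILLARS = [
    "Core civil service", "Digital and data", "Leadership",
    "Project and change management", "Service delivery", "Innovation",
]
BENCHMARK = 5.0  # assumed common benchmark

def pillar_gaps(ministry_pillar_scores: dict[str, float]) -> dict[str, float]:
    """Return the gap to benchmark per pillar (positive = below benchmark)."""
    return {
        pillar: round(max(0.0, BENCHMARK - ministry_pillar_scores.get(pillar, 0.0)), 2)
        for pillar in PILLARS
    }

print(pillar_gaps({
    "Core civil service": 5.8, "Digital and data": 3.9, "Leadership": 5.1,
    "Project and change management": 4.4, "Service delivery": 5.6, "Innovation": 4.2,
}))
```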
The impact is already being felt on the ground. One cluster consultant working with a federal ministry described how the data is being put to immediate use: "We have run the gaps for each department and used it in teaching how to create training, development and systems to support the gap closure. Developing the materials has been such a joy because of MuchSkills."
One finding the data already confirms: Capability across the civil service is higher than leaders had assumed. Significant skills and talent exist that have simply never been made visible – to the organisation, or to the employees themselves. That is not a surprise unique to government. It is a pattern MuchSkills sees consistently across large organisations. The problem is rarely that capability is absent. The problem is that it is unknown, and therefore unused.
"What we consistently see is that organisations don't lack ambition around skills. What they lack is a way to make skills visible that works under real conditions. At scale, the challenge is not collecting more data, but deciding what to measure so that it can actually be compared and used." – Daniel Nilsson, CEO, MuchSkills
While Project Phoenix took place within a national civil service, the lessons it surfaced are not specific to government. Large enterprises, consulting organisations, and industries where workforce capability is subject to audit and compliance face many of the same structural challenges. The following principles apply wherever large organisations seek to move from assumed capability to evidence-based insight.
The structural conditions that made Project Phoenix complex – fragmented workforce data, governance that varied across divisions, employees with no consistent digital touchpoint, pressure to produce board-level insight on a fixed timeline – are not unique to government. They exist in any large enterprise attempting skills intelligence at scale. The names change. The problems are recognisable.
"Skills visibility is not about eliminating complexity. It is about designing systems that can work with it. When you get that right, scale stops being a barrier and becomes a source of insight." – Daniel Nilsson, CEO, MuchSkills
For any large organisation considering a skills initiative, the question is rarely whether the ambition is worthwhile. It is whether the delivery is genuinely possible under real conditions – with imperfect data, distributed governance, and a workforce that has never been asked to do this before. Project Phoenix is evidence that it is.
Working across a national civil service – adapting continuously to evolving conditions, making design and execution decisions under pressure, and supporting tens of thousands of simultaneous users – clarified what it truly takes to deliver when complexity cannot be simplified away. The discipline lies in knowing what must remain consistent, such as shared definitions and taxonomy, and where flexibility is essential, such as communication channels, platform adaptation, and on-the-ground support structures.
From a capability perspective, Project Phoenix demonstrated that this type of work is repeatable – not because future contexts will look the same, but because the design principles, workflows, and execution patterns behind the project proved operationally viable under extreme conditions. Those patterns can be applied wherever large organisations seek to move from assumed capability to evidence-based workforce intelligence. MuchSkills' recognition as a Major Contender in the Everest Group PEAK Matrix® for Skills Management Platforms, and its status as a UK Government approved supplier under G-Cloud 14, reflect the same readiness – an independently validated platform capable of operating at enterprise and institutional scale.
Project Phoenix also added materially to MuchSkills' workforce dataset – now spanning skills profiles from over 100,000 professionals globally – reinforcing a data foundation that grows more valuable with every deployment.
Skills visibility is not an end state. It is a foundation. Once that foundation exists, organisations can act on it – staffing better, developing people more deliberately, planning with evidence rather than inference. Project Phoenix demonstrated that the foundation can be built, even at national scale, even under real-world constraints. That experience has shaped how MuchSkills approaches large, distributed organisations going forward.
A national-scale skills matrix applies the same principles as an organisational skills matrix – mapping what employees know and can do against a consistent framework – but across a much larger and more structurally complex workforce. In Project Phoenix, this meant establishing comparable skills data across 45 ministries and nearly 55,000 employees, using a shared taxonomy that allowed cross-ministry analysis rather than ministry-by-ministry silos.
Data quality in large-scale self-assessment programmes comes from design rather than validation. In Project Phoenix, this included a peer visibility feature that surfaced beginner and intermediate profiles first – making honest self-assessment feel safe rather than risky. The goal is not individual-level perfection but pattern-level credibility: data that is consistent enough across a large group to support strategic decisions, even where individual outliers exist.
Skills mapping can proceed while workforce data is still being verified – but it requires a different approach from standard enterprise implementations. In Project Phoenix, workforce verification and skills mapping ran in parallel rather than sequentially, because waiting for clean data would have been incompatible with the delivery window. Skills mapping must be designed to function within the data reality an organisation has, not the data reality it wishes it had.
A structured skills baseline enables organisations to identify capability gaps across functions, grades, or locations; compare skill levels across teams or entities; prioritise learning and development investment; and make defensible workforce deployment decisions. For the Nigerian Federal Civil Service, the baseline now makes it possible to examine capability across all 45 ministries through a common framework – surfacing where it is strong, where it is underdeveloped, and where investment is most needed.
If your organisation is facing a skills visibility challenge at scale – whether across divisions, geographies, or a complex workforce – book a demo to see how MuchSkills approaches it, or explore the platform to get started.
For a more detailed executive reflection on the decisions and trade-offs behind Project Phoenix, read: CEO Q&A: Mapping the skills of nearly 55,000 civil servants