Development of a workforce self-assessment tool for public health emergency preparedness

Abstract Background In collaboration with six European public health agencies as part of the PANDEM-2 consortium, we have developed and validated a self-assessment tool that captures the workforce capacities and capabilities needed at the institutional level within National Public Health Institutes (NPHIs) to deal with public health emergencies. Methods The work carried out in this study included (i) a review of existing tools for workforce assessment, (ii) focus group discussions and interviews to map the experiences and needs of NPHIs, (iii) the development of a tool for NPHIs to assess their workforce capacity and capabilities in public health emergency preparedness (PHEP) and (iv) refinement of the assessment tool via a Delphi study. Results Capacity markers were identified to assess the workforce required for PHEP functions and the availability of surge capacity during a public health emergency. The tool also enables NPHIs to analyze gaps in PHEP staff competencies. The assessment scores can assist NPHI pandemic preparedness by identifying and prioritizing training and recruitment needs. Conclusions In line with Article 11, 'Training of healthcare staff and public health staff', of EU Regulation 2022/2371 on serious cross-border threats to health, Member States (MS) are tasked with assessing current workforce capacity and capability gaps. The PANDEM-2 workforce self-assessment tool aligns with this requirement and will support effective planning and development to strengthen public health workforce capacity in EU MS.


The COVID-19 pandemic demonstrated the magnitude of the threat posed by communicable disease outbreaks and the demands on public health workforces to adequately respond to such outbreaks. Assessment of the public health workforce is a prerequisite for improving our ability to identify gaps, forecast future workforce needs, guide workforce development and policy and, ultimately, strengthen workforce capacity. This is embedded within the International Health Regulations (IHR, 2005)1 Monitoring and Evaluation Framework, which was developed to support countries in assessing capacity. The framework comprises the World Health Organization (WHO) States Parties Annual Reporting (SPAR)2 tool for mandatory annual reporting and includes the Joint External Evaluation (JEE)3 for assessment and testing of IHR core capacities. Moreover, the Health Emergency Preparedness Self-Assessment (HEPSA) tool4 was developed by the European Centre for Disease Prevention and Control (ECDC) to help countries improve their level of PHEP by evaluating levels of preparedness, identifying potential gaps and vulnerabilities and detecting areas for improvement at the national level.
Despite the availability of these tools, challenges remain. There have been several attempts to classify the public health workforce into groups; however, the broad scope of public health makes this challenging. Consequently, a standardized European classification is not available. This leads to a lack of reliable data in many countries and difficulties in obtaining information. Importantly, current health security assessment tools address national-level workforce capacity, whereas public health agencies require a tool applicable at the institutional level, allowing more granularity for assessment and planning.7 The PANDEM-2 project (Pandemic Preparedness and Response, 883285) aims to enhance pandemic preparedness through innovations in information technology and training, which includes the development of a suite of tools for readiness assessment. We worked with six national European Union (EU) Member State (MS) NPHIs to develop and validate a tool to inventory workforce capacity in both quantitative (full-time equivalents, FTEs) and qualitative (competency) terms. This self-assessment tool will enable European NPHIs to analyze deficits in public health emergency preparedness (PHEP) capabilities at the institutional level, ensuring effective planning to strengthen workforce capacity in EU MS against future public health threats.

Methods
The work carried out in this study was conducted in four stages: (i) a review of existing tools for workforce assessment, (ii) a focus group discussion and interviews to map the experiences and needs of NPHIs, (iii) development of a tool for NPHIs to assess their workforce capacity and capabilities in PHEP and (iv) refinement of the assessment tool through a Delphi study.
Firstly, the methodologies used by pre-existing tools to enumerate, map and assess the workforce were reviewed. Three main sources were identified: (i) WHO, (ii) ECDC and (iii) other. These are explored in the results section.
Secondly, semi-structured interviews were conducted with participants from the NPHIs of six EU MS (Finland, the Netherlands, Sweden, Romania, Germany and Portugal) to evaluate experiences and expectations regarding workforce assessment. We targeted participants with working knowledge of one or several of the following: (i) current practices to assess the PHEP workforce at the NPHI; (ii) the use of tools by WHO, ECDC or other organizations that involve workforce assessment, such as the WHO JEE and/or IHR SPAR, ECDC HEPSA or the assessments of workforce capacity and training needs for the prevention and control of communicable diseases in the EU/EEA8; or (iii) the assessment needs of the NPHI across different PHEP domains (laboratory, statistics/modelling, communications, etc.). Interviews were held online in December 2022. Three over-arching themes were explored: (i) participant experiences with pre-existing workforce methods, (ii) current NPHI best practices for workforce assessment and planning and (iii) specific needs of NPHIs for a workforce assessment tool. Participants were informed of the anonymity of their answers, so that no person or country could be identified from the report. A full list of interview questions can be found in the Supplementary information.
A pilot version of the workforce self-assessment tool was developed. The metrics for assessing the capacities and capabilities relevant to the PHEP context in NPHIs were selected based on existing literature, the focus group discussion and interviews, as well as internal discussion within the working group. We selected capacity and capability indicators that are specifically relevant to the PHEP context in NPHIs. Consequently, the workforce indicators in the WHO SPAR and JEE tools, as well as the ECDC HEPSA tool, were omitted from the PANDEM-2 tool, as they were designed to describe country-level capacities and capabilities. To fulfill the aim of including cross-cutting capabilities in the tool, we reviewed other resources, such as the Council on Linkages between Academia and Public Health Practice: Core Competencies for Public Health Professionals (2014, 2021)9 and the de Beaumont Foundation: Adapting and Aligning Public Health Strategic Skills (2021).10 As PHEP is often only one part of a professional's tasks in an NPHI, we chose FTEs to enumerate workforce capacity. For assessing competencies, we chose an approach similar to the Hennessy-Hicks Training Needs Analysis (TNA),11 a widely used tool endorsed by the WHO.
Finally, the pilot tool was validated by participants from the six EU MS NPHIs (n = 14) using a modified Delphi study design based on the RAND/UCLA appropriateness method.12 Ten of the 14 respondents had been employed by their NPHI for 10+ years, two for 5-10 years and two for <5 years, and all had an active role in PHEP within their institute. Each indicator was scored using a 9-point Likert scale (1 = not relevant at all, 9 = highly relevant), or 'I don't know'. Agreement between respondents was calculated as the percentage of given scores falling in the highest tertile (7-9) out of all given scores. Indicators with a median score >7 and agreement >70% were included in the final tool. Indicators with a median score >7 and agreement <70% required discussion and voting in a consensus round. Indicators with a median score ≤7 were excluded. In the consensus round, only indicators for which >70% consensus was met were included in the final tool.
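The inclusion rules above can be expressed as a short decision function. The sketch below is purely illustrative (function and variable names are ours, not part of the published tool) and assumes that 'I don't know' responses have already been removed from the score list:

```python
# Minimal sketch of the Delphi inclusion rules described above.
# Scores are 1-9 Likert ratings; "I don't know" responses are assumed
# to have been filtered out before calling this function.
from statistics import median

def delphi_decision(scores):
    """Classify an indicator as 'include', 'consensus round' or 'exclude'."""
    med = median(scores)
    # Agreement = share of all given scores in the highest tertile (7-9).
    agreement = sum(1 for s in scores if 7 <= s <= 9) / len(scores)
    if med > 7 and agreement > 0.70:
        return "include"
    if med > 7:
        return "consensus round"   # relevant, but insufficient agreement
    return "exclude"               # median score <= 7
```

For example, an indicator scored [9, 8, 8, 9, 7, 8] has a median of 8 and 100% agreement, so it is included directly, whereas one with a high median but scattered scores is routed to the consensus round.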

Results

Review of existing tools for workforce assessment
A total of eight tools related to the workforce capacity and capability of NPHIs were reviewed. These were divided into three subcategories: (i) WHO, (ii) ECDC and (iii) other. A summary overview of each tool reviewed and key information can be found in the Supplementary information.

Experiences of NPHIs on existing assessment methods
At the time of the interviews, only two of the participating countries (Finland and Germany) had undergone the WHO JEE. In one country (the Netherlands), a JEE had been postponed due to COVID-19. Two experts had been involved in the development of the tool. One expert had contributed to three JEEs as an evaluator, mainly in countries with lower economic status. Participants agreed that the JEE tool outputs on workforce capacity and capability are mainly qualitative. The JEE tool was seen as straightforward to use, but the required data are not always available.
Several participants had experience with SPAR. One participant found SPAR difficult to implement at NPHIs, as it is based on national-level indicators which are not easily translated to the institutional level. It was noted that the recent revision of SPAR improved the assessment, as indicators are now better defined to provide a more accurate reflection of workforce capacity and capabilities.
Views on the HEPSA tool were mixed: one participant organization suggested it provides more quantitative data regarding workforce capacity and capabilities, but another found that the tool does not measure workforce capacity, only equipment. One NPHI had been involved in the development of HEPSA, ranking indicators for the tool. That NPHI had organized in-house simulation training and found the HEPSA tool useful, but concluded that personnel would require training on the tool prior to implementation.
Reflecting on current assessment tools in the aftermath of COVID-19, the participants identified several gaps. Firstly, more capacity to analyze the available data was needed. Secondly, experience from different sectors, such as logistics and pharmacy, would have been valuable for COVID-19 vaccine procurement and roll-out. Thirdly, the capacities and capabilities for communication were insufficient, and fourthly, legal expertise was needed to understand legislation with regard to pandemic plans. Conversely, it was noted that, due to the dynamic and unpredictable nature of a pandemic, it is difficult to say with certainty which indicators would be 'future-proof'.

Current practices in workforce capacity and capability planning at NPHIs
Several participants had experience in workforce planning in their organization. It was agreed that general preparedness plans incorporate surge capacity for the workforce, but these aspects are generic and do not specifically apply to NPHIs. Many stated that the surge capacity required during the COVID-19 pandemic had been recruited on an ad hoc basis, without a pre-existing procedure. In two countries, staff had been seconded from local public health services to the NPHI, and in several institutes, personnel were 'borrowed' from other departments. One NPHI reported having a database of public health system personnel, from which it can request personnel for the NPHI. However, this database is not PHEP-specific, and there is a shortage of personnel with epidemiological expertise. Some of the relocated staff had received previous training in competencies required for PHEP, and it was agreed that these competencies should be maintained for the future. Training was also provided on an ad hoc basis, as some staff members lacked the required skills.
Learning from the COVID-19 crisis, NPHIs are making plans to prepare for rapid upscaling in future crises. This includes implementing 'reserve training' during peacetime to establish a baseline level of knowledge, providing a pool of personnel that can be redeployed with minimal additional training, and identifying suitable personnel from a PHEP-specific database. With regard to workforce funding, the NPHIs found it difficult to influence budget decisions at the ministry or strategic level by merely stating the number of persons needed. Instead, it was considered more effective to state which functions the PHEP personnel can perform with current funding, and which can be performed with increased funding.

Expectations of NPHIs for an ideal tool for workforce assessment
Based on the experiences from the COVID-19 crisis, a gap was clearly identified: NPHIs require tools for identifying and predicting workforce requirements, especially for public health emergency situations. The existing tools generally address national-level workforce capacity, whereas NPHIs require a practical tool applicable at the organization level, allowing more granularity for assessment and planning.
Using the tool should enable an NPHI to determine the right number of personnel possessing the competencies needed for PHEP functions, as well as to identify and address training needs. One participant suggested that the tool could couple capacity (absolute numbers) with competency, identifying the personnel needed for specific roles and the hours needed to fulfil each role, providing a better assessment of capability than the existing assessment tools. One participant suggested that the tool include a three-tiered system, as in a national Emergency Operations Centre (Level 1: work in the usual core group; Level 2: draw resources from the same institute to upscale the staff; Level 3: recruit outside of the organization).
It was agreed that the PANDEM-2 tool should be customizable according to national guidelines and needs, fulfilling the mandates and roles of different agencies/countries. The tool should be adjustable over the years and in light of gained experience. Regarding the use of the tool, there was consensus on a hybrid approach: the process could be started by a small team, including experts and decision-makers, to ensure the outputs of the tool have practical and financial support. The human resources department was suggested as the initiation point, with technical support from other units (e.g. finance). The work of this 'core' team could be followed by a consensus meeting held at the institutional/department level. Involvement of the Ministry of Health was also suggested.
Three years was considered a suitable interval for repeating the assessment and planning, as an annual assessment would be too great a burden and would not offer sufficient time for gaps to be addressed. It was suggested that the assessment could follow the same cycle as the upcoming EU country reporting on prevention, preparedness and response planning (based on Article 7 of EU Regulation 2022/2371 on serious cross-border threats to health).13

Pilot tool development
An overview of the indicators included in the pilot tool is presented in figure 1. A detailed outline of the pilot tool can be found in the Supplementary information.
In total, 115 qualitative competency indicators specific to the PHEP context were selected for the pilot tool. This included 102 competencies from the Public Health Emergency Preparedness: Core competencies for EU MSs (2017)14 framework, divided under five categories: (i) detection and assessment, (ii) policy development, adaptation and implementation, (iii) healthcare services, (iv) coordination and communication (within the PHEP system) and (v) emergency risk communication (with the public). As the framework had been developed prior to the COVID-19 pandemic, we also sought indicators to reflect the requirements that emerged during the pandemic. Five such indicators were identified in the new domain 'Infodemiology and infodemic management' in Core competencies in applied infectious disease epidemiology in Europe,15 and were included under category 5, Emergency risk communication (with the public), of the PANDEM-2 pilot tool.

Figure 1 Overview of the quantitative and qualitative indicators included in the PANDEM-2 pilot workforce self-assessment tool. A full outline can be found in the Supplementary information.
Cross-cutting competencies that would be relevant in crisis situations were also included. Eight additional indicators in five subcategories were identified: (i) scientific skills, (ii) systems thinking, (iii) problem solving, (iv) resource management and (v) change management, from two sources: the Core Competencies for Public Health Professionals by the Council on Linkages Between Academia and Public Health Practice9 and the strategic skills compiled by the U.S. National Consortium for Public Health Workforce Development.16 The capacity indicators for the pilot tool were chosen based on the workforce groups listed in the WHO JEE and SPAR workforce indicators, as well as the ECDC workforce capacity and training needs survey. The workforce groups most relevant to NPHIs were identified through discussions within the working group. As a result, we selected five specific PHEP workforce groups as indicators for which the tool would assess both 'peace-time' and surge capacity requirements: (i) public health epidemiologists, (ii) public health microbiologists, (iii) data scientists/statisticians, (iv) modellers and (v) communication experts. Furthermore, Total workforce in PHEP and Total workforce at the NPHI were selected as indicators. As the eighth indicator, we chose the Proportion (%) of total NPHI staff in PHEP, calculated as follows: Proportion (%) of total NPHI staff in PHEP = (Total workforce in PHEP / Total workforce at the NPHI) × 100.
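As a worked example of the eighth indicator, the calculation is a simple ratio of FTE counts. The figures below are invented purely for illustration and do not describe any NPHI:

```python
# Worked example of the eighth capacity indicator.
# The FTE figures are illustrative only, not real NPHI data.
def phep_staff_proportion(phep_fte, total_fte):
    """Proportion (%) of total NPHI staff working in PHEP."""
    return 100 * phep_fte / total_fte

# E.g. an institute with 45 FTEs in PHEP out of 600 FTEs in total:
print(phep_staff_proportion(45.0, 600.0))  # -> 7.5
```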
Competencies were assessed using a modified Hennessy-Hicks TNA.11 Here, items (skills, tasks or competencies) are scored on a seven-point scale according to two criteria: 'How critical the task is to the successful performance of the respondent's job' and 'How well the task is currently performed'. The questionnaire can identify how best to improve performance in areas where deficits have been identified: in addition to the importance and performance measures, each item can also be rated along a seven-point scale according to how far the respondent believes that the competency gap can be addressed by organizational changes or training. As we validated the importance of each competency indicator through a Delphi process involving end-users, we omitted the rating of importance in the final version of the tool. For improving performance, instead of the original two options (organizational changes and training), we selected three alternatives that are not mutually exclusive: (i) training the current PHEP staff performing the function, (ii) training other staff from the organization to perform the function and (iii) recruiting new staff. This selection was supported by the results of the focus group discussions, where a similar approach was suggested for a three-level surge capacity system in a Public Health Emergency Operations Centre (Level 1: work in the usual core group; Level 2: draw resources from the same institute to upscale the staff; Level 3: recruit outside of the organization).
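One way responses in this adapted TNA format might be aggregated is to average the performance ratings per competency and tally the (non-exclusive) remediation choices. The structure and names below are hypothetical, a sketch of the idea rather than the published tool's implementation:

```python
# Illustrative aggregation of adapted-TNA responses: each respondent rates
# current performance of a competency on a 1-7 scale and may tick any of
# the three non-exclusive remediation options. Names are hypothetical.
from statistics import mean
from collections import Counter

REMEDIES = ("train current PHEP staff",
            "train other staff in organization",
            "recruit new staff")

def summarize(responses):
    """responses: list of (performance_score, set_of_chosen_remedies)."""
    scores = [perf for perf, _ in responses]
    tally = Counter(r for _, chosen in responses for r in chosen)
    return {"mean_performance": mean(scores),
            "remedy_votes": {r: tally.get(r, 0) for r in REMEDIES}}

# Three hypothetical respondents for one competency:
example = [(3, {REMEDIES[0]}),
           (2, {REMEDIES[0], REMEDIES[2]}),
           (5, set())]
result = summarize(example)
```

A low mean performance combined with a clear remediation preference would then point an NPHI toward training versus recruitment for that competency.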

Responses from questionnaire
Of the eight quantitative indicators (capacities) in the pilot tool, the Delphi questionnaire respondents agreed to include seven. One indicator was nominated for discussion during the consensus round. Respondents suggested an additional seven capacity indicators, which were discussed and voted on in the consensus round (table 1).
Of the 115 qualitative indicators (competencies), responses to the questionnaire resulted in 98 indicators being included in the final tool, and 15 indicators were excluded based on feedback. Two indicators were nominated for discussion, with an additional two indicators suggested by respondents. Therefore, four qualitative indicators were included for discussion in the consensus round (table 1).
A summary of the 15 qualitative indicators excluded based on the analysis criteria can be found in table 2. The excluded indicators were mainly from category 3, Healthcare services: respondents indicated that these indicators are not the responsibility of the NPHI, but mainly of authorities such as the Ministry of Health or the healthcare sector.

Consensus round
In the consensus round, the three indicators nominated for discussion/voting (table 1) were rejected (see table 3 for information).
In response to the questionnaire, participants suggested a total of nine additional indicators (seven quantitative, two qualitative) to be discussed and voted on in the consensus round. Several were rejected in their original form, but based on participant feedback, some were modified and partly combined with previously accepted indicators. Consequently, of the nine additional indicators suggested, two quantitative and two qualitative indicators were added to the final tool. These are as follows:

Quantitative indicators:
(1) Workforce in PHEP at NPHI: Management and support.
(2) Workforce in PHEP at NPHI: Doctors with expertise in Infectious Diseases and Public Health, and other specialists in infection prevention and hygiene.

Qualitative indicators:
(1) Advocate the development of plans to collaborate with universities and other training organizations to meet training needs under a PHE (e.g. contact tracing).
(2) Advocate the development of plans to receive surge workforce capacity from universities to the NPHI (e.g. epidemiologists, statisticians and modellers).

Discussion
In collaboration with six EU MS NPHIs, we developed and validated a tool that enables assessment of the PHEP workforce in both quantitative (FTEs) and qualitative (competency) terms. Through a review of existing resources and focus group discussions, we identified indicators and metrics relevant to the PHEP context. This tool specifically addresses the current gap in the assessment of the PHEP workforce needs and requirements of NPHIs at the institutional level. The tool can be used to assess the workforce needed for PHEP functions during normal circumstances, and the availability of surge capacity in a public health emergency situation. Furthermore, the tool enables NPHIs to analyze gaps in PHEP staff competencies.
Subsequently, the assessment scores can be used to identify and prioritize training and recruitment needs.
A major gap identified in this study was the distinct lack of a central registry of public health or PHEP professionals, meaning that staff numbers are not available to the NPHIs. This finding is supported by the Assessment of workforce capacity and training needs for the prevention and control of communicable diseases in the EU/EEA (2022).18 The WHO is currently addressing the challenge of workforce enumeration through a roadmap to define, map and measure the workforce delivering essential public health functions, with a specific focus on emergency preparedness and response.19 Given these existing challenges and the ongoing, larger WHO initiative, the PANDEM-2 workforce self-assessment tool was designed to help address them by providing a way to measure total NPHI staff capacity and emergency preparedness at the institutional level. The tool will also provide a useful benchmark for NPHIs to advocate for additional investment in the PHEP workforce. Acknowledging that the organization of PHEP functions differs between European countries, the tool may also be used at the regional Public Health Agency level, especially in countries where public health functions are decentralized. Information from such an assessment will be highly relevant in future, as the EU is currently taking actions to implement activities under Article 11, 'Training of healthcare staff and public health staff', of Regulation (EU) 2022/2371 on serious cross-border threats to health.13 These actions will include the assessment of current workforce capacity and capability gaps in EU MSs. The PANDEM-2 workforce self-assessment tool therefore aligns with this need and will support effective planning and development to strengthen workforce capacity in EU MSs.

Table 1
Summary of the Delphi questionnaire results, showing the initial number of indicators in the pilot tool and how this evolved based on the outcomes of the Delphi questionnaire. Notes: An in-depth explanation of the excluded indicators is available in table 2. In the consensus meeting, a total of 12 indicators (eight quantitative, four qualitative) were discussed and voted on. The results of these discussions are outlined in table 3. Green = included, Red = excluded, Yellow = discussed/voted on in the consensus meeting.

Table 2
Indicators excluded based on the questionnaire analysis criteria. Note: As outlined above, the majority of indicators were excluded because they concerned activities deemed to be outside the remit of a National Public Health Institute. The colours in table 2 are used to differentiate between the different indicators and sub-groups; the colours themselves do not have any relevance.

Table 3
Indicators discussed and voted on in the Delphi consensus round, including respondent comments and the decision/modification made regarding inclusion in the final PANDEM-2 workforce capacity self-assessment tool.