This FAQ page provides further technical details about the KEF. If your question is not answered below, please let us know by emailing KEF@re.ukri.org.
Frequently asked questions
-
Metric source data
Where does the data for the KEF come from?
The majority of data is currently derived from the Higher Education Business and Community Interactions (HE-BCI) survey. This annual survey is run by HESA and completed by all English higher education providers registered with the Office for Students as ‘Approved Fee-Cap’, and across the UK by all higher education providers regulated by the Scottish Funding Council, the Higher Education Funding Council for Wales (HEFCW) and the Department for the Economy, Northern Ireland (although note that the KEF only includes English providers at present).
The KEF also includes additional data provided by Innovate UK (Working with business) and Elsevier (Co-authorship in Research Partnerships). Full details of the source data for each metric are available as a downloadable Excel file named ‘Annex D: KEF3 metrics data sources’, published alongside the June 2023 KEF3 decisions report; these data sources remain unchanged for KEF4.
The Public & Community Engagement score is derived from a self-assessment – see ‘Public & Community Engagement self-assessment’ for further information.
-
Data and calculations
Can I download the source data for the KEF?
You can download a summary of the calculated quintile data displayed in the metrics, or a more detailed data file, in either CSV or Excel format. It is also possible to download images of the KEF dashboards as image, PDF or PowerPoint files. If you require the information in a different format, please contact KEF@re.ukri.org.
-
Previous KEF results
Can I still access the results of KEF1-3?
The interactive dashboards for the first three iterations of the KEF are no longer available. However, you can still download the detailed and summary data files for KEF1-3 in either CSV or Excel format using the links below:
KEF1
Download a summary of the first iteration of the KEF in CSV or Excel format.
Download a more detailed version of the data from the first iteration of the KEF in CSV or Excel format.
KEF2
Download a summary of the second iteration of the KEF in CSV or Excel format.
Download a more detailed version of the data from the second iteration of the KEF in CSV or Excel format.
KEF3
Download a summary of the third iteration of the KEF in CSV or Excel format.
Download a more detailed version of the data from the third iteration of the KEF in CSV or Excel format.
-
Can I compare my results from KEF4 with those from previous iterations of the KEF?
Generally, provider results can be directly compared between KEF2, KEF3 and KEF4, as there have been no changes to the underlying methodology used to calculate the results between these iterations of the KEF. However, there was some variation in cluster memberships between KEF2 and KEF3, so although the methodology has not changed, in some circumstances there may be more substantial differences for individual providers, or minor changes to the cluster averages, between these iterations.
It should be noted that between KEF1 and KEF2 there were significant changes to the underlying methodology used to calculate the results, including the removal of a scaling step and the presentation of results in five quintiles rather than ten deciles. This means it is not possible to directly compare an individual provider’s results between the first and second iterations of the KEF.
When comparing results to previous years we would encourage providers to broadly consider their relative performance in relation to their cluster average as an indicator of whether there has been an improvement in performance.
-
What data years are in which KEF iterations?
Table 1 shows the data years that are included in each iteration of the KEF up to and including KEF4.
Table 1: Summary of data years included in each iteration of the KEF

KEF iteration      | Data years
KEF4 - Sept 2024   | 2022-23, 2021-22, 2020-21
KEF3 - Sept 2023   | 2021-22, 2020-21, 2019-20
KEF2 - Sept 2022   | 2020-21, 2019-20, 2018-19
KEF Review         | No KEF publication
KEF1 - March 2021  | 2018-19, 2017-18, 2016-17
-
Will the metrics change in the future?
In May 2023 we published the decisions of Research England’s Review of Knowledge Exchange Funding. In this publication we also clarified that, in the short to medium term, the KEF will continue to meet the following purposes:
- To provide higher education providers (HEPs) with a useful source of information and data on their knowledge exchange (KE) activities, for the purposes of understanding, benchmarking and improving their own performance.
- Underpinned by the objective of providing more easily accessible and comparable information on performance for the purposes of transparency and public accountability.
We confirmed that the current design and development work will continue through further iterations and be focussed on supporting HEP performance until at least KEF5 in 2025. The continued use of the current design and methodology will allow HEPs to use the KEF to compare their performance across iterations.
In the long-term we will bring forward proposals on the development of the KEF for use in funding when we have the appropriate data and metrics available to make more fundamental changes to our funding approach (proposals not to be brought forward before 2025/26, with any subsequent implementation likely to take several further years). Specific ambitions for areas of long-term metric development for the KEF were set out in the KEF review report published in February 2022.
-
What has changed in KEF4?
No substantive changes have been made to the methodology used to calculate the metrics that underpin KEF4. The presentation of the results also remains the same. The KE clustering of HE providers remains the same as that used for KEF3 (published in Annex B of the KEF3 decisions report).
KEF3 included updated narrative statements. In response to feedback received through the 2021 KEF review, it was agreed that the KEF narrative statements (and associated self-assessment scores) would not be subject to annual updates, so KEF4 continues to display the narrative statements that were submitted for KEF3. It has not yet been confirmed when the next narrative update will be incorporated, but there will be no updates until KEF6 (2026) at the earliest.
-
Have changes been made to narrative statements or public & community engagement self-assessment scores for KEF4?
KEF4 will continue to display the narrative statements submitted for KEF3. It will not be possible for providers who have not previously submitted a narrative statement to submit a new statement for KEF4, and we are unable to accept updates to the information or self-assessment scores provided in KEF3. We anticipate that narrative statements will be updated every three years, although this timeline may be amended to converge with the accountability requirements for our KE funding. Further calls for updated narrative statements will come no earlier than KEF6.
-
Timescales
Is the data by academic, financial or calendar year?
The vast majority of data is provided by academic year. The only exceptions are:
- Data provided by Innovate UK for the Working with Business perspective covers the April-March financial year.
- Data provided by Elsevier for the Research Partnerships perspective covers calendar years (2021, 2022, 2023).
Elsevier data aligns with the relevant academic year: i.e. 2022 Elsevier data is presented within the 2021-22 academic year.
Extraction dates
Data is extracted at the latest possible date to prepare the KEF dashboards for publication in September. In some cases source data may change after the extraction date (for example, Elsevier continually update Scopus data, and other provider data amendments may occur through the year). The KEF results for a single HEP are calculated relative to all other eligible HEPs. For this reason, KEF dashboards will remain fixed to the data available at the pre-publication data extraction point, and post-publication updates will only be made in exceptional circumstances.
Approach to sourcing 2022/23 student and finance data
In some instances data is unavailable for a HEP for a given year, and data for the previous year is used instead. For example, a HEP may not have submitted the Annual Finance Return (AFR) to the Office for Students by the KEF data extraction date in Spring 2024. The metrics that use finance data are therefore calculated for these providers using the 2021/22 finance figures in place of the missing 2022/23 figures. In future KEF iterations, once the missing data (in this example, the 2022/23 data) becomes available, it is incorporated into the KEF.
In exceptional circumstances where previous data is not available we will use best endeavours to obtain correct data after the data extraction date.
Due to the delayed availability of 2022/23 student data, 2021/22 student data has been used for all providers in KEF4 in place of 2022/23 data where required.
-
Quintile calculations
Which HEPs are included in the sector quintile and cluster benchmark calculations?
The metrics of all eligible HEPs were included in the sector quintile and cluster benchmark calculations, irrespective of whether their individual metrics are displayed as an individual dashboard.
How have you calculated the average values for the KEF?
We are using two methods for calculating the three-year averages, and the method selected for a particular metric will depend on which is most appropriate for the underlying dataset as follows:
Average Method 1: will be used where the dataset has zero values in the denominator of one or more of the three years being averaged (which would otherwise result in a ‘divide by zero’ error when using method 2).
\[\frac{a_1+a_2+a_3}{b_1+b_2+b_3}\]
Average Method 2: will be used for all other metrics
\[\frac{\frac{a_1}{b_1} + \frac{a_2}{b_2} + \frac{a_3}{b_3}}{3}\]
Where average method 1 needs to be used for a single HEP to prevent a divide by zero error, it will be used for all HEPs within that metric.
Further details and example calculations are included in the KEF3 decisions report.
What happens when the denominator is zero for each of the three years?
In this scenario both averaging methods return a divide by zero error. In these instances, we will manually apply an average of zero for the metric.
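As an illustration of the averaging rules above, the following sketch shows how the two methods and the zero-denominator fallback could be implemented. The function and variable names are purely illustrative and are not part of the KEF specification; the KEF3 decisions report remains the authoritative source of worked examples.

```python
def average_method_1(a, b):
    """Method 1: (a1 + a2 + a3) / (b1 + b2 + b3)."""
    return sum(a) / sum(b)

def average_method_2(a, b):
    """Method 2: mean of the three yearly ratios a_i / b_i."""
    return sum(ai / bi for ai, bi in zip(a, b)) / 3

def three_year_average(a, b, metric_uses_method_1):
    """a, b: the three yearly numerators and denominators for one HEP.
    metric_uses_method_1 is decided per metric: if any HEP's data would cause a
    divide-by-zero under Method 2, Method 1 is used for all HEPs in that metric."""
    if all(bi == 0 for bi in b):
        # Denominator is zero in every year: an average of zero is applied manually.
        return 0.0
    return average_method_1(a, b) if metric_uses_method_1 else average_method_2(a, b)

# Hypothetical example: a zero denominator in year two forces Method 1 for the metric.
print(three_year_average([10, 8, 12], [100, 0, 120], metric_uses_method_1=True))    # 0.136...
print(three_year_average([10, 8, 12], [100, 90, 120], metric_uses_method_1=False))  # 0.096...
```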
How are perspective quintiles calculated?
To calculate the perspective quintile, we first sum the relative positions of a given provider across each of the contributing metrics. This figure is then used to calculate the quintile for each provider in that perspective.
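As a rough illustration of the steps above, the sketch below sums a provider’s metric positions and converts the resulting order into quintiles. The ranking direction and the even split into five bands are assumptions made for illustration only, and tie handling (described further below) is omitted for brevity.

```python
import math

def perspective_quintiles(metric_positions):
    """metric_positions: provider -> list of its positions in each contributing
    metric (a lower position is assumed here to mean a stronger result)."""
    totals = {p: sum(positions) for p, positions in metric_positions.items()}
    ordered = sorted(totals, key=totals.get)  # best summed position first
    n = len(ordered)
    # Assign quintiles 1-5, with quintile 5 assumed here to be the strongest fifth.
    return {p: 5 - math.floor(i * 5 / n) for i, p in enumerate(ordered)}
```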
Are the full range of quintiles used for every metric and perspective?
The full range of quintiles is used for every metric and perspective.
Why do some quintiles contain more providers than others?
Where two or more providers share exactly the same value in an ordered list, they are all given the same position number, and we do not alter the position values of the other providers as a result: for example, if three providers share a position of 5, the next provider will be given a position value of 8. This can result in some quintiles containing more than 20% of the participating providers, to accommodate all providers who share that position, with the number of providers in the adjacent quintiles adjusted accordingly.
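The tie handling described above corresponds to what is often called standard competition ranking. A minimal sketch follows, with illustrative names and assuming higher metric values rank higher.

```python
def competition_positions(values):
    """values: provider -> metric value. Tied providers share a position and the
    next distinct value skips ahead (e.g. positions 5, 5, 5, 8)."""
    ordered = sorted(values.items(), key=lambda kv: kv[1], reverse=True)
    positions, prev_value, shared_pos = {}, object(), 0
    for i, (provider, value) in enumerate(ordered, start=1):
        if value != prev_value:
            shared_pos = i
        positions[provider] = shared_pos
        prev_value = value
    return positions
```

Because every provider sharing a value is placed in the same quintile, one quintile can end up holding more than a fifth of providers, with adjacent quintiles holding correspondingly fewer.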
How are cluster averages calculated?
Cluster averages are calculated by taking the mean average of the perspective positions of providers belonging to that cluster for each perspective, and then using this figure to calculate its quintile. The engagement level associated with that quintile is then reported.
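A minimal sketch of the cluster average step, under the same illustrative assumptions as the sketches above; position_to_quintile stands in for the quintile calculation applied to individual providers and is not a KEF-defined function.

```python
from statistics import mean

def cluster_average_quintile(perspective_positions, cluster_members, position_to_quintile):
    """Mean of the cluster members' positions for one perspective, converted to a
    quintile; the engagement level associated with that quintile is what is reported."""
    average_position = mean(perspective_positions[p] for p in cluster_members)
    return position_to_quintile(average_position)
```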
I think there is an error in my data – who do I contact?
Contact us by emailing KEF@re.ukri.org in the first instance. If your provider wishes to put forward amendments to HESA data (including HE-BCI, and the finance or student records), there is a formal data amendments process.
-
KE Clusters
What is the purpose of the KE Clusters?
The purpose of clustering is to group providers into KE clusters that have similar capabilities and resources available to them to engage in knowledge exchange activities. In this way, the KEF provides a fairer comparison of performance between similar providers.
Is one cluster better than another?
No - it is important to note that the KE clusters are not a ranking in themselves. No one cluster is better or worse than another – they are simply a means to enable fair comparison across a very diverse higher education sector.
When are the clusters updated?
The clustering of providers was updated in May 2023 in order to take account of the most recently available data, including from the Research Excellence Framework (REF2021). There have been minor updates to the cluster descriptions and a small number of changes to cluster placements but generally providers have remained similarly placed across the seven KE clusters identified to enable meaningful and fair comparison. These seven comprise the five broad discipline clusters, plus the ‘STEM specialists’ and ‘Arts specialists’ clusters.
Cluster variables represent a ‘capability base’ which can be thought of as quasi-fixed in the medium-term, but can change over the longer-term through investments in research, teaching and related physical capital. We will therefore periodically re-cluster all English HEPs as appropriate, such as when new data becomes available (particularly REF data). Ahead of the next REF2028 results we have no specific plans to update the cluster placements, but we will regularly review the available data and if we consider there have been significant changes we will re-run the cluster process.
How did you decide which providers were put into which cluster?
The clusters were determined through a statistical analysis undertaken by Tomas Coates Ulrichsen at the University Commercialisation and Innovation Policy Evidence Unit.
-
Further information on the co-authorship metric
Elsevier is a global leader in information analytics and has supported the development of the metric for co-authorship with non-academic partners. The metric, and its underlying data, is generated using Elsevier’s SciVal and Scopus tools.
Elsevier took the list of KEF eligible providers and identified all known affiliated organisations. From a data extract of 1 April 2024 and covering the three calendar years 2021 to 2023, the following outputs were collected for each provider and its affiliates: articles; conference papers; reviews; books; book chapters. The collected outputs were then analysed for the presence of non-academic authors, enabling the proportion of outputs involving non-academic co-authorship to be calculated. Further details on the method used and how to access the underlying data are given below.
How was the metric for Co-authorship with non-academic partners produced and what database was used?
The co-authorship metric was calculated using Elsevier’s SciVal system and the Scopus database. SciVal is an analytical tool enabling research activity and performance to be systematically evaluated. SciVal has a global reach covering more than 24,000 research providers and their associated researchers across more than 230 nations. Elsevier’s Scopus is a comprehensive, source-neutral abstract and citation database curated by independent subject matter experts. Scopus covers outputs from over 7,000 publishers, drawing from some 28,153 serials, over 158,000 conferences and over 351,000 books. Updates occur daily with some 13,000 articles/day indexed. As of April 2024, the database included over 96 million documents.
What parameters were employed for generating the co-authorship metric?
A snapshot of data from the Scopus database was taken as at 1 April 2024. This has been analysed by calendar year for the three years 2021, 2022 and 2023. Analysis of the snapshot is focused on the following five output types: Article; Conference Paper; Review; Book Chapter; Book. This focus mirrors the methodology Elsevier uses in the Times Higher Education World University Rankings. Similarly, the list of organisational affiliations used to generate the KEF metric began from the affiliations employed for the THES rankings.
How are non-academic co-authors identified in the analysis?
The Scopus database employs a combination of automated tagging, manual curation and feedback from its user community to classify organisations and to generate affiliations. The database includes over 94,000 organisations (affiliation profiles). Alongside a range of metadata, organisations are classified by function (e.g. research institutes, policy institutes, charities) where there is sufficient data to support this; not all affiliations can be classified. Within the analysis, having excluded single-author outputs from the co-author set, SciVal was employed to identify all non-academic co-authors through their affiliation to relevant organisations, including organisations that are not UK based. Organisations (and hence non-academic co-authorships) were classified as “medical”, “corporate”, “government” or “other”.
How are outputs attributed? Can an output appear more than once within the analysis, for example if the output involves an academic collaboration between a number of KEF eligible providers?
Where an output involves collaboration between multiple KEF eligible providers, SciVal attributes that output to each of those providers. However, an output is recorded only once for each provider: it appears once for each eligible KEF provider involved, no matter how many non-academic collaborators there are, even if those collaborators are from different sectors or countries.
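For illustration, the sketch below shows how a proportion of outputs involving non-academic co-authorship could be computed for a single provider from records of this kind. The record structure and the choice of denominator (multi-author outputs only) are assumptions made here for clarity; the actual metric is generated by Elsevier from Scopus and SciVal data as described above.

```python
def co_authorship_proportion(outputs):
    """outputs: list of dicts for one provider, each with 'n_authors' and
    'co_author_types' (classifications such as 'academic', 'corporate',
    'government', 'medical', 'other'). Hypothetical structure, not Elsevier's API."""
    # Single-author outputs are excluded from the co-author set.
    multi_author = [o for o in outputs if o["n_authors"] > 1]
    if not multi_author:
        return 0.0
    # Each output counts once for the provider, however many non-academic
    # collaborators it has, and whatever sectors or countries they are from.
    with_non_academic = sum(
        1 for o in multi_author
        if any(t != "academic" for t in o["co_author_types"])
    )
    return with_non_academic / len(multi_author)
```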
How has the calculation of the co-authorship metric changed from KEF3 to KEF4?
No change has been made to the underlying methodology relating to the co-authorship metric between KEF3 and KEF4.
However, in the Elsevier classification there are a number of affiliation options. One of those affiliation options is an “attributable to” relationship. In the model, a research output from an organisation that has an attributable to relationship to a University is counted as being from that University.
In our KEF analysis, a University cannot collaborate with itself, so an output from University X that has an author in Faculty Y and an author in “Attributable To” organisation Z is not collaborative.
Likewise, an “attributable to” organisation is always considered to be an academic organisation. Thus, if University A collaborates with a hospital, Organisation Z, that is “attributable to” University B, this is considered an academic collaboration with University B and not a non-academic collaboration, even though it is with a hospital. This approach ensures consistent treatment across all Universities for their collaborations with “attributable to” affiliations.
If an affiliation status changes, the history is effectively rewritten: the relationship is recorded as it exists at the date of the extract, and the previous data assumes this affiliation has always existed. For example, if a change in affiliation status occurred in, say, November 2023 and was recorded in Scopus as at 1 April 2024 (the date of the KEF4 extract), that change will also result in changes to the data generated for 2021 and 2022. Thus, the data for 2021 and 2022 is not static between KEF3 and KEF4. The results for KEF4 reflect the affiliations as they were recorded as at 1 April 2024.
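The “attributable to” rules above amount to a small set of checks when deciding whether a given output represents a non-academic collaboration for a provider. The sketch below is a hypothetical restatement of those rules; the data structures and helper names are illustrative and are not Elsevier’s, and it assumes co-author organisations have already been resolved to their top-level profiles.

```python
def is_non_academic_collaboration(provider, co_author_orgs, universities, attributable_to):
    """provider: the KEF provider being assessed; co_author_orgs: the other
    organisations on the output; universities: the set of academic organisations;
    attributable_to: org -> the university it is attributable to (if any)."""
    # An output from an "attributable to" organisation counts as coming from its
    # parent university, and is therefore treated as academic.
    resolved = {attributable_to.get(org, org) for org in co_author_orgs}
    # A university cannot collaborate with itself: drop the provider's own
    # faculties and anything attributable to the provider.
    resolved.discard(provider)
    # Any remaining non-university organisation makes this a non-academic collaboration.
    return any(org not in universities for org in resolved)
```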
Can I review the underpinning co-authorship data for a provider?
Elsevier’s SciVal users will be able to generate much of the data for themselves using the parameters and methods described above. In addition, Elsevier has agreed to provide the underlying publication data generated for the co-authorship metric to authorised individuals from each provider. To obtain the data, the institutional KEF contact should email the Research England team at kef@re.ukri.org. Research England will pass on all legitimate requests to Elsevier along with details of the relevant KEF contact. Elsevier will then liaise directly with the KEF contact to produce the required data, which will be provided to the KEF contact as an Excel spreadsheet. A copy of the data will also be sent to Research England along with any other details associated with the response.