Technical notes

This FAQ page provides further technical details about the KEF. If your question is not answered below, please let us know by emailing KEF@re.ukri.org.

Frequently asked questions

  1. Metric source data

    Where does the data for the KEF come from?

    The majority of data is currently derived from the Higher Education Business and Community Interactions (HE-BCI) survey. This annual survey is run by HESA and completed by all English higher education providers registered with the Office for Students as ‘Approved Fee-Cap’ and across the UK by all higher education providers regulated by the Scottish Funding Council, Higher Education Funding Council for Wales (HEFCW) and Department for the Economy, Northern Ireland (although note that the KEF only includes English providers at present).

    The KEF also includes additional data provided by Innovate UK (Working with business) and Elsevier (Co-authorship in Research Partnerships). Full details of the source data for each metric are available as a downloadable Excel file named ‘Annex D: KEF3 metrics data sources’ published alongside the June 2023 KEF3 decisions report.

    The Public & Community Engagement score is derived from a self-assessment – see ‘Public & Community Engagement self-assessment’ for further information.

  2. Data and calculations

    Can I download the source data for the KEF?

    You can download a summary of the calculated quintile data displayed in the metric, or a more detailed data file, in either CSV or Excel format. It is also possible to download images of the KEF dashboards as Image, PDF or PowerPoint files. If you require the information in a different format, please contact KEF@re.ukri.org.

  3. Previous KEF results

    Can I still access the results of KEF1 and KEF2?

    The interactive dashboards for the first two iterations of the KEF are no longer available. However, you can still download the detailed and summary data files for both KEF1 and KEF2 in either CSV or Excel format using the links below:

    KEF1

    Download a summary of the first iteration of KEF in CSV or in Excel format.

    Download a more detailed version of the data from the first iteration of the KEF as CSV or in Excel format.

    KEF2

    Download a summary of the second iteration of KEF in CSV or in Excel format.

    Download a more detailed version of the data from the second iteration of the KEF as CSV or in Excel format.

  4. Can I compare my results from KEF3 with those from previous iterations of the KEF?

    Generally, provider results can be directly compared between KEF2 and KEF3, as there have been no changes to the underlying methodology used to calculate them. However, cluster memberships have varied somewhat, so although the methodology has not changed, there may in some circumstances be more substantial differences for individual providers, or minor changes to the cluster averages.

    It should be noted that between KEF1 and KEF2 there were significant differences to the underlying methodology used to calculate the results, with the removal of a scaling step and results being presented in five quintiles rather than ten deciles. This means it is not possible to directly compare an individual provider’s results between the first and second iterations of the KEF.

    When comparing results to previous years we would encourage providers to broadly consider their relative performance in relation to their cluster average as an indicator of whether there has been an improvement in performance.

  5. What data years are in which KEF iterations?

    Figure 1 shows the data years that are included in each iteration of the KEF up to and including KEF3.

    Figure 1 – KEF data years for KEF1, KEF2 and KEF3: KEF1 included 2016/17 to 2018/19; KEF2 included 2018/19 to 2020/21; KEF3 included 2019/20 to 2021/22.

  6. Will the metrics change in the future?

    In May 2023 we published the decisions of RE’s Review of Knowledge Exchange Funding. In this publication we also clarified that, in the short to medium term, the KEF will continue to meet the following purposes:

    • To provide higher education providers (HEPs) with a useful source of information and data on their knowledge exchange (KE) activities, for the purposes of understanding, benchmarking and improving their own performance.
    • To provide more easily accessible and comparable information on performance, for the purposes of transparency and public accountability.

    We confirmed that the current design and development work will continue through further iterations and be focussed on the purpose to support HEP performance until at least KEF5 in 2025. The continued use of the current design and methodology will allow the KEF to be used by HEPs to compare their performance with each iteration.

    In the long-term we will bring forward proposals on the development of the KEF for use in funding when we have the appropriate data and metrics available to make more fundamental changes to our funding approach (proposals not to be brought forward before 2025/26, with any subsequent implementation likely to take several further years). Specific ambitions for areas of long-term metric development for the KEF were set out in the KEF review report published in February 2022.

  7. What has changed in KEF3?

    As detailed in the KEF3 Decisions report, there have been no substantive changes to the methodology used from KEF2 to KEF3; however, the KE clusters and narrative statements have been updated. For further details see ‘What changes have been made to the narrative statements for KEF3?’ and the ‘KE clusters’ notes.

  8. What changes have been made to narrative statements for KEF3?

    Narrative statements for the ‘Public and Community Engagement’ and ‘Local growth and regeneration’ perspectives, and the ‘Institutional Context’ narrative, have been updated for KEF3. We anticipate that narrative statements will be updated every three years, although this timeline may be amended to converge with the accountability requirements for our KE funding. Further calls for updated narrative statements will be made no earlier than KEF6.

  9. Have public & community engagement self-assessment scores been updated for KEF3?

    Yes, new public & community engagement self-assessment scores have been provided for KEF3. All eligible providers were invited to submit new self-assessment scores, appropriately evidenced by updated narrative statements.

    As with previous iterations, the total public and community engagement score is calculated from the sum of the individual aspect scores, which is displayed as a quintile result, as with all other metrics and perspectives.

  10. Timescales

    Is the data by academic, financial or calendar year?

    The vast majority of data is provided by academic year. The only exceptions are:

    • Data provided by Innovate UK for the Working with Business perspective is provided by April-March financial year.
    • Data provided by Elsevier for the Research Partnerships perspective is provided by calendar year (2020, 2021, 2022).

    Elsevier data aligns with the relevant academic year: i.e. 2022 Elsevier data is presented within the 2021/22 academic year.

    Extraction dates

    Data is extracted at the latest possible date to prepare the KEF dashboards for publication in September. In some cases source data may change after the extraction date (for example Elsevier continually update Scopus data, and other provider data amendments may occur through the year). The KEF results for a single HEP are calculated relative to all other eligible HEPs. For this reason KEF dashboards will remain fixed to the data available at the pre-publication data extraction point and post publication updates will only be made in exceptional circumstances.

  11. Quintile calculations

    Which HEPs are included in the sector quintile and cluster benchmark calculations?

    The metrics of all eligible HEPs were included in the sector quintile and cluster benchmark calculations, irrespective of whether or not their individual metrics are displayed in an individual dashboard.

    How have you calculated the average values for the KEF?

    We are using two methods for calculating the three-year averages, and the method selected for a particular metric will depend on which is most appropriate for the underlying dataset as follows:

    Average Method 1: will be used where the dataset has zero values in the denominator of one or more of the three years being averaged (which would otherwise result in a ‘divide by zero’ error when using method 2).


    \[\frac{a_1+a_2+a_3}{b_1+b_2+b_3}\]


    Average Method 2: will be used for all other metrics


    \[\frac{\frac{a_1}{b_1} + \frac{a_2}{b_2} + \frac{a_3}{b_3}}{3}\]


    Where average method 1 needs to be used for a single HEP to prevent a divide by zero error, it will be used for all HEPs within that metric.
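    As an illustration only (not code used by the KEF itself), the selection between the two averaging methods, including the all-zero denominator case covered below, might be sketched as:

```python
def three_year_average(numerators, denominators):
    """Choose between the two averaging methods described above
    (illustrative sketch only, not the KEF's own implementation).

    numerators = (a1, a2, a3), denominators = (b1, b2, b3).
    """
    if all(b == 0 for b in denominators):
        # Denominator is zero in all three years: average set to zero
        return 0.0
    if any(b == 0 for b in denominators):
        # Method 1: ratio of sums, avoiding a divide-by-zero error
        return sum(numerators) / sum(denominators)
    # Method 2: mean of the three yearly ratios
    return sum(a / b for a, b in zip(numerators, denominators)) / 3
```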

    Further details and example calculations are included in the KEF3 decisions report.

    However, in some instances data is unavailable for a HEP for a given year, and therefore data for the previous year will be used instead. For example, a HEP may not have submitted the Annual Finance Return (AFR) to the Office for Students by the KEF data extraction date in Spring 2023. The metrics that use finance data are therefore calculated for these providers by using the 2020/21 finance figures in place of the missing 2021/22 figures. In exceptional circumstances where previous data is not available we will use best endeavours to obtain correct data after the data extraction date.

    What happens when the denominator is zero for each of the three years?

    In this scenario both averaging methods return a divide by zero error. In these instances, we will manually apply an average of zero for the metric.

    How are perspective quintiles calculated?

    To calculate a perspective quintile, first sum the relative positions of a given provider for each of the contributing metrics. This figure is then used to calculate the quintile for each provider in that perspective.
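    A minimal sketch of this two-step calculation, assuming (for illustration only) equal-width quintile bands over the ordered totals, with lower totions placed first; the KEF’s exact banding is not specified here:

```python
import math

def perspective_quintiles(metric_positions):
    """For each provider, sum its positions across the contributing
    metrics, then band providers into quintiles by that total
    (illustrative sketch; assumes equal banding, lowest total first)."""
    totals = {p: sum(pos) for p, pos in metric_positions.items()}
    ordered = sorted(totals, key=totals.get)  # lowest total first
    n = len(ordered)
    # Provider at rank r (1-based) of n falls in quintile ceil(5*r/n)
    return {p: math.ceil(5 * (rank + 1) / n)
            for rank, p in enumerate(ordered)}
```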

    Are the full range of quintiles used for every metric and perspective?

    The full range of quintiles is used for every metric and perspective.

    Why do some quintiles contain more providers than others?

    Where two or more providers share exactly the same value in an ordered list, they are all given the position the first of them would have held had the values been distinct, and the positions of subsequent providers are unaffected: for example, if three providers share a position of 5, the next provider is given a position value of 8. This can result in some quintiles containing more than 20% of the participating providers, to accommodate all providers who share that position; the number of providers in the adjacent quintiles is adjusted accordingly.
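    The tie-handling described above is standard competition ranking; a sketch (assuming, for illustration, that higher metric values rank first):

```python
def shared_positions(values):
    """Assign positions in a descending ordered list where tied values
    share a position and the next distinct value skips ahead, e.g.
    positions 5, 5, 5 are followed by 8 (sketch of the rule above)."""
    ordered = sorted(values, reverse=True)
    first_seen = {}
    for i, v in enumerate(ordered, start=1):
        first_seen.setdefault(v, i)  # first index at which v appears
    return [first_seen[v] for v in ordered]
```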

    How are cluster averages calculated?

    Cluster averages are calculated by taking the mean average of the perspective positions of providers belonging to that cluster for each perspective, and then using this figure to calculate its quintile. The engagement level associated with that quintile is then reported.
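    This step might be sketched as follows (illustrative only; the equal-width banding of positions into quintiles is an assumption, not the KEF’s published banding):

```python
import math
from statistics import mean

def cluster_average_quintile(member_positions, n_providers):
    """Mean of the cluster members' perspective positions, mapped to a
    quintile (illustrative sketch; assumes equal-width bands over
    positions 1..n_providers, with position 1 falling in quintile 1)."""
    avg = mean(member_positions)
    return math.ceil(5 * avg / n_providers)
```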

    I think there is an error in my data – who do I contact?

    Contact us by emailing KEF@re.ukri.org in the first instance. If your provider wishes to put forward amendments to HESA data (including HE-BCI, and the finance or student records), there is a formal data amendments process.

  12. KE Clusters

    What is the purpose of the KE Clusters?

    The purpose of clustering is to group providers into KE clusters that have similar capabilities and resources available to them to engage in knowledge exchange activities. In this way, the KEF provides a fairer comparison of performance between similar providers.

    Is one cluster better than another?

    No - it is important to note that the KE clusters are not a ranking in themselves. No one cluster is better or worse than another – they are simply a means to enable fair comparison across a very diverse higher education sector.

    When are the clusters updated?

    The clustering of providers was updated in May 2023 in order to take account of the most recently available data, including from the Research Excellence Framework (REF2021). There have been minor updates to the cluster descriptions and a small number of changes to cluster placements but generally providers have remained similarly placed across the seven KE clusters identified to enable meaningful and fair comparison. These seven comprise the five broad discipline clusters, plus the ‘STEM specialists’ and ‘Arts specialists’ clusters.

    Cluster variables represent a ‘capability base’ which can be thought of as quasi-fixed in the medium-term, but can change over the longer-term through investments in research, teaching and related physical capital. We will therefore periodically re-cluster all English HEPs as appropriate, such as when new data becomes available (particularly REF data). Ahead of the next REF2028 results we have no specific plans to update the cluster placements, but we will regularly review the available data and if we consider there have been significant changes we will re-run the cluster process.

    How did you decide which providers were put into which cluster?

    The clusters were determined through a statistical analysis undertaken by Tomas Coates Ulrichsen at the University Commercialisation and Innovation Policy Evidence Unit.

  13. Further information on the co-authorship metric

    Elsevier is a global leader in information analytics and has supported the development of the metric for co-authorship with non-academic partners. The metric, and its underlying data, is generated using Elsevier’s SciVal and Scopus tools.

    Elsevier took the list of KEF eligible providers and identified all known affiliated organisations. From a data extract of 1 April 2023 and covering the three calendar years 2020 to 2022, the following outputs were collected for each provider and its affiliates: articles; conference papers; reviews; books; book chapters. The collected outputs were then analysed for the presence of non-academic authors, enabling the proportion of outputs involving non-academic co-authorship to be calculated. Further details on the method used and how to access the underlying data are given below.

    How was the metric for Co-authorship with non-academic partners produced and what database was used?

    The co-authorship metric was calculated using Elsevier’s SciVal system and the Scopus database. SciVal is an analytical tool enabling research activity and performance to be systematically evaluated. SciVal has a global reach covering more than 24,000 research providers and their associated researchers across more than 230 nations. Elsevier’s Scopus is a comprehensive, source-neutral abstract and citation database curated by independent subject matter experts. Scopus covers outputs from over 7,000 publishers, drawing from some 27,950 serials, over 149,000 conferences and over 292,000 books. Updates occur daily with some 13,000 articles/day indexed. As of May 2023, the database included over 91 million documents.

    What parameters were employed for generating the co-authorship metric?

    A snapshot of data from the Scopus database was taken as at 1 April 2023. This has been analysed by calendar year for the three years 2020, 2021 and 2022. Analysis of the snapshot is focused on the following five output types: Article; Conference Paper; Review; Book Chapter; Book. This focus mirrors the methodology Elsevier uses in the Times Higher Education World University Rankings. Similarly, the list of organisational affiliations used to generate the KEF metric began from the affiliations employed for the THE rankings.

    How are non-academic co-authors identified in the analysis?

    The Scopus database employs a combination of automated tagging, manual curation and feedback from its user community to classify organisations and to generate affiliations. The database includes over 94,000 organisations (affiliation profiles) which, alongside a range of metadata, are classified by function (e.g. research institutes, policy institutes, charities) where the data is sufficient to support this. Not all affiliations can be classified. Within the analysis, having excluded single-author outputs from the co-author set, SciVal was employed to identify all non-academic co-authors through their affiliation to relevant organisations, including organisations that are not UK based. Organisations (and hence non-academic co-authorships) were classified as “medical”, “corporate”, “government” or “other”.
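    As a hedged sketch of the resulting calculation (the sector encoding here is hypothetical, not Elsevier’s actual data model):

```python
def non_academic_coauthorship_share(outputs):
    """Share of a provider's outputs with at least one non-academic
    co-author (illustrative sketch). Each output is a list of its
    co-authors' affiliation sectors, e.g. 'academic', 'corporate',
    'government'; single-author outputs are excluded, as above."""
    multi_author = [o for o in outputs if len(o) > 1]
    flagged = [o for o in multi_author
               if any(sector != 'academic' for sector in o)]
    return len(flagged) / len(multi_author) if multi_author else 0.0
```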

    How are outputs attributed? Can an output appear more than once within the analysis, for example if the output involves an academic collaboration between a number of KEF eligible providers?

    SciVal has been used to show an output involving collaboration between multiple KEF eligible providers as an output for each of those providers. While an output is attributed to each relevant KEF provider, it is recorded only once for each provider: an output appears once for each eligible KEF provider involved, no matter how many non-academic collaborators there are, even if those collaborators are from different sectors or countries.

    How has the calculation of the co-authorship metric changed from KEF2 to KEF3?

    No change has been made to the underlying methodology relating to the co-authorship metric between KEF2 and KEF3.

    However, in the Elsevier classification there are a number of affiliation options. One of those affiliation options is an “attributable to” relationship. In the model, a research output from an organisation that has an attributable to relationship to a University is counted as being from that University.

    In our KEF analysis, a University cannot collaborate with itself, so an output from University X that has an author in Faculty Y and an author in “attributable to” organisation Z is not counted as collaborative.

    Likewise, an “attributable to” organisation is always considered to be an academic organisation. Thus, if University A collaborates with a hospital, Organisation Z, that is “attributable to” University B, this is considered an academic collaboration with University B and not a non-academic collaboration, even though it is with a hospital. This approach ensures consistent treatment across all Universities for their collaborations with “attributable to” affiliations.
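    The “attributable to” rule can be sketched as follows (the mapping and set names are hypothetical, purely to illustrate the logic):

```python
def resolve(org, attributable_to):
    """Resolve an organisation to its parent university where an
    'attributable to' relationship exists (hypothetical mapping)."""
    return attributable_to.get(org, org)

def is_non_academic_collaboration(author_orgs, attributable_to, academic_orgs):
    """After resolving 'attributable to' relationships, an output counts
    as a non-academic collaboration only if some affiliation falls
    outside the academic set (sketch of the rule described above)."""
    resolved = {resolve(o, attributable_to) for o in author_orgs}
    return any(r not in academic_orgs for r in resolved)
```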

    If an affiliation status changes, the history is effectively rewritten: the relationship is recorded as it exists at the date of the extract, and the previous data assumes this affiliation has always existed. For example, if a change in affiliation status occurred in November 2022 and was recorded in Scopus as at 1 April 2023 (the date of the KEF 2023 extract), that change will also result in changes to the data generated for 2020 and 2021. Thus, the data for 2020 and 2021 is not static between KEF 2022 and KEF 2023. The results for KEF 2023 reflect the affiliations as they were recorded at 1 April 2023.

    Can I review the underpinning co-authorship data for a provider?

    Elsevier’s SciVal users will be able to generate much of the data for themselves using the parameters and methods described above. In addition, Elsevier has agreed to provide the underlying publication data generated for the co-authorship metric to authorised individuals from each provider. To obtain the data, the institutional KEF contact should email the Research England team at kef@re.ukri.org. Research England will pass on all legitimate requests to Elsevier along with details of the relevant KEF contact. Elsevier will then liaise directly with the KEF contact to produce the required data, which will be provided to the KEF contact as an Excel spreadsheet. A copy of the data will also be sent to Research England along with any other details associated with the response.