Technical notes

This FAQ page provides further technical details about the KEF. If your question is not answered below, please let us know by emailing KEF@re.ukri.org.

Frequently asked questions

  1. Metric source data

    Where does the data for the KEF come from?

    The majority of data is currently derived from the Higher Education Business and Community Interactions (HE-BCI) survey. This annual survey is run by HESA and completed by all English higher education providers registered with the Office for Students as ‘Approved Fee-Cap’ and across the UK by all higher education providers regulated by the Scottish Funding Council, Higher Education Funding Council for Wales (HEFCW) and Department for the Economy, Northern Ireland (although note that the KEF only includes English providers at present).

    The KEF also includes additional data provided by Innovate UK (Working with business) and Elsevier (Co-authorship in Research Partnerships). Full details of the source data for each metric are available as a downloadable Excel file named ‘Annex C: KEF2 metrics data sources’, published alongside the May 2022 KEF2 decisions report.

    The Public & Community Engagement score is derived from a self-assessment – see ‘Public & Community Engagement self-assessment’ for further information.

  2. Data and calculations

    Can I download the source data for the KEF?

    You can download a summary of the calculated quintile data displayed in the metric, or a more detailed data file, in either CSV or Excel format. It is also possible to download images of the KEF dashboards as Image, PDF or PowerPoint files. If you require the information in a different format, please contact KEF@re.ukri.org.

    Will the metrics change in the future?

    In the first iteration of the KEF we chose what we considered to be the most suitable metrics available at the time. In 2021 we undertook a detailed review of the first iteration of the KEF to consider the effectiveness of the metrics used and whether any changes were required, and we published our findings in February 2022. The review report sets out our short, medium and long term development plans for the KEF. We will continue to actively develop the metrics used in future iterations of the KEF in line with the plans set out in the KEF review report.

    Following the review we proposed a number of amendments to the underlying methodology and metrics and after further engagement with providers we published the KEF2 Decisions report confirming the changes that would be applied to the second iteration of the KEF.

  3. What has changed in KEF2?

    We have detailed the changes made for KEF2 in the KEF2 Decisions report published in May 2022; however, in summary we have confirmed the following for KEF2:

    • Underlying methodology amended to remove sector-wide scaling
    • Research partnerships perspective – co-authorship with non-academic partners metric, output types now include trade journals
    • Perspective title of ‘Skills, enterprise and entrepreneurship’ changed to ‘Continuing professional development (CPD) and graduate start-ups’
    • CPD and graduate start-ups perspective – CPD/CE learner days metric removed
    • IP and commercialisation – average external investment per spin-out metric denominator amended
  4. Why have narrative statements not been updated for KEF2?

    The KEF review identified a number of areas where we could develop and improve the narrative statement templates, criteria and guidance before calling for new statements. The review also demonstrated a preference for narrative statement updates to only take place on a two or three year cycle. For these reasons we have taken the decision to allow factual corrections to the existing narrative statements but not to invite substantive updates.

    The second iteration of the KEF will therefore continue to display the narrative statements that were submitted for the first iteration. The content of the statements will remain focussed on activities undertaken in the previous three academic years up to the publication of KEF1, i.e. 2016-17, 2017-18 and 2018-19.

  5. Why have public & community engagement self-assessment scores not been updated for KEF2?

    As we have outlined above, we will not be inviting substantive updates to the narrative statements for KEF2. The public & community engagement self-assessment scores are evidenced by the supporting narrative statement, and as such we consider that it will not be appropriate to allow either amendments to, or new submissions of, self-assessment scores for KEF2. All eligible providers will be invited to submit new self-assessment scores alongside updated narrative statements for KEF3, which will be published in 2023.

  6. If the public & community engagement self-assessment scores have not been updated for KEF2, why is my overall result looking different?

    We have not amended the original self-assessment scores displayed in the narrative submission, i.e. each of the five aspects being scored between 1-5 resulting in a total score out of 25. However, we have adapted the methodology used to calculate the perspective score as a quintile in line with methodology changes applied to the other perspectives.

    To convert the self-assessment score out of 25 into a quintile result, all providers in the sector are ordered by their total score to give a metric position (1st-135th). All providers sharing the same total score will be given the same metric and therefore perspective position. The sector is divided into quintiles based on their perspective positions.
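    As an illustrative sketch of the conversion described above (the provider names and aspect scores below are invented, not real KEF data), five aspect scores of 1-5 are totalled out of 25, providers are ordered by total, and providers sharing a total share a metric position:

```python
# Hypothetical self-assessment data: five aspect scores (1-5) per provider.
providers = {
    "Provider A": [5, 4, 4, 3, 5],
    "Provider B": [3, 3, 4, 4, 3],
    "Provider C": [5, 4, 4, 3, 5],   # same total as Provider A
}

# Total each provider's score out of 25.
totals = {name: sum(aspects) for name, aspects in providers.items()}

# Order totals high-to-low; providers with equal totals share a position.
ordered = sorted(totals.values(), reverse=True)
positions = {name: ordered.index(t) + 1 for name, t in totals.items()}

print(positions)  # {'Provider A': 1, 'Provider B': 3, 'Provider C': 1}
```

    Note how Providers A and C share position 1, and Provider B takes position 3 rather than 2 - the tied providers "block out" the intervening position.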

  7. Timescales

    Is the data by academic, financial or calendar year?

    The vast majority of data is provided by academic year. The only exceptions are:

    • Data provided by Innovate UK for the Working with Business perspective is provided by April-March financial year.
    • Data provided by Elsevier for the Research Partnerships perspective is provided by calendar year (2019, 2020, 2021).

    Elsevier data aligns with the relevant academic year: i.e. 2019 Elsevier data is presented within the 2018/19 academic year.

  8. Quintile calculations

    Which HEPs are included in the sector quintile and cluster benchmark calculations?

    The metrics of all eligible HEPs were included in the sector quintile and cluster benchmark calculations, irrespective of whether their metrics are displayed as an individual dashboard.

    How have you calculated the average values for the KEF?

    We are using two methods for calculating the three-year averages, and the method selected for a particular metric will depend on which is most appropriate for the underlying dataset as follows:

    Average Method 1: will be used where the dataset has zero values in the denominator of one or more of the three years being averaged (which would otherwise result in a ‘divide by zero’ error when using method 2).


    \[\frac{a_1+a_2+a_3}{b_1+b_2+b_3}\]


    Average Method 2: will be used for all other metrics


    \[\frac{\frac{a_1}{b_1}+\frac{a_2}{b_2}+\frac{a_3}{b_3}}{3}\]


    Where average method 1 needs to be used for a single HEP to prevent a divide by zero error, it will be used for all HEPs within that metric.

    Further details and example calculations are included in the KEF2 decisions report.

    Note that Goldsmiths and the University of Northampton did not submit a HESA Finance return in 2019/20. The metrics that use finance data as a denominator are therefore calculated for these providers by using the 2018/19 finance figures in place of the missing 2019/20 figures.

    What happens when the denominator is zero for each of the three years?

    In this scenario both averaging methods return a divide by zero error. In these instances, we will manually apply an average of zero for the metric.
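    The method selection and zero-denominator handling described above can be sketched as follows (the function names are illustrative, not from any KEF source code):

```python
def average_method_1(a, b):
    """Sum numerators and denominators across the three years, then divide:
    (a1 + a2 + a3) / (b1 + b2 + b3)."""
    return sum(a) / sum(b)

def average_method_2(a, b):
    """Average the three yearly ratios: ((a1/b1) + (a2/b2) + (a3/b3)) / 3."""
    return sum(ai / bi for ai, bi in zip(a, b)) / len(a)

def kef_three_year_average(a, b):
    """Select the method as the FAQ describes: method 1 when any year's
    denominator is zero; a manual zero when every denominator is zero."""
    if all(bi == 0 for bi in b):
        return 0.0                      # all denominators zero: apply zero
    if any(bi == 0 for bi in b):
        return average_method_1(a, b)   # avoids a divide-by-zero error
    return average_method_2(a, b)

print(kef_three_year_average([10, 20, 30], [100, 200, 300]))  # method 2
print(kef_three_year_average([10, 20, 30], [100, 0, 300]))    # method 1: 0.15
print(kef_three_year_average([10, 20, 30], [0, 0, 0]))        # 0.0
```

    Note that, per the FAQ, if method 1 is needed for any single HEP the real calculation applies it to all HEPs within that metric; this per-provider sketch omits that sector-wide step.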

    How are perspective quintiles calculated?

    To calculate the perspective quintile, first sum the relative positions of a given provider for each of the contributing metrics. This figure is then used to calculate the quintile for each provider in that perspective.

    Are the full range of quintiles used for every metric and perspective?

    The full range of quintiles is used for every metric and perspective.

    Why do some quintiles contain more providers than others?

    Where two or more providers share exactly the same value in an ordered list, they all receive the same position, and we do not alter the positions of the providers that follow: for example, if three providers share position 5, the next provider is given position 8. This can result in some quintiles containing more than 20% of the participating providers, to accommodate all providers who share a position, with the number of providers in the adjacent quintiles adjusted accordingly.
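    A minimal sketch of this tie handling (sometimes called "standard competition ranking"): tied providers share a position and the following provider skips ahead. The banding of positions into quintiles below is an assumption for illustration; the FAQ does not spell out the exact banding formula.

```python
import math

def competition_positions(values, higher_is_better=True):
    """Return a position (1 = best) for each value; tied values share
    the first position of their tied block, and later values skip ahead."""
    ordered = sorted(values, reverse=higher_is_better)
    return [ordered.index(v) + 1 for v in values]

def quintile_for_position(position, n_providers):
    """Assumed banding: divide the ordered sector into five equal bands."""
    return min(5, math.ceil(position / (n_providers / 5)))

scores = [9, 7, 7, 7, 5, 5, 4, 3, 2, 1]   # ten hypothetical providers
positions = competition_positions(scores)
quintiles = [quintile_for_position(p, len(scores)) for p in positions]
print(positions)  # [1, 2, 2, 2, 5, 5, 7, 8, 9, 10]
print(quintiles)  # [1, 1, 1, 1, 3, 3, 4, 4, 5, 5]
```

    In this toy example the three-way tie at position 2 pushes four providers into quintile 1 and leaves quintile 2 empty, illustrating why some quintiles can hold more than 20% of providers.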

    How are cluster averages calculated?

    Cluster averages are calculated by taking the mean average of the perspective positions of providers belonging to that cluster for each perspective, and then using this figure to calculate its quintile. The engagement level associated with that quintile is then reported.

    Can I still access the results of KEF1?

    The interactive dashboards for the first iteration of the KEF are no longer available. However, you can still download the detailed and summary data files for KEF1 in either CSV or Excel format using the links below:

    Download a summary of the first iteration of KEF in CSV or in Excel format.

    Download a more detailed version of the data from the first iteration of the KEF as CSV or in Excel format.

    Can I compare my results from KEF1 with those in KEF2?

    There have been notable changes to the underlying methodology used to calculate the KEF2 results, with the removal of a scaling step and results being presented in five quintiles rather than ten deciles. This means it is not possible to directly compare an individual provider’s results between the first and second iterations of the KEF. However, we would encourage providers to broadly consider their relative performance in relation to their cluster average as an indicator of whether there has been an improvement in performance.

    Alternatively, you may wish to use the following as an approximate key to compare KEF1 deciles with KEF2 quintiles:

    KEF1 decile → KEF2 quintile
    • 1 or 2 → 1
    • 3 or 4 → 2
    • 5 or 6 → 3
    • 7 or 8 → 4
    • 9 or 10 → 5
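    The approximate key above reduces to a one-line mapping (the function name is illustrative):

```python
def kef1_decile_to_kef2_quintile(decile):
    """Map a KEF1 decile (1-10) to its approximate KEF2 quintile (1-5):
    deciles 1-2 -> quintile 1, 3-4 -> 2, 5-6 -> 3, 7-8 -> 4, 9-10 -> 5."""
    if not 1 <= decile <= 10:
        raise ValueError("decile must be between 1 and 10")
    return (decile + 1) // 2

print([kef1_decile_to_kef2_quintile(d) for d in range(1, 11)])
# [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
```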

    I think there is an error in my data – who do I contact?

    Contact us by emailing KEF@re.ukri.org in the first instance. If your provider wishes to put forward amendments to HESA data (including HE-BCI, and the finance or student records), there is a formal data amendments process.

  9. KEF Clusters

    What is the purpose of the KEF Clusters?

    The purpose of clustering is to group the KEF participants into KEF clusters that have similar capabilities and resources available to them to engage in knowledge exchange activities. In this way, the KEF provides a fairer comparison of performance between similar providers.

    Is one cluster better than another?

    No - it is important to note that the KEF clusters are not a ranking in themselves. No one cluster is better or worse than another – they are simply a means to enable fair comparison across a very diverse higher education sector.

    When will the clusters be updated?

    The clustering of providers will not change in KEF2 from the first iteration of the KEF. As confirmed in the KEF2 decisions report, providers will remain in one of the seven KE clusters identified to enable meaningful and fair comparison. These seven comprise the five broad discipline clusters, plus the ‘STEM specialists’ and ‘Arts specialists’ clusters.

    Cluster variables represent a ‘capability base’ which can be thought of as quasi-fixed in the medium-term, but can change over the longer-term through investments in research, teaching and related physical capital. We will therefore periodically re-cluster all English HEPs as appropriate, such as when new data becomes available (particularly REF data). It is therefore expected that the approach to clustering will be revisited ahead of KEF3, using new REF2021 data and ensuring there is sufficient time to consider the methodology and engage with providers.

    How did you decide which providers were put into which cluster?

    The clusters were determined through a statistical analysis undertaken by Tomas Coates Ulrichsen using the following data:

    Scale & focus of knowledge activity by domain
    • Number of academics by function:
      • Teaching/research
      • Teaching only
      • Research only
    • Proportion of academics by 12-split discipline
      • Clinical medicine
      • Allied health other medical, and dentistry
      • Agriculture, forestry and veterinary science
      • Physical sciences and mathematics
      • Biological sciences
      • Engineering and materials science
      • Computer science
      • Architecture and planning
      • Social sciences and law
      • Business and management studies
      • Humanities, languages and education
      • Creative and performing arts, and design
    • Educational function of HEPs:
      • Student FTEs at undergraduate level (full-time/part-time)
      • Student FTEs involved in taught postgraduate (full-time/part-time)
      • Student FTEs involved in research postgraduate (full-time/part-time)
    Physical assets
    • Scale of spending on research-related capital infrastructure
    • Intensity of capital spending (spend per academic)
    Scale of knowledge generation by domain
    • Scale of knowledge generation activity in different knowledge domains
      • Recurrent research income (QR)
      • Research grants and contracts income by STEM, SSB, AH
      • Research quality by STEM, SSB, AH (number of academic FTEs getting 4* publications in REF2014)
    • Research orientation:
      • Research grants and contracts from different sources:
        • UK research councils
        • Charities
        • Government bodies / local authorities, health/hospital authorities
        • Industry
    • International linkages in research:
      • Research grants from overseas
    Intensity of knowledge generation by domain
    • Research-focus of HEP
      • Proportion of academic FTEs submitting to REF
      • Proportion of students undertaking postgraduate research
    • Research intensity by discipline
      • Research grants and contracts income per academic by STEM, SSB, AH
      • Proportion of researchers generating 4* publications in REF2014 by STEM, SSB, AH
    • Research orientation intensity
      • Research grants and contracts income from different sources (RCs, charities, gov’t, industry) per academic
    • Research internationalisation intensity
      • Research grants and contracts income from overseas per academic
  10. Further information on the co-authorship metric

    Elsevier is a global leader in information analytics and has supported the development of the metric for co-authorship with non-academic partners. The metric, and its underlying data, is generated using Elsevier’s SciVal and Scopus tools.

    Elsevier took the list of KEF eligible providers and identified all known affiliated organisations. From a data extract of 8 June 2022, and covering the three calendar years 2019 to 2021, the following outputs were collected for each provider and its affiliates: articles; conference papers; reviews; books; book chapters. The collected outputs were then analysed for the presence of non-academic authors, enabling the proportion of outputs involving non-academic co-authorship to be calculated. Further details on the method used and how to access the underlying data are given below.

    How was the metric for Co-authorship with non-academic partners produced and what database was used?

    The co-authorship metric was calculated using Elsevier’s SciVal system and the Scopus database. SciVal is an analytical tool enabling research activity and performance to be systematically evaluated. SciVal has a global reach covering more than 20,000 research providers and their associated researchers across more than 230 nations. Elsevier’s Scopus database is the world’s largest curated abstract and citation database. Scopus is source-neutral and covers outputs from over 7,000 publishers, drawing from some 27,100 serials, over 140,000 conferences and over 261,000 books. Updates occur daily, with some 10,000 articles indexed per day. As of June 2022, the database included over 87 million documents.

    What parameters were employed for generating the co-authorship metric?

    A snapshot of data from the Scopus database was taken as at 8 June 2022. This has been analysed by calendar year for the three years 2019, 2020 and 2021. Analysis of the snapshot is focused on the following five output types: Article; Conference Paper; Review; Book Chapter; Book. This focus mirrors the methodology Elsevier uses in the Times Higher Education World University Rankings. Similarly, the list of organisational affiliations used to generate the KEF metric began from the affiliations employed for the THES rankings.

    How are non-academic co-authors identified in the analysis?

    The Scopus database employs a combination of automated tagging, manual curation and feedback from its user community to classify organisations and to generate affiliations. The database includes over 90,000 organisations which, alongside a range of metadata, are classified by function (e.g. research institutes, policy institutes, charities). Within the analysis, having excluded single-author outputs from the co-author set, SciVal was employed to identify all non-academic co-authors through their affiliation to relevant organisations, including organisations that are not UK based. Organisations (and hence non-academic co-authorships) were classified as “medical”, “corporate”, “government” or “other”.

    How are outputs attributed? Can an output appear more than once within the analysis, for example if the output involves an academic collaboration between a number of KEF eligible providers?

    SciVal has been used to show an output involving collaboration with multiple KEF eligible providers as an output for each of those providers. While an output is attributed to each relevant KEF provider, it is recorded only once per provider: an output appears once for each eligible KEF provider involved, no matter how many non-academic collaborators there are, even if those non-academic collaborators are from different sectors or countries.

    How has the calculation of the co-authorship metric changed from KEF1 to KEF2?

    The content output types included in KEF2 are: Article; Conference Paper; Review; Book Chapter; Book. For KEF2, outputs of these types published in Trade Journals, which are indexed in Scopus (some 68,000 such outputs), are now also included in the KEF.

    Since KEF1, there has been a change to the way affiliated hospitals and NHS Trusts are included within SciVal, to clarify those whose work is directly attributable to a university (KEF provider), i.e. where the hospital/NHS Trust is effectively part of the university. The definition has been tightened, so the outputs of fewer hospitals and NHS Trusts are attributable to KEF providers in KEF2 than was the case in KEF1. In addition, where a hospital/Trust is considered attributable to a university, it is treated as “academic” for the purpose of the collaboration metric. This applies to all of that Trust/hospital’s collaborations (including those with universities other than the host/parent).

    This approach to the affiliations of NHS Trusts/hospitals is being adopted in SciVal and reflects changes that have been made to the Times Higher Education World University Ranking process.

    Can I review the underpinning co-authorship data for a provider?

    Elsevier’s SciVal users will be able to generate much of the data for themselves using the parameters and methods described above. In addition, Elsevier has agreed to provide the underlying publication data that has been generated for the co-authorship metric to authorised individuals from each provider. The process to obtain the data is for the institutional KEF contact to contact the Research England team at kef@re.ukri.org. Research England will pass on all legitimate requests to Elsevier along with details of the relevant KEF contact. Elsevier will then liaise directly with the KEF contact to produce the required data. This will be provided to the KEF contact as an Excel spreadsheet. A copy of the data will also be sent to Research England along with any other details associated with the response.