The UK Teaching Excellence Framework (TEF) has returned following revisions, but how has it changed? Are we any nearer to solving the wicked problem of measuring university teaching? And why did England, which already has mature quality assurance arrangements, need to introduce the TEF in the first place?
This September will see universities in England receive new Teaching Excellence Framework (TEF) ratings. The TEF is a national scheme designed to assess universities for ‘excellence’ in teaching, learning and student outcomes for undergraduate level education. It confers Olympic-medal-inspired awards of gold, silver, or bronze, which the UK government sees as a way of incentivising universities to deliver excellence in the areas that matter most to students.
For the last few years, the TEF – which produced its first set of results seven years ago – has been on hiatus. During this time, the framework has been revised and methodological work has been undertaken to develop new ways of measuring university teaching.
With this work complete, in January, all higher education providers with over 500 students were required to enter the scheme. This involves a written submission, which is considered alongside a set of indicators including responses to the National Student Survey and metrics on student continuation and completion. This evidence is assessed by TEF panels, comprised of academics with leadership responsibilities and students with experience of representing their peers.
New features for this round of the TEF include an independent student submission, designed to provide insights into what it is like to be a student at a particular provider. Although this component is optional, the TEF received 204 student submissions from the 228 participating institutions.
Another change is a clearer distinction between student experience and student outcomes in the framework. To reflect this, all providers will receive, in addition to one overall Olympic-medal-style award, two underpinning ratings – one for student experience and another for student outcomes – to signal where a provider excels in one aspect.
Time for Transparency
The TEF can be seen as a response to a range of calls for universities to be subject to greater transparency. What if quality assurance is not enough? What if we don’t just want to assure or enhance quality but measure performance and compare one university with another? These drivers account for the UK government’s decision to introduce the TEF, and its continued commitment to the scheme in the face of sustained criticism. The TEF can therefore be viewed as more of a transparency tool than a quality assessment.
We can see a movement in the last two decades calling for more information on how well universities perform. Rather than assuring quality, this is about assessing and comparing performance. We need to know how things are, not what we think they should be. In other words, we need to know how well universities perform in the game, not what the rules of the game are.
This trend can be seen in the work of Dirk Van Damme, then Head of the Centre for Educational Research and Innovation at the OECD, who wrote in 2009 about the need for new ‘transparency-enhancing regulatory systems’.
Reflecting on the effects of the Bologna Process at that time, Van Damme talked of the need for:
“a system which provides information on the essential dimensions of programme and institutional diversity in higher education, not driven by ex ante types of regulatory divisions, but on evidence-based ex post documentation. The number of dimensions should be sufficient to allow for a fair assessment of institutions and programmes, but not too many, allowing easy consumption of the information and avoiding information overload.” (Van Damme, 2009 p. 51)
Greater transparency was required, Van Damme argued, as quality assurance and accreditation systems “cannot fully satisfy the demand for transparency” as they do not present information to the public in a single format they can easily absorb.
Teaching and Transparency
Signalling the quality of universities is, of course, a role now fulfilled by the rankings that are widely reported in the media. However, even if we set aside the methodological issues, there is a problem with using rankings – they are largely concerned with research. The absence of teaching in rankings creates a rationale for new transparency tools focusing on university teaching. And if we want to know about university teaching, do we also need to measure learning and student outcomes?
However, as van der Wende and Westerheijden (2009) point out, we are immediately confronted with the longstanding problem: “there are, in fact, no widely accepted methods for measuring teaching quality” and it “is even more difficult, it seems, to generate data based on measures of the value added during the educational process”.
Over the last two decades work has been undertaken to address this problem. Those following European endeavours in this area will recall the OECD’s abandoned Assessment of Higher Education Learning Outcomes (AHELO) project. The failure of this project reminds us that measuring what students have ‘learned’ or ‘gained’ is no easier than measuring teaching quality.
The TEF and Transparency
The TEF can be seen as an English response to the calls for greater transparency, and an attempt to address the ‘wicked problem’ of measuring university teaching and learning. As my book Teaching Excellence? Universities in an age of student consumerism identifies, comparisons can be drawn between the difficulties encountered by AHELO and the development of ‘Learning Gain’ metrics for the TEF.
The book charts the development of the TEF from its announcement in 2015, through its various revisions, to its current iteration. This story provides insights into the methodological work undertaken to measure university teaching. Several new metrics were developed and piloted, including: graduate earnings, teaching intensity to measure contact time, and learning gain to assess the ‘journey travelled’ by students. Of note is why some of the new metrics did not make the grade and do not feature in the revised scheme.
Teaching Excellence? Universities in an age of student consumerism also looks at the role of the TEF in providing information to applicants in a more liberalised market. Here, the UK government originally envisaged the TEF as helping consumers to make an informed choice on where to study. For example, universities rated gold would attract more students.
However, as the book explores, the limitations of the TEF as a measure of teaching quality also hamper its ability to inform choice. Moreover, reducing the performance of a whole university to one medal rating is questionable, and comparing institutions – even within one country with the same quality arrangements – is difficult.
The story of the TEF is of value as it contributes to the debate on how to measure teaching and learning in higher education. The lessons learnt from the TEF, including the aspects that did not work, provide a benchmark for scholars and other systems to learn from.
Andrew Gunn is a Lecturer at the University of Manchester (UK) and the author of Teaching Excellence? Universities in an age of student consumerism published by SAGE.
Van Damme, D. (2009). The search for transparency: Convergence and diversity in the Bologna Process. In van Vught, F. (Ed.). Mapping the higher education landscape. Towards a European classification of higher education, (pp. 39-55), Springer.
Van der Wende, M., & Westerheijden, D. (2009). Rankings and classifications: The need for a multidimensional approach. In van Vught, F. (Ed.). Mapping the higher education landscape. Towards a European classification of higher education, (pp. 71-86), Springer.