Abstract
Rankings in higher education are now common, but do they mean anything? Can they accurately reflect the quality of an institution? University rankings, while imperfect, serve as a proxy for comparative measures of quality. This paper begins by providing a philosophical and historical profile of the notion of “quality,” considers what might constitute quality in higher education, and examines how rankings specifically convey this impression for the disciplines of art and design. The paper illustrates the wider role played by rankings in the highly competitive international higher education sector by exploring the various types of rankings, their methodologies, and the criteria they use to measure institutions. It highlights how different rankings measure different research and teaching activities, and the various tensions that can arise across disciplinary boundaries; among institutional and departmental priorities; in research, teaching and learning; and across national and international dimensions within the fields of art and design when rankings compare unique offerings quantitatively.
Original language | English |
---|---|
Pages (from-to) | 243-255 |
Number of pages | 13 |
Journal | She Ji |
Volume | 2 |
Issue number | 3 |
DOIs | 10.1016/j.sheji.2017.01.001 |
Publication status | Published - 1 Sep 2016 |
Externally published | Yes |
Keywords
- Art and design
- Higher education
- Quality
- Rankings
Cite this
Zen and the Art of University Rankings in Art and Design. / Thompson-Whiteside, Scott.
In: She Ji, Vol. 2, No. 3, 01.09.2016, p. 243-255.
Research output: Contribution to journal › Article › peer-review
TY - JOUR
T1 - Zen and the Art of University Rankings in Art and Design
AU - Thompson-Whiteside, Scott
N1 - Few higher education institutions or their academic staff would admit to delivering poor quality in what they do. But nobody is able to precisely articulate what they mean by quality, or indicate the criteria they have applied to assess it. Any form of comparison is often disregarded—particularly by the institution with the lower score—because institutions have different missions and thus different measures of quality. The exception is when institutions perform well on comparative measures and the results are used to enhance their reputation or for marketing purposes. Quality is not a one-dimensional concept that can be measured easily, but rankings have become so easy to understand that almost everyone accepts them.

Historically, institutions like Oxford, Cambridge, and Harvard have promised the highest quality in education. They retain high levels of prestige and reputation, which closely allies them with the notion of high quality and buoys their symbolic capital in the quality debate. More recently, the association between research and prestige—and therefore high quality—has been strengthened by the publication of global university rankings. The global exposure of rankings has meant that high research performance is now a proxy measurement for high quality per se. Most academics know that research performance has little to do with teaching and learning quality, and therefore provides the world with an incomplete assessment of an institution. Even though high-ranking, research-focused institutions enjoy the symbolic capital of prestige and can attract high-caliber students, is the quality of teaching and learning at these institutions any better than at institutions that start with less academically prepared students? The answer is a matter of perspective.

The Times Higher Education World University Ranking and the QS World University Ranking do include some measurable dimensions of teaching and learning. Some of these indicators are metric based—student to staff ratios, the percentage of staff with doctoral degrees, and so on—while other indicators are based on a global survey of academics and employers, with multipliers applied depending on the response. Figure 1 indicates the variation of criteria used across four major ranking systems, and the weight allocated to each criterion expressed as a percentage. If one were to combine these percentages across the four rankings, the number and influence of teaching and learning indicators is very low (see Figure 2).

Research indicators often dominate simply because precise measurements of teaching and learning quality are difficult. Broadly speaking, there are three different ways to measure teaching and learning quality: by gauging the caliber of prospective students, the amount of value added by the learning received, or the success graduates have in obtaining employment and impacting society. Yet there is no precise way to measure any of these. Some rankings aim to measure teaching and learning via quantitative indicators, but frankly this is a kind of “pseudo-quantification.” Measuring the quality of teaching and learning with metrics is a meaningless application of numbers to answer questions that are not ideally suited to quantitative analysis. This is the primary reason why most rankings do not attempt to measure teaching and learning. Although quantifying teaching and learning sounds like a useful exercise, it is a seductive trap.
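The point about combining percentages is simple weighted-average arithmetic, and a short sketch can make it concrete. The Python snippet below uses entirely hypothetical ranking systems and indicator weights (they are not the values shown in Figure 1) to show how each system's weights can be grouped by category and then averaged across systems; with research-heavy placeholder schemes like these, the teaching-related share of the combined weight comes out small.

```python
from collections import defaultdict

# Hypothetical indicator weights (in percent) for four unnamed ranking systems.
# These numbers are placeholders for illustration only, NOT the actual weights
# in Figure 1. Each indicator is tagged with a broad category.
rankings = {
    "System 1": {("research", "citations per faculty"): 60,
                 ("reputation", "academic survey"): 30,
                 ("teaching", "staff-to-student ratio"): 10},
    "System 2": {("research", "papers in indexed journals"): 50,
                 ("reputation", "academic survey"): 40,
                 ("teaching", "doctorates awarded"): 10},
    "System 3": {("research", "highly cited researchers"): 70,
                 ("reputation", "peer review"): 20,
                 ("teaching", "staff qualifications"): 10},
    "System 4": {("research", "citation impact"): 55,
                 ("reputation", "employer survey"): 30,
                 ("teaching", "student satisfaction"): 15},
}

def category_totals(indicator_weights):
    """Sum one system's indicator weights into category totals."""
    totals = defaultdict(float)
    for (category, _name), weight in indicator_weights.items():
        totals[category] += weight
    return totals

# Average each category's share of the total weight across all four systems.
combined = defaultdict(float)
for weights in rankings.values():
    for category, share in category_totals(weights).items():
        combined[category] += share / len(rankings)

for category, share in sorted(combined.items(), key=lambda kv: -kv[1]):
    print(f"{category:<12}{share:5.1f}%  average weight across the four systems")
```

With any similarly research-heavy set of placeholder weights, the teaching category ends up contributing only a small fraction of the aggregate, which is the pattern the text attributes to the real systems summarized in Figure 2.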
It becomes easy to lose sight of what one is trying to measure and the purpose of measuring it in the first place. As one critique of the U.S. News and World Report Education Rankings demonstrates [18: Malcolm Gladwell, “The Order of Things,” The New Yorker, 14 February 2011, accessed December 10, 2012, http://www.newyorker.com/reporting/2011/02/14/110214fa_fact_gladwell], not only the number of criteria selected but also the variations in the weighting of those criteria are factors so opaque and so full of implicit ideological choices that the use of such data cannot be justified.

During the coming 2017/2018 academic year, the UK government will be testing a Teaching Excellence Framework (TEF) [19: “Higher Education: Success as a Knowledge Economy—White Paper,” GOV.UK, accessed January 20, 2017, https://www.gov.uk/government/publications/higher-education-success-as-a-knowledge-economy-white-paper]. The TEF will predominantly use course satisfaction and employment data from the Higher Education Statistics Agency’s “Destination of Leavers from Higher Education” survey [20: “Graduate Destinations,” HESA, accessed January 20, 2017, https://www.hesa.ac.uk/data-and-analysis/students/destinations], and current student experience assessments from the National Student Survey conducted by the Higher Education Funding Council for England [21: The survey covers four countries: England, Northern Ireland, Scotland, and Wales. For more information, see http://www.hefce.ac.uk/lt/nss/]. The Times Higher Education newspaper even created a mock league table using this data, showing Oxford and Cambridge outside of the top ten institutions [22: Chris Havergal, “Mock TEF Results Revealed: A New Hierarchy Emerges,” Times Higher Education, 23 June 2016, accessed 24 June 2016, https://www.timeshighereducation.com/features/mock-teaching-excellence-framework-tef-results-revealed-a-new-hierarchy-emerges]. Similarly, Australia has developed its Quality Indicators for Learning and Teaching (QILT) [23: For more information, see the QILT website, accessed January 20, 2017, https://www.qilt.edu.au/] by combining a range of surveys similar to those used in the UK. The government-funded Graduate Destination Survey, Student Experience Survey, Course Experience Questionnaire, and Employer Satisfaction Survey all contribute to comparative indicators of teaching and learning quality.

Surveying students is obviously the most direct way of obtaining data related to teaching and learning. But are students always the best, most objective judges of high quality, especially given the consumer bias often present once fees have been levied and paid? Even if transparent tools and comparable data sets can help people form reliable conclusions about institutional performance, problems may arise should institutions attempt to direct internal behavior toward those data sets. In that case, the criteria for measuring performance would likely dictate the institution’s approach to teaching and learning. In fact, this is already the case with research—many universities are now directing their research policies and incentives toward increasing certain metrics. Rankings are clearly influencing university research policies and incentives [24: Maria Yudkevich, Philip G. Altbach, and Laura E. Rumbley, eds., The Global Academic Rankings Game: Changing Institutional Policy, Practice, and Academic Life (Oxford: Routledge, 2016)].
Ultimately, the danger is that rankings will create very homogeneous types of institutions whose offerings all promise the same level of quality. Targeting ever-higher rankings encourages institutional isomorphism [25: For further information about organisational isomorphism, see Paul DiMaggio and Walter Powell, “The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organisational Fields,” American Sociological Review 48, no. 2 (1983): 147–60] rather than diversity and differentiation. Is this what academia wants? The range and character of existing indicators is already driving behaviors at an institutional level. Performance indicators are already being used to manipulate—and, some would argue, control—the system. Whoever dictates the type of indicator and the scale or threshold of achievement expected is in a position to dominate not only the process but also the result. As Broadfoot suggests, “Though the prevailing assessment discourse has persuaded both policy-makers and managers of the desirability of panoptic surveillance as a key to quality and efficiency, in reality: ‘the idea that if we calibrate our instruments more finely, we can somehow achieve perfect measurement of quality is a modern illusion, created by computers and statisticians who make a living out of it.’” [26: Patricia Broadfoot, “Quality Standards and Control in Higher Education: What Price Life-long Learning?” International Studies in Sociology of Education 8, no. 2 (1998): 175]

Articulating, judging, and measuring quality in higher education to such a degree also creates interdisciplinary tension within institutions, especially for art and design departments that exist within comprehensive universities. Specific, unique guidelines and recommendations about research topics and formats, publication platforms, teaching and learning styles, student to staff ratios, and degrees offered have been overwhelmingly affected by indicators and incentives that likely distort quality at the disciplinary level. It seems ridiculous for art and design to aim to achieve the same quality standards as the so-called “hard” sciences—especially considering that the disciplines themselves are not individually ranked using the same criteria. In the 2016 QS World University Rankings, teaching and learning indicators like reputation make up 40% of any institution’s overall rank (see Figure 1). But when it comes to individual subject rankings, QS reputation indicators may represent anywhere from twenty-five to one hundred percent of a discipline’s overall rank. For example, reputational indicators comprise only fifty percent of the overall individual philosophy, environmental science, and civil/structural engineering rankings—the other fifty percent QS derives from citation indicators—while one hundred percent of the art and design discipline ranking is derived from reputation indicators alone [27: “QS World University Rankings by Subject 2016,” accessed January 16, 2017, http://www.topuniversities.com/subject-rankings/2016]; a short numerical sketch of this weighting difference follows below.

Quality is clearly a relative concept that can vary enormously from institution to institution. Arbitrary though this may be, the way perceived quality informs an institution’s reputation sends an important signal to both prospective students and potential employers. A survey of employers in the United Kingdom showed that eighty percent regarded institutional reputation as the most important indicator of graduate standards [28: Louise Morley and Sarah Aynsley, “Employers, Quality and Standards in Higher Education: Shared Values and Vocabularies or Elitism and Inequalities?” Higher Education Quarterly 61, no. 3 (2007): 229–49].
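To make the subject-level weighting arithmetic above concrete, here is a minimal Python sketch. The two weighting schemes mirror the percentages quoted in the text (a 50/50 reputation/citations split versus a reputation-only score for art and design); the institution names and indicator scores are invented for illustration and do not correspond to any real QS data.

```python
# Composite scores under two weighting schemes. The schemes reflect the
# percentages quoted in the text; the institutions and their indicator scores
# are hypothetical placeholders used only to show how the same pair of
# institutions can swap places when the weighting changes.

weighting_schemes = {
    "50% reputation / 50% citations": {"reputation": 0.50, "citations": 0.50},
    "100% reputation (art and design)": {"reputation": 1.00, "citations": 0.00},
}

institutions = {
    "Hypothetical Specialist Art School": {"reputation": 92.0, "citations": 35.0},
    "Hypothetical Comprehensive University": {"reputation": 70.0, "citations": 88.0},
}

for scheme_name, weights in weighting_schemes.items():
    # Weighted sum of each institution's indicator scores under this scheme.
    composite = {
        name: sum(weights[indicator] * score for indicator, score in indicators.items())
        for name, indicators in institutions.items()
    }
    print(f"\n{scheme_name}:")
    for name, score in sorted(composite.items(), key=lambda kv: -kv[1]):
        print(f"  {name:<40} composite {score:5.1f}")
```

Under the reputation-only scheme the hypothetical specialist school comes out on top; counting citations at fifty percent reverses the order, so the same two institutions rank differently purely because of the weighting choice.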
Graduates are (only) as “good” as the institution where they graduated. Such perceptions can only serve to make rankings even more influential among competing institutions. On the one hand, while quantitative data does not actually reflect the complex nature of teaching and learning, qualitative judgments of quality through surveys are also imperfect. When assessing the quality of student outcomes, judgments might vary from academic to academic working for the same institution—and may differ even more among academics working in the same discipline at completely different institutions. Comparative judgments in degree programs are largely based on tacit values and assumptions [29: Roger Brown, Comparability of Degree Standards? (Oxford: Higher Education Policy Institute, 2010)]. In the UK, for example, research has suggested that up to fifteen percent of students would have obtained a different grade classification if they had attended another institution [30: Harvey Woolf and David Turner, “Honours Classifications: The Need for Transparency,” The New Academic 6, no. 3 (1997): 10–12].

The QS and the Times Higher Education World University Rankings both use reputational surveys, but have often been criticized for their lack of data transparency and quantifiable metrics. Many see surveys as easier ways to ‘game’ the system. For the QS, universities are asked yearly to submit eight hundred new names that the QS can solicit to participate in the survey—four hundred academic contacts and four hundred employer contacts. As of the 2015/2016 academic year, the number of names in that database has risen to over 75,000 academics and employers. Of course, institutions likely submit only candidates that serve their interests and are likely to rank them highly in the survey. Specialized institutions can provide eight hundred names within a narrow field or discipline, whereas a comprehensive university has to submit eight hundred names across twenty or more disciplines. It follows that institutional decisions about which names to include could be motivated by shifting priorities and the need to boost or safeguard rankings in one discipline or another. For disciplines like art and design, it is probably harder to get enough relevant names on that list if they have to compete with unrelated disciplines like environmental science, medicine, engineering, technology, and so on. What does the parent institution wish to make itself known for?

The QS first published its discipline-specific art and design ranking in 2015. The QS currently restricts respondents to academics and employers. In its overall university-ranking tally, the QS draws 40% from academic survey results and 10% from employer survey results (as indicated in Figure 1). Unlike other QS subject rankings, no research citations contribute to the art and design department rankings. The impact of art and design research is notoriously difficult to gauge. Gemser and de Bont, for example, found distinct differences in citation impact after publication in design-focused journals versus design-related journals [31: Gerda Gemser and Cees de Bont, “Design-Related and Design-Focused Research: A Study of Publication Patterns in Design Journals,” She Ji: The Journal of Design, Economics, and Innovation 2, no. 1 (Spring 2016): 46–58].
As a result, publication citation data for art and design does not necessarily provide a good indicator of research performance, especially from Scimago and Leiden, which collect data from Scopus (Elsevier) and Web of Science (Thomson Reuters). Figure 3 provides the ranking of the top fifteen institutions listed on the QS Art and Design subject-specific ranking for 2016. It also shows their 2015 ranking, as well as the overall institutional ranking on the QS, Times, and ARWU rankings. There are some stark contrasts between the subject-based ranking for art and design and the overall institutional rankings, largely because of the different metrics used and the emphasis on research citation impact in the overall rankings. Over half of the top fifteen art and design institutions do not feature at all in the overall university rankings.

To the wider art and design community that includes academics and employers, the ranking of most institutions on that list is probably not a surprise, but can we genuinely say that those institutions are better than the institutions ranked between 15 and 50, or indeed below 50? In reality, what is the difference in quality between an institution ranked 20th and one ranked 100th? One might even argue that some of the institutions listed in the QS Art and Design top fifteen do not house an ‘art and design’ department proper. MIT, for example, offers a course in Integrated Design Management and has an extremely important School of Architecture—ranked number one on the QS subject ranking for Architecture—but it does not have a school of art and design, nor does it offer courses in the disciplines of fine arts or design per se. Some institutions may have fine arts departments, or possibly even architecture, and as a result, reputations among the communities of art, design, and architecture have become blurred. Arguably, the quality and character of art, design, and architecture courses, and the perception of what constitutes quality within each of those sub-communities of practice, would vary enormously. Perhaps art and design should be separated into two distinct disciplines, and also clearly segregated from architecture? How, and using which criteria, does one judge the quality of any interdisciplinary or collaborative teaching, practice, and research that takes place?

Even within the discipline of art and design, there is disagreement regarding quality indicators. Most public higher education systems have their own performance rankings and league tables, whose character and criteria for assessment vary considerably. In the 2014 UK Research Excellence Framework (REF) assessment for art and design, Reading University was ranked first, followed by The Courtauld Institute of Art—neither of which featured in the QS Art and Design rankings. The Royal College of Art—first in the QS overall ranking—and Reading University tied for nineteenth in the UK REF. Neither ranking system is “correct” in any objective sense of course, but institutions are trying to do as well across as many different rankings as possible—considering the differences in criteria they are seeking to satisfy, there is a sense that many institutions and departments must try to be all things to all people. A further analysis of the top performing UK institutions listed on the QS Art and Design ranking in comparison with other UK performance tables can be seen in Figure 4.
The table illustrates just how different the performance of the same institution might be, depending on the methodology of the league table or ranking system. Since even more ranking systems are likely to appear over the next decade, the problem of appealing to such disparate criteria and assessment methodologies looks like it might be here to stay. Since the ARWU started in 2003, over twenty global rankings for higher education have emerged, and each year more are introduced. Research and reputation will continue to play a large part in the overall performance of international rankings. At the national level, however, measures like student satisfaction and graduate employment, as well as research performance, are likely to have a greater influence. High-quality teaching and learning and high-quality research, no matter how they are measured, are the goals of each institution. Higher education institutions must find their own paths, and schools of art and design must navigate along them without losing sight of their own objectives. As Boyle and Bowden say, “quality is never attained in an absolute sense; it is constantly being sought.” [32: Patrick Boyle and John Bowden, “Educational Quality Assurance in Universities: An Enhanced Model,” Assessment & Evaluation in Higher Education 22, no. 2 (1997): 111]

Publisher Copyright: © 2016 Tongji University and Tongji University Press
PY - 2016/9/1
Y1 - 2016/9/1
N2 - Rankings in higher education are now common, but do they mean anything? Can they accurately reflect the quality of an institution? University rankings, while imperfect, serve as a proxy for comparative measures of quality. This paper begins by providing a philosophical and historical profile of the notion of “quality,” considers what might constitute quality in higher education, and examines how rankings specifically convey this impression for the disciplines of art and design. The paper illustrates the wider role played by rankings in the highly competitive international higher education sector by exploring the various types of rankings, their methodologies, and the criteria they use to measure institutions. It highlights how different rankings measure different research and teaching activities, and the various tensions that can arise across disciplinary boundaries; among institutional and departmental priorities; in research, teaching and learning; and across national and international dimensions within the fields of art and design when rankings compare unique offerings quantitatively.
AB - Rankings in higher education are now common, but do they mean anything? Can they accurately reflect the quality of an institution? University rankings, while imperfect, serve as a proxy for comparative measures of quality. This paper begins by providing a philosophical and historical profile of the notion of “quality,” considers what might constitute quality in higher education, and examines how rankings specifically convey this impression for the disciplines of art and design. The paper illustrates the wider role played by rankings in the highly competitive international higher education sector by exploring the various types of rankings, their methodologies, and the criteria they use to measure institutions. It highlights how different rankings measure different research and teaching activities, and the various tensions that can arise across disciplinary boundaries; among institutional and departmental priorities; in research, teaching and learning; and across national and international dimensions within the fields of art and design when rankings compare unique offerings quantitatively.
KW - Art and design
KW - Higher education
KW - Quality
KW - Rankings
UR - http://www.scopus.com/inward/record.url?scp=85056002710&partnerID=8YFLogxK
U2 - 10.1016/j.sheji.2017.01.001
DO - 10.1016/j.sheji.2017.01.001
M3 - Article
AN - SCOPUS:85056002710
VL - 2
SP - 243
EP - 255
JO - She Ji
JF - She Ji
SN - 2405-8726
IS - 3
ER -