Oceanography | The Official Magazine of The Oceanography Society
Volume 33, No. 1, Pages 92–93

Open Access

THE OCEANOGRAPHY CLASSROOM • The Bureaucratic Oaf

By Simon Boxall  
Full Text

Universities worldwide have changed considerably over recent decades, largely in response to a significant growth in the student body and a relative drop in both teaching and research funding. That these institutions have become more accountable and less autonomous could, on the one hand, be welcomed. On the other hand, it suggests that we have lost sight of the main goals of excellence in both teaching and research that produce the best outcomes across the subject spectrum, as well as the best leaders and researchers for tomorrow. It also starts to impinge on academic freedom and, in part, on academic integrity.

In the latter half of the twentieth century, universities grew based on the quality of their outputs, on peer-reviewed publications, on their general reputations. Academia was generally self-regulating—a free market economy. For many countries such as the UK, funding came from government to support students and some areas of research. Students made choices based on institutions’ reputations, and these choices were largely market driven. It is fair to say that this was not an ideal environment for students when compared to today—sink or swim comes to mind. Getting feedback was a treat.

Internally, the administration chain was short—helped in part by lower numbers of both academic staff and students compared to today. Our Department of Oceanography had a Head of Department’s secretary, a departmental secretary, an assistant, and a part-time finance administrator. The system worked, and one never felt overawed by paperwork. Keep in mind that these were pre-desktop computer, pre-mobile phone days. The secretarial staff typed letters (if they could read the academic’s writing), handled phone calls, and often kept calendars for academics. Centrally, within the university, administrators oversaw examinations, finance, timetabling (done manually), admissions, and other key functions, and they worked closely with, and in support of, academics.

Over recent years, universities have undergone repeated reorganizations. Every few years there is a major shuffle of support staff with a view to “rationalizing.” Each restructuring seems to actually increase the number of support staff, while also increasing the bureaucratic load on academics. Typing my own emails is a time saver compared with spending time helping our secretary decode my own unique form of Sanskrit. However, knowing how much I have in a budget is now an art form. Once I could wander along the corridor to our financial administrator and ask a simple question like: “How much money is left in my Impact of Seawater on Chocolate grant, and will it cover staff costs to the end of the year?” Now I am told that admin staff can no longer tackle such trivia and am steered toward “Agresso”—an apt name for our automated finance system. As with most modern finance systems, a PhD in oceanography is no match for the machinations of the software developers who dream up these programs. I suspect they include many frustrated Dungeons & Dragons players, keen for us to enter portals with magic hidden buttons and a strange language. Make an error and you will have 300 weeks of rental cars debited to your account and have to sack half of your research staff to make up for it. The same goes for many other aspects of university life today. As for timetabling—the system has become so convoluted that some education programs are now driven by the complexities of timetables rather than by educational imperatives.

I hasten to add that the administration staff in our own school at Southampton are superb, and we spend many hours working with them trying to fathom the latest challenges from on high or from central government. However, it does all mean that more time is spent on administrative goals than on the core roles of teaching and research.

We then have external factors to deal with. The first of these is student centered and comes from “Good University”-type guides. In the UK, we are driven by reviews such as The Guardian or The Times Higher Education guides. A series of metrics determines which courses and universities rank highly in these “league tables.” We are judged on the level of qualifications our students come in with (the higher the better), our staff-to-student ratios, our employment rates, and how much we spend on each student per course—the list is extensive. Senior management drives academics to improve the ranking position, and a rise of two or three places can cause great celebration—a slip can be seen as abject failure. But these are not always great indicators of whether a course would suit a particular student, and there are plenty of quirks in the system. On the one hand, as an academic admissions tutor, I am pressured to bring in as many A-grade students as possible to improve the league table positioning. At the same time, we are under pressure (and rightly so) to be inclusive and encourage applications from students who have less advantaged backgrounds or come from low-achieving schools and colleges. The value-added benefit that a student might receive from higher education doesn’t seem to matter much to these league tables.

Then, there are the hoops set by government. For the UK, there are a number of assessment exercises, including the Teaching Excellence Framework (TEF) and the National Student Survey (NSS). The first of these is a series of metrics the UK government uses to determine the quality of the educational experience universities provide. I am a great believer in ensuring good quality, but some of the metrics used are flawed. One example is employment. It is a good indicator, but it only measures whether graduates are employed six months after graduation, not what that employment is. In oceanography, many of our students (over two-thirds) attain careers in the marine science sector, compared with less than 30% for most other degree subjects. However, the nature of our subject means that our students often take a gap year after graduation, some volunteering for conservation programs. These students count as unemployed. In our 2019 cohort of graduates, five out of eight of my tutees decided to go off together and explore. They all had first class degrees, and four now have postgraduate posts starting in September, but because only three of the eight were in work at the six-month mark, my 2019 tutees show an official employment rate of just 37.5%. We spend more time trying to improve our metrics than improving the true educational experience.

The NSS is student led and carried out shortly before final exams and graduation, usually when students are most stressed. However, there are so many student surveys in higher education that by the time students reach the end of their degrees, they are surveyed out. For a while, our university introduced mid-semester surveys for each course in addition to the end-of-semester and overall year-end surveys. The average student was encouraged to complete over 17 surveys per year—it is no wonder they never had time for all of those essays! As a result, by the time they get to the end of their degrees, the ones completing the NSS are either addicted to surveys or had a bad experience around the time the NSS opened and want to vent their anger—TripAdvisor is primarily made up of these two scenarios. It is of note that, generally, the higher the response rate for a course, the higher the ranking.

The UK and the US have equally strong concerns about ensuring high-quality teaching, recognizing that in the most prestigious research-led universities, teaching can play second fiddle to research. A number of very good reports and guides have been commissioned to determine what good higher education should look like and to identify which measures really count in improving outcomes. One example is a series of articles edited by Haras et al. (2017) that was commissioned by the American Council on Education. The recommended approach is far more carrot than stick in terms of getting education its proper recognition in the wider university system.

Teaching isn’t the only aspect of an academic’s university life that is being buried in paperwork and performance measures. Performance-based Research Funding Systems (PRFS) have been adopted by many countries. In Europe, these have tended to be based around institutional performance reviews, while in the United States, they are driven by market forces based on reputation and the quality of output (both in terms of citations and commercial success). Sweden considered taking a more centralized approach with a program called FOKUS (Research Quality Evaluation in Sweden) but concluded that it tended to stifle innovation and dropped the plan. In the UK, however, the Research Assessment Exercise (RAE) was introduced in 1986, involving panel evaluation and peer review of research-led departments in universities throughout the country. While the concept was well intentioned, it is not surprising that the intellectuals who populate universities played the system strategically. It was later renamed the Research Excellence Framework (REF), probably to try to confuse those pesky academics. UK Research and Innovation (UKRI), which brings together a number of quasi-government funding agencies with a budget of over £7 billion (US $9 billion), commissioned a report on the metrics used (Wilsdon et al., 2015). The exercise included an assessment of the external funding received for research, as well as the number of individual publications in higher-ranked journals, the impact the research achieved, and the quality of the research environment (how well does a university or department support its researchers and PhD students?). The resulting paper trail consumes a full-time academic staff member, significant central administration, and substantial time from everyone else that could otherwise go to research or teaching. Wilsdon et al. (2015) point out the danger that the nirvana of academic freedom, which attracted many of us into universities, becomes a box-ticking exercise chasing flawed metrics.

That eventual retirement looks ever more enticing—maybe then I can get some science done.

Citation

Boxall, S. 2020. The bureaucratic oaf. Oceanography 33(1):92–93, https://doi.org/10.5670/oceanog.2020.114.

References
Haras, C., S.C. Taylor, M.D. Sorcinelli, and L. von Hoene, eds. 2017. Institutional Commitment to Teaching Excellence: Assessing the Impacts and Outcomes of Faculty Development. American Council on Education, Washington, DC, 98 pp., https://www.acenet.edu/Documents/Institutional-Commitment-to-Teaching-Excellence.pdf.

Wilsdon, J., L. Allen, E. Belfiore, P. Campbell, S. Curry, S. Hill, R. Jones, R. Kain, S.R. Kerridge, M. Thelwall, and others. 2015. The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. Technical report for UK Research and Innovation, 178 pp., https://re.ukri.org/sector-guidance/publications/metric-tide/.
Copyright & Usage

This is an open access article made available under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution, and reproduction in any medium or format as long as users cite the materials appropriately, provide a link to the Creative Commons license, and indicate the changes that were made to the original content. Images, animations, videos, or other third-party material used in articles are included in the Creative Commons license unless indicated otherwise in a credit line to the material. If the material is not included in the article’s Creative Commons license, users will need to obtain permission directly from the license holder to reproduce the material.