Granular, objective and comparable data analysis for colleges
[Extracts taken from a March 2021 interview]
Why do you benchmark?
The reason we undertook benchmarking is that the college was formed from a merger of two colleges in 2017. We had various plans in place at that point, but we've struggled financially since the merger, with some pretty substantial reductions in student numbers leading to substantial reductions in income. We've undertaken a range of successful cost reductions, which firstly responded to the merger and then focussed on curriculum delivery efficiency - the merger was predicated on two similar-sized organisations coming together with a full integration, keeping the curriculum but making savings in management and back office - so we delivered that pretty much in line with the plans. We then undertook focussed work designed to increase our average group size.
Taking stock at that point, we felt we needed some proper objective comparisons to inform what we do about the fact that the college was not delivering a sustainable financial model. We had taken some generic, general action, but it wasn't working - so why was that?
We had some views as to which areas of the college might be inefficient, but we wanted some detailed comparisons and some granularity around those different areas. So, we were beginning to look at getting some objective external information. Really we were saying, “We know we've got to make savings, but it's not obvious where we're being financially outperformed by the sector. We need some proper insight into where we're not meeting those same levels of efficiency.”
How does benchmarking help with KPI setting and ‘difficult conversations’?
For us, it was about picking some specific areas where we’d ended up with our structures set up in a particular way. A specific example was our technicians. We had a view that we had more technicians than would be comparable, but our curriculum was saying they were all essential. So, it was really about saying that a college of this size with this cohort typically operates with ‘x’ number of technicians; we've got ‘y’ - it's a lot more. Whatever the reason we've got to where we are now, it's not sustainable, and therefore here are some objective comparisons that we can use to inform decisions and change something about that.
And it takes it away from being an opinion. The depth and the reputation of the Tribal work - the fact that we're able to say this is a well-respected piece of work done for the sector - means it's not just us saying, “the college can't afford that”. So it helps take it away from being me saying, “You've got too many people in IT” to, “This is generally what the sector has.”
As an example, when reviewing IT, we were able not just to look at the overall department, but to break it down into desktop, infrastructure, developers and so on. The way the report was structured and set up - it's Tribal’s own activity cost model - made it very straightforward to translate what was in it to what we have in our college.
Ultimately this is a tool designed to do a very specific task, and it does it in a very helpful, clear, objective way. What we’ve got is some helpful comparison and granularity in terms of how we’re operating financially as an organisation, to support any of those detailed conversations.
The FE Benchmarking Team is on hand to discuss the outputs of the benchmarking exercise, and how it can provide the valuable insight and granular, objective data analysis needed to inform decision-making.