As universities expand, concern about falling standards is inevitable. Falling standards are not inevitable, but concern is. And we would hope that universities, before they expand, put measures in place to ensure standards are maintained: we need to be able to rely on the quality of graduates. They are going to teach our children, nurse our sick, prepare our tax returns.
But universities and government are rightly concerned about falling standards even before the next big expansion. This is because the secret universities never tell is that standards have been falling for some time - not in all universities, but in plenty.
That they won't say this is a problem in itself. The government and the public deserve to know when there is a problem, and government needs at least the opportunity to provide the funding to fix it. Naturally, the reason universities won't admit that standards are falling is that it would be bad marketing. So academic standards and the reliability of graduates are sacrificed to universities' ability to attract fee income. If we're honest, everyone knew this would happen as the whole system was gradually commodified. Everyone, that is, except the few who were convinced a bit of market competition would drive standards up: it has clearly achieved the opposite.
In this system the massive mistakes seem only to be outdone by the ludicrous solutions. We have falling standards because universities lower entrance scores to attract fees, and because 50% of teaching is done by casual staff who have no capacity to review curriculum and don't get paid enough to provide the additional support students need. Those with more permanent jobs are worked so hard they can't fix the problems either, and fewer and fewer of them have enough job security to feel either motivated or safe enough to make a difference.
But instead of fixing the causes of falling standards, the discipline with the relentless desire to measure things decided to define standards. The standards project might be a good one, I don't know, but it sounds to me like a good way to make sure knowledge doesn't progress or change for a while.
Another example. The need to quantify everything in sight has meant academics are promoted on the quantity, not the quality, of their publications, so there end up being lots of crap ones. But instead of fixing this simple problem, a thousand additional problems are created by ranking the journals (rank them one year and by the next year the ranking is true - and then the real trouble starts). Instead of valuing new knowledge, surely the purpose of research, we ask universities to keep the knowledge they pursue within areas of research defined by numeric codes, and require academics to track every little thing they do against those codes. All this instead of getting on with producing new knowledge that doesn't have a code yet.
It is enough to make you tear out your hair. When the solutions are more terrifying than the problems, we should consider ourselves to be in very big trouble, not start squabbling about exactly how to measure the next immeasurable thing.
I'm considering making this my email signature: you don't get quality by measuring it.