‘Tutor proof’ tests: The failed experiment

This week, education secretary Justine Greening repeated the government’s line that entrance exams to the planned new grammar schools could be made ‘tutor proof’. Perhaps she hopes that if she repeats the phrase ‘tutor proof’ often enough, the pipe dream will become a reality. In the meantime, she may like to look at what the evidence shows.

In 2013, the introduction of the so-called ‘tutor proof’ 11-plus test in Buckinghamshire (Bucks) presented a unique opportunity to examine whether an aptitude test can assess ability in isolation from background. The test was developed by the Centre for Evaluation and Monitoring (CEM) at Durham University – apparently leaders in this field. If anyone could find the holy grail of the tutor proof test, they could.

So Local Equal Excellent set about using freedom of information requests to collect data on how outcomes for Bucks children were changing under the new test. Except that the figures we collected quickly showed nothing was changing at all – and that for some children things were getting worse.

Because prep (private) schools in Bucks provide intensive 11-plus test preparation, and because paying for a tutor requires a higher household income, a test that is genuinely resistant to coaching should reduce or eliminate these advantages. So we looked for four specific changes that we would expect to see:

Improved pass rates for state school children

The pass rate for Bucks state school pupils actually fell from 23% to 20% in the first year of the new 11-plus test. In each subsequent year, the gap between the pass rate for Bucks state school pupils and the overall pass rate has widened.

Lower pass rates for children at private schools

The pass rate for private school children was 56% in 2012. By 2016, after three years of the new test, it had risen to 60%. This means that a child from a Bucks private school is nearly three times as likely to pass the ‘tutor proof’ 11-plus test as a child from a Bucks state primary school.

Lower pass rates for children from more affluent homes

There continues to be a considerable difference between the pass rates for children from the most and least well-off areas of Bucks. In all three years of the new test, a child from Chiltern District was at least twice as likely to pass the exam as a child from Aylesbury Vale District.

Higher pass rates for children on Free School Meals

In 2014, of the 276 children on free school meals (FSM) who sat the new 11-plus test, just 10 passed – a shockingly low pass rate of 4%, against an overall pass rate that year of 30%. Or to put it another way, a child from a private school was more than thirteen times as likely to pass the new ‘tutor proof’ test as a child on free school meals.

(After 2014, the grammar schools in Bucks decided to stop collecting data on pass rates for children on FSM.)

So CEM’s new 11-plus test has failed against all key measures of fairness. Far from being ‘tutor proof’, it is faithfully reproducing all the social and educational inequalities that emerge in the first ten years of a child’s life.

If anything, the new test seems to confer even more of an advantage on children from certain backgrounds – which would suggest either that it is even more coachable than the previous test, or that more middle-class parents are investing in coaching than ever before. The vast tutoring industry in Bucks is booming under the new test, allowing children from better-off homes to continue to come out on top.

The credibility of selective education turns on the ability of the 11-plus test to accurately identify children with greater aptitude. The evidence from Bucks not only shows that it does not do so, but also indicates that it cannot – not even with the best efforts of a so-called market leader in test development.

Following our challenges, CEM have now withdrawn their online brochure claiming that their 11-plus test assesses ‘natural ability’. Perhaps more surprisingly, they have even conceded that they have no evidence that their test is in fact resistant to coaching. In a recent email they stated, “Without extensive and expensive research, it is not possible to quantify the impact of coaching on the results from our tests.”

The effort to produce a test that was more resistant to coaching was perhaps worthwhile on its own terms. But it has failed, and its failure inflicts profound unfairness on thousands of Bucks children every year. Ms Greening is building a policy on a foundation of empty claims.