Project Blueprint: 'Not sufficiently robust'
The ³ÉÈË¿ìÊÖ Office must have hoped no-one would notice. Quietly, without press release or even a statement, two weeks ago ministers published a long and eagerly-awaited evaluation report.
Why, one might ask, did they not want to trumpet the conclusion of a major research project which took six years of work and close to £6m of our money? After all, "Project Blueprint" had been hailed as the most important UK assessment of what works in trying to stop children taking drugs.
The answer is that the science had been so bungled that the research was almost useless. The key finding, in the report's own words, was that the evaluation was "not sufficiently robust" to show whether the programme worked.
What?
Yes, a major programme intended to assess whether a new way of preventing young people from using illegal drugs actually worked could do no such thing. It emerges that the researchers had failed to follow two of the most basic rules of such research:
• Make sure your sample is large enough
• Make sure you have a control group for comparison
The evaluation of the Blueprint approach was carried out in 23 schools in four areas of England, with another six local schools acting as a control. But it quickly became clear that the methodology was flawed, as the researchers themselves admit.
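To see why those numbers matter, consider a back-of-the-envelope power calculation. The sketch below (in Python, using the statsmodels library; the 80% power and 5% significance thresholds are the standard conventions of such analysis, not figures from the Blueprint report) treats the 23 Blueprint schools and six control schools as the units of comparison and asks how big an effect a design like that could ever hope to detect:

```python
# Back-of-the-envelope power calculation for a Blueprint-style design.
# Illustrative only: assumes a simple school-level two-sample comparison.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Smallest effect (Cohen's d) detectable with 80% power at the 5% level,
# given 23 treatment schools and 6 control schools.
detectable_d = analysis.solve_power(
    effect_size=None,   # the unknown we are solving for
    nobs1=23,           # Blueprint schools
    ratio=6 / 23,       # control schools relative to treatment schools
    alpha=0.05,         # two-sided significance level
    power=0.8,          # conventional target power
)
print(f"Minimum detectable effect: d = {detectable_d:.2f}")
# Roughly d = 1.3 - an enormous effect by the standards of prevention
# research, where programmes rarely achieve anything close to d = 0.5.
```

If anything, this flatters the design: pupils are clustered within schools, which shrinks the effective sample further still.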
So, at some point during the years of research, ³ÉÈË¿ìÊÖ Office ministers must have been told of the problem at the heart of the Blueprint project. It would appear they were asked for more money to make the findings robust, but refused. Rather than pull the plug on the whole evaluation, however, they allowed the process to struggle on in the hope that some broader comparisons might still be valid. It was to prove a vain hope.
The ³ÉÈË¿ìÊÖ Office is putting a brave face on this evidential disaster. In a statement sent to me, a spokesman said:
"The Blueprint programme has helped to raise and improve our understanding about the delivery of drug education in schools. The data gathered from Blueprint schools has been extremely useful in improving our understanding about what children and young people want out of drug education lessons."
However, even that arguably Panglossian statement admits:
"Blueprint has clearly highlighted some of the key challenges to delivery of evidence-based drug education."
Well, yes. The challenge for the £6m evaluation was to demonstrate whether this new system of drug education - going beyond the classroom to involve parents, local media, trading standards (to try and stop shops selling glue and aerosols to children) and other agencies - worked better than traditional methods. However you dress it up, the evaluation failed to answer that fundamental question.
³ÉÈË¿ìÊÖ Office statisticians are anxious to distance themselves from the affair. One source made it clear to me that the evaluation was commissioned by the drugs "policy team" rather than by science and research.
There is also anger and frustration among those working in the drugs prevention field who already feel that the ³ÉÈË¿ìÊÖ Office cares more about raids and treatment than it does about stopping people taking drugs in the first place.
Andrew Brown, co-ordinator of the Drug Education Forum, described the evaluation report as "hugely disappointing". He told me that "there was a great deal of expectation that we would get something really useful out of it", but that instead practitioners will have to rely on American research which may be of limited value in the UK.
Eric Carlin, who sat on the Advisory Group to the Blueprint project, has written about the episode on his blog. Do read the thread, which contains some conspiracy theories.
To some, this failure fits into a wider problem with ³ÉÈË¿ìÊÖ Office evaluations. You may recall the rows over the "Tackling Knives Action Programme" (TKAP) revealed by this blog earlier in the year.
On that occasion, as now, the absence of a robust control group was at the heart of the criticism.
And there is academic criticism suggesting ³ÉÈË¿ìÊÖ Office ministers have form when it comes to cherry-picking bits of evaluations they like and ignoring the bits they don't.
For instance, the introduction of Drug Treatment and Testing Orders (DTTOs) in 1998 is examined in a 2007 report:
"Before the DTTO was rolled out across England and Wales, a study of three pilot areas was commissioned which concluded 'we could hardly portray the pilot programmes as unequivocally successful' (Turnbull et al., 2000: 87). The response in terms of policy was typical of the 'farming' mechanism. The negative findings were not publicised and the roll-out went ahead."
It is a similar story with another ³ÉÈË¿ìÊÖ Office plan - the "Reducing Burglary Initiative". Academics have argued that the episode "illustrates what might happen when responsibility for validating policy - that is, for establishing 'what works' - is placed in the hands of (social) science, but the evidence produced is not, apparently, congenial to the particular 'network of governance' that is responsible for the policy".
If there is an upside to this story, it is that the Blueprint evaluation has had to be honest and up-front about its limitations. Almost two years late and smuggled out though it may have been, the report suggests that statistical integrity is beginning to count for a bit more inside the ³ÉÈË¿ìÊÖ Office.