Sifting through the data mountain

By Nick Triggle, Health correspondent

Image caption: The regulator has analysed 150 different sets of data

The Care Quality Commission has gathered together a mountain of data.

A total of 161 hospital trusts have been assessed against 150 measures, from mortality and infection rates to errors and patient survey results.

Even taking into account the fact that not all the data was applicable to every trust (161 trusts assessed against 150 measures would otherwise give more than 24,000 data points), there are still well over 12,000 bits of data to wade through.

So how can patients navigate their way around it?

The first thing to say is that this is not a final judgement on hospital trusts.

That will come over the next two years as the inspection programme is rolled out.

Instead, the CQC has referred to it as a screening tool rather than a diagnosis: it is to be used to prioritise which trusts to visit first.

But that does not mean it cannot and should not be used by patients.

Foolproof

What the regulator has done is pull together all the key bits of information already available on performance and provided an analysis of which hospital trusts are furthest outside the expected range.

Until the inspection process is complete, it is perhaps the most comprehensive and informative assessment of what is happening in the NHS at the moment.

But that does not mean the exercise is foolproof.

That much is clear already.

For example, four of the trusts put into special measures earlier this year because of serious concerns about standards did not make it into the highest-risk category, band one. In fact, two (Burton Hospitals and George Eliot) ended up in band three.

All were subsequently bumped up to band one, however, because of the issues already identified.

When the final ratings are produced, it is likely that some of those judged low risk will get a bad rating, while some of the high-risk ones will be given a good or outstanding rating.

One reason for this may be that the different measures have not been weighted for seriousness.

For example, a trust with a high sickness rate or poor uptake of vaccination among its staff would get the same risk rating as another with a high mortality rate.
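
To illustrate the point, here is a minimal sketch of how an unweighted screen of this kind could work: every measure that falls outside its expected range adds one flag, whatever the measure, and the share of flagged measures decides the band. The measure names, thresholds and band cut-offs below are hypothetical and are not the CQC's actual methodology.

```python
# Hypothetical, unweighted risk screen: each measure outside its
# expected range adds one flag, so a high staff sickness rate
# counts exactly as much as a high mortality rate.

MEASURES = {
    # measure name: (observed value, upper end of expected range)
    "staff_sickness_rate": (5.2, 4.0),
    "staff_flu_vaccination_shortfall": (0.35, 0.25),
    "standardised_mortality_ratio": (115, 100),
}

def risk_flags(measures):
    """Count how many measures sit outside their expected range."""
    return sum(1 for value, limit in measures.values() if value > limit)

def risk_band(flags, total_measures):
    """Map the proportion of flagged measures to one of six bands
    (band 1 = highest risk). These cut-offs are made up."""
    proportion = flags / total_measures
    for band, cutoff in enumerate([0.5, 0.4, 0.3, 0.2, 0.1], start=1):
        if proportion >= cutoff:
            return band
    return 6  # lowest-risk band

flags = risk_flags(MEASURES)
print(f"{flags} measures flagged -> band {risk_band(flags, len(MEASURES))}")
```

Because nothing is weighted, a trust flagged only on staff sickness and vaccination would score the same here as one flagged on mortality, which is exactly the limitation described above.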

Accuracy

In some ways, that is understandable.

For example, a trust with high absence rates is likely to have a problem with morale and management. That is not good for patients.

The problem is that we just do not know yet whether it is an accurate way of risk profiling.

The regulator has promised to fine-tune its system as it goes along.

Of the first 18 trusts to be inspected, there is an even spread across the top two bands, the middle two and the bottom two.

That will give it an early indication of the accuracy of its methods.

In the meantime, patients can pore through the data themselves.

There is a summary page of the risks identified, setting out the areas for which the trust has a poor record.

It will be up to patients to decide what to do with the information and whether they want to exercise choice where they can.

But as one person who is heavily involved in analysing data for the NHS put it to me: "If you can avoid it, why would you go to a hospital where there is much more evidence of risk than another?"