Why predicting what you expect to happen adds value to research

  • 10th January 2019

Gareth Hooper, Health and Social Care Commissioning Manager, NHS Gloucestershire Clinical Commissioning Group, writes about how to come up with good hypotheses at the start of an intervention, so that you can effectively evaluate whether it’s been successful or not.

I’ve just booked a weekend in Amsterdam for early next year. Like many of us, I can research most aspects of my trip before I go: the ratings previous guests have given the accommodation, the most popular vegetarian restaurants, and other people’s reviews of what to see when I get there. Whilst there’s room for serendipity, I want assurance of what I’m going to get for my money and time. Health interventions should follow a similar process: we should have a pretty good idea of what our intervention is going to deliver for our money and effort.

The most fundamental step in any evaluation that seeks to quantify the impact of an intervention is having a clear and well evidenced idea of what you expect to see before you begin. This is your hypothesis. The final evaluation then compares what you actually saw with what you predicted would happen.

In this blog I want to talk about how we can come up with good hypotheses at the start of an intervention: hypotheses that are evidence based but also take account of the complexity and uncertainty that real world research has to deal with.

The importance of clarity and logic

“Fuzzy thinking can never be proven wrong. And only when we are proven wrong so clearly that we can no longer deny it to ourselves will we adjust our mental models of the world—producing a clearer picture of reality. Forecast, measure, revise: it is the surest path to seeing better.”
– Philip Tetlock, Superforecasting

Philip Tetlock is internationally renowned for his work on predicting world events, and I’ll come back to his work later on. This quote is a key starting point for ensuring clarity in the hypothesis. It’s not enough to say an intervention “will improve quality of life”: that can’t really be proven wrong, and you’ll never know whether your project delivered value for the money spent. Nor is it a sign of failure if your hypothesis is proven wrong. If the outcomes of the intervention differ from those expected, that adds to the body of knowledge for someone else who is planning a similar piece of work.

A hypothesis is best developed alongside a theory of change: a logical series of steps setting out how you think the intervention will work and why. For example: we know that age related muscle loss (sarcopenia) reduces muscle mass and function, which increases the risk of an older person falling over. It follows that if older adults engage in weight bearing exercise several times a week, their muscle mass and function will improve, reducing their risk of falling. Each step is evidence based, and the steps link together into a chain of reasoning from people doing more exercise to improved quality of life and lower healthcare use.
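To make the idea concrete, here is a minimal sketch in Python of that falls prevention chain written out as an ordered list of steps. The wording of each step is illustrative only, not taken from any real service specification.

```python
# Illustrative only: the falls prevention theory of change as an ordered
# chain of evidence based steps, each one leading to the next.
theory_of_change = [
    "Older adults attend weight bearing exercise sessions several times a week",
    "Regular exercise improves muscle mass and function (countering sarcopenia)",
    "Better muscle function reduces the risk of falling",
    "Fewer falls lead to better quality of life and lower healthcare use",
]

for number, step in enumerate(theory_of_change, start=1):
    print(f"Step {number}: {step}")
```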

Whilst trials, pilot projects and research are conducted to reduce uncertainty, this shouldn’t be a reason not to form an evidence based hypothesis before you begin. Even if the work you’re doing is ground-breaking, there will probably be some existing research to indicate possible outcomes.

Thinking slow

Developing a clear and well evidenced theory of change and hypothesis requires a lot of slow thinking. Here’s a question you may have come across before:

A bat and a ball cost £1.10 in total. The bat costs £1.00 more than the ball. How much does the ball cost?

(It has relevance, I promise! The answer is at the bottom of the page too.)

We’re often confronted with seemingly straightforward questions like this in healthcare: “do older people who exercise more have fewer falls?” The intuitive answer is yes. But there’s much more detail to consider: how old is “older”? What counts as exercise? And fewer falls than people who don’t exercise, or fewer falls than they had before they started exercising?

Daniel Kahneman, in his book Thinking, Fast and Slow, suggests that our quickly formed intuitive answers come from System One thinking, while our deep, deliberate thinking is System Two. System Two is what’s needed for complex problem solving; for example, what is 17 multiplied by 34? No calculators allowed! It’s important to use our System Two thinking and to avoid relying on hunches or intuition. Humans are great but fallible, and systematic bias in how we think about the world around us is one of our fallibilities. We can reduce that bias by taking the time to review the evidence objectively.

Going back to the example of exercising older people, the best way of predicting what will happen in our intervention is to look at previous research. A systematic review of the published literature would be a great starting point. Individual studies can give context, such as “people aged 65-85 who exercise more than twice a week have half as many falls as a comparable group who do not exercise at all” (this is made up for the purposes of the blog post only). Other papers might vary by cohort, exercise frequency or baseline risk of falling. The results could be combined into a meta-analysis (formally pooling results from multiple studies to summarise their aggregated findings) if you have the technical expertise, or you could simply use the range of reported effects to suggest a broad hypothesis: “we expect falls to drop by 25-50%”.
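As a rough sketch of that last option, here is how a range of reported effects might be turned into a broad hypothesis. The figures are just as made up as the ones above, and this is no substitute for a proper meta-analysis.

```python
# Made-up proportional reductions in falls reported by (hypothetical) studies.
reported_reductions = [0.50, 0.35, 0.25, 0.40]

# Take the spread of the published estimates as the bounds of a broad hypothesis.
low, high = min(reported_reductions), max(reported_reductions)
print(f"Hypothesis: we expect falls to drop by {low:.0%}-{high:.0%}")
# Prints: Hypothesis: we expect falls to drop by 25%-50%
```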

Generating a prediction about what you think will happen also allows a financial case to be made. If you know how many falls the target cohort has per year, and how many falls it’s plausible to prevent, the range of possible savings can be calculated and compared with the cost of the intervention.
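A back-of-the-envelope version of that calculation might look like the sketch below. Every figure is an assumption for illustration: the number of falls in the cohort, the cost of a fall to the system, and the cost of the intervention.

```python
# All figures are assumptions for illustration only.
annual_falls = 400                  # falls per year in the target cohort
plausible_reduction = (0.25, 0.50)  # from the broad hypothesis above
cost_per_fall = 2_000               # assumed cost to the system per fall, in pounds
intervention_cost = 150_000         # assumed annual cost of the intervention, in pounds

for reduction in plausible_reduction:
    falls_prevented = annual_falls * reduction
    savings = falls_prevented * cost_per_fall
    print(f"{reduction:.0%} reduction: {falls_prevented:.0f} falls prevented, "
          f"savings £{savings:,.0f} vs cost £{intervention_cost:,.0f}")
```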

Where the aim of the intervention is not to save costs but to improve something, say quality of life, the potential benefits can be compared against a willingness to pay threshold. For example, if the intervention is intended to improve the health related quality of life of people with a specific disease, you may be willing to pay £500 per unit of quality of life per person. If the evidence suggests the intervention will deliver two units of quality of life per person, it is worth £1,000 per person to you, so as long as it costs less than that you would expect it to be cost effective within your budget. If the evidence suggested the outcome was only 0.5 units of quality of life (worth £250 per person), you have an opportunity to rethink.
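Here is that comparison as a small sketch, using the £500 per unit figure from above and an assumed intervention cost of £750 per person (a number picked purely for illustration).

```python
willingness_to_pay_per_unit = 500  # pounds per unit of quality of life (figure from above)
cost_per_person = 750              # assumed intervention cost per person, for illustration

for expected_units in (2.0, 0.5):  # the two scenarios discussed above
    value_per_person = expected_units * willingness_to_pay_per_unit
    verdict = "expect cost effective" if value_per_person >= cost_per_person else "time to rethink"
    print(f"{expected_units} units: worth £{value_per_person:,.0f} per person "
          f"vs cost £{cost_per_person}: {verdict}")
```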

Avoiding myside bias

Philip Tetlock, who I mentioned earlier, set up the Good Judgment Project in the USA to find out just how accurate forecasters could be across a range of world events. One of his tips from the project is to surround yourself with a range of people who hold differing views when coming up with your forecast. This is a valuable exercise when looking at health interventions too. A room full of similar types of people invites confirmation bias.

Myside bias, or confirmation bias, is when everyone just ends up agreeing with each other because you’re all of a similar ilk. Ideally, when setting hypotheses you should have a couple of analysts, a couple of clinicians, a couple of project managers and a lay member in the room. Make every effort not to pick people you think will just go with the flow.

The key is to set a hypothesis that is clear and unambiguous: no fuzzy predictions, but a prediction that is evidence based and well debated in a room of people who are keen to get to the truth. If the intervention doesn’t bear out your hypothesis, you can re-examine your initial prediction or the intervention itself, rather than rationalising whatever you see.

So, that ball from earlier: it cost 5p. Well done to those who got it right. For those who didn’t, let this question be a prompt to slow your thinking down in future!