Judgement calls

Photos: Matthew Henry, on Burst

This webpage is for trust fundraisers with three or more years’ experience. Beginners should use this page instead.

This page is about what to do if you feel your judgement calls aren’t great:

1. “I don’t really know what the cut-off point is for when it’s worth doing a trust”

Sometimes the issue is that people don’t appreciate the maths. The expected value of an opportunity is the chance of getting a grant multiplied by the size of grant you think you’d get (a rough worked example follows below). In that context, do you really have the time to fit this one in, or not?
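As a minimal sketch of that sum – the trust names, chances and grant sizes below are invented for illustration – it can be as simple as:

```python
# Rough sketch: expected value of a trust opportunity.
# expected_value = chance_of_grant * likely_grant_size
# The names, chances and grant sizes are invented for illustration.

opportunities = [
    # (name, estimated chance of a grant, likely grant size in £)
    ("Trust A", 0.30, 10_000),
    ("Trust B", 0.10, 50_000),
    ("Trust C", 0.05, 2_000),
]

for name, chance, size in opportunities:
    expected_value = chance * size
    print(f"{name}: expected value ≈ £{expected_value:,.0f}")
```

The precision is spurious – the chances are guesses – but it makes plain that a long shot at £50,000 and a long shot at £2,000 are very different uses of a day’s writing time.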

A deeper issue that I used to have was recognising which trusts really were good targets. An effective solution I found for that was to take a day away from the front line and “reverse research” your existing funded trusts. Go through them, read all the criteria and grants lists for those trusts and think: what do they have in common?

What you’ll find will vary from charity to charity. However, nearly all of your funded trusts might clearly have your work within their criteria, and a lot might have made at least a few grants to your field, with plenty of variation in the grants. Of the remainder, you might find that many have been gradually accumulated since the year dot and the others are special cases, e.g., a personal contact. You may find a high proportion of income has come from warm or lapsed funders, but that there’s X potential for growth based on past experience.

Anyway, whatever: you’ll find what you find. So, what do you NOW think of the funders you’ve been looking at, based on what’s worked in the past?

2. “My judgement calls are still a bit ropey”

There are two kinds of mistakes that people make with judgement calls:

a. Biases

A book by Chip and Dan Heath, Decisive, may help. It focuses mainly on biases. They mention four mistakes people often make in their judgements:

(1) Narrow framing. People see things in a binary, “yes”/“no” way. There are often more than two options. You might broaden your perspective by asking:

  • Is there a better way?
  • Is there something else I could do?
  • How else could I use the time/resource?

(2) Confirmation bias. If I had to guess which bias causes the most problems for trust fundraisers, it would be confirmation bias. We only have so much actual evidence as to what’s going on, which makes it hard to challenge our takes on the world. I’ve seen trust fundraisers with adamant, “true believer” views of things that other trust fundraisers wouldn’t accept at all. (A few of the biggest candidates: How long should a proposal be? Do proposals have to be for new work? How important are emotions in trust fundraising?) We have our way we’ve done things. It’s working to the extent that it’s working. So, it must be right.

When people have the opportunity to collect information from the world, they are more likely to select information that supports their pre-existing attitudes, beliefs and actions. If you notice that you always choose to pursue, or always choose to pass on, this kind of trust opportunity, maybe there’s a mistake there. These are some ways to reality-test your assumptions:

  • Ask probing questions, especially of people who have an incentive
  • Zoom out to look at the big picture and zoom in on some of the detail
  • Make small bets and test things out, rather than going all in

(3) Short-term emotion. For example, people’s views of what’s possible are skewed by pressure to meet targets. A good way to get distance, if you think you’re too emotionally caught up, is to ask the 10-10-10 question: how will you feel about this 10 minutes from now, how will you feel about it 10 months from now, and how will you feel about it 10 years from now? In some work scenarios you might want to shorten those so they’re all relevant – e.g., two years on – but the point is to develop a breadth of vision and some more emotional distance.

(4) Overconfidence. Overestimating how many proposals you can get out is a common thing I see. There are ways to prepare yourself for the possibility of being wrong:

  • Estimate the range between a better and a worse outcome
  • Try to get off autopilot, e.g., decide “in one month I’ll reassess”

It’s worth looking into the scientific method, as described in the work of the great philosopher of science Karl Popper, where you form a series of conjectures, or hypotheses, about a situation and gradually build up evidence that supports or refutes your different conjectures. It’s a completely different approach from saying “I refuse to believe X, because Y happened and that disproves it” or (even worse) “I believe X, because last year Y happened, which accords with that view.” That’s a deeply unscientific way of talking – but you hear it all the time.

The positive thing about bias as a source of error is that, if you know the right answer, you can spot this consistent leaning away from the truth, and that may help you correct for it.

b. “Noise”

Daniel Kahneman, Olivier Sibony and Cass Sunstein refer to a different kind of error, whose effects are more scattergun rather than always leaning one way. That’s “noise” – where judgement calls vary between people, or within the same person at different times or under different circumstances. It’s like using a slightly faulty set of scales, where each time you weigh the same thing, you get a different result.

Whereas with bias, if you know the right answer you can adjust away from the direction of the error, with noise that doesn’t help, because the errors don’t lean in any one direction. However, say Kahneman et al, it’s harder to spot noise as a type of error, because we find it easier to think causally and attribute the mistake to a cause (“They’re biased in favour of optimistically expecting big results”) than to think statistically (“They messed up because they’re just a bit all over the place when it comes to big trusts”).

As with a lot of the articles on this site, I’m offering a summary of some ideas. If you want to dig much deeper, look into the book behind the ideas and learn a lot more.

So, if we suspect our judgements are noisy, we can do a “noise audit”. This can involve the following: everyone in your trusts team is presented with the same realistic problem, the kind they could encounter in their job, and asked a very precise question about it, e.g., the value in pounds of an opportunity. If their answers vary widely, the team’s judgements are noisy.
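If you did get the chance to run something like that, a minimal sketch of how you might look at the spread of answers – the names and figures below are invented – could be:

```python
# Minimal sketch of a "noise audit": everyone values the same opportunity
# independently, then we look at how much the answers vary.
# The names and figures are invented for illustration.
import statistics

estimates_in_pounds = {
    "Fundraiser 1": 8_000,
    "Fundraiser 2": 15_000,
    "Fundraiser 3": 5_000,
    "Fundraiser 4": 12_000,
}

values = list(estimates_in_pounds.values())
mean = statistics.mean(values)
spread = statistics.stdev(values)   # sample standard deviation

print(f"Mean estimate: £{mean:,.0f}")
print(f"Spread (standard deviation): £{spread:,.0f}")
print(f"Spread as % of mean: {spread / mean:.0%}")

# A large spread relative to the mean suggests noisy judgements,
# even before you know what the "right" valuation is.
```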

However, that’s not normally realistic for us, so it’s more likely that you’ll need to look at biases first, and only when those are ruled out can you assume it’s actually a noise issue rather than a bias one.

Having identified where there may be noise, the following practices – what Kahneman et al refer to as “decision hygiene” – can help you reduce it:

  • Recognising that the point of forming a good judgement is accuracy, not freedom to express yourself/your opinion. That there’s something objective “out there” that you’re trying to determine.
  • Resist premature intuition. Professionals make very quick decisions based on past experience, which is a key source of bias and noise. “Intuition need not be banned, but it should be informed, disciplined, and delayed,” Kahneman says.
  • ‘The single best advice we have in framing is broad framing. See the decision as a member of a class of decisions that you’ll probably have to take.’
  • If the situation is simple enough that it can be formulated as a rule/algorithm, then it’s more objective. (This has been criticised as potentially smuggling biases into your decision making, swapping one form of error for another. The issue is probably whether it’s really the right kind of situation.)
  • Obtain independent judgments from multiple judges, then consider aggregating those judgments. So, if you’re lucky enough to have a few people in the trusts team, ask them all for their opinions, independently of each other and with no collusion. Clearly there are issues with comparing people and so it would need careful set-up.
  • Can you get good advice?
  • Breaking the issue down into clearly defined components and assessing each of those separately, so that you can develop some precision, cutting out some of the room for loose judgements. An example would be a feasibility study, where you start with a clear checklist (a rough sketch of this idea appears at the end of this page):
    • Is the work outside the criteria of so many of the trusts we’re considering that there’s very unlikely to be enough money?
    • Is the work outside the policy of so many of the trusts we’re considering that there’s very unlikely to be enough money?
    • Is there a clear, serious risk of the project not delivering major outcomes?
    • Is the timescale for fundraising too short for the funders you are looking at?
    • Is the need compelling enough to be competitive with the other needs the trusts under consideration may see?
    • Are the benefits great enough to be competitive and value for money?
    • What evidence do I have that I can/cannot work with the Services team on a project like this?
    • … [other points]
    • Is there anything else that makes me think it wouldn’t work?

That’s not my experience of most trust fundraisers, who seem to stare at a project until the answer as to whether it’s a decent prospect intuitively comes to them.
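To make the checklist idea above a bit more mechanical, here is a minimal sketch – the questions are abridged from the list above and the answers are invented – of assessing each component explicitly rather than relying on one overall gut feel:

```python
# Minimal sketch: assess a feasibility checklist component by component,
# rather than forming one overall intuitive judgement.
# Questions are abridged from the checklist above; the answers are invented.

checklist = [
    # (question, True = a red flag for this project)
    ("Outside the criteria of too many of the trusts considered?", False),
    ("Outside the policy of too many of the trusts considered?", False),
    ("Clear, serious risk of not delivering major outcomes?", True),
    ("Fundraising timescale too short for these funders?", False),
    ("Need not compelling enough to be competitive?", False),
    ("Benefits not competitive / poor value for money?", False),
    ("No evidence we can work with the delivery team on this?", True),
]

red_flags = [question for question, is_red_flag in checklist if is_red_flag]

print(f"Red flags: {len(red_flags)} of {len(checklist)}")
for question in red_flags:
    print(f" - {question}")

# The count isn't a verdict on its own, but it forces each component to be
# considered explicitly before anyone reaches an overall judgement.
```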