The most effective way to gain supporter consent


What does behavioural science tell us about how to maximise opt-in rates? Kiki Koutmeridou shares the surprising results of her team’s recent testing around the consent issue

 

“So, you have the answer to the consent issue?”

“Yes, the answer is there is no simple answer. New data proves ‘the answer’ changes charity by charity.”

 

If you don’t measure failure, you can call anything a success. But it’s beyond doubt that our sector has failed to find the most effective way to gain supporter consent (if we define ‘failure’ as spending a lot of money to lose a lot more!).

 

Enormous charities, with near-infinite budgets for testing and innovation, have very publicly made the move to full opt-in. But new data proves that the ‘your tick saves lives’ approach they took performs worse than any other.

 

Trouble is, you only find that out if you test the right things in the right way. Only a handful of charities have. Here’s an example of one that got it exactly right.

 

Principles affecting supporter consent: the context

 

As a behavioural scientist, the challenge of getting an empirical and applied answer to how best to gain supporter consent fascinates me. As a fundraiser, the lack of robust testing shocks me. I’ve seen a lot of dangerously unscientific ‘research’ translated into fancy looking presentations. But I’ve seen nothing that comes even remotely close to being a solution.

 

So, last year I partnered with the Behavioural Economics Research Group at City University London to study the behavioural science principles affecting supporter consent. We tested 46,000 executions of principles. You can access the full results here.

 

These initial findings provided insightful guidance on generic winning principles. But, as we stressed at the time, there was always going to be an enormous difference between generic factors and organisation-specific ones.

 

And now we have overwhelming proof to back this up.

 

One example I can share is the work we’ve done with Crisis. Testing was done with existing supporters, both on winning behavioural science principles from our prior research and on new principles uniquely related to Crisis’s needs.

 

Most importantly, these principles were translated from theory into copy and design reflecting Crisis’s mission, tone of voice and brand.

 

It’s important to stress Crisis weren’t just testing the right things – they were testing in the right way.

 

Testing only one copy and design version of each principle is risky. The near-infinite choice of executions you’re faced with means there may be other, more effective ways that never get tested. Again, if you don’t measure failure, anything can be called a success. But it won’t be successful.

 

For this reason, we used the DonorVoice Pre-Test tool – a sophisticated, multivariate online testing platform. This allowed Crisis to not only run many thousands of A/B tests simultaneously, but to do so faster than they could have done with only a handful. This allowed us to compare not just multiple principles but also multiple copy and design executions of each principle.

 

This testing told us two crucial things:

  1. The relative importance of each theoretical principle or concept for Crisis
  2. The most effective way to execute each principle or concept in words or images, once again for Crisis.

 

What we tested

 

The table below describes all the behavioural science principles we tested.

 

[Image 1: table of the behavioural science principles tested]

 

In addition to that, Crisis wanted to test two different concept ideas. The first was their version of the supposedly ‘best practice’ “Your tick saves lives…” tagline. So, we included the headline “your tick can end homelessness”. The second concept was about the donor’s impact and the role they play. This was reflected in the headline “we can only do what we do because of supporters like you”.

 

So, what worked?

 

Let me start with a cautionary note: these findings are unique to Crisis, and what follows isn’t a recommendation for application. Rather, it’s intended to show how the power of the behavioural principles differs significantly when we go from generic to charity-specific copy.

 

What we found

 

DonorVoice’s Pre-Test tool reveals the relative importance of the attributes. For Crisis, the most impactful principles were Perceived Control, Conformity and Transparency.

So far, these results validate our previous findings. Perceived Control and Conformity were the two most important principles in our previous study too. Thus, we can begin to assume that giving donors control, and an image of a person, could influence consent. However, the most effective way to execute these two principles might still differ from charity to charity.

 

But, the above was the only parallel between this and our previous study.

[Image 2: ranking of the principles by relative importance for Crisis]

The order of importance of all other principles (shown on the right) was unique to Crisis, and didn’t reflect what we had found when we used a generic charity. 

It is also important to note that the headline was the least effective attribute of all – as a reminder, this wasn’t a behavioural science principle but one of the concept ideas that Crisis wanted to explore.

 

The second piece of insight our Pre-Test tool provides is which of the copy versions within each attribute/principle is the most impactful. The table below shows a summary of everything we tested. Each row is a principle and the different columns represent the different copy or design versions for each principle.  

The winning version of each principle is shown in bold.  

[Image 3: summary table of the copy and design versions tested for each principle, with the winning version of each principle shown in bold]

 

Most of the findings are self-explanatory, but there are a few things I should bring to your attention:

 

1. Looking at the first row, the headline, it becomes clear why it was last in the ranking order; having no headline was actually better than having either the “your tick” or the “donor impact” concept. I repeat. No message was better than either of these two messages.   

 

2. The above table doesn’t show exactly what we tested for the Perceived Control principle. In our prior study, we offered control over the frequency of all communications (weekly, monthly, yearly) and channel control. Channel control had performed better than the controversial – and scary – control over frequency. Good news!

 

But the question remained. Could we offer a type of control which would outperform the standard channel preference?

 

In the image below, you can see the three types of control we tested this time: Crisis’s consent statement offering the standard channel control, a statement offering control over donation frequency, and a third statement offering control over content as well as donation frequency.

 

Next to each type of control, you can also see some scores. These are preference scores. The higher the score, the higher the preference. In addition, they show magnitude of the effect; a score of 60 is twice as preferred as a score of 30.

[Image 4: the three types of control tested, with their preference scores]

 

Although offering donation frequency control is the least preferred option, combining it with control over content outperforms channel control. This means that charities may be able to differentiate themselves by offering control over their unique content. This finding warrants further exploration to find which content options are the most effective for each charity. It’s also worth exploring what the best way to combine these different types of control is.

 

3. Finally, we need to stress the importance of testing behavioural science principles that are customised to your charity. What works for Crisis might not be the most effective for your charity. For example, our previous research, using a fictional charity, found that the label “supporter of this cause” was the winning one. But, as shown in the image below, the label “you’re someone who cares about homeless people” was the clear winner for Crisis. In fact, the label “you’re a supporter of Crisis”, which had scored highest in our previous study, was the lowest-scoring label here.

 

[Image 5: preference scores for the identity labels tested with Crisis supporters]

Using generic language, or applying insight that was generated for someone else, could lead to sub-optimal performance. If Crisis had applied only generic insight, instead of testing their own customised copy, they would have risked losing contact with many thousands of people who’d otherwise have happily continued their support.

 

The only way you can be sure not to make that mistake is by testing the right things in the right way.

 

Kiki Koutmeridou is behavioural science strategist at DonorVoice. For more information about this case study and the methodology, you can arrange a free consultation with Dr Koutmeridou by emailing chulme@thedonorvoice.com

Gain further insight on gaining supporter consent by attending our one day conference 'Successful Fundraising Post GDPR' on 6 December. 
