TEST RESULTS: Strengths and Weaknesses of the Identifiable Victim

August 24, 2018      Kevin Schulman, Founder, DonorVoice and DVCanvass

Stalin said, “If only one man dies of hunger, that is a tragedy. If millions die, that’s only statistics.”

Mother Teresa said, “If I look at the mass, I will never act. If I look at the one, I will.”

Both were talking about scope insensitivity (yes, one to save lives, one to take them, but otherwise the same phenomenon). The idea is that we are so incapable of grasping scope that we simply don’t process numbers beyond a certain point. This leads to some interesting, and disheartening, results. For example, in 2010, the humanitarian aid response to the Haiti earthquake (which affected 3 million people) was more than $3 billion; the flood in Pakistan (which affected an estimated 20 million people) received only $2.2 billion. (study here) That works out to roughly $1,000 per affected person for Haiti versus about $110 for Pakistan. Similarly, people donate roughly the same amount to save 2,000, 20,000, and 200,000 birds.

The flip side is the identifiable victim effect: telling one story at a time, absent statistics, has the greatest impact. There is a famous study in nonprofit marketing showing that an appeal telling the story of a single child does better than an appeal presenting statistics about the general problem of poverty in Africa. Even when you pair the personal story with those statistics, the personal story alone does better.

Even more oddly, the story of one boy did as well as the story of one girl; both did better than the story of the boy and the girl together. The study is here and it is both fascinating and disheartening.

So as part of the DonorVoice / DMA Nonprofit Federation test, we wanted to see what type of story had the greatest impact.  The results were, well, complicated:

#1 was combining statistics and story:

And #2 was just statistics:

And #5 – last place – was the personal story:

What can explain this?  We’ve seen this pattern in other Pre-test Tool and live results, and it has to do with the line “more than one every hour.”  Every time statistics have performed well (or even OK), it has been when they were boiled down to “one out of six” or “one every hour” or some other form of “one.”  That is, when the statistics support an individual narrative, they do better; when they don’t, they don’t.

This is just a hypothesis; we don’t entirely know the mechanism at work here.

Returning to the order, #3 was a listing of mission areas, similar to what you’d see in a lot of nonprofit mailings:

And #4 was a donor testimonial:

Personally, I thought this last one was going to do better.  One of the fun parts of testing generally, and of using the Pre-test Tool in particular (because of the number of variations), is that there’s always a surprise.

This “why” of giving is likely the test result most specific to the organization doing the asking and to its donor file, so take from this what you will.

Nick

8 responses to “TEST RESULTS: Strengths and Weaknesses of the Identifiable Victim”

  1. Robyn says:

    Interesting article with valid points, but why randomly insert an image of a dead child without any context whatsoever? No explanation. No backstory. Not even a caption. This is an example of how *not* to use an individual to represent a larger issue.

    Yes, this image had become the rallying cry to help give a face to the refugee crisis and inspire action… but you don’t mention that anywhere in your article. It feels creepy and disrespectful.

  2. My apologies. When I wrote this, I had originally used that image as an example of the identifiable victim effect and discussed the recent Slovic et al. study on both the effect and how quickly it decays. I removed that discussion because the study about the two children was more relevant to the post, but forgot to remove the picture.

    For those who want greater context, the Slovic et al. study shows that even the effect of a single victim does not last long.  The study is up at http://www.pnas.org/content/114/4/640 and is called “Iconic photographs and the ebb and flow of empathic response to humanitarian disasters.” The researchers looked at the specific case of the photo of Alan Kurdi (initially reported as Aylan, but apparently Alan is correct).

    Looking at Google Trends for Syria, refugees, and the boy’s name, you see that within one month they are back to almost baseline. The same is true of donations to the Swedish Red Cross over that time.  (That’s for overall donations; the graph for donations to Syria alone is even more stark.)

    As the authors state, “this form of empathy quickly faded and donations subsided, even though the number of Syrian refugees seeking asylum in Sweden was relatively high and consistent throughout the period that we sampled (36,000–40,000 per month).”

    Again, my apologies for springing it on you without that context, and thank you for giving me the opportunity to rectify that.

  3. Kat says:

    I’ve had better luck breaking the stats down into something the donor can grasp, like the “one every hour” thing, too. But I can’t help wondering: what would happen if we tested those statistics against a GOOD story, rather than the totally non-empathetic, generic non-story that came in at #5? The stats at least had specificity going for them.

  4. Because I wanted to look at how this type of story is actually being used, I took the text largely from the storytelling section of a large nonprofit’s website, then swapped out details (e.g., Sophia is my daughter’s name, Megan is my wife’s name, the discovery wasn’t at school, etc.).

    So it may well be a non-empathetic, generic non-story, but it’s also representative of the sector’s storytelling…

  5. Kat says:

    I hear ya, Nick. But if our contention is that stats don’t work because people can’t picture them, shouldn’t we be testing against something people CAN picture? If the point is “they don’t give to what they can’t picture,” then I can picture more from those stats than I can from that story, because at least the stats are specific; ergo, the stats win. I feel mildly empathetic for the mom even though I don’t know the actual story, so that pulls a little better than the story but still worse than the stats, because I still can’t picture anything.

    I spend a lot of time with my clients trying to explain the difference between a story and a collection of facts, because a large part of the unemotional “storytelling” being done in fundraising today seems to stem from an inability to distinguish one from the other. The “story” in your test is not actually a story; it’s a collection of facts.

    I think it’s Tom who essentially says of studies that show people don’t want more direct mail or emails, “Well, of course they don’t want more BORING direct mail or emails, and that’s all we’ve given them to choose from.” This feels like kind of the same thing, to me. I feel like all we’ve really done here is ask, “Which is better, more specific or less specific?” Not surprisingly, more specific wins.

  6. Kat, that is a lot of subjective opinion; there’s nothing there to argue for or against, merely to point out.

    The old trope that people don’t grow tired of fundraising/storytelling, just of bad fundraising/storytelling, would argue for blasting out as many appeals/stories as possible. After all, if it is really compelling and making them feel good, why would there be any downside to more, more, and more?

    The data doesn’t support this: there are massive diminishing returns, irritation is very real and largely independent of the quality of those comms, and the negative impact on retention quickly drowns out the tiny, incremental, campaign-level gains.

    What’s more, the Lake Wobegon effect is on full display, with everyone thinking their comms/stories/appeals are above average. A very well done, cross-sectional study (Frank Dickerson) of fundraising comms, one that objectively evaluates them on their storytelling and narrative qualities, shows that the sector’s communications, as a whole, read more like academic abstracts than well-told stories.

    To Nick’s point, this test uses a very typical example of what is being produced.

  7. Bethany says:

    Kat, I had the same thought. It’s true that many people are terrible storytellers, but a more accurate test would have included both the poorly written story and an emotional story. Then we would be comparing *all* the possibilities.

    For me, the study is a bit useless because I would never write something with so little emotion or detail.

    Of course, we’re also dealing with one mere paragraph at a time, so that doesn’t help things either. Unless there’s more to these appeals that we’re not seeing?

  8. Bethany, to answer your question: what you see was what was in the test. In a scanning test like this one, it’s important to keep the texts short and similar in length to one another, so that it’s the content, not the length, that works on folks.

    And I would agree that this isn’t a repudiation of story or narrative – there’s too much variation among different stories to make such a strong claim. Nor would I want to make it, as I believe in powerful stories. That’s why I also included the academic research that shows they can be powerful.

    That said, I do think one can draw something from this – namely, that well-formulated stats don’t kill a narrative as much as I had thought they would. And since the narrative was drawn from the marketing copy of a successful nonprofit, and was vetted by a committee of nonprofit marketers, it represents at least a typical example of what you see out there.

    Which, as I think everyone in the comments is arguing, is a reason to improve our narrative, not to bury it. But I’d caution against throwing out the baby with the bathwater or declaring an entire test useless.