Why Donor Opinions Could Steer You Wrong

July 19, 2018      Kevin Schulman, Founder, DonorVoice and DVCanvass

It’s not their fault; it could be yours.

On Monday, Roger talked about the multitude of sins committed in the Charity Commission report. Among them was the flawed approach of asking people why they thought what they thought.

Today, I want to explain why it’s so difficult to get people’s opinions of their opinions. Tomorrow, we’ll talk about how to create donor surveys that actually work.

How do people mislead with survey responses? Here are a few ways:

The social desirability effect. In surveys, people say what they want other people to believe (in fact, they also say what they want to believe about themselves). This is certainly true of donations – almost two-thirds of people overreport their giving on mail surveys, largely because of self-deception and how much people stand to gain from overreporting. (In other words, if you think people are bad on mail surveys, wait until you see their tax returns.)

This is especially pronounced when surveys telegraph the answer they want. Some of this is clearly intentional (witness a Republican poll a year ago with the question “Do you believe that the mainstream media does not do their due diligence fact-checking before publishing stories on the Trump administration?”). But some of this is unintentional with poorly worded questions. It’s arguably worse to get bad data when that wasn’t your goal.

The experimental demand effect. When people know they are part of an experiment, they may behave differently from how they would in real life. Specifically, they will try to do what they believe is the appropriate behavior. Even a picture of human eyes can increase donations or, in a lab setting, the likelihood of reported giving.

In short, we aim to please researchers (whether they are there or not). In fact, even researchers aim to please other researchers, funders, or journal editors, consciously or subconsciously, which is why double-blind studies – where neither the researcher nor the participant knows who is getting which treatment – are the gold standard for research.

Selection bias. This happens when surveyors don’t get a random sample of the population they are trying to study. For academic studies, this is often because they are done on undergraduates, who tend to be WEIRD (Western, educated, industrialized, rich, and democratic) and cluster in similar economic and demographic zones.

In fundraising, relatedly, there can be a pernicious difference between acquisition audiences and donor audiences. For example, as we note in our ask string white paper, there are significant differences in what you should ask for from an acquisition audience versus a one-time donor audience versus a multiple-gift donor audience.

Metacognition challenges. Asking people why they do what they do is a fool’s errand. People literally have no idea why they do what they do (a failure of metacognition – sounds more fun as a big word, no?), so they couldn’t tell you even if they wanted to.

Roger had a couple of great examples Monday. To those, I’ll add a survey reporting that fewer than one-third of U.S. consumers say social media influences their purchasing decisions. One wonders why there is a multi-billion-dollar social media advertising market if no one is influenced by it.

But don’t shed a tear for Facebook just yet. Despite people saying they aren’t influenced by its ads, I think this scrappy little startup just might make it.

Since we run ads on these networks that persuade people and increase donations, we can safely guess that people are wrong about what influences them.

Filter effects. And those biases we’ve been talking about? We have them too. As does our boss. Filter effects occur when leaders accept only the results that agree with their preconceived notions. I once saw a leader look at a poll’s results and declare that, even though their idea was on the losing side, it was “close enough” – and then direct a change be made to conform to that idea.

The results were 18% for the idea and 82% against. Close enough?

While this is not technically a problem with surveys themselves, I’ve seen good surveys die this way.

Pony thinking. There is probably a technical name for this, but I like this term enough not to go searching. Ask an eight-year-old whether s/he wants a pony, and s/he will likely say yes.

If you pair this with the question “How committed are you to cleaning up large piles of poop for the rest of your life?”, the level of enthusiasm goes down significantly.

Likewise, if you have a survey audience – or, worse, a focus group – and ask them if they want lower-priced products, they will say yes. But for a company, “lower-priced products” also means “lower-quality products.” You can’t survey one without the other.

For nonprofits, traditional surveys will find that people say they want fewer communications and more information about the impact of their donations. These desires often oppose each other. If you simply cut communications, without making sure the communications you keep connect better, you will likely lose revenue.

But if you do what several nonprofits have started to do – announce that you are decreasing communications based on donor feedback, ask for additional feedback, and make each piece more focused on the donor – you will hit your year-one goals and raise more money in year two, free of the volume rat race.

These are but a few of the ways surveys can go astray. But none of this absolves you of the need to get your donors’ and constituents’ thoughts; it just means you must do it better.

How? That’s tomorrow’s topic.

Nick