We know what we think. But not why we think it.

August 24, 2017      Kevin Schulman, Founder, DonorVoice and DVCanvass

Social media advertising is ineffective and on its way to dying! I know it’s true, because I read it on Forbes.com in an article called “Facebook’s Advertising Fallacy: That It Works!” Quick! Short FB! Its tens of billions of dollars in annual ad revenue are going to zero any day!*

500 million users. How quaint.

Wait. Maybe I should read more than the headline. Apparently, in a survey, fewer than one-third of U.S. consumers say social media influences their purchasing decisions. Whew. Guess I can ignore everything that comes after this lede, then.** Because human beings are horrible at understanding what influences them.

Take the study that asked people to rank how important 16 influences were on them. They rated “sexual satisfaction” at #14. Then researchers tested how people actually reacted in real life: sexual cues came in at number one. These were shocking results only to people who had met no other people in their lives.

Or consider the subjects in Dutton and Aron’s study of arousal. They had men talk to a female researcher who gave the men her phone number. Half of the men talked to her right after crossing a suspension bridge, while their hearts were racing and they were short of breath; half had time to catch their breath first. More of the men who had just crossed the bridge called the woman afterward. They misattributed their arousal, thinking it came from the woman rather than the bridge.

In short, we lack metacognition. That is, we don’t think about how we think.

“But wait!” I hear you yelling. “Don’t you always preach asking your donors for their opinions? If we don’t know why we do things, aren’t all surveys garbage?”

Nope. Only bad surveys are garbage. We’ve all seen organizations led astray by surveys where respondents said what they thought the researcher wanted to hear. It’s even worse when that’s paired with a nonprofit executive who wants to hear a specific answer.

But there is a good type of survey. While we stink at answering why we think things, we are great at answering what we think. I hate that I was double charged for a donation. I like the donor relations staff person who helped me. It was great that you took it off my credit card. I like the organization. These are trivially easy for us to answer and answer accurately.

Just don’t ask us why we feel these things or how important they are. We think that we like the donor relations person because she was crisp and efficient. Actually, her voice subconsciously reminded us of our favorite 2nd-grade teacher.

That leaves us with a conundrum. If we can’t ask people how important something is, how do we know what’s important?

The answer? Don’t ask people. Ask math.

All you need for this is an outcome variable (e.g., level of satisfaction with the organization) and your input variables (e.g., satisfaction with each experience someone has had with the organization). Then, a bit of modeling later (even a simple regression analysis will get you useful answers), you can tease out what’s working, what’s not working, and what doesn’t matter. A sketch of what this might look like in code follows the example below.

A simple example: let’s say that, controlling for everything else, people who like your acknowledgment letters a lot like your organization a lot and give more. People who don’t, don’t.

This is not necessarily causal. However, it is enough evidence to make your acknowledgment program a point of emphasis. After all, does it matter whether your acknowledgments cause someone to be more committed or whether more committed people like your acknowledgments more? Either way, acknowledgments are important.
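For the curious, here’s roughly what that math looks like: a minimal sketch in Python, assuming a hypothetical survey export with satisfaction ratings. The file name and column names are invented for illustration, not a real tool or dataset.

```python
# A minimal sketch of the "ask math" step, assuming a hypothetical
# survey export with satisfaction ratings. All file and column names
# are invented for illustration.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("donor_survey.csv")  # hypothetical survey responses

# Outcome: overall satisfaction with the organization.
outcome = df["overall_satisfaction"]

# Inputs: satisfaction with each individual experience.
inputs = df[["ack_letter_satisfaction",
             "gift_processing_satisfaction",
             "staff_interaction_satisfaction"]]

# Ordinary least squares: each coefficient estimates how much overall
# satisfaction moves with that experience, controlling for the others.
model = sm.OLS(outcome, sm.add_constant(inputs)).fit()
print(model.summary())  # big, significant coefficients = what's working
```

Because each coefficient controls for the other experiences, a large, significant coefficient on the acknowledgment column is the math’s way of saying “point of emphasis.”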

This is how we do our DonorVoice commitment studies – getting all the experience and commitment data and finding out which experiences drive commitment. We then go a step further and see how much commitment relates to lifetime value. This allows us to draw a line between increasing satisfaction with an experience and donations in your pocket.
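Loosely, and only as a sketch (assumed column names and off-the-shelf regressions, not our actual commitment models), the two steps chain together like this:

```python
# A rough sketch of the two-step idea, under assumed column names;
# off-the-shelf regressions, not DonorVoice's actual models.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("donor_data.csv")  # hypothetical survey + giving-history merge

# Step 1: which experiences drive commitment?
step1 = sm.OLS(df["commitment_score"],
               sm.add_constant(df[["appeal_satisfaction",
                                   "newsletter_satisfaction"]])).fit()

# Step 2: how much does commitment relate to lifetime value?
step2 = sm.OLS(df["lifetime_value"],
               sm.add_constant(df[["commitment_score"]])).fit()

# Chaining them: (experience -> commitment) x (commitment -> LTV)
# gives a rough dollar value for improving each experience.
for experience in ["appeal_satisfaction", "newsletter_satisfaction"]:
    print(experience, step1.params[experience] * step2.params["commitment_score"])
```

Multiplying the two coefficients is a back-of-the-envelope way to attach a dollar figure to improving a given experience, which is exactly the line we want to draw.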

An example from an anonymous organization: they found that their donation appeals were 70 times more important to a donor’s lifetime value than their no-ask engagement communications. Yet they received comments about both at about the same rate and were putting similar effort into each.

So when you do your next survey, take a look. Are you asking someone to analyze why they believe something? If so, random results are a best-case scenario.

And don’t shed a tear for Facebook just yet. Despite people saying they aren’t influenced by its ads, I think this scrappy little startup just might make it yet.

* For legal reasons, I should say this is not actual investment advice, nor am I a licensed financial advisor. That said, if you are inclined to take financial advice from a guy ranting about nonprofit marketing, feel free to send your money to me. I’ll take really good care of it.

** The rest of the article is moderately interesting***, talking about how Facebook ads work best when you have compelling content and target narrowly. It just doesn’t come close to justifying the headline.

*** The article is also highly ironic for two reasons: 1) it’s on Forbes.com, which is so advertising-centric it makes NASCAR cars look bashful, and 2) each section of the article encourages the reader to connect with the author on the social media platforms that allegedly don’t work.

4 responses to “We know what we think. But not why we think it.”

  1. Jono says:

    Facebook has been doing a great job of encouraging brands who advertise with them to measure business metrics instead of vanity social metrics like engagement. This blog post provides some good coverage of that. Its author, Nate, is required reading for anyone who advertises on Facebook, IMHO:

    “Nearly 90% of Facebook case studies highlight business outcomes.”
    http://www.nate-elliott.com/blog/2017/1/29/how-to-measure-like-facebook-does

  2. Roger Dooley says:

    Nick, you are totally correct to mock the article for relying on what people say in a survey about influences on their buying behavior. A great example of why this is ridiculous comes from Robert Cialdini’s studies of energy saving. When homeowners were asked what would influence their decision to conserve energy, they rated “save money” tops and “what my neighbors do” dead last of four total choices. When the scientists ran an actual test with different letters mailed to different homes, the “neighbors” version outperformed them all. Proof of social proof.

    Oddly, people who are certain they aren’t influenced by cues like sex, actions of others, etc., are often willing to believe that OTHER, presumably less rational, people are influenced by them. 🙂

  3. Jono, thanks! I’ll have to start following Nate. Part of why I did this post is the frustration of seeing a medium or technique mocked or dismissed because those who don’t do it well don’t get good results. Take the quote from the piece “it takes enormous effort to learn how to use it effectively, then an on-going commitment to keep the momentum growing.” I would argue that applies to email, SEO, content marketing, etc. Or more broadly: couldn’t you say that about training to run a marathon? Or learning a language? Or parenting?

    The point I missed in this post that you hit on is the (hopeful) death of vanity metrics like likes. The people doing Facebook well are the ones measuring real business outcomes. As Nate says at http://www.nate-elliott.com/blog/2016/10/14/facebook-has-moved-beyond-engagement (now you have me going through the back catalog!), Facebook itself has said engagement alone isn’t relevant and uses business objectives for its own measurement.

  4. Roger, thank you! I’m a fan of Brainfluence, the Persuasion Slide, and the Brainfluence Podcast, so I’d recommend all of them to those who are interested in neuromarketing.

    The other thing I find fascinating is that our inability to determine the causes of our thoughts and feelings isn’t all social desirability bias (although that is strong). The Cialdini energy-savings experiment is a great example: saving money has no particular social desirability to it, but people still thought that’s what would influence them. So people do say things on surveys to make themselves look better, but that’s not the only way they mislead (or are misled by themselves).

    There was an experiment where subjects were supposed to tie two ropes together, but the ropes hung vertically too far apart to reach at the same time. One solution was to tie an item to the bottom of one rope and swing it toward the other. Many people didn’t get this answer until a researcher walked by one of the ropes and (seemingly accidentally) bumped it, setting it swinging. When people solved the puzzle immediately after, researchers asked them how they thought of it, and fewer than a third (if memory serves) mentioned the researcher. The majority had elaborate stories about their thought processes that didn’t include the bump; they had only processed the swinging rope at a subconscious level.